| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
57,455 | https://en.wikipedia.org/wiki/Sago | Sago is a starch extracted from the pith, or spongy core tissue, of various tropical palm stems, especially those of Metroxylon sagu. It is a major staple food for the lowland peoples of New Guinea and the Maluku Islands, where it is called saksak, rabia and sagu. The largest supply of sago comes from Southeast Asia, particularly Indonesia and Malaysia. Large quantities of sago are sent to Europe and North America for cooking purposes. It is traditionally cooked and eaten in various forms, such as rolled into balls, mixed with boiling water to form a glue-like paste (papeda), or as a pancake.
Sago is often produced commercially in the form of "pearls" (small rounded starch aggregates, partly gelatinized by heating). Sago pearls can be boiled with water or milk and sugar to make a sweet sago pudding. Sago pearls are similar in appearance to the pearled starches of other origin, e.g. cassava starch (tapioca) and potato starch. They may be used interchangeably in some dishes, and tapioca pearls are often marketed as "sago", since they are much cheaper to produce. Compared to tapioca pearls, real sago pearls are off-white, uneven in size, brittle and cook very quickly.
The name sago is also sometimes used for starch extracted from other sources, especially the sago cycad, Cycas revoluta. The sago cycad is also commonly known as the sago palm, although this is a misnomer as cycads are not palms. Extracting edible starch from the sago cycad requires special care due to the poisonous nature of cycads. Cycad sago is used for many of the same purposes as palm sago.
The fruit of palm trees from which the sago is produced is not allowed to ripen fully, as full ripening completes the life cycle of the tree and exhausts the starch reserves in the trunk to produce the seeds, leaving a hollow shell at the tree's death. The palms are cut down when they are about 15 years old, just before or shortly after the inflorescence appears. The tall stems are split out. The starch-containing pith is taken from the stems and ground to powder. The powder is kneaded in water over a cloth or sieve to release the starch. The water with the starch passes into a trough where the starch settles. After a few washings, the starch is ready to be used in cooking. A single palm yields a substantial quantity of dry starch.
Historical records
Sago was noted by the Chinese historian Zhao Rukuo (1170–1231) during the Song dynasty. In his Zhu Fan Zhi (1225), a collection of descriptions of foreign countries, he writes that the kingdom of Boni "produces no wheat, but hemp and rice, and they use sha-hu (sago) for grain".
Sources, extraction and preparation
Palm sago
The sago palm, Metroxylon sagu, is found in tropical lowland forest and freshwater swamps across Southeast Asia and New Guinea and is the primary source of sago. It tolerates a wide variety of soils and may reach 30 meters in height (including the leaves). Several other species of the genus Metroxylon, particularly Metroxylon salomonense and Metroxylon amicarum, are also used as sources of sago throughout Melanesia and Micronesia.
Sago palms grow very quickly in clumps of different ages, similar to bananas: one sucker matures, then flowers and dies. It is replaced by another sucker, with up to 1.5 m of vertical stem growth per year. The stems are thick and are either self-supporting or have a moderate climbing habit; the leaves are pinnate. Each palm trunk produces a single inflorescence at its tip at the end of its life. Sago palms are harvested at the age of 7–15 years, just before or shortly after the inflorescence appears and when the stems are full of starch stored for use in reproduction. One palm can yield 150–300 kg of starch.
Sago is extracted from Metroxylon palms by splitting the stem lengthwise and removing the pith which is then crushed and kneaded to release the starch before being washed and strained to extract the starch from the fibrous residue. The raw starch suspension in water is then collected in a settling container.
Cycad sago
The sago cycad, Cycas revoluta, is a slow-growing wild or ornamental plant. Its common names "sago palm" and "king sago palm" are misnomers as cycads are not palms. Processed starch known as sago is made from this and other cycads. It is a less-common food source for some peoples of the Pacific and Indian Oceans. Unlike palms, cycads are highly poisonous: most parts of the plant contain the neurotoxins cycasin and BMAA. Consumption of cycad seeds has been implicated in the outbreak of Parkinson's disease-like neurological disorder in Guam and other locations in the Pacific. Thus, before any part of the plant may safely be eaten the toxins must be removed through extended processing.
Sago is extracted from the sago cycad by cutting the pith from the stem, root and seeds of the cycads, grinding the pith to a coarse flour, before being dried, pounded, and soaked. The starch is then washed carefully and repeatedly to leach out the natural toxins. The starchy residue is then dried and cooked, producing a starch similar to palm sago/sabudana.
Cassava sago
In many countries including Australia, Brazil, and India, tapioca pearls made from cassava root are also referred to as sago, sagu, sabudana, etc.
Uses
Nutrition
Sago from Metroxylon palms is nearly pure carbohydrate and has very little protein, vitamins, or minerals. One hundred grams of dry sago typically comprises 94 grams of carbohydrate, 0.2 grams of protein, 0.5 grams of dietary fiber, 10 mg of calcium, 1.2 mg of iron and negligible amounts of fat, carotene, thiamine and ascorbic acid; its food energy comes almost entirely from the carbohydrate. Sago palms are typically found in areas unsuited for other forms of agriculture, so sago cultivation is often the most ecologically appropriate form of land use, and the nutritional deficiencies of the food can often be compensated for with other readily available foods.
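As a rough cross-check (an estimate computed from the composition above using the standard Atwater factor of about 17 kJ, or 4 kcal, per gram of carbohydrate; this is not a figure taken from the article):

\[
94\ \mathrm{g} \times 17\ \mathrm{kJ/g} \approx 1{,}600\ \mathrm{kJ}\;(\approx 380\ \mathrm{kcal})\ \text{of food energy per 100 g of dry sago.}
\]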
Sago starch can be baked (resulting in a product analogous to bread, pancake, or biscuit) or mixed with boiling water to form a paste. It is a main staple of many traditional communities in New Guinea and Maluku (in the form of papeda), as well as in Borneo, South Sulawesi (most notably in Luwu Regency) and Sumatra. In Palembang, sago is one of the ingredients used to make pempek. In Brunei, it is used for making the popular local dish called ambuyat. It is also used commercially in making noodles and white bread. Sago starch can also be used as a thickener for other dishes. It can be made into steamed puddings such as sago plum pudding.
In Malaysia, the traditional food "keropok lekor" (fish cracker) uses sago as one of its main ingredients. In the making of the popular keropok lekor of Losong in Kuala Terengganu, each kilogram of fish meat is mixed with half a kilogram of fine sago, with a little salt added for flavour. Tons of raw sago are imported each year into Malaysia to support the keropok lekor industry.
In 1805, two captured crew members of the shipwrecked schooner Betsey were kept alive until their escape from an undetermined island on a diet of sago.
Any starch can be pearled by heating and stirring small aggregates of moist starch, producing partly gelatinized dry kernels that swell but remain intact on boiling. Pearl sago closely resembles pearl tapioca. Both are typically small (about 2 mm diameter) dry, opaque balls. Both may be white (if very pure) or colored naturally gray, brown or black, or artificially pink, yellow, green, etc. When soaked and cooked, both become much larger, translucent, soft and spongy. Both are widely used in a variety of dishes in Indian, Bangladeshi and Sri Lankan cuisine, and around the world, usually in puddings. In India, it is used in a variety of dishes such as desserts boiled with sweetened milk on the occasion of religious fasts.
The Penan people of Borneo have sago from Eugeissona palms as their staple carbohydrate.
Textile production
Sago starch is also used to treat fiber in a process called sizing, which makes fibers easier to machine. The process helps to bind the fiber, give it a predictable slip for running on metal, standardize the level of hydration of the fiber, and give the textile more body. Most natural-fiber-based cloth and clothing has been sized; this leaves a residue which is removed in the first wash.
Other uses
Because many traditional people rely on sago-palm as their main food staple and because supplies are finite, in some areas commercial or industrial harvesting of wild stands of sago-palm can conflict with the food needs of local communities.
Research has also been conducted into using waste from the sago palm industry as an adsorbent for cleaning up oil spills.
See also
Arenga pinnata
Landang
Sandige
References
Citations
General and cited references
Flach, M. and F. Rumawas, eds. (1996). Plant Resources of South-East Asia (PROSEA) No. 9: Plants Yielding Non-Seed Carbohydrates. Leiden: Backhuys.
Lie, Goan-Hong. (1980). "The Comparative Nutritional Roles of Sago and Cassava in Indonesia." In: Stanton, W.R. and M. Flach, eds., Sago: The Equatorial Swamp as a Natural Resource. The Hague, Boston, London: Martinus Nijhoff.
McClatchey, W., H.I. Manner, and C.R. Elevitch. (2005). "Metroxylon amicarum, M. paulcoxii, M. sagu, M. salomonense, M. vitiense, and M. warburgii (sago palm), ver. 1.1". In: Elevitch, C.R. (ed.) Species Profiles for Pacific Island Agroforestry. Permanent Agriculture Resources (PAR), Holualoa, Hawaii.
Pickell, D. (2002). Between the Tides: A Fascinating Journey Among the Kamoro of New Guinea. Singapore: Periplus Press.
Stanton, W.R. and M. Flach, eds. (1980). Sago: The Equatorial Swamp as a Natural Resource. The Hague, Boston, London: Martinus Nijhoff.
Further reading
External links
Species profile for Metroxylon sagu
http://www.fao.org/ag/agA/AGAP/FRG/AFRIS/Data/416.HTM
Sago Uses
Edible thickening agents
Food ingredients
Indian cuisine
Indonesian cuisine
Malagasy cuisine
Melanesian cuisine
Oceanian cuisine
Papua New Guinean cuisine
Staple foods
Tropical agriculture | Sago | Technology | 2,432 |
66,659,365 | https://en.wikipedia.org/wiki/Actinoscypha%20muelleri | Actinoscypha muelleri is a species of fungus belonging to the family Dermateaceae.
References
Dermateaceae
Fungus species | Actinoscypha muelleri | Biology | 28 |
53,451,378 | https://en.wikipedia.org/wiki/International%20Symposium%20on%20Mathematical%20Foundations%20of%20Computer%20Science | MFCS, the International Symposium on Mathematical Foundations of Computer Science, is an academic conference organized annually since 1972. The topics of the conference cover the entire field of theoretical computer science. Up to 2012, the conference was held in different locations in Poland, the Czech Republic and Slovakia, but since MFCS 2013 it has travelled around Europe. All contributions are rigorously peer-reviewed. From 1974 to 2015, conference articles appeared in proceedings published by Springer in the Lecture Notes in Computer Science series. Since 2016, the proceedings have been published in the Leibniz International Proceedings in Informatics (LIPIcs) series.
Recent history of the symposium
References
Theoretical computer science conferences
Recurring events established in 1972 | International Symposium on Mathematical Foundations of Computer Science | Technology | 133 |
17,861,460 | https://en.wikipedia.org/wiki/Neo-eclectic%20architecture | Neo-eclectic architecture is a name for an architectural style that has influenced residential building construction in North America in the latter part of the 20th century and early part of the 21st. It is a contemporary version of Revivalism that has perennially occurred since Neoclassical architecture developed in the mid 18th century.
In contrast to the occasionally faux and low-budget Neo-Eclectic detached homesteads, the term New Classical architecture identifies contemporary buildings that adhere to the basic ideals, proportions, materials and craftsmanship of traditional architecture.
Characteristics
Neo-eclectic architecture combines a wide array of decorative techniques taken from an assortment of different house styles. It can be considered a devolution from the clean and unadorned modernist styles and principles behind the Mid-Century modern and Ranch-style houses that dominated North American residential design and construction in the first decades after the Second World War. It is an outgrowth of postmodern architecture, yet differs from postmodernism in that it is not creatively experimental.
Applications
Some Neo-Eclectic buildings will combine an array of different historical styles in a single building. A house so designed may have Cape Cod, Mission Revival, Tudor Revival, or Châteauesque and French Provincial elements all at the same time. Often houses, or whole subdivisions, will focus on one revival style. Different historical styles predominated in different regions. In California elements from the Mediterranean Revival and Spanish Colonial Revival Style continue to be a regional vernacular and popular. In New England and the Mid-Atlantic the Colonial Revival Style and Georgian Revival architecture combinations are common.
In Neo-Eclectic architecture the revival elements are almost always decorative, consisting of surface elements such as claddings and windows. Details such as heavy moldings and/or trim (that would be cut stone or plaster in traditional architecture) are usually extruded foam with a stucco veneer. Aside from specifications adjusted for lower quality, newer growth lumber, the basic construction of Neo-Eclectic houses is unchanged from previous house styles such as the ranch-style house. An important development leading to the modern Neo-Eclectic style is the popularity of EIFS, a form of external insulation that is easy to apply and can be coloured and shaped to appear like an array of different materials such as stucco and stone.
Critiques
Neo-eclectic architecture is most prominent in what are pejoratively known as McMansions, but it has been embraced by almost all residential builders. Across North America most suburbs built in the last three decades can largely be described as Neo-Eclectic.
Critics of Neo-Eclectic architecture see the style as pretentious, wasteful and/or garish, and unoriginal. Typically and somewhat deceptively, the Neo-Eclectic style plays an instrumental role in making cheaply built, over-sized tract homes on comparatively small parcels of land appear as something far greater than the sum of their parts.
See also
New Classical Architecture, a style that references historical architecture more faithfully.
Neomodern architecture, a current modernist response.
Sustainable architecture
Sustainable development
Sustainable design
Snout house
Mar del Plata style, an Argentine 20th century eclectic style
References
External links
Humanities Web - Neo-Eclectic Style
House styles
American architectural styles
Revival architectural styles
| Neo-eclectic architecture | Engineering | 641 |
68,136,865 | https://en.wikipedia.org/wiki/Kaseya | Kaseya Limited is a company headquartered in Miami that develops software for network monitoring, system monitoring, and other information technology applications. It is majority-owned by Insight Partners and owns the naming rights to the Kaseya Center. The name of the company comes from a word meaning "protect and defend" in the Sioux language. The company was estimated to be valued at $12 billion in April 2023.
History
Kaseya was founded in 2000 in California by Mark Sutherland and Paul Wong, who previously worked together on a project for the National Security Agency.
In 2003, Gerald Blackie joined the company as its CEO.
In June 2013, Insight Partners acquired control of the company and Yogesh Gupta became CEO.
In July 2015, Fred Voccola was named CEO of the company.
In 2018, the company moved its headquarters from Boston to Brickell, Miami.
In April 2023, the company acquired the naming rights to the Kaseya Center in a 17-year, $117.4 million agreement.
In 2024, the company laid off 150 employees, about 8% of its Miami workforce. The company stated that it was part of its normal performance-based reviews and that the jobs would not disappear.
Security issues
In 2015, Kaseya fixed a directory traversal vulnerability in their remote access tool. The same bug was present in the company's support website for a further six years.
In 2018, the company's remote tool was infiltrated and hackers were able to commandeer affected computers to mine cryptocurrency.
In July 2021, the Kaseya VSA ransomware attack, perpetrated by REvil, led to downtime for 60 customers and over 1,500 downstream businesses.
Business practices
Kaseya faced backlash in 2022, when customers found that terms and conditions were being updated to introduce automatically renewing three-year terms, with only 30 days' notice of the change.
Acquisitions
References
External links
2001 establishments in California
American companies established in 2001
Network management
Software companies based in Florida
Software companies established in 2001
Software companies of the United States
System administration | Kaseya | Technology,Engineering | 428 |
11,553,073 | https://en.wikipedia.org/wiki/Marasmius%20tenuissimus | Marasmius tenuissimus is a species of agaric fungus in the large agaric genus Marasmius.
See also
List of Marasmius species
References
Fungi described in 1838
Fungal plant pathogens and diseases
tenuissimus
Fungus species | Marasmius tenuissimus | Biology | 53 |
31,507,233 | https://en.wikipedia.org/wiki/Douglas%202229 | Douglas Aircraft Company's Model 2229 was a proposed supersonic transport (SST) originally started as a private study. The design progressed as far as making mock-ups of the cockpit area and wind-tunnel models of the overall layout. After studying the design, Douglas concluded that the SST would not work economically, and declined to enter the Model 2229 in the National Supersonic Transport (NST) program in 1963.
Development
Background
Through the 1950s, understanding of supersonic aerodynamics had improved to the point where sustained operation at high Mach was first becoming possible. A combination of new engines, engine intakes, new planforms like the delta wing, and new materials like titanium and stainless steel had solved many of the problems of earlier designs. By the late 1950s, the United States was in the midst of building two supercruising aircraft, the Lockheed A-12 and B-70 Valkyrie, and the UK was considering the Avro 730.
The concept of a supersonic transport seemed like a natural evolution of existing designs, which had long striven for "higher, faster". However, at supersonic speeds, lift works in a very different way than at subsonic speeds, and is always less efficient. Subsonic transports of the era were reaching lift-to-drag ratios of about 19, whereas even the most advanced wing designs for SSTs were around 9. This was not a concern for military aircraft, where speed was life, but an SST would require twice as much fuel to move a passenger, increasing operational costs. To offset the operational costs, proponents of the SST concept suggested the lower trip times would command higher ticket prices. This would make them attractive to a segment of the market that currently paid higher ticket prices for first class seats. In theory, the faster trip times would also allow a reduction in costs needed to fly a given number of passengers, as fewer aircraft would be needed to cover a given route.
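A back-of-the-envelope way to see where the "twice as much fuel" figure comes from (an illustrative estimate under the simplifying assumptions of comparable engine efficiency and comparable weight per seat, not a figure quoted by Douglas): in steady cruise the thrust, and hence the fuel flow, must balance drag, which equals weight divided by the lift-to-drag ratio, so fuel burned per passenger scales roughly as 1/(L/D):

\[
\frac{\text{fuel per passenger (SST)}}{\text{fuel per passenger (subsonic)}} \;\approx\; \frac{(L/D)_{\text{subsonic}}}{(L/D)_{\text{SST}}} \;=\; \frac{19}{9} \;\approx\; 2.1 .
\]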
By 1960, several companies had shown models or mock-ups of SST designs, but most of these were trial balloons with no serious study behind them. But in an era when progress generally meant faster, there was a widespread feeling that the SST was the next natural step in aircraft design.
Douglas Model 2229
Like other companies, Douglas had been considering the SST concept since the late 1950s. Invariably they were unimpressed with the results. In one case, the only room to be found for the required fuel load was in the fuselage, prompting one designer to sketch a cartoon showing the passengers sitting in diving suits immersed in the fuel under large signs saying "No Smoking!"
But as the market turned to the SST idea, Douglas started the 2229 project. This was one of the first serious efforts to be shown to the press. The 2229 broadly followed the layout introduced in the B-70, although it used the compound delta layout. The B-70 had a shoulder-mounted wing to best use compression lift generated by the nose and engine intakes, but this was not suitable for a transport where the fuselage is best situated above the wing for better visibility and easier loading. The wing stretched from the single vertical rudder at the rear almost to the front of the fuselage, where two much smaller delta canards were mounted high on the fuselage.
The four engines were situated in an 80 foot long box under the wing, like the B-70. Douglas' design differed in using two shock cones at the front of the intake, rather than the large splitter in the B-70. This led into a single large duct with three-part variable-profile walls that slowed the intake air to subsonic speeds. Behind this were separate ducts leading to the engines. The landing gear folded into the space beside the duct.
Other features were taken directly from the B-70. At high speeds the outer 20 feet of each wing folded down to improve compression lift, although to a much lower angle than the B-70's 75-degree droop. The nose area used the rising ramp from the B-70, as opposed to the drooping nose of the Concorde or Boeing 2707.
National Supersonic Transport
By early 1963 a number of forces were gathering that pushed Bristol and Sud Aviation to consider merging features of their designs into a joint effort. Buoyed by patriotic pride, and especially by the support of Charles de Gaulle, these meetings gathered steam. By mid-1963 it was becoming clear that these efforts were likely to reach an agreement. Around the same time, it became known that the Soviet Union had started development of its own SST design.
This set off something of a panic in the US. Although their estimates and financial predictions continuously demonstrated very poor operating economics, political considerations overturned these concerns. By the spring of 1963 the Federal Aviation Administration (FAA) was well into the process of defining an SST development program, and the private announcement in May that Pan American Airlines had placed options on the Concorde overrode any remaining concerns. The SST program was announced on 5 June 1963.
Douglas declines
By this point the Model 2229 effort had progressed to detail design. The 100-passenger aircraft had settled at about 420,000 lb, heavier than the Boeing 707 while holding 20% fewer passengers. As the operational costs of an aircraft are roughly defined by the aircraft fuel use, a function of weight, divided by the number of passengers, these numbers were not encouraging.
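Using only the figures above, and treating gross weight per seat as a crude proxy for cost per seat (an illustrative simplification, not Douglas's own accounting), the comparison looks like:

\[
\frac{420{,}000\ \mathrm{lb}}{100\ \text{seats}} = 4{,}200\ \mathrm{lb/seat}, \qquad \text{707: } \frac{100}{0.8} = 125\ \text{seats at a lower gross weight} \;\Rightarrow\; <\frac{420{,}000\ \mathrm{lb}}{125} = 3{,}360\ \mathrm{lb/seat},
\]

so the Model 2229 would have hauled at least about 25% more weight for every passenger carried.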
The marketing department was no more impressed; examining a report by the Stanford Research Institute (SRI), they found that SRI had calculated a market for 325 aircraft based on the assumption that every route over 1000 miles would use an SST. Their own assumption was that only routes with high utilization would use them, leading to a market of only 151 aircraft. Considering that Douglas had yet to turn a profit on the DC-8 in spite of sales of over 200 aircraft, they were highly skeptical that there was any profit to be had.
On 26 August, Donald Douglas Jr. wrote a letter to the head of the FAA, Najeeb Halaby, saying that they would not be entering the 2229 into the NST program. In the letter, Douglas stated their main reasons were the problems involved in introducing new models of the DC-8 and DC-9, along with various military commitments, which left them struggling for development resources.
Although Douglas' letter glossed over the company's serious concerns, the press still regarded its withdrawal from the program as significant. The New York Times reported that it was an example of "widespread industry caution" about the program. By this point, however, the program was well entrenched, and it eventually led to the selection of the Boeing 2707 for the NST program.
References
Notes
Bibliography
(After), "After the DC-8", Flight International, 30 November 1961, p. 849
Erik Conway, "High-speed dreams: NASA and the technopolitics of supersonic transportation", JHU Press, 2005
Supersonic transports
2229
Quadjets
Abandoned civil aircraft projects of the United States
Delta-wing aircraft | Douglas 2229 | Physics | 1,426 |
3,332,231 | https://en.wikipedia.org/wiki/Flashing%20%28weatherproofing%29 | Flashing refers to thin pieces of impervious material installed to prevent the passage of water into a structure from a joint or as part of a weather resistant barrier system. In modern buildings, flashing is intended to decrease water penetration at objects such as chimneys, vent pipes, walls, windows and door openings to make buildings more durable and to reduce indoor mold problems. Metal flashing materials include lead, aluminium, copper, stainless steel, zinc alloy, and other materials.
Etymology and related terms
The origin of the term flash and flashing are uncertain, but may come from the Middle English verb flasshen, 'to sprinkle, splash', related to flask.
Counter-flashing (or cover flashing, cap flashing) is a term used when there are two parallel pieces of flashing employed together such as on a chimney, where the counter-flashing is built into the chimney and overlaps a replaceable piece of base flashing. Strips of lead used for flashing an edge were sometimes called an apron, and the term is still used for the piece of flashing below a chimney. The up-hill side of a chimney may have a small gable-like assembly called a cricket with cricket flashing or on narrow chimneys with no cricket a back flashing or back pan flashing. Flashing may be let into a groove in a wall or chimney called a reglet.
Purpose
Before the availability of sheet products for flashing, builders used creative methods to minimize water penetration. These methods included angling roof shingles away from the joint, placing chimneys at the ridge, building steps into the sides of chimneys to throw off water, and covering the seams between roofing materials with mortar. The introduction of manufactured flashing decreased water penetration at obstacles such as chimneys, vent pipes, walls which abut roofs, window and door openings, etc., thus making buildings more durable and reducing indoor mold problems. It is also essential to prevent leaks around skylights or roof windows. Moreover, flashing is important to ensure the integrity of a roof prior to a solar panel installation.
Builders' books, such as Loudon's An Encyclopædia of Cottage, Farm, and Villa Architecture and Furniture..., gave instructions on installing lead flashing by 1832, and in 1875 Notes on Building Construction provided detailed, well-illustrated instruction, with methods still used today.
Flashing may be exposed or concealed. Exposed flashing is usually of a sheet metal, while concealed flashing may be metal or a flexible, adhesive-backed material, particularly around wall penetrations such as window and door openings.
Materials
In earlier days, birch bark was occasionally used as a flashing material. Most flashing materials today are metal, plastic, rubber, or impregnated paper.
Metal flashing materials include lead, aluminium, copper, stainless steel, zinc alloy, other architectural metals or a metal with a coating such as galvanized steel, lead-coated copper, anodized aluminium, terne-coated copper, galvalume (aluminium-zinc alloy coated sheet steel), and metals similar to stone-coated metal roofing. Metal flashing should be provided with expansion joints on long runs to prevent deformation of the metal sheets due to expansion and contraction, and should not stain or be stained by adjacent materials or react chemically with them.
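To illustrate why expansion joints matter on long runs, consider a single 3 m run of aluminium flashing over a 50 °C seasonal temperature swing (an illustrative calculation using a typical handbook value for aluminium's coefficient of linear thermal expansion, roughly 23 × 10⁻⁶ per kelvin; neither number is taken from this article):

\[
\Delta L = \alpha\, L\, \Delta T \approx (23\times10^{-6}\ \mathrm{K^{-1}})(3\ \mathrm{m})(50\ \mathrm{K}) \approx 3.5\ \mathrm{mm},
\]

which is enough movement to buckle or tear a rigidly fixed sheet if no joint accommodates it.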
An important type of potential chemical reaction between metal flashing materials is galvanic corrosion. Copper and lead cannot be used in contact with or even above aluminium, zinc, or coated steel without an increased risk of premature corrosion. Also, aluminium and zinc flashing cannot be used in contact with pressure treated wood due to rapid corrosion. Aluminium is also damaged by wet mortar and wet plaster. Salt spray in coastal areas may accelerate corrosion; so stainless steel, copper, or coated aluminium are recommended flashing materials near salt water.
Types of flexible flashing products include rubberized asphalt, butyl rubber, polyvinylidene fluoride (sold under trade names such as Kynar and Hylar), and acrylic. The different types have different application temperature ranges, material adhesion compatibility, chemical compatibility, levels of volatile organic compounds, and resistance to ultraviolet light exposure. Adhesive-backed materials can aid installation, but such adhesives are not intended for long-term water resistance.
Copper is an excellent material for flashing because of its malleability, strength, solder-ability, workability, high resistance to the caustic effects of mortars and hostile environments, and long service life (see: copper flashing). This enables a roof to be built without weak points. Since flashing is expensive to replace if it fails, copper's long life is a major cost advantage.
Cold rolled (to 1/8-hard temper) copper is recommended for most flashing applications. This material offers more resistance than soft copper to the stresses of expansion and contraction. Soft copper can be specified where extreme forming is required, such as in complicated roof shapes. Thermal movement in flashings is prevented or is permitted only at predetermined locations.
"Soft zinc" is a newer, proprietary flashing material. It is a relatively malleable material, making it useful for complex roofing connections. It provides normal soft soldering capabilities and delivers easy folding. Soft zinc is said to be an "environmentally friendly" replacement for lead flashing; like lead, it is recyclable, while avoiding lead-contaminated runoff.
Types
Flashing types are named according to location or shape:
Roof flashing Placed around discontinuities or objects which protrude from the roof of a building to deflect water away from seams or joints and in valleys where the runoff is concentrated.
Wall flashing May be embedded in a wall to direct water that has penetrated the wall back outside, or it may be applied in a manner intended to prevent the entry of water into the wall. Wall flashing is typically found at interruptions in the wall, such as windows and points of structural support.
Sill flashing (or sill pan) A concealed flashing placed under windows or door thresholds to prevent water from entering a wall at those points.
Roof penetration flashing Used to waterproof pipes, supports, cables, and all roof protrusions. Stainless steel penetration flashings have proven to be the longest lasting and most reliable roof flashing type.
Channel flashing Shaped like a “U” or channel to catch water (e.g., where the edge of a tile roof meets a wall).
Through wall flashing Spans the thickness of the wall and directs water to weep holes.
Cap flashing (drip cap) Often used above windows and doors.
Drip edge A metal strip used at the edges of a roof.
Step flashing (soaker, base flashing) Pieces of flashing material which overlap each other in "steps".
Counter flashing (cap flashing) Covers a base flashing.
Pipe flashing (pipe boot, vent boot, pipe flange) A product used where pipes penetrate roofs.
Chimney flashing A general term for flashing a chimney to cover the intersections of the chimney and install a damp proof course (DPC).
Kickout flashing At the very bottom of a roof/wall intersection, the lowermost step flashing specially formed to deflect water away from the wall.
Valley flashing In the valley of two intersecting roof planes.
A structure incorporating flashing has to be carefully engineered and constructed so that water is directed away from the structure and not inside. Flashing improperly installed can direct water into a building.
Environmental impact
In the US and UK, at least, lead flashing and fittings are still readily available, despite the environmental concerns associated with bulk use of this heavy metal. The Lead Sheet Association touts its recyclability and extreme durability.
See also
Damp proofing
Housewrap
Rainscreen
References
External links
Roof Flashing Details Illustrated summary of various types of flashing
Video on how to weld sheet lead
Video series of still images showing some complex lead flashing work
Moisture protection
Building engineering
Building materials | Flashing (weatherproofing) | Physics,Engineering | 1,581 |
23,080,305 | https://en.wikipedia.org/wiki/Intelligence-based%20design | Intelligence-based design is the purposeful manipulation of the built-environment to effectively engage humans in an essential manner through complex organized information. Intelligence-based theory evidences the conterminous relationship between mind and matter, i.e. the direct neurological evaluations of surface, structure, pattern, texture and form. Intelligence-based theory maintains that our sense of well-being is established through neuro-engagement with the physical world at the deepest level common to all people i.e. "Innate Intelligence."
These precursory readings of the physical environment represent an evolved set of information processing skills that the human mind has developed over millennia through direct lived experience. This physiological engagement with the world operates in a more immediate sense than the summary events of applied meaning or intellectual speculation. It is through this direct neurological engagement that humans connect more fully with the world. Many of mankind's early religious associations with physical structures were informed by an intuitive understanding that structure and materials speak to our deeper self, i.e. the human spirit, the soul. Intelligence based theory reveals this effectual dimension of the built-environment and its relationship to human cognitive development, mental acuity, perceptual awareness, spirituality, and sense of well-being. It is within this realm that the mind's eye connects, or fails to connect, with the world outside. The degree of neuro-connectivity which occurs at these intervals serves to render the built-environment either intelligible or un-intelligible. The study and theory of this occurrence is known as "Intelligence-based design".
Antecedents
Several distinct strands of design thinking, in parallel development, lead towards Intelligence-based design. Christopher Alexander contributed early on to the scientific approach to design, by proposing a theory of design in his book Notes on the Synthesis of Form. Those were the years when Artificial Intelligence was being developed by Herbert A. Simon, and Alexander was part of that movement. His later work A Pattern Language, although written for architects and urbanists, was picked up by the software community and used as a combinatorial and organizational rubric for software complexity, especially software design patterns. Alexander's most recent work The Nature of Order continues by building up a framework for design that relies upon natural and biological structures. Entirely separate from this, E. O. Wilson introduced the Biophilia hypothesis to describe the affinity of humans to other living structures, and to conjecture our innate need for such a connection. This topic was later investigated by Stephen R. Kellert and others, and applied to the design of the artificial environment. The third, independent component of the theory comes from recent developments in mobile robotics by Rodney Brooks, where a breakthrough occurred by largely dispensing with internal memory. The practical concept of "intelligence without representation", otherwise known as the subsumption architecture and behavior-based robotics, introduced by Brooks suggests a parallel with the way human beings interact with, and design, their own environment. These notions are brought together in Intelligence-based design, which is a topic currently under investigation for design applications in both architecture and urbanism.
References
Stephen R. Kellert, Judith Heerwagen, and Martin Mador, Editors, BIOPHILIC DESIGN: THE THEORY, SCIENCE AND PRACTICE OF BRINGING BUILDINGS TO LIFE, John Wiley, New York, 2008.
Masden, K. G. II, “Architecture, Nature, and Human Intelligence,” The History of Mundaneum 1999-2009, IUCN/International Union for the Conservation of Nature & MUNDANEUM, Spanish versions, 2009.
Salingaros, Nikos A., & Masden, K. G. II, “Intelligence-Based Design: A Sustainable Foundation for Architectural Education World Wide,” International Journal of Architectural Research, MIT, Vol. 2, Issue 1, 2008, pp. 129–188.
Salingaros, Nikos A., & Masden, K. G. II, “Restructuring 21st Century Architecture Through Human Intelligence,” Inaugural issue of International Journal of Architectural Research, MIT, Vol.1, Issue 1, 2007, pp. 36–52.
Design
Cognition
Architectural design | Intelligence-based design | Engineering | 839 |
52,044,083 | https://en.wikipedia.org/wiki/Cryoimmunotherapy | Cryoimmunotherapy, also referred to as cryoimmunology, is an oncological treatment for various cancers that combines cryoablation of a tumor with immunotherapy. In-vivo cryoablation of a tumor can, by itself, induce an immunostimulatory, systemic anti-tumor response, resulting in a cancer vaccine (the abscopal effect). Thus, cryoablation of tumors is a way of achieving an autologous, in-vivo tumor lysate vaccine and treating metastatic disease. However, cryoablation alone may produce an insufficient immune response, depending on various factors, such as a high freeze rate. Combining cryotherapy with immunotherapy enhances the immunostimulating response and has synergistic effects for cancer treatment.
Although cryoablation and immunotherapy have been used successfully in oncological clinical practice for over 100 years, and can treat metastatic disease with curative intent, the combination has been largely ignored in modern practice. Only recently has cryoimmunotherapy been resurrected to become the gold standard in the treatment of all stages of disease.
History
Immunological effects resulting from the cryoablation of tumors were first observed in the 1960s. Beginning in the 1960s, Tanaka treated metastatic breast cancer patients with cryotherapy and reported a cryoimmunological reaction resulting from the treatment. In the 1970s, a systemic immunological response from local cryoablation of prostate cancer was also clinically observed. In the 1980s, Tanaka, of Japan, continued to advance the clinical practice of cryoimmunology with combination treatments including cryochemotherapy and cryoimmunotherapy. In 1997, Russian scientists confirmed the efficacy of cryoimmunotherapy in inhibiting metastases in advanced cancer. In the 2000s, China, following these developments closely, enthusiastically embraced cryoablation treatment for cancer and has been leading the practice ever since, with cryoimmunotherapy treatments available to cancer patients in numerous hospitals and medical clinics throughout the country. In the 2010s, American researchers and medical professionals started to explore cryoimmunotherapy for the systemic treatment of cancer.
Mechanisms of actions
Cryoablation of tumor induces necrosis of tumor cells. The immunotherapeutic effect of cryoablation of tumor is the result of the release of intracellular tumor antigens from within the necrotized tumor cells. The released tumor antigens help activate anti-tumor T cells, which destroy remaining malignant cells. Thus, cryoablation of tumor elicits a systemic anti-tumor immunologic response.
The immunostimulation resulting from cryoablation alone may not be sufficient to induce sustained, systemic regression of metastases, but it can be synergised by combining it with immunotherapy treatment and vaccine adjuvants.
Various adjuvant immunotherapy and chemotherapy treatments can be combined with cryoablation to sustain systemic anti-tumor response with regression of metastases, including:
Injection of immunomodulating drugs (i.e.: therapeutic antibodies) and vaccine adjuvants (saponins) directly into the cryoablated, necrotized tumor lysate, immediately after cryoablation
Administration of autologous immune enhancement therapy, including: dendritic cell therapy, CIK cell therapy
See also
Combinatorial ablation and immunotherapy
Photoimmunotherapy
References
External links
The Great Prostate Hoax: How Big Medicine Hijacked...
Immunologic Response to Cryoablation of Breast Cancer
Modern Cryosurgery for Cancer
Percutaneous Cryotherapy of Renal Cell Carcinoma Under an Open MRI System
Modern Cryosurgery for Cancer
Tumor Ablation: Effects on Systemic and Local Anti-Tumor Immunity and on Other Tumor-Microenvironment Interactions
Basics of Cryosurgery
Cryosurgery: A Practical Manual
Dermatological Cryosurgery and Cryotherapy
The Abscopal Effect and the Prospect of Using Cancer Against Itself
Tumor Ablation: Principles and Practice
Cryoimmunologie: Cryoimmunology: colloque
Metastatic Bone Disease: An Integrated Approach to Patient Care
Musculoskeletal Cancer Surgery: Treatment of Sarcomas and Allied Diseases
Prospects for cryo-immunotherapy in cases of metastasizing carcinoma of the prostate.
Therapy
Cancer
Cryobiology | Cryoimmunotherapy | Physics,Chemistry,Biology | 929 |
188,094 | https://en.wikipedia.org/wiki/Nikolaas%20Tinbergen | Nikolaas "Niko" Tinbergen (15 April 1907 – 21 December 1988) was a Dutch biologist and ornithologist who shared the 1973 Nobel Prize in Physiology or Medicine with Karl von Frisch and Konrad Lorenz for their discoveries concerning the organization and elicitation of individual and social behavior patterns in animals. He is regarded as one of the founders of modern ethology, the study of animal behavior.
In 1951, he published The Study of Instinct, an influential book on animal behaviour.
In the 1960s, he collaborated with filmmaker Hugh Falkus on a series of wildlife films, including The Riddle of the Rook (1972) and Signals for Survival (1969), which won the Italia prize in that year and the American blue ribbon in 1971.
Early life and education
Born in The Hague, Netherlands, he was one of five children of Dirk Cornelis Tinbergen and his wife Jeannette van Eek. His brother, Jan Tinbergen, won the first Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel in 1969. They are the only siblings to each win a Nobel Prize. Another brother, Luuk Tinbergen, was also a noted biologist.
Tinbergen's interest in nature manifested itself when he was young. He studied biology at Leiden University and was a prisoner of war during World War II in Kamp Sint-Michielsgestel. Tinbergen's experience as a prisoner of the Nazis led to some friction with longtime intellectual collaborator Konrad Lorenz, and it was several years before the two reconciled.
After the war, Tinbergen moved to England, where he taught at the University of Oxford and was a fellow first at Merton College, Oxford, and later at Wolfson College, Oxford. Several of his graduate students went on to become prominent biologists including Richard Dawkins, Marian Dawkins, Desmond Morris, Iain Douglas-Hamilton, and Tony Sinclair.
The Study of Instinct
In 1951 Tinbergen's The Study of Instinct was published. Behavioural ecologists and evolutionary biologists still recognise the contribution this book made to the field of behavioural science. The Study of Instinct summarises Tinbergen's ideas on innate behavioural reactions in animals and the adaptiveness and evolutionary aspects of these behaviours. By behaviour, he means the total movements made by the intact animal; innate behaviour is that which is not changed by the learning process. The major question of the book is the role of internal and external stimuli in controlling the expression of behaviour (Hinde, R. A., "Ethological Models and the Concept of 'Drive'", British Journal for the Philosophy of Science, 6, 321–331, 1956).
In particular, he was interested in explaining 'spontaneous' behaviours: those that occurred in their complete form the first time they were performed and that seemed resistant to the effects of learning. He explains how behaviour can be considered a combination of these spontaneous behaviour patterns and as set series of reactions to particular stimuli. Behaviour is a reaction in that to a certain extent it is reliant on external stimuli, however it is also spontaneous since it is also dependent upon internal causal factors.
His model for how certain behavioural reactions are provoked was based on work by Konrad Lorenz. Lorenz postulated that for each instinctive act there is a specific energy which builds up in a reservoir in the brain. In this model, Lorenz envisioned a reservoir with a spring valve at its base that an appropriate stimulus could act on, much like a weight on a scale pan pulling against a spring and releasing the reservoir of energy, an action which would lead an animal to express the desired behaviour.
Tinbergen added complexity to this model, a model now known as Tinbergen's hierarchical model. He suggested that motivational impulses build up in nervous centres in the brain which are held in check by blocks. The blocks are removed by an innate releasing mechanism that allows the energy to flow to the next centre (each centre containing a block that needs to be removed) in a cascade until the behaviour is expressed. Tinbergen's model shows multiple levels of complexity and that related behaviours are grouped.
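The sketch below is a toy illustration of this hierarchical-release idea (a modern paraphrase, not Tinbergen's own formalism; the chain of centres and the stimulus names are hypothetical, loosely modelled on the honey-bee example described next):

```python
# Toy model of Tinbergen's hierarchical scheme: motivational energy accumulates
# at the top of a chain of nervous centres; each centre is gated by a "block"
# that is removed only when its releasing stimulus is present, letting the
# energy cascade down until a terminal behaviour is expressed.

class Centre:
    def __init__(self, name, releasing_stimulus):
        self.name = name
        self.releasing_stimulus = releasing_stimulus  # the innate releasing mechanism

def run_cascade(centres, stimuli, energy):
    """Pass accumulated energy down the hierarchy while each block is released."""
    for centre in centres:
        if centre.releasing_stimulus not in stimuli:
            # Block stays in place: energy is held and no further behaviour appears.
            return f"blocked at '{centre.name}' (energy held: {energy})"
        # Block removed: energy flows on to the next centre in the chain.
    return f"terminal behaviour expressed (energy discharged: {energy})"

# Hypothetical chain loosely based on the honey-bee foraging example in the text.
foraging = [
    Centre("approach flower", "yellow/blue visual model"),
    Centre("land on flower", "floral odour"),
    Centre("insert mouthparts and suck", "contact with flower surface"),
]

print(run_cascade(foraging, {"yellow/blue visual model"}, energy=5))
print(run_cascade(foraging, {"yellow/blue visual model", "floral odour",
                             "contact with flower surface"}, energy=5))
```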
An example is in his experiments with foraging honey bees. He showed that honey bees show curiosity for yellow and blue paper models of flowers, and suggested that these were visual stimuli causing the buildup of energy in one specific centre. However, the bees rarely landed on the model flowers unless the proper odour was also applied. In this case, the chemical stimuli of the odour allowed the next link in the chain to be released, encouraging the bee to land. The final step was for the bee to insert its mouthparts into the flower and initiate suckling. Tinbergen envisioned this as concluding the reaction set for honey bee feeding behaviour.
Nobel Prize
In 1973, Tinbergen, along with Konrad Lorenz and Karl von Frisch, were awarded the Nobel Prize in Physiology or Medicine "for their discoveries concerning organization and elicitation of individual and social behaviour patterns". The award recognised their studies on genetically programmed behaviour patterns, their origins, maturation and their elicitation by key stimuli. In his Nobel Lecture, Tinbergen addressed the somewhat unconventional decision of the Nobel Foundation to award the prize for Physiology or Medicine to three men who had until recently been regarded as "mere animal watchers". Tinbergen stated that their revival of the "watching and wondering" approach to studying behaviour could indeed contribute to the relief of human suffering.
The studies performed by the trio on fish, insects and birds laid the foundation for further studies on the importance of specific experiences during critical periods of normal development, as well as the effects of abnormal psychosocial situations in mammals. At the time, these discoveries were stated to have caused "a breakthrough in the understanding of the mechanisms behind various symptoms of psychiatric disease, such as anguish, compulsive obsession, stereotypic behaviour and catatonic posture". Tinbergen's contribution to these studies included the testing of the hypotheses of Lorenz/von Frisch by means of "comprehensive, careful, and ingenious experiments" as well as his work on supernormal stimuli. The work of Tinbergen during this time was also regarded as having possible implications for further research in child development and behaviour.
He also caused some intrigue by dedicating a large part of his acceptance speech to FM Alexander, originator of the Alexander technique, a method which investigates postural reflexes and responses in human beings.
Other awards and honours
In 1950 Tinbergen became a member of the Royal Netherlands Academy of Arts and Sciences. He was elected a Fellow of the Royal Society (FRS) in 1962. He was elected to the American Academy of Arts and Sciences in 1961, the United States National Academy of Sciences in 1974, and the American Philosophical Society in 1975. He was also awarded the Godman-Salvin Medal in 1969 by the British Ornithologists' Union, and in 1973 received the Swammerdam Medal and Wilhelm Bölsche Medal (from the Genootschap ter bevordering van Natuur-, Genees- en Heelkunde of the University of Amsterdam and the Kosmos-Gesellschaft der Naturfreunde respectively).
Approach to animal behaviour
Tinbergen described four questions he believed should be asked of any animal behaviour (see also Further Diagrams on the Four Areas of Biology by Gerhard Medicus, Documents No. 6, 7 and 8 of Block 1, in English), which were:
Causation (mechanism): what are the stimuli that elicit the response, and how has it been modified by recent learning? How do behaviour and psyche "function" on the molecular, physiological, neuro-ethological, cognitive and social level, and what do the relations between the levels look like? (compare: Nicolai Hartmann: "The laws about the levels of complexity")
Development (ontogeny): how does the behaviour change with age, and what early experiences are necessary for the behaviour to be shown? Which developmental steps (the ontogenesis follows an "inner plan") and which environmental factors play when / which role? (compare: Recapitulation theory)
Function (adaptation): how does the behaviour impact on the animal's chances of survival and reproduction?
Evolution (phylogeny): how does the behaviour compare with similar behaviour in related species, and how might it have arisen through the process of phylogeny? Why did structural associations (behaviour can be seen as a "time space structure") evolve in this manner and not otherwise?
These have long been recognized in the philosophy of biology as corresponding closely to the efficient, material, formal and final causes of Aristotelian causality, though Tinbergen does not reference Aristotle in his work. In ethology and sociobiology, causation and ontogeny are summarised as the "proximate mechanisms", while adaptation and phylogeny are the "ultimate mechanisms". They are still considered the cornerstone of modern ethology, sociobiology and transdisciplinarity in the human sciences.
Supernormal stimulus
A major body of Tinbergen's research focused on what he termed the supernormal stimulus. This was the concept that one could build an artificial object which was a stronger stimulus or releaser for an instinct than the object for which the instinct originally evolved. He constructed plaster eggs to see which a bird preferred to sit on, finding that they would select those that were larger, had more defined markings, or more saturated colour—and a dayglo-bright one with black polka dots would be selected over the bird's own pale, dappled eggs.
Tinbergen found that territorial male three-spined stickleback (a small freshwater fish) would attack a wooden fish model more vigorously than a real male if its underside was redder. He constructed cardboard dummy butterflies with more defined markings that male butterflies would try to mate with in preference to real females. The superstimulus, by its exaggerations, clearly delineated what characteristics were eliciting the instinctual response.
Among the modern works calling attention to Tinbergen's classic work is Deirdre Barrett's 2010 book, Supernormal Stimuli.
Autism
Tinbergen applied his observational methods to the problems of autistic children. He recommended a "holding therapy" in which parents hold their autistic children for long periods of time while attempting to establish eye contact, even when a child resists the embrace. However, his interpretations of autistic behaviour, and the holding therapy that he recommended, lacked scientific support and the therapy has been described as controversial and potentially abusive, particularly by individuals with autism themselves.
Bibliography
Some of the publications of Tinbergen are:
1939: 'The Behavior of the Snow Bunting in Spring.' In: Transactions of the Linnaean Society of New York, vol. V (October 1939).
1951: The Study of Instinct. Oxford, Clarendon Press.
1952: Derived activities; their causation, biological significance, origin, and emancipation during evolution. Q. Rev. Biol. 27:1–32.
1953: The Herring Gull's World. London, Collins.
1953: Social Behaviour in Animals: With Special Reference to Vertebrates. Methuen & Co. (Reprinted 2014, London & New York: Psychology Press; print & eBook.)
Publications about Tinbergen and his work:
Burkhardt Jr., RW (2005). Patterns of Behavior: Konrad Lorenz, Niko Tinbergen, and the Founding of Ethology.
Kruuk, H (2003). Niko's Nature: The Life of Niko Tinbergen and His Science of Animal Behaviour. Oxford, Oxford University Press.
Stamp Dawkins, M; Halliday, TR; Dawkins, R (1991). The Tinbergen Legacy. London, Chapman & Hall.
Personal life
Tinbergen was a member of the advisory committee to the Anti-Concorde Project and was also an atheist.
Tinbergen married Elisabeth Rutten (1912–1990) and they had five children. Later in life he suffered depression and feared he might, like his brother Luuk, commit suicide. He was treated by his friend, whose ideas he had greatly influenced, John Bowlby. Tinbergen died on 21 December 1988, after suffering a stroke at his home in Oxford, England.
References
External links
1907 births
1988 deaths
20th-century Dutch zoologists
Autism researchers
Dutch atheists
Dutch expatriates in the United Kingdom
Dutch Nobel laureates
Dutch ornithologists
Dutch prisoners of war in World War II
Ethologists
Fellows of Merton College, Oxford
Fellows of the Royal Society
Fellows of Wolfson College, Oxford
Foreign associates of the National Academy of Sciences
Leiden University alumni
Members of the Royal Netherlands Academy of Arts and Sciences
New Naturalist writers
Nobel laureates in Physiology or Medicine
Scientists from The Hague
World War II prisoners of war held by Germany
20th-century atheists
Members of the American Philosophical Society
APA Distinguished Scientific Award for an Early Career Contribution to Psychology recipients | Nikolaas Tinbergen | Biology | 2,719 |
13,629,352 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD93 | In molecular biology, Small Nucleolar RNA SNORD93 (also known as HBII-336) is a non-coding RNA (ncRNA) molecule that functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the Eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and is also often referred to as a guide RNA.
SNORD93 belongs to the C/D box class of snoRNAs which contain the C (UGAUGA) and D (CUGA) box motifs. Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. This snoRNA is the human orthologue of mouse snoRNA MBII-336.
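As a small, concrete illustration of what "containing the C and D box motifs" means at the sequence level, the sketch below scans an RNA string for the two motifs named above (the example sequence is a made-up placeholder, not the actual SNORD93 sequence):

```python
# Minimal motif scan for the C (UGAUGA) and D (CUGA) boxes of a box C/D snoRNA.
# In box C/D snoRNAs the C box typically lies near the 5' end and the D box near the 3' end.

def find_box_motifs(rna, c_box="UGAUGA", d_box="CUGA"):
    """Return (C box index, D box index) if both motifs occur in the expected order, else None."""
    rna = rna.upper().replace("T", "U")   # accept DNA-style input as well
    c_pos = rna.find(c_box)               # first occurrence, toward the 5' end
    d_pos = rna.rfind(d_box)              # last occurrence, toward the 3' end
    if c_pos == -1 or d_pos == -1 or d_pos <= c_pos:
        return None
    return c_pos, d_pos

example = "AAUGAUGACCAUAGCAAUGCUUAACGAUGUAGCUGAUAA"  # placeholder sequence
print(find_box_motifs(example))  # -> (2, 32) for this toy sequence
```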
SNORD93 is predicted to guide the 2'O-ribose methylation of 18S ribosomal RNA (rRNA) residue A576.
Additionally, SNORD93 can be processed into a smaller, microRNA-like fragment (termed snoRNA-derived RNA(sdRNA)) that contributes to the malignant phenotype of breast cancer.
The processed piece (sdRNA-93) has been shown to target Pipox, a sarcosine metabolism-related protein whose expression significantly correlates with distinct molecular subtypes of breast cancer.
References
External links
Non-coding RNA | Small nucleolar RNA SNORD93 | Chemistry | 327 |
13,266,701 | https://en.wikipedia.org/wiki/BT-Epoxy | BT-Epoxy (BT short for Bismaleimide-Triazine resin) is one of a number of thermoset resins used in printed circuit boards (PCBs). It is a mixture of epoxy resin (a common raw material for PCBs) and BT resin, which is in turn a mixture of bismaleimide (also used as a raw material for PCBs) and cyanate ester. Three cyano groups of the cyanate ester are trimerized to a triazine ring structure, hence the T in the name. In the presence of a bismaleimide, the double bond of the maleimide group can copolymerize with the cyano groups to form heterocyclic 6-membered aromatic ring structures with two nitrogen atoms (pyrimidines). The cure reaction occurs at elevated temperatures and is catalyzed by strongly basic molecules like Dabco (diazabicyclooctane) and 4-DMAP (4-dimethylaminopyridine). Products with very high glass transition temperatures (Tg) and very low dielectric constants can be obtained. These properties make these materials very attractive for use in PCBs, which are often subjected to such conditions.
See also
Printed circuit board
Synthetic resin
Epoxy
References
Synthetic resins | BT-Epoxy | Chemistry | 286 |
1,242,477 | https://en.wikipedia.org/wiki/Mu%20Draconis | Mu Draconis (μ Draconis, abbreviated Mu Dra, μ Dra) is a multiple star system near the head of the constellation of Draco. With a combined magnitude of 4.92, it is visible to the naked eye. Based on parallax estimates by the Hipparcos spacecraft, it is located approximately 89 light-years from the Sun.
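A minimal sketch of the parallax-to-distance conversion behind the quoted figure is given below. The parallax value used (36.6 milliarcseconds) is an assumption chosen only to be consistent with the ~89 light-year distance; it is not a value taken from this article.

LY_PER_PC = 3.2616  # light-years per parsec

def parallax_mas_to_ly(parallax_mas):
    """Distance in light-years from a parallax given in milliarcseconds."""
    distance_pc = 1.0 / (parallax_mas / 1000.0)  # d [pc] = 1 / p [arcsec]
    return distance_pc * LY_PER_PC

# Assumed parallax of ~36.6 mas gives roughly 89 light-years.
print(round(parallax_mas_to_ly(36.6), 1))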
The system consists of a single primary star (designated Mu Draconis A, officially named Alrakis from the traditional name of the system), a secondary binary pair (Mu Draconis B) and a further single star (C). B's two components are designated Mu Draconis Ba and Bb.
Mu Draconis A and Ba are nearly identical F-type main-sequence stars, with masses of and , respectively. Both have the spectral class F5V and similar apparent magnitudes, at 5.66 and 5.69, respectively. The secondary, Mu Draconis B, has a drifting radial velocity and is itself a spectroscopic binary with an orbital period of 2,270 days. The angular separation between the two bright stars is 2 arcseconds, so a telescope with an aperture of at least 6 centimetres is needed to resolve them as separate stars. The smaller component, Mu Draconis Bb, has a mass of . Mu Draconis C is a 14th-magnitude common-proper-motion companion 13.2" away from the bright pair, with a mass of .
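The aperture figure can be sanity-checked with Dawes' limit, a standard rule of thumb for double-star resolution (resolvable separation in arcseconds ≈ 11.6 divided by the aperture in centimetres); the sketch below applies it to the 2-arcsecond separation quoted above.

def min_aperture_cm(separation_arcsec):
    """Smallest aperture (cm) that resolves a pair, via Dawes' limit."""
    return 11.6 / separation_arcsec

print(round(min_aperture_cm(2.0), 1))  # ~5.8 cm, i.e. roughly 6 cm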
Nomenclature
μ Draconis (Latinised to Mu Draconis) is the star's Bayer designation. The designations of the three constituents as Mu Draconis A, B and C, and those of B's components - Mu Draconis Ba and Bb - derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
It is also known by the name Arrakis (or Errakis), which is derived from the name given to it by Arabian stargazers, الراقص al-rāqiṣ "the trotting (camel)" (lit. "the dancing one").
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Alrakis for the component Mu Draconis A on February 1, 2017, and it is now so included in the List of IAU-approved Star Names.
This star, along with Beta Draconis (Rastaban), Gamma Draconis (Eltanin), Nu Draconis ('Kuma') and Xi Draconis (Grumium), formed Al ʽAwāïd, the Mother Camels, which were known in Latin as the Quinque Dromedarii.
Cultural references
Science fiction writer Frank Herbert chose Arrakis as the name of the primary planet of Canopus (α Carinae) in his Dune series of novels, aware that the word "Arrakis" is the transliteration into English of the Arabic words for "the Dancer" (al-Raqis).
References
Binary stars
Draco (constellation)
Draconis, Mu
Draconis, 21
F-type main-sequence stars
Alrakis
083608
6369 70
Durchmusterung objects
154905 6
9584 | Mu Draconis | Astronomy | 720 |
30,631,605 | https://en.wikipedia.org/wiki/Dopant%20activation | Dopant activation is the process of obtaining the desired electronic contribution from impurity species in a semiconductor host. The term is often restricted to the application of thermal energy following the ion implantation of dopants. In the most common industrial example, rapid thermal processing is applied to silicon following the ion implantation of dopants such as phosphorus, arsenic and boron. Vacancies generated at elevated temperature (1200 °C) facilitate the movement of these species from interstitial to substitutional lattice sites while amorphization damage from the implantation process recrystallizes. Because the anneal must remain rapid, the peak temperature is often maintained for less than one second to minimize unwanted chemical diffusion.
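To illustrate why the dwell time is kept so short, the sketch below estimates the characteristic diffusion length L ≈ 2·sqrt(D·t) for a few anneal times; the diffusivity used is a hypothetical order-of-magnitude value, not a measured constant for any particular dopant.

import math

D = 1e-13  # cm^2/s, assumed dopant diffusivity near the peak anneal temperature
for t in (0.1, 1.0, 10.0):  # dwell times in seconds
    L_nm = 2 * math.sqrt(D * t) * 1e7  # characteristic spread, converted cm -> nm
    print(f"t = {t:4.1f} s  ->  L ~ {L_nm:.0f} nm")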
References
Semiconductor properties | Dopant activation | Physics,Materials_science | 141 |
61,588,990 | https://en.wikipedia.org/wiki/Olivier%20pile | An Olivier pile is a drilled displacement pile: an underground deep foundation pile made of concrete or reinforced concrete with a screw-shaped (helical) shaft, installed without soil removal.
History
The Belgian Gerdi Vankeirsbilck applied for the production patent for the Olivier pile in April 1996. The technique was implemented by his own company, and various licences have been granted in Belgium and abroad. Due to its screw-shaped shaft, the Olivier pile is particularly suitable for use in soils with low load-bearing capacity, such as clay and loam. In 2018 a patent was applied for covering drilling without the use of a lost tip.
Description
An Olivier pile is drilled into the ground using a drilling rig with a top-type rotary drive and a variable rate of penetration. A lost tip is attached to a partial-flight auger which, in turn, is attached to a casing. The casing, which is rotated clockwise continuously, penetrates into the ground under the action of a torque and a vertical force. At the desired installation depth, the lost tip is released, and the reinforcing cage is inserted into the casing. Concrete is then placed inside the casing through a funnel. The casing and the partial-flight auger are extracted by counter-clockwise rotation. The shaft of the Olivier pile has the shape of a screw.
The casing has an external diameter of 324mm (12.75"), with a wall thickness of 25mm (1"). The casing consists of several parts assembled with watertight couplings, which are strong enough to handle the maximum torque produced by the rotary drive. The various auger heads, for the various diameters of the Olivier Pile, all have a larger diameter than the casing.
Common diameters of the auger head
diameter 36–56 cm (14"-22")
diameter 41–61 cm (16"-24")
diameter 46–66 cm (18"- 26")
diameter 51–71 cm (20"-28")
diameter 56–76 cm (22"-30")
Installation
An Olivier pile is screwed into the ground without vibration; the soil is displaced sideways. No soil is transported to the surface.
The rotary pressure is compared with the results of the cone penetration test.
The rotary head, with a maximum torque of 55 tonne-metres, is pulled up and down along an adjustable leader, which is strong enough to withstand all the forces. The leader is adjustable in all directions, so that it is possible to drill leaning backward or forward.
When the desired depth has been reached, it is possible to place reinforcement through the casing.
Then the auger head is screwed back. The lost tip is disconnected from the auger head and left behind. During the screwback process, the soil is displaced again. This system is called 'double soil displacement'.
While the auger is screwed back, concrete is poured into the casing. Under its own weight, the concrete is forced into the space beneath the auger head, and the Olivier pile is formed with its screw-shaped shaft. The screw shape has a pitch of ±250 mm (±10") and an external diameter ±200 mm (±8") larger than the base shaft. This process continues until the entire casing and the auger head are back at the surface.
External links
Research and Development Activities on Pile Foundations in Europe, wtcb.be
Drilled Displacement Piles – Current Practice and Design, researchgate.net
Cast piles, floridabuildingofficials.com
The history of displacement piles, soilmecna.com
See also
Franki pile
Deep foundation
References
Geotechnical engineering
Civil engineering
Foundations (buildings and structures) | Olivier pile | Engineering | 764 |
17,807,140 | https://en.wikipedia.org/wiki/Priming%20%28immunology%29 | Priming is the first contact that antigen-specific T helper cell precursors have with an antigen. It is essential to the T helper cells' subsequent interaction with B cells to produce antibodies. Priming of antigen-specific naive lymphocytes occurs when antigen is presented to them in immunogenic form (capable of inducing an immune response). Subsequently, the primed cells will differentiate either into effector cells or into memory cells that can mount stronger and faster response to second and upcoming immune challenges. T and B cell priming occurs in the secondary lymphoid organs (lymph nodes and spleen).
Priming of naive T cells requires dendritic cell antigen presentation. Priming of naive CD8 T cells generates cytotoxic T cells capable of directly killing pathogen-infected cells. CD4 cells develop into a diverse array of effector cell types depending on the nature of the signals they receive during priming. CD4 effector activity can include cytotoxicity, but more frequently it involves the secretion of a set of cytokines that directs the target cell to make a particular response. This activation of naive T cells is controlled by a variety of signals: recognition of antigen in the form of a peptide:MHC complex on the surface of a specialized antigen-presenting cell delivers signal 1; interaction of co-stimulatory molecules on antigen-presenting cells with receptors on T cells delivers signal 2 (one notable example is a B7 ligand complex on antigen-presenting cells binding to the CD28 receptor on T cells); and cytokines that control differentiation into different types of effector cells deliver signal 3.
Cross-priming
Cross-priming refers to the stimulation of antigen-specific CD8+ cytotoxic T lymphocytes (CTLs) by dendritic cell presenting an antigen acquired from the outside of the cell. Cross-priming is also called immunogenic cross-presentation. This mechanism is vital for priming of CTLs against viruses and tumours.
Immune priming (invertebrate immunity)
Immune priming is a memory-like phenomenon in invertebrate taxa of animals, first described by Hans G. Boman and colleagues using Drosophila fruit flies. In vertebrates, immune memory is based on adaptive immune cells called B and T lymphocytes, which provide an enhanced and faster immune response when challenged with the same pathogen for a second time. It is evolutionarily advantageous for an organism to produce a rapid immune response to common pathogens it is likely to be exposed to again. In the 1940s-1960s, the budding field of immunology assumed that invertebrates did not have memory-like immune functions as they do not produce the antibodies needed for adaptive immunity. In 1972, Boman and colleagues' experiments overturned this assumption, showing that fruit flies could be "vaccinated" against a repeat infection by the same bacteria if they were first exposed to a freeze-thawed pathogen. Flies previously exposed to freeze-thawed bacteria cleared subsequent infection better than naive flies. Since then, evidence supporting innate memory-like functions has been found across model invertebrates, including insects and crustaceans.
Mechanism of immune priming
Immune priming research commonly finds that the mechanism conferring defence against a given pathogen depends on the insect species and the microbe used in a given experiment. This may be due to host-pathogen coevolution: it is advantageous for each species to develop a specialised defence against the pathogens (e.g. particular bacterial strains) it encounters most often. In the arthropod model organism the red flour beetle Tribolium castaneum, it has been shown that the route of infection (cuticular, septic or oral) matters for which defence mechanism is generated. Innate immunity in insects is based on non-cellular mechanisms, including production of antimicrobial peptides (AMPs) and reactive oxygen species (ROS) and activation of the prophenoloxidase cascade. The cellular arm of insect innate immunity consists of hemocytes, which can eliminate pathogens by nodulation, encapsulation or phagocytosis. The innate response during immune priming differs with the experimental setup, but generally it involves enhancement of humoral innate immune mechanisms and increased numbers of hemocytes. There are two hypothetical scenarios on which the immune priming mechanism could be based. In the first, the priming antigens induce long-lasting defences, such as circulating immune molecules, that remain in the host body until the secondary encounter. In the second, the response drops after the initial priming, but a stronger defence is mounted upon a secondary challenge. The most probable scenario is a combination of the two.
Trans-generational immune priming
Trans-generational immune priming (TGIP) describes the transfer of parental immunological experience to the progeny, which may help the offspring survive when challenged with the same pathogen. A similar mechanism of offspring protection against pathogens has long been studied in vertebrates, where the transfer of maternal antibodies helps the newborn's immune system fight infection before it can function properly on its own. In the last two decades, TGIP in invertebrates has been heavily studied. Evidence supporting TGIP has been found in various coleopteran, crustacean, hymenopteran, orthopteran and mollusk species, but in some other species the results remain contradictory. The experimental outcome can be influenced by the procedure used in a particular investigation; relevant parameters include the infection procedure, the sex of the offspring and of the parent, and the developmental stage.
References
Priming | Priming (immunology) | Biology | 1,167 |
61,240,539 | https://en.wikipedia.org/wiki/C6442H9966N1706O2018S40 |
The molecular formula C6442H9966N1706O2018S40 (molar mass: 144.88 kg/mol) may refer to:
Carlumab
Crenezumab | C6442H9966N1706O2018S40 | Chemistry | 70 |
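The quoted molar mass can be checked against standard (rounded) atomic weights; the short sketch below is only an illustrative calculation.

atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
formula = {"C": 6442, "H": 9966, "N": 1706, "O": 2018, "S": 40}

molar_mass_g = sum(atomic_weight[el] * n for el, n in formula.items())
print(round(molar_mass_g / 1000, 2), "kg/mol")  # -> 144.88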
4,138,713 | https://en.wikipedia.org/wiki/CBOSS%20Corporation | CBOSS Corporation (Convergent Business Operation Support System) is a telecom company primarily based in Russia, with offices in Finland, the UAE and Vietnam. CBOSS Corporation, also known as CBOSS Group, develops IT solutions for the automation of telecommunications enterprises.
One of the three biggest Russian mobile operators, MTS, used the CBOSS billing solution from 1998 until 2004, when it switched to FORIS OSS-IN from the STROM Telecom company. (Mikhail Severov, "Пробил час большого Билла. Петербургские операторы меняют счетные системы" // SpbIT.su, 2005-04-05: "Mobile TeleSystems. Until recently its corporate standard was the billing system developed by the Moscow company CBOSS, which it has operated since 1998. Last year, however, this situation changed; in particular, the Foris system from the Czech company STROM Telecom was launched in Moscow.")
In 2004, CBOSS was rated as the 11th biggest IT company in Russia by CNews.ru. In 2006, CBOSS Corporation was recognized as the #1 IT provider of integrated solutions for telecommunications in EMEA by the Informa Telecoms Group.
In February 2004, CBOSS acquired the online billing solutions subsidiary of Fujitsu Services Oy and its product, the rtBilling (CBOSSrtb) prepaid billing system. This system was used by several mobile operators: British O2, Australian Optus, Canadian Rogers, Austrian One GmbH and Colombian Colombia Movil.
In 2008 CBOSS was selected by German MVNECO GmbH to provide IT infrastructure and IT solutions to implement mobile virtual network activities.
References
External links
Company website
Telecommunications companies of Russia
Software companies of Russia
Telecommunications companies established in 1996
Russian brands
Telecommunications billing systems
Business software companies | CBOSS Corporation | Technology | 584 |
33,814,239 | https://en.wikipedia.org/wiki/Sustainable%20urbanism | Sustainable urbanism is both the study of cities and the practice of building them (urbanism) that focuses on promoting their long-term viability by reducing consumption, waste and harmful impacts on people and place while enhancing the overall well-being of both people and place. Well-being includes the physical, ecological, economic, social, health and equity factors, among others, that comprise cities and their populations. In the context of contemporary urbanism, the term cities refers to several scales of human settlements from towns to cities, metropolises and mega-city regions, including their peripheries / suburbs / exurbs. Sustainability is a key component of professional practice in urban planning and urban design, along with its related disciplines landscape architecture, architecture, and civil and environmental engineering. Green urbanism and ecological urbanism are other common terms similar to sustainable urbanism; however, they can be construed as focusing more on the natural environment and ecosystems and less on economic and social aspects. Also related to sustainable urbanism are the practices of land development called sustainable development, which is the process of physically constructing sustainable buildings, as well as the practices of urban planning called smart growth or growth management, which denote the processes of planning, designing, and building urban settlements that are more sustainable than if they were not planned according to sustainability criteria and principles.
Terminology
The origin of the term sustainable urbanism has been attributed to Professor Susan Owens of Cambridge University in the UK in the 1990s, according to her doctoral student and now professor of architecture Phillip Tabb. The first university graduate program named Sustainable Urbanism was founded by professors Michael Neuman and Phillip Tabb at Texas A&M University in 2002. There are now dozens of university programs with that name worldwide. As of 2018, there are hundreds of scholarly articles, books and publications whose titles contain the exact words sustainable urbanism and thousands of articles, books and publications that contain that exact term, according to Google Scholar.
In 2007, two important events occurred in the USA that furthered the knowledge base and diffusion of sustainable urbanism. First was the International Conference on Sustainable Urbanism at Texas A&M University in April, which drew nearly 200 persons from five continents. Second, later in the year, was the publication of the book Sustainable Urbanism by Doug Farr. According to Farr, this approach aims to eliminate environmental impacts of urban development by supplying and providing all resources locally. The full life cycle of services and public goods such as electricity and food are evaluated from production to consumption with the intent of eliminating waste or environmental externalities. Since that time, significant research and practice worldwide has broadened the term considerably to include social, economic, welfare and public health factors, among others, to the environmental and physical factors in the Farr book; thus taking it beyond an urban design field into all of urban planning, policy and development. Approaches that focus on the social and economic aspects use the terms fair cities and just cities. The United Nations has incorporated sustainable urbanism into its global sustainable development goals as goal 11, Sustainable Cities and Communities.
There are a range of organizations promoting and researching sustainable urbanism practices, including governmental agencies, non-governmental organizations, professional associations, universities and research institutes, philanthropic foundations and professional enterprises around the world. Related to sustainable urbanism is the Ecocity or Ecological Urbanism movement which is another approach that focuses on creating urban environments based on ecological principles, and the resilient cities movement which focuses on addressing depleting resources by creating distributed local resources to replace global supply chain in case of major disruption. As resilient cities thinking has evolved, it too has gone beyond climate change to incorporate resilient responses by hybrid urban-natural ecosystems such as city regions to natural disasters, war and conflict, economic shocks and crises, massive migration, and other shocks.
Sustainable Urbanism: Urban Design With Nature, by Doug Farr (2007)
The architect and urban planner Doug Farr discusses making cities walkable, along with combining elements of ecological urbanism, sustainable urban infrastructure, and new urbanism, and goes beyond them to close the loop on resource use and bring everything into the city or town. This approach is centered on increasing the quality of life by affording greater accessibility to activities and places within a short distance and by increasing the quality of products that are offered.
Comparison of similar principles
New Urbanism emerged in the 1980s and was an early touchstone for sustainable urbanism, since it is based around bringing activities and land uses closer together, increasing urban and suburban densities, being more efficient in terms of infrastructure provision and transport energy use, and having more within walking distance. There were significant critiques of New Urbanism, and of the more international term compact city, that found it to be a limited approach. Their principal conclusion was that the sustainability of a city cannot be measured by form alone, and that processes are critical to measuring sustainability. Critics of New Urbanism argue that it attempts to apply 19th-century urban form to 21st-century cities, and that it excludes economic diversity by creating expensive places to live that are highly privatized and controlled. Also, critics believe that, while New Urbanism contains many attractive ideas, it may have difficulty dealing with a wide range of contemporary issues including scale, transportation, planning and codes, regionalism, and marketing.
Sustainable urbanism bridges the gaps of New Urbanism by including the factors listed in the opening paragraph above.
Smart growth is a related approach to sustainable urbanism. As conceived by urban planners, it helps achieve a greater jobs–housing balance, but it is likely to leave the sense of place unaddressed. While New Urbanism may fulfill that dimension of sense of place, it is not viewed as an approach that will lead to communities that are energy self-reliant. The ecological city approach seems to be complementary to the other two approaches in terms of their respective areas of strength and weakness.
Green urbanism probably shares the most ideas with sustainable urbanism. Both emphasize the interplay of cities with nature, as well as shaping better communities and lifestyles. However, the principles of green urbanism are based on the triple-zero framework: zero fossil-fuel energy use, zero waste, and zero emissions. Sustainable urbanism, on the other hand, is more focused on designing communities that are walkable and transit-served so that people will prefer to meet their daily needs on foot.
Defining elements of Sustainable Urbanism
Compactness
Compactness, or density, plays an important yet limited role in sustainable urban development because it can support reductions in per-capita transport energy use by increasing walking, cycling, active transport and public transit use. The density of much urban, and especially suburban and exurban, development is too low to support efficient transit and walk-to destinations. Such low-density development is a characteristic of urban sprawl, which is the major cause of high dependence on private automobiles, inefficient infrastructure, increased obesity, loss of farmlands and natural habitats, pollution, and so on. For these reasons, sustainable urbanism tends to promote more compact development with greater intensities of use and a greater variety of uses and activities in a given urban area.
Research has shown that low-density development can exacerbate non-point source pollutant loadings by consuming absorbent open space and increasing impervious surface area relative to compact development. While increasing densities regionally can better protect water resources at a regional level, higher-density development can create more impervious cover, which increases water quality problems in nearby or adjacent water bodies.
Increasing neighborhood population density also supports improved public transit service. Concentrating development density in and around transit stops and corridors maximizes people's willingness to walk and thus reduces car ownership and use. Sustainable urbanism seeks to integrate infrastructure design with increased density, because concentrated mixed-use development requires less infrastructure per capita than detached single-family housing.
Biophilia and Biophilic Cities
The Biophilia hypothesis was introduced by E. O. Wilson. It refers to the connection between humans and other living systems. Within this concept, humans are biologically predisposed to caring for nature. Biophilic cities are those that bring nature into the city by increasing parks and open spaces, green and blue corridors, and networks that link them. Increasingly, biophilia refers to habitats that support other species, sustainable food production and urban agriculture. Thus, biophilia and biophilic cities are an underlying component of sustainable urbanism.
Sustainable corridors
Sustainable corridors are similar to a wildlife corridor in that they connect one area to another efficiently, cheaply, and safely. They allow people to pass from their immediate proximity to another without relying on cars or other wasteful and inefficient products. It also relies on accessibility to all people in the community so that the mode of transportation is the most convenient and easiest to use for everyone. Sustainable Corridors also include biodiversity corridors to allow animals to move around communities so that they may still live in and around cities.
High performance buildings
High performance buildings are designed and constructed to maximize operational energy savings and minimize the environmental impacts of building construction and operation. Building construction and operation generate a great deal of 'externalized costs' such as material waste, energy inefficiencies and pollution. High performance buildings aim to minimize these and make the process much more efficient and less harmful. The New York City Department of Design & Construction put out a set of guidelines on high performance buildings in April 1999 that have broad application to sustainable urbanism as a whole worldwide.
By incorporating environmentally sound materials and systems, improving indoor air quality and using natural or high-efficiency lighting, a high performance building minimizes its impact on its natural surroundings; additionally, those who work or live in these buildings directly benefit from these differences. Some building owners have even reported increased worker productivity as a result of the improved conditions. However, because these other benefits are more difficult to quantify than direct energy savings, the real value of high performance buildings can easily be underestimated by traditional accounting methods that do not recognize 'external' municipal and regional costs and benefits. Cost evaluations of high performance buildings should account for the economic, social, and environmental benefits that accompany green buildings.
Energy efficiency/clean energy resources
Reduce energy use and demand through passive solar techniques and integrated building design. This process looks at optimum orientation and maximizes the thermal efficiency of the building envelope (windows, walls, roof) while also considering the interaction of the HVAC, lighting, and control systems. Integrated design uses daylight to reduce electrical demand, and incorporates energy-efficient lighting, motors, and equipment. Where feasible, renewable energy sources such as photovoltaic cells, solar hot water, and geothermal exchange are used in tandem with other low-emission technologies, such as fuel cells. The resulting direct energy cost savings (fuel and electricity) yield a good rate of return on the initial investment. Other external benefits include improved air quality from reduced fuel consumption (limiting nitrous oxide, sulfur dioxide, methane, and other gases that contribute to air pollution). Additionally, reducing the overall aggregate electrical load significantly reduces carbon dioxide emissions.
Improved indoor environment
Improve indoor air quality by eliminating unhealthy emissions – such as volatile organic compounds (VOCs) – from building materials, products, and furnishings, and through outside filtering and distribution techniques that control pollutants. Maximize the use of controlled daylighting, which can then be augmented by high-quality artificial lighting. Provide good acoustic control. As a result, high performance facilities can help address a wide range of human resource concerns by improving the total quality of the interior environment. In addition, attention to building wellness today helps avoid future costs of corrections. Such 'well building' design emphasis can improve occupant comfort, health, and well-being, in turn reducing employee absenteeism and turnover.
Source reduction, pollution prevention and recycling
Use materials that come from renewable resources, are themselves recyclable, and have been manufactured in a manner less damaging to the environment. Implement construction and demolition (C&D) waste prevention/management strategies and selective site sorting of materials for salvage, recycling, or disposal. These actions will prevent unnecessary depletion of natural resources and will reduce air, water, and soil pollution. They will also strengthen the market for recycled materials, and the manufacture of products with post-consumer content. Long-term, better C&D waste management can reduce waste disposal costs, ease stress on landfills, and minimize the cost of transporting waste to disposal facilities outside the city.
High performance infrastructure
High-performance infrastructure refers to core best management practices (BMPs) applicable to the typical section of the public right-of-way, encompassing street sidewalk, underground utilities, stormwater infrastructure, landscapes, and streetscape elements. In addition to many public health and environmental benefits, financial benefits include decreased first costs, decreased operation and maintenance costs, decreased energy costs and increased real estate values.
Component optimization
At the single-component level, standard details may be improved to optimize performance, minimize environmental impact, use materials more efficiently or extended lifecycle. Examples include using reclaimed supplementary cement materials to increase pavement strength or designing water-efficient landscapes to reduce irrigation needs and water consumption.
Multifunctional optimization
Improving single components does not consider the whole system in place, so multifunctional optimization guidelines seek to minimize conflicts among parts and promote synergies. This could lead to long-term savings, improved performance and lifecycle, and increased returns on municipal investments. One example is using permeable pavement to reduce stormwater runoff and peak demand on stormwater management infrastructure while providing an adequate driving surface for vehicles.
Integrated design
Systems-oriented design focuses on improving the performance of the entire roadway system. It requires cross-disciplinary teamwork at the planning, scoping, design and construction phases. It promotes comprehensive performance improvements, compounds environmental benefits and potentially offers substantial cost savings. An example of integrated design would be designing a roadway with a diversely planted center median that functions as both a traffic-calming device and a stormwater bioretention area to improve pedestrian safety, minimize stormwater runoff, dampen street noise and improve air quality.
Examples of sustainable urbanism
Leading examples as of 2018 include the Hammarby Sjöstad district in Stockholm, Sweden; Freiburg, Germany; BedZED in Hackbridge, Sutton, England, a suburb of London; and Serenbe near Atlanta, Georgia in the US.
Newington, Sydney, Australia
A suburb in western Sydney, Australia, Newington, was the home to the athletes of the 2000 Summer Olympics and 2000 Summer Paralympics. It was built on a brownfield site, and it was developed by Mirvac Lend Lease Village Consortium from 1997. Redevelopment of the village was completed in 1999, but further development is still occurring. After the Games, Newington stimulated the Australian market for green products, and it became a solar village housing approximately 5,000 people. Unfortunately, the development failed to build neighborhood centers with walk-to services, which perpetuates automobile dependence. Furthermore, Newington does not provide any affordable housing.
Key Sustainable Urbanism Thresholds:
High performance buildings: Solar panels are installed in every home in Newington. “At the time of its construction it was the largest solar village in the world… The collective energy generated by these photovoltaic panels will prevent 1,309 tons of CO2 from entering the atmosphere per year, the equivalent of 262 cars being taken off the road.” By using window awnings, wool insulation, slab construction, and efficient water fixtures, over 90 percent of the homes are designed to consume 50 percent less energy and water than conventional homes.
Sustainable corridors and biophilia: At Newington, 90 percent of the plantings are native species. 21 acres of the development site is incorporated into the Millennium Parklands. 40 percent of stormwater runoff infiltrates the groundwater supply and the rest is cleansed on-site and channeled to the ponds in the Parklands, providing important habitats. In addition, The Haslams Creek was rehabilitated from a concrete channel to a natural watercourse.
Dongtan, Shanghai, China
Dongtan is a development on eastern Chongming Island, roughly a one-hour trip from downtown Shanghai. It was once planned as “the world’s first eco-city,” intended to become an energy self-sufficient, carbon-neutral, and mostly car-free eco-city housing 500,000 residents. The first phase of the development was supposed to be completed by 2010, and the entire development by 2050, but the Dongtan project has been delayed indefinitely due to financial issues, among other things.
Key sustainable urbanism thresholds:
Compactness: Dongtan is planned to achieve densities of 84–112 people per acre, which will support efficient mass transit, social infrastructure, and a range of businesses. Most homes will be in mid-rise apartment buildings clustered toward the city center. Parks, lakes and other public open space will be scattered around the densely designed neighborhoods.
High performance infrastructure: Dongtan is designed to utilize various types of renewable energy, coming as close as possible to carbon neutrality. Wind turbines of different scales and solar panels will produce most of the energy Dongtan will need. The most ambitious portion of the energy infrastructure is the combined heat and power system (CHP), converting waste from different sources, including sewage, compost, and organic waste such as rice husks, into energy.
Upton, Northampton, England
Upton is part of the southwest district of Northampton, England, lying between the existing town edge and the motorway. Originally farming land, Upton was developed by English Partnerships, the national regeneration agency for England, with high standards of building and design codes. The planning outline started in 1997, and the sites were planned to be completed by 2011.
Key sustainable urbanism thresholds:
High performance buildings and infrastructure: The Upton development is planned to employ sustainable urban drainage systems (SUDS), controlling the flow and quality of water entering the sewage system. Other green technologies being implemented include green roofs, micro-combined heat and power (micro-CHP), rainwater harvesting systems, and PV systems.
Sustainable Neighborhoods: Upton is currently developing its transit system. As soon as the first residents move in, a twice-hourly bus service will begin running in the neighborhoods. A car sharing program is also proposed. The development is achieving its social sustainability by requiring that 22 percent of scattered units be permanently affordable housing.
Sustainable urbanism organizations
Transition Town movement works to promote citizen based resilience to transition to a low carbon future.
Eco-City Builders holds a bi-annual conference on sustainable urbanism and promotes high performance planning and urban design practices.
The IGLUS Project at EPFL is a global action research network which is aimed at improving performance of cities in the areas of efficiency, resilience and sustainability by promoting more innovative governance approaches in urban infrastructure systems.
The Eco Cities Project at the University of Manchester (UK) is a research organization developing and validating sustainable urbanism practices.
Biophilic Cities Network.
The Institute for Sustainable Cities (New York City) works with the City of New York and residents to promote sustainable urbanism practices and policies.
International Council for Local Environmental Initiatives (ICLEI) supports policy, good governance, and local governmental practices to improve sustainability and resilience. They are working on four specific sustainable urbanism initiatives: (a) Resilient Communities and Cities, (b) Just and Peaceful Communities, (c) Viable Local Economies, and (d) Eco-efficient Cities.
The United Nations Habitat promotes sustainable urbanism practices around the globe to localize Agenda 21 with the UNEP. The Sustainable Cities Programme was established in 1990 as a joint UN-HABITAT/UNEP agency.
The Stockholm Resilience Center promotes practices to allow cities and places to adapt to climate change and resource depletion through sustainability practices.
LEED-ND
The LEED for Neighborhood Development (LEED-ND) is the United States' first rating system for green neighborhoods. LEED-ND was created out of a partnership between the Congress for the New Urbanism, the U.S. Green Building Council (USGBC), and the Natural Resources Defense Council (NRDC). It provides a coordinated environmental strategy to achieve sustainability at the level of entire neighborhoods and communities. LEED-ND is a rating system that certifies green neighborhoods, building off USGBC's Leadership in Energy and Environmental Design (LEED), a third-party verification that a development meets high standards of environmental responsibility. LEED-ND combines the principles of new urbanism, green building, and smart growth to create the first accepted national standard for neighborhood design, extending LEED's scope beyond individual buildings to the more holistic context of the neighborhood and community.
Criticism
There are professionals who are concerned that the use of "Sustainable Urbanism" as a label risks debasing the term "sustainable", with developments being labeled as examples of "Sustainable Urbanism" which, while substantially better than much modern development, are not truly sustainable according to the Brundtland definition of sustainability, taken from the landmark 1987 United Nations Report on Environment and Development titled Our Common Future: Report of the World Commission on Environment and Development.
See also
Circles of Sustainability
Co-benefits of climate change mitigation
Environmental planning
SmartCode
Sustainability
Sustainable development
Sustainable city
Urban resilience
Urban vitality
Water-sensitive urban design
Transport:
References
Urbanism
Urban planning
Sustainable urban planning | Sustainable urbanism | Engineering | 4,393 |
3,207,976 | https://en.wikipedia.org/wiki/First%20Monday%20%28journal%29 | First Monday is a monthly peer-reviewed open access academic journal covering research on the Internet, published in the United States.
Publication
The journal is sponsored and hosted by the University of Illinois at Chicago. It is published on the first Monday of every month. In 2011, the journal had an acceptance rate of about 15%.
The journal has no article processing charges and no advertisements.
History
According to the editor-in-chief, Edward Valauskas, the journal emerged before the open access model did.
First Monday is among the first peer-reviewed journals on the Internet. It originated in the summer of 1995 with a proposal to start a new Internet-only, peer-reviewed journal about the Internet by eventual editor-in-chief Edward J. Valauskas to Munksgaard, a Danish publisher. Munksgaard agreed to publish the journal in September 1995. The first issue appeared on 6 May 1996, the first Monday of May, also the opening of the Fifth International World Wide Web Conference in Paris. The first issue was distributed at that conference on diskette as well as released on the Internet from a server in Copenhagen at the address www.firstmonday.dk.
In December 1998, Munksgaard sold the journal to three of the editors: Edward J. Valauskas, Esther Dyson, and Rishab Aiyer Ghosh. The server was moved from Copenhagen to the University of Illinois at Chicago's Library. The first issue based on a server in Chicago appeared 4 January 1999.
Conferences
The first First Monday conference took place 4–6 November 2001 in Maastricht at the International Institute of Infonomics. To celebrate First Monday's 10th birthday in 2006, a conference took place at the University of Illinois at Chicago, 15–17 May 2006. The theme of the conference was Openness: Code, science and content. Over 200 participants from over 30 countries took part in the meeting. Papers from the Conference were published in the June and July issues. The conference was sponsored by The Open Society Institute, The John D. and Catherine T. MacArthur Foundation, The University of Illinois at Chicago University Library and The Maastricht Economic Research Institute on Innovation and Technology (MERIT), University of Maastricht.
References
External links
A discussion with the journal's founder in 2006
Computer science journals
Cultural journals
Digital humanities
English-language journals
Mass media about Internet culture
Monthly journals
Open access journals
Academic journals established in 1996
University of Illinois Chicago
1996 establishments in Illinois
Academic and educational organizations in Chicago | First Monday (journal) | Technology | 512 |
382,251 | https://en.wikipedia.org/wiki/Natural%20philosophy | Natural philosophy or philosophy of nature (from Latin philosophia naturalis) is the philosophical study of physics, that is, nature and the physical universe while ignoring any supernatural influence. It was dominant before the development of modern science.
From the ancient world (at least since Aristotle) until the 19th century, natural philosophy was the common term for the study of physics (nature), a broad term that included botany, zoology, anthropology, and chemistry as well as what we now call physics. It was in the 19th century that the concept of science received its modern shape, with different subjects within science emerging, such as astronomy, biology, and physics. Institutions and communities devoted to science were founded. Isaac Newton's book Philosophiæ Naturalis Principia Mathematica (1687) (English: Mathematical Principles of Natural Philosophy) reflects the use of the term natural philosophy in the 17th century. Even in the 19th century, the work that helped define much of modern physics bore the title Treatise on Natural Philosophy (1867).
In the German tradition, Naturphilosophie (philosophy of nature) persisted into the 18th and 19th centuries as an attempt to achieve a speculative unity of nature and spirit, after rejecting the scholastic tradition and replacing Aristotelian metaphysics, along with those of the dogmatic churchmen, with Kantian rationalism. Some of the greatest names in German philosophy are associated with this movement, including Goethe, Hegel, and Schelling. Naturphilosophie was associated with Romanticism and a view that regarded the natural world as a kind of giant organism, as opposed to the philosophical approach of figures such as John Locke and others espousing a more mechanical philosophy of the world, regarding it as being like a machine.
Origin and evolution of the term
The term natural philosophy preceded current usage of natural science (i.e. empirical science). Empirical science historically developed out of philosophy or, more specifically, natural philosophy. Natural philosophy was distinguished from the other precursor of modern science, natural history, in that natural philosophy involved reasoning and explanations about nature (and after Galileo, quantitative reasoning), whereas natural history was essentially qualitative and descriptive.
Greek philosophers defined natural philosophy as the study of the combination of beings living in the universe, ignoring things made by humans; another definition refers to human nature.
In the 14th and 15th centuries, natural philosophy was one of many branches of philosophy, but was not a specialized field of study. The first person appointed as a specialist in Natural Philosophy per se was Jacopo Zabarella, at the University of Padua in 1577.
Modern meanings of the terms science and scientists date only to the 19th century. Before that, science was a synonym for knowledge or study, in keeping with its Latin origin. The term gained its modern meaning when experimental science and the scientific method became a specialized branch of study apart from natural philosophy, especially since William Whewell, a natural philosopher from the University of Cambridge, proposed the term "scientist" in 1834 to replace such terms as "cultivators of science" and "natural philosopher".
From the mid-19th century, when it became increasingly unusual for scientists to contribute to both physics and chemistry, "natural philosophy" came to mean just physics, and the word is still used in that sense in degree titles at the University of Oxford and University of Aberdeen. In general, chairs of Natural Philosophy established long ago at the oldest universities are nowadays occupied mainly by physics professors. Isaac Newton's book Philosophiae Naturalis Principia Mathematica (1687), whose title translates to "Mathematical Principles of Natural Philosophy", reflects the then-current use of the words "natural philosophy", akin to "systematic study of nature". Even in the 19th century, a treatise by Lord Kelvin and Peter Guthrie Tait, which helped define much of modern physics, was titled Treatise on Natural Philosophy (1867).
Scope
Plato's earliest known dialogue, Charmides, distinguishes between science or bodies of knowledge that produce a physical result, and those that do not. Natural philosophy has been categorized as a theoretical rather than a practical branch of philosophy (like ethics). Sciences that guide arts and draw on the philosophical knowledge of nature may produce practical results, but these subsidiary sciences (e.g., architecture or medicine) go beyond natural philosophy.
The study of natural philosophy seeks to explore the cosmos by any means necessary to understand the universe. Some ideas presuppose that change is a reality. Although this may seem obvious, there have been some philosophers who have denied the concept of metamorphosis, such as Plato's predecessor Parmenides and later Greek philosopher Sextus Empiricus, and perhaps some Eastern philosophers. George Santayana, in his Scepticism and Animal Faith, attempted to show that the reality of change cannot be proven. If his reasoning is sound, it follows that to be a physicist, one must restrain one's skepticism enough to trust one's senses, or else rely on anti-realism.
René Descartes' metaphysical system of mind–body dualism describes two kinds of substance: matter and mind. According to this system, everything that is "matter" is deterministic and natural—and so belongs to natural philosophy—and everything that is "mind" is volitional and non-natural, and falls outside the domain of philosophy of nature.
Branches and subject matter
Major branches of natural philosophy include astronomy and cosmology, the study of nature on the grand scale; etiology, the study of (intrinsic and sometimes extrinsic) causes; the study of chance, probability and randomness; the study of elements; the study of the infinite and the unlimited (virtual or actual); the study of matter; mechanics, the study of translation of motion and change; the study of nature or the various sources of actions; the study of natural qualities; the study of physical quantities; the study of relations between physical entities; and the philosophy of space and time. (Adler, 1993)
History
Humankind's mental engagement with nature certainly predates civilization and the record of history. Philosophical, and specifically non-religious, thought about the natural world goes back to ancient Greece. These lines of thought began before Socrates, who turned his philosophical studies from speculation about nature to a consideration of man, or in other words, political philosophy. The thought of early philosophers such as Parmenides, Heraclitus, and Democritus centered on the natural world. In addition, three Presocratic philosophers who lived in the Ionian town of Miletus (hence the Milesian School of philosophy), Thales, Anaximander, and Anaximenes, attempted to explain natural phenomena without recourse to creation myths involving the Greek gods. They were called the physikoi ("natural philosophers") or, as Aristotle referred to them, the physiologoi. Plato followed Socrates in concentrating on man. It was Plato's student, Aristotle, who, in basing his thought on the natural world, returned empiricism to its primary place, while leaving room in the world for man. Martin Heidegger observes that Aristotle was the originator of the conception of nature that prevailed in the Middle Ages into the modern era:
Aristotle surveyed the thought of his predecessors and conceived of nature in a way that charted a middle course between their excesses.
"The world we inhabit is an orderly one, in which things generally behave in predictable ways, Aristotle argued, because every natural object has a "nature"—an attribute (associated primarily with form) that makes the object behave in its customary fashion..." Aristotle recommended four causes as appropriate for the business of the natural philosopher, or physicist, "and if he refers his problems back to all of them, he will assign the 'why' in the way proper to his science—the matter, the form, the mover, [and] 'that for the sake of which. While the vagaries of the material cause are subject to circumstance, the formal, efficient and final cause often coincide because in natural kinds, the mature form and final cause are one and the same. The capacity to mature into a specimen of one's kind is directly acquired from "the primary source of motion", i.e., from one's father, whose seed (sperma) conveys the essential nature (common to the species), as a hypothetical ratio.
Material cause An object's motion will behave in different ways depending on the [substance/essence] from which it is made. (Compare clay, steel, etc.)
Formal cause An object's motion will behave in different ways depending on its material arrangement. (Compare a clay sphere, clay block, etc.)
Efficient cause That which caused the object to come into being; an "agent of change" or an "agent of movement".
Final cause The reason that caused the object to be brought into existence.
From the late Middle Ages into the modern era, the tendency has been to narrow "science" to the consideration of efficient or agency-based causes of a particular kind.
In ancient Greece
Early Greek philosophers studied motion and the cosmos. Figures like Hesiod regarded the natural world as the offspring of the gods, whereas others like Leucippus and Democritus regarded the world as lifeless atoms in a vortex. Anaximander deduced that eclipses happen because of apertures in rings of celestial fire. Heraclitus believed that the heavenly bodies were made of fire contained within bowls; he thought that eclipses happen when the bowl turns away from the earth. Anaximenes is believed to have stated that an underlying element was air, and that by manipulating air someone could change its thickness to create fire, water, dirt, and stones. Empedocles identified the elements that make up the world, which he termed the roots of all things, as fire, air, earth, and water. Parmenides argued that all change is a logical impossibility; he gives the example that nothing can go from nonexistence to existence. Plato argued that the world is an imperfect replica of an idea that a divine craftsman once held. He also believed that the only way to truly know something was through reason and logic, not the study of the object itself, but that changeable matter is nonetheless a viable course of study.
Aristotle's philosophy of nature
"An acorn is potentially, but not actually, an oak tree. In becoming an oak tree, it becomes actually what it originally was only potentially. This change thus involves passage from potentiality to actuality — not from non-being to being but from one kind or degree to being another"
Aristotle held many important beliefs that started a convergence of thought for natural philosophy. Aristotle believed that attributes of objects belong to the objects themselves, and are shared with other objects that fit into the same category. He uses the example of dogs to press this point: an individual dog may have very specific attributes (e.g., one dog can be black and another brown) but also very general ones that classify it as a dog (e.g., four-legged). This philosophy can be applied to many other objects as well. This idea is different from that of Plato, with whom Aristotle had a direct association. Aristotle argued that objects have "form", their properties, and "matter", something that is not part of their properties, and that the form cannot be separated from the matter. He gives the example that you cannot separate properties from matter, since it is impossible to collect properties in one pile and matter in another.
Aristotle believed that change was a natural occurrence. He used his philosophy of form and matter to argue that when something changes you change its properties without changing its matter. This change occurs by replacing certain properties with other properties. Since this change is always an intentional alteration whether by forced means or by natural ones, change is a controllable order of qualities. He argues that this happens through three categories of being: non-being, potential being, and actual being. Through these three states the process of changing an object never truly destroys an object's forms during this transition state but rather just blurs the reality between the two states. An example of this could be changing an object from red to blue with a transitional purple phase.
Medieval philosophy of motion
Medieval thought on motion drew heavily on Aristotle's works Physics and Metaphysics. The issue that medieval philosophers had with motion was the inconsistency found between book 3 of the Physics and book 5 of the Metaphysics. Aristotle claimed in book 3 of the Physics that motion can be categorized by substance, quantity, quality, and place, whereas in book 5 of the Metaphysics he stated that motion is a magnitude of quantity. This disputation led to some important questions for natural philosophers: Which category or categories does motion fit into? Is motion the same thing as a terminus? Is motion separate from real things? In asking these questions, medieval philosophers tried to classify motion.
William of Ockham gives a good account of motion for many people in the Middle Ages. There is an issue with the vocabulary behind motion that makes people think there is a correlation between nouns and the qualities that make nouns. Ockham states that this distinction is what will allow people to understand motion: motion is a property of mobiles, locations, and forms, and that is all that is required to define what motion is. A famous example of this is Occam's razor, which simplifies vague statements by cutting them into more descriptive examples. "Every motion derives from an agent" becomes "each thing that is moved, is moved by an agent"; this makes motion a more personal quality, referring to individual objects that are moved.
Natural philosophy in the early modern period
The scientific method has ancient precedents, and Galileo exemplifies a mathematical understanding of nature, which is a hallmark of modern natural scientists. Galileo proposed that falling objects, regardless of their mass, would fall at the same rate, as long as the medium they fall in is identical. The 19th-century distinction of a scientific enterprise apart from traditional natural philosophy has its roots in prior centuries. Proposals for a more "inquisitive" and practical approach to the study of nature are notable in Francis Bacon, whose ardent convictions did much to popularize his insightful Baconian method. The Baconian method is employed throughout Thomas Browne's encyclopaedia Pseudodoxia Epidemica (1646–1672), which debunks a wide range of common fallacies through empirical investigation of nature. The late-17th-century natural philosopher Robert Boyle wrote a seminal work on the distinction between physics and metaphysics called A Free Enquiry into the Vulgarly Received Notion of Nature, as well as The Skeptical Chymist, after which the modern science of chemistry is named (as distinct from proto-scientific studies of alchemy). These works of natural philosophy are representative of a departure from the medieval scholasticism taught in European universities, and anticipate in many ways the developments that would lead to science as practiced in the modern sense. As Bacon would say, "vexing nature" to reveal "her" secrets (scientific experimentation), rather than a mere reliance on largely historical, even anecdotal, observations of empirical phenomena, would come to be regarded as a defining characteristic of modern science, if not the very key to its success. Boyle's biographers, in their emphasis that he laid the foundations of modern chemistry, neglect how steadily he clung to the scholastic sciences in theory, practice and doctrine. However, he meticulously recorded observational detail on practical research, and subsequently advocated not only this practice, but its publication, both for successful and unsuccessful experiments, so as to validate individual claims by replication.
Natural philosophers of the late 17th or early 18th century were sometimes insultingly described as 'projectors'. A projector was an entrepreneur who invited people to invest in his invention but – as the caricature went – could not be trusted, usually because his device was impractical. Jonathan Swift satirized natural philosophers of the Royal Society as 'the academy of projectors' in his novel Gulliver's Travels. Historians of science have argued that natural philosophers and the so-called projectors sometimes overlapped in their methods and aims.
Current work in the philosophy of science and nature
In the middle of the 20th century, Ernst Mayr's discussions on the teleology of nature brought up issues that were dealt with previously by Aristotle (regarding final cause) and Kant (regarding reflective judgment).
Especially since the mid-20th-century European crisis, some thinkers argued the importance of looking at nature from a broad philosophical perspective, rather than what they considered a narrowly positivist approach relying implicitly on a hidden, unexamined philosophy. One line of thought grows from the Aristotelian tradition, especially as developed by Thomas Aquinas. Another line springs from Edmund Husserl, especially as expressed in The Crisis of European Sciences. Students of his such as Jacob Klein and Hans Jonas more fully developed his themes. Last, but not least, there is the process philosophy inspired by Alfred North Whitehead's works.
Among living scholars, Brian David Ellis, Nancy Cartwright, David Oderberg, and John Dupré are some of the more prominent thinkers who can arguably be classed as generally adopting a more open approach to the natural world. Ellis (2002) observes the rise of a "New Essentialism". David Oderberg (2007) takes issue with other philosophers, including Ellis to a degree, who claim to be essentialists. He revives and defends the Thomistic-Aristotelian tradition from modern attempts to flatten nature to the limp subject of the experimental method. In Praise of Natural Philosophy: A Revolution for Thought and Life (2017), Nicholas Maxwell argues that we need to reform philosophy and put science and philosophy back together again to create a modern version of natural philosophy.
See also
References
Further reading
E.A. Burtt, Metaphysical Foundations of Modern Science (Garden City, NY: Doubleday and Company, 1954).
Philip Kitcher, Science, Truth, and Democracy. Oxford Studies in Philosophy of Science. Oxford; New York: Oxford University Press, 2001. LCCN:2001036144
Bertrand Russell, A History of Western Philosophy and Its Connection with Political and Social Circumstances from the Earliest Times to the Present Day (1945) Simon & Schuster, 1972.
David Snoke, Natural Philosophy: A Survey of Physics and Western Thought. Access Research Network, 2003.
Nancy R. Pearcey and Charles B. Thaxton, The Soul of Science: Christian Faith and Natural Philosophy (Crossway Books, 1994).
Alfred N. Whitehead, Process and Reality, The Macmillan Company, 1929.
René Thom, Modèles mathématiques de la morphogenèse, Christian Bourgois, 1980.
Claude Paul Bruter, Topologie et perception, Maloine, 3 vols. 1974/1976/1986.
Jean Largeault, Principes classiques d'interprétation de la nature, Vrin, 1988.
Moritz Schlick, Philosophy of Nature, Philosophical Library, New York, 1949.
Andrew G. Van Melsen, The Philosophy of Nature, Duquesne University, Pittsburgh 1954.
Miguel Espinoza, A Theory of Intelligibility. A Contribution to the Revival of the Philosophy of Nature, Thombooks Press, Toronto, ON, 2020.
Miguel Espinoza, La matière éternelle et ses harmonies éphémères, L'Harmattan, Paris, 2017.
External links
"Aristotle's Natural Philosophy", Stanford Encyclopedia of Philosophy
Institute for the Study of Nature
"A Bigger Physics," a talk at MIT by Michael Augros
Other articles
History of philosophy
History of science | Natural philosophy | Technology | 4,079 |
63,576,142 | https://en.wikipedia.org/wiki/Cryptic%20self-incompatibility | Cryptic self-incompatibility (CSI) is the botanical term used to describe a weakened self-incompatibility (SI) system. CSI is one expression of a mixed mating system in flowering plants. Both SI and CSI are traits that increase the frequency of fertilization of ovules by outcross pollen, as opposed to self-pollen.
Background
Although increased outcrossing is a result common to all SI systems, CSI should not be mistaken for any other form of true SI, such as common gametophytic SI or sporophytic SI. Robert Bowman outlined the distinction when he posited that cryptic SI allows for full seed set via self-pollination when outcross pollen is limited or absent. CSI has been observed to be a significant benefit to flowering plants, as it allows plants to avoid inbreeding depression in their offspring when outcross pollen is available. Because this breeding method allows for full seed set, it is thought of as another form of reproductive assurance. The contemporary understanding of this breeding system, which involves self-pollen discrimination, follows the "best-of-both-worlds" hypothesis that was described by Bowman in 1987 and later refined and named by Becerra and Lloyd in 1992.
CSI was first described by A.J. Bateman in 1956 as a weak incompatibility system that results in a significantly higher proportion of seeds set by outcross pollen within an individual as opposed to self-pollen, when both types are present on the stigma in equal amounts.
Since the first documented observation of CSI, our understanding of how these systems work has undergone several refinements as more studies have been conducted. There are multiple known mechanisms through which CSI acts, but it is commonly defined as a form of parental selection that occurs post-pollination; not all mechanisms through which CSI acts have yet been described.
Mechanisms
Pollen competition
This form of CSI is achieved through differential pollen tube growth. It has been observed that, on average, pollen tubes from pollen that is genetically similar to the maternal plant grow more slowly than pollen tubes from unrelated (outcross) pollen. CSI occurs by stylar discrimination, such that outcross pollen tubes are favored over self pollen tubes on the basis of differential pollen-tube growth, resulting in increased outcrossing frequency as pollen load size increases.
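As a rough illustration of how differential tube growth alone can raise outcrossing as the pollen load grows, the sketch below (Python) simulates a race between self and outcross pollen tubes for a fixed number of ovules; the growth rates, their spread, the ovule count, and the load sizes are illustrative assumptions, not measured values.

import random

def outcross_fraction(n_self, n_outcross, n_ovules=20,
                      self_rate=0.8, outcross_rate=1.0, trials=2000):
    """Fraction of ovules fertilised by outcross pollen when self tubes grow
    more slowly on average (all parameter values are purely illustrative)."""
    total_outcross = 0
    for _ in range(trials):
        # Each grain gets a noisy tube growth rate; the fastest tubes reach the ovules first.
        tubes = ([("self", random.gauss(self_rate, 0.1)) for _ in range(n_self)] +
                 [("out", random.gauss(outcross_rate, 0.1)) for _ in range(n_outcross)])
        tubes.sort(key=lambda t: t[1], reverse=True)
        winners = tubes[:n_ovules]  # only the first n_ovules tubes to arrive fertilise
        total_outcross += sum(1 for kind, _ in winners if kind == "out")
    return total_outcross / (trials * n_ovules)

# Equal self/outcross mixtures at increasing pollen load: with small loads every
# grain sires a seed, but as the load grows the faster outcross tubes dominate
# the winners, so the outcross fraction rises.
for load in (10, 20, 40, 80):
    print(load, round(outcross_fraction(load, load), 2))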
Pollen tube attrition
Pollen tube attrition is the failure of a pollen tube that is caused by inhibiting tube growth before fertilization can occur. This phenomenon is another way through which CSI can act. This is accomplished by failing a higher proportion of self-pollen tubes, which ends up favoring fertilization by outcross pollen. This type of stylar inhibition in flowering plants that are normally self-compatible is known to result in mixed mating systems.
References
Plant reproduction | Cryptic self-incompatibility | Biology | 582 |
23,276,709 | https://en.wikipedia.org/wiki/Reinforced%20rubber | Reinforced rubber products are one of the largest groups of composite materials, though rarely referred to as composite materials. Familiar examples are automobile tyres, hoses, and conveyor belts.
Composite reinforced structure
Reinforced rubber products combine a rubber matrix and a reinforcing material so that high strength to flexibility ratios can be achieved. The reinforcing material, usually a kind of fibre, provides the strength and stiffness. The rubber matrix, with low strength and stiffness, provides air-fluid tightness and supports the reinforcing materials to maintain their relative positions. These positions are of great importance because they influence the resulting mechanical properties.
A composite structure in which all fibres are loaded equally everywhere when pressurized is called an isotropic structure, and the type of loading is named an isotensoidal loading. To meet the isotensoidal concept the structure geometry must have an isotensoid meridian profile and the fibres must be positioned following geodesic paths. A geodesic path connects two arbitrary points on a continuous surface by means of the shortest possible way.
Straight rubber hoses
To achieve optimal loading in a straight rubber hose, the fibres must be positioned at an angle of approximately 54.7 degrees, also referred to as the magic angle. The magic angle of 54.7° exactly balances the internal-pressure-induced longitudinal stress and the hoop (circumferential) stress, as observed in most biological pressurized fibre-wound cylinders, such as arteries. If the fibre angle is initially above or below 54.7°, it will change under increased internal pressure until it reaches the magic angle, where hoop stresses and longitudinal stresses equalize, with concomitant accommodations in hose diameter and hose length. A hose with an initially low fibre angle will rise under pressure to 54.7°, inducing a hose diameter increase and a length decrease, whereas a hose with an initially high fibre angle will drop to 54.7°, inducing a hose diameter decrease and a length increase. The equilibrium state is a fibre angle of 54.7°. In this situation, the fibres tend to be loaded purely in tension, so roughly 100% of their strength resists the forces acting on the hose due to the internal pressure. (The magic angle of 54.7° for cylindrical shapes is based on calculations in which the influence of the matrix material is neglected. Therefore, depending on the stiffness of the rubber material used, the actual equilibrium angle can vary a few tenths of a degree from the magic angle.)
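A minimal netting-analysis sketch of where the 54.7° figure comes from, under the same idealisation as the parenthetical above (thin-walled cylinder, matrix stiffness neglected): for internal pressure p, wall radius r and wall thickness t,

\sigma_{\text{hoop}} = \frac{pr}{t}, \qquad \sigma_{\text{long}} = \frac{pr}{2t},

so the hoop load is twice the longitudinal load. A helical fibre at angle \theta to the hose axis carries the two directions in the ratio \sin^2\theta : \cos^2\theta, so pure fibre tension requires

\tan^2\theta = \frac{\sigma_{\text{hoop}}}{\sigma_{\text{long}}} = 2 \quad\Rightarrow\quad \theta = \arctan\sqrt{2} \approx 54.74^\circ.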
When the fibres of the reinforcement structure are placed under angles larger than 54.7 angular degrees, the fibres want to relocate to their optimal path when pressurized. This means that the fibres will re-orient themselves until they have reached their force equilibrium. In this case this will lead to an increase in length and a decrease in diameter. With angles smaller than 54.7 degrees the opposite will occur. A product which makes use of this principle is a pneumatic muscle.
Reinforcement of complex shaped rubber products
For a cylinder with a constant diameter, the reinforcement angle is constant as well and is 54.7º. This is also known as the magic angle or neutral angle. The neutral angle is the angle at which a wound structure is in equilibrium. For a cylinder, this is 54.7º, but for a more complex shape like a bellows, which has a varying radius over the length of the product, this neutral angle is different for each radius. In other words, for complex shapes there is not one magic angle; rather, the fibres follow a geodesic path with angles varying with the change in radius. To obtain a reinforcement structure with isotensoidal loading, the geometry of the complex shape must follow an isotensoid meridian profile.
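As a brief supporting note (standard differential geometry, applied here as an illustration): on a surface of revolution a geodesic obeys Clairaut's relation,

r \sin\psi = \text{constant},

where r is the local radius and \psi is the angle between the fibre path and the meridian (for a straight hose, the axial direction). Since the radius of a bellows changes along its length, the winding angle must change with it if the fibres are to remain on geodesic paths.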
Reinforcement application technology
The fabric reinforcement can be applied to the rubber products with different processes. For straight hoses, the most used processes are braiding, spiralling, knitting, and wrapping. The first three processes have in common that multiple strands of fibres are applied to the product simultaneously on a predetermined pattern in an automated process. The fourth process comprises manual or semi-automated wrapping of rubber sheets reinforced with fabric plies. For the reinforcement of complex shaped rubber products like bellows, most manufacturers use these fabric reinforced rubber sheets. These sheets are made by calendering rubber onto pre-woven fabric plies. The products are manufactured by wrapping (mostly manually) these sheets around a mandrel until enough rubber and reinforcement is applied. However, the disadvantage of using these sheets is that it is impossible to control the positioning of the individual fibres of the fabric when applied on complex shapes. Therefore, no geodesic paths can be achieved and no isotensoid loading is possible. To obtain isotensoid loading on a complex shape, the shape must have an isotensoidal profile and geodesic positioning of the fibre structure is required. This can be achieved by using automated winding processes like filament winding or spiralling.
References
Composite materials | Reinforced rubber | Physics | 1,020 |
18,333,649 | https://en.wikipedia.org/wiki/180-line%20television%20system | 180-line is an early electronic television system. It was used in Germany after March 22, 1935, using telecine transmission of film, intermediate film system, or cameras using the Nipkow disk. Simultaneously, fully electronic transmissions using cameras based on the iconoscope began on January 15, 1936, with definition of 375 lines.
The Berlin Summer Olympic Games were televised, using both closed-circuit 375-line fully electronic iconoscope-based cameras and 180-line intermediate film cameras transmitting to Berlin, Hamburg, Munich, Nuremberg, and Bayreuth via special Reichspost long-distance cables in August 1936. In Berlin, twenty-eight public 180-line television rooms were opened for anybody who did not own a television set.
Some TV sets for this system were available, including the French Grammont models, the Telefunken FE II and FE III, and the Fernseh Tischmodell.
After February 1937, both 180- and 375-line systems were replaced by a superior 441-line system.
References
External links
http://www.compulink.co.uk/~rrussell/tccgen/manual/tcgen0.html Colour Test Card Generator - Introduction and specification
Television technology | 180-line television system | Technology | 251 |
23,981,454 | https://en.wikipedia.org/wiki/C25H32N2O2 | The molecular formula C25H32N2O2 (molar mass: 392.53 g/mol, exact mass: 392.2464 u) may refer to:
Dextromoramide
Levomoramide
Racemoramide (or simply moramide) | C25H32N2O2 | Chemistry | 74 |
65,330,455 | https://en.wikipedia.org/wiki/Flood%20and%20Water%20Management%20Act%202010 | The Flood and Water Management Act 2010 (c.29) is a UK Act of Parliament relating to the management of the risk concerning flooding and coastal erosion. The Act aims to reduce the flood risk associated with extreme weather, compounded by climate change. It created the role of Lead Local Flood Authority, which is the local government authority responsible for managing flood risk in the local government area. The Act gave new powers to local authorities, the Environment Agency, The Welsh Ministers and water companies.
The Act relates almost entirely to England and Wales, with the exception of Section 46 'Abolition of Fisheries Committee (Scotland)', which relates to Scotland, and Sections 48 'Subordinate legislation' and 49 'Technical provision' which relate to England, Wales and Scotland. Parts of the Act apply differently in England and Wales. Schedule 3 of the Act has been implemented in Wales, but not in England.
The Flood and Water Management Act was preceded by The Pitt Review of 2007. Led by Sir Michael Pitt, the Pitt Review was a high-profile independent review of the lessons to be learned from the floods of 2007. The report put forward a number of recommendations, including the need for a "wider brief for the Environment Agency" and for local councils to be given powers and responsibilities to "protect communities through robust building and planning controls". The implementation of the Flood and Water Management Act was one of a number of actions taken by parliament as a result of the Pitt Review.
Roles and Responsibilities Set Out in the Act
Lead Local Flood Authority (LLFA)
The role of Lead Local Flood Authority (LLFA) in England is given to the unitary authority, or, if there is no unitary authority, the county council for the area. In Wales, the role is fulfilled by the county council or the county borough council. The LLFA is given certain responsibilities by the Act.
The LLFA is required, by Sections 9 and 10 of the Act, to create and maintain a local flood risk management strategy to set objectives to manage flooding locally, specify measures proposed to achieve the objectives, outline how and when the measures will be implemented, and list the costs and benefits of the measures and how the measures will be paid for. The LLFA must apply the local flood risk management strategy and monitor its effectiveness and progress.
The LLFA must establish and maintain a register of flood risk assets, including information on their ownership and state of repair, which should be made available to the public. In March 2019, 109 out of 152 LLFAs had compiled up-to-date asset registers. The LLFA is a statutory consultee on applications for planning permission in England and Wales. This means they must be consulted by the local planning authority and have the opportunity to object to the planning application, recommend refusal, or recommend a condition to be attached to the planning permission, if the LLFA deems that flood risk and drainage have not been appropriately addressed in the planning application. In Wales, however, this role is in effect exercised by the SuDS Approving Body (SAB).
Section 19 requires the LLFA to investigate flooding events in their area and publish a report.
Environment Agency
The Environment Agency is required by Section 7 of the Act to develop a national flood and coastal erosion risk management strategy (FCERM) for England. The strategy describes the roles of all flood risk management authorities, including LLFAs, councils, internal drainage boards, highway authorities and water and sewerage companies, who must all exercise their responsibilities consistently with the strategy. The EA is required by Section 18 of the Act to produce an annual report on flood and coastal erosion risk management. The report describes how flood risk management authorities are managing the current risk of flood and coastal erosion and how they are planning for the future risk. The report details how LLFAs have progressed on their local strategies and asset registers.
In 2020, the FCERM strategy was updated to set out three long-term ambitions:
Climate resilient places
Today’s growth and infrastructure resilient in tomorrow’s climate
A nation ready to respond and adapt to flooding and coastal change
Welsh Ministers
The Welsh Ministers have the equivalent role of the Environment Agency with regard to developing a strategy for flood and coastal erosion risk management in Wales.
Schedule 3 of the Act was implemented in Wales on 7 January 2019. It gives the Welsh Ministers the responsibility to publish national standards for the implementation of sustainable drainage, or SUDS. Sustainable drainage systems aim to manage the runoff of surface water from development projects, treating it as near to its source as possible. The decision to implement Schedule 3 of the Act in Wales was driven by the Well-being of Future Generations Act.
SUDS Approving Body (SAB)
The role of SUDS Approving Bodies (SAB) is created by Schedule 3 of the Act, which has so far only been implemented in Wales. The role falls to the county council or county borough council, as for the role of LLFA. Under Schedule 3 of the Act, construction work which has drainage implications may not be commenced unless a drainage system for the work has been approved by the SAB. In Wales, this applies to all construction projects with a total area exceeding 100 sq m, or of more than 1 dwelling, unless they can be proven not to have drainage implications.
Water Companies
Sections 35 and 36 of the Act provided amendments to the Water Industry Act 1991. The amendments allowed water companies to impose a temporary ban on using potable water for a number of purposes, including using hosepipes for watering gardens and cleaning outdoor spaces.
References
United Kingdom Acts of Parliament 2010
Flood control in the United Kingdom
Water management
Politics of the United Kingdom
Water | Flood and Water Management Act 2010 | Environmental_science | 1,131 |
63,934,082 | https://en.wikipedia.org/wiki/Covid-Organics | Covid-Organics (CVO) is an Artemisia-based drink that Andry Rajoelina, president of Madagascar, claims can prevent and cure Coronavirus disease 2019 (COVID-19). The drink is produced from a species of the genus Artemisia from which artemisinin is extracted for malaria treatment. No publicly available clinical trial data supports the safety or efficacy of this drink.
Covid-Organics was developed and produced in Madagascar by the Malagasy Institute of Applied Research. Madagascar was the first country to decide to integrate Artemisia into COVID-19 treatment when the NGO Maison de l'Artemisia France contacted numerous African countries during the COVID-19 pandemic. At least one researcher from another part of Africa, Dr. Jérôme Munyangi of the Democratic Republic of the Congo, contributed. Some of the research on Artemisia, led by African scientists, had been carried out in France and Canada. On 20 April 2020, Rajoelina announced in a television broadcast that his country had found a "preventive and curative" remedy for COVID-19. Rajoelina publicly sipped from a bottle of Covid-Organics and ordered a nationwide distribution to families.
As of 2022, Covid-Organics is not recommended by the WHO.
World Health Organization
On 20 May 2020, Rajoelina announced on his Twitter account that the World Health Organization (WHO) would sign a confidentiality agreement with Madagascar regarding the formulation of CVO in order to perform clinical observation. On 21 May 2020, WHO director general Tedros Adhanom confirmed his video conference with Rajoelina, and that the WHO would cooperate with Madagascar on research and development of COVID-19 therapy. The WHO does not recommend the use of non-pharmaceutical Artemisia plant matter. The official position of WHO is that it "supports scientifically-proven traditional medicine" and "recognizes that traditional, complementary and alternative medicine has many advantages".
Controversy
A wide range of scientific criticism, from within and outside Africa, followed the launch of Covid-Organics. Before cooperating with Madagascar, the World Health Organization (WHO) issued a warning against use of an untested COVID-19 remedy and said Africans deserved medicine that had gone through proper scientific trials. At the time, Covid-Organics' efficacy and safety had been tested on fewer than 20 people within a period of three weeks. In order to meet established scientific standards, the two parties later agreed on a partnership for Covid-Organics to be registered for WHO's Solidarity trials, an international program for fast-tracking clinical trials of COVID-19 treatment candidates. The African Union (AU) demanded detailed scientific data on Covid-Organics for analysis by Africa CDC after it had been briefed by Madagascar authorities about the herbal remedy. The Africa Centres for Disease Control and Prevention expressed its interest in data on Covid-Organics for the purpose of quickly scaling up an effective and safe remedy. In April, the Economic Community of West African States (ECOWAS) denied ordering a package of CVO after media reports that it had done so, and said the West Africa Health Organization (WAHO) would only endorse products shown to be effective and safe for use through well-known scientific procedure. As concerns about the safety of CVO grew, South Africa offered to help Madagascar conduct a clinical trial on the herbal tonic.
There are concerns over widespread usage of Artemisia accelerating drug resistance toward ACTs for malaria treatment.
Patronage
More than 19 African and Caribbean countries had taken delivery of CVO as of May 2020 to combat COVID-19. On 20 May, the Ghanaian government finally placed an order for CVO for testing after weeks of pressure from Ghanaians that the herbal remedy be used to halt the spread of the coronavirus. At the end of April, Equatorial Guinea, among the first to express support for the remedy, sent a special envoy to Madagascar for a donated shipment of CVO. Madagascar sent quantities of the product to at least 10 African countries in 2020.
Covid-organics Plus
On 2 October 2020, President Andry Rajoelina inaugurated a pharmaceutical factory named "Pharmalagasy" and officially started production of CVO pills named "CVO-plus".
On 5 July 2021, WHO issued a statement announcing the completion of phase 3 clinical trials of the CVO+ dry capsule at the National Center for the Application of Pharmaceutical Research (CNARP) of Madagascar, indicating that the results will be reviewed by the Regional Expert Advisory Committee formed in partnership with Africa CDC. The committee will advise the manufacturer on the next steps to take.
See also
List of unproven methods against COVID-19
References
External links
WHO – Global Malaria Programme – The use of non-pharmaceutical forms of Artemisia
WHO statement regarding use of traditional medicine against Covid-19
COVID-19 drug development
COVID-19 pandemic in Africa
COVID-19 misinformation
Healthcare in Madagascar
Alternative medicine | Covid-Organics | Chemistry | 1,008 |
474,163 | https://en.wikipedia.org/wiki/Intel%20iAPX%20432 | The iAPX 432 (Intel Advanced Performance Architecture) is a discontinued computer architecture introduced in 1981. It was Intel's first 32-bit processor design. The main processor of the architecture, the general data processor, is implemented as a set of two separate integrated circuits, due to technical limitations at the time. Although some early 8086, 80186 and 80286-based systems and manuals also used the iAPX prefix for marketing reasons, the iAPX 432 and the 8086 processor lines are completely separate designs with completely different instruction sets.
The project started in 1975 as the 8800 (after the 8008 and the 8080) and was intended to be Intel's major design for the 1980s. Unlike the 8086, which was designed the following year as a successor to the 8080, the iAPX 432 was a radical departure from Intel's previous designs meant for a different market niche, and completely unrelated to the 8080 or x86 product lines.
The iAPX 432 project is considered a commercial failure for Intel, and was discontinued in 1986.
Description
The iAPX 432 was referred to as a "micromainframe", designed to be programmed entirely in high-level languages. The instruction set architecture was also entirely new and a significant departure from Intel's previous 8008 and 8080 processors as the iAPX 432 programming model is a stack machine with no visible general-purpose registers. It supports object-oriented programming, garbage collection and multitasking as well as more conventional memory management directly in hardware and microcode. Direct support for various data structures is also intended to allow modern operating systems to be implemented using far less program code than for ordinary processors. Intel iMAX 432 is a discontinued operating system for the 432, written entirely in Ada, and Ada was also the intended primary language for application programming. In some aspects, it may be seen as a high-level language computer architecture.
These properties and features resulted in a hardware and microcode design that was more complex than most processors of the era, especially microprocessors. However, internal and external buses are (mostly) not wider than 16 bits, and, just like in other 32-bit microprocessors of the era (such as the 68000 or the 32016), 32-bit arithmetical instructions are implemented by a 16-bit ALU, via random logic and microcode or other kinds of sequential logic. The iAPX 432's enlarged address space over the 8080 was also limited by the fact that linear addressing of data could still only use 16-bit offsets, somewhat akin to Intel's first 8086-based designs, including the contemporary 80286 (the new 32-bit segment offsets of the 80386 architecture were described publicly in detail in 1984).
Using the semiconductor technology of its day, Intel's engineers weren't able to translate the design into a very efficient first implementation. Along with the lack of optimization in a premature Ada compiler, this contributed to rather slow but expensive computer systems, performing typical benchmarks at roughly 1/4 the speed of the new 80286 chip at the same clock frequency (in early 1982). This initial performance gap to the rather low-profile and low-priced 8086 line was probably the main reason why Intel's plan to replace the latter (later known as x86) with the iAPX 432 failed. Although engineers saw ways to improve a next generation design, the iAPX 432 capability architecture had now started to be regarded more as an implementation overhead rather than as the simplifying support it was intended to be.
Originally designed for clock frequencies of up to 10 MHz, actual devices sold were specified for maximum clock speeds of 4 MHz, 5 MHz, 7 MHz and 8 MHz with a peak performance of 2 million instructions per second at 8 MHz.
History
Development
Intel's 432 project started in 1975, a year after the 8-bit Intel 8080 was completed and a year before their 16-bit 8086 project began. The 432 project was initially named the 8800, as their next step beyond the existing Intel 8008 and 8080 microprocessors. This became a very big step. The instruction sets of these 8-bit processors were not very well fitted for typical Algol-like compiled languages. However, the major problem was their small native addressing ranges, just 16 KB for 8008 and 64 KB for 8080, far too small for many complex software systems without using some kind of bank switching, memory segmentation, or similar mechanism (which was built into the 8086, a few years later on). Intel now aimed to build a sophisticated complete system in a few LSI chips that was functionally equal to or better than the best 32-bit minicomputers and mainframes requiring entire cabinets of older chips. This system would support multiprocessors, modular expansion, fault tolerance, advanced operating systems, advanced programming languages, very large applications, ultra reliability, and ultra security. Its architecture would address the needs of Intel's customers for a decade.
The iAPX 432 development team was managed by Bill Lattin, with Justin Rattner (who would later become CTO of Intel) as the lead engineer (although one source states that Fred Pollack was the lead engineer). Initially the team worked from Santa Clara, but in March 1977 Lattin and his team of 17 engineers moved to Intel's new site in Portland. Pollack later specialized in superscalarity and became the lead architect of the i686 chip Intel Pentium Pro.
It soon became clear that it would take several years and many engineers to design all this. And it would similarly take several years of further progress in Moore's Law, before improved chip manufacturing could fit all this into a few dense chips. Meanwhile, Intel urgently needed a simpler interim product to meet the immediate competition from Motorola, Zilog, and National Semiconductor. So Intel began a rushed project to design the 8086 as a low-risk incremental evolution from the 8080, using a separate design team. The mass-market 8086 shipped in 1978.
The 8086 was designed to be backward-compatible with the 8080 in the sense that 8080 assembly language could be mapped on to the 8086 architecture using a special assembler. Existing 8080 assembly source code (albeit no executable code) was thereby made upward compatible with the new 8086 to a degree. In contrast, the 432 had no software compatibility or migration requirements. The architects had total freedom to do a novel design from scratch, using whatever techniques they guessed would be best for large-scale systems and software. They applied fashionable computer science concepts from universities, particularly capability machines, object-oriented programming, high-level CISC machines, Ada, and densely encoded instructions. This ambitious mix of novel features made the chip larger and more complex. The chip's complexity limited the clock speed and lengthened the design schedule.
The core of the design — the main processor — was termed the General Data Processor (GDP) and built as two integrated circuits: one (the 43201) to fetch and decode instructions, the other (the 43202) to execute them. Most systems would also include the 43203 Interface Processor (IP) which operated as a channel controller for I/O, and an Attached Processor (AP), a conventional Intel 8086 which provided "processing power in the I/O subsystem".
These were some of the largest designs of the era. The two-chip GDP had a combined count of approximately 97,000 transistors while the single chip IP had approximately 49,000. By comparison, the Motorola 68000 (introduced in 1979) had approximately 40,000 transistors.
In 1983, Intel released two additional integrated circuits for the iAPX 432 Interconnect Architecture: the 43204 Bus Interface Unit (BIU) and 43205 Memory Control Unit (MCU). These chips allowed for nearly glueless multiprocessor systems with up to 63 nodes.
The project's failures
Some of the innovative features of the iAPX 432 were detrimental to good performance. In many cases, the iAPX 432 had a significantly slower instruction throughput than conventional microprocessors of the era, such as the National Semiconductor 32016, Motorola 68010 and Intel 80286. One problem was that the two-chip implementation of the GDP limited it to the speed of the motherboard's electrical wiring. A larger issue was that the capability architecture needed large associative caches to run efficiently, but the chips had no room left for them. The instruction set also used bit-aligned variable-length instructions instead of the usual semi-fixed byte or word-aligned formats used in the majority of computer designs. Instruction decoding was therefore more complex than in other designs. Although this did not hamper performance in itself, it used additional transistors (mainly for a large barrel shifter) in a design that was already lacking space and transistors for caches, wider buses and other performance-oriented features. In addition, the BIU was designed to support fault-tolerant systems, and in doing so up to 40% of the bus time was held up in wait states.
Another major problem was its immature and untuned Ada compiler. It used high-cost object-oriented instructions in every case, instead of the faster scalar instructions where it would have made sense to do so. For instance the iAPX 432 included a very expensive inter-module procedure call instruction, which the compiler used for all calls, despite the existence of much faster branch and link instructions. Another very slow call was enter_environment, which set up the memory protection. The compiler ran this for every single variable in the system, even when variables were used inside an existing environment and did not have to be checked. To make matters worse, data passed to and from procedures was always passed by value-return rather than by reference. When running the Dhrystone benchmark, parameter passing took ten times longer than all other computations combined.
According to the New York Times, "the i432 ran 5 to 10 times more slowly than its competitor, the Motorola 68000".
Impact and similar designs
The iAPX 432 was one of the first systems to implement the new IEEE-754 Standard for Floating-Point Arithmetic.
An outcome of the failure of the 432 was that microprocessor designers concluded that object support in the chip leads to a complex design that will invariably run slowly, and the 432 was often cited as a counter-example by proponents of RISC designs. However, some hold that the OO support was not the primary problem with the 432, and that the implementation shortcomings (especially in the compiler) mentioned above would have made any CPU design slow. Since the iAPX 432 there has been only one other attempt at a similar design, the Rekursiv processor, although the INMOS Transputer's process support was similar — and very fast.
Intel had spent considerable time, money, and mindshare on the 432, had a skilled team devoted to it, and was unwilling to abandon it entirely after its failure in the marketplace. A new architect—Glenford Myers—was brought in to produce an entirely new architecture and implementation for the core processor, which would be built in a joint Intel/Siemens project (later BiiN), resulting in the i960-series processors. The i960 RISC subset became popular for a time in the embedded processor market, but the high-end 960MC and the tagged-memory 960MX were marketed only for military applications.
According to the New York Times, Intel's collaboration with HP on the Merced processor (later known as Itanium) was the company's comeback attempt for the very high-end market.
Architecture
The iAPX 432 instructions have variable length, between 6 and 321 bits. Unusually, they are not byte-aligned, that is, they may contain odd numbers of bits and directly follow each other without regard to byte boundaries.
Object-oriented memory and capabilities
The iAPX 432 has hardware and microcode support for object-oriented programming and capability-based addressing. The system uses segmented memory, with up to 2^24 segments of up to 64 KB each, providing a total virtual address space of 2^40 bytes. The physical address space is 2^24 bytes (16 MB).
Programs are not able to reference data or instructions by address; instead they must specify a segment and an offset within the segment. Segments are referenced by access descriptors (ADs), which provide an index into the system object table and a set of rights (capabilities) governing accesses to that segment. Segments may be "access segments", which can only contain Access Descriptors, or "data segments" which cannot contain ADs. The hardware and microcode rigidly enforce the distinction between data and access segments, and will not allow software to treat data as access descriptors, or vice versa.
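As a conceptual illustration of this style of addressing (not the actual iAPX 432 descriptor encoding; the names, fields and rights below are invented for the sketch), every memory reference goes through an access descriptor whose rights and segment type are checked before the offset is applied:

class Segment:
    def __init__(self, kind, size):
        assert kind in ("data", "access")
        self.kind = kind                  # data segment or access segment
        self.words = [0] * size

class AccessDescriptor:
    def __init__(self, index, rights):
        self.index = index                # index into the system object table
        self.rights = set(rights)         # capabilities, e.g. {"read", "write"}

object_table = []                         # stands in for the system object table

def read_data(ad, offset):
    """Read from a data segment through an access descriptor."""
    seg = object_table[ad.index]
    if seg.kind != "data":
        raise TypeError("descriptor does not name a data segment")
    if "read" not in ad.rights:
        raise PermissionError("descriptor lacks read rights")
    return seg.words[offset]              # the offset is only meaningful within that segment

# A program can only touch memory for which it holds a suitable descriptor.
object_table.append(Segment("data", 16))
ad = AccessDescriptor(index=0, rights={"read"})
print(read_data(ad, 3))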
System-defined objects consist of either a single access segment, or an access segment and a data segment. System-defined segments contain data or access descriptors for system-defined data at designated offsets, though the operating system or user software may extend these with additional data. Each system object has a type field which is checked by microcode, such that a Port Object cannot be used where a Carrier Object is needed. User programs can define new object types which will get the full benefit of the hardware type checking, through the use of type control objects (TCOs).
In Release 1 of the iAPX 432 architecture, a system-defined object typically consisted of an access segment, and optionally (depending on the object type) a data segment specified by an access descriptor at a fixed offset within the access segment.
By Release 3 of the architecture, in order to improve performance, access segments and data segments were combined into single segments of up to 128 KB, split into an access part and a data part of 0–64 KB each. This reduced the number of object table lookups dramatically, and doubled the maximum virtual address space.
The iAPX432 recognizes fourteen types of predefined system objects:
instruction object contains executable instructions
domain object represents a program module and contains references to subroutines and data
context object represents the context of a process in execution
type-definition object represents a software-defined object type
type-control object represents type-specific privilege
object table identifies the system's collection of active object descriptors
storage resource object represents a free storage pool
physical storage object identifies free storage blocks in memory
storage claim object limits storage that may be allocated by all associated storage resource objects
process object identifies a running process
port object represents a port and message queue for interprocess communication
carrier object carries messages to and from ports
processor contains state information for one processor in the system
processor communication object is used for interprocessor communication
Garbage collection
Software running on the 432 does not need to explicitly deallocate objects that are no longer needed. Instead, the microcode implements part of the marking portion of Edsger Dijkstra's on-the-fly parallel garbage collection algorithm (a mark-and-sweep style collector). The entries in the system object table contain the bits used to mark each object as being white, black, or grey as needed by the collector. The iMAX 432 operating system includes the software portion of the garbage collector.
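A minimal sketch of the tri-colour marking idea behind that collector (shown here as a simple sequential mark phase in Python; Dijkstra's on-the-fly variant additionally needs a mutator write barrier, and on the 432 the colour bits lived in the object-table entries while iMAX 432 supplied the software half):

def mark_garbage(objects, roots, refs):
    """objects: set of object ids; roots: directly reachable ids;
    refs[o]: ids referenced by object o.  Returns the unreachable (white) set."""
    colour = {o: "white" for o in objects}    # white = not yet visited
    grey = []
    for r in roots:                           # roots start out grey
        colour[r] = "grey"
        grey.append(r)
    while grey:                               # scan greys, shading their children
        o = grey.pop()
        for child in refs.get(o, ()):
            if colour[child] == "white":
                colour[child] = "grey"
                grey.append(child)
        colour[o] = "black"                   # black = scanned and reachable
    return {o for o in objects if colour[o] == "white"}

# 'a' is a root and references 'b'; nothing references 'c', so 'c' is garbage.
print(mark_garbage({"a", "b", "c"}, {"a"}, {"a": ["b"]}))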
Instruction format
Executable instructions are contained within a system "instruction object". Due to instructions being bit-aligned, a 16-bit bit displacement into the instruction object allows the object to contain up to 65,536 bits (8,192 bytes) of instructions.
Instructions consist of an operator, consisting of a class and an opcode, and zero to three operand references. "The fields are organized to present information to the processor in the sequence required for decoding". More frequently used operators are encoded using fewer bits. The instruction begins with the 4 or 6 bit class field which indicates the number of operands, called the order of the instruction, and the length of each operand. This is optionally followed by a 0 to 4 bit format field which describes the operands (if there are no operands the format is not present). Then come zero to three operands, as described by the format. The instruction is terminated by the 0 to 5 bit opcode, if any (some classes contain only one instruction and therefore have no opcode). "The Format field permits the GDP to appear to the programmer as a zero-, one-, two-, or three-address architecture." The format field indicates that an operand is a data reference, or the top or next-to-top element of the operand stack.
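To illustrate what bit-aligned, variable-length encoding implies for decoding (and why a barrel shifter is useful), the sketch below extracts fields of arbitrary width from arbitrary bit offsets that ignore byte boundaries. The LSB-first bit order and the particular 6/4/5-bit field widths are illustrative assumptions, not the documented iAPX 432 encoding, and operand references are omitted:

def read_bits(stream, bit_pos, width):
    """Extract `width` bits starting at absolute bit offset `bit_pos`
    (LSB-first within each byte, an assumed order); return (value, new_bit_pos)."""
    value = 0
    for i in range(width):
        byte_index, bit_index = divmod(bit_pos + i, 8)
        bit = (stream[byte_index] >> bit_index) & 1
        value |= bit << i
    return value, bit_pos + width

# Two back-to-back "instructions" with an assumed 6-bit class, 4-bit format
# and 5-bit opcode; the second instruction starts in the middle of a byte.
code = bytes([0b10110101, 0b01101100, 0b11110000, 0b00001111])
pos = 0
for _ in range(2):
    cls, pos = read_bits(code, pos, 6)
    fmt, pos = read_bits(code, pos, 4)
    opcode, pos = read_bits(code, pos, 5)
    print(f"class={cls} format={fmt} opcode={opcode} next_bit={pos}")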
See also
iAPX, for the iAPX name
Notes
References
External links
IAPX 432 manuals at Bitsavers.org
Computer History Museum
Intel iAPX432 Micromainframe contains a list of all the Intel documentation associated with the iAPX 432, a list of hardware part numbers and a list of more than 30 papers.
Capability systems
High-level language computer architecture
Intel microprocessors | Intel iAPX 432 | Technology | 3,569 |
1,608,716 | https://en.wikipedia.org/wiki/Mass%20General%20Brigham | Mass General Brigham (MGB) (formerly Partners HealthCare) is a not-for-profit, integrated health care system that engages in medical research, teaching, and patient care. It is the largest hospital-based research enterprise in the United States, with annual funding of more than $2 billion. The system's annual revenue was nearly $18 billion in 2022. It is also an educational institution, founded by Brigham and Women's Hospital and Massachusetts General Hospital. The system provides clinical care through two academic hospitals, three specialty hospitals, seven community hospitals, home care services, a health insurance plan, and a robust network of specialty practices, urgent care facilities, and outpatient clinics/surgical centers. It is the largest private employer in Massachusetts. In 2023, the system reported that from 2017–2021 its overall economic impact was $53.4 billion – more than the annual state budget.
History
Mass General Brigham was founded by the academic medical centers (AMCs) which give it its name: Massachusetts General Hospital (colloquially referred to as "Mass General") and Brigham and Women's Hospital ("the Brigham"). Both hospitals were founded in the early 1800s, are based in Boston, and serve as major teaching hospitals of Harvard Medical School.
In 1994, fueled by economic and political pressure to cut costs on patient care and health care education, the two hospitals merged to create a new parent corporation: Partners HealthCare. The two entities continued to operate largely independently, and remained competitors in multiple areas, until 2019.
In 2015, Partners launched an electronic health record (EHR) system, allowing doctors, nurses, and other caregivers easier access to patients' medical history. The effort computerized millions of health records across the system, creating one record for each Partners patient and allowing information to be more easily shared among caregivers.
In 2016, the system moved into its current headquarters, located in Somerville's Assembly Row. The building allowed Mass General Brigham to consolidate 14 other offices.
In 2019, 25 years after the founding of Partners, the health system made the decision to fully integrate the organization under the new name "Mass General Brigham".
Mass General Brigham has 2.5 million patients annually, generating $18 billion in operating revenue and more than $2 billion in research funding. Brigham and Women's and Massachusetts General are consistently ranked among the best hospitals in America, while Massachusetts Eye and Ear, McLean, and Spaulding are also among the nation's best in their respective specialties.
Its current President and CEO is Dr. Anne Klibanski.
Board of Directors
The system's current Board of Directors consists of the following members:
Executive Committee of the Board
Scott M. Sperling (Chairman) – Co-Chief Executive Officer at Thomas H. Lee Partners
John Fish – CEO of Suffolk Construction Company
Jonathan Kraft – President of The Kraft Group
Board Members
Robert Atchinson – Co-Founder of Adage Capital Management
Marc Casper – Chairman/President/CEO of Thermo Fisher Scientific
Yolonda Colson, MD – Chief for the Division of Thoracic Surgery at Massachusetts General Hospital
Zara Cooper, MD – Brigham & Women's Hospital, Assoc Professor of Surgery at Harvard Medical School
Anne Finucane – Vice Chair of Bank of America, Board Chair of Bank of America Europe.
Benjamin Gomez – Head of Capital Markets at BNP Paribas Real Estate Spain
Tiffany Gueye – Fmr Chief Operating Officer at Blue Meridian Partners
Susan Hockfield – Professor of Neuroscience and President Emerita at MIT
Albert A. Holman, III – Founder and President of Chestnut Partners, Inc
David W. Ives – Fmr Chairman, Northshore International Insurance Services, Inc
Anne Klibanski, MD – President & CEO of Mass General Brigham
Carl J. Martignetti – President of Martignetti Companies
Nitin Nohria – Fmr Dean of Harvard Business School
Diane B. Patrick – Senior Counsel at Ropes & Gray
Phillip Ragon – Founder/Owner/CEO of InterSystems Corporation
Pamela Reeve – Fmr CEO and current chair of multiple publicly traded & non-profit companies
Paula Ness Speers – Partner and Managing Director, Health Advances
James D. Taiclet – President & CEO of Lockheed Martin
Alexander L. Thorndike – President of Choate Investment Advisors
Carol Vallone – Board Chair at McLean Hospital, Advisory Director at Berkshire Partners
Composition
Current members of Mass General Brigham include:
Affiliated Organizations
Nobel Laureates
There are at least 22 Nobel Prize winners affiliated with Mass General Brigham institutions.
History of Firsts
The following is a list of medical firsts and milestones accomplished by Mass General Brigham institutions:
1811: Massachusetts General Hospital opens and becomes the first teaching hospital of Harvard Medical School.
1818: The Asylum for the Insane, a division of Mass General, opens as the first hospital in New England to treat mental illness. In 1892, it is renamed McLean Hospital, which is known today as the flagship mental health hospital of Harvard Medical School and Mass General Brigham.
1832: The Boston Lying-in Hospital was founded in Boston, MA, as one of the nation's first maternity hospitals dedicated to women unable to afford in-home medical care. It is the first of Brigham and Women's Hospital's predecessor institutions.
1837: The first North American book on tumors was written by MGH co-founder Dr. John Collins Warren.
1841: MGH's Warren Library became the first general hospital library in the U.S.
1846: William T.G. Morton, MD, and John Collins Warren, MD, of Mass General perform the first successful public demonstration of surgical ether anesthesia.
1846: The "first truly significant medical patent ever issued" was U.S. Patent No. 4848. It was given to Drs. Charles T. Jackson and William T. G. Morton for the discovery of sulfuric ether as a surgical anesthetic.
1847: MGH's Dr. John Barnard Swett Jackson became the first professor of pathology in the U.S.
1870: MGH's Dr. James Clarke White opened the first ward in North America dedicated to skin diseases; the following year, he became the first American professor of dermatology.
1888: MGH opened the Bradlee Operating Theater, the first aseptic operating room in U.S.
1896: Walter J. Dodd, an apothecary and photographer at MGH, produced the first X-ray exposure in a U.S. hospital.
1900: Two 1878 graduates of the Massachusetts General Hospital Training School for Nurses, Sophia Palmer and Mary E. P. Davis, founded the American Journal of Nursing, the first independent nursing publication to be owned and operated by nurses.
1905: MGH's Ida M. Cannon and Dr. Richard Cabot are credited with establishing the first Social Service department located within a hospital.
1914: MGH physician Dr. Paul Dudley White introduced the use of the electrocardiogram (ECG) in the U.S.
1914: A pioneering allergy clinic was instituted by MGH's Dr. Joseph L. Goodale, who was "the first to make a skin test with substances other than pollen."
1921: MGH physician Dr. Ernest Amory Codman founded the Registry of Bone Sarcoma, the first national registry of its kind in the U.S.
1923: The first successful heart valve surgery in the world is performed at the Peter Bent Brigham Hospital by Elliot C. Cutler, MD.
1926: Harvey Cushing, MD, performs the first surgery using an electrosurgical generator in an operating room at the Peter Bent Brigham Hospital.
1929: The first polio victim is saved using the newly developed Drinker respirator (iron lung) at the Peter Bent Brigham Hospital.
1934: Under the leadership of Dr. Richard C. Cabot, MGH became the first hospital in the country to offer a pastoral care training program.
1937: A man of many "firsts", MGH endocrinologist Dr. Fuller Albright described what came to be known as Albright Syndrome.
1939: MGH's Dr. Edward D. Churchill, who performed the first successful pericardiectomy in the United States, developed the technique of segmental resection of the lung for certain infections like bronchiectasis.
1940: MGH's Ada Plumer became the first "official IV [intravenous] nurse" in the U.S. Until that time, it had been a medical role.
1942: MGH's Dr. Saul Hertz and MIT physicist Dr. Arthur Roberts used radioactive iodine for the first time as a therapeutic agent in the diagnosis and treatment of Graves' disease, helping to usher in the field of nuclear medicine.
1949: Carl Walter, MD, invents and perfects a way to collect, store and transfuse blood at the Peter Bent Brigham Hospital.
1954: The first successful human organ transplant, a kidney transplant, is accomplished at Peter Bent Brigham Hospital. Joseph Murray, MD, receives the Nobel Prize for this work.
1955: Drs. Wilma Jeanne Canada and Leonard W. Cronkhite, Jr., both residents in radiology at MGH, were the first to recognize the syndrome that bears their name, Cronkhite–Canada Syndrome.
1960: Dwight Harken, MD inserts the first prosthetic aortic valve directly into a human heart at the site of the biological valve. He also implants the first "demand" pacemaker and pioneers the use of the first pacemakers at the Peter Bent Brigham Hospital.
1962: Joseph Murray, MD, performs the world's first successful kidney transplantation from an unrelated cadaver donor. The procedure included the first clinical use of the immunosuppressive drug azathioprine.
1962: MGH's Dr. Ronald Malt and his team led the first successful limb replantation after twelve-year-old Everett "Red" Knowles's arm had been severed in an accident.
1963: MGH's Dr. Charles Huggins helped revolutionize blood bank procedures through his invention of the cytoglomerator, enabling freezing and storing red blood cells for extended periods.
1968: The first telemedicine system, which linked a medical station at Boston's Logan Airport with doctors at MGH, was established.
1969: Considered the "father of modern-day tracheal surgery" in the United States, MGH's Dr. Hermes Grillo developed original operations for disorders that were once considered uncorrectable.
1971: Three MGH physicians: Drs. Howard Ulfelder, Arthur L. Herbst and David C. Poskanzer, were the first to discover the link between the vaginal clear cell adenocarcinoma and the drug DES (diethylstilbestrol), at one time prescribed to prevent miscarriages.
1974: MGH dermatologists Drs. Thomas Fitzpatrick and John Parrish introduced the field of photochemotherapy to treat skin disorders such as psoriasis.
1976: Brigham and Women's Hospital researchers launch the Nurses' Health Study, enrolling 122,000 women in America's first and largest women's health study.
1978: MGH's Dr. Jeffrey B. Cooper, with colleagues at MGH and MIT, developed the "Boston Anesthesia System," the first anesthesia machine engineered by way of human-factors studies, and the first with computer-based operations.
1981: Faulkner Hospital makes history by being the first to successfully transfuse a patient with "rejuvenated blood".
1981: MGH surgeon Dr. John F. Burke, along with Dr. Ioannis V. Yannas, from the Massachusetts Institute of Technology's department of mechanical engineering, invented the first commercially reproducible, synthetic human skin.
1983: MGH neurogeneticist Dr. James Gusella led a team that found a genetic marker for Huntington's disease.
1983: Dr. Allan Goroll, a pioneer of modern primary care, collaborated with his MGH colleagues on the first textbook in that field.
1989: MGH became the first hospital in the U.S. whose library had an online catalog.
1991: Dr. Jack Belliveau, researcher in MGH's Athinoula A. Martinos Center for Biomedical Imaging, reported the first demonstration of functional MRI (fMRI).
1995: The Brigham performs the nation's first triple organ transplant, removing three organs from a single donor—two lungs and a heart—and transplanting them into three patients.
1999: MGH's Dr. Thomas Spitzer and colleagues reported on the first-ever organ transplant carried out with the intention of stopping antirejection therapy.
2000: In what is believed to be a first in organ transplantation, the Brigham performs a quadruple transplant, harvesting four organs from a single donor—a kidney, two lungs and a heart—and transplanting them into four patients.
2004: The Brigham performs the nation's first implant of the new Intrinsic dual-chamber implantable cardioverter-defibrillator (ICD).
2007: MGH surgeons performed the first total hip replacement using a joint socket lined with a novel material invented at MGH.
2009: Brigham surgeons complete the second partial facial transplant in the United States.
2011: A multidisciplinary team at Brigham and Women's Hospital, led by Bohdan Pomahac, MD, performs the first full-face transplant in the U.S.
2015: An international team led by MGH researchers identified the first gene that causes mitral valve prolapse.
2016: The Brigham performs the first bilateral arm transplant on a patient injured during military service.
2016: MGH was the first hospital in the country where a liver transplant was performed using what doctors loosely call "liver in a box", a portable device.
2016: A surgical team led by MGHers Drs. Curtis L. Cetrulo, Jr. and Dicken S.C. Ko performed the country's first genitourinary vascularized composite allograft (penile) transplant.
2017: MGH's Dr. Bradley E. Bernstein, along with colleagues from MGH, Mass. Eye and Ear, and the Broad Institute at MIT, created the first atlas of head and neck cancer.
Innovations and ventures
Mass General Brigham is the largest hospital system-based research enterprise in America, with an annual research budget exceeding $2 billion. It is the top system for National Institutes of Health (NIH) funding in the world, receiving $1.04 billion from NIH in 2022.
The system's funding for research has grown from $1.5 billion in 2012 to $2.3 billion in 2023, with nearly two-thirds of the funds coming from outside of Massachusetts. Research revenues in 2022 were $2.2 billion. In 2023, the system said it had over 2,700 ongoing clinical trials, focused on accelerating new treatments and therapies. Among the system's recent innovations: Visudyne for macular degeneration, Enbrel for rheumatoid arthritis, Eloctate and Alprolix for hemophilia, Entyvio for Crohn's disease, and total joint replacements such as Durasul, Longevity, E1, and Vivacit-E.
Expansion and influence
In May 2000, CEO Dr. Samuel Thier and William C. Van Faasen, CEO of Blue Cross Blue Shield of Massachusetts—the state's biggest health insurer—agreed to a deal that raised insurance costs all across Massachusetts. They agreed that Van Faasen would substantially increase insurance payments to Mass General Brigham doctors and hospitals, largely correcting the underpayments of the previous 10 years. However, Partners issued a statement saying that Thier pledged only that he would treat all insurers equally. According to Boston Globe investigative journalists, Blue Cross and other insurers increased the rate they paid Mass General Brigham by 75 percent between 2000 and 2008, though CEO James J. Mongan argued that insurance rates in Massachusetts had gone up at roughly the same rate as the national average.
In 2013, Mass General Brigham's plan to take over 378-bed South Shore Hospital in Weymouth was reviewed due to fears that the expansion was anticompetitive, conduct Mass General Brigham had been accused of in other cases over the previous four years. In 2015, the system abandoned its plan to invest $200 million in the hospital.
In April 2017, the United States District Court for the District of Massachusetts announced that Partners HealthCare System and one of its hospitals, Brigham and Women's Hospital, agreed to pay a $10 million fine to resolve allegations that a stem cell research lab fraudulently obtained federal grant funding. Federal prosecutors commended the Brigham for disclosing allegations of fraudulent research at the lab and for taking steps to prevent future recurrences of such conduct.
In May 2017, Partners announced it would cut more than $600 million in expenses over the following three years in an effort to control rising costs and to become more efficient. The cost-cutting initiative was called Partners 2.0, and the plan looked to reduce costs in research, care delivery, revenue collection, and supply chain. The plan began on October 1, 2017, and eliminated jobs. The company lost $108 million in 2016, but was profitable in 2017 despite industry turmoil.
In February 2018, Partners announced that 100 coders would have their jobs outsourced to India in a cost-saving move. This was part of the non-profit hospital and physician network's three-year plan to reduce overhead costs by $500 million to $800 million. CEO Dr. David Torchiana said the job cuts were a financial necessity, adding that most sectors outsource call centers and back-office functions.
During the SARS-CoV-2 pandemic, Partners HealthCare, which reported operating income of $484 million (3.5% operating margin) in fiscal year 2019, refused hazard pay to its healthcare workers despite a lack of proper PPE. However, it did not lay off or furlough any employees during the pandemic, and it cut executive salaries. The system explained it does not calibrate pay and benefits based upon patients' conditions, because a core part of its mission is delivering the same high-quality care to all patients regardless of the severity of their condition. Partners also provided pay and benefits for employees unable to work due to COVID-related illness, eight weeks of pay for those temporarily without work, and hotel rooms for employees.
Mass General Brigham reported an operating loss of $432 million (−2.6% operating margin) in fiscal year 2022, attributed to historic cost inflation, significant workforce shortages, and a worsening capacity crisis. Many health care systems and hospitals nationwide experienced their worst financial year since the start of the COVID-19 pandemic. In response, the system announced a plan for a long-term sustainable future, which includes the following initiatives: advancing integration to improve patient care and identify efficiencies, addressing the labor shortage by building workforce pipelines, and reducing expenses.
See also
Partners Harvard Medical International
Steward Health Care System
References
External links
Partners International Medical Services
Spaulding Rehabilitation Network
Partners Healthcare At Home
1994 establishments in Massachusetts
Healthcare in Boston
Hospital networks in the United States
Life sciences industry
Massachusetts General Hospital
Non-profit organizations based in Boston
Medical and health organizations based in Massachusetts | Mass General Brigham | Biology | 3,996 |
1,448,147 | https://en.wikipedia.org/wiki/NHK%20Science%20%26%20Technology%20Research%20Laboratories | NHK Science & Technology Research Laboratories (STRL, ), headquartered in Setagaya, Tokyo, Japan, is responsible for technical research at NHK, Japan's public broadcaster.
Work done by the STRL includes research on direct-broadcast satellite (BS), Integrated Services Digital Broadcasting, high-definition television, and ultra-high-definition television.
On May 9, 2013, NHK and Mitsubishi Electric announced that they had jointly developed the first High Efficiency Video Coding (HEVC) encoder for 8K Ultra HD TV, also called Super Hi-Vision (SHV). The HEVC encoder supports the Main 10 profile at Level 6.1, allowing it to encode 10-bit video with a resolution of 7680 × 4320 at 60 fps. The encoder has 17 3G-SDI inputs and uses 17 boards for parallel processing, with each board encoding a horizontal stripe of 7680 × 256 pixels to allow real-time video encoding. The HEVC encoder was shown at the NHK Science & Technology Research Laboratories Open House 2013, which took place from May 30 to June 2.
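As a rough, illustrative sketch (not NHK's actual implementation; the stripe boundaries below are simply the arithmetic implied by the figures quoted above), splitting a 7680 × 4320 frame into 256-line horizontal stripes yields 17 stripes, one per encoder board:

```python
# Illustrative only: partition an 8K frame into 256-line horizontal stripes,
# one stripe per encoding board, as suggested by the figures quoted above.
FRAME_WIDTH, FRAME_HEIGHT, STRIPE_HEIGHT = 7680, 4320, 256

stripes = []
for top in range(0, FRAME_HEIGHT, STRIPE_HEIGHT):
    bottom = min(top + STRIPE_HEIGHT, FRAME_HEIGHT)  # the last stripe is shorter
    stripes.append((top, bottom))

print(len(stripes))                  # 17 stripes -> 17 boards
print(stripes[-1])                   # (4096, 4320): the final stripe covers the remaining 224 lines
print(FRAME_WIDTH * STRIPE_HEIGHT)   # 1,966,080 pixels handled per full-height stripe
```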
See also
NHK
NHK Twinscam
Ultra-high-definition television (UHDTV)
High Efficiency Video Coding (HEVC) – Video codec that supports resolutions up to 8K UHDTV (7680 × 4320)
References
External links
STRL - Japanese
STRL - English
NHK Open House 2013 - English
NHK
Mass media in Tokyo
Television technology
Engineering research institutes
Scientific organizations established in 1930
Audio engineering
Radio technology
Research institutes in Japan
Sound production technology
Sound recording technology
1930 establishments in Japan | NHK Science & Technology Research Laboratories | Technology,Engineering | 335 |
8,160,211 | https://en.wikipedia.org/wiki/Cerebellar%20model%20articulation%20controller | The cerebellar model arithmetic computer (CMAC) is a type of neural network based on a model of the mammalian cerebellum. It is also known as the cerebellar model articulation controller. It is a type of associative memory.
The CMAC was first proposed as a function modeler for robotic controllers by James Albus in 1975 (hence the name), but has been extensively used in reinforcement learning and also for automated classification in the machine learning community. The CMAC is an extension of the perceptron model. It computes a function f(x1, ..., xn) of n input dimensions. The input space is divided up into hyper-rectangles, each of which is associated with a memory cell. The contents of the memory cells are the weights, which are adjusted during training. Usually, more than one quantisation of input space is used, so that any point in input space is associated with a number of hyper-rectangles, and therefore with a number of memory cells. The output of a CMAC is the algebraic sum of the weights in all the memory cells activated by the input point.
A change of value of the input point results in a change in the set of activated hyper-rectangles, and therefore a change in the set of memory cells participating in the CMAC output. The CMAC output is therefore stored in a distributed fashion, such that the output corresponding to any point in input space is derived from the value stored in a number of memory cells (hence the name associative memory). This provides generalisation.
Building blocks
In the adjacent image, there are two inputs to the CMAC, represented as a 2D space. Two quantising functions have been used to divide this space with two overlapping grids (one shown in heavier lines). A single input is shown near the middle, and this has activated two memory cells, corresponding to the shaded area. If another point occurs close to the one shown, it will share some of the same memory cells, providing generalisation.
The CMAC is trained by presenting pairs of input points and output values, and adjusting the weights in the activated cells by a proportion of the error observed at the output. This simple training algorithm has a proof of convergence.
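The scheme described above can be sketched in a few lines of code. The following is a minimal, illustrative Python implementation (the number of tilings, cell counts, tiling offsets and learning rate are arbitrary choices for the sketch, not values prescribed by Albus): several overlapping quantisations cover the input space, the output is the sum of the weights of the activated cells, and training moves those weights by a proportion of the observed error.

```python
import numpy as np

class CMAC:
    """Minimal CMAC: overlapping uniform tilings of a bounded input space."""

    def __init__(self, n_tilings=8, bins_per_dim=10, n_dims=2, lr=0.1):
        self.n_tilings = n_tilings
        self.bins = bins_per_dim
        self.n_dims = n_dims
        self.lr = lr
        # One weight table per tiling; each cell of each tiling holds one weight.
        self.weights = np.zeros((n_tilings,) + (bins_per_dim + 1,) * n_dims)
        # Each tiling is shifted by a different fraction of one cell width.
        self.offsets = np.linspace(0.0, 1.0, n_tilings, endpoint=False)

    def _active_cells(self, x):
        """Return the index of the activated cell in each tiling (x in [0, 1]^n)."""
        cells = []
        for t in range(self.n_tilings):
            idx = tuple(int(np.floor(x[d] * self.bins + self.offsets[t]))
                        for d in range(self.n_dims))
            cells.append((t,) + idx)
        return cells

    def predict(self, x):
        """CMAC output: algebraic sum of the weights in all activated cells."""
        return sum(self.weights[c] for c in self._active_cells(x))

    def train(self, x, target):
        """Distribute a proportion of the output error over the activated cells."""
        error = target - self.predict(x)
        for c in self._active_cells(x):
            self.weights[c] += self.lr * error / self.n_tilings

# Usage: approximate f(x, y) = sin(2*pi*x) * y on the unit square.
cmac = CMAC()
rng = np.random.default_rng(0)
for _ in range(20000):
    x = rng.random(2)
    cmac.train(x, np.sin(2 * np.pi * x[0]) * x[1])
print(cmac.predict(np.array([0.25, 0.8])))   # close to sin(pi/2) * 0.8 = 0.8
```

Because nearby inputs share most of their activated cells, an update made for one point also shifts the prediction for its neighbours, which is the generalisation described above.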
It is normal to add a kernel function to the hyper-rectangle, so that points falling towards the edge of a hyper-rectangle have a smaller activation than those falling near the centre.
One of the major problems cited in practical use of CMAC is the memory size required, which is directly related to the number of cells used. This is usually ameliorated by using a hash function, and only providing memory storage for the actual cells that are activated by inputs.
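A sketch of the hashing idea (the table size and hash function here are arbitrary illustrative choices): rather than reserving one weight per virtual cell, the (tiling, cell-coordinates) pair is hashed into a fixed-size table, so physical memory stays bounded no matter how finely the input space is quantised.

```python
import zlib

TABLE_SIZE = 4096            # fixed physical memory, independent of the virtual cell count
weights = [0.0] * TABLE_SIZE

def hashed_index(tiling, cell_coords):
    """Map a (tiling, cell) coordinate tuple to a slot in the fixed-size weight table."""
    key = repr((tiling, tuple(cell_coords))).encode()
    return zlib.crc32(key) % TABLE_SIZE

# The weight for cell (3, 7) of tiling 2 lives at this hashed slot; distinct cells
# may occasionally collide, which in practice behaves like a small amount of noise.
print(hashed_index(2, (3, 7)))
```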
One-step convergent algorithm
Initially, the least mean square (LMS) method was employed to update the weights of a CMAC. The convergence of using LMS for training CMAC is sensitive to the learning rate and could lead to divergence. In 2004, a recursive least squares (RLS) algorithm was introduced to train CMAC online. It does not require tuning a learning rate, and its convergence has been proved theoretically and is guaranteed in one step. The computational complexity of this RLS algorithm is O(N³).
Hardware implementation infrastructure
Based on QR decomposition, an algorithm (QRLS) has been further simplified to have an O(N) complexity. Consequently, this reduces memory usage and time cost significantly. A parallel pipeline array structure on implementing this algorithm has been introduced.
Overall, by utilizing the QRLS algorithm, convergence of the CMAC neural network can be guaranteed, and the weights of the nodes can be updated in one step of training. Its parallel pipeline array structure gives it great potential for hardware implementation in large-scale industrial use.
Continuous CMAC
Because the rectangular shape of the CMAC receptive field functions produces a discontinuous, staircase-like function approximation, integrating CMAC with B-spline functions yields the continuous CMAC, which can provide derivatives of the approximated functions to any order.
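As an illustrative sketch of this idea (the particular quadratic B-spline and its normalisation are choices made here for illustration), the rectangular 0/1 receptive field can be compared with a smooth B-spline bump; using the latter as the cell activation makes the CMAC output, and hence its derivatives, continuous:

```python
def rectangular(u):
    """Original CMAC receptive field: full activation anywhere inside the cell."""
    return 1.0 if 0.0 <= u < 1.0 else 0.0

def quadratic_bspline(u):
    """A smooth replacement: piecewise-quadratic bump on [0, 3), C^1 continuous."""
    if 0.0 <= u < 1.0:
        return 0.5 * u * u
    if 1.0 <= u < 2.0:
        return 0.75 - (u - 1.5) ** 2
    if 2.0 <= u < 3.0:
        return 0.5 * (3.0 - u) ** 2
    return 0.0

# The smooth kernel tapers towards the cell edges instead of switching off abruptly.
print(rectangular(0.99), rectangular(1.0))               # 1.0 0.0 (discontinuous)
print(quadratic_bspline(1.49), quadratic_bspline(1.51))  # both about 0.75 (smooth)
```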
Deep CMAC
In recent years, numerous studies have confirmed that by stacking several shallow structures into a single deep structure, the overall system could achieve better data representation, and, thus, more effectively deal with nonlinear and high complexity tasks. In 2018, a deep CMAC (DCMAC) framework was proposed and a backpropagation algorithm was derived to estimate the DCMAC parameters. Experimental results of an adaptive noise cancellation task showed that the proposed DCMAC can achieve better noise cancellation performance when compared with that from the conventional single-layer CMAC.
Summary
See also
References
Further reading
Albus, J.S. (1971). "Theory of Cerebellar Function". In: Mathematical Biosciences, Volume 10, Numbers 1/2, February 1971, pgs. 25–61
Albus, J.S. (1975). "New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC)". In: Transactions of the ASME Journal of Dynamic Systems, Measurement, and Control, September 1975, pgs. 220 – 227
Albus, J.S. (1979). "Mechanisms of Planning and Problem Solving in the Brain". In: Mathematical Biosciences 45, pgs 247–293, 1979.
Iwan, L., and Stengel, R., "The Application of Neural Networks to Fuel Processors for Fuel Cells" In IEEE Transactions on Vehicular Technology, Vol. 50 (1), pp. 125-143, 2001.
Tsao, Y. (2018). "Adaptive Noise Cancellation Using Deep Cerebellar Model Articulation Controller". In: IEEE Access 6, April 2018, pgs 37395-37402.
External links
Blog on Cerebellar Model Articulation Controller (CMAC) by Ting Qin. More details on the one-step convergent algorithm, code development, etc.
Computational neuroscience
Artificial neural networks
Network architecture
Networks | Cerebellar model articulation controller | Engineering | 1,233 |
34,535,168 | https://en.wikipedia.org/wiki/Donald%20R.%20F.%20Harleman | Donald Robert Fergusson Harleman (December 5, 1922 – September 28, 2005) was an American civil engineer noted for his research of the flow of contaminants through water and harbor cleanup efforts around the world.
Harleman was credited with cleanup efforts of harbors around the world: Australia, Brazil, China, India, and Mexico, among others. He advised government agencies on the Boston Harbor cleanup.
Harleman was elected to the National Academy of Engineering in 1974 "for leadership in the development of theoretical and experimental techniques in the field of fluid mechanics".
The Boston Globe called Harleman "an internationally recognized civil engineer in the field of water quality and waste treatment".
The New York Times said that Harleman "was regarded as a leader in fluid mechanics" and said he was "water pollution expert who aided cleanups worldwide".
Harleman was Ford Professor of Civil Engineering and Director of Ralph M. Parsons Laboratory at the Massachusetts Institute of Technology.
Chronology
1922: born on December 5 in Palmerton, Pennsylvania
1943: B.S. in civil engineering, Pennsylvania State University
1947: M.S., Massachusetts Institute of Technology
1950: Doctorate, Massachusetts Institute of Technology
1950: assistant professor of hydraulics, Massachusetts Institute of Technology
1975-1990: Ford Professor of Environmental Engineering, Massachusetts Institute of Technology
1991: retired from MIT as Ford Professor emeritus
2005: died of cancer on September 28 on Nantucket, Massachusetts
References
1922 births
2005 deaths
Engineers from Pennsylvania
MIT School of Engineering faculty
Penn State College of Engineering alumni
MIT School of Engineering alumni
Members of the United States National Academy of Engineering
People from Carbon County, Pennsylvania
American environmental scientists
20th-century American engineers | Donald R. F. Harleman | Environmental_science | 332 |
43,947,222 | https://en.wikipedia.org/wiki/Strikingly | Strikingly is a Chinese, "mobile-first" website builder and blogging platform. Its aim is to allow a user with little or no development experience to create mobile optimized websites and blogs. In addition to smartphones and tablets, websites created with Strikingly are "enhanced for viewing across all devices", including desktops.
Strikingly is the first Chinese company to graduate from the Y Combinator seed accelerator.
History
Chief executive officer (CEO) David Haisha Chen, chief technology officer (CTO) Dafeng Guo, and chief design officer (CDO) Teng Bao founded the company in 2012.
Strikingly released its beta platform in August 2012. In June 2013, Strikingly was selected for Y Combinator's winter startup program in Mountain View. In April 2013, Strikingly raised a total of $1.5M in a seed round from 16 investors, including Ron Conway, founder of SV Angel, and other prominent venture capital firms including Y Combinator, Index Ventures, Funders Club, Infinity Venture Partners, ZenShin Capital, and Innovation Works. In August 2017, the five-year-old startup announced it had raised $6M in a Series A round of investment from CAS Holding, Infinity Venture Partners, Innovation Works, Kevin Hale and TEEC.
Product
Strikingly is a free online website builder and blogging platform. The website builder and blogging platform also includes built-in SEO features, social media plug-ins, page analytics, and form/email collecting functionalities. The product is aimed at both individuals and small businesses, and has mainly been used to showcase portfolios, digital resumes, events, startup projects, and to create personal branding websites. In early 2014, Strikingly also launched its one-click site builder with Facebook and LinkedIn, allowing users to build mobile optimized web pages quickly. The builder pulls information like profile pictures, location, contact information, and work experience from the social media page to create a mobile optimized website.
Other competitors in the website building industry include Webflow, Wix.com, Weebly, Squarespace, Metaconex, and WordPress.
References
External links
Cloud platforms
Internet properties established in 2012
Blog hosting services
Y Combinator companies | Strikingly | Technology | 450 |
47,515,762 | https://en.wikipedia.org/wiki/Dictionary%20of%20Irish%20Architects | The Dictionary of Irish Architects is an online database which contains biographical and bibliographical information on architects, builders and craftsmen born or working in Ireland during the period 1720 to 1940, and information on the buildings on which they worked. Although it is principally devoted to architects, it includes engineers who designed buildings and structures, some builders, some artists and craftsmen, and some amateurs and writers on architectural subjects.
The dictionary was initially devised and created by Ana Martha Rowan.
Architects from Britain and elsewhere who never resided in Ireland but designed buildings there are not given full biographical treatment, and only their Irish works are listed. Irish-born architects who emigrated are similarly treated; their careers after their departure from Ireland are not described in detail, and only their Irish works are listed in full.
The Dictionary of Irish Architects was created and compiled in the Irish Architectural Archive (IAA) over a period of thirty years. It was made publicly available online in January 2009. According to the IAA it remains a "work in progress" with new data added and updated since its initial release. As of 2018, it reportedly contained 6,700 entries.
References
External links
Online databases
Architecture in Ireland
Architects
Online encyclopedias
Irish architectural history | Dictionary of Irish Architects | Technology,Engineering | 243 |
3,989,092 | https://en.wikipedia.org/wiki/Siegel%27s%20theorem%20on%20integral%20points | In mathematics, Siegel's theorem on integral points states that a curve of genus greater than zero has only finitely many integral points over any given number field.
The theorem was first proved in 1929 by Carl Ludwig Siegel and was the first major result on Diophantine equations that depended only on the genus and not any special algebraic form of the equations. For g > 1 it was superseded by Faltings's theorem in 1983.
Statement
Siegel's theorem on integral points: For a smooth algebraic curve C of genus g defined over a number field K, presented in affine space in a given coordinate system, there are only finitely many points on C with coordinates in the ring of integers O of K, provided g > 0.
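In symbols (simply restating the sentence above, writing O_K for the ring of integers of K and g(C) for the genus of C):

```latex
g(C) \ge 1
\quad\Longrightarrow\quad
\#\bigl\{\, P \in C(K) \;:\; \text{all affine coordinates of } P \text{ lie in } \mathcal{O}_K \,\bigr\} < \infty .
```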
History
In 1926, Siegel proved the theorem effectively in a special case, so that he proved this theorem conditionally, provided that Mordell's conjecture is true.
In 1929, Siegel proved the theorem unconditionally by combining a version of the Thue–Siegel–Roth theorem, from diophantine approximation, with the Mordell–Weil theorem from diophantine geometry (required in Weil's version, to apply to the Jacobian variety of C).
In 2002, Umberto Zannier and Pietro Corvaja gave a new proof by using a new method based on the subspace theorem.
Effective versions
Siegel's result was ineffective in general (see effective results in number theory), since Thue's method in diophantine approximation is also ineffective in describing possible very good rational approximations to almost all algebraic numbers of higher degree. Siegel proved it effectively only in a special case, in 1926. Effective results in some cases derive from Baker's method.
See also
Diophantine geometry
References
Diophantine equations
Theorems in number theory | Siegel's theorem on integral points | Mathematics | 370 |
72,458,972 | https://en.wikipedia.org/wiki/Amanita%20veldiei | Amanita veldiei is a species of Amanita found in South Africa
References
External links
veldiei
Fungus species | Amanita veldiei | Biology | 29 |
4,868 | https://en.wikipedia.org/wiki/B.%20F.%20Skinner | Burrhus Frederic Skinner (March 20, 1904 – August 18, 1990) was an American psychologist, behaviorist, inventor, and social philosopher. He was the Edgar Pierce Professor of Psychology at Harvard University from 1958 until his retirement in 1974.
Skinner developed behavior analysis, especially the philosophy of radical behaviorism, and founded the experimental analysis of behavior, a school of experimental research psychology. He also used operant conditioning to strengthen behavior, considering the rate of response to be the most effective measure of response strength. To study operant conditioning, he invented the operant conditioning chamber (aka the Skinner box), and to measure rate he invented the cumulative recorder. Using these tools, he and Charles Ferster produced Skinner's most influential experimental work, outlined in their 1957 book Schedules of Reinforcement.
Skinner was a prolific author, publishing 21 books and 180 articles. He imagined the application of his ideas to the design of a human community in his 1948 utopian novel, Walden Two, while his analysis of human behavior culminated in his 1958 work, Verbal Behavior.
Skinner, John B. Watson, and Ivan Pavlov are considered the pioneers of modern behaviorism. Accordingly, a June 2002 survey listed Skinner as the most influential psychologist of the 20th century.
Biography
Skinner was born in Susquehanna, Pennsylvania, to Grace and William Skinner, the latter of whom was a lawyer. Skinner became an atheist after a Christian teacher tried to assuage his fear of the hell that his grandmother described. His brother Edward, two and a half years younger, died at age 16 of a cerebral hemorrhage.
Skinner's closest friend as a young boy was Raphael Miller, whom he called Doc because his father was a doctor. Doc and Skinner became friends due to their parents' religiousness and both had an interest in contraptions and gadgets. They had set up a telegraph line between their houses to send messages to each other, although they had to call each other on the telephone due to the confusing messages sent back and forth. During one summer, Doc and Skinner started an elderberry business to gather berries and sell them door to door. They found that when they picked the ripe berries, the unripe ones came off the branches too, so they built a device that was able to separate them. The device was a bent piece of metal to form a trough. They would pour water down the trough into a bucket, and the ripe berries would sink into the bucket and the unripe ones would be pushed over the edge to be thrown away.
Education
Skinner attended Hamilton College in Clinton, New York, with the intention of becoming a writer. He found himself at a social disadvantage at the college because of his intellectual attitude. He was a member of Lambda Chi Alpha fraternity.
He wrote for the school paper, but, as an atheist, he was critical of the traditional mores of his college. After receiving his Bachelor of Arts in English literature in 1926, he attended Harvard University, where he would later research and teach. While attending Harvard, a fellow student, Fred S. Keller, convinced Skinner that he could make an experimental science of the study of behavior. This led Skinner to invent a prototype for the Skinner box and to join Keller in the creation of other tools for small experiments.
After graduation, Skinner unsuccessfully tried to write a novel while he lived with his parents, a period that he later called the "Dark Years". He became disillusioned with his literary skills despite encouragement from the poet Robert Frost, concluding that he had little world experience and no strong personal perspective from which to write. His encounter with John B. Watson's behaviorism led him into graduate study in psychology and to the development of his own version of behaviorism.
Later life
Skinner received a PhD from Harvard in 1931, and remained there as a researcher for some years. In 1936, he went to the University of Minnesota in Minneapolis to teach. In 1945, he moved to Indiana University, where he was chair of the psychology department from 1946 to 1947, before returning to Harvard as a tenured professor in 1948. He remained at Harvard for the rest of his life. In 1973, Skinner was one of the signers of the Humanist Manifesto II.
In 1936, Skinner married Yvonne "Eve" Blue. The couple had two daughters, Julie (later Vargas) and Deborah (later Buzan; married Barry Buzan). Yvonne died in 1997, and is buried in Mount Auburn Cemetery, Cambridge, Massachusetts.
Skinner's public exposure increased in the 1970s, and he remained active even after his retirement in 1974, until his death. In 1989, Skinner was diagnosed with leukemia and died on August 18, 1990, in Cambridge, Massachusetts. Ten days before his death, he was given the lifetime achievement award by the American Psychological Association and gave a talk concerning his work.
Contributions to psychology
Behaviorism
Skinner referred to his approach to the study of behavior as radical behaviorism, which originated in the early 1900s as a reaction to depth psychology and other traditional forms of psychology, which often had difficulty making predictions that could be tested experimentally. This philosophy of behavioral science assumes that behavior is a consequence of environmental histories of reinforcement (see applied behavior analysis).
Foundations of Skinner's behaviorism
Skinner's ideas about behaviorism were largely set forth in his first book, The Behavior of Organisms (1938). Here, he gives a systematic description of the manner in which environmental variables control behavior. He distinguished two sorts of behavior which are controlled in different ways:
Respondent behaviors are elicited by stimuli, and may be modified through respondent conditioning, often called classical (or pavlovian) conditioning, in which a neutral stimulus is paired with an eliciting stimulus. Such behaviors may be measured by their latency or strength.
Operant behaviors are 'emitted', meaning that initially they are not induced by any particular stimulus. They are strengthened through operant conditioning (aka instrumental conditioning), in which the occurrence of a response yields a reinforcer. Such behaviors may be measured by their rate.
Both of these sorts of behavior had already been studied experimentally, most notably: respondents, by Ivan Pavlov; and operants, by Edward Thorndike. Skinner's account differed in some ways from earlier ones, and was one of the first accounts to bring them under one roof.
The idea that behavior is strengthened or weakened by its consequences raises several questions. Among the most commonly asked are these:
Operant responses are strengthened by reinforcement, but where do they come from in the first place?
Once it is in the organism's repertoire, how is a response directed or controlled?
How can very complex and seemingly novel behaviors be explained?
1. Origin of operant behavior
Skinner's answer to the first question was very much like Darwin's answer to the question of the origin of a 'new' bodily structure, namely, variation and selection. Similarly, the behavior of an individual varies from moment to moment; a variation that is followed by reinforcement is strengthened and becomes prominent in that individual's behavioral repertoire. Shaping was Skinner's term for the gradual modification of behavior by the reinforcement of desired variations. Skinner believed that 'superstitious' behavior can arise when a response happens to be followed by reinforcement to which it is actually unrelated.
2. Control of operant behavior
The second question, "how is operant behavior controlled?" arises because, to begin with, the behavior is "emitted" without reference to any particular stimulus. Skinner answered this question by saying that a stimulus comes to control an operant if it is present when the response is reinforced and absent when it is not. For example, if lever-pressing only brings food when a light is on, a rat, or a child, will learn to press the lever only when the light is on. Skinner summarized this relationship by saying that a discriminative stimulus (e.g. light or sound) sets the occasion for the reinforcement (food) of the operant (lever-press). This three-term contingency (stimulus-response-reinforcer) is one of Skinner's most important concepts, and sets his theory apart from theories that use only pair-wise associations.
3. Explaining complex behavior
Most behavior of humans cannot easily be described in terms of individual responses reinforced one by one, and Skinner devoted a great deal of effort to the problem of behavioral complexity. Some complex behavior can be seen as a sequence of relatively simple responses, and here Skinner invoked the idea of "chaining". Chaining is based on the fact, experimentally demonstrated, that a discriminative stimulus not only sets the occasion for subsequent behavior, but it can also reinforce a behavior that precedes it. That is, a discriminative stimulus is also a "conditioned reinforcer". For example, the light that sets the occasion for lever pressing may also be used to reinforce "turning around" in the presence of a noise. This results in the sequence "noise – turn-around – light – press lever – food." Much longer chains can be built by adding more stimuli and responses.
However, Skinner recognized that a great deal of behavior, especially human behavior, cannot be accounted for by gradual shaping or the construction of response sequences. Complex behavior often appears suddenly in its final form, as when a person first finds his way to the elevator by following instructions given at the front desk. To account for such behavior, Skinner introduced the concept of rule-governed behavior. First, relatively simple behaviors come under the control of verbal stimuli: the child learns to "jump," "open the book," and so on. After a large number of responses come under such verbal control, a sequence of verbal stimuli can evoke an almost unlimited variety of complex responses.
Reinforcement
Reinforcement, a key concept of behaviorism, is the primary process that shapes and controls behavior, and occurs in two ways: positive and negative. In The Behavior of Organisms (1938), Skinner defines negative reinforcement to be synonymous with punishment, i.e. the presentation of an aversive stimulus. The term would subsequently be redefined in Science and Human Behavior (1953).
In what has now become the standard set of definitions, positive reinforcement is the strengthening of behavior by the occurrence of some event (e.g., praise after some behavior is performed), whereas negative reinforcement is the strengthening of behavior by the removal or avoidance of some aversive event (e.g., opening and raising an umbrella over your head on a rainy day is reinforced by the cessation of rain falling on you).
Both types of reinforcement strengthen behavior, or increase the probability of a behavior reoccurring; the difference being in whether the reinforcing event is something applied (positive reinforcement) or something removed or avoided (negative reinforcement). Punishment can be the application of an aversive stimulus/event (positive punishment or punishment by contingent stimulation) or the removal of a desirable stimulus (negative punishment or punishment by contingent withdrawal). Though punishment is often used to suppress behavior, Skinner argued that this suppression is temporary and has a number of other, often unwanted, consequences. Extinction is the absence of a rewarding stimulus, which weakens behavior.
Writing in 1981, Skinner pointed out that Darwinian natural selection is, like reinforced behavior, "selection by consequences". Though, as he said, natural selection has now "made its case," he regretted that essentially the same process, "reinforcement", was less widely accepted as underlying human behavior.
Schedules of reinforcement
Skinner recognized that behavior is typically reinforced more than once, and, together with Charles Ferster, he did an extensive analysis of the various ways in which reinforcements could be arranged over time, calling it the schedules of reinforcement.
The most notable schedules of reinforcement studied by Skinner were continuous, interval (fixed or variable), and ratio (fixed or variable). All are methods used in operant conditioning; a minimal simulation sketch follows the list below.
Continuous reinforcement (CRF): each time a specific action is performed the subject receives a reinforcement. This method is effective when teaching a new behavior because it quickly establishes an association between the target behavior and the reinforcer.
Interval schedule: based on the time intervals between reinforcements.
Fixed interval schedule (FI): A procedure in which reinforcements are presented at fixed time periods, provided that the appropriate response is made. This schedule yields a response rate that is low just after reinforcement and becomes rapid just before the next reinforcement is scheduled.
Variable interval schedule (VI): A procedure in which behavior is reinforced after scheduled but unpredictable time durations following the previous reinforcement. This schedule yields the most stable rate of responding, with the average frequency of reinforcement determining the frequency of response.
Ratio schedules: based on the ratio of responses to reinforcements.
Fixed ratio schedule (FR): A procedure in which reinforcement is delivered after a specific number of responses have been made.
Variable ratio schedule (VR): A procedure in which reinforcement comes after a number of responses that is randomized from one reinforcement to the next (e.g. slot machines). The lower the number of responses required, the higher the response rate tends to be. Variable ratio schedules tend to produce very rapid and steady responding rates in contrast with fixed ratio schedules where the frequency of response usually drops after the reinforcement occurs.
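As a minimal illustration of the ratio schedules above (the response counts and the random model below are arbitrary assumptions made for the sketch, not an experimental protocol), the two rules can be simulated directly:

```python
import random

def simulate_fixed_ratio(n_responses, ratio):
    """Fixed ratio (FR): reinforce every `ratio`-th response."""
    return n_responses // ratio

def simulate_variable_ratio(n_responses, mean_ratio, seed=0):
    """Variable ratio (VR): reinforce after a randomized number of responses
    whose average is `mean_ratio`."""
    rng = random.Random(seed)
    reinforcements = 0
    count = 0
    next_requirement = rng.randint(1, 2 * mean_ratio - 1)
    for _ in range(n_responses):
        count += 1
        if count >= next_requirement:
            reinforcements += 1
            count = 0
            next_requirement = rng.randint(1, 2 * mean_ratio - 1)
    return reinforcements

print(simulate_fixed_ratio(1000, ratio=10))          # exactly 100 reinforcements
print(simulate_variable_ratio(1000, mean_ratio=10))  # roughly 100 on average, but unpredictable
```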
Token economy
"Skinnerian" principles have been used to create token economies in a number of institutions, such as psychiatric hospitals. When participants behave in desirable ways, their behavior is reinforced with tokens that can be changed for such items as candy, cigarettes, coffee, or the exclusive use of a radio or television set.
Verbal Behavior
Challenged by Alfred North Whitehead during a casual discussion while at Harvard to provide an account of a randomly provided piece of verbal behavior, Skinner set about attempting to extend his then-new functional, inductive approach to the complexity of human verbal behavior. Developed over two decades, his work appeared in the book Verbal Behavior. Although Noam Chomsky was highly critical of Verbal Behavior, he conceded that Skinner's "S-R psychology" was worth a review. Behavior analysts reject Chomsky's appraisal of Skinner's work as merely "stimulus-response psychology," and some have argued that this mischaracterization highlights a poor understanding of Skinner's work and the field of behavior analysis as a whole.
Verbal Behavior had an uncharacteristically cool reception, partly as a result of Chomsky's review, partly because of Skinner's failure to address or rebut any of Chomsky's criticisms. Skinner's peers may have been slow to adopt the ideas presented in Verbal Behavior because of the absence of experimental evidence—unlike the empirical density that marked Skinner's experimental work.
Scientific inventions
Operant conditioning chamber
An operant conditioning chamber (also known as a "Skinner box") is a laboratory apparatus used in the experimental analysis of animal behavior. It was invented by Skinner while he was a graduate student at Harvard University. As used by Skinner, the box had a lever (for rats), or a disk in one wall (for pigeons). A press on this "manipulandum" could deliver food to the animal through an opening in the wall, and responses reinforced in this way increased in frequency. By controlling this reinforcement together with discriminative stimuli such as lights and tones, or punishments such as electric shocks, experimenters have used the operant box to study a wide variety of topics, including schedules of reinforcement, discriminative control, delayed response ("memory"), punishment, and so on. By channeling research in these directions, the operant conditioning chamber has had a huge influence on course of research in animal learning and its applications. It enabled great progress on problems that could be studied by measuring the rate, probability, or force of a simple, repeatable response. However, it discouraged the study of behavioral processes not easily conceptualized in such terms—spatial learning, in particular, which is now studied in quite different ways, for example, by the use of the water maze.
Cumulative recorder
The cumulative recorder makes a pen-and-ink record of simple repeated responses. Skinner designed it for use with the operant chamber as a convenient way to record and view the rate of responses such as a lever press or a key peck. In this device, a sheet of paper gradually unrolls over a cylinder. Each response steps a small pen across the paper, starting at one edge; when the pen reaches the other edge, it quickly resets to the initial side. The slope of the resulting ink line graphically displays the rate of the response; for example, rapid responses yield a steeply sloping line on the paper, slow responding yields a line of low slope. The cumulative recorder was a key tool used by Skinner in his analysis of behavior, and it was very widely adopted by other experimenters, gradually falling out of use with the advent of the laboratory computer and use of line graphs. Skinner's major experimental exploration of response rates, presented in his book with Charles Ferster, Schedules of Reinforcement, is full of cumulative records produced by this device.
Air crib
The air crib is an easily cleaned, temperature- and humidity-controlled box-bed intended to replace the standard infant crib. After raising one baby, Skinner felt that he could simplify the process for parents and improve the experience for children. He primarily thought of the idea to help his wife cope with the day-to-day tasks of child rearing. Skinner had some specific concerns about raising a baby in the rough environment where he lived in Minnesota, and keeping the child warm was a central priority (Faye, 2010). Though this was the main goal, the crib was also designed to reduce laundry, diaper rash, and cradle cap, while still allowing the baby to be more mobile and comfortable. Reportedly it had some success toward these goals; it was advertised commercially, and an estimated 300 children were raised in the air crib. Psychology Today tracked down 50 children and ran a short piece on the effects of the air crib. The reports came back positive, and these children and parents enjoyed using the crib (Epstein, 2005). One of these air cribs resides in the gallery at the Center for the History of Psychology in Akron, Ohio (Faye, 2010).
The air crib was designed with three solid walls and a safety-glass panel at the front which could be lowered to move the baby in and out of the crib. The floor was stretched canvas. Sheets were intended to be used over the canvas and were easily rolled off when soiled. Addressing Skinners' concern for temperature, a control box on top of the crib regulated temperature and humidity. Filtered air flowed through the crib from below. This crib was higher than most standard cribs, allowing easier access to the child without the need to bend over (Faye, 2010).
The air crib was a controversial invention. It was popularly characterized as a cruel pen, and it was often compared to Skinner's operant conditioning chamber (or "Skinner box"). Skinner's article in Ladies Home Journal, titled "Baby in a Box", caught the eye of many and contributed to skepticism about the device (Bjork, 1997). A picture published with the article showed the Skinners' daughter, Deborah, peering out of the crib with her hands and face pressed upon the glass. Skinner also used the term "experiment" when describing the crib, and this association with laboratory animal experimentation discouraged the crib's commercial success, although several companies attempted to produce and sell it.
In 2004, therapist Lauren Slater repeated a claim that Skinner may have used his baby daughter in some of his experiments. His outraged daughter publicly accused Slater of not making a good-faith effort to check her facts before publishing. Deborah was quoted by the Guardian as saying: "According to Opening Skinner's Box: Great Psychological Experiments of the Twentieth Century, my father, who was a psychologist based at Harvard from the 1950s to the 90s, "used his infant daughter, Deborah, to prove his theories by putting her for a few hours a day in a laboratory box . . . in which all her needs were controlled and shaped". But it's not true. My father did nothing of the sort."
Teaching machine
The teaching machine was a mechanical device whose purpose was to administer a curriculum of programmed learning. The machine embodies key elements of Skinner's theory of learning and had important implications for education in general and classroom instruction in particular.
In one incarnation, the machine was a box that housed a list of questions that could be viewed one at a time through a small window. (see picture.) There was also a mechanism through which the learner could respond to each question. Upon delivering a correct answer, the learner would be rewarded.
Skinner advocated the use of teaching machines for a broad range of students (e.g., preschool aged to adult) and instructional purposes (e.g., reading and music). For example, one machine that he envisioned could teach rhythm.
The instructional potential of the teaching machine stemmed from several factors: it provided automatic, immediate and regular reinforcement without the use of aversive control; the material presented was coherent, yet varied and novel; the pace of learning could be adjusted to suit the individual. As a result, students were interested, attentive, and learned efficiently by producing the desired behavior, "learning by doing."
Teaching machines, though perhaps rudimentary, were not rigid instruments of instruction. They could be adjusted and improved based upon the students' performance. For example, if a student made many incorrect responses, the machine could be reprogrammed to provide less advanced prompts or questions—the idea being that students acquire behaviors most efficiently if they make few errors. Multiple-choice formats were not well-suited for teaching machines because they tended to increase student mistakes, and the contingencies of reinforcement were relatively uncontrolled.
Not only useful in teaching explicit skills, machines could also promote the development of a repertoire of behaviors that Skinner called self-management. Effective self-management means attending to stimuli appropriate to a task, avoiding distractions, reducing the opportunity of reward for competing behaviors, and so on. For example, machines encourage students to pay attention before receiving a reward. Skinner contrasted this with the common classroom practice of initially capturing students' attention (e.g., with a lively video) and delivering a reward (e.g., entertainment) before the students have actually performed any relevant behavior. This practice fails to reinforce correct behavior and actually counters the development of self-management.
Skinner pioneered the use of teaching machines in the classroom, especially at the primary level. Today computers run software that performs similar teaching tasks, and there has been a resurgence of interest in the topic related to the development of adaptive learning systems.
Pigeon-guided missile
During World War II, the US Navy required a weapon effective against surface ships, such as the German Bismarck-class battleships. Although missile and TV technology existed, the size of the primitive guidance systems available rendered automatic guidance impractical. To solve this problem, Skinner initiated Project Pigeon, which was intended to provide a simple and effective guidance system. Skinner trained pigeons through operant conditioning to peck at images of incoming targets shown on individual camera obscura screens (Schultz-Figueroa, 2019). This system divided the nose cone of a missile into three compartments, with a pigeon placed in each. Three lenses projected an image of distant objects onto a screen in front of each bird, so that when the missile was launched from an aircraft within sight of an enemy ship, an image of the ship would appear on the screens. Each screen was hinged and connected to the bomb's guidance system through four small rubber pneumatic tubes attached to the sides of its frame, which directed a constant airflow to a pneumatic pickup system that controlled the thrusters of the bomb. The missile was thus guided toward the targeted ship by nothing more than the pigeons' pecks (Schultz-Figueroa, 2019).
Despite an effective demonstration, the project was abandoned, and eventually more conventional solutions, such as those based on radar, became available. Skinner complained that "our problem was no one would take us seriously." Before the project was completely abandoned, it was tested extensively in the laboratory. After the United States Army ultimately denied it, the United States Naval Research Laboratory picked up Skinner's research and renamed it Project ORCON, a contraction of "organic" and "control". Skinner worked closely with the US Naval Research Laboratory, continuously testing the pigeons' tracking capacity for guiding missiles to their intended targets. In the end, the pigeons' performance and accuracy relied on so many uncontrollable factors that Project ORCON, like Project Pigeon before it, was discontinued. It was never used in the field.
Verbal summator
Early in his career Skinner became interested in "latent speech" and experimented with a device he called the verbal summator. This device can be thought of as an auditory version of the Rorschach inkblots. When using the device, human participants listened to incomprehensible auditory "garbage" but often read meaning into what they heard. Thus, as with the Rorschach blots, the device was intended to yield overt behavior that projected subconscious thoughts. Skinner's interest in projective testing was brief, but he later used observations with the summator in creating his theory of verbal behavior. The device also led other researchers to invent new tests such as the tautophone test, the auditory apperception test, and the Azzageddi test.
Influence on teaching
Along with psychology, education has also been influenced by Skinner's views, which are extensively presented in his book The Technology of Teaching, as well as reflected in Fred S. Keller's Personalized System of Instruction and Ogden R. Lindsley's Precision Teaching.
Skinner argued that education has two major purposes:
to teach repertoires of both verbal and nonverbal behavior; and
to interest students in learning.
He recommended bringing students' behavior under appropriate control by providing reinforcement only in the presence of stimuli relevant to the learning task. Because he believed that human behavior can be affected by small consequences, something as simple as "the opportunity to move forward after completing one stage of an activity" can be an effective reinforcer. Skinner was convinced that, to learn, a student must engage in behavior, and not just passively receive information.
Skinner believed that effective teaching must be based on positive reinforcement which is, he argued, more effective at changing and establishing behavior than punishment. He suggested that the main thing people learn from being punished is how to avoid punishment. For example, if a child is forced to practice playing an instrument, the child comes to associate practicing with punishment and thus develops feelings of dreadfulness and wishes to avoid practicing the instrument. This view had obvious implications for the then widespread practice of rote learning and punitive discipline in education. The use of educational activities as punishment may induce rebellious behavior such as vandalism or absence.
Because teachers are primarily responsible for modifying student behavior, Skinner argued that teachers must learn effective ways of teaching. In The Technology of Teaching (1968), Skinner has a chapter on why teachers fail: He says that teachers have not been given an in-depth understanding of teaching and learning. Without knowing the science underpinning teaching, teachers fall back on procedures that work poorly or not at all, such as:
using aversive techniques (which produce escape and avoidance and undesirable emotional effects);
relying on telling and explaining ("Unfortunately, a student does not learn simply when he is shown or told.");
failing to adapt learning tasks to the student's current level; and
failing to provide positive reinforcement frequently enough.
Skinner suggests that any age-appropriate skill can be taught. The steps are
Clearly specify the action or performance the student is to learn.
Break down the task into small achievable steps, going from simple to complex.
Let the student perform each step, reinforcing correct actions.
Adjust so that the student is always successful until finally the goal is reached.
Shift to intermittent reinforcement to maintain the student's performance.
Contributions to social theory
Skinner is popularly known mainly for his books Walden Two (1948) and Beyond Freedom and Dignity (1971), for which he made the cover of Time magazine. The former describes a fictional "experimental community" in 1940s United States. The productivity and happiness of citizens in this community is far greater than in the outside world because the residents practice scientific social planning and use operant conditioning in raising their children.
Walden Two, like Thoreau's Walden, champions a lifestyle that does not support war, or foster competition and social strife. It encourages a lifestyle of minimal consumption, rich social relationships, personal happiness, satisfying work, and leisure. In 1967, Kat Kinkade and others founded the Twin Oaks Community, using Walden Two as a blueprint. The community still exists and continues to use the Planner-Manager system and other aspects of the community described in Skinner's book, though behavior modification is not a community practice.
In Beyond Freedom and Dignity, Skinner suggests that a technology of behavior could help to make a better society. We would, however, have to accept that an autonomous agent is not the driving force of our actions. Skinner offers alternatives to punishment, and challenges his readers to use science and modern technology to construct a better society.
Political views
Skinner's political writings emphasized his hopes that an effective and humane science of behavioral control – a technology of human behavior – could help with problems as yet unsolved and often aggravated by advances in technology such as the atomic bomb. Indeed, one of Skinner's goals was to prevent humanity from destroying itself. He saw political activity as the use of aversive or non-aversive means to control a population. Skinner favored the use of positive reinforcement as a means of control, citing Jean-Jacques Rousseau's novel Emile: or, On Education as an example of literature that "did not fear the power of positive reinforcement."
Skinner's book, Walden Two, presents a vision of a decentralized, localized society, which applies a practical, scientific approach and behavioral expertise to deal peacefully with social problems. (For example, his views led him to oppose corporal punishment in schools, and he wrote a letter to the California Senate that helped lead it to a ban on spanking.) Skinner's utopia is both a thought experiment and a rhetorical piece. In Walden Two, Skinner answers the problem that exists in many utopian novels – "What is the Good Life?" The book's answer is a life of friendship, health, art, a healthy balance between work and leisure, a minimum of unpleasantness, and a feeling that one has made worthwhile contributions to a society in which resources are ensured, in part, by minimizing consumption.
Skinner described his novel as "my New Atlantis", in reference to Bacon's utopia.
"'Superstition' in the Pigeon" experiment
One of Skinner's experiments examined the formation of superstition in one of his favorite experimental animals, the pigeon. Skinner placed a series of hungry pigeons in a cage attached to an automatic mechanism that delivered food to the pigeon "at regular intervals with no reference whatsoever to the bird's behavior." He discovered that the pigeons associated the delivery of the food with whatever chance actions they had been performing as it was delivered, and that they subsequently continued to perform these same actions.
Skinner suggested that the pigeons behaved as if they were influencing the automatic mechanism with their "rituals", and that this experiment shed light on human behavior.
Modern behavioral psychologists have disputed Skinner's "superstition" explanation for the behaviors he recorded. Subsequent research (e.g. Staddon and Simmelhag, 1971), while finding similar behavior, failed to find support for Skinner's "adventitious reinforcement" explanation for it. By looking at the timing of different behaviors within the interval, Staddon and Simmelhag were able to distinguish two classes of behavior: the terminal response, which occurred in anticipation of food, and interim responses, which occurred earlier in the interfood interval and were rarely contiguous with food. Terminal responses seem to reflect classical (as opposed to operant) conditioning, rather than adventitious reinforcement, guided by a process like that observed in 1968 by Brown and Jenkins in their "autoshaping" procedures. The causation of interim activities (such as the schedule-induced polydipsia seen in a similar situation with rats) also cannot be traced to adventitious reinforcement, and its details are still obscure (Staddon, 1977).
Criticism
Noam Chomsky
American linguist Noam Chomsky published a review of Skinner's Verbal Behavior in the linguistics journal Language in 1959. Chomsky argued that Skinner's attempt to use behaviorism to explain human language amounted to little more than word games. Conditioned responses could not account for a child's ability to create or understand an infinite variety of novel sentences. Chomsky's review has been credited with launching the cognitive revolution in psychology and other disciplines. Skinner, who rarely responded directly to critics, never formally replied to Chomsky's critique, but endorsed Kenneth MacCorquodale's 1972 reply.
Many academics in the 1960s believed that Skinner's silence on the question meant Chomsky's criticism had been justified. But MacCorquodale wrote that Chomsky's criticism did not focus on Skinner's Verbal Behavior, but rather attacked a confusion of ideas from behavioral psychology. MacCorquodale also regretted Chomsky's aggressive tone. Furthermore, Chomsky had aimed at delivering a definitive refutation of Skinner by citing dozens of animal instinct and animal learning studies. On the one hand, he argued that the studies on animal instinct proved that animal behavior is innate, and therefore Skinner was mistaken. On the other, Chomsky's opinion of the studies on learning was that one cannot draw an analogy from animal studies to human behavior—or, that research on animal instinct refutes research on animal learning.
Chomsky also reviewed Skinner's Beyond Freedom and Dignity, using the same basic lines of argument as his Verbal Behavior review. Among Chomsky's criticisms were that Skinner's laboratory work could not be extended to humans; that when it was extended to humans it represented a "scientistic" attempt to emulate science that was not itself scientific; that Skinner was not a scientist because he rejected the hypothetico-deductive model of theory testing; and that Skinner had no science of behavior.
Psychodynamic psychology
Skinner has been repeatedly criticized for his supposed animosity towards Sigmund Freud, psychoanalysis, and psychodynamic psychology. Some have argued, however, that Skinner shared several of Freud's assumptions, and that he was influenced by Freudian points of view in more than one field, among them the analysis of defense mechanisms, such as repression. To study such phenomena, Skinner even designed his own projective test, the "verbal summator" described above.
J. E. R. Staddon
As understood by Skinner, ascribing dignity to individuals involves giving them credit for their actions. To say "Skinner is brilliant" means that Skinner is an originating force. If Skinner's determinist theory is right, he is merely the focus of his environment. He is not an originating force and he had no choice in saying the things he said or doing the things he did. Skinner's environment and genetics both allowed and compelled him to write his book. Similarly, the environment and genetic potentials of the advocates of freedom and dignity cause them to resist the reality that their own activities are deterministically grounded. J. E. R. Staddon has argued the compatibilist position: contrary to what Skinner believed, his determinism is not in any way contradictory to traditional notions of reward and punishment.
Professional career
Roles
1936–1937 Instructor, University of Minnesota
1937–1939 Assistant Professor, University of Minnesota
1939–1945 Associate Professor, University of Minnesota
1945–1948 Professor and chair, Indiana University
1947–1948 William James Lecturer, Harvard University
1948–1958 Professor, Harvard University
1958–1974 Professor of Psychology, Harvard University
1949–1950 President, Midwestern Psychological Association
1954–1955 President, Eastern Psychological Association
1966–1967 President, Pavlovian Society of North America
1974–1990 Professor of Psychology and Social Relations Emeritus, Harvard University
Awards
1926 AB, Hamilton College
1930 MA, Harvard University
1930–1931 Thayer Fellowship
1931 PhD, Harvard University
1931–1932 Walker Fellowship
1931–1933 National Research Council Fellowship
1933–1936 Junior Fellowship, Harvard Society of Fellows
1942 Guggenheim Fellowship (postponed until 1944–1945)
1942 Howard Crosby Warren Medal, Society of Experimental Psychologists
1958 Distinguished Scientific Contribution Award, American Psychological Association
1958–1974 Edgar Pierce Professor of Psychology, Harvard University
1964–1974 Career Award, National Institute of Mental Health
1966 Edward Lee Thorndike Award, American Psychological Association
1968 National Medal of Science, National Science Foundation
1969 Overseas Fellow in Churchill College, Cambridge
1971 Gold Medal Award, American Psychological Foundation
1971 Joseph P. Kennedy Jr. Foundation for Mental Retardation International Award
1972 Humanist of the Year, American Humanist Association
1972 Creative Leadership in Education Award, New York University
1972 Career Contribution Award, Massachusetts Psychological Association
1978 Distinguished Contributions to Educational Research and Development Award, American Educational Research Association
1978 National Association for Retarded Citizens Award
1985 Award for Excellence in Psychiatry, Albert Einstein College of Medicine
1985 President's Award, New York Academy of Science
1990 William James Fellow Award, American Psychological Society
1990 Lifetime Achievement Award, American Psychological Association
1991 Outstanding Member and Distinguished Professional Achievement Award, Society for Performance Improvement
1997 Scholar Hall of Fame Award, Academy of Resource and Development
2011 Committee for Skeptical Inquiry Pantheon of Skeptics—Inducted
2024 Ig Nobel Peace Prize for his work on the pigeon-guided bomb project.
Honorary degrees
Skinner received honorary degrees from:
Alfred University
Ball State University
Dickinson College
Hamilton College
Harvard University
Hobart and William Smith Colleges
Johns Hopkins University
Keio University
Long Island University C. W. Post Campus
McGill University
North Carolina State University
Ohio Wesleyan University
Ripon College
Rockford College
Tufts University
University of Chicago
University of Exeter
University of Missouri
University of North Texas
Western Michigan University
University of Maryland, Baltimore County.
Honorary societies
Skinner was inducted to the following honorary societies:
PSI CHI International Honor Society in Psychology
American Philosophical Society
American Academy of Arts and Sciences
United States National Academy of Sciences
Bibliography
1938. The Behavior of Organisms: An Experimental Analysis.
1948. Walden Two. (revised 1976 ed.).
1953. Science and Human Behavior.
1957. Schedules of Reinforcement, with C. B. Ferster.
1957. Verbal Behavior.
1961. The Analysis of Behavior: A Program for Self Instruction, with James G. Holland.
1968. The Technology of Teaching. New York: Appleton-Century-Crofts.
1969. Contingencies of Reinforcement: A Theoretical Analysis.
1971. Beyond Freedom and Dignity.
1974. About Behaviorism.
1976. Particulars of My Life: Part One of an Autobiography.
1978. Reflections on Behaviorism and Society.
1979. The Shaping of a Behaviorist: Part Two of an Autobiography.
1980. Notebooks, edited by Robert Epstein.
1982. Skinner for the Classroom, edited by R. Epstein.
1983. Enjoy Old Age: A Program of Self-Management, with M. E. Vaughan.
1983. A Matter of Consequences: Part Three of an Autobiography.
1987. Upon Further Reflection.
1989. Recent Issues in the Analysis of Behavior.
Cumulative Record: A Selection of Papers, 1959, 1961, 1972 and 1999 as Cumulative Record: Definitive Edition. (paperback)
Includes reprint: Skinner, B. F. 1945. "Baby in a Box." Ladies' Home Journal. — Skinner's original, personal account of the much-misrepresented "Baby in a box" device.
See also
Applied behavior analysis
Back to Freedom and Dignity
References
Notes
Citations
Further reading
Chiesa, M. (2004). Radical Behaviorism: The Philosophy and the Science.
Epstein, Robert (1997). "Skinner as self-manager." Journal of Applied Behavior Analysis 30:545–69. Retrieved 2 June 2005 – via ENVMED.rochester.edu
Sundberg, M. L. (2008) The VB-MAPP: The Verbal Behavior Milestones Assessment and Placement Program
Basil-Curzon, L. (2004) Teaching in Further Education: An Outline of Principles and Practice
Hardin, C.J. (2004) Effective Classroom Management
Kaufhold, J. A. (2002) The Psychology of Learning and the Art of Teaching
Bjork, D. W. (1993) B. F. Skinner: A Life
Dews, P. B., ed. (1970) Festschrift For B. F. Skinner. New York: Appleton-Century-Crofts.
Evans, R. I. (1968) B. F. Skinner: the man and his ideas
Nye, Robert D. (1979) What Is B. F. Skinner Really Saying? Englewood Cliffs, NJ: Prentice-Hall.
Rutherford, A. (2009) Beyond the box: B. F. Skinner's technology of behavior from laboratory to life, 1950s–1970s. Toronto: University of Toronto Press.
Sagal, P. T. (1981) Skinner's Philosophy. Washington, DC: University Press of America.
Smith, D. L. (2002). On Prediction and Control. B. F. Skinner and the Technological Ideal of Science. In W. E. Pickren & D. A. Dewsbury, (Eds.), Evolving Perspectives on the History of Psychology, Washington, D.C.: American Psychological Association.
Swirski, Peter (2011) "How I Stopped Worrying and Loved Engineering or Communal Life, Adaptations, and B.F. Skinner's Walden Two". American Utopia and Social Engineering in Literature, Social Thought, and Political History. New York, Routledge.
Wiener, D. N. (1996) B. F. Skinner: benign anarchist
Wolfgang, C.H. and Glickman, Carl D. (1986) Solving Discipline Problems. Allyn and Bacon, Inc.
External links
B. F. Skinner Foundation homepage
National Academy of Sciences biography
I was not a lab rat, response by Skinner's daughter about the "baby box"
Audio Recordings Society for Experimental Analysis of Behavior
Reprint of "the Minotaur of the Behaviorist Maze: Surviving Stanford's Learning House in the 1970s: Journal of Humanistic Psychology, Vol. 51, Number 3, July 2011. 266–272.
1904 births
1990 deaths
20th-century American inventors
20th-century atheists
20th-century American non-fiction writers
20th-century American philosophers
Action theorists
American atheists
20th-century American psychologists
American skeptics
Behaviourist psychologists
Burials at Mount Auburn Cemetery
Deaths from leukemia in Massachusetts
Determinists
Ethologists
Hamilton College (New York) alumni
Harvard Graduate School of Arts and Sciences alumni
Harvard University Department of Psychology faculty
Ig Nobel laureates
Members of the United States National Academy of Sciences
National Medal of Science laureates
People from Susquehanna County, Pennsylvania
Philosophers from Massachusetts
Philosophers from Pennsylvania
Philosophers from Minnesota
American philosophers of culture
American philosophers of education
American philosophers of language
American philosophers of mind
Philosophers of psychology
American philosophers of science
American philosophers of technology
American political philosophers
University of Minnesota faculty
Writers from Cambridge, Massachusetts
20th-century American zoologists
American educational psychologists
Members of the American Philosophical Society
APA Distinguished Scientific Award for an Early Career Contribution to Psychology recipients | B. F. Skinner | Biology | 8,968 |
72,628,213 | https://en.wikipedia.org/wiki/Helle%20Ploug | Helle Ploug is a marine scientist known for her work on particles in seawater. She is a professor at the University of Gothenburg, and was named a fellow of the Association for the Sciences of Limnology and Oceanography in 2017.
Education and career
Ploug grew up in Denmark. She has an M.Sc. (1992) and a Ph.D. (1996) from Aarhus University. Following her Ph.D. she did postdoctoral work at the Max Planck Institute for Marine Microbiology and the University of Copenhagen. Starting in 2006 she was a scientist at the Alfred Wegener Institute for Polar and Marine Research, and in 2008 she moved to Stockholm University where she had a Marie Curie fellowship. In 2006 she became an associate professor at the University of Gothenburg where she was promoted to professor in 2013.
Research
Ploug's early research used fiber optic sensors to measure light in marine sediments. She went on to examine how particles assemble in marine systems. Her work on particles includes developing methods to quantify bacterial use of particles, and the implications for consumption of particles produced by copepods. Ploug has developed methods to measure how fast particles sink through the ocean and the rate at which sinking particles are converted into carbon dioxide. Her recent research has focused on measurements of biogeochemical cycling at the single-cell level using nanoscale secondary ion mass spectrometry.
Selected publications
Awards and honors
In 2017 Ploug was named a fellow of the Association for the Sciences of Limnology and Oceanography.
References
External links
Living people
Women climatologists
Aarhus University alumni
Academic staff of the University of Gothenburg
Women oceanographers
Biogeochemists
Year of birth missing (living people) | Helle Ploug | Chemistry | 346 |
30,265,855 | https://en.wikipedia.org/wiki/Mercury%28I%29%20oxide | Mercury(I) oxide, also known as mercurous oxide, is an inorganic metal oxide with the chemical formula Hg2O.
It is a brown/black powder, insoluble in water but soluble in nitric acid. With hydrochloric acid, it reacts to form calomel, Hg2Cl2. Mercury(I) oxide is toxic but without taste or smell. It is chemically unstable and converts to mercury(II) oxide and mercury metal.
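As a sketch of the chemistry described above (reaction conditions are not given in this article), the reaction with hydrochloric acid and the decomposition can be written as the following balanced equations:
Hg2O + 2 HCl → Hg2Cl2 + H2O
Hg2O → HgO + Hg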
References
Oxides
Mercury(I) compounds
Inorganic compounds | Mercury(I) oxide | Chemistry | 112 |
53,347,752 | https://en.wikipedia.org/wiki/Home%20idle%20load | Home idle load is the continuous residential electric energy consumption as measured by smart meters. It differs from standby power (loads) in that it includes energy consumption by devices that cycle on and off within the hourly period of standard smart meters (such as fridges, aquarium heaters, wine coolers, etc.). As such, home idle loads can be measured accurately by smart meters. As at 2014, home idle load constituted an average of 32% of household electricity consumption in the U.S.
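As a rough worked example of what a continuous idle load means in practice, the short Python sketch below converts a steady load into annual energy and cost. The wattage and tariff used are illustrative assumptions, not figures from this article.
# Rough annual impact of a continuous ("idle") load.
idle_load_watts = 300        # continuous idle load in watts (assumed figure)
price_per_kwh = 0.13         # electricity price in USD per kWh (assumed figure)
hours_per_year = 24 * 365    # 8,760 hours in a year
energy_kwh = idle_load_watts * hours_per_year / 1000   # convert watt-hours to kWh
annual_cost = energy_kwh * price_per_kwh
print(f"{energy_kwh:.0f} kWh/year, about ${annual_cost:.0f}/year")
# Prints: 2628 kWh/year, about $342/year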
Type of devices
The primary categories of devices that contribute to Home Idle Load include:
Electronic devices that consume electricity while not being actively used (including televisions, game consoles, digital picture frames, etc.)
Home infrastructure devices (including analog thermostats, doorbells, telephones, clocks, GFCI outlets, smoke alarms, continuous hot water recirculation pumps, etc.).
Any type of device used to maintain a continuous temperature differential (including freezers, icemakers, refrigerators, wine coolers, terrarium heaters, heated floors, instant hot water dispensers, etc.). Although such devices may need to stay on continuously, more recent models have proven to be more efficient and can result in considerably lower home idle loads.
Reducing home idle load
Approaches to reduce home idle loads include:
Disabling electronic devices with standby power loads either manually (unplugging) or by managing power strips (including smart power socket types)
Using a timer switch that stops electric consumption from devices when not in use
Using a smart power strip with a master outlet that manages electricity for multiple devices
Replacing older (or malfunctioning) devices with more efficient options
References
Electricity
Energy conservation
Environmental impact of the energy industry
Electronics and the environment
Electric power | Home idle load | Physics,Engineering | 360 |
89,371 | https://en.wikipedia.org/wiki/Combinational%20logic | In automata theory, combinational logic (also referred to as time-independent logic) is a type of digital logic that is implemented by Boolean circuits, where the output is a pure function of the present input only. This is in contrast to sequential logic, in which the output depends not only on the present input but also on the history of the input. In other words, sequential logic has memory while combinational logic does not.
Combinational logic is used in computer circuits to perform Boolean algebra on input signals and on stored data. Practical computer circuits normally contain a mixture of combinational and sequential logic. For example, the part of an arithmetic logic unit, or ALU, that does mathematical calculations is constructed using combinational logic. Other circuits used in computers, such as half adders, full adders, half subtractors, full subtractors, multiplexers, demultiplexers, encoders and decoders are also made by using combinational logic.
Practical design of combinational logic systems may require consideration of the finite time required for practical logical elements to react to changes in their inputs. Where an output is the result of the combination of several different paths with differing numbers of switching elements, the output may momentarily change state before settling at the final state, as the changes propagate along different paths.
Representation
Combinational logic is used to build circuits that produce specified outputs from certain inputs. The construction of combinational logic is generally done using one of two methods: a sum of products, or a product of sums. Consider the following truth table:
Using sum of products, all logical statements which yield true results are summed, giving the result:
Using Boolean algebra, the result simplifies to the following equivalent of the truth table:
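Since the article's own truth table and resulting formulas are not reproduced above, the following Python sketch illustrates the sum-of-products idea on a hypothetical three-input truth table: one AND term (product) is written per row whose output is true, and the terms are then ORed (summed) together. The minterms listed are illustrative assumptions, not the article's example.
from itertools import product

# Hypothetical rows of (A, B, C) for which the output is True.
minterms = [(False, False, False), (True, False, True), (True, True, True)]

def sum_of_products(a, b, c):
    # OR together one AND term per true row of the truth table.
    return any(a == ra and b == rb and c == rc for (ra, rb, rc) in minterms)

# Reproduce the full truth table implemented by the expression.
for a, b, c in product([False, True], repeat=3):
    print(a, b, c, sum_of_products(a, b, c))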
Logic formula minimization
Minimization (simplification) of combinational logic formulas is done using the following rules based on the laws of Boolean algebra:
With the use of minimization (sometimes called logic optimization), a simplified logical function or circuit may be arrived at, and the combinational logic circuit becomes smaller and easier to analyse, use, or build.
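The simplification rules themselves are not reproduced above, but as an illustration, SymPy's logic module can perform this kind of Boolean minimization. The minterms and the expression below are hypothetical examples, not the article's.
from sympy import symbols
from sympy.logic.boolalg import SOPform, simplify_logic

A, B, C = symbols('A B C')

# Minimal sum-of-products form covering some hypothetical minterms
# (each given as an [A, B, C] bit pattern for which the output is 1).
print(SOPform([A, B, C], minterms=[[0, 0, 0], [1, 0, 1], [1, 1, 1]]))

# simplify_logic minimizes an arbitrary Boolean expression using the
# laws of Boolean algebra.
print(simplify_logic((A & C) | (A & B & C) | (~A & ~B & ~C)))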
See also
Asynchronous circuit
Field-programmable gate array
Formal verification
Ladder logic
Programmable logic controller
Relay logic
Sequential logic
Tseytin transformation
References
External links
Logic in computer science
Digital electronics | Combinational logic | Mathematics,Engineering | 478 |
8,403,792 | https://en.wikipedia.org/wiki/Stofor | Stofor, pronounced as in "Stow Four", is a store and forward message switching system designed by Fenwood Designs Ltd, UK in 1980.
Market and specification
Stofor was aimed squarely at the bottom end of the market, and its competitors were from companies such as Chernikeeff (now John Lilley and Gillie Ltd) and Racal, but Stofor was soon outselling both with ease. The Stofor range was based on a 4 MHz Zilog Z80 processor with 64K RAM and provided from 4 to 64 ports. Early models, and some later ones, were floppy disk based, but later and larger versions had 10 MB Winchester-technology hard disks for storage. Stofor was the only message switching system of its size to boast a custom text editor designed by Fenwood with the needs of the telex user in mind. Its main workload was in the sending and receiving of telex messages, but it was also put to work in a wide range of other communications areas including fax.
The initial design was for a system that would replace a London commodity broker's antiquated telex room, containing 12 telex machines, with a computer for sending and receiving messages plus a number of VDUs (computer displays) for preparing and editing messages. The resulting Stofor system was an instant hit and many orders were placed when the broker's associates and competitors saw the system in operation. Additional options such as direct input from dedicated word processors, leased line working and several other features were soon requested and incorporated.
Users
Stofor became very popular in the City of London and was in use by banks, shipping brokers, commodity traders and insurance companies. Digital Equipment Corporation used one as the centre of its European message distribution network in Reading and others were in use by a number of household names in the food, chemical and engineering industries. Several were supplied to INMARSAT to provide telex links via satellite for ships at sea, and to the Lockheed subsidiary Memrykord, who provided international flight-planning services.
Manufacture
When Stofor was born, Fenwood employed about six staff, but this grew rapidly until there were forty-eight, covering all aspects such as design, procurement, production, software writing, sales and customer support. Sadly, with the rapid growth of fax and then the advent of email, demand for message switching systems such as Stofor died away and Fenwood moved into other markets, finally being wound up in the late 1990s. During their 'lives', Fenwood and Stofor were based in Farncombe, Godalming and Guildford, all in Surrey, UK, and in Aldershot, Hampshire, UK.
Designers
Stofor was the result of work by Fenwood's three directors: John Kashel, Sales and marketing director, who came up with the concept and basic specification; Bob Fearnley, Technical Director, who designed the hardware and Tom Grainger, Software Director, who wrote the software. The original software for the product had been written by a third party but, as they let Fenwood down badly on delivery dates, Tom Grainger completed the job and eventually rewrote all the software in-house.
See also
Fax
Network switch
Store and forward delay
Store-and-forward switching center
References
Source: R.M Fearnley, Technical Director, 1977-1998
Networking hardware | Stofor | Engineering | 687 |
602,490 | https://en.wikipedia.org/wiki/Sober%20space | In mathematics, a sober space is a topological space X such that every (nonempty) irreducible closed subset of X is the closure of exactly one point of X: that is, every nonempty irreducible closed subset has a unique generic point.
Definitions
Sober spaces have a variety of cryptomorphic definitions, which are documented in this section. In each case below, replacing "unique" with "at most one" gives an equivalent formulation of the T0 axiom. Replacing it with "at least one" is equivalent to the property that the T0 quotient of the space is sober, which is sometimes referred to as having "enough points" in the literature.
With irreducible closed sets
A closed set is irreducible if it cannot be written as the union of two proper closed subsets. A space is sober if every nonempty irreducible closed subset is the closure of a unique point.
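Stated symbolically, this is just a restatement of the definition above:
X \text{ is sober} \iff \text{for every nonempty irreducible closed } C \subseteq X \text{ there is a unique } x \in X \text{ with } C = \overline{\{x\}}.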
In terms of morphisms of frames and locales
A topological space X is sober if every map that preserves all joins and all finite meets from its partially ordered set of open subsets to the two-element lattice {0, 1} (the open-set lattice of the one-point space) is the inverse image of a unique continuous function from the one-point space to X.
This may be viewed as a correspondence between the notion of a point in a locale and a point in a topological space, which is the motivating definition.
Using completely prime filters
A filter F of open sets is said to be completely prime if for any family of open sets (Ui) such that the union of the Ui belongs to F, we have that Ui belongs to F for some i. A space X is sober if each completely prime filter is the neighbourhood filter of a unique point in X.
In terms of nets
A net is self-convergent if it converges to every one of its own points, or equivalently if its eventuality filter is completely prime. A net that converges to a point x converges strongly if it can only converge to points in the closure of x. A space is sober if every self-convergent net converges strongly to a unique point.
In particular, a space is T1 and sober precisely if every self-convergent net is constant.
As a property of sheaves on the space
A space X is sober if every functor from the category of sheaves Sh(X) to Set that preserves all finite limits and all small colimits must be the stalk functor of a unique point x.
Properties and examples
Any Hausdorff (T2) space is sober (the only irreducible subsets being points), and all sober spaces are Kolmogorov (T0), and both implications are strict.
Sobriety is not comparable to the T1 condition:
an example of a T1 space which is not sober is an infinite set with the cofinite topology, the whole space being an irreducible closed subset with no generic point;
an example of a sober space which is not T1 is the Sierpinski space.
Moreover T2 is stronger than T1 and sober, i.e., while every T2 space is at once T1 and sober, there exist spaces that are simultaneously T1 and sober, but not T2. One such example is the following: let X be the set of real numbers, with a new point p adjoined; the open sets being all real open sets, and all cofinite sets containing p.
Sobriety of X is precisely a condition that forces the lattice of open subsets of X to determine X up to homeomorphism, which is relevant to pointless topology.
Sobriety makes the specialization preorder a directed complete partial order.
Every continuous directed complete poset equipped with the Scott topology is sober.
Finite T0 spaces are sober.
The prime spectrum Spec(R) of a commutative ring R with the Zariski topology is a compact sober space. In fact, every spectral space (i.e. a compact sober space for which the collection of compact open subsets is closed under finite intersections and forms a base for the topology) is homeomorphic to Spec(R) for some commutative ring R. This is a theorem of Melvin Hochster.
More generally, the underlying topological space of any scheme is a sober space.
The subset of Spec(R) consisting only of the maximal ideals, where R is a commutative ring, is not sober in general.
See also
Stone duality, on the duality between topological spaces that are sober and frames (i.e. complete Heyting algebras) that are spatial.
References
Further reading
General topology
Separation axioms
Properties of topological spaces | Sober space | Mathematics | 929 |
6,611,405 | https://en.wikipedia.org/wiki/List%20of%20stars%20in%20Apus | This is the list of notable stars in the constellation Apus, sorted by decreasing brightness.
See also
Lists of stars by constellation
References
List
Apus | List of stars in Apus | Astronomy | 31 |
54,741,842 | https://en.wikipedia.org/wiki/Nanoparticle%20deposition | Nanoparticle deposition refers to the process of attaching nanoparticles to solid surfaces called substrates to create coatings of nanoparticles. The coatings can have a monolayer or a multilayer and organized or unorganized structure based on the coating method used. Nanoparticles are typically difficult to deposit due to their physical properties.
Challenges
Nanoparticles can be made from different materials such as metals, ceramics and polymers. The stability of the nanoparticles can be an issue as nanoparticles have a tendency to lower their very high surface energy, which originates from their high surface-to-bulk ratio. Bare nanoparticles tend to stabilize themselves either by sorption of molecules from the surroundings or by lowering the surface area through coagulation and agglomeration. Usually the formation of these aggregates is unwanted. The tendency of a nanoparticle to coagulate can be controlled by modifying the surface layer. In a liquid medium, suitable ligand molecules are commonly attached to the nanoparticle surface, as they provide solubility in suitable solvents and prevent coagulation.
Deposition methods
There are multiple different coating methods available to deposit nanoparticles. The methods differ by their ability to control particle packing density and layer thickness, ability to use different particles and the complexity of the method and the instrumentation needed.
Langmuir-Blodgett
In the Langmuir-Blodgett method, the nanoparticles are injected at the air-water interface in a special Langmuir-Blodgett trough. The floating particles are compressed closer to each other with motorized barriers, which allows the packing density of the particles to be controlled. After compressing the particles to the desired packing density, they are transferred onto a solid substrate using vertical (Langmuir-Blodgett) or horizontal (Langmuir-Schaefer) dipping to create a monolayer coating. Controlled multilayer coatings can be made by repeating the dipping procedure multiple times.
The benefits of the Langmuir-Blodgett method include firm control over the packing density and the layer thickness achieved, which have been shown to be better than with other methods; the ability to use different shapes and materials of substrates and particles; and the possibility to characterize the particle layer during deposition using, for example, a Brewster angle microscope. As a disadvantage, a successful Langmuir-Blodgett deposition requires optimization of multiple measurement parameters such as dipping speed, temperature and packing density.
Dip coating and spin coating
The spin and dip coating methods are simple methods for nanoparticle deposition. They are useful tools especially in creating self-assembled layers and films where the packing density isn't critical. Accurate and vibration-free sample withdrawal speeds can be used to control the film thickness. Creating high-density monolayers is typically very difficult since the methods lack packing density control. Also, the volume of nanoparticle suspension required for both spin coating and dip coating is rather large, which may be an issue when using expensive nanoparticle materials.
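The article gives no quantitative relation, but for orientation, the entrained wet-film thickness in dip coating is often estimated with the Landau–Levich–Derjaguin relation (not from the source; h is the film thickness, η the viscosity, U the withdrawal speed, γ the surface tension, ρ the density and g the gravitational acceleration):
h \approx 0.94\,\frac{(\eta U)^{2/3}}{\gamma^{1/6}(\rho g)^{1/2}}
This is one way to see why the withdrawal speed is the main handle on film thickness in dip coating.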
Other methods
Other possible deposition methods include methods utilizing particle self-assembly by solvent evaporation, doctor blade coating, chemical vapor deposition and transfer printing. Some of these methods, like solvent evaporation, are extremely simple but produce low-quality films. Other methods such as chemical vapor deposition are effective for certain types of particles and substrates but are limited in the particle types that can be used and require heavier instrumentation investments. Hybrid methods, such as combining self-assembly with Langmuir-Blodgett deposition, have also been used.
Nanoparticle coating applications
Coatings and thin films made from nanoparticles are being used in various applications including displays, sensors, medical devices, energy storages and energy harvesting. Examples include
Using graphene oxide for applications in electronics
Using nanoparticles of metal oxides, carbon nanotubes and quantum dots in photovoltaics, displays and sensors
Using polymers and nanocomposites in nanolithographic patterning
Using nanoparticles to scatter light, creating new optical effects
See also
Langmuir-Blodgett
Dip coating
Spin coating
Nanoparticle
External links
Pdf: Fabricating Highly Organized Nanoparticle Thin Films
References
Nanoparticles
Coatings | Nanoparticle deposition | Chemistry | 877 |
638,645 | https://en.wikipedia.org/wiki/Short%20QT%20syndrome | Short QT syndrome (SQT) is a very rare genetic disease of the electrical system of the heart, and is associated with an increased risk of abnormal heart rhythms and sudden cardiac death. The syndrome gets its name from a characteristic feature seen on an electrocardiogram (ECG) – a shortening of the QT interval. It is caused by mutations in genes encoding ion channels that shorten the cardiac action potential, and appears to be inherited in an autosomal dominant pattern. The condition is diagnosed using a 12-lead ECG. Short QT syndrome can be treated using an implantable cardioverter-defibrillator or medications including quinidine. Short QT syndrome was first described in 2000, and the first genetic mutation associated with the condition was identified in 2004.
Signs and symptoms
Those affected by short QT syndrome (SQT) have an increased risk of developing abnormal heart rhythms. These abnormal heart rhythms often occur at a young age. They may take relatively benign forms such as atrial fibrillation, leading to symptoms of palpitations, breathlessness, or fatigue. Accordingly, atrial fibrillation presenting in a newborn should raise the suspicion of short QT syndrome. In addition, far more dangerous heart rhythm disturbances such as ventricular fibrillation can also occur in those with short QT syndrome, leading to blackouts or even sudden death. More than a third of those with short QT present with ventricular arrhythmias or sudden cardiac death, while one in five cases are detected during family screening, and one in five cases are found incidentally after an electrocardiogram (ECG) has been recorded for another reason.
If someone with short QT syndrome is examined while their heart is beating in an abnormal rhythm such as atrial fibrillation, this can be detected by feeling their pulse. No abnormal signs will usually be found when examining someone with short QT syndrome while their heart is beating in its normal or sinus rhythm.
Cause
Short QT syndrome is a genetic disorder caused by mutations in genes responsible for producing certain ion channels within heart cells. It appears to be inherited in an autosomal dominant pattern. Some genetic variants cause an increased flow of potassium out of the cell, while others reduce the flow of calcium into the cell. The common effect of all these variants is to shorten the cardiac action potential, reflected on the surface ECG as a shortening of the QT interval. A list of genes in which variants have been associated with short QT syndrome can be found in the table below.
Mechanism
The overall effect of each of the genetic variants associated with short QT syndrome is to shorten the cardiac action potential, which in turn increases the risk of developing abnormal heart rhythms including atrial fibrillation and ventricular fibrillation. During the normal rhythm of the heart, or sinus rhythm, smooth waves of electrical activity pass regularly through the cardiac muscle. In contrast, during atrial or ventricular fibrillation, waves of electrical activation spiral through the cardiac muscle chaotically in a mass of disorganised, broken wavelets. The consequence of fibrillation is that the chambers of the heart affected by the disorganised electrical activation lose their pumping ability – fibrillation of the cardiac atria in atrial fibrillation leads to an irregular pulse, and fibrillation of the cardiac ventricles in ventricular fibrillation renders the heart unable to pump blood at all.
There are several possible mechanisms by which short action potentials might promote fibrillation. The link between these mechanisms is how the duration of the action potential influences how frequently a heart muscle cell can be excited. A shorter action potential generally allows a heart muscle cell to be excited more frequently – the refractory period is shorter.
The first mechanism, referred to as the dispersion of repolarisation, occurs because the action potential shortening seen in this condition occurs to a greater extent in some layers of the heart wall than in others. This means that at certain points in the cardiac cycle, some layers of the heart wall will have fully repolarised, and are therefore ready to contract again, while other regions are only partially repolarised and therefore are still within their refractory period and not yet able to be re-excited. If a triggering impulse arrives at this critical point in the cardiac cycle, the wavefront of electrical activation will conduct in some regions but block in others, potentially leading to wavebreak and re-entrant arrhythmias.
The second mechanism relates to the increased number of fibrillatory wavelets that can simultaneously exist if the action potential shortens, in a concept known as the arrhythmia wavelength. During fibrillation, the chaotic wavelets rotate, or re-enter, within the muscle of the heart, continually extinguishing and reforming. The volume of tissue in which each wavelet can complete a re-entrant circuit is dependent on the refractory period of the tissue and the speed at which the waves of depolarisation move – the conduction velocity. The product of the conduction velocity and refractory period is known as the wavelength. In tissue with a shorter wavelength, a wavelet can re-enter within a smaller volume of tissue. A shorter refractory period therefore allows more wavelets to exist within a given volume of tissue, reducing the chance of all wavelets simultaneously extinguishing and terminating the arrhythmia.
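As a worked illustration with round, hypothetical numbers (not taken from this article): with a conduction velocity of 0.5 m/s and a refractory period of 0.20 s, the wavelength is 0.5 × 0.20 = 0.10 m (10 cm); if the refractory period shortens to 0.12 s, the wavelength falls to 0.5 × 0.12 = 0.06 m (6 cm), so more re-entrant wavelets can coexist in the same volume of tissue.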
Diagnosis
Short QT syndrome is diagnosed primarily using an electrocardiogram (ECG), but may also take into account the clinical history, family history, and possibly genetic testing. Whilst a diagnostic scoring system has been proposed that incorporates all of these factors (the Gollob score), it is uncertain whether this score is useful for diagnosis or risk stratification, and the Gollob score has not been universally accepted by international consensus guidelines. There continues to be uncertainty regarding the precise QT interval cutoff that should be used for diagnosis.
12-lead ECG
The mainstay of diagnosis of short QT syndrome is the 12-lead ECG. The precise QT duration used to diagnose the condition remains controversial, with consensus guidelines giving cutoffs of 330 ms, 340 ms, or even 360 ms when other clinical, familial, or genetic factors are present. The QT interval normally varies with heart rate, but this variation occurs to a lesser extent in those with short QT syndrome. It is therefore recommended that the QT interval is assessed at heart rates close to 60 beats per minute. Other features that may be seen on the ECG in short QT syndrome include tall, peaked T-waves and PR segment depression.
Other features supporting diagnosis
Other features that support a diagnosis of short QT syndrome include: a history of ventricular fibrillation or ventricular tachycardia despite an apparently structurally normal heart; a family history of confirmed short QT syndrome; a family history of sudden cardiac death aged <40 years; and identification of a genetic mutation consistent with short QT syndrome.
Invasive electrophysiological studies, in which wires are passed into the heart to stimulate and record the heart's electrical impulses, are not currently recommended for diagnosing short QT syndrome or predicting the risk of sudden cardiac death.
Treatment
The treatment for short QT syndrome is aimed at preventing abnormal heart rhythms and reducing the risk of sudden cardiac death. It has been difficult to experimentally test potential treatments as the condition is very rare, so the evidence for treatment effectiveness comes largely from consensus opinion. In addition to treating the person identified as having the condition, screening of family members may be recommended.
Implantable cardioverter-defibrillator
In those with short QT syndrome who have already experienced a life-threatening abnormal heart rhythm such as ventricular fibrillation, an implantable cardioverter-defibrillator (ICD) may be recommended to reduce the chance of sudden death. This device is implanted under the skin and can continually monitor the heart rhythm. If the device detects a dangerous heart rhythm disturbance it can deliver a small electric shock with the aim of restoring a rhythm. Implanting an ICD in someone with short QT syndrome who has not yet experienced a life-threatening arrhythmia is more controversial but may be considered.
Medication
Medication aimed at correcting the ECG abnormality – the shortened QT interval – has been tried. Quinidine, a class Ia antiarrhythmic agent, has been shown to partially correct the QT interval and make the heart more resilient to artificially-induced abnormal heart rhythms, although it is still uncertain at present whether this translates to a lower risk of sudden death. Sotalol, another antiarrhythmic, may prolong the QT in some subtypes of short QT syndrome. Other medications including beta blockers, flecainide, and amiodarone have been tried, but at present there is little evidence to support their use.
Drugs can also be used to treat the less dangerous abnormal heart rhythm that is also associated with short QT – atrial fibrillation. Propafenone, a class 1c antiarrhythmic, may be helpful in those with short QT to prevent atrial fibrillation. Those who develop atrial fibrillation may also require medication to decrease blood clotting in order to reduce the risk of stroke.
Epidemiology
Short QT syndrome is a very rare condition with, as of 2018, fewer than 300 cases described in the medical literature. As a genetic syndrome, those affected are born with the condition. Symptoms can occur in newborns, potentially presenting as sudden infant death syndrome. Males and females are equally likely to be affected, and have a similar risk of sudden cardiac death.
Prognosis
The rarity of short QT syndrome makes calculating prognosis accurately difficult. The risk of sudden cardiac death has been estimated at 0.8% per year, leading to a cumulative risk of sudden cardiac death of 41% by the age of 40. A previous history of cardiac arrest predicts a higher likelihood of further dangerous arrhythmias. Some have suggested that those with the shortest QT intervals may have a higher risk of arrhythmias, but this view has not been supported by all. The findings from invasive electrophysiological studies do not predict the risk of cardiac arrest in an individual with short QT syndrome.
History
The first report of short QT syndrome to be published was in 2000, describing a family with short QT intervals on the 12-lead ECG, atrial fibrillation occurring at a young age, and an unrelated patient who had a sudden cardiac death associated with a short QT interval. The association between short QT and sudden cardiac death was described in 2003, and the first gene associated with the condition was identified in 2004. Criteria for diagnosing Short QT syndrome were proposed in 2011. Recently the first animal model of short QT syndrome was presented, enabling more in depth analysis of arrhythmia mechanisms.
See also
Channelopathy
Long QT syndrome
Brugada syndrome
Catecholaminergic polymorphic ventricular tachycardia
References
External links
Cardiac arrhythmia
Channelopathies
Autosomal dominant disorders
Syndromes affecting the heart
Cardiogenetic disorders
Single-nucleotide polymorphism associated disease | Short QT syndrome | Biology | 2,356 |
71,068,051 | https://en.wikipedia.org/wiki/Hong%20Kong%20Astronomical%20Society | The Hong Kong Astronomical Society (HKAS) is the first public Hong Kong astronomical body, for amateur astronomers and other interested individuals. The primary objectives of the HKAS are to promote the popular science of astronomy and other related sciences; raise public awareness of astronomy and related topics such as light pollution; offer astronomical seminars; and provide astronomical training courses to schools and companies.
The society's current mission is to enhance and share humanity's scientific understanding of the universe as a diverse and inclusive astronomical community.
Predecessor and change of name
In 1970, a group of secondary school students established the Hong Kong Amateur Astronomical Union (AAU)... which was then renamed and registered as the Hong Kong Amateur Astronomical Society (AAS), the predecessor of HKAS, in 1974 under the Cap. 151 Societies Ordinance of Hong Kong.
In 1992, the Hong Kong Amateur Astronomical Society was renamed to its current name to facilitate overseas exchanges.
Publications
Historical
Until 1974, the AAU published a single-sheet stencil-printed leaflet titled "Astronomy Information", which was distributed to members and astronomy clubs in secondary schools. The Society also strove to raise fellow members' standard in astronomy systematically by compiling lecture notes as well as organizing public lectures on basic astronomy.
In 1974, the AAS and the Sky Observers' Association (HK) jointly published the "Hong Kong Astronomical Journal", which was available for public subscription and conveyed astronomical news and information to fellow amateurs and members of the public; the society became its sole publisher two years later. A total of 34 issues were distributed, and the circulation of the journal peaked at over 1,000. In 1981, the Society terminated the journal's publication.
In 1977, HKAS was invited to write a monthly column on astronomy in Wah Kiu Yat Po, a local Chinese newspaper, until the newspaper was sold to South China Morning Post in 1991.
In 1978, the society published "Lunar Eclipse Handbook" which can be purchased publicly.
In 1998, the Hong Kong Commercial Daily invited HKAS to write for the paper's monthly astronomy column, to spread astronomical knowledge, news and information to general public.
Activities
Photographic competition and exhibition
In 1975, AAS organized the first local astrophotographic competition with the winning and participating entries exhibited at the "Astrophotographic Exhibition" at the High Block of Hong Kong City Hall to raise public interest in astronomy. More than 10,000 people attended this exhibition, which was the first large-scale exhibition on astronomy in the territory.
Hong Kong Astronomical Convention
In January 1977, AAS organized the First Hong Kong Astronomical Convention at Cheung Chau, an outlying island with a large playground, for displaying numerous large and small telescopes including a small radio telescope called a corner reflector.
The Second Hong Kong Astronomical Convention was held by AAS in 1982 jointly with the Hong Kong Space Museum at the Sai Kung Bradbury Astronomy Camp.
Radio programmes
In 1983, the AAS and RTHK launched a weekly radio programme about astronomy called "Cosmic Journey". The half-hour programme had 34 episodes and covered various astronomy topics.
In 1985, the AAS and RTHK cooperated again to present new broadcast programme "Cosmic Journey II".
In 1999, the HKAS and RTHK launched a new radio programme called "Unlimited Universe".
Participations
Local
Astronomical Training Programme for Secondary School Students
Astronomical Training Programme for Secondary Students, or ATPSS, is co-organised by the Hong Kong Space Museum, the Department of Physics of the Chinese University of Hong Kong and HKAS. It provides comprehensive training in astronomy to local full-time Secondary 4 and Secondary 5 students, and aims to cultivate and promote students' interest in natural science.
Mainland China
With Yunnan Astronomical Observatory
HKAS has been involved in a wide-field survey research project held by Yunnan Astronomical Observatory in 2015. A 0.45 m wide-field telescope was installed in a 4 m dome in Gaomeigu, Lijiang, under the collaboration between the two parties, and the society assisted in fine-tuning and field testing of the observation system and in performing observational research such as "A New Magnetically Active Binary System Discovered in Yunnan-Hong Kong Wide Field Survey".
International
In 1994, the International Occultation Timing Association (IOTA) authorized the Occultation Timing Section of the HKAS to compute and dispatch predictions of occultation events to Greater China (including mainland China, Hong Kong, Macau and Taiwan), India, Mongolia and Southeast Asian countries such as Singapore, the Philippines, Vietnam and Malaysia.
With International Astronomical Union
Among other Hong Kong astronomical institutions, HKAS is a supporter of the IAU's activities. The society supported the IAU100 NameExoWorlds event held by the IAU Office of National Outreach to select the names of the star and planet in HD 212771.
Internet presence
e-Groups, website and Facebook
In 2007, the society created a public online forum, and the society's Lunar Occultation Section resumed dissemination of planetary occultation information at its own Occultation Forum.
In 2009, the society established its own Facebook page and group, whereas the Lunar Occultation Section had its own Facebook page in July 2017.
Hong Kong Astronomy (HKAStro) mobile apps
The Hong Kong Astronomy mobile application for iOS devices was launched at Apple App Store on 19 May 2012.
The app provides daily astronomy and space-related news, observational information, astronomical activities, regional night sky conditions, a 16-day weather forecast, a cloud coverage forecast, moon phase, sunrise, sunset, moonrise, moonset, and other relevant information.
The Android version was launched on 13 July 2014, and the Windows Phone version on 31 August 2014. Apart from local users, the app is also used in mainland China, Macau and Taiwan and by overseas Chinese.
In 2017, the Hong Kong Astronomy app was chosen as a Meritorious Healthy App for Mobile Phone and Tablet Users by the Office for Film, Newspaper and Article Administration of the HKSAR Government.
References
External links
Official web page of HKAS, in Traditional Chinese
Official forum of HKAS, in Traditional Chinese
Chinese wiki page
Astronomy societies
Non-profit organisations based in Hong Kong | Hong Kong Astronomical Society | Astronomy | 1,225 |
1,586,715 | https://en.wikipedia.org/wiki/Jackson-Gwilt%20Medal | The Jackson-Gwilt Medal is an award that has been issued by the Royal Astronomical Society (RAS) since 1897. The original criteria were for the invention, improvement, or development of astronomical instrumentation or techniques; for achievement in observational astronomy; or for achievement in research into the history of astronomy. In 2017, the history of astronomy category was removed for subsequent awards and was transferred to a new award, the Agnes Mary Clerke Medal.
The frequency of the medal has varied over time. Initially, it was irregular, with gaps of between three and five years between awards. From 1968 onwards, it was awarded regularly every three years; from 2004 every two years; and since 2008 it has been awarded every year.
The award is named after Hannah Jackson née Gwilt. She was a niece of Joseph Gwilt (an architect and Fellow of the RAS) and daughter of George Gwilt (another Fellow); Hannah donated the original funds for the medal. It is the second oldest award issued by the RAS, after the Gold Medal.
List of winners
Source is unless otherwise noted.
See also
List of astronomy awards
References
Awards established in 1897
Awards of the Royal Astronomical Society
1897 establishments in the United Kingdom | Jackson-Gwilt Medal | Astronomy | 245 |
52,421,781 | https://en.wikipedia.org/wiki/Progesterone%20%28medication%29 | Progesterone (P4), sold under the brand name Prometrium among others, is a medication and naturally occurring steroid hormone. It is a progestogen and is used in combination with estrogens mainly in hormone therapy for menopausal symptoms and low sex hormone levels in women. It is also used in women to support pregnancy and fertility and to treat gynecological disorders. Progesterone can be taken by mouth, vaginally, and by injection into muscle or fat, among other routes. A progesterone vaginal ring and progesterone intrauterine device used for birth control also exist in some areas of the world.
Progesterone is well tolerated and often produces few or no side effects. However, a number of side effects are possible, for instance mood changes. If progesterone is taken by mouth or at high doses, certain central side effects including sedation, sleepiness, and cognitive impairment can also occur. The medication is a naturally occurring progestogen and hence is an agonist of the progesterone receptor (PR), the biological target of progestogens like endogenous progesterone. It opposes the effects of estrogens in various parts of the body like the uterus and also blocks the effects of the hormone aldosterone. In addition, progesterone has neurosteroid effects in the brain.
Progesterone was first isolated in pure form in 1934. It first became available as a medication later that year. Oral micronized progesterone (OMP), which allowed progesterone to be taken by mouth, was introduced in 1980. A large number of synthetic progestogens, or progestins, have been derived from progesterone and are used as medications as well. Examples include medroxyprogesterone acetate and norethisterone. In 2022, it was the 125th most commonly prescribed medication in the United States, with more than 5 million prescriptions.
Medical uses
Menopause
Progesterone is used in combination with an estrogen as a component of menopausal hormone therapy for the treatment of menopausal symptoms in peri- and postmenopausal women. It is used specifically to provide endometrial protection against unopposed estrogen-induced endometrial hyperplasia and cancer in women with intact uteruses. A 2016 systematic review of endometrial protection with progesterone recommended 100 mg/day continuous oral progesterone, 200 mg/day cyclic oral progesterone, 45 to 100 mg/day cyclic vaginal progesterone, and 100 mg alternate-day vaginal progesterone. Twice-weekly 100 mg vaginal progesterone was also recommended, but more research is needed on this dose and endometrial monitoring may be advised. Transdermal progesterone was not recommended for endometrial protection.
The REPLENISH trial was the first adequately powered study to show that continuous 100 mg/day oral progesterone with food provides adequate endometrial protection. Cyclic 200 mg/day oral progesterone has also been found to be effective in the prevention of endometrial hyperplasia, for instance in the Postmenopausal Estrogen/Progestin Interventions (PEPI) trial. However, the PEPI trial was not adequately powered to fully quantify endometrial hyperplasia or cancer risk. No adequately powered studies have assessed endometrial protection with vaginal progesterone. In any case, the Early versus Late Intervention Trial with Estradiol (ELITE) found that cyclic 45 mg/day vaginal progesterone gel showed no significant difference from placebo in endometrial cancer rates. Due to the vaginal first-pass effect, low doses of vaginal progesterone may allow for adequate endometrial protection. Although not sufficiently powered, various other smaller studies have also found endometrial protection with oral or vaginal progesterone. There is inadequate evidence for endometrial protection with transdermal progesterone cream.
Oral progesterone has been found to significantly reduce hot flashes relative to placebo. The combination of an estrogen and oral progesterone likewise reduces hot flashes. Estrogen plus oral progesterone has been found to significantly improve quality of life. The combination of an estrogen and 100 to 300 mg/day oral progesterone has been found to improve sleep outcomes. Moreover, sleep was improved to a significantly better extent than estrogen plus medroxyprogesterone acetate. This may be attributable to the sedative neurosteroid effects of progesterone. Reduction of hot flashes may also help to improve sleep outcomes. Based on animal research, progesterone may be involved in sexual function in women. However, very limited clinical research suggests that progesterone does not improve sexual desire or function in women.
The combination of an estrogen and oral progesterone has been found to improve bone mineral density (BMD) to a similar extent as an estrogen plus medroxyprogesterone acetate. Progestogens, including progesterone, may have beneficial effects on bone independent of those of estrogens, although more research is required to confirm this notion. The combination of an estrogen and oral or vaginal progesterone has been found to improve cardiovascular health in women in early menopause but not in women in late menopause. Estrogen therapy has a favorable influence on the blood lipid profile, which may translate to improved cardiovascular health. The addition of oral or vaginal progesterone has neutral or beneficial effects on these changes. This is in contrast to various progestins, which are known to antagonize the beneficial effects of estrogens on blood lipids. Progesterone, both alone and in combination with an estrogen, has been found to have beneficial effects on skin and to slow the rate of skin aging in postmenopausal women.
In the French E3N-EPIC observational study, the risk of diabetes was significantly lower in women on menopausal hormone therapy, including with the combination of an oral or transdermal estrogen and oral progesterone or a progestin.
Transgender women
Progesterone is used as a component of feminizing hormone therapy for transgender women in combination with estrogens and antiandrogens. However, the addition of progestogens to HRT for transgender women is controversial and their role is unclear. Some patients and clinicians believe anecdotally that progesterone may enhance breast development, improve mood, regulate sleep, and increase sex drive. However, there is a lack of evidence from well-designed studies to support these notions at present. In addition, progestogens can produce undesirable side effects, although bioidentical progesterone may be safer and better tolerated than synthetic progestogens like medroxyprogesterone acetate.
Because some believe that progestogens are necessary for full breast development, progesterone is sometimes used in transgender women with the intention of enhancing breast development. However, a 2014 review concluded the following on the topic of progesterone for enhancing breast development in transgender women:
Our knowledge concerning the natural history and effects of different cross-sex hormone therapies on breast development in [transgender] women is extremely sparse and based on low quality of evidence. Current evidence does not provide evidence that progestogens enhance breast development in [transgender] women. Neither do they prove the absence of such an effect. This prevents us from drawing any firm conclusion at this moment and demonstrates the need for further research to clarify these important clinical questions.
Data on menstruating women shows there is no correlation between water retention and levels of progesterone or estrogen. Despite this, some theorize that progesterone might cause temporary breast enlargement due to local fluid retention, and may thus give a misleading appearance of breast growth. Aside from a hypothetical involvement in breast development, progestogens are not otherwise known to be involved in physical feminization.
Pregnancy support
Vaginally dosed progesterone is being investigated as potentially beneficial in preventing preterm birth in women at risk for preterm birth. The initial study by Fonseca suggested that vaginal progesterone could prevent preterm birth in women with a history of preterm birth. In another study, women with a short cervix who received hormonal treatment with a progesterone gel had a reduced risk of giving birth prematurely. The hormone treatment was administered vaginally every day during the second half of pregnancy. A subsequent and larger study showed that vaginal progesterone was no better than placebo in preventing recurrent preterm birth in women with a history of a previous preterm birth, but a planned secondary analysis of the data from this trial showed that women with a short cervix at baseline benefited in two ways: a reduction in births at less than 32 weeks and a reduction in both the frequency and duration of their babies' stays in intensive care.
In another trial, vaginal progesterone was shown to be better than placebo in reducing preterm birth prior to 34 weeks in women with an extremely short cervix at baseline. An editorial by Roberto Romero discusses the role of sonographic cervical length in identifying patients who may benefit from progesterone treatment. A meta-analysis published in 2011 found that vaginal progesterone cut the risk of premature births by 42 percent in women with short cervixes. The meta-analysis, which pooled published results of five large clinical trials, also found that the treatment cut the rate of breathing problems and reduced the need for placing a baby on a ventilator.
Fertility support
Progesterone is used for luteal support in assisted reproductive technology (ART) cycles such as in vitro fertilization (IVF). It is also used to correct luteal phase deficiency to prepare the endometrium for implantation in infertility therapy and is used to support early pregnancy.
Birth control
A progesterone vaginal ring is available in a number of areas of the world for birth control during breastfeeding. An intrauterine device containing progesterone was also formerly marketed for birth control under the brand name Progestasert, including in the United States.
Gynecological disorders
Progesterone is used to control persistent anovulatory bleeding.
Other uses
Progesterone is of unclear benefit for the reversal of mifepristone-induced abortion. Evidence is insufficient to support use in traumatic brain injury.
Progesterone has been used as a topical medication applied to the scalp to treat female and male pattern hair loss. Variable effectiveness has been reported, but overall its effectiveness for this indication in both sexes has been poor.
Breast pain
Progesterone is approved under the brand name Progestogel as a 1% topical gel for local application to the breasts to treat breast pain in certain countries. It is not approved for systemic therapy. It has been found in clinical studies to inhibit estrogen-induced proliferation of breast epithelial cells and to abolish breast pain and tenderness in women with the condition. However, in one small study in women with cyclic breast pain it was ineffective. Vaginal progesterone has also been found to be effective in the treatment of breast pain and tenderness.
Premenstrual syndrome
Historically, progesterone has been widely used in the treatment of premenstrual syndrome. A 2012 Cochrane review found insufficient evidence for or against the effectiveness of progesterone for this indication. Another review of 10 studies found that progesterone was not effective for this condition, although it stated that insufficient evidence is available currently to make a definitive statement on progesterone in premenstrual syndrome.
Catamenial epilepsy
Progesterone can be used to treat catamenial epilepsy by supplementation during certain periods of the menstrual cycle.
Available forms
Progesterone is available in a variety of different forms, including oral capsules; sublingual tablets; vaginal capsules, tablets, gels, suppositories, and rings; rectal suppositories; oil solutions for intramuscular injection; and aqueous solutions for subcutaneous injection. A 1% topical progesterone gel is approved for local application to the breasts to treat breast pain, but is not indicated for systemic therapy. Progesterone was previously available as an intrauterine device for use in hormonal contraception, but this formulation was discontinued. Progesterone is also limitedly available in combination with estrogens such as estradiol and estradiol benzoate for use by intramuscular injection.
In addition to approved pharmaceutical products, progesterone is available in unregulated custom compounded and over-the-counter formulations like systemic transdermal creams and other preparations. The systemic efficacy of transdermal progesterone is controversial and has not been demonstrated.
Contraindications
Contraindications of progesterone include hypersensitivity to progesterone or progestogens, prevention of cardiovascular disease (a Black Box warning), thrombophlebitis, thromboembolic disorder, cerebral hemorrhage, impaired liver function or disease, breast cancer, reproductive organ cancers, undiagnosed vaginal bleeding, missed menstruations, miscarriage, or a history of these conditions. Progesterone should be used with caution in people with conditions that may be adversely affected by fluid retention, such as epilepsy, migraine headaches, asthma, cardiac dysfunction, and renal dysfunction. It should also be used with caution in patients with anemia, diabetes mellitus, a history of depression, previous ectopic pregnancy, and an unresolved abnormal Pap smear. Use of progesterone is not recommended during pregnancy and breastfeeding. However, the American Academy of Pediatrics has deemed the medication usually safe during breastfeeding, though it should not be used during the first four months of pregnancy. Some progesterone formulations contain benzyl alcohol, and this may cause a potentially fatal "gasping syndrome" if given to premature infants.
Side effects
Progesterone is well tolerated, and many clinical studies have reported no side effects. Side effects of progesterone may include abdominal cramps, back pain, breast tenderness, constipation, nausea, dizziness, edema, vaginal bleeding, hypotension, fatigue, dysphoria, depression, and irritability, among others. Central nervous system depression, such as sedation and cognitive/memory impairment, can also occur.
Vaginal progesterone may be associated with vaginal irritation, itchiness, and discharge, decreased libido, painful sexual intercourse, vaginal bleeding or spotting in association with cramps, and local warmth or a "feeling of coolness" without discharge. Intramuscular injection may cause mild-to-moderate pain at the site of injection. High intramuscular doses of progesterone have been associated with increased body temperature, which may be alleviated with paracetamol treatment.
Progesterone lacks undesirable off-target hormonal activity, in contrast to various progestins. As a result, it is not associated with androgenic, antiandrogenic, estrogenic, or glucocorticoid effects. However, progesterone can still produce side effects related to its antimineralocorticoid and neurosteroid activity. Compared to the progestin medroxyprogesterone acetate, there are fewer reports of breast tenderness with progesterone. In addition, the magnitude and duration of vaginal bleeding with progesterone are reported to be lower than with medroxyprogesterone acetate.
Central depression
Progesterone can produce central nervous system depression as an adverse effect, particularly with oral administration or with high doses of progesterone. These side effects may include drowsiness, sedation, sleepiness, fatigue, sluggishness, reduced vigor, dizziness, lightheadedness, confusion, and cognitive, memory, and/or motor impairment. Limited available evidence has shown minimal or no adverse influence on cognition with oral progesterone (100–600 mg), vaginal progesterone (45 mg gel), or progesterone by intramuscular injection (25–200 mg). However, high doses of oral progesterone (300–1200 mg), vaginal progesterone (100–200 mg), and intramuscular progesterone (100–200 mg) have been found to result in dose-dependent fatigue, drowsiness, and decreased vigor. Moreover, high single doses of oral progesterone (1200 mg) produced significant cognitive and memory impairment. Intravenous infusion of high doses of progesterone (e.g., 500 mg) has been found to induce deep sleep in humans. Some individuals are more sensitive and can experience considerable sedative and hypnotic effects at lower doses of oral progesterone (e.g., 400 mg).
Sedation and cognitive and memory impairment with progesterone are attributable to its inhibitory neurosteroid metabolites. These metabolites occur to a greater extent with oral progesterone, and may be minimized by switching to a parenteral route. Progesterone can also be taken before bed to avoid these side effects and to help with sleep. The neurosteroid effects of progesterone are unique to progesterone and are not shared with progestins.
Breast cancer
Breast cell proliferation has been found to be significantly increased by the combination of an oral estrogen plus cyclic medroxyprogesterone acetate in postmenopausal women but not by the combination of transdermal estradiol plus oral progesterone. Studies of topical estradiol and progesterone applied to the breasts for 2 weeks have been found to result in highly pharmacological local levels of estradiol and progesterone. These studies have assessed breast proliferation markers and have found increased proliferation with estradiol alone, decreased proliferation with progesterone, and no change in proliferation with estradiol and progesterone combined. In the Postmenopausal Estrogen/Progestin Interventions (PEPI) trial, the combination of estrogen and cyclic oral progesterone resulted in a higher mammographic breast density than estrogen alone (3.1% vs. 0.9%) but a non-significantly lower breast density than the combination of estrogen and cyclic or continuous medroxyprogesterone acetate (3.1% vs. 4.4–4.6%). Higher breast density is a strong known risk factor for breast cancer. Other studies have had mixed findings however. A 2018 systematic review reported that breast density with an estrogen plus oral progesterone was significantly increased in three studies and unchanged in two studies. Changes in breast density with progesterone appear to be less than with the compared progestins.
In large short-term observational studies, estrogen alone and the combination of estrogen and oral progesterone have generally not been associated with an increased risk of breast cancer. Conversely, the combination of estrogen and almost any progestin, such as medroxyprogesterone acetate or norethisterone acetate, has been associated with an increased risk of breast cancer. The only exception among progestins is dydrogesterone, which has shown similar risk to that of oral progesterone. Breast cancer risk with estrogen and progestin therapy is duration-dependent, with the risk being significantly greater with more than 5 years of exposure relative to less than 5 years. In contrast to shorter-term studies, the longer-term observations (>5 years) of the French E3N study showed significant associations of both estrogen plus oral progesterone and estrogen plus dydrogesterone with higher breast cancer risk, similarly to estrogen plus other progestogens. Oral progesterone has very low bioavailability and has relatively weak progestogenic effects. The delayed onset of breast cancer risk with estrogen plus oral progesterone is potentially consistent with a weak proliferative effect of oral progesterone on the breasts. As such, a longer duration of exposure may be necessary for a detectable increase in breast cancer risk to occur. In any case, the risk remains lower than that with most progestins. A 2018 systematic review of progesterone and breast cancer concluded that short-term use (<5 years) of an estrogen plus progesterone is not associated with a significant increase in risk of breast cancer but that long-term use (>5 years) is associated with greater risk. The conclusions for progesterone were the same in a 2019 meta-analysis of the worldwide epidemiological evidence by the Collaborative Group on Hormonal Factors in Breast Cancer (CGHFBC).
Most data on breast density changes and breast cancer risk are with oral progesterone. Data on breast safety with vaginal progesterone are scarce. The Early versus Late Intervention Trial with Estradiol (ELITE) was a randomized controlled trial of about 650 postmenopausal women who used estradiol and 45 mg/day cyclic vaginal progesterone. Incidence of breast cancer was reported as an adverse effect. The absolute incidences were 10 cases in the estradiol plus vaginal progesterone group and 8 cases in the control group. However, the study was not adequately powered for quantifying breast cancer risk.
Blood clots
Whereas the combination of estrogen and a progestin is associated with increased risk of venous thromboembolism (VTE) relative to estrogen alone, there is no difference in risk of VTE with the combination of estrogen and oral progesterone relative to estrogen alone. Hence, in contrast to progestins, oral progesterone added to estrogen does not appear to increase coagulation or VTE risk. The reasons for the differences between progesterone and progestins in terms of VTE risk are unclear. However, they may be due to the very low progesterone levels and relatively weak progestogenic effects produced by oral progesterone. In contrast to oral progesterone, non-oral progesterone—which can achieve much higher progesterone levels—has not been assessed in terms of VTE risk.
Overdose
Progesterone is likely to be relatively safe in overdose. Levels of progesterone during pregnancy are up to 100-fold higher than during normal menstrual cycling, although levels increase gradually over the course of pregnancy. Oral dosages of progesterone of as high as 3,600 mg/day have been assessed in clinical trials, with the main side effect being sedation. There is a case report of progesterone misuse with an oral dosage of 6,400 mg per day. Administration of as much as 500 mg progesterone by intravenous infusion in humans was uneventful in terms of toxicity, but did induce deep sleep, though the individuals were still able to be awakened with sufficient stimulation.
Interactions
There are several notable drug interactions with progesterone. Certain selective serotonin reuptake inhibitors (SSRIs) such as fluoxetine, paroxetine, and sertraline may increase the GABAA receptor-related central depressant effects of progesterone by enhancing its conversion into 5α-dihydroprogesterone and allopregnanolone via activation of 3α-HSD. Progesterone potentiates the sedative effects of benzodiazepines and alcohol. Notably, there is a case report of progesterone abuse alone with very high doses. 5α-Reductase inhibitors such as finasteride and dutasteride inhibit the conversion of progesterone into the inhibitory neurosteroid allopregnanolone, and for this reason, may have the potential to reduce the sedative and related effects of progesterone.
Progesterone is a weak but significant agonist of the pregnane X receptor (PXR), and has been found to induce several hepatic cytochrome P450 enzymes, such as CYP3A4, especially when concentrations are high, such as with pregnancy range levels. As such, progesterone may have the potential to accelerate the metabolism of various medications.
Pharmacology
Pharmacodynamics
Progesterone is a progestogen, or an agonist of the nuclear progesterone receptors (PRs), the PR-A, PR-B, and PR-C. In addition, progesterone is an agonist of the membrane progesterone receptors (mPRs), including the mPRα, mPRβ, mPRγ, mPRδ, and mPRϵ. Aside from the PRs and mPRs, progesterone is a potent antimineralocorticoid, or antagonist of the mineralocorticoid receptor, the biological target of the mineralocorticoid aldosterone. In addition to its activity as a steroid hormone, progesterone is a neurosteroid. Among other neurosteroid activities, and via its active metabolites allopregnanolone and pregnanolone, progesterone is a potent positive allosteric modulator of the GABAA receptor, the major signaling receptor of the inhibitory neurotransmitter γ-aminobutyric acid (GABA).
The PRs are expressed widely throughout the body, including in the uterus, cervix, vagina, fallopian tubes, breasts, fat, skin, pituitary gland, hypothalamus, and in other areas of the brain. In accordance, progesterone has numerous effects throughout the body. Among other effects, progesterone produces changes in the female reproductive system, the breasts, and the brain. Progesterone has functional antiestrogenic effects due to its progestogenic activity, including in the uterus, cervix, and vagina. The effects of progesterone may influence health in both positive and negative ways. In addition to the aforementioned effects, progesterone has antigonadotropic effects due to its progestogenic activity, and can inhibit ovulation and suppress gonadal sex hormone production.
The activities of progesterone besides those mediated by the PRs and mPRs are also of significance. Progesterone lowers blood pressure and reduces water and salt retention among other effects via its antimineralocorticoid activity. In addition, progesterone can produce sedative, hypnotic, anxiolytic, euphoric, amnestic, cognitive-impairing, motor-impairing, anticonvulsant, and even anesthetic effects via formation of sufficiently high concentrations of its neurosteroid metabolites and consequent GABAA receptor potentiation in the brain.
There are differences between progesterone and progestins, such as medroxyprogesterone acetate and norethisterone, with implications for pharmacodynamics and pharmacokinetics, as well as for efficacy, tolerability, and safety.
Pharmacokinetics
The pharmacokinetics of progesterone are dependent on its route of administration. The medication is approved in the form of oil-filled capsules containing micronized progesterone for oral administration, termed oral micronized progesterone or OMP. It is also available in the form of vaginal or rectal suppositories or pessaries, topical creams and gels, oil solutions for intramuscular injection, and aqueous solutions for subcutaneous injection.
Routes of administration that progesterone has been used by include oral, intranasal, transdermal/topical, vaginal, rectal, intramuscular, subcutaneous, and intravenous injection. Vaginal progesterone is available in the form of progesterone capsules, tablets or inserts, gels, suppositories or pessaries, and rings.
The bioavailability of progesterone was commonly overestimated because immunoassay methods of analysis failed to distinguish between progesterone itself and its metabolites. Newer methods have revised the estimated oral bioavailability from 6.2–8.6% down to less than 2.4%.
Chemistry
Progesterone is a naturally occurring pregnane steroid and is also known as pregn-4-ene-3,20-dione. It has a double bond (4-ene) between the C4 and C5 positions and two ketone groups (3,20-dione), one at the C3 position and the other at the C20 position. Due to its pregnane core and C4(5) double bond, progesterone is often abbreviated as P4. It is contrasted with pregnenolone, which has a C5(6) double bond and is often abbreviated as P5.
Derivatives
A large number of progestins, or synthetic progestogens, have been derived from progesterone. They can be categorized into several structural groups, including derivatives of retroprogesterone, 17α-hydroxyprogesterone, 17α-methylprogesterone, and 19-norprogesterone, with a respective example from each group including dydrogesterone, medroxyprogesterone acetate, medrogestone, and promegestone. The progesterone ethers quingestrone (progesterone 3-cyclopentyl enol ether) and progesterone 3-acetyl enol ether are among the only examples that do not belong to any of these groups. Another major group of progestins, the 19-nortestosterone derivatives, exemplified by norethisterone (norethindrone) and levonorgestrel, are not derived from progesterone but rather from testosterone.
A variety of synthetic inhibitory neurosteroids have been derived from progesterone and its neurosteroid metabolites, allopregnanolone and pregnanolone. Examples include alfadolone, alfaxolone, ganaxolone, hydroxydione, minaxolone, and renanolone. In addition, C3 and C20 conjugates of progesterone, such as progesterone carboxymethyloxime (progesterone 3-(O-carboxymethyl)oxime; P4-3-CMO), P1-185 (progesterone 3-O-(L-valine)-E-oxime), EIDD-1723 (progesterone 20E-[O-[(phosphonooxy)methyl]oxime] sodium salt), EIDD-036 (progesterone 20-oxime; P4-20-O), and VOLT-02 (chemical structure unreleased), have been developed as water-soluble prodrugs of progesterone and its neurosteroid metabolites.
Synthesis
Chemical syntheses of progesterone have been published.
History
Discovery and synthesis
The hormonal action of progesterone was discovered in 1929. Pure crystalline progesterone was isolated in 1934 and its chemical structure was determined. Later that year, chemical synthesis of progesterone was accomplished. Shortly following its chemical synthesis, progesterone began being tested clinically in women.
Injections and implants
In 1933 or 1934, Schering introduced progesterone in oil solution as a medication by intramuscular injection under the brand name Proluton. This was the first pharmaceutical formulation of progesterone to be marketed for medical use. It was initially a corpus luteum extract, becoming pure synthesized progesterone only subsequently. A clinical study of the formulation was published in 1933. Multiple formulations of progesterone in oil solution for intramuscular injection, under the brand names Proluton, Progestin, and Gestone, were available by 1936. A parenteral route was used because oral progesterone had very low activity and was thought to be inactive. Progesterone was initially very expensive due to the large doses required. However, with the start of steroid manufacturing from diosgenin in the 1940s, costs greatly decreased.
Subcutaneous pellet implants of progesterone were first studied in women in the late 1930s. They were the first long-acting progestogen formulation. Pellets were reported to be extruded out of the skin within a few weeks at high rates, even when implanted beneath the deep fascia, and also produced frequent inflammatory reactions at the site of implantation. In addition, they were absorbed too slowly and achieved unsatisfactorily low progesterone levels. Consequently, they were soon abandoned, in favor of other preparations such as aqueous suspensions. However, subcutaneous pellet implants of progesterone were later studied as a form of birth control in women in the 1980s and early 1990s, though no preparations were ultimately marketed.
Aqueous suspensions of progesterone crystals for intramuscular injection were first described in 1944. These preparations were on the market in the 1950s under a variety of brand names including Flavolutan, Luteosan, Lutocyclin M, and Lutren, among others. Aqueous suspensions of steroids were developed because they showed much longer durations than intramuscular injection of steroids in oil solution. However, local injection site reactions, which do not occur with oil solutions, have limited the clinical use of aqueous suspensions of progesterone and other steroids. Today, a preparation with the brand name Agolutin Depot remains on the market in the Czech Republic and Slovakia. A combined preparation of progesterone, estradiol benzoate, and lidocaine remains available with the brand name Clinomin Forte in Paraguay as well. In addition to aqueous suspensions, water-in-oil emulsions of steroids were studied by 1949, and long-acting emulsions of progesterone were introduced for use by intramuscular injection under the brand names Progestin and Di-Pro-Emulsion (with estradiol benzoate) by the 1950s. Due to lack of standardization of crystal sizes, crystalline suspensions of steroids had marked variations in effect. Emulsions were said to be even more unreliable.
Macrocrystalline aqueous suspensions of progesterone as well as microspheres of progesterone were investigated as potential progestogen-only injectable contraceptives and combined injectable contraceptives (with estradiol) by the late 1980s and early 1990s but were never marketed.
Aqueous solutions of water-insoluble steroids were first developed via association with colloid solubility enhancers in the 1940s. An aqueous solution of progesterone for use by intravenous injection was marketed by Schering AG under the brand name Primolut Intravenous by 1962. One of its intended uses was the treatment of threatened abortion, in which rapid-acting effect was desirable. An aqueous solution of progesterone complexed with cyclodextrin to increase its water solubility was introduced for use by once-daily subcutaneous injection in Europe under the brand name Prolutex in the mid-2010s.
In the 1950s, long-acting parenteral progestins such as hydroxyprogesterone caproate, medroxyprogesterone acetate, and norethisterone enanthate were developed and introduced for use by intramuscular injection. They lacked the need for frequent injections and the injection site reactions associated with progesterone by intramuscular injection and soon supplanted progesterone for parenteral therapy in most cases.
Oral and sublingual
The first study of oral progesterone in humans was published in 1949. It found that oral progesterone produced significant progestational effects in the endometrium in women. Prior to this study, animal research had suggested that oral progesterone was inactive, and for this reason, oral progesterone had never been evaluated in humans. A variety of other early studies of oral progesterone in humans were also published in the 1950s and 1960s. These studies generally reported oral progesterone to be only very weakly active. Oral non-micronized progesterone was introduced as a pharmaceutical medication around 1953, for instance as Cyclogesterin (1 mg estrogenic substances and 30 mg progesterone tablets) for menstrual disturbances by Upjohn, though it saw limited use. Another preparation, which contained progesterone alone, was Synderone (trademark registered by Chemical Specialties in 1952).
Sublingual progesterone in women was first studied in 1944 by Robert Greenblatt. Buccal progesterone tablets were marketed by Schering under the brand name Proluton Buccal Tablets by 1949. Sublingual progesterone tablets were marketed under the brand names Progesterone Lingusorbs and Progesterone Membrettes by 1951. A sublingual tablet formulation of progesterone has been approved under the brand name Luteina in Poland and Ukraine and remains marketed today.
Progesterone was the first progestogen that was found to inhibit ovulation, both in animals and in women. Injections of progesterone were first shown to inhibit ovulation in animals between 1937 and 1939. Inhibition of fertilization by administration of progesterone during the luteal phase was also demonstrated in animals between 1947 and 1949. Ovulation inhibition by progesterone in animals was subsequently re-confirmed and expanded on by Gregory Pincus and colleagues in 1953 and 1954. Findings on inhibition of ovulation by progesterone in women were first presented at the Fifth International Conference on Planned Parenthood in Tokyo, Japan in October 1955. Three different research groups presented their findings on this topic at the conference. They included Pincus (in conjunction with John Rock, who did not attend the conference); a nine-member Japanese group led by Masaomi Ishikawa; and the two-member team of Abraham Stone and Herbert Kupperman. The conference marked the beginning of a new era in the history of birth control. The results were subsequently published in scientific journals in 1956 in the case of Pincus and in 1957 in the case of Ishikawa and colleagues. Rock and Pincus also subsequently described findings from 1952 that "pseudopregnancy" therapy with a combination of high doses of diethylstilbestrol and oral progesterone prevented ovulation and pregnancy in women.
Unfortunately, the use of oral progesterone as a hormonal contraceptive was plagued by problems. These included the large, and therefore expensive, doses required, incomplete inhibition of ovulation even at high doses, and a frequent incidence of breakthrough bleeding. At the 1955 Tokyo conference, Pincus had also presented the first findings of ovulation inhibition by oral progestins in animals, specifically 19-nortestosterone derivatives like noretynodrel and norethisterone. These progestins were far more potent than progesterone, requiring much smaller doses orally. By December 1955, inhibition of ovulation by oral noretynodrel and norethisterone had been demonstrated in women. These findings, as well as results in animals, were published in 1956. Noretynodrel and norethisterone did not show the problems associated with oral progesterone—in the studies, they fully inhibited ovulation and did not produce menstruation-related side effects. Consequently, oral progesterone was abandoned as a hormonal contraceptive in women. The first birth control pills to be introduced were a noretynodrel-containing product in 1957 and a norethisterone-containing product in 1963, followed by numerous others containing a diversity of progestins. Progesterone itself has never been introduced for use in birth control pills.
More modern clinical studies of oral progesterone demonstrating elevated levels of progesterone and end-organ responses in women, specifically progestational endometrial changes, were published between 1980 and 1983. Up to this point, many clinicians and researchers apparently still thought that oral progesterone was inactive. It was not until almost half a century after the introduction of progesterone in medicine that a reasonably effective oral formulation of progesterone was marketed. Micronization of progesterone and suspension in oil-filled capsules, which allowed progesterone to be absorbed several-fold more efficiently by the oral route, was first studied in the late 1970s and described in the literature in 1982. This formulation, known as oral micronized progesterone (OMP), was then introduced for medical use under the brand name Utrogestan in France in 1982. Subsequently, oral micronized progesterone was introduced under the brand name Prometrium in the United States in 1998. By 1999, oral micronized progesterone had been marketed in more than 35 countries. In 2019, the first combination of oral estradiol and progesterone was introduced under the brand name Bijuva in the United States.
A sustained-release (SR) formulation of oral micronized progesterone, also known as "oral natural micronized progesterone sustained release" or "oral NMP SR", was marketed in India in 2012 under the brand name Gestofit SR. Many additional brand names followed. The preparation was originally developed in 1986 by a compounding pharmacy called Madison Pharmacy Associates in Madison, Wisconsin in the United States.
Vaginal, rectal, and uterine
Vaginal progesterone suppositories were first studied in women by Robert Greenblatt in 1954. Shortly thereafter, vaginal progesterone suppositories were introduced for medical use under the brand name Colprosterone in 1955. Rectal progesterone suppositories were first studied in men and women by Christian Hamburger in 1965. Vaginal and rectal progesterone suppositories were introduced for use under the brand name Cyclogest by 1976. Vaginal micronized progesterone gels and capsules were introduced for medical use under brand names such as Utrogestan and Crinone in the early 1990s. Progesterone was approved in the United States as a vaginal gel in 1997 and as a vaginal insert in 2007. A progesterone contraceptive vaginal ring known as Progering was first studied in women in 1985 and continued to be researched through the 1990s. It was approved for use as a contraceptive in lactating mothers in Latin America by 2004. A second progesterone vaginal ring known as Fertiring was developed as a progesterone supplement for use during assisted reproduction and was approved in Latin America by 2007.
Development of a progesterone-containing intrauterine device (IUD) for contraception began in the 1960s. Incorporation of progesterone into IUDs was initially studied to help reduce the risk of IUD expulsion. However, while addition of progesterone to IUDs showed no benefit on expulsion rates, it was unexpectedly found to induce endometrial atrophy. This led in 1976 to the development and introduction of Progestasert, a progesterone-containing product and the first progestogen-containing IUD. Unfortunately, the product had various problems that limited its use. These included a short duration of efficacy of only one year, a high cost, a relatively high 2.9% failure rate, a lack of protection against ectopic pregnancy, and difficult and sometimes painful insertions that could necessitate use of a local anesthetic or analgesic. As a result of these issues, Progestasert never became widely used, and was discontinued in 2001. It was used mostly in the United States and France while it was marketed.
Transdermal and topical
A topical gel formulation of progesterone, for direct application to the breasts as a local therapy for breast disorders such as breast pain, was introduced under the brand name Progestogel in Europe by 1972. No transdermal formulations of progesterone for systemic use have been successfully marketed, in spite of efforts of pharmaceutical companies towards this goal. The low potency of transdermal progesterone has thus far precluded it as a possibility. Although no formulations of transdermal progesterone are approved for systemic use, transdermal progesterone is available in the form of creams and gels from custom compounding pharmacies in some countries, and is also available over-the-counter without a prescription in the United States. However, these preparations are unregulated and have not been adequately characterized, with low and unsubstantiated effectiveness.
Society and culture
Generic names
Progesterone is the generic name of the drug in English, while progestérone is its name in French. It is also referred to as progesteronum in Latin, progesterona in Spanish and Portuguese, and progesteron in German.
Brand names
Progesterone is marketed under a large number of brand names throughout the world. Examples of major brand names under which progesterone has been marketed include Crinone, Crinone 8%, Cyclogest, Endogest, Endometrin, Estima, Geslutin, Gesterol, Gestone, Luteina, Luteinol, Lutigest, Lutinus, Microgest, Progeffik, Progelan, Progendo, Progering, Progest, Progestaject, Progestan, Progesterone, Progestin, Progestogel, Prolutex, Proluton, Prometrium, Prontogest, Strone, Susten, Utrogest, and Utrogestan.
Availability
Progesterone is widely available in countries throughout the world in a variety of formulations. Progesterone in the form of oral capsules; vaginal capsules, tablets/inserts, and gels; and intramuscular oil have widespread availability. The following formulations/routes of progesterone have selective or more limited availability:
A tablet of micronized progesterone which is marketed under the brand name Luteina is indicated for sublingual administration in addition to vaginal administration and is available in Poland and Ukraine.
A progesterone suppository which is marketed under the brand name Cyclogest is indicated for rectal administration in addition to vaginal administration and is available in Cyprus, Hong Kong, India, Malaysia, Malta, Oman, Singapore, South Africa, Thailand, Tunisia, Turkey, the United Kingdom, and Vietnam.
An aqueous solution of progesterone complexed with β-cyclodextrin for subcutaneous injection is marketed under the brand name Prolutex in the Czech Republic, Hungary, Italy, Poland, Portugal, Slovakia, Spain, and Switzerland.
A non-systemic topical gel formulation of progesterone for local application to the breasts to treat breast pain is marketed under the brand name Progestogel and is available in Belgium, Bulgaria, Colombia, Ecuador, France, Georgia, Germany, Hong Kong, Lebanon, Peru, Romania, Russia, Serbia, Switzerland, Tunisia, Venezuela, and Vietnam. It was also formerly available in Italy, Portugal, and Spain, but was discontinued in these countries.
A progesterone intrauterine device was previously marketed under the brand name Progestasert and was available in Canada, France, the United States, and possibly other countries, but was discontinued.
Progesterone vaginal rings are marketed under the brand names Fertiring and Progering and are available in Chile, Ecuador, and Peru.
A sustained-release tablet formulation of oral micronized progesterone (also known as "oral natural micronized progesterone sustained release" or "oral NMP SR") is marketed in India under the brand names Lutefix Pro (CROSMAT Technology), Dubagest SR, Gestofit SR, and Susten SR, among many others.
In addition to single-drug formulations, the following progesterone combination formulations are or have been marketed, albeit with limited availability:
A combination pack of progesterone capsules for oral use and estradiol gel for transdermal use is marketed under the brand name Estrogel Propak in Canada.
A combination pack of progesterone capsules and estradiol tablets for oral use is marketed under the brand name Duogestan in Belgium.
Progesterone and estradiol in an aqueous suspension for use by intramuscular injection is marketed under the brand name Cristerona FP in Argentina.
Progesterone and estradiol in microspheres in an oil solution for use by intramuscular injection is marketed under the brand name Juvenum in Mexico.
Progesterone and estradiol benzoate in an oil solution for use by intramuscular injection is marketed under the brand names Duogynon, Duoton Fort T P, Emmenovis, Gestrygen, Lutofolone, Menovis, Mestrolar, Metrigen Fuerte, Nomestrol, Phenokinon-F, Prodiol, Pro-Estramon-S, Proger F, Progestediol, and Vermagest and is available in Belize, Egypt, El Salvador, Ethiopia, Guatemala, Honduras, Italy, Lebanon, Malaysia, Mexico, Nicaragua, Taiwan, Thailand, and Turkey.
Progesterone and estradiol hemisuccinate in an oil solution for use by intramuscular injection is marketed under the brand name Hosterona in Argentina.
Progesterone and estrone for use by intramuscular injection is marketed under the brand name Synergon in Monaco.
United States
Progesterone is available in the United States in the following formulations:
Oral: Capsules: Prometrium (100 mg, 200 mg, 300 mg)
Vaginal: Tablets: Endometrin (100 mg); Gels: Crinone (4%, 8%)
Intramuscular injection: Oil: Progesterone (50 mg/mL)
A 25 mg/mL concentration of progesterone oil for intramuscular injection and a 38 mg/device progesterone intrauterine device (Progestasert) have been discontinued.
An oral combination formulation of micronized progesterone and estradiol in oil-filled capsules (brand name Bijuva) is marketed in the United States for the treatment of menopausal symptoms and endometrial hyperplasia.
Progesterone is also available in unregulated custom preparations from compounding pharmacies in the United States. In addition, transdermal progesterone is available over-the-counter in the United States, although the clinical efficacy of transdermal progesterone is controversial.
Research
Progesterone was studied as a progestogen-only injectable contraceptive, but was never marketed. Combinations of estradiol and progesterone as a macrocrystalline aqueous suspension and as an aqueous suspension of microspheres have been studied as once-a-month combined injectable contraceptives, but were likewise never marketed.
Progesterone has been assessed for the suppression of sex drive and spermatogenesis in men. In one study, 100 mg rectal suppositories of progesterone given five times per day for 9 days resulted in progesterone levels of 5.5 to 29 ng/mL and suppressed circulating testosterone and growth hormone levels by about 50% in men, but did not affect libido or erectile potency in this short treatment period. In other studies, 50 mg/day progesterone by intramuscular injection for 10 weeks in men produced azoospermia, decreased testicular size, markedly suppressed libido and erectile potency, and resulted in minimal semen volume upon ejaculation.
An oil-and-water nanoemulsion of progesterone (particles of less than 1 μm in diameter) using micellar nanoparticle technology for transdermal administration, known as Progestsorb NE, was under development by Novavax for use in menopausal hormone therapy in the 2000s. However, development was discontinued in 2007 and the formulation was never marketed.
References
Further reading
5α-Reductase inhibitors
Drugs developed by AbbVie
Alkene derivatives
Anticonvulsants
Antigonadotropins
Antihypertensive agents
Antimineralocorticoids
Diketones
GABAA receptor positive allosteric modulators
Galactagogues
General anesthetics
Glucocorticoids
Glycine receptor antagonists
Hepatotoxins
Hypnotics
Drugs developed by Merck
Neuroprotective agents
Neurosteroids
Nicotinic antagonists
Obstetric drugs
Pregnane X receptor agonists
Pregnanes
Progesterone
Progestogens
Prolactin releasers
Sedatives
Sigma antagonists
Orphan drugs | Progesterone (medication) | Biology | 11,479 |
41,851,969 | https://en.wikipedia.org/wiki/Hitler%20and%20Mannerheim%20recording | The Hitler and Mannerheim recording is a 1942 recording of a private conversation between German dictator Adolf Hitler and Carl Gustaf Emil Mannerheim, Commander-in-Chief of the Finnish Defence Forces. It took place during a secret visit Hitler made to Finland to honour Mannerheim's 75th birthday on 4 June 1942, during the Continuation War, a sub-theatre of World War II. Thor Damen, a sound engineer for the Finnish broadcaster Yleisradio (YLE) who had been assigned to record the official birthday proceedings, recorded the first eleven minutes of Hitler and Mannerheim's private conversation—without Hitler's knowledge. It is the only known recording of Hitler speaking in an unofficial tone.
Visit by Hitler
In June 1941, Nazi Germany invaded the Soviet Union. Despite the initial and overwhelming success of the campaign, the Soviets repulsed the German assault on Moscow and stalled the German advance. Hitler required his allies, including Finland, which was fighting its second war with the Soviet Union in two years, to tie down as much of the enormous Soviet military machine as possible.
In 1942, Hitler, under extreme secrecy, visited Finland, officially to congratulate Mannerheim on his 75th birthday. Mannerheim did not wish to greet Hitler at his headquarters, as it would have appeared like a state visit. Therefore, the meeting occurred at Imatra in southern Finland. At Immola Airfield, Hitler was greeted and accompanied by President Risto Ryti and Finnish officials to Mannerheim's personal train, where a birthday meal and negotiations took place.
Recording
After the official greetings and speeches had taken place, Hitler and Mannerheim, accompanied by other German and Finnish officials, entered Mannerheim's private wagon for cigars, drinks, and lunch. In this wagon, a large and visible microphone had been set up by Thor Damen, a sound engineer for the Finnish broadcaster Yleisradio (YLE), who had been assigned to record Hitler's official speech and birthday message to Mannerheim.
After the official speeches, Damen continued to record the now-private conversation, with Hitler unaware that the conversation was still being recorded. After eleven minutes, Hitler's SS guards realised what Damen was doing and made a cutthroat gesture to demand that he cease recording. The SS guards demanded the tape be destroyed, but YLE was allowed to keep the tape in a sealed container with the promise that it never be opened again. The tape was given to the head of the State Censors' Office, Kustaa Vilkuna, returned to YLE in 1957, and made publicly available a few years later. It is the only known recording of Hitler speaking in an unofficial tone and one of the very few recordings in which Hitler may be heard delivering a narrative without raising his voice.
The conversation
While the official reason for Hitler's visit – which had been arranged just the day before – was to celebrate Mannerheim's birthday, Hitler's actual purpose was to ensure that Finland would remain allied to Nazi Germany by reiterating the dangers of Bolshevism, thereby preventing any Finnish feelers to either the Soviets or to the Western Allies. Hitler wanted to reassure himself that he had the Finns' continuing support. He extensively discussed the Winter War, Molotov's demands over Europe, and concerns over the Soviet occupation of Bessarabia and Northern Bukovina, which put Germany at risk of losing its petroleum supplies controlled by Romania.
On the tape, Hitler dominated the discussion, with others at the table – Mannerheim, Ryti, and Generalfeldmarschall Wilhelm Keitel – mostly silent. He discussed the failure of Operation Barbarossa, Italian defeats in North Africa, the invasions of Yugoslavia and Greece, his surprise at the Soviet Union's ability to produce thousands of tanks, and his strategic concerns about Romanian petroleum wells. Hitler was at pains to present German policy as having been consistent throughout, but also emphasised that imminent Russian aggression had given him no choice but to attack the Soviets.
Aside from this broad summary of the war in the East, Hitler did not reveal any of his future military plans, specifically an upcoming German offensive, of which the Finns were informed only the day before it occurred – much to Mannerheim's exasperation. Despite Hitler's visit and monologue, and a return visit from Mannerheim, the Nazis' continuing military crisis over the next six months would provoke the Finns into looking for a way out of their alliance with Germany.
Authenticity
After the tape was revealed to the public, some believed it was a fake because Hitler's voice sounded too soft. After listening to the recording, Rochus Misch, Hitler's former bodyguard and radio operator, said: "He is speaking normally, but I have problems with the tone; the intonation isn't quite right. Sometimes it seems okay, but at other points not. I have the feeling it's someone mimicking Hitler. It really sounds as if someone is mimicking him." Photographs taken on the day of the event showed that Hitler had been drinking alcohol, which could have affected his voice, as he rarely drank. Specialists from postwar Germany's Federal Criminal Police Office later examined the tape, and Head of Frequencies Stefan Gfroerer declared that it is "very obvious to us that this is Hitler's voice."
In popular culture
Mannerheim's saloon coach, where the meeting with Hitler took place, is displayed outside a Shell service station on Finnish national road 12 in Sastamala, Pirkanmaa. It has been open to the public since 1969. The private wagon, where the recording took place, is located in Mikkeli. It is open to the public only once a year, on 4 June, Mannerheim's birthday.
The recording was used by Swiss actor Bruno Ganz when he rehearsed Hitler's manner of speaking for his role in the 2004 film Downfall.
See also
Diplomatic history of World War II
Hitler's Stalingrad speech
Hitler's Table Talk
Mannerheim (family)
References
Notes
Bibliography
External links
Yleisradio article on the recording, including a full copy of it
1942 in Finland
1942 documents
Adolf Hitler
Carl Gustaf Emil Mannerheim
Continuation War
Diplomatic visits
Finland in World War II
Imatra
June 1942
Field recording | Hitler and Mannerheim recording | Engineering | 1,286 |
11,869,917 | https://en.wikipedia.org/wiki/Southeast%20Asian%20haze | The Southeast Asian haze is a fire-related recurrent transboundary air pollution issue. Haze events, where air quality reaches hazardous levels due to high concentrations of airborne particulate matter from burning biomass, have caused adverse health, environmental and economic impacts in several countries in Southeast Asia. Caused primarily by slash-and-burn land clearing, the problem flares up every dry season to varying degrees and generally is worst between July and October and during El Niño events. Transboundary haze in Southeast Asia has been recorded since 1972 with the 1997 and 2015 events being particularly severe.
Industrial-scale slash-and-burn practices to clear land for agricultural purposes are a major cause of the haze, particularly for palm oil and pulpwood production in the region. Burning land occurs as it is cheaper and faster compared to cutting and clearing using excavators or other machinery. Fires started for this purpose sometimes spread and create forest fires, worsening the problem. The high concentration of peat in soil contributes to the haze's density and high sulphur content.
Fires in Indonesia (particularly South Sumatra and Riau in Sumatra, and Kalimantan in Borneo), and to a lesser extent in Malaysia and Thailand, have been identified as sources. The haze regularly has a major impact on air quality in Indonesia, Malaysia, Singapore and Brunei Darussalam; to a lesser extent and in particularly severe years, it also impacts the Philippines, Thailand, Vietnam, Cambodia and countries outside the region.
Haze events have been shown to cause health issues and mortality in affected areas, and have caused disruption to economic activity and education as sectors are forced to close to minimise exposure to hazardous air. The haze also has a substantial environmental impact, being a major contributor to greenhouse gas emissions in the region and affecting wildlife and ecosystems.
The haze is an international issue which has caused regional political tensions. Efforts have been made to mitigate haze events and their impacts, and some relevant frameworks for regional cooperation among ASEAN countries have been introduced. Challenges remain in implementing these, and mitigation efforts have failed to prevent haze from reoccurring.
Causes
Most haze events have resulted from smoke from fires that occurred on peatlands in Sumatra and the Kalimantan region of Borneo island. Poor accountability and transparency of Indonesian agricultural companies, and limited political and economic incentives to hold companies to account, have been identified as key barriers to mitigating the issue.
Undisturbed humid tropical forests are considered to be very resistant to fire, experiencing rare fires only during extraordinary dry periods.
A study published in 2005 concluded that there is no single dominant cause of fire in a particular site and there are wide differences in the causes of fires in different sites. The study identified the following direct and indirect causes of fire:
Direct causes of fire
Fire as a tool in land clearing
Fire as a weapon in land tenure or land use disputes
Accidental or escaped fires
Fire connected with resource extraction
Indirect causes of fire
Land tenure and land use allocation conflicts and competition
Forest degrading practices
Economic incentives/disincentives
Population growth and migration
Inadequate fire fighting and management capacity
Fire as a tool in land clearing
Fire is the cheapest and fastest method of clearing land in preparation for planting, and is used to clear the plant material left over from logging or old crops. Mechanically raking the plant material into long piles and letting them rot over time is expensive and slow, and the piles could harbour pests. Clearing land with machines and chemicals can cost up to US$200 per hectare, while using fire costs US$5 per hectare.
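For illustration (the concession size is hypothetical and not from the article), clearing a 1,000-hectare plot at the per-hectare figures above would cost on the order of US$200,000 with machinery and chemicals, versus about US$5,000 with fire, which underlines the economic incentive to burn.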
After a peat swamp forest has been cleared and drained, the peat soil is still unsuitable for agriculture, because peat soil is nutrient-poor and acidic (pH 3 - 4). To make the soil suitable for agriculture, the pH has to be neutralised and nutrients added. Pests and plant diseases also have to be removed. One method is to use chemicals such as limestone to neutralise the acidity, as well as fertilisers and pesticides. This method costs about Rupiah 30 - 40 million per hectare. Alternatively, fire is used to clear the plant material left over from logging. The fire kills pests and the resulting ash serves to fertilise the soil and neutralise the acidity. This method costs Rupiah 2 million per hectare.
Land conflicts
In Indonesia, the Basic Forestry Law grants the Ministry of Forestry authority over all land classified as forests. Approximately 49% of the nation (909,070 square kilometres) is covered by actual forest, although the government classifies 69% of the land area (1,331,270 square kilometres) as forest. The land rights of traditional communities that live on land classified as forest cannot be registered and are generally unrecognised by the state. These communities therefore have little ability to enforce rules at the village level or to exclude outsiders such as oil palm plantations, logging companies, residents of other villages, migrants, small-scale loggers or transmigrants. Competing claims in turn lead to land conflicts. As the number of new, external actors increases, so does the likelihood that fire will be used as a weapon.
Role of peat
A peatland is an area where organic material such as leaves and twigs had accumulated naturally under waterlogged conditions in the last 10,000 years. This layer of organic material, known as peat, can be up to 20m deep. Indonesia has 265,500 km2 of peatland, which comprises 13.9% of its land area. Malaysia also has significant peatland in the Peninsular and Borneo, at 26,685 km2, covering 8.1% of its land area.
Although originally a wetland ecosystem, much of the peatland in Southeast Asia have been drained for human activities such as agriculture, forestry and urban development. A report published in 2011 stated that more than 30% of peat swamp forests had been converted to agricultural land and a further 30% had been logged or degraded in the past 20 to 30 years. Excessive drainage in peat results in the top layer of peat drying out. Due to its high carbon content, dry peat is extremely susceptible to burning, especially during the dry season.
Studies have shown that peat fires are a major contributor to the haze. In 2009, around 40% of all fires in Peninsular Malaysia, Borneo, Sumatra and Java were detected in peatlands, even though they cover only 10% of the land area studied. The concentration of sulphur in rain falling over Singapore in 1997 correlated closely with the PM2.5 concentration, which can be attributed to the strong sulphur emission from peat fires.
History
Southeast Asian haze has frequently reoccurred, with the severity and regions affected differing between seasons. The issue has been recorded since 1972. The 1997 Southeast Asian haze, caused by major forest fires in Indonesia, is thought to be the most severe on record, leading to dangerous pollution across most of Southeast Asia and affecting air quality as far as Sri Lanka. The 2015 haze has also been highlighted as a particularly severe year. In 2020, lockdowns and other social movement restrictions introduced due to the COVID-19 pandemic are thought to have reduced air pollution across the region.
1997 Southeast Asian haze
1997 Indonesian forest fires
2005 Malaysian haze
2006 Southeast Asian haze
2009 Southeast Asian haze
2010 Southeast Asian haze
2013 Southeast Asian haze
2015 Southeast Asian haze
2016 Southeast Asian haze
2017 Southeast Asian haze
2019 Southeast Asian haze
2023 Southeast Asian haze
Effects
Haze-related damage can be attributed to two sources: the fires that cause the haze and the haze itself. Each of these factors can significantly disrupt people's daily lives and affect their health. As a whole, the recurring haze incidents have affected the regional economy and generated contention between the governments of the affected nations.
Direct fire damage
Haze fires can cause many kinds of damage that are local as well as transboundary. These include loss of direct and indirect forest benefits, timber, agricultural products and biodiversity. The fires also incur significant firefighting costs and carbon release to the atmosphere.
Forest fires that contribute to haze are a part of deforestation in Indonesia and Malaysia, a major environmental issue.
Economy
Some of the more direct damage caused by haze includes damage to regional tourism during haze periods, as flights have to be cancelled or delayed during particularly severe events. The haze also leads to industrial production losses, airline and airport losses, damage to fisheries, and the cost of cloud seeding. In addition, severe haze can lead to reduced crop productivity, accidents, evacuations, and the loss of confidence of foreign investors.
The 1997 Southeast Asian haze is thought to have led to US$9bn in damages across ASEAN whilst the 2015 haze cost Indonesia alone an estimated $16bn.
Education
School closures have affected many parts of Malaysia, Singapore and Indonesia, sometimes for several weeks, due to hazardous air pollution.
Health
The health effects of haze depend on its severity as measured by the Pollutant Standards Index (PSI). Levels above 100 are classified as unhealthy and anything above 300 as hazardous. There is also individual variation in the ability to tolerate air pollution. Most people would at most experience sneezing, a runny nose, eye irritation, a dry throat and a dry cough from the pollutants.
However, persons with medical conditions like asthma, chronic lung disease, chronic sinusitis and allergic skin conditions are likely to be more severely affected by the haze and they may experience more severe symptoms. Children and the elderly in general are more likely to be affected. For some, symptoms may worsen with physical activity. One study linked the haze to increased lung cancer diagnoses in Malaysia.
The transboundary Southeast Asian haze has been linked to various cardiovascular conditions including acute ischemic stroke, acute myocardial infarction and cardiac arrest. These studies found a dose-dependent effect of PSI on the risk of developing these conditions. There appears to be increased susceptibility among the elderly and those with a history of heart disease or diabetes mellitus. The risk remains elevated for several days after exposure. PSI during periods of haze has also been correlated with all-cause mortality, as well as with emergency department attendances and hospital admissions for respiratory illness.
The 1997 Southeast Asian haze is estimated to have directly led to 40,000 hospitalisations. A 2016 study estimated the 2015 Southeast Asian haze may have caused around 100,000 deaths, most of which were in Indonesia; the BBC estimated over 500,000 suffered from respiratory ailments in the same season.
A population study found that individuals experienced mild psychological stress during haze periods, which was associated with the perceived danger of the PSI level and the number of physical symptoms experienced.
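As an illustration of how PSI readings map onto descriptive bands, the following minimal C sketch classifies a reading using the commonly published PSI scale. Only the above-100 ("unhealthy") and above-300 ("hazardous") thresholds appear in this article; the intermediate labels (Good, Moderate, Very unhealthy) follow the scale as usually reported and should be read as assumptions for illustration.

```c
#include <stdio.h>

/* Map a Pollutant Standards Index (PSI) reading to a descriptive band.
   The >100 "unhealthy" and >300 "hazardous" thresholds are stated in the
   text above; the remaining labels follow the commonly published scale. */
static const char *psi_band(int psi)
{
    if (psi <= 50)  return "Good";
    if (psi <= 100) return "Moderate";
    if (psi <= 200) return "Unhealthy";
    if (psi <= 300) return "Very unhealthy";
    return "Hazardous";
}

int main(void)
{
    int readings[] = { 42, 120, 250, 340 };
    for (int i = 0; i < 4; i++)
        printf("PSI %3d -> %s\n", readings[i], psi_band(readings[i]));
    return 0;
}
```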
Environment
In addition to the direct burning of rainforest, the haze also harms wildlife in the region such as orangutans, birds and amphibians, by impacting their health and reproduction. It has also been suggested that haze affects marine ecosystems.
The haze also contributes to greenhouse gas emissions, to the extent that Indonesia's national daily emissions increased tenfold and temporarily exceeded those of China and the United States during the 2015 haze season. Deforestation in Indonesia contributed to the country being the third-highest emitter in the world as of 2013. Commentators have suggested that Indonesia's emissions during haze seasons undermine efforts to reach its pledged Nationally Determined Contribution under the Paris Agreement.
Responses
Countries have responded to haze events with state of emergency declarations, cloud seeding to clear air and mobilising firefighting resources to areas being burned. The public have also been recommended to stay at home with the doors closed, and wear face masks when outside to minimise exposure to hazardous air quality. During the severe 1997 haze caused primarily by forest fires in Indonesia, Malaysian Prime Minister Mahathir Mohamad announced Operation Haze, sending Malaysian firefighters to Indonesia to support the response.
Singapore and Malaysia continuously monitor and report air pollution levels, using the Pollutant Standards Index and Air Pollution Index, respectively.
ASEAN introduced a Transboundary Haze agreement in 2002 following the severe international impact of the 1997 haze. Indonesia became the last country in ASEAN to ratify it in 2014, despite its major contribution to the issue.
Singapore introduced the Transboundary Haze Pollution Act 2014, which criminalises activities overseas that contribute to haze. Implementation of the domestic act to mitigate the regional issue has been challenging, and has affected Indonesia–Singapore relations. Singapore's investigations into individuals involved in the 2015 haze were accepted by Indonesia, on the condition that they did not violate Indonesian sovereignty. Efforts have been made to introduce a similar domestic law in Malaysia, although the government shelved this in 2020.
The Roundtable on Sustainable Palm Oil added "no peat" to its certification scheme in response to the link between palm oil and peat burning.
Proposed solutions
The solutions below were proposed by Dennis et al. to mitigate the direct and indirect causes of the fires that result in haze.
Reduce the use of fire as a tool in land clearing
Indonesian law prohibits the use of fire to clear land for agriculture, but weak enforcement is a major issue. Many companies have also claimed that zero burning is impractical and uncompetitive given the lack of meaningful penalties for illegal burning.
Land-use allocations and tenure
Research shows that the most common cause of fire is competition and conflict over land tenure and land allocation. Land-use allocation decisions made by central government agencies often overlap with the concession boundaries of local jurisdictions and indigenous communities' territories. Regional reforms are needed to resolve these resource conflicts, and they offer opportunities for regional governments to reconcile their decisions with those of local and customary institutions. Regional reforms should also ensure that land and resource allocations and decisions at all levels are compatible with physical site characteristics, taking fire risks prominently into account. However, Indonesia's legacy of inaccurate maps, overlapping boundaries, and a lack of technical expertise at the provincial and district levels will make this a difficult task.
Reduce forest degrading practices
Policies to improve land management and measures to restore ecological integrity to degraded natural forests are extremely important to reduce the incidence of repeated fires. Promoting community involvement in such rehabilitation efforts is critical for their success in reducing fire risks.
Capacity to prevent and suppress fires
The fires in Kalimantan and Sumatra highlight the need to develop fire management systems that address concerns of specific areas. Sufficient resources must be made available to improve fire management in regions that need them, while recognising the diverse needs of different regions and the people within them.
Technology such as remote sensing, digital mapping, and instantaneous communications can help to predict, detect, and respond to potential fire crises. However, such technology must be broadly accessible, widely used, and transparently controlled before it can be effective in improving fire management in remote regions.
Economic disincentives and incentives
In addition to effective criminal and monetary penalties for illegal burning and liability for fire damage, some policy analysts believe in the potential for economic policy reforms and market-based incentives. A combination of eco-labeling and international trade restrictions could reduce markets for commodities that pose high fire risks in their production. The government could also provide fiscal advantages to support companies' investments in fire management.
See also
ASEAN Agreement on Transboundary Haze Pollution
Asian Brown Cloud
Chemical Equator
Peat swamp forest
Air pollution in Malaysia
Environmental issues in Indonesia
Deforestation in Indonesia
Palm oil production in Indonesia
Stubble burning
Combustion of biomass
References
Smog
Air pollution by region
Environmental disasters
Transboundary environmental issues
Environmental issues in Brunei
Environmental issues in Malaysia
Environmental issues in Indonesia
Environmental issues in Thailand
Health disasters in Asia
Health disasters in Malaysia
Health disasters in Indonesia
Health disasters in Singapore
Health disasters in Thailand | Southeast Asian haze | Physics | 3,136 |
11,566,620 | https://en.wikipedia.org/wiki/Wireless%20Zero%20Configuration | Wireless Zero Configuration (WZC), also known as Wireless Auto Configuration, or WLAN AutoConfig, is a wireless connection management utility included with Microsoft Windows XP and later operating systems as a service that dynamically selects a wireless network to connect to based on a user's preferences and various default settings. This can be used instead of, or in the absence of, a wireless network utility from the manufacturer of a computer's wireless networking device. The drivers for the wireless adapter query the NDIS Object IDs and pass the available network names (SSIDs) to the service. The service then lists them in the user interface on the Wireless Networks tab in the connection's Properties or in the Wireless Network Connection dialog box accessible from the notification area. A checked (debug) build version of the WZC service can be used by developers to obtain additional diagnostic and tracing information logged by the service.
Overview
Wireless Zero Configuration was first introduced with Windows XP. In Windows Vista and Windows 7, the service that provides equivalent functionality is called "WLAN AutoConfig". It is based on the Native Wi-Fi architecture introduced in Windows Vista.
Initially, there was no Wireless LAN API in Windows XP for developers to create wireless client programs and manage profiles and connections. After the release of Windows Vista, Microsoft released KB918997, which includes a Wireless LAN API for Windows XP SP2. It was later integrated into Windows XP Service Pack 3.
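As a hedged illustration of the kind of client programming this API enables (not part of the WZC service itself), the following C sketch uses the Native Wifi functions to enumerate wireless interfaces and print the SSID and signal quality of each visible network. Error handling is abbreviated; on Windows XP with the hotfix, the client version passed to WlanOpenHandle would be 1 rather than 2, and fewer capabilities are exposed.

```c
#include <windows.h>
#include <wlanapi.h>
#include <stdio.h>

#pragma comment(lib, "wlanapi.lib")

int main(void)
{
    HANDLE hClient = NULL;
    DWORD negotiated = 0;

    /* Request API version 2 (Vista and later); XP SP2/SP3 negotiates version 1. */
    if (WlanOpenHandle(2, NULL, &negotiated, &hClient) != ERROR_SUCCESS)
        return 1;

    PWLAN_INTERFACE_INFO_LIST ifList = NULL;
    if (WlanEnumInterfaces(hClient, NULL, &ifList) != ERROR_SUCCESS) {
        WlanCloseHandle(hClient, NULL);
        return 1;
    }

    for (DWORD i = 0; i < ifList->dwNumberOfItems; i++) {
        const WLAN_INTERFACE_INFO *ifi = &ifList->InterfaceInfo[i];
        PWLAN_AVAILABLE_NETWORK_LIST nets = NULL;

        /* Ask the service for the networks currently visible on this adapter. */
        if (WlanGetAvailableNetworkList(hClient, &ifi->InterfaceGuid,
                                        0, NULL, &nets) != ERROR_SUCCESS)
            continue;

        for (DWORD j = 0; j < nets->dwNumberOfItems; j++) {
            const WLAN_AVAILABLE_NETWORK *n = &nets->Network[j];
            printf("%-32.*s  signal %lu%%\n",
                   (int)n->dot11Ssid.uSSIDLength,
                   (const char *)n->dot11Ssid.ucSSID,
                   (unsigned long)n->wlanSignalQuality);
        }
        WlanFreeMemory(nets);
    }

    WlanFreeMemory(ifList);
    WlanCloseHandle(hClient, NULL);
    return 0;
}
```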
See also
List of Microsoft Windows components
Wireless connection management utility
Wireless LAN client comparison
References
External links
Wireless Zero Configuration application programming interface
Active Directory Schema Extensions for Windows Vista Wireless and Wired Group Policy Enhancements
The Cable Guy: Wireless Group Policy Settings for Windows Vista
Windows services
Wireless networking | Wireless Zero Configuration | Technology,Engineering | 357 |
66,116,392 | https://en.wikipedia.org/wiki/HD%20189567 | HD 189567 is a star with a pair of orbiting exoplanets, located in the southern constellation of Pavo. It is also known as Gliese 776, CD-67 2385, and HR 7644. The star has an apparent visual magnitude of 6.07, which is bright enough for it to be dimly visible to the naked eye. It lies at a distance of 58 light years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of −10.5 km/s.
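The stated distance can be related to the measured parallax through the standard relation below; the parallax figure shown is inferred from the quoted distance for illustration, not taken from a catalogue.

```latex
% Distance from trigonometric parallax: d in parsecs is the reciprocal of
% the parallax in arcseconds (1 pc = 3.26 light years).
d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]},\qquad
d \approx \frac{58\ \mathrm{ly}}{3.26\ \mathrm{ly/pc}} \approx 17.8\ \mathrm{pc}
\;\Longrightarrow\; p\,[\mathrm{arcsec}] \approx \frac{1}{17.8} \approx 0.056 \ (\text{about } 56\ \mathrm{mas}).
```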
The spectrum of HD 189567 presents as an ordinary G-type main-sequence star with a stellar classification of G3V. It has 83% of the mass of the Sun but 110% of the Sun's radius. The star is moderately depleted in heavy elements, having 55% of the solar abundance of iron, but is less depleted in oxygen, having 80% of its solar abundance. It has a low level of magnetic activity in its chromosphere. Age estimates range from 4.11 Gyr based on chromospheric heating to 11.26 Gyr from stellar rotation. The star is radiating 1.2 times the luminosity of the Sun from its photosphere at an effective temperature of 5,726 K.
Planetary system
One exoplanet was discovered around the star in 2011, HD 189567 b. This exoplanet has an estimated minimum mass of 8.5 Earth masses, which means that it is most likely a mini-Neptune. It has an orbital period of 14.3 days, placing it well interior to the habitable zone of the star system. The planet's existence was confirmed in 2021, along with the discovery of a second planet, HD 189567 c.
References
G-type main-sequence stars
Planetary systems with two confirmed planets
Pavo (constellation)
7644
CD-67 2385
Gliese and GJ objects
189567
098959
J20053286-6719156 | HD 189567 | Astronomy | 416 |
5,664 | https://en.wikipedia.org/wiki/Consciousness | Consciousness, at its simplest, is awareness of internal and external existence. However, its nature has led to millennia of analyses, explanations, and debate by philosophers, scientists, and theologians. Opinions differ about what exactly needs to be studied or even considered consciousness. In some explanations, it is synonymous with the mind, and at other times, an aspect of it. In the past, it was one's "inner life", the world of introspection, of private thought, imagination, and volition. Today, it often includes any kind of cognition, experience, feeling, or perception. It may be awareness, awareness of awareness, metacognition, or self-awareness, either continuously changing or not. The disparate range of research, notions, and speculations raises the question of whether the right questions are being asked.
Examples of the range of descriptions, definitions or explanations are: ordered distinction between self and environment, simple wakefulness, one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event, or mental process of the brain.
Etymology
The words "conscious" and "consciousness" in the English language date to the 17th century, and the first recorded use of "conscious" as a simple adjective was applied figuratively to inanimate objects ("the conscious Groves", 1643). It derived from the Latin conscius (con- "together" and scio "to know") which meant "knowing with" or "having joint or common knowledge with another", especially as in sharing a secret. Thomas Hobbes in Leviathan (1651) wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another". There were also many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase has the figurative sense of "knowing that one knows", which is something like the modern English word "conscious", but it was rendered into English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness".
The Latin conscientia, literally 'knowledge-with', first appears in Roman juridical texts by writers such as Cicero. It means a kind of shared knowledge with moral value, specifically what a witness knows of someone else's deeds. Although René Descartes (1596–1650), writing in Latin, is generally taken to be the first philosopher to use conscientia in a way less like the traditional meaning and more like the way modern English speakers would use "conscience", his meaning is nowhere defined. In Search after Truth (Amsterdam, 1701) he wrote the word with a gloss: conscientiâ, vel interno testimonio (translatable as "conscience, or internal testimony"). It might mean the knowledge of the value of one's own thoughts.
The origin of the modern concept of consciousness is often attributed to John Locke who defined the word in his Essay Concerning Human Understanding, published in 1690, as "the perception of what passes in a man's own mind". The essay strongly influenced 18th-century British philosophy, and Locke's definition appeared in Samuel Johnson's celebrated Dictionary (1755).
The French term conscience is defined roughly like English "consciousness" in the 1753 volume of Diderot and d'Alembert's Encyclopédie as "the opinion or internal feeling that we ourselves have from what we do".
Problem of definition
Scholars are divided as to whether Aristotle had a concept of consciousness. He does not use any single word or terminology that is clearly similar to the phenomenon or concept defined by John Locke. Victor Caston contends that Aristotle did have a concept more clearly similar to perception.
Modern dictionary definitions of the word consciousness evolved over several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between inward awareness and perception of the physical world, or the distinction between conscious and unconscious, or the notion of a mental entity or mental activity that is not physical.
The common-usage definitions of consciousness in Webster's Third New International Dictionary (1966) are as follows:
awareness or perception of an inward psychological or spiritual fact; intuitively perceived knowledge of something in one's inner self
inward awareness of an external object, state, or fact
concerned awareness; INTEREST, CONCERN—often used with an attributive noun [e.g. class consciousness]
the state or activity that is characterized by sensation, emotion, volition, or thought; mind in the broadest possible sense; something in nature that is distinguished from the physical
the totality in psychology of sensations, perceptions, ideas, attitudes, and feelings of which an individual or a group is aware at any given time or within a particular time span—compare STREAM OF CONSCIOUSNESS
waking life (as that to which one returns after sleep, trance, fever) wherein all one's mental powers have returned . . .
the part of mental life or psychic content in psychoanalysis that is immediately available to the ego—compare PRECONSCIOUS, UNCONSCIOUS
The Cambridge English Dictionary defines consciousness as "the state of understanding and realizing something".
The Oxford Living Dictionary defines consciousness as "[t]he state of being aware of and responsive to one's surroundings", "[a] person's awareness or perception of something", and "[t]he fact of awareness by the mind of itself and the world".
Philosophers have attempted to clarify technical distinctions by using a jargon of their own. The corresponding entry in the Routledge Encyclopedia of Philosophy (1998) reads:
Consciousness. Philosophers have used the term consciousness for four main topics: knowledge in general, intentionality, introspection (and the knowledge it specifically generates) and phenomenal experience... Something within one's mind is 'introspectively conscious' just in case one introspects it (or is poised to do so). Introspection is often thought to deliver one's primary knowledge of one's mental life. An experience or other mental entity is 'phenomenally conscious' just in case there is 'something it is like' for one to have it. The clearest examples are: perceptual experience, such as tastings and seeings; bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one's own actions or perceptions; and streams of thought, as in the experience of thinking 'in words' or 'in images'. Introspection and phenomenality seem independent, or dissociable, although this is controversial.
Traditional metaphors for mind
During the early 19th century, the emerging field of geology inspired a popular metaphor that the mind likewise had hidden layers "which recorded the past of the individual". By 1875, most psychologists believed that "consciousness was but a small part of mental life", and this idea underlies the goal of Freudian therapy, to expose the hidden layers of the mind.
Other metaphors from various sciences inspired other analyses of the mind, for example: Johann Friedrich Herbart described ideas as being attracted and repulsed like magnets; John Stuart Mill developed the idea of "mental chemistry" and "mental compounds", and Edward B. Titchener sought the "structure" of the mind by analyzing its "elements". The abstract idea of states of consciousness mirrored the concept of states of matter.
In 1892, William James noted that the "ambiguous word 'content' has been recently invented instead of 'object'" and that the metaphor of the mind as a container seemed to minimize the dualistic problem of how "states of consciousness can know" things, or objects; by 1899 psychologists were busily studying the "contents of conscious experience by introspection and experiment". Another popular metaphor was James's doctrine of the stream of consciousness, with continuity, fringes, and transitions.
James discussed the difficulties of describing and studying psychological phenomena, recognizing that commonly used terminology was a necessary and acceptable starting point towards more precise, scientifically justified language. Prime examples were phrases like inner experience and personal consciousness.
From introspection to awareness
Prior to the 20th century, philosophers treated the phenomenon of consciousness as the "inner world [of] one's own mind", and introspection was the mind "attending to" itself, an activity seemingly distinct from that of perceiving the 'outer world' and its physical phenomena. In 1892 William James noted this distinction, along with doubts about the inward character of the mind.
By the 1960s, for many philosophers and psychologists who talked about consciousness, the word no longer meant the 'inner world' but an indefinite, large category called awareness.
Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness. In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland emphasized external awareness, and expressed a skeptical attitude more than a definition.
Using 'awareness' as a definition or synonym of consciousness is not, however, a simple matter.
Influence on research
Many philosophers have argued that consciousness is a unitary concept that is understood by the majority of people despite the difficulty philosophers have had defining it. Max Velmans proposed that the "everyday understanding of consciousness" uncontroversially "refers to experience itself rather than any particular thing that we observe or experience" and he added that consciousness "is [therefore] exemplified by the things that we observe or experience", whether thoughts, feelings, or perceptions. Velmans noted however, as of 2009, that there was a deep level of "confusion and internal division" among experts about the phenomenon of consciousness, because researchers lacked "a sufficiently well-specified use of the term...to agree that they are investigating the same thing". He argued additionally that "pre-existing theoretical commitments" to competing explanations of consciousness might be a source of bias.
Within the "modern consciousness studies" community the technical phrase 'phenomenal consciousness' is a common synonym for all forms of awareness, or simply 'experience', without differentiating between inner and outer, or between higher and lower types. With advances in brain research, "the presence or absence of experienced phenomena" of any kind underlies the work of those neuroscientists who seek "to analyze the precise relation of conscious phenomenology to its associated information processing" in the brain. This neuroscientific goal is to find the "neural correlates of consciousness" (NCC). One criticism of this goal is that it begins with a theoretical commitment to the neurological origin of all "experienced phenomena" whether inner or outer. Also, the fact that the easiest 'content of consciousness' to be so analyzed is "the experienced three-dimensional world (the phenomenal world) beyond the body surface" invites another criticism, that most consciousness research since the 1990s, perhaps because of bias, has focused on processes of external perception.
From a history of psychology perspective, Julian Jaynes rejected popular but "superficial views of consciousness" especially those which equate it with "that vaguest of terms, experience". In 1976 he insisted that if not for introspection, which for decades had been ignored or taken for granted rather than explained, there could be no "conception of what consciousness is" and in 1990, he reaffirmed the traditional idea of the phenomenon called 'consciousness', writing that "its denotative definition is, as it was for René Descartes, John Locke, and David Hume, what is introspectable". Jaynes saw consciousness as an important but small part of human mentality, and he asserted: "there can be no progress in the science of consciousness until ... what is introspectable [is] sharply distinguished" from the processes of cognition such as perception, reactive awareness and attention, and automatic forms of learning, problem-solving, and decision-making.
The cognitive science point of view—with an inter-disciplinary perspective involving fields such as psychology, linguistics and anthropology—requires no agreed definition of "consciousness" but studies the interaction of many processes besides perception. For some researchers, consciousness is linked to some kind of "selfhood", for example to certain pragmatic issues such as the feeling of agency and the effects of regret and action on experience of one's own body or social identity. Similarly Daniel Kahneman, who focused on systematic errors in perception, memory and decision-making, has differentiated between two kinds of mental processes, or cognitive "systems": the "fast" activities that are primary, automatic and "cannot be turned off", and the "slow", deliberate, effortful activities of a secondary system "often associated with the subjective experience of agency, choice, and concentration". Kahneman's two systems have been described as "roughly corresponding to unconscious and conscious processes". The two systems can interact, for example in sharing the control of attention. While System 1 can be impulsive, "System 2 is in charge of self-control", and "When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do".
Some have argued that we should eliminate the concept from our understanding of the mind, a position known as consciousness semanticism.
In medicine, a "level of consciousness" terminology is used to describe a patient's arousal and responsiveness, which can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the level of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree or level of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale.
Philosophy of mind
While historically philosophers have defended various views on consciousness, surveys indicate that physicalism is now the dominant position among contemporary philosophers of mind. Overviews of the field often combine historical perspectives (e.g., Descartes, Locke, Kant) with organization around key issues in contemporary debates; an alternative is to focus primarily on current philosophical stances and empirical findings.
Coherence of the concept
Philosophers differ from non-philosophers in their intuitions about what consciousness is. While most people have a strong intuition for the existence of what they refer to as consciousness, skeptics argue that this intuition is misleading, either because the concept of consciousness is itself confused, or because our intuitions about it are illusory. Gilbert Ryle, for example, argued that the traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of entities, or identities, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves into thinking that there is some sort of thing, consciousness, separate from behavioral and linguistic understandings.
Types
Ned Block argued that discussions on consciousness often failed to properly distinguish phenomenal (P-consciousness) from access (A-consciousness), though these terms had been used before Block. P-consciousness, according to Block, is raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings with our bodies and responses at the center. These experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious, and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction, others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness.
Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms.
There is also debate over whether A-consciousness and P-consciousness always coexist or whether they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical, concluding "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility".
Distinguishing consciousness from its contents
Sam Harris observes: "At the level of your experience, you are not a body of cells, organelles, and atoms; you are consciousness and its ever-changing contents". Seen in this way, consciousness is a subjectively experienced, ever-present field in which things (the contents of consciousness) come and go.
Christopher Tricker argues that this field of consciousness is symbolized by the mythical bird that opens the Daoist classic the Zhuangzi. This bird's name is Of a Flock (peng 鵬), yet its back is countless thousands of miles across and its wings are like clouds arcing across the heavens. "Like Of a Flock, whose wings arc across the heavens, the wings of your consciousness span to the horizon. At the same time, the wings of every other being's consciousness span to the horizon. You are of a flock, one bird among kin."
Mind–body problem
Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated; however, the specific nature of the connection is unknown.
The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as mind–body dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension). He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.
Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes's rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics), and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought.
Since the dawn of Newtonian science with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine). His arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness.
A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cia, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in protein. At the present time many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing. Empirical evidence also weighs against the notion of quantum consciousness: an experiment on wave function collapse led by Catalina Curceanu in 2022 suggests that quantum consciousness, as proposed by Roger Penrose and Stuart Hameroff, is highly implausible.
Apart from the general question of the "hard problem" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum.
Problem of other minds
Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. The problem of other minds is a philosophical problem traditionally stated as the following epistemological question: Given that I can only observe the behavior of others, how can I know that others have minds? The problem of other minds is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at the University of Pittsburgh) regarding the literature and research studying artificial intelligence in androids.
The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in a research paper titled "The Unimagined Preposterousness of Zombies", argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences.
Qualia
The term "qualia" was introduced in philosophical literature by C. I. Lewis. The word is derived from Latin and means "of what sort". It is basically a quantity or property of something as perceived or experienced by an individual, like the scent of rose, the taste of wine, or the pain of a headache. They are difficult to articulate or describe. The philosopher and scientist Daniel Dennett describes them as "the way things seem to us", while philosopher and cognitive scientist David Chalmers expanded on qualia as the "hard problem of consciousness" in the 1990s. When qualia is experienced, activity is simulated in the brain, and these processes are called neural correlates of consciousness (NCCs). Many scientific studies have been done to attempt to link particular brain regions with emotions or experiences.
Species which experience qualia are said to have sentience, which is central to the animal rights movement, because it includes the ability to experience pain and suffering.
Scientific study
For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods. In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel and extensive unconscious ones. The Science and Religion Forum 1984 annual conference, 'From Artificial Intelligence to Human Consciousness' identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. Starting in the 1980s, an expanding community of neuroscientists and psychologists have associated themselves with a field called Consciousness Studies, giving rise to a stream of experimental work published in books, journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, along with regular conferences organized by groups such as the Association for the Scientific Study of Consciousness and the Society for Consciousness Studies.
Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli), and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it.
Measurement via verbal report
Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness.
For example, subjects who stare continuously at a Necker cube usually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same. The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation).
Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues. For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected. Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted. Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness.
Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion. In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity, and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent. The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness. Studies related to the neuroscience of free will have also shown that the influence consciousness has on decision-making is not always straightforward.
Mirror test and contingency awareness
Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves. Humans (older than 18 months) and other great apes, bottlenose dolphins, orcas, pigeons, European magpies and elephants have all been observed to pass this test. Some other animals, such as pigs, have been shown to use a mirror to find food, which demonstrates mirror use but not necessarily self-recognition.
Contingency awareness is another such approach; it is the conscious understanding of one's actions and their effects on one's environment. It is recognized as a factor in self-recognition. The brain processes underlying contingency awareness and learning are believed to depend on an intact medial temporal lobe and on age. A study done in 2020 involving transcranial direct current stimulation, magnetic resonance imaging (MRI) and eyeblink classical conditioning supported the idea that the parietal cortex serves as a substrate for contingency awareness and that age-related disruption of this region is sufficient to impair awareness.
Neural correlates
A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find that activity in a particular part of the brain, or a particular pattern of global brain activity, which will be strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies.
Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience. Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance where the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.
A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex (V1) show clear electrical responses to a stimulus. Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity. The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect the visual perception in the situation when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry). Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments; nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities. Meanwhile, bottom-up V1 activities for the central visual fields can be vetoed, and thus made invisible to perception, by the top-down feedback, when these bottom-up signals are inconsistent with the brain's internal model of the visual world.
Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in the subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony. An fMRI investigation suggested that these findings were strictly limited to the primary visual areas. This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some type of qualia.
In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals that are awake, in REM sleep or in a locked-in state than in those who are in deep sleep or in a vegetative state, making it potentially useful as a quantitative assessment of consciousness states.
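The published PCI is built on a compressibility measure: the TMS-evoked cortical response is binarized (significant activation versus none, across channels and time points), its Lempel-Ziv complexity is computed, and the result is normalized. The C sketch below shows only that central ingredient, a Lempel-Ziv (LZ76) phrase count for a single binary string in the style of the Kaspar and Schuster algorithm; the binarization step, the spatiotemporal layout of the data and the normalization used for PCI are omitted, so this is an illustrative fragment rather than the published measure.

```c
#include <stdio.h>
#include <string.h>

/* Lempel-Ziv (LZ76) complexity of a binary string: the number of distinct
   "phrases" found when scanning the sequence and copying from its own past.
   More regular (more compressible) sequences give lower counts. */
static unsigned lz76_complexity(const char *s, unsigned n)
{
    unsigned c = 1, l = 1, i = 0, k = 1, k_max = 1;
    if (n < 2) return n;            /* length 0 -> 0, length 1 -> 1 */
    for (;;) {
        if (s[i + k - 1] == s[l + k - 1]) {
            k++;
            if (l + k > n) { c++; break; }
        } else {
            if (k > k_max) k_max = k;
            i++;
            if (i == l) {           /* no copyable extension: start a new phrase */
                c++;
                l += k_max;
                if (l + 1 > n) break;
                i = 0; k = 1; k_max = 1;
            } else {
                k = 1;
            }
        }
    }
    return c;
}

int main(void)
{
    /* A highly regular "response" compresses well; an irregular one does not. */
    const char *regular   = "0101010101010101";
    const char *irregular = "0110100110010111";
    printf("regular:   c = %u\n", lz76_complexity(regular,   (unsigned)strlen(regular)));
    printf("irregular: c = %u\n", lz76_complexity(irregular, (unsigned)strlen(irregular)));
    return 0;
}
```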
Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain. So how similar are they? What homologs can be identified? The general conclusion from the study by Butler, et al. is that some of the major theories for the mammalian brain also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch, Edelman and Tononi, and Cotterill seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity. Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role, seems difficult to apply to the avian brain, since the avian homologs have a different morphology. Likewise, the theory of Eccles seems incompatible, since a structural homolog/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists.
Joaquin Fuster of UCLA has advocated the position of the importance of the prefrontal cortex in humans, along with the areas of Wernicke and Broca, as being of particular importance to the development of human language capacities neuro-anatomically necessary for the emergence of higher-order consciousness in humans.
A study in 2016 looked at lesions in specific areas of the brainstem that were associated with coma and vegetative states. A small region of the rostral dorsolateral pontine tegmentum in the brainstem was suggested to drive consciousness through functional connectivity with two cortical regions, the left ventral anterior insular cortex, and the pregenual anterior cingulate cortex. These three regions may work together as a triad to maintain consciousness.
Models
A wide range of empirical theories of consciousness have been proposed. Adrian Doerig and colleagues list 13 notable theories, while Anil Seth and Tim Bayne list 22 notable theories.
Global workspace theory
Global workspace theory (GWT) is a cognitive architecture and theory of consciousness proposed by the cognitive psychologist Bernard Baars in 1988. Baars explains the theory with the metaphor of a theater, with conscious processes represented by an illuminated stage. This theater integrates inputs from a variety of unconscious and otherwise autonomous networks in the brain and then broadcasts them to unconscious networks (represented in the metaphor by a broad, unlit "audience"). The theory has since been expanded upon by other scientists including cognitive neuroscientist Stanislas Dehaene and Lionel Naccache.
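The broadcast idea can be caricatured in a few lines of code. The following C sketch is a toy illustration only; the module names, salience values and winner-take-all rule are assumptions for illustration, not Baars's actual model. Several specialized processes propose content, the most salient proposal gains the workspace, and the winning content is then broadcast back to every process.

```c
#include <stdio.h>

#define N_PROCS 4

/* A toy "global workspace": each specialized process offers content with a
   salience value; the most salient item wins access and is broadcast to all. */
struct proposal { const char *content; double salience; };

int main(void)
{
    struct proposal proposals[N_PROCS] = {
        { "visual: red light ahead",  0.9 },
        { "auditory: phone ringing",  0.6 },
        { "memory: appointment at 3", 0.4 },
        { "interoception: thirst",    0.3 },
    };

    /* Competition for the workspace (the "illuminated stage"). */
    int winner = 0;
    for (int i = 1; i < N_PROCS; i++)
        if (proposals[i].salience > proposals[winner].salience)
            winner = i;

    /* Broadcast: the winning content becomes available to every process
       (the otherwise unconscious "audience"). */
    printf("Workspace broadcast: %s\n", proposals[winner].content);
    for (int i = 0; i < N_PROCS; i++)
        printf("  process %d receives: %s\n", i, proposals[winner].content);

    return 0;
}
```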
Integrated information theory
Integrated information theory (IIT), pioneered by neuroscientist Giulio Tononi in 2004, postulates that consciousness resides in the information being processed and arises once the information reaches a certain level of complexity. Additionally, IIT is one of the only leading theories of consciousness that attempts to create a 1:1 mapping between conscious states and precise, formal mathematical descriptions of those mental states. Proponents of this model suggest that it may provide a physical grounding for consciousness in neurons, as they provide the mechanism by which information is integrated. This also relates to the "hard problem of consciousness" proposed by David Chalmers. The theory remains controversial, and its scientific credibility has been questioned.
Orchestrated objective reduction
Orchestrated objective reduction (Orch-OR), or the quantum theory of mind, was proposed by scientists Roger Penrose and Stuart Hameroff, and states that consciousness originates at the quantum level inside neurons. The mechanism is held to be a quantum process called objective reduction that is orchestrated by cellular structures called microtubules, which form the cytoskeleton around which the brain is built. The duo proposed that these quantum processes accounted for creativity, innovation, and problem-solving abilities. Penrose published his views in the book The Emperor's New Mind. In 2014, the discovery of quantum vibrations inside microtubules gave new life to the argument.
Attention schema theory
In 2011, Graziano and Kastner proposed the "attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself. Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X.
Entropic brain theory
The entropic brain is a theory of conscious states informed by neuroimaging research with psychedelic drugs. The theory suggests that the brain in primary states such as rapid eye movement (REM) sleep, early psychosis and under the influence of psychedelic drugs, is in a disordered state; normal waking consciousness constrains some of this freedom and makes possible metacognitive functions such as internal self-administered reality testing and self-awareness. Criticism has included questioning whether the theory has been adequately tested.
Projective consciousness model
In 2017, work by David Rudrauf and colleagues, including Karl Friston, applied the active inference paradigm to consciousness, leading to the projective consciousness model (PCM), a model of how sensory data is integrated with priors in a process of projective transformation. The authors argue that, while their model identifies a key relationship between computation and phenomenology, it does not completely solve the hard problem of consciousness or completely close the explanatory gap.
Claustrum being the conductor for consciousness
In 2004, molecular biologist Francis Crick (co-discoverer of the structure of DNA) proposed that binding together an individual's experience requires something akin to the conductor of an orchestra. Together with neuroscientist Christof Koch, he argued that this conductor would have to collate information rapidly from various regions of the brain, and that the claustrum was well suited to the task. Crick died while still working on the idea.
The proposal is supported by a 2014 study in which a team at the George Washington University induced unconsciousness in a 54-year-old woman with intractable epilepsy by stimulating her claustrum. The woman underwent depth electrode implantation and electrical stimulation mapping; the electrode between the left claustrum and anterior-dorsal insula was the one that induced unconsciousness. Correlations among medial parietal and posterior frontal channels also increased significantly during stimulation. The findings suggested that the left claustrum or anterior insula is an important part of a network that subserves consciousness, and that disruption of consciousness is related to increased EEG signal synchrony within frontal-parietal networks. However, this remains a single case study and is therefore inconclusive.
Biological function and evolution
The emergence of consciousness during biological evolution remains a topic of ongoing scientific inquiry. The survival value of consciousness is still a matter of exploration and understanding. While consciousness appears to play a crucial role in human cognition, decision-making, and self-awareness, its adaptive significance across different species remains a subject of debate.
Some people question whether consciousness has any survival value, and some argue that it is merely a by-product of evolution. Thomas Henry Huxley, for example, defends in an essay titled "On the Hypothesis that Animals are Automata, and its History" an epiphenomenalist theory of consciousness, according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". William James objects to this in his essay "Are We Automata?", offering an evolutionary argument for mind-brain interaction: if the preservation and development of consciousness in biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes but has had a survival value of its own; and it could only have had this if it had been efficacious. Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain.
Opinions are divided on when and how consciousness first arose. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or early vertebrates in the Cambrian over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Further exploration of the origins of consciousness, particularly in molluscs, has been done by Peter Godfrey-Smith in his book Metazoa.
Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. Another example is Gerald Edelman's dynamic core hypothesis, which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness, which humans share with non-human animals (see Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.), and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article by Ezequiel Morsella.
As noted earlier, even among writers who consider consciousness to be well-defined, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics"). Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality. This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them for both cooperative and devious ends.
Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. As a result, an exaptive explanation of consciousness has gained favor with some theorists who posit that consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina, which is not an adaptation of the retina but simply a by-product of the way the retinal axons were wired. Several scholars, including Pinker, Chomsky, Edelman, and Luria, have pointed to the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see Neural correlates section above).
Altered states
There are some brain states in which consciousness seems to be absent, including dreamless sleep and coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage. Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alterations in body image, and changes in meaning or significance.
The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions. Thought processes during the dream state frequently show a high level of irrationality. Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed.
Research conducted on the effects of partial epileptic seizures on consciousness found that patients who have partial epileptic seizures experience altered states of consciousness. In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention.
A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA ("Ecstasy"), or most notably by the class of drugs known as psychedelics. LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role.
There has been some research into physiological changes in yogis and people who practise various techniques of meditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness.
The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment. Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia; and changed meaning of percepts.
Medical aspects
The medical approach to consciousness is scientifically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments. The medical approach focuses mostly on the amount of consciousness a person has: in medicine, consciousness is assessed as a "level" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end.
Consciousness is of concern to patients and physicians, especially neurologists and anesthesiologists. Patients may have disorders of consciousness or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general anesthesia, or inducing medical coma. Also, bioethicists may be concerned with the ethical implications of consciousness in medical cases of patients such as the Karen Ann Quinlan case, while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works.
Assessment
In medicine, consciousness is examined using a set of procedures known as neuropsychological assessment. There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be "alert and oriented times four" (sometimes denoted "A&Ox4" on a medical chart), and is usually considered fully conscious.
The more complex procedure is known as a neurological examination, and is usually carried out by a neurologist in a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. The outcome may be summarized using the Glasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 to 8 indicating coma, and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from "no motor response" to "obeys commands"), the best eye response (ranging from "no eye opening" to "eyes opening spontaneously") and the best verbal response (ranging from "no verbal response" to "fully oriented"). There is also a simpler pediatric version of the scale, for children too young to be able to use language.
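As a rough illustration of how the scale's arithmetic works, the minimal Python sketch below sums the three subscale scores and applies the conventional severity groupings (totals of 8 or below are commonly treated as the coma range). The function names, the example patient values, and the grouping labels are choices made for this illustration; real scoring involves clinical judgment that a simple sum does not capture.

```python
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS subscales: eye opening 1-4, verbal 1-5, motor 1-6 (total 3-15)."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("subscale score out of range")
    return eye + verbal + motor

def interpret(total: int) -> str:
    # Conventional groupings only; clinical interpretation depends on context.
    if total <= 8:
        return "severe impairment (coma range)"
    if total <= 12:
        return "moderate impairment"
    return "mild or no impairment"

# Example: eyes open to speech (3), confused speech (4), obeys commands (6) -> total 13.
score = glasgow_coma_scale(eye=3, verbal=4, motor=6)
print(score, interpret(score))
```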
In 2013, an experimental procedure was developed to measure degrees of consciousness: the brain is stimulated with a magnetic pulse, the resulting waves of electrical activity are recorded, and a consciousness score is derived from the complexity of that activity.
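The score in this procedure reflects how compressible the evoked activity pattern is: the more compressible the response, the lower its complexity. The Python sketch below is only a toy illustration of that general idea, not the published clinical method; it applies a simple dictionary-based Lempel-Ziv parse to two made-up binary sequences, and the sequence lengths, the parsing variant, and the absence of any real binarization step are all assumptions made for the example.

```python
import random

def lempel_ziv_complexity(sequence: str) -> int:
    """Count phrases in a simple dictionary-based Lempel-Ziv parse of a binary string."""
    phrases = set()
    count = 0
    phrase = ""
    for symbol in sequence:
        phrase += symbol
        if phrase not in phrases:   # a new phrase ends here
            phrases.add(phrase)
            count += 1
            phrase = ""
    if phrase:                      # count any trailing, incomplete phrase
        count += 1
    return count

random.seed(0)
n = 200
regular = "0" * n                                           # a highly repetitive "response"
irregular = "".join(random.choice("01") for _ in range(n))  # an irregular "response"

print(lempel_ziv_complexity(regular))    # low count: the pattern compresses well
print(lempel_ziv_complexity(irregular))  # higher count: the pattern is less compressible
```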
Disorders
Medical conditions that inhibit consciousness are considered disorders of consciousness. This category generally includes minimally conscious state and persistent vegetative state, but sometimes also includes the less severe locked-in syndrome and more severe chronic coma. Differential diagnosis of these disorders is an active area of biomedical research. Finally, brain death results in an irreversible disruption of consciousness. While other conditions may cause a moderate deterioration (e.g., dementia and delirium) or transient interruption (e.g., grand mal and petit mal seizures) of consciousness, they are not included in this category.
Medical experts increasingly view anosognosia as a disorder of consciousness. Anosognosia is a Greek-derived term meaning "unawareness of disease". This is a condition in which patients are disabled in some way, most commonly as a result of a stroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them. The most frequently occurring form is seen in people who have experienced a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a syndrome known as hemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that does not make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary.
Outside human adults
In children
Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth. Psychologist and educator William Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven, children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection". In a 2020 paper, Katherine Nelson and Robyn Fivush use "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness". Julian Jaynes had staked out these positions decades earlier. Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "theory of mind", calling theory of mind "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts". They write, "The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age".
In animals
The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals due to the fact that he believed only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed.
Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled "What Is it Like to Be a Bat?". He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence.
On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge for the Francis Crick Memorial Conference, which dealt with consciousness in humans and pre-linguistic consciousness in nonhuman animals. After the conference, they signed, in the presence of Stephen Hawking, the 'Cambridge Declaration on Consciousness', which summarizes the conference's most important conclusions:
"We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society."
"Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors."
In artificial intelligence
The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way, writing that "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."
One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled Computing Machinery and Intelligence. Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars has argued that, as technology advances and machines begin to display substantial signs of human-like behavior, the dichotomy between human consciousness and human-like consciousness becomes passé, and questions of machine autonomy begin to prevail, as can already be observed in nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is the result of compression. As an agent sees representations of itself recurring in the environment, the compression of those representations can be called consciousness.
In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states. His own view is that consciousness has subjective, first-person causal powers by being essentially intentional due to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he is only conscious of what he is doing when he speaks English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that the syntax cannot lead to semantic meaning in the way strong AI advocates hoped.
In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated. Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test, but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition.
In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness. A positive result proves that a machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.
Stream of consciousness
William James is usually credited with popularizing the idea that human consciousness flows like a stream, in his Principles of Psychology of 1890.
According to James, the "stream of thought" is governed by five characteristics:
Every thought tends to be part of a personal consciousness.
Within each personal consciousness thought is always changing.
Within each personal consciousness thought is sensibly continuous.
It always appears to deal with objects independent of itself.
It is interested in some parts of these objects to the exclusion of others.
A similar concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or "mental continuum". Buddhist teachings describe that consciousness manifests moment to moment as sense impressions and mental phenomena that are continuously changing. The teachings list six triggers that can result in the generation of different mental events. These triggers are input from the five senses (seeing, hearing, smelling, tasting or touch sensations), or a thought (relating to the past, present or the future) that happen to arise in the mind. The mental events generated as a result of these triggers are: feelings, perceptions and intentions/behaviour. The moment-by-moment manifestation of the mind-stream is said to happen in every person all the time. It even happens in a scientist who analyzes various phenomena in the world, or analyzes the material body including the organ brain. The manifestation of the mindstream is also described as being influenced by physical laws, biological laws, psychological laws, volitional laws, and universal laws. The purpose of the Buddhist practice of mindfulness is to understand the inherent nature of the consciousness and its characteristics.
Narrative form
In the West, the primary impact of the idea has been on literature rather than science: "stream of consciousness as a narrative mode" means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologues of Shakespeare's plays and reached its fullest development in the novels of James Joyce and Virginia Woolf, although it has also been used by many other noted writers.
A frequently cited example is the long, largely unpunctuated interior monologue of Molly Bloom that closes Joyce's Ulysses.
Spiritual approaches
The Upanishads contain some of the oldest recorded accounts of consciousness, as explored by sages through meditation.
To most philosophers, the word "consciousness" connotes the relationship between the mind and the world. To writers on spiritual or religious topics, it frequently connotes the relationship between the mind and God, or the relationship between the mind and deeper truths that are thought to be more fundamental than the physical world.
The Canadian psychiatrist Richard Maurice Bucke, author of the 1901 book Cosmic Consciousness: A Study in the Evolution of the Human Mind, distinguished between three types of consciousness: 'Simple Consciousness', awareness of the body, possessed by many animals; 'Self Consciousness', awareness of being aware, possessed only by humans; and 'Cosmic Consciousness', awareness of the life and order of the universe, possessed only by humans who have attained "intellectual enlightenment or illumination".
Another thorough account of the spiritual approach is Ken Wilber's 1977 book The Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels.
Other examples include the various levels of spiritual consciousness presented by Prem Saran Satsangi and Stuart Hameroff.
See also
Notes
References
Further reading
External links
Cognitive neuroscience
Cognitive psychology
Concepts in epistemology
Concepts in the philosophy of mind
Concepts in the philosophy of science
Emergence
Mental processes
Metaphysical properties
Metaphysics of mind
Neuropsychological assessment
Ontology
Phenomenology
Theory of mind | Consciousness | Biology | 15,076 |
4,030,560 | https://en.wikipedia.org/wiki/Flag%20protocol | A flag protocol (or flag code) is a set of rules and regulations for the display of flags within a country, including national, subnational, and foreign flags. Generally, flag protocols call for the national flag to be the most prominent flag (i.e., in the position of honor), flown highest and to its own right (the viewer's left), and for the flag to never touch the ground. Enforcement of flag protocols varies by nation: some countries treat flag protocols as recommendations and guidelines, while others punish violations with civil or criminal penalties.
General guidelines
The following guidelines are generally used between all countries.
Position of honor
The position of honor is reserved for the most important flag being displayed, typically the national flag of the country in which it is flown. Flags of sub-national divisions (such as states or provinces) typically follow the national flag, and then other flags such as armed forces or personal flags.
In the case of foreign nations, the host country receives highest precedence, and other national flags are displayed in alphabetical order.
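As a small illustration of this precedence rule, the Python sketch below orders a set of national flags for display: the host nation first, the remaining nations alphabetically. The function name and the plain English alphabetical ordering are assumptions made for the example; in practice, which language's alphabet governs the ordering (English, French, or the host's own language) varies between organisers and events.

```python
def flag_display_order(host: str, nations: list[str]) -> list[str]:
    """Host nation's flag takes the position of honour; the rest follow alphabetically."""
    return [host] + sorted(n for n in nations if n != host)

print(flag_display_order("Canada", ["Japan", "Brazil", "Australia", "India"]))
# ['Canada', 'Australia', 'Brazil', 'India', 'Japan']
```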
The position of honor is centered or to the flag's own right (a viewer's left). When carried in single file (such as for color guard), the flag of honor leads.
When two poles are crossed, the position of honor is the flag that ends on the left side from the point of view of an observer (the pole will therefore end on the right). In a semicircle, the position of honor is the center. If a full circle is used outside an entrance, the position of honor is directly over the entrance. If used to line the walls of a room, the flag should be placed directly opposite the entrance. When placed with a podium or at a place of worship, the flag should hang directly behind or on a pole to the right of the speaker, from the point of view of the flag.
On a vehicle the flag should be put on a window or affixed securely to the front of the chassis, on the nearside of the vehicle, i.e. the one opposite the driver. (In other words, in countries that drive on the right hand side of the road, a flag is on the right of the vehicle.) On a vehicle where a visiting Head of State or Government is sharing a car with the host Head of State or Government, the host's flag takes the nearside position, the guest's flag on the offside.
Hanging
When flown horizontally, as from a flag pole, the flag should be oriented so that the canton is closest to the top of the pole. If hung against a wall, the canton should be placed in the upper-left corner from the point of view of the observer.
When hung vertically, flags should be rotated so the canton is again closest to the top of the pole. If the flag is displayed against a wall, the canton should again appear in the upper-left corner, which requires that the flag be both rotated and "flipped" from its horizontal orientation.
On a helicopter
Sometimes in a ceremonial flypast, a flag is flown from a weighted rope dangling from beneath a helicopter.
By country
Australia
The Australian National Flag is defined by the Flags Act 1953, and guidelines for its use are published by the Australian Government. When flown with other flags in Australia it takes the position of honour; it should not be flown at night unless illuminated, and should never be allowed to touch the ground.
Brazil
The display of the Brazilian flag is governed by Law No. 5,700 of 1971, which regulates the national symbols. A national flag is flown permanently over the Praça dos Três Poderes in Brasília, and flags that are no longer fit for display are to be delivered to a military unit, where they are burned in a ceremony on Flag Day (19 November).
Brunei
Members of the royal family and the nobility each have their own flags. The Standard of the Sultan must be flown only over Istana Nurul Iman. Only the standards of the Sultan, the Crown Prince, the Viziers and the Cheterias (royal nobles) are flown every day at their respective residences. Other personal royal flags of the Pengirans and the personal flags of non-royal nobles (such as the Pehin Manteris) are flown only during ceremonial periods announced by the Prime Minister's Office, such as the Sultan's birthday, royal weddings and National Day. The public generally flies the national flag during these periods. As in many other countries, Bruneians consider it taboo for the flag to touch the ground.
Canada
There are established rules for flying the national flag of Canada: for example, "The National Flag will always be flown on its own flagpole ... It is improper to fly the National Flag with another flag, of any type, on the same flagpole." The flag is also subject to the usual position-of-honour guidelines.
With the rise of synthetic fabrics, Canada has recently revised its guidelines on the disposal of the national flag.
Flags made of natural fibres (wool, cotton, linen) should be burned in a dignified manner, privately, without ceremony or public attention being drawn to the destruction of the material.
Flags made of synthetic material (nylon or polyester) should not be burned due to environmental damage and potential fire hazard. They should be respectfully torn into strips, with each element of the flag reduced to a single colour, so that the remaining pieces do not resemble a flag. The individual pieces should then be placed in a bag for disposal – the shreds of fabric should not be reused or fashioned into anything.
France
When a French vessel meets another French ship, it lowers and raises its ensign as a greeting. A merchant ship meeting a ship of the French Navy greets it in this way three times.
India
The flag of India has a very distinctive protocol. It is governed by the Flag Code of India, 2002; the Emblems and Names (Prevention of Improper Use) Act, 1950; and the Prevention of Insults to National Honour Act, 1971.
Insulting the national flag is punishable by up to three years' imprisonment, a fine, or both.
Official regulation states that the flag must never touch the ground or water, or be used as a drapery in any form.
Disposal of damaged flags is also covered by the flag code. Damaged or soiled flags may not be cast aside or disrespectfully destroyed; they have to be destroyed as a whole and in private, preferably by burning or by any other method consistent with the dignity of the flag.
Italy
Obligation to exhibit
The law, implementing Article 12 of the Constitution and following Italy's membership of the European Union, lays down the general provisions governing the use and display of the flag of the Italian Republic and of the flag of the European Union within Italian territory. In particular, in public buildings the flag of the Italian Republic, the flag of the European Union and the portrait of the president of the Italian Republic must be displayed in the offices of the holders of the most important institutional positions:
a) members of the Council of Ministers and Undersecretaries of State;
b) managers in charge of general directorates or equivalent offices in the central administrations of the State, as well as managers in charge of peripheral offices of the State having a territorial constituency no smaller than the province;
c) holders of the highest institutional office of public bodies of national dimension, and holders of the managerial offices corresponding to those referred to in letter b);
d) holders of the highest institutional office of the independent authorities;
e) managers of judicial offices;
f) heads of diplomatic representations, consular offices and Italian cultural institutes abroad; for honorary consuls, display is optional.
The flag of Italy must also be displayed outside schools of all levels, university complexes, buildings hosting voting operations, prefectures, police headquarters, palaces of justice and central post offices.
The flag of Italy must also be displayed on all public offices on the Tricolour Day (7 January), the Anniversary of the Lateran Treaty (11 February), the Anniversary of the Liberation (25 April), the Labour Day (1 May), the Europe Day (9 May), the Feast of the Italian Republic (2 June), the commemoration of the Four days of Naples (28 September), the feast of the patron saint of Italy (Francis of Assisi, 4 October), United Nations Day (24 October; here the tricolour must fly together with the flag of the United Nations) and National Unity and Armed Forces Day (4 November).
When displayed alongside other flags, the flag of Italy takes the position of honour; it is raised first and lowered last. Other national flags should be arranged in alphabetical order. Where two (or more than three) flags appear together, the national flag should be placed to the right (left of the observer); in a display of three flags in line, the national flag occupies the central position. The European flag is also flown from government buildings on a daily basis. In the presence of a foreign visitor belonging to a member state, this takes precedence over the Italian flag. As a sign of mourning, flags flown externally shall be lowered to half-mast; two black ribbons may be attached to those otherwise displayed.
Manner of display
The tricolor is often accompanied by the flag of the European Union and the banners of local authorities. In the case of two flags displayed, the national flag must be placed on the right (left for those watching, i.e. the position of honour), while if the flags are in an odd number, the tricolor must be hoisted in the centre. This last provision is no longer applicable in the event that the flag of another country belonging to the European Union is displayed: in this circumstance the Italian flag gives up the central place to the EU flag.
As a rule, no more than one flag may be flown from each flagpole. An exception is the presidential standard, which is hoisted on the Torrino del Quirinale, beneath the tricolour, when the third pole is occupied by the flag of a visiting country. If there are three flagpoles available but only two flags to be displayed, the central flagpole must be left free and the order of importance of the flags must be respected.
For example, different flag arrangements are prescribed for public buildings, as seen from the outside, depending on the occasion (the specific sequences are shown in the source as rows of flag images):
daily display;
United Nations Day;
the presence of a visiting country not belonging to the EU;
the presence of a visiting country belonging to the EU;
regional, provincial and municipal headquarters with three flagpoles;
regional, provincial and municipal headquarters with four flagpoles;
regional, provincial and municipal headquarters in the presence of a visiting country.
The law also regulates their dimensions: without prejudice to the 2:3 proportions, which must always be respected, tricolour flags displayed inside buildings must measure 100 × 150 cm, with a 250 cm pole, while those flown outside must measure 2 × 3 m or 3 × 4.5 m, with the pole 4 or 8 m high depending on whether it is installed on a balcony or on the ground, respectively. When flags of other states are present, such as during official visits by foreign dignitaries, the foreign flags must not be larger than the tricolour.
The tricolour flags displayed must always be in excellent condition, fully extended and must never touch water or land. In no case can figures and writings be written or printed on the cloth. Furthermore, the Italian flag can never be used as a simple drapery or as a fabric in common use (e.g. to cover tables or as curtains).
In the event of public mourning the banner can be raised at half-mast and two strips of black velvet can be affixed to the cloth; the latter are instead mandatory when the tricolour participates in funeral ceremonies. In public ceremonies, the tricolour must always parade first.
Flag-folding
There is a precise way to fold the tricolour correctly, by taking into account the three vertical bands of which the banner is composed.
The flag must be folded according to the boundaries of the colour bands: first the red band and then the green band must be folded over the white one in order to leave only the latter two colours visible; only subsequently should it be folded further in order to completely cover the red and white with green—the only colour that must be visible at the time of the closure of the cloth.
Legal protection
Article 292 of the Italian Penal Code ("Insult or damage to the flag or other emblem of the State") protects the Italian flag by making it a criminal offence to insult it or other banners bearing the national colours.
Japan
Japan's Act on National Flag and Anthem (1999) formally designates the Hinomaru as the national flag but prescribes few rules for its display, which is largely governed by custom and government guidance. As a sign of mourning, the flag may be flown at half-staff or displayed with black mourning cloth.
Philippines
The flag of the Philippines follows a strict display protocol. The blue field should be to the flag's own right (left of the observer) in time of peace, and the red field to the flag's own right in time of war. When displayed over the middle of a street, as between buildings or on posts, the flag should be suspended vertically with the blue field pointing north or east.
Saudi Arabia
Because the flag of Saudi Arabia bears the Shahada, it is never flown at half-mast.
South Korea
Guidelines for displaying the South Korean flag, the Taegukgi, call for it to be flown on designated national days and at half-staff on days of mourning such as Memorial Day (6 June), and for the flag never to be allowed to touch the ground.
United Kingdom
Unlike many other countries, use of the national flag, the Union Jack, for many informal purposes such as on clothing is accepted.
The Department for Communities and Local Government in November 2012 released the Plain English guide to flying flags for England, a "summary of the new, more liberalised, controls over flag flying that were introduced on 12 October 2012". In England, the statutes governing the flying of flags are the Town and Country Planning (Control of Advertisements) (England) (Amendment) Regulations 2007 and 2012.
The Union Jack and the national flags of the constituent countries are flown at half mast on days of national mourning, such as after the death of a sovereign. The only flag in the UK that never flies at half mast is the Royal Standard, the personal flag of the monarch. This is because there is never a moment without a monarch: when one dies, another immediately ascends to the throne, so the standard is flown at full mast at all times over the residence, building, ship or car in which the monarch is present.
United States
When displayed either horizontally or vertically against a wall, the union should be uppermost and to the flag's own right, that is, to the observer's left. When displayed in a window, the flag should be displayed in the same way, with the union or blue field to the left of the observer in the street.
The flag should be to the speaker's right (also described as the flag's own right or audience's left), that is to the left of the podium or pulpit as the speaker is facing the audience. Old guidelines had a distinction whether the flag was at the level of the speaker on a stage or the level of the audience. That distinction has been eliminated and the rule simplified.
When the flag is displayed at half-staff, it is customary to raise it briskly to the top of the flagpole, then lower it slowly to the half-way mark; before being lowered for the day, it is again raised briefly to the peak. The flag is only displayed at half-staff by presidential decree or act of Congress, except on two days: on Pearl Harbor Remembrance Day, the flag may be displayed at half-staff until sundown; on Memorial Day, the flag is flown at half-staff until noon and then raised to full staff for the remainder of the day.
When displaying the US flag, it is customary for it to be above a state's flag when flown on the same pole. When flown separately, a state's flag may be at the same height as the US flag, with the US flag to the left of the state flag, from the perspective of the viewer. When flown with several state flags, the US flag should be at the same height and to the flag's own right (viewer's left), or at the center of and higher than a grouping of state flags. The idea that only the Texas and Hawaii flags—having been the national flags of the Republic of Texas and the Kingdom of Hawaii—may be flown at an equal height to the US flag is a legend. In fact, any other flag may be flown at an equal height to the US flag provided the US flag is at the leftmost staff from the perspective of the viewer.
The flag of the United States is used to drape the coffins of deceased veterans of the armed forces. When it is so used, the Union (white stars on blue background) is placed above the deceased's left shoulder.
The foregoing rules derive from the United States Code, Title 4, Chapter 1, pertaining to patriotic customs and observances.
These laws were supplemented by executive orders and presidential proclamations.
Uruguay
National flags may not be altered in any way, nor used for any purpose other than as national symbols, as stated by law. It is also prohibited for buildings to fly flags other than national flags. The public loyalty oath to the flag must be taken once by every citizen and is celebrated on 19 June at educational institutions. The disposal of damaged flags is carried out by the Uruguayan Army: each year on 23 September damaged flags are burnt in an official act.
Gallery
This gallery shows a few examples of flag protocol in practice.
See also
Courtesy flag
Flag desecration
Vexillology
Notes
References
Bibliography
External links
US Flag Disposal Instructions
Royal Yachting Association's advice on flag etiquette
US Flag Etiquette
UK flag flying protocol
Etiquette
Vexillology
Water transport | Flag protocol | Biology | 3,548 |
5,744,837 | https://en.wikipedia.org/wiki/Friction%20of%20distance | Friction of distance is a core principle of geography that states that movement incurs some form of cost, in the form of physical effort, energy, time, and/or the expenditure of other resources, and that these costs are proportional to the distance traveled. This cost is thus a resistance against movement, analogous (but not directly related) to the effect of friction against movement in classical mechanics. The subsequent preference for minimizing distance and its cost underlies a vast array of geographic patterns from economic agglomeration to wildlife migration, as well as many of the theories and techniques of spatial analysis, such as Tobler's first law of geography, network routing, and cost distance analysis. To a large degree, friction of distance is the primary reason why geography is relevant to many aspects of the world, although its importance (and perhaps the importance of geography) has been decreasing with the development of transportation and communication technologies.
History
It is not known who first coined the term "friction of distance," but the effect of distance-based costs on geographic activity and geographic patterns has been a core element of academic geography since its initial rise in the 19th century. Von Thünen's isolated state model of rural land use (1826), possibly the earliest geographic theory, directly incorporated the cost of transportation of different agricultural products as one of the determinants of how far from a town each type of good could be produced profitably. The industrial location theory of Alfred Weber (1909) and the central place theory of Walter Christaller (1933) were also basically optimizations of space to minimize travel costs.
By the 1920s, social scientists began to incorporate principles of physics (more precisely, some of its mathematical formalizations), such as gravity, specifically the inverse-square law found in Newton's law of universal gravitation. Geographers quickly identified a number of situations in which the interaction between places, whether migration between cities or the distribution of residences willing to patronize a shop, exhibited this distance decay due to the advantages of minimizing distance traveled. Gravity models and other distance optimization models became widespread during the quantitative revolution of the 1950s and the subsequent rise of spatial analysis. Gerald Carrothers (1956) was one of the first to explicitly use the analogy of "friction" to conceptualize the effect of distance, suggesting that these distance optimizations needed to acknowledge that the effect varies according to localized factors. Ian McHarg, in Design with Nature (1969), was among those who developed the idea that distance costs are multifaceted, although he did not initially employ mathematical or computational methods to optimize them.
In the era of geographic information systems, starting in the 1970s, many of the existing proximity models and new algorithms were automated as analysis tools, making them significantly easier for a wider set of professionals to use. These tools have tended to focus on problems that could be solved deterministically, such as buffers, cost distance analysis, interpolation and network routing. Other problems that apply the friction of distance are much more difficult (i.e., NP-hard), such as the traveling salesman problem and cluster analysis, and automated tools to solve them (usually using heuristic algorithms such as k-means clustering) are less widely available, or only recently available, in GIS software.
Distance costs
As an illustration, picture a hiker standing on the side of an isolated wooded mountain, who wishes to travel to the other side of the mountain. There are essentially an infinite number of paths she could take to get there. Traveling directly over the mountain peak is "expensive," in that every ten meters spent climbing requires significant effort. Traveling ten meters cross country through the woods requires significantly more time and effort than traveling ten meters along a developed trail or through open meadow. Taking a level route along a road going around the mountain has a much lower cost (in both effort and time) for every ten meters, but the total cost accumulates over a much longer distance. In each case, the amount of time and/or effort required to travel ten meters is a measurement of the friction of distance. Determining the optimal route requires balancing these costs, and can be solved using the technique of cost distance analysis.
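The calculation behind such a choice can be sketched with a deliberately tiny example: a 5 x 5 grid whose values stand in for the cost of entering each cell (cheap cells around the edge playing the role of the road, expensive cells in the middle the peak), and a Dijkstra-style accumulation of least total cost from the hiker's starting cell. The grid values, the 4-connected movement rule, and the cost convention are all assumptions made for this illustration, not a full GIS cost-surface implementation.

```python
import heapq

# Cost of entering each cell (e.g., minutes per step): a cheap "road" around the
# edge and an expensive "peak" in the middle.
COST = [
    [1, 1, 1, 1, 1],
    [1, 4, 6, 4, 1],
    [1, 6, 9, 6, 1],
    [1, 4, 6, 4, 1],
    [1, 1, 1, 1, 1],
]

def accumulated_cost(cost, start):
    """Dijkstra over a 4-connected grid: cheapest total cost from start to every cell."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0}
    frontier = [(0, start)]
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]          # friction of distance for this step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return dist

dist = accumulated_cost(COST, start=(2, 0))  # start on the west side of the "mountain"
print(dist[(2, 4)])  # 8: the route around the peak beats the direct route over it (22)
```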
In another, very common example, a person wants to drive from his home to the nearest hospital. Of the many (but finite) possible routes through the road network, the one with the shortest distance passes through residential neighborhoods with low speed limits and frequent stops. An alternative route follows a bypass highway around the neighborhoods, having a significantly longer distance, with much higher speed limits and infrequent stops. Thus, this alternative has a much lower unit friction of distance (in this case, time), but it accumulates over a greater distance, requiring calculations to determine the optimal (taking the least total travel time), perhaps using the network analysis algorithms commonly found in web maps such as Google Maps.
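A minimal sketch of this comparison is below: a toy road network in which each segment carries both a length and a typical travel time, and the same Dijkstra routine is run once with each weighting. The node names, edge values, and graph shape are invented for the example; real routing engines operate on far larger networks, often with time-varying costs.

```python
import heapq

# Each directed edge carries (length_km, typical_minutes). The "neighborhood" route is
# shorter in distance; the "bypass" route is faster.
EDGES = {
    "home":         {"neighborhood": (2.0, 8.0), "bypass_on": (3.0, 3.0)},
    "neighborhood": {"hospital": (3.0, 12.0)},
    "bypass_on":    {"bypass_off": (6.0, 5.0)},
    "bypass_off":   {"hospital": (1.5, 2.0)},
}

def cheapest_route(graph, start, goal, weight_index):
    """Dijkstra's algorithm; cost is distance (weight_index 0) or time (weight_index 1)."""
    best, prev, frontier = {start: 0.0}, {}, [(0.0, start)]
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            break
        if cost > best.get(node, float("inf")):
            continue
        for nxt, weights in graph.get(node, {}).items():
            new_cost = cost + weights[weight_index]
            if new_cost < best.get(nxt, float("inf")):
                best[nxt], prev[nxt] = new_cost, node
                heapq.heappush(frontier, (new_cost, nxt))
    path, node = [goal], goal
    while node != start:                      # walk the predecessor chain back to the start
        node = prev[node]
        path.append(node)
    return list(reversed(path)), best[goal]

print(cheapest_route(EDGES, "home", "hospital", 0))  # shortest distance: via the neighborhood, 5.0 km
print(cheapest_route(EDGES, "home", "hospital", 1))  # least time: via the bypass, 10.0 minutes
```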
The costs that are proportional to distance can take a number of forms, each of which may or may not be relevant in a given geographic situation:
Travel cost, the resources required to move through space. This is most commonly time, energy, or fuel consumption, but may also include more subjective costs such as nuisance.
Traffic cost, the impedance resulting from the aggregate volume of travelers exceeding the optimum capacity of the space (usually a linear network in this case).
Construction cost, the resources required to build the infrastructure that makes travel through the space possible, such as roads, pipes, and cables.
Environmental impacts, the negative effects on the natural or human environment caused by the infrastructure or the travel along it. For example, one would want to minimize the length of residential neighborhood or wetland destroyed to build a highway.
Some of these costs are easily quantifiable and measurable, such as transit time, fuel consumption, and construction costs, thus naturally lending themselves to optimization algorithms. That said, there may be a significant amount of uncertainty in predicting them due to variability over time (e.g., travel time through a road network depending on changing traffic volume) or variability in individual situations (e.g., how fast a person wishes to drive). Other costs are much more difficult to measure due to their qualitative or subjective nature, such as political protest or ecological impact; these typically require the creation of "pseudo-measures" in the form of indices or scales to operationalize.
All of these costs are fields in that they are spatially intensive (a "density" of cost per unit distance) and vary over space. The cost field (often called a cost surface) may be a continuous, smooth function or may have abrupt changes. This variability of cost occurs both in unconstrained (two- or three-dimensional) space, as well as in constrained networks, such as roads and cable telecommunications.
Applications
A large number of geographic theories, spatial analysis techniques, and GIS applications are directly based on the practical effects of friction of distance:
Tobler's first law of geography, formalized as spatial autocorrelation, states that nearby locations are more likely to be similar in many aspects than distant locations, typically being the result of a history of greater interactions between them.
Gravity models, distance decay and other models of spatial interaction are based on the tendency of the volume of interaction between two locations to decrease as the distance between them increases due to the friction of distance, often in a pattern that is analogous (mathematically, not physically) to the inverse-square law governing many physical properties, such as illuminance and gravity (see the illustrative sketch after this list).
Economic agglomeration is the tendency of institutions that frequently interact with each other to move close together in physical space, such as the concentration of business services (advertising, finance, etc.) in large cities to be near corporate headquarters.
Location theory includes a number of theories and techniques for determining the optimal location to site a particular activity, based on minimizing travel costs. Notable examples include the classical early 20th-century theories of Johann Heinrich von Thünen, Walter Christaller, and Alfred Weber, and GIS-era algorithms for location-allocation.
Network analysis includes a number of problems and techniques for modeling travel constrained to a linear network or graph, such as roads, public utilities, or streams. Many of these are optimization problems to minimize travel cost, such as the ubiquitous Dijkstra's algorithm to find the minimal cost path between two locations.
Cost distance analysis, a series of algorithms for finding minimal-cost paths through an unconstrained space in which cost varies as a field.
Migration of humans and animals is often seen as the result of balancing the advantages of remaining stationary (due to the friction of distance) with "push/pull" factors that encourage one to leave one location or to move to another location.
Spatial diffusion is the gradual spread of culture, ideas, and institutions across space over time, in which the desirability of one place adopting the traits of another place overcomes the friction of distance.
Time geography explores how human activity is affected by the constraints of movement, especially temporal costs.
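As a rough illustration of the gravity models and distance decay listed above (all values are hypothetical, and the exponent beta is an empirically fitted "friction" parameter rather than a physical constant):

```python
# Basic gravity model of spatial interaction: predicted flow between two places is
# proportional to the product of their "masses" (e.g., populations) and decays with
# distance raised to a friction exponent beta.
def gravity_flow(pop_i, pop_j, distance_km, k=1.0, beta=2.0):
    """Predicted interaction volume between places i and j."""
    return k * pop_i * pop_j / distance_km ** beta

# With beta = 2 (an inverse-square analogue), doubling the distance cuts the
# predicted interaction to one quarter, illustrating distance decay.
print(gravity_flow(100_000, 50_000, 10))  # 50000000.0
print(gravity_flow(100_000, 50_000, 20))  # 12500000.0
```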
Time-space convergence
Historically, the friction of distance was very high for most types of movement, making long-distance movement and interaction relatively slow and rare (but not non-existent). The result was a strongly localized human geography, manifested in aspects as varied as language and economy. One of the most profound effects of the technological advances since 1800, including the railroad, the automobile, and the telephone, has been to drastically reduce the costs of moving people, goods, and information over long distances. This led to widespread diffusion and integration, ultimately resulting in many of the aspects of globalization. The geographic effect of this diminishing friction of distance is called time-space convergence or cost-space convergence.
Of these technologies, telecommunications, especially the Internet, has perhaps had the most profound effect. Although there are still distance-based costs of transmitting information, such as the laying of cable and the generation of electromagnetic signal energy (traditionally manifesting in ways such as long-distance telephone charges), these are now so small for any meaningful unit of information that they are no longer managed in a distance-based form, but are bundled into fixed (not based on distance) service costs. For example, some portion of the fee for mobile telephone service covers the higher costs of long-distance service, but the customer does not see it, and thus does not make communication decisions based on distance. The rise of free shipping has similar causes and effects on retail trade.
It has been argued that the virtual elimination of the friction of distance in many aspects of society has resulted in the "death of geography," in which relative location is no longer relevant to many tasks in which it formerly played a crucial role. It is now possible to conduct many interactions over global distances almost as easily as over local distances, including retail trade, business-to-business services, and some types of remote work. Thus, these services could theoretically be provided from anywhere with equal cost. The COVID-19 pandemic has tested and accelerated many of these trends.
Conversely, others have seen a strengthening in the geographic effects of other aspects of life, or perhaps the increasing focus on them as traditional distance-based aspects have become less relevant. This includes the lifestyle amenities of a place, such as local natural landscapes or urban nightlife that must be experienced in person (thus requiring physical travel and thus entailing the friction of distance). Also, many people prefer in-person interactions that could technically be conducted remotely, such as business meetings, education, tourism, and shopping, which should make distance-based effects relevant for the foreseeable future. The contrasting trends of "frictional" and "frictionless" factors have necessitated a more nuanced analysis of geography than the traditional blanket statements of location always mattering, or the recent claims that location does not matter at all.
References
Human migration
International factor movements
Economic geography
Anthropology
Distance | Friction of distance | Physics,Mathematics | 2,434 |
24,366,417 | https://en.wikipedia.org/wiki/Cellular%20and%20Molecular%20Life%20Sciences | Cellular and Molecular Life Sciences is a peer-reviewed scientific journal covering cellular and molecular life sciences. It was established in 1945 as Experientia, obtaining its current name in 1994. The Editors-in-chief are Roberto Bruzzone and Jean Leon Thomas. According to the Journal Citation Reports, the journal has a 2020 impact factor of 9.261.
References
External links
Molecular and cellular biology journals
Academic journals established in 1945
Springer Science+Business Media academic journals
Monthly journals
English-language journals | Cellular and Molecular Life Sciences | Chemistry | 100 |
51,135,583 | https://en.wikipedia.org/wiki/Von%20Stahel%20und%20Eysen | Von Stahel und Eysen (English: On Steel and Iron) is the first printed book on metallurgy, published in 1532 by several publishers: Kunegunde Hergot in Nuremberg, Melchior Sachs in Erfurt, and Peter Jordan in Mainz. It has been suggested that Hergot was probably the first to publish the text, as the material seems to come from Nuremberg: its material on tempering and quenching is similar to the short treatise on hardening iron beginning 'Von dem herten. Nu spricht meister Alkaym' in the late fourteenth- or early fifteenth-century Nuremberg manuscript Nürnberger Handschrift GNM 3227a.
About half the text is on how to harden iron and steel through tempering and quenching, mentioning water, but also a range of recipes of varying degrees of elaborateness. The recipe 'take clarified honey, fresh urine of a he-goat, alum, borax, olive oil, and salt; mix everything well together and quench therein' might, through the urea content of the urine (H2NCONH2), have helped to produce nitrated, 'case-hardened' iron. Less likely to have been efficacious is: 'take varnish, dragon's blood, horn scrapings, half as much salt, juice made from earthworms, radish juice, tallow, and vervain and quench therein. It is also very advantageous in hardening if a piece that is to be hardened is first thoroughly cleaned and well polished'.
A modern commentator on some of the more outlandish techniques in the book noted: "There isn't really much to say...except that perhaps it was meant to trip up rivals. However, this may not be the case because similar instructions were circulated in 1708 in Nuremberg."
The text also includes techniques for colouring, soldering, and etching. Etching was quite a new technology at the time, and Von Stahel und Eysen provides the first attested recipes.
Translations
Williams, H. (trans.), 'A sixteenth-century German treatise: Von Stahel und Eysen. 1532', Technical studies in the field of the fine arts, 4.2 (October, 1935), 63-92.
Smith, Cyril Stanley (ed.), Sources for the History of the Science of Steel, 1532-1786, Society for the History of Technology, 4 (Cambridge, Mass.: Society for the History of Technology, 1968), pp. 7–19.
References
Alchemical documents
Engineering textbooks
1532 books
German books
German non-fiction books
History of metallurgy | Von Stahel und Eysen | Chemistry,Materials_science | 563 |
2,903,089 | https://en.wikipedia.org/wiki/Xi%20Aurigae | Xi Aurigae, Latinized from ξ Aurigae, is the Bayer designation for a single, white-hued star in the northern constellation of Auriga. This star was once considered part of the constellation of Camelopardalis and held the Flamsteed designation 32 Camelopardalis. It is visible to the naked eye with an apparent visual magnitude of +5.0. The measured annual parallax shift of this star is , which corresponds to a physical distance of with a 3 light-year margin of error. At that distance, the visual magnitude of the star is diminished by an extinction of 0.108 due to interstellar dust.
This is an A-type main sequence star with a stellar classification of A2 Va. Although it was one of the first stars to be cataloged as a Lambda Boötis star, Murphy et al. (2015) don't consider it to be a member of this population. The star has nearly twice the mass of the Sun and about 1.1 times the Sun's radius. It is an estimated 174 million years old and is spinning with a projected rotational velocity of 62 km/s. Xi Aurigae is radiating 49.5 times the Sun's luminosity from its photosphere at an effective temperature of around 9,152 K.
References
External links
HR 2029
Image Xi Aurigae
A-type main-sequence stars
Aurigae, Xi
Auriga
Durchmusterung objects
Aurigae, 30
039283
027949
2029 | Xi Aurigae | Astronomy | 323 |
67,643,876 | https://en.wikipedia.org/wiki/Chaetocerotales | Chaetocerotales is an order of diatoms belonging to the class Mediophyceae.
Families:
Acanthocerataceae
Attheyaceae
Chaetocerotaceae
References
Diatoms
Diatom orders | Chaetocerotales | Biology | 47 |
62,628,409 | https://en.wikipedia.org/wiki/Ali%20Sunyaev | Ali Sunyaev (, Ali Rashidowitsch Sjunjajew; born June 11, 1981, in Moscow, USSR) is a professor for computer science and vice president at the Technical University of Munich (TUM), .
Life
His father is Rashid Sunyaev (a renowned cosmologist and director of the Max Planck Institute for Astrophysics). His mother, Gyuzal Sunyaeva, is a physician. His brother Shamil Sunyaev is Distinguished Chair Professor of genetics at Harvard Medical School, Harvard University. Due to the work of his father, his family moved from the Russian Federation to Germany in 1996.
Ali Sunyaev studied computer science at the Technical University of Munich (TUM) from 2000 to 2005. In 2005, he joined the graduate school of the Institute of Computer Science at the Technical University of Munich to do his PhD in computer science and information systems on the topic 'Design and Application of a Security Analysis Method for Healthcare Telematics in Germany'. From 2010 to 2016, he was assistant professor for information systems and information systems quality at the University of Cologne in Germany. From 2016 to December 2017, he was full professor for information systems and systems development and director of the Research Center for Information Systems Design at the University of Kassel in Germany. Subsequently, Ali Sunyaev was full professor for computer science and director of the Institute of Applied Informatics and Formal Description Methods at the Karlsruhe Institute of Technology (KIT) from January 2018 to September 2024. Since October 2024, he has been professor for computer science at the Technical University of Munich and vice president of TUM.
In 2009 and 2012, Ali Sunyaev was a guest researcher at the Harvard-MIT Division of Health Sciences and Technology, Intelligent Health Lab, Harvard School of Engineering and Applied Sciences, Boston, Massachusetts, USA. In 2011, he was a guest lecturer at the Higher School of Economics (HSE) in Moscow. Since 2024 Sunyaev is a member of the scientific council of the DFG (German Research Foundation), a member of the board of directors of the German Informatics Society (GI), and a member of the DFG-funded Karlsruhe Decision & Design Lab (KD²Lab).
Research
Ali Sunyaev conducts research on the design, use, and societal interactions of internet technologies. His main research interests include development of innovative health IT, cloud computing, distributed ledger technologies, trustworthy artificial intelligence, and information security management.
Ali Sunyaev leads multiple research projects funded by funding bodies such as the Helmholtz Association of German Research Centres, the Russian Science Foundation, or the German Research Foundation (Deutsche Forschungsgemeinschaft).
His work has been published in leading international scientific outlets in computer science, information systems, medical informatics, and economics and is featured in a variety of media outlets.
References
Information systems researchers
1981 births
Living people
Russian computer scientists | Ali Sunyaev | Technology | 595 |
230,360 | https://en.wikipedia.org/wiki/Windows%20CE | Windows CE, later known as Windows Embedded CE and Windows Embedded Compact, is a discontinued operating system developed by Microsoft for mobile and embedded devices. It was part of the Windows Embedded family and served as the software foundation of several products including the Handheld PC, Pocket PC, Auto PC, Windows Mobile, Windows Phone 7 and others.
Unlike Windows Embedded Standard, Windows For Embedded Systems, Windows Embedded Industry and Windows IoT, which are based on Windows NT, Windows CE uses a different hybrid kernel. Microsoft licensed it to original equipment manufacturers (OEMs), who could modify and create their own user interfaces and experiences, with Windows Embedded Compact providing the technical foundation to do so.
Earlier versions of Windows CE worked on MIPS and SHx architectures, but in version 7.0, released in 2011 (when the product was also renamed to Embedded Compact), support for these was dropped, remaining only for the MIPS II architecture. The final version, Windows Embedded Compact 2013 (version 8.0), released in 2013, only supports x86 and ARM processors with a board support package (BSP) directly. It had mainstream support until October 9, 2018, and extended support ended on October 10, 2023; however, license sales for OEMs will continue until 2028.
Features
Windows CE is optimized for devices that have minimal memory; a Windows CE kernel may run with one megabyte of memory. Devices are often configured without disk storage, and may be configured as a "closed" system that does not allow for end-user extension (for instance, it can be burned into ROM). Windows CE conforms to the definition of a real-time operating system, with a deterministic interrupt latency. From Version 3 and onward, the system supports 256 priority levels and uses priority inheritance for dealing with priority inversion. The fundamental unit of execution is the thread. This helps to simplify the interface and improve execution time.
The first version known during development under the code name "Pegasus" featured a Windows-like GUI and a number of Microsoft's popular apps, all trimmed down for smaller storage, memory, and speed of the palmtops of the day. Since then, Windows CE has evolved into a component-based, embedded, real-time operating system. It is no longer targeted solely at hand-held computers. Many platforms have been based on the core Windows CE operating system, including Microsoft's AutoPC, Pocket PC 2000, Pocket PC 2002, Windows Mobile 2003, Windows Mobile 2003 SE, Windows Mobile 5, Windows Mobile 6, Smartphone 2002, Smartphone 2003, Portable Media Center, Zune, Windows Phone 7 and many industrial devices and embedded systems. Windows CE even powered select games for the Sega Dreamcast and was the operating system of the Gizmondo handheld.
A distinctive feature of Windows CE compared to other Microsoft operating systems is that large parts of it are offered in source code form. First, source code was offered to several vendors, so they could adjust it to their hardware. Then products like Platform Builder (an integrated environment for Windows CE OS image creation and integration, or customized operating system designs based on CE) offered several components in source code form to the general public. However, a number of core components that do not need adaptation to specific hardware environments (other than the CPU family) are still distributed in binary only form.
Windows CE 2.11 was the first embedded Windows release to support a console and a Windows CE version of .
History
Windows Embedded Compact was formerly known as Windows CE. According to Microsoft, "CE" is not an explicit acronym for anything, although it implies a number of notions that Windows developers had in mind, such as "compact", "connectable", "compatible", "companion" and "efficient". The name changed once in 2006, with the release of Windows Embedded CE 6.0, and again in 2011, with the release of Windows Embedded Compact 7.
Windows CE was originally announced by Microsoft at the Computer Dealers' Exhibition (COMDEX) in 1996 and was demonstrated on stage by Bill Gates and John McGill. Microsoft had been testing Pegasus in early 1995 and released a strict reference platform to several hardware partners. The devices had to have the following minimum hardware specifications:
SH3, MIPS 3000 or MIPS 4000 CPU
Minimum of 4 MB of ROM
Minimum of 2 MB of RAM with a backup power source, such as a CR2032 coin cell battery
Powered by two AA batteries
A physical QWERTY keyboard including Ctrl, Alt, and Shift keys
An LCD of 480×240 pixels with four shades of gray (two bits per pixel) and a touchscreen that could be operated by either stylus or finger
An IrDA transceiver
Serial port
PC Card socket
Built-in speaker
Devices of the time mainly had 480×240 pixel displays, with the exception of the Hewlett-Packard 'Palmtop PC', which had a 640×240 display. Each window took over the full display. Navigation was done by tapping or double-tapping on an item. A contextual menu was also available by the user pressing the ALT key and tapping on the screen. Windows CE 1.0 did not include a cascading Start menu, although Windows 95 and Windows NT 4.0 did. Microsoft released the Windows CE 1.0 Power Toys that included a cascading menu icon that appeared in the system tray. Also bundled were several other utilities, most notably a sound applet for the system tray, enabling the user to quickly mute or unmute the device or adjust the volume, and a 'pocket' version of Paint.
The release of Windows CE 2.0 was well received. Microsoft learned its lessons from consumer feedback of Windows CE 1.0 and made many improvements to the operating system. The Start menu was a cascading menu, identical to those found on Windows 95 and Windows NT 4.0. Color screens were also supported, and manufacturers raced to release the first color H/PC. The first to market was Hewlett Packard with the HP 620LX. Windows CE 2.0 also supported a broader range of CPU architectures. Programs could also be installed directly in the OS by double-clicking on CAB files. Due to the nature of the ROMs that contained the operating system, users were not able to flash their devices with the newer operating system. Instead, manufacturers released upgrade ROMs that users had to physically install in their devices, after removing the previous version. This would usually wipe the data on the device and present the user with the setup wizard upon first boot.
In November 1999, it was reported that Microsoft was planning to rename Windows CE to Windows Powered. The name only appeared in brand in Handheld PC 2000 and a build of Windows 2000 Advanced Server for network-attached storage devices (which bears no relation to Windows CE). Various Windows CE 3.0 products announced at CES 2001 were marketed under a "Windows Powered" umbrella name.
Development tools
Visual Studio
Microsoft Visual Studio 2012, 2013, and 2015 support apps and Platform Builder development for Windows Embedded Compact 2013.
Microsoft Visual Studio 2008 and earlier support projects for older releases of Windows CE/Windows Mobile, producing executable programs and platform images either as an emulator or attached by cable to an actual mobile device. A mobile device is not necessary to develop a CE program. The .NET Compact Framework supports a subset of the .NET Framework with projects in C#, and Visual Basic (.NET), but not Managed C++. "Managed" apps employing the .NET Compact Framework also require devices with significantly larger memories (8 MB or more) while unmanaged apps can still run successfully on smaller devices. In Visual Studio 2010, the Windows Phone Developer Tools are used as an extension, allowing Windows Phone 7 apps to be designed and tested within Visual Studio.
Free Pascal and Lazarus
Free Pascal introduced the Windows CE port in Version 2.2.0, targeting ARM and x86 architectures. Later, the Windows CE header files were translated for use with Lazarus, a rapid application development (RAD) software package based on Free Pascal. Windows CE apps are designed and coded in the Lazarus integrated development environment (IDE) and compiled with an appropriate cross compiler.
Platform Builder
This programming tool is used for building the platform (BSP + Kernel), device drivers (shared source or custom made) and also the apps. This is a one stop environment to get the system up and running. One can also use Platform Builder to export a software development kit (SDK) for the target microprocessor (SuperH, x86, MIPS, ARM etc.) to be used with another associated tool set named below.
Others
The Embedded Microsoft Visual C++ (eVC) a tool for development of embedded apps for Windows CE. It can be used standalone using the SDK exported from Platform Builder or using the Platform Builder's Platform Manager connectivity setup.
CeGcc project provides GNU development tools, such as GNU C, GNU C++ and binutils, that target Windows CE; two SDKs are available to choose from: a standard Windows CE platform SDK based on MinGW, and a newlib-based SDK which may be easier for porting programs from POSIX systems.
CodeGear Delphi Prism runs in Visual Studio, also supports the .NET Compact Framework and thus can be used to develop mobile apps. It employs the Oxygene compiler created by RemObjects Software, which targets .NET, the .NET Compact Framework, and Mono. Its command-line compiler is available free of charge.
Basic4ppc a programming language similar to Embedded Visual Basic, targets the .NET Compact Framework and supports Windows CE and Windows Mobile devices.
GLBasic a very easy to learn and use BASIC dialect that compiles for many platforms, including Windows CE and Windows Mobile. It can be extended by writing inline C/C++ code.
LabVIEW a graphical programming language, supporting many platforms, including Windows CE.
MortScript is the semi-standard, extremely lightweight automation SDK popular with GPS enthusiasts. It uses scripts written in its own language, with syntax similar to VBScript or JScript.
AutoHotkey a port of the open source macro-creation and automation software utility available for Windows CE. It allows the construction of macros and simple GUI apps; the port was developed by systems analyst Jonathan Maxian Timkang.
Relationship to Windows Mobile, Pocket PC, and Smartphone
Often Windows CE, Windows Mobile, and Pocket PC are used interchangeably, in part due to their common origin. This practice is not entirely accurate. Windows CE is a modular/componentized operating system that serves as the foundation of several classes of devices. Some of these modules provide subsets of other components' features (e.g. varying levels of windowing support; DCOM vs COM), others which are separate (bitmap or TrueType font support), and others which add additional features to another component. One can buy a kit (the Platform Builder) which contains all these components and the tools with which to develop a custom platform. Apps such as Excel Mobile (formerly Pocket Excel) are not part of this kit. The older Handheld PC version of Pocket Word and several other older apps are included as samples, however.
Windows Mobile is best described as a subset of platforms based on a Windows CE underpinning. Currently, Pocket PC (now called Windows Mobile Classic), Smartphone (Windows Mobile Standard), and Pocket PC Phone Edition (Windows Mobile Professional) are the three main platforms under the Windows Mobile umbrella. Each platform uses different components of Windows CE, plus supplemental features and apps suited for their respective devices.
Pocket PC and Windows Mobile are Microsoft-defined custom platforms for general PDA use, consisting of a Microsoft-defined set of minimum profiles (Professional Edition, Premium Edition) of software and hardware that is supported. The rules for manufacturing a Pocket PC device are stricter than those for producing a custom Windows CE-based platform. The defining characteristics of the Pocket PC are the touchscreen as the primary human interface device and its extremely portable size.
CE 3.0 is the basis for Pocket PC 2000 and Pocket PC 2002. A successor to CE 3.0 is CE.net. "PocketPC [is] a separate layer of code on top of the core Windows CE OS… Pocket PC is based on Windows CE, but it's a different offering." And licensees of Pocket PC are forbidden to modify the WinCE part.
The Smartphone platform is a feature-rich OS and interface for cellular phone handsets. SmartPhone offers productivity features to business users, such as email, and multimedia abilities for consumers. The SmartPhone interface relies heavily on joystick navigation and PhonePad input. Devices running SmartPhone do not include a touchscreen interface. SmartPhone devices generally resemble other cellular handset form factors, whereas most Phone Edition devices use a PDA form factor with a larger display.
Releases
See also
ActiveSync
Handheld PC
Handheld PC Explorer
List of Windows CE Devices
Microsoft Kin
Modular Windows
Palm-size PC
Pocket PC
Portable Media Center
Tablet PC
Windows Phone
Zune HD
Dreamcast
Windows Mobile
References
External links
Benchmarking Real-time Determinism in Microsoft Windows CE
A Brief History of Windows CE, by HPC:Factor with screenshots of the various versions
, Archived copy of website hosted by Handheld PC
Windows XP Embedded on MSDN
Mike Hall's Windows Embedded Blog
1996 software
ARM operating systems
Discontinued Microsoft operating systems
Products and services discontinued in 2023
Windows CE
Embedded Compact
X86 operating systems
Monolithic kernels | Windows CE | Technology | 2,748 |
5,111,544 | https://en.wikipedia.org/wiki/HD%20117440 | HD 117440, also known by its Bayer designation d Centauri, is a binary star system in the southern constellation of Centaurus. It is visible to the naked eye with a combined apparent visual magnitude of 3.90. The distance to this system is approximately 900 light years based on parallax measurements. It is drifting closer to the Sun with a radial velocity of −2 km/s.
A companion star was first reported by T. J. J. See in 1897 at an angular separation of from the primary. Orbital elements for the pair were published by W. S. Finsen in 1962 then updated in 1964, yielding an orbital period of 83.1 years with a semimajor axis of and an eccentricity of 0.52. Both components are evolved G-type giant stars with a yellow, Sun-like hue. The primary, component A, has an apparent magnitude of +4.64, while the secondary, component B, has an apparent magnitude of +5.03.
References
G-type giants
Binary stars
Centaurus
Centauri, d
Durchmusterung objects
117440
065936
5089 | HD 117440 | Astronomy | 236 |
41,579,742 | https://en.wikipedia.org/wiki/Aconiazide | Aconiazide is an anti-tuberculosis medication. It is a prodrug of isoniazide that was developed and studied for its lower toxicity, but it does not appear to be marketed anywhere in the world in 2021.
References
Prodrugs
Carboxylic acids
Hydrazides
4-Pyridyl compounds | Aconiazide | Chemistry | 69 |
40,383,385 | https://en.wikipedia.org/wiki/Candida%20keroseneae | Candida keroseneae is a species of yeast in the genus Candida, family Saccharomycetaceae. Described as new to science in 2011, it was isolated from aviation fuel.
Taxonomy
The type strain of this yeast (IMI 395605T) was isolated from aircraft fuel (kerosene) sampled from a European aircraft. Later analysis demonstrated that the isolated strains were able to grow in liquid media containing 50% Jet A-1 aviation fuel. Molecular analysis was performed using the ribosomal RNA gene sequences of internal transcribed spacer regions in addition to the D1/D2 domains of the 26S nuclear ribosomal RNA gene. The two isolated strains clustered within the Candida membranifaciens clade, with C. tumulicola as the most closely related species. The specific epithet keroseneae is New Latin for kerosene, the substrate of the new species.
Description
The yeast cells, after growth on glucose-peptone-yeast extract broth culture for three days at , are egg-shaped to elongated, measuring 3–11 by 1–3.5 μm. They occur singly, in budding pairs, or as short pseudohyphae. The yeast can assimilate the following carbon sources: glucose, galactose, sucrose, L-arabinose, cellobiose, maltose, trehalose, lactose, D-xylose, rhamnose, isomaltulose, melibiose, melezitose; mannitol, sorbitol, glycerol, erythritol; N-acetyl glucosamine, 2-ketogluconate, α-methyl-D-glucoside, levulinate and glucosamine. The yeast grew at a variety of temperatures between , but no growth was observed at or .
The kerosene from which the two yeast strains were isolated was analyzed with gas chromatography and shown to have 48 identifiable components. C. keroseneae appears to consume the n-alkane compounds hexadecane, heptadecane, and octadecane. Other microbes that can contaminate fuels include the yeast Yarrowia lipolytica, the filamentous fungus Hormoconis resinae, and the bacterium Pseudomonas aeruginosa.
See also
Amorphotheca resinae
Fuel polishing
References
Fungi described in 2011
Yeasts
keroseneae
Fungus species | Candida keroseneae | Biology | 522 |
17,666,127 | https://en.wikipedia.org/wiki/Transient%20equilibrium | In nuclear physics, transient equilibrium is a situation in which equilibrium is reached by a parent-daughter radioactive isotope pair where the half-life of the daughter is shorter than the half-life of the parent. Contrary to secular equilibrium, the half-life of the daughter is not negligible compared to parent's half-life. An example of this is a molybdenum-99 generator producing technetium-99 for nuclear medicine diagnostic procedures. Such a generator is sometimes called a cow because the daughter product, in this case technetium-99, is milked at regular intervals. Transient equilibrium occurs after four half-lives, on average.
Activity in transient equilibrium
The activity of the daughter is given by the Bateman equation:

A_d = A_P(0) \frac{\lambda_d}{\lambda_d - \lambda_P} \left(e^{-\lambda_P t} - e^{-\lambda_d t}\right) \times BR

where A_P and A_d are the activity of the parent and daughter, respectively, T_P and T_d are the half-lives of the parent and daughter, respectively (the decay constants \lambda_P and \lambda_d in the above equation are their inverses, modulo ln(2)), and BR is the branching ratio.

In transient equilibrium, the Bateman equation cannot be simplified by assuming the daughter's half-life is negligible compared to the parent's half-life. The ratio of daughter-to-parent activity is given by:

\frac{A_d}{A_P} = \frac{T_P}{T_P - T_d} \times BR
Time of maximum daughter activity
In transient equilibrium, the daughter activity increases and eventually reaches a maximum value that can exceed the parent activity. The time of maximum activity is given by:

t_\max = \frac{1.44 \, T_P T_d}{T_P - T_d} \ln\left(\frac{T_P}{T_d}\right)

where T_P and T_d are the half-lives of the parent and daughter, respectively. In the case of the technetium-99m/molybdenum-99 generator, the time of maximum activity (t_max) is approximately 24 hours, which makes it convenient for medical use.
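The relations above can be evaluated numerically; the following sketch uses approximate published half-lives for the molybdenum-99/technetium-99m pair and assumes a branching ratio of roughly 0.88 (values are illustrative, not authoritative):

```python
import math

T_P, T_d, BR = 66.0, 6.01, 0.88  # parent and daughter half-lives (hours), assumed branching ratio
lam_P, lam_d = math.log(2) / T_P, math.log(2) / T_d

def daughter_activity(A_parent_0, t):
    """Daughter activity at time t (hours) for an initially pure parent sample (Bateman equation)."""
    return A_parent_0 * lam_d / (lam_d - lam_P) * (
        math.exp(-lam_P * t) - math.exp(-lam_d * t)) * BR

t_max = 1.44 * T_P * T_d / (T_P - T_d) * math.log(T_P / T_d)
print(f"t_max = {t_max:.1f} h")                                # about 23 h, close to one day
print(f"A_d(t_max) = {daughter_activity(1.0, t_max):.2f} * A_P(0)")
```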
See also
Bateman equation
Secular equilibrium
References
Radioactivity | Transient equilibrium | Physics,Chemistry | 353 |
343,338 | https://en.wikipedia.org/wiki/80%20%28number%29 | 80 (eighty) is the natural number following 79 and preceding 81.
In mathematics
80 is:
the sum of Euler's totient function φ(x) over the first sixteen integers.
a semiperfect number, since adding up some subsets of its divisors (e.g., 1, 4, 5, 10, 20 and 40) gives 80.
a ménage number.
palindromic in bases 3 (2222₃), 6 (212₆), 9 (88₉), 15 (55₁₅), 19 (44₁₉) and 39 (22₃₉).
a repdigit in bases 3, 9, 15, 19 and 39.
the sum of the first 4 twin prime pairs ((3 + 5) + (5 + 7) + (11 + 13) + (17 + 19)).
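Two of the facts in the list above can be checked directly with a short script (illustrative only):

```python
from math import gcd

def totient(n):
    """Euler's totient function, counted directly."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Sum of the totient function over the first sixteen integers.
print(sum(totient(x) for x in range(1, 17)))  # 80

# 80 is semiperfect: this subset of its proper divisors sums to 80.
print(1 + 4 + 5 + 10 + 20 + 40)               # 80
```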
The Pareto principle (also known as the 80-20 rule) states that, for many events, roughly 80% of the effects come from 20% of the causes.
Every solvable configuration of the 15 puzzle can be solved in no more than 80 single-tile moves.
References
External links
wiktionary:eighty for 80 in other languages.
Integers | 80 (number) | Mathematics | 250 |
22,400,875 | https://en.wikipedia.org/wiki/Stem%20cell%20laws | Stem cell laws are the law rules, and policy governance concerning the sources, research, and uses in treatment of stem cells in humans. These laws have been the source of much controversy and vary significantly by country. In the European Union, stem cell research using the human embryo is permitted in Sweden, Spain, Finland, Belgium, Greece, Britain, Denmark and the Netherlands; however, it is illegal in Germany, Austria, Ireland, Italy, and Portugal. The issue has similarly divided the United States, with several states enforcing a complete ban and others giving support. Elsewhere, Japan, India, Iran, Israel, South Korea, China, and Australia are supportive. However, New Zealand, most of Africa (except South Africa), and most of South America (except Brazil) are restrictive.
Science background
The information presented here covers the legal implications of embryonic stem cells (ES), rather than induced pluripotent stem cells (iPSCs). The laws surrounding the two differ because, while both have similar capacities for differentiation, their modes of derivation do not. While embryonic stem cells are taken from embryoblasts, induced pluripotent stem cells are reprogrammed from differentiated somatic adult cells.
Stem cells are cells found in most, if not all, multi-cellular organisms. A common example is the hematopoietic stem cell (HSC), a multipotent stem cell that gives rise to cells of the blood lineage. In contrast to multipotent stem cells, embryonic stem cells are pluripotent and are thought to be able to give rise to all cells of the body. Embryonic stem cells were isolated in mice in 1981, and in humans in 1998.
Stem cell treatments are a type of cell therapy that introduces new cells into adult bodies for possible treatment of cancer, diabetes, and other medical conditions. Cloning, for example via somatic cell nuclear transfer, might also be done with stem cells. Stem cells have been used to repair tissue damaged by disease.
Because ES cells are cultured from the embryoblast 4–5 days after fertilization, harvesting them is most often done from donated embryos from in vitro fertilization (IVF) clinics. In January 2007, researchers at Wake Forest University reported that "stem cells drawn from amniotic fluid donated by pregnant women hold much of the same promise as embryonic stem cells."
Europe
The European Union has yet to issue consistent regulations with respect to stem cell research in member states. Whereas Germany, Austria, Italy, Finland, Portugal and the Netherlands prohibit or severely restrict the use of embryonic stem cells, Greece, Sweden, Spain and the United Kingdom have created the legal basis to support this research. Belgium bans reproductive cloning but allows therapeutic cloning of embryos. France prohibits reproductive cloning and embryo creation for research purposes, but enacted laws (with a sunset provision expiring in 2009) to allow scientists to conduct stem cell research on a large number of embryos imported from in vitro fertilization treatments. Germany has restrictive policies for stem cell research, but a 2008 law authorizes "the use of imported stem cell lines produced before May 1, 2007." Italy has a 2004 law that forbids all sperm or egg donations and the freezing of embryos, but allows, in effect, using existing stem cell lines that have been imported. Sweden forbids reproductive cloning, but allows therapeutic cloning and authorized a stem cell bank.
According to modern stem cell researchers, Spain is one of the leaders in stem cell research and currently has one of the most progressive legislations worldwide with respect to human embryonic stem cell (hESC) research. The new Spanish law allows existing frozen embryos – of which there are estimated to be tens of thousands in Spain – to be kept for patient's future use, donated for another infertile couple, or used in research. In 2003, Spain's laws state that embryos left over from IVF and donated by the couple that created them can be used in research, including ES cell research, if they have been frozen for more than five years.
In 2001, the British Parliament amended the Human Fertilisation and Embryology Act 1990 (since amended by the Human Fertilisation and Embryology Act 2008) to permit the destruction of embryos for hESC harvests but only if the research satisfies one of the following requirements:
Increases knowledge about the development of embryos,
Increases knowledge about serious disease, or
Enables any such knowledge to be applied in developing treatments for serious disease.
The United Kingdom is one of the leaders in stem cell research, in the opinion of Lord Sainsbury, Science and Innovation Minister for the UK. A new £10 million stem cell research centre has been announced at the University of Cambridge.
Africa
The primary legislation in South Africa that deals with embryo research is the Human Tissue Act, which is set to be replaced by Chapter 8 of the National Health Act. The NHA Chapter 8 has been enacted by parliament, but not yet signed into force by the president. The process of finalising these regulations is still underway. The NHA Chapter 8 allows the Minister of Health to give permission for research on embryos not older than 14 days. The legislation on embryo research is complemented by the South African Medical Research Council's Ethics Guidelines. These Guidelines advise against the creation of embryos for the sole purpose of research. In the case of Christian Lawyers Association of South Africa & others v Minister of Health & others the court ruled that the Bill of Rights is not applicable to the unborn. It has therefore been argued based on constitutional grounds (the right to human dignity, and the right to freedom of scientific research) that the above limitations on embryo research are overly inhibitive of the autonomy of scientists, and hence unconstitutional.
Asia
China prohibits human reproductive cloning but allows the creation of human embryos for research and therapeutic purposes. India banned reproductive cloning in 2004 but permitted therapeutic cloning. In 2004, Japan's Council for Science and Technology Policy voted to allow scientists to conduct stem cell research for therapeutic purposes, though formal guidelines have yet to be released. In December 2012, Japanese Prime Minister Shinzō Abe announced an investment into regenerative medicine of ¥110 billion (US$1 billion) over the next decade. The South Korean government promotes therapeutic cloning, but forbids reproductive cloning. The Philippines prohibits human embryonic and aborted human fetal stem cells and their derivatives for human treatment and research. In 1999, Israel passed legislation banning reproductive, but not therapeutic, cloning. Religious officials in Saudi Arabia issued a decree that sanctions the use of embryos for therapeutic and research purposes. According to the Royan Institute for Reproductive Biomedicine, Iran has some of the most liberal laws on stem cell research and cloning. Laws and regulations in Jordan allow stem-cell research. A center for stem cell research acquired a license to begin operating in April 2017 at the University of Jordan.
Americas
Brazil
Brazil has passed legislation to permit stem cell research using excess in vitro fertilized embryos that have been frozen for at least three years.
United States
Federal law places restrictions on funding and use of hES cells through amendments to the budget bill. In 2001, George W. Bush implemented a policy limiting the number of stem cell lines that could be used for research. There were some state laws concerning stem cells that were passed in the mid-2000s. New Jersey's 2004 S1909/A2840 specifically permitted human cloning for the purpose of developing and harvesting human stem cells, and Missouri's 2006 Amendment Two legalized certain forms of embryonic stem cell research in the state. On the other hand, Arkansas, Indiana, Louisiana, Michigan, North Dakota and South Dakota passed laws to prohibit the creation or destruction of human embryos for medical research.
During Bush's second term, in July 2006, he used his first Presidential veto on the Stem Cell Research Enhancement Act. The Stem Cell Research Enhancement Act was the name of two similar bills, and both were vetoed by President George W. Bush and were not enacted into law. New Jersey congressman Chris Smith wrote the Stem Cell Therapeutic and Research Act of 2005, which made some narrow exceptions, and was signed into law by President Bush.
In November 2004, California voters approved Proposition 71, creating a US$3 billion state taxpayer-funded institute for stem cell research, the California Institute for Regenerative Medicine. It hopes to provide $300 million a year.
In 2014, United States v. Regenerative Sciences, LLC upheld FDA's regulation of stem cell therapies.
Barack Obama removed the restriction of federal funding signed by Bush in 2001, which only allowed funding on the 21 cell lines already created. However, the Dickey Amendment to the budget, The Omnibus Appropriations Act of 2009, still bans federal funding of creating new cell lines. In other words, the federal government will now fund research which uses the hundreds of more lines created by public and private funds.
Canada
In March 2002, the Canadian Institutes of Health Research announced the first ever guidelines for human pluripotent stem cell research in Canada. The federal granting agencies, CIHR, Natural Sciences and Engineering Research Council, and Social Sciences and Humanities Research Council of Canada teamed up and agreed that no research with human IPSCs would be funded without review and approval from the Stem Cell Oversight Committee (SCOC).
In March 2004, Canadian parliament enacted the Assisted Human Reproduction Act (AHRA), modeled on the United Kingdom’s Human Fertilization and Embryology Act of 1990. Highlights of the act include prohibitions against the creation of embryos for research purposes and the criminalization of commercial transactions in human reproductive tissues.
In 2005, Canada enacted a law permitting research on discarded embryos from in vitro fertilization procedures. However, it prohibits the creation of human embryos for research.
On June 30, 2010, The Updated Guidelines for Human Pluripotent Stem Cell Research outline that:
The embryos used must originally have been created for reproductive purposes
The persons for whom the embryos were created must provide free and informed consent for the unrestricted research use of any embryos created, which are no longer required for reproductive purposes
The ova, sperm, nor embryo must not have been obtained through commercial transactions
Canada's National Embryonic Stem Cell Registry:
contains all human embryonic stem cell lines generated using CIHR funds or funds from any of the research councils
is a prerequisite for obtaining CIHR funding for human embryonic stem cell research
will minimize the need to generate large numbers of cell lines, and decrease the need for donation of large numbers of embryos
Oceania
Australia is partially supportive (prohibiting reproductive cloning yet allowing research on embryonic stem cells that are derived from the process of IVF). New Zealand, however, restricts stem cell research.
See also
National Center for Regenerative Medicine
Christian views on cloning
References
Further reading
Frank Bellomo, The Stem Cell Divide: The Facts, Fiction, and the Fear Driving the Greatest Scientific, Political, and Religious Debate of Our Time (American Management Association, New York 2006) .
Kerstin Klein, "Illiberal Biopolitics, Human Embryos and the Stem Cell Controversy in China" (London School of Economics and Political Science, London, 2010).
Pam Solo and Gasil Gressberg, The Promise and Politics of Stem Cell Research (Praeger, Westport, Connecticut, 2007) .
External links
Regulation of stem cell research in Europe summaries by country on www.eurostemcell.org
The Hinxton Group: An International Consortium on Stem Cells, Ethics & Law
A Scientific-Industrial Complex? By Sigrid Fry-Revere
On the Personhood of Pre-implantation Embryos
Laws
Medical controversies
Biotechnology law
Laws
Medical law | Stem cell laws | Chemistry,Biology | 2,405 |
60,874,834 | https://en.wikipedia.org/wiki/Emily%20Klein | Emily M. Klein is a professor of geology and geochemistry at Duke University. She studies volcanic eruptions and the process of oceanic crust creation. She has spent over thirty years investigating the geology of mid-ocean ridges and identified the importance of the physical conditions of mantle melting on the chemical composition of basalt.
Early life and education
Emily Klein was born in Los Angeles, California. Growing up, Klein was very interested in the field of medicine, as she was often around her father, who was a doctor. She worked at his office on Saturdays, which led her to volunteer at the local hospital and the women's clinic. She also regularly took science courses in school and was involved in science projects and summer science programs. She became so interested and involved in medical science that she thought she would become a medical doctor herself. When she moved to New York City to attend Barnard College, however, she became more interested in English and writing. While she still enjoyed science and continued to take science courses, she now pursued her greater passion for writing, including journalism and creative writing. She went on to become a feature editor for the newspaper, and ultimately majored in English.
After graduating from Barnard College in 1979, she became a science writer for a while, but soon went on to take a job at the Columbia University College of Physicians and Surgeons, as a physiology laboratory technician. Here she took part in multiple field research projects, where she did a bit of everything, from writing proposals, to doing laboratory work and experiments, to writing up and presenting research results. One of these studies entailed studying a monkey colony in Puerto Rico. She became interested in geology, and earned tuition credits to study courses at Columbia University. It was during this time as a researcher that she happened to stumble across a group of geologists, and consequently became really interested in the field. Although she had never taken any geology courses in her undergrad, her strong background in other sciences allowed her to easily transition into the field of geology. She started to take some geology courses at Columbia University where she worked at the time, and soon got accepted into the graduate program where she pursued a master's degree in geology. She went on to receive her doctorate degree here later. Her academic background and experience as a laboratory technician led her to become a geochemist.
During her time at graduate school, while studying geochemistry she went on sea expeditions to study the oceanic crust and the new idea of plate tectonics. Since the idea of plate tectonics was so new to the field, she decided to pursue that as her main field of research.
She also investigated the chemical composition of the volcanic rocks collected from mid-ocean ridges around the world. She was awarded the Bruce C. Heezen Memorial Prize for her doctoral thesis in 1987. During her time at Columbia University she worked with Charles Langmuir on the study of mid-ocean ridge basalts, and together they produced many papers which gave her name increasing recognition within the field of geology. Langmuir and Klein demonstrated that the chemical composition of basalt correlates with the physical environment the basalt is recovered from; including the depth and thickness of the oceanic crust. This work marked a paradigm shift in the understanding of petrogenesis.
Research and career
Klein has been involved in geology and geochemistry for over 40 years. Her research has been focused on oceanic crust, specifically completing deep sea research to track tectonic plate movement. She also developed a fascination with volcanic activity and has researched the chemical processes of underwater volcanism. She is still active in her career today, continuing to travel and complete sea excursions to gather research and data. Her most recent cruise included gathering data using "echo sounder" mapping technology. This technology uses sound beams to measure topographical structures on the ocean floor. Klein continues her research even when she is not on sea expeditions. A large portion of her discoveries occur in a chemistry lab. Most recently, she has been working on melting basalt rock (volcanic rock) as a way to theorize how ocean ridges change and evolve. This melting of basalt rock is an extensive process that requires chipping the rock into small pieces, then grinding it to a powder form, and finally heating it to 1200 degrees in order to find a melting point.
After graduating, she received a lot of offers and opportunities, but she decided to teach undergrad at Duke University instead with the hope of inspiring students to study earth sciences.
Klein joined Duke University as an Assistant Professor in 1989. She was made Professor in 2005. Part of the reason for this decision was that she had gotten married and wanted to start a family. Now she really enjoys teaching undergraduates, and particularly enjoys opening young students' minds to new ideas and introducing them to the vast field of scientific exploration and research. This semester (2021) she is co-teaching (with a faculty colleague in engineering) a project course called Energy and Environment: Design and Innovation. She is also extremely passionate about supporting women and underrepresented minorities in science. She has observed that many women drop out of the sciences quite early on, so she tries to inspire them to stay and pursue a career in the field.
From 2004 to 2012, Klein served as Director of the Baldwin Scholars' Program at Duke University, which provides leadership opportunities for women students. Klein was appointed Chair of Earth & Ocean Sciences at the Nicholas School in 2017.
Klein studies the movement of magma in the oceanic crust. She is interested in mid-ocean ridge, a globe encircling belt of volcanoes including the mid-Atlantic ridge. Klein has been on over eleven oceanographic cruises, investigating Incipient Ridge, Hess Deep and Pito Deep Rift. She uses remotely operated underwater vehicles to map the deep ocean, and directs submersible vessels to collect rock samples. She puts these rocks in a furnace, then analyses the chemical composition of the rocks using spectrometers. She is mainly interested in silica, iron, magnesium and aluminium, but also analyses trace elements such as copper, vanadium and uranium. On a cruise of the RV Atlantis, Klein discovered new deep sea hydrothermal vents in the Pacific Ocean. The vents, which Klein named the medusa hydrothermal vents, emit hot springs of iron-darkened water. In 2018 Klein took part in the RV Sally Ride (AGOR-28) investigation of the Cocos-Nazca spreading system.
Hess Deep
Klein researched volcanic eruptions and how it led to the development of crust on the ocean floor. To research this she focused on the processes that occurred under the ocean floor, where she studied the movement of magma underneath the crust. She studied the chemical composition of lava and collected samples from ocean floors to see differences in lava.
In 1999, Klein went on a voyage to research the Hess Deep Rift. During this voyage she found evidence that opposed the idea that mid-ocean ridges had magma that always rose up from the magma chamber to the surface. By studying the composition of lava she was able to retrieve key information about the temperature and pressures of magma below the crust, as well as determining its origin.
Klein researched samples of dikes beside rift walls, and assumed that they formed from the same part of the magma chamber, thus making their chemical composition relatively the same. Through further research, however, Klein discovered that the chemical structures of the dikes were clearly distinct from one another. Leading to the conclusion that the dikes must have originated from separate magma chambers.
Through her research findings, she concluded that dikes in Hess Deep had magma that didn’t reach the surface and contained crystals and other minerals which made the magma light enough to reach the surface of the sea.
Ultimately, Klein found that magma does not rise straight up to the surface of the ocean floor, and that dikes cannot be chemically identified by only the composition of lava on the seafloor. Researchers must take into account that magma can travel sideways and rise in other parts of the magma chamber.
Incipient Rift
In 2002 Klein sailed to the East Pacific Rise to further research a tectonic plate named the Galapagos Microplate. She wanted to carry out her endeavour to find lava samples of the incipient rift. They found volcanic activity along the entire rift, discovering that it was a plate boundary and what could be a newly forming microplate. This finding essentially caused scientists to rethink research on the evolution of the Galapagos microplate area.
Pito Deep
Klein has done extensive research regarding Pito Deep, an underwater abyss, in order to gain a greater understanding of the geology under the oceanic floor. Klein and other scientists sent a robot (Jason II) underwater to take pictures and obtain samples of lava and rocks for further testing.
The main purpose of researching Pito Deep was to gain information about the ocean's crust. This is difficult to do since there are particular places in the ocean where tectonic forces prevent gaining access to the crust for the purpose of study. In the Pito Deep abyss, tectonic forces cause a large fault and rift, enabling geologists like Klein to look into the deeper layers of the ocean’s crust.
Awards and honors
1987 Bruce C. Heezen Memorial Prize
1992 Geochemical Society F.W. Clarke Medal
1992 National Science Foundation Young Investigator Award
2003 Geological Society of America Ingerson Lecture
2006 Duke University Bass Fellow
2018 Duke University Distinguished Service Professor
2022 Fellow of the American Association for the Advancement of Science
The parents of one of Klein's undergraduate students donated $100,000 to create an Emily M. Klein endowment fund.
Select Academic Works
References
Women geochemists
Duke University faculty
Columbia University alumni
Barnard College alumni
Women oceanographers
American women geologists
American geologists
Year of birth missing (living people)
Living people
American women academics
21st-century American women
Fellows of the American Association for the Advancement of Science | Emily Klein | Chemistry | 1,992 |
5,482,961 | https://en.wikipedia.org/wiki/Antimony%20triiodide | Antimony triiodide is the chemical compound with the formula SbI3. This ruby-red solid is the only characterized "binary" iodide of antimony, i.e. the sole compound isolated with the formula SbxIy. It contains antimony in its +3 oxidation state. Like many iodides of the heavier main group elements, its structure depends on the phase. Gaseous SbI3 is a molecular, pyramidal species as anticipated by VSEPR theory. In the solid state, however, the Sb center is surrounded by an octahedron of six iodide ligands, three of which are closer and three more distant. For the related compound BiI3, all six Bi—I distances are equal.
Production
It may be formed by the reaction of antimony with elemental iodine, or the reaction of antimony trioxide with hydroiodic acid.
Alternatively, it may be prepared by the interaction of antimony and iodine in boiling benzene or tetrachloroethane.
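For reference, the two preparation routes described above correspond to the following balanced equations (standard stoichiometry, added here for illustration rather than taken from this article's sources):

$$2\,\mathrm{Sb} + 3\,\mathrm{I_2} \rightarrow 2\,\mathrm{SbI_3}$$
$$\mathrm{Sb_2O_3} + 6\,\mathrm{HI} \rightarrow 2\,\mathrm{SbI_3} + 3\,\mathrm{H_2O}$$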
Uses
SbI3 has been used as a dopant in the preparation of thermoelectric materials.
References
External links
Iodides
Metal halides
Antimony(III) compounds | Antimony triiodide | Chemistry | 252 |
50,017,260 | https://en.wikipedia.org/wiki/Bernhard%20Keller | Bernhard Keller (born 1962) is a Swiss mathematician, specializing in algebra. He is a professor at the University of Paris.
Keller received his PhD in 1990 from the University of Zurich under Pierre Gabriel with the thesis On Derived Categories.
His research is in homological algebra and the representation theory of quivers and finite-dimensional algebras. He has applied triangulated Calabi–Yau categories to the (additive) categorification of cluster algebras.
In 2013, he received an honorary degree from the University of Antwerp.
In 2014 he received the Sophie Germain Prize.
He was an Invited Speaker at the International Congress of Mathematicians in Madrid in 2006, with a talk On differential graded categories.
Keller is a fellow of the American Mathematical Society.
Selected works
with Idun Reiten:
References
External links
Bernhard Keller's homepage
1964 births
Algebraists
20th-century Swiss mathematicians
21st-century Swiss mathematicians
University of Zurich alumni
Academic staff of the University of Paris
Living people
Fellows of the American Mathematical Society | Bernhard Keller | Mathematics | 202 |
7,778,528 | https://en.wikipedia.org/wiki/Mxwendler | MXWendler is a software system created by the German company device+context.
The software is intended to create live visuals, commonly used in clubs, music festivals, theatres, facade projections and arts events.
The software renders live video using common graphics hardware instead of the CPU; the company's website claims this makes the software 'very fast'. Another aspect is the "generative aspect" of the software: it has a built-in feedback loop which allows it to generate new video streams from feedback characteristics, as proposed by VJs such as mxzehn, instead of relying solely on triggered, prepared footage clips.
See also
Video performance artist
External links
company homepage
mxzehn
Multimedia software | Mxwendler | Technology | 144 |
15,942 | https://en.wikipedia.org/wiki/John%20von%20Neumann | John von Neumann ( ; ; December 28, 1903 – February 8, 1957) was a Hungarian and American mathematician, physicist, computer scientist and engineer. Von Neumann had perhaps the widest coverage of any mathematician of his time, integrating pure and applied sciences and making major contributions to many fields, including mathematics, physics, economics, computing, and statistics. He was a pioneer in building the mathematical framework of quantum physics, in the development of functional analysis, and in game theory, introducing or codifying concepts including cellular automata, the universal constructor and the digital computer. His analysis of the structure of self-replication preceded the discovery of the structure of DNA.
During World War II, von Neumann worked on the Manhattan Project. He developed the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon. Before and after the war, he consulted for many organizations including the Office of Scientific Research and Development, the Army's Ballistic Research Laboratory, the Armed Forces Special Weapons Project and the Oak Ridge National Laboratory. At the peak of his influence in the 1950s, he chaired a number of Defense Department committees including the Strategic Missile Evaluation Committee and the ICBM Scientific Advisory Committee. He was also a member of the influential Atomic Energy Commission in charge of all atomic energy development in the country. He played a key role alongside Bernard Schriever and Trevor Gardner in the design and development of the United States' first ICBM programs. At that time he was considered the nation's foremost expert on nuclear weaponry and the leading defense scientist at the U.S. Department of Defense.
Von Neumann's contributions and intellectual ability drew praise from colleagues in physics, mathematics, and beyond. Accolades he received range from the Medal of Freedom to a crater on the Moon named in his honor.
Life and education
Family background
Von Neumann was born in Budapest, Kingdom of Hungary (then part of the Austro-Hungarian Empire), on December 28, 1903, to a wealthy, non-observant Jewish family. His birth name was Neumann János Lajos. In Hungarian, the family name comes first, and his given names are equivalent to John Louis in English.
He was the eldest of three brothers; his two younger siblings were Mihály (Michael) and Miklós (Nicholas). His father Neumann Miksa (Max von Neumann) was a banker and held a doctorate in law. He had moved to Budapest from Pécs at the end of the 1880s. Miksa's father and grandfather were born in Ond (now part of Szerencs), Zemplén County, northern Hungary. John's mother was Kann Margit (Margaret Kann); her parents were Kann Jákab and Meisels Katalin of the Meisels family. Three generations of the Kann family lived in spacious apartments above the Kann-Heller offices in Budapest; von Neumann's family occupied an 18-room apartment on the top floor.
On February 20, 1913, Emperor Franz Joseph elevated John's father to the Hungarian nobility for his service to the Austro-Hungarian Empire. The Neumann family thus acquired the hereditary appellation Margittai, meaning "of Margitta" (today Marghita, Romania). The family had no connection with the town; the appellation was chosen in reference to Margaret, as was their chosen coat of arms depicting three marguerites. Neumann János became margittai Neumann János (John Neumann de Margitta), which he later changed to the German Johann von Neumann.
Child prodigy
Von Neumann was a child prodigy who at six years old could divide two eight-digit numbers in his head and converse in Ancient Greek. He, his brothers and his cousins were instructed by governesses. Von Neumann's father believed that knowledge of languages other than their native Hungarian was essential, so the children were tutored in English, French, German and Italian. By age eight, von Neumann was familiar with differential and integral calculus, and by twelve he had read Borel's La Théorie des Fonctions. He was also interested in history, reading Wilhelm Oncken's 46-volume world history series (General History in Monographs). One of the rooms in the apartment was converted into a library and reading room.
Von Neumann entered the Lutheran Fasori Evangélikus Gimnázium in 1914. Eugene Wigner was a year ahead of von Neumann at the school and soon became his friend.
Although von Neumann's father insisted that he attend school at the grade level appropriate to his age, he agreed to hire private tutors to give von Neumann advanced instruction. At 15, he began to study advanced calculus under the analyst Gábor Szegő. On their first meeting, Szegő was so astounded by von Neumann's mathematical talent and speed that, as recalled by his wife, he came back home with tears in his eyes. By 19, von Neumann had published two major mathematical papers, the second of which gave the modern definition of ordinal numbers, which superseded Georg Cantor's definition. At the conclusion of his education at the gymnasium, he applied for and won the Eötvös Prize, a national award for mathematics.
University studies
According to his friend Theodore von Kármán, von Neumann's father wanted John to follow him into industry, and asked von Kármán to persuade his son not to take mathematics. Von Neumann and his father decided that the best career path was chemical engineering. This was not something that von Neumann had much knowledge of, so it was arranged for him to take a two-year, non-degree course in chemistry at the University of Berlin, after which he sat for the entrance exam to ETH Zurich, which he passed in September 1923. Simultaneously von Neumann entered Pázmány Péter University, then known as the University of Budapest, as a Ph.D. candidate in mathematics. For his thesis, he produced an axiomatization of Cantor's set theory. In 1926, he graduated as a chemical engineer from ETH Zurich and simultaneously passed his final examinations summa cum laude for his Ph.D. in mathematics (with minors in experimental physics and chemistry) at the University of Budapest.
He then went to the University of Göttingen on a grant from the Rockefeller Foundation to study mathematics under David Hilbert. Hermann Weyl remembers how in the winter of 1926–1927 von Neumann, Emmy Noether, and he would walk through "the cold, wet, rain-wet streets of Göttingen" after class discussing hypercomplex number systems and their representations.
Career and private life
Von Neumann's habilitation was completed on December 13, 1927, and he began to give lectures as a Privatdozent at the University of Berlin in 1928. He was the youngest person elected Privatdozent in the university's history. He began writing nearly one major mathematics paper per month. In 1929, he briefly became a Privatdozent at the University of Hamburg, where the prospects of becoming a tenured professor were better, then in October of that year moved to Princeton University as a visiting lecturer in mathematical physics.
Von Neumann was baptized a Catholic in 1930. Shortly afterward, he married Marietta Kövesi, who had studied economics at Budapest University. Von Neumann and Marietta had a daughter, Marina, born in 1935; she would become a professor. The couple divorced on November 2, 1937. On November 17, 1938, von Neumann married Klára Dán.
In 1933 Von Neumann accepted a tenured professorship at the Institute for Advanced Study in New Jersey, when that institution's plan to appoint Hermann Weyl appeared to have failed. His mother, brothers and in-laws followed von Neumann to the United States in 1939. Von Neumann anglicized his name to John, keeping the German-aristocratic surname von Neumann. Von Neumann became a naturalized U.S. citizen in 1937, and immediately tried to become a lieutenant in the U.S. Army's Officers Reserve Corps. He passed the exams but was rejected because of his age.
Klára and John von Neumann were socially active within the local academic community. His white clapboard house on Westcott Road was one of Princeton's largest private residences. He always wore formal suits. He enjoyed Yiddish and "off-color" humor. In Princeton, he received complaints for playing extremely loud German march music; Von Neumann did some of his best work in noisy, chaotic environments. According to Churchill Eisenhart, von Neumann could attend parties until the early hours of the morning and then deliver a lecture at 8:30.
He was known for always being happy to provide others of all ability levels with scientific and mathematical advice. Wigner wrote that he perhaps supervised more work (in a casual sense) than any other modern mathematician. His daughter wrote that he was very concerned with his legacy in two aspects: his life and the durability of his intellectual contributions to the world.
Many considered him an excellent chairman of committees, deferring rather easily on personal or organizational matters but pressing on technical ones. Herbert York described the many "Von Neumann Committees" that he participated in as "remarkable in style as well as output". The way the committees von Neumann chaired worked directly and intimately with the necessary military or corporate entities became a blueprint for all Air Force long-range missile programs. Many people who had known von Neumann were puzzled by his relationship to the military and to power structures in general. Stanisław Ulam suspected that he had a hidden admiration for people or organizations that could influence the thoughts and decision making of others.
He also maintained his knowledge of languages learnt in his youth. He knew Hungarian, French, German and English fluently, and maintained a conversational level of Italian, Yiddish, Latin and Ancient Greek. His Spanish was less perfect. He had a passion for and encyclopedic knowledge of ancient history, and he enjoyed reading Ancient Greek historians in the original Greek. Ulam suspected they may have shaped his views on how future events could play out and how human nature and society worked in general.
Von Neumann's closest friend in the United States was the mathematician Stanisław Ulam. Von Neumann believed that much of his mathematical thought occurred intuitively; he would often go to sleep with a problem unsolved and know the answer upon waking up. Ulam noted that von Neumann's way of thinking might not be visual, but more aural. Ulam recalled, "Quite independently of his liking for abstract wit, he had a strong appreciation (one might say almost a hunger) for the more earthy type of comedy and humor".
Illness and death
In 1955, a mass was found near von Neumann's collarbone, which turned out to be cancer originating in the skeleton, pancreas or prostate. (While there is general agreement that the tumor had metastasised, sources differ on the location of the primary cancer.) The malignancy may have been caused by exposure to radiation at Los Alamos National Laboratory. As death neared he asked for a priest, though the priest later recalled that von Neumann found little comfort in receiving the last rites; he remained terrified of death and unable to accept it. Of his religious views, von Neumann reportedly said, "So long as there is the possibility of eternal damnation for nonbelievers it is more logical to be a believer at the end," referring to Pascal's wager. He confided to his mother, "There probably has to be a God. Many things are easier to explain if there is than if there isn't."
He died Roman Catholic on February 8, 1957, at Walter Reed Army Medical Hospital and was buried at Princeton Cemetery.
Mathematics
Set theory
At the beginning of the 20th century, efforts to base mathematics on naive set theory suffered a setback due to Russell's paradox (on the set of all sets that do not belong to themselves). The problem of an adequate axiomatization of set theory was resolved implicitly about twenty years later by Ernst Zermelo and Abraham Fraenkel. Zermelo–Fraenkel set theory provided a series of principles that allowed for the construction of the sets used in the everyday practice of mathematics, but did not explicitly exclude the possibility of the existence of a set that belongs to itself. In his 1925 doctoral thesis, von Neumann demonstrated two techniques to exclude such sets—the axiom of foundation and the notion of class.
The axiom of foundation proposed that every set can be constructed from the bottom up in an ordered succession of steps by way of the Zermelo–Fraenkel principles. If one set belongs to another, then the first must necessarily come before the second in the succession. This excludes the possibility of a set belonging to itself. To demonstrate that the addition of this new axiom to the others did not produce contradictions, von Neumann introduced the method of inner models, which became an essential demonstration instrument in set theory.
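In modern notation the axiom of foundation (regularity) is usually written as follows; this is the standard formulation rather than a quotation of von Neumann's original wording:

$$\forall x\,\bigl(x \neq \varnothing \rightarrow \exists y\,(y \in x \wedge y \cap x = \varnothing)\bigr)$$

Applied to a hypothetical set $a$ with $a \in a$, the axiom fails for $\{a\}$, whose only element $a$ satisfies $a \cap \{a\} \neq \varnothing$, so no such set can exist.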
The second approach to the problem of sets belonging to themselves took as its base the notion of class, and defines a set as a class that belongs to other classes, while a proper class is defined as a class that does not belong to other classes. On the Zermelo–Fraenkel approach, the axioms impede the construction of a set of all sets that do not belong to themselves. In contrast, on von Neumann's approach, the class of all sets that do not belong to themselves can be constructed, but it is a proper class, not a set.
Overall, von Neumann's major achievement in set theory was an "axiomatization of set theory and (connected with that) elegant theory of the ordinal and cardinal numbers as well as the first strict formulation of principles of definitions by the transfinite induction".
Von Neumann paradox
Building on the Hausdorff paradox of Felix Hausdorff (1914), Stefan Banach and Alfred Tarski in 1924 showed how to subdivide a three-dimensional ball into disjoint sets, then translate and rotate these sets to form two identical copies of the same ball; this is the Banach–Tarski paradox. They also proved that a two-dimensional disk has no such paradoxical decomposition. But in 1929, von Neumann subdivided the disk into finitely many pieces and rearranged them into two disks, using area-preserving affine transformations instead of translations and rotations. The result depended on finding free groups of affine transformations, an important technique extended later by von Neumann in his work on measure theory.
Proof theory
With the contributions of von Neumann to sets, the axiomatic system of the theory of sets avoided the contradictions of earlier systems and became usable as a foundation for mathematics, despite the lack of a proof of its consistency. The next question was whether it provided definitive answers to all mathematical questions that could be posed in it, or whether it might be improved by adding stronger axioms that could be used to prove a broader class of theorems.
By 1927, von Neumann was involving himself in discussions in Göttingen on whether elementary arithmetic followed from Peano axioms. Building on the work of Ackermann, he began attempting to prove (using the finitistic methods of Hilbert's school) the consistency of first-order arithmetic. He succeeded in proving the consistency of a fragment of arithmetic of natural numbers (through the use of restrictions on induction). He continued looking for a more general proof of the consistency of classical mathematics using methods from proof theory.
A strongly negative answer to whether it was definitive arrived in September 1930 at the Second Conference on the Epistemology of the Exact Sciences, in which Kurt Gödel announced his first theorem of incompleteness: the usual axiomatic systems are incomplete, in the sense that they cannot prove every truth expressible in their language. Moreover, every consistent extension of these systems necessarily remains incomplete. At the conference, von Neumann suggested to Gödel that he should try to transform his results for undecidable propositions about integers.
Less than a month later, von Neumann communicated to Gödel an interesting consequence of his theorem: the usual axiomatic systems are unable to demonstrate their own consistency. Gödel replied that he had already discovered this consequence, now known as his second incompleteness theorem, and that he would send a preprint of his article containing both results, which never appeared. Von Neumann acknowledged Gödel's priority in his next letter. However, von Neumann's method of proof differed from Gödel's, and he was also of the opinion that the second incompleteness theorem had dealt a much stronger blow to Hilbert's program than Gödel thought it did. With this discovery, which drastically changed his views on mathematical rigor, von Neumann ceased research in the foundations of mathematics and metamathematics and instead spent time on problems connected with applications.
Ergodic theory
In a series of papers published in 1932, von Neumann made foundational contributions to ergodic theory, a branch of mathematics that involves the states of dynamical systems with an invariant measure. Of the 1932 papers on ergodic theory, Paul Halmos wrote that even "if von Neumann had never done anything else, they would have been sufficient to guarantee him mathematical immortality". By then von Neumann had already written his articles on operator theory, and the application of this work was instrumental in his mean ergodic theorem.
The theorem is about arbitrary one-parameter unitary groups $\{U_t\}$ and states that for every vector $\varphi$ in the Hilbert space, the time average $\lim_{T\to\infty}\frac{1}{T}\int_0^T U_t\varphi\,dt$ exists in the sense of the metric defined by the Hilbert norm and is a vector $\psi$ such that $U_t\psi=\psi$ for all $t$. This was proven in the first paper. In the second paper, von Neumann argued that his results here were sufficient for physical applications relating to Boltzmann's ergodic hypothesis. He also pointed out that ergodicity had not yet been achieved and isolated this for future work.
Later in the year he published another influential paper that began the systematic study of ergodicity. He gave and proved a decomposition theorem showing that the ergodic measure preserving actions of the real line are the fundamental building blocks from which all measure preserving actions can be built. Several other key theorems are given and proven. The results in this paper and another in conjunction with Paul Halmos have significant applications in other areas of mathematics.
Measure theory
In measure theory, the "problem of measure" for an $n$-dimensional Euclidean space $\mathbb{R}^n$ may be stated as: "does there exist a positive, normalized, invariant, and additive set function on the class of all subsets of $\mathbb{R}^n$?" The work of Felix Hausdorff and Stefan Banach had implied that the problem of measure has a positive solution if $n = 1$ or $n = 2$ and a negative solution (because of the Banach–Tarski paradox) in all other cases. Von Neumann's work argued that the "problem is essentially group-theoretic in character": the existence of a measure could be determined by looking at the properties of the transformation group of the given space. The positive solution for spaces of dimension at most two, and the negative solution for higher dimensions, comes from the fact that the Euclidean group is a solvable group for dimension at most two, and is not solvable for higher dimensions. "Thus, according to von Neumann, it is the change of group that makes a difference, not the change of space." Around 1942 he told Dorothy Maharam how to prove that every complete σ-finite measure space has a multiplicative lifting; he did not publish this proof and she later came up with a new one.
In a number of von Neumann's papers, the methods of argument he employed are considered even more significant than the results. In anticipation of his later study of dimension theory in algebras of operators, von Neumann used results on equivalence by finite decomposition, and reformulated the problem of measure in terms of functions. A major contribution von Neumann made to measure theory was the result of a paper written to answer a question of Haar regarding whether there existed an algebra of all bounded functions on the real number line such that they form "a complete system of representatives of the classes of almost everywhere-equal measurable bounded functions". He proved this in the positive, and in later papers with Stone discussed various generalizations and algebraic aspects of this problem. He also proved by new methods the existence of disintegrations for various general types of measures. Von Neumann also gave a new proof on the uniqueness of Haar measures by using the mean values of functions, although this method only worked for compact groups. He had to create entirely new techniques to apply this to locally compact groups. He also gave a new, ingenious proof for the Radon–Nikodym theorem. His lecture notes on measure theory at the Institute for Advanced Study were an important source for knowledge on the topic in America at the time, and were later published.
Topological groups
Using his previous work on measure theory, von Neumann made several contributions to the theory of topological groups, beginning with a paper on almost periodic functions on groups, where von Neumann extended Bohr's theory of almost periodic functions to arbitrary groups. He continued this work with another paper in conjunction with Bochner that improved the theory of almost periodicity to include functions that took on elements of linear spaces as values rather than numbers. In 1938, he was awarded the Bôcher Memorial Prize for his work in analysis in relation to these papers.
In a 1933 paper, he used the newly discovered Haar measure in the solution of Hilbert's fifth problem for the case of compact groups. The basic idea behind this was discovered several years earlier when von Neumann published a paper on the analytic properties of groups of linear transformations and found that closed subgroups of a general linear group are Lie groups. This was later extended by Cartan to arbitrary Lie groups in the form of the closed-subgroup theorem.
Functional analysis
Von Neumann was the first to axiomatically define an abstract Hilbert space. He defined it as a complex vector space with a Hermitian scalar product, the space being both separable and complete with respect to the corresponding norm. In the same papers he also proved the general form of the Cauchy–Schwarz inequality that had previously been known only in specific examples. He continued with the development of the spectral theory of operators in Hilbert space in three seminal papers between 1929 and 1932. This work culminated in his Mathematical Foundations of Quantum Mechanics, which alongside two other books by Stone and Banach in the same year were the first monographs on Hilbert space theory. Previous work by others showed that a theory of weak topologies could not be obtained by using sequences. Von Neumann was the first to outline a program of how to overcome the difficulties, which resulted in him defining locally convex spaces and topological vector spaces for the first time. In addition, several other topological properties he defined at the time (he was among the first mathematicians to carry Hausdorff's new topological ideas from Euclidean to Hilbert spaces), such as boundedness and total boundedness, are still used today. For twenty years von Neumann was considered the 'undisputed master' of this area. These developments were primarily prompted by needs in quantum mechanics, where von Neumann realized the need to extend the spectral theory of Hermitian operators from the bounded to the unbounded case. Other major achievements in these papers include a complete elucidation of spectral theory for normal operators, the first abstract presentation of the trace of a positive operator, a generalisation of Riesz's presentation of Hilbert's spectral theorems at the time, and the discovery of Hermitian operators in a Hilbert space, as distinct from self-adjoint operators, which enabled him to give a description of all Hermitian operators which extend a given Hermitian operator. He wrote a paper detailing how the usage of infinite matrices, common at the time in spectral theory, was inadequate as a representation for Hermitian operators. His work on operator theory led to his most profound invention in pure mathematics, the study of von Neumann algebras and in general of operator algebras.
His later work on rings of operators led him to revisit his work on spectral theory and to provide a new way of working through the geometric content by the use of direct integrals of Hilbert spaces. As in his work on measure theory, he proved several theorems that he did not find time to publish. He told Nachman Aronszajn and K. T. Smith that in the early 1930s he proved the existence of proper invariant subspaces for completely continuous operators in a Hilbert space while working on the invariant subspace problem.
With I. J. Schoenberg he wrote several items investigating translation-invariant Hilbertian metrics on the real number line, which resulted in their complete classification. Their motivation lay in various questions related to embedding metric spaces into Hilbert spaces.
With Pascual Jordan he wrote a short paper giving the first derivation of a given norm from an inner product by means of the parallelogram identity. His trace inequality is a key result of matrix theory used in matrix approximation problems. He also first presented the idea that the dual of a pre-norm is a norm in the first major paper discussing the theory of unitarily invariant norms and symmetric gauge functions (now known as symmetric absolute norms). This paper leads naturally to the study of symmetric operator ideals and is the beginning point for modern studies of symmetric operator spaces.
Later with Robert Schatten he initiated the study of nuclear operators on Hilbert spaces and tensor products of Banach spaces, and introduced and studied trace class operators, their ideals, and their duality with compact operators and preduality with bounded operators. The generalization of this topic to the study of nuclear operators on Banach spaces was among the first achievements of Alexander Grothendieck. Previously, in 1937, von Neumann had published several results in this area, for example giving a 1-parameter scale of different cross norms and proving several other results on what are now known as Schatten–von Neumann ideals.
Operator algebras
Von Neumann founded the study of rings of operators, through the von Neumann algebras (originally called W*-algebras). While his original ideas for rings of operators existed already in 1930, he did not begin studying them in depth until he met F. J. Murray several years later. A von Neumann algebra is a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. The von Neumann bicommutant theorem shows that the analytic definition is equivalent to a purely algebraic definition as being equal to the bicommutant. After elucidating the study of the commutative algebra case, von Neumann embarked in 1936, with the partial collaboration of Murray, on the noncommutative case, the general study of the factor classification of von Neumann algebras. The six major papers in which he developed that theory between 1936 and 1940 "rank among the masterpieces of analysis in the twentieth century"; they collect many foundational results and started several programs in operator algebra theory that mathematicians worked on for decades afterwards. An example is the classification of factors. In addition in 1938 he proved that every von Neumann algebra on a separable Hilbert space is a direct integral of factors; he did not find time to publish this result until 1949. Von Neumann algebras relate closely to a theory of noncommutative integration, something that von Neumann hinted at in his work but did not explicitly write out. Another important result on polar decomposition was published in 1932.
Lattice theory
Between 1935 and 1937, von Neumann worked on lattice theory, the theory of partially ordered sets in which every two elements have a greatest lower bound and a least upper bound. As Garrett Birkhoff wrote, "John von Neumann's brilliant mind blazed over lattice theory like a meteor". Von Neumann combined traditional projective geometry with modern algebra (linear algebra, ring theory, lattice theory). Many previously geometric results could then be interpreted in the case of general modules over rings. His work laid the foundations for some of the modern work in projective geometry.
His biggest contribution was founding the field of continuous geometry. It followed his path-breaking work on rings of operators. In mathematics, continuous geometry is a substitute for complex projective geometry, where instead of the dimension of a subspace being in a discrete set it can be an element of the unit interval [0, 1]. Earlier, Menger and Birkhoff had axiomatized complex projective geometry in terms of the properties of its lattice of linear subspaces. Von Neumann, following his work on rings of operators, weakened those axioms to describe a broader class of lattices, the continuous geometries.
While the dimensions of the subspaces of projective geometries form a discrete set (the non-negative integers), the dimensions of the elements of a continuous geometry can range continuously across the unit interval [0, 1]. Von Neumann was motivated by his discovery of von Neumann algebras with a dimension function taking a continuous range of dimensions, and the first example of a continuous geometry other than projective space was the projections of the hyperfinite type II factor.
In more purely lattice-theoretical work, he solved the difficult problem of characterizing the class of continuous-dimensional projective geometries over an arbitrary division ring in the abstract language of lattice theory. Von Neumann provided an abstract exploration of dimension in completed complemented modular topological lattices (properties that arise in the lattices of subspaces of inner product spaces): Dimension is determined, up to a positive linear transformation, by the following two properties. It is conserved by perspective mappings ("perspectivities") and ordered by inclusion. The deepest part of the proof concerns the equivalence of perspectivity with "projectivity by decomposition"—of which a corollary is the transitivity of perspectivity.
Every abstract projective geometry of dimension at least three is isomorphic to the subspace-lattice of a vector space of one higher dimension over a (unique) corresponding division ring; this is known as the Veblen–Young theorem. Von Neumann extended this fundamental result in projective geometry to the continuous dimensional case. This coordinatization theorem stimulated considerable work in abstract projective geometry and lattice theory, much of which continued using von Neumann's techniques. Birkhoff described this theorem as follows: Any complemented modular lattice having a "basis" of pairwise perspective elements, is isomorphic with the lattice of all principal right-ideals of a suitable regular ring. This conclusion is the culmination of 140 pages of brilliant and incisive algebra involving entirely novel axioms. Anyone wishing to get an unforgettable impression of the razor edge of von Neumann's mind, need merely try to pursue this chain of exact reasoning for himself—realizing that often five pages of it were written down before breakfast, seated at a living room writing-table in a bathrobe.
This work required the creation of regular rings. A von Neumann regular ring is a ring in which, for every element a, an element x exists such that axa = a. These rings came from and have connections to his work on von Neumann algebras, as well as AW*-algebras and various kinds of C*-algebras.
Many smaller technical results were proven during the creation and proof of the above theorems, particularly regarding distributivity (such as infinite distributivity), von Neumann developing them as needed. He also developed a theory of valuations in lattices, and shared in developing the general theory of metric lattices.
Birkhoff noted in his posthumous article on von Neumann that most of these results were developed in an intense two-year period of work, and that while his interests continued in lattice theory after 1937, they became peripheral and mainly occurred in letters to other mathematicians. A final contribution in 1940 was for a joint seminar he conducted with Birkhoff at the Institute for Advanced Study on the subject where he developed a theory of σ-complete lattice ordered rings. He never wrote up the work for publication.
Mathematical statistics
Von Neumann made fundamental contributions to mathematical statistics. In 1941, he derived the exact distribution of the ratio of the mean square of successive differences to the sample variance for independent and identically normally distributed variables. This ratio was applied to the residuals from regression models and is commonly known as the Durbin–Watson statistic for testing the null hypothesis that the errors are serially independent against the alternative that they follow a stationary first order autoregression.
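One common statement of this ratio (often called the von Neumann ratio; the exact normalizations vary between sources and are not specified in this article, so the denominators below are illustrative) is

$$\eta = \frac{\delta^2}{s^2}, \qquad \delta^2 = \frac{1}{n-1}\sum_{i=1}^{n-1}(x_{i+1}-x_i)^2, \qquad s^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2,$$

where $x_1,\dots,x_n$ are the observations and $\bar{x}$ is their mean.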
Subsequently, Denis Sargan and Alok Bhargava extended the results for testing whether the errors on a regression model follow a Gaussian random walk (i.e., possess a unit root) against the alternative that they are a stationary first order autoregression.
Other work
In his early years, von Neumann published several papers related to set-theoretical real analysis and number theory. In a paper from 1925, he proved that for any dense sequence of points in the unit interval [0, 1], there exists a rearrangement of those points that is uniformly distributed. In 1926 his sole publication was on Prüfer's theory of ideal algebraic numbers, where he found a new way of constructing them, thus extending Prüfer's theory to the field of all algebraic numbers, and clarified their relation to p-adic numbers.
In 1928 he published two additional papers continuing with these themes. The first dealt with partitioning an interval into countably many congruent subsets. It solved a problem of Hugo Steinhaus asking whether an interval can be so divided. Von Neumann proved that indeed all intervals, whether half-open, open, or closed, can be decomposed by translations into countably many congruent subsets. His next paper gave a constructive proof, without the axiom of choice, that algebraically independent reals exist, exhibiting an explicit family of reals that are algebraically independent. Consequently, there exists a perfect algebraically independent set of reals the size of the continuum. Other minor results from his early career include a proof of a maximum principle for the gradient of a minimizing function in the field of calculus of variations, and a small simplification of Hermann Minkowski's theorem for linear forms in geometric number theory.
Later in his career together with Pascual Jordan and Eugene Wigner he wrote a foundational paper classifying all finite-dimensional formally real Jordan algebras and discovering the Albert algebras while attempting to look for a better mathematical formalism for quantum theory. In 1936 he attempted to further the program of replacing the axioms of his previous Hilbert space program with those of Jordan algebras in a paper investigating the infinite-dimensional case; he planned to write at least one further paper on the topic but never did. Nevertheless, these axioms formed the basis for further investigations of algebraic quantum mechanics started by Irving Segal.
Physics
Quantum mechanics
Von Neumann was the first to establish a rigorous mathematical framework for quantum mechanics, known as the Dirac–von Neumann axioms, in his influential 1932 work Mathematical Foundations of Quantum Mechanics. After having completed the axiomatization of set theory, he began to confront the axiomatization of quantum mechanics. He realized in 1926 that a state of a quantum system could be represented by a point in a (complex) Hilbert space that, in general, could be infinite-dimensional even for a single particle. In this formalism of quantum mechanics, observable quantities such as position or momentum are represented as linear operators acting on the Hilbert space associated with the quantum system.
The physics of quantum mechanics was thereby reduced to the mathematics of Hilbert spaces and linear operators acting on them. For example, the uncertainty principle, according to which the determination of the position of a particle prevents the determination of its momentum and vice versa, is translated into the non-commutativity of the two corresponding operators. This new mathematical formulation included as special cases the formulations of both Heisenberg and Schrödinger.
Von Neumann's abstract treatment permitted him to confront the foundational issue of determinism versus non-determinism, and in the book he presented a proof that the statistical results of quantum mechanics could not possibly be averages of an underlying set of determined "hidden variables", as in classical statistical mechanics. In 1935, Grete Hermann published a paper arguing that the proof contained a conceptual error and was therefore invalid. Hermann's work was largely ignored until after John S. Bell made essentially the same argument in 1966. In 2010, Jeffrey Bub argued that Bell had misconstrued von Neumann's proof, and pointed out that the proof, though not valid for all hidden variable theories, does rule out a well-defined and important subset. Bub also suggests that von Neumann was aware of this limitation and did not claim that his proof completely ruled out hidden variable theories. The validity of Bub's argument is, in turn, disputed. Gleason's theorem of 1957 provided an argument against hidden variables along the lines of von Neumann's, but founded on assumptions seen as better motivated and more physically meaningful.
Von Neumann's proof inaugurated a line of research that ultimately led, through Bell's theorem and the experiments of Alain Aspect in 1982, to the demonstration that quantum physics either requires a notion of reality substantially different from that of classical physics, or must include nonlocality in apparent violation of special relativity.
In a chapter of The Mathematical Foundations of Quantum Mechanics, von Neumann deeply analyzed the so-called measurement problem. He concluded that the entire physical universe could be made subject to the universal wave function. Since something "outside the calculation" was needed to collapse the wave function, von Neumann concluded that the collapse was caused by the consciousness of the experimenter. He argued that the mathematics of quantum mechanics allows the collapse of the wave function to be placed at any position in the causal chain from the measurement device to the "subjective consciousness" of the human observer. In other words, while the line between observer and observed could be drawn in different places, the theory only makes sense if an observer exists somewhere. Although the idea of consciousness causing collapse was accepted by Eugene Wigner, the Von Neumann–Wigner interpretation never gained acceptance among the majority of physicists.
Though theories of quantum mechanics continue to evolve, a basic framework for the mathematical formalism of problems in quantum mechanics underlying most approaches can be traced back to the mathematical formalisms and techniques first used by von Neumann. Discussions about interpretation of the theory, and extensions to it, are now mostly conducted on the basis of shared assumptions about the mathematical foundations.
Viewing von Neumann's work on quantum mechanics as a part of the fulfilment of Hilbert's sixth problem, mathematical physicist Arthur Wightman said in 1974 his axiomization of quantum theory was perhaps the most important axiomization of a physical theory to date. With his 1932 book, quantum mechanics became a mature theory in the sense it had a precise mathematical form, which allowed for clear answers to conceptual problems. Nevertheless, von Neumann in his later years felt he had failed in this aspect of his scientific work as despite all the mathematics he developed, he did not find a satisfactory mathematical framework for quantum theory as a whole.
Von Neumann entropy
Von Neumann entropy is extensively used in different forms (conditional entropy, relative entropy, etc.) in the framework of quantum information theory. Entanglement measures are based upon some quantity directly related to the von Neumann entropy. Given a statistical ensemble of quantum mechanical systems with the density matrix $\rho$, it is given by $S(\rho) = -\operatorname{tr}(\rho \ln \rho)$. Many of the same entropy measures in classical information theory can also be generalized to the quantum case, such as Holevo entropy and conditional quantum entropy. Quantum information theory is largely concerned with the interpretation and uses of von Neumann entropy, a cornerstone in the former's development; the Shannon entropy applies to classical information theory.
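A minimal numerical sketch (not from the article; the function name and example state are illustrative only) of computing the von Neumann entropy of a density matrix with NumPy:

import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """Von Neumann entropy S(rho) = -tr(rho ln rho) of a density matrix."""
    # Eigen-decomposition of the (Hermitian) density matrix.
    eigenvalues = np.linalg.eigvalsh(rho)
    # Drop numerically zero eigenvalues, since p*ln(p) -> 0 as p -> 0.
    p = eigenvalues[eigenvalues > 1e-12]
    return float(-np.sum(p * np.log(p)))

# Example: a maximally mixed qubit has entropy ln 2.
rho_mixed = np.eye(2) / 2
print(von_neumann_entropy(rho_mixed))  # approximately 0.6931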
Density matrix
The formalism of density operators and matrices was introduced by von Neumann in 1927 and independently, but less systematically by Lev Landau and Felix Bloch in 1927 and 1946 respectively. The density matrix allows the representation of probabilistic mixtures of quantum states (mixed states) in contrast to wavefunctions, which can only represent pure states.
Von Neumann measurement scheme
The von Neumann measurement scheme, the ancestor of quantum decoherence theory, represents measurements projectively by taking into account the measuring apparatus which is also treated as a quantum object. The 'projective measurement' scheme introduced by von Neumann led to the development of quantum decoherence theories.
Quantum logic
Von Neumann first proposed a quantum logic in his 1932 treatise Mathematical Foundations of Quantum Mechanics, where he noted that projections on a Hilbert space can be viewed as propositions about physical observables. The field of quantum logic was subsequently inaugurated in a 1936 paper by von Neumann and Garrett Birkhoff, the first to introduce quantum logics, wherein von Neumann and Birkhoff first proved that quantum mechanics requires a propositional calculus substantially different from all classical logics and rigorously isolated a new algebraic structure for quantum logics. The concept of creating a propositional calculus for quantum logic was first outlined in a short section in von Neumann's 1932 work, but in 1936, the need for the new propositional calculus was demonstrated through several proofs. For example, photons cannot pass through two successive filters that are polarized perpendicularly (e.g., horizontally and vertically), and therefore, a fortiori, they cannot pass if a third filter polarized diagonally is added to the other two, either before or after them in the succession; but if the third filter is added between the other two, the photons will indeed pass through. This experimental fact is translatable into logic as the non-commutativity of conjunction, $(A \land B) \neq (B \land A)$. It was also demonstrated that the laws of distribution of classical logic, $P \lor (Q \land R) = (P \lor Q) \land (P \lor R)$ and $P \land (Q \lor R) = (P \land Q) \lor (P \land R)$, are not valid for quantum theory.
The reason for this is that a quantum disjunction, unlike the case for classical disjunction, can be true even when both of the disjuncts are false and this is in turn attributable to the fact that it is frequently the case in quantum mechanics that a pair of alternatives are semantically determinate, while each of its members is necessarily indeterminate. Consequently, the distributive law of classical logic must be replaced with a weaker condition. Instead of a distributive lattice, propositions about a quantum system form an orthomodular lattice isomorphic to the lattice of subspaces of the Hilbert space associated with that system.
Nevertheless, he was never satisfied with his work on quantum logic. He intended it to be a joint synthesis of formal logic and probability theory and when he attempted to write up a paper for the Henry Joseph Lecture he gave at the Washington Philosophical Society in 1945 he found that he could not, especially given that he was busy with war work at the time. During his address at the 1954 International Congress of Mathematicians he gave this issue as one of the unsolved problems that future mathematicians could work on.
Fluid dynamics
Von Neumann made fundamental contributions in the field of fluid dynamics, including the classic flow solution to blast waves, and the co-discovery (independently by Yakov Borisovich Zel'dovich and Werner Döring) of the ZND detonation model of explosives. During the 1930s, von Neumann became an authority on the mathematics of shaped charges.
Later with Robert D. Richtmyer, von Neumann developed an algorithm defining artificial viscosity that improved the understanding of shock waves. When computers solved hydrodynamic or aerodynamic problems, they put too many computational grid points at regions of sharp discontinuity (shock waves). The mathematics of artificial viscosity smoothed the shock transition without sacrificing basic physics.
Von Neumann soon applied computer modelling to the field, developing software for his ballistics research. During World War II, he approached R. H. Kent, the director of the US Army's Ballistic Research Laboratory, with a computer program for calculating a one-dimensional model of 100 molecules to simulate a shock wave. Von Neumann gave a seminar on his program to an audience which included his friend Theodore von Kármán. After von Neumann had finished, von Kármán said "Of course you realize Lagrange also used digital models to simulate continuum mechanics." Von Neumann had been unaware of Lagrange's Mécanique analytique.
Other work
While not as prolific in physics as he was in mathematics, he nevertheless made several other notable contributions. His pioneering papers with Subrahmanyan Chandrasekhar on the statistics of a fluctuating gravitational field generated by randomly distributed stars were considered a tour de force. In this paper they developed a theory of two-body relaxation and used the Holtsmark distribution to model the dynamics of stellar systems. He wrote several other unpublished manuscripts on topics in stellar structure, some of which were included in Chandrasekhar's other works. In earlier work led by Oswald Veblen, von Neumann helped develop basic ideas involving spinors that would lead to Roger Penrose's twistor theory. Much of this was done in seminars conducted at the IAS during the 1930s. From this work he wrote a paper with A. H. Taub and Veblen extending the Dirac equation to projective relativity, with a key focus on maintaining invariance with regard to coordinate, spin, and gauge transformations, as part of early research into potential theories of quantum gravity in the 1930s. In the same period he made several proposals to colleagues for dealing with the problems in the newly created quantum field theory and for quantizing spacetime; however, neither he nor his colleagues considered the ideas fruitful, and he did not pursue them. Nevertheless, he maintained at least some interest, in 1940 writing a manuscript on the Dirac equation in de Sitter space.
Economics
Game theory
Von Neumann founded the field of game theory as a mathematical discipline. He proved his minimax theorem in 1928. It establishes that in zero-sum games with perfect information (i.e., in which players know at each time all moves that have taken place so far), there exists a pair of strategies for both players that allows each to minimize their maximum losses. Such strategies are called optimal. Von Neumann showed that their minimaxes are equal (in absolute value) and contrary (in sign). He improved and extended the minimax theorem to include games involving imperfect information and games with more than two players, publishing this result in his 1944 Theory of Games and Economic Behavior, written with Oskar Morgenstern. The public interest in this work was such that The New York Times ran a front-page story. In this book, von Neumann declared that economic theory needed to use functional analysis, especially convex sets and the topological fixed-point theorem, rather than the traditional differential calculus, because the maximum-operator did not preserve differentiable functions.
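In modern notation, the minimax theorem for a finite two-player zero-sum game with payoff matrix $A$ states (this is the standard contemporary statement, not a quotation of von Neumann's 1928 formulation):

$$\max_{x \in \Delta_m} \min_{y \in \Delta_n} x^{\mathsf T} A y \;=\; \min_{y \in \Delta_n} \max_{x \in \Delta_m} x^{\mathsf T} A y,$$

where $\Delta_m$ and $\Delta_n$ are the sets of mixed strategies (probability vectors) of the two players; the common value is the value of the game.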
Von Neumann's functional-analytic techniques—the use of duality pairings of real vector spaces to represent prices and quantities, the use of supporting and separating hyperplanes and convex sets, and fixed-point theory—have been primary tools of mathematical economics ever since.
Mathematical economics
Von Neumann raised the mathematical level of economics in several influential publications. For his model of an expanding economy, he proved the existence and uniqueness of an equilibrium using his generalization of the Brouwer fixed-point theorem. Von Neumann's model of an expanding economy considered the matrix pencil A − λB with nonnegative matrices A and B; von Neumann sought probability vectors p and q and a positive number λ that would solve the complementarity equation along with two inequality systems expressing economic efficiency. In this model, the (transposed) probability vector p represents the prices of the goods while the probability vector q represents the "intensity" at which the production process would run. The unique solution λ represents the growth factor which is 1 plus the rate of growth of the economy; the rate of growth equals the interest rate.
Von Neumann's results have been viewed as a special case of linear programming, where his model uses only nonnegative matrices. The study of his model of an expanding economy continues to interest mathematical economists. This paper has been called the greatest paper in mathematical economics by several authors, who recognized its introduction of fixed-point theorems, linear inequalities, complementary slackness, and saddlepoint duality. In the proceedings of a conference on von Neumann's growth model, Paul Samuelson said that many mathematicians had developed methods useful to economists, but that von Neumann was unique in having made significant contributions to economic theory itself. The lasting importance of the work on general equilibria and the methodology of fixed point theorems is underscored by the awarding of Nobel prizes in 1972 to Kenneth Arrow, in 1983 to Gérard Debreu, and in 1994 to John Nash who used fixed point theorems to establish equilibria for non-cooperative games and for bargaining problems in his Ph.D. thesis. Arrow and Debreu also used linear programming, as did Nobel laureates Tjalling Koopmans, Leonid Kantorovich, Wassily Leontief, Paul Samuelson, Robert Dorfman, Robert Solow, and Leonid Hurwicz.
Von Neumann's interest in the topic began while he was lecturing at Berlin in 1928 and 1929. He spent his summers in Budapest, as did the economist Nicholas Kaldor; Kaldor recommended that von Neumann read a book by the mathematical economist Léon Walras. Von Neumann noticed that Walras's General Equilibrium Theory and Walras's law, which led to systems of simultaneous linear equations, could produce the absurd result that profit could be maximized by producing and selling a negative quantity of a product. He replaced the equations by inequalities, introduced dynamic equilibria, among other things, and eventually produced his paper.
Linear programming
Building on his results on matrix games and on his model of an expanding economy, von Neumann invented the theory of duality in linear programming when George Dantzig described his work in a few minutes, and an impatient von Neumann asked him to get to the point. Dantzig then listened dumbfounded while von Neumann provided an hourlong lecture on convex sets, fixed-point theory, and duality, conjecturing the equivalence between matrix games and linear programming.
Later, von Neumann suggested a new method of linear programming, using the homogeneous linear system of Paul Gordan (1873), which was later popularized by Karmarkar's algorithm. Von Neumann's method used a pivoting algorithm between simplices, with the pivoting decision determined by a nonnegative least squares subproblem with a convexity constraint (projecting the zero-vector onto the convex hull of the active simplex). Von Neumann's algorithm was the first interior point method of linear programming.
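The duality von Neumann conjectured is, in its now-standard form (a modern textbook statement, not his original notation), the correspondence between the primal and dual linear programs

$$\text{(P)}\quad \max\; c^{\mathsf T}x \;\;\text{s.t.}\;\; Ax \le b,\; x \ge 0 \qquad\qquad \text{(D)}\quad \min\; b^{\mathsf T}y \;\;\text{s.t.}\;\; A^{\mathsf T}y \ge c,\; y \ge 0,$$

where, whenever either problem has a finite optimum, both do and their optimal values coincide.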
Computer science
Von Neumann was a founding figure in computing, with significant contributions to computing hardware design, to theoretical computer science, to scientific computing, and to the philosophy of computer science.
Hardware
Von Neumann consulted for the Army's Ballistic Research Laboratory, most notably on the ENIAC project, as a member of its Scientific Advisory Committee. Although the single-memory, stored-program architecture is commonly called von Neumann architecture, the architecture was based on the work of J. Presper Eckert and John Mauchly, inventors of ENIAC and its successor, EDVAC.
While consulting for the EDVAC project at the University of Pennsylvania, von Neumann wrote an incomplete First Draft of a Report on the EDVAC. The paper, whose premature distribution nullified the patent claims of Eckert and Mauchly, described a computer that stored both its data and its program in the same address space, unlike the earliest computers which stored their programs separately on paper tape or plugboards. This architecture became the basis of most modern computer designs.
Next, von Neumann designed the IAS machine at the Institute for Advanced Study in Princeton, New Jersey. He arranged its financing, and the components were designed and built at the RCA Research Laboratory nearby. Von Neumann recommended that the IBM 701, nicknamed the defense computer, include a magnetic drum. It was a faster version of the IAS machine and formed the basis for the commercially successful IBM 704.
Algorithms
Von Neumann was the inventor, in 1945, of the merge sort algorithm, in which the first and second halves of an array are each sorted recursively and then merged.
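The procedure he described can be rendered in a few lines of Python (an illustrative modern sketch, not von Neumann's original EDVAC formulation):

```python
def merge_sort(items):
    """Sort a list by recursively sorting each half and merging the results."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```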
As part of von Neumann's hydrogen bomb work, he and Stanisław Ulam developed simulations for hydrodynamic computations. He also contributed to the development of the Monte Carlo method, which used random numbers to approximate the solutions to complicated problems.
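The idea behind the Monte Carlo method can be illustrated with a toy example, estimating π rather than the neutron-transport quantities it was developed for (the sample size and seed are arbitrary choices of this sketch):

```python
import random

def monte_carlo_pi(samples=100_000, seed=0):
    """Estimate pi by drawing random points in the unit square and counting
    the fraction that falls inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(monte_carlo_pi())  # roughly 3.14
```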
Von Neumann's algorithm for simulating a fair coin with a biased coin is used in the "software whitening" stage of some hardware random number generators. Because obtaining "truly" random numbers was impractical, von Neumann developed a form of pseudorandomness, using the middle-square method. He justified this crude method as faster than any other method at his disposal, writing that "Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin." He also noted that when this method went awry it did so obviously, unlike other methods which could be subtly incorrect.
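Both procedures are simple enough to sketch in Python. In the version below the extractor reads the biased stream in non-overlapping pairs and keeps one bit from each unequal pair, and the middle-square generator squares the current value and keeps its middle digits; the seed and digit count are arbitrary choices of the example:

```python
def von_neumann_extractor(bits):
    """Turn a stream of independent, identically biased coin flips into
    unbiased bits: read pairs, emit 1 for '10', 0 for '01', discard '00'/'11'."""
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

def middle_square(seed, n, digits=4):
    """Von Neumann's middle-square pseudorandom generator: square the current
    value and take the middle `digits` digits as the next value."""
    values, x = [], seed
    for _ in range(n):
        square = str(x * x).zfill(2 * digits)
        start = (len(square) - digits) // 2
        x = int(square[start:start + digits])
        values.append(x)
    return values

print(von_neumann_extractor([1, 0, 1, 1, 0, 1, 0, 0, 1, 0]))  # [1, 0, 1]
print(middle_square(5735, 5))
```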
Stochastic computing was introduced by von Neumann in 1953, but could not be implemented until advances in computing of the 1960s. Around 1950 he was also among the first to talk about the time complexity of computations, which eventually evolved into the field of computational complexity theory.
Cellular automata, DNA and the universal constructor
Von Neumann's mathematical analysis of the structure of self-replication preceded the discovery of the structure of DNA. Ulam and von Neumann are also generally credited with creating the field of cellular automata, beginning in the 1940s, as a simplified mathematical model of biological systems.
In lectures in 1948 and 1949, von Neumann proposed a kinematic self-reproducing automaton. By 1952, he was treating the problem more abstractly. He designed an elaborate 2D cellular automaton that would automatically make a copy of its initial configuration of cells. The von Neumann universal constructor, based on the von Neumann cellular automaton, was fleshed out in his posthumous Theory of Self-Reproducing Automata.
The von Neumann neighborhood, in which each cell in a two-dimensional grid has the four orthogonally adjacent grid cells as neighbors, continues to be used for other cellular automata.
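For example, the neighborhood can be enumerated as the set of cells within a given Manhattan distance of a cell (a small illustrative helper; the function name is invented here, and radius 1 recovers the four orthogonal neighbors):

```python
def von_neumann_neighborhood(cell, radius=1):
    """Cells within the given Manhattan distance of `cell`, excluding the cell itself."""
    x, y = cell
    return [
        (x + dx, y + dy)
        for dx in range(-radius, radius + 1)
        for dy in range(-radius, radius + 1)
        if 0 < abs(dx) + abs(dy) <= radius
    ]

print(von_neumann_neighborhood((0, 0)))  # [(-1, 0), (0, -1), (0, 1), (1, 0)]
```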
Scientific computing and numerical analysis
Considered to be possibly "the most influential researcher in scientific computing of all time", von Neumann made several contributions to the field, both technically and administratively. He developed the von Neumann stability analysis procedure, still commonly used to avoid errors from building up in numerical methods for linear partial differential equations. His paper with Herman Goldstine in 1947 was the first to describe backward error analysis, although implicitly. He was also one of the first to write about the Jacobi method. At Los Alamos, he wrote several classified reports on solving problems of gas dynamics numerically. However, he was frustrated by the lack of progress with analytic methods for these nonlinear problems. As a result, he turned towards computational methods. Under his influence Los Alamos became the leader in computational science during the 1950s and early 1960s.
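The von Neumann stability analysis mentioned above can be illustrated with a standard textbook example (not taken from his classified reports). For the forward-time, centred-space discretisation of the one-dimensional heat equation $u_t = \nu u_{xx}$,

$$u_j^{n+1} = u_j^n + r\left(u_{j+1}^n - 2u_j^n + u_{j-1}^n\right), \qquad r = \frac{\nu\,\Delta t}{\Delta x^2},$$

substituting a single Fourier mode $u_j^n = \xi^n e^{ikj\Delta x}$ gives the amplification factor

$$\xi(k) = 1 - 4r\sin^2\!\left(\frac{k\,\Delta x}{2}\right),$$

and requiring $|\xi(k)| \le 1$ for every wavenumber $k$ yields the stability condition $r \le \tfrac{1}{2}$, i.e. the time step must satisfy $\Delta t \le \Delta x^2 / (2\nu)$.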
From this work von Neumann realized that computation was not just a tool to brute force the solution to a problem numerically, but could also provide insight for solving problems analytically, and that there was an enormous variety of scientific and engineering problems towards which computers would be useful, most significant of which were nonlinear problems. In June 1945 at the First Canadian Mathematical Congress he gave his first talk on general ideas of how to solve problems, particularly of fluid dynamics numerically. He also described how wind tunnels were actually analog computers, and how digital computers would replace them and bring a new era of fluid dynamics. Garrett Birkhoff described it as "an unforgettable sales pitch". He expanded this talk with Goldstine into the manuscript "On the Principles of Large Scale Computing Machines" and used it to promote the support of scientific computing. His papers also developed the concepts of inverting matrices, random matrices and automated relaxation methods for solving elliptic boundary value problems.
Weather systems and global warming
As part of his research into possible applications of computers, von Neumann became interested in weather prediction, noting similarities between the problems in the field and those he had worked on during the Manhattan Project. In 1946 von Neumann founded the "Meteorological Project" at the Institute for Advanced Study, securing funding for his project from the Weather Bureau, the US Air Force and US Navy weather services. With Carl-Gustaf Rossby, considered the leading theoretical meteorologist at the time, he gathered a group of twenty meteorologists to work on various problems in the field. However, given his other postwar work he was not able to devote enough time to proper leadership of the project and little was accomplished.
This changed when a young Jule Gregory Charney took up co-leadership of the project from Rossby. By 1950 von Neumann and Charney had written the world's first climate modelling software, and used it to perform the world's first numerical weather forecasts on the ENIAC computer that von Neumann had arranged to be used; von Neumann and his team published the results as Numerical Integration of the Barotropic Vorticity Equation. Together they played a leading role in efforts to integrate sea-air exchanges of energy and moisture into the study of climate. Though the forecasts were primitive, news of them quickly spread around the world and a number of parallel projects were initiated in other locations.
In 1955 von Neumann, Charney and their collaborators convinced their funders to open the Joint Numerical Weather Prediction Unit (JNWPU) in Suitland, Maryland, which began routine real-time weather forecasting. Von Neumann then proposed a research program for climate modeling: "The approach is to first try short-range forecasts, then long-range forecasts of those properties of the circulation that can perpetuate themselves over arbitrarily long periods of time, and only finally to attempt forecast for medium-long time periods which are too long to treat by simple hydrodynamic theory and too short to treat by the general principle of equilibrium theory." Positive results of Norman A. Phillips in 1955 prompted immediate reaction and von Neumann organized a conference at Princeton on "Application of Numerical Integration Techniques to the Problem of the General Circulation". Once again he strategically organized the program as a predictive one to ensure continued support from the Weather Bureau and the military, leading to the creation of the General Circulation Research Section (now the Geophysical Fluid Dynamics Laboratory) next to the JNWPU. He continued work both on technical issues of modelling and in ensuring continuing funding for these projects.
During the late 19th century, Svante Arrhenius suggested that human activity could cause global warming by adding carbon dioxide to the atmosphere. In 1955, von Neumann observed that this may already have begun: "Carbon dioxide released into the atmosphere by industry's burning of coal and oil – more than half of it during the last generation – may have changed the atmosphere's composition sufficiently to account for a general warming of the world by about one degree Fahrenheit." His research into weather systems and meteorological prediction led him to propose manipulating the environment by spreading colorants on the polar ice caps to enhance absorption of solar radiation (by reducing the albedo). However, he urged caution in any program of atmosphere modification: "What could be done, of course, is no index to what should be done... In fact, to evaluate the ultimate consequences of either a general cooling or a general heating would be a complex matter. Changes would affect the level of the seas, and hence the habitability of the continental coastal shelves; the evaporation of the seas, and hence general precipitation and glaciation levels; and so on... But there is little doubt that one could carry out the necessary analyses needed to predict the results, intervene on any desired scale, and ultimately achieve rather fantastic results." He also warned that weather and climate control could have military uses, telling Congress in 1956 that they could pose an even bigger risk than ICBMs.
Technological singularity hypothesis
The first use of the concept of a singularity in the technological context is attributed to von Neumann, who according to Ulam discussed the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." This concept was later fleshed out in the 1970 book Future Shock by Alvin Toffler.
Defense work
Manhattan Project
Beginning in the late 1930s, von Neumann developed an expertise in explosions, phenomena that are difficult to model mathematically. During this period he was the leading authority on the mathematics of shaped charges, which led to a large number of military consultancies and consequently to his involvement in the Manhattan Project. This involvement included frequent trips to the project's secret research facilities at the Los Alamos Laboratory in New Mexico.
Von Neumann made his principal contribution to the atomic bomb in the concept and design of the explosive lenses that were needed to compress the plutonium core of the Fat Man weapon that was later dropped on Nagasaki. While von Neumann did not originate the "implosion" concept, he was one of its most persistent proponents, encouraging its continued development against the instincts of many of his colleagues, who felt such a design to be unworkable. He also eventually came up with the idea of using more powerful shaped charges and less fissionable material to greatly increase the speed of "assembly".
When it turned out that there would not be enough uranium-235 to make more than one bomb, the implosive lens project was greatly expanded and von Neumann's idea was implemented. Implosion was the only method that could be used with the plutonium-239 that was available from the Hanford Site. He established the design of the explosive lenses required, but there remained concerns about "edge effects" and imperfections in the explosives. His calculations showed that implosion would work if it did not depart by more than 5% from spherical symmetry. After a series of failed attempts with models, this was achieved by George Kistiakowsky, and the construction of the Trinity bomb was completed in July 1945.
In a visit to Los Alamos in September 1944, von Neumann showed that the pressure increase from explosion shock wave reflection from solid objects was greater than previously believed if the angle of incidence of the shock wave was between 90° and some limiting angle. As a result, it was determined that the effectiveness of an atomic bomb would be enhanced with detonation some kilometers above the target, rather than at ground level.
Von Neumann was included in the target selection committee that was responsible for choosing the Japanese cities of Hiroshima and Nagasaki as the first targets of the atomic bomb. Von Neumann oversaw computations related to the expected size of the bomb blasts, estimated death tolls, and the distance above the ground at which the bombs should be detonated for optimum shock wave propagation. The cultural capital Kyoto was von Neumann's first choice, a selection seconded by Manhattan Project leader General Leslie Groves. However, this target was dismissed by Secretary of War Henry L. Stimson.
On July 16, 1945, von Neumann and numerous other Manhattan Project personnel were eyewitnesses to the first test of an atomic bomb detonation, which was code-named Trinity. The event was conducted as a test of the implosion method device, at the Alamogordo Bombing Range in New Mexico. Based on his observation alone, von Neumann estimated the test had resulted in a blast equivalent to about 5 kilotons of TNT, but Enrico Fermi produced a more accurate estimate of 10 kilotons by dropping scraps of torn-up paper as the shock wave passed his location and watching how far they scattered. The actual power of the explosion had been between 20 and 22 kilotons. It was in von Neumann's 1944 papers that the expression "kilotons" appeared for the first time.
Von Neumann continued unperturbed in his work and became, along with Edward Teller, one of those who sustained the hydrogen bomb project. He collaborated with Klaus Fuchs on further development of the bomb, and in 1946 the two filed a secret patent outlining a scheme for using a fission bomb to compress fusion fuel to initiate nuclear fusion. The Fuchs–von Neumann patent used radiation implosion, but not in the same way as is used in what became the final hydrogen bomb design, the Teller–Ulam design. Their work was, however, incorporated into the "George" shot of Operation Greenhouse, which was instructive in testing out concepts that went into the final design. The Fuchs–von Neumann work was passed on to the Soviet Union by Fuchs as part of his nuclear espionage, but it was not used in the Soviets' own, independent development of the Teller–Ulam design. The historian Jeremy Bernstein has pointed out that ironically, "John von Neumann and Klaus Fuchs, produced a brilliant invention in 1946 that could have changed the whole course of the development of the hydrogen bomb, but was not fully understood until after the bomb had been successfully made."
For his wartime services, von Neumann was awarded the Navy Distinguished Civilian Service Award in July 1946, and the Medal for Merit in October 1946.
Post-war work
In 1950, von Neumann became a consultant to the Weapons Systems Evaluation Group, whose function was to advise the Joint Chiefs of Staff and the United States Secretary of Defense on the development and use of new technologies. He also became an adviser to the Armed Forces Special Weapons Project, which was responsible for the military aspects of nuclear weapons. Over the following two years, he became a consultant across the US government: to the Central Intelligence Agency (CIA), a member of the influential General Advisory Committee of the Atomic Energy Commission, a consultant to the newly established Lawrence Livermore National Laboratory, and a member of the Scientific Advisory Group of the United States Air Force. During this time he became a "superstar" defense scientist at the Pentagon. His authority was considered infallible at the highest levels of the US government and military.
During several meetings of the advisory board of the US Air Force, von Neumann and Edward Teller predicted that by 1960 the US would be able to build a hydrogen bomb light enough to fit on top of a rocket. In 1953 Bernard Schriever, who was present at the meeting, paid a personal visit to von Neumann at Princeton to confirm this possibility. Schriever enlisted Trevor Gardner, who in turn visited von Neumann several weeks later to fully understand the future possibilities before beginning his campaign for such a weapon in Washington. Now either chairing or serving on several boards dealing with strategic missiles and nuclear weaponry, von Neumann was able to inject several crucial arguments regarding potential Soviet advancements in both these areas and in strategic defenses against American bombers into government reports to argue for the creation of ICBMs. Gardner on several occasions brought von Neumann to meetings with the US Department of Defense to discuss with various senior officials his reports. Several design decisions in these reports such as inertial guidance mechanisms would form the basis for all ICBMs thereafter. By 1954, von Neumann was also regularly testifying to various Congressional military subcommittees to ensure continued support for the ICBM program.
However, this was not enough. To have the ICBM program run at full throttle they needed direct action by the President of the United States. They convinced President Eisenhower in a direct meeting in July 1955, which resulted in a presidential directive on September 13, 1955. It stated that "there would be the gravest repercussions on the national security and on the cohesion of the free world" if the Soviet Union developed the ICBM before the US and therefore designated the ICBM project "a research and development program of the highest priority above all others." The Secretary of Defense was ordered to commence the project with "maximum urgency". Evidence would later show that the Soviets indeed were already testing their own intermediate-range ballistic missiles at the time. Von Neumann would continue to meet the President, including at his home in Gettysburg, Pennsylvania, and other high-level government officials as a key advisor on ICBMs until his death.
Atomic Energy Commission
In 1955, von Neumann became a commissioner of the Atomic Energy Commission (AEC), which at the time was the highest official position available to scientists in the government. (While his appointment formally required that he sever all his other consulting contracts, an exemption was made for von Neumann to continue working with several critical military committees after the Air Force and several key senators raised concerns.) He used this position to further the production of compact hydrogen bombs suitable for intercontinental ballistic missile (ICBM) delivery. He involved himself in correcting the severe shortage of tritium and lithium 6 needed for these weapons, and he argued against settling for the intermediate-range missiles that the Army wanted. He was adamant that H-bombs delivered deep into enemy territory by an ICBM would be the most effective weapon possible, and that the relative inaccuracy of the missile would not be a problem with an H-bomb. He said the Russians would probably be building a similar weapon system, which turned out to be the case. While Lewis Strauss was away in the second half of 1955 von Neumann took over as acting chairman of the commission.
In his final years before his death from cancer, von Neumann headed the United States government's top-secret ICBM committee, which would sometimes meet in his home. Its purpose was to decide on the feasibility of building an ICBM large enough to carry a thermonuclear weapon. Von Neumann had long argued that while the technical obstacles were sizable, they could be overcome. The SM-65 Atlas passed its first fully functional test in 1959, two years after his death. The more advanced Titan rockets were deployed in 1962. Both had been proposed in the ICBM committees von Neumann chaired. The feasibility of the ICBMs owed as much to improved, smaller warheads that did not have guidance or heat resistance issues as it did to developments in rocketry, and his understanding of the former made his advice invaluable.
Von Neumann entered government service primarily because he felt that, if freedom and civilization were to survive, it would have to be because the United States would triumph over totalitarianism from Nazism, Fascism and Soviet Communism. During a Senate committee hearing he described his political ideology as "violently anti-communist, and much more militaristic than the norm".
Personality
Work habits
Herman Goldstine commented on von Neumann's ability to intuit hidden errors and remember old material perfectly. When he had difficulties he would not labor on; instead, he would go home and sleep on it and come back later with a solution. This style, 'taking the path of least resistance', sometimes meant that he could go off on tangents. It also meant that if the difficulty was great from the very beginning, he would simply switch to another problem, not trying to find weak spots from which he could break through. At times he could be ignorant of the standard mathematical literature, finding it easier to rederive basic information he needed rather than chase references.
After World War II began, he became extremely busy with both academic and military commitments. His habit of not writing up talks or publishing results worsened. He did not find it easy to discuss a topic formally in writing unless it was already mature in his mind; if it was not, he would, in his own words, "develop the worst traits of pedantism and inefficiency".
Mathematical range
The mathematician Jean Dieudonné said that von Neumann "may have been the last representative of a once-flourishing and numerous group, the great mathematicians who were equally at home in pure and applied mathematics and who throughout their careers maintained a steady production in both directions". According to Dieudonné, his specific genius was in analysis and "combinatorics", with combinatorics being understood in a very wide sense that described his ability to organize and axiomatize complex works that previously seemed to have little connection with mathematics. His style in analysis followed the German school, based on foundations in linear algebra and general topology. While von Neumann had an encyclopedic background, his range in pure mathematics was not as wide as that of Poincaré, Hilbert or even Weyl: von Neumann never did significant work in number theory, algebraic topology, algebraic geometry or differential geometry. However, in applied mathematics his work equalled that of Gauss, Cauchy or Poincaré.
According to Wigner, "Nobody knows all science, not even von Neumann did. But as for mathematics, he contributed to every part of it except number theory and topology. That is, I think, something unique." Halmos noted that while von Neumann knew lots of mathematics, the most notable gaps were in algebraic topology and number theory; he recalled an incident where von Neumann failed to recognize the topological definition of a torus. Von Neumann admitted to Herman Goldstine that he had no facility at all in topology and he was never comfortable with it, with Goldstine later bringing this up when comparing him to Hermann Weyl, who he thought was deeper and broader.
In his biography of von Neumann, Salomon Bochner wrote that much of von Neumann's works in pure mathematics involved finite and infinite dimensional vector spaces, which at the time, covered much of the total area of mathematics. However he pointed out this still did not cover an important part of the mathematical landscape, in particular, anything that involved geometry "in the global sense", topics such as topology, differential geometry and harmonic integrals, algebraic geometry and other such fields. Von Neumann rarely worked in these fields and, as Bochner saw it, had little affinity for them.
In one of von Neumann's last articles, he lamented that pure mathematicians could no longer attain deep knowledge of even a fraction of the field. In the early 1940s, Ulam had concocted for him a doctoral-style examination to find weaknesses in his knowledge; von Neumann was unable to answer satisfactorily a question each in differential geometry, number theory, and algebra. They concluded that doctoral exams might have "little permanent meaning". However, when Weyl turned down an offer to write a history of mathematics of the 20th century, arguing that no one person could do it, Ulam thought von Neumann could have aspired to do so.
Preferred problem-solving techniques
Ulam remarked that most mathematicians could master one technique that they then used repeatedly, whereas von Neumann had mastered three:
A facility with the symbolic manipulation of linear operators;
An intuitive feeling for the logical structure of any new mathematical theory;
An intuitive feeling for the combinatorial superstructure of new theories.
Although he was commonly described as an analyst, he once classified himself as an algebraist, and his style often displayed a mix of algebraic technique and set-theoretical intuition. He loved obsessive detail and had no issues with excess repetition or overly explicit notation. An example of this was a paper of his on rings of operators, in which he extended the normal functional notation φ(x) to a doubly bracketed form φ((x)); the process was then repeated several times, so that the final results were equations wrapped in many layers of parentheses. The 1936 paper became known to students as "von Neumann's onion" because the equations "needed to be peeled before they could be digested". Overall, although his writings were clear and powerful, they were not clean or elegant. Although powerful technically, his primary concern was more with the clear and viable formation of fundamental issues and questions of science rather than just the solution of mathematical puzzles.
According to Ulam, von Neumann surprised physicists by doing dimensional estimates and algebraic computations in his head with a fluency Ulam likened to blindfold chess. His impression was that von Neumann analyzed physical situations by abstract logical deduction rather than concrete visualization.
Lecture style
Goldstine compared his lectures to being on glass, smooth and lucid. By comparison, Goldstine thought his scientific articles were written in a much harsher manner, and with much less insight. Halmos described his lectures as "dazzling", with his speech clear, rapid, precise and all encompassing. Like Goldstine, he also described how everything seemed "so easy and natural" in lectures but puzzling on later reflection. He was a quick speaker: Banesh Hoffmann found it very difficult to take notes, even in shorthand, and Albert Tucker said that people often had to ask von Neumann questions to slow him down so they could think through the ideas he was presenting. Von Neumann knew about this and was grateful for his audience telling him when he was going too quickly. Although he did spend time preparing for lectures, he rarely used notes, instead jotting down points of what he would discuss and for how long.
Eidetic memory
Von Neumann was also noted for his eidetic memory, particularly of the symbolic kind. Herman Goldstine recalled that von Neumann could reproduce verbatim, even years later, books and articles he had read only once.
Von Neumann was reportedly able to memorize the pages of telephone directories. He entertained friends by asking them to randomly call out page numbers; he then recited the names, addresses and numbers therein. Stanisław Ulam believed that von Neumann's memory was auditory rather than visual.
Mathematical quickness
Von Neumann's mathematical fluency, calculation speed, and general problem-solving ability were widely noted by his peers. Paul Halmos called his speed "awe-inspiring." Lothar Wolfgang Nordheim described him as the "fastest mind I ever met". Enrico Fermi told physicist Herbert L. Anderson: "You know, Herb, Johnny can do calculations in his head ten times as fast as I can! And I can do them ten times as fast as you can, Herb, so you can see how impressive Johnny is!" Edward Teller admitted that he "never could keep up with him", and Israel Halperin described trying to keep up as like riding a "tricycle chasing a racing car."
He had an unusual ability to solve novel problems quickly. George Pólya, whose lectures at ETH Zürich von Neumann attended as a student, said, "Johnny was the only student I was ever afraid of. If in the course of a lecture I stated an unsolved problem, the chances were he'd come to me at the end of the lecture with the complete solution scribbled on a slip of paper." When George Dantzig brought von Neumann an unsolved problem in linear programming "as I would to an ordinary mortal", on which there had been no published literature, he was astonished when von Neumann said "Oh, that!", before offhandedly giving a lecture of over an hour, explaining how to solve the problem using the hitherto unconceived theory of duality.
A story about von Neumann's encounter with the famous fly puzzle has entered mathematical folklore. In this puzzle, two bicycles begin 20 miles apart, and each travels toward the other at 10 miles per hour until they collide; meanwhile, a fly travels continuously back and forth between the bicycles at 15 miles per hour until it is squashed in the collision. The questioner asks how far the fly traveled in total; the "trick" for a quick answer is to realize that the fly's individual transits do not matter, only that it has been traveling at 15 miles per hour for one hour. As Eugene Wigner tells it, Max Born posed the riddle to von Neumann. The other scientists to whom he had posed it had laboriously computed the distance, so when von Neumann was immediately ready with the correct answer of 15 miles, Born observed that he must have guessed the trick. "What trick?" von Neumann replied. "All I did was sum the geometric series."
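For completeness, the two routes to the answer can be made explicit. The shortcut: the bicycles close their 20-mile gap at a combined 20 miles per hour, so they collide after exactly one hour, during which the fly covers $15\ \text{mph}\times 1\ \text{h}=15$ miles. Summing the legs instead: on the first leg the fly and the oncoming bicycle close the 20-mile gap at 25 miles per hour, so that leg lasts $\tfrac{4}{5}$ of an hour and covers 12 miles, after which the bicycles are 4 miles apart and the configuration repeats scaled by a factor of $\tfrac{1}{5}$; the total distance is therefore

$$12\left(1 + \tfrac{1}{5} + \tfrac{1}{25} + \cdots\right) = \frac{12}{1-\tfrac{1}{5}} = 15\ \text{miles}.$$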
Self-doubts
Rota wrote that von Neumann had "deep-seated and recurring self-doubts". John L. Kelley reminisced in 1989 that "Johnny von Neumann has said that he will be forgotten while Kurt Gödel is remembered with Pythagoras, but the rest of us viewed Johnny with awe." Ulam suggests that some of his self-doubts with regard to his own creativity may have come from the fact that he had not discovered several important ideas that others had, even though he was more than capable of doing so, giving the incompleteness theorems and Birkhoff's pointwise ergodic theorem as examples. Von Neumann had a virtuosity in following complicated reasoning and had supreme insights, yet he perhaps felt he did not have the gift for seemingly irrational proofs and theorems or intuitive insights. Ulam describes how, during one of his stays at Princeton while von Neumann was working on rings of operators, continuous geometries and quantum logic, he felt that von Neumann was not convinced of the importance of his work, and that only when finding some ingenious technical trick or new approach did he take some pleasure in it. However, according to Rota, von Neumann still had an "incomparably stronger technique" than his friend, despite describing Ulam as the more creative mathematician.
Legacy
Accolades
Nobel Laureate Hans Bethe said "I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man". Edward Teller observed "von Neumann would carry on a conversation with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us." Peter Lax wrote "Von Neumann was addicted to thinking, and in particular to thinking about mathematics". Eugene Wigner said, "He understood mathematical problems not only in their initial aspect, but in their full complexity." Claude Shannon called him "the smartest person I've ever met", a common opinion. Jacob Bronowski wrote "He was the cleverest man I ever knew, without exception. He was a genius." Due to his wide reaching influence and contributions to many fields, von Neumann is widely considered a polymath.
Wigner described von Neumann's mind as quicker and more acute than that of anyone else he had known.
"It seems fair to say that if the influence of a scientist is interpreted broadly enough to include impact on fields beyond science proper, then John von Neumann was probably the most influential mathematician who ever lived," wrote Miklós Rédei. Peter Lax commented that von Neumann would have won a Nobel Prize in Economics had he lived longer, and that "if there were Nobel Prizes in computer science and mathematics, he would have been honored by these, too." Rota writes that "he was the first to have a vision of the boundless possibilities of computing, and he had the resolve to gather the considerable intellectual and engineering resources that led to the construction of the first large computer" and consequently that "No other mathematician in this century has had as deep and lasting an influence on the course of civilization." He is widely regarded as one of the greatest and most influential mathematicians and scientists of the 20th century.
Neurophysiologist Leon Harmon described him in a similar manner, calling him the only "true genius" he had ever met: "von Neumann's mind was all-encompassing. He could solve problems in any domain. ... And his mind was always working, always restless." While consulting for non-academic projects von Neumann's combination of outstanding scientific ability and practicality gave him a high credibility with military officers, engineers, and industrialists that no other scientist could match. In nuclear missilery he was considered "the clearly dominant advisory figure" according to Herbert York. Economist Nicholas Kaldor said he was "unquestionably the nearest thing to a genius I have ever encountered." Likewise, Paul Samuelson wrote, "We economists are grateful for von Neumann's genius. It is not for us to calculate whether he was a Gauss, or a Poincaré, or a Hilbert. He was the incomparable Johnny von Neumann. He darted briefly into our domain and it has never been the same since."
Honors and awards
Events and awards named in recognition of von Neumann include the annual John von Neumann Theory Prize of the Institute for Operations Research and the Management Sciences, IEEE John von Neumann Medal, and the John von Neumann Prize of the Society for Industrial and Applied Mathematics. Both the crater von Neumann on the Moon and the asteroid 22824 von Neumann are named in his honor.
Von Neumann received awards including the Medal for Merit in 1947, the Medal of Freedom in 1956, and the Enrico Fermi Award also in 1956. He was elected a member of multiple honorary societies, including the American Academy of Arts and Sciences and the National Academy of Sciences, and he held eight honorary doctorates. On May 4, 2005, the United States Postal Service issued the American Scientists commemorative postage stamp series, designed by artist Victor Stabin. The scientists depicted were von Neumann, Barbara McClintock, Josiah Willard Gibbs, and Richard Feynman.
John von Neumann University was established in Kecskemét, Hungary, in 2016, as a successor to Kecskemét College.
Selected works
Von Neumann's first published paper was On the position of zeroes of certain minimum polynomials, co-authored with Michael Fekete and published when von Neumann was 18. At 19, his solo paper On the introduction of transfinite numbers was published. He expanded his second solo paper, An axiomatization of set theory, to create his PhD thesis. His first book, Mathematical Foundations of Quantum Mechanics, was published in 1932. Following this, von Neumann switched from publishing in German to publishing in English, and his publications became more selective and expanded beyond pure mathematics. His 1942 Theory of Detonation Waves contributed to military research, his work on computing began with the unpublished 1946 On the principles of large scale computing machines, and his publications on weather prediction began with the 1950 Numerical integration of the barotropic vorticity equation. Alongside his later papers were informal essays targeted at colleagues and the general public, such as his 1947 The Mathematician, described as a "farewell to pure mathematics", and his 1955 Can we survive technology?, which considered a bleak future including nuclear warfare and deliberate climate change. His complete works have been compiled into a six-volume set.
See also
List of pioneers in computer science
Teapot Committee
The MANIAC, 2023 book about von Neumann
Adventures of a Mathematician, a biopic about Stanisław Ulam that also features John von Neumann.
Notes
References
Further reading
Books
Popular periodicals
Journals
External links
A more or less complete bibliography of publications of John von Neumann by Nelson H. F. Beebe
von Neumann's profile at Google Scholar
Oral History Project - The Princeton Mathematics Community in the 1930s, contains many interviews that describe contact and anecdotes of von Neumann and others at the Princeton University and Institute for Advanced Study community.
Oral history interviews (from the Charles Babbage Institute, University of Minnesota) with: Alice R. Burks and Arthur W. Burks; Eugene P. Wigner; and Nicholas C. Metropolis.
zbMATH profile
Query for "von neumann" on the digital repository of the Institute for Advanced Study.
Von Neumann vs. Dirac on Quantum Theory and Mathematical Rigor – from Stanford Encyclopedia of Philosophy
Quantum Logic and Probability Theory - from Stanford Encyclopedia of Philosophy
FBI files on John von Neumann released via FOI
Biographical video by David Brailsford (John Dunford Professor Emeritus of computer science at the University of Nottingham)
John von Neumann: Prophet of the 21st Century 2013 Arte documentary on John von Neumann and his influence in the modern world (in German and French with English subtitles).
John von Neumann - A Documentary 1966 detailed documentary by the Mathematical Association of America containing remarks by several of his colleagues including Ulam, Wigner, Halmos, Morgenstern, Bethe, Goldstine, Strauss and Teller.
1903 births
1957 deaths
20th-century American mathematicians
20th-century American physicists
Algebraists
American anti-communists
American computer scientists
American nuclear physicists
American operations researchers
American people of Hungarian-Jewish descent
American Roman Catholics
Hungarian physicists
American systems scientists
Mathematicians from Austria-Hungary
Ballistics experts
Burials at Princeton Cemetery
Deaths from cancer in Washington, D.C.
Carl-Gustaf Rossby Research Medal recipients
Cellular automatists
Computer designers
Converts to Roman Catholicism from Judaism
Cyberneticists
Elected Members of the International Statistical Institute
Enrico Fermi Award recipients
ETH Zurich alumni
Fasori Gimnázium alumni
Fellows of the American Physical Society
Fellows of the Econometric Society
Fluid dynamicists
Functional analysts
Game theorists
Hungarian anti-communists
Hungarian computer scientists
Hungarian emigrants to the United States
20th-century Hungarian inventors
20th-century Hungarian Jews
20th-century Hungarian mathematicians
20th-century Hungarian physicists
Hungarian nobility
Hungarian nuclear physicists
Hungarian Roman Catholics
Institute for Advanced Study faculty
Jewish anti-communists
Jewish American physicists
Lattice theorists
Manhattan Project people
Mathematical economists
Mathematical physicists
Mathematicians from Budapest
Measure theorists
Medal for Merit recipients
Members of the American Philosophical Society
Members of the Lincean Academy
Members of the Royal Netherlands Academy of Arts and Sciences
Members of the United States National Academy of Sciences
Mental calculators
Monte Carlo methodologists
Naturalized citizens of the United States
Numerical analysts
Oak Ridge National Laboratory people
Operations researchers
Operator theorists
People from Pest, Hungary
Presidents of the American Mathematical Society
Princeton University faculty
Probability theorists
Quantum physicists
RAND Corporation people
Recipients of the Medal of Freedom
Researchers of artificial life
American set theorists
Theoretical physicists
Academic staff of the University of Göttingen
Yiddish-speaking people
Academic staff of the University of Hamburg
Recipients of the Navy Distinguished Civilian Service Award | John von Neumann | Physics,Chemistry,Mathematics | 18,263 |
272,041 | https://en.wikipedia.org/wiki/Intentional%20radiator | An intentional radiator is any device that is deliberately designed to produce radio waves.
Radio transmitters of all kinds, including garage door openers, cordless telephones, cellular phones, wireless video senders and wireless microphones, among many others, fall into this category.
In the United States, intentional radiators are regulated under 47 CFR Part 15, Subpart C.
See also
Spurious emission
Unintentional radiator (incidental radiator)
Radio electronics
Garage door openers | Intentional radiator | Technology,Engineering | 102 |
24,406,012 | https://en.wikipedia.org/wiki/C12H16N2O2 | The molecular formula C12H16N2O2 (molar mass: 220.27 g/mol) may refer to:
Eltoprazine, a serenic, or antiaggressive drug
Methylenedioxybenzylpiperazine, a psychoactive drug | C12H16N2O2 | Chemistry | 78 |
34,239,730 | https://en.wikipedia.org/wiki/Peter%20Liss | Peter Simon Liss CBE FRS (born 27 October 1942) is a British environmental scientist, and professorial fellow, at the University of East Anglia.
Background
Liss studied at Durham University, earning a BSc. He then completed a PhD at the University of Wales.
Career
His research group is a part of the Laboratory for Global Marine and Atmospheric Chemistry (LGMAC).
He is a member of the Solar Radiation Working Group.
He was awarded the Challenger Society Medal in 2000 and the John Jeyes Medal of the Royal Society of Chemistry in 2003/04.
References
External links
https://royalsociety.org/people/peter-liss-11824/
https://web.archive.org/web/20120415140750/http://www.vmine.net/scienceinparliament/9%20Feb%20Marine%20Engineering%20meeting%20notice%20revised270110.pdf
http://www.nio.org/index/option/com_newsdisplay/task/view/tid/4/sid/23/nid/77
1942 births
Living people
Alumni of University College, Durham
Academics of the University of East Anglia
Fellows of the Royal Society
Commanders of the Order of the British Empire
British scientists
Environmental scientists | Peter Liss | Environmental_science | 279 |
17,872,875 | https://en.wikipedia.org/wiki/Spice%20%28oceanography%29 | Spice, spiciness, or spicity, symbol τ, is a term in oceanography referring to variations in the temperature and salinity of seawater over space or time, whose combined effects leave the water's density unchanged. For a given spice, any change in temperature is offset by a change in salinity to maintain unchanged density. An increase in temperature decreases density, but an increase in salinity increases density. Such density-compensated thermohaline variability is ubiquitous in the upper ocean. Warmer, saltier water is more spicy while cooler, less salty water is more minty. For a density ratio of 1, all the thermohaline variability is spice, and there are no density fluctuations.
Mathematical description
The density of seawater controls much of the movement of water, or the thermohaline flow, in the ocean, and is primarily determined by the temperature and salinity of that water. Changes in the two main parameters, potential temperature Θ and salinity S, enter the linearized equation of state of the ocean (TEOS-10) multiplied by the thermal expansion coefficient and the haline contraction coefficient, respectively, so that each term is directly proportional to a change in density. This similarity of form is considered relevant for understanding the consequences of seawater mixing.
The density does not change along an isopycnal, but mixing can still change the temperature and salinity along it. Spiciness is therefore introduced as a variable whose changes combine the thermal expansion and haline contraction terms with the same sign, so that integrating it along an isopycnal quantifies the density-compensated variability, as in the linearized sketch below.
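A minimal linearized sketch of these relations (the exact definitions used in practice, such as those implemented in TEOS-10, differ in detail) is

$$\frac{d\rho}{\rho_0} = -\alpha\, d\Theta + \beta\, dS_A, \qquad \frac{d\tau}{\rho_0} = \alpha\, d\Theta + \beta\, dS_A,$$

where $\alpha$ is the thermal expansion coefficient, $\beta$ the haline contraction coefficient, $\rho_0$ a reference density, $\Theta$ the potential (or Conservative) temperature and $S_A$ the (Absolute) salinity. Along an isopycnal $d\rho = 0$, so any remaining thermohaline variability appears entirely in $d\tau$.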
Equivalently, spiciness can be described as a variable whose sensitivity to salinity matches that of density, while its sensitivity to temperature is equal in magnitude but opposite in sign.
The isopycnal gradient of spiciness should equal the isopycnal gradients of temperature and salinity, each multiplied by the partial derivative of density with respect to the other variable.
Another mathematical implication of a spiciness influence manifests itself in a Θ–S (temperature–salinity) diagram, where the negative slope of the isopleths equals the ratio between the temperature and salinity derivatives of the spiciness.
Applications
A purpose for introducing spiciness is to decrease the number of state variables needed; the density at constant depth is a function of potential temperature and salinity, and instead of carrying both, spiciness can be used. If the goal is only to quantify the variation of water parcels along isopycnals, the variation in absolute salinity or temperature can be used instead, because it gives the same information with the same number of variables.
Another purpose is to examine how the stability ratio varies vertically in a water column. The stability ratio is a number comparing the contribution of temperature changes with that of salinity changes in a vertical profile, which yields relevant information about the stability of the water column; a common expression is given below.
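In the notation of the linearized sketch above, the stability (density) ratio is commonly written as

$$R_\rho = \frac{\alpha\,\partial\Theta/\partial z}{\beta\,\partial S_A/\partial z},$$

the ratio of the temperature and salinity contributions to the vertical density gradient.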
The vertical variation of this number is often shown in a spiciness–potential density diagram or profile plot, where the angle of the curve indicates the stability.
Computation
The spiciness can be calculated in several programming languages with the Gibbs SeaWater (GSW) toolbox. It is used to derive thermodynamic seawater properties and is adopted by the Intergovernmental Oceanographic Commission (IOC), the International Association for the Physical Sciences of the Oceans (IAPSO) and the Scientific Committee on Oceanic Research (SCOR). The toolbox implements spiciness as gsw_spiciness0(), gsw_spiciness1() and gsw_spiciness2(), referenced to 0, 1000 and 2000 dbar respectively. These isobars are chosen because they correspond to commonly used potential density surfaces. Areas with constant density but different spiciness have a net exchange of heat and salt due to diffusion.
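As an example, with the Python wrapper of the toolbox (the gsw package; its availability and the numerical values below are assumptions of this sketch), the three referenced spiciness variables of a single water parcel can be computed as follows:

```python
import gsw  # Gibbs SeaWater (GSW) Oceanographic Toolbox of TEOS-10

# Example parcel: Absolute Salinity [g/kg] and Conservative Temperature [deg C].
SA, CT = 35.2, 15.0

# Spiciness referenced to 0, 1000 and 2000 dbar, mirroring the functions named above.
tau_0 = gsw.spiciness0(SA, CT)
tau_1 = gsw.spiciness1(SA, CT)
tau_2 = gsw.spiciness2(SA, CT)
print(tau_0, tau_1, tau_2)  # spiciness values, in density units (kg/m^3)
```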
Disagreements
The exact definition of spiciness is debated, specifically whether spiciness should be orthogonal to density and which scaling factors should be used for potential temperature and salinity. McDougall claims that orthogonality should not be imposed because:
There is no physical reason to impose orthogonality.
Imposing orthogonality 'necessarily depends on an arbitrary scaling factor of the salinity and temperature axes'; in other words, spiciness would have different meanings for different (chosen) scaling factors.
The meaning of spiciness changes with density. As a result, spiciness may only be useful over small vertical extensions in the surface layer.
McDougall's approach has been adopted by the Intergovernmental Oceanographic Commission (IOC), the International Association for the Physical Sciences of the Oceans (IAPSO) and the Scientific Committee on Oceanic Research (SCOR) through its implementation of spiciness in TEOS-10.
Huang claims that the orthogonal system is superior to the non-orthogonal system because the coordinates can be regarded as independent and distances between points can be calculated more easily.
McDougall recommended that spiciness not be used at all. Instead, he recommends using the variation of salinity to differentiate between water parcels on an isopycnal, and the stability ratio to assess the stability of vertical water columns.
References
Oceanography
Oceanographical terminology
Thermodynamics
Fluid dynamics | Spice (oceanography) | Physics,Chemistry,Mathematics,Engineering,Environmental_science | 1,075 |
781,050 | https://en.wikipedia.org/wiki/Angeline%20Stickney | Chloe Angeline Stickney Hall (November 1, 1830 – July 3, 1892) was an American mathematician and suffragist. She was married to astronomer Asaph Hall and collaborated with her husband in searching for the moons of Mars, performing mathematical calculations on the data he collected.
Early life
Angeline Stickney was born to Theophilus Stickney and Electa Cook on November 1, 1830. In 1847, she took three terms of study funded by her cousin, Harriette Downs, at Rodman Union Seminary. Stickney was able to attend New-York Central College with help from her sister Ruth and by teaching at the college. She majored in science and mathematics, completed coursework in calculus and mathematical astronomy, and graduated with the college's first class, in 1855. New-York Central College was a progressive school where students of modest means, including women and free African Americans, could earn a college degree. It was here that she became passionate about the causes of women's suffrage and the abolition of slavery.
Angeline Stickney and Asaph Hall met at Central College. Stickney was two years ahead of Hall. She was his instructor in geometry and German. During their days together as teacher and student, Hall and his classmates would devise questions and problems that they were convinced Stickney could not solve, yet she reportedly never failed to solve them.
Marriage and astronomy
Stickney and Hall married in Elkhorn, Wisconsin, on March 31, 1856. As was common at the time, Stickney ended her formal academic career after she married. Immediately after the wedding, the couple moved to Ann Arbor, Michigan, so that Hall could continue his education at the University of Michigan. Three months later, they moved to Shalersville, Ohio. It was Stickney who communicated with her husband's employer, Captain Gillis, and successfully suggested that he should be made a professor at the Naval Observatory.
Stickney encouraged Hall to continue his search for satellites of Mars when he was ready to give up, and he successfully discovered the moons Phobos and Deimos. However, when she asked for payment equal to a man's salary for her calculations, her husband refused, and Angeline then discontinued her work.
Personal life
Stickney Hall home-schooled all four of her children, all of whom later attended Harvard University. Her third son, Angelo Hall, a Unitarian minister, published a biography of his mother in 1908, titled An Astronomer's Wife. Her oldest son, Asaph Hall, Jr., was born on October 6, 1859, and served as director of the Detroit Observatory from 1892 to 1905. Her other sons were named Samuel (second son) and Percival (fourth son); Percival Hall (1872–1953) was the second president of Gallaudet University from 1910 to 1946 (he himself was not deaf).
She died at North Andover, Massachusetts, at age 61. The largest crater on Phobos, Stickney Crater, is named after her.
Further reading
References
1830 births
1892 deaths
People associated with astronomy
American suffragists
American abolitionists
19th-century American mathematicians
19th-century American women mathematicians
People from Georgetown (Washington, D.C.)
New York Central College faculty
New York Central College alumni
American women civil rights activists | Angeline Stickney | Astronomy | 663 |
10,500,944 | https://en.wikipedia.org/wiki/Stochastic%20game | In game theory, a stochastic game (or Markov game), introduced by Lloyd Shapley in the early 1950s, is a repeated game with probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. The players select actions and each player receives a payoff that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.
Stochastic games generalize Markov decision processes to multiple interacting decision makers, as well as strategic-form games to dynamic situations in which the environment changes in response to the players’ choices.
Two-player games
Stochastic two-player games on directed graphs are widely used for modeling and analysis of discrete systems operating in an unknown (adversarial) environment. Possible configurations of a system and its environment are represented as vertices, and the transitions correspond to actions of the system, its environment, or "nature". A run of the system then corresponds to an infinite path in the graph. Thus, a system and its environment can be seen as two players with antagonistic objectives, where one player (the system) aims at maximizing the probability of "good" runs, while the other player (the environment) aims at the opposite.
In many cases, there exists an equilibrium value of this probability, but optimal strategies for both players may not exist.
Theory
The ingredients of a stochastic game are: a finite set of players $I$; a state space $S$ (either a finite set or a measurable space $(S,\mathcal S)$); for each player $i\in I$, an action set $A^i$ (either a finite set or a measurable space $(A^i,\mathcal A^i)$); a transition probability $P$ from $S\times A$, where $A=\times_{i\in I}A^i$ is the set of action profiles, to $S$, where $P(T\mid s,a)$ is the probability that the next state is in $T$ given the current state $s$ and the current action profile $a$; and a payoff function $g$ from $S\times A$ to $\mathbb R^I$, where the $i$-th coordinate of $g$, $g^i$, is the payoff to player $i$ as a function of the state $s$ and the action profile $a$.
The game starts at some initial state $s_1$. At stage $t$, players first observe $s_t$, then simultaneously choose actions $a^i_t\in A^i$, then observe the action profile $a_t=(a^i_t)_{i\in I}$, and then nature selects $s_{t+1}$ according to the probability $P(\cdot\mid s_t,a_t)$. A play of the stochastic game, $s_1,a_1,\ldots,s_t,a_t,\ldots$, defines a stream of payoffs $g_1,g_2,\ldots$, where $g_t=g(s_t,a_t)$.
The discounted game $\Gamma_\lambda$ with discount factor $\lambda$ ($0<\lambda\le 1$) is the game where the payoff to player $i$ is $\lambda\sum_{t=1}^{\infty}(1-\lambda)^{t-1}g^i_t$. The $n$-stage game $\Gamma_n$ is the game where the payoff to player $i$ is $\bar g^i_n:=\frac{1}{n}\sum_{t=1}^{n}g^i_t$.
The value $v_n(s_1)$, respectively $v_\lambda(s_1)$, of a two-person zero-sum stochastic game $\Gamma_n$, respectively $\Gamma_\lambda$, with finitely many states and actions exists, and Truman Bewley and Elon Kohlberg (1976) proved that $v_n(s_1)$ converges to a limit as $n$ goes to infinity and that $v_\lambda(s_1)$ converges to the same limit as $\lambda$ goes to $0$.
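For the discounted zero-sum case with finitely many states and actions, the normalized value $v_\lambda$ is the unique fixed point of the Shapley operator, which replaces the continuation of the game by its current value and solves a one-shot matrix game at every state. The following Python sketch (the data layout, function names and the use of scipy's linear-programming routine are choices of the example, not part of the theory) iterates this operator:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game M for the maximizing row player,
    via the standard linear-programming formulation."""
    m, n = M.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # maximize the game value v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])     # v - sum_i x_i M[i, j] <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                             # the mixed strategy x sums to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v unrestricted
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def shapley_value_iteration(g, P, lam, iters=200):
    """Approximate the normalized discounted values v_lambda(s) of a two-player
    zero-sum stochastic game by iterating the Shapley operator.

    g[s]        -- stage payoff matrix at state s (rows: player 1, columns: player 2)
    P[s][a][b]  -- probability vector over next states given state s and actions (a, b)
    lam         -- discount factor lambda of the payoff lam * sum_t (1 - lam)**(t-1) * g_t
    """
    num_states = len(g)
    v = np.zeros(num_states)
    for _ in range(iters):
        v_new = np.empty(num_states)
        for s in range(num_states):
            m, n = g[s].shape
            # One-shot auxiliary game: stage payoff plus discounted continuation values.
            Q = np.array([[lam * g[s][a, b] + (1.0 - lam) * P[s][a][b] @ v
                           for b in range(n)] for a in range(m)])
            v_new[s] = matrix_game_value(Q)
        v = v_new
    return v
```

Because the Shapley operator is a contraction with modulus $1-\lambda$, the iteration converges geometrically to $v_\lambda$.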
The "undiscounted" game is the game where the payoff to player is the "limit" of the averages of the stage payoffs. Some precautions are needed in defining the value of a two-person zero-sum and in defining equilibrium payoffs of a non-zero-sum . The uniform value of a two-person zero-sum stochastic game exists if for every there is a positive integer and a strategy pair of player 1 and of player 2 such that for every and and every the expectation of with respect to the probability on plays defined by and is at least , and the expectation of with respect to the probability on plays defined by and is at most . Jean-François Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a uniform value.
If there is a finite number of players and the action sets and the set of states are finite, then a stochastic game with a finite number of stages always has a Nash equilibrium. The same is true for a game with infinitely many stages if the total payoff is the discounted sum.
The non-zero-sum stochastic game $\Gamma_\infty$ has a uniform equilibrium payoff $v_\infty=(v^i_\infty)_{i\in I}$ if for every $\varepsilon>0$ there is a positive integer $N$ and a strategy profile $\sigma$ such that for every unilateral deviation by a player $i$, i.e., a strategy profile $\tau$ with $\tau^j=\sigma^j$ for all $j\ne i$, and every $n\ge N$, the expectation of $\bar g^i_n$ with respect to the probability on plays defined by $\sigma$ is at least $v^i_\infty-\varepsilon$, and the expectation of $\bar g^i_n$ with respect to the probability on plays defined by $\tau$ is at most $v^i_\infty+\varepsilon$. Nicolas Vieille has shown that all two-person stochastic games with finite state and action spaces have a uniform equilibrium payoff.
The non-zero-sum stochastic game $\Gamma_\infty$ has a limiting-average equilibrium payoff $v_\infty=(v^i_\infty)_{i\in I}$ if for every $\varepsilon>0$ there is a strategy profile $\sigma$ such that for every unilateral deviation by a player $i$, the expectation of the limit inferior of the averages of the stage payoffs with respect to the probability on plays defined by $\sigma$ is at least $v^i_\infty-\varepsilon$, and the expectation of the limit superior of the averages of the stage payoffs with respect to the probability on plays defined by $\tau$ is at most $v^i_\infty+\varepsilon$. Jean-François Mertens and Abraham Neyman (1981) proved that every two-person zero-sum stochastic game with finitely many states and actions has a limiting-average value, and Nicolas Vieille has shown that all two-person stochastic games with finite state and action spaces have a limiting-average equilibrium payoff. In particular, these results imply that these games have a value and an approximate equilibrium payoff, called the liminf-average (respectively, the limsup-average) equilibrium payoff, when the total payoff is the limit inferior (or the limit superior) of the averages of the stage payoffs.
Whether every stochastic game with finitely many players, states, and actions, has a uniform equilibrium payoff, or a limiting-average equilibrium payoff, or even a liminf-average equilibrium payoff, is a challenging open question.
A Markov perfect equilibrium is a refinement of the concept of sub-game perfect Nash equilibrium to stochastic games.
Stochastic games have been combined with Bayesian games to model uncertainty over player strategies. The resulting stochastic Bayesian game model is solved via a recursive combination of the Bayesian Nash equilibrium equation and the Bellman optimality equation.
Applications
Stochastic games have applications in economics, evolutionary biology and computer networks. They are generalizations of repeated games which correspond to the special case where there is only one state.
See also
Stochastic process
Notes
Further reading
(suitable for undergraduates; main results, no proofs)
External links
Lecture on Stochastic Two-Player Games by Antonin Kucera
Game theory game classes
Mathematical and quantitative methods (economics) | Stochastic game | Mathematics | 1,455 |
4,392,777 | https://en.wikipedia.org/wiki/Doob%20martingale | In the mathematical theory of probability, a Doob martingale (named after Joseph L. Doob, also known as a Levy martingale) is a stochastic process that approximates a given random variable and has the martingale property with respect to the given filtration. It may be thought of as the evolving sequence of best approximations to the random variable based on information accumulated up to a certain time.
When analyzing sums, random walks, or other additive functions of independent random variables, one can often apply the central limit theorem, law of large numbers, Chernoff's inequality, Chebyshev's inequality or similar tools. When analyzing similar objects where the differences are not independent, the main tools are martingales and Azuma's inequality.
Definition
Let $Y$ be any random variable with $\mathbb{E}[|Y|] < \infty$. Suppose $\{\mathcal{F}_0, \mathcal{F}_1, \dots\}$ is a filtration, i.e. $\mathcal{F}_s \subset \mathcal{F}_t$ when $s < t$. Define
$$Z_t = \mathbb{E}[Y \mid \mathcal{F}_t];$$
then $\{Z_0, Z_1, \dots\}$ is a martingale, namely the Doob martingale, with respect to the filtration $\{\mathcal{F}_0, \mathcal{F}_1, \dots\}$.
To see this, note that
$\mathbb{E}[|Z_t|] = \mathbb{E}[|\mathbb{E}[Y \mid \mathcal{F}_t]|] \leq \mathbb{E}[\mathbb{E}[|Y| \mid \mathcal{F}_t]] = \mathbb{E}[|Y|] < \infty$;
$\mathbb{E}[Z_t \mid \mathcal{F}_s] = \mathbb{E}[\mathbb{E}[Y \mid \mathcal{F}_t] \mid \mathcal{F}_s] = \mathbb{E}[Y \mid \mathcal{F}_s] = Z_s$ for all $s < t$ (by the tower property of conditional expectation).
In particular, for any sequence of random variables $\{X_1, X_2, \dots, X_n\}$ on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and any function $f$ such that $\mathbb{E}[|f(X_1, X_2, \dots, X_n)|] < \infty$, one could choose
$$Y := f(X_1, X_2, \dots, X_n)$$
and the filtration $\{\mathcal{F}_0, \mathcal{F}_1, \dots, \mathcal{F}_n\}$ such that
$$\mathcal{F}_0 := \{\emptyset, \Omega\}, \qquad \mathcal{F}_t := \sigma(X_1, X_2, \dots, X_t),$$
i.e. the $\sigma$-algebra generated by $X_1, X_2, \dots, X_t$. Then, by the definition of the Doob martingale, the process $\{Z_0, Z_1, \dots, Z_n\}$ where
$$Z_t := \mathbb{E}[f(X_1, X_2, \dots, X_n) \mid \mathcal{F}_t]$$
forms a Doob martingale. Note that $Z_0 = \mathbb{E}[f(X_1, X_2, \dots, X_n)]$ and $Z_n = f(X_1, X_2, \dots, X_n)$. This martingale can be used to prove McDiarmid's inequality.
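To make the construction concrete, the sketch below approximates the path $Z_0, Z_1, \dots, Z_n$ by Monte Carlo for $f$ equal to the mean of $n$ fair coin flips; the choice of function, the number of flips, and all numbers are illustrative assumptions, not part of the article.

```python
# Illustrative sketch: the Doob martingale Z_i = E[f(X_1..X_n) | X_1..X_i]
# for f = mean of n fair coin flips, with the conditional expectations
# approximated by resampling the unrevealed tail (Monte Carlo).
import numpy as np

rng = np.random.default_rng(0)
n = 10
x = rng.integers(0, 2, size=n)                 # one realized sequence of flips

def f(xs):
    return xs.mean()

def cond_exp(prefix, n, samples=20000):
    """Estimate E[f(X_1..X_n) | X_1..X_i = prefix] by sampling the tail."""
    i = len(prefix)
    tails = rng.integers(0, 2, size=(samples, n - i))
    full = np.hstack([np.tile(prefix, (samples, 1)), tails])
    values = np.apply_along_axis(f, 1, full)   # f applied to each completed row
    return values.mean()                       # Monte Carlo conditional expectation

Z = [cond_exp(x[:i], n) for i in range(n + 1)]
print("approximate martingale path Z_0..Z_n:", np.round(Z, 3))
print("Z_0 should be near E[f] = 0.5; Z_n equals f(x) =", f(x))
```

Each revealed coordinate updates the best estimate of $f(X_1, \dots, X_n)$; these estimates form a martingale whose increments are exactly the quantities controlled in the McDiarmid argument.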
McDiarmid's inequality
The Doob martingale was introduced by Joseph L. Doob in 1940 to establish concentration inequalities such as McDiarmid's inequality, which applies to functions that satisfy a bounded differences property (defined below) when they are evaluated on random independent function arguments.
A function $f: \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \rightarrow \mathbb{R}$ satisfies the bounded differences property if substituting the value of the $i$-th coordinate $x_i$ changes the value of $f$ by at most $c_i$. More formally, if there are constants $c_1, c_2, \dots, c_n$ such that for all $i$, and all $x_1 \in \mathcal{X}_1, x_2 \in \mathcal{X}_2, \ldots, x_n \in \mathcal{X}_n$,
$$\sup_{x_i' \in \mathcal{X}_i} \left| f(x_1, \dots, x_{i-1}, x_i, x_{i+1}, \dots, x_n) - f(x_1, \dots, x_{i-1}, x_i', x_{i+1}, \dots, x_n) \right| \leq c_i.$$
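For completeness, the standard two-sided form of McDiarmid's inequality for independent $X_1, \dots, X_n$ and such an $f$ is stated below; the exact constants follow the usual textbook formulation rather than any statement made in this article.

```latex
% McDiarmid's inequality (standard statement, added for reference):
% independent X_1,...,X_n and f with bounded differences c_1,...,c_n.
\[
  \Pr\!\left( \bigl| f(X_1,\dots,X_n) - \mathbb{E}\,f(X_1,\dots,X_n) \bigr| \ge \varepsilon \right)
  \;\le\; 2\exp\!\left( \frac{-2\varepsilon^{2}}{\sum_{i=1}^{n} c_i^{2}} \right).
\]
```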
See also
References
Probabilistic inequalities
Statistical inequalities
Martingale theory | Doob martingale | Mathematics | 416 |
64,178,230 | https://en.wikipedia.org/wiki/Patrick%20H.%20Diamond | Patrick Henry Diamond is an American theoretical plasma physicist. He is currently a professor at the University of California, San Diego, and a director of the Fusion Theory Institute at the National Fusion Research Institute in Daejeon, South Korea, where the KSTAR Tokamak is operated.
In 2011, Diamond was jointly awarded the Hannes Alfvén Prize with Akira Hasegawa and Kunioki Mima for important contributions to the theory of turbulent transport in plasmas. In addition to applications in controlled nuclear fusion, he also specializes in astrophysical plasmas.
Early life and career
Diamond was raised in the Bay Ridge section of Brooklyn, NY. He graduated from St. Anselm's Elementary School and Xavierian High School, both located in Bay Ridge, Brooklyn, NY.
Diamond received his Ph.D. in 1979 from the Massachusetts Institute of Technology.
Honors and awards
In 1986, he was inducted a fellow of the American Physical Society. In 1988, he became a Sloan Research Fellow.
In 2011, Diamond was awarded the Hannes Alfvén Prize by the European Physical Society for "laying the foundations of modern numerical transport simulations and key contributions on self-generated zonal flows and flow shear decorrelation mechanisms which form the basis of modern turbulence in plasmas".
Publications
References
American plasma physicists
Fellows of the American Physical Society
Living people
Plasma physicists
Massachusetts Institute of Technology alumni
Sloan Research Fellows
Year of birth missing (living people) | Patrick H. Diamond | Physics | 287 |
18,037,563 | https://en.wikipedia.org/wiki/Underwater%20bridge |
An underwater bridge is a military structure that was employed during World War II and the Korean War.
Underwater bridges, typically constructed of logs, sand and dirt just beneath the surface of the water in a river or similar narrow body of water, allow heavier vehicles to cross the river driving through only shallow water. Air reconnaissance finds such "bridges" difficult to spot; even if seen, air strikes have difficulty in destroying them as the water insulates the structure from blasts.
Soviet troops, notably Georgy Zhukov, used the concept during the Khalkhin Gol campaign (1939) and in World War II (1941-1945). North Korean troops also used such structures during the Korean War (1950-1953), particularly to cross the Naktong River during the Battle of Pusan Perimeter of August-September 1950. During the Vietnam War, such bridges were used along the Ho Chi Minh trail.
See also
Low water crossing
Ford
References
Bibliography
External links
Lone Sentry article about underwater bridges
Military bridging equipment
Military tactics
World War II military equipment of the Soviet Union
Military equipment of North Korea | Underwater bridge | Engineering | 218 |
39,168,618 | https://en.wikipedia.org/wiki/List%20of%20scale-model%20industry%20people | This list contains the names of significant contributors to the history and development of the scale-model and model-kit industry, including engineers, artists, designers, draftsmen, tool-makers, executives, historians and promoters.
This list does not include people primarily notable for the competitive operation of a scale-model vehicle, unless they have a notable career as designers or executives or outside of the hobby industry such as car designers.
A
Steve Adams: founder of Adams Models
Harold Alden: early aircraft modeler and industry promoter
John Allen: influential model railroader
John Amendola: kit-box artist
Mel Anderson: gas engine designer (for flying scale aircraft)
John Andrews: artist, kit designer and marketer for Hawk
Ray Arden: inventor of the Glo-Plug
William Atwood: engine designer (for flying scale aircraft)
B
Charlie Bauer: flying aircraft kit designer
Adrien Bertin: R/C car glowplug engine designer
Edward Beshar: early model designer and builder
Jack Besser: co-founder of Monogram Models
Maxwell Basset: gas-powered model pioneer
Pieter Bervoets: co-founder of Serpent
Dick Branstner: co-founder of MPC, kit designer, creator of the 'Color Me Gone' dragsters
William Brown IV: created first successful gas engine for mass-marketed flying aircraft
C
Bill Campbell: illustrator, kit-box art designer/painter; creator of the Weird-Ohs and Silly Surfers series for Hawk
Tom Carter: industry executive
Bill Coulter: writer; builder; kit-developer
James "Jim" Petit Cox (1910-1993): illustrator, kit-box art designer/painter. Notably associated with Aurora
John Cuomo (1901-1971): first salesman for Aurora
Bill Cushenbery: custom kit designer, notably associated with AMT
Roger Curtis: Co-founder of Associated Electrics and R/C car designer (RC10, RC12)
D
Tom Daniel: kit-designer of custom and caricature subjects for Monogram Models
Bruce Devlin: Laboratory Manager of wind tunnel testing and engineering flow modeling company Airflow Sciences Corporation
E
Billy Easton: R/C car designer for Serpent (and previously Team Durango)
Scott Eidson: kit-box artist
Fred Ertl, Sr.: founder of the Ertl Company
F
Jacque Fresco (1916–): co-founder of Revell
G
West Gallogly: founder of AMT
Lorenzo Ghiglieri: kit-box artist
Joseph E. Giammarino (1916–1992): co-founder of Aurora Plastics Corporation
"Dirty Donny" Gillies: kit-box artist
Lewis H. Glaser (1917-1972): co-founder of Revell
Walt Good: considered "the father of remote-control aircraft modeling"
Carl Goldberg: model aircraft designer and industry executive
Kevin Gowland, John Gowland and Jack Gowland: co-founders of Gowland and Gowland; the company's Highway Pioneers line of early car model kits was marketed by Revell in the early '50s, and was instrumental in the early growth of the scale model hobby in the US.
Paul K. Guillows: founder of GHQ Model Airplane Company (later known as Guillows)
Leonardo Garofali: co-founder of SG Racing Cars and Blitz Model Technica
Jaures Garofali: founder of Super Tigre Saturno micromecanica
H
Scott T. Hards: entrepreneur, founder of Hobby Link Japan
John Hanley: tool and die maker; founder of Jo-Han
Suguro Hasegawa: founder of Hasegawa Corporation
Roger Harney: industry executive
Don Holthaus: founder of Modelhaus
Juraj Hudy: founder of XRAY and Hudy
Gene Husting (1927-2014): executive of Associated Electrics
I
Takashi Ishii, founder/owner of Studio 27/Gilles Company
J
Dean Jeffries: kit designer, auto customizer, notably associated with MPC
Bob Johnson: industry executive
K
Al Kalmbach: founder of Model Railroader magazine and Kalmbach Publishing, early industry promoter
Yuichi Kanai: radio-controlled car designer, significantly for Kyosho (Inferno)
Shigeru Kawada: radio-controlled car designer and founder-owner of Kawada Model
Jim Keeler: industry executive
Levon Kemalyan (1907–1976): manufacturer, founder of Kemtron
Richard Kishady: kit-box artist
Akira Kogawa: radio-controlled car designer, significantly for Kyosho (Optima, Scorpion) and Hobby Products International (Baja 5B and 5T)
Shigeru Komatsuzaki: science-fiction illustrator; kit-box artist
Jo Kotula: aviation illustrator; kit-box artist
Dick Korda: award-winning model airplane designer and flyer
Nicholas Kove (1891-1958): founder of Airfix
Oscar Koveleski: CanAm racing pioneer; founder of Auto World; promoter of slot car racing, model building, and RC racing and building
Mort Künstler: kit-box artist, notably associated with Aurora in the 1960s.
L
Jürgen Lautenbach: founder of LRP Electronic
Cliff Lett: R/C car designer and president of Associated Electrics
William M. "Bill" Lester (1908-2005): founder of Pyro Plastics Corporation, pioneer in the development of injection molding
Paul Lindberg: founder of Lindberg Products, Inc.
William, Walter and Arthur Lines: co-founders of Frog
Dick Locher: kit-box artist
Ted Longshaw (1926-2011): hobby shop owner and founder of BRCA, EFRA and IFMAR
Gil Losi: founder of Losi, former owner of Ranch Pit Shop
M
Joe Mansour: co-founder of Frog
Tom Marshall: founder of Buggleskelly Station
Dick Mates Sr. and Phil Mates (brothers): co-founders of Hawk Model Company
Dick Mates Jr.: executive at Hawk
Bob McCleod: industry executive
Fred Megow: founder of Megow Models (Philadelphia, 1920s)
Tom Morgan: kit-box artist, most notably associated with Hawk in the 1960s, as well as Monogram
Robert Mudry: President of wind tunnel testing and engineering flow modeling company Airflow Sciences Corporation
John Mueller: directed product and mold design and technical illustration groups for AMT
N
Philippe Neidhart: founder of Team Orion, CEO of Neidhart SA
Kody Numedahl: radio-controlled car designer for Team Associated
O
Jack Odell: co-founder of Lesney Products and founder of Lledo
Irwin G. Ohlsson: gas engine designer and manufacturer (flying model aircraft)
P
Keith Plested: founder of PB Racing and co-founder of BRCA
Irwin S. Polk and Nathan J. Polk: owners of Polk Hobbies; promoters of the scale model hobby
Ernest Provetti: founder of Trinity Products
Tarquinio Provini: founder of Protar
Q
Joseph Quagraine: designer/founder of JQ Products
R
Bob Reder: co-founder of Monogram Models
Mike Reedy (1941 - 2011): founder of Reedy and creator of Reedy International Race of Champions
F. Lee Renaud (-1983): founder of Airtronics
Bob Rule (businessman): founder of BoLink
S
Franco Sabatini: designer and co-founder of SG Racing Cars
Michael Salven: executive of Serpent, designer (Impact, Vector)
Koji Sanada: R/C car designer, president and CEO of Mugen Seiki, formerly of Aoyagi Metals Company
Pierre Scerri: model engineer
Gary Schmidt: industry executive
Cyril Schumacher: founder of Schumacher Racing Products
Leon Shulman: aircraft kit designer
Abe Shikes (1908–1997): co-founder of Aurora Plastics Corporation
George Siposs (1931-1995): radio controlled car racing pioneer and founder of ROAR
Victor Stanzel: flying aircraft model designer
James Hay Stevens (1913 -1973): aircraft kit designer, notably associated with Airfix
John Steel: kit-box artist
Henry Struck: aircraft kit designer
Hisashi Suzuki (1936-2010): radio controlled car racing pioneer (in Japan), founder of Kyosho and JMRCA
Masayuki Suzuki: former president of Kyosho
T
Fumito Taki: radio-controlled car designer, significantly for Tamiya (famous works: Sand Scorcher)
Yoshio Tamiya (1905 - 1988): founder of Tamiya
Masayuki Tamiya (c1957/1958 - 2017): president of Tamiya
Shunsaku Tamiya: chairman of Tamiya
Nils F. Testor: founder of Testors
George A. Toteff Jr. (1925-2011): founder of MPC, inventor of the AMT 3-in-1 kit; innovator of one-piece body car-model tooling.
Ron Ton: co-founder of Serpent R/C cars; inventor of the two-speed gearbox and the Centax clutch for model cars
U
Yukijiro Umino: designer for Yokomo (Yokomo MR4TC BDx)
V
Dave Vander Wal: industry executive
Gordon Varney: founder of Varney Scale Models, developer of early injection molded plastic kits
Norm Veber: industry executive
Peter Vetri: founder and president of Atlantis Toy and Hobby and current president of the HMA (Hobby Manufacturers Association)
W
Jim Walker: inventor of U-Control Line Flying (scale aircraft)
Jairus Watson: graphic designer, automotive conceptual artist, kit-box illustrator.
Dick Wellman: kit-box artist
Patrick Wentzel: Co-Founder of Carousel Modelers and Miniatures Association
Tom West: industry executive
Linn H. Westcott: long-time editor of Model Railroader, hobby industry promoter
Charles Wilmot: co-founder of Frog
John Wilmot: co-founder of Frog, pioneer of plastic scale kit models
Gerald Wingrove: model engineer
X
Y
Tomoaki "Tom" Yokobori: founder of Yokomo and Yatabe Arena
Z
References
Notes
Bibliography
External links
Hobby Manufacturers Association (HMA); https://www.hmahobby.org
Plastics Academy Hall of Fame; https://archive.today/20130705001947/http://www.plasticsacademy.org/swp/articles.php?articleId=89
Scale modeling
Model aircraft
Model boats | List of scale-model industry people | Physics | 2,116 |
14,150,975 | https://en.wikipedia.org/wiki/MAP2K3 | Dual specificity mitogen-activated protein kinase kinase 3 is an enzyme that in humans is encoded by the MAP2K3 gene.
The protein encoded by this gene is a dual specificity protein kinase that belongs to the MAP kinase kinase family. This kinase is activated by mitogenic and environmental stress, and participates in the MAP kinase-mediated signaling cascade. It phosphorylates and thus activates MAPK14/p38-MAPK. This kinase can be activated by insulin, and is necessary for the expression of glucose transporter. Expression of RAS oncogene is found to result in the accumulation of the active form of this kinase, which thus leads to the constitutive activation of MAPK14, and confers oncogenic transformation of primary cells. The inhibition of this kinase is involved in the pathogenesis of Yersinia pseudotuberculosis. Multiple alternatively spliced transcript variants that encode distinct isoforms have been reported for this gene.
Interactions
MAP2K3 has been shown to interact with TAOK2 and PLCB2.
References
Further reading
EC 2.7.12 | MAP2K3 | Chemistry | 230 |
3,612,259 | https://en.wikipedia.org/wiki/Hierarchy%20%28mathematics%29 | In mathematics, a hierarchy is a set-theoretical object, consisting of a preorder defined on a set. This is often referred to as an ordered set, though that is an ambiguous term that many authors reserve for partially ordered sets or totally ordered sets. The term pre-ordered set is unambiguous, and is always synonymous with a mathematical hierarchy. The term hierarchy is used to stress a hierarchical relation among the elements.
Sometimes, a set comes equipped with a natural hierarchical structure. For example, the set of natural numbers N is equipped with a natural pre-order structure, where $n \leq m$ whenever we can find some other number $k$ so that $n + k = m$. That is, $m$ is bigger than $n$ only because we can get to $m$ from $n$ using $k$. This idea can be applied to any commutative monoid. On the other hand, the set of integers Z requires a more sophisticated argument for its hierarchical structure, since we can always solve the equation $n + k = m$ by writing $k = m - n$.
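A small sketch of this addition-induced preorder on the naturals (illustrative only, not from the article; the search bound $k \le m$ suffices because any witness must satisfy $k = m - n \le m$):

```python
# Illustrative sketch: the preorder on the natural numbers induced by addition,
# n <= m  iff  there exists some natural k with n + k = m (k searched up to m).
def leq(n, m):
    """Return True if some natural k satisfies n + k == m."""
    return any(n + k == m for k in range(m + 1))

assert leq(3, 7)        # witness k = 4
assert not leq(7, 3)    # no natural k gives 7 + k == 3
assert leq(5, 5)        # reflexive, witness k = 0
```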
A mathematical hierarchy (a pre-ordered set) should not be confused with the more general concept of a hierarchy in the social realm, particularly when one is constructing computational models that are used to describe real-world social, economic or political systems. These hierarchies, or complex networks, are much too rich to be described in the category Set of sets. This is not just a pedantic claim; there are also mathematical hierarchies, in the general sense, that are not describable using set theory.
Other natural hierarchies arise in computer science, where the word refers to partially ordered sets whose elements are classes of objects of increasing complexity. In that case, the preorder defining the hierarchy is the class-containment relation. Containment hierarchies are thus special cases of hierarchies.
Related terminology
Individual elements of a hierarchy are often called levels and a hierarchy is said to be infinite if it has infinitely many distinct levels but said to collapse if it has only finitely many distinct levels.
Example
In theoretical computer science, the time hierarchy is a classification of decision problems according to the amount of time required to solve them.
See also
Order theory
Nested set collection
Lattice
Tree related topics:
Tree structure
Tree (data structure)
Tree (graph theory)
Tree network
Tree (descriptive set theory)
Tree (set theory)
Effective complexity hierarchies:
Polynomial hierarchy
Exponential hierarchy
Chomsky hierarchy
Ineffective complexity hierarchies:
Arithmetical hierarchy
Hyperarithmetical hierarchy
Analytical hierarchy
In set theory or logic:
Borel hierarchy
Difference hierarchy
Wadge hierarchy
Abstract algebraic hierarchy
References
Hierarchy
Set theory | Hierarchy (mathematics) | Mathematics | 517 |
66,141,833 | https://en.wikipedia.org/wiki/Meike%20Akveld | Meike Maria Elisabeth Akveld is a Swiss mathematician and textbook author, whose professional interests include knot theory, symplectic geometry, and mathematics education. She is a tenured senior scientist and lecturer in the mathematics and teacher education group in the Department of Mathematics at ETH Zurich. She is also the organizer of the Mathematical Kangaroo competitions in Switzerland, and president of the Association Kangourou sans Frontières, a French-based international society devoted to the popularization of mathematics.
Education
Akveld earned a bachelor's degree from the University of Warwick and took Part III of the Mathematical Tripos at the University of Cambridge. She completed her Ph.D. at ETH Zurich in 2000, with the dissertation Hofer geometry for Lagrangian loops, a Legendrian knot and a travelling wave jointly supervised by Dietmar Salamon and Leonid Polterovich.
Books
Akveld's mathematics books include:
Canonical metrics in Kähler geometry (by Tian Gang, based on notes taken by Akveld, Birkhäuser, 2000)
Knoten in der Mathematik: Ein Spiel mit Schnüren, Bildern und Formeln (Knots in mathematics: A game with strings, pictures and formulas, in German, Orell Füssli, 2007)
Hofer geometry for Lagrangian loops: And a Legendrian Knot and a travelling wave (VDM Verlag, 2008)
Integrieren - do it yourself (in German, with Ursula Eisler and Daniel Zogg, Orell Füssli, 2010)
Knots Unravelled: From String to Mathematics (with Andrew Jobbings, Arbelos, 2011)
Analysis I and Analysis II (in German, with René Sperb, VDF Hochschulverlag, 2012 and 2015)
Knopen in de wiskunde (Knots in mathematics, in Dutch, with Ab van der Roest, Epsilon Uitgaven, 2015)
Mathe mit dem Känguru 5: Die schönsten Aufgaben von 2015 bis 2019 (Math with the kangaroo 5: The most beautiful problems from 2015 to 2019, in German, with Alexander Unger, Monika Noack, and , Hanser Verlag, 2019)
References
External links
Home page
Year of birth missing (living people)
Living people
Swiss mathematicians
Swiss women mathematicians
Alumni of the University of Warwick
Alumni of the University of Cambridge
ETH Zurich alumni
Academic staff of ETH Zurich
Mathematics educators
Topologists | Meike Akveld | Mathematics | 502 |
27,199,722 | https://en.wikipedia.org/wiki/Flucindole | Flucindole (, ; developmental code name WIN-35150) is an antipsychotic with a tricyclic and tryptamine-like structure that was never marketed. It is the 6,8-difluoro derivative of ciclindole. The drug is about 5 to 10 times more potent than ciclindole both in vitro and in vivo.
See also
5-Fluoro-AMT
Iprindole
References
Abandoned drugs
Antipsychotics
Carbazoles
Dimethylamino compounds
Fluoroarenes
Heterocyclic compounds with 3 rings
Nitrogen heterocycles
Tryptamines | Flucindole | Chemistry | 133 |
25,403,351 | https://en.wikipedia.org/wiki/Chang%27s%20conjecture | In model theory, a branch of mathematical logic, Chang's conjecture, attributed to Chen Chung Chang by , states that every model of type (ω2,ω1) for a countable language has an elementary submodel of type (ω1, ω). A model is of type (α,β) if it is of cardinality α and a unary relation is represented by a subset of cardinality β. The usual notation is $(\omega_2,\omega_1) \twoheadrightarrow (\omega_1,\omega)$.
The axiom of constructibility implies that Chang's conjecture fails. Silver proved the consistency of Chang's conjecture from the consistency of an ω1-Erdős cardinal. Hans-Dieter Donder showed a weak version of the reverse implication: if CC is not only consistent but actually holds, then ω2 is ω1-Erdős in K.
More generally, Chang's conjecture for two pairs (α,β), (γ,δ) of cardinals is the claim
that every model of type (α,β) for a countable language has an elementary submodel of type (γ,δ).
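In the same arrow notation (a standard convention, restated here for readability rather than quoted from the article), the general two-pair form reads:

```latex
% General Chang-type transfer property in the standard arrow notation.
\[
  (\alpha,\beta) \twoheadrightarrow (\gamma,\delta)
  \quad\Longleftrightarrow\quad
  \begin{array}{l}
    \text{every model of type } (\alpha,\beta) \text{ for a countable language}\\
    \text{has an elementary submodel of type } (\gamma,\delta).
  \end{array}
\]
```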
The consistency of $(\omega_3,\omega_2) \twoheadrightarrow (\omega_2,\omega_1)$ was shown by Laver from the consistency of a huge cardinal.
References
Model theory
Set theory
Conjectures | Chang's conjecture | Mathematics | 243 |
58,969,184 | https://en.wikipedia.org/wiki/Jan%20Lever | Jan Lever (Groningen, 20 July 1922 – Amsterdam, 23 November 2010) was a Dutch biologist specialized in zoology, endocrinology and evolutionary biology. His ideas on evolution may be characterized as a form of theistic evolution. Lever was an important voice in shaping Dutch public debate on evolution and biology, particularly in protestant circles.
Life and career
The Reformed Lever studied biology at Utrecht University from 1939 onwards. Lever obtained his doctorate in 1950. He served as a professor of zoology at the Vrije Universiteit Amsterdam from 1952 until 1986. In his inaugural lecture, he presented his belief that evolution was the way in which God had created the world. He further developed this argument in his book Creatie en evolutie (1956; English translation 1958 under the title Creation and Evolution).
In 1970, Lever was elected member of the Royal Netherlands Academy of Arts and Sciences.
Research and writings
In the early 1950s, Lever started research on the neuroendocrine system of the basommatophoran snail, Lymnaea stagnalis. Lever himself found a particular kind of neurons in the ancylid snail Ferrissia, and his student Joos Joosse demonstrated those neurons in other snail species. Lymnaea became the animal of choice for physiological research at the VU Amsterdam laboratory for decades.
At the beginning of his career, Lever’s ideas about evolution and the species concept were inspired by the Reformational philosophy of philosopher Herman Dooyeweerd. During his student days, Lever became acquainted with the biologist Johann Heinrich Diemer, who had published about Dooyeweerd's approach in the domain of biology in the 1930s. Diemer was a great influence on Lever's thought. Together with Dooyeweerd, Lever wrote four articles about the species concept in the journal Philosophia Reformata (1948–1950) in which species were defined as constant types. In his book, Creatie en Evolutie (1956), Lever still subscribed to Dooyeweerd’s philosophy but also suggested that it is possible that biological evolution occurred. Later he moved away from these earlier ideas about species constancy, and adhered to a kind of theistic evolution.
Publications
Creatie en Evolutie (1956); translated in English (1958) as Creation and Evolution
Quantitative beach research I, The "left-right-phenomenon": Sorting of lamellibranch valves on sandy beaches (1958)
Waar blijven we? (1969); translated in English (1970) as Where are we headed? A biologist talks about origins, evolution, and the future
Geïntegreerde biologie (1973)
Schepping en evolutie (1985)
Bomengids van Amsterdam-Zuid (2002)
Feniks en broedmachine. Reisverhalen over deze wondere wereld (2003)
Langs de mysterieuze grenzen van het leven (2006)
Een bioloog leest de Bijbel (2010)
Literature
Ab Flipse (2010), "In memoriam prof. dr. Jan Lever, 20 juli 1922 – 23 november 2010". HDC.VU.NL, 9 December 2010. Accessed 5 November 2018.
Ab Flipse (2016), "Jan Lever: bioloog, bruggenbouwer en boegbeeld van de VU", in: Verder kijken. Honderdvijfendertig jaar Vrije Universiteit Amsterdam in de samenleving. Amsterdam: Vrije Universiteit, 196-202.
Willem Bouwman (2006), "De evolutie van professor Lever", Nederlands Dagblad, 17 maart 2006.
Ab Flipse (red.)(2022), Jan Lever - Honderd. Terugblikken op leven en werk van VU-bioloog Jan Lever (1922-2010) (Amsterdam: HDC, 2022); 158 pp.; ; Donum-reeks dl. 21
References
External links
Jan Lever pages
1922 births
2010 deaths
Utrecht University alumni
Academic staff of Vrije Universiteit Amsterdam
Dutch biologists
Dutch zoologists
Theistic evolutionists
Dutch biblical scholars
Bible commentators
Reformed Churches Christians from the Netherlands
Members of the Royal Netherlands Academy of Arts and Sciences | Jan Lever | Biology | 896 |
23,817,797 | https://en.wikipedia.org/wiki/Howard%20N.%20Potts%20Medal | The Howard N. Potts Medal was one of The Franklin Institute Awards for science and engineering presented by the Franklin Institute of Philadelphia, Pennsylvania. It is named for Howard N. Potts. The medal was first awarded in 1911; in 1991 it was merged, along with other Franklin Institute historical awards, into the Benjamin Franklin Medal.
Laureates
The following people received the Howard N. Potts Medal:
1911 - William Weber Coblentz (Physics)
1912 - William Arthur Bone (Chemistry)
1913 - James A. Bizzell (Earth Science)
1913 - Thomas Lyttleton Lyon (Earth Science) for "Plants and Relation to Nitrate in Soils"
1914 - Ralph Modjeski (Engineering)
1916 - William Jackson Humphreys (Physics)
1916 - William Spencer Murray (Engineering)
1917 - Ulric Dahlgren (Life Science)
1918 - Alexander Gray (Engineering)
1918 - Arthur Edwin Kennelly (Engineering)
1918 - Louis Vessot King (Engineering)
1919 - Reynold Janney (Engineering)
1919 - Clarence P. Landreth (Chemistry)
1919 - Harvey D. Williams (Engineering)
1920 - Wendell Addison Barker (Invention)
1920 - Edward P. Bullard, Jr. (Engineering)
1921 - Elmer Verner McCollum (Life Science)
1921 - Alfred O. Tate (Engineering)
1922 - Ernest George Coker (Physics)
1922 - Charles R. Downs (Chemistry)
1922 - Richard Bishop Moore (Chemistry)
1922 - J. M. Weiss (Chemistry)
1923 - Albert Wallace Hull (Chemistry) for X-ray crystallography
1924 - John August Anderson (Engineering)
1924 - William Gaertner (Engineering)
1925 - Charles Thomson Rees Wilson (Physics)
1926 - William David Coolidge (Physics)
1926 - Howard W. Matheson (Chemistry)
1927 - George E. Beggs (Physics)
1927 - Marion Eppley (Engineering)
1928 - Eugene C. Sullivan (Chemistry)
1928 - William C. Taylor (Chemistry)
1928 - Oscar G. Thurow (Engineering)
1931 - Benno Strauss (Engineering)
1932 - George Paget Thomson (Physics)
1933 - Igor Ivanovich Sikorsky (Engineering)
1934 - Ernst Georg Fischer (Engineering)
1936 - Felix Andries Vening Meinesz (Engineering)
1937 - John Clyde Hostetter (Engineering)
1938 - Lars Olai Grondahl (Engineering)
1939 - Newcomb K. Chaney (Engineering)
1939 - H. Jermain Creighton (Engineering)
1941 - Harold Eugene Edgerton (Engineering)
1942 - Jesse Wakefield Beams (Physics)
1942 - Harcourt Colborne Drake (Engineering)
1942 - Bernard Lyot (Physics)
1943 - Don Francisco Ballen (Life Science)
1943 - Paul Renno Heyl (Physics)
1945 - Edwin Albert Link (Engineering)
1946 - Ira Sprague Bowen (Physics)
1946 - Bengt Edlen (Physics)
1946 - Sanford Alexander Moss (Engineering)
1947 - Vladimir Kosma Zworykin (Engineering)
1948 - Eugene Jules Houdry (Chemistry)
1948 - Clarence A. Lovell (Engineering)
1948 - David Bigelow Parkinson (Engineering)
1949 - J. Presper Eckert, Jr. (Computer and Cognitive Science)
1949 - Clinton Richards Hanna (Engineering)
1949 - John William Mauchly (Computer and Cognitive Science)
1950 - Merle Anthony Tuve (Engineering)
1951 - Basil A. Adams (Engineering)
1951 - Clifford M. Foust (Physics)
1951 - Eric Leighton Holmes (Chemistry)
1956 - Edwin H. Land (Engineering)
1958 - William Nelson Goodwin, Jr. (Engineering)
1958 - Emanuel Rosenberg (Engineering)
1959 - George W. Morey (Engineering)
1960 - Charles Stark Draper (Engineering)
1962 - Wilbur H. Goss (Engineering)
1964 - Erwin Wilhelm Müller (Engineering)
1965 - Christopher Sydney Cockerell (Engineering)
1966 - Robert Kunin (Chemistry)
1967 - John Louis Moll (Engineering)
1968 - Heinrich Focke (Engineering)
1969 - Albert Ghiorso (Chemistry)
1969 - Charles P. Ginsburg (Engineering)
1970 - Jacques-Yves Cousteau (Life Science)
1971 - William David McElroy (Life Science)
1972 - Jacques Ernest Piccard (Engineering)
1973 - Charles Howard Vollum (Engineering)
1974 - Jay Wright Forrester (Engineering)
1975 - LeGrand G. Van Uitert (Engineering)
1976 - Stephanie L. Kwolek (Engineering)
1976 - Paul W. Morgan (Engineering)
1977 - Godfrey N. Hounsfield (Life Science)
1978 - Michael Szwarc (Chemistry)
1979 - Seymour Roger Cray (Computer and Cognitive Science)
1979 - Richard Travis Whitcomb (Engineering)
1980 - Stanley G. Mason (Physics)
1981 - August Uno Lamm (Engineering)
1982 - Charles Gilbert Overberger (Chemistry)
1983 - George G. Guilbault (Life Science)
1983 - Paul Christian Lauterbur (Physics)
1985 - William Cochran (Life Science)
1986 - Martin David Kruskal (Physics)
1986 - Norman J. Zabusky (Physics)
1988 - Dudley Dean Fuller (Engineering)
1989 - Sir Charles William Oatley (Physics)
1991 - Richard E. Morley (Computer and Cognitive Science)
See also
List of general science and technology awards
References
Science and technology awards
Franklin Institute awards
Awards established in 1911
1911 establishments in Pennsylvania | Howard N. Potts Medal | Technology | 1,103 |
3,723,875 | https://en.wikipedia.org/wiki/Asymmetric%20crying%20facies | Asymmetric crying facies (ACF), also called partial unilateral facial paresis and hypoplasia of depressor angula oris muscle, is a minor congenital anomaly caused by agenesis or hypoplasia of the depressor anguli oris muscle, one of the muscles that control the movements of the lower lip. This unilateral facial weakness is first noticed when the infant cries or smiles, affecting only one corner of the mouth and occurs on the left side in nearly 80% of cases.
When the hypoplasia of the depressor anguli oris muscle is associated with congenital cardiac defects, the term 'Cayler cardiofacial syndrome' is used. Cayler syndrome is part of 22q11.2 deletion syndrome. It was characterized by Cayler in 1969.
References
General
External links
Crying
Autosomal monosomies and deletions
Cayler cardiofacial syndrome
crying | Asymmetric crying facies | Physics,Biology | 202 |
2,399,787 | https://en.wikipedia.org/wiki/Time-tracking%20software | Time-tracking software are computer programs that allows users to record time spent on tasks or projects. Time-tracking software may include time-recording software, which uses user activity monitoring to record the activities performed on a computer and the time spent on each project and task.
Timesheet software
Timesheet software is software used to maintain timesheets. It was popularized when computers were first introduced to the office environment with the goal of automating heavy paperwork for big organizations.
See also
Comparison of time-tracking software
Computer surveillance
Employee-scheduling software
Meeting scheduling tool
Project-management software
Schedule (workplace)
Time and attendance
Time management
References
Accounting software
Business software
Time management | Time-tracking software | Physics,Technology | 134 |
27,261,055 | https://en.wikipedia.org/wiki/HD%2032450 | HD 32450, also known as Gliese 185 is a binary star in the constellation Lepus. It is located about 28 light years from the Solar System. This star will make its closest approach to the Sun in roughly 350,000 years, when it comes within .
See also
List of star systems within 25–30 light-years
References
Durchmusterung objects
0185
023452
032450
Lepus (constellation)
K-type main-sequence stars
Binary stars | HD 32450 | Astronomy | 99 |
47,881,061 | https://en.wikipedia.org/wiki/Humphrey%20visual%20field%20analyser | Humphrey field analyser (HFA) is a tool for measuring the human visual field that is commonly used by optometrists, orthoptists and ophthalmologists, particularly for detecting monocular visual field defects.
The results of the analyser identify the type of vision defect. Therefore, it provides information regarding the location of any disease processes or lesion(s) throughout the visual pathway. This guides and contributes to the diagnosis of the condition affecting the patient's vision. These results are stored and used for monitoring the progression of vision loss and the patient's condition.
Medical uses
The analyser can be used for screening, monitoring and assisting in the diagnosis of certain conditions. There are numerous testing protocols to select, based on the purpose. The first number denotes the extent of the field measured on the temporal side, from the centre of fixation, in degrees. The '-2' represents the pattern of the points tested. They include:
10-2: Measures 10 degrees temporally and nasally and tests 68 points. Used for macula, retinal and neuro-ophthalmic conditions and advanced glaucoma
24-2: Measures 24 degrees temporally and 30 degrees nasally and tests 54 points. Used for neuro-ophthalmic conditions and general screening as well as early detection of glaucoma
30-2: Measures 30 degrees temporally and nasally and tests 76 points. Used for general screening, early glaucoma and neurological conditions
The above tests can be performed in either SITA-Standard or SITA-Fast. SITA-Fast is a quicker method of testing. It produces results similar to SITA-Standard; however, its repeatability is questionable and it is slightly less sensitive
There are additional tests for more specific purposes such as the following:
Esterman – Used to test the functionality of a patient's vision to ensure they are safe to drive, as requested by VicRoads, Australia
SITA SWAP: Short Wavelength Automated Perimetry (SWAP) is used for detection of early glaucomatous loss
Method of assessment
The analyser test takes approximately 5–8 minutes, excluding patient set up. There are multiple steps which need to be done before commencement of the test to ensure reliable results are attained.
The test type and eye are firstly selected and the patient's details are entered, including their refractive error. The analyser will provide a lens strength and type (either spherical and/or cylindrical), if required for the test. In these instances, wire-rimmed trial lenses are generally used, with the cylindrical lens placed closest to the patient so the axis is easily read. The clinician can alter the fixation targets as per necessary (see Fixation Targets for advice).
Before putting the patient onto the machine, the patient is instructed to maintain fixation on the central target and is given a buzzer to only press when they see a light stimulus. It is not possible to see every light and some lights appear brighter/duller and slower/faster than others. The eye not being tested is patched and the room lights are dimmed prior to commencement of the test.
The patient is positioned appropriately and comfortably against the forehead rest and chin rest. Minor adjustments to the head position are made to centre the pupil on the display screen to allow eye monitoring throughout the test. The lens holder should be as close to the patient's eye as possible to avoid artefacts (see Disadvantages for possible artefacts).
It is important for the patient to blink normally, relax and maintain concentration throughout the test. This will increase the reliability of results.
How it works
The analyser projects a series of white light stimuli of varying intensities (brightness), throughout a uniformly illuminated bowl. The patient uses a handheld button that they press to indicate when they see a light. This assesses the retina's ability to detect a stimulus at specific points within the visual field. This is called retinal sensitivity and is recorded in 'decibels' (dB).
The analyser currently utilises the Swedish Interactive Thresholding Algorithm (SITA); a formula which allows the fastest and most accurate visual field assessment to date. Results are then compared against an age-matched database which highlights unusual and suspicious vision loss, potentially caused by pathology.
Fixation targets
There are different targets a patient can fixate on during the test. They are chosen on the basis of the patient's conditions.
Central target: Yellow light in the bowl's centre
Small diamond: For patients who cannot see the central target such as those with macular degeneration. The patient looks into the centre of the four lights
Large diamond: For patients who cannot see the above two
Interpreting results
Reliability indices
Issues of reliability are critical in result interpretation. These include, but are not limited to, the patient losing concentration, closing their eyes or pressing the buzzer too frequently. Fixation is monitored via the display screen and the gaze tracker, located at the bottom of the printout. The degree of reliability is determined by the reliability indices located on the printout (Fig. 4). These are assessed first and allow the examiner to determine whether the end results are reliable. These indices include:
Fixation losses: Recorded when a patient responds to a stimulus that is projected on to an area of their blind spot. Fixation losses exceeding 20% are denoted with an 'XX' next to the score and deem results unreliable
False positives: Recorded when a patient responds when there is no stimulus present. This patient is often referred to as 'buzzer happy'. False positives exceeding 15% are denoted with an 'XX' and results are considered unreliable. This may indicate that the patient is anxious and concerned about missing targets
False negatives: Recorded when a patient does not respond to a brighter stimulus where a duller stimulus has already been seen. High false negative scores indicate that the patient is fatigued, inattentive, a malingerer or has genuine significant visual field loss. The literature presents various percentages regarding reliability; however, most of it holds that false negatives exceeding approximately 30% render results unreliable (a simple thresholding sketch follows this list)
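As an illustration of the thresholds quoted above only (the percentage cut-offs are taken from this section, but the function and its parameter names are invented for the example and are not an HFA interface):

```python
# Illustrative sketch: flag an HFA test as unreliable using the thresholds
# quoted above (fixation losses > 20%, false positives > 15%,
# false negatives > ~30%). Field names are invented for this example.
def reliability_flags(fixation_loss_pct, false_pos_pct, false_neg_pct):
    flags = []
    if fixation_loss_pct > 20:
        flags.append("fixation losses > 20% (marked 'XX')")
    if false_pos_pct > 15:
        flags.append("false positives > 15% (marked 'XX')")
    if false_neg_pct > 30:
        flags.append("false negatives > ~30%")
    return flags or ["reliability indices within accepted limits"]

print(reliability_flags(fixation_loss_pct=25, false_pos_pct=5, false_neg_pct=10))
# -> ["fixation losses > 20% (marked 'XX')"]
```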
Plots
After reliability is determined, the remaining data is assessed.
Numerical display
The numerical display represents raw values of patient's retinal sensitivity at specific retinal points in dB. Higher numbers equate to higher retinal sensitivities. Sensitivity is greatest in the central field and decreases towards the periphery. Normal values are approximately 30 dB while recorded values of <0 dB equate to no sensitivity measured.
Grey scale
The grey scale is a graphical representation of the numerical display, allowing for easy interpretation of the field loss. Lower sensitivities are indicated by darker areas and higher sensitivities are represented with a lighter tone. This scale is used to demonstrate vision changes to the patient but is not used for diagnostic purposes.
Total deviation
The numerical total demonstrates the difference between measured values and population age-norm values at specific retinal points.
Negative values indicate lower than normal sensitivity
Positive indicates higher
0 equals no change
The statistical display (located below the numerical total) demonstrates the percentage of the normal population who measure below the patient's value at a specific retinal point. The probability display provides a key for interpreting these percentages. For example, the darkest square in the key represents that <0.5% of the population would also attain this result, indicating that the vision loss is extensive. The total deviation plots highlight diffuse vision loss (i.e. the total departure from the age-norm).
Pattern deviation
The pattern deviation provides a numerical total and a statistical display, as the total deviation plot does. However, it accounts for general reductions of vision caused by media opacities (e.g. cataract), uncorrected refractive error, reductions in sensitivity due to age and pupil miosis. This highlights focal loss only (i.e. vision loss suspected to arise only from pathological processes). Therefore, this is the main plot referred to when making a diagnosis. The pattern deviation plot is generally lighter than the total deviation because of the factors accounted for.
Global indices
These provide a statistical summary of the field with one number. Although not used for initial diagnosis, they are essential for monitoring glaucoma progression. They include:
Mean deviation (MD): Derived from the total deviation and represents the overall mean departure from the age-corrected norm. A negative value indicates field loss, while a positive value indicates that the field is above average. A P value is provided if the global indices are abnormal. It provides a statistical representation of the population. For example, P <2% means that less than 2% of the population have vision loss worse than measured
Pattern standard deviation (PSD): Derived from the pattern deviation and thus highlights focal loss only. A high PSD, indicating irregular vision, is therefore a more useful indicator of glaucomatous progression, than the MD
Glaucoma hemifield test
The glaucoma hemifield test (GHT) provides assessment of the visual field where glaucomatous damage is often seen. It compares five corresponding and mirrored areas in the superior and inferior visual fields. The result of either 'Outside Normal Limits' (significant difference in superior and inferior fields), 'Borderline' (suspicious differences) or 'Within Normal Limits' (no differences) is only considered when the patient has, or is a suspect for, glaucoma. This is only available in 30-2 and 24-2 analyser protocol.
Visual field index
The visual field index (VFI) reflects retinal ganglion cell loss and function, as a percentage, with central points weighted more.
It is expressed as a percentage of visual function, with 100% being a perfect age-adjusted visual field and 0% representing a perimetrically blind field. The pattern deviation probability plot (or the total deviation probability plot when MD is worse than -20 dB) is used to identify abnormal points, and the age-corrected sensitivity at each point is calculated using the total deviation numerical map. The VFI is a reliable index on which glaucomatous visual field severity staging can be based.
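As an illustration only, the global indices can be thought of as summary statistics of the deviation maps. The sketch below uses simple unweighted formulas and made-up numbers; the actual HFA uses proprietary, eccentricity-weighted and age-corrected calculations.

```python
# Illustrative sketch only: simplified, unweighted stand-ins for the HFA
# global indices, computed from a made-up total-deviation map (in dB).
# The real instrument weights points by eccentricity and normative variance.
import numpy as np

total_dev = np.array([-1.0, -2.5, 0.5, -12.0, -1.5, -0.5, -15.0, -2.0])  # dB

md = total_dev.mean()                 # "mean deviation": overall departure from norm
pattern = total_dev - md              # remove the diffuse (general) component
psd = pattern.std(ddof=1)             # "pattern standard deviation": focal irregularity

print(f"MD  ~ {md:.1f} dB  (negative = field below the age norm)")
print(f"PSD ~ {psd:.1f} dB  (large = localized, irregular loss)")
```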
The shaded pattern of vision loss shown on the pattern deviation plot allows for diagnosis of the type of vision loss present. This contributes, alongside other clinical findings, to the diagnosis of certain conditions. The types of vision loss and associated conditions are beyond the scope of this article; however, Figure 5 provides typical examples of visual field loss. Refer to #See also for more information.
Advantages and disadvantages
Advantages
Provides a comprehensive visual field assessment and ensures reliable results
Compares patient's data to age-matched populations
Distinguishes focal from diffuse vision loss
Can be used for patients who are wheelchair users, hearing impaired, have postural and fixation problems and/or very low visual acuity
Provides a baseline measurement
Simple for the examiner to perform and interpret
Disadvantages
Requires a higher level of patient understanding and concentration compared to other visual field tests
Time-consuming
Learning effect: new patients improve as more tests are performed due to understanding of the test conditions. Consider the third test as the baseline result
Potential for artefacts (i.e. uncharacteristic vision loss) (fig. 6). Below is a list of possible artefacts and a representation of how they may appear. These can, however, be managed with correct patient setup.
Uncorrected refractive error and aphakia cause significant decrease in visual field sensitivity
Rim of trial frame can simulate glaucomatous loss
Media opacities and keratoconus cause decreased sensitivity
Ptosis causes superior visual field loss
Miosis causes decreased peripheral sensitivity
See also
Hemianopsia
Quadrantanopia
Scotoma
Visual field
Homonymous hemianopsia
Binasal hemianopsia
Bitemporal hemianopsia
Visual field test
References
Diagnostic ophthalmology
Medical equipment | Humphrey visual field analyser | Biology | 2,432 |
7,693,235 | https://en.wikipedia.org/wiki/Error%20amplifier%20%28electronics%29 | An error amplifier is an electronic component that amplifies the difference between its two inputs. If one input is a reference signal, this difference can be considered the error in the other.
The error amplifier is usually used with feedback loops owing to its self-correcting mechanism. A common application is in feedback unidirectional voltage control circuits, where the sampled output voltage of the circuit under control is fed back and compared to a stable reference voltage. Any difference between the two generates a compensating error voltage which tends to move the output voltage towards the design specification.
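To make the self-correcting behaviour concrete, the sketch below simulates a regulated supply in discrete time, with the error amplifier modeled as a simple integrating gain stage; all component values, the step count, and the load disturbance are made-up assumptions, not taken from the article.

```python
# Illustrative sketch: discrete-time simulation of a feedback voltage loop.
# An error amplifier (modeled here as an integrating gain stage) drives the
# output toward the reference and rejects a step load disturbance.
v_ref = 5.0            # reference voltage, volts
v_out = 0.0            # initial output voltage
gain = 0.2             # loop gain per time step (assumed)
disturbance = -0.5     # step load disturbance applied from t = 50 onward

control = 0.0
for t in range(200):
    error = v_ref - v_out              # error amplifier: difference of its inputs
    control += gain * error            # amplified error, integrated by the loop
    v_out = control + (disturbance if t >= 50 else 0.0)

print(f"regulated output: {v_out:.3f} V (reference {v_ref} V)")
```

The compensating action is visible in the arithmetic: once the disturbance appears, the error becomes nonzero, the integrated control term grows to cancel it, and the output settles back toward the reference.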
Devices
Discrete Transistors
Operational amplifiers
Applications
Regulated power supply
D.C Power Amplifiers
Measurement Equipment
Servomechanisms
See also
Differential amplifier
External links
Error Amplifier Design and Application , alphascientific.com. Originally accessed 27 April 2009, now 404. Try https://web.archive.org/web/20081006222215/http://www.alphascientific.com/technotes/technote3.pdf
Error amplifier as an element in a voltage regulator: Stability analysis of low-dropout linear regulators with a PMOS pass element
Electronic amplifiers | Error amplifier (electronics) | Technology | 237 |
8,802,094 | https://en.wikipedia.org/wiki/Absorption%20cross%20section | In physics, absorption cross-section is a measure of the probability of an absorption process. More generally, the term cross-section is used in physics to quantify the probability of a certain particle-particle interaction, e.g., scattering, electromagnetic absorption, etc. (Note that light in this context is described as consisting of particles, i.e., photons.) A typical absorption cross-section has units of cm²⋅molecule⁻¹. In honor of the fundamental contribution of Maria Goeppert Mayer to this area, the unit for the two-photon absorption cross section is named the "GM". One GM is 10⁻⁵⁰ cm⁴⋅s⋅photon⁻¹.
In the context of ozone shielding of ultraviolet light, absorption cross section is the ability of a molecule to absorb a photon of a particular wavelength and polarization. Analogously, in the context of nuclear engineering, it refers to the probability of a particle (usually a neutron) being absorbed by a nucleus. Although the units are given as an area, it does not refer to an actual size area, at least partially because the density or state of the target molecule will affect the probability of absorption. Quantitatively, the number $\mathrm{d}N$ of photons absorbed between the points $x$ and $x + \mathrm{d}x$ along the path of a beam is the product of the number $N$ of photons penetrating to depth $x$ times the number $n$ of absorbing molecules per unit volume times the absorption cross section $\sigma$:
$$\frac{\mathrm{d}N}{\mathrm{d}x} = -N n \sigma.$$
The absorption cross-section is closely related to molar absorptivity and mass absorption coefficient.
For a given particle and its energy, the absorption cross-section of the target material can be calculated from the mass absorption coefficient using:
$$\sigma = \frac{(\mu/\rho)\, M}{N_{\mathrm{A}}}$$
where:
$\mu/\rho$ is the mass absorption coefficient
$M$ is the molar mass in g/mol
$N_{\mathrm{A}}$ is the Avogadro constant
This is also commonly expressed as:
$$\sigma = \frac{\mu}{n}$$
where:
$\mu$ is the absorption coefficient
$n$ is the atomic number density
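A small worked example applying both relations; the mass absorption coefficient, molar mass, number density and path length below are illustrative assumptions, not values from the article.

```python
# Illustrative worked example: cross section from the mass absorption
# coefficient, then Beer-Lambert-style attenuation over a short path.
import math

N_A = 6.02214076e23      # Avogadro constant, 1/mol
mu_over_rho = 0.2        # mass absorption coefficient, cm^2/g (assumed)
molar_mass = 48.0        # g/mol (roughly ozone, used only as an illustration)

sigma = mu_over_rho * molar_mass / N_A        # cm^2 per molecule
print(f"sigma ~ {sigma:.2e} cm^2/molecule")

n = 2.5e19               # absorber number density, molecules/cm^3 (assumed)
x = 1.0                  # path length, cm
transmitted = math.exp(-n * sigma * x)        # N(x)/N(0) from dN/dx = -N n sigma
print(f"fraction transmitted over {x} cm: {transmitted:.4f}")
```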
See also
Cross section (physics)
Photoionisation cross section
Nuclear cross section
Neutron cross section
Mean free path
Compton scattering
Transmittance
Attenuation
Beer–Lambert law
High energy X-rays
Attenuation coefficient
Absorption spectroscopy
References
Electromagnetism
Nuclear physics
Scattering, absorption and radiative transfer (optics) | Absorption cross section | Physics,Chemistry | 439 |
509,972 | https://en.wikipedia.org/wiki/Pripyat | Pripyat ( ; , ), also known as Prypiat (, ), is an abandoned industrial city in Kyiv Oblast, Ukraine, located near the border with Belarus. Named after the nearby river, Pripyat, it was founded on 4 February 1970 as the ninth atomgrad (a type of closed city in the Soviet Union) to serve the nearby Chernobyl Nuclear Power Plant, which is located north of the abandoned city of Chernobyl, after which the power plant is named. Pripyat was officially proclaimed a city in 1979 and had grown to a population of 49,360 by the time it was evacuated on the afternoon of 27 April 1986, one day after the Chernobyl disaster.
Although it was located within the administrative district of Ivankiv Raion (now Vyshhorod Raion since the 2020 raion reform), the abandoned municipality now has the status of city of regional significance within the larger Kyiv Oblast, and is administered directly from the capital of Kyiv. Pripyat is supervised by the State Emergency Service of Ukraine which manages activities for the entire Chernobyl exclusion zone. Following the 1986 Chernobyl nuclear disaster, the entire population of Pripyat was moved to the purpose-built city of Slavutych.
History
Early years
Access to Pripyat, unlike cities of military importance, was not restricted before the disaster as the Soviet Union deemed nuclear power stations safer than other types of power plants. Nuclear power stations were presented as achievements of Soviet engineering, harnessing nuclear power for peaceful projects. The slogan "peaceful atom" () was popular during those times. The original plan had been to build the plant only from Kyiv, but the Ukrainian Academy of Sciences, among other bodies, expressed concern that this would be too close to the city. As a result, the power station and Pripyat were built at their current locations, about from Kyiv.
Post-Chernobyl disaster
In 1986, the city of Slavutych was constructed to replace Pripyat. After Chernobyl, this was the second-largest city for accommodating power plant workers and scientists in the Commonwealth of Independent States.
One notable landmark often featured in photographs in the city and visible from aerial-imaging websites is the long-abandoned Ferris wheel located in the Pripyat amusement park, which had been scheduled to have its official opening five days after the disaster, in time for May Day celebrations. The Azure Swimming Pool and Avanhard Stadium are two other popular tourist sites.
On 4 February 2020, former residents of Pripyat gathered in the abandoned city to celebrate the 50th anniversary of Pripyat's establishment. This was the first time former residents returned to the city since its abandonment in 1986. The 2020 Chernobyl Exclusion Zone wildfires reached the outskirts of the town, but they did not reach the plant.
During the 2022 Russian invasion of Ukraine, the city was occupied by Russian forces during the Battle of Chernobyl after several hours of heavy fighting. On 31 March Russian troops withdrew from the plant and other parts of Kyiv Oblast. On 3 April Ukrainian troops took control of Pripyat again.
Infrastructure and statistics
The following statistics are from 1 January 1986.
The population was 49,400. The average age was about 26 years old. Total living space was : 13,414 apartments in 160 apartment blocks, 18 halls of residence accommodating up to 7,621 single males or females, and eight halls of residence for married or de facto couples.
Education: 15 kindergartens and elementary schools for 4,980 children, and five secondary schools for 6,786 students.
Healthcare: one hospital could accommodate up to 410 patients, and three clinics.
Trade: 25 stores and malls; 27 cafes, cafeterias, and restaurants collectively could serve up to 5,535 customers simultaneously. 10 warehouses could hold 4,430 tons of goods.
Culture: the Palace of Culture Energetik; a cinema; and a school of arts, with eight different societies.
Sports: 10 gyms, 10 shooting galleries, three indoor swimming-pools, two stadiums.
Recreation: one park, 35 playgrounds, 18,136 trees, 33,000 rose plants, 249,247 shrubs.
Industry: four factories with annual turnover of 477,000,000 rubles. One nuclear power plant with four reactors (plus two more planned).
Transportation: Yanov railway station, 167 urban buses, plus the nuclear power plant car park with 400 spaces.
Telecommunication: 2,926 local phones managed by the Pripyat Phone Company, plus 1,950 phones owned by Chernobyl power station's administration, Jupiter plant, and Department of Architecture and Urban Development.
Safety
A concern is whether it is safe to visit Pripyat and its surroundings. The Zone of Alienation is considered relatively safe to visit, and several Ukrainian companies offer guided tours around the area. In most places within the city, the level of radiation does not exceed an equivalent dose of 1 μSv (one microsievert) per hour.
Climate
The climate of Pripyat is designated as Dfb (Warm-summer humid continental climate) on the Köppen Climate Classification System.
In popular culture
Films
(Alphabetical by title)
The horror film Chernobyl Diaries (2012) was inspired by the Chernobyl disaster in 1986 and takes place in Pripyat.
The majority of the film Land of Oblivion (2011) was shot on location in Pripyat.
Pripyat is featured in the History Channel documentary Life After People.
The drone manufacturer DJI produced Lost City of Chernobyl (May 2015), a documentary film about the work of photographer and cinematographer Philip Grossman and his five-year project in Pripyat and the Zone of Exclusion.
Filmmaker Danny Cooke used a drone to capture shots of the abandoned amusement park, some residential shots of decaying walls, children's toys, and gas masks, and collected them in a 3-minute short film Postcards From Chernobyl (released in November 2014), while making footage for the CBS News 60 Minutes episode "Chernobyl: The Catastrophe That Never Ended" (early 2014).
With the help of drones, aerial views of Pripyat were shot and later edited to appear as a deserted London in the film The Girl with All the Gifts (2016).
The documentary White Horse (2008) was filmed in Pripyat.
Literature
(Alphabetical by artist)
Markiyan Kamysh's novel, Stalking the Atomic City: Life Among the Decadent and the Depraved of Chornobyl, is about illegal trips to the Chernobyl Exclusion Zone.
The Chernobyl Poems of Lyubov Sirota by the professor of Washington University Paul Brians
Lyubov Sirota’s novel "The Pripyat Syndrome"; Language: English, Publisher: Independently published (February 18, 2021), Paperback: 202 pages, – Lyubov Sirota (Author), Birgitta Ingemanson (Editor), Paul Brians (Editor), A. Yukhimenko (Illustrator), Natalia Ryumina (Translator)
Much of the James Rollins' novel The Last Oracle takes place in Pripyat and around Chernobyl. The story revolves around a team of American "Killer Scientist" special agents who must stop a terrorist plot to unleash on the world the radiation of Lake Karachay, during the installation of the new sarcophagus over the Chernobyl nuclear power plant.
The exclusion zone is the setting for Karl Schroeder's science fiction short story "The Dragon of Pripyat".
Music
(Alphabetical by artist)
The Ukrainian singer Alyosha recorded most of the video for her Eurovision 2010 entry, "Sweet People", in Pripyat.
Ash, the rock band from Northern Ireland, has a song titled "Pripyat" on their album A–Z Vol. 1.
The song "Dead City" () by the Ukrainian symphonic metal band DELIA is about Pripyat, and scenes from the music video were shot in the city. DELIA's vocalist, Anastasia Sverkunova, was born in Pripyat just before the Chernobyl disaster.
In 2006, the musician Example featured Pripyat in an 18-minute documentary about the ghost town and in the promotional video for his track "What We Made".
German composer and pianist Hauschka included a piece titled "Pripyat" on his 2014 album Abandoned City (on which each track is titled after a different abandoned place).
The Scottish post-rock band Mogwai included a song titled "Pripyat" on their album Atomic (2016), which is a soundtrack to Mark Cousins' documentary Atomic, Living in Dread and Promise.
The Irish folk-rock singer Christy Moore included a song called "Farewell to Pripyat" on his album Voyage (1989); the song is credited to Tim Dennehy.
Marillion guitarist Steve Rothery's first solo album is titled The Ghosts of Pripyat (2014).
The Australian rapper Seth Sentry included the two-part song "Pripyat" in his album Strange New Past (2015).
The English rock band Suede shot the music video for their song "Life Is Golden" in the city, including shots of the Azure Swimming Pool, the Pripyat amusement park, and the Polissya hotel.
The Italian rapper Caparezza has a song titled "Come Pripyat" on his album Exuvia (2021).
The Belarusian post-punk band Molchat Doma released a music video for their song "Waves", from the album Etazhi. The video was filmed in Pripyat with a series of drone shots showing famous landmarks of the abandoned city.
Television
(Alphabetical by series)
The 60 Minutes episode "Chernobyl: The Catastrophe That Never Ended" (early 2014) aired on CBS.
HBO's drama miniseries Chernobyl (2019) is based on the Chernobyl nuclear disaster. The scenes set in 1986 Pripyat were filmed in Vilnius, Lithuania.
In the Chris Tarrant: Extreme Railways Season 5 episode "Extreme Nuclear Railway: A Journey Too Far?" (episode 22), Chris Tarrant visits Chernobyl on his journey through Ukraine.
Discovery Science Channel's Mysteries of the Abandoned episode "Chernobyl's Deadly Secrets", produced and hosted by Philip Grossman, was filmed over four days in Pripyat and at the Chernobyl Nuclear Power Plant in 2017.
The Animal Planet nature investigation series River Monsters conducted an extensive 2013 investigation within Pripyat, the exclusion zone, and the Chernobyl Power Plant in search of a radioactive mutated wels catfish.
A David Attenborough documentary depicts natural life in Pripyat.
Video games
Call of Duty 4: Modern Warfare's (and its remaster's) single-player campaign includes levels "All Ghillied Up" and "One Shot, One Kill", which are set in Pripyat. The former is considered to be one of the greatest video game levels of all time.
The S.T.A.L.K.E.R. franchise is set around the Chornobyl Exclusion Zone and prominently features Pripyat, notably in S.T.A.L.K.E.R.: Call of Pripyat.
SCUM, developed by the Croatian studio Gamepires, features a radiation area that includes the fictional city of "Krsko", an accurate reproduction of Pripyat complete with its points of interest.
Transport
The city was served by Yaniv station on the Chernihiv–Ovruch railway. It was an important passenger hub of the line, located between Pripyat's southern suburb and Yaniv. The electric train terminus at Semikhody, built in 1988 in front of the nuclear plant, is currently the only operating station near Pripyat, connecting it to Slavutych.
Notable people
Markiyan Kamysh (born 1988), writer and illegal Chernobyl explorer ("stalker")
Vitali Klitschko (born 1971), politician, mayor of Kyiv and former professional boxer
Wladimir Klitschko (born 1976), former professional boxer
Alexander Sirota (born 1976), photographer, journalist and filmmaker
Lyubov Sirota (born 1956), poet, writer, playwright, journalist and translator
Gallery
See also
Cultural impact of the Chernobyl disaster
FC Stroitel Pripyat
References
External links
Pripyat-Info – Informational Portal: «PRIPYAT :: EXCLUSION ZONE»
Pripyat.com – Site created by former residents
25 years of satellite imagery over Chernobyl Pripyat map
pripyatpanorama.com – Pripyat in Panoramas Project
exploringthezone.com – 5-year project documenting the Pripyat and Chernobyl Zone
Pripyat-city.ru – Blog related to Pripyat
ChernobylGallery.com – Photographs of Pripyat and Chernobyl
– 2018 Footage of Pripyat and Chernobyl
Populated places established in the Ukrainian Soviet Socialist Republic
Cities and towns built in the Soviet Union
Cities in Kyiv Oblast
Ghost towns in the Chernobyl Exclusion Zone
Radioactively contaminated areas
Modern ruins
Ghost towns in Ukraine
Cities of regional significance in Ukraine
1970 establishments in Ukraine
Populated places established in 1970
Company towns in Ukraine
Socialist planned cities
Populated places disestablished in 1986
1986 disestablishments in Ukraine | Pripyat | Chemistry,Technology | 2,779 |
5,094,755 | https://en.wikipedia.org/wiki/Psi%20Cassiopeiae | Psi Cassiopeiae (ψ Cassiopeiae) is a binary star system in the northern constellation of Cassiopeia.
The primary component, ψ Cassiopeiae A, is an orange K-type giant with an apparent magnitude of +5.0. It is itself a double star, designated CCDM J01259+6808AB, with a fourteenth-magnitude star (component B) located 3 arcseconds from the primary. About 25 arcseconds away lies a 9.8-magnitude optical companion, CCDM J01259+6808CD, designated ψ Cassiopeiae B in older star catalogues; it is itself another double, comprising a 9.4-magnitude component C and a 10th-magnitude component D.
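To put these apparent magnitudes into perspective, the standard relation between magnitude difference and brightness, flux ratio = 10^(0.4·Δm), can be applied directly to the values above. The Python sketch below is an illustrative aside, not part of the source article.

# Brightness ratios implied by the apparent magnitudes quoted above.
# A difference of dm magnitudes corresponds to a flux ratio of 10**(0.4*dm).
def flux_ratio(mag_faint: float, mag_bright: float) -> float:
    """How many times brighter the brighter star appears."""
    return 10 ** (0.4 * (mag_faint - mag_bright))

print(f"A (+5.0) is ~{flux_ratio(14.0, 5.0):,.0f} times brighter than B (14th magnitude)")
print(f"C (9.4) is ~{flux_ratio(10.0, 9.4):.2f} times brighter than D (10th magnitude)")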
References
K-type giants
Binary stars
4
Cassiopeiae, Psi
Cassiopeia (constellation)
Durchmusterung objects
Cassiopeiae, 36
008491
006692
0399 | Psi Cassiopeiae | Astronomy | 201 |
8,696,574 | https://en.wikipedia.org/wiki/Aqueous%20normal-phase%20chromatography | Aqueous normal-phase chromatography (ANP) is a chromatographic technique in which the mobile-phase composition and polarity lie between those of reversed-phase chromatography (RP) and normal-phase chromatography (NP), while the stationary phase is polar.
Principle
In normal-phase chromatography, the stationary phase is polar and the mobile phase is nonpolar. In reversed phase the opposite is true; the stationary phase is nonpolar and the mobile phase is polar. Typical stationary phases for normal-phase chromatography are silica or organic moieties with cyano and amino functional groups. For reversed phase, alkyl hydrocarbons are the preferred stationary phase; octadecyl (C18) is the most common stationary phase, but octyl (C8) and butyl (C4) are also used in some applications. The designations for the reversed phase materials refer to the length of the hydrocarbon chain.
In normal-phase chromatography, the least polar compounds elute first and the most polar compounds elute last. The mobile phase consists of a nonpolar solvent such as hexane or heptane mixed with a slightly more polar solvent such as isopropanol, ethyl acetate or chloroform. Retention decreases as the amount of polar solvent in the mobile phase increases. In reversed-phase chromatography, the most polar compounds elute first and the more nonpolar compounds elute later. The mobile phase is generally a mixture of water and a miscible, polarity-modifying organic solvent such as methanol, acetonitrile or THF. Retention increases as the fraction of the polar solvent (water) in the mobile phase increases. Normal-phase chromatography retains molecules via an adsorptive mechanism and is used for the analysis of solutes readily soluble in organic solvents. Separation is achieved based on polarity differences among functional groups such as amines, acids and metal complexes, as well as on their steric properties, whereas in reversed-phase chromatography a partition mechanism typically governs separation by differences in non-polar character.
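In practice, retention in either mode is usually reported as the retention factor k = (tR - t0)/t0, where t0 is the column dead time. The Python sketch below illustrates this arithmetic for a hypothetical reversed-phase run; the analytes and retention times are invented for illustration and are not from the article.

# Retention factor k = (t_R - t_0) / t_0 and elution order for a
# hypothetical reversed-phase separation (values are illustrative).
def retention_factor(t_r: float, t_0: float) -> float:
    return (t_r - t_0) / t_0

t0 = 1.2  # assumed column dead time in minutes
analytes = {"benzoic acid": 2.2, "phenol": 3.1, "toluene": 8.4}

# On a reversed-phase column the most polar analyte elutes first.
for name, t_r in sorted(analytes.items(), key=lambda kv: kv[1]):
    print(f"{name:>12s}: t_R = {t_r:4.1f} min, k = {retention_factor(t_r, t0):.2f}")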
In aqueous normal-phase chromatography the support is a silica with a "hydride surface", which distinguishes it from the other silica support materials used in normal-phase, reversed-phase, or hydrophilic interaction chromatography. Most silica materials used for chromatography have a surface composed primarily of silanols (-Si-OH); on a "hydride surface" the terminal groups are primarily -Si-H. The hydride surface can also be functionalized with carboxylic acids and long-chain alkyl groups. Mobile phases for ANP use organic solvents (such as methanol or acetonitrile) as the bulk solvent, with a small amount of water as a polarity modifier; the mobile phase is thus both "aqueous" (water is present) and "normal-phase type" (less polar than the stationary phase). Polar solutes (such as acids and amines) are therefore more strongly retained, and their retention decreases as the amount of water in the mobile phase increases.
Typically the mobile phases are rich in organic solvent, with the organic (less polar) solvent making up at least 60% of the mobile phase to reach the minimum retention required. A true ANP stationary phase can function in both the reversed-phase and normal-phase modes with only the amount of water in the eluent varying; a continuum of solvent compositions can therefore be used, from 100% aqueous to pure organic. ANP retention has been demonstrated for a variety of polar compounds on the hydride-based stationary phases. Recent investigations have demonstrated that silica hydride materials carry a very thin water layer (about 0.5 monolayer), compared with HILIC phases that can have 6–8 monolayers. In addition, the substantial negative charge on the surface of hydride phases results from hydroxide ion adsorption from the solvent rather than from silanols.
Features
An interesting feature of these phases is that both polar and nonpolar compounds can be retained over some range of mobile phase composition (organic/aqueous). The retention mechanism of polar compounds has recently been shown to be the result of the formation of a hydroxide layer on the surface of the silica hydride. Thus positively charged analytes are attracted to the negatively charged surface, and other polar analytes are likely to be retained through displacement of hydroxide or other charged species on the surface. This property distinguishes it from a pure HILIC (hydrophilic interaction chromatography) column, where separation by polar differences is obtained through partitioning into a water-rich layer on the surface, or a pure RP stationary phase, on which separation by nonpolar differences in solutes is obtained with very limited secondary mechanisms operating.
Another important feature of the hydride-based phases is that for many analyses it is usually not necessary to use a high-pH mobile phase to analyze polar compounds such as bases. The aqueous component of the mobile phase usually contains 0.1 to 0.5% formic or acetic acid, which is compatible with detection techniques including mass spectrometry.
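For a sense of scale, the 0.1–0.5% (v/v) additive range corresponds to low-millimolar acid concentrations in the aqueous component. The conversion below uses standard handbook densities and molar masses, which are not given in the article, so the numbers are approximate.

# Approximate molarity of 0.1-0.5% (v/v) acid additives.
# Densities (g/mL) and molar masses (g/mol) are handbook values (assumed).
ADDITIVES = {"formic acid": (1.22, 46.03), "acetic acid": (1.05, 60.05)}

for name, (density, molar_mass) in ADDITIVES.items():
    for percent_v_v in (0.1, 0.5):
        ml_per_litre = percent_v_v / 100 * 1000          # mL of acid per litre
        molarity = ml_per_litre * density / molar_mass   # mol/L
        print(f"{percent_v_v:.1f}% v/v {name}: ~{molarity * 1000:.0f} mM")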
References
C. Kulsing, Y. Nolvachai, P.J. Marriott, R.I. Boysen, M.T. Matyska, J.J. Pesek, M.T.W. Hearn, J. Phys. Chem. B, 119 (2015) 3063–3069.
J. Soukup, P. Janas, P. Jandera, J. Chromatogr. A, 1286 (2013) 111–118.
Chromatography | Aqueous normal-phase chromatography | Chemistry | 1,235 |