id (int64) | url (string) | text (string) | source (string) | categories (string) | token_count (int64) |
|---|---|---|---|---|---|
300,278 | https://en.wikipedia.org/wiki/Integrated%20receiver/decoder | An integrated receiver/decoder (IRD) is an electronic device used to receive a radio-frequency signal and decode the digital information transmitted in it.
Consumer IRDs
Consumer IRDs, commonly called set-top boxes, are used by end users and are much cheaper than professional IRDs. To curb content piracy, they also lack many features and interfaces found in professional IRDs, such as outputs for uncompressed SDI video or ASI transport stream dumps. They are also designed to be more aesthetically pleasing.
Professional IRDs
Commonly found in radio, television, cable and satellite broadcasting facilities, the IRD is generally used for the reception of contribution feeds that are intended for re-broadcasting. The IRD is the interface between a receiving satellite dish or telco network and a broadcasting facility's video/audio infrastructure.
Professional IRDs have various features that consumer IRDs lack such as:
SDI outputs.
ASI inputs / outputs.
TSoIP inputs.
AES/EBU Audio decoding.
VBI reinsertion.
WSS data and pass through.
Transport stream demultiplexing.
Genlock input.
Frame synchronization of digital video output to analogue input.
Closed captions and VITS/ITS/VITC Insertion.
Video test pattern generator.
Remote management over LAN/WAN.
GPI interface - For sending external alarm triggers.
Rack mountable.
Uses
direct broadcast satellite (DBS) television applications like DirecTV, Astra or DishTV
fixed service satellite (FSS) applications like VideoCipher, DigiCipher, or PowerVu
digital audio radio satellite (DARS) applications like XM Satellite Radio and Sirius Satellite Radio
digital audio broadcasting (DAB) applications like Eureka 147 and IBOC
digital terrestrial television applications like DVB-T and ATSC
See also
ATSC tuner
Broadcast engineering
Set-top box
Television technology
Television terminology | Integrated receiver/decoder | Technology,Engineering | 386 |
5,382,739 | https://en.wikipedia.org/wiki/Vassili%20Samarsky-Bykhovets | Vasili Yevgrafovich Samarsky-Bykhovets (7 November 1803 – 31 May 1870) was a Russian mining engineer and the chief of the Russian Mining Engineering Corps between 1845 and 1861. The mineral samarskite (samarskite-Y, samarskite-Yb and calciosamarskite) and the chemical element samarium are named after him. He was the first person whose name was given to a chemical element.
Biography
Samarsky-Bykhovets was born into a noble family in the Tomsk Governorate, located in the Asian part of Russia east of the Ural Mountains. He received a military engineering education at the local Mining Cadet Corps, and after graduating in 1823 served in a military position at the Kolyvan-Resurrection plants and the associated mines in the Urals. In 1828, he was transferred to Saint Petersburg, where he consecutively assumed the positions of assistant in the Cabinet of His Imperial Majesty, chief clerk of the Mining Department, senior aide, and staff officer in the Corps of Mining Engineers. In 1834, he was promoted to the rank of captain and in 1845 to colonel. The next year he was appointed Chief of Staff of the Corps of Mining Engineers and remained in that position until 1861. While Chief of Staff, he began teaching at the Saint Petersburg Mining Institute and eventually became a member of its scientific council. He was promoted to Lieutenant General in 1860, and in 1861 became chairman of the Board of the Corps of Mining Engineers, as well as chairman of the Commission on the Revision of the Mining Charter. He took a three-month sabbatical leave in 1862 to attend an international scientific exhibition in London, and died in 1870.
Relation to samarskite
Samarsky-Bykhovets himself was not involved in the studies of samarskite and samarium. As a mining official, he merely granted the German mineralogist Gustav Rose and his brother Heinrich Rose access to mineral samples from the Urals. In 1839, Gustav Rose described a new mineral in those samples and named it uranotantalum, believing that its composition was dominated by the chemical element tantalum. In 1846–47, his brother and fellow mineralogist Heinrich Rose found the major component of the mineral to be niobium and suggested altering the name to avoid confusion. The newly chosen name, samarskite, acknowledged the role of Samarsky-Bykhovets in granting access to the mineral samples. Later, several lanthanide elements were isolated from this mineral, and one of them, samarium, was named after the mineral, once again honoring Samarsky-Bykhovets.
References
1803 births
1870 deaths
Mining engineers
Military personnel of the Russian Empire
Engineers from the Russian Empire
Rare earth scientists | Vassili Samarsky-Bykhovets | Engineering | 560 |
26,587,782 | https://en.wikipedia.org/wiki/Scotophor | A scotophor is a material showing reversible darkening and bleaching when subjected to certain types of radiation. The name means dark bearer, in contrast to phosphor, which means light bearer. Scotophors show tenebrescence (reversible photochromism) and darken when subjected to intense radiation such as sunlight. Minerals showing such behavior include hackmanite (a variety of sodalite), spodumene and tugtupite. Some pure alkali halides also show such behavior.
Scotophors can be sensitive to light, particle radiation (e.g. electron beam – see cathodochromism), X-rays, or other stimuli. The induced absorption bands in the material, caused by F-centers created by electron bombardment, can be returned to their non-absorbing state, usually by light and/or heating.
Scotophors sensitive to electron beam radiation can be used instead of phosphors in cathode ray tubes, creating a light-absorbing rather than light-emitting image. Such displays are viewable in bright light, and the image is persistent until erased.
The image is retained until erased by flooding the scotophor with high-intensity infrared light or by electro-thermal heating. Using conventional deflection and raster-formation circuitry, a bi-level image can be created on the screen and retained even when power is removed from the CRT.
In Germany, scotophor tubes were developed by Telefunken as the Blauschrift-Röhre ("blue-writing tube"). The heating mechanism was a layer of mica with a transparent thin film of tungsten. When the image was to be erased, current was applied to the tungsten layer; even very dark images could be erased in 5–10 seconds.
Scotophors typically require a higher-intensity electron beam to change color than phosphors need to emit light. Screens with layers of a scotophor and a phosphor are therefore possible, where the phosphor, flooded with a dedicated wide-beam low-intensity electron gun, produces backlight for the scotophor, and optionally highlights selected areas of the screen if bombarded with electrons with higher energy but still insufficient to penetrate the phosphor and change the scotophor state.
The main application of scotophors was in plan position indicators, specialized military radar displays. The achievable brightness allowed the image to be projected onto a larger surface. The ability to quickly record a persistent trace also found use in some oscilloscopes.
Materials
Potassium chloride is used as a scotophor with the designation P10 in dark-trace CRTs (also called dark-trace tubes, color center tubes, cathodochromic displays or scotophor tubes), e.g. in the Skiatron. This CRT replaced the conventional light-emitting phosphor layer on the face of the tube screen with a scotophor such as potassium chloride (KCl). Potassium chloride has the property that when a crystal is struck by an electron beam, that spot changes from translucent white to a dark magenta color. By backlighting such a CRT with a white or green circular fluorescent lamp, the resulting image appears as black information against a green background or as magenta information against a white background. A benefit, aside from the semi-permanent storage of the displayed image, is that the brightness of the resultant display is limited only by the illumination source and optics. The F-centers, however, have a tendency to aggregate, and the screen needs to be heated to fully erase the image.
The image on KCl can be formed by depositing a charge of over 0.3 microcoulombs per square centimeter with an electron beam at an energy of typically 8–10 keV. Erasure can be achieved in less than a second by heating the scotophor to 150 °C.
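These figures imply concrete electron counts; a quick back-of-the-envelope computation (only the dose and beam energy come from the text above, the arithmetic is added for illustration):

```python
# How many electrons, and how much beam energy, correspond to the
# ~0.3 microcoulomb/cm^2 writing dose at ~10 keV quoted above.

E_CHARGE = 1.602e-19      # elementary charge, coulombs
dose_c_per_cm2 = 0.3e-6   # writing charge density, C/cm^2 (from the text)
beam_energy_ev = 10e3     # electron energy, eV (upper end of 8-10 keV)

electrons_per_cm2 = dose_c_per_cm2 / E_CHARGE
# Each electron of energy E (in eV) deposits E * e joules, so the energy
# density is simply dose (C/cm^2) times accelerating voltage (V = J/C).
energy_j_per_cm2 = dose_c_per_cm2 * beam_energy_ev

print(f"{electrons_per_cm2:.2e} electrons/cm^2")  # ~1.9e12
print(f"{energy_j_per_cm2:.2e} J/cm^2")           # ~3e-3 J/cm^2
```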
KCl was the most common scotophor in use. Other alkali halides show the same property: potassium bromide absorbs at the bluish end of the spectrum, resulting in a brown trace, while sodium chloride produces a trace colored more towards orange.
Another scotophor used in dark-trace CRTs is a modified sodalite, fired in reducing atmosphere or having some chlorides substituted with sulfate ions. Its advantage against KCl is its higher writing speed, less fatigue, and the F-centers do not aggregate, therefore it is possible to substantially erase the screen with light only, without heating.
See also
Solarization (disambiguation)
Photochromism
References
Display technology
Optical materials
Chromism | Scotophor | Physics,Chemistry,Materials_science,Engineering | 971 |
15,929,035 | https://en.wikipedia.org/wiki/Telavancin | Telavancin (trade name Vibativ by Cumberland Pharmaceuticals) is a bactericidal lipoglycopeptide for use in MRSA or other Gram-positive infections. Telavancin is a semi-synthetic derivative of vancomycin.
The FDA approved the drug in September 2009 for complicated skin and skin structure infections (cSSSI), and in June 2013 for hospital-acquired and ventilator-associated bacterial pneumonia caused by Staphylococcus aureus.
History
On 19 October 2007, the US Food and Drug Administration (FDA) issued an approvable letter for telavancin. Its developer, Theravance, submitted a complete response to the letter, and the FDA assigned a Prescription Drug User Fee Act (PDUFA) target date of 21 July 2008.
On 19 November 2008, an FDA anti-infective drug advisory committee concluded that they would recommend telavancin be approved by the FDA.
The FDA approved the drug on 11 September 2009 for complicated skin and skin structure infections (cSSSI).
Theravance also submitted telavancin to the FDA for a second indication, nosocomial pneumonia, sometimes referred to as hospital-acquired pneumonia (HAP). On 30 November 2012, an FDA advisory panel endorsed approval of a once-daily formulation of telavancin for nosocomial pneumonia when other alternatives are not suitable. However, telavancin did not win the advisory committee's recommendation as first-line therapy for this indication. The committee indicated that the trial data did not provide "substantial evidence" of telavancin's safety and efficacy in hospital-acquired pneumonia, including ventilator-associated pneumonia caused by the Gram-positive organisms Staphylococcus aureus and Streptococcus pneumoniae. On 21 June 2013, the FDA approved telavancin to treat patients with hospital-acquired pneumonia, but indicated it should be used only when alternative treatments are not suitable. FDA staff had indicated telavancin carries a "substantially higher risk for death" for patients with kidney problems or diabetes compared to vancomycin.
On 11 March 2013, Clinigen Group plc and Theravance, Inc. announced that they had entered into an exclusive commercialization agreement in the European Union (EU) and certain other European countries for VIBATIV® (telavancin) for the treatment of nosocomial (hospital-acquired) pneumonia, including ventilator-associated pneumonia, known or suspected to be caused by methicillin-resistant Staphylococcus aureus (MRSA) when other alternatives are not suitable.
Mechanism of action
Like vancomycin, telavancin inhibits bacterial cell wall synthesis by binding to the D-Ala-D-Ala terminus of the peptidoglycan in the growing cell wall (see Pharmacology and chemistry of vancomycin). In addition, it disrupts bacterial membranes by depolarization.
Adverse effects
Common and generally mild adverse effects include nausea, vomiting, constipation, and headache.
In two clinical trials, telavancin showed a higher rate of kidney failure than vancomycin. It also showed teratogenic effects in animal studies.
Interactions
Telavancin inhibits the liver enzymes CYP3A4 and CYP3A5. No data regarding the clinical relevance are available.
References
Antibiotics
Halogen-containing natural products
Astellas Pharma
Chloroarenes | Telavancin | Biology | 717 |
158,715 | https://en.wikipedia.org/wiki/Radiator | A radiator is a heat exchanger used to transfer thermal energy from one medium to another for the purpose of cooling and heating. The majority of radiators are constructed for use in cars, buildings, and electronics.
A radiator is always a source of heat to its environment, although this may be for either the purpose of heating an environment, or for cooling the fluid or coolant supplied to it, as for automotive engine cooling and HVAC dry cooling towers. Despite the name, most radiators transfer the bulk of their heat via convection instead of thermal radiation.
History
The Roman hypocaust is an early example of a type of radiator for building space heating. Franz San Galli, a Prussian-born Russian businessman living in St. Petersburg, is credited with inventing the heating radiator around 1855, having received a radiator patent in 1857, but American Joseph Nason and Scot Rory Gregor developed a primitive radiator in 1841 and received a number of U.S. patents for hot water and steam heating.
Radiation and convection
Heat transfer from a radiator occurs by two mechanisms: thermal radiation and convection into flowing air or liquid. Conduction is not normally a major source of heat transfer in radiators. A radiator may even transfer heat by phase change, for example, drying a pair of socks. In practice, the term "radiator" refers to any of a number of devices in which a liquid circulates through exposed pipes (often with fins or other means of increasing surface area). The term "convector" refers to a class of devices in which the source of heat is not directly exposed.
To increase the surface area available for heat exchange with the surroundings, a radiator will have multiple fins, in contact with the tube carrying liquid pumped through the radiator. Air (or other exterior fluid) in contact with the fins carries off heat. If air flow is obstructed by dirt or damage to the fins, that portion of the radiator is ineffective at heat transfer.
Heating
Radiators are commonly used to heat buildings on the European continent. In a radiative central heating system, hot water or sometimes steam is generated in a central boiler and circulated by pumps through radiators within the building, where this heat is transferred to the surroundings.
In some countries, portable radiators are commonly used to heat a single room, as a safer alternative to space heaters and fan heaters.
Heating, ventilation, and air conditioning
Radiators are used in dry cooling towers and closed-loop cooling towers for cooling buildings using liquid-cooled chillers for heating, ventilation, and air conditioning (HVAC) while keeping the chiller coolant isolated from the surroundings.
Engine cooling
Radiators are used for cooling internal combustion engines, mainly in automobiles but also in piston-engined aircraft, railway locomotives, motorcycles, stationary generating plants and other places where heat engines are used (watercraft, which have an unlimited supply of relatively cool water outside the hull, usually use liquid-to-liquid heat exchangers instead).
To cool down the heat engine, a coolant is passed through the engine block, where it absorbs heat from the engine. The hot coolant is then fed into the inlet tank of the radiator (located either on the top of the radiator, or along one side), from which it is distributed across the radiator core through tubes to another tank on the opposite end of the radiator. As the coolant passes through the radiator tubes on its way to the opposite tank, it transfers much of its heat to the tubes which, in turn, transfer the heat to the fins that are lodged between each row of tubes. The fins then release the heat to the ambient air. Fins are used to greatly increase the contact surface of the tubes to the air, thus increasing the exchange efficiency. The cooled liquid is fed back to the engine, and the cycle repeats. Normally, the radiator does not reduce the temperature of the coolant back to ambient air temperature, but it is still sufficiently cooled to keep the engine from overheating.
This coolant is usually water-based, with the addition of glycols to prevent freezing and other additives to limit corrosion, erosion and cavitation. However, the coolant may also be an oil. The first engines used thermosiphons to circulate the coolant; today, however, all but the smallest engines use pumps.
Up to the 1980s, radiator cores were often made of copper (for fins) and brass (for tubes, headers, and side-plates), while tanks could also be made of brass or of plastic, often a polyamide. Starting in the 1970s, use of aluminium increased, eventually taking over the vast majority of vehicular radiator applications. The main inducements for aluminium are reduced weight and cost.
Since air has a lower heat capacity and density than liquid coolants, a fairly large volume flow rate (relative to the coolant's) must be blown through the radiator core to capture the heat from the coolant. Radiators often have one or more fans that blow air through the radiator. To save fan power consumption in vehicles, radiators are often behind the grille at the front end of a vehicle. Ram air can give a portion or all of the necessary cooling air flow when the coolant temperature remains below the system's designed maximum temperature, and the fan remains disengaged.
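The disparity between the two flow rates can be made concrete by comparing volumetric heat capacities; the following sketch uses textbook property values (assumed for illustration, not taken from this article):

```python
# Why the air-side volume flow must be much larger than the coolant's:
# compare the volumetric heat capacities rho * c_p of air and water.

rho_air, cp_air = 1.2, 1005.0        # kg/m^3 and J/(kg K), air at ~20 C
rho_water, cp_water = 998.0, 4180.0  # plain water as a stand-in coolant

vhc_air = rho_air * cp_air           # ~1.2e3 J/(m^3 K)
vhc_water = rho_water * cp_water     # ~4.2e6 J/(m^3 K)
print(f"volume-flow ratio for equal heat and equal dT: "
      f"{vhc_water / vhc_air:,.0f}x")  # roughly 3,500x

# Example: rejecting 50 kW of engine heat with a 10 K air temperature rise
q_watts, dT = 50e3, 10.0
air_flow = q_watts / (vhc_air * dT)  # m^3/s of air through the core
print(f"required air flow: {air_flow:.1f} m^3/s")  # ~4.1 m^3/s
```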
Electronics and computers
As electronic devices become smaller, the problem of dispersing waste heat becomes more difficult. Tiny radiators known as heat sinks are used to convey heat from the electronic components into a cooling air stream. Heat sinks do not use water; rather, they conduct heat away from the source. High-performance heat sinks use copper for its superior thermal conductivity. Heat is transferred to the air by conduction and convection; a relatively small proportion of heat is transferred by radiation owing to the low temperature of semiconductor devices compared to their surroundings.
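Heat-sink sizing is commonly reasoned about with a series thermal-resistance model; a minimal sketch with illustrative (assumed) resistance values:

```python
# Series thermal-resistance model: the junction temperature equals the
# ambient temperature plus dissipated power times the summed resistances
# from junction to case, case to sink, and sink to air.

p_watts = 30.0         # heat dissipated by the device (assumed)
t_ambient = 35.0       # deg C (assumed)
r_junction_case = 0.5  # K/W, device datasheet value (assumed)
r_case_sink = 0.2      # K/W, thermal interface material (assumed)
r_sink_air = 1.1       # K/W, heat sink + airflow rating (assumed)

t_junction = t_ambient + p_watts * (r_junction_case + r_case_sink + r_sink_air)
print(f"junction temperature: {t_junction:.1f} C")  # 89.0 C here
```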
Radiators are also used in liquid cooling loops for rejecting heat.
Spacecraft
Radiators are found as components of some spacecraft. These radiators work by radiating heat energy away as light (generally infrared given the temperatures at which spacecraft try to operate) because in the vacuum of space neither convection nor conduction can work to transfer heat away. On the International Space Station, these can be seen clearly as large white panels attached to the main truss. They can be found on both crewed and uncrewed craft.
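The underlying physics is the Stefan–Boltzmann law; a small sketch with assumed panel parameters (area, emissivity and temperatures are illustrative, not actual ISS values):

```python
# Radiated power of a spacecraft radiator panel:
# P = emissivity * sigma * A * (T_panel^4 - T_surroundings^4)

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

area = 10.0        # m^2, one radiator panel (assumed)
emissivity = 0.85  # typical white thermal coating (assumed)
t_panel = 280.0    # K, panel operating temperature (assumed)
t_space = 4.0      # K, deep-space background

p_radiated = emissivity * SIGMA * area * (t_panel**4 - t_space**4)
print(f"radiated power: {p_radiated / 1e3:.1f} kW")  # ~3.0 kW
```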
See also
Heat sink
Heat spreader
Heat pipe
Heat pump
Radiatori – small, squat pasta shaped to resemble radiators
References
Heating, ventilation, and air conditioning
Plumbing
Residential heating appliances
Russian inventions
Vehicle parts | Radiator | Technology,Engineering | 1,392 |
12,994,627 | https://en.wikipedia.org/wiki/Rothmund%E2%80%93Thomson%20syndrome | Rothmund–Thomson syndrome (RTS) is a rare autosomal recessive skin condition.
Several cases associated with osteosarcoma have been reported. A hereditary basis has been implicated: mutations in the gene for the DNA helicase RECQL4 cause problems during the initiation of DNA replication.
Signs and symptoms
Sun-sensitive rash with prominent poikiloderma and telangiectasias
Juvenile cataracts
Saddle nose
Congenital bone defects, including short stature and radial ray anomalies such as absent thumbs
Hair growth problems (absent eyelashes, eyebrows and/or hair)
Hypogonadism (not well documented)
Hypodontia
Calcium problems (not documented in journals)
Ear problems (not documented in journals but identified by patients in support groups)
Increased risk of osteosarcoma
The skin is normal at birth. Between 3 and 6 months of age, the affected infant develops poikiloderma on the cheeks. This characteristic "rash", which all RTS patients have, can develop on the arms, legs and buttocks. "Poikiloderma consists of areas of increased and decreased pigmentation, prominent blood vessels, and thinning of the skin."
Accelerated aging
In humans, individuals with RTS, and carrying the RECQL4 germline mutation, can have several clinical features of accelerated aging. These features include atrophic skin and pigment changes, alopecia, osteopenia, cataracts and an increased incidence of cancer. Also in mice, RECQL4 mutants show features of accelerated aging.
Causes
RTS is caused by a mutation of the RECQL4 gene, located at chromosome 8q24.3. The disorder is inherited in an autosomal recessive manner. This means the defective gene responsible for the disorder is located on an autosome (chromosome 8 is an autosome), and two copies of the defective gene (one inherited from each parent) are required in order to be born with the disorder. The parents of an individual with an autosomal recessive disorder both carry one copy of the defective gene, but usually do not experience any signs or symptoms of the disorder.
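The inheritance arithmetic can be made explicit by enumerating the Punnett square for two carrier parents; a small sketch (the allele labels are illustrative):

```python
# Enumerate the carrier-x-carrier Punnett square described above,
# with 'R' = working RECQL4 allele and 'r' = mutated allele.
from itertools import product
from collections import Counter

parent1 = parent2 = ("R", "r")  # both parents are unaffected carriers
offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))

for genotype, count in sorted(offspring.items()):
    status = "affected" if genotype == "rr" else "unaffected"
    print(f"{genotype}: {count}/4 ({status})")
# RR: 1/4 unaffected, Rr: 2/4 unaffected carriers, rr: 1/4 affected
```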
DNA repair
RECQL4 has a crucial role in DNA end resection that is the initial step required for homologous recombination (HR)-dependent double-strand break repair. When RECQL4 is depleted, HR-mediated repair and 5' end resection are severely reduced in vivo. RECQL4 also appears to be necessary for other forms of DNA repair including non-homologous end joining, nucleotide excision repair and base excision repair. The association of deficient RECQL4-mediated DNA repair with accelerated aging is consistent with the DNA damage theory of aging.
Diagnosis
Management
History
The condition was originally described by August von Rothmund (1830–1906) in 1868. Matthew Sydney Thomson (1894–1969) published further descriptions in 1936.
See also
Poikiloderma vasculare atrophicans
List of cutaneous conditions
List of radiographic findings associated with cutaneous conditions
References
External links
GeneReviews/NCBI/NIH/UW entry on Rothmund-Thomson Syndrome
Autosomal recessive disorders
DNA replication and repair-deficiency disorders
Genodermatoses
Rare diseases
Syndromes affecting the skin
Progeroid syndromes
Syndromes affecting stature
Syndromes affecting the eye
Diseases named after discoverers | Rothmund–Thomson syndrome | Biology | 697 |
29,071,142 | https://en.wikipedia.org/wiki/P%20ring | The P ring forms part of the basal body of the bacterial appendage known as the flagellum. It is known to be embedded in the peptidoglycan cell wall. Together with the L ring, it has the function of anchoring the flagellum to the cell surface.
References
Bacteria | P ring | Biology | 64 |
11,570,295 | https://en.wikipedia.org/wiki/Pyrrhoderma%20noxium | Pyrrhoderma noxium is a species of plant pathogen. It attacks a wide range of tropical plants and is the cause of brown root rot disease. It has been described as "an aggressive and destructive pathogen". The pathogen invades roots through contact between the roots of a potential host and the substrate on which the fungus is growing.
Infection
P. noxium attacks the roots and lower trunk of trees, causing roots to rot and resulting in dieback (another term for root rot). It causes brown root rot disease, which afflicts over 200 plant species in tropical and subtropical regions. The pathogen can survive in the soil and on dead plant material for more than a decade, and the primary source of infection to other plants and trees is from contact with infected root material to the healthy plant's root.
Treatment
The fungicides Calixin, Bayleton, and Nustar inhibit the growth of P. noxium on agar medium; however, they were not ultimately found to be effective in eradicating the fungus in infested wood. A mixture of ammonia and urea, as well as volatile ammonia alone, was found to kill P. noxium in infested wood. Trees infected with P. noxium around which strains of Trichoderma were applied in mulch started to grow new roots within 6–8 weeks of application, and the mycelium of P. noxium was eradicated after 8–11 weeks of exposure.
Distribution
P. noxium has been recorded from tropical regions, as well as Japan and Australia, but has not been reported from South America.
List of countries where P. noxium is present
Hosts
Brown root rot
P. noxium causes brown root rot, which is a serious problem in Taiwan and Hong Kong.
See also
Root rot
References
Fungal plant pathogens and diseases
Fungi described in 1932
Fungus species | Pyrrhoderma noxium | Biology | 385 |
12,465,832 | https://en.wikipedia.org/wiki/C6H10 | The molecular formula C6H10 (molar mass: 82.14 g/mol) may refer to any of the following compounds, all of which share two degrees of unsaturation (see the calculation after this list):
Cyclohexene
2,3-Dimethyl-1,3-butadiene
1,2-Hexadiene
1,3-Hexadiene
1,4-Hexadiene
1,5-Hexadiene
2,4-Hexadiene
1-Hexyne
2-Hexyne
3-Hexyne
Methylenecyclopentane
4-Methyl-1-pentyne
4-Methyl-2-pentyne
3-Methyl-1-pentyne
3,3-Dimethyl-1-butyne
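All of the compounds above must account for the same degree of unsaturation, which can be computed from the formula alone (a standard calculation, shown here for context):
$$\mathrm{DoU} = \frac{2C + 2 - H}{2} = \frac{2 \cdot 6 + 2 - 10}{2} = 2,$$
so each isomer contains two double bonds, one triple bond, or one ring plus one double bond, exactly the pattern seen in the list.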
Molecular formulas | C6H10 | Physics,Chemistry | 149 |
17,953,561 | https://en.wikipedia.org/wiki/Mexican%20Petroleum%20Institute | The Mexican Petroleum Institute (in Spanish: Instituto Mexicano del Petróleo, IMP) is a public research organization dedicated to developing technical solutions, conducting basic and applied research and providing specialized training to Pemex, the state-owned government-granted monopoly in Mexico's petroleum industry.
The Institute was founded on 23 August 1965 by federal decree and is based in Mexico City. Despite facing significant budget constraints in recent years and being accused by the noted physicist Leopoldo García-Colín of depending excessively on foreign technology, it was the leading patent applicant among Mexican institutions in 2005 and houses one of the most advanced microscopes on the planet.
Noted researchers
Leopoldo García-Colín: physicist awarded the 1988 National Prize for Arts and Sciences.
Luis E. Miramontes: co-inventor of the first oral contraceptive; awarded the National Prize in Chemistry "Andrés Manuel del Rio" in 1986.
Octavio Novaro: recipient of the 1993 UNESCO Science Prize for his contributions to understanding catalysis phenomena.
Alexander Balankin: physicist awarded the 2002 National Prize for Arts and Sciences and the 2005 UNESCO Science Prize for his remarkable ability to relate his research in fractal mechanics to technological applications that have provided great benefits to Mexico and worldwide.
See also
Indian Institute of Petroleum
French Institute of Petroleum
Vietnam Petroleum Institute
References
Research institutes in Mexico
Chemical research institutes
Pemex
Petroleum industry in Mexico
Research institutes established in 1965
1965 establishments in Mexico | Mexican Petroleum Institute | Chemistry | 293 |
32,323,110 | https://en.wikipedia.org/wiki/Paul%20Vredeman%20de%20Vries | Paul Vredeman de Vries (Antwerp, 1567 – Amsterdam, 1617), was a Flemish painter and draughtsman who specialised in architectural paintings and, in particular, church interiors.
Life
He was a son of the Dutch-born architect, painter and engineer Hans Vredeman de Vries, who at the time was working in the Southern Netherlands. In 1564 his father had fled from Mechelen, where he had been living, to Antwerp in order to escape the Inquisition. Paul trained with his father, who as a painter was interested in perspective and therefore painted mainly architectural paintings.
He is known to have collaborated with his father in the completion of large assignments. He worked from 1592 to 1595 in Danzig, where his father was employed in the design of defensive works, and from 1596 to 1599 in Prague, where he painted the ceilings and the reception rooms of Emperor Rudolf II's castle. He was active in Amsterdam from 1599 to 1617. There is a record of a notice of marriage between him and Mayken Godelet issued in Amsterdam and dated 24 April 1601. In 1649 she was buried in the Nieuwe Kerk. The year of her husband's death is uncertain; it may have been 1630, when his designs for furniture were published, or later, but before 1636.
He was the master of Hendrick Aerts and Isaak van den Blocke, both artists of Flemish descent who were living in Gdansk. His older brother Salomon Vredeman de Vries (1556–1604) was also an architectural draughtsman and painter who collaborated with him and their father.
Work
Vredeman de Vries specialised in architectural paintings and, in particular, imaginary church interiors and palaces. His paintings show a meticulous attention to perspective, and in his mature years he developed an original poetic vision. Richly ornate palace facades and colonnades opening onto flowered courtyards with fountains and sculptures provide the setting for scenes from ancient or sacred history. The rigour of the geometric composition, which incorporates the principles of contemporary treatises on perspective, is tempered by an atmospheric rendering of the light with its silvery shimmers. The interiors are a reconstruction of the interior decoration of his day, such as chairs trimmed with leather, embossed leather hangings and canopies with valances.
He collaborated with his father on large assignments as well as with other artists such as Frans Francken the Younger, Jan Brueghel the Elder, Dirck de Quade van Ravesteyn (while in Prague), Pieter-Franz Isaaksz and Adriaen van Nieulandt (in Amsterdam).
Vredeman de Vries was also active as an engraver. He contributed some of the 31 engravings to his father's architectural treatise entitled Architectura which was published in 1606. He further made the engravings for a series entitled Verscheyden Schrynwerck als Portalen, Kleerkassen, Buffeten (Various woodwork for porches, wardrobes and cupboards), published in 1630. These designs for beds, buffets, cabinets and interior porches in Louis XIII-style would influence Dutch interior design well into the second half of the 17th century.
References
There is an oil painting by Paul Vredeman de Vries in the Hunterian Art Gallery in Glasgow, Scotland, Trajan and The Widow.
External links
1567 births
16th-century Flemish painters
Flemish Baroque painters
17th-century Dutch painters
Painters from Antwerp
1617 deaths | Paul Vredeman de Vries | Engineering | 716 |
2,939,751 | https://en.wikipedia.org/wiki/Methylparaben | Methylparaben (methyl paraben), one of the parabens, is a preservative with the chemical formula CH3(C6H4(OH)COO). It is the methyl ester of p-hydroxybenzoic acid.
Natural occurrences
Methylparaben serves as a pheromone for a variety of insects and is a component of queen mandibular pheromone.
It is a pheromone in wolves, produced during estrus and associated with the behavior of alpha male wolves preventing other males from mounting females in heat.
Uses
Methylparaben is an anti-fungal agent often used in a variety of cosmetics and personal-care products. It is also used as a food preservative and has the E number E218.
Methylparaben is commonly used as a fungicide in Drosophila food media at 0.1%. To Drosophila, methylparaben is toxic at higher concentrations, has an estrogenic effect (mimicking estrogen in rats and having anti-androgenic activity), and at 0.2% slows the growth rate in the larval and pupal stages.
Safety
There is controversy about whether methylparaben or propylparabens are harmful at concentrations typically used in body care or cosmetics. Methylparaben and propylparaben are considered generally recognized as safe (GRAS) by the USFDA for food and cosmetic antibacterial preservation. Methylparaben is readily metabolized by common soil bacteria, making it completely biodegradable.
Methylparaben is readily absorbed from the gastrointestinal tract or through the skin. It is hydrolyzed to p-hydroxybenzoic acid and rapidly excreted in urine without accumulating in the body. Acute toxicity studies have shown that methylparaben is practically non-toxic by both oral and parenteral administration in animals. In a population with normal skin, methylparaben is practically non-irritating and non-sensitizing; however, allergic reactions to ingested parabens have been reported. A 2008 study found no competitive binding for human estrogen and androgen receptors for methylparaben, but varying levels of competitive binding were seen with butyl- and isobutyl-paraben.
Studies indicate that methylparaben applied on the skin may react with UVB, leading to increased skin aging and DNA damage.
References
External links
Methylparaben at Hazardous Substances Data Bank
Methylparaben at Household Products Database
European Commission Scientific Committee on Consumer Products Extended Opinion on the Safety Evaluation of Parabens (2005)
Methyl esters
Parabens
E-number additives
Semiochemicals
Insect pheromones
| Methylparaben | Chemistry | 561 |
1,022,185 | https://en.wikipedia.org/wiki/Baldwin%20effect | In evolutionary biology, the Baldwin effect describes an effect of learned behaviour on evolution. James Mark Baldwin and others suggested that an organism's ability to learn new behaviours (e.g. to acclimatise to a new stressor) will affect its reproductive success and will therefore have an effect on the genetic makeup of its species through natural selection. It posits that subsequent selection might reinforce the originally learned behaviors, if adaptive, into more in-born, instinctive ones. Though this process appears similar to Lamarckism, that view proposes that living things inherit their parents' acquired characteristics. The Baldwin effect only posits that learning ability, which is genetically based, is another variable in, and a contributor to, environmental adaptation. First proposed during the Eclipse of Darwinism in the late 19th century, the effect has been independently proposed several times, and today it is generally recognized as part of the modern synthesis.
"A New Factor in Evolution"
The effect, then unnamed, was put forward in 1896 in a paper "A New Factor in Evolution" by the American psychologist James Mark Baldwin, with a second paper in 1897. The paper proposed a mechanism for specific selection for general learning ability. As the historian of science Robert Richards explains:
Selected offspring would tend to have an increased capacity for learning new skills rather than being confined to genetically coded, relatively fixed abilities. In effect, it places emphasis on the fact that the sustained behaviour of a species or group can shape the evolution of that species. The "Baldwin effect" is better understood in evolutionary developmental biology literature as a scenario in which a character or trait change occurring in an organism as a result of its interaction with its environment becomes gradually assimilated into its developmental genetic or epigenetic repertoire. In the words of the philosopher of science Daniel Dennett:
An update to the Baldwin effect was developed by Jean Piaget, Paul Weiss, and Conrad Waddington in the 1960s–1970s. This new version included an explicit role for the social in shaping subsequent natural change in humans (both evolutionary and developmental), with reference to alterations of selection pressures.
Subsequent research shows that Baldwin was not the first to identify the process; Douglas Spalding mentioned it in 1873.
Controversy and acceptance
Initially Baldwin's ideas were not incompatible with the prevailing, but uncertain, ideas about the mechanism of transmission of hereditary information and at least two other biologists put forward very similar ideas in 1896. In 1901, Maurice Maeterlinck referred to behavioural adaptations to prevailing climates in different species of bees as "what had merely been an idea, therefore, and opposed to instinct, has thus by slow degrees become an instinctive habit". The Baldwin effect theory subsequently became more controversial, with scholars divided between "Baldwin boosters" and "Baldwin skeptics". The theory was first called the "Baldwin effect" by George Gaylord Simpson in 1953. Simpson "admitted that the idea was theoretically consistent, that is, not inconsistent with the modern synthesis", but he doubted that the phenomenon occurred very often, or if so, could be proven to occur. In his discussion of the reception of the Baldwin-effect theory Simpson points out that the theory appears to provide a reconciliation between a neo-Darwinian and a neo-Lamarckian approach and that "Mendelism and later genetic theory so conclusively ruled out the extreme neo-Lamarckian position that reconciliation came to seem unnecessary". In 1942, the evolutionary biologist Julian Huxley promoted the Baldwin effect as part of the modern synthesis, saying the concept had been unduly neglected by evolutionists.
In the 1960s, the evolutionary biologist Ernst Mayr contended that the Baldwin effect theory was untenable because
the argument is stated in terms of the individual genotype, whereas what is really exposed to the selection pressure is a phenotypically and genetically variable population;
it is not sufficiently emphasized that the degree of modification of the phenotype is in itself genetically controlled;
it is assumed that phenotypic rigidity is selectively superior to phenotypic flexibility.
In 1987 Geoffrey Hinton and Steven Nowlan demonstrated by computer simulation that learning can accelerate evolution, and they associated this with the Baldwin effect.
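The Hinton–Nowlan result is easy to reproduce in outline. Below is a compact sketch of their setup, not their original code; the population size, trial count and generation count are assumptions, and the fitness scaling follows the commonly cited 1 + 19 × (remaining trials)/(total trials) form. Genomes have 20 loci that are hard-wired 0/1 or plastic '?'; plastic loci are filled in by random "learning" guesses, and genomes that find the all-ones target earlier score higher:

```python
import math
import random

LOCI, POP, TRIALS, GENERATIONS = 20, 1000, 1000, 20

def random_genome():
    # Per locus: '?' with probability 1/2, otherwise 0 or 1 with 1/4 each.
    return tuple(random.choice((0, 1, "?", "?")) for _ in range(LOCI))

def fitness(genome):
    if 0 in genome:
        return 1.0   # a hard-wired wrong allele can never be learned around
    k = genome.count("?")
    if k == 0:
        return 20.0  # innately correct: no learning needed
    # Each trial guesses all k plastic loci at random, so a trial succeeds
    # with probability 2**-k; sample the geometric waiting time directly
    # instead of simulating every guess.
    p = 2.0 ** -k
    first_success = int(math.log(1.0 - random.random()) / math.log(1.0 - p))
    if first_success >= TRIALS:
        return 1.0
    return 1.0 + 19.0 * (TRIALS - first_success) / TRIALS

random.seed(0)
population = [random_genome() for _ in range(POP)]
for gen in range(GENERATIONS):
    weights = [fitness(g) for g in population]
    children = []
    for _ in range(POP):
        mother, father = random.choices(population, weights=weights, k=2)
        cut = random.randrange(1, LOCI)  # single-point crossover
        children.append(mother[:cut] + father[cut:])
    population = children
    alleles = [a for g in population for a in g]
    print(f"gen {gen:2d}: correct={alleles.count(1) / len(alleles):.2f} "
          f"plastic={alleles.count('?') / len(alleles):.2f}")
# Typically, correct alleles spread while '?' declines: learning smooths
# the fitness landscape and thereby accelerates selection.
```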
Paul Griffiths suggests two reasons for the continuing interest in the Baldwin effect. The first is the role mind is understood to play in the effect. The second is the connection between development and evolution in the effect. Baldwin's account of how neurophysiological and conscious mental factors may contribute to the effect brings into focus the question of the possible survival value of consciousness.
Still, David Depew observed in 2003, "it is striking that a rather diverse lot of contemporary evolutionary theorists, most of whom regard themselves as supporters of the Modern Synthesis, have of late become 'Baldwin boosters'".
According to Dennett, also in 2003, recent work has rendered the Baldwin effect "no longer a controversial wrinkle in orthodox Darwinism". Potential genetic mechanisms underlying the Baldwin effect have been proposed for the evolution of natural (genetically determinant) antibodies. In 2009, empirical evidence for the Baldwin effect was provided from the colonisation of North America by the house finch.
The Baldwin effect has been incorporated into the extended evolutionary synthesis.
Comparison with genetic assimilation
The Baldwin effect has been confused with, and sometimes conflated with, a different evolutionary theory also based on phenotypic plasticity, C. H. Waddington's genetic assimilation. The Baldwin effect includes genetic accommodation, of which one type is genetic assimilation. Science historian Laurent Loison has written that "the Baldwin effect and genetic assimilation, even if they are quite close, should not be conflated".
See also
Notes
References
External links
Baldwinian evolution
Bibliography
Extended evolutionary synthesis
Evolutionary biology
Selection | Baldwin effect | Biology | 1,161 |
22,745,050 | https://en.wikipedia.org/wiki/Winters%27s%20formula | Winters's formula, named after R. W. Winters, is a formula used to evaluate respiratory compensation when analyzing acid-base disorders in the presence of metabolic acidosis. It can be given as:
$$P_{\mathrm{CO_2}} = 1.5 \times [\mathrm{HCO_3^-}] + 8 \pm 2,$$
where $[\mathrm{HCO_3^-}]$ is given in units of mEq/L and $P_{\mathrm{CO_2}}$ will be in units of mmHg.
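As a worked example (the numbers are illustrative, not from the article): for a serum bicarbonate of 12 mEq/L, the formula predicts
$$P_{\mathrm{CO_2}} = 1.5 \times 12 + 8 \pm 2 = 26 \pm 2\ \text{mmHg},$$
so a measured $P_{\mathrm{CO_2}}$ between roughly 24 and 28 mmHg would indicate adequate respiratory compensation.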
History
Dr. R. W. Winters was an American physician and a graduate of Yale Medical School. He was a professor of pediatrics at the Columbia University College of Physicians and Surgeons. In 1974 he was awarded the Borden Award gold medal by the American Academy of Pediatrics.
Dr. R. W. Winters conducted an experiment in the 1960s on 60 patients with varying degrees of metabolic acidosis. He aimed to empirically determine a mathematical expression representing the effect of respiratory compensation during metabolic acidosis. He measured the blood pH, plasma PCO2, blood base excess, and plasma bicarbonate concentrations, focusing on the relationship between plasma PCO2 and plasma bicarbonate. Winters's formula was derived from a linear regression of this relationship.
Physiology
There are four primary acid-base derangements that can occur in the human body - metabolic acidosis, metabolic alkalosis, respiratory acidosis, and respiratory alkalosis. These are characterized by a serum pH below 7.4 (acidosis) or above 7.4 (alkalosis), and whether the cause is from a metabolic process or respiratory process. If the body experiences one of these derangements, the body will try to compensate by inducing an opposite process (e.g. induced respiratory alkalosis for a primary metabolic acidosis).
Respiratory compensation is one of three major processes the body uses to react to derangements in acid-base status (above or below pH 7.4). It is slower than the initial bicarbonate buffer system in the blood, but faster than renal compensation. Respiratory compensation usually begins within minutes to hours, but alone will not completely return arterial pH to a normal value (7.4). Winters's formula quantifies the amount of respiratory compensation during metabolic acidosis.
During metabolic acidosis, a decrease in pH stimulates chemoreceptors. Peripheral chemoreceptors are found in the aortic and carotid bodies and respond to changes in the PaCO2, the arterial partial pressure of carbon dioxide. Central chemoreceptors are found in the brainstem and respond primarily to decreased pH in the cerebrospinal fluid. In response to decreased pH, these chemoreceptors lead to an increase in minute ventilation and increased elimination of carbon dioxide. A decrease in carbon dioxide lowers PaCO2 and pushes arterial pH towards normal.
Clinical use
One difficulty in evaluating acid-base derangements is the presence of multiple pathologies. A patient may present with a metabolic acidosis alone, but they may also have a concomitant respiratory acidosis. Winters's formula gives an expected value for the patient's PCO2; the patient's actual (measured) PCO2 is then compared to this. Using this information, physicians may elucidate additional causes of the acid-base derangement and identify treatment options which may not otherwise have been considered.
If the two values correspond, respiratory compensation is considered to be adequate.
If the measured PCO2 is higher than the calculated value, there is also a primary respiratory acidosis.
If the measured PCO2 is lower than the calculated value, there is also a primary respiratory alkalosis.
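A minimal sketch of this decision rule (not a clinical tool; the function names are illustrative):

```python
# Winters's formula with its +/- 2 mmHg band, and the comparison rule above.

def winters_expected_paco2(bicarb_meq_l: float) -> tuple[float, float]:
    """Expected PaCO2 range (mmHg) for a given serum bicarbonate (mEq/L)."""
    center = 1.5 * bicarb_meq_l + 8.0
    return center - 2.0, center + 2.0

def interpret(bicarb_meq_l: float, measured_paco2: float) -> str:
    low, high = winters_expected_paco2(bicarb_meq_l)
    if measured_paco2 > high:
        return "concomitant primary respiratory acidosis"
    if measured_paco2 < low:
        return "concomitant primary respiratory alkalosis"
    return "adequate respiratory compensation"

# Example: bicarbonate 12 mEq/L -> expected 24-28 mmHg
print(winters_expected_paco2(12.0))  # (24.0, 28.0)
print(interpret(12.0, 35.0))         # respiratory acidosis on top
```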
References
Respiratory therapy
Mathematics in medicine | Winters's formula | Mathematics | 731 |
31,041,801 | https://en.wikipedia.org/wiki/Heck%E2%80%93Matsuda%20reaction | The Heck–Matsuda (HM) reaction is an organic reaction and a type of palladium catalysed arylation of olefins that uses arenediazonium salts as an alternative to aryl halides and triflates.
The use of arenediazonium salts presents some advantages over traditional aryl halide electrophiles; for example, phosphine ligands are not required, which removes the need for anaerobic conditions and makes the reaction more practical and easier to handle. Additionally, the reaction can be performed with or without a base and is often faster than traditional Heck protocols.
Allylic alcohols, conjugated alkenes, unsaturated heterocycles and unactivated alkenes are capable of being arylated with arenediazonium salts using simple catalysts such as palladium acetate (Pd(OAc)2) or tris(dibenzylideneacetone)dipalladium(0) (Pd2dba3) at room temperature in air, and in benign and conventional solvents.
In addition to the intermolecular variant of the HM reaction, intramolecular cyclization processes have also been developed for the construction of a range of oxygen and nitrogen heterocycles.
The catalytic cycle for the Heck–Matsuda arylation reaction has four main steps: oxidative addition, migratory insertion (carbopalladation), syn β-hydride elimination, and reductive elimination. The proposed Heck catalytic cycle involving cationic palladium with diazonium salts was reinforced by electrospray ionization mass spectrometry (ESI-MS) studies by Correia and co-workers. These results also show the complex interactions that occur in the coordination sphere of palladium during the Heck reaction with arenediazonium salts.
A related reaction is the Meerwein arylation, which historically precedes the Heck reaction. Meerwein arylations often use copper salts, but may in some cases be performed without a transition metal.
See also
Palladium-catalyzed coupling reactions
Meerwein arylation
References
Organic reactions
Name reactions | Heck–Matsuda reaction | Chemistry | 452 |
162,132 | https://en.wikipedia.org/wiki/Derangement | In combinatorial mathematics, a derangement is a permutation of the elements of a set in which no element appears in its original position. In other words, a derangement is a permutation that has no fixed points.
The number of derangements of a set of size $n$ is known as the subfactorial of $n$ or the $n$-th derangement number or $n$-th de Montmort number (after Pierre Remond de Montmort). Notations for subfactorials in common use include $!n$, $D_n$, $d_n$, or $n¡$.
For $n > 0$, the subfactorial $!n$ equals the nearest integer to $n!/e$, where $n!$ denotes the factorial of $n$ and $e$ is Euler's number.
The problem of counting derangements was first considered by Pierre Raymond de Montmort in his Essay d'analyse sur les jeux de hazard in 1708; he solved it in 1713, as did Nicholas Bernoulli at about the same time.
Example
Suppose that a professor gave a test to 4 students – A, B, C, and D – and wants to let them grade each other's tests. Of course, no student should grade their own test. How many ways could the professor hand the tests back to the students for grading, such that no student receives their own test back? Out of 24 possible permutations (4!) for handing back the tests,
{| style="font:125% monospace;line-height:1;border-collapse:collapse;"
|ABCD,
|ABDC,
|ACBD,
|ACDB,
|ADBC,
|ADCB,
|-
|BACD,
|BADC,
|BCAD,
|BCDA,
|BDAC,
|BDCA,
|-
|CABD,
|CADB,
|CBAD,
|CBDA,
|CDAB,
|CDBA,
|-
|DABC,
|DACB,
|DBAC,
|DBCA,
|DCAB,
|DCBA.
|}
there are only 9 derangements: BADC, BCDA, BDAC, CADB, CDAB, CDBA, DABC, DCAB, DCBA. In every other permutation of this 4-member set, at least one student gets their own test back.
Another version of the problem arises when we ask for the number of ways n letters, each addressed to a different person, can be placed in n pre-addressed envelopes so that no letter appears in the correctly addressed envelope.
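The count of 9 can be verified by brute force; a short sketch:

```python
# Count permutations of ABCD in which no letter stays in its
# original position.
from itertools import permutations

students = "ABCD"
derangements = [
    "".join(p) for p in permutations(students)
    if all(x != y for x, y in zip(p, students))
]
print(len(derangements))  # 9
print(derangements)
# ['BADC', 'BCDA', 'BDAC', 'CADB', 'CDAB', 'CDBA', 'DABC', 'DCAB', 'DCBA']
```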
Counting derangements
Counting derangements of a set amounts to the hat-check problem, in which one considers the number of ways in which n hats (call them h1 through hn) can be returned to n people (P1 through Pn) such that no hat makes it back to its owner.
Each person may receive any of the n − 1 hats that is not their own. Call the hat which person P1 receives hi and consider hi's owner: Pi receives either P1's hat, h1, or some other. Accordingly, the problem splits into two possible cases:
Pi receives a hat other than h1. This case is equivalent to solving the problem with n − 1 people and n − 1 hats because for each of the n − 1 people besides P1 there is exactly one hat from among the remaining n − 1 hats that they may not receive (for any Pj besides Pi, the unreceivable hat is hj, while for Pi it is h1). Another way to see this is to rename h1 to hi, where the derangement is more explicit: for any j from 2 to n, Pj cannot receive hj.
Pi receives h1. In this case the problem reduces to n − 2 people and n − 2 hats, because P1 received his hat and Pi received h1's hat, effectively putting both out of further consideration.
For each of the n − 1 hats that P1 may receive, the number of ways that P2, ..., Pn may all receive hats is the sum of the counts for the two cases.
This gives us the solution to the hat-check problem: stated algebraically, the number $!n$ of derangements of an $n$-element set is
$$!n = (n - 1)\bigl({!(n-1)} + {!(n-2)}\bigr)$$
for $n \geq 2$,
where $!0 = 1$ and $!1 = 0$.
The number of derangements of small lengths is given in the table below.
There are various other expressions for $!n$, equivalent to the formula given above. These include
$$!n = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!}$$
for $n \geq 0$
and
$$!n = \left[\frac{n!}{e}\right] = \left\lfloor \frac{n!}{e} + \frac{1}{2} \right\rfloor$$
for $n \geq 1$,
where $[x]$ is the nearest integer function and $\lfloor x \rfloor$ is the floor function.
Other related formulas include
$$!n = \left\lfloor \frac{n! + 1}{e} \right\rfloor, \quad n \geq 1,$$
and
$$!n = \int_0^\infty (x - 1)^n e^{-x} \, dx.$$
The following recurrence also holds:
$$!n = n \cdot {!(n-1)} + (-1)^n, \quad n \geq 1.$$
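These expressions can be cross-checked numerically; a minimal sketch comparing the two-term recurrence, the alternating-sum formula and the nearest-integer shortcut:

```python
from math import factorial, e

def subfactorial(n: int) -> int:
    """!n via !n = (n-1)(!(n-1) + !(n-2)), with !0 = 1 and !1 = 0."""
    a, b = 1, 0  # !0 and !1
    if n == 0:
        return a
    for k in range(2, n + 1):
        a, b = b, (k - 1) * (a + b)
    return b

def subfactorial_sum(n: int) -> int:
    """!n = n! * sum_{i=0..n} (-1)^i / i!, evaluated in exact integers."""
    return sum((-1) ** i * (factorial(n) // factorial(i)) for i in range(n + 1))

for n in range(10):
    nearest = round(factorial(n) / e) if n >= 1 else 1  # holds for n >= 1
    assert subfactorial(n) == subfactorial_sum(n) == nearest

print([subfactorial(n) for n in range(10)])
# [1, 0, 1, 2, 9, 44, 265, 1854, 14833, 133496]
```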
Derivation by inclusion–exclusion principle
One may derive a non-recursive formula for the number of derangements of an $n$-set as well. For $1 \leq k \leq n$ we define $S_k$ to be the set of permutations of $n$ objects that fix the $k$-th object. Any intersection of a collection of $i$ of these sets fixes a particular set of $i$ objects and therefore contains $(n-i)!$ permutations. There are $\binom{n}{i}$ such collections, so the inclusion–exclusion principle yields
$$|S_1 \cup \dotsb \cup S_n| = \sum_{i=1}^{n} (-1)^{i+1} \binom{n}{i} (n-i)! = \sum_{i=1}^{n} (-1)^{i+1} \frac{n!}{i!},$$
and since a derangement is a permutation that leaves none of the n objects fixed, this implies
$$!n = n! - |S_1 \cup \dotsb \cup S_n| = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!}.$$
On the other hand, $n! = \sum_{i=0}^{n} \binom{n}{i} \, {!(n-i)}$, since we can choose $i$ elements to be in their own place and
derange the other $n - i$ elements in just $!(n-i)$ ways, by definition.
Growth of number of derangements as n approaches ∞
From
$$!n = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!}$$
and
$$e^x = \sum_{i=0}^{\infty} \frac{x^i}{i!},$$
by substituting $x = -1$ one immediately obtains that
$$\lim_{n \to \infty} \frac{!n}{n!} = \sum_{i=0}^{\infty} \frac{(-1)^i}{i!} = \frac{1}{e} \approx 0.3679\ldots$$
This is the limit of the probability that a randomly selected permutation of a large number of objects is a derangement. The probability converges to this limit extremely quickly as $n$ increases, which is why $!n$ is the nearest integer to $n!/e$. A semi-log graph of $n!$ and $!n$ shows that the derangement curve lags the permutation curve by an almost constant value.
More information about this calculation and the above limit may be found in the article on the statistics of random permutations.
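The limit can also be illustrated by sampling random permutations directly (a Monte Carlo sketch, added for illustration):

```python
# Estimate the fraction of random permutations with no fixed point;
# it approaches 1/e ~ 0.3679 already for modest n.
import random

def is_derangement(perm):
    return all(v != i for i, v in enumerate(perm))

def estimate(n: int, samples: int = 100_000) -> float:
    base = list(range(n))
    hits = 0
    for _ in range(samples):
        random.shuffle(base)
        hits += is_derangement(base)
    return hits / samples

random.seed(1)
for n in (4, 8, 16):
    print(n, estimate(n))  # all close to 0.3679
```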
Asymptotic expansion in terms of Bell numbers
An asymptotic expansion for the number of derangements in terms of Bell numbers is as follows:
where $m$ is any fixed positive integer and $B_k$ denotes the $k$-th Bell number. Moreover, the constant implied by the big $O$ term does not exceed $B_{m+1}$.
Generalizations
The problème des rencontres asks how many permutations of a size-n set have exactly k fixed points.
Derangements are an example of the wider field of constrained permutations. For example, the ménage problem asks if n opposite-sex couples are seated man-woman-man-woman-... around a table, how many ways can they be seated so that nobody is seated next to his or her partner?
More formally, given sets A and S, and some sets U and V of surjections A → S, we often wish to know the number of pairs of functions (f, g) such that f is in U and g is in V, and for all a in A, f(a) ≠ g(a); in other words, where for each f and g, there exists a derangement φ of S such that f(a) = φ(g(a)).
Another generalization is the following problem:
How many anagrams with no fixed letters of a given word are there?
For instance, for a word made of only two different letters, say n letters A and m letters B, the answer is, of course, 1 or 0 according to whether n = m or not, for the only way to form an anagram without fixed letters is to exchange all the A with B, which is possible if and only if n = m. In the general case, for a word with $n_1$ letters $X_1$, $n_2$ letters $X_2$, ..., $n_r$ letters $X_r$, it turns out (after a proper use of the inclusion–exclusion formula) that the answer has the form
$$\int_0^\infty P_{n_1}(x) \, P_{n_2}(x) \cdots P_{n_r}(x) \, e^{-x} \, dx,$$
for a certain sequence of polynomials $P_n$, where $P_n$ has degree $n$. But the above answer for the case $r = 2$ gives an orthogonality relation, whence the $P_n$'s are the Laguerre polynomials (up to a sign that is easily decided).
In particular, for the classical derangements, one has that
$$!n = \frac{\Gamma(n+1, -1)}{e} = \frac{\int_{-1}^{\infty} x^n e^{-x} \, dx}{e},$$
where $\Gamma(s, x)$ is the upper incomplete gamma function.
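The two-letter claim above, and small multiset cases generally, can be checked by brute force (a sketch; the three-letter value is computed here, not quoted from the article):

```python
# Count anagrams of a word in which no position keeps its original letter.
from itertools import permutations

def anagrams_no_fixed(word: str) -> int:
    return sum(
        all(x != y for x, y in zip(p, word))
        for p in set(permutations(word))  # dedupe repeated letters
    )

print(anagrams_no_fixed("AABB"))    # 1  (only BBAA works)
print(anagrams_no_fixed("AAB"))     # 0  (n != m, so no solution)
print(anagrams_no_fixed("AABBCC"))  # 10
```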
Computational complexity
It is NP-complete to determine whether a given permutation group (described by a given set of permutations that generate it) contains any derangements.
{| class="wikitable collapsible collapsed" style="margin:0; width:100%"
|+ Table of factorial and derangement values
|-
! scope="col" |
! scope="col" class="nowrap" | Permutations,
! scope="col" class="nowrap" | Derangements,
! scope="col" |
|-
| style="text-align: center" | 0
| 1
=1×100
| 1
=1×100
| = 1
|-
| style="text-align: center" | 1
| 1
=1×100
| 0
| = 0
|-
| style="text-align: center" | 2
| 2
=2×100
| 1
=1×100
| = 0.5
|-
| style="text-align: center" | 3
| 6
=6×100
| 2
=2×100
|align="right"| ≈0.33333 33333
|-
| style="text-align: center" | 4
| 24
=2.4×101
| 9
=9×100
| = 0.375
|-style="border-top:2px solid #aaaaaa;"
| style="text-align: center" | 5
| 120
=1.20×102
| 44
=4.4×101
|align="right"| ≈0.36666 66667
|-
| style="text-align: center" | 6
| 720
=7.20×102
| 265
=2.65×102
|align="right"| ≈0.36805 55556
|-
| style="text-align: center" | 7
| 5,040
=5.04×103
| 1,854
≈1.85×103
|align="right"| ≈0.36785,71429
|-
| style="text-align: center" | 8
| 40,320
≈4.03×104
| 14,833
≈1.48×104
|align="right"| ≈0.36788 19444
|-
| style="text-align: center" | 9
| 362,880
≈3.63×105
| 133,496
≈1.33×105
|align="right"| ≈0.36787 91887
n | n! | !n | !n / n!
10 | 3,628,800 (≈3.63×10^6) | 1,334,961 (≈1.33×10^6) | ≈0.36787 94643
11 | 39,916,800 (≈3.99×10^7) | 14,684,570 (≈1.47×10^7) | ≈0.36787 94392
12 | 479,001,600 (≈4.79×10^8) | 176,214,841 (≈1.76×10^8) | ≈0.36787 94413
13 | 6,227,020,800 (≈6.23×10^9) | 2,290,792,932 (≈2.29×10^9) | ≈0.36787 94412
14 | 87,178,291,200 (≈8.72×10^10) | 32,071,101,049 (≈3.21×10^10) | ≈0.36787 94412
15 | 1,307,674,368,000 (≈1.31×10^12) | 481,066,515,734 (≈4.81×10^11) | ≈0.36787 94412
16 | 20,922,789,888,000 (≈2.09×10^13) | 7,697,064,251,745 (≈7.70×10^12) | ≈0.36787 94412
17 | 355,687,428,096,000 (≈3.56×10^14) | 130,850,092,279,664 (≈1.31×10^14) | ≈0.36787 94412
18 | 6,402,373,705,728,000 (≈6.40×10^15) | 2,355,301,661,033,953 (≈2.36×10^15) | ≈0.36787 94412
19 | 121,645,100,408,832,000 (≈1.22×10^17) | 44,750,731,559,645,106 (≈4.48×10^16) | ≈0.36787 94412
20 | 2,432,902,008,176,640,000 (≈2.43×10^18) | 895,014,631,192,902,121 (≈8.95×10^17) | ≈0.36787 94412
21 | 51,090,942,171,709,440,000 (≈5.11×10^19) | 18,795,307,255,050,944,540 (≈1.88×10^19) | ≈0.36787 94412
22 | 1,124,000,727,777,607,680,000 (≈1.12×10^21) | 413,496,759,611,120,779,881 (≈4.13×10^20) | ≈0.36787 94412
23 | 25,852,016,738,884,976,640,000 (≈2.59×10^22) | 9,510,425,471,055,777,937,262 (≈9.51×10^21) | ≈0.36787 94412
24 | 620,448,401,733,239,439,360,000 (≈6.20×10^23) | 228,250,211,305,338,670,494,289 (≈2.28×10^23) | ≈0.36787 94412
25 | 15,511,210,043,330,985,984,000,000 (≈1.55×10^25) | 5,706,255,282,633,466,762,357,224 (≈5.71×10^24) | ≈0.36787 94412
26 | 403,291,461,126,605,635,584,000,000 (≈4.03×10^26) | 148,362,637,348,470,135,821,287,825 (≈1.48×10^26) | ≈0.36787 94412
27 | 10,888,869,450,418,352,160,768,000,000 (≈1.09×10^28) | 4,005,791,208,408,693,667,174,771,274 (≈4.01×10^27) | ≈0.36787 94412
28 | 304,888,344,611,713,860,501,504,000,000 (≈3.05×10^29) | 112,162,153,835,443,422,680,893,595,673 (≈1.12×10^29) | ≈0.36787 94412
29 | 8,841,761,993,739,701,954,543,616,000,000 (≈8.84×10^30) | 3,252,702,461,227,859,257,745,914,274,516 (≈3.25×10^30) | ≈0.36787 94412
30 | 265,252,859,812,191,058,636,308,480,000,000 (≈2.65×10^32) | 97,581,073,836,835,777,732,377,428,235,481 (≈9.76×10^31) | ≈0.36787 94412
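The ratio column's convergence to 1/e ≈ 0.36787 94412 can be checked directly. The following Python sketch (the helper name is ours, not from any cited source) computes !n via the recurrence !n = n·!(n−1) + (−1)^n and reproduces the table's columns:

    from math import factorial

    def subfactorial(n: int) -> int:
        """Number of derangements of n items, via !n = n*!(n-1) + (-1)**n."""
        d = 1  # !0 = 1
        for k in range(1, n + 1):
            d = k * d + (-1) ** k
        return d

    for n in range(10, 31):
        print(n, factorial(n), subfactorial(n), subfactorial(n) / factorial(n))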
Footnotes
References
External links
Permutations
Fixed points (mathematics)
Integer sequences | Derangement | Mathematics | 4,358 |
32,150,687 | https://en.wikipedia.org/wiki/Vitamin%20B12-binding%20domain | In molecular biology, the vitamin B12-binding domain is a protein domain which binds to cobalamin (vitamin B12). It can bind two different forms of the cobalamin cofactor, with cobalt bonded either to a methyl group (methylcobalamin) or to 5'-deoxyadenosine (adenosylcobalamin). Cobalamin-binding domains are mainly found in two families of enzymes present in animals and prokaryotes, which perform distinct kinds of reactions at the cobalt-carbon bond. Enzymes that require methylcobalamin carry out methyl transfer reactions. Enzymes that require adenosylcobalamin catalyse reactions in which the first step is the cleavage of adenosylcobalamin to form cob(II)alamin and the 5'-deoxyadenosyl radical, and thus act as radical generators. In both types of enzymes the B12-binding domain uses a histidine to bind the cobalt atom of cobalamin cofactors. This histidine is embedded in a DXHXXG sequence, the most conserved primary sequence motif of the domain. Proteins containing the cobalamin-binding domain include:
Animal and prokaryotic methionine synthase (), which catalyse the transfer of a methyl group from methyl-cobalamin to homocysteine, yielding enzyme-bound cob(I)alamin and methionine.
Animal and prokaryotic methylmalonyl-CoA mutase (), which are involved in the degradation of several amino acids, odd-chain fatty acids and cholesterol via propionyl-CoA to the tricarboxylic acid cycle.
Prokaryotic lysine 5,6-aminomutase ().
Prokaryotic glutamate mutase ().
Prokaryotic methyleneglutarate mutase ().
Prokaryotic isobutyryl-CoA mutase ().
The core structure of the cobalamin-binding domain is characterised by a five-stranded alpha/beta (Rossmann) fold, which consists of five parallel beta-strands surrounded by 4-5 alpha helices in three layers (alpha/beta/alpha). Upon binding cobalamin, important elements of the binding site appear to become structured, including an alpha-helix that forms on one side of the cleft accommodating the nucleotide 'tail' of the cofactor. In cobalamin, the cobalt atom can be either free (dmb-off) or bound to dimethylbenzimidazole (dmb-on) according to the pH. When bound to the cobalamin-binding domain, the dimethylbenzimidazole ligand is replaced by the active histidine (His-on) of the DXHXXG motif. The replacement of dimethylbenzimidazole by histidine allows switching between the catalytic and activation cycles. In methionine synthase the cobalamin cofactor is sandwiched between the cobalamin-binding domain and an approximately 90-residue N-terminal domain forming a helical bundle comprising two pairs of antiparallel helices. This N-terminal domain forms a 4-helical bundle cap; in the conversion to the active conformation of this enzyme, the 4-helical cap rotates to allow the cobalamin cofactor to bind the activation domain.
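As an illustration of how the DXHXXG motif described above can be located, here is a minimal Python sketch that scans a sequence with a regular expression; the fragment shown is invented for the example and is not a real cobalamin-binding sequence:

    import re

    # D-X-H-X-X-G, where X stands for any amino acid (one-letter codes)
    MOTIF = re.compile(r"D.H..G")

    sequence = "MKTLLDVHLMGAAKDAHAAGWTR"  # hypothetical fragment

    for m in MOTIF.finditer(sequence):
        # report 1-based positions, as is usual for protein sequences
        print(f"DXHXXG motif at {m.start() + 1}-{m.end()}: {m.group()}")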
References
Protein domains | Vitamin B12-binding domain | Biology | 732 |
74,761,545 | https://en.wikipedia.org/wiki/Thermoascus%20verrucosus | Thermoascus verrucosus is a species of fungus in the genus Thermoascus in the order Eurotiales.
References
Thermoascaceae
Fungi described in 1975
Fungus species | Thermoascus verrucosus | Biology | 45 |
15,945,893 | https://en.wikipedia.org/wiki/Stage%20machinery | Stage machinery, also known as stage mechanics, comprises the mechanical devices used to create special effects in theatrical productions, including scene changes, lowering actors through the stage floor (traps) and enabling actors to 'fly' over the stage.
Alexandra Palace Theatre, London and the Gaiety Theatre, Isle of Man are two theatres which have retained stage machinery of all types under the stage.
Scene Changing
The wings of a theatre stage had to be at least half the width of the stage on each side of the proscenium arch, and the fly system for flying scenery had to be twice the height of the stage.
Drum and Shaft
This consisted of a shaft around which was built one or more circular drums which had a much larger diameter than the shaft. A rope wound round the drum was pulled in order to rotate the shaft and if there was more than one drum on the shaft, several pieces of scenery could be moved at the same time to raise the scenery wings and backdrops.
Slote/Sloat
This was a pair of vertical runners used to raise or lower a long profile of low scenery such as a groundrow, pieces of scenery made of canvas stretched over wood and used to represent items such as water or flowers, through a narrow slot in the stage floor.
Column Wave
The column wave, developed by the Italian architect Nicola Sabbatini, was a 16th-century stage machine used to provide the appearance of waves on the sea.
Bridge
This was a heavy wooden platform with counterweights which were used to raise and lower either heavy pieces of scenery or a group of actors, from below the stage to stage level.
Scruto
Scruto consisted of narrow strips of wood attached side by side on canvas material forming a continuous sheet which could be rolled. The scruto could be mounted vertically and rolled up or down to change the scenery or horizontally in the stage floor to form a trap cover.
Traps
Anapiesma was the ancient Greek version of the stage trap we know today. It was a concealed opening under the stage floor, where actors and props would be hidden before they appeared on stage. The joists of the stage floor were cut and the opening was concealed in different ways, depending on the type of trap. In the 19th century many different kinds of traps were used. All except the Corsican trap were located downstage near the proscenium arch.
The trap room is the large space below the stage where actors prepared to make their entrance and where the winches, drums and other machinery needed to operate traps and scenery were kept. It was referred to as "hell".
Newspaper advertisements looked for trap performers and newspaper notices for shows might advertise how high a performer flew out of a trap.
Grave Trap
This trap was positioned centrally and was named after its use in Shakespeare's Hamlet. It measured about 6 by 3 feet and consisted of a platform below the stage which could be raised or lowered.
Star Trap
These were counterweighted traps which could be used to allow actors playing supernatural beings, such as ghosts in melodrama and demons and fairies in pantomime, to appear suddenly.
The hole through which the actor appeared consisted of triangular flaps, hinged with leather, which opened upwards, resembling a star. The actor stood on a small platform below the trap and counterweights of up to 200 kg, attached to the platform, were raised by stage hands using ropes, at which point the platform moved up rapidly and the actor 'flew' through the trap. The trap closed immediately with no visible opening, giving the illusion that the actor had appeared through the solid stage floor. The star traps were hazardous. The first pantomime at Alexandra Palace Theatre, 'The Yellow Dwarf', had to be delayed when an actor twisted his spine and sprained muscles in his back in preparation for the role. Despite this, star traps were still used in the first half of the 20th century until banned by the actors' union Equity.
Bristle Trap
To create bristle traps the wood in the stage floor was replaced by bristles which were painted to match the stage floor.
Vampire Trap
This trap was invented for James Planché's 1820 adaptation of Polidori's The Vampyre. It involved two hinged traps in the stage floor that an actor could step through in order to vanish from the stage. The trap then immediately closed, giving the impression that the actor was passing through solid matter.
Leap Trap
This trap consisted of two hinged traps in scenery that an actor could step through in a single jump to either enter or leave the stage. It closed immediately, giving the impression that the actor was passing through solid matter.
Corsican Trap
These traps used a counterweighted platform and slatted shutters, sometimes made of scruto, which allowed an actor to rise through the stage floor while at the same time moving across it. It was developed for the play The Corsican Brothers by Dion Boucicault, in which the ghost of a murdered man rose slowly across the stage and through the stage floor to haunt his twin brother. It was first played at the Princess's Theatre, London, in 1852. It consisted of a bristle trap set between two long sliders positioned across the stage, the first drawing the trap across the stage and the second closing behind. The actor stood on a small truck which ran along an inclined track below the stage which started 6 feet below the stage and rose to stage level. The only working Corsican trap in the world now is at the Gaiety Theatre in the Isle of Man, where they also have a model demonstrating how it works.
Cauldron Trap
This trap, named from the witches' scene in Macbeth, was usually just a square opening through which items could be passed into a bottomless cauldron.
Corner Traps
These had an area of about 2 feet square, covered by a piece of scruto and would have been situated at each side of the stage near the proscenium arch. They could be used to raise or lower a person through the stage. This idea was further developed in Italy in the late 14th century using ropes and pulleys so that many actors could descend or ascend together.
Flying machines
Theatrical machinery was used by the Greeks in the 5th century BC to lower actors to the stage. In England, by the end of the 18th century, diagrams of complicated flights were drawn and by the mid 19th century the fly systems used consisted of pulleys and counterweights. Towards the end of the 19th century, George Kirby founded a company specifically for equipment used for flying actors and produced the effects needed to fly actors in the early productions of Peter Pan. George's son Joseph continued the business and founded Kirby's Flying Ballet troupe, which performed in the first half of the 20th century. The lines to which the actors were attached were known as Kirby lines.
References
Scenic design
Theatre | Stage machinery | Engineering | 1,384 |
507,960 | https://en.wikipedia.org/wiki/Solid%20geometry | Solid geometry or stereometry is the geometry of three-dimensional Euclidean space (3D space).
A solid figure is the region of 3D space bounded by a two-dimensional closed surface; for example, a solid ball consists of a sphere and its interior.
Solid geometry deals with the measurements of volumes of various solids, including pyramids, prisms (and other polyhedrons), cubes, cylinders, cones (and truncated cones).
History
The Pythagoreans dealt with the regular solids, but the pyramid, prism, cone and cylinder were not studied until the Platonists. Eudoxus established their measurement, proving the pyramid and cone to have one-third the volume of a prism and cylinder on the same base and of the same height. He was probably also the discoverer of a proof that the volume enclosed by a sphere is proportional to the cube of its radius.
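In modern notation, these results are the familiar formulas: a cone on the same base and height as a cylinder has volume (1/3)πr²h, and the sphere's volume (4/3)πr³ is proportional to the cube of the radius. A quick numerical sketch in Python, with arbitrary example dimensions:

    from math import pi

    r, h = 2.0, 5.0                 # arbitrary example radius and height

    cylinder = pi * r**2 * h        # base area times height
    cone = pi * r**2 * h / 3        # Eudoxus: one-third of the cylinder
    sphere = 4 / 3 * pi * r**3      # proportional to the cube of the radius

    print(cone / cylinder)          # 0.333..., the one-third ratio
    print(sphere / r**3)            # constant 4*pi/3, for any radius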
Topics
Basic topics in solid geometry and stereometry include:
incidence of planes and lines
dihedral angle and solid angle
the cube, cuboid, parallelepiped
the tetrahedron and other pyramids
prisms
octahedron, dodecahedron, icosahedron
cones and cylinders
the sphere
other quadrics: spheroid, ellipsoid, paraboloid and hyperboloids.
Plane vs Solid Geometry
Advanced topics include:
projective geometry of three dimensions (leading to a proof of Desargues' theorem by using an extra dimension)
further polyhedra
descriptive geometry.
List of solid figures
Whereas a sphere is the surface of a ball, for other solid figures it is sometimes ambiguous whether the term refers to the surface of the figure or the volume enclosed therein, notably for a cylinder.
Techniques
Various techniques and tools are used in solid geometry. Among them, analytic geometry and vector techniques have a major impact by allowing the systematic use of linear equations and matrix algebra, which are important for higher dimensions.
Applications
A major application of solid geometry and stereometry is in 3D computer graphics.
See also
Euclidean geometry
Shape
Solid modeling
Surface
Notes
References
Solid geometry | Solid geometry | Physics | 416 |
2,907,966 | https://en.wikipedia.org/wiki/Gamma%20matrices | In mathematical physics, the gamma matrices, also called the Dirac matrices, are a set of conventional matrices with specific anticommutation relations that ensure they generate a matrix representation of the Clifford algebra \mathrm{Cl}_{1,3}(\mathbb{R}). It is also possible to define higher-dimensional gamma matrices. When interpreted as the matrices of the action of a set of orthogonal basis vectors for contravariant vectors in Minkowski space, the column vectors on which the matrices act become a space of spinors, on which the Clifford algebra of spacetime acts. This in turn makes it possible to represent infinitesimal spatial rotations and Lorentz boosts. Spinors facilitate spacetime computations in general, and in particular are fundamental to the Dirac equation for relativistic particles. Gamma matrices were introduced by Paul Dirac in 1928.
In Dirac representation, the four contravariant gamma matrices are
\gamma^0 = \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix}, \qquad \gamma^k = \begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \end{pmatrix} \quad (k = 1, 2, 3).
\gamma^0 is the time-like, Hermitian matrix. The other three are space-like, anti-Hermitian matrices. More compactly, \gamma^0 = \sigma^3 \otimes I_2 and \gamma^j = i\sigma^2 \otimes \sigma^j, where \otimes denotes the Kronecker product and the \sigma^j (for j = 1, 2, 3) denote the Pauli matrices.
In addition, for discussions of group theory the identity matrix (I_4) is sometimes included with the four gamma matrices, and there is an auxiliary, "fifth" traceless matrix used in conjunction with the regular gamma matrices,
\gamma^5 \equiv i\gamma^0\gamma^1\gamma^2\gamma^3.
The "fifth matrix" \gamma^5 is not a proper member of the main set of four; it is used for separating nominal left and right chiral representations.
The gamma matrices have a group structure, the gamma group, that is shared by all matrix representations of the group, in any dimension, for any signature of the metric. For example, the 2×2 Pauli matrices are a set of "gamma" matrices in three dimensional space with metric of Euclidean signature (3, 0). In five spacetime dimensions, the four gammas, above, together with the fifth gamma-matrix to be presented below generate the Clifford algebra.
Mathematical structure
The defining property for the gamma matrices to generate a Clifford algebra is the anticommutation relation
\{\gamma^\mu, \gamma^\nu\} = \gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu = 2\eta^{\mu\nu} I_4,
where the curly brackets represent the anticommutator, \eta^{\mu\nu} is the Minkowski metric with signature (+, -, -, -), and I_4 is the 4 \times 4 identity matrix.
This defining property is more fundamental than the numerical values used in the specific representation of the gamma matrices. Covariant gamma matrices are defined by
\gamma_\mu = \eta_{\mu\nu}\gamma^\nu = \left(\gamma^0, -\gamma^1, -\gamma^2, -\gamma^3\right),
and Einstein notation is assumed.
Note that the other sign convention for the metric, (-, +, +, +), necessitates either a change in the defining equation:
\{\gamma^\mu, \gamma^\nu\} = -2\eta^{\mu\nu} I_4,
or a multiplication of all gamma matrices by i, which of course changes their hermiticity properties detailed below. Under the alternative sign convention for the metric the covariant gamma matrices are then defined by
\gamma_\mu = \eta_{\mu\nu}\gamma^\nu = \left(-\gamma^0, +\gamma^1, +\gamma^2, +\gamma^3\right).
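The defining relation is easy to verify numerically. The following is a minimal Python/NumPy sketch (the variable names are ours) that builds the Dirac-basis matrices from Kronecker products and checks \{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu} I_4:

    import numpy as np

    I2 = np.eye(2)
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli sigma_1
    s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)  # Pauli sigma_2
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)    # Pauli sigma_3

    # Dirac-basis gamma matrices via Kronecker products
    g0 = np.kron(s3, I2)  # time-like, Hermitian
    gammas = [g0] + [np.kron(1j * s2, s) for s in (s1, s2, s3)]

    eta = np.diag([1, -1, -1, -1]).astype(complex)  # metric, signature (+,-,-,-)

    for mu in range(4):
        for nu in range(4):
            anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
            assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
    print("Clifford algebra relations verified")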
Physical structure
The Clifford algebra \mathrm{Cl}_{1,3}(\mathbb{R}) over spacetime V can be regarded as the set of real linear operators from V to itself, \mathrm{End}(V), or more generally, when complexified to \mathrm{Cl}_{1,3}(\mathbb{C}), as the set of linear operators from any four-dimensional complex vector space to itself. More simply, given a basis for V, \mathrm{Cl}_{1,3}(\mathbb{C}) is just the set of all 4 \times 4 complex matrices, but endowed with a Clifford algebra structure. Spacetime is assumed to be endowed with the Minkowski metric \eta_{\mu\nu}. A space of bispinors, U_x, is also assumed at every point x in spacetime, endowed with the bispinor representation of the Lorentz group. The bispinor fields \Psi of the Dirac equations, evaluated at any point x in spacetime, are elements of U_x (see below). The Clifford algebra is assumed to act on U_x as well (by matrix multiplication with column vectors \Psi(x) in U_x for all x). This will be the primary view of elements of \mathrm{Cl}_{1,3}(\mathbb{C}) in this section.
For each linear transformation S of U_x, there is a transformation of \mathrm{End}(U_x) given by E \mapsto SES^{-1} for E in \mathrm{End}(U_x) \cong \mathrm{Cl}_{1,3}(\mathbb{C}). If S belongs to a representation of the Lorentz group, then the induced action E \mapsto SES^{-1} will also belong to a representation of the Lorentz group, see Representation theory of the Lorentz group.
If S(\Lambda) is the bispinor representation acting on U_x of an arbitrary Lorentz transformation \Lambda in the standard (4-vector) representation acting on V, then there is a corresponding operator on \mathrm{End}(U_x) = \mathrm{Cl}_{1,3}(\mathbb{C}) given by the equation
S(\Lambda)^{-1}\,\gamma^\mu\,S(\Lambda) = {\Lambda^\mu}_\nu\,\gamma^\nu,
showing that the quantity \gamma^\mu can be viewed as a basis of a representation space of the 4-vector representation of the Lorentz group sitting inside the Clifford algebra. The last identity can be recognized as the defining relationship for matrices belonging to an indefinite orthogonal group, which is \eta\Lambda^{\mathsf T}\eta = \Lambda^{-1} written in indexed notation. This means that quantities of the form
\slashed{a} \equiv a_\mu\gamma^\mu
should be treated as 4-vectors in manipulations. It also means that indices can be raised and lowered on the \gamma^\mu using the metric \eta^{\mu\nu} as with any 4-vector. The notation is called the Feynman slash notation. The slash operation maps the basis e_\mu of V, or any 4-dimensional vector space, to basis vectors \gamma_\mu. The transformation rule for slashed quantities is simply
\slashed{a} = a_\mu\gamma^\mu \;\mapsto\; {\Lambda^\mu}_\nu\,a^\nu\,\gamma_\mu.
One should note that this is different from the transformation rule for the \gamma^\mu, which are now treated as (fixed) basis vectors. The designation of the 4-tuple (\gamma^\mu) = (\gamma^0, \gamma^1, \gamma^2, \gamma^3) as a 4-vector sometimes found in the literature is thus a slight misnomer. The latter transformation corresponds to an active transformation of the components of a slashed quantity in terms of the basis \gamma^\mu, and the former to a passive transformation of the basis \gamma^\mu itself.
The elements S^{\mu\nu} = \tfrac{i}{4}\left[\gamma^\mu, \gamma^\nu\right] form a representation of the Lie algebra of the Lorentz group. This is a spin representation. When these matrices, and linear combinations of them, are exponentiated, they are bispinor representations of the Lorentz group, e.g., the S(\Lambda) of above are of this form. The 6-dimensional space the S^{\mu\nu} span is the representation space of a tensor representation of the Lorentz group. For the higher order elements of the Clifford algebra in general and their transformation rules, see the article Dirac algebra. The spin representation of the Lorentz group is encoded in the spin group \mathrm{Spin}(1,3) (for real, uncharged spinors) and in the complexified spin group \mathrm{Spin}^{\mathbb{C}}(1,3) for charged (Dirac) spinors.
Expressing the Dirac equation
In natural units, the Dirac equation may be written as
\left(i\gamma^\mu\partial_\mu - m\right)\psi = 0,
where \psi is a Dirac spinor.
Switching to Feynman notation, the Dirac equation is
\left(i\slashed{\partial} - m\right)\psi = 0.
The fifth "gamma" matrix, 5
It is useful to define a product of the four gamma matrices as , so that
(in the Dirac basis).
Although uses the letter gamma, it is not one of the gamma matrices of The index number 5 is a relic of old notation: used to be called "".
has also an alternative form:
using the convention or
using the convention
Proof:
This can be seen by exploiting the fact that all the four gamma matrices anticommute, so
\gamma^0\gamma^1\gamma^2\gamma^3 = \gamma^{[0}\gamma^1\gamma^2\gamma^{3]} = \frac{1}{4!}\,\delta^{\,0123}_{\mu\nu\rho\sigma}\,\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma,
where \delta^{\,0123}_{\mu\nu\rho\sigma} is the type (4,4) generalized Kronecker delta in 4 dimensions, in full antisymmetrization. If \varepsilon_{\alpha\dots\beta} denotes the Levi-Civita symbol in n dimensions, we can use the identity \delta^{\,\alpha\beta\gamma\delta}_{\mu\nu\rho\sigma} = -\,\varepsilon^{\alpha\beta\gamma\delta}\varepsilon_{\mu\nu\rho\sigma} (valid in Lorentzian signature, where indices are raised with \eta).
Then we get, using the convention \varepsilon_{0123} = +1,
\gamma^5 = i\gamma^0\gamma^1\gamma^2\gamma^3 = \frac{i}{4!}\,\varepsilon_{\mu\nu\rho\sigma}\,\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma.
This matrix is useful in discussions of quantum mechanical chirality. For example, a Dirac field can be projected onto its left-handed and right-handed components by:
\psi_{\rm L} = \frac{1 - \gamma^5}{2}\psi, \qquad \psi_{\rm R} = \frac{1 + \gamma^5}{2}\psi.
Some properties are:
It is Hermitian: (\gamma^5)^\dagger = \gamma^5.
Its eigenvalues are ±1, because: (\gamma^5)^2 = I_4.
It anticommutes with the four gamma matrices: \{\gamma^5, \gamma^\mu\} = \gamma^5\gamma^\mu + \gamma^\mu\gamma^5 = 0.
In fact, \psi_{\rm L} and \psi_{\rm R} are eigenvectors of \gamma^5 since
\gamma^5\psi_{\rm L} = \frac{\gamma^5 - (\gamma^5)^2}{2}\psi = -\psi_{\rm L},
and
\gamma^5\psi_{\rm R} = \frac{\gamma^5 + (\gamma^5)^2}{2}\psi = +\psi_{\rm R}.
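These properties can again be checked numerically; the following self-contained Python/NumPy sketch rebuilds the Dirac-basis matrices and verifies each claim:

    import numpy as np
    from functools import reduce

    I2 = np.eye(2)
    s = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]  # Pauli matrices

    gammas = [np.kron(s[2], I2)] + [np.kron(1j * s[1], sk) for sk in s]
    g5 = 1j * reduce(np.matmul, gammas)  # gamma^5 = i g0 g1 g2 g3

    assert np.allclose(g5, g5.conj().T)          # Hermitian
    assert np.allclose(g5 @ g5, np.eye(4))       # squares to identity, eigenvalues +-1
    for g in gammas:
        assert np.allclose(g5 @ g + g @ g5, 0)   # anticommutes with each gamma^mu
    print("gamma^5 properties verified")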
Five dimensions
The Clifford algebra in odd dimensions behaves like two copies of the Clifford algebra of one less dimension, a left copy and a right copy. Thus, one can employ a bit of a trick to repurpose i\gamma^5 as one of the generators of the Clifford algebra in five dimensions. In this case, the set \{\gamma^0, \gamma^1, \gamma^2, \gamma^3, i\gamma^5\} therefore, by the last two properties (keeping in mind that (i\gamma^5)^2 = -I_4) and those of the 'old' gammas, forms the basis of the Clifford algebra in 4 + 1 spacetime dimensions for the metric signature (1,4).
In metric signature (4,1), the set \{\gamma^0, \gamma^1, \gamma^2, \gamma^3, \gamma^5\} is used, where the \gamma^\mu are the appropriate ones for the (3,1) signature. This pattern is repeated for spacetime dimension 2n even and the next odd dimension 2n + 1 for all n \ge 1. For more detail, see higher-dimensional gamma matrices.
Identities
The following identities follow from the fundamental anticommutation relation, so they hold in any basis (although the last one depends on the sign choice for \gamma^5).
Miscellaneous identities
1. \gamma^\mu\gamma_\mu = 4 I_4
2. \gamma^\mu\gamma^\nu\gamma_\mu = -2\gamma^\nu
3. \gamma^\mu\gamma^\nu\gamma^\rho\gamma_\mu = 4\eta^{\nu\rho} I_4
4. \gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma\gamma_\mu = -2\gamma^\sigma\gamma^\rho\gamma^\nu
5. \gamma^\mu\gamma^\nu\gamma^\rho = \eta^{\mu\nu}\gamma^\rho + \eta^{\nu\rho}\gamma^\mu - \eta^{\mu\rho}\gamma^\nu - i\varepsilon^{\sigma\mu\nu\rho}\gamma_\sigma\gamma^5 (with \varepsilon^{0123} = +1)
6. \gamma^5\sigma^{\mu\nu} = \tfrac{i}{2}\varepsilon^{\mu\nu\rho\sigma}\sigma_{\rho\sigma}, where \sigma^{\mu\nu} = \tfrac{i}{2}\left[\gamma^\mu, \gamma^\nu\right]
Trace identities
The gamma matrices obey the following trace identities:
1. \operatorname{tr}(\gamma^\mu) = 0
2. The trace of any product of an odd number of \gamma^\mu is zero
3. The trace of \gamma^5 times a product of an odd number of \gamma^\mu is still zero
4. \operatorname{tr}(\gamma^\mu\gamma^\nu) = 4\eta^{\mu\nu}
5. \operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma) = 4\left(\eta^{\mu\nu}\eta^{\rho\sigma} - \eta^{\mu\rho}\eta^{\nu\sigma} + \eta^{\mu\sigma}\eta^{\nu\rho}\right)
6. \operatorname{tr}(\gamma^5) = \operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^5) = 0
7. \operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma\gamma^5) = -4i\varepsilon^{\mu\nu\rho\sigma} (with \varepsilon^{0123} = +1)
8. \operatorname{tr}(\gamma^{\mu_1}\cdots\gamma^{\mu_n}) = \operatorname{tr}(\gamma^{\mu_n}\cdots\gamma^{\mu_1})
Proving the above involves the use of three main properties of the trace operator:
tr(A + B) = tr(A) + tr(B)
tr(rA) = r tr(A)
tr(ABC) = tr(CAB) = tr(BCA)
Using these, together with (\gamma^5)^2 = I_4 and \gamma^5\gamma^\mu = -\gamma^\mu\gamma^5,
\operatorname{tr}(\gamma^\mu) = \operatorname{tr}(\gamma^5\gamma^5\gamma^\mu) = -\operatorname{tr}(\gamma^5\gamma^\mu\gamma^5) = -\operatorname{tr}(\gamma^5\gamma^5\gamma^\mu) = -\operatorname{tr}(\gamma^\mu).
This implies
\operatorname{tr}(\gamma^\mu) = -\operatorname{tr}(\gamma^\mu).
This can only be fulfilled if
\operatorname{tr}(\gamma^\mu) = 0.
The extension to a product of 2n + 1 (n integer) gamma matrices is found by placing two \gamma^5s after (say) the (2n)-th gamma matrix in the trace, commuting one out to the right (giving a minus sign) and commuting the other \gamma^5 2n steps out to the left [with sign change (-1)^{2n} = 1]. Then we use the cyclic identity to get the two \gamma^5s together, and hence they square to identity, leaving us with the trace equalling minus itself, i.e. 0.
For the trace of four gamma matrices, start by swapping \gamma^\mu with \gamma^\nu using the anticommutation relation \gamma^\mu\gamma^\nu = 2\eta^{\mu\nu}I_4 - \gamma^\nu\gamma^\mu:
\operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma) = 2\eta^{\mu\nu}\operatorname{tr}(\gamma^\rho\gamma^\sigma) - \operatorname{tr}(\gamma^\nu\gamma^\mu\gamma^\rho\gamma^\sigma). \qquad (1)
For the term on the right, we'll continue the pattern of swapping \gamma^\mu with its neighbor to the left,
\operatorname{tr}(\gamma^\nu\gamma^\mu\gamma^\rho\gamma^\sigma) = 2\eta^{\mu\rho}\operatorname{tr}(\gamma^\nu\gamma^\sigma) - \operatorname{tr}(\gamma^\nu\gamma^\rho\gamma^\mu\gamma^\sigma). \qquad (2)
Again, for the term on the right swap \gamma^\mu with its neighbor to the left,
\operatorname{tr}(\gamma^\nu\gamma^\rho\gamma^\mu\gamma^\sigma) = 2\eta^{\mu\sigma}\operatorname{tr}(\gamma^\nu\gamma^\rho) - \operatorname{tr}(\gamma^\nu\gamma^\rho\gamma^\sigma\gamma^\mu). \qquad (3)
Eq (3) is the term on the right of eq (2), and eq (2) is the term on the right of eq (1). We'll also use the two-matrix trace identity to simplify terms like so:
2\eta^{\mu\nu}\operatorname{tr}(\gamma^\rho\gamma^\sigma) = 8\eta^{\mu\nu}\eta^{\rho\sigma}.
So finally Eq (1), when you plug all this information in, gives
\operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma) = 8\eta^{\mu\nu}\eta^{\rho\sigma} - 8\eta^{\mu\rho}\eta^{\nu\sigma} + 8\eta^{\mu\sigma}\eta^{\nu\rho} - \operatorname{tr}(\gamma^\nu\gamma^\rho\gamma^\sigma\gamma^\mu). \qquad (4)
The terms inside the trace can be cycled, so
\operatorname{tr}(\gamma^\nu\gamma^\rho\gamma^\sigma\gamma^\mu) = \operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma).
So really (4) is
\operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma) = 8\eta^{\mu\nu}\eta^{\rho\sigma} - 8\eta^{\mu\rho}\eta^{\nu\sigma} + 8\eta^{\mu\sigma}\eta^{\nu\rho} - \operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma).
Add \operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma) to both sides of the above to see
\operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma) = 4\left(\eta^{\mu\nu}\eta^{\rho\sigma} - \eta^{\mu\rho}\eta^{\nu\sigma} + \eta^{\mu\sigma}\eta^{\nu\rho}\right).
Now, this pattern can also be used to show
\operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^5) = 0.
Simply add two factors of \gamma^\alpha, with \alpha different from \mu and \nu. Anticommute three times instead of once, picking up three minus signs, and cycle using the cyclic property of the trace.
So,
\operatorname{tr}(\gamma^\mu\gamma^\nu\gamma^5) = 0.
For the reversal identity, note that the hermiticity conditions below give (\gamma^\mu)^\dagger = \gamma^0\gamma^\mu\gamma^0, so that
(\gamma^{\mu_1}\gamma^{\mu_2}\cdots\gamma^{\mu_n})^\dagger = \gamma^0\,\gamma^{\mu_n}\cdots\gamma^{\mu_2}\gamma^{\mu_1}\,\gamma^0,
since the interior pairs \gamma^0\gamma^0 = I_4 cancel. Conjugating with \gamma^0 one more time to get rid of the two \gamma^0s that are there, we see that \gamma^0(\gamma^{\mu_1}\cdots\gamma^{\mu_n})^\dagger\gamma^0 is the reverse of \gamma^{\mu_1}\cdots\gamma^{\mu_n}. Now,
\operatorname{tr}(\gamma^{\mu_n}\cdots\gamma^{\mu_1})
= \operatorname{tr}(\gamma^0(\gamma^{\mu_1}\cdots\gamma^{\mu_n})^\dagger\gamma^0)
= \operatorname{tr}((\gamma^{\mu_1}\cdots\gamma^{\mu_n})^\dagger) (since trace is invariant under similarity transformations)
= \operatorname{tr}((\gamma^{\mu_1}\cdots\gamma^{\mu_n})^{*}) (since trace is invariant under transposition)
= \operatorname{tr}(\gamma^{\mu_1}\cdots\gamma^{\mu_n}) (since the trace of a product of gamma matrices is real).
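The trace identities are quick to confirm numerically as well. Another self-contained Python/NumPy sketch, using the same Dirac-basis construction as before:

    import numpy as np
    from itertools import product

    I2 = np.eye(2)
    s = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
    gammas = [np.kron(s[2], I2)] + [np.kron(1j * s[1], sk) for sk in s]
    eta = np.diag([1.0, -1.0, -1.0, -1.0])

    # Single gamma matrices are traceless
    assert all(abs(np.trace(g)) < 1e-12 for g in gammas)

    # tr(gamma^mu gamma^nu) = 4 eta^{mu nu}
    for mu, nu in product(range(4), repeat=2):
        assert np.isclose(np.trace(gammas[mu] @ gammas[nu]), 4 * eta[mu, nu])

    # The trace of any product of three gamma matrices vanishes
    for mu, nu, rho in product(range(4), repeat=3):
        assert abs(np.trace(gammas[mu] @ gammas[nu] @ gammas[rho])) < 1e-12

    print("trace identities verified")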
Normalization
The gamma matrices can be chosen with extra hermiticity conditions which are restricted by the above anticommutation relations however. We can impose
(\gamma^0)^\dagger = \gamma^0, compatible with (\gamma^0)^2 = I_4,
and for the other gamma matrices (for k = 1, 2, 3)
(\gamma^k)^\dagger = -\gamma^k, compatible with (\gamma^k)^2 = -I_4.
One checks immediately that these hermiticity relations hold for the Dirac representation.
The above conditions can be combined in the relation
(\gamma^\mu)^\dagger = \gamma^0\gamma^\mu\gamma^0.
The hermiticity conditions are not invariant under the action \gamma^\mu \to S(\Lambda)\,\gamma^\mu\,S(\Lambda)^{-1} of a Lorentz transformation \Lambda because S(\Lambda) is not necessarily a unitary transformation due to the non-compactness of the Lorentz group.
Charge conjugation
The charge conjugation operator, in any basis, may be defined as
C\,\gamma^\mu\,C^{-1} = -(\gamma^\mu)^{\mathsf T},
where (\cdot)^{\mathsf T} denotes the matrix transpose. The explicit form that C takes is dependent on the specific representation chosen for the gamma matrices, up to an arbitrary phase factor. This is because although charge conjugation is an automorphism of the gamma group, it is not an inner automorphism (of the group). Conjugating matrices can be found, but they are representation-dependent.
Representation-independent identities include:
C\,\gamma^5\,C^{-1} = +(\gamma^5)^{\mathsf T}.
The charge conjugation operator is also unitary, C^\dagger = C^{-1}, while for \mathrm{Cl}_{1,3}(\mathbb{R}) it also holds that C^{\mathsf T} = -C for any representation. Given a representation of gamma matrices, the arbitrary phase factor for the charge conjugation operator can also be chosen such that C^{-1} = -C, as is the case for the four representations given below (Dirac, Majorana and both chiral variants).
Feynman slash notation
The Feynman slash notation is defined by
\slashed{a} := \gamma^\mu a_\mu
for any 4-vector a.
Here are some similar identities to the ones above, but involving slash notation:
\slashed{a}\slashed{b} + \slashed{b}\slashed{a} = 2(a \cdot b)\, I_4
\slashed{a}\slashed{a} = a^\mu a_\mu\, I_4 = a^2\, I_4
\gamma_\mu\slashed{a}\gamma^\mu = -2\slashed{a}
\operatorname{tr}(\slashed{a}\slashed{b}) = 4(a \cdot b)
\operatorname{tr}(\slashed{a}\slashed{b}\slashed{c}\slashed{d}) = 4\left[(a \cdot b)(c \cdot d) - (a \cdot c)(b \cdot d) + (a \cdot d)(b \cdot c)\right]
\operatorname{tr}(\gamma^5\slashed{a}\slashed{b}\slashed{c}\slashed{d}) = -4i\,\varepsilon^{\mu\nu\rho\sigma}a_\mu b_\nu c_\rho d_\sigma
where \varepsilon^{\mu\nu\rho\sigma} is the Levi-Civita symbol (with \varepsilon^{0123} = +1). Actually traces of products of an odd number of \gamma^\mu are zero and thus
\operatorname{tr}(\slashed{a}_1\cdots\slashed{a}_n) = 0
for n odd.
Many follow directly from expanding out the slash notation and contracting expressions of the form a_\mu b_\nu \cdots \gamma^\mu\gamma^\nu\cdots with the appropriate identity in terms of gamma matrices.
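The first few slash identities can be spot-checked numerically for random vectors; a Python/NumPy sketch (the slash helper is our own, defined under the (+,-,-,-) metric used above):

    import numpy as np

    rng = np.random.default_rng(0)
    I2 = np.eye(2)
    s = [np.array(m, dtype=complex) for m in
         ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
    gammas = [np.kron(s[2], I2)] + [np.kron(1j * s[1], sk) for sk in s]
    eta = np.diag([1.0, -1.0, -1.0, -1.0])

    def slash(a):
        """Feynman slash: gamma^mu a_mu, with the index lowered by the metric."""
        a_lower = eta @ a
        return sum(a_lower[mu] * gammas[mu] for mu in range(4))

    a, b = rng.normal(size=4), rng.normal(size=4)
    dot = a @ eta @ b  # Minkowski inner product a.b

    assert np.allclose(slash(a) @ slash(b) + slash(b) @ slash(a),
                       2 * dot * np.eye(4))
    assert np.allclose(np.trace(slash(a) @ slash(b)), 4 * dot)
    print("slash identities verified")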
Other representations
The matrices are also sometimes written using the 2×2 identity matrix, I_2, and
\gamma^0 = \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix}, \qquad \gamma^k = \begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \end{pmatrix},
where k runs from 1 to 3 and the σk are Pauli matrices.
Dirac basis
The gamma matrices we have written so far are appropriate for acting on Dirac spinors written in the Dirac basis; in fact, the Dirac basis is defined by these matrices. To summarize, in the Dirac basis:
\gamma^0 = \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix}, \qquad \gamma^k = \begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \end{pmatrix}, \qquad \gamma^5 = \begin{pmatrix} 0 & I_2 \\ I_2 & 0 \end{pmatrix}.
In the Dirac basis, the charge conjugation operator is real antisymmetric,
C = i\gamma^2\gamma^0 = \begin{pmatrix} 0 & -i\sigma^2 \\ -i\sigma^2 & 0 \end{pmatrix}.
Weyl (chiral) basis
Another common choice is the Weyl or chiral basis, in which \gamma^k remains the same but \gamma^0 is different, and so \gamma^5 is also different, and diagonal,
\gamma^0 = \begin{pmatrix} 0 & I_2 \\ I_2 & 0 \end{pmatrix}, \qquad \gamma^k = \begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \end{pmatrix}, \qquad \gamma^5 = \begin{pmatrix} -I_2 & 0 \\ 0 & I_2 \end{pmatrix},
or in more compact notation:
\gamma^\mu = \begin{pmatrix} 0 & \sigma^\mu \\ \bar\sigma^\mu & 0 \end{pmatrix}, \qquad \sigma^\mu \equiv (I_2, \sigma^k), \qquad \bar\sigma^\mu \equiv (I_2, -\sigma^k).
The Weyl basis has the advantage that its chiral projections take a simple form,
\psi_{\rm L} = \frac{1 - \gamma^5}{2}\psi = \begin{pmatrix} I_2 & 0 \\ 0 & 0 \end{pmatrix}\psi, \qquad \psi_{\rm R} = \frac{1 + \gamma^5}{2}\psi = \begin{pmatrix} 0 & 0 \\ 0 & I_2 \end{pmatrix}\psi.
The idempotence of the chiral projections is manifest.
By slightly abusing the notation and reusing the symbols \psi_{\rm L/R} we can then identify
\psi = \begin{pmatrix} \psi_{\rm L} \\ \psi_{\rm R} \end{pmatrix},
where now \psi_{\rm L} and \psi_{\rm R} are left-handed and right-handed two-component Weyl spinors.
The charge conjugation operator in this basis is real antisymmetric,
C = i\gamma^2\gamma^0 = \begin{pmatrix} i\sigma^2 & 0 \\ 0 & -i\sigma^2 \end{pmatrix}.
The Dirac basis can be obtained from the Weyl basis as
\gamma^\mu_{\rm Dirac} = U\,\gamma^\mu_{\rm Weyl}\,U^\dagger
via the unitary transform
U = \frac{1}{\sqrt{2}}\begin{pmatrix} I_2 & I_2 \\ -I_2 & I_2 \end{pmatrix}.
Weyl (chiral) basis (alternate form)
Another possible choice of the Weyl basis has
The chiral projections take a slightly different form from the other Weyl choice,
In other words,
where \psi_{\rm L} and \psi_{\rm R} are the left-handed and right-handed two-component Weyl spinors, as before.
The charge conjugation operator in this basis is
This basis can be obtained from the Dirac basis above, \gamma^\mu_{\rm chiral} = U\,\gamma^\mu_{\rm Dirac}\,U^\dagger, via a suitable unitary transform U.
Majorana basis
There is also the Majorana basis, in which all of the Dirac matrices are imaginary, and the spinors and Dirac equation are real. Regarding the Pauli matrices, the basis can be written as
where C is the charge conjugation matrix, which matches the Dirac version defined above.
The reason for making all gamma matrices imaginary is solely to obtain the particle physics metric (+, -, -, -), in which squared masses are positive. The Majorana representation, however, is real. One can factor out the i to obtain a different representation with four-component real spinors and real gamma matrices. The consequence of removing the i is that the only possible metric with real gamma matrices is (-, +, +, +).
The Majorana basis can be obtained from the Dirac basis above, \gamma^\mu_{\rm Majorana} = U\,\gamma^\mu_{\rm Dirac}\,U^\dagger, via a suitable unitary transform U.
Cl1,3(C) and Cl1,3(R)
The Dirac algebra can be regarded as a complexification of the real algebra \mathrm{Cl}_{1,3}(\mathbb{R}), called the space time algebra:
\mathrm{Cl}_{1,3}(\mathbb{C}) = \mathrm{Cl}_{1,3}(\mathbb{R}) \otimes_{\mathbb{R}} \mathbb{C}.
\mathrm{Cl}_{1,3}(\mathbb{R}) differs from \mathrm{Cl}_{1,3}(\mathbb{C}): in \mathrm{Cl}_{1,3}(\mathbb{R}) only real linear combinations of the gamma matrices and their products are allowed.
Two things deserve to be pointed out. As Clifford algebras, \mathrm{Cl}_{1,3}(\mathbb{C}) and \mathrm{Cl}_4(\mathbb{C}) are isomorphic, see classification of Clifford algebras. The reason is that the underlying signature of the spacetime metric loses its signature (1,3) upon passing to the complexification. However, the transformation required to bring the bilinear form to the complex canonical form is not a Lorentz transformation and hence not "permissible" (at the very least impractical) since all physics is tightly knit to the Lorentz symmetry and it is preferable to keep it manifest.
Proponents of geometric algebra strive to work with real algebras wherever that is possible. They argue that it is generally possible (and usually enlightening) to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in a real Clifford algebra that square to −1, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces. Some of these proponents also question whether it is necessary or even useful to introduce an additional imaginary unit in the context of the Dirac equation.
In the mathematics of Riemannian geometry, it is conventional to define the Clifford algebra \mathrm{Cl}_{p,q}(\mathbb{R}) for arbitrary dimensions p, q. The Weyl spinors transform under the action of the spin group \mathrm{Spin}(n). The complexification of the spin group, called the spinc group \mathrm{Spin}^{\mathbb{C}}(n), is a product \mathrm{Spin}(n) \times_{\mathbb{Z}_2} S^1 of the spin group with the circle S^1, the product \times_{\mathbb{Z}_2} being just a notational device to identify (a, u) with (-a, -u). The geometric point of this is that it disentangles the real spinor, which is covariant under Lorentz transformations, from the S^1 component, which can be identified with the fiber of the electromagnetic interaction. The \times_{\mathbb{Z}_2} is entangling parity and charge conjugation in a manner suitable for relating the Dirac particle/anti-particle states (equivalently, the chiral states in the Weyl basis). The bispinor, insofar as it has linearly independent left and right components, can interact with the electromagnetic field. This is in contrast to the Majorana spinor and the ELKO spinor (Eigenspinoren des Ladungskonjugationsoperators, German for "eigenspinors of the charge conjugation operator"), which cannot (i.e. they are electrically neutral), as they explicitly constrain the spinor so as to not interact with the S^1 part coming from the complexification. The ELKO spinor is a Lounesto class 5 spinor.
However, in contemporary practice in physics, the Dirac algebra rather than the space-time algebra continues to be the standard environment the spinors of the Dirac equation "live" in.
Other representation-free properties
The gamma matrices are diagonalizable with eigenvalues \pm 1 for \gamma^0, and eigenvalues \pm i for the \gamma^k.
In particular, this implies that \gamma^0 is simultaneously Hermitian and unitary, while the \gamma^k are simultaneously anti–Hermitian and unitary.
Further, the multiplicity of each eigenvalue is two.
More generally, if a^\mu is not null, a similar result holds. For concreteness, we restrict to the positive norm case with a^\mu a_\mu = 1; the negative case follows similarly. Since \slashed{a}^2 = a^\mu a_\mu I_4 = I_4, the matrix \slashed{a} is diagonalizable with eigenvalues \pm 1, each with multiplicity two.
It follows that the solution space to (\slashed{a} - 1)\psi = 0 (that is, the kernel of the left-hand side) has dimension 2. This means the solution space for plane wave solutions to Dirac's equation has dimension 2.
This result still holds for the massless Dirac equation. In other words, if p^\mu is null, then \slashed{p} has nullity 2.
Euclidean Dirac matrices
In quantum field theory one can Wick rotate the time axis to transit from Minkowski space to Euclidean space. This is particularly useful in some renormalization procedures as well as lattice gauge theory. In Euclidean space, there are two commonly used representations of Dirac matrices:
Chiral representation
Notice that the factors of i have been inserted in the spatial gamma matrices so that the Euclidean Clifford algebra
\{\gamma^\mu, \gamma^\nu\} = 2\delta^{\mu\nu} I_4
will emerge. It is also worth noting that there are variants of this which place the factor of i on one of the other matrices instead, such as in lattice QCD codes which use the chiral basis.
In Euclidean space,
Using the anti-commutator and noting that in Euclidean space , one shows that
In chiral basis in Euclidean space,
which is unchanged from its Minkowski version.
Non-relativistic representation
Footnotes
See also
Pauli matrices
Gell-Mann matrices
Higher-dimensional gamma matrices
Fierz identity
Spacetime algebra
Citations
References
External links
Dirac matrices on mathworld including their group properties
Dirac matrices as an abstract group on GroupNames
Spinors
Matrices
Clifford algebras
Articles containing proofs | Gamma matrices | Mathematics | 3,966 |
285,343 | https://en.wikipedia.org/wiki/MacLife | MacLife (stylized as Mac|Life) is an American monthly magazine published by Future US. It focuses on products produced by Apple, including the Macintosh personal computer, iPad, and iPhone. It was sold as a print product on newsstands, but is now a digital-only product distributed through Magazines Direct and the Mac|Life app, the latter of which can be obtained via the App Store. From September 1996 until February 2007, the magazine was known as MacAddict.
History
MacLife is one of two successor magazines to the defunct CD-ROM Today. First published in 1993 by Imagine Publishing (now Future US), CD-ROM Today was targeted at both Windows and Macintosh users, and each issue shipped with a CD-ROM of shareware and demo programs. In August 1996, CD-ROM Today ceased publication, with two magazines taking its place: MacAddict for Macintosh users, and boot (now Maximum PC) for Windows users.
As was the case with CD-ROM Today, MacAddict's discs included shareware and demo programs, but also came with other added features, such as staff videos and previews of content inside the magazine's hard copy. The MacAddict website was updated daily with news relevant to Apple products. MacAddict also had a mascot, a stick-figure named Max. By 1998, MacAddict had surpassed Macworld as the Macintosh magazine with the highest consumer newsstand spending due to its $7.99 cover price.
In February 2007, MacAddict was relaunched as MacLife. The new magazine had physically larger print editions than the old magazine, was focused on the creativity of Mac users, and no longer came with a CD-ROM.
In April 2023, MacLife issued its last print edition and switched to a digital-only format.
In Germany, an unrelated magazine of the same name is published by Falkemedia from Kiel.
Reviewing system
From 1996 to mid-2002, there were four rating icons, which depicted Max. There was "Blech" (the lowest), "Yeah, Whatever" (a mediocre product), "Spiffy" (a solid yet imperfect product), and "Freakin' Awesome" (the highest). From 2002 to 2009, it was replaced with a more conventional five-point system. In 2010, MacLife adopted a 10-point system that included half stars.
See also
MacFormat – sister publication published in the United Kingdom
References
External links
– official site
Archived MacAddict magazines on the Internet Archive
Future plc
Computer magazines published in the United States
Monthly magazines published in the United States
Macintosh magazines
Macintosh websites
Magazines established in 1996
Magazines published in the San Francisco Bay Area | MacLife | Technology | 556 |
187,273 | https://en.wikipedia.org/wiki/Han%20unification | Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the Han characters of the so-called CJK languages into a single set of unified characters. Han characters are a feature shared in common by written Chinese (hanzi), Japanese (kanji), Korean (hanja) and Vietnamese (chữ Hán).
Modern Chinese, Japanese and Korean typefaces typically use regional or historical variants of a given Han character. In the formulation of Unicode, an attempt was made to unify these variants by considering them as allographsdifferent glyphs representing the same "grapheme" or orthographic unit hence, "Han unification", with the resulting character repertoire sometimes contracted to Unihan.
Nevertheless, many characters have regional variants assigned to different code points, such as Traditional 個 (U+500B) versus Simplified 个 (U+4E2A).
Rationale and controversy
The Unicode Standard details the principles of Han unification.
The Ideographic Research Group (IRG), made up of experts from the Chinese-speaking countries, North and South Korea, Japan, Vietnam, and other countries, is responsible for the process.
One rationale was the desire to limit the size of the full Unicode character set, where CJK characters as represented by discrete ideograms may approach or exceed 100,000 characters. Version 1 of Unicode was designed to fit into 16 bits and only 20,940 characters (32%) out of the possible 65,536 were reserved for these CJK Unified Ideographs. Unicode was later extended to 21 bits allowing many more CJK characters (97,680 are assigned, with room for more).
An article hosted by IBM attempts to illustrate part of the motivation for Han unification:
In fact, the three ideographs for "one" (一, 壹, or 壱) are encoded separately in Unicode, as they are not considered national variants. The first is the common form in all three countries, while the second and third are used on financial instruments to prevent tampering (they may be considered variants).
However, Han unification has also caused considerable controversy, particularly among the Japanese public, who, with the nation's literati, have a history of protesting the culling of historically and culturally significant variants. (See Jinmeiyō kanji. Today, the list of characters officially recognized for use in proper names continues to expand at a modest pace.)
In 1993, the Japan Electronic Industries Development Association (JEIDA) published a pamphlet titled "We are feeling anxious for the future character encoding system" (originally in Japanese), summarizing major criticism against the Han Unification approach adopted by Unicode.
Graphemes versus glyphs
A grapheme is the smallest abstract unit of meaning in a writing system. Any grapheme has many possible glyph expressions, but all are recognized as the same grapheme by those with reading and writing knowledge of a particular writing system. Although Unicode typically assigns characters to code points to express the graphemes within a system of writing, the Unicode Standard (section 3.4 D7) cautions:
However, this quote refers to the fact that some graphemes are composed of several graphic elements or "characters". So, for example, the character "a" combined with the combining ring above "˚" (U+030A, generating the combination "å") might be understood by a user as a single grapheme while being composed of multiple Unicode abstract characters. In addition, Unicode also assigns some code points to a small number (other than for compatibility reasons) of formatting characters, whitespace characters, and other abstract characters that are not graphemes, but instead used to control the breaks between lines, words, graphemes and grapheme clusters. With the unified Han ideographs, the Unicode Standard makes a departure from prior practices in assigning abstract characters not as graphemes, but according to the underlying meaning of the grapheme: what linguists sometimes call sememes. This departure therefore is not simply explained by the oft quoted distinction between an abstract character and a glyph, but is more rooted in the difference between an abstract character assigned as a grapheme and an abstract character assigned as a sememe. In contrast, consider ASCII's unification of punctuation and diacritics, where graphemes with widely different meanings (for example, an apostrophe and a single quotation mark) are unified because the glyphs are the same. For Unihan the characters are not unified by their appearance, but by their definition or meaning.
For a grapheme to be represented by various glyphs means that the grapheme has glyph variations that are usually determined by selecting one font or another or using glyph substitution features where multiple glyphs are included in a single font. Such glyph variations are considered by Unicode a feature of rich text protocols and not properly handled by the plain text goals of Unicode. However, when the change from one glyph to another constitutes a change from one grapheme to another—where a glyph cannot possibly still, for example, mean the same grapheme understood as the small letter "a"—Unicode separates those into separate code points. For Unihan the same thing is done whenever the abstract meaning changes, however rather than speaking of the abstract meaning of a grapheme (the letter "a"), the unification of Han ideographs assigns a new code point for each different meaning—even if that meaning is expressed by distinct graphemes in different languages. Although a grapheme such as "ö" might mean something different in English (as used in the word "coördinated") than it does in German (as used in the word "schön"), it is still the same grapheme and can be easily unified so that English and German can share a common abstract Latin writing system (along with Latin itself). This example also points to another reason that "abstract character" and grapheme as an abstract unit in a written language do not necessarily map one-to-one. In English the combining diaeresis, "¨", and the "o" it modifies may be seen as two separate graphemes, whereas in languages such as Swedish, the letter "ö" may be seen as a single grapheme. Similarly in English the dot on an "i" is understood as a part of the "i" grapheme whereas in other languages, such as Turkish, the dot may be seen as a separate grapheme added to the dotless "ı".
To deal with the use of different graphemes for the same Unihan sememe, Unicode has relied on several mechanisms: especially as it relates to rendering text. One has been to treat it as simply a font issue so that different fonts might be used to render Chinese, Japanese or Korean. Also font formats such as OpenType allow for the mapping of alternate glyphs according to language so that a text rendering system can look to the user's environmental settings to determine which glyph to use. The problem with these approaches is that they fail to meet the goals of Unicode to define a consistent way of encoding multilingual text.
So rather than treat the issue as a rich text problem of glyph alternates, Unicode added the concept of variation selectors, first introduced in version 3.2 and supplemented in version 4.0. While variation selectors are treated as combining characters, they have no associated diacritic or mark. Instead, by combining with a base character, they signal the two character sequence selects a variation (typically in terms of grapheme, but also in terms of underlying meaning as in the case of a location name or other proper noun) of the base character. This then is not a selection of an alternate glyph, but the selection of a grapheme variation or a variation of the base abstract character. Such a two-character sequence however can be easily mapped to a separate single glyph in modern fonts. Since Unicode has assigned 256 separate variation selectors, it is capable of assigning 256 variations for any Han ideograph. Such variations can be specific to one language or another and enable the encoding of plain text that includes such grapheme variations.
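To make the mechanism concrete, here is a small Python sketch. The base character and variation selector shown form an Ideographic Variation Sequence of the kind registered in the Unicode IVD; whether a font actually renders the variant glyph depends on the font, and the specific pairing here is for illustration only:

    # A base ideograph followed by a variation selector is still plain text:
    # two code points, one intended glyph.
    base = "\u845B"        # 葛 (U+845B)
    vs17 = "\U000E0100"    # VARIATION SELECTOR-17, the first of the
                           # supplementary selectors used for ideographs
    sequence = base + vs17

    print(len(sequence))   # 2 code points
    for ch in sequence:
        print(f"U+{ord(ch):04X}")  # U+845B, U+E0100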
Unihan "abstract characters"
Since the Unihan standard encodes "abstract characters", not "glyphs", the graphical artifacts produced by Unicode have been considered temporary technical hurdles, and at most, cosmetic. However, again, particularly in Japan, due in part to the way in which Chinese characters were incorporated into Japanese writing systems historically, the inability to specify a particular variant was considered a significant obstacle to the use of Unicode in scholarly work. For example, the unification of "grass" (explained above), means that a historical text cannot be encoded so as to preserve its peculiar orthography. Instead, for example, the scholar would be required to locate the desired glyph in a specific typeface in order to convey the text as written, defeating the purpose of a unified character set. Unicode has responded to these needs by assigning variation selectors so that authors can select grapheme variations of particular ideographs (or even other characters).
Small differences in graphical representation are also problematic when they affect legibility or belong to the wrong cultural tradition. Besides making some Unicode fonts unusable for texts involving multiple "Unihan languages", names or other orthographically sensitive terminology might be displayed incorrectly. (Proper names tend to be especially orthographically conservative—compare this to changing the spelling of one's name to suit a language reform in the US or UK.) While this may be considered primarily a graphical representation or rendering problem to be overcome by more artful fonts, the widespread use of Unicode would make it difficult to preserve such distinctions. The problem of one character representing semantically different concepts is also present in the Latin part of Unicode. The Unicode character for a curved apostrophe is the same as the character for a right single quote (’). On the other hand, the capital Latin letter A is not unified with the Greek letter Α or the Cyrillic letter А. This is, of course, desirable for reasons of compatibility, and deals with a much smaller alphabetic character set.
While the unification aspect of Unicode is controversial in some quarters for the reasons given above, Unicode itself does now encode a vast number of seldom-used characters of a more-or-less antiquarian nature.
Some of the controversy stems from the fact that the very decision of performing Han unification was made by the initial Unicode Consortium, which at the time was a consortium of North American companies and organizations (most of them in California), but included no East Asian government representatives. The initial design goal was to create a 16-bit standard, and Han unification was therefore a critical step for avoiding tens of thousands of character duplications. This 16-bit requirement was later abandoned, making the size of the character set less of an issue today.
The controversy later extended to the internationally representative ISO: the initial CJK Joint Research Group (CJK-JRG) favored a proposal (DIS 10646) for a non-unified character set, "which was thrown out in favor of unification with the Unicode Consortium's unified character set by the votes of American and European ISO members" (even though the Japanese position was unclear). Endorsing the Unicode Han unification was a necessary step for the heated ISO 10646/Unicode merger.
Much of the controversy surrounding Han unification is based on the distinction between glyphs, as defined in Unicode, and the related but distinct idea of graphemes. Unicode assigns abstract characters (graphemes), as opposed to glyphs, which are a particular visual representations of a character in a specific typeface. One character may be represented by many distinct glyphs, for example a "g" or an "a", both of which may have one loop (, ) or two (, ). Yet for a reader of Latin script based languages the two variations of the "a" character are both recognized as the same grapheme. Graphemes present in national character code standards have been added to Unicode, as required by Unicode's Source Separation rule, even where they can be composed of characters already available. The national character code standards existing in CJK languages are considerably more involved, given the technological limitations under which they evolved, and so the official CJK participants in Han unification may well have been amenable to reform.
Unlike European versions, CJK Unicode fonts, due to Han unification, have large but irregular patterns of overlap, requiring language-specific fonts. Unfortunately, language-specific fonts also make it difficult to access a variant which, as with the "grass" example, happens to appear more typically in another language style. (That is to say, it would be difficult to access "grass" with the four-stroke radical more typical of Traditional Chinese in a Japanese environment, which fonts would typically depict the three-stroke radical.) Unihan proponents tend to favor markup languages for defining language strings, but this would not ensure the use of a specific variant in the case given, only the language-specific font more likely to depict a character as that variant. (At this point, merely stylistic differences do enter in, as a selection of Japanese and Chinese fonts are not likely to be visually compatible.)
Chinese users seem to have fewer objections to Han unification, largely because Unicode did not attempt to unify Simplified Chinese characters with Traditional Chinese characters. (Simplified Chinese characters are used among Chinese speakers in the People's Republic of China, Singapore, and Malaysia. Traditional Chinese characters are used in Hong Kong and Taiwan (Big5) and they are, with some differences, more familiar to Korean and Japanese users.) Unicode is seen as neutral with regards to this politically charged issue, and has encoded Simplified and Traditional Chinese glyphs separately (e.g. the ideograph for "discard" is U+4E1F for Traditional Chinese Big5 #A5E1 and U+4E22 for Simplified Chinese GB #2210). It is also noted that Traditional and Simplified characters should be encoded separately according to Unicode Han Unification rules, because they are distinguished in pre-existing PRC character sets. Furthermore, as with other variants, Traditional to Simplified characters is not a one-to-one relationship.
Alternatives
There are several alternative character sets that are not encoding according to the principle of Han Unification, and thus free from its restrictions:
CNS character set
CCCII character set
TRON
Mojikyō
These region-dependent character sets are also seen as not affected by Han Unification because of their region-specific nature:
ISO/IEC 2022 (based on sequence codes to switch between Chinese, Japanese, Korean character sets – hence without unification)
Big5 extensions
GCCS and its successor HKSCS
However, none of these alternative standards has been as widely adopted as Unicode, which is now the base character set for many new standards and protocols, internationally adopted, and is built into the architecture of operating systems (Microsoft Windows, Apple macOS, and many Unix-like systems), programming languages (Perl, Python, C#, Java, Common Lisp, APL, C, C++), and libraries (IBM International Components for Unicode (ICU) along with the Pango, Graphite, Scribe, Uniscribe, and ATSUI rendering engines), font formats (TrueType and OpenType) and so on.
In March 1989, a (B)TRON-based system was adopted by the Japanese government organization "Center for Educational Computing" as the system of choice for school education, including compulsory education. However, in April, a report titled "1989 National Trade Estimate Report on Foreign Trade Barriers" from the Office of the United States Trade Representative specifically listed the system as a trade barrier in Japan. The report claimed that the adoption of the TRON-based system by the Japanese government was advantageous to Japanese manufacturers, and thus excluded US operating systems from the huge new market; specifically, the report lists MS-DOS, OS/2 and UNIX as examples. The Office of USTR was allegedly under Microsoft's influence, as its former officer Tom Robertson was then offered a lucrative position by Microsoft. While the TRON system itself was subsequently removed from the list of sanctions under Section 301 of the Trade Act of 1974 after protests by the organization in May 1989, the trade dispute caused the Ministry of International Trade and Industry to accept a request from Masayoshi Son to cancel the Center of Educational Computing's selection of the TRON-based system for use in educational computers. The incident is regarded as a symbolic event for the loss of momentum and eventual demise of the BTRON system, which led to the widespread adoption of MS-DOS in Japan and the eventual adoption of Unicode with its successor Windows.
Merger of all equivalent characters
There has not been any push for full semantic unification of all semantically linked characters, though the idea would treat the respective users of East Asian languages the same, whether they write in Korean, Simplified Chinese, Traditional Chinese, Kyūjitai Japanese, Shinjitai Japanese or Vietnamese. Instead of some variants getting distinct code points while other groups of variants have to share single code points, all variants could be reliably expressed only with metadata tags (e.g., CSS formatting in webpages). The burden would be on all those who use differing versions of the same character, whether that difference be due to simplification, international variance or intra-national variance. However, for some platforms (e.g., smartphones), a device may come with only one font pre-installed. The system font must make a decision for the default glyph for each code point and these glyphs can differ greatly, indicating different underlying graphemes.
Consequently, relying on language markup across the board as an approach is beset with two major issues. First, there are contexts where language markup is not available (code commits, plain text). Second, any solution would require every operating system to come pre-installed with many glyphs for semantically identical characters that have many variants. In addition to the standard character sets in Simplified Chinese, Traditional Chinese, Korean, Vietnamese, Kyūjitai Japanese and Shinjitai Japanese, there also exist "ancient" forms of characters that are of interest to historians, linguists and philologists.
Unicode's Unihan database has already drawn connections between many characters. The Unicode database catalogs the connections between variant characters with distinct code points already. However, for characters with a shared code point, the reference glyph image is usually biased toward the Traditional Chinese version. Also, the decision of whether to classify pairs as semantic variants or z-variants is not always consistent or clear, despite rationalizations in the handbook.
So-called semantic variants of 丟 (U+4E1F) and 丢 (U+4E22) are examples that Unicode gives as differing in a significant way in their abstract shapes, while Unicode lists other pairs as z-variants, differing only in font styling. Paradoxically, Unicode considers some pairs to be near identical z-variants while at the same time classifying them as significantly different semantic variants. There are also cases of some pairs of characters being simultaneously semantic variants and specialized semantic variants and simplified variants: 個 (U+500B) and 个 (U+4E2A). There are cases of non-mutual equivalence. For example, the Unihan database entry for 亀 (U+4E80) considers 龜 (U+9F9C) to be its z-variant, but the entry for 龜 does not list 亀 as a z-variant, even though 亀 was obviously already in the database at the time that the entry for 龜 was written.
Some clerical errors led to doubling of completely identical characters such as 﨣 (U+FA23) and 𧺯 (U+27EAF). If a font has glyphs encoded to both points so that one font is used for both, they should appear identical. These cases are listed as z-variants despite having no variance at all. Intentionally duplicated characters were added to facilitate bit-for-bit round-trip conversion. Because round-trip conversion was an early selling point of Unicode, this meant that if a national standard in use unnecessarily duplicated a character, Unicode had to do the same. Unicode calls these intentional duplications "compatibility variants" as with 漢 (U+FA9A) which calls 漢 (U+6F22) its compatibility variant. As long as an application uses the same font for both, they should appear identical. Sometimes, as in the case of 車 with U+8ECA and U+F902, the added compatibility character lists the already present version of 車 as both its compatibility variant and its z-variant. The compatibility variant field overrides the z-variant field, forcing normalization under all forms, including canonical equivalence. Despite the name, compatibility variants are actually canonically equivalent and are united in any Unicode normalization scheme and not only under compatibility normalization. This is similar to how "A" followed by the combining ring above (U+030A) is canonically equivalent to a pre-composed "Å" (U+00C5). Much software (such as the MediaWiki software that hosts Wikipedia) will replace all canonically equivalent characters that are discouraged (e.g. the angstrom symbol) with the recommended equivalent. Despite the name, CJK "compatibility variants" are canonically equivalent characters and not compatibility characters.
漢 (U+FA9A) was added to the database later than 漢 (U+6F22) was, and its entry informs the user of the compatibility information. On the other hand, 漢 (U+6F22) does not have this equivalence listed in this entry. Unicode demands that all entries, once admitted, cannot change compatibility or equivalence so that normalization rules for already existing characters do not change.
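This canonical equivalence is visible in any Unicode normalization library; a small Python sketch using the standard unicodedata module:

    import unicodedata

    compat = "\uFA9A"   # CJK COMPATIBILITY IDEOGRAPH-FA9A (漢)
    unified = "\u6F22"  # unified ideograph 漢 (U+6F22)

    # Compatibility ideographs carry a singleton *canonical* decomposition,
    # so even NFC replaces them with the unified code point.
    for form in ("NFC", "NFD", "NFKC", "NFKD"):
        out = unicodedata.normalize(form, compat)
        print(form, f"U+{ord(out):04X}", out == unified)  # always U+6F22, True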
Some pairs of Traditional and Simplified characters are also considered to be semantic variants. According to Unicode's definitions, it makes sense that all simplifications (that do not result in wholly different characters being merged for their homophony) will be a form of semantic variant. Unicode classifies some such pairs as each other's respective traditional and simplified variants and also as each other's semantic variants. However, while Unicode classifies 億 (U+5104) and 亿 (U+4EBF) as each other's respective traditional and simplified variants, Unicode does not consider 億 and 亿 to be semantic variants of each other.
Unicode claims that "Ideally, there would be no pairs of z-variants in the Unicode Standard." This would make it seem that the goal is to at least unify all minor variants, compatibility redundancies and accidental redundancies, leaving the differentiation to fonts and to language tags. This conflicts with the stated goal of Unicode to take away that overhead, and to allow any number of any of the world's scripts to be on the same document with one encoding system. Chapter One of the handbook states that "With Unicode, the information technology industry has replaced proliferating character sets with data stability, global interoperability and data interchange, simplified software, and reduced development costs. While taking the ASCII character set as its starting point, the Unicode Standard goes far beyond ASCII's limited ability to encode only the upper- and lowercase letters A through Z. It provides the capacity to encode all characters used for the written languages of the world – more than 1 million characters can be encoded. No escape sequence or control code is required to specify any character in any language. The Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility."
This leaves the option to settle on one unified reference grapheme for all z-variants, which is contentious because few outside of Japan would recognize such z-variant pairs as equivalent. Even within Japan, the variants are on different sides of a major simplification called Shinjitai. Unicode would effectively make the PRC's simplification of 侣 (U+4FA3) and 侶 (U+4FB6) a monumental difference by comparison. Such a plan would also eliminate the very visually distinct variations for characters like 直 (U+76F4) and 雇 (U+96C7).
One would expect that all simplified characters would simultaneously also be z-variants or semantic variants with their traditional counterparts, but many are neither. The stranger case, that semantic variants can be simultaneously both semantic variants and specialized variants, is easier to explain given Unicode's definition that specialized semantic variants have the same meaning only in certain contexts. Languages use them differently. A pair whose characters are 100% drop-in replacements for each other in Japanese may not be so flexible in Chinese. Thus, any comprehensive merger of recommended code points would have to maintain some variants that differ only slightly in appearance even if the meaning is 100% the same for all contexts in one language, because in another language the two characters may not be 100% drop-in replacements.
Examples of language-dependent glyphs
In each row of the following table, the same character is repeated in all six columns. However, each column is marked (by the lang attribute) as being in a different language: Chinese (simplified and two types of traditional), Japanese, Korean, or Vietnamese. The browser should select, for each character, a glyph (from a font) suitable to the specified language. (Besides actual character variation—look for differences in stroke order, number, or direction—the typefaces may also reflect different typographical styles, as with serif and non-serif alphabets.) This only works for fallback glyph selection if you have CJK fonts installed on your system and the font selected to display this article does not include glyphs for these characters.
No character variant that is exclusive to Korean or Vietnamese has received its own code point, whereas almost all Shinjitai Japanese variants or Simplified Chinese variants each have distinct code points and unambiguous reference glyphs in the Unicode standard.
In the twentieth century, East Asian countries made their own respective encoding standards. Within each standard, there coexisted variants with distinct code points, hence the distinct code points in Unicode for certain sets of variants. Taking Simplified Chinese as an example, the two character variants 內 (U+5167) and 内 (U+5185) differ in exactly the same way as do the Korean and non-Korean variants of 全 (U+5168). Each respective variant of the first character has either 入 (U+5165) or 人 (U+4EBA). Each respective variant of the second character has either 入 (U+5165) or 人 (U+4EBA). Both variants of the first character got their own distinct code points. However, the two variants of the second character had to share the same code point.
The justification Unicode gives is that the national standards body in the PRC made distinct code points for the two variations of the first character 內/内, whereas Korea never made separate code points for the different variants of 全. There is a reason for this that has nothing to do with how the domestic bodies view the characters themselves. China went through a process in the twentieth century that changed (if not simplified) several characters. During this transition, there was a need to be able to encode both variants within the same document. Korean has always used the variant of 全 with the 入 (U+5165) radical on top. Therefore, it had no reason to encode both variants. Korean-language documents made in the twentieth century had little reason to represent both versions in the same document.
Almost all of the variants that the PRC developed or standardized got distinct code points owing simply to the fortune of the Simplified Chinese transition carrying through into the computing age. This privilege, however, seems to apply inconsistently: while most simplifications performed in Japan and mainland China with code points in national standards, including characters simplified differently in each country, did make it into Unicode as distinct code points, some did not, as the following case shows.
Sixty-two Shinjitai "simplified" characters with distinct code points in Japan got merged with their Kyūjitai traditional equivalents, like 海. This can cause problems for the language-tagging strategy. There is no universal tag for the traditional and "simplified" versions of Japanese as there is for Chinese. Thus, any Japanese writer wanting to display the Kyūjitai form of 海 may have to tag the character as "Traditional Chinese" or trust that the recipient's Japanese font uses only the Kyūjitai glyphs, but tags of Traditional Chinese and Simplified Chinese may be necessary to show the two forms side by side in a Japanese textbook. This would preclude one from using the same font for an entire document, however. There are two distinct code points for 海 in Unicode, but only for "compatibility reasons". Any Unicode-conformant font must display the Kyūjitai and Shinjitai versions' equivalent code points in Unicode as the same. Unofficially, a font may display 海 differently, with 海 (U+6D77) as the Shinjitai version and 海 (U+FA45) as the Kyūjitai version (which is identical to the traditional version in written Chinese and Korean).
The radical 糸 (U+7CF8) is used in characters like 紅/红, with two variants, the second form being simply the cursive form. The radical components of 紅 (U+7D05) and 红 (U+7EA2) are semantically identical and the glyphs differ only in the latter using a cursive version of the component. However, in mainland China, the standards bodies wanted to standardize the cursive form when used in characters like 红. Because this change happened relatively recently, there was a transition period. Both 紅 (U+7D05) and 红 (U+7EA2) got separate code points in the PRC's text encoding standards so Chinese-language documents could use both versions. The two variants received distinct code points in Unicode as well.
The case of the radical 艸 (U+8278) shows how arbitrary the state of affairs is. When used to compose characters like 草 (U+8349), the radical was placed at the top, but had two different forms. Traditional Chinese and Korean use a four-stroke version: at the top of 草 should be something that looks like two plus signs (⺿). Simplified Chinese, Kyūjitai Japanese and Shinjitai Japanese use a three-stroke version, like two plus signs sharing their horizontal strokes (艹). The PRC's text encoding bodies did not encode the two variants differently. The fact that almost every other change brought about by the PRC, no matter how minor, did warrant its own code point suggests that this exception may have been unintentional. Unicode copied the existing standards as is, preserving such irregularities.
The Unicode Consortium has recognized errors in other instances. The myriad Unicode blocks for CJK Han Ideographs have redundancies in original standards, redundancies brought about by flawed importation of the original standards, as well as accidental mergers that are later corrected, providing precedent for dis-unifying characters.
For native speakers, variants can be unintelligible or unacceptable in educated contexts. English speakers may understand a handwritten note saying "4P5 kg" as "495 kg", but writing the nine backwards (so it looks like a "P") can be jarring and would be considered incorrect in any school. Likewise, to users of one CJK language reading a document with "foreign" glyphs: variants of a character can appear as mirror images, can be missing a stroke or have an extraneous stroke, and may be unreadable to non-Japanese people (in Japan, both variants of such a pair may be accepted).
Examples of some non-unified Han ideographs
In some cases, often where the changes are the most striking, Unicode has encoded variant characters, making it unnecessary to switch between fonts or lang attributes. However, some variants with arguably minimal differences get distinct code points, and not every variant with arguably substantial changes gets a unique code point. As an example, take a character such as 入 (U+5165), for which the only way to display the variants is to change font (or lang attribute) as described in the previous table. On the other hand, for 內 (U+5167), the variant 内 (U+5185) gets a unique code point. For some characters, like 兌/兑 (U+514C/U+5151), either method can be used to display the different glyphs. In the following table, each row compares variants that have been assigned different code points. For brevity, note that shinjitai variants with different components will usually (and unsurprisingly) take unique code points. They will not appear here, nor will the simplified Chinese characters that take consistently simplified radical components. This list is not exhaustive.
Ideographic Variation Database (IVD)
To resolve the issues brought about by Han unification, a Unicode Technical Standard known as the Ideographic Variation Database (IVD) was created to allow specifying a particular glyph in a plain-text environment. By registering glyph collections in the IVD, it is possible to use Ideographic Variation Selectors to form an Ideographic Variation Sequence (IVS) that specifies or restricts the appropriate glyph in text processing in a Unicode environment.
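As an illustration, an IVS is formed in plain text simply by appending a variation selector to the base ideograph. The following Python sketch is hedged: 葛 (U+845B) with VARIATION SELECTOR-17 is used here only as an example sequence, and whether it selects a distinct glyph depends on the IVD registration (e.g. in the Adobe-Japan1 collection) and on font support:

```python
# Build an Ideographic Variation Sequence: base ideograph + variation
# selector from the Variation Selectors Supplement (U+E0100..U+E01EF).
base = "\u845B"          # 葛 (U+845B)
vs17 = "\U000E0100"      # VARIATION SELECTOR-17
ivs = base + vs17

print(len(ivs))                # 2 code points, one requested glyph
print(f"U+{ord(ivs[1]):04X}")  # E0100
```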
Unicode ranges
Ideographic characters assigned by Unicode appear in the following blocks; a small block-lookup sketch in Python follows the list:
CJK Unified Ideographs (4E00–9FFF) (Otherwise known as URO, abbreviation of Unified Repertoire and Ordering)
CJK Unified Ideographs Extension A (3400–4DBF)
CJK Unified Ideographs Extension B (20000–2A6DF)
CJK Unified Ideographs Extension C (2A700–2B73F)
CJK Unified Ideographs Extension D (2B740–2B81F)
CJK Unified Ideographs Extension E (2B820–2CEAF)
CJK Unified Ideographs Extension F (2CEB0–2EBEF)
CJK Unified Ideographs Extension G (30000–3134F)
CJK Unified Ideographs Extension H (31350–323AF)
CJK Unified Ideographs Extension I (2EBF0–2EE5F)
CJK Compatibility Ideographs (F900–FAFF) (the twelve characters at FA0E, FA0F, FA11, FA13, FA14, FA1F, FA21, FA23, FA24, FA27, FA28 and FA29 are actually "unified ideographs" not "compatibility ideographs")
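As referenced above, the following Python sketch (the block labels are informal, not official Unicode block names) maps a code point to the unified-ideograph range that contains it:

```python
# CJK Unified Ideograph ranges from the list above.
CJK_BLOCKS = [
    (0x4E00, 0x9FFF, "URO"),
    (0x3400, 0x4DBF, "Extension A"),
    (0x20000, 0x2A6DF, "Extension B"),
    (0x2A700, 0x2B73F, "Extension C"),
    (0x2B740, 0x2B81F, "Extension D"),
    (0x2B820, 0x2CEAF, "Extension E"),
    (0x2CEB0, 0x2EBEF, "Extension F"),
    (0x30000, 0x3134F, "Extension G"),
    (0x31350, 0x323AF, "Extension H"),
    (0x2EBF0, 0x2EE5F, "Extension I"),
]

def cjk_block(char):
    """Return the label of the CJK unified block containing char, or None."""
    cp = ord(char)
    for lo, hi, name in CJK_BLOCKS:
        if lo <= cp <= hi:
            return name
    return None

print(cjk_block("漢"))  # URO
print(cjk_block("A"))   # None
```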
Unicode includes support of CJKV radicals, strokes, punctuation, marks and symbols in the following blocks:
CJK Radicals Supplement (2E80–2EFF)
CJK Strokes (31C0–31EF)
CJK Symbols and Punctuation (3000–303F)
Ideographic Description Characters (2FF0–2FFF)
Additional compatibility (discouraged use) characters appear in these blocks:
CJK Compatibility (3300–33FF)
CJK Compatibility Forms (FE30–FE4F)
CJK Compatibility Ideographs (F900–FAFF)
CJK Compatibility Ideographs Supplement (2F800–2FA1F)
Enclosed CJK Letters and Months (3200–32FF)
Enclosed Ideographic Supplement (1F200–1F2FF)
Kangxi Radicals (2F00–2FDF)
These compatibility characters (excluding the twelve unified ideographs in the CJK Compatibility Ideographs block) are included for compatibility with legacy text handling systems and other legacy character sets. They include forms of characters for vertical text layout and rich text characters that Unicode recommends handling through other means.
International Ideographs Core
The International Ideographs Core (IICore) is a subset of 9,810 ideographs derived from the CJK Unified Ideographs tables, designed to be implemented in devices with limited memory and input/output capability, and/or in applications where use of the complete ISO 10646 ideograph repertoire is not feasible.
Unihan database files
The Unihan project has always made an effort to make its build database available.
The libUnihan project provides a normalized SQLite Unihan database and corresponding C library. All tables in this database are in fifth normal form. libUnihan is released under the LGPL, while its database, UnihanDb, is released under the MIT License.
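As a hedged sketch (the file name and field layout follow the published Unihan data files, e.g. the tab-separated Unihan_Variants.txt; adjust the path for a local copy), variant relations such as kSemanticVariant or kZVariant can be read with a few lines of Python:

```python
def read_variants(path="Unihan_Variants.txt", field="kSemanticVariant"):
    """Return (character, variant) pairs for the given Unihan variant field."""
    pairs = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.startswith("#") or not line.strip():
                continue  # skip comments and blank lines
            cp, key, value = line.rstrip("\n").split("\t", 2)
            if key != field:
                continue
            char = chr(int(cp[2:], 16))  # "U+346E" -> the character itself
            for item in value.split():
                # entries look like "U+3A31<kMatthews"; keep the code point
                target = chr(int(item[2:].split("<")[0], 16))
                pairs.append((char, target))
    return pairs

# Example usage: z_variants = read_variants(field="kZVariant")
```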
See also
Notes
References
Chinese-language computing
Encodings of Japanese
Korean-language computing
Unicode
Natural language and computing
Character encoding
Chinese character lists | Han unification | Technology | 7,600 |
3,170,403 | https://en.wikipedia.org/wiki/Andromeda%20VIII | Andromeda VIII (And VIII / 8) is a galaxy discovered in August 2003. It is a companion galaxy to the Andromeda Galaxy, M31, and evaded detection for so long due to its diffuse nature. The galaxy was finally discovered by measuring the redshifts of stars in front of Andromeda, which proved to have different velocities than M31 and hence were part of a different galaxy.
As of 2006, the status of And VIII as a genuine galaxy had not yet been firmly established (Merrett et al. 2006).
See also
List of Andromeda's satellite galaxies
References
External links
SEDS Webpage for Andromeda VIII
Andromeda Subgroup
Local Group
Andromeda (constellation)
Dwarf spheroidal galaxies
5056928
Astronomical objects discovered in 2003 | Andromeda VIII | Astronomy | 171 |
23,605,363 | https://en.wikipedia.org/wiki/1RXS | 1RXS is an acronym which is the prefix used for the First ROSAT X-ray Survey (1st ROSAT X-ray Survey). This is a catalogue of astronomical objects that were visible in the X-ray spectrum from the ROSAT satellite, in the field of X-ray astronomy.
Examination of 1RXS has shown that many sources can be identified, such as old neutron stars, while other entries are "intriguing", according to one researcher.
See also
1RXS J160929.1−210524, example
References
External links
Catalog site
Astronomical surveys
X-ray astronomy
ROSAT objects | 1RXS | Astronomy | 132 |
23,782,531 | https://en.wikipedia.org/wiki/International%20Planetary%20Data%20Alliance | The International Planetary Data Alliance (IPDA), founded in 2006, is a closely cooperating partnership to maintain the quality and performance of data (including data formats) from planetary research using instruments in space. Specific tasks include promoting the international exchange of high-quality scientific data, organized by a set of standards to facilitate data management. NASA's Planetary Data System is the de facto standard for archiving planetary data. Member organizations participate in both its Board and on specific projects related to building standards and interoperable systems.
In 2008, a Committee on Space Research (COSPAR) resolution made the IPDA an official body to set standards around the world regarding the archiving of planetary data.
See also
Agenzia Spaziale Italiana
Centre National d'Études Spatiales
European Space Agency
German Aerospace Center
Indian Space Research Organisation
Japan Aerospace Exploration Agency
National Aeronautics and Space Administration
UK Space Agency
References
External links
The International Planetary Data Alliance
ESA Planetary Science Archive
NASA Planetary Data System
Space organizations
International scientific organizations | International Planetary Data Alliance | Astronomy | 199 |
45,293,075 | https://en.wikipedia.org/wiki/Cramer%27s%20theorem%20%28algebraic%20curves%29 | In algebraic geometry, Cramer's theorem on algebraic curves gives the necessary and sufficient number of points in the real plane falling on an algebraic curve to uniquely determine the curve in non-degenerate cases. This number is
where is the degree of the curve. The theorem is due to Gabriel Cramer, who published it in 1750.
For example, a line (of degree 1) is determined by 2 distinct points on it: one and only one line goes through those two points. Likewise, a non-degenerate conic (polynomial equation in and with the sum of their powers in any term not exceeding 2, hence with degree 2) is uniquely determined by 5 points in general position (no three of which are on a straight line).
The intuition of the conic case is this: Suppose the given points fall on, specifically, an ellipse. Then five pieces of information are necessary and sufficient to identify the ellipse—the horizontal location of the ellipse's center, the vertical location of the center, the major axis (the length of the longest chord), the minor axis (the length of the shortest chord through the center, perpendicular to the major axis), and the ellipse's rotational orientation (the extent to which the major axis departs from the horizontal). Five points in general position suffice to provide these five pieces of information, while four points do not.
Derivation of the formula
The number of distinct terms (including those with a zero coefficient) in an n-th degree equation in two variables is (n + 1)(n + 2) / 2. This is because there are n + 1 terms of degree n, n terms of degree n − 1, and so on down through the 2 terms of degree one and the single term of degree zero (the constant). The sum of these is (n + 1) + n + (n − 1) + ... + 2 + 1 = (n + 1)(n + 2) / 2 terms, each with its own coefficient. However, one of these coefficients is redundant in determining the curve, because we can always divide through the polynomial equation by any one of the (non-zero) coefficients, giving an equivalent equation with one coefficient fixed at 1, and thus [(n + 1)(n + 2) / 2] − 1 = n(n + 3) / 2 remaining coefficients.
For example, a fourth degree equation has the general form
a₁x⁴ + a₂x³y + a₃x²y² + a₄xy³ + a₅y⁴ + a₆x³ + a₇x²y + a₈xy² + a₉y³ + a₁₀x² + a₁₁xy + a₁₂y² + a₁₃x + a₁₄y + 1 = 0
(after dividing through by one of the coefficients, here the constant term, assumed non-zero), with 4(4 + 3)/2 = 14 coefficients.
Determining an algebraic curve through a set of points consists of determining values for these coefficients in the algebraic equation such that each of the points satisfies the equation. Given n(n + 3) / 2 points (xi, yi), each of these points can be used to create a separate equation by substituting it into the general polynomial equation of degree n, giving n(n + 3) / 2 equations linear in the n(n + 3) / 2 unknown coefficients. If this system is non-degenerate in the sense of having a non-zero determinant, the unknown coefficients are uniquely determined and hence the polynomial equation and its curve are uniquely determined. More than this number of points would be redundant, and fewer would be insufficient to solve the system of equations uniquely for the coefficients.
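For the conic case (n = 2, so n(n + 3)/2 = 5 points), the system can be solved numerically. The following Python sketch uses sample points chosen for illustration (four axis points of the unit circle plus (0.6, 0.8), also on that circle) and finds the six coefficients of ax² + bxy + cy² + dx + ey + f = 0 up to scale as the null space of the 5×6 matrix:

```python
import numpy as np

# Five points in general position, all on the unit circle, so the
# recovered conic should be x^2 + y^2 - 1 = 0 up to scale.
pts = [(1, 0), (0, 1), (-1, 0), (0, -1), (0.6, 0.8)]
M = np.array([[x*x, x*y, y*y, x, y, 1] for x, y in pts], float)

# A one-dimensional null space means a unique conic up to scale.
_, _, vt = np.linalg.svd(M)
coeffs = vt[-1]
print(np.round(coeffs / coeffs[0], 6))  # ~ [1, 0, 1, 0, 0, -1]
```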
Degenerate cases
An example of a degenerate case, in which n(n + 3) / 2 points on the curve are not sufficient to determine the curve uniquely, was provided by Cramer as part of Cramer's paradox. Let the degree be n = 3, and let nine points be all combinations of x = −1, 0, 1 and y = −1, 0, 1. More than one cubic contains all of these points, namely all cubics of the equation c₁·x(x² − 1) + c₂·y(y² − 1) = 0. Thus these points do not determine a unique cubic, even though there are n(n + 3) / 2 = 9 of them. More generally, there are infinitely many cubics that pass through the nine intersection points of two cubics (Bézout's theorem implies that two cubics have, in general, nine intersection points).
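A numeric check of this degeneracy, offered as an illustration: the 9×10 interpolation matrix built from the ten cubic monomials drops to rank 8 at these nine grid points, so the space of coefficient vectors is two-dimensional (one-dimensional projectively) rather than the single curve the generic count predicts:

```python
import numpy as np
from itertools import product

def cubic_row(x, y):
    # the ten monomials of degree <= 3 in two variables
    return [x**3, x**2*y, x*y**2, y**3, x**2, x*y, y**2, x, y, 1]

pts = list(product([-1, 0, 1], repeat=2))   # the nine grid points
M = np.array([cubic_row(x, y) for x, y in pts], float)
print(np.linalg.matrix_rank(M))             # 8, not the generic 9
```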
Likewise, for the conic case of n = 2, if three of five given points all fall on the same straight line, they may not uniquely determine the curve.
Restricted cases
If the curve is required to be in a particular sub-category of n-th degree polynomial equations, then fewer than n(n + 3) / 2 points may be necessary and sufficient to determine a unique curve. For example, three (non-collinear) points determine a circle: the generic circle is given by the equation (x − a)² + (y − b)² = r², where the center is located at (a, b) and the radius is r. Equivalently, by expanding the squared terms, the generic equation is x² + y² − 2ax − 2by = k, where k = r² − a² − b². Two restrictions have been imposed here compared to the general conic case of n = 2: the coefficient of the term in xy is restricted to equal 0, and the coefficient of y² is restricted to equal the coefficient of x². Thus instead of five points being needed, only 5 − 2 = 3 are needed, coinciding with the 3 parameters a, b, k (equivalently a, b, r) that need to be identified.
See also
Five points determine a conic
References
Algebraic geometry
Analytic geometry | Cramer's theorem (algebraic curves) | Mathematics | 1,096 |
47,128,053 | https://en.wikipedia.org/wiki/Oligopeptide%20P11-4 | Oligopeptide P11-4 is a synthetic, pH controlled self-assembling peptide used for biomimetic mineralization e.g. for enamel regeneration or as an oral care agent. P11-4 (INCI name Oligopeptide 104) consists of the natural occurring amino acids Glutamine, Glutamic acid, Phenylalanine, Tryptophan and Arginine. The resulting higher molecular structure has a high affinity to tooth mineral.
P11-4 was developed and patented by the University of Leeds (UK). The Swiss company Credentis has licensed the peptide technology and markets it under trade names including CUROLOX, REGENAMEL, and EMOFLUOR, offering three products with this technology. As of June 2016, products are available in Switzerland under new brand names from Dr. Wild & Co AG.
Mechanism of action
P11-4 is an α-peptide that self-assembles into β-sheet amyloids with a hydrogel appearance at low pH. It builds a 3-D bio-matrix with binding sites for calcium ions that serve as nucleation points for hydroxyapatite (tooth mineral) formation. The high affinity to tooth mineral is based on the matching distances of Ca-ion binding sites on P11-4 and the Ca spacing in the crystal lattice of hydroxyapatite. The matrix formation is pH-controlled, which allows control of both the matrix activity and the place of its formation.
P11-4 in dental applications
The self-assembling properties of P11-4 are used to regenerate early caries lesions. When P11-4 is applied to the tooth surface, the peptide diffuses through the intact hypomineralized plate into the early caries lesion body and, due to the low pH in such a lesion, starts to self-assemble, generating a peptide scaffold mimicking the enamel matrix.
Around the newly formed matrix, de novo enamel crystals are formed from calcium phosphate present in saliva. Through this remineralization, caries activity is significantly reduced in comparison with a fluoride treatment alone.
In aqueous oral care gels the peptide is present as a matrix. It binds directly as a matrix to the tooth mineral and forms a stable layer on the teeth. This layer protects the teeth from acid attacks. It also occludes open dentin tubules and thus reduces dental sensitivity.
Uses
Treatment of initial caries lesions
Regenerating enamel
Dentin hypersensitivity
Acid protection
Availability
Availability of products containing P11-4 vary by country, with some products available only to dentists, and others available to the retail public.
Medical device for caries treatment and enamel regeneration:
CURODONT REPAIR (EU)
REGENAMEL (CH)
Cosmetic products for acid protection and dentin desensitization:
CURODONT PROTECT (EU)
EMOFLUOR PROTECT GEL PROFESSIONAL (CH)
CURODONT D'SENZ (EU & CH)
EMOFLUOR DESENS GEL PROFESSIONAL (CH)
Candida Protect Professional (CH)
See also
Amorphous calcium phosphate (Recaldent)
Remineralisation of teeth
Oligopeptide
Biomimetic materials
Fluoride
References
External links
University of Leeds Centre for Molecular Nanoscience website
credentis ag website
vvardis ag website
Dental materials
Peptide therapeutics
Hendecapeptides
Acetamides | Oligopeptide P11-4 | Physics | 699 |
8,688,638 | https://en.wikipedia.org/wiki/Washington%20Award | The Washington Award is an American engineering award.
Since 1916 it has been given annually for "accomplishments which promote the happiness, comfort, and well-being of humanity". It is awarded jointly by the following engineering societies: American Institute of Mining, Metallurgical, and Petroleum Engineers, American Nuclear Society, American Society of Civil Engineers, American Society of Mechanical Engineers, Institute of Electrical and Electronics Engineers, National Society of Professional Engineers, and Western Society of Engineers (which administers the award).
Honorees
Source: The Washington Award
Herbert C. Hoover, 1919
Robert W. Hunt, 1922
Arthur N. Talbot, 1924
Jonas Waldo Smith, 1925
John Watson Alvord, 1926
Orville Wright, 1927
Michael Idvorsky Pupin, 1928
Bion Joseph Arnold, 1929
Mortimer Elwyn Cooley, 1930
Ralph Modjeski, 1931
William David Coolidge, 1932
Ambrose Swasey, 1935
Charles Franklin Kettering, 1936
Frederick Gardner Cottrell, 1937
Frank Baldwin Jewett, 1938
Daniel Webster Mead, 1939
Daniel Cowan Jackling, 1940
Ralph Budd, 1941
William Lamont Abbott, 1942
Andrey Abraham Potter, 1943
Henry Ford, 1944
Arthur Holly Compton, 1945
Vannevar Bush, 1946
Karl Taylor Compton, 1947
Ralph Edward Flanders, 1948
John Lucian Savage, 1949
Wilfred Sykes, 1950
Edwin Howard Armstrong, 1951
Henry Townley Heald, 1952
Gustav Egloff, 1953
Lillian Moller Gilbreth, 1954
Charles Erwin Wilson, 1955
Robert E. Wilson, 1956
Walker Lee Cisler, 1957
Ben Moreell, 1958
James R. Killian, Jr., 1959
Herbert Payne Sedwick, 1960
William V. Kahler, 1961
Alexander C. Monteith, 1962
Philip Sporn, 1963
John Slezak, 1964
Glenn Theodore Seaborg, 1965
Augustus Braun Kinzel, 1966
Frederick Lawson Hovde, 1967
James B. Fisk, 1968
Nathan M. Newmark, 1969
H.G. Rickover, 1970
William L. Everitt, 1971
Thomas Otten Paine, 1972
John A. Volpe, 1973
John D. deButts, 1974
David Packard, 1975
Ralph B. Peck, 1976
Michael Tenenbaum, 1977
Dixy Lee Ray, 1978
Marvin Camras, 1979
Neil Armstrong, 1980
John E. Swearingen, 1981
Manson Benedict, 1982
John Bardeen, 1983
Robert W. Galvin, 1984
Stephen D. Bechtel, 1985
Mark Shepherd Jr., 1986
Grace Murray Hopper, 1987
James McDonald, 1988
Sherwood L. Fawcett, 1989
John H. Sununu, 1990
Frank Borman, 1991
Leon M. Lederman, 1992
William States Lee, 1993
Kenneth H. Olson, 1994
George W. Housner, 1995
Wilson Greatbatch, 1996
Frank Kreith, 1997
John R. Conrad, 1998
Jack S. Kilby, 1999
Donna Lee Shirley, 2000
Dan Bricklin, 2001
Bob Frankston, 2001
Richard J. Robbins, 2002
Eugene Cernan, 2003
Nick Holonyak, 2004
Robert S. Langer, 2005
Henry Petroski, 2006
Michael J. Birck, 2007
Dean Kamen, 2008
Clyde N. Baker, Jr., 2009
Alvy Ray Smith, 2010
Martin C. Jischke, 2011
Martin Cooper, 2012
Kristina M. Johnson, 2013
Bill Nye, 2014
Bernard Amadei, 2015
Aprille Joy Ericsson, 2016
Chuck Hull, 2017
Ivan Sutherland, 2018
Margaret Hamilton, 2019
Richard A. Berger, 2020
John B. Goodenough, 2021
John A. Rogers, 2022
Gwynne Shotwell, 2023
Robert Kahn & Vint Cerf, 2024
See also
List of engineering awards
References
External links
Engineering awards
Awards established in 1916
American science and technology awards
1916 establishments in the United States | Washington Award | Technology | 758 |
63,044,039 | https://en.wikipedia.org/wiki/Semiorthogonal%20decomposition | In mathematics, a semiorthogonal decomposition is a way to divide a triangulated category into simpler pieces. One way to produce a semiorthogonal decomposition is from an exceptional collection, a special sequence of objects in a triangulated category. For an algebraic variety X, it has been fruitful to study semiorthogonal decompositions of the bounded derived category of coherent sheaves, .
Semiorthogonal decomposition
Alexei Bondal and Mikhail Kapranov (1989) defined a semiorthogonal decomposition of a triangulated category 𝒯 to be a sequence A_1, ..., A_n of strictly full triangulated subcategories such that:
for all i < j and all objects X in A_i and Y in A_j, every morphism from Y to X is zero. That is, there are "no morphisms from right to left".
𝒯 is generated by A_1, ..., A_n. That is, the smallest strictly full triangulated subcategory of 𝒯 containing A_1, ..., A_n is equal to 𝒯.
The notation 𝒯 = ⟨A_1, ..., A_n⟩ is used for a semiorthogonal decomposition.
Having a semiorthogonal decomposition implies that every object of 𝒯 has a canonical "filtration" whose graded pieces are (successively) in the subcategories A_1, ..., A_n. That is, for each object T of 𝒯, there is a sequence
0 = T_n → T_{n−1} → ⋯ → T_1 → T_0 = T
of morphisms in 𝒯 such that the cone of T_i → T_{i−1} is in A_i, for each i. Moreover, this sequence is unique up to a unique isomorphism.
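For the two-term case, the filtration reduces to a single exact triangle; the following LaTeX sketch records this special case:

```latex
% Two-term case of the canonical filtration: for a semiorthogonal
% decomposition $\mathcal{T} = \langle \mathcal{A}, \mathcal{B} \rangle$,
% every object $T$ sits in an exact triangle
\[
  B \longrightarrow T \longrightarrow A \longrightarrow B[1],
  \qquad B \in \mathcal{B}, \quad A \in \mathcal{A},
\]
% and the pair $(A, B)$ is unique up to unique isomorphism because
% $\operatorname{Hom}(\mathcal{B}, \mathcal{A}) = 0$.
```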
One can also consider "orthogonal" decompositions of a triangulated category, by requiring that there are no morphisms from A_i to A_j for any i ≠ j. However, that property is too strong for most purposes. For example, for an (irreducible) smooth projective variety X over a field, the bounded derived category D^b(X) of coherent sheaves never has a nontrivial orthogonal decomposition, whereas it may have a semiorthogonal decomposition, by the examples below.
A semiorthogonal decomposition of a triangulated category may be considered as analogous to a finite filtration of an abelian group. Alternatively, one may consider a semiorthogonal decomposition 𝒯 = ⟨A, B⟩ as closer to a split exact sequence, because the exact sequence 0 → A → 𝒯 → 𝒯/A → 0 of triangulated categories is split by the subcategory B, mapping isomorphically to 𝒯/A.
Using that observation, a semiorthogonal decomposition 𝒯 = ⟨A_1, ..., A_n⟩ implies a direct sum splitting of Grothendieck groups:
K_0(𝒯) ≅ K_0(A_1) ⊕ ⋯ ⊕ K_0(A_n).
For example, when 𝒯 = D^b(X) is the bounded derived category of coherent sheaves on a smooth projective variety X, K_0(𝒯) can be identified with the Grothendieck group K_0(X) of algebraic vector bundles on X. In this geometric situation, using that D^b(X) comes from a dg-category, a semiorthogonal decomposition actually gives a splitting of all the algebraic K-groups of X:
K_i(X) ≅ K_i(A_1) ⊕ ⋯ ⊕ K_i(A_n)
for all i.
Admissible subcategory
One way to produce a semiorthogonal decomposition is from an admissible subcategory. By definition, a full triangulated subcategory A ⊆ 𝒯 is left admissible if the inclusion functor i: A → 𝒯 has a left adjoint functor, written i*. Likewise, A ⊆ 𝒯 is right admissible if the inclusion has a right adjoint, written i^!, and it is admissible if it is both left and right admissible.
A right admissible subcategory B ⊆ 𝒯 determines a semiorthogonal decomposition
𝒯 = ⟨B^⊥, B⟩,
where
B^⊥ = {T in 𝒯 : Hom(b, T) = 0 for every object b of B}
is the right orthogonal of B in 𝒯. Conversely, every semiorthogonal decomposition 𝒯 = ⟨A, B⟩ arises in this way, in the sense that B is right admissible and A = B^⊥. Likewise, for any semiorthogonal decomposition 𝒯 = ⟨A, B⟩, the subcategory A is left admissible, and B = ^⊥A, where
^⊥A = {T in 𝒯 : Hom(T, a) = 0 for every object a of A}
is the left orthogonal of A.
If 𝒯 is the bounded derived category D^b(X) of a smooth projective variety X over a field k, then every left or right admissible subcategory of 𝒯 is in fact admissible. By results of Bondal and Michel Van den Bergh, this holds more generally for any regular proper triangulated category that is idempotent-complete.
Moreover, for a regular proper idempotent-complete triangulated category 𝒯, a full triangulated subcategory is admissible if and only if it is regular and idempotent-complete. These properties are intrinsic to the subcategory. For example, for X a smooth projective variety and Y a subvariety not equal to X, the subcategory of D^b(X) of objects supported on Y is not admissible.
Exceptional collection
Let k be a field, and let 𝒯 be a k-linear triangulated category. An object E of 𝒯 is called exceptional if Hom(E, E) = k and Hom(E, E[t]) = 0 for all nonzero integers t, where [t] is the shift functor in 𝒯. (In the derived category of a smooth complex projective variety X, the first-order deformation space of an object E is Ext¹(E, E) = Hom(E, E[1]), and so an exceptional object is in particular rigid. It follows, for example, that there are at most countably many exceptional objects in D^b(X), up to isomorphism. That helps to explain the name.)
The triangulated subcategory generated by an exceptional object E is equivalent to the derived category of finite-dimensional k-vector spaces, the simplest triangulated category in this context. (For example, every object of that subcategory is isomorphic to a finite direct sum of shifts of E.)
Alexei Gorodentsev and Alexei Rudakov (1987) defined an exceptional collection to be a sequence of exceptional objects E_1, ..., E_m such that Hom(E_j, E_i[t]) = 0 for all i < j and all integers t. (That is, there are "no morphisms from right to left".) In a proper triangulated category over k, such as the bounded derived category of coherent sheaves on a smooth projective variety, every exceptional collection generates an admissible subcategory, and so it determines a semiorthogonal decomposition:
𝒯 = ⟨A, E_1, ..., E_m⟩, where A = ⟨E_1, ..., E_m⟩^⊥, and ⟨E⟩ denotes the full triangulated subcategory generated by the object E. An exceptional collection is called full if the subcategory A is zero. (Thus a full exceptional collection breaks the whole triangulated category up into finitely many copies of D^b(k), the derived category of finite-dimensional k-vector spaces.)
In particular, if X is a smooth projective variety such that D^b(X) has a full exceptional collection E_1, ..., E_m, then the Grothendieck group of algebraic vector bundles on X is the free abelian group on the classes of these objects:
K_0(X) ≅ Z·[E_1] ⊕ ⋯ ⊕ Z·[E_m].
A smooth complex projective variety X with a full exceptional collection must have trivial Hodge theory, in the sense that h^{p,q}(X) = 0 for all p ≠ q; moreover, the cycle class map must be an isomorphism.
Examples
The original example of a full exceptional collection was discovered by Alexander Beilinson (1978): the derived category of projective space P^n over a field has the full exceptional collection
⟨O, O(1), ..., O(n)⟩,
where O(j) for integers j are the line bundles on projective space. Full exceptional collections have also been constructed on all smooth projective toric varieties, del Pezzo surfaces, many projective homogeneous varieties, and some other Fano varieties.
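In this example the induced splitting of the Grothendieck group can be written out explicitly; the following LaTeX sketch records Beilinson's decomposition and its K-theoretic consequence:

```latex
\[
  D^b(\mathbf{P}^n) \;=\; \big\langle\, O,\; O(1),\; \ldots,\; O(n) \,\big\rangle,
  \qquad
  K_0(\mathbf{P}^n) \;\cong\; \bigoplus_{j=0}^{n} \mathbf{Z}\cdot[O(j)]
  \;\cong\; \mathbf{Z}^{\,n+1}.
\]
```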
More generally, if X is a smooth projective variety of positive dimension such that the coherent sheaf cohomology groups H^i(X, O_X) are zero for i > 0, then the structure sheaf O_X is an exceptional object in D^b(X), and so it induces a nontrivial semiorthogonal decomposition D^b(X) = ⟨(O_X)^⊥, O_X⟩. This applies to every Fano variety over a field of characteristic zero, for example. It also applies to some other varieties, such as Enriques surfaces and some surfaces of general type.
A source of examples is Orlov's blowup formula, concerning the blowup X̃ of a scheme X along a codimension-k locally complete intersection subscheme Z with exceptional locus E. There is a semiorthogonal decomposition D^b(X̃) = ⟨D^b(Z), ..., D^b(Z), D^b(X)⟩ with k − 1 copies of D^b(Z), embedded by functors built from the natural map p: E → Z and the inclusion of the exceptional locus.
While these examples encompass a large number of well-studied derived categories, many naturally occurring triangulated categories are "indecomposable". In particular, for a smooth projective variety X whose canonical bundle is basepoint-free, every semiorthogonal decomposition D^b(X) = ⟨A, B⟩ is trivial in the sense that A or B must be zero. For example, this applies to every variety which is Calabi–Yau in the sense that its canonical bundle is trivial.
See also
Derived noncommutative algebraic geometry
Notes
References
Algebraic geometry | Semiorthogonal decomposition | Mathematics | 1,631 |
12,448,287 | https://en.wikipedia.org/wiki/Low-barrier%20hydrogen%20bond | A Low-barrier hydrogen bond (LBHB) is a special type of hydrogen bond. LBHBs can occur when the pKa of the two heteroatoms are closely matched, which allows the hydrogen to be more equally shared between them. This hydrogen-sharing causes the formation of especially short, strong hydrogen bonds.
Description
Standard hydrogen bonds are longer (e.g. 2.8 Å for an O···O hydrogen bond), and the hydrogen ion clearly belongs to one of the heteroatoms. When the pKa values of the heteroatoms are closely matched, an LBHB becomes possible at a shorter distance (~2.55 Å). When the distance decreases further (< 2.29 Å), the bond is characterized as a single-well or short-strong hydrogen bond.
Proteins
Low barrier hydrogen bonds occur in the water-excluding environments of proteins. Multiple residues act together in a charge-relay system to control the pKa values of the residues involved. LBHBs also occur on the surfaces of proteins, but are unstable due to their proximity to bulk water, and the conflicting requirements of strong salt-bridges in protein-protein interfaces.
Enzyme catalysis
Low-barrier hydrogen bonds have been proposed to be relevant to enzyme catalysis in two types of circumstance. Firstly, a low-barrier hydrogen bond in a charge relay network within an active site could activate a catalytic residue (e.g. between acid and base within a catalytic triad). Secondly, an LBHB could form during catalysis to stabilise a transition state (e.g. with substrate transition state in an oxyanion hole). Both of these mechanisms are contentious, with theoretical and experimental evidence split on whether they occur. Since the 2000s, the general consensus has been that LBHBs are not used by enzymes to aid catalysis. However, in 2012, a low-barrier hydrogen bond has been proposed to be involved in phosphate-arsenate discrimination for a phosphate transport protein. This finding might indicate the possibility of low-barrier hydrogen bonds playing a catalytic role in ion size selection for some very rare cases.
References
Chemical bonding | Low-barrier hydrogen bond | Physics,Chemistry,Materials_science | 442 |
979,306 | https://en.wikipedia.org/wiki/Orbital%20station-keeping | In astrodynamics, orbital station-keeping is keeping a spacecraft at a fixed distance from another spacecraft or celestial body. It requires a series of orbital maneuvers made with thruster burns to keep the active craft in the same orbit as its target. For many low Earth orbit satellites, the effects of non-Keplerian forces, i.e. the deviations of the gravitational force of the Earth from that of a homogeneous sphere, gravitational forces from Sun/Moon, solar radiation pressure and air drag, must be counteracted.
For spacecraft in a halo orbit around a Lagrange point, station-keeping is even more fundamental, as such an orbit is unstable; without an active control with thruster burns, the smallest deviation in position or velocity would result in the spacecraft leaving orbit completely.
Perturbations
The deviation of Earth's gravity field from that of a homogeneous sphere and gravitational forces from the Sun and Moon will in general perturb the orbital plane. For a Sun-synchronous orbit, the precession of the orbital plane caused by the oblateness of the Earth is a desirable feature that is part of mission design but the inclination change caused by the gravitational forces of the Sun and Moon is undesirable. For geostationary spacecraft, the inclination change caused by the gravitational forces of the Sun and Moon must be counteracted by a rather large expense of fuel, as the inclination should be kept sufficiently small for the spacecraft to be tracked by non-steerable antennae.
For spacecraft in a low orbit, the effects of atmospheric drag must often be compensated for to avoid re-entry; for missions requiring the orbit to be accurately synchronized with the Earth's rotation, this is also necessary to prevent a shortening of the orbital period.
Solar radiation pressure will in general perturb the eccentricity (i.e. the eccentricity vector); see Orbital perturbation analysis (spacecraft). For some missions, this must be actively counter-acted with maneuvers. For geostationary spacecraft, the eccentricity must be kept sufficiently small for a spacecraft to be tracked with a non-steerable antenna. Also for Earth observation spacecraft for which a very repetitive orbit with a fixed ground track is desirable, the eccentricity vector should be kept as fixed as possible. A large part of this compensation can be done by using a frozen orbit design, but often thrusters are needed for fine control maneuvers.
Low Earth orbit
For spacecraft in a very low orbit, the atmospheric drag is sufficiently strong to cause a re-entry before the intended end of mission if orbit raising maneuvers are not executed from time to time.
An example of this is the International Space Station (ISS), which has an operational altitude above Earth's surface of between 400 and 430 km (250-270 mi). Due to atmospheric drag the space station is constantly losing orbital energy. In order to compensate for this loss, which would eventually lead to a re-entry of the station, it has to be reboosted to a higher orbit from time to time. The chosen orbital altitude is a trade-off between the average thrust needed to counter-act the air drag and the impulse needed to send payloads and people to the station.
GOCE which orbited at 255 km (later reduced to 235 km) used ion thrusters to provide up to 20 mN of thrust to compensate for the drag on its frontal area of about 1 m2.
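An order-of-magnitude sketch of the drag force such a thruster must cancel, with the caveat that the density value is an assumption (thermospheric density near 255 km varies strongly with solar activity), while the frontal area is taken from the figure above:

```python
# Drag force F = 0.5 * rho * Cd * A * v^2 at roughly GOCE's 255 km altitude.
rho = 5e-11    # kg/m^3 -- assumed mean density; varies by an order of magnitude
Cd = 2.2       # assumed drag coefficient
A = 1.0        # m^2 frontal area (from the text)
v = 7755.0     # m/s, circular orbital speed at ~255 km

F_drag = 0.5 * rho * Cd * A * v**2
print(f"{F_drag * 1e3:.1f} mN")   # a few mN, within the 20 mN thruster capability
```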
Earth observation spacecraft
For Earth observation spacecraft typically operated in an altitude above the Earth surface of about 700 – 800 km the air-drag is very faint and a re-entry due to air-drag is not a concern. But if the orbital period should be synchronous with the Earth's rotation to maintain a fixed ground track, the faint air-drag at this high altitude must also be counter-acted by orbit raising maneuvers in the form of thruster burns tangential to the orbit. These maneuvers will be very small, typically in the order of a few mm/s of delta-v. If a frozen orbit design is used these very small orbit raising maneuvers are sufficient to also control the eccentricity vector.
To maintain a fixed ground track it is also necessary to make out-of-plane maneuvers to compensate for the inclination change caused by Sun/Moon gravitation. These are executed as thruster burns orthogonal to the orbital plane. For Sun-synchronous spacecraft having a constant geometry relative to the Sun, the inclination change due to the solar gravitation is particularly large; a delta-v in the order of 1–2 m/s per year can be needed to keep the inclination constant.
Geostationary orbit
For geostationary spacecraft, thruster burns orthogonal to the orbital plane must be executed to compensate for the effect of the lunar/solar gravitation, which perturbs the orbit pole by typically 0.85 degrees per year. The delta-v needed to compensate for this perturbation, keeping the inclination to the equatorial plane small, amounts to on the order of 45 m/s per year. This part of the GEO station-keeping is called North-South control.
The East-West control is the control of the orbital period and the eccentricity vector, performed by making thruster burns tangential to the orbit. These burns are designed to keep the orbital period perfectly synchronous with the Earth's rotation and to keep the eccentricity sufficiently small. Perturbation of the orbital period results from the imperfect rotational symmetry of the Earth relative to the North/South axis, sometimes called the ellipticity of the Earth's equator. The eccentricity (i.e. the eccentricity vector) is perturbed by the solar radiation pressure. The fuel needed for this East-West control is much less than what is needed for the North-South control.
To extend the lifetime of geostationary spacecraft with little fuel left, one sometimes discontinues the North-South control and continues only with the East-West control. As seen from an observer on the rotating Earth, the spacecraft will then move North-South with a period of 24 hours. When this North-South movement gets too large, a steerable antenna is needed to track the spacecraft. An example of this is Artemis.
To save weight, it is crucial for GEO satellites to have the most fuel-efficient propulsion system. Almost all modern satellites are therefore employing a high specific impulse system like plasma or ion thrusters.
Lagrange points
Orbits of spacecraft are also possible around Lagrange points—also referred to as libration points—five equilibrium points that exist in relation to two larger solar system bodies. For example, there are five of these points in the Sun-Earth system, five in the Earth-Moon system, and so on. Spacecraft may orbit around these points with a minimum of propellant required for station-keeping purposes. Two orbits that have been used for such purposes include halo and Lissajous orbits.
One important Lagrange point is Earth-Sun L1, and three heliophysics missions have been orbiting L1 since approximately 2000. Station-keeping propellant use can be quite low, facilitating missions that can potentially last decades should other spacecraft systems remain operational. The three spacecraft—Advanced Composition Explorer (ACE), Solar Heliospheric Observatory (SOHO), and the Global Geoscience WIND satellite—each have annual station-keeping propellant requirements of approximately 1 m/s or less.
Earth-Sun L2—approximately 1.5 million kilometers from Earth in the anti-sun direction—is another important Lagrange point, and the ESA Herschel space observatory operated there in a Lissajous orbit during 2009–2013, at which time it ran out of coolant for the space telescope. Small station-keeping orbital maneuvers were executed approximately monthly to maintain the spacecraft in the station-keeping orbit.
The James Webb Space Telescope will use propellant to maintain its halo orbit around the Earth-Sun L2, which provides an upper limit to its designed lifetime: it is being designed to carry enough for ten years. However, the precision of trajectory following launch by an Ariane 5 is credited with potentially doubling the lifetime of the telescope by leaving more hydrazine propellant on-board than expected.
The CAPSTONE orbiter and the planned Lunar Gateway is stationed along a 9:2 synodically resonant Near Rectilinear Halo Orbit (NRHO) around the Earth-Moon L2 Lagrange point.
See also
Delta-v budget
Orbital perturbation analysis
Reboost
Teleoperator Retrieval System (robotic device for attaching to another spacecraft and boosting or changing its orbit)
References
External links
Station-keeping at the Encyclopedia of Astrobiology, Astronomy, and Spaceflight
XIPS Xenon Ion Propulsion Systems
Jules Verne boosts ISS orbit Jules Verne boosts ISS orbit (report from the European Space Agency)
Orbital maneuvers
Astrodynamics
Earth orbits | Orbital station-keeping | Engineering | 1,805 |
15,036,711 | https://en.wikipedia.org/wiki/Strobilation | Strobilation or transverse fission is a form of asexual reproduction consisting of the spontaneous transverse segmentation of the body. It is observed in certain cnidarians and helminths. This mode of reproduction is characterized by high offspring output, which, in the case of the parasitic tapeworms, is of great significance.
Strobilation in cnidarians
The process starts with preliminary morphological changes. In particular, the cnidarian's tentacles tend to be reabsorbed.
Neck-formation: transverse constrictions appear near the upper extremity of the animal. A strobilating polyp is called a strobila while the non-strobilating polyp is called a scyphistoma or scyphopolyp.
Segmentation: the number of constriction sites increases and migrates down the body length, transforming the body into a sequence of disks. The fissures intensify until the initial body is divided into equally spaced, separate segments. The oral end of the polyp becomes the oral end of the ephyra.
Metamorphosis: neurosecretory products of the two previous processes now disappear.
Neck-formation and segmentation are only separated for clarity purposes. In reality, the two processes are simultaneous, with segmentation to release new ephyras occurring at the upper end while neck formation spreads further down the body. Usually, a portion of the animal remains adhered to the substrate and regenerates the body.
Examples
The moon jellyfish (Aurelia aurita) reproduces both sexually and by strobilation. The latter process occurs during the colonial polyp stage and produces either polyps or juvenile medusae called ephyrae. Strobilation tends to occur at specific periods, typically early spring. As ephyra size remains constant regardless of the polyp size, larger polyps produce more numerous ephyrae.
Some scyphozoan cnidarians, such as Nausithoe aurea, also strobilate in their solitary polyp form, producing either ephyrae or planuloids. Strobilation does not happen periodically, but is thought to be induced by external stimuli, such as iodine, light regime, temperature, or food availability.
Induction in laboratory conditions
Strobilation is successfully induced in laboratory conditions by intensive feeding and temperature lowering, and also by the effect of artificial compounds.
Strobilation in helminths
In cestodes, the whole body except for the head (scolex) and the neck undergoes strobilation continuously, reflecting the important role reproduction plays in the parasitic mode of life. The strobilating section is called the strobila, and each of its segments is a proglottid. As they mature, proglottids are shed in the feces of the host.
References
Reproduction in animals
Scyphozoa | Strobilation | Biology | 603 |
43,475,307 | https://en.wikipedia.org/wiki/Abell%20Catalog%20of%20Planetary%20Nebulae | The Abell Catalog of Planetary Nebulae was created in 1966 by George O. Abell and was composed of 86 entries thought to be planetary nebulae.
The objects were collected from discoveries, about half by Albert George Wilson and the rest by Abell, Robert George Harrington, and Rudolph Minkowski. All were discovered before August 1955 as part of the National Geographic Society – Palomar Observatory Sky Survey on photographic plates created with the Samuel Oschin telescope at Mount Palomar. Four are better known from previous catalogs: Abell 50 is NGC 6742, Abell 75 is NGC 7076, Abell 37 is IC 972, and Abell 81 is IC 1454. Another four were later rejected as not being planetaries: Abell 11 (reflection nebula), Abell 32 (red plate flaw), Abell 76 (ring galaxy PGC 85185), and Abell 85 (supernova remnant CTB 1 and noted as possibly such in Abell's 1966 paper). Another three were also not included in the Strasbourg-ESO Catalogue of Galactic Planetary Nebulae (SEC): Abell 9, Abell 17 (red plate flaw), and Abell 64.
Planetaries on the list are best viewed with a large-aperture telescope and an OIII filter.
See also
Abell 21
Abell 33
Abell 39
References
External links
Abell Planetarische Nebel
A complete list of planetary nebulae in the Abell catalogue
Astronomical catalogues
Planetary nebulae | Abell Catalog of Planetary Nebulae | Astronomy | 309 |
57,247,594 | https://en.wikipedia.org/wiki/U.S.%20Army%20Corps%20of%20Engineers%20Superintendent%27s%20House%20and%20Workmen%27s%20Office | The U.S. Army Corps of Engineers Superintendent's House and Workmen's Office, in Woodbury Park, Woodbury, Kentucky, was listed on the National Register of Historic Places in 1980.
The listing is for two buildings on a high bluff:
the superintendent's house (1912–13), a two-story Flemish bond brick house with a wraparound porch, and
a one-and-a-half-story office Flemish bond brick building (c.1889)
They are located on Federal Hill, overlooking Lock and Dam #4 of the Green River, within a Butler County park.
At the time of the 1980 NRHP nomination, the buildings had been vacant since 1973, when the U.S. Army Corps of Engineers moved its last employee out of Woodbury.
See also
Finney Hotel, 1890 hotel also in Woodbury Park and NRHP-listed
References
National Register of Historic Places in Butler County, Kentucky
Government buildings completed in 1913
1913 establishments in Kentucky
Unused buildings in Kentucky
1889 establishments in Kentucky
Government buildings completed in 1889
Office buildings completed in 1889
Houses completed in 1913
United States Army Corps of Engineers | U.S. Army Corps of Engineers Superintendent's House and Workmen's Office | Engineering | 226 |
39,502,824 | https://en.wikipedia.org/wiki/Sharing%20economy | The sharing economy is a socio-economic system whereby consumers share in the creation, production, distribution, trade and consumption of goods, and services. These systems take a variety of forms, often leveraging information technology and the Internet, particularly digital platforms, to facilitate the distribution, sharing and reuse of excess capacity in goods and services.
It can be facilitated by nonprofit organizations, usually based on the concept of book-lending libraries, in which goods and services are provided for free (or sometimes for a modest subscription) or by commercial entities, in which a company provides a service to customers for profit.
It relies on users' willingness to share and on the overcoming of "stranger danger".
It provides benefits: for example, it can lower the GHG emissions of products by 77%-85%.
Origins
Dariusz Jemielniak and Aleksandra Przegalinska credit Marcus Felson and Joe L. Spaeth's academic article "Community Structure and Collaborative Consumption" published in 1978 with coining the term economy of sharing.
The term "sharing economy" began to appear around the time of the Great Recession, enabling social technologies, and an increasing sense of urgency around global population growth and resource depletion. Lawrence Lessig was possibly first to use the term in 2008, though others claim the origin of the term is unknown.
Definition and related concepts
There is a conceptual and semantic confusion caused by the many facets of Internet-based sharing leading to discussions regarding the boundaries and the scope of the sharing economy and regarding the definition of the sharing economy. Arun Sundararajan noted in 2016 that he is "unaware of any consensus on a definition of the sharing economy". As of 2015, according to a Pew Research Center survey, only 27% of Americans had heard of the term "sharing economy".
The term "sharing economy" is often used ambiguously and can imply different characteristics. Survey respondents who had heard of the term had divergent views on what it meant, with many thinking it concerned "sharing" in the traditional sense of the term. To this end, the terms “sharing economy” and “collaborative consumption” have often been used interchangeably. Collaborative consumption refers to the activities and behaviors that drive the sharing economy, making the two concepts closely interrelated. A definition published in the Journal of Consumer Behavior in 2015 emphasizes these synergies: “Collaborative consumption takes place in organized systems or networks, in which participants conduct sharing activities in the form of renting, lending, trading, bartering, and swapping of goods, services, transportation solutions, space, or money.”
The sharing economy is sometimes understood exclusively as a peer-to-peer phenomenon, while at other times it has been framed as a business-to-customer phenomenon. Additionally, the sharing economy can be understood to encompass transactions with a permanent transfer of ownership of a resource, such as a sale, while at other times transactions with a transfer of ownership are considered beyond the boundaries of the sharing economy. One definition of the sharing economy, developed to integrate existing understandings and definitions on the basis of a systematic review, is: "the sharing economy is an IT-facilitated peer-to-peer model for commercial or non-commercial sharing of underutilized goods and service capacity through an intermediary without transfer of ownership." The phenomenon has been defined from a legal perspective as "a for-profit, triangular legal structure where two parties (Providers and Users) enter into binding contracts for the provision of goods (partial transfer of the property bundle of rights) or services (ad hoc or casual services) in exchange for monetary payment through an online platform operated by a third party (Platform Operator) with an active role in the definition and development of the legal conditions upon which the goods and services are provided." Under this definition, the "Sharing Economy" is a triangular legal structure with three different legal actors: "1) a Platform Operator which using technology provides aggregation and interactivity to create a legal environment by setting the terms and conditions for all the actors; (2) a User who consumes the good or service on the terms and conditions set by the Platform Operator; and (3) a Provider who provides a good or service also abiding by the Platform Operator's terms and conditions."
While the term "sharing economy" is the term most often used, the sharing economy is also referred to as the access economy, crowd-based capitalism, collaborative economy, community-based economy, gig economy, peer economy, peer-to-peer (P2P) economy, platform economy, renting economy and on-demand economy, though at times some of those terms have been defined as separate if related topics.
The notion of "sharing economy" has often been considered an oxymoron, and a misnomer for actual commercial exchanges. Arnould and Rose proposed to replace the misleading term "sharing" with "mutuality". In an article in Harvard Business Review, authors Giana M. Eckhardt and Fleura Bardhi argue that "sharing economy" is a misnomer, and that the correct term for this activity is access economy. The authors say, "When 'sharing' is market-mediated—when a company is an intermediary between consumers who don't know each other—it is no longer sharing at all. Rather, consumers are paying to access someone else's goods or services." The article states that companies (such as Uber) that understand this, and whose marketing highlights the financial benefits to participants, are successful, while companies (such as Lyft) whose marketing highlights the social benefits of the service are less successful. According to George Ritzer, this trend towards increased consumer input in commercial exchanges refers to the notion of prosumption, which, as such, is not new. Jemielniak and Przegalinska note that the term sharing economy is often used to discuss aspects of the society that do not predominantly relate to the economy, and propose a broader term collaborative society for such phenomena.
The term "platform capitalism" has been proposed by some scholars as more correct than "sharing economy" in discussion of activities of for-profit companies like Uber and Airbnb in the economy sector. Companies that try to focus on fairness and sharing, instead of just profit motive, are much less common, and have been contrastingly described as platform cooperatives (or cooperativist platforms vs capitalist platforms). In turn, projects like Wikipedia, which rely on unpaid labor of volunteers, can be classified as commons-based peer-production initiatives. A related dimension is concerned with whether users are focused on non-profit sharing or maximizing their own profit. Sharing is a model that is adapting to the abundance of resource, whereas for-profit platform capitalism is a model that persists in areas where there is still a scarcity of resources.
Yochai Benkler, one of the earliest proponents of open source software, studied the tragedy of the commons, which refers to the idea that when people all act solely in their own self-interest, they deplete the shared resources they need for their own quality of life. He posited that network technology could mitigate this issue through what he called "commons-based peer production", a concept he first articulated in 2002. Benkler then extended that analysis to "shareable goods" in "Sharing Nicely: On Shareable Goods and the Emergence of Sharing as a Modality of Economic Production", written in 2004.
Actors of the sharing economy
There is a wide range of actors who participate in the sharing economy. These include individual users, for-profit enterprises, social enterprises or cooperatives, digital platform companies, local communities, non-profit enterprises, and the public sector or government. Individual users are the actors engaged in sharing goods and resources through "peer-to-peer (P2P) or business-to-peer (B2P) transactions". For-profit enterprises are profit-seeking actors who buy, sell, lend, rent or trade, using digital platforms as a means to collaborate with other actors. Social enterprises, sometimes referred to as cooperatives, are mainly "motivated by social or ecological reasons" and seek to empower actors as a means of genuine sharing. Digital platforms are technology firms that facilitate the relationship between transacting parties and make profits by charging commissions. Local communities are players at the local level with varied structures and sharing models, where most activities are non-monetized and often carried out to further develop the community. Non-profit enterprises aim at "advancing a mission or purpose" for a greater cause; their primary motivation is the genuine sharing of resources. In addition, the public sector or the government can participate in the sharing economy by "using public infrastructures to support or forge partnerships with other actors and to promote innovative forms of sharing".
Commercial dimension
Lizzie Richardson noted that the sharing economy "constitutes an apparent paradox, framed as both part of the capitalist economy and as an alternative". A distinction can be made between free (genuine) sharing and for-profit sharing, the latter often associated with companies such as Uber, Airbnb, and TaskRabbit. Commercial co-options of the 'sharing economy' encompass a wide range of structures, mostly for-profit and, to a lesser extent, co-operative. The sharing economy provides expanded access to products, services and talent beyond one-to-one or singular ownership, which is sometimes referred to as "disownership". Individuals actively participate as users, providers, lenders or borrowers in varied and evolving peer-to-peer exchange schemes.
The usage of the term sharing by for-profit companies has been described as "abuse" and "misuse" of the term, or more precisely, its commodification. In commercial applications, the sharing economy can be considered a marketing strategy more than an actual 'sharing economy' ethos; for example, Airbnb has sometimes been described as a platform for individuals to 'share' extra space in their homes, but in some cases the space is rented, not shared. Airbnb listings are additionally often owned by property management corporations. This has led to a number of legal challenges, with some jurisdictions ruling, for example, that ride sharing through for-profit services like Uber de facto makes the drivers indistinguishable from regular employees of ride sharing companies. The escrow-like model practiced by several of the largest sharing economy platforms, which facilitate and handle contracting and payments on behalf of their subscribers, further underlines an emphasis on access and transaction rather than on sharing.
Sharing of resources has long existed in business-to-business (B2B) settings, such as heavy machinery in agriculture and forestry, as well as in business-to-consumer (B2C) settings, such as self-service laundry. But three major drivers enable consumer-to-consumer (C2C) sharing of resources for a broad variety of new goods and services as well as new industries. First, customer behavior for many goods and services is changing from ownership to sharing. Second, online social networks and electronic markets more easily link consumers. And third, mobile devices and electronic services make the use of shared goods and services more convenient.
Size and growth
United States
According to a report by the United States Department of Commerce in June 2016, quantitative research on the size and growth of the sharing economy remains sparse. Growth estimates can be difficult to evaluate because of differing, and sometimes unspecified, definitions of what counts as a sharing economy transaction. The report noted a 2014 study by PricewaterhouseCoopers, which looked at five components of the sharing economy: travel, car sharing, finance, staffing and streaming. It found that global spending in these sectors totaled about $15 billion in 2014, which was only about 5% of the total spending in those areas. The report also forecast a possible increase of "sharing economy" spending in these areas to $335 billion by 2025, which would be about 50% of the total spending in these five areas. A 2015 PricewaterhouseCoopers study found that nearly one-fifth of American consumers partake in some type of sharing economy activity. A 2017 report by Diana Farrell and Fiona Greig suggested that, at least in the US, sharing economy growth may have peaked.
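Those percentages imply rough totals for the five sectors combined. The following back-of-the-envelope check is our own arithmetic, not a figure stated in the report, and is included only to make the implied market sizes explicit:

```python
# Implied total spending in the five sectors, derived from the
# report's own shares (illustrative arithmetic, not reported data).
sharing_2014, share_2014 = 15e9, 0.05    # $15B is ~5% of the total
sharing_2025, share_2025 = 335e9, 0.50   # $335B would be ~50%

total_2014 = sharing_2014 / share_2014   # ~ $300 billion
total_2025 = sharing_2025 / share_2025   # ~ $670 billion
print(f"${total_2014 / 1e9:.0f}B -> ${total_2025 / 1e9:.0f}B")
```

In other words, the forecast implies total spending in those five areas roughly doubling, while the sharing-economy share of it grows more than twentyfold.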
Europe
A February 2018 study ordered by the European Commission and the Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs indicated the level of collaborative economy development across the EU-28 countries in the transport, accommodation, finance and online skills sectors. The size of the collaborative economy relative to the total EU economy was estimated at €26.5 billion in 2016. Some experts predict that the sharing economy could add between €160 billion and €572 billion to the EU economy in the coming years.
According to "The Sharing Economy in Europe" from 2022 the sharing economy is spreading rapidly and widely in today's European societies; however, the sharing economy requires more regulation at European level because of increasing problems related to its functioning. The authors also suggest that sometimes the local initiatives, especially when it comes to specific niches, are doing even better than global corporations.
China
In China, the sharing economy doubled in 2016, reaching 3.45 trillion yuan ($500 billion) in transaction volume, and was expected to grow by 40% per year on average over the next few years, according to the country's State Information Center. In 2017, an estimated 700 million people used sharing economy platforms. According to a report from the State Information Center of China, the sharing economy was still growing in 2022, reaching about 3.83 trillion yuan (US$555 billion). The report also includes an overview of the seven main sectors of China's sharing economy: domestic services, production capacity, knowledge and skills, shared transportation, shared healthcare, co-working space, and shared accommodation.
On most sharing-economy platforms in China, user profiles are connected to WeChat or Alipay, which require real-name identification; this helps minimise service abuse. It has also contributed to increased interest in shared healthcare services.
Russia
According to TIARCENTER and the Russian Association of Electronic Communications, eight key verticals of Russia's sharing economy (C2C sales, odd jobs, car sharing, carpooling, accommodation rentals, shared offices, crowdfunding, and goods sharing) grew 30% to 511 billion rubles ($7.8 billion) in 2018.
Japan
According to the Sharing Economy Association of Japan, the market size of the sharing economy in Japan in 2021 was 2.4 trillion yen. It is expected to expand to as much as 14.2799 trillion yen by FY2030.
Overall, the Japanese environment is not well suited to the development of a sharing economy. Industries do not seek revolutionary new solutions, and some services are banned. For example, Uber's ride-hailing service is not very popular in Japan: public transport is extensive, regulations ban private car-sharing services from operating, and taxi apps are much more popular. According to The Japan Times (2024), ride-hailing services may become available in the future, though only in certain areas where taxis are deemed to be in short supply.
Economic effects
The impacts of the access economy in terms of costs, wages and employment are not easily measured, but appear to be growing. Various estimates indicate that 30–40% of the U.S. workforce are self-employed, part-time, temporary, or freelance workers. However, the exact percentage of those performing short-term tasks or projects found via technology platforms had not been effectively measured by government sources as of 2015. In the U.S., one private industry survey placed the number of "full-time independent workers" at 17.8 million in 2015, roughly the same as in 2014. Another survey estimated the number of workers who do at least some freelance work at 53.7 million in 2015, roughly 34% of the workforce and up slightly from 2014.
Economists Lawrence F. Katz and Alan B. Krueger wrote in March 2016 that there is a trend towards more workers in alternative (part-time or contract) work arrangements rather than full-time; the percentage of workers in such arrangements rose from 10.1% in 2005 to 15.8% in late 2015. Katz and Krueger defined alternative work arrangements as "temporary help agency workers, on-call workers, contract company workers, and independent contractors or free-lancers". They also estimated that approximately 0.5% of all workers identify customers through an online intermediary; this was consistent with two other studies that estimated the figure at 0.4% and 0.6%.
At the individual transaction level, the removal of a higher overhead business intermediary (say a taxi company) with a lower cost technology platform helps reduce the cost of the transaction for the customer while also providing an opportunity for additional suppliers to compete for the business, further reducing costs. Consumers can then spend more on other goods and services, stimulating demand and production in other parts of the economy. Classical economics argues that innovation that lowers the cost of goods and services represents a net economic benefit overall. However, like many new technologies and business innovations, this trend is disruptive to existing business models and presents challenges for governments and regulators.
For example, should the companies providing the technology platform be liable for the actions of the suppliers in their network? Should persons in their network be treated as employees, receiving benefits such as healthcare and retirement plans? If consumers tend to be higher income persons while the suppliers are lower-income persons, will the lower cost of the services (and therefore lower compensation of the suppliers) worsen income inequality? These are among the many questions the on-demand economy presents.
Cost management and budgeting by providers
Using a personal car to transport passengers or deliveries requires the driver to bear costs including fees deducted by the dispatching company, fuel, wear and tear, depreciation, interest, taxes, and adequate insurance. The driver is typically not paid for driving to an area where fares might be found in the volume necessary for high earnings, driving to the location of a pickup, or returning from a drop-off point. Mobile apps have been introduced that help drivers track and manage such costs.
Effects on infrastructure
Ridesharing companies have affected traffic congestion and Airbnb has affected housing availability. According to transportation analyst Charles Komanoff, "Uber-caused congestion has reduced traffic speeds in downtown Manhattan by around 8 percent".
Effects on crime and litigation
Depending on the structure of a country's legal system, companies involved in the sharing economy may shift the legal realm in which cases involving sharers are disputed. Technology (such as algorithmic controls) that connects sharers also allows for the development of policies and standards of service. Companies can act as 'guardians' of their customer base by monitoring their workers' behavior. For example, Uber and Lyft can monitor their drivers' driving behavior and location, and provide emergency assistance. Several studies have shown that in the United States, the sharing economy restructures how legal disputes are resolved and who is considered the victim of a potential crime.
In United States civil law, a dispute is between two individuals, and the question is which individual (if any) is the victim of the other party. U.S. criminal law instead treats an offender as "victimizing" the state by breaking state or federal law(s). In criminal law cases, a government court punishes the offender to make the legal victim (the government) whole, but any civilian victim does not necessarily receive restitution from the state. In civil law cases, it is the directly victimized party, not the state, who receives the compensatory restitution, fees, or fines. While it is possible for both kinds of law to apply to a case, the additional contracts created in sharing economy agreements create the opportunity for more cases to be classified as civil law disputes. When the sharing economy is directly involved, the victim is the individual rather than the state. This means the civilian victim of a crime is more likely to receive compensation under a civil law case in the sharing economy than under the criminal law precedent. The introduction of civil law cases has the potential to increase victims' ability to be made whole, since the legal change shifts consumers' incentives towards action.
Benefits
Suggested benefits of the sharing economy include:
Additional flexible job opportunities as gig workers
Freelance work entails better opportunities for employment, as well as more flexibility for workers, since people have the ability to pick and choose the time and place of their work. As freelance workers, people can plan around their existing schedules and maintain multiple jobs if needed. Evidence of the appeal of this type of work can be seen in a 2015 survey conducted by the Freelancers Union, which showed that around 34% of the U.S. population was involved in freelance work.
Freelance work can also be beneficial for small businesses. During their early developmental stages, many small companies cannot afford, or do not need, full-time departments, but rather require specialized work for a certain project or for a short period of time. With freelance workers offering their services in the sharing economy, firms are able to save money on long-term labor costs and increase the marginal revenue from their operations.
The sharing economy allows workers to set their own hours of work. An Uber driver explains, "the flexibility extends far beyond the hours you choose to work on any given week. Since you don't have to make any sort of commitment, you can easily take time off for the big moments in your life as well, such as vacations, a wedding, the birth of a child, and more." Workers are able to accept or reject additional work based on their needs while using the commodities they already possess to make money. The sharing economy thus provides increased flexibility of work hours and wages for its independent contractors.
Depending on their schedules and resources, workers can provide services in more than one area with different companies. This allows workers to relocate and continue earning income. Also, by working for such companies, the transaction costs associated with occupational licenses are significantly lowered. For example, in New York City, taxi drivers must have a special driver's license and undergo training and background checks, while Uber contractors can offer "their services for little more than a background check".
The percentage of seniors in the work force increased from 20.7% in 2009 to 23.1% in 2015, an increase in part attributed to additional employment as gig workers.
Transparent and open data increases innovation
A common premise is that when information about goods is shared (typically via an online marketplace), the value of those goods may increase for the business, for individuals, for the community and for society in general.
Many state, local and federal governments are engaged in open data initiatives and projects, such as data.gov. The theory is that open or "transparent" access to information enables greater innovation and makes for more efficient use of products and services, thus supporting resilient communities.
Reduction in unused value
Unused value refers to the time over which products, services, and talents lay idle. This idle time is wasted value that business models and organizations that are based on sharing can potentially utilize. The classic example is that the average car is unused 95% of the time. This wasted value can be a significant resource, and hence an opportunity, for sharing economy car solutions. There is also significant unused value in "wasted time", as articulated by Clay Shirky in his analysis of the power of crowds connected by information technology. Many people have unused capacity in the course of their day. With social media and information technology, such people can donate small slivers of time to take care of simple tasks that others need doing. Examples of these crowdsourcing solutions include the for-profit Amazon Mechanical Turk and the non-profit Ushahidi.
Christopher Koopman, an author of a 2015 study by George Mason University economists, said the sharing economy "allows people to take idle capital and turn them into revenue sources". He has stated, "People are taking spare bedroom[s], cars, tools they are not using and becoming their own entrepreneurs."
Arun Sundararajan, a New York University economist who studies the sharing economy, told a congressional hearing that "this transition will have a positive impact on economic growth and welfare, by stimulating new consumption, by raising productivity, and by catalyzing individual innovation and entrepreneurship".
Lower prices due to increased competition and reusing items
An independent data study conducted by Busbud in 2016 compared the average price of hotel rooms with the average price of Airbnb listings in thirteen major cities in the United States. The research concluded that in nine of the thirteen cities, Airbnb rates were lower than hotel rates by an average of $34.56. A further Busbud study compared the average hotel rate with the average Airbnb rate in eight major European cities, concluding that Airbnb rates were lower than hotel rates in six of the eight cities by an average of $72. Data from a separate study shows that after Airbnb's entry into the Austin, Texas market, hotels there had to lower prices by 6 percent to keep up with Airbnb's lower prices.
The sharing economy lowers consumer costs via borrowing and recycling items.
Environmental benefits
The sharing economy reduces negative environmental impacts by decreasing the number of goods that need to be produced, cutting down on industrial pollution (for example, by reducing the carbon footprint and the overall consumption of resources).
The sharing economy allows the reuse and repurpose of already existing commodities. Under this business model, private owners share the assets they already possess when not in use.
The sharing economy accelerates sustainable consumption and production patterns.
In 2019, a comprehensive study examined the effect on greenhouse gas emissions of one sharing platform, which facilitates the sharing of around 7,000 products and services. It found that emissions were reduced by 77–85%.
Access to goods without the requirement to purchase
The sharing economy provides access to goods for people who cannot afford to buy them or have no interest in doing so.
Increase in quality of products and services
The sharing economy facilitates increased quality of service through the rating systems provided by the companies involved in it. It also pushes incumbent firms to improve the quality of their services in order to keep up with sharing firms like Uber and Lyft.
Other benefits
A study in Intereconomics / The Review of European Economic Policy noted that the sharing economy has the potential to bring many benefits for the economy, while noting that this presupposes that the success of sharing economy services reflects their business models rather than 'regulatory arbitrage' from avoiding the regulation that affects traditional businesses.
Additional benefits include:
Strengthening communities
Increased independence, flexibility and self-reliance by decentralization, the abolition of monetary entry-barriers, and self-organization
Increased participatory democracy
Maximum benefit for sellers and buyers: enables users to improve living standards by eliminating the emotional, physical, and social burdens of ownership. Without the need to maintain a large inventory, deadweight loss is reduced and prices are kept low, all while remaining competitive in the market.
New jobs are created, and products bought, as people acquire items such as cars or apartments to use in the sharing economy activities.
Criticism
Oxford Internet Institute Economic Geographer Mark Graham argued that key parts of the sharing economy impose a new balance of power onto workers. By bringing together workers in low- and high-income countries, gig economy platforms that are not geographically confined can bring about a 'race to the bottom' for workers.
Relationship to job loss
New York Magazine wrote that the sharing economy has succeeded in large part because the real economy has been struggling. Specifically, in the magazine's view, the sharing economy succeeds because of a depressed labor market, in which "lots of people are trying to fill holes in their income by monetizing their stuff and their labor in creative ways", and in many cases, people join the sharing economy because they've recently lost a full-time job, including a few cases where the pricing structure of the sharing economy may have made their old jobs less profitable (e.g. full-time taxi drivers who may have switched to Lyft or Uber). The magazine writes that "In almost every case, what compels people to open up their homes and cars to complete strangers is money, not trust.... Tools that help people trust in the kindness of strangers might be pushing hesitant sharing-economy participants over the threshold to adoption. But what's getting them to the threshold in the first place is a damaged economy and harmful public policy that has forced millions of people to look to odd jobs for sustenance."
Uber's "audacious plan to replace human drivers" may increase job loss as even freelance driving will be replaced by automation.
However, in a report published in January 2017, Carl Benedikt Frey found that while the introduction of Uber had not led to jobs being lost, it had caused a reduction of almost 10% in the incomes of incumbent taxi drivers. Frey found that the "sharing economy", and Uber in particular, has had substantial negative impacts on workers' wages.
Some people believe the Great Recession led to the expansion of the sharing economy because job losses enhanced the desire for temporary work, which is prevalent in the sharing economy. However, there are disadvantages to the worker; when companies use contract-based employment, the "advantage for a business of using such non-regular workers is obvious: It can lower labor costs dramatically, often by 30 percent, since it is not responsible for health benefits, social security, unemployment or injured workers' compensation, paid sick or vacation leave and more. Contract workers, who are barred from forming unions and have no grievance procedure, can be dismissed without notice".
Treatment of workers as independent contractors and not employees
There is debate over the status of workers within the sharing economy: whether they should be treated as independent contractors or as employees of the companies. This issue seems to be most relevant for sharing economy companies such as Uber. The reason this has become such a major issue is that the two types of workers are treated very differently. Contract workers are not guaranteed any benefits and pay can be below average. However, if they are employees, they are granted access to benefits and pay is generally higher. This has been described as "shifting liabilities and responsibilities" to the workers, while denying them traditional job security. It has been argued that this trend is de facto "obliterating the achievements of unions thus far in their struggle to secure basic mutual obligations in worker-employer relations".
In Uberland: How the Algorithms are Rewriting the Rules of Work, technology ethnographer Alex Rosenblat argues that Uber's reluctance to classify its drivers as "employees" strips them of their agency as the company's revenue-generating workforce, resulting in lower compensation and, in some cases, risking their safety. In particular, Rosenblat critiques Uber's ratings system, which she argues elevates passengers to the role of "middle managers" without offering drivers the chance to contest poor ratings. Rosenblat notes that poor ratings, or any other number of unspecified breaches of conduct, can result in an Uber driver's "deactivation", an outcome Rosenblat likens to being fired without notice or stated cause. Prosecutors have used Uber's opaque firing policy as evidence of illegal worker misclassification; Shannon Liss-Riordan, an attorney leading a class action lawsuit against the company, claims that "the ability to fire at will is an important factor in showing a company's workers are employees, not independent contractors."
The California Public Utilities Commission filed a case, later settled out of court, that "addresses the same underlying issue seen in the contract worker controversy—whether the new ways of operating in the sharing economy model should be subject to the same regulations governing traditional businesses". Like Uber, Instacart faced similar lawsuits. In 2015, a lawsuit was filed against Instacart alleging that the company misclassified people who buy and deliver groceries as independent contractors. Instacart eventually had to make all such workers part-time employees and to extend benefits such as health insurance to those who qualified. This left Instacart with thousands of employees overnight, up from zero.
A 2015 article by economists at George Mason University argued that many of the regulations circumvented by sharing economy businesses are exclusive privileges lobbied for by interest groups. Workers and entrepreneurs not connected to the interest groups engaging in this rent-seeking behavior are thus restricted from entry into the market. For example, taxi unions lobbying a city government to restrict the number of cabs allowed on the road prevents larger numbers of drivers from entering the marketplace.
The same research finds that while access economy workers do lack the protections that exist in the traditional economy, many of them cannot actually find work in the traditional economy. In this sense, they are taking advantage of opportunities that the traditional regulatory framework has not been able to provide for them. As the sharing economy grows, governments at all levels are reevaluating how to adjust their regulatory schemes to accommodate these workers.
However, a 2021 study of Uber's downfall in Turkey, carried out using user-generated content from TripAdvisor comments and YouTube videos related to Uber use in Istanbul, found that the main reasons people use Uber are that the independent drivers tend to treat customers more kindly than regular taxi drivers, and that Uber is much cheaper. Turkish taxi drivers, however, claimed that Uber's operations in Turkey were illegal because the independent drivers do not pay the operating license fee that taxi drivers are required to pay to the government. Their efforts led the Turkish government to ban Uber in October 2019. After being unavailable for over a year, Uber became available again in Turkey in January 2021.
Benefits not accrued evenly
Andrew Leonard and Evgeny Morozov criticized the for-profit sector of the sharing economy, writing that sharing economy businesses "extract" profits from their given sector by "successfully [making] an end run around the existing costs of doing business" – taxes, regulations, and insurance. Similarly, in the context of online freelancing marketplaces, there have been worries that the sharing economy could result in a 'race to the bottom' in terms of wages and benefits as millions of new workers from low-income countries come online.
Susie Cagle wrote that the benefits big sharing economy players might be making for themselves are "not exactly" trickling down, and that the sharing economy "doesn't build trust" because where it builds new connections, it often "replicates old patterns of privileged access for some, and denial for others". William Alden wrote that "The so-called sharing economy is supposed to offer a new kind of capitalism, one where regular folks, enabled by efficient online platforms, can turn their fallow assets into cash machines ... But the reality is that these markets also tend to attract a class of well-heeled professional operators, who outperform the amateurs—just like the rest of the economy".
The local economic benefit of the sharing economy is offset by its current form, in which huge tech companies reap a great deal of the profit. For example, Uber, estimated to be worth $50B as of mid-2015, takes up to 30% commission from the gross revenue of its drivers, leaving many drivers making less than minimum wage. This is reminiscent of a rentier state, "which derives all or a substantial portion of its national revenues from the rent of indigenous resources to external clients".
Other issues
Companies such as Airbnb and Uber do not share reputation data: individual behavior on any one platform does not transfer to other platforms. This fragmentation has negative consequences, as in the case of the Airbnb squatters who had previously deceived Kickstarter users to the tune of $40,000; sharing data between the platforms could have prevented the incident from being repeated. Business Insider's view is that this has been accepted because the sharing economy is in its infancy, but that as the industry matures, it will need to change.
Giana Eckhardt and Fleura Bardhi say that the access economy promotes and prioritizes cheap fares and low costs rather than personal relationships, which is tied to similar issues in crowdsourcing. For example, consumers reap similar benefits from Zipcar as they would from a hotel. In this example, the primary concern is the low cost. Because of this, the "sharing economy" may not be about sharing but rather about access. Giana Eckhardt and Fleura Bardhi say the "sharing" economy has taught people to prioritize cheap and easy access over interpersonal communication, and the value of going the extra mile for those interactions has diminished.
Concentration of power can lead to unethical business practices. By using software named 'Greyball', Uber was able to make it difficult for regulatory officials to use the application. Another scheme allegedly implemented by Uber involved using its application to show 'phantom' cars nearby to consumers, implying shorter pick-up times than could actually be expected. Uber denied the allegation.
Regulations that cover traditional taxi companies but not ridesharing companies can put taxis at a competitive disadvantage. Uber has faced criticism from taxi drivers worldwide due to the increased competition. Uber has also been banned from several jurisdictions due to failure to comply with licensing laws.
An umbrella-sharing service named Sharing E Umbrella, started in 11 cities across China in 2017, lost almost all of the 300,000 umbrellas it placed out for sharing during the first few weeks.
Treatment of workers/lack of employee benefits: since access economy companies rely on independent contractors, workers are not offered the same protections as full-time salaried employees in terms of workers' compensation, retirement plans, sick leave, and unemployment benefits. This debate caused Uber to withdraw from several locations, such as Alaska. Uber stirred up a large controversy in Alaska because if Uber drivers were considered registered taxi drivers, they would be entitled to workers' compensation insurance; if they were considered independent contractors, they would not receive these benefits. Due to all of the disputes, Uber pulled its services from Alaska. In addition, ride-share drivers' status continues to be ambiguous when it comes to legal matters. On New Year's Eve in 2013, an off-duty driver for Uber killed a pedestrian while looking for a rider. Since the driver was considered a contractor, Uber did not compensate the victim's family. The contract states that the service is a matching platform and "the company does not provide transportation services, and ... has no liability for services ... provided by third parties."
Quality discrepancies: since access economy companies rely on independent workers, the quality of service can differ between individual providers on the same platform. In 2015, Steven Hill from the New America Foundation described signing up to become a host on Airbnb as being as simple as uploading a few photos to the website: "and within 15 minutes my place was 'live' like an Airbnb rental. No background check, no verifying my ID, no confirming my personal details, no questions asked. Not even any contact with a real human from their trust and safety team. Nothing." However, due to the reputation model, customers are provided with a peer-reviewed rating of the provider and are given a choice of whether to proceed with the transaction.
Inadequate liability guarantees: though some companies offer liability guarantees, such as Airbnb's "Host Guarantee" that promises to pay up to $1 million in damages, it is extremely difficult to prove fault.
Ownership and usage: the access economy blurs the difference between ownership and usage, which allows for the abuse or neglect of items in the absence of clear policies.
Replacement of small local companies with large international tech companies. For example, taxi companies tend to be locally owned and operated, while Uber is California-based. Therefore, taxi company profits tend to stay local, while some portion of access economy profits flows out of the local community.
Examples
Agriculture
Garden sharing
Heifer International
Seed swap
Finance
Crowdfunding
Peer-to-peer banking
Peer-to-peer lending
Virtual economy
Food
Food bank
Social dining
Property
Bartering
Book swapping
Borrowing center
Clothes swapping
Fractional ownership
Give-away shop
Library of things
Timeshare
Toy library
Labor
Expert network
Open innovation
Open source product development
Coworking
Freelance marketplace
Local exchange trading system
Time banks
Real estate
Co-housing
Coliving
Collaborative workspace
CouchSurfing
Emergencybnb
Home exchange
Transportation
Bicycle-sharing system
Carpool
Carsharing and peer-to-peer carsharing
Flight sharing
Share taxi
Electric two-wheeler sharing
Ridesharing company
Vanpool
Governance
Government by algorithm
Business
Product service system
Technology
Cloud computing
GNU Project
Open-source software
Volunteer computing
Digital rights
Copyleft
Free art license
Open content
Other
Club theory
Wikimedia movement
Wikipedia
Principles for regulation in the sharing economy
In order to reap the real benefits of a sharing economy and address the issues that surround it, governments and policy-makers need to create the "right enabling framework based on a set of guiding principles" proposed by the World Economic Forum. These principles were derived from an analysis of global policymaking and consultation with experts. The following are the seven principles for regulation in the sharing economy.
The first principle is creating space for innovation. This entails that "governments need to provide an initially encouraging environment while also building necessary infrastructure to allow for the development of innovation hubs."
The second principle is that the sharing economy should be people-centered. This means that policies should be focused on "increasing the overall welfare of the population" as well as "improving the quality of life."
The third principle is taking a proactive approach. This means that "new business models need to be brought into the mainstream and governments need to make clear frameworks that minimize uncertainty."
The fourth principle is assessment of the whole regulatory system, which means that administrative burdens on existing systems should be lifted in order to give all actors in the network an equal level of access.
The fifth principle is data-driven government. Since most of the sharing economy relies on digital platforms, data can easily be collected, analyzed, and shared, which can improve the urban environment through public-private partnerships.
The sixth principle concerns flexible governance: actors should take into account the fast-evolving nature of technology. This calls for a sustained dialogue with key stakeholders, so that all interests and rights are protected and safeguarded.
The last principle is shared regulation, in which all players are involved in regulatory discussions as well as in the enforcement of policy.
See also
Collaborative consumption
Common ownership
Gig worker
Holiday cottage
List of gig economy companies
Platform economy
Post-capitalism
Vacation rental
References
Social networks
Peer-to-peer
Business models
Information Age
Economies
Economic systems | Sharing economy | Technology | 8,801 |
566,520 | https://en.wikipedia.org/wiki/Catalog%20of%20Nearby%20Habitable%20Systems | The Catalog of Nearby Habitable Systems (HabCat) is a catalogue of star systems which conceivably have habitable planets. The list was developed by scientists Jill Tarter and Margaret Turnbull under the auspices of Project Phoenix, a part of SETI.
The list was based upon the Hipparcos Catalogue (which has 118,218 stars) by filtering on a wide range of star system features. The current list contains 17,129 "HabStars".
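Conceptually, the selection amounts to applying a chain of cuts to each catalogue record and keeping only the stars that survive every cut. The sketch below is purely illustrative: the field names and thresholds are hypothetical stand-ins, not the criteria actually used by Turnbull and Tarter.

```python
# Illustrative only: hypothetical fields and cuts, not the real HabCat criteria.
stars = [
    {"hip": 1, "variable": False, "close_binary": False, "b_minus_v": 0.65},
    {"hip": 2, "variable": True,  "close_binary": False, "b_minus_v": 0.70},
    {"hip": 3, "variable": False, "close_binary": True,  "b_minus_v": 1.10},
]

def is_habstar_candidate(star):
    """A star survives only if it passes every cut."""
    return (not star["variable"]                  # exclude variable stars
            and not star["close_binary"]          # exclude tight multiples
            and 0.4 <= star["b_minus_v"] <= 1.4)  # roughly Sun-like colour

habcat = [s for s in stars if is_habstar_candidate(s)]
print([s["hip"] for s in habcat])  # -> [1]
```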
External links
Target Selection for SETI: 1. A Catalog of Nearby Habitable Stellar Systems, Turnbull, Tarter, submitted 31 Oct 2002 (last accessed 19 Jan 2010)
Target selection for SETI. II. Tycho-2 dwarfs, old open clusters, and the nearest 100 stars, by Turnbull and Tarter (last accessed 19 Jan 2010)
HabStars - an article on the NASA website
Astronomical catalogues of stars
Search for extraterrestrial intelligence
Exoplanet catalogues | Catalog of Nearby Habitable Systems | Astronomy | 199 |
3,874,080 | https://en.wikipedia.org/wiki/NGC%206633 | NGC 6633 is a large bright open cluster in the constellation Ophiuchus. Discovered in 1745-46 by Philippe Loys de Chéseaux, it was independently rediscovered by Caroline Herschel in 1783 and included in her brother William's catalog as H VIII.72. Bright enough to be seen with the naked eye, the cluster is considered a fine object for binoculars or small telescopes.
NGC 6633 is also known as the Tweedledum Cluster (paired with IC 4756 as Tweedledee), also as the Captain Hook Cluster and the Wasp Cluster. It is also designated Collinder 380 or Melotte 201. Nearly as large as the full moon, the cluster contains 38 known stars and shines with a total magnitude of 4.6; the brightest star is of mag 7.6. Its age has been estimated at 660 million years.
The cluster contains at least one chemically peculiar star, NGC 6633 48 (BD+06 3755).
The 8th-magnitude binary star HD 169959 (NGC 6633 58) is within the line-of-sight of the open cluster but is not physically associated with it.
References
External links
6633
Open clusters
Ophiuchus | NGC 6633 | Astronomy | 244 |
66,523,160 | https://en.wikipedia.org/wiki/Meme%20stock | A meme stock is a stock that gains popularity among retail investors through social media. The popularity of meme stocks is generally based on internet memes shared among traders, on platforms such as Reddit's r/wallstreetbets. Investors in such stocks are often young and inexperienced investors. As a result of their popularity, meme stocks often trade at prices that are above their estimated valueas based on fundamental analysis and are known for being extremely speculative and volatile.
History
Interest in meme stocks started in 2020, in what the U.S. Securities and Exchange Commission has called a "meme stock phenomenon". The stock of American video game retailer GameStop has been one of the most popular meme stocks, with mass purchases of the stock leading to a short squeeze on GameStop in early 2021. The stock of entertainment company AMC is also cited as a prominent example. Other examples include the stocks of Bed Bath & Beyond, National Beverage, and Koss. The distinction between a meme stock and a non-meme stock is not always clear; for example, Tesla has some of the characteristics of a meme stock: a high price-earnings ratio and being frequently discussed by amateur retail traders on social media, yet some professional analysts do not consider it to be overpriced.
Interest in meme stocks is associated with trading platform Robinhood, which pioneered commission-free trading. According to The New York Times, "Robinhood was the tool of choice for traders in the original meme stocks".
Some meme stocks have often become popular among retail investors after being targeted by short-selling professional investors, such as hedge funds, with participants having the explicit aim of causing losses among those firms. News coverage has described the choice to purchase such stocks as an act of rebellion intended to humble short-selling professional investors.
According to an SEC report, while some hedge funds had big losses, the meme stocks phenomenon did not widely impact hedge funds. The SEC staff report also stated, "some investors that had been invested in the target stocks prior to the market events benefitted unexpectedly from the price rises, while others, including quantitative and high-frequency hedge funds, joined the market rally to trade profitably." By June 2021, according to Financial Times, some hedge funds were systematically analyzing meme stocks.
On July 5, 2024, Reddit users speculated that Keith Gill, who was previously involved in the GameStop meme stock fad, was about to invest in headphone maker Koss Corporation around July 4 (US Independence Day), based on a May post by him in which a microphone emoji appeared against a US flag backdrop. As a result of the speculation, Koss shares rose to $18.50 before ending that day's session at $13.35.
See also
Day trading
Dogecoin
Do-it-yourself investing
Keith Gill
Meme man
Meme coin
Retail investor
YOLO (aphorism)
References
2020s in economic history
Internet culture
Internet memes
Reddit
Robinhood (company)
Social media | Meme stock | Technology | 624 |
22,795,783 | https://en.wikipedia.org/wiki/CIML%20community%20portal | The computational intelligence and machine learning (CIML) community portal is an international multi-university initiative. Its primary purpose is to help facilitate a virtual scientific community infrastructure for all those involved with, or interested in, computational intelligence and machine learning. This includes CIML research-, education, and application-oriented resources residing at the portal and others that are linked from the CIML site.
Overview
The CIML community portal was created to facilitate an online virtual scientific community wherein anyone interested in CIML can share research, obtain resources, or simply learn more. The effort is currently led by Jacek Zurada (principal investigator), with Rammohan Ragade and Janusz Wojtusiak, aided by a team of 25 volunteer researchers from 13 different countries.
The ultimate goal of the CIML community portal is to accommodate and cater to a broad range of users, including experts, students, the public, and outside researchers interested in using CIML methods and software tools. Each community member and user will be guided through the portal resources and tools based on their respective CIML experience (e.g. expert, student, outside researcher) and goals (e.g. collaboration, education). A preliminary version of the community's portal, with limited capabilities, is now operational and available for users. All electronic resources on the portal are peer-reviewed to ensure high quality and cite-ability for literature.
Further reading
Jacek M. Zurada, Janusz Wojtusiak, Fahmida Chowdhury, James E. Gentle, Cedric J. Jeannot, and Maciej A. Mazurowski, Computational Intelligence Virtual Community: Framework and Implementation Issues, Proceedings of the IEEE World Congress on Computational Intelligence, Hong Kong, June 1–6, 2008.
Jacek M. Zurada, Janusz Wojtusiak, Maciej A. Mazurowski, Devendra Mehta, Khalid Moidu, Steve Margolis, Toward Multidisciplinary Collaboration in the CIML Virtual Community, Proceedings of the 2008 Workshop on Building Computational Intelligence and Machine Learning Virtual Organizations, pp. 62–66
Chris Boyle, Artur Abdullin, Rammohan Ragade, Maciej A. Mazurowski, Janusz Wojtusiak, Jacek M. Zurada, Workflow considerations in the emerging CI-ML virtual organization, Proceedings of the 2008 Workshop on Building Computational Intelligence and Machine Learning Virtual Organizations, pp. 67–70
See also
Artificial Intelligence
Computational Intelligence
Machine Learning
National Science Foundation
References
External links
Machine learning
International research institutes | CIML community portal | Engineering | 528 |
3,183,229 | https://en.wikipedia.org/wiki/Ion%20plating | Ion plating (IP) is a physical vapor deposition (PVD) process that is sometimes called ion assisted deposition (IAD) or ion vapor deposition (IVD) and is a modified version of vacuum deposition. Ion plating uses concurrent or periodic bombardment of the substrate, and deposits film by atomic-sized energetic particles called ions. Bombardment prior to deposition is used to sputter clean the substrate surface. During deposition the bombardment is used to modify and control the properties of the depositing film. It is important that the bombardment be continuous between the cleaning and the deposition portions of the process to maintain an atomically clean interface. If this interface is not properly cleaned, then it can result into a weaker coating or poor adhesion.
There are many different processes for vacuum-depositing coatings, which are used for various applications such as corrosion and wear resistance.
Process
In ion plating, the energy, flux and mass of the bombarding species along with the ratio of bombarding particles to depositing particles are important processing variables. The depositing material may be vaporized either by evaporation, sputtering (bias sputtering), arc vaporization or by decomposition of a chemical vapor precursor chemical vapor deposition (CVD). The energetic particles used for bombardment are usually ions of an inert or reactive gas, or, in some cases, ions of the condensing film material ("film ions"). Ion plating can be done in a plasma environment where ions for bombardment are extracted from the plasma or it may be done in a vacuum environment where ions for bombardment are formed in a separate ion gun. The latter ion plating configuration is often called Ion Beam Assisted Deposition (IBAD). By using a reactive gas or vapor in the plasma, films of compound materials can be deposited.
Ion plating is used to deposit hard coatings of compound materials on tools, adherent metal coatings, optical coatings with high densities, and conformal coatings on complex surfaces.
Pros
Better surface coverage than other methods (physical vapor deposition, sputter deposition).
More energy delivered to the substrate surface by the bombarding species, resulting in more complete bonding.
Flexibility in the level of ion bombardment.
Improved chemical reactions when the plasma supplies energy to the surface being coated.
Durability of the material can improve at least eightfold.
Cons
More process variables to take into account compared with other techniques.
Plating uniformity is not always consistent.
Excessive heating of the substrate.
Compressive stress in the deposited film.
The process is costly and time-consuming.
Background information on ion plating
The ion plating process was first described in the technical literature by Donald M. Mattox of Sandia National Laboratories in 1964. As described by this article, it was used initially to enhance film adhesion and improve surface coverage.
History
The process was first used in the 1960s and was developed over the following decades through specific cleaning techniques and reactive and quasi-reactive film-growth deposition techniques. Sputter cleaning has been used since the 1950s for cleaning scientific surfaces. In the 1970s, high-rate DC magnetron sputtering showed that bombardment densified films and increased the hardness of materials. By 1983, concurrent bombardment with inert gas ions was in use during deposition.
See also
List of coating techniques
References
Further reading
Chemical processes
Physical vapor deposition techniques
Thin film deposition | Ion plating | Chemistry,Materials_science,Mathematics | 687 |
6,180,572 | https://en.wikipedia.org/wiki/Fainting%20room | A fainting room was a private room, common in the Victorian era, which typically contained fainting couches. Such couches or sofas typically had an arm on one side only to permit easy access to a reclining position, similar to its cousin the chaise longue, although the sofa style most typically featured a back at one end (usually the side with the arm) so that the resulting position was not purely supine.
There are also accounts that mention fainting rooms in eighteenth-century America. These rooms, which were also referred to as bedrooms (what are now called bedrooms were then called chambers), were located on the ground floor and contained a daybed that allowed occupants to rest for brief periods during the day.
Theories for prevalence
One theory for the predominance of fainting couches is that women were actually fainting because their corsets were laced too tightly, restricting blood flow. By preventing movement of the ribs, corsets restricted airflow to the lungs; if the wearer exerted herself to the point of needing large quantities of oxygen and was unable to fully inflate the lungs, this could lead to fainting. Hyperventilation for any reason could also potentially result in a brief loss of consciousness.
One book claims that Victorian fainting rooms were meant to force women to remain indoors and inactive under the guise of ensuring privacy, class, and interiority.
See also
Corset controversy
Social aspects of clothing
References
Victorian era
Rooms | Fainting room | Engineering | 304 |
11,292,458 | https://en.wikipedia.org/wiki/1-Propanol%20%28data%20page%29 | This page provides supplementary chemical data on 1-Propanol (n-propanol).
Material Safety Data Sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source.
Structure and properties
Thermodynamic properties
Vapor pressure of liquid
Table data obtained from CRC Handbook of Chemistry and Physics 44th ed.
Distillation data
Spectral data
References
Chemical data pages
Chemical data pages cleanup | 1-Propanol (data page) | Chemistry | 104 |
57,078,309 | https://en.wikipedia.org/wiki/Ordinal%20tree | An ordinal tree, by analogy with an ordinal number, is a rooted tree of arbitrary degree in which the children of each node are ordered, so that one refers to the ith child in the sequence of children of a node.
See also
Cardinal tree
References
Data types
Trees (data structures)
Knowledge representation
Abstract data types | Ordinal tree | Mathematics | 69 |
56,965,199 | https://en.wikipedia.org/wiki/RU-2309 | RU-2309, also known as 18-methylmetribolone, δ9,11-17α,18-dimethyl-19-nortestosterone, or 17α,18-dimethylestr-4,9,11-trien-17β-ol-3-one, is a 17α-alkylated androgen/anabolic steroid (AAS) of the 19-nortestosterone group which was never marketed. It is the C18 methyl or C13β ethyl derivative of metribolone. The compound is closely related to tetrahydrogestrinone (THG), which has the same chemical structure as RU-2309 except for possessing an ethyl group at the C17α position instead of a methyl group. Hence, it could also be referred to as 17α-methyl-THG. RU-2309 shows high affinity for the androgen, progesterone, and glucocorticoid receptors.
See also
List of androgens/anabolic steroids
References
Tertiary alcohols
Anabolic–androgenic steroids
Estranes
Glucocorticoids
Hepatotoxins
Ketones
Progestogens | RU-2309 | Chemistry | 257 |
54,734,615 | https://en.wikipedia.org/wiki/Nephromyces | Nephromyces is a genus of apicomplexans that are symbionts of the ascidian genus Molgula (sea grapes).
Systematics
Nephromyces was first described in 1888 by Alfred Mathieu Giard as a chytrid fungus, because of its filamentous cells. He formally named three species, each corresponding to a different species of the host animal. Molecular phylogenetics later showed that Nephromyces is not actually a fungus, but instead constitutes a group within the Apicomplexa that is related to the Piroplasmida.
Species of Nephromyces
Nephromyces molgularum Giard, 1888
Nephromyces rosocovitanus Giard, 1888
Nephromyces sorokini Giard, 1888
Description
Nephromyces is found in the lumen of the renal sac of its host animals. The renal sac is a closed, fluid-filled structure that is derived from the epicardium during development. There are different cell types (at least seven in Nephromyces from Molgula manhattensis) which appear to be different life cycle stages, as the different types appear in a consistent sequence after initial infection of the host animal. However, in a mature infection, different stages simultaneously co-occur in the same host individual. They include filaments (trophic stages), spores, motile but non-flagellated cells, and biflagellated swarmer cells. The non-flagellated motile cells resemble the sporozoites of other apicomplexans, while the spores contain structures that resemble the rhoptries of the apical complex, another typical apicomplexan feature.
Symbiosis
Nephromyces is specific to the family Molgulidae, and has been found in species of Molgula and at least one other molgulid genus, Bostrichobranchus (B. pilularis). Every wild-collected adult Molgula animal examined has been found to contain Nephromyces, suggesting that it is a beneficial symbiont rather than a parasite; this makes Nephromyces an exception among apicomplexans, which are usually parasitic on their animal hosts. However, animals without Nephromyces can be obtained by spawning and raising them in filtered seawater. These symbiont-free animals have been used to study the Nephromyces life cycle. Nephromyces is released into surrounding seawater when its host dies, and cells of Nephromyces can remain alive and infective for at least 29 days outside of a host.
The renal sac organ where Nephromyces lives contains high concentrations of urate, a nitrogenous waste product. Activity of urate oxidase, an enzyme that breaks down urate, has been found in Nephromyces cells, hence they may be using the waste products from their host animal as a nitrogen source for themselves.
Intracellular bacteria have been found within cells of Nephromyces from Molgula manhattensis and M. occidentalis, making this a symbiosis within a symbiosis.
References
Apicomplexa genera
Symbiosis | Nephromyces | Biology | 692 |
21,573,926 | https://en.wikipedia.org/wiki/FIND%2C%20the%20global%20alliance%20for%20diagnostics | FIND (Foundation for Innovative New Diagnostics) is a global health non-profit based in Geneva, Switzerland. FIND functions as a product development partnership, engaging in active collaboration with over 150 partners to facilitate the development, evaluation, and implementation of diagnostic tests for poverty-related diseases. The organisation's Geneva headquarters are in Campus Biotech. Country offices are located in New Delhi, India; Cape Town, South Africa; and Hanoi, Viet Nam.
History
FIND was launched at the 56th World Health Assembly in 2003 in response to the critical need for innovative and affordable diagnostic tests for diseases in low- and middle-income countries.
The initiative was launched by the Bill and Melinda Gates Foundation and WHO's Special Programme for Research and Training in Tropical Diseases (TDR), and its initial focus was to speed up the development and evaluation of tuberculosis tests.
In 2011, FIND was recognized as an "Other International Organization" by the Swiss Government, alongside DNDi and Medicines for Malaria Venture.
Priorities
The organization focuses on improving diagnosis in several disease areas, including hepatitis C, HIV, malaria, neglected tropical diseases (sleeping sickness, Chagas disease, leishmaniasis, buruli ulcer), and tuberculosis. Alongside this, FIND works on diagnostic connectivity, antimicrobial resistance, acute febrile illness, and outbreak preparedness.
To support this work, FIND engages in development of target product profiles, maintains clinical trial platforms, manages specimen banks, negotiates preferential product pricing for developing markets, and creates and implements trainings and lab strengthening tools.
In 2020, FIND became a co-convener of the Diagnostics Pillar of the Access to COVID-19 Tools Accelerator with The Global Fund to Fight AIDS, Tuberculosis and Malaria. Together they supported the development of reliable rapid antigen tests for COVID-19, and guaranteed access to 120 million rapid tests at an affordable price to low- and middle-income countries.
FIND also aims at improving the diagnostics ecosystem by working on activities such as sequencing, managing a biobank network to facilitate diagnostic development across diseases, helping countries optimize their networks of diagnostic services, and developing digital tools, such as algorithms, that can help healthcare workers provide better diagnosis.
Recent achievements
From 2015 to 2020, fifteen new diagnostic technologies supported by FIND received regulatory clearance, and 10 of them were in use by the end of 2020 in low- and middle-income countries.
One example of such tests is Abbott's BIOLINE HAT 2.0, a rapid test for African trypanosomiasis, a disease also known as sleeping sickness. In 2021 Abbott donated 450,000 of these tests to scale up testing in low- and middle-income countries.
Over the same period, FIND supported the development of four multi-disease diagnostic platforms:
Cepheid's GeneXpert MTB/RIF for simultaneous rapid tuberculosis diagnosis and rapid antibiotic sensitivity test
Eiken's LAMP platform for the detection of diseases including tuberculosis, malaria, sleeping sickness and leishmaniasis
Molbio's Truenat, a point-of-care rapid molecular test for diagnosis of infectious diseases
DCN's Fluoro rapid test for gonorrhoea
In April 2020, the World Health Organization launched the ACT-Accelerator partnership, a global collaboration to accelerate the development, production and equitable distribution of vaccines, diagnostics and therapeutics for COVID-19. Leading the diagnostic pillar together with The Global Fund to Fight AIDS, Tuberculosis and Malaria, FIND has worked to enable access to tests by boosting research and development, Emergency Use Listing, independent assessment, and manufacturing of tests.
Together with partners FIND has developed and made available online courses and training packages for healthcare workers on COVID-19 testing.
FIND has also created a portal to provide an overview of the COVID-19 testing landscape, including a directory of commercialized COVID-19 diagnostics and a tracker centralizing the data reported by countries on COVID-19 tests performed, incidence, deaths and positivity rate.
Funding and leadership
FIND receives its funding from more than thirty donors, including bilateral and multilateral organizations as well as private foundations.
Members of the Board of Directors include Ilona Kickbusch, George F. Gao, David L. Heymann, Shobana Kamineni and Sheila Tlou.
References
Biomedical research foundations
Bill & Melinda Gates Foundation
Foundations based in Switzerland
Organizations established in 2003
International medical and health organizations
Tropical diseases
Organisations based in Geneva | FIND, the global alliance for diagnostics | Engineering,Biology | 907 |
12,519,967 | https://en.wikipedia.org/wiki/Mahamudra%20%28Hatha%20Yoga%29 | Mahamudra is a hatha yoga gesture (mudra) whose purpose is to improve control over the sexual potential. The sexual potential, associated with apana, is essential in the process of awakening of the dormant spiritual energy (Kundalini) and attaining of spiritual powers (siddhi).
Execution
Pressure is exerted with the heel on the perineum. This zone is considered to be closely involved in the control of the vital and sexual potential. At the same time, the throat is compressed (Jalandhara Bandha), activating the throat chakra (Vishuddha Chakra), the center of the akashic energies of the void.
Effect
By activating the energies of akasha and simultaneously stimulating the energies of Muladhara Chakra, Kundalini awakens and rises through the central channel, Sushumna Nadi. The void is considered to be a substrate, an intermediary state in any transformation. Here it projects the lower energies up the spine, transforming them into spiritual energies instead. Thus Mahamudra is a gesture of alchemical transformation and elevation of the sexual potential, and at the same time a method of awakening of the supreme energy of the body, Kundalini.
Hatha Yoga Pradipika
The Hatha Yoga Pradipika, a classical manual of hatha yoga, devotes several verses to Mahamudra, describing its execution and its effects.
References
Gestures
Mudras | Mahamudra (Hatha Yoga) | Biology | 275 |
34,351,960 | https://en.wikipedia.org/wiki/Right%20to%20withdraw | The right to withdraw is a concept in clinical research ethics that a study participant in a clinical trial has a right to end participation in that trial at will. According to ICH GCP guidelines, a person can withdraw from the research at any point in time and the participant is not required to reveal the reason for discontinuation.
Children in research
When children participate in clinical research, their parents or guardians must give consent for them to participate, but ethics dictate that even in this case it is best to also obtain the assent of the child. Studies have shown that children participating in research have little understanding of the right to withdraw when they are presented with the option.
Biobanks
Withdrawal from participating in biobank research is problematic for many reasons, including the fact that participants' data are often de-identified to protect participant privacy.
References
Research ethics
Human subject research | Right to withdraw | Technology | 176 |
7,216,005 | https://en.wikipedia.org/wiki/Unidirectional%20network | A unidirectional network (also referred to as a unidirectional gateway or data diode) is a network appliance or device that allows data to travel in only one direction. Data diodes can be found most commonly in high security environments, such as defense, where they serve as connections between two or more networks of differing security classifications. Given the rise of industrial IoT and digitization, this technology can now be found at the industrial control level for such facilities as nuclear power plants, power generation and safety critical systems like railway networks.
After years of development, data diodes have evolved from simple network appliances that allow raw data to travel in only one direction, used to guarantee information security or to protect critical digital systems, such as industrial control systems, from inbound cyber attacks, into combinations of hardware and software running on proxy computers in the source and destination networks. The hardware enforces physical unidirectionality, and the software replicates databases and emulates protocol servers to handle bi-directional communication. Data diodes are now capable of transferring multiple protocols and data types simultaneously, and they offer a broader range of cybersecurity features such as secure boot, certificate management, data integrity checks, forward error correction (FEC) and secure communication via TLS, among others. A unique characteristic is that data is transferred deterministically (to predetermined locations) with a protocol "break" that allows the data to be passed through the data diode.
Data diodes are commonly found in high security military and government environments, and are now becoming widely spread in sectors like oil & gas, water/wastewater, airplanes (between flight control units and in-flight entertainment systems), manufacturing and cloud connectivity for industrial IoT. New regulations have increased demand, and with increased capacity major technology vendors have lowered the cost of the core technology.
History
The first data diodes were developed by governmental organizations in the 1980s and 1990s. Because these organizations work with confidential information, ensuring that their networks are secure is of the highest priority. The primary solution used by these organizations was the air gap. But as the amount of transferable data increased, and continuous, real-time data streams became more important, these organizations had to look for an automated solution.
In the search for more standardization, an increasing number of organizations started to look for a solution better suited to their activities. Commercial solutions from established vendors succeeded, given the level of security and long-term support they offered.
In the United States, utilities and oil and gas companies have used data diodes for several years, and regulators have encouraged their use to protect equipment and processes in safety instrumented systems (SISs). The Nuclear Regulatory Commission (NRC) now mandates the use of data diodes and many other sectors, in addition to electrical and nuclear, also use data diodes effectively.
In Europe, regulators and operators of several safety-critical systems started recommending and implementing regulations on the use of unidirectional gateways.
In 2013, the Industrial Control System Cybersecurity working group, directed by the French Network and Information Security Agency (ANSSI), stated that it is forbidden to use firewalls to connect any class 3 network, such as railway switching systems, to a lower-class network or corporate network; only unidirectional technology is permitted.
Applications
Real time monitoring of safety-critical networks
Secure OT – IT bridge
Secure cloud connectivity of critical OT networks
Database replication
Data mining
Trusted back-end and hybrid cloud hosted solutions (private / public)
Secure data exchange for data marketplaces
Secure credential/ certificate provisioning
Secure cross-data base sharing
Secure printing from a less secure network to a high secure network (reducing print costs)
Transferring application and operating system updates from a less secure network to a high secure network
Time synchronization in highly secure networks
File transfer
Streaming video
Sending/receiving alerts or alarms from open to critical/confidential networks
Sending/receiving emails from open to critical/confidential networks
Government
Commercial companies
Usage
Unidirectional network devices are typically used to guarantee information security or protection of critical digital systems, such as industrial control systems, from cyber attacks. While use of these devices is common in high security environments such as defense, where they serve as connections between two or more networks of differing security classifications, the technology is also being used to enforce one-way communications outbound from critical digital systems to untrusted networks connected to the Internet.
The physical nature of unidirectional networks only allows data to pass from one side of a network connection to another, and not the other way around. This can be from the "low side" or untrusted network, to the "high side" or trusted network, or vice versa. In the first case, data in the high side network is kept confidential and users retain access to data from the low side. Such functionality can be attractive if sensitive data is stored on a network which requires connectivity with the Internet: the high side can receive Internet data from the low side, but no data on the high side are accessible to Internet-based intrusion. In the second case, a safety-critical physical system can be made accessible for online monitoring, yet be insulated from all Internet-based attacks that might seek to cause physical damage. In both cases, the connection remains unidirectional even if both the low and the high network are compromised, as the security guarantees are physical in nature.
There are two general models for using unidirectional network connections. In the classical model, the purpose of the data diode is to prevent export of classified data from a secure machine while allowing import of data from an insecure machine. In the alternative model, the diode is used to allow export of data from a protected machine while preventing attacks on that machine. These are described in more detail below.
One-way flow to less secure systems
This model involves systems that must be secured against remote/external attacks from public networks while publishing information to such networks. For example, an election management system used with electronic voting must make election results available to the public while remaining immune to attack.
This model is applicable to a variety of critical infrastructure protection problems, where protection of the data in a network is less important than reliable control and correct operation of the network. For example, the public living downstream from a dam needs up-to-date information on the outflow, and the same information is a critical input to the control system for the floodgates. In such a situation, it is critical that the flow of information be from the secure control system to the public, and not vice versa.
One-way flow to more secure systems
The majority of unidirectional network applications in this category are in defense, and defense contractors. These organizations traditionally have applied air gaps to keep classified data physically separate from any Internet connection. With the introduction of unidirectional networks in some of these environments, a degree of connectivity can safely exist between a network with classified data, and a network with an Internet connection.
In the Bell–LaPadula security model, users of a computer system can only create data at or above their own security level. This applies in contexts where there is a hierarchy of information classifications. If users at each security level share a machine dedicated to that level, and if the machines are connected by data diodes, the Bell–LaPadula constraints can be rigidly enforced.
Benefits
Traditionally, when the IT network provides DMZ server access for an authorized user, the data is vulnerable to intrusions from the IT network. However, with a unidirectional gateway separating a critical side or OT network containing sensitive data from an open side with business and Internet connectivity (normally the IT network), organizations can achieve the best of both worlds: enabling the required connectivity while assuring security. This holds true even if the IT network is compromised, because the traffic flow control is physical in nature.
No reported cases of data diodes being bypassed or exploited to enable two-way traffic.
Lower long-term operating cost (OPEX), as there are no rule sets to maintain, although software updates still need to be installed; often these devices need to be maintained by the vendors.
The unidirectional software layer cannot be configured to allow two-way traffic due to the physical disconnection of the RX or TX line.
Weaknesses
As of June 2015, unidirectional gateways were not yet commonly used or well understood.
Unidirectional gateways are unable to route the majority of network traffic and break most protocols.
Cost; data diodes were originally expensive, although lower cost solutions are now available.
Specific use cases that require a two-way data flow can be difficult to achieve.
Variations
The simplest form of a unidirectional network is a modified, fiber-optic network link, with send and receive transceivers removed or disconnected for one direction, and any link failure protection mechanisms disabled. Some commercial products rely on this basic design, but add other software functionality that provides applications with an interface which helps them pass data across the link.
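To illustrate the kind of software layer described above, here is a minimal, hypothetical Python sketch of one-way transfer over UDP (it is not any vendor's product code, and the address, port and framing are invented for the example). Since no acknowledgments can cross a simplex link, the sender numbers each datagram and transmits it several times, and the receiver simply discards duplicates, a crude stand-in for the forward error correction that commercial products use:

```python
import socket

DEST = ("10.0.0.2", 9000)  # hypothetical receiver proxy on the destination network
REPEATS = 3                # blind redundancy: no ACK can ever come back

def send_file(path):
    """Push a file across the one-way link as numbered UDP datagrams."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(path, "rb") as f:
        seq = 0
        while chunk := f.read(1024):
            frame = seq.to_bytes(8, "big") + chunk  # 8-byte sequence header
            for _ in range(REPEATS):                # resend blindly; link is simplex
                sock.sendto(frame, DEST)
            seq += 1

def receive(port=9000):
    """Collect frames on the destination side, dropping duplicate sequence numbers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    chunks = {}
    while True:  # runs until interrupted; a real proxy would detect end-of-file
        frame, _ = sock.recvfrom(2048)
        seq = int.from_bytes(frame[:8], "big")
        chunks.setdefault(seq, frame[8:])  # first copy wins, duplicates ignored
```

A datagram lost on a simplex link can never be re-requested, which is why real products rely on forward error correction and integrity checks rather than blind repetition.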
All-optical data diodes can support very high channel capacities and are among the simplest. In 2019, Controlled Interfaces demonstrated its (now patented) one-way optical fiber link using 100G commercial off-the-shelf transceivers in a pair of Arista network switch platforms. No specialized driver software is required.
Other more sophisticated commercial offerings enable simultaneous one-way data transfer of multiple protocols that usually require bidirectional links. The German companies INFODAS and GENUA have developed software-based ("logical") data diodes that use a microkernel operating system to ensure unidirectional data transfer. Due to this software architecture, these solutions offer higher speeds than conventional hardware-based data diodes.
ST Engineering has developed its own Secure e-Application Gateway, consisting of multiple data diodes and other software components, to enable real-time bi-directional HTTP(S) web services transactions over the internet while protecting the secured networks from both malicious injects and data leakage.
In 2018, Siemens Mobility released an industrial-grade unidirectional gateway solution in which the data diode, the Data Capture Unit, uses electromagnetic induction and a new chip design to achieve an EBA safety assessment, guaranteeing secure connectivity of new and existing safety-critical systems up to Safety Integrity Level (SIL) 4, to enable secure IoT and provide data analytics and other cloud-hosted digital services.
In 2022, Fend Incorporated released a data diode capable of acting as a Modbus gateway with full optical isolation. This diode is targeted at industrial markets and critical infrastructure, serving to bridge outdated technology with newer IT systems. The diode also functions as a Modbus converter, with the ability to connect to serial RTU systems on one side and Ethernet TCP systems on the other.
The US Naval Research Laboratory (NRL) has developed its own unidirectional network called the Network Pump. This is in many ways similar to DSTO's work, except that it allows a limited backchannel going from the high side to the low side for the transmission of acknowledgments. This technology allows more protocols to be used over the network, but introduces a potential covert channel if both the high and low sides are compromised: data can be leaked by artificially delaying the timing of the acknowledgments.
Different implementations also have differing levels of third party certification and accreditation. A cross domain guard intended for use in a military context may have or require extensive third party certification and accreditation. A data diode intended for industrial use, however, may not have or require third party certification and accreditation at all, depending on the application.
Notable vendors
BAE Systems - US/UK
Fend Incorporated - US
Siemens - Germany
ST Engineering - Singapore
Technolution - Netherlands
See also
Bell–LaPadula model for security
Network tap
Intrusion detection system
References
External links
Patton Blog: Employing Simplex Data Circuits for Ultra-High-Security Networking
SANS Institute Paper on Tactical Data Diodes in Industrial Automation and Control Systems.
Guide to Industrial Control Systems (ICS) Security United States Department of Commerce - National Institute of Standards and technology on data diode use on Industrial Control Systems.
Improving Industrial Control System Cybersecurity with Defense-in-Depth Strategies United States Department of Homeland - Security Industrial Control Systems Cyber Emergency Response Team on data diode use.
Networking hardware
Computer network security | Unidirectional network | Engineering | 2,545 |
66,293,606 | https://en.wikipedia.org/wiki/M62812 | M62812 is a drug which acts as a potent and selective antagonist of toll-like receptor 4 (TLR4). In animal studies it blocks TLR4-mediated cytokine release and has anti-inflammatory effects, showing efficacy in animal models of arthritis and septic shock.
See also
TLR4-IN-C34
Resatorvid
VGX-1027
References
Receptor antagonists
Benzothiazoles
Anilines
Amines | M62812 | Chemistry | 98 |
6,548,981 | https://en.wikipedia.org/wiki/Rubedo | Rubedo is a Latin word meaning "redness" that was adopted by alchemists to define the fourth and final major stage in their magnum opus. Both gold and the philosopher's stone were associated with the color red, as rubedo signaled alchemical success, and the end of the great work. Rubedo is also known by the Greek word iosis.
Interpretation
The three alchemical stages preceding rubedo were nigredo (blackness), which represented putrefaction and spiritual death; albedo (whiteness), which represented purification; and citrinitas (yellowness), the solar dawn or awakening. Some sources describe the alchemical process as three-phased, with citrinitas serving as a mere transitional extension that takes place between albedo and rubedo. The rubedo stage entails the alchemist's attempt to integrate the psychospiritual outcomes of the process into a coherent sense of self before re-entry into the world. The stage can take a long time, even years, to complete due to the required synthesis and substantiation of insights and experiences.
The symbols used in alchemical writing and art to represent this red stage can include blood, a phoenix, a rose, a crowned king, or a figure wearing red clothes. Numerous sources mention a reddening process; the seventeenth dictum of the 12th-century Turba Philosophorum is one example.
Psychology
In the framework of psychological development (especially among followers of Jungian psychology), these four alchemical steps are viewed as analogous to the process of attaining individuation, the process that allows an individual to attain the integration of opposites, their transcendence, and, finally, emergence out of an undifferentiated unconscious. In an archetypal schema, rubedo represents the Self archetype, and is the culmination of the four stages, the merging of ego and Self. It is also described as a stage that gives birth to a new personality. Represented by the color of blood in alchemy, the stage indicates a process that cannot be reversed, since it involves the struggle of the self towards its manifestation.
The Self manifests itself in "wholeness," a point in which a person discovers their true nature. Another interpretation phrased it as "reunification" which entail the reunion of body, soul, and spirit, leading to a diminished inner conflict.
See also
Psychology and Alchemy
Unity of opposites
References
Further reading
Jung, C. G. Psychology and Alchemy 2nd. ed. (Transl. by R. F. C. Hull)
External links
Jung’s Quaternity, Mandalas, the Philosopher's Stone and the Self
Alchemical processes | Rubedo | Chemistry | 554 |
213,639 | https://en.wikipedia.org/wiki/Arnold%20Sommerfeld | Arnold Johannes Wilhelm Sommerfeld (; 5 December 1868 – 26 April 1951) was a German theoretical physicist who pioneered developments in atomic and quantum physics, and also educated and mentored many students for the new era of theoretical physics. He served as doctoral supervisor and postdoc supervisor to seven Nobel Prize winners and supervised at least 30 other famous physicists and chemists. Only J. J. Thomson's record of mentorship offers a comparable list of high-achieving students. He was nominated for the Nobel Prize 84 times, more than any other physicist (including Otto Stern, who got nominated 82 times), becoming the most nominated person to never win the Nobel Prize.
He introduced the second quantum number, azimuthal quantum number, and the third quantum number, magnetic quantum number. He also introduced the fine-structure constant and pioneered X-ray wave theory.
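In modern SI notation (a present-day restatement; Sommerfeld's original 1916 papers used Gaussian units), the fine-structure constant he introduced is the dimensionless quantity

```latex
\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137.036},
```

which sets the scale of the relativistic fine-structure splitting of spectral lines that his extension of the Bohr model explained.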
Early life and education
Sommerfeld was born in 1868 to a family with deep ancestral roots in Prussia. His mother Cäcilie Matthias (1839–1902) was the daughter of a Potsdam builder. His father Franz Sommerfeld (1820–1906) was a physician from a leading family in Königsberg, where Arnold's grandfather had resettled from the hinterland in 1822 for a career as Court Postal Secretary in the service of the Kingdom of Prussia. Sommerfeld was baptized a Christian in his family's Prussian Evangelical Protestant Church, and although not religious, he would never renounce his Christian faith.
Sommerfeld studied mathematics and physical sciences at the Albertina University of his native city, Königsberg, East Prussia. His dissertation advisor was the mathematician Ferdinand von Lindemann, and he also benefited from classes with mathematicians Adolf Hurwitz and David Hilbert and physicist Emil Wiechert. His participation in the student fraternity Deutsche Burschenschaft resulted in a dueling scar on his face. He received his Ph.D. on 24 October 1891 (age 22).
After receiving his doctorate, Sommerfeld remained at Königsberg to work on his teaching diploma. He passed the national exam in 1892 and then began a year of military service, which was done with the reserve regiment in Königsberg. He completed his obligatory military service in September 1893, and for the next eight years continued voluntary eight-week military service. With his turned up moustache, his physical build, his Prussian bearing, and the fencing scar on his face, he gave the impression of being a colonel in the hussars.
Career
Göttingen
In October 1893, Sommerfeld went to the University of Göttingen, which was the center of mathematics in Germany. There, he became assistant to Theodor Liebisch, at the Mineralogical Institute, through a fortunate personal contact – Liebisch had been a professor at the University of Königsberg and a friend of the Sommerfeld family.
In September 1894, Sommerfeld became Felix Klein's assistant, which included taking comprehensive notes during Klein's lectures and writing them up for the Mathematics Reading Room, as well as managing the reading room. Sommerfeld's Habilitationsschrift was completed under Klein, in 1895, which allowed Sommerfeld to become a Privatdozent at Göttingen. As a Privatdozent, Sommerfeld lectured on a wide range of mathematical and mathematical physics topics. His lectures on partial differential equations were first offered at Göttingen, and they evolved over his teaching career to become Volume VI of his textbook series Lectures on Theoretical Physics, under the title Partial Differential Equations in Physics.
Lectures by Klein in 1895 and 1896 on rotating bodies led Klein and Sommerfeld to write a four-volume text Die Theorie des Kreisels – a 13-year collaboration, 1897–1910. The first two volumes were on theory, and the latter two were on applications in geophysics, astronomy, and technology. The association Sommerfeld had with Klein influenced Sommerfeld's turn of mind to be applied mathematics and in the art of lecturing.
While at Göttingen, Sommerfeld met Johanna Höpfner, daughter of Ernst Höpfner, curator at Göttingen. In October 1897 Sommerfeld began the appointment to the Chair of Mathematics at the Bergakademie in Clausthal-Zellerfeld; he was successor to Wilhelm Wien. This appointment provided enough income to eventually marry Johanna.
At Klein's request, Sommerfeld took on the position of editor of Volume V of Enzyklopädie der mathematischen Wissenschaften; it was a major undertaking which lasted from 1898 to 1926.
Aachen
In 1900, Sommerfeld started his appointment to the Chair of Applied Mechanics at the Königliche Technische Hochschule Aachen (later RWTH Aachen University) as extraordinarius professor, which was arranged through Klein's efforts. At Aachen, he developed the theory of hydrodynamics, which would retain his interest for a long time. Later, at the University of Munich, Sommerfeld's students Ludwig Hopf and Werner Heisenberg would write their Ph.D. theses on this topic. For his contributions to the understanding of journal bearing lubrication during his time at Aachen, he was named as one of the 23 "Men of Tribology" by Duncan Dowson.
Munich
From 1906, Sommerfeld established himself as ordinarius professor of physics and director of the new Theoretical Physics Institute at the University of Munich. He was selected for these positions by Wilhelm Röntgen, Director of the Physics Institute at Munich, which was looked upon by Sommerfeld as being called to a "privileged sphere of action".
Until the late 19th century and early 20th century, experimental physics in Germany was considered as having a higher status within the community. In the early 20th century, theorists, such as Sommerfeld at Munich and Max Born at the University of Göttingen, with their early training in mathematics, turned this around so that mathematical physics, i.e., theoretical physics, became the prime mover, and experimental physics was used to verify or advance theory. After getting their doctorates with Sommerfeld, Wolfgang Pauli, Werner Heisenberg, and Walter Heitler became Born's assistants and made significant contributions to the development of quantum mechanics, which was then in very rapid development.
During his 32 years of teaching at Munich, Sommerfeld taught general and specialized courses, as well as holding seminars and colloquia. The general courses were on mechanics, mechanics of deformable bodies, electrodynamics, optics, thermodynamics and statistical mechanics, and partial differential equations in physics. They were held four hours per week, 13 weeks in the winter and 11 weeks in the summer, and were for students who had taken experimental physics courses from Röntgen and later by Wilhelm Wien. There was also a two-hour weekly presentation for the discussion of problems. The specialized courses were of topical interest and based on Sommerfeld's research interests; material from these courses appeared later in the scientific literature publications of Sommerfeld. The objective of these special lectures was to grapple with current issues in theoretical physics and for Sommerfeld and the students to garner a systematic comprehension of the issue, independent of whether or not they were successful in solving the problem posed by the current issue. For the seminar and colloquium periods, students were assigned papers from the current literature and they then prepared an oral presentation. From 1942 to 1951, Sommerfeld worked on putting his lecture notes in order for publication. They were published as the six-volume Lectures on Theoretical Physics.
For a list of students, please see the list organized by type. Four of Sommerfeld's doctoral students, Werner Heisenberg, Wolfgang Pauli, Peter Debye, and Hans Bethe went on to win Nobel Prizes, while others, most notably, Walter Heitler, Rudolf Peierls, Karl Bechert, Hermann Brück, Paul Peter Ewald, Eugene Feenberg, Herbert Fröhlich, Erwin Fues, Ernst Guillemin, Helmut Hönl, Ludwig Hopf, Adolf Kratzer, Otto Laporte, Wilhelm Lenz, Karl Meissner, Rudolf Seeliger, Ernst C. Stückelberg, Heinrich Welker, Gregor Wentzel, Alfred Landé, and Léon Brillouin became famous in their own right. Three of Sommerfeld's postdoctoral supervisees, Linus Pauling, Isidor I. Rabi and Max von Laue, won Nobel Prizes, and ten others, William Allis, Edward Condon, Carl Eckart, Edwin C. Kemble, William V. Houston, Karl Herzfeld, Walther Kossel, Philip M. Morse, Howard Robertson, and Wojciech Rubinowicz went on to become famous in their own right. Walter Rogowski, an undergraduate student of Sommerfeld at RWTH Aachen, also went on to become famous in his own right. Max Born believed Sommerfeld's abilities included the "discovery and development of talents". Albert Einstein told Sommerfeld: "What I especially admire about you is that you have, as it were, pounded out of the soil such a large number of young talents." Sommerfeld's style as a professor and institute director did not put distance between him and his colleagues and students. He invited collaboration from them, and their ideas often influenced his own views in physics. He entertained them in his home and met with them in cafes before and after seminars and colloquia. Sommerfeld owned an alpine ski hut to which students were often invited for discussions of physics as demanding as the sport.
While at Munich, Sommerfeld came in contact with the special theory of relativity by Albert Einstein, which was not yet widely accepted. His mathematical contributions to the theory helped its acceptance by the skeptics. In 1914 he worked with Léon Brillouin on the propagation of electromagnetic waves in dispersive media. He became one of the founders of quantum mechanics; some of his contributions included co-discovery of the Sommerfeld–Wilson quantization rules (1915), a generalization of Bohr's atomic model, introduction of the Sommerfeld fine-structure constant (1916), co-discovery with Walther Kossel of the Sommerfeld–Kossel displacement law (1919), and publishing Atombau und Spektrallinien (1919), which became the "bible" of atomic theory for the new generation of physicists who developed atomic and quantum physics. The book underwent 4 editions from 1919 to 1924, to incorporate the latest advances in quantum mechanics, before splitting into two volumes.
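The Sommerfeld–Wilson quantization rules mentioned above can be stated compactly in modern textbook form: for each generalized coordinate q_k that varies periodically, with conjugate momentum p_k, the action integral over one period is restricted to integer multiples of Planck's constant,

```latex
\oint p_k \, \mathrm{d}q_k = n_k h ,
```

where each n_k is an integer quantum number. Applied to the hydrogen atom, this condition yields Bohr's circular orbits as a special case and adds the elliptical orbits labeled by the azimuthal quantum number.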
In 1918, Sommerfeld succeeded Einstein as chair of the Deutsche Physikalische Gesellschaft (DPG). One of his accomplishments was the founding of a new journal. The scientific papers published in DPG journals became so voluminous, that in 1919 a committee of the DPG recommended the establishment of Zeitschrift für Physik for publication of original research articles, which commenced in 1920. Since any reputable scientist could have their article published without refereeing, time between submission and publication was very rapid – as fast as two weeks. This greatly stimulated the scientific theoretical developments, especially that of quantum mechanics in Germany at that time, as this journal was the preferred publication vehicle for the new generation of quantum theorists with avant-garde views.
In the winter semester of 1922/1923, Sommerfeld gave the Carl Schurz Memorial Professor of Physics lectures at the University of Wisconsin–Madison.
In 1927 Sommerfeld applied Fermi–Dirac statistics to the Drude model of electrons in metals – a model put forth by Paul Drude. The new theory solved many of the problems predicting thermal properties the original model had and became known as the Drude–Sommerfeld model.
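A representative result of the model (stated here in modern notation as a reference point, not as a quotation from the 1927 work) is the electronic heat capacity obtained from the Sommerfeld expansion, which is linear in temperature:

```latex
c_v^{\mathrm{el}} = \frac{\pi^2}{2}\, n k_B \,\frac{k_B T}{E_F},
```

where n is the electron density and E_F the Fermi energy. Because k_B T is far smaller than E_F in a metal at ordinary temperatures, this is much less than the classical Drude prediction of (3/2) n k_B, resolving the long-standing puzzle of the missing electronic contribution to the specific heat of metals.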
In 1928/1929, Sommerfeld traveled globally, with major stops in India, China, Japan, and the United States.
Sommerfeld was a great theoretician. Besides his invaluable contributions to quantum theory, he worked in other fields of physics, such as the classical theory of electromagnetism. For example, he proposed a solution to the problem of a radiating hertzian dipole over a conducting earth, which over the years led to many applications. His Sommerfeld identity and Sommerfeld integrals are to the present day the most common way to solve this kind of problem. Also, as a mark of the prowess of Sommerfeld's school of theoretical physics and the rise of theoretical physics in the early 1900s, as of 1928, nearly one-third of the ordinarius professors of theoretical physics in the German-speaking world were students of Sommerfeld.
On 1 April 1935 Sommerfeld achieved emeritus status. He remained as his own temporary replacement during the selection process for his successor, which took until 1 December 1939. The process was lengthy due to academic and political differences between the Munich Faculty's selection and that of both the Reichserziehungsministerium (REM; Reich Education Ministry) and the supporters of Deutsche Physik, which was anti-Semitic and had a bias against theoretical physics, especially including quantum mechanics. The appointment of Wilhelm Müller – who was not a theoretical physicist, had not published in a physics journal, and was not a member of the Deutsche Physikalische Gesellschaft – as a replacement for Sommerfeld, was considered such a travesty and detrimental to educating a new generation of physicists that both Ludwig Prandtl, director of the Kaiser Wilhelm Institut für Strömungsforschung (Kaiser Wilhelm Institute for Flow Research), and Carl Ramsauer, director of the research division of the Allgemeine Elektrizitäts-Gesellschaft (General Electric Company) and president of the Deutsche Physikalische Gesellschaft, made reference to this in their correspondence to officials in the Reich. In an attachment to Prandtl's 28 April 1941 letter to Reich Marshal Hermann Göring, Prandtl referred to the appointment as "sabotage" of necessary theoretical physics instruction. In an attachment to Ramsauer's 20 January 1942 letter to Reich Minister Bernhard Rust, Ramsauer concluded that the appointment amounted to the "destruction of the Munich theoretical physics tradition".
As for Sommerfeld's once patriotic views, he wrote to Einstein shortly after Hitler took power: "I can assure you that the misuse of the word ‘national’ by our rulers has thoroughly broken me of the habit of national feelings that was so pronounced in my case. I would now be willing to see Germany disappear as a power and merge into a pacified Europe."
Sommerfeld was awarded many honors in his lifetime, such as the Lorentz Medal, the Max-Planck Medal, the Oersted Medal, election to the Royal Society of London, the United States National Academy of Sciences, Academy of Sciences of the USSR, the Indian Academy of Sciences, and other academies including those in Berlin, Munich, Göttingen, and Vienna, as well as having conferred on him numerous honorary degrees from universities including Rostock, Aachen, Calcutta, and Athens. He was elected an Honorary member of the Optical Society in 1950. He was nominated for the Nobel Prize 84 times, more than any other physicist (including Otto Stern, who got nominated 82 times), but he never received the award.
Sommerfeld died on 26 April 1951 in Munich from injuries sustained in a traffic accident while walking with his grandchildren. The accident occurred at the corner of Dietlindenstrasse and Biedersteiner Strasse, near his house at Dunantstrasse 6. He is buried at the Nordfriedhof, close to where he lived at the time.
In 2004, the center for theoretical physics at the University of Munich was named after him.
Works
Articles
Arnold Sommerfeld, "Mathematische Theorie der Diffraction" (The Mathematical Theory of Diffraction), Math. Ann. 47(2–3), pp. 317–374. (1896). .
Translated by Raymond J. Nagem, Mario Zampolli, and Guido Sandri in Mathematical Theory of Diffraction (Birkhäuser Boston, 2003),
Arnold Sommerfeld, "Uber die Ausbreitung der Wellen in der Drahtlosen Telegraphie" (The Propagation of Waves in Wireless Telegraphy), Ann. Physik [4] 28, 665 (1909); 62, 95 (1920); 81, 1135 (1926).
Arnold Sommerfeld, "Some Reminiscences of My Teaching Career", American Journal of Physics Volume 17, Number 5, 315–316 (1949). Address upon receipt of the 1948 Oersted Medal.
Books
Arnold Sommerfeld, Atombau und Spektrallinien (Friedrich Vieweg und Sohn, Braunschweig, 1919)
Arnold Sommerfeld, translated from the third German edition by Henry L. Brose Atomic Structure and Spectral Lines (Methuen, 1923)
Arnold Sommerfeld, Three Lectures on Atomic Physics (London: Methuen, 1926)
Arnold Sommerfeld, Atombau und Spektrallinien, Wellenmechanischer Ergänzungband (Vieweg, Braunschweig, 1929)
Arnold Sommerfeld, translated by Henry L. Brose Wave-Mechanics: Supplementary Volume to Atomic Structure and Spectral Lines (Dutton, 1929)
Arnold Sommerfeld, Lectures on Wave Mechanics Delivered before the Calcutta University (Calcutta University, 1929)
Arnold Sommerfeld and Hans Bethe, Elektronentheorie der Metalle, in H. Geiger and K. Scheel, editors Handbuch der Physik Volume 24, Part 2, 333–622 (Springer, 1933). This nearly 300-page chapter was later published as a separate book: Elektronentheorie der Metalle (Springer, 1967).
Arnold Sommerfeld, Mechanik – Vorlesungen über theoretische Physik Band 1 (Akademische Verlagsgesellschaft Becker & Erler, 1943)
Arnold Sommerfeld, translated from the fourth German edition by Martin O. Stern, Mechanics – Lectures on Theoretical Physics Volume I (Academic Press, 1964)
Arnold Sommerfeld, Mechanik der deformierbaren Medien – Vorlesungen über theoretische Physik Band 2 (Akademische Verlagsgesellschaft Becker & Erler, 1945)
Arnold Sommerfeld, translated from the second German edition by G. Kuerti, Mechanics of Deformable Bodies – Lectures on Theoretical Physics Volume II (Academic Press, 1964)
Arnold Sommerfeld, Elektrodynamik – Vorlesungen über theoretische Physik Band 3 (Klemm Verlag, Erscheinungsort, 1948)
Arnold Sommerfeld, translated from the German by Edward G. Ramberg Electrodynamics – Lectures on Theoretical Physics Volume III (Academic Press, 1964)
Arnold Sommerfeld, Optik – Vorlesungen über theoretische Physik Band 4 (Dieterich'sche Verlagsbuchhandlung, 1950)
Arnold Sommerfeld, translated from the first German edition by Otto Laporte and Peter A. Moldauer Optics – Lectures on Theoretical Physics Volume IV (Academic Press, 1964)
Arnold Sommerfeld, Thermodynamik und Statistik – Vorlesungen über theoretische Physik Band 5 Herausgegeben von Fritz Bopp und Josef Meixner. (Diederich sche Verlagsbuchhandlung, 1952)
Arnold Sommerfeld, edited by F. Bopp and J. Meixner, and translated by J. Kestin, Thermodynamics and Statistical Mechanics – Lectures on Theoretical Physics Volume V (Academic Press, 1964)
Arnold Sommerfeld, Partielle Differentialgleichungen der Physik – Vorlesungen über theoretische Physik Band 6 (Dieterich'sche Verlagsbuchhandlung, 1947)
Arnold Sommerfeld, translated by Ernest G. Straus, Partial Differential Equations in Physics – Lectures on Theoretical Physics Volume VI (Academic Press, first printing 1949, second printing 1953; also as n°1 of AP pure and applied mathematics collection)
Felix Klein and Arnold Sommerfeld, Über die Theorie des Kreisels [4 volumes] (Teubner, 1897)
Felix Klein and Arnold Sommerfeld, translated by Raymond J. Nagem and Guido Sandri, The Theory of the Top, vol 1. (Boston: Birkhauser, 2008)
See also
List of things named after Arnold Sommerfeld
References
Further reading
Benz, Ulrich, Arnold Sommerfeld. Lehrer und Forscher an der Schwelle zum Atomzeitalter 1868–1951 (Wissenschaftliche Verlagsgesellschaft, 1975)
Beyerchen, Alan D., Scientists Under Hitler: Politics and the Physics Community in the Third Reich (Yale, 1977)
Born, Max, Arnold Johannes Wilhelm Sommerfeld, 1868–1951, Obituary Notices of Fellows of the Royal Society Volume 8, Number 21, pp. 274–296 (1952)
Cassidy, David C., Uncertainty: The Life and Science of Werner Heisenberg (W. H. Freeman and Company, 1992), (Since Werner Heisenberg was one of Sommerfeld's Ph.D. students, this is an indirect source of information on Sommerfeld, but the information on him is rather extensive and well documented.)
Eckert, Michael, Arnold Sommerfeld: Atomphysiker und Kulturbote 1868–1951. Eine Biografie (Deutsches Museum, Wallstein Verlag, 2013)
Eckert, Michael, trans. Tom Artin, Arnold Sommerfeld: Science, Life and Turbulent Times, 1868–1951 (Springer, 2013)
Eckert, Michael, Propaganda in science: Sommerfeld and the spread of the electron theory of metals, Historical Studies in the Physical and Biological Sciences Volume 17, Number 2, pp. 191–233 (1987)
Eckert, Michael, Mathematics, Experiments, and Theoretical Physics: The Early Days of the Sommerfeld School, Physics in Perspective Volume 1, Number 3, pp. 238–252 (1999)
Hentschel, Klaus (Editor) and Ann M. Hentschel (Editorial Assistant and Translator), Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996)
Jungnickel, Christa and Russell McCormmach. Intellectual Mastery of Nature: Theoretical Physics from Ohm to Einstein, Volume 1: The Torch of Mathematics, 1800 to 1870. University of Chicago Press, paper cover, 1990a.
Jungnickel, Christa and Russell McCormmach. Intellectual Mastery of Nature. Theoretical Physics from Ohm to Einstein, Volume 2: The Now Mighty Theoretical Physics, 1870 to 1925. University of Chicago Press, Paper cover, 1990b.
Kant, Horst, Arnold Sommerfeld – Kommunikation und Schulenbildung in Fuchs-Kittowski, Klaus; Laitko, Hubert; Parthey, Heinrich; Umstätter, Walther (editors), Wissenschaft und Digitale Bibliothek: Wissenschaftsforschung Jahrbuch 1998 135–152 (Verlag der Gesellschaft für Wissenschaftsforschung, 2000)
Kirkpatrick, Paul, Address of Recommendation by Professor Paul Kirkpatrick, Chairman of the Committee on Awards, American Journal of Physics Volume 17, Number 5, pp. 312–314 (1949). Address preceding award to Arnold Sommerfeld, recipient of the 1948 Oersted Medal for Notable Contributions to the Teaching of Physics, 28 January 1949.
Kragh, Helge, Quantum Generations: A History of Physics in the Twentieth Century (Princeton University Press, fifth printing and first paperback printing, 2002),
Kuhn, Thomas S., John L. Heilbron, Paul Forman, and Lini Allen, Sources for History of Quantum Physics (American Philosophical Society, 1967)
Mehra, Jagdish, and Helmut Rechenberg, The Historical Development of Quantum Theory. Volume 1, Part 1, The Quantum Theory of Planck, Einstein, Bohr and Sommerfeld 1900–1925: Its Foundation and the Rise of Its Difficulties. (Springer, 1982),
Pauling, Linus, Arnold Sommerfeld: 1868–1951, Science Volume 114, Number 2963, pp. 383–384 (1951)
Singh, Rajinder, "Arnold Sommerfeld – The Supporter of Indian Physics in Germany" Current Science 81 No. 11, 10 December 2001, pp. 1489–1494
Walker, Mark, Nazi Science: Myth, Truth, and the German Atomic Bomb (Persius, 1995),
External links
Annotated bibliography for Arnold Sommerfeld from the Alsos Digital Library for Nuclear Issues
Arnold Sommerfeld Biography – American Philosophical Society (includes information on his students.)
Arnold Sommerfeld Biography – Zurich ETH-Bibliothek
Karin Reich (1995) Die Rolle Arnold Sommerfeld bei der Diskussion um die Vektorrechnung
Arnold Sommerfeld's Students – The Mathematics Genealogy Project
N. Mukunda (2015) Arnold Sommerfeld: Physicist and Teacher Beyond Compare from Indian Academy of Sciences
Michael Eckert (Video): Sommerfeld's Munich Quantum School – 3rd Conference on the History of Quantum Physics (June 2011)
Together with: Presentation, including many historical pictures
Hans Bethe talking about his time as Sommerfeld's Student on Peoples Archive
Relativitätstheorie – Sommerfeld's 1921 introduction to special and general relativity for general audiences (German)
Sommerfeld-Project – Leibniz-Rechenzentrum der Wissenschaften
A collection of digitized materials related to Sommerfeld's and Linus Pauling's structural chemistry research.
Arnold Sommerfeld and Condensed Matter Physics, Annual Review of Condensed Matter Physics Vol. 8:31–49
Arnold Sommerfeld
1868 births
1951 deaths
Scientists from Königsberg
Scientists from the Province of Prussia
19th-century German physicists
20th-century German physicists
German fluid dynamicists
Optical physicists
German quantum physicists
Tribologists
University of Königsberg alumni
Academic staff of the Ludwig Maximilian University of Munich
Academic staff of RWTH Aachen University
Academic staff of the University of Göttingen
Academic staff of the Clausthal University of Technology
Foreign associates of the National Academy of Sciences
Foreign members of the Royal Society
Foreign members of the USSR Academy of Sciences
Honorary members of the USSR Academy of Sciences
Lorentz Medal winners
Recipients of the Matteucci Medal
Winners of the Max Planck Medal
Road incident deaths in West Germany | Arnold Sommerfeld | Materials_science | 5,534 |
2,868,762 | https://en.wikipedia.org/wiki/Hexagonal%20lattice | The hexagonal lattice (sometimes called triangular lattice) is one of the five two-dimensional Bravais lattice types. The symmetry category of the lattice is wallpaper group p6m. The primitive translation vectors of the hexagonal lattice form an angle of 120° and are of equal lengths, |a1| = |a2| = a; a conventional choice in Cartesian coordinates is a1 = a(1, 0) and a2 = a(−1/2, √3/2).
The reciprocal lattice of the hexagonal lattice is a hexagonal lattice in reciprocal space with orientation changed by 90° and primitive lattice vectors of length g = 4π/(a√3).
Honeycomb point set
The honeycomb point set is a special case of the hexagonal lattice with a two-atom basis. The centers of the hexagons of a honeycomb form a hexagonal lattice, and the honeycomb point set can be seen as the union of two offset hexagonal lattices.
In nature, carbon atoms of the two-dimensional material graphene are arranged in a honeycomb point set.
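The union-of-two-lattices description can be made concrete with a short Python sketch (the particular primitive vectors and the offset δ = (2a1 + a2)/3 are one conventional choice among several equivalent ones):

```python
import numpy as np

a = 1.0  # lattice constant of the underlying hexagonal lattice
# Primitive vectors: equal length a, at 120 degrees to each other.
a1 = a * np.array([1.0, 0.0])
a2 = a * np.array([-0.5, np.sqrt(3) / 2])
# Offset between the two interpenetrating hexagonal sublattices; its
# length a/sqrt(3) is the nearest-neighbour (bond) distance of the honeycomb.
delta = (2 * a1 + a2) / 3

def honeycomb_points(n):
    """Honeycomb point set as the union of two offset hexagonal lattices."""
    points = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            r = i * a1 + j * a2
            points.append(r)          # sublattice A
            points.append(r + delta)  # sublattice B (offset copy)
    return np.array(points)

pts = honeycomb_points(2)  # a small patch of 50 points
```

Every point then has three nearest neighbours at distance a/√3, the coordination expected of a honeycomb.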
Crystal classes
The hexagonal lattice class names, Schönflies notation, Hermann-Mauguin notation, orbifold notation, Coxeter notation, and wallpaper groups are listed in the table below.
See also
Square lattice
Hexagonal tiling
Close-packing
Centered hexagonal number
Eisenstein integer
Voronoi diagram
Hermite constant
References
Lattice points
Crystal systems | Hexagonal lattice | Chemistry,Materials_science,Mathematics | 256 |
15,028,540 | https://en.wikipedia.org/wiki/BZW1 | Basic leucine zipper and W2 domain-containing protein 1 is a protein that in humans is encoded by the BZW1 gene and expressed in the nucleus. It enables RNA and cadherin binding activity.
Salivary gland carcinoma and eastern equine encephalitis are associated with BZW1.
Interactions
BZW1 has been shown to interact with PSTPIP1 and CDC5L.
References
Further reading
External links | BZW1 | Chemistry | 95 |
106,364 | https://en.wikipedia.org/wiki/Algebraic%20structure | In mathematics, an algebraic structure or algebraic system consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A (typically binary operations such as addition and multiplication), and a finite set of identities (known as axioms) that these operations must satisfy.
An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars), and elements of the vector space (called vectors).
Abstract algebra is the name that is commonly given to the study of algebraic structures. The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that includes also other mathematical structures and functions between structures of the same type (homomorphisms).
In universal algebra, an algebraic structure is called an algebra; this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring.
The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and homomorphisms between them form a concrete category.
Introduction
Addition and multiplication are prototypical examples of operations that combine two elements of a set to produce a third element of the same set. These operations obey several algebraic laws. For example, a + (b + c) = (a + b) + c and a(bc) = (ab)c are associative laws, and a + b = b + a and ab = ba are commutative laws. Many systems studied by mathematicians have operations that obey some, but not necessarily all, of the laws of ordinary arithmetic. For example, the possible moves of an object in three-dimensional space can be combined by performing a first move of the object, and then a second move from its new position. Such moves, formally called rigid motions, obey the associative law, but fail to satisfy the commutative law.
Sets with one or more operations that obey specific laws are called algebraic structures. When a new problem involves the same laws as such an algebraic structure, all the results that have been proved using only the laws of the structure can be directly applied to the new problem.
In full generality, algebraic structures may involve an arbitrary collection of operations, including operations that combine more than two elements (higher arity operations) and operations that take only one argument (unary operations) or even zero arguments (nullary operations). The examples listed below are by no means a complete list, but include the most common structures taught in undergraduate courses.
Common axioms
Equational axioms
An axiom of an algebraic structure often has the form of an identity, that is, an equation such that the two sides of the equals sign are expressions that involve operations of the algebraic structure and variables. If the variables in the identity are replaced by arbitrary elements of the algebraic structure, the equality must remain true. Here are some common examples.
Commutativity An operation ∗ is commutative if x ∗ y = y ∗ x for every x and y in the algebraic structure.
Associativity An operation ∗ is associative if (x ∗ y) ∗ z = x ∗ (y ∗ z) for every x, y and z in the algebraic structure.
Left distributivity An operation ∗ is left distributive with respect to another operation + if x ∗ (y + z) = (x ∗ y) + (x ∗ z) for every x, y and z in the algebraic structure (the second operation is denoted here as +, because the second operation is addition in many common examples).
Right distributivity An operation ∗ is right distributive with respect to another operation + if (y + z) ∗ x = (y ∗ x) + (z ∗ x) for every x, y and z in the algebraic structure.
Distributivity An operation ∗ is distributive with respect to another operation + if it is both left distributive and right distributive. If the operation ∗ is commutative, left and right distributivity are both equivalent to distributivity.
Existential axioms
Some common axioms contain an existential clause. In general, such a clause can be avoided by introducing further operations, and replacing the existential clause by an identity involving the new operation. More precisely, let us consider an axiom of the form "for all X there is y such that f(X, y) = g(X, y)", where X is a k-tuple of variables. Choosing a specific value of y for each value of X defines a function φ : X ↦ y, which can be viewed as an operation of arity k, and the axiom becomes the identity f(X, φ(X)) = g(X, φ(X)).
The introduction of such an auxiliary operation complicates slightly the statement of an axiom, but has some advantages. Given a specific algebraic structure, the proof that an existential axiom is satisfied consists generally of the definition of the auxiliary function, completed with straightforward verifications. Also, when computing in an algebraic structure, one generally uses the auxiliary operations explicitly. For example, in the case of numbers, the additive inverse is provided by the unary minus operation x ↦ −x.
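As a concrete instance of this rewriting (a standard generic example, not tied to a particular source), the existential axiom asserting additive inverses becomes an identity once the unary minus is adopted as an operation:

```latex
\forall x \; \exists y : \; x + y = 0
\qquad \text{becomes} \qquad
x + (-x) = 0 .
```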
Also, in universal algebra, a variety is a class of algebraic structures that share the same operations, and the same axioms, with the condition that all axioms are identities. What precedes shows that existential axioms of the above form are accepted in the definition of a variety.
Here are some of the most common existential axioms.
Identity element
A binary operation $*$ has an identity element if there is an element $e$ such that $x * e = x$ and $e * x = x$ for all $x$ in the structure. Here, the auxiliary operation is the operation of arity zero that has $e$ as its result.
Inverse element
Given a binary operation $*$ that has an identity element $e$, an element $x$ is invertible if it has an inverse element, that is, if there exists an element $\operatorname{inv}(x)$ such that $\operatorname{inv}(x) * x = e$ and $x * \operatorname{inv}(x) = e$. For example, a group is an algebraic structure with a binary operation that is associative, has an identity element, and for which all elements are invertible.
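Putting these clauses together, the group axioms can be stated purely as identities in the binary operation $*$, the constant $e$ (an operation of arity zero), and the unary operation $\operatorname{inv}$ (a standard restatement of the definition above):

$$(x * y) * z = x * (y * z), \qquad x * e = e * x = x, \qquad x * \operatorname{inv}(x) = \operatorname{inv}(x) * x = e.$$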
Non-equational axioms
The axioms of an algebraic structure can be any first-order formula, that is a formula involving logical connectives (such as "and", "or" and "not"), and logical quantifiers ($\forall$, $\exists$) that apply to elements (not to subsets) of the structure.
A typical such axiom is inversion in fields. This axiom cannot be reduced to axioms of preceding types. (It follows that fields do not form a variety in the sense of universal algebra.) It can be stated: "Every nonzero element of a field is invertible;" or, equivalently: the structure has a unary operation $\operatorname{inv}$ such that $x \cdot \operatorname{inv}(x) = 1$ for every nonzero element $x$.
The operation $\operatorname{inv}$ can be viewed either as a partial operation that is not defined for $x = 0$; or as an ordinary function whose value at 0 is arbitrary and must not be used.
Common algebraic structures
One set with operations
Simple structures: no binary operation:
Set: a degenerate algebraic structure S having no operations.
Group-like structures: one binary operation. The binary operation can be indicated by any symbol, or with no symbol (juxtaposition) as is done for ordinary multiplication of real numbers.
Group: a monoid (a set with an associative binary operation and an identity element) with a unary operation (inverse), giving rise to inverse elements.
Abelian group: a group whose binary operation is commutative.
Ring-like structures or Ringoids: two binary operations, often called addition and multiplication, with multiplication distributing over addition.
Ring: a semiring whose additive monoid is an abelian group.
Division ring: a nontrivial ring in which division by nonzero elements is defined.
Commutative ring: a ring in which the multiplication operation is commutative.
Field: a commutative division ring (i.e. a commutative ring which contains a multiplicative inverse for every nonzero element).
Lattice structures: two or more binary operations, including operations called meet and join, connected by the absorption law.
Complete lattice: a lattice in which arbitrary meets and joins exist.
Bounded lattice: a lattice with a greatest element and least element.
Distributive lattice: a lattice in which each of meet and join distributes over the other. A power set under union and intersection forms a distributive lattice.
Boolean algebra: a complemented distributive lattice. Either of meet or join can be defined in terms of the other and complementation.
Two sets with operations
Module: an abelian group M and a ring R acting as operators on M. The members of R are sometimes called scalars, and the binary operation of scalar multiplication is a function R × M → M, which satisfies several axioms. Counting the ring operations these systems have at least three operations.
Vector space: a module where the ring R is a field or, in some contexts, a division ring.
Algebra over a field: a module over a field, which also carries a multiplication operation that is compatible with the module structure. This includes distributivity over addition and linearity with respect to multiplication.
Inner product space: a field F and vector space V with a definite bilinear form $\langle \cdot, \cdot \rangle : V \times V \to F$.
Hybrid structures
Algebraic structures can also coexist with added structure of non-algebraic nature, such as partial order or a topology. The added structure must be compatible, in some sense, with the algebraic structure.
Topological group: a group with a topology compatible with the group operation.
Lie group: a topological group with a compatible smooth manifold structure.
Ordered groups, ordered rings and ordered fields: each type of structure with a compatible partial order.
Archimedean group: a linearly ordered group for which the Archimedean property holds.
Topological vector space: a vector space whose underlying set carries a compatible topology.
Normed vector space: a vector space with a compatible norm. If such a space is complete (as a metric space) then it is called a Banach space.
Hilbert space: an inner product space over the real or complex numbers whose inner product gives rise to a Banach space structure.
Vertex operator algebra
Von Neumann algebra: a *-algebra of operators on a Hilbert space equipped with the weak operator topology.
Universal algebra
Algebraic structures are defined through different configurations of axioms. Universal algebra abstractly studies such objects. One major dichotomy is between structures that are axiomatized entirely by identities and structures that are not. If all axioms defining a class of algebras are identities, then this class is a variety (not to be confused with algebraic varieties of algebraic geometry).
Identities are equations formulated using only the operations the structure allows, and variables that are tacitly universally quantified over the relevant universe. Identities contain no connectives, existentially quantified variables, or relations of any kind other than the allowed operations. The study of varieties is an important part of universal algebra. An algebraic structure in a variety may be understood as the quotient algebra of term algebra (also called "absolutely free algebra") divided by the equivalence relations generated by a set of identities. So, a collection of functions with given signatures generate a free algebra, the term algebra T. Given a set of equational identities (the axioms), one may consider their symmetric, transitive closure E. The quotient algebra T/E is then the algebraic structure or variety. Thus, for example, groups have a signature containing two operators: the multiplication operator m, taking two arguments, and the inverse operator i, taking one argument, and the identity element e, a constant, which may be considered an operator that takes zero arguments. Given a (countable) set of variables x, y, z, etc. the term algebra is the collection of all possible terms involving m, i, e and the variables; so for example, m(i(x), m(x, m(y,e))) would be an element of the term algebra. One of the axioms defining a group is the identity m(x, i(x)) = e; another is m(x,e) = x. The axioms can be represented as trees. These equations induce equivalence classes on the free algebra; the quotient algebra then has the algebraic structure of a group.
Some structures do not form varieties, because either:
It is necessary that 0 ≠ 1, 0 being the additive identity element and 1 being a multiplicative identity element, but this is a nonidentity;
Structures such as fields have some axioms that hold only for nonzero members of S. For an algebraic structure to be a variety, its operations must be defined for all members of S; there can be no partial operations.
Structures whose axioms unavoidably include nonidentities are among the most important ones in mathematics, e.g., fields and division rings. Structures with nonidentities present challenges varieties do not. For example, the direct product of two fields is not a field, because $(1, 0) \cdot (0, 1) = (0, 0)$, but fields do not have zero divisors.
Category theory
Category theory is another tool for studying algebraic structures (see, for example, Mac Lane 1998). A category is a collection of objects with associated morphisms. Every algebraic structure has its own notion of homomorphism, namely any function compatible with the operation(s) defining the structure. In this way, every algebraic structure gives rise to a category. For example, the category of groups has all groups as objects and all group homomorphisms as morphisms. This concrete category may be seen as a category of sets with added category-theoretic structure. Likewise, the category of topological groups (whose morphisms are the continuous group homomorphisms) is a category of topological spaces with extra structure. A forgetful functor between categories of algebraic structures "forgets" a part of a structure.
There are various concepts in category theory that try to capture the algebraic character of a context, for instance
algebraic category
essentially algebraic category
presentable category
locally presentable category
monadic functors and categories
universal property.
Different meanings of "structure"
In a slight abuse of notation, the word "structure" can also refer to just the operations on a structure, instead of the underlying set itself. For example, the sentence, "We have defined a ring structure on the set $A$", means that we have defined ring operations on the set $A$. For another example, the group $(\mathbb{Z}, +)$ can be seen as a set $\mathbb{Z}$ that is equipped with an algebraic structure, namely the operation $+$.
See also
Free object
Mathematical structure
Signature (logic)
Structure (mathematical logic)
Notes
References
Category theory
External links
Jipsen's algebra structures. Includes many structures not mentioned here.
Mathworld page on abstract algebra.
Stanford Encyclopedia of Philosophy: Algebra by Vaughan Pratt.
Abstract algebra
Mathematical structures | Algebraic structure | Mathematics | 3,002 |
19,389,633 | https://en.wikipedia.org/wiki/Tarski%E2%80%93Seidenberg%20theorem | In mathematics, the Tarski–Seidenberg theorem states that a set in (n + 1)-dimensional space defined by polynomial equations and inequalities can be projected down onto n-dimensional space, and the resulting set is still definable in terms of polynomial identities and inequalities. The theorem—also known as the Tarski–Seidenberg projection property—is named after Alfred Tarski and Abraham Seidenberg. It implies that quantifier elimination is possible over the reals, that is that every formula constructed from polynomial equations and inequalities by logical connectives ∨ (or), ∧ (and), ¬ (not) and quantifiers ∀ (for all), ∃ (exists) is equivalent to a similar formula without quantifiers. An important consequence is the decidability of the theory of real-closed fields.
Although the original proof of the theorem was constructive, the resulting algorithm has a computational complexity that is too high for using the method on a computer. George E. Collins introduced the algorithm of cylindrical algebraic decomposition, which allows quantifier elimination over the reals in double exponential time. This complexity is optimal, as there are examples where the output has a double exponential number of connected components. This algorithm is therefore fundamental, and it is widely used in computational algebraic geometry.
Statement
A semialgebraic set in Rn is a finite union of sets defined by a finite number of polynomial equations and inequalities, that is by a finite number of statements of the form p(x1, ..., xn) = 0 and q(x1, ..., xn) > 0 for polynomials p and q. We define a projection map π : Rn +1 → Rn by sending a point (x1, ..., xn, xn +1) to (x1, ..., xn). Then the Tarski–Seidenberg theorem states that if X is a semialgebraic set in Rn +1 for some n ≥ 1, then π(X) is a semialgebraic set in Rn.
Failure with algebraic sets
If we only define sets using polynomial equations and not inequalities then we define algebraic sets rather than semialgebraic sets. For these sets the theorem fails, i.e. projections of algebraic sets need not be algebraic. As a simple example consider the hyperbola in R2 defined by the equation xy = 1.
This is a perfectly good algebraic set, but projecting it down by sending (x, y) in R2 to x in R produces the set of points satisfying x ≠ 0. This is a semialgebraic set, but it is not an algebraic set as the algebraic sets in R are R itself, the empty set and the finite sets.
This example shows also that, over the complex numbers, the projection of an algebraic set may be non-algebraic. Thus the existence of real algebraic sets with non-algebraic projections does not rely on the fact that the field of real numbers is not algebraically closed.
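Phrased as quantifier elimination, the hyperbola example above becomes the worked equivalence

$$\exists y\,(xy = 1) \iff x \neq 0,$$

in which the quantified formula on the left is replaced by the quantifier-free inequality on the right.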
Another example is the parabola in R2, which is defined by the equation y² = x.
Its projection onto the x-axis is the half-line [0, ∞), a semialgebraic set that cannot be obtained from algebraic sets by (finite) intersections, unions, and set complements.
Relation to structures
This result confirmed that semialgebraic sets in Rn form what is now known as an o-minimal structure on R. These are collections of subsets Sn of Rn for each n ≥ 1 such that we can take finite unions and complements of the subsets in Sn and the result will still be in Sn, moreover the elements of S1 are simply finite unions of intervals and points. The final condition for such a collection to be an o-minimal structure is that the projection map on the first n coordinates from Rn +1 to Rn must send subsets in Sn +1 to subsets in Sn. The Tarski–Seidenberg theorem tells us that this holds if Sn is the set of semialgebraic sets in Rn.
See also
Decidability of first-order theories of the real numbers
References
External links
Tarski–Seidenberg theorem at PlanetMath.org
Real algebraic geometry
Theorems in algebraic geometry | Tarski–Seidenberg theorem | Mathematics | 884 |
42,576,177 | https://en.wikipedia.org/wiki/BioMotiv | BioMotiv is an accelerator company associated with The Harrington Project, an initiative centered at University Hospitals of Cleveland. Therapeutic opportunities are identified through relationships with The Harrington Discovery Institute, university and research institutions, disease foundations, and industry sources. Once opportunities are identified, BioMotiv oversees the development, funding, active management, and partnering of the therapeutic products.
History
The Harrington Project was launched as an effort to bridge varying aspects of the drug development sphere. In response to recent decline in the number of traditional, early-stage biotechnology venture capital firms, BioMotiv utilizes an asset-centric model to in-license, fund, and manage technologies in-house. Its goal is to address the "valley of death" between research, discovery, and early clinical-stage drug development. Projects were advanced by the management team through clinical proof-of-concept, and then out-licensed via strategic alliances with pharmaceutical companies.
The company has raised over $146 million to date. Major investors include The Harrington Family, University Hospitals of Cleveland, Takeda Pharmaceutical Company, Biogen, Arix Bioscience, and Charles River Laboratories.
Leadership
Ron Harrington, BioMotiv's Board of Managers Chairman and The Harrington Project's lead, led Edgepark Medical Supplies and after growing the business successfully, sold the company in 2011 to Goldman Sachs and Clayton, Dubilier & Rice (which renamed the company AssuraMed and sold it again in 2014 to Cardinal Health, Inc.).
BioMotiv recruited Baiju Shah, former CEO of BioEnterprise, a non-profit aimed at boosting Cleveland's healthcare economy, to lead its efforts. Prior to founding BioEnterprise, Baiju worked at McKinsey & Company.
Ted Torphy is BioMotiv's Chief Scientific Officer. Prior to BioMotiv, he served as Global Head of External Innovation & Business Models for Discovery Sciences, Vice President and Head of External Research and Early Development, and Corporate Vice President and Head of Johnson & Johnson's Corporate Office of Science & Technology.
David C. U'Prichard is the Chairman of the Advisory Board; he has served on a number of biotechnology boards, as a venture partner at several funds, and was Chairman of Research and Development at SmithKline Beecham.
Subsidiaries
BioMotiv launched its first subsidiary company, Orca Pharmaceuticals, in 2013. Based in Oxford, England, it worked in collaboration with New York University Innovation Venture Fund to develop RAR-related orphan receptor gamma (RORy) inhibitors for treatments of psoriasis, ankylosing spondylitis, and inflammatory bowel disease.
Today, BioMotiv's subsidiary portfolio includes ten companies across five indication areas.
Partnerships
BioMotiv's strategic partners include leaders in the drug development and pharmaceutical industries, as well as disease foundations.
In September 2014, BioMotiv entered into its first strategic partnership with Takeda Pharmaceutical Company in the areas of Immunology & Inflammation and Cardio-metabolic Diseases.
Today, BioMotiv has four strategic partners:
Takeda
Biogen
Arix Bioscience
Charles River Laboratories
See also
Medical Research
Translational Research
Biotechnology
Further reading
Crossing Over the Valley of Death. Faster Cures. Faster Cures
Biotech Funding Gets Harder to Find. Wall Street Journal. Wall Street Journal
References
External links
BioMotiv,
The Harrington Project
Companies based in Cleveland
Biotechnology | BioMotiv | Biology | 684 |
1,277,806 | https://en.wikipedia.org/wiki/Cidofovir | Cidofovir, brand name Vistide, is a topical or injectable antiviral medication primarily used as a treatment for cytomegalovirus (CMV) retinitis (an infection of the retina of the eye) in people with AIDS.
Cidofovir was approved for medical use in 1996.
Medical use
DNA virus
Its only indication that has received regulatory approval worldwide is cytomegalovirus retinitis. Cidofovir has also shown efficacy in the treatment of aciclovir-resistant HSV infections. Cidofovir has also been investigated as a treatment for progressive multifocal leukoencephalopathy with successful case reports of its use. Despite this, the drug failed to demonstrate any efficacy in controlled studies. Cidofovir might have anti-smallpox efficacy and might be used on a limited basis in the event of a bioterror incident involving smallpox cases. Brincidofovir, a cidofovir derivative with much higher activity against smallpox that can be taken orally has been developed. It has inhibitory effects on varicella-zoster virus replication in vitro although no clinical trials have been done to date, likely due to the abundance of safer alternatives such as aciclovir. Cidofovir shows anti-BK virus activity in a subgroup of transplant recipients. Cidofovir is being investigated as a complementary intralesional therapy against papillomatosis caused by HPV.
It first received FDA approval on 26 June 1996, TGA approval on 30 April 1998 and EMA approval on 23 April 1997.
It has been used topically to treat warts.
Other
It has been suggested as an antitumour agent, due to its suppression of FGF2.
Administration
Cidofovir is only available as an intravenous formulation. Cidofovir is to be administered with probenecid, which decreases its toxic effects on the kidney. Probenecid mitigates nephrotoxicity by inhibiting organic anion transport of the proximal tubule epithelial cells of the kidney. In addition, hydration must be administered to patients receiving cidofovir. 1 liter of normal saline is recommended in conjunction with each dose of cidofovir.
Side effects
The major dose-limiting side effect of cidofovir is nephrotoxicity (i.e., kidney damage). Other common side effects (occurring in >1% of people treated with the drug) include:
Nausea
Vomiting
Neutropenia
Hair loss
Weakness
Headache
Chills
Decreased intraocular pressure
Uveitis
Iritis
Uncommon side effects include anaemia and elevated liver enzymes, and rare side effects include tachycardia and Fanconi syndrome. Probenecid (a uricosuric drug) and intravenous saline should always be administered with each cidofovir infusion to prevent nephrotoxicity.
Contraindications
Hypersensitivity to cidofovir or probenecid (as probenecid needs to be given concurrently to avoid nephrotoxicity).
Interactions
It is known to interact with nephrotoxic agents (e.g. amphotericin B, foscarnet, IV aminoglycosides, IV pentamidine, vancomycin, tacrolimus, non-steroidal anti-inflammatory drugs, etc.) to increase their nephrotoxic potential. As it must be given concurrently with probenecid, it is advised that drugs that are known to interact with probenecid (e.g. drugs whose renal tubular secretion probenecid interferes with, such as paracetamol, aciclovir, aminosalicylic acid, etc.) are also withheld.
Mechanism of action
Its active metabolite, cidofovir diphosphate, inhibits viral replication by selectively inhibiting viral DNA polymerases. It also inhibits human polymerases, but this action is 8–600 times weaker than its actions on viral DNA polymerases. It also incorporates itself into viral DNA, hence inhibiting viral DNA synthesis during reproduction.
It possesses in vitro activity against the following viruses:
Human herpesviruses
Adenoviruses
Human poxviruses (including the smallpox virus)
Human papillomavirus
History
Cidofovir was discovered at the Institute of Organic Chemistry and Biochemistry, Prague, by Antonín Holý, and developed by Gilead Sciences and is marketed with the brand name Vistide by Gilead in the US, and by Pfizer elsewhere.
Synthesis
Cidofovir can be synthesized from a pyrimidone derivative and a protected derivative of glycidol.
See also
Brincidofovir, a prodrug of cidofovir
References
Gilead Sciences
Anti-herpes virus drugs
Pyrimidones
Amines
Ethers
Phosphonic acids
Primary alcohols | Cidofovir | Chemistry | 1,028 |
140,988 | https://en.wikipedia.org/wiki/Bow%20drill | A bow drill is a simple hand-operated type of tool, consisting of a rod (the spindle or drill shaft) that is set in rapid rotary motion by means of a cord wrapped around it, kept taut by a bow which is pushed back and forth with one hand. This tool of prehistoric origin has been used both as a drill, to make holes on solid materials such as wood, stone, bone, or teeth, and as a fire drill to start a fire.
The spindle can be held in a fixed frame, or by a hand-held block (the hand piece or thimble) with a hole into which the top of the shaft is inserted. Some lubricant should be used to reduce friction between these two parts; otherwise, friction at the top of the spindle will generate unwanted heat and wear there, and resist rotation, when the drill is worked rapidly. A popular campcraft book of 1920 attributed this invention to the Inuit. In Mehrgarh (Pakistan) it has been dated to between the 5th and 4th millennia BCE.
The string of the bow is wrapped once around the spindle, so that it is tight enough not to slip during operation. In a variation called the Egyptian bow drill, the cord is wound around the shaft multiple times, or is fixed to it by a knot or a hole.
The strap drill is a simpler version, where the bow is absent and the cord is kept taut by pulling the ends with both hands, while moving them left and right at the same time. In the absence of a frame, the thimble is shaped so that it can be held with the chin or the mouth.
The bow lathe used for traditional woodturning uses the same principle as the bow drill.
History
Bow drills with green jasper bits were used in Mehrgarh between the 4th and 5th millennium BC to drill holes into lapis lazuli and carnelian. Similar drills were found in other parts of the Indus Valley civilization and Iran one millennium later.
Usage as a fire drill
For use as a fire drill, the shaft should have a blunt end, which is placed into a small cavity of a stationary piece of wood (the fireboard). Turning the shaft with high speed and downward pressure generates heat, which eventually creates powdered charcoal and ignites it forming a small ember.
For drilling, the lower end of the spindle may be fitted with a hard drill bit that creates the hole by abrasion or cutting.
See also
Pump drill
Brace (tool)
References
External links
The Egyptian Bow Drill
Information on constructing and using a hand drill
Mechanical hand tools
Woodworking hand tools
Fire making
Inventions of the Indus Valley Civilisation | Bow drill | Physics | 527 |
43,194,879 | https://en.wikipedia.org/wiki/Geometric%20transformation | In mathematics, a geometric transformation is any bijection of a set to itself (or to another such set) with some salient geometrical underpinning, such as preserving distances, angles, or ratios (scale). More specifically, it is a function whose domain and range are sets of points — most often both R2 or both R3 — such that the function is bijective so that its inverse exists. The study of geometry may be approached by the study of these transformations, such as in transformation geometry.
Classifications
Geometric transformations can be classified by the dimension of their operand sets (thus distinguishing between, say, planar transformations and spatial transformations). They can also be classified according to the properties they preserve:
Displacements preserve distances and oriented angles (e.g., translations);
Isometries preserve angles and distances (e.g., Euclidean transformations);
Similarities preserve angles and ratios between distances (e.g., resizing);
Affine transformations preserve parallelism (e.g., scaling, shear);
Projective transformations preserve collinearity;
Each of these classes contains the previous one.
Möbius transformations using complex coordinates on the plane (as well as circle inversion) preserve the set of all lines and circles, but may interchange lines and circles.
Conformal transformations preserve angles, and are, in the first order, similarities.
Equiareal transformations preserve areas in the planar case or volumes in the three-dimensional case, and are, in the first order, affine transformations of determinant 1.
Homeomorphisms (bicontinuous transformations) preserve the neighborhoods of points.
Diffeomorphisms (bidifferentiable transformations) are the transformations that are affine in the first order; they contain the preceding ones as special cases, and can be further refined.
Transformations of the same type form groups that may be sub-groups of other transformation groups.
Opposite group actions
Many geometric transformations are expressed with linear algebra. The bijective linear transformations are elements of a general linear group. The linear transformation A is non-singular. For a row vector v, the matrix product vA gives another row vector w = vA.
The transpose of a row vector v is a column vector vT, and the transpose of the above equality is wT = (vA)T = ATvT. Here AT provides a left action on column vectors.
In transformation geometry there are compositions AB. Starting with a row vector v, the right action of the composed transformation is w = vAB. After transposition, wT = (vAB)T = BTATvT.
Thus for AB the associated left group action is BTAT. In the study of opposite groups, the distinction is made between opposite group actions because commutative groups are the only groups for which these opposites are equal.
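A minimal numeric check of this transpose relationship (an illustrative sketch; the particular vectors and matrices are arbitrary choices, not part of the source):

```python
import numpy as np

v = np.array([[1.0, 2.0]])           # row vector
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
B = np.array([[2.0, 0.0],
              [0.0, 3.0]])

w = v @ A @ B                        # right action on row vectors: w = vAB

# Transposing reverses the order of the factors: wT = BT AT vT,
# which is the associated left action on column vectors.
w_left = B.T @ A.T @ v.T

assert np.allclose(w.T, w_left)
```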
Active and passive transformations
See also
Coordinate transformation
Erlangen program
Symmetry (geometry)
Motion
Reflection
Rigid transformation
Rotation
Topology
Transformation matrix
References
Further reading
Dienes, Z. P.; Golding, E. W. (1967) . Geometry Through Transformations (3 vols.): Geometry of Distortion, Geometry of Congruence, and Groups and Coordinates. New York: Herder and Herder.
David Gans – Transformations and geometries.
John McCleary (2013) Geometry from a Differentiable Viewpoint, Cambridge University Press
Modenov, P. S.; Parkhomenko, A. S. (1965) . Geometric Transformations (2 vols.): Euclidean and Affine Transformations, and Projective Transformations. New York: Academic Press.
A. N. Pressley – Elementary Differential Geometry.
Yaglom, I. M. (1962, 1968, 1973, 2009) . Geometric Transformations (4 vols.). Random House (I, II & III), MAA (I, II, III & IV).
Geometry
Functions and mappings
Symmetry
Transformation (function) | Geometric transformation | Physics,Mathematics | 769 |
40,502,985 | https://en.wikipedia.org/wiki/Venom%20optimization%20hypothesis | Venom optimization hypothesis, also known as venom metering, is a biological hypothesis which postulates that venomous animals have physiological control over their production and use of venoms. It explains the economic use of venom: because venom is a metabolically expensive product, there is a biological mechanism for controlling its specific use. The hypothesis was proposed by Esther Wigger, Lucia Kuhn-Nentwig, and Wolfgang Nentwig of the Zoological Institute at the University of Bern, Switzerland, in 2002.
A number of venomous animals have been experimentally found to regulate the amount of venom they use during predation or defensive situations. Species of anemones, jellyfish, ants, scorpions, spiders, and snakes are found to use their venoms frugally depending on the situation and size of their preys or predators.
Development
Venom optimization hypothesis was postulated by Wigger, Kuhn-Nentwig, and Nentwig from their studies of the amount of venom used by a wandering spider Cupiennius salei. This spider produces a neurotoxic peptide called CsTx-1 for paralysing its prey. It does not weave webs for trapping preys, and therefore, entirely depends on its venom for predation. It is known to prey on a variety of insects including butterflies, moths, earwigs, cockroaches, flies and grasshoppers. Its venom glands store only about 10 μl of crude venom. Refilling of the glands takes 2–3 days and the lethal efficacy of the venom is, initially, very low for several days, requiring 8 to 18 days for full effect. It was found that the amount of venom released differed for each specific prey. For example, for bigger and stronger insects like beetles, the spider uses the entire amount of its venom; while for small ones, it uses only a small amount, thus economizing its costly venom. In fact, experiments show that the amount of venom released is just sufficient (at the lethal dose) to paralyze the target organism depending on the size or strength, and is not more than what is necessary.
Concept
Animal venoms are complex biomolecules and hence, their biological synthesis require high metabolic activity. A particular venom itself is a complex chemical mixture composed of hundreds of proteins and non-proteinaceous compounds, resulting in a potent weapon for prey immobilization and predator deterrence. The metabolic cost of venom is sufficiently high to result in secondary loss of venom whenever its use becomes non-essential to survival of the animal. This suggests that venomous animals may have evolved strategies for minimizing venom expenditure, that they should use them only as and when required, and that too in optimal amount.
References
Ethology
Toxins
Biology theories | Venom optimization hypothesis | Biology,Environmental_science | 567 |
53,270,578 | https://en.wikipedia.org/wiki/European%20Study%20Groups%20with%20Industry | A European Study Group with Industry (ESGI) is usually a week-long meeting where applied mathematicians work on problems presented by industry and research centres. The aim of the meeting is to solve or at least make progress on the problems.
The study group concept originated in Oxford, in 1968 (initiated by Leslie Fox and Alan Tayler). Subsequently, the format was adopted in other European countries to form ESGIs. Currently, with a variety of names, they appear in the same or a similar format throughout the world. More specific topics have also formed the subject of focussed meetings, such as the environment, medicine and agriculture.
Problems successfully tackled at study groups are discussed in a number of textbooks as well as a collection of case studies, European Success Stories in Industrial Mathematics. A guide for organising and running study groups is provided by the European Consortium for Mathematics in Industry.
European Study Group with Industry
A European Study Group with Industry or ESGI is a type of workshop where mathematicians work on problems presented by industry representatives. The meetings typically last five days, from Monday to Friday. On the Monday morning the industry representatives present problems of current interest to an audience of applied mathematicians. Subsequently, the mathematicians split into working groups to investigate the suggested topics. On the Friday solutions and results are presented back to the industry representative. After the meeting a report is prepared for the company, detailing the progress made and usually with suggestions for further work or experiments.
History
The original Study Groups with Industry started in Oxford in 1968. The format provided a method for initiating interaction between universities and private industry which often led to further collaboration, student projects and new fields of research (many advances in the field of free or moving boundary problems are attributed to the industrial case studies of the 1970s.). Study groups were later adopted in other countries, starting in Europe and then spreading throughout the world. The subject areas have also diversified, for example the Mathematics in Medicine Study Groups, Mathematics in the Plant Sciences Study Groups, the environment, uncertainty quantification and agriculture.
The academics work on the problems for free. The following have been given as motivation for this work:
Discovering new problems and research areas with practical applications.
The possibility of further projects and collaboration with industry.
The opportunity for future funding.
A number of reasons have also been quoted for companies to attend ESGIs:
The possibility of a quick solution to their problem, or at least guidance on a way forward.
Mathematicians can help to identify and correctly formulate a problem for further study.
Access to state-of-the-art techniques.
Building contacts with top researchers in a given field.
ESGIs are currently an activity of the European Consortium for Mathematics in Industry. Their ESGI webpage contains details of European meetings and contact details for prospective industry or academic participants. The current co-ordinator of the ESGIs is Prof. Tim Myers of the Centre de Recerca Matemàtica, Barcelona. Between 2015 and 2019 ESGIs are eligible for funding through the COST network MI-Net (Maths for Industry Network).
List of recent meetings
Past European meetings are listed on the European Consortium for Mathematics in Industry website. International meetings are covered by the Mathematics in Industry Information Service.
Recent ESGIs include:
ESGI 150, Basque Centre for Applied Mathematics, 21–25 October 2019
ESGI 144, Warsaw, 17–22 March 2019
ESGI 145, Cambridge, 8–12 April 2019
ESGI 147, Spain, 8–12 April 2019
ESGI 152, Palanga, Lithuania, 10–14 June 2019
ESGI 155, Polytechnic Institute of Leiria, Portugal, 1–5 July 2019
ESGI 154, University of Southern Denmark, 19–23 August 2019
ESGI 148/SWI 2019, Wageningen, Netherlands, 28 January – 1 February 2019
ESGI 151, Tartu, Estonia, 4–8 February 2019
ESGI 149, Innsbruck, 4–8 March 2019
International study groups
As well as being held throughout Europe, annual study groups take place in Australia, Brazil, Canada, India, New Zealand, United States, Russia, and South Africa. A site dedicated solely to Dutch study groups is maintained separately (Dutch ESGI). Information on past and upcoming meetings throughout the world may be found on the Mathematics in Industry Information Service website.
Literature
There are many books on mathematical modelling, a number of them containing problems arising from ESGIs or other study groups from around the world, examples include:
Practical Applied Mathematics Modelling, Analysis, Approximation
Topics in Industrial Mathematics: Case Studies and Related Mathematical Methods
Industrial Mathematics: A Course in Solving Real-World Problems
The book European Success Stories in Industrial Mathematics contains brief descriptions of a wide variety of industrial mathematics case studies. The Mathematics in Industry Information Service contains a large repository of past reports from study groups throughout the world.
A guide for organising and running study groups, the ESGI Handbook, has been developed by the Mathematics for Industry Network.
References
Applied mathematics
Mathematics education in the United Kingdom | European Study Groups with Industry | Mathematics | 998 |
35,786,644 | https://en.wikipedia.org/wiki/Walter%20Boas%20Medal | The Walter Boas Medal is awarded by the Australian Institute of Physics for research in Physics in Australia. It is named in memory of Walter Boas (1904–1982), an eminent scientist and metallurgist who worked on the physics of metals.
Recipients
Source:
1984 James A. Piper, Macquarie University (inaugural winner)
1985 Peter Hannaford, CSIRO Division of Materials Technology
1986 Donald Melrose, Sydney University
1987 Anthony William Thomas, University of Adelaide
1988 Robert Delbourgo, University of Tasmania
1989 Jim Williams, University of Western Australia
1990 Geoff Opat, University of Melbourne
1990 Tony Klein, University of Melbourne
1991 Parameswaran Hariharan, CSIRO Division of Applied Physics
1992 Bruce Harold John McKellar, University of Melbourne
1993 Jim Williams, Australian National University
1994 No medal awarded
1995 David Blair, University of Western Australia
1996 Andris Stelbovics, Murdoch University
1996 Igor Bray, Flinders University
1997 Keith Nugent, University of Melbourne
1997 Stephen W. Wilkins, CSIRO
1998 Robert Clark, University of NSW
1999 No medal awarded
2000 Hans A. Bachor, Australian National University
2001 Anthony G. Williams, University of Adelaide
2002 Peter Robinson, University of Sydney
2003 Gerard J. Milburn, University of Queensland
2004 George Dracoulis, Australian National University
2005 Yuri Kivshar, Australian National University
2006 Michael Edmund Tobar, The University of Western Australia
2007 Derek Leinweber, University of Adelaide
2008 Peter Drummond, Swinburne University of Technology
2009 Victor Flambaum, University of New South Wales
2010 Kostya Ostrikov, CSIRO
2011 Ben Eggleton, University of Sydney
2012 Lloyd Hollenberg, University of Melbourne
2013 Chennupati Jagadish, Australian National University
2014 Stuart Wyithe, University of Melbourne
2015 Min Gu, Swinburne University of Technology
2016 Geraint F. Lewis, University of Sydney
2017 David McClelland, Australian National University
2018 Elisabetta Barberio, University of Melbourne
2019 Andrea Morello, University of NSW
2020 Joss Bland-Hawthorn, University of Sydney
2021 Howard Wiseman, Griffith University
2022 Susan M. Scott, Australian National University
2023 Mahananda Dasgupta and David John Hinde, Australian National University
See also
List of physics awards
List of prizes named after people
References
Australian science and technology awards
Physics awards
Awards established in 1984 | Walter Boas Medal | Technology | 482 |
60,104,849 | https://en.wikipedia.org/wiki/Bloom%20filters%20in%20bioinformatics | Bloom filters are space-efficient probabilistic data structures used to test whether an element is a part of a set. Bloom filters require much less space than other data structures for representing sets; however, the downside of Bloom filters is that there is a false positive rate when querying the data structure. Since multiple elements may have the same hash values for a number of hash functions, there is a probability that querying for a non-existent element may return a positive if another element with the same hash values has been added to the Bloom filter. Assuming that the hash function has equal probability of selecting any index of the Bloom filter, the false positive rate of querying a Bloom filter is a function of the number of bits, number of hash functions and number of elements of the Bloom filter. This allows the user to manage the risk of getting a false positive by compromising on the space benefits of the Bloom filter.
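Under these assumptions the false-positive rate has a standard closed-form approximation, p ≈ (1 − e^(−kn/m))^k for m bits, k hash functions and n inserted elements, which can be computed directly (a minimal sketch; the example numbers are arbitrary):

```python
import math

def bloom_false_positive_rate(m: int, k: int, n: int) -> float:
    """Approximate false-positive rate of a Bloom filter with m bits,
    k hash functions, and n inserted elements."""
    return (1.0 - math.exp(-k * n / m)) ** k

# Example: an 8-megabit filter holding one million k-mers with 7 hashes.
print(bloom_false_positive_rate(m=8 * 10**6, k=7, n=10**6))  # ~0.02
```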
Bloom filters are primarily used in bioinformatics to test the existence of a k-mer in a sequence or set of sequences. The k-mers of the sequence are indexed in a Bloom filter, and any k-mer of the same size can be queried against the Bloom filter. This is a preferable alternative to hashing the k-mers of a sequence with a hash table, particularly when the sequence is very long, since it is very demanding to store large numbers of k-mers in memory.
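A minimal sketch of this k-mer indexing, using a toy Bloom filter built on Python's hashlib (illustrative only; production tools use packed bit arrays and fast non-cryptographic hash families):

```python
import hashlib

class KmerBloomFilter:
    def __init__(self, num_bits: int, num_hashes: int):
        self.m, self.k = num_bits, num_hashes
        self.bits = bytearray(num_bits)   # one byte per bit, for simplicity

    def _indices(self, kmer: str):
        # Derive k indices by salting a single hash function.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{kmer}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, kmer: str):
        for idx in self._indices(kmer):
            self.bits[idx] = 1

    def __contains__(self, kmer: str) -> bool:
        # May return True for an absent k-mer (false positive), but never
        # False for a k-mer that was added.
        return all(self.bits[idx] for idx in self._indices(kmer))

# Index all k-mers of a sequence, then query.
seq, k = "ACGTACGGAC", 3
bf = KmerBloomFilter(num_bits=1024, num_hashes=3)
for i in range(len(seq) - k + 1):
    bf.add(seq[i:i + k])
print("CGT" in bf, "AAA" in bf)   # True, and very likely False
```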
Applications
Sequence characterization
The preprocessing step in many bioinformatics applications involves classifying sequences, primarily classifying reads from a DNA sequencing experiment. For example, in metagenomic studies it is important to be able to tell if a sequencing read belongs to a new species. and in clinical sequencing projects it is vital to filter out reads from the genomes of contaminating organisms. There are many bioinformatics tools that use Bloom filters to classify reads by querying k-mers of a read to a set of Bloom filters generated from known reference genomes. Some tools that use this method are FACS and BioBloom tools. While these methods may not outclass other bioinformatics classification tools like Kraken, they offer a memory-efficient alternative.
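A schematic of that screening step (a sketch reusing the KmerBloomFilter above; classify_read and its scoring rule are illustrative simplifications, not the actual FACS or BioBloom algorithms):

```python
def classify_read(read: str, k: int, filters: dict) -> str:
    """Assign a read to the reference genome whose Bloom filter
    contains the largest fraction of the read's k-mers."""
    kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
    scores = {name: sum(km in bf for km in kmers) / len(kmers)
              for name, bf in filters.items()}
    return max(scores, key=scores.get)
```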
A recent area of research with Bloom filters in sequence characterization is in developing ways to query raw reads from sequencing experiments. For example, how can one determine which reads contain a specific 30-mer in the entire NCBI Sequence Read Archive? This task is similar to that which is accomplished by BLAST, however it involves querying a much larger dataset; while BLAST queries against a database of reference genomes, this task demands that specific reads that contain the k-mer are returned. BLAST and similar tools cannot handle this problem efficiently, therefore Bloom filter based data structures have been implemented to this end. Binary bloom trees are binary trees of Bloom filters that facilitates querying transcripts in large RNA-seq experiments. BIGSI borrows bitsliced signatures from the field of document retrieval to index and query the entirety of microbial and viral sequencing data in the European Nucleotide Archive. The signature of a given dataset is encoded as a set of Bloom filters from that dataset.
Genome assembly
The memory efficiency of Bloom filters has been used in genome assembly as a way to reduce the space footprint of k-mers from sequencing data. The contribution of Bloom filter based assembly methods is combining Bloom filters and de Bruijn graphs into a structure called a probabilistic de Bruijn graph, which optimizes memory usage at the cost of the false positive rate inherent to Bloom filters. Instead of storing the de Bruijn graph in a hash table, it is stored in a Bloom filter.
Using a Bloom filter to store the de Bruijn graph complicates the graph traversal step to build the assembly, since edge information is not encoded in the Bloom filter. Graph traversal is accomplished by querying the Bloom filter for any of the four possible subsequent k-mers from the current node. For example, if the current node is for the k-mer ACT, then the next node must be for one of the k-mers CTA, CTG, CTC or CTT. If a query k-mer exists in the Bloom filter, then the k-mer is added to the path. Therefore, there are two sources for false positives in querying the Bloom filter when traversing the de Bruijn graph. There is the probability that one or more of the three false k-mers exist elsewhere in the sequencing set to return a false positive, and there is the aforementioned inherent false positive rate of the Bloom filter itself. The assembly tools that use Bloom filters must account for these sources of false positives in their methods. ABySS 2 and Minia are examples of assemblers that uses this approach for de novo assembly.
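A sketch of that traversal step (extend_path is a hypothetical helper over the toy filter above; real assemblers must additionally guard against the false-positive branches just described):

```python
def extend_path(bf, start_kmer: str, max_steps: int = 1000) -> str:
    """Greedily walk a probabilistic de Bruijn graph stored in a Bloom
    filter, extending the path while the successor k-mer is unique."""
    path = start_kmer
    k = len(start_kmer)
    for _ in range(max_steps):
        suffix = path[-(k - 1):]
        successors = [suffix + b for b in "ACGT" if (suffix + b) in bf]
        if len(successors) != 1:      # stop at tips, branches, and FP forks
            break
        path += successors[0][-1]
    return path
```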
Sequencing error correction
Next-generation sequencing (NGS) methods have allowed the generation of new genome sequences much faster and cheaper than the previous Sanger sequencing methods. However, these methods have a higher error rate, which complicates downstream analysis of the sequence and can even give rise to erroneous conclusions. Many methods have been developed to correct the errors in NGS reads, but they use large amounts of memory, which makes them impractical for large genomes, such as the human genome. Therefore, tools using Bloom filters have been developed to address these limitations, taking advantage of their efficient memory usage. Musket and BLESS are examples of such tools. Both methods use the k-mer spectrum approach for error correction. The first step of this approach is to count the multiplicity of k-mers, however while BLESS only uses Bloom filters to store the counts, Musket uses Bloom filters only to count unique k-mers, and stores non-unique k-mers in a hash table, as described in a previous work.
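A simplified sketch of that unique-k-mer screen (count_kmers is an illustrative reduction of the idea, not Musket's actual implementation): a Bloom filter absorbs first sightings, so only repeated k-mers reach the exact, memory-hungry hash table.

```python
from collections import defaultdict

def count_kmers(reads, k, bf):
    """Count repeated k-mers exactly; unique k-mers stay in the Bloom
    filter only (counts record occurrences after the first)."""
    counts = defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            if kmer in bf:            # seen before, or a false positive
                counts[kmer] += 1
            else:
                bf.add(kmer)          # first sighting: Bloom filter only
    return counts
```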
RNA-Seq
Bloom filters are also employed in some RNA-Seq pipelines. RNA-Skim clusters RNA transcripts and then uses Bloom filters to find sig-mers: k-mers that are only found in one of the clusters. These sig-mers are then used to estimate the transcript abundance levels. Therefore, it does not analyze every possible k-mer which results in performance and memory-usage improvements, and has been shown to work as well as previous methods.
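Conceptually, sig-mer selection reduces to a per-cluster set difference (find_sig_mers is an illustrative sketch of the concept; RNA-Skim's actual selection and Bloom-filter storage are more involved):

```python
def find_sig_mers(cluster_kmers: dict) -> dict:
    """For each cluster, keep only the k-mers that occur in no other
    cluster (that cluster's sig-mers)."""
    sig = {}
    for name, kmers in cluster_kmers.items():
        others = set().union(*(s for n, s in cluster_kmers.items() if n != name))
        sig[name] = set(kmers) - others
    return sig
```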
References
Bioinformatics | Bloom filters in bioinformatics | Engineering,Biology | 1,312 |
42,657,576 | https://en.wikipedia.org/wiki/Christianity%20and%20science | Most scientific and technical innovations prior to the Scientific Revolution were achieved by societies organized by religious traditions. Ancient Christian scholars pioneered individual elements of the scientific method. Historically, Christianity has been and still is a patron of sciences. It has been prolific in the foundation of schools, universities and hospitals, and many Christian clergy have been active in the sciences and have made significant contributions to the development of science.
Historians of science such as Pierre Duhem credit medieval Catholic mathematicians and philosophers such as John Buridan, Nicole Oresme and Roger Bacon as the founders of modern science. Duhem concluded that "the mechanics and physics of which modern times are justifiably proud proceed, by an uninterrupted series of scarcely perceptible improvements, from doctrines professed in the heart of the medieval schools". Many of the most distinguished classical scholars in the Byzantine Empire held high office in the Eastern Orthodox Church. Protestantism has had an important influence on science: according to the Merton Thesis, there was a positive correlation between the rise of English Puritanism and German Pietism on the one hand, and early experimental science on the other.
Christian scholars and scientists have made noted contributions to science and technology fields, as well as medicine, both historically and in modern times. Some scholars state that Christianity contributed to the rise of the Scientific Revolution. Between 1901 and 2001, about 56.5% of Nobel prize laureates in scientific fields were Christians, and 26% were of Jewish descent (including Jewish atheists).
Events in Christian Europe, such as the Galileo affair, that were associated with the Scientific Revolution and the Age of Enlightenment led some scholars such as John William Draper to postulate a conflict thesis, holding that religion and science have been in conflict throughout history. While the conflict thesis remains popular in atheistic and antireligious circles, it has lost favor among most contemporary historians of science. Most contemporary historians of science believe the Galileo affair is an exception in the overall relationship between science and Christianity and have also corrected numerous false interpretations of this event.
Overview
Most sources of knowledge available to the early Christians were connected to pagan worldviews as the early Christians largely lived among pagans. There were various opinions on how Christianity should regard pagan learning, which included its ideas about nature. For instance, among early Christian teachers, Tertullian (c. 160–220) held a generally negative opinion of Greek philosophy, while Origen (c. 185–254) regarded it much more favourably and required his students to read nearly every work available to them.
Earlier attempts at reconciliation of Christianity with Newtonian mechanics appear quite different from later attempts at reconciliation with the newer scientific ideas of evolution or relativity. Many early interpretations of evolution polarized themselves around a struggle for existence. These ideas were significantly countered by later findings of universal patterns of biological cooperation. According to John Habgood, all man really knows here is that the universe seems to be a mix of good and evil, beauty and pain, and that suffering may somehow be part of the process of creation. Habgood holds that Christians should not be surprised that suffering may be used creatively by God, given their faith in the symbol of the Cross. Robert John Russell has examined consonance and dissonance between modern physics, evolutionary biology, and Christian theology.
Christian philosophers Augustine of Hippo (354–430) and Thomas Aquinas held that scriptures can have multiple interpretations on certain areas where the matters were far beyond their reach, therefore one should leave room for future findings to shed light on the meanings. Augustine argued: "Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of this world, about the motion and orbit of the stars ... Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking nonsense on these topics; and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn. The shame is not so much that an ignorant individual is derided, but that people outside the household of the faith think our sacred writers held such opinions, and, to the great loss of those for whose salvation we toil, the writers of our Scripture are criticized and rejected as unlearned men."
The "Handmaiden" tradition, which saw secular studies of the universe as a very important and helpful part of arriving at a better understanding of scripture, was adopted throughout Christian history from early on. Also, the sense that God created the world as a self-operating system is what motivated many Christians throughout the Middle Ages to investigate nature.
The Byzantine Empire was one of the peaks in Christian history and Christian civilization, and Constantinople remained the leading city of the Christian world in size, wealth, and culture. There was a renewed interest in classical Greek philosophy, as well as an increase in literary output in vernacular Greek. Byzantine science played an important role in the transmission of classical knowledge to the Islamic world and to Renaissance Italy, and also in the transmission of Islamic science to Renaissance Italy. Many of the most distinguished classical scholars held high office in the Eastern Orthodox Church.
Modern historians of science such as J.L. Heilbron, Alistair Cameron Crombie, David Lindberg, Edward Grant, Thomas Goldstein, and Ted Davis have reviewed the popular notion that medieval Christianity was a negative influence in the development of civilization and science. In their views, not only did the monks save and cultivate the remnants of ancient civilization during the barbarian invasions, but the medieval church promoted learning and science through its sponsorship of many universities which, under its leadership, grew rapidly in Europe in the eleventh and twelfth centuries. St. Thomas Aquinas, the Church's "model theologian", not only argued that reason is in harmony with faith, he even recognized that reason can contribute to understanding revelation, and so encouraged intellectual development. He was not unlike other medieval theologians who sought out reason in the effort to defend his faith. Some of today's scholars, such as Stanley Jaki, have claimed that Christianity, with its particular worldview, was a crucial factor for the emergence of modern science. According to professor Noah J Efron, virtually all modern scholars and historians agree that Christianity moved many early-modern intellectuals to study nature systematically.
David C. Lindberg states that the widespread popular belief that the Middle Ages was a time of ignorance and superstition due to the Christian church is a "caricature". According to Lindberg, while there are some portions of the classical tradition which suggest this view, these were exceptional cases. It was common to tolerate and encourage critical thinking about the nature of the world. The relation between Christianity and science is complex and cannot be simplified to either harmony or conflict, according to Lindberg. Lindberg reports that "the late medieval scholar rarely experienced the coercive power of the church and would have regarded himself as free (particularly in the natural sciences) to follow reason and observation wherever they led. There was no warfare between science and the church." Ted Peters in Encyclopedia of Religion writes that although there is some truth in the "Galileo's condemnation" story, through exaggerations it has now become "a modern myth perpetuated by those wishing to see warfare between science and religion who were allegedly persecuted by an atavistic and dogma-bound ecclesiastical authority". In 1992, the Catholic Church's seeming vindication of Galileo attracted much comment in the media.
A degree of concord between science and religion can be seen in religious belief and empirical science. The belief that God created the world and therefore humans, can lead to the view that he arranged for humans to know the world. This is underwritten by the doctrine of imago dei. In the words of Thomas Aquinas, "Since human beings are said to be in the image of God in virtue of their having a nature that includes an intellect, such a nature is most in the image of God in virtue of being most able to imitate God".
During the Enlightenment, a period "characterized by dramatic revolutions in science" and the rise of Protestant challenges to the authority of the Catholic Church via individual liberty, the authority of Christian scriptures became strongly challenged. As science advanced, acceptance of a literal version of the Bible became "increasingly untenable" and some in that period presented ways of interpreting scripture according to its spirit while upholding its authority and truth.
Regarding the distribution of Nobel Prizes by religion between 1901 and 2000, data taken from Baruch A. Shalev show that 654 laureates belong to 28 different religions. 65.4% have identified Christianity in its various forms as their religious preference. Overall, Christians have won a total of 78.3% of all the Nobel Prizes in Peace, 72.5% in Chemistry, 65.3% in Physics, 62% in Medicine, 54% in Economics and 49.5% of all Literature awards.
History
Roots of scientific revolution
Between 1150 and 1200, Christian scholars had traveled to Sicily and Spain to retrieve the writings of Aristotle, which had been lost to the West after the Fall of the Roman Empire. This produced a period of cultural ferment that one "modern historian has called the twelfth century renaissance". Thomas Aquinas responded by writing his monumental summas in support of human reason as compatible with faith. Christian theology adapted to Aristotle's secular and humanistic natural philosophy. By the Late Middle Ages, Aquinas's rationalism was being heatedly debated in the new universities. William Ockham resolved the conflict by arguing that faith and reason should be pursued separately so that each could achieve its own end. Historians of science David C. Lindberg, Ronald Numbers and Edward Grant have described what followed as a "medieval scientific revival". Science historian Noah Efron has written that Christianity provided the early "tenets, methods, and institutions of what in time became modern science".
Modern western universities have their origins directly in the Medieval Church. They began as cathedral schools, and all students were considered clerics. This was a benefit as it placed the students under ecclesiastical jurisdiction and thus imparted certain legal immunities and protections. The cathedral schools eventually became partially detached from the cathedrals and formed their own institutions, the earliest being the University of Bologna (1088), the University of Oxford (1096), and the University of Paris (c. 1150).
Some scholars have noted a direct tie between "particular aspects of traditional Christianity" and the rise of science. Other scholars and historians have credited Christianity with laying the foundation for the Scientific Revolution. According to Robert K. Merton, the values of English Puritanism and German Pietism led to the scientific revolution of the 17th and 18th centuries. (The Merton Thesis is both widely accepted and disputed.) Merton explained that the connection between religious affiliation and interest in science was the result of a significant synergy between the ascetic Protestant values and those of modern science.
Influence of biblical worldviews on early modern science
At first, according to Andrew Dickson White's 1896 book A History of the Warfare of Science with Theology in Christendom, a biblical worldview negatively affected the progress of science through time. White also argues that immediately following the Reformation matters were even worse. The interpretations of Scripture by Luther and Calvin became as sacred to their followers as the Scripture itself. For instance, when Georg Calixtus ventured, in interpreting the Psalms, to question the accepted belief that "the waters above the heavens" were contained in a vast receptacle upheld by a solid vault, he was bitterly denounced as heretical. Today, much of the scholarship on which the conflict thesis was originally based is considered to be inaccurate. For instance, the claim that early Christians rejected scientific findings by the Greco-Romans is false, since the "handmaiden" view of secular studies was seen to shed light on theology. This view was widely adopted throughout the early medieval period and afterwards by theologians (such as Augustine) and ultimately resulted in fostering interest in knowledge about nature through time. Also, the claim that people of the Middle Ages widely believed that the Earth was flat was first propagated in the same period that originated the conflict thesis and is still very common in popular culture. Modern scholars regard this claim as mistaken, as the contemporary historians of science David C. Lindberg and Ronald L. Numbers write: "there was scarcely a Christian scholar of the Middle Ages who did not acknowledge [earth's] sphericity and even know its approximate circumference." From the fall of Rome to the time of Columbus, all major scholars and many vernacular writers interested in the physical shape of the earth held a spherical view with the exception of Lactantius and Cosmas.
H. Floris Cohen argued for a biblical Protestant, but not excluding Catholicism, influence on the early development of modern science. He presented Dutch historian R. Hooykaas' argument that a biblical world-view holds all the necessary antidotes for the hubris of Greek rationalism: a respect for manual labour, leading to more experimentation and empiricism, and a supreme God that left nature open to emulation and manipulation. It supports the idea that early modern science rose due to a combination of Greek and biblical thought.
Oxford historian Peter Harrison is another who has argued that a Biblical worldview was significant for the development of modern science. Harrison contends that Protestant approaches to the book of scripture had significant, if largely unintended, consequences for the interpretation of the book of nature. Harrison has also suggested that literal readings of the Genesis narratives of the Creation and Fall motivated and legitimated scientific activity in seventeenth-century England. For many of its seventeenth-century practitioners, science was imagined to be a means of restoring a human dominion over nature that had been lost as a consequence of the Fall.
Historian and professor of religion Eugene M. Klaaren holds that "a belief in divine creation" was central to an emergence of science in seventeenth-century England. The philosopher Michael Foster has published analytical philosophy connecting Christian doctrines of creation with empiricism. Historian William B. Ashworth has argued against the historical notion of distinctive mind-sets and the idea of Catholic and Protestant sciences. Historians James R. Jacob and Margaret C. Jacob have argued for a linkage between seventeenth-century Anglican intellectual transformations and influential English scientists (e.g., Robert Boyle and Isaac Newton). John Dillenberger and Christopher B. Kaiser have written theological surveys, which also cover additional interactions occurring in the eighteenth, nineteenth, and twentieth centuries. Philosopher of religion Richard Jones has written a philosophical critique of the "dependency thesis", which assumes that modern science emerged from Christian sources and doctrines. Though he acknowledges that modern science emerged in a religious framework, that Christianity greatly elevated the importance of science by sanctioning and religiously legitimizing it in the medieval period, and that Christianity created a favorable social context for it to grow, he argues that direct Christian beliefs or doctrines were not the primary source of scientific pursuits by natural philosophers, nor was Christianity, in and of itself, exclusively or directly necessary in developing or practicing modern science.
Oxford University historian and theologian John Hedley Brooke wrote that "when natural philosophers referred to laws of nature, they were not glibly choosing that metaphor. Laws were the result of legislation by an intelligent deity. Thus, the philosopher René Descartes (1596–1650) insisted that he was discovering the "laws that God has put into nature." Later Newton would declare that the regulation of the solar system presupposed the "counsel and dominion of an intelligent and powerful Being." Historian Ronald L. Numbers stated that this thesis "received a boost" from mathematician and philosopher Alfred North Whitehead's Science and the Modern World (1925). Numbers has also argued, "Despite the manifest shortcomings of the claim that Christianity gave birth to science—most glaringly, it ignores or minimizes the contributions of ancient Greeks and medieval Muslims—it too, refuses to succumb to the death it deserves." The sociologist Rodney Stark of Baylor University, argued in contrast that "Christian theology was essential for the rise of science."
Reconciliation in Britain in the early 20th century
In Reconciling Science and Religion: The Debate in Early-twentieth-century Britain, historian of biology Peter J. Bowler argues that in contrast to the conflicts between science and religion in the U.S. in the 1920s (most famously the Scopes Trial), during this period Great Britain experienced a concerted effort at reconciliation, championed by intellectually conservative scientists, supported by liberal theologians but opposed by younger scientists and secularists and conservative Christians. These attempts at reconciliation fell apart in the 1930s due to increased social tensions, moves towards neo-orthodox theology and the acceptance of the modern evolutionary synthesis.
In the twentieth century, several ecumenical organizations promoting a harmony between science and Christianity were founded, most notably the American Scientific Affiliation, The Biologos Foundation, Christians in Science, The Society of Ordained Scientists, and The Veritas Forum.
Branches of Christianity
Catholicism
While refined and clarified over the centuries, the Catholic position on the relationship between science and religion is one of harmony and has maintained the teaching of natural law as set forth by Thomas Aquinas. For example, regarding scientific study such as that of evolution, the church's unofficial position is an example of theistic evolution, stating that faith and scientific findings regarding human evolution are not in conflict, though humans are regarded as a special creation, and that the existence of God is required to explain both monogenism and the spiritual component of human origins. Catholic schools have included all manner of scientific study in their curriculum for many centuries. Historian John Heilbron says that "The Roman Catholic Church gave more financial and social support to the study of astronomy for over six centuries, from the recovery of ancient learning during the late Middle Ages into the Enlightenment, than any other, and probably all, other institutions."
The first universities in Europe were established by Catholic Church monks. The first Western European institutions generally considered to be universities were established in present-day Italy (including the Kingdom of Sicily, the Kingdom of Naples, and the Kingdom of Italy), the Kingdom of England, the Kingdom of France, the Holy Roman Empire, the Kingdom of Spain, the Kingdom of Portugal and the Kingdom of Scotland between the 11th and 15th centuries for the study of the arts and the higher disciplines of theology, law, and medicine. These universities evolved from much older Christian cathedral schools and monastic schools, and it is difficult to define the exact date when they became true universities, though the lists of studia generalia for higher education in Europe held by the Vatican are a useful guide.
Galileo once stated "The intention of the Holy Spirit is to teach us how to go to heaven, not how the heavens go." In 1981, John Paul II, then pope of the Catholic Church, spoke of the relationship this way: "The Bible itself speaks to us of the origin of the universe and its make-up, not in order to provide us with a scientific treatise, but in order to state the correct relationships of Man with God and with the universe. Sacred Scripture wishes simply to declare that the world was created by God, and in order to teach this truth it expresses itself in the terms of the cosmology in use at the time of the writer". The influence of the Church on Western letters and learning has been formidable. The ancient texts of the Bible have deeply influenced Western art, literature and culture. For centuries following the collapse of the Western Roman Empire, small monastic communities were practically the only outposts of literacy in Western Europe. In time, the cathedral schools developed into Europe's earliest universities and the church has established thousands of primary, secondary and tertiary institutions throughout the world in the centuries since. The Church and clergymen have also sought at different times to censor texts and scholars. Thus, different schools of opinion exist as to the role and influence of the Church in relation to western letters and learning.
One view, first propounded by Enlightenment philosophers, asserts that the Church's doctrines are entirely superstitious and have hindered the progress of civilization. Communist states have made similar arguments in their education in order to inculcate a negative view of Catholicism (and religion in general) in their citizens. The most famous incidents cited by such critics are narratives of the Church in relation to Copernicus, Galileo Galilei and Johannes Kepler.
In opposition to this view, some historians of science, including non-Catholics such as J.L. Heilbron, A.C. Crombie, David Lindberg, Edward Grant, Thomas Goldstein, and Ted Davis, have argued that the Church had a significant, positive influence on the development of Western civilization. They hold that, not only did monks save and cultivate the remnants of ancient civilization during the barbarian invasions, but that the Church promoted learning and science through its sponsorship of many universities which, under its leadership, grew rapidly in Europe in the eleventh and twelfth centuries. St.Thomas Aquinas, the Church's "model theologian," argued that reason is in harmony with faith, and that reason can contribute to a deeper understanding of revelation, and so encouraged intellectual development. The Church's priest-scientists, many of whom were Jesuits, have been among the leading lights in astronomy, genetics, geomagnetism, meteorology, seismology, and solar physics, becoming some of the "fathers" of these sciences. Examples include important churchmen such as the Augustinian abbot Gregor Mendel (pioneer in the study of genetics), Roger Bacon (a Franciscan friar who was one of the early advocates of the scientific method), and Belgian priest Georges Lemaître (the first to propose the Big Bang theory; see Religious interpretations of the Big Bang theory). Other notable priest scientists have included Albertus Magnus, Robert Grosseteste, Nicholas Steno, Francesco Grimaldi, Giambattista Riccioli, Roger Boscovich, and Athanasius Kircher. Even more numerous are Catholic laity involved in science: Henri Becquerel who discovered radioactivity; Galvani, Volta, Ampere, Marconi, pioneers in electricity and telecommunications; Lavoisier, "father of modern chemistry"; Vesalius, founder of modern human anatomy; and Cauchy, one of the mathematicians who laid the rigorous foundations of calculus.
Throughout history many Catholic clerics have made significant contributions to science. These cleric-scientists include Nicolaus Copernicus, Gregor Mendel, Georges Lemaître, Albertus Magnus, Roger Bacon, Pierre Gassendi, Roger Joseph Boscovich, Marin Mersenne, Bernard Bolzano, Francesco Maria Grimaldi, Nicole Oresme, Jean Buridan, Robert Grosseteste, Christopher Clavius, Nicolas Steno, Athanasius Kircher, Giovanni Battista Riccioli, William of Ockham, and others. The Catholic Church has also produced many lay scientists and mathematicians.
Cistercian in science
The Catholic Cistercian order used its own numbering system, which could express numbers from 0 to 9999 in a single sign. According to one modern Cistercian, "enterprise and entrepreneurial spirit" have always been a part of the order's identity, and the Cistercians "were catalysts for development of a market economy" in twelfth-century Europe. Until the Industrial Revolution, most of the technological advances in Europe were made in the monasteries. According to the medievalist Jean Gimpel, their high level of industrial technology facilitated the diffusion of new techniques: "Every monastery had a model factory, often as large as the church and only several feet away, and waterpower drove the machinery of the various industries located on its floor." Waterpower was used for crushing wheat, sieving flour, fulling cloth and tanning – a "level of technological achievement [that] could have been observed in practically all" of the Cistercian monasteries.
The English science historian James Burke examines the impact of Cistercian waterpower, derived from Roman watermill technology such as that of Barbegal aqueduct and mill near Arles in the fourth of his ten-part Connections TV series, called "Faith in Numbers". The Cistercians made major contributions to culture and technology in medieval Europe: Cistercian architecture is considered one of the most beautiful styles of medieval architecture; and the Cistercians were the main force of technological diffusion in fields such as agriculture and hydraulic engineering.
Jesuits in science
Between the sixteenth and eighteenth centuries, the teaching of science in Jesuit schools, as laid down in the Ratio atque Institutio Studiorum Societatis Iesu ("The Official Plan of studies for the Society of Jesus") of 1599, was almost entirely based on the works of Aristotle.
The Jesuits, nevertheless, have made numerous significant contributions to the development of science. For example, the Jesuits have dedicated significant study to earthquakes, and seismology has been described as "the Jesuit science". The Jesuits have been described as "the single most important contributor to experimental physics in the seventeenth century". According to Jonathan Wright in his book God's Soldiers, by the eighteenth century the Jesuits had "contributed to the development of pendulum clocks, pantographs, barometers, reflecting telescopes and microscopes, to scientific fields as various as magnetism, optics and electricity. They observed, in some cases before anyone else, the colored bands on Jupiter's surface, the Andromeda nebula and Saturn's rings. They theorized about the circulation of the blood (independently of Harvey), the theoretical possibility of flight, the way the moon affected the tides, and the wave-like nature of light."
The Jesuit China missions of the sixteenth and seventeenth centuries introduced Western science and astronomy, then undergoing its own revolution, to China. One modern historian writes that in late Ming courts, the Jesuits were "regarded as impressive especially for their knowledge of astronomy, calendar-making, mathematics, hydraulics, and geography". The Society of Jesus introduced, according to Thomas Woods, "a substantial body of scientific knowledge and a vast array of mental tools for understanding the physical universe, including the Euclidean geometry that made planetary motion comprehensible". Another expert quoted by Woods said the scientific revolution brought by the Jesuits coincided with a time when science was at a very low level in China.
The missionary efforts and other work of the Society of Jesus, or Jesuits, between the 16th and 17th century played a significant role in continuing the transmission of knowledge, science, and culture between China and the West, and influenced Christian culture in Chinese society today.
Protestant influence
Protestantism has promoted economic growth and entrepreneurship, especially in the period after the Scientific and the Industrial Revolution. Scholars have identified a positive correlation between the rise of Protestantism and human capital formation, work ethic, economic development, and the development of the state system.
Protestantism had an important influence on science. According to the Merton thesis, there was a positive correlation between the rise of Puritanism and Protestant Pietism on the one hand and early experimental science on the other. The Merton thesis has two separate parts: firstly, it presents a theory that science changes due to an accumulation of observations and improvement in experimental techniques and methodology; secondly, it puts forward the argument that the popularity of science in seventeenth-century England and the religious demography of the Royal Society (English scientists of that time were predominantly Puritans or other Protestants) can be explained by a correlation between Protestantism and the scientific values. In his theory, Robert K. Merton focused on English Puritanism and German Pietism as having been responsible for the development of the Scientific Revolution of the seventeenth and eighteenth centuries. Merton explained that the connection between religious affiliation and interest in science was the result of a significant synergy between the ascetic Protestant values and those of modern science. Protestant values encouraged scientific research by allowing science to study God's influence on the world and thus providing a religious justification for scientific research.
According to Scientific Elite: Nobel Laureates in the United States by Harriet Zuckerman, a review of American Nobel Prize winners awarded between 1901 and 1972, 72% of American Nobel Prize laureates identified as being from a Protestant background. Overall, Americans of Protestant background won a total of 84.2% of all awarded Nobel Prizes in Chemistry, 60% in Medicine, and 58.6% in Physics between 1901 and 1972.
Some of the first colleges and universities in America, including Harvard, Yale, Princeton, Columbia, Dartmouth, Pennsylvania, Duke, Boston, Williams, Bowdoin, Middlebury, and Amherst, all were founded by mainline Protestant denominations.
Quakers in science
The Religious Society of Friends, commonly known as Quakers, held some values which may have been conducive to the development of scientific talent. A theory suggested by David Hackett Fischer in his book Albion's Seed indicated that early Quakers in the US preferred "practical study" to the more traditional studies of Greek or Latin popular with the elite. Another theory suggests their avoidance of dogma or clergy gave them a greater flexibility in response to science.
Despite those arguments a major factor is agreed to be that the Quakers were initially discouraged or forbidden to go to the major law or humanities schools in Britain due to the Test Act. They also at times faced similar discriminations in the United States, as many of the colonial universities had a Puritan or Anglican orientation. This led them to attend "Godless" institutions or forced them to rely on hands-on scientific experimentation rather than academia.
Because of these issues, it has been stated that Quakers are better represented in science than most religions. Some sources, including Pendle Hill and the Encyclopædia Britannica, indicate that for over two centuries they were overrepresented in the Royal Society. Mention is made of this possibility in studies of religiosity and intelligence and in a book by Arthur Raistrick. Whether or not this is still accurate, there have been several noteworthy members of this denomination in science.
Eastern Christian influence
Christian scientists and scholars (particularly Nestorian and Jacobite Christians) contributed to the Arab Islamic Civilization during the Umayyad and the Abbasid periods by translating works of Greek philosophers to Syriac and afterwards to Arabic. Over a century and a half, Oriental Syriac Christian scholars, primarily from the Middle East, translated all the scientific and philosophic Greek texts into Arabic at the House of Wisdom. They also excelled in philosophy, science (Masawaiyh, Eutychius of Alexandria, and Jabril ibn Bukhtishu) and theology (such as Tatian, Bardaisan, Babai the Great, Nestorius, and Thomas of Marga), and the personal physicians of the Abbasid Caliphs were often Christians, such as the long-serving Bukhtishu dynasty. Many scholars of the House of Wisdom were of Assyrian Christian background.
Among the Copts in Egypt, every monastery and probably every church once had its own library of manuscripts.
In the fifth century AD, nine Christian Syrian monks translated Greek, Hebrew, and Syriac works into the Ethiopian language of Ge'ez and organized Christian monastic orders and schools, some of which are still in existence today. By the sixth century AD, Assyrian Christians had begun exporting back to the Byzantine Empire their own works on science, philosophy and medicine. The literary output of the Assyrians was vast. The third largest corpus of Christian writing, after Latin and Greek, is by the Assyrians in the Assyrian language. In the field of medicine, the Assyrian Bukhtishu family produced nine generations of physicians, and founded the great medical school at Gundeshapur in Iran. When Abbasid Caliph al-Mansur became ill and no physician in Baghdad could cure him, he sent for the dean of the medical school in Gundeshapur, which was renowned as the best of its time. The Assyrian philosopher Job of Edessa developed a physical theory of the universe, in the Assyrian language, that rivaled Aristotle's theory, and that sought to replace matter with forces (a theory that anticipated some ideas in quantum mechanics, such as the spontaneous creation and destruction of matter that occurs in the quantum vacuum). One of the greatest Assyrian achievements of the fourth century was the founding of one of the oldest universities in the world, the School of Nisibis, which had three departments, theology, philosophy and medicine, and which became a magnet and center of intellectual development in the Middle East. The statutes of the School of Nisibis, which have been preserved, later became the model upon which the first Italian university was based. The first Mongolian writing system (which was first set down by Assyrian monks) used the Assyrian Aramaic and Syriac alphabets, with the name "Tora Bora" being an Assyrian phrase meaning "arid mountain." The hierarchical structure of Buddhism is modeled after the Church of the East. The Assyrian Christian Stephanos translated the work of Greek physician Pedanius Dioscorides into the Arabic language, and for over a century, this translated medical text was used by the Muslim states.
In the field of Optics, Nestorian Christian Hunayn ibn-Ishaq's textbook on ophthalmology called the Ten Treatises on the Eye, which was written in 950 A.D., remained the authoritative source on the subject in the western world until the 1800s.
It was a Christian scholar and Bishop from Nisibis named Severus Sebokht who was the first to describe and incorporate Indian mathematical symbols in the mid 7th century, which were then adopted into Islamic culture and are now known as the Arabic numerals.
During the fourth through the seventh centuries, scholarly work in the Syriac and Greek languages was either newly initiated, or carried on from the Hellenistic period. Centers of learning and of transmission of classical wisdom included colleges such as the School of Nisibis, and later the School of Edessa, and the renowned hospital and medical academy of Jundishapur; libraries included the Library of Alexandria and the Imperial Library of Constantinople; other centers of translation and learning functioned at Merv, Salonika, Nishapur and Ctesiphon, situated just south of what later became Baghdad. The House of Wisdom was a library, translation institute, and academy established in Abbasid-era Baghdad, Iraq. Nestorians played a prominent role in the formation of Arab culture, with the Jundishapur school being prominent in the late Sassanid, Umayyad and early Abbasid periods. The distinguished historian of science George Sarton called Jundishapur "the greatest intellectual center of the time." Notably, eight generations of the Nestorian Bukhtishu family served as private doctors to caliphs and sultans between the eighth and eleventh centuries.
The common and persistent myth claiming that Islamic scholars "saved" the classical work of Aristotle and other Greek philosophers from destruction and then graciously passed it on to Europe is baseless. According to the myth, these works would otherwise have perished in the long European Dark Age between the fifth and tenth centuries. Ancient Greek texts and Greek culture were never "lost" to be somehow "recovered" and "transmitted" by Islamic scholars, as many keep claiming: the texts were always there, preserved and studied by the scholars and monks of the Byzantines and passed on to the rest of Europe and to the Islamic world at various times. Aristotle had been translated in France at the abbey of Mont Saint-Michel before the translations of Aristotle made from Arabic (via the Syriac of the Christian scholars from the conquered lands of the Byzantine Empire) reached Europe.
Historian John Julius Norwich adds that “much of what we know about antiquity—especially Hellenic and Roman literature and Roman law—would have been lost forever if it weren't for the scholars and scribes of Constantinople.”
Byzantine science played an important role in the transmission of classical knowledge to the Islamic world and to Renaissance Italy, and also in the transmission of Islamic science to Renaissance Italy. Many of the most distinguished classical scholars held high office in the Eastern Orthodox Church. The migration waves of Byzantine scholars and émigrés in the period following the Crusader sacking of Constantinople in 1204 and the end of the Byzantine Empire in 1453 are considered by many scholars key to the revival of Greek and Roman studies that led to the development of Renaissance humanism and science. These émigrés brought to Western Europe the relatively well-preserved remnants and accumulated knowledge of their own (Greek) civilization, which had mostly not survived the Early Middle Ages in the West. According to the Encyclopædia Britannica: "Many modern scholars also agree that the exodus of Greeks to Italy as a result of this event marked the end of the Middle Ages and the beginning of the Renaissance". The Byzantines pioneered the concept of the hospital as an institution offering medical care and the possibility of a cure for the patients, as a reflection of the ideals of Christian charity, rather than merely a place to die.
Paper, which the Muslims received from China in the eighth century, was being used in the Byzantine Empire by the ninth century. There were very large private libraries, and monasteries possessed huge libraries with hundreds of books that were lent to people in each monastery's region. Thus were preserved the works of classical antiquity.
When Saint Cyril was sent by the Byzantine emperor in an embassy to the Arabs in the ninth century, he astonished his Muslim hosts with his knowledge of philosophy and science as well as theology, an episode recounted by the historian Maria Mavroudi.
Perspectives on evolution
In recent history, the theory of evolution has been at the centre of controversy between Christianity and science, largely in America. Christians who accept a literal interpretation of the biblical account of creation find incompatibility between Darwinian evolution and their interpretation of the Christian faith. Creation science or scientific creationism is a branch of creationism that attempts to provide scientific support for the Genesis creation narrative in the Book of Genesis and attempts to disprove generally accepted scientific facts, theories and scientific paradigms about the geological history of Earth, formation of the Solar System, Big Bang cosmology, the chemical origins of life and evolution. It began in the 1960s as a fundamentalist Christian effort in the United States to prove Biblical inerrancy and falsify the scientific evidence for evolution. It has since developed a sizable religious following in the United States, with creation science ministries branching worldwide. In 1925, The State of Tennessee passed the Butler Act, which prohibited the teaching of the theory of evolution in all schools in the state. Later that year, a similar law was passed in Mississippi, and likewise, Arkansas in 1927. In 1968, these "anti-monkey" laws were struck down by the Supreme Court of the United States as unconstitutional, "because they established a religious doctrine violating both the First and Fourth Amendments to the Constitution."
Most scientists have rejected creation science for several reasons, including that its claims do not refer to natural causes and cannot be tested. In 1987, the United States Supreme Court ruled that creationism is religion, not science, and cannot be advocated in public school classrooms.
Theistic evolution is a position that accepts the current scientific understanding of the age of the Earth and the theory of evolution. It includes a range of beliefs, including views described as evolutionary creationism, which accepts contemporary science but also upholds classical religious understandings of God and creation in a Christian context. This position has been endorsed by the Catholic Church. Proponents of theistic evolution include the Christian philosopher and theologian William Lane Craig, BioLogos founder Francis Collins, the conservative Christian theologian Tim Keller, and the Christian philosopher Alvin Plantinga.
Modern reception
Individual scientists' views
Christian scholars and scientists have made noted contributions to science, technology and medicine, both historically and in modern times. Many well-known historical figures who influenced Western science considered themselves Christian, such as Nicolaus Copernicus, Galileo Galilei, Johannes Kepler, Isaac Newton, Robert Boyle, Francis Bacon, Gottfried Wilhelm Leibniz, Emanuel Swedenborg, Alessandro Volta, Carl Friedrich Gauss, Antoine Lavoisier, André-Marie Ampère, John Dalton, James Clerk Maxwell, William Thomson, 1st Baron Kelvin, Louis Pasteur, Michael Faraday, and J. J. Thomson.
Isaac Newton, for example, believed that gravity caused the planets to revolve about the Sun, and credited God with the design. In the concluding General Scholium to the Philosophiae Naturalis Principia Mathematica, he wrote: "This most beautiful System of the Sun, Planets and Comets, could only proceed from the counsel and dominion of an intelligent and powerful being." Other famous founders of science who adhered to Christian beliefs include Galileo, Johannes Kepler, René Descartes, Blaise Pascal, and others.
Prominent modern scientists advocating Christian belief include Nobel Prize–winning physicists Charles Townes (United Church of Christ member) and William Daniel Phillips (United Methodist Church member), evangelical Christian and past head of the Human Genome Project Francis Collins, and climatologist John T. Houghton.
Scientific Revolution
Some scholars have noted a direct tie between "particular aspects of traditional Christianity" and the rise of science.
Protestantism has had an important influence on science. According to the Merton thesis, there was a positive correlation between the rise of English Puritanism and German Pietism on the one hand and early experimental science on the other. Robert K. Merton focused on English Puritanism and German Pietism as having been responsible for the development of the scientific revolution of the seventeenth and eighteenth centuries. He explained that the connection between religious affiliation and interest in science was the result of a significant synergy between the ascetic Protestant values and those of modern science.
The history professor Peter Harrison credits Christianity with having contributed to the rise of the Scientific Revolution.
Nobel Prize
According to 100 Years of Nobel Prizes, a review of Nobel Prizes awarded between 1901 and 2000, 65.4% of Nobel Prize laureates identified Christianity in its various forms as their religious preference (427 prizes). Overall, Christians won a total of 72.5% of the prizes in Chemistry, 65.3% in Physics, 62% in Medicine and 54% in Economics between 1901 and 2000. Among the 654 laureates, 31.9% identified as Protestant in its various forms (208 prizes), 20.3% were Christians with no information about their denominations (133 prizes), 11.6% identified as Catholic and 1.6% identified as Eastern Orthodox. Although Christians make up over 33.2% of the world's population, they have won a total of 65.4% of all Nobel Prizes between 1901 and 2000.
In an estimate by scholar Benjamin Beit-Hallahmi, between 1901 and 2001, about 57.1% of Nobel prize winners were either Christians or with a Christian background. Between 1901 and 2001, about 56.5% of laureates in scientific fields were Christians. According to scholar Benjamin Beit-Hallahmi, Protestants were overrepresented in scientific categories and Catholics were well-represented in the Literature and Peace categories.
In an estimate made by Weijia Zhang from Arizona State University and Robert G. Fuller from University of Nebraska–Lincoln, between 1901 and 1990, 60% of Physics Nobel prize winners had Christian backgrounds.
Criticism
Events in Christian Europe, such as the Galileo affair, that were associated with the Scientific Revolution and the Age of Enlightenment led scholars such as John William Draper to postulate a conflict thesis, holding that religion and science have been in conflict methodologically, factually and politically throughout history. This thesis is held by several scientists like Richard Dawkins and Lawrence Krauss. While the conflict thesis remains popular in atheistic and antireligious circles, it has lost favor among most contemporary historians of science, and the majority of scientists in elite universities in the U.S. do not hold a conflict view.
More recently, Thomas E. Woods, Jr., asserts that, despite the widely held conception of the Catholic Church as being anti-science, this conventional wisdom has been the subject of "drastic revision" by historians of science over the last 50 years. Woods asserts that the mainstream view now is that the "Church [has] played a positive role in the development of science ... even if this new consensus has not yet managed to trickle down to the general public." Science historian Ronald L. Numbers corroborates this view, writing that "Historians of science have known for years that White's and Draper's accounts are more propaganda than history. ...Yet the message has rarely escaped the ivory tower."
Trial of Galileo
In 1610, Galileo published his Sidereus Nuncius (Starry Messenger), describing observations made with his new telescope. These and other discoveries exposed difficulties with the understanding of the heavens that was common at the time. Scientists, along with the Catholic Church, had adopted Aristotle's view of the earth as fixed in place, since Aristotle's rediscovery 300 years prior. Jeffrey Foss writes that, by Galileo's time, the Aristotelian-Ptolemaic view of the universe had become "fully integrated with Catholic theology".
Scientists of the day largely rejected Galileo's assertions, since most had no telescope, and Galileo had no physical theory to explain how planets could orbit the sun which, according to Aristotelian physics, was impossible. (That would not be resolved for another hundred years.) Galileo's peers alerted religious authorities to his "errors" and asked them to intervene. In response, the church forbade Galileo from teaching heliocentrism, though it did not forbid discussing it, so long as it was clear it was merely a hypothesis. Galileo published books and asserted the scientific superiority of his position. He was summoned before the Roman Inquisition twice. First warned, he was next sentenced to house arrest on a charge of "grave suspicion of heresy".
The Galileo affair has been considered by many to be a defining moment in the history of the relationship between religion and science. Since the creation of the Conflict thesis by Andrew Dickson White and John William Draper in the late nineteenth century, religion has been depicted as oppressive and oppositional to science. Edward Daub explains that, while "twentieth century historians of science dismantled White and Draper's claims, it is still popular in public perception". Casting Galileo's story as a contest between science and religion is an oversimplification, writes Jeffrey Foss. Galileo was heir to a long scientific tradition with deep medieval Christian roots.
See also
Conflict thesis
Continuity thesis
Deep ecology
Demarcation problem
Faith and rationality
History of science
Intelligent design
Issues in Science and Religion
List of scholars on the relationship between religion and science
Natural theology
Philosophy of science
Politicization of science
Religious skepticism
Relationship between religion and science
Psychology of religion
Scientific method and religion
Theistic evolution
By tradition:
List of Christians in science and technology
List of Christian Nobel laureates
Catholic Church and science and Catholic Church and evolution
List of Catholic clergy scientists
List of lay Catholic scientists
List of Catholic priests and religious awarded the Nobel Prize
Merton thesis
Parson-naturalist
Quakers in science
In the US:
American Scientific Affiliation
Center for Theology and the Natural Sciences
Creation–evolution controversy
John Templeton Foundation
Notes
Works cited
Further reading
Spierer, Eugen. God-of-the-Gaps Arguments in Light of Luther's Theology of the Cross.
External links
Christianity And The Scientist by Ian G. Barbour
Cambridge Christians in Science (CiS) group
Christians in Science website
Ian Ramsey Centre, Oxford
The Society of Ordained Scientists-Mostly Church of England
"Science in Christian Perspective" The (ASA)
Canadian Scientific and Christian Affiliation (CSCA)
The International Society for Science & Religion's founding members.(Of various faiths including Christianity)
Association of Christians in the Mathematical Sciences
Secular Humanism.org article on Science and Religion
History of science | Christianity and science | Technology | 10,016 |
29,182,681 | https://en.wikipedia.org/wiki/Isoplanatic%20patch | The isoplanatic patch is defined as an arbitrary area of the sky over which the path length of incoming electromagnetic waves (such as light or radio waves) only varies by a relatively small amount relative to their wavelength. Typically this area is measured by angular size. Poor seeing or a larger telescope aperture will decrease the size of a patch. Thus, the patch size varies inversely with the Fried parameter and the telescope's angular resolution. In order to correct for atmospheric distortion, telescopes fitted with adaptive optics use a bright light source such as a laser to identify the properties of a patch in the area of interest.
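As a rough illustration of the scale involved, a commonly used adaptive-optics rule of thumb estimates the isoplanatic angle as roughly 0.314 times the Fried parameter divided by an effective turbulence height. The short Python sketch below applies that approximation; the Fried parameter and turbulence height used are assumed illustrative values, not measurements.

import math

def isoplanatic_angle_rad(fried_parameter_m, turbulence_height_m):
    # Rule-of-thumb isoplanatic angle in radians: about 0.314 * r0 / h_eff.
    return 0.314 * fried_parameter_m / turbulence_height_m

# Assumed illustrative values: 10 cm Fried parameter, 5 km effective turbulence height.
r0 = 0.10
h_eff = 5000.0
theta_arcsec = math.degrees(isoplanatic_angle_rad(r0, h_eff)) * 3600.0
print("Isoplanatic patch of roughly %.1f arcseconds" % theta_arcsec)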
See also
optical resolution
References
Further reading
Birney, S.; Gonzalez, G.; Oesper, D. Observational Astronomy, 2nd ed., Cambridge University Press, 2006
Astronomical imaging
Observational astronomy
Speckle imaging | Isoplanatic patch | Astronomy | 164 |
67,143,185 | https://en.wikipedia.org/wiki/Carbon%20Design%20System | Carbon Design System is a free and open-source design system and library created by IBM that implements the IBM Design Language and is licensed under Apache License 2.0. Its public development started on June 10, 2015. Its components have multiple implementations, including a vanilla JS and CSS implementation and a React implementation (maintained by the Carbon Core team), while the community maintains implementations in Svelte, Vue.js, and Web Components. The official typeface according to the guidelines is IBM Plex, with Noto Sans CJK SC, Noto Sans CJK TC, and Noto Sans JP as alternative typefaces for CJK scripts.
See also
Design language
Flat design
Fluent Design System by Microsoft
Material Design by Google
References
External links
Design language
Graphical user interfaces
IBM | Carbon Design System | Engineering | 170 |
76,395,135 | https://en.wikipedia.org/wiki/Social%20justice%20index | A social justice index is a set of numbers calculated by weighting several indicators for various entities, usually countries, but also regions or commercial firms. These indicators are considered to be related to social justice.
The European Union Social Justice Index, published in September 2015 by Bertelsmann Stiftung, is based on 35 indicators. The highest number (7.48) is given to Sweden, whilst the lowest one (3.57) goes to Greece.
The Social Justice in the EU and OECD Index, published in September 2019 also by Bertelsmann Stiftung, ranks 41 countries, from the highest one (7.90, Iceland) to the lowest one (4.76, Mexico). It considers 6 dimensions of social justice: poverty prevention, equitable education, labor market access, social inclusion and non-discrimination, intergenerational justice and health. For some countries, such as Sweden, this index has been calculated every two or three years since 2009.
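As a minimal sketch of how a composite index of this kind can be formed (not the Bertelsmann Stiftung's actual methodology or weights), the Python example below takes the six dimensions named above, assigns them invented scores on a common scale, and computes a weighted average.

dimensions = ["poverty prevention", "equitable education", "labor market access",
              "social inclusion and non-discrimination", "intergenerational justice", "health"]
# Invented per-dimension scores and equal weights, purely for illustration.
example_scores = dict(zip(dimensions, [7.1, 6.8, 7.5, 6.9, 6.2, 7.8]))
weights = {d: 1.0 for d in dimensions}

def composite_index(scores, weights):
    # Weighted average of per-dimension scores.
    return sum(scores[d] * weights[d] for d in scores) / sum(weights.values())

print("Composite index: %.2f" % composite_index(example_scores, weights))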
The Adasina Social Justice Index is a stock market index of about 9,000 publicly traded securities. Adasina is a financial analysis firm. These securities are included in this index (or excluded from it) according to 4 criteria: racial justice, gender justice, economic justice and climate justice. The Adasina Social Justice Index is designed to support progressive movements.
See also
Government effectiveness index
World Governance Index
References
Effective altruism
Index numbers
International rankings | Social justice index | Mathematics,Biology | 291 |
41,049,134 | https://en.wikipedia.org/wiki/Fervidobacterium%20gondwanense | Fervidobacterium gondwanense (F. gondwanense) is a species of thermophilic anaerobic bacteria. It is non-sporulating, motile, Gram-negative, and rod-shaped. F. gondwanense was isolated from non-volcanically heated geothermal waters of the Great Artesian Basin in Australia.
Fervidobacterium gondwanense grows best at temperatures from 65 °C to 68 °C and does not grow at all below 44 °C. Its habitat is volcanic marine or terrestrial hot springs. This species can also live in man-made environments such as hot water storage tanks.
References
Further reading
Ravot, Gilles, et al. "L-Alanine production from glucose fermentation by hyperthermophilic members of the domains Bacteria and Archaea: a remnant of an ancestral metabolism?." Applied and Environmental Microbiology 62.7 (1996): 2657–2659.
Dworkin, Martin, and Stanley Falkow, eds. The Prokaryotes: Vol. 7: Proteobacteria: Delta and Epsilon Subclasses. Deeply Rooting Bacteria. Vol. 7. Springer, 2006.
External links
Type strain of Fervidobacterium gondwanense at BacDive - the Bacterial Diversity Metadatabase
Thermotogota
Gram-positive bacteria | Fervidobacterium gondwanense | Biology | 296 |
1,197,980 | https://en.wikipedia.org/wiki/Downregulation%20and%20upregulation | In biochemistry, in the biological context of organisms' regulation of gene expression and production of gene products, downregulation is the process by which a cell decreases the production and quantities of its cellular components, such as RNA and proteins, in response to an external stimulus. The complementary process that involves increase in quantities of cellular components is called upregulation.
An example of downregulation is the cellular decrease in the expression of a specific receptor in response to its increased activation by a molecule, such as a hormone or neurotransmitter, which reduces the cell's sensitivity to the molecule. This is an example of a locally acting (negative feedback) mechanism.
An example of upregulation is the response of liver cells exposed to such xenobiotic molecules as dioxin. In this situation, the cells increase their production of cytochrome P450 enzymes, which in turn increases degradation of these dioxin molecules.
Downregulation or upregulation of an RNA or protein may also arise by an epigenetic alteration. Such an epigenetic alteration can cause expression of the RNA or protein to no longer respond to an external stimulus. This occurs, for instance, during drug addiction or progression to cancer.
Downregulation and upregulation of receptors
All living cells have the ability to receive and process signals that originate outside their membranes, which they do by means of proteins called receptors, often located at the cell's surface imbedded in the plasma membrane. When such signals interact with a receptor, they effectively direct the cell to do something, such as dividing, dying, or allowing substances to be created, or to enter or exit the cell. A cell's ability to respond to a chemical message depends on the presence of receptors tuned to that message. The more receptors a cell has that are tuned to the message, the more the cell will respond to it.
Receptors are created, or expressed, from instructions in the DNA of the cell, and they can be increased, or upregulated, when the signal is weak, or decreased, or downregulated, when it is strong. Their level can also be up or down regulated by modulation of systems that degrade receptors when they are no longer required by the cell.
Downregulation of receptors can also occur when receptors have been chronically exposed to an excessive amount of a ligand, either from endogenous mediators or from exogenous drugs. This results in ligand-induced desensitization or internalization of that receptor. This is typically seen in animal hormone receptors. Upregulation of receptors, on the other hand, can result in super-sensitized cells, especially after repeated exposure to an antagonistic drug or prolonged absence of the ligand.
Some receptor agonists may cause downregulation of their respective receptors, while most receptor antagonists temporarily upregulate their respective receptors. The disequilibrium caused by these changes often causes withdrawal when the long-term use of a drug is discontinued.
Upregulation and downregulation can also happen as a response to toxins or hormones. An example of upregulation in pregnancy is hormones that cause cells in the uterus to become more sensitive to oxytocin.
Example: Insulin receptor downregulation
Elevated levels of the hormone insulin in the blood trigger downregulation of the associated receptors. When insulin binds to its receptors on the surface of a cell, the hormone receptor complex undergoes endocytosis and is subsequently attacked by intracellular lysosomal enzymes. The internalization of the insulin molecules provides a pathway for degradation of the hormone, as well as for regulation of the number of sites that are available for binding on the cell surface. At high plasma concentrations, the number of surface receptors for insulin is gradually reduced by the accelerated rate of receptor internalization and degradation brought about by increased hormonal binding. The rate of synthesis of new receptors within the endoplasmic reticulum and their insertion in the plasma membrane do not keep pace with their rate of destruction. Over time, this self-induced loss of target cell receptors for insulin reduces the target cell's sensitivity to the elevated hormone concentration.
This process is illustrated by the insulin receptor sites on target cells, e.g. liver cells, in a person with type 2 diabetes. Due to the elevated levels of blood glucose in an individual, the β-cells (islets of Langerhans) in the pancreas must release more insulin than normal to meet the demand and return the blood to homeostatic levels. The near-constant increase in blood insulin levels results from an effort to match the increase in blood glucose, which will cause receptor sites on the liver cells to downregulate and decrease the number of receptors for insulin, increasing the subject's resistance by decreasing sensitivity to this hormone. There is also a hepatic decrease in sensitivity to insulin. This can be seen in the continuing gluconeogenesis in the liver even when blood glucose levels are elevated. This is the more common process of insulin resistance, which leads to adult-onset diabetes.
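The balance described above, in which ligand-driven internalization and degradation outpace the synthesis of new receptors, can be sketched with a simple steady-state rate model, dR/dt = synthesis - (k_base + k_ligand * L) * R. The Python example below is only an illustrative toy model; the rate constants, ligand levels and receptor counts are invented and are not quantitative values for the insulin receptor.

def steady_state_receptors(ligand, synthesis=1.0, k_base=0.01, k_ligand=0.04):
    # At steady state, synthesis balances removal: R = synthesis / (k_base + k_ligand * ligand).
    return synthesis / (k_base + k_ligand * ligand)

for label, ligand_level in [("normal insulin", 0.5), ("chronically elevated insulin", 2.0)]:
    print("%s: about %.0f receptors (arbitrary units)" % (label, steady_state_receptors(ligand_level)))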
Another example can be seen in diabetes insipidus, in which the kidneys become insensitive to arginine vasopressin.
Drug addiction
Family-based, adoption, and twin studies have indicated that there is a strong (50%) heritable component to vulnerability to substance abuse addiction.
Especially among genetically vulnerable individuals, repeated exposure to a drug of abuse in adolescence or adulthood causes addiction by inducing stable downregulation or upregulation in expression of specific genes and microRNAs through epigenetic alterations. Such downregulation or upregulation has been shown to occur in the brain's reward regions, such as the nucleus accumbens.
Cancer
DNA damage appears to be the primary underlying cause of cancer. DNA damage can also increase epigenetic alterations due to errors during DNA repair. Such mutations and epigenetic alterations can give rise to cancer (see malignant neoplasms). Investigation of epigenetic down- or upregulation of repaired DNA genes as possibly central to progression of cancer has been regularly undertaken since 2000.
Epigenetic downregulation of the DNA repair gene MGMT occurs in 93% of bladder cancers, 88% of stomach cancers, 74% of thyroid cancers, 40–90% of colorectal cancers, and 50% of brain cancers. Similarly, epigenetic downregulation of LIG4 occurs in 82% of colorectal cancers and epigenetic downregulation of NEIL1 occurs in 62% of head and neck cancers and in 42% of non-small-cell lung cancers.
Epigenetic upregulation of the DNA repair genes PARP1 and FEN1 occurs in numerous cancers (see Regulation of transcription in cancer). PARP1 and FEN1 are essential genes in the error-prone and mutagenic DNA repair pathway microhomology-mediated end joining. If this pathway is upregulated, the excess mutations it causes can lead to cancer. PARP1 is over-expressed in tyrosine kinase-activated leukemias, in neuroblastoma, in testicular and other germ cell tumors, and in Ewing's sarcoma. FEN1 is upregulated in the majority of cancers of the breast, prostate, stomach, neuroblastomas, pancreas, and lung.
See also
Regulation of gene expression
Transcriptional regulation
Enhancer (genetics)
References
External links
Molecular biology
Genetics
Cell biology | Downregulation and upregulation | Chemistry,Biology | 1,555 |
68,202,003 | https://en.wikipedia.org/wiki/Joel%20Lexchin | Joel Lexchin is a professor emeritus at the York University Faculty of Health where he taught about pharmaceutical policy, an Associate Professor in the Department of Family and Community Medicine at the University of Toronto, an emergency physician at the Toronto General Hospital and a Fellow in the Canadian Academy of Health Sciences. Lexchin is the author of over 160 peer-reviewed publications.
Biography
Lexchin received his MD from the University of Toronto in 1977.
From 1992 for two years Lexchin was a member of the Ontario Drug Quality and Therapeutics Committee. He was the chair of the Drugs and Pharmacotherapy Committee of the Ontario Medical Association from 1997 for two years.
In 2013, he was quoted in a learned article on Drug patents: the evergreening problem, and he wrote the article on the pharmaceutical industry for the Canadian Encyclopedia.
Lexchin is frequently critical of Canada's drug regulator, the Health Products and Food Branch, as has been noticed in the learned press.
In 2006, Lexchin was quoted by Manzer: "Drug approvals are not all science. There’s always decisions to be made around how much risk are we willing to take in terms of drugs, and I think as the industry takes on a larger role in funding the regulatory bodies that those kinds of decisions tend to be made more in favour of the drug companies," and in 2010 was noticed in a Toronto Star article entitled "Health Canada keeps some drug studies secret".
References
Canadian physicians
Living people
Drug control law
Drug safety
Pharmaceuticals policy
Year of birth missing (living people) | Joel Lexchin | Chemistry | 308 |
46,445,382 | https://en.wikipedia.org/wiki/Kelp%20noodles | Kelp noodles, or cheon sa chae, are semi-transparent noodles made from the jelly-like extract left after steaming edible kelp. They are made without the addition of grain flour or starch. Kelp noodles have a crunchy texture and are low in calories. They can be eaten raw in salads, but for added taste some prefer to cook them in water with spices added for flavoring. Many restaurants serve kelp noodles in stir-fry dishes. The noodles usually require rinsing before being added to a stir-fry dish towards the end of cooking time.
Nutrition
Along with their low caloric content, kelp noodles also contain minimal nutrients.
Dishes
Kelp noodles are mostly prepared in various Asian cuisine as a low-carbohydrate substitute for rice and pasta. They are commonly used in soups, salads, stir-fries and vegetable side dishes. Since they have a neutral taste they take on the flavors of the dishes to which they are added. The noodles can be purchased online or in health food supermarkets, and restaurants are beginning to offer kelp noodles as an alternative to more traditional noodles or rice in their dishes.
Potential economic impact
The popularity of kelp noodles among health-conscious consumers is growing because of the rising demand for gluten-free food products.
References
Further reading
Chinese noodles
East Asian cuisine
Korean noodles
Seaweeds | Kelp noodles | Biology | 282 |
38,782,331 | https://en.wikipedia.org/wiki/Centre%20for%20Human%20Ecology | The Centre for Human Ecology is an independent academic institute based in Glasgow, Scotland. It was founded in 1972 by Conrad Hal Waddington at the University of Edinburgh.
References
External links
Centre for Human Ecology
Human ecology
Education in Scotland
1972 establishments in Scotland
University of Edinburgh
Research institutes in Scotland | Centre for Human Ecology | Environmental_science | 58 |
9,885,419 | https://en.wikipedia.org/wiki/Tip-speed%20ratio | The tip-speed ratio, λ, or TSR for wind turbines is the ratio between the tangential speed of the tip of a blade and the actual speed of the wind, v. The tip-speed ratio is related to efficiency, with the optimum varying with blade design. Higher tip speeds result in higher noise levels and require stronger blades due to larger centrifugal forces.
The tip speed of the blade can be calculated as $v_\text{tip} = \omega R$, where $\omega$ is the rotational speed of the rotor and $R$ is the rotor radius. Therefore, we can also write:

$$\lambda = \frac{\omega R}{v},$$

where $v$ is the wind speed at the height of the blade hub.
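A minimal numerical sketch of this relationship follows; the rotor radius, rotational speed, and wind speed used below are hypothetical example values, not figures from this article:

```python
import math

def tip_speed_ratio(rpm: float, radius_m: float, wind_speed_ms: float) -> float:
    """Tip-speed ratio lambda = omega * R / v, with omega in rad/s."""
    omega = rpm * 2.0 * math.pi / 60.0  # convert rev/min to rad/s
    return omega * radius_m / wind_speed_ms

# Hypothetical example: a 40 m blade turning at 15 rpm in a 10 m/s wind.
print(f"lambda = {tip_speed_ratio(15.0, 40.0, 10.0):.2f}")  # ~6.28
```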
Cp–λ curves
The power coefficient, $C_\text{p}$, expresses what fraction of the power in the wind is being extracted by the wind turbine. It is generally assumed to be a function of both tip-speed ratio and pitch angle. When the pitch is held constant, the curve of $C_\text{p}$ against tip-speed ratio rises to a single maximum at an optimal tip-speed ratio and falls away on either side of it.
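One widely used empirical approximation of $C_\text{p}(\lambda, \beta)$ from the wind-turbine modelling literature can make this shape concrete. The sketch below evaluates it at fixed pitch and scans for the peak; the coefficient set (0.5176, 116, 0.4, 5, 21, 0.0068) is one published parameterization, used here as an illustrative assumption rather than as data from this article:

```python
import math

def cp(lmbda: float, beta: float = 0.0) -> float:
    """Empirical power-coefficient approximation Cp(lambda, beta).

    Coefficients follow one widely cited parameter set (an assumption here):
    c1..c6 = 0.5176, 116, 0.4, 5, 21, 0.0068.
    """
    inv_li = 1.0 / (lmbda + 0.08 * beta) - 0.035 / (beta**3 + 1.0)
    return (0.5176 * (116.0 * inv_li - 0.4 * beta - 5.0)
            * math.exp(-21.0 * inv_li) + 0.0068 * lmbda)

# Scan lambda at fixed pitch (beta = 0) to locate the peak of the Cp-lambda curve.
best = max(((l / 10.0, cp(l / 10.0)) for l in range(10, 150)), key=lambda t: t[1])
print(f"optimal lambda ~ {best[0]:.1f}, Cp_max ~ {best[1]:.3f}")  # ~8.1, ~0.48
```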
The case for variable speed wind turbines
Originally, wind turbines were fixed speed. This has the benefit that the rotor speed of the generator is constant, so that the frequency of the AC voltage is fixed. This allows the wind turbine to be directly connected to a transmission system. However, as described above, the power coefficient is a function of the tip-speed ratio. By extension, the efficiency of the wind turbine is a function of the tip-speed ratio.
Ideally, one would like to have a turbine operating at the maximum value of Cp at all wind speeds. This means that as the wind speed changes, the rotor speed must change as well such that Cp = Cp max. A wind turbine with a variable rotor speed is called a variable-speed wind turbine. Whilst this does mean that the wind turbine operates at or close to Cp max for a range of wind speeds, the frequency of the AC voltage generator will not be constant. This can be seen in the equation
$$f = \frac{N P}{120},$$

where $N$ is the rotor's angular speed in revolutions per minute, $f$ is the frequency of the AC voltage generated in the stator windings, and $P$ is the number of poles in the generator inside the nacelle. Therefore, direct connection of a variable-speed turbine to a transmission system is not permissible. What is required is a power converter which converts the signal generated by the turbine generator into DC and then converts that signal to an AC signal with the grid/transmission system frequency.
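A quick numerical check of this relation follows, for a hypothetical 4-pole generator; the pole count and rotor speeds are illustrative assumptions:

```python
def stator_frequency_hz(rotor_rpm: float, poles: int) -> float:
    """f = N * P / 120, with N in rev/min and P the number of poles."""
    return rotor_rpm * poles / 120.0

# A hypothetical 4-pole generator: only 1500 rpm yields exactly 50 Hz,
# so any change in rotor speed shifts the output frequency off-grid.
for rpm in (1200.0, 1500.0, 1800.0):
    print(f"{rpm:.0f} rpm -> {stator_frequency_hz(rpm, 4):.1f} Hz")
```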
The case against variable speed wind turbines
Variable-speed wind turbines cannot be directly connected to a transmission system. One of the drawbacks of this is that the inertia of the transmission system is reduced as more variable-speed wind turbines are put online. This can result in more significant drops in the transmission system's voltage frequency in the event of the loss of a generating unit. Furthermore, variable-speed wind turbines require power electronics, which increases the complexity of the turbine and introduces new sources of failures. On the other hand, the additional energy captured by a variable-speed wind turbine compared with a fixed-speed one has been estimated at only approximately 2%.
References
Wind turbines
Engineering ratios | Tip-speed ratio | Mathematics,Engineering | 630 |
11,097,780 | https://en.wikipedia.org/wiki/Saturn%20V-B | Studied in 1968 by the Marshall Space Flight Center, the Saturn V-B was an interesting vehicle concept because it nearly represented a single-stage-to-orbit booster, but was actually a stage-and-a-half booster, like the Atlas. The booster would achieve liftoff via five regular F-1 engines; four of the five engines would be jettisoned and could be fully recoverable, with the sustainer stage continuing the flight into orbit. Had it been constructed, the rocket could have had a launch capability similar to that of the Space Shuttle, but it never flew.
Concept
With use of the Saturn V vehicle during Apollo, NASA began considering plans for a hypothesized evolutionary Saturn V family concept spanning the Earth-orbital payload spectrum from 50,000 to over 500,000 lbs. The "B" derivative of the Saturn V was a stage-and-one-half version of the then-current S-IC stage and would become the first stage in an effective and economical assembly of upper stages of the evolutionary Saturn family.
The booster would achieve liftoff via five regular F-1 engines; four of the five engines on the Saturn V-B would be jettisoned and could be fully recoverable, with the sustainer stage on the rocket continuing the flight into orbit. The vehicle would be capable of a LEO payload of 50,000 lb with a standard S-IC stage length of 138 ft. Increases in the length of the stage could significantly increase this capability.
See also
Saturn-Shuttle
References
Further reading
Saturn V-B refers to Boeing study
Saturn V | Saturn V-B | Astronomy | 324 |
481,056 | https://en.wikipedia.org/wiki/Slater%20determinant | In quantum mechanics, a Slater determinant is an expression that describes the wave function of a multi-fermionic system. It satisfies anti-symmetry requirements, and consequently the Pauli principle, by changing sign upon exchange of two electrons (or other fermions). Only a small subset of all possible fermionic wave functions can be written as a single Slater determinant, but those form an important and useful subset because of their simplicity.
The Slater determinant arises from the consideration of a wave function for a collection of electrons, each with a wave function known as a spin-orbital $\chi(\mathbf{x})$, where $\mathbf{x}$ denotes the position and spin of a single electron. A Slater determinant containing two electrons with the same spin orbital would correspond to a wave function that is zero everywhere.
The Slater determinant is named for John C. Slater, who introduced the determinant in 1929 as a means of ensuring the antisymmetry of a many-electron wave function, although the wave function in the determinant form first appeared independently in Heisenberg's and Dirac's articles three years earlier.
Definition
Two-particle case
The simplest way to approximate the wave function of a many-particle system is to take the product of properly chosen orthogonal wave functions of the individual particles. For the two-particle case with coordinates $\mathbf{x}_1$ and $\mathbf{x}_2$, we have

$$\Psi(\mathbf{x}_1, \mathbf{x}_2) = \chi_1(\mathbf{x}_1) \chi_2(\mathbf{x}_2).$$
This expression is used in the Hartree method as an ansatz for the many-particle wave function and is known as a Hartree product. However, it is not satisfactory for fermions because the wave function above is not antisymmetric under exchange of any two of the fermions, as it must be according to the Pauli exclusion principle. An antisymmetric wave function can be mathematically described as follows:

$$\Psi(\mathbf{x}_1, \mathbf{x}_2) = -\Psi(\mathbf{x}_2, \mathbf{x}_1).$$
This does not hold for the Hartree product, which therefore does not satisfy the Pauli principle. This problem can be overcome by taking a linear combination of both Hartree products:

$$\Psi(\mathbf{x}_1, \mathbf{x}_2) = \frac{1}{\sqrt{2}} \left[ \chi_1(\mathbf{x}_1) \chi_2(\mathbf{x}_2) - \chi_1(\mathbf{x}_2) \chi_2(\mathbf{x}_1) \right],$$

where the coefficient $1/\sqrt{2}$ is the normalization factor. This wave function is now antisymmetric and no longer distinguishes between fermions (that is, one cannot indicate an ordinal number to a specific particle, and the indices given are interchangeable). Moreover, it also goes to zero if any two spin orbitals of two fermions are the same. This is equivalent to satisfying the Pauli exclusion principle.
Multi-particle case
The expression can be generalised to any number of fermions by writing it as a determinant. For an N-electron system, the Slater determinant is defined as

$$\Psi(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N) = \frac{1}{\sqrt{N!}} \begin{vmatrix} \chi_1(\mathbf{x}_1) & \chi_2(\mathbf{x}_1) & \cdots & \chi_N(\mathbf{x}_1) \\ \chi_1(\mathbf{x}_2) & \chi_2(\mathbf{x}_2) & \cdots & \chi_N(\mathbf{x}_2) \\ \vdots & \vdots & \ddots & \vdots \\ \chi_1(\mathbf{x}_N) & \chi_2(\mathbf{x}_N) & \cdots & \chi_N(\mathbf{x}_N) \end{vmatrix} \equiv | \chi_1 \chi_2 \cdots \chi_N \rangle \equiv | 1, 2, \ldots, N \rangle,$$

where the last two expressions use a shorthand for Slater determinants: the normalization constant is implied by noting the number N, and only the one-particle wavefunctions (first shorthand) or the indices for the fermion coordinates (second shorthand) are written down. All skipped labels are implied to behave in ascending sequence. The linear combination of Hartree products for the two-particle case is identical with the Slater determinant for N = 2. The use of Slater determinants ensures an antisymmetrized function at the outset. In the same way, the use of Slater determinants ensures conformity to the Pauli principle. Indeed, the Slater determinant vanishes if the set $\{\chi_i\}$ is linearly dependent. In particular, this is the case when two (or more) spin orbitals are the same. In chemistry one expresses this fact by stating that no two electrons with the same spin can occupy the same spatial orbital.
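A minimal computational sketch of this definition is given below; the orbitals, coordinates, and one-dimensional toy setting are hypothetical, chosen only to exhibit the antisymmetry under particle exchange and the vanishing for repeated spin-orbitals:

```python
import math
import itertools
import numpy as np

def slater(orbitals, coords):
    """Evaluate (1/sqrt(N!)) * det[chi_i(x_j)] at discrete coordinates.

    orbitals: list of N callables chi_i(x); coords: list of N coordinates x_j.
    """
    n = len(orbitals)
    mat = np.array([[chi(x) for chi in orbitals] for x in coords])
    return np.linalg.det(mat) / math.sqrt(math.factorial(n))

# Hypothetical toy orbitals on a one-dimensional coordinate.
chi = [lambda x, k=k: math.sin((k + 1) * x) for k in range(3)]
xs = [0.3, 1.1, 2.4]

psi = slater(chi, xs)
psi_swapped = slater(chi, [xs[1], xs[0], xs[2]])  # exchange particles 1 and 2
print(np.isclose(psi_swapped, -psi))              # True: antisymmetry

# Two identical spin-orbitals -> determinant vanishes (Pauli principle).
print(np.isclose(slater([chi[0], chi[0], chi[2]], xs), 0.0))  # True
```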
Example: Matrix elements in a many electron problem
Many properties of the Slater determinant come to life with an example in a non-relativistic many electron problem.
The one-particle terms of the Hamiltonian contribute in the same manner as for the simple Hartree product, namely, the energies are summed and the states are independent.
The multi-particle terms of the Hamiltonian introduce an exchange term, which lowers the energy of the anti-symmetrized wave function.
Starting from a molecular Hamiltonian (in atomic units):

$$\hat{H} = -\sum_i \frac{1}{2}\nabla_i^2 - \sum_i \sum_I \frac{Z_I}{|\mathbf{r}_i - \mathbf{R}_I|} + \sum_i \sum_{j>i} \frac{1}{|\mathbf{r}_i - \mathbf{r}_j|} - \sum_I \frac{1}{2 M_I}\nabla_I^2 + \sum_I \sum_{J>I} \frac{Z_I Z_J}{|\mathbf{R}_I - \mathbf{R}_J|},$$

where the indices $i, j$ run over the electrons and $I, J$ over the nuclei, with $\mathbf{r}_i$ and $\mathbf{R}_I$ the corresponding positions and $Z_I$, $M_I$ the nuclear charges and masses.
For simplicity we freeze the nuclei at equilibrium in one position and we remain with a simplified Hamiltonian

$$\hat{H} = \hat{O}_1 + \hat{O}_2,$$

where we distinguish between the first set of terms,

$$\hat{O}_1 = \sum_i \hat{h}(i) = \sum_i \left( -\frac{1}{2}\nabla_i^2 - \sum_I \frac{Z_I}{|\mathbf{r}_i - \mathbf{R}_I|} \right)$$

(the "1" particle terms), and the last term,

$$\hat{O}_2 = \sum_i \sum_{j>i} \frac{1}{|\mathbf{r}_i - \mathbf{r}_j|}$$

(the "2" particle term), which gives rise to the exchange term for a Slater determinant.
The two parts will behave differently when they have to interact with a Slater determinant wave function. We start by computing the expectation values of the one-particle terms:

$$\langle \Psi | \hat{O}_1 | \Psi \rangle = \frac{1}{N!} \sum_{P, Q} (-1)^{p+q} \left\langle P[\chi_1(\mathbf{x}_1) \cdots \chi_N(\mathbf{x}_N)] \,\middle|\, \sum_i \hat{h}(i) \,\middle|\, Q[\chi_1(\mathbf{x}_1) \cdots \chi_N(\mathbf{x}_N)] \right\rangle,$$

where $P$ and $Q$ run over all $N!$ permutations of the particle labels and $(-1)^p$ denotes the parity of $P$. In the above expression, we can just select the identical permutation in the determinant in the left part, since all the other $N! - 1$ permutations would give the same result as the selected one. We can thus cancel the $N!$ in the denominator:

$$\langle \Psi | \hat{O}_1 | \Psi \rangle = \sum_{Q} (-1)^{q} \left\langle \chi_1(\mathbf{x}_1) \cdots \chi_N(\mathbf{x}_N) \,\middle|\, \sum_i \hat{h}(i) \,\middle|\, Q[\chi_1(\mathbf{x}_1) \cdots \chi_N(\mathbf{x}_N)] \right\rangle.$$
Because of the orthonormality of spin-orbitals it is also evident that only the identical permutation survives in the determinant on the right part of the above matrix element
This result shows that the anti-symmetrization of the product does not have any effect on the one-particle terms, and they behave as they would in the case of the simple Hartree product.
And finally we remain with the trace over the one-particle Hamiltonians:

$$\langle \Psi | \hat{O}_1 | \Psi \rangle = \sum_{i=1}^{N} \langle \chi_i | \hat{h} | \chi_i \rangle.$$
This tells us that, as far as the one-particle terms are concerned, the wave functions of the electrons are independent of each other and the expectation value of the total system is given by the sum of the expectation values of the single particles.
For the two-particle terms instead,

$$\langle \Psi | \hat{O}_2 | \Psi \rangle = \frac{1}{N!} \sum_{P, Q} (-1)^{p+q} \left\langle P[\chi_1(\mathbf{x}_1) \cdots \chi_N(\mathbf{x}_N)] \,\middle|\, \sum_i \sum_{j>i} \frac{1}{|\mathbf{r}_i - \mathbf{r}_j|} \,\middle|\, Q[\chi_1(\mathbf{x}_1) \cdots \chi_N(\mathbf{x}_N)] \right\rangle.$$

If we focus on the action of one term of $\hat{O}_2$, for a fixed pair $(i, j)$ only the identical permutation and the single transposition exchanging $i$ and $j$ survive the orthonormality of the remaining spin-orbitals, so it will produce only the two terms

$$\left\langle \chi_i \chi_j \middle| \frac{1}{r_{12}} \middle| \chi_i \chi_j \right\rangle \quad \text{and} \quad -\left\langle \chi_i \chi_j \middle| \frac{1}{r_{12}} \middle| \chi_j \chi_i \right\rangle.$$
And finally

$$\langle \Psi | \hat{O}_2 | \Psi \rangle = \frac{1}{2} \sum_{i,j} \left( \left\langle \chi_i \chi_j \middle| \frac{1}{r_{12}} \middle| \chi_i \chi_j \right\rangle - \left\langle \chi_i \chi_j \middle| \frac{1}{r_{12}} \middle| \chi_j \chi_i \right\rangle \right),$$

which instead is a mixing term. The first contribution is called the "Coulomb" term or Coulomb integral and the second is the "exchange" term or exchange integral. Sometimes a different range of indices is used in the summation ($\sum_{i<j}$ rather than $\frac{1}{2}\sum_{i,j}$), since the Coulomb and exchange contributions exactly cancel each other for $i = j$.
It is important to notice explicitly that the exchange term, which is always positive for local spin-orbitals, is absent in the simple Hartree product. Hence the electron-electron repulsive energy on the antisymmetrized product of spin-orbitals is always lower than the electron-electron repulsive energy on the simple Hartree product of the same spin-orbitals. Since exchange bielectronic integrals are different from zero only for spin-orbitals with parallel spins, we link the decrease in energy with the physical fact that electrons with parallel spin are kept apart in real space in Slater determinant states.
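As a toy numerical illustration of the Coulomb and exchange integrals, the sketch below assumes two real orthonormal orbitals on a one-dimensional grid and a softened Coulomb kernel (both simplifying assumptions; real calculations use three-dimensional spin-orbitals and the true $1/r_{12}$ interaction). It shows that $K > 0$, so the antisymmetrized repulsion $J - K$ is indeed lower than the Hartree value $J$:

```python
import numpy as np

# Toy 1D model: two orthonormal orbitals on a grid (orthogonal by parity).
x = np.linspace(-8.0, 8.0, 400)
dx = x[1] - x[0]
phi1 = np.exp(-x**2 / 2)
phi2 = x * np.exp(-x**2 / 2)
phi1 /= np.sqrt(np.sum(phi1**2) * dx)
phi2 /= np.sqrt(np.sum(phi2**2) * dx)

# Softened Coulomb kernel v(x, x') = 1 / sqrt((x - x')^2 + 1).
v = 1.0 / np.sqrt((x[:, None] - x[None, :])**2 + 1.0)

# Coulomb integral J = density of orbital 1 interacting with density of orbital 2.
J = np.einsum('i,ij,j->', phi1**2, v, phi2**2) * dx * dx
# Exchange integral K = overlap density phi1*phi2 interacting with itself.
K = np.einsum('i,ij,j->', phi1 * phi2, v, phi1 * phi2) * dx * dx

print(f"J = {J:.4f}, K = {K:.4f}")  # K > 0, hence J - K < J
```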
As an approximation
Most fermionic wavefunctions cannot be represented as a Slater determinant. The best Slater approximation to a given fermionic wave function can be defined to be the one that maximizes the overlap between the Slater determinant and the target wave function. The maximal overlap is a geometric measure of entanglement between the fermions.
A single Slater determinant is used as an approximation to the electronic wavefunction in Hartree–Fock theory. In more accurate theories (such as configuration interaction and MCSCF), a linear combination of Slater determinants is needed.
Discussion
The word "detor" was proposed by S. F. Boys to refer to a Slater determinant of orthonormal orbitals, but this term is rarely used.
Unlike fermions that are subject to the Pauli exclusion principle, two or more bosons can occupy the same single-particle quantum state. Wavefunctions describing systems of identical bosons are symmetric under the exchange of particles and can be expanded in terms of permanents.
See also
Antisymmetrizer
Electron orbital
Fock space
Quantum electrodynamics
Quantum mechanics
Physical chemistry
Hund's rule
Hartree–Fock method
References
External links
Many-Electron States in E. Pavarini, E. Koch, and U. Schollwöck: Emergent Phenomena in Correlated Matter, Jülich 2013,
Quantum mechanics
Quantum chemistry
Theoretical chemistry
Computational chemistry
Determinants
Pauli exclusion principle | Slater determinant | Physics,Chemistry | 1,654 |
11,257,930 | https://en.wikipedia.org/wiki/Mapping%20the%20Atari | Mapping the Atari, written by Ian Chadwick and published by COMPUTE! Publications in 1983, is an address-by-address explanation of the memory layout of the Atari 8-bit computers. The introduction is by Optimized Systems Software co-founder Bill Wilkinson.
The book covers the 64K address space of the system's 6502 processor from low to high, including addresses used by the operating system or mapped to hardware registers, as well as how to use them. For example, location 756 ($2F4), known as CHBAS, holds the page number of the character set's starting address, telling ANTIC where to find the character set. The author explains how to use this feature to build custom character sets.
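A small sketch of the kind of address arithmetic involved is shown below; the helper function is hypothetical, under the assumption (standard Atari OS behavior) that CHBAS holds the high byte of a character-set address aligned to a 1 KB boundary:

```python
def chbas_value(charset_address: int) -> int:
    """Value to store at location 756 (CHBAS) for a custom character set.

    Assumes CHBAS holds the page number (high byte) of the character set's
    start address, which must lie on a 1 KB boundary.
    """
    if charset_address % 1024 != 0:
        raise ValueError("character set must start on a 1 KB boundary")
    return charset_address >> 8  # page number = high byte of the address

# The OS default character set lives at $E000, so the default CHBAS value
# is 224 -- in Atari BASIC one would write POKE 756,224 to restore it.
print(chbas_value(0xE000))  # 224
```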
An updated version covering changes to the operating system and newer machines like the 130XE followed in 1985. Antic magazine serialized the book in 1989 and 1990.
Reception
The Addison-Wesley Book of Atari Software 1984 recommended Mapping the Atari, calling it "the most valuable reference book for machine language programmers". Antic published its own introduction to the serialized version of the book.
References
External links
1983 non-fiction books
Atari 8-bit computers
Computer books | Mapping the Atari | Technology | 228 |
49,748,557 | https://en.wikipedia.org/wiki/Keio%20Alpha | Keio Alpha is a student design team formed by a group of postgraduate students from Graduate School of System Design and Management, Keio University, Japan. In July 2015, the team accepted an invitation from Elon Musk to submit design entries for an open-source pod that would advance Hyperloop technology development, in the SpaceX-sponsored Hyperloop pod competition.
The team was founded by David Chew Vee Kuan with his faculty advisor, Prof Yoshiaki Ohkami, to build a team of four students to submit a preliminary design to SpaceX for review.
The team proposed a vision of a Hyperloop system connecting satellite cities with major cities more than 500 km apart, travelling at the speed of a plane with the convenience of a train: people could move between the place they live and the place they work, 500 km apart yet only 30 minutes away. The team's design proposal used magnetic levitation, a light vacuum-tolerant vehicle system, a magnetic brake system and a safety system architecture.
Following review of the high-level design, the team was invited to the Design Weekend at Texas A&M University on 29–30 January 2016 to present their ideas and design to panel of judges along with other 120 teams.
Keio Alpha Hyperloop Team was selected as one of the 30 finalist teams to build, and then race, their pod on the prototype vacuum-tube test track adjacent to SpaceX headquarters in Hawthorne, California (near Los Angeles), USA.
References
Hyperloop | Keio Alpha | Technology,Engineering | 312 |
14,428,782 | https://en.wikipedia.org/wiki/Frizzled-1 | Frizzled-1 (Fz-1) is a protein that in humans is encoded by the FZD1 gene.
Function
Members of the 'frizzled' gene family encode 7-transmembrane domain proteins that are receptors for Wnt signaling proteins. The FZD1 protein contains a signal peptide, a cysteine-rich domain in the N-terminal extracellular region, 7 transmembrane domains, and a C-terminal PDZ domain-binding motif. The FZD1 transcript is expressed in various tissues.
References
Further reading
External links
G protein-coupled receptors | Frizzled-1 | Chemistry | 126 |
463,835 | https://en.wikipedia.org/wiki/Life%20on%20Mars | The possibility of life on Mars is a subject of interest in astrobiology due to the planet's proximity and similarities to Earth. To date, no conclusive evidence of past or present life has been found on Mars. Cumulative evidence suggests that during the ancient Noachian time period, the surface environment of Mars had liquid water and may have been habitable for microorganisms, but habitable conditions do not necessarily indicate life.
Scientific searches for evidence of life began in the 19th century and continue today via telescopic investigations and deployed probes, searching for water, chemical biosignatures in the soil and rocks at the planet's surface, and biomarker gases in the atmosphere.
Mars is of particular interest for the study of the origins of life because of its similarity to the early Earth. This is especially true since Mars has a cold climate and lacks plate tectonics or continental drift, so it has remained almost unchanged since the end of the Hesperian period. At least two-thirds of Mars' surface is more than 3.5 billion years old, and it could have been habitable 4.48 billion years ago, 500 million years before the earliest known Earth lifeforms; Mars may thus hold the best record of the prebiotic conditions leading to life, even if life does not or has never existed there.
Following the confirmation of the past existence of surface liquid water, the Curiosity, Perseverance and Opportunity rovers started searching for evidence of past life, including a past biosphere based on autotrophic, chemotrophic, or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, fossils, and organic compounds on Mars is now a primary objective for space agencies.
The discovery of organic compounds inside sedimentary rocks and of boron on Mars are of interest as they are precursors for prebiotic chemistry. Such findings, along with previous discoveries that liquid water was clearly present on ancient Mars, further supports the possible early habitability of Gale Crater on Mars. Currently, the surface of Mars is bathed with ionizing radiation, and Martian soil is rich in perchlorates toxic to microorganisms. Therefore, the consensus is that if life exists—or existed—on Mars, it could be found or is best preserved in the subsurface, away from present-day harsh surface processes.
In June 2018, NASA announced the detection of seasonal variation of methane levels on Mars. Methane could be produced by microorganisms or by geological means. The European ExoMars Trace Gas Orbiter started mapping the atmospheric methane in April 2018, and the 2022 ExoMars rover Rosalind Franklin was planned to drill and analyze subsurface samples before the programme's indefinite suspension, while the NASA Mars 2020 rover Perseverance, having landed successfully, will cache dozens of drill samples for their potential transport to Earth laboratories in the late 2020s or 2030s. As of February 8, 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. In October 2024, NASA announced that it may be possible for photosynthesis to occur within dusty water ice exposed in the mid-latitude regions of Mars.
Early speculation
Mars's polar ice caps were discovered in the mid-17th century. In the late 18th century, William Herschel proved they grow and shrink alternately, in the summer and winter of each hemisphere. By the mid-19th century, astronomers knew that Mars had certain other similarities to Earth, for example that the length of a day on Mars was almost the same as a day on Earth. They also knew that its axial tilt was similar to Earth's, which meant it experienced seasons just as Earth does—but of nearly double the length owing to its much longer year. These observations led to increasing speculation that the darker albedo features were water and the brighter ones were land, whence followed speculation on whether Mars may be inhabited by some form of life.
In 1854, William Whewell, a fellow of Trinity College, Cambridge, theorized that Mars had seas, land and possibly life forms. Speculation about life on Mars exploded in the late 19th century, following telescopic observation by some observers of apparent Martian canals—which were later found to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilization. This idea led British writer H. G. Wells to write The War of the Worlds in 1897, telling of an invasion by aliens from Mars who were fleeing the planet's desiccation.
The 1907 book Is Mars Habitable? by British naturalist Alfred Russel Wallace was a reply to, and refutation of, Lowell's Mars and Its Canals. Wallace's book concluded that Mars "is not only uninhabited by intelligent beings such as Mr. Lowell postulates, but is absolutely uninhabitable." Historian Charles H. Smith refers to Wallace's book as one of the first works in the field of astrobiology.
Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen were present in the Martian atmosphere. The influential observer Eugène Antoniadi used the 83-cm (32.6-inch) aperture telescope at Meudon Observatory at the 1909 opposition of Mars and saw no canals. The outstanding photos of Mars taken at the new Baillaud dome at the Pic du Midi observatory also brought formal discredit to the Martian canals theory in 1909, and the notion of canals began to fall out of favor.
Habitability
Chemical, physical, geological, and geographic attributes shape the environments on Mars. Isolated measurements of these factors may be insufficient to deem an environment habitable, but the sum of measurements can help predict locations with greater or lesser habitability potential. The two current ecological approaches for predicting the potential habitability of the Martian surface use 19 or 20 environmental factors, with an emphasis on water availability, temperature, the presence of nutrients, an energy source, and protection from solar ultraviolet and galactic cosmic radiation.
Scientists do not know the minimum number of parameters for determination of habitability potential, but they are certain it is greater than one or two factors. Similarly, for each group of parameters, the habitability threshold for each is to be determined. Laboratory simulations show that whenever multiple lethal factors are combined, the survival rates plummet quickly. There are no full-Mars simulations published yet that include all of the biocidal factors combined. Furthermore, the possibility of Martian life having a far different biochemistry and habitability requirements than the terrestrial biosphere is an open question. A common hypothesis is methanogenic Martian life; while such organisms exist on Earth too, they are exceptionally rare and cannot survive in the majority of terrestrial environments, which contain oxygen.
Past
Recent models have shown that, even with a dense CO2 atmosphere, early Mars was colder than Earth has ever been. Transiently warm conditions related to impacts or volcanism could have produced conditions favoring the formation of the late Noachian valley networks, even though the mid-late Noachian global conditions were probably icy. Local warming of the environment by volcanism and impacts would have been sporadic, but there should have been many events of water flowing at the surface of Mars. Both the mineralogical and the morphological evidence indicates a degradation of habitability from the mid-Hesperian onward. The exact causes are not well understood but may be related to a combination of processes including loss of the early atmosphere, or impact erosion, or both. Billions of years ago, before this degradation, the surface of Mars was apparently fairly habitable, with liquid water and clement weather, though it is unknown whether life existed there.
The loss of the Martian magnetic field strongly affected surface environments through atmospheric loss and increased radiation; this change significantly degraded surface habitability. When there was a magnetic field, the atmosphere would have been protected from erosion by the solar wind, which would ensure the maintenance of a dense atmosphere, necessary for liquid water to exist on the surface of Mars. The loss of the atmosphere was accompanied by decreasing temperatures. Part of the liquid water inventory sublimed and was transported to the poles, while the rest became trapped in permafrost, a subsurface ice layer.
Observations on Earth and numerical modeling have shown that a crater-forming impact can result in the creation of a long-lasting hydrothermal system when ice is present in the crust. For example, a 130-km-wide crater could sustain an active hydrothermal system for up to 2 million years, that is, long enough for microscopic life to emerge, but unlikely to have progressed any further down the evolutionary path.
Soil and rock samples studied in 2013 by NASA's Curiosity rover's onboard instruments provided additional information on several habitability factors. The rover team identified some of the key chemical ingredients for life in this soil, including sulfur, nitrogen, hydrogen, oxygen, phosphorus and possibly carbon, as well as clay minerals, suggesting a long-ago aqueous environment—perhaps a lake or an ancient streambed—that had neutral acidity and low salinity. On December 9, 2013, NASA reported that, based on evidence from Curiosity studying Aeolis Palus, Gale Crater contained an ancient freshwater lake which could have been a hospitable environment for microbial life. The confirmation that liquid water once flowed on Mars, the existence of nutrients, and the previous discovery of a past magnetic field that protected the planet from cosmic and solar radiation, together strongly suggest that Mars could have had the environmental factors to support life. The assessment of past habitability is not in itself evidence that Martian life has ever actually existed. If it did, it was probably microbial, existing communally in fluids or on sediments, either free-living or as biofilms, respectively. The exploration of terrestrial analogues provides clues as to how and where best to look for signs of life on Mars.
Impactite, shown to preserve signs of life on Earth, was discovered on Mars and could contain signs of ancient life, if life ever existed on the planet.
On June 7, 2018, NASA announced that the Curiosity rover had discovered organic molecules in sedimentary rocks dating to three billion years old. The detection of organic molecules in rocks indicate that some of the building blocks for life were present.
Research into how the conditions for habitability ended is ongoing. On October 7, 2024, NASA announced that the results of the previous three years of sampling onboard Curiosity suggested that, based on high carbon-13 and oxygen-18 levels in the regolith, the early Martian atmosphere was less likely than previously thought to be stable enough to support surface water hospitable to life, with rapid wetting-drying cycles and very high-salinity cryogenic brines providing potential explanations.
Present
Conceivably, if life exists (or existed) on Mars, evidence of life could be found, or is best preserved, in the subsurface, away from present-day harsh surface conditions. Present-day life on Mars, or its biosignatures, could occur kilometers below the surface, or in subsurface geothermal hot spots, or it could occur a few meters below the surface. The permafrost layer on Mars is only a couple of centimeters below the surface, and salty brines can be liquid a few centimeters below that but not far down. Water is close to its boiling point even at the deepest points in the Hellas basin, and so cannot remain liquid for long on the surface of Mars in its present state, except after a sudden release of underground water.
So far, NASA has pursued a "follow the water" strategy on Mars and has not searched for biosignatures for life there directly since the Viking missions. The consensus by astrobiologists is that it may be necessary to access the Martian subsurface to find currently habitable environments.
Cosmic radiation
In 1965, the Mariner 4 probe discovered that Mars had no global magnetic field that would protect the planet from potentially life-threatening cosmic radiation and solar radiation; observations made in the late 1990s by the Mars Global Surveyor confirmed this discovery. Scientists speculate that the lack of magnetic shielding helped the solar wind blow away much of Mars's atmosphere over the course of several billion years. As a result, the planet has been vulnerable to radiation from space for about 4 billion years.
Recent in-situ data from the Curiosity rover indicate that ionizing radiation from galactic cosmic rays (GCR) and solar particle events (SPE) may not be a limiting factor in habitability assessments for present-day surface life on Mars. The level of 76 mGy per year measured by Curiosity is similar to levels inside the ISS.
Cumulative effects
The Curiosity rover measured ionizing radiation levels of 76 mGy per year. This level of ionizing radiation is sterilizing for dormant life on the surface of Mars. Mars's surface habitability varies considerably depending on its orbital eccentricity and the tilt of its axis. If surface life was reanimated as recently as 450,000 years ago, then rovers on Mars could find dormant but still viable life at a depth of one meter below the surface, according to one estimate. Even the hardiest cells known could not possibly survive the cosmic radiation near the surface of Mars since Mars lost its protective magnetosphere and atmosphere. After mapping cosmic radiation levels at various depths on Mars, researchers have concluded that over time, any life within the first several meters of the planet's surface would be killed by lethal doses of cosmic radiation. The team calculated that the cumulative damage to DNA and RNA by cosmic radiation would limit retrieving viable dormant cells on Mars to depths greater than 7.5 meters below the planet's surface.
Even the most radiation-tolerant terrestrial bacteria would survive in a dormant spore state for only 18,000 years at the surface; at 2 meters, the greatest depth the ExoMars rover is capable of reaching, survival time would be 90,000 to half a million years, depending on the type of rock.
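As a back-of-the-envelope illustration of how such survival times are estimated, the sketch below divides an assumed sterilizing cumulative dose by the measured surface dose rate. The sterilizing-dose figure is a hypothetical round number for a highly radioresistant organism, not a value from the studies cited above:

```python
# Surface dose rate measured by Curiosity's RAD instrument (see text).
SURFACE_DOSE_RATE_GY_PER_YEAR = 0.076  # 76 mGy/year

def years_to_sterilize(sterilizing_dose_gy: float,
                       dose_rate_gy_per_year: float = SURFACE_DOSE_RATE_GY_PER_YEAR) -> float:
    """Years until the cumulative dose reaches an assumed sterilizing dose."""
    return sterilizing_dose_gy / dose_rate_gy_per_year

# Hypothetical sterilizing dose of 1 kGy for a very radioresistant dormant microbe.
print(f"{years_to_sterilize(1000.0):,.0f} years")  # ~13,000 years at the surface
```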
Data collected by the Radiation assessment detector (RAD) instrument on board the Curiosity rover revealed that the absorbed dose measured is 76 mGy/year at the surface, and that "ionizing radiation strongly influences chemical compositions and structures, especially for water, salts, and redox-sensitive components such as organic molecules." Regardless of the source of Martian organic compounds (meteoric, geological, or biological), its carbon bonds are susceptible to breaking and reconfiguring with surrounding elements by ionizing charged particle radiation. These improved subsurface radiation estimates give insight into the potential for the preservation of possible organic biosignatures as a function of depth as well as survival times of possible microbial or bacterial life forms left dormant beneath the surface. The report concludes that the in situ "surface measurements—and subsurface estimates—constrain the preservation window for Martian organic matter following exhumation and exposure to ionizing radiation in the top few meters of the Martian surface."
In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled and were associated with an aurora 25 times brighter than any observed earlier, due to a major, and unexpected, solar storm in the middle of the month.
UV radiation
On UV radiation, a 2014 report concludes that "[T]he Martian UV radiation environment is rapidly lethal to unshielded microbes but can be attenuated by global dust storms and shielded completely by < 1 mm of regolith or by other organisms." In addition, laboratory research published in July 2017 demonstrated that UV irradiated perchlorates cause a 10.8-fold increase in cell death when compared to cells exposed to UV radiation after 60 seconds of exposure. The penetration depth of UV radiation into soils is in the sub-millimeter to millimeter range and depends on the properties of the soil. A recent study found that photosynthesis could occur within dusty ice exposed in the Martian mid-latitudes because the overlying dusty ice blocks the harmful ultraviolet radiation at Mars’ surface.
Perchlorates
The Martian regolith is known to contain up to 0.5% (w/v) perchlorate (ClO4−), which is toxic for most living organisms. However, because perchlorates drastically lower the freezing point of water, and because a few extremophiles can use them as an energy source (see Perchlorates – Biology) and grow at concentrations of up to 30% (w/v) sodium perchlorate by physiologically adapting to increasing perchlorate concentrations, their presence has prompted speculation about their influence on habitability.
Research published in July 2017 shows that when irradiated with a simulated Martian UV flux, perchlorates become even more lethal to bacteria (bactericide). Even dormant spores lost viability within minutes. In addition, two other compounds of the Martian surface, iron oxides and hydrogen peroxide, act in synergy with irradiated perchlorates to cause a 10.8-fold increase in cell death when compared to cells exposed to UV radiation after 60 seconds of exposure. It was also found that abraded silicates (quartz and basalt) lead to the formation of toxic reactive oxygen species. The researchers concluded that "the surface of Mars is lethal to vegetative cells and renders much of the surface and near-surface regions uninhabitable." This research demonstrates that the present-day surface is more uninhabitable than previously thought, and reinforces the need to inspect at least a few meters into the ground, where radiation levels would be relatively low.
However, researcher Kennda Lynch discovered the first-known instance of a habitat containing perchlorates and perchlorates-reducing bacteria in an analog environment: a paleolake in Pilot Valley, Great Salt Lake Desert, Utah, United States. She has been studying the biosignatures of these microbes, and is hoping that the Mars Perseverance rover will find matching biosignatures at its Jezero Crater site.
Recurrent slope lineae
Recurrent slope lineae (RSL) features form on Sun-facing slopes at times of the year when the local temperatures reach above the melting point for ice. The streaks grow in spring, widen in late summer and then fade away in autumn. This is hard to model in any other way except as involving liquid water in some form, though the streaks themselves are thought to be a secondary effect and not a direct indication of the dampness of the regolith. Although these features are now confirmed to involve liquid water in some form, the water could be either too cold or too salty for life. At present they are treated as potentially habitable, as "Uncertain Regions, to be treated as Special Regions", having long been suspected of involving flowing brines.
The thermodynamic availability of water (water activity) strictly limits microbial propagation on Earth, particularly in hypersaline environments, and there are indications that the brine ionic strength is a barrier to the habitability of Mars. Experiments show that high ionic strength, driven to extremes on Mars by the ubiquitous occurrence of divalent ions, "renders these environments uninhabitable despite the presence of biologically available water."
Nitrogen fixation
After carbon, nitrogen is arguably the most important element needed for life. Thus, measurements of nitrate over the range of 0.1% to 5% are required to address the question of its occurrence and distribution. There is nitrogen (as N2) in the atmosphere at low levels, but this is not adequate to support nitrogen fixation for biological incorporation. Nitrogen in the form of nitrate could be a resource for human exploration both as a nutrient for plant growth and for use in chemical processes. On Earth, nitrates correlate with perchlorates in desert environments, and this may also be true on Mars. Nitrate is expected to be stable on Mars and to have formed by thermal shock from impact or volcanic plume lightning on ancient Mars.
On March 24, 2015, NASA reported that the SAM instrument on the Curiosity rover detected nitrates by heating surface sediments. The nitrogen in nitrate is in a "fixed" state, meaning that it is in an oxidized form that can be used by living organisms. The discovery supports the notion that ancient Mars may have been hospitable for life. It is suspected that all nitrate on Mars is a relic, with no modern contribution. Nitrate abundance ranges from non-detection to 681 ± 304 mg/kg in the samples examined until late 2017. Modeling indicates that the transient condensed water films on the surface should be transported to lower depths (≈10 m) potentially transporting nitrates, where subsurface microorganisms could thrive.
In contrast, phosphate, one of the chemical nutrients thought to be essential for life, is readily available on Mars.
Low pressure
Further complicating estimates of the habitability of the Martian surface is the fact that very little is known about the growth of microorganisms at pressures close to those on the surface of Mars. Some teams determined that some bacteria may be capable of cellular replication down to 25 mbar, but that is still above the atmospheric pressures found on Mars (range 1–14 mbar). In another study, twenty-six strains of bacteria were chosen based on their recovery from spacecraft assembly facilities, and only Serratia liquefaciens strain ATCC 27592 exhibited growth at 7 mbar, 0 °C, and CO2-enriched anoxic atmospheres.
Liquid water
Liquid water is a necessary but not sufficient condition for life as humans know it, as habitability is a function of a multitude of environmental parameters. Liquid water cannot exist on the surface of Mars except at the lowest elevations for minutes or hours. Liquid water does not appear at the surface itself, but it could form in minuscule amounts around dust particles in snow heated by the Sun. Also, the ancient equatorial ice sheets beneath the ground may slowly sublimate or melt, accessible from the surface via caves.
Water on Mars exists almost exclusively as water ice, located in the Martian polar ice caps and under the shallow Martian surface even at more temperate latitudes. A small amount of water vapor is present in the atmosphere. There are no bodies of liquid water on the Martian surface because the water vapor pressure is less than 1 Pa, the atmospheric pressure at the surface averages about 600 Pa (0.087 psi), about 0.6% of Earth's mean sea level pressure, and because the temperature is far too low (averaging about 210 K, or −63 °C), leading to immediate freezing. Despite this, about 3.8 billion years ago, there was a denser atmosphere, higher temperature, and vast amounts of liquid water flowed on the surface, including large oceans.
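The instability of surface liquid water follows directly from these numbers; the short sketch below compares the mean surface pressure with the triple-point pressure of water, 611.7 Pa, below which liquid water cannot exist at any temperature. Using the planetary mean is a simplification, since local pressure on Mars varies with elevation:

```python
# Triple point of water: below this pressure, ice sublimates directly to
# vapor and liquid water cannot exist at any temperature.
TRIPLE_POINT_PRESSURE_PA = 611.7

MARS_MEAN_SURFACE_PRESSURE_PA = 600.0   # vs. Earth's 101,325 Pa at sea level

liquid_possible = MARS_MEAN_SURFACE_PRESSURE_PA > TRIPLE_POINT_PRESSURE_PA
print(f"Earth fraction: {MARS_MEAN_SURFACE_PRESSURE_PA / 101_325:.1%}")   # ~0.6%
print(f"Liquid water stable at mean surface pressure: {liquid_possible}")  # False
```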
It has been estimated that the primordial oceans on Mars would have covered between 36% and 75% of the planet. On November 22, 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region of Mars. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior.
Analysis of Martian sandstones, using data obtained from orbital spectrometry, suggests that the waters that previously existed on the surface of Mars would have had too high a salinity to support most Earth-like life. Tosca et al. found that the Martian water in the locations they studied all had water activity aw ≤ 0.78 to 0.86, a level fatal to most terrestrial life. Haloarchaea, however, are able to live in hypersaline solutions, up to the saturation point.
In June 2000, possible evidence for current liquid water flowing at the surface of Mars was discovered in the form of flood-like gullies. Additional similar images were published in 2006, taken by the Mars Global Surveyor, that suggested that water occasionally flows on the surface of Mars. The images showed changes in steep crater walls and sediment deposits, providing the strongest evidence yet that water coursed through them as recently as several years ago.
There is disagreement in the scientific community as to whether or not the recent gully streaks were formed by liquid water. Some suggest the flows were merely dry sand flows. Others suggest it may be liquid brine near the surface, but the exact source of the water and the mechanism behind its motion are not understood.
In July 2018, scientists reported the discovery of a subglacial lake on Mars, below the southern polar ice cap, and extending sideways about 20 km, the first known stable body of water on the planet. The lake was discovered using the MARSIS radar on board the Mars Express orbiter, and the profiles were collected between May 2012 and December 2015. The lake is centered at 193°E, 81°S, a flat area that does not exhibit any peculiar topographic characteristics but is surrounded by higher ground, except on its eastern side, where there is a depression. However, subsequent studies disagree on whether any liquid can be present at this depth without anomalous heating from the interior of the planet. Instead, some studies propose that other factors may have led to radar signals resembling those containing liquid water, such as clays, or interference between layers of ice and dust.
Silica
In May 2007, the Spirit rover disturbed a patch of ground with its inoperative wheel, uncovering an area 90% rich in silica. The feature is reminiscent of the effect of hot spring water or steam coming into contact with volcanic rocks. Scientists consider this as evidence of a past environment that may have been favorable for microbial life and theorize that one possible origin for the silica may have been produced by the interaction of soil with acid vapors produced by volcanic activity in the presence of water.
Based on Earth analogs, hydrothermal systems on Mars would be highly attractive for their potential for preserving organic and inorganic biosignatures. For this reason, hydrothermal deposits are regarded as important targets in the exploration for fossil evidence of ancient Martian life.
Possible biosignatures
In May 2017, evidence of the earliest known life on land on Earth may have been found in 3.48-billion-year-old geyserite and other related mineral deposits (often found around hot springs and geysers) uncovered in the Pilbara Craton of Western Australia. These findings may be helpful in deciding where best to search for early signs of life on the planet Mars.
Methane
Methane (CH4) is chemically unstable in the current oxidizing atmosphere of Mars. It would quickly break down due to ultraviolet radiation from the Sun and chemical reactions with other gases. Therefore, a persistent presence of methane in the atmosphere may imply the existence of a source to continually replenish the gas.
Trace amounts of methane, at the level of several parts per billion (ppb), were first reported in Mars's atmosphere by a team at the NASA Goddard Space Flight Center in 2003. Large differences in the abundances were measured between observations taken in 2003 and 2006, which suggested that the methane was locally concentrated and probably seasonal. On June 7, 2018, NASA announced it has detected a seasonal variation of methane levels on Mars.
The ExoMars Trace Gas Orbiter (TGO), launched in March 2016, began on April 21, 2018, to map the concentration and sources of methane in the atmosphere, as well as its decomposition products such as formaldehyde and methanol. As of May 2019, the Trace Gas Orbiter showed that the concentration of methane is under detectable level (< 0.05 ppbv).
The principal candidates for the origin of Mars's methane include non-biological processes such as water-rock reactions, radiolysis of water, and pyrite formation, all of which produce H2 that could then generate methane and other hydrocarbons via Fischer–Tropsch synthesis with CO and CO2. It has also been shown that methane could be produced by a process involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars. Although geologic sources of methane such as serpentinization are possible, the lack of current volcanism, hydrothermal activity or hotspots is not favorable for geologic methane.
Living microorganisms, such as methanogens, are another possible source, but no evidence for the presence of such organisms has been found on Mars; methane was, however, again detected by the Curiosity rover in June 2019. Methanogens do not require oxygen or organic nutrients, are non-photosynthetic, use hydrogen as their energy source and carbon dioxide (CO2) as their carbon source, so they could exist in subsurface environments on Mars. If microscopic Martian life is producing the methane, it probably resides far below the surface, where it is still warm enough for liquid water to exist.
Since the 2003 discovery of methane in the atmosphere, some scientists have been designing models and in vitro experiments testing the growth of methanogens on simulated Martian soil, where all four methanogen strains tested produced substantial levels of methane, even in the presence of 1.0 wt% perchlorate salt.
A team led by Levin suggested that both phenomena—methane production and degradation—could be accounted for by an ecology of methane-producing and methane-consuming microorganisms.
Research at the University of Arkansas presented in June 2015 suggested that some methanogens could survive in Mars's low pressure. Rebecca Mickol found that in her laboratory, four species of methanogens survived low-pressure conditions that were similar to a subsurface liquid aquifer on Mars. The four species that she tested were Methanothermobacter wolfeii, Methanosarcina barkeri, Methanobacterium formicicum, and Methanococcus maripaludis. In June 2012, scientists reported that measuring the ratio of hydrogen and methane levels on Mars may help determine the likelihood of life on Mars. According to the scientists, "low H2/CH4 ratios (less than approximately 40)" would "indicate that life is likely present and active". The observed ratios in the lower Martian atmosphere were "approximately 10 times" higher "suggesting that biological processes may not be responsible for the observed CH4". The scientists suggested measuring the H2 and CH4 flux at the Martian surface for a more accurate assessment. Other scientists have recently reported methods of detecting hydrogen and methane in extraterrestrial atmospheres.
Even if rover missions determine that microscopic Martian life is the seasonal source of the methane, the life forms probably reside far below the surface, outside of the rover's reach.
Formaldehyde
In February 2005, it was announced that the Planetary Fourier Spectrometer (PFS) on the European Space Agency's Mars Express Orbiter had detected traces of formaldehyde in the atmosphere of Mars. Vittorio Formisano, the director of the PFS, has speculated that the formaldehyde could be the byproduct of the oxidation of methane and, according to him, would provide evidence that Mars is either extremely geologically active or harboring colonies of microbial life. NASA scientists consider the preliminary findings well worth a follow-up but have also rejected the claims of life.
Viking lander biological experiments
The 1970s Viking program placed two identical landers on the surface of Mars tasked to look for biosignatures of microbial life on the surface. The 'Labeled Release' (LR) experiment gave a positive result for metabolism, while the gas chromatograph–mass spectrometer did not detect organic compounds. The LR was a specific experiment designed to test only a narrowly defined critical aspect of the theory concerning the possibility of life on Mars; therefore, the overall results were declared inconclusive. No Mars lander mission has found meaningful traces of biomolecules or biosignatures. The claim of extant microbial life on Mars is based on old data collected by the Viking landers, currently reinterpreted as sufficient evidence of life, mainly by Gilbert Levin, Joseph D. Miller, Rafael Navarro-González, Giorgio Bianciardi and Patricia Ann Straat.
Assessments published in December 2010 by Rafael Navarro-González indicate that organic compounds "could have been present" in the soil analyzed by both Viking 1 and 2. The study determined that perchlorate—discovered in 2008 by the Phoenix lander—can destroy organic compounds when heated, and produce chloromethane and dichloromethane as a byproduct, the identical chlorine compounds discovered by both Viking landers when they performed the same tests on Mars. Because perchlorate would have broken down any Martian organics, the question of whether or not Viking found organic compounds is still wide open.
The Labeled Release evidence was not generally accepted initially and, to this day, lacks the consensus of the scientific community.
Meteorites
As of 2018, there are 224 known Martian meteorites (some of which were found in several fragments). These are valuable because they are the only physical samples of Mars available to Earth-bound laboratories. Some researchers have argued that microscopic morphological features found in ALH84001 are biomorphs; however, this interpretation has been highly controversial and is not supported by the majority of researchers in the field.
Seven criteria have been established for the recognition of past life within terrestrial geologic samples. Those criteria are:
Is the geologic context of the sample compatible with past life?
Is the age of the sample and its stratigraphic location compatible with possible life?
Does the sample contain evidence of cellular morphology and colonies?
Is there any evidence of biominerals showing chemical or mineral disequilibria?
Is there any evidence of stable isotope patterns unique to biology?
Are there any organic biomarkers present?
Are the features indigenous to the sample?
For general acceptance of past life in a geologic sample, essentially most or all of these criteria must be met. All seven criteria have not yet been met for any of the Martian samples.
ALH84001
In 1996, the Martian meteorite ALH84001, a specimen that is much older than the majority of Martian meteorites that have been recovered so far, received considerable attention when a group of NASA scientists led by David S. McKay reported microscopic features and geochemical anomalies that they considered to be best explained by the rock having hosted Martian bacteria in the distant past. Some of these features resembled terrestrial bacteria, aside from their being much smaller than any known form of life. Much controversy arose over this claim, and ultimately all of the evidence McKay's team cited as evidence of life was found to be explainable by non-biological processes. Although the scientific community has largely rejected the claim ALH 84001 contains evidence of ancient Martian life, the controversy associated with it is now seen as a historically significant moment in the development of exobiology.
Nakhla
The Nakhla meteorite fell to Earth on June 28, 1911, in the locality of Nakhla, Alexandria, Egypt.
In 1998, a team from NASA's Johnson Space Center obtained a small sample for analysis. Researchers found preterrestrial aqueous alteration phases and objects of the size and shape consistent with Earthly fossilized nanobacteria.
Analysis with gas chromatography and mass spectrometry (GC-MS) studied its high molecular weight polycyclic aromatic hydrocarbons in 2000, and NASA scientists concluded that as much as 75% of the organic compounds in Nakhla "may not be recent terrestrial contamination".
This caused additional interest in this meteorite, so in 2006, NASA managed to obtain an additional and larger sample from the London Natural History Museum. On this second sample, a large dendritic carbon content was observed. When the results and evidence were published in 2006, some independent researchers claimed that the carbon deposits are of biologic origin. It was remarked that since carbon is the fourth most abundant element in the Universe, finding it in curious patterns is not indicative or suggestive of biological origin.
Shergotty
The Shergotty meteorite, a Martian meteorite, fell to Earth at Shergotty, India on August 25, 1865, and was retrieved by witnesses almost immediately. It is composed mostly of pyroxene and is thought to have undergone preterrestrial aqueous alteration for several centuries. Certain features in its interior suggest remnants of a biofilm and its associated microbial communities.
Yamato 000593
Yamato 000593 is the second largest meteorite from Mars found on Earth. Studies suggest the Martian meteorite was formed about 1.3 billion years ago from a lava flow on Mars. An impact occurred on Mars about 12 million years ago and ejected the meteorite from the Martian surface into space. The meteorite landed on Earth in Antarctica about 50,000 years ago. The mass of the meteorite is 13.7 kg, and it has been found to contain evidence of past water movement. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity according to NASA scientists.
Ichnofossil-like structures
Organism–substrate interactions and their products are important biosignatures on Earth as they represent direct evidence of biological behaviour. It was the recovery of fossilized products of life-substrate interactions (ichnofossils) that has revealed biological activities in the early history of life on the Earth, e.g., Proterozoic burrows, Archean microborings and stromatolites. Two major ichnofossil-like structures have been reported from Mars, i.e. the stick-like structures from Vera Rubin Ridge and the microtunnels from Martian Meteorites.
Observations at Vera Rubin Ridge by the Mars Science Laboratory rover Curiosity show millimetric, elongate structures preserved in sedimentary rocks deposited in fluvio-lacustrine environments within Gale Crater. Morphometric and topologic data on the stick-like structures are unlike those of other Martian geological features and show that ichnofossils are among their closest morphological analogues. Nevertheless, available data cannot fully disprove two major abiotic hypotheses: sedimentary cracking and evaporitic crystal growth as genetic processes for the structures.
Microtunnels have been described from Martian meteorites. They consist of straight to curved microtunnels that may contain areas of enhanced carbon abundance. The morphology of the curved microtunnels is consistent with biogenic traces on Earth, including microbioerosion traces observed in basaltic glasses. Further studies are needed to confirm biogenicity.
Geysers
The seasonal frosting and defrosting of the southern ice cap results in the formation of spider-like radial channels carved on 1-meter thick ice by sunlight. Then, sublimed CO2 – and probably water – increase pressure in their interior producing geyser-like eruptions of cold fluids often mixed with dark basaltic sand or mud. This process is rapid, observed happening in the space of a few days, weeks or months, a growth rate rather unusual in geology – especially for Mars.
A team of Hungarian scientists propose that the geysers' most visible features, dark dune spots and spider channels, may be colonies of photosynthetic Martian microorganisms, which over-winter beneath the ice cap, and as the sunlight returns to the pole during early spring, light penetrates the ice, the microorganisms photosynthesize and heat their immediate surroundings. A pocket of liquid water, which would normally evaporate instantly in the thin Martian atmosphere, is trapped around them by the overlying ice. As this ice layer thins, the microorganisms show through grey. When the layer has completely melted, the microorganisms rapidly desiccate and turn black, surrounded by a grey aureole. The Hungarian scientists believe that even a complex sublimation process is insufficient to explain the formation and evolution of the dark dune spots in space and time. Since their discovery, fiction writer Arthur C. Clarke promoted these formations as deserving of study from an astrobiological perspective.
A multinational European team suggests that if liquid water is present in the spiders' channels during their annual defrost cycle, they might provide a niche where certain microscopic life forms could have retreated and adapted while sheltered from solar radiation. A British team also considers the possibility that organic matter, microbes, or even simple plants might co-exist with these inorganic formations, especially if the mechanism includes liquid water and a geothermal energy source. They also remark that the majority of geological structures may be accounted for without invoking any organic "life on Mars" hypothesis. It has been proposed to develop the Mars Geyser Hopper lander to study the geysers up close.
Forward contamination
Planetary protection of Mars aims to prevent biological contamination of the planet. A major goal is to preserve the planetary record of natural processes by preventing human-caused microbial introductions, also called forward contamination. There is abundant evidence as to what can happen when organisms from regions on Earth that have been isolated from one another for significant periods of time are introduced into each other's environment. Species that are constrained in one environment can thrive – often out of control – in another environment much to the detriment of the original species that were present. In some ways, this problem could be compounded if life forms from one planet were introduced into the totally alien ecology of another world.
The prime concern of hardware contaminating Mars derives from incomplete spacecraft sterilization of some hardy terrestrial bacteria (extremophiles) despite best efforts. Hardware includes landers, crashed probes, end-of-mission disposal of hardware, and the hard landing of entry, descent, and landing systems. This has prompted research on survival rates of radiation-resistant microorganisms including the species Deinococcus radiodurans and genera Brevundimonas, Rhodococcus, and Pseudomonas under simulated Martian conditions. Results from one of these irradiation experiments, combined with previous radiation modeling, indicate that Brevundimonas sp. MV.7 emplaced only 30 cm deep in Martian dust could survive the cosmic radiation for up to 100,000 years before suffering a 10⁶ (one-million-fold) reduction in population. The diurnal Mars-like cycles in temperature and relative humidity affected the viability of Deinococcus radiodurans cells quite severely. In other simulations, Deinococcus radiodurans also failed to grow under low atmospheric pressure, under 0 °C, or in the absence of oxygen.
Survival under simulated Martian conditions
Since the 1950s, researchers have used containers that simulate environmental conditions on Mars to determine the viability of a variety of lifeforms on Mars. Such devices, called "Mars jars" or "Mars simulation chambers", were first described and used in U.S. Air Force research in the 1950s by Hubertus Strughold, and popularized in civilian research by Joshua Lederberg and Carl Sagan.
On April 26, 2012, scientists reported that an extremophile lichen survived a 34-day simulation under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR), showing a remarkable capacity for adaptation of its photosynthetic activity. The ability to survive in an environment is not the same as the ability to thrive, reproduce, and evolve in that same environment, necessitating further study.
Although numerous studies point to resistance to some of Mars conditions, they do so separately, and none has considered the full range of Martian surface conditions, including temperature, pressure, atmospheric composition, radiation, humidity, oxidizing regolith including perchlorates, and others, all at the same time and in combination. Laboratory simulations show that whenever multiple lethal factors are combined, the survival rates plummet quickly.
Water salinity and temperature
Astrobiologists funded by NASA are researching the limits of microbial life in solutions with high salt concentrations at low temperature. Any body of liquid water under the polar ice caps or underground is likely to exist under high hydrostatic pressure and have a significant salt concentration. The landing site of the Phoenix lander was found to be regolith cemented with water ice and salts, and the soil samples likely contained magnesium sulfate, magnesium perchlorate, sodium perchlorate, potassium perchlorate, sodium chloride and calcium carbonate. Earth bacteria capable of growth and reproduction in the presence of highly salted solutions, called halophiles or "salt-lovers", were tested for survival using salts commonly found on Mars and at decreasing temperatures. The species tested include Halomonas, Marinococcus, Nesterenkonia, and Virgibacillus. Laboratory simulations show that whenever multiple Martian environmental factors are combined, the survival rates plummet quickly; however, halophile bacteria were grown in a lab in water solutions containing more than 25% of salts common on Mars, and starting in 2019, the experiments will incorporate exposure to low temperature, salts, and high pressure.
Mars-like regions on Earth
On 21 February 2023, scientists reported the findings of a "dark microbiome" of unfamiliar microorganisms in the Atacama Desert in Chile, a Mars-like region of Earth.
Missions
Mars-2
Mars-1 was the first spacecraft launched to Mars in 1962, but communication was lost while it was en route to Mars. With Mars-2 and Mars-3 in 1971–1972, information was obtained on the nature of the surface rocks and altitude profiles of the surface density of the soil, its thermal conductivity, and thermal anomalies detected on the surface of Mars. The program found that its northern polar cap has a temperature below −110 °C and that the water vapor content in the atmosphere of Mars is five thousand times less than on Earth. No signs of life were found.
No signs of life were detected from orbit by the program's automatic interplanetary stations. The Mars-2 descent vehicle crashed on landing, while the Mars-3 descent vehicle began transmitting 1.5 minutes after landing in Ptolemaeus crater, but worked for only 14.5 seconds.
Mariner 4
Mariner 4 probe performed the first successful flyby of the planet Mars, returning the first pictures of the Martian surface in 1965. The photographs showed an arid Mars without rivers, oceans, or any signs of life. Further, it revealed that the surface (at least the parts that it photographed) was covered in craters, indicating a lack of plate tectonics and weathering of any kind for the last 4 billion years. The probe also found that Mars has no global magnetic field that would protect the planet from potentially life-threatening cosmic rays. The probe was able to calculate the atmospheric pressure on the planet to be about 0.6 kPa (compared to Earth's 101.3 kPa), meaning that liquid water could not exist on the planet's surface. After Mariner 4, the search for life on Mars changed to a search for bacteria-like living organisms rather than for multicellular organisms, as the environment was clearly too harsh for these.
Viking orbiters
Liquid water is necessary for known life and metabolism, so if water was present on Mars, the chances that it once supported life may have been significant. The Viking orbiters found evidence of possible river valleys in many areas, erosion and, in the southern hemisphere, branched streams.
Viking biological experiments
The primary mission of the Viking probes of the mid-1970s was to carry out experiments designed to detect microorganisms in Martian soil because the favorable conditions for the evolution of multicellular organisms ceased some four billion years ago on Mars. The tests were formulated to look for microbial life similar to that found on Earth. Of the four experiments, only the Labeled Release (LR) experiment returned a positive result, showing increased 14CO2 production on first exposure of soil to water and nutrients. All scientists agree on two points from the Viking missions: that radiolabeled 14CO2 was evolved in the Labeled Release experiment, and that the GCMS detected no organic molecules. There are vastly different interpretations of what those results imply: A 2011 astrobiology textbook notes that the GCMS was the decisive factor due to which "For most of the Viking scientists, the final conclusion was that the Viking missions failed to detect life in the Martian soil."
Norman Horowitz was the head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life.
One of the designers of the Labeled Release experiment, Gilbert Levin, believes his results are a definitive diagnostic for life on Mars. Levin's interpretation is disputed by many scientists. A 2006 astrobiology textbook noted that "With unsterilized Terrestrial samples, though, the addition of more nutrients after the initial incubation would then produce still more radioactive gas as the dormant bacteria sprang into action to consume the new dose of food. This was not true of the Martian soil; on Mars, the second and third nutrient injections did not produce any further release of labeled gas." Other scientists argue that superoxides in the soil could have produced this effect without life being present. An almost general consensus discarded the Labeled Release data as evidence of life, because the gas chromatograph and mass spectrometer, designed to identify natural organic matter, did not detect organic molecules. More recently, high levels of organic chemicals, particularly chlorobenzene, were detected in powder drilled from one of the rocks, named "Cumberland", analyzed by the Curiosity rover. The results of the Viking mission concerning life are considered by the general expert community as inconclusive.
In 2007, during a Seminar of the Geophysical Laboratory of the Carnegie Institution (Washington, D.C., US), Gilbert Levin's investigation was assessed once more. Levin still maintains that his original data were correct, as the positive and negative control experiments were in order. Moreover, Levin's team, on April 12, 2012, reported a statistical speculation, based on old data—reinterpreted mathematically through cluster analysis—of the Labeled Release experiments, that may suggest evidence of "extant microbial life on Mars". Critics counter that the method has not yet been proven effective for differentiating between biological and non-biological processes on Earth so it is premature to draw any conclusions.
A research team from the National Autonomous University of Mexico headed by Rafael Navarro-González concluded that the GCMS equipment (TV-GC-MS) used by the Viking program to search for organic molecules may not be sensitive enough to detect low levels of organics. Klaus Biemann, the principal investigator of the GCMS experiment on Viking, wrote a rebuttal. Because of the simplicity of sample handling, TV–GC–MS is still considered the standard method for organic detection on future Mars missions, so Navarro-González suggests that the design of future organic instruments for Mars should include other methods of detection.
After the discovery of perchlorates on Mars by the Phoenix lander, practically the same team of Navarro-González published a paper arguing that the Viking GCMS results were compromised by the presence of perchlorates. A 2011 astrobiology textbook notes that "while perchlorate is too poor an oxidizer to reproduce the LR results (under the conditions of that experiment perchlorate does not oxidize organics), it does oxidize, and thus destroy, organics at the higher temperatures used in the Viking GCMS experiment." Biemann has written a commentary critical of this Navarro-González paper as well, to which the latter have replied; the exchange was published in December 2011.
Phoenix lander, 2008
The Phoenix mission landed a robotic spacecraft in the polar region of Mars on May 25, 2008, and it operated until November 10, 2008. One of the mission's two primary objectives was to search for a "habitable zone" in the Martian regolith where microbial life could exist, the other main goal being to study the geological history of water on Mars. The lander had a 2.5-meter robotic arm that was capable of digging shallow trenches in the regolith, and it carried an electrochemistry experiment which analysed the ions in the regolith and the amount and type of antioxidants on Mars. The Viking program data indicate that oxidants on Mars may vary with latitude, noting that Viking 2 saw fewer oxidants than Viking 1 in its more northerly position. Phoenix landed further north still.
Phoenix's preliminary data revealed that Martian soil contains perchlorate, and thus may not be as life-friendly as thought earlier. The pH and salinity level were viewed as benign from the standpoint of biology. The analysers also indicated the presence of bound water and CO2. A recent analysis of Martian meteorite EETA79001 found 0.6 ppm ClO4−, 1.4 ppm ClO3−, and 16 ppm NO3−, most likely of Martian origin. The ClO3− suggests the presence of other highly oxidizing oxychlorines such as ClO2− or ClO, produced both by UV oxidation of Cl and X-ray radiolysis of ClO4−. Thus only highly refractory and/or well-protected (sub-surface) organics are likely to survive. In addition, recent analysis of the Phoenix WCL showed that the Ca(ClO4)2 in the Phoenix soil has not interacted with liquid water of any form, perhaps for as long as 600 Myr. If it had, the highly soluble Ca(ClO4)2 in contact with liquid water would have formed only CaSO4. This suggests a severely arid environment, with minimal or no liquid water interaction.
Mars Science Laboratory (Curiosity rover)
The Mars Science Laboratory mission is a NASA project that launched on November 26, 2011, the Curiosity rover, a nuclear-powered robotic vehicle, bearing instruments designed to assess past and present habitability conditions on Mars. The Curiosity rover landed on Mars on Aeolis Palus in Gale Crater, near Aeolis Mons (a.k.a. Mount Sharp), on August 6, 2012.
On December 16, 2014, NASA reported the Curiosity rover detected a "tenfold spike", likely localized, in the amount of methane in the Martian atmosphere. Sample measurements taken "a dozen times over 20 months" showed increases in late 2013 and early 2014, averaging "7 parts of methane per billion in the atmosphere". Before and after that, readings averaged around one-tenth that level. In addition, low levels of chlorobenzene (C6H5Cl) were detected in powder drilled from one of the rocks, named "Cumberland", analyzed by the Curiosity rover.
Mars 2020 (Perseverance rover)
The NASA Mars 2020 mission includes the Perseverance rover. Launched on July 30, 2020, it is intended to investigate an astrobiologically relevant ancient environment on Mars. This includes its surface geological processes and history, and an assessment of its past habitability and the potential for preservation of biosignatures within accessible geological materials. Perseverance landed on Mars on February 18, 2021.
The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, no definitive final determination on a biological or abiotic origin of this rock can be made with the data currently available.
Future astrobiology missions
ExoMars is a European-led multi-spacecraft programme developed by the European Space Agency (ESA) and Roscosmos, with launches originally planned for 2016 and 2020. Its primary scientific mission is to search for possible biosignatures on Mars, past or present. A rover with a core drill will be used to sample various depths beneath the surface where liquid water may be found and where microorganisms or organic biosignatures might survive cosmic radiation. The programme was suspended in 2022, and is unlikely to launch before 2028.
Mars sample-return mission – The best life detection experiment proposed is the examination on Earth of a soil sample from Mars. However, the difficulty of providing and maintaining life support over the months of transit from Mars to Earth remains to be solved. Providing for still unknown environmental and nutritional requirements is daunting, so it was concluded that "investigating carbon-based organic compounds would be one of the more fruitful approaches for seeking potential signs of life in returned samples as opposed to culture-based approaches."
Human colonization of Mars
Some of the main reasons for colonizing Mars include economic interests, long-term scientific research best carried out by humans as opposed to robotic probes, and sheer curiosity. Surface conditions and the presence of water on Mars make it arguably the most hospitable of the planets in the Solar System, other than Earth. Human colonization of Mars would require in situ resource utilization (ISRU); a NASA report states that "applicable frontier technologies include robotics, machine intelligence, nanotechnology, synthetic biology, 3-D printing/additive manufacturing, and autonomy. These technologies combined with the vast natural resources should enable, pre- and post-human arrival ISRU to greatly increase reliability and safety and reduce cost for human colonization of Mars."
See also
Areography (geography of Mars)
Carbonates on Mars
Chloride-bearing deposits on Mars
Composition of Mars
Elysium Planitia
Fretted terrain
Geology of Mars
Glaciers on Mars
Gravity of Mars
Groundwater on Mars
Hecates Tholus
Lakes on Mars
List of quadrangles on Mars
List of rocks on Mars
Magnetic field of Mars
Mars Geyser Hopper
Martian craters
Martian dichotomy
Martian geyser
Martian gullies
Martian soil
Mineralogy of Mars
Ore resources on Mars
Scientific information from the Mars Exploration Rover mission
Seasonal flows on warm Martian slopes
Vallis
Water on Mars
References
External links
Scientist says that life on Mars is likely today
Ancient salty sea on Mars wins as the most important scientific achievement of 2004 – Journal Science
Mars meteor found on Earth provides evidence that suggests microbial life once existed on Mars
Scientific American Magazine (November 2005 Issue) Did Life Come from Another World?
Audio interview about "Dark Dune Spots"
Astrobiology
Mars
Astronomical controversies | Life on Mars | Astronomy,Biology | 12,006 |
2,903,271 | https://en.wikipedia.org/wiki/42%20Aurigae | 42 Aurigae is a star in the northern constellation of Auriga. The designation is from the star catalogue of English astronomer John Flamsteed, first published in 1712. It has an apparent visual magnitude of 6.53, which places it just below the visibility limit for normal eyesight under good seeing conditions. It displays an annual parallax shift of 13.24 mas, which yields a distance estimate of around 246 light years. The star is moving closer to the Sun with a radial velocity of −12 km/s. It is a member of the Ursa Major Moving Group of stars that share a common motion through space.
The star was assigned a stellar classification of F0 V by Nancy Roman in 1949, indicating it is an F-type main-sequence star. However, in 1995 Abt and Morrell catalogued it instead as a somewhat hotter and more massive A-type main-sequence star that displays spectral peculiarities as well as nebulous lines brought about by rapid rotation. It is around a billion years old with a high rate of spin, showing a projected rotational velocity of 228 km/s. The star has an estimated 1.7 times the mass of the Sun and is radiating 10 times the Sun's luminosity from its photosphere at an effective temperature of around 7,660 K.
References
F-type main-sequence stars
A-type main-sequence stars
Ursa Major moving group
Auriga
Durchmusterung objects
Aurigae, 42
043244
029884
2228 | 42 Aurigae | Astronomy | 320 |
20,962,049 | https://en.wikipedia.org/wiki/McGehee%20transformation | The McGehee transformation was introduced by Richard McGehee to study the triple collision singularity in the n-body problem.
The transformation blows up the single point in phase space where the collision occurs into a collision manifold, the phase space point is cut out and in its place a smooth manifold is pasted. This allows the phase space singularity to be studied in detail.
What McGehee found was a distorted sphere with four horns pulled out to infinity and the points at their tips deleted. McGehee then went on to study the flow on the collision manifold.
References
Celestial Encounters: The Origins of Chaos and Stability, Diacu/Holmes, Princeton Science Library
Classical mechanics | McGehee transformation | Physics | 139 |
60,848,020 | https://en.wikipedia.org/wiki/Tutte%E2%80%93Grothendieck%20invariant | In mathematics, a Tutte–Grothendieck (TG) invariant is a type of graph invariant that satisfies a generalized deletion–contraction formula. Any evaluation of the Tutte polynomial would be an example of a TG invariant.
Definition
A graph function f is TG-invariant if:
f(G) = c^|V(G)| if G has no edges
f(G) = x f(G/e) if e is a bridge
f(G) = y f(G\e) if e is a loop
f(G) = a f(G/e) + b f(G\e) if e is an ordinary edge
Above G / e denotes edge contraction whereas G \ e denotes deletion. The numbers c, x, y, a, b are parameters.
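This deletion–contraction recursion translates directly into an algorithm. Below is a minimal Python sketch (illustrative, not taken from the literature) that evaluates the Tutte polynomial T(G; x, y) – the universal TG invariant, corresponding to the normalization c = 1 and a = b = 1 – at a numeric point; the helper names tutte, contract, and connected are this sketch's own.

```python
def connected(edges, s, t):
    """True if s and t are joined by a path in the multigraph `edges`."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    stack, seen = [s], {s}
    while stack:
        node = stack.pop()
        if node == t:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def contract(edges, u, v):
    """Contract the edge (u, v): rename every endpoint v to u."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def tutte(edges, x, y):
    """Evaluate T(G; x, y) by deletion-contraction; `edges` is a list of
    (u, v) pairs, with loops and parallel edges allowed."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                      # loop: factor y, delete e
        return y * tutte(rest, x, y)
    if not connected(rest, u, v):   # bridge: factor x, contract e
        return x * tutte(contract(rest, u, v), x, y)
    # ordinary edge: delete + contract
    return tutte(rest, x, y) + tutte(contract(rest, u, v), x, y)

# T(K3; x, y) = x^2 + x + y, so T(K3; 2, 2) = 8 = 2^|E|,
# the number of spanning subgraphs of the triangle.
print(tutte([(0, 1), (1, 2), (0, 2)], 2, 2))  # 8
```

Any other evaluation of the Tutte polynomial, and hence any TG invariant up to the rescaling described below, can be obtained by changing the point (x, y) passed to this recursion.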
Generalization to matroids
The matroid function f is TG if:
f(M) = x f(M/e) if e is a coloop
f(M) = y f(M\e) if e is a loop
f(M) = a f(M/e) + b f(M\e) otherwise
It can be shown that f is given by:
f(M) = a^r(E) b^(|E| − r(E)) T(M; x/a, y/b)
where E is the edge set of M; r is the rank function; and T(M; x, y)
is the generalization of the Tutte polynomial to matroids.
Grothendieck group
The invariant is named after Alexander Grothendieck because of a similar construction of the Grothendieck group used in the Riemann–Roch theorem. For more details see:
References
Graph invariants | Tutte–Grothendieck invariant | Mathematics | 200 |
75,735,877 | https://en.wikipedia.org/wiki/Zosurabalpin | Zosurabalpin (RG6006, Abx-MCP, Ro7223280) is an experimental antibiotic developed in a collaboration between the pharmaceutical company Roche and scientists from Harvard University, for the treatment of carbapenem-resistant Acinetobacter baumannii (CRAB). It targets a lipopolysaccharide transporter. It works by recognizing a composite binding site made up of both the Lpt transporter and its LPS substrate. The chemical family to which it belongs was first disclosed in 2019, but the particular structure of RG6006 remained confidential until publication of the testing results in 2023.
See also
Clovibactin
Novobiocin
Teixobactin
References
Antibiotics
Peptides
Heterocyclic compounds with 3 rings
Sulfur heterocycles
Biphenyls
Pyridines | Zosurabalpin | Chemistry,Biology | 182 |
903,117 | https://en.wikipedia.org/wiki/Karyorrhexis | Karyorrhexis (from Greek κάρυον karyon 'kernel, seed, nucleus' and ῥῆξις rhexis 'bursting') is the destructive fragmentation of the nucleus of a dying cell whereby its chromatin is distributed irregularly throughout the cytoplasm. It is usually preceded by pyknosis and can occur as a result of either programmed cell death (apoptosis), cellular senescence, or necrosis.
In apoptosis, the cleavage of DNA is done by Ca2+- and Mg2+-dependent endonucleases.
Overview
During apoptosis, a cell goes through a series of steps as it eventually breaks down into apoptotic bodies, which undergo phagocytosis. In the context of karyorrhexis, these steps are, in chronological order, pyknosis (the irreversible condensation of chromatin), karyorrhexis (fragmentation of the nucleus and condensed DNA) and karyolysis (dissolution of the chromatin due to endonucleases).
Karyorrhexis involves the breakdown of the nuclear envelope and the fragmentation of condensed chromatin due to endonucleases. In cases of apoptosis, karyorrhexis ensures that nuclear fragments are quickly removed by phagocytes. In necrosis, however, this step fails to progress in an orderly manner, leaving behind fragmented cellular debris, further contributing to tissue damage and inflammation.
Process of Nuclear Envelope Dissolution During Karyorrhexis
In the intrinsic pathway of apoptosis, environmental factors such as oxidative stress signal pro-apoptotic members of the Bcl-2 protein family to eventually break the outer membrane of the mitochondria. This causes cytochrome c to leak into the cytoplasm, which causes a cascade of events that eventually leads to the activation of several caspases. One of these caspases, caspase-6, is known to cleave nuclear lamina proteins such as lamin A/C, which hold the nuclear envelope together, thereby aiding in the dissolution of the nuclear envelope.
Process of Condensed Chromatin Fragmentation During Karyorrhexis
In the process of karyorrhexis through apoptosis, DNA is fragmented in an orderly manner by endonucleases such as caspase-activated DNase, and discrete nucleosomal units are formed. This is because the DNA has already been condensed during pyknosis, meaning it has been wrapped around histones in an organized manner, with around 180 base pairs per nucleosome. The fragmented chromatin observed during karyorrhexis is made when activated endonucleases cleave the DNA between nucleosomes, resulting in orderly, discrete nucleosomal units. These short DNA fragments left by the endonucleases can be identified on an agarose gel during electrophoresis due to their unique "laddered" appearance, allowing researchers to better identify cell death through apoptosis.
Nucleus Degradation in Other Forms of Cell Death
Karyorrhexis is associated with a controlled breakdown of the nuclear envelope, typically by caspases that destroy lamins during apoptosis. However, for other forms of cell death that are less controlled than apoptosis, such as necrosis (unprogrammed cell death), the degradation of the nucleus is caused by other factors. Unlike apoptotic cells, necrotic cells are characterized by a ruptured plasma membrane, no association with the activation of caspases, and typically the provocation of an inflammatory response. Because necrosis is a caspase-independent process, the nucleus may stay intact during early stages of cell death before being ripped open due to osmotic stress and other factors associated with having a hole in the plasma membrane. A specialized form of necrosis, called necroptosis, has a slightly more controlled degradation of the nucleus. This process is dependent on calpain, a protease that also degrades lamins, destabilizing the structure of the nucleus. However, similar to necrosis, this process also involves a ruptured plasma membrane, which contributes to the uncontrolled degradation of the nuclear envelope.
Unlike karyorrhexis in apoptosis which produces apoptotic bodies to be digested through phagocytosis, karyorrhexis in necroptosis leads to the expulsion of cell contents into extracellular space to be digested through pinocytosis.
Triggers and Mechanisms
The process of apoptosis, and thereby nucleus degradation through karyorrhexis, is invoked by various physiological and pathological stimuli. DNA damage, oxidative stress, hypoxia, and infections can initiate signaling cascades leading to nuclear degradation through the intrinsic pathway of apoptosis. The intrinsic pathway can also be induced through ethanol, which activates apoptosis-related proteins such as BAX and caspases. Additionally, if the death receptors on a cell’s surface are activated, such as CD95, the activation of caspases and nuclear envelope degradation can be triggered as well. In all of these processes, caspases such as caspase-3 play a key role by cleaving nuclear lamins and promoting chromatin fragmentation. In necrosis, uncontrolled calcium influx and activation of proteases such as calpains accelerate the process, highlighting the contrasting regulatory mechanisms between necrotic and apoptotic karyorrhexis.
The level of DNA damage determines whether a cell undergoes apoptosis or cellular senescence. Cellular senescence refers to the cessation of the cell cycle and thus cell division, which can be observed after a fixed number (approximately 50) of doublings in primary cells. One cause of cellular senescence is DNA damage through the shortening of telomeres. This causes a DNA damage response (DDR), which, if prolonged over a long period of time, activates the ATR and ATM damage kinases. These kinases activate two further kinases, the Chk1 and Chk2 kinases, which can alter the cell in a few different ways. One of these is the activation of a transcription factor known as p53. If the level of DNA damage is mild, p53 activates p21 (CIP), which inhibits CDKs, arresting the cell cycle. However, if the level of DNA damage is severe enough, p53 can trigger apoptotic pathways which lead to the dissolution of the nuclear envelope through karyorrhexis.
Pathological Implications
Karyorrhexis is a prominent feature in conditions related to cell death, such as ischemia and neurodegenerative disorders. It has been observed during myocardial infarction and brain stroke, indicating its contribution to cell death in acute stress responses. Moreover, disorders such as placental vascular malperfusion have highlighted the role of karyorrhexis in fetal demise, particularly when it disrupts normal tissue homeostasis.
In cancer, apoptotic karyorrhexis plays a dual role. While it facilitates controlled cell death, aiding in tumor suppression, resistance to apoptosis in cancer cells results in evasion of this pathway, promoting malignancy. Therapeutic interventions targeting apoptotic pathways attempt to restore this phase of nuclear degradation to induce tumor regression.
See also
Karyolysis
References
Cellular processes
Cellular senescence
Programmed cell death | Karyorrhexis | Chemistry,Biology | 1,564 |
20,781 | https://en.wikipedia.org/wiki/Monoid%20ring | In abstract algebra, a monoid ring is a ring constructed from a ring and a monoid, just as a group ring is constructed from a ring and a group.
Definition
Let R be a ring and let G be a monoid. The monoid ring or monoid algebra of G over R, denoted R[G] or RG, is the set of formal sums
∑_{g ∈ G} r_g g,
where r_g ∈ R for each g ∈ G and r_g = 0 for all but finitely many g, equipped with coefficient-wise addition, and the multiplication in which the elements of R commute with the elements of G. More formally, R[G] is the free R-module on the set G, endowed with R-linear multiplication defined on the base elements by g·h := gh, where the left-hand side is understood as the multiplication in R[G] and the right-hand side is understood in G.
Alternatively, one can identify the element g ∈ G with the function e_g that maps g to 1 and every other element of G to 0. This way, R[G] is identified with the set of functions φ: G → R such that {g : φ(g) ≠ 0} is finite, equipped with addition of functions, and with multiplication defined by
(φ ψ)(g) = ∑_{kl = g} φ(k) ψ(l).
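The convolution product above is straightforward to implement. The following minimal Python sketch (a dict-based representation with integer coefficients is assumed; the name monoid_ring_mul is this sketch's own) multiplies two finitely supported functions over an arbitrary monoid given by its operation op.

```python
from collections import defaultdict

def monoid_ring_mul(f, g, op):
    """Multiply two elements of R[G], each given as a dict
    {monoid element: nonzero coefficient}; `op` is the monoid
    operation.  Implements (f*g)(m) = sum of f(k)g(l) over op(k, l) = m."""
    product = defaultdict(int)
    for k, fk in f.items():
        for l, gl in g.items():
            product[op(k, l)] += fk * gl
    return {m: c for m, c in product.items() if c != 0}

# With G = (N, +), R[G] is the polynomial ring R[x]:
# (1 + 2x)(3 + x) = 3 + 7x + 2x^2.
p = {0: 1, 1: 2}
q = {0: 3, 1: 1}
print(monoid_ring_mul(p, q, lambda a, b: a + b))  # {0: 3, 1: 7, 2: 2}
```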
If G is a group, then R[G] is also called the group ring of G over R.
Universal property
Given R and G, there is a ring homomorphism α: R → R[G] sending each r to r1 (where 1 is the identity element of G), and a monoid homomorphism β: G → R[G] (where the latter is viewed as a monoid under multiplication) sending each g to 1g (where 1 is the multiplicative identity of R).
We have that α(r) commutes with β(g) for all r in R and g in G.
The universal property of the monoid ring states that given a ring S, a ring homomorphism α′: R → S, and a monoid homomorphism β′: G → S to the multiplicative monoid of S, such that α′(r) commutes with β′(g) for all r in R and g in G, there is a unique ring homomorphism γ: R[G] → S such that composing α and β with γ produces α′ and β′.
Augmentation
The augmentation is the ring homomorphism η: R[G] → R defined by
η(∑_{g ∈ G} r_g g) = ∑_{g ∈ G} r_g.
The kernel of η is called the augmentation ideal. It is a free R-module with basis consisting of 1 – g for all g in G not equal to 1.
Examples
Given a ring R and the (additive) monoid of natural numbers N (or {x^n} viewed multiplicatively), we obtain the ring R[{x^n}] =: R[x] of polynomials over R.
The monoid N^n (with the addition) gives the polynomial ring with n variables: R[N^n] =: R[X_1, ..., X_n].
Generalization
If G is a semigroup, the same construction yields a semigroup ring R[G].
See also
Free algebra
Puiseux series
References
Further reading
R. Gilmer. Commutative Semigroup Rings. University of Chicago Press, Chicago–London, 1984
Ring theory | Monoid ring | Mathematics | 637 |
27,347,581 | https://en.wikipedia.org/wiki/Symposium%20on%20Trends%20in%20Functional%20Programming | The Symposium on Trends in Functional Programming (TFP) is focused on research in the field of functional programming and investigating relationships with other branches of computer science.
See also
ICFP: International Conference on Functional Programming
External links
Home page of TFP
Computer science conferences
Programming languages conferences | Symposium on Trends in Functional Programming | Technology | 56 |
6,992,164 | https://en.wikipedia.org/wiki/Ordered%20Bell%20number | In number theory and enumerative combinatorics, the ordered Bell numbers or Fubini numbers count the weak orderings on a set of elements. Weak orderings arrange their elements into a sequence allowing ties, such as might arise as the outcome of a horse race.
The ordered Bell numbers were studied in the 19th century by Arthur Cayley and William Allen Whitworth. They are named after Eric Temple Bell, who wrote about the Bell numbers, which count the partitions of a set; the ordered Bell numbers count partitions that have been equipped with a total order. Their alternative name, the Fubini numbers, comes from a connection to Guido Fubini and Fubini's theorem on equivalent forms of multiple integrals. Because weak orderings have many names, ordered Bell numbers may also be called by those names, for instance as the numbers of preferential arrangements or the numbers of asymmetric generalized weak orders.
These numbers may be computed via a summation formula involving binomial coefficients, or by using a recurrence relation. They also count combinatorial objects that have a bijective correspondence to the weak orderings, such as the ordered multiplicative partitions of a squarefree number or the faces of all dimensions of a permutohedron.
Definitions and examples
Weak orderings arrange their elements into a sequence allowing ties. This possibility describes various real-world scenarios, including certain sporting contests such as horse races. A weak ordering can be formalized axiomatically by a partially ordered set for which incomparability is an equivalence relation. The equivalence classes of this relation partition the elements of the ordering into subsets of mutually tied elements, and these equivalence classes can then be linearly ordered by the weak ordering. Thus, a weak ordering can be described as an ordered partition, a partition of its elements and a total order on the sets of the partition. For instance, the ordered partition {a,b},{c},{d,e,f} describes an ordered partition on six elements in which a and b are tied and both less than the other four elements, and c is less than d, e, and f, which are all tied with each other.
The nth ordered Bell number, denoted here a(n), gives the number of distinct weak orderings on n elements. For instance, there are three weak orderings on the two elements a and b: they can be ordered with a before b, with b before a, or with both tied. The figure shows the 13 weak orderings on three elements.
Starting from n = 0, the ordered Bell numbers are
1, 1, 3, 13, 75, 541, 4683, 47293, 545835, 7087261, 102247563, ...
When the elements to be ordered are unlabeled (only the number of elements in each tied set matters, not their identities) what remains is a composition or ordered integer partition, a representation of n as an ordered sum of positive integers. For instance, the ordered partition {a,b},{c},{d,e,f} discussed above corresponds in this way to the composition 2 + 1 + 3. The number of compositions of n is exactly 2^(n−1). This is because a composition is determined by its set of partial sums, which may be any subset of the integers from 1 to n − 1.
History
The ordered Bell numbers appear in the work of Arthur Cayley, who used them to count certain plane trees with totally ordered leaves. In the trees considered by Cayley, each root-to-leaf path has the same length, and the number of nodes at distance i from the root must be strictly smaller than the number of nodes at distance i + 1, until reaching the leaves. In such a tree, the pairs of adjacent leaves may be weakly ordered by the height of their lowest common ancestor; this weak ordering determines the tree. Trees of this type have been called "Cayley trees", and the sequences that may be used to label their gaps (sequences of positive integers that include at least one copy of each positive integer between one and the maximum value in the sequence) have been called "Cayley permutations".
The problem of counting weak orderings, which has the same sequence as its solution, has been traced to the work of Whitworth. These numbers were called Fubini numbers by Louis Comtet, because they count the different ways to rearrange the ordering of sums or integrals in Fubini's theorem, which in turn is named after Guido Fubini. The Bell numbers, named after Eric Temple Bell, count the partitions of a set, and the weak orderings that are counted by the ordered Bell numbers may be interpreted as a partition together with a total order on the sets in the partition.
The equivalence between counting Cayley trees and counting weak orderings was observed in 1970 by Donald Knuth, using an early form of the On-Line Encyclopedia of Integer Sequences (OEIS). This became one of the first successful uses of the OEIS to discover equivalences between different counting problems.
Formulas
Summation
Because weak orderings can be described as total orderings on the subsets of a partition, one can count weak orderings by counting total orderings and partitions, and combining the results appropriately. The Stirling numbers of the second kind, denoted S(n, k), count the partitions of an n-element set into k nonempty subsets. A weak ordering may be obtained from such a partition by choosing one of the k! total orderings of its subsets. Therefore, the ordered Bell numbers can be counted by summing over the possible numbers of subsets in a partition (the parameter k) and, for each value of k, multiplying the number of partitions by the number of total orderings. That is, as a summation formula:
a(n) = ∑_{k=0}^{n} k! S(n, k)
By general results on summations involving Stirling numbers, it follows that the ordered Bell numbers are log-convex, meaning that they obey the inequality a(n)² ≤ a(n − 1) a(n + 1) for all n ≥ 1.
An alternative interpretation of the terms of this sum is that they count the features of each dimension in a permutohedron of dimension n − 1, with the term for each k counting the features of dimension n − k. A permutohedron is a convex polytope, the convex hull of points whose coordinate vectors are the permutations of the numbers from 1 to n. These vectors are defined in a space of dimension n, but they and their convex hull all lie in an (n − 1)-dimensional affine subspace. For instance, the three-dimensional permutohedron is the truncated octahedron, the convex hull of points whose coordinates are permutations of (1,2,3,4), in the three-dimensional subspace of points whose coordinate sum is 10. This polyhedron has one volume (k = 1), 14 two-dimensional faces (k = 2), 36 edges (k = 3), and 24 vertices (k = 4). The total number of these faces is 1 + 14 + 36 + 24 = 75, an ordered Bell number, corresponding to the summation formula above for n = 4.
By expanding each Stirling number in this formula into a sum of binomial coefficients, the formula for the ordered Bell numbers may be expanded out into a double summation. The ordered Bell numbers may also be given by an infinite series:
a(n) = (1/2) ∑_{m=0}^{∞} m^n / 2^m
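These two expressions are easy to check numerically. The Python sketch below (illustrative, with helper names of its own; the Stirling row is built from the standard recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1)) computes a(n) exactly from the summation formula and approximates it from a truncation of the infinite series.

```python
from math import factorial

def stirling2_row(n):
    """Row n of the Stirling numbers of the second kind,
    via S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    row = [1]                                  # S(0, 0) = 1
    for _ in range(n):
        prev = row + [0]
        row = [0] * len(prev)
        for k in range(1, len(prev)):
            row[k] = k * prev[k] + prev[k - 1]
    return row

def ordered_bell(n):
    """a(n) = sum over k of k! * S(n, k)."""
    return sum(factorial(k) * s for k, s in enumerate(stirling2_row(n)))

def ordered_bell_series(n, terms=500):
    """Truncation of the infinite series a(n) = (1/2) * sum_m m^n / 2^m."""
    return sum(m ** n / 2 ** m for m in range(terms)) / 2

print([ordered_bell(n) for n in range(6)])  # [1, 1, 3, 13, 75, 541]
print(ordered_bell_series(5))               # approximately 541.0
```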
Another summation formula expresses the ordered Bell numbers in terms of the Eulerian numbers A(n, k), which count the permutations of n items in which k pairs of consecutive items are in increasing order:
a(n) = ∑_k A(n, k) 2^k = A_n(2),
where A_n is the nth Eulerian polynomial.
where is the th Eulerian polynomial. One way to explain this summation formula involves a mapping from weak orderings on the numbers from 1 to to permutations, obtained by sorting each tied set into numerical order. Under this mapping, each permutation with consecutive increasing pairs comes from weak orderings, distinguished from each other by the subset of the consecutive increasing pairs that are tied in the weak ordering.
Generating function and approximation
As with many other integer sequences, reinterpreting the sequence as the coefficients of a power series and working with the function that results from summing this series can provide useful information about the sequence.
The fast growth of the ordered Bell numbers causes their ordinary generating function to diverge; instead the exponential generating function is used. For the ordered Bell numbers, it is:
∑_{n=0}^{∞} a(n) x^n / n! = 1 / (2 − e^x)
Here, the left hand side is just the definition of the exponential generating function and the right hand side is the function obtained from this summation.
The form of this function corresponds to the fact that the ordered Bell numbers are the numbers in the first column of the infinite matrix (2I − P)^−1. Here I is the identity matrix and P is an infinite matrix form of Pascal's triangle. Each row of P starts with the numbers in the same row of Pascal's triangle, and then continues with an infinite repeating sequence of zeros.
Based on a contour integration of this generating function, the ordered Bell numbers can be expressed by the infinite sum
a(n) = (n!/2) ∑_{k = −∞}^{∞} (log 2 + 2πik)^(−(n+1)),
where log stands for the natural logarithm. This leads to an approximation for the ordered Bell numbers, obtained by using only the term for k = 0 in this sum and discarding the remaining terms:
a(n) ≈ n! / (2 (log 2)^(n+1))
Thus, the ordered Bell numbers are larger than the factorials by an exponential factor, since 1/log 2 ≈ 1.4427 is greater than one. Here, as in Stirling's approximation to the factorial, the approximation indicates asymptotic equivalence. That is, the ratio between the ordered Bell numbers and their approximation tends to one in the limit as n grows arbitrarily large, and the relative error of the approximation decays exponentially as n grows.
Comparing the approximations for a(n) and a(n + 1) shows that
a(n + 1) / a(n) ≈ (n + 1) / log 2
This sequence of approximations, and examples computed from it, were calculated by Ramanujan, using a general method for solving equations numerically (here, the equation e^x = 2).
Recurrence and modular periodicity
As well as the formulae above, the ordered Bell numbers may be calculated by the recurrence relation
a(n) = ∑_{i=1}^{n} (n choose i) a(n − i)
The intuitive meaning of this formula is that a weak ordering on n items may be broken down into a choice of some nonempty set of i items that go into the first equivalence class of the ordering, together with a smaller weak ordering on the remaining n − i items. There are (n choose i) choices of the first set, and a(n − i) choices of the weak ordering on the rest of the elements. Multiplying these two factors, and then summing over the choices of how many elements to include in the first set, gives the number of weak orderings, a(n). As a base case for the recurrence, a(0) = 1 (there is one weak ordering on zero items). Based on this recurrence, these numbers can be shown to obey certain periodic patterns in modular arithmetic: for sufficiently large n,
a(n + 4) ≡ a(n) (mod 10)
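A short Python sketch (illustrative; the function name a is this sketch's own) implements the recurrence and exhibits the eventual period-4 pattern of the last digits:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def a(n):
    """Ordered Bell numbers via a(n) = sum_{i=1..n} C(n, i) a(n-i), a(0) = 1."""
    if n == 0:
        return 1
    return sum(comb(n, i) * a(n - i) for i in range(1, n + 1))

print([a(n) for n in range(8)])
# [1, 1, 3, 13, 75, 541, 4683, 47293]
print([a(n) % 10 for n in range(2, 14)])
# [3, 3, 5, 1, 3, 3, 5, 1, 3, 3, 5, 1] -- period 4 mod 10
```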
Many more modular identities are known, including identities modulo any prime power. Peter Bala has conjectured that this sequence is eventually periodic (after a finite number of terms) modulo each positive integer m, with a period that divides Euler's totient function of m, the number of residues mod m that are relatively prime to m.
Applications
Combinatorial enumeration
As has already been mentioned, the ordered Bell numbers count weak orderings, permutohedron faces, Cayley trees, Cayley permutations, and equivalent formulae in Fubini's theorem. Weak orderings in turn have many other applications. For instance, in horse racing, photo finishes have eliminated most but not all ties, called in this context dead heats, and the outcome of a race that may contain ties (including all the horses, not just the first three finishers) may be described using a weak ordering. For this reason, the ordered Bell numbers count the possible number of outcomes of a horse race. In contrast, when items are ordered or ranked in a way that does not allow ties (such as occurs with the ordering of cards in a deck of cards, or batting orders among baseball players), the number of orderings for n items is the factorial number n!, which is significantly smaller than the corresponding ordered Bell number.
Problems in many areas can be formulated using weak orderings, with solutions counted using ordered Bell numbers. One example concerns combination locks with a numeric keypad, in which several keys may be pressed simultaneously and a combination consists of a sequence of keypresses that includes each key exactly once. The number of different combinations in such a system is given by the ordered Bell numbers. In seru, a Japanese technique for balancing assembly lines, cross-trained workers are allocated to groups of workers at different stages of a production line. The number of alternative assignments for a given number of workers, taking into account the choices of how many stages to use and how to assign workers to each stage, is an ordered Bell number. As another example, in the computer simulation of origami, the ordered Bell numbers give the number of orderings in which the creases of a crease pattern can be folded, allowing sets of creases to be folded simultaneously.
In number theory, an ordered multiplicative partition of a positive integer is a representation of the number as a product of one or more of its divisors. For instance, 30 has 13 multiplicative partitions, as a product of one divisor (30 itself), two divisors (for instance 6 × 5), or three divisors (2 × 3 × 5, etc.). An integer is squarefree when it is a product of distinct prime numbers; 30 is squarefree, but 20 is not, because its prime factorization repeats the prime 2. For squarefree numbers with k prime factors, an ordered multiplicative partition can be described by a weak ordering on its prime factors, describing which prime appears in which term of the partition. Thus, the number of ordered multiplicative partitions is given by a(k). On the other hand, for a prime power with exponent k, an ordered multiplicative partition is a product of powers of the same prime number, with exponents summing to k, and this ordered sum of exponents is a composition of k. Thus, in this case, there are 2^(k−1) ordered multiplicative partitions. Numbers that are neither squarefree nor prime powers have a number of ordered multiplicative partitions that (as a function of the number of prime factors) is between these two extreme cases.
A parking function, in mathematics, is a finite sequence of positive integers with the property that, for every k up to the sequence length, the sequence contains at least k values that are at most k. A sequence of this type, of length n, describes the following process: a sequence of n cars arrives on a street with n parking spots. Each car has a preferred parking spot, given by its value in the sequence. When a car arrives on the street, it parks in its preferred spot, or, if that is full, in the next available spot. A sequence of preferences forms a parking function if and only if each car can find a parking spot on or after its preferred spot. The number of parking functions of length n is exactly (n + 1)^(n−1). For a restricted class of parking functions, in which each car parks either on its preferred spot or on the next spot, the number of parking functions is given by the ordered Bell numbers. Each restricted parking function corresponds to a weak ordering in which the cars that get their preferred spot are ordered by these spots, and each remaining car is tied with the car in its preferred spot. The permutations, counted by the factorials, are parking functions for which each car parks on its preferred spot. This application also provides a combinatorial proof for upper and lower bounds on the ordered Bell numbers of a simple form,
n! ≤ a(n) ≤ 2^(n−1) n!
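The restricted parking process is simple to simulate. The brute-force Python sketch below (illustrative; the helper name restricted_parks is this sketch's own) confirms that the counts match the ordered Bell numbers for small n.

```python
from itertools import product

def restricted_parks(prefs):
    """True if every car parks when each car may take only its
    preferred spot or the next one (spots numbered 1..n)."""
    n = len(prefs)
    taken = [False] * (n + 1)       # taken[1..n]; index 0 unused
    for p in prefs:
        if not taken[p]:
            taken[p] = True
        elif p + 1 <= n and not taken[p + 1]:
            taken[p + 1] = True
        else:
            return False
    return True

for n in range(1, 5):
    count = sum(restricted_parks(p)
                for p in product(range(1, n + 1), repeat=n))
    print(n, count)   # 1 1, 2 3, 3 13, 4 75 -- the ordered Bell numbers
```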
The ordered Bell number a(n) counts the number of faces in the Coxeter complex associated with a Coxeter group of type A_{n−1}. Here, a Coxeter group can be thought of as a finite system of reflection symmetries, closed under repeated reflections, whose mirrors partition a Euclidean space into the cells of the Coxeter complex. For instance, a(3) = 13 corresponds to A_2, the system of reflections of the Euclidean plane across three lines that meet at the origin at 60° angles. The complex formed by these three lines has 13 faces: the origin, six rays from the origin, and six regions between pairs of rays.
John Kemeny uses the ordered Bell numbers to analyze n-ary relations, mathematical statements that might be true of some choices of the arguments to the relation and false for others. He defines the "complexity" of a relation to mean the number of other relations one can derive from the given one by permuting and repeating its arguments. For instance, for n = 2, a relation on two arguments x and y might take the form R(x, y). By Kemeny's analysis, it has three derived relations. These are the given relation R(x, y), the converse relation R(y, x) obtained by swapping the arguments, and the unary relation R(x, x) obtained by repeating an argument. (Repeating the other argument produces the same relation.)
Ellison and Klein apply these numbers to optimality theory in linguistics. In this theory, grammars for natural languages are constructed by ranking certain constraints, and (in a phenomenon called factorial typology) the number of different grammars that can be formed in this way is limited to the number of permutations of the constraints. A paper reviewed by Ellison and Klein suggested an extension of this linguistic model in which ties between constraints are allowed, so that the ranking of constraints becomes a weak order rather than a total order. As they point out, the much larger magnitude of the ordered Bell numbers, relative to the corresponding factorials, allows this theory to generate a much richer set of grammars.
Other
If a fair coin (with equal probability of heads or tails) is flipped repeatedly until the first time the result is heads, the number of tails follows a geometric distribution. The moments of this distribution are the ordered Bell numbers.
Although the ordinary generating function of the ordered Bell numbers fails to converge, it describes a power series that (suitably evaluated and rescaled) provides an asymptotic expansion for the resistance distance of opposite vertices of an n-dimensional hypercube graph. Truncating this series to a bounded number of terms and then applying the result for unbounded values of n approximates the resistance to arbitrarily high order.
In the algebra of noncommutative rings, an analogous construction to the (commutative) quasisymmetric functions produces a graded algebra WQSym whose dimensions in each grade are given by the ordered Bell numbers.
In spam filtering, the problem of assigning weights to sequences of words with the property that the weight of any sequence exceeds the sum of weights of all its subsequences can be solved by using weight f(n) for a sequence of n words, where f(n) is obtained from the recurrence equation
f(n) = 1 + ∑_{i=1}^{n−1} (n choose i) f(n − i)
with base case f(1) = 1. This recurrence differs from the one given earlier for the ordered Bell numbers, in two respects: omitting the term for i = n from the sum (because only nonempty sequences are considered), and adding one separately from the sum (to make the result exceed, rather than equalling, the sum). These differences have offsetting effects, and the resulting weights are the ordered Bell numbers.
References
Integer sequences
Enumerative combinatorics | Ordered Bell number | Mathematics | 3,830 |
35,152,193 | https://en.wikipedia.org/wiki/Pulsar%20clock | A pulsar clock is a clock which depends on counting radio pulses emitted by pulsars.
Pulsar clock in Gdańsk
The first pulsar clock in the world was installed in St Catherine's Church, Gdańsk, Poland, in 2011. It was the first clock to count the time using a signal source outside the Solar System, and represents the second type of clock to measure time using a signal source outside the Earth, after sundials. The pulsar clock consists of a radiotelescope with 16 antennas, which receive signals from six designated pulsars. Digital processing of the pulsar signals is done by an FPGA device.
Pulsar clock in Brussels
On October 5, 2011, a display showing the exact time of the pulsar clock, as a repeater of Gdańsk's pulsar clock, was installed in the European Parliament in Brussels, Belgium.
References
Clocks in Poland
Clocks in Belgium
Pulsars
Culture in Gdańsk
Culture in Brussels
Polish inventions | Pulsar clock | Astronomy | 204 |
13,030,993 | https://en.wikipedia.org/wiki/Warm%20air%20intake | A warm air intake (WAI) also known as a hot air intake (HAI), is a system to decrease the amount of the air going into a car for the purpose of increasing the fuel economy of the internal-combustion engine.
This term is sometimes used erroneously to refer to a short ram air intake, which is another engine modification that aims to improve power output by increasing the static air pressure inside the intake manifold.
Operation
All warm air intakes operate on the principle of decreasing the amount of oxygen available for combustion with fuel. Warm air from inside the engine bay is used, as opposed to air taken from the generally more restrictive stock intake. Warmer air is less dense, and thus contains less oxygen to burn fuel in. The car's ECU compensates by opening the throttle wider to admit more air. This, in turn, decreases the resistance the engine must overcome to suck air in. The net effect is for the engine to intake the same amount of oxygen (and thus burn the same amount of fuel, producing the same power) but with fewer pumping losses, allowing for a gain in fuel economy, at the expense of top-end power.
This is the opposite principle of a cold air intake (CAI), which differs significantly by collecting air from a colder source outside the engine bay.
In the extreme, a warm air intake can eliminate the need for a conventional throttle and thus eliminate throttle losses.
See also
Carburetor heat
Early fuel evaporator
References
Engine technology | Warm air intake | Technology | 293 |
58,622,713 | https://en.wikipedia.org/wiki/Aspergillus%20angustatus | Aspergillus angustatus is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 2016. It has been isolated from mangifera indica root in Mali.
Growth and morphology
A. angustatus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
angustatus
Fungi described in 2016
Fungus species | Aspergillus angustatus | Biology | 120 |
63,690,330 | https://en.wikipedia.org/wiki/32K%20resolution | 32K resolution refers to a display resolution of approximately 32,000 pixels horizontally. A resolution of 30720 × 17280 for an aspect ratio of 16:9 is speculated to be standardized. This doubles the pixel count of 16K in each dimension, for a total of 530.8 megapixels (530,841,600 pixels), 4 times as many pixels as the 16K resolution. It has 16 times as many pixels as 8K resolution, 64 times as many pixels as 4K resolution, 256 times the pixels as Full HD or 1080p resolution, and 576 times the pixels as HD or 720p resolution.
There are plans from different groups to start implementing 32K technology. While there are a few cameras that can shoot in 32K resolution, even 8K does not yet have as widespread usage as 1080p and 4K do. Fewer than 3% of televisions support 8K (with only some ninth-generation gaming consoles supporting it), and none use 16K.
Two primary limiting factors in 32K are display resolution and CPU/GPU capability.
History
Development
In 2018, Sony installed a 16K screen into the front of a cosmetics store in Yokohama, south of Tokyo. The widescreen display is believed to be the largest 16K screen yet. Sony has plans to make the product available, in custom sizes, for wealthy consumers. They are also currently working on developing a 32K display.
Currently, it is possible to run 32K resolutions using multi-monitor setups with AMD Eyefinity or Nvidia Surround and sixteen 8K TVs or monitors. However, this type of setup is costly and difficult to implement. No single display or monitor capable of displaying a 32K resolution is available to the consumer market yet.
Technology
No handheld devices supporting 32K exist yet.
Cameras in development
The Linea HS 32K
Cameras
Teledyne DALSA 32K Super Resolution CLHS Camera
Gaming
Gaming at 32K is very unlikely to be possible in the near future. To achieve the resolution, sixteen 8K televisions or monitors in a multi-monitor setup with AMD Eyefinity or Nvidia Surround would be required.
Editing
Currently, only Blackmagic Design's DaVinci Resolve 17 supports editing at 32K resolution.
See also
4K resolution – digital video formats with a horizontal resolution of around 4,000 pixels
5K resolution – digital video formats with a horizontal resolution of around 5,000 pixels, aimed at non-television computer monitor usage
8K resolution – digital video formats with a horizontal resolution of around 8,000 pixels
10K resolution – digital video formats with a horizontal resolution of around 10,000 pixels, aimed at non-television computer monitor usage
16K resolution – digital video formats with a horizontal resolution of around 16,000 pixels
Ultra-high-definition television (UHDTV) – digital video formats with resolutions of 4K (3840 × 2160) and 8K (7680 × 4320)
Rec. 2020 – ITU-R Recommendation for UHDTV
Digital movie camera
Digital cinematography – makes extensive use of UHD video
List of large sensor interchangeable-lens video cameras
References
Digital imaging
Display technology
Film and video technology | 32K resolution | Engineering | 633 |
59,046,614 | https://en.wikipedia.org/wiki/Forder%20Lectureship | The Forder Lectureship is awarded by the London Mathematical Society to a research mathematician from the United Kingdom who has made an eminent contribution to the field of mathematics and who can also speak effectively at a more popular level. The lectureship is named for Professor H. G. Forder, formerly of the University of Auckland and a benefactor of the London Mathematical Society. It was funded in 1986 by the London Mathematical Society and the New Zealand Mathematical Society, was first awarded in 1987, and is normally awarded every two years. Recipients give a four- to six-week lecturing tour of most New Zealand universities. In alternate years the Aitken Lectureship is awarded.
Recipients
The recipients of the Forder Lectureship are:
1987: E.C. Zeeman
1989: Michael F. Atiyah
1991: Peter Whittle
1993: Roger Penrose
1995: E.G. Rees
1997: Ian Stewart
1999: Michael Berry
2001: Tom Körner
2003: Caroline Series
2005: Martin Bridson
2008: Peter Cameron
2010: Ben Green
2012: Geoffrey Grimmett
2015: Endre Süli
2016: Julia Gog
See also
Naylor Prize and Lectureship
List of mathematics awards
References
Awards of the London Mathematical Society
Biennial events
University and college lecture series
Higher education in New Zealand
New Zealand–United Kingdom relations
1987 establishments in England
1987 establishments in New Zealand | Forder Lectureship | Technology | 280 |
7,754,858 | https://en.wikipedia.org/wiki/ENRICH | ENRICH is a 125-item questionnaire for married couples that examines communication, conflict resolution, role relationship, financial management, expectations, sexual relationship, personality compatibility, marital satisfaction, and other personal beliefs related to marriage. It was developed by University of Minnesota family psychologist David Olson, Ph.D., and colleagues as a method of assessing the health of married couple relationships and is now used by over 100,000 facilitators in the United States and worldwide.
In studies of couples who completed the questionnaire, Fowers and Olson found that ENRICH could predict divorce with 85% accuracy. Results from discriminant analysis indicated that, using either individual scores or couples' scores, happily married couples could be distinguished from unhappily married couples with 85–95% accuracy. A 2001 paper found that sexual intimacy within relationships was positively associated with marital satisfaction.
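As an illustration of the kind of analysis described above, the sketch below fits a linear discriminant classifier to synthetic scale scores. The data, scale count, and resulting accuracy are invented for demonstration; they are not the actual ENRICH items or published results.

```python
# Hypothetical sketch of a discriminant analysis on questionnaire
# scale scores, in the spirit of the study described above. All data
# here are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_couples, n_scales = 200, 9               # e.g. communication, conflict, ...
happy = rng.integers(0, 2, size=n_couples)  # 1 = happily married
# Synthetic scale scores: happier couples score higher on average.
scores = rng.normal(50, 10, size=(n_couples, n_scales)) + 8 * happy[:, None]

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, scores, happy, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.0%}")
```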
PREPARE/ENRICH
ENRICH has evolved into a complete online program called PREPARE/ENRICH, which also examines the beliefs of couples preparing to marry and provides couple exercises to build relationship skills. This new program helps couples with the following:
Identify strength and growth areas
Explore personality traits
Strengthen communication skills
Resolve conflicts and reduce stress
Compare family backgrounds
Comfortably discuss financial issues
Establish personal, couple, and family goals
References
Interpersonal relationships | ENRICH | Biology | 260 |
58,506,058 | https://en.wikipedia.org/wiki/Tribasic | Tribasic may refer to:
A tribasic, or triprotic, acid, which has three protons available to donate (see the dissociation example below)
A tribasic salt, in which three hydrogen atoms of the parent acid have been replaced by cations
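As a concrete illustration of the first sense, phosphoric acid, the textbook triprotic acid, donates its three protons in successive dissociation steps:

```latex
\begin{align*}
\mathrm{H_3PO_4}    &\rightleftharpoons \mathrm{H^+} + \mathrm{H_2PO_4^{-}} \\
\mathrm{H_2PO_4^{-}} &\rightleftharpoons \mathrm{H^+} + \mathrm{HPO_4^{2-}} \\
\mathrm{HPO_4^{2-}} &\rightleftharpoons \mathrm{H^+} + \mathrm{PO_4^{3-}}
\end{align*}
```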
See also
Monobasic (disambiguation)
Dibasic (disambiguation)
Polybasic (disambiguation)
Chemical nomenclature | Tribasic | Chemistry | 83 |
5,808,493 | https://en.wikipedia.org/wiki/List%20of%20essential%20oils | Essential oils are volatile and liquid aroma compounds from natural sources, usually plants. They are not oils in a strict sense, but often share with oils a poor solubility in water. Essential oils often have an odor and are therefore used in food flavoring and perfumery. They are usually prepared by fragrance extraction techniques (such as distillation, cold pressing, or solvent extraction). Essential oils are distinguished from aroma oils (essential oils and aroma compounds in an oily solvent), infusions in a vegetable oil, absolutes, and concretes. Typically, essential oils are highly complex mixtures of often hundreds of individual aroma compounds.
Agar oil or oodh, distilled from agarwood (Aquilaria malaccensis). Highly prized for its fragrance.
Ajwain oil, distilled from the leaves of Carum copticum. The oil contains 35–65% thymol.
Amyris oil
Angelica root oil, distilled from Angelica archangelica. Has a green, musky scent.
Anise oil, from Pimpinella anisum. Has a rich odor of licorice.
Armoise/mugwort oil, a green and camphorous essential oil.
Asafoetida oil, used to flavor food.
Attar or ittar, used in perfumes for fragrances such as rose and sandalwood.
Balsam of Peru, from the Myroxylon, used in food and drink for flavoring, in perfumes and toiletries for a cheaper alternative to vanilla.
Basil oil, used in making perfumes, as well as in aromatherapy
Bay leaf oil is used in perfumery and aromatherapy
Beeswax absolute, a solid absolute with a rich, honeyed scent. Mainly used in perfumery.
Bergamot oil, used in aromatherapy and in perfumes.
Birch oil used in aromatherapy
Bitter almond oil, mainly used to extract benzaldehyde for use in perfumery. Has a rich maraschino cherry scent.
Black pepper oil is distilled from the berries of Piper nigrum.
Buchu oil, made from the buchu shrub. Considered toxic and no longer widely used. Formerly used medicinally.
Calamondin oil, or calamansi essential oil, comes from a citrus tree in the Philippines and is extracted via cold pressing or steam distillation.
Calamus oil, used in perfumery and formerly as a food additive.
Camphor oil used in cosmetics and household cleaners.
Cannabis flower essential oil, used as a flavoring in foods, primarily candy and beverages. Also used as a scent in perfumes, cosmetics, soaps, and candles.
Caraway seed oil, used as a flavoring in foods. Also used in mouthwashes, toothpastes, etc. as a flavoring agent.
Cardamom seed oil, used in aromatherapy. Extracted from seeds of species in the ginger family (Zingiberaceae). Also used as a fragrance in soaps, perfumes, etc.
Carrot seed oil, used in aromatherapy.
Cedar oil (or cedarwood oil), primarily used in perfumes and fragrances.
Chamomile oil. There are many varieties of chamomile, but only two are used in aromatherapy: Roman and German. German chamomile contains a higher level of the chemical azulene.
Cinnamon oil, used for flavoring
Cistus oil, from the leaves and flowers of Cistus ladanifer, used in perfumery.
Citron oil, used in Ayurveda and perfumery.
Citronella oil, from a plant related to lemon grass is used as an insect repellent
Clary Sage oil, used in perfumery and as an additive flavoring in some alcoholic beverages.
Clove oil used in perfumery and medicinally.
Coconut oil, used for skin, food, and hair
Coffee oil, used to flavor food.
Coriander oil
Costmary oil (bible leaf oil), formerly used medicinally in Europe; still used as such in southwest Asia. Discovered to contain up to 12.5% of the toxin β-thujone.
Costus root oil
Cranberry seed oil, equally high in omega-3 and omega-6 fatty acids, primarily used in the cosmetic industry.
Cubeb oil, used to flavor foods.
Cumin seed oil/black seed oil, used as a flavor, particularly in meat products
Curry leaf oil, used to flavor food.
Cypress oil, used in cosmetics
Cypriol oil, from Cyperus scariosus
Davana oil, from the Artemisia pallens, used as a perfume ingredient
Dill oil, chemically almost identical to Caraway seed oil. High carvone content.
Douglas-fir oil is unique amongst conifer oils, as Douglas-fir is not a true fir but its own genus. The New Zealand variety, steam-distilled using mountain spring water, is particularly sought after for its purity and chemical profile.
Elecampane oil
Elemi oil, used as a perfume and fragrance ingredient. Comes from the oleoresins of Canarium luzonicum and Canarium ovatum which are common in the Philippines.
Eucalyptus oil, historically used as a germicide.
Fennel seed oil
Fenugreek oil, used for cosmetics from ancient times.
Fir oil
Frankincense oil, used in aromatherapy and in perfumes.
Galangal oil, used to flavor food.
Galbanum oil, used in perfumery.
Garlic oil is distilled from Allium sativum.
Geranium oil, also referred to as geranol. Used in herbal medicine, aromatherapy, and perfumery.
Ginger oil, used medicinally in many cultures, and has been studied extensively as a nausea treatment, where it was found more effective than placebo.
Goldenrod oil used in herbal medicine, including treatment of urological problems.
Grapefruit oil, extracted from the peel of the fruit. Used in aromatherapy. Contains 90% limonene.
Henna oil, used in body art. Known to be dangerous to people with certain enzyme deficiencies. Pre-mixed pastes are considered dangerous, primarily due to adulterants.
Helichrysum oil
Hickory nut oil
Horseradish oil
Hyssop
Jasmine oil, used for its flowery fragrance.
Juniper berry oil, used as a flavor.
Lavender oil, used primarily as a fragrance.
Ledum
Lemon oil, similar in fragrance to the fruit. Unlike other essential oils, lemon oil is usually cold pressed. Used in cosmetics.
Lemongrass oil, from a highly fragrant grass from India. The oil is very useful as an insect repellent.
Lime
Litsea cubeba oil, lemon-like scent, often used in perfumes and aromatherapy.
Linalool
Mandarin
Marjoram
Manuka oil
Melissa oil (Lemon balm), sweet smelling oil
Mentha arvensis oil, mint oil, used in flavoring toothpastes, mouthwashes and pharmaceuticals, as well as in aromatherapy.
Moringa oil, can be used directly on the skin and hair. It can also be used in soap and as a base for other cosmetics.
Mountain Savory
Mugwort oil, used in ancient times for medicinal and magical purposes. Currently considered to be a neurotoxin.
Mustard oil, containing a high percentage of allyl isothiocyanate or other isothiocyanates, depending on the species of mustard
Myrrh oil, warm, slightly musty smell.
Myrtle
Neem oil or neem tree oil
Neroli is produced from the blossom of the bitter orange tree.
Nutmeg oil
Orange oil, like lemon oil, cold pressed rather than distilled. Consists of 90% d-Limonene. Used as a fragrance, in cleaning products and in flavoring foods.
Oregano oil, contains thymol and carvacrol
Orris oil is extracted from the roots of the Florentine iris (Iris florentina), Iris germanica and Iris pallida. It is used as a flavouring agent, in perfume, and medicinally.
Palo Santo
Parsley oil, used in soaps, detergents, colognes, cosmetics and perfumes, especially men's fragrances.
Patchouli oil, very common ingredient in perfumes.
Perilla essential oil, extracted from the leaves of the perilla plant. Contains about 50–60% perillaldehyde.
Pennyroyal oil, highly toxic. It is an abortifacient and can, even in small quantities, cause acute liver and lung damage.
Peppermint oil
Petitgrain
Pine oil, used as a disinfectant, and in aromatherapy.
Ravensara
Red Cedar
Roman Chamomile
Rose oil, distilled from rose petals, used primarily as a fragrance.
Rosehip oil, distilled from the seeds of the Rosa rubiginosa or Rosa mosqueta.
Rosemary oil, distilled from the flowers of Rosmarinus officinalis.
Rosewood oil, used primarily for skin care applications.
Sage oil
Sandalwood oil, used primarily as a fragrance for its pleasant, woody scent.
Sassafras oil, from sassafras root bark. Used in aromatherapy, soap-making, perfumes, and the like. Formerly used as a spice, and as the primary flavoring of root beer, inter alia. Sassafras oil is heavily regulated in the United States due to its high safrole content.
Savory oil, from Satureja species. Used in aromatherapy, cosmetic and soap-making applications.
Schisandra oil
Spearmint oil, often used in flavoring mouthwash and chewing gum, among other applications.
Spikenard
Spruce oil
Star anise oil, a highly fragrant oil used in cooking. Also used in perfumery and soaps, and has been used in toothpastes, mouthwashes, and skin creams. 90% of the world's star anise crop is used in the manufacture of Tamiflu, a drug used to treat influenza that is hoped to be useful for avian flu.
Tangerine
Tarragon oil, distilled from Artemisia dracunculus
Tea tree oil, extracted from Melaleuca alternifolia.
Thyme oil
Tsuga oil, from conifers of the pine family.
Turmeric, used to flavor food.
Valerian
Warionia, used as a perfume ingredient
Vetiver oil (khus oil) a thick, amber oil, primarily from India. Used as a fixative in perfumery, and in aromatherapy.
Western red cedar
Wintergreen
Yarrow oil
Ylang-ylang
See also
Eau de Cologne
Perfume
Books
Julia Lawless, The Illustrated Encyclopedia of Essential Oils: The Complete Guide to the Use of Oils in Aromatherapy and Herbalism (1995)
The Complete Book of Essential Oils & Aromatherapy
References
Essential oils | List of essential oils | Chemistry | 2,208 |
21,572,370 | https://en.wikipedia.org/wiki/Kenneth%20H.%20Hunt | Kenneth Henderson Hunt (1920–2002) was Foundation Professor of Engineering at Monash University in Melbourne, Australia and an expert in kinematics.
Hunt was born in Seaford, East Sussex, in the United Kingdom, on 7 June 1920. He studied engineering at Balliol College, Oxford University and, during World War II, served in the Royal Engineers. After the war, he worked in the oil industry until 1949, when he took a lectureship at the University of Melbourne. He moved to Monash in 1960, at which time he was appointed Foundation Professor, and was dean of engineering there from 1961 to 1975. He is the author of Mechanisms and Motion (1959) and Kinematic Geometry of Mechanisms (1978).
References
HUNT Kenneth Henderson 7 June 1920 – 21 August 2002, Australian Academy of Technological Sciences and Engineering.
Foundation Deans: Kenneth Hunt, Monash University.
Kinematics
Alumni of Balliol College, Oxford
Academic staff of the University of Melbourne
Engineers from Melbourne
People from Seaford, East Sussex
1920 births
2002 deaths
British emigrants to Australia
Military personnel from East Sussex
Royal Engineers soldiers
British Army personnel of World War II | Kenneth H. Hunt | Physics,Technology | 235 |
62,529,175 | https://en.wikipedia.org/wiki/Epichlo%C3%AB%20stromatolonga | Epichloë stromatolonga is a haploid species in the fungal genus Epichloë.
A systemic and seed-transmissible grass symbiont first described in 2009, Epichloë stromatolonga is a sister lineage to Epichloë amarillans, Epichloë baconii, Epichloë festucae and Epichloë mollis.
Epichloë stromatolonga is found in Asia, where it has been identified in the grass species Calamagrostis epigejos. Epichloë stromatolonga is not known to have a sexual phase.
References
stromatolonga
Fungi described in 2009
Fungi of Asia
Fungus species | Epichloë stromatolonga | Biology | 143 |
19,751,940 | https://en.wikipedia.org/wiki/Diastereomeric%20recrystallization | Diastereomeric recrystallisation is a method of chiral resolution of enantiomers from a racemic mixture. It differs from asymmetric synthesis, which aims to produce a single enantiomer from the beginning, in that diastereomeric recrystallisation separates two enantiomers that have already mixed into a single solution.
The strategy of diastereomeric recrystallisation involves two steps. The first step is to convert the enantiomers into diastereomers by way of a chemical reaction. A mixture of enantiomers may contain two isomers of a molecule with one chiral center. After adding a second chiral center in a determined location, the two isomers are still different, but they are no longer mirror images of each other; rather, they become diastereomers.
In a prototypical example, a mixture of R and S enantiomers with one chiral center would become a mixture of (R,S) and (S,S) diastereomers. (The R and S labels follow the Cahn–Ingold–Prelog convention.) The conversion of the enantiomeric mixture into a diastereomer pair, depending on the nature of the chemicals, can proceed via covalent bond formation with an enantiopure resolving agent or by salt formation, the latter being particularly convenient since acid–base chemistry is typically operationally simple and high yielding.
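A toy model makes the distinction concrete: representing each stereocenter by an R/S label, the mirror image of a molecule inverts every label, so two configurations are enantiomers only if they differ at every center. The sketch below is purely illustrative and ignores actual molecular structure.

```python
# Toy model of the stereochemistry described above: each stereocenter
# is an 'R'/'S' label. The mirror image inverts every center, so two
# configurations are enantiomers only if they differ at every position;
# otherwise they are diastereomers.

def mirror(config):
    """Return the mirror image of a configuration tuple."""
    return tuple('S' if c == 'R' else 'R' for c in config)

def relationship(a, b):
    if a == b:
        return "identical"
    return "enantiomers" if b == mirror(a) else "diastereomers"

# One chiral center: R and S are mirror images.
print(relationship(('R',), ('S',)))          # -> enantiomers

# Append a fixed S center from the resolving agent: (R,S) vs (S,S)
# are no longer mirror images and can be separated as diastereomers.
print(relationship(('R', 'S'), ('S', 'S')))  # -> diastereomers
```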
The second step, once the diastereomers have formed, is to separate them by recrystallisation. This is possible because enantiomers share physical properties such as melting point and boiling point, whereas diastereomers have different physical properties, so they can be separated like any two different compounds. It is these now-different physical properties, e.g. melting point and enthalpy of fusion, that determine the eutectic composition (see eutectic system), which in turn sets the maximum yield of pure diastereomer obtainable from the crystallization (Rmax; see the melting-point phase diagram of a diastereomeric system across all compositions in Figure 1). Various methods have been developed for screening diastereomeric resolutions by determining the eutectic composition as a means of ranking yield efficiency.
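Under the simplifying assumption that pure diastereomer crystallizes while the mother liquor is depleted exactly to the eutectic composition, a lever-rule mass balance gives the maximum single-pass yield. The eutectic value used below is hypothetical.

```python
# Lever-rule estimate of the maximum single-pass crystallization yield
# (Rmax). Assumes ideal behaviour: pure desired diastereomer deposits
# while the mother liquor composition moves to the eutectic.

def r_max(x0, x_eu):
    """Max fraction of the mixture recoverable as pure diastereomer.

    x0   -- initial mole fraction of the desired diastereomer
    x_eu -- mole fraction of that diastereomer at the eutectic
    """
    if x0 <= x_eu:
        return 0.0  # starting composition is at or below the eutectic
    # Mass balance: x0 = m*1 + (1 - m)*x_eu, solved for crystal mass m.
    return (x0 - x_eu) / (1.0 - x_eu)

# A 1:1 diastereomer mixture (x0 = 0.5) with a hypothetical eutectic
# at x_eu = 0.3 caps the single-pass yield at about 29% of the mass:
print(f"Rmax = {r_max(0.5, 0.3):.0%}")
```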
References
Stereochemistry | Diastereomeric recrystallization | Physics,Chemistry | 471 |