970,013
https://en.wikipedia.org/wiki/Jean%20Clouet
Jean (or Janet or Jehannot) Clouet (c. 1485 – 1540/1) was a painter, draughtsman and miniaturist from the Burgundian Netherlands who spent his known active career in France. He was court painter to the French king Francis I. Together with his son François Clouet he is counted among the leading 16th-century portrait painters working in France. They are particularly known for their accomplished drawings, using black chalk and pure red chalk. Biography Little is known about the early life of Clouet. Art historians have generally assumed that he was a native of the Burgundian Netherlands, from either French-speaking Valenciennes in the County of Hainaut or Flemish-speaking Brussels in the Duchy of Brabant. He may have been the Jehan Cloet from Brussels mentioned in the accounts of the Duke of Burgundy. His father may have been Michel Clauwet or Clauet, a painter from Valenciennes who had settled in Brussels. In a document regarding the succession of his uncle, the painter Simon Marmion, dated 6 May 1499, Michel's two minor children, Janet and Polet, are mentioned, but there is no evidence that this Janet Clauwet was indeed Jean Clouet. Born around 1485 and trained in Flanders, Clouet spent most of his career in France. His connection with the Paris court of the French king Francis I is attested in the court accounts from 1516 until 1537. Originally he was appointed as painter and wardrobe valet at wages of 180 livres tournois. He was promoted to extraordinary valet in 1519 and finally to the new position of painter and gentleman in 1524. In 1522, on the death of the court painter Jean Bourdichon, his wages were increased to 240 livres, equal to those received by the official portrait painter Jean Perréal. Perréal's departure in 1527 made Clouet the highest paid ordinary painter, confirming his status as the almost exclusive creator of portraits for the royal family and the court. 
His title of master painter, likely received in Flanders, also allowed him to work for private patrons, such as the notary of the King Jacques Thiboust, whose portrait he painted in 1516, and his uncle by marriage Pierre Fichepain, who commissioned a Saint Jerome from him in 1522. He lived in the 1520s in Tours, where he met and married his wife Jeanne Boucault, who was the daughter of a goldsmith. The couple had two children, François who would succeed him as a court painter, and Catherine. Catherine married Abel Foullon. Their son Benjamin Foullon (or Foulon) also became a portrait painter and miniaturist. The painter Simon Bélot worked in Jean's workshop in Tours. At the end of the 1520s, the family moved to Paris, where they lived in the rue Sainte-Avoye. From 1540, Clouet, perhaps ill, was replaced in the king's service by his son François. In July 1540, he was godfather to a child of Mathurin Régnier. He died shortly afterwards, in late 1540 or early 1541, and was buried in the Holy Innocents' Cemetery. An act in the Trésor des Chartes, the ancient archives of the French crown, states that Clouet's son Jean would succeed his father as painter and valet from November 1541 and that Jean Clouet was born outside France and never became a naturalized Frenchman. The act also allows François to inherit his father's estate, which otherwise under French law would have escheated to the French crown as Jean was a foreigner. His brother Paul, known as Clouet de Navarre, was in the service of Marguerite d'Angoulême, sister of Francis I, and is referred to in a letter written by Marguerite about 1529. Work Jean Clouet was undoubtedly a very skillful portrait painter, although no work in existence has been proved to be his. 
About 10 to 15 portrait paintings and fewer miniatures are currently attributed to him, such as two portraits of Francis I and two smaller ones by his workshop in the Louvre, that of an unknown man at Hampton Court, and that of the Dauphin Francis, son of Francis I, at Antwerp. In 1530 he painted a portrait of the mathematician Oronce Finé at the age of 36. This portrait is now known only through a print. Clouet is generally believed to be the author of a very large number of the 130 portrait drawings now preserved at the Musée Condé in Chantilly as well as other drawings at the Bibliothèque Nationale de France. Paintings Portrait of Francis I as Saint John the Baptist, 1518, oil on panel, 96.5 x 79 cm, Louvre, Paris. Portrait of a Banker, 1522, oil on panel, 42.5 x 32.7 cm, Saint Louis Art Museum. Portrait of Madeleine of France, c. 1522, oil on panel, 16.1 x 12.7 cm. Portrait of Charlotte of France, c. 1522, oil on panel, 17.78 x 13.34 cm, Minneapolis Institute of Art. Portrait of the Dauphin Francis of France, 1522–1525, 16 x 13 cm, Royal Museum of Fine Arts Antwerp. Portrait of Madame de Canaples, c. 1525, oil on panel, 36 x 28.5 cm, National Gallery of Scotland, Edinburgh. Portrait of Claude de Lorraine, Duke of Guise, 1528–1530, oil on panel, 29 x 26 cm, Palazzo Pitti, Florence. Portrait of Marguerite d'Angoulême, c. 1530, oil on panel, 59.8 x 51.4 cm, Walker Art Gallery, Liverpool. Portrait of Francis I, c. 1530, oil on panel, 96 x 74 cm, Louvre. Portrait of a Man Holding a Volume of Petrarch, formerly said to be a Portrait of Claude d'Urfé, c. 1530–1535, oil on panel, 38.4 x 33 cm, Royal Collection, Hampton Court. Portrait of Guillaume Budé, c. 1536, oil on panel, 39.7 x 34.3 cm, Metropolitan Museum of Art, New York. Drawings and miniatures Seven miniature portraits in the Manuscript of the Gallic War in the Bibliothèque Nationale (13,429) are attributed to Jean Clouet with very strong probability, and to these may be added an eighth in the collection of J. 
Pierpont Morgan, representing Charles I de Cossé, Maréchal de Brissac, identical in its characteristics with the seven already known. There are other miniatures in the collection of Mr Morgan which may be attributed to Jean Clouet with a strong degree of probability, inasmuch as they closely resemble the portrait drawings at Chantilly and in Paris which are taken to be his work. The collection of drawings preserved in France, and attributed to this artist and his school, comprises portraits of all the important persons of the time of Francis I. In one album of drawings the portraits are annotated by the king himself, and his merry reflections, stinging taunts or biting satires add very largely to a proper understanding of the life of his time and court. Definite evidence, however, is still lacking to establish the attribution of the best of these drawings and of certain oil paintings to Jean Clouet. Notes References Cécile Scailliérez, Francis I by Clouet, Réunion des Musées Nationaux, 1996. Bénézit, Critical and documentary dictionary of painters, sculptors, designers and engravers of all times and all countries, vol. 3, 1999, p. 725. Robert Maillard (ed.), Universal Dictionary of Painting, vol. 2, Smeets Offset BV, Weert (Netherlands), October 1975, p. 42–43. Peter Mellen, Jean Clouet, complete edition of the drawings, miniatures and paintings, London, New York, Phaidon Press, 1971, 262 p. Peter Mellen (trans. Anne Roullet), Jean Clouet, catalogue raisonné of the drawings, miniatures and paintings, Paris, Flammarion, 1971, 250 p. Lawrence Gowing (pref. Michel Laclotte), The Paintings in the Louvre, Paris, Editions Nathan, 1988, 686 p., p. 
204 External links
Jean Clouet
Engineering
1,800
27,414,879
https://en.wikipedia.org/wiki/NHL%20repeat
The NHL repeat, named after the ncl-1, HT2A and lin-41 proteins, is an amino acid sequence motif found in a large number of eukaryotic and prokaryotic proteins. For example, the repeat is found in a variety of enzymes of the copper type II, ascorbate-dependent monooxygenase family, which catalyse the C-terminal alpha-amidation of biological peptides. In many proteins it occurs in tandem arrays, for example in the RING finger, B-box, coiled-coil (RBCC) eukaryotic growth regulators. The arthropod 'Brain Tumor' protein (Brat) is one such growth regulator that contains a 6-bladed NHL-repeat beta-propeller. NHL repeats are also found in serine/threonine protein kinases (STPKs) in a diverse range of pathogenic bacteria. These STPKs are transmembrane receptors with an intracellular N-terminal kinase domain and an extracellular C-terminal sensor domain. In the STPK PknD from Mycobacterium tuberculosis, the sensor domain forms a rigid, six-bladed beta-propeller composed of NHL repeats with a flexible tether to the transmembrane domain. The NHL repeat has also been used to design a family of fully symmetrical 6-bladed beta-propeller proteins called "Pizza". These proteins can also be engineered to bind mineral nanocrystals. References
NHL repeat
Biology
302
29,875,125
https://en.wikipedia.org/wiki/Divisibility%20sequence
In mathematics, a divisibility sequence is an integer sequence (a_n) indexed by positive integers n such that a_m divides a_n whenever m divides n. That is, whenever one index is a multiple of another one, the corresponding term is a multiple of the other term. The concept can be generalized to sequences with values in any ring where the concept of divisibility is defined. A strong divisibility sequence is an integer sequence such that for all positive integers m and n, gcd(a_m, a_n) = a_{gcd(m, n)}. Every strong divisibility sequence is a divisibility sequence: if m divides n, then gcd(m, n) = m. Therefore, by the strong divisibility property, gcd(a_m, a_n) = a_m, and therefore a_m divides a_n. Examples Any constant sequence is a strong divisibility sequence. Every sequence of the form a_n = kn, for some nonzero integer k, is a divisibility sequence. The numbers of the form 2^n − 1 (Mersenne numbers) form a strong divisibility sequence. The repunit numbers in any base form a strong divisibility sequence. More generally, any sequence of the form a_n = A^n − B^n for integers A > B > 0 is a divisibility sequence. In fact, if A and B are coprime, then this is a strong divisibility sequence. The Fibonacci numbers form a strong divisibility sequence. More generally, any Lucas sequence of the first kind U_n(P, Q) is a divisibility sequence. Moreover, it is a strong divisibility sequence when gcd(P, Q) = 1. Elliptic divisibility sequences are another class of such sequences. References
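The strong divisibility property gcd(a_m, a_n) = a_{gcd(m, n)} is easy to check numerically. A minimal sketch (illustrative, not part of the original article) verifying it for the Fibonacci and Mersenne sequences:

```python
from math import gcd
from functools import lru_cache

# Fibonacci numbers: a strong divisibility sequence,
# gcd(F(m), F(n)) = F(gcd(m, n)) for all positive m, n.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Mersenne numbers 2^n - 1: also a strong divisibility sequence.
def mersenne(n: int) -> int:
    return 2 ** n - 1

for seq in (fib, mersenne):
    for m in range(1, 25):
        for n in range(1, 25):
            assert gcd(seq(m), seq(n)) == seq(gcd(m, n))
print("strong divisibility verified for all m, n < 25")
```

For instance, gcd(F(12), F(18)) = gcd(144, 2584) = 8 = F(6), matching gcd(12, 18) = 6.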
Divisibility sequence
Mathematics
293
45,532,115
https://en.wikipedia.org/wiki/N-Acetyllactosamine
N-Acetyllactosamine (LacNAc), also known as CD75, is a nitrogen-containing disaccharide, a lactosamine derivative that is substituted with an acetyl group on its glucosamine component. N-Acetyllactosamine is a component of many glycoproteins and functions as a carbohydrate antigen that is thought to play roles in normal cellular recognition as well as in malignant transformation and metastasis. It is also found in the structure of human milk oligosaccharides and has prebiotic effects. References External links
N-Acetyllactosamine
Chemistry
142
11,849,588
https://en.wikipedia.org/wiki/New%20Galilee%20%28the%20Sixth%20Epoch%29
The New Galilee is the name given in the Western Wisdom Teachings to "a new heaven and a new earth" mentioned in the Bible. From the viewpoint of these Christian esoteric teachings, the New Galilee represents the future Sixth Epoch in mankind's evolutionary path and will see the transition of humanity to the etheric region of the Earth, where “sorrow and pain will cease and he [man] will have entered the path to the city of peace--Jer-u-salem, the future New Jerusalem to be established within, the heavenly ‘bride’ of the Christ's Race in the making.” Usage in the Western Wisdom Teachings According to the Rosicrucian writings of Max Heindel, the sixth sub-race of the current Aryan Epoch (the fifth epoch) has evolved among the Slavic peoples, and the seventh sub-race is now evolving from this sixth sub-race. Heindel states that the United States is the melting pot forming the last race in human evolution, which will exist at the beginning of the Sixth Epoch, the New Galilee. See also Second Coming (Esoteric Christian teachings) Last Judgment (Esoteric Christian tradition) The New Earth References External links Rays from the Rose Cross: Christ is the Divine Messenger
New Galilee (the Sixth Epoch)
Physics
270
2,259,967
https://en.wikipedia.org/wiki/W49B
W49B (also known as SNR G043.3-00.2 or 3C 398) is a nebula in Westerhout 49 (W49). The nebula is a supernova remnant, probably from a type Ib or Ic supernova that occurred around 1,000 years ago. It may have produced a gamma-ray burst and is thought to have left a black hole remnant. Nebula W49B is a supernova remnant (SNR) located roughly 33,000 light-years from Earth. Radio wavelengths show a shell four arc minutes across. There are infrared "rings" (about 25 light-years in diameter) forming a "barrel", and intense X-ray radiation coming from forbidden emission of nickel and iron in a bar along its axis. W49B is also one of the most luminous SNRs in the galaxy at gamma-ray wavelengths. It is invisible at optical wavelengths. W49B has a number of other unusual properties. It shows X-ray emission from chromium and manganese, something seen in only one other SNR. Iron is seen only in the western half of the nebula, while other elements are distributed throughout. The outer shell is interpreted as a wind-blown bubble of molecular hydrogen within the interstellar medium, commonly seen around hot luminous stars. Away from the galactic plane, there is little gas and the shell is very faint optically. The shell is around 10 parsecs across and 1.9 parsecs thick. Inside the shell are the X-ray jets. Where the southeastern jet reaches the shell there is a bow-shock. Supernova The quantity of iron and nickel within the SNR, and its asymmetric nature, imply a jet-driven type Ib or Ic supernova produced by a star with an initial mass around . Such supernovae are thought to be the source of some long-duration gamma-ray bursts. The properties of the SNR suggest that the supernova occurred about 1,000 years ago. Due to large amounts of galactic dust, the supernova would have been invisible to Earthly viewers. 
The quantities of heavy elements such as chromium and manganese, produced by the explosive nucleosynthesis of silicon during the supernova itself, suggest that the explosion was not sufficiently energetic to produce a gamma-ray burst, but do not rule one out entirely. Remnant The remnant from a core collapse supernova may be a neutron star or a black hole. No neutron star can be detected within W49B, although one would be expected to be clearly visible. This, and the models which best reproduce the nebula, imply that the remnant is a black hole. See also List of supernova remnants References External links
W49B
Physics,Astronomy
573
4,809,015
https://en.wikipedia.org/wiki/Overhead%20crane
An overhead crane, commonly called a bridge crane, is a type of crane found in industrial environments. An overhead crane consists of two parallel rails seated on longitudinal I-beams attached to opposite steel columns by means of brackets. The traveling bridge spans the gap. A hoist, the lifting component of a crane, travels along the bridge. If the bridge is rigidly supported on two or more legs running on two fixed rails at ground level, the crane is called a gantry crane (USA, ASME B30 series) or a goliath crane (UK, BS 466). Unlike mobile or construction cranes, overhead cranes are typically used for either manufacturing or maintenance applications, where efficiency or downtime are critical factors. Single Girder Overhead Crane The single girder type overhead crane is the most common overhead crane. It is generally used for light applications, normally up to 10 tonnes. Double Girder Overhead Crane The double girder overhead crane structure is used for heavier applications up to 125 tons and reaching over 100 feet of span. It can also be used to gain lifting height because the hoist of the double girder overhead crane is placed on the beams and the hook fits between them. Suspended Overhead Crane The rails of a suspended overhead crane are secured to the ceiling of the building. The elimination of dedicated support columns provides additional floor space, but limits lifting capacity. History In 1876 Sampson Moore in England designed and supplied the first ever electric overhead crane, which was used to hoist guns at the Royal Arsenal in Woolwich, London. Since that time Alliance Machine, now defunct, holds an AISE citation for one of the earliest cranes in the USA market. This crane was in service until approximately 1980, and is now in a museum in Birmingham, Alabama. Over the years important innovations, such as the Weston load brake (which is now rare) and the wire rope hoist (which is still popular), have come and gone. 
The original hoist contained components mated together in what is now called the built-up style hoist. These built-up hoists are used for heavy-duty applications such as steel coil handling and by users desiring long life and better durability. They also provide for easier maintenance. Many hoists are now package hoists, built as one unit in a single housing and generally designed for a ten-year life, although that figure is based on an industry-standard calculation rather than on actual service; see the Hoist Manufacturers Institute site for a life calculation based on load and hours used. In the North American market there are a few governing bodies for the industry. The Overhead Alliance is a group that represents the Crane Manufacturers Association of America, the Hoist Manufacturers Institute, and the Monorail Manufacturers Association. These product councils of the Material Handling Industry of America have joined forces to create promotional materials to raise awareness of the benefits of overhead lifting. The members of this group are marketing representatives of the member companies. Early manufacture 1830: First crane company in Germany, the Ludwig Stuckenholz company. 1840: Mass production of overhead cranes starts in Germany. 1854: Sampson Moore & Co in Liverpool, England patents a new winch mechanism that allowed the lifting of heavier weights, such as naval guns. 1861: The first steam-powered overhead crane is installed by John Ramsbottom at the Crewe Railway workshops. Power was transmitted to the crane from a pulley driven by a stationary engine through an endless cotton rope. 1887: The Ludwig Stuckenholz company introduces electrical components to overhead cranes, determining industry design. 1910: The first mass-produced electric motor hoists appear in Germany. Configurations While sharing major components, overhead cranes are manufactured in a number of configurations based on applications. 
EOT (Electric Overhead Traveling) Crane EOT cranes are a common type of overhead crane, found in many factories and warehouses. These cranes are electrically operated by a control pendant, a radio/IR remote, or from an operator cabin attached to the crane. Rotary overhead crane This type of overhead crane has one end of the bridge mounted on a fixed pivot and the other end carried on an annular track; the bridge traverses the circular area beneath. This offers an improvement over a jib crane by making possible a longer reach and eliminating lateral strains on the building walls. Polar crane This type of overhead crane has both ends of the bridge mounted on an annular track. The bridge runs the entire diameter of the track, as opposed to just the radius for a rotary crane. Polar cranes are commonly used in containment buildings at nuclear power stations, fitting their circular, pressure-containing design. Applications Overhead cranes are commonly used in the refinement of steel and other metals, such as copper and aluminium. At every step of the manufacturing process, until it leaves a factory as a finished product, metal is handled by an overhead crane. Raw materials are poured into a furnace by crane; hot metal is then rolled to a specific thickness, tempered or annealed, and stored by an overhead crane for cooling; the finished coils are lifted and loaded onto trucks and trains by overhead crane; and the fabricator or stamper uses an overhead crane to handle the steel in the factory. The automobile industry uses overhead cranes to handle raw materials. Smaller workstation cranes, such as jib or gantry cranes, handle lighter loads in a work area, such as a CNC mill or saw. Almost all paper mills use bridge cranes for regular maintenance, which requires removal of heavy press rolls and other equipment. 
The bridge cranes are used in the initial construction of paper machines because they make it easier to install the heavy cast iron paper drying drums and other massive equipment, some weighing as much as 70 tons. Recently, overhead cranes have been used in the wind-power industry, where giant overhead cranes are used to build the world's largest wind turbines. In many instances, the cost of a bridge crane can be largely offset by savings from not renting mobile cranes in the construction of a facility that uses a lot of heavy processing equipment. Gallery See also Container crane Crane (railroad) Gantry crane EOT crane References Standards ASME B30.2: "Overhead and Gantry Cranes (Top Running Bridge, Single or Multiple Girder, Top Running Trolley Hoist)" ASME B30.17: "Overhead and Gantry Cranes (Top Running Bridge, Single Girder, Underhung Hoist)" ASME B30.11: "Monorails and Underhung Cranes" BS 466: "Specification for power driven overhead travelling cranes, semi-goliath and goliath cranes for general use" (1984) ISO 4301-5: "Cranes; classification; part 5: overhead travelling and portal bridge cranes" (1991) ISO 8686-5: "Cranes; design principles for loads and load combinations; part 5: overhead travelling and portal bridge cranes" (1992) Indian Standard 807 Indian Standard 3177 Indian Standard 4137 FEM 1.001: "Rules for the Design of Hoisting Appliances" External links OSHA regulations for overhead cranes
Overhead crane
Engineering
1,451
36,477,113
https://en.wikipedia.org/wiki/Allergic%20salute
The allergic salute (sometimes called the nasal salute) is the characteristic and sometimes habitual gesture of wiping or rubbing the nose in an upwards or transverse manner with the fingers, palm, or back of the hand. It is termed a salute because the upward movement of the hand resembles a saluting gesture. The habit of using the hand to wipe the nose is observed more often in children but is common in adults as well. Saluting most commonly provides temporary relief from nasal itching and removes small amounts of nasal mucus. In people who are experiencing seizures, nose wiping has been observed as a semi-voluntary action. Process The upwards wiping of the nose and nostrils allows running mucus to be wiped off quickly and easily. Also, as the nostrils are pushed up, the air passages through the nose are temporarily propped open. This is especially beneficial if the air passages are swollen and the nostrils are itchy due to irritations such as allergic rhinitis. The mucus that is wiped onto the hand will most likely carry bacteria and other germs, which could in turn be passed along to other people. Habitual as well as fast or rough saluting may also result in a crease (known as a transverse nasal crease or groove) running across the nose, and can lead to permanent physical deformity observable in childhood and adulthood. See also Allergic shiner Eskimo kissing Nose picking References
Allergic salute
Biology
302
58,455,897
https://en.wikipedia.org/wiki/Aspergillus%20aurantiobrunneus
Aspergillus aurantiobrunneus is a species of fungus in the genus Aspergillus, from the section Nidulantes. The species was first described in 1965. It has been reported to produce emeremophiline, emericolin A–D, variecolin, variecolol, desferritriacetylfusigen, sterigmatocystin, variecoacetal A and B, and variecolactone. Growth and morphology A. aurantiobrunneus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References
Aspergillus aurantiobrunneus
Biology
186
20,750,165
https://en.wikipedia.org/wiki/Aluminium%20magnesium%20boride
Aluminium magnesium boride or Al3Mg3B56, colloquially known as BAM, is a chemical compound of aluminium, magnesium and boron. Whereas its nominal formula is AlMgB14, the chemical composition is closer to Al0.75Mg0.75B14. It is a ceramic alloy that is highly resistant to wear and has an extremely low coefficient of sliding friction, reaching a record value of 0.04 in unlubricated and 0.02 in lubricated AlMgB14−TiB2 composites. First reported in 1970, BAM has an orthorhombic structure with four icosahedral B12 units per unit cell. This ultrahard material has a coefficient of thermal expansion comparable to that of other widely used materials such as steel and concrete. Synthesis BAM powders are commercially produced by heating a nearly stoichiometric mixture of elemental boron (low grade because it contains magnesium) and aluminium for a few hours at a temperature in the range 900 °C to 1500 °C. Spurious phases are then dissolved in hot hydrochloric acid. To ease the reaction and make the product more homogeneous, the starting mixture can be processed in a high-energy ball mill. All pretreatments are carried out in a dry, inert atmosphere to avoid oxidation of the metal powders. BAM films can be coated on silicon or metals by pulsed laser deposition, using AlMgB14 powder as a target, whereas bulk samples are obtained by sintering the powder. BAM usually contains small amounts of impurity elements (e.g., oxygen and iron) that enter the material during preparation. It is thought that the presence of iron (most often introduced as wear debris from mill vials and media) serves as a sintering aid. BAM can be alloyed with silicon, phosphorus, carbon, titanium diboride (TiB2), aluminium nitride (AlN), titanium carbide (TiC) or boron nitride (BN). Properties BAM has the lowest known unlubricated coefficient of friction (0.04), possibly due to self-lubrication. Structure Most superhard materials have simple, high-symmetry crystal structures, e.g., diamond cubic or zinc blende. 
However, BAM has a complex, low-symmetry crystal structure with 64 atoms per unit cell. The unit cell is orthorhombic and its most salient feature is four boron-containing icosahedra. Each icosahedron contains 12 boron atoms. Eight more boron atoms connect the icosahedra to the other elements in the unit cell. The occupancy of metal sites in the lattice is lower than one, and thus, while the material is usually identified with the formula AlMgB14, its chemical composition is closer to Al0.75Mg0.75B14. Such non-stoichiometry is common for borides (see crystal structure of boron-rich metal borides and boron carbide). The unit cell parameters of BAM are a = 1.0313 nm, b = 0.8115 nm, c = 0.5848 nm, Z = 4 (four formula units per unit cell), space group Imma, Pearson symbol oI68, density 2.59 g/cm3. The melting point is roughly estimated as 2000 °C. Optoelectronic BAM has a bandgap of about 1.5 eV. Significant absorption is observed at sub-bandgap energies and attributed to metal atoms. Electrical resistivity depends on the sample purity and is about 10^4 Ohm·cm. The Seebeck coefficient is relatively high, between −5.4 and −8.0 mV/K. This property originates from electron transfer from metal atoms to the boron icosahedra and is favorable for thermoelectric applications. Hardness and fracture toughness The microhardness of BAM powders is 32–35 GPa. It can be increased to 45 GPa by alloying with boron-rich titanium boride. Fracture toughness can be increased with TiB2 or by depositing a quasi-amorphous BAM film, while addition of AlN or TiC to BAM reduces its hardness. By definition, a hardness value exceeding 40 GPa makes BAM a superhard material. In the BAM−TiB2 composite, the maximum hardness and toughness are achieved at ~60 vol.% of TiB2. The wear rate is improved by increasing the TiB2 content to 70–80% at the expense of ~10% hardness loss. The TiB2 additive is a wear-resistant material itself, with a hardness of 28–35 GPa. 
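As a sanity check (an illustrative calculation, not from the source article), the quoted X-ray density of 2.59 g/cm3 can be reproduced from the orthorhombic unit cell parameters and the sub-stoichiometric composition Al0.75Mg0.75B14 given above:

```python
# Cross-check BAM's quoted density from cell parameters and composition.
N_A = 6.02214e23                 # Avogadro constant, 1/mol

# Atomic masses, g/mol
M_Al, M_Mg, M_B = 26.982, 24.305, 10.811

# Formula mass of Al0.75Mg0.75B14 (~75% metal-site occupancy)
M_formula = 0.75 * M_Al + 0.75 * M_Mg + 14 * M_B

# Unit cell volume from a = 1.0313 nm, b = 0.8115 nm, c = 0.5848 nm
a, b, c = 1.0313e-7, 0.8115e-7, 0.5848e-7   # cm
V_cell = a * b * c                          # cm^3

Z = 4                                       # formula units per cell
density = Z * M_formula / (N_A * V_cell)    # g/cm^3
print(f"{density:.2f} g/cm^3")              # ~2.58, close to the quoted 2.59
```

Note that using the full-occupancy formula AlMgB14 instead would give about 2.75 g/cm3, so the measured density itself supports the sub-stoichiometric composition.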
Thermal expansion The thermal expansion coefficient (TEC, also known as the coefficient of thermal expansion, COTE) of AlMgB14 was measured as about 9 × 10−6 K−1, both by dilatometry and by high-temperature X-ray diffraction using synchrotron radiation. This value is fairly close to the COTE of widely used materials such as steel, titanium and concrete. Given the hardness values reported for AlMgB14 and the material's use in wear-resistant coatings, its COTE is relevant when choosing coating application methods and assessing the performance of coated parts in service. Friction A composite of BAM and TiB2 (70 volume percent of TiB2) has one of the lowest known friction coefficients, amounting to 0.04–0.05 in dry scratching by a diamond tip (cf. 0.04 for Teflon) and decreasing to 0.02 in water-glycol-based lubricants. Applications BAM is commercially available and is being studied for potential applications. For example, pistons, seals and blades on pumps could be coated with BAM or BAM + TiB2 to reduce friction between parts and to increase wear resistance. The reduction in friction would reduce energy use. BAM could also be coated onto cutting tools. The reduced friction would lessen the force necessary to cut an object, extend tool life, and possibly allow increased cutting speeds. Coatings only 2–3 micrometers thick have been found to improve efficiency and reduce wear in cutting tools. See also Non-stick surface References External links Material slicker than Teflon: New Scientist article on BAM. News on AlMgB14: press release with photos.
Aluminium magnesium boride
Physics,Engineering
1,324
31,168,137
https://en.wikipedia.org/wiki/Richard%20Gorham
Colonel Sir Richard Masters Gorham CBE, DFC, JP (3 October 1917 – 8 July 2006) was a prominent Bermudian parliamentarian, businessman and philanthropist, who served as a pilot during the Second World War, when he played a decisive role in the Battle of Monte Cassino, earning the Distinguished Flying Cross. Second World War Bermuda Volunteer Engineers Born in Pembroke, Bermuda, the son of Mr. and Mrs. A. J. Gorham, he enlisted in the Bermuda Volunteer Engineers in 1938. The unit was mobilised, along with the other part-time units of the Bermuda Garrison (the Bermuda Militia Artillery (BMA), Bermuda Volunteer Rifle Corps (BVRC), and the Bermuda Militia Infantry), when the Second World War was declared. As a corporal, he was attached to the signalling division at the Royal Naval Dockyard and earned a commission for saving an exercise by suggesting an emergency method of signalling visually to replace a broken wireless transmitter. Bermuda Militia Artillery Gorham was commissioned as a second lieutenant into the Bermuda Militia Artillery on 20 December 1940, to replace Second Lieutenant Francis J. Gosling, who had trained as a pilot at the Bermuda Flying School and was to depart for the United Kingdom in January for transfer to the Royal Air Force. Gorham would serve only briefly with the unit before following Gosling across the Atlantic. He learnt of an instruction from the Army Council that prevented commanding officers from barring officers under their command from taking any training course for which they volunteered (although his former BVE commanding officer, Major Cecil Montgomery-Moore, DFC – having transferred from the BVRC in France to the Royal Flying Corps when he had been commissioned during the First World War, and heading the Bermuda Flying School during the Second World War – must undoubtedly have approved of what Gorham intended). 
Royal Artillery and Royal Air Force At the time, the Royal Air Force was having great difficulty in providing effective air observation post pilots to the British Army. In 1918, the British Army lost its air wing when the Royal Flying Corps was merged with the Royal Naval Air Service to create the independent Royal Air Force (RAF). Since then, the RAF had jealously guarded its monopoly on British military and naval aviation. It provided the Royal Navy with RAF aircrew and support personnel to operate the aircraft of the Fleet Air Arm, although the Navy had been allowed to begin training its own aircrew before the war began. The RAF also provided the aircraft and crews that worked in close support roles to the Army, notably the AOP pilots. These were pilots of light aircraft, such as the Auster, who acted as artillery spotters, directing the fire of the guns of the Royal Artillery from the air. As the RAF had had poor success at training its pilots to direct artillery fire, it was decided to train Army officers who were proficient at the task to pilot aeroplanes. This preceded the creation of a new air wing within the British Army, the Army Air Corps (which initially included parachute and glider-landed units, as well as the Glider Pilot Regiment, but would eventually take over the AOP and other air support roles from the RAF). The then-Second Lieutenants Gorham and Hugh Gregg (who had been commissioned into the BMA from the ranks of the BVE on 28 May 1941) relinquished their BMA commissions on 27 June 1942, on departing Bermuda for England (via Canada), where they received Regular Army emergency commissions into the Royal Artillery on 8 July 1942. Trained by the RAF, they served in squadrons controlled by the RAF. Gorham served in North Africa and Italy. 
In Italy, while in command of B Flight of 655 Squadron, he played the decisive role in the Battle of Monte Cassino when he spotted a German division moving in half-tracked armoured personnel carriers to counter-attack the British 5th Division and the Polish Corps, which were themselves attacking the German-occupied monastery. Contacting the senior Royal Artillery fire control officer on the ground, he had all two thousand field guns within range switched from their local targets and placed under his control. Gorham directed their fire down onto the German division. The guns fired for hours, with Gorham taking turns with other AOP pilots. The German division was completely destroyed, and the Allied ground forces broke through four days later. For this action, Gorham received the Distinguished Flying Cross, a relative rarity for an Army officer. Post-war service Gorham relinquished his commission as a war substantive lieutenant on 13 June 1946, when he was appointed an honorary captain. Returning to Bermuda, he found the BMA and the BVRC had been reduced to skeleton commands and the BVE and BMI disbanded in 1946, along with the Home Guard. The BMA and BVRC would both maintain skeleton command structures until their strengths were built back up again in 1951 (they would amalgamate in 1965 into the Bermuda Regiment). Gorham entered the BVRC, renamed the Bermuda Rifles, in which he served for a number of years as second-in-command. He was intended to replace Lieutenant-Colonel J.C. Astwood as commanding officer in 1954, but was unable to do so due to illness. Major HRG Evans instead took command. Gorham was part of the detachment sent to London for the coronation of Queen Elizabeth II in 1953, departing from Bermuda aboard on 30 April. He retired from the army with the substantive rank of captain; however, he was awarded the honorary rank of colonel in the Royal Artillery. 
Civil life In his civil life, Richard Gorham became a prominent businessman and Member of the Parliament of Bermuda (originally titled Member of the Colonial Parliament, or MCP, but today simply Member of Parliament, or MP). He donated much of his wealth to a host of causes, including the Bermuda Maritime Museum and the Bermuda Sloop Foundation. He was appointed an Ordinary Commander of the Civil Division of the Most Excellent Order of the British Empire (CBE) in the Queen's New Year Honours on 31 December 1977, and a Knight Bachelor in the Queen's New Year Honours on 31 December 1994, for public services. Bibliography Jennifer M. Ingham (now Jennifer M. Hind), Defence, Not Defiance: A History of the Bermuda Volunteer Rifle Corps, Pembroke, Bermuda: The Island Press Ltd. Lt. Commander Ian Strannack, The Andrew and the Onions: The Story of the Royal Navy in Bermuda, 1795–1975, The Bermuda Maritime Museum Press, The Bermuda Maritime Museum, P.O. Box MA 133, Mangrove Bay, Bermuda MA BX. Dr. Edward C. Harris, Bermuda Forts 1612–1957, The Bermuda Maritime Museum Press, The Bermuda Maritime Museum. Lt.-Col. Roger Willock, USMC, Bulwark of Empire: Bermuda's Fortified Naval Base 1860–1920, The Bermuda Maritime Museum Press, The Bermuda Maritime Museum. Sqn.-Ldr Colin A. Pomeroy, Flying Boats of Bermuda, Printlink, PO Box 937, Hamilton, Bermuda HM DX. Major Cecil Montgomery-Moore, DFC, and Peter Kilduff, That's My Bloody Plane, Chester, Connecticut: The Pequot Press, 1975. 
References External links The Royal Gazette, Bermuda, 30 December 1994 Bermuda Online website Bermuda War Veterans Association (BWVA) notice Notice of the marriage of Gorham's stepdaughter, Robin Auchincloss, to Henry Dwight Sedgwick Notice of the marriage of Gorham's son, Anthony Masters McIntire Gorham, a trust officer at the Bank of Butterfield, Hamilton, Bermuda, to Laura Young Taylor Bermuda Biological Station for Research: Honour Roll of Donors – The Associates Program 2003 Bermuda Online: Bermuda's History from 1952 to 1999 1994; 31 December. 12 Islanders are recognised in the Queen's New Year's Honours List Categories Bermudian politicians British Army personnel of World War II Knights Bachelor Royal Artillery officers Military engineers Bermudian soldiers Bermudian aviators 1917 births 2006 deaths People from Pembroke Parish Recipients of the Distinguished Flying Cross (United Kingdom) Bermudian justices of the peace Bermudian people of World War II
Richard Gorham
Engineering
1,679
24,973,032
https://en.wikipedia.org/wiki/Split%20Up%20%28expert%20system%29
Split Up is an intelligent decision support system, which makes predictions about the distribution of marital property following divorce in Australia. It is designed to assist judges, registrars of the Family Court of Australia, mediators and lawyers. Split Up operates as a hybrid system, combining rule-based reasoning with neural network theory. Rule-based reasoning operates within strict parameters, in the form: IF <condition(s)> THEN <action>. Neural networks, by contrast, are considered to be better suited to generate decisions in uncertain domains, since they can be taught to weigh the factors considered by judicial decision makers from case data. Yet, they do not provide an explanation for the conclusions they reach. Split Up, with a view to overcoming this flaw, uses argument structures proposed by Toulmin as the basis for representations from which explanations can be generated. Application In Australian family law, a judge in determining the distribution of property will: identify the assets of the marriage included in the common pool establish what percentage of the common pool each party will receive determine a final property order in line with the decisions made in 1. and 2. Split Up implements steps 1 and 2: the common pool determination and the prediction of a percentage split. The common pool determination Since the determination of marital property is rule-based, it is implemented using directed graphs. However, the percentage split between the parties is discretionary in that a judge has a wide discretion to look at each party's contributions to the marriage under section 79(4) of the Family Law Act 1975. Broadly, the contributions can be taken as financial or non-financial. The party who can demonstrate a larger contribution to the marital relationship will receive a larger proportion of the assets. The court may further look at each party's financial resources and future needs under section 75(2) of the Family Law Act 1975. 
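The two-step hybrid design described above can be sketched in a few lines of code. This is only an illustrative sketch: the asset-inclusion rule, the factor values, and the fixed weights below are invented for the example and are not Split Up's actual rules or trained neural-network weights.

```python
# Illustrative sketch only: the inclusion rule and the weights are
# invented; Split Up's real rules and weights were derived from
# decided Family Court cases.

def common_pool(assets):
    """Step 1 (rule-based): IF an asset was acquired during the
    marriage THEN include its value in the common pool."""
    return sum(a["value"] for a in assets if a["acquired_during_marriage"])

def percentage_split(contributions, future_needs, wealth,
                     weights=(0.5, 0.3, 0.2)):
    """Step 2 (discretionary): a fixed weighted sum standing in for
    the neural network that learns how judges weigh the factors."""
    w_c, w_n, w_w = weights
    score = w_c * contributions + w_n * future_needs + w_w * wealth
    return max(0.0, min(1.0, score))  # clamp to a valid proportion

pool = common_pool([
    {"value": 300_000, "acquired_during_marriage": True},
    {"value": 50_000, "acquired_during_marriage": False},
])
wife_share = percentage_split(contributions=0.6, future_needs=0.7, wealth=0.5)
```

With these made-up inputs, `pool` is 300,000 and `wife_share` comes out at roughly 0.61; in the real system the weights are learned from the case database rather than fixed.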
These needs can include factors such as the inability to gain employment, the continued care of a child under 18 years of age or medical expenses. This means that different judges may and will reach different conclusions based on the same facts, since each judge assigns different relevant weights to each factor. Split Up determines the percentage split by using a combination of rule-based reasoning and neural networks. The percentage split determination In order to determine how judges weigh the different factors, 103 written judgements of commonplace cases were used to establish a database comprising 94 relevant factors for percentage split determination. The factors relevant for a percentage split determination are: Past contributions of a husband relative to those of a wife The husband's future needs relative to those of the wife The wealth of the marriage The factors relevant for a determination of past contributions are: The relative direct and indirect contributions of both parties The length of the marriage The relative contributions of both parties to the homemaking role The hierarchy provides a structure that is used to decompose the task of predicting an outcome into 35 subtasks. Outputs of tasks further down the hierarchy are used as inputs into sub-tasks higher up the hierarchy. Each sub-task is treated as a separate and smaller data mining exercise. Twenty-one solid arcs represent inferences performed with the use of rule sets. For example, the level of wealth of a marriage is determined by a rule, which uses the common pool value. By contrast, the fourteen dashed arcs establish inferences performed with the use of neural networks. These receive their name from the fact that they resemble a nervous system in the brain. They consist of many self-adjusting processing elements cooperating in a densely interconnected network. Each processing element generates a single output that is transmitted to other processing elements. 
The output signal of a processing element depends on the input to the processing element, i.e. each input is gated by a weighting factor that determines the amount of influence that the input will have on the output. The strength of the weighting factors is adjusted autonomously by the processing element as the data is processed. In Split Up, the neural network is a statistical technique for learning the weights of each of the relevant attributes used in a percentage split determination of marital property. Hence the inputs to the neural network are contributions, future needs and wealth, and the output is the predicted percentage split. On each arc there is a statistical weight. Using backpropagation, the neural network learns the necessary pattern to recognize the prediction. It is trained by repeatedly exposing it to examples of the problem and learning the significance (weights) of the input nodes. The neural network used by Split Up is said to generalise well if the output of the network is correct (or nearly correct) for examples not seen during training, which classifies it as an intelligent system. Toulmin Argument Structure Since the manner in which these weights are learned is primarily statistical, domain knowledge of legal rules and principles is not modelled directly. However, explanations for a legal conclusion in a domain as discretionary as determining the distribution of property following divorce are at least as important as the conclusion reached. Hence the creators of Split Up used Toulmin argument structures to provide independent explanations of the conclusions reached. These operate on the basis that every argument makes an assertion based on some data. The assertion of the argument stands as the claim of the argument. Since knowing the data and the claim does not necessarily mean that the claim follows from the data, a mechanism is required to justify the claim in the light of the data. The justification is known as the warrant. 
The backing of an argument supports the validity of the warrant. In the legal domain, this is typically a reference to a statute or a precedent. Here, a neural network (or rules) produces a conclusion from the data of an argument, and the data, warrant and backing are reproduced to generate an explanation. It is noteworthy, though, that an argument's warrant is reproduced as an explanation regardless of the claim values used. This lack of claim-sensitivity must be overcome by the different users, i.e., the judge, the representatives for the wife and the representatives for the husband, each of whom is encouraged to use the system to prepare their cases, but not to rely exclusively on its outcome. References Argument technology Expert systems Legal software Government by algorithm
Split Up (expert system)
Technology,Engineering
1,249
47,110,389
https://en.wikipedia.org/wiki/Hanger%20Cotton%20Gin
The Hanger Cotton Gin is a historic cotton gin in Sweet Home, Arkansas. Built about 1876, it is a rare surviving example of a steam-powered gin. The main building is a three-story frame structure covered in board-and-batten siding. The gin was only operated commercially for a brief period, and was out of service by 1892. Since then, the building has been used as a barn and grain storage facility. It was probably built by Peter Hanger, whose family has been prominent in the Little Rock business community since that time. The gin was listed on the National Register of Historic Places in 1976. See also National Register of Historic Places listings in Pulaski County, Arkansas References Commercial buildings on the National Register of Historic Places in Arkansas Industrial buildings completed in 1876 Cotton gin National Register of Historic Places in Pulaski County, Arkansas 1876 establishments in Arkansas Cotton industry in the United States Steam power
Hanger Cotton Gin
Physics
186
32,622,165
https://en.wikipedia.org/wiki/Ambiguity%20resolution
Ambiguity resolution is used to find the value of a measurement that requires modulo sampling. This is required for pulse-Doppler radar signal processing. Measurements Some types of measurements introduce an unavoidable modulo operation in the measurement process. This happens with all radar systems. Radar aliasing happens when: Pulse repetition frequency (PRF) is too low to sample Doppler frequency directly PRF is too high to sample range directly Pulse Doppler sonar uses similar principles to measure position and velocity in liquids. Radar Systems Radar systems operating at a PRF below about 3 kHz produce true range, but produce ambiguous target speed. Radar systems operating at a PRF above 30 kHz produce true target speed, but produce ambiguous target range. Medium PRF systems, using PRF from 3 kHz to 30 kHz, produce both ambiguous range measurement and ambiguous radial speed measurement. Ambiguity resolution finds true range and true speed by combining ambiguous range and ambiguous speed measurements taken with multiple PRF. Doppler Measurements Doppler systems involve velocity measurements similar to the kind of measurements made using a strobe light. For example, a strobe light can be used as a tachometer to measure rotational velocity for rotating machinery. Strobe light measurements can be inaccurate because the light may be flashing 2 or 3 times faster than shaft rotation speed. The user can only produce an accurate measurement by increasing the pulse rate starting near zero until pulses are fast enough to make the rotating object appear stationary. Radar and sonar systems use the same phenomenon to detect target speed. Operation The ambiguity region is shown graphically in this image. The x axis is range (left-right). The y axis is radial speed. The z axis is amplitude (up-down). The shape of the rectangles changes when the PRF changes. The unambiguous zone is in the lower left corner. 
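The way multiple PRFs resolve an ambiguous range can be sketched numerically: each PRF folds the true range modulo its unambiguous interval c/(2·PRF), and the true range is the candidate on which all the unfolded measurements coincide. This is a simplified illustration; the PRF values, tolerance, and search limit below are arbitrary choices, and operational processors use more robust coincidence algorithms.

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(prf_hz):
    """Maximum unambiguous range for a given PRF: c / (2 * PRF)."""
    return C / (2.0 * prf_hz)

def resolve_range(ambiguous_ranges, prfs, max_range=200e3, tolerance=100.0):
    """Unfold the ambiguous (modulo) range measured at each PRF and
    return the first candidate true range on which they all coincide."""
    intervals = [unambiguous_range(p) for p in prfs]
    first_amb, first_interval = ambiguous_ranges[0], intervals[0]
    k = 0
    while first_amb + k * first_interval <= max_range:
        candidate = first_amb + k * first_interval
        if all(abs(candidate % interval - amb) < tolerance
               for amb, interval in zip(ambiguous_ranges, intervals)):
            return candidate
        k += 1
    return None  # no coincidence found within max_range

# A target at 37 km observed with two medium PRFs: each PRF alone
# reports only the folded (ambiguous) range.
prfs = [10_000.0, 13_000.0]
true_range = 37_000.0
measured = [true_range % unambiguous_range(p) for p in prfs]
recovered = resolve_range(measured, prfs)
```

With these numbers, `recovered` comes back at 37 km even though the two PRFs individually report folded ranges of only about 7 km and 2.4 km. At the low-PRF end (3 kHz), `unambiguous_range` gives about 50 km, matching the figures quoted above.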
All of the other blocks have ambiguous range or ambiguous radial velocity. Pulse Doppler radar relies on medium pulse repetition frequency (PRF) from about 3 kHz to 30 kHz. Each transmit pulse is separated by between 5 km and 50 km of distance. Range Ambiguity Resolution The received signals from multiple PRF are compared using the range ambiguity resolution process. Each range sample is converted from time domain I/Q samples into frequency domain. Older systems use individual filters for frequency filtering. Newer systems use digital sampling and a Fast Fourier transform or Discrete Fourier transform instead of physical filters. Each filter converts time samples into a frequency spectrum. Each spectrum frequency corresponds with a different speed. These samples are thresholded to obtain ambiguous range for several different PRF. Frequency Ambiguity Resolution The received signals are also compared using the frequency ambiguity resolution process. A blind velocity occurs when Doppler frequency falls close to the PRF. This folds the return signal into the same filter as stationary clutter reflections. Rapidly alternating different PRF while scanning eliminates blind frequencies. Further reading References Radar Doppler effects Electromagnetism
Ambiguity resolution
Physics
599
9,895,813
https://en.wikipedia.org/wiki/Cyclin%20D/Cdk4
The Cyclin D/Cdk4 complex is a multi-protein structure consisting of the proteins Cyclin D and cyclin-dependent kinase 4, or Cdk4, a serine-threonine kinase. This complex is one of many cyclin/cyclin-dependent kinase complexes that are the "hearts of the cell-cycle control system" and govern the cell cycle and its progression. As its name would suggest, the cyclin-dependent kinase is only active and able to phosphorylate its substrates when it is bound by the corresponding cyclin. The Cyclin D/Cdk4 complex is integral for the progression of the cell from the Growth 1 phase to the Synthesis phase of the cell cycle, for the Start or G1/S checkpoint. Basic Mechanism Under non-dividing conditions (when the cell is in the G0 phase of the cell cycle), Retinoblastoma protein (Rb) is bound with the E2F transcription factor. During the G0 to G1 transition, growth factor stimulates the synthesis of Cyclin D protein, whose concentration increases until it peaks around the G1/S transition. In the early to middle stages of G1 phase, Cyclin D binds to the constitutively expressed Cdk4 protein, which creates an activated CyclinD/Cdk4 complex. Once activated, the Cyclin D/Cdk4 complex docks at a C-terminus helix on the Retinoblastoma protein (pRb), which is driven by a recognition site for the C-terminus helix on Cyclin D. Upon docking, CyclinD/Cdk4 mono-phosphorylates the Rb protein, which disrupts the Rb/E2F interaction and is sufficient to initiate E2F induction. E2F transcriptionally activates a number of downstream target genes required in the next stages of the cell cycle and in DNA replication by binding to their DNA promoter regions. These genes include the cyclin E and A genes, and other genes associated with the G1/S transition. Synthesis of cyclin E and subsequent binding to constitutively expressed Cdk2 leads to a surge in activity of the cyclin E/Cdk2 complex, which is responsible for hyperphosphorylation of Rb. 
Rb hyperphosphorylation leads to the complete inactivation of Rb and release of E2F, initiating a positive feedback loop between E2F and cyclin E/Cdk2 that stimulates expression of E2F-driven G1/S transition genes, and, at a certain level, activates the bistable switch that drives irreversible progression into S phase. Regulation There are multiple regulation points within this signaling pathway. First and foremost, under non-dividing conditions multiple proteins can inhibit the Cyclin D/Cdk4 complex by binding Cdk4 and inhibiting its association with Cyclin D. Primarily, this is accomplished by p27 but it can also be done by p16 and p21. However, this pathway is stimulated by the upstream binding of growth factors (GF), either from within the cell itself or from neighboring cells. Stimulation by growth factors activates any of a number of receptor tyrosine kinase (RTK) proteins. These receptor tyrosine kinases in turn phosphorylate and activate many other proteins, including Fos/Jun/Myc and phosphatidylinositol 3 kinase (PI-3-K). Fos/Jun/Myc helps to activate the Cyclin D/Cdk4 complex. Phosphatidylinositol 3 kinase phosphorylates p27 (or p16 or p21) and SCF/Skp1. The phosphorylation of p27 inhibits p27's ability to bind Cdk4, thus freeing Cdk4 to associate with Cyclin D and form an active complex. SCF/Skp1 (an E3 ubiquitin ligase) helps to further inhibit p27 and thus further help activate the Cyclin D/Cdk4 complex. Also, p27 acts as an inhibitor of Cyclin E and Cyclin A. So, its inhibition also facilitates the activation of downstream mitotic processes, as noted above. There are also other peripheral regulators of the Cyclin D/Cdk4 complex. In megakaryocytes, it is regulated by the GATA-1 transcription factor. GATA-1 serves as an activating transcription factor of Cyclin D and potentially also as a repressor of the Cyclin D inhibitor, p16. Cdk4 also requires activation upon complex assembly with Cyclin D. 
This is accomplished by a Cdk activating kinase (CAK), which phosphorylates Cdk4 at threonine 172. Cancer Disruptions in the CyclinD/Cdk4 Axis in Cancer The function of the Cyclin D/Cdk4 complex suggests an obvious link to cancer and tumorigenesis. In fact, disruptions in the Cyclin D/Cdk4 axis that lead to increased Cyclin D/Cdk4 activity have been detected in many cancers. There are a number of drivers of these disruptions. First, tumors can overexpress Cyclin D1, as has been found in breast and pancreatic cancer. Second, tumors can have mutations or amplifications in the Cdk4 protein, as has been found in melanoma and squamous cell carcinoma of the head and neck. Third, tumors can experience reduction in or complete loss of negative regulators of Cyclin D/Cdk4, either by mutation, deletion, or downregulation of the inhibitors. Homozygous deletions in p16, an INK4-family inhibitor of Cdk4, have been found in over 50% of gliomas, and mutations in p16 have been found in numerous cancer types including familial melanomas; lymphomas; and esophageal, pancreatic, and non-small cell lung cancers. Decreased expression of p27, a CIP/KIP-family inhibitor, has been found in a number of colon, breast, prostate, liver, lung, bladder, ovary, and stomach cancers; it is an indicator of poorer prognosis in these cancers. Fourth, tumors can downregulate miRNAs that target Cdk4, as has been found in bladder cancer. Lastly, tumors can have dysregulation in upstream oncogenic signaling pathways like the phosphatidylinositol 3-kinase (PI3K) pathway, the mitogen-activated protein kinase (MAPK) pathway, the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) pathway, and steroid hormone signaling pathways that promote CyclinD/Cdk4 activity. Disruption of the Cyclin D/Cdk4 axis through any of these mechanisms induces phosphorylation of Rb, transcription of E2F-driven genes, uncontrolled progression through the G1/S checkpoint, and ultimately cancer cell growth. 
Selective Cdk4/6 Inhibitors Given the role of Cyclin D/Cdk4 in cancer progression, the development of selective Cdk4/6 inhibitors has been of increased interest in recent years. Currently, three Cdk4/6 inhibitors have been approved or are in late-stage development: palbociclib, ribociclib, and abemaciclib. All three of these inhibitors are ATP-competitive, orally administered medications. Thus far, selective Cdk4/6 inhibitors have shown the most promise when used in combination with other anti-estrogen therapies for the treatment of hormone-receptor-positive (HR+) advanced breast cancer. In HR+ breast cancer, cells retain wildtype Rb expression and have overexpressed Cyclin D1. Recent phase III clinical trial results from the MONALEESA-2 study have indicated that ribociclib in combination with the nonsteroidal aromatase inhibitor letrozole increased median overall survival by 12.5 months in HR+, human epidermal growth factor receptor 2 (HER2)-negative postmenopausal breast cancer patients compared to treatment with letrozole alone. Results from the PALOMA-2 study showed that treatment with palbociclib and letrozole increased progression-free survival by 10.3 months compared to treatment with letrozole alone for patients with previously untreated HR+, HER2-negative breast cancer. Results from the MONARCH-3 study have shown that treatment with abemaciclib in combination with letrozole or anastrozole increased median progression-free survival by 13.42 months for postmenopausal patients with untreated HR+, HER2-negative breast cancer. Additional studies into the efficacy of these combination therapies on advanced breast cancer survival in different settings are ongoing. Based on the encouraging results from the clinical trials, additional studies are also underway to investigate the effect of the three selective Cdk4/6 inhibitors on other neoplasms like non-small cell lung cancer. 
Of note, some patients in pre-clinical and clinical settings have not responded to the Cdk4/6 inhibitors or have become resistant to them. Research into the mechanisms behind these clinical outcomes is still in progress. Non-Canonical Roles of CyclinD/Cdk4 In addition to its canonical role in promoting progression through the cell cycle, CyclinD/Cdk4 also plays a role in regulating cell differentiation and metabolism in a variety of contexts. Cell Differentiation Rb can promote cell differentiation by interacting with multiple different cell-type-specific transcription factors. These transcription factors include MyoD and MEF2, which regulate muscle cell differentiation, and RUNX2, which regulates osteoblast differentiation. When CyclinD/Cdk4 phosphorylates and inactivates Rb, Rb’s role in driving cell differentiation via interaction with these transcription factors is inhibited. Cyclin D/Cdk4 activity can also directly block the association of MEF2 with GRIP-1, a transcription co-activator, which inhibits MEF2’s ability to induce muscle gene expression and subsequent differentiation. Cyclin D/Cdk4 activity can also regulate cell differentiation through Rb-independent pathways. For instance, CyclinD/Cdk4 activity has been shown to phosphorylate the transcription factor GATA4, which targets it for degradation and inhibits differentiation of cardiomyocytes. Additionally, Cyclin D/Cdk4 activity is thought to block neurogenesis in neural stem cells and promote the expansion of basal progenitors. Metabolism Gluconeogenesis in the liver is critical for survival during times of fasting and starvation. CyclinD1/Cdk4 activity has been shown to play a role in regulating glucose homeostasis by suppressing hepatic gluconeogenesis via phosphorylation-induced inhibition of the peroxisome proliferator-activated receptor γ coactivator-1α (PGC1α). PGC1α is a transcriptional coactivator that drives the gene expression programming for gluconeogenesis in the liver. 
Additional research has shown that the CyclinD/Cdk4/Rb/E2F pathway also influences the expression of Kir6.2, a subunit of the ATP-sensitive K channel that regulates glucose-induced insulin secretion. When the CyclinD/Cdk4 complex is inhibited, Kir6.2 expression is downregulated, and there is impaired insulin secretion and glucose intolerance when tested in mouse models. Given that glucose homeostasis is dysregulated in diabetes, there is ongoing interest in whether the CyclinD/Cdk4 complex could be a potential target for disease treatment. See also Human papillomavirus References Cell cycle Human proteins Protein complexes Transcription factors
Cyclin D/Cdk4
Chemistry,Biology
2,553
49,104,107
https://en.wikipedia.org/wiki/Fomalhaut%20C
Fomalhaut C, also designated LP 876-10, is the distant third star of the Fomalhaut system. It is about five degrees from Fomalhaut, roughly halfway between it and the Helix Nebula. It is currently from Fomalhaut (A), and 3.2 light-years away from Fomalhaut B (0.987 pc). The entire system is approximately from the Solar System. Discovery and observation Fomalhaut C was catalogued as a high-proper-motion star by Willem Luyten in 1979, and later, in October 2013, was determined to be part of the Fomalhaut system. The star has a mass of , while Fomalhaut A is , and Fomalhaut B is . The apparent magnitude of Fomalhaut C is 12.6 requiring a six-inch aperture or larger telescope for direct visual observation. The entire system of Fomalhaut is around 440 million years old, which is roughly a tenth of the Solar System's age. Debris disk In December 2013, a debris disk was discovered around this star. This is the second debris disk in the system, as a first one was discovered around Fomalhaut A. The debris disc was discovered with the Herschel Space Telescope, and with the telescope the debris disc's temperature has been estimated at . The distance from the star was originally thought to be around , but since it is hypothesized that it is mainly small grains, which trap more heat, it may be further. However, if it were beyond 40 AU, it would have been already cataloged, which gives it a radius between 10 and 40 AU. The disk was directly detected by ALMA in 2021 and by JWST in 2024. Comets In addition to the debris disk, there are also comets orbiting Fomalhaut C. The debris disk orbiting C is sometimes referred to as a comet belt, due to some very elliptical orbits. The disk around Fomalhaut A is also thought to have many comets, as it is also elliptical. The Fomalhaut A belt is thought to possibly be due to a close encounter with either an undiscovered exoplanet, Fomalhaut B, or Fomalhaut C. With both A and C having comet belts, the absence of one around B is a mystery. 
Not only does the presence of comets make the belt more elliptical, it also makes it brighter, which played a part in its discovery. It has been hypothesized that Fomalhaut C could have hidden exoplanets within its belt of comets and asteroids. It has also been hypothesized that A and C have interacted, which could have formed C's comet belt if the interaction involved A giving up comets and debris. With Fomalhaut B not having any discs or belts around it, it could have been unaffected by the encounter between them. Orbit The orbit of Fomalhaut C around A is estimated to take 20 million years to complete. Given the age of the three stars of 440 million years and the distance of 2.5 light-years, this would mean that Fomalhaut C has only completed a full orbit around Fomalhaut A 22 times. The tidal radius of the Fomalhaut system is , which places Fomalhaut C well within it, further supporting the classification of the Fomalhaut system as trinary. See also Fomalhaut TW Piscis Austrini (Fomalhaut B) Fomalhaut b Notes References Aquarius (constellation) M-type main-sequence stars J22480446-2422075 Piscis Austrini, Alpha C Fomalhaut Circumstellar disks
Fomalhaut C
Astronomy
810
4,234,887
https://en.wikipedia.org/wiki/AP%20Computer%20Science
The Advanced Placement (AP) Computer Science (shortened to AP Comp Sci or APCS) program includes two Advanced Placement courses and examinations covering the field of computer science. They are offered by the College Board to high school students as an opportunity to earn college credit for college-level courses. The program consists of two current courses (Computer Science Principles and Computer Science A) and one discontinued course (Computer Science AB). AP Computer Science was taught using Pascal for the 1984–1998 exams, C++ for 1999–2003, and Java since 2004. Courses There are two AP computer science courses currently offered. Computer Science Principles is considered to be a more "big picture" course than the programming-intensive Computer Science A. AP Computer Science A AP Computer Science A is a programming-based course, equivalent to a first-semester–level college course. AP CSA emphasizes object-oriented programming and is taught using the Java programming language. The course has an emphasis on problem-solving using data structures and algorithms. AP Computer Science Principles AP Computer Science Principles is an introductory college-level course in computer science with an emphasis on computational thinking and the impacts of computing. The course has no designated programming language, and teaches algorithms and programming, complementing Computer Science A. AP Computer Science AB (discontinued) AP Computer Science AB included all the topics of AP Computer Science A, as well as a more formal and a more in-depth study of algorithms, data structures, and data abstraction. For example, binary trees were studied in AP Computer Science AB but not in AP Computer Science A. The use of recursive data structures and dynamically allocated structures was fundamental to AP Computer Science AB. AP Computer Science AB was equivalent to a full-year college course. 
Due to low numbers of students taking the exam, AP Computer Science AB was discontinued following the May 2009 exam administration. See also Computer science education References Further reading Computer Science Computer science education
AP Computer Science
https://en.wikipedia.org/wiki/Woman
A woman is an adult female human. Before adulthood, a female child or adolescent is referred to as a girl. Typically, women are of the female sex and inherit a pair of X chromosomes, one from each parent, and fertile women are capable of pregnancy and giving birth from puberty until menopause. More generally, sex differentiation of the female fetus is governed by the lack of a present, or functioning, SRY gene on either one of the respective sex chromosomes. Female anatomy is distinguished from male anatomy by the female reproductive system, which includes the ovaries, fallopian tubes, uterus, vagina, and vulva. An adult woman generally has a wider pelvis, broader hips, and larger breasts than an adult man. These characteristics facilitate childbirth and breastfeeding. Women typically have less facial and other body hair, have a higher body fat composition, and are on average shorter and less muscular than men. Throughout human history, traditional gender roles within patriarchal societies have often defined and limited women's activities and opportunities, resulting in gender inequality; many religious doctrines and legal systems stipulate certain rules for women. With restrictions loosening during the 20th century in many societies, women have gained wider access to careers and the ability to pursue higher education. Violence against women, whether within families or in communities, has a long history and is primarily committed by men. Some women are denied reproductive rights. The movements and ideologies of feminism have a shared goal of achieving gender equality. Some women are transgender, meaning they were assigned male at birth, while some women are intersex, meaning they have sex characteristics that do not fit typical notions of female biology. Etymology The spelling of woman in English has progressed over the past millennium from wīfmann to wīmmann to wumman, and finally, the modern spelling woman.
In Old English, mann had the gender-neutral meaning of "human being", akin to the Modern English person or someone. The word for "woman" was wīf or wīfmann ("woman-person"), whereas "man" was wer or wǣpnedmann. However, following the Norman Conquest, man began to mean "male human", and by the late 13th century it had largely replaced wer. The consonants f and m in wīfmann coalesced into the modern woman, while wīf narrowed to specifically mean a married woman ("wife"). It is a popular misconception that the term "woman" is etymologically connected to "womb". "Womb" derives from the Old English word wamb, meaning "belly, uterus" (cognate to the modern German colloquial term "Wamme", from Old High German wamba for "belly, paunch"). Terminology The word woman can be used generally, to mean any female human, or specifically, to mean an adult female human as contrasted with girl. The word girl originally meant "young person of either sex" in English; it was only around the beginning of the 16th century that it came to mean specifically a female child. The term girl is sometimes used colloquially to refer to a young or unmarried woman; however, during the early 1970s, feminists challenged such use because the use of the word to refer to a fully grown woman may cause offense. In particular, previously common terms such as office girl are no longer widely used. Conversely, in certain cultures which link family honor with female virginity, the word girl (or its equivalent in other languages) is still used to refer to a never-married woman; in this sense it is used in a fashion roughly analogous to the more-or-less obsolete English maid or maiden. Different countries have different laws, but age 18 is frequently considered the age of majority (the age at which a person is legally considered an adult). The social sciences' views on what it means to be a woman have changed significantly since the early 20th century as women gained more rights and greater representation in the workforce, with scholarship in the 1970s moving toward a focus on the sex–gender distinction and social construction of gender.
There are various words used to refer to the quality of being a woman. The term "womanhood" merely means the state of being a woman; "femininity" is used to refer to a set of typical female qualities associated with a certain attitude to gender roles; "womanliness" is like "femininity", but is usually associated with a different view of gender roles. "Distaff" is an archaic adjective derived from women's conventional role as a spinner, now used only as a deliberate archaism. Menarche, the onset of menstruation, occurs on average at age 12–13. Many cultures have rites of passage to symbolize a girl's coming of age, such as confirmation in some branches of Christianity, bat mitzvah in Judaism, or a custom of a special celebration for a certain birthday (generally between 12 and 21), like the quinceañera of Latin America. Biology Male and female bodies have some differences. Some differences, such as the external sex organs, are visible, while other differences, such as internal anatomy and genetic characteristics, are not visible. Genetic characteristics Typically, the cells of female humans contain two X chromosomes, while the cells of male humans have an X and a Y chromosome. During early fetal development, all embryos have phenotypically female genitalia up until week 6 or 7, when a male embryo's gonads differentiate into testes due to the action of the SRY gene on the Y chromosome. Sex differentiation proceeds in female humans in a way that is independent of gonadal hormones. Because humans inherit mitochondrial DNA only from the mother's ovum, genealogical researchers can trace maternal lineage far back in time. Hormonal characteristics, menstruation and menopause Female puberty triggers bodily changes that enable sexual reproduction via fertilization. 
In response to chemical signals from the pituitary gland, the ovaries secrete hormones that stimulate maturation of the body, including increased height and weight, body hair growth, breast development and menarche (the onset of menstruation). Most girls go through menarche between ages 12–13, and are then capable of becoming pregnant and bearing children. Pregnancy generally requires internal fertilization of the eggs with sperm, via either sexual intercourse or artificial insemination, though in vitro fertilization allows fertilization to occur outside the human body. Humans are similar to other large mammals in that they usually give birth to a single offspring per pregnancy, but are unusual in being altricial compared to most other large mammals, meaning young are undeveloped at time of birth and require the aid of their parents or guardians to fully mature. Sometimes humans have multiple births, most commonly twins. Usually between ages 49–52, a woman reaches menopause, the time when menstrual periods stop permanently, and she is no longer able to bear children. Unlike most other mammals, the human lifespan usually extends many years after menopause. Many women become grandmothers and contribute to the care of grandchildren and other family members. Many biologists believe that the extended human lifespan is evolutionarily driven by kin selection, though other theories have also been proposed. Morphological and physiological characteristics In terms of biology, the female sex organs are involved in the reproductive system, whereas the secondary sex characteristics are involved in breastfeeding children and attracting a mate. Humans are placental mammals, which means the mother carries the fetus in the uterus and the placenta facilitates the exchange of nutrients and waste between the mother and fetus.
The internal female genitalia consist of the ovaries (gonads that produce female gametes called ova); the fallopian tubes (tubular structures that transport the egg cells); the uterus (an organ with tissue to protect and nurture the developing fetus, and a cervix to expel it); the accessory glands (Bartholin's and Skene's, two pairs of glands that help lubricate during intercourse); and the vagina (an organ used in copulation and birthing). The vulva (external female genitalia) consists of the clitoris, labia majora, labia minora and vestibule. The vestibule is where the vaginal and urethral openings are located. The mammary glands are hypothesized to have evolved from apocrine-like glands to produce milk, a nutritious secretion that is the most distinctive characteristic of mammals, along with live birth. In mature women, the breast is generally more prominent than in most other mammals; this prominence, not necessary for milk production, is thought to be at least partially the result of sexual selection. Estrogens, which are primary female sex hormones, have a significant impact on a female's body shape. They are produced in both men and women, but their levels are significantly higher in women, especially in those of reproductive age. Besides other functions, estrogens promote the development of female secondary sexual characteristics, such as breasts and hips. As a result of estrogens, during puberty, girls develop breasts and their hips widen. Working against estrogen, the presence of testosterone in a pubescent female inhibits breast development and promotes muscle and facial hair development. Circulatory system Women have lower hematocrit (the volume percentage of red blood cells in blood) than men; this is due to lower testosterone, which stimulates the production of erythropoietin by the kidney. The normal hematocrit level for a woman is 36% to 48% (for men, 41% to 50%).
The normal level of hemoglobin (an oxygen-transport protein found in red blood cells) for women is 12.0 to 15.5 g/dL (for men, 13.5 to 17.5 g/dL). Women's hearts have finer-grained textures in the muscle compared to men's hearts, and the heart muscle's overall shape and surface area also differ from men's when controlling for body size and age. In addition, women's hearts age more slowly compared to men's hearts. Sex distribution Girls are born slightly less frequently than boys (the ratio is around 1:1.05). Out of the total human population in 2015, there were 1018 men for every 1000 women. Intersex women Intersex women have an intersex condition, usually defined as those born with ambiguous genitalia. Most individuals with ambiguous genitalia are assigned female at birth, and most intersex women are cisgender. The medical practice of assigning a binary female sex to intersex youth is often controversial. Some intersex conditions are associated with typical rates of female gender identity, while others are associated with substantially higher rates of identifying as LGBT compared to the general population. Sexuality and gender Female sexuality and attraction are variable, and a woman's sexual behavior can be affected by many factors, including evolved predispositions, personality, upbringing, and culture. While most women are heterosexual, significant minorities are lesbian or bisexual. Most cultures use a gender binary in which woman is one of the two genders, the other being man; others have a third gender. Most women are cisgender, meaning their female sex assignment at birth corresponds with their female gender identity. Some women are transgender, meaning they were assigned male at birth. Trans women may experience gender dysphoria, the distress brought upon by the discrepancy between a person's gender identity and their sex assigned at birth. Gender dysphoria may be treated with gender-affirming care, which may include social or medical transition.
Social transition may involve changes such as adopting a new name, hairstyle, clothing, and pronoun associated with the individual's affirmed female gender identity. A major component of medical transition for trans women is feminizing hormone therapy, which causes the development of female secondary sex characteristics (such as breasts, redistribution of body fat, and lower waist–hip ratio). Medical transition may also involve gender-affirming surgery, and a trans woman may undergo one or more feminizing procedures which result in anatomy that is typically gendered female. Like cisgender women, trans women may have any sexual orientation. Health Factors that specifically affect the health of women in comparison with men are most evident in those related to reproduction, but sex differences have been identified from the molecular to the behavioral scale. Some of these differences are subtle and difficult to explain, partly due to the fact that it is difficult to separate the health effects of inherent biological factors from the effects of the surrounding environment they exist in. Sex chromosomes and hormones, as well as sex-specific lifestyles, metabolism, immune system function, and sensitivity to environmental factors are believed to contribute to sex differences in health at the levels of physiology, perception, and cognition. Women can have distinct responses to drugs and thresholds for diagnostic parameters. Some diseases primarily affect or are exclusively found in women, such as lupus, breast cancer, cervical cancer, or ovarian cancer. The medical practice dealing with female reproduction and reproductive organs is called gynaecology ("science of women"). 
Maternal mortality Maternal mortality or maternal death is defined by WHO as "the death of a woman while pregnant or within 42 days of termination of pregnancy, irrespective of the duration and site of the pregnancy, from any cause related to or aggravated by the pregnancy or its management but not from accidental or incidental causes." In 2008, noting that each year more than 500,000 women die of complications of pregnancy and childbirth and at least seven million experience serious health problems while 50 million more have adverse health consequences after childbirth, the World Health Organization urged midwife training to strengthen maternal and newborn health services. To support the upgrading of midwifery skills the WHO established a midwife training program, Action for Safe Motherhood. In 2017, 94% of maternal deaths occurred in low and lower middle-income countries. Approximately 86% of maternal deaths occur in sub-Saharan Africa and South Asia, with sub-Saharan Africa accounting for around 66% and Southern Asia accounting for around 20%. The main causes of maternal mortality include pre-eclampsia and eclampsia, unsafe abortion, pregnancy complications from malaria and HIV/AIDS, and severe bleeding and infections following childbirth. Most European countries, Australia, Japan, and Singapore are very safe in regard to childbirth. In 1990, the US ranked 12th of the 14 developed countries that were analyzed and since that time the death rates of every country have steadily improved while the US rate has spiked dramatically. While the others that were analyzed in 1990 show a 2017 death rate of fewer than 10 deaths per every 100,000 live births, the U.S. rate rose to 26.4. Furthermore, for every one of the 700 to 900 women who die in the U.S. each year during pregnancy or childbirth, 70 experience significant complications, totaling more than one percent of all births. Life expectancy The life expectancy for women is generally longer than men's.
This advantage begins from birth, with newborn girls more likely to survive the first year than boys. Worldwide, women live six to eight years longer than men. However, this varies by place and situation. For example, discrimination against women has lowered female life expectancy in some parts of Asia so that men there live longer than women. The difference in life expectancy is believed to be partly due to biological advantages and partly due to gendered behavioral differences between men and women. For example, women are less likely to engage in unhealthy behaviors like smoking and reckless driving, and consequently have fewer preventable premature deaths from such causes. In some developed countries, the life expectancy is evening out. This is believed to be caused both by worse health behaviors among women, especially an increased rate of smoking tobacco by women, and improved health among men, such as less cardiovascular disease. The World Health Organization (WHO) writes that it is "important to note that the extra years of life for women are not always lived in good health." Reproductive rights Reproductive rights are legal rights and freedoms relating to reproduction and reproductive health. The International Federation of Gynecology and Obstetrics has stated that: ... the human rights of women include their right to have control over and decide freely and responsibly on matters related to their sexuality, including sexual and reproductive health, free of coercion, discrimination and violence. Equal relationships between women and men in matters of sexual relations and reproduction, including full respect for the integrity of the person, require mutual respect, consent and shared responsibility for sexual behavior and its consequences. The World Health Organization reports that based on data from 2010 to 2014, 56 million induced abortions occurred worldwide each year (25% of all pregnancies). Of those, about 25 million were considered unsafe.
The WHO reports that in developed regions about 30 women die for every 100,000 unsafe abortions and that number rises to 220 deaths per 100,000 unsafe abortions in developing regions and 520 deaths per 100,000 unsafe abortions in sub-Saharan Africa. The WHO ascribes these deaths to restrictive laws; poor availability of services; high cost; stigma; conscientious objection of health-care providers; and unnecessary requirements, such as mandatory waiting periods, mandatory counseling, provision of misleading information, third-party authorization, and medically unnecessary tests that delay care. Femininity Femininity (also called womanliness or girlishness) is a set of attributes, behaviors, and roles generally associated with women and girls. Although femininity is socially constructed, some behaviors considered feminine are biologically influenced. The extent to which femininity is biologically or socially influenced is subject to debate. It is distinct from the definition of the biological female sex, as both men and women can exhibit feminine traits. History The earliest women whose names are known include: Neithhotep (c. 3200 BCE), the wife of Narmer and the first queen of ancient Egypt. Merneith (c. 3000 BCE), consort and regent of ancient Egypt during the first dynasty. She may have been ruler of Egypt in her own right. Peseshet (c. 2600 BCE), a physician in Ancient Egypt. Puabi (c. 2600 BCE), or Shubad – queen of Ur whose tomb was discovered with many expensive artifacts. Other known pre-Sargonic queens of Ur (royal wives) include Ashusikildigir, Ninbanda, and Gansamannu. Kugbau (c. 2500 BCE), a taverness from Kish chosen by the Nippur priesthood to become hegemonic ruler of Sumer, and in later ages deified as "Kubaba". Tashlultum (c. 2400 BCE), Akkadian queen, wife of Sargon of Akkad and mother of Enheduanna. Baranamtarra (c. 2384 BCE), prominent and influential queen of Lugalanda of Lagash.
Other known pre-Sargonic queens of the first Lagash dynasty include Menbara-abzu, Ashume'eren, Ninkhilisug, Dimtur, and Shagshag, and the names of several princesses are also known. Enheduanna (c. 2285 BCE), the high priestess of the temple of the Moon God in the Sumerian city-state of Ur and possibly the first known poet and first named author of either gender. Shibtu (c. 1775 BCE), king Zimrilim's consort and queen of the Syrian city-state of Mari. During her husband's absence, she ruled as regent of Mari and enjoyed extensive administrative powers as queen. Culture and gender roles In recent history, gender roles have changed greatly. At some earlier points in history, children's occupational aspirations starting at a young age differed according to gender. Traditionally, middle class women were involved in domestic tasks emphasizing child care. For poorer women, economic necessity compelled them to seek employment outside the home even if individual poor women may have preferred domestic tasks. Many of the occupations that were available to them were lower in pay than those available to men. As changes in the labor market for women came about, availability of employment changed from only "dirty", long hour factory jobs to "cleaner", more respectable office jobs where more education was demanded. Married women's participation in the U.S. labor force rose from 5.6–6% in 1900 to 23.8% in 1923. These shifts in the labor force led to changes in the attitudes towards women at work, allowing for the revolution which resulted in women becoming career and education oriented. In the 1970s, many female academics, including scientists, avoided having children. Throughout the 1980s, institutions tried to equalize conditions for men and women in the workplace. 
Even so, the inequalities at home hampered women's opportunities: professional women were still generally considered responsible for domestic labor and child care, which limited the time and energy they could devote to their careers. Until the early 20th century, U.S. women's colleges required their women faculty members to remain single, on the grounds that a woman could not carry on two full-time professions at once. According to Schiebinger, "Being a scientist and a wife and a mother is a burden in society that expects women more often than men to put family ahead of career." (p. 93). Movements advocate equality of opportunity for both sexes and equal rights irrespective of gender. Through a combination of economic changes and the efforts of the feminist movement, in recent decades women in many societies have gained access to careers beyond the traditional homemaker. Despite these advances, modern women in Western society still face challenges in the workplace as well as in education, violence, health care, politics, and motherhood, among other areas. Sexism can be a main concern and barrier for women almost anywhere, though its forms, perception, and gravity vary between societies and social classes. There has been an increase in the endorsement of egalitarian gender roles in the home by both women and men. Although a greater number of women are seeking higher education, their salaries are often less than those of men. CBS News said in 2005 that in the United States women who are ages 30 to 44 and hold a university degree make 62% of what similarly qualified men do, a lower rate than in all but three of the 19 countries for which numbers are available. Some Western nations with greater inequality in pay are Germany, New Zealand and Switzerland.
Religion Particular religious doctrines have specific stipulations relating to gender roles, social and private interaction between the sexes, appropriate dressing attire for women, and various other issues affecting women and their position in society. In many countries, these religious teachings influence the criminal law, or the family law of those jurisdictions (see Sharia law, for example). The relation between religion, law and gender equality has been discussed by international organizations. Violence against women The UN Declaration on the Elimination of Violence against Women defines "violence against women" as "any act of gender-based violence that results in, or is likely to result in, physical, sexual or psychological harm or suffering to women, including threats of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or in private life". It identifies three forms of such violence: that which occurs in the family, that which occurs within the general community, and that which is perpetrated or condoned by the State. It also states that "violence against women is a manifestation of historically unequal power relations between men and women". Violence against women remains a widespread problem, fueled, especially outside the West, by patriarchal social values, lack of adequate laws, and lack of enforcement of existing laws. Social norms that exist in many parts of the world hinder progress towards protecting women from violence. For example, according to surveys by UNICEF, the percentage of women aged 15–49 who think that a husband is justified in hitting or beating his wife under certain circumstances is as high as 90% in Afghanistan and Jordan, 87% in Mali, 86% in Guinea and Timor-Leste, 81% in Laos, and 80% in the Central African Republic. A 2010 survey conducted by the Pew Research Center found that stoning as a punishment for adultery was supported by 82% of respondents in Egypt and Pakistan, 70% in Jordan, 56% in Nigeria, and 42% in Indonesia. Specific forms of violence that affect women include female genital mutilation, sex trafficking, forced prostitution, forced marriage, rape, sexual harassment, honor killings, acid throwing, and dowry related violence.
Governments can be complicit in violence against women, such as when stoning is used as a legal punishment, mostly for women accused of adultery. There have also been many forms of violence against women which have been prevalent historically, notably the burning of witches, the sacrifice of widows (such as sati) and foot binding. The prosecution of women accused of witchcraft has a long tradition; for example, during the early modern period (between the 15th and 18th centuries), witch trials were common in Europe and in the European colonies in North America. Today, there remain regions of the world (such as parts of Sub-Saharan Africa, rural North India, and Papua New Guinea) where belief in witchcraft is held by many people, and women accused of being witches are subjected to serious violence. In addition, there are also countries which have criminal legislation against the practice of witchcraft. In Saudi Arabia, witchcraft remains a crime punishable by death, and in 2011 the country beheaded a woman for 'witchcraft and sorcery'. It is also the case that certain forms of violence against women have been recognized as criminal offences only during recent decades, and are not universally prohibited, in that many countries continue to allow them. This is especially the case with marital rape. In the Western World, there has been a trend towards ensuring gender equality within marriage and prosecuting domestic violence, but in many parts of the world women still lose significant legal rights when entering a marriage. Sexual violence against women greatly increases during times of war and armed conflict, during military occupation, or ethnic conflicts; most often in the form of war rape and sexual slavery. Contemporary examples of sexual violence during war include rape during the Armenian Genocide, rape during the Bangladesh Liberation War, rape in the Bosnian War, rape during the Rwandan genocide, and rape during Second Congo War. 
In Colombia, the armed conflict has also resulted in increased sexual violence against women. The most recent case was the sexual jihad carried out by ISIL, in which 5000–7000 Yazidi and Christian girls and children were sold into sexual slavery during the genocide and rape of Yazidi and Christian women, some of whom jumped to their death from Mount Sinjar, as described in a witness statement. Laws and policies on violence against women vary by jurisdiction. In the European Union, sexual harassment and human trafficking are subject to directives. Clothing, fashion and dress codes Women in different parts of the world dress in different ways, with their choices of clothing being influenced by local culture, religious tenets, traditions, social norms, and fashion trends, among other factors. Different societies have different ideas about modesty. In many jurisdictions, laws limit what women may or may not wear. This is especially the case in regard to Islamic dress. While certain jurisdictions legally mandate such clothing (the wearing of the headscarf), other countries forbid or restrict the wearing of certain hijab attire (such as burqa/covering the face) in public places (one such country is France – see French ban on face covering). These laws – both those mandating and those prohibiting certain articles of dress – are highly controversial. Fertility and family life The total fertility rate (TFR) – the average number of children born to a woman over her lifetime – differs significantly between different regions of the world. In 2016, the highest estimated TFR was in Niger (6.62 children born per woman) and the lowest in Singapore (0.82 children/woman). While most Sub-Saharan African countries have a high TFR, which creates problems due to lack of resources and contributes to overpopulation, most Western countries currently experience a sub-replacement fertility rate which may lead to population ageing and population decline.
In many parts of the world, there has been a change in family structure over the past few decades. For instance, in the West, there has been a trend of moving away from living arrangements that include the extended family to those which only consist of the nuclear family. There has also been a trend to move from marital fertility to non-marital fertility. Children born outside marriage may be born to cohabiting couples or to single women. While births outside marriage are common and fully accepted in some parts of the world, in other places they are highly stigmatized, with unmarried mothers facing ostracism, including violence from family members, and in extreme cases even honor killings. In addition, sex outside marriage remains illegal in many countries (such as Saudi Arabia, Pakistan, Afghanistan, Iran, Kuwait, Maldives, Morocco, Oman, Mauritania, United Arab Emirates, Sudan, and Yemen). The social role of the mother differs between cultures. In many parts of the world, women with dependent children are expected to stay at home and dedicate all their energy to child raising, while in other places mothers most often return to paid work (see working mother and stay-at-home mother). Education Single-sex education has traditionally been dominant and is still highly relevant. Universal education, meaning state-provided primary and secondary education independent of gender, is not yet a global norm, even if it is assumed in most developed countries. In some Western countries, women have surpassed men at many levels of education. For example, in the United States in 2005/2006, women earned 62% of associate degrees, 58% of bachelor's degrees, 60% of master's degrees, and 50% of doctorates. The educational gender gap in Organisation for Economic Co-operation and Development (OECD) countries has been reduced over the last 30 years. 
Younger women today are far more likely to have completed a tertiary qualification: in 19 of the 30 OECD countries, more than twice as many women aged 25 to 34 have completed tertiary education as women aged 55 to 64. In 21 of 27 OECD countries with comparable data, the number of women graduating from university-level programmes is equal to or exceeds that of men. 15-year-old girls tend to show much higher expectations for their careers than boys of the same age. While women account for more than half of university graduates in several OECD countries, they receive only 30% of tertiary degrees granted in science and engineering fields, and women account for only 25% to 35% of researchers in most OECD countries. Research shows that while women are studying at prestigious universities at the same rate as men they are not being given the same chance to join the faculty. Sociologist Harriet Zuckerman has observed that the more prestigious an institute is, the more difficult and time-consuming it will be for women to obtain a faculty position there. In 1989, Harvard University tenured its first woman in chemistry, Cynthia Friend, and in 1992 its first woman in physics, Melissa Franklin. Zuckerman also observed that women were more likely to hold their first professional positions as instructors and lecturers while men are more likely to work first in tenure positions. According to Smith and Tang, as of 1989, 65% of men and only 40% of women held tenured positions and only 29% of all scientists and engineers employed as assistant professors in four-year colleges and universities were women. In the Soviet Union, 40% of chemistry PhDs went to women in the 1960s. In 1992, women earned 9% of the PhDs awarded in engineering, but only one percent of those women became professors. In 1995, 11% of professors in science and engineering were women. In relation, only 3 of the 311 deans of engineering schools were women, which is less than 1% of the total.
Even in psychology, a degree in which women earn the majority of PhDs, women hold significantly fewer of the tenured positions, roughly 19% in 1994. Literacy World literacy is lower for women than for men. In 2020, 87% of the world's women were literate, compared to 90% of men. But sub-Saharan Africa and southwest Asia lagged behind the rest of the world; only 59% of women in sub-Saharan Africa were literate. Government and politics Women are underrepresented in government in most countries. In January 2019, the global average of women in national assemblies was 24.3%. Suffrage is the civil right to vote, and women's suffrage movements have a long historic timeline. For example, women's suffrage in the United States was achieved gradually, first at state and local levels in the late 19th and early 20th centuries, and then in 1920, when women in the US received universal suffrage with the passage of the Nineteenth Amendment to the United States Constitution. Some Western countries were slow to allow women to vote, notably Switzerland, where women gained the right to vote in federal elections in 1971, and in the canton of Appenzell Innerrhoden women were granted the right to vote on local issues only in 1991, when the canton was forced to do so by the Federal Supreme Court of Switzerland; and Liechtenstein, in 1984, through a women's suffrage referendum. Science, literature and art Women have, throughout history, made contributions to science, literature and art. Science and medicine One area where women have historically been permitted most access was that of obstetrics and gynecology (prior to the 18th century, caring for pregnant women in Europe was undertaken by women; from the mid-18th century onwards, medical monitoring of pregnant women started to require rigorous formal education, to which women did not generally have access, and thus the practice was largely transferred to men). 
Literature Writing was generally also considered acceptable for upper-class women, although achieving success as a female writer in a male-dominated world could be very difficult; as a result, several women writers adopted a male pen name (e.g. George Sand, George Eliot). Music Women have been composers, songwriters, instrumental performers, singers, conductors, music scholars, music educators, music critics/music journalists and members of other musical professions. There are music movements, events and genres related to women, women's issues and feminism. In the 2010s, while women comprise a significant proportion of popular music and classical music singers, and a significant proportion of songwriters (many of them being singer-songwriters), there are few women record producers, rock critics and rock instrumentalists. Although there have been a huge number of women composers in classical music, from the Medieval period to the present day, women composers are significantly underrepresented in the commonly performed classical music repertoire, music history textbooks and music encyclopedias; for example, in the Concise Oxford History of Music, Clara Schumann is one of the few female composers mentioned. Women comprise a significant proportion of instrumental soloists in classical music, and the percentage of women in orchestras is increasing. A 2015 article on concerto soloists in major Canadian orchestras, however, indicated that 84% of the soloists with the Montreal Symphony Orchestra were men. In 2012, women still made up just 6% of the top-ranked Vienna Philharmonic orchestra. Women are less common as instrumental players in popular music genres such as rock and heavy metal, although there have been a number of notable female instrumentalists and all-female bands. Women are particularly underrepresented in extreme metal genres. Women are also underrepresented in orchestral conducting, music criticism/music journalism, music producing, and sound engineering. 
While women were discouraged from composing in the 19th century, and there are few women musicologists, women became involved in music education "... to such a degree that women dominated [this field] during the later half of the 19th century and well into the 20th century." According to Jessica Duchen, a music writer for London's The Independent, women musicians in classical music are "... too often judged for their appearances, rather than their talent" and they face pressure "... to look sexy onstage and in photos." Duchen states that while "[t]here are women musicians who refuse to play on their looks, ... the ones who do tend to be more materially successful." According to the UK's Radio 3 editor, Edwina Wolstencroft, the classical music industry has long been open to having women in performance or entertainment roles, but women are much less likely to have positions of authority, such as being the leader of an orchestra. In popular music, while there are many women singers recording songs, there are very few women behind the audio console acting as music producers, the individuals who direct and manage the recording process. Gender symbol The glyph (♀) for the planet and Roman goddess Venus, or Aphrodite in Greek, is the symbol used in biology for the female sex. In ancient alchemy, the Venus symbol stood for copper and was associated with femininity. See also Notes References Further reading Chafe, William H. , The American Woman: Her Changing Social, Economic, And Political Roles, 1920–1970, Oxford University Press, 1972. Routledge International Encyclopedia of Women, 4 vls., ed. by Cheris Kramarae and Dale Spender, Routledge 2000 Women in World History : a biographical encyclopedia, 17 vls., ed. by Anne Commire, Waterford, Conn. [etc.] : Yorkin Publ. [etc.], 1999–2002 Woman In all ages and in all countries in 10 volumes. Illustrated edition deluxe limited to 1,000 numbered copies with an index by Rénald Lévesque External links Female Gender identity
https://en.wikipedia.org/wiki/Tricholoma%20khakicolor
Tricholoma khakicolor is an agaric fungus of the genus Tricholoma. Found in Peninsular Malaysia, it was described as new to science in 1994 by English mycologist E.J.H. Corner. See also List of Tricholoma species References khakicolor Fungi described in 1994 Fungi of Asia Taxa named by E. J. H. Corner Fungus species
https://en.wikipedia.org/wiki/C17H18N2O2
The molecular formula C17H18N2O2 (molar mass: 282.337 g/mol) may refer to: Lysergic acid methyl ester Salpn ligand Molecular formulas
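The quoted molar mass can be checked by summing standard atomic weights for each element in the formula. The sketch below is an illustrative snippet (the function name and the rounded atomic weights are my own choices, not part of the article):

```python
import re

# IUPAC standard atomic weights (g/mol), rounded; an assumption for illustration
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula):
    """Sum atomic weights for a simple Hill-notation formula like C17H18N2O2."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element:
            total += ATOMIC_WEIGHTS[element] * (int(count) if count else 1)
    return total

print(round(molar_mass("C17H18N2O2"), 2))  # ~282.34 g/mol, consistent with the value above
```

Small differences against the quoted 282.337 g/mol come only from the rounding of the atomic weights used.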
https://en.wikipedia.org/wiki/Novobiocin
Novobiocin, also known as albamycin, is an aminocoumarin antibiotic that is produced by the actinomycete Streptomyces niveus, which has recently been identified as a subjective synonym for S. spheroides, a member of the class Actinomycetia. Other aminocoumarin antibiotics include clorobiocin and coumermycin A1. Novobiocin was first reported in the mid-1950s (then called streptonivicin). Clinical use It is active against Staphylococcus epidermidis and may be used to differentiate it from the other coagulase-negative Staphylococcus saprophyticus, which is resistant to novobiocin, in culture. Novobiocin was licensed for clinical use under the tradename Albamycin (Upjohn) in the 1960s. Its efficacy has been demonstrated in preclinical and clinical trials. The oral form of the drug has since been withdrawn from the market due to lack of efficacy. A combination product of novobiocin and tetracycline, sold by Upjohn under brand names such as Panalba and Albamycin-T, was in particular the subject of intense FDA scrutiny before it was finally taken off the market. Novobiocin is an effective antistaphylococcal agent used in the treatment of MRSA. Mechanism of action The molecular basis of action of novobiocin and the related drugs clorobiocin and coumermycin A1 has been examined. Aminocoumarins are very potent inhibitors of bacterial DNA gyrase and work by targeting the GyrB subunit of the enzyme, which is involved in energy transduction. Novobiocin, like the other aminocoumarin antibiotics, acts as a competitive inhibitor of the ATPase reaction catalysed by GyrB. The potency of novobiocin is considerably higher than that of the fluoroquinolones, which also target DNA gyrase, but at a different site on the enzyme. The GyrA subunit is involved in the DNA nicking and ligation activity. Novobiocin has been shown to weakly inhibit the C-terminus of the eukaryotic Hsp90 protein (high micromolar IC50). Modification of the novobiocin scaffold has led to more selective Hsp90 inhibitors. 
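Since the text above describes aminocoumarins as competitive inhibitors of the GyrB ATPase, their effect on reaction velocity can be illustrated with the standard Michaelis–Menten competitive-inhibition rate law. The parameter values below are arbitrary placeholders for illustration, not measured constants for novobiocin:

```python
def rate_competitive(s, vmax, km, i=0.0, ki=1.0):
    """Michaelis-Menten rate with a competitive inhibitor:
    v = Vmax * [S] / (Km * (1 + [I]/Ki) + [S]).
    A competitive inhibitor raises the apparent Km but leaves Vmax unchanged,
    so inhibition can be overcome at sufficiently high substrate concentration."""
    return vmax * s / (km * (1 + i / ki) + s)

# Arbitrary illustrative parameters (not novobiocin's actual kinetic constants)
v_uninhibited = rate_competitive(s=10.0, vmax=1.0, km=5.0)
v_inhibited = rate_competitive(s=10.0, vmax=1.0, km=5.0, i=2.0, ki=0.5)
# The inhibitor slows ATP turnover at a given substrate concentration
assert v_inhibited < v_uninhibited
```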
Novobiocin has also been shown to bind and activate the Gram-negative lipopolysaccharide transporter LptBFGC. The ATP-binding pocket of polymerase theta is blocked by novobiocin, resulting in a loss of ATPase activity. This results in the loss of microhomology-mediated end joining as a pathway for homologous-recombination-deficient cells to circumvent DNA damaging agents. The action of novobiocin is synergistic with PARP inhibitors for reducing tumor size in a mouse model. Structure Novobiocin is an aminocoumarin. Novobiocin may be divided into three entities: a benzoic acid derivative, a coumarin residue, and the sugar novobiose. X-ray crystallographic studies of the drug-receptor complex of novobiocin and DNA gyrase have found that ATP and novobiocin have overlapping binding sites on the gyrase molecule. The overlap of the coumarin and ATP-binding sites is consistent with aminocoumarins being competitive inhibitors of the ATPase activity. Structure–activity relationship In structure–activity relationship experiments it was found that removal of the carbamoyl group located on the novobiose sugar led to a dramatic decrease in the inhibitory activity of novobiocin. Biosynthesis This aminocoumarin antibiotic consists of three major substituents. The 3-dimethylallyl-4-hydroxybenzoic acid moiety, known as ring A, is derived from prephenate and dimethylallyl pyrophosphate. The aminocoumarin moiety, known as ring B, is derived from L-tyrosine. The final component of novobiocin is the sugar derivative L-noviose, known as ring C, which is derived from glucose-1-phosphate. The biosynthetic gene cluster for novobiocin was identified by Heide and coworkers in 1999 (published 2000) from Streptomyces spheroides NCIB 11891. They identified 23 putative open reading frames (ORFs) and more than 11 other ORFs that may play a role in novobiocin biosynthesis. The biosynthesis of ring A (see Fig. 
1) begins with prephenate, which is derived from the shikimic acid biosynthetic pathway. The enzyme NovF catalyzes the decarboxylation of prephenate while simultaneously reducing nicotinamide adenine dinucleotide phosphate (NADP+) to produce NADPH. Following this, NovQ catalyzes the electrophilic substitution of the phenyl ring with dimethylallyl pyrophosphate (DMAPP), otherwise known as prenylation. DMAPP can come from either the mevalonic acid pathway or the deoxyxylulose biosynthetic pathway. Next, the 3-dimethylallyl-4-hydroxybenzoate molecule is subjected to two oxidative decarboxylations by NovR and molecular oxygen. NovR is a non-heme iron oxygenase with a unique bifunctional catalysis. In the first stage both oxygens are incorporated from the molecular oxygen, while in the second step only one is incorporated, as determined by isotope labeling studies. This completes the formation of ring A. The biosynthesis of ring B (see Fig. 2) begins with the natural amino acid L-tyrosine. This is then adenylated and thioesterified onto the peptidyl carrier protein (PCP) of NovH by ATP and NovH itself. NovI then further modifies this PCP-bound molecule by oxidizing the β-position using NADPH and molecular oxygen. NovJ and NovK form a heterodimer of J2K2, which is the active form of this benzylic oxygenase. This process uses NADP+ as a hydride acceptor in the oxidation of the β-alcohol. This ketone will prefer to exist in its enol tautomer in solution. Next, a still unidentified protein catalyzes the selective oxidation of the benzene ring (as shown in Fig. 2). Upon oxidation this intermediate will spontaneously lactonize to form the aromatic ring B and lose NovH in the process. The biosynthesis of L-noviose (ring C) is shown in Fig. 3. This process starts from glucose-1-phosphate, where NovV takes dTTP and replaces the phosphate group with a dTDP group. NovT then oxidizes the 4-hydroxy group using NAD+. NovT also accomplishes a dehydroxylation of the 6 position of the sugar. 
NovW then epimerizes the 3 position of the sugar. The methylation of the 5 position is accomplished by NovU and S-adenosyl methionine (SAM). Finally NovS reduces the 4 position again to achieve epimerization of that position from the starting glucose-1-phosphate using NADH. Rings A, B, and C are coupled together and modified to give the finished novobiocin molecule. Rings A and B are coupled together by the enzyme NovL using ATP to diphosphorylate the carboxylate group of ring A so that the carbonyl can be attacked by the amine group on ring B. The resulting compound is methylated by NovO and SAM prior to glycosylation. NovM adds ring C (L-noviose) to the hydroxyl group derived from tyrosine with the loss of dTDP. Another methylation is accomplished by NovP and SAM at the 4 position of the L-noviose sugar. This methylation allows NovN to carbamylate the 3 position of the sugar as shown in Fig. 4 completing the biosynthesis of novobiocin. References External links Novobiocin bound to proteins in the PDB Antibiotics Coumarin drugs Benzamides Carbamates Topoisomerase inhibitors Benzopyrans 4-Hydroxyphenyl compounds
https://en.wikipedia.org/wiki/List%20of%20isomers%20of%20dodecane
This is the list of 355 isomers of dodecane. Straight-chain Dodecane Undecane 2-Methylundecane 3-Methylundecane 4-Methylundecane 5-Methylundecane 6-Methylundecane Decane Dimethyl 2,2-Dimethyldecane 2,3-Dimethyldecane 2,4-Dimethyldecane 2,5-Dimethyldecane 2,6-Dimethyldecane 2,7-Dimethyldecane 2,8-Dimethyldecane 2,9-Dimethyldecane 3,3-Dimethyldecane 3,4-Dimethyldecane 3,5-Dimethyldecane 3,6-Dimethyldecane 3,7-Dimethyldecane 3,8-Dimethyldecane 4,4-Dimethyldecane 4,5-Dimethyldecane 4,6-Dimethyldecane 4,7-Dimethyldecane 5,5-Dimethyldecane 5,6-Dimethyldecane Ethyl 3-Ethyldecane 4-Ethyldecane 5-Ethyldecane Nonane Trimethyl 2,2,3-Trimethylnonane 2,2,4-Trimethylnonane 2,2,5-Trimethylnonane 2,2,6-Trimethylnonane 2,2,7-Trimethylnonane 2,2,8-Trimethylnonane 2,3,3-Trimethylnonane 2,3,4-Trimethylnonane 2,3,5-Trimethylnonane 2,3,6-Trimethylnonane 2,3,7-Trimethylnonane 2,3,8-Trimethylnonane 2,4,4-Trimethylnonane 2,4,5-Trimethylnonane 2,4,6-Trimethylnonane 2,4,7-Trimethylnonane 2,4,8-Trimethylnonane 2,5,5-Trimethylnonane 2,5,6-Trimethylnonane 2,5,7-Trimethylnonane 2,5,8-Trimethylnonane 2,6,6-Trimethylnonane 2,6,7-Trimethylnonane 2,7,7-Trimethylnonane 3,3,4-Trimethylnonane 3,3,5-Trimethylnonane 3,3,6-Trimethylnonane 3,3,7-Trimethylnonane 3,4,4-Trimethylnonane 3,4,5-Trimethylnonane 3,4,6-Trimethylnonane 3,4,7-Trimethylnonane 3,5,5-Trimethylnonane 3,5,6-Trimethylnonane 3,5,7-Trimethylnonane 3,6,6-Trimethylnonane 4,4,5-Trimethylnonane 4,4,6-Trimethylnonane 4,5,5-Trimethylnonane 4,5,6-Trimethylnonane Ethyl+Methyl 3-Ethyl-2-methylnonane 3-Ethyl-3-methylnonane 3-Ethyl-4-methylnonane 3-Ethyl-5-methylnonane 3-Ethyl-6-methylnonane 3-Ethyl-7-methylnonane 4-Ethyl-2-methylnonane 4-Ethyl-3-methylnonane 4-Ethyl-4-methylnonane 4-Ethyl-5-methylnonane 4-Ethyl-6-methylnonane 5-Ethyl-2-methylnonane 5-Ethyl-3-methylnonane 5-Ethyl-4-methylnonane 5-Ethyl-5-methylnonane 6-Ethyl-2-methylnonane 6-Ethyl-3-methylnonane 7-Ethyl-2-methylnonane Propyl 4-Propylnonane 5-Propylnonane 4-(1-Methylethyl)nonane 
/(4-isopropylnonane) 5-(1-Methylethyl)nonane /(5-isopropylnonane) Octane Tetramethyl 2,2,3,3-Tetramethyloctane 2,2,3,4-Tetramethyloctane 2,2,3,5-Tetramethyloctane 2,2,3,6-Tetramethyloctane 2,2,3,7-Tetramethyloctane 2,2,4,4-Tetramethyloctane 2,2,4,5-Tetramethyloctane 2,2,4,6-Tetramethyloctane 2,2,4,7-Tetramethyloctane 2,2,5,5-Tetramethyloctane 2,2,5,6-Tetramethyloctane 2,2,5,7-Tetramethyloctane 2,2,6,6-Tetramethyloctane 2,2,6,7-Tetramethyloctane 2,2,7,7-Tetramethyloctane 2,3,3,4-Tetramethyloctane 2,3,3,5-Tetramethyloctane 2,3,3,6-Tetramethyloctane 2,3,3,7-Tetramethyloctane 2,3,4,4-Tetramethyloctane 2,3,4,5-Tetramethyloctane 2,3,4,6-Tetramethyloctane 2,3,4,7-Tetramethyloctane 2,3,5,5-Tetramethyloctane 2,3,5,6-Tetramethyloctane 2,3,5,7-Tetramethyloctane 2,3,6,6-Tetramethyloctane 2,3,6,7-Tetramethyloctane 2,4,4,5-Tetramethyloctane 2,4,4,6-Tetramethyloctane 2,4,4,7-Tetramethyloctane 2,4,5,5-Tetramethyloctane 2,4,5,6-Tetramethyloctane 2,4,5,7-Tetramethyloctane 2,4,6,6-Tetramethyloctane 2,5,5,6-Tetramethyloctane 2,5,6,6-Tetramethyloctane 3,3,4,4-Tetramethyloctane 3,3,4,5-Tetramethyloctane 3,3,4,6-Tetramethyloctane 3,3,5,5-Tetramethyloctane 3,3,5,6-Tetramethyloctane 3,3,6,6-Tetramethyloctane 3,4,4,5-Tetramethyloctane 3,4,4,6-Tetramethyloctane 3,4,5,5-Tetramethyloctane 3,4,5,6-Tetramethyloctane 4,4,5,5-Tetramethyloctane Ethyl+Dimethyl 3-Ethyl-2,2-dimethyloctane 3-Ethyl-2,3-dimethyloctane 3-Ethyl-2,4-dimethyloctane 3-Ethyl-2,5-dimethyloctane 3-Ethyl-2,6-dimethyloctane 3-Ethyl-2,7-dimethyloctane 3-Ethyl-3,4-dimethyloctane 3-Ethyl-3,5-dimethyloctane 3-Ethyl-3,6-dimethyloctane 3-Ethyl-4,4-dimethyloctane 3-Ethyl-4,5-dimethyloctane 3-Ethyl-4,6-dimethyloctane 3-Ethyl-5,5-dimethyloctane 4-Ethyl-2,2-dimethyloctane 4-Ethyl-2,3-dimethyloctane 4-Ethyl-2,4-dimethyloctane 4-Ethyl-2,5-dimethyloctane 4-Ethyl-2,6-dimethyloctane 4-Ethyl-2,7-dimethyloctane 4-Ethyl-3,3-dimethyloctane 4-Ethyl-3,4-dimethyloctane 4-Ethyl-3,5-dimethyloctane 4-Ethyl-3,6-dimethyloctane 4-Ethyl-4,5-dimethyloctane 
5-Ethyl-2,2-dimethyloctane 5-Ethyl-2,3-dimethyloctane 5-Ethyl-2,4-dimethyloctane 5-Ethyl-2,5-dimethyloctane 5-Ethyl-2,6-dimethyloctane 5-Ethyl-3,3-dimethyloctane 5-Ethyl-3,4-dimethyloctane 5-Ethyl-3,5-dimethyloctane 5-Ethyl-4,4-dimethyloctane 6-Ethyl-2,2-dimethyloctane 6-Ethyl-2,3-dimethyloctane 6-Ethyl-2,4-dimethyloctane 6-Ethyl-2,5-dimethyloctane 6-Ethyl-2,6-dimethyloctane 6-Ethyl-3,3-dimethyloctane 6-Ethyl-3,4-dimethyloctane Diethyl 3,3-Diethyloctane 3,4-Diethyloctane 3,5-Diethyloctane 3,6-Diethyloctane 4,4-Diethyloctane 4,5-Diethyloctane Methyl+Propyl 2-Methyl-4-propyloctane 3-Methyl-4-propyloctane 4-Methyl-4-propyloctane 4-Methyl-5-propyloctane 2-Methyl-5-propyloctane 3-Methyl-5-propyloctane 2-Methyl-3-(1-methylethyl)octane 2-Methyl-4-(1-methylethyl)octane 3-Methyl-4-(1-methylethyl)octane 4-Methyl-4-(1-methylethyl)octane 4-Methyl-5-(1-methylethyl)octane 2-Methyl-5-(1-methylethyl)octane 3-Methyl-5-(1-methylethyl)octane tert-Butyl 4-(1,1-Dimethylethyl)octane or 4-tert-Butyloctane Heptane Pentamethyl 2,2,3,3,4-Pentamethylheptane 2,2,3,3,5-Pentamethylheptane 2,2,3,3,6-Pentamethylheptane 2,2,3,4,4-Pentamethylheptane 2,2,3,4,5-Pentamethylheptane 2,2,3,4,6-Pentamethylheptane 2,2,3,5,5-Pentamethylheptane 2,2,3,5,6-Pentamethylheptane 2,2,3,6,6-Pentamethylheptane 2,2,4,4,5-Pentamethylheptane 2,2,4,4,6-Pentamethylheptane 2,2,4,5,5-Pentamethylheptane 2,2,4,5,6-Pentamethylheptane 2,2,4,6,6-Pentamethylheptane 2,2,5,5,6-Pentamethylheptane 2,3,3,4,4-Pentamethylheptane 2,3,3,4,5-Pentamethylheptane 2,3,3,4,6-Pentamethylheptane 2,3,3,5,5-Pentamethylheptane 2,3,3,5,6-Pentamethylheptane 2,3,4,4,5-Pentamethylheptane 2,3,4,4,6-Pentamethylheptane 2,3,4,5,5-Pentamethylheptane 2,3,4,5,6-Pentamethylheptane 2,4,4,5,5-Pentamethylheptane 3,3,4,4,5-Pentamethylheptane 3,3,4,5,5-Pentamethylheptane Ethyl+Trimethyl 3-Ethyl-2,2,3-trimethylheptane 3-Ethyl-2,2,4-trimethylheptane 3-Ethyl-2,2,5-trimethylheptane 3-Ethyl-2,2,6-trimethylheptane 3-Ethyl-2,3,4-trimethylheptane 
3-Ethyl-2,3,5-trimethylheptane 3-Ethyl-2,3,6-trimethylheptane 3-Ethyl-2,4,4-trimethylheptane 3-Ethyl-2,4,5-trimethylheptane 3-Ethyl-2,4,6-trimethylheptane 3-Ethyl-2,5,5-trimethylheptane 3-Ethyl-2,5,6-trimethylheptane 3-Ethyl-3,4,4-trimethylheptane 3-Ethyl-3,4,5-trimethylheptane 3-Ethyl-3,5,5-trimethylheptane 3-Ethyl-4,4,5-trimethylheptane 4-Ethyl-2,2,3-trimethylheptane 4-Ethyl-2,2,4-trimethylheptane 4-Ethyl-2,2,5-trimethylheptane 4-Ethyl-2,2,6-trimethylheptane 4-Ethyl-2,3,3-trimethylheptane 4-Ethyl-2,3,4-trimethylheptane 4-Ethyl-2,3,5-trimethylheptane 4-Ethyl-2,3,6-trimethylheptane 4-Ethyl-2,4,5-trimethylheptane 4-Ethyl-2,4,6-trimethylheptane 4-Ethyl-2,5,5-trimethylheptane 4-Ethyl-3,3,4-trimethylheptane 4-Ethyl-3,3,5-trimethylheptane 4-Ethyl-3,4,5-trimethylheptane 5-Ethyl-2,2,3-trimethylheptane 5-Ethyl-2,2,4-trimethylheptane 5-Ethyl-2,2,5-trimethylheptane 5-Ethyl-2,2,6-trimethylheptane 5-Ethyl-2,3,3-trimethylheptane 5-Ethyl-2,3,4-trimethylheptane 5-Ethyl-2,3,5-trimethylheptane 5-Ethyl-2,4,4-trimethylheptane 5-Ethyl-2,4,5-trimethylheptane 5-Ethyl-3,3,4-trimethylheptane Diethyl+Methyl 3,3-Diethyl-2-methylheptane 3,3-Diethyl-4-methylheptane 3,3-Diethyl-5-methylheptane 3,4-Diethyl-2-methylheptane 3,4-Diethyl-3-methylheptane 3,4-Diethyl-4-methylheptane 3,4-Diethyl-5-methylheptane 3,5-Diethyl-2-methylheptane 3,5-Diethyl-3-methylheptane 3,5-Diethyl-4-methylheptane 4,4-Diethyl-2-methylheptane 4,4-Diethyl-3-methylheptane 4,5-Diethyl-2-methylheptane 5,5-Diethyl-2-methylheptane Dimethyl+Propyl 2,2-Dimethyl-4-propylheptane 2,3-Dimethyl-4-propylheptane 2,4-Dimethyl-4-propylheptane 2,5-Dimethyl-4-propylheptane 2,6-Dimethyl-4-propylheptane 3,3-Dimethyl-4-propylheptane 3,4-Dimethyl-4-propylheptane 3,5-Dimethyl-4-propylheptane 2,2-Dimethyl-3-(1-methylethyl)heptane 2,3-Dimethyl-3-(1-methylethyl)heptane 2,4-Dimethyl-3-(1-methylethyl)heptane 2,5-Dimethyl-3-(1-methylethyl)heptane 2,6-Dimethyl-3-(1-methylethyl)heptane 2,2-Dimethyl-4-(1-methylethyl)heptane 
2,3-Dimethyl-4-(1-methylethyl)heptane 2,4-Dimethyl-4-(1-methylethyl)heptane 2,5-Dimethyl-4-(1-methylethyl)heptane 2,6-Dimethyl-4-(1-methylethyl)heptane 3,3-Dimethyl-4-(1-methylethyl)heptane 3,4-Dimethyl-4-(1-methylethyl)heptane 3,5-Dimethyl-4-(1-methylethyl)heptane Ethyl+Propyl 3-Ethyl-4-propylheptane 4-Ethyl-4-propylheptane 3-Ethyl-4-(1-methylethyl)heptane 4-Ethyl-4-(1-methylethyl)heptane Propyl+Methyl 4-(1,1-Dimethylethyl)-2-methylheptane 4-(1,1-Dimethylethyl)-3-methylheptane 4-(1,1-Dimethylethyl)-4-methylheptane Hexane Hexamethyl 2,2,3,3,4,4-Hexamethylhexane 2,2,3,3,4,5-Hexamethylhexane 2,2,3,3,5,5-Hexamethylhexane 2,2,3,4,4,5-Hexamethylhexane 2,2,3,4,5,5-Hexamethylhexane 2,3,3,4,4,5-Hexamethylhexane Ethyl+Tetramethyl 3-Ethyl-2,2,3,4-tetramethylhexane 3-Ethyl-2,2,3,5-tetramethylhexane 3-Ethyl-2,2,4,4-tetramethylhexane 3-Ethyl-2,2,4,5-tetramethylhexane 3-Ethyl-2,2,5,5-tetramethylhexane 3-Ethyl-2,3,4,4-tetramethylhexane 3-Ethyl-2,3,4,5-tetramethylhexane 4-Ethyl-2,2,3,3-tetramethylhexane 4-Ethyl-2,2,3,4-tetramethylhexane 4-Ethyl-2,2,3,5-tetramethylhexane 4-Ethyl-2,2,4,5-tetramethylhexane 4-Ethyl-2,3,3,4-tetramethylhexane 4-Ethyl-2,3,3,5-tetramethylhexane Diethyl+Dimethyl 3,3-Diethyl-2,2-dimethylhexane 3,3-Diethyl-2,4-dimethylhexane 3,3-Diethyl-2,5-dimethylhexane 3,3-Diethyl-4,4-dimethylhexane 3,4-Diethyl-2,2-dimethylhexane 3,4-Diethyl-2,3-dimethylhexane 3,4-Diethyl-2,4-dimethylhexane 3,4-Diethyl-2,5-dimethylhexane 3,4-Diethyl-3,4-dimethylhexane 4,4-Diethyl-2,2-dimethylhexane 4,4-Diethyl-2,3-dimethylhexane Triethyl 3,3,4-Triethylhexane Trimethyl+Propyl 2,2,3-Trimethyl-3-(1-methylethyl)hexane 2,2,4-Trimethyl-3-(1-methylethyl)hexane 2,2,5-Trimethyl-3-(1-methylethyl)hexane 2,3,4-Trimethyl-3-(1-methylethyl)hexane 2,3,5-Trimethyl-3-(1-methylethyl)hexane 2,4,4-Trimethyl-3-(1-methylethyl)hexane 2,3,5-Trimethyl-4-(1-methylethyl)hexane 2,2,5-Trimethyl-4-(1-methylethyl)hexane Ethyl+Methyl+Propyl 3-Ethyl-2-methyl-3-(1-methylethyl)hexane 4-Ethyl-2-methyl-3-(1-methylethyl)hexane 
tert-Butyl+Dimethyl 3-(1,1-Dimethylethyl)-2,2-dimethylhexane Pentane Ethyl+Pentamethyl 3-Ethyl-2,2,3,4,4-pentamethylpentane Diethyl+Trimethyl 3,3-Diethyl-2,2,4-trimethylpentane Tetramethyl+Propyl 2,2,3,4-Tetramethyl-3-(1-methylethyl)pentane 2,2,4,4-Tetramethyl-3-(1-methylethyl)pentane Ethyl+Dimethyl+Propyl 3-Ethyl-2,4-dimethyl-3-(1-methylethyl)pentane References Lists of isomers of alkanes Hydrocarbons
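The count of 355 structural isomers can be reproduced programmatically. The sketch below is an independent illustration (not part of the list) using the classic Henze–Blair centroid recursion: first count alkyl radicals (rooted trees with at most three branches at the root), then assemble four branches around a centroid carbon, or two equal radicals around a centroid bond.

```python
from collections import Counter
from functools import lru_cache
from math import comb

def multichoose(n, k):
    # number of multisets of size k drawn from n interchangeable types
    return comb(n + k - 1, k)

def branch_ways(sizes):
    # ways to choose an unordered collection of branches with this size multiset
    prod = 1
    for size, mult in Counter(sizes).items():
        prod *= multichoose(radicals(size), mult)
    return prod

@lru_cache(maxsize=None)
def radicals(n):
    # alkyl radicals CnH(2n+1): rooted trees whose root bears up to 3 branches
    if n == 0:
        return 1  # the empty branch (a hydrogen position)
    total = 0
    for i in range(n - 1, -1, -1):               # branch sizes i >= j >= k >= 0
        for j in range(min(i, n - 1 - i), -1, -1):
            k = n - 1 - i - j
            if 0 <= k <= j:
                total += branch_ways((i, j, k))
    return total

def alkanes(n):
    # constitutional isomers of CnH(2n+2), counted at the tree's centroid
    if n == 1:
        return 1
    cap = (n - 1) // 2 if n % 2 else n // 2 - 1  # max branch size at a centroid atom
    total = 0
    for a in range(min(cap, n - 1), -1, -1):     # four branches a >= b >= c >= d
        for b in range(min(a, n - 1 - a), -1, -1):
            for c in range(min(b, n - 1 - a - b), -1, -1):
                d = n - 1 - a - b - c
                if 0 <= d <= c:
                    total += branch_ways((a, b, c, d))
    if n % 2 == 0:
        # bicentroid: an edge joining two radicals of n/2 atoms each
        total += multichoose(radicals(n // 2), 2)
    return total

print([alkanes(n) for n in range(1, 13)])
# [1, 1, 1, 2, 3, 5, 9, 18, 35, 75, 159, 355] -- dodecane gives 355
```

This counts constitutional (structural) isomers only; stereoisomers are not distinguished, matching the scope of the list above.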
https://en.wikipedia.org/wiki/Common%20spatial%20pattern
Common spatial pattern (CSP) is a mathematical procedure used in signal processing for separating a multivariate signal into additive subcomponents which have maximum differences in variance between two windows.

Details Let $X_1$ of size $(n, t_1)$ and $X_2$ of size $(n, t_2)$ be two windows of a multivariate signal, where $n$ is the number of signals and $t_1$ and $t_2$ are the respective numbers of samples. The CSP algorithm determines the component $w$ such that the ratio of variance (or second-order moment) is maximized between the two windows:

$$w = \arg\max_{w} \frac{\lVert w X_1 \rVert^2}{\lVert w X_2 \rVert^2}.$$

The solution is given by computing the two covariance matrices

$$R_1 = \frac{X_1 X_1^{\mathsf T}}{t_1}, \qquad R_2 = \frac{X_2 X_2^{\mathsf T}}{t_2}.$$

Then, the simultaneous diagonalization of those two matrices (also called generalized eigenvalue decomposition) is realized. We find the matrix of eigenvectors $P = [p_1, \ldots, p_n]$ and the diagonal matrix $D$ of eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n$ sorted by decreasing order such that

$$P^{\mathsf T} R_1 P = D \quad \text{and} \quad P^{\mathsf T} R_2 P = I_n,$$

with $I_n$ the identity matrix. This is equivalent to the eigendecomposition of $R_2^{-1} R_1$:

$$R_2^{-1} R_1 P = P D.$$

$w$ will correspond to the first column of $P$.

Discussion Relation between variance ratio and eigenvalue: the eigenvectors composing $P$ are components with variance ratio between the two windows equal to their corresponding eigenvalue:

$$\lambda_i = \frac{p_i^{\mathsf T} R_1 p_i}{p_i^{\mathsf T} R_2 p_i}.$$

Other components: the vectorial subspace generated by the first $m$ eigenvectors is the subspace maximizing the variance ratio of all components belonging to it. In the same way, the vectorial subspace generated by the last $m$ eigenvectors is the subspace minimizing the variance ratio of all components belonging to it. Variance or second-order moment: CSP can be applied after a mean subtraction (a.k.a. "mean centering") on signals in order to realize a variance ratio optimization. Otherwise CSP optimizes the ratio of second-order moments. Choice of windows $X_1$ and $X_2$: the standard use consists of choosing the windows to correspond to two periods of time with different activation of sources (e.g. during rest and during a specific task). 
It is also possible to choose the two windows to correspond to two different frequency bands in order to find components with specific frequency patterns. Those frequency bands can be on a temporal or on a frequential basis. Since the matrix of eigenvectors depends only on the covariance matrices, the same results can be obtained if the processing is applied on the Fourier transform of the signals. Y. Wang has proposed a particular choice for the first window in order to extract components which have a specific period: $X_1$ was the mean of the different periods for the examined signals. If there is only one window, $R_2$ can be considered as the identity matrix and then CSP corresponds to principal component analysis. Relation between LDA and CSP Linear discriminant analysis (LDA) and CSP apply in different circumstances. LDA separates data that have different means, by finding a rotation that maximizes the (normalized) distance between the centers of the two sets of data. On the other hand, CSP ignores the means. Thus CSP is good, for example, at separating the signal from the noise in an event-related potential (ERP) experiment, because both distributions have zero mean and there is no distinction for LDA to separate. Thus CSP finds a projection that makes the variance of the components of the average ERP as large as possible, so the signal stands out above the noise. Applications The CSP method can be applied to multivariate signals in general, and is commonly applied to electroencephalographic (EEG) signals. In particular, the method is often used in brain–computer interfaces to retrieve the component signals which best transduce the cerebral activity for a specific task (e.g. hand movement). It can also be used to separate artifacts from EEG signals. CSP can be adapted for the analysis of event-related potentials. See also Blind signal separation References Signal processing
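The procedure described above reduces to a generalized symmetric eigenvalue problem, which a few lines of NumPy/SciPy can solve. This is a minimal sketch (the function and variable names are my own): compute the two second-order moment matrices and sort the generalized eigenvectors by decreasing variance ratio.

```python
import numpy as np
from scipy.linalg import eigh

def csp(X1, X2):
    """Common spatial pattern for two windows X1 (n, t1) and X2 (n, t2).
    Returns eigenvalues (variance ratios) and spatial filters (columns),
    sorted so the first filter maximizes var(window 1) / var(window 2)."""
    R1 = X1 @ X1.T / X1.shape[1]  # second-order moment; covariance if mean-centered
    R2 = X2 @ X2.T / X2.shape[1]
    eigvals, eigvecs = eigh(R1, R2)    # generalized problem: R1 w = lambda R2 w
    order = np.argsort(eigvals)[::-1]  # decreasing variance ratio
    return eigvals[order], eigvecs[:, order]

# Toy example: source 0 dominates window 1, source 1 dominates window 2
rng = np.random.default_rng(0)
X1 = np.diag([3.0, 1.0]) @ rng.standard_normal((2, 1000))
X2 = np.diag([1.0, 3.0]) @ rng.standard_normal((2, 1000))
ratios, filters = csp(X1, X2)
print(ratios[0] > 1.0 > ratios[-1])  # prints True: first filter favors window 1's strong source
```

Each returned eigenvalue equals the ratio of the filtered second-order moments between the two windows, matching the relation between variance ratio and eigenvalue stated above.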
https://en.wikipedia.org/wiki/Cold%20and%20heat%20adaptations%20in%20humans
Cold and heat adaptations in humans are a part of the broad adaptability of Homo sapiens. Adaptations in humans can be physiological, genetic, or cultural, which allow people to live in a wide variety of climates. There has been a great deal of research done on developmental adjustment, acclimatization, and cultural practices, but less research on genetic adaptations to colder and hotter temperatures. The human body always works to remain in homeostasis. One form of homeostasis is thermoregulation. Body temperature varies in every individual, but the average internal temperature is . Sufficient stress from extreme external temperature may cause injury or death if it exceeds the ability of the body to thermoregulate. Hypothermia can set in when the core temperature drops to . Hyperthermia can set in when the core body temperature rises above . Humans have adapted to living in climates where hypothermia and hyperthermia were common primarily through culture and technology, such as the use of clothing and shelter. Origin of cold and heat adaptations Modern humans emerged from Africa approximately 70,000 years ago during a period of unstable climate, leading to a variety of new traits among the population. When modern humans spread into Europe, they outcompeted Neanderthals. Researchers hypothesize that this suggests early modern humans were more evolutionarily fit to live in various climates. This is supported in the variability selection hypothesis proposed by Richard Potts, which says that human adaptability came from environmental change over the long term. Ecogeographic rules Bergmann's rule states that endothermic animal subspecies living in colder climates have larger bodies than those of the subspecies living in warmer climates. 
Individuals with larger bodies are better suited for colder climates because larger bodies produce more heat due to having more cells, and have a smaller surface area to volume ratio compared to smaller individuals, which reduces the proportional heat loss. A study by Frederick Foster and Mark Collard found that Bergmann's rule can be applied to humans when the latitude and temperature between groups differ widely. Allen's rule is a biological rule that says the limbs of endotherms are shorter in cold climates and longer in hot climates. Limb length affects the body's surface area, which helps with thermoregulation. Shorter limbs help to conserve heat, while longer limbs help to dissipate heat. Marshall T. Newman argues that this can be observed in Eskimo, who have shorter limbs than other people and are laterally built. Paleoanthropologist John F. Hoffecker found that both Bergmann's and Allen's biogeographical rules were confirmed, with a clear trend in modern populations toward shorter distal limb segments in colder environments. Physiological adaptations Origins of heat and cold adaptations can be explained by climatic adaptation. Ambient air temperature affects how much energy investment the human body must make. The temperature that requires the least amount of energy investment is . The body controls its temperature through the hypothalamus. Thermoreceptors in the skin send signals to the hypothalamus, which indicate when vasodilation and vasoconstriction should occur. Cold The human body has two methods of thermogenesis, which produces heat to raise the core body temperature. The first is shivering, which occurs in an unclothed person when the ambient air temperature is under . It is limited by the amount of glycogen available in the body. The second is non-shivering, which occurs in brown adipose tissue. 
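The surface-area-to-volume reasoning behind Bergmann's and Allen's rules above can be made concrete with an idealized spherical body (purely illustrative; real bodies are not spheres):

```python
import math

def surface_to_volume(radius):
    """SA:V of a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r."""
    return (4 * math.pi * radius ** 2) / ((4 / 3) * math.pi * radius ** 3)

# Doubling the linear size halves the relative surface through which heat is lost,
# which is why larger bodies retain heat better in cold climates.
print(surface_to_volume(1.0), surface_to_volume(2.0))  # ~3.0 and ~1.5
```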
Population studies have shown that the San tribe of Southern Africa and the Sandawe of Eastern Africa have reduced shivering thermogenesis in the cold, and poor cold-induced vasodilation in fingers and toes compared to that of Caucasians. Heat The only mechanism the human body has to cool itself is by sweat evaporation. Sweating occurs when the ambient air temperature is above and the body fails to return to the normal internal temperature. The evaporation of the sweat helps cool the blood beneath the skin. It is limited by the amount of water available in the body, which can cause dehydration. Humans adapted to heat early on. In Africa, the climate selected for traits that helped them stay cool. Also, humans had physiological mechanisms that reduced the rate of metabolism and that modified the sensitivity of sweat glands to provide an adequate amount for cooldown without the individual becoming dehydrated. There are two types of heat the body is adapted to, humid heat and dry heat, but the body adapts to both in similar ways. Humid heat is characterized by warmer temperatures with a high amount of water vapor in the air, while dry heat is characterized by warmer temperatures with little to no vapor, such as desert conditions. With humid heat, the moisture in the air can prevent the evaporation of sweat. Regardless of acclimatization, humid heat poses a far greater threat than dry heat; humans cannot carry out physical outdoor activities at any temperature above when the ambient humidity is greater than 95%. When combined with this high humidity, the theoretical limit to human survival in the shade, even with unlimited water, is – theoretically equivalent to a heat index of . Dry heat, on the other hand, can cause dehydration, as sweat will tend to evaporate extremely quickly. Individuals with less fat and slightly lower body temperatures can more easily handle both humid and dry heat. 
Acclimatization When humans are exposed to certain climates for extended periods of time, physiological changes occur to help the individual adapt to hot or cold climates. This helps the body conserve energy. Cold The Inuit have more blood flowing into their extremities, and at a hotter temperature, than people living in warmer climates. A 1960 study on the Alacaluf Indians shows that they have a resting metabolic rate 150 to 200 percent higher than the white controls used. The Sami do not have an increase in metabolic rate when sleeping, unlike non-acclimated people. Aboriginal Australians undergo a similar process, where the body cools but the metabolic rate does not increase. Heat Humans and their evolutionary predecessors in Central Africa have been living in similar tropical climates for millions of years, which means that they have similar thermoregulatory systems. A study done on the Bantus of South Africa showed that Bantus have a lower sweat rate than that of acclimated and nonacclimated white people. A similar study done on Aboriginal Australians produced similar results, with Indigenous Australians having a much lower sweat rate than Caucasians. Culture Social adaptations enabled early modern humans to occupy environments with temperatures that were drastically different from that of Africa. (Potts 1998). Culture enabled humans to expand their range to areas that would otherwise be uninhabitable. Cold Humans have been able to occupy areas of extreme cold through clothing, buildings, and manipulation of fire. Furnaces have further enabled the occupation of cold environments. Historically many Indigenous Australians wore only genital coverings. Studies have shown that the warmth from the fires they build is enough to keep the body from fighting heat loss through shivering. Inuit use well-insulated houses that are designed to transfer heat from an energy source to the living area, which means that the average indoor temperature for coastal Inuit is . 
Heat Humans inhabit hot climates, both dry and humid, and have done so for millions of years. Selective use of clothing and technological inventions such as air conditioning allows humans to live in hot climates. One example is the Chaamba, who live in the Sahara Desert. They wear clothing that traps air in between skin and the clothes, preventing the high ambient air temperature from reaching the skin. See also Apparent temperature Recent human evolution Thermal comfort References Human ecology Evolutionary psychology Environmental studies Human geography Temperature
Cold and heat adaptations in humans
https://en.wikipedia.org/wiki/Service%20catalog
A service catalog (or catalogue) is an organized and curated collection of business and information technology services within an enterprise. Service catalogs are knowledge management tools which designate subject matter experts (SMEs) who answer questions and requests related to the listed service. Services in the catalog are usually very repeatable and have controlled inputs, outputs, and procedures. Service catalogs allow leadership to break the enterprise into highly structured and more efficient operational units, also known as "a service-oriented enterprise." Service centralization A service catalog is a means of centralizing all services that are important to the stakeholders of the enterprises which implement and use it. Given its digital and virtual implementation, via software, the service catalog acts, at a minimum, as a digital registry and a means for highly distributed enterprises to see, find, invoke, and execute services regardless of where they exist in the world. This means that people in one part of the world can find and utilize the same services that people in other parts of the world use, eliminating the need to develop and support local services via a federated implementation model. Centralizing services also acts as a means of identifying service gaps and redundancies that can then be addressed by the enterprise to improve itself. Service catalog composition Service catalogs are implemented in a manner that facilitates the registration, discovery, request, execution, and tracking of desired services for catalog users. Each service within the catalog typically includes traits and elements such as: Clear ownership of and accountability for the service (a person and often an organization). A name or identification label for the service. A description of the service. A service categorization or type that allows it to be grouped with other similar services. Related service request types. Any supporting or underpinning services. 
Who is entitled to request/view the service. Associated costs (if any). How to request the service and how its delivery is fulfilled. Escalation points and key contacts. The more descriptive the service details are, the easier it is for end users of the service catalog to find and invoke the services they desire. Catalog categories A service catalog is commonly structured in a manner where its registered services are categorized. Most service categories are derived from the areas of an enterprise and the functions it performs, such as Information Technology, Operations, and Fulfillment. Examples of common service categories include Marketing Services, Product Development Services, Fulfillment Services, and Support Services, which are consumed and performed by most businesses. The purpose of categorization of services is to facilitate service curation, much as books may be curated in a library. Resource management The utilization of service catalogs allows enterprises to allocate and track resources, both human and systemic, which are required for successful service delivery, operations, and support. This allows enterprises to understand where resources are allocated, whether there are too many or too few resources allocated, and whether or not the resources allocated are adequate for purpose. It also allows an understanding of what resources are shared between multiple services versus those that are fully dedicated to a single service. Metrics driven transparency Benefits of implementing and maintaining a service catalog include allowing an enterprise to track and manage metrics that represent the utilization of services and service-related traits, such as those associated with service supply and demand. For example, enterprises can track and measure: Services that are most and least used (i.e. enterprise service demand) Services that are successfully delivering versus those that struggle to deliver (i.e. 
enterprise service supply) How many service requests are being invoked for each service (i.e. service-specific demand) How many service deliverables are making it to their targeted service requestors (i.e. service-specific supply) Who invokes what services most or least How much time it takes to approve service requests How much time it takes to deliver service outputs, once requests are approved Service finances, such as how much is spent on each service by those who invoke them and those who provide them In addition to the above, a service catalog also helps leadership and management better see and understand correlations of service-related work, assets, and resources to the people, organizations, and projects that request them. IT service catalog An IT service catalog is a subset of an enterprise service catalog and is defined by ITIL (in the book Service Design) as an exhaustive list of IT-only services that an organization provides or offers to its employees or customers. The catalog is the only part of the Service Portfolio that is published to customers and is used to support the sale and/or delivery of IT services. A user's perspective A user goes to a website to search for a specific service, such as requesting a new laptop, requesting a change in benefits, or adding a new employee to a department. The service catalog site groups services by category and allows for searching (especially when hundreds or thousands of services are available). The user selects a desired service and sees the description and details. The user enters any pertinent information (contact information, service-specific questions) and submits the request for service. The request requires approval, and goes through routing, service-level management, and other processes necessary to fulfill the request. The user may return to the site later to check on the status of a request, or to view overall metrics on how well the organization is performing the services it provides. 
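The entry traits listed under "Service catalog composition" can be captured in a simple data model. This is a hypothetical sketch; every field name below is my illustrative choice, not taken from ITIL or any specific product:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceCatalogEntry:
    """One registered service; fields mirror the traits listed above."""
    name: str                      # identification label for the service
    owner: str                     # accountable person and/or organization
    description: str
    category: str                  # groups the service with similar ones
    request_types: list = field(default_factory=list)
    underpinning_services: list = field(default_factory=list)
    entitled_requestors: list = field(default_factory=list)  # who may request/view
    cost: float = 0.0              # associated costs, if any
    fulfillment: str = ""          # how to request / how delivery is fulfilled
    escalation_contact: str = ""   # escalation points and key contacts

# Example entry matching the "request a new laptop" scenario above.
laptop_request = ServiceCatalogEntry(
    name="New laptop",
    owner="IT Service Desk",
    description="Provision a standard laptop for an employee.",
    category="Information Technology",
    entitled_requestors=["employee", "manager"],
)
```

A real catalog would add request routing, approvals, and status tracking on top of such records; the point here is only that each service is a structured, searchable entry.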
A business unit manager's perspective Business Unit Managers determine what services to "publish" to end-users via the service catalog. Business Unit Managers and Analysts would determine what questions are to be asked of the user, any approvals necessary for a request, and what other systems or processes are needed to fulfill the request. Once the service is defined and the fulfillment process organized, these people or a more technical employee would build the requisite functionality into the service definition and then publish this to the service catalog. Service catalogs for cloud computing services The use of a service catalog for cloud computing services is an integral part of deploying services on private and public clouds. Users wishing to consume cloud services would use a cloud service catalog to view what cloud services are available and their function, and to know the technologies used to provide the services. Users would also see the different service-level options available, based on latency and reliability. With this knowledge, users are able to change the configuration of the technologies used to deliver the services based on cost, performance, and technology improvements. By seeing and understanding the different services available through the cloud, users can better appreciate what is available to them, compared to traditional IT, whereby one group of users or business unit may be unaware of the technologies available to another unit. Accessed by self-service portals, service catalogs contain a list of cloud services from which cloud consumers select for self-provisioning of cloud services. This removes the need for users to work through various IT departments in order to provision a cloud service, and users are not required to provide detailed IT specifications; they are only required to provide business and organization requirements. To make selection easier and to speed service deployment, service definitions are often standardized in cloud service catalogs. 
This presents three benefits: improved capacity planning, particularly if standard components are used; quicker service provisioning; and better buying forecasts, which help to lower costs. Automation is a frequently noted aspect of cloud service catalogs. Cloud service catalogs have been described as putting the "cloud on auto-pilot", enabling cloud users to build cloud services based on pre-built templates selected from catalogs. See also IT Governance ITIL IT Service Management References Information technology management Standards
Service catalog
https://en.wikipedia.org/wiki/Axiom%20of%20choice
In mathematics, the axiom of choice, abbreviated AC or AoC, is an axiom of set theory equivalent to the statement that a Cartesian product of a collection of non-empty sets is non-empty. Informally put, the axiom of choice says that given any collection of sets, each containing at least one element, it is possible to construct a new set by choosing one element from each set, even if the collection is infinite. Formally, it states that for every indexed family (S_i)_{i ∈ I} of nonempty sets, there exists an indexed family (x_i)_{i ∈ I} of elements such that x_i ∈ S_i for every i ∈ I. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem. The axiom of choice is equivalent to the statement that every partition has a transversal. In many cases, a set created by choosing elements can be made without invoking the axiom of choice, particularly if the number of sets from which to choose the elements is finite, or if a canonical rule on how to choose the elements is available — some distinguishing property that happens to hold for exactly one element in each set. An illustrative example is sets picked from the natural numbers. From such sets, one may always select the smallest number, e.g. given the sets {{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}}, the set containing each smallest element is {4, 10, 1}. In this case, "select the smallest number" is a choice function. Even if infinitely many sets are collected from the natural numbers, it will always be possible to choose the smallest element from each set to produce a set. That is, the choice function provides the set of chosen elements. But no definite choice function is known for the collection of all non-empty subsets of the real numbers. In that case, the axiom of choice must be invoked. Bertrand Russell coined an analogy: for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate collection (i.e. 
set) of shoes; this makes it possible to define a choice function directly. For an infinite collection of pairs of socks (assumed to have no distinguishing features such as being a left sock rather than a right sock), there is no obvious way to make a function that forms a set out of selecting one sock from each pair without invoking the axiom of choice. Although originally controversial, the axiom of choice is now used without reservation by most mathematicians, and is included in the standard form of axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). One motivation for this is that a number of generally accepted mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice, such as the axiom of determinacy. The axiom of choice is avoided in some varieties of constructive mathematics, although there are varieties of constructive mathematics in which the axiom of choice is embraced. Statement A choice function (also called selector or selection) is a function f, defined on a collection X of nonempty sets, such that for every set A in X, f(A) is an element of A. With this concept, the axiom can be stated: For any collection X of nonempty sets, there exists a choice function f defined on X. Formally, this may be expressed as follows: ∀X [∅ ∉ X → ∃f : X → ⋃X such that ∀A ∈ X (f(A) ∈ A)]. Thus, the negation of the axiom may be expressed as the existence of a collection of nonempty sets which has no choice function. Formally, this may be derived making use of the logical equivalence of ¬∀X [P(X) → Q(X)] to ∃X [P(X) ∧ ¬Q(X)]. Each choice function on a collection X of nonempty sets is an element of the Cartesian product of the sets in X. 
This is not the most general situation of a Cartesian product of a family of sets, where a given set can occur more than once as a factor; however, one can focus on elements of such a product that select the same element every time a given set appears as factor, and such elements correspond to an element of the Cartesian product of all distinct sets in the family. The axiom of choice asserts the existence of such elements; it is therefore equivalent to: Given any family of nonempty sets, their Cartesian product is a nonempty set. Nomenclature In this article and other discussions of the Axiom of Choice the following abbreviations are common: AC – the Axiom of Choice. More rarely, AoC is used. ZF – Zermelo–Fraenkel set theory omitting the Axiom of Choice. ZFC – Zermelo–Fraenkel set theory, extended to include the Axiom of Choice. Variants There are many other equivalent statements of the axiom of choice. These are equivalent in the sense that, in the presence of other basic axioms of set theory, they imply the axiom of choice and are implied by it. One variation avoids the use of choice functions by, in effect, replacing each choice function with its range: Given any set X, if the empty set is not an element of X and the elements of X are pairwise disjoint, then there exists a set C such that its intersection with any of the elements of X contains exactly one element. This can be formalized in first-order logic as: ∀x ( ∃o (o ∈ x ∧ ¬∃n (n ∈ o)) ∨ ∃a ∃b ∃c (a ∈ x ∧ b ∈ x ∧ c ∈ a ∧ c ∈ b ∧ ¬(a = b)) ∨ ∃c ∀e (e ∈ x → ∃a (a ∈ e ∧ a ∈ c ∧ ∀b ((b ∈ e ∧ b ∈ c) → a = b)))) Note that P ∨ Q ∨ R is logically equivalent to (¬P ∧ ¬Q) → R. In English, this first-order sentence reads: Given any set X, X contains the empty set as an element or the elements of X are not pairwise disjoint or there exists a set C such that its intersection with any of the elements of X contains exactly one element. 
This guarantees for any partition of a set X the existence of a subset C of X containing exactly one element from each part of the partition. Another equivalent axiom only considers collections X that are essentially powersets of other sets: For any set A, the power set of A (with the empty set removed) has a choice function. Authors who use this formulation often speak of the choice function on A, but this is a slightly different notion of choice function. Its domain is the power set of A (with the empty set removed), and so makes sense for any set A, whereas with the definition used elsewhere in this article, the domain of a choice function on a collection of sets is that collection, and so only makes sense for sets of sets. With this alternate notion of choice function, the axiom of choice can be compactly stated as Every set has a choice function. which is equivalent to For any set A there is a function f such that for any non-empty subset B of A, f(B) lies in B. The negation of the axiom can thus be expressed as: There is a set A such that for all functions f (on the set of non-empty subsets of A), there is a B such that f(B) does not lie in B. Restriction to finite sets The usual statement of the axiom of choice does not specify whether the collection of nonempty sets is finite or infinite, and thus implies that every finite collection of nonempty sets has a choice function. However, that particular case is a theorem of the Zermelo–Fraenkel set theory without the axiom of choice (ZF); it is easily proved by the principle of finite induction. In the even simpler case of a collection of one set, a choice function just corresponds to an element, so this instance of the axiom of choice says that every nonempty set has an element; this holds trivially. The axiom of choice can be seen as asserting the generalization of this property, already evident for finite collections, to arbitrary collections. 
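The finite case noted above can be made explicit. The following induction sketch is a standard argument in ZF, supplied here for illustration rather than quoted from the article:

```latex
\paragraph{Finite choice in ZF (induction on $n$).}
For $n = 0$ the empty function is a choice function for the empty family.
Assume every family of $n$ nonempty sets has a choice function, and let
$S_1, \dots, S_{n+1}$ be nonempty. The induction hypothesis gives $f$ with
$f(S_i) \in S_i$ for all $i \le n$; since $S_{n+1} \neq \emptyset$,
existential instantiation yields some $x \in S_{n+1}$, and
\[
  f' \,=\, f \cup \{ \langle S_{n+1},\, x \rangle \}
\]
is a choice function for $S_1, \dots, S_{n+1}$. Only finitely many
instantiations occur, so no choice axiom is needed.
```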
Usage Until the late 19th century, the axiom of choice was often used implicitly, although it had not yet been formally stated. For example, after having established that the set X contains only non-empty sets, a mathematician might have said "let F(s) be one of the members of s for all s in X" to define a function F. In general, it is impossible to prove that F exists without the axiom of choice, but this seems to have gone unnoticed until Zermelo. Examples The nature of the individual nonempty sets in the collection may make it possible to avoid the axiom of choice even for certain infinite collections. For example, suppose that each member of the collection X is a nonempty subset of the natural numbers. Every such subset has a smallest element, so to specify our choice function we can simply say that it maps each set to the least element of that set. This gives us a definite choice of an element from each set, and makes it unnecessary to add the axiom of choice to our axioms of set theory. The difficulty appears when there is no natural choice of elements from each set. If we cannot make explicit choices, how do we know that our selection forms a legitimate set (as defined by the other ZF axioms of set theory)? For example, suppose that X is the set of all non-empty subsets of the real numbers. First we might try to proceed as if X were finite. If we try to choose an element from each set, then, because X is infinite, our choice procedure will never come to an end, and consequently we shall never be able to produce a choice function for all of X. Next we might try specifying the least element from each set. But some subsets of the real numbers do not have least elements. For example, the open interval (0,1) does not have a least element: if x is in (0,1), then so is x/2, and x/2 is always strictly smaller than x. So this attempt also fails. 
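The least-element choice function from the natural-numbers example can be written out directly. This sketch is mine, not the article's; it simply uses Python's built-in min as the canonical rule:

```python
def choice(s):
    """A definite choice function on nonempty sets of natural numbers:
    pick the least element. No appeal to the axiom of choice is needed,
    because this rule singles out exactly one element of each set."""
    return min(s)

# The collection from the example given earlier in the article.
collection = [{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}]
chosen = [choice(s) for s in collection]
# chosen == [4, 10, 1]
```

The attempt described above for subsets of the reals fails precisely because no analogue of min exists there: the open interval (0,1) has no least element, so this rule cannot be transplanted.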
Additionally, consider for instance the unit circle S, and the action on S by a group G consisting of all rational rotations, that is, rotations by angles which are rational multiples of π. Here G is countable while S is uncountable. Hence S breaks up into uncountably many orbits under G. Using the axiom of choice, we could pick a single point from each orbit, obtaining an uncountable subset X of S with the property that all of its translates by G are disjoint from X. The set of those translates partitions the circle into a countable collection of pairwise disjoint sets, which are all pairwise congruent. Since X is not measurable for any rotation-invariant countably additive finite measure on S, finding an algorithm to form a set from selecting a point in each orbit requires that one add the axiom of choice to our axioms of set theory. See non-measurable set for more details. In classical arithmetic, the natural numbers are well-ordered: for every nonempty subset of the natural numbers, there is a unique least element under the natural ordering. In this way, one may specify a set from any given subset. One might say, "Even though the usual ordering of the real numbers does not work, it may be possible to find a different ordering of the real numbers which is a well-ordering. Then our choice function can choose the least element of every set under our unusual ordering." The problem then becomes that of constructing a well-ordering, which turns out to require the axiom of choice for its existence; every set can be well-ordered if and only if the axiom of choice holds. Criticism and acceptance A proof requiring the axiom of choice may establish the existence of an object without explicitly defining the object in the language of set theory. For example, while the axiom of choice implies that there is a well-ordering of the real numbers, there are models of set theory with the axiom of choice in which no individual well-ordering of the reals is definable. 
Similarly, although a subset of the real numbers that is not Lebesgue measurable can be proved to exist using the axiom of choice, it is consistent that no such set is definable. The axiom of choice asserts the existence of these intangibles (objects that are proved to exist, but which cannot be explicitly constructed), which may conflict with some philosophical principles. Because there is no canonical well-ordering of all sets, a construction that relies on a well-ordering may not produce a canonical result, even if a canonical result is desired (as is often the case in category theory). This has been used as an argument against the use of the axiom of choice. Another argument against the axiom of choice is that it implies the existence of objects that may seem counterintuitive. One example is the Banach–Tarski paradox, which says that it is possible to decompose the 3-dimensional solid unit ball into finitely many pieces and, using only rotations and translations, reassemble the pieces into two solid balls each with the same volume as the original. The pieces in this decomposition, constructed using the axiom of choice, are non-measurable sets. Moreover, paradoxical consequences of the axiom of choice for the no-signaling principle in physics have recently been pointed out. Despite these seemingly paradoxical results, most mathematicians accept the axiom of choice as a valid principle for proving new results in mathematics. But the debate is interesting enough that it is considered notable when a theorem in ZFC (ZF plus AC) is logically equivalent (with just the ZF axioms) to the axiom of choice, and mathematicians look for results that require the axiom of choice to be false, though this type of deduction is less common than the type that requires the axiom of choice to be true. Theorems of ZF hold true in any model of that theory, regardless of the truth or falsity of the axiom of choice in that particular model. 
The implications of choice below, including weaker versions of the axiom itself, are listed because they are not theorems of ZF. The Banach–Tarski paradox, for example, is neither provable nor disprovable from ZF alone: it is impossible to construct the required decomposition of the unit ball in ZF, but also impossible to prove there is no such decomposition. Such statements can be rephrased as conditional statements—for example, "If AC holds, then the decomposition in the Banach–Tarski paradox exists." Such conditional statements are provable in ZF when the original statements are provable from ZF and the axiom of choice. In constructive mathematics As discussed above, in the classical theory of ZFC, the axiom of choice enables nonconstructive proofs in which the existence of a type of object is proved without an explicit instance being constructed. In fact, in set theory and topos theory, Diaconescu's theorem shows that the axiom of choice implies the law of excluded middle. The principle is thus not available in constructive set theory, where non-classical logic is employed. The situation is different when the principle is formulated in Martin-Löf type theory. There, and in higher-order Heyting arithmetic, the appropriate statement of the axiom of choice is (depending on the approach) included as an axiom or provable as a theorem. One cause of this difference is that the axiom of choice in type theory does not have the extensionality properties that the axiom of choice in constructive set theory does. The type-theoretical context is discussed further below. Different choice principles have been thoroughly studied in constructive contexts, and the principles' status varies between different schools and varieties of constructive mathematics. Some results in constructive set theory use the axiom of countable choice or the axiom of dependent choice, which do not imply the law of the excluded middle. 
Errett Bishop, who is notable for developing a framework for constructive analysis, argued that an axiom of choice was constructively acceptable. Although the axiom of countable choice in particular is commonly used in constructive mathematics, its use has also been questioned. Independence It has been known since as early as 1922 that the axiom of choice may fail in a variant of ZF with urelements, through the technique of permutation models introduced by Abraham Fraenkel and developed further by Andrzej Mostowski. The basic technique can be illustrated as follows: Let x_n and y_n be distinct urelements for n = 1, 2, 3, ..., and build a model where each set is symmetric under the interchange x_n ↔ y_n for all but a finite number of n. Then the set X = {{x_1, y_1}, {x_2, y_2}, {x_3, y_3}, ...} can be in the model but sets such as {x_1, x_2, x_3, ...} cannot, and thus X cannot have a choice function. In 1938, Kurt Gödel showed that the negation of the axiom of choice is not a theorem of ZF by constructing an inner model (the constructible universe) that satisfies ZFC, thus showing that ZFC is consistent if ZF itself is consistent. In 1963, Paul Cohen employed the technique of forcing, developed for this purpose, to show that, assuming ZF is consistent, the axiom of choice itself is not a theorem of ZF. He did this by constructing a much more complex model that satisfies ZF¬C (ZF with the negation of AC added as axiom) and thus showing that ZF¬C is consistent. Cohen's model is a symmetric model, which is similar to permutation models, but uses "generic" subsets of the natural numbers (justified by forcing) in place of urelements. Together these results establish that the axiom of choice is logically independent of ZF. The assumption that ZF is consistent is harmless because adding another axiom to an already inconsistent system cannot make the situation worse. Because of independence, the decision whether to use the axiom of choice (or its negation) in a proof cannot be made by appeal to other axioms of set theory. 
It must be made on other grounds. One argument in favor of using the axiom of choice is that it is convenient because it allows one to prove some simplifying propositions that otherwise could not be proved. Many theorems provable using choice are of an elegant general character: the cardinalities of any two sets are comparable, every nontrivial ring with unity has a maximal ideal, every vector space has a basis, every connected graph has a spanning tree, and every product of compact spaces is compact, among many others. Frequently, the axiom of choice allows generalizing a theorem to "larger" objects. For example, it is provable without the axiom of choice that every vector space of finite dimension has a basis, but the generalization to all vector spaces requires the axiom of choice. Likewise, a finite product of compact spaces can be proven to be compact without the axiom of choice, but the generalization to infinite products (Tychonoff's theorem) requires the axiom of choice. The proof of the independence result also shows that a wide class of mathematical statements, including all statements that can be phrased in the language of Peano arithmetic, are provable in ZF if and only if they are provable in ZFC. Statements in this class include the statement that P = NP, the Riemann hypothesis, and many other unsolved mathematical problems. When attempting to solve problems in this class, it makes no difference whether ZF or ZFC is employed if the only question is the existence of a proof. It is possible, however, that there is a shorter proof of a theorem from ZFC than from ZF. The axiom of choice is not the only significant statement that is independent of ZF. For example, the generalized continuum hypothesis (GCH) is not only independent of ZF, but also independent of ZFC. However, ZF plus GCH implies AC, making GCH a strictly stronger claim than AC, even though they are both independent of ZF. 
Stronger axioms The axiom of constructibility and the generalized continuum hypothesis each imply the axiom of choice and so are strictly stronger than it. In class theories such as Von Neumann–Bernays–Gödel set theory and Morse–Kelley set theory, there is an axiom called the axiom of global choice that is stronger than the axiom of choice for sets because it also applies to proper classes. The axiom of global choice follows from the axiom of limitation of size. Tarski's axiom, which is used in Tarski–Grothendieck set theory and states (in the vernacular) that every set belongs to some Grothendieck universe, is stronger than the axiom of choice. Equivalents There are important statements that, assuming the axioms of ZF but neither AC nor ¬AC, are equivalent to the axiom of choice. The most important among them are Zorn's lemma and the well-ordering theorem. In fact, Zermelo initially introduced the axiom of choice in order to formalize his proof of the well-ordering theorem. Set theory Tarski's theorem about choice: For every infinite set A, there is a bijective map between the sets A and A×A. Trichotomy: If two sets are given, then either they have the same cardinality, or one has a smaller cardinality than the other. Given two non-empty sets, one has a surjection to the other. Every surjective function has a right inverse. The Cartesian product of any family of nonempty sets is nonempty. In other words, every family of nonempty sets has a choice function (i.e. a function which maps each of the nonempty sets to one of its elements). König's theorem: Colloquially, the sum of a sequence of cardinals is strictly less than the product of a sequence of larger cardinals. (The reason for the term "colloquially" is that the sum or product of a "sequence" of cardinals cannot itself be defined without some aspect of the axiom of choice.) Well-ordering theorem: Every set can be well-ordered. Consequently, every cardinal has an initial ordinal. 
Zorn's lemma: Every non-empty partially ordered set in which every chain (i.e., totally ordered subset) has an upper bound contains at least one maximal element.
Hausdorff maximal principle: Every partially ordered set has a maximal chain. Equivalently, in any partially ordered set, every chain can be extended to a maximal chain.
Tukey's lemma: Every non-empty collection of finite character has a maximal element with respect to inclusion.
Antichain principle: Every partially ordered set has a maximal antichain. Equivalently, in any partially ordered set, every antichain can be extended to a maximal antichain.
The powerset of any ordinal can be well-ordered.

Abstract algebra

Every vector space has a basis (i.e., a linearly independent spanning subset). In other words, vector spaces are equivalent to free modules.
Krull's theorem: Every unital ring (other than the trivial ring) contains a maximal ideal. Equivalently, in any nontrivial unital ring, every ideal can be extended to a maximal ideal.
For every non-empty set S there is a binary operation defined on S that gives it a group structure. (A cancellative binary operation is enough; see group structure and the axiom of choice.)
Every free abelian group is projective.
Baer's criterion: Every divisible abelian group is injective.
Every set is a projective object in the category Set of sets.

Functional analysis

The closed unit ball of the dual of a normed vector space over the reals has an extreme point.

Point-set topology

The Cartesian product of any family of connected topological spaces is connected.
Tychonoff's theorem: The Cartesian product of any family of compact topological spaces is compact.
In the product topology, the closure of a product of subsets is equal to the product of the closures.

Mathematical logic

If S is a set of sentences of first-order logic and B is a consistent subset of S, then B is included in a set that is maximal among consistent subsets of S.
The special case where S is the set of all first-order sentences in a given signature is weaker, equivalent to the Boolean prime ideal theorem; see the section "Weaker forms" below.

Graph theory

Every connected graph has a spanning tree. Equivalently, every nonempty graph has a spanning forest.

Category theory

Several results in category theory invoke the axiom of choice for their proof. These results might be weaker than, equivalent to, or stronger than the axiom of choice, depending on the strength of the technical foundations. For example, if one defines categories in terms of sets, that is, as sets of objects and morphisms (usually called a small category), then there is no category of all sets, and so it is difficult for a category-theoretic formulation to apply to all sets. On the other hand, other foundational descriptions of category theory are considerably stronger, and an identical category-theoretic statement of choice may be stronger than the standard formulation, à la class theory, mentioned above. Examples of category-theoretic statements which require choice include:

Every small category has a skeleton.
If two small categories are weakly equivalent, then they are equivalent.
Every continuous functor on a small-complete category which satisfies the appropriate solution set condition has a left adjoint (the Freyd adjoint functor theorem).

Weaker forms

There are several weaker statements that are not equivalent to the axiom of choice but are closely related. One example is the axiom of dependent choice (DC). A still weaker example is the axiom of countable choice (ACω or CC), which states that a choice function exists for any countable set of nonempty sets. These axioms are sufficient for many proofs in elementary mathematical analysis, and are consistent with some principles, such as the Lebesgue measurability of all sets of reals, that are disprovable from the full axiom of choice.
Given an ordinal parameter α ≥ ω+2 — for every set S with rank less than α, S is well-orderable.
Given an ordinal parameter α ≥ 1 — for every set S with Hartogs number less than ω_α, S is well-orderable.

As the ordinal parameter is increased, these approximate the full axiom of choice more and more closely. Other choice axioms weaker than the axiom of choice include the Boolean prime ideal theorem and the axiom of uniformization. The former is equivalent in ZF to Tarski's 1930 ultrafilter lemma: every filter is a subset of some ultrafilter.

Results requiring AC (or weaker forms) but weaker than it

One of the most interesting aspects of the axiom of choice is the large number of places in mathematics where it shows up. Here are some statements that require the axiom of choice in the sense that they are not provable from ZF but are provable from ZFC (ZF plus AC). Equivalently, these statements are true in all models of ZFC but false in some models of ZF.

Set theory

The ultrafilter lemma (with ZF) can be used to prove the axiom of choice for finite sets: given finitely many non-empty sets, their product is not empty.
The union of any countable family of countable sets is countable (this requires countable choice but not the full axiom of choice).
If the set A is infinite, then there exists an injection from the natural numbers N to A (see Dedekind infinite).
Eight definitions of a finite set are equivalent.
Every infinite game in which the payoff set is a Borel subset of Baire space is determined.
Every infinite cardinal κ satisfies 2×κ = κ.

Measure theory

The Vitali theorem on the existence of non-measurable sets, which states that there exists a subset of the real numbers that is not Lebesgue measurable.
There exist Lebesgue-measurable subsets of the real numbers that are not Borel sets. That is, the Borel σ-algebra on the real numbers (which is generated by all real intervals) is strictly included in the Lebesgue-measure σ-algebra on the real numbers.
The Hausdorff paradox.
The Banach–Tarski paradox.

Algebra

Every field has an algebraic closure.
Every field extension has a transcendence basis.
Every infinite-dimensional vector space contains an infinite linearly independent subset (this requires dependent choice, but not the full axiom of choice).
Stone's representation theorem for Boolean algebras needs the Boolean prime ideal theorem.
The Nielsen–Schreier theorem, that every subgroup of a free group is free.
The additive groups of R and C are isomorphic.

Functional analysis

The Hahn–Banach theorem in functional analysis, allowing the extension of linear functionals.
The theorem that every Hilbert space has an orthonormal basis.
The Banach–Alaoglu theorem about compactness of sets of functionals.
The Baire category theorem about complete metric spaces, and its consequences, such as the open mapping theorem and the closed graph theorem.
On every infinite-dimensional topological vector space there is a discontinuous linear map.

General topology

A uniform space is compact if and only if it is complete and totally bounded.
Every Tychonoff space has a Stone–Čech compactification.

Mathematical logic

Gödel's completeness theorem for first-order logic: every consistent set of first-order sentences has a completion. That is, every consistent set of first-order sentences can be extended to a maximal consistent set.
The compactness theorem: If Σ is a set of first-order (or alternatively, zero-order) sentences such that every finite subset of Σ has a model, then Σ has a model.

Possibly equivalent implications of AC

There are several historically important set-theoretic statements implied by AC whose equivalence to AC is open. Zermelo cited the partition principle, which was formulated before AC itself, as a justification for believing AC.
In 1906, Russell declared the partition principle (PP) to be equivalent to AC, but whether the partition principle implies AC is the oldest open problem in set theory, and the equivalences of the other statements are similarly hard old open problems. In every known model of ZF where choice fails, these statements fail too, but it is unknown whether they can hold without choice.

Set theory

Partition principle: if there is a surjection from A to B, there is an injection from B to A. Equivalently, every partition P of a set S is less than or equal to S in size.
Converse Schröder–Bernstein theorem: if two sets have surjections to each other, they are equinumerous.
Weak partition principle: if there is an injection and a surjection from A to B, then A and B are equinumerous. Equivalently, a partition of a set S cannot be strictly larger than S. If WPP holds, this already implies the existence of a non-measurable set.

Each of the previous three statements is implied by the preceding one, but it is unknown if any of these implications can be reversed.

There is no infinite decreasing sequence of cardinals. The equivalence was conjectured by Schoenflies in 1905.

Abstract algebra

Hahn embedding theorem: Every ordered abelian group G order-embeds as a subgroup of the additive group ℝ^Ω endowed with a lexicographical order, where Ω is the set of Archimedean equivalence classes of G. This equivalence was conjectured by Hahn in 1907.

Stronger forms of the negation of AC

If we abbreviate by BP the claim that every set of real numbers has the property of Baire, then BP is stronger than ¬AC, which asserts the nonexistence of any choice function on perhaps only a single set of nonempty sets. Strengthened negations may be compatible with weakened forms of AC. For example, ZF + DC + BP is consistent, if ZF is. It is also consistent with ZF + DC that every set of reals is Lebesgue measurable, but this consistency result, due to Robert M.
Solovay, cannot be proved in ZFC itself, but requires a mild large cardinal assumption (the existence of an inaccessible cardinal). The much stronger axiom of determinacy, or AD, implies that every set of reals is Lebesgue measurable, has the property of Baire, and has the perfect set property (all three of these results are refuted by AC itself). ZF + DC + AD is consistent provided that a sufficiently strong large cardinal axiom is consistent (the existence of infinitely many Woodin cardinals).

Quine's system of axiomatic set theory, New Foundations (NF), takes its name from the title ("New Foundations for Mathematical Logic") of the 1937 article that introduced it. In the NF axiomatic system, the axiom of choice can be disproved.

Statements implying the negation of AC

There are models of Zermelo–Fraenkel set theory in which the axiom of choice is false. We shall abbreviate "Zermelo–Fraenkel set theory plus the negation of the axiom of choice" by ZF¬C. For certain models of ZF¬C, it is possible to validate the negation of some standard ZFC theorems. As any model of ZF¬C is also a model of ZF, it is the case that for each of the following statements, there exists a model of ZF in which that statement is true.

The negation of the weak partition principle: There is a set that can be partitioned into strictly more equivalence classes than the original set has elements, and a function whose domain is strictly smaller than its range. In fact, this is the case in all known models.
There is a function f from the real numbers to the real numbers such that f is not continuous at a, but f is sequentially continuous at a, i.e., for any sequence {xₙ} converging to a, limₙ f(xₙ) = f(a).
There is an infinite set of real numbers without a countably infinite subset.
The real numbers are a countable union of countable sets.
This does not imply that the real numbers are countable: as pointed out above, to show that a countable union of countable sets is itself countable requires the axiom of countable choice.

There is a field with no algebraic closure.
In all models of ZF¬C there is a vector space with no basis.
There is a vector space with two bases of different cardinalities.
There is a free complete Boolean algebra on countably many generators.
There is a set that cannot be linearly ordered.
There exists a model of ZF¬C in which every set in Rⁿ is measurable. Thus it is possible to exclude counterintuitive results like the Banach–Tarski paradox which are provable in ZFC. Furthermore, this is possible whilst assuming the axiom of dependent choice, which is weaker than AC but sufficient to develop most of real analysis.
In all models of ZF¬C, the generalized continuum hypothesis does not hold.

Additionally, by imposing definability conditions on sets (in the sense of descriptive set theory) one can often prove restricted versions of the axiom of choice from axioms incompatible with general choice. This appears, for example, in the Moschovakis coding lemma.

Axiom of choice in type theory

In type theory, a different kind of statement is known as the axiom of choice. This form begins with two types, σ and τ, and a relation R between objects of type σ and objects of type τ. The axiom of choice states that if for each x of type σ there exists a y of type τ such that R(x,y), then there is a function f from objects of type σ to objects of type τ such that R(x,f(x)) holds for all x of type σ:

(∀x:σ)(∃y:τ) R(x,y) → (∃f:σ→τ)(∀x:σ) R(x, f(x))

Unlike in set theory, the axiom of choice in type theory is typically stated as an axiom scheme, in which R varies over all formulas or over all formulas of a particular logical form.
Notes

References

Per Martin-Löf, "100 years of Zermelo's axiom of choice: What was the problem with it?", in Logicism, Intuitionism, and Formalism: What Has Become of Them?, Sten Lindström, Erik Palmgren, Krister Segerberg, and Viggo Stoltenberg-Hansen, editors (2008). Available as a Dover Publications reprint, 2013.
Herman Rubin, Jean E. Rubin: Equivalents of the Axiom of Choice. North Holland, 1963. Reissued by Elsevier, April 1970.
Herman Rubin, Jean E. Rubin: Equivalents of the Axiom of Choice II. North Holland/Elsevier, July 1985.
George Tourlakis, Lectures in Logic and Set Theory. Vol. II: Set Theory, Cambridge University Press, 2003.
Ernst Zermelo, "Untersuchungen über die Grundlagen der Mengenlehre I," Mathematische Annalen 65: (1908) pp. 261–81. PDF download via digizeitschriften.de
Translated in: Jean van Heijenoort, 2002. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. New edition. Harvard University Press.
1904. "Proof that every set can be well-ordered," 139–41.
1908. "Investigations in the foundations of set theory I," 199–215.

External links

Axiom of Choice entry in the Springer Encyclopedia of Mathematics.
Axiom of Choice and Its Equivalents entry at ProvenMath. Includes formal statement of the Axiom of Choice, Hausdorff's Maximal Principle, Zorn's Lemma and formal proofs of their equivalence down to the finest detail.
Consequences of the Axiom of Choice, based on the book by Paul Howard and Jean Rubin.
Axiom of choice
Mathematics
8,023
54,632,449
https://en.wikipedia.org/wiki/Halorhabdus%20utahensis
Halorhabdus utahensis is a halophilic archaeon isolated from the Great Salt Lake in Utah.

Cell structure and metabolism

Halorhabdus utahensis (salt-loving rod) is a motile, Gram-negative, extremely halophilic archaeon that forms red, circular colonies. It grows at temperatures between 17 and 55 °C, with optimal growth occurring at 50 °C. It can also grow over a pH range of 5.5–8.5, with the optimal pH value between 6.7 and 7.1. Further, with its extremely high salinity optimum of 27% NaCl, Halorhabdus has one of the highest reported salinity optima of any living organism. The cells of H. utahensis are extremely pleomorphic, exhibiting any shape from irregular coccoid or ellipsoid to triangular, club-shaped or rod-shaped forms. The rod-shaped and ellipsoid cells are 2–10 by 0.5–1 μm and 1–2 by 1 μm in size, respectively, and the spherical cells have a diameter of approximately 1 μm. The archaeon uses only a limited range of substrates, such as glucose, xylose, and fructose, for growth, and is unique in its inability to utilize yeast extract or peptone. Other substances that did not stimulate the organism's growth include organic acids, amino acids, alcohols, glycogen, and starch. References Further reading Scientific journals Scientific books Euryarchaeota Archaea described in 2000
Halorhabdus utahensis
Biology
325
45,062,104
https://en.wikipedia.org/wiki/Penicillium%20caseifulvum
Penicillium caseifulvum is a fungus species of the genus Penicillium which occurs on the surface of blue cheese and causes discoloration in the form of brown spots. See also List of Penicillium species References Further reading caseifulvum Fungi described in 1998 Fungus species
Penicillium caseifulvum
Biology
63
50,788,619
https://en.wikipedia.org/wiki/AIDS%20Care
AIDS Care (subtitle: Psychological and Socio-medical Aspects of AIDS/HIV) is a peer-reviewed medical journal publishing HIV/AIDS research from multiple different disciplines, including psychology and sociology. It was established in 1989 and is published ten times per year by Taylor & Francis. The editor-in-chief is Lorraine Sherr (University College London). According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.887. References External links Taylor & Francis academic journals Academic journals established in 1989 HIV/AIDS journals English-language journals 10 times per year journals
AIDS Care
Biology
121
70,744,892
https://en.wikipedia.org/wiki/Nagstatin
Nagstatin is a strong competitive inhibitor of the enzyme N-acetyl-β-D-glucosaminidase, with the molecular formula C12H17N3O6. Nagstatin is produced by the bacterium Streptomyces amakusaensis. References Further reading Nagstatin Acetamides Carboxylic acids Triols Imidazopyridines
Nagstatin
Chemistry
85
16,796,803
https://en.wikipedia.org/wiki/HD%2046375%20b
HD 46375 b is an extrasolar planet located approximately 109 light-years away in the constellation of Monoceros, orbiting the star HD 46375. Announced together with 79 Ceti b on March 29, 2000, it was one of the first two known extrasolar planets less massive than Saturn orbiting a normal star. The planet is a "hot Jupiter", a type of planet that orbits very close to its parent star. In this case the orbital distance is only a tenth that of the planet Mercury. No transit of the planet has been detected, so its inclination must be less than 83°. Because the inclination is unknown, the true mass of the planet is not known. References External links Monoceros Exoplanets discovered in 2000 Giant planets Exoplanets detected by radial velocity Hot Jupiters
HD 46375 b
Astronomy
169
20,018,367
https://en.wikipedia.org/wiki/Grooming%20dance
A grooming dance, grooming invitation dance or shaking dance is a dance performed by honeybees to initiate allogrooming. It was first reported in 1945 by biologist Mykola H. Haydak. An increase in the frequency of the grooming dance has been observed among the bees of mite-infested colonies, and among bees who have been dusted with small particles of chalk dust. See also Bee learning and communication Tremble dance Waggle dance References Animal communication Western honey bee behavior Neuroethology
Grooming dance
Biology
107
1,145,355
https://en.wikipedia.org/wiki/Thymolphthalein
Thymolphthalein is a phthalein dye used as an acid–base (pH) indicator. Its transition range is around pH 9.3–10.5. Below this pH, it is colorless; above, it is blue. The molar extinction coefficient for the blue thymolphthalein dianion is 38,000 M−1 cm−1 at 595 nm. Thymolphthalein is also known to have use as a laxative and for disappearing ink. Preparation Thymolphthalein can be synthesized from thymol and phthalic anhydride. See also Phenolphthalein References PH indicators Triarylmethane dyes Phthalides Phenols Isopropyl compounds
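Since the article quotes a molar extinction coefficient for the blue dianion, the Beer–Lambert law gives a quick way to turn an absorbance reading into a concentration. A minimal Python sketch (the absorbance value in the example is illustrative, not from the source):

```python
def concentration_from_absorbance(absorbance, epsilon=38000.0, path_cm=1.0):
    """Beer-Lambert law, A = epsilon * c * l, solved for the molar
    concentration c (mol/L). The default epsilon is the article's
    figure for the blue thymolphthalein dianion at 595 nm."""
    return absorbance / (epsilon * path_cm)

# Illustrative reading (assumed, not from the source): A = 0.38 in a 1 cm cell
c = concentration_from_absorbance(0.38)  # about 1.0e-5 mol/L
```
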
Thymolphthalein
Chemistry,Materials_science
157
19,359,492
https://en.wikipedia.org/wiki/HD%20126200
HD 126200 is a blue dwarf star in the northern constellation of Boötes. It has been identified as an Algol-type eclipsing binary, although subsequent observations do not confirm this. References External links HR 5388 Image HD 126200 Boötes 126200 Eclipsing binaries 070384 Algol variables A-type main-sequence stars 5388 Durchmusterung objects
HD 126200
Astronomy
84
433,005
https://en.wikipedia.org/wiki/The%20Emperor%27s%20New%20Mind
The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics is a 1989 book by the mathematical physicist Roger Penrose. Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes a digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function. Most of the book is spent reviewing, for the scientifically minded lay reader, a plethora of interrelated subjects such as Newtonian physics, special and general relativity, the philosophy and limitations of mathematics, quantum physics, cosmology, and the nature of time. Penrose intermittently describes how each of these bears on his developing theme: that consciousness is not "algorithmic". Only the later portions of the book address the thesis directly.

Overview

Penrose states that his ideas on the nature of consciousness are speculative, and his thesis is considered erroneous by some experts in the fields of philosophy, computer science, and robotics. The Emperor's New Mind attacks the claims of artificial intelligence using the physics of computing: Penrose notes that the present home of computing lies more in the tangible world of classical mechanics than in the imponderable realm of quantum mechanics. The modern computer is a deterministic system that for the most part simply executes algorithms. Penrose shows that, by reconfiguring the boundaries of a billiard table, one might make a computer in which the billiard balls act as message carriers and their interactions act as logical decisions. The billiard-ball computer was first designed some years ago by Edward Fredkin and Tommaso Toffoli of the Massachusetts Institute of Technology.
Reception Following the publication of the book, Penrose began to collaborate with Stuart Hameroff on a biological analog to quantum computation involving microtubules, which became the foundation for his subsequent book, Shadows of the Mind: A Search for the Missing Science of Consciousness. Penrose won the Science Book Prize in 1990 for The Emperor's New Mind. According to an article in the American Journal of Physics, Penrose incorrectly claims a barrier far away from a localized particle can affect the particle. See also Alan Turing Anathem Church–Turing thesis Mind–body dualism Orchestrated objective reduction Quantum mind Raymond Smullyan Shadows of the Mind "The Emperor's New Clothes" Turing test References 1989 non-fiction books Works about consciousness English-language non-fiction books English non-fiction books Mathematics books Oxford University Press books Philosophy of artificial intelligence Philosophy of mind literature Popular physics books Quantum mind Science books Turing machine Works by Roger Penrose
The Emperor's New Mind
Physics
559
55,343,693
https://en.wikipedia.org/wiki/NGC%204754
NGC 4754 is a barred lenticular galaxy located about 53 million light-years away in the constellation of Virgo. NGC 4754 was discovered by astronomer William Herschel on March 15, 1784. It forms a non-interacting pair with the edge-on lenticular galaxy NGC 4762. NGC 4754 is a member of the Virgo Cluster. See also List of NGC objects (4001–5000) NGC 4477 References External links Barred lenticular galaxies Virgo (constellation) 4754 43656 8010 Astronomical objects discovered in 1784 Virgo Cluster
NGC 4754
Astronomy
117
4,692,157
https://en.wikipedia.org/wiki/Bird%E2%80%93Meertens%20formalism
The Bird–Meertens formalism (BMF) is a calculus for deriving programs from program specifications (in a functional programming setting) by a process of equational reasoning. It was devised by Richard Bird and Lambert Meertens as part of their work within IFIP Working Group 2.1. It is sometimes referred to in publications as BMF, as a nod to Backus–Naur form. Facetiously it is also referred to as Squiggol, as a nod to ALGOL, which was also in the remit of WG 2.1, and because of the "squiggly" symbols it uses. A less-used variant name, but actually the first one suggested, is SQUIGOL. Martin and Nipkow provided automated support for Squiggol development proofs, using the Larch Prover.

Basic examples and notations

Map is a well-known second-order function that applies a given function to every element of a list; in BMF, it is written f∗:

f∗ [x₁, …, xₙ] = [f x₁, …, f xₙ]

Likewise, reduce is a function that collapses a list into a single value by repeated application of a binary operator. It is written ⊕/ in BMF. Taking ⊕ as a suitable binary operator with neutral element e, we have

⊕/ [x₁, …, xₙ] = x₁ ⊕ x₂ ⊕ ⋯ ⊕ xₙ

Using those two operators and the primitives + (the usual addition) and ++ (list concatenation), we can easily express the sum of all elements of a list and the flatten function in point-free style. We have:

sum = +/
flatten = ++/

Similarly, writing ∘ for functional composition and ∧ for conjunction, it is easy to write a function testing that all elements of a list satisfy a predicate p, simply as (∧/) ∘ p∗.

Bird (1989) transforms inefficient easy-to-understand expressions ("specifications") into efficient involved expressions ("programs") by algebraic manipulation. For example, the specification max/ ∘ (+/)∗ ∘ segs, where segs computes the list of all contiguous segments of a list, is an almost literal translation of the maximum segment sum problem, but running that functional program on a list of size n will take time O(n³) in general. From this, Bird computes an equivalent functional program that runs in time O(n), and is in fact a functional version of Kadane's algorithm.
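The gap between the cubic-time specification and the derived linear-time program can be illustrated directly. A Python sketch (Python stands in for the functional notation; `mss_spec` is the naive specification, `mss` the Kadane-style program, both allowing the empty segment with sum 0):

```python
def mss_spec(xs):
    """O(n^3) specification: maximum sum over all (possibly empty)
    contiguous segments of the list."""
    n = len(xs)
    return max(sum(xs[i:j]) for i in range(n + 1) for j in range(i, n + 1))

def mss(xs):
    """Derived O(n) program (Kadane's algorithm): track the best
    segment ending at the current position while scanning once."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)
        best = max(best, ending_here)
    return best
```

On Bentley's classic example list `[31, -41, 59, 26, -53, 58, 97, -93, -23, 84]` both functions return 187, but only `mss` does so in a single pass.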
The derivation is shown in the picture, with computational complexities given in blue, and law applications indicated in red. Example instances of the laws can be opened by clicking on [show]; they use lists of integer numbers, addition, minus, and multiplication. The notation in Bird's paper differs from that used above; in addition, Bird's paper uses two operators that compute the list of all prefixes and the list of all suffixes of their argument, respectively. As above, function composition is denoted by ∘, which has lowest binding precedence. In the example instances, lists are colored by nesting depth; in some cases, new operations are defined ad hoc (grey boxes).

The homomorphism lemma and its applications to parallel implementations

A function h on lists is called a list homomorphism if there exists an associative binary operator ⊕ and neutral element e such that the following holds:

h [] = e
h (x ++ y) = h x ⊕ h y

The homomorphism lemma states that h is a homomorphism if and only if there exists an operator ⊕ and a function f such that h = (⊕/) ∘ f∗. A point of great interest for this lemma is its application to the derivation of highly parallel implementations of computations. Indeed, it is trivial to see that f∗ has a highly parallel implementation, and so does ⊕/ — most obviously as a binary tree. Thus for any list homomorphism h, there exists a parallel implementation. That implementation cuts the list into chunks, which are assigned to different computers; each computes the result on its own chunk. It is those results that transit on the network and are finally combined into one. In any application where the list is enormous and the result is a very simple type – say an integer – the benefits of parallelisation are considerable. This is the basis of the map-reduce approach. See also Catamorphism Anamorphism Paramorphism Hylomorphism References Functional languages Theoretical computer science Program derivation
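The chunked evaluation scheme described above can be sketched in Python (sequential code standing in for the distributed setting; `hom` composes a reduce with a map, the form given by the homomorphism lemma, and chunking models the per-machine work):

```python
from functools import reduce

def hom(f, op, e, xs):
    """A list homomorphism in reduce-after-map form: map f over the
    list, then fold the results with the associative operator op,
    starting from the neutral element e."""
    return reduce(op, map(f, xs), e)

def hom_chunked(f, op, e, xs, chunk_size):
    """Map-reduce style evaluation of the same homomorphism: split the
    list into chunks (which could live on separate machines), evaluate
    each chunk independently, then combine the partial results with op.
    Associativity of op guarantees the same answer as hom()."""
    chunks = [xs[i:i + chunk_size] for i in range(0, len(xs), chunk_size)]
    partials = [hom(f, op, e, chunk) for chunk in chunks]
    return reduce(op, partials, e)
```

For example, the sum-of-squares homomorphism gives the same result whether evaluated in one pass or over chunks of any size, which is exactly the property that makes the per-chunk work freely distributable.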
Bird–Meertens formalism
Mathematics
824
1,093,599
https://en.wikipedia.org/wiki/Derailment
In rail transport, a derailment is a type of train wreck that occurs when a rail vehicle such as a train comes off its rails. Although many derailments are minor, all result in temporary disruption of the proper operation of the railway system, and they are a potentially serious hazard. A derailment of a train can be caused by a collision with another object, an operational error (such as excessive speed through a curve), the mechanical failure of tracks (such as broken rails), or the mechanical failure of the wheels, among other causes. In emergency situations, deliberate derailment with derails or catch points is sometimes used to prevent a more serious accident.

History

The first recorded train derailment in history is known as the Hightstown rail accident in New Jersey, which occurred on 8 November 1833. The train was traveling between Hightstown and Spotswood, New Jersey, and derailed after an axle broke on one of the carriages as a result of a journal box catching fire. The derailment resulted in one fatality and twenty-three injuries. Both New York railroad magnate Cornelius Vanderbilt and former U.S. president John Quincy Adams were on the train at the time, and Adams wrote about the event in his journal.

During the 19th century derailments were commonplace, but progressively improved safety measures have resulted in a stable lower level of such incidents. A sampling of annual approximate numbers of derailments in the United States includes 3000 in 1980, 1000 in 1986, 500 in 2010, and 1000 in 2022.
Derailments in the United States

Causes

Derailments result from one or more of a number of distinct causes; these may be classified as:

the primary mechanical failure of a track component (for example broken rails, gauge spread due to sleeper (tie) failure)
the primary mechanical failure of a component of the running gear of a vehicle (for example axlebox failure, wheel breakage)
a fault in the geometry of the track components or the running gear that results in a quasi-static failure in running (for example rail climbing due to excessive wear of wheels or rails, earthworks slip)
a dynamic effect of the track-vehicle interaction (for example extreme hunting oscillation, vertical bounce, track shift under a train, excessive speed)
improper operation of points, or improper observance of signals protecting them (signal errors)
a secondary event following collision with other trains, road vehicles, or other obstructions (level crossing collisions, obstructions on the line)
train handling (snatches due to sudden traction or braking forces, referred to as slack action in North America).

Broken rails

Broken rails are a leading cause of derailments. According to data from the Federal Railroad Administration, broken rails and welds are the most common reason for train derailments, making up more than 15 percent of derailment cases. A traditional track structure consists of two rails, fixed at a designated distance apart (known as the track gauge), and supported on transverse sleepers (ties). Some advanced track structures support the rails on a concrete or asphalt slab. The running surface of the rails is required to be practically continuous and of the proper geometrical layout. In the event of a broken or cracked rail, the rail running surface may be disrupted if a piece has fallen out, or become lodged in an incorrect location, or if a large gap between the remaining rail sections arises.
170 broken (not cracked) rails were reported on Network Rail in the UK in 2008, down from a peak of 988 in 1998/1999. In jointed track, rails are usually connected with bolted fishplates (joint bars). The web of the rail experiences large shear forces and these are enhanced around the bolt hole. Where track maintenance is poor, metallurgical fatigue can result in the propagation of star cracking from the bolthole. In extreme situations this can lead to a triangular piece of rail at the joint becoming detached. Metallurgical changes take place due to the phenomenon of gauge corner cracking (in which fatigue microcracking propagates faster than ordinary wear), and also due to the effects of hydrogen inclusion during the manufacturing process, leading to crack propagation under fatigue loading. Local embrittlement of the parent metal may take place due to wheel spin (traction units rotating driving wheels without movement along the track). Rail welds (where rail sections are joined by welding) may fail due to poor workmanship; this may be triggered by extremely cold weather or improper stressing of continuously welded rails, such that high tensile forces are generated in the rails. The fishplates (joint bars) in jointed track may fail, allowing the rails to pull apart in extremely cold weather; this is usually associated with uncorrected rail creep. Derailment may take place due to excessive gauge widening (sometimes known as road spread), in which the sleepers or other fastenings fail to maintain the proper gauge. In lightly engineered track where rails are spiked (dogged) to timber sleepers, spike hold failure may result in rotation outwards of a rail, usually under the aggravating action of crabbing of bogies (trucks) on curves. 
The mechanism of gauge widening is usually gradual and relatively slow, but if it goes undetected the final failure often takes place under the effect of some additional factor, such as excess speed, poorly maintained running gear on a vehicle, misalignment of rails, or extreme traction effects (such as high propelling forces). The crabbing effect referred to above is more marked in dry conditions, when the coefficient of friction at the wheel-rail interface is high.

Defective wheels

The running gear – wheelsets, bogies (trucks), and suspension – may fail. The most common historical failure modes are the collapse of plain bearings due to deficient lubrication and the failure of leaf springs; wheel tyres are also prone to failure through metallurgical crack propagation. Modern technologies have reduced the incidence of these failures considerably, both by design (especially the elimination of plain bearings) and by intervention (non-destructive testing in service).

Unusual track interaction

If a vertical, lateral, or crosslevel irregularity is cyclic and occurs at a wavelength corresponding to the natural frequency of certain vehicles traversing the route section, there is a risk of resonant harmonic oscillation in the vehicles, leading to extreme improper movement and possibly derailment. This is most hazardous when a cyclic roll is set up by crosslevel variations, but vertical cyclic errors can also cause vehicles to lift off the track; this is especially the case when the vehicles are in the tare (empty) condition, and when the suspension is not designed with appropriate characteristics. The latter condition applies when the suspension springing has a stiffness optimised for the loaded condition, or for a compromise loading condition, so that it is too stiff in the tare situation. The vehicle wheelsets become momentarily unloaded vertically, so that the guidance required from flange or wheel-tread contact is inadequate.
A special case is heat-related buckling: in hot weather the rail steel expands. This is managed by stressing continuously welded rails (tensioning them mechanically so that they are stress-neutral at a moderate temperature), by providing proper expansion gaps at joints and ensuring that fishplates are properly lubricated, and by providing lateral restraint in the form of an adequate ballast shoulder. If any of these measures is inadequate, the track may buckle: a large lateral distortion takes place, which trains are unable to negotiate. (In the nine years 2000/01 to 2008/09 there were 429 track buckle incidents in Great Britain.)

Improper operation of control systems

Junctions and other changes of routing on railways are generally made by means of points (switches): movable sections capable of changing the onward route of vehicles. In the early days of railways these were moved independently by local staff, and accidents (usually collisions) took place when staff forgot which route the points were set for, or overlooked the approach of a train on a conflicting route. If the points were not correctly set for either route (set in mid-stroke), it was possible for a passing train to derail. The first concentration of levers for operating signals and points was brought together at Bricklayer's Arms Junction in south-east London in the period 1843–1844. The signal control location (the forerunner of the signalbox) was enhanced by the provision of interlocking (preventing a clear signal from being set for a route that was not available) in 1856. To prevent the unintended movement of freight vehicles from sidings on to running lines, and other analogous improper movements, trap points and derails are provided at the exit from sidings; in some cases these are also provided at the convergence of running lines.
It occasionally happens that a driver incorrectly believes they have authority to proceed over the trap points, or that the signaller improperly gives such permission; this results in derailment. The resulting derailment does not always fully protect the other line: a trap point derailment at speed may well cause considerable damage and obstruction, and even a single vehicle may obstruct the clear line.

Derailment following collision

If a train collides with a massive object, the proper running of the vehicle wheels on the track may clearly be disrupted. Although very large obstructions are easily imagined, even a cow straying on to the line has been known to derail a passenger train at speed, as occurred in the Polmont rail accident. The most common obstructions encountered are road vehicles at level crossings (grade crossings); malicious persons sometimes place materials on the rails, and in some cases relatively small objects cause a derailment by guiding one wheel over the rail (rather than by gross collision). Derailment has also been brought about deliberately in situations of war or other conflict, for example during hostilities with Native Americans, and especially during periods when military personnel and materiel were being moved by rail.

Harsh train handling

The handling of a train can also cause derailments. The vehicles of a train are connected by couplings; in the early days of railways these were short lengths of chain ("loose couplings") that connected adjacent vehicles with considerable slack. Even with later improvements there may be considerable slack between the traction situation (the power unit pulling the couplings tight) and the braking situation (the locomotive applying brakes and compressing the buffers throughout the train). This results in coupling surge.
More sophisticated technologies in use nowadays generally employ couplings with no loose slack, although there is elastic movement at the couplings; continuous braking is provided, so that every vehicle in the train has brakes controlled by the driver. Generally this uses compressed air as a control medium, and there is a measurable time lag as the signal (to apply or release brakes) propagates along the train. If a train driver applies the train brakes suddenly and severely, the front part of the train is subject to braking forces first. (Where only the locomotive has braking, this effect is obviously more extreme.) The rear part of the train may overrun the front part, and in cases where coupling condition is imperfect, the resulting sudden closing up (an effect referred to as a "run-in") may cause a vehicle in tare condition (an empty freight vehicle) to be lifted momentarily and leave the track. This effect was relatively common in the nineteenth century. On curved sections, the longitudinal (traction or braking) forces between vehicles have a component directed inward or outward on the curve; in extreme situations these lateral forces may be enough to produce derailment. A special case of train handling problems is overspeed on sharp curves. This generally arises when a driver fails to slow the train for a sharp curved section on a route that otherwise has higher permitted speeds. In the extreme, the train enters the curve at a speed at which it cannot negotiate it, and gross derailment takes place. The specific mechanism may involve bodily tipping (rotation), but is more likely to involve disruption of the track structure and derailment as the primary failure event, followed by overturning. Fatal instances include the Santiago de Compostela derailment in 2013 and the Philadelphia train derailment two years later, both involving trains travelling at about .
Both were travelling at about twice the maximum allowable speed for the curved section of track.

Flange climbing

The guidance system of practical railway vehicles relies on the steering effect of the conicity of the wheel treads on moderate curves (down to a radius of about 500 m, or about 1,500 feet). On sharper curves flange contact takes place, and the guiding effect of the flange relies on a vertical force (the vehicle weight). A flange climbing derailment can result if the ratio between these forces, L/V, is excessive. The lateral force L results not only from centrifugal effects; a large component comes from the crabbing of a wheelset running with flange contact at a non-zero angle of attack. An excessive L/V can result from wheel unloading, or from improper rail or wheel tread profiles. The physics of this is described more fully below, in the section on wheel-rail interaction. Wheel unloading can be caused by twist in the track. This can arise if the cant (crosslevel, or superelevation) of the track varies considerably over the wheelbase of a vehicle and the vehicle suspension is very stiff in torsion. In the quasi-static situation it may arise in extreme cases of poor load distribution, or on extreme cant at low speed. If a rail has been subject to extreme sidewear, or a wheel flange has been worn to an improper angle, it is possible for the L/V ratio to exceed the value that the flange angle can resist. If weld repair of side-worn switches is undertaken, poor workmanship can produce a ramp in the profile in the facing direction that deflects an approaching wheel flange on to the rail head. In extreme situations, the infrastructure may be grossly distorted or even absent; this may arise from a variety of causes, including earthwork movement (embankment slips and washouts), earthquakes and other major terrestrial disruptions, or deficient protection during work processes.
Wheel-rail interaction

Nearly all practical railway systems use wheels fixed to a common axle: the wheels on both sides rotate in unison. Tramcars requiring low floor levels are the exception, but much of the benefit in vehicle guidance is lost by having unlinked wheels. The benefit of linked wheels derives from the conicity of the wheel treads: the wheel treads are not cylindrical, but conical. On idealised straight track, a wheelset would run centrally, midway between the rails. The example shown here uses a right-curving section of track; the focus is on the left-side wheel, which is more involved with the forces critical to guiding the railcar through the curve. Diagram 1 below shows the wheel and rail with the wheelset running straight and central on the track, the wheelset running away from the observer. (Note that the rail is shown inclined inwards; this is done on modern track to match the rail head profile to the wheel tread profile.) Diagram 2 shows the wheelset displaced to the left, due to curvature of the track or a geometrical irregularity. The left wheel (shown here) is now running on a slightly larger diameter; the right wheel opposite has moved to the left as well, towards the centre of the track, and is running on a slightly smaller diameter. As the two wheels rotate at the same rate, the forward speed of the left wheel is a little faster than that of the right wheel. This causes the wheelset to curve to the right, correcting the displacement. This takes place without flange contact; the wheelsets steer themselves on moderate curves without any flange contact. The sharper the curve, the greater the lateral displacement necessary to achieve the curving. On a very sharp curve (typically less than about 500 m or 1,500 feet in radius) the width of the wheel tread is not enough to achieve the necessary steering effect, and the wheel flange contacts the face of the high rail.
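The self-steering geometry described above can be put in rough numbers. For a coned wheelset displaced laterally by y, the rolling radii become r0 + gamma*y and r0 - gamma*y, and pure rolling is possible down to a curve radius of about R = s * r0 / (2 * gamma * y). The sketch below uses illustrative textbook values (wheel radius, contact spacing, conicity, flangeway clearance), not figures from this article:

```python
# Hedged sketch of the self-steering (rolling-radius) geometry described
# above. All numbers are illustrative assumptions:
#   r0    - nominal wheel radius (~0.46 m)
#   s     - lateral distance between the two wheel contact points (~1.5 m)
#   gamma - effective conicity of the tread (1:20 new tread = 0.05;
#           effectively higher when worn)
#   y_max - flangeway clearance before flange contact (~0.007 m)

def min_flange_free_radius(r0=0.46, s=1.5, gamma=0.05, y_max=0.007):
    """Sharpest curve (m) a coned wheelset can roll round without flange
    contact: R = s * r0 / (2 * gamma * y_max)."""
    return s * r0 / (2 * gamma * y_max)

print(f"new tread  (conicity 0.05): R >= {min_flange_free_radius():.0f} m")
print(f"worn tread (conicity 0.10): R >= {min_flange_free_radius(gamma=0.10):.0f} m")
```

With a new-tread conicity the flange-free limit comes out near 1,000 m, and with a worn (higher) effective conicity near 500 m, consistent with the "about 500 m" figure quoted above.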
Diagram 3 shows the running of wheelsets in a bogie or a four-wheeled vehicle. The wheelset is not running parallel to the track: it is constrained by the bogie frame and suspension, and it is yawing to the outside of the curve; that is, its natural rolling direction would lead along a less sharply curved path than the actual curve of the track. The angle between the natural path and the actual path is called the angle of attack (or the yaw angle). As the wheelset rolls forward, it is forced to slide across the railhead by the flange contact. The whole wheelset is forced to do this, so the wheel on the low rail is also forced to slide across its rail. This sliding requires a considerable force to make it happen, and the friction force resisting the sliding is designated "L", the lateral force. The wheelset applies a force L outwards to the rails, and the rails apply a force L inwards to the wheels. Note that this is quite independent of "centrifugal force". However at higher speeds the centrifugal force is added to the friction force to make L. The load (vertical force) on the outer wheel is designated V, so that in Diagram 4 the two forces L and V are shown. The steel-to-steel contact has a coefficient of friction that may be as high as 0.5 in dry conditions, so that the lateral force may be up to 0.5 of the vertical wheel load. During this flange contact, the wheel on the high rail is experiencing the lateral force L, towards the outside of the curve. As the wheel rotates, the flange tends to climb up the flange angle. It is held down by the vertical load on the wheel V, so that if L/V exceeds the trigonometrical tangent of the flange contact angle, climbing will take place. The wheel flange will climb to the rail head where there is no lateral resistance in rolling movement, and a flange climbing derailment usually takes place. In Diagram 5 the flange contact angle is quite steep, and flange climbing is unlikely. 
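The climbing criterion just described (L/V exceeding the tangent of the flange contact angle) is usually refined to include friction at the flange contact; this refinement is Nadal's classical limit, which the article does not name but which formalises the same idea. The contact angles and friction coefficient below are illustrative assumptions:

```python
import math

# Hedged sketch of the flange-climbing criterion described above.
# Simple criterion: climbing when L/V > tan(delta), where delta is the
# flange contact angle. Nadal's classical limit adds friction mu at the
# flange contact:
#   (L/V)_limit = (tan(delta) - mu) / (1 + mu * tan(delta))
# The angles and friction value below are illustrative, not from the article.

def nadal_limit(delta_deg, mu):
    """Nadal limiting L/V ratio for flange contact angle delta (degrees)."""
    t = math.tan(math.radians(delta_deg))
    return (t - mu) / (1 + mu * t)

for delta_deg, label in ((68, "steep, unworn flange contact"),
                         (40, "worn, flattened flange contact")):
    print(f"{label}: L/V limit = {nadal_limit(delta_deg, mu=0.3):.2f}")
```

With a steep, unworn contact angle the limit comfortably exceeds the lateral force that dry friction can generate (up to about 0.5 V, as noted above); a worn, flattened contact angle brings the limit below it, which is why worn profiles make flange climbing likely.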
However, if the rail head is side-worn (side-cut) or the flange is worn, as shown in Diagram 6, the contact angle is much flatter and flange climbing is more likely. Once the wheel flange has completely climbed onto the rail head there is no lateral restraint, and the wheelset is likely to follow the yaw angle, resulting in the wheel dropping outside the rail. An L/V ratio greater than 0.6 is considered hazardous. It should be emphasised that this is a much simplified description of the physics; complicating factors include creep, the actual wheel and rail profiles, dynamic effects, the stiffness of the longitudinal restraint at the axleboxes, and the lateral component of longitudinal (traction and braking) forces.

Rerailing

Following a derailment it is naturally necessary to replace the vehicle on the track. If there is no significant track damage, that may be all that is needed; however, when trains in normal running derail at speed, a considerable length of track may be damaged or destroyed, and far worse secondary damage may be caused if a bridge is encountered. With simple wagon derailments where the final position is close to the proper track location, it is usually possible to pull the derailed wheelsets back on to the track using rerailing ramps: metal blocks designed to fit over the rails and provide a rising path back on to the track. A locomotive is usually used to pull the wagon. A disadvantage of this method is that the ramps can seriously damage the infrastructure, and for this reason the procedure is not used in some countries. If the derailed vehicle is further from the track, or its configuration (such as a high centre of gravity or a very short wheelbase) makes the use of ramps impossible, jacks may be used. In its crudest form the process involves lifting the vehicle frame and then allowing it to fall off the jack towards the track; this may need to be repeated.
A more sophisticated process additionally uses slewing jacks; this combination of lifting and sliding is called a hydraulic rerailing system. High-pressure hydraulic lifting jacks raise the vehicle so that a sliding system can be positioned underneath it. The sliding system consists of a beam (also called a bridge) carrying sleds or carriages, which are moved laterally by a horizontally positioned high-pressure hydraulic jack to push the vehicle back over the track; the vehicle is then lowered on to the rails again. Photographs of early locomotives often show one or more jacks carried on the frame of the locomotive for this purpose, suggesting that derailment was a frequent occurrence. When more complex rerailing work is needed, various combinations of cable and pulley systems may be used, or one or more rail-borne cranes may lift a locomotive bodily. In special cases road cranes are used, as these have greater lifting and reach capacity, provided road access to the site is feasible. In extreme circumstances, a derailed vehicle in an awkward location may be scrapped and cut up on site, or simply abandoned as non-salvageable.

Examples

Note: there is a large list of railway accidents in general at Lists of rail accidents.

Primary mechanical failure of a track component

In the Hatfield rail crash in England in 2000, which killed four people, rolling contact fatigue had resulted in multiple gauge corner cracking in the rail surface; 300 such cracks were subsequently found at the site. The rail cracked under a high-speed passenger train, which derailed. In the earlier Hither Green rail crash, a triangular segment of rail at a joint became displaced and lodged in the joint; it derailed a passenger train, and 49 people died. Poor maintenance on an intensively operated section of route was the cause.
Primary mechanical failure of a component of the running gear of a vehicle

In the Eschede train disaster in Germany, a high-speed passenger train derailed in 1998, killing 101 people. The primary cause was the fatigue fracture of a wheel tyre; the train failed to negotiate two sets of points and struck the pier of an overbridge. It was the most serious railway accident in Germany, and also the most serious on any high-speed (over ) line. Ultrasonic testing had failed to reveal the incipient fracture.

Dynamic effects of vehicle–track interaction

In 1967 in the UK there were four derailments due to buckling of continuously welded track ("CWR"): at Lichfield on 10 June, an empty carflat train (a train of flat cars for transporting automobiles); on 13 June an express passenger train was derailed at Somerton; on 15 July a freightliner train (container train) was derailed at Lamington; and on 23 July an express passenger train was derailed at Sandy. The official report was not entirely conclusive as to the causes, but it observed that the annual total of buckling distortions was 48 in 1969, having been in single figures in every previous year, and that [heat-related] distortions per 1,000 miles per annum were 10.42 for CWR and 2.98 for jointed track in 1969, having been at most 1.78 and 1.21 respectively in the previous ten years. 90% of the distortions could be attributed to one of the following:
- failure to comply with the instructions for laying or maintaining CWR track
- recent interference with the consolidation of the ballast
- the effect of discontinuities in the CWR track, such as points
- extraneous factors such as formation subsidence.

Improper operation of control systems

In the Connington South rail crash on 5 March 1967 in England, a signaller moved the points immediately in front of an approaching train.
Mechanical signalling was in force at the location, and it was believed that he improperly replaced the signal protecting the points to danger just as the locomotive passed it. This released the locking on the points, and he moved them so as to lead into a loop line with a low speed restriction. The train, travelling at , was unable to negotiate the points in that position, and five people died.

Secondary events following collision

A passenger train was derailed in the Polmont rail accident in the UK in 1984 upon hitting a cow at speed; the train formation had the locomotive at the rear (propelling), with a light driving-trailer vehicle leading. The cow had strayed on to the line from adjacent agricultural land owing to deficient fencing, and 13 people died in the resulting derailment. However, this was thought to be the first occurrence from this cause (in the UK) since 1948.

Train handling effects

The Salisbury rail crash took place on 1 July 1906: a first-class-only special boat train from Stonehousepool, Plymouth, England, ran through Salisbury station at about ; there was a sharp curve of ten chains (660 feet, 200 m) radius with a speed restriction of . The locomotive overturned bodily and struck the vehicles of a milk train on the adjacent line; 28 people were killed. The driver was sober and normally reliable, but had not driven a non-stopping train through Salisbury before. There have been several other derailments in the UK caused by trains entering speed-restricted sections of track at excessive speed; the causes have generally been inattention by the driver due to alcohol, fatigue or other factors.
Prominent cases were the Nuneaton rail crash in 1975 (a temporary speed restriction was in force due to trackwork, and the illumination of the warning sign had failed) and the Morpeth accident in 1984 (an express passenger sleeping-car train took a restricted sharp curve at full speed; alcohol was a factor; there were no fatalities, owing to the improved crashworthiness of the vehicles).

See also
Guard rails
Lists of rail accidents
Rail sabotage
Train wreck
Tram accident

Notes

References

Further reading

Rail technologies Railway accidents and incidents
Derailment
Technology
https://en.wikipedia.org/wiki/Rhode%20Island%20statistical%20areas
The U.S. state of Rhode Island currently has two statistical areas that have been delineated by the Office of Management and Budget (OMB). On July 21, 2023, the OMB delineated the Providence-Warwick, RI-MA Metropolitan Statistical Area and the Boston-Worcester-Providence, MA-RI-NH Combined Statistical Area, which between them include all five of Rhode Island's counties. See also Geography of Rhode Island Demographics of Rhode Island Notes References External links Office of Management and Budget United States Census Bureau United States statistical areas Statistical Areas Of Rhode Island
Rhode Island statistical areas
Mathematics
https://en.wikipedia.org/wiki/Journal%20de%20Th%C3%A9orie%20des%20Nombres%20de%20Bordeaux
The Journal de Théorie des Nombres de Bordeaux is a triannual peer-reviewed open-access scientific journal covering number theory and related topics. It was established in 1989 and is published by the Institut de Mathématiques de Bordeaux on behalf of the Société Arithmétique de Bordeaux. The editor-in-chief is Denis Benois (University of Bordeaux). Abstracting and indexing The journal is abstracted and indexed in Current Contents/Physical, Chemical & Earth Sciences, Zentralblatt MATH, Mathematical Reviews, Science Citation Index Expanded, and Scopus. According to the Journal Citation Reports, the journal has a 2015 impact factor of 0.294. References External links Number theory journals Multilingual journals Triannual journals Academic journals established in 1989
Journal de Théorie des Nombres de Bordeaux
Mathematics
https://en.wikipedia.org/wiki/Mobile%20mapping
Mobile mapping is the process of collecting geospatial data from a mobile vehicle, typically fitted with a range of GNSS, photographic, radar, laser, LiDAR or other remote sensing systems. Such systems comprise an integrated array of time-synchronised navigation sensors and imaging sensors mounted on a mobile platform. The primary outputs from such systems include GIS data, digital maps, and georeferenced images and video.

History

The development of direct-reading georeferencing technologies opened the way for mobile mapping systems. GPS and inertial navigation systems have allowed rapid and accurate determination of the position and attitude of remote sensing equipment, effectively leading to direct mapping of features of interest without the need for complex post-processing of observed data.

Applications

Aerial mobile mapping

Traditional techniques of georeferencing aerial photography, ground-profiling radar, or lidar are prohibitively expensive, particularly in inaccessible areas, or where the type of data collected makes interpretation of individual features difficult. Direct georeferencing of imagery simplifies the mapping control for large-scale mapping tasks.

Emergency response planning

Mobile mapping systems allow rapid collection of data for accurate assessment of conditions on the ground.

Internet applications

Internet and mobile device users increasingly use geospatial information, either in the form of mapping or of georeferenced imaging. Google, Microsoft, and Yahoo have adapted both aerial photographs and satellite images to develop online mapping systems. Street View-type imagery is also a growing market, and location-aware PDA systems rely on georeferenced features collated from mobile mapping sources.

Road mapping and highway facility management

GPS combined with digital camera systems allows rapid updating of road maps. The same systems can be used to carry out efficient road condition surveys and facilities management.
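The direct georeferencing that underlies these applications is, at its core, a coordinate transformation: a point measured in the sensor frame is mapped to ground coordinates using the platform's GNSS position and IMU attitude, without ground control points. The sketch below is a minimal illustration; the frame conventions, rotation order (yaw-pitch-roll) and numbers are assumptions, and production systems also apply lever-arm and boresight calibration offsets:

```python
import math

# Hedged sketch of direct georeferencing:
#   X_ground = X_gnss + R(roll, pitch, yaw) @ x_sensor
# All frame conventions and numbers here are illustrative assumptions.

def rotation(roll, pitch, yaw):
    """3x3 direction-cosine matrix for a Z(yaw)-Y(pitch)-X(roll) rotation."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def georeference(x_sensor, x_gnss, roll, pitch, yaw):
    """Map a sensor-frame point (m) to ground coordinates (m)."""
    R = rotation(roll, pitch, yaw)
    return [x_gnss[i] + sum(R[i][j] * x_sensor[j] for j in range(3))
            for i in range(3)]

# A point 10 m along the sensor's y-axis, with the platform yawed 90 degrees:
pt = georeference([0.0, 10.0, 0.0], [1000.0, 2000.0, 50.0],
                  roll=0.0, pitch=0.0, yaw=math.radians(90))
print([round(c, 3) for c in pt])
```

The rotation resolves the sensor-frame offset into the mapping frame; the GNSS position then translates it to absolute coordinates, which is why accurate time-synchronised position and attitude remove the need for post-hoc control.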
Laser scanning technologies, applied in the mobile mapping sense, allow full 3D data collection of slopes, bankings, and so on.

Road Inventory and Asset Management

Mobile LiDAR combined with a digital imaging system is used to gather data which, after post-processing, generates strip plans, horizontal and vertical profiles, and an inventory of all other assets within and beyond the right of way, including abutting land use and deficient geometry. Such surveys can also cover the riding quality of the pavement, existing traffic characteristics and corridor capacity, speed-flow-density analysis, road safety review of the corridor, junctions and median openings, and facilities for commercial vehicles. All of these data can be used to build a performance matrix that helps identify gaps in corridor efficiency, so that interventions to improve it can be prioritised.

Digital Twins applications

Mobile mapping combined with indoor mapping is used in the creation of digital twins. A digital twin can represent a single building, or an entire city or country. Several mobile mapping companies, styling themselves "makers of digital twins", are embarking on capturing the digital twins market amid the growing trend among organizations and governments of adopting digital twins for Internet of Things and artificial intelligence applications within the Industrial Revolution 4.0 framework.

Footnotes

References

Surveying Applications of computer vision Earth sciences Digital mapping Satellite meteorology Remote sensing
Mobile mapping
Engineering
https://en.wikipedia.org/wiki/Highly%20optimized%20tolerance
In applied mathematics, highly optimized tolerance (HOT) is a method of generating power-law behavior in systems by including a global optimization principle. It was developed by Jean M. Carlson and John Doyle in the early 2000s. For some systems that display a characteristic scale, a global optimization term could potentially be added that would then yield power-law behavior. It has been used to generate and describe internet-like graphs and forest fire models, and may also apply to biological systems.

Example

The following is taken from Sornette's book. Consider a random variable, $X$, that takes on values $x_i$ with probability $p_i$. Furthermore, assume that for another parameter $r_i$, $x_i = r_i^{-\beta}$ for some fixed $\beta$. We then want to minimize

$$E = \sum_i p_i x_i$$

subject to the constraint

$$\sum_i r_i = \kappa.$$

Using Lagrange multipliers, this gives

$$p_i \propto x_i^{-(1 + 1/\beta)},$$

giving us a power law. The global optimization of minimizing the energy along with the power-law dependence between $x_i$ and $r_i$ gives us a power-law distribution in probability.

See also
self-organized criticality

References

Mathematical optimization
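The Lagrange-multiplier result in the HOT example above can be checked numerically. This sketch restates the setup (losses x_i = r_i^(-beta), resources constrained by sum(r_i) = kappa), solves the stationarity condition (which gives r_i proportional to p_i^(1/(1+beta))), and verifies that the minimising allocation makes p_i * x_i^(1 + 1/beta) constant across events, i.e. p_i proportional to x_i^(-(1 + 1/beta)). The parameter values are arbitrary illustrations:

```python
import random

# Numerical check of the Lagrange-multiplier power law sketched above.
# Events i occur with probability p_i and incur "loss" x_i = r_i**(-beta),
# where the resources r_i satisfy sum(r_i) = kappa. Minimising
# E = sum(p_i * x_i) should yield p_i proportional to x_i**(-(1 + 1/beta)).

random.seed(0)
beta, kappa, n = 2.0, 10.0, 6

p = [random.random() for _ in range(n)]
s = sum(p)
p = [v / s for v in p]                       # normalise the probabilities

# Stationarity of E + lam * (sum(r) - kappa) gives
# r_i proportional to p_i ** (1 / (1 + beta)); the constraint sets the scale.
raw = [pi ** (1.0 / (1.0 + beta)) for pi in p]
t = sum(raw)
r = [kappa * v / t for v in raw]

x = [ri ** (-beta) for ri in r]

# If the power law holds, p_i * x_i ** (1 + 1/beta) is the same for all i.
check = [pi * xi ** (1.0 + 1.0 / beta) for pi, xi in zip(p, x)]
print(f"spread of p_i * x_i^(1+1/beta) across events: "
      f"{max(check) / min(check):.6f}")
```

The spread comes out as 1 to within floating-point error, confirming that the constrained optimum produces the stated power-law relation between probability and loss.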
Highly optimized tolerance
Mathematics
https://en.wikipedia.org/wiki/Patients%20Know%20Best
Patients Know Best is a British social enterprise, with an aim of putting patients in control of their own medical records. In the UK, Patients Know Best integrates into the NHS app and in the Netherlands, it integrates with the government's personal health records infrastructure persoonlijke gezondheidsomgeving (PGO). Its Chairman is Dr Richard Smith (editor). Dr Mohammad Al-Ubaydli is the founder and chief executive officer. The system is the preferred personal health record for London. It is in use in 22 different languages at more than 60 hospitals throughout the UK, Ireland, Germany, Hong Kong, the US and the Netherlands. After trials conducted by Abertawe Bro Morgannwg University Health Board it is to be rolled out across Wales in 2017. In July 2019 it formed a partnership with HealthUnlocked, integrating their eSocial Prescription capability to enable more holistic, personalised care plans. References External links Social enterprises Electronic health records
Patients Know Best
Technology
https://en.wikipedia.org/wiki/Sirolimus
Sirolimus, also known as rapamycin and sold under the brand name Rapamune among others, is a macrolide compound that is used to coat coronary stents, prevent organ transplant rejection, treat a rare lung disease called lymphangioleiomyomatosis, and treat perivascular epithelioid cell tumour (PEComa). It has immunosuppressant functions in humans and is especially useful in preventing the rejection of kidney transplants. It is a mammalian target of rapamycin (mTOR) kinase inhibitor that reduces the sensitivity of T cells and B cells to interleukin-2 (IL-2), inhibiting their activity. This compound is also used in cardiovascular drug-eluting stent technologies to inhibit restenosis. It is produced by the bacterium Streptomyces hygroscopicus and was first isolated in 1972, from samples of Streptomyces hygroscopicus found on Easter Island; the compound was originally named rapamycin after the native name of the island, Rapa Nui. Sirolimus was initially developed as an antifungal agent, but this use was abandoned when it was discovered to have potent immunosuppressive and antiproliferative properties due to its ability to inhibit mTOR. It was approved by the U.S. Food and Drug Administration (FDA) in 1999. Hyftor (sirolimus gel) was approved for topical treatment of facial angiofibroma in the European Union in May 2023.

Medical uses

Sirolimus is indicated for the prevention of organ transplant rejection and for the treatment of lymphangioleiomyomatosis (LAM). Sirolimus (Fyarro), as protein-bound particles, is indicated for the treatment of adults with locally advanced unresectable or metastatic malignant perivascular epithelioid cell tumour (PEComa). In the EU, sirolimus, as Rapamune, is indicated for the prophylaxis of organ rejection in adults at low to moderate immunological risk receiving a renal transplant and, as Hyftor, is indicated for the treatment of facial angiofibroma associated with tuberous sclerosis complex.
Prevention of transplant rejection

The chief advantage sirolimus has over calcineurin inhibitors is its low toxicity toward kidneys. Transplant patients maintained on calcineurin inhibitors long-term tend to develop impaired kidney function or even kidney failure; this can be avoided by using sirolimus instead. It is particularly advantageous in patients with kidney transplants for hemolytic-uremic syndrome, as this disease is likely to recur in the transplanted kidney if a calcineurin inhibitor is used. However, on 7 October 2008, the FDA approved safety labeling revisions for sirolimus to warn of the risk for decreased renal function associated with its use. In 2009, the FDA notified healthcare professionals that a clinical trial conducted by Wyeth showed increased mortality in stable liver transplant patients after switching from a calcineurin inhibitor-based immunosuppressive regimen to sirolimus. A 2019 cohort study of nearly 10,000 lung transplant recipients in the US demonstrated significantly improved long-term survival using sirolimus plus tacrolimus instead of mycophenolate mofetil plus tacrolimus for immunosuppressive therapy starting at one year after transplant. Sirolimus can also be used alone, or in conjunction with a calcineurin inhibitor (such as tacrolimus) and/or mycophenolate mofetil, to provide steroid-free immunosuppression regimens. Impaired wound healing and thrombocytopenia are possible side effects of sirolimus; therefore, some transplant centers prefer not to use it immediately after the transplant operation, instead administering it only after a period of weeks or months. Its optimal role in immunosuppression has not yet been determined, and it remains the subject of a number of ongoing clinical trials.

Lymphangioleiomyomatosis

In May 2015, the FDA approved sirolimus to treat lymphangioleiomyomatosis (LAM), a rare, progressive lung disease that primarily affects women of childbearing age.
This made sirolimus the first drug approved to treat this disease. LAM involves lung tissue infiltration with smooth muscle-like cells carrying mutations of the tuberous sclerosis complex gene (TSC2). Loss of TSC2 gene function activates the mTOR signaling pathway, resulting in the release of lymphangiogenic growth factors. Sirolimus blocks this pathway. The safety and efficacy of sirolimus treatment of LAM were investigated in clinical trials that compared sirolimus treatment with a placebo group in 89 patients for 12 months. The patients were observed for 12 months after the treatment had ended. The most commonly reported side effects of sirolimus treatment of LAM were mouth and lip ulcers, diarrhea, abdominal pain, nausea, sore throat, acne, chest pain, leg swelling, upper respiratory tract infection, headache, dizziness, muscle pain and elevated cholesterol. Serious side effects including hypersensitivity and swelling (edema) have been observed in renal transplant patients. Because LAM is a rare condition, sirolimus received orphan drug designation for this indication. The safety of sirolimus treatment of LAM in people younger than 18 years old has not been tested. Coronary stent coating The antiproliferative effect of sirolimus has also been used in conjunction with coronary stents to prevent restenosis in coronary arteries following balloon angioplasty. The sirolimus is formulated in a polymer coating that affords controlled release through the healing period following coronary intervention. Several large clinical studies have demonstrated lower restenosis rates in patients treated with sirolimus-eluting stents than with bare-metal stents, resulting in fewer repeat procedures. A sirolimus-eluting coronary stent was marketed by Cordis, a division of Johnson & Johnson, under the tradename Cypher. However, this kind of stent may also increase the risk of vascular thrombosis.
Vascular malformations Sirolimus is used to treat vascular malformations. Treatment with sirolimus can decrease pain and the fullness of vascular malformations, improve coagulation levels, and slow the growth of abnormal lymphatic vessels. In recent years, sirolimus has emerged as a medical treatment option for both vascular tumors and vascular malformations. As an inhibitor of the mammalian target of rapamycin (mTOR), which integrates signals from the PI3K/AKT pathway to coordinate proper cell growth and proliferation, sirolimus acts as an antiproliferative agent and is therefore well suited to "proliferative" vascular tumors, controlling the tissue overgrowth caused by inappropriate activation of the PI3K/AKT/mTOR pathway. Angiofibromas Sirolimus has been used as a topical treatment of angiofibromas associated with tuberous sclerosis complex (TSC). Facial angiofibromas occur in 80% of patients with TSC, and the condition is very disfiguring. A retrospective review of English-language medical publications reporting on topical sirolimus treatment of facial angiofibromas found sixteen separate studies with positive patient outcomes after using the drug. The reports involved a total of 84 patients, and improvement was observed in 94% of subjects, especially if treatment began during the early stages of the disease. Sirolimus treatment was applied in several different formulations (ointment, gel, solution, and cream), ranging from 0.003 to 1% concentrations. Reported adverse effects included one case of perioral dermatitis, one case of cephalea, and four cases of irritation. In April 2022, sirolimus was approved by the FDA for treating angiofibromas.
Adverse effects The most common adverse reactions (≥30% occurrence, leading to a 5% treatment discontinuation rate) observed with sirolimus in clinical studies of organ rejection prophylaxis in individuals with kidney transplants include: peripheral edema, hypercholesterolemia, abdominal pain, headache, nausea, diarrhea, pain, constipation, hypertriglyceridemia, hypertension, increased creatinine, fever, urinary tract infection, anemia, arthralgia, and thrombocytopenia. The most common adverse reactions (≥20% occurrence, leading to an 11% treatment discontinuation rate) observed with sirolimus in clinical studies for the treatment of lymphangioleiomyomatosis are: peripheral edema, hypercholesterolemia, abdominal pain, headache, nausea, diarrhea, chest pain, stomatitis, nasopharyngitis, acne, upper respiratory tract infection, dizziness, and myalgia. The following adverse effects occurred in 3–20% of individuals taking sirolimus for organ rejection prophylaxis following a kidney transplant: Diabetes-like symptoms While sirolimus inhibition of mTORC1 appears to mediate the drug's benefits, it also inhibits mTORC2, which results in diabetes-like symptoms. This includes decreased glucose tolerance and insensitivity to insulin. Sirolimus treatment may additionally increase the risk of type 2 diabetes. In mouse studies, these symptoms can be avoided through the use of alternate dosing regimens or analogs such as everolimus or temsirolimus. Lung toxicity Lung toxicity is a serious complication associated with sirolimus therapy, especially in the case of lung transplants. The mechanism of the interstitial pneumonitis caused by sirolimus and other macrolide MTOR inhibitors is unclear, and may have nothing to do with the mTOR pathway. The interstitial pneumonitis is not dose-dependent, but is more common in patients with underlying lung disease. 
Lowered effectiveness of immune system There have been warnings about the use of sirolimus in transplants, where it may increase mortality due to an increased risk of infections. Cancer risk Sirolimus may increase an individual's risk for contracting skin cancers from exposure to sunlight or UV radiation, and risk of developing lymphoma. In studies, the skin cancer risk under sirolimus was lower than under other immunosuppressants such as azathioprine and calcineurin inhibitors, and lower than under placebo. Impaired wound healing Individuals taking sirolimus are at increased risk of experiencing impaired or delayed wound healing, particularly if they have a body mass index in excess of 30 kg/m2 (classified as obese). Interactions Sirolimus is metabolized by the CYP3A4 enzyme and is a substrate of the P-glycoprotein (P-gp) efflux pump; hence, inhibitors of either protein may increase sirolimus concentrations in blood plasma, whereas inducers of CYP3A4 and P-gp may decrease sirolimus concentrations in blood plasma. Pharmacology Pharmacodynamics Unlike the similarly named tacrolimus, sirolimus is not a calcineurin inhibitor, but it has a similar suppressive effect on the immune system. Sirolimus inhibits IL-2 and other cytokine receptor-dependent signal transduction mechanisms, via action on mTOR, and thereby blocks activation of T and B cells. Ciclosporin and tacrolimus inhibit the secretion of IL-2, by inhibiting calcineurin. The mode of action of sirolimus is to bind the cytosolic protein FK-binding protein 12 (FKBP12) in a manner similar to tacrolimus. Unlike the tacrolimus-FKBP12 complex, which inhibits calcineurin (PP2B), the sirolimus-FKBP12 complex inhibits the mTOR (mammalian Target Of Rapamycin, rapamycin being another name for sirolimus) pathway by directly binding to mTOR Complex 1 (mTORC1). mTOR has also been called FRAP (FKBP-rapamycin-associated protein), RAFT (rapamycin and FKBP target), RAPT1, or SEP. 
The earlier names FRAP and RAFT were coined to reflect the fact that sirolimus must bind FKBP12 first, and only the FKBP12-sirolimus complex can bind mTOR. However, mTOR is now the widely accepted name, since Tor was first discovered via genetic and molecular studies of sirolimus-resistant mutants of Saccharomyces cerevisiae that identified FKBP12, Tor1, and Tor2 as the targets of sirolimus and provided robust support that the FKBP12-sirolimus complex binds to and inhibits Tor1 and Tor2. Pharmacokinetics Sirolimus is metabolized by the CYP3A4 enzyme and is a substrate of the P-glycoprotein (P-gp) efflux pump. It has linear pharmacokinetics. In studies of N=6 and N=36 subjects, peak concentration was reached in 1.3 ± 0.5 hours, and terminal elimination was slow, with a half-life of around 60 ± 10 hours. Sirolimus was not found to affect the concentration of ciclosporin, which is also metabolized primarily by the CYP3A4 enzyme. The bioavailability of sirolimus is low, and the absorption of sirolimus into the blood stream from the intestine varies widely between patients, with some patients having up to eight times more exposure than others for the same dose. Drug levels are, therefore, taken to make sure patients get the right dose for their condition. This is determined by taking a blood sample before the next dose, which gives the trough level. However, good correlation is noted between trough concentration levels and drug exposure, known as area under the concentration-time curve, for both sirolimus (SRL) and tacrolimus (TAC) (SRL: r2 = 0.83; TAC: r2 = 0.82), so only one level need be taken to know its pharmacokinetic (PK) profile. PK profiles of SRL and of TAC are unaltered by simultaneous administration. Dose-corrected drug exposure of TAC correlates with SRL (r2 = 0.8), so patients have similar bioavailability of both. Chemistry Sirolimus is a natural product and macrocyclic lactone.
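The linear (first-order) kinetics and roughly 60-hour terminal half-life described under Pharmacokinetics above can be sketched numerically. This is a minimal one-compartment, first-order elimination model for illustration only; the numbers are the approximate values quoted in the text, not dosing guidance:

```python
import math

# Illustrative one-compartment, first-order elimination model using
# the approximate terminal half-life quoted above (~60 hours).
HALF_LIFE_H = 60.0                    # terminal half-life (hours), from the text
k_el = math.log(2) / HALF_LIFE_H      # elimination rate constant (1/h)

def concentration(c0: float, t_hours: float) -> float:
    """Fraction of peak concentration remaining t_hours after the peak,
    assuming first-order elimination: C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k_el * t_hours)

# With such a long half-life, most of the peak remains a full day later
# (~76% after 24 h), which is one reason a single pre-dose trough level
# tracks overall exposure (AUC) so closely.
remaining_24h = concentration(1.0, 24.0)
```

The slow elimination relative to a once-daily dosing interval is what makes the trough concentration a good stand-in for the full concentration-time curve.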
Biosynthesis The biosynthesis of the rapamycin core is accomplished by a type I polyketide synthase (PKS) in conjunction with a nonribosomal peptide synthetase (NRPS). The domains responsible for the biosynthesis of the linear polyketide of rapamycin are organized into three multienzymes, RapA, RapB, and RapC, which contain a total of 14 modules (figure 1). The three multienzymes are organized such that the first four modules of polyketide chain elongation are in RapA, the following six modules for continued elongation are in RapB, and the final four modules to complete the biosynthesis of the linear polyketide are in RapC. Then, the linear polyketide is modified by the NRPS, RapP, which attaches L-pipecolate to the terminal end of the polyketide, and then cyclizes the molecule, yielding the unbound product, prerapamycin. The core macrocycle, prerapamycin (figure 2), is then modified (figure 3) by an additional five enzymes, which lead to the final product, rapamycin. First, the core macrocycle is modified by RapI, a SAM-dependent O-methyltransferase (MTase), which O-methylates at C39. Next, a carbonyl is installed at C9 by RapJ, a cytochrome P-450 monooxygenase (P-450). Then, RapM, another MTase, O-methylates at C16. Finally, RapN, another P-450, installs a hydroxyl at C27, immediately followed by O-methylation by RapQ, a distinct MTase, at C27 to yield rapamycin. The biosynthetic genes responsible for rapamycin synthesis have been identified. As expected, three extremely large open reading frames (ORFs), designated rapA, rapB, and rapC, encode the three extremely large and complex multienzymes RapA, RapB, and RapC, respectively. The gene rapL has been established to code for a NAD+-dependent lysine cycloamidase, which converts L-lysine to L-pipecolic acid (figure 4) for incorporation at the end of the polyketide.
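The modular assembly line described above (four elongation modules in RapA, six in RapB, four in RapC, each condensation extending the chain by a two-carbon ketide unit) can be sketched as a toy bookkeeping model. The module counts come from the text; the seven-carbon starter tally and the simple carbon arithmetic are illustrative assumptions, not biosynthetic data:

```python
# Toy model of the rapamycin PKS assembly line: 14 elongation modules
# split across the three multienzymes, each Claisen condensation adding
# a two-carbon unit to the growing polyketide. Carbon counts here are a
# simplification for illustration only.
MODULES = {"RapA": 4, "RapB": 6, "RapC": 4}   # module counts from the text

def elongate(starter_carbons: int) -> int:
    """Carbons in the linear polyketide after all PKS modules,
    assuming each module contributes one two-carbon ketide unit."""
    total_modules = sum(MODULES.values())      # 14 modules in all
    return starter_carbons + 2 * total_modules

# Assumed starter: the shikimate-derived cyclohexene carboxylic acid
# unit (7 carbons). After elongation, the NRPS RapP appends
# L-pipecolate and closes the macrocycle (not modeled here).
chain_carbons = elongate(7)   # 7 + 2 * 14 = 35 in this simplified tally
```

This mirrors the division of labor in the text: PKS modules build the linear chain, and the NRPS performs chain termination and cyclization as a separate final step.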
The gene rapP, which is embedded between the PKS genes and translationally coupled to rapC, encodes an additional enzyme, an NRPS responsible for incorporating L-pipecolic acid and for chain termination and cyclization of prerapamycin. In addition, the genes rapI, rapJ, rapM, rapN, rapO, and rapQ have been identified as coding for tailoring enzymes that modify the macrocyclic core to give rapamycin (figure 3). Finally, rapG and rapH have been identified as coding for enzymes that have a positive regulatory role in the preparation of rapamycin through the control of rapamycin PKS gene expression. Biosynthesis of this 31-membered macrocycle begins as the loading domain is primed with the starter unit, 4,5-dihydroxycyclohex-1-enecarboxylic acid, which is derived from the shikimate pathway. Note that the cyclohexane ring of the starting unit is reduced during the transfer to module 1. The starting unit is then modified by a series of Claisen condensations with malonyl or methylmalonyl substrates, which are attached to an acyl carrier protein (ACP) and extend the polyketide by two carbons each. After each successive condensation, the growing polyketide is further modified according to the enzymatic domains that are present, which reduce and dehydrate it, thereby introducing the diversity of functionalities observed in rapamycin (figure 1). Once the linear polyketide is complete, L-pipecolic acid, which is synthesized from L-lysine by a lysine cycloamidase, is added to the terminal end of the polyketide by an NRPS. The NRPS then cyclizes the polyketide, giving prerapamycin, the first enzyme-free product. The macrocyclic core is then customized by a series of post-PKS enzymes through methylations by MTases and oxidations by P-450s to yield rapamycin. Research Cancer The antiproliferative effects of sirolimus may have a role in treating cancer.
When dosed appropriately, sirolimus can enhance the immune response to tumor targeting or otherwise promote tumor regression in clinical trials. Sirolimus seems to lower the cancer risk in some transplant patients. Sirolimus was shown to inhibit the progression of dermal Kaposi's sarcoma in patients with renal transplants. Other mTOR inhibitors, such as temsirolimus (CCI-779) or everolimus (RAD001), are being tested for use in cancers such as glioblastoma multiforme and mantle cell lymphoma. However, these drugs have a higher rate of fatal adverse events in cancer patients than control drugs. A combination therapy of doxorubicin and sirolimus has been shown to drive Akt-positive lymphomas into remission in mice. Akt signalling promotes cell survival in Akt-positive lymphomas and acts to prevent the cytotoxic effects of chemotherapy drugs, such as doxorubicin or cyclophosphamide. Sirolimus blocks Akt signalling and the cells lose their resistance to the chemotherapy. Bcl-2-positive lymphomas were completely resistant to the therapy; eIF4E-expressing lymphomas are not sensitive to sirolimus. Tuberous sclerosis complex Sirolimus also shows promise in treating tuberous sclerosis complex (TSC), a congenital disorder that predisposes those afflicted to benign tumor growth in the brain, heart, kidneys, skin, and other organs. After several studies conclusively linked mTOR inhibitors to remission in TSC tumors, specifically subependymal giant-cell astrocytomas in children and angiomyolipomas in adults, many US doctors began prescribing sirolimus (Wyeth's Rapamune) and everolimus (Novartis's RAD001) to TSC patients off-label. Numerous clinical trials using both rapamycin analogs, involving both children and adults with TSC, are underway in the United States. 
Effects on longevity mTOR, specifically mTORC1, was first shown to be important in aging in 2003, in a study on worms; sirolimus was shown to inhibit and slow aging in worms, yeast, and flies, and then to improve the condition of mouse models of various diseases of aging. Sirolimus was first shown to extend lifespan in wild-type mice in a study published by NIH investigators in 2009; the studies have been replicated in mice of many different genetic backgrounds. A study published in 2020 found late-life sirolimus dosing schedules enhanced mouse lifespan in a sex-specific manner: limited rapamycin exposure enhanced male but not female lifespan, providing evidence for sex differences in sirolimus response. The results are further supported by the finding that genetically modified mice with impaired mTORC1 signalling live longer. Sirolimus has potential for widespread use as a longevity-promoting drug, with evidence pointing to its ability to prevent age-associated decline of cognitive and physical health. In 2014, researchers at Novartis showed that a related compound, everolimus, increased elderly patients' immune response on an intermittent dose. This led to many in the anti-aging community self-experimenting with the compound. However, because of the different biochemical properties of sirolimus, the dosing is potentially very different from that of everolimus. Ultimately, due to known side-effects of sirolimus, as well as inadequate evidence for optimal dosing, it was concluded in 2016 that more research was required before sirolimus could be widely prescribed for this purpose. Two human studies on the effects of sirolimus (rapamycin) on longevity did not show statistically significant benefits. However, due to limitations in the studies, further research is needed to fully assess its potential in humans. 
Sirolimus has complex effects on the immune system: IL-12 increases and IL-10 decreases, which suggests an immunostimulatory response, while TNF and IL-6 are decreased, which suggests an immunosuppressive response. The duration of the inhibition and the exact extent to which mTORC1 and mTORC2 are inhibited play a role, but were not yet well understood according to a 2015 paper. Topical administration Researchers have shown that rapamycin, when applied as a topical preparation, can regenerate collagen and reverse clinical signs of aging in elderly patients. The concentrations are far lower than those used to treat angiofibromas. SARS-CoV-2 Rapamycin has been proposed as a treatment for severe acute respiratory syndrome coronavirus 2 insofar as its immunosuppressive effects could prevent or reduce the cytokine storm seen in very serious cases of COVID-19. Moreover, inhibition of cell proliferation by rapamycin could reduce viral replication. Atherosclerosis Rapamycin can accelerate the degradation of oxidized LDL cholesterol in endothelial cells, thereby lowering the risk of atherosclerosis. Oxidized LDL cholesterol is a major contributor to atherosclerosis. Lupus As of 2016, studies in cells, animals, and humans had suggested that mTOR activation is a process underlying systemic lupus erythematosus and that inhibiting mTOR with rapamycin may be a disease-modifying treatment. As of 2016, rapamycin had been tested in small clinical trials in people with lupus. Lymphatic malformation (LM) Lymphatic malformation, also known as lymphangioma or cystic hygroma, is an abnormal growth of lymphatic vessels that usually affects children around the head and neck area and, more rarely, involves the tongue, causing macroglossia. LM is caused by a PIK3CA mutation during lymphangiogenesis early in gestational cell formation, causing the malformation of lymphatic tissue.
Treatment often consists of removal of the affected tissue via excision, laser ablation or sclerotherapy, but the rate of recurrence can be high and surgery can have complications. Sirolimus has shown evidence of being an effective treatment in alleviating symptoms and reducing the size of the malformation by way of altering the mTOR pathway in lymphangiogenesis. Although this is an off-label use of the drug, sirolimus has been shown to be an effective treatment for both microcystic and macrocystic LM. However, more research is needed to develop targeted, effective treatment therapies for LM. Graft-versus-host disease Due to its immunosuppressant activity, rapamycin has been assessed as a prophylactic or treatment agent for graft-versus-host disease (GVHD), a complication of hematopoietic stem cell transplantation. While clinical trials have yielded mixed results, pre-clinical studies have shown that rapamycin can mitigate GVHD by increasing the proliferation of regulatory T cells, inhibiting cytotoxic T cells and reducing the differentiation of effector T cells. Applications in biology research Rapamycin is used in biology research as an agent for chemically induced dimerization. In this application, rapamycin is added to cells expressing two fusion constructs, one of which contains the rapamycin-binding FRB domain from mTOR and the other of which contains an FKBP domain. Each fusion protein also contains additional domains that are brought into proximity when rapamycin induces binding of FRB and FKBP. In this way, rapamycin can be used to control and study protein localization and interactions. Veterinary uses A number of veterinary medicine teaching hospitals are participating in a long-term clinical study examining the effect of rapamycin on the longevity of dogs.
In the canon law of the Catholic Church, the computation of time, also translated as the reckoning of time (Latin: ), is the manner by which legally-specified periods of time are calculated according to the norm of the canons on the computation of time. The application of laws frequently involves a question of time: generally three months must elapse after their promulgation before they go into effect; some obligations have to be fulfilled within a certain number of days, or weeks, or months. Hence the need of the rules for the computation of time. With the Code of 1917 and the reformed Code of 1983, the legislator has formulated these rules with a clearness and precision that they never had before. Scope and nature of rules These rules hold in all canonical matters: universal ordinances, precepts, rescripts, privileges, judicial sentences; but they have nothing to do with problems of chronology or such questions as the determination of the date for the celebration of Easter. They are not absolute rules, but should be followed when no others have been expressly laid down; liturgical laws regarding, for example, the beginning of the ecclesiastical year, of the solemnity of a feast, remain unchanged. The former use of 'time' in indulgences (prior to Paul VI's revision of sacred indulgences) had special provisions in the 1917 Code (cc. 921, 922, 923, 931), and it is stated that in what pertains to the fulfilment or enforcement of contracts, the prescriptions of the civil law should be complied with, unless there has been some other agreement to the contrary. Nothing prevents inferior legislators from adopting different rules for the application of their own laws, and it is clearly implied that private persons themselves have the same right in matters which depend on their will, like determining when an article sold should be delivered, paid for, etc. 
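The three-month interval between promulgation and legal effect mentioned above is reckoned in whole calendar months. A minimal sketch of that computation, using Python's standard library; the helper name and the specific dates are illustrative assumptions, not part of canonical legislation:

```python
import datetime

def add_calendar_months(date: datetime.date, months: int) -> datetime.date:
    """Advance a date by whole calendar months, keeping the same
    day-of-month (clamped to the last day of shorter target months)."""
    month_index = date.month - 1 + months
    year = date.year + month_index // 12
    month = month_index % 12 + 1
    # Last day of the target month: first day of the following month minus one.
    last_day = (datetime.date(year + month // 12, month % 12 + 1, 1)
                - datetime.timedelta(days=1)).day
    return datetime.date(year, month, min(date.day, last_day))

# A law promulgated on 2 November with a three-month vacatio legis
# takes effect on 2 February of the following year (year chosen for
# illustration).
effective = add_calendar_months(datetime.date(2023, 11, 2), 3)
```

Counting by calendar months rather than by a fixed 90 days is what distinguishes this reckoning from the 30-day "canonical month".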
Useful and continuous time By useful time is meant in law the time granted for the exercise or prosecution of one's rights in such a way that it does not run if one is prevented from using it through ignorance or some other cause. Continuous time suffers no delay or interruption from one's ignorance or impossibility to act. Thus colleges which possess the right of appointment to a vacant office are given three months of useful time to exercise it, which implies that if they were prevented, v.g. for ten days, from meeting for the election they would have that many additional days to exercise their right. On the contrary, capitular chapters have eight days after the vacancy of the episcopal see has been made known to them to elect a Vicar Capitular, and it is specified that if no election has been made within that time, whatever be the cause, that right devolves to the Metropolitan (cf. 1917 CIC cc. 161, 432). Months Months are computed according to the calendar from the date of publication. A "canonical month" (in contradistinction to a "calendar month") is a period of 30 days, while a "calendar month" is a continuous month. Vacatio legis The vacatio legis is computed according to the calendar; for example, if a law is promulgated on 2 November, and the vacatio legis is 3 months, then the law takes effect on 2 February. So a universal law has a vacatio legis of approximately 90 days—3 months taken according to the calendar—while a particular law has a vacatio legis of approximately 30 days—1 month taken according to the calendar—unless specified to the contrary. History From 1918 to 1983, Book I, Title III of the 1917 Code of Canon Law regulated the computation of time in the Latin Church.
Intensive crop farming is a modern industrialized form of crop farming. Intensive crop farming's methods include innovation in agricultural machinery, farming methods, genetic engineering technology, techniques for achieving economies of scale in production, the creation of new markets for consumption, patent protection of genetic information, and global trade. These methods are widespread in developed nations. The practice of industrial agriculture is a relatively recent development in the history of agriculture, and the result of scientific discoveries and technological advances. Innovations in agriculture beginning in the late 19th century generally parallel developments in mass production in other industries that characterized the latter part of the Industrial Revolution. The identification of nitrogen and phosphorus as critical factors in plant growth led to the manufacture of synthetic fertilizers, making more intensive uses of farmland for crop production possible. Features Certain crops have proven more amenable to intensive farming than others.
large scale – hundreds or thousands of acres of a single crop (much more than can be absorbed into the local or regional market);
monoculture – large areas of a single crop, often raised from year to year on the same land, or with little crop rotation;
agrichemicals – reliance on imported, synthetic fertilizers and pesticides to provide nutrients and to mitigate pests and diseases, these applied on a regular schedule;
hybrid seed – use of specialized hybrids designed to favor large scale distribution (e.g. ability to ripen off the vine, to withstand shipping and handling);
genetically engineered crops – use of genetically modified varieties designed for large scale production (e.g. ability to withstand selected herbicides);
large scale irrigation – heavy water use, and in some cases, growing of crops in otherwise unsuitable regions by extreme use of water (e.g. rice paddies on arid land);
high mechanization – automated machinery that sustains and harvests crops. Criticism Critics of intensively farmed crops cite a wide range of concerns. On the food quality front, it is held by critics that quality is reduced when crops are bred and grown primarily for cosmetic and shipping characteristics. Environmentally, industrial farming of crops is claimed to be responsible for loss of biodiversity, degradation of soil quality, soil erosion, food toxicity (pesticide residues) and pollution (through agrichemical build-ups and runoff, and use of fossil fuels for agrichemical manufacture and for farm machinery and long-distance distribution). History The projects within the Green Revolution spread technologies that had already existed, but had not been widely used outside of industrialized nations. These technologies included pesticides, irrigation projects, and synthetic nitrogen fertilizer. The novel technological development of the Green Revolution was the production of what some referred to as “miracle seeds.” Scientists created strains of maize, wheat, and rice that are generally referred to as HYVs or “high-yielding varieties.” HYVs have an increased nitrogen-absorbing potential compared to other varieties. Since cereals that absorbed extra nitrogen would typically lodge, or fall over before harvest, semi-dwarfing genes were bred into their genomes. Norin 10 wheat, a variety developed by Orville Vogel from Japanese dwarf wheat varieties, was instrumental in developing Green Revolution wheat cultivars.
IR8, the first widely implemented HYV rice to be developed by IRRI, was created through a cross between an Indonesian variety named “Peta” and a Chinese variety named “Dee Geo Woo Gen.” With the availability of molecular genetics in Arabidopsis and rice, the mutant genes responsible (reduced height (rht), gibberellin insensitive (gai1) and slender rice (slr1)) have been cloned and identified as cellular signalling components of gibberellic acid, a phytohormone involved in regulating stem growth via its effect on cell division. Stem growth in the mutant background is significantly reduced, leading to the dwarf phenotype. Photosynthetic investment in the stem is reduced dramatically as the shorter plants are inherently more stable mechanically. Assimilates become redirected to grain production, amplifying in particular the effect of chemical fertilisers on commercial yield. HYVs significantly outperform traditional varieties in the presence of adequate irrigation, pesticides, and fertilizers. In the absence of these inputs, traditional varieties may outperform HYVs. One criticism of HYVs is that they were developed as F1 hybrids, meaning they need to be purchased by a farmer every season rather than saved from previous seasons, thus increasing a farmer's cost of production. Examples Wheat (modern management techniques) Wheat is a grass that is cultivated worldwide. Globally, it is the most important human food grain and ranks second in total production as a cereal crop behind maize; the third being rice. Wheat and barley were the first cereals known to have been domesticated. Cultivation and repeated harvesting and sowing of the grains of wild grasses led to the domestication of wheat through selection of mutant forms with tough ears which remained intact during harvesting, and larger grains. Because of the loss of seed dispersal mechanisms, domesticated wheats have limited capacity to propagate in the wild.
Agricultural cultivation using horse collar leveraged plows (3000 years ago) increased cereal grain productivity yields, as did the use of seed drills, which replaced the broadcast sowing of seed in the 18th century. Yields of wheat continued to increase, as new land came under cultivation and with improved agricultural husbandry involving the use of fertilizers, threshing machines and reaping machines (the 'combine harvester'), tractor-drawn cultivators and planters, and better varieties (see Green Revolution and Norin 10 wheat). With population growth rates falling, while yields continue to rise, the area devoted to wheat may now begin to decline for the first time in modern human history. While winter wheat lies dormant during a winter freeze, wheat normally requires between 110 and 130 days between planting and harvest, depending upon climate, seed type, and soil conditions. Crop management decisions require knowledge of the stage of development of the crop. In particular, spring applications of fertilizers, herbicides, fungicides, and growth regulators are typically made at specific stages of plant development. For example, current recommendations often indicate that the second application of nitrogen be done when the ear (not visible at this stage) is about 1 cm in size (Z31 on the Zadoks scale). Maize (mechanical harvesting) Maize was planted by the Native Americans in hills, in a complex system known to some as the Three Sisters: beans used the corn plant for support, and squashes provided ground cover to stop weeds. This method was replaced by single-species hill planting, in which hills spaced apart were each planted with 3 or 4 seeds, a method still used by home gardeners. A later technique was checked corn, in which hills were placed apart in each direction, allowing cultivators to run through the field in two directions. In more arid lands this was altered and seeds were planted in the bottom of deep furrows to collect water.
Modern technique plants maize in rows, which allows for cultivation while the plant is young, although the hill technique is still used in the cornfields of some Native American reservations. The Haudenosaunee are preparing for climate change through seed banking. As the climate changes, crops become viable in areas where they previously could not grow, which will open new growing areas for maize. In North America, fields are often planted in a two-crop rotation with a nitrogen-fixing crop, often alfalfa in cooler climates and soybeans in regions with longer summers. Sometimes a third crop, winter wheat, is added to the rotation. Fields are usually plowed each year, although no-till farming is increasing in use. Many of the maize varieties grown in the United States and Canada are hybrids. Over half of the corn area planted in the United States has been genetically modified using biotechnology to express agronomic traits such as pest resistance or herbicide resistance. Before World War II, most maize in North America was harvested by hand (as it still is in most of the other countries where it is grown). This often involved large numbers of workers and associated social events. Some one- and two-row mechanical pickers were in use, but the corn combine was not adopted until after the war. Whether picked by hand or by mechanical picker, the entire ear is harvested, which then requires the separate operation of a corn sheller to remove the kernels from the ear. Whole ears of corn were often stored in corn cribs, and whole ears remain a suitable form for some livestock feeding uses. Few modern farms store maize in this manner. Most harvest the grain from the field and store it in bins. The combine with a corn head (with points and snap rolls instead of a reel) does not cut the stalk; it simply pulls the stalk down. The stalk continues downward and is crumpled into a mangled pile on the ground. 
The ear of corn is too large to pass through a slit in a plate, and the snap rolls pull the ear of corn from the stalk so that only the ear and husk enter the machinery. The combine separates the husk and the cob, keeping only the kernels. Soybean (genetic modification) Soybeans are one of the "biotech food" crops that are being genetically modified, and GMO soybeans are being used in an increasing number of products. Monsanto Company is the world's leader in genetically modified soy for the commercial market. In 1995, Monsanto introduced "Roundup Ready" (RR) soybeans, which have had a copy of a gene from the bacterium Agrobacterium sp. strain CP4 inserted into their genome, by means of a gene gun, allowing the transgenic plant to survive being sprayed with the non-selective herbicide glyphosate. Glyphosate, the active ingredient in Roundup, kills conventional soybeans. The bacterial gene encodes EPSP (5-enolpyruvyl shikimic acid-3-phosphate) synthase. Soybean also has a version of this gene, but the soybean version is sensitive to glyphosate, while the CP4 version is not. RR soybeans allow a farmer to reduce tillage or even to sow the seed directly into an unplowed field, known as 'no-till' or conservation tillage. No-till agriculture has many advantages, greatly reducing soil erosion and creating better wildlife habitat; it also saves fossil fuels and sequesters carbon dioxide, a greenhouse gas. In 1997, about 8% of all soybeans cultivated for the commercial market in the United States were genetically modified. In 2006, the figure was 89%. As with other "Roundup Ready crops", concern is expressed over damage to biodiversity. However, the RR gene has been bred into so many different soybean cultivars that the genetic modification itself has not resulted in any decline of genetic diversity. Tomato (hydroponics) The largest commercial hydroponics facility in the world is Eurofresh Farms in Willcox, Arizona, which sold more than 200 million pounds of tomatoes in 2007. 
Eurofresh's glasshouses represent about a third of the commercial hydroponic greenhouse area in the U.S. Eurofresh does not consider their tomatoes organic, but they are pesticide-free. They are grown in rockwool with top irrigation. Some commercial installations use no pesticides or herbicides, preferring integrated pest management techniques. There is often a price premium willingly paid by consumers for produce which is labeled "organic". Some states in the USA require soil cultivation as a condition of organic certification. There are also overlapping and somewhat contradictory rules established by the US Federal Government, so some food grown with hydroponics can be certified organic. Hydroponically grown plants can also be exceptionally clean, because the growing environment is controlled and soil is kept out of the food supply. Hydroponics also saves a great deal of water; it can use as little as 1/20 the amount of a conventional farm to produce the same amount of food. The water table can be impacted by the water use and run-off of chemicals from farms, but hydroponics may minimize impact as well as having the advantage that water use and water returns are easier to measure. This can save the farmer money by allowing reduced water use and the ability to measure consequences to the land around a farm. The environment in a hydroponics greenhouse is tightly controlled for maximum efficiency, and this new mindset is called soil-less/controlled-environment agriculture (S/CEA). With this, growers can produce ultra-premium foods anywhere in the world, regardless of temperature and growing seasons. Growers monitor the temperature, humidity, and pH level constantly. See also Intensive animal farming Environmental impact of agriculture Climate change Non-food crop References Crops Agricultural soil science Crops
Intensive crop farming
Chemistry
2,638
33,287,718
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2032
In molecular biology, glycoside hydrolase family 32 is a family of glycoside hydrolases, which are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and is also discussed at CAZypedia, an online encyclopedia of carbohydrate-active enzymes. Family 32 glycosyl hydrolases comprise two distinct domains: an N-terminal domain, which forms a five-bladed beta-propeller, and a C-terminal domain, which forms a beta-sandwich structure. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 32
Biology
192
10,459,464
https://en.wikipedia.org/wiki/Cerf%20theory
In mathematics, at the junction of singularity theory and differential topology, Cerf theory is the study of families of smooth real-valued functions on a smooth manifold M, their generic singularities and the topology of the subspaces these singularities define, as subspaces of the function space. The theory is named after Jean Cerf, who initiated it in the late 1960s. An example Marston Morse proved that, provided M is compact, any smooth function can be approximated by a Morse function. Thus, for many purposes, one can replace arbitrary functions on M by Morse functions. As a next step, one could ask, 'if you have a one-parameter family of functions which start and end at Morse functions, can you assume the whole family is Morse?' In general, the answer is no. Consider, for example, the one-parameter family of functions on the real line given by f_t(x) = (1/3)x^3 − tx. At time t = −1 it has no critical points, but at time t = 1 it is a Morse function with two critical points, at x = ±1. Cerf showed that a one-parameter family of functions between two Morse functions can be approximated by one that is Morse at all but finitely many degenerate times. The degeneracies involve a birth/death transition of critical points, as in the above example when, at t = 0, an index 0 and an index 1 critical point are created as t increases. A stratification of an infinite-dimensional space Returning to the general case where M is a compact manifold, let Morse(M) denote the space of Morse functions on M, and F(M) the space of real-valued smooth functions on M. Morse proved that Morse(M) is an open and dense subset of F(M) in the C^∞ topology. For the purposes of intuition, here is an analogy. Think of the Morse functions as the top-dimensional open stratum in a stratification of F(M) (we make no claim that such a stratification exists, but suppose one does). Notice that in stratified spaces, the co-dimension 0 open stratum is open and dense. 
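The birth/death behaviour of critical points in such a family can be checked directly. The sketch below assumes the standard cubic family f_t(x) = (1/3)x^3 − tx (a reconstruction of the example above; the original formula was lost in extraction), whose critical points are the real roots of f_t'(x) = x^2 − t:

```python
import math

def critical_points(t):
    """Critical points of f_t(x) = (1/3)*x**3 - t*x, i.e. real roots of x**2 - t."""
    if t < 0:
        return []                       # no critical points: f_t is Morse, vacuously
    r = math.sqrt(t)
    return [0.0] if r == 0 else [-r, r]

# t < 0: no critical points at all.
assert critical_points(-1.0) == []
# t = 0: a single degenerate (cubic) critical point -- the birth/death moment.
assert critical_points(0.0) == [0.0]
# t > 0: two Morse critical points (a local maximum and a local minimum).
assert critical_points(1.0) == [-1.0, 1.0]
```

The sign of f''(x) = 2x at x = −1 and x = +1 confirms the index-1 (local max) and index-0 (local min) pair created as t passes through 0.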
For notational purposes, reverse the conventions for indexing the stratifications in a stratified space, and index the open strata not by their dimension, but by their co-dimension. This is convenient since F(M) is infinite-dimensional if M is not a finite set. By assumption, the open co-dimension 0 stratum of F(M) is Morse(M). In a stratified space, the co-dimension 1 stratum is frequently disconnected. The essential property of the co-dimension 1 stratum is that any path in F(M) which starts and ends in Morse(M) can be approximated by a path that intersects the co-dimension 1 stratum transversely in finitely many points, and does not intersect any stratum of co-dimension 2 or more. Thus Cerf theory is the study of the positive co-dimension strata of F(M), i.e. the strata of co-dimension i for i ≥ 1. In the case of the family f_t(x) = (1/3)x^3 − tx, only for t = 0 is the function not Morse, and it has a cubic degenerate critical point corresponding to the birth/death transition. A single time parameter, statement of theorem The Morse Theorem asserts that if f is a Morse function, then near a critical point p of index k it is conjugate to a function of the form f(x_1, …, x_n) = f(p) − x_1^2 − ⋯ − x_k^2 + x_{k+1}^2 + ⋯ + x_n^2, where 0 ≤ k ≤ n. Cerf's one-parameter theorem asserts the essential property of the co-dimension one stratum. Precisely, if f_t, t ∈ [0, 1], is a one-parameter family of smooth functions on M with f_0 and f_1 Morse, then there exists a smooth one-parameter family F_t such that F_0 = f_0, F_1 = f_1, and F is uniformly close to f in the C^k topology on functions M × [0, 1] → R. Moreover, F_t is Morse at all but finitely many times. At a non-Morse time the function has only one degenerate critical point p, and near that point the family is conjugate to the family g_t(x_1, …, x_n) = g(p) + x_1^3 ± t·x_1 − x_2^2 − ⋯ − x_k^2 + x_{k+1}^2 + ⋯ + x_n^2. With the minus sign this is a one-parameter family of functions where two critical points are created (as t increases), and with the plus sign it is a one-parameter family of functions where two critical points are destroyed. Origins The PL-Schoenflies problem for the 2-sphere in R^3 was solved by J. W. Alexander in 1924. His proof was adapted to the smooth case by Morse and Emilio Baiada. 
The essential property was used by Cerf in order to prove that every orientation-preserving diffeomorphism of the 3-sphere is isotopic to the identity, seen as a one-parameter extension of the Schoenflies theorem. The corollary at the time had wide implications in differential topology. The essential property was later used by Cerf to prove the pseudo-isotopy theorem for high-dimensional simply-connected manifolds. The proof is a one-parameter extension of Stephen Smale's proof of the h-cobordism theorem (the rewriting of Smale's proof into the functional framework was done by Morse, and also by John Milnor and by Cerf, André Gramain, and Bernard Morin following a suggestion of René Thom). Cerf's proof is built on the work of Thom and John Mather. A useful modern summary of Thom and Mather's work from that period is the book of Marty Golubitsky and Victor Guillemin. Applications Beside the above-mentioned applications, Robion Kirby used Cerf theory as a key step in justifying the Kirby calculus. Generalization A stratification of the complement of an infinite co-dimension subspace of the space of smooth maps was eventually developed by Francis Sergeraert. During the seventies, the classification problem for pseudo-isotopies of non-simply connected manifolds was solved by Allen Hatcher and John Wagoner, who discovered algebraic K-theoretic obstructions on π_0 of the space of pseudo-isotopies, and by Kiyoshi Igusa, who discovered obstructions of a similar nature on π_1. References Differential topology Singularity theory
Cerf theory
Mathematics
1,137
75,659,247
https://en.wikipedia.org/wiki/Beibei%20Wang%20%28engineer%29
Beibei Wang (born 1983) is a Chinese-American electrical engineer known for her research in wireless sensor networks, cognitive radio, and the use of cooperative game theory in wireless communication. She is vice president for research at Origin Wireless, Inc. Education and career Wang earned a bachelor's degree in electrical engineering in 2004 from the University of Science and Technology of China. She completed her Ph.D. at the University of Maryland, College Park, in 2009. Her doctoral dissertation, Dynamic Spectrum Allocation and Sharing in Cognitive Cooperative Networks, was supervised by K. J. Ray Liu. After postdoctoral research at the University of Maryland, she worked for Qualcomm from 2010 to 2014. In 2015, she joined Origin Wireless, which her advisor had founded in 2013. Books Wang is the coauthor of Cognitive Radio Networking and Security: A Game-Theoretic View (2010) Wireless AI: Wireless Sensing, Positioning, IoT, and Communications (2019) Recognition Wang was named an IEEE Fellow, in the 2024 class of fellows, "for contributions to wireless sensing and cognitive communications". References External links Home page 1983 births Living people Chinese electrical engineers 21st-century Chinese women engineers 21st-century Chinese engineers American electrical engineers American women engineers Women electrical engineers Game theorists University of Science and Technology of China alumni University of Maryland, College Park alumni Fellows of the IEEE
Beibei Wang (engineer)
Mathematics
276
11,436,554
https://en.wikipedia.org/wiki/Cercospora%20zebrina
Cercospora zebrina is a fungal plant pathogen. References zebrina Fungal plant pathogens and diseases Fungus species Fungi described in 1877
Cercospora zebrina
Biology
31
4,415,117
https://en.wikipedia.org/wiki/Uranyl%20zinc%20acetate
Uranyl zinc acetate (ZnUO2(CH3COO)4) is a compound of uranium. Uranyl zinc acetate is used as a laboratory reagent in the determination of sodium concentrations of solutions using a method of quantitatively precipitating sodium with uranyl zinc acetate and gravimetrically determining the sodium as uranyl zinc sodium acetate, (UO2)3ZnNa(CH3CO2)9·6H2O. The presence of caesium and rubidium does not interfere with this reaction, but the presence of potassium and lithium must be removed prior to analysis. This method was important to determine Na in urine for diagnostic purposes. Zinc uranyl acetate is sometimes called "sodium reagent" since pale yellow NaZn(UO2)3(C2H3O2)9 is one of the very few insoluble sodium compounds. Laboratory use Uranyl zinc acetate has been used as a catalyst for the demethoxylation of toluene-2,4-dicarbamate into toluene-2,4-diisocyanate (TDI). References Uranyl compounds Zinc compounds Acetates Coordination complexes
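The gravimetric step described above amounts to multiplying the mass of the weighed precipitate by a gravimetric factor, the fraction of the precipitate's mass that is sodium. A quick sketch of that arithmetic for (UO2)3ZnNa(CH3CO2)9·6H2O (atomic masses are rounded standard values, so treat the result as approximate):

```python
# Gravimetric factor for sodium determined as uranyl zinc sodium acetate,
# (UO2)3ZnNa(CH3CO2)9 . 6H2O.  Atomic masses in g/mol, rounded.
MASS = {"U": 238.03, "O": 16.00, "Zn": 65.38, "Na": 22.99, "C": 12.011, "H": 1.008}

def formula_mass(counts):
    """Molar mass of a formula given as an {element: count} tally."""
    return sum(MASS[el] * n for el, n in counts.items())

# Element tally for (UO2)3 Zn Na (C2H3O2)9 . 6H2O:
#   U: 3, Zn: 1, Na: 1, C: 2*9 = 18, H: 3*9 + 2*6 = 39, O: 2*3 + 2*9 + 6 = 30
precipitate = formula_mass({"U": 3, "Zn": 1, "Na": 1, "C": 18, "H": 39, "O": 30})

# Grams of sodium per gram of weighed precipitate:
factor = MASS["Na"] / precipitate
```

The factor comes out near 1.5%, which illustrates why the method is sensitive: a small amount of sodium yields a large, easily weighable mass of precipitate.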
Uranyl zinc acetate
Chemistry
261
8,342,547
https://en.wikipedia.org/wiki/Strontium%20sulfate
Strontium sulfate (SrSO4) is the sulfate salt of strontium. It is a white crystalline powder and occurs in nature as the mineral celestine. It is poorly soluble in water to the extent of 1 part in 8,800. It is more soluble in dilute HCl and nitric acid and appreciably soluble in alkali chloride solutions (e.g. sodium chloride). Structure Strontium sulfate is a polymeric material, isostructural with barium sulfate. Crystallized strontium sulfate is utilized by a small group of radiolarian protozoa, called the Acantharea, as a main constituent of their skeleton. Applications and chemistry Strontium sulfate is of interest as a naturally occurring precursor to other strontium compounds, which are more useful. In industry it is converted to the carbonate for use as ceramic precursor and the nitrate for use in pyrotechnics. The low aqueous solubility of strontium sulfate can lead to scale formation in processes where these ions meet. For example, it can form on surfaces of equipment in underground oil wells depending on the groundwater conditions. References Strontium compounds Sulfates Pyrotechnic colorants
Strontium sulfate
Chemistry
255
69,543,970
https://en.wikipedia.org/wiki/Somatrogon
Somatrogon, sold under the brand name Ngenla, is a medication for the treatment of growth hormone deficiency. Somatrogon is a glycosylated protein constructed from human growth hormone and a small part of human chorionic gonadotropin, which is appended to both the N-terminus and the C-terminus. Somatrogon is a human growth hormone analog. The most common side effects include reactions at the site of injection, headache, and fever. Somatrogon was approved for medical use in Australia in November 2021, in the European Union in February 2022, and in the United States in June 2023. Medical uses Somatrogon is indicated for the treatment of children who have growth failure due to inadequate secretion of endogenous growth hormone. History The US Food and Drug Administration (FDA) approved somatrogon based on one clinical trial (NCT02968004) of 224 children with growth hormone deficiency and short stature. The trial was conducted at 84 sites in 24 countries including Argentina, Australia, Bulgaria, Belarus, Canada, Colombia, Germany, Georgia, Greece, India, Israel, Italy, Mexico, New Zealand, Poland, South Korea, Russia, Spain, Taiwan, Turkey, Ukraine, the United Kingdom, Vietnam, and the United States. The trial was used to assess efficacy, safety, benefits, and side effects: children aged 3 to 12 years old were assigned at random to weekly somatrogon or another daily approved growth hormone for 52 weeks. Society and culture Legal status In December 2021, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Ngenla, intended for the treatment of growth hormone deficiency in children and adolescents from three years of age. The applicant for this medicinal product is Pfizer Europe MA EEIG. Somatrogon was approved for medical use in the European Union in February 2022. 
Names Somatrogon is the international nonproprietary name. References Further reading Growth factors Orphan drugs Drugs developed by Pfizer
Somatrogon
Chemistry
443
23,970,675
https://en.wikipedia.org/wiki/Stokes%20operator
The Stokes operator, named after George Gabriel Stokes, is an unbounded linear operator used in the theory of partial differential equations, specifically in the fields of fluid dynamics and electromagnetics. Definition If we define P as the Leray projection onto divergence-free vector fields, then the Stokes operator A is defined by A = −PΔ, where Δ is the Laplacian. Since A is unbounded, we must also give its domain of definition, which is defined as D(A) = (H^2(Ω))^n ∩ V, where V = {u ∈ (H_0^1(Ω))^n : div u = 0}. Here, Ω is a bounded open set in R^n (usually n = 2 or 3), H^2 and H_0^1 are the standard Sobolev spaces, and the divergence of u is taken in the distribution sense. Properties For a given domain Ω which is open, bounded, and has smooth boundary, the Stokes operator is a self-adjoint positive-definite operator with respect to the L^2 inner product. It has an orthonormal basis of eigenfunctions {w_k} corresponding to eigenvalues {λ_k} which satisfy 0 < λ_1 ≤ λ_2 ≤ ⋯ and λ_k → ∞ as k → ∞. Note that the smallest eigenvalue is unique and non-zero. These properties allow one to define powers of the Stokes operator. Let s be a real number. We define A^s by its action on u ∈ D(A): A^s u = Σ_k λ_k^s (u, w_k) w_k, where (·, ·) is the L^2(Ω) inner product. The inverse A^(−1) of the Stokes operator is a bounded, compact, self-adjoint operator in the space H of square-integrable divergence-free vector fields whose normal component vanishes on the boundary in the sense of the trace operator. Furthermore, A^(−1) is injective. References Constantin, Peter and Foias, Ciprian. Navier-Stokes Equations, University of Chicago Press, (1988) Linear algebra Differential equations
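The spectral definition of operator powers, A^s u = Σ_k λ_k^s (u, w_k) w_k, can be illustrated numerically. The sketch below uses a discrete sine basis and Dirichlet-Laplacian-style eigenvalues as stand-ins (these are NOT the actual Stokes eigenfunctions, which depend on the domain); it only demonstrates the construction and the semigroup property A^(1/2) A^(1/2) = A:

```python
import numpy as np

# Stand-in orthonormal eigenbasis: the symmetric, orthogonal DST-I matrix.
# Row k-1 of W plays the role of the eigenfunction w_k.
N = 64
k = np.arange(1, N + 1)
W = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(k, k) / (N + 1))
lam = (np.pi * k) ** 2          # stand-in eigenvalues: 0 < lam_1 < lam_2 < ...

def A_pow(u, s):
    # Project u onto the eigenbasis, scale coefficient k by lam_k**s, reassemble.
    return W.T @ (lam ** s * (W @ u))

u = np.random.default_rng(0).standard_normal(N)
# Semigroup property of fractional powers: A^(1/2) applied twice equals A.
assert np.allclose(A_pow(A_pow(u, 0.5), 0.5), A_pow(u, 1.0))
```

Negative s gives the compact inverse in the same way (A_pow(u, -1.0)), which is why the eigenvalue growth λ_k → ∞ corresponds to compactness of A^(−1).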
Stokes operator
Mathematics
304
7,938,869
https://en.wikipedia.org/wiki/Interrupts%20in%2065xx%20processors
The 65xx family of microprocessors, consisting of the MOS Technology 6502 and its derivatives, the WDC 65C02, WDC 65C802 and WDC 65C816, and CSG 65CE02, all handle interrupts in a similar fashion. There are three hardware interrupt signals common to all 65xx processors and one software interrupt, the BRK instruction. The WDC 65C816 adds a fourth hardware interrupt—ABORT, useful for implementing virtual memory architectures—and the COP software interrupt instruction (also present in the 65C802), intended for use in a system with a coprocessor of some type (e.g., a floating-point processor). Interrupt types The hardware interrupt signals are all active low, and are as follows: RESET: a reset signal, level-triggered; NMI: a non-maskable interrupt, edge-triggered; IRQ: a maskable interrupt, level-triggered; ABORT: a special-purpose, non-maskable interrupt (65C816 only, see below), level-triggered. The detection of a RESET signal causes the processor to enter a system initialization period of six clock cycles, after which it sets the interrupt request disable flag in the status register and loads the program counter with the values stored at the processor initialization vector ($FFFC–$FFFD) before commencing execution. If operating in native mode, the 65C816/65C802 are switched back to emulation mode and stay there until returned to native mode under software control. The detection of an NMI or IRQ signal, as well as the execution of a BRK instruction, will cause the same overall sequence of events, which are, in order: The processor completes the current instruction and updates registers or memory as required before responding to the interrupt. 65C816/65C802 when operating in native mode: The program bank register (PB, the bank-address part of the address bus) is pushed onto the hardware stack. The most significant byte (MSB) of the program counter (PC) is pushed onto the stack. The least significant byte (LSB) of the program counter is pushed onto the stack. The status register (SR) is pushed onto the stack. 
The interrupt disable flag is set in the status register. 65C816/65C802: PB is loaded with $00. PC is loaded from the relevant vector (see tables). The behavior of the 65C816 when ABORT is asserted differs in some respects from the above description and is separately discussed below. Note that the processor does not push the accumulator and index registers on to the stack—code in the interrupt handler must perform that task, as well as restore the registers at the termination of interrupt processing, as necessary. Also note that the vector for BRK is the same as that for IRQ in all eight bit 65xx processors, as well as in the 65C802/65C816 when operating in emulation mode. When operating in native mode, the 65C802/65C816 provide separate vectors for IRQ and BRK. When set, the interrupt request disable flag (the I bit in the status register) will disable detection of the IRQ signal, but will have no effect on any other interrupts (however, see below section on the WAI instruction implemented in WDC CMOS processors). Additionally, with the 65(c)02 or the 65C816/65C802 operating in emulation mode, the copy of the status register that is pushed on to the stack will have the B flag set if a BRK (software interrupt) was the cause of the interrupt, or cleared if an IRQ was the cause. Hence the interrupt service routine must retrieve a copy of the saved status register from where it was pushed onto the stack and check the status of the B flag in order to distinguish between an IRQ and a BRK. This requirement is eliminated when operating the 65C802/65C816 in native mode, due to the separate vectors for the two interrupt types. ABORT interrupt The 65C816's ABORT interrupt input is intended to provide the means to redirect program execution when a hardware exception is detected, such as a page fault or a memory access violation. Hence the processor's response when the ABORT input is asserted (negated) is different from when IRQ and/or NMI are asserted. 
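The native-mode push order and register updates described above can be sketched as a toy model (schematic, not cycle-accurate; the class and handler address are invented for illustration, and $FFEE is assumed here as the native-mode IRQ vector):

```python
FLAG_I = 0x04   # interrupt-disable bit in the status register

class CPU:
    """Minimal model of the 65C816 native-mode interrupt entry sequence."""
    def __init__(self):
        self.pb, self.pc, self.sr = 0x12, 0xC123, 0x00
        self.sp = 0x01FF
        self.mem = [0] * 0x10000

    def push(self, byte):
        self.mem[self.sp] = byte & 0xFF
        self.sp -= 1

    def take_interrupt(self, vector):
        self.push(self.pb)            # 1. program bank register
        self.push(self.pc >> 8)       # 2. program counter MSB
        self.push(self.pc & 0xFF)     # 3. program counter LSB
        self.push(self.sr)            # 4. status register
        self.sr |= FLAG_I             # 5. further IRQs disabled
        self.pb = 0x00                # 6. execution continues in bank 0
        self.pc = self.mem[vector] | (self.mem[vector + 1] << 8)

cpu = CPU()
cpu.mem[0xFFEE], cpu.mem[0xFFEF] = 0x00, 0x80   # handler at $8000 (example)
cpu.take_interrupt(0xFFEE)
assert (cpu.pb, cpu.pc) == (0x00, 0x8000)
assert cpu.sr & FLAG_I
# Stack now holds, bottom to top: PB, PCH, PCL, SR.
assert cpu.mem[0x01FC:0x0200] == [0x00, 0x23, 0xC1, 0x12]
```

Note that the accumulator and index registers are deliberately absent from the model, mirroring the text: saving and restoring them is the handler's job.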
Also, achieving correct operation in response to ABORT requires that the interrupt occur at the proper time during the machine cycle, whereas no such requirement exists for IRQ or NMI. When ABORT is asserted during a valid memory cycle, that is, when the processor has asserted the VDA and/or VPA status outputs, the following sequence of events will occur: The processor completes the current instruction but does not change the registers or memory in any way—the computational results of the completed instruction are discarded. An abort interrupt does not literally abort an instruction. The program bank (PB, see above) is pushed to the stack. The most significant byte (MSB) of the aborted instruction's address is pushed onto the stack. The least significant byte (LSB) of the aborted instruction's address is pushed onto the stack. The status register is pushed onto the stack. The interrupt disable flag is set in the status register. PB is loaded with $00. The program counter is loaded from the ABORT vector (see tables). As the address pushed to the stack is that of the aborted instruction rather than the contents of the program counter, executing an RTI (ReTurn from Interrupt) following an ABORT interrupt will cause the processor to return to the aborted instruction, rather than the next instruction, as would be the case with the other interrupts. In order for the processor to correctly respond to an abort, system logic must assert (negate) the ABORT input as soon as a valid address has been placed on the bus and it has been determined that the address constitutes a page fault, memory access violation or other anomaly (e.g., attempted execution of a privileged instruction). Hence the logic must not assert ABORT until the processor has asserted the VDA or VPA signals. Also, ABORT must remain asserted until the fall of the phase-two clock and then be immediately released. If these timing constraints are not observed, the abort interrupt handler itself may be aborted, causing registers and/or memory to be changed in a possibly-undefined manner. 
Interrupt anomalies In the NMOS 6502 and derivatives (e.g., 6510), the simultaneous assertion of a hardware interrupt line and execution of BRK was not accounted for in the design—the BRK instruction will be ignored in such a case. Also, the status of the decimal mode flag in the processor status register is unchanged following an interrupt of any kind. This behavior can potentially result in a difficult-to-locate bug in the interrupt handler if decimal mode happens to be enabled at the time of an interrupt. These anomalies were corrected in all CMOS versions of the processor. Interrupt handler considerations A well-designed and succinct interrupt handler or interrupt service routine (ISR) will not only expeditiously service any event that causes an interrupt, it will do so without interfering in any way with the interrupted foreground task—the ISR must be "transparent" to the interrupted task (although exceptions may apply in specialized cases). This means that the ISR must preserve the microprocessor (MPU) state and not disturb anything in memory that it is not supposed to disturb. Additionally, the ISR should be fully reentrant, meaning that if two interrupts arrive in close succession, the ISR will be able to resume processing the first interrupt after the second one has been serviced. Reentrancy is typically achieved by using only the MPU hardware stack for storage (though there are other possible methods). Preserving the MPU state means that the ISR must assure that whatever values were in the MPU registers at the time of the interrupt are there when the ISR terminates. A part of the preservation process is automatically handled by the MPU when it acknowledges the interrupt, as it will push the program counter (and program bank in the 65C816/65C802) and status register to the stack prior to executing the ISR. At the completion of the ISR, when the RTI instruction is executed, the MPU will reverse the process. 
No member of the 65xx family pushes any other registers to the stack. In most ISRs, the accumulator and/or index registers must be preserved to assure transparency and later restored as the final steps prior to executing RTI. In the case of the 65C816/65C802, consideration must be given to whether it is being operated in emulation or native mode at the time of the interrupt. If the latter, it may also be necessary to preserve the data bank (DB) and direct (zero) page (DP) registers to guarantee transparency. Also, a 65C816 native mode operating system may well use a different stack location than the application software, which means the ISR would have to preserve and subsequently restore the stack pointer (SP). Further complicating matters with the 65C816/65C802 is that the sizes of the accumulator and index registers may be either 8 or 16 bits when operating in native mode, requiring that their sizes be preserved for later restoration. The methods by which the MPU state is preserved and restored within an ISR will vary with the different versions of the 65xx family. For NMOS processors (e.g., 6502, 6510, 8502, etc.), there can be only one method by which the accumulator and index registers are preserved, as only the accumulator can be pushed to and pulled from the stack. Therefore, the following ISR entry code is typical:

PHA ; save accumulator
TXA
PHA ; save X-register
TYA
PHA ; save Y-register
CLD ; ensure binary mode by clearing decimal flag

The CLD instruction is necessary because, as previously noted, NMOS versions of the 6502 do not clear the D (decimal mode) flag in the status register when an interrupt occurs. Once the accumulator and index registers have been preserved, the ISR can use them as needed. When the ISR has concluded its work, it would restore the registers and then resume the interrupted foreground task. 
Again, the following NMOS code is typical:

PLA
TAY ; restore Y-register
PLA
TAX ; restore X-register
PLA ; restore accumulator
RTI ; resume interrupted task

A consequence of the RTI instruction is that the MPU will return to decimal mode if that was its state at the time of the interrupt. The 65C02, and the 65C816/65C802 when operating in emulation mode, require less code, as they are able to push and pull the index registers without using the accumulator as an intermediary. They also automatically clear decimal mode before executing the ISR. The following is typical:

PHA ; save accumulator
PHX ; save X-register
PHY ; save Y-register

Upon finishing up, the ISR would reverse the process:

PLY ; restore Y-register
PLX ; restore X-register
PLA ; restore accumulator
RTI ; resume interrupted task

As previously stated, there is a little more complexity with the 65C816/65C802 when operating in native mode due to the variable register sizes and the necessity of accounting for the DB and DP registers. In the case of the index registers, they may be pushed without regard to their sizes, as changing sizes automatically sets the most significant byte (MSB) in these registers to zero and no data will be lost when the pushed value is restored, provided the index registers are the same size they were when pushed. The accumulator, however, is really two registers: designated A and B. Pushing the accumulator when it is set to 8 bits will not preserve B, which could result in a loss of transparency should the ISR change B in any way. Therefore, the accumulator must always be set to 16 bits before being pushed or pulled if the ISR will be using B. It is also more efficient to set the index registers to 16 bits before pushing them. Otherwise, the ISR has to then push an extra copy of the status register so it can restore the register sizes prior to pulling them from the stack. 
For most ISRs, the following entry code will achieve the goal of transparency:

PHB ; save current data bank
PHD ; save direct page pointer
REP #%00110000 ; select 16 bit registers
PHA ; save accumulator
PHX ; save X-register
PHY ; save Y-register

In the above code fragment, the % symbol is MOS Technology and WDC standard assembly language syntax for a bitwise operand. If the ISR has its own assigned stack location, preservation of the stack pointer (SP) must occur in memory after the above pushes have occurred—it should be apparent why this is so. The following code, added to the above sequence, would handle this requirement:

TSC ; copy stack pointer to accumulator
STA stkptr ; save somewhere in safe RAM
LDA isrptr ; get ISR's stack pointer &...
TCS ; set new stack location

At the completion of the ISR, the above processes would be reversed as follows:

REP #%00110000 ; select 16 bit registers
TSC ; save ISR's SP...
STA isrptr ; for subsequent use
LDA stkptr ; get foreground task's SP &...
TCS ; set it
PLY ; restore Y-register
PLX ; restore X-register
PLA ; restore accumulator
PLD ; restore direct page pointer
PLB ; restore current data bank
RTI ; resume interrupted task

Note that upon executing RTI, the 65C816/65C802 will automatically restore the register sizes to what they were when the interrupt occurred, since pulling the previously-saved status register sets or clears both register size bits to what they were at the time of the interrupt. While it is possible to switch the 65C816/65C802 from native mode to emulation mode within an ISR, such is fraught with peril. In addition to forcing the accumulator and index registers to 8 bits (causing a loss of the most significant byte in the index registers), entering emulation mode will truncate the stack pointer to 8 bits and relocate the stack itself to page 1 RAM. The result is the stack that existed at the time of the interrupt will be inaccessible unless it was also in page 1 RAM and no larger than 256 bytes. 
In general, mode switching while servicing an interrupt is not a recommended procedure, but may be necessary in specific operating environments.

Using BRK and COP

As previously noted, BRK and COP are software interrupts and, as such, may be used in a variety of ways to implement system functions. A historical use of BRK has been to assist in patching PROMs when bugs were discovered in a system's firmware. A typical technique in firmware development was to arrange for the BRK vector to point to an unprogrammed "patch area" in the PROM. In the event a bug was discovered, patching would be accomplished by "blowing" all of the fuses at the address where the faulty instruction was located, thus changing the instruction's opcode to $00, the BRK opcode. Upon executing the resulting BRK, the MPU would be redirected to the patch area, into which suitable patch code would be written. Often, the patch area code started by "sniffing the stack" to determine the address at which the bug was encountered, potentially allowing for the presence of more than one patch in the PROM. The use of BRK for PROM patching diminished once EPROMs and EEPROMs became commonly available. Another use of BRK in software development is as a debugging aid in conjunction with a machine language monitor. By overwriting an opcode with BRK ($00) and directing the BRK hardware vector to the entry point of the monitor, one can cause a program to halt at any desired point, allowing the monitor to take control. At that time, one may examine memory, view the processor's register values, patch code, etc. Debugging, as advocated by Kuckes and Thompson, can be facilitated by liberally sprinkling one's code with NOP instructions (opcode $EA) that can be replaced by BRK instructions without altering the actual behaviour of the program being debugged. A characteristic of the BRK and COP instructions is that the processor treats either of them as a two byte instruction: the opcode itself and the following byte, which is referred to as the "signature."
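As a minimal sketch of the "stack sniffing" technique described above, a hypothetical 65C02 monitor entry point could recover the address of the BRK that caused entry by examining the program counter value pushed to the stack. The labels monent and brkadr are illustrative only, and the code assumes nothing else has been pushed between the interrupt and this sequence:

 monent   PHX            ; save X-register, then use it to index the stack
          TSX            ; copy stack pointer to X
          SEC            ; pushed PC = BRK address + 2, so subtract 2
          LDA $0103,X    ; pushed PCL (low byte of program counter)
          SBC #2
          STA brkadr     ; low byte of the BRK's address
          LDA $0104,X    ; pushed PCH (high byte of program counter)
          SBC #0
          STA brkadr+1   ; high byte of the BRK's address
          PLX            ; restore X-register

With the address in hand, the monitor (or PROM patch area code) can report where the program halted, or compare the address against a table to select the appropriate patch.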
Upon execution of BRK or COP, the processor will add two to the program counter prior to pushing it to the stack. Hence, when RTI (ReTurn from Interrupt) is executed, the interrupted program will continue at the address immediately following the signature. If BRK is used as a debugging device, the program counter may have to be adjusted to point to the signature in order for execution to resume where expected. Alternatively, a NOP may be inserted as a signature "placeholder," in which case no program counter adjustment will be required. The fact that BRK and COP double-increment the program counter before pushing it to the stack facilitates the technique of treating them as supervisor call instructions, as found on some mainframe computers. The usual procedure is to treat the signature as an operating system service index. The operating system's BRK or COP handler would retrieve the value of the program counter pushed to the stack, decrement it and read from the resulting memory location to get the signature. After converting the signature to a zero-based index, a simple lookup table can be consulted to load the program counter with the address of the proper service routine. Upon completion of the service routine, the RTI instruction would be used to return control to the program that made the operating system call. Note that the signature for BRK may be any value, whereas the signature for COP should be limited to the range $00–$7F. The use of BRK and/or COP to request an operating system service means user applications do not have to know the entry address of each operating system function, only the correct signature byte to invoke the desired operation. Hence relocation of the operating system in memory will not break compatibility with existing user applications. Also, as executing BRK or COP always vectors the processor to the same address, simple code may be used to preserve the registers on the stack prior to turning control over to the requested service.
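The supervisor-call dispatch just described can be sketched as follows for a hypothetical 65C02 BRK handler. The labels brkhand, sigptr and svctbl are illustrative assumptions: sigptr is taken to be a zero-page pointer, and svctbl a table of two-byte service routine addresses indexed by signature:

 brkhand  PHX            ; save X-register, then use it to index the stack
          TSX            ; copy stack pointer to X
          SEC            ; pushed PC = signature address + 1
          LDA $0103,X    ; pushed PCL
          SBC #1
          STA sigptr     ; form a pointer to the signature byte...
          LDA $0104,X    ; pushed PCH
          SBC #0
          STA sigptr+1   ; ...in zero page
          LDA (sigptr)   ; fetch the signature (65C02 indirect addressing)
          ASL            ; double it to index a table of two-byte addresses
          TAX            ; (valid for signatures $00-$7F)
          JMP (svctbl,X) ; transfer control to the requested service routine

Each service routine would restore the X-register pushed at brkhand (with PLX) and end with RTI, resuming the caller immediately after the signature.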
However, this programming model will result in somewhat slower execution as compared to calling a service as a subroutine, primarily a result of the stack activity that occurs with any interrupt. Also, interrupt requests will have been disabled by executing BRK or COP, requiring that the operating system re-enable them.

WAI and STP instructions

WAI (WAit for Interrupt, opcode $CB) is an instruction available on the WDC version of the 65C02 and the 65C816/65C802 microprocessors (MPU) that halts the MPU and places it into a semi-catatonic state until a hardware interrupt of any kind occurs. The primary use for WAI is in low-power embedded systems where the MPU has nothing to do until an expected event occurs, minimal power consumption is desired as the system is waiting, and/or a quick response is required. A typical example of code that would make use of WAI is as follows:

 SEI             ; disable IRQs
 WAI             ; wait for hardware interrupt
 ; ...execution resumes here

In the above code fragment, the MPU will halt upon execution of WAI and go into a very low power consumption state. Despite interrupt requests (IRQ) having been disabled prior to the WAI instruction, the MPU will respond to any hardware interrupt while waiting. Upon receipt of an IRQ, the MPU will "awaken" in one clock cycle and resume execution at the instruction immediately following WAI. Hence interrupt latency will be very short (70 nanoseconds at 14 megahertz), resulting in the most rapid response possible to an external event. Similar in some ways to WAI is the STP (SToP, opcode $DB) instruction, which completely shuts down the MPU while waiting for a single interrupt input. When STP is executed, the MPU halts its internal clock in the high phase, retaining all data in its registers, and enters a low power state. The MPU is brought out of this state by pulling its reset input pin (RESB, which is classified as an interrupt input) low. Execution will then resume at the address stored at locations $00FFFC–$00FFFD, the hardware reset vector.
As with WAI, STP is intended for use in low-power embedded applications where long periods of time may elapse between events that require MPU attention and no other processing is required. STP would not be used in normal programming, as it would result in total cessation of processing.

Further reading
 Internals of BRK/IRQ/NMI/RESET on a MOS 6502
 6502 Family Microprocessor Resources and Forum
 65xx Interrupt Primer – An extensive discussion of 65xx family interrupt processing.
 Investigating 65C816 Interrupts – An extensive discussion of interrupt processing that is specific to 65C816 native mode operation.
The Storm botnet or Storm Worm botnet (also known as Dorf botnet and Ecard malware) was a remotely controlled network of "zombie" computers (or "botnet") that had been linked by the Storm Worm, a Trojan horse spread through e-mail spam. At its height in September 2007, the Storm botnet was running on anywhere from 1 million to 50 million computer systems, and accounted for 8% of all malware on Microsoft Windows computers. It was first identified around January 2007, having been distributed by email with subjects such as "230 dead as storm batters Europe," giving it its well-known name. The botnet began to decline in late 2007, and by mid-2008 had been reduced to infecting about 85,000 computers, far fewer than it had infected a year earlier. As of December 2012, the original creators of Storm have not been found. The Storm botnet has displayed defensive behaviors that indicated that its controllers were actively protecting the botnet against attempts at tracking and disabling it, by specifically attacking the online operations of some security vendors and researchers who had attempted to investigate it. Security expert Joe Stewart revealed that in late 2007, the operators of the botnet began to further decentralize their operations, in possible plans to sell portions of the Storm botnet to other operators. It was reportedly powerful enough to force entire countries off the Internet, and was estimated to be capable of executing more instructions per second than some of the world's top supercomputers. The United States Federal Bureau of Investigation considered the botnet a major risk to increased bank fraud, identity theft, and other cybercrimes.

Origins

First detected on the Internet in January 2007, the Storm botnet and worm are so-called because of the storm-related subject lines its infectious e-mail employed initially, such as "230 dead as storm batters Europe." Later provocative subjects included "Chinese missile shot down USA aircraft," and "U.S. 
Secretary of State Condoleezza Rice has kicked German Chancellor Angela Merkel." It is suspected by some information security professionals that well-known fugitive spammers, including Leo Kuvayev, may have been involved in the operation and control of the Storm botnet. According to technology journalist Daniel Tynan, writing under his "Robert X. Cringely" pseudonym, a great portion of the fault for the existence of the Storm botnet lay with Microsoft and Adobe Systems. Other sources state that Storm Worm's primary method of victim acquisition was through enticing users via frequently changing social engineering (confidence trickery) schemes. According to Patrick Runald, the Storm botnet had a strong American focus, and likely had agents working to support it within the United States. Some experts, however, believe the Storm botnet controllers were Russian, some pointing specifically at the Russian Business Network, citing that the Storm software mentions a hatred of the Moscow-based security firm Kaspersky Lab, and includes the Russian word "buldozhka," which means "bulldog." Composition The botnet, or zombie network, comprises computers running Microsoft Windows as their operating system. Once infected, a computer becomes known as a bot. This bot then performs automated tasks—anything from gathering data on the user, to attacking web sites, to forwarding infected e-mail—without its owner's knowledge or permission. Estimates indicate that 5,000 to 6,000 computers are dedicated to propagating the spread of the worm through the use of e-mails with infected attachments; 1.2 billion virus messages have been sent by the botnet through September 2007, including a record 57 million on August 22, 2007 alone. Lawrence Baldwin, a computer forensics specialist, was quoted as saying, "Cumulatively, Storm is sending billions of messages a day. It could be double digits in the billions, easily." 
Among the methods used to entice victims to infection-hosting web sites are offers of free music from artists such as Beyoncé Knowles, Kelly Clarkson, Rihanna, The Eagles, Foo Fighters, R. Kelly, and Velvet Revolver. Signature-based detection, the main defense of most computer systems against virus and malware infections, is hampered by the large number of Storm variants. Back-end servers that control the spread of the botnet and Storm worm automatically re-encode their distributed infection software twice an hour for new transmissions, making it difficult for anti-virus vendors to stop the spread of the virus and infection. Additionally, the locations of the remote servers which control the botnet are hidden behind a constantly changing DNS technique called 'fast flux', making it difficult to find and stop virus-hosting sites and mail servers. In short, the names and locations of such machines are frequently changed and rotated, often on a minute-by-minute basis. The Storm botnet's operators control the system via peer-to-peer techniques, making external monitoring and disabling of the system more difficult. There is no central "command-and-control point" in the Storm botnet that can be shut down. The botnet also makes use of encrypted traffic. Efforts to infect computers usually revolve around convincing people to download e-mail attachments which contain the virus through subtle manipulation. In one instance, the botnet's controllers took advantage of the National Football League's opening weekend, sending out mail offering "football tracking programs" which did nothing more than infect a user's computer. According to Matt Sergeant, chief anti-spam technologist at MessageLabs, "In terms of power, [the botnet] utterly blows the supercomputers away. If you add up all 500 of the top supercomputers, it blows them all away with just 2 million of its machines. 
It's very frightening that criminals have access to that much computing power, but there's not much we can do about it." It is estimated that only a fraction of the total capacity and power of the Storm botnet is currently being used. Computer security expert Joe Stewart detailed the process by which compromised machines join the botnet: attempts to join the botnet are made by launching a series of EXE files on the compromised machine, in stages. Usually, they are named in a sequence from game0.exe through game5.exe, or similar. The machine will then continue launching the executables in turn. They typically perform the following:

 game0.exe – Backdoor/downloader
 game1.exe – SMTP relay
 game2.exe – E-mail address stealer
 game3.exe – E-mail virus spreader
 game4.exe – Distributed denial of service (DDoS) attack tool
 game5.exe – Updated copy of Storm Worm dropper

At each stage the compromised system will connect into the botnet; fast flux DNS makes tracking this process exceptionally difficult. This code is run from %windir%\system32\wincom32.sys on a Windows system, via a kernel rootkit, and all connections back to the botnet are sent through a modified version of the eDonkey/Overnet communications protocol.

Method

The Storm botnet and its variants employ a variety of attack vectors, and a variety of defensive steps exist as well. The Storm botnet was observed to be defending itself, and attacking computer systems that scanned for Storm virus-infected computer systems online. The botnet will defend itself with DDoS counter-attacks, to maintain its own internal integrity. At certain points in time, the Storm worm used to spread the botnet has attempted to release hundreds or thousands of versions of itself onto the Internet, in a concentrated attempt to overwhelm the defenses of anti-virus and malware security firms. According to Joshua Corman, an IBM security researcher, "This is the first time that I can remember ever seeing researchers who were actually afraid of investigating an exploit." 
Researchers are still unsure if the botnet's defenses and counterattacks are a form of automation, or manually executed by the system's operators. "If you try to attach a debugger, or query sites it's reporting into, it knows and punishes you instantaneously. [Over at] SecureWorks, a chunk of it DDoS-ed [distributed-denial-of-service attacked] a researcher off the network. Every time I hear of an investigator trying to investigate, they're automatically punished. It knows it's being investigated, and it punishes them. It fights back", Corman said. Spameater.com, as well as other sites such as 419eater.com and Artists Against 419, both of which deal with 419 spam e-mail fraud, have experienced DDoS attacks, temporarily rendering them completely inoperable. The DDoS attacks consist of making massed parallel network calls to those and other target IP addresses, overloading the servers' capacities and preventing them from responding to requests. Other anti-spam and anti-fraud groups, such as the Spamhaus Project, were also attacked. The webmaster of Artists Against 419 said that the website's server succumbed after the attack increased to over 100 Mbit/s. Similar attacks were perpetrated against more than a dozen anti-fraud site hosts. Jeff Chan, a spam researcher, stated, "In terms of mitigating Storm, it's challenging at best and impossible at worst since the bad guys control many hundreds of megabits of traffic. There's some evidence that they may control hundreds of Gigabits of traffic, which is enough to force some countries off the Internet." The Storm botnet's systems also take steps to defend themselves locally, on victims' computer systems. The botnet, on some compromised systems, creates a computer process on the Windows machine that notifies the Storm systems whenever a new program or other process begins. Previously, the Storm worm would locally tell other programs—such as anti-virus or anti-malware software—to simply not run. 
However, according to IBM security research, versions of Storm also now simply "fool" the local computer system into thinking it has run the hostile program successfully, but in fact, they are not doing anything. "Programs, including not just AV exes, dlls and sys files, but also software such as the P2P applications BearShare and eDonkey, will appear to run successfully, even though they didn't actually do anything, which is far less suspicious than a process that gets terminated suddenly from the outside", said Richard Cohen of Sophos. Compromised users, and related security systems, will assume that security software is running successfully when it in fact is not. On September 17, 2007, a Republican Party website in the United States was compromised, and used to propagate the Storm worm and botnet. In October 2007, the botnet took advantage of flaws in YouTube's captcha application on its mail systems, to send targeted spam e-mails to Xbox owners with a scam involving winning a special version of the video game Halo 3. Other attack methods include using appealing animated images of laughing cats to get people to click on a trojan software download, and tricking users of Yahoo!'s GeoCities service to download software that was claimed to be needed to use GeoCities itself. The GeoCities attack in particular was called a "full-fledged attack vector" by Paul Ferguson of Trend Micro, and implicated members of the Russian Business Network, a well-known spam and malware service. On Christmas Eve in 2007, the Storm botnet began sending out holiday-themed messages revolving around male interest in women, with such titles as "Find Some Christmas Tail", "The Twelve Girls of Christmas", and "Mrs. Claus Is Out Tonight!" and photos of attractive women. It was described as an attempt to draw more unprotected systems into the botnet and boost its size over the holidays, when security updates from protection vendors may take longer to be distributed. 
A day after the e-mails with Christmas strippers were distributed, the Storm botnet operators immediately began sending new infected e-mails that claimed to wish their recipients a "Happy New Year 2008!" In January 2008, the botnet was detected for the first time to be involved in phishing attacks against major financial institutions, targeting both Barclays and Halifax.

Encryption and sales

Around October 15, 2007, it was uncovered that portions of the Storm botnet and its variants could be for sale. This was done by using unique security keys in the encryption of the botnet's Internet traffic and information. The unique keys allow each segment, or sub-section, of the Storm botnet to communicate with a section that has a matching security key. However, this may also allow people to detect, track, and block Storm botnet traffic in the future, if the security keys have unique lengths and signatures. Computer security vendor Sophos has agreed with the assessment that the partitioning of the Storm botnet indicated likely resale of its services. Graham Cluley of Sophos said, "Storm's use of encrypted traffic is an interesting feature which has raised eyebrows in our lab. Its most likely use is for the cybercriminals to lease out portions of the network for misuse. It wouldn't be a surprise if the network was used for spamming, distributed denial-of-service attacks, and other malicious activities." Security experts reported that if Storm is broken up for the malware market, in the form of a "ready-to-use botnet-making spam kit", the world could see a sharp rise in the number of Storm-related infections and compromised computer systems. The encryption only seems to affect systems compromised by Storm from the second week of October 2007 onwards, meaning that any of the computer systems compromised after that time frame will remain difficult to track and block. 
Within days of the discovery of this segmenting of the Storm botnet, spam e-mail from the new subsection was uncovered by major security vendors. On the evening of October 17, security vendors began seeing new spam with embedded MP3 sound files, which attempted to trick victims into investing in a penny stock, as part of an illegal pump-and-dump stock scam. It was believed that this was the first-ever spam e-mail scam that made use of audio to fool victims. Unlike nearly all other Storm-related e-mails, however, these new audio stock scam messages did not include any sort of virus or Storm malware payload; they were simply part of the stock scam. In January 2008, the botnet was detected for the first time to be involved in phishing attacks against the customers of major financial institutions, targeting banking establishments in Europe including Barclays, Halifax and the Royal Bank of Scotland. The unique security keys used indicated to F-Secure that segments of the botnet were being leased. Claimed decline of the botnet On September 25, 2007, it was estimated that a Microsoft update to the Windows Malicious Software Removal Tool (MSRT) may have helped reduce the size of the botnet by up to 20%. The new patch, as claimed by Microsoft, removed Storm from approximately 274,372 infected systems out of 2.6 million scanned Windows systems. However, according to senior security staff at Microsoft, "the 180,000+ additional machines that have been cleaned by MSRT since the first day are likely to be home user machines that were not notably incorporated into the daily operation of the 'Storm' botnet," indicating that the MSRT cleaning may have been symbolic at best. As of late October 2007, some reports indicated that the Storm botnet was losing the size of its Internet footprint, and was significantly reduced in size. 
Brandon Enright, a University of California at San Diego security analyst, estimated that the botnet had by late October fallen to a size of approximately 160,000 compromised systems, from Enright's previous estimated high in July 2007 of 1,500,000 systems. Enright noted, however, that the botnet's composition was constantly changing, and that it was still actively defending itself against attacks and observation. "If you're a researcher and you hit the pages hosting the malware too much… there is an automated process that automatically launches a denial of service [attack] against you", he said, and added that his research caused a Storm botnet attack that knocked part of the UC San Diego network offline. The computer security company McAfee is reported as saying that the Storm Worm would be the basis of future attacks. Craig Schmugar, a noted security expert who discovered the Mydoom worm, called the Storm botnet a trend-setter, which has led to more usage of similar tactics by criminals. One such derivative botnet has been dubbed the "Celebrity Spam Gang", due to their use of similar technical tools as the Storm botnet controllers. Unlike the sophisticated social engineering that the Storm operators use to entice victims, however, the Celebrity spammers make use of offers of nude images of celebrities such as Angelina Jolie and Britney Spears. Cisco Systems security experts stated in a report that they believe the Storm botnet would remain a critical threat in 2008, and said they estimated that its size remained in the "millions". As of early 2008, the Storm botnet also found business competition in its black hat economy, in the form of Nugache, another similar botnet which was first identified in 2006. Reports have indicated a price war may be underway between the operators of both botnets, for the sale of their spam E-mail delivery. 
Following the Christmas and New Year's holidays bridging 2007–2008, the researchers of the German Honeynet Project reported that the Storm botnet may have increased in size by up to 20% over the holidays. The MessageLabs Intelligence report dated March 2008 estimates that over 20% of all spam on the Internet originates from Storm.

Present state of the botnet

The Storm botnet was sending out spam for more than two years until its decline in late 2008. One factor in this decline—by making the botnet less interesting for its creators to maintain—may have been the Stormfucker tool, which made it possible to take control over parts of the botnet.

Stormbot 2

On April 28, 2010, McAfee made an announcement that the so-called "rumors" of a Stormbot 2 were verified. Mark Schloesser, Tillmann Werner, and Felix Leder, the German researchers who did a lot of work in analyzing the original Storm, found that around two-thirds of the "new" functions are a copy and paste from the last Storm code base. The only thing missing is the P2P infrastructure, perhaps because of the tool which used P2P to bring down the original Storm. The Honeynet blog dubbed this Stormbot 2.

See also
 Alureon
 Bagle (computer worm)
 Botnet
 Conficker
 E-mail spam
 Gameover ZeuS
 Helpful worm
 Internet crime
 Internet security
 McColo
 Operation: Bot Roast
 Rustock botnet
 Regin (malware)
 Srizbi botnet
 Zombie (computer science)
 ZeroAccess botnet
 Zeus (malware)

External links
 "The Storm worm: can you be certain your machine isn't infected?" (the target page is no longer on this website)
 "TrustedSource Storm Tracker": Top Storm domains and latest web proxies (the target page is no longer on this website)
Cyclamin is an organic compound that has been used by the pharmaceutical industry as an ingredient for nasal sprays.

History

Research on the cytotoxic and anticlastogenic activities of the cyclamen genus has been limited. In the 1950s and 1960s a little research was done on the toxic saponin cyclamin, but no further investigation has been performed since. Cyclamin, a triterpenoid pentasaccharidic saponin, has previously been extracted from different cyclamen species, including Cyclamen mirabile, Cyclamen trocopteranthum, Cyclamen libanoticum and Cyclamen persicum.

Available forms

Cyclamin can be extracted from cyclamen plants such as the species mirabile and trocopteranthum. Cyclamens are well-known houseplants, which raises concerns about awareness of the flower's toxicity. The compound cyclamin belongs to the family of triterpene saponins, which are derived from the saponin structure. Triterpenoid compounds contain one or more sugar moieties attached to triterpenoid aglycones. The large diversity of structures causes saponins to exhibit a wide range of biological and pharmacological properties. In China, cyclamin has been used as a traditional medicine for years. Cyclamen has been used against menstrual disorders, digestive disorders, and anxiety in women. However, this is only the case for the leaves; the roots of the plants are known to be harmful if ingested. Cyclamin is found in these roots, as well as in the bulbs. Therefore, cyclamin is suspected to be the compound which causes the toxicity of the roots and bulbs of cyclamen plants.

Structure and Reactivity

As can be seen in Figure 1, cyclamin consists of a hydrophilic part with five connected saccharide groups. The second part of the cyclamin molecule consists of a non-polar, sterol-like backbone. These two different parts make cyclamin molecules, and saponins in general, highly amphipathic compounds. However, the exact mechanism of action of cyclamin has not been extensively researched. 
The structure–activity relationship (SAR) of cyclamin is not yet known. The amphipathic nature of cyclamin makes the compound permeable through the membrane. The carbohydrate part of saponins is water-soluble, making them surface-active. Cyclamin is known as a white, opaque substance obtained in solid form that absorbs up to 45% water. Upon absorption of water, it becomes a transparent substance. Furthermore, it is soluble in alcohol and turns brown when exposed to light. Saponins overall are known to be soluble in polar solvents; except for alcohol and water, cyclamin has not been further tested. When dissolved in water, it produces foam in a frothing test, and upon heating it has the unique property of coagulation. Concentrated sulfuric acid colours cyclamin purple-red, a colour which disappears upon the addition of water.

Mechanism of action

Not much is known about the mechanism of action of cyclamin. However, a study proposed possible mechanisms of action based on its experimental results. Firstly, cyclamin might activate the proteins caspase-3, caspase-8 and caspase-9. Caspases are proteins that can induce apoptosis when activated. Secondly, cyclamin could be responsible for increasing the expression levels of cyclin-dependent kinase 2 and cell division cycle 25 homolog A. This can lead to increased DNA synthesis and cell proliferation and an increase in signal transduction pathways. Thirdly, cyclamin could increase the ratio of Bax/B-cell lymphoma 2 expression. This would favour apoptosis. Another property that was found is that cyclamin increases the permeability of Bel-7402 cells. This might be the reason why cyclamin enhances the effect of some chemotherapeutic drugs.

Indications and symptoms

Cyclamin is an irritant compound that causes gastroenteritis, bloody stools, dizziness, seizures and even death by asphyxiation. Although studied by many physiologists, cyclamin was long viewed merely as a local irritant. 
However, considering the toxic effects of cyclamin, this is a misconception. The roots and bulbs of cyclamen plants containing cyclamin are known to cause severe diarrhea, nausea, vomiting and even death if eaten raw.

Adverse and side effects

As cyclamin is not yet used as a pharmaceutical drug, for instance in chemotherapy, no side effects have yet been determined.

Applications

Cyclamin is used as an ingredient in a nasal spray to reduce the tension of the wall and induce secretion of mucus. Furthermore, due to its toxic effects on different (cancer) cell types, cyclamin might be considered for use as a chemotherapeutic drug. However, more research first has to be done to reduce its toxicity to normal human cells.

Toxicological data

In one study, cyclamin was tested for its toxicity against several types of cancer cells (SK-BR-3, HT-29, HepG2/3A, NCI-H1299, BXPC-3 and 22RV1) as well as against human normal fibroblasts (DMEM), which are not cancer cells. The results showed that cyclamin induced a significant increase of micronucleated cells after it was activated through metabolism. This means that heritable chromosome mutations could occur in these cells. This result was observed in all cell types analysed in the study, including the fibroblasts. The toxicity was indicated by the IC50 value, which gives the concentration of the compound at which it causes 50% of its inhibitory effect, e.g. on enzymes in cells. The IC50 values of cyclamin were very similar across the different cell types, ranging from 0.32 μM to 0.84 μM, with the lowest IC50 value in the human normal fibroblast (DMEM) cells, which indicates unspecific toxicity of cyclamin across different cell types (Table 1). This indicates that cyclamin is more toxic to the human fibroblasts than to the cancer cell lines. 
Compared to the chemotherapeutic drug mitomycin C, which has IC50 values ranging from 0.45 μM to 20.20 μM in the cancer cell lines, cyclamin was up to 50 times more toxic for certain cell types when comparing the IC50 values (Table 1). The antioxidant activity of cyclamin was also determined. Cyclamin had an EC50 value of 0.96 mM, which indicates low antioxidant activity compared to the reference compounds catechin (EC50 = 0.009 mM) and ascorbic acid (EC50 = 0.014 mM). The EC50 value represents the potency of a compound by stating the concentration at which it causes half of its maximum response or effect. Furthermore, it seemed that cyclamin did not have an anticlastogenic effect in the tested cell lines. Another study found that cyclamin was less toxic to human colorectal cancer cells, of the types HTC 166 and HT-29, than the chemotherapeutic drug paclitaxel. This could be concluded from the results that cyclamin had higher IC50 values than paclitaxel (Table 2). To conclude, cyclamin shows broad toxicity against several cancer cell types, which would make it a promising drug in that respect. However, its toxicity against normal human cells should be investigated further before using it as the basis for a chemotherapeutic drug, to reduce unwanted side effects. For instance, cyclamin has been reported to selectively inhibit the proliferation of liver cancer cells. This is suspected to be related to molecular mechanisms that increase cell membrane permeabilization by targeting cholesterol, and consequently target the ligand-independent activation of the Fas signalling pathway. Only after further investigation can it be decided whether cyclamin is suitable for such a medical use.

Effects on animals

Cyclamin has not been thoroughly investigated in terms of its effects on animals, given that it is not a widely known compound. 
However, the molluscicidal activity of cyclamin against the snail Biomphalaria glabrata (Say) has been studied. The lowest concentration showing 100% mortality in the snails was 21 mg/l. Beyond its effect on these snails, no further data are known about its effects on animals. References Organic compounds
Cyclamin
Chemistry
1,881
36,722,518
https://en.wikipedia.org/wiki/Kyselka%20Spa
The Kyselka Spa, older name Kysibl, is a complex of former public baths in the village of Kyselka in the vicinity of Karlovy Vary in the Czech Republic. History The local springs were discovered hundreds of years ago; the first written account dates back to 1522. In the 17th century the counts of Černín permitted their subjects to drink the Kyselka mineral water for free. The springs were first used for spa purposes in 1792, when the location was already widely known. The first spa buildings were built in the years 1826–1832 by Wilhelm von Neuberg. The fame of the spa and its tasty water grew; in 1852 Otto of Greece visited the place, and the main spring was named after him. In the course of time the spring had several owners; one of them, count Johann Joseph von Stiebar auf Buttenheim, came up with the idea of exporting the mineral water (1824) and established a small factory for the production of the clay jugs in which the water was bottled. The local water was then sold in Vienna, Prague and also in Karlsbad. Mattoni's era In 1867 the main spring was rented by the Czech businessman of Italian-German origin Heinrich K. Mattoni, who began to bottle the water in glass bottles and export it worldwide. In 1873 he had enough money to buy the spa and the surrounding land. Before he died in 1910, he managed to build a new colonnade which roofed the famous "Otto's spring", as well as the buildings of the sanatorium, the hydropathic institute, hotels, restaurants, promenades, the cableway, the chapel of Saint Anne (1884), his monumental residence ("The Chateau"), a hydroelectric power station and the buildings of the bottling plant. Finally, he brought the railway here so that the quiet of the spa was not disturbed by wagons carting mineral water. During his era the spa flourished, and it has never regained such prosperity since. Mattoni's descendants nevertheless kept the property until the end of World War II.
In the hands of the state After the war, the buildings briefly served as a refugee camp for orphans from the Greek Civil War and after that as a sanatorium for children. It was closed down after the fall of the Communist regime in 1989. During this period the premises deteriorated to some degree but were at least minimally maintained. The current situation Since the chaotic voucher privatization between 1991 and 1993, the spa has had several owners. None of them was able or willing to save the spa or even partially prevent its dilapidation. Today (2012) the buildings are partially inaccessible; the water springs up among the rubble of the buildings and leaks into the foundations. Many of the buildings are doomed to demolition. The whole site is moreover burdened with freight traffic (a few hundred trucks a day), as the railway has been closed down. The current proprietors are the company C.T.S. – DUO and Carlsbad Mineral Waters, owned by the Italian tycoon A. Pasquale, which also owns the bottling plant and the Mattoni brand. The owners have repeatedly promised to renovate the complex, but it has become obvious that they are counting on the buildings being torn down. All this is in spite of the opinion of the Preservation of Monuments bureau, the attitude of the representatives of the Karlovy Vary Region, and a quite strong civic movement (over 28,000 signatures) led by the Association for the Protection and Development of the Cultural Heritage of the Czech Republic (ASORKD). References External links Zachraňte lázně Kyselka! – petition initiated by ASORKD Spas Water Buildings and structures in Karlovy Vary
Kyselka Spa
Environmental_science
770
18,747,884
https://en.wikipedia.org/wiki/XO-5b
XO-5b "Makropulos" is an extrasolar planet approximately 910 light years away in the constellation of Lynx. The planet was found by the transit method using the XO Telescope and announced in May 2008. It was also independently discovered by the HATNet Project. The planet has a mass and radius just slightly larger than those of Jupiter. It orbits very close to its G-type parent star, as is typical for transiting planets, classing it as a hot Jupiter. It takes only 4.188 days (or 100.5 hours) to orbit at an orbital distance of 0.0488 AU. The planet XO-5b is named Makropulos. The name was selected in the NameExoWorlds campaign by the Czech Republic during the 100th anniversary of the IAU, and comes from Karel Čapek's play Věc Makropulos (The Makropulos Affair). Discovery XO-5b was the fifth hot Jupiter transiting planet discovered by the XO Project and was identified as a possible candidate extrasolar planet from two seasons of observations, November 2003 to March 2004 and November 2004 to March 2005. Follow-up photometry was provided by the extended team, a collaboration of professional and amateur astronomers. The team obtained better-quality light curves to guide the photometric and spectroscopic follow-up necessary to classify a candidate as an actual planetary companion. To confirm XO-5b's planetary nature, radial velocity observations of XO-5 were made with the high-resolution spectrograph on the 11-meter Hobby–Eberly Telescope located at McDonald Observatory, in order to measure the mass of the planet. Commencing on December 7, 2007, a total of ten radial velocity measurements were made, which confirmed XO-5b's status as a planet. Notes See also XO Telescope References External links Hot Jupiters Lynx (constellation) Transiting exoplanets Giant planets Exoplanets discovered in 2008 Exoplanets with proper names
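The quoted period and orbital distance can be cross-checked with Kepler's third law. The stellar mass used below (about 0.88 solar masses, plausible for a G-type dwarf) is an assumption, not a figure from the text.

```python
import math

# Sanity check of the quoted orbit via Kepler's third law,
# P^2 = a^3 / M, with P in years, a in AU, M in solar masses.
# The stellar mass is an assumed value for a G-type dwarf,
# not a figure given in the article.

a = 0.0488      # semi-major axis in AU (from the article)
m_star = 0.88   # assumed stellar mass in solar masses

period_years = math.sqrt(a**3 / m_star)
period_days = period_years * 365.25
print(f"{period_days:.2f} days")  # close to the quoted 4.188 days
```

The result lands within a few percent of the quoted 4.188 days, consistent with a star slightly less massive than the Sun.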
XO-5b
Astronomy
429
35,460,464
https://en.wikipedia.org/wiki/Cecilia%20Schelin%20Seideg%C3%A5rd
Cecilia Schelin Seidegård (born 18 May 1954 in Stockholm as Irene Cecilia Schelin) is a Swedish biochemist who served as Governor of Gotland County from 2010 to 2018 (including a two-year extension beyond December 2016). Seidegård grew up in Visby. In her youth, she was active in the Free Students, a political party within a students' union. In 1981–82 she was vice-chairman of the Swedish National Union of Students. She obtained a Ph.D. in biochemistry and then worked in the pharmaceutical industry, particularly in the Astra Group, where she was director of research at AstraZeneca. In 2003–2004 she was CEO of Huddinge University Hospital AB, and in 2004–2007 she was Director of the merged Karolinska University Hospital. Since 2004 she has been Chairman of the Board of the Royal Institute of Technology. Since 2008, she has also been chairman of Systembolaget. She was elected in 2007 as a member of the Royal Swedish Academy of Engineering Sciences. From 2010 to 2018 she was County Governor of Gotland, and in 2019 she was appointed County Governor of Kalmar County for one year. References External links BusinessWeek profile Living people 1954 births Scientists from Stockholm Governors of Gotland County Governors of Kalmar County Swedish biochemists Women biochemists Swedish women chemists 20th-century women scientists 21st-century Swedish women scientists Women business executives AstraZeneca people Swedish women academics Women county governors of Sweden 20th-century Swedish women
Cecilia Schelin Seidegård
Chemistry
316
4,854,639
https://en.wikipedia.org/wiki/Ludvig%20Faddeev
Ludvig Dmitrievich Faddeev (also Ludwig Dmitriyevich; ; 23 March 1934 – 26 February 2017) was a Soviet and Russian mathematical physicist. He is known for the discovery of the Faddeev equations in the quantum-mechanical three-body problem and for the development of path-integral methods in the quantization of non-abelian gauge field theories, including the introduction of the Faddeev–Popov ghosts (with Victor Popov). He led the Leningrad School, in which he along with many of his students developed the quantum inverse scattering method for studying quantum integrable systems in one space and one time dimension. This work led to the invention of quantum groups by Drinfeld and Jimbo. Biography Faddeev was born in Leningrad to a family of mathematicians. His father, Dmitry Faddeev, was a well-known algebraist, professor of Leningrad University and member of the Russian Academy of Sciences. His mother, Vera Faddeeva, was known for her work in numerical linear algebra. Faddeev attended Leningrad University, receiving his undergraduate degree in 1956. He enrolled in physics, rather than mathematics, "to be independent of [his] father". Nevertheless, he received a solid education in mathematics as well "due to the influence of V. A. Fock and V. I. Smirnov". His doctoral work on scattering theory was completed in 1959 under the direction of Olga Ladyzhenskaya. From 1976 to 2000, Faddeev was head of the St. Petersburg Department of Steklov Institute of Mathematics of Russian Academy of Sciences (PDMI RAS). He was an invited visitor to the CERN Theory Division for the first time in 1973 and made several further visits there. In 1988 he founded the Euler International Mathematical Institute, now a department of PDMI RAS. Honours and awards Faddeev was a member of the Russian Academy of Sciences since 1976, and was a member of a number of foreign academies, including the U. S. 
National Academy of Sciences, the French Academy of Sciences, the Austrian Academy of Sciences, the Brazilian Academy of Sciences, the Royal Swedish Academy of Sciences and the Royal Society. He received numerous honors, including the USSR State Prize (1971), the Dannie Heineman Prize (1975), the Dirac Prize (1990), an honorary doctorate from the Faculty of Mathematics and Science at Uppsala University, Sweden, the Max Planck Medal of the German Physical Society (1996), the Demidov Prize (2002 – "For outstanding contribution to the development of mathematics, quantum mechanics, string theory and solitons") and the State Prize of the Russian Federation (1995, 2004). He was president of the International Mathematical Union (1986–1990). He was awarded the Henri Poincaré Prize in 2006 and the Shaw Prize in mathematical sciences in 2008, as well as the Karpinsky International Prize. He also received the Lomonosov Gold Medal in 2013. Faddeev also received state awards: Order of Merit for the Fatherland; 3rd class (25 October 2004) – for outstanding contribution to the development of fundamental and applied domestic science and many years of fruitful activity 4th class (4 June 1999) – for outstanding contribution to the development of national science and training of highly qualified personnel in connection with the 275th anniversary of the Russian Academy of Sciences Order of Friendship (6 June 1994) – for his great personal contribution to the development of mathematical physics and training of highly qualified scientific personnel Order of Lenin Order of the Red Banner of Labour State Prize of the Russian Federation in Science and Technology 2004 (6 June 2005), for outstanding achievement in the development of mathematical physics and in 1995 for science and technology (20 June 1995), for the monograph "Introduction to quantum gauge field theory" USSR State Prize (1971) Honorary citizen of St.
Petersburg (2010) Academician (Finland) (1991) Selected works Source: Notes References L. A. Takhtajan et al., Scientific heritage of L. D. Faddeev. Review of works, Russian Mathematical Surveys (2017), 72 (6):977, External links faddeev.com 1934 births 2017 deaths Scientists from Saint Petersburg Russian inventors Soviet mathematicians Soviet physicists 20th-century Russian physicists Members of the French Academy of Sciences Donegall Lecturers of Mathematics at Trinity College Dublin Full Members of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences Foreign associates of the National Academy of Sciences Foreign members of the Royal Society Foreign members of the Chinese Academy of Sciences Members of the Finnish Academy of Science and Letters Recipients of the Order "For Merit to the Fatherland", 3rd class Recipients of the Order of Honour (Russia) State Prize of the Russian Federation laureates Demidov Prize laureates Recipients of the USSR State Prize Recipients of the Order of Friendship of Peoples Recipients of the Order of Lenin Recipients of the Lomonosov Gold Medal People associated with CERN Winners of the Max Planck Medal Presidents of the International Mathematical Union Members of the Royal Swedish Academy of Sciences 20th-century Russian scientists Deputies of Lensovet
Ludvig Faddeev
Technology
1,048
13,068,104
https://en.wikipedia.org/wiki/DITA%20Open%20Toolkit
DITA Open Toolkit (DITA-OT) is an open-source publishing engine for content authored in the Darwin Information Typing Architecture (DITA). The toolkit's extensible plug-in mechanism allows users to add their own transformations and customize the default output, which includes: Eclipse Help HTML5 Microsoft Compiled HTML Help Markdown PDF, through XSL-FO troff XHTML and XHTML with a JavaScript frameset Originally developed by IBM and released to open source in 2005, the distribution packages contain Ant, Apache FOP, Java, Saxon, and Xerces. Many DITA authoring tools and DITA CMSs integrate the DITA-OT, or parts of it, into their publishing workflows. Standalone tools have also been developed to run the DITA-OT via a graphical user interface instead of the command line. References External links “DITA Open Toolkit documentation” From 1.5.2 to current version GitHub DITA-OT code repository DITA XML.org “Information resource for the DITA OASIS Standard” “Introduction to the Darwin Information Typing Architecture” Don Day, Michael Priestley, David Schell Markup languages Technical communication XML XML-based standards
DITA Open Toolkit
Technology
254
41,005,444
https://en.wikipedia.org/wiki/Biblical%20literalist%20chronology
Biblical literalist chronology is the attempt to correlate the historical dates used in the Bible with the chronology of actual events, typically starting with creation in Genesis 1:1. Some of the better-known calculations include those of Archbishop James Ussher, who placed creation in 4004 BC, Isaac Newton in 4000 BC (both based on the Masoretic Hebrew Bible), Martin Luther in 3961 BC, the traditional Hebrew calendar date of 3760 BC, and lastly dates based on the Septuagint, of roughly 4650 BC. The Septuagint and Masoretic dates conflict by 650 years in the genealogy from Arphaxad to Nahor in Genesis 11:12–24. The Masoretic text, which lacks the 650 years of the Septuagint, is the text used by most modern Bibles. There is no consensus on which is right; however, without the additional 650 years of the Septuagint, the great Pyramids of Giza would, according to Egyptologists' dating, pre-date the Flood (yet show no signs of water erosion), and there would be no time for the Tower of Babel event. Background The Jewish Bible (the Christian Old Testament) dates events either by simple arithmetic taking the creation of the world as the starting point, or, in the later books, by correlations between the reigns of kings in Israel and Judah. The data it provides falls into three periods: From the Creation to Abraham's migration to Canaan, during which events are dated by adding the ages of the patriarchs; From Abraham's migration to the foundation of Solomon's temple, in which the chronology in Genesis continues to be arrived at by adding ages, but from Exodus on is usually given in statements; From the foundation of the temple onward, which gives the reigns in years (sometimes shorter periods) of kings in Israel and Judah. Some believe that for the biblical authors the chronology was theological in intent, functioning as prophecy and not as history. Biblical literalism, however, does not treat it this way, because literalists have a profound respect for the Bible as the word of God.
This way of thinking had its origins in Christian fundamentalism, an early-20th-century movement which opposed then-current non-supernatural interpretations of the life of Jesus by stressing, among other things, the verbal inspiration of scripture. The underlying concept or reasoning was that if anything in the Bible were not true, everything would collapse. Literalist chronologies The creation of a literalist chronology of the Bible faces several hurdles, of which the following are the most significant: There are different texts of the Jewish Bible, the major text-families being: the Septuagint, a Greek translation of the original Hebrew scriptures made in the last few centuries before Christ; the Masoretic text, a version of the Hebrew text curated by the Jewish rabbis but the earliest manuscripts of which date from the early years of the 2nd millennium CE; and the Samaritan text, restricted to the five books of the Torah plus the Book of Joshua. The three differ quite markedly from each other. Literalists prefer the Masoretic text, on which Protestant Bibles are based, but the Masoretic text sometimes contains absurdities, as when it states that Saul came to the throne at the age of one and reigned for two years. Such obvious errors can be corrected by reference to other versions of the Bible (in this case the Septuagint, which gives more realistic numbers), but their existence calls into question the fundamentalist idea that the MT text is the inspired word of God. Most fundamentalists, with the notable exception of the King James Only movement, avoid this by holding that only the authors of the original autographs (the very first copies written by Moses and others) were inspired by God. Very few events in the Bible are mentioned in outside sources, making it difficult to move from a relative chronology (X happened before Y happened) to an absolute one (X happened in a known year). The Bible is not always consistent. 
For example, Exodus 12:40 states that the Israelites spent 430 years in Egypt, while Paul in Galatians 3:17 says the 430 years covers the period from Abraham to Moses. Literal interpretation of the earlier parts of the Bible is in direct contradiction with modern science. Tables The Bible measures events from the year of God's creation of the world, a type of calendar called Anno Mundi ("Year of the World"), shortened as AM. The task of a literal biblical chronology is to convert this to dates in the modern chronology expressed as years before or after Christ, BC and AD. There have been many attempts to do this, none of them universally accepted. The following tables (derived from Thomas L. Thompson, The Mythic Past; notes within the table as cited) divide the Bible's AM dates by the three periods into which they most naturally fall. Creation to Abraham's migration to Canaan Abraham's entry into Canaan to the foundation of Solomon's temple After Solomon's temple Example of literalist chronology The following tabulation of years and dates is according to the literal letter of the text of the Bible alone. Links to multiple translations and versions are provided for verification. For comparison, known historically dated events are associated with the resultant literal dates. Dates according to the famous Ussher chronology appear in small italic type as "Anno Mundi" (Latin: "Year of the World") and "Ante Christum" (Latin: "Before Christ"). In ancient Israel a part year was designated as the previous king's last year and the new king's 1st year. The arithmetic can be checked by starting at the bottom of the table with the date of the destruction of the Temple in 587 and adding the number of years in the Scriptures (books of the Prophets and Chronicles through Genesis) back up to the beginning. Dates with events in italics, appearing for historical comparison, are according to Bernard Grun's The Timetables of History. For the period after 587 BCE, known historical dates are used as referents.
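The Anno Mundi conversion described above is simple arithmetic once a creation epoch is chosen. The sketch below uses Ussher's 4004 BC epoch; any of the other epochs listed earlier could be substituted.

```python
# Convert an Anno Mundi (AM) year to a BC year, given a creation epoch.
# Ussher's epoch is used here (AM 1 = 4004 BC); Luther's 3961 BC or the
# Hebrew-calendar 3760 BC would work the same way.  For dates well
# before Christ the simple subtraction below suffices.

USSHER_EPOCH_BC = 4004  # AM 1 corresponds to 4004 BC

def am_to_bc(am_year, epoch_bc=USSHER_EPOCH_BC):
    """Year n Anno Mundi falls in (epoch - (n - 1)) BC."""
    return epoch_bc - (am_year - 1)

print(am_to_bc(1))     # 4004: the creation year itself
print(am_to_bc(1656))  # the Flood year, AM 1656, by this subtraction
```

The same subtraction, run in reverse from a fixed historical anchor such as the destruction of the Temple in 587 BCE, is the "checking the arithmetic" procedure described in the text.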
Biblical source texts for stated numbers of years are referenced and linked. Reference sources are the RSVCE, The New American Bible, The Timetables of History by Bernard Grun, and the Holman Illustrated Bible Dictionary (2003). Adam to the Flood 4246–2590 BC The Flood to Abram 2589–2211 BC Abraham to Joseph 2198–1936 BC Egypt to the Exodus 1914–1577 BCE The Wilderness Period to the Conquest of Canaan 1576–1505 BC The Judges to the United Monarchy 1505–1018 BCE The Divided Monarchy to the Destruction of the Temple 982–587 BCE The Babylonian Captivity to the Decree of Cyrus 586–539 BCE The Second Temple to Alexander the Great 538–334 BCE Jaddua the high priest to John Hyrcanus 333–104 BCE Esther 11:1 – the 4th year of Ptolemy and Cleopatra, possibly 78–77 BCE 2 Maccabees 1:10–12 – Aristobulus II, 66–63 BCE See also Council of Jamnia Dating creation History of ancient Israel and Judah Intertestamental period Universal history Young Earth creationism Notes References Citations Bibliography Bible-related controversies Christian fundamentalism Christian theology of the Bible Chronology Hebrew Bible studies Hebrew calendar Bible Timelines of Christianity
Biblical literalist chronology
Physics
1,492
45,569
https://en.wikipedia.org/wiki/Dedekind%20cut
In mathematics, Dedekind cuts, named after German mathematician Richard Dedekind (but previously considered by Joseph Bertrand), are a method of construction of the real numbers from the rational numbers. A Dedekind cut is a partition of the rational numbers into two sets A and B, such that each element of A is less than every element of B, and A contains no greatest element. The set B may or may not have a smallest element among the rationals. If B has a smallest element among the rationals, the cut corresponds to that rational. Otherwise, that cut defines a unique irrational number which, loosely speaking, fills the "gap" between A and B. In other words, A contains every rational number less than the cut, and B contains every rational number greater than or equal to the cut. An irrational cut is equated to an irrational number which is in neither set. Every real number, rational or not, is equated to one and only one cut of rationals. Dedekind cuts can be generalized from the rational numbers to any totally ordered set by defining a Dedekind cut as a partition of a totally ordered set into two non-empty parts A and B, such that A is closed downwards (meaning that for all a in A, x ≤ a implies that x is in A as well) and B is closed upwards, and A contains no greatest element. See also completeness (order theory). It is straightforward to show that a Dedekind cut among the real numbers is uniquely defined by the corresponding cut among the rational numbers. Similarly, every cut of reals is identical to the cut produced by a specific real number (which can be identified as the smallest element of the B set). In other words, the number line where every real number is defined as a Dedekind cut of rationals is a complete continuum without any further gaps. Definition A Dedekind cut is a partition of the rationals into two subsets A and B such that: A is nonempty and A ≠ Q (equivalently, B is nonempty). If x ∈ A, y is rational, and y < x, then y ∈ A. (A is "closed downwards".) If x ∈ A, then there exists a y ∈ A such that y > x.
(A does not contain a greatest element.) By omitting the first two requirements, we formally obtain the extended real number line. Representations It is more symmetrical to use the (A, B) notation for Dedekind cuts, but each of A and B does determine the other. It can be a simplification, in terms of notation if nothing more, to concentrate on one "half" — say, the lower one — and call any downward-closed set A without greatest element a "Dedekind cut". If the ordered set S is complete, then, for every Dedekind cut (A, B) of S, the set B must have a minimal element b, hence we must have that A is the interval (−∞, b), and B the interval [b, +∞). In this case, we say that b is represented by the cut (A, B). The important purpose of the Dedekind cut is to work with number sets that are not complete. The cut itself can represent a number not in the original collection of numbers (most often rational numbers). The cut can represent a number b, even though the numbers contained in the two sets A and B do not actually include the number b that their cut represents. For example if A and B only contain rational numbers, they can still be cut at √2 by putting every negative rational number in A, along with every non-negative rational number whose square is less than 2; similarly B would contain every positive rational number whose square is greater than or equal to 2. Even though there is no rational value for √2, if the rational numbers are partitioned into A and B this way, the partition itself represents an irrational number.
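A Dedekind cut can be represented concretely by a membership predicate for its lower set. The sketch below checks the defining conditions, on a finite sample of rationals, for the cut at √2 (negative rationals together with those whose square is less than 2); the witness formula used for the no-greatest-element check is a standard choice, not the only one.

```python
from fractions import Fraction

# A Dedekind cut represented by a membership predicate for the lower
# set A: here, the cut at sqrt(2) — every negative rational plus every
# rational whose square is less than 2.

def in_A(q: Fraction) -> bool:
    return q < 0 or q * q < 2

# Spot-check the defining properties on a finite sample of rationals.
sample = [Fraction(n, d) for n in range(-8, 9) for d in range(1, 9)]

# 1. A is nonempty, and A is not all of Q.
assert any(in_A(q) for q in sample) and not all(in_A(q) for q in sample)

# 2. A is closed downwards: x in A and y < x imply y in A.
assert all(in_A(y) for x in sample if in_A(x)
                   for y in sample if y < x)

# 3. A has no greatest element: the rational witness y = (2x + 2)/(x + 2)
#    exceeds a given positive x in A while remaining in A.
x = Fraction(7, 5)                # 1.4, whose square 49/25 is below 2
y = (2 * x + 2) / (x + 2)
assert in_A(y) and y > x
print("cut properties hold on the sample")
```

Exhaustively checking property 2 is only possible on a finite sample, of course; for the full set of rationals it follows directly from the shape of the predicate.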
The set of all Dedekind cuts is itself a linearly ordered set (of sets). Moreover, the set of Dedekind cuts has the least-upper-bound property, i.e., every nonempty subset of it that has any upper bound has a least upper bound. Thus, constructing the set of Dedekind cuts serves the purpose of embedding the original ordered set S, which might not have had the least-upper-bound property, within a (usually larger) linearly ordered set that does have this useful property. Construction of the real numbers A typical Dedekind cut of the rational numbers is given by the partition (A, B) with A = {a ∈ Q : a < 0 or a² < 2} and B = {b ∈ Q : b > 0 and b² ≥ 2}. This cut represents the irrational number √2 in Dedekind's construction. The essential idea is that we use the set A, the set of all rational numbers that are negative or whose squares are less than 2, to "represent" the number √2, and further, by defining properly arithmetic operators over these sets (addition, subtraction, multiplication, and division), these sets (together with these arithmetic operations) form the familiar real numbers. To establish this, one must show that A really is a cut (according to the definition) and that the square of A, that is A × A (please refer to the link above for the precise definition of how the multiplication of cuts is defined), is 2 (note that rigorously speaking the number 2 is represented by the cut {x ∈ Q : x < 2}). To show the first part, we show that for any positive rational x with x² < 2, there is a rational y with x < y and y² < 2. The choice y = (2x + 2)/(x + 2) works, thus A is indeed a cut. Now armed with the multiplication between cuts, it is easy to check that A × A ≤ 2 (essentially, this is because xy ≤ 2 for all x, y ∈ A with x, y ≥ 0). Therefore to show that A × A = 2, we show that A × A ≥ 2, and it suffices to show that for any rational r < 2, there exists a positive x ∈ A with x² > r. For this we notice that if x > 0 and 2 − x² = ε > 0, then for the y constructed above, 2 − y² < ε/2; iterating, we obtain a sequence in A whose squares become arbitrarily close to 2, which finishes the proof. Note that the equality x² = 2 cannot hold since √2 is not rational.
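One standard rational witness showing that the lower set of the cut at √2 has no greatest element is y = (2x + 2)/(x + 2); iterating it produces rationals inside the cut whose squares approach 2. A small numeric sketch (the recurrence is one standard choice, not unique):

```python
from fractions import Fraction

# Iterate y = (2x + 2)/(x + 2), which maps a positive rational with
# x^2 < 2 to a strictly larger rational still satisfying y^2 < 2.
# The squares climb toward 2, illustrating that the lower set of the
# cut at sqrt(2) has no greatest element.

x = Fraction(1)
for _ in range(6):
    x = (2 * x + 2) / (x + 2)
    print(x, float(x * x))

assert x * x < 2                        # still inside the lower set
assert 2 - x * x < Fraction(1, 10**4)   # yet the square is close to 2
```

Exact `Fraction` arithmetic keeps the demonstration honest: every iterate provably satisfies y² < 2, with no floating-point rounding involved.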
Relation to interval arithmetic Given a Dedekind cut representing the real number r by splitting the rationals into (A, B), where the rationals in A are less than r and the rationals in B are greater than r, it can be equivalently represented as the set of pairs (a, b) with a ∈ A and b ∈ B, with the lower cut and the upper cut being given by projections. This corresponds exactly to the set of intervals approximating r. This allows the basic arithmetic operations on the real numbers to be defined in terms of interval arithmetic. This property and its relation with real numbers given only in terms of A and B is particularly important in weaker foundations such as constructive analysis. Generalizations Arbitrary linearly ordered sets In the general case of an arbitrary linearly ordered set X, a cut is a pair (A, B) such that A ∪ B = X, and a ∈ A and b ∈ B imply a < b. Some authors add the requirement that both A and B are nonempty. If neither A has a maximum, nor B has a minimum, the cut is called a gap. A linearly ordered set endowed with the order topology is compact if and only if it has no gap. Surreal numbers A construction resembling Dedekind cuts is used for (one among many possible) constructions of surreal numbers. The relevant notion in this case is a Cuesta-Dutari cut, named after the Spanish mathematician Norberto Cuesta Dutari. Partially ordered sets More generally, if S is a partially ordered set, a completion of S means a complete lattice L with an order-embedding of S into L. The notion of complete lattice generalizes the least-upper-bound property of the reals. One completion of S is the set of its downwardly closed subsets, ordered by inclusion. A related completion that preserves all existing sups and infs of S is obtained by the following construction: For each subset A of S, let Au denote the set of upper bounds of A, and let Al denote the set of lower bounds of A. (These operators form a Galois connection.) Then the Dedekind–MacNeille completion of S consists of all subsets A for which (Au)l = A; it is ordered by inclusion.
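The Dedekind–MacNeille construction just described can be run mechanically on a tiny example. The two-element antichain below is a hypothetical illustration, chosen because the completion visibly adds a bottom and a top element.

```python
from itertools import combinations

# Dedekind–MacNeille completion of a small poset, following the
# construction in the text: keep exactly those subsets A of S for
# which lower(upper(A)) == A.  The poset is a hypothetical
# two-element antichain {a, b} (no order between a and b).

S = ['a', 'b']
leq = {('a', 'a'), ('b', 'b')}  # reflexive only: a, b incomparable

def upper(A):
    """Set of elements above every member of A (Au in the text)."""
    return frozenset(u for u in S if all((x, u) in leq for x in A))

def lower(A):
    """Set of elements below every member of A (Al in the text)."""
    return frozenset(l for l in S if all((l, x) in leq for x in A))

cuts = set()
for r in range(len(S) + 1):
    for A in combinations(S, r):
        A = frozenset(A)
        if lower(upper(A)) == A:
            cuts.add(A)

print(sorted(sorted(c) for c in cuts))
# the completion adds a bottom (the empty set) and a top ({a, b})
```

For this antichain the completion has four elements: the empty set, {a}, {b}, and {a, b}, ordered by inclusion, which is the smallest complete lattice containing the antichain.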
The Dedekind-MacNeille completion is the smallest complete lattice with S embedded in it. Notes References Dedekind, Richard, Essays on the Theory of Numbers, "Continuity and Irrational Numbers," Dover Publications: New York, . Also available at Project Gutenberg. External links Order theory Rational numbers Real numbers
Dedekind cut
Mathematics
1,801
2,255,858
https://en.wikipedia.org/wiki/Somatic%20marker%20hypothesis
The somatic marker hypothesis, formulated by Antonio Damasio and associated researchers, proposes that emotional processes guide (or bias) behavior, particularly decision-making. "Somatic markers" are feelings in the body that are associated with emotions, such as the association of rapid heartbeat with anxiety or of nausea with disgust. According to the hypothesis, somatic markers strongly influence subsequent decision-making. Within the brain, somatic markers are thought to be processed in the ventromedial prefrontal cortex (vmPFC) and the amygdala. The hypothesis has been tested in experiments using the Iowa gambling task. Background In economic theory, human decision-making is often modeled as being devoid of emotions, involving only logical reasoning based on cost-benefit calculations. In contrast, the somatic marker hypothesis proposes that emotions play a critical role in the ability to make fast, rational decisions in complex and uncertain situations. Patients with frontal lobe damage, such as Phineas Gage, provided the first evidence that the frontal lobes were associated with decision-making. Frontal lobe damage, particularly to the vmPFC, results in impaired abilities to organize and plan behavior and learn from previous mistakes, without affecting intellect in terms of working memory, attention, and language comprehension and expression. vmPFC patients also have difficulty expressing and experiencing appropriate emotions. This led Antonio Damasio to hypothesize that decision-making deficits following vmPFC damage result from the inability to use emotions to help guide future behavior based on past experiences. Consequently, vmPFC damage forces those affected to rely on slow and laborious cost-benefit analyses for every given choice situation. Hypothesis When individuals make decisions, they must assess the incentive value of the choices available to them, using cognitive and emotional processes. 
When the individuals face complex and conflicting choices, they may be unable to decide using only cognitive processes, which may become overloaded. Emotions, consequently, are hypothesized to guide decision-making. Emotions, as defined by Damasio, are changes in both body and brain states in response to stimuli. Physiological changes (such as muscle tone, heart rate, endocrine activity, posture, facial expression, and so forth) occur in the body and are relayed to the brain, where they are transformed into an emotion that tells the individual something about the stimulus that they have encountered. Over time, emotions and their corresponding bodily changes, which are called "somatic markers", become associated with particular situations and their past outcomes. When making subsequent decisions, these somatic markers and their evoked emotions are consciously or unconsciously associated with their past outcomes, and influence decision-making in favor of some behaviors instead of others. For instance, when a somatic marker associated with a positive outcome is perceived, the person may feel happy and thereby motivated to pursue that behavior. When a somatic marker associated with a negative outcome is perceived, the person may feel sad, which acts as an internal alarm to warn the individual to avoid that course of action. These situation-specific somatic states are based on, and reinforced by, past experiences, and help to guide behavior in favor of more advantageous choices; they are therefore adaptive. According to the hypothesis, two distinct pathways reactivate somatic marker responses. In the first pathway, emotion can be evoked by changes in the body that are projected to the brain – called the "body loop". For instance, encountering a feared object like a snake may initiate the fight-or-flight response and cause fear.
In the second pathway, cognitive representations of the emotions (imagining an unpleasant situation "as if" you were in that particular situation) can be activated in the brain without being directly elicited by a sensory stimulus – called the "as-if body loop". Thus, the brain can anticipate expected bodily changes, which allows the individual to respond faster to external stimuli without waiting for an event to actually occur. The amygdala and vmPFC (a subsection of the orbital and medial prefrontal cortex or OMPFC) are essential components of this hypothesized mechanism, and therefore damage to either structure will disrupt decision-making. Experimental evidence In an effort to produce a simple neuropsychological tool that would assess deficits in emotional processing, decision-making, and social skills of OMPFC-lesioned individuals, Bechara and collaborators created the Iowa gambling task. The task measures a form of emotion-based learning. Studies using the gambling task have found deficits in various neurological (such as amygdala and OMPFC lesions) and psychiatric populations (such as schizophrenia, mania, and drug abusers). The Iowa gambling task is a computerized test in which participants are presented with four decks of cards from which they repeatedly choose. Each deck contains various amounts of rewards of either $50 or $100, and occasional losses that are greater in the decks with higher rewards. The participants do not know where the penalty cards are located, and are told to pick cards that will maximize their winnings. The most profitable strategy turns out to be to choose cards only from the small reward/small penalty decks, because although the reward is smaller, the penalty is proportionally much smaller than in the high reward/high penalty decks. Over the course of a session, most healthy participants come to adopt the profitable low-penalty deck strategy. 
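The deck economics described above can be sketched numerically. The specific reward/penalty schedule below (a commonly cited parameterization of the task) is an assumption for illustration, not a figure stated in this article:

```python
# Illustrative payoff structure of the Iowa gambling task.
# The penalty totals below follow a commonly cited parameterization
# and are assumptions for this sketch, not values from the article.

def net_per_10_cards(reward_per_card, penalties_per_10_cards):
    """Expected net winnings from drawing 10 cards from one deck."""
    return 10 * reward_per_card - penalties_per_10_cards

# "Bad" decks: $100 per card, but $1250 in penalties per 10 cards.
bad_deck = net_per_10_cards(100, 1250)   # -250
# "Good" decks: $50 per card, but only $250 in penalties per 10 cards.
good_deck = net_per_10_cards(50, 250)    # +250
```

Despite the larger per-card reward, the high-reward decks lose money on average, which is why healthy participants gradually shift toward the low-reward, low-penalty decks.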
Participants with brain damage, however, are unable to determine the better deck to choose from, and continue to choose from the high reward/high penalty decks. Since the Iowa gambling task measures participants' quickness in "developing anticipatory emotional responses to guide advantageous choices", it is helpful in testing the somatic marker hypothesis. According to the hypothesis, somatic markers give rise to anticipation of the emotional consequences of a decision being made. Consequently, persons who perform well on the task are thought to be aware of the penalty cards and of the negative emotions associated with drawing such cards, and to realize which deck is less likely to yield a penalty. This experiment has been used to analyze the impairments of people with damage to the vmPFC, which is known to affect neural signaling of prospective rewards or punishments. Such persons perform less well on the task. Functional magnetic resonance imaging (fMRI) has been used to analyze the brain during the Iowa gambling task. The brain regions that were activated during the task were also the ones hypothesized to be triggered by somatic markers during decision-making. Evolutionary significance Damasio has posited that the ability of humans to perform abstract thinking quickly and efficiently coincides with both the development of the vmPFC and the use of somatic markers to guide human behavior during evolution. Patients with damage to the vmPFC are more likely to engage in behaviors that negatively impact personal relationships in the distant future, but they never engage in actions that would lead to immediate harm to themselves or others. The evolution of the prefrontal cortex was associated with the ability to represent events that may occur in the future. Application to risky behavior The somatic marker hypothesis has been applied to understanding risky behaviors, such as risky sexual behavior and drug addiction. 
According to the hypothesis, riskier sexual behaviors are more exhilarating and pleasurable, and are therefore more likely to stimulate repeated engagement in such behaviors. When this idea was tested in individuals who were infected with HIV and were substance dependent, differences were found between persons who scored well on the Iowa gambling task and those who scored poorly. The high scorers showed a correlation between the amount of distress they reported over their HIV status and their acceptance of risk during sexual behavior – the greater the distress, the greater the risk that these people would take. The low scorers, on the other hand, showed no such correlation. These results were interpreted as indicating that persons with intact decision-making abilities are better able to rely on past emotional experiences when weighing risks than are persons who are deficient in such abilities, and that acceptance of risk serves to ameliorate emotional distress. Drug abusers are thought to ignore the negative consequences of addiction while seeking drugs. According to the somatic marker hypothesis, such abusers are impaired in their ability to recall and consider past unpleasant experiences when weighing whether to engage in drug-seeking behavior. Researchers analyzed the neuroendocrine responses of substance-dependent and healthy individuals after they were shown pleasant or unpleasant images. In response to unpleasant images, drug users showed decreased levels of several neuroendocrine markers, including norepinephrine, cortisol, and adrenocorticotropic hormone. Addicts showed lesser responses to both pleasant and unpleasant images, suggesting that they may have a diminished emotional response. Neuroimaging studies utilizing fMRI indicate that drug-related stimuli can activate brain regions involved in emotional evaluation and reward processing. 
When shown a film of people smoking cocaine, cocaine users showed greater activation of the anterior cingulate cortex, the right inferior parietal lobe, and the caudate nucleus than did non-users. Conversely, the cocaine users showed lesser activation when viewing a sex film than did non-users. Criticism Some researchers believe that the use of somatic markers (i.e., afferent feedback) would be a very inefficient method of influencing behavior. Damasio's notion of the "as-if", experience-dependent feedback route, whereby bodily responses are re-represented utilizing the somatosensory cortex (postcentral gyrus), also proposes an inefficient method of affecting explicit behavior. Edmund Rolls (1999) stated: "it would be very inefficient and noisy to place in the execution route a peripheral response, and transducers to attempt to measure that peripheral response, itself a notoriously difficult procedure" (p. 73). Reinforcement association located in the orbitofrontal cortex and amygdala, where the incentive value of stimuli is decoded, is sufficient to elicit emotion-based learning and to affect behavior via, for example, the orbitofrontal-striatal pathway. This process can occur via implicit or explicit processes. The somatic marker hypothesis represents a model of how feedback from the body may contribute to both advantageous and disadvantageous decision-making in situations of complexity and uncertainty. Much of its supporting data comes from the Iowa gambling task. While the Iowa gambling task has proven to be an ecologically valid measure of decision-making impairment, its interpretation rests on three assumptions that need to hold true. First, the claim that it assesses implicit learning is undermined, as the reward/punishment design is inconsistent with data showing accurate knowledge of the task's possibilities and showing that mechanisms such as working memory appear to have a strong influence. 
Second, the claim that this knowledge arises through preventive marker signals is not supported, given competing explanations of the psychophysiological profiles generated. Lastly, the claim that the impairment is due to a 'myopia for the future' is undermined by more plausible psychological mechanisms explaining deficits on the task, such as reversal learning, risk-taking, and working-memory deficits. There may also be more variability in control performance than previously thought, complicating the interpretation of the findings. Furthermore, although the somatic marker hypothesis has accurately identified many of the brain regions involved in decision-making, emotion, and body-state representation, it has failed to demonstrate clearly how these processes interact at a psychological and evolutionary level. Many experiments could be implemented to test the somatic marker hypothesis further. One approach would be to develop variants of the Iowa gambling task that control for its methodological issues and interpretive ambiguities, for example by removing the reversal learning confound, which would make the task more difficult to comprehend consciously. Additionally, causal tests of the somatic marker hypothesis could be pursued more vigorously in a greater range of populations with altered peripheral feedback, such as patients with facial paralysis. Until a wider range of empirical approaches is employed to test the somatic marker hypothesis, the framework remains an intriguing idea in need of stronger supporting evidence. Despite these issues, the somatic marker hypothesis and the Iowa gambling task reestablish the notion that emotion can be a benefit as well as a problem during the decision-making process in humans. 
See also James–Lange theory Somatization References External links Neuropsychology Behavior Emotion Hypotheses Somatic psychology
The Delta G, or Thor-Delta G was an American expendable launch system used to launch two biological research satellites in 1966 and 1967. It was a member of the Delta family of rockets. The Delta G was a two-stage derivative of the Delta E. The first stage was a Thor missile in the DSV-2C configuration and the second stage was a Delta E. Three Castor-1 solid rocket boosters were clustered around the first stage. The solid-fuel upper stage used on the Delta E was not used on the Delta G. Both launches occurred from Cape Canaveral Air Force Station Launch Complex 17. The first was from pad 17A on 14 December 1966 at 19:20 GMT, with Biosatellite 1. At 22:04 on 7 September 1967, Biosatellite 2 was launched from pad B on the second Delta G. References Delta (rocket family)
End-user computing (EUC) refers to systems in which non-programmers can create working applications. EUC is a group of approaches to computing that aim to better integrate end users into the computing environment. These approaches attempt to realize the potential for high-end computing to perform problem-solving in a trustworthy manner. End-user computing can range in complexity from users simply clicking a series of buttons, to citizen developers writing scripts in a controlled scripting language, to being able to modify and execute code directly. Examples of end-user computing are systems built using fourth-generation programming languages, such as MAPPER or SQL, or one of the fifth-generation programming languages, such as ICAD. Factors Factors contributing to the need for further EUC research include knowledge processing, pervasive computing, issues of ontology, interactive visualization, and the like. Some of the issues related to end-user computing concern software architecture (iconic versus language interfaces, open versus closed, and others). Other issues relate to intellectual property, configuration and maintenance. End-user computing allows more user input into system affairs, ranging from personalization to full-fledged ownership of a system. EUC strategy EUC applications should not evolve by accident; there should be a defined EUC strategy. Any application architecture strategy / IT strategy should consider the white spaces in automation (enterprise functionality not automated by ERP / enterprise-grade applications). These are the potential areas where EUC can play a major role. The ASSIMPLER parameters should then be applied to these white spaces to develop the EUC strategy. (ASSIMPLER stands for availability, scalability, security, interoperability, maintainability, performance, low cost of ownership, extendibility and reliability.) 
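As a concrete illustration of the "citizen developer" end of the EUC spectrum mentioned above, the following is a hypothetical self-service script of the kind an end user might write without being a programmer; the data set and column names are invented for this sketch:

```python
# Hypothetical end-user script: summarize sales by region.
# The inline CSV data and its column names are invented for
# illustration; a real EUC script would typically read a shared file.
import csv
import io

sales_csv = "region,amount\nEMEA,120\nAPAC,80\nEMEA,40\n"

totals = {}
for row in csv.DictReader(io.StringIO(sales_csv)):
    totals[row["region"]] = totals.get(row["region"], 0) + int(row["amount"])

# totals == {'EMEA': 160, 'APAC': 80}
```

Small scripts like this sit between "clicking a series of buttons" and full application development, and they illustrate why EUC governance (versioning, documentation, testing) matters once such artifacts enter business processes.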
In businesses, an end-user concept gives workers more flexibility, as well as more opportunities for better productivity and creativity. However, EUC will work only when leveraged correctly, which is why it requires a full-fledged strategy. Any strategy should include all the tools users might need to carry out their tasks and work more productively. Types of EUC End-user computing covers a broad range of user-facing resources, including: desktop and notebook computers; desktop operating systems and applications; scripting languages such as robotic desktop automation (RDA); smartphones and wearables; mobile, web and cloud applications; virtual desktops and applications. EUC risk drivers Business owners should understand that every user-controlled app needs to be monitored and supervised. Otherwise, the organization risks problems and losses if end users do not follow company policy or when they leave their jobs. In functions such as finance, accounting and regulated activities, unmanaged EUC may expose the organization to regulatory compliance issues and fines. End-user computing operating and business risks may be driven by: lack of rigorous testing; lack of version and change control; lack of documentation and reliance on the end user who developed it; lack of maintenance processes; lack of security; lack of an audit trail; overreliance on manual controls. EUC risk management software Many companies elect to leverage software to manage their EUC risks. Software can provide many benefits to organizations, including: automation of risk management activities; reduction in the effort required for manual controls; version controls for EUC applications; change controls for EUC applications. 
Examples of EUC risk software include: apparity See also Decentralized computing Defensive computing End-user development Journal of Organizational and End User Computing Knowledge-based engineering Situational application Software engineering Usability Usability engineering User interface User-centered design References External links EUSES Consortium, a collaboration that researches end-user computing. Relationship Between Leadership and EUC Efficiency Human–computer interaction
DACOR Corporation was an American manufacturer of scuba diving equipment founded in 1954 by Sam Davison Jr. in Evanston, Illinois, as "The Davison Corporation". From its foundation, DACOR was one of the five early American diving equipment manufacturers. Together, they were: DACOR; U.S. Divers (now Aqua-Lung); Healthways (sold to Scubapro in the early 1960s); Swimaster (sold to Voit in the early 1960s); and Voit (sold to AMF sometime in the late 1950s to early 1960s). Around 1977, Star Wars sound designer Ben Burtt used a Dacor Dart scuba regulator to create the heavy breathing of the antagonist Darth Vader. References Underwater diving engineering Underwater diving equipment manufacturers
Breakthrough Initiatives is a science-based program founded in 2015 and funded by Julia and Yuri Milner, also of Breakthrough Prize, to search for extraterrestrial intelligence over a span of at least 10 years. The program is divided into multiple projects. Breakthrough Listen is an effort to search over 1,000,000 stars for artificial radio or laser signals. A parallel project called Breakthrough Message is an effort to create a message "representative of humanity and planet Earth". The project Breakthrough Starshot, co-founded with Mark Zuckerberg, aims to send a swarm of probes to the nearest star at about 20% the speed of light. The project Breakthrough Watch aims to identify and characterize Earth-sized, rocky planets around Alpha Centauri and other stars within 20 light years of Earth. Breakthrough plans to send a mission to Saturn's moon Enceladus, in search of life in its warm ocean, and in 2018 signed a partnership agreement with NASA for the project. History The Breakthrough Initiatives were announced to the public on 20 July 2015, at London's Royal Society by physicist Stephen Hawking. Russian tycoon Yuri Milner created the Initiatives to search for intelligent extraterrestrial life in the Universe and consider a plan for possibly transmitting messages out into space. The announcement included an open letter co-signed by multiple scientists, including Hawking, expressing support for an intensified search for alien radio communications. During the public launch, Hawking said: "In an infinite Universe, there must be other life. There is no bigger question. It is time to commit to finding the answer." The cash infusion is projected to accelerate the pace of SETI research well beyond its early-2000s rate, and to nearly double the annual rate NASA spent on SETI research in approximately 1973–1993. Projects Breakthrough Listen Breakthrough Listen is a program to search for intelligent extraterrestrial communications in the Universe. 
With $100 million in funding and thousands of hours of dedicated telescope time on state-of-the-art facilities, it is the most comprehensive search for alien communications to date. The project began in January 2016, and is expected to continue for 10 years. The project uses radio wave observations from the Green Bank Observatory and the Parkes Observatory, and visible light observations from the Automated Planet Finder. Targets for the project include one million nearby stars and the centers of 100 galaxies. All data generated from the project are available to the public, and SETI@Home is used for some of the data analysis. The first results were published in April 2017, with further updates expected every 6 months. Breakthrough Message The Breakthrough Message program studies the ethics of sending messages into deep space. The program also launched an open competition with a US$1 million prize pool to design a digital message that could be transmitted from Earth to an extraterrestrial civilization. The message should be "representative of humanity and planet Earth". The program pledges "not to transmit any message until there has been a global debate at high levels of science and politics on the risks and rewards of contacting advanced civilizations". Breakthrough Starshot Breakthrough Starshot, announced 12 April 2016, is a US$100 million program to develop a proof-of-concept light sail spacecraft fleet capable of making the journey to Alpha Centauri at 20% the speed of light (60,000 km/s or 215 million km/h), taking about 20 years to get there, and about 4 years to notify Earth of a successful arrival. The interstellar journey may include a flyby of Proxima Centauri b, an Earth-sized exoplanet that is in the habitable zone of its host star in the Alpha Centauri system. 
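The Starshot timeline can be checked with simple arithmetic; the distance to Alpha Centauri of about 4.37 light-years is an assumed input, not a figure from this article:

```python
# Back-of-the-envelope check of the Starshot timeline.
distance_ly = 4.37        # distance to Alpha Centauri in light-years (assumed)
cruise_fraction_c = 0.20  # cruise speed as a fraction of the speed of light

travel_years = distance_ly / cruise_fraction_c  # ~21.9 years in transit
signal_years = distance_ly                      # return signal travels at c
total_years = travel_years + signal_years       # ~26 years to confirmation
```

These figures are consistent with the stated "about 20 years to get there, and about 4 years to notify Earth of a successful arrival".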
From a distance of 1 Astronomical Unit (150 million kilometers or 93 million miles), the four cameras on each of the spacecraft could potentially capture an image of high enough quality to resolve surface features. The spacecraft fleet would have 1000 craft, and each craft, named StarChip, would be a very small centimeter-sized craft weighing several grams. They would be propelled by several ground-based lasers of up to 100 gigawatts. Each tiny spacecraft would transmit data back to Earth using a compact on-board laser communications system. Pete Worden is the head of this project. The conceptual principles to enable this interstellar travel project were described in "A Roadmap to Interstellar Flight", by Philip Lubin of UC Santa Barbara. METI president Douglas Vakoch summarized the significance of the project, saying that "by sending hundreds or thousands of space probes the size of postage stamps, Breakthrough Starshot gets around the hazards of spaceflight that could easily end a mission relying on a single spacecraft. Only one nanocraft needs to make its way to Alpha Centauri and send back a signal for the mission to be successful. When that happens, Starshot will make history." In July 2017, scientists announced that precursors to StarChip, named Sprites, were successfully launched and flown. Breakthrough Watch Breakthrough Watch is a multimillion-dollar astronomical program to develop Earth- and space-based technologies that can find Earth-like planets in our cosmic neighborhood – and try to establish whether they host life. The project aims to identify and characterize Earth-sized, rocky planets around Alpha Centauri and other stars within 20 light years of Earth, in search of oxygen and other "biosignatures." Breakthrough Enceladus Breakthrough Enceladus is an astrobiology space probe mission concept to explore the possibility of life on Saturn's moon, Enceladus. 
In September 2018, NASA signed a collaboration agreement with Breakthrough to jointly create the mission concept. This mission would be the first privately funded deep space mission. It would study the content of the plumes ejecting from Enceladus's warm ocean through its southern ice crust. Enceladus's ice crust is thought to be around two to five kilometers thick, and a probe could use an ice-penetrating radar to constrain its structure. See also Active SETI Colossus Array Array of 74m telescopes capable of laser propelling nano crafts. Communication with extraterrestrial intelligence Interstellar probe Interstellar travel IXS Enterprise Nexus for Exoplanet System Science Ohio State University Radio Observatory 100 Year Starship Open data Open-source software Project Daedalus Project Dragonfly Project Icarus Project Longshot Search for extraterrestrial intelligence SETI@home Starship Starwisp References External links Breakthrough Initiatives web site Breakthrough Listen Breakthrough Message Yuri Milner and Stephen Hawking announce $100 million Breakthrough Initiative to dramatically accelerate search for intelligent life in the Universe / Breakthrough Initiatives, London, 20 July 2015 Breakthrough Listen, Breakthrough Initiatives website Breakthrough Initiatives' official website Creation of Stephen Hawking's Universe with Nanotechnology Search for extraterrestrial intelligence Interstellar messages Interstellar travel Proposed space probes Yuri Milner
The Inter@ctive Pager is a discontinued two-way pager released in 1996 by Research In Motion (later known for the BlackBerry line of smartphones) that allowed users to receive and send messages via the Mobitex wireless network. The US operator of Mobitex, RAM Mobile Data, introduced the Inter@ctive Pager service as RAMfirst Interactive Paging. The device was named '1997 Top Product' by the magazine Wireless for the Corporate User. It is also known as the RIM-900. The device is credited with introducing features such as peer-to-peer delivery, read receipts, sending faxes to phones and text-to-speech technology. In August 1998, BellSouth Wireless Data replaced the RIM-900 with the BlackBerry 950 and marketed the service as BellSouth Interactive Paging. References External links Interactive Messaging Plus User's Guide Pagers Information appliances BlackBerry Limited Products introduced in 1996
Adolf Paul Schulze FRSE FRMS (1840–1891) was a 19th-century German merchant and amateur optical scientist who settled in Scotland. He created the firm Schulze, Paton & Co. He was an expert on microscopes and microphotography and jointly founded the Scottish Microscopical Society. In business he was known as Paul Schulze and in microscopy he was known as Adolf or Adolph Schulze. Life He was born in Crimmitschau in Saxony (now south-east Germany) on 8 October 1840, the son of Adolph Schulze (1808–1868) and his wife, Othilie Jeannette Streit. He was educated at the Burgerschule in Crimmitschau and then at Zwickau. He studied engineering at Chemnitz Polytechnic. He moved to England in 1861 and in 1866 joined his brother in a yarn business in Manchester. He moved to Glasgow in 1867, setting up premises at 79 Glassford Street while still giving his address as 38 Chorlton Street in Manchester. By 1875 he had moved to larger premises at 223 George Street but was still listed as living in Manchester, now at 19 Greenwood Street. In 1879 he joined the Glasgow Natural History Society. He disappears from Glasgow in the early 1880s and reappears living at 2 Doune Gardens in 1885. In 1887 he was elected a Fellow of the Royal Society of Edinburgh for his contributions to scientific observations. His proposers were William Thomson, Lord Kelvin, John Gray McKendrick, William Dittmar and James Thomson Bottomley. His company Schulze, Paton & Co, yarn agents and merchants, was based at 9 Cochrane Street in Glasgow's Merchant City from around 1887. His partner was Walter R. Paton. The company also acted as agents for a Viennese and Sicilian Association based at the same address. He lived at 2 Doune Gardens near the River Kelvin in the Kelvinside district of Glasgow. He died on 3 January 1891 in Glasgow aged 50. 
Publications On Microscopy and Microscopic Illumination (1875) Family He married Joanna Miller (probably from Manchester), and they had at least six children, including Arthur Paul Schulze (later known as Arthur Paul Miller) (1875–1944). His son Paul Guido Schulze took over his position in his company on his death. References 1840 births 1891 deaths Scientists from Saxony Microscopists Fellows of the Royal Society of Edinburgh People from Crimmitschau
Reconstructed clothing is used or vintage clothing that has been redesigned and resewn into a new garment. Reconstructed clothing became trendy in the mid-2000s. During this first wave of the trend, Generation T (2006), which gave instructions for "108 Ways to Transform a T-Shirt," was published. The book included instructions for how to make halter tops, A-line skirts, and string bikinis out of T-shirts. In 2008, its author, Nicolay, released another book, Generation T: Beyond Fashion, 120 More Ways to Transform Your T's. This book had a wider variety of projects, including ones for children, men, and even pets. In March 2006, the DIY group Compai released their first DIY clothing reconstruction book, 99 Ways to Cut, Sew, Trim, and Tie Your T-shirt Into Something Fabulous! After this book's release, Compai went on to release three more books about reconstructing jeans, sweaters and scarves. Towards the latter half of the 2010s, reconstructed clothing gained traction in high fashion circles, with brands like RE/DONE and Vetements leading the way in popularizing jeans crafted from vintage denim. Marine Serre, winner of the 2017 LVMH prize for young designers, pledges that a minimum of 50% of her collections will consist of reconstructed clothing. New York brand BODE has, since its 2016 inception, focused on pieces reconstructed from vintage or antique textiles such as quilts, tablecloths, lace doilies, and oven mitts. Reconstructed clothing is appealing because it allows the designer to "stamp [their ideas] into an existing piece...and come up with a totally different piece" and because it makes the wearer's clothing unique. References 2000s fashion Fashion design Clothing and the environment Reuse Sustainability Clothing
Atmosphere-breathing electric propulsion, or air-breathing electric propulsion (ABEP), is a propulsion technology for spacecraft that could allow thrust generation in low orbits without the need for on-board propellant, by using residual atmospheric gases as propellant. Atmosphere-breathing electric propulsion could make a new class of long-lived, low-orbiting missions feasible. The concept is currently being investigated by the European Space Agency (ESA), the EU-funded BREATHE project at Sant'Anna School of Advanced Studies in Pisa and the EU-funded DISCOVERER project. Current state-of-the-art conventional electric thrusters cannot maintain flight at low altitudes for longer than about two years, because limitations in propellant storage and in the amount of thrust generated force the spacecraft's orbit to decay. ESA officially announced the first successful RAM-EP prototype on-ground demonstration in March 2018. Principle of operation An ABEP is composed of an intake and an electric thruster: the rarefied gases that cause drag in low Earth orbit (LEO) and very low Earth orbit (VLEO) are used as the propellant. This technology would ideally allow spacecraft to orbit at very low altitudes (below 400 km around the Earth) without the need for on-board propellant, enabling longer missions at a previously inaccessible range of atmospheric altitudes. This advantage makes the technology of interest for scientific missions and for military and civil surveillance services, as well as for low-orbit communication services with even lower latency than Starlink. A special intake collects the gas molecules and directs them to the thruster. The molecules are then ionized by the thruster and expelled from the acceleration stage at very high velocity, generating thrust. 
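The thrust such a system must supply is set by the atmospheric drag it has to compensate. A rough order-of-magnitude sketch using the standard drag equation follows; the density, drag coefficient, and cross-sectional area are all illustrative assumptions, not values from this article:

```python
# Order-of-magnitude drag (and hence required thrust) in VLEO.
# All parameter values below are illustrative assumptions.

def drag_force(rho, v, cd, area):
    """Standard drag equation: F = 0.5 * rho * v**2 * Cd * A."""
    return 0.5 * rho * v**2 * cd * area

rho_200km = 2.5e-10   # kg/m^3, rough atmospheric density near 200 km
v_orbit = 7800.0      # m/s, approximate circular orbital speed
thrust_needed = drag_force(rho_200km, v_orbit, cd=2.2, area=1.0)
# roughly 0.017 N, i.e. tens of millinewtons
```

A drag force of tens of millinewtons is within reach of electric thrusters, which is why drag compensation, rather than large orbital maneuvers, is the natural role for ABEP.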
The electric power needed can be provided by the same power subsystems developed for existing electric propulsion systems, likely a combination of solar arrays and batteries, though other kinds of electric power subsystems can be considered. An ABEP could extend the lifetime of satellites in LEO and VLEO by compensating for atmospheric drag during their operation. The altitude for an Earth-orbiting ABEP can be optimised in the range of 120–250 km. This technology could also be utilized on any planet with an atmosphere, such as Mars or Venus, provided the thruster can process other propellants and the power source can deliver the required power (e.g. sufficient solar irradiation for the solar panels); otherwise, other electric power subsystems such as a space nuclear reactor or a radioisotope thermoelectric generator (RTG) would have to be implemented, for example for a mission around Titan. Concepts and modelling The first studies considering the collection and use of the upper atmosphere as propellant for an electric thruster date back to 1959, with S. T. Demetriades's work on the propulsive fluid accumulator. In the development of atmosphere-breathing ion engines, a notable extension of Child's Law led to its implementation in the ABEP concept in 1995. Originally, Child's Law modeled the flow of charge between an anode and a cathode under the assumption that the initial velocity of the ions was zero. This assumption, however, is not applicable to ion thrusters operating in low Earth orbit, where ambient gas enters the ionization chamber at high velocities. Buford Ray Conley provided a generalization of Child's Law that accounts for a non-zero initial ion velocity. This adaptation has been significant for the theoretical modeling of ion propulsion systems, particularly those that operate in the rarefied conditions of low Earth orbit. The generalization of Child's Law has implications for the design and efficiency of atmosphere-breathing ion thrusters. 
By accounting for the high-velocity ambient gas that enters the ionization chamber in low Earth orbit, the modified law allows more accurate theoretical modeling. Once the ambient gas is ionized in the chamber, it is electromagnetically accelerated out of the exhaust, contributing to the propulsion of the spacecraft. Development and testing European projects ESA's RAM-EP, designed and developed by SITAEL in Italy, was first tested in the laboratory in May 2017. The Institute of Space Systems at the University of Stuttgart is developing the intake and the thruster, the latter being the RF helicon-based plasma thruster (IPT), which was ignited for the first time in March 2020 (see the IRS Uni Stuttgart press release). Such a device has the main advantage of having no components in direct contact with the plasma; this minimizes performance degradation over time due to erosion from aggressive propellants, such as atomic oxygen in VLEO, and it does not require a neutralizer. Intake and thruster are developed within the DISCOVERER EU H2020 project. Intakes have been designed in multiple studies, based on free molecular flow conditions and on gas–surface interaction models: with intake materials that reflect specularly, high efficiencies can theoretically be achieved by using telescope-like designs. With fully diffuse reflection, efficiencies are generally lower, but a trapping mechanism can enhance the pressure distribution in front of the thruster. The UK-based start-up NewOrbit Space has been developing an air-breathing electric propulsion system since 2021, achieving several milestones in the process. Notably, NewOrbit became the first in the industry to successfully operate and neutralize an ion engine entirely on atmospheric air in a vacuum chamber. Initial testing results have shown a specific impulse of 6,380 seconds, with the engine accelerating incoming air to speeds exceeding 200,000 km/h. 
This breakthrough enables the propulsion system to generate enough thrust to overcome atmospheric drag in very low Earth orbit, allowing sustained spacecraft operation at altitudes below 200 km. US & Japanese work Busek Co. Inc. in the U.S. patented its concept of an Air-Breathing Hall Effect Thruster (ABHET) in 2004 and, with funding from the NASA Institute for Advanced Concepts, began a feasibility study in 2011 applying the concept to Mars (Mars-ABHET, or MABHET), where the system would breathe and ionize atmospheric carbon dioxide. The MABHET concept is based on the same general principles as JAXA's Air Breathing Ion Engine (ABIE) and ESA's RAM-EP. See also Ion-propelled aircraft References Ion engines
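As an illustration of the drag-compensation requirement discussed above, the sketch below estimates the drag an ABEP spacecraft must cancel in VLEO and the exhaust velocity its thruster would need, given the mass flow an intake could collect. All parameter values (density, drag coefficient, intake efficiency, areas) are illustrative assumptions, not data from any of the projects named above.

```python
import math

def drag_force(rho, v, cd, area):
    """Aerodynamic drag F = 1/2 * rho * v^2 * Cd * A (free-molecular Cd ~ 2-4)."""
    return 0.5 * rho * v**2 * cd * area

def collected_mass_flow(rho, v, intake_area, eta):
    """Mass flow captured by an intake of collection efficiency eta."""
    return rho * v * intake_area * eta

# Assumed conditions at roughly 200 km altitude (illustrative values only)
rho = 2.5e-10        # kg/m^3, approximate atmospheric density
v_orbital = 7780.0   # m/s, circular orbital speed
cd = 3.0             # drag coefficient in the free-molecular regime
frontal_area = 1.0   # m^2 (intake assumed to span the frontal area)
intake_eta = 0.4     # collection efficiency (design-dependent)

drag = drag_force(rho, v_orbital, cd, frontal_area)
mdot = collected_mass_flow(rho, v_orbital, frontal_area, intake_eta)

# Exhaust velocity needed for thrust = drag, from F = mdot * v_e
v_exhaust = drag / mdot
print(f"drag ~ {drag*1e3:.1f} mN, mdot ~ {mdot*1e6:.3f} mg/s, "
      f"required v_e ~ {v_exhaust/1000:.1f} km/s")
```

Note that the required exhaust velocity reduces to v_orbital * Cd / (2 * eta), i.e. it depends only on the orbital speed, drag coefficient, and collection efficiency, which is why ABEP thrusters must reach exhaust velocities well above orbital speed.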
Atmosphere-breathing electric propulsion
Physics,Chemistry
1,307
293,802
https://en.wikipedia.org/wiki/Closure%20%28mathematics%29
In mathematics, a subset of a given set is closed under an operation of the larger set if performing that operation on members of the subset always produces a member of that subset. For example, the natural numbers are closed under addition, but not under subtraction: 1 − 2 is not a natural number, although both 1 and 2 are. Similarly, a subset is said to be closed under a collection of operations if it is closed under each of the operations individually. The closure of a subset is the result of a closure operator applied to the subset. The closure of a subset under some operations is the smallest superset that is closed under these operations. It is often called the span (for example, linear span) or the generated set. Definitions Let X be a set equipped with one or several methods for producing elements of X from other elements of X. A subset S of X is said to be closed under these methods if, whenever all input elements are in S, all possible results are also in S. Sometimes, one may also say that S has the closure property. The main property of closed sets, which results immediately from the definition, is that every intersection of closed sets is a closed set. It follows that for every subset S of X, there is a smallest closed subset C of X containing S (namely, the intersection of all closed subsets that contain S). Depending on the context, C is called the closure of S or the set generated or spanned by S. The concepts of closed sets and closure are often extended to any property of subsets that is stable under intersection; that is, every intersection of subsets that have the property also has the property. For example, a Zariski-closed set, also known as an algebraic set, is the set of the common zeros of a family of polynomials, and the Zariski closure of a set of points is the smallest algebraic set that contains the set. In algebraic structures An algebraic structure is a set equipped with operations that satisfy some axioms. These axioms may be identities. 
Some axioms may contain existential quantifiers; in this case, it is worth adding auxiliary operations so that all axioms become identities or purely universally quantified formulas. See Algebraic structure for details. A set with a single binary operation that is closed is called a magma. In this context, given an algebraic structure X, a substructure of X is a subset that is closed under all operations of X, including the auxiliary operations that are needed for avoiding existential quantifiers. A substructure is an algebraic structure of the same type as X. It follows that, in a specific example, once closure under the operations is proved, there is no need to check the axioms for proving that a substructure is a structure of the same type. Given a subset S of an algebraic structure X, the closure of S is the smallest substructure of X that is closed under all operations of X. In the context of algebraic structures, this closure is generally called the substructure generated or spanned by S, and one says that S is a generating set of the substructure. For example, a group is a set with an associative operation, often called multiplication, with an identity element, such that every element has an inverse element. Here, the auxiliary operations are the nullary operation that results in the identity element and the unary operation of inversion. A subset of a group that is closed under multiplication and inversion is also closed under the nullary operation (that is, it contains the identity) if and only if it is non-empty. So, a non-empty subset of a group that is closed under multiplication and inversion is a group that is called a subgroup. The subgroup generated by a single element, that is, the closure of this element, is called a cyclic group. In linear algebra, the closure of a non-empty subset of a vector space (under the vector-space operations, that is, addition and scalar multiplication) is the linear span of this subset. 
It is a vector space by the preceding general result, and it can be proved easily that it is the set of linear combinations of elements of the subset. Similar examples can be given for almost every algebraic structure, sometimes with specific terminology. For example, in a commutative ring, the closure of a single element under the ideal operations is called a principal ideal. Binary relations A binary relation R on a set A can be defined as a subset of A × A, the set of ordered pairs of elements of A. The notation xRy is commonly used for (x, y) ∈ R. Many properties or operations on relations can be used to define closures. Some of the most common ones follow: Reflexivity A relation R on the set A is reflexive if xRx for every x ∈ A. As every intersection of reflexive relations is reflexive, this defines a closure. The reflexive closure of a relation R is thus R ∪ {(x, x) : x ∈ A}. Symmetry Symmetry is the unary operation on A × A that maps (x, y) to (y, x). A relation is symmetric if it is closed under this operation, and the symmetric closure of a relation is its closure under this operation. Transitivity Transitivity is defined by the partial binary operation on A × A that maps (x, y) and (y, z) to (x, z). A relation is transitive if it is closed under this operation, and the transitive closure of a relation is its closure under this operation. A preorder is a relation that is reflexive and transitive. It follows that the reflexive transitive closure of a relation is the smallest preorder containing it. Similarly, the reflexive transitive symmetric closure or equivalence closure of a relation is the smallest equivalence relation that contains it. Other examples In matroid theory, the closure of X is the largest superset of X that has the same rank as X. The transitive closure of a set. The algebraic closure of a field. The integral closure of an integral domain in a field that contains it. The radical of an ideal in a commutative ring. In geometry, the convex hull of a set S of points is the smallest convex set of which S is a subset. 
In formal languages, the Kleene closure of a language can be described as the set of strings that can be made by concatenating zero or more strings from that language. In group theory, the conjugate closure or normal closure of a set of group elements is the smallest normal subgroup containing the set. In mathematical analysis and in probability theory, the closure of a collection of subsets of X under countably many set operations is called the σ-algebra generated by the collection. Closure operator In the preceding sections, closures are considered for subsets of a given set. The subsets of a set form a partially ordered set (poset) for inclusion. Closure operators allow generalizing the concept of closure to any partially ordered set. Given a poset S whose partial order is denoted ≤, a closure operator on S is a function f : S → S that is increasing (x ≤ f(x) for all x in S), idempotent (f(f(x)) = f(x)), and monotonic (x ≤ y implies f(x) ≤ f(y)). Equivalently, a function f from S to S is a closure operator if x ≤ f(y) is equivalent to f(x) ≤ f(y) for all x, y in S. An element x of S is closed if it is its own closure, that is, if f(x) = x. By idempotency, an element is closed if and only if it is the closure of some element of S. An example is the topological closure operator; in Kuratowski's characterization, axioms K2, K3, K4' correspond to the above defining properties. An example not operating on subsets is the ceiling function, which maps every real number x to the smallest integer that is not smaller than x. Closure operator vs. closed sets A closure on the subsets of a given set may be defined either by a closure operator or by a set of closed sets that is stable under intersection and includes the given set. These two definitions are equivalent. Indeed, the defining properties of a closure operator imply that an intersection of closed sets is closed: if C is an intersection of closed sets, then f(C) must contain C and be contained in every one of these closed sets; this implies f(C) = C by definition of the intersection. 
Conversely, if closed sets are given and every intersection of closed sets is closed, then one can define a closure operator f such that f(X) is the intersection of the closed sets containing X. This equivalence remains true for partially ordered sets with the greatest-lower-bound property, if one replaces "closed sets" by "closed elements" and "intersection" by "greatest lower bound". Notes References Set theory Closure operators Abstract algebra
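For a finite binary relation, the reflexive, symmetric, and transitive closures described above can be computed directly. The sketch below (illustrative, with hypothetical function names) represents a relation on a set as a Python set of ordered pairs; the transitive closure is found by iterating to a fixed point.

```python
def reflexive_closure(relation, universe):
    """Smallest reflexive relation containing `relation`: add (x, x) for all x."""
    return relation | {(x, x) for x in universe}

def symmetric_closure(relation):
    """Smallest symmetric relation containing `relation`: add (y, x) for each (x, y)."""
    return relation | {(y, x) for (x, y) in relation}

def transitive_closure(relation):
    """Smallest transitive relation containing `relation` (fixed-point iteration)."""
    closure = set(relation)
    while True:
        # Compose the relation with itself: (x, y) and (y, w) give (x, w).
        new_pairs = {(x, w) for (x, y) in closure for (z, w) in closure if y == z}
        if new_pairs <= closure:
            return closure
        closure |= new_pairs

R = {(1, 2), (2, 3)}
print(reflexive_closure(R, {1, 2, 3}))   # adds (1,1), (2,2), (3,3)
print(symmetric_closure(R))              # adds (2,1), (3,2)
print(transitive_closure(R))             # adds (1,3)
```

Each function returns the smallest superset of the input that has the corresponding property, matching the characterization of a closure as the intersection of all closed supersets.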
Closure (mathematics)
Mathematics
1,674
36,249,472
https://en.wikipedia.org/wiki/Formothion
Formothion (chemical formula: C6H12NO4PS2) is a chemical compound with obsolete uses in acaricides and insecticides. References Acetylcholinesterase inhibitors Organophosphate insecticides Acaricides Methyl esters Formamides
Formothion
Chemistry
58
65,830,865
https://en.wikipedia.org/wiki/Mirror%20tree
In genealogy, a mirror tree is a family tree reconstructed through estimates of consanguinity. References Family trees Genetics
Mirror tree
Biology
25
35,037,300
https://en.wikipedia.org/wiki/International%20Convention%20on%20Oil%20Pollution%20Preparedness%2C%20Response%20and%20Co-operation
International Convention on Oil Pollution Preparedness, Response and Co-operation (OPRC) is an international maritime convention establishing measures for dealing with marine oil pollution incidents nationally and in co-operation with other countries. There are 112 state parties to the convention. The OPRC Convention was drafted within the framework of the International Maritime Organization and adopted in 1990, entering into force in 1995. In 2000, a protocol to the convention relating to hazardous and noxious substances (HNS) was adopted (the OPRC-HNS Protocol). In accordance with this convention and its annex, states parties to the 1990 convention undertake, individually or jointly, to take all appropriate measures to prepare for and respond to oil pollution incidents. Scope The Convention applies to: vessels of any type whatsoever operating in the marine environment, including hydrofoil boats, air-cushion vehicles, submersibles, and floating craft of any type; fixed or floating offshore installations or structures engaged in gas or oil exploration, exploitation or production activities, or loading or unloading of oil; and sea ports and oil handling facilities (those facilities which present a risk of an oil pollution incident, including sea ports, oil terminals, pipelines and other oil handling facilities). The Convention does not apply to warships, naval auxiliaries or other ships owned or operated by a state and used only on government non-commercial service. However, parties to the Convention ensure, by the adoption of appropriate measures, that such ships act in a manner consistent with the convention. Oil pollution emergency plans Ships are required to carry a shipboard oil pollution emergency plan, in accordance with the provisions adopted by the International Maritime Organization (IMO) for this purpose. 
These plans are subject, while in a port or at an offshore terminal under the jurisdiction of a party, to inspection by officers duly authorized by that party. Operators of offshore units under the jurisdiction of the parties are required to have oil pollution emergency plans, which are co-ordinated with the national system for responding to oil pollution incidents, approved in accordance with procedures established by the competent national authority. Authorities or operators in charge of sea ports and oil handling facilities under the jurisdiction of parties are also required to have oil pollution emergency plans or similar arrangements which are co-ordinated with the national oil pollution response system. Oil pollution reporting procedures In accordance with the Convention, masters or other persons having charge of ships flying the flag of a party and persons having charge of offshore units under the jurisdiction of a party are required to report without delay any event on their ship or offshore unit involving a discharge or probable discharge of oil: (i) in the case of a ship, to the nearest coastal state; (ii) in the case of an offshore unit, to the coastal state to whose jurisdiction the unit is subject. Masters or other persons having charge of ships flying the flag of a party and persons having charge of offshore units under the jurisdiction of a party are also required to report without delay in a similar way any observed event at sea involving a discharge of oil or the presence of oil. Persons having charge of sea ports and oil handling facilities under the jurisdiction of a party are required to report without delay any event involving a discharge or probable discharge of oil or the presence of oil to the competent national authority. 
Parties to the convention are required to instruct their maritime inspection vessels or aircraft and other appropriate services or officials to report without delay any observed event at sea or at a sea port or oil handling facility involving a discharge of oil or the presence of oil to the competent national authority or to the nearest coastal state, and to request the pilots of civil aircraft to report without delay any observed event at sea involving a discharge of oil or the presence of oil to the nearest coastal state. National and regional systems for preparedness and response Each party is under an obligation to establish a national system for responding promptly and effectively to oil pollution incidents. Such a system comprises: a designated competent national authority or authorities with responsibility for oil pollution preparedness and response; a national operational contact point or points, responsible for the receipt and transmission of oil pollution reports; and an authority which is entitled to act on behalf of the state to request assistance or to decide to render the assistance requested. Also, the system includes a national contingency plan for preparedness and response outlining the organizational relationship of the various involved bodies, public or private, taking into account guidelines developed by the IMO. In addition, each party, either individually or through bilateral or multilateral co-operation and in co-operation with the oil and shipping industries, port authorities and other relevant entities, has to establish: a minimum level of pre-positioned oil spill combating equipment and programmes for its use; a programme of exercises for oil pollution response organizations and training of relevant personnel; detailed plans and communication capabilities for responding to an oil pollution incident; and a mechanism or arrangement to co-ordinate the response to an oil pollution incident with the capabilities to mobilize the necessary resources. 
Parties ensure that all relevant information is provided to the IMO. International co-operation in pollution response Parties are required to co-operate and provide advisory services, technical support and equipment for the purpose of responding to an oil pollution incident upon the request of any party affected or likely to be affected by such an incident. The financing of the costs for such assistance is based on the provisions set out in the annex to the convention. Salient features of OPRC It aims at providing a global framework for international cooperation in combating major incidents or threats of marine pollution. Parties to the convention are required to establish measures for dealing with pollution incidents, either nationally or with other countries. Ships are required to carry a shipboard oil pollution emergency plan. Operators of offshore units are also required to have oil pollution emergency plans or similar arrangements, which must be coordinated with national systems for responding promptly and effectively to oil pollution incidents. Ships are required to report incidents to coastal authorities and the convention details the actions that are to be taken. The convention calls for stockpiles of oil spill combating equipment, oil spill combating exercises and the development of detailed plans for dealing with pollution incidents. Parties to the convention are required to provide assistance to others in the event of a pollution emergency and provision is made for the reimbursement of any assistance provided. A protocol to OPRC relating to hazardous and noxious substances (the OPRC-HNS Protocol) was adopted in 2000. References Further reading Gold, Edgar. International Convention on Oil Pollution Preparedness, Response and Co-operation, 1990 – Report. 22 J. Mar. L. & Com. 341 (1991) Moller, T. H. Santner, R. S. Oil spill preparedness and response – the role of industry. ITOPF. 1997 International Oil Spill Conference. Nelson, P. 
Australia's National Plan to combat pollution of the sea by oil and other noxious and hazardous substances-Overview and current issues. Spill Science & Technology Bulletin, 2000 Environmental treaties International Convention on Oil Pollution Preparedness, Response and Co-operation International Convention on Oil Pollution Preparedness, Response and Co-operation Law of the sea treaties International Convention on Oil Pollution Preparedness, Response and Co-operation Treaties concluded in 1990 Treaties entered into force in 1995 Treaties of Albania Treaties of Algeria Treaties of Angola Treaties of Antigua and Barbuda Treaties of Argentina Treaties of Australia Treaties of Azerbaijan Treaties of the Bahamas Treaties of Bahrain Treaties of Bangladesh Treaties of Benin Treaties of Brazil Treaties of Bulgaria Treaties of Cameroon Treaties of Canada Treaties of Cape Verde Treaties of Chile Treaties of the People's Republic of China Treaties of Colombia Treaties of the Comoros Treaties of the Republic of the Congo Treaties of Ivory Coast Treaties of Croatia Treaties of Cuba Treaties of Denmark Treaties of Djibouti Treaties of Dominica Treaties of Ecuador Treaties of Egypt Treaties of El Salvador Treaties of Estonia Treaties of Finland Treaties of France Treaties of Gabon Treaties of Georgia (country) Treaties of Germany Treaties of Ghana Treaties of Greece Treaties of Guinea Treaties of Guyana Treaties of Honduras Treaties of Iceland Treaties of India Treaties of Indonesia Treaties of Iran Treaties of Ireland Treaties of Israel Treaties of Italy Treaties of Jamaica Treaties of Japan Treaties of Jordan Treaties of Kenya Treaties of Latvia Treaties of Lebanon Treaties of Liberia Treaties of the Libyan Arab Jamahiriya Treaties of Lithuania Treaties of Madagascar Treaties of Malaysia Treaties of Malta Treaties of the Marshall Islands Treaties of Mauritania Treaties of Mauritius Treaties of Mexico Treaties of Monaco Treaties of Morocco Treaties of Mozambique Treaties of 
Myanmar Treaties of Namibia Treaties of the Netherlands Treaties of New Zealand Treaties of Nigeria Treaties of Norway Treaties of Oman Treaties of Pakistan Treaties of Palau Treaties of Peru Treaties of the Philippines Treaties of Poland Treaties of Portugal Treaties of Qatar Treaties of South Korea Treaties of Romania Treaties of Russia Treaties of Saint Kitts and Nevis Treaties of Saint Lucia Treaties of Samoa Treaties of Saudi Arabia Treaties of Senegal Treaties of Seychelles Treaties of Sierra Leone Treaties of Singapore Treaties of Slovenia Treaties of South Africa Treaties of Spain Treaties of Sudan Treaties of Sweden Treaties of Switzerland Treaties of Syria Treaties of Thailand Treaties of Togo Treaties of Tonga Treaties of Trinidad and Tobago Treaties of Tunisia Treaties of Turkey Treaties of the United Kingdom Treaties of Tanzania Treaties of the United States Treaties of Uruguay Treaties of Vanuatu Treaties of Venezuela Treaties of Yemen 1990 in London 1995 in the environment Treaties extended to the Isle of Man Treaties extended to Aruba Treaties extended to the Netherlands Antilles Treaties extended to Greenland Treaties extended to Hong Kong Treaties extended to Macau
International Convention on Oil Pollution Preparedness, Response and Co-operation
Chemistry,Environmental_science
1,886
754,708
https://en.wikipedia.org/wiki/C12H22O11
The molecular form C12H22O11 (molar mass: 342.29 g/mol, exact mass: 342.116212) may refer to: Disaccharides Allolactose Cellobiose Galactose-alpha-1,3-galactose Gentiobiose (amygdalose) Isomaltose Isomaltulose Kojibiose Lactose (milk sugar) Lactulose Laminaribiose Maltose (malt sugar - cereal) 2α-Mannobiose 3α-Mannobiose Melibiose Melibiulose Nigerose Sophorose Sucrose (table sugar) Trehalose Trehalulose Turanose
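The molar mass quoted above can be checked directly from standard atomic weights; the sketch below (illustrative, with a hypothetical helper) sums the contributions for C12H22O11.

```python
# Approximate standard atomic weights (g/mol)
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula_counts):
    """Sum atomic weights weighted by atom counts for a formula."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in formula_counts.items())

m = molar_mass({"C": 12, "H": 22, "O": 11})
print(f"{m:.2f} g/mol")  # ~342.30, matching the 342.29 g/mol quoted above
```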
C12H22O11
Chemistry
157
13,935,217
https://en.wikipedia.org/wiki/1%2C2-Dioxin
1,2-Dioxin is a heterocyclic, organic, antiaromatic compound with the chemical formula C4H4O2. It is an isomeric form of 1,4-dioxin (or p-dioxin). Due to its peroxide-like characteristics, 1,2-dioxin is very unstable and has not been isolated. Calculations suggest that it would isomerize rapidly into but-2-enedial. Even substituted derivatives are very labile, e.g. 1,4-diphenyl-2,3-benzodioxin. Indeed, in 1990, 3,6-bis(p-tolyl)-1,2-dioxin was wrongly reported as the first stable derivative. It was subsequently shown that the compound was not a derivative of 1,2-dioxin, but a thermodynamically more stable dione. References Dioxins Hypothetical chemical compounds Antiaromatic compounds
1,2-Dioxin
Chemistry
203
26,148,127
https://en.wikipedia.org/wiki/Monogyny
Monogyny is a specialised mating system in which a male can only mate with one female throughout his lifetime, but the female may mate with more than one male. In this system, the males generally provide no paternal care. In many spider species that are monogynous, the males have two copulatory organs, which allows them to mate a maximum of twice throughout their lifetime. As is commonly seen in honeybees, ants and certain spider species, a male may put all his energy into a single copulation, even though this limits his future mating opportunities. During copulation, monogynous males have adapted to inflict genital damage on themselves, or even to die, to increase their chances of paternity. Definition and distinction Monogyny is one of several mating systems observed in nature, in which a male mates only once; females, however, may mate with multiple males. It is important to emphasize the distinctions between monogyny and polyandry, and between monogyny and monogamy. Polyandry is a mating system in which a female mates with more than one male; the male, in turn, can also mate with more than one female. In a monogamous setting, both male and female have only one mate at any one time and thus mate only with that partner for that time period. Hence, monogyny is sometimes referred to as male monogamy because the male only mates with one female. Examples The mating system of monogyny is most common in ants, honeybees, and spiders. In colony species In species of ants and honeybees, there is only one female queen who mates with all the males in her colony; the males attend to the queen and mate only with her. There are circumstances, however, where a colony can become queenless, and therefore certain males must adapt to these circumstances in order to increase paternity. In ants and honeybees, there are two different types of monogynous settings. Type A are monogynous, queenright colonies where the queen is the mated female and everyone else is unmated. 
Type B are monogynous, worker-reproductive colonies where there is no queen, but rather there are gamergates, which are mated workers who take on a queen-like role. The queen is normally the only egg producer. However, when a colony becomes queenless, some workers which have intact, undeveloped ovaries may develop them and thus become capable of laying more eggs. So in certain colonies, a singly mated worker called a gamergate reproduces as the functional queen in that colony. These workers are termed "totipotent"; that is, they are able to change and adapt to a different environment in which they no longer have a queen. In spiders Males in certain species of spiders often employ drastic methods to be paternally successful. Monogyny in spiders culminates in extreme traits, such as dramatic male self-sacrifice and emasculation of the male by the female during copulation. Since males only mate with one female in a monogynous setting, each individual male must do whatever it takes to increase his particular paternity success, even if it means sacrificing himself. Male redback spiders twist their abdomens onto the fangs of their mates during copulation and, if cannibalized (65% of matings), increase their paternity relative to males that are not cannibalized. In this way, male redback spiders in a monogynous setting increase their chance of paternity by actually surrendering themselves to be cannibalized by the female. Male sacrifice tactics Genital plugging The benefits of mate guarding and securing paternity are higher than those of searching for many mates in a monogynous system. Securing his paternity becomes a male's first priority during reproduction. One way males mate guard is by creating a physical barrier to turn away any other males. Males can cause severe physical damage to themselves by breaking off their pedipalps in order to plug the genital opening of the female. 
Males of Argiope bruennichi break off their pedipalp inside the female and thus reduce the risk of sperm competition. Males can also remove the sperm of a previous mate from the female and deposit their own sperm, increasing their likelihood of success in that copulation. In the golden orb spider, the male may sacrifice both of his copulatory organs in one mating in order to turn away any wandering males. In other cases, other body parts are cast off, such as the anterior legs in the golden orb spider when the male is attacked by an aggressive female. By doing this, he can continue to mate with her while she eats his legs and does not eat him whole. Self sacrifice and cannibalism Copulation in a monogynous male is a sacrificial system. Such males not only cause genital damage to themselves but, in many cases, die during copulation spontaneously, as in Argiope aurantia, or are cannibalized by the female. This can be observed in many spider species, such as the redback spider, which consumes the male either during or right after copulation if the male is not quick enough to escape. The size and age of the female play a role in whether or not the male is able to escape: the attempt usually fails if the female is older and heavier, making her much more dominant than the male. Another factor in the survival or demise of the male is the duration of copulation: males that copulate for ten seconds or more are much more likely to be eaten than those that jump off sooner. Monogyny increases a male's chance of paternity when there is a male-biased sex ratio in the population. When males make up a large majority of the population, the likelihood of finding multiple females is slim. Thus the males will mate with the first female they encounter. 
Mate guarding her from other males is more beneficial than looking for another female because the odds of meeting another female in a male-biased population are against him. Costs and benefits From a male standpoint, evolutionary theory suggests that the focus of mating is to enhance paternity in order to produce viable offspring. Therefore, sexual selection theory would suggest that a male should attempt to mate with several females: if a male wants to ensure that he will be paternally successful, he should mate with more than one female. When the sex ratio is male biased, however, male monogamy (monogyny) can arise as a means of increasing paternity and producing offspring; in other words, if the population contains many males for each female, then monogyny can arise as a means to produce offspring. This model predicts that a male-biased sex ratio is required for monogyny to evolve. For females, the advantages and benefits are clear: the monogynous female is dominant over the males and has great reproductive value. For males, since they can mate with only one female, they must adapt in order to increase their chance of paternity. Male adaptation Males can adapt in order to increase paternity in a monogynous setting. An example of this would be the formation of gamergates in a queenless colony of honeybees and/or ants. Another example would be male sacrifice in order to increase paternity in certain species of spiders. The costs of increasing paternity in a monogynous setting are great for the males; in certain species of spiders the male will surrender himself to be cannibalized in order to increase paternity. In this respect, the benefit for the female is that she receives a chance to eat if she is hungry; the cost for the male is the loss of his life to increase his paternity. In certain species, male adaptation includes pedipalp damage. 
Males in species of the golden orb weaver, for instance, can protect their paternity by obstructing the female's genital openings with fragments of their copulatory organs. The male will actively participate in damaging his genitals by breaking off parts of his copulatory organs during mating and obstructing the female's genital openings in order to be paternally successful. Evolutionary significance Male animals, especially in species where males provide little or no parental investment (time and energy invested in current offspring at the expense of future offspring), are generally expected to maximize their fitness by mating with several females. In certain monogynous settings, however, paternal investment by the male is greater than that of males in other mating systems because the benefits of paternal protection exceed those of searching for additional mates. Paternal investment includes even dramatic examples such as the remarkable adaptation of male sacrifice via sexual cannibalism and the ability to inflict genital damage on oneself to increase paternity success. In these circumstances, selection may favor extreme mechanisms of paternity protection that amount to a maximal investment in a single mating. There are also circumstances in which monogyny evolves when males do not provide any paternal investment. Researchers have focused on sexual behaviour in systems where males have low paternal investment but frequently mate only once in their lifetimes, after which they are often killed by the female. Mating effort is high for these males. In particular, time and energy costs or risks incurred by males in securing a given mating could decrease the relative number of males available for mating; this type of mating is called non-promiscuous. Researchers have focused on species of web-building spiders with males that show high levels of non-promiscuous mating effort but apparently low paternal investment. 
The mechanism of male monogamy (monogyny) in these species is indisputably the most extreme form of non-promiscuous mating effort. References Mating systems
Monogyny
Biology
2,059
44,076,921
https://en.wikipedia.org/wiki/Sagapenum
Sagapenum (Greek σᾰγάπηνον, σικβινίτζα (Du Cange), σεραπίων; Arabic sakbīnadj; Latin sagapenum, sagapium, seraphinum (Pharm. Witenbergica)) is a historical plant from Media, identified with Ferula persica and Ferula szowitziana, also denoting its yellow translucent resin, which causes irritation of the skin and whose smell resembles that of asafoetida. History Pliny (Historia Naturalis 12.126, 19.167, 20.197) holds that sagapenum is similar to ammoniacum, and mentions its use in adulterating laser. According to Dioscorides (De materia medica 3.85, 95), sagapenum smells like silphium and galbanum, and has expectorant, topical, anti-convulsant, and abortifacient properties. References Resins Traditional medicine Ferula
Sagapenum
Physics
222
4,744,322
https://en.wikipedia.org/wiki/Flux%20linkage
In electrical engineering the term flux linkage is used to define the interaction of a multi-turn inductor with the magnetic flux, as described by Faraday's law of induction. Since the contributions of all turns in the coil add up, in the over-simplified situation of the same flux Φ passing through all the turns, the flux linkage (also known as linked flux) is λ = NΦ, where N is the number of turns. The physical limitations of the coil and the configuration of the magnetic field cause some flux to leak between the turns of the coil, forming the leakage flux and reducing the linkage. The flux linkage is measured in webers (Wb), like the flux itself. Relation to inductance and reactance In a typical application the term "flux linkage" is used when the flux is created by the electric current flowing through the coil itself. Per Hopkinson's law, Φ = F/R_m, where F is the magnetomotive force and R_m is the total reluctance of the coil. Since F = NI, where I is the current, the equation can be rewritten as λ = NΦ = (N²/R_m)I = LI, where L = N²/R_m is called the inductance. Since the electrical reactance of an inductor is X = ωL, where ω is the AC frequency, λ = XI/ω. In circuit theory In circuit theory, flux linkage is a property of a two-terminal element. It is an extension rather than an equivalent of magnetic flux and is defined as a time integral λ = ∫ v dt, where v is the voltage across the device, or the potential difference between the two terminals. This definition can also be written in differential form as a rate, v = dλ/dt. Faraday showed that the magnitude of the electromotive force (EMF) generated in a conductor forming a closed loop is proportional to the rate of change of the total magnetic flux passing through the loop (Faraday's law of induction). 
Thus, for a typical inductance (a coil of conducting wire), the flux linkage is equivalent to magnetic flux, which is the total magnetic field passing through the surface (i.e., normal to that surface) formed by a closed conducting loop, and is determined by the number of turns in the coil and the magnetic field, i.e., λ = NΦ = N∬ B·dS, where B is the magnetic flux density, or magnetic flux per unit area at a given point in space. The simplest example of such a system is a single circular coil of conductive wire immersed in a magnetic field, in which case the flux linkage is simply the flux passing through the loop. The flux through the surface delimited by a coil turn exists independently of the presence of the coil. Furthermore, in a thought experiment with a coil of N turns, where each turn forms a loop with exactly the same boundary, each turn will "link" the "same" (identically, not merely the same quantity) flux Φ, all for a total flux linkage of λ = NΦ. The distinction relies heavily on intuition, and the term "flux linkage" is used mainly in engineering disciplines. Theoretically, the case of a multi-turn induction coil is explained and treated perfectly rigorously with Riemann surfaces: what is called "flux linkage" in engineering is simply the flux passing through the Riemann surface bounded by the coil's turns, hence no particularly useful distinction between flux and "linkage". Due to the equivalence of flux linkage and total magnetic flux in the case of inductance, it is popularly accepted that the flux linkage is simply an alternative term for total flux, used for convenience in engineering applications. Nevertheless, this is not true, especially for the case of the memristor, which is also referred to as the fourth fundamental circuit element. For a memristor, the electric field in the element is not as negligible as in the case of an inductance, so the flux linkage is no longer equivalent to magnetic flux. 
In addition, for a memristor, the energy related to the flux linkage is dissipated in the form of Joule heating, instead of being stored in a magnetic field, as in the case of an inductance. References Sources L. O. Chua, "Memristor – The Missing Circuit Element", IEEE Trans. Circuit Theory, vol. CT-18, no. 5, pp. 507–519, 1971. Electromagnetism Thought experiments in physics
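The relations above (λ = NΦ, Φ = NI/R_m, λ = LI with L = N²/R_m, and X = ωL) can be checked numerically. A minimal Python sketch; all numbers (turn count, reluctance, current, frequency) are chosen purely for illustration and are not from the article:

```python
import math

# Illustrative values, not from the article: a 200-turn coil with
# total reluctance R_m = 5e5 A-turns/Wb, carrying 2 A at 50 Hz.
N, R_m, I, f = 200, 5e5, 2.0, 50.0

L = N**2 / R_m            # inductance from Hopkinson's law
flux = N * I / R_m        # flux: Phi = F/R_m with mmf F = N*I
linkage = N * flux        # flux linkage: lambda = N*Phi
X = 2 * math.pi * f * L   # reactance: X = omega*L

# lambda = L*I and lambda = X*I/omega, consistent with the text
assert math.isclose(linkage, L * I)
assert math.isclose(linkage, X * I / (2 * math.pi * f))
print(L, linkage, X)      # 0.08 H, 0.16 Wb, ~25.13 ohm
```

Leakage flux would reduce `linkage` below N·Φ; the sketch assumes the idealized case of identical flux through every turn.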
Flux linkage
Physics
872
5,205,640
https://en.wikipedia.org/wiki/Variable%20Assembly%20Language
Variable Assembly Language (VAL) is a computer-based control system and language designed specifically for use with Unimation Inc. industrial robots. The VAL robot language is permanently stored as a part of the VAL system. This includes the programming language used to direct the system for individual applications. The VAL language has an easy to understand syntax. It uses a clear, concise, and generally self-explanatory instruction set. All commands and communications with the robot consist of easy to understand word and number sequences. Control programs are written on the same computer that controls the robot. As a real-time system, VAL's continuous trajectory computation permits complex motions to be executed quickly, with efficient use of system memory and reduction in overall system complexity. The VAL system continuously generates robot control commands, and can simultaneously interact with a human operator, permitting on-line program generation and modification. A convenient feature of VAL is the ability to use libraries of manipulation routines. Thus, complex operations may be easily and quickly programmed by combining predefined subtasks. The VAL language consists of monitor commands and program instructions. The monitor commands are used to prepare the system for execution of user-written programs. Program instructions provide the repertoire necessary to create VAL programs for controlling robot actions. Terminology The following terms are frequently used in VAL related operations. Monitor The VAL monitor is an administrative computer program that oversees operation of a system. It accepts user input and initiates the appropriate response; follows instructions from user-written programs to direct the robot; and performs the computations necessary to control the robot. Editor The VAL editor is an aid for entering information into a computer system, and modifying existing text. It is used to enter and modify robot control programs. 
Program A program is a list of instructions telling a computer how to do something. VAL programs are written by system users to describe tasks the robot is to perform. Location A location is a position of an object in space, and the orientation of the object. Locations are used to define the positions and orientations the robot tool is to assume during program execution. VAL programming Several conventions apply to numerical values to be supplied to VAL commands and instructions. Preceding each monitor-command description are two symbols indicating when the command can be typed by the user. A dot (.) signifies the command can be performed when VAL is in its top-level monitor mode and no user program is being executed (that is, when the system prompt is a dot). An asterisk (*) indicates the command can be performed at the same time VAL is executing a program (that is, when the system prompt is an asterisk). If both symbols are present the command can be executed in either case. Most monitor commands and program instructions can be abbreviated. When entering any monitor command or program instruction, the function name can be abbreviated to as many characters as are necessary to make the name unique. For commands and instructions, angle brackets, < >, are used to enclose an item which describes the actual argument to appear. Thus the programmer can supply the appropriate item in that position when entering the command or instruction. Note that these brackets are used here for clarification, and are never to be included as part of a command or instruction. Many VAL commands and instructions have optional arguments. In notation, optional arguments are enclosed in square brackets, [ ]. If there is a comma following such an argument, the comma must be retained if the argument is omitted, unless nothing follows. 
For example, the monitor BASE command has the form: BASE [<dx>] , [<dy>] , [<dz>] , [<rotation>] To specify only a 300-millimeter change in the Z direction, the command could be entered in any of the following ways: BASE 0,0,300,0 BASE ,,300, BASE ,,300 Note that the commas preceding the number 300 must be present to correctly relate the number with a Z-direction change. Like angle brackets, square brackets are never entered as part of a command or instruction. Several types of numerical arguments can appear in commands and instructions. For each type there are restrictions on the values that are accepted by VAL. The following rules should be observed: Distances are entered to define locations to which the robot is to move. The unit of measure for distances is the millimeter, although units are never explicitly entered for any value. Values entered for distances can be positive or negative, with their magnitudes limited by a number representative of the maximum reach of the robot (for example, 1024 mm and 700 mm for the PUMA 500 and PUMA 250 robots, respectively). Within the resultant range, distance values can be specified in increments of 0.01 mm. Note, however, that some values cannot be represented internally, and are stored as the nearest representable value. Angles in degrees are entered to define and modify orientations the robot is to assume at named locations, and to describe angular positions of robot joints. Angle values can be positive or negative, with their magnitudes limited to 180° or 360° depending on the usage. Within the range, angle values can be specified in increments of 0.01°. Some values cannot be represented internally, however, and are stored as the nearest representable value. The VAL system The function of VAL is to regulate and control a robot system by following user commands or instructions. 
In addition to being a compact stand-alone system, VAL has been designed to be highly interactive to minimize programming time, and to provide as many programming aids as possible. External communication The standard VAL system uses an operator's console terminal and manual control box to input commands and data from the user. The operator console serves as the primary communication device and can be either a display terminal or a printing terminal. Interaction with other devices in an automated cell is typically handled by monitoring input channels and switching outputs. By this means the robot can control a modest cell without the need for other programmable devices. VAL Operating System The controller has two levels of operation: the top level is called the VAL operating system, or monitor, because it administers operations of the system, including interaction with the user; the second level is used for diagnostic work on the controller hardware. The system monitor is a computer program stored in programmable read-only memory (PROM) in the computer/controller. PROM memory retains its contents indefinitely, and thus VAL is immediately available when the controller is switched on. The monitor is responsible for control of the robot, and its commands come from the manual control unit, the system terminal, or from programs. To increase its versatility and flexibility, the VAL monitor can perform many of its commands even while a user program is being executed. Commands that can be processed in this way include those for controlling the status of the system, defining robot locations, storing and retrieving information on the floppy disk, and creating and editing robot control programs. References PUMA 560 VAL Manual Robot programming languages Robotics software Robotics at Unimation
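As an illustration only, the two syntax conventions described above (unique-prefix command abbreviation, and BASE's omissible comma-separated arguments) can be sketched in Python. The command list and parsing details here are assumptions for the sketch, not part of VAL:

```python
# Hypothetical subset of monitor commands, chosen for illustration.
COMMANDS = ["BASE", "EDIT", "EXECUTE", "SPEED", "STORE"]

def resolve(abbrev):
    """Return the command matching a unique prefix, else None (ambiguous)."""
    matches = [c for c in COMMANDS if c.startswith(abbrev.upper())]
    return matches[0] if len(matches) == 1 else None

def parse_base(args):
    """Parse 'BASE [dx],[dy],[dz],[rotation]'; omitted fields default to 0."""
    fields = [p.strip() for p in args.split(",")]
    fields += [""] * (4 - len(fields))            # trailing omissions
    return [float(p) if p else 0.0 for p in fields]

print(resolve("EX"))        # 'EXECUTE': the prefix is unique
print(resolve("E"))         # None: ambiguous between EDIT and EXECUTE
print(parse_base(",,300"))  # [0.0, 0.0, 300.0, 0.0]
```

Note how `parse_base(",,300")` and `parse_base("0,0,300,0")` yield the same result, mirroring the equivalent BASE command forms in the text.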
Variable Assembly Language
Engineering
1,427
17,282,821
https://en.wikipedia.org/wiki/Mobile%20IPTV
Mobile IPTV is a technology that enables users to transmit and receive multimedia traffic including video, audio, text and graphic services through IP-based wired and wireless networks, with support for quality of service, quality of experience, security, mobility, and interactive functions. Through Mobile IPTV, users can view IPTV services using a mobile device. Technical approaches Mobile TV plus IP This approach uses the traditional digital broadcast networks to deliver IP-based audio, video, graphics and other broadband data services to mobile users. Wide-area wireless networks such as cellular networks are integrated to support interactivity. Activities in this approach include Digital Video Broadcast (DVB)-CBMS (Convergence of Broadcasting and Mobile Services) and the WorldDMB. In addition, DVB-IPI (IPI: IP Infrastructure) is an open DVB standard that enables audio/video services to be delivered to and through the mobile device via IP networking. DVB-CBMS is developing bi-directional mobile IP-based broadcasting protocol specifications over DVB-H. DVB-CBMS has already finished Phase I and is currently working on Phase II. The WorldDMB Forum is enhancing and extending Eureka 147 to support IP-based services. Eureka 147 was originally developed for digital radio applications and was extended to support video services. Even though this approach technically qualifies as Mobile IPTV, the usage of broadcasting networks may sacrifice the per-user individuality that IP provides. IPTV plus Mobile IPTV services were originally targeted at fixed terminals such as set-top boxes; however, requirements for mobility support were later raised as an outgrowth of the Fixed-Mobile Convergence (FMC) trend. Notable activities include ATIS in the US and, internationally, the Open IPTV Forum and ITU-T FG IPTV. The development of the Mobile IPTV specification is at an early stage. Currently, ITU-T FG IPTV is collecting requirements regarding mobility and wireless characteristics. 
ATIS has not shown any interest in mobility support yet. In the Open IPTV Forum, a forthcoming mobility service is based entirely on IMS (IP Multimedia Subsystem), a set of specifications from 3GPP for delivering IP multimedia to mobile users. Cellular Open Mobile Alliance (OMA) BCAST is working on IP-based mobile broadcasting networks. Its goals are to define an end-to-end framework for mobile broadcast and compile the set of necessary enablers. It is bearer-agnostic, meaning any broadcast distribution network can be adopted as its transport. OMA BCAST, however, is so far applicable only to mobile terminals, though it has shown interest in expanding its specification to cover fixed terminals in Phase II. Internet Internet video services are usually termed Internet TV or Web TV. This approach is open for anybody to be a content provider, a service provider, or a consumer. Quality of service is not guaranteed since it is based on a best-effort service model. Technical obstacles Mobile IPTV has at least one wireless interface per device. A minimum of 2–3 Mbit/s of bandwidth needs to be provided because of the characteristics of the IPTV service, and until 4G wireless network services are widely deployed, wireless link bandwidth is usually not broad enough to accommodate high-definition and ultra-high-definition television quality video services. Since Mobile IPTV assumes at least one wireless link between the source (e.g. a streaming media server) and the destination (e.g. a mobile terminal), there are technical obstacles related to the usage of the wireless link. Most mobile terminals have small displays, low-power processors, and limited storage compared to desktop PCs. Even if mobile terminals are stationary, obstacles around them can affect the received signal and cause packet loss. Packets delivered through the wireless link are exposed to a variety of signal degradations such as shadowing and fast/slow fading. 
Because it is currently not possible to deploy wireless networks to cover all geographical areas with no "dead spots", services are restricted in some areas. However, by adopting vertical handovers (handovers between different networks), the coverage issue can be mitigated. The characteristics of the wireless link can vary due to a variety of causes, and the rate of change can be very abrupt. For example, a vertical handover can quickly change the path between the source and sink, the bandwidth, the physical MAC address, and the IP address. Therefore, some solutions devised for the relatively static wired computer network environment may not work properly. Middleware By deploying middleware, a service provider can control the usage of IPTV services remotely. Also, middleware acts as a transparent way to adapt IPTV services to different platforms. So far, there are several well-known middleware applications for set-top boxes, but these are too large to be implemented on a mobile device. References Related publications Soohong Park and Seong-Ho Jeong, "Mobile IPTV: Approaches, Challenges, Standards and QoS Support", IEEE Internet Computing, Vol. 13, Issue 3, pp. 23–31, May–June 2009 Soohong Park, Seong-Ho Jeong and Cheolju Hwang, Mobile IPTV Expanding the Value of IPTV, The Seventh International Conference on Networking, pp. 296–301, 2008 (DOI 10.1109/ICN.2008.8) Soohong Park, Cheolju Hwang et al., Mobile IPTV Requirements for Non-NGN, TTA Technical Report (TTAR-08.0001), February 2008 Djama, I. and Ahmed, T., A Cross-Layer Interworking of DVBT and WLAN for Mobile IPTV Service Delivery, IEEE Transactions on Broadcasting, Vol. 53, No. 1, pp. 382–390, 2007 Mushtaq, Mubashar and Ahmed, Toufik, P2P-based mobile IPTV: Challenges and opportunities, IEEE/ACS International Conference on Computer Systems and Applications (AICCSA 2008), March 31 – April 4, 2008, pp. 975–980 (DOI 10.1109/AICCSA.2008.4493663) Carlsson, C. and Walden, P., Mobile TV – To Live or Die by Content, IEEE 40th Annual Hawaii International Conference on System Sciences, pp. 51–60, 2007 J. She, F. Hou, P.-H. Ho, and L.-L. Xie, "IPTV over WiMAX: Key Success Factors, Challenges and Solutions", IEEE Communications Magazine, vol. 45, no. 8, pp. 87–93, Aug. 2007 Mobile technology Streaming television
Mobile IPTV
Technology
1,367
39,311,118
https://en.wikipedia.org/wiki/Devendra%20Lal
Devendra Lal FRS (14 February 1929 – 1 December 2012) was an Indian geophysicist. Life He was born in Varanasi, India. He graduated from Banaras Hindu University and then from Bombay University, where his thesis was on cosmic ray physics; his thesis adviser was Bernard Peters. He was Director of the Physical Research Laboratory, Ahmedabad, from 1972 to 1983. He was Visiting Professor at the Scripps Institution of Oceanography, University of California, San Diego, from 1989 to 2012. Devendra Lal was President of the International Union of Geodesy and Geophysics (IUGG) from 1983 to 1987. He was also elected a Fellow of the Royal Society in 1979. References External links https://archive.today/20130628192705/http://paleowave.blogspot.com/2012/12/in-memoriam-devendra-lal-1929-2012.html Devendra Lal Memorial Symposium, Part 1 Devendra Lal Memorial Symposium, Part 2 1929 births 2012 deaths Indian geophysicists Banaras Hindu University alumni University of California, San Diego faculty Fellows of the Royal Society Foreign associates of the National Academy of Sciences Scientists from Varanasi Fellows of the Indian Geophysical Union Recipients of the Padma Shri in science & engineering 20th-century Indian physicists Presidents of the International Union of Geodesy and Geophysics Recipients of the V. M. Goldschmidt Award
Devendra Lal
Chemistry
297
57,731,485
https://en.wikipedia.org/wiki/Spyce%20Kitchen
Spyce Kitchen, or just Spyce, was a robot-powered restaurant which prepared food in "three minutes or less". History MIT mechanical engineering graduates Michael Farid, Brady Knight, Luke Schlueter and Kale Rogers developed the kitchen using seven autonomous work stations to prepare bowl-based meals using healthy ingredients such as kale, beans and grains. The four graduates wanted to make healthy meals more affordable, so they built the robotic technology and initially served the food to students at an MIT dining hall. The group received the $10,000 "Eat It" Lemelson-MIT undergraduate prize in 2016 as one of America's top two collegiate inventors in food technology. The four then teamed up with chef Daniel Boulud to create the new menu for their restaurant. Prices started at $7.50 for an entire meal in a bowl at their first real branch, which opened on May 3, 2018, in Boston, Massachusetts. Referred to as the "Spyce Boys", the four founders were inspired by their experiences as hungry student athletes on tight budgets. Spyce Kitchen's automated cooking units also cleaned up after cooking, washing the cooking apparatus they dirtied. Funding Spyce raised $21 million in series A funding in September 2018, led by venture capital firms Maveron, Collaborative Fund, and Khosla Ventures. Restaurants Spyce operated and then shuttered two restaurants in the Greater Boston area. Their first restaurant was located at 241 Washington St in downtown Boston. Their second restaurant, which opened in February 2021, was located at 1 Brattle Square, in Harvard Square. Acquisition by Sweetgreen and closure In 2021, the company was acquired by Sweetgreen, a chain of salad restaurants. Both Spyce restaurants were closed following the Sweetgreen acquisition, "to focus on developing technology for Sweetgreen restaurants". The downtown Boston location closed October 22, 2021, and the Harvard Square location closed February 18, 2022. References External links Robotics Robotic restaurants
Spyce Kitchen
Engineering
399
705,735
https://en.wikipedia.org/wiki/XInclude
XInclude is a generic mechanism for merging XML documents, by writing inclusion tags in the "main" document to automatically include other documents or parts thereof. The resulting document becomes a single composite XML Information Set. The XInclude mechanism can be used to incorporate content from either XML files or non-XML text files. XInclude is not natively supported in Web browsers, but may be partially achieved by using some extra JavaScript code. Example For example, including the text file license.txt: This document is published under GNU Free Documentation License in an XHTML document: <?xml version="1.0"?> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:xi="http://www.w3.org/2001/XInclude"> <head>...</head> <body> ... <p><xi:include href="license.txt" parse="text"/></p> </body> </html> gives: <?xml version="1.0"?> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:xi="http://www.w3.org/2001/XInclude"> <head>...</head> <body> ... <p>This document is published under GNU Free Documentation License</p> </body> </html> The mechanism is similar to HTML's <object> tag (which is specific to the HTML markup language), but the XInclude mechanism works with any XML format, such as SVG and XHTML. See also XPath References External links XInclude Standard XInclude with XSLT Using XInclude in Xerces Using XInclude article by Elliotte Rusty Harold XML-based standards
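Outside the browser, many XML toolchains can perform XInclude processing; Python's standard library, for instance, provides xml.etree.ElementInclude. The sketch below reproduces the license.txt example in memory (the dictionary-backed loader is a stand-in for reading the file from disk):

```python
import xml.etree.ElementTree as ET
from xml.etree import ElementInclude

DOC = """<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:xi="http://www.w3.org/2001/XInclude">
  <body>
    <p><xi:include href="license.txt" parse="text"/></p>
  </body>
</html>"""

# Stand-in for the filesystem: maps hrefs to their content.
FILES = {"license.txt":
         "This document is published under GNU Free Documentation License"}

def loader(href, parse, encoding=None):
    # parse="text" includes raw text; parse="xml" would include markup.
    if parse == "text":
        return FILES[href]
    return ET.fromstring(FILES[href])

root = ET.fromstring(DOC)
ElementInclude.include(root, loader=loader)   # expands every xi:include

p = root.find(".//{http://www.w3.org/1999/xhtml}p")
print(p.text)
```

With the default loader (no `loader` argument), ElementInclude resolves hrefs against the local filesystem instead.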
XInclude
Technology
428
25,750,768
https://en.wikipedia.org/wiki/Crownlay
A crownlay is a type of dental restoration. Description A crownlay is a hybrid dental restoration typically placed over an endodontically treated tooth that is more conservative than a normal full coverage crown, but less conservative than a normal onlay. Crownlays incorporate an extension of extra restorative material on the underside of the restoration into the excavated pulp chamber following root canal therapy, taking advantage of the extra surface area afforded in this space on the interior aspect of the preparation, thereby sparing the external walls from needing as much tooth reduction. The use of a crownlay results in the conservation of more healthy, natural tooth structure than is otherwise possible. Usage Crownlays are typically used in place of traditional post and core restorations. Post and core buildups are essentially rods of restorative material made out of titanium, stainless steel or resin that gain extra surface area against the internal walls of root canal-treated teeth when there is little to no tooth left above the gumline to hold a normal crown or onlay in place. The post and core buildup serves to aid in retention of a traditional crown but increases the likelihood of root fracture because chewing forces are directed vertically along the hollowed-out and consequently weaker remnants of the internal surfaces of an endodontically treated (root-canal-treated) tooth. Crownlays are typically constructed from milled, monolithic blocks of solid porcelain which not only very intimately fit the prepared tooth, but are acid etched and bonded into place using very strong resin materials, decreasing the need for physical retention. References External links The Academy of CAD/CAM Dentistry Same Day CAD/CAM Dentistry, p. 41 Dental materials Restorative dentistry Prosthodontology
Crownlay
Physics
344
41,943,139
https://en.wikipedia.org/wiki/Credential%20lag
Credential lag usually occurs when a user attempts to log in to a system that relies on updating its cached or otherwise saved user credentials by conferring with Active Directory or a similar database. When a user changes or resets their password, it may take some time for third-party software, for instance an intranet service that queries AD for permissions, to retrieve the new credentials from the Active Directory catalog. Example User "ANOther" is prompted to change her password as it has expired on her Windows domain account. Once it is changed, Active Directory is updated, and the user proceeds to log in. However, the internal intranet site may refresh its credential database only every 15 minutes; until it does so, the user is unable to log in to the intranet service, for up to 15 minutes. References Computer access control
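The example can be reduced to a toy model: a service that mirrors the directory's credentials only on a fixed refresh interval. This Python sketch is purely illustrative (the class, names, and interval are assumptions, not a real Active Directory client):

```python
class CachedCredentialStore:
    """Toy model of a service that copies a directory's credentials
    on a fixed refresh interval (hypothetical, for illustration)."""

    def __init__(self, directory, refresh_interval=900):
        self.directory = directory              # authoritative source (e.g. AD)
        self.refresh_interval = refresh_interval
        self.cache = dict(directory)            # stale copy used for logins
        self.last_refresh = 0

    def check(self, user, password, now):
        # Refresh the cache only when the interval has elapsed.
        if now - self.last_refresh >= self.refresh_interval:
            self.cache = dict(self.directory)
            self.last_refresh = now
        return self.cache.get(user) == password

directory = {"ANOther": "old-pw"}
intranet = CachedCredentialStore(directory, refresh_interval=900)  # 15 min

directory["ANOther"] = "new-pw"                       # password reset in AD
print(intranet.check("ANOther", "new-pw", now=60))    # False: credential lag
print(intranet.check("ANOther", "new-pw", now=900))   # True: cache refreshed
```

Until the refresh interval elapses, the intranet still holds the old password, so the new credentials are rejected, exactly the lag described above.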
Credential lag
Engineering
185
13,038,851
https://en.wikipedia.org/wiki/EMR2
EGF-like module-containing mucin-like hormone receptor-like 2, also known as CD312 (cluster of differentiation 312), is a protein encoded by the ADGRE2 gene. EMR2 is a member of the adhesion GPCR family. Adhesion GPCRs are characterized by an extended extracellular region, often possessing N-terminal protein modules, that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain. EMR2 is expressed by monocytes/macrophages, dendritic cells and all types of granulocytes. In the case of EMR2 the N-terminal domains consist of alternatively spliced epidermal growth factor-like (EGF-like) domains. EMR2 is closely related to CD97, with 97% amino-acid identity in the EGF-like domains. The N-terminal fragment (NTF) of EMR2 presents 2–5 EGF-like domains in humans. Mice lack the Emr2 gene. This gene is closely linked to the gene encoding EGF-like molecule containing mucin-like hormone receptor 3 (EMR3) on chromosome 19. Ligand Like the related CD97 protein, the fourth EGF-like domain of EMR2 binds chondroitin sulfate B to mediate cell attachment. However, unlike CD97, EMR2 does not interact with the complement regulatory protein decay accelerating factor (CD55), indicating that these very closely related proteins likely have nonredundant functions. Signaling Inositol phosphate (IP3) accumulation assays in overexpressing HEK293 cells have demonstrated coupling of EMR2 to Gα15. EGF-like module-containing mucin-like hormone receptor-like 2 (EMR2) is an adhesion GPCR that undergoes GPS autoproteolysis before being trafficked to the plasma membrane. Further, the distribution, translocation, and co-localization of the N-terminal fragment (NTF) and C-terminal fragment (CTF) of EMR2 within lipid rafts may affect cell signaling. Mutations in the GPS have shown that EMR2 does not need to undergo autoproteolysis to be trafficked, but loses function without it. EMR2 has been shown to be necessary for in vitro cell migration. 
Upon cleavage, the N-terminus has been shown to associate with the 7TM, but also to dissociate, giving two possible functions. When the N-terminus dissociates it can be found in lipid rafts. Additionally, the cleaved EMR2 protein 7TM has been found to associate with the EMR4 N-terminus. Function The expression of EMR2 and CD97 on activated lymphocytes and myeloid cells promotes binding with their ligand chondroitin sulfate B on peripheral B cells, indicating a role in leukocyte interaction. The interaction between EMR2 and chondroitin sulfate B in inflamed rheumatoid synovial tissue suggests a role of the receptors in the recruitment and retention of leukocytes in the synovium of arthritis patients. Upon neutrophil activation, EMR2 rapidly moves to membrane ruffles and the leading edge of the cell. Additionally, ligation of EMR2 by antibody promotes neutrophil and macrophage effector functions, suggesting a role in potentiating inflammatory responses. Clinical significance EMR2 is rarely expressed by tumor cell lines and tumors, but has been found on breast and colorectal adenocarcinoma. In breast cancer, robust expression and different distribution of EMR2 are inversely correlated with survival. Gain-of-function mutations within the GAIN domain of EMR2 in certain patient cohorts were shown to result in excessive degranulation by mast cells, resulting in vibratory urticaria. See also EGF module-containing mucin-like hormone receptor References External links GPCR consortium Clusters of differentiation G protein-coupled receptors Adhesion G protein-coupled receptors
EMR2
Chemistry
840
8,073,009
https://en.wikipedia.org/wiki/Tonsil
The tonsils are a set of lymphoid organs facing into the aerodigestive tract, which is known as Waldeyer's tonsillar ring and consists of the adenoid tonsil (or pharyngeal tonsil), two tubal tonsils, two palatine tonsils, and the lingual tonsils. These organs play an important role in the immune system. When used unqualified, the term most commonly refers specifically to the palatine tonsils, which are two lymphoid organs situated at either side of the back of the human throat. The palatine tonsils and the adenoid tonsil are organs consisting of lymphoepithelial tissue located near the oropharynx and nasopharynx (parts of the throat). Structure Humans are born with four types of tonsils: the pharyngeal tonsil, two tubal tonsils, two palatine tonsils and the lingual tonsils. Development The palatine tonsils tend to reach their largest size in puberty, and they gradually undergo atrophy thereafter. However, they are largest relative to the diameter of the throat in young children. In adults, each palatine tonsil normally measures up to 2.5 cm in length, 2.0 cm in width and 1.2 cm in thickness. The adenoid grows until the age of 5, starts to shrink at the age of 7 and becomes small in adulthood. Function The tonsils are immunocompetent organs that serve as the immune system's first line of defense against ingested or inhaled foreign pathogens, and as such frequently engorge with blood to assist in immune responses to common illnesses such as the common cold. Their surface contains specialized antigen capture cells called microfold cells (M cells) that allow for the uptake of antigens produced by pathogens. These M cells then alert the B cells and T cells in the tonsil that a pathogen is present and an immune response is stimulated. B cells are activated and proliferate in areas called germinal centers in the tonsil. These germinal centers are places where B memory cells are created and secretory antibody (IgA) is produced. 
Clinical significance The palatine tonsils can become enlarged (adenotonsillar hyperplasia) or inflamed (tonsillitis). The most common way to treat tonsillitis is with anti-inflammatory drugs such as ibuprofen, or if bacterial in origin, antibiotics, e.g. amoxicillin and azithromycin. Surgical removal (tonsillectomy) may be advised if the tonsils obstruct the airway or interfere with swallowing, or in patients with severe or recurrent tonsillitis. However, different mechanisms of pathogenesis have been described for these two subtypes of tonsillar hypertrophy, which may respond differently to identical therapeutic efforts. In older patients, asymmetric tonsils (also known as asymmetric tonsil hypertrophy) may be an indicator of virally infected tonsils, or tumors such as lymphoma or squamous cell carcinoma. A tonsillolith (also known as a "tonsil stone") is material that accumulates on the palatine tonsil. This can reach the size of a blueberry and is white or cream in color. The main substance is mostly calcium, but it has a strong unpleasant odor because of hydrogen sulfide, methyl mercaptan, and other chemicals. Palatine tonsil enlargement can affect speech, making it hypernasal and giving it the sound of velopharyngeal incompetence (when space in the mouth is not fully separated from the nose's air space). Tonsil size may have a more significant impact on upper airway obstruction for obese children than for those of average weight. As mucosal lymphatic tissue of the aerodigestive tract, the palatine tonsils are viewed in some classifications as belonging to both the gut-associated lymphoid tissue (GALT) and the mucosa-associated lymphoid tissue (MALT). Other viewpoints treat them (and the spleen and thymus) as large lymphatic organs contradistinguished from the smaller tissue loci of GALT and MALT. Additional images References External links Human throat Immune system Lymphatic system Lymphatic tissue Lymphatics of the head and neck Tonsil disorders
Tonsil
Biology
925
2,665,098
https://en.wikipedia.org/wiki/Psi%20Scorpii
Psi Scorpii, which is Latinized from ψ Scorpii, is a star in the zodiac constellation of Scorpius. It is white in hue and has an apparent visual magnitude of 4.94, which is bright enough to be faintly visible to the naked eye. Based upon parallax measurements, it is located at a distance of around 162 light years from the Sun. Data collected during the Hipparcos mission suggests it is an astrometric binary, although nothing is known about the companion. The system is drifting closer to the Sun with a radial velocity of −5 km/s. The visible component is an A-type main sequence star with a stellar classification of A1 V; a class of star that is still fusing hydrogen at its core. It has around twice the mass and 2.2 times the radius of the Sun, and is shining with 18.6 times the Sun's luminosity. The effective temperature of the star's outer atmosphere is 8,846 K. Psi Scorpii is around 451 million years old and is spinning with a projected rotational velocity of 42.3 km/s. References External links A-type main-sequence stars Astrometric binaries Scorpius Scorpii, Psi Durchmusterung objects Scorpii, 15 145570 079375 6031
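As a worked example (not drawn from the article's references; the conversion factor and formula are standard), the quoted apparent magnitude and distance determine the star's absolute visual magnitude through the distance modulus:

```python
import math

# Values quoted above for Psi Scorpii
apparent_mag = 4.94        # apparent visual magnitude
distance_ly = 162.0        # distance in light years

# Convert to parsecs (1 pc = 3.26156 ly), then apply the
# distance modulus: M = m - 5 * log10(d / 10 pc)
distance_pc = distance_ly / 3.26156
absolute_mag = apparent_mag - 5 * math.log10(distance_pc / 10)

print(round(absolute_mag, 2))  # about 1.46
```

An absolute magnitude near +1.5 is indeed typical of an A-type main-sequence star.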
Psi Scorpii
Astronomy
281
1,013,793
https://en.wikipedia.org/wiki/Tube%20socket
Tube sockets are electrical sockets into which vacuum tubes (electronic valves) can be plugged, holding them in place and providing terminals, which can be soldered into the circuit, for each of the pins. Sockets are designed to allow tubes to be inserted in only one orientation. They were used in most tube electronic equipment to allow easy removal and replacement. When tube equipment was common, retailers such as drug stores had vacuum tube testers, and sold replacement tubes. Some Nixie tubes were also designed to use sockets. Throughout the tube era, as technology developed, sometimes differently in different parts of the world, many tube bases and sockets came into use. Sockets are not universal; different tubes may fit mechanically into the same socket, though they may not work properly and may even be damaged. Tube sockets were typically mounted in holes on a sheet metal chassis and wires or other components were hand soldered to lugs on the underside of the socket. In the 1950s, printed circuit boards were introduced and tube sockets were developed whose contacts could be soldered directly to the printed wiring tracks. Looking at the bottom of a socket, or, equivalently, a tube from its bottom, the pins were numbered clockwise, starting at an index notch or gap, a convention that has persisted into the integrated circuit era. In the 1930s, tubes often had the connection to the control grid brought out through a metal top cap on the top of the tube. This was connected by using a clip with an attached wire lead. An example would be the 6A7 pentagrid converter. Later, some tubes, particularly those used as radio frequency (RF) power amplifiers or horizontal deflection amplifiers in TV sets, such as the 6DQ6, had the plate or anode lead protrude through the envelope. In both cases this allowed the tube's output circuitry to be isolated from the input (grid) circuit more effectively.
In the case of the tubes with the plate brought out to a cap, this also allowed the plate to run at higher voltages (over 26,000 volts in the case of rectifiers for color television, such as the 3A3, as well as high-voltage regulator tubes). A few unusual tubes had caps for both grid and plate; the caps were symmetrically placed, with divergent axes. The first tubes The earliest tubes, like the Deforest Spherical Audion, used the typical light bulb Edison socket for the heater, and flying leads for the other elements. Other tubes directly used flying leads for all of their contacts, like the Cunningham AudioTron from 1915, or the Deforest Oscillion. Type C6A xenon thyratrons, used in servos for the U.S. Navy Stable Element Mark 6, had a mogul screw base and L-shaped stiff wires at the top for grid and anode connections. Mating connectors were machined pairs of brass blocks with clamping screws, attached to flying leads (free hanging). Early bases When tubes became more widespread, and new electrodes were added, more connections were required. Specially designed bases were created to meet this need. However, with the world embroiled in World War I and the new electronics technology only just emerging, designs were far from standardized. Usually, each company had its own tubes and sockets, which were not interchangeable with tubes from other companies. By the early 1920s, this situation was finally changing, and several standard bases were created. They consisted of a base (ceramic, metal, bakelite, etc.) with a number of prongs ranging from three to seven, arranged either irregularly or with one or two prongs of larger diameter than the others, so that the tube could only be inserted in a certain position. Sometimes they relied on a bayonet on the side of the base. Examples of these are the very common USA bases UX4, UV4, UY5 and UX6, and the European B5, B6, B7, B8, C7, G8A, etc.
Tubes in the USA typically had from four to seven pins in a circular array, with adjacent pairs of larger pins for heater connections. Before alternating current (AC) line/mains-powered radios were developed, some four-pin tubes (in particular, the very common UX-201A ('01A)) had a bayonet pin on the side of a cylindrical base. The socket used that pin for retaining the tube; insertion finished with a slight clockwise turn. Leaf springs, essentially all in the same plane, pressed upward on the bottoms of the pins, also keeping the bayonet pin engaged. The first hot-cathode CRT, the Western Electric 224-B, had a standard four-pin bayonet base, and the bayonet pin was a live connection. (Five effective pins: It was an electrostatic-deflection gas-focused type, with a diode gun and single-ended deflection. The anode and the other two plates were common.) An early exception to these types of bases is the Peanut 215, which instead of using prongs had a tiny bayonet base with four drop-like contacts. Another exception is the European Side Contact series commonly known as P, which instead of prongs relied on four to twelve side contacts at 90 degrees from the tube axis. Octal In April 1935, the General Electric Company introduced a new eight-pin tube base with their new metal envelope tubes. The new base became known as the octal base. The octal base provided one more conductor with a smaller overall size of the base than the previous line of U.S. tube bases, which had provided a maximum of seven conductors. Octal bases, as defined in IEC 60067, diagram IEC 67-I-5a, have a 45-degree angle between pins, which form a circle around a keyed post (sometimes called a spigot) in the center. Octal sockets were designed to accept octal tubes, the rib in the keyed post fitting an indexing slot in the socket so the tube could only be inserted in one orientation.
When used on metal tubes, pin 1 was always reserved for a connection to the metal shell, which was usually grounded for shielding purposes. This reservation prevented tubes such as the 6SL7/6SN7 dual triodes from being issued with metal envelopes, as such valves need three connections (cathode, grid, anode) for each triode (making six total) plus two connections for the paralleled heaters. The octal base soon caught on for glass tubes, where the large central post could also house and protect the "evacuation tip" of the glass tube. The eight available pins allowed more complex tubes than before, such as dual triodes, to be constructed. The glass envelope of an octal base tube was cemented into a bakelite or plastic base with a hollow post in the center, surrounded by eight metal pins. The wire leads from the tube were soldered into the pins, and the evacuation tip was protected inside the post. Matching plugs were also manufactured that let tube sockets be used as eight-pin electrical connectors; bases from discarded tubes could be salvaged for this purpose. Octal sockets were used to mount other components, particularly electrolytic capacitor assemblies and electrical relays; octal-mount relays are still common. Most octal tubes following the widespread European designation system have penultimate digit "3" as in ECC34 (full details in the Mullard–Philips tube designation article). There is a different, totally obsolete, pre-World-War-II German octal type. Octal and miniature tubes are still in use in tube-type audio hi-fi and guitar amplifiers. Relays were historically manufactured in a vacuum tube form, and industrial-grade relays continue to use the octal base for their pinout. Loctal A variant of the octal base, the B8G loctal or lock-in base (sometimes spelled "loktal", a Sylvania trademark), was developed by Sylvania for ruggedized applications such as automobile radios.
Along with B8B (a British designation out of date by 1958), these eight-pin locking bases are almost identical, and the names are usually taken as interchangeable (although there are some minor differences in specifications, such as spigot material and taper). The pin geometry was the same as for octal, but the pins were thinner (although they will fit into a standard octal socket, they wobble and do not make good contact), the base shell was made of aluminium, and the center hole had an electrical contact that also mechanically locked (hence "loctal") the tube in place. Loctal tubes were only used widely by a few equipment manufacturers, most notably Philco, which used the tubes in many table radios. Loctal tubes have a small indexing mark on the side of the base skirt; they do not release easily from their sockets unless pushed from that side. Because the pins are actually the Fernico or Cunife lead-out wires from the tube, they are prone to intermittent connections caused by the build-up of electrolytic corrosion products, the pin being of a different metallic composition from the socket contact. The loctal tube's structure was supported directly by the connecting pins passing through the glass "button" base. Octal tube structures were supported on a glass "pinch", formed by heating the bottom of the envelope to fusing temperature, then squeezing the pinch closed. Sealing the pinch embedded the connecting wires in the pinch's glass and gave a vacuum-tight seal. The connecting wires then passed through the hollow base pins, where they were soldered to make permanent connections. Loctal tubes had shorter connecting lengths between the socket pins and the internal elements than did their octal counterparts. This allowed them to operate at higher frequencies than octal tubes. The advent of miniature "all-glass" seven- and nine-pin tubes overtook both octals and loctals, so the loctal's higher-frequency potential was never fully exploited.
Loctal tube type numbers in the USA typically begin with "7" (for 6.3-volt types) or "14" (for 12.6-volt types). This was fudged by specifying the heater voltage as nominally 7 or 14 volts so that the tube nomenclature fitted. Battery types (mostly 1.4-volt) are coded "1Lxn", where x is a letter and "n" a number, such as "1LA4". Russian loctals end in L, e.g. 6J1L. European designations are ambiguous: all B8G loctals have numbers either in the range 20–29 (such as EBL21, ECH21, EF22) or in the range 50–59 (special bases, including the European 9-pin lock-in base), except for early tubes in the series (DAC21, DBC21, DCH21, DF21, DF22, DL21, DLL21, DM21), which have either B9G or octal bases, the change to Sylvania's loctal standard coming in 1942; but other types fall in the same ranges (e.g. while the EF51 is B8G loctal, the EF55 is 9-pin loctal, B9G, and the EL51 has a side-contact P8A base). Other loctals Nine-pin loctal bases, B9G, include the 1938 Philips EF50, EL60 and some type numbers in the European 20–29 and 50–59 ranges. There is a different "loctal Lorenz" type covered in the Mullard–Philips tube designation article. Miniature tubes Efforts to introduce small tubes into the marketplace date from the 1920s, when experimenters and hobbyists made radios with so-called peanut tubes like the Peanut 215 mentioned above. Because of the primitive manufacturing techniques of the time, these tubes were too unreliable for commercial use. RCA announced new miniature tubes, which proved reliable, in Electronics magazine. The first ones, such as the 6J6 ECC91 VHF dual triode, were introduced in 1939. The bases commonly referred to as "miniature" are the seven-pin B7G type, and the slightly later nine-pin B9A (Noval). The pins are arranged in a circle of eight or ten evenly spaced positions, with one pin omitted; this allows the tube to be inserted in only one orientation.
Keying by omitting a pin is also used in 8- (subminiature), 10-, and 12-pin (Compactron) tubes (a variant 10-pin form, "Noval+1", is basically a nine-pin socket with an added center contact). As with loctal tubes, the pins of miniature tubes are stiff wires protruding through the bottom of the glass envelope which plug directly into the socket. However, unlike all their predecessors, miniature tubes are not fitted with separate bases; the base is an integral part of the glass envelope. The pinched-off air evacuation nub is at the top of the tube, giving it its distinctive appearance. More than one functional section can be included in a single envelope; a dual triode configuration is particularly common. Seven- and nine-pin tubes were common, though miniature tubes with more pins, such as the Compactron series, were later introduced, and could fit up to three amplifying elements. Some miniature tube sockets had a skirt that mated with a cylindrical metal electrostatic shield that surrounded the tube, fitted with a spring to hold the tube in place if the equipment was subject to vibration. Sometimes the shield was also fitted with thermal contacts to transfer heat from the glass envelope to the shield and act as a heat sink, which was considered to improve tube life in higher power applications. Electrolytic effects from the differing metal alloys used for the miniature tube pins (usually Cunife or Fernico) and the tube base could cause intermittent contact due to local corrosion, especially in relatively low current tubes, such as were used in battery-operated radio sets. Malfunctioning equipment with miniature tubes can sometimes be brought back to life by removing and reinserting the tubes, disturbing the insulating layer of corrosion. Miniature tubes were widely manufactured for military use during World War II, and also used in consumer equipment.
The Sonora Radio and Television Corporation produced the first radio using these miniature tubes, the "Candid", in April 1940. In June 1940 RCA released its battery-operated Model BP-10, the first superheterodyne receiver small enough to fit in a handbag or coat pocket. This model had the following tube lineup: 1R5 — pentagrid converter; 1T4 — I.F. amplifier; 1S5 — Detector/AVC/AF Amplifier; 1S4 — Audio Output. The BP-10 proved so popular that Zenith, Motorola, Emerson, and other radio manufacturers produced similar pocket radios based on RCA's miniature tubes. Several of these pocket radios were introduced in 1941 and sold until the suspension of radio production in April 1942 for the duration of World War II. After the war, miniature tubes continued to be manufactured for civilian use, regardless of any technical advantage, as they were cheaper than octals and loctals. Miniature seven-pin base The B7G (or "small-button" or "heptal") seven-pin miniature tubes are smaller than Noval, with seven pins arranged at 45-degree spacing in a 9.53 mm (3/8 inch) diameter arc, the "missing" pin position being used to position the tube in its socket (unlike octal, loctal and rimlock sockets). Examples include the 6AQ5/EL90 and 6BE6/EK90. European tubes of this type have numbers 90-99, 100-109, 190-199, 900-999. A few in the 100-109 series have unusual, non-B7G bases, e.g., Wehrmacht base. Noval base The nine-pin miniature Noval B9A base, sometimes called button 9-pin, B9-1, offered a useful reduction in physical size compared to previous common types, such as octal (especially important in TV receivers where space was limited), while also providing a sufficient number of connections (unlike B7G) to allow effectively unrestricted access to all the electrodes, even of relatively complex tubes such as double triodes and triode-hexodes.
It could also provide multiple connections to an electrode of a simpler device where useful, as in the four connections to the grid of a conventional grounded-grid UHF triode, e.g., 6AM4, to minimise the deleterious effects of lead inductance on the high-frequency performance. This base type was used by many of the United States and most of the European tubes, e.g., 12AX7-ECC83, EF86 and EL84, produced commercially towards the end of the era before transistors largely displaced their use. The IEC 67-I-12a specification calls for a 36-degree angle between the nine pins of 1.016 mm thickness, in an arc of diameter 11.89 mm. European tubes of this type have numbers 80-89, 180-189, 280-289, 800-899, 8000-8999. Duodecar base The Duodecar B12C base (IEC 67-I-17a) has 12 pins in a 19.1 mm diameter circle and dates from 1961. It was also called the Compactron T-9 construction/E12-70 base. It is generally similar in form to a Noval socket, but larger. In the center is a clearance hole for a tube evacuation pip, which is typically on the bottom of a Compactron tube. (It should not be confused with the similar-sounding but differently sized Duodecal B12A base.) Rimlock base The Rimlock (B8A) base is an eight-pin design with a pin circle diameter close to Noval, and uses a nub on the side of the envelope to engage with a guide and retaining spring in the socket wall. This provides pin registration (since the pins are equi-spaced) and also a fair degree of retention. Early tubes with this base type typically had a metal skirt around the lower ~15 mm of the envelope to match the socket wall, and this offered a degree of built-in screening, but these were fairly soon replaced by "skirtless" versions, which had a characteristic widening in the glass to compensate physically for the absence of the skirt. In the European naming scheme, rimlock tubes are numbered in the ranges 40-49, 110-119 (with exceptions), and 400-499, e.g., EF40.
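The IEC figures quoted above for the Noval base (nine pins at 36-degree spacing on an 11.89 mm diameter pin circle, with the tenth position left empty for keying) translate directly into pin coordinates; the sketch below is a hypothetical illustration of that arithmetic, not taken from any standard.

```python
import math

def pin_positions(n_pins, n_gaps, circle_diameter_mm):
    """Return (x, y) coordinates in mm for the pins of a tube base.

    Pins sit on evenly spaced positions around the pin circle; keying
    is achieved by leaving n_gaps positions empty (one gap for Noval).
    Numbering runs clockwise as seen from the bottom of the socket,
    starting next to the gap, matching the convention described above.
    """
    total = n_pins + n_gaps
    radius = circle_diameter_mm / 2
    step = 2 * math.pi / total            # angular spacing (36 degrees for Noval)
    return [(radius * math.cos(-i * step), radius * math.sin(-i * step))
            for i in range(n_pins)]       # the remaining position(s) form the gap

# Noval (B9A): 9 pins, 1 gap, 11.89 mm pin circle
noval = pin_positions(9, 1, 11.89)
print(len(noval))                         # 9 pins, each 5.945 mm from the center
```

The same helper describes the B7G base from the previous paragraph (7 pins, 1 gap, 9.53 mm circle).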
Although virtually unknown elsewhere, this was a very common base type in European radios of the late 1940s through the 1950s, but was eventually displaced by the ubiquitous B7G and Noval (B9A) base types. UHF tubes By 1935 new tube technologies were required for the development of radar and telecommunications. UHF requirements severely limited the existing tubes, so radical ideas were implemented which affected how these tubes connected to the host system. Two new bases appeared, the acorn tube and the lighthouse tube, both solving the same problems but with different approaches. Thompson, G.M. Rose, Saltzberg and Burnside from RCA created the acorn tube by using far smaller electrodes, with radial short connections. A different approach was taken by the designers of the lighthouse tube, such as the octal-base 2C43, which relied on using concentric cylindrical metal contacts in connections that minimized inductance, thus allowing a much higher frequency. Nuvistors were very small, reducing stray capacitances and lead inductances. The base and socket were so compact that they were widely used in UHF TV tuners. They could also be used in small-signal applications at lower frequencies, as in the Ampex MR-70, a costly studio tape recorder whose entire electronics section was based on nuvistors. Other socket styles There are many other socket types, of which a few are: Decal B10B base (IEC 67-I-41a) 10 pins with 1.02 mm diameter in an 11.89 mm diameter circle, e.g. PFL200 Decar B10G base (IEC E10-73) A 10th pin added to the center of a standard 9-pin miniature base, e.g. 6C9 Magnoval B9D base (IEC 67-I-36a) 9 pins with 1.27 mm diameter in a 17.45 mm pin circle diameter arc, e.g. EL503, EL509, PD500, etc. - not to be confused with... 
Novar B9E base, 9 pins with 1.02 mm diameter in a 17.45 mm pin circle diameter arc, one of several Compactron types, which looks similar to Magnoval (but a Novar tube in a Magnoval socket will not make good pin contact, and a Magnoval tube in a Novar socket may damage the socket). Sub-Magnal B11A base (American), 11 pins. Also used as industrial relay socket and HV power supplies. Amphenol / WirePro (WPI) / Eaton 78-series, Socket (female) part number: 78-S-11. Matching Plug (male) is part number: 86-CP-11 Neo Eightar base (IEC 67-I-31a) 8 pins in a 15.24 mm diameter circle 5-pin sub-miniature wire-ended B5A base (no socket used; e.g. EA76) A remarkably wide variety of tube and similar sockets is listed and described, with some informal application notes, at a commercial site, Pacific T.V., including nuvistor, eight-pin subminiature, vidicon, reflex klystron, nine-pin octal-like, 10-pin miniature (two types), 11-pin sub-magnal, diheptal 14-pin, and many display tubes such as Nixies and vacuum fluorescent types; each socket entry links to a clear, high-quality picture. Some subminiature tubes with flexible wire leads all exiting in the same plane were connected by subminiature inline sockets. Some low-power reflex klystrons such as the 2K25 and 2K45 had small-diameter rigid coaxial outputs parallel to octal base pins. To accommodate the coax, one contact was replaced by a clearance hole. Vacuum tubes for high-power applications often required custom socket designs. A jumbo four-prong socket was used for various industrial tubes. A specialized seven-pin socket (Septar or B7A), with all pins in a circle with one pin wider than the others, was used for transmitting tubes. Subminiature tubes with long wire leads, introduced in the 1950s, were often soldered directly to printed circuit boards. Sockets were made for early transistors, but quickly fell out of favor as transistor reliability became established.
This also happened with early integrated circuits; IC sockets later came to be used only for devices that might need to be upgraded. Summary of base details References See also Nuvistor Compactron Amphenol List of vacuum tubes Vacuum tubes
Tube socket
Physics
4,942
69,323,201
https://en.wikipedia.org/wiki/NGTS-3
NGTS-3 is a star system located in the southern constellation Columba. With an apparent magnitude of 14.67, it requires a powerful telescope to observe. However, NGTS-3 is actually an unresolved spectroscopic binary system. The system is located approximately 2,480 light years away, based on parallax measurements, and is receding with a radial velocity of 8.57 km/s. The system consists of two main sequence stars, classified as G6 and K1, respectively; however, only the properties of the primary star are known. NGTS-3A has a similar mass to that of the Sun, but is 7% smaller in radius. It radiates 72% of the Sun's luminosity from its photosphere, whose effective temperature gives it the typical yellow hue characteristic of a G-type star. Planetary System In 2018, the NGTS survey discovered an inflated hot Jupiter orbiting NGTS-3A despite the components being visually unresolved. References G-type main-sequence stars Columba (constellation)
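As a rough consistency check (a back-of-the-envelope sketch using the simple inverse relation, ignoring the corrections applied to real survey data), the quoted distance corresponds to a parallax of a little over one milliarcsecond:

```python
# Distance quoted above for NGTS-3
distance_ly = 2480.0
distance_pc = distance_ly / 3.26156   # light years -> parsecs

# Simple relation: parallax (arcsec) = 1 / distance (pc)
parallax_mas = 1000.0 / distance_pc   # in milliarcseconds

print(round(parallax_mas, 2))  # about 1.32 mas
```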
NGTS-3
Astronomy
225
29,294,666
https://en.wikipedia.org/wiki/Pterulone
Pterulone is a fungal metabolite. It was initially isolated from the mycelium and liquid cultures of wood-decay fungus in the genus Pterula. The compound inhibits eukaryotic respiration by targeting the mitochondrial NADH:ubiquinone oxidoreductase. References Fungicides Halogen-containing natural products Heterocyclic compounds with 2 rings Oxepines Ketones
Pterulone
Chemistry,Biology
88
24,429,694
https://en.wikipedia.org/wiki/SOOP
SOOP, previously known as AfreecaTV (short for "Any FREE broadCAsting"), is a video live-streaming service. It is now owned and operated by AfreecaTV Co., Ltd. in South Korea after Nowcom's AfreecaTV Co., Ltd and ZettaMedia split in 2011. As of July 2019 AfreecaTV was listed 4th in the "Asia's 200 Best Under A Billion" list by Forbes. History AfreecaTV initially started as a W beta service on May 11, 2005, and was officially named "AFREECA" on March 9, 2006. The site mainly re-transmits TV channels but also allows users to upload their own videos and shows. Functions such as broadcasting, viewing, channel listing, live chatting, and discussion boards are provided. Users are required to install 'Afreeca Player' for grid delivery. Independent broadcasters called broadcasting jockeys (BJs) deliver live broadcasts to viewers, who can add them to their list of favorite channels using an Afreeca Player tool. Some channels have tens of thousands of viewers at any given time. Paid services such as quick views or channel relays allow BJs additional sources of revenue. The platform hosts anything from TV broadcasts, live video game broadcasts, taxi driver monitoring, and artist performances to personal daily-life video blogs and shows by actresses and professional broadcasters. The head of AfreecaTV's parent company Nowcom, Mun Yong-sik, was arrested in 2008 for illegally distributing copyrighted films. Some alleged the arrest was politically motivated because Afreeca was being used by protesters to coordinate. On September 27, 2012, AfreecaTV English was released on the Google Play store. One example of the expansion of Afreeca's role is the hosting of a live talk session with Mayor Park of Seoul, broadcast live online and via mobile on AfreecaTV. He used the platform as a way to conduct a community scanning forum to collect public opinions and allow bloggers with various areas of expertise to participate in the dialogue.
The bloggers were able to address the problems facing Seoul and propose solutions in their areas of expertise, while also exchanging ideas with Mayor Park in an in-depth discussion on the administration of the Seoul Metropolitan Government. As AfreecaTV's influence grew, many idol groups, for example Nine Muses, began appearing on AfreecaTV for their fans. On October 15, 2024, AfreecaTV changed its name to SOOP. Controversy There have been many social problems with Afreeca TV, such as offers of sexual favors and the demeaning of disabled individuals. Many broadcasters were involved in these incidents, and they were punished by Afreeca TV's managers with suspension of their IDs. Due to such problems, mass media in South Korea have shown concern about the effects of personal broadcasting platforms. A claim was made that audience overloading caused viewers to overpay fees for Internet broadcasting. In light of this, Korea's Clean Internet Broadcasting Council came to an agreement with Afreeca TV to reduce the payment maximum to less than 1 million won (a little less than US$900) per day by June 2008. Esports Afreeca picked up the SBENU StarCraft II team on January 23, 2016, and the team participated in Proleague. On November 21, 2016, it was announced that the team was disbanding its StarCraft II division, though the organization kept up its involvement in StarCraft. They currently sponsor a professional League of Legends team, DN Freecs (formerly Afreeca Freecs and Kwangdong Freecs). The StarCraft II team was reformed at the start of 2020. Afreeca also announced on January 23, 2016 that they would be sponsoring two seasons of Brood War tournaments. The tournament has proved popular and is now in its 17th season as of December 2024.
Esports League AfreecaTV StarCraft League (ASL) Global StarCraft II League (GSL) Afreeca TV Battle Ground League (APL) LoL ladies Battle LoL Challengers Korea Hearthstone battle royal AfreecaTV VALORANT League Esports broadcast relay station Overwatch APEX relay League of Legends Pro League Korean Relay LoL Champions Korea Relay Sudden Attack Champions League AfreecaTV Tekken League (ATL) Chinese League of Legend Pro League (LPL) See also Vevo References External links Official website Peercasting Internet properties established in 2005 Streaming television Online companies of South Korea Peer-to-peer software Video game streaming services
SOOP
Technology
934
39,082,936
https://en.wikipedia.org/wiki/Scutiger%20pes-caprae
Scutiger pes-caprae, commonly known as the goat's foot, is a species of fungus in the family Albatrellaceae. Taxonomy It was first described officially as a species of Polyporus by Christian Hendrik Persoon in 1818. In recent decades, it was known most commonly as a species of Albatrellus, until molecular research published by Canadian mycologist Serge Audet in 2010 showed that it was better placed in an emended version of the genus Scutiger. Description The brown cap tends toward a convex kidney shape, sometimes lobed. It is wide, while the stem is tall and thick. The flesh is thick and whitish. The spore print is white. Similar species Scutiger ellisii, Laeticutis cristata, and Jahnoporus hirtus bear similarities. Distribution and habitat It is found in western North America, under conifers and on rotting wood, from August to February. References External links Fungi described in 1818 Fungi of North America Fungi of Europe Fungus species
Scutiger pes-caprae
Biology
214
62,443,960
https://en.wikipedia.org/wiki/Geochemical%20Perspectives%20Letters
Geochemical Perspectives Letters is a peer-reviewed open access scholarly journal publishing original research in geochemistry. It is published by the European Association for Geochemistry. Abstracting and indexing The journal is abstracted and indexed in: References External links Open access journals Academic journals established in 2015 English-language journals Geochemistry journals
Geochemical Perspectives Letters
Chemistry
68
37,733,189
https://en.wikipedia.org/wiki/Vesosome
A vesosome is a multi-compartmental structure of lipidic nature used to deliver drugs. Vesosomes can be considered multivesicular vesicles (MVV) and are, therefore, liposome-derived structures. Description Vesosomes consist of one or more bilayers enclosing an aqueous core that contains unilamellar vesicles. These inner vesicles function as internal compartments that hold the drug and can vary in composition from one another. The external bilayer defines the lumen, limits release of the vesicle contents, and protects them from degradation by lipolytic enzymes. These properties enable localized drug delivery to specific parts of the body and extend the duration of drug effect. Vesosomes are relatively straightforward to produce, and they offer the flexibility to deliver multiple drugs within a single carrier, which has been shown to confer important advantages in chemotherapy. Internal vesicle diameters range from 20–500 nm, and vesosome diameters range from about 0.1 micron to more than 1.0 micron. Historical background Shortly after the first description of liposomes by British haematologist Alec D. Bangham in 1961 (published 1964), at the Babraham Institute in Cambridge, scientists first started to contemplate the possibility of employing them as transportation systems in the blood stream. Since then, there have been many advances in this area, and as of 2008 there were 11 clinically approved liposomal drugs targeting a variety of pathological conditions and illnesses, including fungal infections, hepatitis A, influenza and certain cancers. Now, scientists plan to take full advantage of the 40 years of progress in liposome development to enhance this transportation system by employing vesosomes. Design and construction The vesosome's multicompartment structure encapsulates unilamellar liposomes within a second bilayer. For this purpose, it is necessary to form bilayers that can be opened and closed at will, without disrupting the inner content. 
This is achieved by adding ethanol to a variety of saturated phospholipids in the gel phase, which drives interdigitation of the phospholipid bilayers and subsequent fusion of small vesicles into flat bilayer sheets. These sheets remain stable through removal of the residual ethanol until they are heated above the lipid chain-melting temperature (Tm), at which point the bilayers become flexible and the sheets spontaneously close on themselves to form unilamellar vesicles. During closure, the sheets can entrap whatever is present in the surrounding suspension. Encapsulation is therefore carried out by adding vesicle aggregates, including drug-loaded vesicles, to the pelleted sheets before heating the mixture, forming vesosomes. The vesosome structure takes advantage of progress in liposome development such as steric stabilization, pH loading of drugs (the drug is loaded via a pH gradient), and intrinsic biocompatibility (the outer bilayer can be modified with a variety of agents, for example to target a disease site specifically, or to promote adhesion or fusion). Applications A wide range of molecular structures can be encapsulated in vesosomal vesicles, such as proteins with complex three-dimensional structures or condensed DNA. The most common use is to fill the vesosome's inner vesicles with drugs that are to be delivered to a particular area. Because the vesosome is small and protects its inner vesicles well, it can be used in various settings, performing different functions. If suitable receptors are included in the outer lipid bilayer of vesosomes during their preparation, they are able to localise to inflamed areas; once there, such vesosomes deliver an anti-inflammatory substance from their inner vesicles through a pH gradient. Vesosomes that localise to tumours have also been demonstrated. They can also be used to create a different nano-environment at a targeted site (vesosome size is about 50–200 nanometres), either by altering the pH or the concentration of a particular substance. 
References External links Kisak ET, Coldren B, Evans CA, Boyer C, Zasadzinski JA. The Vesosome - A Multicompartment Drug Delivery Vehicle Retrieved 25 November 2012 Boyer C, Zasadzinski JA. Multiple Lipid Vesicle Compartments Slow Vesicle Contents Release in Lipases and Serum Retrieved 25 November 2012 Vesosome: A Versatile Multi-Compartment Structure For Targeted Drug Delivery Retrieved 25 November 2012 Drug delivery devices
Vesosome
Chemistry
958
1,067,914
https://en.wikipedia.org/wiki/L%C3%A9vy%27s%20constant
In mathematics Lévy's constant (sometimes known as the Khinchin–Lévy constant) occurs in an expression for the asymptotic behaviour of the denominators of the convergents of simple continued fractions. In 1935, the Soviet mathematician Aleksandr Khinchin showed that the denominators qn of the convergents of the continued fraction expansions of almost all real numbers satisfy lim n→∞ qn^(1/n) = γ for some constant γ. Soon afterward, in 1936, the French mathematician Paul Lévy found the explicit expression for the constant, namely γ = e^(π²/(12 ln 2)) ≈ 3.2758… The term "Lévy's constant" is sometimes used to refer to π²/(12 ln 2) (the logarithm of the above expression), which is approximately equal to 1.1865691104… The value derives from the asymptotic expectation of the logarithm of the ratio of successive denominators, using the Gauss–Kuzmin distribution. In particular, the ratio t = qn−1/qn has the asymptotic density function ρ(t) = 1/(ln 2 · (1 + t)) for 0 ≤ t ≤ 1 and zero otherwise. The expected value of −ln t is then ∫₀¹ (−ln t)/(ln 2 · (1 + t)) dt = π²/(12 ln 2). This gives Lévy's constant as γ = e^(π²/(12 ln 2)). The base-10 logarithm of Lévy's constant, π²/(12 ln 2 ln 10), which is approximately 0.51532041…, is half of the reciprocal of the limit in Lochs' theorem. Proof The proof assumes basic properties of continued fractions. Let T: x ↦ 1/x − ⌊1/x⌋ be the Gauss map, and let pk/qk denote the convergents of x. Lemma For every irrational x ∈ (0, 1), the product of the first n Gauss-map iterates satisfies ∏k=0..n−1 T^k x = |qn−1 x − pn−1|, and hence |ln qn + Σk=0..n−1 ln(T^k x)| ≤ ln 2. Proof. Writing xk = T^k x, each iterate satisfies xk = |qk x − pk| / |qk−1 x − pk−1|, so the product telescopes to |qn−1 x − pn−1|. Since 1/(qn + qn−1) < |qn−1 x − pn−1| < 1/qn, the quantity qn · |qn−1 x − pn−1| lies between qn/(qn + qn−1) > 1/2 and 1, which gives the bound. The denominator sequence satisfies the recurrence qk = ak qk−1 + qk−2, and so it is at least as large as the Fibonacci sequence, qk ≥ Fk; in particular the sum of the reciprocals 1/qk is dominated by the reciprocal Fibonacci constant Σ 1/Fk ≈ 3.36, which is finite. Ergodic argument Dividing the lemma by n gives (1/n) ln qn = −(1/n) Σk=0..n−1 ln(T^k x) + O(1/n). The Gauss map preserves the Gauss distribution dμ(x) = dx/((1 + x) ln 2) and is ergodic with respect to it, so by Birkhoff's ergodic theorem the average (1/n) Σk=0..n−1 ln(T^k x) converges almost surely to ∫₀¹ ln x dμ(x) = (1/ln 2) ∫₀¹ ln x/(1 + x) dx = −π²/(12 ln 2). Hence, for almost every real number, (1/n) ln qn → π²/(12 ln 2); that is, qn^(1/n) → e^(π²/(12 ln 2)) = γ. See also Khinchin's constant References Further reading External links Continued fractions Mathematical constants Paul Lévy (mathematician)
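The theorem above lends itself to a numerical sanity check that is not part of the article: the sketch below (plain Python; the function name and the parameters BITS, N, SAMPLES are illustrative choices) expands random rationals num/2^BITS exactly via the Euclidean algorithm, reads off the convergent denominators q_N, and compares the sample mean of q_N^(1/N) with e^(π²/(12 ln 2)) ≈ 3.275823.

```python
import math
import random

def convergent_denominator(num, den, n):
    """q_n for the continued fraction of num/den in (0, 1), via the Euclidean algorithm."""
    q_prev, q = 0, 1  # q_{-1} = 0, q_0 = 1
    for _ in range(n):
        if num == 0:  # expansion terminated before n partial quotients
            return None
        a, r = divmod(den, num)  # partial quotient a_k and remainder
        num, den = r, num        # one Gauss-map step, kept exact on a rational
        q_prev, q = q, a * q + q_prev  # q_k = a_k q_{k-1} + q_{k-2}
    return q

BITS, N, SAMPLES = 4000, 400, 20
random.seed(0)
logs = []
for _ in range(SAMPLES):
    num = random.getrandbits(BITS) | 1          # odd numerator -> x = num/2^BITS in (0, 1), in lowest terms
    q = convergent_denominator(num, 1 << BITS, N)
    if q is not None:
        logs.append(math.log(q) / N)            # ln of q_N^(1/N)

gamma = math.exp(math.pi ** 2 / (12 * math.log(2)))
estimate = math.exp(sum(logs) / len(logs))
print(f"e^(pi^2 / (12 ln 2)) = {gamma:.6f}")    # 3.275823
print(f"mean q_N^(1/N)       = {estimate:.6f}")
```

Exact integer arithmetic matters here: iterating the Gauss map in floating point loses all precision within a few dozen steps, whereas a 4000-bit rational has roughly 2300 partial quotients, comfortably more than the 400 used per sample.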
Lévy's constant
Mathematics
404