Dataset columns:
id: int64 (580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
66,137,241
https://en.wikipedia.org/wiki/Voxtalisib
Voxtalisib (XL-765, SAR245409) is a drug which acts as a dual inhibitor of the kinase enzymes phosphatidylinositol 3-kinase (PI3K) and mechanistic target of rapamycin (mTOR). It is in clinical trials for the treatment of various types of cancer. References Phosphoinositide 3-kinase inhibitors Experimental cancer drugs
Voxtalisib
Chemistry
90
2,903,108
https://en.wikipedia.org/wiki/66%20Aurigae
66 Aurigae is a single star located approximately 880 light years away from the Sun in the northern constellation of Auriga. It is visible to the naked eye as a faint, orange-hued star with an apparent magnitude of 5.23. This object is moving further from the Earth with a heliocentric radial velocity of +22.6 km/s. At the age of 107 million years, 66 Aurigae is an evolved giant star, most likely (98% chance) on the horizontal branch, with a stellar classification of K0.5 IIIa. Keenan and Yorka (1987) identified it as a strong-CN star, showing an excess strength of the blue CN bands in the spectrum. Having exhausted the supply of hydrogen at its core, the star has expanded to 48 times the Sun's radius. 66 Aurigae has five times the mass of the Sun and is radiating 834 times the Sun's luminosity from its swollen photosphere at an effective temperature of 4,475 K. It was also once considered part of the much bigger constellation Telescopium Herschelii, before that constellation fell out of use and was not adopted by the International Astronomical Union (IAU). References External links HR 2805 Image 66 Aurigae K-type main-sequence stars CN stars Horizontal-branch stars Auriga Durchmusterung objects Aurigae, 66 157669 035907 2805
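The luminosity quoted above can be cross-checked against the Stefan–Boltzmann law, under which L/L_sun = (R/R_sun)^2 (T/T_sun)^4. A minimal sketch in Python; the solar effective temperature is a standard reference value, not taken from the article:

```python
T_SUN = 5772.0  # K, nominal solar effective temperature (IAU reference value)

def luminosity_solar(radius_solar: float, t_eff: float) -> float:
    """L/L_sun from the Stefan-Boltzmann law: L is proportional to R^2 * T^4."""
    return radius_solar ** 2 * (t_eff / T_SUN) ** 4

# 66 Aurigae: 48 solar radii at an effective temperature of 4,475 K
print(round(luminosity_solar(48.0, 4475.0)))  # -> 832, consistent with the quoted 834
```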
66 Aurigae
Astronomy
302
61,692
https://en.wikipedia.org/wiki/PC%20Card
PC Card is a parallel peripheral interface for laptop computers and PDAs. The PCMCIA originally introduced the 16-bit ISA-based PCMCIA Card in 1990, but renamed it to PC Card in March 1995 to avoid confusion with the name of the organization. The CardBus PC Card was introduced as a 32-bit version of the original PC Card, based on the PCI specification. CardBus slots are backwards compatible, but older slots are not forward compatible with CardBus cards. Although originally designed as a standard for memory-expansion cards for computer storage, the existence of a usable general standard for notebook peripherals led to the development of many kinds of devices including network cards, modems, and hard disks. The PC Card port has been superseded by the ExpressCard interface since 2003, which was also initially developed by the PCMCIA. The organization dissolved in 2009, with its assets merged into the USB Implementers Forum. Applications Many notebooks in the 1990s had two adjacent type-II slots, which allowed installation of two type-II cards or one double-thickness type-III card. The cards were also used in early digital SLR cameras, such as the Kodak DCS 300 series. However, their original use as storage expansion is no longer common. Some manufacturers such as Dell continued to offer them into 2012 on their ruggedized XFR notebooks. Mercedes-Benz used a PCMCIA card reader in the W221 S-Class for model years 2006–2009. It was used for reading media files such as MP3 audio files to play through the COMAND infotainment system. After 2009, it was replaced with a standard SD Card reader. Some vehicles from Honda equipped with a navigation system still included a PC Card reader integrated into the audio system. Some Japanese-brand consumer entertainment devices such as TV sets include a PC Card slot for playback of media. Adapters for PC Cards to Personal Computer ISA slots were available when these technologies were current. CardBus adapters for PCI slots have also been made. These adapters were sometimes used to fit wireless (802.11) PCMCIA cards into desktop computers with PCI slots. History Before the introduction of the PCMCIA card, the parallel port was commonly used for portable peripherals. The PCMCIA 1.0 card standard was published by the Personal Computer Memory Card International Association in November 1990 and was soon adopted by more than eighty vendors. It corresponds with the Japanese JEIDA memory card 4.0 standard. It was originally developed to support memory cards. Intel authored the Exchangeable Card Architecture (ExCA) specification, but later merged this into the PCMCIA standard. SanDisk (operating at the time as "SunDisk") launched its PCMCIA card in October 1992. The company was the first to introduce a writeable Flash RAM card for the HP 95LX (an early MS-DOS pocket computer). These cards conformed to a supplemental PCMCIA-ATA standard that allowed them to appear as more conventional IDE hard drives to the 95LX or a PC. This had the advantage of raising the upper limit on capacity to the full 32 MB available under DOS 3.22 on the 95LX. New Media Corporation was one of the first companies established for the express purpose of manufacturing PC Cards; they became a major OEM for laptop manufacturers such as Toshiba and Compaq for PC Card products. It soon became clear that the PCMCIA card standard needed expansion to support "smart" I/O cards to address the emerging need for fax, modem, LAN, hard disk and floppy disk cards. 
It also needed interrupt facilities and hot plugging, which required the definition of new BIOS and operating system interfaces. This led to the introduction of release 2.0 of the PCMCIA standard and JEIDA 4.1 in September 1991, which saw corrections and expansion with Card Services (CS) in the PCMCIA 2.1 standard in November 1992. To recognize increased scope beyond memory, and to aid in marketing, the association acquired the rights to the simpler term "PC Card" from IBM. This was the name of the standard from version 2 of the specification onwards. These cards were used for wireless networks, modems, and other functions in notebook PCs. After the release of PCIe-based ExpressCard in 2003, laptop manufacturers started to fit ExpressCard slots to new laptops instead of PC Card slots. Form factors All PC Card devices use a similar-sized package which is 85.6 mm long and 54.0 mm wide, the same size as a credit card. Type I Cards designed to the original specification (PCMCIA 1.0) are type I and have a 16-bit interface. They are 3.3 mm thick and have a dual row of 34 holes (68 in total) along a short edge as a connecting interface. Type-I PC Card devices are typically used for memory devices such as RAM, flash memory, OTP (One-Time Programmable), and SRAM cards. Type II Introduced with version 2.0 of the standard. Type-II and above PC Card devices use two rows of 34 sockets, and have a 16- or 32-bit interface. They are 5.0 mm thick. Type-II cards introduced I/O support, allowing devices to attach an array of peripherals or to provide connectors/slots to interfaces for which the host computer had no built-in support. For example, many modem, network, and TV cards accept this configuration. Due to their thinness, most Type II interface cards have miniature interface connectors on the card connecting to a dongle, a short cable that adapts from the card's miniature connector to an external full-size connector. Some cards instead have a lump on the end with the connectors. This is more robust and convenient than a separate adapter but can block the other slot where slots are present in a pair. Some Type II cards, most notably network interface and modem cards, have a retractable jack, which can be pushed into the card and will pop out when needed, allowing insertion of a cable from above. When use of the card is no longer needed, the jack can be pushed back into the card and locked in place, protecting it from damage. Most network cards have their jack on one side, while most modems have their jack on the other side, allowing the use of both at the same time as they do not interfere with each other. Wireless Type II cards often had a plastic shroud that jutted out from the end of the card to house the antenna. In the mid-1990s, PC Card Type II hard disk drive cards became available; previously, PC Card hard disk drives were only available in Type III. Type III Introduced with version 2.01 of the standard in 1992. Type-III PC Card devices are 16-bit or 32-bit. These cards are 10.5 mm thick, allowing them to accommodate devices with components that would not fit type I or type II height. Examples are hard disk drive cards, and interface cards with full-size connectors that do not require dongles (as is commonly required with type II interface cards). Type IV Type-IV cards, introduced by Toshiba, were not officially standardized or sanctioned by the PCMCIA. These cards are approximately 16 mm thick. 
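The form factors above share one footprint and differ only in thickness, and a slot accepts any card of its own type or thinner. A minimal sketch encoding that rule with the standard dimensions; the helper is purely illustrative and not part of any real PC Card software interface:

```python
# Nominal PC Card thicknesses in millimetres; all types share the
# 85.6 x 54.0 mm credit-card footprint and the 68-pin connector.
CARD_THICKNESS_MM = {
    "Type I": 3.3,
    "Type II": 5.0,
    "Type III": 10.5,
    "Type IV": 16.0,  # Toshiba extension, never officially standardized
}

def fits(card_type: str, slot_type: str) -> bool:
    """A slot opening sized for slot_type accepts any card no thicker than it."""
    return CARD_THICKNESS_MM[card_type] <= CARD_THICKNESS_MM[slot_type]

print(fits("Type II", "Type III"))   # True  - a thinner card fits a taller opening
print(fits("Type III", "Type II"))   # False - a 10.5 mm card cannot enter a 5 mm slot
```

This also reflects why two stacked Type II slots could accept a single double-height Type III card, as noted in the Applications section.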
Bus Original The original standard was defined for both 5 V and 3.3 V cards, with 3.3 V cards having a key on the side to prevent them from being inserted fully into a 5 V-only slot. Some cards and some slots operate at both voltages as needed. The original standard was built around an 'enhanced' 16-bit ISA bus platform. A newer version of the PCMCIA standard is CardBus (see below), a 32-bit version of the original standard. In addition to supporting a wider bus of 32 bits (instead of the original 16), CardBus also supports bus mastering and operation speeds up to 33 MHz. CardBus CardBus cards are PCMCIA 5.0 or later (JEIDA 4.2 or later) 32-bit PCMCIA devices, introduced in 1995 and present in laptops from late 1997 onward. CardBus is effectively a 32-bit, 33 MHz PCI bus in the PC Card design. CardBus supports bus mastering, which allows a controller on the bus to talk to other devices or memory without going through the CPU. Many chipsets, such as those that support Wi-Fi, are available for both PCI and CardBus. The notch on the left-hand front of the device is slightly shallower on a CardBus device so, by design, a 32-bit device cannot be plugged into earlier equipment supporting only 16-bit devices. Most new slots accept both CardBus and the original 16-bit PC Card devices. CardBus cards can be distinguished from older cards by the presence of a gold band with eight small studs on the top of the card next to the pin sockets. The speed of CardBus interfaces in 32-bit burst mode depends on the transfer type: in byte mode, transfer is 33 MB/s; in word mode it is 66 MB/s; and in dword (double-word) mode 132 MB/s. CardBay CardBay is a variant added to the PCMCIA specification in 2001. It was intended to add some forward compatibility with USB and IEEE 1394, but was not universally adopted and only some notebooks have PC Card controllers with CardBay features. This is an implementation of Microsoft and Intel's joint Drive Bay initiative. Design The card information structure (CIS) is metadata stored on a PC card that contains information about the formatting and organization of the data on the card. The CIS also contains information such as the type of card, supported power supply options, supported power-saving capabilities, manufacturer, and model number. When a card is unrecognized, it is frequently because the CIS information is either lost or damaged. Descendants and variants ExpressCard ExpressCard is a later specification from the PCMCIA, intended as a replacement for PC Card, built around the PCI Express and USB 2.0 standards. The PC Card standard is closed to further development, and PCMCIA strongly encouraged future product designs to utilize the ExpressCard interface. From about 2006, ExpressCard slots replaced PCMCIA slots in laptop computers, with a few laptops having both in the transition period. ExpressCard and CardBus sockets are physically and electrically incompatible. ExpressCard-to-CardBus and CardBus-to-ExpressCard adapters are available that connect a CardBus card to an ExpressCard slot, or vice versa, and carry out the required electrical interfacing. These adapters do not handle older non-CardBus PCMCIA cards. PC Card devices can be plugged into an ExpressCard adaptor, which provides a PCI-to-PCIe bridge. Despite its much greater speed and bandwidth, ExpressCard was not as popular as PC Card, due in part to the ubiquity of USB ports on modern computers. Most functionality provided by PC Card or ExpressCard devices is now available as an external USB device. 
These USB devices have the advantage of being compatible with desktop computers as well as portable devices. (Desktop computers were rarely fitted with a PC Card or ExpressCard slot.) This reduced the requirement for internal expansion slots; by 2011, many laptops had none. Some IBM ThinkPad laptops took their onboard RAM (in sizes ranging from 4 to 16 MB) in the form factor of an IC DRAM Card. While very similar in form factor, these cards did not go into a standard PC Card slot, often being installed under the keyboard, for example. They also were not pin-compatible, as they had 88 pins but in two staggered rows, as opposed to even rows like PC Cards. These correspond to versions 1 and 2 of the JEIDA memory card standard. Others The shape is also used by the Common Interface form of conditional-access modules for DVB, and by Panasonic for their professional "P2" video acquisition memory cards. A CableCARD conditional-access module is a type II PC Card intended to be plugged into a cable set-top box or digital cable-ready television. The interface has spawned a generation of flash memory cards that set out to improve on the size and features of Type I cards: CompactFlash, MiniCard, P2 Card and SmartMedia. For example, the PC Card electrical specification is also used for CompactFlash, so a PC Card CompactFlash adapter can be a passive physical adapter rather than requiring additional circuitry. CompactFlash is a smaller-dimensioned 50-pin subset of the 68-pin PC Card interface. It requires a setting for the interface mode of either "memory" or "ATA storage". The EOMA68 open-source hardware standard uses the same 68-pin PC Card connectors and corresponds to the PC Card form factor in many other ways. See also Further reading References External links Understanding PC Card, PCMCIA, CardBus, 16-bit, 32-bit. Solid-state computer storage media Motherboard PCMCIA Computer standards Computer-related introductions in 1990
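The CardBus burst-rate figures quoted in the Bus section follow directly from the 33 MHz clock: at most one transfer per cycle times the width of the transfer. A minimal arithmetic check; the one-transfer-per-cycle figure is the idealized peak and ignores bus protocol overhead:

```python
BUS_CLOCK_MHZ = 33.0  # CardBus is effectively a 33 MHz PCI bus

def burst_rate_mb_s(bytes_per_transfer: int) -> float:
    """Peak burst throughput assuming one transfer completes per clock cycle."""
    return BUS_CLOCK_MHZ * bytes_per_transfer  # MB/s (1 MB = 10^6 bytes)

for mode, width in [("byte", 1), ("word", 2), ("dword", 4)]:
    print(f"{mode} mode: {burst_rate_mb_s(width):.0f} MB/s")
# byte mode: 33 MB/s, word mode: 66 MB/s, dword mode: 132 MB/s,
# matching the figures given for 32-bit burst mode above
```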
PC Card
Technology
2,684
2,770,094
https://en.wikipedia.org/wiki/Mary%20Everest%20Boole
Mary Everest Boole (11 March 1832 in Wickwar, Gloucestershire – 17 May 1916 in Middlesex, England) was a self-taught mathematician who is best known as an author of didactic works on mathematics, such as Philosophy and Fun of Algebra, and as the wife of fellow mathematician George Boole. Her progressive ideas on education, as expounded in The Preparation of the Child for Science, included encouraging children to explore mathematics through playful activities such as curve stitching. Her life is of interest to feminists as an example of how women made careers in an academic system that did not welcome them. Life She was born in England, the daughter of Reverend Thomas Roupell Everest, Rector of Wickwar, and Mary née Ryall. Her uncle was George Everest, the surveyor and geographer after whom Mount Everest was named. She spent the first part of her life in France, where she received an education in mathematics from a private tutor. On returning to England at the age of 11, she continued to pursue her interest in mathematics through self-instruction. Self-taught mathematician George Boole tutored her, and she visited him in Ireland, where he held the position of professor of mathematics at Queen's College Cork. Upon the death of her father in 1855, they married and she moved to Cork. Mary contributed greatly as an editor to Boole's The Laws of Thought, a work on algebraic logic. She had five daughters with him. She was widowed in 1864, at the age of 32, and returned to England, where she was offered a post as a librarian at Queen's College on Harley Street, London. In August 1865, her address was listed as 68 Harley Street in a Deed of Assignment in which she disposed of her husband's former house in Ireland, acting as the executrix of his will. The deed was witnessed by "John Knights, Porter at Queens College, Harley Street, London and Jane White, Housekeeper at 68 Harley Street, London". As well as working as a librarian, she also tutored privately in mathematics and developed a philosophy of teaching that involved the use of natural materials and physical activities to encourage an imaginative conception of the subject. Her interests extended beyond mathematics to Darwinian theory, philosophy and psychology, and she organised discussion groups on these subjects among others. At Queen's College, without the approval of the authorities, she organised discussion groups of students with the unconventional James Hinton, a promulgator of polygamy. This in part led to her mental breakdown and the dispersal of her children. In later life, she belonged to the circle of the Tolstoyan pacifist publisher C. W. Daniel; she chose the name The Crank for his magazine because, she said, 'a crank was a little thing that made revolutions'. Mary took an active interest in politics, introducing her daughter Ethel to the Russian anti-tsarist cause under Sergei Stepniak. After the Boer War of 1899–1902 she became more outspoken in her writings against imperialism, organised religion, the financial world and the tokenism she felt that Parliament represented. She opposed women's suffrage, and probably for this reason has not generally been regarded as a feminist. She died in 1916, at the age of 84. Boole was a practitioner of homeopathic medicine. Contributions to education Mary first became interested in mathematics and teaching through her tutor in France, Monsieur Deplace. He helped her understand mathematics through questioning and journal writing. 
After marrying George Boole, she began contributing to the scientific world by advising her husband in his work while attending his lectures, both of which were unheard of for a woman to do in that time period. During this time she also shared ideas with Victoria Welby, another female scholar and dear friend. They discussed everything from logic and mathematics to pedagogy, theology, and science. Her teaching first began while working as a librarian. Mary would tutor students with new methods, using natural objects such as sticks or stones. She theorized that using physical manipulatives would strengthen the unconscious understanding of materials learned in a classroom setting. One of her most notable contributions in the area of physical manipulatives is curve stitching with the use of sewing cards, which she discovered as a form of amusement as a child. This helped to encourage the connection of mathematical concepts to outside sources. Her book Philosophy and Fun of Algebra explained algebra and logic to children in interesting ways, starting with a fable and including bits of history throughout. She references not only history, but also philosophy and literature, using a mystical tone to keep the attention of children. Mary encouraged the use of mathematical imagination along with critical thinking and creativity. This, along with reflective journal writing and creating one's own formulas, was essential in strengthening comprehension and understanding. Cooperative learning was also important because students could share discoveries with each other in an environment of peer tutoring and develop new ideas and methods. She worked on promoting her husband's works, with great attention to mathematical psychology. George Boole's main focus was on psychologism, and Mary provided a more ideological view of his work. She supported the idea that arithmetic was not purely abstract as many believed, but more anthropomorphic. Pulsation was also important in her works and could be described as a sequence of mental attitudes, with her attention on analysis and synthesis. She believed that Indian logic played a role in the development of modern logic by her husband George Boole and others. Spiritualism Boole was interested in parapsychology and the occult, and was a convinced spiritualist. She was the first female member of the Society for Psychical Research, which she joined in 1882. However, being the only female member at the time, she resigned after six months. Boole was the author of the book The Message of Psychic Science for Mothers and Nurses. She revealed the manuscript to Frederick Denison Maurice, who objected to its controversial ideas, and this resulted in her losing her job as librarian at Queens College. The book was not published until 1883. It was later republished as The Message of Psychic Science to the World (1908). Family Her five daughters made their marks in a range of fields. Alicia Boole Stott (1860–1940) became an expert in four-dimensional geometry. Ethel Lilian (1864–1960) married the Polish revolutionary Wilfrid Michael Voynich and was the author of a number of works including The Gadfly. Mary Ellen (1856–1908) married mathematician Charles Hinton, and Margaret (1858–1935) was the mother of mathematician G. I. Taylor. Lucy Everest (1862–1905) was a talented chemist and became the first woman Fellow of the Institute of Chemistry. Geoffrey Hinton, well known for research in artificial intelligence (AI), is a great-great-grandson of Boole. 
Publications Her collected works were published in four volumes. References Citations Sources External links "Mary Everest Boole", Biographies of Women Mathematicians, Agnes Scott College 1832 births 1916 deaths 19th-century English mathematicians 19th-century British philosophers Amateur mathematicians English spiritualists British parapsychologists Philosophers of mathematics People from Wickwar 19th-century English women writers 19th-century British women mathematicians
Mary Everest Boole
Mathematics
1,452
2,869,042
https://en.wikipedia.org/wiki/Hicap
Hicap is a mobile technology developed by Nippon Telegraph and Telephone as a higher-capacity alternative to their earlier NTT mobile system. Hicap uses a 25 kHz carrier and uses FDMA to separate different calls from each other. External links http://www.mobileworld.org/analog_about.html Mobile radio telephone systems
Hicap
Technology
68
28,365,776
https://en.wikipedia.org/wiki/PSR%20J2007%2B2722
PSR J2007+2722 is a 40.8-hertz isolated pulsar in the Vulpecula constellation, 5.3 kpc (17,000 ly) distant in the plane of the Galaxy, and is most likely a disrupted recycled pulsar (DRP). J2007+2722 was found in data taken by the Arecibo radio telescope in February 2007 and analyzed by volunteers Chris and Helen Colvin (Ames, Iowa, United States) and Daniel Gebhardt (Universität Mainz, Musikinformatik, Germany) via the distributed computing project Einstein@Home. References Notes Sources External links Pulsars Vulpecula
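The figures in this stub are easy to cross-check: a 40.8 Hz rotation rate corresponds to a spin period of about 24.5 ms, and 5.3 kpc is roughly 17,000 light years. A minimal sketch; the kiloparsec-to-light-year factor is the standard conversion, not taken from the article:

```python
LY_PER_KPC = 3261.56  # light years per kiloparsec (standard conversion factor)

rotation_hz = 40.8
distance_kpc = 5.3

print(f"spin period: {1.0 / rotation_hz * 1e3:.1f} ms")          # -> 24.5 ms
print(f"distance: {distance_kpc * LY_PER_KPC:,.0f} light years")  # -> 17,286, i.e. ~17,000 ly
```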
PSR J2007+2722
Astronomy
146
65,085
https://en.wikipedia.org/wiki/Triticale
Triticale (× Triticosecale) is a hybrid of wheat (Triticum) and rye (Secale) first bred in laboratories during the late 19th century in Scotland and Germany. Commercially available triticale is almost always a second-generation hybrid, i.e., a cross between two kinds of primary (first-cross) triticales. As a rule, triticale combines the yield potential and grain quality of wheat with the disease and environmental tolerance (including soil conditions) of rye. Only recently has it been developed into a commercially viable crop. Depending on the cultivar, triticale can more or less resemble either of its parents. It is grown mostly for forage or fodder, although some triticale-based foods can be purchased at health food stores and can be found in some breakfast cereals. When crossing wheat and rye, wheat is used as the female parent and rye as the male parent (pollen donor). The resulting hybrid is sterile and must be treated with colchicine to induce polyploidy and thus the ability to reproduce itself. The primary producers of triticale are Poland, Germany, Belarus, France and Russia. In 2014, according to the Food and Agriculture Organization (FAO), 17.1 million tons were harvested in 37 countries across the world. The triticale hybrids are all amphidiploid, which means the plant is diploid for two genomes derived from different species. In other words, triticale is an allotetraploid. In earlier years, most work was done on octoploid triticale. Different ploidy levels have been created and evaluated over time. The tetraploids showed little promise, but hexaploid triticale was successful enough to find commercial application. The CIMMYT (International Maize and Wheat Improvement Center) triticale improvement program was intended to improve food production and nutrition in developing countries. Triticale was thought to have potential in the production of bread and other food products, such as cookies, pasta, pizza dough and breakfast cereals. The protein content is higher than that of wheat, although the glutenin fraction is less. The grain has also been stated to have higher levels of lysine than wheat. Acceptance would require the milling industry to adapt to triticale, as the milling techniques employed for wheat are unsuited to triticale. Past research indicated that triticale could be used as a feed grain and, particularly, later research found that its starch is readily digested. As a feed grain, triticale is already well established and of high economic importance. It has received attention as a potential energy crop, and research is currently being conducted on the use of the crop's biomass in bioethanol production. Triticale has also been used to produce vodka. History In the 19th century, crossing cultivars or species became better understood, allowing the controlled hybridization of more plants and animals. In 1873, Alexander Wilson first managed to manually fertilize the female organs of wheat flowers with rye pollen (male gametes), but found that the resulting plants were sterile, much the way the offspring of a horse and donkey is an infertile mule. Fifteen years later, in 1888, the German plant breeder Wilhelm Rimpau produced a partially fertile hybrid, "× Triticosecale rimpaui Wittmack". Such hybrids germinate only when the chromosomes spontaneously double. Unfortunately, "partially fertile" was all that was produced until 1937. 
In that year, it was discovered that the chemical colchicine, which is used both in plant breeding and as a treatment for gout, would force chromosome doubling by keeping chromosomes from pulling apart during cell division. Triticale had become viable, though at that point the cost of producing the seeds was disproportionate to the yield. By the 1960s, triticale was being produced that was far more nutritious than normal wheat. However, it was a poorly producing crop, sometimes yielding shriveled kernels, germinating poorly or prematurely, and it did not bake well. Modern triticale has overcome most of these problems after decades of additional breeding and gene transfer with wheat and rye. Millions of hectares of the crop are grown around the world, slowly increasing toward becoming a significant source of food calories. Species Triticale hybrids are currently classified by ploidy into three nothospecies: × Triticosecale semisecale (Mackey) K.Hammer & Filat. – tetraploid triticale. Unstable, but used as a bridge in breeding. Includes the following crosses: Triticum monococcum × Secale cereale, genome AARR; alternative crosses, genome ABRR (mixogenome A/B). × Triticosecale neoblaringhemii A.Camus – hexaploid triticale. Stable, currently very successful in agriculture. May be produced by Secale cereale × Triticum turgidum, genome AABBRR. × Triticosecale rimpaui Wittm. – octoploid triticale. Not completely stable, mainly of historical importance. May be produced by Secale cereale × Triticum aestivum, genome AABBDDRR. The current treatment follows the Mac Key 2005 treatment of Triticum, using a broad species concept based on genome composition. Traditional classifications used a narrow species concept based on the treatment of wheats by Dorofeev et al., 1979, and hence produced many more species names. The rye genome is notated as R. Biology and genetics Earlier work with wheat-rye crosses was difficult due to low survival of the resulting hybrid embryo and spontaneous chromosome doubling. These two factors were difficult to predict and control. To improve the viability of the embryo and thus avoid its abortion, in vitro culture techniques were developed (Laibach, 1925). Colchicine was used as a chemical agent to double the chromosomes. After these developments, a new era of triticale breeding was introduced. Earlier triticale hybrids had four reproductive disorders, namely meiotic instability, high aneuploid frequency, low fertility and shriveled seed (Muntzing 1939; Krolow 1966). Cytogenetical studies were encouraged and well funded to overcome these problems. It is especially difficult to see the expression of rye genes against the background of wheat cytoplasm and the predominant wheat nuclear genome. This makes it difficult to realise the potential of rye in disease resistance and ecological adaptation. Triticale is essentially a self-fertilizing, or naturally inbred, crop. This mode of reproduction results in a more homozygous genome. The crop is, however, adapted to this form of reproduction from an evolutionary point of view. Cross-fertilization is also possible, but it is not the primary form of reproduction. Sr27 is a stem rust resistance gene which is commonly found in triticale. Originally from 'Imperial' rye, it is now widely found in triticale, located on the 3A chromosome arm on a segment originally from 3R. Virulence has been observed in the field in Puccinia graminis f. sp. secalis (Pgs) and in an artificial cross of Pgs with Puccinia graminis f. sp. tritici (Pgt). 
When successful, Sr27 is among the few Sr genes that do not even allow the underdeveloped uredinia and slight degree of sporulation commonly allowed by most Sr genes; instead there are necrotic or chlorotic flecks. Deployment in triticale in New South Wales and Queensland, Australia, however, rapidly revealed virulence between 1982 and 1984 – the first virulence against this gene in the world. (This was especially associated with the cultivar Coorong.) Therefore, the International Maize and Wheat Improvement Center's triticale offerings were tested, and many were found to depend solely on Sr27. Four years later, in 1988, virulence was found in South Africa. Sr27 has become less common in CIMMYT triticales since the mid-1980s. Conventional breeding approaches The aim of a triticale breeding programme is mainly the improvement of quantitative traits, such as grain yield, nutritional quality and plant height, as well as traits which are more difficult to improve, such as earlier maturity and improved test weight (a measure of bulk density). These traits are controlled by more than one gene. Problems arise, however, because such polygenic traits involve the integration of several physiological processes in their expression. Thus the lack of single-gene control (or simple inheritance) results in low trait heritability (Zumelzú et al. 1998). Since the inception of the International Maize and Wheat Improvement Center triticale breeding programme in 1964, the improvement in realised grain yield has been remarkable. In 1968, at Ciudad Obregón, Sonora, in northwest Mexico, the highest-yielding triticale line produced 2.4 t/ha. Today, CIMMYT has released high-yielding spring triticale lines (e.g. Pollmer-2) which have surpassed the 10 t/ha yield barrier under optimum production conditions. Based on the commercial success of other hybrid crops, the use of hybrid triticales as a strategy for enhancing yield in favourable, as well as marginal, environments has proven successful over time. Earlier research conducted by CIMMYT made use of a chemical hybridising agent to evaluate heterosis in hexaploid triticale hybrids. To select the most promising parents for hybrid production, test crosses conducted in various environments are required, because the variance of their specific combining ability under differing environmental conditions is the most important component in evaluating their potential as parents to produce promising hybrids. The prediction of the general combining ability of any triticale plant from the performance of its parents is only moderately reliable with respect to grain yield. Commercially exploitable yield advantages of hybrid triticale cultivars are dependent on improving parent heterosis and on advances in inbred-line development. Triticale is useful as an animal feed grain. However, it is necessary to improve its milling and bread-making quality to increase its potential for human consumption. The relationship between the constituent wheat and rye genomes was noted to produce meiotic irregularities, and genome instability and incompatibility presented numerous problems when attempts were made to improve triticale. This led to two alternative methods to study and improve its reproductive performance, namely the improvement of the number of grains per floral spikelet and of its meiotic behaviour. The number of grains per spikelet has an associated low heritability value (de Zumelzú et al. 1998). 
In improving yield, indirect selection (the selection of correlated/related traits other than the one to be improved) is not necessarily as effective as direct selection (Gallais 1984). Lodging (the toppling over of the plant stem, especially under windy conditions) resistance is a polygenically inherited trait (expression is controlled by many genes), and has thus been an important breeding aim in the past. The use of dwarfing genes, known as Rht genes, which have been incorporated from both Triticum and Secale, has resulted in a decrease in plant height without causing any adverse effects. A 2013 study found that hybrids have better yield stability under yield stress than do inbred lines. Application of newer techniques Abundant information exists concerning R-genes (for disease resistance) in wheat, and a continuously updated online catalogue of these genes, the Catalogue of Gene Symbols, is available. Another online database of cereal rust resistance genes is available as well. Unfortunately, less is known about rye and particularly triticale R-genes. Many R-genes have been transferred to wheat from its wild relatives, and appear in such papers and catalogues, thus making them available for triticale breeding. The two mentioned databases are significant contributors to improving the genetic variability of the triticale gene pool through gene (or more specifically, allele) provision. Genetic variability is essential for progress in breeding. In addition, genetic variability can also be achieved by producing new primary triticales, which essentially means the reconstitution of triticale, and by developing various hybrids involving triticale, such as triticale-rye hybrids. In this way, some chromosomes from the R genome have been replaced by some from the D genome. The resulting so-called substitution and translocation triticale facilitates the transfer of R-genes. Introgression Introgression involves the crossing of closely related plant relatives, and results in the transfer of 'blocks' of genes, i.e. larger segments of chromosomes compared to single genes. R-genes are generally introduced within such blocks, which are usually incorporated/translocated/introgressed into the distal (extreme) regions of chromosomes of the crop being introgressed. Genes located in the proximal areas of chromosomes may be completely linked (very closely spaced), thus preventing or severely hampering the recombination which is necessary to incorporate such blocks. Molecular markers (small lengths of DNA of a characterized/known sequence) are used to 'tag' and thus track such translocations. A weak colchicine solution has been employed to increase the probability of recombination in the proximal chromosome regions, and thus the introduction of the translocation to that region. The resultant translocation of smaller blocks that indeed carry the R-gene(s) of interest has decreased the probability of introducing unwanted genes. One resistance gene, for example, was introgressed into wheat from the 2R chromosome of rye; this was actually done through triticale. Triticale has been the amphiploid for several such rye-to-wheat introgressions. A 2014 study found that a dwarfing gene from the rye 5R chromosome also provides Fusarium head blight (FHB) resistance in this host. Production of doubled haploids Doubled haploid (DH) plants have the potential to save much time in the development of inbred lines. 
This is achieved in a single generation, as opposed to the many generations that conventional inbreeding requires, which would otherwise occupy much physical space and many facilities. DHs also express deleterious recessive alleles otherwise masked by dominance effects in a genome containing more than one copy of each chromosome (and thus more than one copy of each gene). Various techniques exist to create DHs. The in vitro culture of anthers and microspores is most often used in cereals, including triticale. These two techniques are referred to as androgenesis, which refers to the development of pollen. Many plant species, and cultivars within species, including triticale, are recalcitrant, in that the success rate of achieving whole newly generated (diploid) plants is very low. Genotype-by-culture-medium interaction is responsible for varying success rates, as is a high degree of microspore abortion during culturing. The response of parental triticale lines to anther culture is known to be correlated with the response of their progeny. Chromosome elimination is another method of producing DHs, and involves hybridisation of wheat with maize (Zea mays L.), followed by auxin treatment and the artificial rescue of the resultant haploid embryos before they naturally abort. This technique is applied rather extensively to wheat. Its success is in large part due to the insensitivity of maize pollen to the crossability inhibitor genes known as Kr1 and Kr2 that are expressed in the floral style of many wheat cultivars. The technique is unfortunately less successful in triticale. However, Imperata cylindrica (a grass) was found to be just as effective as maize with respect to the production of DHs in both wheat and triticale. Application of molecular markers An important advantage of biotechnology applied to plant breeding is the speeding up of cultivar release, which would otherwise take 8–12 years. It is the process of selection that is actually enhanced, i.e., retaining that which is desirable or promising and discarding that which is not. One valuable online resource provides marker-assisted selection (MAS) protocols relating to R-genes in wheat; MAS is a form of indirect selection. The Catalogue of Gene Symbols mentioned earlier is an additional source of molecular and morphological markers. Again, triticale has not been well characterised with respect to molecular markers, although an abundance of rye molecular markers makes it possible to track rye chromosomes and segments thereof within a triticale background. Yield improvements of up to 20% have been achieved in hybrid triticale cultivars due to heterosis. This raises the question of which inbred lines should be crossed with each other as parents to maximize yield in their hybrid progeny. This is termed the 'combining ability' of the parental lines. The identification of good combining ability at an early stage in the breeding programme can reduce the costs associated with 'carrying' a large number of plants (literally thousands) through it, and thus forms part of efficient selection. Combining ability is assessed by taking into consideration all available information on descent (genetic relatedness), morphology, qualitative (simply inherited) traits and biochemical and molecular markers. Exceptionally little information exists on the use of molecular markers to predict heterosis in triticale. 
Molecular markers are generally accepted as better predictors than morphological markers (of agronomic traits) due to their insensitivity to variation in environmental conditions. A useful type of molecular marker known as a simple sequence repeat (SSR) is used for selection in breeding. SSRs are segments of a genome composed of tandem repeats of a short sequence of nucleotides, usually two to six base pairs. They are popular tools in genetics and breeding because of their relative abundance compared to other marker types, a high degree of polymorphism (number of variants), and easy assaying by polymerase chain reaction. However, they are expensive to identify and develop. Comparative genome mapping has revealed a high degree of similarity in terms of sequence colinearity between closely related crop species. This allows the exchange of such markers within a group of related species, such as wheat, rye and triticale. One study established a 58% and 39% transferability rate to triticale from wheat and rye, respectively. Transferability refers to the phenomenon where the sequence of DNA nucleotides flanking the SSR locus (position on the chromosome) is sufficiently homologous (similar) between genomes of closely related species. Thus, DNA primers (generally, a short sequence of nucleotides used to direct the copying reaction during PCR) designed for one species can be used to detect SSRs in related species. SSR markers are available in wheat and rye, but very few, if any, are available for triticale. Genetic transformation The genetic transformation of crops involves the incorporation of 'foreign' genes or, rather, very small DNA fragments compared to the introgression discussed earlier. Amongst other uses, transformation is a useful tool to introduce new traits or characteristics into the transformed crop. Two methods are commonly employed: infectious bacterial-mediated (usually Agrobacterium) transfer and biolistics, with the latter being most commonly applied to allopolyploid cereals such as triticale. Agrobacterium-mediated transformation, however, holds several advantages, such as a low level of DNA rearrangement in the transgenic plant, a low number of introduced copies of the transforming DNA, stable integration of an a priori characterized T-DNA fragment (containing the DNA expressing the trait of interest) and an expected higher level of transgene expression. Triticale has, until recently, only been transformed via biolistics, with a 3.3% success rate. Little has been documented on Agrobacterium-mediated transformation of wheat: while no data existed with respect to triticale until 2005, the success rate in later work was nevertheless low. Research Triticale holds much promise as a commercial crop, as it has the potential to address specific problems within the cereal industry. Research is being conducted worldwide, at institutions such as Stellenbosch University in South Africa. Conventional plant breeding has helped establish triticale as a valuable crop, especially where conditions are less favourable for wheat cultivation. Notwithstanding that triticale is a synthesized grain, many initial limitations, such as an inability to reproduce due to infertility and seed shrivelling, low yield and poor nutritional value, have been largely eliminated. Tissue culture techniques with respect to wheat and triticale have seen continuous improvements, but the isolation and culturing of individual microspores seems to hold the most promise. 
Many molecular markers can be applied to marker-assisted gene transfer, but the expression of R-genes in the new genetic background of triticale remains to be investigated. More than 750 wheat microsatellite primer pairs are available in public wheat breeding programmes, and could be exploited in the development of SSRs in triticale. Another type of molecular marker, the single nucleotide polymorphism (SNP), is likely to have a significant impact on the future of triticale breeding. Health concerns Like both its hybrid parents – wheat and rye – triticale contains gluten and is therefore unsuitable for people with gluten-related disorders, such as celiac disease, non-celiac gluten sensitivity and wheat allergy, among others. In fiction An episode of the popular TV series Star Trek, "The Trouble with Tribbles", revolved around the protection of a grain developed from triticale. This grain, which had four distinct lobes per kernel, was named "quadro-triticale" by writer David Gerrold at the suggestion of producer Gene Coon. In that episode, Mr. Spock correctly attributes the ancestry of the nonfictional grain to 20th-century Canada. Indeed, in 1953 the University of Manitoba began the first North American triticale breeding program. Early breeding efforts concentrated on developing a high-yield, drought-tolerant human food crop species suitable for marginal wheat-producing areas. (Later in the episode, Chekov claims that the fictional quadro-triticale was a "Russian invention".) A later episode titled "More Tribbles, More Troubles", in the animated series and also written by Gerrold, dealt with "quinto-triticale", an improvement on the original, having apparently five lobes per kernel. Three decades later, the spinoff series Star Trek: Deep Space Nine revisited quadro-triticale and the depredations of the Tribbles in the episode "Trials and Tribble-ations". References Cereals Pooideae Energy crops Plant common names
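To make the SSR discussion in the molecular-markers section concrete, the sketch below scans a DNA string for simple sequence repeats, i.e. tandem repeats of 2–6 bp motifs. The sequence and the repeat threshold are invented for illustration; real SSR discovery runs on whole genome assemblies and is followed by primer design against the flanking sequence:

```python
import re

def find_ssrs(seq: str, min_repeats: int = 4):
    """Yield (motif, copies, start) for tandem repeats of 2-6 bp motifs."""
    for motif_len in range(2, 7):
        # (.{n}) captures a candidate motif; \1{m,} demands further tandem copies
        pattern = re.compile(r"(.{%d})\1{%d,}" % (motif_len, min_repeats - 1))
        for match in pattern.finditer(seq):
            yield match.group(1), len(match.group(0)) // motif_len, match.start()

toy_sequence = "GGATCACACACACACACGTTTAGCAGCAGCAGCTTGA"  # invented example
for motif, copies, pos in find_ssrs(toy_sequence):
    print(f"({motif}) x {copies} at position {pos}")
# -> (CA) x 6 at position 4, and (AGC) x 4 at position 21
```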
Triticale
Biology
4,728
11,694,216
https://en.wikipedia.org/wiki/Opus%20vittatum
Opus vittatum ("banded work"), also called opus listatum, was an ancient Roman construction technique introduced at the beginning of the fourth century, made by parallel horizontal courses of tuff blocks alternated with bricks. This technique was adopted during the whole 4th century, and is typical of the works of Maxentius and Constantine. See also References Sources Ancient Roman construction techniques
Opus vittatum
Engineering
80
46,216,773
https://en.wikipedia.org/wiki/Penicillium%20ianthinellum
Penicillium ianthinellum is a species of the genus Penicillium. References ianthinellum Fungi described in 1923 Fungus species
Penicillium ianthinellum
Biology
33
23,986,673
https://en.wikipedia.org/wiki/Bonse%27s%20inequality
In number theory, Bonse's inequality, named after H. Bonse, relates the size of a primorial to the smallest prime that does not appear in its prime factorization. It states that if p1, ..., pn, pn+1 are the smallest n + 1 prime numbers and n ≥ 4, then pn+1^2 < p1 p2 ⋯ pn (the product on the right is short-hand for the primorial of pn). Mathematician Denis Hanson showed an upper bound in the other direction: the product of the first n primes satisfies p1 p2 ⋯ pn < 3^pn. See also Primorial prime Notes References Theorems about prime numbers Inequalities
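A brute-force check of the inequality as reconstructed above. sympy's prime(i) returns the i-th prime (prime(1) == 2); the loop confirms pn+1^2 < p1 p2 ⋯ pn for n from 4 up to 100:

```python
from sympy import prime

def product_of_first_primes(n: int) -> int:
    """p_1 * p_2 * ... * p_n, the primorial of the n-th prime."""
    result = 1
    for i in range(1, n + 1):
        result *= prime(i)
    return result

# Bonse's inequality: for n >= 4, the square of the next prime is
# smaller than the product of the first n primes.
for n in range(4, 101):
    assert prime(n + 1) ** 2 < product_of_first_primes(n)
print("Bonse's inequality verified for 4 <= n <= 100")
```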
Bonse's inequality
Mathematics
116
747,591
https://en.wikipedia.org/wiki/Kammback
A Kammback—also known as a Kamm tail or K-tail—is an automotive styling feature wherein the rear of the car slopes downwards before being abruptly cut off with a vertical or near-vertical surface. A Kammback reduces aerodynamic drag, thus improving efficiency and reducing fuel consumption, while maintaining a practical shape for a vehicle. The Kammback is named after German aerodynamicist Wunibald Kamm for his work developing the design in the 1930s. Some vehicles incorporate the Kammback design based on aerodynamic principles, while some use a cut-off tail merely as a design or marketing feature. Origins As the speed of cars increased during the 1920s and 1930s, designers observed and began to apply the principles of automotive aerodynamics. As aerodynamic drag increases, more energy, and thus more fuel, is required to propel the vehicle. In 1922, Paul Jaray patented a car based on a teardrop profile (i.e. with a rounded nose and long, tapered tail) to minimize the aerodynamic drag that is created at higher speeds. The streamliner vehicles of the mid-1930s—such as the Tatra 77, Chrysler Airflow and Lincoln-Zephyr—were designed according to these discoveries. However, the long tail was not a practical shape for a car, so automotive designers sought other solutions. In 1935, German aircraft designer Georg Hans Madelung showed alternatives to minimize drag without a long tail. In 1936, a similar theory was applied to cars after Baron Reinhard Koenig-Fachsenfeld developed a smooth roofline shape with an abrupt end at a vertical surface, effective in achieving drag nearly as low as a streamlined body. He worked on an aerodynamic design for a bus, and Koenig-Fachsenfeld patented the idea. Koenig-Fachsenfeld worked with Wunibald Kamm at Stuttgart University, investigating vehicle shapes to "provide a good compromise between everyday utility (e.g. vehicle length and interior dimensions) and an attractive drag coefficient". In addition to aerodynamic efficiency, Kamm emphasized vehicle stability in his design, mathematically and empirically proving the effectiveness of the design. In 1938, Kamm produced a prototype using a Kammback shape, based on a BMW 328. The Kammback, along with other aerodynamic modifications, gave the prototype a drag coefficient of 0.25. The earliest mass-produced cars using Kammback principles were the 1949–1951 Nash Airflyte in the United States and the 1952–1955 Borgward Hansa 2400 in Europe. Aerodynamic theory The ideal shape to minimize drag is a "teardrop", a smooth airfoil-like shape, but it is not practical for road vehicles because of size constraints. However, researchers, including Kamm, found that abruptly cutting off the tail resulted in a minimal increase in drag. The reason for this is that a turbulent wake region forms behind the vertical surface at the rear of the car. This wake region mimics the effect of the tapered tail in that air in the free stream does not enter this region (avoiding boundary layer separation); therefore, smooth airflow is maintained, minimizing drag. Kamm's design is based on the tail being truncated at the point where the cross-section area is 50% of the car's maximum cross-section, which Kamm found represented a good compromise, as by that point the turbulence typical of flat-back vehicles had been mostly eliminated at typical speeds. The Kammback presented a partial solution to the problem of aerodynamic lift, which was becoming severe as sports car racing speeds increased during the 1950s. 
The design paradigm of sloping the tail to reduce drag was carried to an extreme on cars such as the Cunningham C-5R, resulting in an airfoil effect lifting the rear of the car at speed and so running the risk of instability or loss of control. The Kammback decreased the area of the lifting surface while creating a low-pressure zone underneath the tail. Some studies showed that the addition of a rear spoiler to a Kammback design was not beneficial because the overall drag increased with the angles that were studied. Usage In 1959, the Kammback came into use on full-body racing cars as an anti-lift measure, and within a few years it would be used on virtually all such vehicles. The design had a resurgence in the early 2000s as a method to reduce fuel consumption in hybrid electric vehicles. Several cars have been marketed as Kammbacks despite their profiles not adhering to the aerodynamic philosophy of a true Kammback. These models include the 1971–1977 Chevrolet Vega Kammback wagon, the 1981–1982 AMC Eagle Kammback, the AMC AMX-GT, and the Pontiac Firebird–based "Type K" concept cars. Some models that are marketed as "coupes"—such as BMW and Mercedes-Benz SUVs like the X6 and GLC Coupé—"use a sort-of Kammback shape, though their tail ends have a few more lumps and bumps than a proper Kammback ought to have." Cars that have had a Kammback include: 1940 BMW 328 "Mille Miglia" Kamm coupé 1952 Cunningham C-4RK 1958–1963 Lotus Elite 1961 Ferrari 250 GT SWB Breadvan 1962–1964 Ferrari 250 GTO 1963 Aston Martin DP215 1963–1964 Porsche 904 Carrera GTS 1963–1967 Alfa Romeo Giulia TZ 1963–1974 Bizzarrini Iso Grifo 1964–1965 Shelby Daytona 1964–1968 Ferrari 275 GTB 1965–1968 Ford GT40 1965–1970 Aston Martin DB6 1965–1996, 2005–present Mini Marcos 1966 Porsche 906 1966–1970 Unipower GT 1966–1974 Saab Sonett II and III 1967–1977 Alfa Romeo Tipo 33 1968–1973 Ferrari 365 GTB/4 ("Daytona") 1968–1976 Ferrari Dino 1968–1978 Lamborghini Espada 1969–1971 Fiat 850 Coupe and Sport Coupe 1970–1975 Citroën SM 1970–1977 Alfa Romeo Montreal 1970–1986 Citroën GS 1970–1978 Datsun 240Z, 260Z, 280Z 1971–1989 Alfa Romeo Alfasud 1971–1973 Ford Mustang Fastback 1972–1982 Maserati Khamsin 1972–1984 Alfa Romeo Alfetta 1974–1991 Citroën CX 1983–1991 Honda CR-X 1985–1995 Autobianchi Y10 / Lancia Y10 1986–2016 Daewoo LeMans 1991–1998 Mazda MX-3 1994–1998 Mazda Familia Neo/323C 1999–2005 Audi A2 2000–2006 Honda Insight 2004–present Toyota Prius 2007–2015 Renault Laguna III 2010–2014 Honda Insight (2nd generation) 2010–2016 Honda CR-Z 2010–present Audi A7 2012–2022 Hyundai Veloster 2017–2022 Hyundai Ioniq 2018–2023 Kia Stinger 2020–present Tesla Model Y 2020–present Ford Mustang Mach-E 2024–present Li Mega 2024–present Aston Martin Valour See also Fastback, a similar automotive styling feature Liftback, a type of tailgate that cars with a Kammback often use References Automotive styling features Aerodynamics
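As a rough illustration of the fuel-consumption claim in this article, the power needed to overcome aerodynamic drag grows with the cube of speed: P = 1/2 * rho * v^3 * Cd * A. The sketch below compares an assumed conventional body against the Cd of 0.25 reported above for Kamm's 1938 BMW 328 prototype; the frontal area, the cruising speed and the 0.36 baseline Cd are invented for illustration:

```python
RHO_AIR = 1.225  # kg/m^3, air density at sea level

def drag_power_kw(cd: float, frontal_area_m2: float, speed_kmh: float) -> float:
    """Power (kW) spent against aerodynamic drag: P = 0.5 * rho * v^3 * Cd * A."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * RHO_AIR * v ** 3 * cd * frontal_area_m2 / 1000.0

AREA_M2 = 2.0      # assumed frontal area
SPEED_KMH = 120.0  # assumed cruising speed

for label, cd in [("conventional body, Cd 0.36", 0.36), ("Kammback, Cd 0.25", 0.25)]:
    print(f"{label}: {drag_power_kw(cd, AREA_M2, SPEED_KMH):.1f} kW")
# -> about 16.3 kW versus 11.3 kW: roughly 30% less drag power at this speed,
#    all other factors being equal
```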
Kammback
Chemistry,Engineering
1,467
7,578,186
https://en.wikipedia.org/wiki/Orbit%20portrait
In mathematics, an orbit portrait is a combinatorial tool used in complex dynamics for understanding the behavior of one-complex-dimensional quadratic maps. In simple words, one can say that it is: a list of external angles for which rays land on points of that orbit, together with a graph showing the above list. Definition Given a quadratic map f_c(z) = z^2 + c from the complex plane to itself and a repelling or parabolic periodic orbit O = {z_1, ..., z_p} of f_c, so that f_c(z_j) = z_(j+1) (where subscripts are taken modulo p), let A_j be the set of angles whose corresponding external rays land at z_j. Then the set P = {A_1, ..., A_p} is called the orbit portrait of the periodic orbit O. All of the sets A_j must have the same number of elements, which is called the valence of the portrait. Examples Parabolic or repelling orbit portraits occur with valence 2, 3 or higher. valence 3 When the valence is 3, three rays land on each orbit point. For the complex quadratic polynomial with c = -0.03111 + 0.79111i, rays land on the points of the parabolic period 3 orbit, forming a portrait of valence 3; the parameter c is a center of a period 9 hyperbolic component of the Mandelbrot set. For the parabolic Julia set with c = -1.125 + 0.21650635094611i, a root point between the period 2 and period 6 components of the Mandelbrot set, the orbit portrait of the period 2 orbit with valence 3 is {{22/63, 25/63, 37/63}, {11/63, 44/63, 50/63}}. Formal orbit portraits Every orbit portrait has the following properties: Each A_j is a finite subset of the rational angles Q/Z. The doubling map t → 2t (mod 1) on the circle gives a bijection from A_j to A_(j+1) and preserves the cyclic order of the angles. All of the angles in all of the sets A_1, ..., A_p are periodic under the doubling map of the circle, and all of the angles have the same exact period. This period must be a multiple of p, so the period is of the form rp, where r is called the recurrent ray period. The sets A_1, ..., A_p are pairwise unlinked, which is to say that given any pair of them, there are two disjoint intervals of the circle where each interval contains one of the sets. Any collection of subsets of the circle which satisfies these four properties is called a formal orbit portrait. It is a theorem of John Milnor that every formal orbit portrait is realized by the actual orbit portrait of a periodic orbit of some quadratic one-complex-dimensional map. Orbit portraits contain dynamical information about how external rays and their landing points map in the plane, but formal orbit portraits are no more than combinatorial objects. Milnor's theorem states that, in truth, there is no distinction between the two. Trivial orbit portraits Orbit portraits where all of the sets A_j have only a single element are called trivial, except for the orbit portrait {{0}}. An alternative definition is that an orbit portrait is nontrivial if it is maximal, which in this case means that there is no orbit portrait that strictly contains it (i.e. there does not exist an orbit portrait P' such that P is strictly contained in P'). It is easy to see that every trivial formal orbit portrait is realized as the orbit portrait of some orbit of the map f_0(z) = z^2, since every external ray of this map lands, and they all land at distinct points of the Julia set. Trivial orbit portraits are pathological in some respects, and in the sequel we will refer only to nontrivial orbit portraits. Arcs In an orbit portrait P = {A_1, ..., A_p}, each A_j is a finite subset of the circle R/Z, so each divides the circle into a number of disjoint intervals, called complementary arcs based at the point z_j. The length of each interval is referred to as its angular width. Each z_j has a unique largest arc based at it, which is called its critical arc. 
The critical arc always has angular width greater than 1/2. These arcs have the property that every arc based at z_j, except for the critical arc, maps diffeomorphically to an arc based at z_(j+1), while the critical arc covers every arc based at z_(j+1) once, except for a single arc, which it covers twice. The arc that it covers twice is called the critical value arc for z_(j+1). This is not necessarily distinct from the critical arc. When the critical value c escapes to infinity under iteration of f_c, or when c is in the Julia set, then c has a well-defined external angle; call this angle t*. The angle t* is in every critical value arc. Also, the two inverse images of t* under the doubling map (t*/2 and (t* + 1)/2) are both in every critical arc. Among all of the critical value arcs for all of the points z_j, there is a unique smallest critical value arc, called the characteristic arc, which is strictly contained within every other critical value arc. The characteristic arc is a complete invariant of an orbit portrait, in the sense that two orbit portraits are identical if and only if they have the same characteristic arc. Sectors Much as the rays landing on the orbit divide up the circle, they divide up the complex plane. For every point z_j of the orbit, the external rays landing at z_j divide the plane into open sets called sectors based at z_j. Sectors are naturally identified with the complementary arcs based at the same point. The angular width of a sector is defined as the length of its corresponding complementary arc. Sectors are called critical sectors or critical value sectors when the corresponding arcs are, respectively, critical arcs and critical value arcs. Sectors also have the interesting property that the critical point 0 is in the critical sector of every point, and the critical value c of f_c is in the critical value sector. Parameter wakes Two parameter rays with angles t- and t+ land at the same point of the Mandelbrot set in parameter space if and only if there exists an orbit portrait P with the interval (t-, t+) as its characteristic arc. For any orbit portrait P, let r(P) be the common landing point of the two external angles in parameter space corresponding to the characteristic arc of P. These two parameter rays, along with their common landing point, split the parameter space into two open components. Let the component that does not contain the point 0 be called the P-wake and denoted as W(P). A quadratic polynomial f_c realizes the orbit portrait P with a repelling orbit exactly when c lies in W(P) and c ≠ r(P); P is realized with a parabolic orbit only for the single value c = r(P). Primitive and satellite orbit portraits Other than the zero portrait, there are two types of orbit portraits: primitive and satellite. If v is the valence of an orbit portrait and r is the recurrent ray period, then these two types may be characterized as follows: Primitive orbit portraits have r = 1 and v = 2. Every ray in the portrait is mapped to itself by the p-th iterate of the map, and each A_j is a pair of angles, each in a distinct orbit of the doubling map. In this case, r(P) is the base point of a baby Mandelbrot set in parameter space. Satellite orbit portraits have v = r ≥ 2. In this case, all of the angles make up a single orbit under the doubling map. Additionally, r(P) is the base point of a parabolic bifurcation in parameter space. Generalizations Orbit portraits turn out to be useful combinatorial objects in studying the connection between the dynamics and the parameter spaces of other families of maps as well. In particular, they have been used to study the patterns of all periodic dynamical rays landing on a periodic cycle of a unicritical anti-holomorphic polynomial. See also Lamination References Dynamical systems
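The formal-portrait properties are easy to test numerically. The sketch below takes the period 2, valence 3 example quoted in the Examples section (angles measured in full turns, denominators 63) and checks that the doubling map carries A_1 onto A_2 and back, and that every angle has the same period 6 = rp under doubling, consistent with orbit period p = 2 and recurrent ray period r = 3:

```python
from fractions import Fraction

A1 = [Fraction(22, 63), Fraction(25, 63), Fraction(37, 63)]
A2 = [Fraction(11, 63), Fraction(44, 63), Fraction(50, 63)]

def double(t: Fraction) -> Fraction:
    """The angle-doubling map t -> 2t (mod 1) on the circle R/Z."""
    return (2 * t) % 1

# The doubling map must carry each A_j bijectively onto the next set.
assert sorted(double(t) for t in A1) == sorted(A2)
assert sorted(double(t) for t in A2) == sorted(A1)

def ray_period(t: Fraction) -> int:
    """Smallest k with 2^k * t == t (mod 1)."""
    s, k = double(t), 1
    while s != t:
        s, k = double(s), k + 1
    return k

# Every angle in the portrait must share the same exact period rp = 6.
assert {ray_period(t) for t in A1 + A2} == {6}
print("formal orbit portrait properties verified")
```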
Orbit portrait
Physics,Mathematics
1,428
14,819,769
https://en.wikipedia.org/wiki/HAND1
Heart- and neural crest derivatives-expressed protein 1 is a protein that in humans is encoded by the HAND1 gene. A member of the HAND subclass of basic helix-loop-helix (bHLH) transcription factors, the heart and neural crest-derived transcript-1 (HAND1) gene is vital for the development and differentiation of three distinct embryological lineages: the cardiac muscle cells of the heart, the trophoblast of the placenta, and the yolk sac vasculature. Most closely related to the twist-like bHLH genes in amino acid identity and embryonic expression, HAND1 can form homo- and heterodimer combinations with multiple bHLH partners, mediating transcriptional activity in the nucleus. Function The protein encoded by this gene belongs to the basic helix-loop-helix family of transcription factors. This gene product is one of two closely related family members, the HAND proteins, which are expressed within the developing ventricular chambers, cardiac neural crest, endocardium (HAND2 only) and epicardium (HAND2 only). HAND1 is expressed within the myocardium of the primary heart field and plays an essential but poorly understood role in cardiac morphogenesis. HAND1 works jointly with HAND2 in cardiac development of embryos based on a crucial HAND gene dosage system. Knock-out experimentation on mice caused death and severe cardiac malformations such as failed cardiac looping, impaired ventricular development and defective chamber septation, implicating HAND1 expression as a factor in congenital heart disease. HAND factors function in the formation of the right ventricle, left ventricle, aortic arch arteries, epicardium, and endocardium, implicating them as mediators of congenital heart disease. In addition, HAND1 is uniquely expressed in trophoblasts and is essential for early trophoblast differentiation. Cardiac morphogenesis In the third week of fetal development the rudimentary heart (a bilaterally symmetrical cardiac tube) undergoes a characteristic dextral looping, forming an asymmetrical structure with bulges that represent the incipient ventricular and atrial chambers of the heart. Arising from cells derived from the primary heart field in the cardiac crescent, HAND1 goes from being expressed on both sides of the heart tube to the ventral surface of the caudal heart segment and the aortic sac, and is then restricted to the outer curvature of the left ventricle in the looped heart. In conjunction with HAND2 (a fellow bHLH transcription factor), complementary and overlapping expression patterns are thought to play a role in interpreting asymmetrical signals in the developing heart which lead to the characteristic looping. The two are implemented in cardiac development of embryos based on a crucial HAND gene dosage system.
If HAND1 is over- or under-expressed, morphological abnormalities can form; most notable are cleft lips and palates. Expression was modeled with a knock-in of phosphorylation to turn gene expression on and off, which induced the craniofacial abnormalities. HAND1 mutants also appear to develop a spectrum of cardiac abnormalities, as demonstrated in knock-out experimentation in the mouse model, where HAND1-null mice displayed defects in the ventral septum, malformation of the AV valve, hypoplastic ventricles, and outflow tract abnormalities. In humans, evidence of a frameshift mutation in the bHLH domain of HAND1 has been correlated with hypoplastic left heart syndrome (a serious form of congenital heart disease where the left side of the heart is severely underdeveloped), aiding in the implication that HAND1 expression is a factor in the disease. However, a lack of HAND1 in the distal regions of the neural crest has no effect on cranial feature formation. Mutation of HAND1 has been shown to hinder the effect of GATA4, another vital cardiac transcription factor, and is associated with congenital heart disease. The lack of HAND1 detection in the developing embryo leads to many of the structural defects that cause heart disease and facial deformities, while the dosage of HAND1 relates to the severity of these maladies. Trophoblast differentiation In addition, HAND1 is uniquely expressed in trophoblasts and is essential for early trophoblast giant cell differentiation. Trophoblast giant cells are necessary in order for placental development to proceed, participating in vital processes such as blastocyst implantation, remodeling of the maternal decidua, and secretion of hormones. The importance of this relationship is demonstrated in HAND1-null mutant mice, which display significant abnormalities in trophoblast development, such as a reduced ectoplacental cone, thin parietal yolk sac, and reduced density of trophoblast giant cells. These homozygous HAND1-null mutant embryos were arrested by E7.5 of gestation, though they could be rescued by the contribution of wild-type cells to the trophoblast. Yolk sac vasculogenesis Expressed at high levels in the extraembryonic membranes throughout development, HAND1 also plays a functional role in vascular development of the yolk sac. Though not strictly required for vasculogenesis, data have shown that HAND1 contributes to the fine-tuning of the vasculogenic response in the yolk sac, recruiting smooth muscle cells to the endothelial network in order to refine the primitive endothelial plexus into a functional vascular system. This relationship has been demonstrated in the HAND1-null mouse model, where embryos lacking the HAND1 gene had a yolk sac vasculature defect caused by a lack of vasculature refinement, leading to the accumulation of hematopoietic cells between the yolk sac and the amnion. References Further reading External links Transcription factors
HAND1
Chemistry,Biology
1,378
16,085,372
https://en.wikipedia.org/wiki/Processing%20amplifier
A processing amplifier, commonly called a ProcAmp, is used to alter, adjust or clean video or audio signal components or parameters in real time. Form factor Broadcast professionals prefer to use hardware rack-mountable ProcAmps that help them make video broadcast-safe by correcting video inconsistencies. ProcAmps may also be chip-based, as part of other larger multi-purpose devices in professional environments. Software ProcAmps are also available as code embedded in media players like Windows Media Player, VLC, KMPlayer, or in codecs like ffdshow. Software ProcAmps can process media either on the CPU or GPU. Video ProcAmp Video ProcAmps can be used for processing standard-definition 525/30 (NTSC) or 625/25 (PAL) signals as well as high-definition video signals. ProcAmps can process video signals ranging from analog composite to SDI video signals. Common ProcAmp controls: brightness (luminance), contrast (gain), saturation (amplitude) and hue (phase). Common ProcAmp features: regenerating sync and color burst, adjusting sync amplitude, boosting low-light-level video, reducing video washout and chroma clipping. See also Video processing Broadcast-safe References Broadcast engineering Television technology ITU-R recommendations
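The four classic controls correspond to simple operations on the luma and chroma components of the signal. The following is a minimal illustrative sketch, assuming a floating-point RGB frame with values in [0, 1] and BT.601 luma weights; it is not the implementation of any particular hardware or software ProcAmp:

```python
import numpy as np

def procamp(rgb, brightness=0.0, contrast=1.0, saturation=1.0, hue_deg=0.0):
    """Toy ProcAmp: operate on luma/chroma derived from an RGB frame (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b          # luma (BT.601 weights)
    u, v = b - y, r - y                            # simple chroma differences
    y = contrast * y + brightness                  # contrast = gain, brightness = offset
    th = np.deg2rad(hue_deg)                       # hue = rotation of chroma phase
    u, v = u * np.cos(th) - v * np.sin(th), u * np.sin(th) + v * np.cos(th)
    u, v = saturation * u, saturation * v          # saturation = chroma amplitude
    r, b = y + v, y + u                            # invert the luma/chroma transform
    g = y - (0.299 / 0.587) * v - (0.114 / 0.587) * u
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# Example: brighten slightly, boost saturation and nudge hue on a test frame.
frame = np.random.rand(480, 720, 3)
out = procamp(frame, brightness=0.05, contrast=1.1, saturation=1.3, hue_deg=5.0)
```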
Processing amplifier
Technology,Engineering
258
35,886,761
https://en.wikipedia.org/wiki/Omicron%20Cygni
The Bayer designation Omicron Cygni (ο Cyg / ο Cygni) is shared by two or three star systems in the constellation Cygnus. Application of the superscripts to the three stars varies in different publications; the Flamsteed designations are unambiguous: ο1 Cygni: 31 Cygni, sometimes 30 Cygni ο2 Cygni: 32 Cygni, sometimes 31 Cygni ο3 Cygni: sometimes 32 Cygni Cygni, Omicron Cygnus (constellation)
Omicron Cygni
Astronomy
108
64,293,474
https://en.wikipedia.org/wiki/Yevgeniy%20Nikulin
Yevgeniy Alexandrovich Nikulin (Евгений Александрович Никулин) is a Russian computer hacker. He was arrested in Prague in October 2016, and was charged with the hacking and data theft of several U.S. technology companies. In September 2020, he was sentenced to 88 months in prison. Hacking career In 2012, Nikulin was alleged to be part of a criminal clique involving a Ukrainian national, Oleksandr Ieremenko. Arrest Czech police arrested Nikulin in Prague on October 5, 2016, in connection with the 2012 hacking and data theft of LinkedIn, Dropbox, and Formspring. According to a report by TV Rain, his arrest may have been the result of a cooperative effort between the U.S. and Sergei Mikhailov (FSB). U.S. authorities had previously been tipped off about Nikulin in April 2014. Detention On November 23, 2016, Russia requested Nikulin's extradition, citing a 2009 case that involved theft from the online payment system WebMoney. On February 7, 2017, a lawyer for Nikulin claimed that in mid-November 2016, as well as earlier that day, an FBI agent had visited Nikulin in Pankrác Prison and had offered him cash, an apartment, U.S. citizenship and the dropping of all cyber charges against him if he would agree to confess to participating in the 2016 Democratic National Committee email leak. In late March 2018, Paul Ryan visited the Czech capital, where he urged authorities to grant Nikulin's extradition to the U.S. Extradition On March 30, 2018, Nikulin was extradited to the U.S., where he pleaded not guilty to the charges against him. Conviction On July 10, 2020, Nikulin was convicted by a jury in a United States District Court in San Francisco on all but one of the counts. Sentencing On September 29, 2020, Nikulin was sentenced to 88 months in prison. Controversy Bryan Paarmaan, then the FBI's Deputy Assistant Director in the International Operations Division, admitted to leaking details of Nikulin's indictment to Los Angeles Times reporter Del Quentin Wilber two days before the indictment was unsealed. References 1987 births Living people Hackers Hacking in the 2010s Russian cybercriminals 21st-century Russian criminals
Yevgeniy Nikulin
Technology
497
256,461
https://en.wikipedia.org/wiki/Artificial%20heart
An artificial heart is an artificial organ device that replaces the heart. Artificial hearts are typically used to bridge the time to complete heart transplantation surgery, but research is ongoing to develop a device that could permanently replace the heart in the case that a heart transplant (from a deceased human or, experimentally, from a deceased genetically engineered pig) is unavailable or not viable. Currently, there are two commercially available full artificial heart devices; in both cases, they are for temporary use, of less than a year, for total heart failure patients awaiting a human heart to be transplanted into their bodies. Although other similar inventions preceded it from the late 1940s, the first artificial heart to be successfully implanted in a human was the Jarvik-7 in 1982, designed by a team including Willem Johan Kolff, William DeVries and Robert Jarvik. An artificial heart is distinct from a ventricular assist device (VAD, which supports one or both of the ventricles, the heart's lower chambers, and can also be a permanent solution) and from the intra-aortic balloon pump – both devices are designed to support a failing heart rather than replace it. It is also distinct from a cardiopulmonary bypass machine, which is an external device used to provide the functions of both the heart and lungs, used only for a few hours at a time, most commonly during cardiac surgery. It is also distinct from a ventilator, used to support failing lungs, and from extracorporeal membrane oxygenation (ECMO), which is used to support those with both inadequate heart and lung function for days or weeks at a time, unlike the bypass machine. History Origins A synthetic replacement for a heart remains a long-sought "holy grail" of modern medicine. The obvious benefit of a functional artificial heart would be to lower the need for heart transplants because the demand for organs always greatly exceeds supply. Although the heart is conceptually a pump, it embodies subtleties that defy straightforward emulation with synthetic materials and power supplies. Artificial hearts have historically had issues both from a biomedical standpoint (clotting and foreign-object rejection) and in terms of longevity and practicality (the lifespan of the device and the equipment required to run it). Since the inception of the device, artificial hearts have been continually improved as medical technology has advanced. More recent devices, such as the Carmat heart, have sought to improve upon their predecessors by reducing complications resulting from device implantation, such as foreign-body rejection and thrombus formation. Early development The first artificial heart was made by the Soviet scientist Vladimir Demikhov in 1938. It was implanted in a dog. On July 2, 1952, 41-year-old Henry Opitek, suffering from shortness of breath, made medical history at Harper University Hospital at Wayne State University in Michigan. The Dodrill-GMR heart machine, considered to be the first operational mechanical heart, was successfully used while performing heart surgery. Forest Dewey Dodrill, working closely with Matthew Dudley, used the machine in 1952 to bypass Henry Opitek's left ventricle for 50 minutes while he opened the patient's left atrium and worked to repair the mitral valve. Ongoing research was later done on calves at the Hershey Medical Center Animal Research Facility in Hershey, Pennsylvania, during the 1970s.
In Dodrill's post-operative report, he notes, "To our knowledge, this is the first instance of survival of a patient when a mechanical heart mechanism was used to take over the complete body function of maintaining the blood supply of the body while the heart was open and operated on." A heart–lung machine was first used in 1953 during a successful open heart surgery. John Heysham Gibbon, the inventor of the machine, performed the operation and developed the heart–lung substitute himself. Following these advances, scientific interest in developing a solution for heart disease grew in numerous research groups worldwide. Early designs of total artificial hearts In 1949, a precursor to the modern artificial heart pump was built by doctors William Sewell and William Glenn of the Yale School of Medicine using an Erector Set, assorted odds and ends, and dime-store toys. The external pump successfully bypassed the heart of a dog for more than an hour. On December 12, 1957, Willem Johan Kolff, the world's most prolific inventor of artificial organs, implanted an artificial heart into a dog at Cleveland Clinic. The dog lived for 90 minutes. In 1958, Domingo Liotta initiated studies of total artificial heart (TAH) replacement in Lyon, France, and in 1959–60 at the National University of Córdoba, Argentina. He presented his work at the meeting of the American Society for Artificial Internal Organs held in Atlantic City in March 1961. At that meeting, Liotta described the implantation of three types of orthotopic (inside the pericardial sac) TAHs in dogs, each of which used a different source of external energy: an implantable electric motor, an implantable rotating pump with an external electric motor, and a pneumatic pump. On February 6, 1961, Paul Winchell, with the assistance of Henry Heimlich (the inventor of the Heimlich maneuver), submitted a patent for a mechanically driven artificial heart that implemented a cam-driven roller mechanism to compress flexible bags containing blood. This is contrary to the popular claim that Winchell submitted the patent in the summer of 1956, as well as contrary to the claim that Winchell "invented" the artificial heart. In fact, two patents existed prior to Winchell's submission. These patents were filed April 10, 1956, and April 17, 1959, respectively. Winchell also claims that the design within his patent was used in later models of the Jarvik hearts, a claim which Robert Jarvik, the principal designer of those hearts, denies on the basis that his pneumatically driven hearts share little in common with Winchell's mechanically actuated patent. In 1964, the National Institutes of Health started the Artificial Heart Program, with the goal of putting an artificial heart into a human by the end of the decade. The purpose of the program was to develop an implantable artificial heart, including the power source, to replace a failing heart. In February 1966, Adrian Kantrowitz rose to international prominence when he performed the world's first permanent implantation of a partial mechanical heart (left ventricular assist device) at Maimonides Medical Center. In 1967, Kolff left Cleveland Clinic to start the Division of Artificial Organs at the University of Utah and pursue his work on the artificial heart. In 1973, a calf named Tony survived for 30 days on an early Kolff heart. In 1975, a bull named Burk survived 90 days on the artificial heart.
In 1976, a calf named Abebe lived for 184 days on the Jarvik 5 artificial heart. In 1981, a calf named Alfred Lord Tennyson lived for 268 days on the Jarvik 5. Over the years, more than 200 physicians, engineers, students and faculty developed, tested and improved Kolff's artificial heart. To help manage his many endeavors, Kolff assigned project managers. Each project was named after its manager. Graduate student Robert Jarvik was the project manager for the artificial heart projects, from which the Jarvik line of artificial hearts takes its name. There, physician-engineer Clifford Kwan-Gett invented two components of an integrated pneumatic artificial heart system: a ventricle with hemispherical diaphragms that did not crush red blood cells (a problem with previous artificial hearts) and an external heart driver that inherently regulated blood flow without needing complex control systems. Jarvik also combined several modifications: an ovoid shape to fit inside the human chest, a more blood-compatible polyurethane developed by biomedical engineer Donald Lyman, and a fabrication method by Kwan-Gett that made the inside of the ventricles smooth and seamless to reduce dangerous stroke-causing blood clots. First clinical implantation of a total artificial heart On April 4, 1969, Domingo Liotta and Denton A. Cooley replaced a dying man's heart with a mechanical heart inside the chest at The Texas Heart Institute in Houston as a bridge for a transplant. The man woke up and began to recover. After 64 hours, the pneumatic-powered artificial heart was removed and replaced by a donor heart. However, thirty-two hours after transplantation, the man died of what was later proved to be an acute pulmonary infection extending to both lungs, caused by fungi and most likely the result of an immunosuppressive drug complication. The original prototype of the Liotta-Cooley artificial heart used in this historic operation is prominently displayed in the Smithsonian Institution's National Museum of American History "Treasures of American History" exhibit in Washington, D.C. First clinical applications of a permanent pneumatic total artificial heart The first clinical use of an artificial heart designed for permanent implantation rather than a bridge to transplant occurred in 1982 at the University of Utah. In 1981, William DeVries submitted a request to the FDA for permission to implant the Jarvik-7 into a human being. On December 1, 1982, William DeVries implanted the Jarvik-7 artificial heart into Barney Clark, a retired dentist from Seattle who had severe congestive heart failure. Clark's case was highly publicized, garnering attention from television networks, newspapers and periodicals. Clark lived for 112 days tethered to the UtahDrive pneumatic drive console, a large and heavy device. During that time Clark required several re-operations, suffered seizures, experienced prolonged periods of confusion and a number of instances of bleeding, and asked several times to be allowed to die. Clark, however, still believed his being part of the initial experiment was an important contribution to medicine, and maintained an overall positive outlook on his condition. Barney Clark died on March 23, 1983, of multiorgan system failure. Despite the complications, DeVries considered Clark's case a success. DeVries subsequently moved his practice to Humana Hospital Audubon in Louisville, Kentucky, to continue studies using the Jarvik-7.
DeVries' first artificial heart patient in Louisville was Bill Schroeder. DeVries replaced Schroeder's failing heart with a Jarvik-7 on November 25, 1984. Like Clark, Schroeder suffered from bleeding that required re-operation to resolve. In the first weeks the outlook was good: Schroeder was allowed a can of Coors beer and received a phone call from President Reagan, during which he famously asked the president for an update on a late Social Security check. However, 19 days after the operation, Schroeder suffered the first of four strokes. Despite this, his recovery continued, and he was allowed to live in a specially outfitted apartment near the hospital for a period of time and to use a newly developed battery-powered portable drive unit for the heart, which allowed him to venture out of the hospital for short periods. Schroeder's health continued to decline as three more strokes plagued his time with the artificial heart. He died on August 6, 1986, from complications from a stroke, respiratory failure and sepsis, after 620 days with the artificial heart. Three more patients received the Jarvik-7 as a permanent heart. Murray Haydon, DeVries' third patient, received a Jarvik-7 on February 17, 1985. Haydon suffered pulmonary issues and was required to be on a mechanical ventilator for the duration of his time with the artificial heart. Haydon died of infection and kidney failure on June 19, 1986, after 488 days with his artificial heart. On April 7, 1985, Dr. Bjarne Semb of Karolinska Hospital in Stockholm, Sweden, implanted a Jarvik-7 in Swedish businessman Leif Stenberg. Stenberg lived 229 largely uneventful days with the heart but suffered a stroke and subsequently died on November 21, 1985. Jack Burcham was DeVries' fourth and final patient to receive a Jarvik-7 as a destination therapy. Burcham received his heart on April 14, 1985, but due to complications from the size of the device, bleeding and kidney failure, Burcham died just 10 days later on April 25, 1985. In the mid-1980s, artificial hearts were powered by large pneumatic drive consoles. Moreover, two sizable catheters had to cross the body wall to carry the pneumatic pulses to the implanted heart, greatly increasing the risk of infection. To speed development of a new generation of technologies, the National Heart, Lung, and Blood Institute opened a competition for implantable electrically powered artificial hearts. Three groups received funding: Cleveland Clinic in Cleveland, Ohio; the College of Medicine of Pennsylvania State University (Penn State Milton S. Hershey Medical Center) in Hershey, Pennsylvania; and AbioMed, Inc. of Danvers, Massachusetts. Despite considerable progress, the Cleveland program was discontinued after the first five years. First clinical application of an intrathoracic pump On July 19, 1963, E. Stanley Crawford and Domingo Liotta implanted the first clinical Left Ventricular Assist Device (LVAD) at The Methodist Hospital in Houston, Texas, in a patient who had a cardiac arrest after surgery. The patient survived for four days under mechanical support but did not recover from the complications of the cardiac arrest; finally, the pump was discontinued, and the patient died. First clinical application of a paracorporeal pump On 21 April 1966, Michael DeBakey and Liotta implanted the first clinical LVAD in a paracorporeal position (where the external pump rests at the side of the patient) at The Methodist Hospital in Houston, in a patient experiencing cardiogenic shock after heart surgery.
The patient developed neurological and pulmonary complications and died after a few days of LVAD mechanical support. In October 1966, DeBakey and Liotta implanted the paracorporeal Liotta-DeBakey LVAD in a new patient who recovered well and was discharged from the hospital after 10 days of mechanical support, thus constituting the first successful use of an LVAD for postcardiotomy shock. First VAD patient with FDA approved hospital discharge In 1990, Brian Williams was discharged from the University of Pittsburgh Medical Center (UPMC), becoming the first VAD patient to be discharged with Food and Drug Administration (FDA) approval. The patient was supported in part by bioengineers from the University of Pittsburgh's McGowan Institute. Total artificial hearts Approved medical devices SynCardia SynCardia Systems is a company based in Tucson, Arizona, which currently offers two models of its artificial heart, in 70cc and 50cc sizes. The 70cc model is used for biventricular heart failure in adult men, while the 50cc is for children and women. As of 2014, more than 1,250 patients had received SynCardia artificial hearts. The device has two drive systems available for patients to use: the Companion 2 in-hospital driver, approved by the FDA in 2012, or the Freedom Driver System, approved in 2014. The Companion 2 replaced the Circulatory Support System Console, which was the original drive system for the heart. The Freedom Driver System is a compact portable driver for greater mobility and can allow some patients to return home. To power the heart, the drivers send pulsed air through the drivelines into the heart. The drivers also monitor blood flow for each ventricle. In 1991, the rights to the Jarvik-7 were transferred to CardioWest, which resumed testing of the heart. Following good results with the TAH as a bridge to heart transplant, a trial of the CardioWest TAH was initiated in 1993 and completed in 2002. After the completion of this trial, CardioWest became SynCardia. The SynCardia total artificial heart was first approved for use in 2004 by the US Food and Drug Administration. Though the SynCardia shares its design with the Jarvik-7, improvements have been made throughout its lifespan, reducing the occurrence of stroke and bleeding. Survival time on the device has also drastically improved, with one patient supported for over 1,700 days. In 2016, SynCardia filed for bankruptcy protection and was later acquired by the private equity firm Versa Capital Management. In 2021, SynCardia was acquired by Hunniwell Lake Ventures under its portfolio company, Picard Medical. In April 2023, SynCardia filed to become a publicly traded company via SPAC. Carmat Aeson bioprosthetic heart On October 27, 2008, French professor and leading heart transplant specialist Alain F. Carpentier announced a timeline under which a fully implantable artificial heart would be ready for clinical trials by 2011 and for alternative transplant by 2013. It was developed and would be manufactured by his biomedical firm CARMAT SA, and venture capital firm Truffle Capital. The prototype used embedded electronic sensors and was made from chemically treated animal tissues, called "biomaterials", or a "pseudo-skin" of biosynthetic, microporous materials.
According to a press release by Carmat dated December 20, 2013, the first implantation of its artificial heart in a 75-year-old patient was performed on December 18, 2013, by the Georges Pompidou European Hospital team in Paris, France. The patient died 75 days after the operation. In Carmat's design, called the Aeson, two chambers are each divided by a membrane that holds hydraulic fluid on one side. A motorized pump moves hydraulic fluid in and out of the chambers. The pumped fluid causes the membrane to move, pumping blood through the heart. The blood-facing side of the membrane is made of tissue obtained from a sac that surrounds a cow's heart, to make the device more biocompatible. The Carmat device also uses valves made from cow heart tissue and has sensors to detect increased pressure within the device. Cardiac information is sent to an internal control system that can adjust the flow rate in response to increased demand, such as when a patient is exercising. The Carmat Aeson is intended for cases of terminal heart failure, instead of being used as a bridge device while the patient awaits a transplant. At 900 grams, it weighs nearly three times as much as a typical heart and is targeted primarily at obese men. It also requires the patient to carry around an additional Li-Ion battery. The projected lifetime of the artificial heart is around 5 years (230 million beats). In 2016, trials for the Carmat "fully artificial heart" were banned by the National Agency for Security and Medicine in Europe after short survival rates were confirmed. The ban was lifted in May 2017. At that time, a European report stated that Celyad's C-Cure cell therapy for ischemic heart failure "could only help a subpopulation of Phase III study participants, and Carmat will hope that its artificial heart will be able to treat a higher proportion of heart failure patients". The Carmat artificial heart was approved for sale in the European Union, receiving a CE marking on December 22, 2020. Currently, the Carmat is available only in Europe as a bridge to transplant, for up to 180 days while awaiting a human heart transplant. In the United States it is only available in clinical trials.
The drive console contained two independent drive systems for redundancy, data recording devices and backup compressed air cylinders. The Jarvik-7 was later developed in a smaller 70cc variant so that it would fit better in the chest cavities of more patients. Another development that came to the Jarvik-7 was the introduction of a battery-powered portable drive system the size of a briefcase, which later patients took advantage of. Contrary to popular belief and erroneous articles in several periodicals, the Jarvik-7 heart was not permanently banned for use. After a hostile takeover, Symbion's facilities lost FDA compliance in 1990, and the FDA required that the devices be destroyed. After the rights to the device had been transferred to what was then CardioWest Technologies, an investigational study was approved in 1993. CardioWest Technologies became SynCardia in 2003, which currently produces the modern version of the Jarvik-7, known as the SynCardia temporary Total Artificial Heart. POLVAD Since 1991, the Foundation for Cardiac Surgery Development (FRK) in Zabrze, Poland, has been working on developing an artificial heart. Today, the Polish heart-support system POLCAS consists of the artificial ventricle POLVAD-MEV and the three controllers POLPDU-401, POLPDU-402 and POLPDU-501. Each device is designed to support a single patient. The control units of the 401 and 402 series may be used only in hospital due to their size, method of control and type of power supply. The 501-series control unit is the latest FRK product; due to its much smaller size and weight, it is a significantly more mobile solution and can therefore also be used during supervised treatment conducted outside the hospital. Phoenix-7 In June 1996, a 46-year-old man received a total artificial heart implantation performed by Jeng Wei at Cheng-Hsin General Hospital in Taiwan. This technologically advanced pneumatic Phoenix-7 Total Artificial Heart was manufactured by Taiwanese dentist Kelvin K. Cheng, Chinese physician T. M. Kao, and colleagues at the Taiwan TAH Research Center in Tainan, Taiwan. With this experimental artificial heart, the patient's blood pressure was maintained at 90–100/40–50 mmHg and cardiac output at 4.2–5.8 L/min. The patient then received the world's first successful combined heart and kidney transplantation after bridging with a total artificial heart. AbioMed hearts The first AbioCor was surgically implanted in a patient on July 3, 2001. The AbioCor is made of titanium and plastic with a weight of 0.9 kg (two pounds), and its internal battery can be recharged with a transduction device that sends power through the skin. The internal battery lasts for half an hour, and a wearable external battery pack lasts for four hours. The FDA announced on September 5, 2006, that the AbioCor could be implanted for humanitarian uses after the device had been tested on 15 patients. It is intended for critically ill patients who cannot receive a heart transplant. Some limitations of the current AbioCor are that its size makes it suitable for less than 50% of the female population and only about 50% of the male population, and its useful life is only 1–2 years. By combining its valved ventricles with the control technology and roller screw developed at Penn State, AbioMed designed a smaller, more stable heart, the AbioCor II.
This pump, which should be implantable in most men and 50% of women with a life span of up to five years, had animal trials in 2005, and the company hoped to get FDA approval for human use in 2008. After a great deal of experimentation, AbioMed abandoned development of total artificial hearts in 2015. As of 2019, Abiomed markets only heart pumps, "intended to help pump blood in patients who need short-term support (up to 6 days)", which are not total artificial hearts. Frazier-Cohn On March 12, 2011, an experimental artificial heart was implanted in 55-year-old Craig Lewis at The Texas Heart Institute in Houston by O. H. Frazier and William Cohn. The device was a combination of two modified HeartMate II pumps which had undergone bovine trials. So far, only one person has benefited from Frazier and Cohn's artificial heart. Craig Lewis had amyloidosis in 2011 and sought treatment. After obtaining permission from his family, Frazier and Cohn replaced his heart with their device. Lewis survived for another 5 weeks after the operation; he eventually died from liver and kidney failure due to his amyloidosis, after which his family asked that his artificial heart be unplugged. Current prototypes Soft artificial heart On July 10, 2017, Nicholas Cohrs and colleagues presented a new concept of a soft total artificial heart in the Journal of Artificial Organs. The heart was developed in the Functional Materials Laboratory at ETH Zurich. (Cohrs was listed as a doctoral student in a group led by Professor Wendelin Stark at ETH Zurich.) The soft artificial heart (SAH) is a silicone monoblock fabricated with the help of 3D bioprinting technology. It weighs 390g, has a volume of 679 cm3, and is operated through pressurized air. "Our goal is to develop an artificial heart that is roughly the same size as the patient's own one and which imitates the human heart as closely as possible in form and function", Cohrs said in an interview. The SAH fundamentally moves and works like a natural heart, but the prototype only performed for 3000 beats (about 30 to 50 minutes at an average heart rate) in a hybrid mock circulation machine before the silicone membrane (2.3 mm thick) between the left ventricle and the air expansion chamber ruptured. The working life of a more recent Cohrs prototype (using various polymers instead of silicone) was still limited, according to reports in early 2018, with that model providing a useful life of 1 million heartbeats, roughly ten days in a human body. At the time, Cohrs and his team were experimenting with CAD software and 3D printing, striving to develop a model that would last up to 15 years. "We cannot really predict when we could have a final working heart which fulfills all requirements and is ready for implantation. This usually takes years", said Cohrs. BiVACOR Artificial Heart Founded in 2008, the BiVACOR company has been developing a total artificial heart based on a rotary centrifugal pump. Artificial heart researchers and surgeons O. H. Frazier and William Cohn are on the board of the BiVACOR company. The BiVACOR heart seeks to improve the artificial heart by using a magnetically levitated impeller, which reduces clotting and has only a single moving part; this also reduces size and complexity, and the device requires only a battery pack to run. The BiVACOR heart is not pulsatile like previous hearts and contains no valves, but is capable of generating "beats" by rapidly changing the speed of the impeller.
The BiVACOR has been tested as a replacement heart in a sheep. On November 10, 2023, the BiVACOR heart received FDA authorization under the investigational device exemption for use in human trials. In July 2024, a successful implantation of the BiVACOR artificial heart in a 57-year-old man with end-stage heart failure was conducted as part of its first-in-human clinical study at Baylor St. Luke's Medical Center, with four more patients expected to be enrolled in the study. A few weeks later, a second person, a 34-year-old man, received a BiVACOR artificial heart at Duke University Hospital, which served as a successful bridge to a heart transplant 10 days later. Others A centrifugal pump or an axial-flow pump can be used as an artificial heart, resulting in the patient being alive without a pulse. A centrifugal artificial heart which alternately pumps the pulmonary circulation and the systemic circulation, causing a pulse, has been described. Researchers have constructed an artificial heart out of flexible silicone foam; it works with an external pump to push air and fluids through the heart. It currently cannot be implanted into humans, but offers a new concept in artificial hearts. Hybrid assistive devices Patients who have some remaining heart function but who can no longer live normally may be candidates for ventricular assist devices (VAD), which do not replace the human heart but complement it by taking up much of the function. The first Left Ventricular Assist Device (LVAD) system was created by Domingo Liotta at Baylor College of Medicine in Houston in 1962. Another VAD, the Kantrowitz CardioVad, designed by Adrian Kantrowitz, boosts the native heart by taking up over 50% of its function. Additionally, the VAD can help patients on the waiting list for a heart transplant. In a young person, this device could delay the need for a transplant by 10–15 years, or even allow the heart to recover, in which case the VAD can be removed. The artificial heart is powered by a battery that must be replaced periodically while the device continues working. The first heart assist device was approved by the FDA in 1994, and two more received approval in 1998. While the original assist devices emulated the pulsating heart, newer versions, such as the Heartmate II, developed by The Texas Heart Institute of Houston, provide continuous flow. These pumps (which may be centrifugal or axial flow) are smaller and potentially more durable and last longer than the current generation of total heart replacement pumps. Another major advantage of a VAD is that the patient keeps the natural heart, which may still function for temporary back-up support if the mechanical pump were to stop. This may provide enough support to keep the patient alive until a solution to the problem is implemented. In August 2006, an artificial heart was implanted into a 15-year-old girl at the Stollery Children's Hospital in Edmonton, Alberta. It was intended to act as a temporary fixture until a donor heart could be found. Instead, the artificial heart (called a Berlin Heart) allowed natural healing processes to occur and her heart healed on its own. After 146 days, the Berlin Heart was removed, and the girl's heart functioned properly on its own. On 16 December 2011 the Berlin Heart gained U.S. FDA approval. The device has since been successfully implanted in several children including a 4-year-old Honduran girl at Children's Hospital Boston.
Several continuous-flow ventricular assist devices have been approved for use in the European Union, and, as of August 2007, were undergoing clinical trials for FDA approval. In 2011, Craig Lewis, a 55-year-old Texan, presented at the Texas Heart Institute with a severe case of cardiac amyloidosis. He was given an experimental continuous-flow artificial heart transplant which saved his life. Lewis died 5 weeks later of liver failure after slipping into a coma due to the amyloidosis. In 2012, a study published in the New England Journal of Medicine compared the Berlin Heart to extracorporeal membrane oxygenation (ECMO) and concluded that "a ventricular assist device available in several sizes for use in children as a bridge to heart transplantation [such as the Berlin Heart] was associated with a significantly higher rate of survival as compared with ECMO." The study's primary author, Charles D. Fraser Jr., surgeon in chief at Texas Children's Hospital, explained: "With the Berlin Heart, we have a more effective therapy to offer patients earlier in the management of their heart failure. When we sit with parents, we have real data to offer so they can make an informed decision. This is a giant step forward." Suffering from end-stage heart failure, former Vice President Dick Cheney underwent a procedure at INOVA Fairfax Hospital, in Fairfax, Virginia, in July 2010, to have a Heartmate II VAD implanted. In 2012, he received a heart transplant at age 71 after 20 months on a waiting list. See also Organ culture Artificial heart valve Artificial cardiac pacemaker Ventricular assist device References Russian inventions Implants (medicine) Cardiology Heart 1937 in medicine 20th-century inventions
Artificial heart
Biology
6,921
1,850,771
https://en.wikipedia.org/wiki/Modified%20atmosphere
Modified atmosphere packaging (MAP) is the practice of modifying the composition of the internal atmosphere of a package (commonly food packages, drugs, etc.) in order to improve the shelf life. The need for this technology for food arises from the short shelf life of food products such as meat, fish, poultry, and dairy in the presence of oxygen. In food, oxygen is readily available for lipid oxidation reactions. Oxygen also helps maintain high respiration rates of fresh produce, which contribute to shortened shelf life. From a microbiological standpoint, oxygen encourages the growth of aerobic spoilage microorganisms. Therefore, the reduction of oxygen and its replacement with other gases can reduce or delay oxidation reactions and microbiological spoilage. Oxygen scavengers may also be used to reduce browning due to lipid oxidation by halting the auto-oxidative chemical process. In addition, MAP changes the gaseous atmosphere by incorporating different compositions of gases. The modification process generally lowers the amount of oxygen (O2) in the headspace of the package. Oxygen can be replaced with nitrogen (N2), a comparatively inert gas, or carbon dioxide (CO2). A stable atmosphere of gases inside the packaging can be achieved using active techniques, such as gas flushing and compensated vacuum, or passively by designing "breathable" films. History The first recorded beneficial effects of using modified atmosphere date back to 1821. Jacques Étienne Bérard, a professor at the School of Pharmacy in Montpellier, France, reported delayed ripening of fruit and increased shelf life in low-oxygen storage conditions. Controlled atmosphere storage (CAS) was used from the 1930s, when ships transporting fresh apples and pears had high levels of CO2 in their holding rooms in order to increase the shelf life of the product. In the 1970s, MA packages reached retail stores when bacon and fish were sold in retail packs in Mexico. Since then development has been continuous, and interest in MAP has grown due to consumer demand. Theory The atmosphere within the package can be modified passively or actively. In passive MAP, the high concentration of CO2 and low O2 levels in the package are achieved over time as a result of the respiration of the product and the gas transmission rates of the packaging film. This method is commonly used for fresh respiring fruits and vegetables. Reducing O2 and increasing CO2 slows down the respiration rate, conserves stored energy, and therefore extends shelf life. On the other hand, active MA involves the use of active systems such as O2 and CO2 scavengers or emitters, moisture absorbers, ethylene scavengers, ethanol emitters and gas flushing in the packaging film or container to modify the atmosphere within the package. The mixture of gases selected for a MA package depends on the type of product, the packaging materials and the storage temperature. The atmosphere in an MA package consists mainly of adjusted amounts of N2, O2, and CO2. Reduction of O2 delays deteriorative reactions in foods such as lipid oxidation, browning reactions and growth of spoilage organisms. Low O2 levels of 3-5% are used to slow down the respiration rate in fruits and vegetables. In the case of red meat, however, high levels of O2 (~80%) are used to reduce oxidation of myoglobin and maintain an attractive bright red color of the meat. Meat color enhancement is not required for pork, poultry and cooked meats; therefore, a higher concentration of CO2 is used to extend the shelf life.
Levels higher than 10% of CO2 are phytotoxic for fruit and vegetables, so CO2 is maintained below this level. N2 is mostly used as a filler gas to prevent pack collapse. It is also used to prevent oxidative rancidity in packaged products such as snack foods by displacing atmospheric air, especially oxygen, thereby extending shelf life. Noble gases such as helium (He), argon (Ar) and xenon (Xe) can also replace N2 as the balancing gas in MAP to preserve and extend the shelf life of fresh and minimally processed fruits and vegetables. Their beneficial effects are due to their higher solubility and diffusivity in water, making them more effective in displacing O2 from cellular sites and enzymatic O2 receptors. There has been a debate regarding the use of carbon monoxide (CO) in the packaging of red meat due to its possible toxic effect on packaging workers. Its use results in a more stable red color of carboxymyoglobin in meat, which leads to another concern that it can mask evidence of spoilage in the product. Effect on microorganisms Low O2 and high CO2 concentrations in packages are effective in limiting the growth of Gram-negative bacteria, molds and aerobic microorganisms, such as Pseudomonas spp. High O2 combined with high CO2 could have bacteriostatic and bactericidal effects through suppression of aerobes by high CO2 and of anaerobes by high O2. CO2 has the ability to penetrate bacterial membranes and affect intracellular pH. Therefore, the lag phase and generation time of spoilage microorganisms are increased, resulting in shelf life extension of refrigerated foods. Since the growth of spoilage microorganisms is suppressed by MAP, the ability of pathogens to grow is potentially increased. Microorganisms that can survive in low-oxygen environments, such as Campylobacter jejuni, Clostridium botulinum, E. coli, Salmonella, Listeria and Aeromonas hydrophila, are of major concern for MA packaged products. Products may appear organoleptically acceptable due to the delayed growth of the spoilage microorganisms but might contain harmful pathogens. This risk can be minimized by use of additional hurdles such as temperature control (maintaining the temperature below 3 °C), lowering water activity (less than 0.92), reducing pH (below 4.5) or addition of preservatives such as nitrite to delay the metabolic activity and growth of pathogens. Packaging materials Flexible films are commonly used for products such as fresh produce, meats, fish and bread, as they provide suitable permeability for gases and water vapor to reach the desired atmosphere. Pre-formed trays are formed and sent to the food packaging facility where they are filled. The package headspace then undergoes modification and sealing. Pre-formed trays are usually more flexible and allow for a broader range of sizes than thermoformed packaging materials, since different tray sizes and colors can be handled without the risk of damaging the package. Thermoformed packaging, however, is received in the food packaging facility as a roll of sheets. Each sheet is subjected to heat and pressure, and is formed at the packaging station. Following the forming, the package is filled with the product, and then sealed. The advantages that thermoformed packaging materials have over pre-formed trays are mainly cost-related: thermoformed packaging uses 30% to 50% less material and is transported as rolls of material. This amounts to a significant reduction in manufacturing and transportation costs.
When selecting packaging films for MAP of fruits and vegetables, the main characteristics to consider are gas permeability, water vapor transmission rate, mechanical properties, transparency, type of package and sealing reliability. Traditionally used packaging films like LDPE (low-density polyethylene), PVC (polyvinyl chloride), EVA (ethylene-vinyl acetate) and OPP (oriented polypropylene) are not permeable enough for highly respiring products like fresh-cut produce, mushrooms and broccoli. As fruits and vegetables are respiring products, there is a need to transmit gases through the film. Films designed with these properties are called permeable films. Other films, called barrier films, are designed to prevent the exchange of gases and are mainly used with non-respiring products like meat and fish. MAP films developed to control the humidity level as well as the gas composition in the sealed package are beneficial for the prolonged storage of fresh fruits, vegetables and herbs that are sensitive to moisture. These films are commonly referred to as modified atmosphere/modified humidity packaging (MA/MH) films. Equipment The main function of form-fill-seal packaging machines is to place the product in a flexible pouch suited to the desired characteristics of the final product. These pouches can either be pre-formed or thermoformed. The food is introduced into the pouch, the composition of the headspace atmosphere is modified, and the package is then heat-sealed. These machines, typically called pillow-wrap machines, form, fill and seal the product horizontally or vertically. Form-fill-seal packaging machines are usually used for large-scale operations. In contrast, chamber machines are used for batch processes. A pre-formed wrap is filled with the product and introduced into a cavity. The cavity is closed, vacuum is then pulled on the chamber, and the modified atmosphere is inserted as desired. Sealing of the package is done through heated sealing bars, and the product is then removed. This batch process is labor-intensive and thus requires a longer period of time; however, it is relatively cheaper than packaging machines which are automated. Additionally, snorkel machines are used to modify the atmosphere within a package after the food has been filled. The product is placed in the packaging material and positioned in the machine without the need for a chamber. A nozzle, which is the snorkel, is then inserted into the packaging material. It pulls a vacuum and then flushes the modified atmosphere into the package. The nozzle is removed and the package is heat-sealed. This method is suitable for bulk and large operations. Products Many products such as red meat, seafood, minimally processed fruits and vegetables, salads, pasta, cheese, bakery goods, poultry, cooked and cured meats, ready meals and dried foods are packaged under MA. Optimal gas mixtures for MA products are summarized in the table "Modified Atmosphere Packaging for different food products and optimal gas mixtures". Grains Modified atmosphere may be used to store grains. Carbon dioxide prevents insects and, depending on concentration, mold and oxidation from damaging the grain. Grain stored in this way can remain edible for approximately five years. One method is placing a block of dry ice in the bottom and filling the can with the grain. Another method is purging the container from the bottom with gaseous carbon dioxide from a cylinder or bulk supply vessel.
Nitrogen gas (N2) at concentrations of 98% or higher is also used effectively to kill insects in the grain through hypoxia. However, carbon dioxide has an advantage in this respect, as it kills organisms through hypercarbia and hypoxia (depending on concentration), but it requires concentrations of roughly 35% or higher. This makes carbon dioxide preferable for fumigation in situations where a hermetic seal cannot be maintained. Air-tight storage of grains (sometimes called hermetic storage) relies on the respiration of grain, insects, and fungi that can modify the enclosed atmosphere sufficiently to control insect pests. This is a method of great antiquity that also has modern equivalents. The success of the method relies on having the correct mix of sealing, grain moisture, and temperature. A patented process uses fuel cells to exhaust and automatically maintain the exhaustion of oxygen in a shipping container, containing, for example, fresh fish. See also Active packaging Cold chain Modified atmosphere/modified humidity packaging Permeation Shelf life Food packaging References Food technology Packaging Industrial gases
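As a back-of-envelope illustration of the dry-ice method for grain storage described above (a rough sketch under idealized assumptions, not a procedure drawn from the article), the ideal gas law gives the mass of dry ice needed to reach a target CO2 fraction in a sealed container:

```python
MOLAR_MASS_CO2 = 44.01   # g/mol
MOLAR_VOLUME = 24.0      # L/mol for an ideal gas near 20 degrees C and 1 atm

def dry_ice_grams(container_volume_l, target_fraction=0.35):
    """Mass of dry ice that sublimates into the target CO2 fraction.

    Ignores leakage, absorption by the grain, and the volume the grain
    itself occupies, so real uses need a generous margin.
    """
    co2_litres = container_volume_l * target_fraction
    return co2_litres / MOLAR_VOLUME * MOLAR_MASS_CO2

print(round(dry_ice_grams(200.0)))  # ~128 g for a 200 L drum at 35% CO2
```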
Modified atmosphere
Chemistry
2,643
166,371
https://en.wikipedia.org/wiki/Baroclinity
In fluid dynamics, the baroclinity (often called baroclinicity) of a stratified fluid is a measure of how misaligned the gradient of pressure is from the gradient of density in a fluid. In meteorology, a baroclinic flow is one in which the density depends on both temperature and pressure (the fully general case). A simpler case, barotropic flow, allows for density dependence only on pressure, so that the curl of the pressure-gradient force vanishes. Baroclinity is proportional to ∇p × ∇ρ, whose magnitude is proportional to the sine of the angle between surfaces of constant pressure and surfaces of constant density. Thus, in a barotropic fluid (which is defined by zero baroclinity), these surfaces are parallel. In Earth's atmosphere, barotropic flow is a better approximation in the tropics, where density surfaces and pressure surfaces are both nearly level, whereas in higher latitudes the flow is more baroclinic. These midlatitude belts of high atmospheric baroclinity are characterized by the frequent formation of synoptic-scale cyclones, although these are not really dependent on the baroclinity term per se: for instance, they are commonly studied on pressure coordinate iso-surfaces where that term has no contribution to vorticity production. Baroclinic instability Baroclinic instability is a fluid dynamical instability of fundamental importance in the atmosphere and in the oceans. In the atmosphere it is the principal mechanism shaping the cyclones and anticyclones that dominate weather in mid-latitudes. In the ocean it generates a field of mesoscale eddies (100 km or smaller) that play various roles in oceanic dynamics and the transport of tracers. Whether a fluid counts as rapidly rotating is determined in this context by the Rossby number, which is a measure of how close the flow is to solid body rotation. More precisely, a flow in solid body rotation has vorticity that is proportional to its angular velocity. The Rossby number is a measure of the departure of the vorticity from that of solid body rotation. The Rossby number must be small for the concept of baroclinic instability to be relevant. When the Rossby number is large, other kinds of instabilities, often referred to as inertial, become more relevant. The simplest example of a stably stratified flow is an incompressible flow with density decreasing with height. In a compressible gas such as the atmosphere, the relevant measure is the vertical gradient of the entropy, which must increase with height for the flow to be stably stratified. The strength of the stratification is measured by asking how large the vertical shear of the horizontal winds has to be in order to destabilize the flow and produce the classic Kelvin–Helmholtz instability. This measure is called the Richardson number. When the Richardson number is large, the stratification is strong enough to prevent this shear instability. Before the classic work of Jule Charney and Eric Eady on baroclinic instability in the late 1940s, most theories trying to explain the structure of mid-latitude eddies took as their starting points the high Rossby number or small Richardson number instabilities familiar to fluid dynamicists at that time. The most important feature of baroclinic instability is that it exists even in the situation of rapid rotation (small Rossby number) and strong stable stratification (large Richardson number) typically observed in the atmosphere. The energy source for baroclinic instability is the potential energy in the environmental flow.
As the instability grows, the center of mass of the fluid is lowered. In growing waves in the atmosphere, cold air moving downwards and equatorwards displaces the warmer air moving polewards and upwards. Baroclinic instability can be investigated in the laboratory using a rotating, fluid-filled annulus. The annulus is heated at the outer wall and cooled at the inner wall, and the resulting fluid flows give rise to baroclinically unstable waves. The term "baroclinic" refers to the mechanism by which vorticity is generated. Vorticity is the curl of the velocity field. In general, the evolution of vorticity can be broken into contributions from advection (as vortex tubes move with the flow), stretching and twisting (as vortex tubes are pulled or twisted by the flow) and baroclinic vorticity generation, which occurs whenever there is a density gradient along surfaces of constant pressure. Baroclinic flows can be contrasted with barotropic flows in which density and pressure surfaces coincide and there is no baroclinic generation of vorticity. The study of the evolution of these baroclinic instabilities as they grow and then decay is a crucial part of developing theories for the fundamental characteristics of midlatitude weather. Baroclinic vector Beginning with the equation of motion for a frictionless fluid (the Euler equations) and taking the curl, one arrives at the equation of motion for the curl of the fluid velocity, that is to say, the vorticity. In a fluid that is not all of the same density, a source term appears in the vorticity equation whenever surfaces of constant density (isopycnic surfaces) and surfaces of constant pressure (isobaric surfaces) are not aligned. The material derivative of the local vorticity is given by: Dω/Dt = (ω · ∇)u − ω(∇ · u) + (1/ρ²)(∇ρ × ∇p), where u is the velocity, ω = ∇ × u is the vorticity, p is the pressure, and ρ is the density. The baroclinic contribution is the vector (1/ρ²)(∇ρ × ∇p). This vector, sometimes called the solenoidal vector, is of interest both in compressible fluids and in incompressible (but inhomogeneous) fluids. Internal gravity waves as well as unstable Rayleigh–Taylor modes can be analyzed from the perspective of the baroclinic vector. It is also of interest in the creation of vorticity by the passage of shocks through inhomogeneous media, such as in the Richtmyer–Meshkov instability. Experienced divers are familiar with the very slow waves that can be excited at a thermocline or a halocline, which are known as internal waves. Similar waves can be generated between a layer of water and a layer of oil. When the interface between these two surfaces is not horizontal and the system is close to hydrostatic equilibrium, the gradient of the pressure is vertical but the gradient of the density is not. Therefore, the baroclinic vector is nonzero, and the sense of the baroclinic vector is to create vorticity to make the interface level out. In the process, the interface overshoots, and the result is an oscillation which is an internal gravity wave. Unlike surface gravity waves, internal gravity waves do not require a sharp interface. For example, in bodies of water, a gradual gradient in temperature or salinity is sufficient to support internal gravity waves driven by the baroclinic vector. References Bibliography External links Fluid dynamics Atmospheric dynamics
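The baroclinic vector reconstructed above is straightforward to evaluate numerically on gridded pressure and density fields. Below is a minimal Python/NumPy sketch on a hypothetical two-dimensional (x, z) slice; the field shapes, grid spacings, and amplitudes are illustrative assumptions, not values from this article.

import numpy as np

# Hypothetical 2-D (x, z) slice of a stratified fluid; spacings in metres.
nx, nz, dx, dz = 200, 100, 1.0e3, 1.0e2
x = np.arange(nx) * dx
z = np.arange(nz) * dz
X, Z = np.meshgrid(x, z, indexing="ij")

# Illustrative fields: hydrostatic-like pressure plus a laterally varying
# density field, so isobars and isopycnals are not parallel.
rho = 1000.0 - 0.01 * Z + 0.5 * np.sin(2.0 * np.pi * X / (nx * dx))
p = 1.0e5 + 9.81 * 1000.0 * (z[-1] - Z)

# Gradients along x (axis 0) and z (axis 1).
drho_dx, drho_dz = np.gradient(rho, dx, dz)
dp_dx, dp_dz = np.gradient(p, dx, dz)

# In this 2-D slice, (1/rho^2)(grad rho x grad p) has a single
# out-of-plane component: (rho_z * p_x - rho_x * p_z) / rho^2.
baroclinic_y = (drho_dz * dp_dx - drho_dx * dp_dz) / rho**2

print("max |baroclinic vector| (s^-2):", np.abs(baroclinic_y).max())

If rho is made to depend on z alone, like p, the computed component vanishes to rounding error, matching the barotropic case defined above.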
Baroclinity
Chemistry,Engineering
1,452
64,750
https://en.wikipedia.org/wiki/Computer%20language
A computer language is a formal language used to communicate with a computer. Types of computer languages include:
Construction language – all forms of communication by which a human can specify an executable problem solution to a computer
Command language – a language used to control the tasks of the computer itself, such as starting programs
Configuration language – a language used to write configuration files
Programming language – a formal language designed to communicate instructions to a machine, particularly a computer
Scripting language – a type of programming language which is typically interpreted at runtime rather than being compiled
Query language – a language used to make queries in databases and information systems
Transformation language – designed to transform some input text in a certain formal language into a modified output text that meets some specific goal
Data exchange language – a language that is domain-independent and can be used for data from any kind of discipline; examples: JSON, XML
Markup language – a grammar for annotating a document in a way that is syntactically distinguishable from the text, such as HTML
Modeling language – an artificial language used to express information or knowledge, often for use in computer system design
Architecture description language – used as a language (or a conceptual model) to describe and represent system architectures
Hardware description language – used to model integrated circuits
Page description language – describes the appearance of a printed page at a higher level than an actual output bitmap
Simulation language – a language used to describe simulations
Specification language – a language used to describe what a system should do
Style sheet language – a computer language that expresses the presentation of structured documents, such as CSS
See also
Serialization
Domain-specific language – a language specialized to a particular application domain
Expression language
General-purpose language – a language that is broadly applicable across application domains and lacks specialized features for a particular domain
Lists of programming languages
Natural language processing – the use of computers to process text or speech in human language
External links
Computer language
Technology
384
14,795,640
https://en.wikipedia.org/wiki/UGT1A4
UDP-glucuronosyltransferase 1-4 is an enzyme that in humans is encoded by the UGT1A4 gene. This gene encodes a UDP-glucuronosyltransferase, an enzyme of the glucuronidation pathway that transforms small lipophilic molecules, such as steroids, bilirubin, hormones, and drugs, into water-soluble, excretable metabolites. This gene is part of a complex locus that encodes several UDP-glucuronosyltransferases. The locus includes thirteen unique alternate first exons followed by four common exons. Four of the alternate first exons are considered pseudogenes. Each of the remaining nine 5′ exons may be spliced to the four common exons, resulting in nine proteins with different N-termini and identical C-termini. Each first exon encodes the substrate binding site, and is regulated by its own promoter. This enzyme has some glucuronidase activity towards bilirubin, although it is more active on amines, steroids, and sapogenins. It is the main enzyme responsible for glucuronidation of the anticonvulsant lamotrigine. References Further reading
UGT1A4
Chemistry
276
10,639,301
https://en.wikipedia.org/wiki/Candle%20snuffer
A candle snuffer, candle extinguisher, or douter is an instrument used to extinguish burning candles, consisting of a small cone at the end of a handle. The use of a snuffer helps to avoid problems associated with blowing hot wax, and it avoids the smoke and odor of a smoldering wick which results from simply blowing a candle out. Extinguishers are still commonly used in homes and churches. Description Candle snuffers date from the 17th to the mid-19th centuries. Scissor-type tools that cut and retain the snuff trimmed from candle wicks are also sometimes called snuffers, though they are technically a separate tool called a candle wick trimmer; the snuff is the burnt, surplus portion of the wick. The snuff consists of partially burned wick and, with access to oxygen, is very flammable, so it had to be isolated to keep it from reigniting once trimmed from the wick. The simplest and most common form of candle wick trimmer consists of a pair of scissors with an attached box to retain the snuff. The snuff would be smashed into the box so it would not reignite. Many complex forms of these trimming snuffers evolved for homes with many candles. Some had concentric trap-doors that would snap shut and isolate the snuff. Others would stow the snuff in a lower cavity in the scissors. Similar devices include the douter and the extinguisher. Historical usage Before the mid-19th century, the term snuffer referred to a scissors-like device with two flat blades and an attached snuffer box. This tool was used to trim the wick of a candle without extinguishing the flame, to maintain efficient burning. A small receptacle catches the trimmed bit of wick. They were rendered obsolete by the invention of self-snuffing wicks, which curl out of the flame when charred. This allows excess wick to burn away, preventing the wick from becoming too long. References Candles Hand tools
Candle snuffer
Engineering
436
71,160,165
https://en.wikipedia.org/wiki/Lawrence%20Schkade
Lawrence L. Schkade (1930–2017) was an American information systems and management science researcher. Schkade was a native of Port Arthur, Texas, who earned his doctorate at Louisiana State University. He taught at the University of Texas at Austin and the University of North Texas before joining the University of Texas at Arlington. At UTA, he was Ashbel Smith Professor of Information Systems and Management Sciences, later held the Jenkins Garrett Professorship in Information Systems and Operations Management, and also served as dean of the College of Business Administration. Schkade was granted emeritus status in October 2004. He died on 25 November 2017, aged 87. References University of Texas at Arlington faculty 2017 deaths American university and college faculty deans Louisiana State University alumni Information systems researchers 1930 births Management scientists People from Port Arthur, Texas Scientists from Texas University of Texas at Austin faculty Business school deans University of North Texas faculty American business theorists
Lawrence Schkade
Technology
187
41,489,324
https://en.wikipedia.org/wiki/Intelligent%20maintenance%20system
An intelligent maintenance system (IMS) is a system that uses collected data from machinery in order to predict and prevent potential failures in them. The occurrence of failures in machinery can be costly and even catastrophic. In order to avoid failures, there needs to be a system which analyzes the behavior of the machine and provides alarms and instructions for preventive maintenance. Analyzing the behavior of the machines has become possible by means of advanced sensors, data collection systems, data storage/transfer capabilities and data analysis tools. These are the same set of tools developed for prognostics. The aggregation of data collection, storage, transformation, analysis and decision making for smart maintenance is called an intelligent maintenance system (IMS). Definition An intelligent maintenance system is a system that uses data analysis and decision support tools to predict and prevent the potential failure of machines. Recent advancements in information technology, computers, and electronics have facilitated the design and implementation of such systems. The key research elements of intelligent maintenance systems consist of:
Transformation of data to information to knowledge and synchronization of the decisions with remote systems
Intelligent, embedded prognostic algorithms for assessing degradation and predicting future performance
Software and hardware platforms to run online models
Embedded product services and life cycle information for closed-loop product designs
E-manufacturing and e-maintenance
With evolving applications of tether-free communication technologies (e.g. the Internet), e-intelligence is having a larger impact on industries. Such impact has become a driving force for companies to shift manufacturing operations from traditional factory integration practices towards an e-factory and e-supply chain philosophy. Such change is transforming companies from local factory automation to global business automation. The goal of e-manufacturing is, from the plant floor assets, to predict deviations in product quality and the possible loss of any equipment. This brings about the predictive maintenance capability of the machines. The major functions and objectives of e-manufacturing are: “(a) provide a transparent, seamless and automated information exchange process to enable an only handle information once (OHIO) environment; (b) improve the use of plant floor assets using a holistic approach combining the tools of predictive maintenance techniques; (c) links entire supply chain management (SCM) operation and asset optimization; and (d) deliver customer services using the latest predictive intelligence methods and tether-free technologies”. The e-Maintenance infrastructure consists of several information sectors:
Control systems and production schedulers
Engineering product data management systems
Enterprise resource planning (ERP) systems
Condition monitoring systems
Maintenance scheduling (CMMS/EAM) systems
Plant asset management (PAM) systems
See also
Big Data
Cyber manufacturing
Cyber-physical system
Decision support systems
Industrial artificial intelligence
Industrial Big Data
Industry 4.0
Internet of Things
Intelligent transformation
Machine to machine
Maintenance, repair, and operations
Predictive maintenance
Preventive maintenance
Prognostics
Smart, connected products
References
Further reading
M. J. Ashby et al., “Intelligent maintenance advisor for turbine engines”, The Journal of the Operational Research Society, vol. 46, no. 7 (July 1995), 831–853.
A. K. S. Jardine et al., “A review on machinery diagnostics and prognostics implementing condition-based maintenance”, Mechanical Systems and Signal Processing 20 (2006), 1483–1510.
R. C. M. Yam et al., “Intelligent Predictive Decision Support System for Condition-Based Maintenance”, Int J Adv Manuf Technol (2001) 17:383–391.
A. Muller et al., “On the concept of e-maintenance: Review and current research”, Reliability Engineering and System Safety 93 (2008), 1165–1187.
A. Bos et al., “SCOPE: An Intelligent Maintenance System for Supporting Crew Operations”, AUTOTESTCON 2004 Proceedings, IEEE, 2004.
Maintenance Prediction Survival analysis
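As a concrete, deliberately simplified illustration of the prognostic idea described in this article, a degradation-assessment algorithm can be as basic as fitting a trend to a machine health indicator and extrapolating it to a failure threshold. A minimal Python sketch follows; the sensor values and the threshold are invented for illustration, and real IMS deployments use far richer degradation models.

import numpy as np

# Hypothetical health indicator (e.g., vibration RMS) sampled once per day.
days = np.arange(10)
health = np.array([0.52, 0.54, 0.53, 0.57, 0.60, 0.63, 0.62, 0.67, 0.70, 0.73])
failure_threshold = 1.0   # assumed level at which maintenance is required

# Fit a linear degradation trend and extrapolate to the threshold.
slope, intercept = np.polyfit(days, health, 1)
remaining_useful_life = (failure_threshold - health[-1]) / slope  # days, if the trend holds
print(f"degradation rate = {slope:.3f}/day, predicted RUL = {remaining_useful_life:.0f} days")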
Intelligent maintenance system
Engineering
793
15,509,033
https://en.wikipedia.org/wiki/Photoacid
Photoacids are molecules that become more acidic upon absorption of light. Either the light causes a photodissociation to produce a strong acid, or the light causes photoassociation (such as a ring-forming reaction) that leads to an increased acidity and dissociation of a proton. There are two main types of molecules that release protons upon illumination: photoacid generators (PAGs) and photoacids (PAHs). PAGs undergo proton photodissociation irreversibly, while PAHs are molecules that undergo proton photodissociation and thermal reassociation. In this latter case, the excited state is strongly acidic, but the dissociation is thermally reversible. Photoacid generators An example due to photodissociation is triphenylsulfonium triflate. This colourless salt consists of a sulfonium cation and the triflate anion. Many related salts are known including those with other noncoordinating anions and those with diverse substituents on the phenyl rings. The triphenylsulfonium salts absorb at a wavelength of 233 nm, which induces a dissociation of one of the three phenyl rings. This dissociated phenyl radical then re-combines with remaining diphenylsulfonium to liberate an H+ ion. The second reaction is irreversible, and therefore the entire process is irreversible, so triphenylsulfonium triflate is a photoacid generator. The ultimate products are thus a neutral organic sulfide and the strong acid triflic acid.
[(C6H5)3S+][CF3SO3−] + hν → [(C6H5)2S+•][CF3SO3−] + C6H5•
[(C6H5)2S+•][CF3SO3−] + C6H5• → (C6H5C6H4)(C6H5)S + CF3SO3H
Applications of these photoacids include photolithography and catalysis of the polymerization of epoxides. Photoacids An example of a photoacid which undergoes excited-state proton transfer without prior photolysis is the fluorescent dye pyranine (8-hydroxy-1,3,6-pyrenetrisulfonate or HPTS). The Förster cycle was proposed by Theodor Förster and combines knowledge of the ground state acid dissociation constant (pKa), absorption, and fluorescence spectra to predict the pKa in the excited state of a photoacid. The name photoacid can be abbreviated PAH, where the H does not stand for a word starting with H, but rather for a hydrogen atom which is lost when the molecule reacts as a Brønsted acid. This use of PAH should not be confused with other meanings of PAH in chemistry and in medicine. References Photochemistry Lithography (microfabrication) Microtechnology Light-sensitive chemicals Acids
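The Förster cycle mentioned above can be evaluated numerically: the drop in pKa on excitation follows from the difference between the 0–0 transition energies of the protonated and deprotonated forms, ΔpKa ≈ (E_HA − E_A−)/(kB · T · ln 10). A minimal Python sketch follows; the wavelengths and the ground-state pKa are illustrative, pyranine-like assumptions, not values from this article.

import math

h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
kB = 1.381e-23     # Boltzmann constant, J/K
T = 298.0          # temperature, K

# Assumed 0-0 transition wavelengths of the acid (ROH) and conjugate
# base (RO-) forms; illustrative values for a pyranine-like dye.
lam_acid = 403e-9
lam_base = 454e-9

# Förster cycle: pKa* = pKa - (E_acid - E_base) / (kB T ln 10)
E_acid = h * c / lam_acid
E_base = h * c / lam_base
delta_pKa = (E_acid - E_base) / (kB * T * math.log(10))

pKa_ground = 7.3                       # assumed ground-state pKa
pKa_excited = pKa_ground - delta_pKa   # excited state is far more acidic
print(f"estimated excited-state pKa* = {pKa_excited:.1f}")

With these illustrative numbers the estimate gives pKa* near 1.5, reproducing the several-unit acidity jump that makes dyes like pyranine useful photoacids.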
Photoacid
Chemistry,Materials_science,Engineering
632
26,952,327
https://en.wikipedia.org/wiki/Release%20management
Release management is the process of managing, planning, scheduling and controlling a software build through different stages and environments; it includes testing and deploying software releases. Relationship with processes Organizations that have adopted agile software development are seeing much higher quantities of releases. With the increasing popularity of agile development, a new approach to software releases known as continuous delivery is starting to influence how software transitions from development to a release. One goal of continuous delivery and DevOps is to release more reliable applications faster and more frequently. The movement of the application from a "build" through different environments to production as a "release" is part of the continuous delivery pipeline. Release managers are beginning to utilize tools such as application release automation and continuous integration tools to help advance the process of continuous delivery and incorporate a culture of DevOps by automating tasks so that they can be done more quickly and reliably, and are repeatable. More software releases have led to increased reliance on release management and automation tools to execute these complex application release processes. Relationship with ITIL/ITSM In organizations that manage IT operations using the IT service management paradigm, specifically the ITIL framework, release management will be guided by ITIL concepts and principles. There are several formal ITIL processes that are related to release management, primarily the release and deployment management process, which "aims to plan, schedule and control the movement of releases to test and live environments", and the change enablement process. In ITIL organizations, releases tend to be less frequent than in an agile development environment. Release processes are managed by IT operations teams using IT service management ticketing systems, with less focus on automation. References External links "Current Trends in Release Engineering 2016" - Academic Course by Software Construction Research Group, RWTH Aachen, Germany Release and Deployment Management in the ITIL Framework Software project management Version control Software release
Release management
Engineering
378
5,076,347
https://en.wikipedia.org/wiki/Energy%20and%20Environmental%20Security%20Initiative
Established in 2003, the Energy and Environmental Security Initiative (EESI) is an interdisciplinary Research & Policy Institute located at the University of Colorado Law School. The fundamental mission of EESI is to serve as an interdisciplinary research and policy center concerning the development and crafting of State policies, U.S. energy policies, and global responses to the world's energy crisis; and to facilitate the attainment of a global sustainable energy future through the innovative use of laws, policies and technology solutions. In pursuit of this mission, EESI's primary operational function is that of an enabling environment for teaching, research and policy analysis vis-à-vis the impact of laws and policies on the scientific, technological, sociopolitical, commercial, and environmental dimensions of sustainable energy. Select energy and climate database projects International Sustainable Energy Assessment (ISEA) ISEA is a comprehensive global database of international treaties dealing with, or relevant to, the following energy categories, inter alia: conventional sources of energy such as oil, natural gas, and coal; renewable energy, such as wind, biomass, solar, geothermal, and hydro; energy efficiency and energy conservation; nuclear power; carbon capture and sequestration; and transportation. International Project on Energy Commitments and Compliance (IPECC) IPECC is designed to improve and enhance the efforts of governments, non-governmental actors—such as corporations, non-governmental organizations, trade unions, churches—and key decision-makers throughout the world in two ways: First, by evaluating the extent to which their existing commitments and pledges are actually working; and second, by facilitating new and better clean and affordable energy solutions. IPECC will provide the information needed to improve the effectiveness of existing commitments and encourage new commitments where necessary. It is designed to track and monitor the implementation of sustainable energy commitments undertaken by governments, corporations and other entities, and to provide detailed information on the extent to which they are being complied with. In doing so it will serve as a watchdog over what is and should be happening with respect to these instruments and the commitments they embody. The EESI Climate Action Database (CAD) CAD contains U.S. policy proposals directed at climate change. Primary document types contained in CAD include:
Proposed Federal Legislation. Federal legislative proposals (i.e., bills actually introduced) for climate stabilization and related energy security and national security actions to be undertaken by the U.S. President, executive administrative entities, or the U.S. Congress.
Proposals. Non-legislative proposals for climate stabilization and related energy security and national security actions to be undertaken by the U.S. President, executive administrative entities, or the U.S. Congress.
Impact Analyses. Documents that seek to ascertain the environmental, fiscal and/or carbon impacts of proposed actions.
California and the Regional Greenhouse Gas Initiative (RGGI). With respect to California, CAD contains select documents potentially applicable to the federal context. Additionally, CAD contains documents related to RGGI that may be applicable to the federal context. 
Excluded from CAD proposals are documents that: (1) are older than two years old (defined as generated prior to January 1, 2005), with certain exceptions; (2) are directed at international activities or policies, unless such activities or policies are to be implemented by the President or executive administrative entities; (3) are directed at a U.S. state other than California, or a regional collaboration other than RGGI; (4) deal with management of the federal transportation fleet or federal buildings. CAD is not, at present, a living database—meaning that it is not intended to offer an up-to-the-moment repository for U.S. federal climate proposals. The material contained in CAD is generally current through February 2007. Location EESI is located within the Wolf Law Building, which houses the University of Colorado Law School. Situated on the southern end of the University of Colorado at Boulder campus, the Wolf Law Building was completed in August 2006 and dedicated on September 8, 2006. Approximately 60 percent of the Wolf Law Building was financed by University of Colorado at Boulder students. Under the Leadership in Energy and Environmental Design (LEED) Green Building Rating System, the Wolf Law Building has received Gold certification. It is the first publicly supported law school in the country to obtain Gold certification. As the parent organization of EESI, the University of Colorado Law School is well known for its strength in the area of environmental law. The U.S. News & World Report's 2008 edition of America's Best Graduate Schools (reporting on 2005–06 academic-year data) ranks the law school's environmental law program as 4th in the United States. Parent organizations University of Colorado Law School University of Colorado at Boulder See also Electrical energy efficiency on United States farms List of energy topics Mitigation of global warming Renewable energy development Sustainable energy References External links EESI Home Page—Official website International Sustainable Energy Assessment—Database of International Energy Treaties EESI Climate Action Database—Database of Major U.S. Climate Change Policy Proposals List of Select EESI Projects University of Colorado Law School University of Colorado at Boulder University of Colorado 2003 establishments in Colorado Energy in the United States Environment of the United States Climate change in the United States Sustainability organizations Energy policy
Energy and Environmental Security Initiative
Environmental_science
1,068
74,933,054
https://en.wikipedia.org/wiki/Pi2%20Doradus
Pi2 Doradus, Latinized from π2 Doradus, is a solitary star located in the southern constellation Dorado. It is faintly visible to the naked eye as a yellow-hued point of light with an apparent magnitude of 5.38. The object is located relatively close at a distance of 277 light-years based on Gaia DR3 parallax measurements, but it is receding with a positive heliocentric radial velocity. At its current distance, Pi2 Doradus' brightness is diminished by 0.27 magnitudes due to interstellar extinction and it has an absolute magnitude of +0.78. Pi2 Doradus has a stellar classification of G8 III, indicating that it is an evolved G-type giant star. It is a red clump star that is currently on the horizontal branch—fusing helium at its stellar core. It has 1.8 times the mass of the Sun but, at the age of 1.61 billion years, it has expanded to 9.84 times the radius of the Sun. It radiates 51.1 times the luminosity of the Sun from its photosphere. Pi2 Doradus is metal deficient with an iron abundance of [Fe/H] = −0.26, or roughly 55% of the Sun's. Like many giant stars, Pi2 Doradus spins slowly, with a low projected rotational velocity. References G-type giants Horizontal-branch stars Dorado Doradus, Pi2 Doradus, 42 CD-69 00392 046116 030565 2327 167126852
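The effective temperature figure did not survive in the text above, but it can be estimated from the quoted luminosity and radius by inverting the Stefan–Boltzmann law, L = 4πR²σT⁴. A short Python sketch follows; the solar reference values are standard constants, and the result is an estimate, not the article's original figure.

import math

sigma = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_sun = 3.828e26      # solar luminosity, W
R_sun = 6.957e8       # solar radius, m

L = 51.1 * L_sun      # luminosity quoted in the article
R = 9.84 * R_sun      # radius quoted in the article

# Invert L = 4*pi*R^2 * sigma * T^4 for the effective temperature.
T_eff = (L / (4 * math.pi * R**2 * sigma)) ** 0.25
print(f"estimated T_eff = {T_eff:.0f} K")   # about 4,900 K, typical of a G8 III giant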
Pi2 Doradus
Astronomy
344
1,376,350
https://en.wikipedia.org/wiki/Short%20interest%20ratio
The short interest ratio (also called days-to-cover ratio) represents the number of days it takes short sellers on average to cover their positions, that is, repurchase all of the borrowed shares. It is calculated by dividing the number of shares sold short by the average daily trading volume, generally over the last 30 trading days. The ratio is used by fundamental and technical traders to identify trends. The days-to-cover ratio can also be calculated for an entire exchange to determine the sentiment of the market as a whole. If an exchange has a high days-to-cover ratio of around five or greater, this can be taken as a bearish signal, and vice versa. The short interest ratio is not to be confused with the short interest, a similar concept whereby the number of shares sold short is divided by the number of outstanding shares. The latter concept does not take liquidity into account. Short squeeze (a.k.a. bear squeeze) A short squeeze can occur if the price of a stock with a high short interest begins to have increased demand and a strong upward trend. To cut their losses, short sellers may add to demand by buying shares to cover short positions, causing the share price to escalate further temporarily. In markets with an active options market, short sellers can hedge against the risk of a short squeeze by buying call options. Conversely, short squeezes are more likely to occur in stocks with small market capitalization and a small public float. References Short selling Fundamental analysis Financial ratios Corporate finance
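As a worked example of the definition above, the ratio is simply shares sold short divided by average daily trading volume. A minimal Python sketch follows; the share and volume figures are invented for illustration.

# Hypothetical figures for illustration only.
shares_short = 24_000_000          # total shares currently sold short
daily_volumes = [3_100_000, 2_800_000, 3_400_000, 2_900_000, 3_000_000]  # recent sessions

avg_daily_volume = sum(daily_volumes) / len(daily_volumes)
days_to_cover = shares_short / avg_daily_volume     # the short interest ratio
print(f"days to cover = {days_to_cover:.1f}")       # about 7.9 here, a historically high reading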
Short interest ratio
Mathematics
307
9,453,048
https://en.wikipedia.org/wiki/1%2C8-Octanediol
1,8-Octanediol, also known as octamethylene glycol, is a diol with the molecular formula HO(CH2)8OH. 1,8-Octanediol is a white solid. It is produced by hydrogenation of esters of suberic acid. 1,8-Octanediol is used as a monomer in the synthesis of some polymers such as polyesters and polyurethanes. As with other fatty alcohols, octane-1,8-diol is used in cosmetics as an emollient and humectant. See also Ethylene glycol 1,2-Octanediol References Monomers Alkanediols
1,8-Octanediol
Chemistry,Materials_science
148
14,410,628
https://en.wikipedia.org/wiki/Galanin%20receptor%201
Galanin receptor 1 (GAL1) is a G-protein coupled receptor encoded by the GALR1 gene. Function The neuropeptide galanin elicits a range of biological effects by interaction with specific G-protein-coupled receptors. Galanin receptors are seven-transmembrane proteins shown to activate a variety of intracellular second-messenger pathways. GALR1 inhibits adenylyl cyclase via a G protein of the Gi/Go family. GALR1 is widely expressed in the brain and spinal cord, as well as in peripheral sites such as the small intestine and heart. See also Galanin receptor References Further reading External links G protein-coupled receptors
Galanin receptor 1
Chemistry
148
26,606,485
https://en.wikipedia.org/wiki/IC%204651
IC 4651 is an open cluster of stars located about 2,900 light years distant in the constellation Ara. It was first catalogued by John Louis Emil Dreyer in his 1895 version of the Index Catalogue. This is an intermediate-age cluster that is over a billion years old. Compared to the Sun, the members of this cluster have a higher abundance of the chemical elements other than hydrogen and helium. The combined mass of the active stars in this cluster is about 630 times the mass of the Sun. The currently known active stars in this cluster form only about 7% of the cluster's original mass. Of the remainder, about 35% of the mass consists of stars that have evolved into white dwarfs or other stellar remnants. The rest of the lost mass consists of stars that have migrated away from the main body of the cluster or have been lost completely. The star IC 4651 9122 displays radial velocity variations suggesting the presence of a planetary companion, though stellar activity cannot be completely ruled out. References External links Open clusters 4651 Ara (constellation)
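The mass budget quoted above can be checked with simple arithmetic: if the roughly 630 solar masses of active stars are about 7% of the original cluster mass, the cluster began with on the order of 9,000 solar masses. A minimal Python sketch using only the figures from the text:

# Mass budget implied by the figures quoted above (solar masses).
active_mass = 630.0        # combined mass of currently active stars
active_fraction = 0.07     # these are ~7% of the original cluster mass

original_mass = active_mass / active_fraction              # ~9,000 M_sun
remnant_mass = 0.35 * original_mass                        # ~35% now white dwarfs etc.
escaped_mass = original_mass - active_mass - remnant_mass  # stars lost from the cluster

print(f"original ~{original_mass:.0f}, remnants ~{remnant_mass:.0f}, escaped ~{escaped_mass:.0f} M_sun")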
IC 4651
Astronomy
209
2,324,639
https://en.wikipedia.org/wiki/Syngenta
Syngenta Global AG is a global agricultural technology company headquartered in Basel, Switzerland. It primarily covers crop protection and seeds for farmers. Syngenta is part of the Syngenta Group, entirely owned by Sinochem, a Chinese state-owned enterprise. Syngenta was founded in 2000 by the merger of the agrichemical businesses of Novartis and AstraZeneca, and acquired by China National Chemical Corporation (ChemChina) in 2017. In 2020, the Syngenta Group was formed, bringing together Syngenta Crop Protection and Syngenta Seeds, Adama, and the agricultural business of Sinochem, now called Syngenta Group China, under a single entity. Syngenta's primary products include pesticides, selective herbicides, non-selective herbicides, fungicides, insecticides, as well as corn, soya, and biofuel. The company has been controversial, primarily due to its main business of selling toxic chemicals and the environmental impact of those chemicals, but also due to its investment in lobbying. In 2012, the company was nominated for the Public Eye Award, which denounces companies with questionable human rights practices. History Based in Basel, Switzerland, Syngenta was formed on 13 November 2000 by the merger of Novartis Agribusiness and AstraZeneca Agrochemicals. In 2004, Syngenta Seeds purchased Garst, the North American corn and soybean business of Advanta, as well as Golden Harvest Seeds. Syngenta's main competitors have included Monsanto Company, BASF, Dow AgroSciences, Bayer CropScience and DuPont Pioneer. In 2014, Monsanto sought to acquire Syngenta for a reported $40 billion, but Syngenta rejected the offer. Since April 2015, Monsanto and Syngenta had been working with their investment banks Morgan Stanley and Goldman Sachs respectively on a deal. The U.S. Treasury sought to stop the deal because it involved a tax inversion. Syngenta's Board of Directors rejected a further improved offer by Monsanto during August 2015, and Monsanto withdrew from the negotiations on 26 August. In February 2016, ChemChina, a Chinese state-owned enterprise, offered to purchase Syngenta for $43 billion (480 Swiss francs per share), a deal which the company "unanimously recommended to shareholders". In April 2017, the Federal Trade Commission, the Committee on Foreign Investment in the United States, and the European Commissioner for Competition approved of the acquisition. This was the largest takeover by a Chinese company to date, and it drew criticism. To secure approval, ChemChina agreed to divest from pesticide production of paraquat, abamectin, and chlorothalonil. The transaction closed on 26 June 2017. In the following years, several acquisitions were made to expand the business. In November 2017, Syngenta agreed to purchase Nidera from Cofco International. In March 2018, Syngenta announced plans to acquire Strider, a Brazilian agtech company. In July, Syngenta acquired Floranova, a flower and vegetable seeds breeder based in the UK. In September 2019, the company acquired all the assets of The Cropio Group, an agri-technology company. 
In June 2020, ChemChina transferred its entire agricultural business to the Syngenta Group, which now also includes Adama and the agricultural activities of Sinochem in addition to Syngenta. The Syngenta Group is a Chinese company with its management headquarters in Basel, Switzerland. At the end of 2020, Syngenta Group announced the acquisition of Valagro, a manufacturer of biological crop protection products headquartered in Atessa, Italy. The company continues to operate as an independent brand. Products and services Syngenta has eight primary product lines which it develops, markets and sells worldwide. Its five product lines for pesticides are selective herbicides, non-selective herbicides, fungicides, insecticides and seed care. Three product lines for seed products include corn and soya, other field crops and vegetables. In 2014, sales from crop protection products accounted for US$11.381 billion, i.e. 75% of total sales. Field crop seeds include both hybrid seeds and genetically engineered seeds, some of which enter the food chain and become part of genetically modified food. According to Syngenta, in the US their "proprietary triple stack corn seeds expanded to represent around 25 percent of units sold." In 2010, the US EPA approved insecticidal trait stacks including Syngenta's AGRISURE VIPTERA™ gene, which offers resistance to certain corn pests. Syngenta cross-licenses its proprietary genes with Dow AgroSciences and thus is able to include Dow's Herculex I and Herculex RW insect resistance traits in its seeds. It sells a VMAX soybean that is resistant to glyphosate herbicide. In 2021 the company partnered with Hong Kong-based Insilico Medicine to develop "sustainable weedkillers" by using AI deep-learning tools. Syngenta brands include Actara (Thiamethoxam), Agrisure (corn with Viptera trait), Alto (Cyproconazole), Amistar (azoxystrobin), Avicta, Axial, Bicep II, Bravo, Callisto, Celest, Cruiser (TMX, Thiamethoxam), Dividend, Dual, Durivo, Elatus, Fusilade, Force, Golden Harvest, Gramoxone, Karate, Northrup-King (NK), Proclaim, Revus, Ridomil, Rogers, Score, Seguris, S&G, Tilt, Topik, Touchdown, Vertimec and Vibrance. Former products Syngenta's predecessor, Ciba-Geigy, introduced the insecticide Galecron (chlordimeform) in 1966, and it was removed from the market in 1988. In 1976, Ciba-Geigy told regulatory authorities that it was temporarily withdrawing chlordimeform because ongoing long-term toxicology studies—particularly studies to determine if long-term exposure could cause cancer—showed that it was causing cancer, and that it had already started to monitor its workers' exposure and had found chlordimeform and its metabolites in the urine of its workers. Ciba-Geigy then applied for, and was granted, permission to market Galecron at lower doses for use only on cotton. However, as further long-term monitoring data was obtained, regulators banned chlordimeform in 1988. In 1995, in a class action in the US, Ciba-Geigy agreed to cover costs for employee health monitoring and treatment. In 2005, Syngenta reported that employee health monitoring was continuing at the company's Monthey, Switzerland site. Biofuels Like many agriculture companies, Syngenta also works in the biofuel space. 
In 2011, it announced the corn trait Enogen to reduce the consumption of water and energy versus conventional corn. In 2007, Queensland University in Australia contracted with Syngenta to research different inputs for biofuels as a renewable energy source. Other activities Lobbying Syngenta is in the transparency register of the European Union as a registered lobbyist. For 2017, it declared lobbying expenditure of €1.5 to €1.75 million at the European institutions. Syngenta's contributions to U.S. federal candidates, parties, and outside groups totaled $140,822 during the 2018 election cycle, ranking it 20th on the list of companies in its sector. Its lobbying expenditures in the U.S. during 2018 were $770,000, ranking it 7th in its sector. Litigation In 2001, the United States Patent and Trademark Office ruled in favor of Syngenta, which had filed a suit against Bayer for patent infringement on a class of neonicotinoid insecticides. The following year, Syngenta filed suits against Monsanto and other companies claiming infringement of its U.S. biotechnology patents covering genetically modified corn and cotton. In 2004, it again filed a suit against Monsanto, claiming antitrust violations related to the U.S. biotech corn seed market, and Monsanto countersued. Monsanto and Syngenta settled all litigation in 2008. Syngenta was the defendant in a class action lawsuit by the city of Greenville, Illinois, concerning the adverse effects of atrazine on human water supplies. The suit was settled for $105 million in May 2012. A similar case involving six states has been in federal court since 2010. In the U.S., Syngenta is facing lawsuits from farmers and shipping companies regarding Viptera genetically modified corn. The plaintiffs in nearly 30 states contend that Syngenta's introduction of Viptera drove down U.S. grain market prices, leading to financial harm, and that Syngenta acted irresponsibly by doing too little to enable shipping companies to export the grain to approved ports. Before Viptera's 2010 introduction, Syngenta had secured all U.S. and NCGA-recommended export approvals, but none from China. China had imported little to no U.S. grain prior to 2010 and was not then considered a major trading partner; this changed in 2010, when it dramatically increased its U.S. grain imports. For three years, China imported U.S. Viptera grain without formal approval. In November 2013, Chinese officials destroyed a U.S. grain shipment containing Viptera grain and began rejecting all U.S. shipments with the GM grain, but continued to accept it from all countries other than the U.S. The same year, U.S. corn market prices dropped $4 per bushel, causing over $2.9 billion in losses, with just over half of that loss occurring prior to China's November rejection. China later approved the GM corn in 2014, but U.S. corn grain market prices have not rebounded since. Syngenta lost the first lawsuit to reach trial in Kansas on 23 June 2017, and was ordered to pay the farmers $217 million. However, Syngenta stated it would appeal the verdict. In June 2021, Syngenta paid $187.5 million to settle an unknown number of cases in Illinois and California that implicated the company's paraquat herbicide as a cause of Parkinson's disease. In 2022, the Federal Trade Commission announced that it was initiating litigation against Syngenta and Corteva, challenging certain US-based discount programs extended to their respective customers. In January 2024, a federal judge ruled that both Corteva and Syngenta must face the FTC lawsuit. 
In 2024, more than 5,000 Americans had active lawsuits against Syngenta, alleging ties between use of the paraquat herbicide product, Gramoxone, and the onset of Parkinson's disease. Syngenta denies any causal link between its product and Parkinson's disease. Controversies In 2007, Syngenta came under scrutiny by the U.S. Securities and Exchange Commission. This was due to third-party sales of products in countries such as Iran, Cuba, North Korea, Sudan, and Syria. In the past, Syngenta's crop protection products have also been the subject of repeated criticism. The company was accused of including the sale of highly toxic pesticides in its business model. In 2012, the company was therefore nominated for the "Public Eye Award", which denounces companies with questionable human rights practices. In 2023, the Arkansas attorney general Tim Griffin ordered Syngenta to sell land in the state under a prohibition against land sales to entities with "a connection to a country subject to the federal International Traffic in Arms Regulations." Brazil On 21 October 2007, a Brazilian peasant organization, the Landless Workers' Movement, led a group of landless farmers in an invasion of one of the company's seed research farms, in protest against genetically-engineered ("genetically modified") vegetables and in hopes of obtaining land for landless families to cultivate. After the invasion had begun, a team from NF Security arrived in a minibus and a fight with gunfire ensued. A trespasser and a security guard were killed, and some trespassers and other security guards were wounded. The Brazilian police investigation, which concluded in November 2007, blamed the confrontation and death of the trespasser on nine employees and the owner of NF Security; the leader of MST was blamed for trespassing. The inquiry found that the invader was fatally shot in the abdomen and in the leg. The security guard was shot in the head. Eight others were injured, five of them invaders. The Civil Court of Cascavel granted an order for the repossession of the site on 20 December 2007, and on 12 June 2008 the remaining MST members left the Santa Teresa site they had been occupying. On 14 October 2008, Syngenta donated the 123-hectare station to the Agronomy Institute of Paraná (IAPAR) for research into biodiversity, recovery of degraded areas and agriculture production systems, as well as environmental education programs. In November 2015, Judge Pedro Ivo Moreiro, of the 1st Civil Court of Cascavel, ruled that Syngenta must pay compensation to the family of Valmir Mota de Oliveira ("Keno"), who was killed in the attack, and to Isabel Nascimento dos Santos, who was injured. In his ruling, the judge stated that "to refer to what happened as a confrontation is to close one's eyes to reality, since […] there is no doubt that, in truth, it was a massacre disguised as repossession of property". The version of events put forward by Syngenta was rejected by the Court. In May 2010, Syngenta was condemned by the IV Permanent People's Tribunal for human rights violations in Brazil. Tyrone Hayes There has been a long-running conflict between Syngenta and University of California at Berkeley biologist Tyrone Hayes. In 2010, Syngenta forwarded an ethics complaint to the University of California Berkeley, complaining that Hayes had been sending sexually explicit and harassing e-mails to Syngenta scientists. 
Legal counsel from the university responded that Hayes had acknowledged sending letters having "unprofessional and offensive" content, and that he had agreed not to use similar language in future communications. According to an article in the 10 February 2014 issue of The New Yorker, Syngenta's public-relations team took steps to discredit Hayes, whose research purportedly suggested that the Syngenta-produced chemical atrazine was responsible for abnormal development of reproductive organs in frogs. The article states that the company paid third-party critics to write articles discrediting Hayes's work, planned to have his wife investigated, and planted hostile audience members at scientific talks given by Hayes. During a 21 February 2014 interview conducted on Democracy Now, Hayes reiterated the claims. After the interview aired, Syngenta denied targeting Hayes or making any threats, calling those statements "uncorroborated and intentionally damaging" and demanding a retraction and public apology from Hayes and Democracy Now. See also References External links Canadian website: https://www.syngenta.ca 2000 establishments in Switzerland 2017 mergers and acquisitions Agriculture companies of Switzerland AstraZeneca Biotechnology companies established in 2000 Biotechnology companies of Switzerland ChemChina Chemical companies of Switzerland Companies based in Basel Companies formerly listed on the New York Stock Exchange Companies formerly listed on the SIX Swiss Exchange Intensive farming Multinational companies headquartered in Switzerland Novartis Swiss brands
Syngenta
Chemistry
3,417
66,394
https://en.wikipedia.org/wiki/Island%20of%20stability
In nuclear physics, the island of stability is a predicted set of isotopes of superheavy elements that may have considerably longer half-lives than known isotopes of these elements. It is predicted to appear as an "island" in the chart of nuclides, separated from known stable and long-lived primordial radionuclides. Its theoretical existence is attributed to stabilizing effects of predicted "magic numbers" of protons and neutrons in the superheavy mass region. Several predictions have been made regarding the exact location of the island of stability, though it is generally thought to center near copernicium and flerovium isotopes in the vicinity of the predicted closed neutron shell at N = 184. These models strongly suggest that the closed shell will confer further stability towards fission and alpha decay. While these effects are expected to be greatest near atomic number Z = 114 (flerovium) and N = 184, the region of increased stability is expected to encompass several neighboring elements, and there may also be additional islands of stability around heavier nuclei that are doubly magic (having magic numbers of both protons and neutrons). Estimates of the stability of the nuclides within the island are usually around a half-life of minutes or days; some optimists propose half-lives on the order of millions of years. Although the nuclear shell model predicting magic numbers has existed since the 1940s, the existence of long-lived superheavy nuclides has not been definitively demonstrated. Like the rest of the superheavy elements, the nuclides within the island of stability have never been found in nature; thus, they must be created artificially in a nuclear reaction to be studied. Scientists have not found a way to carry out such a reaction, for it is likely that new types of reactions will be needed to populate nuclei near the center of the island. Nevertheless, the successful synthesis of superheavy elements up to Z = 118 (oganesson) with up to 177 neutrons demonstrates a slight stabilizing effect around elements 110 to 114 that may continue in heavier isotopes, consistent with the existence of the island of stability. Introduction Nuclide stability The composition of a nuclide (atomic nucleus) is defined by the number of protons Z and the number of neutrons N, which sum to mass number A. Proton number Z, also named the atomic number, determines the position of an element in the periodic table. The approximately 3300 known nuclides are commonly represented in a chart with Z and N for its axes and the half-life for radioactive decay indicated for each unstable nuclide. To date, 251 nuclides have been observed to be stable (having never been observed to decay); generally, as the number of protons increases, stable nuclei have a higher neutron–proton ratio (more neutrons per proton). The last element in the periodic table that has a stable isotope is lead (Z = 82), with stability (i.e., half-lives of the longest-lived isotopes) generally decreasing in heavier elements, especially beyond curium (Z = 96). The half-lives of nuclei also decrease when there is a lopsided neutron–proton ratio, such that the resulting nuclei have too few or too many neutrons to be stable. The stability of a nucleus is determined by its binding energy, higher binding energy conferring greater stability. The binding energy per nucleon increases with mass number to a broad plateau around A = 60, then declines. 
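The rise to a plateau near A = 60 and the subsequent decline of the binding energy per nucleon is captured by the semi-empirical mass formula of the liquid drop model, the smooth baseline on top of which the shell effects discussed below act. A minimal Python sketch follows; the coefficients, in MeV, are common textbook fit values, not numbers taken from this article, and the pairing term is omitted for brevity.

def binding_energy(Z, N):
    """Liquid drop (semi-empirical mass formula) binding energy in MeV.
    Coefficients are typical textbook fit values; pairing term omitted."""
    A = Z + N
    a_v, a_s, a_c, a_a = 15.75, 17.8, 0.711, 23.7
    return (a_v * A                               # volume term
            - a_s * A ** (2.0 / 3.0)              # surface term
            - a_c * Z * (Z - 1) / A ** (1.0 / 3.0)  # Coulomb repulsion
            - a_a * (N - Z) ** 2 / A)             # symmetry term

for Z, N, label in [(26, 30, "56Fe"), (50, 82, "132Sn"),
                    (82, 126, "208Pb"), (114, 184, "298Fl")]:
    A = Z + N
    print(f"{label}: B/A = {binding_energy(Z, N) / A:.2f} MeV per nucleon")

The output falls from roughly 8.8 MeV per nucleon near iron to just over 7 MeV per nucleon for a hypothetical 298Fl, illustrating why heavy nuclei can lower their total energy by splitting.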
If a nucleus can be split into two parts that have a lower total energy (a consequence of the mass defect resulting from greater binding energy), it is unstable. The nucleus can hold together for a finite time because there is a potential barrier opposing the split, but this barrier can be crossed by quantum tunneling. The lower the barrier and the masses of the fragments, the greater the probability per unit time of a split. Protons in a nucleus are bound together by the strong force, which counterbalances the Coulomb repulsion between positively charged protons. In heavier nuclei, larger numbers of uncharged neutrons are needed to reduce repulsion and confer additional stability. Even so, as physicists started to synthesize elements that are not found in nature, they found the stability decreased as the nuclei became heavier. Thus, they speculated that the periodic table might come to an end. The discoverers of plutonium (element 94) considered naming it "ultimium", thinking it was the last. Following the discoveries of heavier elements, of which some decayed in microseconds, it then seemed that instability with respect to spontaneous fission would limit the existence of heavier elements. In 1939, an upper limit of potential element synthesis was estimated around element 104, and following the first discoveries of transactinide elements in the early 1960s, this upper limit prediction was extended to element 108. Magic numbers As early as 1914, the possible existence of superheavy elements with atomic numbers well beyond that of uranium—then the heaviest known element—was suggested, when German physicist Richard Swinne proposed that superheavy elements around Z = 108 were a source of radiation in cosmic rays. Although he did not make any definitive observations, he hypothesized in 1931 that transuranium elements around Z = 100 or Z = 108 may be relatively long-lived and possibly exist in nature. In 1955, American physicist John Archibald Wheeler also proposed the existence of these elements; he is credited with the first usage of the term "superheavy element" in a 1958 paper published with Frederick Werner. This idea did not attract wide interest until a decade later, after improvements in the nuclear shell model. In this model, the atomic nucleus is built up in "shells", analogous to electron shells in atoms. Independently of each other, neutrons and protons have energy levels that are normally close together, but after a given shell is filled, it takes substantially more energy to start filling the next. Thus, the binding energy per nucleon reaches a local maximum and nuclei with filled shells are more stable than those without. This theory of a nuclear shell model originates in the 1930s, but it was not until 1949 that German physicists Maria Goeppert Mayer and Johannes Hans Daniel Jensen et al. independently devised the correct formulation. The numbers of nucleons for which shells are filled are called magic numbers. Magic numbers of 2, 8, 20, 28, 50, 82 and 126 have been observed for neutrons, and the next number is predicted to be 184. Protons share the first six of these magic numbers, and 126 has been predicted as a magic proton number since the 1940s. Nuclides with a magic number of each—such as 16O (Z = 8, N = 8), 132Sn (Z = 50, N = 82), and 208Pb (Z = 82, N = 126)—are referred to as "doubly magic" and are more stable than nearby nuclides as a result of greater binding energies. 
In the late 1960s, more sophisticated shell models were formulated by American physicist William Myers and Polish physicist Władysław Świątecki, and independently by German physicist Heiner Meldner (1939–2019). With these models, taking into account Coulomb repulsion, Meldner predicted that the next proton magic number may be 114 instead of 126. Myers and Świątecki appear to have coined the term "island of stability", and American chemist Glenn Seaborg, later a discoverer of many of the superheavy elements, quickly adopted the term and promoted it. Myers and Świątecki also proposed that some superheavy nuclei would be longer-lived as a consequence of higher fission barriers. Further improvements in the nuclear shell model by Soviet physicist Vilen Strutinsky led to the emergence of the macroscopic–microscopic method, a nuclear mass model that takes into consideration both smooth trends characteristic of the liquid drop model and local fluctuations such as shell effects. This approach enabled Swedish physicist Sven Nilsson et al., as well as other groups, to make the first detailed calculations of the stability of nuclei within the island. With the emergence of this model, Strutinsky, Nilsson, and other groups argued for the existence of the doubly magic nuclide 298Fl (Z = 114, N = 184), rather than 310Ubh (Z = 126, N = 184) which was predicted to be doubly magic as early as 1957. Subsequently, estimates of the proton magic number have ranged from 114 to 126, and there is still no consensus. Discoveries Interest in a possible island of stability grew throughout the 1960s, as some calculations suggested that it might contain nuclides with half-lives of billions of years. They were also predicted to be especially stable against spontaneous fission in spite of their high atomic mass. It was thought that if such elements exist and are sufficiently long-lived, there may be several novel applications as a consequence of their nuclear and chemical properties. These include use in particle accelerators as neutron sources, in nuclear weapons as a consequence of their predicted low critical masses and high number of neutrons emitted per fission, and as nuclear fuel to power space missions. These speculations led many researchers to conduct searches for superheavy elements in the 1960s and 1970s, both in nature and through nucleosynthesis in particle accelerators. During the 1970s, many searches for long-lived superheavy nuclei were conducted. Experiments aimed at synthesizing elements ranging in atomic number from 110 to 127 were conducted at laboratories around the world. These elements were sought in fusion-evaporation reactions, in which a heavy target made of one nuclide is irradiated by accelerated ions of another in a cyclotron, and new nuclides are produced after these nuclei fuse and the resulting excited system releases energy by evaporating several particles (usually protons, neutrons, or alpha particles). These reactions are divided into "cold" and "hot" fusion, which respectively create systems with lower and higher excitation energies; this affects the yield of the reaction. For example, the reaction between 248Cm and 40Ar was expected to yield isotopes of element 114, and that between 232Th and 84Kr was expected to yield isotopes of element 126. 
None of these attempts were successful, indicating that such experiments may have been insufficiently sensitive if reaction cross sections were low—resulting in lower yields—or that any nuclei reachable via such fusion-evaporation reactions might be too short-lived for detection. Subsequent successful experiments reveal that half-lives and cross sections indeed decrease with increasing atomic number, resulting in the synthesis of only a few short-lived atoms of the heaviest elements in each experiment; to date, the highest reported cross section for a superheavy nuclide near the island of stability is for 288Mc in the reaction between 243Am and 48Ca. Similar searches in nature were also unsuccessful, suggesting that if superheavy elements do exist in nature, their abundance is less than 10^−14 moles of superheavy elements per mole of ore. Despite these unsuccessful attempts to observe long-lived superheavy nuclei, new superheavy elements were synthesized every few years in laboratories through light-ion bombardment and cold fusion reactions; rutherfordium, the first transactinide, was discovered in 1969, and copernicium, eight protons closer to the island of stability predicted at Z = 114, was reached by 1996. Even though the half-lives of these nuclei are very short (on the order of seconds), the very existence of elements heavier than rutherfordium is indicative of stabilizing effects thought to be caused by closed shells; a model not considering such effects would forbid the existence of these elements due to rapid spontaneous fission. Flerovium, with the expected magic 114 protons, was first synthesized in 1998 at the Joint Institute for Nuclear Research in Dubna, Russia, by a group of physicists led by Yuri Oganessian. A single atom of element 114 was detected, with a lifetime of 30.4 seconds, and its decay products had half-lives measurable in minutes. Because the produced nuclei underwent alpha decay rather than fission, and the half-lives were several orders of magnitude longer than those previously predicted or observed for superheavy elements, this event was seen as a "textbook example" of a decay chain characteristic of the island of stability, providing strong evidence for the existence of the island of stability in this region. Even though the original 1998 chain was not observed again, and its assignment remains uncertain, further successful experiments in the next two decades led to the discovery of all elements up to oganesson, whose half-lives were found to exceed initially predicted values; these decay properties further support the presence of the island of stability. However, a 2021 study on the decay chains of flerovium isotopes suggests that there is no strong stabilizing effect from Z = 114 in the region of known nuclei (N = 174), and that extra stability would be predominantly a consequence of the neutron shell closure. Although known nuclei still fall several neutrons short of N = 184 where maximum stability is expected (the most neutron-rich confirmed nuclei, 293Lv and 294Ts, only reach N = 177), and the exact location of the center of the island remains unknown, the trend of increasing stability closer to N = 184 has been demonstrated. For example, the isotope 285Cn, with eight more neutrons than 277Cn, has a half-life almost five orders of magnitude longer. This trend is expected to continue into unknown heavier isotopes in the vicinity of the shell closure. 
Deformed nuclei Though nuclei within the island of stability around N = 184 are predicted to be spherical, studies from the early 1990s, beginning with Polish physicists Zygmunt Patyk and Adam Sobiczewski in 1991, suggest that some superheavy elements do not have perfectly spherical nuclei. A change in the shape of the nucleus changes the position of neutrons and protons in the shell. Research indicates that large nuclei farther from spherical magic numbers are deformed, causing magic numbers to shift or new magic numbers to appear. Current theoretical investigation indicates that in the region Z = 106–108 and N ≈ 160–164, nuclei may be more resistant to fission as a consequence of shell effects for deformed nuclei; thus, such superheavy nuclei would only undergo alpha decay. Hassium-270 is now believed to be a doubly magic deformed nucleus, with deformed magic numbers Z = 108 and N = 162. It has a half-life of 9 seconds. This is consistent with models that take into account the deformed nature of nuclei intermediate between the actinides and the island of stability near N = 184, in which a stability "peninsula" emerges at deformed magic numbers Z = 108 and N = 162. Determination of the decay properties of neighboring hassium and seaborgium isotopes near N = 162 provides further strong evidence for this region of relative stability in deformed nuclei. This also strongly suggests that the island of stability (for spherical nuclei) is not completely isolated from the region of stable nuclei, but rather that the two regions are linked through an isthmus of relatively stable deformed nuclei. Predicted decay properties The half-lives of nuclei in the island of stability itself are unknown, since none of the nuclides that would be "on the island" have been observed. Many physicists believe that the half-lives of these nuclei are relatively short, on the order of minutes or days. Some theoretical calculations indicate that their half-lives may be long, on the order of 100 years, or possibly as long as 10⁹ years. The shell closure at N = 184 is predicted to result in longer partial half-lives for alpha decay and spontaneous fission. It is believed that the shell closure will result in higher fission barriers for nuclei around 298Fl, strongly hindering fission and perhaps resulting in fission half-lives 30 orders of magnitude greater than those of nuclei unaffected by the shell closure. For example, the neutron-deficient isotope 284Fl (with N = 170) undergoes fission with a half-life of 2.5 milliseconds, and is thought to be one of the most neutron-deficient nuclides with increased stability in the vicinity of the N = 184 shell closure. Beyond this point, some undiscovered isotopes are predicted to undergo fission with still shorter half-lives, limiting the existence and possible observation of superheavy nuclei far from the island of stability (namely for N < 170 as well as for Z > 120 and N > 184). These nuclei may undergo alpha decay or spontaneous fission in microseconds or less, with some fission half-lives estimated on the order of 10⁻²⁰ seconds in the absence of fission barriers. In contrast, 298Fl (predicted to lie within the region of maximum shell effects) may have a much longer spontaneous fission half-life, possibly on the order of 10¹⁹ years. In the center of the island, there may be competition between alpha decay and spontaneous fission, though the exact ratio is model-dependent.
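The partial half-lives above combine into the observed half-life through the total decay constant, and tunneling estimates of the alpha partial half-life (such as those discussed next) are often condensed into semi-empirical relations. The following is a sketch in standard notation, with the Viola–Seaborg relation as one widely used parametrization; its coefficients a, b, c, d are fitted to known alpha emitters and are not given in this article:

\[ \lambda_{\mathrm{tot}} = \lambda_\alpha + \lambda_{\mathrm{SF}} + \lambda_\beta + \cdots, \qquad T_{1/2} = \frac{\ln 2}{\lambda_{\mathrm{tot}}}, \qquad T_{1/2}^{(i)} = \frac{\ln 2}{\lambda_i}, \]

\[ \log_{10} T_{1/2}^{(\alpha)} = \frac{aZ + b}{\sqrt{Q_\alpha}} + cZ + d, \]

where Z is the proton number of the parent nucleus and Q_α its alpha decay energy. Raising the fission barrier lengthens only the spontaneous fission partial half-life, so the observed half-life grows only until another mode such as alpha decay dominates; and because Q_α enters through an inverse square root, even a modest reduction in decay energy from a shell closure translates into orders of magnitude longer alpha half-lives.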
The alpha decay half-lives of 1700 nuclei with 100 ≤ Z ≤ 130 have been calculated in a quantum tunneling model with both experimental and theoretical alpha decay Q-values, and are in agreement with observed half-lives for some of the heaviest isotopes. The longest-lived nuclides are also predicted to lie on the beta-stability line, since beta decay is predicted to compete with the other decay modes near the predicted center of the island, especially for isotopes of elements 111–115. Unlike other decay modes predicted for these nuclides, beta decay does not change the mass number. Instead, a neutron is converted into a proton or vice versa, producing an adjacent isobar closer to the center of stability (the isobar with the lowest mass excess). For example, significant beta decay branches may exist in nuclides such as 291Fl and 291Nh; these nuclides have only a few more neutrons than known nuclides, and might decay via a "narrow pathway" towards the center of the island of stability. The possible role of beta decay is highly uncertain, as some isotopes of these elements (such as 290Fl and 293Mc) are predicted to have shorter partial half-lives for alpha decay; this would reduce competition and result in alpha decay remaining the dominant decay channel, unless additional stability towards alpha decay exists in superdeformed isomers of these nuclides. Considering all decay modes, various models indicate a shift of the center of the island (i.e., the longest-lived nuclide) from 298Fl to a lower atomic number, and competition between alpha decay and spontaneous fission in these nuclides; these include 100-year half-lives for 291Cn and 293Cn, a 1000-year half-life for 296Cn, a 300-year half-life for 294Ds, and a 3500-year half-life for 293Ds, with 294Ds and 296Cn exactly at the N = 184 shell closure. It has also been posited that this region of enhanced stability for elements with 112 ≤ Z ≤ 118 may instead be a consequence of nuclear deformation, and that the true center of the island of stability for spherical superheavy nuclei lies around 306Ubb (Z = 122, N = 184). This model defines the island of stability as the region with the greatest resistance to fission rather than the longest total half-lives; the nuclide 306Ubb is still predicted to have a short half-life with respect to alpha decay. The island of stability for spherical nuclei may also be a "coral reef" (i.e., a broad region of increased stability without a clear "peak") around N = 184 and 114 ≤ Z ≤ 120, with half-lives rapidly decreasing at higher atomic number, due to combined effects from proton and neutron shell closures. Another potentially significant decay mode for the heaviest superheavy elements was proposed to be cluster decay by Romanian physicists Dorin N. Poenaru and Radu A. Gherghescu and German physicist Walter Greiner. Its branching ratio relative to alpha decay is expected to increase with atomic number such that it may compete with alpha decay around Z = 120, and perhaps become the dominant decay mode for heavier nuclides around Z = 124. As such, it is expected to play a larger role beyond the center of the island of stability (though still influenced by shell effects), unless the center of the island lies at a higher atomic number than predicted. Possible natural occurrence Even though half-lives of hundreds or thousands of years would be relatively long for superheavy elements, they are far too short for any such nuclides to exist primordially on Earth.
Additionally, instability of nuclei intermediate between primordial actinides (232Th, 235U, and 238U) and the island of stability may inhibit production of nuclei within the island in r-process nucleosynthesis. Various models suggest that spontaneous fission will be the dominant decay mode of nuclei with A > 280, and that neutron-induced or beta-delayed fission (respectively, neutron capture and beta decay immediately followed by fission) will become the primary reaction channels. As a result, beta decay towards the island of stability may only occur within a very narrow path or may be entirely blocked by fission, thus precluding the synthesis of nuclides within the island. The non-observation of superheavy nuclides such as 292Hs and 298Fl in nature is thought to be a consequence of a low yield in the r-process resulting from this mechanism, as well as half-lives too short to allow measurable quantities to persist in nature. Various studies utilizing accelerator mass spectrometry and crystal scintillators have reported upper limits of the natural abundance of such long-lived superheavy nuclei on the order of 10⁻¹⁴ relative to their stable homologs. Despite these obstacles to their synthesis, a 2013 study published by a group of Russian physicists led by Valeriy Zagrebaev proposed that the longest-lived copernicium isotopes may occur at an abundance of 10⁻¹² relative to lead, whereby they may be detectable in cosmic rays. Similarly, in a 2013 experiment, a group of Russian physicists led by Aleksandr Bagulya reported the possible observation of three cosmogenic superheavy nuclei in olivine crystals in meteorites. The atomic number of these nuclei was estimated to be between 105 and 130, with one nucleus likely constrained between 113 and 129, and their lifetimes were estimated to be at least 3,000 years. Although this observation has yet to be confirmed in independent studies, it strongly suggests the existence of the island of stability, and is consistent with theoretical calculations of half-lives of these nuclides. The decay of heavy, long-lived elements in the island of stability is a proposed explanation for the unusual presence of the short-lived radioactive isotopes observed in Przybylski's Star. Synthesis and difficulties The manufacture of nuclei on the island of stability proves to be very difficult because the nuclei available as starting materials do not deliver the necessary number of neutrons. Radioactive ion beams (such as 44S) in combination with actinide targets (such as 248Cm) may allow the production of more neutron-rich nuclei nearer to the center of the island of stability, though such beams are not currently available at the intensities required to conduct such experiments. Several heavier isotopes such as 250Cm and 254Es may still be usable as targets, allowing the production of isotopes with one or two more neutrons than known isotopes, though the production of several milligrams of these rare isotopes to create a target is difficult. It may also be possible to probe alternative reaction channels in the same 48Ca-induced fusion-evaporation reactions that populate the most neutron-rich known isotopes, namely those at a lower excitation energy (resulting in fewer neutrons being emitted during de-excitation), or those involving evaporation of charged particles (pxn, evaporating a proton and several neutrons, or αxn, evaporating an alpha particle and several neutrons). This may allow the synthesis of neutron-enriched isotopes of elements 111–117.
Although the predicted cross sections are on the order of 1–900 fb, smaller than when only neutrons are evaporated (xn channels), it may still be possible to generate otherwise unreachable isotopes of superheavy elements in these reactions. Some of these heavier isotopes (such as 291Mc, 291Fl, and 291Nh) may also undergo electron capture (converting a proton into a neutron) in addition to alpha decay, with relatively long half-lives, eventually decaying to nuclei such as 291Cn that are predicted to lie near the center of the island of stability. However, this remains largely hypothetical, as no superheavy nuclei near the beta-stability line have yet been synthesized and predictions of their properties vary considerably across different models. In 2024, a team of researchers at the JINR observed one decay chain of the known isotope 289Mc as a product in the p2n channel of the reaction between 242Pu and 50Ti, an experiment targeting neutron-deficient livermorium isotopes. This was the first successful report of a charged-particle exit channel in a hot fusion reaction between an actinide target and a projectile with Z ≥ 20. The process of slow neutron capture used to produce nuclides as heavy as 257Fm is blocked by short-lived isotopes of fermium that undergo spontaneous fission (for example, 258Fm has a half-life of 370 μs); this is known as the "fermium gap" and prevents the synthesis of heavier elements in such a reaction. It might be possible to bypass this gap, as well as another predicted region of instability around A = 275 and Z = 104–108, in a series of controlled nuclear explosions with a higher neutron flux (about a thousand times greater than fluxes in existing reactors) that mimics the astrophysical r-process. First proposed in 1972 by Meldner, such a reaction might enable the production of macroscopic quantities of superheavy elements within the island of stability; the role of fission in intermediate superheavy nuclides is highly uncertain, and may strongly influence the yield of such a reaction. It may also be possible to generate isotopes in the island of stability such as 298Fl in multi-nucleon transfer reactions in low-energy collisions of actinide nuclei (such as 238U and 248Cm). This inverse quasifission (partial fusion followed by fission, with a shift away from mass equilibrium that results in more asymmetric products) mechanism may provide a path to the island of stability if shell effects around Z = 114 are sufficiently strong, though lighter elements such as nobelium and seaborgium (Z = 102–106) are predicted to have higher yields. Preliminary studies of the 238U + 238U and 238U + 248Cm transfer reactions have failed to produce elements heavier than mendelevium (Z = 101), though the increased yield in the latter reaction suggests that the use of even heavier targets such as 254Es (if available) may enable production of superheavy elements. This result is supported by a later calculation suggesting that the yield of superheavy nuclides (with Z ≤ 109) will likely be higher in transfer reactions using heavier targets. A 2018 study of the 238U + 232Th reaction at the Texas A&M Cyclotron Institute by Sara Wuenschel et al. found several unknown alpha decays that may be attributed to new, neutron-rich isotopes of superheavy elements with 104 < Z < 116, though further research is required to unambiguously determine the atomic number of the products.
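The electron-capture route mentioned earlier in this section can be written out explicitly; each capture converts a proton into a neutron, lowering the atomic number by one while keeping the mass number fixed:

\[ ^{291}\mathrm{Fl} + e^- \rightarrow\ ^{291}\mathrm{Nh} + \nu_e, \qquad ^{291}\mathrm{Nh} + e^- \rightarrow\ ^{291}\mathrm{Cn} + \nu_e. \]

This is how a nuclide such as 291Fl could, in principle, step from Z = 114 down toward 291Cn near the predicted center of the island, provided alpha decay does not dominate first.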
The Texas A&M result strongly suggests that shell effects have a significant influence on cross sections, and that the island of stability could possibly be reached in future experiments with transfer reactions. Other islands of stability Further shell closures beyond the main island of stability in the vicinity of Z = 112–114 may give rise to additional islands of stability. Although predictions for the location of the next magic numbers vary considerably, two significant islands are thought to exist around heavier doubly magic nuclei; the first near 354126 (with 228 neutrons) and the second near 472164 or 482164 (with 308 or 318 neutrons). Nuclides within these two islands of stability might be especially resistant to spontaneous fission and have alpha decay half-lives measurable in years, thus having comparable stability to elements in the vicinity of flerovium. Other regions of relative stability may also appear with weaker proton shell closures in beta-stable nuclides; such possibilities include regions near 342126 and 462154. Substantially greater electromagnetic repulsion between protons in such heavy nuclei may greatly reduce their stability, and possibly restrict their existence to localized islands in the vicinity of shell effects. This may have the consequence of isolating these islands from the main chart of nuclides, as intermediate nuclides and perhaps elements in a "sea of instability" would rapidly undergo fission and essentially be nonexistent. It is also possible that beyond a region of relative stability around element 126, heavier nuclei would lie beyond a fission threshold given by the liquid drop model and thus undergo fission with very short lifetimes, rendering them essentially nonexistent even in the vicinity of greater magic numbers. It has also been posited that in the region A > 300, an entire "continent of stability" consisting of a hypothetical phase of stable quark matter, comprising freely flowing up and down quarks rather than quarks bound into protons and neutrons, may exist. Such a form of matter is theorized to be a ground state of baryonic matter with a greater binding energy per baryon than nuclear matter, favoring the decay of nuclear matter beyond this mass threshold into quark matter. If this state of matter exists, it could possibly be synthesized in the same fusion reactions leading to normal superheavy nuclei, and would be stabilized against fission as a consequence of its stronger binding, which is enough to overcome Coulomb repulsion. See also Island of inversion Table of nuclides Notes References Bibliography External links Island ahoy! (Nature, 2006, with JINR diagram of heavy nuclides and predicted island of stability) Can superheavy elements (such as Z = 116 or 118) be formed in a supernova? Can we observe them? (Cornell, 2004 – "maybe") Second postcard from the island of stability (CERN, 2001; nuclides with 116 protons and mass 292) First postcard from the island of nuclear stability (CERN, 1999; first few Z = 114 atoms) Isotopes Periodic table Hypothetical chemical elements Radioactivity Nuclear physics
Island of stability
Physics,Chemistry
6,378
230,079
https://en.wikipedia.org/wiki/Cross-cultural%20communication
Cross-cultural communication is a field of study investigating how people from differing cultural backgrounds communicate, in similar and different ways among themselves, and how they endeavor to communicate across cultures. Intercultural communication is a related field of study. The cross-cultural approach deals with the comparison of different cultures. In cross-cultural communication, differences are understood and acknowledged, and can bring about individual change, but not collective transformations. In cross-cultural societies, one culture is often considered “the norm” and all other cultures are compared or contrasted to the dominant culture. Origins and culture During the Cold War, the economy of the United States was largely self-contained because the world was polarized into two separate and competing powers: the East and the West. However, changes and advancements in economic relationships, political systems, and technological options began to break down old cultural barriers. Business transformed from individual-country capitalism to global capitalism. Thus, the study of cross-cultural communication was originally found within businesses and government, both seeking to expand globally. Businesses began to offer language training to their employees, and programs were developed to train employees to understand how to act when abroad. With this also came the development of the Foreign Service Institute, or FSI, through the Foreign Service Act of 1946, where government employees received training and prepared for overseas posts. A "world view" perspective also began to be implemented in the curricula of higher education. In 1974, the International Progress Organization, with the support of UNESCO and under the auspices of Senegalese President Léopold Sédar Senghor, held an international conference on "The Cultural Self-comprehension of Nations" (Innsbruck, Austria, 27–29 July 1974) which called upon United Nations member states "to organize systematic and global comparative research on the different cultures of the world" and "to make all possible efforts for a more intensive training of diplomats in the field of international cultural co-operation ... and to develop the cultural aspects of their foreign policy." There is increasing pressure for universities across the world to incorporate intercultural and international understanding and knowledge into the education of their students. International literacy and cross-cultural understanding have become critical to a country's cultural, technological, economic, and political health. It has become essential for universities to educate, or more importantly, "transform", students to function effectively and comfortably in a world characterized by close, multi-faceted relationships and permeable borders. Students must possess a certain level of global competence to understand the world they live in and how they fit into this world. This level of global competence starts at the ground level (the university and its faculty) with how they generate and transmit cross-cultural knowledge and information to students. Interdisciplinary orientation Cross-cultural communication endeavors to bring together the relatively unrelated fields of cultural anthropology with established areas of communication. At its core, cross-cultural communication involves understanding the ways in which culturally distinct individuals communicate with each other.
It is also charged with producing guidelines with which people from different cultures can better communicate with each other. Cross-cultural communication requires an interdisciplinary approach. It involves literacy in fields such as anthropology, cultural studies, psychology, and communication. The field has also moved both toward the treatment of interethnic relations and toward the study of communication strategies used by co-cultural populations, i.e., communication strategies used to deal with majority or mainstream populations. The study of languages other than one's own can serve not only to help one understand what we as humans have in common, but also to assist in the understanding of the diversity which underlies our languages' methods of constructing and organizing knowledge. Such understanding has profound implications with respect to developing a critical awareness of social relationships. Understanding social relationships and the way other cultures work is the groundwork of successful globalization business affairs. Language socialization can be broadly defined as "an investigation of how language both presupposes and creates anew, social relations in cultural context". It is imperative that the speaker understands the grammar and prosody of a language, as well as how elements of language are socially situated, in order to reach communicative competence. Human experience is culturally relevant, so elements of language are also culturally relevant. One must carefully consider semiotics and the evaluation of sign systems to compare cross-cultural norms of communication. There are several potential problems that come with language socialization, however. Sometimes people can overgeneralize or label cultures with stereotypical and subjective characterizations. Another primary concern with documenting alternative cultural norms revolves around the fact that no social actor uses language in ways that perfectly match normative characterizations. A methodology for investigating how an individual uses language and other semiotic activity to create and use new models of conduct, and how this varies from the cultural norm, should be incorporated into the study of language socialization. Global rise With increasing globalization and international trade, it is unavoidable that different cultures will meet, conflict, and blend together. People from different cultures find it difficult to communicate not only because of language barriers, but also because they are affected by culture styles. For instance, in individualistic cultures, such as in the United States, Canada, and Western Europe, an independent figure or self is dominant. This independent figure is characterized by a sense of self relatively distinct from others and the environment. In interdependent cultures, usually identified as Asian, Latin American, African, and Southern European cultures, an interdependent figure of self is dominant. There is a much greater emphasis on the interrelatedness of the individual to others and the environment; the self is meaningful only (or primarily) in the context of social relationships, duties, and roles. To some degree, the effect of cultural difference can override the language gap. This difference in culture styles contributes to one of the biggest challenges for cross-cultural communication. Effective communication with people of different cultures is especially challenging. Cultures provide people with ways of thinking: ways of seeing, hearing, and interpreting the world.
Thus the same words can mean different things to people from different cultures, even when they speak the "same" language. When the languages are different, and translation has to be used to communicate, the potential for misunderstandings increases. The study of cross-cultural communication is a global research area. As a result, cultural differences in the study of cross-cultural communication can already be found. For example, cross-cultural communication is generally considered part of communication studies in the US, but is emerging as a sub-field of applied linguistics in the UK. Cross-cultural communication in the workplace Corporations have grown into new countries, regions, and continents around the world, which has caused people of various cultures to move and to learn to adapt to new environments. This has led to cross-cultural communication becoming more important in the work environment. From nonverbal to spoken communication, it is critical to a company's or organization's performance. An entire company or organization will face drastic hardships when its communication is restricted. Over the past few decades, many Western corporations have expanded into Sub-Saharan Africa. James Baba Abugre conducted a study on Western expatriates who have moved to work in Ghana. Abugre interviewed both the expatriates and Ghanaians, and found that cultural competence is essential to working with others of different cultures in order to avoid conflict between Western and Eastern cultural norms. It is important that workers understand both verbal and non-verbal communication styles. Expatriates who move to work in a culture that is not their own should be prepared, be properly trained, and have access to educational resources to help them succeed and to appreciate the culture they have moved into, in order to navigate it effectively. Abugre's main finding is that cultural competency is important to cross-cultural communication. Paula Caligiuri has proposed training of international workers in cultural agility techniques as a way to improve such communication. Yaila Zotzmann, Dimitri van der Linden, and Knut Wyra looked at Asia, Europe, and North America. Together they focused on employees in each continent, with an emphasis on error orientation. The authors define this as "one's attitude toward dealing with, communicating about, and learning from errors". They studied employees from China, Germany, Hungary, Japan, Malaysia, the Netherlands, the United States of America, and Vietnam. Country differences, cultural values, and personality factors were also accounted for. The study was quantitative and looked at a single organization that had offices in eight countries. Results showed that error orientation varied with the culture employees were in. Americans tend to be more open to errors, learn from them, and speak about their mistakes, whereas Japanese subjects had the lowest tolerance for errors. The Japanese showed concern about how errors might impact those around them and the organization. The study also referred to Hofstede's cultural dimensions theory. The findings show a potential relationship between error orientation and an employee's culture. Other important factors are the country employees live in and their personality dimensions. Cross-cultural communication and boundaries are present in all sectors. In Europe, cross-cultural communication in primary care is important, for example in dealing with migrants in the present European migrant crisis.
Maria van den Muijsenbergh conducted a study on primary care in Europe as well as a new program, RESTORE. The name stands for: "Research into implementation STrategies to support patients of different ORigins and language background in a variety of European primary care settings". The countries participating are Ireland, England, Scotland, Austria, the Netherlands, and Greece. Muijsenbergh found in her study that there was a range of issues in primary care for migrants in Europe. There are both language and culture barriers between medical professionals and patients, which have an impact on their communication. Even with the translation methods that technology provides, language barriers have been slow to fall. The study also found that migrants were more likely to use emergency services, which was consistent in countries with a steady influx of migrants or few migrants, and during times of economic prosperity or recession. Muijsenbergh found that migrants have worse health than native Europeans, with her findings suggesting that this is a result of the language and cultural barriers. She recommends that medical professionals use different training and educational resources in order to become cross-cultural communicators. Cross-cultural communication in lateral teams Feedback in Lateral Cross-Cultural Team Dynamics Lateral feedback, or feedback exchanged among team members at the same hierarchical level, plays a pivotal role in enhancing team creativity and innovation. Studies highlight its double-edged nature: while positive feedback fosters an environment conducive to creativity by reducing team relationship conflicts (Liu et al., 2022), negative feedback can harm team dynamics and individual creativity by triggering psychological states that detract from collaboration (Kim & Kim, 2020). The effectiveness of this communication is significantly influenced by the cultural context, suggesting the need for a strategic approach that respects individual and cultural differences in communication styles and feedback reception. Research indicates that the impact of lateral feedback is complex, affecting various team performance dimensions differently. For instance, this communication can lead to increased individual performance and team effort but may not necessarily improve overall team performance, highlighting the importance of communication practices that acknowledge the complex dynamics of team interactions (Tavoletti et al., 2019; Wisniewski et al., 2020). The application of Feedback Intervention Theory (FIT) emphasizes focusing feedback on task-related aspects rather than personal attributes to optimize its effectiveness (Kluger & DeNisi, 1996). Given the global nature of modern teams, tools like GlobeSmart Profiles and Erin Meyer's Cultural Mapping offer valuable insights for tailoring feedback in culturally intelligent ways, thereby enhancing team performance across diverse settings (Lane & Maznevski, 2019; Meyer, 2024). Emphasizing constructive, culturally informed, and task-related dialogue is essential for fostering an environment that leverages lateral feedback as a tool for continuous improvement, collaboration, and enhanced creativity within teams. Incorporation into college programs The application of cross-cultural communication theory to foreign language education is increasingly appreciated around the world.
Cross-cultural communication classes can now be found within foreign language departments of some universities, while other schools are placing cross-cultural communication programs in their departments of education. With the increasing pressures and opportunities of globalization, the incorporation of international networking alliances has become an "essential mechanism for the internationalization of higher education". Many universities from around the world have taken great strides to increase intercultural understanding through processes of organizational change and innovations. In general, university processes revolve around four major dimensions: organizational change, curriculum innovation, staff development, and student mobility. Ellingboe emphasizes these four major dimensions with his own specifications for the internationalization process. His specifications include: (1) college leadership; (2) faculty members' international involvement in activities with colleagues, research sites, and institutions worldwide; (3) the availability, affordability, accessibility, and transferability of study abroad programs for students; (4) the presence and integration of international students, scholars, and visiting faculty into campus life; and (5) international co-curricular units (residence halls, conference planning centers, student unions, career centers, cultural immersion and language houses, student activities, and student organizations). Above all, universities need to make sure that they are open and responsive to changes in the outside environment. In order for internationalization to be fully effective, the university (including all staff, students, curriculum, and activities) needs to be current with cultural changes, and willing to adapt to these changes. As stated by Ellingboe, internationalization "is an ongoing, future-oriented, multidimensional, interdisciplinary, leadership-driven vision that involves many stakeholders working to change the internal dynamics of an institution to respond and adapt appropriately to an increasingly diverse, globally focused, ever-changing external environment". New distance learning technologies, such as interactive teleconferencing, enable students located thousands of miles apart to communicate and interact in a virtual classroom. Research has indicated that certain themes and images such as children, animals, life cycles, relationships, and sports can transcend cultural differences, and may be used in international settings such as traditional and online university classrooms to create common ground among diverse cultures (Van Hook, 2011). Many Master of Science in Management programs have an internationalization specialization which may place a focus on cross-cultural communication. For example, the Ivey Business School has a course titled Cross Cultural Management. Jadranka Zlomislić, Ljerka Rados Gverijeri, and Elvira Bugaric studied the inter-cultural competency of students. As globalization progresses, the world becomes more interconnected, leading to job and study opportunities abroad in different countries and cultures, where students are surrounded by a language that is not their mother tongue. Findings suggest that the internet is helpful, but not the answer; students should enroll in language and inter-cultural courses in order to fight stereotypes, develop inter-cultural competence, and become better cross-cultural communicators.
Cross-cultural communication gives opportunities to share ideas, experiences, and different perspectives and perceptions by interacting with local people. Challenges in cross-language qualitative research Cross-language research refers to research involving two or more languages. Specifically, it can refer to: 1) researchers working with participants in a language in which they are not fluent; 2) researchers working with participants using a language that is neither of their native languages; 3) translation of research or findings into another language; or 4) researchers and participants speaking the same language (not English), with the research process and findings directed to an English-speaking audience. Cross-language issues are of growing concern in research of all methodological forms, but they raise particular concerns for qualitative research. Qualitative researchers seek to develop a comprehensive understanding of human behavior, using inductive approaches to investigate the meanings people attribute to their behavior, actions, and interactions with others. In other words, qualitative researchers seek to gain insights into life experiences by exploring the depth, richness, and complexity inherent to human phenomena. To gather data, qualitative researchers use direct observation and immersion, interviews, open-ended surveys, focus groups, content analysis of visual and textual material, and oral histories. Qualitative research studies involving cross-language issues are particularly complex in that they require investigating meanings, interpretations, symbols, and the processes and relations of social life. Although a range of scholars have dedicated their attention to challenges in conducting qualitative studies in cross-cultural contexts, no methodological consensus has emerged from these studies. For instance, Edwards noticed how the inconsistent or inappropriate use of translators or interpreters can threaten the trustworthiness of cross-language qualitative research and the applicability of the translated findings on participant populations. Researchers who fail to address the methodological issues translators/interpreters present in cross-language qualitative research can decrease the trustworthiness of the data as well as compromise the overall rigor of the study. Temple and Edwards also describe the important role of translation in research, pointing out that language is not just a tool or technical label for conveying concepts; indeed, language incorporates values and beliefs and carries the cultural, social, and political meanings of a particular social reality that may not have a conceptual equivalence in the language into which it will be translated. In the same vein, it has also been noted that the same words can mean different things in different cultures. For instance, as Temple et al. observe, the words we choose matter. Thus, it is crucial to give attention to how researchers describe the use of translators and/or interpreters, since it reflects their competence in addressing language as a methodological issue. Historical discussion of cross-language issues and qualitative research In 1989, Saville-Troike was one of the first to apply qualitative research (in the form of ethnographic investigation) to the topic of cross-cultural communication.
Using this methodology, Saville-Troike demonstrated that for successful communication to take place, a person must have the appropriate linguistic knowledge, interaction skills, and cultural knowledge. In a cross-cultural context, one must be aware of differences in norms of interaction and interpretation, values and attitudes, as well as cognitive maps and schemata. Regarding cross-cultural interviews, Stanton subsequently argued in 1996 that, in order to avoid misunderstandings, the interviewer should try to walk in the other person's shoes. In other words, the interviewer needed to pay attention to the point of view of the interviewee, a notion dubbed "connected knowing", which refers to a clear and undistorted understanding of the perspective of the interviewee. Relationship between cross-language issues and qualitative research As one of the primary methods for collecting rich and detailed information in qualitative research, interviews conducted in cross-cultural linguistic contexts raise a number of issues. As a form of data collection, interviews provide researchers with insight into how individuals understand and narrate aspects of their lives. Challenges may arise, however, when language barriers exist between researchers and participants. In multilingual contexts, the study of language differences is an essential part of qualitative research. van Ness et al. claim that language differences may have consequences for the research process and outcome, because concepts in one language may be understood differently in another language. For these authors, language is central in all phases of qualitative research, ranging from data collection to analysis and representation of the textual data in publications. In addition, as van Ness et al. observe, challenges of translation can be viewed from the perspective that interpretation of meaning is the core of qualitative research. Interpretation and representation of meaning may be challenging in any communicative act; however, they are more complicated in cross-cultural contexts where interlingual translation is necessary. Interpretation and understanding of meanings are essential in qualitative research, not only for the interview phase, but also for the final phase, when meaning will be represented to the audience through oral or written text. Temple and Edwards claim that without a high level of translated understanding, qualitative research cannot shed light on different perspectives, a circumstance that could shut out the voices of those who could enrich and challenge our understandings. Current state of affairs of cross-language studies in qualitative research According to Temple et al., a growing number of researchers are conducting studies in English-language societies with people who speak little or no English. However, few of these researchers acknowledge the influence of interpreters and translators. In addition, as Temple et al. noticed, little attention is given to the involvement of interpreters in research interviews, and even less attention to language difference in focus group research with people who do not speak English. An exception would be the work of Esposito. There is some work on the role of interpreters and translators in relation to best practice and models of provision, such as that of Thomson et al. However, there is a body of literature aimed at English-speaking health and social welfare professionals on how to work with interpreters.
Temple and Edwards point out the absence of technically focused literature on translation. This is problematic because there is strong evidence that communication across languages involves more than just a literal transfer of information. In this regard, Simon claims that the translator is not someone who simply offers words in a one-to-one exchange. Rather, the translator is someone who negotiates meanings in relation to a specific context. These meanings cannot be found within the language of translation, but they are embedded in the negotiation process, which is part of their continual reactivation. For this reason, the translator needs to make continuous decisions about the cultural meanings language conveys. Thus, the process of meaning transfer has more to do with reconstructing the value of a term than with its cultural inscription. Significant contributions to cross-language studies in qualitative research Jacques Derrida is widely acknowledged to be one of the most significant contributors to the issue of language in qualitative social research. The challenges that arise in studies involving people who speak multiple languages have also been acknowledged. Today, the main contributions concerning issues of translation and interpretation come from the field of health care, including from transcultural nursing. In a globalized era, setting the criteria for qualitative research that is linguistically and culturally representative of study participants is crucial for improving the quality of care provided by health care professionals. Scholars in the health field, like Squires, provide useful guidelines for systematically evaluating the methodological issues in cross-language research in order to address language barriers between researchers and participants. Cross-language concerns in qualitative research Squires defines cross-language as the process that occurs when a language barrier is present between the researcher and participants. This barrier is frequently mediated using a translator or interpreter. When the research involves two languages, interpretation issues might result in loss of meaning and thus loss of the validity of the qualitative study. As Oxley et al. point out, in a multilingual setting interpretation challenges arise when researcher and participants speak the same non-English native language, but the results of the study are intended for an English-speaking audience. For instance, when interviews, observation, and other methods of gathering data are used in cross-cultural environments, the data collection and analysis processes become more complicated due to the inseparability of the human experience and the language spoken in a culture (Oxley et al., 2017). Therefore, it is crucial for researchers to be clear on what they know and believe. In other words, they should clarify their position in the research process. In this context, positionality refers to the ethical and relational issues the researchers face when choosing a language over another to communicate their findings. For example, in his study on Chinese international students in a Canadian university, Li considers the ethical and relational issues of language choice experienced when working with the Chinese and English languages. In this case, it is important that the researcher offers a rationale behind his/her language choice. Thus, as Squires observes, language plays a significant role in cross-cultural studies; it helps participants represent their sense of self.
Similarly, qualitative research interviews involve a continuous reflection on language choices because these may impact the research process and outcome. In his work, Lee illustrates the central role that reflexivity plays in setting a researcher's priorities and his/her involvement in the translation process. Specifically, his study focuses on the dilemma that researchers speaking the same language as participants face when the findings are intended for an English-speaking audience only. Lee introduces the article by arguing that "Research conducted by English-speaking researchers about other language speaking subjects is essentially cross-cultural and often multilingual, particularly with QR that involves participants communicating in languages other than English" (p. 53). Specifically, Lee addresses the problems that arise in making sense of interview responses in Mandarin, preparing transcriptions of interviews, and translating the Mandarin/Chinese data for an English-speaking/reading audience. Lee's work, then, demonstrates the importance of reflexivity in cross-language research, since the researcher's involvement in the language translation can impact the research process and outcome. Therefore, in order to ensure trustworthiness, which is a measure of the rigor of the study (Lincoln & Guba), Sutsrino et al. argue that it is necessary to minimize translation errors, provide detailed accounts of the translation, involve more than one translator, and remain open to inquiry from those seeking access to the translation process. For example, in research conducted in the educational context, Sutsrino et al. recommend that bilingual researchers use an inquiry audit for establishing trustworthiness. Specifically, investigators can require an outside person to review and examine the translation process and the data analysis in order to ensure that the translation is accurate and the findings are consistent. International educational organizations The Society for Intercultural Education, Training and Research SIETAR is an educational membership organization for professionals who are concerned with the challenges and rewards of intercultural relations. SIETAR was founded in the United States in 1974 by a few dedicated individuals to draw together professionals engaged in various forms of intercultural learning and engagement research and training. SIETAR now has loosely connected chapters in numerous countries and a large international membership. WYSE International WYSE International is a worldwide educational charity specializing in education and development for emerging leaders, established in 1989. It is a non-governmental organization associated with the Department of Public Information of the United Nations. Over 3,000 participants from 110 countries have attended its courses, which have run on five continents. Its flagship International Leadership Programme is a 12-day residential course for 30 people (aged 18–35) from, on average, 20 countries. WYSE International's website sets out its aims. Middle East Entrepreneurs of Tomorrow Middle East Entrepreneurs of Tomorrow is an innovative educational initiative aimed at creating a common professional language between Israeli and Palestinian young leaders. Israeli and Palestinian students are selected through an application process and work in small bi-national teams to develop technology and business projects for local impact.
Through this process of cross-cultural communication, students build mutual respect, cultural competence, and understanding of each other. Theories The main theories for cross-cultural communication are based on the work done looking at value differences between different cultures, especially the works of Edward T. Hall, Richard D. Lewis, Geert Hofstede, and Fons Trompenaars. Clifford Geertz was also a contributor to this field. Jussi V. Koivisto's model of cultural crossing in internationally operating organizations also elaborates on this base of research. These theories have been applied to a variety of different communication theories and settings, including general business and management (Fons Trompenaars and Charles Hampden-Turner) and marketing (Marieke de Mooij, Stephan Dahl). There have also been several successful educational projects which concentrate on the practical applications of these theories in cross-cultural situations. These theories have been criticized mainly by management scholars (e.g. Nigel Holden) for being based on the culture concept derived from 19th century cultural anthropology and emphasizing culture-as-difference and culture-as-essence. Another criticism has been the uncritical way Hofstede's dimensions are served up in textbooks as facts (Peter W. Cardon). There is a move to focus on 'cross-cultural interdependence' instead of the traditional views of comparative differences and similarities between cultures. Cross-cultural management is increasingly seen as a form of knowledge management. While there is debate in academia over what cross-cultural teams can do in practice, a meta-analysis by Günter Stahl, Martha Maznevski, Andreas Voigt and Karsten Jonsen on research done on multicultural groups concluded: "Research suggests that cultural diversity leads to process losses through task conflict and decreased social integration, but to process gains through increased creativity and satisfaction." Aspects There are several parameters that may be perceived differently by people of different cultures: High- and low-context cultures: context is the most important cultural dimension and also difficult to define. The idea of context in culture was advanced by the anthropologist Edward T. Hall. He divides culture into two main groups: high- and low-context cultures. He refers to context as the stimuli, environment, or ambiance surrounding communication. The degree to which a culture relies on these three points to communicate meaning places it among either the high- or low-context cultures. For example, Hall goes on to explain that low-context cultures assume that the individuals know very little about what they are being told, and therefore must be given a lot of background information. High-context cultures assume the individual is knowledgeable about the subject and has to be given very little background information.
Nonverbal, oral and written: the main goal of improving communication with intercultural audiences is to pay special attention to specific areas of communication to enhance the effectiveness of intercultural messages. The specific areas are broken down into three subcategories: nonverbal, oral, and written messages. Nonverbal contact involves everything from something as obvious as eye contact and facial expressions to more discreet forms of expression such as the use of space. Experts have coined the term kinesics to mean communicating through body movement. Huseman, author of Business Communication, explains that the two most prominent ways of communicating through kinesics are eye contact and facial expressions. Eye contact, Huseman goes on to explain, is the key factor in setting the tone between two individuals and greatly differs in meaning between cultures. In the Americas and Western Europe, eye contact is interpreted the same way, conveying interest and honesty. People who avoid eye contact when speaking are viewed in a negative light, as withholding information and lacking in general confidence. However, in the Middle East, Africa, and especially Asia, eye contact is seen as disrespectful and even challenging of one's authority. People who make eye contact, but only briefly, are seen as respectful and courteous. Facial expressions, by comparison, are their own language and universal throughout all cultures. The final part of nonverbal communication lies in gestures, which can be broken down into five subcategories: Emblems Emblems refer to sign language (such as a thumbs up, one of the most recognized symbols in the world) Illustrators Illustrators mimic what is spoken (such as gesturing how much time is left by holding up a certain number of fingers). Regulators Regulators act as a way of conveying meaning through gestures (raising a hand, for instance, indicates that one has a certain question about what was just said) and become more complicated since the same regulator can have different meanings across different cultures (a circle made with a hand, for instance, means agreement in the Americas, symbolizes money in Japan, and conveys the notion of worthlessness in France). Affect displays Affect displays reveal emotions such as happiness (through a smile) or sadness (mouth trembling, tears). Adaptors Adaptors are more subtle, such as a yawn or clenching fists in anger. The last nonverbal type of communication deals with communication through the space around people, or proxemics. Huseman goes on to explain that Hall identifies three types of space: Fixed-feature space: deals with how cultures arrange their space on a large scale, such as buildings and parks. Semifixed-feature space: deals with how space is arranged inside buildings, such as the placement of desks, chairs and plants. Informal space: the space around people and its importance; talking distance, how close people sit to one another, and office space are all examples. A production line worker often has to make an appointment to see a supervisor, but the supervisor is free to visit the production line workers at will. Oral and written communication is generally easier to learn, adapt to, and deal with in the business world, for the simple fact that each language is unique. The one difficulty that comes into play is paralanguage, how something is said.
Differences between westerners and indigenous Australians In the view of Australian linguists such as Michael Walsh and Ghil'ad Zuckermann, conversations between people from western cultures are usually: "dyadic" – i.e. a dialogue between two specific people; direct eye contact is important; whichever person is speaking at a particular point in time controls the interaction; and conversations are "contained" in a relatively short, well-defined time frame. Indigenous Australian conversational interactions – in contrast to those of westerners – tend to be: "communal" or multilateral, i.e. they involve several people simultaneously; direct eye contact is not important (or even deliberately minimised); listeners control the interaction; and conversations are "continuous" or episodic, spread over a longer, less definite timeframe. Challenges Different spoken languages Spoken language is the most important communication tool between people. Spoken language is seen as people's natural production tool, more common and ordinary, while written language is seen as intricate because of its extensive rules. The same language can have different meanings in different contexts. When two countries that use the same language communicate, there may also be misunderstandings due to dialect differences. American English and British English are one such example of cross-cultural communication between two varieties of the same language. See also Footnotes Mary Ellen Guffey, Kathy Rhodes, Patricia Rogin. "Communicating Across Cultures." Mary Ellen Guffey, Kathy Rhodes, Patricia Rogin. Business Communication Process and Production. Nelson Education Ltd., 2010. 68–89. References Bartell, M. (2003). Internationalization of universities: A university culture-based framework. Higher Education, 45(1), 44, 46, 48, 49. Cameron, K.S. (1984). Organizational adaptation and higher education. Journal of Higher Education 55(2), 123. Ellingboe, B.J. (1998). Divisional strategies to internationalize a campus portrait: Results, resistance, and recommendations from a case study at a U.S. university, in Mestenhauser, J.A. and Ellingboe, B.J. (eds.), Reforming the Higher Education Curriculum: Internationalizing the Campus. Phoenix, AZ: American Council on Education and Oryx Press, 199. Everett M. Rogers, William B. Hart, & Yoshitaka Miike (2002). Edward T. Hall and The History of Intercultural Communication: The United States and Japan. Keio Communication Review No. 24, 1–5. Hans Köchler (ed.), Cultural Self-comprehension of Nations. Tübingen: Erdmann, 1978, Final Resolution, p. 142. Rudzki, R. E. J. (1995). The application of a strategic management model to the internationalization of higher education institutions. Higher Education, 29(4), 421–422. Rymes (2008). Language Socialization and the Linguistic Anthropology of Education. Encyclopedia of Language and Education, 2(8, Springer), 1. Teather, D. (2004). The networking alliance: A mechanism for the internationalisation of higher education? Managing Education Matters, 7(2), 3. External links "Voices on Antisemitism," Interview with Diego Portillo Mazal, from the U.S.
Holocaust Memorial Museum Communicating Across Cultures Inter cultural Research: The Current State of Knowledge A Dozen Rules of Thumb for Avoiding Inter cultural Misunderstandings Inter cultural Teachers Training Project INNOCENT: teachers learn cross-cultural communication by doing a free Web Based Training WBT International Association for Intercultural Communication Studies (IAICS) International Association for Translation and Intercultural Studies (IATIS) Human communication Cross-cultural psychology Cross-cultural studies Communication studies
Cross-cultural communication
Biology
7,649
2,158,485
https://en.wikipedia.org/wiki/Smart%20bookmark
Smart bookmarks are an extended kind of Internet bookmark used in web browsers. By accepting an argument, they give direct access to functions of web sites, as opposed to filling in web forms at the respective web site to access those functions. Smart bookmarks can be used for web searches, or for access to data on web sites with uniformly structured web addresses (e.g., user profiles in a web forum). History Smart bookmarks were first introduced in OmniWeb on the NEXTSTEP platform in 1997/1998, where they were called shortcuts. The feature was subsequently taken up by Opera, Galeon and Internet Explorer for Mac, so they can now be used in many web browsers, most of which are Mozilla based, like Kazehakase and Mozilla Firefox. In Web (Epiphany), smart bookmarks appear in a dropdown menu when text is entered in the address bar. Selecting a smart bookmark accesses the respective web site using the entered text as the argument. Smart bookmarks can also be added to the toolbar, together with their own textbox. The same applies to Galeon, which also allows the user to collapse and expand the textboxes within the toolbar. Smart bookmarks can also be shared, and there is a collection of them at the web site of the Galeon project. Usage There are two ways to employ smart bookmarks: either through the assignment of keywords or without. For example, Mozilla derivatives and also Konqueror require the assignment of keywords, which can then be typed directly into the address bar followed by the search term. Epiphany does not allow assigning keywords. Instead, the term is typed directly into the address bar; all smart bookmarks then appear in a dropdown list, from which one can be selected. See also Bookmarklets, making it possible to use JavaScript with smart bookmarks iMacros for Firefox, embeds web browser macros in bookmarks or links References External links Smart Bookmarks at the Galeon site Smart Bookmarks And Bookmarklets Web browsers Smart devices
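The underlying mechanism, a URL template whose placeholder is replaced by the user's argument, is easy to illustrate. The following minimal Python sketch uses the %s placeholder convention of Mozilla-style keyword bookmarks; the example keywords and the function name are illustrative assumptions, not any browser's actual internals.

from urllib.parse import quote_plus

# A smart bookmark maps a keyword to a URL template with a %s placeholder.
SMART_BOOKMARKS = {
    "wp": "https://en.wikipedia.org/wiki/Special:Search?search=%s",
    "g": "https://www.google.com/search?q=%s",
}

def expand(address_bar_input: str) -> str:
    """Expand 'keyword term...' into a full URL, as a keyword-style smart
    bookmark would; return the input unchanged if no keyword matches."""
    keyword, _, term = address_bar_input.partition(" ")
    template = SMART_BOOKMARKS.get(keyword)
    if template is None or not term:
        return address_bar_input
    # URL-encode the argument before substituting it into the template.
    return template.replace("%s", quote_plus(term))

print(expand("wp smart bookmark"))
# https://en.wikipedia.org/wiki/Special:Search?search=smart+bookmark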
Smart bookmark
Technology
439
16,179,834
https://en.wikipedia.org/wiki/Rabies%20vaccine
The rabies vaccine is a vaccine used to prevent rabies. There are several rabies vaccines available that are both safe and effective. Vaccinations must be administered prior to rabies virus exposure or within the latent period after exposure to prevent the disease. Transmission of rabies virus to humans typically occurs through a bite or scratch from an infectious animal, but exposure can occur through indirect contact with the saliva from an infectious individual. Doses are usually given by injection into the skin or muscle. After exposure, the vaccination is typically used along with rabies immunoglobulin. It is recommended that those who are at high risk of exposure be vaccinated before potential exposure. Rabies vaccines are effective in humans and other animals, and vaccinating dogs is very effective in preventing the spread of rabies to humans. A long-lasting immunity to the virus develops after a full course of treatment. Rabies vaccines may be used safely by all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. After exposure to rabies, there is no contraindication to its use, because the untreated virus is virtually 100% fatal. The first rabies vaccine was introduced in 1885 and was followed by an improved version in 1908. Over 29 million people worldwide receive human rabies vaccine annually. It is on the World Health Organization's List of Essential Medicines. Medical uses Before exposure The World Health Organization (WHO) recommends vaccinating those who are at high risk of the disease, such as children who live in areas where it is common. Other groups may include veterinarians, researchers, or people planning to travel to regions where rabies is common. Three doses of the vaccine are given over a one-month period on days zero, seven, and either twenty-one or twenty-eight. After exposure For individuals who have been potentially exposed to the virus, four doses over two weeks are recommended, as well as an injection of rabies immunoglobulin with the first dose. This is known as post-exposure vaccination. For people who have previously been vaccinated, only a single dose of the rabies vaccine is required. However, vaccination after exposure is neither a treatment nor a cure for rabies; it can only prevent the development of rabies in a person if given before the virus reaches the brain. Because the rabies virus has a relatively long incubation period, post-exposure vaccinations are typically highly effective. Additional doses Immunity following a course of doses is typically long lasting, and additional doses are usually not needed unless the person has a high risk of contracting the virus. Those at risk may have tests done to measure the amount of rabies antibodies in the blood, and then get rabies boosters as needed. Following administration of a booster dose, one study found 97% of immunocompetent individuals demonstrated protective levels of neutralizing antibodies after ten years. Safety Rabies vaccines are safe in all age groups. About 35 to 45 percent of people develop a brief period of redness and pain at the injection site, and 5 to 15 percent of people may experience fever, headaches, or nausea. Because of the certain fatality of the virus, receiving the vaccine is always advisable. Vaccines made from nerve tissue are used in a few countries, mainly in Asia and Latin America, but are less effective and have greater side effects. 
Their use is thus not recommended by the World Health Organization. Types The human diploid cell rabies vaccine (HDCV) was developed in 1967. Human diploid cell rabies vaccines are inactivated vaccines made using the attenuated Pitman-Moore L503 strain of the virus. In addition to these developments, newer and less expensive purified chicken embryo cell vaccines (CCEEV) and purified Vero cell rabies vaccines are now available and are recommended for use by the WHO. The purified Vero cell rabies vaccine uses the attenuated Wistar strain of the rabies virus, and uses the Vero cell line as its host. CCEEVs can be used in both pre- and post-exposure vaccinations. CCEEVs use inactivated rabies virus grown from either embryonated eggs or in cell cultures and are safe for use in humans and animals. The vaccine was attenuated and prepared in the human diploid cell (H.D.C.) strain WI-38, which was gifted to Hilary Koprowski at the Wistar Institute by Leonard Hayflick, the Associate Member who developed this normal human diploid cell strain. Verorab, developed by Sanofi-Aventis, and Speeda, developed by Liaoning Chengda, are purified Vero cell rabies vaccines (PVRV); the former is approved by the World Health Organization. Verorab is approved for medical use in Australia and the European Union and is indicated for both pre-exposure and post-exposure prophylaxis against rabies. History Virtually all infections with rabies resulted in death until two French scientists, Louis Pasteur and Émile Roux, developed the first rabies vaccination in 1885. Nine-year-old Joseph Meister (1876–1940), who had been mauled by a rabid dog, was the first human to receive this vaccine. The treatment started with a subcutaneous injection on 6 July 1885, at 8:00pm, which was followed with 12 additional doses administered over the following 10 days. The first injection was derived from the spinal cord of an inoculated rabbit which had died of rabies 15 days earlier. All the doses were obtained by attenuation, but later ones were progressively more virulent. The Pasteur-Roux vaccine attenuated the harvested virus samples by allowing them to dry for five to ten days. Similar nerve tissue-derived vaccines are still used in some countries, and while they are much cheaper than modern cell culture vaccines, they are not as effective. Neural tissue vaccines also carry a certain risk of neurological complications. Society and culture Economics When the modern cell-culture rabies vaccine was first introduced in the early 1980s, it cost $45 per dose, and was considered to be too expensive. The cost of the rabies vaccine continues to be a limitation to acquiring pre-exposure rabies immunization for travelers from developed countries. In 2015, in the United States, a course of three doses could cost over , while in Europe a course costs around . It is possible and more cost-effective to split one intramuscular dose of the vaccine into several intradermal doses. This method is recommended by the World Health Organization (WHO) in areas that are constrained by cost or with supply issues. The route is as safe and effective as intramuscular according to the WHO. Veterinary use Pre-exposure immunization has been used on domesticated and wild populations. In many jurisdictions, domestic dogs, cats, ferrets, and rabbits are required to be vaccinated. 
There are two main types of vaccines used for domesticated animals and pets (including pets from wildlife species): Inactivated rabies virus (similar technology to that given to humans) administered by injection Modified live viruses administered orally (by mouth): Live rabies virus from attenuated strains. Attenuated means strains that have developed mutations that cause them to be weaker and do not cause disease. Imrab is an example of a veterinary rabies vaccine containing the Pasteur strain of killed rabies virus. Several different types of Imrab exist, including Imrab, Imrab 3, and Imrab Large Animal. Imrab 3 has been approved for ferrets and, in some areas, pet skunks. Dogs Aside from vaccinating humans, another approach was also developed by vaccinating dogs to prevent the spread of the virus. In 1979, the Van Houweling Research Laboratory of the Silliman University Medical Center in Dumaguete in the Philippines developed and produced a dog vaccine that gave a three-year immunity from rabies. The development of the vaccine resulted in the elimination of rabies in many parts of the Visayas and Mindanao Islands. The successful program in the Philippines was later used as a model by other countries, such as Ecuador and the Mexican state of Yucatán, in their fight against rabies conducted in collaboration with the World Health Organization. In Tunisia, a rabies control program was initiated to give dog owners free vaccination to promote mass vaccination which was sponsored by their government. The vaccine is known as Rabisin (Mérial), which is a cell based rabies vaccine only used countrywide. Vaccinations are often administered when owners take in their dogs for check-ups and visits at the vet. Oral rabies vaccines (see below for details) have been trialled on feral/stray dogs in some areas with high rabies incidence, as it could potentially be more efficient than catching and injecting them. However these have not been deployed for dogs at large scale yet. Wild animals Wildlife species, primarily bats, raccoons, skunks, and foxes, act as reservoir species for different variants of the rabies virus in distinct geographic regions of the United States. This results in the general occurrence of rabies as well as outbreaks in animal populations. Approximately 90% of all reported rabies cases in the US are from wildlife. Oral rabies vaccine Oral rabies vaccines are distributed across the landscape, targeting reservoir species, in an effort to produce a herd immunity effect. The idea of wildlife vaccination was conceived during the 1960s, and modified-live rabies viruses were used for the experimental oral vaccination of carnivores by the 1970s. Development of an oral immunization for wildlife began in the United States with laboratory trials using the live, attenuated Evelyn-Rokitnicki-Abselseth (ERA) vaccine, derived from the Street Alabama Dufferin (SAD) strain. The first ORV field trial using the live attenuated vaccine to immunize foxes occurred in Switzerland during 1978. There are currently three different types of oral wildlife rabies vaccine in use: Modified live virus: Attenuated vaccine strains of rabies virus such as SAG2 and SAD B19 Recombinant vaccinia virus expressing rabies glycoprotein (V-RG): This is a strain of the vaccinia virus (originally a smallpox vaccine) that has been engineered to encode the gene for the rabies glycoprotein. V-RG has been proven safe in over 60 animal species including cats and dogs. 
ONRAB: an experimental live recombinant adenovirus vaccine Other oral rabies experimental vaccines in development include recombinant adenovirus vaccines. Oral rabies vaccination (ORV) programs have been used in many countries in an effort to control the spread of rabies and limit the risk of human contact with the rabies virus. ORV programs were initiated in Europe in the 1980s, Canada in 1985, and in the United States in 1990. ORV is a preventive measure to eliminate rabies in wild animal vectors of disease, mainly foxes, raccoons, raccoon dogs, coyotes and jackals, but also can be used for dogs in developing countries. ORV programs typically use attractive baits to deliver the vaccine to targeted animals. In the United States, RABORAL V-RG (Boehringer Ingelheim, Duluth, GA, USA) has been the only licensed ORV for rabies virus management since 1997. However, ONRAB "Ultralite" (Artemis Technologies Inc., Guelph, Ontario, Canada) baits have been distributed by the United States Department of Agriculture (USDA) in select areas of the eastern United States under an experimental permit to target raccoons since 2011. RABORAL V-RG baits consist of a small packet containing the oral vaccine which is then either coated in a fishmeal paste or encased in a fishmeal-polymer block. ONRAB "Ultralite" baits consist of a blister pack with a coating matrix of vanilla flavor, green food coloring, vegetable oil and hydrogenated vegetable fat. When an animal bites into the bait, the packet bursts and the vaccine is administered. Current research suggests that if adequate amounts of the vaccine are ingested, immunity to the virus should last for upwards of one year. By immunizing wild or stray animals, ORV programs work to create a buffer zone between the rabies virus and potential contact with humans, pets, or livestock. Landscape features such as large bodies of water and mountains are often used to enhance the effectiveness of the buffer. The effectiveness of ORV campaigns in specific areas is determined through trap-and-release methods. Titer tests are performed on the blood drawn from the sample animals in order to measure rabies antibody levels in the blood. Baits are usually distributed by aircraft to more efficiently cover large, rural regions. In order to place baits more precisely and to minimize human and pet contact with baits, they are distributed by hand in suburban or urban regions. The standard bait distribution density is 75 baits/km2 in rural areas and 150 baits/km2 in urban and developed areas. Implementation of ORV programs in the United States has led to the elimination of the coyote rabies virus variant in 2003 and the gray fox variant during 2013. Furthermore, ORV has been successful in preventing the westward expansion of the raccoon rabies enzootic front beyond Alabama. References External links Animal vaccines French inventions Inactivated vaccines Rabies Vaccines World Health Organization essential medicines (vaccines)
Rabies vaccine
Biology
2,888
34,087,660
https://en.wikipedia.org/wiki/Danish%20pile-driving%20formula
The Danish pile-driving formula is a formula which enables one to have a good gauge of the bearing capacity of a driven pile. History The formula was constructed by the Danish civil engineer Andreas Knudsen in 1955. It was made as a part of his final project at The Technical University of Denmark and was published for the Geotechnic Congress in London in 1956. It later became part of the Danish Code of Practice for Foundation Engineering and was given its name. The formula The formula reads Qdy = (α · WH · H) / (S + Se/2), with the elastic set given by Se = √(2 · α · WH · H · L / (A · E)), where Qdy = ultimate dynamic bearing capacity of driven pile α = pile driving hammer efficiency WH = weight of hammer H = hammer drop S = inelastic set of piles, in distance pr. hammer blow Se = elastic set of piles, in distance pr. hammer blow L = pile length A = pile end area E = modulus of elasticity of pile material References Dansk standard DS/EN 1997-1, 2. udgave, 2007-06-22 Journal of the Soil Mechanics and Foundation Division, November 1967 The Influence of Time on the Bearing capacity of Driven Piles External links – The Danish Code of Practice – homepage. Equations
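For illustration, the formula translates directly into a few lines of code. The following Python sketch assumes consistent SI units (newtons, metres, pascals); the numerical inputs are invented for the example and are not taken from any published case.

import math

def danish_bearing_capacity(alpha, W_H, H, S, L, A, E):
    """Ultimate dynamic bearing capacity Qdy of a driven pile by the
    Danish formula. alpha: hammer efficiency; W_H: hammer weight (N);
    H: hammer drop (m); S: inelastic set per blow (m); L: pile length (m);
    A: pile end area (m^2); E: modulus of elasticity of the pile (Pa)."""
    energy = alpha * W_H * H                     # effective energy per blow
    S_e = math.sqrt(2.0 * energy * L / (A * E))  # elastic set per blow
    return energy / (S + S_e / 2.0)

# Example: 50 kN hammer, 1 m drop, 70% efficiency, 5 mm set per blow,
# 20 m steel pile of 0.01 m^2 cross-section (E = 210 GPa).
print(danish_bearing_capacity(0.7, 50e3, 1.0, 0.005, 20.0, 0.01, 210e9))
# roughly 1.95e6 N, i.e. about 2 MN of ultimate dynamic bearing capacity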
Danish pile-driving formula
Mathematics
227
59,146,042
https://en.wikipedia.org/wiki/Accuracy%20paradox
The accuracy paradox is the paradoxical finding that accuracy is not a good metric for predictive models when classifying in predictive analytics. This is because a simple model may have a high level of accuracy but be too crude to be useful. For example, if the incidence of category A is dominant, being found in 99% of cases, then predicting that every case is category A will have an accuracy of 99%. Precision and recall are better measures in such cases. The underlying issue is that there is a class imbalance between the positive class and the negative class. Prior probabilities for these classes need to be accounted for in error analysis. Precision and recall help, but precision too can be biased by unbalanced class priors in the test sets. Example For example, a city of 1 million people has ten terrorists. A profiling system flags 1,000 inhabitants as terrorists, among them the ten actual terrorists, giving the following confusion matrix: 10 true positives, 990 false positives, 0 false negatives, and 998,990 true negatives. Even though the accuracy is (10 + 998,990) / 1,000,000 ≈ 99.9%, 990 out of the 1,000 positive predictions are incorrect. The precision of 10/1,000 = 1% reveals its poor performance. As the classes are so unbalanced, a better metric is the F1 score = 2 × (0.01 × 1) / (0.01 + 1) ≈ 2% (the recall being 10/10 = 1). Literature Kubat, M. (2000). Addressing the Curse of Imbalanced Training Sets: One-Sided Selection. Fourteenth International Conference on Machine Learning. See also False positive paradox References Statistical paradoxes
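The arithmetic in the example can be verified with a short Python snippet; the four counts below are exactly the confusion-matrix cells given above.

tp, fp = 10, 990          # predicted terrorists: 10 correct, 990 incorrect
fn, tn = 0, 998_990       # everyone else is correctly left alone

accuracy = (tp + tn) / (tp + fp + fn + tn)          # 0.999  (≈ 99.9%)
precision = tp / (tp + fp)                          # 0.01   (1%)
recall = tp / (tp + fn)                             # 1.0
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.0198 (≈ 2%)
print(accuracy, precision, recall, f1)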
Accuracy paradox
Mathematics
280
1,386,551
https://en.wikipedia.org/wiki/Crime%20mapping
Crime mapping is used by analysts in law enforcement agencies to map, visualize, and analyze crime incident patterns. It is a key component of crime analysis and the CompStat policing strategy. Mapping crime, using Geographic Information Systems (GIS), allows crime analysts to identify crime hot spots, along with other trends and patterns. Overview Using GIS, crime analysts can overlay other datasets such as census demographics, locations of pawn shops, schools, etc., to better understand the underlying causes of crime and help law enforcement administrators to devise strategies to deal with the problem. GIS is also useful for law enforcement operations, such as allocating police officers and dispatching to emergencies. Underlying theories that help explain spatial behavior of criminals include environmental criminology, which was devised in the 1980s by Patricia and Paul Brantingham, routine activity theory, developed by Lawrence Cohen and Marcus Felson and originally published in 1979, and rational choice theory, developed by Ronald V. Clarke and Derek Cornish, originally published in 1986. In recent years, crime mapping and analysis has incorporated spatial data analysis techniques that add statistical rigor and address inherent limitations of spatial data, including spatial autocorrelation and spatial heterogeneity. Spatial data analysis helps one analyze crime data and better understand why and not just where crime is occurring. Research into computer-based crime mapping started in 1986, when the National Institute of Justice (NIJ) funded a project in the Chicago Police Department to explore crime mapping as an adjunct to community policing. That project was carried out by the CPD in conjunction with the Chicago Alliance for Neighborhood Safety, the University of Illinois at Chicago, and Northwestern University, reported on in the book, Mapping Crime in Its Community Setting: Event Geography Analysis. The success of this project prompted NIJ to initiate the Drug Market Analysis Program (with the appropriate acronym D-MAP) in five cities, and the techniques these efforts developed led to the spread of crime mapping throughout the US and elsewhere, including the New York City Police Department's CompStat. Applications Crime analysts use crime mapping and analysis to help law enforcement management (e.g. the police chief) to make better decisions, target resources, and formulate strategies, as well as for tactical analysis (e.g. crime forecasting, geographic profiling). New York City does this through the CompStat approach, though that way of thinking deals more with the short term. There are other, related approaches with terms including Information-led policing, Intelligence-led policing, Problem-oriented policing, and Community policing. In some law enforcement agencies, crime analysts work in civilian positions, while in other agencies, crime analysts are sworn officers. From a research and policy perspective, crime mapping is used to understand patterns of incarceration and recidivism, help target resources and programs, evaluate crime prevention or crime reduction programs (e.g. Project Safe Neighborhoods, Weed & Seed and as proposed in Fixing Broken Windows), and further understanding of causes of crime. 
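To make the grid-based flavor of hot spot analysis concrete, here is a deliberately simplified Python sketch. Real crime mapping relies on GIS software, kernel density estimation and spatial statistics; the cell-counting approach, the coordinates and the threshold below are illustrative stand-ins only.

from collections import Counter

def hot_spots(incidents, cell_size=0.01, threshold=3):
    """Bin (latitude, longitude) incident points into square grid cells and
    return the cells whose incident count reaches the threshold; such cells
    are candidate hot spots for closer analysis."""
    counts = Counter(
        (int(lat / cell_size), int(lon / cell_size))
        for lat, lon in incidents
    )
    return {cell: n for cell, n in counts.items() if n >= threshold}

# Three burglaries reported within the same roughly 1 km cell form a hot
# spot; the isolated fourth incident does not.
reports = [(41.8810, -87.6230), (41.8820, -87.6240),
           (41.8815, -87.6235), (41.9500, -87.7000)]
print(hot_spots(reports))   # {(4188, -8762): 3}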
See also Crime analysis Geographic profiling Programs and projects CrimeView Criminal Reduction Utilising Statistical History CrimeAnalyst RAIDS Online RiskAhead ATACRAIDS Individuals André-Michel Guerry Michael Maltz Public access RAIDS Online SpotCrime.com Trulia, US real estate site with crime mapping YourMapper, open data crime mapping CitySafe, crime mapping for travelers & tourists General Public Participation GIS References Further reading Applications of geographic information systems Criminology Human geography Law enforcement techniques Crime statistics Map types
Crime mapping
Environmental_science
715
357,796
https://en.wikipedia.org/wiki/Constructivism%20%28philosophy%20of%20science%29
Constructivism is a view in the philosophy of science that maintains that scientific knowledge is constructed by the scientific community, which seeks to measure and construct models of the natural world. According to constructivists, natural science consists of mental constructs that aim to explain sensory experiences and measurements, and there is no single valid methodology in science but rather a diversity of useful methods. They also hold that the world is independent of human minds, but knowledge of the world is always a human and social construction. Constructivism opposes the philosophy of objectivism, which embraces the belief that human beings can come to know the truth about the natural world unmediated by scientific approximations with different degrees of validity and accuracy. Constructivism and sciences Social constructivism in sociology One version of social constructivism contends that categories of knowledge and reality are actively created by social relationships and interactions. These interactions also alter the way in which scientific episteme is organized. Social activity presupposes human interaction, and in the case of social construction, utilizing semiotic resources (meaning-making and signifying) with reference to social structures and institutions. Several traditions use the term Social Constructivism: psychology (after Lev Vygotsky), sociology (after Peter Berger and Thomas Luckmann, themselves influenced by Alfred Schütz), sociology of knowledge (David Bloor), sociology of mathematics (Sal Restivo), philosophy of mathematics (Paul Ernest). Ludwig Wittgenstein's later philosophy can be seen as a foundation for social constructivism, with its key theoretical concepts of language games embedded in forms of life. Constructivism in philosophy of science Thomas Kuhn argued that changes in scientists' views of reality not only contain subjective elements but result from group dynamics, "revolutions" in scientific practice, and changes in "paradigms". As an example, Kuhn suggested that the Sun-centric Copernican "revolution" replaced the Earth-centric views of Ptolemy not because of empirical failures but because of a new "paradigm" that exerted control over what scientists felt to be the more fruitful way to pursue their goals. The view of reality as accessible only through models was called model-dependent realism by Stephen Hawking and Leonard Mlodinow. While not rejecting an independent reality, model-dependent realism says that we can know only an approximation of it provided by the intermediary of models. These models evolve over time as guided by scientific inspiration and experiments. In the field of the social sciences, constructivism as an epistemology urges that researchers reflect upon the paradigms that may be underpinning their research, and in the light of this become more open to considering other ways of interpreting any results of the research. Furthermore, the focus is on presenting results as negotiable constructs rather than as models that aim to "represent" social realities more or less accurately. Norma Romm, in her book Accountability in Social Research (2001), argues that social researchers can earn trust from participants and wider audiences insofar as they adopt this orientation and invite inputs from others regarding their inquiry practices and the results thereof. 
Constructivism and psychology In psychology, constructivism refers to many schools of thought that, though extraordinarily different in their techniques (applied in fields such as education and psychotherapy), are all connected by a common critique of previous standard objectivist approaches. Constructivist psychology schools share assumptions about the active constructive nature of human knowledge. In particular, the critique is aimed at the "associationist" postulate of empiricism, "by which the mind is conceived as a passive system that gathers its contents from its environment and, through the act of knowing, produces a copy of the order of reality." In contrast, "constructivism is an epistemological premise grounded on the assertion that, in the act of knowing, it is the human mind that actively gives meaning and order to that reality to which it is responding". The constructivist psychologies theorize about and investigate how human beings create systems for meaningfully understanding their worlds and experiences. Constructivism and education Joe L. Kincheloe has published numerous social and educational books on critical constructivism (2001, 2005, 2008), a version of constructivist epistemology that places emphasis on the exaggerated influence of political and cultural power in the construction of knowledge, consciousness, and views of reality. In the contemporary mediated electronic era, Kincheloe argues, dominant modes of power have never exerted such influence on human affairs. Coming from a critical pedagogical perspective, Kincheloe argues that understanding a critical constructivist epistemology is central to becoming an educated person and to the institution of just social change. Kincheloe's characteristics of critical constructivism: Knowledge is socially constructed: World and information co-construct one another Consciousness is a social construction Political struggles: Power plays an exaggerated role in the production of knowledge and consciousness The necessity of understanding consciousness—even though it does not lend itself to traditional reductionistic modes of measurability The importance of uniting logic and emotion in the process of knowledge and producing knowledge The inseparability of the knower and the known The centrality of the perspectives of oppressed peoples—the value of the insights of those who have suffered as the result of existing social arrangements The existence of multiple realities: Making sense of a world far more complex than we originally imagined Becoming humble knowledge workers: Understanding our location in the tangled web of reality Standpoint epistemology: Locating ourselves in the web of reality, we are better equipped to produce our own knowledge Constructing practical knowledge for critical social action Complexity: Overcoming reductionism Knowledge is always entrenched in a larger process The centrality of interpretation: Critical hermeneutics The new frontier of classroom knowledge: Personal experiences intersecting with pluriversal information Constructing new ways of being human: Critical ontology Constructivist approaches Critical constructivism A series of articles published in the journal Critical Inquiry (1991) served as a manifesto for the movement of critical constructivism in various disciplines, including the natural sciences. Not only truth and reality, but also "evidence", "document", "experience", "fact", "proof", and other central categories of empirical research (in physics, biology, statistics, history, law, etc.) 
reveal their contingent character as a social and ideological construction. Thus, a "realist" or "rationalist" interpretation is subjected to criticism. Kincheloe's political and pedagogical notion (above) has emerged as a central articulation of the concept. Cultural constructivism Cultural constructivism asserts that knowledge and reality are a product of their cultural context, meaning that two independent cultures will likely form different observational methodologies. Genetic epistemology James Mark Baldwin invented this expression, which was later popularized by Jean Piaget. From 1955 to 1980, Piaget was Director of the International Centre for Genetic Epistemology in Geneva. Radical constructivism Ernst von Glasersfeld was a prominent proponent of radical constructivism. This claims that knowledge is not a commodity that is transported from one mind into another. Rather, it is up to the individual to "link up" specific interpretations of experiences and ideas with their own reference of what is possible and viable. That is, the process of constructing knowledge, of understanding, is dependent on the individual's subjective interpretation of their active experience, not on what "actually" occurs. Understanding and acting are seen by radical constructivists not as dualistic processes but as "circularly conjoined". Radical constructivism is closely related to second-order cybernetics. Constructivist Foundations is a free online journal publishing peer-reviewed articles on radical constructivism by researchers from multiple domains. Relational constructivism Relational constructivism can be perceived as a relational consequence of radical constructivism. In contrast to social constructivism, it picks up the epistemological threads. It maintains the radical constructivist idea that humans cannot overcome their limited conditions of reception (i.e., self-referentially operating cognition). Therefore, humans are not able to come to objective conclusions about the world. In spite of the subjectivity of human constructions of reality, relational constructivism focuses on the relational conditions applying to human perceptional processes. Björn Kraus has put this position in a nutshell. Social Constructivism Criticisms Numerous criticisms have been levelled at constructivism. The most common one is that it either explicitly advocates or implicitly reduces to relativism. Another criticism of constructivism is that it holds that the concepts of two different social formations are entirely different and incommensurable. This being the case, it is impossible to make comparative judgments about statements made according to each worldview. This is because the criteria of judgment will themselves have to be based on some worldview or other. If this is the case, then it brings into question how communication between them about the truth or falsity of any given statement could be established. The Wittgensteinian philosopher Gavin Kitching argues that constructivists usually implicitly presuppose a deterministic view of language, which severely constrains the minds and use of words by members of societies: they are not just "constructed" by language on this view but are literally "determined" by it. Kitching notes the contradiction here: somehow, the advocate of constructivism is not similarly constrained. While other individuals are controlled by the dominant concepts of society, the advocate of constructivism can transcend these concepts and see through them. 
See also Autopoiesis Consensus reality Constructivism in international relations Cultural pluralism Epistemological pluralism Tinkerbell effect Map–territory relation Meaning making Metacognition Ontological pluralism Personal construct psychology Perspectivism Pragmatism References Further reading Devitt, M. 1997. Realism and Truth, Princeton University Press. Gillett, E. 1998. "Relativism and the Social-constructivist Paradigm", Philosophy, Psychiatry, & Psychology, Vol.5, No.1, pp. 37–48 Ernst von Glasersfeld 1987. The construction of knowledge, Contributions to conceptual semantics. Ernst von Glasersfeld 1995. Radical constructivism: A way of knowing and learning. Joe L. Kincheloe 2001. Getting beyond the Facts: Teaching Social Studies/Social Science in the Twenty-First Century, NY: Peter Lang. Joe L. Kincheloe 2005. Critical Constructivism Primer, NY: Peter Lang. Joe L. Kincheloe 2008. Knowledge and Critical Pedagogy, Dordrecht, The Netherlands: Springer. Kitching, G. 2008. The Trouble with Theory: The Educational Costs of Postmodernism, Penn State University Press. Björn Kraus 2014: Introducing a model for analyzing the possibilities of power, help and control. In: Social Work and Society. International Online Journal. Retrieved 3 April 2019.(http://www.socwork.net/sws/article/view/393) Björn Kraus 2015: The Life We Live and the Life We Experience: Introducing the Epistemological Difference between "Lifeworld" (Lebenswelt) and "Life Conditions" (Lebenslage). In: Social Work and Society. International Online Journal. Retrieved 27 August 2018.(http://www.socwork.net/sws/article/view/438). Björn Kraus 2019: Relational constructivism and relational social work. In: Webb, Stephen, A. (edt.) The Routledge Handbook of Critical Social Work. Routledge international Handbooks. London and New York: Taylor & Francis Ltd. Friedrich Kratochwil: Constructivism: what it is (not) and how it matters, in Donatella della Porta & Michael Keating (eds.) 2008, Approaches and Methodologies in the Social Sciences: A Pluralist Perspective, Cambridge University Press, 80–98. Mariyani-Squire, E. 1999. "Social Constructivism: A flawed Debate over Conceptual Foundations", Capitalism, Nature, Socialism, vol.10, no.4, pp. 97–125 Matthews, M.R. (ed.) 1998. Constructivism in Science Education: A Philosophical Examination, Kluwer Academic Publishers. Edgar Morin 1986, La Méthode, Tome 3, La Connaissance de la connaissance. Nola, R. 1997. "Constructivism in Science and in Science Education: A Philosophical Critique", Science & Education, Vol.6, no.1-2, pp. 55–83. Jean Piaget (ed.) 1967. Logique et connaissance scientifique, Encyclopédie de la Pléiade, vol. 22. Editions Gallimard. Herbert A. Simon 1969. The Sciences of the Artificial (3rd Edition MIT Press 1996). Slezak, P. 2000. "A Critique of Radical Social Constructivism", in D.C. Philips, (ed.) 2000, Constructivism in Education: Opinions and Second Opinions on Controversial Issues, The University of Chicago Press. Suchting, W.A. 1992. "Constructivism Deconstructed", Science & Education, vol.1, no.3, pp. 223–254 Paul Watzlawick 1984. The Invented Reality: How Do We Know What We Believe We Know? (Contributions to Constructivism), W W. Norton. Tom Rockmore 2008. On Constructivist Epistemology. Romm, N.R.A. 2001. Accountability in Social Research, Dordrecht, The Netherlands: Springer. 
https://www.springer.com/social+sciences/book/978-0-306-46564-2 External links Journal of Constructivist Psychology Radical Constructivism Constructivist Foundations Epistemological theories Epistemology of science Metatheory of science Philosophical analogies Social constructionism Social epistemology Systems theory Theories of truth Constructivism
Constructivism (philosophy of science)
Technology
2,848
400,062
https://en.wikipedia.org/wiki/Neil%20Sloane
Neil James Alexander Sloane FLSW (born October 10, 1939) is a British-American mathematician. His major contributions are in the fields of combinatorics, error-correcting codes, and sphere packing. Sloane is best known for being the creator and maintainer of the On-Line Encyclopedia of Integer Sequences (OEIS). Biography Sloane was born in Beaumaris, Anglesey, Wales, in 1939, moving to Cowes, Isle of Wight, England in 1946. The family emigrated to Australia, arriving at the start of 1949. Sloane then moved from Melbourne to the United States in 1961. He studied at Cornell University under Nick DeClaris, Frank Rosenblatt, Frederick Jelinek and Wolfgang Heinrich Johannes Fuchs, receiving his Ph.D. in 1967. His doctoral dissertation was titled Lengths of Cycle Times in Random Neural Networks. Sloane joined Bell Labs in 1968 and retired from its successor AT&T Labs in 2012. He became an AT&T Fellow in 1998. He is also a Fellow of the Learned Society of Wales, an IEEE Fellow, a Fellow of the American Mathematical Society, and a member of the National Academy of Engineering. He is a winner of a Lester R. Ford Award in 1978 and the Chauvenet Prize in 1979. In 1998 he was an Invited Speaker of the International Congress of Mathematicians in Berlin. In 2005 Sloane received the IEEE Richard W. Hamming Medal. In 2008 he received the Mathematical Association of America David P. Robbins Prize, and in 2013 the George Pólya Award. In 2014, to celebrate his 75th birthday, Sloane shared some of his favorite integer sequences. Besides mathematics, he loves rock climbing and has authored two rock-climbing guides to New Jersey. He regularly appears in videos for Brady Haran's YouTube channel Numberphile. Selected publications Neil James Alexander Sloane, A Handbook of Integer Sequences, Academic Press, NY, 1973. Florence Jessie MacWilliams and Neil James Alexander Sloane, The Theory of Error-Correcting Codes, Elsevier/North-Holland, Amsterdam, 1977. M. Harwit and Neil James Alexander Sloane, Hadamard Transform Optics, Academic Press, San Diego CA, 1979. Neil James Alexander Sloane and A. D. Wyner, editors, Claude Elwood Shannon: Collected Papers, IEEE Press, NY, 1993. Neil James Alexander Sloane and S. Plouffe, The Encyclopedia of Integer Sequences, Academic Press, San Diego, 1995. J. H. Conway and Neil James Alexander Sloane, Sphere Packings, Lattices and Groups, Springer-Verlag, NY, 1st edn., 1988; 2nd edn., 1993; 3rd ed., 1998. A. S. Hedayat, Neil James Alexander Sloane and J. Stufken, Orthogonal Arrays: Theory and Applications, Springer-Verlag, NY, 1999. G. Nebe, E. M. Rains and Neil James Alexander Sloane, Self-Dual Codes and Invariant Theory, Springer-Verlag, 2006. See also Reeds–Sloane algorithm Sloane's gap References External links IEEE Richard W. Hamming Medal Recipients, 2005 – Neil J. A. Sloane Neil Sloane's entry in the Numericana Hall of Fame "The pattern collector", Science News Doron Zeilberger, Opinion 124: A Database is Worth a Thousand Mathematical Articles Confessions of a Sequence Addict: Neil Sloane 20th-century American mathematicians 21st-century American mathematicians Combinatorialists 1939 births Living people Scientists at Bell Labs Members of the United States National Academy of Engineering Fellows of the American Mathematical Society Cornell University alumni University of Melbourne alumni Fellows of the IEEE People from Beaumaris Fellows of the Learned Society of Wales
Neil Sloane
Mathematics
745
50,605,630
https://en.wikipedia.org/wiki/Cortinarius%20altissimus
Cortinarius altissimus is a fungus native to Guyana. It was described in 2015 by Emma Harrower and colleagues, and is closely related to the northern hemisphere species Cortinarius violaceus. See also List of Cortinarius species References External links altissimus Fungi described in 2015 Fungi of Guyana Fungus species
Cortinarius altissimus
Biology
69
37,959,983
https://en.wikipedia.org/wiki/Korte%27s%20third%20law%20of%20apparent%20motion
In psychophysics, Korte's third law of apparent motion is an observation relating the phenomenon of apparent motion to the distance and duration between two successively presented stimuli. Formulation Korte's four laws were first proposed in 1915 by Adolf Korte. The third law, in particular, describes how an increase in the distance between two stimuli narrows the range of interstimulus intervals (ISI) that produce apparent motion. It holds that, as the ISI increases, the frequency at which two stimulators are activated in alternation must decrease proportionally to preserve the quality of the apparent motion. One identified violation of Korte's law occurs if the shortest path between seen arm positions is not possible anatomically. This was demonstrated by Maggie Shiffrar and Jennifer Freyd using a picture that showed a woman in two alternating positions, which highlighted the problem of taking the shortest path between the postures. The laws were composed of general statements (laws) describing beta movement in the sense of "optimal motion". These outlined several constraints for obtaining the percept of apparent motion between flashes: "(1) larger separations require higher intensities, (2) slower presentation rates require higher intensities, (3) larger separations require slower presentation rates, (4) longer flash durations require shorter intervals". A modern formulation of the law is that the greater the length of a path between two successively presented stimuli, the greater the stimulus onset asynchrony (SOA) must be for an observer to perceive the two stimuli as a single mobile object. Typically, the relationship between distance and minimal SOA is linear. Arguably, Korte's third law is counterintuitive. One might expect that successive stimuli are less likely to be perceived as a single object as both distance and interval increase, and that a negative relationship should therefore be observed instead. In fact, such a negative relationship can be observed as well as Korte's law; which relationship holds depends on speed. Korte's law also implies a constant velocity through apparent motion, which the data are said not to support. References Psychophysics
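The modern linear formulation lends itself to a toy model. In the Python sketch below, the intercept and slope of the distance-to-SOA threshold line are arbitrary placeholder values (real values vary with observer and display), and the function names are mine.

def minimal_soa_ms(distance_deg, intercept=40.0, slope=10.0):
    """Smallest stimulus onset asynchrony (ms) at which two flashes separated
    by distance_deg degrees of visual angle are still seen as one moving
    object, assuming the linear form of Korte's third law."""
    return intercept + slope * distance_deg

def seen_as_single_object(distance_deg, soa_ms):
    """Apparent motion is predicted when the SOA meets the distance-dependent
    threshold; larger separations demand larger SOAs."""
    return soa_ms >= minimal_soa_ms(distance_deg)

print(seen_as_single_object(2.0, 70.0))   # True: 70 ms >= 40 + 10*2 = 60 ms
print(seen_as_single_object(5.0, 70.0))   # False: the threshold is now 90 ms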
Korte's third law of apparent motion
Physics
444
12,297,564
https://en.wikipedia.org/wiki/Wildlife%20of%20Ethiopia
The richness and variety of the wildlife of Ethiopia is dictated by the great diversity of terrain with wide variations in climate, soils, natural vegetation and settlement patterns. Ethiopia contains a vast highland complex of mountains and dissected plateaus divided by the Great Rift Valley, which runs generally southwest to northeast and is surrounded by lowlands, steppes, or semi-desert. Ethiopia is an ecologically diverse country, ranging from the deserts along the eastern border to the tropical forests in the south to extensive Afromontane in the northern and southwestern parts. Lake Tana in the north is the source of the Blue Nile. It also has many endemic species, including 31 mammal species, notably the gelada, the walia ibex and the Ethiopian wolf ("Simien fox"). There are seven mammal species classified as "critically endangered", and others as "endangered" or "vulnerable". The wide range of altitude has given the country a variety of ecologically distinct areas, and this has helped to encourage the evolution of endemic species in ecological isolation. But some of these habitats are now much reduced or threatened. The nation is a land of geographical contrasts, ranging from the vast fertile west, with its forests and numerous rivers, to the world's hottest settlement of Dallol in its north. The Ethiopian Highlands are the largest continuous mountain ranges in Africa, and the Sof Omar Caves contains the largest cave on the continent. Ethiopia also has the second-largest number of UNESCO World Heritage Sites in Africa. Fauna Mammals Birds Fish About 14 genera of fishes are found. These include primitive bichirs, lung fishes, catfish, cyprinids, cyprinodonts, cichlids, several caracins and a few spiny rayed families. The following fish families are most peculiar of this region: Polypteridae: protopterus (lung fish) related to lepidosiren (Neotropical lung fish) Mormyridae or African electric fishes, not related to Gymnotidae (Neotropical electric fishes) Archaic bichirs Gymnorchidae Isopondyli fishes catfishes Butterflies Molluscs Reptiles Crocodylus niloticus, Nile crocodile Rhinotyphlops somalicus, Ethiopian blind snake Homopholis fasciata, Banded velvet gecko Pelusios adansonii, Adanson's mud turtle Bitis arietans, Puff adder Dendroaspis polylepis, Black mamba Bitis parviocula, Ethiopian mountain adder Lamprophis abyssinicus, Abyssinian house snake Python sebae sebae, African rock python Threatened species Historically, throughout the African continent, wildlife populations have been rapidly declining owing to logging, civil wars, hunting, pollution, poaching and other human interference. A 17-year-long civil war along with severe drought, negatively impacted Ethiopia's environmental conditions leading to even greater habitat degradation. Habitat destruction is a factor that leads to endangerment. When changes to a habitat occur rapidly, animals do not have time to adjust. Human impact threatens many species, with greater threats expected as a result of climate change-induced by greenhouse gas emissions. Ethiopia has a large number of species listed as critically endangered, endangered and vulnerable by the IUCN. To assess the current situation in Ethiopia, it is critical to identify the threatened species in this region. There are 31 endemic species of mammals, meaning that a species occurs naturally only in a certain area, in this case Ethiopia. 
The African wild dog prehistorically had a widespread distribution in Ethiopia; however, with last sightings at Fincha, this canid was thought to be potentially extirpated within Ethiopia. This is likely not the case, however, as a breeding pack was seen and photographed by Bale Mountain Lodge guests inside the Bale Mountains National Park's Harenna Forest in 2015. The Ethiopian wolf is perhaps the most researched of all the endangered species within Ethiopia. Several conservation programs are in effect to help endangered species in Ethiopia. A group called The Ethiopian Wildlife and Natural History Society was created in 1966; it focuses on studying and promoting the natural environments of Ethiopia, spreading the knowledge it acquires, and supporting legislation to protect environmental resources. There are multiple conservation organizations one can access online, one of which connects directly to the Ethiopian wolf. Funding supports the World Wildlife Fund's global conservation efforts. The WWF Chairman of the Board, Bruce Babbitt, holds this organization accountable for the best practices in accountability, governance and transparency throughout all tiers within the organization. Flora There are many types of vegetation, flowers, and plants in Ethiopia. Many cactus plants grow in the Ethiopian highlands. Ethiopia has many species of flowers that are used for medication and decoration. Many of the plants are used to make honey and oil. Moreover, many of the plants in Ethiopia can be used as flavoring or spices. Ethiopia's varied climatic and geological zones support different types of flora, including various alpine and evergreen plants. Some plants, such as coffee and khat, are exported to other countries and are significant to the economy. Notes References External links Biota of Ethiopia Ethiopia
Wildlife of Ethiopia
Biology
1,040
2,221,032
https://en.wikipedia.org/wiki/Cycles%20and%20fixed%20points
In mathematics, the cycles of a permutation π of a finite set S correspond bijectively to the orbits of the subgroup generated by π acting on S. These orbits are subsets of S that can be written as { c1, ..., cn }, such that π(ci) = ci+1 for i = 1, ..., n − 1, and π(cn) = c1. The corresponding cycle of π is written as ( c1 c2 ... cn ); this expression is not unique since c1 can be chosen to be any element of the orbit. The size n of the orbit is called the length of the corresponding cycle; when n = 1, the single element in the orbit is called a fixed point of the permutation. A permutation is determined by giving an expression for each of its cycles, and one notation for permutations consists of writing such expressions one after another in some order. For example, let π be the permutation of { 1, ..., 8 } that maps 1 to 2, 2 to 4, 4 to 3, 3 to 1, 6 to 8, 8 to 6, and fixes 5 and 7. Then one may write π = ( 1 2 4 3 ) ( 5 ) ( 6 8 ) ( 7 ) = ( 7 ) ( 1 2 4 3 ) ( 6 8 ) ( 5 ) = ( 4 3 1 2 ) ( 8 6 ) ( 5 ) ( 7 ) = ... Here 5 and 7 are fixed points of π, since π(5) = 5 and π(7) = 7. It is typical, but not necessary, to not write the cycles of length one in such an expression. Thus, π = ( 1 2 4 3 ) ( 6 8 ) would be an appropriate way to express this permutation. There are different ways to write a permutation as a list of its cycles, but the number of cycles and their contents are given by the partition of S into orbits, and these are therefore the same for all such expressions. Counting permutations by number of cycles The unsigned Stirling number of the first kind, s(k, j), counts the number of permutations of k elements with exactly j disjoint cycles. Properties (1) For every k > 0: s(k, k) = 1. (2) For every k > 0: s(k, 1) = (k − 1)!. (3) For every k > j > 1, s(k, j) = s(k − 1, j − 1) + (k − 1) · s(k − 1, j). Reasons for properties (1) There is only one way to construct a permutation of k elements with k cycles: every cycle must have length 1, so every element must be a fixed point. (2.a) Every cycle of length k may be written as a permutation of the numbers 1 to k; there are k! of these permutations. (2.b) There are k different ways to write a given cycle of length k, e.g. ( 1 2 4 3 ) = ( 2 4 3 1 ) = ( 4 3 1 2 ) = ( 3 1 2 4 ). (2.c) Finally: s(k, 1) = k!/k = (k − 1)!. (3) There are two different ways to construct a permutation of k elements with j cycles: (3.a) If we want element k to be a fixed point we may choose one of the s(k − 1, j − 1) permutations with k − 1 elements and j − 1 cycles and add element k as a new cycle of length 1. (3.b) If we want element k not to be a fixed point we may choose one of the s(k − 1, j) permutations with k − 1 elements and j cycles and insert element k in an existing cycle in front of one of the k − 1 elements. Some values Counting permutations by number of fixed points The value f(k, j) counts the number of permutations of k elements with exactly j fixed points. For the main article on this topic, see rencontres numbers. Properties (1) For every j < 0 or j > k: f(k, j) = 0. (2) f(0, 0) = 1. (3) For every k > 1 and k ≥ j ≥ 0, f(k, j) = f(k − 1, j − 1) + (k − 1 − j) · f(k − 1, j) + (j + 1) · f(k − 1, j + 1). Reasons for properties (3) There are three different methods to construct a permutation of k elements with j fixed points: (3.a) We may choose one of the f(k − 1, j − 1) permutations with k − 1 elements and j − 1 fixed points and add element k as a new fixed point. (3.b) We may choose one of the f(k − 1, j) permutations with k − 1 elements and j fixed points and insert element k in an existing cycle of length > 1 in front of one of the k − 1 − j elements that are not fixed points. (3.c) We may choose one of the f(k − 1, j + 1) permutations with k − 1 elements and j + 1 fixed points and join element k with one of the j + 1 fixed points to a cycle of length 2. Some values Alternate calculations See also Cyclic permutation Cycle notation Notes References Permutations Fixed points (mathematics)
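Both recurrences above translate directly into code. The short Python sketch below implements them with memoization; the function names are mine, and the closing assertions check the sanity condition that each family of counts sums to k! over all j.

from math import factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling_cycles(k, j):
    """Unsigned Stirling number of the first kind: permutations of k elements
    with exactly j cycles, via s(k,j) = s(k-1,j-1) + (k-1)*s(k-1,j)."""
    if k == 0 and j == 0:
        return 1
    if k <= 0 or j <= 0 or j > k:
        return 0
    return stirling_cycles(k - 1, j - 1) + (k - 1) * stirling_cycles(k - 1, j)

@lru_cache(maxsize=None)
def fixed_point_count(k, j):
    """Number of permutations of k elements with exactly j fixed points,
    via the three-term recurrence stated above."""
    if j < 0 or j > k:
        return 0
    if k == 0:
        return 1
    return (fixed_point_count(k - 1, j - 1)
            + (k - 1 - j) * fixed_point_count(k - 1, j)
            + (j + 1) * fixed_point_count(k - 1, j + 1))

# Every permutation of 5 elements is counted exactly once by each family.
assert sum(stirling_cycles(5, j) for j in range(6)) == factorial(5)
assert sum(fixed_point_count(5, j) for j in range(6)) == factorial(5)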
Cycles and fixed points
Mathematics
895
24,145,055
https://en.wikipedia.org/wiki/C21H36O2
The molecular formula C21H36O2 may refer to: Allopregnanediol, or 5α-pregnane-3α,20α-diol, a steroid Adipostatin A, an alkylresorcinol Pregnanediol, a steroid Molecular formulas
C21H36O2
Physics,Chemistry
79
20,213,460
https://en.wikipedia.org/wiki/Veracevine
Veracevine is an alkaloid that occurs in the seeds of Schoenocaulon officinale. It is used as an insecticide in veterinary medicine. See also Veratridine, a related alkaloid References Isoquinoline alkaloids Insecticides
Veracevine
Chemistry
59
22,022,792
https://en.wikipedia.org/wiki/Cybermethodology
Cybermethodology is a newly emergent field that focuses on the creative development and use of computational and technological research methodologies for the analysis of next-generation data sources such as the Internet. The first formal academic program in Cybermethodology is being developed by the University of California, Los Angeles. Background Cybermethodology is an outgrowth of two relatively new academic fields. The first is technology and society. This field focuses on the impact of research and innovation on society, and related policy issues. Many universities, including Berkeley, Cornell, MIT and Stanford offer degrees and/or programs of study in this and related fields. A great strength of technology and society studies is that it exists at the intersection of the natural and social sciences, engineering, and public policy. The second field closely integrated with cybermethodology is Internet studies. This recently developed field has generated programs at several universities including Minnesota, Washington, Brandeis, and Georgetown. Internet studies involves the study of the fundamental workings of the Internet as well as learning about entities and issues such as Internet security, on-line communities and gaming, Internet culture, and intellectual property. Nature Cybermethodology is the component of internet and technology studies that is specifically concerned with the use of innovative technology-based methods of analysis, new sources of data, and conceptualizations in order to gain a better understanding of human behavior. It is characterized by the use, as primary data sources, of emergent entities such as virtual worlds, blogs, texting, on-line gaming (mmorpgs), social networking sites, video sharing, wikis, search engines, and numerous other innovative tools and activities available on the web. Major components of cybermethodology include: Basic Cyber-Literacy, a core knowledge of information technology and Internet tools such as statistical and analytic software, electronic library resources, digital devices, and use of the Internet as a source of data. The Research Life Cycle, knowledge of the data lifecycle from acquisition and input to archiving and accessibility. Non-Linear Technologies, including hyperlinks, dynamics surveys, and technological methods such as neuroimaging. Programming Concepts, including the ability to create new interactive research tools. Analytical Methods and their relationship to different types of data: non-linear, qualitative, spatial, time-variant processes, and agent-based information such as rules of social interaction and agent mental representations. Modes of Interaction extending beyond person-to-person interviews, on-site fieldwork, and anonymous surveys to contemporary environments such as online and virtual communities and interaction through games and virtual environments. Research Presentation including the use of new media techniques, issues raised by intended or unintended rapid dissemination of results by electronic means to untargeted audiences, and the dynamic potentially interactive nature of cyber-research. Meta-Literacy, the ability to critically evaluate the methods, tools, and results of cyber-research. See also Cyberculture Cyberinfrastructure Cyberpsychology Cyberspace Scientific method Social software References Computing and society Social science methodology
Cybermethodology
Technology
623
54,201,555
https://en.wikipedia.org/wiki/Toxic%20unit
Toxic units (TU) are used in the field of toxicology to quantify the interactions of toxicants in binary mixtures of chemicals. A toxic unit for a given compound is based on the concentration at which there is a 50% effect (e.g. EC50) for a certain biological endpoint. One toxic unit is equal to the EC50 for a given endpoint for a specific biological effect over a given amount of time. Toxic units allow for the comparison of the individual toxicities of a binary mixture to the combined toxicity. This allows researchers to categorize mixtures as additive, synergistic or antagonistic. Synergism and antagonism are defined by mixtures that are more or less toxic than predicted by the sum of their toxic units. Contaminants are frequently present as mixtures in the environment. Regulatory decisions are based on mixture toxicity models that assume additivity, which can result in under or overestimation of toxic effects. Refining our understanding of mixture interactions can lead to better informed environmental management and decision making. In addition, exploring mixture interactions can elucidate the mechanisms of action for specific toxicants which, in many cases, are poorly understood. Methods Application of toxic units requires toxicity data for the individual components of the mixture as well as specialized mixture toxicity data. Evaluating the response of each individual chemical allows researchers to generate a new dosing metric, toxic units, which is standardized to the toxicity of each chemical. Since the toxicity of two compounds may vary widely, 1 toxic unit of two different compounds could correspond to two very different concentrations on a per mass basis. In addition to the toxicity of the individual components, use of toxic units requires a 2x2 factorial design concentration series where the response is measured to an increase of each contaminant with the other contaminant held constant. This elaborate concentration series allows researchers to describe how the mixture components interact with each other and predict effects at untested combinations of components with nonlinear regression models. Point estimates Point estimation is a technique to predict population parameters based on available sample data and can be used to relate the mass based concentration to a toxicity based metric. Point estimates in toxicology are frequently response endpoints on a dose response curve. These point estimates predict at what concentration one would expect to see a given biological endpoint like 50% mortality (LC50). Any toxicological endpoint (growth inhibition, reproduction, behavior etc.) can be used as the toxicity metric to convert from mass based concentration to toxic units. Point estimates are generated by fitting a nonlinear regression model to toxicity data and using that model to predict the concentration of chemical required to elicit a known response of the biological endpoint. Equation and calculations One toxic unit can be defined by the researcher as the concentration of a given chemical required to cause a given toxicological endpoint (LC50, EC50, IC50): 1 TU = LC50, or 1 TU = IC50 for inhibition of growth. Since the mass or molar based concentrations of different chemicals required to cause a given endpoint like an LC50 may vary widely, the concentration that corresponds to 1 TU is specific to each individual chemical tested. Isobolograms Isobolograms are one way to present the results of binary mixture toxicity testing based on toxic units. 
The strength of the isobologram method is its simplicity and ease of use. First, a line of additivity is plotted that corresponds to all the combinations of the two chemicals that would result in one toxic unit. Next, the experimental results from binary mixture tests are plotted on the isobologram. The results from the mixture test are point estimates from the mixture dose-response curves that correspond to the single chemical tests. When these mixture point estimates are plotted on the isobologram, the region that they fall into (based on the concentrations of the two chemicals required to cause that given endpoint) demonstrates whether the mixture interactions are additive, synergistic or antagonistic. Response surfaces Response surfaces are a more advanced and complex way to visualize the same information presented in an isobologram. A response surface is a three-dimensional graph with concentrations of individual components in toxic units on the x and y axes and the response variable on the z axis. This three-dimensional representation of the organism's response to the two chemical stressors can be used to predict the toxicity of any combination of the components based on the nonlinear regression models that form the response surface. Antagonistic, additive, and synergistic effects The primary utility of toxic units is to classify mixture interactions as additive, synergistic or antagonistic. Additivity means that the toxicity of the mixture is equal to the sum of the toxicities of the individual components. Additivity is the default assumption of models used to predict toxicity of mixtures for regulatory and environmental management purposes. Synergistic effects occur when the experimental toxicity of the mixture is greater than the sum of the individual toxicities. Conversely, antagonistic effects occur when the experimental toxicity of a mixture is less than would be predicted by additivity. Understanding mixture interactions can prevent over- or underestimation of toxicity by regulators who assume additivity for uncategorized mixtures. Applications Equilibrium partitioning sediment benchmarks The U.S. EPA uses toxic units as a benchmark, called the equilibrium partitioning sediment benchmark (ESB), for predicting the toxicity of polycyclic aromatic hydrocarbon (PAH)-contaminated sediments to benthic invertebrates. Toxic units are calculated from sediment concentrations of 34 PAHs and their expected sediment, water, and lipid partitioning behavior. Based on the equilibrium partitioning approach (which accounts for the varying biological availability of chemicals in different sediments), the ESB for total PAH is the sum, over a minimum of 34 individual PAHs, of the quotient of each PAH's concentration in a specific sediment divided by the final chronic value concentration for that PAH in sediment. According to the EPA, freshwater or saltwater sediments that contain less than or equal to 1.0 toxic units of the mixture of 34 or more PAHs are acceptable for the protection of benthic organisms. Sediments that are greater than 1.0 toxic units are not protective and potentially have adverse effects on benthic organisms. EPA ESBs do not consider antagonistic, additive, or synergistic effects of other sediment contaminants and have been criticized as an overly conservative estimate for pyrogenic PAHs (such as those from manufactured gas plant processes) when only 13 parent PAHs are measured, due to the correction factor used to account for the PAHs not measured.
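The ESB calculation just described is, at its core, a sum of concentration-to-benchmark quotients. The sketch below illustrates only that arithmetic; the PAH labels, sediment concentrations, and final chronic values are hypothetical placeholders, not the EPA's published values, and a real application would also apply the equilibrium-partitioning corrections the EPA specifies.

# Minimal sketch of the sum-of-quotients logic behind the EPA's
# equilibrium partitioning sediment benchmark (ESB) for PAH mixtures.
# Names, concentrations, and final chronic values (FCVs) here are
# hypothetical placeholders, not the EPA's published numbers.

pah_data = {
    # label: (sediment concentration, final chronic value), same units
    "PAH_1": (1.2, 10.0),
    "PAH_2": (0.4, 2.0),
    "PAH_3": (3.0, 60.0),
    # a real ESB calculation sums over at least 34 PAHs
}

esb_tu = sum(conc / fcv for conc, fcv in pah_data.values())

# A total of <= 1.0 toxic units is considered protective of benthic
# organisms; > 1.0 indicates potential adverse effects.
verdict = "protective" if esb_tu <= 1.0 else "potentially adverse"
print(f"Sum of ESB toxic units: {esb_tu:.2f} ({verdict})")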
This criticism stems in part from the analytical approaches used to determine the toxic units of pyrogenic versus petrogenic PAHs. Toxicity identification evaluation The toxicity identification evaluation (TIE) is an approach to systematically characterize, identify, and confirm toxic substances in whole sediments and sediment interstitial waters. This approach is typically carried out by the EPA. The effluent effect concentration data and the measured toxicant concentration data are transformed to toxic units for the regression analysis to evaluate whether a linear relationship exists between two or more toxicants. Limitations The limitations associated with using toxic units are largely dependent on the methodology in which they are being used. For example, the use of isobolograms is applicable only to binary mixtures. In general, toxic units are based on point estimates, which carry the limitations of the models used to project them. Point estimates, and therefore toxic units, are a simplification of a dose-response model. Information about toxic effects at concentrations other than the point estimate is lost in translation. Alternative ways to study mixtures Top-down Approach A common method for studying mixtures is to measure the total toxicity of the mixture and consider the internal toxicant interactions as irrelevant. Any mixture effects are taken into account in the total toxicity. The results of this method are limited by being mixture specific and have limited value in determining specific mechanisms of toxicity. GLM Approach Using Generalized Linear Models (GLM) allows for complex, non-parametric model fitting to describe the toxicity of complex mixtures. Generalized Linear Models are more likely to find significant differences from additivity than TU approaches. The GLM approach also allows for the alteration of models to reflect current knowledge of biological mechanisms. References Concentration indicators Toxicology Units of measurement
Toxic unit
Mathematics,Environmental_science
1,688
47,815,949
https://en.wikipedia.org/wiki/Collagen-induced%20arthritis
Collagen-induced arthritis (CIA) is a condition induced in mice (or rats) to study rheumatoid arthritis. CIA is induced in mice by injecting them with an emulsion of complete Freund's adjuvant and type II collagen. In rats, only one injection is needed, but mice are normally injected twice. References Further reading External links Animal testing Arthritis Collagens
Collagen-induced arthritis
Chemistry
88
38,805,770
https://en.wikipedia.org/wiki/Polyetherketoneketone
Polyetherketoneketone (PEKK) is a semi-crystalline thermoplastic in the polyaryletherketone (PAEK) family of polymers. It possesses high heat, chemical, and mechanical load resistance. PEKK has a glass transition temperature (Tg) of 162 °C. Applications Hexcel Corporation (formerly Oxford Performance Materials) manufactures PEKK-based parts for Boeing for use in its Starliner space taxis, using 3D printing-based additive manufacturing. It has wide dental and medical applications. The parts are claimed to be as strong as aluminum while being 40 percent of the weight. In addition, components manufactured from PEKK have shown fire- and radiation-resistant properties. Oxford Performance Materials performs biomedical 3D printing of PEKK parts. In addition, the company has a solution casting technology to apply a nanoscale coating of PEKK to various media. References Organic polymers Polyethers Thermoplastics
Polyetherketoneketone
Chemistry
190
15,816,550
https://en.wikipedia.org/wiki/Isobaric%20tag%20for%20relative%20and%20absolute%20quantitation
Isobaric tags for relative and absolute quantitation (iTRAQ) is an isobaric labeling method used in quantitative proteomics by tandem mass spectrometry to determine the amount of proteins from different sources in a single experiment. It uses stable isotope labeled molecules that can be covalently bonded to the N-terminus and side chain amines of proteins. Procedure The iTRAQ method is based on the covalent labeling of the N-terminus and side chain amines of peptides from protein digestions with tags of varying mass. Two reagents are currently in common use: 4-plex and 8-plex, which can be used to label all peptides from different samples/treatments. These samples are then pooled and usually fractionated by liquid chromatography and analyzed by tandem mass spectrometry (MS/MS). A database search is then performed using the fragmentation data to identify the labeled peptides and hence the corresponding proteins. The fragmentation of the attached tag generates a low molecular mass reporter ion that can be used to relatively quantify the peptides and the proteins from which they originated. Data evaluation At the peptide level, the signals of the reporter ions of each MS/MS spectrum allow for calculating the relative abundance (ratio) of the peptide(s) identified by this spectrum. The abundance of the reporter ions may consist of more than one single signal in the MS/MS data, and the signals have to be integrated in some way from the recorded spectrum. At the protein level, the combined ratios of a protein's peptides represent the relative quantification of that protein. The MS/MS spectra can be analyzed using software that is freely available: i-Tracker and jTraqX References Further reading Mass spectrometry Biochemistry detection methods Protein methods
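The quantification step described above amounts to forming ratios of reporter-ion intensities and then combining those ratios across a protein's peptides. The following Python sketch illustrates only that logic for a 4-plex experiment; the intensities are invented, the median is just one reasonable way to combine peptide ratios, and real pipelines additionally correct for the isotopic impurity of the tags and normalize between channels.

# Illustrative sketch of iTRAQ-style relative quantification.
# Reporter-ion intensities below are invented; in a 4-plex experiment
# each MS/MS spectrum yields one intensity per reporter channel
# (nominally 114-117 m/z), and channel 114 is treated here as the
# reference sample.

from statistics import median

spectra = {
    # peptide spectrum -> intensities for channels 114, 115, 116, 117
    "peptide_1": [1000.0, 1500.0, 900.0, 2000.0],
    "peptide_2": [800.0, 1300.0, 700.0, 1700.0],
    "peptide_3": [1200.0, 1700.0, 1100.0, 2300.0],
}

# Peptide-level ratios relative to the reference channel.
ratios = {
    pep: [intensity / channels[0] for intensity in channels]
    for pep, channels in spectra.items()
}

# Protein-level ratio: combine the peptide ratios per channel,
# here by taking the median across peptides.
n_channels = 4
protein_ratio = [
    median(ratios[pep][ch] for pep in ratios) for ch in range(n_channels)
]
print(protein_ratio)  # relative abundance of the protein per sample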
Isobaric tag for relative and absolute quantitation
Physics,Chemistry,Biology
368
69,058,158
https://en.wikipedia.org/wiki/Humu%20%28software%29
Humu is a software company that uses machine learning to send "nudges," small recommendations based in nudge theory, to employees at work. Since August 2023, it is a subsidiary of Perceptyx. History Humu was founded in May 2017 by former Google executives Laszlo Bock, Wayne Crosby, and Jessie Wisdom. Before founding Humu, Laszlo Bock served as Google's original Head of People Operations. Humu exited stealth mode in October 2018 with $40 million in funding. Humu analyzes company data and employee feedback to identify changes likely to improve employees' happiness, performance, and retention. The platform then delivers "nudges," short messages urging users to change their behavior. The company holds a trademark on "Nudge Engine," based on the behavioral economics concept of nudge theory from Nobel Prize-winning economist Richard Thaler and popularized in Thaler's 2008 book Nudge, co-authored with legal scholar Cass R. Sunstein. The book argues that small cues can help people make better choices. Notable customers include Fidelity Investments, Silicon Valley Bank, Lumen, Farfetch, and American fast casual restaurant chain Sweetgreen. A 2019 trademark dispute between Humu and American video streaming service Hulu was settled in federal court. On June 24, 2021, Humu announced Humu Business Edition, a personal coach for mid-sized businesses. In August 2023, Humu was acquired by Perceptyx. Funding In May 2019, Humu announced it had raised $40 million in series A and B funding led by Index Ventures and IVP. See also Captology References Human resource management software Nudge theory Behavioral economics Government by algorithm Software companies established in 2017
Humu (software)
Engineering,Biology
351
53,518,515
https://en.wikipedia.org/wiki/Theta%20nigrum
The theta nigrum () or theta infelix () is a symbol of death in Greek and Latin epigraphy. Isidore of Seville notes that the letter was appended after the name of a deceased soldier, and finds of papyri containing military records have confirmed this use. Additionally, it can be seen in the Gladiator Mosaic. The term theta nigrum was coined by Theodor Mommsen. It consists of a circle with a diagonal line. The theta signified Thanatos, the Greek deity of death. See also References External links Cultural aspects of death Symbols Epigraphy Greek letters
Theta nigrum
Mathematics
125
64,692,781
https://en.wikipedia.org/wiki/Gravitational%20memory%20effect
Gravitational memory effects, also known as gravitational-wave memory effects, are predicted persistent changes in the relative position of pairs of masses in space due to the passing of a gravitational wave. Detection of gravitational memory effects has been suggested as a way of validating general relativity. In 2014, Andrew Strominger and Alexander Zhiboedov showed that the formula related to the memory effect is the Fourier transform in time of Weinberg's soft graviton theorem. Linear and non-linear effects There are two kinds of predicted gravitational memory effect: one based on a linear approximation of Einstein's equations, first proposed in 1974 by the Soviet scientists Yakov Zel'dovich and A. G. Polnarev and later developed by Vladimir Braginsky and L. P. Grishchuk, and a non-linear phenomenon known as the non-linear memory effect, which was first proposed in the 1990s by Demetrios Christodoulou. The non-linear memory effect could be exploited to determine the inclination, relative to the observer, of the orbital plane of the two merging objects that generated the gravitational waves. This would make the calculation of their distance more precise, since the measured amplitude of the received wave depends on both the distance of the source and this inclination. Gravitational spin memory In 2016, a new type of memory effect, induced by gravitational waves incident on rays of light moving along circular trajectories perpendicular to the waves, was proposed by Sabrina Gonzalez Pasterski, Strominger and Zhiboedov. This is caused by the angular momentum of the waves themselves and is therefore termed gravitational spin memory. As in the previous case, this memory also turns out to be a Fourier transform in time, but, in this case, of the soft graviton theorem expanded to the subleading order. Detection The effect should, in theory, be detectable by recording changes in the distance between pairs of free-falling objects in spacetime before and after the passage of gravitational waves. The proposed LISA detector is expected to detect the memory effect easily. In contrast, detection with the existing LIGO is complicated by two factors. First, LIGO detection targets a higher frequency range than is desirable for detection of memory effects. Second, LIGO is not in free-fall, and its parts will drift back to their equilibrium position following the passage of the gravitational waves. However, as thousands of events from LIGO and similar earth-based detectors are recorded and statistically analyzed over the course of several years, the cumulative data may be sufficient to confirm the existence of the gravitational memory effect. See also Pasterski–Strominger–Zhiboedov triangle References External links Gravitational-wave memory: an overview by Marc Favata The gravitational memory effect: what it is and why Stephen and I did not discover it by Gary Gibbons Astronomy General relativity Effects of gravity
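The geometric content of the displacement memory can be stated compactly. The following is the standard textbook relation for two nearby free-falling test masses in the transverse-traceless (TT) gauge, given here as an illustrative summary rather than a formula taken from any of the papers cited above. For masses with initial separation $\xi^{j}$, the permanent change in separation after the wave has passed is set by the net change in the TT strain:

$$\Delta \xi^{j} \;=\; \tfrac{1}{2}\,\Delta h^{\mathrm{TT}}_{jk}\,\xi^{k},
\qquad
\Delta h^{\mathrm{TT}}_{jk} \;=\; \lim_{t\to+\infty} h^{\mathrm{TT}}_{jk}(t) \;-\; \lim_{t\to-\infty} h^{\mathrm{TT}}_{jk}(t).$$

A nonzero $\Delta h^{\mathrm{TT}}_{jk}$ is precisely what "memory" means: the masses do not return to their original relative positions. This is also why a free-falling detector such as LISA is better suited to the measurement than a suspended interferometer whose components relax back to equilibrium.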
Gravitational memory effect
Physics,Astronomy
592
15,494,156
https://en.wikipedia.org/wiki/IEEE%20Journal%20of%20Selected%20Topics%20in%20Quantum%20Electronics
The IEEE Journal of Selected Topics in Quantum Electronics is a bimonthly peer-reviewed scientific journal published by the IEEE Photonics Society. It covers research on quantum electronics. The editor-in-chief is José Capmany (Universitat Politècnica de València). According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.544. See also IEEE Journal of Quantum Electronics References External links Journal of Selected Topics in Quantum Electronics Quantum mechanics journals Optics journals Electronics journals Bimonthly journals English-language journals Academic journals established in 1995
IEEE Journal of Selected Topics in Quantum Electronics
Physics
116
41,404
https://en.wikipedia.org/wiki/Net%20operation
A radio net is three or more radio stations communicating with each other on a common channel or frequency. A net is essentially a moderated conference call conducted over two-way radio, typically in half-duplex operating conditions. The use of half-duplex operation requires a very particular set of operating procedures to be followed in order to avoid inefficiencies and chaos. Nets operate either on schedule or continuously (continuous watch). Nets operating on schedule handle traffic only at definite, prearranged times and in accordance with a prearranged schedule of intercommunication. Nets operating continuously are prepared to handle traffic at any time; they maintain operators on duty at all stations in the net at all times. When practicable, messages relating to schedules will be transmitted by a means of signal communication other than radio. Net operations: allow participants to conduct ordered conferences among stations that usually have common information needs or related functions to perform; are characterized by adherence to standard formats and procedures; and are responsive to a common supervisory station, called the "net control station", which permits access to the net and maintains net operational discipline. Net manager A net manager is the person who supervises the creation and operation of a net over multiple sessions. This person will specify the format, date, time, participants, and the net control script. The net manager will also choose the Net Control Station for each net, and may occasionally take on that function, especially in smaller organizations. Net Control Station Radio nets are like conference calls in that both have a moderator who initiates the group communication, who ensures all participants follow the standard procedures, and who determines and directs when each of the other stations may talk. The moderator in a radio net is called the Net Control Station, formally abbreviated NCS, and has the following duties: Establishes the net and closes the net; Directs net activities, such as passing traffic, to maintain optimum efficiency; Chooses net frequency, maintains circuit discipline and frequency accuracy; Maintains a net log and records participation in the net and movement of messages (always knows who is on and off net); Appoints one or more Alternate Net Control Stations (ANCS); Determines whether and when to conduct network continuity checks; Determines when full procedure and full call signs may enhance communications; Subject to Net Manager guidance, directs a net to be directed or free. The Net Control Station will, for each net, appoint at least one Alternate Net Control Station, formally abbreviated ANCS (abbreviated NC2 in WWII procedures), who has the following duties: Assists the NCS to maintain optimum efficiency; Assumes NCS duties in the event that the NCS develops station problems; Assumes NCS duties for a portion of the net, as directed or as needed; Serves as a resource for the NCS; echoes transmissions of the NCS if, and only if, directed to do so by the NCS; Maintains a duplicate net log. Structure of the net Nets can be described as always having a net opening and a net closing, with a roll call normally following the net opening, itself followed by regular net business, which may include announcements, official business, and message passing. Military nets will follow a very abbreviated and opaque version of the structure outlined below, but will still have the critical elements of opening, roll call, late check-ins, and closing.
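As an illustration only, the ordered phases just mentioned (and enumerated in the outline that follows) can be modelled as a simple state sequence. The Python sketch below is a hypothetical model, not part of any published radio procedure; the phase names follow this article, and everything else is invented to show the rule that each phase may begin only after its predecessor has closed.

# Illustrative model of the phase ordering of a directed net.
# Phase names follow the outline in this article; the class itself
# is a hypothetical sketch, not a published procedure.

PHASES = [
    "net opening",
    "roll call",
    "late check-ins",
    "guest check-ins",
    "net business",
    "net closing",
]

class NetSession:
    def __init__(self):
        self.index = -1  # no phase entered yet

    def advance(self):
        """Move to the next phase; phases may not be skipped or reordered."""
        if self.index + 1 >= len(PHASES):
            raise RuntimeError("net already closed")
        self.index += 1
        return PHASES[self.index]

session = NetSession()
while session.index < len(PHASES) - 1:
    print(session.advance())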
A net should always operate on the same principle as the inverted pyramid used in journalism: the most important communications always come first, followed by content in ever lower levels of priority. Net opening Identification of the NCS Announcement of the regular date, time, and frequency of the net Purpose of the net Roll call A call for stations to check in, oftentimes from a roster of regular stations A call for late check-ins (stations on the roster who did not respond to the first check-in period) A call for guest stations to check in Net business Optional conversion to a free net Net closing Each net will typically have a main purpose, which varies according to the organization conducting the net and which is pursued during the net business phase. For amateur radio nets, it is typically for the purpose of allowing stations to discuss their recent operating activities (stations worked, antennas built, etc.) or to swap equipment. For Military Auxiliary Radio System and National Traffic System nets, net business will involve mainly the passing of formal messages, known as radiograms. Two modes of net operation Directed Net A net in which no station other than the net control station can communicate with any other station, except for the transmission of urgent messages, without first obtaining the permission of the net control station. Free net A net in which any station may communicate with any other station in the same net without first obtaining permission from the net control station to do so. Net-control procedure words The Allied Communication Publication ACP 125(G) has the most complete set of procedure words used in radio nets: Types of radio nets Maritime mobile nets serve the needs of seagoing vessels. Civil Air Patrol nets The Civil Air Patrol defines a different set of nets: Amateur radio nets The International Amateur Radio Union defines six different types of nets in its IARU Emergency Telecommunications Guide: Other Amateur radio net types U.S. Military radio nets The U.S. Army Field Manual FM 6-02.53, Tactical Radio Operations, defines the following types of radio nets: Maritime radio nets When boats or ships are in distress, they will operate a maritime broadcast communications net to communicate among the vessel in distress and all the other vessels, aircraft, and shore stations assisting in the distress response. See also Amateur radio net References External links Telecommunications
Net operation
Technology
1,136
2,945,164
https://en.wikipedia.org/wiki/Ecofascism
Ecofascism (sometimes spelled eco-fascism) is a term used to describe individuals and groups which combine environmentalism with fascism. Philosopher André Gorz characterized eco-fascism as hypothetical forms of totalitarianism based on an ecological orientation of politics. Similar definitions have been used by others in older academic literature in accusations of ecofascism or "environmental fascism". However, since the 2010s, a number of individuals and groups have emerged that either self-identify as "ecofascist" or have been labelled as "ecofascist" by academic or journalistic sources. These individuals and groups synthesise radical far-right politics with environmentalism, and will typically argue that overpopulation is the primary threat to the environment and that the only solution is a complete halt to immigration or, at their most extreme, genocide against non-White groups and ethnicities. Many far-right political parties have added green politics to their platforms. Through the 2010s, ecofascism saw increasing support. Definition In 2005, environmental historian Michael E. Zimmerman defined "ecofascism" as "a totalitarian government that requires individuals to sacrifice their interests to the well-being of the 'land', understood as the splendid web of life, or the organic whole of nature, including peoples and their states". This was supported by philosopher Patrick Hassan's work analysing historical accusations of ecofascism in academic literature. Zimmerman argued that while no ecofascist government has existed so far, "important aspects of it can be found in German National Socialism, one of whose central slogans was "Blood and Soil". Other political agendas pursue nationalist approaches to climate rather than environmental protection and the prevention of climate change; these include national economic environmentalism, the securitization of climate change, and ecobordering. Ecofascists often believe there is a symbiotic relationship between a nation-group and its homeland. They often blame the global south for ecological problems, with their proposed solutions often entailing extreme population control measures based on racial categorisations, and advocating for the accelerated collapse of current society to be replaced by fascist societies. This latter belief is often accompanied by vocal support for terrorist actions. Vice has defined ecofascism as an ideology "which blames the demise of the environment on overpopulation, immigration, and over-industrialization, problems that followers think could be partly remedied through the mass murder of refugees in Western countries." Environmentalist author Naomi Klein has suggested that ecofascists' primary objectives are to close borders to immigrants and, on the more extreme end, to embrace the idea of climate change as a divinely-ordained signal to begin a mass purge of sections of the human race. Ecofascism is "environmentalism through genocide", opined Klein. Political researcher Alex Amend defined ecofascist belief as "The devaluing of human life—particularly of populations seen as inferior—in order to protect the environment viewed as essential to White identity." Terrorism researcher Kristy Campion defined ecofascism as "a reactionary and revolutionary ideology that champions the regeneration of an imagined community through a return to a romanticised, ethnopluralist vision of the natural order."
The European Commission describes ecofascism as the "weaponization of climate change by far right populist political parties and white supremacist groups". Tactics of this weaponization include the use of language equating actors in population and migration discourses with components of the climate crisis. According to a policy brief for the International Center for Counter-Terrorism, this "linguistic violence" entails that "the invasion of non-native species that threaten the environment becomes synonymous with the invasion of immigrants, the protection of the environment with the protection of borders, trash with people, and environmental cleansing with ethnic cleansing." Helen Cawood and Xany Jansen Van Vuuren have criticised previous attempts to define ecofascism as focusing too heavily on environmental and ecological conservationism in historical fascist movements, and the resulting definitions as being too broad, encompassing many ontologically different ideologies. In their criticism they summarise the current definition of ecofascism as used in the academic literature as "a movement that uses environmental and ecological conservationist talking points to push an ideology of ethnic or racial separatism". This is supported by Blair Taylor's statement that ecofascism refers to "groups and ideologies that offer authoritarian, hierarchical, and racist analyses and solutions to environmental problems". Similarly, extremism researchers Brian Hughes, Dave Jones, and Amarnath Amarasingam argue that ecofascism is less a coherent ideology and more a cultural expression of mystical, anti-humanist romanticism. This is further supported by Maria Darwish, whose research into the Nordic Resistance Movement found that while there is concern for environmental issues, they are "a concern for Neo-Nazis only in so far as it supports and popularizes the backstage mission of the NRM", that is, the implementation of a fascist regime, and by Jacob Blumenfeld, who states that "ecofascism names a specific far-right ideology that rationalizes white supremacist violence by invoking imminent ecological collapse and scarce natural resources". Borrowing from the "watermelon" analogy of eco-socialism, Berggruen Institute scholar Nils Gilman has coined the term "avocado politics" for eco-fascism, being "green on the outside but brown(shirt) at the core". In his book "", the political scientist Carlos Taibo characterises the phenomenon as a response to crises brought about by climate change. The ecofascist solution is to "[P]reserve increasingly scarce resources for a select minority. And to marginalize – in the mildest version – and exterminate – in the harshest – what are seen as surplus populations, on a planet that has visibly exceeded its limits." Crucially, Taibo argues that far from being circumscribed to the margins of right-wing extremism, which traditionally has mostly been associated with climate change denial, ecofascist notions are likely to be pursued by "political forces we usually label as liberal and social-democratic", emerging within major centers of power in the west and among elites in the developing world. From this perspective, the antecedents of ecofascism, extending beyond ecological currents in fascist movements of the past, would be ideologies typical of Western colonialism, returning in modernised forms. Ideological origins Madison Grant Sometimes dubbed the "founding father" of ecofascism, Madison Grant was a pioneer of conservationism in America in the late 19th and early 20th century.
Grant is credited as a founder of modern wildlife management. Grant built the Bronx River Parkway, was a co-founder of the American Bison Society, and helped create Glacier National Park, Olympic National Park, Everglades National Park and Denali National Park. As president of the New York Zoological Society, he founded the Bronx Zoo in 1899. In addition to his conservationist work, Grant was a racist. In 1906, Grant supported the placement of Ota Benga, a member of the Mbuti people who was kidnapped, removed from his home in the Congo, and put on display in the Bronx Zoo as an exhibit in the Monkey House. In 1916, Grant wrote The Passing of the Great Race, a work of pseudoscientific literature which claimed to give an account of the anthropological history of Europe. The book divides Europeans into three races: Alpines, Mediterraneans, and Nordics, and claims that the first two races are inferior to the superior Nordic race, which is the only race fit to rule the earth. Adolf Hitler would later describe Grant's book as "his bible" and Grant's "Nordic theory" became the bedrock of Nazi racial theories. Additionally, Grant was a eugenicist: he cofounded and directed the American Eugenics Society and advocated the culling of the unfit from the human population. Grant concocted a 100-year plan to perfect the human race, a plan in which one ethnic group after another would be killed off until racial purity was obtained. Grant campaigned for the passage of the Emergency Quota Act of 1921 and of the Immigration Act of 1924, which drastically reduced the number of immigrants from eastern Europe and Asia who were allowed to enter the United States. In the modern era, Grant's ideas have been cited by advocates of far-right politics such as Richard Spencer and Anders Breivik. Nazism The authors Janet Biehl and Peter Staudenmaier suggest that the synthesis of fascism and environmentalism began with Nazism, stating that 19th and 20th century Germany was an early center of ecofascist thought, finding its antecedents in many prominent natural scientists and environmentalists, including Ernst Moritz Arndt, Wilhelm Heinrich Riehl, and Ernst Haeckel, whose works and ideas were later established as policies in the Nazi regime. This is supported by other researchers who identify the Völkisch movement as an ideological originator of later ecofascism. In Biehl and Staudenmaier's book Ecofascism: Lessons from the German Experience, they note the Nazi Party's interest in ecology, and suggest its interest was "linked with traditional agrarian romanticism and hostility to urban civilization". Zimmerman points to the works of the conservationist and Nazi Walther Schoenichen as having pertinence to later ecofascism and similarities to developments in deep ecological understanding. During the Nazi rise to power, there was strong support for the Nazis among German environmentalists and conservationists. Richard Walther Darré, a leading Nazi ideologist and Reich Minister of Food and Agriculture who invented the term "Blood and Soil", developed a concept of the nation having a mystic connection with its homeland, under which the nation was duty-bound to take care of the land. This was supported by other Nazi theorists such as Alfred Rosenberg, who wrote of how society's move from agricultural to industrialised systems broke its connection to nature and contributed to the death of the .
Similar sentiments are found in speeches from Fascist Italy's Minister of Agriculture Giuseppe Tassinari. Because of this, modern ecofascists cite the Nazi Party as an origin point of ecofascism. Beyond Darré, Rudolf Hess and Fritz Todt are viewed as representatives of environmentalism within the Nazi party. Roger Griffin has also pointed to the glorification of wildlife in Nazi art and ruralism in the novels of the fascist sympathizers Knut Hamsun and Henry Williamson as examples. After the outlawing of the neo-nazi Socialist Reich Party, one of its members, August Haußleiter, moved towards organising within the environmental and anti-nuclear movements, going on to become a founding member of the German Green Party. When green activists later uncovered his past activities in the neo-nazi movement, Haußleiter was forced to step down as the party's chairman, although he continued to hold a central role in the party newspaper. As efforts to expel nationalist elements within the party continued, a conservative faction split off and founded the Ecological Democratic Party, which became noted for persistent holocaust denial, rejection of social justice and opposition to immigration. Savitri Devi The French-born Greek fascist Savitri Devi (born Maximiani Julia Portas) was a prominent proponent of Esoteric Nazism and deep ecology. A fanatical supporter of Hitler and the Nazi Party from the 1930s onwards, she also supported animal rights activism and was a vegetarian from a young age. In her works, such as Impeachment of Man (1959), she espoused ecologist views on animal rights and nature. In accordance with her ecologist views, human beings do not stand above the animals; instead, humans are a part of the ecosystem and, as a result, should respect all forms of life, including animals and the whole of nature. Because of her dual devotion to Nazism and deep ecology, she is considered an influential figure in ecofascist circles. Malthusianism Malthusian ideas of overpopulation have been adopted by ecofascists, who use Malthusian rationale in anti-immigration arguments and seek to resolve the perceived global issue by enforcing population control measures on the global south and on racial minorities in white-majority countries. Such Malthusian ideas are often paired with Social Darwinist and eugenicist views. Ted Kaczynski, the Unabomber Ted Kaczynski, better known as "The Unabomber", is cited as a figure who was highly influential in the development of ecofascist thought, and features prominently in contemporary ecofascist propaganda. Between 1978 and 1995 Kaczynski instigated a terrorist bombing campaign aimed at inciting a revolution against modern industrial society, in the name of returning humanity to a primitive state that he suggested offered humanity more freedom while protecting the environment. In 1995 Kaczynski offered to end his bombing campaign if The Washington Post or The New York Times would publish his 35,000-word Unabomber manifesto. Both newspapers agreed to those terms. The manifesto railed not only against modern industrial society but also against "modern leftists", whom Kaczynski defined as "mainly socialists, collectivists, 'politically correct' types, feminists, gay and disability activists, animal rights activists and the like".
Because of Kaczynski's intelligence and his ability to write in a high-level academic tone, his manifesto was given serious consideration upon its release and became highly influential, even amongst those who strongly disagreed with his use of violence. Kaczynski's staunchly radical pro-green, anti-left work was quickly absorbed into ecofascist thought. Kaczynski also criticized right-wing activists who complained about the erosion of traditional social mores because they supported technological and economic progress, a view which he opposed. He stated that technology erodes traditional social mores that conservatives and right-wingers want to protect, and he referred to conservatives as fools. Although Kaczynski and his manifesto have been embraced by ecofascists, he rejected "fascism", including specifically "the 'ecofascists'", describing 'ecofascism' itself as 'an aberrant branch of leftism'. In his manifesto, Kaczynski wrote that he considered fascism a "kook ideology" and he also wrote that he considered Nazism "evil". Kaczynski never tried to align himself with the far-right at any point before or after his arrest. In 2017, Netflix released a dramatisation of Kaczynski's life, titled Manhunt: Unabomber. Once again, the popularity of the show thrust Kaczynski and his manifesto into the public's mind and raised the profile of ecofascism. Garrett Hardin, Pentti Linkola, and "Lifeboat Ethics" Two figures influential in ecofascism are Garrett Hardin and Pentti Linkola, both of whom were proponents of what they referred to as "Lifeboat Ethics". Hardin was a professor of Human Ecology at the University of California who was often described as a white nationalist. His work focused on the ethics of overpopulation and population control, suggesting methods like "birth control, abortion, and sterilization". He not only made medical suggestions but also stood against immigration and for the end of foreign aid. Linkola was a Finnish ecologist and radical Malthusian, accused of being an ecofascist, who advocated ending democracy and replacing it with dictatorships that would use totalitarian and even genocidal tactics to end climate change. Both men used versions of the following analogy to illustrate their viewpoint: Renaud Camus Renaud Camus' conspiracy theory, the Great Replacement, has been influential on ecofascism, being referenced explicitly in multiple manifestos and having its ideas relayed in others. In the conspiracy theory, the "native" white populations of western countries are being replaced by non-white populations as a directed political effort. Association with violence Ecofascist violence has occurred throughout the 21st century, with academics and researchers warning that as ecological crises worsen and remain unaddressed, support for ecofascism and violence in the name of ecofascism will increase. In December 2020, the Swedish Defence Research Agency released a report on ecofascism. The paper argued that ecofascism is intimately tied to the ideology of accelerationism, and that ecofascists nearly exclusively choose terror tactics over the political approach. Further, the SDRA argues that not all ecofascist mass shooters have been recognized as such: Pekka-Eric Auvinen, who shot eight people in Finland in 2007 before killing himself, adhered to the ideology according to his manifesto, titled "The Natural Selector's Manifesto". He advocated "total war against humanity" due to the threat humanity posed to other species.
He wrote that death and killing are not a tragedy, as they happen constantly in nature among all species. Auvinen also wrote that modern society hinders "natural justice" and that all inferior "subhumans" should be killed and only the elite of humanity be spared. In one of his YouTube videos, Auvinen paid tribute to the prominent deep ecologist Pentti Linkola. 2010s James Jay Lee, the eco-terrorist who took several hostages at the Discovery Communications headquarters on 1 September 2010, was described as an ecofascist by Mark Potok of the Southern Poverty Law Center. Anders Breivik committed the 2011 Norway attacks on 22 July 2011, in which he killed eight people by detonating a van bomb at Regjeringskvartalet in Oslo, and then killed 69 participants of a Workers' Youth League (AUF) summer camp in a mass shooting on the island of Utøya. While dismissive of climate change, Breivik was concerned in his manifesto with the carrying capacity of the planet, taking inspiration from Kaczynski and from Grant's The Passing of the Great Race. Breivik's solution to this perceived problem was to cap the global population at 2.5 billion people, with the reduction being forced upon the global south. Through his actions he sought to inspire other terrorist attacks, and was an inspiration for later ecofascist terrorists. William H. Stoetzer, a member of the Atomwaffen Division, an organisation responsible for at least eight murders, was active in the Earth Liberation Front as late as 2008 and joined Atomwaffen in 2016. Brenton Tarrant, the Australian-born perpetrator of the Christchurch mosque shootings in New Zealand, described himself as an ecofascist, ethno-nationalist, and racist in his manifesto The Great Replacement, named after a far-right conspiracy theory originating in France. In the manifesto Tarrant specifically mentions Breivik as an ideological and operational influence. Researchers point to Tarrant's terrorist attack as the moment when discussion of ecofascism moved from academic and specialist circles into the mainstream. Jordan Weissmann, writing for Slate, describes the perpetrator's version of ecofascism as "an established, if somewhat obscure, brand of neo-Nazi" and quotes Sarah Manavis of New Statesman as saying, "[Eco-fascists] believe that living in the original regions a race is meant to have originated in and shunning multiculturalism is the only way to save the planet they prioritise above all else". Similarly, Luke Darby clarifies it as: "eco-fascism is not the fringe hippie movement usually associated with ecoterrorism. It's a belief that the only way to deal with climate change is through eugenics and the brutal suppression of migrants." Patrick Crusius, the perpetrator of the 2019 El Paso shooting, wrote a similar manifesto, professing support for Tarrant. Posted to the online message board 8chan, it blames immigration to the United States for environmental destruction, saying that American lifestyles were "destroying the environment", invoking an ecological burden to be borne by future generations, and concluding that the solution was to "decrease the number of people in America using resources". In his manifesto, Crusius outlined how he took inspiration from Tarrant and Breivik. Crusius and Tarrant also inspired Philip Manshaus, who attacked a mosque in Norway in 2019. 2020s The Swedish self-identified ecofascist Green Brigade is an eco-terrorist group linked to The Base that is responsible for multiple mass murder plots.
The Green Brigade has been responsible for arson attacks against targets deemed to be enemies of nature, such as an attack on a mink farm that caused multi-million-dollar damage. Two members were arrested by Swedish police for allegedly planning to assassinate judges and carry out bombings. In June 2021, the Telegram-based Terrorgram collective published an online guide with incitements for attacks on infrastructure and violence against minorities, police, public figures, journalists, and other perceived enemies. In December 2021, they published a second document containing ideological sections on accelerationism, white supremacy, and ecofascism. During 2021, several neo-Nazi groups and individuals who espoused ecofascist rhetoric were arrested and charged by French authorities for planning terrorist attacks. These include the group , and two "accelerationists" in Occitania. In an interview with a blog, a leader of the eco-extremist group Individualists Tending to the Wild (ITS) claimed to have taken organisational influence from the fascist accelerationist terrorist group Order of Nine Angles. The Foundation for Defense of Democracies and the European Union Counter-Terrorism Coordinator characterized ITS as ecofascist. Payton S. Gendron, the instigator of the 2022 Buffalo shooting, also wrote a manifesto, describing himself within it as "an ethno-nationalist eco-fascist national socialist" and professing support for far-right shooters from Tarrant and Dylann Roof to Breivik and Robert Bowers. Later in 2022, the Terrorgram collective released another publication, with analysts believing it would likely inspire further "Buffalo shootings". In Finland on 15 March 2024, the anniversary of the Christchurch mosque shooting, a Finnish army non-commissioned officer was arrested for allegedly planning a mass shooting at a university in Vaasa that day. As her motivation she said the world needed "a mass culling" to put an end to "selfish individualism", "human degeneration", global warming and conspicuous consumption. The Finnish police described her as an ecofascist and said that she had read books by Nietzsche, Linkola and Kaczynski. Additionally, she had praised Pekka-Eric Auvinen in internet conversations and had visited Jokela school, where he had perpetrated his mass shooting. On 12 August 2024, at least five people were wounded in a mass stabbing attack in Eskisehir, Turkey. The perpetrator had called for "Total Human Death" and voiced support for Ted Kaczynski and accelerationism on the Internet. Criticism The deep ecology activist and "left biocentrism" advocate David Orton stated in 2000 that the term is pejorative in nature and it has "social ecology roots, against the deep ecology movement and its supporters plus, more generally, the environmental movement. Thus, 'ecofascist' and 'ecofascism', are used not to enlighten but to smear." Orton argued that "it is a strange term/concept to really have any conceptual validity" as there has not "yet been a country that has had an "eco-fascist" government or, to my knowledge, a political organization which has declared itself publicly as organized on an ecofascist basis." Accusations of ecofascism have often been made but are usually strenuously denied. Left-wing critiques view ecofascism as an assault on human rights, as in social ecologist Murray Bookchin's use of the term. Deep ecology Deep ecology is an environmental philosophy that promotes the inherent worth of all living beings regardless of their instrumental utility to human needs.
It has long been linked to fascist ideologies, both by critics and fascist proponents. In certain texts, the Norwegian philosopher Arne Næss, a leading voice of the "deep ecology" movement, opposes environmentalism and humanism, even proclaiming, in imitation of a famous phrase of the Marquis de Sade, ("Ecologists, another effort to become anti-humanists!"). Luc Ferry, in his anti-environmentalist book published in 1992, particularly incriminated deep ecology as being an anti-humanist ideology bordering on Nazism. Modern ecofascism has been described as a deep ecological philosophy combined with antihumanism and an accelerationist stance. Bookchin's critique of deep ecology Murray Bookchin criticizes the political position of deep ecologists such as David Foreman: Sakai on "natural purity" Such observations among the left are not exclusive to Bookchin. In his review of Anna Bramwell's biography of Richard Walther Darré, political writer J. Sakai, author of Settlers: The Mythology of the White Proletariat, observes the fascist ideological undertones of natural purity. Prior to the Russian Revolution, the tsarist intelligentsia was divided between, on the one hand, liberal "utilitarian naturalists", who were "taken with the idea of creating a paradise on earth through scientific mastery of nature" and influenced by nihilism as well as Russian zoologists such as Anatoli Petrovich Bogdanov; and, on the other, "cultural-aesthetic" conservationists such as Ivan Parfenevich Borodin, who were influenced in turn by German Romantic and idealist concepts such as and . Narrowness of the label Political scientist Balša Lubarda has criticised the use of the term "ecofascism" as not sufficiently covering and describing the wider network of ideologies and systems that feed into ecofascist action, suggesting the term "far-right ecologism" (FRE) instead. Lubarda is supported by researcher Bernhard Forchtner, who emphasises ecofascism's existence as a fringe ideology that has had little impact on the wider far-right's interaction with environmentalism. Disavowal As ecofascism has become more prevalent, various environmental groups and organisations have publicly disavowed the ideology and those who subscribe to it. Far-right green movements In recent years there has been a proliferation of ecofascist groups globally, in line with the spread of ecofascist rhetoric. Australia Australia has seen an increasing prominence of ecofascism among its far-right groups in recent years. Austria The (DGÖ) was founded in 1982 by the former NDP official Alfred Bayer to use the popularity of the green movement at the time for the purposes of the NDP. The party managed to win a number of municipal seats in the mid-1980s, but in 1988 the Constitutional Court banned it on grounds of neo-Nazism, alongside a parallel ban on the NDP. Finland The neo-fascist Blue-and-Black Movement includes ecofascist policy goals, stating that it aims to protect the nature and biodiversity of Finland, to live in harmony with nature, and to end ritual slaughter, fur farming and animal testing. France Nouvelle Droite movement The European movement, developed by Alain de Benoist and other individuals involved with the GRECE think tank, has also combined various left-wing ideas, including green politics, with right-wing ideas such as European ethnonationalism.
Various other far-right figures have taken the lead from de Benoist, providing an appeal to nature in their politics, including Guillaume Faye, Renaud Camus, and Hervé Juvin. Génération identitaire In 2020, following articles from self-described ecofascist , a spokesperson for , Clément Martin, advocated for , ethnically homogenous zones to be violently defended in order to protect the environment. National Rally Marine Le Pen, president of the far-right National Rally (, or RN) in the French National Assembly, has shown an ecofascist approach towards the climate change issue and has incorporated environmental issues into her platform, although her climate policies often reflect a nationalist and protectionist stance. Le Pen has stated that concern for the climate is inherently nationalist, and that immigrants "do not care about the environment". Jordan Bardella, president of National Rally, embraces similar beliefs and has stated "Borders are the environment's greatest ally; it is through them we will save the planet." Solutions for climate change proposed by Le Pen also align with right-wing conservative economics. She has disregarded liberal free trade economics, in the belief that it "kills the planet" and creates "suffering for animals". Rather than supporting the mass production of international commerce, she designed a localist project for "economic patriotism" to boost French products. Climate change was not in the RN's party platform until around 2019, when leftist and center parties alike began to capitalize on the issue electorally. In response to this rising awareness regarding environmental issues, Le Pen designed an energy plan focused on fossil fuels, opposing wind and solar energy and emphasizing the expansion of nuclear power; she delineated a party policy under which 70% of France's electricity was to come from nuclear energy by 2050. Additionally, Le Pen supports maintaining oil heating systems and reducing taxes on fossil fuels, which contradicts climate experts' recommendations and could increase France's dependence on fossil fuels. Germany Staudenmaier points out that since the post-war period an ecofascist current has always been present in the German far-right, though as a minor peripheral one; others point to a long history of right-wing individuals and groups being present in Germany's environmental and green movements. Die Heimat Die Heimat (The Homeland), previously known as the National Democratic Party of Germany (NPD), a German nationalist far-right party, has long sought to utilise the green movement. This is one of many strategies the party has used to try to gain supporters. The German far-right has published the magazine , which masquerades as a garden and nature publication but intertwines garden tips with extremist political ideology. This is known as a "camouflage publication", through which the NPD has spread its mission and ideologies via a discreet source and made its way into homes it would not otherwise reach. Right-wing environmentalists are settling in the northern regions of rural Germany and are forming nationalistic and authoritarian communities which produce honey, fresh produce, baked goods, and other such farm goods for profit. Their ideology is centered on "blood and soil" ruralism, in which they humanely raise produce and animals for profit and sustenance.
Through its support of this operation, and the backing of many others, the NPD is reported to be trying to wrest back the green movement, which has been dominated by the left since the 1980s, through these avenues. It is difficult to know whether, when buying local produce or farm-fresh eggs from a farmer at their stand, one is supporting a right-wing agenda. Various efforts are being made to halt or slow the infiltration of right-wing ecologists into the community of organic farmers, such as brochures about their communities and common practices. However, as the organic cultivation organisation Biopark demonstrates with its vetting process, it is difficult to keep people out of communities because of their ideologies. Biopark specifies that it vets based on cultivation habits, not opinions or doctrines, especially when these are not explicitly stated. AfD Prominent (Alternative for Germany) politician Björn Höcke has stated his desire to "reclaim" natural conservation from the left. Höcke believes that nature conservation is not correctly executed under climate justice politics, and is quoted stating that the AfD has "to take the issue of nature conservation back from the Greens". However, Höcke recognizes that a socially conservative position that strongly values environmental protection is not the majority position of the AfD. Regardless, Höcke sees the work of the far-right ecological magazine , as laying a theoretical foundation for the AfD to later draw from. Collegium Humanum Other groups The term is also used to a limited extent within the . The neo-Artamans have been identified as ecofascists in their attempts to revive the agrarian and völkisch traditions of the Artaman League in communes that they have built up since the 1990s. Hungary Following the fall of Communism in Hungary at the end of the 1980s, one of the new political parties that emerged in the country was the Green Party of Hungary. Initially having a moderate centre-right green outlook, after 1993 the party adopted a radical anti-liberal, anti-communist, anti-Semitic and pro-fascist stance, paired with the creation of a paramilitary wing. This ideological swing resulted in many members breaking off from the party to form new green parties, first with the Green Alternative in 1993 and then with the Hungarian Social Green Party in 1995. Each green party remained on the political fringe of Hungarian politics and petered out over time. It was not until the formation of LMP – Hungary's Green Party in the 2010s that green politics in Hungary consolidated around a single green party. The far-right Hungarian political party Our Homeland Movement has adopted some elements of environmentalism, and commonly refers to itself as the only true green party; for example, the party has called on Hungarians to show patriotism by supporting the removal of pollution from the Tisza River while simultaneously placing the blame for the pollution on Romania and Ukraine. Similarly, elements of the far-right Sixty-Four Counties Youth Movement subscribe to the "Eco-Nationalist" label, with one member stating "no real nationalist is a climate denialist". India Narendra Modi's leadership of India with the Bharatiya Janata Party seeks to install a complete system of Hindutva, with repression of racial and religious minorities and caste discrimination.
Since 2018, Modi has been increasingly viewed as an environmental champion and has used rhetoric about protecting the environment to greenwash his image and the image of his party. International Greenline Front is an international network of ecofascists which originated in Eastern Europe, with chapters in a variety of countries such as Argentina, Belarus, Chile, Germany, Italy, Poland, Russia, Serbia, Spain and Switzerland. Serbia The Leviathan Movement claims to promote ecology and protect animals from cruelty by, among other things, saving them from abusers. Leviathan has been reported to be an ideologically neo-fascist and neo-Nazi group. They used to share an office with the Serbian Right, a far-right political party, and Leviathan's leader, Pavle Bihali, is seen in pictures on his social media accounts posing with neo-Nazis. Sweden The Nordic Resistance Movement, a pan-Nordic neo-Nazi movement in the Nordic countries and a political party in Sweden, has been continually described as ecofascist and has declared itself the "new green party" of the Nordics. Switzerland In Switzerland, the initiators of the Ecopop initiative were accused of eco-fascism by FDFA State Secretary Yves Rossier at a Christian Democratic People's Party of Switzerland event on 11 January 2013. However, after the initiators threatened to sue, Rossier apologized for the allegation. United Kingdom There is also a long historical association between the far-right and environmentalism in the UK. Throughout its history, the far-right British National Party has flirted on and off with environmentalism. During the 1970s, the party's first leader, John Bean, expressed support for the emerging environmentalist movement in the pages of the party's newspaper and suggested that the primary cause of pollution was overpopulation, and that therefore immigration into Britain must be halted. During the 2000s the BNP sought to position itself as the "only 'true' green party" in the United Kingdom, dedicating a significant portion of its manifestos to green issues. During an appearance on BBC One's Question Time in October 2009, then-leader Nick Griffin proclaimed: The Guardian criticised Griffin's claims that he and the BNP were truly environmentalists at heart, suggesting they were merely a smokescreen for anti-immigrant rhetoric, and pointed to previous statements by Griffin in which he suggested that climate change was a hoax. These suspicions seemed to be proven correct when in December 2009 the BNP released a 40-page document denying that global warming is a "man-made" phenomenon. The party reiterated this stance in 2011, as well as making claims that wind farms were causing the deaths of "thousands of Scottish pensioners from hypothermia". John Bean, a far-right activist and politician, the first leader of the BNP and latterly a leader within the National Front, wrote regularly in the National Front's magazine about the problems of pollution and environmental degradation, tying them to ideas of overpopulation and immigration. In 2024 it was reported by Searchlight that the fascist groups Patriotic Alternative and the Homeland Party have also started to claim that the countryside is being destroyed by immigration. In Scotland, former UKIP candidate and activist Alistair McConnachie, who has questioned the Holocaust, founded the Independent Green Voice in 2003, and multiple ex-BNP members and activists have stood as candidates for the party.
United States During the 1990s a highly militant environmentalist subculture called Hardline emerged from the straight edge hardcore punk music scene and established itself in a number of cities across the US. Adherents to the Hardline lifestyle combined the straight edge belief in no alcohol, no drugs and no tobacco with militant veganism and advocacy for animal rights. Hardline touted a biocentric worldview that claimed to value all life, and therefore opposed abortion, contraceptives, and sex for any purpose other than procreation. In the same vein, Hardline opposed homosexuality as "unnatural" and "deviant". Hardline groups were highly militant; in 1999 Salt Lake City classified Hardliners as a criminal gang and suggested they were behind dozens of assaults in the metro area. That same year CBS News reported that Hardliners were behind the firebombing of fast food outlets and clothing stores selling leather items, and attributed 30 attacks to the movement. The Hardline subculture dissolved after the 1990s. White supremacist John Tanton and the network of organisations he created, dubbed the Tanton network, have been described as ecofascist. Tanton and his organisations spent decades linking immigration to environmental concerns. Political researchers Blair Taylor and Eszter Szenes have identified multiple threads in alt-right discourse and ideology that align with far-right ecologism and ecofascism. The Green Party of the United States has also long been the target of various far-right figures, such as anti-Semitic conspiracy theorists, who have tried to shift the party drastically to the far right. In 1994, so-called "takings" bills were introduced in the U.S. Congress to financially compensate wetlands owners who were unable to develop their land for profit due to environmental protection policies. These bills were met with resistance by "anthropocentric market liberals", who oppose any sort of market regulation or intervention of the state into private ownership. Hence, these "takings" bills were deemed ecofascist, and proponents of the bills were "disparaged" and viewed as "'nature-loving' romantics for having reactionary tendencies that may be consistent with fascism". The journal Social Theory and Practice uses this instance to exemplify how growing public frustration with complex federal environmental regulations leads to rapidly polarizing opinions on environmental regulation in the United States: one is either a citizen who supports people, private property, and the U.S. Constitution, or a radical environmentalist who supports nature, communal ownership, and ecofascism. Pejorative Detractors on the political right tend to use the term "ecofascism" as a hyperbolic general pejorative against all environmental activists, including more mainstream groups such as Greenpeace, prominent activists such as Greta Thunberg, and government agencies tasked with protecting environmental resources. Such detractors include Rush Limbaugh and other conservative and wise use movement commentators. The term has been used as a pejorative in multiple countries.
See also Adolf Hitler and vegetarianism Animal welfare in Nazi Germany ATWA Conspirituality Definitions of fascism Ecoauthoritarianism Ecocapitalism Eco-nationalism Eco-socialism Eco-terrorism Environmental movement Environmental racism Green Imperialism Hardline (subculture) Neo-Luddism Pastel QAnon Radical environmentalism Red-green-brown alliance Notes References Bibliography Further reading External links Fascism Green politics Political pejoratives Far-right politics Deep ecology Environmental movements Syncretic political movements Environmentalism Eco-terrorism Political ecology Totalitarian ideologies
Ecofascism
Biology,Environmental_science
8,460
27,501,362
https://en.wikipedia.org/wiki/Holstein%E2%80%93Herring%20method
The Holstein–Herring method, also called the surface integral method, or Smirnov's method, is an effective means of getting the exchange energy splittings of asymptotically degenerate energy states in molecular systems. Although the exchange energy becomes elusive at large internuclear distances, it is of prominent importance in theories of molecular binding and magnetism. This splitting results from the symmetry under exchange of identical nuclei (Pauli exclusion principle). The basic idea was pioneered by Theodore Holstein, Conyers Herring and Boris M. Smirnov in the 1950s and 1960s. Theory The method can be illustrated for the hydrogen molecular ion or, more generally, atom-ion systems or one-active-electron systems, as follows. We consider states that are represented by even or odd functions with respect to behavior under space inversion. These are denoted with the suffixes g and u, from the German gerade and ungerade, which are standard practice for the designation of electronic states of diatomic molecules, whereas for atomic states the terms even and odd are used. The electronic time-independent Schrödinger equation can be written as: Hψ = Eψ, where E is the (electronic) energy of a given quantum mechanical state (eigenstate), with the electronic state function ψ = ψ(r) depending on the spatial coordinates of the electron and where V is the electron-nuclear Coulomb potential energy function. For the hydrogen molecular ion, this is: V = −1/ra − 1/rb, where ra and rb are the distances of the electron from nuclei a and b. For any gerade (or even) state, the electronic Schrödinger wave equation can be written in atomic units (ħ = m = e = 4πε0 = 1) as: (−½∇² + V)ψg = Egψg. For any ungerade (or odd) state, the corresponding wave equation can be written as: (−½∇² + V)ψu = Euψu. For simplicity, we assume real functions (although the result can be generalized to the complex case). We then multiply the gerade wave equation by ψu on the left and the ungerade wave equation on the left by ψg and subtract to obtain: ψu∇²ψg − ψg∇²ψu = 2ΔE ψgψu, where ΔE = Eu − Eg is the exchange energy splitting. Next, without loss of generality, we define orthogonal single-particle functions, φa and φb, located at the nuclei and write: φa = (ψg + ψu)/√2 and φb = (ψg − ψu)/√2. This is similar to the LCAO (linear combination of atomic orbitals) method used in quantum chemistry, but we emphasize that the functions φa and φb are in general polarized, i.e. they are not pure eigenfunctions of angular momentum with respect to their nuclear center (see also below). Note, however, that in the limit as the internuclear distance R → ∞, these localized functions collapse into the well-known atomic (hydrogenic) psi functions. We denote M as the mid-plane located exactly between the two nuclei (see diagram for hydrogen molecular ion for more details), with the unit normal vector of this plane parallel to the Cartesian z-direction, so that the full space is divided into left (L) and right (R) halves, nucleus a lying in the left half. By considerations of symmetry: φb(x, y, z) = φa(x, y, −z). This implies that: ψg(x, y, −z) = ψg(x, y, z) and ψu(x, y, −z) = −ψu(x, y, z). Also, these localized functions are normalized, which leads to: ∫ φa² dV = ∫ φb² dV = 1, and conversely. Integration of the above in the whole space left to the mid-plane yields: ΔE ∫L ψgψu dV = ½ ∫L (ψu∇²ψg − ψg∇²ψu) dV and ∫L ψgψu dV = ½ (1 − 2 ∫R φa² dV). From a variation of the divergence theorem on the above, we finally obtain: ΔE = Eu − Eg = 2 ∮M φa ∇φa · dS / (1 − 2 ∫R φa² dV), where dS is a differential surface element of the mid-plane, oriented toward the half containing nucleus a. This is the Holstein–Herring formula. From the latter, Conyers Herring was the first to show that the lead term for the asymptotic expansion of the energy difference between the two lowest states of the hydrogen molecular ion, namely the first excited state and the ground state (as expressed in molecular notation), was found to be: ΔE = (4/e) R e^(−R), with R the internuclear distance in atomic units. Previous calculations based on the LCAO of atomic orbitals had erroneously given a lead coefficient of 4/3 instead of 4/e.
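The asymptotic result above invites a quick numerical check. The sketch below is a minimal illustration that simply evaluates the two leading terms quoted above (coefficient 4/e from the surface-integral result versus 4/3 from LCAO) in atomic units; it is not part of the method itself, just a comparison of the two asymptotic formulas.

```python
# Compare the leading asymptotic exchange splitting of H2+ from the
# Holstein-Herring surface integral (coefficient 4/e) with the LCAO
# estimate (coefficient 4/3), in atomic units. Illustrative only:
# both expressions are leading-order terms, valid for large R.
import math

def splitting_surface_integral(R):
    """Herring's lead term: (4/e) * R * exp(-R)."""
    return (4.0 / math.e) * R * math.exp(-R)

def splitting_lcao(R):
    """LCAO lead term with the erroneous coefficient 4/3."""
    return (4.0 / 3.0) * R * math.exp(-R)

for R in (4.0, 8.0, 12.0):
    print(f"R = {R:4.1f}  surface integral: {splitting_surface_integral(R):.3e}"
          f"  LCAO: {splitting_lcao(R):.3e}")
```

The two curves have the same R e^(−R) shape and differ only by the roughly 10% discrepancy between 4/e and 4/3, which is why the LCAO error went unnoticed for so long.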
While it is true that for the hydrogen molecular ion the eigenenergies can be mathematically expressed in terms of a generalization of the Lambert W function, these asymptotic formulae are more useful in the long range, and the Holstein–Herring method has a much wider range of applications than this particular molecule. Applications The Holstein–Herring formula had limited applications until around 1990, when Kwong-Tin Tang, Jan Peter Toennies, and C. L. Yiu demonstrated that φa can be a polarized wave function, i.e. an atomic wave function localized at a particular nucleus but perturbed by the other nuclear center, and consequently without apparent gerade or ungerade symmetry, and that nonetheless the Holstein–Herring formula above can be used to generate the correct asymptotic series expansions for the exchange energies. In this way, one has successfully recast a two-center formulation into an effective one-center formulation. Subsequently, it has been applied with success to one-active-electron systems. Later, Scott et al. explained and clarified their results while sorting out subtle but important issues concerning the true convergence of the polarized wave function. The outcome meant that it was possible to solve for the asymptotic exchange energy splittings to any order. The Holstein–Herring method has been extended to the two-active-electron case, i.e. the hydrogen molecule, for the two lowest discrete states of the molecule, and also for general atom-atom systems. Physical interpretation The Holstein–Herring formula can be physically interpreted as the electron undergoing "quantum tunnelling" between both nuclei, thus creating a current whose flux through the mid-plane allows us to isolate the exchange energy. The energy is thus shared, i.e. exchanged, between the two nuclear centers. Related to the tunnelling effect, a complementary interpretation from Sidney Coleman's Aspects of Symmetry (1985) has an "instanton" travelling near and about the classical paths within the path integral formulation. Note that the volume integral in the denominator of the Holstein–Herring formula is sub-dominant in R. Consequently, this denominator is almost unity for sufficiently large internuclear distances and only the surface integral of the numerator need be considered. See also Dirac delta function model (1-D version of H2+) Exchange interaction Exchange symmetry Conyers Herring Hydrogen molecular ion Lambert W function Quantum tunneling List of quantum-mechanical systems with analytical solutions References Quantum chemistry
Holstein–Herring method
Physics,Chemistry
1,228
11,024
https://en.wikipedia.org/wiki/Formant
In speech science and phonetics, a formant is the broad spectral maximum that results from an acoustic resonance of the human vocal tract. In acoustics, a formant is usually defined as a broad peak, or local maximum, in the spectrum. For harmonic sounds, with this definition, the formant frequency is sometimes taken as that of the harmonic that is most augmented by a resonance. The difference between these two definitions resides in whether "formants" characterise the production mechanisms of a sound or the produced sound itself. In practice, the frequency of a spectral peak differs slightly from the associated resonance frequency, except when, by luck, harmonics are aligned with the resonance frequency, or when the sound source is mostly non-harmonic, as in whispering and vocal fry. A room can be said to have formants characteristic of that particular room, due to its resonances, i.e., to the way sound reflects from its walls and objects. Room formants of this nature reinforce themselves by emphasizing specific frequencies and absorbing others, as exploited, for example, by Alvin Lucier in his piece I Am Sitting in a Room. In acoustic digital signal processing, the way a collection of formants (such as a room) affects a signal can be represented by an impulse response. In both speech and rooms, formants are characteristic features of the resonances of the space. They are said to be excited by acoustic sources such as the voice, and they shape (filter) the sources' sounds, but they are not sources themselves. History From an acoustic point of view, phonetics had a serious problem with the idea that the effective length of the vocal tract changed vowels. Indeed, when the length of the vocal tract changes, all the acoustic resonators formed by mouth cavities are scaled, and so are their resonance frequencies. Therefore, it was unclear how vowels could depend on frequencies when talkers with different vocal tract lengths, for instance bass and soprano singers, could produce sounds that are perceived as belonging to the same phonetic category. There had to be some way to normalize the spectral information underpinning the vowel identity. Hermann suggested a solution to this problem in 1894, coining the term “formant”. A vowel, according to him, is a special acoustic phenomenon, depending on the intermittent production of a special partial, or “formant”, or “characteristique” feature. The frequency of the “formant” may vary a little without altering the character of the vowel. For “long e” (ee or iy), for example, the lowest-frequency “formant” may vary from 350 to 440 Hz even in the same person. Phonetics Formants are distinctive frequency components of the acoustic signal produced by speech, musical instruments or singing. The information that humans require to distinguish between speech sounds can be represented purely quantitatively by specifying peaks in the frequency spectrum. Most of these formants are produced by tube and chamber resonance, but a few whistle tones derive from periodic collapse of Venturi effect low-pressure zones. The formant with the lowest frequency is called F1, the second F2, the third F3, and so forth. The fundamental frequency or pitch of the voice is sometimes referred to as F0, but it is not a formant. Most often the first two formants, F1 and F2, are sufficient to identify the vowel.
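Because F1 and F2 usually suffice to identify a vowel, a common practical task is extracting them from a recorded frame; the "Formant estimation" passage below mentions linear predictive coding (LPC) for exactly this. The following is a rough, self-contained sketch of that approach, assuming a mono frame of voiced speech at sample rate fs; the LPC order heuristic (fs/1000 + 2) and the frequency/bandwidth thresholds are common rules of thumb, not part of any fixed standard.

```python
# Rough LPC-based formant estimation for one frame of voiced speech.
# Pure NumPy; order heuristic and plausibility thresholds are rules
# of thumb, not prescribed values.
import numpy as np

def lpc(frame, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i] += k * a[i - 1:0:-1]   # RHS is a fresh array, no aliasing
        a[i] = k
        err *= 1.0 - k * k
    return a

def estimate_formants(frame, fs):
    frame = frame * np.hamming(len(frame))                      # taper edges
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])  # pre-emphasis
    a = lpc(frame, order=int(fs / 1000) + 2)
    roots = np.roots(a)                       # poles of the all-pole model
    roots = roots[np.imag(roots) > 0]         # one root per conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)
    bws = -np.log(np.abs(roots)) * fs / np.pi  # approximate 3 dB bandwidths
    # Keep sharp resonances in a plausible speech range; the lowest two
    # survivors approximate F1 and F2.
    return sorted(f for f, b in zip(freqs, bws) if f > 90 and b < 400)

# Toy usage: two damped resonances standing in for F1 ~700 Hz, F2 ~1200 Hz.
fs = 16000
t = np.arange(480) / fs
frame = np.exp(-60 * t) * (np.sin(2 * np.pi * 700 * t)
                           + np.sin(2 * np.pi * 1200 * t))
print(estimate_formants(frame, fs))
```

Real systems add voicing detection, frame-to-frame tracking and continuity constraints on top of this per-frame estimate.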
The relationship between the perceived vowel quality and the first two formant frequencies can be appreciated by listening to "artificial vowels" that are generated by passing a click train (to simulate the glottal pulse train) through a pair of bandpass filters (to simulate vocal tract resonances). Front vowels have higher F2, while low vowels have higher F1. Lip rounding tends to lower F1 and F2 in back vowels and F2 and F3 in front vowels. Nasal consonants usually have an additional formant around 2500 Hz. The liquid [l] usually has an extra formant at 1500 Hz, whereas the English "r" sound ([ɹ]) is distinguished by a very low third formant (well below 2000 Hz). Plosives (and, to some degree, fricatives) modify the placement of formants in the surrounding vowels. Bilabial sounds (such as [b] and [p] in "ball" or "sap") cause a lowering of the formants; on spectrograms, velar sounds ([k] and [g] in English) almost always show F2 and F3 coming together in a 'velar pinch' before the velar and separating from the same 'pinch' as the velar is released; alveolar sounds (English [t] and [d]) cause fewer systematic changes in neighbouring vowel formants, depending partially on exactly which vowel is present. The time course of these changes in vowel formant frequencies is referred to as 'formant transitions'. In normal voiced speech, the underlying vibration produced by the vocal folds resembles a sawtooth wave, rich in harmonic overtones. If the fundamental frequency or (more often) one of the overtones is higher than a resonance frequency of the system, then the resonance will be only weakly excited and the formant usually imparted by that resonance will be mostly lost. This is most apparent in the case of soprano opera singers, who sing at pitches high enough that their vowels become very hard to distinguish. Control of resonances is an essential component of the vocal technique known as overtone singing, in which the performer sings a low fundamental tone and creates sharp resonances to select upper harmonics, giving the impression of several tones being sung at once. Spectrograms may be used to visualise formants. In spectrograms, it can be hard to distinguish formants from naturally occurring harmonics when one sings. However, one can hear the natural formants in a vowel shape through atonal techniques such as vocal fry. Formant estimation Formants, whether they are seen as acoustic resonances of the vocal tract, or as local maxima in the speech spectrum, like band-pass filters, are defined by their frequency and by their spectral width (bandwidth). Different methods exist to obtain this information. Formant frequencies, in their acoustic definition, can be estimated from the frequency spectrum of the sound, using a spectrogram or a spectrum analyzer. However, to estimate the acoustic resonances of the vocal tract (i.e. the speech definition of formants) from a speech recording, one can use linear predictive coding. An intermediate approach consists in extracting the spectral envelope by neutralizing the fundamental frequency, and only then looking for local maxima in the spectral envelope. Formant plots The first two formants are important in determining the quality of vowels, and are frequently said to correspond to the open/close (or low/high) and front/back dimensions (which have traditionally been associated with the shape and position of the tongue).
Thus the first formant F1 has a higher frequency for an open or low vowel such as and a lower frequency for a closed or high vowel such as or ; and the second formant F2 has a higher frequency for a front vowel such as and a lower frequency for a back vowel such as . Vowels will almost always have four or more distinguishable formants, and sometimes more than six. However, the first two formants are the most important in determining vowel quality and are often plotted against each other in vowel diagrams, though this simplification fails to capture some aspects of vowel quality such as rounding. Many writers have addressed the problem of finding an optimal alignment of the positions of vowels on formant plots with those on the conventional vowel quadrilateral. The pioneering work of Ladefoged used the Mel scale because this scale was claimed to correspond more closely to the auditory scale of pitch than to the acoustic measure of fundamental frequency expressed in Hertz. Two alternatives to the Mel scale are the Bark scale and the ERB-rate scale. Another widely adopted strategy is plotting the difference between F1 and F2 rather than F2 on the horizontal axis. Singer's formant Studies of the frequency spectrum of trained speakers and classical singers, especially male singers, indicate a clear formant around 3000 Hz (between 2800 and 3400 Hz) that is absent in speech or in the spectra of untrained speakers or singers. It is thought to be associated with one or more of the higher resonances of the vocal tract. It is this increase in energy at 3000 Hz which allows singers to be heard and understood over an orchestra. This formant is actively developed through vocal training, for instance through so-called voce di strega or "witch's voice" exercises and is caused by a part of the vocal tract acting as a resonator. In classical music and vocal pedagogy, this phenomenon is also known as squillo. See also Formant synthesis Human voice Linear predictive coding Praat Timbre Vocoder References External links Formants for fun and profit Formants and wah-wah pedals What is a formant? A discussion of the three different meanings of the word 'formant' Formant tuning by soprano singers from the University of New South Wales The acoustics of harmonic or overtone singing from the University of New South Wales Materials for measuring and plotting vowel formants Human voice Sound synthesis types Acoustics
Formant
Physics
1,898
71,604,473
https://en.wikipedia.org/wiki/Zwackhiomyces%20calcariae
Zwackhiomyces calcariae is a species of lichenicolous fungus in the family Xanthopyreniaceae. It was first formally described in 1896 by the French lichenologist Camille Flagey as Arthopyrenia calcariae. Josef Hafellner and Nikolaus Hoffmann transferred it to the genus Zwackhiomyces in 2000. The fungus is parasitic on lichens in the genus Aspicilia. References Xanthopyreniaceae Fungi described in 1896 Lichenicolous fungi Fungus species
Zwackhiomyces calcariae
Biology
109
26,421,394
https://en.wikipedia.org/wiki/Coinage%20metals
The coinage metals comprise those metallic chemical elements and alloys which have been used to mint coins. Historically, most coinage metals are from the three nonradioactive members of group 11 of the periodic table: copper, silver and gold. Copper is usually augmented with tin or other metals to form bronze. Gold, silver and bronze or copper were the principal coinage metals of the ancient world, the medieval period and into the late modern period, when the diversity of coinage metals increased. Coins are often made from more than one metal, either using alloys, coatings (cladding/plating) or bimetallic configurations. While coins are primarily made from metal, some non-metallic materials have also been used. History Early coinage made from metal came into use during the Axial Age in the Greek world, in northern India, and in China, as coins became a widespread embodiment of money. Bronze, gold, silver and electrum (a naturally occurring pale yellow mixture of gold and silver that was further alloyed with silver and copper) were used. Silver coins from about 700 BC are known from Aegina Island. Early electrum coins from Ephesus, Lydia, date from about 650 BC. Ancient India, in the 6th century BC, was also one of the earliest issuers of coins in the world. The gold Croeseids, issued in Lydia, were the first true gold coins with a standardized purity for general circulation. The gold and silver Croeseids formed the world's first bimetallic monetary system, c. 550 BC. The Persian daric was also an early gold coin which, along with a similar silver coin, the siglos (from Ancient Greek σίγλος, Hebrew שֶׁקֶל (shékel)), represented the bimetallic monetary standard of the Achaemenid Persian Empire. These coins were also very well known in the Persian and Sassanid eras, most notably in Susa and in Ctesiphon. Precious metals were used historically in commodity money and are found in bullion coins and some collectable coins. Coins functioning as fiat money are now made from a larger variety of base metals. Multiple metals Coins may be composed of multiple metals using alloys, coatings, or bimetallic forms. Coin alloys include bronze, electrum and cupronickel. Plating, cladding or other coating methods are used to form an outer layer of metal, typically to replace a more expensive metal while retaining the former appearance. For example, United States cents since 1982 are zinc with copper plating, and thus retain their prior copper look while having a less expensive composition. Coatings may also be used as a form of debasement in commodity money. Bimetallic coins are used for their distinctive appearance and generally have an outer ring of one metal or alloy surrounding a center of contrasting metal. Requirements for a coinage metal Coins that are intended for circulation may circulate for decades and thus must have excellent resistance to wear and corrosion. Achieving this goal typically necessitates the use of base metal alloys. In addition, some metals, such as manganese, are unsuitable, as they are too hard to take an impression well or are apt to wear out stamping machines at the mint. When minting coins, especially low-denomination coins, there is a risk that the value of the metal within a coin is greater than the face value, leading to negative seigniorage. This leads to the possibility of smelters taking coins and melting them down for the scrap value of the metal.
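A quick arithmetic sketch of that melt-value risk follows. All figures here (coin mass, composition, per-kilogram prices, face value) are invented for illustration only, not market data:

```python
# Back-of-the-envelope melt value of a coin versus its face value.
# All masses, compositions and prices below are illustrative
# placeholders, not current market data.
def melt_value(mass_g, composition, price_per_kg):
    """composition maps metal -> mass fraction; prices are per kilogram."""
    return sum(mass_g / 1000.0 * fraction * price_per_kg[metal]
               for metal, fraction in composition.items())

# Hypothetical bronze penny: 3.56 g, 97% copper, 2.5% zinc, 0.5% tin.
value = melt_value(3.56,
                   {"copper": 0.97, "zinc": 0.025, "tin": 0.005},
                   {"copper": 8.0, "zinc": 3.0, "tin": 30.0})
face_value = 0.01
print(f"melt value {value:.4f} vs face value {face_value:.2f}"
      f" -> negative seigniorage: {value > face_value}")
```

With these placeholder prices the metal is worth roughly three times the face value, which is exactly the situation that tempts smelters.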
Pre-1992 British pennies were made of 97% copper, but as of 2008, based on the price of copper, the value of a penny from this period was 1.5 new pence. Modern British pennies are now made of copper-plated steel. Cupronickel, a base metal alloy with varying proportions of copper and nickel, was introduced as a cheaper alternative to silver in coinage. Cupronickel, most commonly 75% copper and 25% nickel, has a silver color, is hard wearing and has excellent striking properties, essential for the design of the coin to be pressed accurately and quickly during manufacture. However, in the 21st century, with the prices of both copper and nickel rising, it has become common to experiment with various alloys of steel, often stainless steel, as an even cheaper alternative. For example, in India some coins have been made from a stainless steel that contains 82% iron and 18% chromium, and many other countries that have minted coins containing metals now worth nearly face value are experimenting with various steel alloys. Italy had earlier experimented with acmonital, a stainless steel alloy, for its coins. A number of more exotic metals have been used to make demonstration or fantasy coins which have not been used to make monetized coins for a nation-state. Some of these elements would make excellent coins in theory (e.g. zirconium). More expensive metals that are intrinsically valuable as commodities are less practical as coinage due to their cost, but could be used for bullion coins. Chemical elements used in circulating coins In 1992, twenty-four chemical elements used in world coinage were documented by Jay and Marieli Roe in an award-winning exhibit and publication: aluminum, antimony, carbon, cobalt, copper, gold, hafnium, iron, lead, magnesium, molybdenum, nickel, niobium, palladium, platinum, rhenium, silver, tantalum, tin, titanium, tungsten, vanadium, zinc and zirconium. Chromium and manganese, however, were not mentioned, even though both elements had been used in common circulation coins (Canadian wartime V nickels and US wartime Jefferson nickels, respectively) long before the article's publication. Non-circulating Chemical elements used in non-circulating commemorative, demo, bullion or fantasy coins, medals, patterns, and trial strikes: Cadmium: 1828 medal made by G. Loos for the marriage of Heinrich von Dechen, "of Silesian cadmium". Cobalt: 2005 Cameroon 750 CFA francs struck in cobalt-plated iron. Hafnium: Fred Zinkann demo coin. Iridium: 2013 oz 10 franc bullion coin issued by Rwanda as part of "Noble Five" precious metals set. Molybdenum: Demo coin, Fred Zinkann. 2008 1 tr oz coins by Coins By Design, Murray Buckner (mintage 250). Niobium: Austria has issued a number of bimetallic 25 euro coins with a niobium center. Palladium: First issued 1966 by Sierra Leone. Also presentation sets from Tonga and bullion coins of various countries. Rhenium: Fred Zinkann fantasy pieces, Pope Matthew Triple Ducat and Malvinas 5 Australes. Rhodium: 2014 oz 10 franc bullion coin issued by Rwanda as part of "Noble Six" precious metals set. Also Cohen Mint bullion round. Ruthenium: The 1967 Hau from Tonga was 98% palladium and 2% ruthenium. Selenium: 1862 medal in the UK Science Museum, commemorating Berzelius, discoverer of the element. Silicon: Privately struck US quarter patterns dated 1964 (Pollock-5380) in nickel-silicon alloy. Tantalum: Used in a bimetallic silver-tantalum coin from Kazakhstan. Tellurium: 1896 Hungarian mining medal. Reproductions exist from 1975.
Titanium: First issued 1999 by Gibraltar. Austria has made bimetallic silver/titanium commemoratives. Tungsten: While tungsten alloys are too hard for practical use, a few private demos have been struck for experimentation, e.g. Fred Zinkann US half eagle patterns. Uranium: Two types of a German medal of native uranium. Vanadium: 2011 1 Troy ounce coins by Coins By Design, Murray Buckner (mintage 20). Zirconium: 2012 1 Troy ounce coins, including 50 black & 50 Rainbow, by Coins By Design, Murray Buckner (mintage 500). Element Series Beginning in 2006, Dave Hamric (Metallium) has been attempting to strike "coins" (technically tokens or medals, about the size of a US cent) of every stable chemical element. He has struck tokens of the following elements: aluminium, antimony, barium (reactive, sealed in glass capsule), beryllium, bismuth, boron (mixed with binder, sealed in resin cast), cadmium, calcium (reactive, sealed in glass capsule), carbon (mixed with binder, sealed in resin cast), cerium (reactive, sealed in glass capsule), chromium, cobalt, copper, dysprosium, erbium, europium (reactive, sealed in glass capsule), gadolinium, gallium, gold, hafnium, holmium, indium, iridium, iron, lanthanum (reactive, sealed in glass capsule), lead, lutetium, magnesium, mercury (sealed in resin cast, containing the expected coin-weight of liquid mercury), molybdenum, neodymium (reactive, sealed in glass capsule), nickel, niobium, palladium, phosphorus (mixed with binder, sealed in resin cast), platinum, praseodymium (reactive, sealed in glass capsule), rhenium, rhodium, ruthenium, samarium (reactive, sealed in glass capsule), scandium, selenium, silver, strontium (reactive, sealed in glass capsule), sulfur, tantalum, tellurium, terbium, thallium (extremely poisonous; lead token clad on one side with thallium foil and sealed in resin), thulium, tin, titanium, uranium (not offered for sale), vanadium, ytterbium, yttrium, zinc, zirconium. Non-metallic materials used for circulating coins See also Billon Fineness Italma Melchior Nickel silver Nordic Gold References External links Comprehensive list of metals and their alloys which have been used at various times in coins for all types of purposes. Site of coinage metals and alloys
Coinage metals
Chemistry
2,114
40,203,419
https://en.wikipedia.org/wiki/Pandoraviridae
Pandoraviridae is a proposed family of double-stranded DNA viruses that infect amoebae. There is only one genus in this family: Pandoravirus. Several species in this genus have been described, including Pandoravirus dulcis, Pandoravirus salinus and Pandoravirus yedoma. History The viruses were discovered in 2013. Description The viruses in this family are the second largest known viruses by capsid length (~1 micrometer), after Pithovirus (1.5 micrometers). Pandoravirus has the largest viral genome known, containing double-stranded DNA of 1.9 to 2.5 megabase pairs. Evolution These viruses appear to be related to the phycodnaviruses. References Nucleocytoplasmic large DNA viruses Unaccepted virus taxa
Pandoraviridae
Biology
164
57,655,202
https://en.wikipedia.org/wiki/San%20Thang
Thang Hoa San (born 28 August 1954) is an Australian chemist of Chinese-Vietnamese background. Background Thang was born in Saigon in 1954 to Chinese parents who migrated to Vietnam in the 1930s. He completed his Bachelor of Science at Saigon University in 1976, and worked as a chemist at SINCO, a sewing machine manufacturer. In 1979, Thang left Vietnam as a refugee from the Vietnam War, and spent five months in a refugee camp in Malaysia before arriving in Brisbane, Australia, later in the year. He enrolled at Griffith University, where he completed an Honours degree in chemistry and a PhD in organic chemistry. In 1986, Thang joined the CSIRO, the Australian government's scientific agency. In 1987, he left to join ICI Australia, but returned to CSIRO in 1990. In September 2014, research by Thomson Reuters for their citation laureate prize named Thang as one of a trio of CSIRO scientists (together with Ezio Rizzardo and Graeme Moad) most likely to be contenders for the Nobel Prize in Chemistry for their work in co-developing the reversible addition−fragmentation chain-transfer polymerization (RAFT) process. The three had shared the ATSE Clunies-Ross Award earlier that year. However, in December that year, Thang revealed that he had been laid off by the organisation in September, but had continued to work there unpaid in an honorary position. He later worked as a professor for Beijing University of Chemical Technology, and in May 2015 joined Monash University as a professor of chemistry and was elected as a Fellow of the Australian Academy of Science. References 1954 births Living people 21st-century Australian chemists Australian chemists Polymer scientists and engineers Vietnamese scientists CSIRO people Griffith University alumni Academic staff of Monash University Companions of the Order of Australia Fellows of the Australian Academy of Science Fellows of the Australian Academy of Technological Sciences and Engineering Vietnamese emigrants to Australia Vietnamese refugees
San Thang
Chemistry,Materials_science
391
14,166,786
https://en.wikipedia.org/wiki/Homeobox%20protein%20MSX-1
Homeobox protein MSX-1 is a protein that in humans is encoded by the MSX1 gene. MSX1 transcripts are not only found in thyrotrope-derived TSH cells, but also in the TtT97 thyrotropic tumor, which is a well differentiated hyperplastic tissue that produces both TSHβ- and α-subunits and is responsive to thyroid hormone. MSX1 is also expressed in highly differentiated pituitary cells, although until recently it was thought to be expressed exclusively during embryogenesis. The highly conserved structural organization of the members of the MSX family of genes and their abundant expression at sites of inductive cell–cell interactions in the embryo suggest that they have a pivotal role during early development. Function This gene encodes a member of the muscle segment homeobox gene family. The encoded protein functions as a transcriptional repressor during embryogenesis through interactions with components of the core transcription complex and other homeoproteins. It may also have roles in limb-pattern formation, craniofacial development (in particular odontogenesis), and tumor growth inhibition. There is also strong evidence from sequencing studies of candidate genes involved in clefting that mutations in the MSX1 gene may be associated with the pathogenesis of cleft lip and palate. Mutations in this gene, which was once known as homeobox 7, have also been associated with Witkop syndrome, Wolf–Hirschhorn syndrome, and autosomal dominant hypodontia. Haploinsufficiency of MSX1 protein affects the development of all teeth, preferentially third molars and second premolars. The effect of haploinsufficiency of PAX9 on the development of incisors and premolars is probably caused by a deficiency of MSX1 protein. Phenotypes caused by deficiency of MSX1 protein might depend on the localization of mutations and their effect on the protein structure and function. Two substitution mutations, Arg196Pro and Met61Lys, cause only familial non-syndromic tooth agenesis. Among frameshift mutations, the Ser202Stop mutation, resulting in a protein that lacks the C-terminal end of the homeodomain, impairs not only tooth but also nail formation, while the Ser105Stop mutation, causing complete absence of the MSX1 homeodomain, is responsible for the most severe phenotype, which includes orofacial clefts with accompanying tooth agenesis. Although MSX1 is one of the strongest candidate genes for specific forms of tooth agenesis, mutations in this gene were detected only in some affected individuals. Genes expressed in the early dental epithelium in mice, such as Bmp4, Bmp7, Dlx2, Dlx5, Fgf1, Fgf2, Fgf4, Fgf8, Lef1, Gli2, and Gli3, are also potential candidates. Based on existing evidence, it seems possible that both hypodontia and oligodontia are heterogeneous traits, caused by several independent defective genes which act alone or in combination with other genes and lead to specific phenotypes. MSX1 is found to have a linkage with Witkop syndrome, also known as "tooth and nail syndrome" or "nail dysgenesis and hypodontia", since mutations in MSX1 were shown to be associated with tooth agenesis. A linkage was found between TNS and markers surrounding the MSX1 locus, and a nonsense mutation (S202X) in MSX1 was shown to cosegregate with the TNS phenotype in a three-generation family. Interactions MSX1 has been shown to interact with DLX5, CREB binding protein, Sp1 transcription factor, DLX2, TATA binding protein and Msh homeobox 2. LHX2, a LIM-type homeoprotein, is a protein partner for MSX1 in vitro and in cellular extracts.
The interaction between MSX1 and LHX2 is mediated through the homeodomain-containing regions of both proteins. MSX1 and LHX2 form a protein complex in the absence of DNA, and DNA binding by either protein alone can occur at the expense of protein complex formation. References Further reading External links Transcription factors
Homeobox protein MSX-1
Chemistry,Biology
919
35,538,241
https://en.wikipedia.org/wiki/Topological%20rigidity
In the mathematical field of topology, a manifold M is called topologically rigid if every manifold homotopically equivalent to M is also homeomorphic to M. Motivation A central problem in topology is determining when two spaces are the same, i.e., homeomorphic or diffeomorphic. Constructing a morphism explicitly is almost always impractical. If we put further conditions on one or both spaces (manifolds) we can exploit this additional structure in order to show that the desired morphism must exist. A rigidity theorem is about when a fairly weak equivalence between two manifolds (usually a homotopy equivalence) implies the existence of a stronger equivalence: a homeomorphism, diffeomorphism or isometry. Definition. A closed topological manifold M is called topologically rigid if any homotopy equivalence f : N → M with some manifold N as source and M as target is homotopic to a homeomorphism. Examples Example 1. If closed 2-manifolds M and N are homotopically equivalent then they are homeomorphic. Moreover, any homotopy equivalence of closed surfaces deforms to a homeomorphism. Example 2. If a closed manifold Mn (n ≠ 3) is homotopy-equivalent to Sn then Mn is homeomorphic to Sn. Rigidity theorems in geometry Definition. A diffeomorphism of flat Riemannian manifolds is said to be affine iff it carries geodesics to geodesics. Theorem (Bieberbach) If f : M → N is a homotopy equivalence between flat closed connected Riemannian manifolds then f is homotopic to an affine homeomorphism. Mostow's rigidity theorem Theorem: Let M and N be compact, locally symmetric Riemannian manifolds with everywhere non-positive curvature having no closed one- or two-dimensional geodesic subspaces which are local direct factors. If f : M → N is a homotopy equivalence then f is homotopic to an isometry. Theorem (Mostow's theorem for hyperbolic n-manifolds, n ≥ 3): If M and N are complete hyperbolic n-manifolds, n ≥ 3, with finite volume and f : M → N is a homotopy equivalence then f is homotopic to an isometry. These results are named after George Mostow. Algebraic form Let Γ and Δ be discrete subgroups of the isometry group of hyperbolic n-space H, where n ≥ 3, whose quotients H/Γ and H/Δ have finite volume. If Γ and Δ are isomorphic as discrete groups then they are conjugate. Remarks (1) In the 2-dimensional case any manifold of genus at least two has a hyperbolic structure. Mostow's rigidity theorem does not apply in this case. In fact, there are many hyperbolic structures on any such manifold; each such structure corresponds to a point in Teichmüller space. (2) On the other hand, if M and N are 2-manifolds of finite volume then it is easy to show that they are homeomorphic exactly when their fundamental groups are isomorphic. Application The group of isometries of a finite-volume hyperbolic n-manifold M (for n ≥ 3) is finite and isomorphic to Out(π1(M)). References Topology Maps of manifolds Homotopy theory
Topological rigidity
Physics,Mathematics
699
342,453
https://en.wikipedia.org/wiki/Center%20%28algebra%29
The term center or centre is used in various contexts in abstract algebra to denote the set of all those elements that commute with all other elements. The center of a group G consists of all those elements x in G such that xg = gx for all g in G. This is a normal subgroup of G. The similarly named notion for a semigroup is defined likewise and it is a subsemigroup. The center of a ring (or an associative algebra) R is the subset of R consisting of all those elements x of R such that xr = rx for all r in R. The center is a commutative subring of R. The center of a Lie algebra L consists of all those elements x in L such that [x,a] = 0 for all a in L. This is an ideal of the Lie algebra L. See also Centralizer and normalizer Center (category theory) References Abstract algebra
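A worked example can make the ring case concrete. The following is a standard fact, offered as an illustration rather than drawn from the article above: for the full matrix ring over a field, the center consists of exactly the scalar matrices.

```latex
% Standard worked example: the center of the matrix ring M_n(k).
% If A commutes with every matrix unit E_{ij}, then comparing the
% entries of A E_{ij} and E_{ij} A forces a_{ii} = a_{jj} and
% a_{ij} = 0 for i \neq j, so A is a scalar multiple of the identity.
Z\bigl(M_n(k)\bigr) = \{\lambda I_n : \lambda \in k\} \cong k
```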
Center (algebra)
Mathematics
194
11,485,724
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORA70
In molecular biology, small nucleolar RNA SNORA70 (also known as U70) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and is also often referred to as a "guide RNA". ACA70 was originally cloned from HeLa cells and belongs to the H/ACA box class of snoRNAs, as it has the predicted hairpin-hinge-hairpin-tail structure, has the conserved H/ACA-box motifs and is found associated with the GAR1 protein. snoRNA ACA70 is predicted to guide the pseudouridylation of residue U1692 of 18S ribosomal RNA (rRNA). Pseudouridylation is the isomerisation of the nucleoside uridine to its isomeric form pseudouridine. References External links Small nuclear RNA
Small nucleolar RNA SNORA70
Chemistry
242
54,339,873
https://en.wikipedia.org/wiki/The%20Space%20Observatory%20%28Observatoire%20de%20l%27Espace%29
The Observatoire de l’Espace (Space Observatory) is a cultural laboratory created in 2000 by CNES (the French Space Agency) to promote a new vision of outer space, different from that of popular science. Space has a major influence on people's perceptions and imagination. The Observatoire de l’Espace introduces artists to the activities of CNES and paves the way for all kinds of space-inspired creations. These creations, strongly connected to space art, are then shared with the wider audience. Programs As a cultural laboratory, the Observatoire de l’Espace has a specific methodology: the goal is to share space-related items and innovations with artists, in order to inspire new creations. Two programmes have been developed following this process. Cultural studies of Space Program For many years, the Observatoire de l’Espace has been developing a cultural story of outer space by building an inventory of everything created on Earth about outer space. This resulted in a wide classification: space instruments (for example satellite prototypes), audio and video items (documentaries, video archives...), works of art inspired by outer space, laboratories and other space-related buildings, and all kinds of everyday objects. The Observatoire de l’Espace also welcomes researchers in the human sciences and art history, and builds partnerships with research laboratories to foster work about space in those fields. An academic blog called Humanités spatiales (Space humanities) was also created in 2015. This website is dedicated to analysis and dialogue for researchers who are interested in space activities and their cultural representations. All this work can be used by artists as a basis for new creations. The Spatial Creation and Imagination Program The Observatoire de l’Espace fosters space-related artistic creation through its programme called "Création et Imaginaire spatial" (Spatial creation and imagination), which enables artists to benefit from an off-site residency. Various immersive ways to discover the space field are offered to the artists: interviews with space experts, scientific data and documentation, access to places where space activities take place (laboratories, technical or industrial centers...), participation in scientific seminars or even weightlessness flights aboard the Airbus Zero-G. Many creations arise from this programme: from literature to contemporary art and from performing arts to contemporary music. Many cultural events are organized by the Observatoire de l’Espace in order to present those creations, but they are also displayed in many cultural venues. Events Sideration festival The Sideration festival takes place every year in March at the CNES headquarters in Paris. Since 2011, it has enabled a deep immersion in space and imagination through an eclectic programme: theatre, music, video, cinema, visual arts, readings and even real or fictional science stories. Every year, the CNES headquarters become a large art-science stage where about 30 artists have the opportunity to exhibit their space-related productions. Those creations are often the result of a strong collaboration with the artists through the "off-site" residency programme, the sharing of studies about space, or through calls for submissions regarding the journal Espace(s). Nuit Blanche (All-nighter) One opportunity for the Observatoire de l’Espace to organize an exhibition is the Nuit Blanche, an annual night-long, free cultural arts festival.
Since 2014, the Observatoire de l’Espace has called for artistic projects dealing with space history. Selected archives are given to artists in order to widen their imagination or to be the core material of their creations. Archives and creations are both exhibited during the Nuit Blanche at the CNES headquarters. The journal Espace(s) The journal Espace(s) is a semi-annual journal dedicated to literature and a large variety of creations (typography, performing arts, comic strips, music, poetry...). It gathers texts dealing with a chosen theme linked with space (dreams, revolt, revolution, or obsession and fascination). About thirty authors contribute to each issue. The collection is written in the format of laboratory notebooks in order to mix the literary and scientific universes. The works of the Spatial Creation and Imagination programme, made by the artists in residency, also have their place in the journal. References CNES Space art
The Space Observatory (Observatoire de l'Espace)
Astronomy
888
19,747,247
https://en.wikipedia.org/wiki/Monobasic
Monobasic may refer to: A monobasic or monoprotic acid, able to donate one proton per molecule A monobasic salt, with one hydrogen atom, with respect to the parent acid, replaced by cations Monobasic, or Monotypic taxon, a taxonomic group (taxon) that contains only one immediately subordinate taxon Monobasic, an album by Jess Cornelius Mono-Basic, the implementation of Visual Basic.Net for Mono See also Dibasic (disambiguation) Tribasic (disambiguation) Polybasic (disambiguation) Chemical nomenclature
Monobasic
Chemistry
126
5,959,283
https://en.wikipedia.org/wiki/Bromo-Seltzer
Bromo-Seltzer is a brand of antacid formulated to relieve pain occurring together with heartburn, upset stomach, or acid indigestion. It originally contained sodium bromide and acetanilide, both toxic substances which were eventually removed. Its current formulation contains the pain reliever aspirin and two reactive chemicals, sodium bicarbonate and citric acid, which create effervescence when mixed with water. Sodium bicarbonate is an antacid. History Bromo-Seltzer was invented in 1888 by Isaac E. Emerson and produced by the Emerson Drug Company of Baltimore, Maryland. It was sold in the United States in the form of effervescent granules that were mixed with water before ingestion. The product took its name from a component of the original formula, sodium bromide; each dose contained 3.2 mEq/teaspoon of it. Bromides are a class of tranquilizers that were withdrawn from the U.S. market in 1975 due to their toxicity. Their sedative effect probably accounted for Bromo-Seltzer's popularity as a hangover remedy. Early formulas also used acetanilide as the analgesic ingredient; it is now known to be toxic. Acetanilide was replaced with its metabolite acetaminophen, and the current formulation uses aspirin, sodium bicarbonate, and citric acid, the latter two of which provide the carbonation. Bromo-Seltzer's main offices and factory were located in downtown Baltimore, Maryland, at the corner of West Lombard and South Eutaw streets. The factory's most notable feature was the Emerson Bromo-Seltzer Tower, built in 1911, whose four clock faces are ringed by letters spelling out the product name. The tower was patterned on the Palazzo Vecchio in Florence, Italy, and is listed on the National Register of Historic Places. The tower originally held a 51-foot (16 m) representation of a Bromo-Seltzer bottle at its top, glowing blue and rotating on a vertical axis. The sign weighed 20 tons (18.1 tonnes), included 314 incandescent light bulbs, and was topped with a crown. The sign was removed in 1936 because of structural concerns. Emerson, who traveled widely, said the fizz reminded him of the bubbling action of Mount Bromo, a volcano in Java. In popular culture Bromo-Seltzer is mentioned in several films and TV shows, including The Crooked Circle (1932), Bed of Roses (1933), Topper (1937), Wonder Man (1945), Somewhere in the Night (1946), The Postman Always Rings Twice (1981), The Hudsucker Proxy (1994), the 1998 The Simpsons episode "Bart Carny", and The Golden Girls (season 4, episode 1). It is mentioned in John Steinbeck's 1939 novel The Grapes of Wrath. Drugstore Bromo-Seltzer dispensers are mentioned in Georges Simenon's 1949 detective novel Maigret chez le coroner, which takes place in Arizona. It was a sponsor of the old-time radio program “The Inner Sanctum”. It is mentioned in several songs, including "Bewitched, Bothered and Bewildered" by Rodgers and Hart, "Adelaide's Lament" in the musical Guys and Dolls, and "Pachuco Cadaver" by Captain Beefheart and his Magic Band. In Spike Jones' version of Laura, the chorus chants "Bromo-Seltzer, Bromo-Seltzer..." to evoke the sound of a chugging train. References External links Bromo-Seltzer at drugs.com Emerson Bromo-Seltzer Tower website Products introduced in 1888 Bromides Drugs acting on the gastrointestinal system and metabolism
Bromo-Seltzer
Chemistry
801
13,454,849
https://en.wikipedia.org/wiki/MUMmer
MUMmer is a bioinformatics software system for sequence alignment. It is based on the suffix tree data structure. It has been used for comparing different genome assemblies to one another, which allows scientists to determine how a genome has changed. The acronym "MUMmer" comes from "Maximal Unique Matches", or MUMs. The original algorithms in the MUMmer software package were designed by Art Delcher, Simon Kasif and Steven Salzberg. MUMmer was the first whole-genome comparison system developed in bioinformatics. It was originally applied to the comparison of two related strains of bacteria. The MUMmer software is open source. The system is maintained primarily by Steven Salzberg and Arthur Delcher at the Center for Computational Biology at Johns Hopkins University. MUMmer is a highly cited bioinformatics system in the scientific literature. According to Google Scholar, as of early 2013 the original MUMmer paper (Delcher et al., 1999) had been cited 691 times; the MUMmer 2 paper (Delcher et al., 2002) had been cited 455 times; and the MUMmer 3.0 article (Kurtz et al., 2004) had been cited 903 times. Overview MUMmer is a fast algorithm used for the rapid alignment of entire genomes. It is relatively new and has four versions. Versions of MUMmer MUMmer1 MUMmer1, or just MUMmer, consists of three parts: the first is the creation of suffix trees (to get MUMs), the second is the longest increasing subsequence or longest common subsequence (to order MUMs), and the last is an alignment to close gaps. Interruptions between aligned MUMs are known as gaps; other alignment algorithms fill these gaps. The gaps fall into the following four classes: An SNP interruption – when comparing two sequences, one character will differ. An insertion – when comparing two sequences, there is a subsequence that appears in only one of the sequences; it corresponds to an empty gap in the other sequence when the two are compared. A highly polymorphic region – when comparing two sequences, there can be found a subsequence in which every single character differs. A repeat – the repetition of a sequence. Since MUMs can only take unique sequences, such a gap can be one repetition of one of the MUMs. MUMmer 2 This algorithm was redesigned to require less memory and increase speed and accuracy. It also allows for the alignment of bigger genomes. The improvement was in the amount of data stored in the suffix trees, achieved by employing the representation created by Kurtz. MUMmer 3 According to Stefan Kurtz and his teammates, the most significant technical improvement in MUMmer 3.0 is "a complete rewrite of the suffix-tree code, based on the compact suffix-tree representation" of the tree described in the article "Reducing the space requirement of suffix trees". MUMmer 4 According to Guillaume and his team, there are some extra improvements in the implementation and also innovation with query parallelism. "MUMmer4 now includes options to save and load the suffix array for a given reference." This allows the suffix array to be built once and reloaded from disk for later runs. Software - Open Source MUMmer is open-source software and can be accessed online. Related sequence alignments There are other sequence alignment tools and concepts: Edit distance BLAST Bowtie BWA Blat Mauve LASTZ References External links MUMmer home page MUMmer2 Book MUMmer Software MUMmer3 MUMmer1 MUMmer4 Bioinformatics software
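To make the MUMmer1 pipeline concrete, here is a toy sketch of the anchor-and-order idea described above. It is not MUMmer's suffix-tree implementation: fixed-length k-mers that occur exactly once in each sequence stand in for MUMs (real MUMs are maximal, variable-length matches), and a longest increasing subsequence then orders them, as in MUMmer's second phase. The example sequences and the parameter k are illustrative.

```python
# Toy sketch of MUMmer's anchor-and-chain idea (not its suffix-tree
# implementation). Unique shared k-mers stand in for MUMs; a longest
# increasing subsequence (LIS) orders them consistently.
from bisect import bisect_left
from collections import Counter

def unique_anchors(ref, qry, k):
    """(ref_pos, qry_pos) pairs of k-mers unique in both sequences."""
    ref_kmers = [ref[i:i + k] for i in range(len(ref) - k + 1)]
    qry_kmers = [qry[i:i + k] for i in range(len(qry) - k + 1)]
    ref_count, qry_count = Counter(ref_kmers), Counter(qry_kmers)
    qry_pos = {w: i for i, w in enumerate(qry_kmers)}
    return sorted((i, qry_pos[w]) for i, w in enumerate(ref_kmers)
                  if ref_count[w] == 1 and qry_count.get(w, 0) == 1)

def chain_by_lis(anchors):
    """Largest subset of anchors increasing in both genomes (patience LIS)."""
    tails, tails_idx = [], []          # smallest tail qry-pos per LIS length
    parent = [-1] * len(anchors)
    for i, (_, q) in enumerate(anchors):   # anchors already sorted by ref pos
        j = bisect_left(tails, q)
        parent[i] = tails_idx[j - 1] if j > 0 else -1
        if j == len(tails):
            tails.append(q)
            tails_idx.append(i)
        else:
            tails[j], tails_idx[j] = q, i
    chain, i = [], (tails_idx[-1] if tails_idx else -1)
    while i != -1:
        chain.append(anchors[i])
        i = parent[i]
    return chain[::-1]

ref, qry = "ACGTTGCAAGGTCGAT", "ACGTAGCAAGGACGAT"
print(chain_by_lis(unique_anchors(ref, qry, 4)))
# -> [(0, 0), (5, 5), (6, 6), (7, 7), (12, 12)]
```

The intervals between consecutive chained anchors are exactly the "gaps" classified above (SNPs, insertions, polymorphic regions, repeats), which MUMmer closes with a conventional alignment algorithm.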
MUMmer
Biology
746
1,531,409
https://en.wikipedia.org/wiki/Homeomorphism%20group
In mathematics, particularly topology, the homeomorphism group of a topological space is the group consisting of all homeomorphisms from the space to itself with function composition as the group operation. They are important to the theory of topological spaces, generally exemplary of automorphism groups and topologically invariant in the group isomorphism sense. Properties and examples There is a natural group action of the homeomorphism group of a space on that space. Let X be a topological space and denote the homeomorphism group of X by Homeo(X). The action is defined as follows: Homeo(X) × X → X, (f, x) ↦ f(x). This is a group action since for all f, g in Homeo(X) and all x in X, f⋅(g⋅x) = f(g(x)) = (f ∘ g)⋅x, where ⋅ denotes the group action, and the identity element of Homeo(X) (which is the identity function on X) sends points to themselves. If this action is transitive, then the space is said to be homogeneous. Topology As with other sets of maps between topological spaces, the homeomorphism group can be given a topology, such as the compact-open topology. In the case of regular, locally compact spaces the group multiplication is then continuous. If the space is compact and Hausdorff, the inversion is continuous as well and Homeo(X) becomes a topological group. If X is Hausdorff, locally compact and locally connected this holds as well. Some locally compact separable metric spaces exhibit an inversion map that is not continuous, resulting in Homeo(X) not forming a topological group. Mapping class group In geometric topology especially, one considers the quotient group obtained by quotienting out by isotopy, called the mapping class group: MCG(X) = Homeo(X) / Homeo0(X), where Homeo0(X) is the subgroup of homeomorphisms isotopic to the identity. The MCG can also be interpreted as the 0th homotopy group, MCG(X) = π0(Homeo(X)). This yields the short exact sequence: 1 → Homeo0(X) → Homeo(X) → MCG(X) → 1. In some applications, particularly surfaces, the homeomorphism group is studied via this short exact sequence, and by first studying the mapping class group and group of isotopically trivial homeomorphisms, and then (at times) the extension. See also Mapping class group References Group theory Topology Topological groups
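As a standard illustration of this short exact sequence (a well-known fact, not stated in the article above), the circle makes every term explicit: the mapping class group of S¹ is the order-two group generated by a reflection.

```latex
% For X = S^1: every orientation-preserving homeomorphism of the circle
% is isotopic to the identity, and an orientation-reversing reflection
% represents the one nontrivial mapping class.
1 \longrightarrow \mathrm{Homeo}_0(S^1) \longrightarrow \mathrm{Homeo}(S^1)
  \longrightarrow \mathrm{MCG}(S^1) \cong \mathbb{Z}/2 \longrightarrow 1
```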
Homeomorphism group
Physics,Mathematics
392
23,048,878
https://en.wikipedia.org/wiki/EchoStar%20Mobile
EchoStar Mobile Limited, an Irish company with commercial operations headquartered in the United Kingdom and a data centre based in Griesheim, Germany, is a mobile operator that provides connectivity across Europe through a converged satellite and terrestrial network. EchoStar Mobile is a subsidiary of EchoStar Corporation, a provider of satellite communications devices. History EchoStar Mobile Limited was established in 2008 as Solaris Mobile, a joint venture company between SES and Eutelsat Communications to develop and commercialize the first geostationary satellite systems in Europe for broadcasting video, radio and data to in-vehicle receivers and to mobile devices, such as mobile phones, portable media players and PDAs. In January 2014 all stock in Solaris Mobile was acquired by EchoStar Corporation and in March 2015 the company was renamed EchoStar Mobile. The agreement to set up Solaris Mobile was reached in 2006, with the company formed in 2008. SES and Eutelsat – both successful European satellite operators, providing TV and other services from geostationary satellites to millions of cable and direct-to-home viewers – invested €130m in the venture. The services to be developed included video, radio, multimedia data, interactive services, and voice communications, with the primary aim of delivering mobile television any time, anywhere. Its headquarters is in Dublin, Ireland. Solaris Mobile's first commercial contract was with the Italian media publishing group Class Editori, to launch a digital radio service in Italy. A hybrid satellite/terrestrial network was to be deployed initially in Milan in October 2011 and extended across the country in 2012. Solaris claimed that the network would enable Italians to access dozens of new digital radio channels broadcasting music, news, entertainment and sports, in their original format with continuity of reception across the entire country, and that the digital audio signal would be complemented with new visual media services such as programme information and traffic data. Applications The EU Telecoms Commissioner, Viviane Reding, has commented, "Mobile satellite services have huge potential: they can enable Europeans to access new communication services, particularly in rural and less populated regions." Solaris Mobile primarily intends to provide mobile TV and interactive services to handheld and vehicle receivers. For in-vehicle use, the mobile satellite receivers could also double as web browsers providing full Internet access, and deliver interactive services such as online reservations, emergency warnings, or toll payments. The coverage across Europe will also enable the system to be used in situations when other means of communication are not possible, such as gathering data (traffic, weather, pollution) from moving vehicles, and support for emergency and rescue services in isolated regions, under extreme conditions or when terrestrial networks have been compromised. To avoid the requirement for mobile phones with S-band reception for the satellite services, Solaris Mobile has developed a 'Pocket Gateway' in conjunction with the Finnish company Elektrobit. The Gateway is a compact S-band receiver which decodes DVB-SH transmissions from the Solaris satellite and relays them over WiFi to any compatible handset with a web browser. The Gateway is also planned to be used in vehicles with a roof-mounted antenna for S-band reception, with services accessed on passengers' mobile phones.
The technology was demonstrated at the GSMA Mobile World Congress in Barcelona in February 2010. Technology The Solaris Mobile services use DVB-SH technology to deliver IP-based data and media content to handheld and in-vehicle terminals using a hybrid satellite/terrestrial system, with satellite transmission serving the whole of Europe and beyond, and terrestrial repeaters for urban and indoor penetration. The S-band frequencies used (2.00 GHz) are reserved for the exclusive use of satellite and terrestrial mobile services, and sit alongside the UMTS frequencies already in use across Europe for 3G terrestrial mobile phone services, allowing the reuse of existing cellular towers and antennas, and the simple incorporation of Solaris services in mobile handsets. Handsets equipped with the first DVB-SH chipsets were successfully demonstrated live at the Mobile World Congress in Barcelona in February 2008. Solaris was intended to first use the Eutelsat W2A satellite at 10° east, which contains an S-band payload and was scheduled for launch in early 2009. However, following the successful launch on April 3, 2009, the S-band payload was found to show "an anomaly" which put in doubt the payload's capability to provide mobile satellite services for Solaris. Further testing of the satellite was undertaken to establish its future in the Solaris programme. Investigation of the S-band payload confirmed significant non-compliance with its original specifications. On 1 July 2009, Solaris Mobile filed an insurance claim. The technical findings indicated that the company should be able to offer some, but not all, of the services it had been planning to offer. Regulatory On June 30, 2008 the European Parliament and the Council adopted a decision to establish a single selection and authorisation process to ensure a coordinated introduction of mobile satellite services (MSS) in Europe. The selection process was launched in August 2008 and attracted four applications by prospective operators (ICO, Inmarsat, Solaris Mobile, TerreStar). In May 2009, the European Commission selected two operators, Inmarsat Ventures and Solaris Mobile, giving these operators "the right to use the specific radio frequencies identified in the Commission's decision and the right to operate their respective mobile satellite systems". EU member states were then required to ensure that the two operators had the right to use the specific radio frequencies identified in the Commission's decision and the right to operate their respective mobile satellite systems for 18 years from the selection decision. The operators were required to start operations within 24 months of the selection decision. Although the EU's decision was announced days after the apparent failure of the payload intended to serve Solaris, the company remained confident of "its ability to meet the commitments made under the European Commission selection process". In May 2010, Solaris Mobile announced that following the granting of spectrum by the European Commission, the company had been actively pursuing licenses from European member states and had just been granted 18-year licences to operate mobile satellite services in France, Sweden and Germany, to add to existing licenses for Finland, Luxembourg, Italy, and Slovenia. 
See also DVB-SH EchoStar Corporation SES Eutelsat S-band References Companies based in Dublin (city) Telecommunications companies established in 2008 Irish companies established in 2008 SES (company) Satellite television Interactive television Satellite operators Mobile web Mobile television Mobile
EchoStar Mobile
Technology
1,312
18,598,282
https://en.wikipedia.org/wiki/Levoamphetamine
Levoamphetamine (also known by its International Nonproprietary Name, levamfetamine) is a stimulant medication which is used in the treatment of certain medical conditions. It was previously marketed by itself under the brand name Cydril, but is now available only in combination with dextroamphetamine in varying ratios under brand names like Adderall and Evekeo. The drug is known to increase wakefulness and concentration in association with decreased appetite and fatigue. Pharmaceuticals that contain levoamphetamine are currently indicated and prescribed for the treatment of attention deficit hyperactivity disorder (ADHD), obesity, and narcolepsy in some countries. Levoamphetamine is taken by mouth. Levoamphetamine acts as a releasing agent of the monoamine neurotransmitters norepinephrine and dopamine. It is similar to dextroamphetamine in its ability to release norepinephrine and in its sympathomimetic effects but is a few times weaker than dextroamphetamine in its capacity to release dopamine and in its psychostimulant effects. Levoamphetamine is the levorotatory stereoisomer of the racemic amphetamine molecule, whereas dextroamphetamine is the dextrorotatory isomer. Levoamphetamine was first introduced in the form of racemic amphetamine under the brand name Benzedrine in 1935 and as an enantiopure drug under the brand name Cydril in the 1970s. While pharmaceutical formulations containing enantiopure levoamphetamine are no longer manufactured, levomethamphetamine (levmetamfetamine) is still marketed and sold over-the-counter as a nasal decongestant. In addition to being used in pharmaceutical drugs itself, levoamphetamine is a known active metabolite of certain other drugs, such as selegiline (L-deprenyl). Medical uses Levoamphetamine has been used in the treatment of attention deficit hyperactivity disorder (ADHD) both alone and in combination with dextroamphetamine at different ratios. Levoamphetamine on its own has been found to be effective in the treatment of ADHD in multiple clinical studies conducted in the 1970s. The clinical dosages and potencies of levoamphetamine and dextroamphetamine in the treatment of ADHD were fairly similar in these older studies. Available forms Racemic amphetamine The first patented amphetamine brand, Benzedrine, was a racemic (i.e., equal parts) mixture of the free bases or the more stable sulfate salts of both amphetamine enantiomers (levoamphetamine and dextroamphetamine) that was introduced in the United States in 1934 as an inhaler for treating nasal congestion. It was later realized that the amphetamine enantiomers could treat obesity, narcolepsy, and ADHD. Because of the greater central nervous system effect of the dextrorotatory enantiomer (i.e., dextroamphetamine), sold as Dexedrine, prescription of the Benzedrine brand fell and it was eventually discontinued. However, in 2012, racemic amphetamine sulfate was reintroduced under the Evekeo brand name. Adderall Adderall is a pharmaceutical with a 3.1:1 ratio of dextro- to levoamphetamine base equivalents; it contains equal amounts (by weight) of four salts: dextroamphetamine sulfate, amphetamine sulfate, dextroamphetamine saccharate, and amphetamine (D,L)-aspartate monohydrate. The result is a 76% dextroamphetamine to 24% levoamphetamine mixture, or roughly a 3:1 ratio (see the sketch below). Evekeo Evekeo is an FDA-approved medication that contains racemic amphetamine sulfate (i.e., 50% levoamphetamine sulfate and 50% dextroamphetamine sulfate). 
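The enantiomer arithmetic behind the Adderall and Evekeo figures above can be checked with a short, illustrative Python sketch. It assumes equal weights of the four Adderall salts and, for simplicity, an equal amphetamine-base content per gram of each salt; correcting for the salts' differing molecular weights shifts the simple 75:25 result to the 76:24 figure cited above.

    # Illustrative sketch only: equal weights of the four salts are assumed,
    # and differences in amphetamine-base content per gram of salt are ignored.
    # Each value is the dextro fraction of the amphetamine base in that salt.
    adderall_salts = {
        "dextroamphetamine sulfate": 1.0,      # single-enantiomer salt
        "amphetamine sulfate": 0.5,            # racemic: half dextro, half levo
        "dextroamphetamine saccharate": 1.0,   # single-enantiomer salt
        "amphetamine (D,L)-aspartate": 0.5,    # racemic: half dextro, half levo
    }

    dextro = sum(adderall_salts.values()) / len(adderall_salts)
    print(f"Adderall (approx.): {dextro:.0%} dextro / {1 - dextro:.0%} levo")
    # -> 75% dextro / 25% levo, i.e. about 3:1; molecular-weight corrections
    #    give the 76:24 (3.1:1) split quoted above.

    # Evekeo is racemic amphetamine sulfate, so the split is simply 50:50.
    print("Evekeo: 50% dextro / 50% levo")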
Evekeo is approved for the treatment of narcolepsy, ADHD, and exogenous obesity. The orally disintegrating tablets are approved for the treatment of ADHD in children and adolescents aged six to 17 years. Other forms Products using amphetamine base are now marketed. Dyanavel XR, a liquid suspension form, became available in 2015 and contains about 24% levoamphetamine. Adzenys XR, an orally dissolving tablet, came to market in 2016 and contains 25% levoamphetamine. Side effects Levoamphetamine can produce sympathomimetic side effects. Pharmacology Pharmacodynamics Levoamphetamine, similarly to dextroamphetamine, acts as a reuptake inhibitor and releasing agent of norepinephrine and dopamine in vitro. However, there are differences in potency between the two compounds. Levoamphetamine is similar in potency to, or somewhat more potent than, dextroamphetamine in inducing the release of norepinephrine, whereas dextroamphetamine is approximately 4-fold more potent than levoamphetamine in inducing the release of dopamine. In addition, as a reuptake inhibitor, levoamphetamine is about 3- to 7-fold less potent than dextroamphetamine in inhibiting dopamine reuptake but is only about 2-fold less potent in inhibiting norepinephrine reuptake. Dextroamphetamine is very weak as a reuptake inhibitor of serotonin, whereas levoamphetamine is essentially inactive in this regard. Levoamphetamine and dextroamphetamine are both also relatively weak reversible inhibitors of monoamine oxidase (MAO) and hence can inhibit catecholamine metabolism. However, this action may not occur significantly at clinical doses and may only be relevant at high doses. In rodent studies, both dextroamphetamine and levoamphetamine dose-dependently induce the release of dopamine in the striatum and norepinephrine in the prefrontal cortex. Dextroamphetamine is about 3- to 5-fold more potent than levoamphetamine in increasing striatal dopamine levels in rodents in vivo, whereas the two enantiomers are about equally effective in terms of increasing prefrontal norepinephrine levels. Dextroamphetamine has greater effects on dopamine levels than on norepinephrine levels, whereas levoamphetamine has relatively more balanced effects on dopamine and norepinephrine levels. As with rodent studies, levoamphetamine and dextroamphetamine have been found to be similarly potent in elevating norepinephrine levels in cerebrospinal fluid in monkeys. By an uncertain mechanism, the striatal dopamine release of dextroamphetamine in rodents appears to be prolonged by levoamphetamine when the two enantiomers are administered at a 3:1 ratio (though not at a 1:1 ratio). The catecholamine-releasing effects of levoamphetamine and dextroamphetamine in rodents have a fast onset of action, with a peak of effect after about 30 to 45 minutes, are large in magnitude (e.g., 700–1,500% of baseline for dopamine and 400–450% of baseline for norepinephrine), and decline relatively rapidly after the effects reach their maximum. The magnitudes of the effects of amphetamines are greater than those of classical reuptake inhibitors like atomoxetine and bupropion. In addition, unlike with reuptake inhibitors, there is no dose–effect ceiling in the case of amphetamines. Although dextroamphetamine is more potent than levoamphetamine, both enantiomers can maximally increase striatal dopamine release by more than 5,000% of baseline. 
This is in contrast to reuptake inhibitors like bupropion and vanoxerine, which have 5- to 10-fold smaller maximal impacts on dopamine levels and, unlike amphetamines, were not experienced as stimulating or euphoric. Dextroamphetamine has greater potency in producing stimulant-like effects in rodents and non-human primates than levoamphetamine. Some rodent studies have found it to be 5- to 10-fold more potent in its stimulant-like effects than levoamphetamine. Levoamphetamine is also less potent than dextroamphetamine in its anorectic effects in rodents. Dextroamphetamine is about 4-fold more potent than levoamphetamine in motivating self-administration in monkeys and is about 2- to 3-fold more potent than levoamphetamine in terms of positive reinforcing effects in humans. Potency ratios of dextroamphetamine versus levoamphetamine with single doses of 5 to 80 mg, in terms of psychological effects in humans including stimulation, wakefulness, activation, euphoria, reduction of hyperactivity, and exacerbation of psychosis, have ranged from 1:1 to 4:1 in a variety of older clinical studies. With very large doses, ranging from 270 to 640 mg, the potency ratios of dextroamphetamine and levoamphetamine in stimulating locomotor activity and inducing amphetamine psychosis in humans have ranged from 1:1 to 2:1 in a couple of studies. The differences in potency and in dopamine versus norepinephrine release between dextroamphetamine and levoamphetamine suggest that dopamine is the primary neurochemical mediator responsible for the stimulant and euphoric effects of these agents. In addition to inducing norepinephrine release in the brain, levoamphetamine and dextroamphetamine induce the release of epinephrine (adrenaline) in the peripheral sympathetic nervous system, and this is related to their cardiovascular effects. Although levoamphetamine is less potent than dextroamphetamine as a stimulant, it is approximately equipotent with dextroamphetamine in producing various peripheral effects, including vasoconstriction, blood-pressure elevation, and other cardiovascular effects. Similarly to dextroamphetamine, levoamphetamine has been found to improve symptoms in an animal model of ADHD, the spontaneously hypertensive rat (SHR), including improving sustained attention and reducing overactivity and impulsivity. These findings parallel the clinical results in which both levoamphetamine and dextroamphetamine have been found to be effective in the treatment of ADHD in humans. Unlike the case of dextroamphetamine versus dextromethamphetamine, in which the latter is more effective than the former, levoamphetamine is substantially more potent as a dopamine releaser and stimulant than levomethamphetamine. Conversely, levoamphetamine, levomethamphetamine, and dextroamphetamine are all similar in their potencies as norepinephrine releasers. In addition to its catecholamine-releasing activity, levoamphetamine is also an agonist of the trace amine-associated receptor 1 (TAAR1). Levoamphetamine has also been found to act as a catecholaminergic activity enhancer (CAE), notably at much lower concentrations than those required for its catecholamine-releasing activity. It is similarly potent to selegiline and levomethamphetamine but is more potent than dextromethamphetamine and dextroamphetamine in this action. The CAE effects of such agents may be mediated by TAAR1 agonism. Pharmacokinetics The pharmacokinetics of levoamphetamine have been studied, usually after oral administration in combination with dextroamphetamine at different ratios. 
The pharmacokinetics of levoamphetamine have also been studied as a metabolite of selegiline. Absorption The oral bioavailability of levoamphetamine has been found to be similar to that of dextroamphetamine. The time to peak levels of levoamphetamine with immediate-release (IR) formulations of amphetamine ranges from 2.5 to 3.5 hours and with extended-release (ER) formulations ranges from 5.3 to 8.2 hours, depending on the formulation and the study. For comparison, the time to peak levels of dextroamphetamine with IR formulations ranges from 2.4 to 3.3 hours and with ER formulations ranges from 4.0 to 8.0 hours. The peak levels of levoamphetamine are proportionally similar to those of dextroamphetamine with administration of amphetamine at varying ratios. With a single oral dose of 10 mg racemic amphetamine (a 1:1 ratio of enantiomers, or 5 mg dextroamphetamine and 5 mg levoamphetamine), peak levels of dextroamphetamine were 14.7 ng/mL and peak levels of levoamphetamine were 12.0 ng/mL in one study. Food does not affect the peak levels of, or overall exposure to, levoamphetamine or dextroamphetamine with IR racemic amphetamine. However, the time to peak levels was delayed from 2.5 hours (range 1.5–6 hours) to 4.5 hours (range 2.5–8.0 hours). During oral selegiline therapy at a dosage of 10 mg/day, circulating levels of levoamphetamine have been found to be 6 to 8 ng/mL and levels of levomethamphetamine have been reported to be 9 to 14 ng/mL. Although levels of levoamphetamine and levomethamphetamine are relatively low at typical doses of selegiline, they could be clinically relevant and may contribute to the effects and side effects of selegiline. Distribution The volume of distribution of both levoamphetamine and dextroamphetamine is about 3 to 4 L/kg. The plasma protein binding of levoamphetamine is 31.7%, compared with 29.0% for dextroamphetamine in the same study. Metabolism Levoamphetamine and dextroamphetamine are metabolized via CYP2D6-mediated hydroxylation to produce 4-hydroxyamphetamine and additionally via oxidative deamination. There are several enzymes involved in the metabolism of amphetamine, of which CYP2D6 is one. Levoamphetamine seems to be metabolized somewhat less efficiently than dextroamphetamine. The pharmacokinetics of levoamphetamine generated as a metabolite from selegiline have been found not to vary significantly in CYP2D6 poor metabolizers versus extensive metabolizers, suggesting that CYP2D6 may be minimally involved in the clinical metabolism of levoamphetamine. Elimination The mean elimination half-life of levoamphetamine ranges from 11.7 to 15.2 hours in different studies. Its half-life is somewhat longer than that of dextroamphetamine, with a difference of about 1 to 2 hours. For comparison, in the same studies that reported the preceding values for levoamphetamine's half-life, the half-life of dextroamphetamine ranged from 10.0 to 12.4 hours. The elimination of amphetamine is highly dependent on urinary pH. Urinary acidifying agents like ascorbic acid and ammonium chloride increase amphetamine excretion and reduce its elimination half-life, whereas urinary alkalinizing agents like acetazolamide enhance renal tubular reabsorption and extend its half-life. The urinary excretion of unchanged amphetamine is 70% on average with a urinary pH of 6.6 and 17 to 43% at a urinary pH of greater than 6.7. 
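As a worked illustration of the elimination figures above, the following Python sketch applies simple first-order decay to the upper-end half-lives reported (15.2 hours for levoamphetamine, 12.4 hours for dextroamphetamine); real kinetics additionally depend on urinary pH, as noted above, so this is an idealized sketch rather than a clinical model.

    def fraction_remaining(t_hours, half_life_hours):
        """First-order elimination: fraction of the dose still present after t hours."""
        return 0.5 ** (t_hours / half_life_hours)

    # Upper-end half-lives reported above (hours)
    for name, t_half in [("levoamphetamine", 15.2), ("dextroamphetamine", 12.4)]:
        left = fraction_remaining(24, t_half)
        print(f"{name}: {left:.0%} remaining 24 h after a single dose")
    # levoamphetamine: ~33% remaining; dextroamphetamine: ~26% remaining,
    # reflecting levoamphetamine's somewhat longer half-life.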
With selegiline at an oral dose of 10 mg, levoamphetamine and levomethamphetamine are eliminated in urine; recovery of levoamphetamine is 9 to 30% (or about 1–3 mg), while that of levomethamphetamine is 20 to 60% (or about 2–6 mg). Chemistry Levoamphetamine is a substituted phenethylamine and an amphetamine. It is also known as L-α-methyl-β-phenylethylamine or as (2R)-1-phenylpropan-2-amine. Levoamphetamine is the levorotatory stereoisomer of the amphetamine molecule. Racemic amphetamine contains two optical isomers in equal amounts, dextroamphetamine (the dextrorotatory enantiomer) and levoamphetamine. History The amphetamine psychostimulants have their origin in ephedra. This plant, also known as "ma huang", is an herb which has been used for thousands of years in traditional Chinese medicine as a stimulant and antiasthmatic medicine. Ephedrine ((1R,2S)-β-hydroxy-N-methylamphetamine), an analogue and derivative of amphetamine and the major pharmacologically active constituent of ephedra, was first isolated from the plant in 1885. Another plant, known as Catha edulis (khat), also naturally contains amphetamines, specifically cathine ((1S,2S)-β-hydroxyamphetamine) and cathinone (β-ketoamphetamine). It has a long history of use for its stimulant effects in Eastern Africa and the Arabian Peninsula. However, cathine was not isolated from khat until 1930 and cathinone was not isolated from the plant until 1975. Amphetamine, which is a racemic mixture of dextroamphetamine and levoamphetamine, was first synthesized in 1887, shortly after the isolation of ephedrine. However, it was not until 1927 that amphetamine was resynthesized by Gordon Alles and studied by him in animals and humans. This led to the discovery of the stimulating effects of amphetamine in humans in 1929, after Alles injected himself with 50 mg of the drug. Levoamphetamine was first introduced in the form of racemic amphetamine (a 1:1 combination of levoamphetamine and dextroamphetamine) under the brand name Benzedrine in 1935. It was indicated for the treatment of narcolepsy, mild depression, parkinsonism, and a variety of other conditions. Dextroamphetamine was found to be the more potent of the two enantiomers of amphetamine and was introduced as an enantiopure drug under the brand name Dexedrine in 1937. Owing to its lower potency, levoamphetamine has received far less attention than racemic amphetamine or dextroamphetamine. Levoamphetamine was studied in the treatment of attention deficit hyperactivity disorder (ADHD) in the 1970s and was found to be clinically effective for this condition, similarly to dextroamphetamine. As a result, it was marketed as an enantiopure drug under the brand name Cydril for the treatment of ADHD in the 1970s. However, it was reported in 1976 that racemic amphetamine was less effective than dextroamphetamine in treating ADHD. As a result of this study, use of racemic amphetamine in the treatment of ADHD dramatically declined in favor of dextroamphetamine. Enantiopure levoamphetamine was eventually discontinued and is no longer available today. Society and culture Legal status Levoamphetamine is a controlled substance in the Philippines. Recreational use Misuse of enantiopure levoamphetamine is reportedly not known. However, rare cases of misuse of levomethamphetamine, which is available over-the-counter as a nasal decongestant, have been reported. 
Due to their lower efficacy in stimulating dopamine release and their reduced potency as psychostimulants, levoamphetamine and levomethamphetamine would theoretically be expected to have less misuse potential than the corresponding dextroamphetamine and dextromethamphetamine forms. Research Levoamphetamine as an enantiopure drug has been studied in the past in a variety of contexts. These include its effects on, or use in the treatment of, mood, "minimal brain dysfunction", narcolepsy, "hyperkinetic syndrome" and aggression, sleep, schizophrenia, wakefulness, Tourette's syndrome, and Parkinson's disease, among others. Levoamphetamine has been studied in the treatment of multiple sclerosis in more modern studies and has been reported to improve cognition and memory in this condition as well. It was under development for this indication under the name levafetamine and the developmental code name C-105 and reached phase 2 clinical trials, but development was discontinued sometime after 2008. Other drugs Selegiline Levoamphetamine is a major active metabolite of selegiline (L-deprenyl; N-propargyl-L-methamphetamine). Selegiline is a monoamine oxidase inhibitor (MAOI), specifically a selective inhibitor of monoamine oxidase B (MAO-B) at lower doses and a dual inhibitor of both monoamine oxidase A (MAO-A) and MAO-B at higher doses. It also has additional activities, such as acting as a catecholaminergic activity enhancer (CAE), possibly via agonism of the TAAR1, and having potential neuroprotective effects. Selegiline is clinically used as an antiparkinsonian agent in the treatment of Parkinson's disease and as an antidepressant in the treatment of major depressive disorder. In addition to levoamphetamine, selegiline also metabolizes into levomethamphetamine. With a 10 mg oral dose of selegiline, about 2 to 6 mg of levomethamphetamine and 1 to 3 mg of levoamphetamine are excreted in urine. As levoamphetamine and levomethamphetamine are norepinephrine and/or dopamine releasing agents, they may contribute to the effects and side effects of selegiline. This may particularly include the cardiovascular and sympathomimetic effects of selegiline. Other selective MAO-B inhibitors that do not metabolize into amphetamine metabolites or have associated cardiovascular effects, such as rasagiline, have also been developed and introduced. Because selegiline metabolizes into levoamphetamine and levomethamphetamine, people taking selegiline can unexpectedly test positive for amphetamines on drug screens. Notes References External links Amphetamine Antihypotensive agents Drugs acting on the nervous system Enantiopure drugs Euphoriants Human drug metabolites Monoaminergic activity enhancers Norepinephrine-dopamine releasing agents Phenethylamines Selegiline Stimulants Substituted amphetamines Sympathomimetics TAAR1 agonists VMAT inhibitors Wakefulness-promoting agents
Levoamphetamine
Chemistry
5,068
31,307
https://en.wikipedia.org/wiki/Tau%20Ceti
Tau Ceti, Latinized from τ Ceti, is a single star in the constellation Cetus that is spectrally similar to the Sun, although it has only about 78% of the Sun's mass. At a distance of just under 12 light-years from the Solar System, it is a relatively nearby star and the closest solitary G-class star. The star appears stable, with little stellar variation, and is metal-deficient (low in elements other than hydrogen and helium) relative to the Sun. It can be seen with the unaided eye with an apparent magnitude of 3.5. As seen from Tau Ceti, the Sun would be in the northern hemisphere constellation Boötes with an apparent magnitude of about 2.6. Observations have detected more than ten times as much dust surrounding Tau Ceti as is present in the Solar System. Since December 2012, there has been evidence of at least four planets—all likely super-Earths—orbiting Tau Ceti, and two of these are potentially in the habitable zone. There is evidence of up to an additional four unconfirmed planets, one of which would be a Jovian planet between 3 and 20 AU from the star. Because of its debris disk, any planet orbiting Tau Ceti would face far more impact events than present-day Earth. These planetary candidates have recently been contested, and recent findings about the stellar inclination cast doubt on the terrestrial nature of these worlds. Despite this hurdle to habitability, its solar analog (Sun-like) characteristics have led to widespread interest in the star. Given its stability, similarity and relative proximity to the Sun, Tau Ceti is consistently listed as a target for the search for extraterrestrial intelligence (SETI). Name The name "Tau Ceti" is the Bayer designation for this star, established in 1603 as part of German celestial cartographer Johann Bayer's Uranometria star catalogue: it is number t (tau) in Bayer's sequence of the constellation Cetus. In the catalogue of stars in the Calendarium of Al Achsasi al Mouakket, written at Cairo about 1650, this star was designated Thālith al Naʽāmāt (ثالث النعامات - thālith al-naʽāmāt), which was translated into Latin as Tertia Struthionum, meaning the third of the ostriches. This star, along with η Cet (Deneb Algenubi), θ Cet (Thanih Al Naamat), ζ Cet (Baten Kaitos), and υ Cet, were Al Naʽāmāt (النعامات), the Hen Ostriches. In Chinese astronomy, the "Square Celestial Granary" refers to an asterism consisting of τ Ceti, ι Ceti, η Ceti, ζ Ceti, θ Ceti and 57 Ceti. Consequently, the Chinese name for τ Ceti itself is "the Fifth Star of Square Celestial Granary". Motion The proper motion of a star is its rate of movement across the celestial sphere, determined by comparing its position relative to more distant background objects. Tau Ceti is considered to be a high-proper-motion star, although it has an annual traverse of just under 2 arc seconds. It will thus require about 2,000 years before the location of this star shifts by more than a degree. A high proper motion is an indicator of closeness to the Sun. Nearby stars can traverse an angle of arc across the sky more rapidly than the distant background stars and are good candidates for parallax studies. In the case of Tau Ceti, the parallax measurements indicate a distance of about 11.9 light-years. This makes it one of the closest star systems to the Sun and the next-closest spectral class-G star after Alpha Centauri A. The radial velocity of a star is the component of its motion that is toward or away from the Sun. 
Unlike proper motion, a star's radial velocity cannot be directly observed, but it can be determined by measuring its spectrum. Due to the Doppler shift, the absorption lines in the spectrum of a star will be shifted slightly toward the red (or longer wavelengths) if the star is moving away from the observer, or toward the blue (or shorter wavelengths) when it moves toward the observer. In the case of Tau Ceti, the radial velocity is about −17 km/s, with the negative value indicating that it is moving toward the Sun. The star will make its closest approach to the Sun in about 43,000 years, when it comes to within about 10.6 light-years. The distance to Tau Ceti, along with its proper motion and radial velocity, together give the motion of the star through space. The space velocity relative to the Sun, computed from these quantities, can be used to determine the orbital path of Tau Ceti through the Milky Way, which has an orbital eccentricity of 0.22. Physical properties The Tau Ceti system is believed to have only one stellar component. A dim optical companion has been observed with magnitude 13.1. As of 2000, it was widely separated from the primary. It may be gravitationally bound, but it is considered more likely to be a line-of-sight coincidence. Most of what is known about the physical properties of Tau Ceti and its system has been determined through spectroscopic measurements. By comparing the spectrum to computed models of stellar evolution, the age, mass, radius and luminosity of Tau Ceti can be estimated. However, using an astronomical interferometer, measurements of the radius of the star can be made directly to an accuracy of 0.5%. Through such means, the radius of Tau Ceti has been measured to be about 79% of the solar radius. This is about the size that is expected for a star with somewhat lower mass than the Sun. Rotation The rotation period for Tau Ceti was measured by periodic variations in the classic H and K absorption lines of singly ionized calcium (Ca II). These lines are closely associated with surface magnetic activity, so the period of variation measures the time required for the activity sites to complete a full rotation about the star. By this means the rotation period for Tau Ceti was estimated to be around 34 days. Due to the Doppler effect, the rotation rate of a star affects the width of the absorption lines in the spectrum (light from the side of the star moving away from the observer will be shifted to a longer wavelength; light from the side moving towards the observer will be shifted toward a shorter wavelength). By analyzing the width of these lines, the rotational velocity of a star can be estimated. The quantity measured is the projected rotation velocity veq sin i, where veq is the velocity at the equator and i is the inclination angle of the rotation axis to the line of sight. Tau Ceti's measured value is low for a typical G8 star, which may indicate that the star is being viewed from nearly the direction of its pole. More recently, a 2023 study estimated the rotation period and veq sin i, corresponding to a nearly pole-on inclination of about 7 degrees. Metallicity The chemical composition of a star provides important clues to its evolutionary history, including the age at which it formed. The interstellar medium of dust and gas from which stars form is primarily composed of hydrogen and helium with trace amounts of heavier elements. As nearby stars continually evolve and die, they seed the interstellar medium with an increasing portion of heavier elements. 
Thus younger stars tend to have a higher portion of heavy elements in their atmospheres than do the older stars. These heavy elements are termed "metals" by astronomers, and the portion of heavy elements is the metallicity. The metallicity of a star is given in terms of the ratio of iron (Fe), an easily observed heavy element, to hydrogen, expressed as the logarithm of the star's relative iron abundance compared to that of the Sun. In the case of Tau Ceti, the measured atmospheric metallicity corresponds to about a third of the solar iron abundance. Past measurements have varied from −0.13 to −0.60 dex. This lower abundance of iron indicates that Tau Ceti is almost certainly older than the Sun, whose age is about 4.6 billion years. Age estimates for Tau Ceti vary considerably depending on the model adopted, ranging from 4.4 billion years upward. Besides rotation, another factor that can widen the absorption features in the spectrum of a star is pressure broadening. The presence of nearby particles affects the radiation emitted by an individual particle, so the line width is dependent on the surface pressure of the star, which in turn is determined by the temperature and surface gravity. This technique was used to determine the surface gravity of Tau Ceti: its log g, the logarithm of the star's surface gravity, is about 4.4, very close to the solar value. Luminosity and variability The luminosity of Tau Ceti is equal to only 55% of the Sun's luminosity. A terrestrial planet would need to orbit this star at a distance of about 0.7 AU to match the solar insolation level of Earth, approximately the same as the average distance between Venus and the Sun. The chromosphere of Tau Ceti—the portion of a star's atmosphere just above the light-emitting photosphere—currently displays little or no magnetic activity, indicating a stable star. One 9-year study of temperature, granulation, and the chromosphere showed no systematic variations; Ca II emissions in the H and K bands show a possible 11-year cycle, but this is weak relative to the Sun. Alternatively it has been suggested that the star could be in a low-activity state analogous to a Maunder Minimum—a historical period, associated with the Little Ice Age in Europe, when sunspots became exceedingly rare on the Sun's surface. Spectral line profiles of Tau Ceti are extremely narrow, indicating low turbulence and a low observed rotation rate. The star's asteroseismological oscillations have an amplitude about half that of the Sun and a lower mode lifetime. Planetary system Principal factors driving research interest in Tau Ceti are its proximity, its Sun-like characteristics, and the implications for possible life on its planets. For categorization purposes, Hall and Lockwood report that "the terms 'solarlike star', 'solar analog', and 'solar twin' [are] progressively restrictive descriptions". Tau Ceti fits the second category, given its similar mass and low variability, but relative lack of metals. The similarities have inspired popular culture references for decades, as well as scientific examination. In 1988, radial-velocity observations ruled out any periodic variations attributable to massive planets around Tau Ceti inside of Jupiter-like distances. Increasingly precise measurements continued to rule out such planets, at least until December 2012. The velocity precision reached is about 11 m/s measured over a 5-year time span. 
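To see why that precision matters, the sketch below uses the textbook radial-velocity semi-amplitude approximation (roughly 28.4 m/s for a Jupiter-mass planet in a one-year circular orbit around one solar mass, scaling with stellar mass and orbital period). Tau Ceti's ~0.78 solar masses is taken from above; the formula is a standard approximation, not anything specific to the studies cited.

    def rv_semi_amplitude(m_planet_jupiters, period_years, m_star_suns, sin_i=1.0):
        """Approximate stellar reflex semi-amplitude in m/s for a circular orbit,
        valid when the planet is much lighter than its star."""
        return (28.4 * m_planet_jupiters * sin_i
                * m_star_suns ** (-2.0 / 3.0) * period_years ** (-1.0 / 3.0))

    # A Jupiter-mass planet around Tau Ceti (~0.78 solar masses):
    for period in (1, 5, 15):
        k = rv_semi_amplitude(1.0, period, 0.78)
        print(f"P = {period:2d} yr -> K = {k:.1f} m/s")
    # Even at a 15-year period K is ~14 m/s, above the ~11 m/s precision
    # quoted above, consistent with the exclusion of Jupiter-mass planets.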
This level of precision excludes hot Jupiters and probably excludes any planets with minimum mass greater than or equal to Jupiter's mass and with orbital periods less than 15 years. In addition, a survey of nearby stars by the Hubble Space Telescope's Wide Field and Planetary Camera was completed in 1999, including a search for faint companions to Tau Ceti; none were discovered to the limits of the telescope's resolving power. However, these searches only excluded larger brown dwarf bodies and closer orbiting giant planets, so smaller, Earth-like planets in orbit around the star, like those discovered in 2012, were not precluded. If hot Jupiters were to exist in close orbit, they would likely disrupt the star's habitable zone; their exclusion was thus considered positive for the possibility of Earth-like planets. General research has shown a positive correlation between the presence of planets and a relatively high-metallicity parent star, suggesting that stars with lower metallicity such as Tau Ceti have a lower chance of having planets. Discovery On December 19, 2012, evidence was presented that suggested a system of five planets orbiting Tau Ceti. The planets' estimated minimum masses were between 2 and 6 Earth masses, with orbital periods ranging from 14 to 640 days. One of them, Tau Ceti e, appears to orbit about half as far from Tau Ceti as Earth does from the Sun. With Tau Ceti's luminosity of 52% that of the Sun and a distance from the star of 0.552 AU, the planet would receive 1.71 times as much stellar radiation as Earth does, slightly less than Venus with 1.91 times Earth's. Nevertheless, some research places it within the star's habitable zone. The Planetary Habitability Laboratory has estimated that Tau Ceti f, which receives 28.5% as much starlight as Earth, would be within the star's habitable zone, albeit narrowly. New results were published in August 2017. They confirmed Tau Ceti e and f as candidates but failed to consistently detect planets b (which may be a false negative), c (whose weakly defined apparent signal was correlated to stellar rotation), and d (which did not show up in all data sets). Instead, they found two new planetary candidates, g and h, with orbits of 20 and 49 days. The signals detected from the candidate planets have radial velocities as low as 30 cm/s, and the experimental method used in their detection, as it was applied to HARPS, could in theory have detected down to around 20 cm/s. The updated 4-planet model is dynamically packed and potentially stable for billions of years. However, with further refinements, even more candidate planets have been detected. In 2019, a paper published in Astronomy & Astrophysics suggested that Tau Ceti could have a Jupiter or super-Jupiter based on a tangential astrometric velocity of around 11.3 m/s. The exact size and position of this conjectured object have not been determined, though it is at most 5 Jupiter masses if it orbits between 3 and 20 AU. A 2020 Astronomical Journal study by astronomers Jamie Dietrich and Daniel Apai analyzed the orbital stability of the known planets and, considering statistical patterns identified from hundreds of other planetary systems, explored the orbits in which the presence of additional, yet-undetected planets is most likely. This analysis predicted three planet candidates at orbits coinciding with planet candidates b, c, and d. 
The close match between the independently predicted planet periods and the periods of the three planet candidates previously identified in radial velocity data supports the genuine planetary nature of candidates b, c, and d. Furthermore, the study also predicts at least one yet-undetected planet between planets e and f, i.e., within the habitable zone. This predicted exoplanet is identified as PxP-4. Since Tau Ceti is likely aligned in such a way that it is nearly pole-on to Earth (as indicated by its rotation), if its planets share this alignment and have nearly face-on orbits, their true masses would be less similar to Earth's and closer to those of Neptune, Saturn, or Jupiter. For example, were Tau Ceti f's orbit inclined 70 degrees from being face-on to Earth, its mass would be roughly 4.2 Earth masses (its minimum mass of 3.93 Earth masses divided by sin 70°), making it a middle-to-low-end super-Earth. However, these scenarios are not necessarily true; since Tau Ceti's debris disk is itself inclined with respect to the line of sight, the planets' orbits could be similarly inclined. If the planetary orbits were assumed to share the debris disk's inclination, f would be several times the mass of Earth, making it slightly more likely to be a mini-Neptune. On top of that, the lower the inclination of the planetary orbits, the less stable they tend to be over a given time period, as the planets would have greater masses and therefore more gravitational pull, which would in turn disturb the orbital stability of neighbouring planets. So, for example, if, as estimated in the Korolik et al. 2023 study, Tau Ceti has a pole-on inclination of around 7 degrees, and the postulated planets do as well, then those planets' orbits would be verging on instability within just a 10-million-year timeframe, and it is therefore extremely unlikely they would have survived for the billions of years that make up the lifetime of the star system. Tau Ceti e Tau Ceti e is a candidate planet orbiting Tau Ceti that was first proposed in 2012 by statistical analyses of the data of the star's variations in radial velocity that were obtained using HIRES, AAPS, and HARPS. Its possible properties were refined in 2017: if confirmed, it would orbit at a distance of 0.552 AU (between the orbits of Venus and Mercury in the Solar System) with an orbital period of 168 days and a minimum mass of 3.93 Earth masses. If Tau Ceti e possessed an Earth-like atmosphere, its surface temperature would be considerably higher than Earth's. Based upon the incident flux upon the planet, a study by Güdel et al. (2014) speculated that the planet may lie outside the habitable zone and be closer to a Venus-like world. Tau Ceti f Tau Ceti f is a candidate planet orbiting Tau Ceti that was proposed in 2012 by statistical analyses of the star's variations in radial velocity, and also recovered by further analysis in 2017. It is of interest because its orbit places it in Tau Ceti's extended habitable zone. However, a 2015 study implies that it would have been in the temperate zone for less than one billion years, so there may not be a detectable biosignature. Few properties of the planet are known other than its orbit and mass. It orbits Tau Ceti at a distance of 1.35 AU (near Mars's orbit in the Solar System) with an orbital period of 642 days and has a minimum mass of 3.93 Earth masses. However, a reanalysis of the data in 2021 provided an in-depth study of the HARPS spectrograph systematics, showing that the 600-day signal was likely a spurious combination of instrumental systematics with a potential 1000-day yet-unknown signal. 
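The stellar-flux figures quoted for planets e and f follow directly from inverse-square scaling, as this minimal Python sketch shows (using the 52% luminosity figure from above; flux is expressed relative to what Earth receives from the Sun).

    L_TAU_CETI = 0.52  # luminosity relative to the Sun, as quoted above

    def relative_flux(luminosity_suns, orbit_au):
        """Stellar flux relative to Earth's insolation: L / d^2 (inverse-square law)."""
        return luminosity_suns / orbit_au ** 2

    for name, a in [("Tau Ceti e", 0.552), ("Tau Ceti f", 1.35)]:
        print(f"{name}: {relative_flux(L_TAU_CETI, a):.2f} x Earth's insolation")
    # Tau Ceti e: ~1.71x (slightly less than Venus's 1.91x);
    # Tau Ceti f: ~0.29x, near the outer edge of the habitable zone.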
Debris disk In 2004, a team of UK astronomers led by Jane Greaves discovered that Tau Ceti has more than ten times the amount of cometary and asteroidal material orbiting it than does the Sun. This was determined by measuring the disk of cold dust orbiting the star produced by collisions between such small bodies. This result puts a damper on the possibility of complex life in the system, because any planets would suffer from large impact events roughly ten times more frequently than present-day Earth. Greaves noted at the time of her research that "it is likely that [any planets] will experience constant bombardment from asteroids of the kind believed to have wiped out the dinosaurs". Such bombardments would inhibit the development of biodiversity between impacts. However, it is possible that a large Jupiter-sized gas giant (such as the proposed planet "i") could deflect comets and asteroids. The debris disk was discovered by measuring the amount of radiation emitted by the system in the far infrared portion of the spectrum. The disk forms a symmetric feature that is centered on the star, with an outer radius averaging about 55 AU. The lack of infrared radiation from the warmer parts of the disk near Tau Ceti implies an inner cut-off at a radius of about 10 AU. By comparison, the Solar System's Kuiper belt extends from 30 to 50 AU. To be maintained over a long period of time, this ring of dust must be constantly replenished through collisions by larger bodies. The bulk of the disk appears to be orbiting Tau Ceti at a distance of 35–50 AU, well outside the orbit of the habitable zone. At this distance, the dust belt may be analogous to the Kuiper belt that lies outside the orbit of Neptune in the Solar System. Tau Ceti shows that stars need not lose large disks as they age, and such a thick belt may not be uncommon among Sun-like stars. Tau Ceti's belt is only 1/20 as dense as the belt around its young neighbor, Epsilon Eridani. The relative lack of debris around the Sun may be the unusual case: one research-team member suggests the Sun may have passed close to another star early in its history and had most of its comets and asteroids stripped away. Stars with large debris disks have changed the way astronomers think about planet formation because debris disk stars, where dust is continually generated by collisions, appear to form planets readily. Habitability Tau Ceti's habitable zone—the locations where liquid water could be present on an Earth-sized planet—spans the range 0.55–1.16 AU from the star, where 1 AU is the average distance from the Earth to the Sun. Primitive life on Tau Ceti's planets may reveal itself through an analysis of atmospheric composition via spectroscopy, if the composition is unlikely to be abiotic, just as oxygen on Earth is indicative of life. The most optimistic search project to date was Project Ozma, which was intended to "search for extraterrestrial intelligence" (SETI) by examining selected stars for indications of artificial radio signals. It was run by the astronomer Frank Drake, who selected Tau Ceti and Epsilon Eridani as the initial targets. Both are located near the Solar System and are physically similar to the Sun. No artificial signals were found despite 200 hours of observations. Subsequent radio searches of this star system have turned up negative. This lack of results has not dampened interest in observing the Tau Ceti system for biosignatures. 
In 2002, astronomers Margaret Turnbull and Jill Tarter developed the Catalog of Nearby Habitable Systems (HabCat) under the auspices of Project Phoenix, another SETI endeavour. The list contained more than 17,000 theoretically habitable systems, approximately 10% of the original sample. The next year, Turnbull further refined the list to the 30 most promising systems of those within 100 light-years from the Sun, including Tau Ceti; these formed part of the basis of radio searches with the Allen Telescope Array. She chose Tau Ceti for a final shortlist of just five stars suitable for searches by the (now cancelled) Terrestrial Planet Finder telescope system, commenting that "these are places I'd want to live if God were to put our planet around another star". See also Epsilon Eridani List of nearest stars List of potentially habitable exoplanets List of nearest terrestrial exoplanet candidates Tau Ceti in fiction Notes References Further reading External links Tau Ceti at Jim Kaler's STARS site Tau Ceti: Life Amidst Catastrophe? at Centauri Dreams G-type main-sequence stars Ceti, Tau Maunder Minimum Circumstellar disks Planetary systems with four confirmed planets Cetus Ceti, Tau 0509 BD-16 0295 Ceti, 52 0071 010700 008102 Thālith al Naʽāmāt
Tau Ceti
Astronomy
4,707
39,316,365
https://en.wikipedia.org/wiki/Ceiling%20effect%20%28pharmacology%29
In pharmacology, the term ceiling effect refers to the property by which increasing doses of a given medication have progressively smaller incremental effects (an example of diminishing returns). Mixed agonist-antagonist opioids, such as nalbuphine, serve as a classic example of the ceiling effect; increasing the dose of such a narcotic frequently leads to smaller and smaller gains in relief of pain. In many cases, the severity of side effects from a medication continues to increase as the dose increases, long after its therapeutic ceiling has been reached. The term is defined as "the phenomenon in which a drug reaches a maximum effect, so that increasing the drug dosage does not increase its effectiveness." Sometimes drugs cannot be compared across a wide range of treatment situations because one drug has a ceiling effect. Sometimes the desired effect increases with dose while side effects worsen or become dangerous, so the risk-to-benefit ratio deteriorates. The plateau itself arises because all of the available receptors in a given specimen become occupied. See also Agonist–antagonist opioids Buprenorphine Codeine Dose–response relationship Pain ladder Weber–Fechner law References External links Is there a ceiling effect of transdermal buprenorphine? Preliminary data in cancer patients Clinical evidence for an LH ‘ceiling’ effect induced by administration of recombinant human LH during the late follicular phase of stimulated cycles in World Health Organization type I and type II anovulation Analgesic effect of i.v. paracetamol: possible ceiling effect of paracetamol in postoperative pain Pharmacodynamics
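A ceiling of this kind is commonly described by the standard Emax dose–response model, E = Emax·C/(EC50 + C). The Python sketch below uses arbitrary, illustrative parameter values (not values for any particular drug) to show the diminishing incremental effect as dose rises.

    def emax_effect(concentration, e_max=100.0, ec50=10.0):
        """Standard Emax model: effect saturates toward e_max as concentration rises."""
        return e_max * concentration / (ec50 + concentration)

    # Doubling the dose yields progressively smaller incremental effect:
    previous = 0.0
    for dose in (10, 20, 40, 80, 160, 320):
        effect = emax_effect(dose)
        print(f"dose {dose:3d}: effect {effect:5.1f} (gain {effect - previous:4.1f})")
        previous = effect
    # Gains shrink toward zero as the ceiling (e_max = 100) is approached.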
Ceiling effect (pharmacology)
Chemistry
329
53,823,229
https://en.wikipedia.org/wiki/Probabilistic%20signature%20scheme
Probabilistic Signature Scheme (PSS) is a cryptographic signature scheme designed by Mihir Bellare and Phillip Rogaway. RSA-PSS is an adaptation of their work and is standardized as part of PKCS#1 v2.1. In general, RSA-PSS should be used as a replacement for RSA-PKCS#1 v1.5. Design PSS was specifically developed to allow modern methods of security analysis to prove that its security directly relates to that of the RSA problem. There is no such proof for the traditional PKCS#1 v1.5 scheme. Implementations OpenSSL wolfSSL GnuTLS References External links Raising the standard for RSA signatures: RSA-PSS RFC 4056: Use of the RSASSA-PSS Signature Algorithm in Cryptographic Message Syntax (CMS) RFC 5756: Updates for RSAES-OAEP and RSASSA-PSS Algorithm Parameters RFC 8017: PKCS #1: RSA Cryptography Specifications Version 2.2 Cryptography Digital signature schemes
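As a usage sketch (not drawn from the article, and using the Python "cryptography" package rather than one of the implementations listed above), signing and verifying with RSA-PSS looks like this; the key size, hash choice, and message are arbitrary examples.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Example key and message (arbitrary choices for illustration)
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"message to be signed"

    # PSS is probabilistic: a fresh random salt is drawn for every signature
    pss = padding.PSS(
        mgf=padding.MGF1(hashes.SHA256()),
        salt_length=padding.PSS.MAX_LENGTH,
    )

    signature = private_key.sign(message, pss, hashes.SHA256())

    # verify() raises InvalidSignature if the check fails
    private_key.public_key().verify(signature, message, pss, hashes.SHA256())
    print("RSA-PSS signature verified")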
Probabilistic signature scheme
Mathematics,Engineering
227
16,718,997
https://en.wikipedia.org/wiki/Toshiba%20Samsung%20Storage%20Technology
Toshiba Samsung Storage Technology Corporation (abbreviated TSST) is a former international joint venture of Toshiba (Japan) and Samsung Electronics (South Korea). Toshiba used to own 51% of its stock, while Samsung used to own the remaining 49%. The company specialized in optical disc drive manufacturing. It was established in 2004, with its headquarters in Shibaura, Minato, Tokyo, Japan, and Hiroshi Suzuki as its president and CEO. Its subsidiary, Toshiba Samsung Storage Technology Korea Corporation, is located in Suwon, South Korea, and headed by Dae Sung Kim. The corporations in Japan and Korea each have their own board of directors; business issues are discussed through a common organization to reach mutual consent. TSST is responsible for product development, marketing, and sales, and takes advantage of the existing networks of Samsung Electronics and Toshiba for manufacturing, sales, and after-sales service. Half-height optical drives by TSSTcorp with writing abilities are branded "WriteMaster". History In October 2009, TSST received a subpoena from the U.S. Justice Department for possibly violating antitrust laws. Samsung and Toshiba later sold their stakes in TSST to Optis Co., Ltd., a Korean manufacturer that made products for TSST under contract. In mid-2016 TSST entered Chapter 15 bankruptcy protection in Delaware, US, shielding it from most U.S. creditor actions while the company reorganized its business in Korea's court system and restructured a $78 million debt. Models and specifications All reading speeds are in constant angular velocity unless otherwise indicated. "?" indicates that the format is supported, but the speed specification is unknown for this model. Functionality not listed in "Additional functionality" (e.g. CD-MRW support) for a particular model does not always imply that the feature is missing; it may also mean that existing sources have not yet confirmed that feature for that particular model. Half-height Model numbers whose third digit is a "2" indicate a Parallel ATA (Integrated Drive Electronics, IDE) interface, while a "3" indicates a Serial ATA (SATA) interface. DVD writers by TSST may have a "WriteMaster" branding. Some models, such as the SH-S162 and the SH-S182, have "L" variants (SH-S162L and SH-S182L respectively), which indicate support for LightScribe technology. Writing speeds higher than ×16 (constant angular velocity) are only unlocked on quality recordable media from selected manufacturers, including Mitsubishi/Verbatim and Taiyo Yuden. Slim type On slim-type optical drives, access speeds are physically limited by the rotational speed of the spindle motor rather than by the performance of the optical pickup system or the physical strength of the disc. Possible domain abuse As of 13 May 2021, whois.verisign-grs.com reported that the previous official website domain had been newly registered on 2017-02-13. According to the registrant's whois system, whois.regtons.com, and a whois lookup on the IP addresses, the domain appears to belong to a Czech hosting company. The website shows TSST information dating from around 2015, but carries many ads that would not appear on a trusted vendor's homepage and that may pose a risk to visitors' computers or privacy (for example, clicking the firmware-updates link opens ads that have nothing to do with firmware updates). 
Notes See also Hitachi-LG Data Storage Sony Optiarc References External links Archived official website from 2015 on archive.org - be careful not to be redirected to newer snapshots after the domain reregistration Toshiba Samsung Electronics Multinational joint-venture companies Computer storage companies Electronics companies of Japan Electronics companies of South Korea Electronics companies established in 2004 Japanese companies established in 2004 South Korean companies established in 2004 Computer hardware companies Optical computer storage
Toshiba Samsung Storage Technology
Technology
842
43,203,961
https://en.wikipedia.org/wiki/Z-source%20inverter
A Z-source inverter is a type of power inverter, a circuit that converts direct current to alternating current. Owing to its topology, the circuit functions as a buck-boost inverter without making use of a DC-DC converter bridge. Impedance (Z) source networks efficiently convert power between source and load from DC to DC, DC to AC, and from AC to AC. The number of modifications and new Z-source topologies has grown rapidly since 2002. Improvements to the impedance networks through the introduction of coupled magnetics have also lately been proposed for achieving even higher voltage boosting while using a shorter shoot-through time. They include the Γ-source, T-source, trans-Z-source, TZ-source, LCCT-Z-source (which utilizes a high-frequency transformer connected in series with two DC-current-blocking capacitors), high-frequency transformer-isolated, and Y-source networks. Amongst them, the Y-source network is the more versatile and can be viewed as the generic network from which the Γ-source, T-source, and trans-Z-source networks are derived. The distinctive properties of this network open a new horizon for researchers and engineers to explore, expand, and modify the circuit for a wide range of power conversion applications. Types of inverters Inverters can be classified by their structure as: Single-phase inverter: This type of inverter consists of two legs or poles. (A pole is a connection of two switches in which the source of one and the drain of the other are connected, with this common point brought out as a terminal.) Three-phase inverter: This type of inverter consists of three legs or poles, or four legs (three for the phases and one for neutral). Inverters are also classified based on the type of input source as follows: Voltage-source inverter (VSI): In this type of inverter, a constant voltage source acts as input to the inverter bridge. The constant voltage source is obtained by connecting a large capacitor across the DC source. Current-source inverter (CSI): In this type of inverter, a constant current source acts as input to the inverter bridge. The constant current source is obtained by connecting a large inductor in series with the DC source. Operation Normally, three-phase inverters have 8 vector states (6 active states and 2 zero states). There is an additional state, known as the shoot-through state, during which both switches of one leg conduct simultaneously, short-circuiting the DC link. In this state, energy is stored in the impedance network, and when the inverter returns to an active state, the stored energy is transferred to the load, thus providing boost operation. This shoot-through state is prohibited in a conventional VSI. Achieving the buck-boost facility in a ZSI requires pulse-width modulation. The normal sinusoidal pulse-width modulation (SPWM) is generated by comparing a triangular carrier wave with a sinusoidal reference wave. For shoot-through pulses, the carrier wave is compared with two complementary DC reference levels, and the resulting pulses are added to the SPWM pattern. A ZSI therefore has two degrees of control freedom: the modulation index of the reference wave (the ratio of the reference-wave amplitude to the carrier-wave amplitude) and the shoot-through duty ratio, which is set by the DC reference levels. Advantages The advantages of the Z-source inverter are: The source can be either a voltage source or a current source. The DC source of a ZSI can be a battery, a diode rectifier or a thyristor converter, a fuel cell stack, or a combination of these. The main circuit of a ZSI can be either the traditional VSI or the traditional CSI. Works as a buck-boost inverter. 
The load of a ZSI can be inductive or capacitive, or it can be another Z-source network. Disadvantages Typical inverters (VSI and CSI) have a few disadvantages: They operate only in buck or only in boost mode, so the obtainable output voltage range is either smaller or greater than the input voltage. They are vulnerable to electromagnetic interference, and the devices get damaged in either open- or short-circuit conditions. The combined system of a DC-DC boost converter and an inverter has lower reliability. The main switching devices of a VSI and a CSI are not interchangeable. Applications Renewable energy sources Electric vehicles Motor drives References Electrical circuits Inverters
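A minimal numerical sketch of the boost behaviour described in the Operation section above: the shoot-through duty ratio D (the fraction of each switching period spent in the shoot-through state) boosts the DC-link voltage by the standard Z-source factor B = 1/(1 − 2D), valid for D < 0.5, and the peak AC phase output scales as M·B·Vdc/2 for modulation index M. The parameter values below are arbitrary examples, not from the article.

    def boost_factor(d_shoot):
        """Z-source boost factor B = 1 / (1 - 2*D), defined for D < 0.5."""
        if not 0.0 <= d_shoot < 0.5:
            raise ValueError("shoot-through duty ratio must be in [0, 0.5)")
        return 1.0 / (1.0 - 2.0 * d_shoot)

    V_DC = 200.0  # example input voltage (volts)
    M = 0.8       # example modulation index

    for d in (0.0, 0.1, 0.2, 0.3, 0.4):
        b = boost_factor(d)
        peak_phase = M * b * V_DC / 2.0
        print(f"D = {d:.1f}: B = {b:4.2f}, peak phase voltage ~ {peak_phase:6.1f} V")
    # D = 0 reproduces ordinary buck (VSI) behaviour; lengthening the
    # shoot-through interval boosts the output above the input voltage.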
Z-source inverter
Engineering
921
54,859,162
https://en.wikipedia.org/wiki/George%20McRoberts
George McRoberts (1839–1896) was a Scottish chemist and early explosives expert. He assisted Alfred Nobel in establishing the original Nobel Enterprises dynamite factory at Ardeer. He was a close colleague of Nobel and probably a close friend. Life He was born in 1839 in central Scotland, the son of John N McRoberts and his wife, Sarah Ogle. He was educated at Falkirk Grammar School. In 1870 he established a chemical factory at Westquarter in Falkirk, mainly producing sulphuric acid. Alfred Nobel bought the company in 1871 and started making detonators there, mainly for the Scottish coalfields. Nobel was very impressed by McRoberts, and in 1873 he moved him to the new British Dynamite Factory in Ardeer, North Ayrshire (the first dynamite factory in the world) as its manager, working directly under Nobel himself. It was McRoberts and a partner, John Downie, who raised the £24,000 to build the factory rather than Nobel, who was yet to become rich from his invention. The company had its offices at 7 Royal Bank Place in Glasgow. The chairman of the company was the Glasgow shipbuilder Charles Randolph. McRoberts was injured in an explosion during his early years there. He also built a second explosives factory at Pitsea in Essex in 1876. In 1883 he was elected a Fellow of the Royal Society of Edinburgh, his proposers being Sir James Dewar, William Dittmar, John Gray McKendrick and Robert Rattray Tatlock. He died on 15 January 1896. Family He was married to Jane Paton. References 1839 births 1896 deaths People from Falkirk Fellows of the Royal Society of Edinburgh 19th-century Scottish people 19th-century Scottish chemists Alfred Nobel Explosives engineers
George McRoberts
Engineering
362
1,684,895
https://en.wikipedia.org/wiki/Nutritional%20yeast
Nutritional yeast (also known as nooch) is a deactivated (i.e. dead) yeast, often a strain of Saccharomyces cerevisiae, that is sold commercially as a food product. It is sold in the form of yellow flakes, granules, or powder, and may be found in the bulk aisle of natural food stores. It is used in vegan and vegetarian cooking as an ingredient in recipes or as a condiment. It is a source of some B-complex vitamins and contains trace amounts of several other vitamins and minerals. It may be fortified with vitamin B12. Nutritional yeast has a strong flavor described as nutty or cheesy, which makes it popular as a cheese substitute. It may be used in the preparation of mashed potatoes or tofu. Nutritional yeast is a whole-cell inactive yeast that contains both soluble and insoluble parts, which distinguishes it from yeast extract. Yeast extract is made by centrifuging inactive nutritional yeast and concentrating the water-soluble yeast cell proteins, which are rich in glutamic acid, nucleotides, and peptides, the flavor compounds responsible for umami taste. Commercial production Nutritional yeast is produced by culturing yeast in a nutrient medium for several days. The primary ingredient in the growth medium is glucose, often from either sugarcane or beet molasses. When the yeast is ready, it is killed with heat and then harvested, washed, dried and packaged. The species of yeast used is often a strain of Saccharomyces cerevisiae. The strains are cultured and selected for desirable characteristics and often exhibit a different phenotype from strains of S. cerevisiae used in baking and brewing. Nutrition In a reference amount of , one manufactured, fortified brand is 33% carbohydrates, 53% protein, and 3% fat, providing 60 calories (table); a back-of-envelope check of these figures appears at the end of this entry. Levels of B vitamins in the reference amount are multiples of the Daily Value (table). Nutritional yeast contains low amounts of dietary minerals (source in table), unless fortified. There may be confusion about the source of vitamin B12 in nutritional yeast, as yeast cannot produce B12, which is naturally produced only by some bacteria. When it is fortified, the vitamin B12 (commonly cyanocobalamin) is produced separately and then added to the yeast. See also Marmite Vegan cheese Vegemite References External links Environmental rules for the manufacture of nutritional yeast Condiments Food additives Umami enhancers Vegan cuisine Yeasts
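As a hedged back-of-envelope check (not from the article), the stated macronutrient percentages and calorie count can be reconciled using the standard Atwater factors of 4 kcal/g for carbohydrate and protein and 9 kcal/g for fat; the serving mass computed below is only an estimate implied by those figures, not the elided reference amount itself.

```python
# Estimate the serving mass implied by the stated values:
# 33% carbohydrate, 53% protein, 3% fat by mass, totalling 60 kcal.
ATWATER = {"carb": 4.0, "protein": 4.0, "fat": 9.0}      # kcal per gram
FRACTION = {"carb": 0.33, "protein": 0.53, "fat": 0.03}  # mass fractions

kcal_per_gram = sum(ATWATER[k] * FRACTION[k] for k in FRACTION)
print(60 / kcal_per_gram)  # roughly 16 g implied per serving
```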
Nutritional yeast
Biology
523
72,398,699
https://en.wikipedia.org/wiki/Abrothallus%20etayoi
Abrothallus etayoi is a species of lichenicolous fungus in the family Abrothallaceae. Found in Mexico, it was formally described as a new species in 2015 by Ave Suija and Sergio Pérez-Ortega. The type specimen was collected from Angahuan (Michoacán) at an elevation of ; there, in a pine-oak forest, it was found growing on a Sticta lichen that itself was growing on oak. The species epithet honours the Spanish lichenologist Javier Etayo, "a keen collector of lichenicolous fungi and lichens". Abrothallus etayoi differs from other Abrothallus fungi by the shape of its conidiomata (barrel-shaped to somewhat spherical), and by its single-celled conidia, which measure 11–17.5 by 7–11 μm. Although known only from the type locality, the authors suggest that the fungus may have a wider distribution in Mexico due to the prevalence of the ecosystem from which it was collected. References etayoi Lichenicolous fungi Fungi described in 2015 Fungi of Mexico Taxa named by Ave Suija Fungus species
Abrothallus etayoi
Biology
235
40,666,874
https://en.wikipedia.org/wiki/Bis%28dinitrogen%29bis%281%2C2-bis%28diphenylphosphino%29ethane%29molybdenum%280%29
trans-Bis(dinitrogen)bis[1,2-bis(diphenylphosphino)ethane]molybdenum(0) is a coordination complex with the formula Mo(N2)2(dppe)2. It is a relatively air-stable yellow-orange solid. It is notable as the first dinitrogen-containing complex of molybdenum to be discovered. Structure Mo(N2)2(dppe)2 is an octahedral complex with idealized D2h point group symmetry. The dinitrogen ligands are mutually trans across the metal center. The Mo-N bond has a length of 2.01 Å, and the N-N bond has a length of 1.10 Å. This length is close to the bond length of free dinitrogen, but coordination to the metal weakens the N-N bond, making it susceptible to electrophilic attack. Synthesis The first synthetic route to Mo(N2)2(dppe)2 involved a reduction of molybdenum(III) acetylacetonate with triethylaluminium in the presence of dppe and nitrogen. A higher-yielding synthesis involves a four-step process. In the first step, molybdenum(V) chloride is reduced by acetonitrile (CH3CN) to give [MoCl4(CH3CN)2]. Acetonitrile is displaced by tetrahydrofuran (THF) to give [MoCl4(THF)2]. This Mo(IV) compound is reduced by tin powder to [MoCl3(THF)3]. The desired compound is formed in the presence of nitrogen gas, dppe ligand, and magnesium turnings as the reductant (a worked stoichiometry example based on this equation appears at the end of this entry): 3 Mg + 2 MoCl3(THF)3 + 4 Ph2PCH2CH2PPh2 + 4 N2 → 2 trans-[Mo(N2)2(Ph2PCH2CH2PPh2)2] + 3 MgCl2 + 6 THF Reactivity The terminal nitrogen is susceptible to electrophilic attack, allowing for the conversion of the coordinated dinitrogen to ammonia in the presence of acid. In this way, Mo(N2)2(dppe)2 serves as a model for biological nitrogen fixation. Carbon-nitrogen bonds can also be formed with this complex through condensation reactions with ketones and aldehydes, and through substitution reactions with acid chlorides. The terminal nitrogen can also be silylated. See also Transition metal dinitrogen complex Nitrogen fixation References Coordination complexes Molybdenum(0) compounds Phosphine complexes Nitrogen compounds
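To illustrate the 1:1 molybdenum stoichiometry of the final reduction step, the following is a hypothetical worked example, not drawn from the article; the molar masses are approximate values computed from standard atomic weights, and the starting quantity is invented for illustration.

```python
# Hypothetical theoretical-yield calculation for the magnesium-reduction
# step: each mole of MoCl3(THF)3 gives one mole of trans-[Mo(N2)2(dppe)2].
M_MOCL3_THF3 = 418.6  # g/mol, approximate, MoCl3(C4H8O)3
M_PRODUCT = 948.8     # g/mol, approximate, Mo(N2)2(C26H24P2)2

def theoretical_yield_g(grams_mocl3_thf3):
    moles = grams_mocl3_thf3 / M_MOCL3_THF3
    return moles * M_PRODUCT  # 1:1 Mo stoichiometry from the equation above

# e.g. 5.0 g of MoCl3(THF)3 corresponds to ~11.3 g of product at 100% yield
print(round(theoretical_yield_g(5.0), 1))
```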
Bis(dinitrogen)bis(1,2-bis(diphenylphosphino)ethane)molybdenum(0)
Chemistry
562
58,790,105
https://en.wikipedia.org/wiki/Amy%20Mullin
Amy S. Mullin is an American chemist and professor at the University of Maryland. She is a Fellow of the American Physical Society, the American Association for the Advancement of Science and the Optical Society of America. Her research focuses on molecular dynamics. Education Amy S. Mullin has a B.A. in chemistry from the University of California, Santa Cruz (1985) and a Ph.D. in physical chemistry from the University of Colorado, Boulder (1991, with W. Carl Lineberger). She was an AAUW American Postdoctoral Fellow at Columbia University, working with George W. Flynn (1992–1994). Research Mullin uses time-resolved laser spectroscopy to investigate how energy is used in chemical processes and molecular collisions. This includes: transient spectroscopy of collisions, where molecules are excited to very high energy states with pulsed lasers and studied with time-resolved, high-resolution optical absorption in order to investigate the relationship between molecular structure and collision dynamics; driving chemical reactions with vibrational energy, where high-resolution optical probing is used to investigate, at a quantum-state-resolved level, how chemical reactions are affected by large amounts of vibrational energy in the reacting molecules; and spinning molecules into reactive states, using ultrafast lasers to investigate molecules in the presence of strong fields applied in short pulses of time. Mullin developed a high-power optical centrifuge to generate molecules in high rotational angular momentum states in order to investigate the chemistry and dynamics of rotationally activated molecules. The optical centrifuge work is primarily focused on studying "rotationally-induced dissociation and isomerization and the coupling of vibrational and rotational degrees of freedom in high energy states." Awards Fellow of the Optical Society of America (2018) Creative Educator Award, College of Computer, Mathematical and Natural Science (2011) Fellow of the American Physical Society (2009) General Research Board Semester Award (2008) from the University of Maryland Elected Fellow of the American Association for the Advancement of Science (2006) Camille Dreyfus Teacher Scholar Award (1999) American Young Leader of the American Swiss Foundation (1998) National Science Foundation CAREER Award (1996) Office of Naval Research Young Investigator Award (1996) Clare Boothe Luce Professorship (1994) American Association of University Women Postdoctoral Fellow (1993) Elected to Sigma Xi (1991) References External links Living people Year of birth missing (living people) Place of birth missing (living people) University of Maryland, College Park faculty 20th-century American chemists 21st-century American chemists American women chemists Fellows of the American Physical Society University of California, Santa Cruz alumni University of Colorado Boulder alumni Fellows of Optica (society) Fellows of the American Association for the Advancement of Science American physical chemists Women physical chemists Women in optics 20th-century American women scientists 21st-century American women scientists
Amy Mullin
Chemistry
565
17,469,833
https://en.wikipedia.org/wiki/Phallus%20hadriani
Phallus hadriani, commonly known as the dune stinkhorn or the sand stinkhorn, is a species of fungus in the Phallaceae (stinkhorn) family. The stalk of the fruit body reaches up to tall by thick, and is spongy, fragile, and hollow. At the top of the stem is a ridged and pitted, thimble-like cap over which is spread olive-colored spore slime (gleba). Shortly after emerging, the gleba liquefies and releases a fetid odor that attracts insects, which help disperse the spores. P. hadriani may be distinguished from the similar P. impudicus (the common stinkhorn) by the presence of a pink or violet-colored volva at the base of the stem, and by differences in odor. It is a widely distributed species, and is native to Eurasia and North America. In Australia, it is probably an introduced species. It typically grows in public lawns, yards and gardens, usually in sandy soils. It is said to be edible in its immature egg-like stage. Taxonomy The species was first described scientifically by the French botanist Étienne Pierre Ventenat in 1798, and sanctioned by Christiaan Hendrik Persoon under that name in his 1801 Synopsis Methodica Fungorum. Christian Gottfried Daniel Nees von Esenbeck called the species Hymenophallus hadriani in 1817; this name is a synonym. According to the taxonomical database Index Fungorum, additional synonyms include: Phallus iosmus, named by Berkeley in 1836; Phallus imperialis, Schulzer, 1873; Ithyphallus impudicus var. imperialis and Ithyphallus impudicus var. iosmos, De Toni, date unknown. The specific epithet hadriani honors the Dutch botanist Hadrianus Junius (1512–1575), who wrote a pamphlet on stinkhorn mushrooms in 1564 (Phalli, ex fungorum genere, in Hollandiae sabuletis passim crescentis descriptio). Description The immature fruiting bodies of P. hadriani in the egg stage have dimensions of by , and are colored rosy-pink to violet. They are typically submerged in the ground with rhizomorphs (aggregations of mycelium resembling plant roots) at the base. The eggs are enclosed in a tough covering and a gelatinous layer that breaks down as the stinkhorn emerges. Mature fruiting bodies may be tall by thick. The white- or cream-colored stipe is up to 3 cm wide, hollow, spongy and honeycombed. The head is reticulate, ridged and pitted, and covered with an olive-green glebal mass. The volva is cuplike, and typically retains its pink color, although it may turn brownish with age. Fruit bodies are short-lived, typically lasting only one or two days. Although the odor of P. hadriani has been described by some authors as faint and pleasant or like violets, others describe the smell as fetid or putrid. The gleba is known to attract insects, including flies, bees, and beetles, some of which consume the spore-containing slime. It is thought that long-distance spore dispersal is facilitated by these insects, which may deposit in their feces intact spores that survive the passage through the digestive tract. The spores are cylindrical, smooth, and hyaline (translucent), with dimensions of 3–4 by 1–2 μm. The basidia (spore-bearing cells) are cylindrical, with dimensions of 20–25 by 3–4 μm. They have eight sterigmata (slender extensions that attach to the spores), as well as a clamp at their base. Similar species Phallus impudicus has the same overall appearance as P. hadriani, but is distinguished by its white volva. Another similar stinkhorn, P. ravenelii, has a smooth rather than reticulate head.
Habitat and distribution Phallus hadriani is known to occur in Australia (where it is thought to be an introduced species imported on woodchip mulch used in gardening and landscaping), North America, Europe (including Denmark, Ireland, Latvia, The Netherlands, Norway, Poland, Slovakia, Sweden, Ukraine, and Wales), Turkey (Iğdır Province), Japan, and China (Jilin Province). Phallus hadriani is a saprobic species, and thus obtains nutrients by decomposing organic matter. In North America, it is commonly associated with tree stumps, or roots of stumps that are decomposing in the ground. In Great Britain, its distribution is more or less restricted to coastal dunes, while in Poland, it has been noted to avoid humid and humic forest soils and to live in symbiosis with xerophilous grasses and the black locust tree, Robinia pseudoacacia. The mushroom is one of three species protected by the Red Data Book of Latvia. Edibility Like many other stinkhorns, this species is thought to be edible when in its egg form. Central Europeans and the Chinese consider the eggs a delicacy. Regarding the edibility of mature specimens, one author commented on the genus in general, "No one with his sense of smell developed would think of eating the members of this group." References Phallales Fungi of Asia Fungi of Australia Fungi of Europe Fungi of North America Fungi described in 1798 Fungus species
Phallus hadriani
Biology
1,153
7,610,007
https://en.wikipedia.org/wiki/War%20of%20the%20Ring%20%28board%20game%29
War of the Ring is a strategy board game based on The Lord of the Rings by J.R.R. Tolkien. The game was designed by Roberto Di Meglio, Marco Maggi and Francesco Nepitello, first produced by Nexus Editrice (Italy), and is currently published by Ares Games. Since its first print run it has been produced in many languages: Fantasy Flight Games published the English edition in 2004. An expansion called Battles of the Third Age was released in 2006 and a Collector's Edition in 2010 (with both the base game and expansion materials, hand-painted miniatures, a leather-bound rulebook, and corrected and clarified rules and cards). The Fantasy Flight edition of both the base game and expansion are currently out of print. A new 2nd edition was published by Ares Games in 2011, followed by the expansions Lords of Middle Earth and Warriors of Middle Earth, and a new expansion entitled Kings of Middle Earth released in 2023. Gameplay War of the Ring is a 2-player game that takes approximately 3 hours, though there are variant rules for 3 or 4 players where one or both sides play as a team. The game concerns the War of the Ring starting from the Fellowship's forming in Rivendell. One player controls the Shadow Armies and tries to conquer Middle-earth or to corrupt the Fellowship's Ringbearer. The other player controls the Free Peoples and tries to hold back the Shadow long enough to move the Fellowship into Mount Doom and destroy the Ring. A Free Peoples military victory is also possible, but the Shadow's power is overwhelming. The board depicts northwestern Middle-earth, divided into territories. Some lands form nations while broad swaths sit unclaimed. The Free Peoples are the nations of Gondor and Rohan, the Elves (Rivendell, Lórien, the Woodland Realm, and the Gray Havens), the Dwarves (the settlements in Erebor, the Iron Hills and the Blue Mountains) and "The North" (the men of Dale, Carrock, and Bree, and the hobbits of the Shire). The Shadow Armies are Sauron (Mordor, Moria, Angmar and Dol Guldur), Isengard, and the combined Southrons and Easterlings. Reception War of the Ring has generally been received very positively, for example with an 8.5/10 rating at BoardGameGeek from over 18,000 voters, enough to earn it the eighth-highest spot on the site's list of best board games. It has been described by reviewers as "quite simply a masterpiece" and "a remarkable game". References External links Official War of the Ring Forums (Ares Games version) (Nexus Editrice/Fantasy Flight Games version) (Nexus Editrice version) Site of one of the War of the Ring playtesters, with an extensive FAQ and useful articles Asymmetric board games Board games based on Middle-earth Board games introduced in 2004 Fantasy board wargames Fantasy Flight Games games
War of the Ring (board game)
Physics
621
57,631,467
https://en.wikipedia.org/wiki/%CE%91-O
α-O (alpha-oxygen) is a reactive oxygen species formed by oxygen-atom abstraction (OAT) from nitrous oxide (N2O) by alpha-iron (α-Fe) catalysts. The latter is defined as a high-spin (S = 2) divalent iron(II) ion in a constrained square-planar coordination with an accessible axial coordination position. The stabilization of α-O requires structural strain on the equatorial ligand field to maintain the reactive oxygen atom in the axial position, and it is this forced geometry, similar to the 'entatic state' known in metalloproteins, that underlies its reactivity in inert C-H bond activation. The alpha-oxygen site was first discovered and named in 1990 by researchers from the Boreskov Institute of Catalysis in the ZSM-5 zeolite, and was later described in detail by researchers from Stanford University and KU Leuven in the beta zeolite. See also Carbon–hydrogen bond activation Catalysis References Reactive oxygen species Reactive intermediates
Α-O
Chemistry
221
39,186,685
https://en.wikipedia.org/wiki/Heptavalent%20botulism%20antitoxin
The Botulism Antitoxin Heptavalent (A, B, C, D, E, F, G) - (Equine) (trade name BAT), made by Emergent BioSolutions Canada Inc. (formerly Cangene Corporation), is a licensed, commercially available botulism antitoxin that effectively neutralizes all seven known botulinum nerve toxin serotypes (types A, B, C, D, E, F and G). It is indicated for sporadic cases of life-threatening botulism and is also stockpiled against the eventuality of botulinum nerve toxins being used in a future bioterrorist attack. BAT was first made available in 2010 through the Centers for Disease Control for the treatment of naturally occurring non-infant botulism on an investigational basis, replacing two earlier products. It was then licensed for commercial marketing by the United States FDA in 2013. Medical use Indications BAT is the only FDA-approved product available for treating botulism in adults, and for botulism in infants caused by botulinum toxins other than types A and B. BAT has been used to treat a case of type F infant botulism and, on a case-by-case basis, may be used for future cases of non-type A and non-type B infant botulism. Administration Early administration of BAT is considered critical, as the antitoxin can neutralize only circulating toxin, not toxin that has become bound to nerve terminals. One vial (20 mL) of BAT is administered to a patient as an intravenous infusion. It must be diluted with 0.9% sodium chloride in a 1:10 ratio before use. A volumetric infusion pump is used for slow administration (0.5 mL/min for the initial 30 minutes) to minimize the possibility of allergic reactions. If no reactions are noted, the rate is increased to 1 mL/min for another 30 minutes, and then, if still no reaction is evident, to 2 mL/min for the remainder of the procedure (a worked example of the resulting infusion time appears at the end of this entry). Side effects In CDC studies of BAT, headache, fever, chills, rash, itching, and nausea were the most commonly observed adverse events. It can trigger allergic reactions and delayed hypersensitivity reactions in people sensitive to horse proteins. Chemistry and pharmacology BAT is derived from "despeciated" equine IgG antibodies, which have had the Fc portion cleaved off, leaving the F(ab')2 portions. Compared to whole antibodies, as found in trivalent botulinum antitoxin (TBAT) available from local health departments (via the CDC), F(ab')2 is less efficacious at neutralizing toxin, but should carry a reduced risk of anaphylaxis. Unlike TBAT, BAT is considered effective against all known strains of botulism (A, B, C, D, E, F, and G). All antitoxins neutralize only circulating toxin in patients with symptoms of botulism that are continuing to progress; they have no effect on toxin already bound to the nerve terminals. (This is not, however, considered a reason to withhold the product from any patient, even if treatment has been delayed.) Military variants A related product, Botulism AntiToxin, Heptavalent, Equine, Types A, B, C, D, E, F and G (HE-BAT), is also available to the U.S. military under IND (experimental) protocols. This "equine" antitoxin requires skin testing with escalating dose challenges before full-dose administration to detect serious sensitivity to horse serum, as it consists of the whole, non-despeciated antibody. The US Department of Defense also stocks HFabBAT, the despeciated version of HE-BAT, similar to BAT. History Development and domestic contracts BAT (formerly known as HBAT) was developed from equine (horse) plasma at the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID).
The main funding stream was the Biomedical Advanced Research and Development Authority (within the US Department of Health and Human Services' Office of the Assistant Secretary for Preparedness and Response). It was then available for many years on an IND (investigational) basis from the US Centers for Disease Control and Prevention (CDC). On June 1, 2006, the DHHS awarded a $363 million contract to Emergent BioSolutions (then Cangene Corporation) for 200,000 doses of BAT over five years for delivery into the US Strategic National Stockpile (SNS). The CDC began supplying doses to the SNS in 2007 under a now $427 million contract with the DHHS, according to a Cangene press release. In 2010, the CDC replaced the licensed bivalent botulinum antitoxin AB (BAT-AB) and the investigational monovalent botulinum antitoxin E (BAT-E) with BAT when the former two products' indications expired. This action left BAT as the only botulinum antitoxin available in the US for naturally occurring non-infant botulism. On March 22, 2013, the US Food and Drug Administration (FDA) approved BAT as the first product to treat all serotypes of botulism. This was considered a significant step in the US armamentarium for emergency use against a bioterrorist attack. The CDC continues to distribute the stockpiled antitoxin. The FDA approved BAT for marketing based on its efficacy as established in animal studies (efficacy trials in humans not being considered feasible or ethical). The safety of the antitoxin, however, was established in a study of 40 healthy volunteers as well as in the experimental treatment of 228 patients in a CDC program. After the February 2014 acquisition of Cangene Corporation by Emergent BioSolutions, Emergent took control of Cangene's products and contracts, including BAT. In March 2017, Emergent extended its contract with the Biomedical Advanced Research and Development Authority (BARDA), adding $53 million in value through 2022 for the production and bulk storage of BAT. Under the conditions of the extension, future transport of BAT to the SNS was approved. BAT remains the only botulism antitoxin recognized, licensed, and distributed by the FDA and the CDC. Other licensed jurisdictions and contracts In 2012, Emergent signed a 10-year contract to provide BAT to the Canadian Department of National Defence and the Public Health Agency of Canada, as well as individual provincial health officials. In December 2016, Health Canada approved Emergent's New Drug Submission for BAT under the Extraordinary Use New Drug Regulations, which provide guidelines for consideration of drugs that do not have clinical information about impacts on humans due to the nature of the conditions that the drugs are used to treat. Pleased with Canada's decision to prepare for botulinum toxin events, one of the "more likely biological threat agents", Adam Havey, executive vice president and president of the biodefense division at Emergent BioSolutions, said, "Emergent is committed to helping allied governments fulfill their preparedness needs. We expect to expand upon our longstanding relationship with the Canadian government and develop similar relations outside of North America..." BAT was approved by the Health Sciences Authority in Singapore in July 2019. FDA clinical trials BAT has undergone extensive testing for effectiveness and safety.
Emergent BioSolutions, in a 2017 document published to describe prescribing information for BAT, said that the effectiveness of the antitoxin is based on efficacy studies demonstrating increased chances of survival. In two clinical studies cited by the company, the safety profile of BAT was shown to be acceptable when one or two vials of the antitoxin were intravenously delivered to healthy subjects. Another clinical trial, the BT-011 study, also known as Pharmacokinetics of Botulism Antitoxin Heptavalent in Pediatric Patients, was initiated to test the success of BAT in children who had contracted botulism (or had been suspected of contracting botulism). In the study, a serum sample was collected from pediatric patients to analyze the pharmacokinetics of BAT and better adjust pediatric dosing recommendations. See also Antitoxin First Flight (medical research horse) Passive immunity References External links CDC/MMWR (March 19, 2010): "Investigational Heptavalent Botulinum Antitoxin (HBAT) to Replace Licensed Botulinum Antitoxin AB and Investigational Botulinum Antitoxin E". FDA News Release (March 22, 2013): "FDA approves first Botulism Antitoxin for use in neutralizing all seven known botulinum nerve toxin serotypes". Antitoxins Immune system Immunology Medical treatments Biological warfare Botulism
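The ramped infusion schedule in the Administration section lends itself to a simple worked example. The sketch below is hypothetical and not from the prescribing information: it assumes the 1:10 dilution of a 20 mL vial yields 200 mL of infusate, whereas the exact volume depends on how the ratio is applied clinically.

```python
# Estimate total infusion time under the ramped schedule described above:
# 0.5 mL/min for 30 min, then 1 mL/min for 30 min, then 2 mL/min to the end.

def infusion_minutes(total_ml, stages):
    """stages: (rate_ml_per_min, max_minutes) pairs; max_minutes=None means
    the stage runs until the remaining volume is delivered."""
    remaining, elapsed = float(total_ml), 0.0
    for rate, max_min in stages:
        if remaining <= 0:
            break
        if max_min is None:
            elapsed += remaining / rate
            remaining = 0.0
        else:
            delivered = min(remaining, rate * max_min)
            elapsed += delivered / rate
            remaining -= delivered
    return elapsed

print(infusion_minutes(200, [(0.5, 30), (1.0, 30), (2.0, None)]))  # 137.5 min
```

Under these assumptions, a single adult vial takes roughly 137 minutes to infuse, a little over two and a quarter hours.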
Heptavalent botulism antitoxin
Biology
1,851
24,886,317
https://en.wikipedia.org/wiki/V-drive
A V-drive is a power transmission system for boats that consists (usually) of two gearboxes, two drive shafts, and a propeller. Whereas the conventional arrangement sites the engine with its gearbox aft, driving the propeller shaft directly, in a "V-drive" layout the engine is reversed, with the gearbox in front. This primary gearbox typically drives a short shaft forward to a transfer gearbox, which reverses the transmission to the main tailshaft; this tailshaft is directed rearwards to the propeller, at a "V" angle to the short shaft. A V-drive system variation is for the tailshaft to drive a saildrive propeller, mounted on a skeg below the hull. This is common on Lagoon catamarans. A variation of the two-shaft V-drive layout is the "close-coupled V-drive", whereby the engine is still mounted "back-to-front", but the main gearbox incorporates an output flange that has already been reversed. This system obviates the need for any short primary shaft. Counter-intuitively, a V-drive system will not necessarily mean that the engine is sited further rearward; the whole engine/transmission may even be sited further forward than in a conventional arrangement. V-drive rationales V-drives are typically fitted to (i) sport motorboats or (ii) cruising yachts and catamarans. The pros and cons tend to be different for each type. For sportboats, a V-drive may be an alternative to an inboard/outboard (I/O) drive. Here the V-drive will allow the engine to be moved further forwards, and the propeller (& rudder) will be under the boat. This can give a flatter ride (getting the boat planing earlier), greater choice of propellers, and safer stern access for swimmers. For cruising yachts, the rationale of the V-drive installation may include: the propeller can be mounted further forward in deeper "quieter" water. the engine can be mounted in a more space-efficient position. the position of the hull's centre of gravity may be optimised. In all cases, maintenance may be more difficult with a V-drive, since the main propshaft beneath the engine may be somewhat inaccessible. Also, when installing a V-drive, it can be a difficult process to ensure accurate alignment. Boats with V-drives Fred Cooper's 1935 design for Malcolm Campbell's Blue Bird used a V-drive designed by Reid Railton for its 2,000 bhp engine. See also Sterndrive References Marine propulsion
V-drive
Engineering
532
26,127,259
https://en.wikipedia.org/wiki/Boring%20Billion
The Boring Billion, otherwise known as the Mid Proterozoic and Earth's Middle Ages, is an informal geological time period between 1.8 and 0.8 billion years ago (Ga) during the middle Proterozoic eon, spanning from the Statherian to the Tonian periods and characterized by relative tectonic stability, climatic stasis and slow biological evolution. Although it is bordered by two different oxygenation events (the Great Oxygenation Event and the Neoproterozoic Oxygenation Event) and two global glacial events (the Huronian and Cryogenian glaciations), the Boring Billion period itself actually had very low oxygen levels and no geological evidence of glaciations. The oceans during the Boring Billion may have been oxygen-poor, nutrient-poor and sulfidic (euxinia), populated mainly by anoxygenic purple bacteria, a type of bacteriochlorophyll-based photosynthetic bacteria that uses hydrogen sulfide (H2S) rather than water as an electron donor for carbon fixation and produces sulfur as a byproduct instead of oxygen. This is known as a Canfield ocean, and such a composition may have caused the oceans to be colored black and milky turquoise instead of the blue or green of later eras. (By contrast, during the much earlier Purple Earth phase of the Archean, photosynthesis was performed mostly by archaeal colonies using retinal-based proton pumps that absorb green light, and the oceans would have been magenta-purple.) Despite such adverse conditions, eukaryotes may have evolved around the beginning of the Boring Billion, and adopted several novel adaptations, such as various organelles, multicellularity and possibly sexual reproduction, and diversified into algae, fungi and early animals at the end of this time interval. Such advances may have been important precursors to the evolution of large, complex life later in the Ediacaran Avalon Explosion and the subsequent Phanerozoic Cambrian Explosion. Nonetheless, prokaryotic cyanobacteria were the dominant autotrophic lifeforms during this time, and likely supported an energy-poor food web with a small number of protists at the apex level. The land was likely inhabited by prokaryotic cyanobacteria and eukaryotic proto-lichens, the latter more successful here probably due to the greater availability of nutrients than in offshore ocean waters. Description In 1995, geologists Roger Buick, David Des Marais, and Andrew Knoll reviewed the apparent lack of major biological, geological, and climatic events during the Mesoproterozoic era 1.6 to 1 billion years ago (Ga) and, thus, described it as "the dullest time in Earth's history". The term "Boring Billion" was coined by paleontologist Martin Brasier to refer to the time between about 2 and 1 Ga, which was characterized by geochemical stasis and glacial stagnation. In 2013, geochemist Grant Young used the term "Barren Billion" to refer to a period of apparent glacial stagnation and lack of carbon isotope excursions from 1.8 to 0.8 Ga. In 2014, geologists Peter Cawood and Chris Hawkesworth called the time between 1.7 and 0.75 Ga "Earth's Middle Ages" due to a lack of evidence of tectonic movement. The Boring Billion is now largely cited as spanning about 1.8 to 0.8 Ga, contained within the Proterozoic eon, mainly the Mesoproterozoic. The Boring Billion is characterized by geological, climatic, and, by and large, evolutionary stasis, with low nutrient abundance.
In the time leading up to the Boring Billion, Earth experienced the Great Oxygenation Event due to the evolution of oxygenic photosynthetic cyanobacteria, and the resultant Huronian glaciation (Snowball Earth), formation of the UV-blocking ozone layer, and oxidation of several metals. Oxygen levels during the Boring Billion are thought to have been markedly lower than during the Great Oxidation Event, perhaps 0.1% to 10% of modern levels. The period was ended by the breakup of the supercontinent Rodinia during the Tonian (1000–720 Ma) period, a second oxygenation event, and another Snowball Earth in the Cryogenian period. Tectonic stasis The evolution of Earth's biosphere, atmosphere, and hydrosphere has long been linked to the supercontinent cycle, in which the continents aggregate and then drift apart. The Boring Billion saw the evolution of two supercontinents: Columbia (or Nuna) and Rodinia. The supercontinent Columbia formed between 2.0 and 1.7 Ga and remained intact until at least 1.3 Ga. Geological and paleomagnetic evidence suggests that Columbia underwent only minor changes to form the supercontinent Rodinia from 1.1 to 0.9 Ga. Paleogeographic reconstructions suggest that the supercontinent assemblage was located in equatorial and temperate climate zones, and there is little or no evidence for continental fragments in polar regions. Due to the lack of evidence of sediment build-up (on passive margins) which would occur as a result of rifting, the supercontinent probably did not break up, but rather was simply an assemblage of juxtaposed proto-continents and cratons. There is no evidence of rifting until the formation of Rodinia, at 1.25 Ga in North Laurentia and 1 Ga in East Baltica and South Siberia. Breakup did not occur until 0.75 Ga, marking the end of the Boring Billion. This tectonic stasis may have been linked to ocean and atmospheric chemistry. It is possible the asthenosphere—the molten layer of Earth's mantle that tectonic plates essentially float and move around upon—was too hot to sustain modern plate tectonics at this time. Instead of vigorous plate recycling at subduction zones, plates were linked together for billions of years until the mantle cooled off sufficiently. The onset of this component of plate tectonics may have been aided by the cooling and thickening of the crust, which, once initiated, made plate subduction anomalously strong, occurring at the end of the Boring Billion. Nonetheless, major magmatic events still occurred, such as the formation (via magma plume) of the central Australian Musgrave Province from 1.22 to 1.12 Ga, and of the Canadian Mackenzie Large Igneous Province 1.27 Ga. Plate tectonics was still active enough to build mountains, with several orogenies, including the Grenville orogeny, occurring at the time. Climatic stability There is little evidence of significant climatic variability during this time period. Climate was likely not primarily dictated by solar luminosity: although the Sun was 5–18% less luminous than it is today, there is no evidence that Earth's climate was significantly cooler. In fact, the Boring Billion seems to lack any evidence of prolonged glaciations, which can be observed at regular periodicity in other parts of Earth's geologic history. High CO2 could not have been a main driver for warming, because levels would have needed to be 30 to 100 times greater than pre-industrial levels to prevent ice formation, and such levels would have produced substantial ocean acidification, which also did not occur.
Mesoproterozoic CO2 levels may have been comparable to those of the Phanerozoic eon, perhaps 7 to 10 times higher than modern levels. The first record of ice from this time period was reported in 2020 from the 1 Ga Scottish Diabaig Formation in the Torridon Group, where dropstone formations were likely formed by debris from ice rafting; the area, then situated between 35–50°S, was a (possibly upland) lake which is thought to have frozen over in the winter and melted in the summer, with rafting occurring in the spring melt. A higher abundance of other greenhouse gases, namely methane produced by prokaryotes, may have compensated for the low CO2 levels; a largely ice-free world could have been achieved by an atmospheric methane concentration of 140 parts per million (ppm). Methanogenic prokaryotes could not have produced so much methane, however, implying some other greenhouse gas, probably nitrous oxide, was elevated, perhaps to 3 ppm (10 times today's levels). Based on presumed greenhouse gas concentrations, equatorial temperatures during the Mesoproterozoic may have been about , in the tropics , at 60° , and the poles ; and the global average temperature about , which is 4 °C (7.2 °F) warmer than today. Temperatures at the poles dropped below freezing in winter, allowing for temporary sea ice formation and snowfall, but there were likely no permanent ice sheets. It has also been proposed that, because the intensity of cosmic rays has been shown to be positively correlated with cloud cover, and cloud cover reflects light back into space and reduces global temperatures, lower rates of bombardment during this time due to reduced star formation in the galaxy caused less cloud cover and prevented glaciation events, maintaining a warm climate. Also, some combination of weathering intensity (which would have reduced CO2 levels by the oxidation of exposed metals), cooling of the mantle (with reduced geothermal heat and volcanism), and increasing solar intensity may have reached an equilibrium that barred ice formation. Conversely, glacial movements over a billion years ago may not have left many remnants today, and an apparent lack of evidence could be due to the incompleteness of the rock record rather than a true absence. Further, the low oxygen and solar intensity levels may have prevented the formation of the ozone layer, preventing greenhouse gases from being trapped in the atmosphere and heating the Earth via the greenhouse effect, which would have caused glaciation. Though not much oxygen is necessary to sustain the ozone layer, and levels during the Boring Billion may have been high enough for it, the Earth may have been more heavily bombarded by UV radiation than today. Oceanic composition The oceans seem to have had low concentrations of key nutrients thought to be necessary for complex life, namely molybdenum, iron, nitrogen, and phosphorus, in large part due to a lack of oxygen and of the resultant oxidation necessary for these geochemical cycles. Nutrients could have been more abundant in terrestrial environments, such as lakes or nearshore environments closer to continental runoff. In general, the oceans may have had an oxygenated surface layer, a sulfidic middle layer, and a suboxic bottom layer. The predominantly sulfidic composition may have caused the oceans to be a black and milky turquoise color instead of blue.
Oxygen Earth's geologic record indicates two events associated with significant increases in oxygen levels on Earth, one occurring between 2.4 and 2.1 Ga, known as the Great Oxidation Event (GOE), and the second occurring at approximately 0.8 Ga, known as the Neoproterozoic Oxygenation Event (NOE). The intervening period, during the Boring Billion, is thought to have had low oxygen levels (with minor fluctuations), leading to widespread anoxic waters. The oceans may have been distinctly stratified, with surface water being oxygenated and deep water being suboxic (less than 1 μM oxygen), the latter possibly maintained by lower levels of hydrogen (H2) and H2S output from deep-sea hydrothermal vents, which would otherwise have chemically consumed the oxygen and produced euxinic waters. Even in the shallowest waters, significant quantities of oxygen may have been restricted mainly to areas near the coast. The decomposition of sinking organic matter would have also leached oxygen from deep waters. The sudden drop in O2 after the Great Oxygenation Event—indicated by δ13C levels to have been a loss of 10 to 20 times the current volume of atmospheric oxygen—is known as the Lomagundi-Jatuli Event, and is the most prominent carbon isotope event in Earth's history. Oxygen levels may have been less than 0.1 to 1% of modern-day levels, which would have effectively stalled the evolution of complex life during the Boring Billion. However, a Mesoproterozoic Oxygenation Event (MOE), during which oxygen rose transiently to about 4% PAL at various points in time, is proposed to have occurred from 1.59 to 1.36 Ga. In particular, some evidence from the Gaoyuzhuang Formation suggests a rise in oxygen around 1.57 Ga, while the Velkerri Formation in the Roper Group of the Northern Territory of Australia, the Kaltasy Formation of Volgo-Uralia, Russia, and the Xiamaling Formation in the northern North China Craton indicate noticeable oxygenation around 1.4 Ga, although the degree to which this represents global oxygen levels is unclear. Oxic conditions would have become dominant at the NOE, causing the proliferation of aerobic activity over anaerobic, but widespread suboxic and anoxic conditions likely lasted until about 0.55 Ga, corresponding with the Ediacaran biota and the Cambrian explosion. Sulfur In 1998, geologist Donald Canfield proposed what is now known as the Canfield ocean hypothesis. Canfield claimed that increasing levels of oxygen in the atmosphere at the Great Oxygenation Event would have reacted with and oxidized continental iron pyrite (FeS2) deposits, with sulfate (SO42−) as a byproduct, which was transported into the sea. Sulfate-reducing microorganisms converted this to hydrogen sulfide (H2S), dividing the ocean into a somewhat oxic surface layer and a sulfidic layer beneath, with anoxygenic bacteria living at the border, metabolizing the H2S and creating sulfur as a waste product. This created widespread euxinic conditions in middle waters, an anoxic state with a high sulfur concentration, which was maintained by the bacteria. Many deposits from the Boring Billion contain mercury isotopic ratios characteristic of photic zone euxinia. More systematic geochemical study of the Mid-Proterozoic indicates that the oceans were largely ferruginous with a thin surface layer of weakly oxygenated water, and that euxinia may have occurred over relatively small areas, perhaps less than 7% of the seafloor.
The very low concentrations of molybdenum in the Mesoproterozoic could be sufficiently explained even with such a relatively low percentage of the seafloor being euxinic. Euxinia expanded and contracted, sometimes reaching the photic zone and at other times being relegated to deeper waters. Evidence from the McArthur Basin of northern Australia reveals that intracontinental settings in particular were intermittently low in sulphide. Iron Among rocks dating to the Boring Billion, there is a conspicuous lack of banded iron formations, which form from iron in the upper water column (sourced from the deep ocean) reacting with oxygen and precipitating out of the water. They seemingly cease around the world after 1.85 Ga. Canfield argued that oceanic sulfide removed all the iron in the anoxic deep sea. Iron could have been metabolized by anoxygenic bacteria. It has also been proposed that the 1.85 Ga Sudbury meteor impact mixed the previously stratified ocean via tsunamis, interaction between vaporized seawater and the oxygenated atmosphere, oceanic cavitation, and massive runoff of destroyed continental margins into the sea. Resultant suboxic deep waters (due to oxygenated surface water mixing with previously anoxic deep water) would have oxidized deep-water iron, preventing it from being transported and deposited on continental margins. Nonetheless, iron-rich waters did exist, such as in the 1.4 Ga Xiamaling Formation of Northern China, which perhaps was fed by deep-water hydrothermal vents. Iron-rich conditions also indicate anoxic bottom water in this area, as oxic conditions would have oxidized all the iron. Lifeforms Low nutrient abundance may have facilitated photosymbiosis—where one organism is capable of photosynthesis and the other metabolizes the waste product—among prokaryotes (bacteria and archaea), and the emergence of eukaryotes. Bacteria, Archaea, and Eukaryota are the three domains, the highest taxonomic ranking. Eukaryotes are distinguished from prokaryotes by a nucleus and membrane-bound organelles, and almost all multicellular organisms are eukaryotes. Prokaryotes Prokaryotes were the dominant lifeforms throughout the Boring Billion. Microfossils indicate the presence of cyanobacteria, green and purple sulfur bacteria, methane-producing archaea, sulfate-metabolizing bacteria, methane-metabolizing archaea or bacteria, iron-metabolizing bacteria, nitrogen-metabolizing bacteria, and anoxygenic photosynthetic bacteria. Anoxygenic cyanobacteria are thought to have been the dominant photosynthesizers, metabolizing the abundant H2S in the oceans. In iron-rich waters, cyanobacteria may have suffered from iron poisoning, especially in offshore waters where iron-rich deep water mixed with surface waters, and thus were outcompeted by other bacteria which could metabolize both iron and H2S. However, iron poisoning could have been abated by silica-rich waters or by biomineralization of iron within the cell. Unicellular planktonic lineages of cyanobacteria evolved in freshwater during the middle of the Mesoproterozoic, and during the Neoproterozoic both benthic marine and some freshwater ancestors gave rise to marine planktonic cyanobacteria (both nitrogen-fixing and non-nitrogen-fixing), contributing to the oxygenation of the Precambrian oceans. Research on cyanobacteria in the laboratory has shown that the enzyme nitrogenase, which is used to fix atmospheric nitrogen, stops working when oxygen levels are higher than 10% of current atmospheric levels.
The absence of fixed nitrogen due to an increased amount of oxygen would have created a negative feedback loop in which atmospheric oxygen levels stabilised at 2%; this began to change about 600 million years ago, when land plants started releasing oxygen. By 408 million years ago, nitrogen-fixing cyanobacteria had evolved heterocysts to protect their nitrogenase from oxygen. Eukaryotes Eukaryotes may have arisen around the beginning of the Boring Billion, coinciding with the accretion of Columbia, which could have somehow increased oceanic oxygen levels. Although there have been claimed records of eukaryotes as early as 2.1 billion years ago, these have been considered questionable, with the oldest unambiguous eukaryote remains dating to around 1.8-1.6 billion years ago in China. Following this, eukaryotic evolution was rather slow, possibly due to the euxinic conditions of the Canfield ocean and a lack of key nutrients and metals, which prevented large, complex life with high energy requirements from evolving. Euxinic conditions would have also decreased the solubility of iron and molybdenum, essential metals in nitrogen fixation. A lack of dissolved nitrogen would have favored prokaryotes over eukaryotes, as the former can metabolize gaseous nitrogen. An alternative hypothesis for the lack of diversification among eukaryotes implicates high temperatures during the Boring Billion rather than low oxygen levels, positing that because oxygenation events prior to the Late Neoproterozoic did not kickstart eukaryotic evolution, oxygen was not the main limiting factor inhibiting it. Nonetheless, the diversification of crown group eukaryotic macroorganisms seems to have started about 1.6–1 Ga, seemingly coinciding with an increase in key nutrient concentrations. According to molecular clock analysis, plants diverged from animals and fungi about 1.6 Ga; animals and fungi diverged about 1.5 Ga; sponges diverged from other animals about 1.35 Ga; bilaterians and cnidarians (animals respectively with and without bilateral symmetry) diverged about 1.3 Ga; and Ascomycota and Basidiomycota (the two divisions of the fungus subkingdom Dikarya) diverged 0.97 Ga. The paper's authors state that their time estimates disagree with the scientific consensus. Fossils from the late Palaeoproterozoic and early Mesoproterozoic of the Vindhyan sedimentary basin of India, the Ruyang Group of North China, and the Kotuikan Formation of the Anabar Shield of Siberia, among other places, indicate high rates (by pre-Ediacaran standards) of eukaryotic diversification between 1.7 and 1.4 Ga, although much of this diversity is represented by previously unknown, no longer extant clades of eukaryotes. The earliest known red algae mats date to 1.6 Ga. The earliest known fungus dates to 1.01–0.89 Ga, from northern Canada. Multicellular eukaryotes, thought to be the descendants of colonial unicellular aggregates, had probably evolved by about 2–1.4 Ga. Likewise, early multicellular eukaryotes likely mainly aggregated into stromatolite mats. The red alga Bangiomorpha is the earliest known sexually reproducing and meiotic lifeform, and evolved by 1.047 Ga. Based on this, these adaptations evolved between ca. 2–1.4 Ga. Alternatively, these may have evolved well before the last common ancestor of eukaryotes, given that meiosis is performed using the same proteins in all eukaryotes, perhaps stretching as far back as the hypothesized RNA world.
Cell organelles probably originated from free-living bacteria through symbiogenesis, possibly after the evolution of phagocytosis (engulfing other cells) with the removal of the rigid cell wall, which was only necessary for asexual reproduction. Mitochondria had already evolved around the time of the Great Oxygenation Event, but the plastids used in primoplants for photosynthesis are thought to have appeared about 1.6–1.5 Ga. Histones likely appeared during the Boring Billion to help organize and package the increasing amount of DNA in eukaryotic cells into nucleosomes. Hydrogenosomes used in anaerobic activity may have originated at this time from an archaeon. Given the evolutionary landmarks achieved by eukaryotes, this time period could be considered an important precursor to the Cambrian explosion about 0.54 Ga, and to the evolution of relatively large, complex life. Ecology Due to the marginalization of large food particles, such as algae, in favor of cyanobacteria and prokaryotes which do not transmit as much energy to higher trophic levels, a complex food web likely did not form, and large lifeforms with high energy demands could not evolve. Such a food web probably only sustained a small number of protists as, in a sense, apex predators. The presumably oxygenic photosynthetic eukaryotic acritarchs, perhaps a type of microalga, inhabited the Mesoproterozoic surface waters. Their population may have been largely limited by nutrient availability rather than predation, because species have been reported to have survived for hundreds of millions of years; after 1 Ga, however, species duration dropped to about 100 Ma, perhaps due to increased herbivory by early protists. This is consistent with species survival dropping to 10 Ma just after the Cambrian explosion and the expansion of herbivorous animals. The relatively low concentrations of molybdenum in the ocean throughout the Boring Billion have been suggested as a major limiting factor that kept populations of open-ocean nitrogen-fixing microorganisms, which require molybdenum to produce nitrogenases, low, although freshwater and coastal environments close to riverine sources of dissolved molybdenum may have still hosted significant communities of nitrogen fixers. The low rate of nitrogen fixation, which only ended during the Cryogenian with the evolution of planktonic nitrogen fixers, meant that free ammonium was in short supply across this time interval, severely constraining the evolution and diversification of multicellular biota. Life on land Some of the earliest evidence of the prokaryotic colonization of land dates to before 3 Ga, possibly as early as 3.5 Ga. During the Boring Billion, land may have been inhabited mainly by cyanobacterial mats. Dust would have supplied an abundance of nutrients and a means of dispersal for surface-dwelling microbes, though microbial communities could have also formed in caves and in freshwater lakes and rivers. By 1.2 Ga, microbial communities may have been abundant enough to have affected weathering, erosion, sedimentation, and various geochemical cycles, and expansive microbial mats could indicate that biological soil crust was abundant. The earliest terrestrial eukaryotes may have been lichen fungi about 1.3 Ga, which grazed on the microbial mats. Abundant eukaryotic microfossils from the freshwater Scottish Torridon Group seem to indicate eukaryotic dominance in non-marine habitats by 1 Ga, probably due to increased nutrient availability in areas closer to the continents and continental runoff.
These lichens may have later facilitated plant colonization around 0.75 Ga in some manner. A massive increase in terrestrial photosynthetic biomass seems to have occurred about 0.85 Ga, indicated by a flux of terrestrially sourced carbon, which may have increased oxygen levels enough to support an expansion of multicellular eukaryotes. See also The natural nuclear fission reactors at what is now Oklo, Gabon were active in this era External links Earth’s 'boring billion' years of stagnant, stinking oceans might actually have been rather dynamic Astrobiology References Proterozoic Plate tectonics Geological periods 1995 in science
Boring Billion
Astronomy,Biology
5,371
6,789,892
https://en.wikipedia.org/wiki/ALK-Abell%C3%B3
ALK-Abelló A/S, also commonly known as ALK, is a Denmark-based pharmaceutical company which specializes in the development and manufacturing of allergy immunotherapy (AIT) products for the prevention and treatment of allergy. It is one of the world's largest makers of allergy immunotherapy products (also known as ‘allergy vaccines’), with 67% of its revenue coming from sales in Europe. History ALK-Abelló dates back to 1923, when Denmark's first allergen extracts were produced at the pharmacy of the Copenhagen University Hospital. The company was subsequently established as Allergologisk Laboratorium København. The company has since focused on mapping the mechanisms of allergic impact on the immune system in order to develop new and improved allergy therapies. The research focuses on prevention, treatment, and management of allergies, especially hay fever and asthma. In 1925, Juan Abello Pascual founded Abello Pharmaceuticals in Spain. In 1978, ALK released the first standardized line of products for the treatment of allergies. In 1992, ALK and Abello merged. In the 1990s, ALK was the first company to launch sublingual immunotherapy drops (allergy immunotherapy administered as droplets under the tongue). In recent years, ALK's research and development strategy has been focused on introducing a range of sublingual immunotherapy tablets (SLIT-tablets). The first, for grass pollen allergy, was launched in 2006, and was followed by SLIT-tablets for ragweed pollen allergy in 2014, and house dust mite allergy in 2016. The company is part-owned by the Lundbeck Foundation, but is also a publicly traded entity. Corporate information ALK's stock is listed on the NASDAQ OMX Copenhagen Stock Exchange. As of November 1, 2023, the members of ALK's board of directors are: Anders Hedegaard (Chairperson of the board, Chairperson of the Remuneration & Nomination Committee, Member of the Scientific Committee, Professional board member) Lene Skole (Vice Chair, Member of the Remuneration & Nomination Committee, Member of the Scientific Committee, CEO of Lundbeckfonden, with directorships at two other subsidiaries) Gitte Aabo (Chairperson of the Audit Committee, CEO of GN Hearing and GN Store Nord) Lars Holmqvist (Member of the Audit Committee, Professional board member) Jesper Høiland (Member of the Audit Committee, Strategic adviser, PharmaCo Consult ApS) Bertil Lindmark (Chairperson of the Scientific Committee, Chief Medical Officer of Galecto A/S) Alan Main (Member of the Remuneration & Nomination Committee, Senior adviser, Bamboo Capital Partners) Katja Barnkob Nanna Carlson Lise Lund Mærkedahl Johan Smedsrud As of November 1, 2023, the members of ALK's board of management are: Peter Halling (President and Chief Executive Officer) Henriette Mersebach (Executive Vice President, Research and Development) Søren Niegel (Executive Vice President, Commercial Operations) Claus Steensen Sølje (Chief Financial Officer and Executive Vice President) Christian Gauguin Houghton (Executive Vice President, Product Supply) Lisbeth Kirk (Senior Vice President, People & Organisation) Jan Engel Jensen (Senior Vice President, Global Quality Assurance) Operations ALK employs 2,900 people worldwide. The company's global headquarters and its main research and development center are on the Scion DTU science and technology park in Hørsholm, Denmark, in the Capital Region of Denmark. The company is present in 46 countries, either directly or via partnerships.
Other ALK facilities include European production facilities in Hørsholm, Denmark, in Vandeuil and Varennes, France, and in Madrid, Spain; and North American production facilities in Port Washington, New York, in Post Falls, Idaho, in Luther, Iowa, and in Oklahoma City, Oklahoma.

Products Allergy immunotherapy products account for 88% of ALK's revenues and comprise three types of product: sublingual immunotherapy tablets (SLIT-tablets), covering grass pollen, ragweed pollen and house dust mite allergies; sublingual immunotherapy drops (SLIT-drops), a droplet-based allergy vaccine marketed under various brand names and covering approximately 40 different allergens and combinations of allergens, including pollens, molds, mites and pets; and subcutaneous immunotherapy (SCIT), an injection-based allergy vaccine marketed under various brand names and covering the most common allergens such as pollens, molds, mites, pets, bees and wasps. These allergy immunotherapy products are purified from natural allergen sources and then combined with excipients during formulation. Other ALK products include an emergency adrenaline autoinjector for people who experience anaphylaxis, allergy diagnostic kits (for example, skin prick tests), and bulk allergen extracts, mostly sold in the US to allergy specialists for use in their practices.

Partnerships and collaborations From 2006 until 2016, ALK had a strategic alliance with Merck & Co to develop and commercialize ALK's SLIT-tablet products in the United States, Canada and Mexico, granting Merck exclusive rights to develop, market and distribute the tablets for grass pollen allergy, house dust mite allergy and ragweed allergy in those markets. In June 2016, ALK and Merck & Co announced that the partnership would end by the end of 2016. The company also has an alliance with Torii Pharmaceutical Co. Ltd to develop and commercialize ALK's house dust mite allergy immunotherapy products in Japan. The agreement covers the house dust mite SLIT-tablet, an injection-based immunotherapy product, and diagnostic products for house dust mite allergy. In addition, it established a research and development collaboration to develop a SLIT-tablet product for Japanese cedar pollen allergy. In China, ALK had a collaboration with Eddingpharm which covered ALK's diagnostic skin prick test for house dust mite allergy, and a subcutaneous immunotherapy (SCIT) product, also for house dust mite allergy. Under the agreement, Eddingpharm handled sales and distribution, while ALK provided medical and scientific expertise. Announced in April 2014, the collaboration was to run for an initial seven years, provided certain performance targets were met. However, the agreement was terminated by ALK in 2016, in favor of a collaboration with a new distributor. ALK has also had a collaboration with Abbott covering Russia and selected emerging markets, under which Abbott gained rights to distribute and commercialize ALK's SLIT-tablets for grass, ragweed, tree and house dust mite allergies. The agreement with Abbott was expanded in 2016, and allowed Abbott until the end of the collaboration in 2017 to register and sell ALK's house dust mite SLIT-tablet in seven markets in South-East Asia: Hong Kong, Malaysia, the Philippines, Singapore, South Korea, Taiwan and Thailand. 
ALK also signed a collaboration with bioCSL (now named Seqirus) covering Australia and New Zealand, which grants it exclusive rights to promote and sell ALK's SLIT-tablets against house dust mite and grass pollen allergy and its adrenaline auto-injector.

Sponsorships ALK is one of the founder sponsors of the European Academy of Allergy and Clinical Immunology (EAACI). It also sponsors and organizes an invitation-only biennial scientific symposium, the Symposium on Specific Allergy (SOSA). Since 2000, the company has sponsored the WAO Henning Løwenstein Research Award, which is awarded to a young scientist who has shown excellence within the field of allergy. Since 2005, the award has been made every two years. The 2013 winner received €10,000 (or an educational grant of corresponding value) and a travel grant to attend the 2013 European Academy of Allergy and Clinical Immunology (EAACI)/World Allergy Organization (WAO) Congress in Milan, Italy.

Timeline 1923: Doctor Kaj Baagøe and pharmacist Peter Barfod produce Denmark's first pharmaceutically manufactured allergen extract at Copenhagen University Hospital (Rigshospitalet) 1949: Production of allergy vaccines and diagnostics is centered at an independent unit called Allergologisk Laboratorium (Allergology Laboratory) 1972: The technique to accurately identify the proteins that provoke allergies in individual patients is developed 1976: The first standardized process for allergen extracts (DAS 76) is developed 1978: The world's first standardized allergy immunotherapy product is launched 1984: ALK begins a period of global expansion by establishing a presence in several new markets and through strategic acquisitions 1990: The world's first product for sublingual immunotherapy drops (SLIT-drops) (droplets administered under the tongue) is launched 1990: The PAT (Preventative Allergy Treatment) study is initiated in collaboration with leading European allergy specialists. The study finds that allergy immunotherapy decreases the risk of the development of asthma in children with allergic rhinitis 1992: ALK acquires its largest competitor at the time, Alergia e Immunologia Abello S.A. (Abello), a Spanish company based in Madrid with additional locations in Italy and Germany. Abello Pharmaceuticals was founded in 1925 by pharmacist Juan Abello Pascual and specialized in manufacturing oxygen- and alkaline-based therapeutics as well as synthetic hormones, among other processes and products. 2000: ALK acquires Center Laboratories, one of the leading suppliers of allergenic extracts in the U.S. 2006: The world's first registered sublingual immunotherapy tablet (SLIT-tablet), for grass pollen allergy, is launched 2007: An agreement is signed with Schering-Plough (which merged with Merck & Co. in November 2009) to develop, register and commercialize in North America a portfolio of SLIT-tablet treatments for grass, ragweed and house dust mite allergy 2009: The Grazax Asthma Prevention (GAP) trial, investigating the potential for preventing the development of asthma in children and adolescents, is initiated. 2011: A new ALK-developed adrenaline auto-injector for the treatment of acute anaphylaxis is launched 2011: An agreement with Torii Pharmaceutical Co., Ltd. to develop, register and commercialize a number of SLIT-tablet products in Japan is signed. The agreement also covers selected existing ALK products and diagnostics against house dust mite allergy. 
It also covers a research and development contract to develop a sublingual immunotherapy tablet against Japanese cedar pollen allergy. 2014: The grass and ragweed SLIT-tablets are launched in the US. 2014: ALK signs collaboration agreements with Eddingpharm for China and Abbott for Russia and selected emerging markets. 2015: ALK signs a collaboration agreement with bioCSL (now Seqirus) covering Australia and New Zealand. 2016: ALK expands its partnership with Abbott to cover seven markets in South-East Asia and terminates its partnerships with Merck and Eddingpharm. 2016: The house dust mite SLIT-tablet is launched in Europe and Japan. 2017: Inclusion in GINA. For the first time, allergy immunotherapy is recommended as a treatment option in the Global Initiative for Asthma (GINA) strategy document. 2017: New transformational growth strategy. ALK launches a new strategy to reach more people with allergies and become a broader-based allergy company. 2018: Launch of sister brand. ALK launches its sister brand klarify.me, offering a broad range of pre-screened products and services to help people manage their allergies. 2021: Entering food allergies. ALK announces an entry into food allergy treatment and begins developing a tablet for peanut allergies. 2023: ALK turns 100. The company marks a century of work in allergy treatment.

Research and development Since launching the first SLIT-tablet (for grass allergy) in 2006, ALK has been developing tablets for other allergies, including tree, ragweed and house dust mite. Its strategy is to develop more patient-friendly treatments based on the approach of 're-educating' the immune system.

Notes and references External links Companies listed on Nasdaq Copenhagen Pharmaceutical companies of Denmark Biotechnology companies of Denmark Life science companies based in Copenhagen Pharmaceutical companies established in 1923 Danish companies established in 1923 Companies based in Rudersdal Municipality
ALK-Abelló
Biology
2,678
23,825,035
https://en.wikipedia.org/wiki/Non-squeezing%20theorem
The non-squeezing theorem, also called Gromov's non-squeezing theorem, is one of the most important theorems in symplectic geometry. It was first proven in 1985 by Mikhail Gromov. The theorem states that one cannot embed a ball into a cylinder via a symplectic map unless the radius of the ball is less than or equal to the radius of the cylinder. The theorem is important because formerly very little was known about the geometry behind symplectic maps.

One easy consequence of a transformation being symplectic is that it preserves volume. One can easily embed a ball of any radius into a cylinder of any other radius by a volume-preserving transformation: just picture squeezing the ball into the cylinder (hence the name non-squeezing theorem). Thus, the non-squeezing theorem tells us that, although symplectic transformations are volume-preserving, it is much more restrictive for a transformation to be symplectic than it is to be volume-preserving.

Background and statement Consider the symplectic spaces

$$B(r) = \{(x_1, \ldots, x_n, y_1, \ldots, y_n) \in \mathbb{R}^{2n} : x_1^2 + \cdots + x_n^2 + y_1^2 + \cdots + y_n^2 < r^2\},$$
$$Z(R) = \{(x_1, \ldots, x_n, y_1, \ldots, y_n) \in \mathbb{R}^{2n} : x_1^2 + y_1^2 < R^2\},$$

each endowed with the symplectic form

$$\omega = dx_1 \wedge dy_1 + \cdots + dx_n \wedge dy_n.$$

The space $B(r)$ is called the ball of radius $r$ and $Z(R)$ is called the cylinder of radius $R$. The choice of axes for the cylinder is not arbitrary given the fixed symplectic form above; the circles of the cylinder each lie in a symplectic subspace of $\mathbb{R}^{2n}$.

If $(M_1, \omega_1)$ and $(M_2, \omega_2)$ are symplectic manifolds, a symplectic embedding $\varphi \colon M_1 \hookrightarrow M_2$ is a smooth embedding such that $\varphi^* \omega_2 = \omega_1$. For $r \leq R$, the inclusion $B(r) \subseteq Z(R)$ is a symplectic embedding which takes each point of $B(r)$ to the same point of $Z(R)$. Gromov's non-squeezing theorem says that if there is a symplectic embedding $B(r) \hookrightarrow Z(R)$, then $r \leq R$.

Symplectic capacities A symplectic capacity is a map $c$ from symplectic manifolds of a fixed dimension $2n$ to $[0, \infty]$ satisfying (Monotonicity) if there is a symplectic embedding $(M_1, \omega_1) \hookrightarrow (M_2, \omega_2)$, then $c(M_1, \omega_1) \leq c(M_2, \omega_2)$, (Conformality) $c(M, \lambda \omega) = |\lambda|\, c(M, \omega)$, and (Nontriviality) $c(B(1)) > 0$ and $c(Z(1)) < \infty$. The existence of a symplectic capacity with $c(B(1)) = c(Z(1)) = \pi$ is equivalent to Gromov's non-squeezing theorem: given such a capacity, one can verify the non-squeezing theorem, and given the non-squeezing theorem, the Gromov width $w(M, \omega) = \sup\,\{\pi r^2 : B(r) \text{ embeds symplectically into } (M, \omega)\}$ is such a capacity.

The “symplectic camel” Gromov's non-squeezing theorem has also become known as the principle of the symplectic camel since Ian Stewart referred to it by alluding to the parable of the camel and the eye of a needle. Maurice A. de Gosson and others have used the same metaphor in accounts of the theorem.

Further work De Gosson has shown that the non-squeezing theorem is closely linked to the Robertson–Schrödinger–Heisenberg inequality, a generalization of the Heisenberg uncertainty relation. The Robertson–Schrödinger–Heisenberg inequality states that

$$\operatorname{var}(Q)\,\operatorname{var}(P) \geq \operatorname{cov}^2(Q, P) + \frac{\hbar^2}{4},$$

with Q and P the canonical coordinates and var and cov the variance and covariance functions.

References Further reading Maurice A. de Gosson: The symplectic egg, arXiv:1208.5969v1, submitted on 29 August 2012 – includes a proof of a variant of the theorem for the case of linear canonical transformations Dusa McDuff: What is symplectic geometry?, 2009 Symplectic geometry Theorems in geometry
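The gap between volume preservation and the symplectic condition can be made concrete with linear maps. The following minimal sketch is an illustration added here, not part of the original article; it assumes numpy and tests the linear symplectic condition $M^T J M = J$ on two diagonal maps of $\mathbb{R}^4$: a volume-preserving squeeze that fits the unit ball into a thin cylinder but is not symplectic, and a symplectic shrink of $x_1$ whose image then fails to fit in any thin cylinder.

```python
import numpy as np

# Coordinates ordered (x1, x2, y1, y2); J is the matrix of the standard
# symplectic form, and a linear map M is symplectic iff M^T J M = J.
n = 2
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n),       np.zeros((n, n))]])

def is_symplectic(M):
    return np.allclose(M.T @ J @ M, J)

# Volume-preserving "squeeze": shrink x1 and y1 by 1/2, stretch x2 and y2
# by 2.  Its determinant is 1 and it maps the unit ball B(1) into the
# cylinder Z(1/2) -- but it is not symplectic, as the theorem requires.
squeeze = np.diag([0.5, 2.0, 0.5, 2.0])
print(np.isclose(np.linalg.det(squeeze), 1.0))  # True  (volume-preserving)
print(is_symplectic(squeeze))                   # False (not symplectic)

# Shrinking x1 while stretching its conjugate y1 IS symplectic, but points
# of B(1) with y1 near 1 then map to x1^2 + y1^2 near 4, so the image of
# the ball escapes every cylinder of radius less than 2.
conjugate_stretch = np.diag([0.5, 1.0, 2.0, 1.0])
print(is_symplectic(conjugate_stretch))         # True
```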
Non-squeezing theorem
Mathematics
650
35,919,963
https://en.wikipedia.org/wiki/Exeter%20point
In geometry, the Exeter point is a special point associated with a plane triangle. It is a triangle center and is designated as X(22) in Clark Kimberling's Encyclopedia of Triangle Centers. The point was discovered in a computers-in-mathematics workshop at Phillips Exeter Academy in 1986. It is one of the more recently discovered triangle centers, unlike classical triangle centers such as the centroid, incenter, and Steiner point.

Definition The Exeter point is defined as follows. Let $ABC$ be any given triangle. Let the medians through the vertices $A$, $B$, $C$ meet the circumcircle of $ABC$ at $A'$, $B'$, $C'$ respectively. Let $DEF$ be the triangle formed by the tangents at $A$, $B$, $C$ to the circumcircle of $ABC$. (Let $D$ be the vertex opposite to the side formed by the tangent at the vertex $A$, $E$ be the vertex opposite to the side formed by the tangent at the vertex $B$, and $F$ be the vertex opposite to the side formed by the tangent at the vertex $C$.) The lines $DA'$, $EB'$, $FC'$ are concurrent. The point of concurrence is the Exeter point of $ABC$.

Trilinear coordinates The trilinear coordinates of the Exeter point are

$$a(b^4 + c^4 - a^4) : b(c^4 + a^4 - b^4) : c(a^4 + b^4 - c^4).$$

Properties The Exeter point of triangle $ABC$ lies on the Euler line (the line passing through the centroid, the orthocenter, the de Longchamps point, the Euler centre and the circumcenter) of triangle $ABC$. Thus at least six notable triangle centers are collinear, all lying on this single line.

References Triangle centers
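As a quick numerical check, the trilinears above can be converted to Cartesian coordinates and tested for collinearity with the centroid and circumcenter. The sketch below is an added illustration, not part of the original article; it assumes numpy, and the triangle vertices are arbitrary test values.

```python
import numpy as np

def tri_to_cartesian(alpha, beta, gamma, A, B, C, a, b, c):
    """Convert trilinears alpha:beta:gamma to Cartesian coordinates by
    scaling each by the opposite side length to get barycentric weights."""
    w = np.array([a * alpha, b * beta, c * gamma])
    return (w[0] * A + w[1] * B + w[2] * C) / w.sum()

# An arbitrary scalene triangle
A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

# Exeter point X(22) from its trilinears a(b^4 + c^4 - a^4) : ...
exeter = tri_to_cartesian(a * (b**4 + c**4 - a**4),
                          b * (c**4 + a**4 - b**4),
                          c * (a**4 + b**4 - c**4), A, B, C, a, b, c)

centroid = (A + B + C) / 3
# Circumcenter has trilinears cos A : cos B : cos C (law of cosines)
cosA = (b**2 + c**2 - a**2) / (2 * b * c)
cosB = (c**2 + a**2 - b**2) / (2 * c * a)
cosC = (a**2 + b**2 - c**2) / (2 * a * b)
circum = tri_to_cartesian(cosA, cosB, cosC, A, B, C, a, b, c)

# Collinearity test: the 2D cross product of the two directions vanishes
u, v = circum - centroid, exeter - centroid
print(abs(u[0] * v[1] - u[1] * v[0]) < 1e-9)  # True: X(22) lies on the Euler line
```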
Exeter point
Physics,Mathematics
286
56,520,281
https://en.wikipedia.org/wiki/William%20Foster%20Nye
William Foster Nye (May 20, 1824 – August 12, 1910) was an American businessman and founder of a lubricating oil business in New Bedford, Massachusetts, which is still in existence today and known as Nye Lubricants. He was a relative of Bill Nye of Bill Nye the Science Guy fame. Life and career Nye was born in the village of Pocasset (at the time considered part of the town of Sandwich), one of the eight children of Syrena née Dimmock and Ebenezer Nye. His family was descended from Benjamin Nye who had emigrated from England in 1635 and settled in Massachusetts where he eventually built and operated a sawmill near Sandwich. At the age of 16 Nye was apprenticed to Prince Weeks, a master builder in New Bedford. On finishing his apprenticeship, he worked for a pipe organ-building company in Boston and then spent three years in Calcutta as a carpenter for the Frederic Tudor Ice Company. On his return to Massachusetts he married Mary Keith on May 20, 1851. Nye then sailed to California, crossing the Isthmus of Panama on foot, and arriving in San Francisco shortly after the Fire of 1851 which had destroyed much of the city. He worked in the re-building of the city for several years, helping to construct some of San Francisco's first brick buildings. In 1855, Nye returned to New Bedford and set up an oil and kerosene business which he operated until the outbreak of the American Civil War when he joined the Union Army as a sutler to the Massachusetts Artillery and the 4th Massachusetts Cavalry. He was with the advance guard of the cavalry when it entered Richmond, Virginia in 1865 and set up a trading post there in one of the city's remaining brick buildings. For a time he was the sole tradesman operating in Richmond. After his regiment was mustered out in November 1865 Nye returned to New Bedford and began developing the lubricant oil business for which he became principally known. Nye's oil business, originally run out of small rented premises in Fairhaven, focused primarily on highly refined lubricant oils for watches, clocks, typewriters, sewing machines, and bicycles. In the late 1860s he acquired an entire catch of 2,200 pilot whales which would supply the raw material for his lubricating oils for several years. He expanded the business in 1877 with the purchase of a large brick building on Fish Island which became its principal refinery. By 1888, his company had become one of the world's largest suppliers of refined lubricant oils. In 1896 Nye absorbed Ezra Kelley's oil company, his main rival. He remained actively involved in the business until shortly before his death in 1910 at the age of 86. Nye's son, Joseph Keith Nye (1858–1923) worked extensively with his father, patented several inventions for the improvement of the refining process, and took over the company after his father's death. After Joseph's death in 1923, the business was acquired by his associate Anderson W. Kelley. It was subsequently acquired by the Mock family in 1956 and still operates today under the name Nye Lubricants. A keen believer in spiritualism, Nye had been one of the founders and most active promoters of the Onset Bay Grove Association. Their summer retreat, Onset Bay Grove, built by the association in the late 1870s, claimed to be the "largest community of spiritualism yet formed in the fifty years history of its teachings." 
In his later years Nye said of his beliefs: That I am a spiritualist must be to those I leave behind me the touch that withers my memory or the ever living archway about which they can entwine earth's fragrant flowers and through which they may in gladness follow me to the evergreen shore. Nye is buried with his wife Mary in Riverside Cemetery in Fairhaven. Notes References 1824 births 1910 deaths American manufacturing businesspeople Businesspeople from Massachusetts People from Sandwich, Massachusetts 19th-century American businesspeople Tribologists
William Foster Nye
Materials_science
829
23,917,974
https://en.wikipedia.org/wiki/SOX1
SOX1 is a gene that encodes a transcription factor with an HMG-box (high mobility group) DNA-binding domain and functions primarily in neurogenesis. SOX1, SOX2 and SOX3, members of the SOX gene family (specifically the SOXB1 group), encode transcription factors related to SRY, the testis-determining factor. SOX1 is important in the development of the central nervous system (neurogenesis), in particular the development of the eye, where it is functionally redundant with SOX3 and, to a lesser degree, SOX2, and in the maintenance of neural progenitor cell identity. In the tetrapod embryo, SOX1 expression is restricted to proliferating progenitor cells of the neuroectoderm. The induction of this neuroectoderm occurs upon expression of the SOX1 gene. In ectodermal cells committed to a neural fate, SOX1 is one of the earliest transcription factors to be expressed; it is first detected at the late head fold stage.

Clinical significance

Striatum development SOX1 is expressed particularly in the ventral striatum, and Sox1-deficient mice have altered striatum development, leading, for example, to epilepsy.

Lens development SOX1 is clinically significant through its direct regulation of gamma-crystallin genes, which is vital for lens development in mice. Gamma-crystallins serve as a key structural component of lens fiber cells in both mammals and amphibians. Research has shown that deletion of the SOX1 gene in mice causes cataracts and microphthalmia; these mutant lenses fail to elongate due to the absence of gamma-crystallins.

SOXB1 group redundant roles SOX1 is a member of the SOX gene family, in particular the SOXB1 group, which includes SOX1, SOX2, and SOX3. The SOX gene family encodes transcription factors. It is suggested that the three members of the SOXB1 group have redundant roles in the development of neural stem cells. This group of SOX genes regulates neural progenitor identity, and each of the three proteins is associated with distinct neural markers. Overexpression of SOX1, SOX2, or SOX3 increases neural progenitors and prevents neural differentiation. In non-mammalian vertebrates, loss of a single SOXB1 protein results in only minor phenotypic differences, which supports the claim that the SOXB1 group proteins have redundant roles.

See also Neurogenesis

References Transcription factors Developmental genes and proteins
SOX1
Chemistry,Biology
518
4,579,429
https://en.wikipedia.org/wiki/Sir%20Edmund%20Whittaker%20Memorial%20Prize
The Sir Edmund Whittaker Memorial Prize is awarded every four years by the Edinburgh Mathematical Society to an outstanding young mathematician having a specified connection with Scotland. It is named after Sir Edmund Whittaker. History After the death of Sir Edmund Whittaker in 1956, his son John Macnaghten Whittaker donated on behalf of the Whittaker Family the sum of £500 to the Edinburgh Mathematical Society to establish a prize for mathematical work in memory of his father. As of 2009, the award money remains £500. Winners 2023 Jonathan Fraser (University of St Andrews) 2021 Ben Davison (University of Edinburgh) 2019 Michela Ottobre (Heriot-Watt University) 2016 Arend Bayer (University of Edinburgh) 2013 Stuart White (University of Glasgow) 2009 Agata Smoktunowicz (University of Edinburgh) 2005 Tom Bridgeland (University of Sheffield) 2001 Michael McQuillan and J A Sherratt 1997 Alan D Rendall (Max-Planck-Institut für Gravitationsphysik) 1993 Mitchell A. Berger and Alan W. Reid 1989 A A Lacey and Michael Röckner 1985 John Mackintosh Howie 1981 John M. Ball (University of Oxford) 1977 Gavin Brown and C A Stuart 1973 A M Davie (University of Edinburgh) 1970 Derek J S Robinson (University of Illinois) 1965 John Bryce McLeod (University of Oxford) 1961 A G Mackie and Andrew H. Wallace See also List of mathematics awards References External links O'Connor, John J.; Robertson, Edmund F., "EMS Whittaker Prize", MacTutor History of Mathematics archive, University of St Andrews O'Connor, John J.; Robertson, Edmund F., "Winners of the EMS Whittaker Prize", MacTutor History of Mathematics archive, University of St Andrews Mathematics awards Monuments and memorials in Scotland Scottish awards Memorial Prize
Sir Edmund Whittaker Memorial Prize
Technology
389
5,435,674
https://en.wikipedia.org/wiki/Acoustic%20shadow
An acoustic shadow or sound shadow is an area through which sound waves fail to propagate, due to obstructions such as buildings and sound barriers, or to disruption of the waves by phenomena such as wind currents.

Short-distance acoustic shadow A short-distance acoustic shadow occurs behind a building or a sound barrier. The sound from a source is shielded by the obstruction. Due to diffraction around the object, it will not be completely silent in the sound shadow. The amplitude of the sound can be reduced considerably, however, depending on the additional distance the sound has to travel between source and receiver.

Long-distance acoustic shadow Anomalous sound propagation in the atmosphere can occur in certain conditions of wind, temperature and pressure. Such conditions enable sound to travel in refraction channels over long distances before returning to the Earth's surface, so that it may not be heard at all in intervening locations. As one website puts it, "an acoustic shadow is to sound what a mirage is to light". For example, at the Battle of Iuka, a northerly wind prevented General Ulysses S. Grant from hearing the sounds of battle, and so he did not send more troops. Many other instances of acoustic shadowing occurred during the American Civil War, including the Battles of Seven Pines, Gaines' Mill, Perryville and Five Forks. Indeed, this is addressed in Ken Burns's documentary The Civil War, which aired on PBS in September 1990. Observers of nearby battles would sometimes see the smoke and flashes of light from cannon but not hear the corresponding roar of battle, while those in more distant locations would hear the sounds distinctly. The diarists John Evelyn and Samuel Pepys both heard from London the naval guns of the Four Days' Battle, which ranged over the southern North Sea between England and the Flanders coast; however, the guns were not heard at all in towns on the coast nearer to the action.

See also Gobo (recording)

References Notes Further reading Garrison Jr., Webb. Strange Battles of the Civil War. Cumberland House, 2001. Ross, Charles D. Civil War Acoustic Shadows. Shippensburg, PA: White Mane Publishing, 2001. External links Acoustic Shadows - What is an acoustic shadow and how does it work? - Lisa Acoustic Shadow Hearing Waves Acoustics
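For the short-distance case, the attenuation behind a barrier is often estimated from the Fresnel number of the diffraction path. The sketch below is an added illustration, not from this article: it uses Maekawa's classic empirical approximation, roughly 10·log10(3 + 20N) dB, and the 0.5 m path-length difference is an arbitrary example value.

```python
import math

def fresnel_number(delta_m, freq_hz, c=343.0):
    """Fresnel number N = 2*delta/lambda, where delta is the extra path
    length (in metres) sound must travel over the top of the barrier."""
    wavelength = c / freq_hz
    return 2.0 * delta_m / wavelength

def maekawa_attenuation_db(N):
    """Maekawa's empirical barrier attenuation, ~10*log10(3 + 20N) dB."""
    return 10.0 * math.log10(3.0 + 20.0 * N)

# Example: a barrier adds 0.5 m of path length between source and receiver.
for f in (125, 500, 2000):  # frequency in Hz
    N = fresnel_number(0.5, f)
    print(f"{f:5d} Hz: N = {N:5.2f}, attenuation = {maekawa_attenuation_db(N):4.1f} dB")
```

Higher frequencies (shorter wavelengths) diffract less around the obstacle and are attenuated more, which is why a sound shadow dulls the treble of a source more than its bass.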
Acoustic shadow
Physics
465
21,836,950
https://en.wikipedia.org/wiki/Ampulloclitocybe%20clavipes
Ampulloclitocybe clavipes, commonly known as the club-foot or club-footed clitocybe, is a species of gilled mushroom from Europe and North America. The grey-brown mushrooms have yellowish decurrent gills and a bulbous stalk, and are found in deciduous and conifer woodlands. Although considered edible, disulfiram-like reactions have been reported after consumption of alcohol after eating this mushroom.

Taxonomy The species was initially described as Agaricus clavipes by South African mycologist Christiaan Hendrik Persoon in 1801, its specific epithet derived from the Latin terms clava "club" and pes "foot". It was transferred to Clitocybe by German naturalist Paul Kummer in 1871 and was even designated, improperly, the type species by Howard E. Bigelow in 1965. French mycologist Lucien Quélet chose to place it in Omphalia (now Omphalina) in 1886. Scott Redhead and colleagues proposed the genus Ampulloclitocybe for it, as the species was only distantly related to other members of Clitocybe proper and more closely related instead to Rimbachia bryophila, Omphalina pyxidata and "Clitocybe" lateritia. Around the same time, Finnish mycologist Harri Harmaja proposed the genus Clavicybe. However, as the former name was published on November 5, 2002, and the latter on December 31, 2002, Harmaja conceded that Ampulloclitocybe had priority. English mycologist P. D. Orton described Clitocybe squamulosoides in 1960, which he held to be a slender relative with large spores, though the differences are inconsistent and there are intermediate forms; hence it is considered indistinguishable from A. clavipes. Common names include club foot, club-footed funnel cap, club-footed clitocybe and clavate-stalked clitocybe.

Description The cap of the mushroom is in diameter, convex with a small boss, becoming plane to depressed in shape. It has a smooth surface, often covered in fibrils, and is usually moist. Cap colours are generally grey-brown, sometimes tinged olive, with a pale margin. The stem has a markedly bulbous base, and is tall by wide. Its surface is covered in silky fibres, and it is the same colour as the cap. The thick flesh is white, but slightly yellow at the base. In the stem, it is tough on the surface and spongy and soft in the centre. It is watery, with a slightly sweet smell that has been likened to bitter almond, orange blossom, cinnamon, or even grape bubble gum. The gills are strongly decurrent and cream-yellow in colour, contrasting with the rest of the mushroom. There are some smaller gills in between the regular gills, and the gills are occasionally forked near the stem. The gill edges are straight in younger mushrooms and sometimes wavy (undulate) in older ones. The spore print is white. The round to oval spores are 4.5–5 by 3.5–4 microns. It resembles the clouded agaric (Clitocybe nebularis), but can be distinguished by its bulbous stem, deeply decurrent gills, and overall darker colour. In the western United States, it can be confused with Ampulloclitocybe avellaneialba, which is larger and has a darker cap and white gills.

Distribution and habitat It is widespread and abundant across Northern Europe and the British Isles, and is becoming more common. In North America, it is common under pine plantations in the east, and less common in the Pacific Northwest. It is found in conifer and deciduous forests, particularly under beech, the fruit bodies appearing from August to November in northern Europe. 
Edibility It has been described as edible, though unpalatable; eating it has been likened to eating wet cotton. Others categorize it as inedible. It contains toxins which make it dangerous when consumed with alcohol. Club foot mushrooms collected from Stinchfield Woods, northwest of Dexter, Michigan, in 1974, 1976 and 1977 caused an Antabuse-like syndrome. Alcohol was consumed around seven hours after the mushrooms were eaten in each case, resulting in flushing of the face, throbbing of the head and neck, and puffy hands around five to ten minutes afterwards. The symptoms were mild with vodka and gin, but worse with whiskey, which resulted in a pounding headache that lasted several hours. Rechallenging with alcohol the next day brought on the symptoms again, but not thereafter. The phenomenon has been reported at least one other time in the United States. Oddly, collections of club foots made before 1974 did not produce any symptoms. The phenomenon has also been recorded in Japan. Though the symptoms are similar to those experienced with Coprinopsis atramentaria, the aldehyde dehydrogenase inhibitor in this species is not known. Experiments with club foot extract found that it inhibited the enzyme acetaldehyde dehydrogenase in mouse livers.

References External links Acetaldehyde dehydrogenase inhibitors Fungi described in 1801 Hygrophoraceae Fungi of Europe Taxa named by Christiaan Hendrik Persoon Fungus species
Ampulloclitocybe clavipes
Biology
1,091