Dataset schema (column: type, observed value or length range):
id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
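The schema above maps naturally onto a typed record. The following is a minimal sketch of reading and filtering such rows, assuming they are exported as one JSON object per line with keys matching the column names; the file name rows.jsonl and the Row class are illustrative, not part of the dataset itself.

```python
# Minimal sketch: represent and query rows that follow the schema above.
# Assumptions: a local JSONL export named "rows.jsonl" whose keys match the
# column names exactly; adjust to however the dataset is actually distributed.
import json
from dataclasses import dataclass

@dataclass
class Row:
    id: int                   # int64 article identifier
    url: str                  # source URL (e.g. a Wikipedia page)
    text: str                 # full article text
    source: str               # short source label, e.g. the article title
    categories: list          # 1-6 top-level category labels
    token_count: int          # precomputed token count for `text`
    subcategories: list       # 0-30 finer-grained labels

def load_rows(path: str) -> list:
    """Read one JSON object per line and map it onto the schema."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            rows.append(Row(**json.loads(line)))
    return rows

if __name__ == "__main__":
    rows = load_rows("rows.jsonl")  # hypothetical local export of the dataset
    # Example query: chemistry articles with more than 200 tokens.
    chem = [r for r in rows if "Chemistry" in r.categories and r.token_count > 200]
    for r in chem:
        print(r.id, r.source, r.token_count)
```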
14,766,164
https://en.wikipedia.org/wiki/EPHA8
Ephrin type-A receptor 8 is a protein that in humans is encoded by the EPHA8 gene. Function This gene encodes a member of the ephrin receptor subfamily of the protein-tyrosine kinase family. EPH and EPH-related receptors have been implicated in mediating developmental events, particularly in the nervous system. Receptors in the EPH subfamily typically have a single kinase domain and an extracellular region containing a Cys-rich domain and 2 fibronectin type III repeats. The ephrin receptors are divided into 2 groups based on the similarity of their extracellular domain sequences and their affinities for binding ephrin-A and ephrin-B ligands. The protein encoded by this gene functions as a receptor for ephrin A2, A3 and A5 and plays a role in short-range contact-mediated axonal guidance during development of the mammalian nervous system. Interactions EPHA8 has been shown to interact with FYN. References Further reading Tyrosine kinase receptors
EPHA8
[ "Chemistry" ]
211
[ "Tyrosine kinase receptors", "Signal transduction" ]
14,766,178
https://en.wikipedia.org/wiki/ERV3
HERV-R_7q21.2 provirus ancestral envelope (Env) polyprotein is a protein that in humans is encoded by the ERV3 gene. Function The human genome includes many retroelements, including the human endogenous retroviruses (HERVs), which compose about 7-8% of the human genome. ERV3, one of the most studied HERVs, is thought to have integrated 30 to 40 million years ago and is present in higher primates with the exception of gorillas. Taken together, the observation of genome conservation, the detection of transcript expression, and the presence of conserved ORFs constitute circumstantial evidence for a functional role. Similar endogenous retroviral Env genes like syncytin-1 have important roles in placental formation and embryonic development by enabling cell-cell fusion. Despite its origin as an Env gene, ERV3 has a premature stop codon that precludes any cell-cell fusion functionality. However, it does have an immunosuppressive function that helps the fetus evade a damaging maternal immune response, which may explain its high expression in the placenta. There is speculation that ERV3 originally did have cell-cell fusion functionality in the placenta, but that it was eventually supplanted by other Env genes like syncytin, leading to a loss of this function. Another functional role is suggested by the observation that downregulation of ERV3 is reported in choriocarcinoma. References Further reading External links Transcription factors
ERV3
[ "Chemistry", "Biology" ]
322
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,766,240
https://en.wikipedia.org/wiki/FABP5
Fatty acid-binding protein, epidermal is a protein that in humans is encoded by the FABP5 gene. Function This gene encodes the fatty acid binding protein found in epidermal cells, and was first identified as being upregulated in psoriasis tissue. Fatty acid binding proteins are a family of small, highly conserved, cytoplasmic proteins that bind long-chain fatty acids and other hydrophobic ligands. It is thought that the roles of FABPs include fatty acid uptake, transport, and metabolism. The phytocannabinoids (THC and CBD) inhibit uptake of the endocannabinoid anandamide (AEA) by targeting FABP5, and competition for FABPs may in part or wholly explain the increased circulating levels of endocannabinoids reported after consumption of cannabinoids. Results show that cannabinoids inhibit keratinocyte proliferation, and therefore support a potential role for cannabinoids in the treatment of psoriasis. Interactions FABP5 has been shown to interact with S100A7. References Further reading
FABP5
[ "Chemistry" ]
231
[ "Biochemistry stubs", "Protein stubs" ]
1,619,933
https://en.wikipedia.org/wiki/Woodblock%20printing
Woodblock printing or block printing is a technique for printing text, images or patterns used widely throughout East Asia and originating in China in antiquity as a method of printing on textiles and later on paper. Each page or image is created by carving a wooden block to leave only some areas and lines at the original level; it is these that are inked and show in the print, in a relief printing process. Carving the blocks is skilled and laborious work, but a large number of impressions can then be printed. As a method of printing on cloth, the earliest surviving examples from China date to before 220 AD. Woodblock printing existed in Tang China by the 7th century AD and remained the most common East Asian method of printing books and other texts, as well as images, until the 19th century. Ukiyo-e is the best-known type of Japanese woodblock art print. Most European uses of the technique for printing images on paper are covered by the art term woodcut, except for the block books produced mainly in the 15th century. History China According to the Book of Southern Qi, in the 480s, a man named Gong Xuanyi (龔玄宜) styled himself Gong the Sage and "said that a supernatural being had given him a 'jade seal jade block writing,' which did not require a brush: one blew on the paper and characters formed." He then used his powers to mystify a local governor. Eventually he was dealt with by the governor's successor, who presumably executed Gong. Timothy Hugh Barrett postulates that Gong's magical jade block was actually a printing device, and Gong was one of the first, if not the first, printers. The semi-mythical record of him therefore describes his usage of the printing process to deliberately bewilder onlookers and create an image of mysticism around himself. However, woodblock print flower patterns applied to silk in three colours have been found dating from the Han dynasty (before AD 220). Inscribed seals made of metal or stone, especially jade, and inscribed stone tablets probably provided inspiration for the invention of printing. Copies of classical texts on tablets were erected in a public place in Luoyang during the Han dynasty for scholars and students to copy. The Suishu jingjizhi, the bibliography of the official history of the Sui dynasty, includes several ink-squeeze rubbings, believed to have led to the early duplication of texts that inspired printing. A stone inscription cut in reverse dating from the first half of the 6th century implies that it may have been a large printing block. The rise of printing was greatly influenced by Mahayana Buddhism. According to Mahayana beliefs, religious texts hold intrinsic value for carrying the Buddha's word and act as talismanic objects containing sacred power capable of warding off evil spirits. By copying and preserving these texts, Buddhists could accrue personal merit. As a consequence, the idea of printing and its advantages in replicating texts quickly became apparent to Buddhists, who, by the 7th century, were using woodblocks to create apotropaic documents. These Buddhist texts were printed specifically as ritual items and were not widely circulated or meant for public consumption. Instead they were buried in consecrated ground. The earliest extant example of this type of printed matter is a fragment of a dhāraṇī (Buddhist spell) miniature scroll written in Sanskrit unearthed in a tomb in Xi'an. 
It is called the Great spell of unsullied pure light (Wugou jingguang da tuoluoni jing 無垢淨光大陀羅尼經) and was printed using woodblock during the Tang dynasty, –670 AD. A similar piece, the Saddharma pundarika sutra, was also discovered and dated to 690 to 699. This coincides with the reign of Wu Zetian, under which the Longer Sukhāvatīvyūha Sūtra, which advocates the practice of printing apotropaic and merit-making texts and images, was translated by Chinese monks. The oldest extant evidence of woodblock prints created for the purpose of reading is portions of the Lotus Sutra discovered at Turpan in 1906. They have been dated to the reign of Wu Zetian using character form recognition. The oldest text containing a specific date of printing was discovered in the Mogao Caves of Dunhuang in 1907 by Aurel Stein. This copy of the Diamond Sutra is 14 feet long and contains a colophon at the inner end, which reads: "Reverently [caused to be] made for universal free distribution by Wang Jie on behalf of his two parents on the 13th of the 4th moon of the 9th year of Xiantong [i.e. 11 May, AD 868]". It is considered the world's oldest securely dated woodblock scroll. The Diamond Sutra was closely followed by the earliest extant printed almanac, the Qianfu sinian lishu (乾符四年曆書), dated to 877. Spread Evidence of woodblock printing appeared in Korea and Japan soon afterward. The Great Dharani Sutra was discovered at Bulguksa, South Korea in 1966 and dated between 704 and 751 in the era of Later Silla. The document is printed on a mulberry paper scroll. A dhāraṇī sutra was printed in Japan around AD 770. One million copies of the sutra, along with other prayers, were ordered to be produced by Empress Shōtoku. As each copy was then stored in a tiny wooden pagoda, the copies are together known as the Hyakumantō Darani (百万塔陀羅尼, "1,000,000 towers/pagodas Darani"). Woodblock printing spread across Eurasia by 1000 AD and could be found in the Byzantine Empire. However, printing onto cloth only became common in Europe by 1300. "In the 13th century the Chinese technique of blockprinting was transmitted to Europe", soon after paper became available in Europe. Song dynasty From 932 to 955 the Twelve Classics and an assortment of other texts were printed. During the Song dynasty, the Directorate of Education and other agencies used these block prints to disseminate their standardized versions of the Classics. Other disseminated works include the Histories, philosophical works, encyclopedias, collections, and books on medicine and the art of war. In 971 work began on the complete Tripiṭaka Buddhist Canon (Kaibao zangshu 開寶藏書) in Chengdu. It took 10 years to finish the 130,000 blocks needed to print the text. The finished product, the Sichuan edition of the Kaibao Canon, also known as the Kaibao Tripitaka, was printed in 983. Prior to the introduction of printing, the size of private collections in China had already seen an increase since the invention of paper. Fan Ping (215–84) had in his collection 7,000 rolls (juan), or a few hundred titles. Two centuries later, Zhang Mian owned 10,000 juan, Shen Yue (441–513) 20,000 juan, and Xiao Tong and his cousin Xiao Mai both had collections of 30,000 juan. Emperor Yuan of Liang (508–555) was said to have had a collection of 80,000 juan. The combined total of all known private book collectors prior to the Song dynasty numbers around 200, with the Tang alone accounting for 60 of them. 
Following the maturation of woodblock printing, official, commercial, and private publishing businesses emerged while the size and number of collections grew exponentially. The Song dynasty alone accounts for some 700 known private collections, more than triple the number of all the preceding centuries combined. Private libraries of 10–20,000 juan became commonplace while six individuals owned collections of over 30,000 juan. The earliest extant private Song library catalogue lists 1,937 titles in 24,501 juan. Zhou Mi's collection numbered 42,000 juan, Chen Zhensun's collection lists 3,096 titles in 51,180 juan, and Ye Mengde (1077–1148) as well as one other individual owned libraries of 6,000 titles in 100,000 juan, the majority of which were secular in nature. Texts contained material such as medicinal instruction or came in the form of a leishu (類書), a type of encyclopedic reference book used to help examination candidates. Imperial establishments such as the Three Institutes (the Zhaowen Institute, History Institute, and Jixian Institute) also followed suit. At the start of the dynasty the Three Institutes' holdings numbered 13,000 juan; by the year 1023, 39,142 juan; by 1068, 47,588 juan; and by 1127, 73,877 juan. The Three Institutes were one of several imperial libraries, with eight other major palace libraries, not including imperial academies. According to Weng Tongwen, by the 11th century, central government offices were saving tenfold by substituting earlier manuscripts with printed versions. The impact of woodblock printing on Song society is illustrated in the following exchange between Emperor Zhenzong and Xing Bing in the year 1005: In 1076, the 39-year-old Su Shi remarked upon the unforeseen effect an abundance of books had on examination candidates: Woodblock printing also changed the shape and structure of books. Scrolls were gradually replaced by concertina binding (經摺裝) from the Tang period onward. The advantage was that it was now possible to flip to a reference without unfolding the entire document. The next development, known as whirlwind binding (xuanfeng zhuang 旋風裝), was to secure the first and last leaves to a single large sheet, so that the book could be opened like an accordion. Around the year 1000, butterfly binding was developed. Woodblock prints allowed two mirror images to be easily replicated on a single sheet. Thus two pages were printed on a sheet, which was then folded inwards. The sheets were then pasted together at the fold to make a codex with alternate openings of printed and blank pairs of pages. In the 14th century the folding was reversed outwards to give continuous printed pages, each backed by a blank hidden page. Later, sewn bindings were preferred over pasted bindings. Only relatively small volumes (juan 卷) were bound up, and several of these would be enclosed in a cover called a tao, with wooden boards at front and back, and loops and pegs to close up the book when not in use. For example, one complete Tripitaka had over 6,400 juan in 595 tao. Ming dynasty Despite the productive effect of woodblock printing, historian Endymion Wilkinson notes that it never supplanted handwritten manuscripts. Indeed, manuscripts remained dominant until the very end of Imperial China: Not only did manuscripts remain competitive with imprints, they were even preferred by elite scholars and collectors. The age of printing gave the act of copying by hand a new dimension of cultural reverence. 
Those who considered themselves real scholars and true connoisseurs of the book did not consider imprints to be real books. Under the elitist attitudes of the time, "printed books were for those who did not truly care about books". However, copyists and manuscripts remained competitive with printed editions only by dramatically reducing their price. According to the Ming dynasty author Hu Yinglin, "if no printed edition were available on the market, the hand-copied manuscript of a book would cost ten times as much as the printed work", and also, "once a printed edition appeared, the transcribed copy could no longer be sold and would be discarded". The result is that despite the mutual co-existence of hand-copied manuscripts and printed texts, the cost of the book had declined by about 90 percent by the end of the 16th century. As a result, literacy increased. In 1488, the Korean Choe Bu observed during his trip to China that "even village children, ferrymen, and sailors" could read, although this applied mainly to the south, while northern China remained largely illiterate. Three-five colored prints In modern times, Chinese printing continued the tradition begun in medieval times. Black-and-white woodcuts were generally replaced by colored ones, achieved by printing successive runs with different inks. Between the end of the 16th and the beginning of the 17th century, three- and five-color prints appeared. The oldest surviving print is the Ten Bamboo Studio Manual of Calligraphy and Paintings (1644) by Hu Zhengyan, of which there are several copies in various museums and collections. It is still commonly reproduced in China today and its images are very popular: it includes landscapes, flowers, animals, reproductions of jades, bronzes, porcelain and other objects. Another outstanding series is the collection of twenty-nine Kaempfer Prints (British Museum, London), brought in 1693 by a German physician from China to Europe, which includes flowers, fruits, birds, insects and ornamental motifs reminiscent of the style of Kangxi ceramics. Equally famous is the compilation Manual of the Mustard Seed Garden, published in two parts between 1679 and 1701. It was initiated by the scholar and landscape painter Wáng Gài and expanded and prefaced by the art critic Li Yu and the landscape painter Wáng Niè. It was noted for the quality of its polychrome and drawings, which influenced Qing painting. Goryeo (Korea) In 989 Seongjong of Goryeo sent the monk Yeoga to request from the Song a copy of the complete Buddhist canon. The request was granted in 991 when Seongjong's official Han Eongong visited the Song court. In 1011, Hyeonjong of Goryeo ordered the carving of their own set of the Buddhist canon, which would come to be known as the Goryeo Daejanggyeong. The project was suspended in 1031 after Hyeonjong's death, but work resumed again in 1046 after Munjong's accession to the throne. The completed work, amounting to some 6,000 volumes, was finished in 1087. Unfortunately, the original set of woodblocks was destroyed in a conflagration during the Mongol invasion of 1232. King Gojong ordered another set to be created and work began in 1237, this time only taking 12 years to complete. In 1248 the complete Goryeo Daejanggyeong numbered 81,258 printing blocks, 52,330,152 characters, 1,496 titles, and 6,568 volumes. 
Due to the stringent editing process that went into the Goryeo Daejanggyeong and its surprisingly enduring nature, having survived completely intact over 760 years, it is considered the most accurate of Buddhist canons written in Classical Chinese as well as a standard edition for East Asian Buddhist scholarship. Japan In the Kamakura period from the 12th century to the 13th century, many books were printed and published by woodblock printing at Buddhist temples in Kyoto and Kamakura. The mass production of woodblock prints in the Edo period was due to the high literacy rate of Japanese people. The literacy rate of the Japanese by 1800 was almost 100% for the samurai class and 50% to 60% for the chōnin and nōmin (farmer) class due to the spread of private schools known as terakoya. There were more than 600 rental bookstores in Edo, and these stores lent out woodblock-printed illustrated books of various genres. The content of these books varied widely, including travel guides, gardening books, cookbooks, kibyōshi (satirical novels), sharebon (books on urban culture), kokkeibon (comical books), ninjōbon (romance novels), yomihon, kusazōshi, art books, play scripts for the kabuki and jōruri (puppet) theatre, etc. The best-selling books of this period were Kōshoku Ichidai Otoko (Life of an Amorous Man) by Ihara Saikaku, Nansō Satomi Hakkenden by Takizawa Bakin, and Tōkaidōchū Hizakurige by Jippensha Ikku, and these books were reprinted many times. From the 17th century to the 19th century, ukiyo-e depicting secular subjects became very popular among the common people and were mass-produced. Ukiyo-e depicts kabuki actors, sumo wrestlers, beautiful women, landscapes of sightseeing spots, historical tales, and so on; Hokusai and Hiroshige are the most famous artists. In the 18th century, Suzuki Harunobu established the technique of multicolor woodblock printing called nishiki-e and greatly developed Japanese woodblock printing culture, including ukiyo-e. Ukiyo-e influenced European Japonisme and Impressionism. In the early 20th century, shin-hanga, which fused the tradition of ukiyo-e with the techniques of Western painting, became popular, and the works of Hasui Kawase and Hiroshi Yoshida gained international popularity. Asia and North Africa A few specimens of woodblock printing, possibly called tarsh in Arabic, have been excavated from a 10th-century context in Arabic Egypt. They were mostly used for prayers and amulets. The technique may have spread from China or been an independent invention, but had very little impact and virtually disappeared at the end of the 14th century. In India the main importance of the technique has always been as a method of printing textiles, which has been a large industry since at least the 10th century. Nowadays wooden block printing is commonly used for creating beautiful textiles, such as block-print sarees, kurtas, curtains, kurtis, dresses, shirts, and cotton sarees. Europe Block books, where both text and images are cut on a single block for a whole page, appeared in Europe in the mid-15th century. As they were almost always undated, and without statement of printer or place of printing, determining their dates of printing has been an extremely difficult task. Allan H. Stevenson, by comparing the watermarks in the paper used in block books with watermarks in dated documents, concluded that the "heyday" of block books was the 1460s, but that at least one dated from about 1451. 
Block books printed in the 1470s were often of cheaper quality, as a cheaper alternative to books printed by printing press. Block books continued to be printed sporadically up through the end of the 15th century. The method was also used extensively for printing playing cards. Impact of movable type China Ceramic and wooden movable type were invented in the Northern Song dynasty around the year 1041 by the commoner Bi Sheng. Metal movable type also appeared in the Southern Song dynasty. The earliest extant book printed using movable type is the Auspicious Tantra of All-Reaching Union, printed in Western Xia c. 1139–1193. Metal movable type was used in the Song, Jin, and Yuan dynasties for printing banknotes. The invention of movable type did not have an immediate effect on woodblock printing and it never supplanted it in East Asia. Only during the Ming and Qing dynasties did wooden and metal movable types see any considerable use, but the preferred method remained woodblock. Usage of movable type in China never exceeded 10 percent of all printed materials while 90 percent of printed books used the older woodblock technology. In one case an entire set of wooden type numbering 250,000 pieces was used for firewood. Woodblocks remained the dominant printing method in China until the introduction of lithography in the late 19th century. Traditionally it has been assumed that the prevalence of woodblock printing in East Asia as a result of Chinese characters led to the stagnation of printing culture and enterprise in that region. S. H. Steinberg describes woodblock printing in his Five Hundred Years of Printing as having "outlived their usefulness" and their printed material as "cheap tracts for the half-literate, [...] which anyway had to be very brief because of the laborious process of cutting the letters". John Man's The Gutenberg Revolution makes a similar case: "wood-blocks were even more demanding than manuscript pages to make, and they wore out and broke, and then you had to carve another one – a whole page at a time". Commentaries on printing in China from the 1990s on, which cite contemporary European observers with first-hand knowledge, complicate the traditional narrative. T. H. Barrett points out that only Europeans who had never seen Chinese woodblock printing in action tended to dismiss it, perhaps due to the almost instantaneous arrival of both xylography and movable type in Europe. The early Jesuit missionaries of late-16th-century China, for instance, had a similar distaste for wood-based printing for very different reasons. These Jesuits found that "the cheapness and omnipresence of printing in China made the prevailing wood-based technology extremely disturbing, even dangerous". Matteo Ricci made note of "the exceedingly large numbers of books in circulation here and the ridiculously low prices at which they are sold". Two hundred years later the Englishman John Barrow, by way of the Macartney mission to Qing China, also remarked with some amazement that the printing industry was "as free as in England, and the profession of printing open to everyone". The commercial success and profitability of woodblock printing was attested to by one British observer at the end of the nineteenth century, who noted that even before the arrival of western printing methods, the price of books and printed materials in China had already reached an astoundingly low price compared to what could be found in his home country. 
Of this, he said: Other modern scholars such as Endymion Wilkinson hold a more conservative and skeptical view. While Wilkinson does not deny "China's dominance in book production from the fourth to the fifteenth century," he also insists that arguments for the Chinese advantage "should not be extended either forwards or backwards in time." Decline of woodblock printing in China During the 16th and 17th centuries, printmaking enjoyed great popularity, especially in the illustration of books such as Buddhist texts, poems, novels, biographies, medical treatises, music, etc. The major center of production was initially in Kien-ngan (Fujian) and, from the 17th century, in Sin-ngan (Anhui) and Nanjing (Jiangsu). On the other hand, in the 18th century, the industry began to decline, with stereotyped images. This coincided with the arrival of European missionaries who introduced Western engraving techniques. In 1714–1715 the Jesuit Matteo Ripa edited a series of poems by Emperor Kangxi, which he illustrated with landscapes of the imperial summer residence at Jehol. During the reign of Emperor Qianlong the one hundred and four maps of the Chinese Empire made by Jesuit missionaries were printed, as well as illustrations of his military victories, which he commissioned in Paris from the engraver Charles-Nicolas Cochin (Conquests of the Emperor of China, 1767–1773). The emperor himself commissioned the Jesuits to instruct Chinese artisans in the intaglio technique, but they did not obtain good results. Already in the 19th century, the growing xenophobia against Europeans was progressively marginalizing the use of engraving in China. In the 20th century, the genre was revived by the writer Lou Siun, who founded a woodcut school in Shanghai in 1930. Influenced by contemporary Russian engraving, this school dealt especially with popular, agricultural and military subjects for propaganda purposes, as is evident in the work of P'an Jeng and Huang Yong-yu. Korea In 1234, cast metal movable type was used in Goryeo (Korea) to print the 50-volume Prescribed Texts for Rites of the Past and Present, compiled by Ch'oe Yun-ŭi, but no copies survived to the present. The oldest extant book printed with movable metal type is the Jikji of 1377. This form of metal movable type was described by the French scholar Henri-Jean Martin as "extremely similar to Gutenberg's". Movable type never replaced woodblock printing in Korea. Indeed, even the promulgation of Hangeul was done through woodblock prints. The general assumption is that movable type did not replace block printing in places that used Chinese characters due to the expense of producing more than 200,000 individual pieces of type. Even woodblock printing was not as cost-effective as simply paying a copyist to write out a book by hand if there was no intention of producing more than a few copies. Although Sejong the Great introduced Hangeul, an alphabetic system, in the 15th century, Hangeul only replaced Hanja in the 20th century. And unlike China, the movable type system was kept mainly within the confines of a highly stratified elite Korean society: Japan A Western-style movable type printing press was brought to Japan by the Tenshō embassy in 1590, and the first book was printed with it in Kazusa, Nagasaki, in 1591. However, Western printing presses were discontinued after the ban on Christianity in 1614. The movable type printing press seized from Korea by Toyotomi Hideyoshi's forces in 1593 was also in use at the same time as the printing press from Europe. 
An edition of the Confucian Analects was printed in 1598, using a Korean moveable type printing press, at the order of Emperor Go-Yōzei. Tokugawa Ieyasu established a printing school at Enko-ji in Kyoto and started publishing books from 1599 using a domestic wooden movable type printing press instead of metal type. Ieyasu supervised the production of 100,000 pieces of type, which were used to print many political and historical books. In 1605, books printed with a domestic copper movable type printing press began to be published, but copper type did not become mainstream after Ieyasu died in 1616. The great pioneers in applying the movable type printing press to the creation of artistic books, and in preceding mass production for general consumption, were Honami Kōetsu and Suminokura Soan. At their studio in Saga, Kyoto, the pair created a number of woodblock versions of the Japanese classics, both text and images, essentially converting emaki (handscrolls) to printed books, and reproducing them for wider consumption. These books, now known as Kōetsu Books, Suminokura Books, or Saga Books, are considered the first and finest printed reproductions of many of these classic tales; the Saga Book of the Tales of Ise (Ise monogatari), printed in 1608, is especially renowned. Saga Books were printed on expensive paper, and used various embellishments, being printed specifically for a small circle of literary connoisseurs. For aesthetic reasons, the typeface of the Saga Books, like that of traditional handwritten books, adopted a connected cursive style in which several characters are written in succession with smooth brush strokes. As a result, a single typeface was sometimes created by combining two to four semi-cursive and cursive kanji or hiragana characters. In one book, 2,100 characters were created, but 16% of them were used only once. Despite the appeal of moveable type, however, craftsmen soon decided that the semi-cursive and cursive script style of Japanese writings was better reproduced using woodblocks. By 1640 woodblocks were once again used for nearly all purposes. After the 1640s, movable type printing declined, and books were mass-produced by conventional woodblock printing during most of the Edo period. It was after the 1870s, during the Meiji period, when Japan opened the country to the West and began to modernize, that this technique was used again. Middle East In countries using Arabic scripts, works, especially the Qur'an, were printed from blocks or by lithography in the 19th century, as the links between the characters require compromises when movable type is used which were considered inappropriate for sacred texts. Europe Around the mid-1400s, block-books, woodcut books with both text and images, usually carved in the same block, emerged as a cheaper alternative to manuscripts and books printed with movable type. These were all short, heavily illustrated works, the bestsellers of the day, repeated in many different block-book versions: the Ars moriendi and the Biblia pauperum were the most common. There is still some controversy among scholars as to whether their introduction preceded or, the majority view, followed the introduction of movable type, with the range of estimated dates being between about 1440 and 1460. Technique Jia xie is a method for dyeing textiles (usually silk) using wood blocks invented in the 5th–6th centuries in China. An upper and a lower block are made, with carved out compartments opening to the back, fitted with plugs. 
The cloth, usually folded a number of times, is inserted and clamped between the two blocks. By unplugging the different compartments and filling them with dyes of different colours, a multi-coloured pattern can be printed over quite a large area of folded cloth. The method is not strictly printing, however, as the pattern is not caused by pressure against the block. Colour woodblock printing The earliest woodblock printing known is in colour—Chinese silk from the Han dynasty printed in three colours. Colour is very common in Asian woodblock printing on paper; in China the first known example is a Diamond sutra of 1341, printed in black and red at the Zifu Temple in modern-day Hubei province. The earliest dated book printed in more than two colours is the Chengshi moyuan, a book on ink-cakes printed in 1606, and the technique reached its height in books on art published in the first half of the 17th century. Notable examples are Hu Zhengyan's Treatise on the Paintings and Writings of the Ten Bamboo Studio of 1633, and the Mustard Seed Garden Painting Manual published in 1679 and 1701. See also Ajrak Banhua Old master print New Year picture Kalamkari Ghalamkar Bagh Print Bagru Print Conservation and restoration of woodblock prints References Works cited External links Centre for the History of the Book Excellent images and descriptions of examples, mostly Chinese, from the Schoyen Collection Fine example of a European block-book, Apocalypse, with hand-colouring Chinese book-binding methods, from the V&A Museum Chinese book-binding methods, from the International Dunhuang Project Chinese woodblock prints from SOAS University of London "Multiple Impressions: Contemporary Chinese Woodblock Prints" at the University of Michigan Museum of Art American Printing History Association—Numerous links to Online Resources and Other Organizations Block printing in India Prints & People: A Social History of Printed Pictures, an exhibition catalog from The Metropolitan Museum of Art (fully available online as PDF), which contains material on woodblock printing The History of Chinese Bookbinding: the case of Dunhuang findings Video: Block-printed wallpaper, a video demonstrating printing of multicolored wallpaper with a press, using blocks produced by William Morris China engraved block printing technique, UNESCO Intangible Cultural Heritage. 2009. Chinese inventions Book arts Book design Decorative arts History of printing Relief printing Intangible Cultural Heritage of Humanity Textile arts Textual scholarship
Woodblock printing
[ "Engineering" ]
6,420
[ "Book design", "Design" ]
1,619,958
https://en.wikipedia.org/wiki/Transportation%20Safety%20Board%20of%20Canada
The Transportation Safety Board of Canada (TSB), officially the Canadian Transportation Accident Investigation and Safety Board, is the agency of the Government of Canada responsible for advancing transportation safety in Canada. It is accountable to Parliament directly through the President of the King's Privy Council and the Minister of Intergovernmental and Northern Affairs and Internal Trade. The independent agency investigates accidents and makes safety recommendations in four modes of transportation: aviation, rail, marine and pipelines. Agency history Prior to 1990, Transport Canada's Aircraft Accident Investigation Branch (1960–1984) and its successor the Canadian Aviation Safety Board or CASB (1984–1990) were responsible for investigation of air incidents. Before 1990, investigations and actions were taken by Transport Canada and even after 1984 the findings from CASB were not binding for Transport Canada to respond to. The TSB was created under the Canadian Transportation Accident Investigation and Safety Board Act, which received royal assent in June 1989 and came into force March 29, 1990. It was formed in response to a number of high-profile accidents, following which the Government of Canada identified the need for an independent, multi-modal investigation agency. The headquarters are located in Place du Centre in Gatineau, Quebec. The provisions of the Canadian Transportation Accident Investigation and Safety Board Act were written to establish an independent relationship between the agency and the Government of Canada. This agency's first major test came with the crash of Swissair Flight 111 on September 2, 1998, the largest single aviation accident on Canadian territory since the 1985 crash of Arrow Air Flight 1285R. The TSB delivered its report on the accident on March 27, 2003, some 4½ years after the accident and at a cost of $57 million, making it the most complex and costly accident investigation in Canadian history to that date. From 2005 to 2010, the TSB concluded a number of investigations into high-profile accidents, including: the crash of Air France Flight 358; the Cheakamus River derailment; the sinking of Queen of the North; the loss overboard of a crewmember of Picton Castle; the Burnaby pipeline rupture; the crash of Cougar Helicopters Flight 91; the sinking of Concordia. To increase the uptake of its recommendations and address accident patterns, the TSB launched its Watchlist in 2010, which points to nine critical safety issues troubling Canada's transportation system. On 3 December 2013, in the wake of the Lac-Mégantic rail disaster the previous July, it was reported that the number of runaway trains was triple the number documented by the TSB. In August 2014, the TSB released the report on its investigation into the July 2013 Lac-Mégantic derailment. In a news conference, then-TSB chair Wendy Tadros described how eighteen factors played a role in the disaster, including a "weak safety culture" at the now-defunct Montreal, Maine & Atlantic Railways with "a lack of standards, poor training and easily punctured tanks." The TSB also blamed Transport Canada, the regulator, for not doing thorough safety audits often enough on railways "to know how those companies were really managing, or not managing, risk." The TSB report called for "physical restraints, such as wheel chocks, for parked trains." 
Prior to the accident, the TSB had called for "new and more robust wagons for flammable liquids" but as of August 2014, little progress had been made in implementing this. On February 4, 2019, the TSB deployed to the derailment of Canadian Pacific Railway (CP) train 301-349. Ninety-nine cars and two locomotives derailed at Mile 130.6 of the CP Laggan Subdivision, near Field, British Columbia (BC) while proceeding westward to Vancouver, BC. The three train crewmembers – a locomotive engineer, a conductor, and a conductor trainee – died as a result. During the course of its investigation into the derailment, the organization issued two safety advisories to Transport Canada on April 11, 2019. The first called attention to the need for effective safety procedures to be applied to all trains stopped in emergency on both "heavy grades" and "mountain grades", and the second highlighted the need to review the efficacy of the inspection and maintenance procedures for grain hopper cars used in CP's unit grain train operations (and for other railways as applicable), and ensure that these cars can be operated safely at all times. In January 2020, the Senior Investigator was reassigned in order to protect the integrity and objectivity of the investigation after voicing an opinion implying civil or criminal liability. The TSB labelled the comments made to The Fifth Estate journalists as "completely inappropriate", as the mandate of the TSB is to make findings as to causes and contributing factors of a transportation occurrence, but not to assign fault or determine civil or criminal liability. The CBC documentary pointed out what seemed to be a problem: the accident was investigated by the private police service of CP Rail. A CPPS officer also resigned over these circumstances. As of June 2020, the investigation is ongoing. Mandate and direction The Transportation Safety Board's mandate is to conduct independent investigations, including public inquiries when necessary, into selected transportation occurrences in order to make findings as to their causes and contributing factors; identify safety deficiencies, as evidenced by transportation occurrences; make recommendations designed to eliminate or reduce any such safety deficiencies; and report publicly on its investigations and on the related findings. The TSB may assist other transportation safety boards in their investigations. This may happen when: an incident or accident occurs involving a Canadian-registered aircraft in commercial or air transport use; an incident or accident occurs involving a Canadian-built aircraft (or an aircraft with Canadian-built engines, propellers, or other vital components) in commercial or air transport use; a country without the technical ability to conduct a full investigation asks for the TSB's assistance (especially in the field of reading and analyzing the content of flight recorders). Provincial and territorial governments may call upon the TSB to investigate occurrences. However, it is up to the TSB whether or not to proceed with an investigation. Public reports are published following class one, class two, class three and class four investigations. Recommendations made by the TSB are not legally binding upon the Government of Canada, nor any of its ministers or departments. However, when a recommendation is made to a federal department, a formal response must be presented to the TSB within 90 days. The TSB reports to the Parliament of Canada through the President of the King's Privy Council for Canada. 
Board membership As of August 2024, the Board was composed of the following four members: Chair Yoan Marier Ken Potter Paul Dittmann Leo Donatti Facilities The TSB Engineering Laboratory, which has the facilities for investigating transport accidents and incidents, is in Ottawa, adjacent to Ottawa International Airport. List of chairs John W. Stants 1990–1996 Benoît Bouchard 1996–2001 Charles H. Simpson 2001–2002 (acting) Camille Thériault 2002–2004 Charles H. Simpson 2004–2005 (acting) Wendy A. Tadros 2005–2006 (acting) Wendy A. Tadros 2006–2014 Kathleen Fox 2014–2024 Yoan Marier 2024–present See also Aviation safety References External links Rail accident investigators Organizations investigating aviation accidents and incidents Aviation authorities Transport safety organizations Federal departments and agencies of Canada Aviation in Canada 1990 establishments in Quebec History of transport in Canada Railway safety Organizations based in Gatineau Transport organizations based in Canada Canadian transport law
Transportation Safety Board of Canada
[ "Technology" ]
1,531
[ "Railway accidents and incidents", "Rail accident investigators" ]
1,619,962
https://en.wikipedia.org/wiki/The%20Dinner%20Party
The Dinner Party is an installation artwork by American feminist artist Judy Chicago. There are 39 elaborate place settings on a triangular table for 39 mythical and historical famous women. Sacajawea, Sojourner Truth, Eleanor of Aquitaine, Empress Theodora of Byzantium, Virginia Woolf, Susan B. Anthony, and Georgia O'Keeffe are among the symbolic guests. Each place setting includes a hand-painted china plate, ceramic cutlery and chalice, and a napkin with an embroidered gold edge. Each plate, except the ones corresponding to Sojourner Truth and Ethel Smyth, depicts a brightly colored, elaborately styled vulvar form. The settings rest on intricately embroidered runners, executed in a variety of needlework styles and techniques. The table stands on The Heritage Floor, made up of more than 2,000 white luster-glazed triangular tiles, each inscribed in gold scripts with the name of one of 998 women and one man who have made a mark on history. (The man, Kresilas, was included by mistake, as he was thought to have been a woman called Cresilla.) The Dinner Party was produced from 1974 to 1979 as a collaboration and first exhibited in 1979. Despite art world resistance, it toured to 16 venues in six countries on three continents to a viewing audience of 15 million. It was retired to storage from 1988 until 1996, as it was beginning to suffer from constant traveling. In 2007, it became a permanent exhibit in the Elizabeth A. Sackler Center for Feminist Art at the Brooklyn Museum, New York. About the work The Dinner Party was created by artist Judy Chicago, with the assistance of numerous volunteers, with the goal to "end the ongoing cycle of omission in which women were written out of the historical record." According to the artist, “The Dinner Party suggests that women have the capacity to be prime symbol-makers, to remake the world in our own image and likeness” (SNYDER, 1981, p. 31). The table is triangular and measures 48 feet (14.63 m) on each side. There are 13 place settings on each of the table's sides, making 39 in all. Wing I honors women from Prehistory to the Roman Empire, Wing II honors women from the beginnings of Christianity to the Reformation and Wing III from the American Revolution to feminism. Each place setting features a table runner embroidered with the woman's name and images or symbols relating to her accomplishments, with a napkin, utensils, a glass or goblet, and a plate. Many of the plates feature a butterfly- or flower-like sculpture representing a vulva. A cooperative effort of female and male artisans, The Dinner Party celebrates traditional female accomplishments such as textile arts (weaving, embroidery, sewing) and china painting, which have been framed as craft or domestic art, as opposed to the more culturally valued, male-dominated fine arts. While the piece is composed of typical craftwork such as needlepoint and china painting and normally considered low art, "Chicago made it clear that she wants The Dinner Party to be viewed as high art, that she still subscribes to this structure of value: 'I'm not willing to say a painting and a pot are the same thing,' she has stated. 'It has to do with intent. I want to make art.' The white floor of triangular porcelain tiles, called the Heritage Floor, is inscribed with the names of a further 998 notable women (and one man, Kresilas, an ancient Greek sculptor, mistakenly included as he was thought to have been a woman called Cresilla), each associated with one of the place settings. In 2002, the Elizabeth A. 
Sackler Foundation purchased and donated The Dinner Party to the Brooklyn Museum, where it lives in the Elizabeth A. Sackler Center for Feminist Art, which opened in March 2007. In 2018, Chicago created a limited edition set of functional plates based on the Dinner Party designs. The designs that were reproduced were Elizabeth I, Primordial Goddess, Amazon, and Sappho. Design details The Dinner Party took six years and $250,000 to complete, not including volunteer labor. It began modestly as Twenty-Five Women Who Were Eaten Alive, a way in which Chicago could use her "butterfly-vagina" imagery and interest in china painting in a high-art setting. She soon expanded it to include 39 women arranged in three groups of 13. The triangular shape has long been a feminine symbol. The table is an equilateral triangle, to represent equality. The number 13 represents the number of people who were present at the Last Supper, an important comparison for Chicago, as the only people there were men. She developed the work on her own for the first three years before bringing in others. Over the next three years, over 400 people contributed to the work, most of them volunteers. About 125 were called "members of the project", suggesting long-term efforts, and a small group was closely involved with the project for the final three years, including ceramicists, needle-workers, and researchers. The project was organized according to what has been called "benevolent hierarchy" and "non-hierarchical leadership", as Chicago designed most aspects of the work and had the final control over decisions made. The 39 plates start flat and begin to emerge in higher relief toward the end of the chronology, meant to represent modern woman's increasing independence and equality. The work also uses supplementary written information such as banners, timelines, and a three-book exhibition publication to provide background information on each woman and the process of making the work. Women represented in the place settings The first wing of the triangular table has place settings for female figures from the goddesses of prehistory through to Hypatia at the time of the Roman Empire. This section covers the emergence and decline of the Classical world. The second wing begins with Marcella and covers the rise of Christianity. It concludes with Anna van Schurman in the seventeenth century at the time of the Restoration. The third wing represents the Age of Revolution. It begins with Anne Hutchinson and moves through the twentieth century to the final places paying tribute to Virginia Woolf and Georgia O'Keeffe. The 39 women with places at the table are: Wing I: From Prehistory to the Roman Empire 1. Primordial Goddess 2. Fertile Goddess 3. Ishtar 4. Kali 5. Snake Goddess 6. Sophia 7. Amazon 8. Hatshepsut 9. Judith 10. Sappho 11. Aspasia 12. Boadicea 13. Hypatia Wing II: From the Beginnings of Christianity to the Reformation 14. Marcella 15. Saint Bridget 16. Theodora 17. Hrosvitha 18. Trota of Salerno 19. Eleanor of Aquitaine 20. Hildegarde of Bingen 21. Petronilla de Meath 22. Christine de Pisan 23. Isabella d'Este 24. Elizabeth I 25. Artemisia Gentileschi 26. Anna van Schurman Wing III: From the American to the Women's Revolution 27. Anne Hutchinson 28. Sacajawea 29. Caroline Herschel 30. Mary Wollstonecraft 31. Sojourner Truth 32. Susan B. Anthony 33. Elizabeth Blackwell 34. Emily Dickinson 35. Ethel Smyth 36. Margaret Sanger 37. Natalie Barney 38. Virginia Woolf 39. 
Georgia O'Keeffe Women represented in the Heritage Floor The Heritage Floor, which sits underneath the table, features the names of 998 women (and one man, Kresilas, mistakenly included as he was thought to have been a woman called Cresilla) inscribed on white handmade porcelain floor tilings. The tilings cover the full extent of the triangular table area, from the footings at each place setting, continue under the tables themselves and fill the full enclosed area within the three tables. There are 2,304 tiles, with names spread across more than one tile. The names are written in the Palmer cursive script, a twentieth-century American form. Chicago states that the criteria for a woman's name being included in the floor were one or more of the following: she had made a worthwhile contribution to society; she had tried to improve the lot of other women; her life and work had illuminated significant aspects of women's history; or she had provided a role model for a more egalitarian future. Accompanying the installation are a series of wall panels that explain the role of each woman on the floor and associate her with one of the place settings. Response In a 1981 interview, Chicago said that the backlash of threats and hateful castigation in reaction to the work brought on the only period of suicide risk she had ever experienced in her life, characterizing herself as "like a wounded animal". She said that she sought refuge from public attention by moving to a small rural community and that friends and acquaintances took on administrative support roles for her, such as opening her mail, while she threw herself into working on Embroidering Our Heritage, the 1980 book documenting the project. Immediate critical response (1980–1981) The Dinner Party prompted many varied opinions. Feminist critic Lucy Lippard stated, "My own initial experience was strongly emotional... The longer I spent with the piece, the more I became addicted to its intricate detail and hidden meanings", and defended the work as an excellent example of the feminist effort. These reactions are echoed by other critics, and the work was glorified by many. Just as adamant, however, were the immediate criticisms of the work. Hilton Kramer, for example, argued, "The Dinner Party reiterates its theme with an insistence and vulgarity more appropriate, perhaps, to an advertising campaign than to a work of art". He called the work not only a kitsch object but also "crass and solemn and singleminded", "very bad art,... failed art,... art so mired in the pieties of a cause that it quite fails to acquire any independent artistic life of its own". Maureen Mullarkey also criticized the work, calling it preachy and untrue to the women it claims to represent. She especially disagreed with the sentiment that she labels "turn 'em upside down and they all look alike", an essentializing of all women that does not respect the feminist cause. Mullarkey also called the hierarchical aspect of the work into question, claiming that Chicago took advantage of her female volunteers. Mullarkey focused on several particular plates in her critique of the work, specifically Emily Dickinson, Virginia Woolf, and Georgia O'Keeffe, using these women as examples of why Chicago's work was disrespectful to the women it depicts. She states that Dickinson's "multi-tiered pink lace crotch" was the opposite of the woman it was meant to symbolize because of Dickinson's extreme privacy. 
Woolf's inclusion ignores her frustration at the public's curiosity about the sex of writers, and O'Keeffe had similar thoughts, denying that her work had any sexed or sexual meaning. The Dinner Party was satirized by artist Maria Manhattan, whose counter-exhibit The Box Lunch at a SoHo gallery was billed as "a major art event honoring 39 women of dubious distinction", and ran in November and December 1980. In response to The Dinner Party being a collaborative work, Amelia Jones makes note that "Chicago never made exorbitant claims for the 'collaborative' or nonhierarchical nature of the project. She has insisted that it was never conceived or presented as a 'collaborative' project as this notion is generally understood ... The Dinner Party project, she insisted throughout, was cooperative, not collaborative, in the sense that it involved a clear hierarchy but cooperative effort to ensure its successful completion." New York Times art reviewer Roberta Smith declares that not all of the details are equal. She believes that "the runners tend to be livelier and more varied than the plates. In addition, the runners grow strong as the work progresses, while the plates become weaker, more monotonous and more overdone, which means the middle two-thirds of the piece is more successful." With the runners becoming more detailed as the work progresses, Smith notes that the backs of the runners are difficult to see and they "may be the best and boldest parts of all." Similarly, Smith stated that "its historical import and social significance may be greater than its aesthetic value". Regarding the place settings, Janet Koplos believes that the plates are meant to serve as canvases, and the goblets offer vertical punctuation. She feels, however, that the "standardized flatware is historically incorrect early on and culturally skewed. The settings would be stronger as plates and runners alone." Race and identity In a 1984 article, Hortense J. Spillers critiqued Judy Chicago and The Dinner Party, asserting that, as a white woman, Chicago recreates the erasure of the black feminine sexual self. Spillers calls to her defense the place setting of Sojourner Truth, the only black woman. After thorough review, it can be seen that all of the place settings depict uniquely designed vaginas, except for Sojourner Truth. The place setting of Sojourner Truth is depicted by three faces, rather than a vagina. Spillers writes, "The excision of the female genitalia here is a symbolic castration. By effacing the genitals, Chicago not only abrogates the disturbing sexuality of her subject, but also hopes to suggest that her sexual being did not exist to be denied in the first place..." Much like Spillers's critique, Alice Walker published her critical essay in Ms. magazine noting "Chicago's ignorance of women of color in history (specifically black women painters)", focusing in particular on The Dinner Party's representation of black female subjectivity in Sojourner Truth's plate. Walker states, "It occurred to me that perhaps white women feminists, no less than white women generally, can not imagine black women have vaginas. Or if they can, where imagination leads them is too far to go." Esther Allen further criticizes Chicago in her article "Returning the Gaze, with a Vengeance". Allen claims that The Dinner Party excludes women from Spain, Portugal, or any of these empires' former colonies. 
This means that several very prominent women of Western history were excluded, such as Frida Kahlo, Teresa of Ávila, Gabriela Mistral, and more. Chicago herself responded to these criticisms, claiming that all of these women are included on the "Heritage Floor" and that focusing solely on who is at the table is "to over-simplify the art and ignore the criteria my studio team and I established and the limits we were working under". Further, Chicago states that, in the mid-1970s, there was little or no knowledge about any of these women. Larger retrospective response Critics such as Mullarkey have returned to The Dinner Party in later years and stated that their opinions have not changed. Many later responses to the work, however, have been more moderate or accepting, even if only by giving the work value based on its continued importance. Amelia Jones, for example, places the work in the context of both art history and the evolution of feminist ideas to explain critical responses of the work. She discusses Hilton Kramer's objection to the piece as an extension of Modernist ideas about art, stating, "the piece blatantly subverts modernist value systems, which privilege the 'pure' aesthetic object over the debased sentimentality of the domestic and popular arts" . Jones also addresses some critics' argument that The Dinner Party is not high art because of its huge popularity and public appeal. Where Kramer saw the work's popularity as a sign that it was of a lesser quality, Lippard and Chicago herself thought that its capability of speaking to a larger audience should be considered a positive attribute. The "butterfly vagina" imagery continues to be both highly criticized and esteemed. Many conservatives criticized the work for reasons summed up by Congressman Robert K. Dornan in his statement that it was "ceramic 3-D pornography", but some feminists also found the imagery problematic because of its essentializing, passive nature. However, the work fits into the feminist movement of the 1970s, which glorified and focused on the female body. Other feminists have disagreed with the main idea of this work because it shows a universal female experience, which many argue does not exist. For example, lesbians and women of ethnicities other than white and European are not well represented in the work. Jones presents the argument regarding the collaborative nature of the project. Many critics attacked Chicago for claiming that the work was a collaboration when instead she was in control of the work. Chicago, however, had never claimed that the work would be this kind of ideal collaboration and always took full responsibility for the piece. Artist Cornelia Parker nominated it as a work she would like to see "binned", saying, "Too many vaginas for my liking. I find it all about Judy Chicago's ego rather than the poor women she's supposed to be elevating – we're all reduced to vaginas, which is a bit depressing. It's almost like the biggest piece of victim art you've ever seen. And it takes up so much space! I quite like the idea of trying to fit it in some tiny bin – not a very feminist gesture but I don't think the piece is either." Controversy at the University of the District of Columbia In 1990, The Dinner Party was considered for permanent housing at the University of the District of Columbia. It was part of a plan to bring in revenue for the school, as it had proved to be very successful. 
The work was to be donated as a gift to the school, and it was to join an expanding collection of African-American art, including a large group of paintings by Washington abstractionist Sam Gilliam and works by Elizabeth Catlett, Romare Bearden, Alma Thomas, Hale Woodruff, Jacob Lawrence and Lois Mailou Jones, among others. These – along with works by a group of local white Color Field painters and some white UDC faculty members also in the university collections – were to become the core of what was presented in early 1990 as a ground-breaking multicultural art center, a hopeful coalition between artists of color, feminists and other artists depicting the struggle for freedom and human equality. Judy Chicago donated The Dinner Party with the understanding that one of the school's buildings would be repaired to house it. The money for these repairs had already been allocated and did not come from the school's working budget. On June 19, 1990, UDC trustees formally accepted the gift of The Dinner Party by unanimous vote. Soon, however, reporters from the right-wing Washington Times began writing stories that claimed that The Dinner Party "had been banned from several art galleries around the country because it depicts women's genitalia on plates" and that the "Board of Trustees will spend nearly $1.6 million to acquire and exhibit a piece of controversial art." Misunderstandings about the monetary situation were emphasized and perpetuated by media sources. Eventually, the plans were cancelled owing to concerns that the collection would negatively affect the school's working budget. Companion piece The International Honor Quilt (also known as the International Quilting Bee) is a collective feminist art project initiated in 1980 by Judy Chicago as a companion piece to The Dinner Party. See also International Honor Quilt Famous Women Dinner Service Vagina and vulva in art References Bibliography Chicago, Judy. The Dinner Party: From Creation to Preservation. London: Merrell (2007). . Further reading Chicago, Judy. The Dinner Party: A Symbol of our Heritage. New York: Anchor (1979). Chicago, Judy. Embroidering Our Heritage: The Needlework of The Dinner Party. New York: Anchor (1980) Chicago, Judy. Through The Flower: My Struggle as A Woman Artist. Lincoln: Authors Choice Press (2006). Gerhard, Jane F. The Dinner Party: Judy Chicago and the Power of Popular Feminism, 1970-2007. Athens, GA: The University of Georgia Press (2013). Jones, Amelia. Sexual Politics: Judy Chicago's Dinner Party in Feminist Art History. Berkeley: University of California Press (1996). External links The Dinner Party exhibition website from the Brooklyn Museum, including a searchable database of all the women represented. The Dinner Party from Chicago's non-profit organization, Through the Flower. Videos and documentary films 28 March 2007 Video tour of the work and part of the Elizabeth A. Sackler Center for Feminist Art by James Kalm . 28 March 2007. Accessed September 2009. 41-minute video where Judy Chicago personally takes viewers on a tour of The Dinner Party, with explanations of how the work was created, as well as special focus on certain place settings. 3 October 2012. Accessed 21 July 2013. Right Out of History: Judy Chicago'', Phoenix Learning Group (2008) (DVD) 1979 works Installation art works Collection of the Brooklyn Museum 1996 books Cultural depictions of Hypatia Cultural depictions of Eleanor of Aquitaine Cultural depictions of Elizabeth I Cultural depictions of Susan B. 
Anthony Cultural depictions of Emily Dickinson Cultural depictions of Sacagawea Cultural depictions of Boudica Cultural depictions of Virginia Woolf Cultural depictions of Judith Cultural depictions of Sappho Cultural depictions of Hatshepsut Cultural depictions of Theodora I Books about the visual arts Feminist art Feminism and history Obscenity controversies in art Yonic symbols Vagina and vulva in art
The Dinner Party
[ "Astronomy" ]
4,446
[ "Cultural depictions of Hypatia", "Cultural depictions of astronomers" ]
1,620,176
https://en.wikipedia.org/wiki/La%20Grande%20River
La Grande River (, ; ; both meaning "great river") is a river in northwestern Quebec, Canada, rising in the highlands of the north-central part of the province and flowing roughly west to its drainage at James Bay. It is the second-longest river in the province, surpassed only by the Saint Lawrence River. Originally, the La Grande River drained an area of , and had a mean discharge of . Since the 1980s, when hydroelectric development diverted the Eastmain and Caniapiscau rivers into the La Grande, its total catchment area has increased to about , with its mean discharge being more than . In November 2009, the Rupert River was also (partially) diverted, adding another to the basin. At one time, the La Grande was known as the "Fort George River". The Hudson's Bay Company operated a trading post on the river, at Big River House, between 1803 and 1824. In 1837, a larger trading post was established at Fort George, on an island at the mouth of the river. In the early 20th century, this trading post became a village as the Crees of the James Bay region abandoned their nomadic way of life and settled nearby. The modern Cree village of Chisasibi, which replaced Fort George in 1980, is situated on the southern shore of the La Grande River, several kilometers to the East. Tributaries Significant tributaries of La Grande River include: Kanaaupscow River Sakami River Eastmain River (diverted) Opinaca River Rupert River (diverted) Rivière de Pontois Rivière de la Corvette Laforge River Caniapiscau River (diverted) Hydro-electric development The river has been extensively developed as a source of hydroelectric power by Hydro-Québec, starting in 1974. An area of was flooded and almost all of the flow of the Eastmain River and approximately 70% of the flows of the Rupert River were diverted into the La Grande watershed. The following generating stations are on the La Grande River and its tributaries in upstream order: La-Grande-1 (LG-1) Robert-Bourassa La Grande-2A (LG-2A) La Grande-3 (LG-3) La Grande-4 (LG-4) Laforge-1 (LF-1) Laforge-2 (LF-2) Brisay Eastmain-1 As a result of the development projects, the Cree people of the region lost some parts of their traditional hunting and trapping territories (about 10% of the hunting and trapping territories used by the Cree of Chisasibi). Organic mercury levels increased in the fish, which forms an important part of their diet, as the organic material trapped by the rising waters in the new reservoirs began to filter into the food chain. Careful follow-up by Cree health authorities since the 1980s has been largely successful. The authorities continue to promote the regular consumption of fish, with the notable exception of the predatory species living in the reservoirs, which still show high levels of mercury. Climate See also James Bay Project List of longest rivers of Canada References External links Hydro-Québec's La Grande Complex The Grand River at LG-1 (YouTube Video) Rivers of Nord-du-Québec James Bay Project Tributaries of James Bay
La Grande River
[ "Engineering" ]
665
[ "James Bay Project", "Macro-engineering" ]
1,620,216
https://en.wikipedia.org/wiki/Algebraic%20graph%20theory
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. Branches of algebraic graph theory Using linear algebra The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Laplacian matrix of a graph (this part of algebraic graph theory is also called spectral graph theory). For the Petersen graph, for example, the spectrum of the adjacency matrix is (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). Several theorems relate properties of the spectrum to other graph properties. As a simple example, a connected graph with diameter D will have at least D+1 distinct values in its spectrum. Aspects of graph spectra have been used in analysing the synchronizability of networks. Using group theory The second branch of algebraic graph theory involves the study of graphs in connection to group theory, particularly automorphism groups and geometric group theory. The focus is placed on various families of graphs based on symmetry (such as symmetric graphs, vertex-transitive graphs, edge-transitive graphs, distance-transitive graphs, distance-regular graphs, and strongly regular graphs), and on the inclusion relationships between these families. Certain of such categories of graphs are sparse enough that lists of graphs can be drawn up. By Frucht's theorem, all groups can be represented as the automorphism group of a connected graph (indeed, of a cubic graph). Another connection with group theory is that, given any group, symmetrical graphs known as Cayley graphs can be generated, and these have properties related to the structure of the group. This second branch of algebraic graph theory is related to the first, since the symmetry properties of a graph are reflected in its spectrum. In particular, the spectrum of a highly symmetrical graph, such as the Petersen graph, has few distinct values (the Petersen graph has 3, which is the minimum possible, given its diameter). For Cayley graphs, the spectrum can be related directly to the structure of the group, in particular to its irreducible characters. Studying graph invariants Finally, the third branch of algebraic graph theory concerns algebraic properties of invariants of graphs, and especially the chromatic polynomial, the Tutte polynomial and knot invariants. The chromatic polynomial of a graph, for example, counts the number of its proper vertex colorings. For the Petersen graph, this polynomial is t(t − 1)(t − 2)(t^7 − 12t^6 + 67t^5 − 230t^4 + 529t^3 − 814t^2 + 775t − 352). In particular, this means that the Petersen graph cannot be properly colored with one or two colors, but can be colored in 120 different ways with 3 colors. Much work in this area of algebraic graph theory was motivated by attempts to prove the four color theorem. However, there are still many open problems, such as characterizing graphs which have the same chromatic polynomial, and determining which polynomials are chromatic. See also Spectral graph theory Algebraic combinatorics Algebraic connectivity Dulmage–Mendelsohn decomposition Graph property Adjacency matrix References External links
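The two Petersen-graph facts quoted above, the adjacency spectrum and the count of 120 proper 3-colorings, are easy to verify computationally. The following sketch is an illustrative check rather than part of the article; it assumes the networkx and numpy libraries, and the brute-force coloring count is feasible only because the graph has just 10 vertices.

```python
# Verify the quoted spectral and chromatic facts for the Petersen graph.
import itertools
import networkx as nx
import numpy as np

G = nx.petersen_graph()
A = nx.to_numpy_array(G)

# Adjacency spectrum: should be -2 (x4), 1 (x5), 3 (x1)
eigenvalues = np.round(np.linalg.eigvalsh(A), 6)
print("spectrum:", eigenvalues)

# Count proper 3-colorings by brute force (3^10 = 59049 assignments)
nodes = list(G.nodes())
edges = list(G.edges())
count = 0
for coloring in itertools.product(range(3), repeat=len(nodes)):
    color = dict(zip(nodes, coloring))
    if all(color[u] != color[v] for u, v in edges):
        count += 1
print("proper 3-colorings:", count)  # expected: 120
```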
Algebraic graph theory
[ "Mathematics" ]
677
[ "Mathematical relations", "Graph theory", "Algebra", "Algebraic graph theory" ]
1,620,275
https://en.wikipedia.org/wiki/Subsidence%20crater
A subsidence crater is a hole or depression left on the surface of an area which has had an underground (usually nuclear) explosion. Many such craters are commonly present at bomb testing areas; one notable example is the Nevada Test Site, which was historically used for nuclear weapons testing over a period of 41 years. Subsidence craters are created as the roof of the cavity caused by the explosion collapses. This causes the surface to depress into a sink (which subsidence craters are sometimes called; see sink hole). It is possible for further collapse to occur from the sink into the explosion chamber. When this collapse reaches the surface, and the chamber is exposed atmospherically to the surface, it is referred to as a chimney. It is at the point that a chimney is formed through which radioactive fallout may reach the surface. At the Nevada Test Site, depths of were used for tests. When the material above the explosion is solid rock, then a mound may be formed by broken rock that has a greater volume. This type of mound has been called "retarc", "crater" spelled backwards. When a drilling oil well encounters high-pressured gas which cannot be contained either by the weight of the drilling mud or by blow-out preventers, the resulting violent eruption can create a large crater which can swallow a drilling rig. This phenomenon is called "cratering" in oil field slang. An example is the Darvaza gas crater near Darvaza, Turkmenistan. Gallery See also Underground nuclear weapons testing Explosion crater Chagan (nuclear test) Sedan (nuclear test) Camouflet Glory hole (mining), specifically the subsidence crater produced by underground block caving. Subsidence (atmosphere) - the similar phenomenon observed in the air, upon cooling. Caldera References External links DOE Image of NTS's many subsidence craters Crater at DOE site Low-Yield Earth-Penetrating Nuclear Weapons at FAS site Nuclear weapons testing Underground nuclear weapons testing Explosion craters Articles containing video clips
Subsidence crater
[ "Chemistry", "Technology" ]
408
[ "Nuclear weapons testing", "Environmental impact of nuclear power", "Explosion craters", "Explosions" ]
1,620,404
https://en.wikipedia.org/wiki/Trimethylamine
Trimethylamine (TMA) is an organic compound with the formula N(CH3)3. It is a trimethylated derivative of ammonia. TMA is widely used in industry. At higher concentrations it has an ammonia-like odor, and can cause necrosis of mucous membranes on contact. At lower concentrations, it has a "fishy" odor, the odor associated with rotting fish. Physical and chemical properties TMA is a colorless, hygroscopic, and flammable tertiary amine. It is a gas at room temperature but is usually sold as a 40% solution in water. It is also sold in pressurized gas cylinders. TMA protonates to give the trimethylammonium cation. Trimethylamine is a good nucleophile, and this reactivity underpins most of its applications. Trimethylamine is a Lewis base that forms adducts with a variety of Lewis acids. Production Industry and laboratory Trimethylamine is prepared by the reaction of ammonia and methanol employing a catalyst: 3 CH3OH + NH3 → (CH3)3N + 3 H2O This reaction coproduces the other methylamines, dimethylamine (CH3)2NH and methylamine CH3NH2. Trimethylammonium chloride has been prepared by a reaction of ammonium chloride and paraformaldehyde: 9 (CH2=O)n + 2n NH4Cl → 2n (CH3)3N•HCl + 3n H2O + 3n CO2 Biosynthesis Trimethylamine is produced by several routes in nature. Well studied are the degradation of choline and carnitine. Applications Trimethylamine is used in the synthesis of choline, tetramethylammonium hydroxide, plant growth regulators, herbicides, strongly basic anion exchange resins, dye leveling agents and a number of basic dyes. Gas sensors to test for fish freshness detect trimethylamine. Toxicity In humans, ingestion of certain plant and animal (e.g., red meat, egg yolk) food containing lecithin, choline, and L-carnitine provides certain gut microbiota with the substrate to synthesize TMA, which is then absorbed into the bloodstream. High levels of trimethylamine in the body are associated with the development of trimethylaminuria, or fish odor syndrome, caused by a genetic defect in the enzyme which degrades TMA; or by taking large doses of supplements containing choline or L-carnitine. TMA is metabolized by the liver to trimethylamine N-oxide (TMAO); TMAO is being investigated as a possible proatherogenic substance which may accelerate atherosclerosis in those eating foods with a high content of TMA precursors. TMA also causes the odor of some human infections, bad breath, and bacterial vaginosis. Trimethylamine is a full agonist of human TAAR5, a trace amine-associated receptor that is expressed in the olfactory epithelium and functions as an olfactory receptor for tertiary amines. One or more additional odorant receptors appear to be involved in trimethylamine olfaction in humans as well. Acute and chronic toxic effects of TMA were suggested in medical literature as early as the 19th century. TMA causes eye and skin irritation, and it is suggested to be a uremic toxin. In patients, trimethylamine caused stomach ache, vomiting, diarrhoea, lacrimation, greying of the skin and agitation. Apart from that, reproductive/developmental toxicity has been reported. Some experimental studies suggested that TMA may be involved in etiology of cardiovascular diseases. Guidelines with exposure limit for workers are available e.g. the Recommendation from the Scientific Committee on Occupational Exposure Limits by the European Union Commission. 
Trimethylaminuria Trimethylaminuria is an autosomal recessive genetic disorder involving a defect in the function or expression of flavin-containing monooxygenase 3 (FMO3) which results in poor trimethylamine metabolism. Individuals with trimethylaminuria develop a characteristic fish odor—the smell of trimethylamine—in their sweat, urine, and breath after the consumption of choline-rich foods. A condition similar to trimethylaminuria has also been observed in a certain breed of Rhode Island Red chicken that produces eggs with a fishy smell, especially after eating food containing a high proportion of rapeseed. In the history of psychoanalysis The first dream of his own which Sigmund Freud tried to analyse in detail, when he was developing his theories about the interpretation of dreams, involved a patient of Freud's who had to have an injection of trimethylamine, and the chemical formula of the substance, written in bold letters on the bottle, jumping out at Freud. See also Ammonia, NH3 Ammonium, NH4+ Methylamine, (CH3)NH2 Triethylamine (TEA) References External links Molecule of the Month: Trimethylamine NIST Webbook data CDC - NIOSH Pocket Guide to Chemical Hazards Pesticides Dimethylamino compounds Foul-smelling chemicals Alkylamines Methyl compounds
Trimethylamine
[ "Biology", "Environmental_science" ]
1,106
[ "Biocides", "Toxicology", "Pesticides" ]
1,620,415
https://en.wikipedia.org/wiki/Thermogenics
Thermogenic means tending to produce heat, and the term is commonly applied to drugs which increase heat through metabolic stimulation, or to microorganisms which create heat within organic waste. Nearly all enzymatic reactions in the human body are thermogenic, which gives rise to the basal metabolic rate. In bodybuilding, athletes wishing to reduce body fat percentage use thermogenics in an attempt to increase their basal metabolic rate, thereby increasing overall energy expenditure. Caffeine and ephedrine are commonly used for this purpose in the ECA stack. 2,4-Dinitrophenol (DNP) is a very strong thermogenic drug used for fat loss which produces a dose-dependent increase in body temperature, to the point where it can induce death by hyperthermia. It works as an uncoupler of mitochondrial oxidative phosphorylation, dissipating the proton gradient that normally drives ATP synthesis. This stops the mitochondria from producing adenosine triphosphate, causing the energy to be released as heat instead. Notes Nutrition
Thermogenics
[ "Chemistry" ]
216
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
1,620,467
https://en.wikipedia.org/wiki/Calcium%20bicarbonate
Calcium bicarbonate, also called calcium hydrogencarbonate, has the chemical formula Ca(HCO3)2. The term does not refer to a known solid compound; it exists only in aqueous solution containing calcium (Ca2+), bicarbonate (), and carbonate () ions, together with dissolved carbon dioxide (CO2). The relative concentrations of these carbon-containing species depend on the pH; bicarbonate predominates within the range 6.36–10.25 in fresh water. All waters in contact with the atmosphere absorb carbon dioxide, and as these waters come into contact with rocks and sediments they acquire metal ions, most commonly calcium and magnesium, so most natural waters that come from streams, lakes, and especially wells, can be regarded as dilute solutions of these bicarbonates. These hard waters tend to form carbonate scale in pipes and boilers, and they react with soaps to form an undesirable scum. Attempts to prepare compounds such as solid calcium bicarbonate by evaporating its solution to dryness invariably yield instead the solid calcium carbonate: Ca(HCO3)2(aq) → CO2(g) + H2O(l) + CaCO3(s). Very few solid bicarbonates other than those of the alkali metals (other than ammonium bicarbonate) are known to exist. The above reaction is very important to the formation of stalactites, stalagmites, columns, and other speleothems within caves, and for that matter, in the formation of the caves themselves. As water containing carbon dioxide (including extra CO2 acquired from soil organisms) passes through limestone or other calcium carbonate-containing minerals, it dissolves part of the calcium carbonate, hence becomes richer in bicarbonate. As the groundwater enters the cave, the excess carbon dioxide is released from the solution of the bicarbonate, causing the much less soluble calcium carbonate to be deposited. In the reverse process, dissolved carbon dioxide (CO2) in rainwater (H2O) reacts with limestone calcium carbonate (CaCO3) to form soluble calcium bicarbonate (Ca(HCO3)2). This soluble compound is then washed away with the rainwater. This form of weathering is called carbonation and carbonatation. In medicine, calcium bicarbonate is sometimes administered intravenously to immediately correct the cardiac depressor effects of hyperkalemia by increasing calcium concentration in serum, and at the same time, correcting the acid usually present. References Bicarbonates Acid salts Calcium compounds
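The statement that bicarbonate predominates over roughly pH 6.36–10.25 can be illustrated with a small speciation calculation. The sketch below is only an illustration: the two acid dissociation constants are taken as the endpoints of the range quoted above, and ionic-strength and temperature corrections are ignored.

```python
# Fraction of dissolved carbonate species vs. pH for the CO2(aq)/HCO3-/CO3^2- system.
# pK1 and pK2 are assumed from the range quoted in the text; activity effects ignored.
PK1, PK2 = 6.36, 10.25

def carbonate_fractions(pH):
    h = 10.0 ** (-pH)
    k1 = 10.0 ** (-PK1)
    k2 = 10.0 ** (-PK2)
    denom = h * h + h * k1 + k1 * k2
    return {
        "CO2(aq)": h * h / denom,
        "HCO3-":   h * k1 / denom,
        "CO3^2-":  k1 * k2 / denom,
    }

for pH in (5.0, 6.36, 8.3, 10.25, 12.0):
    fractions = carbonate_fractions(pH)
    parts = ", ".join(f"{name} {value:.2f}" for name, value in fractions.items())
    print(f"pH {pH:>5}: {parts}")
# Around pH 8.3 essentially all dissolved carbonate is present as bicarbonate.
```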
Calcium bicarbonate
[ "Chemistry" ]
535
[ "Acid salts", "Salts" ]
1,620,643
https://en.wikipedia.org/wiki/Strain%20energy
In physics, the elastic potential energy stored in a wire when it is deformed by a tensile (stretching) or compressive (contractile) force is called strain energy. For a linearly elastic material the strain energy is U = (1/2) σ ε V = (1/2) E ε² V, where σ is the stress, ε is the strain, V is the volume, and E is Young's modulus (σ = E ε). The external work done on an elastic member in causing it to distort from its unstressed state is transformed into strain energy, which is a form of potential energy; strain energy stored as elastic deformation is mostly recoverable in the form of mechanical work. Molecular strain In a molecule, strain energy is released when the constituent atoms are allowed to rearrange themselves in a chemical reaction. For example, the heat of combustion of cyclopropane (696 kJ/mol per CH2 unit) is higher than that of propane (657 kJ/mol per CH2 unit); the excess reflects the ring strain of cyclopropane. Compounds with unusually large strain energy include tetrahedranes, propellanes, cubane-type clusters, fenestranes and cyclophanes. References Chemical bonding Structural analysis
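The elastic formula above can be illustrated with a short numerical sketch. The wire dimensions, elongation, and Young's modulus used below are assumed example values, not taken from the article.

```python
# Hypothetical worked example: strain energy stored in a stretched steel wire.
# Assumed values: E = 200 GPa, wire 2 m long, 1 mm^2 cross-section, stretched by 1 mm.
E = 200e9    # Young's modulus, Pa (assumed)
L = 2.0      # wire length, m
A = 1e-6     # cross-sectional area, m^2
dL = 1e-3    # elongation, m

strain = dL / L                      # dimensionless
stress = E * strain                  # Pa, sigma = E * epsilon
volume = A * L                       # m^3
U = 0.5 * stress * strain * volume   # J, U = 1/2 * sigma * epsilon * V

print(f"strain = {strain:.1e}, stress = {stress:.1e} Pa, U = {U:.3f} J")
# Expected output: strain 5.0e-04, stress 1.0e+08 Pa, U = 0.050 J
```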
Strain energy
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
228
[ "Structural engineering", "Aerospace engineering", "Structural analysis", "Condensed matter physics", "Mechanical engineering", "nan", "Chemical bonding" ]
1,620,703
https://en.wikipedia.org/wiki/Platonic%20hydrocarbon
In organic chemistry, a Platonic hydrocarbon is a hydrocarbon whose structure matches one of the five Platonic solids, with carbon atoms replacing its vertices, carbon–carbon bonds replacing its edges, and hydrogen atoms as needed. Not all Platonic solids have molecular hydrocarbon counterparts; those that do are the tetrahedron (tetrahedrane), the cube (cubane), and the dodecahedron (dodecahedrane). The possibility and existence of each Platonic hydrocarbon is determined by the number of bonds meeting at each carbon vertex and by the angle strain between those bonds. Tetrahedrane Tetrahedrane (C4H4) is a hypothetical compound. It has not yet been synthesized without substituents, but it is predicted to be kinetically stable in spite of its angle strain. Some stable derivatives, including tetra(tert-butyl)tetrahedrane and tetra(trimethylsilyl)tetrahedrane, have been produced. Cubane Cubane (C8H8) has been synthesized. Although it has high angle strain, cubane is kinetically stable, due to a lack of readily available decomposition paths. Octahedrane Angle strain would make an octahedron highly unstable due to inverted tetrahedral geometry at each vertex. There would also be no hydrogen atoms because four edges meet at each corner; thus, the hypothetical octahedrane molecule, with a molecular formula of C6, would be an allotrope of elemental carbon rather than a hydrocarbon. The existence of octahedrane cannot be ruled out completely, although calculations have shown that it is unlikely. Dodecahedrane Dodecahedrane (C20H20) was first synthesized in 1982, and has minimal angle strain; the tetrahedral angle is 109.5° and the dodecahedral angle is 108°, only a slight discrepancy. Icosahedrane The tetravalency (4-connectedness) of carbon excludes an icosahedron because 5 edges meet at each vertex. True pentavalent carbon is unlikely; methanium, nominally CH5+, usually exists as a complex of a CH3+ fragment with an H2 unit. A hypothetical icosahedral C12 cluster would lack hydrogen, so it would not be a hydrocarbon; it would also be an ion. Both icosahedral and octahedral structures have been observed in boron compounds such as the dodecaborate ion and some of the carbon-containing carboranes. Other polyhedra Increasing the number of atoms that comprise the carbon skeleton leads to a geometry that increasingly approximates a sphere, and the space enclosed in the carbon "cage" increases. This trend continues with buckyballs or spherical fullerene (C60). Although not a Platonic hydrocarbon, buckminsterfullerene has the shape of a truncated icosahedron, an Archimedean solid. The concept can also be extended to regular Euclidean tilings, with the hexagonal tiling producing graphane. A square tiling (which would resemble an infinitely large fenestrane) would suffer from the same problem as octahedrane, and the triangular tiling from the same problem as icosahedrane. No generalisations to hyperbolic tilings seem to be known. The regular convex 4-polytopes may also have hydrocarbon analogues; hypercubane has been proposed. References Hydrocarbons Hypothetical chemical compounds
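The vertex-degree argument running through the sections above can be summarized in a few lines of code. The sketch below is illustrative only; the solid data is standard geometry rather than something taken from the article. A carbon at a vertex spends one bond on each incident edge, so a C-H hydrocarbon is possible only when the vertex degree is 3, a bare carbon cage is the best case at degree 4, and degree 5 is impossible for tetravalent carbon.

```python
# For each Platonic solid: (number of vertices, edges meeting at each vertex).
solids = {
    "tetrahedron":  (4, 3),
    "cube":         (8, 3),
    "octahedron":   (6, 4),
    "dodecahedron": (20, 3),
    "icosahedron":  (12, 5),
}

for name, (vertices, degree) in solids.items():
    if degree == 3:
        verdict = f"hydrocarbon C{vertices}H{vertices} possible (one C-H per vertex)"
    elif degree == 4:
        verdict = f"no hydrogens left: at best a C{vertices} carbon cage"
    else:
        verdict = "impossible: would require pentavalent carbon"
    print(f"{name:12s} degree {degree}: {verdict}")
```

Running this reproduces the article's cases: C4H4, C8H8, and C20H20 for the degree-3 solids, a bare C6 cage for the octahedron, and no hydrocarbon for the icosahedron.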
Platonic hydrocarbon
[ "Chemistry" ]
694
[ "Hydrocarbons", "Hypotheses in chemistry", "Organic compounds", "Theoretical chemistry", "Hypothetical chemical compounds" ]
1,621,152
https://en.wikipedia.org/wiki/Spiritual%20materialism
Spiritual materialism is a term coined by Chögyam Trungpa in his book Cutting Through Spiritual Materialism. The book is a compendium of his talks explaining Buddhism given while opening the Karma Dzong meditation center in Boulder, Colorado. He expands on the concept in later seminars that became books such as Work, Sex, Money. He uses the term to describe mistakes spiritual seekers commit which turn the pursuit of spirituality into an ego-building and confusion-creating endeavor, based on the idea that ego development is counter to spiritual progress. Conventionally, it is used to describe capitalist and spiritual narcissism, commercial efforts such as "new age" bookstores and wealthy lecturers on spirituality; it might also mean the attempt to build up a list of credentials or accumulate teachings in order to present oneself as a more realized or holy person. Author Jorge Ferrer equates the terms "Spiritual materialism" and "Spiritual Narcissism", though others draw a distinction, that spiritual narcissism is believing that one deserves love and respect or is better than another because one has accumulated spiritual training instead of the belief that accumulating training will bring an end to suffering. Lords of Materialism In Trungpa's presentation, spiritual materialism can fall into three categories — what he calls the three "Lords of Materialism" (Tibetan: lalo literally "barbarian") — in which a form of materialism is misunderstood as bringing long-term happiness but instead brings only short-term entertainment followed by long-term suffering: Physical materialism is the belief that possessions can bring release from suffering. In Trungpa's view, they may bring temporary happiness but then more suffering in the endless pursuit of creating one's environment to be just right. Or on another level it may cause a misunderstanding like, "I am rich because I have this or that" or "I am a teacher (or whatever) because I have a diploma (or whatever)." Psychological materialism is the belief that a particular philosophy, belief system, or point of view will bring release from suffering. So seeking refuge by strongly identifying with a particular religion, philosophy, political party or viewpoint, for example, would be psychological materialism. From this the conventional usage of spiritual materialism arises, by identifying oneself as Buddhist or some other label, or by collecting initiations and spiritual accomplishments, one further constructs a solidified view of ego. Trungpa characterizes the goal of psychological materialism as using external concepts, pretexts, and ideas to prove that the ego-driven self exists, which manifests in a particular competitive attitude. Spiritual materialism is the belief that a certain temporary state of mind is a refuge from suffering. An example would be using meditation practices to create a peaceful state of mind, or using drugs or alcohol to remain in a numbed out or a euphoric state. According to Trungpa, these states are temporary and merely heighten the suffering when they cease. So attempting to maintain a particular emotional state of mind as a refuge from suffering, or constantly pursuing particular emotional states of mind like being in love, will actually lead to more long-term suffering. Ego The underlying source of these three approaches to finding happiness is based, according to Trungpa, on the mistaken notion that one's ego is inherently existent and a valid point of view. 
He claims that is incorrect, and therefore the materialistic approaches have an invalid basis to begin with. The message in summary is, "Don't try to reinforce your ego through material things, belief systems like religion, or certain emotional states of mind." In his view, the point of religion is to show you that your ego doesn't really exist inherently. Ego is something you build up to make you think you exist, but it is not necessary and in the long run causes more suffering. References Carson, Richard David (2003) Taming Your Gremlin: A Surprisingly Simple Method for Getting Out of Your Own Way Ferrer, Jorge Noguera (2001) Revisioning Transpersonal Theory: A Participatory Vision of Human Spirituality Hart, Tobin (2004) The Secret Spiritual World of Children Potter, Richard and Potter, Jan (2006) Spiritual Development for Beginners: A Simple Guide to Leading a Purpose Filled Life Trungpa, Chögyam (1973). Cutting Through Spiritual Materialism. Boston, Massachusetts: Shambhala Publications, Inc. . Trungpa, Chögyam (2011). Work, Sex, Money: Real Life on the Path of Mindfulness. Boston, Massachusetts: Shambhala Publications, Inc. . Based on a series of talks given between 1971 and 1981. External links Cutting Through Spiritual Materialism excerpts Work, Sex, Money excerpts Spiritual Finances Video of Boulder talks on the subject by Chögyam Trungpa Materialism Spiritual philosophy Tibetan Buddhist philosophical concepts
Spiritual materialism
[ "Physics" ]
992
[ "Materialism", "Matter" ]
1,621,212
https://en.wikipedia.org/wiki/Fieldbus
A fieldbus is a member of a family of industrial digital communication networks used for real-time distributed control. Fieldbus profiles are standardized by the International Electrotechnical Commission (IEC) as IEC 61784/61158. A complex automated industrial system is typically structured in hierarchical levels as a distributed control system (DCS). In this hierarchy the upper levels for production managements are linked to the direct control level of programmable logic controllers (PLC) via a non-time-critical communications system (e.g. Ethernet). The fieldbus links the PLCs of the direct control level to the components in the plant of the field level such as sensors, actuators, electric motors, console lights, switches, valves and contactors and replaces the direct connections via current loops or digital I/O signals. The requirement for a fieldbus are therefore time-critical and cost sensitive. Since the new millennium a number of fieldbuses based on Real-time Ethernet have been established. These have the potential to replace traditional fieldbuses in the long term. Description A fieldbus is an industrial network system for real-time distributed control. It is a way to connect instruments in a manufacturing plant. A fieldbus works on a network structure which typically allows daisy-chain, star, ring, branch, and tree network topologies. Previously, computers were connected using RS-232 (serial connections) by which only two devices could communicate. This would be the equivalent of the currently used 4–20 mA communication scheme which requires that each device have its own communication point at the controller level, while the fieldbus is the equivalent of the current LAN-type connections, which require only one communication point at the controller level and allow multiple (hundreds) of analog and digital points to be connected at the same time. This reduces both the length of the cable required and the number of cables required. Furthermore, since devices that communicate through a fieldbus require a microprocessor, multiple points are typically provided by the same device. Some fieldbus devices now support control schemes such as PID control on the device side instead of forcing the controller to do the processing. History The most important motivation to use a fieldbus in a distributed control system is to reduce the cost for installation and maintenance of the installation without losing the high availability and reliability of the automation system. The goal is to use a two wire cable and simple configuration for field devices from different manufacturers. Depending on the application, the number of sensors and actuators vary from hundreds in one machine up to several thousands distributed over a large plant. The history of the fieldbus shows how to approach these goals. Precursors of fieldbuses General Purpose Interface Bus (GPIB) Arguably the precursor field bus technology is HP-IB as described in IEEE 488 in 1975. "It became known as the General Purpose Interface Bus (GPIB), and became a de facto standard for automated and industrial instrument control". The GPIB has its main application in automated measurements with instruments from different manufacturers. It is a parallel bus with a cable and connector with 24 wires, limited to a maximal cable length of 20 metres. Bitbus The oldest commonly used field bus technology is Bitbus. 
Bitbus was created by Intel Corporation to enhance use of Multibus systems in industrial systems by separating slow i/o functions from faster memory access. In 1983, Intel created the 8044 Bitbus microcontroller by adding field bus firmware to its existing 8051 microcontroller. Bitbus uses EIA-485 at the physical layer, with two twisted pairs - one for data and the other for clocking and signals. Use of SDLC at the data link layer permits 250 nodes on one segment with a total distance of 13.2 km. Bitbus has one master node and multiple slaves, with slaves only responding to requests from the master. Bitbus does not define routing at the network layer. The 8044 permits only a relatively small data packet (13 bytes), but embeds an efficient set of RAC (remote access and control) tasks and the ability to develop custom RAC tasks. In 1990, the IEEE adopted Bitbus as the Microcontroller System Serial Control Bus (IEEE-1118). Today BITBUS is maintained by the BEUG - BITBUS European Users Group. Computer networks for automation Office networks are not really suited for automation applications, as they lack the upper bounded transmission delay. ARCNET, which was conceived as early as 1975 for office connectivity uses a token mechanism and therefore found later uses in industry, Manufacturing Automation Protocol (MAP) The Manufacturing Automation Protocol (MAP) was an implementation of OSI-compliant protocols in automation technology initiated by General Motors in 1984. MAP became a LAN standardization proposal supported by many manufacturers and was mainly used in factory automation. MAP has used the 10 Mbit/s IEEE 802.4 token bus as transmission medium. Due to its scope and complexity, MAP failed to make the big breakthrough. To reduce the complexity and reach faster processing with reduced resources the Enhanced Performance Architecture (EPA) MAP was developed in 1988. This MiniMap contains only levels 1,2 and 7 of the Open Systems Interconnection (OSI) basic reference model. This shortcut was taken over by the later fieldbus definitions. The most important achievement of MAP is Manufacturing Message Specification (MMS), the application layer of MAP. Manufacturing Message Specification (MMS) The Manufacturing Message Specification (MMS) is an international standard ISO 9506 dealing with an application protocol and services for transferring real time process data and supervisory control information between networked devices or computer applications published as a first version in 1986. It has been a model for many further developments in other industrial communication standardizations such as FMS for Profibus or SDO for CANopen. It is still in use as a possible application layer e.g. for power utility automation in the IEC 61850 standards. Fieldbuses for manufacturing automation In the field of manufacturing automation the requirements for a fieldbus are to support short reaction times with only a few bits or bytes to be transmitted over not more than some hundreds of meters. MODBUS In 1979 Modicon (now Schneider Electric) defined a serial bus to connect their programmable logic controllers (PLCs) called Modbus. In its first version Modbus used a two wire cable with EIA 485 UART signals. The protocol itself is very simple with a master/slave protocol and the number of data types are limited to those understood by PLCs at the time. Nevertheless, Modbus is (with its Modbus-TCP version) still one of the most used industrial networks, mainly in the building automation field. 
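The simplicity of Modbus mentioned above is visible in how little is needed to build a request frame by hand. The following sketch is an illustration based on the public Modbus TCP framing rules (MBAP header plus PDU); the unit ID, register address, count, and the commented-out IP address are made-up example values, not taken from any particular installation.

```python
# Minimal sketch: constructing a Modbus TCP "Read Holding Registers" (0x03) request.
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    # PDU: function code (1 byte) + starting address (2) + quantity (2), big-endian
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), length of unit id + PDU, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = read_holding_registers_request(transaction_id=1, unit_id=1, start_addr=0, count=10)
print(frame.hex())  # a 12-byte request frame

# Sending it is a plain TCP exchange on port 502, e.g. with the standard socket module:
#   with socket.create_connection(("192.168.0.10", 502), timeout=2) as s:
#       s.sendall(frame)
#       reply = s.recv(260)
```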
PROFIBUS A research project with the financial support of the German government defined the fieldbus PROFIBUS in 1987, based on the Fieldbus Message Specification (FMS). Practical applications showed that it was too complicated to handle in the field. In 1994 Siemens proposed a modified application layer with the name Decentralized Periphery (DP), which gained wide acceptance in the manufacturing industry. By 2016 PROFIBUS was one of the most widely installed fieldbuses in the world, reaching 60 million installed nodes in 2018. INTERBUS In 1987 Phoenix Contact developed a serial bus to connect spatially distributed inputs and outputs to a centralized controller. The controller sends a single frame over a physical ring, and this frame contains all input and output data. The cable has five wires: besides the ground signal, two wires for the outgoing frame and two wires for the returning frame. With this cable it is possible to arrange the whole installation in a tree topology. INTERBUS was very successful in the manufacturing industry, with more than 22.9 million devices installed in the field. INTERBUS later joined the Ethernet-based fieldbus technology PROFINET, and it is now maintained by the PROFIBUS Nutzerorganisation e.V. CAN During the 1980s, to solve communication problems between different control systems in cars, the German company Robert Bosch GmbH first developed the Controller Area Network (CAN). The concept of CAN was that every device can be connected by a single set of wires, and every device that is connected can freely exchange data with any other device. CAN soon migrated into the factory automation marketplace (with many others). DeviceNet was developed by the American company Allen-Bradley (now owned by Rockwell Automation) and the ODVA (Open DeviceNet Vendor Association) as an open fieldbus standard based on the CAN protocol. DeviceNet is standardised in the European standard EN 50325. Specification and maintenance of the DeviceNet standard is the responsibility of ODVA. Like ControlNet and EtherNet/IP, DeviceNet belongs to the family of CIP-based networks. CIP (Common Industrial Protocol) forms the common application layer of these three industrial networks. DeviceNet, ControlNet and EtherNet/IP are therefore well coordinated and provide the user with a graded communication system for the management level (EtherNet/IP), cell level (ControlNet) and field level (DeviceNet). DeviceNet is an object-oriented bus system and operates according to the producer/consumer method. DeviceNet devices can be clients (masters), servers (slaves), or both, and clients and servers can be producers, consumers, or both. CANopen was developed by the CiA (CAN in Automation), the user and manufacturer association for CANopen, and has been standardized as European standard EN 50325-4 since the end of 2002. CANopen uses layers 1 and 2 of the CAN standard (ISO 11898-2) together with extensions covering pin assignment, transmission rates and the application layer.
A special standard for instrumentation IEC/EN 60079-27 is describing requirements for the Fieldbus Intrinsically Safe Concept (FISCO) for installations in zone 0, 1 or 2. WorldFIP The FIP standard is based on a French initiative in 1982 to create a requirements analysis for a future field bus standard. The study led to the European Eureka initiative for a field bus standard in June 1986 that included 13 partners. The development group (réseaux locaux industriels) created the first proposal to be standardized in France. The name of the FIP field bus was originally given as an abbreviation of the French "Flux d'Information vers le Processus" while later referring to FIP with the English name "Factory Instrumentation Protocol". FIP has lost ground to Profibus which came to prevail the market in Europe in the following decade - the WorldFIP homepage has seen no press release since 2002. The closest cousin of the FIP family can be found today in the Wire Train Bus for train coaches. However a specific subset of WorldFIP - known the FIPIO protocol - can be found widely in machine components. Foundation Fieldbus (FF) Foundation Fieldbus was developed over a period of many years by the International Society of Automation (ISA) as SP50. Foundation Fieldbus today enjoys a growing installed base in many heavy process applications such as refining, petrochemicals, power generation, and even food and beverage, pharmaceuticals, and nuclear applications. Effective January 1, 2015, the Fieldbus Foundation has become part of the new FieldComm Group. PROFIBUS-PA Profibus PA (process automation) is used for communication between measuring and process instruments, actuators and process control system or PLC/DCS in process engineering. Profibus PA is a Profibus version with physical layer suitable for process automation, in which several segments (PA segments) with field instruments can be connected to Profibus DP via so-called couplers. The two-wire bus cable of these segments takes over not only the communication, but also the power supply of the participants (MBP transmission technology). Another special feature of Profibus PA is the widely used device profile "PA Devices" (PA Profile), in which the most important functions of the field devices are standardized across manufacturers. Fieldbuses for building automation The market of building automation has also different requirements for the application of a fieldbus: installation bus with a lot of simple I/O distributed over a large space. automation fieldbus for control of heating, ventilation, and air conditioning (HVAC) management network for facility management The BatiBUS defined in 1989 and used mainly in France, the Instabus extended to the European Installation Bus (EIB) and the European Home Systems Protocol (EHS) merged in 1999 to the Konnex) (KNX) standard EN 50090, (ISO/IEC 14543-3). In 2020 495 Member companies offer 8'000 products with KNX interfaces in 190 countries worldwide. LonWorks Going back to the 1980s, unlike other networks, LonWorks is the result of the work of computer scientists from Echelon Corporation. In 1999 the communications protocol (then known as LonTalk) was submitted to ANSI and accepted as a standard for control networking (ANSI/CEA-709.1-B), in 2005 as EN 14908 (European building automation standard). The protocol is also one of several data link/physical layers of the BACnet ASHRAE/ANSI standard for building automation. 
BACnet The BACnet standard was initially developed and is now maintained by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) starting in 1987. BACnet is an American National Standard (ANSI) 135 since 1995, a European standard, a national standard in many countries, and global ISO Standard 16484 since 2003. BACnet has in 2017 a market share of 60% in building automation market. Standardization Although fieldbus technology has been around since 1988, with the completion of the ISA S50.02 standard, the development of the international standard took many years. In 1999, the IEC SC65C/WG6 standards committee met to resolve difference in the draft IEC fieldbus standard. The result of this meeting was the initial form of the IEC 61158 standard with eight different protocol sets called "Types". This form of standard was first developed for the European Common Market, concentrates less on commonality, and achieves its primary purpose—elimination of restraint of trade between nations. Issues of commonality are now left to the international consortia that support each of the fieldbus standard types. Almost as soon as it was approved, the IEC standards development work ceased and the committee was dissolved. A new IEC committee SC65C/MT-9 was formed to resolve the conflicts in form and substance within the more than 4000 pages of IEC 61158. The work on the above protocol types is substantially complete. New protocols, such as for safety fieldbuses or real-time Ethernet fieldbuses are being accepted into the definition of the international fieldbus standard during a typical 5-year maintenance cycle. In the 2008 version of the standard, the fieldbus types are reorganized into Communication Profile Families (CPFs). Structure of fieldbus standards There were many competing technologies for fieldbuses and the original hope for one single unified communications mechanism has not been realized. This should not be unexpected since fieldbus technology needs to be implemented differently in different applications; automotive fieldbuses are functionally different from process plant control fieldbuses. IEC 61158: Industrial communication networks - Fieldbus specification In June 1999 the IEC's Committee of Action (CA) decided to take a new structure for the fieldbus standards beginning with a first edition valid at the January 1, 2000, in time for the new millennium: There is a large IEC 61158 standard, where all fieldbuses find their place. The experts have decided that the structure of IEC 61158 is maintained according to different layers, divided into services and protocols. The individual fieldbuses are incorporated into this structure as different types. The Standard IEC 61158 Industrial communication networks - Fieldbus specifications is split into the following parts: IEC 61158-1 Part 1: Overview and guidance for the IEC 61158 and IEC 61784 series IEC 61158-2 PhL: Part 2: Physical layer specification and service definition IEC 61158-3-x DLL: Part 3-x: Data-link layer service definition - Type x elements IEC 61158-4-x DLL: Part 4-x: Data-link layer protocol specification - Type x elements IEC 61158-5-x AL: Part 5-x: Application layer service definition - Type x elements IEC 61158-6-x AL: Part 6-x: Application layer protocol specification - Type x elements Each part still contains several thousand pages. Therefore, these parts have been further subdivided into subparts. The individual protocols have simply been numbered with a type. 
Each protocol type thus has its own subpart if required. In order to find the corresponding subpart of the individual parts of the IEC 61158 standard, one must know the corresponding protocol type for a specific family. In the 2019 edition of IEC 61158 up to 26 different types of protocols are specified. In IEC 61158 standardization, the use of brand names is avoided and replaced by dry technical terms and abbreviations. For example, Ethernet is replaced by the technically correct CSMA/CD or a reference to the corresponding ISO standard 8802.3. This is also the case with fieldbus names, they all are replaced by type numbers. The reader will therefore never find a designation such as PROFIBUS or DeviceNet in the entire IEC 61158 fieldbus standard. In the section Compliance to IEC 61784 a complete reference table is provided. IEC 61784: Industrial communication networks - Profiles It is clear that this collection of fieldbus standards in IEC 61158 is not suitable for implementation. It must be supplemented with instructions for use. These instructions show how and which parts of IEC 61158 can be assembled to a functioning system. This assembly instruction has been compiled subsequently as IEC 61784 fieldbus profiles. According to IEC 61158-1 the Standard IEC 61784 is split in the following parts: IEC 61784-1 Profile sets for continuous and discrete manufacturing relative to fieldbus use in industrial control systems IEC 61784-2 Additional profiles for ISO/IEC 8802 3 based communication networks in real-time applications IEC 61784-3 Functional safety fieldbuses – General rules and profile definitions IEC 61784-3-n Functional safety fieldbuses – Additional specifications for CPF n IEC 61784-5-n Installation of fieldbuses - Installation profiles for CPF n IEC 61784-1: Fieldbus profiles The IEC 61784 Part 1 standard with the name Profile sets for continuous and discrete manufacturing relative to fieldbus use in industrial control systems lists all fieldbuses which are proposed by the national standardization bodies. In the first edition in 2003 7 different Communication Profile Families (CPF) are introduced: CPF 1 FOUNDATION Fieldbus CPF 2 ControlNet CPF 3 PROFIBUS CPF 4 P-NET CPF 5 WorldFIP CPF 6 INTERBUS CPF 7 SwiftNet Swiftnet, which is widely used in aircraft construction (Boeing), was included in the first edition of the standard. This later proves to be a mistake and in the 2007 edition 2 this protocol was removed from the standard. At the same time, the CPF 8 CC-Link, the CPF 9 HART protocol and CPF 16 SERCOS are added. In the edition 4 in 2014 the last fieldbus CPF 19 MECHATROLINK was included into the standard. The edition 5 in 2019 was just a maintenance revision without any new profile added. See List of automation protocols for fieldbuses that are not included in this standard. IEC 61784-2: Real-time Ethernet Already in edition 2 of the fieldbus profile first profiles based on Ethernet as physical layer are included. All this new developed Real-time Ethernet (RTE) protocols are compiled in IEC 61784 Part 2 as Additional profiles for ISO/IEC 8802 3 based communication networks in real-time applications. Here we find the solutions Ethernet/IP, three versions of PROFINET IO - the classes A, B, and C - and the solutions of P-NET, Vnet/IP TCnet, EtherCAT, Ethernet POWERLINK, Ethernet for Plant Automation (EPA), and also the MODBUS with a new Real-Time Publish-Subscribe MODBUS-RTPS and the legacy profile MODBUS-TCP. The SERCOS solution is interesting in this context. 
This network from the field of axis control had its own standard IEC 61491. With the introduction of the Ethernet-based solution SERCOS III, this standard has been taken apart and the communication part is integrated in IEC 61158/61784. The application part has been integrated together with other drive solutions into a special drive standard IEC 61800-7. So the list of RTE for the first edition in 2007 is already long: CPF 2 CIP CPF 3 PROFIBUS & PROFINET CPF 4 P-NET CPF 6 INTERBUS CPF 10 Vnet/IP CPF 11 TCnet CPF 12 EtherCAT CPF 13 ETHERNET Powerlink CPF 14 Ethernet for Plant Automation (EPA) CPF 15 MODBUS CPF 16 SERCOS 2010: CPF 17 RAPIEnet CPF 18 SafetyNET p 2019: CPF 19 FL-net CPF 20 ADS-net 2023: CPF 21 AUBUS In 2010 already a second edition was published to include CPF 17 RAPIEnet and CPF 18 SafetyNET p. In the third edition in 2014 the Industrial Ethernet (IE) version of CC-Link was added. The two profile families CPF 20 ADS-net and CPF 19 FL-net are added to the edition four in 2019. For details about these RTEs see the article on Industrial Ethernet. IEC 61784-3: Safety For functional safety, different consortia have developed different protocols for safety applications up to Safety Integrity Level 3 (SIL) according to IEC 61508 or Performance Level "e" (PL) according to ISO 13849. What most solutions have in common is that they are based on a Black Channel and can therefore be transmitted via different fieldbuses and networks. Depending on the actual profile the safety protocol does provide measures like counters, CRCs, echo, timeout, unique sender and receiver IDs or cross check. The first edition issued in 2007 of IEC 61784 Part 3 named Industrialcommunication networks – Profiles – Functional safety fieldbuses includes the Communication Profile Families (CPF): CPF 1 FOUNDATION Fieldbus CPF 2 CIP with CIP safety CPF 3 PROFIBUS & PROFINET with PROFIsafe CPF 6 INTERBUS SERCOS does use the CIP safety protocol as well. In the second edition issued in 2010 additional CPF are added to the standard: CPF 8 CC-Link CPF 12 EtherCAT with Safety over EtherCAT CPF 13 Ethernet POWERLINK with openSAFETY CPF 14 EPA In the third edition in 2016 the last safety profile CPF 17 SafetyNET p was added. A new edition 4 is expected to be published in 2021. The standard has now 9 different safety profiles. They are all included and referenced in the global compliance table in the next section. Compliance to IEC 61784 The protocol families of each brand name are called Communication Profile Family and are abbreviated as CPF with a number. Each protocol family can now define fieldbuses, real-time Ethernet solutions, installation rules and protocols for functional safety. These possible profile families are laid down in IEC 61784 and compiled in the following table. As an example, we will search for the standards for PROFIBUS-DP. This belongs to the CPF 3 family and has the profile CP 3/1. In Table 5 we find that its protocol scope is defined in IEC 61784 Part 1. It uses protocol type 3, so the documents IEC 61158-3-3, 61158-4-3, 61158-5-3 and 61158-6-3 are required for the protocol definitions. The physical interface is defined in the common 61158-2 under type 3. The installation regulations can be found in IEC 61784-5-3 in Appendix A. It can be combined with the FSCP3/1 as PROFIsafe, which is defined in the IEC 61784-3-3 standard. To avoid the manufacturer having to list all these standards explicitly, the reference to the profile is specified in the standard. 
In the case of our example for the PROFIBUS-DP, the specification of the relevant standards would therefore have to be Compliance to IEC 61784-1 Ed.3:2019 CPF 3/1 IEC 62026: Controller-device interfaces (CDIs) Requirements of fieldbus networks for process automation applications (flowmeters, pressure transmitters, and other measurement devices and control valves in industries such as hydrocarbon processing and power generation) are different from the requirements of fieldbus networks found in discrete manufacturing applications such as automotive manufacturing, where large numbers of discrete sensors are used including motion sensors, position sensors, and so on. Discrete fieldbus networks are often referred to as "device networks". Already in the year 2000 the International Electrotechnical Commission (IEC) decided that a set of controller-device interfaces (CDIs) will be specified by the Technical Committee TC 121 Low-voltage switchgear and controlgear to cover the device networks. This set of standards with the number IEC 62026 includes in the actual edition of 2019 the following parts: IEC 62026-1: Part 1: General rules IEC 62026-2: Part 2: Actuator sensor interface (AS-i) IEC 62026-3: Part 3: DeviceNet IEC 62026-7: Part 7: CompoNet The following parts have been withdrawn in 2006 and are not maintained anymore: IEC 62026-5: Part 5: Smart distributed system (SDS) IEC 62026-6: Part 6: Seriplex (Serial Multiplexed Control Bus) Cost advantage The amount of cabling required is much lower in fieldbus than in 4–20 mA installations. This is because many devices share the same set of cables in a multi-dropped fashion rather than requiring a dedicated set of cables per device as in the case of 4–20 mA devices. Moreover, several parameters can be communicated per device in a fieldbus network whereas only one parameter can be transmitted on a 4–20 mA connection. A fieldbus also provides a good foundation for the creation of a predictive and proactive maintenance strategy. The diagnostics available from fieldbus devices can be used to address issues with devices before they become critical problems. Networking Despite each technology sharing the generic name of fieldbus the various fieldbuses are not readily interchangeable. The differences between them are so profound that they cannot be easily connected to each other. To understand the differences among fieldbus standards, it is necessary to understand how fieldbus networks are designed. With reference to the OSI model, fieldbus standards are determined by the physical media of the cabling, and layers one, two and seven of the reference model. For each technology the physical medium and the physical layer standards fully describe, in detail, the implementation of bit timing, synchronization, encoding/decoding, band rate, bus length and the physical connection of the transceiver to the communication wires. The data link layer standard is responsible for fully specifying how messages are assembled ready for transmission by the physical layer, error handling, message-filtering and bus arbitration and how these standards are to be implemented in hardware. The application layer standard, in general defines how the data communication layers are interfaced to the application that wishes to communicate. It describes message specifications, network management implementations and response to the request from the application of services. Layers three to six are not described in fieldbus standards. 
Features Different fieldbuses offer different sets of features and performance. It is difficult to make a general comparison of fieldbus performance because of fundamental differences in data transfer methodology. For a rough comparison it is simply noted whether the fieldbus in question typically supports data update cycles of 1 millisecond or faster. Market In process control systems, the market is dominated by Foundation Fieldbus and Profibus PA. Both technologies use the same physical layer (2-wire Manchester-encoded current modulation at 31.25 kHz) but are not interchangeable. As a general guide, applications which are controlled and monitored by programmable logic controllers (PLCs) tend towards PROFIBUS, and applications which are controlled and monitored by a digital/distributed control system (DCS) tend towards Foundation Fieldbus. PROFIBUS technology is made available through Profibus International, with headquarters in Karlsruhe, Germany. Foundation Fieldbus technology is owned and distributed by the Fieldbus Foundation of Austin, Texas. See also Parallel Redundancy Protocol Media Redundancy Protocol References Bibliography Patel, Kirnesh (2013) Foundation Fieldbus Technology and its applications Serial buses Industrial computing Computer-related introductions in 1988
Fieldbus
[ "Technology", "Engineering" ]
6,116
[ "Industrial computing", "Industrial engineering", "Automation" ]
1,621,242
https://en.wikipedia.org/wiki/Pythagorean%20hammers
According to legend, Pythagoras discovered the foundations of musical tuning by listening to the sounds of four blacksmith's hammers, which produced consonance and dissonance when they were struck simultaneously. According to Nicomachus in his 2nd-century CE Enchiridion harmonices, Pythagoras noticed that hammer A produced consonance with hammer B when they were struck together, and hammer C produced consonance with hammer A, but hammers B and C produced dissonance with each other. Hammer D produced such perfect consonance with hammer A that they seemed to be "singing" the same note. Pythagoras rushed into the blacksmith shop to discover why, and found that the explanation was in the weight ratios. The hammers weighed 12, 9, 8, and 6 pounds respectively. Hammers A and D were in a ratio of 2:1, which is the ratio of the octave. Hammers B and C weighed 9 and 8 pounds, respectively. Their ratios with hammer A were (12:8 = 3:2 = perfect fifth) and (12:9 = 4:3 = perfect fourth). The space between B and C is a ratio of 9:8, which is equal to the musical whole tone, or whole step interval. The legend is, at least with respect to the hammers, demonstrably false. It is probably a Middle Eastern folk tale. These proportions are indeed relevant to string length (e.g. that of a monochord) — using these founding intervals, it is possible to construct the chromatic scale and the basic seven-tone diatonic scale used in modern music, and Pythagoras might well have been influential in the discovery of these proportions (hence, sometimes referred to as Pythagorean tuning) — but the proportions do not have the same relationship to hammer weight and the tones produced by them. However, hammer-driven chisels with equal cross-section do show an exact proportionality between length (or weight) and the period of oscillation, the inverse of the eigenfrequency. Earlier sources mention Pythagoras' interest in harmony and ratio. Xenocrates (4th century BCE), while not as far as we know mentioning the blacksmith story, described Pythagoras' interest in general terms: "Pythagoras discovered also that the intervals in music do not come into being apart from number; for they are an interrelation of quantity with quantity. So he set out to investigate under what conditions concordant intervals come about, and discordant ones, and everything well-attuned and ill-tuned." Whatever the details of the discovery of the relationship between music and ratio, it is regarded as historically the first empirically secure mathematical description of a physical fact. As such, it is symbolic of, and perhaps leads to, the Pythagorean conception of mathematics as nature's modus operandi. As Aristotle was later to write, "the Pythagoreans construct the whole universe out of numbers". The Micrologus of Guido of Arezzo repeats the legend in Chapter XX. Contents of the legend According to the oldest recorded version of the legend, Pythagoras, who lived in the 6th century BC, sought a tool to measure acoustic perceptions, similar to how geometric quantities are measured with a compass or weights with a scale. As he passed by a forge where four (according to a later version, five) craftsmen were working with hammers, he noticed that each strike produced tones of different pitch, which resulted in harmonies when paired. He was able to distinguish the octave, the fifth, and the fourth. Only one pair, which formed the interval between the fourth and the fifth (a major second), did he perceive as dissonant. Excitedly, he ran into the forge to conduct experiments. 
There, he discovered that the difference in pitch was not dependent on the shape of the hammer, the position of the struck iron, or the force of the blow. Rather, he could associate the pitches with the weights of the hammers, which he measured precisely. He then returned home to continue the experiments. He hung four equally long, equally strong, and equally twisted strings in succession on a peg attached diagonally to the corner of the walls, weighting them differently by attaching different weights at the bottom. Then he struck the strings in pairs, and the same harmonies resonated as in the forge. The string with the heaviest load of twelve units, when paired with the least burdened string carrying six units, produced an octave. Thus, it was evident that the octave was based on the ratio 12:6, or 2:1. The most tense string yielded a fifth with the second loosest string (eight units), and a fourth with the second tightest string (nine units). From this, it followed that the fifth was based on the ratio 12:8, or 3:2, and the fourth on the ratio 12:9, or 4:3. Again, the ratio of the second tightest string to the loosest, with 9:6, or 3:2, yielded a fifth, and the ratio of the second loosest to the loosest, with 8:6, or 4:3, yielded a fourth. For the dissonant interval between fifth and fourth, it was revealed that it was based on the ratio 9:8, which coincided with the weight measurements carried out in the forge. The octave proved to be the product of the fifth and fourth: 12:8 · 8:6 = 12:6, that is, 3/2 · 4/3 = 2/1. Pythagoras then extended the experiment to various instruments, experimented with vessels, flutes, triangles, the monochord, etc., always finding the same numerical ratios. Finally, he introduced the commonly used terminology for relative pitch. Further traditions With the invention of the monochord to investigate and demonstrate the harmonies of pairs of strings with different integer length ratios, Pythagoras is said to have introduced a convenient means of illustrating the mathematical foundation of music theory that he discovered. The monochord, called κανών (kanōn) in Greek and regula in Latin, is a resonating box with a string stretched over it. A measurement scale is attached to the box. The device is equipped with a movable bridge, which allows the vibrating length of the string to be divided; the division can be precisely determined using the measurement scale. This enables measurement of intervals. Despite the name "monochord", which means "one-stringed", there were also multi-stringed monochords that could produce simultaneous intervals. However, it is unclear when the monochord was invented. Walter Burkert dates this achievement to a time after the era of Aristotle, who apparently did not know the device; thus, it was introduced long after Pythagoras' death. On the other hand, Leonid Zhmud suggests that Pythagoras probably conducted his experiment, which led to the discovery of numerical ratios, using the monochord. Hippasus of Metapontum, an early Pythagorean (late 6th and early 5th centuries BCE), conducted quantitative investigations into musical intervals. The experiment attributed to Hippasus, involving freely oscillating circular plates of varying thicknesses, is physically correct, unlike the alleged experiments of Pythagoras. It is unclear whether Archytas of Tarentum, an important Pythagorean of the 5th/4th centuries BCE, conducted relevant experiments. He was probably more of a theoretician than a practitioner in music, but he referred to the acoustic observations of his predecessors. 
The musical examples he cites in support of his acoustic theory involve wind instruments; he does not mention experiments with stringed instruments or individual strings. Archytas proceeded from the mistaken hypothesis that pitch depends on the speed of sound propagation and the force of impact on the sound-producing body; in reality, the speed of sound is constant in a given medium, and the force only affects the volume. Interpretation of the legend Walter Burkert is of the opinion that despite its physical impossibility, the legend should not be regarded as an arbitrary invention, but rather as having a meaning that can be found in Greek mythology. The Idaean Dactyls, the mythical inventors of blacksmithing, were also, according to myth, the inventors of music. Thus, there already existed a very ancient tradition associating blacksmithing with music, in which the mythical blacksmiths were depicted as possessors of the secret of magical music. Burkert sees the legend of Pythagoras in the blacksmiths as a late transformation and rationalization of the ancient Dactyl myth: in the legend of Pythagoras, the blacksmiths no longer appear as possessors of ancient magical knowledge; rather, without intending to, they become, albeit unknowingly, "teachers" of Pythagoras. In the Early Middle Ages, Isidore of Seville referred to the biblical blacksmith Tubal as the inventor of music; later authors followed him in this. This tradition once again shows the idea of a relationship between blacksmithing and music, which also appears in non-European myths and legends. Tubal was the half-brother of Jubal, who was considered the ancestor of all musicians. Both were sons of Lamech and thus grandsons of Cain. In some Christian traditions of the Middle Ages, Jubal, who observed his brother Tubal, was equated with Pythagoras. Another explanation is suggested by Jørgen Raasted, following Leonid Zhmud. Raasted's hypothesis states that the starting point of the legend formation was a report on the experiments of Hippasus. Hippasus used vessels called "sphaírai". This word was mistakenly confused with "sphýrai" (hammers) due to a scribal error, and instead of Hippasus' name, that of Pythagoras was used as the originator of the experiments. From this, the legend of the forge emerged. Basis of music theory The whole numbers 6, 8, 9, and 12, in relation to the lowest tone (number 12), correspond to the pure intervals fourth (number 9), fifth (number 8), and octave (number 6) upwards. Such pure intervals are perceived by the human ear as beat-free, as the volume of the tones does not vary. In sheet music, these four Pythagorean tones can, for example, be expressed with the melodic sequence c' – f' – g' – c". If this sequence of tones is considered not from the lowest but from the highest tone (number 6), the same intervals also result: a fourth (number 8), a fifth (number 9), and an octave (number 12), in this case downward. The fifth and the octave appear in relation to the fundamental tone in the natural harmonic series, but not the fourth or its octave equivalent. This interval of a fourth occurs in the valveless brass instruments known since ancient times and in the harmonic overtones of stringed instruments. 
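The relationships among the four numbers just discussed can be verified mechanically: reducing every pair by its greatest common divisor yields only the ratios 2:1, 3:2, 4:3 and 9:8. The following short sketch performs this check; the numbers are the ones given in the legend, while the variable names and the table of interval names are my own additions.

```python
from math import gcd
from itertools import combinations

# The four numbers of the legend (hammer weights / string loads / monochord divisions).
numbers = [6, 8, 9, 12]

# Conventional names for the reduced ratios that occur among these numbers.
interval_names = {
    (2, 1): "octave",
    (3, 2): "perfect fifth",
    (4, 3): "perfect fourth",
    (9, 8): "whole tone (the one dissonant pair)",
}

for a, b in combinations(sorted(numbers, reverse=True), 2):
    g = gcd(a, b)
    ratio = (a // g, b // g)
    name = interval_names.get(ratio, "no simple interval")
    print(f"{a}:{b} = {ratio[0]}:{ratio[1]}  ({name})")
```

All six pairs reduce to one of the four ratios above, with 9:8 as the single dissonant combination, exactly as the account of the legend describes.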
Significance for the later development of tonal systems The further investigation of intervals consisting of octaves, fifths, and fourths, and their multiples, eventually led from diatonic scales with seven different tones (heptatonic scale) in Pythagorean tuning to a chromatic scale with twelve tones. The wolf intervals in Pythagorean tuning posed a problem: instead of the pure fifths A♭-E♭ and D♭-A♭, the fifths G♯-E♭ and C♯-A♭, which were detuned by the Pythagorean comma, sounded. With the advent of polyphony in the second half of the 15th century, in addition to the octave and fifth, the pure third became crucial for major and minor triads. Although a tuning with pure thirds could not be fully realized on a twelve-note keyboard, it could be approximated well in meantone temperament. Its disadvantage was that not all keys of the circle of fifths were playable. To remedy this deficiency, tempered tunings were introduced, albeit with the trade-off that the pure third sounded harsher in some keys. Nowadays, most instruments are tuned in equal temperament with twelve equal semitones, so that the octaves are perfectly pure, the fifths are almost pure, and the thirds sound rough. The four Pythagorean tones in music In music, the four harmonic Pythagorean tones play a prominent role in the pentatonic scale, particularly on the first, fourth, fifth, and eighth degrees of diatonic scales (especially in major and minor) and in the composition of cadences as fundamental tones of tonic, subdominant, and dominant. This sequence of tones often appears in cadences with the corresponding chords. The four Pythagorean tones appear in many compositions. The first tones of the medieval antiphons "Ad te levavi" and "Factus est repente" consist essentially of the four Pythagorean tones, apart from some ornaments and high notes. Another example is the beginning of the Passacaglia in C minor by Johann Sebastian Bach. The theme consists of fifteen tones, of which a total of ten tones and especially the last four tones are derived from the sequence. Refutation Absolute Pitch of Hammers The resonance frequency of steel hammers that can be moved by human hands is usually in the ultrasonic range and therefore inaudible. Pythagoras could not have perceived these tones, let alone a difference of an octave in pitch between the hammers. Pitch Depending on Hammer Weight The vibration frequency of a freely oscillating solid body (vibrating longitudinally) is usually not proportional to its weight or volume; rather, it is inversely proportional to its length, which for similar geometry changes only with the cube root of the volume. For the Pythagorean hammers of similar geometry, the frequencies would therefore follow the cube roots of the weight ratios; the 12:6 weight ratio, for example, would give a frequency ratio of only about 1.26:1 rather than the 2:1 of an octave. Pitch in relation to string tension The assumption that the vibration frequency of a string is proportional to the tension is not correct. Rather, the vibration frequency is proportional to the square root of the tension. To double the vibration frequency, four times the tension must be applied and thus a weight four times as heavy must be hung on a string. 
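The square-root law just stated can be illustrated with a few lines of code. The sketch assumes Mersenne's law for an ideal string, f = (1/(2l)) · sqrt(F/μ); the particular length, linear density and tension values are arbitrary assumptions chosen only to show that quadrupling, not doubling, the tension doubles the frequency.

```python
from math import sqrt

def string_frequency(length_m: float, tension_n: float, mu_kg_per_m: float) -> float:
    """Fundamental frequency of an ideal string (Mersenne's law): f = (1/2l) * sqrt(F/mu)."""
    return sqrt(tension_n / mu_kg_per_m) / (2 * length_m)

# Arbitrary example values (assumptions, not taken from the legend).
length = 1.0          # metres
mu = 0.005            # kilograms per metre
base_tension = 60.0   # newtons (roughly a hanging weight of about 6 kg)

f0 = string_frequency(length, base_tension, mu)
for factor in (1, 2, 3, 4):
    f = string_frequency(length, factor * base_tension, mu)
    print(f"{factor} x tension: {f:7.2f} Hz  (ratio to base: {f / f0:.3f})")
# Doubling the tension raises the pitch only by a factor of sqrt(2), about 1.414;
# reaching the octave (factor 2) requires four times the tension, contrary to the legend.
```

Running the loop makes the contrast explicit: the frequency ratios come out as 1, 1.414, 1.732 and 2, not 1, 2, 3 and 4 as the legend's account of the hanging weights would require.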
Physical considerations Consonance Integer frequency ratios The fact that a tone with the fundamental frequency f₁ is in consonance with a second tone whose frequency f₂ = n · f₁ is an integer multiple of this fundamental frequency (with n a natural number, n ≥ 2) is immediately evident from the fact that the maxima and minima of the tone vibrations are synchronous in time, but it can also be explained as follows: The beat frequency of the two simultaneously sounding tones is calculated from the difference between the frequencies of these two tones and can be heard as a combination tone: f_beat = f₂ − f₁ (see the mathematical description of the beat). This difference is itself in an integer ratio to the fundamental frequency f₁: f_beat = (n − 1) · f₁. For every integer multiple of the fundamental frequency in the second tone, the beat frequency is therefore also an integer multiple of the fundamental frequency, so that all such pairs of tones sound consonant. Rational Frequency Ratios Even for two tones whose frequencies are in a rational ratio of n₂ to n₁, there is a consonance. The frequency of the second tone is given by: f₂ = (n₂ / n₁) · f₁. Consequently, the beat frequency of the two simultaneously sounding tones is given by: f_beat = f₂ − f₁ = ((n₂ − n₁) / n₁) · f₁. For the superparticular ratios of the legend (2:1, 3:2, 4:3 and 9:8), the fundamental frequency is then always an integer multiple of the beat frequency. Therefore, no dissonance occurs. Longitudinal Oscillations and Natural Frequency of Solid Bodies To estimate the natural frequency of a metal block, consider a homogeneous rectangular prism with a maximum length l, made of a material with a speed of sound c. For the vibration mode along its longest side (longitudinal oscillation), it has the lowest natural frequency, with antinodes at both ends and a node in the middle: f = c / (2 · l). The pitch is therefore independent of the mass and cross-sectional area of the prism, and the cross-sectional area can even vary. Moreover, the force and velocity when striking the body also do not play a role. At least this fact corresponds to the observation attributed to Pythagoras that the perceived pitch was not dependent on the hands (and thus the forces) of the craftsmen. Bodies with more complex geometry, such as bells, cups, or bowls, which may even be filled with liquids, have natural frequencies that require considerably more elaborate physical descriptions, since not only the shape but also the wall thickness or even the striking location must be considered. In these cases, transverse oscillations may also be excited and audible. Hammers A very large sledgehammer (the speed of sound in steel is approximately c = 5000 meters per second) with a hammer head length l = 0.2 meters has a natural frequency of 12.5 kilohertz. With a square cross-section of 0.1 meters by 0.1 meters (0.01 square meters), it would have an unusually large mass of almost 16 kilograms at a density of 7.86 grams per cubic centimeter. Frequencies above approximately 15 kilohertz cannot be perceived by many people anymore (see auditory threshold); therefore, the natural frequency of such a large hammer is hardly audible. Hammers with shorter heads have even higher natural frequencies that are therefore inaudible. Anvils A large steel anvil with a length l = 0.5 meters has a natural frequency of only 5 kilohertz and is therefore easily audible. There are a variety of compositions in which the composer specifies the use of anvils as musical instruments. Particularly well-known are the two operas from the music drama Der Ring des Nibelungen by Richard Wagner: Das Rheingold, Scene 3, 18 anvils in F in three octaves Siegfried, Act 1, Siegfried's smithing song Nothung! Nothung! 
Neidliches Schwert! Materials with a lower speed of sound than steel, such as granite or brass, produce even lower frequencies with congruent geometry. In any case, anvils are not mentioned in the early accounts, and audible sounds of anvils are attributed to hammers in the later versions of the legend. Metal rods It is possible to compare metal rods, such as chisels used by stonemasons or splitting wedges for stone breaking, in order to arrive at an observation similar to the one attributed to Pythagoras, namely that the pitch of the tools is related in a simple way to their weight. If the metal rods, neglecting the tapering cutting edges, all have the same uniform cross-sectional area A but different lengths l, then their weight is proportional to their length and thus also to the period of oscillation; the vibration frequency is inversely proportional to the weight, provided that the metal rods are excited to longitudinal vibrations by blows along the longitudinal axis. For bending oscillators, such as tuning forks or the plates of metallophones, different conditions and laws apply; therefore, these considerations do not apply to them. String vibrations Strings can be fixed at two ends, each on a bridge. Unlike the free rod with longitudinal vibrations considered above, the two bridges establish the boundary conditions of two nodal points of vibration; hence, the vibrational antinode is located in the middle. The natural frequency and thus the pitch of a string of length l are not proportional to the tension F, but to the square root of the tension. Moreover, the frequency increases with a heavier tensioning weight and thus higher tension, rather than decreasing: f = (1 / (2 · l)) · √(F / μ), where μ is the mass per unit length of the string. Nevertheless, the vibration frequency is inversely proportional to the length of the string at constant tension, which can be directly demonstrated with the monochord—allegedly invented by Pythagoras. Reception Antiquity The earliest mention of Pythagoras' discovery of the mathematical basis of musical intervals is found in the Platonist Xenocrates (4th century BC); as it is only a quote from a lost work of this thinker, it is unclear whether he knew the forge legend. In the 4th century BC, criticism of the Pythagorean theory of intervals was already expressed, although without reference to the Pythagoras legend; the philosopher and music theorist Aristoxenus considered it to be false. The oldest recorded version of the legend was presented centuries after the time of Pythagoras by the Neopythagorean Nicomachus of Gerasa, who in the 1st or 2nd century AD documented the story in his Harmonikḗ Encheirídion ("Handbook of Harmony"). He relied on the philosopher Philolaus, a Pythagorean of the 5th century BC, for his representation of the numerical ratios in music theory. The famous mathematician and music theorist Ptolemy (2nd century AD) was aware of the weight method transmitted by the legend but rejected it. However, he did not recognize the falsity of the weight experiments; he only criticized their inaccuracies compared to the precise measurements on the monochord. It is probable that he obtained his knowledge of the legendary tradition not from Nicomachus but from an older source, now lost. Gaudentius, a music theorist of the Imperial era whose dates are difficult to establish, described the legend in his Harmonikḗ Eisagōgḗ ("Introduction to Harmony"), in a version slightly shorter than that of Nicomachus. 
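Returning briefly to the physical considerations above, the natural-frequency estimates for the hammer head and the anvil follow directly from the relation f = c/(2l) for the longitudinal vibration of a free rod. The sketch below merely reproduces the two figures quoted in the text (roughly 5000 m/s for the speed of sound in steel, a 0.2 m hammer head, a 0.5 m anvil); the function and constant names are my own.

```python
# Numeric check of the rod estimates given above: lowest longitudinal natural
# frequency of a free rod, f = c / (2 * l). Values are the ones quoted in the text.
SPEED_OF_SOUND_STEEL = 5000.0  # metres per second (approximate value from the text)

def rod_natural_frequency(length_m: float, c: float = SPEED_OF_SOUND_STEEL) -> float:
    """Lowest natural frequency of the longitudinal vibration of a free rod of length l."""
    return c / (2.0 * length_m)

for name, length in [("sledgehammer head, 0.2 m", 0.2), ("anvil, 0.5 m", 0.5)]:
    print(f"{name}: {rod_natural_frequency(length) / 1000:.1f} kHz")
# -> 12.5 kHz for the hammer head (barely audible at best), 5.0 kHz for the anvil.
```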
The Neoplatonist philosopher Iamblichus of Chalcis, who worked as a philosophy teacher in the late 3rd and early 4th centuries, wrote a Pythagoras biography titled On the Pythagorean Life, in which he reproduced the blacksmith legend in the version of Nicomachus. In the first half of the 5th century, the writer Macrobius extensively discussed the blacksmith legend in his commentary on Cicero's Somnium Scipionis, which he described in a similar manner to Nikomachos. The strongest repercussion among the ancient music theorists who took up the narrative was achieved by Boethius with his textbook De institutione musica ("Introduction to Music"), written in the early 6th century, in which he initially describes Pythagoras' efforts of understanding in the forge and then at home. It is unclear whether he relied on Nikomachus' account or another source. In contrast to the entire earlier tradition, he reports five hammers instead of the four assumed by earlier authors. He claims that Pythagoras rejected the fifth hammer because it resulted in dissonance with all the other hammers. According to Boethius' account (as with Macrobius), Pythagoras tested his initial assumption that the difference in sound was due to different strength in the arms of the men by having the smiths exchange hammers, which led to its refutation. Regarding the experiments at Pythagoras' home, Boethius writes that the philosopher first hung strings with weights equal to those of the hammers in the forge and then experimented with pipes and cups, with all the experiments yielding the same results as the initial ones with the hammers. Using the legend as a basis, Boethius addresses the question of the reliability of sensory perceptions in terms of science and epistemology. The crucial point is that Pythagoras was initially prompted by sensory perception to formulate his question and hypotheses, and through empirical testing of hypotheses, he arrived at irrefutable certainty. The path to knowledge went from sensory perception to the initial hypothesis, which turned out to be erroneous, then to the formation of a correct opinion, and finally to its verification. Boethius acknowledges the necessity and value of sensory perception and opinion formation on the path to insight, although as a Platonist, he is inherently skeptical of sensory perception due to its proneness to error. Genuine knowledge, for him, arises only when the regularity is grasped, allowing the researcher to emancipate themselves from their initial dependence on unreliable sensory perception. The judgment of the researcher must not be based solely on sensory judgment derived from empirical experience, but rather it should only be made once they have found a rule through deliberation that enables them to position themselves beyond the realm of possible sensory deception. In the 6th century, the scholar Cassiodorus wrote in his Institutiones that Gaudentius attributed the beginnings "of music" to Pythagoras in his account of the legend of the blacksmith. He was referring to music theory, as Iamblichus had also done, who, with reference to the blacksmith narrative and the experiments described there, had referred to Pythagoras as the inventor "of music". Middle Ages In the Early Middle Ages, Isidore of Seville mentioned the legend of the blacksmith in his Etymologiae, which became a fundamental reference work for the educated in the Middle Ages. He briefly mentioned the legend, adopting Cassiodorus' wording and also designating Pythagoras as the inventor of music. 
As Cassiodorus and Isidore were first-rate authorities in the Middle Ages, the notion spread that Pythagoras had discovered the fundamental law of music and thus had been its founder. Despite such sweeping statements, medieval music theorists assumed that music had existed before Pythagoras and that the "invention of music" referred to the discovery of its principles. In the 9th century, the musicologist Aurelian of Réomé recounted the legend in his Musica disciplina ("Music Theory"). Aurelian's account was followed in the 10th century by Regino of Prüm in his work De harmonica institutione ("Introduction to Harmonic Theory"). Both emphasized that Pythagoras had been given the opportunity to make his discovery in the blacksmith's forge through a divine providence. In antiquity, Nicomachus and Iamblichus had already spoken of a daimonic providence, and Boethius had transformed it into a divine decree. In the 11th century, the legendary material was processed in the Carmina Cantabrigiensia. In the first half of the 11th century, Guido of Arezzo, the most famous music theorist of the Middle Ages, recounted the legend of the blacksmith in the final chapter of his Micrologus, basing it on the version of Boethius, whom he named specifically. Guido remarked at the outset: Nor would anyone ever have discovered anything certain about this art (music) if, in the end, divine goodness had not brought about the following event at its behest. He attributed the fact that the hammers weighed 12, 9, 8, and 6 units and thus produced harmonious sound to God's providence. He also mentioned that Pythagoras, starting from his discovery, had invented the monochord, but did not go into detail about its properties. The work De musica by Johannes Cotto (also known as John Cotton or Johannes Afflighemensis) was illustrated with the blacksmith scene around 1250 by an anonymous book illuminator in the Cistercian Abbey of Aldersbach. Among the medieval music theorists who told the legend of the forge according to Boethius' version, were also Juan Gil de Zámora (Johannes Aegidius von Zamora), active in the late 13th and early 14th centuries, Johannes de Muris and Simon Tunstede in the 14th century, and Adam von Fulda on the threshold of the early modern period in the 15th century. As an opponent of the Pythagorean conception, which held that consonances were based on certain numerical ratios, Johannes de Grocheio emerged in the 13th century, starting from an Aristotelian perspective. Although he explicitly stated that Pythagoras had discovered the principles of music, and he told the legend of the forge citing Boethius, whom he considered trustworthy, he rejected the Pythagorean theory of consonance, which he wanted to reduce to a merely metaphorical expression. Early modern period Franchino Gaffurio published his work Theoricum opus musice discipline ("Theoretical Music Theory") in Naples in 1480, which was revised and republished in 1492 under the title Theorica musice ("Music Theory"). In it, he presented a version of the legend of the forge that surpassed all previous accounts in detail. He based his version on that of Boethius and added a sixth hammer in order to include as many tones of the octave as possible in the narrative. In four pictorial representations, he presented musical instruments or sound generators, each with six harmonic tones, and indicated the numbers 4, 6, 8, 9, 12, and 16 associated with the tones in the labels. 
In addition to the four traditional ratios of the legend (6, 8, 9, and 12), he added 4 and 16, which represent a tone a fifth lower and another tone a fourth higher. The entire sequence of tones now extends not only over one, but over two octaves. These numbers correspond, for example, to the tones f – c' – f' – g' – c" – f": The painter Erhard Sanßdorffer was commissioned in 1546 to create a fresco in the Hessian Büdingen Castle, which is well preserved and represents the history of music starting from the forge of Pythagoras like a compendium. Gioseffo Zarlino also recounted the legend in his work Le istitutioni harmoniche ("The Foundations of Harmony"), which he published in 1558; like Gaffurio, he based his account on Boethius' version. The music theorist Vincenzo Galilei, the father of Galileo Galilei, published his treatise Discorso intorno all'opere di messer Gioseffo Zarlino ("Discourse on the Works of Mr. Gioseffo Zarlino") in 1589, which was directed against the views of his teacher Zarlino. In it, he pointed out that the information in the legend about the loading of strings with weights is not accurate.[1] In 1626, the Thesaurus philopoliticus by Daniel Meisner featured a copper engraving titled "Duynkirchen" by Eberhard Kieser, depicting only three blacksmiths at an anvil. The Latin and German caption reads:[2] Triplicibus percussa sonat varie ictibus incus. Musica Pythagoras struit hinc fundamina princ(eps). The anvil sounds with triple strikes, producing three different tones. Music is the foundation built by Pythagoras, which no donkey's head could have achieved. A few years later, the matter was definitively clarified after Galileo Galilei and Marin Mersenne discovered the laws of string vibrations. In 1636, Mersenne published his Harmonie universelle, in which he explained the physical error in the legend: the vibration frequency is not proportional to the tension, but to its square root.[3] Several composers incorporated this subject matter into their works, including Georg Muffat at the end of the 17th century. and Rupert Ignaz Mayr. Modern era Even in the 19th century, Hegel assumed the physical accuracy of the alleged measurements mentioned in the Pythagoras legend in his lectures on the history of philosophy. Werner Heisenberg emphasized in an essay first published in 1937 that the Pythagorean "discovery of the mathematical determinacy of harmony" is based on "the idea of the meaningful power of mathematical structures", a "fundamental idea that modern exact science has inherited from antiquity"; the discovery attributed to Pythagoras belongs "to the strongest impulses of human science in general". Even more recently, accounts have been published in which the legend is uncritically reproduced without reference to its physical and historical falsehood. For example, in the non-fiction book The Fifth Hammer: Pythagoras and the Disharmony of the World by Daniel Heller-Roazen. Sources Gottfried Friedlein (Hrsg.): Anicii Manlii Torquati Severini Boetii de institutione arithmetica libri duo, de institutione musica libri quinque. Minerva, Frankfurt am Main 1966 (Nachdruck der Ausgabe Leipzig 1867, online; deutsche Übersetzung online) Michael Hermesdorff (Übersetzer): Micrologus Guidonis de disciplina artis musicae, d. i. Kurze Abhandlung Guidos über die Regeln der musikalischen Kunst. Trier 1876 (online) Ilde Illuminati, Fabio Bellissima (Hrsg.): Franchino Gaffurio: Theorica musice. Edizioni del Galluzzo, Firenze 2005, , S. 
66–71 (lateinischer Text und italienische Übersetzung) Further reading Walter Burkert: Weisheit und Wissenschaft. Studien zu Pythagoras, Philolaos und Platon (= Erlanger Beiträge zur Sprach- und Kunstwissenschaft. Band 10). Hans Carl, Nürnberg 1962 Anja Heilmann: Boethius' Musiktheorie und das Quadrivium. Eine Einführung in den neuplatonischen Hintergrund von "De institutione musica". Vandenhoeck & Ruprecht, Göttingen 2007, , S. 203–222 () Werner Keil (Hrsg.): Basistexte Musikästhetik und Musiktheorie. Wilhelm Fink, Paderborn 2007, , S. 342–346 () Barbara Münxelhaus: Pythagoras musicus. Zur Rezeption der pythagoreischen Musiktheorie als quadrivialer Wissenschaft im lateinischen Mittelalter (= Orpheus-Schriftenreihe zu Grundfragen der Musik. Band 19). Verlag für systematische Musikwissenschaft, Bonn–Bad Godesberg 1976 Jørgen Raasted: A neglected version of the anecdote about Pythagoras's hammer experiment. In: Cahiers de l'Institut du Moyen-Âge grec et latin. Band 31a, 1979, S. 1–9 Leonid Zhmud: Wissenschaft, Philosophie und Religion im frühen Pythagoreismus. Akademie Verlag, Berlin 1997, See also Just intonation References Acoustics Ancient Greek science Pythagoreanism Musical tuning
Pythagorean hammers
[ "Physics" ]
7,151
[ "Classical mechanics", "Acoustics" ]
1,621,271
https://en.wikipedia.org/wiki/International%20Society%20of%20Automation
The International Society of Automation (ISA) is a non-profit technical society for engineers, technicians, businesspeople, educators and students, who work, study or are interested in automation and pursuits related to it, such as instrumentation. Originally known as the Instrument Society of America, the society is more commonly known by its acronym, ISA. The society's scope now includes many technical and engineering disciplines. ISA is one of the foremost professional organizations in the world for setting standards and educating industry professionals in automation. Instrumentation and automation are some of the key technologies involved in nearly all industrialized manufacturing. Modern industrial manufacturing is a complex interaction of numerous systems. Instrumentation provides regulation for these complex systems using many different measurement and control devices. Automation provides the programmable devices that permit greater flexibility in the operation of these complex manufacturing systems. In 2019, ISA announced the formation of the ISA Global Cybersecurity Alliance to promote the ISA/IEC 62443 series of standards, which are the world’s only consensus-based security standards for automation and control system applications. Structure The International Society of Automation is a non-profit member-driven organization, which is built on a backbone of volunteers. Volunteers, working together with the ISA's full-time staff, are key to the ongoing mission and success of the organization. ISA has a strong leadership development program that develops volunteer leaders as they get involved with the organization's many different facets. ISA offers volunteers several different ways to get involved, rooted in the organization's sections, divisions, and standards committees. ISA members are typically assigned to an ISA Section (local chapter) based on their geographic location. Members can then join ISA Divisions which correspond to their individual technical interests. ISA Standards Committees are open to participation by both ISA members and non-members. In addition to the member-driven aspects of the ISA, the organization itself is divided into departments headed by a director. These departments are: Education, Training & Publications Marketing & Graphics Membership IT Sales Standards Finance Customer/Member Service History ISA was officially established as the Instrument Society of America on 28 April 1945, in Pittsburgh, Pennsylvania. The society grew out of the desire of 18 local instrument societies to form a national organization. It was the brainchild of Richard Rimbach of the Instruments Publishing Company. Rimbach is recognized as the founder of ISA. Industrial instruments, which became widely used during World War II, continued to play an ever-greater role in the expansion of technology after the war. Individuals like Rimbach and others involved in industry saw a need for the sharing of information about instruments on a national basis, as well as for standards and uniformity. The Instrument Society of America addressed that need. Albert F. Sperry, chairman of Panelit Corporation, became ISA's first president in 1946. In that same year, the Society held its first conference and exhibit in Pittsburgh. The first standard, RP 5.1 Instrument Flow Plan Symbols, followed in 1949, and the first journal was published in 1954. 
In the years following, ISA continued to expand its products and services, increasing the size and scope of the ISA conference and exhibition, developing symposia, offering professional development and training, adding technical Divisions, and even producing films about measurement and control. Membership grew from 900 in 1946 to 6,900 in 1953, and as of 2019, ISA members number approximately 32,000 from over 100 countries. In 1980, ISA moved its headquarters to Research Triangle Park (RTP), North Carolina, and a training center was established in nearby Raleigh. In 1997, the headquarters and training center were consolidated in a new building in RTP, where the society's day-to-day activities are managed by a professional staff of approximately 75. Recognizing the fact that ISA's technical scope had grown beyond instruments and that its reach went beyond "America", in the fall of 2000 the ISA Council of Society Delegates approved a legal name change to ISA—The Instrumentation, Systems, and Automation Society. Today, ISA's corporate branding strategy focuses exclusively on the letters, though ISA's official, legal name remains the same. In recent years, ISA has assumed a more global orientation, hiring multilingual staff and a director of global operations, chartering new sections in several countries outside the United States and Canada, issuing publications in Spanish. On October 2, 2007, the Council of Society Delegates deliberated a proposal to change the society's legal name to "International Society of Automation". A majority vote favored the action. However, since the 2/3 majority required for a bylaw change was not achieved, the proposal was not adopted. On October 13, 2008, the Council of Society Delegates deliberated a proposal to change the society's legal name to "International Society of Automation". The majority vote favored the action and the proposal was adopted. Membership ISA membership is organized into particular grades: Honorary, Fellow, Senior Member, Member, and Student Member. Honorary membership is conferred only upon those individuals who have made noteworthy contributions to the profession and does not require payment of dues. Professional members pay dues of $100 per year, and student dues are $10 annually. Members in certain countries with lower per capita GDP (relative to US & Europe) may pay dues at a reduced rate, and a grade of "virtual member", with very limited benefits is available for annual dues of $5 to students in certain circumstances. After 25 years of membership and satisfaction of an age requirement, members are eligible to become Life Members and exempt from dues payment. The benefits of ISA membership include, among other things, affiliation with an ISA section (see below), a subscription to the ISA's bimonthly flagship magazine InTech, discounts on ISA's products, events and services, and the privilege of viewing ISA standards, recommended practices, and technical papers at no extra charge. In 2012, ISA introduced a free membership program called an Automation Community Member. Sections and districts Local ISA chapters are known as ISA Sections. A "regular" section consists of at least 30 members (not including student members). Sections are commonly organized around a specific geographic area, e.g. Seattle Section, Connecticut Valley Section, Greater Oklahoma Section, France Section etc. There are nearly 170 chartered sections in around 30 countries in North America, South America, Europe, Asia, and the Middle East. 
Sections are separately incorporated, according to the laws of the state, province or other political subdivision in which they are located. They are not units of ISA, although their bylaws may not conflict with ISA's. As of 2012, there are 146 sections. Many sections sponsor training courses, conduct periodic trade shows, and act as a resource to the local industrial community. Reflecting their primacy in ISA's early days, sections retain pre-eminent governance authority, as ISA's legislative body, the Council of Society Delegates, is composed of section representatives (delegates) who hold voting power equal to the size of their membership. ISA also has nearly 200 student sections, in locations all over the world, principally where the economy has a substantial manufacturing component, and instrumentation and industrial automation are vital academic programs. Some student sections have found it difficult to remain active, as it is necessary to continually replace graduates with newer students, and membership is consequently very fluid. Sections are located within districts, of which there are 14, and which comprise large geographic areas of the world. Each one is headed by a vice president. Districts 1,2,3,5,6,7,8,9, and 11 are in the US (although District 7 also includes Mexico and Central America, and District 3 includes Puerto Rico). Districts 10 and 13 are in Canada. District 4 is South America (including the Trinidad Section). District 12 is Europe and the Middle East, and District 14 is the Asia-Pacific sphere. ISA formerly had geographic subdivisions known as "regions", which were part of the short lived "ISA International" (1988–1996). At varying intervals following the disestablishment of ISA International, the European Region became District 12, the India Region became District 14, and the South America Region became District 4 . Technical divisions ISA's 16 technical divisions, established for the purpose of increased information exchange within tightly focused segments of the fields of instrumentation, systems, and automation are organized under the Automation & Technology or Industries & Sciences Departments, depending upon the nature of the division. The ISA Technical Divisions are: Aerospace / Test Measurement Division Analysis Division Automatic Controls and Robotics Division Automation Project Management and Delivery Division Building Automation Systems Division Chemical and Petroleum Industries Division Construction and Design Division Education and Research Division Food and Pharmaceutical Industries Division Mining and Metals Industries Division Power Industry Division Process Measurement and Control Division Pulp and Paper Industry Division Safety and Security Division Smart Manufacturing and IIoT Division Water and Wastewater Industries Division Standards ISA standards play a major role in the work of instrumentation and automation professionals. Many ISA standards have been recognized by the American National Standards Institute (ANSI). Many ISA standards have also been adopted as international standards by the International Electrotechnical Commission (IEC). Standards committees ISA standards are developed using a consensus-based model employing volunteer standards committees of automation professionals from across industries. The ANSI standards development model is used with standards committees having the characteristics of Openness, Lack of Dominance, Balance, Consensus and a Right of Appeal. 
All ISA standards processes are overseen by the ISA Standards & Practices Board. As of 2012, there were more than 3500 participating individuals on ISA standards committees, from over 40 countries, representing more than 2000 companies and organizations. ISA standards cover a wide range of concepts of importance to instrumentation and automation professionals. ISA has standards committees for symbols and nomenclature used within the industry, safety standards for equipment in non-hazardous and hazardous environments, communications standards to permit interoperable equipment availability from several manufacturers, and additional committees for standards on many more technical issues of importance to the industry. An example of one significant ISA standard is the ANSI/ISA-50.02 Fieldbus Standard for Use in Industrial Control Systems, which is a product of the ISA-50 Signal Compatibility of Electrical Instruments committee. Another significant ISA standard family is the batch processing standards of ANSI/ISA-88.00.01 Models and Terminology, ANSI/ISA-88.00.02 Data Structures and Guidelines for Languages, and ANSI/ISA-88.00.03 General and Site Recipe Models and Representation, which are products of the ISA-88 Batch Control committee. Other standards developed by ISA include: ISA100.11a is for testing and certification of wireless products and systems. This standard was approved by the International Electrotechnical Commission (IEC) as a publicly available specification, or PAS, in September 2011. ISA95 is an international standard for developing an automated interface between enterprise and control systems. As of 2012, the Society had over 162 published standards, recommended practices, and technical reports. Security Standards for Automation and Control Systems The International Society of Automation also produces the ISA-62443 standards as part of the information security standards. The security of private industries and governmental installations is often dependent on the reliable functioning of an industrial control system. This is a highly debated subject that has considerable importance for the security of the critical infrastructure of any country. For example, International Society of Automation security standards are mentioned on the United States Computer Emergency Response Team website. The ISA has formed the ISA Security Compliance Institute to promote and designate cyber-secure products and practices for industrial automation suppliers and operational sites. Conferences, symposia and shows Division symposia ISA also holds both industry and technology-specific symposia on a wide variety of topics. Local section events ISA Sections will often host their own local trade shows called Section Expos, member events, and/or sponsored training in their individual geographic areas. Publishing Periodicals ISA's technical magazine, InTech, is one of the benefits of ISA membership. Its circulation includes all 31,000 ISA members, as well as several thousand other recipients, who are classified as "qualified" subscribers. Total circulation is about 60,000 in print and a further 40,000 through the web-based digital edition. The quarterly publication ISA Transactions, published by Elsevier, is a refereed journal of scholarly material, for which the intended audience is research and development personnel from academia and industry in the field of process instrumentation, systems, and automation. 
ISA formerly published Industrial Computing, of the now-inactive Industrial Computing Society as well as Motion Control, a magazine devoted to professionals in this discipline. Although the print version was discontinued in 2001, it continued online for a period of time. Books ISA publishes and distributes books which offer thorough coverage of the world of automation. ISA books are organized by the technical categories which are generally considered as defining automation: Basic continuous control Basic discrete, sequencing and manufacturing control Advanced control Reliability, safety, and electrical Integration and software Deployment and maintenance Work structure Standards The ISA publishes its standards, recommended practices and technical reports in a variety of formats. These include printed hardcopy, downloadable PDF, web-based viewable, CDROM/DVD and network licenses. Training, certification and education Training ISA training products include classroom-based training, mobile training courses, in-plant training, online courses, and printed course materials. The ISA also provides in-house training for a number of large corporations in the oil/gas and chemical industries. Technical papers archive The ISA has an online, searchable collection of technical papers which are available to ISA members and to digital library subscribers. As of 2012, the library has over 3000 technical papers. Certification programs ISA manages two certification programs, Certified Automation Professional (CAP), and Certified Control Systems Technician (CCST). Each of these is designed to be an objective, third-party assessment and confirmation of an individual's professional abilities and technical skills. Each certification is granted based on a combination of formal education/training, professional experience, and performance on a written examination. The CCST program was established in the early 1990s and because of an obvious industry need, rapidly gained credibility. There are now approximately 4,000 ISA certified technicians worldwide. The CAP program, launched in 2004, is still in the process of becoming established within the industrial community and gaining recognition. As of 2012, there are over 500 certified CAPs worldwide. The ISA used to have a third certification program called Certified Industrial Maintenance Mechanic (CIMM) which was established in 2004. In 2010, the CIMM program was transferred to the Society for Maintenance and Reliability Professionals. The SMRP renamed the CIMM certification to the Certified Maintenance and Reliability Technician (CMRT). References International organizations based in the United States Organizations based in North Carolina Engineering organizations Standards organizations in the United States Industrial automation
International Society of Automation
[ "Engineering" ]
2,989
[ "nan", "Industrial automation", "Automation", "Industrial engineering" ]
1,621,361
https://en.wikipedia.org/wiki/Pro-verb
In linguistics, a pro-verb is a word or partial phrase that substitutes for a contextually recognizable verb phrase (via a process known as grammatical gapping), obviating the need to repeat an antecedent verb phrase. A pro-verb is a type of anaphora that falls within the general group of word classes called pro-forms (a pro-verb is an analog of the pronoun that applies to verbs instead of nouns). Many languages use a replacement verb as a pro-verb to avoid repetition, for example English "do" ("I like pie, and so does he"). The parallels between the roles of pronouns and pro-verbs in language are "striking": both are anaphoric and coreferential, able to replace very complex syntactic structures. The latter property sometimes makes it impossible to replace a pro-verb with a verb, so its utility (like that of a pronoun) goes beyond the stylistic variation of word substitution. When choosing between substituting a pro-verb and repeating a verb, in multiple languages, including English, French, and Swedish, repetition is preferred by a wide margin (up to an 80% to 20% ratio in modern French). In many cases this is due to the presence of different objects, as in "I will read your letter every day, as a Christian reads the Gospels". The chance of using a pro-verb increases as the complexity of the verb phrase being replaced grows; verbs in the passive voice have a lower chance of being substituted by a pro-verb. The pro-verb construction can be applied when a "direct construction" (without a preposition before the object) is used in the verb phrase, or with an "indirect construction" with a preposition. In the latter case, the preposition, depending on the language and context, can be either omitted from the pro-verb construct, added, copied, or modified. For example, in modern Swedish any preposition in a verb phrase is replaced by a single fixed preposition in the pro-verb (cf. the insertion of "with" in English, "You can organize voicemail in folders as you do with email"). In English The term "pro-verb" has been used in English linguistics since the 19th century; a standard example is provided by variations of the verb "do": "I liked the movie; she did too" (did stands for "liked it"). Discussions about the precise role of "do" (and "do it") in this context continue in the 21st century. English does not have dedicated pro-verbs. Auxiliary and catenative verbs that take bare infinitives can be said to double as pro-verbs by implying rather than expressing them (including most of the auxiliary verbs). Similarly, the auxiliary verbs have and be can double as pro-verbs for perfect, progressive, and passive constructions by eliding the participle. When there is no other auxiliary or catenative verb, do can be used as with do-support unless the antecedent verb is to be. The following are some examples of these kinds of pro-verb: Who can tell? —No one can. Why can't he do it? —He can; he just won't. I like pie, as does he. Why did you break the jar? —He made me. Can you go to the park? No, I cannot [go to the park]. Note that, when there are multiple auxiliary verbs, some of these may be elided as well. For example, in reply to "Who's been leaving the milk out of the refrigerator?", any of "You've been doing it", "You have been", or "You have" would have the same meaning. Since a to-infinitive is just the particle to plus a bare infinitive, and a bare infinitive can be elided, the particle to doubles as a pro-verb for a to-infinitive: Clean your room! 
—I don't want to. He refused to clean his room when I told him to. Finally, even in dialects where bare infinitives and participles can be elided, there does exist the pro-verb do so: "He asked me to leave, so I did so". This pro-verb, unlike the above-described pro-verbs, can be used in any grammatical context; however, in contexts where another pro-verb could be used, it can be overly formal. For example, in "I want to get an 'A', but to do so, I need to get a perfect score on the next test," there is no other pro-verb that could be used; whereas in "I want to get an 'A', but I can't do so," the do so could simply be elided, and doing so would make the sentence sound less formal. Some works, like A Comprehensive Grammar of the English Language, would consider pro-verbs in English as purely substitutional, unlike the coreferential pronouns. English primarily uses direct objects, and the absence of prepositions is usually mirrored in the pro-verb ("You don't love me as much as I do you"). However, in the past, a preposition could have been affixed to the pro-verb: "She let him go — as a cat might have done to a mouse" (Dickens). In modern English, as found online, there is a tendency to insert "with", especially if the verb phrase is complex or the pro-verb precedes the verb phrase ("As you do with your driver, you need to swing this club with a sweeping motion"). In Swedish The Swedish pro-verb meaning "do"/"make" is described by scholars of the Swedish language as forming a "pronominal verb phrase", a term that reflects its typical use with a pronoun ("do it"). The pro-verb phrases in Swedish use an indirect construction for the object. While in the past the preposition appears to have typically been omitted in the pro-verb, the modern language requires a single fixed preposition regardless of the preposition in the verb phrase (or the absence of one), even if this produces awkward syntax ("You can read this book as one does with most books"). In French In French, terms meaning "vicarious do" are used to describe the pro-verb, whose role is played by the verb meaning "make" or "do". Olof Eriksson, a professor of French linguistics, offers an example to illustrate that pro-verbs in French are not purely substitutional: the replacement makes it possible to open a comparative clause in the proper syntactic context, attaching it to the preceding clause as a whole. The pronominal object in French naturally precludes the use of pro-verbs: "You don't love me as much as I do you" cannot be translated into French using the pro-verb. Four prepositions can be used between a pro-verb and an object. One of them is the most used, but another, currently in third place by frequency of use, is rapidly catching up; Eriksson explains the tendency by the close proximity of this preposition to English "with". In Russian Pro-verbs are generally absent in Russian. One of the rare exceptions, a phrase meaning "do it", is used similarly to its English equivalent (but rarely). The other example is provided by the colloquial use of the extremely obscene expressions ("mat"), where verb derivatives of the most used obscene roots (the "obscene triad") have lost their original semantics and their meaning is defined almost entirely by the affixes and context. In Chinese In Chinese, the pro-verb role is played by the character 来 ("to come"): in such uses, the meaning of 来 is actually "do it". Understanding that 来 can be an equivalent of a pronoun for verbs was first suggested by Zhao Yuanren in 1968. See also Do-support References Sources Parts of speech
Pro-verb
[ "Technology" ]
1,695
[ "Parts of speech", "Components" ]
1,621,441
https://en.wikipedia.org/wiki/Astylar
Astylar (from Gr. ἀ-, privative, and στῦλος, a column) is an architectural term for a design that uses neither columns nor pilasters for decorative purposes; thus the Riccardi and Strozzi palaces in Florence are astylar in their design, as opposed to Palladio's palaces at Vicenza, which are columnar. References Architectural terminology
Astylar
[ "Engineering" ]
84
[ "Architectural terminology", "Architecture" ]
1,621,449
https://en.wikipedia.org/wiki/Swarming%20%28honey%20bee%29
Swarming is a honey bee colony's natural means of reproduction. In the process of swarming, a single colony splits into two or more distinct colonies. Swarming is mainly a spring phenomenon, usually within a two- or three-week period depending on the locale, but occasional swarms can happen throughout the producing season. Secondary afterswarms, or cast swarms may happen. Cast swarms are usually smaller and are accompanied by a virgin queen. Sometimes a beehive will swarm in succession until it is almost totally depleted of workers. One species of honey bee that participates in such swarming behavior is Apis cerana. The reproduction swarms of this species settle away from the natal nest for a few days and will then depart for a new nest site after getting information from scout bees. Scout bees search for suitable cavities in which to construct the swarm's home. Successful scouts will then come back and report the location of suitable nesting sites to the other bees. Apis mellifera participates in a similar swarming process as they both evolved from the same ancestors. However, allopatric speciation forced them to evolve into different environments. Swarming is prevalent in both rural and urban areas, the latter is due to issues such as inadequate beekeeping. When not properly managed bee swarms split off from their hive and find new homes in city infrastructure. This requires intervention by professional beekeepers to relocate them to a proper home. Preparation Worker bees create queen cups throughout the year. When the hive is getting ready to swarm, the queen lays eggs into the queen cups. New queens are raised and the hive may swarm as soon as the queen cells are capped and before the new virgin queens emerge from their queen cells. A laying queen is too heavy to fly long distances. Therefore, the workers will stop feeding her before the anticipated swarm date and the queen will stop laying eggs. Swarming creates an interruption in the brood cycle of the original colony. During the swarm preparation, scout bees will simply find a nearby location for the swarm to cluster. When a honey bee swarm emerges from a hive they do not fly far at first. They may gather in a tree or on a branch only a few metres from the hive. There, they cluster about the queen and send 20–50 scout bees out to find suitable new nest locations. This intermediate stop is not for permanent habitation and they will normally leave within a few hours to a suitable location. It is from this temporary location that the cluster will determine the final nest site based on the level of excitement of the dances of the scout bees. It is unusual if a swarm clusters for more than three days at an intermediate stop. During the intermediate stop, swarm performs thermoregulation, maintaining its cluster core temperature at 34-36 degree Celsius and its cluster mantle temperature above 15 degree Celsius by reducing heat loss. As soon as the scout bees find a new home, swarm maintains its mantle temperature to 34-36 degree Celsius which is required for the flight. This process is necessary for the conservation of energy supply. Swarming creates a vulnerable time in the life of honey bees. Swarms are provisioned only with the nectar or honey they carry in their stomachs. A swarm will starve if it does not quickly find a home and more nectar stores. This happens most often with early swarms that leave on a warm day that is followed by cold or rainy weather in spring. 
The remnant colony, after having produced one or more swarms, is usually well provisioned with food. But, the new queen can be lost or eaten by predators during her mating flight, or poor weather can prevent her mating flight. In this case the hive has no further young brood to raise additional queens, and it will not survive. A cast swarm will usually contain a young virgin queen. Absconding The propensity to swarm differs among the honey bee species. Africanized bees are notable for their propensity to swarm or abscond. Absconding is a process where the whole hive leaves rather than splits like in swarming. This process is mainly determined by climate and effects of climate change and nectar flow. Poor physical conditions such as entry of water into the hive, excessively high temperatures due to lack of shade or shortage of water, the proximity of bush fires or excessive disturbance can also encourage colonies to abscond. Being tropical bees, they tend to swarm or abscond any time food is scarce, thus making themselves vulnerable in colder locales. Mainly for lack of sufficient winter stores, the Africanized bee colonies tend to perish in the winter in higher latitudes. There are generally two types of absconding where one is planned or resource planned. The resource planned occurs due to scarcity of water or pollen and unplanned occurs due to predation and infestation by undesirable pests. Generally, a weak bee colony will not swarm until the colony has produced a larger population of bees. Weak bee colonies can be the result of low food supply, disease such as foulbrood disease, or from a queen that produces low quantities of eggs. Nest site selection A good nesting site for honey bees must be large enough to accommodate their swarm (minimum in volume, preferably ≈). It should be well protected from the elements, and have a small entrance (approximately ) located at the bottom of the cavity. It must receive a certain amount of warmth from the sun, and should not be infested with ants. In addition to these criteria, nest sites with abandoned honeycombs, if the scout bees can find one, are preferred, because this allows the bees to better conserve their resources. The scout bees are the most experienced foragers in the resting swarm cluster. Only 3-5% of scout bees are selected for site selection. An individual scout returning to the cluster promotes a location she has found. She uses the waggle dance to indicate its direction, distance, and quality to others in the cluster. The more excited she is about her findings, the more excitedly she dances. If she can convince other scouts to check out the location she found, they take off, check out the proposed site, and choose to promote the site further upon their return. Several sites may be promoted by different scouts at first. After several hours and sometimes days, a favorite location gradually emerges from this decision-making process. In order for a decision to be made in a relatively short amount of time (the swarm can only survive for about three days on the honey on which they gorged themselves before leaving the hive), a decision will often be made when somewhere around 80% of the scouts have agreed upon a single location and/or when there is a quorum of 20–30 scouts present at a potential nest site. (If the swarm waited for less than 80% of the scouts to agree, the bees would lack confidence in the suitability of the site. If they waited for more than 80% of the scouts to agree, the swarm would be wasting its stored honey.) 
When the scout bees agree where to nest, the whole clustered swarm takes off and flies to it. A swarm may fly a kilometer or more to the scouted location, with the scouts guiding the rest of the bees by quickly flying overhead in the proper direction. This collective decision-making process is remarkably successful in identifying the most suitable new nest site and keeping the swarm intact. Beekeeping Swarm control methods Beekeepers who do not wish to increase their number of active hives may use one or more of many methods for swarm control. Most methods simulate swarming to extinguish the swarming drive. Clipping one wing of the queen. When one wing of the queen is clipped, a swarm may issue but due to the queen's inability to fly, the swarm will gather right outside the original hive, where the swarm can be easily collected. Oftentimes gets killed by the frustrated worker bees. Even though this is not a swarm prevention method it is a method of swarm retrieval. In the Demaree method, a frame of capped brood is removed with the old queen. This frame is put in a hive box with empty drawn frames and foundation at the same location of the old hive. A honey super is added to the top of this hive topped by a queen excluder. The remaining hive box sans queen is inspected for queen cells. All queen cells are destroyed. This hive box, which has most of the bees, is put on top of the queen excluder. Foraging bees will return to the lower box depleting the population of the upper box. After a week to ten days both parts are inspected again and any subsequent queen cells destroyed. After another period of separation the swarming drive is extinguished and the hives can be re-combined. Simply keeping the brood nest open is another method of swarm control. In preparation for swarming, bees fill the brood nest with honey. The queen stops laying to be trim enough to fly, and her newly unemployed nurse bees go with her. The concept of this method is to open the brood nest to employ those nurse bees and get the queen laying again and redirect this sequence of events. This is done by any number of slight variations from empty frames in the brood nest, frames of bare foundation in the brood nest or drawn combs in the brood nest, or moving brood combs to the box above to cause more expansion of the brood nest. Checkerboarding. In the late winter, frames are rearranged above the growing brood nest. The frames above the brood nest are alternated between full honey frames and empty drawn out frames or even foundationless frames. It is believed that only colonies that perceive to have enough reserves will attempt to swarm. Checkerboarding frames above the brood nest apparently destroys this sense of having reserves. Colony equalization can be done. The food and brood from the strong colonies can be transferred to weaker colonies which typically removes congestion and helps to strengthen the weaker colonies before the nectar flow. Alternatively, there are also swarm traps with Nasonov pheromone lures that can be used to attract swarms. Beekeepers who are aware that a colony has swarmed may add brood with eggs that is free of mites. Given young brood the bees have a second chance to raise a new queen if the first one fails. Swarm capture Beekeepers are sometimes called to capture swarms that are cast by feral honey bees or from the hives of domestic beekeepers. Most beekeepers will remove a honeybee swarm for a small fee or maybe even free if they are nearby. 
Bee swarms can almost always be collected alive and relocated by a competent beekeeper or bee removal company. Extermination of a bee swarm is rarely necessary and is discouraged if bee removal is possible. There are various methods to capture a swarm. When the swarm first settles down and forms a cluster it is relatively easy to capture the swarm in a suitable box or nuc. One method that can be employed on a sunny day when the swarm is located on a lower branch or small tree is to put a white sheet under the swarm location. A nuc box is put on the sheet. The swarm is sprayed from the outside with a sugar solution (soaking the bees so they become too heavy to fly away) and then vigorously shaken off the branch. The main cluster, hopefully including the queen, will fall onto the white sheet and the bees will quickly go for the first dark entrance space that is in sight, which is the opening of the nuc. An organized march toward the opening will ensue and after 15 minutes the majority of bees will be inside the nuc. This capture method does not work at night. If the swarm is too entangled in its perch to be dropped into a box or onto a sheet, a skep can be suspended over it and gentle smoke used to "herd" the swarm into the skep. Smoke is not recommended to calm a clustered swarm; it will have the opposite effect, as many bees will become agitated and fly about instead of settling down. A bee vac can also be used. Human behavior A swarm of bees sometimes frightens people, though the bees are usually not aggressive at this stage of their life cycle. This is principally due to the swarming bees' lack of brood (developing bees) to defend and their interest in finding a new nesting location for their queen. This does not mean that bees from a swarm will not attack if they perceive a threat; however, most bees only attack in response to intrusions against their colony. Additionally, bees seldom swarm except in warm, sunny conditions. Swarm clusters, hanging from a tree branch, will move on and find a suitable nesting location in a day or two. Encountering a bee swarm for the first time can be alarming. Bees tend to swarm near their hives or honeycombs, so if a swarm is visible then a nest is nearby. Swarms are usually not aggressive unless provoked, so it is important to keep a good distance from swarms in order to avoid provoking them. Conversely, there are many human activities that disrupt bee swarms and their nesting sites. These include the growth of urbanization as well as practices such as logging. These activities cause bees to lose their homes and become displaced. Furthermore, these and other human activities, such as the use of pesticides, affect the nectar sites available for bees to get food and the flowers they pollinate. Urbanization's influence on swarming bees Ecological impact of urbanization Urbanization is a significant anthropogenic change in an ecosystem and has a significant impact on wildlife, with bees decreasing in species richness and diversity compared to their rural counterparts. Despite this, honey bees are more prevalent than other insects in urban areas because many more are brought into the area by beekeepers. This, along with bee-friendly areas in the city, allows for pollination of wildflowers and crops. Urbanization also poses a threat from pathogenic diseases such as the fungal parasite Nosema ceranae and the parasitic mite Varroa destructor, which contribute to the loss of honey bee colonies.
As a result, both managed and wild bees have declined in abundance over the centuries. Human encroachment such as agriculture, livestock management, and deforestation inflicts habitat loss and habitat fragmentation on bee colonies. Honey bees in urban environments differ in many respects from those in natural environments. The change in setting also causes a change in the genome of the bees: it has been found that there is significant genetic variation between worker bees in urban environments and those in rural environments. A study by Patenković et al. demonstrated this difference by analyzing 82 worker bees and comparing them to 241 samples of rural bees from 46 different areas. Urban presence of bees Bees are typically known for being managed in apiaries, with feral (wild) bees assumed to have died out from pathogens such as Varroa destructor, which can decimate colonies if left untreated. Despite this, researchers Bila Dubaic et al. have discovered numerous feral swarms in urban areas. In particular, in Belgrade, the capital of Serbia, an analysis over seven years uncovered a total of more than five hundred verified swarms reported by the public. Upon further analysis, feral bee swarm prevalence was higher in areas of greater population density, and about half the swarming and nesting sites involved urban architecture. Genetic diversity There is a significant amount of genetic diversity illustrated in swarming bees. This is highlighted when comparing feral bees in urban landscapes, managed colonies in urban landscapes, and their rural counterparts. An analysis of this genetic diversity performed by Patenković et al. demonstrated this difference. It was found that not only did urban bees differ from their rural counterparts, but within the same urban area the feral bees differed genetically from the managed colonies. Reasons for this include the small genetic pool that managed colonies breed from. Furthermore, the variation between urban and rural areas may be due to the low pesticide use that allows for greater floral diversity in urban areas. Overall, the urban environment, although not as optimal as natural habitats, provides enough substitutes through viable foraging and nesting sites. References External links Swarm Prevention — MAAREC Western honey bee behavior Bee ecology Animal migration Apiculture
Swarming (honey bee)
[ "Biology" ]
3,298
[ "Ethology", "Behavior", "Animal migration" ]
1,621,535
https://en.wikipedia.org/wiki/Woodstock%20of%20physics
The Woodstock of physics was the popular name given by physicists to the marathon session of the American Physical Society’s meeting on March 18, 1987, which featured 51 presentations of recent discoveries in the science of high-temperature superconductors. Various presenters anticipated that these new materials would soon result in revolutionary technological applications, but in the three subsequent decades, this proved to be overly optimistic. The name is a reference to the 1969 Woodstock Music and Art Festival. Leading up to the meeting Before a series of breakthroughs in the mid-1980s, most scientists believed that the extremely low temperature requirements of superconductors rendered them impractical for everyday use. However, in June 1986, K. Alex Müller and Georg Bednorz, working at IBM Zurich, raised the record critical temperature for superconductivity to 35 K above absolute zero in lanthanum barium copper oxide (LBCO); the previous record of 23 K had stood for 17 years. Their discovery stimulated a great deal of additional research in high-temperature superconductivity. By March 1987, a flurry of recent research on ceramic superconductors had succeeded in creating ever-higher superconducting temperatures, including the discovery by Maw-Kuen Wu and Jim Ashburn at the University of Alabama of a critical temperature above 77 K in yttrium barium copper oxide (YBCO). This result was followed by Paul C. W. Chu at the University of Houston's announcement of a superconductor that operated at a temperature that could be achieved by cooling with liquid nitrogen. The scientific community was abuzz with excitement. Events The discoveries were so recent that no papers on them had been submitted by the deadline. However, the Society added a last-minute session to their annual meeting to discuss the new research. The session was chaired by physicist M. Brian Maple, a superconductor researcher himself, who was one of the meeting's organizers. It was scheduled to start at 7:30 pm in the Sutton ballroom of the New York Hilton, but excited scientists started lining up at 5:30. Key researchers such as Chu and Müller were given 10 minutes to describe their research; other physicists were given five minutes. Nearly 2,000 scientists tried to squeeze into the ballroom. Those who could not find a seat filled the aisles or watched outside the room on television monitors. The session ended at 3:15 am, but many lingered until dawn to discuss the presentations. The meeting caused a surge in mainstream media interest in superconductors, and laboratories around the world raced to pursue breakthroughs in the field. In October of the same year, Bednorz and Müller were awarded the Nobel Prize in Physics "for their important break-through in the discovery of superconductivity in ceramic materials", setting a record for the shortest time between a discovery and the prize award for any scientific Nobel Prize category. Sequels Woodstock of physics II By the following year (1988), two new families of copper-oxide superconductors, the bismuth-based (so-called BSCCO) and the thallium-based (TBCCO) materials, had been discovered. Both of these have superconducting transitions above 100 K. So at the follow-up March APS meeting in New Orleans a special evening session called Woodstock of Physics-II was hastily organized to highlight the synthesis and properties of these new, first-ever 'triple digit superconductors'. The format of the session was the same as in New York.
Some of the panelists were repeats from the original "Woodstock" session. Additional researchers including Allen M. Hermann (at that time at the University of Arkansas), the co-discoverer of the thallium system, and Laura H. Greene (then with AT&T Labs) were panelists. The 1988 session was chaired by Timir Datta from the University of South Carolina. 20 year anniversary On March 5, 2007, many of the original participants reconvened in Denver to recognize and review the session on its 20-year anniversary; the "reunion" was again chaired by Maple. See also List of physics conferences Notes References External links Video recordings (published in 2016 by the American Physical Society, announcement: Experience the 1987 "Woodstock of Physics" Online) Superconductivity 1987 in science History of physics Physics conferences
Woodstock of physics
[ "Physics", "Materials_science", "Engineering" ]
883
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
1,621,696
https://en.wikipedia.org/wiki/National%20Emissions%20Standards%20for%20Hazardous%20Air%20Pollutants
The National Emission Standards for Hazardous Air Pollutants (NESHAP) are air pollution standards issued by the United States Environmental Protection Agency (EPA). The standards, authorized by the Clean Air Act, are for pollutants not covered by the National Ambient Air Quality Standards (NAAQS) that may cause an increase in fatalities or in serious, irreversible, or incapacitating illness. Maximum Achievable Control Technology standards The standards for a particular source category require the maximum degree of emission reduction that the EPA determines to be achievable, which is known as the Maximum Achievable Control Technology (MACT) standards. These standards are authorized by Section 112 of the 1970 Clean Air Act and the regulations are published in the Code of Federal Regulations (CFR). Pollutants EPA regulates the following hazardous air pollutants with the MACT standards. For all listings above which contain the word "compounds" and for glycol ethers, the following applies: Unless otherwise specified, these listings are defined as including any unique chemical substance that contains the named chemical (i.e., antimony, arsenic, etc.) as part of that chemical's infrastructure. X'CN where X = H' or any other group where a formal dissociation may occur. For example, KCN or Ca(CN)2 Includes mono- and di- ethers of ethylene glycol, diethylene glycol, and triethylene glycol where n = 1, 2, or 3 R = alkyl C7 (chain of 7 carbon atoms) or less; or phenyl or alkyl substituted phenyl R' = H or alkyl C7 or less; or OR' consisting of carboxylic acid ester, sulfate, phosphate, nitrate, or sulfonate. Polymers are excluded from the glycol category, as well as surfactant alcohol ethoxylates (where R is an alkyl C8 or greater) and their derivatives, and ethylene glycol monobutyl ether (CAS 111-76-2). Includes mineral fiber emissions from facilities manufacturing or processing glass, rock, or slag fibers (or other mineral derived fibers) of average diameter 1 micrometer or less. Includes organic compounds with more than one benzene ring, and which have a boiling point greater than or equal to 100 °C. A type of atom which spontaneously undergoes radioactive decay. Pollution sources Most air toxics originate from human-made sources, including mobile sources (e.g., cars, trucks, buses) and stationary sources (e.g., factories, oil refineries, power plants), as well as indoor sources (e.g., building materials and activities such as cleaning). There are two types of stationary sources that generate routine emissions of air toxics: Major sources are defined as sources that emit 10 or more tons per year of any of the listed toxic air pollutants, or 25 or more tons per year of a mixture of air toxics. These sources may release air toxics from fugitive emissions (equipment leaks), when materials are transferred from one location to another, or during discharge through emission stacks or vents. Area sources consist of smaller facilities that release lesser quantities of toxic pollutants into the air. Area sources are defined as sources that do not emit more than 10 tons per year of a single air toxic or more than 25 tons per year of a combination of air toxics. Although the emissions from individual area sources are often relatively small, collectively their emissions can be of concern, particularly where large numbers of sources are located in heavily populated areas. EPA published its initial list of source categories in 1992. 
Subsequently the agency issued several revisions and updates to the list and the regulatory promulgation schedule. For each listed source category, EPA indicates whether the sources are considered to be major sources or area sources. The 1990 Clean Air Act Amendments direct EPA to set standards for all major sources of air toxics, and for some area sources that are of particular concern. EPA is required to review all source category regulations every eight years. See also Air pollution in the United States Mercury and Air Toxics Standards References External links Hazardous Air Pollutants - EPA Air pollution in the United States Chemical safety Emission standards Standards of the United States United States Environmental Protection Agency
National Emissions Standards for Hazardous Air Pollutants
[ "Chemistry" ]
899
[ "Chemical safety", "Chemical accident", "nan" ]
1,621,705
https://en.wikipedia.org/wiki/National%20Ambient%20Air%20Quality%20Standards
The U.S. National Ambient Air Quality Standards (NAAQS, pronounced ) are limits on atmospheric concentration of six pollutants that cause smog, acid rain, and other health hazards. Established by the United States Environmental Protection Agency (EPA) under authority of the Clean Air Act (42 U.S.C. 7401 et seq.), NAAQS is applied for outdoor air throughout the country. The six criteria air pollutants (CAP), or criteria pollutants, for which limits are set in the NAAQS are ozone (O3), atmospheric particulate matter (PM2.5/PM10), lead (Pb), carbon monoxide (CO), sulfur oxides (SOx), and nitrogen oxides (NOx). These are typically emitted from many sources in industry, mining, transportation, electricity generation and agriculture. In many cases they are the products of the combustion of fossil fuels or industrial processes. The National Emissions Standards for Hazardous Air Pollutants cover many other chemicals, and require the maximum achievable reduction that the EPA determines is feasible. Background The six criteria air pollutants were the first set of pollutants recognized by the United States Environmental Protection Agency as needing standards on a national level. The Clean Air Act requires the EPA to set US National Ambient Air Quality Standards (NAAQS) for the six CAPs. The NAAQS are health based and the EPA sets two types of standards: primary and secondary. The primary standards are designed to protect the health of 'sensitive' populations such as asthmatics, children, and the elderly. The secondary standards are concerned with protecting the environment. They are designed to address visibility, damage to crops, vegetation, buildings, and animals. The EPA established the NAAQS according to Sections 108 and 109 of the U.S. Clean Air Act, which was last amended in 1990. These sections require the EPA "(1) to list widespread air pollutants that reasonably may be expected to endanger public health or welfare; (2) to issue air quality criteria for them that assess the latest available scientific information on nature and effects of ambient exposure to them; (3) to set primary NAAQS to protect human health with adequate margin of safety and to set secondary NAAQS to protect against welfare effects (e.g., effects on vegetation, ecosystems, visibility, climate, manmade materials, etc); and (5) to periodically review and revise, as appropriate, the criteria and NAAQS for a given listed pollutant or class of pollutants." Descriptions Ground level ozone (O3): Ozone found on the surface-level, also known as tropospheric ozone is also regulated by the NAAQS under the Clean Air Act. Ozone was originally found to be damaging to grapes in the 1950s. The US EPA set "oxidants" standards in 1971, which included ozone. These standards were created to reduce agricultural impacts and other related damages. Like lead, ozone requires a reexamination of new findings of health and vegetation effects periodically. This aspect necessitated the creation of a US EPA criteria document. Further analysis done in 1979 and 1997 made it necessary to significantly modify the pollution standards. Atmospheric particulate matter PM10, coarse particles: 2.5 micrometers (μm) to 10 μm in size (although current implementation includes all particles 10 μm or less in the standard) PM2.5, fine particles: 2.5 μm in size or less. Particulate Matter (PM) was listed in the 1996 Criteria document issued by the EPA. 
In April 2001, the EPA created a Second External Review Draft of the Air Quality Criteria for PM, which addressed updated studies done on particulate matter and the modified pollutant standards done since the First External Review Draft. In May 2002, a Third External Review Draft was made, and the EPA revised PM requirements again. After issuing a fourth version of the document, the EPA issued the final version in October 2004. Lead (Pb): In the mid-1970s, lead was listed as a criteria air pollutant that required NAAQS regulation. In 1977, the EPA published a document which detailed the Air Quality Criteria for lead. This document was based on the scientific assessments of lead at the time. Based on this report (1977 Lead AQCD), the EPA established a "1.5 μg/m3 (maximum quarterly calendar average) Pb NAAQS in 1978." The Clean Air Act requires periodic review of NAAQS, and new scientific data published after 1977 made it necessary to revise the standards previously established in the 1977 Lead AQCD document. An Addendum to the document was published in 1986 and then again as a Supplement to the 1986 AQCD/Addendum in 1990. In 1990, a Lead Staff Paper was prepared by the EPA's Office of Air Quality Planning and Standards (OPQPS), which was based on information presented in the 1986 Lead/AQCD/Addendum and 1990 Supplement, in addition to other OAQPS sponsored lead exposure/risk analyses. In this paper, it was proposed that the Pb NAAQS be revised further and presented options for revision to the EPA. The EPA elected to not modify the Pb NAAQS further, but decided to instead focus on the 1991 U.S. EPA Strategy for Reducing Lead Exposure. The EPA concentrated on regulatory and remedial clean-up efforts to minimize Pb exposure from numerous non-air sources that caused more severe public health risks, and undertook actions to reduce air emissions. Carbon monoxide (CO): The EPA set the first NAAQS for carbon monoxide in 1971. The primary standard was set at 9 ppm averaged over an 8-hour period and 35 ppm over a 1-hour period. The majority of CO emitted into the ambient air is from mobile sources. The EPA has reviewed and assessed the current scientific literature with respect to CO in 1979, 1984, 1991, and 1994. After the review in 1984 the EPA decided to remove the secondary standard for CO due to lack of significant evidence of the adverse environmental impacts. On January 28, 2011, the EPA decided that the current NAAQS for CO were sufficient and proposed to keep the existing standards as they stood. The EPA is strengthening monitoring requirements for CO by calling for CO monitors to be placed in strategic locations near large urban areas. Specifically, the EPA has called for monitors to be placed and operational in CBSA's (core based statistical areas) with populations over 2.5 million by January 1, 2015; and in CBSA's with populations of 1 million or more by January 1, 2017. In addition they are requiring the collocation of CO monitors with NO2 monitors in urban areas having a population of 1 million for more. As of May 2011 there were approximately 328 operational CO monitors in place nationwide. The EPA has provided some authority to the EPA Regional Administrators to oversee case-by-case requested exceptions and in determining the need for additional monitoring systems above the minimum required. The EPA reports the national average concentration of CO has decreased by 82% since 1980. The last nonattainment designation was deemed in attainment on September 27, 2010. 
Currently all areas in the US are in attainment. Sulfur oxides (SOx): SOx refers to the oxides of sulfur, a highly reactive group of gases. SO2 is of greatest interest and is used as the indicator for the entire SOx family. The EPA first set primary and secondary standards in 1971. Dual primary standards were set at 140 ppb averaged over a 24-hour period, and at 30 ppb averaged annually. The secondary standard was set at 500 ppb averaged over a 3-hour period, not to be exceeded more than once a year. The most recent review took place in 1996 during which the EPA considered implementing a new NAAQS for 5-minute peaks of SO2 affecting sensitive populations such as asthmatics. The Agency did not establish this new NAAQS and kept the existing standards. In 2010 the EPA decided to replace the dual primary standards with a new 1-hour standard set at 75 ppb. On March 20, 2012, the EPA "took final action" to maintain the existing NAAQS as they stood. Only three monitoring sites have exceeded the current NAAQS for SO2, all of which are located in the Hawaii Volcanoes National Park. The violations occurred between 2007–2008 and the state of Hawaii suggested these should be exempt from regulatory actions due to an 'exceptional event' (volcanic activity). Since 1980 the national concentration of SO2 in the ambient air has decreased by 83%. Annual average concentrations hover between 1–6 ppb. Currently all ACQR's are in attainment for SO2. Nitrogen oxides (NOx): The EPA first set primary and secondary standards for the oxides of nitrogen in 1971. Among these are nitric oxide (NO), nitrous oxide (N2O), and nitrogen dioxide (NO2), all of which are covered in the NAAQS. NO2 is the oxide measured and used as the indicator for the entire NOx family as it is of the most concern due to its quick formation and contribution to the formation of harmful ground level ozone. In 1971 the primary and secondary NAAQS for NO2 were both set at an annual average of 0.053 ppm. The EPA reviewed this NAAQS in 1985 and 1996, and in both cases concluded that the existing standard was sufficient. The most recent review by the EPA occurred in 2010, resulting in a new 1-hour NO2 primary standard set at 100 ppb; the annual average of 0.053 ppm remained the same. Also considered was a new 1-hour secondary standard of 100 ppb. This was the first time the EPA reviewed the environmental impacts separate from the health impacts for this group of criteria air pollutants. Also, in 2010, the EPA decided to ensure compliance by strengthening monitoring requirements, calling for increased numbers of monitoring systems near large urban areas and major roadways. On March 20, 2012, the EPA "took final action" to maintain the existing NAAQS as they stand. The national average of NOx concentrations has dropped by 52% since 1980. The annual concentration for NO2 is reported to be averaging around 10–20 ppb, and is expected to decrease further with new mobile source regulations. Currently all areas of the US are classified as in attainment. In April 2023, the EPA finalized its "Good Neighbor Plan", which phases in tighter standards for NOx, using a cap and trade system during the summer "ozone season". This is intended to reduce ground-level ozone in non-attainment areas downwind of industrial sources like power plants, incinerators, and industrial furnaces, often in other states. Standards The standards are listed in . 
Primary standards are designed to protect human health, with an adequate margin of safety, including sensitive populations such as children, the elderly, and individuals suffering from respiratory diseases. Secondary standards are designed to protect public welfare, damage to property, transportation hazards, economic values, and personal comfort and well-being from any known or anticipated adverse effects of a pollutant. A district meeting a given standard is known as an "attainment area" for that standard, and otherwise a "non-attainment area". Standards are required to "accurately reflect the latest scientific knowledge," and are reviewed every five years by a Clean Air Scientific Advisory Committee (CASAC), consisting of "seven members appointed by the EPA administrator." EPA has set NAAQS for six major pollutants listed as below. These six are also the criteria air pollutants. As of June 15, 2005, the 1-hour ozone standard no longer applies to areas designated with respect to the 8-hour ozone standard (which includes most of the United States, except for portions of 10 states). Source: USEPA Detection methods The EPA National Exposure Research Laboratory can designate a measurement device using an established technological basis as a Federal Reference Method (FRM) to certify that the device has undergone a testing and analysis protocol, and can be used to monitor NAAQS compliance. Devices based on new technologies can be designated as a Federal Equivalent Method (FEM). FEMs are based on different sampling and/or analyzing technologies than FRMs, but are required to provide the same decision making quality when making NAAQS attainment determinations. Approved new methods are formally announced through publication in the Federal Register. A complete list of FRMs and FEMs is available. Air quality control region An air quality control region is an area, designated by the federal government, where communities share a common air pollution problem. See also Air pollution Air quality index Asthma Atmospheric dispersion modeling Contamination control Clean Air Act (1990) Portable emissions measurement system Toxic Substances Control Act of 1976 References External links EPA summary of the National Ambient Air Quality Standards US Environmental Protection Agency - Criteria Air Pollutants EPA Green Book showing non-attainment, maintenance, and attainment areas EPA Alumni Association Oral History Video "Early Implementation of the Clean Air Act of 1970 in California." Air pollution in the United States Air pollution organizations Environmental law in the United States Environmental science Environmental chemistry Natural resource management Smog United States Environmental Protection Agency
National Ambient Air Quality Standards
[ "Physics", "Chemistry", "Environmental_science" ]
2,745
[ "Visibility", "Physical quantities", "Smog", "Environmental chemistry", "nan" ]
1,621,842
https://en.wikipedia.org/wiki/Beryllium-10
Beryllium-10 (10Be) is a radioactive isotope of beryllium. It is formed in the Earth's atmosphere mainly by cosmic ray spallation of nitrogen and oxygen. Beryllium-10 has a half-life of 1.39 × 106 years, and decays by beta decay to stable boron-10 with a maximum energy of 556.2 keV. It decays through the reaction 10Be→10B + e−. Light elements in the atmosphere react with high energy galactic cosmic ray particles. The spallation of the reaction products is the source of 10Be (t, u particles like n or p): 14N(t,5u)10Be; Example: 14N(n,p α)10Be 16O(t,7u)10Be Because beryllium tends to exist in solutions below about pH 5.5 (and rainwater above many industrialized areas can have a pH less than 5), it will dissolve and be transported to the Earth's surface via rainwater. As the precipitation quickly becomes more alkaline, beryllium drops out of solution. Cosmogenic 10Be thereby accumulates at the soil surface, where its relatively long half-life (1.387 million years) permits a long residence time before decaying to 10B. 10Be and its daughter product have been used to examine soil erosion, soil formation from regolith, the development of lateritic soils and the age of ice cores. It is also formed in nuclear explosions by a reaction of fast neutrons with 13C in the carbon dioxide in air, and is one of the historical indicators of past activity at nuclear test sites. 10Be decay is a significant isotope used as a proxy data measure for cosmogenic nuclides to characterize solar and extra-solar attributes of the past from terrestrial samples. The rate of production of beryllium-10 depends on the activity of the sun. When solar activity is low (low numbers of sunspots and low solar wind), the barrier against cosmic rays that exists beyond the termination shock is weakened (see Cosmic ray#Cosmic-ray flux). This means more beryllium-10 is produced, and it can be detected millennia later. Beryllium-10 can thus serve as a marker of Miyake events, such as the 774–775 carbon-14 spike. There can be an effect on climate (see Homeric Minimum). See also Surface exposure dating References Isotopes of beryllium Radionuclides used in radiometric dating
Beryllium-10
[ "Chemistry" ]
528
[ "Radionuclides used in radiometric dating", "Isotopes of beryllium", "Isotopes" ]
1,621,854
https://en.wikipedia.org/wiki/Outflow%20boundary
An outflow boundary, also known as a gust front, is a storm-scale or mesoscale boundary separating thunderstorm-cooled air (outflow) from the surrounding air; similar in effect to a cold front, with passage marked by a wind shift and usually a drop in temperature and a related pressure jump. Outflow boundaries can persist for 24 hours or more after the thunderstorms that generated them dissipate, and can travel hundreds of kilometers from their area of origin. New thunderstorms often develop along outflow boundaries, especially near the point of intersection with another boundary (cold front, dry line, another outflow boundary, etc.). Outflow boundaries can be seen either as fine lines on weather radar imagery or else as arcs of low clouds on weather satellite imagery. From the ground, outflow boundaries can be co-located with the appearance of roll clouds and shelf clouds. Outflow boundaries create low-level wind shear which can be hazardous during aircraft takeoffs and landings. If a thunderstorm runs into an outflow boundary, the low-level wind shear from the boundary can cause thunderstorms to exhibit rotation at the base of the storm, at times causing tornadic activity. Strong versions of these features known as downbursts can be generated in environments of vertical wind shear and mid-level dry air. Microbursts have a diameter of influence less than , while macrobursts occur over a diameter greater than . Wet microbursts occur in atmospheres where the low levels are saturated, while dry microbursts occur in drier atmospheres from high-based thunderstorms. When an outflow boundary moves into a more stable low level environment, such as into a region of cooler air or over regions of cooler water temperatures out at sea, it can lead to the development of an undular bore. Definition An outflow boundary, also known as a gust front or arc cloud, is the leading edge of gusty, cooler surface winds from thunderstorm downdrafts; sometimes associated with a shelf cloud or roll cloud. A pressure jump is associated with its passage. Outflow boundaries can persist for over 24 hours and travel hundreds of kilometers (miles) from their area of origin. A wrapping gust front is a front that wraps around the mesocyclone, cutting off the inflow of warm moist air and resulting in occlusion. This is sometimes the case during the event of a collapsing storm, in which the wind literally "rips it apart". Origin A microburst is a very localized column of sinking air known as a downburst, producing damaging divergent and straight-line winds at the surface that are similar to but distinguishable from tornadoes which generally have convergent damage. The term was defined as affecting an area in diameter or less, distinguishing them as a type of downburst and apart from common wind shear which can encompass greater areas. They are normally associated with individual thunderstorms. Microburst soundings show the presence of mid-level dry air, which enhances evaporative cooling. Organized areas of thunderstorm activity reinforce pre-existing frontal zones, and can outrun cold fronts. This outrunning occurs within the westerlies in a pattern where the upper-level jet splits into two streams. The resultant mesoscale convective system (MCS) forms at the point of the upper level split in the wind pattern in the area of best low level inflow. The convection then moves east and toward the equator into the warm sector, parallel to low-level thickness lines. 
When the convection is strong and linear or curved, the MCS is called a squall line, with the feature placed at the leading edge of the significant wind shift and pressure rise which is normally just ahead of its radar signature. This feature is commonly depicted in the warm season across the United States on surface analyses, as they lie within sharp surface troughs. A macroburst, normally associated with squall lines, is a strong downburst larger than . A wet microburst consists of precipitation and an atmosphere saturated in the low-levels. A dry microburst emanates from high-based thunderstorms with virga falling from their base. All types are formed by precipitation-cooled air rushing to the surface. Downbursts can occur over large areas. In the extreme case, a derecho can cover a huge area more than wide and over long, lasting up to 12 hours or more, and is associated with some of the most intense straight-line winds, but the generative process is somewhat different from that of most downbursts. Appearance At ground level, shelf clouds and roll clouds can be seen at the leading edge of outflow boundaries. Through satellite imagery, an arc cloud is visible as an arc of low clouds spreading out from a thunderstorm. If the skies are cloudy behind the arc, or if the arc is moving quickly, high wind gusts are likely behind the gust front. Sometimes a gust front can be seen on weather radar, showing as a thin arc or line of weak radar echos pushing out from a collapsing storm. The thin line of weak radar echoes is known as a fine line. Occasionally, winds caused by the gust front are so high in velocity that they also show up on radar. This cool outdraft can then energize other storms which it hits by assisting in updrafts. Gust fronts colliding from two storms can even create new storms. Usually, however, no rain accompanies the shifting winds. An expansion of the rain shaft near ground level, in the general shape of a human foot, is a telltale sign of a downburst. Gustnadoes, short-lived vertical circulations near ground level, can be spawned by outflow boundaries. Effects Gust fronts create low-level wind shear which can be hazardous to planes when they takeoff or land. Flying insects are swept along by the prevailing winds. As such, fine line patterns within weather radar imagery, associated with converging winds, are dominated by insect returns. At the surface, clouds of dust can be raised by outflow boundaries. If squall lines form over arid regions, a duststorm known as a haboob can result from the high winds picking up dust in their wake from the desert floor. If outflow boundaries move into areas of the atmosphere which are stable in the low levels, such through the cold sector of extratropical cyclones or a nocturnal boundary layer, they can create a phenomenon known as an undular bore, which shows up on satellite and radar imagery as a series of transverse waves in the cloud field oriented perpendicular to the low-level winds. See also Density Derecho Gustnado Haboob Heat burst Inflow (meteorology) Lake-effect snow Mathematical singularity Sea breeze Tropical cyclogenesis Wake low Weather front Pseudo-cold front References External links Outflow boundary over south Florida MPEG, 854KB Atmospheric dynamics Wind
Outflow boundary
[ "Chemistry" ]
1,432
[ "Atmospheric dynamics", "Fluid dynamics" ]
1,621,913
https://en.wikipedia.org/wiki/Bulk%20Richardson%20number
The Bulk Richardson Number (BRN) is an approximation of the Gradient Richardson number. The BRN is a dimensionless ratio in meteorology of the consumption of turbulence (by buoyancy) to the shear production of turbulence (the generation of turbulence kinetic energy caused by wind shear). It is used to show dynamic stability and the formation of turbulence. The BRN is used frequently in meteorology because widely available radiosonde data and numerical weather forecasts supply wind and temperature measurements at discrete points in space. Formula The formula for the BRN is BRN = (g / Tv) · Δθv · Δz / (ΔU² + ΔV²), where g is gravitational acceleration, Tv is absolute virtual temperature, Δθv is the virtual potential temperature difference across a layer of thickness Δz, and ΔU and ΔV are the changes in the horizontal wind components across that same layer. Critical values and interpretation High values indicate unstable and/or weakly-sheared environments; low values indicate weak instability and/or strong vertical shear. Generally, values in the range of around 10 to 50 suggest environmental conditions favorable for supercell development. In the limit of the layer thickness becoming small, the Bulk Richardson number approaches the Gradient Richardson number, for which the critical Richardson number is roughly Ric = 0.25. Numbers less than this critical value are dynamically unstable and likely to become or remain turbulent. The critical value of 0.25 applies only to local gradients, not to finite differences across thick layers. The thicker the layer, the more likely it is that large gradients occurring within small sub-regions of the layer are averaged out. This introduces uncertainty into predictions of the occurrence of turbulence, and an artificially large value of the critical Richardson number must then be used to give reasonable results with the smoothed gradients. This means that the thinner the layer, the closer the computed value is to the theory. See also Monin–Obukhov length Richardson number Atmospheric dynamics Atmospheric thermodynamics References Further reading Help - Bulk Richardson Number - NOAA Storm Prediction Center Boundary layer meteorology Turbulence Severe weather and convection
Bulk Richardson number
[ "Chemistry" ]
413
[ "Turbulence", "Fluid dynamics" ]
1,622,226
https://en.wikipedia.org/wiki/Alex%20%28videotex%20service%29
Alex was an interactive videotex information service offered by Bell Canada in market research from 1988 to 1990 and thence to the general public until 1994. The Alextel terminal was based on the French Minitel terminals, built by Northern Telecom and leased to customers for $7.95/month. It consisted of a CRT display, attached keyboard, and a 1200 bit/s modem for use on regular phone lines. In 1991 proprietary software was released for IBM PCs that allowed computer users to access the network. Communications on the Alex network was via DATAPAC X.25 protocol. The system operated in the same fashion as Minitel, whereby users connected to various content providers over the X.25 network and thus access was normally through a local telephone number. The most popular (and most expensive) sites were chat rooms. Using the service could cost as much as per minute. Also offered was an electronic white pages and yellow pages directory. Many users terminated their subscription upon receiving their first invoice. One subscriber racked up a monthly fee of over C$2,000 spending most of his online time in chat. History The motivation to develop the Alex terminal and online service came from competitive pressure from France's Minitel, which had expanded into the Quebec market in April 1988. Bell Canada quickly organized their own version and received approval from the CRTC to offer the online service as of November 1988. Both services were expensive. A Minitel terminal cost $25 per month to rent or a one-time payment of $600, and $15 per hour of usage on top. An Alextel terminal was $7.95 a month to rent, but services cost up to $40 an hour. The advent of the World Wide Web contributed to making this service obsolete. On April 29, 1994, Bell Canada sent a letter to its customers announcing that the service would be terminated on June 3, 1994. In that letter, Mr. T.E. Graham, then Director of Business Planning for Bell Advanced Communications, stated that "Quite simply, the ALEX network is not the right vehicle, nor the appropriate technology, at this time to deliver the information goods needed in our fast-paced society." The Alextel terminal is also usable as a dumb terminal for VT100 emulation. Further reading Proulx, Serge (1991). "The Videotex Industry in Québec: The Difficulties of Mass Marketing Telematics". Canadian Journal of Communication. Université du Québec à Montréal. 16 (3). See also Prestel Telidon Viewdata ICON (microcomputer), a computer system used in Ontario schools from 1984 to 1994. References External links "Alextel ". Personal Computer Museum. Retrieved March 20, 2020 Computer-related introductions in 1988 1988 establishments in Canada 1994 disestablishments in Canada Videotex Pre–World Wide Web online services Legacy systems Bell Canada Telecommunications in Canada History of telecommunications in Canada Information technology in Canada
Alex (videotex service)
[ "Technology" ]
600
[ "Legacy systems", "Computer systems", "History of computing" ]
1,622,743
https://en.wikipedia.org/wiki/Joy%20%28dishwashing%20liquid%29
Joy is an American brand of dishwashing liquid detergent owned by JoySuds, LLC. The brand was introduced in the United States in 1949 by Procter & Gamble. In 2019, Procter & Gamble sold the rights to the Joy brand for the Americas to JoySuds, LLC. Overview The brand was an early and long-term sponsor of several "soap operas", including the long-running pioneering soap Search for Tomorrow. There are several kinescopes existing of 1950s' soap operas containing these commercials, usually with the famous slogan, "From grease to shine in half the time". Joy was an early example of a product being reformulated to include the fragrance of lemons and helped begin the overall trend toward citrus-scented cleaning products. Joy is designed for use in the hand washing of dishes, not automatic dishwashers, and as such also contains emollients designed to protect the user's hands from drying out. Available in both "ultra" (concentrated) and "non-ultra" (regular) strengths, Joy remains one of the most recognizable dish brands in North America, with a loyal customer following across the US and Latin American retail markets. Although Joy's stapled lemon fragrance remains its most widely distributed line, Orange Joy has grown in popularity in recent years. The brand also offers a commercial grade detergent formula, Joy Professional, which is commonly used in restaurant, hotel and other commercial settings due to its high concentration of surfactants and cleaning effectiveness. Joy was introduced in Japan during the 1990s, where it became market leader for a period of time. See also List of cleaning agents References External links Joy official US site Detergents Cleaning product brands Soap brands Procter & Gamble brands Products introduced in 1949
Joy (dishwashing liquid)
[ "Chemistry", "Technology" ]
356
[]
1,623,051
https://en.wikipedia.org/wiki/Cypher%20%28film%29
Cypher (also known as Brainstorm and Company Man), is a 2002 science fiction spy-fi thriller film directed by Vincenzo Natali and written by Brian King. The film follows an accountant (Jeremy Northam) whose sudden career as a corporate spy takes an unexpected turn when he meets a mysterious woman (Lucy Liu), uncovering secrets about the nature of his work. The film was shown in limited release in theaters in the US and Australia, and released on DVD on August 2, 2005. The film received mixed reviews, and Northam received the Best Actor award at the Sitges Film Festival. Plot Recently unemployed accountant Morgan Sullivan is bored with his suburban life. Pressured by his wife to take a job with her father's company, he instead pursues a position in corporate espionage. Digicorp's Head of Security, Finster, inducts Morgan and assigns him a new identity. As Jack Thursby, he is sent to conventions to secretly record presentations and transmit them to headquarters. Sullivan is soon haunted by recurring nightmares and neck pain. At a bar, Morgan meets Rita Foster from a competing corporation, who offers him pills and tells him not to transmit at the next convention. Afterward, Morgan is surprised when Digicorp confirms the receipt of his non-existent transmission. He takes the pills Rita gave him and his nightmares and pains stop. Confused and intrigued by Rita, he arranges to meet with her again. She tells him about Digicorp's deception and offers him an antidote – a green liquid in a large syringe. Morgan hesitantly accepts. She warns him that no matter what happens at the next convention he must not react. Morgan discovers that all the convention attendees believe themselves to be Digicorp spies. While they are drugged from the served drinks, plastic-clad scientists probe, inject and brainwash them. Individual headsets reinforce their new identities, preparing them to be used and then disposed of. Morgan manages to convince Digicorp that he believes his new identity. He is then recruited by Sunway Systems, a rival of Digicorp. Sunway's Head of Security, Callaway, encourages Morgan to act as a double agent, feeding corrupted data to Digicorp. Morgan calls Rita, who warns him that Sunway is equally ruthless, and that he is in fact being used by Rita's boss, Sebastian Rooks. Morgan manages to steal the required information from Sunway Systems' vault, escaping with Rita's help. Rita ultimately takes him to meet Rooks. When she temporarily leaves the room, a nervous Morgan calls Finster, and becomes even more distressed. He accidentally shoots Rita, who encourages him to ignore her and meet Rooks in the room next door. Morgan finds the room filled with objects which appear to be personal to him, including a photograph of him and Rita together. Realising that he is apparently Rooks, he turns to Rita in disbelief. Before Rita can convince him, the apartment is invaded by armed men. Rita and Morgan escape to the roof of the skyscraper as the security teams of Digicorp and Sunway meet, led by Finster and Callaway. After a short Mexican standoff both sides realise they are after the same person, Sebastian Rooks, and rush to the roof, where they find Morgan and Rita in a helicopter. Rita cannot fly it, but, having designed it himself, Sebastian can after Rita encourages him to remember his past self, connecting through his love for her. He lifts off amid gunfire from the security teams. Finster and Callaway comment as the couple seem to have escaped: Callaway: "Did you get a look at him? 
Did you see Rooks' face?" Finster: "Just Morgan Sullivan, our pawn." Looking up, they see the helicopter hovering and realise, too late, the true identity of Morgan Sullivan. Sebastian triggers a bomb, causing the whole roof to explode. On a boat in the South Pacific Ocean, Sebastian reveals the content of the stolen disc to Rita. Marked "terminate with extreme prejudice", it is the last copy of Rita's identity (after the one in the vault was destroyed). Sebastian throws the disc into the sea and says, "Now there's no copy at all." Reception The film received mixed reviews. On review aggregator Rotten Tomatoes, the film holds a 58% rating based on reviews from 19 critics. Derek Elley of Variety called the film "consistently intriguing" and "100% plot driven" with excellent performances from the cast, while BBC's Neil Smith compared Cypher to The Manchurian Candidate, and noticed feelings of tension and claustrophobia, as in Natali's directorial début Cube, finally concluding that "Natali [sets] his yarn in an Orwellian atmosphere of paranoia." Scott Weinberg, reviewing for DVD Talk, recommended the film, calling it "one of the best direct-to-video titles [he has] seen all year", noting similarities to The Matrix, Dark City and the works of Philip K. Dick. English horror fiction writer and journalist Kim Newman, writing for Empire magazine, awarded the film 4 out of 5 stars, praising Northam's and Liu's performances and calling the film a "semi-science-fictional exercise in puzzle-setting and solving". Some critics found problems with the film's complex narrative. Paul Byrnes of The Sydney Morning Herald found that the plot overwhelmed the characters so much that he "stopped caring". John J. Puccio, writing for Movie Metropolis, thought that "[Cypher's] corporate espionage plot doesn't prove simply too complicated, it ends up downright muddled", but concluded that the film was nevertheless "still kind of fun". For his performance in Cypher, Jeremy Northam received the Best Actor award at the 2002 Sitges Film Festival in Catalonia. References External links 2002 films 2000s science fiction thriller films 2000s spy films American science fiction thriller films Canadian science fiction thriller films English-language Canadian films Films about memory erasure and alteration Films about computing Films scored by Michael Andrews Films directed by Vincenzo Natali Cyberpunk films Fiction about mind control American spy films Films about accountants 2000s English-language films 2000s American films 2000s Canadian films English-language science fiction thriller films
Cypher (film)
[ "Technology" ]
1,278
[ "Works about computing", "Films about computing" ]
1,623,141
https://en.wikipedia.org/wiki/Remote%20administration
Remote administration refers to any method of controlling a computer or other Internet-connected device, such as a smartphone, from a remote location. Many commercially available and free-to-use software packages make remote administration easy to set up and use. Remote administration is often used when it's difficult or impractical to be physically near a system in order to use it or troubleshoot it. Many server administrators also use remote administration to control servers around the world from remote locations. It is also used by companies and corporations to improve overall productivity as well as promote remote work. It may also refer to both legal and illegal (i.e. hacking) remote administration (see Owned and Trojan). Requirements Internet connection Any computer with an Internet connection or on a local area network can be remotely administered. For non-malicious administration, the user must install or enable server software on the host system in order for it to be accessed remotely. Then the user/client can access the host system from another computer using the installed software. Usually, both systems should be connected to the internet, and the IP address of the host/server system must be known. Remote administration is therefore less practical if the host uses a dial-up modem, which is not constantly online and often has a dynamic IP address. Connecting When the client connects to the host computer, a window showing the desktop of the host usually appears. The client may then control the host as if he/she were sitting right in front of it. Windows has a built-in remote administration package called Remote Desktop Connection. A free cross-platform alternative is VNC, which offers similar functionality. Common tasks for which remote administration is used Shutdown Shutting down or rebooting another computer over a network Accessing peripherals Using a network device, such as a printer Retrieving streaming data, much like a CCTV system Modifying Editing another computer's Registry settings Modifying system services Installing software on another machine Modifying logical groups Viewing Remotely assisting others Supervising computer or internet usage Access to a remote system's "Computer Management" snap-in Hacking Computers infected with malware such as Trojans sometimes open back doors into computer systems which allow malicious users to hack into and control the computer. Such users may then add, delete, modify or execute files on the computer to their own ends. Notable software Windows Windows Server 2003, 2008, Tablet PC Editions, and Windows Vista Ultimate, Enterprise and Business editions come with Microsoft's Microsoft Management Console, Windows Registry Editor and various command-line utilities that may be used to administer a remote machine. One form of remote administration is remote desktop software, and Windows includes a Remote Desktop Connection client for this purpose. Windows XP comes with built-in remote administration tools called Remote Assistance and Remote Desktop; these are restricted versions of the Windows Server 2003 Terminal Services meant only for helping users and remote administration. With a simple hack/patch (derived from the beta version of Windows XP) it's possible to "unlock" XP to a fully featured Terminal Server. Windows Server 2003 comes with built-in remote administration tools, including a web application and a simplified version of Terminal Services designed for remote administration. 
Active Directory and other features found in Microsoft's Windows NT Domains allow for remote administration of computers that are members of the domain, including editing the Registry and modifying system services and access to the system's "Computer Management" Microsoft Management Console snap-in. Some third-party remote desktop software programs perform the same job. Back Orifice, whilst commonly used as a script kiddie tool, claims to be a remote-administration and system management tool. Critics have previously stated that the capabilities of the software require a very loose definition of what "administration" entails. Remote Server Administration Tools for Windows 7 enables IT administrators to manage roles and features that are installed on remote computers that are running Windows Server 2008 R2 Non-Windows VNC can be used for remote administration of computers, however it is increasingly being used as an equivalent of Terminal Services and Remote Desktop Protocol for multi-user environments. Linux, UNIX and BSD support remote administration via remote login, typically via SSH (The use of the Telnet protocol has been phased out due to security concerns). X-server connection forwarding, often tunneled over SSH for security, allows GUI programs to be used remotely. VNC is also available for these operating systems. Apple Remote Desktop provides Macintosh users with remote administration capabilities. NX and its Google fork Neatx are free graphical Desktop sharing solutions for the X Window System with Clients for different platforms like Linux, Windows and Mac OS X. There is also an enhanced commercial version of NX Server available. Wireless remote administration Remote administration software has recently started to appear on wireless devices such as the BlackBerry, Pocket PC, and Palm devices, as well as some mobile phones. Generally these solutions do not provide the full remote access seen on software such as VNC or Terminal Services, but do allow administrators to perform a variety of tasks, such as rebooting computers, resetting passwords, and viewing system event logs, thus reducing or even eliminating the need for system administrators to carry a laptop or be within reach of the office. Wireless remote administration is usually the only method to maintain man-made objects in space. References Internet Protocol based network software Remote administration software System administration
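As an illustration of the remote login model described above, the following is a minimal sketch of scripted remote administration over SSH using the third-party Python library paramiko. The host name, user name, key path, and the choice of a reboot command are placeholder assumptions for this example rather than a prescribed procedure; a production setup would add proper host key verification and error handling.

```python
import paramiko  # third-party SSH client library


def remote_reboot(host: str, user: str, key_path: str) -> str:
    """Connect to a remote host over SSH and request a reboot.

    A minimal sketch: 'sudo reboot' assumes a Unix-like host on which the
    connecting account may reboot the machine without a password prompt.
    """
    client = paramiko.SSHClient()
    # Accepting unknown host keys automatically is convenient for a sketch,
    # but weakens security; real deployments should verify host keys.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, key_filename=key_path)
    try:
        _stdin, stdout, stderr = client.exec_command("sudo reboot")
        return stdout.read().decode() + stderr.read().decode()
    finally:
        client.close()
```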
Remote administration
[ "Technology" ]
1,075
[ "Information systems", "System administration" ]
1,623,162
https://en.wikipedia.org/wiki/Hyper%20Text%20Coffee%20Pot%20Control%20Protocol
The Hyper Text Coffee Pot Control Protocol (HTCPCP) is a facetious communication protocol for controlling, monitoring, and diagnosing coffee pots. It is specified in RFC 2324, published on 1 April 1998 as an April Fools' Day RFC. An extension, HTCPCP-TEA, was published as RFC 7168 on 1 April 2014 to support brewing teas, also as an April Fools' Day RFC. Protocol RFC 2324 was written by Larry Masinter, who describes it as a satire, saying "This has a serious purpose – it identifies many of the ways in which HTTP has been extended inappropriately." The wording of the protocol made it clear that it was not entirely serious; for example, it notes that "there is a strong, dark, rich requirement for a protocol designed espressoly for the brewing of coffee". Despite the joking nature of its origins, or perhaps because of it, the protocol has remained a minor presence online. The editor Emacs includes a fully functional client-side implementation of it, and a number of bug reports exist complaining about Mozilla's lack of support for the protocol. Ten years after the publication of HTCPCP, the Web-Controlled Coffee Consortium (WC3) published a first draft of "HTCPCP Vocabulary in RDF" in parody of the World Wide Web Consortium (W3C)'s "HTTP Vocabulary in RDF". On April 1, 2014, RFC 7168 extended HTCPCP to fully handle teapots. Commands and replies HTCPCP is an extension of HTTP. HTCPCP requests are identified with the Uniform Resource Identifier (URI) scheme coffee (or the corresponding word in any other of the 29 listed languages) and contain several additions to the HTTP methods. It also defines four error responses, the best known being 418 "I'm a teapot". Save 418 movement On 5 August 2017, Mark Nottingham, chairman of the IETF HTTPBIS Working Group, called for the removal of status code 418 "I'm a teapot" from the Node.js platform, a code implemented in reference to the original 418 "I'm a teapot" established in Hyper Text Coffee Pot Control Protocol. On 6 August 2017, Nottingham requested that references to 418 "I'm a teapot" be removed from the programming language Go and subsequently from Python's Requests and ASP.NET's HttpAbstractions library as well. In response, 15-year-old developer Shane Brunswick created a website, save418.com, and established the "Save 418 Movement", asserting that references to 418 "I'm a teapot" in different projects serve as "a reminder that the underlying processes of computers are still made by humans". Brunswick's site went viral in the hours following its publishing, garnering thousands of upvotes on the social platform Reddit, and causing the mass adoption of the "#save418" Twitter hashtag he introduced on his site. Heeding the public outcry, Node.js, Go, Python's Requests, and ASP.NET's HttpAbstractions library decided against removing 418 "I'm a teapot" from their respective projects. The unanimous support from the aforementioned projects and the general public prompted Nottingham to begin the process of having 418 marked as a reserved HTTP status code, ensuring that 418 will not be replaced by an official status code for the foreseeable future. On 5 October 2020, Python 3.9 was released with an updated HTTP library including the 418 IM_A_TEAPOT status code. In the corresponding pull request, the Save 418 movement was directly cited in support of adoption. Usage The status code 418 is sometimes returned by servers when blocking a request, instead of the more appropriate 403 Forbidden, or 404 Not Found. 
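As an illustration of the kind of 418 response mentioned above, the following is a minimal sketch using only the Python standard library, whose http module has included HTTPStatus.IM_A_TEAPOT since Python 3.9 as noted earlier. The port number and response body are arbitrary choices for this example, not part of any specification.

```python
from http import HTTPStatus
from http.server import BaseHTTPRequestHandler, HTTPServer


class TeapotHandler(BaseHTTPRequestHandler):
    """Answer every GET with 418 I'm a teapot."""

    def do_GET(self):
        body = b"I'm a teapot: short and stout.\n"
        self.send_response(HTTPStatus.IM_A_TEAPOT)  # numeric value 418
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Serve on an arbitrary local port; `curl -i http://localhost:8418/`
    # would show the 418 status line in the response.
    HTTPServer(("localhost", 8418), TeapotHandler).serve_forever()
```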
Around the time of the 2022 Russian invasion of Ukraine, the Russian military website mil.ru returned the HTTP 418 status code when accessed from outside of Russia as a DDoS attack protection measure. The change was first noticed in December of 2021. See also Trojan Room coffee pot Internet of things Utah teapot References External links Google's demo page: Error 418 (I'm a teapot)!? Package teapot HTCPCP-TEA implementation by David Skinner save418.com error418.net Request for Comments Application layer protocols Computer errors Computer humour April Fools' Day jokes 1998 hoaxes Coffee preparation Teapots
Hyper Text Coffee Pot Control Protocol
[ "Technology" ]
936
[ "Computer errors" ]
1,623,208
https://en.wikipedia.org/wiki/Growth%20medium
A growth medium or culture medium is a solid, liquid, or semi-solid designed to support the growth of a population of microorganisms or cells via the process of cell proliferation or small plants like the moss Physcomitrella patens. Different types of media are used for growing different types of cells. The two major types of growth media are those used for cell culture, which use specific cell types derived from plants or animals, and those used for microbiological culture, which are used for growing microorganisms such as bacteria or fungi. The most common growth media for microorganisms are nutrient broths and agar plates; specialized media are sometimes required for microorganism and cell culture growth. Some organisms, termed fastidious organisms, require specialized environments due to complex nutritional requirements. Viruses, for example, are obligate intracellular parasites and require a growth medium containing living cells. Types The most common growth media for microorganisms are nutrient broths (liquid nutrient medium) or lysogeny broth medium. Liquid media are often mixed with agar and poured via a sterile media dispenser into Petri dishes to solidify. These agar plates provide a solid medium on which microbes may be cultured. They remain solid, as very few bacteria are able to decompose agar (the exception being some species in the genera: Cytophaga, Flavobacterium, Bacillus, Pseudomonas, and Alcaligenes). Bacteria grown in liquid cultures often form colloidal suspensions. The difference between growth media used for cell culture and those used for microbiological culture is that cells derived from whole organisms and grown in culture often cannot grow without the addition of, for instance, hormones or growth factors which usually occur in vivo. In the case of animal cells, this difficulty is often addressed by the addition of blood serum or a synthetic serum replacement to the medium. In the case of microorganisms, no such limitations exist, as they are often unicellular organisms. One other major difference is that animal cells in culture are often grown on a flat surface to which they attach, and the medium is provided in a liquid form, which covers the cells. In contrast, bacteria such as Escherichia coli may be grown on solid or in liquid media. An important distinction between growth media types is that of chemically defined versus undefined media. A defined medium will have known quantities of all ingredients. For microorganisms, they consist of providing trace elements and vitamins required by the microbe and especially defined carbon and nitrogen sources. Glucose or glycerol are often used as carbon sources, and ammonium salts or nitrates as inorganic nitrogen sources. An undefined medium has some complex ingredients, such as yeast extract or casein hydrolysate, which consist of a mixture of many chemical species in unknown proportions. Undefined media are sometimes chosen based on price and sometimes by necessity – some microorganisms have never been cultured on defined media. A good example of a growth medium is the wort used to make beer. The wort contains all the nutrients required for yeast growth, and under anaerobic conditions, alcohol is produced. When the fermentation process is complete, the combination of medium and dormant microbes, now beer, is ready for consumption. 
The main types are culture media, minimal media, selective media, differential media, transport media, and indicator media. Culture media Culture media contain all the elements that most bacteria need for growth and are not selective, so they are used for the general cultivation and maintenance of bacteria kept in laboratory culture collections. An undefined medium (also known as a basal or complex medium) contains: a carbon source such as glucose, water, various salts, and a source of amino acids and nitrogen (e.g. beef or yeast extract). This is an undefined medium because the amino-acid source contains a variety of compounds; the exact composition is unknown. A defined medium (also known as a chemically defined medium or synthetic medium) is a medium in which all the chemicals used are known and no yeast, animal, or plant tissue is present. Examples of nutrient media: nutrient agar, plate count agar, and trypticase soy agar. Minimal media A defined medium that has just enough ingredients to support growth is called a "minimal medium". The number of ingredients that must be added to a minimal medium varies enormously depending on which microorganism is being grown. Minimal media are those that contain the minimum nutrients possible for colony growth, generally without the presence of amino acids, and are often used by microbiologists and geneticists to grow "wild-type" microorganisms. Minimal media can also be used to select for or against recombinants or exconjugants. Minimal medium typically contains: a carbon source, which may be a sugar such as glucose or a less energy-rich source such as succinate; various salts, which may vary among bacterial species and growing conditions, and which generally provide essential elements such as magnesium, nitrogen, phosphorus, and sulfur to allow the bacteria to synthesize protein and nucleic acids; and water. Supplementary minimal media are minimal media that also contain a single selected agent, usually an amino acid or a sugar. This supplementation allows for the culturing of specific lines of auxotrophic recombinants. Selective media Selective media are used for the growth of only selected microorganisms. For example, if a microorganism is resistant to a certain antibiotic, such as ampicillin or tetracycline, then that antibiotic can be added to the medium to prevent other cells, which do not possess the resistance, from growing. Media lacking an amino acid such as proline in conjunction with E. coli unable to synthesize it were commonly used by geneticists before the emergence of genomics to map bacterial chromosomes. Selective growth media are also used in cell culture to ensure the survival or proliferation of cells with certain properties, such as antibiotic resistance or the ability to synthesize a certain metabolite. Normally, the presence of a specific gene or an allele of a gene confers upon the cell the ability to grow in the selective medium. In such cases, the gene is termed a marker. Selective growth media for eukaryotic cells commonly contain neomycin to select cells that have been successfully transfected with a plasmid carrying the neomycin resistance gene as a marker. Ganciclovir is an exception to the rule, as it is used to specifically kill cells that carry its respective marker, the Herpes simplex virus thymidine kinase. Examples of selective media: Eosin methylene blue contains dyes that are toxic for Gram-positive bacteria. It is a selective and differential medium for coliforms. YM (yeast extract agar) has a low pH, deterring bacterial growth. 
MEA (malt extract agar) has a low pH, deterring bacterial growth. MacConkey agar is for Gram-negative bacteria. Hektoen enteric agar is selective for Gram-negative bacteria. HIS-selective medium is a type of cell culture medium that lacks the amino acid histidine. Mannitol salt agar is selective for gram-positive bacteria and differential for mannitol. Xylose lysine deoxycholate is selective for Gram-negative bacteria. Buffered charcoal yeast extract agar is selective for certain gram-negative bacteria, especially Legionella pneumophila. Baird-Parker agar is for gram-positive staphylococci. Sabouraud agar is selective for certain fungi due to its low pH (5.6) and high glucose concentration (3–4%). DRBC (dichloran rose bengal chloramphenicol agar) is a selective medium for the enumeration of moulds and yeasts in foods. Dichloran and rose bengal restrict the growth of mould colonies, preventing overgrowth of luxuriant species and assisting accurate counting of colonies. MMN (Modified Melin-Norkrans) medium and BAF medium are used for ectomycorrhizal fungi. Columbia Nalidixic Acid (CNA) agar contains antibiotics (nalidixic acid and colistin) that inhibit Gram-negative organisms, aiding in the selective isolation of Gram-positive bacteria. Differential media Differential or indicator media distinguish one microorganism type from another growing on the same medium. This type of media uses the biochemical characteristics of a microorganism growing in the presence of specific nutrients or indicators (such as neutral red, phenol red, eosin Y, or methylene blue) added to the medium to visibly indicate the defining characteristics of a microorganism. These media are used for the detection of microorganisms and by molecular biologists to detect recombinant strains of bacteria. Examples of differential media: Blood agar (used in strep tests) contains bovine heart blood that becomes transparent in the presence of β-hemolytic organisms such as Streptococcus pyogenes and Staphylococcus aureus. Eosin methylene blue is differential for lactose fermentation. Granada medium is selective and differential for Streptococcus agalactiae (group B streptococcus), which grows as distinctive red colonies in this medium. MacConkey agar is differential for lactose fermentation. Mannitol salt agar is differential for mannitol fermentation. X-gal plates are differential for lac operon mutants. Transport media Transport media should fulfill these criteria: temporary storage of specimens being transported to the laboratory for cultivation; maintaining the viability of all organisms in the specimen without altering their concentration; containing only buffers and salt; and lacking carbon, nitrogen, and organic growth factors so as to prevent microbial multiplication. Transport media used in the isolation of anaerobes must be free of molecular oxygen. Examples of transport media: Thioglycolate broth is for strict anaerobes. Stuart transport medium is a non-nutrient soft agar gel containing a reducing agent to prevent oxidation and charcoal to neutralize certain bacterial inhibitors; it is used for gonococci, while buffered glycerol saline is used for enteric bacilli. Venkataraman Ramakrishna (VR) medium is used for V. cholerae. Enriched media Enriched media contain the nutrients required to support the growth of a wide variety of organisms, including some of the more fastidious ones. They are commonly used to harvest as many different types of microbes as are present in the specimen. 
Blood agar is an enriched medium in which nutritionally rich whole blood supplements the basic nutrients. Chocolate agar is enriched with heat-treated blood, which turns brown and gives the medium the color for which it is named. Physiological relevance The choice of culture medium might affect the physiological relevance of findings from tissue culture experiments, especially for metabolic studies. In addition, the dependence of a cell line on a metabolic gene was shown to be affected by the media type. When performing a study involving several cell lines, utilizing a uniform culture medium for all cell lines might reduce the bias in the generated datasets. Using a growth medium that better represents the physiological levels of nutrients can improve the physiological relevance of in vitro studies, and recently such media types, such as Plasmax and human plasma-like medium (HPLM), were developed. Culture medium for mammalian cells The selection of cell culture medium is crucial for efficient mammalian cell culture, significantly affecting cell growth, productivity, and consistency across batches. In protein expression, the choice of media can also influence the therapeutic characteristics of produced proteins through processes like glycosylation. Different types of media, such as serum-containing, serum-free, protein-free, and chemically defined media, have distinct benefits and drawbacks. Serum-containing media are rich in growth factors but can lead to variability and contamination issues. Fetal bovine serum (FBS) is commonly used due to its high capacity to support cell growth, although it poses biosafety concerns due to its inconsistent composition. In contrast, serum-free media (SFM) offer standardized formulations that enhance reliability and reduce contamination risks. They are designed to include essential nutrients like amino acids, vitamins, and glucose, but can sometimes provide weaker growth performance compared to serum-containing alternatives. The development of protein-free and chemically defined media is aimed at achieving greater consistency and control in cell culture processes. Ultimately, the composition of the culture medium directly impacts cell viability and productivity, making the careful selection and design of culture media essential for successful mammalian cell culture. See also Cell culture Impedance microbiology Modified Chee's medium References External links "The Nutrient Requirements of Cells" Growth media Microbiological media
Growth medium
[ "Biology" ]
2,645
[ "Microbiological media", "Microbiology equipment" ]
1,623,224
https://en.wikipedia.org/wiki/Harmine
Harmine is a beta-carboline and a harmala alkaloid. It occurs in a number of different plants, most notably the Syrian rue and Banisteriopsis caapi. Harmine reversibly inhibits monoamine oxidase A (MAO-A), an enzyme which breaks down monoamines, making it a reversible inhibitor of monoamine oxidase A (RIMA). Harmine does not inhibit MAO-B. Harmine is also known as banisterin, banisterine, telopathin, telepathine, leucoharmine, yagin, and yageine. Biosynthesis The coincident occurrence of β-carboline alkaloids and serotonin in Peganum harmala indicates the presence of two very similar, interrelated biosynthetic pathways, which makes it difficult to definitively identify whether free tryptamine or L-tryptophan is the precursor in the biosynthesis of harmine. However, it is postulated that L-tryptophan is the most likely precursor, with tryptamine existing as an intermediate in the pathway. The proposed biosynthetic scheme for harmine proceeds as follows. The shikimate pathway yields the aromatic amino acid L-tryptophan. Decarboxylation of L-tryptophan by aromatic L-amino acid decarboxylase (AADC) produces tryptamine (I), which contains a nucleophilic center at the C-2 carbon of the indole ring due to the adjacent nitrogen atom, enabling participation in a Mannich-type reaction. Rearrangements enable the formation of a Schiff base from tryptamine, which then reacts with pyruvate in II to form a β-carboline carboxylic acid. The β-carboline carboxylic acid subsequently undergoes decarboxylation to produce 1-methyl β-carboline III. Hydroxylation followed by methylation in IV yields harmaline. The order of O-methylation and hydroxylation has been shown to be inconsequential to the formation of the harmaline intermediate. In the last step V, the oxidation of harmaline is accompanied by the loss of water and effectively generates harmine. The difficulty distinguishing between L-tryptophan and free tryptamine as the precursor of harmine biosynthesis originates from the presence of the serotonin biosynthetic pathway, which closely resembles that of harmine, yet necessitates the availability of free tryptamine as its precursor. As such, it is unclear if the decarboxylation of L-tryptophan or the incorporation of pyruvate into the basic tryptamine structure is the first step of harmine biosynthesis. However, feeding experiments in which tryptamine was fed to hairy root cultures of P. harmala showed that it yielded a great increase in serotonin levels with little to no effect on β-carboline levels, confirming that tryptamine is the precursor for serotonin, and indicating that it is likely only an intermediate in the biosynthesis of harmine; otherwise, comparable increases in harmine levels would have been observed. Uses Monoamine oxidase inhibitor Harmine is a RIMA, as it reversibly inhibits monoamine oxidase A (MAO-A), but not MAO-B. Oral or intravenous harmine doses ranging from 30 to 300 mg may cause agitation, bradycardia or tachycardia, blurred vision, hypotension, and paresthesias. Serum or plasma harmine concentrations may be measured as a confirmation of diagnosis. The plasma elimination half-life of harmine is on the order of 1–3 hours. Medically significant amounts of harmine occur in the plants Syrian rue and Banisteriopsis caapi. These plants also contain notable amounts of harmaline, which is also a RIMA. The psychoactive ayahuasca brew is made from B. 
caapi stem bark usually in combination with dimethyltryptamine (DMT) containing Psychotria viridis leaves. DMT is a psychedelic drug, but it is not orally active unless it is ingested with MAOIs. This makes harmine a vital component of the ayahuasca brew with regard to its ability to induce a psychedelic experience. Syrian rue or synthetic harmine is sometimes used to substitute B. caapi in the oral use of DMT. Other Harmine is a useful fluorescent pH indicator. As the pH of its local environment increases, the fluorescence emission of harmine decreases. Due to its MAO-A specific binding, carbon-11 labeled harmine can be used in positron emission tomography to study MAO-A dysregulation in several psychiatric and neurologic illnesses. Harmine was used as an antiparkinsonian medication since the late 1920s until the early 1950s. It was replaced by other medications. Research Pancreatic islet cell proliferation Harmine is currently the only known drug that induces proliferation (rapid mitosis and subsequent mass growth) of pancreatic alpha (α) and beta (β) cells in adult humans. These islet sub-cells are normally resistant to growth stimulation in the adult stage of a human's life, as the cell mass plateaus at around age 10 and remains virtually unchanged. Adverse effects A 2024 Phase 1 clinical trial investigating pharmaceutical-grade harmine hydrochloride in healthy adults found that the maximum tolerated dose (MTD) is approximately 2.7 mg/kg body weight. Below this threshold, harmine is generally well-tolerated with minimal adverse effects. Above 2.7 mg/kg, common adverse effects include nausea and vomiting, which typically occur 60-90 minutes after ingestion. Other reported effects include drowsiness, dizziness, and impaired concentration. These effects are generally mild to moderate in severity and resolve within several hours. No serious adverse cardiovascular effects were observed at any dose tested (up to 500 mg), though rare instances of transient hypotension occurred during episodes of vomiting. Unlike some traditional preparations containing harmine (such as Ayahuasca), pure harmine did not cause diarrhea in study participants. The study found that adverse effects were more common in participants with lower body weight when given fixed doses, leading the researchers to conclude that 2.7 mg/kg represents a more useful threshold than fixed dosing. Natural sources Harmine is found in a wide variety of different organisms, most of which are plants. Alexander Shulgin lists about thirty different species known to contain harmine, including seven species of butterfly in the family Nymphalidae. The harmine-containing plants include tobacco, Peganum harmala, two species of passiflora, and numerous others. Lemon balm (Melissa officinalis) contains harmine. In addition to B. caapi, at least three members of the Malpighiaceae contain harmine, including two more Banisteriopsis species and the plant Callaeum antifebrile. Callaway, Brito and Neves (2005) found harmine levels of 0.31–8.43% in B. caapi samples. The family Zygophyllaceae, which P. harmala belongs to, contains at least two other harmine-bearing plants: Peganum nigellastrum and Zygophyllum fabago. History J. Fritzsche was the first to isolate and name harmine. He isolated it from the husks of Peganum harmala seeds in 1848. The related harmaline was already isolated and named by Fr. Göbel in 1837 from the same plant. The pharmacology of harmine was not studied in detail until 1895. 
The structures of harmine and harmaline were determined in 1927 by Richard Helmuth Fredrick Manske and colleagues. In 1905, the Colombian naturalist and chemist, Rafael Zerda-Bayón suggested the name telepathine to the then unknown hallucinogenic ingredient in ayahuasca brew. "Telepathine" comes from "telepathy", as Zerda-Bayón believed that ayahuasca induced telepathic visions. In 1923, the Colombian chemist, Guillermo Fischer-Cárdenas was the first to isolate harmine from Banisteriopsis caapi, which is an important herbal component of ayahuasca brew. He called the isolated harmine "telepathine". This was solely to honor Zerda-Bayón, as Fischer-Cárdenas found that telepathine had only mild non-hallucinogenic effects in humans. In 1925, Barriga Villalba, professor of chemistry at the University of Bogotá, isolated harmine from B. caapi, but named it "yajéine", which in some texts is written as "yageine". In 1927, F. Elger, who was a chemist working at Hoffmann-La Roche, isolated harmine from B. caapi. With the assistance of Professor Robert Robinson in Manchester, Elger showed that harmine (which was already isolated in 1848) was identical with telepathine and yajéine. In 1928, Louis Lewin isolated harmine from B. caapi, and named it "banisterine", but this supposedly novel compound was soon also shown to be harmine. Harmine was first patented by Jialin Wu and others who invented ways to produce new harmine derivatives with enhanced antitumor activity and lower toxicity to human nervous cells. Legal status Australia Harmala alkaloids are considered Schedule 9 prohibited substances under the Poisons Standard (October 2015). A Schedule 9 substance is a substance which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of Commonwealth and/or State or Territory Health Authorities. Exceptions are made when in herbs, or preparations, for therapeutic use such as: (a) containing 0.1 per cent or less of harmala alkaloids; or (b) in divided preparations containing 2 mg or less of harmala alkaloids per recommended daily dose. References External links Harmine entry in TiHKAL • info Indole alkaloids Alkaloids found in Nicotiana Beta-Carbolines Monoamine oxidase inhibitors Phenol ethers
Harmine
[ "Chemistry" ]
2,200
[ "Alkaloids by chemical classification", "Indole alkaloids" ]
1,623,246
https://en.wikipedia.org/wiki/Harmaline
Harmaline is a fluorescent indole alkaloid from the group of harmala alkaloids and beta-carbolines. It is the partly hydrogenated form of harmine. Occurrence in nature Various plants contain harmaline including Peganum harmala (Syrian rue) as well as the hallucinogenic beverage ayahuasca, which is traditionally brewed using Banisteriopsis caapi. Present at 3% by dry weight, the harmala alkaloids may be extracted from Syrian rue seeds. Effects Harmaline is a central nervous system stimulant and a "reversible inhibitor of MAO-A (RIMA)". This means that the risk of a hypertensive crisis, a dangerous high blood pressure crisis from eating tyramine-rich foods such as cheese, is likely lower with harmaline than with irreversible MAOIs such as phenelzine. The harmala alkaloids are psychoactive in humans. Harmaline is shown to act as an acetylcholinesterase inhibitor. Harmaline also stimulates striatal dopamine release in rats at very high dose levels. Since harmaline is a reversible inhibitor of monoamine oxidase A, it could, in theory, induce both serotonin syndrome and hypertensive crises in combination with tyramine or with serotonergic or catecholaminergic drugs or prodrugs. Harmaline-containing plants and tryptamine-containing plants are used in ayahuasca brews. The inhibitory effects on monoamine oxidase allow dimethyltryptamine (DMT), the psychoactively prominent chemical in the mixture, to bypass the extensive first-pass metabolism it undergoes upon ingestion, allowing a psychologically active quantity of the chemical to exist in the brain for a perceivable period of time. Harmaline forces the anabolic metabolism of serotonin into N-acetylserotonin (normelatonin), and then to melatonin, the body's principal sleep-regulating hormone and a powerful antioxidant. United States Patent Number 5591738 describes a method for treating various chemical dependencies via the administration of harmaline and/or other beta-carbolines. A study has reported the antiviral activity of harmaline against Herpes Simplex Virus 1 and 2 (HSV-1 and HSV-2) by inhibiting immediate early transcription of the virus at noncytotoxic concentrations. Harmaline is known to act as a histamine N-methyltransferase inhibitor. This explains how harmaline elicits its wakefulness-promoting effects. Legal status Australia Harmala alkaloids are considered Schedule 9 prohibited substances under the Poisons Standard (October 2015). A Schedule 9 substance is a substance which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of Commonwealth and/or State or Territory Health Authorities. Canada Harmaline and harmalol are considered Schedule III controlled substances under the Controlled Drugs and Substances Act. Every person found to be in possession of a Schedule III drug is guilty of an indictable offence and liable to imprisonment for a term not exceeding three years; or for a first offence, guilty on summary conviction, to a fine not exceeding one thousand dollars or to imprisonment for a term not exceeding six months, or to both. Every person found to be trafficking a Schedule III drug is guilty of an indictable offence and liable to imprisonment for a term not exceeding ten years, or is guilty on summary conviction (first-time offenders) and liable to imprisonment for a term not exceeding eighteen months. 
See also Harmalol Ibogamine Tetrahydroharmine, a similar harmala alkaloid References Further reading Reversible inhibitors of MAO-A Antidepressants Beta-Carbolines Tryptamine alkaloids Entheogens Monoamine oxidase inhibitors Acetylcholinesterase inhibitors Phenol ethers Oneirogens
Harmaline
[ "Chemistry" ]
868
[ "Tryptamine alkaloids", "Alkaloids by chemical classification" ]
1,623,249
https://en.wikipedia.org/wiki/Tetrahydroharmine
Tetrahydroharmine (THH) is a fluorescent indole alkaloid that occurs in the tropical liana species Banisteriopsis caapi. THH, like other harmala alkaloids in B. caapi, namely harmaline and harmine, is a reversible inhibitor of monoamine oxidase A (RIMA), but it also inhibits the reuptake of serotonin. THH contributes to B. caapi's psychoactivity as a serotonin reuptake inhibitor. Legal Status Australia Harmala alkaloids are considered Schedule 9 prohibited substances under the Poisons Standard (October 2015). A Schedule 9 substance is a substance which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of Commonwealth and/or State or Territory Health Authorities. See also Ayahuasca Coronaridine References Further reading Beta-Carbolines Tryptamine alkaloids Serotonin reuptake inhibitors
Tetrahydroharmine
[ "Chemistry" ]
231
[ "Tryptamine alkaloids", "Alkaloids by chemical classification" ]
1,623,251
https://en.wikipedia.org/wiki/Interactive%20computing
In computer science, interactive computing refers to software which accepts input from the user as it runs. Interactive software includes commonly used programs, such as word processors or spreadsheet applications. By comparison, non-interactive programs operate without user intervention; examples of these include compilers and batch processing applications that are pre-programmed to run independently. Interactive computing focuses on real-time interaction ("dialog") between the computer and the operator, and the technologies that enable them. If the response of the computer system is complex enough, it is said that the system is conducting social interaction; some systems try to achieve this through the implementation of social interfaces. The nature of interactive computing, as well as its impact on users, is studied extensively in the field of human–computer interaction. History of interactive computing systems Ivan Sutherland is considered the father of interactive computing for his work on Sketchpad, the interactive display graphics program he developed in 1963. He later worked at the ARPA Information Processing Techniques Office under the direction of J. C. R. Licklider. There he facilitated ARPA's research grant to Douglas Engelbart for developing the NLS system at SRI, based on his visionary manifesto published in a 1962 report, in which Engelbart envisioned interactive computing as a vehicle for user interaction with computers, with each other, and with their knowledge, all in a vast virtual information space. In a 1965 report, Engelbart published his early experiments with pointing devices, including the computer mouse, for composing and editing on interactive display workstations. Engelbart's work on interactive computing at SRI migrated directly to Xerox PARC, from there to Apple, and out into the mainstream. Thus, the tree of evolution for interactive computing generally traces back to Engelbart's lab at SRI. In December 2008, on the 40th anniversary of his 1968 demo, SRI sponsored a public commemorative event in his honor. Current research The need for constant user interaction in interactive computing systems makes it different in many ways from batch processing systems. Areas of current research include the design of novel programming models and achieving information security and reliability in interactive computing. IPython is a software system for scientific interactive computing, supporting data visualization, event-driven programming and a number of related GUI toolkits. The Georgia Institute of Technology's School of Interactive Computing formed in 2007, offering master's and doctoral degrees via collaboration with more than 40 faculty. The Tangible Media Group of MIT, led by Professor Hiroshi Ishii, seeks to seamlessly couple the dual world of bits and atoms by presenting a dynamic physical form to computation. See also Interactivity Interactive computation Processing modes J. C. R. Licklider Douglas Engelbart Ubiquitous computing References Human–computer interaction
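As a minimal illustration of the distinction drawn above between interactive and batch operation, the following Python sketch accepts commands from the operator while it runs; the prompt text and the "quit" convention are arbitrary choices for this example.

```python
def interactive_loop() -> None:
    """Read commands from the user until they type 'quit'.

    Unlike a batch job, whose inputs are fixed before it starts, this
    program blocks on input() and responds to each command as it arrives.
    """
    while True:
        command = input("> ")            # waits for the operator
        if command.strip().lower() == "quit":
            break
        print(f"echo: {command}")        # immediate feedback closes the dialog loop


if __name__ == "__main__":
    interactive_loop()
```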
Interactive computing
[ "Engineering" ]
559
[ "Human–computer interaction", "Human–machine interaction" ]
1,623,440
https://en.wikipedia.org/wiki/MW%2050
MW 50 (Methanol-Wasser 50) was a 50-50 mixture of methanol and water (German: Wasser) that was often sprayed into the supercharger of World War II aircraft engines primarily for its anti-detonation effect, allowing the use of increased boost pressures. Secondary effects were cooling of the engine and charge cooling. Higher boost was only effective at altitudes below the full-throttle height, where the supercharger could still provide additional boost pressure that was otherwise wasted, while the smaller secondary effects were useful even above that altitude. Composition MW 50 is something of a misnomer, as it is actually a mixture of three fluids: 50% methanol acting primarily to achieve optimum anti-detonant effect, secondarily as an anti-freeze; 49.5% water; and 0.5% Schutzöl 39, an oil-based anti-corrosion additive. The similar MW 30 increased the water to 69.5% and decreased methanol to 30%. This increased the cooling performance but made the mixture easier to freeze (at -18 degrees C, as opposed to -50 C for MW 50). As a result, this mixture was intended to be used for lower-altitude missions. EW 30 and EW 50 mixtures also existed, which substituted methanol with ethanol; in an emergency, pure water could be used. Effect The effect of MW 50 injection could be dramatic. Simply turning on the system allowed the engine to pull in more air due to the charge cooling effect, boosting performance by about on the BMW 801 and DB 605. However, the MW 50 also allowed the supercharger to be run at much higher boost levels as well, for a combined increase of . At sea level, this allowed the engine to run at over . MW 50 was fully effective up to about , above which it added only about 4% extra power, due largely to charge cooling. Time limits The increased power could be used for a maximum of 10 minutes at a time, much like the American war emergency power setting for their own aircraft, with at least five minutes between each application. Aircraft generally carried enough MW 50 for about two ten-minute periods of use, allowing them to increase their climb rate and level speed in combat for interception missions. Applications Fittings for MW 50 first appeared on the BMW 801D in 1942, but it never went into production for this engine because the cylinder heads developed micro-cracks when MW 50 was used. Instead, the DB 605-engined later versions of the Messerschmitt Bf 109 were fitted with an MW 50 injection system, beginning in early 1944. Later engine designs all included the fittings as well, notably the Junkers Jumo 213, which relied on it to increase non-boosted performance and tune the supercharger for higher altitudes. Other systems MW 50 was not the only charge cooling system to be used by the Germans. Some engines dedicated to high altitude included an intercooler instead, as they would be needing the cooling for longer periods of time. The 801D also included the ability to spray gasoline into the supercharger instead of MW 50. (The Erhöhte Notleistung [Increased Emergency Performance] system.) While this was not as effective, it did increase boost without the complexity of the additional tanking and plumbing. Many of the late-war engines also included a system for high-altitude boost, GM-1, which added oxygen to the fuel/air mix by injecting nitrous oxide into the supercharger instead of employing higher boost levels. See also Water injection (engines) War emergency power References Notes Bibliography Bridgman, L, (ed.) (1989) Jane's fighting aircraft of World War II. Crescent. 
Aircraft engines
MW 50
[ "Technology" ]
789
[ "Engines", "Aircraft engines" ]
1,623,599
https://en.wikipedia.org/wiki/Trimethylglycine
Trimethylglycine is an amino acid derivative with the formula (CH3)3N+CH2CO2−. A colorless, water-soluble solid, it occurs in plants. Trimethylglycine is a zwitterion: the molecule contains both a quaternary ammonium group and a carboxylate group. Trimethylglycine was the first betaine discovered; originally it was simply called betaine because it was discovered in sugar beets (Beta vulgaris subsp. vulgaris). Several other betaines are now known. Medical uses Betaine, sold under the brand name Cystadane, is indicated for the adjunctive treatment of homocystinuria, involving deficiencies or defects in cystathionine beta-synthase (CBS), 5,10-methylene-tetrahydrofolate reductase (MTHFR), or cobalamin cofactor metabolism (cbl). The most common side effect is elevated levels of methionine in the blood. The EU has authorized the health claim that betaine "contributes to normal homocysteine metabolism". Biological function Biosynthesis In most organisms, glycine betaine is biosynthesized by oxidation of choline. The intermediate, betaine aldehyde, is generated by the action of the enzyme mitochondrial choline oxidase (choline dehydrogenase, EC 1.1.99.1). In mice, betaine aldehyde is further oxidised in the mitochondria by the enzyme betaine-aldehyde dehydrogenase (EC 1.2.1.8). In humans, this oxidation is performed by a nonspecific cytosolic aldehyde dehydrogenase enzyme (EC 1.2.1.3). Trimethylglycine is produced by some cyanobacteria, as established by 13C nuclear magnetic resonance. It is proposed to protect some enzymes against inhibition by NaCl and KCl. Osmolyte Trimethylglycine is an osmolyte, a water-soluble salt-like substance. Sugar beet was cultivated from sea beet, which requires osmolytes in order to survive the salty soils of coastal areas. Trimethylglycine also occurs in high concentrations (~10 mM) in many marine invertebrates, such as crustaceans and molluscs. It serves as an appetitive attractant to generalist carnivores such as the predatory sea slug Pleurobranchaea californica. Methyl donor Trimethylglycine is a cofactor in methylation, a process that occurs in all mammals. These processes include the synthesis of neurotransmitters such as dopamine and serotonin. Methylation is also required for the biosynthesis of melatonin and the electron transport chain constituent coenzyme Q10, as well as the methylation of DNA for epigenetics. One step in the methylation cycle is the remethylation of homocysteine, a compound which is naturally generated during demethylation of the essential amino acid methionine. Despite its natural formation, homocysteine has been linked to inflammation, depression, specific forms of dementia, and various types of vascular disease. The remethylation process that detoxifies homocysteine and converts it back to methionine can occur via either of two pathways. The pathway present in virtually all cells involves the enzyme methionine synthase (MS), which requires vitamin B12 as a cofactor, and also depends indirectly on folate and other B vitamins. The second pathway (restricted to liver and kidney in most mammals) involves betaine-homocysteine methyltransferase (BHMT) and requires trimethylglycine as a cofactor. During normal physiological conditions, the two pathways contribute equally to removal of homocysteine in the body. Further degradation of betaine, via the enzyme dimethylglycine dehydrogenase, produces folate, thus contributing back to methionine synthase. 
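For illustration, the reaction catalysed by BHMT in the second pathway can be written as the transfer of a single methyl group from betaine to homocysteine (a standard textbook formulation, shown here for clarity rather than taken from the article):

\[
\text{betaine} + \text{L-homocysteine} \xrightarrow{\text{BHMT}} \text{N,N-dimethylglycine} + \text{L-methionine}
\]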
Betaine is thus involved in the synthesis of many biologically important molecules, and may be even more important in situations where the major pathway for the regeneration of methionine from homocysteine has been compromised by genetic polymorphisms such as mutations in the MS gene. Agriculture and aquaculture Trimethylglycine is used as a supplement for both animals and plants. Processing sucrose from sugar beets yields glycine betaine as a byproduct. The economic significance of trimethylglycine is comparable to that of sugar in sugar beets. Salmon farms apply trimethylglycine to relieve the osmotic pressure on the fishes' cells when workers transfer the fish from freshwater to saltwater. Trimethylglycine supplementation decreases the amount of adipose tissue in pigs; however, research in human subjects has shown no effect on body weight, body composition, or resting energy expenditure. Nutrition Nutritionally, betaine is not needed when sufficient dietary choline is present for synthesis. When insufficient betaine is available, elevated homocysteine levels and decreased SAM levels in blood occur. Supplementation of betaine in this situation would resolve these blood marker issues, but not compensate for other functions of choline. Dietary supplement Although trimethylglycine supplementation decreases the amount of adipose tissue in pigs, research on human subjects has shown no effect on body weight, body composition, or resting energy expenditure when used in conjunction with a low calorie diet. The US Food and Drug Administration (FDA) approved betaine trimethylglycine (also known by the brand name Cystadane) for the treatment of homocystinuria, a disease caused by abnormally high homocysteine levels at birth. Trimethylglycine is also used as the hydrochloride salt (marketed as betaine hydrochloride or betaine HCl). Betaine hydrochloride was sold over-the-counter (OTC) as a purported gastric aid in the United States. US Code of Federal Regulations, Title 21, Section 310.540, which became effective in November 1993, banned the marketing of betaine hydrochloride as a digestive aid due to insufficient evidence to classify it as "generally recognized as safe and effective" for that specified use. Side effects Trimethylglycine supplementation may cause diarrhea, bloating, cramps, dyspepsia, nausea or vomiting. Although rare, it can also cause excessive increases in serum methionine concentrations in the brain, which may lead to cerebral edema, a life-threatening condition. Trimethylglycine supplementation lowers homocysteine but also raises LDL-cholesterol in obese individuals and renal patients. References External links USDA Database for the Choline Content of Common Foods – including the data on choline metabolites, such as betaine, in 434 food items. Amino acid derivatives Alpha-Amino acids Food additives Orphan drugs Quaternary ammonium compounds Zwitterions
Trimethylglycine
[ "Physics", "Chemistry" ]
1,485
[ "Ions", "Zwitterions", "Matter" ]
1,623,707
https://en.wikipedia.org/wiki/Underlay
Underlay may refer to flooring or roofing materials, bed padding, or a musical notation. Flooring Underlay or underlayment generally refers to a layer of cushioning made of materials such as sponge rubber, foam, felt, crumb rubber, or recycled plastic; this material is laid beneath carpeting to provide comfort underfoot, to reduce wear on the carpet, and to provide insulation against sound, moisture, and heat. In general, it is a layer which is underneath another layer, so underlay is thus also used to describe many different surface-covering products. In vinyl flooring or "linoleum", the underlay is the thin layer of plywood that is fastened over the structural subfloor to create a uniform, smooth platform for the sheet vinyl. For laminated wood flooring, the underlay provides a “vapor barrier” to prevent moisture from coming through the floor of the home and then migrating into the flooring; the underlayment may also have noise-dampening properties. A self-leveling underlay is a concrete product that can be pumped in liquid form onto the floor in order to create a level floor. Carpet Underlay Popular carpet underlays are PU foam, crumb rubber, felt, and cork. More recent innovations in underlay materials include recycled plastic underlay, which can be made from plastic bottles and other single-use plastics for reduced environmental impact. Carpet underlays are typically 6-12 mm thick. They primarily provide foot comfort, but they also reduce carpet wear and provide sound and thermal insulation. Underfloor Heating Underlay Underlays are also available for underfloor heating, in either central heating pipe or electric applications, which allow the heat to transmit through the underlay and carpet. Specialist underlays include ThermalStream, which has perforated air holes to allow faster heat transfer. Wood floor Underlay for timber floors is typically 3mm closed cell plastic foam. This primarily provides sound insulation and a vapour barrier. A watertight vapour barrier should only be used, though, when neither condensation from humid air coming from beneath nor water spillage coming from above is to be expected. Especially if there is spillage, it will otherwise take a much longer time for the moisture to disperse and evaporate if it is prevented by the barrier from seeping through the wooden flooring. This might make it necessary to raise the temperature of the floor to facilitate evaporation and prevent rot. Roofing Underlay is also the term for the material under roofing tiles; this roofing membrane is often made of rubber and is used to seal the roof and prevent leakage. Underlayment used with roofing shingles provides a second layer of water proofing to prevent leaks and is called tar paper, roofing felt, or since the 1990s synthetic underlayment. Roofing underlays can be breathable or non-breathable depending on the ventilation requirements of the building. Bedding Bedding underlay (or mattress overlay) is a thick, extra layer of padding between the bed mattress and bedding. Underlays are designed to increase comfort and support, while extending the life of the mattress (or mattress protector). Common underlay materials include wool, foam, and latex. Music In music, underlay refers to text intended for vocalization – positioned either directly or indirectly under notes on a musical staff. References Rugs and carpets Floors
Underlay
[ "Engineering" ]
696
[ "Structural engineering", "Floors" ]
1,623,847
https://en.wikipedia.org/wiki/Stickland%20fermentation
Stickland fermentation, or the Stickland reaction, is a chemical reaction that involves the coupled oxidation and reduction of amino acids to organic acids. The electron donor amino acid is oxidised to a volatile carboxylic acid one carbon atom shorter than the original amino acid. For example, alanine with a three-carbon chain is converted to acetate with two carbons. The electron acceptor amino acid is reduced to a volatile carboxylic acid the same length as the original amino acid. For example, glycine with two carbons is converted to acetate. In this way, amino acid fermenting microbes can avoid using hydrogen ions as electron acceptors to produce hydrogen gas. Amino acids can be Stickland acceptors, Stickland donors, or act as both donor and acceptor. Only histidine cannot be fermented by Stickland reactions, and is oxidised. With a typical amino acid mix, there is a 10% shortfall in Stickland acceptors, which results in hydrogen production. Under very low hydrogen partial pressures, increased uncoupled anaerobic oxidation has also been observed. It occurs in proteolytic clostridia such as C. perfringens, Clostridioides difficile, C. sporogenes, and C. botulinum. Additionally, sarcosine and betaine can act as electron acceptors. References Biochemical reactions Name reactions
Stickland fermentation
[ "Chemistry", "Biology" ]
301
[ "Biotechnology stubs", "Biochemical reactions", "Name reactions", "Biochemistry stubs", "Biochemistry" ]
1,623,938
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Lov%C3%A1sz
László Lovász (; born March 9, 1948) is a Hungarian mathematician and professor emeritus at Eötvös Loránd University, best known for his work in combinatorics, for which he was awarded the 2021 Abel Prize jointly with Avi Wigderson. He was the president of the International Mathematical Union from 2007 to 2010 and the president of the Hungarian Academy of Sciences from 2014 to 2020. In graph theory, Lovász's notable contributions include the proofs of Kneser's conjecture and the Lovász local lemma, as well as the formulation of the Erdős–Faber–Lovász conjecture. He is also one of the eponymous authors of the LLL lattice reduction algorithm. Early life and education Lovász was born on March 9, 1948, in Budapest, Hungary. Lovász attended the Fazekas Mihály Gimnázium in Budapest. He won three gold medals (1964–1966) and one silver medal (1963) at the International Mathematical Olympiad. He also participated in a Hungarian game show about math prodigies. Paul Erdős helped introduce Lovász to graph theory at a young age. Lovász received his Candidate of Sciences (C.Sc.) degree in 1970 at the Hungarian Academy of Sciences. His advisor was Tibor Gallai. He received his first doctorate (Dr.Rer.Nat.) degree from Eötvös Loránd University in 1971 and his second doctorate (Dr.Math.Sci.) from the Hungarian Academy of Sciences in 1977. Career From 1971 to 1975, Lovász worked at Eötvös Loránd University as a research associate. From 1975 to 1978, he was a docent at the University of Szeged, and then served as a professor and the Chair of Geometry there until 1982. He then returned to Eötvös Loránd University as a professor and the Chair of Computer Science until 1993. Lovász was a professor at Yale University from 1993 to 1999, when he moved to the Microsoft Research Center where he worked as a senior researcher until 2006. He returned to Eötvös Loránd University where he was the director of the Mathematical Institute (2006–2011) and a professor in the Department of Computer Science (2006–2018). He retired in 2018. Lovász was the president of the International Mathematical Union between January 1, 2007, and December 31, 2010. In 2014, he was elected the president of the Hungarian Academy of Sciences (MTA) and served until 2020. Research In collaboration with Erdős in the 1970s, Lovász developed complementary methods to Erdős's existing probabilistic graph theory techniques. This included the Lovász local lemma, which has become a standard technique for proving the existence of rare graphs. Also in graph theory, Lovász proved Kneser's conjecture and helped formulate the Erdős–Faber–Lovász conjecture. With Arjen Lenstra and Hendrik Lenstra in 1982, Lovász developed the LLL algorithm for approximating points in lattices and reducing their bases. The LLL algorithm has been described by Gil Kalai as "one of the fundamental algorithms" and has been used in several practical applications, including polynomial factorization algorithms and cryptography. Donald Knuth named Lovász as one of his combinatorial heroes in a 2023 interview. Awards Lovász was awarded the Pólya Prize in 1979, the Fulkerson Prize in 1982 and 2012, the Brouwer Medal in 1993, the Wolf Prize and Knuth Prize in 1999, the Gödel Prize in 2001, the John von Neumann Theory Prize in 2006, the in 2007, the Széchenyi Prize in 2008, and the Kyoto Prize in Basic Sciences in 2010. 
In March 2021, he shared the Abel Prize with Avi Wigderson from the Institute for Advanced Study "for their foundational contributions to theoretical computer science and discrete mathematics, and their leading role in shaping them into central fields of modern mathematics". In 2017, he received the John von Neumann Professor title from the Budapest University of Technology and Economics (BME) and the John von Neumann Computer Society. In 2021, he received Hungary's highest order, the Hungarian Order of Saint Stephen. He was elected a foreign member of the Royal Netherlands Academy of Arts and Sciences in 2006 and the Royal Swedish Academy of Sciences in 2007, and an honorary member of the London Mathematical Society in 2009. Lovász was elected as a member of the U.S. National Academy of Sciences in 2012. In 2012 he became a fellow of the American Mathematical Society. Personal life Lovász is married to fellow mathematician Katalin Vesztergombi, with whom he participated in a program for high school students gifted in mathematics, and has four children. He is a dual citizen of Hungary and the United States. Books See also Topological combinatorics Lovász conjecture Geometry of numbers Perfect graph theorem Greedoid Bell number Lovász number Graph limit Lovász local lemma Notes External links Website of László Lovász 1948 births Living people 20th-century American mathematicians 20th-century Hungarian mathematicians 21st-century American mathematicians 21st-century Hungarian mathematicians Abel Prize laureates American computer scientists Brouwer Medalists Combinatorialists European Research Council grantees Fellows of the American Mathematical Society Foreign members of the Russian Academy of Sciences Gödel Prize laureates Graph theorists Hungarian computer scientists Hungarian emigrants to the United States Institute for Advanced Study visiting scholars International Mathematical Olympiad participants John von Neumann Theory Prize winners Knuth Prize laureates Kyoto laureates in Basic Sciences Members of the Hungarian Academy of Sciences Members of the Royal Netherlands Academy of Arts and Sciences Members of the Royal Swedish Academy of Sciences Members of the United States National Academy of Sciences Wolf Prize in Mathematics laureates Yale University faculty Network scientists Presidents of the International Mathematical Union
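The Research section above mentions the LLL lattice basis reduction algorithm. Below is a minimal, illustrative Python sketch of the textbook algorithm (with the usual parameter δ = 3/4); the function names and the example basis are invented for demonstration, it recomputes the Gram–Schmidt data after every update for clarity rather than speed, and it is not any particular published implementation.

```python
# Minimal illustrative sketch of LLL lattice basis reduction (delta = 3/4).
# Recomputes Gram-Schmidt data after every update: simple, not efficient.
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(basis):
    """Return the orthogonalized vectors and the mu coefficients."""
    ortho = []
    mu = [[Fraction(0)] * len(basis) for _ in basis]
    for i, b in enumerate(basis):
        v = [Fraction(x) for x in b]
        for j in range(i):
            mu[i][j] = dot(b, ortho[j]) / dot(ortho[j], ortho[j])
            v = [vi - mu[i][j] * oj for vi, oj in zip(v, ortho[j])]
        ortho.append(v)
    return ortho, mu

def lll_reduce(basis, delta=Fraction(3, 4)):
    basis = [[Fraction(x) for x in b] for b in basis]
    n, k = len(basis), 1
    while k < n:
        ortho, mu = gram_schmidt(basis)
        # Size reduction: force |mu[k][j]| <= 1/2 for all j < k.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q != 0:
                basis[k] = [bk - q * bj for bk, bj in zip(basis[k], basis[j])]
                ortho, mu = gram_schmidt(basis)
        # Lovász condition decides whether to advance or to swap and backtrack.
        lhs = dot(ortho[k], ortho[k])
        rhs = (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1])
        if lhs >= rhs:
            k += 1
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            k = max(k - 1, 1)
    return [[int(x) for x in b] for b in basis]

# Hypothetical example: reduce a small 3-dimensional integer basis.
print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

Production implementations, such as those in computer algebra systems, typically replace the exact rational Gram–Schmidt step with floating-point arithmetic and careful error management to keep the running time practical on large bases.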
László Lovász
[ "Mathematics" ]
1,186
[ "Graph theory", "Combinatorics", "Combinatorialists", "Mathematical relations", "Graph theorists" ]
1,623,956
https://en.wikipedia.org/wiki/Charm%20%28quantum%20number%29
Charm (symbol C) is a flavour quantum number representing the difference between the number of charm quarks (n_c) and charm antiquarks (n_c̄) that are present in a particle: C = n_c − n_c̄. By convention, the sign of flavour quantum numbers agrees with the sign of the electric charge carried by the quarks of corresponding flavour. The charm quark, which carries an electric charge (Q) of +2/3, therefore carries a charm of +1. The charm antiquarks have the opposite charge (Q = −2/3) and flavour quantum number (C = −1). As with any flavour-related quantum numbers, charm is preserved under strong and electromagnetic interaction, but not under weak interaction (see CKM matrix). For first-order weak decays, that is, processes involving only one quark decay, charm can only vary by 1 (ΔC = ±1). Since first-order processes are more common than second-order processes (involving two quark decays), this can be used as an approximate "selection rule" for weak decays. See also Quantum number References Further reading Lessons in Particle Physics Luis Anchordoqui and Francis Halzen, University of Wisconsin, 18th Dec. 2009 Quarks Flavour (particle physics)
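As a concrete illustration of the bookkeeping defined above, the short sketch below counts quark content and evaluates a flavour quantum number such as charm. It is a toy example with invented function names, not a physics library.

```python
# Toy illustration: compute a flavour quantum number from quark content.
# A particle is described by a list of quarks; antiquarks carry a "bar" suffix.
def flavour_number(quarks, flavour):
    """Number of quarks of one flavour minus the number of its antiquarks."""
    return quarks.count(flavour) - quarks.count(flavour + "bar")

d_meson_plus = ["c", "dbar"]   # D+ meson: one charm quark, one down antiquark
print(flavour_number(d_meson_plus, "c"))   # charm C = +1

j_psi = ["c", "cbar"]          # J/psi: charm and anticharm cancel
print(flavour_number(j_psi, "c"))          # charm C = 0
```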
Charm (quantum number)
[ "Physics" ]
253
[ "Particle physics stubs", "Particle physics" ]
1,624,084
https://en.wikipedia.org/wiki/Pospiviroid
Pospiviroid is a genus of ssRNA viroids that infects plants, most commonly tubers. It belongs to the family Pospiviroidae. The first viroid discovered was a pospiviroid, the PSTVd species (potato spindle tuber viroid). Taxonomy Pospiviroid has 10 virus species References External links ICTV Report: Pospiviroidae Viroids Virus genera
Pospiviroid
[ "Biology" ]
91
[ "Virus stubs", "Viruses" ]
1,624,181
https://en.wikipedia.org/wiki/BBN%20Butterfly
The BBN Butterfly was a massively parallel computer built by Bolt, Beranek and Newman in the 1980s. It was named for the "butterfly" multi-stage switching network around which it was built. Each machine had up to 512 CPUs, each with local memory, which could be connected to allow every CPU access to every other CPU's memory, although with a substantially greater latency (roughly 15:1) than for its own. The CPUs were commodity microprocessors. The memory address space was shared. The first generation used Motorola 68000 processors, followed by a 68010 version. The Butterfly interconnect was developed specifically for this computer. The second- or third-generation GP-1000 models used Motorola 68020s and scaled to 256 CPUs. The later TC-2000 models used Motorola MC88100s and scaled to 512 CPUs. The Butterfly was initially developed as the Voice Funnel, a router for the ST-II protocol intended for carrying voice and video over IP networks. The Butterfly hardware was later used for the Butterfly Satellite IMP (BSAT) packet switch of DARPA's Wideband Packet Satellite Network, which operated at multiple sites around the US over a shared 3 Mbit/s broadcast satellite channel. In the late 1980s, this network became the Terrestrial Wideband Network, based on terrestrial T1 circuits instead of a shared broadcast satellite channel, and the BSAT became the Wideband Packet Switch (WPS). Another DARPA-sponsored project at BBN produced the Butterfly Multiprocessor Internet Gateway (Internet Router) to interconnect different types of networks at the IP layer. Like the BSAT, the Butterfly Gateway broke the contention of a shared bus minicomputer architecture that had been in use for Internet Gateways by combining the routing computations and I/O at the network interfaces and using the Butterfly's switch fabric to provide the network interconnections. This resulted in significantly higher link throughputs. The Butterfly began with a proprietary operating system called Chrysalis, but moved to a Mach kernel operating system in 1989. While the memory access time was non-uniform, the machine had SMP memory semantics, and could be operated as a symmetric multiprocessor. The largest configured system with 128 processors was at the University of Rochester Computer Science Department. Most delivered systems had about 16 processors. No known configurations appear to be in museums. At least one system is thought to be sitting within a DARPA autonomous vehicle. TotalView, the parallel program debugger developed for the Butterfly, outlived the platform and was ported to a number of other massively parallel machines. See also Pluribus, an earlier multiprocessor designed at BBN. References External links BBN at Index of Dead Supercomputer Projects - Apparent source for much of this article's text. Massively parallel computers Supercomputers
BBN Butterfly
[ "Technology" ]
594
[ "Supercomputers", "Supercomputing" ]
1,624,188
https://en.wikipedia.org/wiki/Ozonolysis
In organic chemistry, ozonolysis is an organic reaction where the unsaturated bonds are cleaved with ozone (O3). Multiple carbon–carbon bonds are replaced by carbonyl (C=O) groups, such as aldehydes, ketones, and carboxylic acids. The reaction is predominantly applied to alkenes, but alkynes and azo compounds are also susceptible to cleavage. The outcome of the reaction depends on the type of multiple bond being oxidized and the work-up conditions. Detailed procedures have been reported. Ozonolysis of alkenes Alkenes can be oxidized with ozone to form alcohols, aldehydes or ketones, or carboxylic acids. In a typical procedure, ozone is bubbled through a solution of the alkene in methanol at until the solution takes on a characteristic blue color, which is due to unreacted ozone. Industry, however, recommends temperatures near . This color change indicates complete consumption of the alkene. Alternatively, various other reagents can be used as indicators of this endpoint by detecting the presence of ozone. If ozonolysis is performed by introducing a stream of ozone-enriched oxygen through the reaction mixture, the effluent gas can be directed through a potassium iodide solution. When the solution has stopped absorbing ozone, the excess ozone oxidizes the iodide to iodine, which can easily be observed by its violet color. For closer control of the reaction itself, an indicator such as Sudan Red III can be added to the reaction mixture. Ozone reacts with this indicator more slowly than with the intended ozonolysis target. The ozonolysis of the indicator, which causes a noticeable color change, only occurs once the desired target has been consumed. If the substrate has two alkenes that react with ozone at different rates, one can choose an indicator whose own oxidation rate is intermediate between them, and therefore stop the reaction when only the most susceptible alkene in the substrate has reacted. Otherwise, the presence of unreacted ozone in solution (seeing its blue color) or in the bubbles (via iodide detection) only indicates when all alkenes have reacted. After completing the addition, a reagent is then added to convert the intermediate ozonide to a carbonyl derivative. Reductive work-up conditions are far more commonly used than oxidative conditions. The use of triphenylphosphine, thiourea, zinc dust, or dimethyl sulfide produces aldehydes or ketones, while the use of sodium borohydride produces alcohols (the R groups can also be hydrogen). The use of hydrogen peroxide can produce carboxylic acids. Amine N-oxides produce aldehydes directly. Other functional groups, such as benzyl ethers, can also be oxidized by ozone. It has been proposed that small amounts of acid may be generated during the reaction from oxidation of the solvent, so pyridine is sometimes used to buffer the reaction. Dichloromethane is often used as a 1:1 cosolvent to facilitate timely cleavage of the ozonide. Azelaic acid and pelargonic acid are produced from ozonolysis of oleic acid on an industrial scale. An example is the ozonolysis of eugenol converting the terminal alkene to an aldehyde: By controlling the reaction/workup conditions, unsymmetrical products can be generated from symmetrical alkenes: using TsOH, sodium bicarbonate (NaHCO3), and dimethyl sulfide (DMS) gives an aldehyde and a dimethyl acetal; using acetic anhydride (Ac2O) and triethylamine (Et3N) gives a methyl ester and an aldehyde; using TsOH, Ac2O, and Et3N gives a methyl ester and a dimethyl acetal. 
Reaction mechanism In the generally accepted mechanism proposed by Rudolf Criegee in 1953, the alkene and ozone form an intermediate molozonide in a 1,3-dipolar cycloaddition. Next, the molozonide reverts to its corresponding carbonyl oxide (also called the Criegee intermediate or Criegee zwitterion) and aldehyde or ketone (3) in a retro-1,3-dipolar cycloaddition. The oxide and aldehyde or ketone react again in a 1,3-dipolar cycloaddition, producing a relatively stable ozonide intermediate, known as a trioxolane (4). Evidence for this mechanism is found in isotopic labeling. When 17O-labelled benzaldehyde reacts with carbonyl oxides, the label ends up exclusively in the ether linkage of the ozonide. There is still dispute over whether the molozonide collapses via a concerted or radical process; this may also exhibit a substrate dependence. History Christian Friedrich Schönbein, who discovered ozone in 1840, also did the first ozonolysis: in 1845, he reported that ethylene reacts with ozone – after the reaction, neither the smell of ozone nor the smell of ethylene was perceivable. The ozonolysis of alkenes is sometimes referred to as "Harries ozonolysis", because some attribute this reaction to Carl Dietrich Harries. Before the advent of modern spectroscopic techniques, the ozonolysis was an important method for determining the structure of organic molecules. Chemists would ozonize an unknown alkene to yield smaller and more readily identifiable fragments. Ozonolysis of alkynes Ozonolysis of alkynes generally gives an acid anhydride or diketone product, not complete fragmentation as for alkenes. A reducing agent is not needed for these reactions. The mechanism is unknown. If the reaction is performed in the presence of water, the anhydride hydrolyzes to give two carboxylic acids. Other substrates Although rarely examined, azo compounds () are susceptible to ozonolysis. Nitrosamines () are produced. Applications The main use of ozonolysis is for the conversion of unsaturated fatty acids to value-added derivatives. Ozonolysis of oleic acid is an important route to azelaic acid. The coproduct is nonanoic acid: Erucic acid is a precursor to brassylic acid, a C13-dicarboxylic acid that is used to make specialty polyamides and polyesters. The conversion entails ozonolysis, which selectively cleaves the C=C bond in erucic acid: A number of drugs and their intermediates have been produced by ozonolysis. The use of ozone in the pharmaceutical industry is difficult to discern owing to confidentiality considerations. Ozonolysis as an analytical method Ozonolysis has been used to characterize the structure of some polyolefins. Early experiments showed that the repeat unit in natural rubber was shown to be isoprene. Occurrence Ozonolysis can be a serious problem, known as ozone cracking where traces of the gas in an atmosphere degrade elastomers, such as natural rubber, polybutadiene, styrene-butadiene, and nitrile rubber. Ozonolysis produces surface ketone groups that can cause further gradual degradation via Norrish reactions if the polymer is exposed to light. To minimize this problem, many polyolefin-based products are treated with antiozonants. Ozone cracking is a form of stress corrosion cracking where active chemical species attack products of a susceptible material. The rubber product must be under tension for crack growth to occur. 
Ozone cracking was once commonly seen in the sidewalls of tires, where it could expand to cause a dangerous blowout, but is now rare owing to the use of modern antiozonants. Other means of prevention include replacing susceptible rubbers with resistant elastomers such as polychloroprene, EPDM or Viton. Safety The use of ozone in the pharmaceutical industry is limited by safety considerations. See also Polymer degradation Lemieux–Johnson oxidation – an alternative system using periodate and osmium tetroxide Trametes hirsuta, a biotechnological alternative to ozonolysis. References Organic oxidation reactions Cycloadditions
Ozonolysis
[ "Chemistry" ]
1,745
[ "Organic oxidation reactions", "Organic redox reactions", "Organic reactions" ]
1,624,240
https://en.wikipedia.org/wiki/CAR%20T%20cell
In biology, chimeric antigen receptors (CARs)—also known as chimeric immunoreceptors, chimeric T cell receptors or artificial T cell receptors—are receptor proteins that have been engineered to give T cells the new ability to target a specific antigen. The receptors are chimeric in that they combine both antigen-binding and T cell activating functions into a single receptor. CAR T cell therapy uses T cells engineered with CARs to treat cancer. T cells are modified to recognize cancer cells and destroy them. The standard approach is to harvest T cells from patients, genetically alter them, then infuse the resulting CAR T cells into patients to attack their tumors. CAR T cells can be derived either autologously from T cells in a patient's own blood or allogeneically from those of a donor. Once isolated, these T cells are genetically engineered to express a specific CAR, using a vector derived from an engineered lentivirus such as HIV (see Lentiviral vector in gene therapy). The CAR programs the T cells to target an antigen present on the tumor cell surface. For safety, CAR T cells are engineered to be specific to an antigen that is expressed on a tumor cell but not on healthy cells. After the modified T cells are infused into a patient, they act as a "living drug" against cancer cells. When they come in contact with their targeted antigen on a cell's surface, T cells bind to it and become activated, then proceed to proliferate and become cytotoxic. CAR T cells destroy cells through several mechanisms, including extensive stimulated cell proliferation, increasing the degree to which they are toxic to other living cells (cytotoxicity), and by causing the increased secretion of factors that can affect other cells such as cytokines, interleukins and growth factors. The surface of CAR T cells can bear either of two types of co-receptors, CD4 and CD8. These two cell types, called CD4+ and CD8+, respectively, have different and interacting cytotoxic effects. Therapies employing a 1-to-1 ratio of the cell types apparently provide synergistic antitumor effects. History The first chimeric receptors containing portions of an antibody and the T cell receptor were described in 1987 by Yoshihisa Kuwana et al. at Fujita Health University and Kyowa Hakko Kogyo, Co. Ltd. in Japan, and independently in 1989 by Gideon Gross and Zelig Eshhar at the Weizmann Institute in Israel. Originally termed "T-bodies", these early approaches combined an antibody's ability to specifically bind to diverse targets with the constant domains of the TCR-α or TCR-β proteins. In 1991, chimeric receptors containing the intracellular signaling domain of CD3ζ were shown to activate T cell signaling by Arthur Weiss at the University of California, San Francisco. This work prompted CD3ζ intracellular domains to be added to chimeric receptors with antibody-like extracellular domains, commonly single-chain variable fragment (scFv) domains, as well as proteins such as CD4, subsequently termed first generation CARs. A first generation CAR containing a CD4 extracellular domain and a CD3ζ intracellular domain was used in the first clinical trial of chimeric antigen receptor T cells by the biotechnology company Cell Genesys in the mid 1990s, allowing adoptively transferred T cells to target HIV infected cells, although it failed to show any clinical improvement. 
Similar early clinical trials of CAR T cells in solid tumors in the 1990s using first generation CARs targeting solid tumor antigens such as MUC1 did not show long-term persistence of the transferred T cells or result in significant remissions. In the early 2000s, co-stimulatory domains such as CD28 or 4-1BB were added to the first generation CAR's CD3ζ intracellular domain. Termed second generation CARs, these constructs showed greater persistence and improved tumor clearance in pre-clinical models. Clinical trials in the early 2010s using second generation CARs targeting CD19, a protein expressed by normal B cells as well as B-cell leukemias and lymphomas, by investigators at the NCI, University of Pennsylvania, and Memorial Sloan Kettering Cancer Center demonstrated the clinical efficacy of CAR T cell therapies and resulted in complete remissions in many heavily pre-treated patients. These trials ultimately led in the US to the FDA's first two approvals of CAR T cells in 2017, those for tisagenlecleucel (Kymriah), marketed by Novartis originally for B-cell precursor acute lymphoblastic leukemia (B-ALL), and axicabtagene ciloleucel (Yescarta), marketed by Kite Pharma originally for diffuse large B-cell lymphoma (DLBCL). There are now six FDA-approved CAR T therapies. Production The first step in the production of CAR T-cells is the isolation of T cells from human blood. CAR T-cells may be manufactured either from the patient's own blood, known as an autologous treatment, or from the blood of a healthy donor, known as an allogeneic treatment. The manufacturing process is the same in both cases; only the choice of initial blood donor is different. First, leukocytes are isolated using a blood cell separator in a process known as leukocyte apheresis. Peripheral blood mononuclear cells (PBMCs) are then separated and collected. The products of leukocyte apheresis are then transferred to a cell-processing center. In the cell processing center, specific T cells are stimulated so that they will actively proliferate and expand to large numbers. To drive their expansion, T cells are typically treated with the cytokine interleukin 2 (IL-2) and anti-CD3 antibodies. Anti-CD3/CD28 antibodies are also used in some protocols. The expanded T cells are purified and then transduced with a gene encoding the engineered CAR via a retroviral vector, typically either an integrating gammaretrovirus (RV) or a lentiviral (LV) vector. These vectors are very safe in modern times due to a partial deletion of the U3 region. The new gene editing tool CRISPR/Cas9 has recently been used instead of retroviral vectors to integrate the CAR gene into specific sites in the genome. The patient undergoes lymphodepletion chemotherapy prior to the introduction of the engineered CAR T-cells. The depletion of the number of circulating leukocytes in the patient upregulates the number of cytokines that are produced and reduces competition for resources, which helps to promote the expansion of the engineered CAR T-cells. Clinical applications As of March 2019, there were around 364 ongoing clinical trials globally involving CAR T cells. The majority of those trials target blood cancers: CAR T therapies account for more than half of all trials for hematological malignancies. CD19 continues to be the most popular antigen target, followed by BCMA (commonly expressed in multiple myeloma). In 2016, studies began to explore the viability of other antigens, such as CD20. 
Trials for solid tumors are less dominated by CAR T, with about half of cell therapy-based trials involving other platforms such as NK cells. Cancer T cells are genetically engineered to express chimeric antigen receptors specifically directed toward antigens on a patient's tumor cells, then infused into the patient where they attack and kill the cancer cells. Adoptive transfer of T cells expressing CARs is a promising anti-cancer therapeutic, because CAR-modified T cells can be engineered to target potentially any tumor associated antigen. Early CAR T cell research has focused on blood cancers. The first approved treatments use CARs that target the antigen CD19, present in B-cell-derived cancers such as acute lymphoblastic leukemia (ALL) and diffuse large B-cell lymphoma (DLBCL). There are also efforts underway to engineer CARs targeting many other blood cancer antigens, including CD30 in refractory Hodgkin's lymphoma; CD33, CD123, and FLT3 in acute myeloid leukemia (AML); and BCMA in multiple myeloma. Aside from CD19, CARs targeting the multiple myeloma antigen B-cell maturation antigen (BCMA) have achieved the most clinical success so far. CARs targeting BCMA were initially reported by Robert Carpenter and James Kochenderfer et al. Anti-BCMA CAR T cells have now been tested in many clinical trials, and anti-BCMA CAR T-cell products have been approved by the U.S. Food and Drug Administration. CAR T cells have also been found to be effective in treating glioblastoma. A single infusion is enough to show rapid tumor regression in a matter of days. Solid tumors have presented a more difficult target. Identification of good antigens has been challenging: such antigens must be highly expressed on the majority of cancer cells, but largely absent on normal tissues. CAR T cells are also not trafficked efficiently into the center of solid tumor masses, and the hostile tumor microenvironment suppresses T cell activity. Autoimmune disease While most CAR T cell studies focus on creating a CAR T cell that can eradicate a certain cell population (for instance, CAR T cells that target lymphoma cells), there are other potential uses for this technology. T cells can also mediate tolerance to antigens. A regulatory T cell outfitted with a CAR could have the potential to confer tolerance to a specific antigen, something that could be utilized in organ transplantation or rheumatologic diseases like lupus. Approved therapies Safety There are serious side effects that result from CAR T-cells being introduced into the body, including cytokine release syndrome and neurological toxicity. Because it is a relatively new treatment, there are few data about the long-term effects of CAR T-cell therapy. There are still concerns about long-term patient survival, as well as pregnancy complications in female patients treated with CAR T-cells. Anaphylaxis may be a side effect, as the CAR is made with a foreign monoclonal antibody, and as a result provokes an immune response. On-target/off-tumor recognition occurs when the CAR T-cell recognizes the correct antigen, but the antigen is expressed on healthy, non-pathogenic tissue. This results in the CAR T-cells attacking non-tumor tissue, such as healthy B cells that express CD19 causing B-cell aplasia. The severity of this adverse effect can vary but the combination of prior immunosuppression, lymphodepleting chemotherapy and on-target effects causing hypogammaglobulinaemia and prolonged cytopenias places patients at increased risk of serious infections. 
There is also the unlikely possibility that the engineered CAR T-cells will themselves become transformed into cancerous cells through insertional mutagenesis, due to the viral vector inserting the CAR gene into a tumor suppressor or oncogene in the host T cell's genome. Some retroviral (RV) vectors carry a lower risk than lentiviral (LV) vectors. However, both have the potential to be oncogenic. Genomic sequencing analysis of CAR insertion sites in T cells has been established for better understanding of CAR T-cell function and persistence in vivo. Cytokine release syndrome The most common issue after treatment with CAR T-cells is cytokine release syndrome (CRS), a condition in which the immune system is activated and releases an increased number of inflammatory cytokines. The clinical manifestation of this syndrome resembles sepsis with high fever, fatigue, myalgia, nausea, capillary leakages, tachycardia and other cardiac dysfunction, liver failure, and kidney impairment. CRS occurs in almost all patients treated with CAR T-cell therapy; in fact, the presence of CRS is a diagnostic marker that indicates the CAR T-cells are working as intended to kill the cancer cells. The severity of CRS does not correlate with an increased response to the treatment, but rather higher disease burden. Severe cytokine release syndrome can be managed with immunosuppressants such as corticosteroids, and with tocilizumab, an anti-IL-6 monoclonal antibody. Early intervention using tocilizumab was shown to reduce the frequency of severe CRS in multiple studies without affecting the therapeutic effect of the treatment. A novel strategy aimed to ameliorate CRS is based on the simultaneous expression of an artificial non-signaling IL-6 receptor on the surface of CAR T-cells. This construct neutralizes macrophage-derived IL-6 through sequestration, thus decreasing the severity of CRS without interfering with the antitumor capability of the CAR T-cell itself. Immune effector cell-associated neurotoxicity Neurological toxicity is also often associated with CAR T-cell treatment. The underlying mechanism is poorly understood, and may or may not be related to CRS. Clinical manifestations include delirium, the partial loss of the ability to speak coherently while still having the ability to interpret language (expressive aphasia), lowered alertness (obtundation), and seizures. During some clinical trials, deaths caused by neurotoxicity have occurred. The main cause of death from neurotoxicity is cerebral edema. In a study carried out by Juno Therapeutics, Inc., five patients enrolled in the trial died as a result of cerebral edema. Two of the patients were treated with cyclophosphamide alone and the remaining three were treated with a combination of cyclophosphamide and fludarabine. In another clinical trial sponsored by the Fred Hutchinson Cancer Research Center, there was one reported case of irreversible and fatal neurological toxicity 122 days after the administration of CAR T-cells. Hypokinetic movement disorder (parkinsonism, or movement and neurocognitive treatment emergent adverse events) has been observed with BCMA-chimeric antigen receptor (CAR) T-cell treatment for multiple myeloma. Chimeric antigen receptor structure Chimeric antigen receptors combine many facets of normal T cell activation into a single protein. They link an extracellular antigen recognition domain to an intracellular signalling domain, which activates the T cell when an antigen is bound. 
CARs are composed of four regions: an antigen recognition domain, an extracellular hinge region, a transmembrane domain, and an intracellular T cell signaling domain. Antigen recognition domain The antigen recognition domain is exposed to the outside of the cell, in the ectodomain portion of the receptor. It interacts with potential target molecules and is responsible for targeting the CAR T cell to any cell expressing a matching molecule. The antigen recognition domain is typically derived from the variable regions of a monoclonal antibody linked together as a single-chain variable fragment (scFv). An scFv is a chimeric protein made up of the light (VL) and heavy (VH) chains of immunoglobins, connected with a short linker peptide. These VL and VH regions are selected in advance for their binding ability to the target antigen (such as CD19). The linker between the two chains consists of hydrophilic residues with stretches of glycine and serine in it for flexibility as well as stretches of glutamate and lysine for added solubility. Single domain antibodies (e.g. VH, VHH, VNAR) have been engineered and developed as antigen recognition domains in the CAR format due to their high transduction efficiency in T cells. In addition to antibody fragments, non-antibody-based approaches have also been used to direct CAR specificity, usually taking advantage of ligand/receptor pairs that normally bind to each other. Cytokines, innate immune receptors, TNF receptors, growth factors, and structural proteins have all been successfully used as CAR antigen recognition domains. Hinge region The hinge, also called a spacer, is a small structural domain that sits between the antigen recognition region and the cell's outer membrane. An ideal hinge enhances the flexibility of the scFv receptor head, reducing the spatial constraints between the CAR and its target antigen. This promotes antigen binding and synapse formation between the CAR T cells and target cells. Hinge sequences are often based on membrane-proximal regions from other immune molecules including IgG, CD8, and CD28. Transmembrane domain The transmembrane domain is a structural component, consisting of a hydrophobic alpha helix that spans the cell membrane. It anchors the CAR to the plasma membrane, bridging the extracellular hinge and antigen recognition domains with the intracellular signaling region. This domain is essential for the stability of the receptor as a whole. Generally, the transmembrane domain from the most membrane-proximal component of the endodomain is used, but different transmembrane domains result in different receptor stability. The CD28 transmembrane domain is known to result in a highly expressed, stable receptor. Using the CD3-zeta transmembrane domain is not recommended, as it can result in incorporation of the artificial TCR into the native TCR. Intracellular T cell signaling domain The intracellular T cell signaling domain lies in the receptor's endodomain, inside the cell. After an antigen is bound to the external antigen recognition domain, CAR receptors cluster together and transmit an activation signal. Then the internal cytoplasmic end of the receptor perpetuates signaling inside the T cell. Normal T cell activation relies on the phosphorylation of immunoreceptor tyrosine-based activation motifs (ITAMs) present in the cytoplasmic domain of CD3-zeta. To mimic this process, CD3-zeta's cytoplasmic domain is commonly used as the main CAR endodomain component. 
Other ITAM-containing domains have also been tried, but are not as effective. T cells also require co-stimulatory molecules in addition to CD3 signaling in order to persist after activation. For this reason, the endodomains of CAR receptors typically also include one or more chimeric domains from co-stimulatory proteins. Signaling domains from a wide variety of co-stimulatory molecules have been successfully tested, including CD28, CD27, CD134 (OX40), and CD137 (4-1BB). The intracellular signaling domain used defines the generation of a CAR T cell. First generation CARs include only a CD3-zeta cytoplasmic domain. Second generation CARs add a co-stimulatory domain, like CD28 or 4-1BB. The involvement of these intracellular signaling domains improves T cell proliferation, cytokine secretion, resistance to apoptosis, and in vivo persistence. Third generation CARs combine multiple co-stimulatory domains, such as CD28-41BB or CD28-OX40, to augment T cell activity. Preclinical data show the third-generation CARs exhibit improved effector functions and better in vivo persistence as compared to second-generation CARs. Research directions Antigen recognition Although the initial clinical remission rates after CAR T cell therapy in all patients are as high as 90%, long-term survival rates are much lower. The cause is typically the emergence of leukemia cells that do not express CD19 and so evade recognition by the CD19–CAR T cells, a phenomenon known as antigen escape. Preclinical studies developing CAR T cells with dual targeting of CD19 plus CD22 or CD19 plus CD20 have demonstrated promise, and trials studying bispecific targeting to circumvent CD19 down-regulation are ongoing. In 2018, a version of CAR was developed that is referred to as SUPRA CAR, or split, universal, and programmable. Multiple mechanisms can be deployed to finely regulate the activity of SUPRA CAR, which limits overactivation. In contrast to the traditional CAR design, SUPRA CAR allows targeting of multiple antigens without further genetic modification of a person's immune cells. Treatment of antigenically heterogeneous tumors can be achieved by administration of a mixture of the desired antigen-specific adaptors. CAR T function Fourth generation CARs (also known as TRUCKs or armored CARs) further add factors that enhance T cell expansion, persistence, and anti-tumoral activity. This can include cytokines, such as IL-2, IL-5, and IL-12, and co-stimulatory ligands. Control mechanisms Adding a synthetic control mechanism to engineered T cells allows doctors to precisely control the persistence or activity of the T cells in the patient's body, with the goal of reducing toxic side effects. The major control techniques trigger T cell death or limit T cell activation, and often regulate the T cells via a separate drug that can be introduced or withheld as needed. Suicide genes: Genetically modified T cells are engineered to include one or more genes that can induce apoptosis when activated by an extracellular molecule. Herpes simplex virus thymidine kinase (HSV-TK) and inducible caspase 9 (iCasp9) are two types of suicide genes that have been integrated into CAR T cells. In the iCasp9 system, the suicide gene complex has two elements: a mutated FK506-binding protein with high specificity to the small molecule rimiducid/AP1903, and a gene encoding a pro-domain-deleted human caspase 9. Dosing the patient with rimiducid activates the suicide system, leading to rapid apoptosis of the genetically modified T cells. 
Although both the HSV-TK and iCasp9 systems demonstrate a noticeable function as a safety switch in clinical trials, some defects limit their application. HSV-TK is virus-derived and may be immunogenic to humans. It is also currently unclear whether the suicide gene strategies will act quickly enough in all situations to halt dangerous off-tumor cytotoxicity. Dual-antigen receptor: CAR T cells are engineered to express two tumor-associated antigen receptors at the same time, reducing the likelihood that the T cells will attack non-tumor cells. Dual-antigen receptor CAR T cells have been reported to have less intense side effects. An in vivo study in mice shows that dual-receptor CAR T cells effectively eradicated prostate cancer and achieved complete long-term survival. ON-switch and OFF-switch: In this system, CAR T cells can only function in the presence of both tumor antigen and a benign exogenous molecule. To achieve this, the CAR T cell's engineered chimeric antigen receptor is split into two separate proteins that must come together in order to function. The first receptor protein typically contains the extracellular antigen binding domain, while the second protein contains the downstream signaling elements and co-stimulatory molecules (such as CD3ζ and 4-1BB). In the presence of an exogenous molecule (such as a rapamycin analog), the binding and signaling proteins dimerize together, allowing the CAR T cells to attack the tumor. Human EGFR truncated form (hEGFRt) has been used as an OFF-switch for CAR T cells using cetuximab. Bispecific molecules as switches: Bispecific molecules target both a tumor-associated antigen and the CD3 molecule on the surface of T cells. This ensures that the T cells cannot become activated unless they are in close physical proximity to a tumor cell. The anti-CD20/CD3 bispecific molecule shows high specificity to both malignant B cells and cancer cells in mice. FITC is another bifunctional molecule used in this strategy. FITC can redirect and regulate the activity of the FITC-specific CAR T cells toward tumor cells with folate receptors. Advances in CAR T cell manufacturing. Due to the high costs of CAR T cell therapy, a number of alternative efforts are being investigated to improve CAR T cell manufacturing and reduce costs. In vivo CAR T cell manufacturing strategies are being tested. In addition, bioinstructive materials have been developed for CAR T cell generation. Rapid CAR T cell generation is also possible through shortening or eliminating the activation and expansion steps. In situ modification Another approach is to modify T cells and/or B cells still in the body using viral vectors. Alternative Activating Domains Recent advancements in CAR T-cell therapy have focused on alternative activating domains to enhance efficacy and overcome resistance in solid tumors. For instance, Toll-like receptor 4 (TLR4) signaling components can be incorporated into CAR constructs to modulate cytokine production and boost T-cell activation and proliferation, leading to enhanced CAR T-cell expansion and persistence. Similarly, the FYN kinase, a member of the Src family kinases involved in T-cell receptor signaling, can be integrated to improve the signaling cascade within CAR T-cells, resulting in better targeting and elimination of cancer cells. 
Additionally, KIR-based CARs (KIR-CAR), which use the transmembrane and intracellular domains of the activating receptor KIR2DS2 combined with the DAP-12 signaling adaptor, have shown improved T-cell proliferation and antitumor activity. These strategies, including the use of nonconventional costimulatory molecules like MyD88/CD40, highlight the innovative approaches being taken to optimize CAR T-cell therapies for more effective cancer treatments. Economics The cost of CAR T cell therapies has been criticized, with the initial costs of tisagenlecleucel (Kymriah) and axicabtagene ciloleucel (Yescarta) being $375,000 and $475,000 respectively. The high cost of CAR T therapies is due to complex cellular manufacturing in specialized good manufacturing practice (GMP) facilities as well as the high level of hospital care necessary after CAR T cells are administered due to risks such as cytokine release syndrome. In the United States, CAR T cell therapies are covered by Medicare and by many but not all private insurers. Manufacturers of CAR T cells have developed alternative payment programs due to the high cost of CAR T therapy, such as by requiring payment only if the CAR T therapy induces a complete remission by a certain time point after treatment. Additionally, CAR T cell therapies are not available worldwide yet. CAR T cell therapies have been approved in China, Australia, Singapore, the United Kingdom, and some European countries. In February 2022 Brazil approved tisagenlecleucel (Kymriah) treatment. See also Cell therapy Checkpoint inhibitor Glofitamab Mosunetuzumab Epcoritamab Gene therapy Immune checkpoint References External links CAR T Cells: Engineering Patients' Immune Cells to Treat Their Cancers. National Cancer Institute, July 2019 Cancer immunotherapy Gene therapy Immune system Leukemia Lymphoma T cells
CAR T cell
[ "Engineering", "Biology" ]
5,671
[ "Immune system", "Organ systems", "Gene therapy", "Genetic engineering" ]
1,624,276
https://en.wikipedia.org/wiki/Bottomness
In physics, bottomness (symbol B′; using a prime as plain B is used already for baryon number) or beauty is a flavour quantum number reflecting the difference between the number of bottom antiquarks (n_b̄) and the number of bottom quarks (n_b) that are present in a particle: B′ = −(n_b − n_b̄). Bottom quarks have (by convention) a bottomness of −1 while bottom antiquarks have a bottomness of +1. The convention is that the flavour quantum number sign for the quark is the same as the sign of the electric charge (symbol Q) of that quark (in this case, Q = −1/3). As with other flavour-related quantum numbers, bottomness is preserved under strong and electromagnetic interactions, but not under weak interactions. For first-order weak reactions, it holds that ΔB′ = ±1. This term is rarely used. Most physicists simply refer to "the number of bottom quarks" and "the number of bottom antiquarks". References Quarks Flavour (particle physics)
Bottomness
[ "Physics" ]
211
[ "Particle physics stubs", "Particle physics" ]
1,624,356
https://en.wikipedia.org/wiki/Intermediate%20power%20amplifier
An intermediate power amplifier (IPA) is one stage of the amplification process in a radio transmitter, which usually occurs prior to the final high-power amplification. The IPA provides the lower-power RF energy necessary to drive the final. In very high power transmitters, such as 10 kilowatts and above, multiple IPAs are combined to provide enough drive for the final. An exciter, an even lower-power transmitter, provides a similar service to the IPA by driving it, although an exciter usually encompasses other important functions, such as choosing the frequency of the RF. References Electronic amplifiers Broadcast engineering
Intermediate power amplifier
[ "Technology", "Engineering" ]
124
[ "Broadcast engineering", "Electronic engineering", "Electronic amplifiers", "Amplifiers" ]
1,624,397
https://en.wikipedia.org/wiki/Topness
Topness (symbol T) or truth is a flavour quantum number that represents the difference between the number of top quarks (n_t) and the number of top antiquarks (n_t̄) present in a particle: T = n_t − n_t̄. By convention, top quarks have a topness of +1 and top antiquarks have a topness of −1. The term "topness" is rarely used; most physicists simply refer to "the number of top quarks" and "the number of top antiquarks". Conservation Like all flavour quantum numbers, topness is preserved under strong and electromagnetic interactions, but not under weak interaction. However, the top quark is extremely unstable, with a half-life under 10⁻²³ s, which is the time required for the strong interaction to take place. For that reason the top quark does not hadronize, that is, it never forms any meson or baryon, so the topness of a meson or a baryon is always zero. By the time it can interact strongly it has already decayed to another flavour of quark (usually to a bottom quark). References Further reading Quarks Flavour (particle physics)
Topness
[ "Physics" ]
242
[ "Particle physics stubs", "Particle physics" ]
1,624,412
https://en.wikipedia.org/wiki/Bend%20radius
Bend radius, which is measured to the inside curvature, is the minimum radius one can bend a pipe, tube, sheet, cable or hose without kinking it, damaging it, or shortening its life. The smaller the bend radius, the greater the material flexibility (as the radius of curvature decreases, the curvature increases). The diagram to the right illustrates a cable with a seven-centimeter bend radius. The minimum bend radius is the radius below which an object such as a cable should not be bent. Fiber optics The minimum bend radius is of particular importance in the handling of fiber-optic cables, which are often used in telecommunications. The minimum bending radius will vary with different cable designs. The manufacturer should specify the minimum radius to which the cable may safely be bent during installation and for the long term. The former is somewhat larger than the latter. The minimum bend radius is in general also a function of tensile stresses, e.g., during installation, while being bent around a sheave while the fiber or cable is under tension. If no minimum bend radius is specified, one is usually safe in assuming a minimum long-term low-stress radius not less than 15 times the cable diameter, or 2 inches. Besides mechanical destruction, another reason why one should avoid excessive bending of fiber-optic cables is to minimize microbending and macrobending losses. Microbending causes light attenuation induced by deformation of the fiber while macrobending causes the leakage of light through the fiber cladding and this is more likely to happen where the fiber is excessively bent. Other applications Strain gauges also have a minimum bending radius. This radius is the radius below which the strain gauge will malfunction. For metal tubing, bend radius is to the centerline of tubing, not the exterior. References Cables Fiber optics Plumbing Radii
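The fiber-optic rule of thumb above can be expressed as a one-line calculation. The sketch below is only an illustration of that rule, reading "or 2 inches" as a lower floor on the radius; it is not a substitute for the manufacturer's datasheet, and the example cable diameters are invented.

```python
# Rule-of-thumb minimum long-term, low-stress bend radius for a fiber-optic cable
# when the manufacturer specifies nothing: 15 x cable diameter, never below 2 inches.
def min_bend_radius_mm(cable_diameter_mm):
    two_inches_mm = 2 * 25.4
    return max(15 * cable_diameter_mm, two_inches_mm)

print(min_bend_radius_mm(3.0))   # 3 mm patch cable -> 50.8 mm (the 2-inch floor governs)
print(min_bend_radius_mm(10.0))  # 10 mm cable -> 150 mm (the 15x rule governs)
```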
Bend radius
[ "Engineering" ]
383
[ "Construction", "Plumbing" ]
1,624,490
https://en.wikipedia.org/wiki/Voltage%20doubler
A voltage doubler is an electronic circuit which charges capacitors from the input voltage and switches these charges in such a way that, in the ideal case, exactly twice the voltage is produced at the output as at its input. The simplest of these circuits is a form of rectifier which takes an AC voltage as input and outputs a doubled DC voltage. The switching elements are simple diodes and they are driven to switch state merely by the alternating voltage of the input. DC-to-DC voltage doublers cannot switch in this way and require a driving circuit to control the switching. They frequently also require a switching element that can be controlled directly, such as a transistor, rather than relying on the voltage across the switch as in the simple AC-to-DC case. Voltage doublers are a variety of voltage multiplier circuits. Many, but not all, voltage doubler circuits can be viewed as a single stage of a higher order multiplier: cascading identical stages together achieves a greater voltage multiplication. Voltage doubling rectifiers Villard circuit The Villard circuit, conceived by Paul Ulrich Villard, consists simply of a capacitor and a diode. While it has the great benefit of simplicity, its output has very poor ripple characteristics. Essentially, the circuit is a diode clamp circuit. The capacitor is charged on the negative half cycles to the peak AC voltage (Vpk). The output is the superposition of the input AC waveform and the steady DC of the capacitor. The effect of the circuit is to shift the DC value of the waveform. The negative peaks of the AC waveform are "clamped" to 0 V (actually −VF, the small forward bias voltage of the diode) by the diode, therefore the positive peaks of the output waveform are 2Vpk. The peak-to-peak ripple is an enormous 2Vpk and cannot be smoothed unless the circuit is effectively turned into one of the more sophisticated forms. This is the circuit (with diode reversed) used to supply the negative high voltage for the magnetron in a microwave oven. Greinacher circuit The Greinacher voltage doubler is a significant improvement over the Villard circuit for a small cost in additional components. The ripple is much reduced, nominally zero under open-circuit load conditions, but when current is being drawn it depends on the resistance of the load and the value of the capacitors used. The circuit works by following a Villard cell stage with what is in essence a peak detector or envelope detector stage. The peak detector cell has the effect of removing most of the ripple while preserving the peak voltage at the output. The Greinacher circuit is also commonly known as the half-wave voltage doubler. This circuit was first invented by Heinrich Greinacher in 1913 (published 1914) to provide the 200–300 V he needed for his newly invented ionometer, the 110 V AC supplied by the Zürich power stations of the time being insufficient. He later extended this idea into a cascade of multipliers in 1920. This cascade of Greinacher cells is often inaccurately referred to as a Villard cascade. It is also called a Cockcroft–Walton multiplier after the particle accelerator machine built by John Cockcroft and Ernest Walton, who independently discovered the circuit in 1932. The concept in this topology can be extended to a voltage quadrupler circuit by using two Greinacher cells of opposite polarities driven from the same AC source. The output is taken across the two individual outputs. 
As with a bridge circuit, it is impossible to simultaneously ground the input and output of this circuit. Delon circuit The Delon circuit uses a bridge topology for voltage doubling; consequently it is also called a full-wave voltage doubler. This form of circuit was, at one time, commonly found in cathode-ray-tube television sets where it was used to provide an extra high tension (EHT) supply. Generating voltages in excess of 5 kV with a transformer has safety issues in terms of domestic equipment and in any case is uneconomical. However, black and white television sets required an e.h.t. of 10 kV and colour sets even more. Voltage doublers were used to either double the voltage on an e.h.t winding on the mains transformer or were applied to the waveform on the line flyback coils. The circuit consists of two half-wave peak detectors, functioning in exactly the same way as the peak detector cell in the Greinacher circuit. Each of the two peak detector cells operates on opposite half-cycles of the incoming waveform. Since their outputs are in series, the output is twice the peak input voltage. Switched capacitor circuits It is possible to use the simple diode-capacitor circuits described above to double the voltage of a DC source by preceding the voltage doubler with a chopper circuit. In effect, this converts the DC to AC before application to the voltage doubler. More efficient circuits can be built by driving the switching devices from an external clock so that both functions, the chopping and multiplying, are achieved simultaneously. Such circuits are known as switched capacitor circuits. This approach is especially useful in low-voltage battery-powered applications where integrated circuits require a voltage supply greater than the battery can deliver. Frequently, a clock signal is readily available on board the integrated circuit and little or no additional circuitry is needed to generate it. Conceptually, perhaps the simplest switched capacitor configuration is that shown schematically in figure 5. Here two capacitors are simultaneously charged to the same voltage in parallel. The supply is then switched off and the capacitors are switched into series. The output is taken from across the two capacitors in series resulting in an output double the supply voltage. There are many different switching devices that could be used in such a circuit, but in integrated circuits MOSFET devices are frequently employed. Another basic concept is the charge pump, a version of which is shown schematically in figure 6. The charge pump capacitor, CP, is first charged to the input voltage. It is then switched to charging the output capacitor, CO, in series with the input voltage resulting in CO eventually being charged to twice the input voltage. It may take several cycles before the charge pump succeeds in fully charging CO but after steady state has been reached it is only necessary for CP to pump a small amount of charge equivalent to that being supplied to the load from CO. While CO is disconnected from the charge pump it partially discharges into the load resulting in ripple on the output voltage. This ripple is smaller for higher clock frequencies since the discharge time is shorter, and is also easier to filter. Alternatively, the capacitors can be made smaller for a given ripple specification. The practical maximum clock frequency in integrated circuits is typically in the hundreds of kilohertz. 
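A rough way to see the trade-off just described between clock frequency, output capacitance and ripple is the usual first-order estimate that the output capacitor CO must supply the load on its own for about one clock period between refreshes, so the ripple is roughly I_load / (f_clock · C_out). The sketch below only illustrates that estimate; the component values are invented assumptions, not figures from the text.

```python
# First-order ripple estimate for the charge-pump doubler output capacitor CO:
# between refreshes the load discharges CO, so delta_V ~= I_load / (f_clock * C_out).
# All numbers below are illustrative assumptions.
def ripple_volts(i_load_amps, f_clock_hz, c_out_farads):
    return i_load_amps / (f_clock_hz * c_out_farads)

print(ripple_volts(1e-3, 100e3, 1e-6))  # 1 mA load, 100 kHz clock, 1 uF -> 0.01 V
print(ripple_volts(1e-3, 400e3, 1e-6))  # quadrupling the clock cuts ripple to 0.0025 V
```

The same relation shows why, for a given ripple specification, a faster clock allows smaller capacitors, as noted above.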
Dickson charge pump The Dickson charge pump, or Dickson multiplier, consists of a cascade of diode/capacitor cells with the bottom plate of each capacitor driven by a clock pulse train. The circuit is a modification of the Cockcroft-Walton multiplier but takes a DC input with the clock trains providing the switching signal instead of the AC input. The Dickson multiplier normally requires that alternate cells are driven from clock pulses of opposite phase. However, since a voltage doubler, shown in figure 7, requires only one stage of multiplication only one clock signal is required. The Dickson multiplier is frequently employed in integrated circuits where the supply voltage (from a battery for instance) is lower than that required by the circuitry. It is advantageous in integrated circuit manufacture that all the semiconductor components are of basically the same type. MOSFETs are commonly the standard logic block in many integrated circuits. For this reason the diodes are often replaced by this type of transistor, but wired to function as a diode - an arrangement called a diode-wired MOSFET. Figure 8 shows a Dickson voltage doubler using diode-wired n-channel enhancement type MOSFETs. There are many variations and improvements to the basic Dickson charge pump. Many of these are concerned with reducing the effect of the transistor drain-source voltage. This can be very significant if the input voltage is small, such as a low-voltage battery. With ideal switching elements the output is an integral multiple of the input (two for a doubler) but with a single-cell battery as the input source and MOSFET switches the output will be far less than this value since much of the voltage will be dropped across the transistors. For a circuit using discrete components the Schottky diode would be a better choice of switching element for its extremely low voltage drop in the on state. However, integrated circuit designers prefer to use the easily available MOSFET and compensate for its inadequacies with increased circuit complexity. As an example, an alkaline battery cell has a nominal voltage of 1.5 V. A voltage doubler using ideal switching elements with zero voltage drop would double this to 3 V. However, the drain-source voltage drop of a diode-wired MOSFET when it is in the on state must be at least the gate threshold voltage, typically several tenths of a volt, so this voltage "doubler" raises the output to well short of the ideal 3 V. If the drop across the final smoothing transistor is also taken into account, the circuit may not be able to increase the voltage at all without using multiple stages. A typical Schottky diode, on the other hand, has a much smaller on-state voltage drop, so a doubler built with Schottky diodes comes considerably closer to the ideal doubled output, even after the drop across the final smoothing diode is allowed for. Cross-coupled switched capacitors Cross-coupled switched capacitor circuits come into their own for very low input voltages. Wireless battery driven equipment such as pagers, bluetooth devices and the like may require a single-cell battery to continue to supply power when it has discharged to under a volt. When one clock phase is low, transistor Q2 is turned off. At the same time, the other clock phase is high, turning on transistor Q1, which results in capacitor C1 being charged to Vin. When the first clock phase goes high, the top plate of C1 is pushed up to twice Vin. At the same time, switch S1 closes, so this voltage appears at the output, and Q2 is turned on, allowing C2 to charge. 
On the next half cycle the roles will be reversed: the clock phases swap, S1 will open and S2 will close. Thus, the output is supplied with 2Vin alternately from each side of the circuit. The loss is low in this circuit because there are no diode-wired MOSFETs and their associated threshold voltage problems. The circuit also has the advantage that the ripple frequency is doubled because there are effectively two voltage doublers both supplying the output from out of phase clocks. The primary disadvantage of this circuit is that stray capacitances are much more significant than with the Dickson multiplier and account for the larger part of the losses in this circuit. See also Boost converter Buck-boost converter DC to DC converter Flyback converter References Bibliography Ahmed, Syed Imran Pipelined ADC Design and Enhancement Techniques, Springer, 2010. Campardo, Giovanni; Micheloni, Rino; Novosel, David VLSI-design of Non-volatile Memories, Springer, 2005. Kories, Ralf; Schmidt-Walter, Heinz Taschenbuch der Elektrotechnik: Grundlagen und Elektronik, Deutsch Harri GmbH, 2004. Liou, Juin J.; Ortiz-Conde, Adelmo; García-Sánchez, F. Analysis and Design of MOSFETs, Springer, 1998. McComb, Gordon Gordon McComb's gadgeteer's goldmine!, McGraw-Hill Professional, 1990. Mehra, J; Rechenberg, H The Historical Development of Quantum Theory, Springer, 2001. Millman, Jacob; Halkias, Christos C. Integrated Electronics, McGraw-Hill Kogakusha, 1972. Peluso, Vincenzo; Steyaert, Michiel; Sansen, Willy M. C. Design of Low-voltage Low-power CMOS Delta-Sigma A/D Converters, Springer, 1999. Wharton, W.; Howorth, D. Principles of Television Reception, Pitman Publishing, 1971. Yuan, Fei CMOS Circuits for Passive Wireless Microsystems, Springer, 2010. Zumbahlen, Hank Linear Circuit Design Handbook, Newnes, 2008. Primary sources Electrical circuits Electric power conversion Analog circuits Electronic design Rectifiers
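Referring back to the threshold-voltage argument in the Dickson charge pump section, the sketch below makes the single-cell-battery arithmetic concrete. All voltage drops are assumed, typical values for illustration only, not figures from the article.

```python
# Rough comparison of single-stage doubler outputs (assumed typical drops):
#   ideal:              V_out = 2 * V_in
#   diode-wired MOSFET: each of the two elements loses roughly one gate
#                       threshold voltage V_t
#   Schottky diode:     each element loses its forward drop V_f

def doubler_out(v_in, drop_per_element, n_elements=2):
    """Output of one doubler stage after subtracting switching-element drops."""
    return 2.0 * v_in - n_elements * drop_per_element

v_cell = 1.5          # nominal alkaline cell voltage
v_t_mosfet = 0.6      # assumed MOSFET gate threshold (varies widely)
v_f_schottky = 0.3    # assumed Schottky forward drop

for name, drop in [("ideal", 0.0),
                   ("diode-wired MOSFET", v_t_mosfet),
                   ("Schottky diode", v_f_schottky)]:
    print(f"{name:>18s}: {doubler_out(v_cell, drop):.2f} V")
```

With these assumed numbers the MOSFET version barely exceeds the cell voltage, while the Schottky version recovers most of the ideal doubling, which is the trade-off the section describes.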
Voltage doubler
[ "Engineering" ]
2,642
[ "Electronic design", "Analog circuits", "Electronic engineering", "Electrical engineering", "Design", "Electrical circuits" ]
1,624,551
https://en.wikipedia.org/wiki/Betts%20electrolytic%20process
The Betts electrolytic process is an industrial process for purification of lead from bullion. Lead obtained from its ores is impure because lead is a good solvent for many metals. Often these impurities are tolerated, but the Betts electrolytic process is used when high purity lead is required, especially for bismuth-free lead. Process description for lead The electrolyte for this process is a mixture of lead fluorosilicate ("PbSiF6") and hexafluorosilicic acid (H2SiF6) operating at 45 °C (113 °F). Cathodes are thin sheets of pure lead and anodes are cast from the impure lead to be purified. A potential of 0.5 volts is applied. At the anode, lead dissolves, as do metal impurities that are less noble than lead. Impurities that are more noble than lead, such as silver, gold, and bismuth, flake from the anode as it dissolves and settle to the bottom of the vessel as "anode mud." Pure metallic lead plates onto the cathode, with the less noble metals remaining in solution. Because of its high cost, electrolysis is used only when very pure lead is needed. Otherwise pyrometallurgical methods are preferred, such as the Parkes process followed by the Betterton-Kroll process. History The process is named for its inventor Anson Gardner Betts who filed several patents for this method starting in 1901. See also Processing lead from ore Lead smelter Electrochemical engineering References External links Bismuth Bismuth Lead Electrolysis Metallurgical processes
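As a back-of-the-envelope illustration of the process economics, Faraday's law gives the lead deposited per unit charge, and the 0.5 V cell potential quoted above then gives a rough electrical energy figure. This is a sketch only; real cells have additional losses that are not modelled here.

```python
# Faraday's-law estimate for lead electrorefining (illustrative only).
F = 96485.0        # Faraday constant, C/mol
M_PB = 207.2       # molar mass of lead, g/mol
Z = 2              # electrons per Pb2+ ion reduced at the cathode

def lead_deposited_g(current_a, hours):
    """Grams of lead plated onto the cathode for a given current and time."""
    charge = current_a * hours * 3600.0          # coulombs
    return charge / (Z * F) * M_PB

def energy_kwh_per_tonne(cell_voltage=0.5):
    """Electrical energy to refine one tonne of lead at the stated potential."""
    charge_per_tonne = 1e6 / M_PB * Z * F        # coulombs per tonne
    return cell_voltage * charge_per_tonne / 3.6e6

print(f"{lead_deposited_g(1, 1):.2f} g Pb per ampere-hour")
print(f"~{energy_kwh_per_tonne():.0f} kWh per tonne at 0.5 V")
```

The low cell voltage keeps the theoretical energy demand modest; the high cost of the process comes mainly from capital and handling rather than from the electricity for deposition alone.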
Betts electrolytic process
[ "Chemistry", "Materials_science", "Engineering" ]
344
[ "Materials science stubs", "Metallurgical processes", "Metallurgy", "Materials science", "Electrolysis", "Electrochemistry", "Electrochemistry stubs", "Physical chemistry stubs" ]
1,624,595
https://en.wikipedia.org/wiki/Electrolytic%20process
An electrolytic process is the use of electrolysis industrially to refine metals or compounds at a high purity and low cost. Some examples are the Hall-Héroult process used for aluminium, or the production of hydrogen from water. Overview Electrolysis is usually done in bulk using hundreds of sheets of metal connected to an electric power source. In the production of copper, these pure sheets of copper are used as starter material for the cathodes, and are then lowered into a solution such as copper sulfate with the large anodes that are cast from impure (97% pure) copper. The copper from the anodes is electroplated on to the cathodes, while any impurities settle to the bottom of the tank. This forms cathodes of 99.999% pure copper. See also Electrolysis of water Electroplating References Industrial processes Chemical processes Hydrogen production Articles containing video clips
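A similar Faraday's-law sketch applies to the copper electrorefining example above. The current and current efficiency below are assumed values for illustration; industrial tankhouse figures will differ.

```python
# Illustrative estimate of copper plated onto one cathode (assumed values).
F = 96485.0      # Faraday constant, C/mol
M_CU = 63.55     # molar mass of copper, g/mol
Z = 2            # Cu2+ + 2e- -> Cu at the cathode

def copper_plated_kg(current_a, hours, current_efficiency=0.95):
    charge = current_a * hours * 3600.0 * current_efficiency   # coulombs
    return charge / (Z * F) * M_CU / 1000.0                    # kilograms

print(f"{copper_plated_kg(250, 24):.1f} kg Cu per cathode per day at 250 A")
```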
Electrolytic process
[ "Chemistry" ]
184
[ "Chemical process engineering", "Chemical processes", "nan" ]
1,624,622
https://en.wikipedia.org/wiki/AAI%20Aerosonde
The AAI Aerosonde is a small unmanned aerial vehicle (UAV) designed to collect weather data, including temperature, atmospheric pressure, humidity, and wind measurements over oceans and remote areas. The Aerosonde was developed by Insitu, and is now manufactured by Aerosonde Ltd, which is a strategic business of AAI Corporation. The Aerosonde is powered by a modified Enya R120 model aircraft engine, and carries on board a small computer, meteorological instruments, and a GPS receiver for navigation. It is also used by the United States Armed Forces for intelligence, surveillance and reconnaissance (ISR). Design and development On August 21, 1998, a Phase 1 Aerosonde nicknamed "Laima", after the ancient Latvian deity of good fortune, completed a 2,031 mile (3,270 km) flight across the Atlantic Ocean. This was the first crossing of the Atlantic Ocean by a UAV; at the time, it was also the smallest aircraft ever to cross the Atlantic (the smallest aircraft record was subsequently broken by the Spirit of Butts Farm UAV). Launched from a roof rack of a moving car due to its lack of undercarriage, Laima flew from Newfoundland, Canada to Benbecula, an island off the coast of Scotland in 26 hours 45 minutes in stormy weather, using approximately 1.5 U.S. gallons (1.25 imperial gallons or 5.7 litres) of gasoline (petrol). Other than for take-off and landing, the flight was autonomous, without external control, at an altitude of 5,500 ft (1,680 meters). Aerosondes have also been the first unmanned aircraft to penetrate tropical cyclones, with an initial mission in 2001 followed by eye penetrations in 2005. Operational history On 5 March 2012, the U.S. Special Operations Command (SOCOM) awarded AAI a contract to provide the Aerosonde-G for their Mid-Endurance UAS II program. The catapult-launched air vehicle has a takeoff weight depending on engine type, with endurance of over 10 hours and an electro-optic/infrared and laser-pointer payload. The Aerosonde has been employed by SOCOM and U.S. Naval Air Systems Command (NAVAIR) under the designation MQ-19 under service provision contracts. A typical system comprises four air vehicles and two ground control stations that are accommodated in tents or tailored to fit in most vehicles. The system can also include remote video terminals for individual users to uplink new navigation waypoints and sensor commands to, and receive sensor imagery and video from, the vehicle from a ruggedized tablet device. Originally, the Aerosonde suffered from engine-reliability issues, but Textron says it has rectified those issues. By November 2015, Textron Systems was performing Aerosonde operations in "eight or nine" countries for its users, including the U.S. Marine Corps, U.S. Air Force, and SOCOM, as well as for commercial users consisting of a customer in the oil and gas industry. Instead of buying hardware, customers pay for "sensor hours," and the company decides how many aircraft are produced to meet requirements. 4,000 fee-for-service hours were being performed monthly, and the Aerosonde had exceeded 110,000 flight hours in service. 
Variants Specifications (Aerosonde) General characteristics Crew: Remote-controlled Length: 5 ft 8 in (1.7 m) Wingspan: 9 ft 8 in (2.9 m) Height: 2 ft 0 in (0.60 m) Wing area: 6.1 ft2 (0.57 m2) Empty: 22lb (10 kg) Loaded: 28.9 lb (13.1 kg) Maximum take-off: 28.9 lb (13.1 kg) Powerplant: Modified Enya R120 model aircraft engine, 1.74 hp (1280 W) Lycoming El-005 Multi Fuel power plant Performance Maximum speed: 69 mph (111 km/h) Range: 100 miles (150 km) Service ceiling: 15,000 ft (4,500 m) Rate of climb: 2.5 m/sec (8.2 ft/sec) Wing loading: 5 lb/ft2 (23 kg/m2) Power/Mass: 0.06 hp/lb (98 W/kg) References Display information at Museum of Flight in Seattle, Washington. G.J. Holland, T. McGeer and H.H. Youngren. Autonomous aerosondes for economical atmospheric soundings anywhere on the globe. Bulletin of the American Meteorological Society 73(12):1987-1999, December 1992. Cyclone reconnaissance P-H Lin & C-S Lee. Fly into typhoon Haiyan with UAV Aerosonde. American Meteorological Society conference paper 52113 (2002). NASA Wallops Flight Facility press release: "Aerosonde UAV Completes First Operational Flights at NASA Wallops" Laima flight Tad McGeer. "Laima: The first Atlantic crossing by unmanned aircraft" (1998) Aerosonde Pty Ltd. press release: "First UAV across the Atlantic" University of Washington, Aeronautics and Astronautics Program, College of Engineering: (Aerosonde project web page) External links Aerosonde Pty Ltd. web site Meteorological instrumentation and equipment Single-engined pusher aircraft 1990s United States special-purpose aircraft Unmanned aerial vehicles of Australia
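The derived figures in the specification list above can be cross-checked from the base numbers with simple arithmetic; small rounding differences against the quoted spec-sheet values are expected.

```python
# Cross-check of derived Aerosonde figures from the base specifications above.
loaded_mass_kg = 13.1
wing_area_m2 = 0.57
power_w = 1280.0          # modified Enya R120 engine, as listed

wing_loading = loaded_mass_kg / wing_area_m2    # kg per square metre
power_to_mass = power_w / loaded_mass_kg        # watts per kilogram

print(f"wing loading:  {wing_loading:.1f} kg/m2 (listed: ~23 kg/m2)")
print(f"power-to-mass: {power_to_mass:.0f} W/kg  (listed: ~98 W/kg)")
```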
AAI Aerosonde
[ "Technology", "Engineering" ]
1,108
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
1,624,638
https://en.wikipedia.org/wiki/RasMol
RasMol is a computer program written for molecular graphics visualization intended and used mainly to depict and explore biological macromolecule structures, such as those found in the Protein Data Bank (PDB). History It was originally developed by Roger Sayle in the early 1990s. Historically, it was an important tool for molecular biologists since the extremely optimized program allowed the software to run on (then) modestly powerful personal computers. Before RasMol, visualization software ran on graphics workstations that, due to their cost, were less accessible to scholars. RasMol continues to be important for research in structural biology, and has become important in education. RasMol has a complex licensing version history. Starting with the version 2.7 series, RasMol source code is dual-licensed under a GNU General Public License (GPL), or custom license RASLIC. Starting with version 2.7.5, a GPL is the only license valid for binary distributions. RasMol includes a scripting language, to perform many functions such as selecting certain protein chains, changing colors, etc. Jmol and Sirius software have incorporated this language into their commands. Protein Data Bank (PDB) files can be downloaded for visualization from members of the Worldwide Protein Data Bank (wwPDB). These have been uploaded by researchers who have characterized the structure of molecules usually by X-ray crystallography, protein NMR spectroscopy, or cryogenic electron microscopy. Interprocess communication Rasmol can communicate with other programs via Tcl/Tk on Unix platforms, and via Dynamic Data Exchange (DDE) on Microsoft Windows. With a multiple sequence alignment program, the responsible Java class can be freely used in other applications. See also List of molecular graphics systems Comparison of software for molecular mechanics modeling Molecular graphics Molecule editor List of free and open-source software packages References External links Download RasMol (production releases) Source repository (stable) Source repository (development) Early Later Molecular modelling software Free science software Free software programmed in C Free educational software
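As a minimal sketch of the workflow described above (fetch a structure from a wwPDB member site, then view it locally), the snippet below downloads one entry and hands it to a local RasMol install. The RCSB download URL pattern, the example entry (1CRN, crambin) and the "rasmol" executable name are assumptions, not details taken from the article.

```python
# Minimal sketch: download a PDB entry and open it in a locally installed RasMol.
import subprocess
import urllib.request

entry = "1CRN"                                    # assumed example PDB ID
url = f"https://files.rcsb.org/download/{entry}.pdb"
path = f"{entry}.pdb"

urllib.request.urlretrieve(url, path)   # fetch the structure file
subprocess.run(["rasmol", path])        # open it in RasMol, if on the PATH
```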
RasMol
[ "Chemistry" ]
424
[ "Molecular modelling", "Molecular modelling software", "Computational chemistry software" ]
1,624,646
https://en.wikipedia.org/wiki/Crude%20oil%20washing
Crude oil washing (COW) is washing out the residue from the oil tanker using the crude oil cargo itself, after the cargo tanks have been emptied. Crude oil is pumped back and preheated in the slop tanks, then sprayed back via high pressure nozzles in the cargo tanks onto the walls of the tank. Due to the sticky nature of the crude oil, the oil clings to the tank walls, and such oil adds to the cargo 'remaining on board' (the ROB). By COWing the tanks, the amount of ROB is significantly reduced, and with the current high cost of oil, the financial savings are significant, both for the Charterer and the Shipowner. If the cargo ROB is deemed as 'liquid and pumpable' then the charterers can claim from the owner for any cargo loss for normally between 0.3% up to 0.5%. It replaced the load on top and seawater washing systems, both of which involved discharging oil-contaminated water into the sea. MARPOL 73/78 made this mandatory equipment for oil tankers of 20,000 tons or greater deadweight. Although COWing is most notable for actual tankers, the current chairman for Hashimoto Technical Service, Hashimoto Akiyoshi, applied this method in washing refinery plant oil tanks in Japan. Hashimoto is currently using this method in the Kyushu, Chugoku, and Tohouku regions in Japan. Seawater washing Originally oil tankers used one set of tanks for cargo, and about one third of the same tanks were for water ballast on their empty trips. High-pressure jets of hot seawater were used to clean the tanks, and the mixture of seawater and residue called slops was discharged into the sea, as was the oil-contaminated ballast water. The 1954 OILPOL Convention attempted to reduce the harm by prohibiting such discharges within of most land and of certain particularly sensitive areas. Load on top The discharges from seawater washing were still considered a problem, and during the 1960s the load-on-top approach began to be adopted. The mixture of cleaning water and residue was pumped into a slop tank and allowed to separate by their different densities into oil and water during the journey. The water portion was then discharged, leaving only crude oil in the slop tank. This was pumped into the main tanks and the new cargo loaded on top of it, recovering as much as 800 tons of oil which was formerly discarded. History Even with load on top there is still some oil in the discharged water from the slop tank. Starting in the 1970s, equipment capable of using crude oil itself for washing began to replace the water-based washing, leading to the current technique of crude oil washing. This reduces the remaining deliberate discharge of oil-contaminated water and increases the amount of cargo discharged, providing a further benefit to the cargo owner. Crude oil washing equipment became mandatory for new tankers of 20,000 tons or more deadweight with the 1978 Protocol to the 1973 MARPOL Convention. Revised specifications for the equipment were introduced in 1999. Modern tankers also use segregated ballast tanks and these remove the problem of discharge of oily ballast water. External links International Maritime Organization description of Crude Oil Washing Scanjet Crude Oil Washing Machine See also Maritime environmental crime MARPOL 73/78 Sources Further reading Petroleum production Ocean pollution
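To give a feel for why the 0.3%–0.5% "liquid and pumpable" ROB band matters commercially, the sketch below values that band for a single cargo. The cargo size and oil price are assumptions chosen purely for illustration.

```python
# Illustrative value of a 0.3%-0.5% ROB claim on one cargo (assumed figures).
cargo_barrels = 2_000_000      # assumed VLCC-sized cargo
price_per_barrel = 80.0        # assumed oil price, USD

for rob_fraction in (0.003, 0.005):
    lost_barrels = cargo_barrels * rob_fraction
    value = lost_barrels * price_per_barrel
    print(f"ROB {rob_fraction:.1%}: {lost_barrels:,.0f} bbl  ≈ ${value:,.0f}")
```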
Crude oil washing
[ "Chemistry", "Environmental_science" ]
683
[ "Ocean pollution", "Water pollution" ]
1,624,694
https://en.wikipedia.org/wiki/LGP-30
The LGP-30, standing for Librascope General Purpose and then Librascope General Precision, is an early off-the-shelf computer. It was manufactured by the Librascope company of Glendale, California (a division of General Precision Inc.), and sold and serviced by the Royal Precision Electronic Computer Company, a joint venture with the Royal McBee division of the Royal Typewriter Company. The LGP-30 was first manufactured in 1956, at a retail price of $47,000, . The LGP-30 was commonly referred to as a desk computer. Its height, width, and depth, excluding the typewriter shelf, was . It weighed about , and was mounted on sturdy casters which facilitated moving the unit. Design The primary design consultant for the Librascope computer was Stan Frankel, a Manhattan Project veteran and one of the first programmers of ENIAC. He designed a usable computer with a minimal amount of hardware. The single address instruction set had only 16 commands. Magnetic drum memory held the main memory, and the central processing unit (CPU) processor registers, timing information, and the master bit clock, each on a dedicated track. The number of vacuum tubes was minimized by using solid-state diode logic, a bit-serial architecture and multiple use of each of the 15 flip-flops. It was a binary, 31-bit word computer with a 4096-word drum memory. Standard inputs were the Flexowriter keyboard and paper tape (ten six-bit characters/second). The standard output was the Flexowriter printer (typewriter, working at 10 characters/second). An optional higher-speed paper tape reader and punch was available as a separate peripheral. The computer contained 113 electronic tubes and 1450 diodes. The tubes were mounted on 34 etched circuit pluggable cards which also contain associated components. The 34 cards were of only 12 different types. Card-extenders were available to permit dynamic testing of all machine functions. 680 of the 1450 diodes were mounted on one pluggable logic board. The LGP-30 required 1500 watts operating under full load. The power inlet cord could plug into any standard 115 volt 60-cycle single-phase line. The computer incorporated voltage regulation suitable for powerline variation of 95 to 130 volts. In addition to power regulation, the computer also contained circuitry for a warm-up stage, which minimized thermal shock to the tubes to ensure longer life. The computer contained a cooling fan which directed filtered air through ducts to the tubes and diodes, to extend component life and ensure proper operation. No expensive air conditioning was necessary if the LGP-30 was operated at reasonable temperatures. There were 32 bit locations per drum word, but only 31 were used, permitting a "restoration of magnetic flux in the head" at the 32nd bit time. Since there was only one address per instruction, a method was needed to optimize allocation of operands. Otherwise, each instruction would wait a complete drum (or disk) revolution each time a data reference was made. The LGP-30 provided for operand-location optimization by interleaving the logical addresses on the drum so that two adjacent addresses (e.g., 00 and 01) were separated by nine physical locations. These spaces allowed for operands to be located next to the instructions which use them. There were 64 tracks, each with 64 words (sectors). The time between two adjacent physical words was about 0.260 millisecond (ms), and the time between two adjacent addresses was 9 x 0.260 or 2.340 ms. The worst-case access time was 16.66 ms. 
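The drum timing figures quoted above follow directly from the sector count and word time; the sketch below reproduces that arithmetic and models the nine-way address interleave. The interleave mapping is an illustrative model of the scheme described, not a dump of the real drum layout.

```python
# Sketch of the LGP-30 drum timing arithmetic described above: 64 sectors per
# track, about 0.260 ms between adjacent physical sectors, and logical
# addresses interleaved nine physical sectors apart.
WORD_TIME_MS = 0.260
SECTORS = 64
INTERLEAVE = 9

revolution_ms = SECTORS * WORD_TIME_MS            # one full drum turn
adjacent_address_ms = INTERLEAVE * WORD_TIME_MS   # logical address n -> n+1

print(f"drum revolution (worst-case access): {revolution_ms:.2f} ms")
print(f"time to the next logical address:    {adjacent_address_ms:.3f} ms")

def physical_sector(logical_sector, interleave=INTERLEAVE, sectors=SECTORS):
    """Map a logical sector to a physical position under a simple 9-way
    interleave (illustrative model)."""
    return (logical_sector * interleave) % sectors

print([physical_sector(n) for n in range(8)])   # 0, 9, 18, 27, ...
```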
Half of the instruction (15 bits) was unused. The unused half could have been used for extra instructions, indexing, indirect addressing, or a second (+1) address to locate the next instruction, each of which would have increased program performance. None of these features were implemented in the LGP-30, but some were realized in its 1960 successor, the RPC-4000. A unique feature of the LGP-30 was its built-in multiplication, despite being an inexpensive computer. Since this was a drum computer, bits were processed serially as they were read from the drum. As it did each of the additions associated with the multiplication, it effectively shifted the operand right, acting as if the binary point were on the left side of the word, as opposed to the right side as on most other computers. The divide operation worked similarly. To further reduce costs, the traditional front panel lights showing internal registers were absent. Instead, Librascope mounted a small oscilloscope on the front panel that displayed the output from the three register read heads, one above the other, allowing the operator to see and read the bits. Horizontal and vertical size controls let the operator adjust the display to match a plastic overlay engraved with the bit numbers. To read bits the operator counted the up- and down-transitions of the oscilloscope trace. Unlike other computers of its day, internal data was represented in hexadecimal instead of octal, but being a very inexpensive machine it used the physical typewriter keys that correspond to positions 10 to 15 in the type basket for the six non-decimal characters (as opposed to the now normal A – F) to represent those values, resulting in 0 – 9 f g j k q w, which was remembered using the phrase "FiberGlass Javelins Kill Quite Well". Specifications Word length: 31 bits, including a sign bit, but no blank spacer bit Memory size: 4096 words Speed: 0.260 milliseconds access time between two adjacent physical words; access times between two adjacent addresses 2.340 milliseconds. Addition time: 0.26 ms excluding access time Multiplication or division time: 17 ms excluding access time Clock rate: 120 kHz Power consumption: 1500 Watts Heat dissipation: Arithmetic element: three working registers: C the counter register, R the instruction register and A the accumulator register. Instruction format: Sixteen instructions using half-word format Technology: 113 vacuum tubes and 1350 diodes. Number produced; 320~493 First delivery: September 1956 Price: $47,000 Successor: LGP-21 Achievements: The LGP-30 was one of the first desk-sized computers offering small-scale scientific computing. The LGP-30 was fairly popular with "half a thousand" units sold, including one to Dartmouth College where students implemented Dartmouth ALGOL 30 and DOPE (Dartmouth Oversimplified Programming Experiment) on the machine. Programming Instruction set The LGP-30 has 16 instructions. Each instruction occupies a 31-bit word though about half the bits are unused and set to zero. An instruction consists of an "order" such as the letter b for "bring from memory" and an address part such as the number 2000 to designate a memory location. All instructions have a similar appearance in an LGP-30 word. The order bits occupy positions 12 through 15 of the word and the address bits occupy positions 18 through 29 of the word. The address bits are further divided by track and sector. Although all instructions have an address, some do not use the address. 
It is customary to enter an address of 0000 in these instructions. ACT-III programming language The LGP-30 had a high-level language called ACT-III. Every token had to be delimited by an apostrophe, making it hard to read and even harder to prepare tapes: <nowiki> s1'dim'a'500'm'500'q'500'' index'j'j+1'j-1'' daprt'e'n't'e'r' 'd'a't'a''cr'' rdxit's35'' s2'iread'm'1''iread'q'1''iread'd''iread'n'' 1';'j'' 0'flo'd';'d.'' s3'sqrt'd.';'sqrd.'' 1'unflo'sqrd.'i/'10';'sqrd'' 2010'print'sqrd.''2000'iprt'sqrd''cr''cr'' ... </nowiki> ALGOL 30 Dartmouth College developed two implementations of ALGOL 60 for the LGP-30. Dartmouth ALGOL 30 was a three-pass system (compiler, loader, and interpreter) that provided almost all features of ALGOL except those requiring run-time storage allocation. SCALP, a Self Contained Algol Processor, was a one-pass system for a small subset of ALGOL (no blocks other than the entire program), no procedure declarations, conditional statements but no conditional expressions, no constructs other than while in a for statement, no nested switch declarations (nested calls are permitted), and no boolean variables and operators. As in ACT-III, every token had to be separated by an apostrophe. DICTATOR DICTATOR is a painful acronym for DODCO Interpretive Code for Three Address with Technical Optimum Range. DICTATOR, introduced in 1959, is an interpreter designed to hide the LGP-30 machine details from the programmer. The programming language resembles three-operand assembly code with two source operands and one destination operand. All numbers are in floating point with an eight digit mantissa and two digit exponent. Natural logs and exponents are supported along with sin, cos, and arctan. Up to four nested loops are supported. Table look-up and block memory move operations are implemented. A bit more than half the total LGP-30 memory is used by the interpreter; it takes about 30 minutes to load the paper tape via the Flexowriter. Floating point add, subtract, multiply, and divide take less than 455 milliseconds each. Cosine is calculated in 740 milliseconds. Starting the machine The procedure for starting, or "booting" the LGP-30 was complicated. First, the bootstrap paper tape was snapped into the console typewriter, a Friden Flexowriter. The operator pressed a lever on the Flexowriter to read an address field and pressed a button on the front panel to transfer the address into a computer register. Then the lever on the Flexowriter was pressed to read the data field and three more buttons were pressed on the front panel to store it at the specified address. This process was repeated, maybe six to eight times, and a rhythm was developed: burrrp, clunk, burrrp, clunk, clunk, clunk, burrrp, clunk, burrrp, clunk, clunk, clunk, burrrp, clunk, burrrp, clunk, clunk, clunk, burrrp, clunk, burrrp, clunk, clunk, clunk, burrrp, clunk, burrrp, clunk, clunk, clunk, burrrp, clunk, burrrp, clunk, clunk, clunk. The operator then removed the bootstrap tape, snapped in the tape containing the regular loader, carefully arranging it so it would not jam, and pressed a few more buttons to start up the bootstrap program. Once the regular loader was in, the computer was ready to read in a program tape. The regular loader read a more compact format tape than the bootstrap loader. Each block began with a starting address so the tape could be rewound and retried if an error occurred. 
If any mistakes were made in the process, or if the program crashed and damaged the loader program, the process had to be restarted from the beginning. LGP-21 In 1963, Librascope produced a transistorized update to the LGP-30 named the LGP-21. The new computer had about 460 transistors and about 375 diodes. It cost only $16,250, one-third the price of its predecessor. However, it was also about one-third as fast as the earlier computer. The central computer weighed about , the basic system (including printer and stands) about . RPC 4000 Another, more powerful successor machine was the General Precision RPC 4000, announced in 1960. Similar to the LGP-30, but transistorized, it featured 8,008 32-bit words of memory drum storage. It had 500 transistors and 4,500 diodes, sold for $87,500 and weighed . Notable uses Edward Lorenz used the LGP-30 in his attempt to model changing weather patterns. His discovery that massive differences in forecast could derive from tiny differences in initial data led to him coining the terms strange attractor and butterfly effect, core concepts in chaos theory. The RPC-4000 (successor to the LGP-30) is also remembered as the computer on which Mel Kaye performed a legendary programming task in machine code, retold by Ed Nather in the hacker epic The Story of Mel. Simulation Software simulations of the LGP-30 and LGP-21 are supported by SIMH, a free and open source, multi-platform multi-system emulator. See also IBM 650 List of vacuum tube computers Further reading References External links Working LGP-30 on display in Stuttgart, Germany LGP-30 description LGP-21 description 1962 advertisement showing both the LGP-30 and RPC-4000 Story of Stan P. Frankel, designer of the LGP-30, with photos. Programming manual Warming up the LGP-30 on YouTube technikum 29: LGP 30 1950-1959 Librazettes – company newsletters on LGP-30: Vacuum tube computers Serial computers Typewriters
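The Programming section above notes that the LGP-30 printed the hexadecimal values 10–15 with the typewriter characters f, g, j, k, q, w rather than A–F. A small conversion helper, written as an illustration of that character set rather than as any historical software:

```python
# Convert between integers and LGP-30 style hexadecimal (digits 0-9 f g j k q w).
LGP30_DIGITS = "0123456789fgjkqw"

def lgp30_hex(value, width=8):
    """Render a non-negative integer in LGP-30 style hexadecimal."""
    digits = []
    while value:
        value, r = divmod(value, 16)
        digits.append(LGP30_DIGITS[r])
    return "".join(reversed(digits)).rjust(width, "0")

def from_lgp30_hex(text):
    """Parse an LGP-30 style hexadecimal string back to an integer."""
    return sum(LGP30_DIGITS.index(c) * 16 ** i
               for i, c in enumerate(reversed(text)))

assert from_lgp30_hex(lgp30_hex(48879)) == 48879
print(lgp30_hex(48879))   # 0000gqqw  (0xBEEF in conventional notation)
```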
LGP-30
[ "Technology" ]
2,879
[ "Serial computers", "Computers" ]
1,624,732
https://en.wikipedia.org/wiki/Time-lapse%20photography
Time-lapse photography is a technique in which the frequency at which film frames are captured (the frame rate) is much lower than the frequency used to view the sequence. When played at normal speed, time appears to be moving faster and thus lapsing. For example, an image of a scene may be captured at 1 frame per second but then played back at 30 frames per second; the result is an apparent 30 times speed increase. Processes that would normally appear subtle and slow to the human eye, such as the motion of the sun and stars in the sky or the growth of a plant, become very pronounced. Time-lapse is the extreme version of the cinematography technique of undercranking. Stop motion animation is a comparable technique; a subject that does not actually move, such as a puppet, can repeatedly be moved manually by a small distance and photographed. Then, the photographs can be played back as a film at a speed that shows the subject appearing to move. Conversely, film can be played at a much lower rate than at which it was captured, which slows down an otherwise fast action, as in slow motion or high-speed photography. History Some classic subjects of time-lapse photography include: Landscapes and celestial motion Plants and flowers growing Fruit rotting Evolution of a construction project People in the city The technique has been used to photograph crowds, traffic, and even television. The effect of photographing a subject that changes imperceptibly slowly creates a smooth impression of motion. A subject that changes quickly is transformed into an onslaught of activity. The inception of time-lapse photography occurred in 1872 when Leland Stanford hired Eadweard Muybridge to prove whether or not a racehorse's hooves are ever all off the ground simultaneously while it is running. The experiments progressed for six years until 1878, when Muybridge set up a series of cameras every few feet along a track, each triggered by a tripwire as the horses ran past. The photos taken from the multiple cameras were then compiled into a collection of images that recorded the horses running. The first use of time-lapse photography in a feature film was in Georges Méliès' motion picture Carrefour De L'Opera (1897). F. Percy Smith pioneered the use of time-lapse in nature photography with his 1910 silent film The Birth of a Flower. Time-lapse photography of biological phenomena was pioneered by Jean Comandon in collaboration with Pathé Frères from 1909, by F. Percy Smith in 1910 and Roman Vishniac from 1915 to 1918. Time-lapse photography was further pioneered in the 1920s via a series of feature films called Bergfilme (mountain films) by Arnold Fanck, including Das Wolkenphänomen in Maloja (1924) and The Holy Mountain (1926). From 1929 to 1931, R. R. Rife astonished journalists with early demonstrations of high magnification time-lapse cine-micrography, but no filmmaker can be credited for popularizing time-lapse techniques more than John Ott, whose life work is documented in the film Exploring the Spectrum. Ott's initial "day-job" career was that of a banker, with time-lapse movie photography, mostly of plants, initially just a hobby. Starting in the 1930s, Ott bought and built more and more time-lapse equipment, eventually building a large greenhouse full of plants, cameras, and even self-built automated electric motion control systems for moving the cameras to follow the growth of plants as they developed. He time-lapsed his entire greenhouse of plants and cameras as they worked—a virtual symphony of time-lapse movement. 
His work was featured on a late 1950s episode of the request TV show You Asked for It. Ott discovered that the movement of plants could be manipulated by varying the amount of water the plants were given, and varying the color temperature of the lights in the studio. Some colors caused the plants to flower, and other colors caused the plants to bear fruit. Ott discovered ways to change the sex of plants merely by varying the light source's color temperature. By using these techniques, Ott time-lapse animated plants "dancing" up and down synchronized to pre-recorded music tracks. His cinematography of flowers blooming in such classic documentaries as Walt Disney's Secrets of Life (1956), pioneered the modern use of time-lapse on film and television. Ott wrote several books on the history of his time-lapse adventures including My Ivory Cellar (1958) and Health and Light (1979), and produced the 1975 documentary film Exploring the Spectrum. The Oxford Scientific Film Institute in Oxford, United Kingdom specializes in time-lapse and slow-motion systems, and has developed camera systems that can go into (and move through) small places. Their footage has appeared in TV documentaries and movies. PBS's NOVA series aired a full episode on time-lapse (and slow motion) photography and systems in 1981 titled Moving Still. Highlights of Oxford's work are slow-motion shots of a dog shaking water off himself, with close ups of drops knocking a bee off a flower, as well as a time-lapse sequence of the decay of a dead mouse. The non-narrative feature film Koyaanisqatsi (1983) contained time-lapse images of clouds, crowds, and cities filmed by cinematographer Ron Fricke. Years later, Ron Fricke produced a solo project called Chronos shot using IMAX cameras. Fricke used the technique extensively in the documentary Baraka (1992) which he photographed on Todd-AO (70 mm) film. Countless other films, commercials, TV shows and presentations have included time-lapse material. For example, Peter Greenaway's film A Zed & Two Noughts features a sub-plot involving time-lapse photography of decomposing animals and includes a composition called "Time Lapse" written for the film by Michael Nyman. In the late 1990s, Adam Zoghlin's time-lapse cinematography was featured in the CBS television series Early Edition, depicting the adventures of a character that receives tomorrow's newspaper today. David Attenborough's 1995 series The Private Life of Plants also utilised the technique extensively. Terminology The frame rate of time-lapse movie photography can be varied to virtually any degree, from a rate approaching a normal frame rate (between 24 and 30 frames per second) to only one frame a day, a week, or longer, depending on the subject. The term time-lapse can also apply to how long the shutter of the camera is open during the exposure of each frame of film (or video), and has also been applied to the use of long-shutter openings used in still photography in some older photography circles. In movies, both kinds of time-lapse can be used together, depending on the sophistication of the camera system being used. A night shot of stars moving as the Earth rotates requires both forms. A long exposure of each frame is necessary to enable the dim light of the stars to register on the film. Lapses in time between frames provide the rapid movement when the film is viewed at normal speed. 
As the frame rate of time-lapse photography approaches normal frame rates, these "mild" forms are sometimes referred to simply as fast motion or (in video) fast forward. This type of borderline time-lapse technique resembles a VCR in a fast forward ("scan") mode. A man riding a bicycle will display legs pumping furiously while he flashes through city streets at the speed of a racing car. Longer exposure rates for each frame can also produce blurs in the man's leg movements, heightening the illusion of speed. Two examples of both techniques are the running sequence in Terry Gilliam's The Adventures of Baron Munchausen (1989), in which a character outraces a speeding bullet, and Los Angeles animator Mike Jittlov's 1980s short and feature-length films, both titled The Wizard of Speed and Time. When used in motion pictures and on television, fast motion can serve one of several purposes. One popular usage is for comic effect. A slapstick comic scene might be played in fast motion with accompanying music. (This form of special effect was often used in silent film comedies in the early days of cinema.) Another use of fast motion is to speed up slow segments of a TV program that would otherwise take up too much of the time allotted a TV show. This allows, for example, a slow scene in a house redecorating show, such as furniture being moved around or replaced, to be compressed into a smaller allotment of time while still allowing the viewer to see what took place. The opposite of fast motion is slow motion. Cinematographers refer to fast motion as undercranking since it was originally achieved by cranking a handcranked camera slower than normal. Overcranking produces slow motion effects. Methodology Film is often projected at 24 frame/s, meaning 24 images appear on the screen every second. Under normal circumstances, a film camera will record images at 24 frame/s since the projection speed and the recording speed are the same. Even if the film camera is set to record at a slower speed, it will still be projected at 24 frame/s. Thus the image on screen will appear to move faster. The change in speed of the onscreen image can be calculated by dividing the projection speed by the camera speed. So a film recorded at 12 frames per second will appear to move twice as fast. Shooting at camera speeds between 8 and 22 frames per second usually falls into the undercranked fast motion category, with images shot at slower speeds more closely falling into the realm of time-lapse, although these distinctions of terminology have not been entirely established in all movie production circles. The same principles apply to video and other digital photography techniques. However, until very recently, video cameras have not been capable of recording at variable frame rates. Time-lapse can be achieved with some normal movie cameras by simply shooting individual frames manually. But greater accuracy in time-increments and consistency in exposure rates of successive frames are better achieved through a device that connects to the camera's shutter system (camera design permitting) called an intervalometer. The intervalometer regulates the motion of the camera according to a specific interval of time between frames. Today, many consumer grade digital cameras, including even some point-and-shoot cameras, have hardware or software intervalometers available. 
Some intervalometers can be connected to motion control systems that move the camera on any number of axes as the time-lapse photography is achieved, creating tilts, pans, tracks, and trucking shots when the movie is played at normal frame rate. Ron Fricke is the primary developer of such systems, which can be seen in his short film Chronos (1985) and his feature films Baraka (1992, released to video in 2001) and Samsara (2011). Short and long exposure As mentioned above, in addition to modifying the speed of the camera, it is important to consider the relationship between the frame interval and the exposure time. This relationship controls the amount of motion blur present in each frame and is, in principle, exactly the same as adjusting the shutter angle on a movie camera. This is known as "dragging the shutter". A film camera normally records images at 24 frames per second (fps). During each second, the film is actually exposed to light for roughly half the time. The rest of the time, it is hidden behind the shutter. Thus exposure time for motion picture film is normally calculated to be 1/48 second (often rounded to 1/50 second). Adjusting the shutter angle on a film camera (if its design allows), can add or reduce the amount of motion blur by changing the amount of time that the film frame is actually exposed to light. In time-lapse photography, the camera records images at a specific slow interval such as one frame every thirty seconds (1/30 fps). The shutter will be open for some portion of that time. In short exposure time-lapse the film is exposed to light for a normal exposure time over an abnormal frame interval. For example, the camera will be set up to expose a frame for 1/50 second every 30 seconds. Such a setup will create the effect of an extremely tight shutter angle giving the resulting film a stop-motion animation quality. In long exposure time-lapse, the exposure time will approximate the effects of a normal shutter angle. Normally, this means the exposure time should be half of the frame interval. Thus a 30-second frame interval should be accompanied by a 15-second exposure time to simulate a normal shutter. The resulting film will appear smooth. The exposure time can be calculated based on the desired shutter angle effect and the frame interval with the equation: exposure time = frame interval × (shutter angle ÷ 360°). Long exposure time-lapse is less common because it is often difficult to properly expose film at such a long period, especially in daylight situations. A film frame that is exposed for 15 seconds will receive 750 times more light than its 1/50-second counterpart. (Thus it will be more than 9 stops over normal exposure.) A scientific grade neutral density filter can be used to compensate for the over-exposure. Camera movement Some of the most stunning time-lapse images are created by moving the camera during the shot. A time-lapse camera can be mounted to a moving car for example to create a notion of extreme speed. However, to achieve the effect of a simple tracking shot, it is necessary to use motion control to move the camera. A motion control rig can be set to dolly or pan the camera at a glacially slow pace. When the image is projected it could appear that the camera is moving at a normal speed while the world around it is in time-lapse. This juxtaposition can greatly heighten the time-lapse illusion. The speed that the camera must move to create a perceived normal camera motion can be calculated by inverting the time-lapse equation: required camera speed = desired on-screen speed × (camera frame rate ÷ projection frame rate). Baraka was one of the first films to use this effect to its extreme. 
Director and cinematographer Ron Fricke designed his own motion control equipment that utilized stepper motors to pan, tilt and dolly the camera. The short film A Year Along the Abandoned Road shows a whole year passing by in Norway's Børfjord (in Hasvik Municipality) at 50,000 times the normal speed in just 12 minutes. The camera was moved, manually, slightly each day, and so the film gives the viewer the impression of seamlessly travelling around the fjord as the year goes along, each day compressed into a few seconds. A panning time-lapse image can be easily and inexpensively achieved by using a widely available equatorial telescope mount with a right ascension motor. Two axis pans can be achieved as well, with contemporary motorized telescope mounts. A variation of these are rigs that move the camera during exposures of each frame of film, blurring the entire image. Under controlled conditions, usually with computers carefully making the movements during and between each frame, some exciting blurred artistic and visual effects can be achieved, especially when the camera is mounted on a tracking system that enables its own movement through space. The most classic example of this is the "slit-scan" opening of the "stargate" sequence toward the end of Stanley Kubrick's 2001: A Space Odyssey (1968), created by Douglas Trumbull. Related techniques Bullet time Hyperlapse Motion control photography Long-exposure photography High-dynamic-range (HDR) Time-lapse can be combined with techniques such as high-dynamic-range imaging. One method to achieve HDR involves bracketing for each frame. Three photographs are taken at separate exposure values (capturing the three in immediate succession) to produce a group of pictures for each frame representing the highlights, mid-tones, and shadows. The bracketed groups are consolidated into individual frames. Those frames are then sequenced into video. Day-to-night transitions Day-to-night transitions are among the most demanding scenes in time-lapse photography and the method used to deal with those transitions is commonly referred to as the "Holy Grail" technique. In a remote area not affected by light pollution the night sky is about ten million times darker than the sky on a sunny day, which corresponds to 23 exposure values. In the analog age, blending techniques have been used in order to handle this difference: One shot has been taken in daytime and the other one in the night from exactly the same camera angle. Digital photography provides many ways to handle day-to-night transitions, such as automatic exposure and ISO, bulb ramping and several software solutions to operate the camera from a computer or smartphone. See also The Benny Hill Show Everyday (video) The Longest Way Rephotography References Further reading ICP Library of Photographers. Roman Vishniac. Grossman Publishers, New York. 1974. Roman Vishniac. Current Biography (1967). Exploring the Spectrum John Ott. (1975; DVD re-issue 2008). EBSCO Industries. (2013). From ponies to ProjectCam: The history of time lapse photography. Retrieved from https://www.wingscapes.com/blog/from-peonies-to-the-projectcam-the-history-of-time-lapse-photography/ External links Time-lapse photography tutorial Audiovisual introductions in 1897 Cinematic techniques Animation techniques Articles containing video clips Film post-production technology Photography by genre Time
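As a practical illustration of the relations given in the Methodology and Short and long exposure sections above, the sketch below plans a shoot from a desired clip length: it derives the frame interval, the apparent speed-up factor, and the "dragged shutter" exposure for a 180° equivalent shutter angle. The event duration and clip length are assumed example values.

```python
# Planning sketch for a time-lapse shoot, using the relations in the text:
#   apparent speed-up = playback frame rate / capture frame rate
#   exposure time     = frame interval * (shutter angle / 360 degrees)

def frame_interval(event_seconds, clip_seconds, playback_fps=24):
    """Seconds between exposures needed to compress an event into a clip."""
    frames_needed = clip_seconds * playback_fps
    return event_seconds / frames_needed

def exposure_time(interval_s, shutter_angle_deg=180):
    """Long-exposure ('dragged shutter') time for a given interval and angle."""
    return interval_s * shutter_angle_deg / 360.0

event = 2 * 3600    # a two-hour sunset (assumed)
clip = 10           # ten seconds of finished footage (assumed)

interval = frame_interval(event, clip)
print(f"shoot one frame every {interval:.0f} s "
      f"(apparent speed-up {event / clip:.0f}x)")
print(f"180-degree equivalent exposure: {exposure_time(interval):.0f} s per frame")
```

For these assumed numbers the result is one frame every 30 seconds with a 15-second exposure, matching the worked figures used in the article.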
Time-lapse photography
[ "Physics", "Mathematics" ]
3,591
[ "Physical quantities", "Time", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities" ]
1,624,750
https://en.wikipedia.org/wiki/Noscapine
Noscapine, also known as narcotine, nectodon, nospen, anarcotine and (archaic) opiane, is a benzylisoquinoline alkaloid of the phthalideisoquinoline structural subgroup, which has been isolated from numerous species of the family Papaveraceae (poppy family). It lacks effects associated with opioids such as sedation, euphoria, or analgesia (pain-relief) and lacks addictive potential. Noscapine is primarily used for its antitussive (cough-suppressing) effects. Medical uses Noscapine is often used as an antitussive medication. A 2012 Dutch guideline, however, does not recommend its use for acute coughing. Side effects Nausea Vomiting Loss of coordination Hallucinations (auditory and visual) Loss of sexual drive Swelling of prostate Loss of appetite Dilated pupils Increased heart rate Shaking and muscle spasms Chest pain Increased alertness Increased wakefulness Loss of stereoscopic vision Interactions Noscapine can increase the effects of centrally sedating substances such as alcohol and hypnotics. The drug should not be taken with monoamine oxidase inhibitors (MAOIs), as unknown and potentially fatal effects may occur. Noscapine should not be taken in conjunction with warfarin as the anticoagulant effects of warfarin may be increased. Biosynthesis The biosynthesis of noscapine in P. somniferum begins with chorismic acid, which is synthesized via the shikimate pathway from erythrose 4-phosphate and phosphoenolpyruvate. Chorismic acid is a precursor to the amino acid tyrosine, the source of nitrogen in benzylisoquinoline alkaloids. Tyrosine can undergo a PLP-mediated transamination to form 4-hydroxyphenylpyruvic acid (4-HPP), followed by a TPP-mediated decarboxylation to form 4-hydroxyphenylacetaldehyde (4-HPAA). Tyrosine can also be hydroxylated to form 3,4-dihydroxyphenylalanine (DOPA), followed by a PLP-mediated decarboxylation to form dopamine. Norcoclaurine synthase (NCS) catalyzes a Pictet-Spengler reaction between 4-HPAA and dopamine to synthesize (S)-norcoclaurine, providing the characteristic benzylisoquinoline scaffold. (S)-Norcoclaurine is sequentially 6-O-methylated (6OMT), N-methylated (CNMT), 3-hydroxylated (NMCH), and 4′-O-methylated (4′OMT), with the use of cofactors S-adenosyl-methionine (SAM) and NADP+ for methylations and hydroxylations, respectively. These reactions produce (S)-reticuline, a key branchpoint intermediate in the biosynthesis of benzylisoquinoline alkaloids. The remainder of the noscapine biosynthetic pathway is largely governed by a single biosynthetic 10-gene cluster. Genes comprising the cluster encode enzymes responsible for nine of the eleven remaining chemical transformations. First, berberine bridge enzyme (BBE), an enzyme not encoded by the cluster, forms the fused four-ring structure in (S)-scoulerine. BBE uses O2 as an oxidant and is aided by cofactor flavin adenine dinucleotide (FAD). Next, an O-methyltransferase (SOMT) methylates the 9-hydroxyl group. Canadine synthase (CAS) catalyzes the formation of a unique C2-C3 methylenedioxy bridge in (S)-canadine. An N-methylation (TNMT) and two hydroxylations (CYP82Y1, CYP82X2) follow, aided by SAM and O2/NADPH, respectively. The C13 alcohol is then acetylated by an acetyltransferase (AT1) using acetyl-CoA. Another cytochrome P450 enzyme (CYP82X1) catalyzes the hydroxylation of C8, and the newly formed hemiaminal spontaneously cleaves, yielding a tertiary amine and aldehyde. A methyltransferase heterodimer (OMT2:OMT3) catalyzes a SAM-mediated O-methylation on C4′. 
The O-acetyl group is then cleaved by a carboxylesterase (CXE1), yielding an alcohol which immediately reacts with the neighboring C1 aldehyde to form a hemiacetal in a new five-membered ring. The apparent counteractivity between AT1 and CXE1 suggests that acetylation in this context is employed as a protective group, preventing hemiacetal formation until the ester is enzymatically cleaved. Finally, an NAD+-dependent short-chain dehydrogenase (NOS) oxidizes the hemiacetal to a lactone, completing noscapine biosynthesis. Mechanism of action Noscapine's antitussive effects appear to be primarily mediated by its σ–receptor agonist activity. This mechanism is supported by experimental evidence in rats: pretreatment with rimcazole, a σ-specific antagonist, causes a dose-dependent reduction in the antitussive activity of noscapine. Noscapine, and its synthetic derivatives called noscapinoids, are known to interact with microtubules and inhibit cancer cell proliferation. Structure analysis The lactone ring is unstable and opens in basic media. The reverse reaction occurs in acidic media. The bond (C1−C3′) connecting the two optically active carbon atoms is also unstable. On heating in aqueous sulfuric acid, it dissociates into cotarnine (4-methoxy-6-methyl-5,6,7,8-tetrahydro-[1,3]dioxolo[4,5-g]isoquinoline) and opic acid (6-formyl-2,3-dimethoxybenzoic acid). When noscapine is reduced with zinc/HCl, the bond C1−C3′ saturates and the molecule dissociates into hydrocotarnine (2-hydroxycotarnine) and meconine (6,7-dimethoxyisobenzofuran-1(3H)-one). History Noscapine was first isolated, and its chemical breakdown and properties characterized, in 1803 under the name "narcotine" by Jean-François Derosne, a French chemist in Paris. Pierre-Jean Robiquet, another French chemist, proved narcotine and morphine to be distinct alkaloids in 1831. Between 1815 and 1835 Robiquet also conducted a series of studies improving methods for the isolation of morphine, and in 1832 he isolated another very important component of raw opium, which he named codeine, now a widely used opium-derived compound. Society and culture Recreational use There are anecdotal reports of recreational use of over-the-counter noscapine preparations in several countries where they are readily available from local pharmacies without a prescription. The effects, beginning around 45 to 120 minutes after consumption, are similar to those of dextromethorphan and alcohol intoxication. Unlike dextromethorphan, noscapine is not an NMDA receptor antagonist. Noscapine in heroin Noscapine can survive the manufacturing processes of heroin and can be found in street heroin. This is useful for law enforcement agencies, as the amounts of contaminants can identify the source of seized drugs. In 2005 in Liège, Belgium, the average noscapine concentration was around 8%. Noscapine has also been used to identify drug users who are taking street heroin at the same time as prescribed diamorphine. Since the diamorphine in street heroin is the same as the pharmaceutical diamorphine, examination of the contaminants is the only way to test whether street heroin has been used. Other contaminants used in urine samples alongside noscapine include papaverine and acetylcodeine. Noscapine is metabolised by the body, and is itself rarely found in urine, instead being present as the primary metabolites, cotarnine and meconine. 
Detection is performed by gas chromatography-mass spectrometry or liquid chromatography-mass spectrometry (LCMS) but can also use a variety of other analytical techniques. Research Clinical trials The efficacy of noscapine in the treatment of certain hematological malignancies has been explored in the clinic. Polyploidy induction by noscapine has been observed in vitro in human lymphocytes at high dose levels (>30 μM); however, low-level systemic exposure, e.g. with cough medications, does not appear to present a genotoxic hazard. The mechanism of polyploidy induction by noscapine is suggested to involve either chromosome spindle apparatus damage or cell fusion. Noscapine biosynthesis reconstitution Many of the enzymes in the noscapine biosynthetic pathway were elucidated by the discovery of a 10 gene "operon-like cluster" named HN1. In 2016, the biosynthetic pathway of noscapine was reconstituted in yeast cells, allowing the drug to be synthesised without the requirement of harvest and purification from plant material. In 2018, the entire noscapine pathway was reconstituted and produced in yeast from simple molecules. In addition, protein expression was optimised in yeast, allowing production of noscapine to be improved 18,000 fold. It is hoped that this technology could be used to produce pharmaceutical alkaloids such as noscapine which are currently expressed at too low a yield in planta to be mass-produced, allowing them to become marketable therapeutic drugs. Anticancer derivatives Noscapine is itself an antimitotic agent; its analogs therefore have great potential as novel anti-cancer drugs. Analogs with significant cytotoxic effects arising from modification of the 1,3-benzodioxole moiety have been developed. Similarly, N-alkyl amine, 1,3-diynyl, 9-vinyl-phenyl and 9-arylimino derivatives of noscapine have also been developed. Their mechanism of action is through tubulin inhibition. Anti-inflammatory effects Various studies have indicated that noscapine has anti-inflammatory effects and significantly reduces the levels of proinflammatory factors such as interleukin 1β (IL-1β), IFN-γ, and IL-6. In this regard, in another study, Khakpour et al. examined the effect of noscapine against carrageenan-induced inflammation in rats. They found that noscapine at a dose of 5 mg/kg body weight had its greatest anti-inflammatory effect three hours after the injection. Moreover, they showed that the amount of inflammation reduction at this dose of noscapine is approximately equal to that of indomethacin, a standard anti-inflammatory medication. Furthermore, Shiri et al. concluded that noscapine prevented the progression of bradykinin-induced inflammation in the rat paw by antagonising bradykinin receptors. In addition, Zughaier et al. evaluated the anti-inflammatory effects of brominated noscapine. The brominated form of noscapine has been shown to inhibit the secretion of the cytokine TNF-α and the chemokine CXCL10 from macrophages, thereby reducing inflammation without affecting macrophage survival. Furthermore, the brominated derivative of noscapine is about 5 to 40 times more potent than noscapine. This brominated derivative also inhibits toll-like receptors (TLR), TNF-α, and nitric oxide (NO) in human and mouse macrophages without causing toxicity. See also Cough syrup Codeine; Pholcodine Dextromethorphan; Dimemorfan Racemorphan; Dextrorphan; Levorphanol Butamirate Pentoxyverine Tipepidine Cloperastine Levocloperastine Narceine, a related opium alkaloid. 
References Natural opium alkaloids Antitussives 3-(5,6,7,8-tetrahydro-(1,3)dioxolo(4,5-g)isoquinolin-5-yl)-3H-2-benzofuran-1-ones Pyrogallol ethers Sigma agonists
Noscapine
[ "Chemistry" ]
2,752
[ "Alkaloids by chemical classification", "Tetrahydroisoquinoline alkaloids" ]
1,624,786
https://en.wikipedia.org/wiki/AL-6XN
AL-6XN (UNS designation N08367) is a type of weldable stainless steel that consists of an alloy of nickel (24%), chromium (22%) and molybdenum (6.3%) with other trace elements such as nitrogen. The high nickel and molybdenum contents of the AL-6XN alloy give it good resistance to chloride stress-corrosion cracking. The molybdenum confers resistance to chloride pitting. The nitrogen content serves to further increase pitting resistance and also gives it higher strength than typical 300 series austenitic stainless steels, and thereby often allows it to be used in thinner sections. This metal is commonly used instead of 300 series stainless steels in high-temperature and low-pH applications in food processing. For example, tomato juice will corrode 316L stainless steel at pasteurization temperatures of 100 °C (212 °F). AL-6XN will better resist this corrosion while still offering the beneficial properties of stainless steel. AL-6XN applications and markets Applications for superaustenitic stainless steel alloy AL-6XN include chemical processing, oil and gas, medical – sterilization and power generation, with specific applications identified in desalination, water piping systems, transformer cases in marine environments, food processing equipment, FGD scrubbers, reverse osmosis, and heat exchangers. AL-6XN specifications Specifications include: ASME SA : 182, 240, 249, 312, 479 ASME SB : 366, 462, 564, 675, 676, 688, 691 ASTM A : 182, 240, 249, 312, 479 ASTM B : 366, 462, 472, 564, 675, 676, 688, 691, 804 NACE : MR0175 UNS : N08367 References Chromium alloys Nickel alloys Stainless steel Steel alloys
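One way to put the pitting-resistance remarks above on a quantitative footing is the pitting resistance equivalent number (PREN), a widely used empirical index computed from the chromium, molybdenum and nitrogen contents (a common form is Cr + 3.3·Mo + 16·N). The short Python sketch below applies that formula to the nominal composition quoted in this article; the nitrogen value and the 316L comparison figures are assumed typical levels rather than values from the article, and the formula choice itself is a common convention, not something the article states.

```python
def pren(cr: float, mo: float, n: float) -> float:
    """Pitting resistance equivalent number, PREN = %Cr + 3.3*%Mo + 16*%N.

    All arguments are weight percentages; higher values indicate better
    resistance to chloride pitting corrosion.
    """
    return cr + 3.3 * mo + 16.0 * n

# Nominal AL-6XN composition from the article (Cr 22%, Mo 6.3%);
# the nitrogen figure of 0.22% is an assumed typical level, not from the article.
al_6xn = pren(cr=22.0, mo=6.3, n=0.22)

# A generic 316L composition for comparison (assumed typical values).
type_316l = pren(cr=17.0, mo=2.5, n=0.05)

print(f"AL-6XN PREN ≈ {al_6xn:.1f}")    # ≈ 46.3
print(f"316L   PREN ≈ {type_316l:.1f}")  # ≈ 26.1
```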
AL-6XN
[ "Chemistry" ]
410
[ "Nickel alloys", "Alloys", "Chromium alloys", "Alloy stubs" ]
1,624,795
https://en.wikipedia.org/wiki/Weak%20hypercharge
In the Standard Model of electroweak interactions of particle physics, the weak hypercharge is a quantum number relating the electric charge and the third component of weak isospin. It is frequently denoted Y_W and corresponds to the gauge symmetry U(1). It is conserved (only terms that are overall weak-hypercharge neutral are allowed in the Lagrangian). However, one of the interactions is with the Higgs field. Since the Higgs field vacuum expectation value is nonzero, particles interact with this field all the time even in vacuum. This changes their weak hypercharge (and weak isospin T_3). Only a specific combination of them, the electric charge Q = T_3 + Y_W/2, is conserved. Mathematically, weak hypercharge appears similar to the Gell-Mann–Nishijima formula for the hypercharge of strong interactions (which is not conserved in weak interactions and is zero for leptons). In the electroweak theory SU(2) transformations commute with U(1) transformations by definition and therefore U(1) charges for the elements of the SU(2) doublet (for example lefthanded up and down quarks) have to be equal. This is why U(1) cannot be identified with U(1)em and weak hypercharge has to be introduced. Weak hypercharge was first introduced by Sheldon Glashow in 1961. Definition Weak hypercharge is the generator of the U(1) component of the electroweak gauge group, and its associated quantum field B mixes with the W_3 electroweak quantum field to produce the observed Z gauge boson and the photon of quantum electrodynamics. The weak hypercharge satisfies the relation Q = T_3 + Y_W/2, where Q is the electric charge (in elementary charge units) and T_3 is the third component of weak isospin (the SU(2) component). Rearranging, the weak hypercharge can be explicitly defined as Y_W = 2(Q − T_3); for right-handed fermions, which are SU(2) singlets with T_3 = 0, this reduces to Y_W = 2Q. Here "left"- and "right"-handed refer to left and right chirality, respectively (distinct from helicity). The weak hypercharge for an anti-fermion is the opposite of that of the corresponding fermion because the electric charge and the third component of the weak isospin reverse sign under charge conjugation. The sum of −isospin and +charge is zero for each of the gauge bosons; consequently, all the electroweak gauge bosons have Y_W = 0. Hypercharge assignments in the Standard Model are determined up to a twofold ambiguity by requiring cancellation of all anomalies. Alternative half-scale For convenience, weak hypercharge is often represented at half-scale, so that Y_W = Q − T_3, which is equal to just the average electric charge of the particles in the isospin multiplet. Baryon and lepton number Weak hypercharge is related to baryon number minus lepton number, B − L, through a quantum number X that is conserved in GUTs. Since weak hypercharge is always conserved within the Standard Model and most extensions, this implies that baryon number minus lepton number is also always conserved. Neutron decay In neutron decay, n → p + e⁻ + ν̄_e, baryon number and lepton number are conserved separately, so the difference B − L is also conserved. Proton decay Proton decay is a prediction of many grand unification theories; a commonly cited example is p⁺ → e⁺ + π⁰, with the π⁰ decaying into two photons. Hence this hypothetical proton decay would conserve B − L, even though it would individually violate conservation of both lepton number and baryon number. See also Standard Model (mathematical formulation) Weak charge References Nuclear physics Standard Model Electroweak theory
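As a concrete illustration of the assignments above, the following LaTeX fragment works through the weak hypercharge of the left-handed electron and of the right-handed up quark using Y_W = 2(Q − T_3); the particular particle choices are examples added here for illustration, not part of the original article.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Left-handed electron: member of an SU(2) doublet with T_3 = -1/2 and Q = -1
\[
  Y_W(e_L) = 2\left(Q - T_3\right)
           = 2\left(-1 - \left(-\tfrac{1}{2}\right)\right) = -1 .
\]
% Right-handed up quark: SU(2) singlet, so T_3 = 0 and Q = +2/3
\[
  Y_W(u_R) = 2\left(\tfrac{2}{3} - 0\right) = \tfrac{4}{3} .
\]
% Consistency check: the electric charge is recovered from Q = T_3 + Y_W/2
\[
  Q(e_L) = -\tfrac{1}{2} + \tfrac{1}{2}(-1) = -1 , \qquad
  Q(u_R) = 0 + \tfrac{1}{2}\cdot\tfrac{4}{3} = \tfrac{2}{3} .
\]
\end{document}
```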
Weak hypercharge
[ "Physics" ]
724
[ "Standard Model", "Physical phenomena", "Electroweak theory", "Particle physics", "Fundamental interactions", "Nuclear physics" ]
8,930,340
https://en.wikipedia.org/wiki/Cacls
In Microsoft Windows, cacls, and its replacement icacls, are native command-line utilities that can display and modify the security descriptors on files and folders. An access-control list is a list of permissions for a securable object, such as a file or folder, that controls who can access it. The cacls command is also available on ReactOS. cacls The cacls.exe utility is a deprecated command-line editor of directory and file security descriptors in Windows NT 3.5 and later operating systems of the Windows NT family. Microsoft has produced the following newer utilities, some also subsequently deprecated, that offer enhancements to support changes introduced with version 3.0 of the NTFS filesystem: xcacls.exe is supported by Windows 2000 and later and adds new features like setting Execute, Delete and Take Ownership permissions xcacls.vbs fileacl.exe icacls.exe (included in Windows Server 2003 SP2 and later) SubInAcl.exe - Resource Kit utility to set and replace permissions on various types of objects including files, services and registry keys Windows PowerShell (Get-Acl and Set-Acl cmdlets) The ReactOS version was developed by Thomas Weidenmueller and is licensed under the GNU Lesser General Public License. icacls Stands for Integrity Control Access Control List. Windows Server 2003 Service Pack 2 and later include icacls, an in-box command-line utility that can display, modify, back up and restore ACLs for files and folders, as well as set integrity levels and ownership in Vista and later versions. It is not a complete replacement for cacls, however. For example, it does not support Security Descriptor Definition Language (SDDL) syntax directly via command line parameters (only via the /restore option). See also SetACL chmod takeown References Further reading The Security Descriptor Definition Language of Love (Part 1) External links cacls | Microsoft Docs icacls | Microsoft Docs ReactOS commands
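As a rough illustration of how icacls is typically driven from a script, the following Python sketch wraps a few of its documented operations (displaying an ACL, saving ACLs to a file, and granting a permission) via subprocess. The target path and account name are placeholders, the snippet assumes a Windows host with icacls on the PATH, and it is only an illustrative sketch, not code associated with the utilities described above.

```python
import subprocess
from pathlib import Path

TARGET = Path(r"C:\Data\reports")          # placeholder path, adjust as needed
BACKUP = Path(r"C:\Data\reports_acl.txt")  # file that will hold the saved ACLs

def run(args):
    """Run a command and return its textual output (raises on non-zero exit)."""
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout

# Display the current ACL of the target directory.
print(run(["icacls", str(TARGET)]))

# Save the ACLs of the directory tree to a file (the /T switch recurses).
run(["icacls", str(TARGET), "/save", str(BACKUP), "/T"])

# Grant a group read access; the account name and rights here are illustrative.
run(["icacls", str(TARGET), "/grant", r"BUILTIN\Users:(OI)(CI)R"])
```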
Cacls
[ "Technology" ]
446
[ "Windows commands", "ReactOS commands", "Computing commands" ]
8,930,508
https://en.wikipedia.org/wiki/EVA%20%28benchmark%29
EVA was a continuously running benchmark project for assessing the quality and value of protein structure prediction and secondary structure prediction methods. Methods for predicting both secondary structure and tertiary structure - including homology modeling, protein threading, and contact order prediction - were compared to results from each week's newly solved protein structures deposited in the Protein Data Bank. The project aimed to determine the prediction accuracy that would be expected for non-expert users of common, publicly available prediction webservers; this is similar to the related LiveBench project and stands in contrast to the biennial benchmark CASP, which aims to identify the maximum accuracy achievable by prediction experts. References Rost B, Eyrich VA. (2001). EVA: large-scale analysis of secondary structure prediction. Proteins Suppl 5:192-9. Eyrich VA, Marti-Renom MA, Przybylski D, Madhusudhan MS, Fiser A, Pazos F, Valencia A, Sali A, Rost B. (2001). EVA: continuous automatic evaluation of protein structure prediction servers. Bioinformatics 17(12):1242-3. Koh IY, Eyrich VA, Marti-Renom MA, Przybylski D, Madhusudhan MS, Eswar N, Grana O, Pazos F, Valencia A, Sali A, Rost B. (2003). EVA: Evaluation of protein structure prediction servers. Nucleic Acids Res 31(13):3311-5. External links EVA main site Bioinformatics Protein methods
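To make the evaluation idea concrete, the sketch below computes Q3, the standard three-state per-residue accuracy commonly used to score secondary structure predictions against the states assigned from a solved structure. It is a generic illustration of this class of benchmark, not code from EVA itself, and the example sequences are invented.

```python
def q3_accuracy(predicted: str, observed: str) -> float:
    """Fraction of residues whose 3-state secondary structure class
    (H = helix, E = strand, C = coil/other) is predicted correctly."""
    if len(predicted) != len(observed):
        raise ValueError("prediction and observation must have equal length")
    correct = sum(p == o for p, o in zip(predicted, observed))
    return correct / len(observed)

# Invented toy example: a prediction compared with the states assigned
# from an experimentally solved structure.
observed  = "CCHHHHHHCCEEEECC"
predicted = "CCHHHHHCCCEEEECC"
print(f"Q3 = {q3_accuracy(predicted, observed):.2%}")  # 93.75% for this toy case
```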
EVA (benchmark)
[ "Chemistry", "Engineering", "Biology" ]
335
[ "Biochemistry methods", "Biological engineering", "Bioinformatics stubs", "Protein methods", "Biotechnology stubs", "Protein biochemistry", "Biochemistry stubs", "Bioinformatics" ]
8,930,788
https://en.wikipedia.org/wiki/LiveBench
LiveBench is a continuously running benchmark project for assessing the quality of protein structure prediction and secondary structure prediction methods. LiveBench focuses mainly on homology modeling and protein threading but also includes secondary structure prediction, comparing publicly available webserver output to newly deposited protein structures in the Protein Data Bank. Like the EVA project and unlike the related CASP and CAFASP experiments, LiveBench is intended to study the accuracy of predictions that would be obtained by non-expert users of publicly available prediction methods. A major advantage of LiveBench and EVA over CASP projects, which run once every two years, is their comparatively large data set. References Bujnicki JM, Elofsson A, Fischer D, Rychlewski L. (2001). LiveBench-1: continuous benchmarking of protein structure prediction servers. Protein Sci 10(2):352-61. Rychlewski L, Fischer D. (2005). LiveBench-8: the large-scale, continuous assessment of automated protein structure prediction. Protein Sci 14(1):240-5. External links LiveBench main site Bioinformatics Protein methods
LiveBench
[ "Chemistry", "Engineering", "Biology" ]
243
[ "Biochemistry methods", "Biological engineering", "Bioinformatics stubs", "Protein methods", "Biotechnology stubs", "Protein biochemistry", "Biochemistry stubs", "Bioinformatics" ]
8,932,548
https://en.wikipedia.org/wiki/Philippine%20flying%20lemur
The Philippine flying lemur or Philippine colugo (Cynocephalus volans), known locally as kagwang, is one of two species of colugo or "flying lemurs". It is monotypic of its genus. Although it is called "flying lemur", the Philippine flying lemur is neither a lemur nor does it fly. Instead, it glides as it leaps among trees. The kagwang belongs to the order Dermoptera that contains only two species, one of which is found in the Philippines, while the other, the Sunda flying lemur, is found in Indonesia, Thailand, Malaysia, and Singapore. Recent research from genetic analysis suggests two other species, the Bornean flying lemur and the Javan flying lemur, may exist, as well, but they have yet to be officially classified. Both species of Dermoptera are classified under the grandorder Euarchonta, which includes treeshrews and primates, as well as an extinct order of mammals, the Plesiadapiformes. Habitat and ecology The Philippine flying lemur is endemic to the southern Philippines. Its population is concentrated in the Mindanao region and Bohol. It may also be found in Samar and Leyte. Colugos are found in heavily forested areas, living mainly high up in the trees in lowland and mountainous forests or sometimes in coconut and rubber plantations, rarely coming down to the ground. They spend most of their time at the top of the rainforest canopy or in the forest middle level. With their wide patagia and unopposable thumbs, Philippine flying lemurs are rather slow, clumsy climbers, ascending tree trunks in a series of slow lurches with their heads up and limbs spread to grasp the tree. Physical features A typical Philippine flying lemur weighs about and its head-body length is . Its tail length is . The species exhibits sexual dimorphism with females being somewhat larger than males. It has a wide head and rostrum with a robust mandible for increased bite strength, small ears, and big eyes with unique photoreceptor adaptations adapted for its nocturnal lifestyle. The large eyes allow for excellent vision, which the colugo uses to accurately jump and glide from tree to tree. It has an avascular retina which is not typical of mammals, suggesting this is a primitive trait; on par with other nocturnal mammals, specifically nocturnal primates, the rod cells in the eye make up about 95–99% of the photoreceptors and cones make up about 1–5%. Its clawed feet are large and sharp with an incredible grip strength, allowing them to skillfully but slowly climb trees, hang from branches, or anchor themselves to the trunk of a tree. One unique feature of the colugo is the patagium, the weblike membrane that connects its limbs to allow for gliding. Unlike other mammals with patagia, its patagium extends from the neck to the limbs, in between digits, and even behind the hind limbs and the tail. Its keeled sternum, which is also seen in bats, aids in its gliding efficiency. Its patagium is the most extensive membrane used for gliding in mammals and also functions as a hammock-like pouch for its young. This membrane helps it glide distances of 100 m or more, useful for finding food and escaping predators, such as the Philippine eagle (Pithecophaga jefferyi) and tree-climbing snakes that try to attack the colugos when they glide between trees. The dental formula of the Philippine flying lemur is 2/3, 1/1, 2/2, 3/3, with a total of 34 teeth. The first two lower procumbent incisors are pectinate with up to 15 tines, which are thought to be used for grooming and grating food. 
The upper incisors are small and have spaces between them, as well. The deciduous teeth are serrated until they are lost and then they are replaced with blade-like teeth that have evolved to shear along with the molars that also have long shearing crests to help break down the plant matter they ingest. Following mastication, the digestive tract of the Philippine flying lemur, especially the stomach, is specially adapted to break down and process the large amount of leaves and vegetation they ingest. Colugos also have a brownish grey-and-white pelage they use as camouflage amongst the tree trunks and branches, which allows them to better hide from predators and hunters. Diet The Philippine flying lemur is a folivore, eating mainly young leaves and occasionally soft fruits, flowers, plant shoots, and insects. They also obtain a significant amount of their water from licking wet leaves and from the water in the plants and fruits themselves. Most of their nutrition is obtained by jumping and gliding between trees high in the canopy; rarely do they eat on the forest floor. Behaviour The Philippine flying lemur is arboreal and nocturnal, and usually resides in primary and secondary forests, but some wander into coconut, banana, and rubber plantations as deforestation for farming and industry is an increasingly prevalent problem. The colugo sleeps in hollow trees or clings onto branches in dense foliage during daytime. When they engage in this hanging behaviour from branches, they keep their heads upright, unlike bats. On the ground, colugos are slow and clumsy, and not able to stand erect, so they rarely leave the canopy level of the forest, where they glide from tree to tree to get to food or their nests, which are also high in the trees. In the trees, though, colugos are quite effective climbers, though they are slow; they move in a series of lingering hops as they use their claws to move up the tree trunk. Foraging only at night, colugos on average forage for 9.4 minutes about 12 times per night. They typically leave their nests at dusk to begin their foraging activity. When foraging, returning to the nest, or just moving around, the Philippine flying lemur uses its patagium to glide from tree to tree. The patagium is also used for cloaking the colugo when it is clinging to a tree trunk or branch, and sometimes it is even seen curled up in a ball, using its patagium again as a cloaking mechanism among palm fronds often in coconut plantations. Colugos maintain height in the trees to avoid predators that may live in lower levels, but they are still susceptible to other predators that can reach these higher levels of the canopy and predatory birds that can attack from above. They live alone, but several may be seen in the same tree, where they maintain their distance from one another and are very territorial of their personal areas. Though they are not social mammals, they do engage in a unique semi-social behaviour where colugos living in the same relative area or tree follow each other's gliding paths through the trees in search of food. This may be a defence mechanism, whereas a population, the safest route possible is determined and shared as a sort of cooperative mechanism for increased survival rates. The only time colugos actually live socially is after a mother has given birth; then she will care for and live with her offspring until they are weaned; at that point, the offspring are on their own. The average lifespan of the Philippine flying lemur is unknown. 
Reproduction Little is known about the reproductive behaviour in colugos. The female usually gives birth to one young after a two-month gestation period. The young is born undeveloped and helpless, and it attaches itself to its mother's belly, in a pouch formed from the mother's tail membrane. It is eventually weaned around 6 months old, and leaves its mother's patagium. Adult size and sexual maturity is reached between two and three years of age. Mating usually occurs between January and March. Major threats The Philippine flying lemur is threatened by massive destruction of its forest habitat, owing partly to logging and the development of land for agriculture. It is a primary prey of the Philippine eagle making up to 90% of the eagle's diet. It is also hunted by humans for food. Conservation The IUCN 1996 had declared the species vulnerable owing to the destruction of lowland forests and to hunting, but it was listed as least concern in 2008. The 2008 IUCN report indicates the species persists in the face of degraded habitat, with its current population large enough to avoid the threatened category. Since colugos have limited dispersal abilities, they are increasingly vulnerable as deforestation is occurring at increasing rates. Other threats to the species include hunting by the farmers of the plantations they sometimes invade, where they are considered pests, since they eat fruits and flowers. In local cultures, their flesh is also consumed as a delicacy; other uses of the colugo vary in different regions of the Philippines. In Bohol, their fur is used as material for native hats, but in Samar, the species is considered a bad omen and is killed either to be used as a warning or to get rid of the omen. The animal is largely unknown in many areas in the Philippines such that on Facebook, its image was once mistaken for a supernatural creature that was said to feed on other animals, though in reality, the endangered species is a folivore that feeds on fruits, flowers, and leaves. References External links Cynocephalus volans of Philippine Mammalian Fauna: Classification at Zip Code Zoo Flying Lemur at Txt Mania Flying Lemur at Rob Stewart Photography Mammals described in 1758 EDGE species Colugos Mammals of the Philippines Endemic fauna of the Philippines Fauna of Mindanao Fauna of Bohol Fauna of Leyte Fauna of Samar Fauna of Biliran Fauna of Dinagat Islands Taxa named by Carl Linnaeus su:Tando
Philippine flying lemur
[ "Biology" ]
2,025
[ "EDGE species", "Biodiversity" ]
8,933,657
https://en.wikipedia.org/wiki/Aczel%27s%20anti-foundation%20axiom
In the foundations of mathematics, Aczel's anti-foundation axiom is an axiom set forth by Peter Aczel in 1988, as an alternative to the axiom of foundation in Zermelo–Fraenkel set theory. It states that every accessible pointed directed graph corresponds to exactly one set. In particular, according to this axiom, the graph consisting of a single vertex with a loop corresponds to a set that contains only itself as element, i.e. a Quine atom. A set theory obeying this axiom is necessarily a non-well-founded set theory. Accessible pointed graphs An accessible pointed graph is a directed graph with a distinguished vertex (the "root") such that for any node in the graph there is at least one path in the directed graph from the root to that node. The anti-foundation axiom postulates that each such directed graph corresponds to the membership structure of exactly one set. For example, the directed graph with only one node and an edge from that node to itself corresponds to a set of the form x = {x}. See also von Neumann universe References Axioms of set theory Directed graphs
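To make the graph-to-set correspondence concrete, here is a small Python sketch (added for illustration, not from the article) that decodes a well-founded accessible pointed graph into the hereditarily finite set it pictures, using nested frozensets. Python's frozenset cannot contain itself, so the cyclic case that the anti-foundation axiom covers — such as the single self-loop identified with the Quine atom x = {x} — is deliberately rejected here; the axiom asserts that such a graph still corresponds to exactly one set, even though no ordinary finite data structure can represent it directly.

```python
from typing import Dict, FrozenSet, Hashable, Set

Node = Hashable

def decode(graph: Dict[Node, Set[Node]], root: Node) -> FrozenSet:
    """Return the set pictured by an accessible pointed graph.

    `graph` maps each node to the set of its children (an edge a -> b
    means "b is an element of a").  Works only for well-founded
    (acyclic) graphs; a cycle raises an error, since the corresponding
    non-well-founded set cannot be built out of Python frozensets.
    """
    def go(node: Node, on_path: Set[Node]) -> FrozenSet:
        if node in on_path:
            raise ValueError("cyclic graph: needs the anti-foundation axiom")
        children = graph.get(node, set())
        return frozenset(go(child, on_path | {node}) for child in children)
    return go(root, set())

# Graph picturing the von Neumann ordinal 2 = {{}, {{}}}:
# root -> a, root -> b, b -> a, and a has no children (a pictures the empty set).
g = {"root": {"a", "b"}, "a": set(), "b": {"a"}}
print(decode(g, "root"))  # frozenset({frozenset(), frozenset({frozenset()})})

# A single node with a self-loop pictures the Quine atom; decode() must reject it.
loop = {"x": {"x"}}
try:
    decode(loop, "x")
except ValueError as err:
    print("rejected:", err)
```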
Aczel's anti-foundation axiom
[ "Mathematics" ]
252
[ "Axioms of set theory", "Mathematical axioms" ]
8,933,729
https://en.wikipedia.org/wiki/Helium%20release%20valve
A helium release valve, helium escape valve or gas escape valve is a feature found on some diving watches intended for saturation diving using helium based breathing gas. Gas ingress problem When saturation divers operate at great depths, they live under pressure in a saturation habitat with an atmosphere containing helium or hydrogen. Since helium atoms are the smallest natural gas particles—the atomic radius of a helium atom is 0.49 angstrom and that of a water molecule is about 2.75 angstrom—, they are able to diffuse over about five days into the watch, past the seals which are able to prevent ingress of larger molecules such as water. This is not a problem as long as the watch remains under external pressure, but when decompressing, a pressure difference builds up between the trapped gas inside the watch case and the environment. Depending on the construction of the watch case, seals and crystal, this effect can cause damage to the watch, such as the crystal popping off, as diving watches are designed primarily to withstand external pressure. Solutions development Some watch manufacturers manage the internal overpressure effect by simply making the case and sealed connected parts adequately sealed or strong enough to avoid or withstand the internal pressure, but Rolex and Doxa S.A. approached the problem by creating the helium escape valve in the 1960s (first introduced in the Rolex Submariner/Sea-Dweller and the Doxa Conquistador): A small, spring-loaded one-way valve is fitted in the watch case that opens when the differential between internal and external pressure is sufficient to overcome the spring force. As a result, the valve releases the gases trapped inside the watch case during decompression, preventing damage to the watch. The original idea for using a one-way valve came from Robert A. Barth, a US Navy diver who pioneered saturation diving during the US Navy Genesis and SEALAB missions led by Dr. George F. Bond. The patent for the helium escape valve was filed by Rolex on 6 November 1967 and granted on 15 June 1970. Solutions application Automatic helium release valves usually don't need any manual operation, but some are backed up by a screw-down crown in the side of the watch, which is unscrewed at the start of decompression to allow the valve to operate. As decompressing saturation divers is a slow working conditions requirements regulated process to prevent sickness and any other harmful medical effects, the helium release valve does not have to be able to cope with extremely rapid decompression scenarios, that can occur in a material/medical pass-through system lock. Helium release valves can primarily be found on diving watches featuring a water resistance rating greater than 300 m (1000 ft). ISO 6425 defines a diver's watch for mixed-gas diving as: A watch required to be resistant during diving in water to a depth of at least 100 m and to be unaffected by the overpressure of the breathing gas. Models that feature a helium release valve include most of the Omega Seamaster series, Rolex Sea Dweller, Tudor watches Pelagos, some dive watches from the Citizen Watch Co., Ltd, Breitling, Girard-Perregaux, Anonimo, Panerai, Mühle Rasmus by Nautische Instrumente Mühle Glashütte, Deep Blue, Scurfa Watches, all watches produced by Enzo Mechana, Aegir Watches and selected Doxa, selected Victorinox models, Oris models, TAG Heuer Aquaracer models, and the DEL MAR Professional Dive 1000 watch. 
Other watch manufacturers such as Seiko and Citizen Watch Co., Ltd still offer high-level dive watches that are guaranteed safe against the effects of mixed-gas diving without needing an additional opening in the case in the form of a release valve. This is normally achieved through the use of special gaskets and monocoque case construction instead of using the more common screw down case-backs. Saturation diving water resistance management To enable changing the time or date during their dive, saturation divers have to act somewhat counterintuitive regarding the water resistance management of their diving watches. On the initial and any later blowdown or compression, most saturation divers consciously open the water-resistant crown of their watches to allow the breathing gas inside to equalize the internal pressure to their storage/living environment. This pressure differential mitigation strategy allows them to later open the water-resistant crown at their storage pressure, to be able to adjust their watch if required during their (often weeks long) saturation period under regularly varying pressure levels between worksites. The storage pressure is generally kept equal or only slightly lower than the pressure at the intended divers' working depth. Opening a watch case (by unscrewing a crown) means expanding its internal volume. In a significantly higher external pressure environment, any expansion will be impeded by this environment. Every opening and closing action of a release valve or crown seal involves a risk of dirt, lint or other non-gaseous matter ingress, that can compromise the proper functioning of the seal and watch. ISO 6425 divers' watches standard for mixed-gas diving decompression testing The standards and features for diving watches are regulated by the ISO 6425 – Divers' watches international standard. ISO 6425 testing of the water resistance or water-tightness and resistance at a water overpressure as it is officially defined is fundamentally different from non-dive watches, because every single watch has to be tested. ISO 6425 provides specific additional requirements for testing of diver's watches for mixed-gas diving. Some specific additional requirements for testing of diver's watches for mixed-gas diving provided by ISO 6425 are: Test of operation at a gas overpressure. The watch is subject to the overpressure of gas which will actually be used, i.e. 125% of the rated pressure, for 15 days. Then a rapid reduction in pressure to the atmospheric pressure shall be carried out in a time not exceeding 3 minutes. After this test, the watch shall function correctly. An electronic watch shall function normally during and after the test. A mechanical watch shall function normally after the test (the power reserve normally being less than 15 days). Test by internal pressure (simulation of decompression). Remove the crown together with the winding and/or setting stem. In its place, fit a crown of the same type with a hole. Through this hole, introduce the gas mixture which will actually be used and create an overpressure of the rated pressure/20 bar in the watch for a period of 10 hours. Then carry out the test at the rated water overpressure. In this case, the original crown with the stem shall be refitted beforehand. After this test, the watch shall function correctly. 
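The scale of the problem described above can be illustrated with a back-of-the-envelope calculation: trapped gas pushes on the crystal with a force F = ΔP × A. The Python sketch below uses an assumed 30 mm crystal diameter and a few illustrative overpressure values; neither figure comes from this article, and real cases depend on the watch geometry and the decompression profile.

```python
import math

def crystal_force(overpressure_bar: float, diameter_mm: float) -> float:
    """Force (in newtons) exerted on a flat, circular crystal by an
    internal overpressure, F = delta_p * area."""
    delta_p = overpressure_bar * 1e5                 # 1 bar = 100 kPa
    radius_m = diameter_mm / 2 / 1000                # mm -> m
    area = math.pi * radius_m ** 2                   # crystal area in m^2
    return delta_p * area

# Assumed 30 mm crystal; overpressures of 0.5, 1 and 3 bar are illustrative only.
for dp in (0.5, 1.0, 3.0):
    force = crystal_force(dp, 30)
    print(f"{dp:>4.1f} bar trapped inside -> about {force:.0f} N on the crystal")
```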
Gallery References Swiss patent CH492246A Montre étanche, MONTRES ROLEX SA, ANDRE ZIBACH, 6 November 1967 Technical Perspective What Saturation Diving Really Means (And What Watchmakers Do About It) It's all about the helium, and not getting killed. Jack Forster, 11 July 2017, hodinkee.com Underwater diving equipment components
Helium release valve
[ "Technology" ]
1,471
[ "Components", "Underwater diving equipment components" ]
8,934,260
https://en.wikipedia.org/wiki/VirtualBox
Oracle VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox and InnoTek VirtualBox) is a hosted hypervisor for x86 virtualization developed by Oracle Corporation. VirtualBox was originally created by InnoTek Systemberatung GmbH, which was acquired by Sun Microsystems in 2008, which was in turn acquired by Oracle in 2010. VirtualBox may be installed on Microsoft Windows, macOS, Linux, Solaris and OpenSolaris. There are also ports to FreeBSD and Genode. It supports the creation and management of guest virtual machines running Windows, Linux, BSD, OS/2, Solaris, Haiku, and OSx86, as well as limited virtualization of guests on Apple hardware. For some guest operating systems, a "Guest Additions" package of device drivers and system applications is available, which typically improves performance, especially that of graphics, and allows changing the resolution of the guest OS automatically when the window of the virtual machine on the host OS is resized. Released under the terms of the GNU General Public License and, optionally, the CDDL for most files of the source distribution, VirtualBox is free and open-source software, though the Extension Pack is proprietary software, free of charge only to personal users. The License to VirtualBox was relicensed to GPLv3 with linking exceptions to the CDDL and other GPL-incompatible licenses. History VirtualBox was first offered by InnoTek Systemberatung GmbH, a German company based in Weinstadt, under a proprietary software license, making one version of the product available at no cost for personal or evaluation use, subject to the VirtualBox Personal Use and Evaluation License (PUEL). In January 2007, based on counsel by LiSoG, InnoTek released VirtualBox Open Source Edition (OSE) as free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2. InnoTek also contributed to the development of OS/2 and Linux support in virtualization and OS/2 ports of products from Connectix which were later acquired by Microsoft. Specifically, InnoTek developed the "additions" code in both Windows Virtual PC and Microsoft Virtual Server, which enables various host–guest OS interactions like shared clipboards or dynamic viewport resizing. Sun Microsystems acquired InnoTek in February 2008. Following the acquisition of Sun Microsystems by Oracle Corporation in January 2010, the product was re-branded as "Oracle VM VirtualBox". In December 2019, VirtualBox removed support for software-based virtualization and exclusively performs hardware-assisted virtualization. Release history Licensing The core package, since version 4 in December 2010, is free software under GNU General Public License version 2 (GPLv2). A supplementary package, under a proprietary license, adds support for USB 2.0 and 3.0 devices, Remote Desktop Protocol (RDP), disk encryption, NVMe, and Preboot Execution Environment (PXE). This package is called "VirtualBox Oracle VM VirtualBox extension pack". It includes closed-source components, so it is not source-available. The license is called Personal Use and Evaluation License (PUEL). It allows gratis access for personal use, educational use, and evaluation. Since VirtualBox version 5.1.30, Oracle defines personal use as installation on a single computer for non-commercial purposes. Prior to version 4, there were two different packages of the VirtualBox software. The full package was offered gratis under the PUEL, with licenses for other commercial deployment purchasable from Oracle. 
A second package called the VirtualBox Open Source Edition (OSE) was released under GPLv2. This removed the same proprietary components not available under GPLv2. , building the BIOS for VirtualBox requires the Open Watcom compiler, which is released under the Sybase Open Watcom Public License. The Open Source Initiative has approved this as "Open Source" but the Free Software Foundation and the Debian Free Software Guidelines do not consider it "free". VirtualBox has experimental support for macOS guests. However, macOS's end user license agreement does not permit running on non-Apple hardware. The operating system enforces this by calling the Apple System Management Controller (SMC), to verify the hardware's authenticity. All Apple machines have an SMC. Virtualization Users of VirtualBox can load multiple guest OSes under a single host operating-system (host OS). Each guest can be started, paused and stopped independently within its own virtual machine (VM). The user can independently configure each VM and run it under a choice of software-based virtualization or hardware assisted virtualization if the underlying host hardware supports this. The host OS and guest OSs and applications can communicate with each other through a number of mechanisms including a common clipboard and a virtualized network facility. Guest VMs can also directly communicate with each other if configured to do so. Hardware-assisted VirtualBox supports both Intel's VT-x and AMD's AMD-V hardware-assisted virtualization. Making use of these facilities, VirtualBox can run each guest VM in its own separate address-space; the guest OS ring 0 code runs on the host at ring 0 in VMX non-root mode rather than in ring 1. Starting with version 6.1, VirtualBox only supports this method. Until then, VirtualBox specifically supported some guests (including 64-bit guests, SMP guests and certain proprietary OSs) only on hosts with hardware-assisted virtualization. Devices and peripherals VirtualBox emulates hard disks in three formats: the native VDI (Virtual Disk Image), VMware's VMDK, and Microsoft's VHD. It thus supports disks created by other hypervisor software. VirtualBox can also connect to iSCSI targets and to raw partitions on the host, using either as virtual hard disks. VirtualBox emulates IDE (PIIX4 and ICH6 controllers), SCSI, SATA (ICH8M controller), and SAS controllers, to which hard drives can be attached. VirtualBox has supported Open Virtualization Format (OVF) since version 2.2.0 (April 2009). Both ISO images and physical devices connected to the host can be mounted as CD or DVD drives. VirtualBox supports running operating systems from live CDs and DVDs. By default, VirtualBox provides graphics support through a custom virtual graphics-card that is VBE or UEFI GOP compatible. The Guest Additions for Windows, Linux, Solaris, OpenSolaris, and OS/2 guests include a special video-driver that increases video performance and includes additional features, such as automatically adjusting the guest resolution when resizing the VM window and desktop composition via virtualized WDDM drivers. 
For an Ethernet network adapter, VirtualBox virtualizes these Network Interface Cards: AMD PCnet PCI II (Am79C970A) AMD PCnet-Fast III (Am79C973) Intel Pro/1000 MT Desktop (82540EM) Intel Pro/1000 MT Server (82545EM) Intel Pro/1000 T Server (82543GC) Paravirtualized network adapter (virtio-net) The emulated network cards allow most guest OSs to run without the need to find and install drivers for networking hardware as they are shipped as part of the guest OS. A special paravirtualized network adapter is also available, which improves network performance by eliminating the need to match a specific hardware interface, but requires special driver support in the guest. (Many distributions of Linux ship with this driver included.) By default, VirtualBox uses NAT through which Internet software for end-users such as Firefox or ssh can operate. Bridged networking via a host network adapter or virtual networks between guests can also be configured. Up to 36 network adapters can be attached simultaneously, but only four are configurable through the graphical interface. For a sound card, VirtualBox virtualizes Intel HD Audio, Intel ICH AC'97, and SoundBlaster 16 devices. A USB 1.1 controller is emulated, so that any USB devices attached to the host can be seen in the guest. The proprietary extension pack adds a USB 2.0 or USB 3.0 controller and, if VirtualBox acts as an RDP server, it can also use USB devices on the remote RDP client, as if they were connected to the host, although only if the client supports this VirtualBox-specific extension (Oracle provides clients for Solaris, Linux, and Sun Ray thin clients that can do this, and has promised support for other platforms in future versions). Software-based In the absence of hardware-assisted virtualization, versions 6.0.24 and earlier of VirtualBox could adopt a standard software-based virtualization approach. This mode supports 32-bit guest operating systems which run in rings 0 and 3 of the Intel ring architecture. The system reconfigures the guest OS code, which would normally run in ring 0, to execute in ring 1 on the host hardware. Because this code contains many privileged instructions which cannot run natively in ring 1, VirtualBox employs a Code Scanning and Analysis Manager (CSAM) to scan the ring 0 code recursively before its first execution to identify problematic instructions and then calls the Patch Manager (PATM) to perform in-situ patching. This replaces the instruction with a jump to a VM-safe equivalent compiled code fragment in hypervisor memory. The guest user-mode code, running in ring 3, generally runs directly on the host hardware in ring 3. In both cases, VirtualBox uses CSAM and PATM to inspect and patch the offending instructions whenever a fault occurs. VirtualBox also contains a dynamic recompiler, based on QEMU to recompile any real mode or protected mode code entirely (e.g. BIOS code, a DOS guest, or any operating system startup). Using these techniques, VirtualBox could achieve performance comparable to that of VMware in its later versions. The feature was dropped starting with VirtualBox 6.1. Features Snapshots of the RAM and storage that allow reverting to a prior state. Screenshots and screen video capture "Host key" for releasing the keyboard and mouse cursor to the host system if captured (coupled) to the guest system, and for keyboard shortcuts to features such as configuration, restarting, and screenshot. By default, it is the right-side key, or on Mac, the left key. 
Mouse pointer integration, meaning automatic coupling and uncoupling of mouse cursor when moved inside and outside the virtual screen, if supported by guest operating system. Seamless mode – the ability to run virtualized applications side by side with normal desktop applications Shared clipboard Shared folders through "guest additions" software Special drivers and utilities to facilitate switching between systems Ability to specify amount of shared RAM, video memory, and CPU execution cap Ability to emulate multiple screens Command line interaction (in addition to the GUI) Public API (Java, Python, SOAP, XPCOM) to control VM configuration and execution Nested paging for AMD-V and Intel VT (only for processors supporting SLAT and with SLAT enabled) Limited support for 3D graphics acceleration (including OpenGL up to (but not including) 3.0 and Direct3D 9.0c via Wine's Direct3D to OpenGL translation in versions prior to 7.0 or DXVK in later releases) SMP support (up to 32 virtual CPUs per virtual machine), since version 3.0 Teleportation (aka Live Migration) 2D video output acceleration (not to be mistaken with video decoding acceleration), since version 3.1 EFI has been supported since version 3.1 (Windows 7 guests are not supported) Storage emulation Ability to mount virtual hard disk drives and disk images. Virtual optical disc images can be used for booting and sharing files to guest systems lacking networking support. NCQ support for SATA, SCSI and SAS raw disks and partitions SATA disk hotplugging Pass-through mode for solid-state drives Pass-through mode for CD/DVD/BD drives – allows users to play audio CDs, burn optical disks, and play encrypted DVD discs Can disable host OS I/O cache Allows limitation of IO bandwidth PATA, SATA, SCSI, SAS, iSCSI, floppy disk controllers VM disk image encryption using AES128/AES256 Storage support includes: Raw hard disk access – allows physical hard disk partitions on the host system to appear in the guest system VMware Virtual Machine Disk (VMDK) format support – allows exchange of disk images with VMware Microsoft VHD support QEMU qed and qcow disks HDD format disks (only version 2; versions 3 and 4 are not supported) used by Parallels virtualization products Limitations 3D graphics acceleration for Windows guests earlier than Windows 7 was removed in version 6.1. This affected Windows XP and Windows Vista. VirtualBox has a very low transfer rate to and from USB2 devices. For USB3 equipment, device pass-through does not work in older guest OSes, such as Windows Vista and Windows XP, which lack appropriate drivers. However, since version 5.0, VirtualBox has added an experimental USB3 controller (the Renesas uPD720201 xHCI), which enables USB3 in these operating systems. This requires editing some configuration files. Guest Additions for macOS are unavailable at this time. Native Guest Additions for Windows 9x (Windows 95, 98 and ME) are not available. This results in poor performance due to the lack of graphics acceleration with the default limited color depth. External third-party software is available to enable support for 32-bit color mode, resulting in better performance. EFI support is incomplete, e.g. EFI boot for a Windows 7 guest is not supported. Only older versions of DirectX and OpenGL pass-through are supported (the feature can be enabled using the 3D Acceleration option for each VM individually). 
Video RAM is limited to 128 MiB (256 MiB with 2D Video Acceleration enabled) due to technical difficulties (merely changing the GUI to allow the user to allocate more video RAM to a VM or manually editing the configuration file of a VM won't work and will result in a fatal error). Windows 95/98/98SE/ME cannot be installed or work unreliably with modern CPUs (AMD Zen and newer; Intel Tiger Lake and newer) and hardware assisted virtualization (VirtualBox 6.1 and higher). This is due to these OSes not being coded correctly. An open source patch has been developed to fix the issue which also addresses Windows 95/98/98SE bug which makes the system crash when running on new fast CPUs. VirtualBox 7.0 and later is required to run a pristine Windows 11 guest. Full compatibility with Windows 11 is achieved in VirtualBox version 7.0.14 and higher. Host OS The supported operating systems include: Windows 10 64-bit and higher. Support for 64-bit Windows was added with VirtualBox 1.5. Support for 32-bit Windows was removed in 6.0. Support for Windows 2000 was removed in version 1.6. Support for Windows XP was removed in version 5.0. Support for Windows Vista was removed in version 5.2. Support for Windows 7 (64-bit) was removed in version 6.1. Support for Windows 8 (64-bit) was removed in version 7.0. Support for Windows 8.1 (64-bit) was removed in version 7.1. Windows Server 2019 and higher. Support for Windows Server 2003 was removed in 5.0. Support for Windows Server 2008 was removed in 6.0. Support for Windows Server 2008 R2 was removed in version 7.0. Support for Windows Server 2012 and 2016 was removed in version 7.1. Linux distributions macOS from version 11 (Big Sur) to 14 (Sonoma) both ARM and Intel versions: Preliminary Mac OS X support (beta stage) was added with VirtualBox 1.4, full support with 1.6. Support for Mac OS X 10.4 (Tiger) and earlier was removed with VirtualBox 3.1. Support for Mac OS X 10.5 (Leopard) was removed with VirtualBox 4.2. Support for Mac OS X 10.6 (Snow Leopard) and 10.7 (Lion) was removed with VirtualBox 5.0. Support for Mac OS X 10.8 (Mountain Lion) was removed with VirtualBox 5.1. Support for Mac OS X 10.9 (Mavericks) was removed with VirtualBox 5.2. Support for Mac OS X 10.10 (Yosemite) and OS X 10.11 (El Capitan) was removed with VirtualBox 6.0. Support for macOS 10.12 (Sierra) was officially removed with VirtualBox 6.1 (as of 6.1.16 it will still install and run, however). Support for macOS 10.13 (High Sierra) and macOS 10.14 (Mojave) was officially removed with VirtualBox 7.0. Support for macOS 10.15 (Catalina) was officially removed with VirtualBox 7.1. Oracle Solaris Guest additions Some features require the installation of the closed-source "VirtualBox Extension Pack": Support for a virtual USB 2.0/3.0 controller (EHCI/xHCI) (Starting with VirtualBox 7.0, this functionality was integrated into the GPL version instead.) VirtualBox RDP: support for the proprietary remote connection protocol developed by Microsoft and Citrix Systems. PXE boot for Intel cards. VM disk image encryption Webcam support While VirtualBox itself is free to use and is distributed under an open source license the VirtualBox Extension Pack is licensed under the VirtualBox Personal Use and Evaluation License (PUEL). Personal use of the extension pack is free but commercial users need to purchase a license. Guest Additions are installed within each guest virtual machine which supports them; the Extension Pack is installed on the host running VirtualBox. 
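As a concrete companion to the command-line interaction and scripting APIs mentioned under Features, the following Python sketch drives a few common VBoxManage operations through subprocess. The VM name, OS type string and resource figures are arbitrary examples, the snippet assumes VBoxManage is on the PATH, and it is only an illustrative use of the documented commands, not code from VirtualBox itself.

```python
import subprocess

VM_NAME = "demo-vm"  # arbitrary example name

def vbox(*args: str) -> str:
    """Invoke VBoxManage with the given arguments and return its output."""
    result = subprocess.run(["VBoxManage", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# List the virtual machines registered with this VirtualBox installation.
print(vbox("list", "vms"))

# Create and register a new VM (the OS type string is an example value).
vbox("createvm", "--name", VM_NAME, "--ostype", "Ubuntu_64", "--register")

# Give it 2 GiB of RAM and 2 virtual CPUs.
vbox("modifyvm", VM_NAME, "--memory", "2048", "--cpus", "2")

# Start the VM without a GUI window ("headless" mode).
vbox("startvm", VM_NAME, "--type", "headless")
```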
See also Comparison of platform virtualization software VMware Workstation OS-level virtualization x86 virtualization References External links Oracle Oracle Cloud Articles containing video clips Cross-platform free software Free emulation software Free software programmed in C++ Free virtualization software Platform virtualization software Software derived from or incorporating Wine Software that uses Qt Sun Microsystems software Virtualization software for Linux Cloud infrastructure Oracle Cloud Services
VirtualBox
[ "Technology" ]
3,886
[ "Cloud infrastructure", "IT infrastructure" ]
8,934,267
https://en.wikipedia.org/wiki/Edge-localized%20mode
An edge-localized mode (ELM) is a plasma instability occurring in the edge region of a tokamak plasma due to periodic relaxations of the edge transport barrier in high-confinement mode. Each ELM burst is associated with expulsion of particles and energy from the confined plasma into the scrape-off layer. This phenomenon was first observed in the ASDEX tokamak in 1981. Diamagnetic effects in the model equations expand the size of the parameter space in which solutions of repeated sawteeth can be recovered compared to a resistive MHD model. An ELM can expel up to 20 percent of the reactor's energy. Issues ELM is a major challenge in magnetic fusion research with tokamaks, as these instabilities can: Damage wall components (in particular divertor plates) by ablating them away due to their extremely high energy transfer rate (GW/m2); Potentially couple or trigger other instabilities, such as the resistive wall mode (RWM) or the neoclassical tearing mode (NTM). Prevention and control A variety of experiments/simulations have attempted to mitigate damage from ELMs. Techniques include: Applying resonant magnetic perturbations (RMPs) with in-vessel current-carrying coils, which can eliminate or weaken ELMs. Injecting pellets to increase the frequency and thereby decrease the severity of ELM bursts (ASDEX Upgrade). Triggering multiple small-scale ELMs (thousands per second) in tokamaks to prevent the creation of large ones, spreading the associated heat over a larger area and interval. Increasing the plasma density and, at high densities, adjusting the topology of the magnetic field lines confining the plasma. History In 2003 DIII-D began experimenting with resonant magnetic perturbations to control ELMs. In 2006 an initiative (Project Aster) was started to simulate a full ELM cycle including its onset, the highly non-linear phase, and its decay. However, this did not constitute a "true" ELM cycle, since a true ELM cycle would require modeling the slow growth after the crash, in order to produce a second ELM. As of late 2011, several research facilities had demonstrated active control or suppression of ELMs in tokamak plasmas. For example, the KSTAR tokamak used specific asymmetric three-dimensional magnetic field configurations to achieve this goal. In 2015, results of the first simulation to demonstrate repeated ELM cycling were published. In 2022, researchers began testing the small-ELM hypothesis at JET to assess the utility of the technique. See also Resonant magnetic perturbations, used to control ELMs Plasma instability Tokamak References Further reading Plasma instabilities
Edge-localized mode
[ "Physics" ]
553
[ "Plasma phenomena", "Physical phenomena", "Plasma instabilities" ]
8,935,386
https://en.wikipedia.org/wiki/Joint%20Institute%20for%20Nuclear%20Astrophysics
The Joint Institute for Nuclear Astrophysics Center for the Evolution of the Elements (JINA-CEE) is a multi-institutional Physics Frontiers Center funded by the US National Science Foundation since 2014. From 2003 to 2014, JINA was a collaboration between Michigan State University, the University of Notre Dame, the University of Chicago, and directed by Michael Wiescher from the University of Notre Dame. Principal investigators were Hendrik Schatz, Timothy Beers and Jim Truran. JINA-CEE is a collaboration between Michigan State University, the University of Notre Dame, University of Washington and Arizona State University and a number of associated institutions, centers, and national laboratories in the US and across the world, with the goal to bring together nuclear experimenters, nuclear theorists, astrophysical modelers, astrophysics theorists, and observational astronomers to address the open scientific questions at the intersection of nuclear physics and astrophysics. JINA-CEE serves as an intellectual center and focal point for the field of nuclear astrophysics, and is intended to enable scientific work and exchange of data and information across field boundaries within its collaboration, and for the field as a whole though workshops, schools, and web-based tools and data bases. It is led by director Hendrik Schatz with Michael Wiescher, Timothy Beers, Sanjay Reddy and Frank Timmes as principal investigators. Most JINA-CEE nuclear physics experiments are carried out at the Nuclear Science Laboratory at the University of Notre Dame, the National Superconducting Cyclotron Laboratory at Michigan State University and the ATLAS/CARIBOU facility at Argonne National Laboratory. JINA-CEE is heavily involved in observations with the Apache Point Observatory within the framework of extensions to the Sloan Digital Sky Survey, LAMOST in China, SkyMapper in Australia, and the Hubble Space Telescope. Among many other observational data, JINA-CEE also uses heavily X-ray observational data from BeppoSAX, RXTE, Chandra, XMM-Newton, and INTEGRAL. JINA stimulated the development of similar centers in other countries, and collaborates with a number of multi-institutional nuclear astrophysics centers in Germany, including NAVI, EMMI and the Universe Cluster in Munich. REACLIB Database One of the many projects of JINA-CEE is the maintenance of an up-to-date nuclear reaction rate library called REACLIB. REACLIB contains over 75,000 thermonuclear reaction rates. Virtual Journals Nuclear astrophysics is made of many overlapping disciplines, spanning fields in astronomy, astrophysics and nuclear physics. In order to understand the origin of the elements, or the evolution and deaths of stars in galaxies, quite a broad base of knowledge is required. JINA-CEE created two virtual journals in order to meet the need for coverage of this broad-based information. The JINA Virtual Journal debuted in 2003, and reviews a broad realm of nuclear astrophysics, followed by the SEGUE Virtual Journal in 2006, focusing more on Galactic Chemical and Structural evolution. Each week, the editors search almost 40 refereed journals for newly published articles. Editors review the articles, flagging those that are relevant, and categorize them into their respective subjects (which are searchable by individual users). When the virtual journals are published, an email notification is sent to subscribers informing them of the newly available selections from the Virtual Journals. 
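The REACLIB library mentioned above stores each reaction rate as a small number of fit coefficients. A widely used form of this parameterization expresses the rate as exp(a0 + a1/T9 + a2/T9^(1/3) + a3·T9^(1/3) + a4·T9 + a5·T9^(5/3) + a6·ln T9), with T9 the temperature in units of 10^9 K. The Python sketch below evaluates that functional form; the seven coefficients used are placeholders added for illustration, not values from the actual database, and full library rates are typically sums of several such coefficient sets.

```python
import math
from typing import Sequence

def reaclib_rate(a: Sequence[float], t9: float) -> float:
    """Evaluate one seven-coefficient REACLIB-style rate set at temperature
    t9 (in units of 10^9 K)."""
    if len(a) != 7:
        raise ValueError("expected exactly seven coefficients")
    exponent = (a[0]
                + a[1] / t9
                + a[2] / t9 ** (1.0 / 3.0)
                + a[3] * t9 ** (1.0 / 3.0)
                + a[4] * t9
                + a[5] * t9 ** (5.0 / 3.0)
                + a[6] * math.log(t9))
    return math.exp(exponent)

# Placeholder coefficients, purely for illustration (not from the database).
coeffs = [1.0, -0.5, -3.0, 2.0, -0.1, 0.01, -1.5]
for t9 in (0.1, 1.0, 3.0):
    print(f"T9 = {t9:>4.1f}  ->  rate ≈ {reaclib_rate(coeffs, t9):.3e}")
```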
Education Education, outreach, and creating inclusive environments are high priorities for JINA-CEE. JINA-CEE has a multitude of educational and outreach programs aimed at attracting young people to science careers, research training, and disseminating research findings to the public. Educational programs target audiences ranging from K-12 to Graduate Students and Postdocs. References External links Official JINA website JINA DIANA/SURF Nuclear Astrophysics Group website JINA FRIB Nuclear Astrophysics Group website JINA SDSS-II Nuclear Astrophysics Group website Full list of Associated and Participating Institutions JINA Educational programs Research institutes in the United States Astrophysics research institutes
Joint Institute for Nuclear Astrophysics
[ "Physics" ]
826
[ "Astrophysics research institutes", "Astrophysics" ]
8,936,239
https://en.wikipedia.org/wiki/Aggresome
In eukaryotic cells, an aggresome refers to an aggregation of misfolded proteins in the cell, formed when the protein degradation system of the cell is overwhelmed. Aggresome formation is a highly regulated process that possibly serves to organize misfolded proteins into a single location. Biogenesis Correct folding requires proteins to assume one particular structure from a constellation of possible but incorrect conformations. The failure of polypeptides to adopt their proper structure is a major threat to cell function and viability. Consequently, elaborate systems have evolved to protect cells from the deleterious effects of misfolded proteins. Upon synthesis, proteins are in their linear and non-functional form, called a nascent protein. They must undergo co-translational folding as quickly as possible in order to become a functional, three-dimensional structure. Normally folded proteins are referred to as being in their native structure. In this state, they have undergone a hydrophobic collapse process, indicated by outward-facing hydrophilic components and inward-facing hydrophobic components. The solubility of proteins is an important biochemical aspect of protein folding as it has been shown to affect the formation of protein aggregates. Contrary to native structures, a misfolded protein will often have outward-facing hydrophobic regions which acts as an attractant to other insoluble proteins. There are some chaperones which identify aggregates by recognizing their hydrophobic region. These chaperone may work as solubilizers. Cells mainly deploy three mechanisms to counteract misfolded proteins: up-regulating chaperones to assist protein refolding, proteolytic degradation of the misfolded/damaged proteins involving ubiquitin–proteasome and autophagy–lysosome systems, and formation of detergent-insoluble aggresomes by transporting the misfolded proteins along microtubules to a region near the nucleus. Intracellular deposition of misfolded protein aggregates into ubiquitin-rich cytoplasmic inclusions is linked to the pathogenesis of many diseases. Functional blockade of either degradative system leads to an enhanced aggresome formation. Why these aggregates form despite the existence of cellular machinery to recognize and degrade misfolded protein, and how they are delivered to cytoplasmic inclusions, are not known. Aggresome formation is accompanied by redistribution of the intermediate filament protein vimentin to form a cage surrounding a pericentriolar core of aggregated, ubiquitinated protein. Disruption of microtubules blocks the formation of aggresomes. Similarly, inhibition of proteasome function also prevents the degradation of unassembled presenilin-1 (PSE1) molecules leading to their aggregation and deposition in aggresomes. Aggresome formation is a general response of cells which occurs when the capacity of the proteasome is exceeded by the production of aggregation-prone misfolded proteins. Typically, an aggresome forms in response to a cellular stress which generates a large amount of misfolded or partially denatured protein: hyperthermia, overexpression of an insoluble or mutant protein, etc. The formation of the aggresome is largely believed to be a protective response, sequestering potentially cytotoxic aggregates and also acting as a staging center for eventual autophagic clearance from the cell. An aggresome forms around the microtubule organizing center in eukaryotic cells, adjacent to or enveloping the cell's centrosomes. 
Polyubiquitination tags the protein for retrograde transport via HDAC6 binding and the microtubule-based motor protein dynein. Moreover, substrates can also be targeted to the aggresome by a ubiquitin-independent pathway mediated by the stress-induced co-chaperone BAG3 (Bcl-2-associated athanogene 3), which transfers misfolded protein substrates bound to HSP70 (heat-shock protein 70) directly onto the microtubule motor dynein. The protein aggregate is then transported along the microtubule and unloaded via the ATPase p97, forming the aggresome. Mediators such as p62 are believed to be involved in aggresome formation by sequestering omegasomes, which bind to and increase the size of the aggresome. The aggresome is eventually targeted for autophagic clearance from the cell. Some pathological proteins, such as alpha-synuclein, cannot be degraded and cause the aggresomes to form inclusion bodies (in Parkinson's disease, Lewy bodies), which contribute to neuronal dysfunction and death. Triggering aggresome formation Abnormal polypeptides that escape proteasome-dependent degradation and aggregate in the cytosol can be transported via microtubules to an aggresome, a recently discovered organelle where aggregated proteins are stored or degraded by autophagy. Synphilin 1, a protein implicated in Parkinson's disease, was used as a model to study mechanisms of aggresome formation. When expressed in naïve HEK293 cells, synphilin 1 forms multiple small, highly mobile aggregates. However, proteasome or Hsp90 inhibition rapidly triggered their translocation into the aggresome, and surprisingly, this response was independent of the expression level of synphilin 1. Therefore, aggresome formation, but not aggregation of synphilin 1, represents a special cellular response to a failure of the proteasome/chaperone machinery. Importantly, translocation to aggresomes required a special aggresome-targeting signal within the sequence of synphilin 1, an ankyrin-like repeat domain. On the other hand, formation of multiple small aggregates required an entirely different segment within synphilin 1, indicating that aggregation and aggresome formation determinants can be separated genetically. Furthermore, substitution of the ankyrin-like repeat in synphilin 1 with an aggresome-targeting signal from huntingtin was sufficient for aggresome formation upon inhibition of the proteasome. Analogously, attachment of the ankyrin-like repeat to a huntingtin fragment lacking its aggresome-targeting signal promoted its transport to aggresomes. These findings indicate the existence of transferable signals that target aggregation-prone polypeptides to aggresomes. Human disease Accumulation of misfolded proteins in proteinaceous inclusions is common to many age-related neurodegenerative diseases, including Parkinson's disease, Alzheimer's disease, Huntington's disease, and amyotrophic lateral sclerosis. In cultured cells, when the production of misfolded proteins exceeds the capacity of the chaperone refolding system and the ubiquitin-proteasome degradation pathway, misfolded proteins are actively transported to a cytoplasmic juxtanuclear structure called an aggresome. Whether aggresomes are benevolent or noxious is unknown, but they are of particular interest because of the appearance of similar inclusions in protein deposition diseases. Evidence shows that aggresomes serve a cytoprotective function and are associated with accelerated turnover of mutant proteins.
Experiments show that mutant androgen receptor (AR), the protein responsible for X-linked spinobulbar muscular atrophy, forms insoluble aggregates and is toxic to cultured cells. Mutant AR was also found to form aggresomes in a process distinct from aggregation. Molecular and pharmacological interventions were used to disrupt aggresome formation, revealing the cytoprotective function of aggresomes. Aggresome-forming proteins were found to have an accelerated rate of turnover, and this turnover was slowed by inhibition of aggresome formation. Finally, it was shown that aggresome-forming proteins become membrane-bound and associate with lysosomal structures. Together, these findings suggest that aggresomes are cytoprotective, serving as cytoplasmic recruitment centers to facilitate degradation of toxic proteins. Proteins implicated in aggresome formation Several proteins are implicated in aggresome formation; for each, the wild-type protein localizes to particular inclusion bodies, and mutation of some of them is linked to disease. Histone deacetylase 6 functions as a deacetylase and adaptor protein; the wild-type protein localizes to Lewy bodies, and no disease-associated mutation has been linked to it. Parkin functions as a ubiquitin-protein ligase; the wild-type protein localizes to Lewy bodies, and mutation of the protein is linked to Parkinson's disease. Ataxin-3 functions as a deubiquitinating enzyme; the wild-type protein localizes to the intranuclear inclusions of SCA types 1 and 2 and DRPLA, and mutation of the protein is linked to SCA type 3. The dynein motor complex functions as the retrograde microtubule motor; the inclusion bodies to which the wild-type protein localizes are unknown, and mutation is linked to motor neuron degeneration. Ubiquilin-1 functions in protein turnover and intracellular trafficking; the wild-type protein localizes to Lewy bodies and neurofibrillary tangles, and it is a potential risk factor for Alzheimer's disease. Cystic fibrosis Cystic fibrosis transmembrane conductance regulator (CFTR) is an inefficiently folded integral membrane protein that is degraded by the cytoplasmic ubiquitin-proteasome pathway. Overexpression of CFTR, or inhibition of proteasome activity, in transfected human embryonic kidney cells or Chinese hamster ovary cells leads to the accumulation of stable, high molecular weight, detergent-insoluble, multi-ubiquitinated forms of CFTR. Undegraded CFTR molecules accumulate at a distinct pericentriolar aggresome. Role of the aggresome pathway in cancer There is emerging evidence that inhibiting the aggresome pathway leads to accumulation of misfolded proteins and apoptosis in tumor cells through autophagy. See also JUNQ and IPOD References Further reading Aggresomes: A Cellular Response to Misfolded Proteins Aggresomes protect cells by enhancing the degradation of toxic polyglutamine-containing protein Aggresome Formation and Neurodegenerative Diseases: Therapeutic Implications Role of the Aggresome Pathway in Cancer: Targeting Histone Deacetylase 6–Dependent Protein Degradation Cell biology
Aggresome
[ "Biology" ]
2,212
[ "Cell biology" ]
8,936,720
https://en.wikipedia.org/wiki/Table%20of%20nuclides
A table or chart of nuclides is a two-dimensional graph of isotopes of the elements, in which one axis represents the number of neutrons (symbol N) and the other represents the number of protons (atomic number, symbol Z) in the atomic nucleus. Each point plotted on the graph thus represents a nuclide of a known or hypothetical chemical element. This system of ordering nuclides can offer a greater insight into the characteristics of isotopes than the better-known periodic table, which shows only elements and not their isotopes. The chart of the nuclides is also known as the Segrè chart, after the Italian physicist Emilio Segrè. Description and utility A chart or table of nuclides maps the nuclear, or radioactive, behavior of nuclides, as it distinguishes the isotopes of an element. It contrasts with a periodic table, which only maps their chemical behavior, since isotopes (nuclides that are variants of the same element) do not differ chemically to any significant degree, with the exception of hydrogen. Nuclide charts organize nuclides along the X axis by their numbers of neutrons and along the Y axis by their numbers of protons, out to the limits of the neutron and proton drip lines. This representation was first published by Kurt Guggenheimer in 1934 and expanded by Giorgio Fea in 1935, Emilio Segrè in 1945 or Glenn Seaborg. In 1958, Walter Seelmann-Eggebert and Gerda Pfennig published the first edition of the Karlsruhe Nuclide Chart. Its 7th edition was made available in 2006. Today, there are several nuclide charts, four of which have a wide distribution: the Karlsruhe Nuclide Chart, the Strasbourg Universal Nuclide Chart, the Chart of the Nuclides from the Japan Atomic Energy Agency (JAEA), and the Nuclide Chart from Knolls Atomic Power Laboratory in the United States. It has become a basic tool of the nuclear community. Trends in the chart of nuclides The trends in this section refer to the following chart, which shows Z increasing to the right and N increasing downward, a 90° clockwise rotation of the above landscape-orientation charts. Isotopes are nuclides with the same number of protons but differing numbers of neutrons; that is, they have the same atomic number and are therefore the same chemical element. Isotopes neighbor each other vertically. Examples include carbon-12, carbon-13, and carbon-14 in the table above. Isotones are nuclides with the same number of neutrons but differing numbers of protons. Isotones neighbor each other horizontally. Examples include carbon-14, nitrogen-15, and oxygen-16 in the table above. Isobars are nuclides with the same number of nucleons (i.e. mass number) but different numbers of protons and neutrons. Isobars neighbor each other diagonally from lower-left to upper-right. Examples include carbon-14, nitrogen-14, and oxygen-14 in the table above. Isodiaphers are nuclides with the same difference between their numbers of neutrons and protons (N − Z). Like isobars, they follow diagonal lines, but at right angles to the isobar lines (from upper-left to lower-right). Examples include boron-10, carbon-12, and nitrogen-14 (as N − Z = 0 for each pair), or boron-12, carbon-14, and nitrogen-16 (as N − Z = 2 for each pair). Beyond the neutron drip line along the lower left, nuclides decay by neutron emission. Beyond the proton drip line along the upper right, nuclides decay by proton emission. Drip lines have only been established for some elements. 
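The neighbor relationships described above are simple arithmetic functions of the proton number Z and neutron number N, so they can be illustrated with a short Python sketch. This is an illustrative helper written for this article, not part of any nuclide-chart software; the example nuclides are the ones mentioned above.

```python
def classify_pair(z1, n1, z2, n2):
    """Classify the relationship between two nuclides given their
    proton numbers (Z) and neutron numbers (N)."""
    relations = []
    if z1 == z2 and n1 != n2:
        relations.append("isotopes")      # same Z, different N
    if n1 == n2 and z1 != z2:
        relations.append("isotones")      # same N, different Z
    if (z1, n1) != (z2, n2):
        if z1 + n1 == z2 + n2:
            relations.append("isobars")       # same mass number A = Z + N
        if n1 - z1 == n2 - z2:
            relations.append("isodiaphers")   # same neutron excess N - Z
    return relations or ["identical" if (z1, n1) == (z2, n2) else "none"]

# Examples drawn from the text (carbon-14 is Z=6, N=8, and so on):
print(classify_pair(6, 6, 6, 8))   # C-12 vs C-14 -> ['isotopes']
print(classify_pair(6, 8, 7, 8))   # C-14 vs N-15 -> ['isotones']
print(classify_pair(6, 8, 8, 6))   # C-14 vs O-14 -> ['isobars']
print(classify_pair(5, 5, 7, 7))   # B-10 vs N-14 -> ['isodiaphers']
```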
The island of stability is a hypothetical region in the top right cluster of nuclides that contains isotopes far more stable than those of other transuranic elements. There are no stable nuclides having an equal number of protons and neutrons in their nuclei with atomic number greater than 20 (i.e. calcium), as can be readily observed from the chart. Nuclei of greater atomic number require an excess of neutrons for stability. The only stable nuclides having an odd number of protons and an odd number of neutrons are hydrogen-2, lithium-6, boron-10, nitrogen-14 and (observationally) tantalum-180m. This is because the mass–energy of such atoms is usually higher than that of their neighbors on the same isobaric chain, so most of them are unstable to beta decay. There are no stable nuclides with mass numbers 5 or 8. There are stable nuclides with all other mass numbers up to 208, with the exceptions of 147 and 151, which are represented by the very long-lived samarium-147 and europium-151. (Bismuth-209 was found to be radioactive in 2003, but with an extremely long half-life.) With the exception of the pair tellurium-123 and antimony-123, odd mass numbers are never represented by more than one stable nuclide. This is because the mass–energy is a convex function of atomic number, so all nuclides on an odd isobaric chain except one have a lower-energy neighbor to which they can decay by beta decay. See the Mattauch isobar rule. (123Te is expected to decay to 123Sb, but the half-life appears to be so long that the decay has never been observed.) There are no stable nuclides having atomic number greater than Z = 82 (lead), although bismuth (Z = 83) is stable for all practical human purposes and thorium (Z = 90) and uranium (Z = 92) are sufficiently long-lived to occur on Earth in large quantities. Elements with atomic numbers from 1 to 82 all have stable isotopes, with the exceptions of technetium (Z = 43) and promethium (Z = 61). Tables For convenience, three different views of the data are available on Wikipedia: two sets of "segmented tables", and a single "unitized table (all elements)". The unitized table allows easy visualization of proton/neutron-count trends but requires simultaneous horizontal and vertical scrolling. The segmented tables permit easier examination of a particular chemical element with much less scrolling. Links are provided to quickly jump between the different sections. Segmented tables Table of nuclides (segmented, narrow) Table of nuclides (segmented, wide) Full table The nuclide table below shows nuclides (often loosely called "isotopes", but this term properly refers to nuclides with the same atomic number, see above), including all with a half-life of at least one day. They are arranged with increasing atomic numbers from left to right and increasing neutron numbers from top to bottom. Cell color denotes the half-life of each nuclide; if a border is present, its color indicates the half-life of the most stable nuclear isomer. In graphical browsers, each nuclide also has a tool tip indicating its half-life. Each color represents a certain range of length of half-life, and the color of the border indicates the half-life of its nuclear isomer state. Some nuclides have multiple nuclear isomers, and this table notes the one with the longest half-life. Dotted borders mean that a nuclide has a nuclear isomer with a half-life in the same range as the ground state nuclide. The dashed lines between several nuclides of the first few elements are the experimentally determined proton and neutron drip lines.
References External links Chart of the Nuclides 2014 (Japan Atomic Energy Agency) Interactive Chart of Nuclides (Brookhaven National Laboratory) Karlsruhe Nuclide Chart – New 10th edition 2018 Nucleonica web driven nuclear science IAEA Live Chart of Nuclides app for mobiles: Android or Apple – for PC use The Live Chart of Nuclides - IAEA The Colourful Nuclide Chart, by Edward Simpson of Australian National University Nuclide chart (EnergyEducation.ca) Another example of a Chart of Nuclides from Korea Data up to Jan 1999 only Tables of nuclides
Table of nuclides
[ "Chemistry" ]
1,721
[ "Tables of nuclides", "Isotopes" ]
8,936,958
https://en.wikipedia.org/wiki/Magnesium%20bromide
Magnesium bromides are inorganic compounds with the chemical formula MgBr2(H2O)x, where x can range from 0 to 9. They are all white deliquescent solids. Some magnesium bromides have been found naturally in rare minerals such as bischofite and carnallite. Synthesis Magnesium bromide can be synthesized by treating magnesium oxide (and related basic salts) with hydrobromic acid. It can also be made by reacting magnesium carbonate with hydrobromic acid and collecting the solid left after evaporation. As suggested by its easy conversion to various hydrates, anhydrous MgBr2 is a Lewis acid. In the coordination polymer with the formula MgBr2(dioxane)2, Mg2+ adopts an octahedral geometry. Uses and reactions Magnesium bromide is used as a Lewis acid catalyst in some organic syntheses, e.g., in the aldol reaction. Magnesium bromide also has been used as a tranquilizer and as an anticonvulsant for treatment of nervous disorders. Magnesium bromide modifies the catalytic properties of palladium on charcoal. Magnesium bromide hexahydrate has flame-retardant properties. Treatment of magnesium bromide with chlorine gives magnesium chloride and bromine; this reaction is employed in the production of bromine from brines. Structure Two hydrates are known, the hexahydrate and the nonahydrate. Several reports claim a decahydrate, but X-ray crystallography confirmed that it is a nonahydrate. The hydrates feature [Mg(H2O)6]2+ ions. References Bromides Magnesium compounds Alkaline earth metal halides
Magnesium bromide
[ "Chemistry" ]
341
[ "Bromides", "Salts" ]
8,936,967
https://en.wikipedia.org/wiki/Hydraulic%20clearance
Hydraulic clearance refers to the narrow gaps between mating parts in hydraulic components. Flow in narrow clearances is of vital importance in hydraulic system component design. The flow in a narrow circular clearance of a spool valve can be calculated according to the formula below if the clearance height is negligible compared to the width of the clearance, as is the case for most of the clearances in hydraulic pumps, hydraulic motors, and spool valves. Flow is considered to be laminar. The formula below is valid for a spool valve when the spool is steady. Concentric spool/valve housing position, i.e. the height/radial clearance c is the same all around. Units as per SI conventions: Flow Qi = (∆P · π · d · c³) ÷ (12 · ν · ρ · L) where: Qi = volumetric flow rate (m^3/sec) ΔP = P1 − P2 = pressure drop over the clearance (N/m^2, Pa) d = valve spool diameter (metre) c = clearance height (radial clearance) (metre) ν = kinematic viscosity of the oil (m^2/sec) ρ = density of the oil (kg/m^3) L = clearance length (metre) As can be seen from the formula, the clearance height c has much more influence on the leakage than the length. The formula assumes purely laminar flow conditions. It is also valid for gases. For the fully eccentric position, i.e. contact between the spool and the wall, the value that is generally used for practical calculations is: Flow Qe = 2.5 · Qi Hydraulic Clearance in Hydraulic Components Pistons: The clearance between the piston and cylinder wall is crucial for preventing leakage and maintaining hydraulic efficiency. A tight clearance minimizes fluid loss, while a clearance that is too small can lead to increased friction and wear. The piston's design and the material used influence the optimal clearance. Hydraulic spool valves: These valves rely on precise clearances to control the flow of hydraulic fluid. The clearance between the spool and valve body affects the valve's responsiveness, leakage rate, and overall performance. Different types of spool valves, such as two-way, three-way, and four-way valves, have varying clearance requirements. Hydraulic seals: Seals are essential for preventing fluid leakage in hydraulic systems. The clearance between the seal and the mating surface is critical for ensuring effective sealing. Different seal materials and designs have different clearance tolerances. Proper clearance is necessary to avoid excessive friction, wear, and seal failure. Hydraulic cylinders: The clearances within a hydraulic cylinder, such as between the piston and cylinder wall, and between the rod and gland, affect the cylinder's efficiency, service life, and leakage rate. Accurate clearances are necessary for smooth operation and to prevent damage to the cylinder components. Understanding and controlling hydraulic clearance is essential for optimizing the performance, efficiency, and longevity of hydraulic systems. References Hydraulics
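As a numerical illustration of the concentric-clearance formula above, the short Python sketch below evaluates Qi, and the eccentric value Qe = 2.5 · Qi, for a set of assumed spool-valve dimensions and oil properties. The input values are hypothetical and chosen only to show typical orders of magnitude; they are not taken from the text.

```python
import math

def leakage_flow(delta_p, d, c, nu, rho, length):
    """Concentric annular leakage flow Qi = dP*pi*d*c^3 / (12*nu*rho*L),
    valid for laminar flow with a steady, centred spool (SI units)."""
    return (delta_p * math.pi * d * c**3) / (12 * nu * rho * length)

# Hypothetical example values (illustrative only):
delta_p = 10e6    # 10 MPa pressure drop [Pa]
d       = 0.010   # 10 mm spool diameter [m]
c       = 5e-6    # 5 micrometre radial clearance [m]
nu      = 46e-6   # kinematic viscosity, e.g. ISO VG 46 oil near 40 C [m^2/s]
rho     = 870     # oil density [kg/m^3]
length  = 0.005   # 5 mm clearance (overlap) length [m]

qi = leakage_flow(delta_p, d, c, nu, rho, length)
qe = 2.5 * qi     # fully eccentric case (spool touching the wall)
print(f"concentric leakage: {qi*1e6:.3f} cm^3/s, eccentric: {qe*1e6:.3f} cm^3/s")
```

Because the clearance height enters as c³, doubling c in this sketch increases the leakage roughly eightfold, whereas doubling the length L only halves it, which is the point made in the text above.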
Hydraulic clearance
[ "Physics", "Chemistry" ]
598
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
8,937,664
https://en.wikipedia.org/wiki/Security%20controls
Security controls or security measures are safeguards or countermeasures to avoid, detect, counteract, or minimize security risks to physical property, information, computer systems, or other assets. In the field of information security, such controls protect the confidentiality, integrity and availability of information. Systems of controls can be referred to as frameworks or standards. Frameworks can enable an organization to manage security controls across different types of assets with consistency. Types of security controls Security controls can be classified by various criteria. For example, controls can be classified by how/when/where they act relative to a security breach (sometimes termed control types): Preventive controls are intended to prevent an incident from occurring e.g. by locking out unauthorized intruders; Detective controls are intended to identify, characterize, and log an incident e.g. isolating suspicious behavior from a malicious actor on a network; Compensating controls mitigate ongoing damages of an active incident, e.g. shutting down a system upon detecting malware. After the event, corrective controls are intended to restore damage caused by the incident e.g. by recovering the organization to normal working status as efficiently as possible. Security controls can also be classified according to the implementation of the control (sometimes termed control categories), for example: Physical controls - e.g. fences, doors, locks and fire extinguishers; Procedural or administrative controls - e.g. incident response processes, management oversight, security awareness and training; Technical or logical controls - e.g. user authentication (login) and logical access controls, antivirus software, firewalls; Legal and regulatory or compliance controls - e.g. privacy laws, policies and clauses. Information security standards and control frameworks Numerous information security standards promote good security practices and define frameworks or systems to structure the analysis and design for managing information security controls. Some of the most well known standards are outlined below. International Standards Organization ISO/IEC 27001:2022 was released in October 2022. All organizations certified to ISO 27001:2013 are obliged to transition to the new version of the Standard within 3 years (by October 2025). The 2022 version of the Standard specifies 93 controls in 4 groups: A.5: Organisational controls A.6: People controls A.7: Physical controls A.8: Technological controls It groups these controls into operational capabilities as follows: Governance Asset management Information protection Human resource security Physical security System and network security Application security Secure configuration Identity and access management Threat and vulnerability management Continuity Supplier relationships security Legal and compliance Information security event management; and Information_security_assurance The previous version of the Standard, ISO/IEC 27001, specified 114 controls in 14 groups: A.5: Information security policies A.6: How information security is organised A.7: Human resources security - controls that are applied before, during, or after employment. 
A.8: Asset management A.9: Access controls and managing user access A.10: Cryptographic technology A.11: Physical security of the organisation's sites and equipment A.12: Operational security A.13: Secure communications and data transfer A.14: Secure acquisition, development, and support of information systems A.15: Security for suppliers and third parties A.16: Incident management A.17: Business continuity/disaster recovery (to the extent that it affects information security) A.18: Compliance - with internal requirements, such as policies, and with external requirements, such as laws. U.S. Federal Government information security standards The Federal Information Processing Standards (FIPS) apply to all US government agencies. However, certain national security systems, under the purview of the Committee on National Security Systems, are managed outside these standards. Federal information Processing Standard 200 (FIPS 200), "Minimum Security Requirements for Federal Information and Information Systems," specifies the minimum security controls for federal information systems and the processes by which risk-based selection of security controls occurs. The catalog of minimum security controls is found in NIST Special Publication SP 800-53. FIPS 200 identifies 17 broad control families: AC Access Control AT Awareness and Training AU Audit and Accountability CA Security Assessment and Authorization (historical abbreviation) CM Configuration Management CP Contingency Planning IA Identification and Authentication IR Incident Response MA Maintenance MP Media Protection PE Physical and Environmental Protection PL Planning PS Personnel Security RA Risk Assessment SA System and Services Acquisition SC System and Communications Protection SI System and Information Integrity National Institute of Standards and Technology NIST Cybersecurity Framework A maturity based framework divided into five functional areas and approximately 100 individual controls in its "core." NIST SP-800-53 A database of nearly one thousand technical controls grouped into families and cross references. Starting with Revision 3 of 800-53, Program Management controls were identified. These controls are independent of the system controls, but are necessary for an effective security program. Starting with Revision 4 of 800-53, eight families of privacy controls were identified to align the security controls with the privacy expectations of federal law. Starting with Revision 5 of 800-53, the controls also address data privacy as defined by the NIST Data Privacy Framework. Commercial Control Sets COBIT5 A proprietary control set published by ISACA. Governance of Enterprise IT Evaluate, Direct and Monitor (EDM) – 5 processes Management of Enterprise IT Align, Plan and Organise (APO) – 13 processes Build, Acquire and Implement (BAI) – 10 processes Deliver, Service and Support (DSS) – 6 processes Monitor, Evaluate and Assess (MEA) - 3 processes CIS Controls (CIS 18) Formerly known as the SANS Critical Security Controls now officially called the CIS Critical Security Controls (COS Controls). The CIS Controls are divided into 18 controls. 
CIS Control 1: Inventory and Control of Enterprise Assets CIS Control 2: Inventory and Control of Software Assets CIS Control 3: Data Protection CIS Control 4: Secure Configuration of Enterprise Assets and Software CIS Control 5: Account Management CIS Control 6: Access Control Management CIS Control 7: Continuous Vulnerability Management CIS Control 8: Audit Log Management CIS Control 9: Email and Web Browser Protections CIS Control 10: Malware Defenses CIS Control 11: Data Recovery CIS Control 12: Network Infrastructure Management CIS Control 13: Network Monitoring and Defense CIS Control 14: Security Awareness and Skills Training CIS Control 15: Service Provider Management CIS Control 16: Application Software Security CIS Control 17: Incident Response Management CIS Control 18: Penetration Testing The Controls are divided further into Implementation Groups (IGs) which are a recommended guidance to prioritize implementation of the CIS controls. Telecommunications In telecommunications, security controls are defined as security services as part of the OSI model: ITU-T X.800 Recommendation. ISO ISO 7498-2 These are technically aligned. This model is widely recognized. Data liability (legal, regulatory, compliance) The intersection of security risk and laws that set standards of care is where data liability are defined. A handful of databases are emerging to help risk managers research laws that define liability at the country, province/state, and local levels. In these control sets, compliance with relevant laws are the actual risk mitigators. Perkins Coie Security Breach Notification Chart: A set of articles (one per state) that define data breach notification requirements among US states. NCSL Security Breach Notification Laws: A list of US state statutes that define data breach notification requirements. ts jurisdiction: A commercial cybersecurity research platform with coverage of 380+ US State & Federal laws that impact cybersecurity before and after a breach. ts jurisdiction also maps to the NIST Cybersecurity Framework. Business control frameworks There are a wide range of frameworks and standards looking at internal business, and inter-business controls, including: SSAE 16 ISAE 3402 Payment Card Industry Data Security Standard Health Insurance Portability and Accountability Act COBIT 4/5 CIS Top-20 NIST Cybersecurity Framework See also Access control Aviation security Countermeasure Defense in depth Environmental design Information security Physical Security Risk Security Security engineering Security management Security services Gordon–Loeb model for cyber security investments References External links NIST SP 800-53 Revision 4 DoD Instruction 8500.2 FISMApedia Terms Computer network security Computer security procedures Data security
Security controls
[ "Engineering" ]
1,675
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security", "Data security", "Computer security procedures" ]
8,937,665
https://en.wikipedia.org/wiki/Sublimatory
A sublimatory or sublimation apparatus is equipment, commonly laboratory glassware, for purification of compounds by selective sublimation. In principle, the operation resembles purification by distillation, except that the products do not pass through a liquid phase. Overview A typical sublimation apparatus separates a mix of appropriate solid materials in a vessel in which it applies heat under a controllable atmosphere (air, vacuum or inert gas). If the material is not at first solid, then it may freeze under reduced pressure. Conditions are so chosen that the solid volatilizes and condenses as a purified compound on a cooled surface, leaving the non-volatile residual impurities or solid products behind. The form of the cooled surface often is a so-called cold finger which for very low-temperature sublimation may actually be cryogenically cooled. If the operation is a batch process, then the sublimed material can be collected from the cooled surface once heating ceases and the vacuum is released. Although this may be quite convenient for small quantities, adapting sublimation processes to large volume is generally not practical with the apparatus becoming extremely large and generally needing to be disassembled to recover products and remove residue. Among the advantages of applying the principle to certain materials are the comparatively low working temperatures, reduced exposure to gases such as oxygen that might harm certain products, and the ease with which it can be performed on extremely small quantities. The same apparatus may also be used for conventional distillation of extremely small quantities due to the very small volume and surface area between evaporating and condensing regions, although this is generally only useful if the cold finger can be cold enough to solidify the condensate. Temperature gradient More sophisticated variants of sublimation apparatus include those that apply a temperature gradient so as to allow for controlled recrystallization of different fractions along the cold surface. Thermodynamic processes follow a statistical distribution, and suitably designed apparatus exploit this principle with a gradient that will yield different purities in particular temperature zones along the collection surface. Such techniques are especially helpful when the requirement is to refine or separate multiple products or impurities from the same mix of raw materials. It is necessary in particular when some of the required products have similar sublimation points or pressure curves. See also Distillation List of purification methods in chemistry References External links Alchemical processes Laboratory glassware Separation processes Chemical equipment Phase transitions
Sublimatory
[ "Physics", "Chemistry", "Engineering" ]
500
[ "Physical phenomena", "Phase transitions", "Separation processes", "Chemical equipment", "Phases of matter", "Critical phenomena", "Alchemical processes", "nan", "Statistical mechanics", "Matter" ]
8,937,839
https://en.wikipedia.org/wiki/ArVid
ArVid (Archiver on Video) was a data backup solution using a VHS tape as a storage medium. It was very popular in Russia and the rest of the former USSR in the mid-1990s. It was produced in Zelenograd, Russia by PO KSI. Features Using low-cost VHS tapes and recording units for data backup. High reliability Hamming code error correction Easy data copying between two VHS units (eliminating the need for a computer for data copying) Disadvantages Inefficient tape capacity usage (only 2 grades of luminance signal spectrum were used) Poor software support Operation A VHS recorder unit should be connected to an ArVid ISA board by a composite video cable. Unit operation is controlled by a remote control emulator using an LED. The device may operate in two modes: low data rate at 200 KB/s and high data rate at 325 KB/s (equivalent to roughly 1.33× and 2.17× CDR recording speed). The original, lower recording speed was retained as a user option because not all VHS recorders of the time offered sufficient recording quality to reliably support the higher speed. An E-180 video tape is able to hold 2 GB of uncompressed data at the lower rate, more than sufficient for most PC hard drives of the time. This can be shown by calculating 200 KB/s × 60 s/min × 60 min/h × 3 h = 2.06 GB (2.06 × 2^30 bytes), which also leaves a few minutes spare for header and synchronisation space. Note that it is unclear here whether "200 kbyte" means (200 × 10^3) or (200 × 2^10) bytes; the above calculation assumes the latter, but the former still produces a capacity of 2.01 GB (2.01 × 2^30 bytes), providing 2.00 GB of capacity in a little under 2 hours and 59 minutes. Similarly, this means that an E240 4-hour tape, using the higher data rate, would be capable of storing between 4.35 and 4.46 GB (2^30 bytes), approximately equivalent to a standard single-layer recordable DVD. Models ArVid 1010, 100 kbyte/s, 4 kbyte RAM, was the first of the ArVid devices. Its production started in 1992. ArVid 1020, 200 kbyte/s, no RAM, was a successor to the ArVid 1010 using more advanced integrated circuitry. ArVid 1030/1031, 200 kbyte/s, 64 kbyte RAM, had a better internal design, lower power consumption, was smaller in size and was made using a CPLD. It allowed automatic switching to a TV set when the device was not in use. ArVid 1051/1052, 325 kbyte/s, 128/512 kbyte RAM References External links ArVid description and images (in Russian) Drivers for Linux and FreeBSD VHS Computer storage devices
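The capacity arithmetic above can be reproduced in a few lines of Python; the tape running times, data rates, and the two interpretations of "kbyte" are the same assumptions discussed in the text.

```python
def tape_capacity_gib(rate_kb_per_s, hours, kb=1024):
    """Uncompressed capacity in GiB (2^30 bytes) for a given data rate and
    tape running time; kb selects the 1024-byte or 1000-byte 'kbyte'."""
    return rate_kb_per_s * kb * 3600 * hours / 2**30

# E-180 tape (3 h) at the low 200 kbyte/s rate:
print(tape_capacity_gib(200, 3))           # ~2.06 GiB (binary kbytes)
print(tape_capacity_gib(200, 3, kb=1000))  # ~2.01 GiB (decimal kbytes)

# E-240 tape (4 h) at the high 325 kbyte/s rate:
print(tape_capacity_gib(325, 4))           # ~4.46 GiB
print(tape_capacity_gib(325, 4, kb=1000))  # ~4.36 GiB
```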
ArVid
[ "Technology" ]
599
[ "Computer storage devices", "Recording devices" ]
8,938,988
https://en.wikipedia.org/wiki/Hyfrecator
A hyfrecator is a low-powered medical apparatus used in electrosurgery on conscious patients, usually in an office setting. It is used to destroy tissue directly, and to stop bleeding during minor surgery. It works by emitting low-power high-frequency high-voltage AC electrical pulses, via an electrode mounted on a handpiece, directly to the affected area of the body. A continuous electric spark discharge may be drawn between probe and tissue, especially at the highest settings of power, although this is not necessary for the device to function. The amount of output power is adjustable, and the device is equipped with different tips, electrodes and forceps, depending on the electrosurgical requirement. Unlike other types of electrosurgery, the hyfrecator does not employ a dispersive electrode pad that is attached to the patient in an area not being treated, and that leads back to the apparatus (sometimes loosely but not quite correctly called a "ground pad"). It is designed to work with non-grounded (insulated) patients. The word hyfrecator is a portmanteau derived from “high-frequency eradicator.” It was introduced as a brand name for a device introduced in 1940 by the Birtcher Corporation of Los Angeles. Birtcher also trademark registered the name Hyfrecator in 1939, and rights to the registered trademark were acquired by CONMED Corporation when it acquired Birtcher in 1995. Today, machines with the name Hyfrecator are sold only by ConMed Corporation. However, the word "hyfrecator" is sometimes used as a genericized trademark to refer to any dedicated non-ground-return electrosurgical apparatus, and a number of manufacturers now produce such machines, although not by this name. Differentiation from other types of electrosurgical equipment The hyfrecator primarily differs from other electrosurgical devices in that it is low-powered and not intended for cutting tissue, thus enabling its use with conscious patients. The hyfrecator does not require a dispersive return pad, referred-to in the electrosurgery field as a "ground pad," or "patient plate," because the hyfrecator can pass a very low-powered current between forceps tips via bipolar output, or pass an A.C. current between one pointed metal electrode probe and the patient, with the patient's self-capacitance alone providing a current sink--this is equivalent to considering displacement current to be the return current. In the latter mode, the patient must sit or lie on an insulated table, much as in the case with objects to be charged electrostatically with high-voltage D.C. (as from a Van de Graaff generator, for example). Stray ground paths between the patient and foreign conductors (such as a metal table leading somewhere to earth-ground) can offer another capacitative reservoir besides the patient, and burns out of the area of treatment may thus result, from current passing between patient and the earth-ground. For this reason, hyfrecation and all non-ground-pad electrosurgery is performed only on conscious patients, who would be aware of the burn and discomfort from an unwanted earth-ground path. (In types of electrosurgery which do employ a ground-pad, the ground-pad path serves as such a low resistance ground to the machine, that extraneous other ground paths become unimportant, and thus with proper precautions these methods can, and often are, used on anesthetized patients). 
Because hyfrecation is always a relatively low-power modality, it can be used in some situations (such as very small nevus removal or skin tag removal) without local anaesthesia. In many other uses to destroy larger lesions, a local anesthetic injection or regional nerve block is used. The pain from hyfrecation is due to the burning of tissue, and the pain of electric current is absent, due to the high (radio) frequency which does not directly cause discharge of nerves. Although the hyfrecator is not used primarily to cut tissue, it may be used in a secondary capacity to control bleeding, after tissue is cut by a standard surgical scalpel, or else it may be used to partly destroy superficial tissue, that is then removed by the scraping action of a curette. These are done under local anesthesia. An example of such a combination procedure is the standard method of electrodesiccation and curettage used by dermatologists to destroy skin cancers. Modes of use Hyfrecators are used in two principal modes: Desiccation, in which electrical energy kills tissue near the probe tip by heating it past the temperature at which cells can survive. The method is called desiccation because it removes water from tissue as steam, leaving the tissue white and dead, without obviously being burned. This mode is usually employed with the probe in physical contact with the skin or lesion to be destroyed. This method is notable for causing relatively little actual destruction at the point of skin contact, but a large zone of destruction beneath the skin, as the current from the probe fans out into the tissue below the point of contact. Such effects may be deliberately employed in destruction of subcutaneous nodules, where minimal damage to the intact and normal skin surface is desired, at the same time as destruction and degeneration of a larger mass immediately beneath the skin, such as a subcutaneous wart or sebaceous gland. Fulguration, in which a deliberate spark is generated by touching or nearly touching the sharp probe to the lesion or skin. This results in far higher temperatures at the point of contact of the spark to skin, causing very high temperatures and carbonization (eschar) of the tissue immediately at the spark-contact point, and just below it. Thus, it results in the highest effect at the point of spark contact. This is most useful for completely destroying very superficial structures, such as nevi and skin tags, which protrude above the skin surface. Targets of use The hyfrecator has a large number of uses, such as removal of warts (especially recalcitrant warts), pearly penile papules, desiccation of sebaceous gland disorders, electrocautery of bleeding, epilation, destruction of small cosmetically unwanted superficial veins, in certain types of plastic surgery, and many other dermatological tasks. It may also be instrumental in the destruction of skin cancers such as basal cell carcinoma. For larger amounts of tissue destruction, the hyfrecator may be used in multiple sessions in the same area or point, as for example to gradually reduce the size of a large subcutaneous structure, such as a plantar wart. The hyfrecator is useful to control bleeding in dermatological office surgery in conscious patients, after tissue-cutting, tissue removal, or biopsy is first done mechanically, with a scalpel. See electrodesiccation and curettage. The hyfrecator can be used in almost all fields of medicine, e.g. podiatry, dentistry, ophthalmology, gynecology, and veterinary medicine. 
More recently, the hyfrecator is being used by those performing body modification services as a more precise way to brand the skin for aesthetic purposes. It allows more intricate and elaborate designs to be burned into the skin. References External links Hyfrecator on ConMed site "The hyfrecator: a treatment for radiation induced telangiectasia in breast cancer patients" -- British Journal of Radiology "Comparison of potassium titanyl phosphate vascular laser and hyfrecator in the treatment of vascular spiders and cherry angiomas." (Abstract) - Clinical and experimental dermatology Medical equipment
Hyfrecator
[ "Biology" ]
1,636
[ "Medical equipment", "Medical technology" ]
8,939,054
https://en.wikipedia.org/wiki/Factotum%20%28software%29
factotum is a password management and authentication protocol negotiation virtual file system for Plan 9 from Bell Labs. When a program wants to authenticate to a service, it requests a key from factotum. If factotum does not have the key, it requests it from the user, either via the terminal window or via auth/fgui, and the key is then stored in volatile memory. factotum then authenticates to the service on behalf of the program. For long-term storage, keys are usually stored in secstore or in an encrypted file. See also List of password managers Password manager Cryptography External links factotum(4) in Plan 9 Programmer's Manual, Volume 1. factotum(4) in 9front. Security in Plan 9 in Plan 9 Programmer's Manual, Volume 2. Plan 9 from Bell Labs Free password managers
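The request/prompt/authenticate flow described above can be sketched as follows. This is a purely illustrative Python model, not the real Plan 9 interface (factotum is actually accessed as a virtual file system), and every name in it is hypothetical.

```python
def run_auth_protocol(service, protocol, key):
    """Stand-in for the real protocol negotiation (hypothetical)."""
    return f"authenticated to {service} via {protocol}"

class FactotumModel:
    """Toy model of the flow described above: keys are held only in
    volatile memory, missing keys are requested from the user, and
    authentication is performed on behalf of the client program."""

    def __init__(self, prompt_user):
        self._keys = {}                  # in-memory (volatile) key store
        self._prompt_user = prompt_user  # e.g. terminal or GUI prompt

    def authenticate(self, service, protocol):
        key = self._keys.get((service, protocol))
        if key is None:
            key = self._prompt_user(service, protocol)   # ask the user
            self._keys[(service, protocol)] = key        # cache in memory only
        return run_auth_protocol(service, protocol, key)

# Stand-in prompt so the sketch runs non-interactively:
agent = FactotumModel(prompt_user=lambda s, p: "secret")
print(agent.authenticate("mail.example.org", "p9sk1"))
```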
Factotum (software)
[ "Technology" ]
171
[ "Plan 9 from Bell Labs", "Computing platforms" ]
8,939,340
https://en.wikipedia.org/wiki/Pacific%20Ocean%20Shelf%20Tracking%20Project
The Pacific Ocean Shelf Tracking Project (POST) is a field project of the Census of Marine Life that researches the behavior of marine animals through the use of ocean telemetry and data management systems. This system of telemetry consists of highly efficient lines of acoustic receivers that form listening lines across sections of the continental shelf along the coast of the Pacific Northwest. The acoustic receivers pick up signals from the tagged animals as they pass along the lines, allowing for the documentation of movement patterns. The receivers also allow for the estimation of parameters such as swimming speed and mortality. The trackers sit on the seabed of the continental shelf and in the major rivers of the world. This method can be used to improve fisheries management. The program started in 2002 and was initially limited to the study of the movement and ocean survival of both hatchery-raised and wild salmon in the Pacific Northwest. After the successful pilot period, the program has now moved into the tracking of trout, sharks, rockfish, and lingcod. See also Ocean Tracking Network References External links Marine biology Fisheries databases Databases in Canada Continental shelves
Pacific Ocean Shelf Tracking Project
[ "Biology" ]
221
[ "Marine biology" ]
8,939,352
https://en.wikipedia.org/wiki/Union%20label
A union label (sometimes called a union bug) is a label, mark or emblem which advertises that the employees who make a product or provide a service are represented by the labor union or group of unions whose label appears, in order to attract customers who prefer to buy union-made products. The term "union bug" is frequently used to describe a minuscule union label appearing on printed materials, which supposedly resembles a small insect. Origin and history The invention of the union label concept is attributed to the Carpenter's Eight-Hour League in San Francisco, California which adopted a stamp in 1869 for use on products produced by factories employing men on the eight- (as opposed to ten-) hour day. In 1874, that city's unionized cigar-making workers created a similar "white labor" label to differentiate their cigars from those made by poorly paid, non-unionized Chinese workers. The concept of the union label as a tool for harnessing support from fellow working-class consumers for unionization spread rapidly in the next decades, first among the cigarmakers (their union adopted the first national union label in 1880), but among other unions as well, including typographers, garment workers, coopers, bakers and iron molders. By 1909, the American Federation of Labor had created its Union Label Department. See also Printer's mark References Trade unions Labor movement in the United States Certification marks Consumer symbols
Union label
[ "Mathematics" ]
292
[ "Symbols", "Certification marks" ]
8,940,073
https://en.wikipedia.org/wiki/Jetstream%20furnace
Jetstream furnaces (later Tempest wood-burning boilers) were an advanced design of wood-fired water heaters conceived by Dr. Richard Hill of the University of Maine in Orono, Maine, USA. The design was first used to heat a house to prove the theory and then, with government funding, became a commercial product. Wood-burning water furnaces, boilers and melters The furnace used a forced and induced draft fan to draw combustion air and exhaust gases through the combustion chamber at 1/3 of the speed of sound (100 m/s+). The wood was loaded into a vertical tube which passed through the water jacket into a refractory-lined combustion chamber. In this chamber the burning took place and was limited to the ends of the logs. The water jacket prevented the upper parts of the logs from burning so they would gravity-feed as the log was consumed. The products of combustion left the chamber and passed through a narrow ceramic neck, which reached temperatures of 2000 degrees F, where the gases and tars released by the wood completed their burning. The products then passed through a refractory-lined ash chamber which slowed the flow and let ash settle out. From here the hot gases travelled up through the boiler tubes which pass through the water jacket. Turbulators in the tubes improve heat transfer to the water jacket. All this resulted in total efficiencies as high as 85% but more commonly 75-80% and allowed partly dry, unsplit wood to be burned just as effectively and cleanly. The particulate production was 100 times less than that of airtight stoves of the 1970s and 1980s and was less than that of representative oil-fired furnaces. The Jetstream produced approximately 0.1 grams per hour of soot while EPA-certified woodstoves produce up to 7.2 grams per hour. The high combustion chamber velocities do result in fine particulate flyash being ejected from the stack. The other aspect of Dr. Hill's design was the use of water storage. The furnace only operated at one setting, wide-open burn. A full load of hardwood, approximately 40 lbs, would be consumed in four hours and the heat released was stored in water tanks for use through the day. The Hampton Industries model was designed to produce . A Hampton Jetstream Mk II, which was set to be the next model offered by Hampton Industries, existed in prototype form. It was an upsized version of the unit offered for sale. The only component changed was the diameter of the burning chamber. This was enlarged within the standard casting. The prototype shares many of the design improvements seen in the Kerr Jetstream. The Tempest was produced by Dumont Industries of Monmouth, ME, USA and is very similar to the Jetstream. The patent for this device, termed a WoodFired Quick Recovery Water Heater, number 4583495, issued April 22, 1986, is assigned to the board of trustees of the University of Maine. There is no current production using the design of this patent. (January, 2008) Production history Hampton Industries of Hampton, PEI, Canada, pursued the design to fit into houses more easily. Hampton Industries produced the Jetstream from January 1980 to June 1981, producing 500 units. At this point the company ceased operations with unfilled orders for hundreds more stoves and sales approximately 25% higher than projected. It was stated that advertising costs incurred before production had depleted the principals' capital, and a deal with a venture capitalist fell through at the last minute.
Within 4 weeks of entering receivership, Kerr Controls Ltd of Truro, Nova Scotia had purchased the manufacturing rights and resumed production of the slightly redesigned Jetstream in mid-September 1981, producing 150 units in just the last quarter of 1981. The Kerr Jetstream incorporated several updates, including an available belt-driven fan replacing the Electrolux vacuum cleaner motor originally used. A removable refractory plug allowing access to the tunnel was added in the back of the unit. An updated control panel was adopted and the option of an electronic panel was added. The design of the Hampton Industries furnaces and spare parts belong to Kerr Heating Products of Parrsboro, Nova Scotia. Some molds to replace parts still exist and are available through Kerr Controls or Kerr Heating. Alternate designs Current (2007) furnaces with similar designs: Solo Series Wood Gasification Boilers by HS-Tarm Alternate Heating Systems (AHS) The Greenwood Hydronic Wood Furnace Garn WHS Kunzel Wood Gasification Boilers Alternative Fuel Gasification The EKO-LINE and KP-PYRO Boilers and Goliath Commercial Boiler from New Horizon Corporation Inc. These companies use a process called gasification, but the basics remain common: forced draft, twin refractory-lined combustion and ash chambers linked by a ceramic or refractory burner nozzle or tube, and a shell-and-tube heat exchanger. External links (Hill's 1979 DOE Grant Report) See also Furnace Hydronics Gasification Wood gas Wood gas generator References Boilers Plumbing Heating, ventilation, and air conditioning
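As a rough check on the figures quoted in the description above (a load of roughly 40 lbs of hardwood consumed in four hours, with total efficiencies around 75-85%), the average rate of heat delivered to storage can be estimated as follows. The heating value assumed for seasoned hardwood is an illustrative figure, not one taken from the text.

```python
BTU_PER_KWH = 3412.0

load_lb    = 40      # full load of hardwood (from the text)
burn_hours = 4       # time to consume the load (from the text)
efficiency = 0.80    # within the 75-85% total efficiency quoted above
btu_per_lb = 7000    # assumed heating value of seasoned hardwood (illustrative)

heat_to_storage_btu = load_lb * btu_per_lb * efficiency
avg_output_btu_per_h = heat_to_storage_btu / burn_hours  # ~56,000 BTU/h
print(f"~{avg_output_btu_per_h:,.0f} BTU/h average "
      f"(~{avg_output_btu_per_h / BTU_PER_KWH:.1f} kW) delivered to storage")
```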
Jetstream furnace
[ "Chemistry", "Engineering" ]
1,008
[ "Construction", "Boilers", "Plumbing", "Pressure vessels" ]
8,940,450
https://en.wikipedia.org/wiki/Foundations%20of%20Physics
Foundations of Physics is a monthly journal "devoted to the conceptual bases and fundamental theories of modern physics and cosmology, emphasizing the logical, methodological, and philosophical premises of modern physical theories and procedures". The journal publishes results and observations based on fundamental questions from all fields of physics, including: quantum mechanics, quantum field theory, special relativity, general relativity, string theory, M-theory, cosmology, thermodynamics, statistical physics, and quantum gravity. Foundations of Physics has been published since 1970. Its founding editors were Henry Margenau and Wolfgang Yourgrau. The 1999 Nobel laureate Gerard 't Hooft was editor-in-chief from January 2007. At that stage, it absorbed the associated journal for shorter submissions, Foundations of Physics Letters, which had been edited by Alwyn Van der Merwe since its foundation in 1988. Past editorial board members (who include several Nobel laureates) include Louis de Broglie, Robert H. Dicke, Murray Gell-Mann, Abdus Salam, Ilya Prigogine and Nathan Rosen. Carlo Rovelli was announced as the new editor-in-chief in February 2016. Einstein–Cartan–Evans theory Between 2003 and 2005, Foundations of Physics Letters published a series of papers by Myron W. Evans claiming to make obsolete well-established results of quantum field theory and general relativity. In 2008, an editorial was written by the new Editor-in-Chief Gerard 't Hooft distancing the journal from the topic of Einstein–Cartan–Evans theory. Abstracting and indexing According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.276. The journal is abstracted and indexed in a number of bibliographic databases. References External links Physics journals Philosophy of physics Monthly journals
Foundations of Physics
[ "Physics" ]
363
[ "Philosophy of physics", "Applied and interdisciplinary physics" ]
8,940,918
https://en.wikipedia.org/wiki/Computer%20Measurement%20Group
The Computer Measurement Group (CMG), founded in 1974, is a worldwide non-profit organization of data processing professionals whose work involves measuring and managing the performance of computing systems. In this context, performance is understood to mean the response time of software applications of interest, and the overall capacity (or throughput) characteristics of the system, or of some part of the system. CMG members are primarily concerned with evaluating and maximizing the performance of existing computer systems and networks, and with capacity management, in which planned enhancements to existing systems or the designs of new systems are evaluated to find the necessary resources required to provide adequate performance at a reasonable cost. Mission and activities CMG's purpose is to promote the exchange of technical information among Information Technology (IT) professionals through regional groups, technical publications, and an annual conference. In common with other user groups devoted to a broad range of products or technologies (for example SHARE or DECUS), CMG provides education, networking, and leadership opportunities for its members. The association's activities provide: Extensive introductory education for new professionals Information on emerging technology as well as methodology for existing performance professionals Forums for the exchange of information, promotion of new ideas, and discussions of management information requirements Focus on practical applications and results oriented methodologies Encouragement for educational institutions to focus on the IT curriculum. CMG groups With over thirty regional and international groups, CMG's wide reaching structure emphasizes an extensive information and peer network. Regional groups hold local educational meetings, typically three or four times a year, and many publish informational newsletters. Regional meetings may span a half-day, a full day (the most common), or occasionally two days. International CMG groups also hold their own annual conferences and publish their own conference proceedings. In the US, Regional CMG groups cover the following areas: Boston, Connecticut, Florida, Greater Atlanta, Kansas City, Midwest, Minneapolis, National Capital Area, New York, Northern California, Northwest, Ohio Valley, Philadelphia, Rocky Mountain, St Louis, Salt Lake City, Southern, Southern California, and South West. International CMG groups exist in Australia, Austria and Eastern Europe (CMG AE), Canada, Central Europe (CECMG), Italy, China, the Netherlands South Africa, India and the United Kingdom (UKCMG) Focus areas CMG allows members to exchange information about the measurement, management and performance of Information technology systems. Topics of particular concern among CMG members include: Awards At its annual conference, CMG presents several awards recognizing outstanding contributions to the field of computer measurement and performance evaluation: The A. A. Michelson Award, first awarded in 1974, is named in honor of Albert Abraham Michelson, known for his technical accomplishments in measuring the speed of light and for his role as teacher and inspirer of others. This award is given to one honoree annually, to recognize the same combination of technical excellence and professional contributions. The J. William Mullen Award, first awarded in 1990, is given in memory of past CMG president Bill Mullen. 
The award recipient is an individual who exhibits both technical excellence and an engaging presentation style. This was recently won by Ramya Ramalinga Moorthy (from Bangalore, India) in 2017 for her exceptional and engaging presentation style. The CMG Graduate Fellowship program, begun in 1987, is open to full-time graduate students in Computer Science and related fields. It is intended to encourage and support research in the areas of measurement, modeling, management and analysis of system and network performance. Best Paper Awards are given to conference papers judged exemplary by industry peers. Publications CMG produces a number of both print and electronic publications. Currently there are four unique publications, the CMG Journal, the CMG Proceedings, the CMG Bulletin and MeasureIT. Membership in CMG is required to obtain copies of the publications with the exception of MeasureIT. In addition, some libraries and Universities have copies of the CMG Journal available for reference use. The free electronic newsletter, MeasureIT, is written by computer and performance professionals and is distributed around the second Tuesday of every month. Anyone may subscribe to MeasureIT by visiting the CMG MeasureIT homepage. Annual conference CMG holds an annual conference for performance professionals from around the world. CMG'11 was held in Washington, D.C., United States, 5–9 December 2011. In 2013 the annual conference was renamed Performance and Capacity 2013 by CMG Conference and held in La Jolla, CA, United States, 4–8 November 2013. See also A. A. Michelson Award recipients: Neil J. Gunther (2008) Jeffrey P. Buzen (1978) Arnold Allen (mathematician) (1994) External links CMG website Organizations established in 1974 Computer performance Professional associations based in the United States Information technology organizations
Computer Measurement Group
[ "Technology" ]
977
[ "Computer performance", "Information technology", "Information technology organizations" ]
9,624,797
https://en.wikipedia.org/wiki/Translational%20Genomics%20Research%20Institute
The Translational Genomics Research Institute (TGen) is a non-profit genomics research institute based in Phoenix, Arizona, United States. History and activities TGen was established in July 2002 by Jeffrey Trent in Phoenix, Arizona, with an initial investment of US$100 million from Arizona public- and private-sector investors. The field of translational genomics research searches for ways to apply results from the Human Genome Project to the development of improved diagnostics, prognostics, and therapies for cancer, neurological disorders, diabetes and other complex diseases. The mission of TGen is to make and translate genomic discoveries into advances in human health. TGen has contributed to the growth of scientific research and biotechnology in Arizona. The institute has been involved in collaborations and studies, such as the research on chronic traumatic encephalopathy (CTE) in former NFL players in partnership with Exosome Sciences. References Genetics or genomics research institutions Bioinformatics organizations Research institutes in Arizona Research institutes established in 2002 Organizations based in Arizona 2002 establishments in Arizona
Translational Genomics Research Institute
[ "Biology" ]
217
[ "Bioinformatics", "Bioinformatics organizations" ]
9,624,831
https://en.wikipedia.org/wiki/Birkenhead%20Library
Birkenhead Library (Te Whare Matauranga o Birkenhead in Māori) is a New Zealand library, part of the Auckland Libraries system located on Auckland's North Shore. Founded in 1949 it predominantly serves the areas of Birkenhead, Beach Haven, Birkdale, Kauri Park, Chelsea, and Birkenhead East, a population of about 26,000, including six primary schools, two intermediate schools, and two colleges. Typical of medium-sized public libraries in New Zealand, it is able to provide an extensive range of modern library resources and services through its integration into a wider urban network, and through its association with the National Library, while retaining its own distinct, local connections such as the Archives Collection of the Chelsea Sugar Refinery. The library was the first public library to be founded in North Shore City, the first to offer dial-up access to the New Zealand Bibliographic Network, and a leading proponent of full weekend services. For four years the library was located in temporary quarters in the Birkenhead Leisure Centre, while a dispute over the location and design of its proposed new building was resolved. On 17 December 2009, a new Birkenhead Library and Civic Centre was opened on the site of the former library. History The history of Birkenhead Public Library is characterized by four transformations which occurred at approximately twenty-year intervals since its founding in 1949. Three of these transformations involved new buildings, while the other involved amalgamation into the wider North Shore Libraries system. There was also an unexpectedly long interim period when the library was based at the Leisure Centre. Founding of the library At the turn of the twentieth century, apart from "subscription libraries" the only library in Birkenhead was run by the Zion Hill Methodist Church. In 1901, the Birkenhead Borough Council resolved that its legal and finance committee should consider building a public one, but little eventuated. A subsidy of £100 was sought from the government in 1904 for a building "not to exceed a total of £600". However, it was not until 1949 that the Free Birkenhead Public Library was established in the basement of the Council Chambers, opening on 14 November. It was a modest beginning, bolstered by support from the National Library. There was an initial budget of £500 (about $35,984 in 1st Quarter of 2017). The library began with a collection of around 1500 items, "swelled by about another twenty books a month." The "Civic Reserve library" After the Auckland Harbour Bridge was opened in 1959 the Birkenhead area became much more accessible. By the mid-1960s issues each year had increased dramatically by nearly a hundred thousand items. Nora Bourke, the chairman of the Library Committee, felt the existing building was limited and, with mayor Cyril Crocombe, began making plans for a much larger building. This was to be built on the Civic Reserve, on which a World War One memorial has stood since 1927. On 20 April 1968, the new building was officially opened by the Governor General Arthur Porritt. For the next 37 years, until 2005, this was the location of the Birkenhead Public Library, and in 1979 the reserve was renamed Nell Fisher Reserve after the first librarian, Eleanor "Nell" Fisher. Amalgamation The 1980s saw an increase in the depth and variety of services offered. A Bedford van was used to start a mobile library service in 1982, and the library began opening on Saturdays in 1983. 
In 1986, children's multimedia items were offered for the first time, and the New Zealand Bibliographic Network link was established. Soon after, compact discs were made available, while in 1987 the library began opening on Sundays. So service was now provided over the entire week, a first in New Zealand. Notable too in the late eighties, was the processing of books to create "machine readable codes," which saw the catalogue shifted from card to microfiche. Borrowers were now directly registered onto the computer, and a new computer management system went live, "the most sophisticated...in the world." This was a forerunner of the greater computerisation ahead, including the introduction of self-issue machines in 1995 (pictured), internet access in 1996 and a widening range of electronic resources from 2002. However, perhaps the most significant event of the eighties was amalgamation of the Birkenhead and Northcote Boroughs, and the subsequent merging of the local libraries into the North Shore Libraries system in 1989 (pictured). Staff were redeployed and regional development was initiated. A new division, Technical Services, became fully operational at Takapuna. A Children's Services Coordinator was appointed, and the computer management system established the year before was improved to allow universal access to the six libraries' holdings. This convergence has continued to this day with the advent of the "eLGAR" conglomerate, the Libraries of the Greater Auckland Region. Birkenhead Library (as part of the North Shore Libraries system) began a public rollout of the eLGAR Smarter System on 16 June 2005. On 1 May 2000, a time capsule was buried out in front of the library, by the Birkenhead war memorial. It contained various items such as maps, driver's licences, shopping receipts, and old library cards from the 1960s and 1970s. Blessed by a kaumatua from Awataha Marae it was planned to be dug up in one hundred years. On the plaque are quoted the opening two lines from T. S. Eliot's poem Burnt Norton. The "Leisure Centre library" In 1992 issues topped 300,000 items. By 2003 usage of the library had increased still further, to such an extent that it was noticeably affecting service delivery. Over 500 people a day were entering the library and new members were growing at a rate of 150 per month. Finding room to add new material to the existing stock of some 67,500 items was becoming increasingly difficult. Another factor driving the need for change was the absence of enough space for community groups, such as primary schools and students, book clubs, and author events. By the end of the 1990s some sort of addition to the library or a rebuild was being actively considered. In 2005, in preparation for building works on the same site, the library was shifted to a converted basketball court in the Birkenhead Leisure Centre in Mahara Avenue (pictured). Other alternative sites had been considered, but most were found to be either inappropriate or too expensive. With limited space available for services, the Plunket, Citizen's Advice Bureau (CAB), and council Area Office had to find alternative premises. In fact only 50% to 60% of the library's own stock could be accommodated. $175,000 was budgeted for the fitout of the basketball court, and included such things as improved lighting, car park access, and funding for a passenger lift to allow for disabled patrons. Since the location was some distance from the town centre, a free shuttle bus was provided from Highbury once a week. 
The Leisure Centre is located in the Birkenhead War Memorial Park. In the areas adjacent to the library, there were problems associated with youth drinking, graffiti, and other undesirable behaviour. Patronage of the library dropped by 35%. In March 2007, the library was granted a resource consent to use the Leisure Centre location for a further three years or until the new library was built, whichever came later. This location was meant to be a transitional arrangement while the new building was being constructed. However, the library remained at this temporary location for four years. The current library Brendan Rawson, from the Architecture Office in Ponsonby, was retained in 2004 to design a building that would create the much needed public space; and in addition, reflect the heritage of the area. Initial concepts took advantage of the considerable potential for views, and incorporated extensive additional landscaping, from more trees to poppies. The first completed design, (pictured), evoked the kauri that were once endemic to the region. Shadow-patterns of branches etched on the windows were reminiscent of the trees in the reserve, one of which was itself a kauri, planted in 1987 to commemorate environmentalist Bill Fisher. There was also to be a café on the second floor and a drive-thru at basement level for dropping off returns. This version was planned to be two metres higher than the previous building, with 1200 m2 of floorspace set aside for the library. Put out to public scrutiny there was some negative feedback. Peter White, a local resident, was critical of the design, calling the building "strange...full of different angles." Community board member Tony Holman wanted more thought put into the heritage aspects, though he did not specify any details. The Friends of the Library, on the other hand, were unanimous in their praise. Another important aspect of the design was that it would also be a sustainable building. This commitment to the environment was an increasingly significant part of North Shore City Council's approach to urban development, especially through the Resource Management Act and the Treaty of Waitangi. The council aimed to lead by example with best practices. The library design incorporated several notable features, including the maximisation of natural light, the use of recyclable material, including reuse of grey water, and a natural ventilation and cooling system to limit energy costs. After the Environment Court decision (see below) this design underwent some modification, but the library opened on 17 December 2009 with a formal opening ceremony in February 2010. Controversy over the new building In preparation for a new community library complex the old Birkenhead Library was demolished in May 2005. The library service was temporarily relocated to a basketball court in the Birkenhead Leisure Centre. It was expected to be there for eighteen months. However, the project was then delayed for several years; and was not completed until December 2009. Cost of delays The initial focus of the library project was on upgrading the existing building. When the second floor was built on top of the library in the early 1970s, it initially housed the Council Area Office, though it was conceived even then that it would eventually be used as library space. However, investigations in 1999 by engineers revealed that the second-level floor was too weak to support the weight of books without significant strengthening, which would be expensive to undertake. 
As well, the existing building was showing increasing signs of deterioration, most significantly a leaking ceiling. While it could be repaired, long-term maintenance would negate the short-term cost advantage of doing so. Thus, the option of a completely new building was brought under consideration. In 2000, a feasibility study considered four options: demolition, refurbishment, ground floor extensions, or extensions to both levels. Demolition and reconstruction was estimated at $3.3 million, cheaper than extending both floors at $3.7m, though nearly twice the cost of refurbishment. The study noted that the site comprised five lots whose exact boundaries were unclear. It also acknowledged the demolition and reconstruction option would require a land use consent application, as the new building would exceed height and coverage restrictions. In March 2002, City Librarian Geoff Chamberlain presented the study in a report to the council, with the first option being the preferred choice by the library services. It would increase the size of the building to 1600 m2, 1200 m2 of which would be devoted to the library itself, with the additional space being used for the Council Area Office and the Citizens Advice Bureau. A year later, in June 2003, demolition and reconstruction was the confirmed preference. Costs were projected for the next year's draft annual plan. This included $100,000 in the first year for design and planning, $900,000 in the next, and $4.5 million for the building itself. By mid–2004 the concept had been finalised, and the detailed design was being worked on for presentation to the community boards and for public consultation. On 23 February 2005, public submissions closed and a fitout of the temporary site in the Leisure Centre was begun. Total cost for the new library was now expected to be $6.5 million, then $7.3 million. The project was then delayed, requiring a change to the district plan (see below). In late 2005, the Council Community Services General Manager, Loretta Burnett, stated: "There will be additional costs associated with a plan change but they are modest in comparison." However, subsequent further delays lifted the cost to the region of $9.25 to $9.5 million as of the end of 2007, with a budget shortfall of $2.75 million. Resource consent issue The North Shore City Council had lodged a resource consent application for the new building in December 2004, but did not wait for it to be confirmed before demolishing the existing building in May–June the next year. According to a 2006 examination of the project management, Council assumed there was little risk of the application being declined. However, an amended Council report by planner Ian Jefferis revealed that the building was to occupy 15% more of the reserve than expected (pictured). Speaking after his appeal in 2007, Bill Abrahams, owner of Rawene Chambers located opposite the library site (pictured below), said that this lack of a consent was "the crux of the matter." Some residents claimed they had not been properly consulted. Abrahams felt he should've been consulted because the design blocked his views. The Council disputed that there had been no consultation. The Strategic projects manager, Simon Guillemin, pointed out that there had been a number of public meetings and press releases. He also said there had also been consultation with the Birkenhead Town Centre Association and the Friends of the Library. Yet in June, independent commissioners declined resource consent. 
Reasons cited included concern about the impact on the existing environment, traffic flow, and the building's proposed size, which violated the zoning requirements. Rezoning debate Thwarted, the Council elected to seek rezoning. As Geoff Chamberlain, the City Librarian acknowledged, the original zoning on the site was historically complex, and never tidied up. In fact, it limited the coverage of any building to only 10% of the land; but the original building built in 1968 had covered 19.5%. This did not include the Plunket building. The new plan was initially thought to be 32.9%, then revised to 48%. This apparent expansion of the footprint particularly concerned resident Clyde Scott, who was later one of those who lodged an appeal with the Environment Court. In the local paper, the North Shore Times, there was a steady clamour from both opponents of the project and those in favour. In addition to issues with the new library itself, the council's performance was questioned, concern was raised about the drop-off in existing library services, and the library project was linked to the ongoing redevelopment in the whole of the Birkenhead town centre. Four months after the library was demolished, Jill Nerheny, the Birkenhead-Northcote Community Coordinator, claimed there was a groundswell of support for leaving the land as green space. Residents Clyde Scott and Peter White became adamant the community would be better served with a park on the reserve site and the library located elsewhere in Highbury. Others lobbied for the fence ringing the library site to be taken down, and eventually the Birkenhead-Northcote Community Board had it pushed back to the perimeter. Two new park benches were installed on the reserve to take advantage of the expanse that was now available (pictured). As part of their later application on rezoning, the Council submitted a report on the prevailing winds, which suggested the land would be unsuitable as open space. While the commissioners acknowledged this, they felt that landscaping could improve the situation somewhat. They then noted the number of people using the newly opened reserve was "relatively modest," especially when contrasted with the significant number who wanted to reinstate the library there. Three-quarters of those who made submissions on the rezoning supported the change, including Thea Muldoon. Prominent among those objecting were former television newsreader Judy Bailey, and property developer, Graham Milne. Milne, who had made proposals as far back as 1989, proffered a wide variety of alternative plans for a far more elaborate community centre. These involved road closures, adding new roads, and leasing or selling his land on 15 and 17 Rawene Road, or going into partnership using his buildings. There was, however, little support for his ideas. In October 2005, he sent an email to Community Services & Parks Committee claiming his proposals had been misrepresented. He accused those doing the assessment reports of "yet more incompetence" and threatened to contest the rezoning "fiercely.. every step of the way." The Committee concluded, contrary to Milne, that the reports were "thorough and included site analysis, as well as evaluation of alternatives." In June 2006, the three independent commissioners approved the rezoning of the district plan (pictured). Their decision acknowledged the significance of historical precedent. 
In other words, the fact that there had been a library on the site for over thirty years was "notable" with regards to the usage the land was now put to. Also important was the appropriateness of the site in comparison to other options. Previous reports in 2003 and 2005 had considered the existing site was the best choice, while the Highbury Centre Plan of 2006 indicated that there had been extensive close consultation with the community over two years, which in a general sense was pertinent to the usage of the contested land. The commissioners concurred, and the library site became formalised as part of a Special Purpose 9 zone, which allows for the continued operation of community facilities. They noted too, that it is this zoning which underlies other North Shore Libraries, such as the ones at Takapuna and Glenfield. The commissioners also reconfigured the Recreation 2 boundary, to cut through the middle plot. This was done to safeguard the treed area, and to ensure a better balance between the reserve and the building complex. They emphasised the need for integration, both physical and visual, between the two zones to encourage usage of the recreation area. Environment court It was expected then that the library project would be further delayed by two years. While the exact future of the library was uncertain, a survey conducted by the MP for Northcote Jonathan Coleman in October 2006 showed there was widespread public support for its return to its former site. However, Abraham Holdings, owner of Rawene Chambers, located opposite the library's former site, lodged a last-minute appeal with the Environment Court, claiming, among other things, concern over the impact on the historic value of the reserve. Former councillor Jenny Kirk decried Abraham Holdings for their blatant commercial self-interest, and lodged a counter objection. A hearing was set for 28 May 2007. Speaking on behalf of the rezoning were Council, Friends of the Library (represented by Mrs. Adrienne Wright), Plunket (by Ms. Jane Sheridan and Jenny Kirk); speaking against were Abraham Holdings, Graham Milne (Airborne Asia Pacific), Clyde Scott, Peter White and David Brook. After three months' deliberation the Environment Court approved the building of a new library on the former site, but reaffirmed the rezoning commissioners' restrictions, notably the restriction on height which dropped the maximum from 11 m to 9 m, and the constraints on the footprint. The Council agreed, "keen to keep the planning process as straight-forward as possible." Bill Abraham, of Abraham Holdings, claimed "people in years to come will be grateful for all the park space that has been kept." Indications were that it would still be larger than the old library, with 250 m2 of extra floorspace, though this would make it some 200 m2 to 300 m2 smaller than the original preferred design. The steering group was redesignated as the Governance and Advisory Group. Consisting of councillors, community board members, and the Community Services general manager, it was set up to monitor the project more closely. Construction began in June 2007, and the new $9 million building opened on 17 December 2009. Staff structure In the beginning volunteers were crucial to the running of the library. Savings in wages were considered instrumental in allowing the purchase of more books and also allowed money to be set aside for future planned extensions to the building. In 1949 the number of volunteers was recorded at twenty-eight. 
They included the town clerk and a councillor, Percy Hurn, as well as others who had given freely of their expertise, such as Duthie from the Auckland Public Library, a passing instructor from the Country Library service, and those pulled in to read to the youngsters during the school holidays. Among this "band of honorary assistants" a Mrs. Gaidener, Mr. Slovey and Mr. Odd came in for particular acknowledgement. At the opening of the new library building nearly twenty years later the mayor paid tribute to all the original volunteers. By 1950 the Borough Council was looking to formally employ someone for £100 a year. Joan Foggin and John Wilson were the first paid staff, while Eleanor Fisher, already working in the library, became the first full-time staff member in 1952. She remained the librarian in charge until her retirement nineteen years later. By 1955 there was a part-time library assistant as well, and soon a full-time junior was being considered. Growth in library use and opening hours continued, so that by 1968 there were 3 full-timers and a part-timer to help on Friday nights. Staffing peaked in 1986 with 11.5 full-time equivalents, and was subsequently reduced to 8.67. This includes staff spread over seven days, and reflects the high preponderance of part-timers typical of the industry. The decrease in staff occurred despite the onset of Sunday openings and the increase in door-count and issues because, at least in part, amalgamation allowed the centralisation of many departments, such as cataloguing. The onset of computers has also increased efficiency, even to the near-complete automation of some services, such as circulation through self-issue machines, which recorded 40% of items borrowed within their first months of installation at Birkenhead in 1995. The current staff structure is headed by a Community Librarian, with two main senior positions: an Information Services Librarian and a Children and Young Adults Librarian, both of which are full-time positions. Other senior roles include the Weekend Supervisors. The bulk of the staff continues to be Library Assistants, with 1–2 being full-time. There are also a number of shelvers, generally students. Other occasional staff have included a librarian on exchange from England, and various volunteers, such as a Taskforce Green worker helping with a rebarcoding project, and a student doing a Duke of Edinburgh award. Services Services include children's programmes, reference, interloans, internet access, printer-copier, and housebound deliveries. Children's programmes Children's programmes were in place from early on. Eleanor Fisher persuaded local residents to come in and read to youngsters during school holidays. Storytime went for an hour once a week, and up to 50 youngsters attended. Class visits by local schools started in 1954, and became a regular feature. Outside of this collaboration with schools the library offered reading programmes, such as "Go Bush" and later, as part of the North Shore Libraries, the Rakaau Reader scheme. This encouraged reading by setting targets coupled with incentives and visible marks of achievement, such as green, silver, then gold leaves on the Raakau tree (pictured). From this the library became more involved in the provision of a wider range of holiday activities, like puppet shows and hands-on arts and crafts, such as making hats or murals or cake decorations. On one occasion these were so well subscribed that the library held them down the road in the All Saints Church hall. 
There were events too on such occasions as the library's 50th celebration, and Halloween. There have been regular appearances by authors, illustrators, storytellers and various speakers, and celebrities from Judy Bailey to Edith the Elf. Others include storyteller Lynne Kriegler, illustrators Trevor Pye, Margaret Beames, Robyn Belton, and Judy Lambert, and writers Lino Nelisi, Tom Bradley, and Jean Bennett, as well as Irish storyteller Nigel De Burca, and two of the Aunties. They were often invited as part of various book festivals, such as the Children's Aim Book Award. Competitions to select favourite reads further raised awareness and use of the books. Lapsit for preschoolers with their parents was an innovation launched by then Chief Librarian Rata Graham in 1992. These were half-hour sessions of mostly music and song, as well as stories and finger puppetry. Lapsit proved so popular it was extended to twice a week in 1999. It was the precursor to "Rhymetime", now standardised across the entire North Shore Libraries system, a programme specifically designed to encourage active socialisation and the development of reading skills through the focus on rhythm and rhyme. Resources The initial Borough Council budget for books was £500, and when it opened in 1949 the library began with a collection of around 1500 items, "swelled by about another twenty books a month." Percy Hurn, a councillor at the time, recalled the first book he selected for the library was "Sunset over France." More prosaically, the local newspaper recorded "textbooks on agriculture... a complete set of the books of Walter Scott... and 13 volumes of the works of Thackeray." Support from the National Library was keenly sought, as it would allow access to "practically every library in the dominion." However, this support was qualified: the National Library did not want to encourage "cheap reading" of genre books, such as romance, westerns and detective stories. In the event, when Birkenhead opened nearly half the books present were on extended loan from the National Library. Their "field librarians" continued to provide a regular infusion of books into the Birkenhead collection two or three times a year, for at least a decade. Topics were diverse, from gardening, music, occupations and hobbies, to art, agriculture and home management. Junior books were added in 1953. One hundred and fifty of the original collection were donations. Items gifted have ranged from the Walter Scott works, to individual titles, to a 34-volume set of Britannica. The Rotary Club provided a $2000 Reference collection for the opening of the 1968 building. Later, Plunket donated records, while Bob and Norma Inward gave two folios of prints by painters Goldie and Heaphy. Borrowing then, as now, was free to ratepayers; those outside the area paid 10 shillings in 1949, and fifty years later, $100 for an annual subscription. Initially one book was issued to each member, with a 2–3d charge for additional items. Newer books were more expensive, at as much as 6d. Rental charges on fiction were dropped in 1990, though the late 1980s saw them introduced on items such as CDs, a practice which became generalised across other multimedia items like CD-ROMs and DVDs. For a while, Internet access was charged too, at $2 per 15 minutes. At the time it was used mainly for email. In 1994 rental fiction returned with the start of a "Bestseller" book collection; four years later a similar rental collection of bestseller magazines was started. 
The collection size in 1968 was 19,000 items, mostly books and magazines. This increased to over 63,000 items in 1992, and included a much more diverse range of media, from children's puzzles to archives, as well as the provision of stock from other branches, and access to system-wide databases. By 2003 Birkenhead's stock had risen to 67,500. Shortly afterwards, the library was temporarily relocated to the Leisure Centre, where there was only room to house 40–50% of the collection. Currently, the Adult, Young Adult, Junior, and Large Print collections are subdivided in the traditional manner into fiction and nonfiction areas, with the latter arranged according to the Dewey Decimal system. Media other than books are generally collated as separate collections or subdivisions. There are exceptions, such as language material, which is collated in the nonfiction 400s. Junior material is separated into the widest range of categories, from board books up through various reading ages, such as picture books, readers, and various levels of chapter books. Resources unique to Birkenhead Library include the Chelsea Sugar Archives and its local history photo collection. Apart from these special collections most material is available for lending. Exceptions include newspapers, a Reference Collection interfiled amongst the main collection, a Quick Reference Collection, and a depository of council documents and other official publications (pictured). The front page of the North Shore Libraries website is itself a web portal for various council and library resources, including the catalogue. Public space The library has also tried to provide public space for various activities, such as study and leisure reading, though its history is marked by a struggle to do this consistently. The lack of space meant the popularity of the original library was something of an embarrassment. The 1968 building was more spacious, especially after later alterations: 1973 saw the addition of the mezzanine floor, and 1993 the addition of a Young Adult room, as well as a Large Print lounge. However, there was little room for much expansion, which led to the curtailment of some service development. This was one of the reasons for the new building project started in 2005. The temporary location in the Leisure Centre offered two tables in the magazine-newspaper-computer area, along with a few sofa chairs. The Children's section also had some seating (pictured). However, after pressure from Cr Hartley, the children's play equipment was cut back by more than half: the original plan had been to spend $80,000 on the children's area, but the final cost was only $30,000, and the landscaping was also slashed, a reduction that is reflected in the final arrangements. Mobile library Origins Located in central Highbury, the library is about seven miles (11 km) distant from the more remote areas of Beach Haven and Birkdale. As a consequence, from the mid-1960s there was a persistent call to establish a more convenient branch location. Petitioned by residents, the Borough Council considered the possibility of setting up something in the Beach Haven hall on a temporary basis, to see if it would take. They asked the then Chief Librarian, Ann Clegg, to prepare a report looking into the details. Her conclusion that a branch was an expensive option, and that it would make more sense to expand the existing library, aggrieved locals. 
They felt her assertion that there was not enough demand by "serious readers" was a misrepresentation of the community's ability and very real need; while the three councillors who'd campaigned on a promise of getting something done proclaimed the report biased. The Beach Haven Residents and Ratepayers Association started a petition and gathered 700 signatures. There were angry letters in the paper. In the event Birkenhead bought the mobile van off Takapuna in 1982. This was a 1949 Bedford chassis with a purpose-built body that had already been in service for 35 years, much of it as the first mobile library in Auckland. In fact, as part of the Takapuna City Council in 1977 it had been contracted to visit the outer Birkenhead area once a week. This was reminiscent of the Country Library van, a national service which used to visit Birkenhead Library itself several times a year during an earlier era. Years of service With its purchase Birkenhead greatly expanded the mobile service. Capable of stocking up to 2000 items the van now went out five days a week with a full range of items from adult fiction, to magazines, picture books and puzzles, constantly reinvigorated from the main library. As well, it provided a community noticeboard. The van stopped at a different place each day, generally staying between 10am and 4:30pm, closing only for lunch and tea-breaks. This was a length of time commensurate with weekend services. Within a few years though the more common practice was adopted of spending less time at a greater variety of locations. Initially there were two staff to cope with the influx of registrations, but the sole position was quickly established with Cynthia McKenzie as Birkenhead's first Mobile Librarian. Over its decade of existence the Mobile had half a dozen different librarians, who had to cope with double de-clutching, a leaking roof and stifling heat, as well as the usual duties of a librarian. Retirement Issues dropped, and in 1988 the service was reduced from 5 to 2 days a week, in part perhaps due to increasing mechanical difficulties. The once famously reliable van had problems with its radiator ensuring it had to be constantly stopped and attended by the last Mobile Librarian, Malcolm Fletcher. Then it blew a head gasket. With the onset of amalgamation of Birkenhead into the wider North Shore Libraries it was superseded by the system Mobile anyway, and the Bedford was eventually retired in 1992. Not wanted by the Museum of Transport & Technology or the Devonport Museum it eventually went to the North Shore Vintage Car Club. Over its eleven years of operation it had issued over 163 thousand items. Webpage development The North Shore Libraries catalogue was launched onto the Internet in June 1995. Shortly afterwards the branches began to demonstrate general internet usage to the public, then to roll out access. By May 1996 Birkenhead was having daily demonstrations. About this time, North Shore Libraries put up their first website. Datacom, whose main role was the provision of the libraries' online catalogue, created the website too, using AOLpress 1.2. This included the establishing of separate webpages for each branch. A green look 1997–2003 Birkenhead Library's own subsidiary webpage was already well established then, before its subsequent recording on the Internet Archives. It was sited two clicks in from the main page. There was a photo, and the predominant green of the layout matched the official Council colour used in its logo. 
From this static front a number of subpages could be accessed. These detailed basic facts about the library, such as location, opening hours, and staff contact details. This was to remain unchanged for about five years, apart from minor alterations, such as the inclusion of a branch phone number prominently displayed on the front page, and updates to reflect changes in staffing. White with columns 2003–2009 In 2003 a more elaborate three-column style was adopted. This marked a shift from Datacom's maintenance to the work of Mike Copley and Trine Romlund hired specifically to build a more functional and professional-looking website for the entire North Shore Libraries. They used PHP templates. Features included a range of photo thumbs illustrating aspects of the library and its services, and a brief note on the history of the library. Three significantly detailed related pages were also added: Birkenhead Local History Books for Sale, Birkenhead Library Collections and Services, and its subpage on the Chelsea Archives. This remained the style and predominant content of the Birkenhead Library webpage into 2009. Only some minor details changed, such as the link from the North Shore Libraries main page reduced from three to two clicks with the utilisation of drop-down menus; and an increased cross-linking of hyperlinks throughout the North Shore Libraries website. Current display As part of the redesign of the North Shore Libraries website in July 2009 the Birkenhead page was revamped to its current layout. Artwork and exhibitions The library has purchased or had donated a variety of artworks: Two bound folios of Charles Goldie prints and watercolour prints of Charles Heaphy. Donated in 1981 and 1983 respectively by Bob & Norma Inward, valued at $750 each. Bush walk painting. Presented by the Birkenhead Licensing Trust, valued at $1500. Also "some pottery" and a leatherbound album of the 1988 Birkenhead centenary. "Sands of Time in Piha," painting by Michelle Stuart. Donated by Birkenhead Licensing Trust. "Polymorphous," a bronzed ceramic sculpture, donated by Ian Firth. "Nga Aho Matauranga" (Connections of knowledge) by Toi Te Ritio Mahi. "Island Night," handscreen printed acrylic by Sue Pearson. Six mural panels. Were located on streetfront alongside main entrance of Civic Reserve Library. Featuring local history heritage of early pioneers, horticulture, sugar factory and township. Done by students from Birkenhead College. There have also been a number of exhibitions, often work of local artists. These were usually presented on the mezzanine floor of the Civic Reserve library. The first exhibition was a display of books in 1955. Subjects ranged from "poetry to sheep mustering," and included works by William Satchell and Katherine Mansfield. The oldest item was a bible in Maori published in 1840. During the North Shore Arts Festival of 1966 there was an exhibition devoted exclusively to Maori books, sculpture and painting. Other exhibitions include paintings by Pauline Thompson, Linda Mcneur-Wismer, Betty Eddington, and Susan Durrant, pottery by Peter Collis and Peter Shearer, prints by Julienne Francis, glass sculptures by Carl Houser, and copperwork by Andrew Campbell. A fabric art collection by locals was instigated by Rata Graham and displayed as part of the library's centenary celebrations. Support groups Birkenhead library has had a number of support groups. 
Volunteers were instrumental in staffing the library in the early days, while the Birkenhead Rotary Club took down the old council quarters in preparation for the new 1968 building. They also donated money to set up the Reference Collection, and later, in 1982, they started the first collection of talking books. Similarly, the Plunket club in 1980 raised $200 to buy 35 LP recordings of children's fairytales, songs and rhymes. The Friends of Birkenhead Library were established in November 1990, under the patronage of Keith Sinclair, and continued under Thea Muldoon from 1994. They have advocated strongly for the library, most recently on behalf of the new building, drumming up support through meetings with the Community Board and in the local paper. Along with Plunket, they spoke up for the library at the Environment Court hearing. The Friends have also raised money for various equipment, such as listening posts and the library's first camera, a Pentax. Speakers at their events have been diverse and have included historian Claudia Orange, writers Muriel Fisher, Sheridan Keith, and Rosemary Menzies, as well as Ann Hartley, a former mayor, Jenny Kirk, a former councillor, and Sergio Gulyaev, a Russian astronomer. Usage statistics Published statistics extend to 2000, with most detail being recorded for the 1990s, when Annual Reports were written for each branch. These statistics give an indication of usage of the library by a variety of measures including membership, issues (yearly and monthly), door count, stock size and number of reference queries. Membership No statistical correlation has been done with population growth, though the latter has been cited generally in connection with library usage. Membership numbers have also been recorded, both in their own right and as a proportion of the total population. So, for example, in 1971 there were 8,752 members of a borough population of 15,825. That this meant over half the population were members was duly noted. However, by 1992 the population had increased to 31,860, while membership had only risen to 13,162. By 2003 there were 150 new registrations a month. Issues From the library's inception issues have risen steadily with minor fluctuations, from a little over 8,000 items in 1950 to over 300,000 at the turn of the century (pictured). To date the peak recorded year of the published data has been 1998. In 1957, when issues dropped for the first time, this was attributed to the opening of the Northcote Library. Similarly, while the Civic Reserve building was being constructed the library was relocated to a temporary shop site, and the subsequent fall-off in 1968 was attributed to this. Other trends visible from the graph include a dip in the 1970s and a rise in the 1990s. Data on the impact of the shift to the Leisure Centre has not been made publicly available yet. Over its decade of existence the Birkenhead Mobile Library issued nearly 15,000 items a year. At its peak, in 1984, it issued 24,128 items, equivalent to more than 10% of the main library's issues, though on average the figure was about half that. On the other hand, Birkenhead, as a subset of the whole North Shore Libraries system, has averaged 16% of the total issues. 
This figure excludes those early years when there were no other libraries on the North Shore, but includes the introduction of branches at Takapuna, Northcote, Devonport, East Coast Bays, Glenfield, and the system Mobile, as well as the start and cessation of Birkenhead's Mobile, the impact of amalgamation, and the shift to and from various buildings. Notes Annotated bibliography Birkenhead Heritage Society (2020). A History of Birkenhead Library: 1901 to 2010. This is described as an updated history of the library, incorporating contributions from members of the society. Christie, Colleen. (1988). Back then: oral history interviews from the Birkenhead Public Library collection, vols 1–3, Birkenhead: Birkenhead City Council. Recollections by retired local residents. As such, no sourcing or verification of claims within. Fisher, Muriel and Hilder, Wenman. (1969). Birkenhead: the kauri suburb, Birkenhead: Birkenhead Borough Council. A couple of pages on the library only. Graham, Rata. (1992). Birkenhead Library: a history, Birkenhead: North Shore Libraries. Main resource putting the library in historical context, with some analysis. Haddon, Kathy. (1993). Birkenhead: the way we were, Birkenhead: North Shore City Council. A couple of pages on the library only. Internet Archive. Contains snapshots of early stages of the North Shore Libraries website. Does not include images used or many subpages. No records of various OPACs or DRA software used. North Shore City Council. North Shore City district plan (operative in part) decisions on submissions and further submissions in respect of plan change 14: re-zone Nell Fisher Reserve, Birkenhead. Takapuna: North Shore, 2006. In particular, the Birkenhead-Northcote Community Board and the Community Services & Parks Committee. Full text of minutes, but appended reports not included. North Shore City Council media releases. Often published verbatim and unattributed in the North Shore Times. North Shore Libraries. Annual reports: 1990–2000, Takapuna, North Shore City Council. Separate volumes for each year. Brief records of events for each branch, plus statistics. Little analysis, explanation or forecasting other than in the City Librarian's prefaces. North Shore Times, and other local newspapers. Different variants of NST can be confusing. On 23 June 1966, the North Shore Times and the North Shore Advertiser merged to become the North Shore Times Advertiser. Then on 9 March 2004, the name changed to North Shore Times. For an index see the North Shore Times index. White & White Ltd. North Shore City Council Birkenhead building feasibility study. Tauranga: White & White Ltd, 2000, p4. Considers possibilities of refurbishment and complete rebuild options, i.e. well before any actual plan eventuated. Birkenhead, New Zealand Libraries in Auckland Libraries established in 1949 1949 establishments in New Zealand Architectural controversies 2000s architecture in New Zealand
Birkenhead Library
[ "Engineering" ]
9,059
[ "Architectural controversies", "Architecture" ]
9,625,980
https://en.wikipedia.org/wiki/Phoebe%20%28daughter%20of%20Leucippus%29
In Greek mythology, Phoebe (associated with phoîbos, "shining") was a Messenian princess. Family Phoebe was the daughter of Leucippus and Philodice, daughter of Inachus. She and her sister Hilaera are commonly referred to as Leucippides (that is, "daughters of Leucippus"). In another account, they were the daughters of Apollo. Phoebe married Pollux and bore him a son, named either Mnesileos or Mnasinous. Mythology Phoebe and Hilaera were priestesses of Athena and Artemis, and betrothed to Idas and Lynceus, the sons of Aphareus. Castor and Pollux were charmed by their beauty and carried them off. When Idas and Lynceus tried to rescue their brides-to-be, they were both slain, but Castor himself fell. Pollux persuaded Zeus to allow him to share his immortality with his brother. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Publius Ovidius Naso, Fasti translated by James G. Frazer. Online version at the Topos Text Project. Publius Ovidius Naso, Fasti. Sir James George Frazer. London; Cambridge, MA. William Heinemann Ltd.; Harvard University Press. 1933. Latin text available at the Perseus Digital Library. Sextus Propertius, Elegies from Charm. Vincent Katz. trans. Los Angeles. Sun & Moon Press. 1995. Online version at the Perseus Digital Library. Latin text available at the same website. Theocritus, Idylls from The Greek Bucolic Poets translated by Edmonds, J. M. Loeb Classical Library Volume 28. Cambridge, MA. Harvard University Press. 1912. Online version at theoi.com Theocritus, Idylls edited by R. J. Cholmeley, M.A. London. George Bell & Sons. 1901. Greek text available at the Perseus Digital Library. Princesses in Greek mythology Children of Apollo Mythological rape victims Mythological Messenians Castor and Pollux Greek mythological priestesses
Phoebe (daughter of Leucippus)
[ "Astronomy" ]
671
[ "Castor and Pollux", "Astronomical myths" ]
9,626,396
https://en.wikipedia.org/wiki/Watson%20capsule
The Watson peroral small intestinal biopsy capsule was a system used from the 1960s to obtain small intestinal wall biopsies in patients with suspected coeliac disease and other diseases affecting the proximal small bowel. A similar device, known as the Crosby-Kugler capsule, was also developed in the 1950s and used for similar purposes. References Medical equipment
Watson capsule
[ "Biology" ]
76
[ "Medical equipment", "Medical technology" ]
9,627,437
https://en.wikipedia.org/wiki/Siemens%20C72
The Siemens C72 is a mobile phone based on the C65. It features a built-in camera capable of taking pictures in VGA resolution (640×480 pixels), but it is not capable of recording video, although workarounds may be possible. External links User Manuals / User Guides for C72 from Manualsmania C72 Mobile phones introduced in 2005 Mobile phones with infrared transmitter
Siemens C72
[ "Technology" ]
83
[ "Mobile technology stubs", "Mobile phone stubs" ]
9,627,598
https://en.wikipedia.org/wiki/Puccinia%20hordei
Puccinia hordei is a species of rust fungus. A plant pathogen, it can cause leaf rust of barley, also known as brown rust of barley. It was originally found on the dry leaves of Hordeum vulgare in Germany. Taxonomy Synonyms Aecidium ornithogaleum Dicaeoma anomalum Dicaeoma holcinum Dicaeoma hordei Nielsenia hordei Nigredo hordeina Nigredo hordei Pleomeris holcina Pleomeris hordei Pleomeris simplex Pleomeris triseti Puccinia anomala Puccinia fragosoi Puccinia holcina Puccinia hordei Puccinia hordei-murini Puccinia loliina Puccinia recondita f.sp. holci Puccinia recondita f.sp. holcina Puccinia recondita f.sp. triseti Puccinia recondita f.sp. tritici Puccinia schismi Puccinia schismi var. loliina Puccinia simplex Puccinia straminis var. simplex Puccinia triseti Puccinia vulpiae-myuri Puccinia vulpiana Uromyces hordei Host resistance When Johnston et al. (2013) discovered severe susceptibility in the Golden Promise cultivar, it was considered to be the most susceptible variety in the world. Soon thereafter, however, Yeo et al. (2014) found that SusPtrit was slightly more susceptible. These results alter the meaning of so basic a term as "fully susceptible" to brown rust. In 2007, several resistance genes for this pathogen, including a receptor-like kinase (RLK), peroxidases, superoxide dismutase and thaumatin, were identified in barley. References Fungal plant pathogens and diseases Barley diseases Leaf diseases hordei Fungi described in 1871 Fungus species
Puccinia hordei
[ "Biology" ]
408
[ "Fungi", "Fungus species" ]
9,627,698
https://en.wikipedia.org/wiki/Child%20development
Child development involves the biological, psychological and emotional changes that occur in human beings between birth and the conclusion of adolescence. It is—particularly from birth to five years— a foundation for a prosperous and sustainable society. Childhood is divided into three stages of life which include early childhood, middle childhood, and late childhood (preadolescence). Early childhood typically ranges from infancy to the age of 6 years old. During this period, development is significant, as many of life's milestones happen during this time period such as first words, learning to crawl, and learning to walk. Middle childhood/preadolescence or ages 6–12 universally mark a distinctive period between major developmental transition points. Adolescence is the stage of life that typically starts around the major onset of puberty, with markers such as menarche and spermarche, typically occurring at 12–14 years of age. It has been defined as ages 10 to 24 years old by the World Happiness Report WHR. In the course of development, the individual human progresses from dependency to increasing autonomy. It is a continuous process with a predictable sequence, yet has a unique course for every child. It does not always progress at the same rate and each stage is affected by the preceding developmental experiences. As genetic factors and events during prenatal life may strongly influence developmental changes, genetics and prenatal development usually form a part of the study of child development. Related terms include developmental psychology, referring to development from birth to death, and pediatrics, the branch of medicine relating to the care of children. Developmental change may occur as a result of genetically controlled processes, known as maturation, or environmental factors and learning, but most commonly involves an interaction between the two. Development may also occur as a result of human nature and of human ability to learn from the environment. There are various definitions of the periods in a child's development, since each period is a continuum with individual differences regarding starting and ending. Some age-related development periods with defined intervals include: newborn (ages 0 – 2 months); infant (ages 3 – 11 months); toddler (ages 1 – 2 years); preschooler (ages 3 – 4 years); school-aged child (ages 5 – 12 years); teens (ages 13 – 19 years); adolescence (ages 10 - 25 years); college age (ages 18 - 25 years). Parents play a large role in a child's activities, socialization, and development; having multiple parents can add stability to a child's life and therefore encourage healthy development. Another influential factor in children's development is the quality of their care. Child-care programs may be beneficial for childhood development such as learning capabilities and social skills. The optimal development of children is considered vital to society and it is important to understand the social, cognitive, emotional, and educational development of children. Increased research and interest in this field has resulted in new theories and strategies, especially with regard to practices that promote development within the school systems. Some theories seek to describe a sequence of states that compose child development. Theories Ecological systems Also called "development in context" or "human ecology" theory, ecological systems theory was originally formulated by Urie Bronfenbrenner. 
It specifies four types of nested environmental systems, with bi-directional influences within and between the systems; they are the microsystem, mesosystem, exosystem, and macrosystem. Each system contains roles, norms, and rules that can powerfully shape development. Since its publication in 1979, Bronfenbrenner's major statement of this theory, The Ecology of Human Development, has had widespread influence on the way psychologists and others approach the study of human beings and their environments. As a result of this influential conceptualization of development, these environments – from the family to economic and political structures – have come to be viewed as part of the life course from childhood through adulthood. Piaget Jean Piaget was a Swiss scholar who began his studies in intellectual development in the 1920s. Interested in the ways animals adapt to their environments, he published his first scientific article when he was 10 years old, and he went on to pursue a Ph.D. in zoology, during which he became interested in epistemology. Epistemology branches off from philosophy and deals with the origin of knowledge, which Piaget believed came from psychology. After travelling to Paris, he began working on the first "standardized intelligence test" at Alfred Binet's laboratories, which influenced his career greatly. During this intelligence testing he began developing a profound interest in the way children's intellect works. As a result, he developed his own laboratory, where he spent years recording children's intellectual growth and attempting to find out how children develop through various stages of thinking. This led Piaget to develop four important stages of cognitive development: the sensorimotor stage (birth to age 2), the preoperational stage (age 2 to 7), the concrete-operational stage (ages 7 to 12), and the formal-operational stage (ages 11 to 12, and thereafter). Piaget concluded that adaptation to an environment (behaviour) is managed through schemas, and that adaptation occurs through assimilation and accommodation. Stages Sensory Motor: (birth to about age 2) In the first stage of Piaget's theory, infants rely on the basic senses of vision and hearing, together with their developing motor skills. In this stage, knowledge of the world is limited but is constantly developing due to the child's experiences and interactions. According to Piaget, when an infant reaches about 7–9 months of age they begin to develop what he called object permanence, meaning the child now has the ability to understand that objects keep existing even when they cannot be seen. An example of this would be hiding the child's favorite toy under a blanket; although the child cannot physically see it, they still know to look under the blanket. Preoperational: (begins about the time the child starts to talk, around age 2) During this stage, young children begin analyzing their environment using mental symbols, including words and images; the child will begin to apply these in their everyday lives as they come across different objects, events, and situations. However, Piaget's main focus in this stage, and the reason why he named it "preoperational," is that children at this point are not able to apply specific cognitive operations, such as mental math. In addition to symbolism, children start to engage in pretend play, pretending to be people they are not, for example teachers or superheroes; they sometimes use different props to make this pretend play more real. 
Some weaknesses in this stage are that children who are about 3–4 years old often display what is called egocentrism, meaning the child is not able to see someone else's point of view, and they feel as if every other person is experiencing the same events and feelings that they are. However, at about 7, the thought processes of children are no longer egocentric and are more intuitive, meaning they now think about the way something looks, though they do not yet use rational thinking.
Concrete: (about first grade to early adolescence) In this stage, children between the ages of 7 and 11 use appropriate logic to develop cognitive operations and begin applying this new way of thinking to different events they encounter. Children in this stage incorporate inductive reasoning, which involves drawing conclusions from other observations in order to make a generalization. Unlike in the preoperational stage, children can now change and rearrange mental images and symbols to form a logical thought; an example of this is "reversibility," where the child now knows to reverse an action by doing the opposite.
Formal operations: (around early adolescence to mid/late adolescence) The final stage of Piaget's cognitive development defines a child as now having the ability to "think more rationally and systematically about abstract concepts and hypothetical events". Some strengths during this time are that the child or adolescent begins forming their identity and begins understanding why people behave the way they behave. Some weaknesses include the child or adolescent developing egocentric thoughts, including the imaginary audience and the personal fable. An imaginary audience is when an adolescent feels that the world is just as concerned and judgemental of anything the adolescent does as they themselves are; an adolescent may feel as if they are "on stage" and everyone is a critic and they are the ones being critiqued. A personal fable is when the adolescent feels that they are a unique person and everything they do is unique. They feel as if they are the only ones that have ever experienced what they are experiencing and that they are invincible and nothing bad will happen to them; bad things only happen to other people.
Vygotsky
Vygotsky, a Russian theorist, proposed the sociocultural theory of child development. During the 1920s–1930s, while Piaget was developing his own theory, Vygotsky was an active scholar, and at that time his theory was said to be "recent" because it was translated out of Russian and began influencing Western thinking. He posited that children learn through hands-on experience, as Piaget suggested. However, unlike Piaget, he claimed that timely and sensitive intervention by adults when a child is on the edge of learning a new task (called the zone of proximal development) could help children learn new tasks. This technique, called "scaffolding," builds new knowledge onto the knowledge children already have to help the child learn. An example of this might be when a parent "helps" an infant clap or roll their hands to the pat-a-cake rhyme, until they can clap and roll their hands themselves. Vygotsky was strongly focused on the role of culture in determining the child's pattern of development. He argued that "Every function in the child's cultural development appears twice: first, on the social level, and later, on the individual level; first, between people (interpsychological) and then inside the child (intrapsychological).
This applies equally to voluntary attention, to logical memory, and to the formation of concepts. All the higher functions originate as actual relationships between individuals." Vygotsky felt that development was a process, and saw that during periods of crisis there was a qualitative transformation in the child's mental functioning.
Attachment
Attachment theory, originating in the work of John Bowlby and developed by Mary Ainsworth, is a psychological, evolutionary and ethological theory that provides a descriptive and explanatory framework for understanding interpersonal relationships. Bowlby's observations led him to believe that close emotional bonds or "attachments" between an infant and their primary caregiver were an important requirement for forming "normal social and emotional development".
Erik Erikson
Erikson, a follower of Freud, synthesized his theories with Freud's to create what is known as the "psychosocial" stages of human development. Spanning from birth to death, they focus on "tasks" at each stage that must be accomplished to successfully navigate life's challenges. Erikson's eight stages consist of the following:
Trust vs. mistrust (infant)
Autonomy vs. shame (toddlerhood)
Initiative vs. guilt (preschooler)
Industry vs. inferiority (young adolescent)
Identity vs. role confusion (adolescent)
Intimacy vs. isolation (young adulthood)
Generativity vs. stagnation (middle adulthood)
Ego integrity vs. despair (old age)
Behavioral
John B. Watson's behaviorism theory forms the foundation of the behavioral model of development. Watson explained human psychology through the process of classical conditioning, and he believed that all individual differences in behavior were due to different learning experiences. He wrote extensively on child development and conducted research, such as the Little Albert experiment, which showed that a phobia could be created by classical conditioning. Watson was instrumental in the modification of William James' stream of consciousness approach to construct behavior theory. He also helped bring a natural science perspective to child psychology by introducing objective research methods based on observable and measurable behavior. Following Watson's lead, B.F. Skinner further extended this model to cover operant conditioning and verbal behavior. Skinner used the operant chamber, or Skinner box, to observe the behavior of animals in a controlled situation and showed that behaviors are influenced by the environment. Furthermore, he used reinforcement and punishment to shape the desired behavior. Children's behavior can strongly depend on their psychological development.
Freud
Sigmund Freud divided development, from infancy onward, into five stages. In accordance with his view that the sexual drive is a basic human motivation, each stage centered around the gratification of the libido within a particular area, or erogenous zone, of the body. He argued that as humans develop, they become fixated on different and specific objects throughout their stages of development. Each stage contains a conflict which requires resolution to enable the child to develop.
Other
The use of dynamical systems theory as a framework for the consideration of development began in the early 1990s and has continued into the present. This theory stresses nonlinear connections (e.g., between earlier and later social assertiveness) and the capacity of a system to reorganize as a phase shift that is stage-like in nature.
Another useful concept for developmentalists is the attractor state, a condition (such as teething or stranger anxiety) that helps to determine apparently unrelated behaviors as well as related ones. Dynamic systems theory has been applied extensively to the study of motor development; the theory also has strong associations with some of Bowlby's views about attachment systems. Dynamic systems theory also relates to the concept of the transactional process, a mutually interactive process in which children and parents simultaneously influence each other, producing developmental change in both over time.
The "core knowledge perspective" is an evolutionary theory in child development that proposes "infants begin life with innate, special-purpose knowledge systems referred to as core domains of thought". These five domains are each crucial for survival, and prepare us to develop key aspects of early cognition; they are the physical, numerical, linguistic, psychological, and biological domains.
Beginning of cognition
The most influential theories emphasize social interaction's essential contribution to child development from birth (e.g., the theories of Bronfenbrenner, Piaget, Vygotsky). This means that organisms with simple reflexes begin to cognize the environment in collaboration with caregivers. However, different viewpoints on this issue (the binding problem and the primary data entry problem) challenge the ability of children in this stage of development to meaningfully interact with the environment. Recent advances in neuroscience, together with findings from physiology and physics, have prompted reconsideration of the knowledge gap concerning how social interaction gives rise to cognition in newborns and infants. Developmental psychologist Michael Tomasello contributed to knowledge about the origins of social cognition in children by developing the notion of Shared intentionality. He posed ideas about unaware processes during social learning after birth to explain the processes that shape intentionality. Other researchers developed the notion further by observing this collaborative interaction in psychophysiological research. This concept has been expanded to the intrauterine period. Igor Val Danilov, a research professor in bioengineering at Liepaja University, developed Michael Tomasello's idea by introducing a Mother-Fetus Neurocognitive model: a hypothesis of neurophysiological processes occurring during Shared intentionality. It explains the onset of childhood development, describing this cooperative interaction at different levels of bio-system complexity, from interpersonal dynamics to neuronal interactions. The Shared intentionality hypothesis argues that nervous system synchronization provides non-local neuronal coupling in a mother-child pair, contributing to the proper development of the child's nervous system from the embryo onward. From the cognitive development perspective, this non-local neuronal coupling enables the mother to indicate the relevant sensory stimulus of an actual cognitive problem to the child, helping the child to grasp the perception of the object. A growing body of evidence in neuroscience supports the Shared intentionality approach. Hyperscanning research studies show inter-brain coordinated activity under conditions without communication in pairs while subjects are solving a shared cognitive task. This increased inter-brain activity is observed in pairs and differs from the result in the condition where subjects solve a similar task alone.
The significance of this knowledge is that although Shared intentionality enables social cooperation to be achieved unconsciously, it is what constitutes society. While this social interaction modality facilitates child development, it also contributes to grasping social norms and shaping individual values in children.
Continuity and discontinuity
Although the identification of developmental milestones is of interest to researchers and caregivers, many aspects of development are continuous and do not display noticeable milestones. Continuous changes, like growth in stature, involve fairly gradual and predictable progress toward adult characteristics. When developmental change is discontinuous, however, researchers may identify not only milestones of development, but related age periods often called stages. These stages are periods of time, often associated with known age ranges, during which a behavior or physical characteristic is qualitatively different from what it is at other ages. When an age period is referred to as a stage, the term implies not only this qualitative difference, but also a predictable sequence of developmental events, such that each stage is preceded and followed by specific other periods associated with characteristic behavioral or physical qualities. Stages of development may overlap or be associated with specific other aspects of development, such as speech or movement. Even within a particular developmental area, transition into a stage may not mean that the previous stage is completely finished. For example, in Erikson's stages, he suggests that a lifetime is spent in reworking issues that were originally characteristic of a childhood stage. Similarly, the theorist of cognitive development, Piaget, described situations in which children could solve one type of problem using mature thinking skills, but could not accomplish this for less familiar problems, a phenomenon he called horizontal decalage.
Mechanisms
Although developmental change runs parallel with chronological age, age itself cannot cause development. The basic causes of developmental change are genetic and environmental factors. Genetic factors are responsible for cellular changes like overall growth, changes in proportion of body and brain parts, and the maturation of aspects of function such as vision and dietary needs. Because genes can be "turned off" and "turned on", the individual's initial genotype may change in function over time, giving rise to further developmental change. Environmental factors affecting development may include both diet and disease exposure, as well as social, emotional, and cognitive experiences. However, examination of environmental factors also shows that children can survive a fairly broad range of environmental experiences.
Rather than acting as independent mechanisms, genetic and environmental factors often interact to cause developmental change. Some aspects of child development are notable for their plasticity, or the extent to which the direction of development is guided by environmental factors as well as initiated by genetic factors. When an aspect of development is strongly affected by early experience, it is said to show a high degree of plasticity; when the genetic make-up is the primary cause of development, plasticity is said to be low. Plasticity may involve guidance by endogenous factors like hormones as well as by exogenous factors like infection.
One way the environment guides development is through experience-dependent plasticity, in which behavior is altered as a result of learning from the environment. Plasticity of this type can occur throughout the lifespan and involve many kinds of behavior, including some emotional reactions. A second type of plasticity, experience-expectant plasticity, involves the strong effect of specific experiences during limited sensitive periods of development. For example, the coordinated use of two eyes, and the experience of a single three-dimensional image rather than the two-dimensional images created by each eye, depends on experiences with vision during the second half of the first year of life. Experience-expectant plasticity works to fine-tune aspects of development that cannot proceed to optimum outcomes as a result of genetic factors alone.
In addition to plasticity, genetic-environmental correlations may function in several ways to determine the mature characteristics of the individual. Genetic-environmental correlations are circumstances in which genetic factors interact with the environment to make certain experiences more likely to occur. In passive genetic-environmental correlation, a child is likely to experience a particular environment because his or her parents' genetic make-up makes them likely to choose or create such an environment. In evocative genetic-environmental correlation, the child's genetically produced characteristics cause other people to respond in certain ways, providing a different environment than might occur for a genetically different child; for instance, a child with Down syndrome may be protected more and challenged less than a child without Down syndrome. Finally, an active genetic-environmental correlation is one in which the child chooses experiences that in turn have their effect; for instance, a muscular, active child may choose after-school sports experiences that increase athletic skills, but may forgo music lessons. In all of these cases, it becomes difficult to know whether the child's characteristics were shaped by genetic factors, by experiences, or by a combination of the two.
Asynchronous development
Asynchronous development occurs when a child's cognitive, physical, and/or emotional development proceed at different rates. This is common for gifted children when their cognitive development outpaces their physical and/or emotional maturity, such as when a child is academically advanced and skipping school grade levels yet still cries over childish matters and/or still looks their age. Asynchronous development presents challenges for schools, parents, siblings, peers, and the children themselves, such as making it hard for the child to fit in or frustrating adults who have become accustomed to the child's advancement in other areas.
Research issues and methods
Research questions include:
What develops? What relevant aspects of the individual change over a period of time?
What are the rate and speed of development?
What are the mechanisms of development – what aspects of experience and heredity cause developmental change?
Are there typical individual differences in the relevant developmental changes?
Are there population differences in this aspect of development (for example, differences in the development of boys and of girls)?
Empirical research that attempts to answer these questions may follow a number of patterns.
Initially, observational research in naturalistic conditions may be needed to develop a narrative describing and defining an aspect of developmental change, such as changes in reflex reactions in the first year. Observational research may be followed by correlational studies, which collect information about chronological age and some type of development, such as increasing vocabulary; such studies examine the characteristics of children at different ages. Other methods may include longitudinal studies, in which a group of children is re-examined on a number of occasions as they get older; cross-sectional studies, where groups of children of different ages are tested once and compared with each other; or there may be a combination of these approaches. Some child development studies that examine the effects of experience or heredity by comparing characteristics of different groups of children cannot use a randomized design, while other studies use randomized designs to compare outcomes for groups of children who receive different interventions or educational treatments.
Infant research methods
When conducting psychological research on infants and children, certain key aspects need to be considered. These include that infants cannot talk, have a limited behavioral repertoire, cannot follow instructions, have a short attention span, and that, due to how rapidly infants develop, methods need to be updated for different ages and developmental stages.
The high-amplitude sucking technique (HAS) is a common way to explore infants' preferences, and is appropriate from birth to four months since it takes advantage of infants' sucking reflex. When this is being measured, researchers will code a baseline sucking rate for each baby before exposing them to the item of interest. A common finding of HAS is a relaxed, natural sucking rate when infants are exposed to something familiar, like their mother's voice, compared to an increased sucking rate around novel stimuli.
The preferential-looking technique was a breakthrough made by Robert L. Fantz in 1961. In his experiments, he would show the infants in his study two different stimuli. If an infant looks at one image longer than the other, two things can be inferred: the infant can see that they are two different images, and the infant is showing preference to one image in some capacity. Depending on the experiment, infants may prefer to look at the novel and more interesting stimulus or they may look at the more comforting and familiar image. Eye tracking is a straightforward way of looking at infants' preferences. Using eye tracking software, it is possible to see if infants understand commonly used nouns by tracking their eyes after they are cued with the target word.
Another unique way to study infants' cognition is through habituation, which is the process of repeatedly showing a stimulus to an infant until they give no response. Then, when infants are presented with a novel stimulus, they show a response, which reveals patterns of cognition and perception. Using this study method, many different cognitive and perceptual ideas can be studied. Looking time, a common measure of habituation, is studied by recording how long an infant looks at a stimulus before they are habituated to it. Then, researchers record if an infant becomes dishabituated to a novel stimulus.
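As a rough illustration of how looking-time data from such a habituation procedure might be summarized, the sketch below uses invented trial values, an invented habituation criterion, and hypothetical function names; it is not taken from any particular study, and real laboratories use more elaborate criteria and statistics.

```python
# Hypothetical looking times (seconds) for one infant across repeated
# presentations of a familiarization stimulus, followed by one novel test trial.
# All numbers and thresholds here are invented for illustration only.
familiarization_trials = [12.0, 10.5, 8.2, 6.9, 5.1, 4.0, 3.8]
novel_test_trial = 9.4

def is_habituated(trials, criterion=0.5, window=3):
    """Treat the infant as habituated when mean looking time over the last
    `window` trials drops below `criterion` times the mean of the first
    `window` trials (a simplified version of a common style of criterion)."""
    baseline = sum(trials[:window]) / window
    recent = sum(trials[-window:]) / window
    return recent < criterion * baseline

def dishabituation(trials, novel, window=3):
    """Recovery of looking to the novel stimulus relative to the final
    familiarization trials; a positive value suggests discrimination."""
    recent = sum(trials[-window:]) / window
    return novel - recent

if is_habituated(familiarization_trials):
    print("Habituation criterion reached.")
    print("Looking-time recovery (s):",
          round(dishabituation(familiarization_trials, novel_test_trial), 2))
```

In this toy example the infant's looking recovers by several seconds for the novel stimulus, which is the kind of pattern researchers interpret as evidence of discrimination.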
This habituation method can be used to measure infants' preferences, including preferences for colors, and to study other discriminatory tasks, such as auditory discrimination between different musical excerpts. Another way of studying children is through brain imaging technology, such as magnetic resonance imaging (MRI) and electroencephalography (EEG). MRI can be used to track brain activity, growth, and connectivity in children, and can track brain development from when a child is a fetus. EEG can be used to diagnose seizures and encephalopathy, but the conceptional age of the infant must be considered when analyzing the results.
Ethical considerations
Most of the ethical challenges that exist in studies with adults also exist in studying children, with some notable differences. One such difference concerns informed consent: while it is important that children agree to participate in research, they cannot give legal consent, so parents must give informed consent for their children. Children can consent informally, however, and their continued agreement should be checked reliably through both verbal and nonverbal cues throughout their participation. Also, due to the inherent power structure in most research settings, researchers must consider study designs that protect children from feeling coerced.
Milestones
Milestones are changes in specific physical and mental abilities (such as walking and understanding language) that mark the end of one developmental period and the beginning of another; for stage theories, milestones indicate a stage transition. These milestones, and the chronological age at which they typically occur, have been established via study of when various developmental tasks are accomplished. However, there is considerable variation in when milestones are reached, even between children developing within the typical range. Some milestones are more variable than others; for example, receptive speech indicators do not show much variation among children with typical hearing, but expressive speech milestones can be quite variable.
A common concern in child development is delayed development of age-specific developmental milestones. Preventing, and intervening early in, developmental delays is a significant topic in the study of child development. Developmental delays are identified by comparison with the age variability of a milestone, not with respect to the average age at achievement.
Physical aspects of development
Physical growth
Physical growth in stature and weight occurs for 15–20 years following birth, as the individual changes from their average weight and length at full-term birth to their final adult size. As stature and weight increase, proportions also change, from the relatively large head and small torso and limbs of the neonate, to the adult's relatively small head and long torso and limbs. A child's pattern of growth, as described in references for pediatricians, is in a head-to-toe direction, or cephalocaudal, and in an inward-to-outward pattern (center of the body to the periphery) called proximodistal.
Speed and pattern
The speed of physical growth is rapid in the months after birth, then slows, so birth weight is doubled in the first four months, tripled by 1 year, but not quadrupled until 2 years. Growth then proceeds at a slow rate until a period of rapid growth occurs shortly before puberty (between about 9 and 15 years of age). Growth is not uniform in rate and timing across all parts of the body.
At birth, head size is already relatively near that of an adult, but the lower parts of the body are much smaller than adult size. Thus during development, the head grows relatively little, while the torso and limbs undergo a great deal of growth.
Mechanisms of change
Genetic factors play a major role in determining the growth rate, particularly in the characteristic changes in proportions during early human development. However, genetic factors can produce maximum growth only if environmental conditions are adequate, as poor nutrition, frequent injury, or disease can reduce the individual's adult stature, though even the best environment cannot cause growth to a greater stature than is determined by heredity.
Individual variation versus disease
Individual differences in height and weight during childhood can be considerable. Some of these differences are due to genetic or environmental factors, but individual differences in reproductive maturation strongly influence development at some points. For individuals falling outside these typical variations, the American Association of Clinical Endocrinologists defines short stature as height more than 2 standard deviations below the mean for age and gender, which corresponds to the shortest 2.3% of individuals. In contrast, failure to thrive is usually defined in terms of weight, and can be evaluated either by a low weight for the child's age, or by a low rate of weight gain. A similar term, stunted growth, generally refers to reduced growth rate as a manifestation of malnutrition in early childhood.
Motor skills
Physical abilities change through childhood from the largely reflexive (unlearned, involuntary) movements of young infants to the highly skilled voluntary movements characteristic of later childhood and adolescence.
Definition
"Motor learning refers to the increasing spatial and temporal accuracy of movements with practice". Motor skills can be divided into two categories: basic skills necessary for everyday life and recreational skills, including skills for employment or interest-based skills.
Speed and pattern
The speed of motor development is rapid in early life, as many of the reflexes of the newborn alter or disappear within the first year, and slows later. Like physical growth, motor development shows predictable patterns of cephalocaudal (head to foot) and proximodistal (torso to extremities) development, with movements at the head and in the more central areas coming under control before those of the lower part of the body or the hands and feet. Movement ability develops in stage-like sequences; for example, locomotion at 6–8 months involves creeping on all fours, then proceeds to pulling to stand, "cruising" while holding on to an object, walking while holding an adult's hand, and finally walking independently. By middle childhood and adolescence, new motor skills are acquired by instruction or observation rather than in a predictable sequence. There are executive functions of the brain (working memory, timed inhibition and switching) which are generally considered essential to motor skills, though some argue the reverse dependence: that motor skills are actually precursors to executive function.
Mechanisms
The mechanisms of motor development include some genetic components that determine aspects of muscle and bone strength, as well as the physical size of body parts at a given age. The main areas of the brain involved in motor skills are the frontal cortex, parietal cortex and basal ganglia.
The dorsolateral frontal cortex is responsible for strategic processing, the parietal cortex is important in controlling perceptual-motor integration, and the basal ganglia and supplementary motor cortex are responsible for motor sequences. According to a study showing the relationship between coordination and limb growth in infants, genetic components have a substantial impact on motor development. Intra-limb correlations, like the distance between hip and knee joints, were studied and found to affect the way an infant will walk. There are also genetic factors like the tendency to use the left or right side of the body more, which allows the dominant hand to be predicted early. Sample t-tests showed that, for female babies, there was a significant difference between the left and right sides at 18 weeks and that the right side was usually dominant. Some factors, like the tendency of male infants to have larger and longer arms, are biological constraints that cannot be controlled, yet they have an influence on measures such as when an infant will reach. Overall, there are both sociological and genetic factors that influence motor development.
Nutrition and exercise also determine strength, flexibility, and the ease and accuracy with which a body part can be moved. It has also been shown that the frontal lobe develops postero-anteriorly (from back to front), which is significant in motor development because the hind portion of the frontal lobe is known to control motor functions. This form of development (known as "proportional development") explains why motor functions typically develop relatively quickly during childhood, while logic, which is controlled by the middle and front portions of the frontal lobe, usually will not develop until late childhood or early adolescence. Opportunities to carry out movements help establish the abilities to flex (move toward the trunk) and extend body parts; both capacities are necessary for good motor ability. Skilled voluntary movements such as passing objects from hand to hand develop as a result of practice and learning. Mastery climates are autonomy-supportive climates that a teacher can adopt as a successful learning environment for children, promoting and reinforcing motor skills through the children's own motivation. This promotes participation and active learning in children, which Piaget's theory of cognitive development says is extremely important in early childhood.
Individual differences
Individual differences in motor ability are common and depend in part on the child's weight and build. Infants with smaller, slimmer, and proportionally more mature builds tend to belly crawl and crawl earlier than infants with larger builds. Infants with more motor experience have been shown to belly crawl and crawl sooner. Not all infants belly crawl, however; those who skip this stage are not as proficient in their ability to crawl on their hands and knees. After the infant period, individual differences are strongly affected by opportunities to practice, observe, and be instructed on specific movements. Atypical motor development, such as persistent primitive reflexes beyond 4–6 months or delayed walking, may be an indication of developmental delays or of conditions such as autism, cerebral palsy, or Down syndrome. Lower motor coordination results in difficulties with speed and accuracy, and with their trade-off, in complex tasks.
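As an illustration of the kind of paired left-right comparison described above, the sketch below runs a paired-samples t-test on hypothetical limb measurements using the SciPy library; the numbers are invented, and this is not the analysis from the study itself.

```python
# Hypothetical paired measurements (e.g., right- and left-arm lengths in cm)
# for the same eight infants at 18 weeks; values are invented for illustration.
from scipy import stats

right = [24.1, 23.8, 24.6, 25.0, 23.9, 24.3, 24.8, 24.2]
left = [23.7, 23.5, 24.1, 24.6, 23.8, 23.9, 24.4, 23.9]

# A paired-samples t-test asks whether the mean right-minus-left difference
# within the same infants differs reliably from zero.
t_stat, p_value = stats.ttest_rel(right, left)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A reliably positive difference in such a comparison would be read as right-side dominance at that age, which is the kind of conclusion reported above.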
Children with disabilities
Children with Down syndrome or developmental coordination disorder are late to reach major motor skills milestones such as sucking, grasping, rolling, sitting up, walking, and talking. Children with Down syndrome sometimes have heart problems, frequent ear infections, hypotonia, or undeveloped muscle mass. Children can also be diagnosed with a learning disability, a disability in any of the areas related to language, reading, and mathematics; difficulty with basic reading skills is the most common learning disability. The definition of a learning disability focuses on the difference between a child's academic achievement and their apparent capacity to learn.
Population differences
Regardless of the culture a baby is born into, they are born with a few core domains of knowledge which allow them to make sense of their environment and to build on previous experience by using motor skills such as grasping or crawling. There are some population differences in motor development, with girls showing some advantages in small muscle usage, including articulation of sounds with lips and tongue. Ethnic differences in reflex movements of newborn infants have been reported, suggesting that some biological factor is at work. Cultural differences may encourage learning of motor skills, like using the left hand only for sanitary purposes and the right hand for all other uses, producing a population difference. Cultural factors also play a role in practiced voluntary movements, such as the use of the foot to dribble a soccer ball or the hand to dribble a basketball.
Mental and emotional aspects of development
Cognitive/intellectual
Cognitive development is primarily concerned with ways in which young children acquire, develop, and use internal mental capabilities such as problem-solving, memory, and language.
Mechanisms
Cognitive development has genetic and other biological mechanisms, as is seen in the many genetic causes of intellectual disability. Environmental factors, including food and nutrition, the responsiveness of parents, love, daily experiences, and physical activity, can influence children's early brain development. However, although it is assumed that the brain causes cognition, it is not yet possible to measure specific brain changes and show the cognitive changes they cause. Developmental advances in cognition are also related to experience and learning, especially for higher-level abilities like abstraction, which depend to a considerable extent on formal education.
Speed and pattern
The ability to learn temporal patterns in sequenced actions was investigated in elementary-school-age children. Temporal learning depends upon a process of integrating timing patterns with action sequences. Children ages 6–13 and young adults performed a serial response time task in which a response and a timing sequence were presented repeatedly in a phase-matched manner, allowing for integrative learning. The degree of integrative learning was measured as the slowing in performance that resulted when phase-shifting the sequences. Learning was similar for the children and adults on average but increased with age for the children. Executive function, measured by Wisconsin Card Sorting Test (WCST) performance, as well as a measure of response speed, also improved with age. Finally, WCST performance and response speed predicted temporal learning.
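As a rough illustration of how an integrative-learning index of the kind described above might be computed from mean response times, the sketch below uses invented numbers and a deliberately simplified definition; it is not the study's actual analysis.

```python
# Hypothetical mean response times (ms) from a serial response time task.
# All values are invented for illustration only.
phase_matched_rts = [612, 598, 571, 555, 540, 533]  # training blocks, timing and
                                                    # response sequences aligned
phase_shifted_rts = [604, 611]                      # test blocks with shifted timing

def mean(values):
    return sum(values) / len(values)

# One simple way to express "slowing when the sequences are phase-shifted":
# compare the shifted test blocks with the final, well-practiced matched blocks.
integration_index = mean(phase_shifted_rts) - mean(phase_matched_rts[-2:])
print(f"Integrative learning index: {integration_index:.1f} ms slowing")
```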
Taken together, the results indicate that temporal learning continues to develop in pre-adolescents and that maturing executive function or processing speed may play an important role in acquiring temporal patterns in sequenced actions and in the development of this ability.
Individual differences
There are typical individual differences in the ages at which specific cognitive abilities are achieved, but schooling for children in industrialized countries is based on the assumption that there are no large differences. Delays in cognitive development are problematic for children in cultures that demand advanced cognitive skills for work and for independent living. Everyday cognitive skills include problem-solving, reasoning, and abstract thinking, among many others. In the absence of these life skills, children may struggle to complete work in a timely manner or understand certain tasks they are asked to do. If a delay is noticed, screening can possibly find the source of the issue; if there is no underlying issue, it is important to aid the child by reading with them, playing games with them, or reaching out to professionals who can help.
Population differences
There are few population differences in cognitive development: boys and girls show some differences in their skills and preferences, but there is a great deal of overlap between them. Some differences are seen in fluid reasoning and visual processing: until about the age of four, girls outperform boys in tests of these skills, but by about six or seven, boys and girls score similarly. This is also true of IQ tests, where girls tend to outscore boys, but again the gap lessens with age. Differences in cognitive achievement between different ethnic groups appear to result from cultural or other environmental factors.
Social-emotional
Factors
Newborn infants do not seem to experience fear or have preferences for contact with any specific people. In the first few months they only experience happiness, sadness, and anger. A baby's first smile usually occurs between 6 and 10 weeks; because this usually occurs during social interactions, it is called a "social smile". By about 8–12 months, they go through a fairly rapid change and become fearful of perceived threats. Between around 6 and 36 months, infants begin to prefer familiar people and show anxiety and distress when separated from them, and when approached by strangers. Separation anxiety is a typical stage of development to an extent. Kicking, screaming, and throwing temper tantrums are normal symptoms of separation anxiety. The level of intensity of these symptoms can help determine whether or not a child has separation anxiety disorder, which is when a child constantly and intensely refuses to separate from the parent.
The capacity for empathy and the understanding of social rules begin in the preschool period and continue to develop into adulthood. Middle childhood is characterized by friendships with age-mates, and adolescence by emotions connected with sexuality and the beginnings of romantic love. Anger seems most intense during the toddler and early preschool period, and during adolescence.
Speed and pattern
Some aspects of social-emotional development, like empathy, develop gradually, but others, like fearfulness, seem to involve a rather sudden reorganization of the child's experiences of emotion. Sexual and romantic emotions develop in connection with physical maturation.
Mechanisms
Genetic factors appear to regulate some of the social-emotional developments that occur at predictable ages, such as fearfulness and attachment to familiar people. Experience plays a role in determining which people are familiar, which social rules are obeyed, and how anger is expressed. Parenting practices have been shown to predict children's emotional intelligence. The amount of time mothers spent with their children and the quality of their interactions are important in terms of children's trait emotional intelligence, not only because those times of joint activity reflect more positive parenting, but because they are likely to promote modeling, reinforcement, shared attention, and social cooperation.
Population differences
Population differences may occur in older children if, for example, they have learned that it is appropriate for boys to express emotion or behave differently than girls, or if customs learned by children of one ethnic group are different from those learned by another. Social and emotional differences between boys and girls of the same age may also be associated with the differences in the timing of puberty seen between the two sexes.
Development of language and communication
Mechanisms
Language serves the purpose of communication, allowing a person to express themselves through a systematic and conventional use of sounds, signs, or written symbols. There are four subcomponents a child must know to acquire language competence: phonology, lexicon, morphology and syntax, and pragmatics. These subcomponents combine to form the components of language: sociolinguistics and literacy. Currently, there is no single accepted theory of language acquisition, but various explanations of language development have been given.
Components
The four components of language development include:
Phonology is concerned with the sounds of language. It is the function, behavior, and organization of sounds as linguistic items. Phonology considers what the sounds of language are and what the rules are for combining sounds. Phonological acquisition in children can be measured by the frequency and accuracy of production of various vowels and consonants, by the acquisition of phonemic contrasts and distinctive features, or by viewing development in regular stages and characterizing the systematic strategies children adopt.
Lexicon is similar to vocabulary, as both describe the complex dictionary of words used in speech production and comprehension. The lexicon of a language also includes that language's morphemes. Morphemes act as minimal meaning-bearing elements, or building blocks, of anything in language that makes sense. For example, in the word "cat", the component "cat" makes sense as does "at", but "at" does not mean the same thing as "cat". In this example, "ca" does not mean anything.
Morphology is the study of words and how they are formed. It is the branch of linguistics that deals with words, their internal structure and how they are formed, and it is also the mental system involved in word formation.
Pragmatics is the study of the relationship between linguistic forms and speakers of the language; it also incorporates how speech is used to serve different functions. Pragmatics can be defined as the ability to communicate one's feelings and desires to others.
Children's development of language also includes semantics, which is the attachment of meaning to words. This happens in three stages. First, each word means an entire sentence.
For example, a young child may say "mama", but the child may mean "Here is Mama", "Where is Mama?", or "I see Mama." In the second stage, words have meaning but do not have complete definitions. This stage occurs around age two or three. Third, around age seven or eight, words have adult-like definitions and their meanings are more complete.
A child learns the syntax of their language when they are able to join words together into sentences and understand multiple-word sentences said by other people. There appear to be six major stages in which a child's acquisition of syntax develops. First is the use of sentence-like words, in which the child communicates using one word with additional vocal and bodily cues. This stage usually occurs between 12 and 18 months of age. Second, between 18 months and two years, there is the modification stage, where children communicate concepts by modifying a topic word. The third stage, between two and three years old, involves the child using complete subject-predicate structures to communicate concepts. Fourth, children make changes to basic sentence structure that enable them to communicate more complex concepts. This stage occurs between the ages of two and a half and four years. The fifth stage of categorization involves children aged three and a half to seven years refining their sentences with more purposeful word choice that reflects their complex system of categorizing word types. Finally, children use structures of language that involve more complicated syntactic relationships between the ages of five and ten years old.
Sequential skills and milestones
Infants begin with cooing and soft vowel sounds. Shortly after birth, this system is developed as the infants begin to understand that their noises, or non-verbal communication, lead to a response from their caregiver. This will then progress into babbling around 5 months of age, with infants first babbling consonant and vowel sounds together that may sound like "ma" or "da". At around 8 months of age, babbling increases to include repetition of sounds, such as "ma-ma" and "da-da". Around this age infants also learn the forms for words and which sounds are more likely to follow other sounds. At this stage, much of the child's communication is open to interpretation. For example, if a child says "bah" when they are in a toy room with their guardian, it is likely to be interpreted as "ball" because the toy is in sight. However, if one were to listen to the same "word" on a recording without knowing the context, one might not be able to figure out what the child was trying to say.
A child's receptive language, the understanding of others' speech, develops gradually beginning at about 6 months. However, expressive language, the production of words, moves rapidly after its beginning at about a year of age, with a "vocabulary explosion" of rapid word acquisition occurring in the middle of the second year. Grammatical rules and word combinations appear at about age two. Between 20 and 28 months, children come to understand differences such as high versus low and hot versus cold, and begin to change "no" to "wait a minute", "not now" and "why". Eventually, they are able to add pronouns to words and combine them to form short sentences. Mastery of vocabulary and grammar continues gradually through the preschool and school years, with adolescents having smaller vocabularies than adults and experiencing more difficulty with constructions such as the passive voice.
By age 1, children are able to say 1–2 words, respond to their name, imitate familiar sounds and follow simple instructions. Between 1 and 2 years old, the child uses 5–20 words, says 2-word sentences, expresses their wishes by saying words like "more" or "up", and understands the word "no". Between 2 and 3 years of age, the child is able to refer to themselves as "me", combine nouns and verbs, use short sentences, use some simple plurals, and answer "where" questions, and has a vocabulary of about 450 words. By age 4, children are able to use sentences of 4–5 words and have a vocabulary of about 1000 words. Children between the ages of 4 and 5 years old are able to use past tense, have a vocabulary of about 1,500 words, and ask questions like "why?" and "who?". By age 6, the child has a vocabulary of 2,600 words, is able to form sentences of 5–6 words and use a variety of different types of sentences. By the age of 5 or 6 years old, the majority of children have mastered the basics of their native language.
Infants up to 15 months old are initially unable to understand familiar words in their native language pronounced using an unfamiliar accent. This means that a Canadian-English speaking infant cannot recognize familiar words pronounced with an Australian-English accent. This skill develops close to their second birthday. However, this can be overcome when a highly familiar story is read in the new accent prior to the test, suggesting that the essential functions underlying spoken language are in place earlier than previously thought.
Vocabulary typically grows from about 20 words at 18 months to around 200 words at 21 months. Starting around 18 months the child begins to combine words into two-word sentences, which the adult typically expands to clarify their meaning. By 24–27 months the child is producing three- or four-word sentences using a logical, if not strictly correct, syntax. The theory is that children apply a basic set of rules, such as adding "s" for plurals or inventing simpler words out of words too complicated to repeat, like "choskit" for chocolate biscuit. Following this there is a rapid appearance of grammatical rules and ordering of sentences. There is often an interest in rhyme, and imaginative play frequently includes conversations. Children's recorded monologues give insight into the development of the process of organizing information into meaningful units.
By age three, the child begins to use complex sentences, including relative clauses, although they are still perfecting various linguistic systems. By five years of age, the child's use of language is very similar to that of an adult. From the age of about three, children can indicate fantasy or make-believe linguistically, and can produce coherent personal stories and fictional narratives with beginnings and endings. It is argued that children devise narrative as a way of understanding their own experience and as a medium for communicating their meaning to others. The ability to engage in extended discourse emerges over time from regular conversation with adults and peers. For this, a child needs to learn to combine their perspective with that of others and with outside events, and learn to use linguistic indicators to show they are doing this. They also learn to adjust their language depending on who they are speaking to. Typically, by the age of about 9, a child can recount other narratives in addition to their own experiences, from the perspectives of the author, the characters in the story and their own views.
Theories
Although the role of adult speech is important in facilitating the child's learning, there is considerable disagreement among theorists about the extent to which it influences children's early meanings and expressive words. Findings about the initial mapping of new words, the ability to decontextualize words, and the refinement of word meanings are diverse. One hypothesis, known as the syntactic bootstrapping hypothesis, refers to the child's ability to infer meaning from cues by using grammatical information from the structure of sentences. Another theory is the multi-route model, which argues that context-bound words and referential words follow different routes, the former being mapped onto event representations and the latter onto mental representations. In this model, parental input has a critical role, but children ultimately rely on cognitive processing to establish subsequent use of words. However, naturalistic research on language development has indicated that preschoolers' vocabularies are strongly associated with the number of words said to them by adults.
There is no single accepted theory of language acquisition. Instead, there are current theories that help to explain theories of language, theories of cognition, and theories of development. They include generativist theory, social interactionist theory, usage-based theory (Tomasello), connectionist theory, and behaviorist theory (Skinner). Generativist theories say that universal grammar is innate and that language experience activates that innate knowledge. Social interactionist theories define language as a social phenomenon where children acquire language because they want to communicate with others; this theory is heavily based on social-cognitive abilities that drive the language acquisition process. Usage-based theories define language as a set of formulas that emerge from the child's learning abilities in correlation with their social cognitive interpretation and their understanding of the speakers' intended meanings. Connectionist theory is a pattern-learning procedure that defines language as a system composed of smaller subsystems or patterns of sound or meaning. Behaviorist theories defined language as the establishment of positive reinforcement, but are now regarded as only being of historical interest.
Communication
Communication can be defined as the exchange and negotiation of information between two or more individuals through verbal and nonverbal symbols, oral and written (or visual) modes, and the production and comprehension processes of communication. According to the First International Congress for the Study of Child Language, "the general hypothesis [is that] access to social interaction is a prerequisite to normal language acquisition". Principles of conversation include two or more people focusing on one topic. All questions in a conversation should be answered, comments should be understood or acknowledged, and any directions should, in theory, be followed. In the case of young children, these conversations are expected to be basic or redundant. The role of guardians during these developing stages is to convey that conversation is meant to have a purpose, as well as to teach children to recognize the other speaker's emotions. Communicative language is both verbal and nonverbal, and to achieve communication competence, four components must be mastered.
These components are: grammatical competence, including vocabulary knowledge, rules of word and sentence formation, etc.; sociolinguistic competence, or the appropriate meanings and grammatical forms in different social contexts; discourse competence, which is having the knowledge required to combine forms and meanings; and strategic competence, in the form of knowledge about verbal and nonverbal communication strategies. The attainment of communicative competence is an essential part of actual communication.
Language development is viewed as a motive for communication, and the communicative function of language in turn provides the motive for language development. Jean Piaget uses the term "acted conversations" to explain a child's style of communication that relies more heavily on gestures and body movements than on words. Younger children depend on gestures for a direct statement of their message. As they begin to acquire more language, body movements take on a different role and begin to complement the verbal message. These nonverbal bodily movements allow children to express their emotions before they can express them verbally. The child's nonverbal communication of how they are feeling is seen in babies 0 to 3 months old, who use wild, jerky movements of the body to show excitement or distress. This develops into more rhythmic movements of the entire body at 3 to 5 months to demonstrate the child's anger or delight.
Between 9 and 12 months of age, children view themselves as joining the communicative world. Before 9–12 months, babies interact with objects and interact with people, but they do not interact with people about objects. This developmental change is the change from primary intersubjectivity (the capacity to share oneself with others) to secondary intersubjectivity (the capacity to share one's experience), which changes the infant from an unsociable to a socially engaging creature. Around 12 months of age the use of communicative gestures begins, including communicative pointing, where an infant points to request something or to provide information. Another communicative gesture appears around 10 to 11 months of age, when infants start gaze-following, looking where another person is looking. This joint attention results in changes to their social cognitive skills between the ages of 9 and 15 months as their time is increasingly spent with others. Children's use of non-verbal communicative gestures predicts future language development. The use of non-verbal communication in the form of gestures indicates the child's interest in communication development and the meanings they choose to convey, which are soon revealed through the verbalization of language.
Language acquisition and development contribute to the verbal form of communication. Children start out with a linguistic system in which the words they learn are the words used for functional meaning. This instigation of speech has been termed pragmatic bootstrapping. According to this theory, children view words as a means of social connection, in that words are used to connect the communicative intentions of the speaker to new words. Hence, the competence of verbal communication through language is achieved by gains in syntax or grammar. Another function of communicating through language is related to pragmatic development. Pragmatic development includes the child's intentions of communication before they know how to express these intentions, and throughout the first few years of life both language and communicative functions develop.
When children acquire language and learn to use language for communicative functions (pragmatics), they also gain knowledge about how to participate in conversations and how to relay past experiences and events (discourse knowledge), as well as learning how to use language appropriately for their social situation or social group (sociolinguistic knowledge). Within the first two years of life, a child's language ability progresses and conversational skills, such as the mechanics of verbal interaction, develop. Mechanics of verbal interaction include taking turns, initiating topics, repairing miscommunication, and responding to lengthen or sustain the conversation. Conversation is asymmetrical when a child interacts with an adult, because the adult is the one to create structure in the conversation and to build upon the child's contributions. As the child's conversational skills develop, the asymmetrical conversation between adult and child gives way to a more equal balance of conversation. This shift in the balance of conversation suggests the development of narrative discourse in communication. Ordinarily, the development of communicative competence and the development of language are linked to one another.
Causes of delays
Individual differences
Delays in language skills are the most frequent type of developmental delay. According to demographic data, 1 out of 6 children has a significant language delay; speech and language delays are three to four times more common in boys than in girls. Some children also display behavioral problems due to their frustration at not being able to express what they want or need. Simple speech delays are usually temporary. Most cases resolve on their own or with a little extra attention from the family. It is the parents' role to encourage their baby to talk to them with gestures or sounds, and to spend a great amount of time playing with, reading to, and communicating with their baby. In certain circumstances, parents will have to seek professional help, such as a speech therapist. It is important to take into consideration that sometimes delays can be a warning sign of more serious conditions that could include auditory processing disorders, hearing loss, developmental verbal dyspraxia, developmental delay in other areas, or an autism spectrum disorder (ASD).
Environmental causes
There are many environmental causes that are linked to language delays, including situations where the child is devoting their full attention to another skill, such as walking. The child may have a twin or a sibling close to their own age and may not be receiving the parent's full attention. Another possibility is that the child is in a daycare with too few adults to provide individual attention. General development can be impacted if the child does not receive an adequately nutritious diet. Perhaps the most obvious environmental cause would be a child who suffers from psychosocial deprivation such as poverty, poor housing, neglect, inadequate linguistic stimulation, or emotional stress.
Neurological causes
Language delay can be caused by a substantial number of underlying disorders, such as intellectual disability, which accounts for more than 50 percent of language delays. Language delay is usually more severe than other developmental delays in intellectually disabled children, and it is usually the first obvious symptom of intellectual disability. Intellectual disability causes global language delay, including delayed auditory comprehension and delayed use of gestures.
Impaired hearing is another of the most common causes of language delay. A child who cannot hear or process speech in a clear and consistent manner will have a language delay, and even the most minimal hearing impairment or auditory processing deficit can considerably affect language development. Generally, the more severe the impairment, the more serious the language delay. Nonetheless, deaf children who are born to families who use sign language develop infant babble and use a fully expressive sign language at the same pace as hearing children. Developmental dyslexia is a developmental reading disorder that occurs when the brain does not properly recognize and process the graphic symbols that represent the sounds of speech. Children with dyslexia may encounter problems in rhyming and separating the sounds that compose words, which is essential in learning to read, as early reading skills rely heavily on word recognition. When using an alphabetic writing system, this involves having the ability to separate out the sounds in words and to match them with letters and groups of letters. Difficulty connecting the sounds of language to the letters of words may result in difficulty understanding sentences. Confusion between similar letters, such as "b" and "d", can occur. In general the symptoms of dyslexia are: difficulty in determining the meaning of a simple sentence, difficulty learning to recognize written words, and difficulty in rhyming. Autism and speech delay are usually correlated. Problems with verbal language are the most common sign of autism. Early diagnosis and treatment of autism can significantly help the child improve their speech skills. Autism is recognized as one of the five pervasive developmental disorders, distinguished by problems with language, speech, communication and social skills that present in early childhood. Some common types of language disorders are having limited to no verbal speech, echolalia or repeating words out of context, problems responding to verbal instruction, and ignoring people who speak to them directly. Other aspects of development Gender Gender identity involves how a person perceives themselves as male, female, or a variation of the two. Children can identify themselves as belonging to a certain gender as early as two years old, but how gender identity is developed is a topic of scientific debate. Several factors are involved in determining an individual's gender identity, including neonatal hormones, postnatal socialization, and genetic influences. Some believe that gender is malleable until late childhood, while others argue that gender is established early and gender-typed socialization patterns either reinforce or soften the individual's notion of gender. Since most people identify as the gender that is typically associated with their genitalia, studying the impact of these factors is difficult. Evidence suggests that neonatal androgens, male sex hormones produced in the womb during gestation, play an important role. Testosterone in the womb directly codes the brain for either male- or female-typical development. This includes both the physical structure of the brain and the characteristics the person expresses because of it. People exposed to high levels of testosterone during gestation typically develop a male gender identity, while those not exposed to testosterone, or who lack the receptors necessary to interact with it, typically develop a female gender identity. 
An individual's genes are also thought to interact with the hormones during gestation and, in turn, affect gender identity, but the genes responsible for this and their effects have not been precisely documented and evidence is limited. It is unknown whether socialization plays a part in determining gender identity postnatally. It is well documented that children actively seek out information on how to properly interact with others based on their gender, but the extent to which these role models, which can include parents, friends, and TV characters, influence gender identity is less clear and no consensus has been reached. Race In addition to the course of development, previous literature has looked at how race, ethnicity, and socioeconomic status have affected child development. Some studies seem to speak to the importance of adult supervision of adolescent youth. Literature suggested that African American child development was sometimes differentiated between cultural socialization and racial socialization. Further, a different study found that most immigrant youth choose majors focusing on the fields of science and math. Risk factors Risk factors in child development include: malnutrition, maternal depression, maternal substance use and pain in infancy, though many more factors have been studied. Pain The prevention and alleviation of pain in neonates, particularly preterm infants, is important not only because it is ethical but also because exposure to repeated painful stimuli early in life is known to have short- and long-term adverse sequelae. These sequelae include physiologic instability, altered brain development, and abnormal neurodevelopment, somatosensory, and stress response systems, which can persist into childhood. Nociceptive pathways are active and functional as early as 25 weeks' gestation and may elicit a generalized or exaggerated response to noxious stimuli in immature newborn infants. (American Academy of Pediatrics, February 2016 Policy Statement, reaffirmed July 2020) Postnatal depression Although there are a large number of studies regarding the effect of maternal depression and postnatal depression on various areas of infant development, they are yet to come to a consensus regarding the true effects. Numerous studies indicate impaired development, while many others find no effect of depression on development. A study of 18-month-olds whose mothers had depressive symptoms while the children were 6 weeks and/or 6 months old found that maternal depression had no effect on the child's cognitive development. Furthermore, the study indicates that maternal depression combined with a poor home environment is more likely to have an effect on cognitive development than maternal depression alone. However, the authors conclude that it may be that short-term depression has no effect, but long-term depression could cause more serious problems. A longitudinal study spanning 7 years found no effect of maternal depression on cognitive development as a whole; however, it found that boys are more susceptible to cognitive developmental issues when their mothers had depression. This trend is continued in a study of children up to 2 years old, which revealed a significant difference in cognitive development between genders, with girls having a higher score; however, girls scored higher regardless of the mother's history of depression. 
Infants with chronically depressed mothers showed significantly lower scores on the motor and mental scales within the Bayley Scales of Infant Development, contrasting with many older studies. A similar effect has been found at 11 years: male children of depressed mothers score an average of 19.4 points lower on an IQ test than peers with healthy mothers, while this difference is less pronounced in girls. Three-month-olds with depressed mothers show significantly lower scores on the Griffiths Mental Development Scale, which covers a range of developmental areas including cognitive, motor and social development. Furthermore, interactions between depressed mothers and their children may affect social and cognitive abilities in later life. Maternal depression has been shown to influence the mother's interaction with her child. When communicating with their child, depressed mothers fail to make changes to their vocal intonation, and tend to use unstructured vocal behaviours. Furthermore, compared to when interacting with healthy mothers, infants interacting with depressed mothers show signs of stress, such as an increased pulse and raised cortisol levels, and make more use of avoidance behaviours, for example looking away. Mother-infant interaction at 2 months has been shown to affect the child's cognitive performance at 5 years. Studies have begun to show that other forms of psychopathology (mental illness) can independently influence infants' and toddlers' subsequent social-emotional development through effects on regulatory processes within the child-parent attachment. Maternal interpersonal violence-related post-traumatic stress disorder (PTSD), for example, has been associated with subsequent dysregulation of emotion and aggression by ages 4–7 years. Maternal drug use Cocaine Research has provided conflicting evidence regarding the severity of effects on children's development posed by maternal substance use during and after pregnancy. Children exposed to cocaine in utero weigh less than those not exposed at ages ranging from 6 to 30 months. Additionally, studies indicate that the head circumference of children exposed to cocaine is smaller than that of children without cocaine exposure. However, two more recent studies found no significant differences in either measure between those exposed to cocaine and those who were not. Maternal cocaine use may also affect the child's cognitive development, with exposed children achieving lower scores on measures of psychomotor and mental development. Again, though, there is conflicting evidence, and a number of studies indicate no effect of maternal cocaine use on a child's cognitive development. Continuing the trend, some studies found maternal cocaine use to impair motor development, while others showed no effect of cocaine use on motor development. Other Cocaine is not the only drug used by pregnant women that can have a negative effect on the fetus. Tobacco, marijuana, and opiates can also affect an unborn child's cognitive and behavioral development. Smoking tobacco increases pregnancy complications, including low birth weight, prematurity, placental abruption, and intrauterine death. After birth it can disturb maternal-infant interactions, reduce IQ, increase the risk of ADHD, and lead to tobacco use in the child. Prenatal marijuana exposure may have long-term emotional and behavioral consequences: at ten years old, children who had been exposed to the drug during pregnancy reported more depressive symptoms than unexposed peers. 
Some other effects include executive function impairment, reading difficulty, and delayed emotional regulation. Exposure to an opiate drug, such as heroin, in utero decreases birth weight, birth length, and head circumference. Parental opiate exposure may impact the infant's central nervous system and autonomic nervous system, though the evidence is even more inconsistent than with parental cocaine exposure. There are also some unexpected negative consequences for the child, such as less rhythmic swallowing, strabismus, and feelings of rejection. Malnutrition and Undernutrition Poor nutrition early in life contributes to stunting, and by the age of two or three can be associated with cognitive deficits, poor school achievement, and, later in life, poor social relationships. Malnutrition is a large problem in developing nations, and has an important effect on young children's weight and height. Children suffering malnutrition in Colombia weighed less and were shorter at the age of 36 months than children living in upper-class conditions. Malnutrition during the first 1000 days of a child's life can cause irreversible physical and mental stunting. Infections and parasites related to poor sanitation and hygiene can impact absorption of nutrients in the gut. Adequate sanitation and hygiene (rather than just access to food) play a critical role in preventing undernutrition, malnutrition and stunting and ensuring normal early childhood development. Malnutrition has been indicated as a negative influence on childhood intelligence quotient (IQ), although it has also been suggested that this effect is nullified when parental IQ is considered, implying that the difference is genetic. Specific nutrients Research on the effect of low iron levels on cognitive development and IQ has yet to reach a consensus. Some evidence suggests that even well-nourished children with lower levels of iron and folate (although not at such a level as to be considered deficient) have a lower IQ than those with higher levels of iron and folate. Furthermore, anaemic children perform worse on cognitive measures than non-anaemic children. Other nutrients have been strongly implicated in brain development, including iodine and zinc. Iodine is required for the formation of thyroid hormones necessary for brain development. Iodine deficiency may reduce IQ by an average of 13.5 points compared to healthy individuals. Zinc deficiency has also been shown to slow childhood growth and development. Zinc supplementation appears to be beneficial for growth in infants under six months old. Socioeconomic status Socioeconomic status is measured primarily based on income, educational attainment and occupation. Investigations into the role of socioeconomic factors in child development repeatedly show that continual poverty is more harmful to IQ and cognitive abilities than short-term poverty. Children in families who experience persistent financial hardships and poverty have significantly impaired cognitive abilities compared to those in families who do not face these issues. Poverty can also give rise to a number of other factors shown to affect child development, such as poor academic success, less family involvement, iron deficiency, infections, a lack of stimulation, and malnutrition. Poverty also increases the risk of lead poisoning due to lead paint found on the walls of some houses; children's blood lead levels increase as income decreases. 
Income-based poverty is associated with a 6–13 point reduction in IQ for those earning half of the poverty threshold compared to those earning twice the poverty threshold, and children from households experiencing continual or temporary poverty perform lower than children in middle-class families. Parental educational attainment is the most significant socioeconomic factor in predicting a child's cognitive abilities, as those with a mother with a high IQ are likely to have higher IQs themselves. Similarly, maternal occupation is associated with better cognitive achievement. Those whose mothers' jobs entail problem-solving are more likely to be given stimulating tasks and games, and are likely to achieve more advanced verbal competency. On the other hand, maternal employment is associated with slightly lower test scores, regardless of socioeconomic status. Counterintuitively, maternal employment results in more disadvantages the higher the socioeconomic status, as these children are being removed from a more enriching environment to be put in child care, though the quality of the child care must be considered. Low-income children tend to be cared for by grandparents or extended family and therefore form strong bonds with family. High-income children tend to be cared for in a child care setting or in home care, such as with a nanny. If the mother is highly educated, this can be a disadvantage to the child. Even with quality of care controlled for, studies still found that full-time work within the first year was correlated with negative effects on child development. Children whose mothers work are also less likely to receive regular well-baby doctor visits and less likely to be breastfed, which has been proven to improve developmental results. Effects are felt more strongly when women resume full-time work within the first year of the child's life. These effects may be due in part to pre-existing differences between mothers who return to work and those who do not, such as differences in character or reason for returning to work. Low-income families are less likely to provide a stimulating home learning environment to their children due to time constraints and financial stress. Compared to two-parent households, children in single-parent households face greater economic vulnerability and less parental involvement, leading to worse social, behavioral, educational, or cognitive outcomes. A child's academic achievement is influenced by parents' educational attainment, parenting style, and parental investment in their child's cognitive and educational success. Higher-income families are able to afford learning opportunities both inside and outside the classroom. Poverty-stricken children have fewer opportunities for stimulating recreational activities, often missing out on trips to libraries or museums, and are unable to access a tutor to help with problematic academic areas. A further factor in a child's educational attainment involves the school environment, more specifically teacher expectations and attitudes. If teachers perceive low-SES children as being less academically able, they may provide them with less attention and reinforcement. On the other hand, when schools make an effort to increase family and school involvement, children perform better on state tests. Parasites Diarrhea caused by the parasitic disease giardiasis is associated with lower IQ. Parasitic worms (helminths) are associated with nutritional deficiencies known to be a risk to child development. 
Intestinal parasitism is one of the most neglected tropical diseases in the developed world, and harboring these parasites can have several health implications for children that negatively affect childhood development and morbidity. Prolonged exposure to faecally-transmitted infections, including environmental enteropathy, other intestinal infections, and parasites during early childhood can lead to irreversible stunting. Reducing the prevalence of these parasites can benefit child growth, development, and educational outcomes. Toxin exposure High levels of lead in the blood are associated with attention deficits, while arsenic poisoning has a negative effect on both verbal IQ and on total intelligence quotient. Manganese poisoning due to high levels in drinking water is also associated with a reduction in IQ of 6.2 points between the highest and lowest levels of exposure. Prenatal exposure to various pesticides, including organophosphates and chlorpyrifos, has also been linked to reduced IQ scores. Organophosphates have been specifically linked to poorer working memory, verbal comprehension, perceptual reasoning and processing speed. Other Intrauterine growth restriction is associated with learning deficits in childhood, and as such, is related to lower IQ. Cognitive development can also be harmed by childhood exposure to violence and trauma, including spousal abuse between the parents and sexual abuse. Neglect When a child is unable to meet their developmental goals because they have not been provided with the correct amount of care, stimulation or nutrition, this situation is commonly referred to as child neglect. It is the most widespread form of child abuse, accounting for 78% of all child abuse cases in the United States in 2010 alone. Scientific studies show that child neglect can have lifelong consequences for children. Assessing and identifying Assessing and identifying neglect pose a number of challenges for practitioners. Given that neglect is a dynamic between the child's development and levels of nurturance, the question in identifying neglect becomes one of where to start: with the child's development or with the levels of nurturance? Development focused methods Some professionals identify neglect by measuring the developmental levels of a child, since if those levels are normal, one can, by definition, conclude that a child is not being neglected. Measured areas of development can include weight, height, stamina, social and emotional responses, speech and motor development. As all these features go into making a medical assessment of whether a child is thriving, a professional looking to start an assessment of neglect might begin with information collected by a doctor. Infants are often weighed and measured when seen by their pediatrician for well-baby check-ups. The physician initiates a more complete evaluation when the infant's development and functioning are found to be delayed. Social work staff could then consult medical notes to establish whether the baby or child is failing to thrive, as a first step in a pathway towards identifying neglect. If developmental levels are below normal, the identification of neglect then requires the professional to establish whether this can be attributed to the level of nurturance experienced by the child. Developmental delays caused by genetic conditions or disease need to be discounted, as they do not have their basis in a lack of nurturance. 
Starting the assessment Besides routine pediatrician visits, another way of starting the process of identifying neglect is to determine whether the child is experiencing a level of nurturance lower than that considered necessary to support normal development, which might be unique to the child's age, gender and other factors. Exactly how to ascertain what a particular child needs, without referring back to their level of development, is not something theory and policy on neglect are clear about. Furthermore, determining whether a child is getting the requisite level of nurturance needs to take into account not just the intensity of the nurturance, but also its duration and frequency. Children may experience varying and low levels of certain types of nurturance across a day and from time to time; however, the levels of nurturance should never cross thresholds of intensity, duration and frequency. For this reason, professionals must keep detailed histories of care provision, which demonstrate the duration of subnormal exposure to care, stimulation, and nutrition. Common guidance suggests professionals should focus on the levels of nurturance provided by the carers of the child, as neglect is understood as an issue of the parents' behaviour towards the child. Some authors feel that establishing the failure of parents and caregivers to provide care is sufficient to conclude that neglect is occurring. One definition is that "a child experiences neglect when the adults who look after them fail to meet their needs", which clearly defines neglect as a matter of parental performance. This raises the question of what level of nurturance a carer or parent needs to fall below to provoke developmental delay, and how one goes about measuring that accurately. This definition, which focuses on the stimulation provided by the carer, can be subject to critique. Neglect is about the child's development being adversely affected by the levels of nurturance, but the carers' provision of nurturance is not always a good indicator of the level of nurturance received by the child. Neglect may be occurring at school, outside of parental care. The child may be receiving nurturance from siblings or through a boarding school education, which compensates for the lack of nurturance provided by the parents. Linking to stimulation Neglect is a process whereby children experience developmental delay owing to insufficient levels of nurturance. In practice, this means that when starting an assessment of neglect by identifying developmental delay, one needs to then check the levels of nurturance received by the child. While some guidance on identifying neglect urges practitioners to measure developmental levels, other guidance focuses on how developmental levels can be attributed to parental behaviour. However, the narrow focus on parental behaviour can be criticised for unnecessarily ruling out the possible effect of institutionalised neglect, e.g. neglect at school. If one starts by concluding that the levels of nurture received by the child are insufficient, one then needs to consider the developmental levels achieved by the child. Further challenges arise, however, as even when one has established developmental delay and exposure to low levels of nurture, one needs to rule out the possibility that the link between the two is coincidental. The developmental delay may be caused by a genetic disorder, disease or physical, sexual or emotional abuse. 
The developmental delay may be caused by a mixture of underexposure to nurture, abuse, genetics, and disease. Measuring tools The Graded Care Profile Tool is a practice tool which gives an objective measure of the quality of care in terms of a parent/carer's commitment. It was developed in the UK. The North Carolina Family Assessment Scale is a tool that can be used by a practitioner to explore whether neglect is taking place across a range of family functioning areas. Intervention programs Early intervention programs and treatments include individual counselling, family and group counselling, social support services, behavioural skills training programs to eliminate problematic behaviour and teach parents appropriate parenting behaviour. Video interaction guidance is a video feedback intervention through which a "guider" helps a client to enhance communication within relationships. The client is guided to analyse and reflect on video clips of their own interactions. Video interaction guidance has been used where concerns have been expressed over possible parental neglect in cases where the focus child is aged 2–12, and where the child is not the subject of a child protection plan. The SafeCare programme is a preventive programme working with parents of children under 6 years old who are at risk of significant harm through neglect. The programme is delivered in the home by trained practitioners over 18 to 20 sessions and focuses on three key areas: parent-infant/child interaction, home safety and child health. Triple P (Positive Parenting Program) is a multilevel parenting and family support strategy. The idea behind it is that if parents are educated on proper parenting and given the appropriate resources, it could help decrease the number of child neglect cases. See also Attachment theory Birth order Child development stages Child life specialist Child prodigy Clinical social work Critical period Developmental psychology Developmental psychobiology Developmental psychopathology Early childhood education Evolutionary developmental psychology Pedagogy Play Psychoanalytic infant observation Child development in Africa Child development in India References Further reading External links Developmental psychology Development
Child development
[ "Biology" ]
17,171
[ "Behavioural sciences", "Behavior", "Developmental psychology" ]
9,628,193
https://en.wikipedia.org/wiki/Darwin%E2%80%93Radau%20equation
In astrophysics, the Darwin–Radau equation (named after Rodolphe Radau and Charles Galton Darwin) gives an approximate relation between the moment of inertia factor of a planetary body and its rotational speed and shape. The moment of inertia factor is directly related to the largest principal moment of inertia, C. It is assumed that the rotating body is in hydrostatic equilibrium and is an ellipsoid of revolution. The Darwin–Radau equation states $\lambda = \frac{C}{M R_e^2} = \frac{2}{3}\left(1 - \frac{2}{5}\sqrt{1 + \eta}\right)$, where M and Re represent the mass and mean equatorial radius of the body. Here λ is known as d'Alembert's parameter and the Radau parameter η is defined as $\eta = \frac{5q}{2\epsilon} - 2$, where q is the geodynamical constant $q = \frac{\omega^2 R_e^3}{GM}$ (with ω the rotational angular velocity and G the gravitational constant) and ε is the geometrical flattening $\epsilon = \frac{R_e - R_p}{R_e}$, where Rp is the mean polar radius and Re is the mean equatorial radius. For Earth, $q \approx 3.461 \times 10^{-3}$ and $\epsilon \approx 1/298.257$, which yields $\lambda \approx 0.331$, a good approximation to the measured value of 0.3307. References Astrophysics Planetary science Equations of astronomy
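A minimal numerical sketch of the relation reconstructed above can make the Earth example concrete. This is an illustration, not code from any astronomy library; the Earth values of q and ε are the approximate figures quoted in the article, assumed here as inputs.

```python
# Sketch: compute the moment of inertia factor from the Darwin-Radau relation.
import math

def moment_of_inertia_factor(q: float, epsilon: float) -> float:
    """Return lambda = C/(M*Re^2) from the geodynamical constant q and flattening epsilon."""
    eta = 5.0 * q / (2.0 * epsilon) - 2.0            # Radau parameter
    return (2.0 / 3.0) * (1.0 - 0.4 * math.sqrt(1.0 + eta))

# Approximate Earth values (assumed inputs for illustration)
q_earth = 3.461e-3              # omega^2 * Re^3 / (G*M)
epsilon_earth = 1.0 / 298.257   # geometrical flattening

print(moment_of_inertia_factor(q_earth, epsilon_earth))  # ~0.331, vs. measured 0.3307
```

Running the sketch reproduces the roughly 0.33 figure quoted for Earth, which is the sense in which the hydrostatic approximation is called "good" in the article.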
Darwin–Radau equation
[ "Physics", "Astronomy" ]
205
[ "Astronomical sub-disciplines", "Concepts in astronomy", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Equations of astronomy", "Planetary science", "Planetary science stubs" ]
9,628,780
https://en.wikipedia.org/wiki/Animal%20migration
Animal migration is the relatively long-distance movement of individual animals, usually on a seasonal basis. It is the most common form of migration in ecology. It is found in all major animal groups, including birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. The cause of migration may be local climate, local availability of food, the season of the year, or mating. To be counted as a true migration, and not just a local dispersal or irruption, the movement of the animals should be an annual or seasonal occurrence, or a major habitat change as part of their life. An annual event could include Northern Hemisphere birds migrating south for the winter, or wildebeest migrating annually for seasonal grazing. A major habitat change could include young Atlantic salmon or sea lamprey leaving the river of their birth when they have reached a few inches in size. Some traditional forms of human migration fit this pattern. Migrations can be studied using traditional identification tags such as bird rings, or tracked directly with electronic tracking devices. Before animal migration was understood, folklore explanations were formulated for the appearance and disappearance of some species, such as that barnacle geese grew from goose barnacles. Overview Concepts Migration can take very different forms in different species, and has a variety of causes. As such, there is no simple accepted definition of migration. One of the most commonly used definitions was proposed by the zoologist J. S. Kennedy. Migration encompasses four related concepts: persistent straight movement; relocation of an individual on a greater scale (in both space and time) than its normal daily activities; seasonal to-and-fro movement of a population between two areas; and movement leading to the redistribution of individuals within a population. Migration can be either obligate, meaning individuals must migrate, or facultative, meaning individuals can "choose" to migrate or not. Within a migratory species or even within a single population, often not all individuals migrate. Complete migration is when all individuals migrate, partial migration is when some individuals migrate while others do not, and differential migration is when the difference between migratory and non-migratory individuals is based on discernible characteristics like age or sex. Irregular (non-cyclical) migrations such as irruptions can occur under pressure of famine, overpopulation of a locality, or some more obscure influence. Seasonal Seasonal migration is the movement of various species from one habitat to another during the year. Resource availability changes depending on seasonal fluctuations, which influence migration patterns. Some species such as Pacific salmon migrate to reproduce; every year, they swim upstream to mate and then return to the ocean. Temperature is a driving factor of migration that is dependent on the time of year. Many species, especially birds, migrate to warmer locations during the winter to escape poor environmental conditions. Circadian Circadian migration is where birds utilise circadian rhythm (CR) to regulate migration in both fall and spring. In circadian migration, clocks of both circadian (daily) and circannual (annual) patterns are used to determine the birds' orientation in both time and space as they migrate from one destination to the next. 
This type of migration is advantageous in birds that, during the winter, remain close to the equator, and also allows the monitoring of the auditory and spatial memory of the bird's brain to remember an optimal site of migration. These birds also have timing mechanisms that provide them with the distance to their destination. Tidal Tidal migration is the use of tides by organisms to move periodically from one habitat to another. This type of migration is often used in order to find food or mates. Tides can carry organisms horizontally and vertically for as little as a few nanometres to even thousands of kilometres. The most common form of tidal migration is to and from the intertidal zone during daily tidal cycles. These zones are often populated by many different species and are rich in nutrients. Organisms like crabs, nematodes, and small fish move in and out of these areas as the tides rise and fall, typically about every twelve hours. The cycle movements are associated with foraging of marine and bird species. Typically, during low tide, smaller or younger species will emerge to forage because they can survive in the shallower water and have less chance of being preyed upon. During high tide, larger species can be found due to the deeper water and nutrient upwelling from the tidal movements. Tidal migration is often facilitated by ocean currents. Diel While most migratory movements occur on an annual cycle, some daily movements are also described as migration. Many aquatic animals make a diel vertical migration, travelling a few hundred metres up and down the water column, while some jellyfish make daily horizontal migrations of a few hundred metres. In specific groups Different kinds of animals migrate in different ways. In birds Approximately 1,800 of the world's 10,000 bird species migrate long distances each year in response to the seasons. Many of these migrations are north-south, with species feeding and breeding in high northern latitudes in the summer and moving some hundreds of kilometres south for the winter. Some species extend this strategy to migrate annually between the Northern and Southern Hemispheres. The Arctic tern has the longest migration journey of any bird: it flies from its Arctic breeding grounds to the Antarctic and back again each year, a distance of at least , giving it two summers every year. Bird migration is controlled primarily by day length, signalled by hormonal changes in the bird's body. On migration, birds navigate using multiple senses. Many birds use a sun compass, requiring them to compensate for the sun's changing position with time of day. Navigation involves the ability to detect magnetic fields. In fish Most fish species are relatively limited in their movements, remaining in a single geographical area and making short migrations to overwinter, to spawn, or to feed. A few hundred species migrate long distances, in some cases of thousands of kilometres. About 120 species of fish, including several species of salmon, migrate between saltwater and freshwater (they are 'diadromous'). Forage fish such as herring and capelin migrate around substantial parts of the North Atlantic ocean. The capelin, for example, spawn around the southern and western coasts of Iceland; their larvae drift clockwise around Iceland, while the fish swim northwards towards Jan Mayen island to feed and return to Iceland parallel with Greenland's east coast. 
In the 'sardine run', billions of Southern African pilchard Sardinops sagax spawn in the cold waters of the Agulhas Bank and move northward along the east coast of South Africa between May and July. In insects Some winged insects such as locusts and certain butterflies and dragonflies with strong flight migrate long distances. Among the dragonflies, species of Libellula and Sympetrum are known for mass migration, while Pantala flavescens, known as the globe skimmer or wandering glider dragonfly, makes the longest ocean crossing of any insect: between India and Africa. Exceptionally, swarms of the desert locust, Schistocerca gregaria, flew westwards across the Atlantic Ocean during October 1988, using air currents in the Inter-Tropical Convergence Zone. In some migratory butterflies, such as the monarch butterfly and the painted lady, no individual completes the whole migration. Instead, the butterflies mate and reproduce on the journey, and successive generations continue the migration. In mammals Some mammals undertake exceptional migrations; reindeer have one of the longest terrestrial migrations on the planet, covering great distances each year in North America. However, over the course of a year, grey wolves move the most; one grey wolf was recorded covering an exceptionally large total cumulative annual distance. Mass migration occurs in mammals such as the Serengeti 'great migration', an annual circular pattern of movement with some 1.7 million wildebeest and hundreds of thousands of other large game animals, including gazelles and zebra. More than 20 such species engage, or used to engage, in mass migrations. Of these migrations, those of the springbok, black wildebeest, blesbok, scimitar-horned oryx, and kulan have ceased. Long-distance migrations occur in some bats, notably the mass migration of the Mexican free-tailed bat between Oregon and southern Mexico. Migration is important in cetaceans, including whales, dolphins and porpoises; some species travel long distances between their feeding and their breeding areas. Humans are mammals, but human migration, as commonly defined, is when individuals often permanently change where they live, which does not fit the patterns described here. An exception is some traditional migratory patterns such as transhumance, in which herders and their animals move seasonally between mountains and valleys, and the seasonal movements of nomads. In other animals Among the reptiles, adult sea turtles migrate long distances to breed, as do some amphibians. Hatchling sea turtles, too, emerge from underground nests, crawl down to the water, and swim offshore to reach the open sea. Juvenile green sea turtles make use of Earth's magnetic field to navigate. Some crustaceans migrate, such as the largely-terrestrial Christmas Island red crab, which moves en masse each year by the millions. Like other crabs, they breathe using gills, which must remain wet, so they avoid direct sunlight, digging burrows to shelter from the sun. They mate on land near their burrows. The females incubate their eggs in their abdominal brood pouches for two weeks. Then they return to the sea to release their eggs at high tide in the moon's last quarter. The larvae spend a few weeks at sea and then return to land. Tracking Migration Scientists gather observations of animal migration by tracking their movements. Animals were traditionally tracked with identification tags such as bird rings for later recovery. 
However, no information was obtained about the actual route followed between release and recovery, and only a fraction of tagged individuals were recovered. More convenient, therefore, are electronic devices such as radio-tracking collars that can be followed by radio, whether handheld, in a vehicle or aircraft, or by satellite. GPS animal tracking enables accurate positions to be broadcast at regular intervals, but the devices are inevitably heavier and more expensive than those without GPS. An alternative is the Argos Doppler tag, also called a 'Platform Transmitter Terminal' (PTT), which sends regularly to the polar-orbiting Argos satellites; using Doppler shift, the animal's location can be estimated, relatively roughly compared to GPS, but at a lower cost and weight. A technology suitable for small birds which cannot carry the heavier devices is the geolocator which logs the light level as the bird flies, for analysis on recapture. There is scope for further development of systems able to track small animals globally. Radio-tracking tags can be fitted to insects, including dragonflies and bees. In culture Before animal migration was understood, various folklore and erroneous explanations were formulated to account for the disappearance or sudden arrival of birds in an area. In Ancient Greece, Aristotle proposed that robins turned into redstarts when summer arrived. The barnacle goose was explained in European Medieval bestiaries and manuscripts as either growing like fruit on trees, or developing from goose barnacles on pieces of driftwood. Another example is the swallow, which was once thought, even by naturalists such as Gilbert White, to hibernate either underwater, buried in muddy riverbanks, or in hollow trees. See also Great American Interchange References Further reading General Baker, R. R. (1978) The Evolutionary Ecology of Animal Migration. Holmes & Meier. . Dingle, H. (1996) Migration: The Biology of Life on the Move. Oxford University Press. . Gauthreaux, S. A. (1980) Animal Migration, Orientation, and Navigation. Academic Press. . Milner-Gulland, E. J., Fryxell, J. M., and Sinclair, A. R. E. (2011) Animal Migration: A Synthesis. Oxford University Press. . Rankin, M. (1985) Migration: Mechanisms and Adaptive Significance: Contributions in Marine Science. Marine Science Institute. . Riede, K. (2002) Global Register of Migratory Species. With database and GIS maps on CD. . By group Drake, V. A. and Gatehouse, A. G. (1995) Insect migration: tracking resources through space and time. Cambridge University Press. Elphick, J. (1995) The atlas of bird migration: tracing the great journeys of the world's birds. Random House. Greenberg, R. and Marra, P. P. (2005) Birds of Two Worlds: The Ecology and Evolution of Migration. Johns Hopkins University Press. Lucas, M. C. and Baras, E. (2001) Migration of freshwater fishes. Blackwell Science. MacKeown, B. A. (1984) Fish migration. Timber Press. Sonnenschein, E; Berthold, P. (2003) Avian migration. Springer. For children Gans, R. and Mirocha, P. How do Birds Find their Way? HarperCollins. (Stage 2) Marsh, L. (2010) Amazing Animal Journeys. National Geographic Society. (Level 3) External links Migration Basics from U.S. National Park Service Witnessing the Great Migration in Serengeti and Masai Mara Global Register of Migratory Species – identifies, maps and features 4,300 migratory vertebrate species Animal migration on PubMed MeSH term F01.145.113.083 Ethology
Animal migration
[ "Biology" ]
2,775
[ "Behavioural sciences", "Ethology", "Behavior", "Animal migration" ]
9,628,924
https://en.wikipedia.org/wiki/Intravascular%20volume%20status
In medicine, intravascular volume status refers to the volume of blood in a patient's circulatory system, and is essentially the blood plasma component of the overall volume status of the body, which otherwise includes both intracellular fluid and extracellular fluid. Still, the intravascular component is usually of primary interest, and volume status is sometimes used synonymously with intravascular volume status. It is related to the patient's state of hydration, but is not identical to it. For instance, intravascular volume depletion can exist in an adequately hydrated person if there is loss of water into interstitial tissue (e.g. due to hyponatremia or liver failure). Clinical assessment Intravascular volume depletion Volume contraction of intravascular fluid (blood plasma) is termed hypovolemia, and its signs include, in order of severity: a fast pulse; infrequent and low-volume urination; dry mucous membranes (e.g. a dry tongue); poor capillary refill (e.g. when the patient's fingertip is pressed, the skin turns white, but upon release, the skin does not return to pink as fast as it should - usually >2 seconds); decreased skin turgor (e.g. the skin remains "tented" when it is pinched); a weak pulse; orthostatic hypotension (dizziness upon standing up from a seated or reclining position, due to a drop in cerebral blood pressure); orthostatic increase in pulse rate; and cool extremities (e.g. cool fingers). Intravascular volume overload Signs of intravascular volume overload (high blood volume) include an elevated jugular venous pressure (JVP). Intravascular blood volume correlation to a patient's ideal height and weight For the clinical assessment of intravascular blood volume, the BVA-100, a semi-automated blood volume analyzer device that has FDA approval, determines the status of a patient's blood volume based on the Ideal Height and Weight Method. Using a patient's ideal weight and actual weight, the percent deviation from the desirable weight is found using the following equation: $\text{deviation (\%)} = \frac{\text{actual weight} - \text{ideal weight}}{\text{ideal weight}} \times 100$. Using the deviation from desirable weight, the BV ratio (ml/kg), i.e. ideal blood volume, can be determined. The machine was tested in clinical studies covering a broad range of medical conditions related to intravascular volume status, such as anemia, congestive heart failure, sepsis, CFS, hyponatremia, syncope and more. This tool for measuring blood volume may foster improved patient care as both a stand-alone and a complementary diagnostic tool, as a statistically significant increase in patient survival has been reported. Pathophysiology Intravascular volume depletion The most common cause of hypovolemia is diarrhea or vomiting. The other causes are usually divided into renal and extrarenal causes. Renal causes include overuse of diuretics, or trauma or disease of the kidney. Extrarenal causes include bleeding, burns, and any causes of edema (e.g. congestive heart failure, liver failure). Intravascular volume depletion is divided into three types based on the blood sodium level: Isonatremic (normal blood sodium levels) Example: a child with diarrhea, because both water and sodium are lost in diarrhea. Hyponatremic (abnormally low blood sodium levels). Example: a child with diarrhea who has been given tap water to replete diarrheal losses. Overall there is more water than sodium in the body. The intravascular volume is low because the water will move through a process called osmosis out of the vasculature into the cells (intracellularly). 
The danger is tissue swelling (edema), the most important being brain edema, which in turn will cause more vomiting. Hypernatremic (abnormally high blood sodium levels). Example: a child with diarrhea who has been given salty soup to drink, or insufficiently diluted infant formula. Overall there is more sodium than water. The water will move out of the cell toward the intravascular compartment, down the osmotic gradient. This can cause tissue damage (in the case of muscle breakdown, this is called rhabdomyolysis). Intravascular volume overload Intravascular volume overload can occur during surgery, if water rather than isotonic saline is used to wash the incision. It can also occur if there is inadequate urination, e.g. with certain kidney diseases. See also Body water Oral rehydration therapy References Blood tests
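The percent-deviation calculation described in the clinical assessment section above is simple arithmetic, sketched below for illustration. This is not the BVA-100's actual algorithm; the function name and the example ideal weight are assumptions made purely for the example.

```python
# Sketch of the percent deviation from desirable (ideal) weight described above.
def percent_deviation_from_ideal(actual_weight_kg: float, ideal_weight_kg: float) -> float:
    """Percent deviation of the actual body weight from the ideal (desirable) weight."""
    return (actual_weight_kg - ideal_weight_kg) / ideal_weight_kg * 100.0

# Example: a patient weighing 90 kg whose ideal weight is assumed to be 75 kg
deviation = percent_deviation_from_ideal(90.0, 75.0)
print(f"{deviation:.1f}% above ideal weight")  # 20.0% above ideal weight
```

The deviation value would then be used to look up the ideal blood volume ratio (ml/kg); that lookup is device-specific and is not reproduced here.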
Intravascular volume status
[ "Chemistry" ]
980
[ "Blood tests", "Chemical pathology" ]
9,628,999
https://en.wikipedia.org/wiki/Secretin%20receptor
The secretin receptor is a protein that in humans is encoded by the SCTR gene. This protein is a G protein-coupled receptor which binds secretin and is the leading member (i.e., the first cloned) of the secretin receptor family, also called the class B GPCR subfamily. Interactions The secretin receptor has been shown to interact with pituitary adenylate cyclase activating peptide. References Further reading External links IUPHAR GPCR Database - Secretin receptor G protein-coupled receptors
Secretin receptor
[ "Chemistry" ]
107
[ "G protein-coupled receptors", "Signal transduction" ]
9,629,218
https://en.wikipedia.org/wiki/LULI
LULI: Laboratoire pour l'Utilisation des Lasers Intenses (LULI) is a scientific research laboratory specialised in the study of plasmas generated by laser-matter interaction at high intensities and their applications. The main missions of LULI include: (i) research in plasma physics, (ii) development and operation of high-power, high-energy lasers and experimental facilities, and (iii) student training in plasma physics, optics and laser physics. Research in Plasma Physics Focusing the extreme power of pulsed lasers (up to the petawatt level, 10¹⁵ W) onto tiny spots, μm to mm in diameter, leads to ultrahigh intensities reaching today 10²⁰ W/cm² or more. Targets irradiated at such intensities can reach temperatures of the order of a hundred million degrees and pressures of tens of megabars. Moreover, the electric and magnetic fields associated with the laser beam itself or the fields produced in the plasma are responsible for the acceleration of particles to relativistic energies and for the production of intense radiation from THz to x-rays and γ-rays. The main subjects studied by LULI's scientists include laser inertial fusion and all its physical components (e.g. laser-plasma interaction), the fundamental physics of hot and dense plasmas and its applications in astrophysics and geophysics. In the short-pulse picosecond regime, the main developments concern the fast-ignition scheme for inertial fusion, and the production of brief and intense sources of radiation and relativistic particles. National and International Facility LULI is the French national civilian facility dedicated to research using high-energy, high-power lasers and their applications. French and foreign users have access to the two most energetic French academic laser chains: 100TW and LULI2000. The main beam of the 100TW facility delivers 30 J in 300 fs at 1.06 μm. It is coupled with additional nanosecond and picosecond beams. Nano2000, the nanosecond version of LULI2000, consists of two laser beams each delivering 1 kJ in nanosecond pulses at 1.06 μm. PICO2000 couples one of these nanosecond beams with a 200 J picosecond beam. Each of these facilities is coupled to a dedicated experimental area. The development of the laser sources is supported by an important R&D programme in high-power laser-related optics and laser technology. Student Training LULI trains French and foreign undergraduate and graduate students in plasma physics and in laser physics and technology. Collaborations Located at École Polytechnique, LULI is a CNRS - CEA - École Polytechnique - Université Pierre et Marie Curie laboratory. It is part of numerous national and international collaborations with research teams and laboratories involved in the development or the utilisation of high-power lasers. The following list includes some French and European contacts and projects. Institut Laser Plasma Physics Department at École Polytechnique Centre de Physique Théorique Laboratoire d'Optique Appliquée CEA - DRECAM Confédération des Lasers Intenses du Plateau de Saclay Centre Lasers Intenses et Applications Laser Alise Ligne d'Intégration Laser LASERLAB-EUROPE Central Laser Facility PHELIX - GSI Extreme Light Infrastructure PETAL HiPER References Research institutes in France Physics research institutes Plasma physics facilities French UMR
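The quoted pulse parameters translate into peak power and focused intensity by straightforward arithmetic, sketched below. The 30 J / 300 fs figures come from the article; the 10 μm focal-spot diameter is an assumed value chosen only to illustrate how 10²⁰ W/cm² is reached, not an official LULI specification.

```python
# Back-of-the-envelope peak power and focused intensity for a short-pulse laser.
import math

def peak_power_w(energy_j: float, duration_s: float) -> float:
    """Peak power approximated as pulse energy divided by pulse duration."""
    return energy_j / duration_s

def focused_intensity_w_cm2(power_w: float, spot_diameter_cm: float) -> float:
    """Intensity when the beam is focused onto a circular spot of the given diameter."""
    area_cm2 = math.pi * (spot_diameter_cm / 2.0) ** 2
    return power_w / area_cm2

power = peak_power_w(30.0, 300e-15)                 # 30 J in 300 fs -> 1e14 W (100 TW)
intensity = focused_intensity_w_cm2(power, 10e-4)   # assumed 10 micrometre focal spot
print(f"peak power ~ {power:.1e} W, intensity ~ {intensity:.1e} W/cm^2")  # ~1.3e20 W/cm^2
```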
LULI
[ "Physics" ]
706
[ "Plasma physics facilities", "Plasma physics" ]
9,629,242
https://en.wikipedia.org/wiki/SafetyBUS%20p
SafetyBUS p is a standard for failsafe fieldbus communication in automation technology. It meets SIL 3 of IEC 61508 and Category 4 of EN 954-1, or Performance Level "e" of the successor standard EN 13849-1. Origin SafetyBUS p was developed by Pilz GmbH & Co. KG between 1995 and 1999. The objective was to provide a fieldbus for data communication in the context of machinery safety. Since 1999 the technology of SafetyBUS p has been managed by the user organisation Safety Network International e.V. (formerly SafetyBUS p Club International e.V.), whose members work on the further development of SafetyBUS p. Application The main application of SafetyBUS p lies in the communication of data with safety-related content. SafetyBUS p can be used anywhere that communicated data has to be consistent in terms of time and content, in order to safeguard against danger. This danger may concern hazards to life and limb, but may also involve the protection of economic assets. Typical application areas are factory automation (e.g. car production, presses) and transport technology (e.g. cable cars, fairground rides). SafetyBUS p can be used on applications with safety-related requirements up to SIL 3 of IEC 61508 or Cat. 4 of EN 954-1. Technology Technically, SafetyBUS p is based on the fieldbus system CAN. In addition to OSI Layers 1 and 2, which are already defined in CAN for bit rate and security, SafetyBUS p adds further mechanisms to safeguard transmission in Layers 2 and 7. Fault mechanisms The following mechanisms are used on SafetyBUS p to detect transmission errors and device errors: sequential numbers; timeout; echo; IDs for transmitter and receiver; and data security (CRC). Technical details The SafetyBUS p frame data, as determined by the technology, is as follows: maximum 64 bus subscribers per network; up to 4000 I/O points per network; transmission rates of 20 to 500 kbit/s, depending on the network extension; individual network segments can extend up to 3500 m; multiple network segments can be connected; guaranteed error reaction times of up to 20 ms can be achieved; suitable for applications in accordance with SIL 3 of IEC 61508 and Cat. 4 of EN 954-1; option to supply voltage to the devices via the bus cable. Subnetworks can be implemented in wireless technology, with fibre-optic cables and as an infrared connection. Devices Only safety-related devices are used in SafetyBUS p networks. In general these are designed to be multi-channel internally. Safety-related use of the devices normally requires safety to be certified by authorised testing laboratories, who test the devices in accordance with the applicable standards and provisions. A functional inspection is carried out by the user organisation Safety Network International e.V. Organisation The user organisation SafetyBUS p Club International e.V. combines manufacturers and users of SafetyBUS p and has been in existence since 1999. In 2006 the organisation was renamed Safety Network International e.V. In addition to the international organisation there are also two regional organisations: one in Japan, established in 2000, and one in the USA, established in 2001. The organisation continues to develop the system, resulting in its successor, SafetyNET p. Literature Winfried Gräf: Maschinensicherheit. Hüthig GmbH & Co. KG, Heidelberg 2004, Armin Schwarz und Matthias Brinkmann: Praxis Profiline – SafetyBUS p – Volume D/J/E. Vogel Industrie Medien GmbH & Co. KG, Würzburg 2004, Armin Schwarz und Matthias Brinkmann: Praxis Profiline – SafetyBUS p – Volume D/E. 
Vogel Industrie Medien GmbH & Co. KG, Würzburg 2002, EU machinery directive: 98/37/EG IFA Report 2/2017: Funktionale Sicherheit von Maschinensteuerungen – Anwendung der DIN EN ISO 13849 BG ETEM, Prüfgrundsatz GS-ET-26: Bussysteme für die Übertragung sicherheitsrelevanter Nachrichten Reinert, D.; Schaefer, M.: Sichere Bussysteme für die Automation. Hüthig, Heidelberg 2001. External links Organizations: Safety Network International e.V. Safety Network International USA (Regional organization for USA) Industrial automation Safety
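The fault-detection mechanisms listed in the Technology section above (sequence numbers, timeouts, transmitter/receiver IDs, CRC) can be illustrated with a generic sketch. This is emphatically not the real SafetyBUS p frame format or protocol stack, and the echo mechanism is omitted; field names, widths and the timeout value are assumptions chosen only to show how such checks catch transmission and device errors.

```python
# Generic illustration (NOT the SafetyBUS p format) of common safety-bus checks.
import binascii
import time
from dataclasses import dataclass

@dataclass
class SafetyFrame:
    sender_id: int      # assumed one-byte field, for illustration only
    receiver_id: int
    sequence: int       # assumed to wrap at 256
    payload: bytes
    crc: int

def frame_body(frame: SafetyFrame) -> bytes:
    return bytes([frame.sender_id, frame.receiver_id, frame.sequence % 256]) + frame.payload

def build_frame(sender_id: int, receiver_id: int, sequence: int, payload: bytes) -> SafetyFrame:
    frame = SafetyFrame(sender_id, receiver_id, sequence % 256, payload, 0)
    frame.crc = binascii.crc32(frame_body(frame))
    return frame

def check_frame(frame: SafetyFrame, expected_sender: int, expected_receiver: int,
                expected_sequence: int, last_rx_time: float, timeout_s: float = 0.02) -> list:
    """Return a list of detected faults; an empty list means the frame is accepted."""
    faults = []
    if binascii.crc32(frame_body(frame)) != frame.crc:
        faults.append("corrupted data (CRC mismatch)")
    if (frame.sender_id, frame.receiver_id) != (expected_sender, expected_receiver):
        faults.append("wrong transmitter/receiver ID")
    if frame.sequence != expected_sequence % 256:
        faults.append("lost, repeated or reordered frame (sequence number error)")
    if time.monotonic() - last_rx_time > timeout_s:
        faults.append("frame too late (timeout exceeded)")
    return faults
```

In a real safety fieldbus these checks are defined by the certified protocol specification and implemented redundantly in multi-channel devices; the sketch only conveys the general idea behind the listed mechanisms.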
SafetyBUS p
[ "Engineering" ]
893
[ "Industrial automation", "Automation", "Industrial engineering" ]
9,629,284
https://en.wikipedia.org/wiki/Desensitization%20%28telecommunications%29
In telecommunications, desensitization (also known as receiver blocking) is a form of electromagnetic interference where a radio receiver is unable to receive a weak radio signal that it might otherwise be able to receive when there is no interference. This is caused by a nearby transmitter with a strong signal on a nearby frequency, which overloads the receiver and makes it unable to fully receive the desired signal. Typical receiver operation is such that the minimum detectable signal (MDS) level is determined by the thermal noise of its electronic components. When a signal is received, additional spurious signals are produced within the receiver because it is not truly a linear device. When these spurious signals have a power level that is less than the thermal noise power level, the receiver is operating normally. When these spurious signals have a power level that is higher than the thermal noise floor, the receiver is desensitized. This is because the MDS has risen due to the level of the spurious signals. Spurious signals increase in level when the received signal strength increases. When an interfering signal is present, it can contribute to the level of the spurious signals. Stronger interference generates stronger spurious signals. The interference may be at a different frequency than the signal of interest, but the spurious signals caused by that interference can show up at the same frequency as the signal of interest. It is these spurious signals that degrade the ability of the receiver by raising the MDS. Consider the case of a repeater station, a station consisting of a transmitter and receiver, both operating at the same time but on separate frequencies, and in some cases with separate antennas. Elevated MDS can be experienced in this case as well. One way to correct this condition is to add a duplexer to the station. This is common in Land Mobile Radio services such as police, fire, and various commercial and amateur services. See also Receiver (radio) Sensitivity (electronics) Minimum detectable signal References Electromagnetic compatibility Interference Noise (electronics) Radio communications
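The rise in MDS described above can be made concrete with an illustrative calculation. The thermal noise floor formula (kTB at roughly 290 K, i.e. −174 dBm/Hz) is a standard radio-engineering relation; the bandwidth, noise figure, required SNR and spurious-signal level below are assumed example values, and treating the effective floor as the larger of the two contributions is a simplification of how noise and spurious products actually combine.

```python
# Illustrative sketch of receiver desensitization: the MDS rises when
# interference-induced spurious products exceed the thermal noise floor.
import math

def thermal_noise_floor_dbm(bandwidth_hz: float, noise_figure_db: float) -> float:
    """Thermal noise referred to the receiver input: -174 dBm/Hz at ~290 K plus bandwidth and NF."""
    return -174.0 + 10.0 * math.log10(bandwidth_hz) + noise_figure_db

def mds_dbm(noise_floor_dbm: float, spurious_dbm: float, required_snr_db: float) -> float:
    """Simplified MDS: required SNR above whichever floor is higher (thermal or spurious)."""
    effective_floor = max(noise_floor_dbm, spurious_dbm)
    return effective_floor + required_snr_db

floor = thermal_noise_floor_dbm(bandwidth_hz=12.5e3, noise_figure_db=6.0)   # ~ -127 dBm
quiet_mds = mds_dbm(floor, spurious_dbm=-200.0, required_snr_db=10.0)       # no interference
blocked_mds = mds_dbm(floor, spurious_dbm=-110.0, required_snr_db=10.0)     # strong nearby transmitter
print(f"noise floor {floor:.1f} dBm; MDS {quiet_mds:.1f} dBm rising to {blocked_mds:.1f} dBm when desensitized")
```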
Desensitization (telecommunications)
[ "Engineering" ]
409
[ "Electromagnetic compatibility", "Radio electronics", "Telecommunications engineering", "Radio communications", "Electrical engineering" ]
9,629,286
https://en.wikipedia.org/wiki/Desensitization%20%28psychology%29
Desensitization (from Latin "de-" meaning "removal" and "sensus" meaning "feeling" or "perception") is a psychology term for the treatment or process that diminishes emotional responsiveness (reduced reaction) to a negative or aversive stimulus after repeated exposure. This process typically occurs when an emotional response (feeling) is repeatedly triggered but the action tendency associated with the emotion proves irrelevant or unnecessary. Psychologist Mary Cover Jones pioneered early desensitization techniques to help individuals "unlearn" (disassociate from) phobias and anxieties. Her work laid the foundation for later structured approaches to desensitization therapy, aimed at gradually reducing emotional reactions to previously distressing situations. In 1958, Joseph Wolpe developed a hierarchical (ranked) list of anxiety-evoking stimuli ordered by intensity to help individuals gradually adapt (become accustomed) to their fears. Wolpe's "reciprocal inhibition" desensitization process is based on established psychological theories, including Clark Hull's drive-reduction theory (which suggests that reducing a drive decreases anxiety) and Sherrington's concept of reciprocal inhibition (which proposes that certain responses can be inhibited by activating opposing responses). Although medication is available for individuals with anxiety, fear, or phobias, empirical evidence supports desensitization with high rates of cure, particularly in clients with depression or schizophrenia. Steps The hierarchical list is constructed by client and therapist together as an ordered series of steps running from the least disturbing to the most alarming fears or phobias. For example, a therapist and a patient with acrophobia might create a list of escalating exposure scenarios, with the patient progressing from using a low step ladder to standing on it and taking the first step. In a commonly used version of this treatment, the scenes are arranged in order of increasing arousal. Secondly, the client is taught techniques that produce deep relaxation. While deeply relaxed, the client is then presented with the lowest scene in the hierarchy; this is repeated until that element no longer causes anxiety or fear, at which point the next scene is shown. The procedure is repeated until the client has finished the hierarchy. It is impossible to feel both anxiety and relaxation simultaneously, so easing the client into deep relaxation helps inhibit any anxiety. Systematic desensitization (a guided reduction in fear, anxiety, or aversion) can then be achieved by gradually approaching the feared stimulus while maintaining relaxation. Desensitization works best when individuals are directly exposed to the stimuli and situations they fear, so anxiety-evoking stimuli are paired with inhibitory responses. This is done either by having clients perform in real-life situations (in vivo desensitization) or, if it is not practical to act out the steps of the hierarchy directly, by having them observe models performing the feared behavior (vicarious desensitization). Clients slowly move up the hierarchy, repeating performances if necessary, until the last item on the list is performed without fear or anxiety. According to research, it is not necessary for the hierarchy of scenes to be presented in a specific order, nor is it essential for the client to have mastered a relaxation response. Taken as a whole, recent research suggests that none of these conditions is required for successful desensitization. 
The only prerequisite appears to be the ability to imagine frightening scenes, which need not be presented in any particular order or be accompanied by muscle relaxation. Suggested mechanisms Reciprocal inhibition Reciprocal inhibition is based on the idea that two opposing mental states cannot coexist, and it is invoked as both a psychological and a biological mechanism. Because two opposing states cannot occur simultaneously, the relaxation methods involved in desensitization are held to inhibit the feelings of anxiety that come with being exposed to phobic stimuli. Deep muscle relaxation techniques are the primary method used by Wolpe to increase activity of the parasympathetic nervous system, the branch of the autonomic nervous system associated with relaxation. According to Tryon (2005), being relaxed does not simply mean not being anxious, and it is critical to avoid tautology when discussing reciprocal inhibition; the phenomenon is only observed when two events have a strong negative correlation. Reflex research has revealed a biological basis for reciprocal inhibition: a tap on the patellar tendon results in muscle relaxation (inhibition) of the flexors and muscle activation (excitation) of the extensors, an example of coordinated inhibition and excitation in different muscles. One criticism is that reciprocal inhibition is not a necessary part of the desensitization process, as other therapies along similar lines, such as flooding, work without pre-emptive, inhibitory relaxation techniques. A review of empirical evidence confirmed that therapy without relaxation was equally effective and gave rise to exposure therapy. A review also argued that Taylor's (2002) classification of reciprocal inhibition as short-term but with long-term effects does not make sense within the understanding of desensitization, because it is theoretically similar to reactive inhibition, which is longer-term as it develops conditioned inhibition. Counterconditioning Counterconditioning suggests that the anxiety response is replaced by a relaxation response through conditioning during the desensitization process. Counterconditioning is the behavioral equivalent of reciprocal inhibition, which is understood as a neurological process. Wolpe (1958) used this mechanism to explain the long-term effects of systematic desensitization, as it reduces avoidance responses and therefore the excessive avoidance behaviors that contribute to anxiety disorders. However, this explanation is not supported by empirical evidence. For reasons similar to those raised against reciprocal inhibition, counterconditioning is criticized as the underpinning mechanism of desensitization, because therapies that do not provide a replacement emotion for anxiety are still effective in desensitizing people. There would be no behavioral difference if reciprocal inhibition or counterconditioning were the functioning mechanisms. Habituation Habituation theory holds that with increased exposure to a stimulus there will be a decreased response from the phobic subject. There is empirical evidence to suggest that overall phobic responses are reduced with exposure in people who have specific phobias. However, empirical evidence does not support habituation as an explanation of desensitization, due to its reversible and short-term nature. Extinction Extinction is a model that describes how learned behaviors decrease through the absence of anticipated reinforcement. 
Extinction occurs not only when a previously learned value weakens, but also when a newly created association leads to a new value being learned. However, this cannot be used to explain why desensitization works, as it solely describes the functional relationship between absent reinforcement and phobic responses and lacks an actual mechanism for why such a relationship exists. Several studies looking into the neural mechanisms of extinction propose that the amygdala is responsible for the learning and expression of phobic responses, and that it also plays a part in the learning and strengthening of fear extinction. Wolpe disagreed that extinction could be the explanatory mechanism of how desensitization occurs with exposure-based therapies, as he believed that repeated exposure was insufficient and had likely already happened during the lives of people with specific phobias. However, desensitization is a form of exposure therapy, which in turn leads to the unwanted behavior becoming extinct as the learned associations weaken. Two-factor model Exposure to phobic stimuli followed by an avoidance response may strengthen future anxiety, because the avoidance response reduces the stress and therefore reinforces the avoidant behavior (a prominent feature of specific phobias and anxiety disorders). Exposure without avoidance is therefore seen as essential in the desensitization process. Self-efficacy Self-efficacy is an individual's personal assessment of their ability to successfully do something in a certain situation. A person's belief in their ability to cope increases as they move up the exposure hierarchy and gain confirmatory experiences of coping at the lower levels. High self-efficacy has been shown to enhance the extinction of an unwanted behavior. This account, however, does not explain how heightened anticipation of fear reduction leads to reduced fear responses, nor does it address whether desensitization still occurs if an individual does not experience decreased fear responses, in which case their anxiety response may instead reaffirm their phobia. Expectancy theory Expectancy theory suggests that because people expect the therapy to work, and change their view of how they will respond to the phobic stimuli after speaking with the therapist, their responses align with that expectation and display reduced anxiety. Marcia et al. (1969) found that those with high expectancy change (receiving the full expectancy treatment) had results comparable to those who received systematic desensitization therapy, suggesting that it is the change in expectancy that reduces fear responses. Emotional processing theory R. J. McNally explains, "fear is represented in memory as a network comprising stimulus propositions that express information about feared cues, response propositions that express information about behavioral and physiologic responses to these cues, and meaning propositions that elaborate on the significance of other elements in the fear structure". Excessive fear, such as a phobia, can be understood as a problem in this structure that disrupts information processing and leads to exaggerated fear responses. On this view, desensitization can be achieved by accessing the fear network with stimuli that match the information it contains, and then having the person engage with those stimuli so that new, disconfirming information is incorporated into the network and existing propositions are disconfirmed. 
Neuroscience Medial prefrontal cortex The medial prefrontal cortex works with the amygdala; when it is damaged, a phobic subject finds desensitization more difficult. Neurons in this area do not fire during the desensitization process, yet artificially activating them reduces spontaneous fear responses, suggesting that the area stores extinction memories which reduce phobic responses to future (conditioned) stimuli related to the phobia; this would explain the long-term impact of desensitization. N-methyl-D-aspartate glutamatergic receptors NMDA receptors have been found to play a key role in the extinction of fear, and the use of an agonist may therefore accelerate the reduction in fear responses during the desensitization process. Self-control desensitization Self-control desensitization is a variant of the systematic desensitization that Joseph Wolpe pioneered. Instead of using a passive counter-conditioning model, it uses an active, mediational, coping-skills change model. It uses coping mechanisms such as relaxation as an alternative to an anxiety response when anxiety-inducing stimuli are present. In-person practice in actual anxiety-producing situations is encouraged. In many ways it is comparable to other methods for controlling anxiety, such as applied relaxation and anxiety management training. During self-control desensitization, clients are given a rationale that is primarily oriented toward coping skills. They are told that, based on previous experience, they have learned to react to certain situations by becoming anxious, tense, or nervous. It is then explained to them that they will learn new coping skills to swap out their unfavorable reactions for more flexible ones. They are instructed to apply relaxation techniques and other coping mechanisms across a hierarchy of anxiety-producing situations, both to reduce tension and to serve as covert rehearsal for future eventualities. These techniques include breathing control, attention to internal sensations, and relaxation exercises. According to research, self-control desensitization is effective for various anxiety disorders but is not more effective than other cognitive or behavioral techniques. Criticism and developments With the widespread research and development of behavioral therapies, and experiments conducted to understand the mechanisms driving desensitization, a consensus has emerged that exposure is the key element of desensitization. This suggests that the steps leading up to the actual exposure, such as relaxation techniques and the development of an exposure hierarchy, may be redundant for effective desensitization. It would seem that crucial elements for a successful therapeutic outcome, in both desensitization and more conventional forms of psychotherapy, are the cognitive and social aspects of the therapeutic situation. These factors include the expectation of therapeutic benefit, the therapist's ability to foster social reinforcement, the information feedback of approximations towards successful fear reduction, training in attention control, and the vicarious learning of contingencies of non-avoidance behavior in the fear situation (via instructed imagination). Effects on animals Animals can also be desensitized to their rational or irrational fears. A racehorse that fears the starting gate can be desensitized to the fearful elements (the creak of the gate, the starting bell, the enclosed space) one at a time, in small doses or at a distance. Clay et al. 
(2009) conducted an experiment in which rhesus macaques were allocated to either a desensitization group or a control group; those in the desensitization group showed a significant reduction in both the rate and duration of fearful behavior. This supports the use of positive reinforcement training (PRT). Desensitization is commonly used with simple phobias such as insect phobia. In addition, desensitization therapy is a useful tool in training domesticated dogs. Systematic desensitization used in conjunction with counter-conditioning has been shown to reduce problem behaviors in dogs, such as vocalization and property destruction. Effects on violence Desensitization also refers to the potential for reduced responsiveness to actual violence caused by exposure to violence in the media, although this topic is debated in the scientific literature. Desensitization may arise from different media sources, including television, video games, and movies. Some scholars suggest that violence may prime thoughts of hostility, possibly affecting how we perceive others and interpret their actions. At the physiological level, desensitization has been shown to lower arousal to violent scenes in heavy television viewers compared with light viewers. It has frequently been suggested that those who commit extreme violence have blunted sensibilities as a result of watching violent videos repeatedly. Desensitization to violence has been linked to a number of outcomes. It has been observed, for example, as less arousal and emotional disturbance when witnessing violence, as greater hesitancy to call an adult to intervene in a witnessed physical altercation, and as less sympathy for victims of domestic abuse. Recent school shootings have sparked considerable discussion about the desensitizing effects of violent video games and the possible involvement of "shooter" games, which teach gun-handling skills and provide intense desensitization training. It is hypothesized that initial exposure to violence in the media may produce a number of aversive responses, such as increased heart rate, fear, discomfort, perspiration, and disgust. However, prolonged and repeated exposure to violence in the media may reduce or habituate the initial psychological impact until violent images no longer elicit these negative responses. Eventually, the observer may become emotionally and cognitively desensitized to media violence. In one experiment, participants who played violent video games showed lower heart rate and galvanic skin response readings, which the authors interpreted as displaying physiological desensitization to violence; however, other studies have failed to replicate this finding. Some scholars have questioned whether becoming desensitized to media violence specifically transfers to becoming desensitized to real-life violence. In addition, psychological research frequently focuses on how members of a group behave, and these studies demonstrate that media violence raises the likelihood that members of the group will become desensitized and act aggressively. However, more sensitive developmental studies might find that this effect can be moderated by some individual difference variables (such as empathy, perspective taking, or trait hostility). See also Sensitization Flooding (psychology) Extinction (psychology) Habituation Conditioning Alarm fatigue References Anxiety disorder treatment Behavior therapy Behaviorism
Desensitization (psychology)
[ "Biology" ]
3,226
[ "Behavior", "Behavior therapy", "Behaviorism" ]