Columns: id (int64, 580 to 79M) · url (string, 31–175 chars) · text (string, 9–245k chars) · source (string, 1–109 chars) · categories (string, 160 classes) · token_count (int64, 3–51.8k)
26,331,217
https://en.wikipedia.org/wiki/Attensity
Attensity provides social analytics and engagement applications for social customer relationship management (social CRM). Attensity's text analytics software applications extract facts, relationships and sentiment from unstructured data, which comprise approximately 85% of the information companies store electronically. History Attensity was founded in 2000. An early investor in Attensity was In-Q-Tel, which funds technology to support the missions of the US Government and the broader DOD. InTTENSITY, an independent company that has combined Inxight with Attensity software (the only joint development project combining two In-Q-Tel-funded software packages), is the exclusive distributor and outlet for Attensity in the federal market. In 2009, Attensity Corp., then based in Palo Alto, merged with Germany's Empolis and Living-e AG to form Attensity Group. In 2010, Attensity Group acquired Biz360, Inc., a provider of social media monitoring and market intelligence solutions. In early 2012, Attensity Group divested itself of the Empolis business unit via a management buyout; that unit currently conducts business under its pre-merger name. Attensity Group is a closely held private company. Its majority shareholder is Aeris Capital, a private Swiss investment office advising a high-net-worth individual and his charitable foundation. Foundation Capital, Granite Ventures, and Scale Venture Partners were among Biz360's investors and thus became shareholders in Attensity Group. In February 2016, Attensity's IP assets were acquired by InContact, and Attensity closed. See also Text mining References External links Natural language processing Software companies based in the San Francisco Bay Area Companies based in Redwood City, California Research support companies Software companies of the United States Software companies established in 2000 American companies established in 2000 American companies disestablished in 2016
Attensity
Technology
383
43,960,694
https://en.wikipedia.org/wiki/3DX
The 3DX, also known as the Auxetik, was the first 3D printed metal muzzle brake and the first 3D printed metallic component for a firearm. It is meant for the highly customisable AR-15 rifle. The design was made public around July 2013. The printer used to print it is unknown, but the brake was created using the direct metal laser sintering (DMLS) method by Sintercore (a weapons manufacturing startup company). It is designed to tame the recoil and muzzle rise of AR-15 pistols chambered for .223-caliber (5.56×45mm NATO) rounds. The Auxetik was renamed to 3DX by Sintercore. It is made of Inconel metal, and as of 2014 the company was selling each one for around $300. Specifications The 3DX is the first 3D printed muzzle brake available for commercial sale, providing muzzle control in both semi-automatic and fully automatic fire. It uses 100% Inconel superalloy construction and an Ionbond Diamondblack coating. The threading is 1/2×28RH for 5.56×45mm NATO, .223 Remington, and smaller calibers. The brake comes with installation instructions and a crush washer. Performance It survived the firing of 7,900 rounds during testing on semi-auto. During a test on full auto, 10 magazines of 62-grain green-tip 5.56 rounds were all fired without any issues. The Firearm Blog tested the item on the first, sixth, and 12th month of their experiment, then checked the interior using a USB microscope. They claim that there was no "discernible difference in performance over that year, there was no muzzle rise to speak of and no increase in report when firing". United States military interest US Special Operations Command's (USSOCOM) Science and Technology Directorate at MacDill Air Force Base invited Neal Brace (the owner of Sintercore) to demonstrate the newly named 3DX muzzle brake for possible use by its elite troops using an "ARES Defense AMG-1 and AMG-2 belt fed machine gun with a 13-inch barrel feeding from a 200-round box magazine".
Testing was done on 5 August 2014 by special operations troops on a closed range. All the testing was carried out with the ARES AMG-1 and AMG-2. See also List of notable 3D printed weapons and parts References 3D printed firearms Firearm components
3DX
Technology
489
1,035,534
https://en.wikipedia.org/wiki/Domain%20Technologie%20Control
Domain Technologie Control (DTC) was a web hosting control panel aimed at providing a graphics-oriented layout for managing commercial hosting of web servers, intended for shared web hosting servers, virtual private servers (VPSes), and dedicated servers. Domain Technologie Control is free software released under the GNU LGPL v2.1 license. It is fully skinnable and translated into several languages. Domain Technologie Control allows the administrator to create web hosting plans that provide email and FTP accounts, domain purchasing, subdomains, SSH, and MySQL databases to the end users under controllable quota for the web sites that these users own. DTC also maintains the automation of billing, generates backup scripts, and monitors traffic (per user and per service) using a single system UID or GID. Also integrated into DTC are a support ticket system and customizable HTTP error pages. DTC itself manages its own MySQL database to store its setup configuration, web hosting plans, and users. It supports many other free software packages: MySQL, Apache, PHP, qmail, Postfix, Courier, Dovecot, ProFTPd, Webalizer, mod-log-sql, and more. It also connects to dtc-xen to manage and monitor the usage of virtual private servers (VPSes). DTC was also the first web hosting control panel to be included in major distributions such as Debian (since Lenny in 2009), Ubuntu (since 2008), and FreeBSD. See also Comparison of web hosting control panels Footnotes and References External links http://www.gplhost.com/about-us/our-gpl-software/dtc-domain-technologie-control.html Unix Internet software Web server management software Software using the GNU Lesser General Public License
Domain Technologie Control
Technology
398
13,398,615
https://en.wikipedia.org/wiki/Super-prime
Super-prime numbers, also known as higher-order primes or prime-indexed primes (PIPs), are the subsequence of prime numbers that occupy prime-numbered positions within the sequence of all prime numbers. In other words, if prime numbers are matched with ordinal numbers, starting with prime number 2 matched with ordinal number 1, then the primes matched with prime ordinal numbers are the super-primes. The subsequence begins 3, 5, 11, 17, 31, 41, 59, 67, 83, 109, 127, 157, 179, 191, 211, 241, 277, 283, 331, 353, 367, 401, 431, 461, 509, 547, 563, 587, 599, 617, 709, 739, 773, 797, 859, 877, 919, 967, 991, ... . That is, if p(n) denotes the nth prime number, the numbers in this sequence are those of the form p(p(n)). Dressler and Parker used a computer-aided proof (based on calculations involving the subset sum problem) to show that every integer greater than 96 may be represented as a sum of distinct super-prime numbers. Their proof relies on a result resembling Bertrand's postulate, stating that (after the larger gap between super-primes 5 and 11) each super-prime number is less than twice its predecessor in the sequence. It has also been shown that there are on the order of x/(ln x)² super-primes up to x, which can be used to show that the set of all super-primes is small. One can also define "higher-order" primeness in much the same way and obtain analogous sequences of primes. A variation on this theme is the sequence of prime numbers with palindromic prime indices, beginning with 3, 5, 11, 17, 31, 547, 739, 877, 1087, 1153, 2081, 2381, ... . References External links A Russian programming contest problem related to the work of Dressler and Parker Classes of prime numbers
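The definition p(p(n)) is easy to check computationally. A minimal Python sketch (the sieve limit of 130 is an arbitrary choice) that reproduces the opening terms of the sequence above:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit, in order."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def super_primes(limit):
    """Primes p(p(n)) <= limit, i.e. primes at prime-numbered (1-based) positions."""
    ps = primes_up_to(limit)
    # ps[0] is p(1) = 2; keep ps[n-1] whenever the position n is itself prime.
    return [ps[n - 1] for n in ps if n - 1 < len(ps)]

print(super_primes(130))  # [3, 5, 11, 17, 31, 41, 59, 67, 83, 109, 127]
```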
Super-prime
Mathematics
449
306,064
https://en.wikipedia.org/wiki/Ecosophy
Ecosophy or ecophilosophy (a portmanteau of ecological philosophy) is a philosophy of ecological harmony or equilibrium. The term was coined by the French post-structuralist philosopher and psychoanalyst Félix Guattari and the Norwegian father of deep ecology, Arne Næss. Félix Guattari Ecosophy also refers to a field of practice introduced by psychoanalyst, poststructuralist philosopher, and political activist Félix Guattari. In part Guattari's use of the term demarcates a necessity for the proponents of social liberation, whose struggles in the 20th century were dominated by the paradigm of social revolution, to embed their arguments within an ecological framework which understands the interconnections of social and environmental spheres. Guattari holds that traditional environmentalist perspectives obscure the complexity of the relationship between humans and their natural environment through their maintenance of the dualistic separation of human (cultural) and nonhuman (natural) systems; he envisions ecosophy as a new field with a monistic and pluralistic approach to such study. Ecology in the Guattarian sense, then, is a study of complex phenomena, including human subjectivity, the environment, and social relations, all of which are intimately interconnected. Despite this emphasis on interconnection, throughout his individual writings and more famous collaborations with Gilles Deleuze, Guattari has resisted calls for holism, preferring to emphasize heterogeneity and difference, synthesizing assemblages and multiplicities in order to trace rhizomatic structures rather than creating unified and holistic structures. Guattari's concept of the three interacting and interdependent ecologies of mind, society, and environment stems from the outline of the three ecologies presented in Steps to an Ecology of Mind, a collection of writings by cyberneticist Gregory Bateson. 
Næss's definition While a professor at the University of Oslo in 1972, Arne Næss introduced the terms "deep ecology movement" and "ecosophy" into environmental literature. Næss based his article on a talk he gave in Bucharest in 1972 at the Third World Future Research Conference. As Drengson notes in Ecophilosophy, Ecosophy and the Deep Ecology Movement: An Overview, "In his talk, Næss discussed the longer-range background of the ecology movement and its connection with respect for Nature and the inherent worth of other beings." Næss's view of humans as an integral part of a "total-field image" of Nature contrasts with the alternative construction of ecosophy outlined by Guattari. The term ecological wisdom, synonymous with ecosophy, was introduced by Næss in 1973. The concept has become one of the foundations of the deep ecology movement. All expressions of values by Green Parties list ecological wisdom as a key value—it was one of the original Four Pillars of the Green Party and is often considered the most basic value of these parties. It is also often associated with indigenous religion and cultural practices. In its political context, it is necessarily less precisely defined than concepts such as ecological health or scientific ecology. See also Ecology Environmental philosophy Global Greens Charter Green syndicalism Silvilization Simple living Spiritual ecology Sustainable living Yin and yang Notes References Drengson, A. and Y. Inoue, eds. (1995) The Deep Ecology Movement: An Introductory Anthology. Berkeley: North Atlantic Publishers. Guattari, Félix (1992). "Pour une refondation des pratiques sociales". Le Monde Diplomatique (Oct. 1992): 26–27. Guattari, Félix (1996). "Remaking Social Practices". In Genosko, Gary (ed.), The Guattari Reader. Oxford: Blackwell, pp. 262–273. Maybury-Lewis, David. (1992) "On the Importance of Being Tribal: Tribal Wisdom." Millennium: Tribal Wisdom and the Modern World. Binimun Productions Ltd.
Næss, Arne. (1973) "The Shallow and the Deep, Long-Range Ecology Movement: A Summary". Inquiry, 16: 95–100. Drengson, A. & B. Devall (2008) (eds.) The Ecology of Wisdom: Writings by Arne Naess. Berkeley: Counterpoint. Levesque, Simon (2016) "Two versions of ecosophy: Arne Næss, Félix Guattari, and their connection with semiotics". Sign Systems Studies 44(4): 511–541. http://dx.doi.org/10.12697/SSS.2016.44.4.03 External links Ecophilosophy, Ecosophy and the Deep Ecology Movement: An Overview by Alan Drengson Ecospherics.net. Accessed 2005-08-14. The Trumpeter, A Journal of Ecosophy. Ecology Postmodern theory Environmental philosophy Environmentalism Arne Næss
Ecosophy
Biology,Environmental_science
1,048
14,605,198
https://en.wikipedia.org/wiki/Outer%20membrane%20efflux%20protein
The outer membrane efflux protein is a member of a protein family whose members form trimeric (three-piece) channels allowing the export of a variety of substrates in gram-negative bacteria. Each efflux protein is composed of two repeats. The trimeric channel is composed of a 12-stranded beta-barrel that spans the outer membrane and a long helical barrel that spans the periplasm. Examples include the Escherichia coli TolC outer membrane protein, which is required for proper expression of outer membrane protein genes; the Rhizobium nodulation protein; and the Pseudomonas FusA protein, which is involved in resistance to fusaric acid. References Protein domains Protein families Outer membrane proteins
Outer membrane efflux protein
Biology
149
18,598,522
https://en.wikipedia.org/wiki/Giorgio%20de%20Santillana
Giorgio Diaz de Santillana (30 May 1902 – 8 June 1974) was an Italian-American philosopher and historian of science, particularly of ancient science and of the Renaissance science of Galileo Galilei. He was a Professor of the History of Science at the Massachusetts Institute of Technology best known for his books The Crime of Galileo (1955) and Hamlet's Mill (1969). Early life and education Giorgio de Santillana was born in Rome, Italy on 30 May 1902 into a Sephardic Jewish family, which traced its roots through Tunisia and Livorno back to the Iberian peninsula. His father was the Tunisian-Italian jurist and expert on Islamic Law David Santillana. In 1925, he graduated from the Università di Roma alla Sapienza with a degree in physics. He then spent two years in philosophy in Paris, followed by another two years in physics at the Università degli studi di Milano. Career After his time in Milan, he was called back to Rome by Federigo Enriques to put together a course on the history of science. In Rome, he taught history of science and philosophy of science. Santillana next moved to the United States in 1936, where he became an instructor in the philosophy of science at The New School for Social Research 1937–1938 and then a visiting lecturer at Harvard University. In 1941, Santillana began teaching at the Massachusetts Institute of Technology, becoming an assistant professor the following year. From 1943 to 1945 he served in the United States Army as a war correspondent, and he became a naturalized U.S. citizen in 1945. After the war, in 1945 he returned to MIT and in 1948 he was made an associate professor. In 1954, he became a full Professor of the History of Science in the School of Humanities. In 1957, he was awarded a Guggenheim Fellowship. 
Work Together with Edgar Zilsel, Santillana contributed a 1941 entry to the Vienna Circle history and philosophy of science project International Encyclopedia of Unified Science (1938–1969) titled "Development of Rationalism and Empiricism" consisting of two separate essays: Santillana's own essay was "Aspects of Scientific Rationalism in the Nineteenth Century," while Zilsel's was "Problems of Empiricism." In 1953, Santillana published an annotated edition of a prior translation of Galileo Galilei's Dialogue on the Great World Systems, which appeared within five months of another major edition of Galileo's work, Stillman Drake's new translation from the Italian, Dialogue Concerning the Two Chief World Systems (1953). The two were sometimes reviewed together, with one such reviewer, Stanford University historian of Renaissance science Francis R. Johnson, concluding "one might suggest that the modern scientist who prefers his Galileo in twentieth-century dress would incline to Mr. Drake's volume. On the other hand, the more historically minded reader who prefers to view Galileo in the literary and intellectual costume of his own century would vote for Professor de Santillana's edition." Reviewers of both were generally agreed that the translation Santillana relied on, a 1665 edition by Thomas Salusbury, had inadequacies, though Drake later wrote an article in defense of the Salusbury translation and in criticism of Salusbury's contemporary printers, who introduced numerous errors not in the translation manuscript, and of his modern editor, Santillana, who could have caught these errors. Santillana's work on Galileo next led him to write and to publish The Crime of Galileo in 1955, a study of Galileo's trial for heresy by the Catholic Church that laid the blame for Galileo's guilty verdict substantially on political intrigue rather than on properly doctrinal issues.
Reviewers recognized its topicality in reference to contemporary American McCarthyism, Russian Stalinism, the recent investigation of J. Robert Oppenheimer, and Santillana's recent experiences in Fascist Italy, though some also questioned the value of Santillana's more speculative moral and political conclusions. Stillman Drake noted it was the first comprehensive study of this trial made available in English since the 1879 translation of Karl von Gebler's prior work, Galileo Galilei and the Roman Curia, by a Mrs. George Sturge, that it assembled valuable new information regarding the other persons involved in the case, and that it seemed to reference copies of documents so far unknown to other scholars. Santillana understood the last claim of Drake's to be sarcastic criticism and corrected his relevant errors in a letter to the editor. In 1961, he published The Origins of Scientific Thought: From Anaximander to Proclus, 600 BC to 300 AD, which also appeared in paperback. Reviewers noted this as a development of his earlier work in Italian with Federigo Enriques but considered its interpretation of Parmenides particularly provocative and notable, though not necessarily plausible, and generally noted that technical details were thin and sometimes importantly incorrect. In 1969, he published his book Hamlet's Mill: An Essay on Myth and the Frame of Time, coauthored with Hertha von Dechend (1915–2001) after he was inspired by her original research shared with him in 1959. This book covered possible connections between the mythological stories of Ancient Egypt, Babylon, Ancient Greece, Christianity, etc. and ancient observations pertaining to the stars, planets, and, most notably, the 26,000-year precession of the equinoxes, now categorized into the fields of ethnoastronomy and archaeoastronomy. 
Santillana's colleague Nathan Sivin described the book as "an end run around those scholarly custodians of the history of early astronomy who consider myths best ignored, and those ethnologists who consider astronomy best ignored, to arouse public enthusiasm for exploration into the astronomical content of myth." The book engaged in inflammatory mockery of its presumptive "custodial" opponents, for instance calling some "fertility addicts" and "the Fecundity-Trust" and calling Ernst Cassirer "blinded by condescension," and notable reviews were correspondingly harshly critical, though others were critical of technical missteps but nonetheless positive. Personal life and death In 1948, Santillana married Dorothy Hancock Tilton (1904–1980), a descendant of John Hancock known for her work as an editor at the Houghton Mifflin Company beginning in 1940 and particularly for her effort to publish American chef, cooking author, and television personality Julia Child; she is depicted in the film Julie & Julia. During his time at the Massachusetts Institute of Technology, Santillana was noted for his personal friendships with founding figures of cybernetics including Norbert Wiener, Jerome Lettvin, Warren McCulloch, and Walter Pitts, as well as the astronomer and nuclear scientist Philip Morrison. A colorful anecdote describes him as Wiener's personal Tarot reader. After a long illness that began in the mid-1960s and a couple of years spent in Beverly Farms, Massachusetts, Santillana died on 8 June 1974 in Dade County, Florida. Selected publications Articles "Galileo and J. Robert Oppenheimer". The Reporter, December 26, 1957. pp. 10–18; reprinted in Reflections on Men and Ideas (1968, below). "The Seventeenth-Century Legacy: Our Mirror of Being". Daedalus, vol. 87, no. 1 (Winter, 1958), pp. 35–56; with Stillman Drake, "Review: Arthur Koestler and His Sleepwalkers". Isis, vol. 50, no. 3 (September, 1959), pp.
255–260; Books with Edgar Zilsel, "Development of Rationalism and Empiricism". In International Encyclopedia of Unified Science: Foundations of the Unity of Science; vol. 2, no. 8, University of Chicago Press, 1941. Dialogue on the Great World Systems: In the Salusbury Translation; Revised and Annotated with an Introduction by Giorgio de Santillana. The University of Chicago Press, 1953. The Crime of Galileo. University of Chicago Press, 1955. Leonardo Da Vinci. Reynal & Company, 1956. The Mentor Philosophers: The Age of Adventure: Renaissance Philosophers. New American Library, 1956. The Origins of Scientific Thought: From Anaximander to Proclus, 600 BC to 300 AD. University of Chicago Press, 1961. Reflections on Men and Ideas. MIT Press, 1968. ISBN 9780262040167. with Hertha von Dechend, Hamlet's Mill: An Essay on Myth and the Frame of Time. Gambit Inc., 1969. Notes Further reading Isis, a professional journal of the history of science, included a remembrance of Santillana by his friend and fellow historian of science Nathan Sivin in Volume 67 (1976), pages 439–443. External links 1902 births 1974 deaths 20th-century American historians American male non-fiction writers Italian emigrants to the United States Santillana, Giorgio de Archaeoastronomy Writers from Rome Naturalized citizens of the United States 20th-century American male writers Italian historians of philosophy American historians of philosophy
Giorgio de Santillana
Astronomy
1,896
55,743,024
https://en.wikipedia.org/wiki/Yan%20Ning
Yan Ning (; born 21 November 1977; also spelled as Nieng Yan) is a Chinese structural biologist. She has been serving as professor of life science at Tsinghua University since December 2022. She previously served as a professor at Princeton University from 2017 to 2022 and a faculty member at Tsinghua University from 2007 to 2017. Early life and education Yan was born in Zhangqiu District of Jinan, Shandong, China in 1977. Yan received a Bachelor of Science from the Department of Biological Sciences and Biotechnology at Tsinghua University in 2000. She received a Doctor of Philosophy in molecular biology from Princeton University in 2004. Her doctoral adviser was Shi Yigong. She was a postdoctoral researcher at Princeton University from 2004 to 2007. Yan was the regional winner of the Young Scientist Award in North America, which is co-sponsored by Science/AAAS and GE Healthcare, for her thesis on the structural and mechanistic study of programmed cell death. She continued her postdoctoral training at Princeton, focusing on the structural characterization of intramembrane proteases, until 2007. Career In 2007, she joined Tsinghua University as a faculty member at the university's School of Medicine and Life Sciences. In 2012, she was promoted to tenured professor at Tsinghua University. Her research focused on the structure and mechanism of membrane transport proteins, exemplified by the glucose transporter GLUT1 and voltage-gated sodium and calcium channels. In 2017, Yan decided to leave Tsinghua and join Princeton University as the Shirley M. Tilghman Professor of Molecular Biology. The move gained widespread attention in China and led to a national discussion both within the science community and among the general public. The cause was widely speculated to be the difficulty of pursuing the research she wanted under China's academic system, as she had criticized the China National Natural Science Foundation's reluctance to support high-risk research in a series of blogs.
However, Yan dismissed this claim later, stating that "changing one's environment can bring new pressure and inspiration for academic breakthroughs". For her research achievements, Yan has won a number of prizes. She was an HHMI international early career scientist in 2012–2017, the recipient of the 2015 Protein Society Young Investigator Award, the 2015 Beverley & Raymond Sackler International Prize in Biophysics, the Alexander M. Cruickshank Award at the GRC on membrane transport proteins in 2016, the 2018 FAOBMB Award for Research Excellence, and the 2019 Weizmann Women & Science Award. Yan was elected a foreign associate of the United States National Academy of Sciences in April 2019. Yan was elected a foreign associate of the American Academy of Arts and Sciences in April 2021. In the same year, she became a visiting professor at the School for Biotechnology and Biomolecular Sciences in the University of New South Wales under the university's Strategic Hires and Retention Pathways (SHARP) program. On November 1, 2022, while speaking at the Shenzhen Global Innovation Forum of Talents, Yan announced that she would resign from her position at Princeton and return to China to become the founding dean of the Shenzhen Medical Academy of Research and Translation. In December 2022, she resigned from Princeton and returned to China, where she accepted her new position. On March 22, 2023, Yan was appointed as director of the Shenzhen Bay Laboratory. Yan was elected a foreign associate of the European Molecular Biology Organization in July 2023. In November 2023, Yan was elected as an academician of the Chinese Academy of Sciences. Personal life Yan is a close friend of Li Yinuo, whom she met as a student at Tsinghua University.
Honors and awards References External links Nieng Yan's page 1977 births Living people Chinese women biologists Princeton University alumni Princeton University faculty Tsinghua University alumni Biologists from Shandong People from Laiwu Members of the Chinese Academy of Sciences Chinese molecular biologists Foreign associates of the National Academy of Sciences Educators from Shandong Structural biologists Chinese expatriates in the United States L'Oréal-UNESCO Awards for Women in Science laureates Graduate Women in Science members
Yan Ning
Chemistry
835
3,666,033
https://en.wikipedia.org/wiki/Intelligent%20character%20recognition
Intelligent character recognition (ICR) is used to extract handwritten text from images. It is a more sophisticated type of OCR technology that recognizes different handwriting styles and fonts to intelligently interpret data on forms and physical documents. These paper documents are scanned, the information is extracted, and the data is then digitally stored in a database program using ICR technology. The data is utilized for analytical reporting and is integrated with business processes. ICR technology is used by businesses to organize unstructured data and obtain current information from these reports. Users can rapidly read handwritten data on paper using ICR, then convert it to a digital format. ICR algorithms collaborate with OCR to automate data entry from forms by removing the need for keystrokes. It has a high degree of accuracy and is a dependable method for processing various documents quickly. Capabilities Most ICR software has a self-learning system referred to as a neural network, which automatically updates the recognition database for new handwriting patterns. It extends the usefulness of scanning devices for the purpose of document processing, from printed character recognition (a function of OCR) to handwritten matter recognition. Because this process involves recognizing handwriting, accuracy levels may, in some circumstances, not be very good, but can achieve 97%+ accuracy rates in reading handwriting in structured forms. Often, to achieve these high recognition rates, several read engines are used within the software and each is given elective voting rights to determine the true reading of characters. In numeric fields, engines which are designed to read numbers take preference, while in alpha fields, engines designed to read handwritten letters have higher elective rights.
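The multi-engine voting scheme described above can be sketched in a few lines of Python. The engine names and weights below are illustrative assumptions, not taken from any particular ICR product:

```python
from collections import Counter

def vote_character(readings, weights):
    """Combine candidate readings from several recognition engines by
    weighted voting. `readings` maps engine name -> recognized character;
    `weights` maps engine name -> voting weight for the current field type
    (e.g. digit-tuned engines get higher weight in numeric fields)."""
    tally = Counter()
    for engine, char in readings.items():
        tally[char] += weights.get(engine, 1.0)
    # Return the character with the highest total weight.
    return tally.most_common(1)[0][0]

# Numeric field: engines tuned for digits outweigh the general-purpose one,
# so "5" beats the look-alike letter "S".
numeric_weights = {"digit_engine_a": 3.0, "digit_engine_b": 3.0, "alpha_engine": 1.0}
readings = {"digit_engine_a": "5", "digit_engine_b": "5", "alpha_engine": "S"}
print(vote_character(readings, numeric_weights))  # 5
```

With the weights reversed for an alpha field, the same readings would resolve toward the letter instead, which is the "elective rights" behavior the text describes.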
When used in conjunction with a bespoke interface hub, hand-written data can be automatically populated into a back office system, avoiding laborious manual keying, and can be more accurate than traditional human data entry. Automated Forms Processing An important development of ICR was the invention of Automated Forms Processing in 1993 by Joseph Corcoran, who was awarded a patent on the invention. This involved a three-stage process: capturing the image of the form to be processed by ICR and preparing it to enable the ICR engine to give best results, then capturing the information using the ICR engine, and finally processing the results to automatically validate the output from the ICR engine. This application of ICR increased the usefulness of the technology and made it applicable for use with real-world forms in normal business applications. Modern software applications use ICR as a technology for recognizing text in forms filled in by hand (hand-printed). Differences between ICR and OCR OCR Optical character recognition (OCR) is commonly considered to apply to any recognition technique that reads machine-printed text. An example of a traditional OCR use case would be to translate the characters from an image of a printed document, such as a book page, newspaper clipping, or legal contract, into a separate file that could be searched and updated with a word processor or document viewer. It is also quite helpful for automating the processing of forms. Information can be swiftly extracted from form fields and entered into another application, like a spreadsheet or database, by zonally applying the OCR engine to those fields. Yet form fields are often filled in by hand rather than typed. Character identification becomes even more challenging when reading handwritten material. The diversity of more than 700,000 printed font variants is tiny compared to the near-unlimited variations in hand-printed characters.
The recognition program must take into account not just stylistic differences but also the kind of writing implement used, the quality of the paper, errors, hand stability, and smudges or running ink. ICR Intelligent character recognition (ICR) makes use of continuously improving algorithms to collect more information about the variances in hand-printed characters and more precisely identify them. ICR, which was created in the early 1990s to aid in the automation of forms processing, enables the conversion of manually entered data into text that is simple to read, search for, and change. It works best when used to read characters that are clearly divided into distinct areas or zones, such as the fixed fields seen on many structured forms. Both OCR and ICR can be configured to read a variety of languages; however, limiting the expected character set to a smaller number of languages will produce better recognition outcomes. ICR cannot read cursive handwriting, since it must still be able to assess each character individually. In cursive writing, it can be difficult to tell where one character ends and another begins, and there are more differences across samples than when hand-printing text. A more recent method called intelligent word recognition (IWR) focuses on reading a word in context rather than recognizing individual characters. Intelligent word recognition Intelligent word recognition (IWR) can recognize and extract not only hand-printed information but cursive handwriting as well. ICR recognizes at the character level, whereas IWR works with full words or phrases. Capable of capturing unstructured information from everyday pages, IWR is said to be more evolved than hand-print ICR. Not meant to replace conventional ICR and OCR systems, IWR is optimized for processing real-world documents that contain mostly free-form, hard-to-recognize data fields that are inherently unsuitable for ICR.
This means that the highest and best use of IWR is to eliminate a high percentage of the manual entry of handwritten data and run-on hand print fields on documents that otherwise could be keyed only by humans. See also Optical character recognition (OCR) Document automation Document layout analysis Document modelling Machine learning Outsourced document processing Text mining Reference List Applications of computer vision Automatic identification and data capture Computational linguistics Optical character recognition
Intelligent character recognition
Technology
1,173
26,167,204
https://en.wikipedia.org/wiki/412P/WISE
412P/WISE (provisional name P/2010 B2) is a periodic comet in the Solar System. It is the first comet discovered by the space observatory WISE; it was first observed on January 22, 2010, and has since been followed by ground-based observatories, among them the Mauna Kea Observatory. The comet has an orbital period of 4.7 years, an aphelion of 4 astronomical units, and a perihelion of 1.6 astronomical units. References Periodic comets 0412 Astronomical objects discovered in 2010
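The quoted orbital elements are mutually consistent, which a quick application of Kepler's third law confirms. This is an illustrative check, not part of the discovery record; the semi-major axis is taken as the mean of the quoted aphelion and perihelion distances:

```python
# Check the comet's quoted 4.7-year period against Kepler's third law,
# using the quoted aphelion (4 AU) and perihelion (1.6 AU).

aphelion_au = 4.0
perihelion_au = 1.6

# The semi-major axis is the mean of the aphelion and perihelion distances.
a = (aphelion_au + perihelion_au) / 2  # 2.8 AU

# Kepler's third law for a heliocentric orbit: P^2 = a^3 (P in years, a in AU).
period_years = a ** 1.5

print(round(a, 1))             # 2.8
print(round(period_years, 1))  # 4.7, matching the quoted orbital period
```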
412P/WISE
Astronomy
108
2,174,184
https://en.wikipedia.org/wiki/Piecewise%20linear%20manifold
In mathematics, a piecewise linear manifold (PL manifold) is a topological manifold together with a piecewise linear structure on it. Such a structure can be defined by means of an atlas, such that one can pass from chart to chart in it by piecewise linear functions. This is slightly stronger than the topological notion of a triangulation. An isomorphism of PL manifolds is called a PL homeomorphism. Relation to other categories of manifolds PL, or more precisely PDIFF, sits between DIFF (the category of smooth manifolds) and TOP (the category of topological manifolds): it is categorically "better behaved" than DIFF — for example, the Generalized Poincaré conjecture is true in PL (with the possible exception of dimension 4, where it is equivalent to DIFF), but is false generally in DIFF — but is "worse behaved" than TOP, as elaborated in surgery theory. Smooth manifolds Smooth manifolds have canonical PL structures — they are uniquely triangulizable, by Whitehead's theorem on triangulation — but PL manifolds do not always have smooth structures — they are not always smoothable. This relation can be elaborated by introducing the category PDIFF, which contains both DIFF and PL, and is equivalent to PL. One way in which PL is better behaved than DIFF is that one can take cones in PL, but not in DIFF — the cone point is acceptable in PL. A consequence is that the Generalized Poincaré conjecture is true in PL for dimensions greater than four — the proof is to take a homotopy sphere, remove two balls, apply the h-cobordism theorem to conclude that this is a cylinder, and then attach cones to recover a sphere. This last step works in PL but not in DIFF, giving rise to exotic spheres. Topological manifolds Not every topological manifold admits a PL structure, and of those that do, the PL structure need not be unique—it can have infinitely many. This is elaborated at Hauptvermutung. 
The obstruction to placing a PL structure on a topological manifold is the Kirby–Siebenmann class. To be precise, the Kirby–Siebenmann class is the obstruction to placing a PL-structure on M × R, and in dimensions n > 4 the KS class vanishes if and only if M has at least one PL-structure. Real algebraic sets An A-structure on a PL manifold is a structure which gives an inductive way of resolving the PL manifold to a smooth manifold. Compact PL manifolds admit A-structures. Compact PL manifolds are homeomorphic to real-algebraic sets. Put another way, the A-category sits over the PL-category as a richer category with no obstruction to lifting, that is BA → BPL is a product fibration with BA = BPL × PL/A, and PL manifolds are real algebraic sets because A-manifolds are real algebraic sets. Combinatorial manifolds and digital manifolds A combinatorial manifold is a kind of manifold which is a discretization of a manifold. It usually means a piecewise linear manifold made by simplicial complexes. A digital manifold is a special kind of combinatorial manifold which is defined in digital space. See digital topology. See also Simplicial manifold Notes References Structures on manifolds Geometric topology Manifolds
Piecewise linear manifold
Mathematics
692
1,450,110
https://en.wikipedia.org/wiki/On%20Formally%20Undecidable%20Propositions%20of%20Principia%20Mathematica%20and%20Related%20Systems
"Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions of Principia Mathematica and Related Systems I") is a paper in mathematical logic by Kurt Gödel. Submitted November 17, 1930, it was originally published in German in the 1931 volume of Monatshefte für Mathematik und Physik. Several English translations have appeared in print, and the paper has been included in two collections of classic mathematical logic papers. The paper contains Gödel's incompleteness theorems, now fundamental results in logic that have many implications for consistency proofs in mathematics. The paper is also known for introducing new techniques that Gödel invented to prove the incompleteness theorems. Outline and key results The main results established are Gödel's first and second incompleteness theorems, which have had an enormous impact on the field of mathematical logic. These appear as theorems VI and XI, respectively, in the paper. In order to prove these results, Gödel introduced a method now known as Gödel numbering. In this method, each sentence and formal proof in first-order arithmetic is assigned a particular natural number. Gödel shows that many properties of these proofs can be defined within any theory of arithmetic that is strong enough to define the primitive recursive functions. (The contemporary terminology for recursive functions and primitive recursive functions had not yet been established when the paper was published; Gödel used the word rekursiv ("recursive") for what are now known as primitive recursive functions.) The method of Gödel numbering has since become common in mathematical logic. Because the method of Gödel numbering was novel, and to avoid any ambiguity, Gödel presented a list of 45 explicit formal definitions of primitive recursive functions and relations used to manipulate and test Gödel numbers. 
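The prime-power encoding at the heart of Gödel numbering can be made concrete with a short sketch. The following Python is illustrative only: it encodes a finite sequence of positive symbol codes as 2^a1 · 3^a2 · 5^a3 · …, which follows the general idea rather than reproducing Gödel's exact scheme.

```python
# A minimal illustration of Gödel numbering: encode a finite sequence of
# positive integers (symbol codes) as 2^a1 * 3^a2 * 5^a3 * ..., and decode
# it back by prime factorization. Codes must be positive for decode to work.

def primes():
    """Yield the primes 2, 3, 5, ... by trial division."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def encode(codes):
    n = 1
    for p, a in zip(primes(), codes):
        n *= p ** a
    return n

def decode(n):
    codes = []
    for p in primes():
        if n == 1:
            break
        a = 0
        while n % p == 0:
            n //= p
            a += 1
        codes.append(a)
    return codes

# The sequence (1, 2, 3) becomes 2^1 * 3^2 * 5^3 = 2250.
print(encode([1, 2, 3]))  # 2250
print(decode(2250))       # [1, 2, 3]
```

Because prime factorizations are unique, distinct sequences always receive distinct numbers, which is the property Gödel's arithmetization relies on.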
He used these to give an explicit definition of a formula Bew(x) that is true if and only if x is the Gödel number of a sentence and there exists a natural number that is the Gödel number of a proof of that sentence. The name of this formula derives from Beweis, the German word for proof. A second new technique invented by Gödel in this paper was the use of self-referential sentences. Gödel showed that the classical paradoxes of self-reference, such as "This statement is false", can be recast as self-referential formal sentences of arithmetic. Informally, the sentence employed to prove Gödel's first incompleteness theorem says "This statement is not provable." The fact that such self-reference can be expressed within arithmetic was not known until Gödel's paper appeared; independent work of Alfred Tarski on his indefinability theorem was conducted around the same time but not published until 1936. In footnote 48a, Gödel stated that a planned second part of the paper would establish a link between consistency proofs and type theory (hence the "I" at the end of the paper's title, denoting the first part), but Gödel did not publish a second part of the paper before his death. His 1958 paper in Dialectica did, however, show how type theory can be used to give a consistency proof for arithmetic. Published English translations During Gödel's lifetime, three English translations of his paper were printed, but the process was not without difficulty. The first English translation was by Bernard Meltzer; it was published in 1962 as a standalone work by Basic Books and has since been reprinted by Dover and reprinted in Hawking's anthology God Created the Integers (Running Press, 2005:1097ff). The Meltzer version—described by Raymond Smullyan as a 'nice translation'—was adversely reviewed by Stefan Bauer-Mengelberg (1966). 
According to Dawson's biography of Gödel (Dawson 1997:216), Fortunately, the Meltzer translation was soon supplanted by a better one prepared by Elliott Mendelson for Martin Davis's anthology The Undecidable; but it too was not brought to Gödel's attention until almost the last minute, and the new translation was still not wholly to his liking ... when informed that there was not time enough to consider substituting another text, he declared that Mendelson's translation was 'on the whole very good' and agreed to its publication. [Afterward he would regret his compliance, for the published volume was marred throughout by sloppy typography and numerous misprints.] The translation by Elliott Mendelson appears in the collection The Undecidable (Davis 1965:5ff). This translation also received a harsh review by Bauer-Mengelberg (1966), who in addition to giving a detailed list of the typographical errors also described what he believed to be serious errors in the translation. A translation by Jean van Heijenoort appears in the collection From Frege to Gödel: A Source Book in Mathematical Logic (van Heijenoort 1967). A review by Alonzo Church (1972) described this as "the most careful translation that has been made" but also gave some specific criticisms of it. Dawson (1997:216) notes: The translation Gödel favored was that by Jean van Heijenoort ... In the preface to the volume van Heijenoort noted that Gödel was one of four authors who had personally read and approved the translations of his works. This approval process was laborious. Gödel introduced changes to his text of 1931, and negotiations between the men were "protracted": "Privately van Heijenoort declared that Gödel was the most doggedly fastidious individual he had ever known." Between them they "exchanged a total of seventy letters and met twice in Gödel's office in order to resolve questions concerning subtleties in the meanings and usage of German and English words." (Dawson 1997:216-217). 
Although not a translation of the original paper, a very useful 4th version exists that "cover[s] ground quite similar to that covered by Godel's original 1931 paper on undecidability" (Davis 1952:39), as well as Gödel's own extensions of and commentary on the topic. This appears as On Undecidable Propositions of Formal Mathematical Systems (Davis 1965:39ff) and represents the lectures as transcribed by Stephen Kleene and J. Barkley Rosser while Gödel delivered them at the Institute for Advanced Study in Princeton, New Jersey in 1934. Two pages of errata and additional corrections by Gödel were added by Davis to this version. This version is also notable because in it Gödel first describes the Herbrand suggestion that gave rise to the (general, i.e. Herbrand–Gödel) form of recursion. References Stefan Bauer-Mengelberg (1966). Review of The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable problems and Computable Functions. The Journal of Symbolic Logic, Vol. 31, No. 3. (September 1966), pp. 484–494. Alonzo Church (1972). Review of A Source Book in Mathematical Logic 1879–1931. The Journal of Symbolic Logic, Vol. 37, No. 2. (June 1972), p. 405. Martin Davis, ed. (1965). The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems and Computable Functions, Raven, New York. Reprint, Dover, 2004. . Martin Davis, (2000). Engines of Logic: Mathematics and the Origin of the Computer, W. W. Norton & Company, New York. pbk. Kurt Gödel (1931), "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I". Monatshefte für Mathematik und Physik 38: 173–198. . Available online via SpringerLink. Kurt Gödel (1958). "Über eine bisher noch nicht benüzte Erweiterung des finiten Standpunktes". Dialectica v. 12, pp. 280–287. Reprinted in English translation in Gödel's Collected Works, vol II, Soloman Feferman et al., eds. Oxford University Press, 1990. Jean van Heijenoort, ed. (1967). 
From Frege to Gödel: A Source Book on Mathematical Logic 1879–1931. Harvard University Press. Bernard Meltzer (1962). On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Translation of the German original by Kurt Gödel, 1931. Basic Books, 1962. Reprinted, Dover, 1992. . Raymond Smullyan (1966). Review of On Formally Undecidable Propositions of Principia Mathematica and Related Systems. The American Mathematical Monthly, Vol. 73, No. 3. (March 1966), pp. 319–322. John W. Dawson, (1997). Logical Dilemmas: The Life and Work of Kurt Gödel, A. K. Peters, Wellesley, Massachusetts. . External links "On formally undecidable propositions of Principia Mathematica and related systems I". Translated by Martin Hirzel, November 27, 2000. "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" on Wilhelm K. Essler's (Prof. em. of Logic, Goethe-Universität Frankfurt am Main) webpage Mathematical logic Mathematics papers 1931 in science 1931 documents Works originally published in German magazines Works originally published in science and technology magazines Logic literature Works by Kurt Gödel
On Formally Undecidable Propositions of Principia Mathematica and Related Systems
Mathematics
2,033
3,674,008
https://en.wikipedia.org/wiki/Puka%20shell
Puka shells are naturally occurring bead-like shells found on beaches in Hawaii and elsewhere. Each bead is the beach-worn apex of a cone snail shell. Such shells are often strung as necklaces, known as puka shell necklaces. Puka is the Hawaiian word for "hole" and refers to the naturally occurring hole in the middle of these rounded and worn shell fragments. Numerous inexpensive imitations are now widely sold as puka shell necklaces. The majority of contemporary "puka shell necklaces" are not made from cone shells, but from other shells, or even from plastic. In addition, some strings of beads currently sold are made from cone shells, but the beads in these necklaces were not formed by natural processes. They were instead worked by hand from whole shells, using pliers to break each shell down to the needed part and then subjecting the rough results to tumble finishing, giving each bead more or less smooth edges in imitation of the natural wear and tear a shell receives when tumbled in the surf over long periods of time. The original all-natural puka shells were very easily made into necklaces, bracelets and anklets because they were naturally pierced, which enabled them to be strung like beads. Such jewellery was often gifted by Hawaii's royal families to foreign dignitaries, but it was only during the tourism boom of the 1960s, after the islands' admission into the United States, that it became massively popular as an attractive and inexpensive lei that could be made and sold on the beach. In the 1970s, this type of shell jewelry became highly sought after by celebrities such as Elizabeth Taylor, and prices skyrocketed. The craftsmanship also became more refined, and the lei pūpū puka (puka shell lei) was strung in graduated or matching styles rather than the original random patterns. It became highly sought after again in the 1990s, beginning with Californian surf culture. 
Many "legends" about the puka shell were created during this time, and these stories also helped sales. Natural puka shell formation The terminal helix of the shell of a cone snail is cone-shaped, and closed at the apex. When the empty shell is rolled over a long time by the waves in the breaking surf and coral rubble, the terminal helix of the shell breaks off or is gradually ground off, leaving the solid top of the shell intact. Given enough time, the tip of the spire of the shell usually also wears down, and thus a natural hole is formed from one side to the other. This shell fragment can be viewed as a sort of a natural bead, and is known in Hawaii as a "puka". Real puka shells are not flat: one side of the bead is slightly convex; the other is concave. The concave side of the bead clearly shows the spiral form of the interior of the spire of the cone shell. Modern substitutes Naturally-formed rounded cone shell fragments suitable to be used as beads are hard to find in large quantities, so true puka jewelry, formed entirely naturally, is now uncommon. Shell jewellery made from naturally occurring puka shells is also now more expensive because of the labor and time involved in locating and hand-picking these rather uncommon shell fragments from the beach drift. In modern times, beads cut from other types of shell, or even beads of plastic, are used to make imitation puka jewellery. Cone snail shells are sometimes harvested so that they can be chipped and ground down to make more authentic-looking puka jewellery, which is however still not genuine by the standards of the originals. A very glossy patina indicates that the shells in a necklace have been tumble polished. If the edges of the shell beads are chipped, the shells were harvested and manually broken into shape. If the "puka" or central hole is perfectly circular and parallel-sided, then the hole was drilled by humans. 
See also Shell jewelry Wampum References Books Jewellery components Symbols of Hawaii Hawaii culture Seashells in art 1960s fashion 1970s fashion 1990s fashion 2000s fashion
Puka shell
Technology
826
183,071
https://en.wikipedia.org/wiki/Ephedraceae
Ephedraceae is a family of gymnosperms belonging to Gnetophyta. It contains only a single extant genus, Ephedra, as well as a number of extinct genera from the Early Cretaceous. Taxonomy Ephedraceae is agreed to be the most basal group amongst extant gnetophytes. Members of the family typically grow as shrubs and have small, linear leaves that possess parallel veins. The fossil Ephedraceae genera show a range of morphologies transitional between the ancestral lax male and female reproductive structures and the highly compact reproductive structures typical of modern Ephedra. Modern members of Ephedra have either dry winged membranous bracts (modified leaves which surround the seed), which are dispersed by wind, leathery covered seeds, which are dispersed by seed-eating rodents, or fleshy bracts which are consumed and then dispersed by birds. Some extinct members of Ephedra from the Early Cretaceous, such as Ephedra carnosa, as well as Arlenea from the Early Cretaceous of Brazil, have fleshy bracts surrounding the seeds, suggesting that these seeds were dispersed by animals. Genera Ephedra L. Early Cretaceous-Recent Arlenea Ribeiro, Yang, Saraiva, Bantim, Calixto Junior et Lima, 2023 Crato Formation, Brazil, Early Cretaceous (Aptian) Leongathia V.A. Krassilov, D.L. Dilcher & J.G. 
Douglas 1998 Koonwarra fossil bed, Australia, Early Cretaceous (Aptian) Jianchangia Yang, Wang and Ferguson, 2020 Jiufotang Formation, China, Early Cretaceous (Aptian) Eamesia Yang, Lin and Ferguson, 2018 Yixian Formation, China, Early Cretaceous (Aptian) Prognetella Krassilov et Bugdaeva, 1999 Yixian Formation, China, Early Cretaceous (Aptian) (initially interpreted as an angiosperm) Chengia Yang, Lin & Wang, 2013, Yixian Formation, China, Early Cretaceous (Aptian) Chaoyangia Duan, 1998 Yixian Formation, China, Early Cretaceous (Aptian) Eragrosites Cao & Wu, 1998 Yixian Formation, China, Early Cretaceous (Aptian) Gurvanella Krassilov, 1982 China, Mongolia, Early Cretaceous Alloephedra Tao and Yang, 2003 China, Early Cretaceous (considered a synonym of Ephedra by some authors) Amphiephedra Miki, 1964 China, Early Cretaceous Beipiaoa Dilcher & al, 2001 China, Early Cretaceous Ephedrispermum Rydin, K.R.Pedersen, P.R.Crane et E.M.Friis, 2006 Portugal, Early Cretaceous (Aptian-Albian) Ephedrites Guo and Wu, 2000 China, Early Cretaceous Erenia Krassilov, 1982 China, Mongolia, Early Cretaceous Liaoxia Cao et al. 1998 China, Early Cretaceous Dichoephedra Ren et al. 2020 China, Early Cretaceous Laiyangia P.H. Jin, 2024 China, Early Cretaceous ?Pseudoephedra Liu and Wang, 2015 China, Early Cretaceous References Plant families Taxa named by Barthélemy Charles Joseph Dumortier
Ephedraceae
Biology
673
74,225,895
https://en.wikipedia.org/wiki/NGC%203583
NGC 3583 is a barred spiral galaxy located in the constellation Ursa Major. It lies at a distance of circa 90 million light years from Earth, which, given its apparent dimensions, means that NGC 3583 is about 85,000 light years across. It was discovered by William Herschel on February 5, 1788. NGC 3583 has a bright nucleus and an elliptical bulge. It has a prominent bar which is about 30 arcseconds long. The spiral arms start approximately at the radius of the end of the bar. One arm emanates from the northwest edge of the bulge. At its start it is quite diffuse, but after passing near the bar it becomes better defined. After reaching the southeast part of the galaxy it starts to fade. The other arm emanates from the southeast end of the bar. It appears to branch after passing the other side of the bar, with the two branches running parallel. The galaxy has many HII regions. The star formation rate in the galaxy, based on the infrared luminosity, is 9.8 solar masses per year. NGC 3583 is the foremost galaxy in the NGC 3583 galaxy group. Also a member of the group is the galaxy NGC 3595, while a bit farther away lie NGC 3614 and its group. NGC 3583 forms a pair with a small elliptical galaxy, which lies 0.9 arcminutes from NGC 3583. Barred spiral galaxy NGC 3577 lies at a projected distance of 5 arcminutes. Supernovae Two supernovae have been observed in NGC 3583: SN 1975P (type unknown, mag. 15) was discovered by Charles Kowal on 1 December 1975. ASASSN-15so (type Ia supernova, mag. 15.1) was discovered by ASAS-SN on 8 November 2015, about 10 days before maximum light. See also List of NGC objects (3001–4000) References External links Barred spiral galaxies Ursa Major 3583 06778 034232 Astronomical objects discovered in 1788 Discoveries by William Herschel
NGC 3583
Astronomy
424
23,877,634
https://en.wikipedia.org/wiki/Bacillomycin
Bacillomycins are a group of antifungal polypeptide antibiotics isolated from Bacillus subtilis. Examples include: Bacillomycin A (fungosin, structure unknown) Bacillomycin C (structure unknown) Bacillomycin D (C45H68N10O15) Bacillomycin F (C52H84N12O14) Bacillomycin Fc (C52H84N12O14) Bacillomycin L (Landy substance) Bacillomycin S (structure unknown) References Polypeptide antibiotics Antifungals
Bacillomycin
Chemistry
139
72,535,255
https://en.wikipedia.org/wiki/Kepler-1658b
Kepler-1658b (or the Kepler object of interest, KOI-4.01) is a hot Jupiter, a type of gas giant exoplanet, that orbits an F-type star called Kepler 1658, located about 2629 light-years away from the Solar System. It was the first planet identified by the Kepler space telescope after its launch in 2009, but it was later ruled out as a false alarm since its transit could not be confirmed. A study published in 2019 established it as a planet, describing it as "the closest known planet in terms of orbital period to an evolved star." Analysis of Transiting Exoplanet Survey Satellite (TESS) data in 2022 showed that it is gradually spiraling into its star. History Named after German astronomer Johannes Kepler, the Kepler space telescope was launched by NASA in 2009 to discover planets orbiting other stars. In June 2010, data from the first observations were publicly released, identifying 705 stars as hosting exoplanet candidates. In January 2011, the identification of 305 stars as containing planets was published in the Kepler Input Catalogue, with the candidate planets designated Kepler objects of interest (KOIs). The F-type star KOI-4 was among the observed systems. Before 2009, KOI-1 to KOI-3 were already known as possible exoplanet-bearing stars; KOI-4.01 was thus the first exoplanet identified by the Kepler spacecraft. KOI-4.01 was seen to block a small amount of starlight from KOI-4, which indicated that it was a transiting planet. The size of KOI-4 was estimated to be slightly larger than the Sun, by about 1.1 times, with its planet about the size of Neptune. A secondary eclipse was observed that still showed a dip in starlight; such a dip was not expected from a planet as small as KOI-4.01, so the candidate was dismissed as a false alarm. In 2016, Ashley Chontos, then a first-year graduate student at the University of Hawaiʻi in Honolulu, started analysing the Kepler data. 
She and her collaborators confirmed in February 2019 that KOI-4.01 is a real planet, a hot Jupiter. Chontos announced it on 5 March at NASA’s Kepler & K2 science conference in Glendale, California, and published it on 29 April in The Astronomical Journal. The study described it as "the closest known planet in terms of orbital period to an evolved star" and an "insight into theories for hot Jupiter formation and migration." The planet was named Kepler-1658b, referring to the entry number in the Kepler Catalogue. After running out of fuel, the Kepler space telescope was retired in 2018, and the study was taken over by the Transiting Exoplanet Survey Satellite (TESS). Description KOI-4 is about 2.9 times the size of the Sun, not 1.1 times larger as initially estimated. This estimate makes Kepler-1658b larger than Neptune, about 1.07 times the size of Jupiter, with a mass of 5.88 Jupiters. Kepler-1658b is a gas giant exoplanet of the hot Jupiter type. It is located 0.0544 AU from KOI-4 and takes 3.8 Earth days to complete one orbit around its star. TESS observations published in 2022 showed that Kepler-1658b has a decreasing orbital period at a rate of about  milliseconds per year and is spiralling into its star due to tidal deceleration, at which rate it will be consumed in around 2.5 million years. It is the second known planet whose orbit is decaying towards destruction in its own star, after WASP-12b. Scientists said that such a process could explain how other planets, including the Earth, would end as their host stars evolve to the giant star phase. References Cygnus (constellation) Exoplanets discovered by the Kepler space telescope Transiting exoplanets Kepler objects of interest Exoplanets discovered in 2019
Kepler-1658b
Astronomy
844
78,151,972
https://en.wikipedia.org/wiki/Balipodect
Balipodect (, ; developmental code name TAK-063) is a selective phosphodiesterase 10A (PDE10A) inhibitor which was under development by Takeda for the treatment of schizophrenia. It is active in animal models of antipsychotic-like activity, including inhibition of hyperlocomotion induced by the NMDA receptor antagonist dizocilpine (MK-801) or the dopamine releasing agent methamphetamine, inhibition of conditioned avoidance responses, and reversal of prepulse inhibition deficits. The drug reached phase 2 clinical trials for this indication but its development was discontinued. It was reported to be poorly effective or ineffective for schizophrenia in clinical trials. See also Mardepodect MK-8189 Osoresnontrine Papaverine Rolipram Tofisopam References Abandoned drugs Experimental drugs developed for schizophrenia Fluoroarenes Ketones PDE10 inhibitors Pyrazoles Pyridazines
Balipodect
Chemistry
201
446,843
https://en.wikipedia.org/wiki/Herding
Herding is the act of bringing individual animals together into a group (herd), maintaining the group, and moving the group from place to place—or any combination of those. Herding can refer either to the process of animals forming herds in the wild, or to human intervention forming herds for some purpose. While the layperson uses the term "herding" to describe this human intervention, most individuals involved in the process term it mustering, "working stock", or droving. Some animals instinctively gather together as a herd. A group of animals fleeing a predator will demonstrate herd behavior for protection, while some predators, such as wolves and dogs, have instinctive herding abilities derived from primitive hunting instincts. Instincts in herding dogs, and their trainability, can be measured at noncompetitive herding tests. Dogs exhibiting basic herding instincts can be trained to aid in herding and to compete in herding and stock dog trials. Sperm whales have also been observed teaming up to herd prey in a coordinated feeding behavior. Herding is used in agriculture to manage domesticated animals. Herding can be performed by people or by trained animals such as herding dogs that control the movement of livestock under the direction of a person. The people whose occupation it is to herd or control animals often have herd added to the name of the animal they are herding to describe their occupation (shepherd, goatherd, cowherd). Many people use dogs to help herd animals. Most herbivores live in groups called herds, moving from place to place as they graze grassland. Special maps show where large herds of these animals range, from the United States to Canada. Herding began over 10,000 years ago, as prehistoric hunters domesticated different animals. A competitive sport has developed in some countries where the combined skill of man and herding dog is tested and judged in a "trial", such as a sheepdog trial. 
Herders mostly rear animals such as sheep, camels, yaks, and goats, which provide milk, meat and other products to the herders and their families. References Animal migration
Herding
Biology
434
57,052,146
https://en.wikipedia.org/wiki/Perenniporia%20meridionalis
Perenniporia meridionalis is a poroid crust fungus in the family Polyporaceae. It was described as a new species by Cony Decock and Joost Stalpers in 2006. The holotype specimen was collected in the Province of Nuoro in Italy, where it was found growing on dead wood of Quercus ilex. Distinguishing characteristics of this fungus include its relatively large pores (typically numbering 3 or 4 per millimetre), the hyaline vegetative hyphae that are yellowish to slightly dextrinoid in Melzer's reagent, and large spores measuring 6.0–7.7 by 4.5–6.2 μm. P. meridionalis occurs in central and southern Europe, where it is found in warmer forested areas, usually on dead oak wood. It has also been reported to occur in North America. References Perenniporia Fungi described in 2006 Fungi of Europe Fungi of North America Fungus species
Perenniporia meridionalis
Biology
204
45,456,791
https://en.wikipedia.org/wiki/Zam-Buk
Zam-Buk is a patent medicine which was produced by the Zam-Buk Company of Leeds, England, founded by Charles Edward Fulford. It was first sold by his Bile Beans company in 1902 as a herbal balm and antiseptic ointment; the use of a complementary Zam-Buk soap was recommended to augment the treatment. The ointment was advertised as being effective against a wide range of conditions, including cuts, bruises, sprains, ulcers, bleeding piles and even colds and toothache. It could also be used as an embrocation by rubbing it into the muscles of the back, legs or feet. The source of the name is uncertain, but a link to South Africa has been suggested. It remains very popular in South Africa. The brand name was at one time used to refer to ambulance-men and first aiders at rugby league matches in Australia and New Zealand. The product is still manufactured today, often by Bayer, which now owns the trade mark in some, but not all, countries. It is available in Southern Africa, Malaysia, Indonesia and Thailand. Formulation In the early 20th century, it was reported that the formulation comprised 66% paraffin wax, 20% pale resin (colophony), and 14% eucalyptus oil, with small amounts of other ingredients. More recently, the composition has been reported to include eucalyptus oil, 1.8% camphor, 0.5% thyme oil, and 0.65% sassafras oil. The English and Thai varieties do not contain sassafras oil. A 1908 report published in The British Medical Journal estimated that the cost of ingredients for a standard box was one farthing, yet its retail price was 1s 1½d. Sports use In earlier times Zam-Buk was widely used as an all-round antiseptic and healing ointment, particularly by rugby players worldwide. Branding and production When Radio Luxembourg started longwave commercial radio broadcasts in English in 1933, its first advertisers were Fulford's Zam-Buk and Bile Beans. 
The Zam-Buk brand and trademark were eventually acquired by Fisons, but production ceased in 1994 after the business was sold to Rhone-Poulenc; the product was revived in the United Kingdom by Rose & Co. in 1996. After the original trademarks expired, Rose & Co successfully resisted a new application by a third party to register Zam-Buk as a trademark in 2008. As of 2015, the trademark for Zam-Buk is registered to Bayer Consumer Care AG in Australia, Canada and the United States. Zam-Buk is manufactured for Bayer in Thailand by Interthai Pharmaceutical Manufacturing and is distributed in Malaysia, Thailand, Singapore, Indonesia and South Africa. In Leeds, the tin boxes used for Zam-Buk were printed in Hunslet by Charles Lightowler, at their Hunslet Printing Sheds on Jack Lane. They also printed the Peps and Bile Beans tins, which were delivered by horse and cart to the Fulford's factory on Carlton Hill, off Woodhouse Lane. References Citations Bibliography Drug brand names Ointments Patent medicines
Zam-Buk
Chemistry
648
5,967,799
https://en.wikipedia.org/wiki/Physical-to-Virtual
In computing, Physical-to-Virtual ("P2V" or "p-to-v") is the process of decoupling and migrating a physical server's operating system (OS), applications, and data from that physical server to a virtual-machine guest hosted on a virtualized platform. Methods of P2V migration Manual P2V The user manually creates a virtual machine in a virtual host environment and copies all the OS, application, and data files from the source machine. Semi-automated P2V Performing a P2V migration using a tool that assists the user in moving servers from a physical state to virtual machines. Microsoft's Virtual Server 2005 Migration Toolkit; see "HOWTO: Guideline for use of the Virtual Server Migration Toolkit" (KB555306). VMware provides a semi-automated tool called VMware vCenter Converter for moving physical servers running Windows or Linux into virtual environments while they are powered on. VMware vCenter Converter replaces two older utilities: Importer (bundled with VMware Workstation) and P2V Assistant. Oracle's VirtualBox has a Linux-based tool which allows the conversion of a dd image of an existing hard drive. Microsoft provides the Sysinternals disk2vhd utility for making images from Windows XP or later systems to be used with Windows Virtual PC, Microsoft Virtual Server or Hyper-V. openQRM, an open-source datacenter management platform, does P2V (and V2P, V2V or P2P). Storix's bare-metal recovery product (System Backup Administrator) provides P2V and V2P capabilities for Linux, AIX, and Solaris. Fully automated P2V Performing a P2V migration using a tool that migrates the server over the network without any assistance from the user. 
Veritas Backup Exec has a Physical to Virtual (P2V) conversion (and V2P) feature built into the backup engine, which can be used for migrations or instant recovery. vContinuum by InMage Systems is an automated P2V data protection/migration tool. Symantec System Recovery enables fast, automated P2V and V2P conversions. Quest vConverter is an example of a fully automated migration tool for P2V. System Center Virtual Machine Manager (SCVMM) has, among many other capabilities, P2V capability. Leostream is an example of a fully automated migration tool for P2V. PlateSpin Migrate is an example of a fully automated migration tool for P2V (and V2P, V2V or P2P). Virtuozzo contains automatic P2V (and V2P) migration tools. SanXfer by InQuinox is an automated server and data migration tool for P2V, V2P, and P2P with complete (server and storage) hardware independence. SureEdge Migrator by Sureline Systems is a fully automated server and data migration tool for P2V and V2V migration of Linux and Windows servers and applications (e.g. MS SQL Server, Oracle), with complete hardware independence and cloud-independent, any-cloud-to-any-cloud migration. See also Physicalization External links Manual physical-to-virtual migration description for OpenVZ References Virtualization software
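As a sketch of the manual/semi-automated path described above: capture a raw disk image with dd, then (optionally) convert it for a hypervisor with VirtualBox's VBoxManage convertfromraw. The paths and the use of /dev/zero as a safe stand-in for a physical disk are illustrative assumptions, not a production procedure:

```shell
#!/bin/sh
# Sketch of a manual P2V step: capture a raw disk image, then convert it
# for a hypervisor. In a real migration SRC would be the physical disk
# (e.g. /dev/sda); /dev/zero is used here as a harmless stand-in.
set -eu

SRC=/dev/zero
IMG=disk.img

# 1. Capture a raw image (only 1 MiB here, for illustration).
dd if="$SRC" of="$IMG" bs=1024 count=1024 2>/dev/null

# 2. Convert the raw image for the target hypervisor, if the tool is present.
#    VBoxManage ships with Oracle VirtualBox.
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage convertfromraw "$IMG" disk.vdi --format VDI
fi
```

A guest created from the resulting image will still usually need driver and boot-loader adjustments, which is exactly the work the semi- and fully automated tools listed above take over.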
Physical-to-Virtual
Technology
680
53,333,891
https://en.wikipedia.org/wiki/List%20of%20former%20planets
This is a list of astronomical objects formerly widely considered planets under any of the various definitions of this word in the history of astronomy. As the definition of planet has evolved, the de facto and de jure definitions of planet have changed over the millennia. As of 2024, there are eight official planets in the Solar System per the International Astronomical Union (IAU), which has also established a definition for exoplanets. Several objects formerly considered exoplanets have been found actually to be stars or brown dwarfs. Background Throughout antiquity, several astronomical objects were considered Classical Planets, meaning "wandering stars", not all of which are now considered planets. The moons discovered around Jupiter, Saturn and Uranus after the advent of the telescope were also initially considered planets by some. The development of more powerful telescopes resulted in the discovery of the asteroids, which were initially considered planets. Then Pluto, the first Trans-Neptunian Object, was discovered. More Trans-Neptunian Objects of the Kuiper Belt were found with the help of electronic imaging. One of these, Eris, was widely hailed as a "new planet", which prompted the 2006 recategorization of solar system bodies. Some planetary scientists reject the 2006 definition of planet, and thus would still consider some of the objects on this list to be planets under a geophysical definition. See the list of gravitationally rounded objects of the Solar System for a list of geophysical planets. List See also List of gravitationally rounded objects of the Solar System List of possible dwarf planets List of hypothetical Solar System objects Notes References Former planets Planets Solar System
List of former planets
Astronomy
329
64,088,128
https://en.wikipedia.org/wiki/Baubotanik
Baubotanik is a building method in which architectural structures are created through the interaction of technical joints and plant growth. The term entails the practice of designing and building living structures using living plants. In this regard, living and non-living elements are intertwined in such a way that they grow together into plant-technical composite structures. The Baubotanik method combines the aesthetic and ecological qualities of living trees with the static functions and structural requirements of buildings, thereby reducing the need for artificial building materials. The structures provide valuable habitats for a variety of animal species and make conventional foundations redundant, due to their root anchorage. The use of Baubotanik is not a new invention and can be found in various historical and cultural contexts, such as the Tanzlinden (“dancing lime”) tree in Germany and the living root bridges in North-East India. Common in the Indian state of Meghalaya and grown by the Khasi and Jaintia, the bridges consist of the aerial roots of rubber fig trees (Ficus elastica) and are grown over rivers to form walkable bridges. While the process can take fifteen years to complete, the bridges can be reinforced with natural materials and can withstand the strongest tropical storms. Furthermore, since the turn of the millennium, ‘willow churches’ (made of willow rods and lacking a fixed roof) have been constructed on various former garden show grounds, yet provide only limited functionality as buildings. Research An early publication in this field of study was the article Baubotanik: Mit lebenden Pflanzen konstruieren (translating to “Baubotanik: Designing with Living Plants”) by Ferdinand Ludwig and Oliver Storz in 2005 in the magazine Baumeister. 
The term “Baubotanik” was defined in 2007 at the Institute of Theory of Architecture and Design (Institut für Grundlagen moderner Architektur und Entwerfen) at the University of Stuttgart, where its concept was scientifically further developed. Within the scope of the research, simple experimental buildings were constructed, such as a footbridge and a Baubotanik tower that illustrated the possibilities of creating larger Baubotanik structures by adding individual plants. Moreover, a two-story bird-watching station was planted in the town of Waldkirchen as part of the Bavarian State Horticultural Show 2007. Subsequently, a three-story plane tree cube was created for the Baden-Württemberg State Horticultural Show 2012 in Nagold. Since 2017, the Baubotanik field of research has been based at the Professorship for Green Technologies in Landscape Architecture at the Technical University of Munich. See also References Literature Middleton, Wilfrid & Habibi, Amin & Shankar, Sanjeev & Ludwig, Ferdinand. (2020). Characterizing Regenerative Aspects of Living Root Bridges. Sustainability. 12. 10.3390/su12083267. Open access article link Well, Friederike & Ludwig, Ferdinand. (2020). Blue-green architecture: A case study analysis considering the synergetic effects of water and vegetation. 9. 191–202. 10.1016/j.foar.2019.11.001. Open access article link Ludwig, Ferdinand & Middleton, Wilfrid & Gallenmüller, Friederike & Rogers, Patrick & Speck, Thomas. (2019). Living bridges using aerial roots of ficus elastica – an interdisciplinary perspective. Scientific Reports. 9. 10.1038/s41598-019-48652-w. Open access article link Ludwig, Ferdinand & Schönle, Daniel & Vees, Ute. (2016). Baubotanik - Building Architecture with Nature. International Online Journal Biotope City. PDF download and open access article link Ludwig, Ferdinand & Mihaylov, Boyan & Schwinn, Tobias. (2013). 
Emergent Timber: A tool for designing the growth process of Baubotanik structures PDF download and open access article link External links Ferdinand Ludwig, TEDxTUM, Designing living buildings with trees Faculty of Architecture, Technical University of Munich, GTLA research (Professorship of Green Technologies in Landscape Architecture) Baubotanik shapes living tree branches into building facades Youtube video: Kirsten Dirksen Baubotanik: Ein Hybrid von Natur und Technik Youtube video: EGGER Group ArchDaily: Baubotanik - The Botanically Inspired Design System That Creates Living Buildings ArchDaily Grow Your Own Building with Baubotanik Architecture Grow Your Own Building with Baubotanik Architecture Sustainable architecture Buildings and structures Architectural design Sustainable building Environmental engineering Trees
Baubotanik
Chemistry,Engineering,Environmental_science
947
34,221,897
https://en.wikipedia.org/wiki/Parabounce
The Parabounce is a balloon-like apparatus created by Stephen Meadows and patented on December 4, 2001 (U.S. 6,325,329). Description and operation The apparatus consists of a gas-filled balloon envelope made from polyurethane-coated material and of a sufficient diameter and volume so that the balloon, when fully inflated and balanced with appropriate weight, almost counteracts the effect of gravity on a pilot. The Parabounce incorporates patented features that permit it to be weight-balanced and quickly deflated in case of an emergency. A parachute-style harness secures the pilot to the balloon. By pushing off the ground with his or her legs, the pilot ascends in the balloon to a maximum height of about 120 feet before gradually descending due to the positive weight of the pilot. Optional tether lines held by persons serving as the ground crew prevent the balloon from floating out of control. Once aloft, the pilot can float and glide for distances up to a quarter mile before gradually descending. In the news The Parabounce premiered on NBC's Today Show on August 9, 1999. Katie Couric flew the Parabounce a hundred feet above Rockefeller Center. On February 24, 2002, five Parabounce units were internally lit and supported acrobats for the closing ceremonies of the Winter Olympics in Salt Lake City, Utah. On June 29, 2006, Snapple sponsored Parabounce balloon flights at Bryant Park in New York City. Parabounce was featured in Inflatable Magazine, June/July 2003, pp. 46–47. See also Gas balloon Footnotes References 1. Farnham, Alan (October 18, 1999). 'It's a Bird, It's a Plane, It's...Parabounce?' Forbes Magazine. Retrieved 2012-01-08. 2. Miller, Martin (August 8, 1999). 'Putting a Bounce in Your Step', Los Angeles Times. Retrieved 2012-01-08. 3. Gromer, Cliff (September 2000). "One of A Kind", Popular Mechanics Magazine, p. 76. Retrieved 2012-01-08. 4. "The Light Stuff", Sports Illustrated Magazine, November 11, 1999, p. 39. Retrieved 2012-01-08. 5. Moyer, Glen. 
"Ballooning's Newest Thrill Ride" January 2000 Ballooning Magazine, pg. 24 bfa.net. Retrieved 2012-01-08. 6. Los Angeles Times, February 25, 2002, Front Page 7. Ramirez, Anthony (June 30, 2006) 'For a Spot of Tea, They Offered Balloons on a Cloudy Day', New York Times. Retrieved 2012-01-08. External links Balloons
Parabounce
Chemistry
555
61,376,244
https://en.wikipedia.org/wiki/C23H26O7
{{DISPLAYTITLE:C23H26O7}} The molecular formula C23H26O7 (molar mass : 414.454 g/mol) may refer to: Grayanic acid, a depsidone Neokadsuranin, a lignan
C23H26O7
Chemistry
63
379,733
https://en.wikipedia.org/wiki/Great-circle%20distance
The great-circle distance, orthodromic distance, or spherical distance is the distance between two points on a sphere, measured along the great-circle arc between them. This arc is the shortest path between the two points on the surface of the sphere. (By comparison, the shortest path passing through the sphere's interior is the chord between the points.) On a curved surface, the concept of straight lines is replaced by a more general concept of geodesics, curves which are locally straight with respect to the surface. Geodesics on the sphere are great circles, circles whose center coincides with the center of the sphere. Any two distinct points on a sphere that are not antipodal (diametrically opposite) both lie on a unique great circle, which the points separate into two arcs; the length of the shorter arc is the great-circle distance between the points. This arc length is proportional to the central angle between the points, which if measured in radians can be scaled up by the sphere's radius to obtain the arc length. Two antipodal points both lie on infinitely many great circles, each of which they divide into two arcs of length π times the radius. The determination of the great-circle distance is part of the more general problem of great-circle navigation, which also computes the azimuths at the end points and intermediate way-points. Because the Earth is nearly spherical, great-circle distance formulas applied to longitude and geodetic latitude of points on Earth are accurate to within about 0.5%. Formulae Let λ1, φ1 and λ2, φ2 be the geographical longitude and latitude of two points 1 and 2, and Δλ, Δφ be their absolute differences; then Δσ, the central angle between them, is given by the spherical law of cosines if one of the poles is used as an auxiliary third point on the sphere: Δσ = arccos(sin φ1 sin φ2 + cos φ1 cos φ2 cos Δλ). The problem is normally expressed in terms of finding the central angle Δσ. 
Given this angle in radians, the actual arc length d on a sphere of radius r can be trivially computed as d = r Δσ. Relation between central angle and chord length The central angle Δσ is related to the chord length c of the unit sphere: c = 2 sin(Δσ/2). For a short-distance approximation (Δσ ≪ 1), Δσ ≈ c. Computational formulae On computer systems with low floating point precision, the spherical law of cosines formula can have large rounding errors if the distance is small (if the two points are a kilometer apart on the surface of the Earth, the cosine of the central angle is near 0.99999999). For modern 64-bit floating-point numbers, the spherical law of cosines formula, given above, does not have serious rounding errors for distances larger than a few meters on the surface of the Earth. The haversine formula is numerically better-conditioned for small distances by using the chord-length relation: Δσ = 2 arcsin(√(sin²(Δφ/2) + cos φ1 cos φ2 sin²(Δλ/2))). Historically, the use of this formula was simplified by the availability of tables for the haversine function, hav θ = sin²(θ/2). Although this formula is accurate for most distances on a sphere, it too suffers from rounding errors for the special (and somewhat unusual) case of antipodal points. A formula that is accurate for all distances is the following special case of the Vincenty formula for an ellipsoid with equal major and minor axes: Δσ = atan2(√((cos φ2 sin Δλ)² + (cos φ1 sin φ2 − sin φ1 cos φ2 cos Δλ)²), sin φ1 sin φ2 + cos φ1 cos φ2 cos Δλ), where atan2 is the two-argument arctangent. Using atan2 ensures that the correct quadrant is chosen. Vector version Another representation of similar formulas, but using normal vectors instead of latitude and longitude to describe the positions, is found by means of 3D vector algebra, using the dot product, cross product, or a combination: Δσ = arccos(n1 · n2) = arcsin |n1 × n2| = arctan(|n1 × n2| / (n1 · n2)), where n1 and n2 are the normals to the sphere at the two positions 1 and 2. Similarly to the equations above based on latitude and longitude, the expression based on arctan is the only one that is well-conditioned for all angles. 
The expression based on arctan requires the magnitude of the cross product over the dot product. From chord length A line through three-dimensional space between points of interest on a spherical Earth is the chord of the great circle between the points. The central angle between the two points can be determined from the chord length. The great circle distance is proportional to the central angle. The great circle chord length, C, may be calculated as follows for the corresponding unit sphere, by means of Cartesian subtraction: ΔX = cos φ2 cos λ2 − cos φ1 cos λ1, ΔY = cos φ2 sin λ2 − cos φ1 sin λ1, ΔZ = sin φ2 − sin φ1, C = √(ΔX² + ΔY² + ΔZ²). Substituting these coordinates, this formula can be algebraically manipulated to the form shown above in the haversine formula. Radius for spherical Earth The shape of the Earth closely resembles a flattened sphere (a spheroid) with equatorial radius a of 6378.137 km; the distance b from the center of the spheroid to each pole is 6356.7523142 km. When calculating the length of a short north-south line at the equator, the circle that best approximates that line has a radius of b²/a (which equals the meridian's semi-latus rectum), or 6335.439 km, while the spheroid at the poles is best approximated by a sphere of radius a²/b, or 6399.594 km, a 1% difference. So long as a spherical Earth is assumed, any single formula for distance on the Earth is only guaranteed correct within 0.5% (though better accuracy is possible if the formula is only intended to apply to a limited area). Using the mean Earth radius, R1 = (2a + b)/3 ≈ 6371.009 km (for the WGS84 ellipsoid), means that in the limit of small flattening, the mean square relative error in the estimates for distance is minimized. For distances smaller than 500 kilometers and outside of the poles, a Euclidean approximation of an ellipsoidal Earth (the Federal Communications Commission's (FCC) formula) is both simpler and more accurate (to 0.1%). 
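As a concrete illustration, the haversine-based computation can be sketched in Python; the function name and the default of the mean Earth radius quoted above are illustrative choices:

```python
import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.009):
    """Great-circle distance via the haversine formula.

    Latitudes/longitudes in degrees; returns kilometres for the default
    mean Earth radius.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # hav(theta) = sin^2(theta/2); central angle from the chord-length relation
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    central_angle = 2 * math.asin(math.sqrt(h))
    return radius * central_angle
```

For production use over long antipodal-ish distances, the atan2 (Vincenty special case) form above is the better-conditioned choice; the haversine form is shown here because it is the shortest to state.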
See also Air navigation Angular distance Circumnavigation Flight planning Geodesy Geodesics on an ellipsoid Geodetic system Geographical distance Isoazimuthal Loxodromic navigation Meridian arc Rhumb line Spherical geometry Spherical trigonometry Versor References and notes External links GreatCircle at MathWorld Metric geometry Spherical trigonometry Distance Spherical curves
Great-circle distance
Physics,Mathematics
1,255
76,176,366
https://en.wikipedia.org/wiki/C/2024%20X1%20%28Fazekas%29
C/2024 X1 (Fazekas) is a Halley-type comet. It was discovered by Mt. Lemmon Survey scientist J. B. Fazekas. Discovery The comet was spotted in the 11 December images taken by the 111.5 Mpx (10,560 x 10,560 px) CCD mounted at the prime focus of the f/1.6 Cassegrain reflector (observatory code G96). The object appeared as a 20th magnitude 10-12" blob without a tail in the constellation Auriga. Orbital characteristics Current calculations suggest its orbital period is about 29.5 years, classifying it as a Halley-type member. It will reach next perihelion on 30 July 2025, when it will be from the Earth. References External links Halley-type comets Comets in 2024 Oort cloud
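The period-based classification used above can be expressed as a small helper; the function name is illustrative, and the 20- and 200-year cut-offs are the conventional Jupiter-family / Halley-type / long-period boundaries:

```python
def classify_comet(period_years):
    """Conventional dynamical class of a periodic comet by orbital period.

    Illustrative helper: <20 yr Jupiter-family, 20-200 yr Halley-type,
    >200 yr long-period.
    """
    if period_years < 20:
        return "Jupiter-family"
    if period_years <= 200:
        return "Halley-type"
    return "long-period"

# C/2024 X1's ~29.5-year period falls in the Halley-type range.
print(classify_comet(29.5))  # → Halley-type
```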
C/2024 X1 (Fazekas)
Astronomy
180
19,205,753
https://en.wikipedia.org/wiki/Margaretia
Margaretia is a frondose organism known from the middle Cambrian Burgess Shale and the Kinzers Formation of Pennsylvania. Its fronds reached about 10 cm in length and are peppered with a range of length-parallel oval holes. It was originally interpreted as an alcyonarian coral. It was later reclassified as a green alga closely resembling modern Caulerpa by D.F. Satterthwait in her Ph.D. thesis in 1976, a finding supported by Conway Morris and Robison in 1988. More recently, it has been treated as an organic tube used as a dwelling by the hemichordate Oesia. References External links Burgess Shale fossils Wheeler Shale Incertae sedis
Margaretia
Biology
149
5,937,299
https://en.wikipedia.org/wiki/Pendulum%20%28mechanics%29
A pendulum is a body suspended from a fixed support such that it freely swings back and forth under the influence of gravity. When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back towards the equilibrium position. When released, the restoring force acting on the pendulum's mass causes it to oscillate about the equilibrium position, swinging it back and forth. The mathematics of pendulums is in general quite complicated. Simplifying assumptions can be made, which in the case of a simple pendulum allow the equations of motion to be solved analytically for small-angle oscillations. Simple gravity pendulum A simple gravity pendulum is an idealized mathematical model of a real pendulum. It is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. Since in the model there is no frictional energy loss, when given an initial displacement it swings back and forth with a constant amplitude. The model is based on the assumptions: The rod or cord is massless, inextensible and always remains under tension. The bob is a point mass. The motion occurs in two dimensions. The motion does not lose energy to external friction or air resistance. The gravitational field is uniform. The support is immobile. The differential equation which governs the motion of a simple pendulum is d²θ/dt² + (g/l) sin θ = 0 (Eq. 1), where g is the magnitude of the gravitational field, l is the length of the rod or cord, and θ is the angle from the vertical to the pendulum. Small-angle approximation The differential equation given above is not easily solved, and there is no solution that can be written in terms of elementary functions. However, adding a restriction to the size of the oscillation's amplitude gives a form whose solution can be easily obtained. 
If it is assumed that the angle is much less than 1 radian (often cited as less than 0.1 radians, about 6°), or θ ≪ 1, then substituting sin θ ≈ θ into the equation of motion using the small-angle approximation yields the equation for a harmonic oscillator, d²θ/dt² + (g/l) θ = 0. The error due to the approximation is of order θ³ (from the Taylor expansion for sin θ). Let the starting angle be θ0. If it is assumed that the pendulum is released with zero angular velocity, the solution becomes θ(t) = θ0 cos(√(g/l) t). The motion is simple harmonic motion where θ0 is the amplitude of the oscillation (that is, the maximum angle between the rod of the pendulum and the vertical). The corresponding approximate period of the motion is then T0 = 2π√(l/g), which is known as Christiaan Huygens's law for the period. Note that under the small-angle approximation, the period is independent of the amplitude θ0; this is the property of isochronism that Galileo discovered. Rule of thumb for pendulum length T0 = 2π√(l/g) gives l = (g/π²)(T0²/4). If SI units are used (i.e. measure in metres and seconds), and assuming the measurement is taking place on the Earth's surface, then g ≈ 9.81 m/s², and g/π² ≈ 0.994 m/s² (0.994 is the approximation to 3 decimal places). Therefore, relatively reasonable approximations for the length and period are: l ≈ T0²/4 and T0 ≈ 2√l, where T0 is the number of seconds between two beats (one beat for each side of the swing), and l is measured in metres. Arbitrary-amplitude period For amplitudes beyond the small angle approximation, one can compute the exact period by first inverting the equation for the angular velocity obtained from the energy method, dθ/dt = √((2g/l)(cos θ − cos θ0)), and then integrating over one complete cycle, or twice the half-cycle, or four times the quarter-cycle, which leads to T = 4√(l/(2g)) ∫ from 0 to θ0 of dθ/√(cos θ − cos θ0). Note that this integral diverges as θ0 approaches the vertical so that a pendulum with just the right energy to go vertical will never actually get there. (Conversely, a pendulum close to its maximum can take an arbitrarily long time to fall down.) 
This integral can be rewritten in terms of elliptic integrals as T = 4√(l/g) F(π/2, sin(θ0/2)), where F is the incomplete elliptic integral of the first kind defined by F(φ, k) = ∫ from 0 to φ of du/√(1 − k² sin²u). Or more concisely, by the substitution sin u = sin(θ/2)/sin(θ0/2) expressing θ in terms of u, T = 4√(l/g) K(sin(θ0/2)). Here K is the complete elliptic integral of the first kind defined by K(k) = F(π/2, k) = ∫ from 0 to π/2 of du/√(1 − k² sin²u). For comparison of the approximation to the full solution, consider the period of a pendulum of length 1 m on Earth (g = 9.80665 m/s²) at an initial angle of 10 degrees: T ≈ 2.0102 s. The linear approximation gives T0 ≈ 2.0064 s. The difference between the two values, less than 0.2%, is much less than that caused by the variation of g with geographical location. From here there are many ways to proceed to calculate the elliptic integral. Legendre polynomial solution for the elliptic integral Given the expression for T above and the Legendre polynomial solution for the elliptic integral, K(k) = (π/2) Σ over n ≥ 0 of ((2n − 1)!!/(2n)!!)² k^(2n), where n!! denotes the double factorial, an exact solution to the period of a simple pendulum is: T = 2π√(l/g) Σ over n ≥ 0 of ((2n − 1)!!/(2n)!!)² sin^(2n)(θ0/2). Figure 4 shows the relative errors using the power series. T0 is the linear approximation, and the other curves include respectively the terms up to the 2nd to the 10th powers. Power series solution for the elliptic integral Another formulation of the above solution can be found if the Maclaurin series for sin(θ0/2) is used in the Legendre polynomial solution above. The resulting power series is: T = 2π√(l/g) (1 + θ0²/16 + 11θ0⁴/3072 + 173θ0⁶/737280 + ⋯); more fractions are available in the On-Line Encyclopedia of Integer Sequences, with one sequence listing the numerators and another the denominators. Arithmetic-geometric mean solution for elliptic integral Given the expression for T above and the arithmetic–geometric mean solution of the elliptic integral, K(k) = π/(2M(1 − k, 1 + k)), where M(x, y) is the arithmetic-geometric mean of x and y, this yields an alternative and faster-converging formula for the period: T = (2π/M(1, cos(θ0/2))) √(l/g). The first iteration of this algorithm gives T1 = 2T0/(1 + cos(θ0/2)). This approximation has the relative error of less than 1% for angles up to 96.11 degrees. 
Since 2/(1 + cos(θ0/2)) = sec²(θ0/4), the expression can be written more concisely as T1 = T0 sec²(θ0/4). The second order expansion of sec²(θ0/4) reduces to T ≈ T0 (1 + θ0²/16). A second iteration of this algorithm gives T2 = 4T0/(1 + √(cos(θ0/2)))². This second approximation has a relative error of less than 1% for angles up to 163.10 degrees. Approximate formulae for the nonlinear pendulum period Though the exact period can be determined, for any finite amplitude θ0 < π rad, by evaluating the corresponding complete elliptic integral K(k), where k = sin(θ0/2), this is often avoided in applications because it is not possible to express this integral in a closed form in terms of elementary functions. This has made way for research on simple approximate formulae for the increase of the pendulum period with amplitude (useful in introductory physics labs, classical mechanics, electromagnetism, acoustics, electronics, superconductivity, etc.). The approximate formulae found by different authors can be classified as follows: ‘Not so large-angle’ formulae, i.e. those yielding good estimates for amplitudes below π/2 rad (a natural limit for a bob on the end of a flexible string), though the deviation with respect to the exact period increases monotonically with amplitude, being unsuitable for amplitudes near to π rad. One of the simplest formulae found in literature is the following one by Lima (2006): T ≈ T0 (−ln a)/(1 − a), where a = cos(θ0/2). ‘Very large-angle’ formulae, i.e. those which approximate the exact period asymptotically for amplitudes near to π rad, with an error that increases monotonically for smaller amplitudes (i.e., unsuitable for small amplitudes). One of the better such formulae is that by Cromer, namely: T ≈ (2/π) T0 ln(4/cos(θ0/2)). Of course, the increase of the period with amplitude is more apparent for large amplitudes, as has been observed in many experiments using either a rigid rod or a disc. 
As accurate timers and sensors are currently available even in introductory physics labs, the experimental errors found in ‘very large-angle’ experiments are already small enough for a comparison with the exact period, and a very good agreement between theory and experiments in which friction is negligible has been found. Since this activity has been encouraged by many instructors, a simple approximate formula for the pendulum period valid for all possible amplitudes, to which experimental data could be compared, was sought. In 2008, Lima derived a weighted-average formula with this characteristic, which presents a maximum error of only 0.6%. Arbitrary-amplitude angular displacement The Fourier series expansion of θ(t) is given in terms of q, the elliptic nome, and ω, the angular frequency; the nome can be approximated using a series expansion. Note that q < 1 for θ0 < π, thus the approximation is applicable even for large amplitudes. Equivalently, the angle can be given in terms of the Jacobi elliptic function sn with modulus k = sin(θ0/2). For small θ0, k ≈ θ0/2 and sn reduces to the sine, so the solution is well-approximated by the solution given in Pendulum (mechanics)#Small-angle approximation. Examples The animations below depict the motion of a simple (frictionless) pendulum with increasing amounts of initial displacement of the bob, or equivalently increasing initial velocity. The small graph above each pendulum is the corresponding phase plane diagram; the horizontal axis is displacement and the vertical axis is velocity. With a large enough initial velocity the pendulum does not oscillate back and forth but rotates completely around the pivot. Compound pendulum A compound pendulum (or physical pendulum) is one where the rod is not massless, and may have extended size; that is, an arbitrarily shaped rigid body swinging by a pivot. In this case the pendulum's period depends on its moment of inertia I around the pivot point. The equation of torque gives: τ = Iα, where: α is the angular acceleration, τ is the torque. The torque is generated by gravity, so: τ = −mgL sin θ, where: m is the total mass of the rigid body (rod and bob), L is the distance from the pivot point to the system's centre-of-mass, θ is the angle from the vertical. Hence, under the small-angle approximation sin θ ≈ θ (or equivalently when θ ≪ 1), d²θ/dt² = −(mgL/I) θ, where I is the moment of inertia of the body about the pivot point. The expression for the angular frequency, ω0 = √(mgL/I), is of the same form as for the conventional simple pendulum and gives a period of T = 2π√(I/(mgL)), and a frequency of f = 1/T = (1/(2π))√(mgL/I). If the initial angle is taken into consideration (for large amplitudes), then the period becomes: T = 4K(sin(θ0/2))√(I/(mgL)), where θ0 is the maximum angle of oscillation (with respect to the vertical) and K is the complete elliptic integral of the first kind. An important concept is the equivalent length, ℓeq = I/(mL), the length of a simple pendulum that has the same angular frequency as the compound pendulum: ω0 = √(g/ℓeq). Consider the following cases: The simple pendulum is the special case where all the mass is located at the bob swinging at a distance ℓ from the pivot. Thus, I = mℓ² and L = ℓ, so the expression reduces to: ω0 = √(g/ℓ). Notice ℓeq = ℓ, as expected (the definition of equivalent length). A homogeneous rod of mass m and length ℓ swinging from its end has I = (1/3)mℓ² and L = ℓ/2, so the expression reduces to: ω0 = √(3g/(2ℓ)). Notice ℓeq = 2ℓ/3: a homogeneous rod oscillates as if it were a simple pendulum of two-thirds its length. A heavy simple pendulum: the combination of a homogeneous rod of mass mrod and length ℓ swinging from its end, and a bob of mass m at the other end. Then the system has a total mass of mrod + m, the other parameters being L = ℓ(mrod/2 + m)/(mrod + m) (by definition of centre-of-mass) and I = (mrod/3 + m)ℓ², so the expression reduces to: ω0 = √(g(mrod/2 + m)/(ℓ(mrod/3 + m))), where ℓeq = ℓ(mrod/3 + m)/(mrod/2 + m). Notice these formulae can be particularized into the two previous cases studied before just by considering the mass of the rod or the bob to be zero respectively. Also notice that the formula does not depend on the mass of the bob and the rod separately, but actually on their ratio, mrod/m. An approximation can be made for mrod/m ≪ 1; notice how similar it is to the angular frequency in a spring-mass system with effective mass. 
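The equivalent-length cases discussed above can be checked numerically; a minimal sketch (the function name is an assumed helper, not standard terminology from any library):

```python
import math

def equivalent_length(I, m, L):
    """Equivalent simple-pendulum length l_eq = I / (m * L) for a compound
    pendulum: I is the moment of inertia about the pivot, m the total mass,
    L the pivot-to-centre-of-mass distance."""
    return I / (m * L)

# Homogeneous rod of mass m and length l, pivoted at one end:
m, l = 2.0, 1.5
I_rod = m * l ** 2 / 3        # moment of inertia about the end
L_cm = l / 2                  # centre of mass at mid-length
# The rod swings like a simple pendulum of two-thirds its length.
assert math.isclose(equivalent_length(I_rod, m, L_cm), 2 * l / 3)
```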
Damped, driven pendulum The above discussion focuses on a pendulum bob only acted upon by the force of gravity. Suppose a damping force, e.g. air resistance, as well as a sinusoidal driving force acts on the body. This system is a damped, driven oscillator, and is chaotic. Equation (1) can be written as (see the Torque derivation of Equation (1) above). A damping term and forcing term can be added to the right-hand side to get where the damping is assumed to be directly proportional to the angular velocity (this is true for low-speed air resistance, see also Drag (physics)). and are constants defining the amplitude of forcing and the degree of damping respectively. is the angular frequency of the driving oscillations. Dividing through by : For a physical pendulum: This equation exhibits chaotic behaviour. The exact motion of this pendulum can only be found numerically and is highly dependent on initial conditions, e.g. the initial velocity and the starting amplitude. However, the small angle approximation outlined above can still be used under the required conditions to give an approximate analytical solution. Physical interpretation of the imaginary period The Jacobian elliptic function that expresses the position of a pendulum as a function of time is a doubly periodic function with a real period and an imaginary period. The real period is, of course, the time it takes the pendulum to go through one full cycle. Paul Appell pointed out a physical interpretation of the imaginary period: if θ0 is the maximum angle of one pendulum and 180° − θ0 is the maximum angle of another, then the real period of each is the magnitude of the imaginary period of the other. Coupled pendula Coupled pendulums can affect each other's motion, either through a direct connection (such as a spring connecting the bobs) or through motions in a supporting structure (such as a tabletop). 
The equations of motion for two identical simple pendulums coupled by a spring connecting the bobs can be obtained using Lagrangian mechanics. The kinetic energy of the system is

E_K = ½ m ℓ² [(dθ₁/dt)² + (dθ₂/dt)²]

where m is the mass of the bobs, ℓ is the length of the strings, and θ₁, θ₂ are the angular displacements of the two bobs from equilibrium. The potential energy of the system is

E_p = m g ℓ (2 − cos θ₁ − cos θ₂) + ½ k ℓ² (θ₂ − θ₁)²

where g is the gravitational acceleration, and k is the spring constant. The displacement ℓ (θ₂ − θ₁) of the spring from its equilibrium position assumes the small angle approximation. The Lagrangian is then

L = ½ m ℓ² [(dθ₁/dt)² + (dθ₂/dt)²] − m g ℓ (2 − cos θ₁ − cos θ₂) − ½ k ℓ² (θ₂ − θ₁)²

which leads to the following set of coupled differential equations:

d²θ₁/dt² + (g/ℓ) sin θ₁ + (k/m)(θ₁ − θ₂) = 0
d²θ₂/dt² + (g/ℓ) sin θ₂ + (k/m)(θ₂ − θ₁) = 0

Adding and subtracting these two equations in turn, and applying the small angle approximation, gives two harmonic oscillator equations in the variables θ₁ + θ₂ and θ₁ − θ₂:

(d²/dt²)(θ₁ + θ₂) + (g/ℓ)(θ₁ + θ₂) = 0
(d²/dt²)(θ₁ − θ₂) + (g/ℓ + 2k/m)(θ₁ − θ₂) = 0

with the corresponding solutions

θ₁ + θ₂ = A cos(ω₁ t + α)
θ₁ − θ₂ = B cos(ω₂ t + β)

where ω₁ = √(g/ℓ), ω₂ = √(g/ℓ + 2k/m), and A, B, α, β are constants of integration. Expressing the solutions in terms of θ₁ and θ₂ alone:

θ₁ = ½ A cos(ω₁ t + α) + ½ B cos(ω₂ t + β)
θ₂ = ½ A cos(ω₁ t + α) − ½ B cos(ω₂ t + β)

If the bobs are not given an initial push, then the condition dθ₁/dt(0) = dθ₂/dt(0) = 0 requires α = β = 0, which gives (after some rearranging):

θ₁ = ½ (A cos ω₁ t + B cos ω₂ t)
θ₂ = ½ (A cos ω₁ t − B cos ω₂ t)

See also Harmonograph Conical pendulum Cycloidal pendulum Double pendulum Inverted pendulum Kapitza's pendulum Rayleigh–Lorentz pendulum Elastic pendulum Mathieu function Pendulum equations (software) References Further reading External links Mathworld article on Mathieu Function Differential equations Dynamical systems Horology Mathematical physics Mathematics
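The normal-mode solution of the coupled pendulums can be checked numerically (an illustration added here, not from the article; the parameter values are arbitrary). The closed-form angles should satisfy the linearized coupled equations, which a finite-difference second derivative confirms:

```python
import math

# Arbitrary illustrative parameters (not from the article).
g, ell, m, k = 9.81, 1.0, 0.5, 2.0
w1 = math.sqrt(g / ell)              # in-phase (pendulum) mode frequency
w2 = math.sqrt(g / ell + 2 * k / m)  # out-of-phase (spring) mode frequency
A, B = 0.1, 0.1                      # mode amplitudes, with alpha = beta = 0

def theta1(t):
    return 0.5 * (A * math.cos(w1 * t) + B * math.cos(w2 * t))

def theta2(t):
    return 0.5 * (A * math.cos(w1 * t) - B * math.cos(w2 * t))

# Verify that the closed-form solution satisfies the linearized
# coupled equations  theta_i'' + (g/ell) theta_i + (k/m)(theta_i - theta_j) = 0
# using a central finite difference for the second derivative.
h = 1e-4
for t in (0.3, 1.7, 4.2):
    dd1 = (theta1(t + h) - 2 * theta1(t) + theta1(t - h)) / h**2
    dd2 = (theta2(t + h) - 2 * theta2(t) + theta2(t - h)) / h**2
    r1 = dd1 + (g / ell) * theta1(t) + (k / m) * (theta1(t) - theta2(t))
    r2 = dd2 + (g / ell) * theta2(t) + (k / m) * (theta2(t) - theta1(t))
    assert abs(r1) < 1e-5 and abs(r2) < 1e-5
print("normal-mode solution checks out")
```

With A = B the energy slowly beats back and forth between the two bobs at the difference frequency (ω₂ − ω₁)/2.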
Pendulum (mechanics)
Physics,Mathematics
2,921
6,129,897
https://en.wikipedia.org/wiki/Ataxin
Ataxin is a type of nuclear protein. The class is called ataxin because mutated forms of these proteins and their corresponding genes were found to cause progressive ataxia. Some examples, their coding genes and associated diseases include: Ataxin 1, coded by ATXN1. Mutants of ataxin 1 with a polyglutamine expansion cause SCA1. Ataxin 2, coded by ATXN2. It is known to cause SCA2 with polyglutamine expansion. Ataxin 3, coded by ATXN3. Machado-Joseph disease is caused by polyglutamine expansions in ataxin 3. Ataxin 7, coded by ATXN7. Polyglutamine expansions in Ataxin 7 cause SCA7. Ataxin 8, coded by ATXN8. Ataxin 8 does not cause an ataxic disorder, but a gene on the opposite strand, ATXN8OS, causes Spinocerebellar ataxia type 8 with CTG expansion. Ataxin 10, coded by ATXN10. It is associated with the pentanucleotide repeat disorder, SCA10. Frataxin follows a similar naming convention and is coded by the FXN gene. GAA repeat expansions in a non-coding region of FXN cause Friedreich's ataxia when both copies of the gene are affected. References Protein families
Ataxin
Chemistry,Biology
286
1,394,815
https://en.wikipedia.org/wiki/Blanket%20sleeper
The blanket sleeper (also known by many other synonyms and trade names) is a type of especially warm sleeper or footie pajama worn primarily during the winter in the United States and Canada. The garment is worn especially by young children. Typically, but not always, the blanket sleeper consists of a loose-fitting, one-piece garment of blanket-like material, usually fleece, enclosing the entire body except for the head and hands. It represents an intermediate step between regular pajamas or a babygrow, and bag-like coverings for infants such as buntings or infant sleeping bags (see the Terminology and Variations sections below). Like bag-like coverings, the blanket sleeper is designed to be sufficiently warm as to make regular blankets or other bed covers unnecessary, even in colder weather. Unlike such coverings, the blanket sleeper has bifurcated legs to allow unhindered walking (or crawling). While no single feature is universal (see Terminology), blanket sleepers usually include: One-piece construction with long sleeves and legs. Attached bootees enclosing the wearer's feet. Composition from relatively thick, heavy fabric. Although any sleeping garment with some or all of these characteristics could be called a blanket sleeper, the term is most commonly applied to a range of styles that deviate relatively little from the same basic design. (The features of this design are described in the Features section, below). Features Features of the typical blanket sleeper often include: Usually made of a napped synthetic fabric, such as polyester or polar fleece; however, while sleepers made from heavier natural fabrics such as cotton are also available, they are not common in North America due to stringent regulations regarding flammability. Loose fit. On smaller sizes, the hip area may be made especially loose to accommodate a diaper. The crotch is usually cut especially low. Raglan sleeves. Snug rib-knit collar and wrist cuffs.
Usually made in one or more solid, bright colors, or screen-printed with graphic designs. There may be a front panel with a single, elaborate printed design, either covering the chest, or forming the entire front portion of the torso and legs. The sleeves may be a different color from the rest of the garment. Stripes are sometimes seen, most commonly on the collar and cuffs. Soles of the feet made from a (usually white) vinyl fabric lined with (synthetic) felt, for improved durability and slip-resistance. This can be solid vinyl with a rough textured surface, or a vinyl-dotted fabric such as Jiffy Grip. Optional toe caps, made from the same fabric as the soles of the feet, and covering the top front portion of the foot, for improved durability. Elastic to make the leg portions snug around the ankles. A zipper running vertically down the front of the garment, from the neck opening to the inside or front ankle of one of the legs (usually the left), designed to make it easy to put on and take off. On teen and adult sizes, the zipper usually instead runs from the neck to the crotch. Optional snap tab where the zipper meets the neck opening. This is a small tab of fabric sewed to the garment on one side of the zipper (usually the right), and fastening to the other side with a snap fastener, designed to prevent discomfort from the zipper slider coming into contact with the wearer's chin and deter access to the zipper. Optional decorative applique on one side of the chest (usually the left). Optional hood. Optional mittens/mitts (mainly on infant and costume sleepers). Although primarily worn by the young, blanket sleepers are also worn (in decreasing order of frequency) by school-age children, teens, and even adults. (See Sizes, gender differences, and availability, below). 
Although footed, one-piece garments in a variety of fabrics and styles are used in many countries as infant sleepwear, the specific range of styles with which the term blanket sleeper is usually associated, the term itself, and the practice of children older than infancy wearing footed, one-piece sleeping garments are concentrated in the Western world. Design considerations Blanket sleepers are usually intended as practical garments, worn mostly by younger children and primarily in the home. Style and fashion thus tend not to be important in their design, and the basic design of the typical blanket sleeper has changed little over the years. The sleeper serves mainly to keep the wearer warm at night, even in the absence of blankets and bed covers. The sleeper covers the entire body except for the head (unless a hood is present) and, in most cases, the hands (unless the sleeper has attached mitts, found mostly on infant sizes), and is snug at the neck and wrists. The use of a zipper closure in place of buttons or snap fasteners also further retains warmth by eliminating drafts. This is especially important for infants, for whom loose blankets may pose a safety hazard (including increasing the risk of SIDS), and possibly for older children, who may still be too young to be relied upon to keep their own sleepwear or bed covers adjusted so as to prevent exposure to the air of bare skin. This is reflected in advertisements by blanket sleeper manufacturers, which often emphasize that their garments "can't be kicked off", or that "no other covers are needed".
The permanently attached feet can also be a beneficial feature for children who are prone to get out of bed in the morning before their parents are awake, and are too young to be relied upon to put on slippers or other footwear to keep their feet warm, as well as for adults who find putting on, and/or wearing socks in bed too bothersome, yet still want their feet covered when getting out of bed in the morning. Blanket sleepers without feet allow more room for growth and reduce the possibility of slipping. Also, children with larger or smaller feet find a better fit. The blanket sleeper is designed so that it can be worn either by itself as a standalone garment, or as a second layer worn over regular pajamas or other sleepwear. The one-piece design is simple to launder and has no detachable pieces that could be individually misplaced. Yet another potential benefit of the blanket sleeper is that it may help prevent infants from removing or interfering with their diapers during the night. This can also apply to older children with certain developmental disabilities, such as Angelman syndrome. In particular, parents of Angelman children have been known to take such additional measures as cutting the feet off the sleeper and putting it on backwards, and/or covering the zipper with duct tape. Some specialty locking clothing and other adaptive clothing purveyors offer blanket sleepers, with or without feet, for adults with dementia or other disabilities, for similar reasons. Blanket sleepers may also appeal to cultural mores relating to body modesty. This can, for example, be a consideration for some parents when siblings sleep in the same room and/or bed. Materials The range of materials used for mass-produced blanket sleepers for children is severely limited, as a result of stringent U.S. government-imposed flammability requirements. Essentially the only materials used since the 1950s are polyester, acrylic, and modacrylic, with polyester dominating. 
This restriction can have a negative impact on comfort for many wearers, particularly children with eczema. A small number of sleepers are made from cotton. Adult-size sleepers, especially those sold by small Internet businesses, can be found in a wider range of materials, including natural fabrics such as cotton flannel. Some web businesses also offer sleepers in natural fabrics for children, but only outside the U.S. In particular, special eczema sleepsuits for children, made of cotton and with built-in mitts designed to prevent scratching, are available from specialty stores in the UK. The fabrics used in most blanket sleepers have a strong tendency to pill. Although this does not adversely affect the garment's functional utility, it has the effect that a used garment can be clearly, visually distinguished from a new one after only a small number of wearings or washings. Decorative features such as appliques or printed designs usually follow juvenile themes, and are designed to make the garments more attractive to the children who wear them. Some adult sleepers can also have appliques on them, but those tend to be from Internet clothing suppliers who offer custom-made sleepers, and tend to depict favorite cartoon characters or childhood items such as teddy bears and animals the wearer had as pets. Sizes, gender differences, and availability In the United States and Canada, mass-produced blanket sleepers for both boys and girls up to size 4 (see US standard clothing sizes) are quite common, and can be found in nearly any department store and online. Sizes larger than 4 are progressively less common, being found in only some stores and online, and usually only seasonally (peaking around October or November). The availability of larger-size sleepers in department stores also varies from year to year.
Alternative sources for larger-size, mass-produced sleepers include Internet auction sites, such as eBay, and certain mail order clothing retailers, such as Lands' End. Individual blanket sleepers can be marketed either as a unisex garment, or as a garment intended for one gender. Even in the latter case, however, there is often no difference stylistically between sleepers marketed specifically for boys and ones marketed specifically for girls. (The size numbers are also consistent: although there are slight differences in the meanings of size numbers between boys and girls in the U.S. standard clothing size system, these are too small to matter in the case of a garment as loose-fitting as a blanket sleeper.) Occasionally, however, sleepers marketed for girls may include feminine decorative features such as lacy frills, and sleepers with screen-printed front panels may feature images of media characters appealing primarily to children of one gender. Also, the ranges of colors available may differ between the genders; in particular, pink sleepers are rarely worn by boys due to a cultural association of that color with femininity. Unisex designs and colors offer a more sustainable option, allowing the most use over time. In smaller sizes, there is little or no difference in the availability of sleepers for boys and for girls. Sleepers for older boys are somewhat less common than those for older girls. Nevertheless, sleepers for both boys and girls continue to be available in department stores, mainly during the fall and winter seasons, and year-round on the internet up to size 16–18. Blanket sleepers for adult women used to be relatively uncommon, but since the 2010s they have increased in popularity and can be found in many department stores, usually in the colder months. Mass-produced blanket sleepers for adult men are rarer still.
However, major home sewing pattern publishers sometimes offer patterns for conventionally styled blanket sleepers in men's sizes, and in the Internet Age a cottage industry has developed, with several websites offering blanket sleepers manufactured on a small scale for men as well as women and children. Also, mass-produced, unisex-styled blanket sleepers marketed for women are sometimes purchased and worn by men, although the difference in the size ranges between men and women means that this option is available only to men of smaller stature. Terminology The terminology relating to blanket sleepers can be confusing, and inconsistent between different speakers. The terms sleeper and blanket sleeper are sometimes used interchangeably. Alternatively, a distinction may be made between the lighter-weight (footed, one-piece) sleepers worn by infants in warmer weather, and the heavier blanket sleepers worn by both infants and older children, primarily in colder weather. (In the loosest usage, sleeper by itself can mean any infant sleeping garment, regardless of form or features). Similarly, some people consider a blanket sleeper to be one-piece by definition, whereas a sleeper could be made either in one piece, or in two pieces meeting at the waist. When blanket is omitted, either the singular form sleeper or the plural form sleepers may refer to a single garment. When blanket is included, however, a single garment is usually referred to using the singular form. The terms (blanket) sleeper and footed pajamas may be used interchangeably. (This reflects the North American practice of referring to nearly any sleeping garment as pajamas, as blanket sleepers bear little resemblance to the jacket and trouser combination, originating in India, that the term pajamas originally referred to). Alternatively, sleeper may instead be used more narrowly than footed pajamas, to exclude footed sleeping garments that are lighter-weight and/or two-piece, such as footed "ski" style pajamas. 
Also, while many people consider built-in feet to be part of the definition of sleeper, garments otherwise meeting the definition but lacking feet are sometimes marketed as footless blanket sleepers. The term grow sleeper is sometimes used to refer to a two-piece footed sleeping garment with features designed to compensate for growth in the wearer, such as turn-back cuffs, or a double row of snap fasteners at the waist. Other terms that are used more-or-less interchangeably with blanket sleeper include: footies or feeties footed sleeper footed pajamas (variants include foot/footy/footed/footsie/feet/feety/feeted/feetsie and may use colloquial terms for pajamas such as pjs or jammies) pajamas with (the) feet (in/on them) padded feet pajamas bunny (feet) pajamas or bunny suit one-piece pajamas zip-up pajamas pajama blanket sleeper/sleeping suit sleeper blanket sherpa sleeper walking blanket walking sleeper sleeper walker oversleeper (used in advertisements by J. C. Penney) nighties onesie potato mashers dormer (older girls' and women's sizes only) Also, a number of commercial brand names have been adopted as genericized trademarks. The best known of these is Dr. Dentons, but others used include "Big Feet", Trundle Bundle (common usage on the Southside of Chicago), and Jama-Blanket. Formerly used, obsolete terms include: night drawers sleeping drawers sleeping garment (used in advertisements by Doctor Denton Sleeping Mills) coverlet sleeper pajunion (used in advertisements by Brighton-Carlsbad) In British English, the term with a meaning closest to that of blanket sleeper is sleepsuit, but it is also known as a romper suit. Infants' garments similar to blanket sleepers, but with the bottom portion constructed like a bag, without separate leg enclosures, are usually not considered sleepers, but rather are referred to by other terms such as baby sleep bag, bunting, sleeping bag, go go bag, sleep sack, or grow bag. 
Infants' garments similar to blanket sleepers, but designed for use as outerwear rather than sleepwear (and usually featuring hoods and hand covers), are referred to by other terms such as pram suit, snowsuit, or carriage suit. Some such garments are designed for dual use as both sleepwear and playwear; these are sometimes known as sleep 'n' play suits. History The origins of the blanket sleeper can be traced at least as far back as the late 19th century, to footed, one-piece sleeping garments for children, then known as night drawers. The first company to mass-produce blanket sleepers was Doctor Denton Sleeping Mills, which began using the term "sleeping garment" for its products in 1865. Most early sleepers had buttons instead of zippers (the zipper was not invented until the early 20th century) and trap doors or butt flaps in the back, as they took on the same basic design as the traditional union suit, which may have been where the idea of the sleeper originated: as a children's version of their fathers' union suits. However, the blanket sleeper first took something closely resembling its present form in the early 1950s, when many of the most recognizable features were first adopted, including the use of synthetic fabrics, slip-resistant soles, toe caps, rib-knit collar and cuffs, zipper closure, snap tab, and applique. The term blanket sleeper also first came into common use at this time, although sleeper by itself appeared considerably earlier. Sleepers made before the 1950s were usually made from knitted natural fabrics, either cotton, wool (especially merino), or a mixture of both. Commonly used fabrics included outing flannel and flannelette. (Home-made sleepers were typically made out of fabric pieces cut from actual blankets.) The soles of the feet were usually made from the same material as the rest of the sleeper, though sometimes two layers were used for improved durability.
The collar and cuffs were usually hemmed, and the sleeper usually closed with buttons, either in the front or in the back. Natural fabrics were largely abandoned after the Flammable Fabrics Act of 1953, which imposed strict flammability requirements on children's sleepwear sold in the United States, up to size 14. Flammability requirements were tightened further in the early 1970s, and in 1977 the flame-retarding additive tris was discovered to be carcinogenic, prompting a recall, and leading to the abandonment of such additives and the materials that depended on them for their flame-resistance. The popularity of blanket sleepers for older children got a boost in the 1970s and early 1980s due to the energy crises of 1973 and 1979. Advertisements from this period often emphasized that thermostats could be set lower at night when children slept in blanket sleepers. Variations Blanket sleepers sometimes depart from the standard design by incorporating unusual or uncommon features. An incomplete list of these follows. Drop seat One of the features most commonly associated with blanket sleepers in the public imagination, the drop seat (also known as a trap door or butt flap) is an opening in the buttocks area, traditionally closing with buttons, designed to allow the wearer to use the toilet without removing the sleeper. Drop seats were very common on sleepers made before the 1950s, but today they are rather rare. (Similar drop seats were also a common feature on the traditional union suit.) Modern versions of the drop seat often replace the buttons with snap fasteners. Snap front/legs Some sleepers, especially in infant sizes, replace the usual front zipper with a front opening closing with snap fasteners. In infant sizes, this opening usually forks at the crotch, and extends down the insides of both legs to the ankles, in order to give access for diaper changes. 
This design tends to be less effective at eliminating drafts than the zipper closure, and is most often seen on lighter-weight sleepers designed for warmer weather. Some infant-size blanket sleepers made in the 1960s featured an ankle-to-ankle zipper through the crotch, serving a similar function. Snap waist/back Two-piece sleepers sometimes fasten around the waist with snap fasteners. This is most often seen on so-called grow sleepers, made mainly in toddler sizes, with features designed to extend the useful life of the garment by compensating for growth in the wearer. These are usually made in lighter material than one-piece sleepers, with an especially high waist, two rows of snaps on the top piece, a back opening on the top piece also closing with snaps, and turn-back cuffs. Two-piece sleepers made before the 1950s often fastened similarly around the waist with buttons. Drawstring cuffs A common feature on sleepers until about the 1930s was turn-back cuffs closing at the ends with drawstrings, designed to fully enclose the wearer's hands. According to advertisements, these were intended both to keep the wearer's hands warm, and to discourage thumb or finger sucking. (These were mostly found on smaller sizes, but have appeared on Dr. Denton brand sleepers in sizes for children as old as 10 years.) Costume sleepers Occasionally garments are made that are designed to serve a dual function, as both blanket sleeper and fancy dress costume (similar to the ones worn by American children on Halloween). Animal costume sleepers are the most common, often featuring hoods with costume ears, tails, and/or hand covers resembling paws. Other motifs such as superheroes, cartoon characters or clowns are also sometimes seen. The use of the terms bunny suit and bunny pajamas as synonyms for blanket sleeper references the persistent cultural meme of a blanket sleeper fashioned as a (usually pink) bunny costume, with a hood, long ears, and puffy tail.
A related phenomenon in Japan, of footless, lighter-weight, hooded, one-piece animal costume pajamas, is known there as disguise pajama or kigurumi (although the latter term can also refer to costumes that are not intended as sleepwear). Minor variations Side zipper A rare alternative to the center front zipper is the "side zipper", running from the neckline near one shoulder (usually the left) to the outside or front ankle. This is most commonly found on sleepers with an elaborate printed design on the front, in which case it serves to avoid disruption of the image. An even rarer variation is to have zippers on both sides. Back zipper Although back closings using buttons were common on sleepers made before the 1950s, zippers in the back are uncommon in regular children's sleepers. A back zipper may make it difficult for the wearer to remove the sleeper for bathroom use. Most examples in regular sleepers date from the 1950–1970s as back zippers became less prevalent in the 1980s. Today, back zippers can most commonly be found on sleepers for wearers where it is advantageous to prevent the wearer removing their sleeper, especially those who wear diapers and have a tendency to remove them. Self-fabric feet Sleepers made in sizes for infants who are too young to walk often omit the slip-resistant soles on the feet, instead having soles made from the same fabric as the rest of the sleeper. This is also occasionally seen on sleepers for older girls or women. Bound feet On sleepers made since the 1980s, the soles of the feet usually attach to the upper foot pieces with an inward-facing seam. In preceding years, it was more common for the seam to face outward, and to be covered with a narrow strip of material, forming a kind of ridge around the perimeter of the sole. This design was referred to in advertisements as a bound edge or bound foot, and was intended both to improve durability, and to improve comfort by eliminating a potential source of irritation. 
Molded plastic feet Around 1970, some sleepers were made with foot bottoms made from three-dimensional molded plastic. This feature proved unpopular, and was quickly abandoned. Detachable feet Occasionally, rather than having permanently attached feet, sleepers will come with separate feet, similar to slippers. This is more common on adult sizes. Convertible feet Another variation replaces the permanently enclosed feet with "convertible" foot coverings resembling tube socks, that close at the ends with velcro, and can be rolled back to expose the feet when desired. Hood Attached hoods were occasionally seen on sleepers made before the 1920s, and as late as the 1940s the company that made Dr. Denton brand sleepers offered separate "sleeping hoods", designed to be used in conjunction with their sleepers, in sizes for both children and adults. On modern sleepers attached hoods are extremely rare, found only on a handful of sleepers for older girls and women, and costume sleepers. Mittens Attached mittens were occasionally seen on sleepers made for infants, usually to prevent finger or thumb sucking, but also served the dual purpose of keeping the hands warm. Mittens can also be used on costume sleepers for both children and adults. Quilted fabric Sleepers are occasionally made from a quilted fabric, incorporating a thin layer of polyester fiberfill batting for increased warmth. Quilted sleepers using polyester foam as insulation were also made in the 1950s. Elastic back waist Sleepers in larger sizes sometimes feature an elastic band along the rear half of the waist, designed to provide a better fit by reducing bagginess around the torso. See also Layette Related garments Babygrow Pajamas Bunting Union suit Pram suit Playsuit (children's clothing) References Nightwear Children's clothing One-piece suits
Blanket sleeper
Biology
5,060
78,262,006
https://en.wikipedia.org/wiki/Eicosanolide
Eicosanolide is an organic compound, a cyclic ester or lactone; more specifically, it is a macrolide. Occurrence Eicosanolide is used by several species of bees (such as some of the genera Colletes, Lasioglossum and Halictus) and butterflies (such as some of the genus Heliconius) as a pheromone. The Dufour's gland of bees in the Halictinae subfamily contains eicosanolide along with other macrocyclic lactones, which may serve a range of functions such as nest building, larval food and chemical communication. References Lactones Heterocyclic compounds with 1 ring
Eicosanolide
Chemistry
151
26,980,389
https://en.wikipedia.org/wiki/C12H15N
{{DISPLAYTITLE:C12H15N}} The molecular formula C12H15N (molar mass: 173.25 g/mol, exact mass: 173.1204 u) may refer to: Benzomorphan Bicifadine (DOV-220,075) Desmethylselegiline Julolidine MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) 2,2,4-Trimethyl-1,2-dihydroquinoline
C12H15N
Chemistry
125
3,954,077
https://en.wikipedia.org/wiki/Boolean%20domain
In mathematics and abstract algebra, a Boolean domain is a set consisting of exactly two elements whose interpretations include false and true. In logic, mathematics and theoretical computer science, a Boolean domain is usually written as {0, 1}, {false, true}, or 𝔹. The algebraic structure that naturally builds on a Boolean domain is the Boolean algebra with two elements. The initial object in the category of bounded lattices is a Boolean domain. In computer science, a Boolean variable is a variable that takes values in some Boolean domain. Some programming languages feature reserved words or symbols for the elements of the Boolean domain, for example false and true. However, many programming languages do not have a Boolean data type in the strict sense. In C or BASIC, for example, falsity is represented by the number 0 and truth is represented by the number 1 or −1, and all variables that can take these values can also take any other numerical values. Generalizations The Boolean domain {0, 1} can be replaced by the unit interval [0, 1], in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with 1 − x, conjunction (AND) is replaced with multiplication (xy), and disjunction (OR) is defined via De Morgan's law to be 1 − (1 − x)(1 − y) = x + y − xy. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic. In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true. See also Boolean-valued function GF(2) References
Boolean algebra
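The generalization to the unit interval can be sketched in a few lines of Python (an illustration added here, not part of the article); on the endpoints 0 and 1, the operations reduce to ordinary Boolean logic:

```python
def f_not(x):
    """Fuzzy negation on the unit interval: NOT x = 1 - x."""
    return 1.0 - x

def f_and(x, y):
    """Fuzzy conjunction as multiplication (the product t-norm)."""
    return x * y

def f_or(x, y):
    """Fuzzy disjunction via De Morgan's law:
    x OR y = NOT (NOT x AND NOT y) = x + y - x*y."""
    return f_not(f_and(f_not(x), f_not(y)))

# On the endpoints {0, 1} these reduce to ordinary Boolean logic:
assert f_not(0) == 1 and f_and(1, 1) == 1 and f_or(0, 1) == 1

# In between, values behave as degrees of truth, or as probabilities
# of independent events:
print(f_or(0.5, 0.5))  # 0.75
```

Under the probabilistic reading, f_or(0.5, 0.5) = 0.75 is the probability that at least one of two independent fair coin flips comes up heads.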
Boolean domain
Mathematics
704
42,552,482
https://en.wikipedia.org/wiki/Beckoning%20sign
A beckoning sign is a type of gesture intended to beckon or call over someone or something. It is usually translated into "come here". This form of nonverbal communication varies from culture to culture, each having a relatively unique method of indicating invitation or enticement. Generally, this gesture is not recommended, as it may imply contempt for the individual being beckoned. Around the world United States In the United States, the "beckoning finger" or the "beckoning palm" are the most common gestures implying beckoning. Both are accomplished by up-turning the palm, and either extending and retracting one or two fingers while keeping the rest clenched in a fist, or extending and retracting all of the fingers, all while keeping the palm upturned. This is mostly used for children but is considered very rude when used for adults, especially adult men. Japan The American beckoning sign is considered an insult in Japan, signifying a dog or other animal. To beckon in Japan, the hand is placed at head level, palm facing down, and waved back and forth with the fingers pointing toward the ground. See also Gesture List of gestures References Gestures
Beckoning sign
Biology
242
252,216
https://en.wikipedia.org/wiki/Localization%20%28commutative%20algebra%29
In commutative algebra and algebraic geometry, localization is a formal way to introduce "denominators" into a given ring or module. That is, it introduces a new ring/module out of an existing ring/module R, consisting of fractions m/s whose denominator s belongs to a given subset S of R. If S is the set of the non-zero elements of an integral domain, then the localization is the field of fractions: this case generalizes the construction of the field Q of rational numbers from the ring Z of integers. The technique has become fundamental, particularly in algebraic geometry, as it provides a natural link to sheaf theory. In fact, the term localization originated in algebraic geometry: if R is a ring of functions defined on some geometric object (algebraic variety) V, and one wants to study this variety "locally" near a point p, then one considers the set S of all functions that are not zero at p and localizes R with respect to S. The resulting ring contains information about the behavior of V near p, and excludes information that is not "local", such as the zeros of functions that are outside V (cf. the example given at local ring). Localization of a ring The localization of a commutative ring R by a multiplicatively closed set S is a new ring S⁻¹R whose elements are fractions with numerators in R and denominators in S. If the ring R is an integral domain the construction generalizes and follows closely that of the field of fractions, and, in particular, that of the rational numbers as the field of fractions of the integers. For rings that have zero divisors, the construction is similar but requires more care. Multiplicative set Localization is commonly done with respect to a multiplicatively closed set S (also called a multiplicative set or a multiplicative system) of elements of a ring R, that is, a subset S of R that is closed under multiplication and contains 1. 
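As a small computational illustration of this definition (a sketch; the helper name `is_multiplicative` and the choice of Z/6Z are illustrative, not from the article), a finite candidate set of denominators can be checked for multiplicative closure by brute force:

```python
def is_multiplicative(subset, mul, one):
    """Check that `subset` contains the identity and is closed under `mul`."""
    return one in subset and all(mul(a, b) in subset for a in subset for b in subset)

# In Z/6Z, the powers of 2 are {1, 2, 4}: 2*2 = 4, 2*4 = 8 = 2 (mod 6), 4*4 = 16 = 4 (mod 6).
mod6 = lambda a, b: (a * b) % 6
print(is_multiplicative({1, 2, 4}, mod6, 1))  # True: a valid multiplicative set
print(is_multiplicative({1, 2}, mod6, 1))     # False: 2*2 = 4 is missing
```

The second call fails precisely because leaving out a product of denominators would leave some fractions without a home, which is the point made in the next paragraph about closing a set under products.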
The requirement that S must be a multiplicative set is natural, since it implies that all denominators introduced by the localization belong to S. The localization by a set U that is not multiplicatively closed can also be defined, by taking as possible denominators all products of elements of U. However, the same localization is obtained by using the multiplicatively closed set S of all products of elements of U. As this often makes reasoning and notation simpler, it is standard practice to consider only localizations by multiplicative sets. For example, the localization by a single element s introduces fractions of the form a/s, but also products of such fractions, such as (a/s)(b/s) = ab/s². So, the denominators will belong to the multiplicative set {1, s, s², s³, ...} of the powers of s. Therefore, one generally talks of "the localization by the powers of an element" rather than of "the localization by an element". The localization of a ring R by a multiplicative set S is generally denoted S⁻¹R, but other notations are commonly used in some special cases: if S = {1, t, t², ...} consists of the powers of a single element t, S⁻¹R is often denoted R_t; if S = R ∖ 𝔭 is the complement of a prime ideal 𝔭, then S⁻¹R is denoted R_𝔭. In the remainder of this article, only localizations by a multiplicative set are considered. Integral domains When the ring R is an integral domain and S does not contain 0, the ring S⁻¹R is a subring of the field of fractions of R. As such, the localization of a domain is a domain. More precisely, it is the subring of the field of fractions of R that consists of the fractions a/s such that s ∈ S. This is a subring, since the sum (a/s) + (b/t) = (at + bs)/(st) and the product (a/s)(b/t) = ab/(st) of two such fractions are of the same form. This results from the defining property of a multiplicative set, which implies also that st ∈ S. In this case, R is a subring of S⁻¹R. It is shown below that this is no longer true in general, typically when S contains zero divisors. For example, the decimal fractions are the localization of the ring of integers by the multiplicative set of the powers of ten. 
In this case, S⁻¹Z consists of the rational numbers that can be written as n/10^k, where n is an integer, and k is a nonnegative integer. General construction In the general case, a problem arises with zero divisors. Let S be a multiplicative set in a commutative ring R. Suppose that s ∈ S, and x is a zero divisor with xs = 0. Then x/1 is the image in S⁻¹R of x, and one has x/1 = xs/s = 0/s = 0. Thus some nonzero elements of R must be zero in S⁻¹R. The construction that follows is designed for taking this into account. Given R and S as above, one considers the equivalence relation on R × S that is defined by (r₁, s₁) ~ (r₂, s₂) if there exists a t ∈ S such that t(s₂r₁ − s₁r₂) = 0. The localization S⁻¹R is defined as the set of the equivalence classes for this relation. The class of (r, s) is denoted as r/s or s⁻¹r. So, one has r₁/s₁ = r₂/s₂ if and only if there is a t ∈ S such that t(s₂r₁ − s₁r₂) = 0. The reason for the t is to handle cases such as the above x/1 = 0/1, where s₂r₁ − s₁r₂ is nonzero even though the fractions should be regarded as equal. The localization S⁻¹R is a commutative ring with addition r₁/s₁ + r₂/s₂ = (r₁s₂ + r₂s₁)/(s₁s₂), multiplication (r₁/s₁)(r₂/s₂) = r₁r₂/(s₁s₂), additive identity 0/1 and multiplicative identity 1/1. The function r ↦ r/1 defines a ring homomorphism from R into S⁻¹R, which is injective if and only if S does not contain any zero divisors. If 0 ∈ S, then S⁻¹R is the zero ring that has 0 as unique element. If S is the set of all regular elements of R (that is, the elements that are not zero divisors), S⁻¹R is called the total ring of fractions of R. Universal property The (above defined) ring homomorphism j : R → S⁻¹R satisfies a universal property that is described below. This characterizes S⁻¹R up to an isomorphism. So all properties of localizations can be deduced from the universal property, independently from the way they have been constructed. Moreover, many important properties of localization are easily deduced from the general properties of universal properties, while their direct proof may be more technical. 
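The equivalence relation above can be explored concretely by brute force on a small ring with zero divisors. The sketch below (ring and multiplicative set chosen for illustration) localizes Z/6Z at the powers of 2 and counts equivalence classes; since inverting 2 forces 3 to become 0 (because 2·3 = 0), only three classes survive:

```python
from itertools import product

# Localize R = Z/6Z at S = {1, 2, 4} (the powers of 2 mod 6).
# Pairs (r, s) are identified when t*(s2*r1 - s1*r2) = 0 (mod 6) for some t in S.
R = range(6)
S = [1, 2, 4]

def equivalent(p, q):
    (r1, s1), (r2, s2) = p, q
    return any(t * (s2 * r1 - s1 * r2) % 6 == 0 for t in S)

classes = []
for pair in product(R, S):
    for cls in classes:
        if equivalent(pair, cls[0]):  # valid grouping: the relation is transitive
            cls.append(pair)
            break
    else:
        classes.append([pair])

print(len(classes))  # 3 -- the localization is a three-element ring, isomorphic to Z/3Z
```

This also shows why the extra factor t is needed: (3, 1) ~ (0, 1) holds only because t = 2 kills the difference, even though 1·3 − 1·0 = 3 ≠ 0.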
The universal property satisfied by j : R → S⁻¹R is the following: If f : R → T is a ring homomorphism that maps every element of S to a unit (invertible element) in T, there exists a unique ring homomorphism g : S⁻¹R → T such that f = g ∘ j. Using category theory, this can be expressed by saying that localization is a functor that is left adjoint to a forgetful functor. More precisely, let C and D be the categories whose objects are pairs of a commutative ring and a submonoid of, respectively, the multiplicative monoid or the group of units of the ring. The morphisms of these categories are the ring homomorphisms that map the submonoid of the first object into the submonoid of the second one. Finally, let F : D → C be the forgetful functor that forgets that the elements of the second element of the pair are invertible. Then the factorization f = g ∘ j given by the universal property defines a bijection between the corresponding hom-sets. This may seem a rather tricky way of expressing the universal property, but it is useful for showing easily many properties, by using the fact that the composition of two left adjoint functors is a left adjoint functor. Examples If R = Z is the ring of integers, and S = Z ∖ {0}, then S⁻¹R is the field Q of the rational numbers. If R is an integral domain, and S = R ∖ {0}, then S⁻¹R is the field of fractions of R. The preceding example is a special case of this one. If R is a commutative ring, and if S is the subset of its elements that are not zero divisors, then S⁻¹R is the total ring of fractions of R. In this case, S is the largest multiplicative set such that the homomorphism R → S⁻¹R is injective. The preceding example is a special case of this one. If t is an element of a commutative ring R and S = {1, t, t², ...}, then R_t can be identified with (is canonically isomorphic to) R[x]/(tx − 1). (The proof consists of showing that this ring satisfies the above universal property.) This sort of localization plays a fundamental role in the definition of an affine scheme. If 𝔭 is a prime ideal of a commutative ring R, the set complement S = R ∖ 𝔭 of 𝔭 in R is a multiplicative set (by the definition of a prime ideal). 
The ring S⁻¹R is a local ring that is generally denoted R_𝔭 and called the local ring of R at 𝔭. This sort of localization is fundamental in commutative algebra, because many properties of a commutative ring can be read on its local rings. Such a property is often called a local property. For example, a ring is regular if and only if all its local rings are regular. Ring properties Localization is a rich construction that has many useful properties. In this section, only the properties relative to rings and to a single localization are considered. Properties concerning ideals, modules, or several multiplicative sets are considered in other sections. S⁻¹R = 0 if and only if S contains 0. The ring homomorphism R → S⁻¹R is injective if and only if S does not contain any zero divisors. The ring homomorphism R → S⁻¹R is an epimorphism in the category of rings, that is not surjective in general. The ring S⁻¹R is a flat R-module (see below for details). If S = R ∖ 𝔭 is the complement of a prime ideal 𝔭, then S⁻¹R, denoted R_𝔭, is a local ring; that is, it has only one maximal ideal. Properties to be moved in another section Localization commutes with formations of finite sums, products, intersections and radicals; e.g., if √I denotes the radical of an ideal I in R, then S⁻¹(√I) = √(S⁻¹I). In particular, R is reduced if and only if its total ring of fractions is reduced. Let R be an integral domain with the field of fractions K. Then its localization R_𝔭 at a prime ideal 𝔭 can be viewed as a subring of K. Moreover, R = ⋂_𝔭 R_𝔭 = ⋂_𝔪 R_𝔪, where the first intersection is over all prime ideals and the second over the maximal ideals. There is a bijection between the set of prime ideals of S⁻¹R and the set of prime ideals of R that do not intersect S. This bijection is induced by the given homomorphism R → S⁻¹R. Saturation of a multiplicative set Let S be a multiplicative set. The saturation of S is the set Ŝ = {x ∈ R : xy ∈ S for some y ∈ R}. The multiplicative set S is saturated if it equals its saturation, that is, if Ŝ = S, or equivalently, if xy ∈ S implies that x and y are in S. 
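The saturation just defined can be computed by brute force in a bounded range. The sketch below (the bound 100 and the variable names are illustrative choices) takes S to be the powers of 4 in Z; an integer t lies in the saturation exactly when it divides some power of 4, so the saturation consists of the powers of 2:

```python
# Saturation of S = {powers of 4} inside Z, restricted to 1..100:
# t is in the saturation iff t*t' lies in S for some t', i.e. iff t divides a power of 4.
S_powers = [4**n for n in range(10)]  # enough powers of 4 to cover divisors up to 100
saturation = sorted(t for t in range(1, 101) if any(p % t == 0 for p in S_powers))
print(saturation)  # [1, 2, 4, 8, 16, 32, 64] -- exactly the powers of 2 in range
```

This matches the discussion that follows: localizing Z at the powers of 4 and at the powers of 2 gives the same ring Z[1/2], because the two sets have the same saturation.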
If S is not saturated and xy ∈ S, then y/(xy) is a multiplicative inverse of the image of x in S⁻¹R. So, the images of the elements of Ŝ are all invertible in S⁻¹R, and the universal property implies that S⁻¹R and Ŝ⁻¹R are canonically isomorphic, that is, there is a unique isomorphism between them that fixes the images of the elements of R. If S and T are two multiplicative sets, then S⁻¹R and T⁻¹R are isomorphic if and only if they have the same saturation, or, equivalently, if s belongs to one of the multiplicative sets, then there exists t ∈ R such that st belongs to the other. Saturated multiplicative sets are not widely used explicitly, since, for verifying that a set is saturated, one must know all units of the ring. Terminology explained by the context The term localization originates in the general trend of modern mathematics to study geometrical and topological objects locally, that is in terms of their behavior near each point. Examples of this trend are the fundamental concepts of manifolds, germs and sheaves. In algebraic geometry, an affine algebraic set can be identified with a quotient ring of a polynomial ring in such a way that the points of the algebraic set correspond to the maximal ideals of the ring (this is Hilbert's Nullstellensatz). This correspondence has been generalized for making the set of the prime ideals of a commutative ring a topological space equipped with the Zariski topology; this topological space is called the spectrum of the ring. In this context, a localization by a multiplicative set may be viewed as the restriction of the spectrum of a ring to the subspace of the prime ideals (viewed as points) that do not intersect the multiplicative set. Two classes of localizations are more commonly considered: The multiplicative set is the complement S = R ∖ 𝔭 of a prime ideal 𝔭 of a ring R. In this case, one speaks of the "localization at 𝔭", or "localization at a point". The resulting ring, denoted R_𝔭, is a local ring, and is the algebraic analog of a ring of germs. 
The multiplicative set consists of all powers of an element t of a ring R. The resulting ring is commonly denoted R_t, and its spectrum is the Zariski open set of the prime ideals that do not contain t. Thus the localization is the analog of the restriction of a topological space to a neighborhood of a point (every prime ideal has a neighborhood basis consisting of Zariski open sets of this form). In number theory and algebraic topology, when working over the ring Z of integers, one refers to a property relative to an integer n as a property true at n or away from n, depending on the localization that is considered. "Away from n" means that the property is considered after localization by the powers of n, and, if p is a prime number, "at p" means that the property is considered after localization at the prime ideal (p). This terminology can be explained by the fact that, if p is prime, the nonzero prime ideals of the localization of Z are either the singleton set {(p)} or its complement in the set of prime numbers. Localization and saturation of ideals Let S be a multiplicative set in a commutative ring R, and j : R → S⁻¹R be the canonical ring homomorphism. Given an ideal I in R, let S⁻¹I be the set of the fractions in S⁻¹R whose numerator is in I. This is an ideal of S⁻¹R, which is generated by j(I), and called the localization of I by S. The saturation of I by S is j⁻¹(S⁻¹I); it is an ideal of R, which can also be defined as the set of the elements r ∈ R such that there exists s ∈ S with sr ∈ I. Many properties of ideals are either preserved by saturation and localization, or can be characterized by simpler properties of localization and saturation. 
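The saturation of an ideal can also be checked by brute force in a bounded window. In the sketch below (an illustrative example, with the window ±20 chosen arbitrarily), the ideal I = (6) in Z is saturated by S = {powers of 2}: since only the factor 3 of 6 survives the inversion of 2, the saturation is the ideal (3):

```python
# Saturation of the ideal I = (6) in Z by S = {powers of 2}:
# r is in the saturation iff 2^k * r is divisible by 6 for some k, i.e. iff 3 divides r.
S = [2**k for k in range(8)]
sat = [r for r in range(-20, 21) if any((s * r) % 6 == 0 for s in S)]
print(sat == [r for r in range((-20), 21) if r % 3 == 0])  # True: the multiples of 3
```
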
In what follows, S is a multiplicative set in a ring R, and I and J are ideals of R; the saturation of an ideal I by a multiplicative set S is denoted sat_S(I) or, when the multiplicative set is clear from the context, sat(I) (this is not always true for strict inclusions). If 𝔭 is a prime ideal such that 𝔭 ∩ S = ∅, then S⁻¹𝔭 is a prime ideal and sat(𝔭) = 𝔭; if the intersection is nonempty, then S⁻¹𝔭 = S⁻¹R and sat(𝔭) = R. Localization of a module Let R be a commutative ring, S be a multiplicative set in R, and M be an R-module. The localization of the module M by S, denoted S⁻¹M, is an S⁻¹R-module that is constructed exactly as the localization of R, except that the numerators of the fractions belong to M. That is, as a set, it consists of equivalence classes, denoted m/s, of pairs (m, s), where m ∈ M and s ∈ S, and two pairs (m, s) and (n, t) are equivalent if there is an element u in S such that u(tm − sn) = 0. Addition and scalar multiplication are defined as for usual fractions (in the following formulas, m, n ∈ M, s, t ∈ S and r ∈ R): m/s + n/t = (tm + sn)/(st) and (r/s)(m/t) = rm/(st). Moreover, S⁻¹M is also an R-module with scalar multiplication r(m/s) = rm/s. It is straightforward to check that these operations are well-defined, that is, they give the same result for different choices of representatives of fractions. The localization of a module can be equivalently defined by using tensor products: S⁻¹M = S⁻¹R ⊗_R M. The proof of equivalence (up to a canonical isomorphism) can be done by showing that the two definitions satisfy the same universal property. Module properties If N is a submodule of an R-module M, and S is a multiplicative set in R, one has a natural identification of S⁻¹N with a submodule of S⁻¹M. This implies that, if f : M → N is an injective module homomorphism, then S⁻¹f : S⁻¹M → S⁻¹N is also an injective homomorphism. Since the tensor product is a right exact functor, this implies that localization by S maps exact sequences of R-modules to exact sequences of S⁻¹R-modules. In other words, localization is an exact functor, and S⁻¹R is a flat R-module. This flatness and the fact that localization solves a universal property make localization preserve many properties of modules and rings, and make it compatible with solutions of other universal properties. For example, the natural map S⁻¹(M ⊗_R N) → S⁻¹M ⊗_{S⁻¹R} S⁻¹N is an isomorphism. 
If M is a finitely presented module, the natural map S⁻¹Hom_R(M, N) → Hom_{S⁻¹R}(S⁻¹M, S⁻¹N) is also an isomorphism. If a module M is finitely generated over R, one has S⁻¹(Ann_R(M)) = Ann_{S⁻¹R}(S⁻¹M), where Ann denotes the annihilator, that is the ideal of the elements of the ring that map to zero all elements of the module. In particular, S⁻¹M = 0 if and only if sM = 0 for some s ∈ S. Localization at primes The definition of a prime ideal implies immediately that the complement S = R ∖ 𝔭 of a prime ideal 𝔭 in a commutative ring R is a multiplicative set. In this case, the localization S⁻¹R is commonly denoted R_𝔭. The ring R_𝔭 is a local ring, that is called the local ring of R at 𝔭. This means that 𝔭R_𝔭 is the unique maximal ideal of the ring R_𝔭. Analogously one can define the localization M_𝔭 of a module M at a prime ideal 𝔭 of R. Again, the localization is commonly denoted M_𝔭. Such localizations are fundamental for commutative algebra and algebraic geometry for several reasons. One is that local rings are often easier to study than general commutative rings, in particular because of Nakayama's lemma. However, the main reason is that many properties are true for a ring if and only if they are true for all its local rings. For example, a ring is regular if and only if all its local rings are regular local rings. Properties of a ring that can be characterized on its local rings are called local properties, and are often the algebraic counterpart of geometric local properties of algebraic varieties, which are properties that can be studied by restriction to a small neighborhood of each point of the variety. (There is another concept of local property that refers to localization to Zariski open sets; see below.) Many local properties are a consequence of the fact that the module ⊕_𝔭 R_𝔭 is a faithfully flat module when the direct sum is taken over all prime ideals (or over all maximal ideals of R). See also Faithfully flat descent. Examples of local properties A property P of an R-module M is a local property if the following conditions are equivalent: P holds for M. P holds for all M_𝔭, where 𝔭 is a prime ideal of R. 
P holds for all M_𝔪, where 𝔪 is a maximal ideal of R. The following are local properties: M is zero. M is torsion-free (in the case where R is a commutative domain). M is a flat module. M is an invertible module (in the case where R is a commutative domain, and M is a submodule of the field of fractions of R). f : M → N is injective (resp. surjective), where N is another R-module. On the other hand, some properties are not local properties. For example, an infinite direct product of fields is not an integral domain nor a Noetherian ring, while all its local rings are fields, and therefore Noetherian integral domains. Non-commutative case Localizing non-commutative rings is more difficult. While the localization exists for every set S of prospective units, it might take a different form from the one described above. One condition which ensures that the localization is well behaved is the Ore condition. One case for non-commutative rings where localization has a clear interest is for rings of differential operators. It has the interpretation, for example, of adjoining a formal inverse D⁻¹ for a differentiation operator D. This is done in many contexts in methods for differential equations. There is now a large mathematical theory about it, named microlocalization, connecting with numerous other branches. The micro- tag has to do with connections with Fourier theory, in particular. See also Local analysis Localization of a category Localization of a topological space References Borel, Armand. Linear Algebraic Groups (2nd ed.). New York: Springer-Verlag. . Matsumura. Commutative Algebra. Benjamin-Cummings. Serge Lang, Algebraic Number Theory, Springer, 2000, pages 3–4. External links Localization from MathWorld. Ring theory Module theory Localization (mathematics)
Localization (commutative algebra)
Mathematics
4,019
527,046
https://en.wikipedia.org/wiki/Hyperfine%20structure
In atomic physics, hyperfine structure is defined by small shifts in otherwise degenerate electronic energy levels and the resulting splittings in those electronic energy levels of atoms, molecules, and ions, due to electromagnetic multipole interaction between the nucleus and electron clouds. In atoms, hyperfine structure arises from the energy of the nuclear magnetic dipole moment interacting with the magnetic field generated by the electrons and the energy of the nuclear electric quadrupole moment in the electric field gradient due to the distribution of charge within the atom. Molecular hyperfine structure is generally dominated by these two effects, but also includes the energy associated with the interaction between the magnetic moments associated with different magnetic nuclei in a molecule, as well as between the nuclear magnetic moments and the magnetic field generated by the rotation of the molecule. Hyperfine structure contrasts with fine structure, which results from the interaction between the magnetic moments associated with electron spin and the electrons' orbital angular momentum. Hyperfine structure, with energy shifts typically orders of magnitude smaller than those of a fine-structure shift, results from the interactions of the nucleus (or nuclei, in molecules) with internally generated electric and magnetic fields. History The first theory of atomic hyperfine structure was given in 1930 by Enrico Fermi for an atom containing a single valence electron with an arbitrary angular momentum. The Zeeman splitting of this structure was discussed by S. A. Goudsmit and R. F. Bacher later that year. In 1935, H. Schüler and Theodor Schmidt proposed the existence of a nuclear quadrupole moment in order to explain anomalies in the hyperfine structure of Europium, Cassiopeium (an older name for Lutetium), Indium, Antimony, and Mercury. 
Theory The theory of hyperfine structure comes directly from electromagnetism, consisting of the interaction of the nuclear multipole moments (excluding the electric monopole) with internally generated fields. The theory is derived first for the atomic case, but can be applied to each nucleus in a molecule. Following this there is a discussion of the additional effects unique to the molecular case. Atomic hyperfine structure Magnetic dipole The dominant term in the hyperfine Hamiltonian is typically the magnetic dipole term. Atomic nuclei with a non-zero nuclear spin I have a magnetic dipole moment, given by: μ_I = g_I μ_N I, where g_I is the nuclear g-factor and μ_N is the nuclear magneton. There is an energy associated with a magnetic dipole moment in the presence of a magnetic field. For a nuclear magnetic dipole moment, μ_I, placed in a magnetic field, B, the relevant term in the Hamiltonian is given by: Ĥ_D = −μ_I · B. In the absence of an externally applied field, the magnetic field experienced by the nucleus is that associated with the orbital (ℓ) and spin (s) angular momentum of the electrons: Electron orbital magnetic field Electron orbital angular momentum results from the motion of the electron about some fixed external point that we shall take to be the location of the nucleus. The magnetic field at the nucleus due to the motion of a single electron, with charge −e at a position r relative to the nucleus, is given by: B = (μ₀/4π)(−e)v × (−r)/r³, where −r gives the position of the nucleus relative to the electron. Recognizing that m_e v is the electron momentum, p, and that (r × p)/ħ is the orbital angular momentum in units of ħ, ℓ, we can write, in terms of the Bohr magneton: B = −(μ₀/4π)(2μ_B)ℓ/r³. For a many-electron atom this expression is generally written in terms of the total orbital angular momentum, L, by summing over the electrons and using a projection operator. 
For states with a well defined projection of the orbital angular momentum, L_z, the corresponding field follows directly. Electron spin magnetic field The electron spin angular momentum is a fundamentally different property that is intrinsic to the particle and therefore does not depend on the motion of the electron. Nonetheless, it is angular momentum, and any angular momentum associated with a charged particle results in a magnetic dipole moment, which is the source of a magnetic field. An electron with spin angular momentum, s, has a magnetic moment, μ_s, given by: μ_s = −g_s μ_B s, where g_s is the electron spin g-factor and the negative sign is because the electron is negatively charged (consider that negatively and positively charged particles with identical mass, travelling on equivalent paths, would have the same angular momentum, but would result in currents in the opposite direction). The magnetic field of a point dipole moment, μ_s, is given by: B = (μ₀/4πr³)[3(μ_s·r̂)r̂ − μ_s] + (2μ₀/3)μ_s δ³(r). Electron total magnetic field and contribution The complete magnetic dipole contribution to the hyperfine Hamiltonian is thus the sum of these terms. The first term gives the energy of the nuclear dipole in the field due to the electronic orbital angular momentum. The second term gives the energy of the "finite distance" interaction of the nuclear dipole with the field due to the electron spin magnetic moments. The final term, often known as the Fermi contact term, relates to the direct interaction of the nuclear dipole with the spin dipoles and is only non-zero for states with a finite electron spin density at the position of the nucleus (those with unpaired electrons in s-subshells). It has been argued that one may get a different expression when taking into account the detailed nuclear magnetic moment distribution. The inclusion of the delta function is an admission that the singularity in the magnetic induction B owing to a magnetic dipole moment at a point is not integrable. 
It is B which mediates the interaction between the Pauli spinors in non-relativistic quantum mechanics. Fermi (1930) avoided the difficulty by working with the relativistic Dirac wave equation, according to which the mediating field for the Dirac spinors is the four-vector potential (V, A). The component V is the Coulomb potential. The component A is the three-vector magnetic potential (such that B = curl A), which for the point dipole is integrable. For states with ℓ ≠ 0 this can be expressed as a coupling between Î and an effective electronic angular momentum N̂. If hyperfine structure is small compared with the fine structure (sometimes called IJ-coupling by analogy with LS-coupling), I and J are good quantum numbers and matrix elements of the interaction can be approximated as diagonal in I and J. In this case (generally true for light elements), we can project N onto J (where J = L + S is the total electronic angular momentum). This is commonly written as Ĥ_D = A_hfs Î·Ĵ/ħ², with A_hfs being the hyperfine-structure constant, which is determined by experiment. Since Î·Ĵ = ½(F̂² − Î² − Ĵ²), where F = I + J is the total angular momentum, this gives an energy of: ΔE = (A_hfs/2)[F(F+1) − I(I+1) − J(J+1)]. In this case the hyperfine interaction satisfies the Landé interval rule. Electric quadrupole Atomic nuclei with spin I ≥ 1 have an electric quadrupole moment. In the general case this is represented by a rank-2 tensor, Q_ij, with components given by: Q_ij = ∫ (3x_i x_j − r²δ_ij) ρ(r) d³r, where i and j are the tensor indices running from 1 to 3, x_i and x_j are the spatial variables x, y and z depending on the values of i and j respectively, δ_ij is the Kronecker delta and ρ(r) is the charge density. Being a 3-dimensional rank-2 tensor, the quadrupole moment has 3² = 9 components. From the definition of the components it is clear that the quadrupole tensor is a symmetric matrix (Q_ij = Q_ji) that is also traceless (Σ_i Q_ii = 0), giving only five components in the irreducible representation. 
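Returning to the magnetic-dipole term, the Landé-interval-rule energy formula above can be checked numerically. The sketch below (function name and variables are illustrative) applies it to the hydrogen 1s state (I = J = 1/2), taking A_hfs to be the measured ground-state splitting of about 1420.4057 MHz, and recovers the famous 21 cm line:

```python
C = 299_792_458.0  # speed of light in m/s (exact by definition)

def hyperfine_energy(A, F, I, J):
    """Interval-rule energy E = (A/2)[F(F+1) - I(I+1) - J(J+1)], with A in frequency units."""
    return 0.5 * A * (F * (F + 1) - I * (I + 1) - J * (J + 1))

A_H = 1420.405751768e6  # measured hydrogen 1s hyperfine constant, expressed in Hz
I, J = 0.5, 0.5
split = hyperfine_energy(A_H, 1, I, J) - hyperfine_energy(A_H, 0, I, J)
print(split / 1e6, "MHz")      # F=1 minus F=0 equals A_H itself: ~1420.4 MHz
print(C / split * 100, "cm")   # wavelength ~21.1 cm -- the astronomical 21 cm line
```

For I = J = 1/2 the F = 1 level sits at +A/4 and F = 0 at −3A/4, so the splitting is exactly A, consistent with the interval rule.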
Expressed in the notation of irreducible spherical tensors, the quadrupole moment thus has five independent components. The energy associated with an electric quadrupole moment in an electric field depends not on the field strength, but on the electric field gradient, confusingly labelled ∇E, another rank-2 tensor given by the outer product of the del operator with the electric field vector, with components given by: (∇E)_ij = ∂E_j/∂x_i. Again it is clear this is a symmetric matrix and, because the source of the electric field at the nucleus is a charge distribution entirely outside the nucleus, this can be expressed as a 5-component spherical tensor. The quadrupolar term in the Hamiltonian is then given by the contraction of these two tensors. A typical atomic nucleus closely approximates cylindrical symmetry and therefore all off-diagonal elements are close to zero. For this reason the nuclear electric quadrupole moment is often represented by the single quantity Q = Q_zz. Molecular hyperfine structure The molecular hyperfine Hamiltonian includes those terms already derived for the atomic case, with a magnetic dipole term for each nucleus with I > 0 and an electric quadrupole term for each nucleus with I ≥ 1. The magnetic dipole terms were first derived for diatomic molecules by Frosch and Foley, and the resulting hyperfine parameters are often called the Frosch and Foley parameters. In addition to the effects described above, there are a number of effects specific to the molecular case. Direct nuclear spin–spin Each nucleus with I > 0 has a non-zero magnetic moment that is both the source of a magnetic field and has an associated energy due to the presence of the combined field of all of the other nuclear magnetic moments. A summation over each magnetic moment dotted with the field due to each other magnetic moment gives the direct nuclear spin–spin term in the hyperfine Hamiltonian, where α and α′ are indices representing the nucleus contributing to the energy and the nucleus that is the source of the field, respectively. 
Substituting in the expressions for the dipole moment in terms of the nuclear angular momentum and the magnetic field of a dipole, both given above, gives the explicit spin–spin term. Nuclear spin–rotation The nuclear magnetic moments in a molecule also exist in a magnetic field due to the angular momentum, T (R is the internuclear displacement vector), associated with the bulk rotation of the molecule; this gives the nuclear spin–rotation term. Small molecule hyperfine structure A typical simple example of the hyperfine structure due to the interactions discussed above is in the rotational transitions of hydrogen cyanide (¹H¹²C¹⁴N) in its ground vibrational state. Here, the electric quadrupole interaction is due to the ¹⁴N nucleus, the hyperfine nuclear spin–spin splitting is from the magnetic coupling between nitrogen, ¹⁴N (I_N = 1), and hydrogen, ¹H (I_H = 1/2), and a hydrogen spin–rotation interaction is due to the ¹H nucleus. These contributing interactions to the hyperfine structure in the molecule are listed here in descending order of influence. Sub-Doppler techniques have been used to discern the hyperfine structure in HCN rotational transitions. The dipole selection rules for HCN hyperfine structure transitions are ΔJ = 1 and ΔF = 0, ±1, where J is the rotational quantum number and F is the total rotational quantum number inclusive of nuclear spin, respectively. The lowest transition (J = 1 → 0) splits into a hyperfine triplet. Using the selection rules, the hyperfine pattern of the J = 2 → 1 transition and higher dipole transitions is in the form of a hyperfine sextet. However, one of these components carries only 0.6% of the rotational transition intensity in the case of J = 2 → 1. This contribution drops for increasing J. So, for higher J the hyperfine pattern consists of three very closely spaced stronger hyperfine components together with two widely spaced components; one on the low frequency side and one on the high frequency side relative to the central hyperfine triplet. 
Each of these outliers carries a small fraction, depending on the upper rotational quantum number of the allowed dipole transition, of the intensity of the entire transition. For consecutively higher-J transitions, there are small but significant changes in the relative intensities and positions of each individual hyperfine component. Measurements Hyperfine interactions can be measured, among other ways, in atomic and molecular spectra and in electron paramagnetic resonance spectra of free radicals and transition-metal ions. Applications Astrophysics As the hyperfine splitting is very small, the transition frequencies are usually not located in the optical range, but are in the range of radio or microwave (also called sub-millimeter) frequencies. Hyperfine structure gives the 21 cm line observed in H I regions in the interstellar medium. Carl Sagan and Frank Drake considered the hyperfine transition of hydrogen to be a sufficiently universal phenomenon so as to be used as a base unit of time and length on the Pioneer plaque and later the Voyager Golden Record. In submillimeter astronomy, heterodyne receivers are widely used in detecting electromagnetic signals from celestial objects such as star-forming cores or young stellar objects. The separations among neighboring components in a hyperfine spectrum of an observed rotational transition are usually small enough to fit within the receiver's IF band. Since the optical depth varies with frequency, strength ratios among the hyperfine components differ from those of their intrinsic (or optically thin) intensities (these are so-called hyperfine anomalies, often observed in the rotational transitions of HCN). Thus, a more accurate determination of the optical depth is possible. From this we can derive the object's physical parameters. Nuclear spectroscopy In nuclear spectroscopy methods, the nucleus is used to probe the local structure in materials. The methods are mainly based on hyperfine interactions with the surrounding atoms and ions. 
Important methods are nuclear magnetic resonance, Mössbauer spectroscopy, and perturbed angular correlation. Nuclear technology The atomic vapor laser isotope separation (AVLIS) process uses the hyperfine splitting between optical transitions in uranium-235 and uranium-238 to selectively photo-ionize only the uranium-235 atoms and then separate the ionized particles from the non-ionized ones. Precisely tuned dye lasers are used as the sources of the necessary exact wavelength radiation. Use in defining the SI second and meter The hyperfine structure transition can be used to make a microwave notch filter with very high stability, repeatability and Q factor, which can thus be used as a basis for very precise atomic clocks. The term transition frequency denotes the frequency of radiation corresponding to the transition between the two hyperfine levels of the atom, and is equal to ν = ΔE/h, where ΔE is the difference in energy between the levels and h is the Planck constant. Typically, the transition frequency of a particular isotope of caesium or rubidium atoms is used as a basis for these clocks. Due to the accuracy of hyperfine structure transition-based atomic clocks, they are now used as the basis for the definition of the second. One second is now defined to be exactly 9,192,631,770 cycles of the hyperfine structure transition frequency of caesium-133 atoms. On October 21, 1983, the 17th CGPM defined the meter as the length of the path travelled by light in a vacuum during a time interval of 1/299,792,458 of a second. Precision tests of quantum electrodynamics The hyperfine splittings in hydrogen and in muonium have been used to measure the value of the fine-structure constant α. Comparison with measurements of α in other physical systems provides a stringent test of QED. Qubit in ion-trap quantum computing The hyperfine states of a trapped ion are commonly used for storing qubits in ion-trap quantum computing.
They have the advantage of having very long lifetimes, experimentally exceeding ~10 minutes (compared to ~1s for metastable electronic levels). The frequency associated with the states' energy separation is in the microwave region, making it possible to drive hyperfine transitions using microwave radiation. However, at present no emitter is available that can be focused to address a particular ion from a sequence. Instead, a pair of laser pulses can be used to drive the transition, by having their frequency difference (detuning) equal to the required transition's frequency. This is essentially a stimulated Raman transition. In addition, near-field gradients have been exploited to individually address two ions separated by approximately 4.3 micrometers directly with microwave radiation. See also Dynamic nuclear polarization Electron paramagnetic resonance References External links The Feynman Lectures on Physics Vol. III Ch. 12: The Hyperfine Splitting in Hydrogen Nuclear Magnetic and Electric Moments lookup—Nuclear Structure and Decay Data at the IAEA Atomic physics Foundational quantum physics
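The triplet and sextet patterns described for HCN follow directly from the dipole selection rules. A short sketch (illustrative only; the function name is an assumption, not from the article's sources) that enumerates the allowed (F′, F″) pairs:

```python
# Enumerate the hyperfine components of an HCN rotational transition
# J <- J-1, keeping only the 14N coupling (I = 1) and the dipole selection
# rules dJ = 1, dF = 0, +/-1 (with F = 0 <-> F = 0 forbidden).
I_N = 1  # nuclear spin of 14N

def hyperfine_components(j_upper):
    """Allowed (F_upper, F_lower) pairs for the transition j_upper <- j_upper - 1."""
    j_lower = j_upper - 1
    f_upper = range(abs(j_upper - I_N), j_upper + I_N + 1)
    f_lower = range(abs(j_lower - I_N), j_lower + I_N + 1)
    return [(fu, fl) for fu in f_upper for fl in f_lower
            if abs(fu - fl) <= 1 and not (fu == 0 and fl == 0)]

print(len(hyperfine_components(1)))  # 3: the J = 1 <- 0 triplet
print(len(hyperfine_components(2)))  # 6: the sextet for J = 2 <- 1 and above
```

The same vector-coupling count reproduces the sextet for every J ≥ 2, as the text states.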
Hyperfine structure
Physics,Chemistry
3,213
30,600,547
https://en.wikipedia.org/wiki/Control%20of%20Asbestos%20Regulations%202006
This article deals with the Control of Asbestos Regulations 2006 which came into force on 13 November 2006. For the later regulations which came into force on 6 April 2012, see Asbestos and the law - Control of Asbestos Regulations The Control of Asbestos Regulations 2006 came into force in the United Kingdom on 13 November 2006 and brought together a number of other asbestos-related pieces of legislation. This has been superseded by The Control of Asbestos Regulations 2012. The pieces of legislation the regulations revoked and replaced were the 'Control of Asbestos at Work Regulations 2002', the 'Asbestos (Licensing) Regulations 1983' and the 'Asbestos (Prohibitions) Regulations 1992'. Key elements of the regulations include a greater emphasis on training, requiring anyone who may come into contact with asbestos in the course of their work to be given suitable training. Greater restrictions were also placed on the levels of asbestos to which workers could be exposed, in the form of 'control limits'. The recently published 'Asbestos: The survey guide' (HSG264) is complementary to these regulations. When work with asbestos is being carried out, the Regulations place a requirement on employers and self-employed workers to prevent exposure to asbestos fibres. Duties arising from the regulations The Control of Asbestos Regulations 2006 brought together three separate pieces of legislation which covered the prohibition of asbestos, the control of asbestos at work and asbestos licensing. They prohibited the import, supply and use of all types of asbestos and also continued to ban the second-hand use of asbestos products such as asbestos boards and tiles. The regulations require mandatory training to be given to anyone who may be exposed to asbestos whilst at work. This requirement enables contracted workers on site to assess correctly the nature of a material before work is carried out, thus eliminating the risk of uncontrolled damage to asbestos-containing materials.
Maintenance workers who may be coming onto premises to carry out a job must also be given training. Should work need to be carried out that may result in the disturbing of asbestos, then all measures should be taken to limit the exposure to asbestos fibres. Any exposure to those fibres should be below the 'airborne exposure limit' of 0.1 fibres per cm³. The control limit is the maximum concentration of asbestos fibres in the air, measured over any continuous 4-hour period. Any short-term exposure to asbestos, as measured by continuous exposure over 10 minutes, should not exceed 0.6 fibres per cm³. These exposures must be strictly controlled with respiratory protective equipment if exposure can be reduced in no other way. In terms of asbestos removal, any work should be carried out by a licensed contractor, although any decision as to whether work is 'licensable' is based on the risk. Anyone working on asbestos under the regulations must have a licence issued by the Health and Safety Executive. Prosecutions arising from the regulations On 9 June 2009 a company in Swansea, Val Inco Europe Ltd, pleaded guilty to four charges under the Control of Asbestos Regulations and was fined £12,000 and ordered to pay £28,000 costs. The charges were in relation to work carried out by a contractor, A-Weld, on a furnace at the company's premises. Although asbestos surveys had been carried out on their site, the interior of the plant and equipment had not been surveyed. As a result, asbestos insulation material was disturbed and broken, potentially giving rise to powders and fibres. Workers on the site also discovered a 'white material' which they believed to be asbestos and, although the sample was sent away for tests, the site was not isolated and the work was allowed to continue.
HSE principal inspector Andrew Knowles said that an "important aspect was the failure to provide asbestos awareness training for employees, which is a specific requirement where asbestos may be present in a workplace". He added "The failures in this case were entirely preventable and the defendant fell far short of the high standards required. This should serve as a warning to others about the dangers of asbestos and the legal requirement to manage it properly." Asbestos and associated health risks Asbestos is a naturally occurring group of silicate minerals that can readily be separated into thin, strong fibres that are flexible, heat-resistant and chemically inert. Within the United Kingdom and elsewhere in the world, asbestos was used extensively as a building material from the 1950s to the 1980s. It was used as a means of fireproofing as well as insulation, and any building built before 2000 could contain asbestos. Although asbestos can be safe if the material is kept in good condition and undisturbed, if damaged, asbestos fibres could become airborne and cause serious risks to health if inhaled. Serious diseases including mesothelioma, lung cancer, and asbestosis could result if someone were to breathe in high levels of asbestos fibres. There is a particular risk if someone is working either on, or with, asbestos materials which have been damaged. There are six forms of asbestos, although only three were commonly used: chrysotile (white asbestos), the most common; amosite (brown asbestos), which can often be found in ceiling tiles and as a fire retardant in thermal insulation products; and crocidolite (blue asbestos), commonly used in high-temperature applications. Notes References Health and safety in the United Kingdom Safety codes Statutory instruments of the United Kingdom 2006 in British law Asbestos
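The two numerical exposure limits in the regulations can be expressed as a small compliance sketch. This is purely illustrative: the function names and the sampling format are assumptions, not taken from the HSE.

```python
# The two limits quoted in the regulations above.
CONTROL_LIMIT = 0.1     # fibres/cm^3, averaged over a continuous 4-hour period
SHORT_TERM_LIMIT = 0.6  # fibres/cm^3, averaged over any 10-minute period

def time_weighted_average(samples):
    """samples: list of (duration_minutes, fibres_per_cm3) pairs."""
    total = sum(d for d, _ in samples)
    return sum(d * c for d, c in samples) / total

def within_limits(four_hour_samples, ten_minute_peak):
    """Check both the 4-hour control limit and the 10-minute short-term limit."""
    return (time_weighted_average(four_hour_samples) <= CONTROL_LIMIT
            and ten_minute_peak <= SHORT_TERM_LIMIT)

# 240 minutes split between background levels and a brief disturbance:
ok = within_limits([(230, 0.02), (10, 0.5)], ten_minute_peak=0.5)  # True
```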
Control of Asbestos Regulations 2006
Environmental_science
1,069
9,990,961
https://en.wikipedia.org/wiki/Strong%20link/weak%20link
A strong link/weak link and exclusion zone nuclear detonation mechanism is a type of safety mechanism employed in the arming and firing mechanisms of modern nuclear weapons. The safety mechanism starts by enclosing the electronics and mechanical components used to arm and fire the nuclear weapon with a mechanical and electrical isolation barrier, the energy barrier, which encloses and defines the exclusion zone. This is insulated from mechanical, thermal, and electrical disruptions (such as static electricity, lightning, or fire). Between the exclusion zone and the actual detonators, a normally-disconnected link mechanism is used, such as a switch which has a built-in motor to activate it. The arming system has to activate the switch in order to connect the firing circuits to the detonators in the weapon. This disconnection, which requires the arming mechanism to operate, is called the strong link. It is possible for an accident (rocket explosion, airplane crash, accident while the weapon is being moved) to disrupt the weapon and break the integrity of the exclusion zone. As a safety mechanism, a weak link is also built into the system. This is a set of components designed to fail at lower stresses (thermal, mechanical, and electrical) than the strong links, and will prevent signals from the strong links from reaching the detonators. The weak link acts to break the connection to the detonators before the strong link can be disrupted and made to fail by the stress of an accident: by the time the strong links fail, the weapon has already been rendered permanently inoperable. Strong links and the following weak links are intentionally co-located, so that they will experience similar environmental conditions. The combined effect of the failure modes in the strong and weak links can be summarised as follows: if both links are functional, normal arming is possible; if the weak link has failed, the weapon is inoperable and therefore safe, whether or not the strong link has also failed; the remaining combination, a failed strong link with an intact weak link, is the unsafe case that the design excludes by ensuring the weak links always fail first. Strong links Strong links, at least in US nuclear weapons, are always implemented as electro-mechanical systems such as motor-driven switches.
There are two main requirements: when functional, never to allow an invalid signal to penetrate the energy barrier, and never to fail in a way that can pass a signal through the barrier before the weak links inside the exclusion zone have also failed. The MC2935 and MC2969 devices were two similar devices based on a rotary solenoid, acting, respectively, as "trajectory" (passing a signal only when a missile's physical movement indicated a correct launch) and "intent" (signalling that a detonation is desired by the operator) strong links. The Mechanical Safing and Arming Device (MSAD) strong link device used a small pellet of sensitive high explosive to trigger a larger charge of insensitive high explosive. Normally, the pellet was held away from the main charge, and was physically moved into position only when the strong link was activated by a valid input and detonated by a mechanical "slapper". The MSAD also contained a weak link: the pellet would burn or explode harmlessly in a fire when it was not in position, and the insensitive explosives could not then be detonated at all. Multiple strong links could be used in series, which, when properly designed, multiplies the safety factor. The B61 nuclear bomb, for example, gated the trajectory strong link behind the intent strong link. Until the correct intent unique signal was sent, the trajectory unique signal would not even be presented to the trajectory strong link inputs. Unique signals Strong links implement a mechanism where only a single, unique form of energy may enter the exclusion zone. This energy is encoded as a unique signal: a sequence of "events" which must occur in a precise and preset pattern for the link to activate. This pattern is specifically designed to be extremely unlikely to occur by chance. The pattern is checked for validity by a discriminator.
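The discriminator's behaviour can be sketched as a toy state machine. This is purely illustrative: real strong links are electro-mechanical devices, and the 24-event pattern below is arbitrary, not a real unique signal.

```python
# A single-try discriminator: any wrong event permanently locks the device.
class SingleTryDiscriminator:
    def __init__(self, pattern):
        self.pattern = pattern  # the fixed, non-secret unique signal
        self.position = 0
        self.locked = False     # inoperable after any wrong event

    def event(self, e):
        """Feed one event; return True once the full pattern has been matched."""
        if self.locked:
            return False
        if self.position >= len(self.pattern):
            return True                 # already fully matched
        if e != self.pattern[self.position]:
            self.locked = True          # single-try: cannot be reset remotely
            return False
        self.position += 1
        return self.position == len(self.pattern)

uqs = "ABBABAABBAABABBABAABABBA"        # arbitrary 24-event example
d = SingleTryDiscriminator(uqs)
armed = [d.event(e) for e in uqs][-1]   # True only after the exact sequence
```

A multiple-try variant would simply allow `locked` to be cleared by a remote reset.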
In some devices, known as single-try discriminators, an incorrect event pattern leads to the device becoming inoperable: the weapon cannot then be reset and fired remotely. "Multiple-try" discriminators could be reset remotely. A single-try strong link might have an event sequence of 24 events, whereas a multiple-try device would have more: the MC2969 had 47. Unique signal patterns were always the same for a given strong link discriminator, and were not secret or classified: they were designed only for safety purposes and not security. Each strong link had a different signal, so as to avoid the possibility of common-mode failure. Unique signals were used because it was recognised that it was impossible to fully isolate the strong link from any and all electrical sources in an "abnormal environment" (such as a disintegrating aircraft). By encoding the only valid signal as a unique pattern of information, the safety principle of "incompatibility" was introduced: the signal is "incompatible" with all other electrical energy because the information that makes up a unique signal is not present in any other components (such as signal buffers or storage). Therefore, the channel over which the UQS is transmitted does not need to be proven to have a safe response. Only the signal generator and the strong link need to be proven to have safe behaviour until such time as the weak links render the weapon inert. Critically for maintaining this safety, the strong link discriminator must be the only place in the entire system where "decisions" are made, and the transmission channel must never be permitted to retain knowledge of events, handle multiple events at once or re-order events. Doing so could permit a single action to generate multiple signal events. Additionally, all events must be processed identically: to do otherwise constitutes pre-storage of knowledge of the UQS and biases the channel. Events may be sent or received in any format (e.g.
digitally, as voltage levels, mechanically, etc.) as long as these conditions are met; format translation is also permitted as long as the translator transmits each event before processing the next one. Unique signals were usually encoded as sequences of binary data (though strictly the data did not have to be binary, it was deemed that the cost of a longer sequence was outweighed by the simpler implementation). Unique signals were carefully designed to have statistical properties extremely unlikely to exist unintentionally, and were also designed to be transmitted not only electrically via voltage or pulse-width modulation, but also mechanically (e.g. a push-pull rod), optically or pneumatically. Events are described alphabetically, rather than numerically (e.g. 0 and 1), to avoid confusion with specific physical signals; a sequence using two event types would have "A" and "B" events. Examples of statistical weaknesses that undermine safety properties include sequence symmetry, periodicity, repeated events, imbalances between events (event-wise balance: almost equal numbers of "A" and "B" events), imbalances between pairs (pair-wise balance: "AA", "AB", "BA" and "BB" should be almost equal in occurrence) and correlations with other unique signals (as this would permit events from a different UQS to bias this one). Testing signals Testing and training signals that would ever be transmitted to a weapon were also carefully chosen to be statistically weak unique signals, which would still also test the integrity of the signal transmission system. This was done so that a test signal, being statistically very different from the genuine signal with its strong statistical properties, could never be mistaken for the valid UQS.
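The statistical weaknesses listed above can be expressed as simple checks. The helper names are assumptions for illustration, not the actual design criteria.

```python
# Toy checks for the statistical properties of a candidate event sequence.
from collections import Counter

def event_balance(seq):
    """Difference between the counts of "A" and "B" events (should be small)."""
    c = Counter(seq)
    return abs(c["A"] - c["B"])

def pair_balance(seq):
    """Spread of the counts of adjacent pairs (AA/AB/BA/BB; should be small)."""
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    return max(pairs.values()) - min(pairs.values())

def is_symmetric(seq):
    """A palindromic sequence is a statistical weakness."""
    return seq == seq[::-1]

def is_periodic(seq):
    """True if the sequence is a whole number of repeats of a shorter block."""
    return any(seq == seq[:p] * (len(seq) // p)
               for p in range(1, len(seq) // 2 + 1) if len(seq) % p == 0)

# A periodic pattern such as "ABABABAB" fails these checks and would make a
# poor unique signal, but a deliberately weak pattern like it suits a test signal.
```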
In order to test the unique signal generators, devices such as the CM-458/U Signal Comparator were used to check that the signals passed to the strong link were correct; it tested the DCU-201 or DCU-218 Aircraft Controller, which passed the unique signal to the weapon's MC2969 intent strong link. The CM-458, built by Sparton Technology, tested voltages, pulse widths and signal sequence against the fixed sequence for the strong link, and was mounted on the aircraft pylon in order to also test the aircraft wiring. Weak links The weak links, which follow the strong links, are designed to fail earlier than the strong links. There are many kinds of weak link, which are sensitive to conditions including thermal, electrical or mechanical problems. Some weak links are dedicated devices inserted into the signal paths that function only as weak links, and others can also be critical parts of the weapon that are designed to become inoperative under certain conditions. An example of a weak link that is sensitive to temperature is the set of capacitors in the firing set, which are charged and then discharged to trigger the detonators. These can be deliberately designed to fail when a specific high temperature is reached, which will prevent the firing set from being able to detonate the explosives. Limitations These mechanisms do not prevent misuse of the weapon, which is restricted by Permissive Action Link code systems, or an accident from physically causing initiation of the explosives or detonators directly from extremely high temperatures, impact forces, or electrical disturbance such as lightning. The risk of accidental direct detonation is significantly reduced by using insensitive high explosives such as TATB, which is extremely unlikely to detonate due to fire, impact or electricity. While TATB may decompose or burn in a fire, it is extremely unlikely to detonate as a result of that decomposition or burning.
See also Nuclear weapon design Permissive Action Link References Nuclear technology Nuclear weapon safety
Strong link/weak link
Physics
1,950
8,209,053
https://en.wikipedia.org/wiki/HR%201614
HR 1614 (284 G. Eridani, GJ 183) is a star in the constellation Eridanus. Based upon parallax measurements, it is about distant from the Earth. It is a main-sequence star with a stellar classification of K3V. The chromosphere has an effective temperature of about 4,945 K, which gives this star the orange hue characteristic of K-type stars. It has about 84% of the Sun's mass and 78% of the Sun's radius. It is considered a metal-rich dwarf star, which means it displays an unusually high proportion of elements heavier than helium in its spectrum. This metallicity is given in terms of the ratio of iron to hydrogen, as compared to the Sun. In the case of HR 1614, this ratio is about 90% higher than the Sun's. The activity cycle for this star is 11.1 years in length. Based upon gyrochronology, the estimated age of this star is 4.5 Gyr. This system is a member of a moving group of at least nine stars that share a common motion through space. The members of this group display the same abundance of heavy elements as does HR 1614, which may indicate a common origin for these stars. The space velocity of this group relative to the Sun is 59 km/s. The estimated age of this group is 2 Gyr, suggesting a corresponding age for this star. See also List of star systems within 25–30 light-years References External links Eridanus (constellation) 032147 023311 1614 K-type main-sequence stars Eridani, 284 Durchmusterung objects 0183
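The "about 90% higher" iron-to-hydrogen ratio can be restated on the logarithmic [Fe/H] scale astronomers normally use. A small illustrative calculation (the variable names are assumptions):

```python
# Convert a linear abundance ratio relative to the Sun into dex.
import math

ratio = 1.90                  # (Fe/H)_star / (Fe/H)_Sun, i.e. 90% higher
fe_h = math.log10(ratio)      # [Fe/H] ≈ 0.28 dex, consistent with a metal-rich K dwarf
```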
HR 1614
Astronomy
346
1,249,350
https://en.wikipedia.org/wiki/Computers%20and%20writing
Computers and writing is a sub-field of college English studies about how computers and digital technologies affect literacy and the writing process. The range of inquiry in this field is broad, including discussions on ethics when using computers in writing programs, how discourse can be produced through technologies, software development, and computer-aided literacy instruction. Some topics include hypertext theory, visual rhetoric, multimedia authoring, distance learning, digital rhetoric, usability studies, the patterns of online communities, how various media change reading and writing practices, textual conventions, and genres. Other topics examine social or critical issues in computer technology and literacy, such as the issues of the "digital divide", equitable access to computer-writing resources, and critical technological literacies. Some studies have suggested that composing on a computer can offer advantages over writing by hand. "Computers and Writing" is also the name of an academic conference (see below). The Field This interdisciplinary field has grown out of rhetoric and composition studies. Members do scholarly work and teach in allied and diverse areas such as technical and professional communication, linguistics, sociology, and law. Important journals supporting this field are Computers & Composition, Computers & Composition Online, and Kairos: A Journal of Rhetoric, Technology, and Pedagogy. The professional organization Conference on College Composition and Communication has a committee, known as the 7Cs committee (CCCC Committee on Computers in Composition and Communication), that selects onsite and online hosts for the Computers & Writing conference and coordinates the "Technology Innovator Award" presented at that annual conference. Conference and Conference History The conference "Computers and Writing" was established in 1982 in Minneapolis, Minnesota by Donald Ross and Lillian Bridwell.
The conference was informal at first, but has grown from a grassroots-organized conference to an established, mainstream conference that examines the ways in which computers change writing practice and pedagogy. In earlier conferences, the scholarship presented often explored how computers influenced individual writers, but during the late 1980s and 1990s, scholarship shifted to hypertext and hypermedia, and the social nature of computer-mediated writing. The conference initially presented original or "homemade" software design associated with word processing and editing, but eventually switched to commercial software as commercial software became more common for both individual students and educational institutions. The conference has a history of technological optimism, and much of the scholarship presented is optimistic regarding technology's influence on writing. The conference also examines and voices fears and concerns related to computer technology. Some of these fears are related to institutional policies and control as well as the fear of being overwhelmed by the constant march of technological innovation. The conference has also explored how computer-mediated writing can be used in socially responsible ways, as is evident from the feminist roots of the conference and subfield. The conference's feminist roots are evident in its support of minority scholars and scholarship. Awards such as the Hawisher and Selfe Caring for the Future Scholarship provide opportunities for new presenters in Computers and Writing-related fields to attend the conference. This scholarship is preferentially awarded to minority scholars who are involved in the field. While the conference was originally more focused on software and hardware decisions and use, the conference has become more concerned with the theoretical application of computers in writing pedagogy and practice.
This attention to theory mirrors a shift to embrace multimodal compositions as texts and interdisciplinary growth as the conference became more mainstream and established in the 1990s and early 2000s. The conference has been held annually since 1988, which is the year that the CCCC Committee on Computers established a subcommittee to support the Computers and Writing Conference. While the journal Computers and Composition, founded by Cynthia Selfe and Kate Kiefer in 1983, is not officially connected to the Computers and Writing Conference, both began around the same time and explore the subfield within the larger fields of composition studies and rhetoric. Conference dates and places 2003 Purdue University, Indiana 2013 University of Findlay, Ohio 2022 East Carolina University, North Carolina Computers and Writing Conference 2022 Computers and Writing Conference "Practicing Digital Activisms" was held at East Carolina University, Greenville, NC, on May 19-22, 2022. The hashtag for the conference was #CWCon22. Keynote Address: Charlton McIlwain, author of the book Black Software: The Internet & Racial Justice, from the AfroNet to Black Lives Matter. Emerging Voices: Antonio Byrd, an assistant professor of English at the University of Missouri-Kansas City; Wilfredo Flores, a PhD Candidate in the Department of Writing, Rhetoric, and American Cultures at Michigan State University and a co-founder of Queering Medicine; Constance Haywood, a fourth-year PhD candidate at Michigan State University; Jo Hsu, an assistant professor of Rhetoric and Writing at the University of Texas at Austin; Cana Itchuaqiyaq, tribal member of the Noorvik Native Community in NW Alaska and an assistant professor of professional and technical writing at Virginia Tech; McKinley Green, an assistant professor of English at George Mason University.
Presenters: Pedagogy Computers and writing pedagogy teaches students practical applications and implications of writing by exploring complex concepts such as visual rhetoric, issues of access, and the social implications of online writing. Scholars analyze how the computer becomes an environment for facilitated writing and communication. The production and consumption of digital, multimodal, and new media texts advances the field's range of study and research. The integration of computers into writing instruction has significantly improved students' quality of writing by allowing for immediate feedback and periodic revision. The majority of computers and writing scholars agree that engaging students in the production of such multimodal/digital texts is crucial to the learning process in our digitally infused moment. Technological advancements have facilitated the development of writing and digital literacy amongst students across all education levels. Consequently, theoretical frameworks designed for online writing instruction play a pivotal role in the advancement of pedagogy, encouraging the development of flexible learning environments. During the 1960s and 1970s, a period also known as the "Birth of Composition," computer-assisted instruction was used to observe students and provided instant positive, negative, or constructive feedback. How computers and digital technologies would impact writers' efficiency and quality of work was still being determined. Initial debate came from how best to balance creativity and technical construction in writing. Many teachers thought that the technology programs offered needed improvement because they did not allow for the creative expression of the students, acting only as an evaluator, reader, and feedback agent. Now the programs require multimodality, creativity, and technical complexity.
The problems the students face in these programs now require much more comprehensive thinking and creativity, to the point that it is difficult to still call the subject "writing" because the students are required to know much more. Computers and writing pedagogies must be dynamic and adaptable to how technology, media, and sociopolitical spaces operate in a constant state of flux. Emerging forms of pedagogy address "Digital Activism" and the influence of social media on political communication and advocacy. Quality interaction in blended and online writing classes is important. It should emphasize pedagogical methods that foster thoughtful communication between instructors and students and facilitate the enhancement of writing skills. Computers and digital media offer innovative ways to engage rhetorically with one another. Scholars also study how students develop their digital literacy by connecting their previous interactions with technologies to new forms by means of metaphors and mental models. Teaching theoretical composing concepts through scaffolding can build students' digital literacy both with current and future technologies and programs. These skills advance critical media awareness when working with digital media. Writing in the age of communication technology In the age of communication technology, amateurs and experts collaborate to create, sustain and develop virtual communities based on what James Paul Gee and Elisabeth Hayes have called "passionate affinity spaces", or communities organized around "a shared endeavor, interest, or passion". Technology, specifically blogs, can be a way to build learning communities and help students learn to write authentically for and respond to various audiences by making their writing public.
On the virtual playing field, knowledge and talent matter more than degrees and professional memberships, so these spaces offer students a new learning environment and space to collaborate on the production and distribution of knowledge. Online platforms can reshape traditional pedagogies by encouraging flexibility and personal creativity. Also, digital tools improve educational quality by promoting a collaborative environment that stimulates critical thinking, preparing students for the pedagogical shifts in writing education. Aligning with the notion of "affinity space", composition scholars have coined the term "cultural ecology" to examine the complex social and cultural contexts that shape the development of technological literacy. Drawing from the literacy narratives of two participants, Hawisher, Selfe, Moraski, and Pearson theorized five themes emergent from the cultural ecology of literacy, namely the "cultural, material, educational, and familial contexts" that shape and are shaped by literacy development. The five themes of cultural ecology emphasize that technological literacy goes through life spans, that literacy provides the medium for people to exert their agency, that literacy occurs and develops both within and outside of school contexts, that the conditions of access influence people's literacy development, and that literacy practices and values transmit via family units. Kristine Blair pointed to the positive impacts of cultural conflicts in constructing online discourses and political discussions, while at the same time warning that students may not be transformed simply by exposure to the conflicts. Also looming behind the issue of culture in computers and writing is the digital divide. Cynthia Selfe showed that unequal access to technological literacy is situated in unequal social, cultural, economic, and political situations.
As Selfe wrote, "computers continue to be distributed along the related axes of race and socioeconomic status and this distribution contributes to ongoing patterns of racism and to the continuation of poverty." To address the digital divide, Selfe called upon educators and compositionists to rethink computer literacy as a political act that requires paying critical attention to inequality issues and acting politically in specific disciplinary contexts. See also Conference on College Composition and Communication Digital rhetoric Multimodality Media theory of composition Composition studies References Computing and society Digital humanities Composition (language) Writing
Computers and writing
Technology
2,061
54,021,983
https://en.wikipedia.org/wiki/Bottle%20house%20of%20Ganja
Bottle House is an unusual private residence in Ganja built from glass bottles. Overview The two-storey bottle house was built in 1966–67 by Ibrahim Jafarov, a resident of Ganja, from glass bottles of different shapes and sizes and from colorful stones brought from Sochi. 48,000 bottles were used in the construction. The construction of this house was dedicated to the memory of Ibrahim Jafarov's brother, who went missing during World War II. Decoration The construction year of the house was written on the wall of the front porch. A large portrait of the missing brother, Yusif Jafarov, was drawn beneath the protrusion of the roof at the front. Additionally, the walls of the house were decorated with notes about the Olympic Games held in the USSR in 1980, and with the name and portrait of the owner. The word “Ganja” (the historic name of the city) was spelled out on different parts of the building with the bottoms of bottles, even though at the time the city was officially called Kirovabad (1935–1989). The house was recently reconstructed and is a popular destination for citizens and tourists. See also Ganja Architecture of Azerbaijan References External links ganca.net Ganja, Azerbaijan Tourist attractions in Ganja, Azerbaijan Buildings and structures in Ganja, Azerbaijan 1967 architecture Glass buildings Sustainable buildings and structures Bottle houses
Bottle house of Ganja
Engineering
271
42,043,791
https://en.wikipedia.org/wiki/Off-stoichiometry%20thiol-ene%20polymer
An off-stoichiometry thiol-ene polymer is a polymer platform comprising off-stoichiometry thiol-enes (OSTE) and off-stoichiometry thiol-ene-epoxies (OSTE+). The OSTE polymers comprise off-stoichiometry blends of thiols and allyls. After complete polymerization, typically by UV micromolding, the polymer articles contain a well-defined number of unreacted thiol or allyl groups, both on the surface and in the bulk. These surface anchors can be used for subsequent direct surface modification or bonding. In later versions, epoxy monomers were added to form ternary thiol-ene-epoxy monomer systems (OSTE+), in which the epoxy reacts in a second step with the excess thiols, creating a final polymer article that is completely inert. Critical features of OSTE+ polymers include uncomplicated and rapid fabrication of complex structures in a standard chemistry lab, hydrophilic native surface properties, and covalent bonding via latent epoxy chemistry. Development The OSTE polymer resins were originally developed by Tommy Haraldsson and Fredrik Carlborg in the Micro and Nanosystems group at the Royal Institute of Technology (KTH) to bridge the gap between research prototyping and commercial production of microfluidic devices. The resins were later adapted and improved for commercial applications by the Swedish start-up Mercene Labs AB under the name OSTEMER. Reaction mechanism The OSTE resins are cured via a rapid thiol-ene "click" reaction between thiols and allyls. Because the thiols and allyls react in a perfectly alternating fashion and the reaction has a very high conversion (up to 99%), the initial off-stoichiometry of the monomers exactly defines the number of unreacted groups left after polymerization. With the right choice of monomers, very high off-stoichiometry ratios can be attained while maintaining good mechanical properties. 
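The arithmetic behind this can be illustrated with a short sketch (illustrative only: the function name and the example monomer functionalities and mole ratios are assumptions for demonstration, not a real formulation). For a complete, perfectly alternating 1:1 reaction, the off-stoichiometry directly sets the number of unreacted groups:

```python
def excess_groups(thiol_moles, thiol_functionality, ene_moles, ene_functionality):
    """Functional groups left after an idealized thiol-ene reaction.

    Assumes 100% conversion and strict 1:1 thiol/ene consumption, as in
    the alternating click reaction described above.
    """
    thiols = thiol_moles * thiol_functionality  # total SH groups in the blend
    enes = ene_moles * ene_functionality        # total C=C groups in the blend
    reacted = min(thiols, enes)                 # 1:1 consumption until one side runs out
    return {"thiol": thiols - reacted, "ene": enes - reacted}

# Hypothetical blend: a tetrathiol (4 SH groups per monomer) in excess over a
# triallyl (3 ene groups per monomer); the leftover thiols are the surface anchors.
leftover = excess_groups(thiol_moles=1.3, thiol_functionality=4,
                         ene_moles=1.0, ene_functionality=3)
```

With these example numbers the blend carries 5.2 thiol groups against 3.0 enes, so roughly 2.2 mol of thiol anchors per mole of triallyl remain after full cure; a stoichiometric blend would leave none.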
The off-stoichiometry thiol-ene-epoxies, or OSTE+ polymers, are created in a two-step curing process in which a first, rapid thiol-ene reaction defines the geometric shape of the polymer while leaving an excess of thiols and all of the epoxy unreacted. In a second step, all the remaining thiol groups and the epoxy groups are reacted to form an inert polymer. Properties OSTE polymers The main advantages put forward for the UV-cured OSTE polymers in microsystems have been i) their dry bonding capacity, by reacting a polymer with thiol excess to a second polymer with allyl excess at room temperature using only UV light, ii) their well-defined and tunable number of surface anchors (thiols or allyls) present on the surface, which can be used for direct surface modification, and iii) their wide tuning range of mechanical properties, from rubbery to thermoplastic-like, depending only on the choice of off-stoichiometry. The glass transition temperature typically varies from below room temperature for high off-stoichiometric ratios to 75 °C for a stoichiometric blend of tetrathiol and triallyl. They are typically transparent in the visible range. A disadvantage put forward for the OSTE polymers is the leaching of unreacted monomers at very high off-stoichiometric ratios, which may affect cells and proteins in lab-on-chip devices, although cell viability has been observed for cell cultures on low off-stoichiometric OSTE. OSTE+ polymers The dual-cure thiol-ene-epoxies, or OSTE+ polymers, differ from the OSTE polymers in that they have two separate curing steps. After the first, UV-initiated step, the polymer is rubbery, can easily be deformed, and has surface anchors available for surface modification. During the second step, when all the remaining thiols and epoxies are reacted, the polymer stiffens and can bond to a wide range of substrates, including itself, via the epoxy chemistry. 
The advantages put forward for the OSTE+ are i) their unique ability for integration and bonding via the latent epoxy chemistry and the low built-in stresses in the thiol-ene polymers, ii) their complete inertness after the final cure, and iii) their good barrier properties and the possibility of scaling up manufacturing using industrial reaction injection molding. Both stiff and rubbery versions of the OSTE+ polymers have been demonstrated, showing their potential in microsystems for valving and pumping similar to PDMS components, but with the benefit of withstanding higher pressures. The commercial version of the OSTE+ polymer, OSTEMER 322, has been shown to be compatible with many cell lines. Fabrication OSTE polymers The OSTE resins can be cast and cured in structured silicone molds or on coated permanent photoresist. OSTE polymers have also shown excellent photostructuring capability using photomasks, enabling, for example, powerful and flexible capillary pumps. OSTE+ polymers The OSTE+ resins are first UV-cured in the same way as the OSTE polymers but are later thermally cured to stiffen and bond to a substrate. Applications Lab-on-a-chip OSTE+ allows for soft-lithography microstructuring and strong, biocompatible dry bonding to almost any substrate during lab-on-a-chip (LoC) manufacturing, while simultaneously mimicking the mechanical properties found in thermoplastic polymers, hence allowing for true prototyping of commercial LoC. The commonly used materials for microfluidics suffer from unwieldy steps and often ineffective bonding processes, especially when packaging biofunctionalized surfaces, which makes LoC assembly difficult and costly. The OSTE+ polymer, by contrast, effectively bonds to nine dissimilar types of substrates, requires no surface treatment prior to bonding at room temperature, features a high Tg, and achieves good bonding strength to at least 100 °C. 
Moreover, it has been demonstrated that excellent results can be obtained using photolithography on OSTE polymers, opening wider potential applications. Bio packaging Biosensors are used for a range of biological measurements. OSTE packaging for biosensing has been demonstrated for QCM and photonic ring resonator sensors. Wafer bonding Adhesive wafer bonding has become an established technology in microelectromechanical systems (MEMS) integration and packaging applications. OSTE is suitable for heterogeneous silicon wafer-level integration in low-temperature processes thanks to its ability to cure even at room temperature. Microarray imprinting and surface energy patterning Imprinting of arrays with hydrophilic-in-hydrophobic microwells is made possible by an innovative surface-energy replication approach using a hydrophobic thiol-ene polymer formulation. In this polymer, hydrophobic-moiety-containing monomers self-assemble at the hydrophobic surface of the imprinting stamp, which results in a hydrophobic replica surface after polymerization. After removing the stamp, microwells with hydrophobic walls and a hydrophilic bottom are obtained. Such a fast and inexpensive procedure can be utilized in digital microwell array technology toward diagnostic applications. OSTE e-beam resist OSTE resin can also be used as an e-beam resist, resulting in nanostructures that allow direct protein functionalization. References Polymers
Off-stoichiometry thiol-ene polymer
Chemistry,Materials_science
1,565
7,246,330
https://en.wikipedia.org/wiki/Joffre-class%20aircraft%20carrier
The Joffre class consisted of a pair of aircraft carriers ordered by the French Navy prior to World War II. The Navy had commissioned an experimental carrier in 1927, but it was slow and obsolete by the mid-1930s. Support for naval aviation was weak during this time, as the Navy had lost control of its aircraft, their training and their development to the new Air Ministry when it formed in 1928, and did not regain full control until 1936. Traditionalists among the naval leadership had begun a battleship building program in the early 1930s to counter German ships that were suitable for commerce raiding, and carriers were deemed useful to hunt them down, especially once the Germans began building a carrier of their own in 1936. One ship was laid down in 1938, but was not launched before all work was cancelled after the Armistice of 22 June 1940. The incomplete hull of Joffre was subsequently scrapped. Background The Navy ordered the conversion of an incomplete battleship into an aircraft carrier in 1922 to gain experience with carrier aviation. The following year the Naval General Staff requested another carrier similar to Béarn, but this was rejected as too expensive, and plans were made for a cheaper aircraft transport that eventually became a seaplane carrier. The 1928 formation of the Air Ministry cost the Navy control of naval aviation, as the new ministry centralized all aspects of military aviation, including aircraft development, training, bases and coastal aircraft. With the Navy only controlling the aircraft aboard its ships, the development of naval aviation stagnated; it was generally ignored by the ministry, and no new carrier aircraft were developed in 1928–1932. The Navy was able to gradually reduce the ministry's control between 1931 and 1934 until it regained full control in August 1936. 
By this time the Navy had embarked on a building program for fast battleships to counter possible German commerce raiders in the North Atlantic that Béarn was simply too slow to support. The Navy believed that carrier operations within range of hostile land-based aircraft were not viable given the limited size of their air groups, and that the commerce-protection mission was ideal for its carriers. Design studies for a carrier able to operate with the new ships began in 1934, but two ships were not authorized until 1937, possibly in response to the laying down of the carrier Graf Zeppelin by Nazi Germany in 1936. Description The Joffre-class carriers were long between perpendiculars and long overall. They had a beam of at the waterline and at the flight deck. The ships displaced at standard load and at full load, which gave them a draft of . Their crew numbered 70 officers and 1,180 sailors. The Navy based the propulsion machinery of the Joffres on that used in a light cruiser, albeit with eight Indret water-tube boilers rather than four. The ships were fitted with two Parsons geared steam turbines, each driving one propeller shaft using steam provided by the boilers at a working pressure of and a temperature of . The turbines were rated at a total of and were designed to give a speed of . The carriers retained the unit system of machinery, with each boiler room supplying steam to the engine room aft of it so that one hit could not completely immobilize the ships. The boiler uptakes were trunked into a single funnel integrated into the island on the starboard side of the flight deck. The ships were designed to carry enough fuel oil to give them a range of at . Aviation facilities The ships' flight deck was offset to the left of the centerline. This helped to compensate for the weight of the very large island and allowed the deck to have a continuous width of . The deck itself was in thickness. 
The carriers were intended to be fitted with an aircraft-handling crane near the stern, below the flight deck, that was strong enough to lift a seaplane aboard. They had a fuel capacity of approximately of aviation gasoline. The Navy optimized the design of the Joffre class for "double-ended" operations, in which aircraft could land and take off over both the bow and stern, so that battle damage to the flight deck would not necessarily end flight operations. Like Béarn, the Joffres had their arresting gear amidships, abreast the island, although the number of wires was increased to nine. While the amidships position minimized the effect of the ships' pitching in high seas, the air turbulence generated by the island was at its worst amidships. Based on trials aboard Béarn in 1935, collapsible landing signals were positioned on the centerline of the flight deck amid the arresting wires, facing in both directions. The flight deck was not provided with any crash barriers, so the American practice of keeping aircraft on the deck during landing operations was not possible. The two hydraulically powered elevators that transferred aircraft between the flight deck and the upper hangar were positioned at the ends of the flight deck, allowing aircraft landing amidships to taxi forward to the elevators and rapidly clear the flight deck. Both elevators were configured to be used by aircraft with their wings still spread, eliminating the requirement to fold the wings before using the elevators that had slowed down Béarn's flight operations. The forward elevator was roughly T-shaped and measured long and wide; the large elevator well so close to the bow weakened the ships' structure, so the designers minimized the size of the well in the hangar deck by seating only the central section in the deck while the outer areas of the elevator rested on top of the deck, requiring a small ramp to move on or off the elevator. The rear elevator was outside the hangar and only its forward end reached the flight deck. 
Although it only measured , its position allowed it to strike down aircraft regardless of size. The carriers were designed with two hangar decks, the upper of which measured with a height of . A space long below the flight deck and between the upper hangar and the rear elevator allowed aircraft to warm up their engines before moving to the flight deck. A single fire curtain amidships could be used to divide the hangar. The upper hangar was the only one that could be used for aircraft operations, as the lower hangars were dedicated to workshops and aircraft assembly and storage facilities. The rear lower hangar was in size and had a height of . A elevator at the forward end of this hangar allowed aircraft to be transferred between the hangars. This elevator was offset to starboard to allow for a passageway to the lower hangar annex that measured . This annex, presumably dedicated to spare parts, was offset to port to make room for the boiler uptakes and ventilation ducting of the forward engine and fire rooms. Based on a decade of experience with Béarn and frequent exercises with the British Fleet Air Arm during the 1930s, French naval aviation believed that air operations would be continuous, with small numbers of aircraft taking off or landing. This required multi-role aircraft, able to switch between missions as the tactical situation dictated. The Joffre-class carriers were designed with an air group of 40 aircraft: 15 single-engined fighters and 25 twin-engined aircraft capable of long-range reconnaissance, bombing and torpedo attacks. In 1939 the Navy ordered 120 Dewoitine D.790 fighters, a navalized variant of the Dewoitine D.520, although none was completed before the Armistice cancelled further work. It issued the A47 specification in 1937 for attack aircraft to equip the carriers and ordered two prototypes each of the SNCAO CAO.600 and the Dewoitine D.750 in 1939. 
The Navy issued the updated A80 specification that same year for a faster aircraft and selected the Bréguet Bre.810, a navalized version of the Bréguet Bre.693, but the prototype was not completed before the Armistice. Armament, fire control and armor The carriers' primary armament consisted of eight 45-caliber Canon de Mle 1932 dual-purpose guns in four twin-gun turrets positioned fore and aft of the island in superfiring pairs. The guns fired a armor-piercing shell at a muzzle velocity of . This gave them a range of at an elevation of +45°. Their mounts had a maximum elevation of +75° and the guns had a rate of fire of about 10 rounds per minute. Light anti-aircraft defense was provided by eight 48-caliber Canon de Mle 1935 guns in four twin-gun ACAD mounts on the island, and twenty-eight Hotchkiss Mitrailleuse de Mle 1929 machine guns in seven quadruple mounts. There were two mounts on the forecastle, two on the stern and a pair on the island. The remaining mount was on the port side underneath the flight deck overhang. The 37 mm guns were fully automatic and had a theoretical rate of fire of 165 rounds per minute. They had a range of with their shells, which were fired at a muzzle velocity of . Their mounts had an elevation range of −10° to +85°. The 13.2 mm machine guns had an effective range of . The 130 mm guns were controlled by a pair of superimposed directors on top of a short tower on the roof of the island. The upper director was equipped with a rangefinder for anti-aircraft defense and the lower with one for surface engagements. Each of the upper 130 mm turrets was fitted with a rotating 5-meter rangefinder as a backup to the directors. A director equipped with a rangefinder remotely controlled each ACAD mount. The two forward directors were superimposed on the roof of the island while the two after directors were side by side, aft of the director tower. 
The waterline armor belt of the Joffre-class ships covered the middle of the hull, from the forward magazines to the aft aviation gasoline tank. It was thick and had a height of about from the main deck to below the waterline. It formed an armored citadel with transverse bulkheads at its ends. The armored deck was 70 mm thick over the magazines and gasoline tanks, but reduced to amidships over the machinery compartments. The torpedo belt ranged in thickness from abreast the propulsion machinery spaces, but thinned to abreast the magazines. The steering compartment was fitted with 26-millimeter armor plates. The 130 mm directors, turrets, their hoists, and their upper handling rooms were protected by of armor, as were the command spaces in the island. For protection against fire, the aviation gasoline tanks were surrounded on all sides by either empty compartments with fire-resistant insulation or inert gases. Ships The beginning of World War II, less than a year after Joffre was laid down, slowed construction as resources were diverted to higher-priority tasks; work ceased altogether in June 1940, when France capitulated after the German invasion, with the ship approximately 20% complete. Work on Joffre was not continued by the Germans and the hull was scrapped. The second planned vessel of the class, Painlevé, was never laid down because it was to succeed Joffre on Slipway No. 1. A third ship was intended to be authorized in 1940 to replace Béarn, but the order was never placed. The Navy demonstrated no sense of urgency in building Joffre, as the bulk of the naval leadership felt that completing the two new battleships to match the modern German and Italian battleships was more important. This was further demonstrated when another battleship was authorized on 1 April 1940 and replaced Painlevé in the queue for Slipway No. 1. 
This belief was not unreasonable as the Germans had suspended work on Graf Zeppelin and the British had an ample number of carriers that could perform the trade protection mission in the North Atlantic. References Bibliography Further reading External links 3D renderings 3D renderings Aircraft carrier classes Proposed aircraft carriers Cancelled aircraft carriers Ship classes of the French Navy
Joffre-class aircraft carrier
Engineering
2,362
19,009
https://en.wikipedia.org/wiki/Milgram%20experiment
Beginning on August 7, 1961, a series of social psychology experiments were conducted by Yale University psychologist Stanley Milgram, who intended to measure the willingness of study participants to obey an authority figure who instructed them to perform acts conflicting with their personal conscience. Participants were led to believe that they were assisting an unrelated experiment, in which they had to administer electric shocks to a "learner". These fake electric shocks gradually increased to levels that would have been fatal had they been real. The experiments unexpectedly found that a very high proportion of subjects would fully obey the instructions, with every participant going up to 300 volts, and 65% going up to the full 450 volts. Milgram first described his research in a 1963 article in the Journal of Abnormal and Social Psychology and later discussed his findings in greater depth in his 1974 book, Obedience to Authority: An Experimental View. The experiments began on August 7, 1961 (after a grant proposal was approved in July), in the basement of Linsly-Chittenden Hall at Yale University, three months after the start of the trial of German Nazi war criminal Adolf Eichmann in Jerusalem. Milgram devised his psychological study to explain the psychology of genocide and answer the popular contemporary question: "Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?" While the experiment itself was repeated many times around the globe, with fairly consistent results, both its interpretations as well as its applicability to the Holocaust are disputed. Procedure Three individuals took part in each session of the experiment: The "experimenter", who was in charge of the session. The "teacher", who was a volunteer for a single session. The "teachers" were led to believe that they were merely assisting, whereas they were actually the subjects of the experiment. 
The "learner", an actor and confederate of the experimenter, who pretended to be a volunteer. The subject and the actor arrived at the session together. The experimenter told them that they were taking part in "a scientific study of memory and learning", to see what the effect of punishment is on a subject's ability to memorize content. Also, he always clarified that the payment for their participation in the experiment was secured regardless of its development. The subject and actor drew slips of paper to determine their roles. Unknown to the subject, both slips said "teacher". The actor would always claim to have drawn the slip that read "learner", thus guaranteeing that the subject would always be the "teacher". Next, the teacher and learner were taken into an adjacent room where the learner was strapped into what appeared to be an electric chair. The experimenter, dressed in a lab coat in order to appear to have more authority, told the participants this was to ensure that the learner would not escape. In a later variation of the experiment, the confederate would eventually plead for mercy and yell that he had a heart condition. At some point prior to the actual test, the teacher was given a sample electric shock from the electroshock generator in order to experience firsthand what the shock that the learner would supposedly receive during the experiment would feel like. The teacher and learner were then separated so that they could communicate, but not see each other. The teacher was then given a list of word pairs that he was to teach the learner. The teacher began by reading the list of word pairs to the learner. The teacher would then read the first word of each pair and read four possible answers. The learner would press a button to indicate his response. 
If the answer was incorrect, the teacher would administer a shock to the learner, with the voltage increasing in 15-volt increments for each wrong answer (if correct, the teacher would read the next word pair). The voltages ranged from 15 to 450. The shock generator included verbal markings that varied from "Slight Shock" to "Danger: Severe Shock". The subjects believed that for each wrong answer the learner was receiving actual shocks. In reality, there were no shocks. After the learner was separated from the teacher, the learner set up a tape recorder integrated with the electroshock generator, which played previously recorded sounds for each shock level. As the voltage of the fake shocks increased, the learner began making audible protests, such as banging repeatedly on the wall that separated him from the teacher. In every condition, the learner made or said a predetermined sound or word. When the highest voltages were reached, the learner fell silent. If at any time the teacher indicated a desire to halt the experiment, the experimenter was instructed to give specific verbal prods. The prods were, in this order: Please continue or Please go on. The experiment requires that you continue. It is absolutely essential that you continue. You have no other choice; you must go on. Prod 2 could only be used if prod 1 was unsuccessful. If the subject still wished to stop after all four successive verbal prods, the experiment was halted. Otherwise, the experiment was halted after the subject had administered the maximum 450-volt shock three times in succession. The experimenter also had prods to use if the teacher made specific comments. If the teacher asked whether the learner might suffer permanent physical harm, the experimenter replied, "Although the shocks may be painful, there is no permanent tissue damage, so please go on." 
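The escalation schedule and prod sequence can be sketched as a toy simulation (hypothetical function and parameter names; the assumption that each prod buys a fixed 50 volts of extra compliance is an arbitrary modeling choice, not part of Milgram's protocol, and the rule of repeating the 450-volt shock three times before halting is omitted):

```python
PRODS = [
    "Please continue.",
    "The experiment requires that you continue.",
    "It is absolutely essential that you continue.",
    "You have no other choice; you must go on.",
]

def run_session(willingness):
    """Return the last shock (in volts) a simulated teacher administers.

    `willingness` is a hypothetical stand-in for the highest voltage the
    teacher accepts without protest; each scripted prod is assumed to add
    50 volts of compliance.
    """
    for volts in range(15, 451, 15):       # 15 V increments up to 450 V
        effective = willingness
        prods_used = 0
        # Teacher balks: experimenter gives the scripted prods in order.
        while volts > effective and prods_used < len(PRODS):
            prods_used += 1
            effective += 50                # assumed effect of one prod
        if volts > effective:              # all four prods failed
            return volts - 15              # last shock actually given
    return 450                             # maximum shock reached
```

Under these assumptions a fully compliant teacher (`run_session(450)`) reaches the full 450 volts, while a teacher willing to go only to 100 volts unaided is prodded up to 300 volts before the session halts.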
If the teacher said that the learner clearly wanted to stop, the experimenter replied, "Whether the learner likes it or not, you must go on until he has learned all the word pairs correctly, so please go on." Predictions Before conducting the experiment, Milgram polled fourteen Yale University senior-year psychology majors to predict the behavior of 100 hypothetical teachers. All of the poll respondents believed that only a very small fraction of teachers (the range was from zero to 3 out of 100, with an average of 1.2) would be prepared to inflict the maximum voltage. Milgram also informally polled his colleagues and found that they, too, believed very few subjects would progress beyond a very strong shock. He also reached out to honorary Harvard University graduate Chaim Homnick, who noted that this experiment would not be concrete evidence of the Nazis' innocence, due to the fact that "poor people are more likely to cooperate". Milgram also polled forty psychiatrists from a medical school, who believed that by the tenth shock, when the victim demands to be free, most subjects would stop the experiment. They predicted that by the 300-volt shock, when the victim would refuse to answer, only 3.73 percent of the subjects would still continue, and they believed that "only a little over one-tenth of one percent of the subjects would administer the highest shock on the board." Milgram suspected before the experiment that the obedience exhibited by Nazis reflected a distinct German character, and planned to use the American participants as a control group before using German participants, who were expected to behave closer to how the Nazis had. However, the unexpected results stopped him from conducting the same experiment on German participants. Results Subjects were uncomfortable administering the shocks, and displayed varying degrees of tension and stress. 
These signs included sweating, trembling, stuttering, biting their lips, groaning, and digging their fingernails into their skin, and some even had nervous laughing fits or seizures. 14 of the 40 subjects showed definite signs of nervous laughing or smiling. Every participant paused the experiment at least once to question it. Most continued after being assured by the experimenter. Some said they would refund the money they were paid for participating. Milgram summarized the experiment in his 1974 article "The Perils of Obedience", writing: The original Simulated Shock Generator and Event Recorder, or shock box, is located in the Archives of the History of American Psychology. Later, Milgram and other psychologists performed variations of the experiment throughout the world, with similar results. Milgram later investigated the effect of the experiment's locale on obedience levels by holding an experiment in an unregistered, backstreet office in a bustling city, as opposed to at Yale, a respectable university. The level of obedience, "although somewhat reduced, was not significantly lower." What made more of a difference was the proximity of the "learner" and the experimenter: empathy for the learner diminished the farther away he was. There were also variations tested involving groups. Thomas Blass of the University of Maryland, Baltimore County performed a meta-analysis on the results of repeated performances of the experiment. He found that while the percentage of participants who were prepared to inflict fatal voltages ranged from 28% to 91%, there was no significant trend over time, and the average percentage for US studies (61%) was close to that for non-US studies (66%). According to Milgram's notes, the participants who refused to administer the final shocks neither insisted that the experiment be terminated nor left the room to check on the health of the victim. Milgram created a documentary film titled Obedience showing the experiment and its results. 
He also produced a series of five social psychology films, some of which dealt with his experiments. Critical reception Ethics Milgram's experiment raised immediate controversy about the research ethics of scientific experimentation because of the extreme emotional stress and inflicted insight suffered by the participants. On June 10, 1964, the American Psychologist published a brief but influential article by Diana Baumrind titled "Some Thoughts on Ethics of Research: After Reading Milgram's 'Behavioral Study of Obedience'". She argued that even though Milgram had obtained informed consent, he was still ethically responsible for ensuring the participants' well-being. When participants displayed signs of distress such as sweating and trembling, the experimenter should have stepped in and halted the experiment. Baumrind's criticisms of the treatment of human participants in Milgram's studies stimulated a thorough revision of the ethical standards of psychological research. Milgram vigorously defended the experiment. He conducted a survey of former participants in which 84% said they were "glad" or "very glad" to have participated and 15% chose neutral responses (92% of all former participants responding). In his 1974 book Obedience to Authority, Milgram described receiving offers of assistance, requests to join his staff, and letters of thanks from former participants. Six years later (at the height of the Vietnam War), one of the participants in the experiment wrote to Milgram, explaining why he was glad to have participated despite the stress: In contrast, critics such as Gina Perry argued that participants were not properly debriefed, leading to lasting emotional harm, and that many participants in fact criticized the ethics of the study in their responses to the questionnaire. 
Applicability to the Holocaust Milgram sparked direct critical response in the scientific community by claiming that "a common psychological process is centrally involved in both [his laboratory experiments and Nazi Germany] events." James Waller, chair of Holocaust and Genocide Studies at Keene State College, formerly chair of Whitworth College Psychology Department, argued that the Milgram experiments "do not correspond well" to the Holocaust events. His points were as follows: The subjects of the Milgram experiments were assured in advance that no permanent physical damage would result from their actions, whereas the Holocaust perpetrators were fully aware that they were killing and maiming their victims with their own hands. The laboratory subjects did not know their victims and were not motivated by racism or other biases, whereas the Holocaust perpetrators displayed an intense devaluation of the victims built up over a lifetime of personal development. Those administering punishment in the lab were not sadists or hate-mongers, and often exhibited great anguish and conflict in the experiment, unlike the designers and executioners of the Final Solution, who had a clear, predetermined "goal". The experiment lasted for an hour, with no time for the subjects to contemplate the implications of their behavior, while the Holocaust lasted for years, with ample time for a moral assessment by all individuals and organizations involved. 
In the opinion of Thomas Blass—who is the author of a scholarly monograph on the experiment (The Man Who Shocked The World) published in 2004—the historical evidence pertaining to actions of the Holocaust perpetrators speaks louder than words: Validity In a 2004 issue of the journal Jewish Currents, Joseph Dimow, a participant in the 1961 experiment at Yale University, wrote about his early withdrawal as a "teacher", suspicious "that the whole experiment was designed to see if ordinary Americans would obey immoral orders, as many Germans had done during the Nazi period." In 2012, Australian psychologist Gina Perry investigated Milgram's data and writings and concluded that Milgram had manipulated the results, and that there was a "troubling mismatch between (published) descriptions of the experiment and evidence of what actually transpired." She wrote that "only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter". She described her findings as "an unexpected outcome" that "leaves social psychology in a difficult situation." In a book review critical of Gina Perry's findings, Nestar Russell and John Picard take issue with Perry for not mentioning that "there have been well over a score, not just several, replications or slight variations on Milgram's basic experimental procedure, and these have been performed in many different countries, several different settings and using different types of victims. And most, although certainly not all of these experiments have tended to lend weight to Milgram's original findings." Interpretations Milgram elaborated two theories: The first is the theory of conformism, based on Solomon Asch's conformity experiments, describing the fundamental relationship between the group of reference and the individual person. A subject who has neither the ability nor the expertise to make decisions, especially in a crisis, will leave decision making to the group and its hierarchy. 
The group is the person's behavioral model. The second is the agentic state theory, wherein, per Milgram, "the essence of obedience consists in the fact that a person comes to view themselves as the instrument for carrying out another person's wishes, and they therefore no longer see themselves as responsible for their actions. Once this critical shift of viewpoint has occurred in the person, all of the essential features of obedience follow". Alternative interpretations In his book Irrational Exuberance, Yale finance professor Robert J. Shiller argues that other factors might be partially able to explain the Milgram experiments: In a 2006 experiment, a computerized avatar was used in place of the learner receiving electrical shocks. Although the participants administering the shocks were aware that the learner was unreal, the experimenters reported that participants responded to the situation physiologically "as if it were real". Another explanation of Milgram's results invokes belief perseverance as the underlying cause. What people cannot be counted on to do "is to realize that a seemingly benevolent authority is in fact malevolent, even when they are faced with overwhelming evidence which suggests that this authority is indeed malevolent. Hence, the underlying cause for the subjects' striking conduct could well be conceptual, and not the alleged 'capacity of man to abandon his humanity ... as he merges his unique personality into larger institutional structures.'" This last explanation receives some support from a 2009 episode of the BBC science documentary series Horizon, which involved replication of the Milgram experiment. Of the twelve participants, only three refused to continue to the end of the experiment. Speaking during the episode, social psychologist Clifford Stott discussed the influence that the idealism of scientific inquiry had on the volunteers. He remarked: "The influence is ideological. 
It's about what they believe science to be, that science is a positive product, it produces beneficial findings and knowledge to society that are helpful for society. So there's that sense of science is providing some kind of system for good." Building on the importance of idealism, some recent researchers suggest the "engaged followership" perspective. Based on an examination of Milgram's archive, social psychologists Alexander Haslam, Stephen Reicher and Megan Birney, at the University of Queensland, discovered that people are less likely to follow the prods of an experimental leader when the prod resembles an order. However, when the prod stresses the importance of the experiment for science (i.e. "The experiment requires you to continue"), people are more likely to obey. The researchers suggest the perspective of "engaged followership": that people are not simply obeying the orders of a leader, but instead are willing to continue the experiment because of their desire to support the scientific goals of the leader and because of a lack of identification with the learner. A neuroscientific study also supports this perspective: watching the learner receive electric shocks did not activate brain regions involved in empathic concern. Replications and variations Milgram's variations In Obedience to Authority: An Experimental View (1974), Milgram describes 19 variations of his experiment, some of which had not been previously reported. Several experiments varied the distance between the participant (teacher) and the learner. Generally, when the participant was physically closer to the learner, the participant's compliance decreased. In the variation where the learner's physical immediacy was closest—where the participant had to hold the learner's arm onto a shock plate—30 percent of participants completed the experiment. The participant's compliance also decreased if the experimenter was physically farther away (Experiments 1–4). 
For example, in Experiment 2, where participants received telephonic instructions from the experimenter, compliance decreased to 21 percent. Some participants deceived the experimenter by pretending to continue the experiment. In Experiment 8, an all-female contingent was used; previously, all participants had been men. Obedience did not significantly differ, though the women communicated experiencing higher levels of stress. Experiment 10 took place in a modest office in Bridgeport, Connecticut, purporting to be the commercial entity "Research Associates of Bridgeport" without apparent connection to Yale University, to eliminate the university's prestige as a possible factor influencing the participants' behavior. In those conditions, obedience dropped to 47.5 percent, though the difference was not statistically significant. Milgram also combined the effect of authority with that of conformity. In those experiments, the participant was joined by one or two additional "teachers" (also actors, like the "learner"). The behavior of the participants' peers strongly affected the results. In Experiment 17, when two additional teachers refused to comply, only four of 40 participants continued in the experiment. In Experiment 18, the participant performed a subsidiary task (reading the questions via microphone or recording the learner's answers) with another "teacher" who complied fully. In that variation, 37 of 40 continued with the experiment. Replications Around the time of the release of Obedience to Authority in 1973–1974, a version of the experiment was conducted at La Trobe University in Australia. As reported by Perry in her 2012 book Behind the Shock Machine, some of the participants experienced long-lasting psychological effects, possibly due to the lack of proper debriefing by the experimenter. 
In 2002, the British artist Rod Dickinson created The Milgram Re-enactment, an exact reconstruction of parts of the original experiment, including the uniforms, lighting, and rooms used. An audience watched the four-hour performance through one-way glass windows. A video of this performance was first shown at the CCA Gallery in Glasgow in 2002. A partial replication of the experiment was staged by British illusionist Derren Brown and broadcast on UK's Channel 4 in The Heist (2006). Another partial replication of the experiment was conducted by Jerry M. Burger in 2006 and broadcast on the Primetime series Basic Instincts. Burger noted that "current standards for the ethical treatment of participants clearly place Milgram's studies out of bounds." In 2009, Burger was able to receive approval from the institutional review board by modifying several of the experimental protocols, including halting the experiment after the 150-volt switch and having the learner directly tell the participant within a few seconds of the end of the experiment that they had not received any shocks. Burger found obedience rates virtually identical to those reported by Milgram in 1961–62, even while meeting current ethical regulations of informing participants. In addition, half the replication participants were female, and their rate of obedience was virtually identical to that of the male participants. Burger also included a condition in which participants first saw another participant refuse to continue. However, participants in this condition obeyed at the same rate as participants in the base condition. In the 2010 French documentary Le Jeu de la Mort (The Game of Death), researchers recreated the Milgram experiment with an added critique of reality television by presenting the scenario as a game show pilot. Volunteers were given €40 and told that they would not win any money from the game, as this was only a trial. 
Only 16 of 80 "contestants" (teachers) chose to end the game before delivering the highest-voltage punishment. The experiment was performed on Dateline NBC on an episode airing April 25, 2010. The Discovery Channel aired the "How Evil are You?" segment of Curiosity on October 30, 2011. The episode was hosted by Eli Roth, who produced results similar to the original Milgram experiment, though the highest-voltage punishment used was 165 volts, rather than 450 volts. Roth added a segment in which a second person (an actor) in the room would defy the authority ordering the shocks, finding that, more often than not, the subjects would stand up to the authority figure in this case. Other variations Charles Sheridan and Richard King (at the University of Missouri and the University of California, Berkeley, respectively) hypothesized that some of Milgram's subjects may have suspected that the victim was faking, so they repeated the experiment with a real victim: a "cute, fluffy puppy" who was given real, albeit apparently harmless, electric shocks. Their findings were similar to those of Milgram: seven of the 13 male subjects and all 13 of the female subjects obeyed throughout. Many subjects showed high levels of distress during the experiment, and some openly wept. In addition, Sheridan and King found that the duration for which the shock button was pressed decreased as the shocks got higher, meaning that for higher shock levels, subjects were more hesitant. Media depictions Obedience to Authority () is Milgram's own account of the experiment, written for a mass audience. Obedience is a black-and-white film of the experiment, shot by Milgram himself. It is distributed by Alexander Street Press. The Tenth Level was a fictionalized 1975 CBS television drama about the experiment, featuring William Shatner and Ossie Davis. 
Henri Verneuil's I as in Icarus (1979) has a lengthy 15-minute scene replicating Milgram's experiment. Peter Gabriel's 1986 album So features the song "We Do What We're Told (Milgram's 37)", based on the experiment and its results. Batch '81 is a 1982 Filipino film that features a scene based on the Milgram experiment. Atrocity is a 2005 film re-enactment of the Milgram experiment. The Heist, a 2006 TV special by Derren Brown, features a reenactment of the Milgram experiment. Dar Williams wrote the song "Buzzer" about the experiment for her 2008 album Promised Land. Fallout: New Vegas, a 2010 video game published by Bethesda Softworks, plays verbal prods spoken by the experimenter inside a death chamber in Vault 11. "Authority" is an episode of Law & Order: Special Victims Unit inspired by the Milgram experiment. Experimenter, a 2015 film about Milgram, by Michael Almereyda, was screened to favorable reactions at the 2015 Sundance Film Festival. Ted satirically depicts and cites the Milgram experiment in one episode, as Ted prods drunk partygoers to celebrate the invasion of Poland. The Milgram Project is a Japanese interactive music project from Deco*27 and Yamanaka Takuya. As the name implies, it is directly inspired by the real-life experiment. See also Argument from authority Authority bias Acali Experiment Banality of evil Belief perseverance Graduated Electronic Decelerator Hofling hospital experiment Human experimentation in the United States Law of Due Obedience Little Eichmanns Moral disengagement My Lai Massacre Ordinary Men (book) Social influence Stanford prison experiment Superior orders The Third Wave (experiment) The Tenth Level (1976 video starring William Shatner) Citations General and cited references Book review of The Man Who Shocked the World Includes an interview with one of Milgram's volunteers, and discusses modern interest in, and scepticism about, the experiment. Further reading External links Milgram S. 
The Milgram Experiment (full documentary film on YouTube). Stanley Milgram Redux, TBIYTB — Description of a 2007 iteration of Milgram's experiment at Yale University, published in The Yale Hippolytic, January 22, 2007. (Internet Archive) MILGRAM, S. Dynamics of obedience. Washington: National Science Foundation, 25 January 1961. (Mimeo) A PowerPoint presentation describing Milgram's experiment Synthesis of book A faithful synthesis of Obedience to Authority – Stanley Milgram Obedience To Authority — A commentary extracted from 50 Psychology Classics (2007) A personal account of a participant in the Milgram obedience experiments Summary and evaluation of the 1963 obedience experiment The Science of Evil from ABC News Primetime The Lucifer Effect: How Good People Turn Evil — Video lecture of Philip Zimbardo talking about the Milgram Experiment. — Article on the 45th anniversary of the Milgram experiment. People 'still willing to torture' — BBC News Beyond the Shock Machine, a radio documentary with the people who took part in the experiment. Includes original audio recordings of the experiment Articles containing video clips Conformity Group processes History of psychology Human subject research in the United States Psychology experiments Research ethics Social influence
Milgram experiment
https://en.wikipedia.org/wiki/European%20Telephony%20Numbering%20Space
With the intent of forming a trans-European numbering plan as an option (or anticipated future migration) for anyone needing a multi-national European telephone presence, the ITU allocated country calling code +388 as a subdivided, catch-all container for such services. This was designated the European Telephony Numbering Space or ETNS. Although some ETNS numbers were assigned, few phone companies supported connecting calls to ETNS. Because of this limited support, ETNS was suspended in 2005 and abolished in 2008. All ETNS numbers were cancelled by the beginning of 2010. The +388 code was scheduled to be reclaimed by the ITU at the end of 2010; as of late 2011 it was still listed by the ITU as "Group of countries, shared code". See also the list of country calling codes. Allocation area Geographically, the +388 "country" code was an overlay on top of all the pre-existing, state-bounded country codes of the countries in Europe. Among "special" country codes, +388 was unique in that it was both supranational and geographically bounded (other special codes, such as +881 and freephone +800, are completely international). Instead of being subdivided geographically as in a typical numbering plan, the ETNS was intended to be subdivided by type of service or customer. It would therefore not have been possible to reverse engineer the location of the owner of an ETNS number from the characteristics of the phone number. The numbers were also not allocated in blocks to individual carrier companies and were therefore intended to be portable. A carrier needed to have ETNS translation capability, or a routing agreement with a carrier that did, in order for its customers to successfully call ETNS numbers. Subdivision Country Groups Specifically, ETNS was intended to be only one of a potentially larger set of European "country groups" participating in a shared regional overlay numbering plan. 
The only country group that comprised the ETNS was given an identification code of 3, which came after the +388 country code. Services After the country group identification code, the number space was divided according to service, indicated by a European Service Identity code. The table lists the service designations (country code, identification code, and ESI code) available under ETNS: European Service Identity codes Public service application: 3883 1 Customer service application: 3883 3 Corporate networks: 3883 5 Personal numbering: 3883 7 See also Telephone numbers in Europe References External links European Radiocommunications Office ARCOME presentation: "ETNS project and corresponding field trial description" World Telephone Numbering Guide's information about European regional numbering Telephone numbers
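The number structure described above (country code 388, country group 3, then a European Service Identity digit) can be expressed as a small lookup. This is a hypothetical illustration of the scheme only; `classify_etns` is an invented helper, not software that ever routed ETNS calls.

```python
# ETNS numbers began with country code 388, followed by country group
# identification code 3, then a European Service Identity (ESI) digit.
ESI_SERVICES = {
    "1": "Public service application",
    "3": "Customer service application",
    "5": "Corporate networks",
    "7": "Personal numbering",
}

def classify_etns(number: str) -> str:
    """Return the ETNS service class for a digits-only number, e.g. '38837...'."""
    if not number.startswith("3883") or len(number) < 5:
        raise ValueError("not an ETNS (country group 3) number")
    esi = number[4]
    if esi not in ESI_SERVICES:
        raise ValueError("unassigned European Service Identity code")
    return ESI_SERVICES[esi]
```

For example, a number beginning 3883 7 would fall under personal numbering, while 3883 5 would indicate a corporate network.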
European Telephony Numbering Space
https://en.wikipedia.org/wiki/Bond%20fluctuation%20model
The BFM (bond fluctuation model or bond fluctuation method) is a lattice model for simulating the conformation and dynamics of polymer systems. Two versions of the BFM are in use: the earlier version was first introduced by I. Carmesin and Kurt Kremer in 1988, and the later version by J. Scott Shaffer in 1994. Conversion between the models is possible. Model Carmesin and Kremer version In this model the monomers are represented by cubes on a regular cubic lattice, with each cube occupying eight lattice positions. Each lattice position can be occupied by only one monomer, in order to model excluded volume. The monomers are connected by a bond vector, which is taken from a set of typically 108 allowed vectors. There are different definitions for this vector set. One example of a bond vector set is made up of the six base vectors (2,0,0), (2,1,0), (2,1,1), (2,2,1), (3,0,0) and (3,1,0), using permutation and sign variation of the three vector components in each direction. The resulting bond lengths are 2, √5, √6, 3 and √10. The combination of bond vector set and monomer shape in this model ensures that polymer chains cannot cross each other, without requiring an explicit test of the local topology. The basic movement of a monomer cube takes place along the lattice axes so that each of the possible bond vectors can be realized. Shaffer's version As in the case of the Carmesin–Kremer BFM, the Shaffer BFM is also constructed on a simple-cubic lattice. However, the lattice points, or vertices of each cube, are the sites that can be occupied by a monomer. Each lattice point can be occupied by one monomer only. Successive monomers along a polymer backbone are connected by bond vectors. Each allowed bond vector must be one of: (a) a cube edge, (b) a face diagonal, or (c) a solid diagonal. The resulting bond lengths are 1, √2 and √3. In addition to the bond length constraint, polymers must not be allowed to cross. This is done most efficiently by the use of a secondary lattice which is twice as fine as the original lattice. 
The secondary lattice tracks the midpoints of the bonds in the system and forbids the overlap of bond midpoints. This effectively disallows polymers from crossing each other. Monte Carlo step In both versions of the BFM, a single attempt to move one monomer consists of the following steps, which are standard for Monte Carlo methods: Select a monomer m and a direction d randomly. Check the list of conditions (see below). If all conditions are fulfilled, perform the move. The conditions to perform a move can be subdivided into mandatory and optional ones. Mandatory conditions for the Carmesin–Kremer BFM: The four lattice sites next to monomer m in the direction d are empty. The move does not lead to bonds that are not contained in the bond vector set. Mandatory conditions for the Shaffer BFM: The lattice site to which the chosen monomer is going to be moved is empty. The move does not lead to bonds that are not contained in the bond vector set. The move does not lead to overlapping of bond midpoints. Optional conditions apply if the move leads to an energetic difference ΔE, for example due to an electric field or an adsorbing force at the walls. In this case a Metropolis algorithm is applied: the Metropolis rate, defined as W = exp(−ΔE/kBT), is compared to a random number r from the interval [0, 1). If the Metropolis rate is smaller than r, the move is rejected; otherwise it is accepted. One Monte Carlo step of the total system is conventionally defined as N attempted monomer moves, where N is the total number of monomers in the system. Notes External links JBFM – a Java applet from the Leibniz Institute of Polymer Research Dresden (Germany) for simulating polymers with the BFM Polymer physics Lattice models Stochastic models
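The move-attempt procedure described above can be sketched in a few lines of Python. This is a minimal illustration of a Shaffer-style move (single lattice-site monomers, bond lengths 1, √2, √3), assuming a set-based occupancy map; the bond-midpoint bookkeeping that enforces non-crossing, and the optional Metropolis energy test, are omitted for brevity, and all names are illustrative rather than from any standard BFM library.

```python
import random

# Allowed squared bond lengths in Shaffer's BFM:
# cube edge (1), face diagonal (2), solid diagonal (3).
ALLOWED_BOND_SQ = {1, 2, 3}

# The six unit moves along the lattice axes.
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def bond_sq(a, b):
    """Squared Euclidean distance between two lattice points."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def attempt_move(chain, occupied, rng=random):
    """Try one Shaffer-BFM move; mutate chain/occupied in place on success.

    chain    -- list of lattice points (tuples) for one polymer
    occupied -- set of all occupied lattice points
    Returns True if the move was accepted.
    """
    i = rng.randrange(len(chain))        # pick a monomer m ...
    d = rng.choice(MOVES)                # ... and a direction d, at random
    new = tuple(c + dc for c, dc in zip(chain[i], d))
    # Mandatory condition 1: the target lattice site must be empty.
    if new in occupied:
        return False
    # Mandatory condition 2: new bonds to chain neighbours must stay allowed.
    for j in (i - 1, i + 1):
        if 0 <= j < len(chain) and bond_sq(new, chain[j]) not in ALLOWED_BOND_SQ:
            return False
    occupied.discard(chain[i])
    occupied.add(new)
    chain[i] = new
    return True
```

Running many such attempts on an initially straight chain keeps every bond within the allowed set while the conformation fluctuates, which is the essential behaviour the model is named for.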
Bond fluctuation model
https://en.wikipedia.org/wiki/Oak%20Ridge%20School%20of%20Reactor%20Technology
Oak Ridge School of Reactor Technology (ORSORT) was the successor of the school known locally as the Clinch College of Nuclear Knowledge, later shortened to Clinch College. ORSORT was authorized and financed by the U.S. government and founded in 1950 by Admiral Hyman G. Rickover and Alvin Weinberg. During its existence, the school was the only educational venue in the U.S. at which a comprehensive twelve-month education and training in either "Reactor Hazards Analysis" or "Reactor Operations" could be obtained, with accompanying certificates. Funding ended and the school was closed in 1965, shortly after authorization was extended to select U.S. universities to develop their own nuclear engineering curricula. Housed at the Oak Ridge National Laboratory, this unique venue and its renowned instructors offered students the highest level of education in the practical applications of atomic energy available at the time, and first-hand exposure to a variety of nuclear reactor designs, including the legendary first graphite reactor, the pool reactor, the high-temperature gas reactor, the molten salt reactor, the fast reactor and the high flux reactor. The school became known, first nationally and eventually worldwide, to U.S. enterprises and to U.S. allies involved in the development of peaceful uses of atomic energy who were interested in educating and training designated scientific and engineering personnel at its unique venue. In 1959, ORSORT accepted its first international enrollments. Applications to enroll required strict clearance from the Atomic Energy Commission. Tuition fees partially offset school operating costs. Courses listed in the 1965 curriculum included Analysis, Chemical Technology, Economics of Nuclear Power, Engineering Science, Experimental Physics, Nuclear Systems Laboratory, Hazards Study, Health Physics, Instrumentation and Controls, Materials, Mathematics, Meteorology, Physics, Reactor Operating Experience and Shielding. 
Scientific and engineering graduates of the one-year program earned certificates of completion and were awarded the degree of Doctor of Pile Engineering (D.O.P.E.). ORSORT turned out up to 100 graduates a year, many of whom became leaders in the nuclear industry, such as former Secretary of Energy James D. Watkins. The total number of ORSORT graduates was 976. In addition to 19 US students from the Atomic Energy Commission, the Oak Ridge National Laboratory and US utilities, the last graduating class of 1965 included engineering and scientific personnel sponsored by their governments in Australia, India, Israel, Japan, the Netherlands, Pakistan, the Philippines and South Africa. References External links Oak Ridge National Laboratory Educational institutions established in 1950 Educational institutions disestablished in 1965 Defunct universities and colleges in Tennessee Nuclear history of the United States Nuclear technology Oak Ridge National Laboratory 1950 establishments in Tennessee 1965 disestablishments in Tennessee
Oak Ridge School of Reactor Technology
https://en.wikipedia.org/wiki/Urmetazoan
The Urmetazoan is the hypothetical last common ancestor of all animals, or metazoans. It is universally accepted to be a multicellular heterotroph — with the novelties of a germline and oogamy, an extracellular matrix (ECM) and basement membrane, cell-cell and cell-ECM adhesions and signaling pathways, collagen IV and fibrillar collagen, different cell types (as well as expanded gene and protein families), spatial regulation and a complex developmental plan, and relegated unicellular stages. Choanoflagellates All animals are posited to have evolved from a flagellated eukaryote. Their closest known living relatives are the choanoflagellates, collared flagellates whose cell morphology is similar to the choanocyte cells of certain sponges. Molecular studies place animals in a supergroup called the opisthokonts, which also includes the choanoflagellates, fungi, and a few small parasitic protists. The name comes from the posterior location of the flagellum in motile cells, such as most animal spermatozoa, whereas other eukaryotes tend to have anterior flagella as well. Hypotheses Several different hypotheses for the animals' last common ancestor have been suggested. The placula hypothesis, proposed by Otto Bütschli, holds that the last common ancestor of animals was an amorphous blob with no symmetry or axis. The center of this blob rose slightly above the silt, forming a hollow that aided feeding on the sea floor underneath. As the cavity grew deeper and deeper, the organisms resembled a thimble, with an inside and an outside. This body shape is found in sponges and cnidaria. This explanation leads to the formation of the bilaterian body plan; the urbilaterian would develop its symmetry when one end of the placula became adapted for forward movement, resulting in left-right symmetry. The planula hypothesis, proposed by Bütschli, suggests that metazoa are derived from planula; that is, the larva of certain cnidaria, or the adult form of the placozoans. 
Under this hypothesis, the larva became sexually mature through paedomorphosis, and could reproduce without passing through a sessile phase. The gastraea hypothesis was proposed by Ernst Haeckel in 1874, shortly after his work on the calcareous sponges. He proposed that this group of sponges is monophyletic with all eumetazoans, including the bilaterians. This suggests that the gastrulation and the gastrula stage are universal for eumetazoans. It has been perceived as problematic that gastrulation by invagination is by no means universal among eumetazoans. Only recently has an invagination been confirmed in a Calcarea sponge, albeit too early to form a remaining inner space (archenteron). The bilaterogastraea hypothesis was developed by Gösta Jägersten as an adaptation of Ernst Haeckel's Gastraea hypothesis. He proposed that the Bilaterogastraea have a two-stage life cycle, with a pelagic juvenile and a benthic adult stage. The invagination of the original gastrula stage he saw as bilaterally symmetric rather than radially symmetric. The phagocytella hypothesis was proposed by Élie Metchnikoff. See also References External links Biota (Taxonomicon) Life (Systema Naturae 2000) Vitae (BioLib) Wikispecies – a free directory of life Evolutionary biology Evolution of animals Most recent common ancestors
Urmetazoan
https://en.wikipedia.org/wiki/Formosat-1
Formosat-1 (, formerly known as ROCSAT-1) was an Earth observation satellite operated by the National Space Program Office (NSPO; now the Taiwan Space Agency) of the Republic of China (Taiwan) to conduct observations of the ionosphere and oceans. The spacecraft and its instrumentation were developed jointly by NSPO and TRW using TRW's Lightsat bus, and was launched from Cape Canaveral Air Force Station, US, by Lockheed Martin on January 27, 1999. Formosat-1 provided 5½ years of operational service. The spacecraft ended its mission on June 17, 2004 and was decommissioned on July 16, 2004. Technical details Spacecraft Weight: 401 kg Shape: Hexagonal Dimensions Height: 2.1 m Diameter: 1.1 m Solar arrays: Two, 1.16 x 2.46 m Electrical power: 450 watts Instrumentation Experimental Communication Payload (ECP) Ionosphere Plasma Electrodynamics Instrument (IPEI) Ocean Color Imager (OCI) Orbit Altitude: 600 km Type: Circular Inclination: 35 degrees See also FORMOSAT-2 FORMOSAT-3/COSMIC References External links ROCSAT-1 at GlobalSecurity Earth observation satellites of Taiwan Spacecraft launched in 1999 First artificial satellites of a country
Formosat-1
https://en.wikipedia.org/wiki/Gulf%20of%20St.%20Lawrence%20lowland%20forests
The Gulf of St. Lawrence lowland forests are a temperate broadleaf and mixed forest ecoregion of Eastern Canada, as defined by the World Wildlife Fund (WWF) categorization system. Setting Located on the Gulf of Saint Lawrence, the world's largest estuary, this ecoregion covers all of Prince Edward Island, the Îles-de-la-Madeleine of Quebec, most of east-central New Brunswick, and the Annapolis Valley, Minas Basin and Northumberland Strait coast of Nova Scotia. The area has a coastal climate of warm summers and cold, snowy winters, with an average annual temperature of around 5 °C rising to 15 °C in summer; the coast is warmer than the islands or the sheltered inland valleys. Flora The colder climate allows more hardwood trees to grow in the Gulf of St. Lawrence than in most of this part of northeast North America. Trees of the region include eastern hemlock (Tsuga canadensis), balsam fir (Abies balsamea), American elm (Ulmus americana), black ash (Fraxinus nigra), eastern white pine (Pinus strobus), red maple (Acer rubrum), northern red oak (Quercus rubra), black spruce (Picea mariana), red spruce (Picea rubens) and white spruce (Picea glauca). Fauna The forests are home to a variety of wildlife including American black bear (Ursus americanus), moose (Alces alces), white-tailed deer (Odocoileus virginianus), red fox (Vulpes vulpes), snowshoe hare (Lepus americanus), North American porcupine (Erithyzon dorsatum), fisher (Martes pennanti), North American beaver (Castor canadensis), bobcat (Lynx rufus), American marten (Martes americana), raccoon (Procyon lotor) and muskrat (Ondatra zibethica). The area is habitat for maritime ringlet butterflies (Coenonympha nipisiquit) and other invertebrates. Birds include many seabirds, a large colony of great blue heron (Ardea herodias), the largest remaining population of the endangered piping plover, and one of the largest colonies of double-crested cormorant (Phalacrocorax auritus) in the world. 
Threats and preservation Most of this ecoregion has been altered by logging and clearance for agriculture, with only 3% of the original habitat remaining, and even that is highly fragmented. The only large block of intact habitat remains in the area around Kouchibouguac National Park in New Brunswick, although even here logging is ongoing. See also List of ecoregions in Canada (WWF) References External links Central U.S. hardwood forests images at bioimages.vanderbilt.edu Biogeography Ecoregions of Canada
Gulf of St. Lawrence lowland forests
Biology
601
33,857,490
https://en.wikipedia.org/wiki/White-faced%20plover
The white-faced plover (Anarhynchus dealbatus) is a small shorebird predominantly found along the coastal shores of subtropical and tropical eastern Asia. Initially described by British ornithologist Robert Swinhoe, the bird resembles the east Asian subspecies of the Kentish plover (Anarhynchus a. nihonensis), with which it has been much confused and of which it has sometimes been considered a subspecies. Taxonomy The white-faced plover was first described in 1870 by the English naturalist Robert Swinhoe. The type specimen came from the island of Formosa (Taiwan) and he gave it the name Aegialites dealbatus. Since then the bird has been the subject of much debate and has variously been classified as being conspecific with Charadrius marginatus, Charadrius alexandrinus, Charadrius nivosus, Charadrius javanicus and Charadrius ruficapillus. Some authors consider it to be a subspecies of C. alexandrinus while others give it full species status as C. dealbatus. However, the white-faced plover is now acknowledged as a distinct species by prominent international checklists, in line with recommendations based on recent genetic, ecological, and demographic findings. Description The white-faced plover grows to a length of about . It has a rounded head with a white fore-crown and a white supercilium. The crown is pale rufous-brown; the upper parts are pale brownish-grey. The hind collar, throat and underparts are white. The beak and legs are dark and the tail is short. Compared to the rather similar Kentish plover, it has a thicker, blunter beak, white lores, a paler crown and upperparts, less black on the lateral breast patches and a larger white wingbar. Distribution and habitat This bird is found along a wide seaboard area of southern China and adjacent northern Vietnam; its wintering range extends south across eastern Indochina towards Sumatra. It typically inhabits sandy beaches, mudflats and saltpans, and outside the breeding season visits reclaimed areas. 
Ecology The diet of this bird has been little studied but is presumed to be similar to that of the Kentish plover, which feeds on small invertebrates such as insects and their larvae, spiders, molluscs, crustaceans and marine worms. It feeds on the foreshore, searching visually for prey and then dashing forward to seize it, or probing the substrate with its beak. Its breeding habits are not known. References External links White-Faced Plover white-faced plover Birds of South China Fauna of Vietnam white-faced plover Controversial bird taxa Taxa named by Robert Swinhoe
White-faced plover
Biology
549
39,327,698
https://en.wikipedia.org/wiki/Sexual%20opportunism
Sexual opportunism is the selfish pursuit of sexual opportunities for one's own sake when they arise, often with the negative moral connotation that it in some way "takes advantage" of, "makes use" of, or "exploits" other persons for sexual purposes. Sexual opportunism is sometimes also defined as the use of sexual favours for selfish purposes unrelated to the sexual activity, in which case taking a sexual opportunity is merely the means to achieve a quite different purpose, for example, to advance one's career or obtain status or money. This concept is related to quid pro quo sexual harassment, which is defined by Cornell Law School as "sexual harassment in which a boss conveys to an employee that he or she will base an employment decision, e.g. whether to hire, promote, or fire that employee, on the employee's satisfaction of sexual demand. For example, it is quid pro quo sexual harassment for a boss to offer a raise in exchange for sex." A study of women's fertile-phase sexuality found that, compared to the luteal phase, women in the fertile phase may show more interest in sexual opportunism, specifically meaning that they may be willing to engage in and have interest in sex with attractive men, even if they do not know each other well. See also References Sexuality and society Opportunism
Sexual opportunism
Biology
286
45,048,689
https://en.wikipedia.org/wiki/Pan%20Africa%20Chemistry%20Network
The Pan Africa Chemistry Network (PACN) connects chemists across Africa. It was launched in London on 21 November 2007 and in Nairobi on 27 May 2008 by the Royal Society of Chemistry. The PACN works to connect chemists across Africa and has five centres of excellence in analytical chemistry in Kenya, Ethiopia, Ghana and Nigeria. The aim of the PACN is: In partnership with Syngenta, which donated £1 million over five years, three PACN Centres of Excellence in Analytical Chemistry were established in Kenya, Ghana and Ethiopia. Since December 2011, Procter & Gamble has been working with the PACN and leading scientists and students to exchange knowledge, enhance skills and generate opportunities for innovation in the areas of hygiene, health and waste management. A Collaboration Lab was established at the University of Lagos in Nigeria, which includes the provision of analytical equipment and internships for Nigerian scientists to apply their knowledge to real-life industry challenges. The PACN organises a number of events including an annual congress, GC-MS training and scientific symposia. References Chemistry societies Organizations established in 2007 Royal Society of Chemistry Pan-African organizations
Pan Africa Chemistry Network
Chemistry
225
5,690,654
https://en.wikipedia.org/wiki/Wheel-barrowing
Wheel-barrowing is a problem that may occur in an aeroplane with a tricycle gear configuration during takeoff or landing. As the aeroplane gains speed during takeoff, the wing generates an increasing amount of lift, although not yet enough to raise the aeroplane off the ground. The lift reduces the weight supported by the aeroplane's main wheels, and this reduces the main wheels' contribution to directional stability, allowing the nose wheel to destabilise the aeroplane's direction along the ground. This form of wheel-barrowing is easily avoided by the pilot applying back-pressure to the elevator control during the takeoff roll to reduce the weight supported by the nose wheel. Depending on the severity of the wheel-barrowing, damage to the aircraft can be quite extensive: The propeller of a single-engine airplane may strike the ground, damaging it and the engine. A wing can be damaged by striking the ground as the aircraft pivots over the nose-wheel and one main wheel. Wheel-barrowing may also occur in a tricycle-gear aircraft when a turn is too sharp for the speed of the aircraft on the ground – much like a child on a tricycle taking too sharp a turn. The problem is exacerbated when brakes are applied during the turn. See also Ground effect (cars) References Aerodynamics Aviation risks
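The unloading of the wheels described above can be sketched with the standard lift equation, L = ½ρv²SC_L. All numbers below are hypothetical values for a small single-engine trainer, chosen only to illustrate how wheel load falls as ground speed rises; they are not from the article.

```python
# Illustrative sketch (hypothetical numbers): aerodynamic lift progressively
# unloads the landing gear during the takeoff roll.

RHO = 1.225           # air density at sea level, kg/m^3
S = 16.2              # wing area, m^2 (hypothetical)
C_L = 1.2             # lift coefficient in takeoff configuration (hypothetical)
WEIGHT = 1100 * 9.81  # aircraft weight, N (hypothetical 1100 kg aircraft)

def weight_on_wheels(speed_ms: float) -> float:
    """Weight carried by the landing gear (N) at a given ground speed (m/s)."""
    lift = 0.5 * RHO * speed_ms ** 2 * S * C_L  # standard lift equation
    return max(WEIGHT - lift, 0.0)              # cannot go below zero

if __name__ == "__main__":
    for v in (0, 10, 20, 30):
        print(f"{v:2d} m/s -> {weight_on_wheels(v):8.0f} N on wheels")
```

As the wheel load shrinks, so does the gear's grip on the runway, which is why back-pressure on the elevator (shifting load off the nose wheel) prevents the instability.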
Wheel-barrowing
Chemistry,Engineering
269
64,405,404
https://en.wikipedia.org/wiki/Sabb%20Motor
Sabb Motor is a Norwegian maker of small marine diesel engines, mostly single-cylinder or twin-cylinder units. The firm was established as Damsgaard Motorfabrikk by two brothers, Alf and Håkon Søyland, in 1925. The firm started building engines to meet the demand of fishermen who wanted simple, robust and reliable power for their boats. (The word 'Sabb' means toughness and reliability.) History The brothers' cottage industry began by creating a 3 HP hot-bulb engine. This was followed by a larger 7 HP version which tended to suffer broken crankshafts, but the firm was able to solve the problem and re-launch its engines under the name Ny-Sabb (New Sabb). By 1975, Sabb Motor was producing 3,200 engines a year of between 8 and 30 bhp. Facing market competition, the firm concentrated on providing 30 bhp engines for ships' lifeboats, a decision which increased worldwide demand. The UK's distributor of Sabb engines is Sleeman & Hawken Ltd. More recently, Sabb has established a link with Mitsubishi. In 2006 Sabb Motor AS was bought by Frydenbø Industri and renamed Frydenbø Sabb Motor AS. Sabb engines Sabb engines are rugged and simple; for example, some have "splash lubrication", which requires no oil pump or filter. (Splash lubrication is an antique system whereby "spades" on the big-end caps dip into the oil sump and splash the lubricant upwards; clearly it is a system that can work only on very low-revving engines, otherwise the sump oil would become a frothy mousse.) Sabb engines include Types H, G, GA, 2H, 2G, & 2J. External links Narrow boat "Bullfinch" with a running Sabb 2G engine References Marine engines Engine manufacturers of Norway Norwegian companies established in 1925
Sabb Motor
Technology
401
32,113,605
https://en.wikipedia.org/wiki/Caf1%20capsule%20antigen
In molecular biology, the Caf1 capsule antigen proteins are a family of F1 capsule antigens (Caf1) synthesised by Yersinia bacteria. They adopt a structure consisting of seven strands arranged in two beta-sheets, in a Greek-key topology, and mediate targeting of the bacterium to sites of infection. References Protein domains
Caf1 capsule antigen
Biology
68
7,076,369
https://en.wikipedia.org/wiki/Sonido%2013
Sonido 13 is a theory of microtonal music created by the Mexican composer Julián Carrillo around 1900 and described by Nicolas Slonimsky as "the field of sounds smaller than the twelve semitones of the tempered scale." Carrillo developed this theory in 1895 while he was experimenting with his violin. Though he became internationally recognized for his system of notation, it was never widely applied. His first composition in demonstration of his theories was Preludio a Colón (1922). Western musical convention up to this day divides an octave into twelve different pitches that can be arranged or tempered in different intervals. Carrillo termed his new system Sonido 13, which is Spanish for "Thirteenth Sound" or "Sound 13", because it enabled musicians to go beyond the twelve notes that comprise an octave in conventional Western music. Julián Carrillo wrote: "The thirteenth sound will be the beginning of the end and the point of departure of a new musical generation which will transform everything." History Early life Carrillo attended the National Conservatory of Music in Mexico City, where he studied violin, composition, physics, acoustics, and mathematics. The laws that define musical intervals instantly amazed Carrillo, which led him to conduct experiments on his violin. He began analyzing the way the pitch of a string changed depending on the finger position, concluding that there had to be a way to split the string into an infinite number of parts. One day, Carrillo was able to divide the fourth string of his violin with a razor into 16 parts in the interval between the notes G and A, thus creating 16 unique sounds. This event was the beginning of Sonido 13 and led Carrillo to study more about physics and the nature of intervals. Professional life Carrillo was "closely associated with the Díaz regime" and preferred neo-classicism to nationalism. References Further reading Mena, María Cristina (1914). 
"Julian Carrillo: The Herald of a Musical Monroe Doctrine", The Century Illustrated Monthly Magazine, vol. 89. Josiah Gilbert Holland and Richard Watson Gilder, eds. Digitized 2008. External links Equal temperaments Musical notation Microtonality
Sonido 13
Physics
430
33,536,361
https://en.wikipedia.org/wiki/C30H52
{{DISPLAYTITLE:C30H52}} The molecular formula C30H52 may refer to: Arborane Cycloartane Dammarene Fernane Gammacerane Gorgostane Hopane Lupane (compound) Neohopane Oleanane Ursane
C30H52
Chemistry
61
6,239,848
https://en.wikipedia.org/wiki/Template%20Numerical%20Toolkit
The Template Numerical Toolkit (or TNT) is a software library for manipulating vectors and matrices in C++ created by the U.S. National Institute of Standards and Technology. TNT provides the fundamental linear algebra operations (for example, matrix multiplication). TNT is analogous to the BLAS library used by LAPACK. Higher level algorithms, such as LU decomposition and singular value decomposition, are provided by JAMA, also developed at NIST, which uses TNT. The major features of TNT are: All classes are template classes and therefore work with float, double, or other user-defined number types. Matrices can be stored in row-major order or column-major order for Fortran compatibility. The library is simply a collection of header files, and therefore does not need to be independently compiled. Some support for sparse matrix storage is provided. The source code is in the public domain. TNT is mature, and NIST classifies its development status as active maintenance. The principal designer of TNT is Roldan Pozo. See also List of numerical libraries External links Template Numerical Toolkit homepage at NIST C++ numerical libraries Free mathematics software Free software programmed in C++ Public-domain software with source code
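TNT's option to store matrices in row-major or column-major order mirrors the C and Fortran memory conventions. The following plain-Python sketch (illustrative only, not TNT code) shows what the two layouts mean for the flat order in which elements sit in memory:

```python
# Row-major ("C") vs column-major ("F", Fortran-compatible) flattening of a
# 2-D matrix. TNT exposes the same choice for interoperability with Fortran
# libraries; this sketch just demonstrates the two element orders.

def flatten(matrix, order="C"):
    """Flatten a 2-D list in "C" (row-major) or "F" (column-major) order."""
    rows, cols = len(matrix), len(matrix[0])
    if order == "C":   # row-major: each row is contiguous (C convention)
        return [matrix[r][c] for r in range(rows) for c in range(cols)]
    if order == "F":   # column-major: each column is contiguous (Fortran)
        return [matrix[r][c] for c in range(cols) for r in range(rows)]
    raise ValueError("order must be 'C' or 'F'")

m = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
# flatten(m, "C") -> [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
# flatten(m, "F") -> [1.0, 4.0, 2.0, 5.0, 3.0, 6.0]
```

The Fortran layout is what lets a library like TNT hand its data directly to BLAS/LAPACK-style routines without copying.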
Template Numerical Toolkit
Mathematics
247
32,115,812
https://en.wikipedia.org/wiki/Cloud%20Data%20Management%20Interface
ISO/IEC 17826 Information technology — Cloud Data Management Interface (CDMI) Version 2.0.0 is an international standard that specifies a protocol for self-provisioning, administering and managing access to data stored in cloud storage, object storage, storage area network and network attached storage systems. The CDMI standard is developed and maintained by the Storage Networking Industry Association, which makes a publicly accessible version of the specification available. CDMI defines new resource representations to enable standardized management of any URI-accessible data, and defines RESTful HTTP operations using these representations to discover the capabilities of the storage system, discover stored data, access and update management metadata, specify data storage protocols (such as iSCSI and NFS) through which the stored data is accessed, and provide cross-system and cross-cloud import and export in order to enable data portability. Management functions enabled by CDMI include managing data ownership, identity mapping, access controls and user-specified metadata, and the ability to declaratively specify desired data protection, data retention, constraints on geographic placement, desired quality of service, data versioning and security requirements. CDMI also defines utility services to facilitate data management, such as the ability to query data matching specific criteria, and includes extensions to perform bulk updates using CDMI Jobs. Capabilities Compliant implementations must provide access to a set of configuration parameters known as capabilities. These are either boolean values that represent whether or not a system supports features such as queues, export via other protocols, path-based storage and so on, or numeric values expressing system limits, such as how much metadata may be placed on an object. 
As a minimal compliant implementation can be quite small, with few features, clients need to check the cloud storage system for a capability before attempting to use the functionality it represents. Resource allocation assignments limited to the data management interface protocols must possess access bypass capabilities which extend beyond the layered framework. This integral function is vital to the prevention of transport layer session hijacking by unauthorized entities which may circumvent standard interfacing security parameters. Containers A CDMI client may access objects, including containers, by either name or object id (OID), assuming the CDMI server supports both methods. When storing objects by name, it is natural to use nested named containers; the resulting structure corresponds exactly to a traditional filesystem directory structure. Objects Objects are similar to files in a traditional file system, but are enhanced with an increased amount and capacity for metadata. As with containers, they may be accessed by either name or OID. When accessed by name, clients use URLs that contain the full pathname of objects to create, read, update and delete them. When accessed by OID, the URL specifies an OID string in the cdmi-objectid container; this container presents a flat name space conformant with standard object storage system semantics. Subject to system limits, objects may be of any size or type and have arbitrary user-supplied metadata attached to them. Systems that support query allow arbitrary queries to be run against the metadata. Domains, Users and Groups CDMI supports the concept of a domain, similar in concept to a domain in the Windows Active Directory model. Users and groups created in a domain share a common administrative database and are known to each other on a "first name" basis, i.e. without reference to any other domain or system. Domains also function as containers for usage and billing summary data. 
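The by-name object operations described above are ordinary RESTful PUT/GET requests carrying CDMI media types and a version header. The sketch below only composes such a request as a data structure; the endpoint URL, the version string, and the metadata keys are hypothetical placeholders, and a real client should check them against the server's advertised capabilities.

```python
# Sketch (not a full CDMI client): composing a "create object by name" PUT
# with a CDMI-style JSON body carrying mimetype, user metadata, and value.
import json

CDMI_VERSION = "1.0.2"  # hypothetical; match the version your server supports

def cdmi_put_object(path: str, value: str, user_metadata: dict) -> dict:
    """Return the method, headers, and body for a CDMI object PUT by name."""
    return {
        "method": "PUT",
        "url": f"https://cloud.example.com{path}",  # hypothetical endpoint
        "headers": {
            "Content-Type": "application/cdmi-object",
            "Accept": "application/cdmi-object",
            "X-CDMI-Specification-Version": CDMI_VERSION,
        },
        "body": json.dumps({
            "mimetype": "text/plain",
            "metadata": user_metadata,  # arbitrary user-supplied metadata
            "value": value,
        }),
    }

request = cdmi_put_object("/mycontainer/hello.txt", "hello", {"colour": "blue"})
```

The full pathname in the URL reflects the nested-container naming scheme; accessing the same object by OID would instead target the flat cdmi-objectid container.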
Access Control CDMI exactly follows the ACL and ACE model used for file authorization operations by NFSv4. This makes it also compatible with Microsoft Windows systems. Metadata CDMI draws much of its metadata model from the XAM specification. Objects and containers have "storage system metadata", "data system metadata" and arbitrary user specified metadata, in addition to the metadata maintained by an ordinary filesystem (atime etc.). Queries CDMI specifies a way for systems to support arbitrary queries against CDMI containers, with a rich set of comparison operators, including support for regular expressions. Queues CDMI supports the concept of persistent FIFO (first-in, first-out) queues. These are useful for job scheduling, order processing and other tasks in which lists of things must be processed in order. Compliance Both retention intervals and retention holds are supported by CDMI. A retention interval consists of a start time and a retention period. During this time interval, objects are preserved as immutable and may not be deleted. A retention hold is usually placed on an object because of judicial action and has the same effect: objects may not be changed nor deleted until all holds placed on them are removed. Billing Summary information suitable for billing clients for on-demand services can be obtained by authorized users from systems that support it. Serialization Serialization of objects and containers allows export of all data and metadata on a system and importation of that data into another cloud system. Foreign protocols CDMI supports export of containers as NFS or CIFS shares. Clients that mount these shares see the container hierarchy as an ordinary filesystem directory hierarchy, and the objects in the containers as normal files. Metadata outside of ordinary filesystem metadata may or may not be exposed. Provisioning of iSCSI LUNs is also supported. 
Client SDKs CDMI Reference Implementation Droplet libcdmi-java libcdmi-python .NET SDK See also Comparison of CDMI server implementations References Cloud storage Data management
Cloud Data Management Interface
Technology
1,110
14,673,841
https://en.wikipedia.org/wiki/Journal%20of%20Computational%20Biology
The Journal of Computational Biology is a monthly peer-reviewed scientific journal covering computational biology and bioinformatics. It was established in 1994 and is published by Mary Ann Liebert. The editors-in-chief are Sorin Istrail (Brown University) and Michael S. Waterman (University of Southern California). According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.549. Since 1997, authors of accepted proceedings papers at the Research in Computational Molecular Biology conference have been invited to submit a revised version to a special issue of the journal. References External links Mary Ann Liebert academic journals Academic journals established in 1994 English-language journals Hybrid open access journals Bioinformatics and computational biology journals Monthly journals
Journal of Computational Biology
Chemistry,Biology
149
1,871,327
https://en.wikipedia.org/wiki/Information%20continuum
The term information continuum is used to describe the whole set of all information, in connection with information management. The term may be used in reference to the information or the information infrastructure of a people, a species, a scientific subject or an institution. In biological anthropology, the term information continuum is related to the study of social information transfer and the evolution of communication in animals. The Internet is sometimes called an information continuum. References External links Monash University - Faculty of Information Technology Information theory Information management
Information continuum
Mathematics,Technology,Engineering
100
64,123,397
https://en.wikipedia.org/wiki/Charge-pump%20phase-locked%20loop
Charge-pump phase-locked loop (CP-PLL) is a modification of phase-locked loops with phase-frequency detectors and square waveform signals. A CP-PLL allows a quick lock of the phase of the incoming signal, achieving a low steady-state phase error. Phase-frequency detector (PFD) The phase-frequency detector (PFD) is triggered by the trailing edges of the reference (Ref) and controlled (VCO) signals. The output signal of the PFD can have only three states: 0, , and . A trailing edge of the reference signal forces the PFD to switch to a higher state, unless it is already in the state . A trailing edge of the VCO signal forces the PFD to switch to a lower state, unless it is already in the state . If both trailing edges happen at the same time, then the PFD switches to zero. Mathematical models of CP-PLL The first linear mathematical model of the second-order CP-PLL was suggested by F. Gardner in 1980. A nonlinear model without the VCO overload was suggested by M. van Paemel in 1994 and then refined by N. Kuznetsov et al. in 2019. The closed-form mathematical model of the CP-PLL taking into account the VCO overload is derived in. These mathematical models of the CP-PLL allow one to obtain analytical estimates of the hold-in range (the maximum range of the input signal period such that there exists a locked state at which the VCO is not overloaded) and the pull-in range (the maximum range of the input signal period within the hold-in range such that for any initial state the CP-PLL acquires a locked state). Continuous time linear model of the second order CP-PLL and Gardner's conjecture Gardner's analysis is based on the following approximation: the time interval on which the PFD has a non-zero state on each period of the reference signal is Then the averaged output of the charge-pump PFD is with the corresponding transfer function Using the filter transfer function and the VCO transfer function one gets Gardner's linear approximated average model of the second-order CP-PLL. In 1980, F. 
Gardner, based on the above reasoning, conjectured that the transient response of practical charge-pump PLLs can be expected to be nearly the same as the response of the equivalent classical PLL (Gardner's conjecture on CP-PLL). Following Gardner's results, by analogy with the Egan conjecture on the pull-in range of type 2 APLL, Amr M. Fahim conjectured in his book that in order to have an infinite pull-in (capture) range, an active filter must be used for the loop filter in a CP-PLL (Fahim-Egan's conjecture on the pull-in range of type II CP-PLL). Continuous time nonlinear model of the second order CP-PLL Without loss of generality it is supposed that trailing edges of the VCO and Ref signals occur when the corresponding phase reaches an integer number. Let the time instant of the first trailing edge of the Ref signal be defined as . The PFD state is determined by the PFD initial state , the initial phase shifts of the VCO and Ref signals. The relationship between the input current and the output voltage for a proportionally integrating (perfect PI) filter based on a resistor and capacitor is as follows where is a resistance, is a capacitance, and is a capacitor charge. The control signal adjusts the VCO frequency: where is the VCO free-running (quiescent) frequency (i.e. for ), is the VCO gain (sensitivity), and is the VCO phase. Finally, the continuous-time nonlinear mathematical model of the CP-PLL is as follows with the following discontinuous piece-wise constant nonlinearity and the initial conditions . This model is a nonlinear, non-autonomous, discontinuous, switching system. Discrete time nonlinear model of the second-order CP-PLL The reference signal frequency is assumed to be constant: where , and are the period, frequency and phase of the reference signal. Let . Denote by the first instant of time such that the PFD output becomes zero (if , then ) and by the first trailing edge of the VCO or Ref. Further, the corresponding increasing sequences and for are defined. 
Let . Then for the is a non-zero constant (). Denote by the PFD pulse width (the length of the time interval where the PFD output is a non-zero constant), multiplied by the sign of the PFD output: i.e. for and for . If the VCO trailing edge hits before the Ref trailing edge, then , and in the opposite case we have , i.e. shows how one signal lags behind another. Zero output of the PFD on the interval : for . The transformation of variables to allows one to reduce the number of parameters to two: Here is a normalized phase shift and is the ratio of the VCO frequency to the reference frequency . Finally, the discrete-time model of the second-order CP-PLL without the VCO overload is where This discrete-time model has only one steady state at and allows one to estimate the hold-in and pull-in ranges. If the VCO is overloaded, i.e. is zero, or what is the same: or , then the additional cases of the CP-PLL dynamics have to be taken into account. For any parameters the VCO overload may occur for a sufficiently large frequency difference between the VCO and reference signals. In practice the VCO overload should be avoided. Nonlinear models of high-order CP-PLL Derivation of nonlinear mathematical models of high-order CP-PLLs leads to transcendental phase equations that cannot be solved analytically and require numerical approaches such as the classical fixed-point method or the Newton-Raphson approach. References Electronic circuits
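The tri-state PFD behaviour described at the start of this article is simple enough to sketch directly. In this illustrative model the two non-zero output states (the pump-current levels) are represented as +1 and -1; a reference trailing edge raises the state (saturating at the top), a VCO trailing edge lowers it (saturating at the bottom), and coincident edges reset it to zero:

```python
# Minimal sketch of the tri-state phase-frequency detector (PFD).
# States +1 / 0 / -1 stand in for the three pump output levels.

def pfd_step(state: int, ref_edge: bool, vco_edge: bool) -> int:
    """Advance the PFD state for one set of (possibly simultaneous) edges."""
    if ref_edge and vco_edge:
        return 0                     # simultaneous edges reset the PFD
    if ref_edge:
        return min(state + 1, 1)     # Ref edge: go up unless already at top
    if vco_edge:
        return max(state - 1, -1)    # VCO edge: go down unless already at bottom
    return state                     # no edge: state is held

# Example: reference leads the VCO, so the PFD sits in the "up" state.
s = 0
s = pfd_step(s, ref_edge=True, vco_edge=False)   # -> +1 (pump up)
s = pfd_step(s, ref_edge=False, vco_edge=True)   # -> 0 (VCO edge arrives)
```

Averaged over a reference period, the fraction of time spent in a non-zero state is what Gardner's linearized model approximates.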
Charge-pump phase-locked loop
Engineering
1,242
8,036,817
https://en.wikipedia.org/wiki/Fluidyne%20engine
A Fluidyne engine is an alpha- or gamma-type Stirling engine with one or more liquid pistons. It contains a working gas (often air), and either two liquid pistons or one liquid piston and a displacer. The engine was invented in 1969 and patented in 1973 by the United Kingdom Atomic Energy Authority. Engine operation Working gas in the engine is heated, and this causes it to expand and push on the water column. This expansion cools the gas, which then contracts, at the same time being pushed back by the weight of the displaced water column. The cycle then repeats. The U-tube version has no moving parts in the engine other than the water and air, although there are two check valves in the pump. This engine operates at a natural resonance cycle that is "tuned" by adjusting the geometry, generally with a "tuning tube" of water. Engine as a pump In the classic configuration, the work produced via the water pistons is integrated with a water pump. The simple pump is external to the engine, and consists of two check valves, one on the intake and one on the outlet. In the engine, the loop of oscillating liquid can be thought of as acting as a displacer piston. The liquid in the single tube extending to the pump acts as the power piston. Traditionally the pump is open to the atmosphere, and the hydraulic head is small, so that the absolute engine pressure is close to atmospheric pressure. Demonstration video The videos show the operation of a U-tube type model Fluidyne engine. The hot pipe is heated by a heat gun, and the water column oscillation builds up to a steady-state level. The second video shows a detail of the actual water displacement. See also Humphrey Pump References Further reading External links Fluidyne Engines (link broken - 2023-June-04) Thermofluidics Engine Article by Alejandro Romanelli Stirling engines Piston engines Pistons Articles containing video clips
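The "natural resonance" that the geometry is tuned to can be illustrated with the textbook result for an ideal frictionless U-tube: a liquid column of total length L oscillates at ω = √(2g/L). This is a classical first approximation, not a Fluidyne design formula from the article, since a real engine's frequency also depends on the gas spring and damping:

```python
# Idealized U-tube oscillation frequency: omega = sqrt(2*g/L).
# Classical approximation only; real Fluidyne tuning also involves the gas.
import math

G = 9.81  # gravitational acceleration, m/s^2

def u_tube_frequency_hz(column_length_m: float) -> float:
    """Natural frequency (Hz) of an ideal liquid column of total length L."""
    omega = math.sqrt(2 * G / column_length_m)  # angular frequency, rad/s
    return omega / (2 * math.pi)

# A half-metre water column oscillates at roughly 1 Hz:
f = u_tube_frequency_hz(0.5)
```

Longer columns oscillate more slowly, which is why adding a "tuning tube" of water changes the engine's operating frequency.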
Fluidyne engine
Technology
393
76,412,258
https://en.wikipedia.org/wiki/MoonBounce
MoonBounce is a UEFI firmware-based rootkit linked to the Chinese APT41 hacker group. It was discovered by researchers at Kaspersky in 2021. It can disable Windows security tools and bypass User Account Control, and the available data show that the attacks are highly targeted. MoonBounce is a landmark in the evolution of UEFI rootkits, being only the third known UEFI malware bootkit to be found. Infection Because Kaspersky detected the firmware rootkit in only one case, little has been revealed about its infection method; it is believed to have been installed remotely. The implant lives in the SPI flash memory on the motherboard, inside CORE_DXE, a firmware component used during the first phases of the UEFI boot sequence. It hooks EFI Boot Services functions and injects further malware into a svchost.exe process during boot. Because it resides in the SPI flash rather than on the hard drive and operates in memory only, it leaves no trace on the HDD. References Malware toolkits Malware Firmware Rootkits
MoonBounce
Technology
236
25,923,724
https://en.wikipedia.org/wiki/Fort%20de%20V%C3%A9zelois
Fort de Vézelois, also known as Fort Ordener, was built between 1883 and 1886 near Vézelois, to the southeast of Belfort in northeastern France. It is part of the second ring of fortifications around the city of Belfort. This set of forts was built as part of the Séré de Rivières system and incorporated improvements to deal with the increasing efficacy of artillery in the late 19th century. The fort was formally named after French General Michel Ordener. The Fort de Vézelois is similar to the Fort de Bessoncourt and was designed to support Bessoncourt and the Fort du Bois d'Oye, covering the road from Basel and the Mulhouse railway line. It was garrisoned by between 500 and 600 men. The fort received concrete cover in 1888–89, and its artillery was dispersed to batteries outside the fort. In 1909 the caponiers were replaced by counterscarps. Parapets and a subterranean shelter were provided for infantry, while a casemate, two machine gun turrets and a 75mm gun turret were added. From 1893 the fort was linked to other forts around Belfort via the Chemins de fer du Territoire de Belfort strategic railroad. In 1940 the fort was manned by the 8th Battery of the 159th Position Artillery Regiment (RAP), part of the fortified region of Belfort under the French 8th Army, Army Group 3. From 16 March 1940 the RF Belfort became the 44th Fortress Corps (CAF). After the Second World War the fort was used by the French Army for ammunition storage until the 1990s. The fort is now owned by the Commune of Vézelois and is in the care of an association for its restoration. The fort may be visited. 
See also Fortified region of Belfort References External links Fort de Vézelois official site Fort de Vézelois (90) at Chemins de mémoire Fort de Vézelois at Fortiff' Séré Fortifications of Belfort Séré de Rivières system Fortified region of Belfort Tourist attractions in the Territoire de Belfort Military installations established in 1886
Fort de Vézelois
Engineering
446
2,905,576
https://en.wikipedia.org/wiki/Journal%20of%20Biogeography
The Journal of Biogeography is a peer-reviewed scientific journal in biogeography that was established in 1974. It covers aspects of spatial, ecological, and historical biogeography. The founding editor-in-chief was David Watts, followed by John Flenley (1936–2018), Philip Stott (1987–2004), Robert J. Whittaker (2004–2015), and Peter Linder (University of Zurich; 2015–2019). As of early 2023, the editor-in-chief was Michael N. Dawson (University of California, Merced). In 2023, most of the editors, including the editor-in-chief, resigned in protest that the publisher "seemed to emphasize cost-cutting and margins over good editorial practice". Abstracting and indexing The journal is abstracted and indexed in: References External links Ecology journals English-language journals Geography journals Monthly journals Wiley-Blackwell academic journals Academic journals established in 1974
Journal of Biogeography
Environmental_science
194
49,989,060
https://en.wikipedia.org/wiki/Prophage%20Hp1%20holin%20family
The Prophage Hp1 Hol (Hp1Hol) Family (TC# 1.E.46) consists of a single putative holin (TC# 1.E.46.1.1), 69 amino acyl residues in length, that exhibits what appears to be a single transmembrane segment (TMS). It is derived from a member of the Bacillota, Clostridium hathewayi DSM 13479. This protein is functionally uncharacterized and does not appear to be homologous to other holins. It does, however, show 31% identity to a heavy metal transporter from Dethiosulfovibrio peptidovorans. See also Holin Lysin Transporter Classification Database Further reading References Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins Holins
Prophage Hp1 holin family
Biology
185
2,114,941
https://en.wikipedia.org/wiki/High-dynamic-range%20rendering
High-dynamic-range rendering (HDRR or HDR rendering), also known as high-dynamic-range lighting, is the rendering of computer graphics scenes by using lighting calculations done in high dynamic range (HDR). This allows preservation of details that may be lost due to limiting contrast ratios. Video games and computer-generated movies and special effects benefit from this as it creates more realistic scenes than with more simplistic lighting models. HDRR was originally required to tone map the rendered image onto Standard Dynamic Range (SDR) displays, as the first HDR capable displays did not arrive until the 2010s. However, if a modern HDR display is available, it is possible to instead display the HDRR with even greater contrast and realism. Graphics processor company Nvidia summarizes the motivation for HDRR in three points: bright things can be really bright, dark things can be really dark, and details can be seen in both. History The use of high-dynamic-range imaging (HDRI) in computer graphics was introduced by Greg Ward in 1985 with his open-source Radiance rendering and lighting simulation software, which created the first file format to retain a high-dynamic-range image. HDRI languished for more than a decade, held back by limited computing power, storage, and capture methods; only later did the technology advance enough to put HDRI into practical use. In 1990, Eihachiro Nakamae and associates presented a lighting model for driving simulators that highlighted the need for high-dynamic-range processing in realistic simulations. In 1995, Greg Spencer presented Physically-based glare effects for digital images at SIGGRAPH, providing a quantitative model for flare and blooming in the human eye. In 1997, Paul Debevec presented Recovering high dynamic range radiance maps from photographs at SIGGRAPH, and the following year presented Rendering synthetic objects into real scenes.
These two papers laid the framework for creating HDR light probes of a location, and then using this probe to light a rendered scene. HDRI and HDRL (high-dynamic-range image-based lighting) have, ever since, been used in many situations in 3D scenes in which inserting a 3D object into a real environment requires the light probe data to provide realistic lighting solutions. In gaming applications, Riven: The Sequel to Myst in 1997 used an HDRI postprocessing shader directly based on Spencer's paper. After E3 2003, Valve released a demo movie of their Source engine rendering a cityscape in a high dynamic range. The term was not commonly used again until E3 2004, where it gained much more attention when Epic Games showcased Unreal Engine 3 and Valve announced Half-Life 2: Lost Coast in 2005, coupled with open-source engines such as OGRE 3D and open-source games like Nexuiz. In the 2010s, HDR displays first became available. With higher contrast ratios, it is possible for HDRR to reduce or eliminate tone mapping, resulting in an even more realistic image. Examples One of the primary advantages of HDR rendering is that details in a scene with a large contrast ratio are preserved. Without HDRR, areas that are too dark are clipped to black and areas that are too bright are clipped to white. These are represented by the hardware as floating point values of 0.0 and 1.0 for pure black and pure white, respectively. Another aspect of HDR rendering is the addition of perceptual cues which increase apparent brightness. HDR rendering also affects how light is preserved in optical phenomena such as reflections and refractions, as well as transparent materials such as glass. In LDR rendering, very bright light sources in a scene (such as the sun) are capped at 1.0. When this light is reflected, the result must then be less than or equal to 1.0. However, in HDR rendering, very bright light sources can exceed the 1.0 brightness to simulate their actual values.
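The clamping difference described above can be sketched in a few lines of Python. This is an illustrative sketch rather than real engine code; the function names and the 50× light intensity are invented for the example.

```python
def shade_ldr(light, reflectance):
    """LDR pipeline: every intermediate value is clamped to [0.0, 1.0],
    so a very bright source is capped at pure white before reflection."""
    clamped = min(light, 1.0)
    return min(clamped * reflectance, 1.0)

def shade_hdr(light, reflectance):
    """HDR pipeline: intermediates keep their full floating-point range;
    clamping (or tone mapping) happens only at display time."""
    return light * reflectance

# A sun-like source at 50x display white reflecting off a 10%-reflective surface:
print(shade_ldr(50.0, 0.10))  # 0.1 -- the reflection looks dim
print(shade_hdr(50.0, 0.10))  # ~5.0 -- still far brighter than display white
```

Because the HDR value survives intermediate arithmetic, a later reflection or tone-mapping pass still knows the source was far brighter than display white.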
This allows reflections off surfaces to maintain realistic brightness for bright light sources. Limitations and compensations Human eye The human eye can perceive scenes with a very high dynamic contrast ratio, around 1,000,000:1. Adaptation is achieved in part through adjustments of the iris and slow chemical changes, which take some time (e.g. the delay in being able to see when switching from bright lighting to pitch darkness). At any given time, the eye's static range is smaller, around 10,000:1. However, this is still higher than the static range of most display technology. Output to displays Although many manufacturers claim very high numbers, plasma displays, liquid-crystal displays, and CRT displays can deliver only a fraction of the contrast ratio found in the real world, and these are usually measured under ideal conditions. The simultaneous contrast of real content under normal viewing conditions is significantly lower. Some increase in dynamic range in LCD monitors can be achieved by automatically reducing the backlight for dark scenes. For example, LG calls this technology "Digital Fine Contrast"; Samsung describes it as "dynamic contrast ratio". Another technique is to have an array of brighter and darker LED backlights, for example with systems developed by BrightSide Technologies. OLED displays have better dynamic range capabilities than LCDs, similar to plasma but with lower power consumption. Rec. 709 defines the color space for HDTV, and Rec. 2020 defines a larger but still incomplete color space for ultra-high-definition television. Since the 2010s, OLED and other HDR display technologies have reduced or eliminated the need for tone mapping HDRR to standard dynamic range. Light bloom Light blooming is the result of scattering in the human lens, which the human brain interprets as a bright spot in a scene. For example, a bright light in the background will appear to bleed over onto objects in the foreground.
This can be used to create the illusion that the bright spot is brighter than it really is. Flare Flare is the diffraction of light in the human lens, resulting in "rays" of light emanating from small light sources, and can also result in some chromatic effects. It is most visible on point light sources because of their small visual angle. Typical display devices cannot display light as bright as the Sun, and ambient room lighting prevents them from displaying true black. Thus HDR rendering systems have to map the full dynamic range of what the eye would see in the rendered situation onto the capabilities of the device. This tone mapping is done relative to what the virtual scene camera sees, combined with several full screen effects, e.g. to simulate dust in the air which is lit by direct sunlight in a dark cavern, or the scattering in the eye. Tone mapping and blooming shaders can be used together to help simulate these effects. Tone mapping Tone mapping, in the context of graphics rendering, is a technique used to map colors from high dynamic range (in which lighting calculations are performed) to a lower dynamic range that matches the capabilities of the desired display device. Typically, the mapping is non-linear – it preserves enough range for dark colors and gradually limits the dynamic range for bright colors. This technique often produces visually appealing images with good overall detail and contrast. Various tone mapping operators exist, ranging from simple real-time methods used in computer games to more sophisticated techniques that attempt to imitate the perceptual response of the human visual system. HDR displays with higher dynamic range capabilities can reduce or eliminate the tone mapping required after HDRR, resulting in an even more realistic image. Applications in computer entertainment HDRR has been prevalent mainly in games, primarily for PCs, Microsoft's Xbox 360, and Sony's PlayStation 3.
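As a concrete instance of the non-linear mapping described above, the sketch below uses the global Reinhard curve L/(1+L), one of the simple real-time operators; the function name is illustrative, not taken from any particular engine.

```python
def reinhard(luminance):
    """Global Reinhard operator: maps HDR luminance in [0, inf) into [0, 1).
    Dark values pass through nearly unchanged, while bright values are
    compressed smoothly instead of being clipped to 1.0."""
    return luminance / (1.0 + luminance)

print(reinhard(0.05))   # ~0.048 -- dark detail preserved almost linearly
print(reinhard(10.0))   # ~0.909
print(reinhard(100.0))  # ~0.990 -- still distinguishable from 10.0, unlike a hard clip
```

The key property is that no input, however bright, ever reaches 1.0, so two very bright sources remain distinguishable after mapping.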
It has also been simulated on the PlayStation 2, GameCube, Xbox and Amiga systems. Sproing Interactive Media has announced that their new Athena game engine for the Wii will support HDRR, adding Wii to the list of systems that support it. In desktop publishing and gaming, color values are often processed several times over. As this includes multiplication and division (which can accumulate rounding errors), it is useful to have the extended accuracy and range of 16 bit integer or 16 bit floating point formats. This holds irrespective of the aforementioned limitations in some hardware. Development of HDRR through DirectX Complex shader effects began with the release of Shader Model 1.0 with DirectX 8. Shader Model 1.0 illuminated 3D worlds with what is called standard lighting. Standard lighting, however, had two problems: Lighting precision was confined to 8 bit integers, which limited the contrast ratio to 256:1. Using the HSV color model, the value (V), or brightness, of a color has a range of 0–255. This means the brightest white (a value of 255) is only 255 levels brighter than the darkest shade above pure black (i.e. a value of 0). Lighting calculations were integer based, which didn't offer as much accuracy because the real world is not confined to whole numbers. On December 24, 2002, Microsoft released a new version of DirectX. DirectX 9.0 introduced Shader Model 2.0, which offered one of the necessary components to enable rendering of high-dynamic-range images: lighting precision was not limited to just 8 bits. Although 8 bits was the minimum in applications, programmers could choose up to a maximum of 24 bits for lighting precision. However, all calculations were still integer-based. One of the first graphics cards to support DirectX 9.0 natively was ATI's Radeon 9700, though the effect wasn't programmed into games for years afterwards.
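The 256:1 limit quoted above follows directly from the number of integer levels available; a small illustrative sketch (the function name is invented):

```python
def integer_contrast_ratio(bits):
    """With n-bit integer lighting, the brightest level is (2**n - 1) and the
    darkest non-zero level is 1, capping the representable contrast ratio."""
    brightest = 2 ** bits - 1
    darkest_nonzero = 1
    return brightest / darkest_nonzero

print(integer_contrast_ratio(8))   # 255.0 -- the roughly 256:1 limit of 8-bit lighting
print(integer_contrast_ratio(16))  # 65535.0 -- for comparison with the 65535:1 figure
```

Adding bits widens this ratio exponentially, which is why moving beyond 8-bit integer lighting was a prerequisite for HDR rendering.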
On August 23, 2003, Microsoft updated DirectX to DirectX 9.0b, which enabled the Pixel Shader 2.x (Extended) profile for ATI's Radeon X series and NVIDIA's GeForce FX series of graphics processing units. On August 9, 2004, Microsoft updated DirectX once more to DirectX 9.0c. This also exposed the Shader Model 3.0 profile for High-Level Shader Language (HLSL). Shader Model 3.0's lighting precision has a minimum of 32 bits as opposed to 2.0's 8-bit minimum. Also, all lighting-precision calculations are now floating-point based. NVIDIA states that contrast ratios using Shader Model 3.0 can be as high as 65535:1 using 32-bit lighting precision. At first, HDRR was only possible on video cards capable of Shader-Model-3.0 effects, but software developers soon added compatibility for Shader Model 2.0. As a side note, when referred to as Shader Model 3.0 HDR, HDRR is really done by FP16 blending. FP16 blending is not part of Shader Model 3.0, but is supported mostly by cards also capable of Shader Model 3.0 (exceptions include the GeForce 6200 series). FP16 blending can be used as a faster way to render HDR in video games. Shader Model 4.0 is a feature of DirectX 10, which was released with Windows Vista. Shader Model 4.0 allows 128-bit HDR rendering, as opposed to 64-bit HDR in Shader Model 3.0 (although this is theoretically possible under Shader Model 3.0). Shader Model 5.0 is a feature of DirectX 11. It allows 6:1 compression of HDR textures without noticeable loss, a capability lacking in the HDR texture compression techniques of previous DirectX versions. Development of HDRR through OpenGL It is possible to develop HDRR using GLSL shaders, starting from OpenGL 1.4 onwards.
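The FP16 blending mentioned above operates on IEEE 754 half-precision values; their range and precision can be probed with Python's standard struct module (the helper name is invented for illustration):

```python
import struct

def to_fp16(x):
    """Round-trip a Python float through IEEE 754 half precision,
    the 16-bit floating-point format used for FP16 HDR blending."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_fp16(0.5))      # 0.5     -- exactly representable
print(to_fp16(65504.0))  # 65504.0 -- the largest finite FP16 value, far above 1.0
print(to_fp16(2049.0))   # 2048.0  -- precision coarsens to steps of 2 at this magnitude
```

The headroom far above 1.0, combined with fine steps near zero, is what makes 16-bit floats a practical render-target format for HDR.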
Game engines that support HDR rendering Unreal Engine 5 Unreal Engine 4 Unreal Engine 3 Chrome Engine 3 Source Source 2 Serious Engine 2 MT Framework 2 RE Engine REDengine 3 CryEngine, CryEngine 2, CryEngine 3 Dunia Engine Gamebryo Godot (game engine) Decima Unity id Tech 5 LithTech Unigine Frostbite 2 Real Virtuality 2, 3, and 4 HPL Engine 3 Babylon JS Torque 3D X-Ray Engine See also Ambient occlusion Shader References External links NVIDIA's HDRR technical summary (PDF) A HDRR Implementation with OpenGL 2.0 OpenGL HDRR Implementation High Dynamic Range Rendering in OpenGL (PDF) Microsoft's technical brief on SM3.0 in comparison with SM2.0 Tom's Hardware: New Graphics Card Features of 2006 List of GPU's compiled by Chris Hare techPowerUp! GPU Database Understanding Contrast Ratios in Video Display Devices Requiem by TBL, featuring real-time HDR rendering in software List of video games supporting HDR Examples of high dynamic range photography 3D rendering High dynamic range
High-dynamic-range rendering
Engineering
2,645
12,713
https://en.wikipedia.org/wiki/Giant%20panda
The giant panda (Ailuropoda melanoleuca), also known as the panda bear or simply panda, is a bear species endemic to China. It is characterised by its white coat with black patches around the eyes, ears, legs and shoulders. Its body is rotund; adult individuals weigh and are typically long. It is sexually dimorphic, with males being typically 10 to 20% larger than females. A thumb is visible on its forepaw, which helps in holding bamboo in place for feeding. It has large molar teeth and expanded temporal fossa to meet its dietary requirements. It can digest starch and is mostly herbivorous with a diet consisting almost entirely of bamboo and bamboo shoots. The giant panda lives exclusively in six montane regions in a few Chinese provinces at elevations of up to . It is solitary and gathers only in mating seasons. It relies on olfactory communication, depositing scent marks as chemical cues on landmarks such as rocks or trees. Females rear cubs for an average of 18 to 24 months. The oldest known giant panda was 38 years old. As a result of farming, deforestation and infrastructural development, the giant panda has been driven out of the lowland areas where it once lived. The wild population has increased again to 1,864 individuals as of March 2015. Since 2016, it has been listed as Vulnerable on the IUCN Red List. In July 2021, Chinese authorities also classified the giant panda as vulnerable. It is a conservation-reliant species. By 2007, the captive population comprised 239 giant pandas in China and another 27 outside the country. It has often served as China's national symbol, appearing on Chinese Gold Panda coins since 1982 and as one of the five Fuwa mascots of the 2008 Summer Olympics held in Beijing. Etymology The word panda was borrowed into English from French, but no conclusive explanation of the origin of the French word panda has been found.
The closest candidate is the Nepali word ponya, possibly referring to the adapted wrist bone of the red panda, which is native to Nepal. In many older sources, the name "panda" or "common panda" refers to the red panda (Ailurus fulgens), which was described some 40 years earlier and over that period was the only animal known as a panda. The binomial name Ailuropoda melanoleuca means black and white (melanoleuca) cat-foot (ailuropoda). Since the earliest collection of Chinese writings, the Chinese language has given the bear many different names, including mò (, ancient Chinese name for giant panda), huāxióng (; "spotted bear") and zhúxióng (; "bamboo bear"). The most popular names in China today are dàxióngmāo (; ), or simply xióngmāo (; ). As with the word panda in English, xióngmāo () was originally used to describe just the red panda, but dàxióngmāo () and xiǎoxióngmāo (; ) were coined to differentiate between the species. In Taiwan, another popular name for panda is the inverted dàmāoxióng (; ), though many encyclopedias and dictionaries in Taiwan still use the "bear cat" form as the correct name. Some linguists argue that in this construction "bear" rather than "cat" is the base noun, making the name more grammatically and logically correct, which has led to its popular adoption despite official usage. This name did not gain its popularity until 1988, when a private zoo in Tainan painted a sun bear black and white and created the Tainan fake panda incident.
Subspecies Two subspecies of giant panda have been recognized on the basis of distinct cranial measurements, colour patterns, and population genetics. The nominate subspecies, A. m. melanoleuca, consists of most extant populations of the giant panda. These animals are principally found in Sichuan and display the typical stark black and white contrasting colours. The Qinling panda, A. m. qinlingensis, is restricted to the Qinling Mountains in Shaanxi at elevations of . The typical black and white pattern of Sichuan giant pandas is replaced with a light brown and white pattern. The skull of A. m. qinlingensis is smaller than that of its relatives, and it has larger molars. A detailed study of the giant panda's genetic history from 2012 confirms that the separation of the Qinling population occurred about 300,000 years ago, and reveals that the non-Qinling population further diverged into two groups, named the Minshan and the Qionglai-Daxiangling-Xiaoxiangling-Liangshan group respectively, about 2,800 years ago. Phylogeny Of the eight extant species in the bear family Ursidae, the giant panda's lineage branched off the earliest. Distribution and habitat The giant panda is endemic to China. It is found in small, fragmented populations in six mountainous regions in the country, mainly in Sichuan, and also in neighbouring Shaanxi and Gansu. Successful habitat preservation has seen a rise in panda numbers, though loss of habitat due to human activities remains its biggest threat. In areas with a high concentration of medium-to-large-sized mammals (such as domestic cattle, a species known to degrade the landscape), the giant panda population is generally low. This is mainly attributed to the panda's avoidance of interspecific competition. The species has been located at elevations of above sea level. They frequent habitats with a healthy concentration of bamboos, typically old-growth forests, but may also venture into secondary forest habitats.
The Daxiangling Mountain population inhabits both coniferous and broadleaf forests. Additionally, the Qinling population often selects evergreen broadleaf and conifer forests, while pandas in the Qionglai mountainous region exclusively select upland conifer forests. The remaining two populations, namely those occurring in the Liangshan and Xiaoxiangling mountains, predominantly occur in broadleaf evergreen and conifer forests. Giant pandas once roamed across Southeast Asia from Myanmar to northern Vietnam. Their range in China spanned much of the southeast region. By the Pleistocene, climate change affected panda populations, and the subsequent domination of modern humans led to large-scale habitat loss. In 2001, it was estimated that the range of the giant panda had declined by about 99% from its range in earlier millennia. Description The giant panda has a body shape typical of bears. It has black fur on its ears, limbs, shoulders and around the eyes. The rest of the animal's coat is white. The bear's distinctive coloration appears to serve as camouflage in both winter and summer environments as they do not hibernate. The white areas serve as camouflage in snow, while the black shoulders and legs conceal them in shade. Studies in the wild have found that when viewed from a distance, the panda displays disruptive coloration, while up close, they rely more on blending in. The black ears may be used to display aggression, while the eye patches might help them identify one another. The giant panda's thick, woolly coat keeps it warm in the cool forests of its habitat. The panda's skull shape is typical of durophagous carnivorans. It has evolved from previous ancestors to exhibit larger molars with increased complexity and expanded temporal fossa. A study revealed that a giant panda had a bite force of 1298.9 newtons (BFQ 151.4) at canine teeth and 1815.9 newtons (BFQ 141.8) at carnassial teeth.
Adults measure around long, including a tail of about , and tall at the shoulder. Males can weigh up to . Females are generally 10–20% smaller than males. They weigh between and . The average weight for adults is . The giant panda's paw has a digit similar to a thumb and five fingers; the thumb-like digit – actually a modified sesamoid bone – helps it to hold bamboo while eating. The giant panda's tail, measuring , is the second-longest in the bear family, behind the sloth bear. Ecology Diet Despite its taxonomic classification as a carnivoran, the giant panda's diet is primarily herbivorous, with approximately 99% of its diet consisting of bamboo. However, the giant panda still has the digestive system of a carnivore, as well as carnivore-specific genes, and thus derives little energy and little protein from the consumption of bamboo. The ability to break down cellulose and lignin is very weak, and their main source of nutrients comes from starch and hemicelluloses. The most important part of their bamboo diet is the shoots, which are rich in starch and have up to 32% protein content. Accordingly, pandas have evolved a higher capability to digest starches than strict carnivores. Raw bamboo is toxic, containing cyanide compounds. Pandas' body tissues are less able than those of typical herbivores to detoxify cyanide, but their gut microbiomes are significantly enriched in putative genes coding for enzymes related to cyanide degradation, suggesting that they have cyanide-digesting gut microbes. It has been estimated that an adult panda absorbs of cyanide a day through its diet. To prevent poisoning, they have evolved anti-toxic mechanisms to protect themselves. About 80% of the cyanide is metabolized to less toxic thiocyanate and discharged in urine, while the remaining 20% is detoxified by other minor pathways.
During the shoot season (April–August), pandas store a large amount of food in preparation for the months succeeding this seasonal period, in which pandas live off a diet of bamboo leaves. The giant panda is a highly specialised animal with unique adaptations, and has lived in bamboo forests for millions of years. The average giant panda eats as much as of bamboo shoots a day to compensate for the limited energy content of its diet. Ingestion of such a large quantity of material is possible and necessary because of the rapid passage of large amounts of indigestible plant material through the short, straight digestive tract. It is also noted, however, that such rapid passage of digesta limits the potential of microbial digestion in the gastrointestinal tract, limiting alternative forms of digestion. Given this voluminous diet, the giant panda defecates up to 40 times a day. The limited energy input imposed on it by its diet has affected the panda's behavior. The giant panda tends to limit its social interactions and avoids steeply sloping terrain to limit its energy expenditures. Two of the panda's most distinctive features, its large size and round face, are adaptations to its bamboo diet. Anthropologist Russell Ciochon observed: "[much] like the vegetarian gorilla, the low body surface area to body volume [of the giant panda] is indicative of a lower metabolic rate. This lower metabolic rate and a more sedentary lifestyle allows the giant panda to subsist on nutrient poor resources such as bamboo." The giant panda's round face is the result of powerful jaw muscles, which attach from the top of the head to the jaw. Large molars crush and grind fibrous plant material. The morphological characteristics of extinct relatives of the giant panda suggest that while the ancient giant panda was omnivorous 7 million years ago (mya), it only became herbivorous some 2–2.4 mya with the emergence of A. microta.
Genome sequencing of the giant panda suggests that the dietary switch could have initiated from the loss of the sole umami taste receptor, encoded by the genes TAS1R1 and TAS1R3 (also known as T1R1 and T1R3), resulting from two frameshift mutations within the T1R1 exons. Umami taste corresponds to high levels of glutamate as found in meat and may have thus altered the food choice of the giant panda. Although the pseudogenisation (conversion into a pseudogene) of the umami taste receptor in Ailuropoda coincides with the dietary switch to herbivory, it is likely a result of, and not the reason for, the dietary change. The mutation time for the T1R1 gene in the giant panda is estimated to 4.2 mya while fossil evidence indicates bamboo consumption in the giant panda species at least 7 mya, signifying that although complete herbivory occurred around 2 mya, the dietary switch was initiated prior to T1R1 loss-of-function. Pandas eat any of 25 bamboo species in the wild, with the most common including Fargesia dracocephala and Fargesia rufa. Only a few bamboo species are widespread at the high altitudes pandas now inhabit. Bamboo leaves contain the highest protein levels; stems have less. Because of the synchronous flowering, death, and regeneration of all bamboo within a species, the giant panda must have at least two different species available in its range to avoid starvation. While primarily herbivorous, the giant panda still retains decidedly ursine teeth and will eat meat, fish, and eggs when available. In captivity, zoos typically maintain the giant panda's bamboo diet, though some will provide specially formulated biscuits or other dietary supplements. Pandas will travel between different habitats if they need to, so they can get the nutrients that they need and to balance their diet for reproduction. 
Interspecific interactions Although adult giant pandas have few natural predators other than humans, young cubs are vulnerable to attacks by snow leopards, yellow-throated martens, eagles, feral dogs, and the Asian black bear. Sub-adults weighing up to may be vulnerable to predation by leopards. Giant pandas are sympatric with other large mammals and bamboo feeders, such as the takin (Budorcas taxicolor). The takin and giant panda share a similar ecological niche, and they consume the same resources. When competition for food is fierce, pandas disperse to the outskirts of takin distribution. Other possible competitors include, but are not limited to, the Eurasian wild pig (Sus scrofa), Chinese goral (Naemorhedus griseus) and the Asian black bear (Ursus thibetanus). Giant pandas avoid areas with a mid-to-high density of livestock, as they depress the vegetation. The Tibetan Plateau is the only known area where both giant and red pandas can be found. Although sharing near-identical ecological niches, competition between the two species has rarely been observed. Nearly 50% of their respective distribution overlaps, and successful coexistence is achieved through distinct habitat selection. Pathogens and parasites A captive female died from toxoplasmosis, a disease caused by an obligate intracellular parasitic protozoan known as Toxoplasma gondii that infects most warm-blooded animals, including humans. Pandas are likely susceptible to diseases from Baylisascaris schroederi, a parasitic nematode known to infect giant panda intestines. This nematode species is known to give pandas baylisascariasis, a deadly disease that kills more wild pandas than any other cause. Additionally, the population is threatened by canine distemper virus (CDV), canine parvovirus, rotavirus, canine adenovirus, and canine coronavirus. Bacteria, such as Clostridium welchii, Proteus mirabilis, Klebsiella pneumoniae, and Escherichia coli, may also be lethal.
Behavior The giant panda is a terrestrial animal and primarily spends its life roaming and feeding in the bamboo forests of the Qinling Mountains and in the hilly province of Sichuan. Giant pandas are generally solitary. Each adult has a defined territory and a female is not tolerant of other females in her range. Social encounters occur primarily during the brief breeding season in which pandas in proximity to one another will gather. After mating, the male leaves the female alone to raise the cub. Pandas were thought to fall into the crepuscular category, animals that are active twice a day, at dawn and dusk; however, pandas may belong to a category all of their own, with activity peaks in the morning, afternoon and midnight. The low nutrition quality of bamboo means pandas need to eat more frequently, and due to their lack of major predators they can be active at any time of the day. Activity is highest in June and decreases in late summer to autumn with an increase from November through the following March. Activity is also directly related to the amount of sunlight: on colder days, solar radiation has a markedly stronger positive effect on activity levels. Pandas communicate through vocalisation and scent marking such as clawing trees or spraying urine. They are able to climb and take shelter in hollow trees or rock crevices, but do not establish permanent dens. For this reason, pandas, like other subtropical mammals, do not hibernate and will instead move to elevations with warmer temperatures. Pandas rely primarily on spatial memory rather than visual memory. Though the panda is often assumed to be docile, it has been known to attack humans on rare occasions. Pandas have been known to cover themselves in horse manure to protect themselves against cold temperatures.
The species communicates foremost through a bleating sound; they achieve peaceful interactions through the emission of this sound. When in oestrus, a female emits a chirp. In hostile confrontations or during fights, the giant panda emits vocalizations such as a roar or growl. On the other hand, squeals typically indicate inferiority and submission in a dispute. Other vocalizations include honks and moans. Olfactory communication Giant pandas rely heavily on olfaction to communicate with one another. Scent marks are used to spread these chemical cues and are placed on landmarks like rocks or trees. Chemical communication in giant pandas plays many roles in their social situations. Scent marks and odors are used to spread information about sexual status, whether a female is in estrus or not, age, gender, individuality, dominance over territory, and choice of settlement. Giant pandas communicate by excreting volatile compounds, or scent marks, through the anogenital gland. Giant pandas have unique positions in which they will scent mark. Males deposit scent marks or urine by lifting their hind leg, rubbing their backside, or standing in order to rub the anogenital gland onto a landmark. Females, however, exercise squatting or simply rubbing their genitals onto a landmark. The season plays a major role in mediating chemical communication. The season, mainly whether or not it is the breeding season, influences which odors are prioritized. Chemical signals can have different functions in different seasons. During the non-breeding season, females prefer the odors of other females because reproduction is not their primary motivation. However, during breeding season, odors from the opposite sex will be more attractive. Because they are solitary mammals and their breeding season is so brief, female pandas secrete chemical cues in order to let males know their sexual status.
The chemical cues female pandas secrete can be considered pheromones for sexual reproduction. Females deposit scent marks through their urine, which induces an increase in androgen levels in males. Androgen is a sex hormone found in both males and females; testosterone is the major androgen produced by males. Civetone and decanoic acid are chemicals found in female urine that promote behavioral responses in males; both are considered giant panda pheromones. Male pandas also secrete chemical signals that carry information about their reproductive status and age, which helps a female choose a mate; age, for example, can indicate sexual maturity and sperm quality. Pandas are also able to determine when a signal was placed, further aiding the search for a potential mate. Chemical cues are not used only for communication between males and females, however: pandas can determine individuality from chemical signals, which lets them differentiate between a potential partner and a member of the same sex, a potential competitor. Chemical cues, or odors, also play an important role in how a panda chooses its habitat. Pandas look for odors that tell them not only the identity of another panda, but whether they should avoid it. Pandas tend to avoid others of their species for most of the year, the breeding season being the brief time of major interaction; chemical signaling thus allows for both avoidance and competition. Pandas whose habitats lie in similar locations collectively leave scent marks in a shared spot termed a "scent station". When pandas come across these scent stations, they can identify a specific panda and the scope of its habitat, allowing them to pursue a potential mate or avoid a potential competitor.
Pandas can assess an individual's dominance status, including age and size, via odor cues, and may choose to avoid a scent mark if the signaler's competitive ability outweighs their own. A panda's size can be conveyed through the height of a scent mark: since larger animals can place higher marks, an elevated scent mark advertises greater competitive ability. Age must also be taken into account when assessing a competitor's fighting ability; a mature panda, for example, will be larger than a younger, immature panda and hold an advantage in a fight.

Reproduction

Giant pandas reach sexual maturity between the ages of four and eight, and may be reproductive until age 20. The mating season is between March and May, when a female goes into estrus, which lasts for two or three days and occurs only once a year. When mating, the female is in a crouching, head-down position as the male mounts her from behind. Copulation lasts from 30 seconds to five minutes, but the male may mount her repeatedly to ensure successful fertilization. The gestation period is between 95 and 160 days; the variability arises because the fertilized egg may linger in the reproductive system for a while before implanting on the uterine wall. Giant pandas give birth to twins in about half of pregnancies, but if twins are born, usually only one survives in the wild. The mother will select the stronger of the cubs, and the weaker cub will die of starvation; the mother is thought to be unable to produce enough milk for two cubs, since she does not store fat. The father has no part in raising the cub. When the cub is first born, it is pink, blind, and toothless, weighing only a tiny fraction of the mother's weight, proportionally the smallest baby of any placental mammal. It nurses from its mother's breast six to 14 times a day for up to 30 minutes at a time. The mother may leave the den for three to four hours to feed, which leaves the cub defenseless.
One to two weeks after birth, the cub's skin turns grey where its hair will eventually become black. A slight pink color may appear on the cub's fur as a result of a chemical reaction between the fur and its mother's saliva. A month after birth, the color pattern of the cub's fur is fully developed. Its fur is very soft and coarsens with age. The cub begins to crawl at 75 to 80 days; mothers play with their cubs by rolling and wrestling with them. Cubs can eat small quantities of bamboo after six months, though mother's milk remains the primary food source for most of the first year. Giant panda cubs live with their mothers until they are 18 months to two years old, and the interval between births in the wild is generally two years. Initially, the primary method of breeding giant pandas in captivity was artificial insemination, as they seemed to lose interest in mating once captured. This led some scientists to try methods such as showing them videos of giant pandas mating and giving the males sildenafil (commonly known as Viagra). In the 2000s, researchers started having success with captive breeding programs, and have since determined that giant pandas have breeding rates comparable to those of some populations of the American black bear, a thriving bear species. In July 2009, Chinese scientists confirmed the birth of the first cub successfully conceived through artificial insemination using frozen sperm. The technique for freezing the sperm in liquid nitrogen was first developed in 1980, and the first birth was hailed as a solution to the dwindling availability of giant panda semen, which had led to inbreeding. Panda semen, which can be frozen for decades, could be shared between different zoos to save the species. As of 2009, zoos in destinations such as San Diego in the United States and Mexico City were expected to be able to provide their own semen to inseminate more giant pandas.
Attempts have also been made to reproduce giant pandas by interspecific pregnancy, in which cloned panda embryos were implanted into the uterus of an animal of another species. This has resulted in panda fetuses, but no live births.

Human interaction

Early references

In Ancient China, people thought pandas to be rare and noble creatures; the Empress Dowager Bo was buried with a panda skull in her vault, and the grandson of Emperor Taizong of Tang is said to have given Japan two pandas and a sheet of panda skin as a sign of goodwill. Unlike many other animals in Ancient China, pandas were rarely thought to have medical uses. The few known uses include the Sichuan tribal peoples' use of panda urine to melt accidentally swallowed needles, and the use of panda pelts to control menstruation as described in the Qin dynasty encyclopedia Erya. The creature named mo (貘) mentioned in some ancient books has been interpreted as the giant panda. The dictionary Shuowen Jiezi (Eastern Han Dynasty) says that the mo, from Shu (Sichuan), is bear-like but yellow-and-black, although the older Erya describes mo simply as a "white leopard". The interpretation of the legendary fierce creature pixiu (貔貅) as referring to the giant panda is also common. During the reign of the Yongle Emperor (early 15th century), his relative from Kaifeng sent him a captured zouyu (騶虞), and another zouyu was sighted in Shandong. The zouyu is a legendary "righteous" animal which, like the qilin, appears only during the rule of a benevolent and sincere monarch.

In captivity

Pandas have been kept in zoos since as early as the Western Han Dynasty in China, when the writer Sima Xiangru noted that the panda was the most treasured animal in the emperor's garden of exotic animals in the capital Chang'an (present-day Xi'an). Not until the 1950s were pandas again recorded to have been exhibited in China's zoos. Chi Chi at the London Zoo became very popular, which influenced the World Wildlife Fund to adopt a panda as its symbol.
A 2006 New York Times article outlined the economics of keeping pandas, which costs five times more than keeping the next most expensive animal, an elephant. American zoos generally pay the Chinese government $1 million a year in fees, as part of a typical ten-year contract. San Diego's contract with China was to expire in 2008, but got a five-year extension at about half of the previous yearly cost. The last contract, with the Memphis Zoo in Memphis, Tennessee, ended in 2013. In the 1970s, gifts of giant pandas to American and Japanese zoos formed an important part of the diplomacy of the People's Republic of China (PRC), as it marked some of the first cultural exchanges between China and the West. This practice has been termed "panda diplomacy". By 1984, however, pandas were no longer given as gifts. Instead, China began to offer pandas to other nations only on 10-year loans for a fee of up to US$1,000,000 per year and with the provision that any cubs born during the loan are the property of China. As a result of this change in policy, nearly all the pandas in the world are owned by China, and pandas leased to foreign zoos and all cubs are eventually returned to China. As of 2022, Xin Xin at the Chapultepec Zoo in Mexico City, was the last living descendant of the gifted pandas. Since 1998, because of a WWF lawsuit, the United States Fish and Wildlife Service only allows US zoos to import a panda if the zoo can ensure China channels more than half of its loan fee into conservation efforts for giant pandas and their habitat. In May 2005, China offered a breeding pair to Taiwan. The issue became embroiled in cross-Strait relations – due to both the underlying symbolism and technical issues such as whether the transfer would be considered "domestic" or "international" or whether any true conservation purpose would be served by the exchange. 
A contest in 2006 to name the pandas was held in the mainland, resulting in the politically charged names Tuan Tuan and Yuan Yuan (implying reunification). China's offer was initially rejected by Chen Shui-bian, then President of Taiwan; however, when Ma Ying-jeou assumed the presidency in 2008, the offer was accepted, and the pandas arrived in December of that year. In the 2020s, certain "celebrity pandas" have gained a cult following among internet users, with dedicated fan accounts existing to keep tabs on the animals. In what has been called "giant panda fever" or "panda-monium", individual pandas are known to attract billions of views and engagements on social media, as well as product lines specifically emulating them. At the Chengdu Research Base of Giant Panda Breeding, some of these "celebrity pandas" draw hours-long lines of visitors who come specifically to see them.

Conservation

The giant panda is a vulnerable species, threatened by continued habitat loss and fragmentation and by a very low birthrate, both in the wild and in captivity. Its range is confined to a small portion of the western edge of its historical range, which once stretched through southern and eastern China, northern Myanmar, and northern Vietnam. The species is scattered into more than 30 subpopulations of relatively few animals. The building of roads and human settlements near panda habitat results in population declines, and disease from domesticated pets and livestock is another threat. By 2100, it is estimated that the distribution of giant pandas could shrink by up to 100%, mainly due to the effects of climate change. The giant panda is listed on CITES Appendix I, meaning that trade in its parts is prohibited and that it requires this protection to avoid extinction. The species has also been protected and placed in category 1 under China's 1988 Wildlife Protection Act. The giant panda has been a target of poaching by locals since ancient times, and by foreigners since it was introduced to the West.
Starting in the 1930s, foreigners were unable to poach giant pandas in China because of the Second Sino-Japanese War and the Chinese Civil War, but pandas remained a source of soft furs for locals. The population boom in China after 1949 created stress on the pandas' habitat, and the subsequent famines led to increased hunting of wildlife, including pandas. After the Chinese economic reform, demand for panda skins from Hong Kong and Japan led to illegal poaching for the black market, acts generally ignored by local officials at the time. In 1963, the PRC government set up the Wolong National Nature Reserve to save the declining panda population. The giant panda is among the world's most adored and protected rare animals, and is one of the few whose natural habitat has earned a UNESCO World Heritage Site designation: the Sichuan Giant Panda Sanctuaries, located in the southwestern province of Sichuan and covering seven natural reserves, were inscribed onto the World Heritage List in 2006. A 2015 paper found that the giant panda can serve as an umbrella species, as the preservation of its habitat also helps other endemic species in China, including 70% of the country's forest birds, 70% of its mammals and 31% of its amphibians. In 2012, Earthwatch Institute, a global nonprofit that teams volunteers with scientists to conduct environmental research, launched a program called "On the Trail of Giant Panda". This program, based in the Wolong National Nature Reserve, allows volunteers to work up close with pandas cared for in captivity and to help them adapt to life in the wild, so that they may breed and live longer, healthier lives. Efforts to preserve panda populations in China have, however, come at the expense of other animals in the region, including snow leopards, wolves, and dholes.
In order to improve living and mating conditions for the fragmented panda populations, nearly 70 natural reserves were combined in 2020 to form the Giant Panda National Park. With a size of 10,500 square miles, the park is roughly three times as large as Yellowstone National Park and incorporates the Wolong National Nature Reserve. Small, isolated populations run the risk of inbreeding, and smaller genetic variety makes individuals more vulnerable to various defects and genetic mutations.

Population

In 2006, scientists reported that the number of pandas living in the wild may have been underestimated at about 1,000. Previous population surveys had used conventional methods to estimate the size of the wild panda population, but using a new method that analyzes DNA from panda droppings, scientists believed the wild population was as large as 3,000. In 2006, there were 40 panda reserves in China, compared to just 13 reserves in 1998. Because the species was reclassified from "endangered" to "vulnerable" in 2016, the conservation efforts are thought to be working. In response to the reclassification, the State Forestry Administration of China announced that it would not lower the conservation level for the panda, and would instead reinforce conservation efforts. In 2020, the panda population of the new national park was already above 1,800 individuals, roughly 80 percent of the entire panda population in China. Establishing the new protected area in Sichuan Province also gives various other endangered or threatened species, such as the Siberian tiger, the possibility of improving their living conditions by offering them habitat. Other species that benefit from the protection of their habitat include the snow leopard, the golden snub-nosed monkey, the red panda and the complex-toothed flying squirrel.
In July 2021, Chinese conservation authorities announced that giant pandas are no longer endangered in the wild following years of conservation efforts, with a wild population exceeding 1,800. China has received international praise for its conservation of the species, which has also helped the country establish itself as a leader in endangered species conservation.

See also

Giant pandas around the world
List of giant pandas
Panda tea
Pygmy giant panda
Wildlife of China
List of endangered and protected species of China

References

Notes

Bibliography

AFP (via Discovery Channel) (20 June 2006). "Panda Numbers Exceed Expectations".
Associated Press (via CNN) (2006).
Catton, Chris (1990). Pandas. Christopher Helm.
Friends of the National Zoo (2006). Panda Cam: A Nation Watches Tai Shan the Panda Cub Grow. New York: Fireside Books. (An earlier edition is available as The Smithsonian Book of Giant Pandas, Smithsonian Institution Press, 2002.)
Goodman, Brenda (12 February 2006). "Pandas Eat Up Much of Zoos' Budgets". The New York Times.
"Panda Facts At a Glance" (n.d.). www.wwfchina.org. WWF China.
Ryder, Joanne (2001). Little Panda: The World Welcomes Hua Mei at the San Diego Zoo. New York: Simon & Schuster.
Warren, Lynne (July 2006). "Panda, Inc." National Geographic. (About Mei Xiang, Tai Shan and the Wolong Panda Research Facility in Chengdu, China.)
Journal of Mammalogy, Volume 96, Issue 6, 24 November 2015, pp. 1116–1127. https://doi.org/10.1093/jmammal/gyv118

External links

BBC Nature: Giant panda news, and video clips from BBC programmes past and present.
Panda Pioneer: the release of the first captive-bred panda 'Xiang Xiang' in 2006 WWF – environmental conservation organization Pandas International – panda conservation group National Zoo Live Panda Cams – Baby Panda Tai Shan and mother Mei Xiang Information from Animal Diversity NPR News 2007/08/20 – Panda Romance Stems From Bamboo View the panda genome on Ensembl. Texts and pictures of the Panda exhibition at the Museum für Naturkunde Berlin iPanda-50: annotated image dataset for fine-grained panda identification on Github Mammals of China Endemic fauna of China Clawed herbivores Herbivorous mammals EDGE species Vulnerable animals Vulnerable fauna of Asia Articles containing video clips Species that are or were threatened by agricultural development Species that are or were threatened by logging Mammals described in 1869 Taxa named by Armand David Ailuropodinae National symbols of China
Giant panda
Biology
https://en.wikipedia.org/wiki/Isotoluene
The isotoluenes in organic chemistry are the non-aromatic toluene isomers with an exocyclic double bond. They are of some academic interest in relation to aromaticity and isomerisation mechanisms. The three basic isotoluenes are ortho-isotoluene or 5-methylene-1,3-cyclohexadiene (here labelled 1); para-isotoluene (2); and meta-isotoluene (3). Another structural isomer is the bicyclic compound 5-methylenebicyclo[2.2.0]hexene (4). The o- and p-isotoluenes isomerise to toluene, a reaction driven by aromatic stabilisation; these compounds are estimated to be 96 kJ mol⁻¹ less stable than toluene. The isomerisation of p-isotoluene to toluene takes place at 100 °C in benzene with bimolecular reaction kinetics, via an intermolecular free-radical reaction; the intramolecular isomerisation, a 1,3-sigmatropic reaction, is unfavorable because an antarafacial mode is enforced. Other dimeric radical reaction products are formed as well. The ortho-isomer is found to isomerise at 60 °C in benzene, also in a second-order reaction; the proposed mechanism is a concerted intermolecular ene reaction, and the reaction product is either toluene or a mixture of dimerized ene-reaction products, depending on the exact reaction conditions. Ortho-isotoluene has been researched in connection with the mechanism of initiator-free polymerization of polystyrene. See also Pentacene References Hydrocarbons Dienes
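The scale of the stability gap can be illustrated with a rough equilibrium estimate. The sketch below assumes the quoted 96 kJ mol⁻¹ difference can be treated as a free-energy difference (an assumption; the source does not specify whether the figure is enthalpic or free-energy based):

```python
import math

R = 8.314       # gas constant, J/(mol K)
T = 298.15      # room temperature, K
dG = 96_000     # stability difference from the text, J/mol (treated as a ΔG; assumption)

# Boltzmann/equilibrium ratio [toluene]/[isotoluene] = exp(ΔG / RT)
ratio = math.exp(dG / (R * T))
print(f"toluene : isotoluene ≈ {ratio:.2e} : 1")
```

The ratio comes out around 10¹⁶–10¹⁷, i.e. at equilibrium essentially no isotoluene remains, consistent with the aromatic-stabilisation-driven isomerisation described above.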
Isotoluene
Chemistry
https://en.wikipedia.org/wiki/C6H13O9P
The molecular formula C6H13O9P may refer to:

Fructose 1-phosphate
Fructose 6-phosphate
Galactose 1-phosphate
Glucose 1-phosphate
Glucose 6-phosphate
Mannose 1-phosphate
Mannose 6-phosphate

Molecular formulas
C6H13O9P
Physics,Chemistry
https://en.wikipedia.org/wiki/Superghost
In a supersymmetric quantum field theory, a superghost is a fermionic Faddeev–Popov ghost, which is used in the gauge fixing of a fermionic symmetry generator. References Supersymmetric quantum field theory String theory
Superghost
Physics,Astronomy
https://en.wikipedia.org/wiki/Premunition
Premunition, also known as infection-immunity, is a host response that protects against high parasite numbers and illness without eliminating the infection. This type of immunity is relatively rapid, progressively acquired, short-lived, and partially effective. For malaria, premunition is maintained by repeated antigen exposure from infective bites; thus, if an individual leaves an endemic area, he or she may lose premunition and become susceptible to malaria. Antibody action contributes to premunition, but premunition is probably much more complex than a simple antibody-antigen interaction. In the case of malaria, the sporozoite and merozoite stages of Plasmodium elicit the antibody response that leads to premunition. Immunoglobulin E targets the parasites and leads to eosinophil degranulation, which releases major basic protein that damages the parasites, while other factors elicit a local inflammatory response. However, Plasmodium can change its surface antigens, so the development of an antibody repertoire that can recognize multiple surface antigens is important for premunition to be achieved. Premunition has not been well studied, and although it likely occurs broadly, it is mainly emphasized for its role in malaria, tuberculosis, syphilis and relapsing fever. Premunization is the artificial induction of premunition. Premunity is the progressive development of immunity in individuals exposed to an infective agent, mainly protozoa and Rickettsia, but not viruses. After the initial infection, which generally occurs in childhood, the effect of subsequent infections is diminished; infections thereafter may exhibit little or no symptomatology in spite of parasitemia. The next stage is resistance to infection altogether. Loss of premunity is thought to be a cause of the 1965 rebound of malaria in India after the dramatic success of the National Malaria Control Programme launched for rural India in 1953.
Premunity occurs in infections of babesiosis, malaria, Onchocerca volvulus, and Trichomonas. See also Herd immunity Adaptive immune system References Further reading
Premunition
Biology
https://en.wikipedia.org/wiki/SS%20LeBaron%20Russell%20Briggs
SS LeBaron Russell Briggs was a Liberty ship built in the United States during World War II. She was named after LeBaron Russell Briggs, the first Dean of Men at Harvard College and the president of Radcliffe College.

Construction

LeBaron Russell Briggs was laid down on 29 March 1944, under a Maritime Commission (MARCOM) contract, MC hull 2301, by J.A. Jones Construction, Panama City, Florida; she was sponsored by Mrs. George R. Smith, daughter of James Addison Jones, and launched on 12 May 1944.

History

She was allocated to R.A. Nichol & Company on 31 May 1944. On 5 March 1948, she was laid up in the National Defense Reserve Fleet in Wilmington, North Carolina. On 26 September 1957, she was relocated to the National Defense Reserve Fleet's Hudson River Group. On 8 December 1961, she was withdrawn from the fleet to be loaded with grain under the "Grain Program 1961", returning loaded with grain to the fleet on 22 December 1961. On 17 June 1963, she was withdrawn from the fleet to have the grain unloaded; she returned empty on 22 June 1963. On 30 July 1970, she was turned over to the US Navy for use in Operation CHASE (Chase 10). On 18 August 1970, she was loaded with 418 steel-jacketed concrete vaults, which encased 12,540 US Army M55 rockets containing Sarin nerve gas and one container of VX gas, and was towed out east of Cape Kennedy, Florida, where she was scuttled in deep water. References Bibliography Liberty ships Ships built in Panama City, Florida 1944 ships Wilmington Reserve Fleet Hudson River Reserve Fleet Grain Program Military projects of the United States Maritime incidents in 1970
SS LeBaron Russell Briggs
Engineering
https://en.wikipedia.org/wiki/Arthur%20Michael
Arthur Michael (August 7, 1853 – February 8, 1942) was an American organic chemist who is best known for the Michael reaction.

Life

Arthur Michael was born into a wealthy family in Buffalo, New York, in 1853, the son of John and Clara Michael, well-off real-estate investors. He was educated in that same city, learning chemistry both from a local teacher and in his own home-built laboratory. An illness thwarted Michael's plans to attend Harvard, and instead, in 1871, he traveled to Europe with his parents and decided to study in Germany. He studied in Hofmann's chemical laboratory at the University of Berlin, then with Robert Bunsen at Heidelberg University, and after two years returned to Berlin to study again with Hofmann. He then studied for another year with Wurtz at the École de Médecine in Paris and with Dmitri Mendeleev in St. Petersburg. Returning to the United States in 1880, Michael became professor of chemistry at Tufts College, where he taught from 1882 to 1889. He received an A.M. degree from Tufts in 1882, and a Ph.D. in 1890. At Tufts, Michael met and married, in 1888, one of his own science students, Helen Cecilia De Silver Abbott. Following several years in England, during which the couple worked in a self-constructed laboratory on the Isle of Wight, they returned to the United States in 1894, where Arthur Michael again taught at Tufts, leaving in 1907 as an emeritus professor. Michael's retirement from academia lasted but five years: in 1912 he became a professor of chemistry at Harvard University, and there he stayed until a second retirement in 1936. Throughout his career, Michael worked with some of the foremost chemists of his day, obtained chemistry professorships, and achieved fame among his peers. Arthur Michael died in Orlando, Florida, on February 8, 1942. His wife had died in 1904; they had no children.

Work

Arthur Michael is remembered today primarily for the Michael reaction, also called the Michael addition.
As originally defined by Michael, the reaction involves the addition of an enolate ion of a ketone or aldehyde to an α,β-unsaturated carbonyl compound at the β carbon. Michael was also well known in his day for incorporating thermodynamic concepts into organic chemistry, particularly for his use of entropy arguments. Perhaps his most enduring contribution to science was his central role in introducing the European model of graduate education into the United States.

Activities and honors

National Academy of Sciences (1889). Arthur Michael is credited with the 1897 first ascents of Mount Lefroy and Mount Victoria in the Canadian Rockies, along with J. Norman Collie, a fellow professor of organic chemistry. Michael Peak was named in his honor by his friend Edward Whymper in 1901. References External links Brief biography and a photograph 1853 births 1942 deaths American organic chemists Members of the United States National Academy of Sciences
Arthur Michael
Chemistry
https://en.wikipedia.org/wiki/Water%20vapor%20windows
Water vapor windows are wavelength ranges of infrared light that are only weakly absorbed by water vapor in Earth's atmosphere. Because of this weak absorption, radiation at these wavelengths can reach the Earth's surface, barring effects from other atmospheric components. The windows are closely tied to the greenhouse effect through the planet's effective emission temperature, and the water vapor continuum matters for climate change because of water vapor's role as a greenhouse gas.

Definition

Water vapor is a gas that absorbs many wavelengths of infrared (IR) energy in the Earth's atmosphere; the wavelength ranges that can partially reach the surface pass through what are called "water vapor windows". Within a window, absorption is incomplete, so electromagnetic energy can flow relatively freely. These windows are what allow astronomers to view the Universe with IR telescopes (infrared astronomy). The mid-infrared window, spanning 800–1250 cm⁻¹, is one of the more significant windows, for it has a major influence on radiation fluxes in high-humidity regions of the atmosphere. There has also been increased attention on the windows at 4700 cm⁻¹ and 6300 cm⁻¹, since their water vapor micro-windows confirm that uncertainties in window parameters occur only at the edges. Moreover, the net incoming solar shortwave radiation and the net outgoing terrestrial longwave radiation at the top of the atmosphere keep the Earth's energy balance in check.

Greenhouse Effect's Impact

Water vapor windows are also impacted by greenhouse gases, since these gases greatly accelerate the water cycle. The globally averaged value of emitted longwave radiation is 238.5 W m⁻². One may obtain the effective emission temperature of the globe by assuming that the Earth-atmosphere system radiates as a blackbody in accordance with the Stefan-Boltzmann law of blackbody radiation. The resultant temperature is -18.7 °C.
This is about 33 °C cooler than the observed average temperature of the Earth's surface, roughly +14.5 °C; the Earth's surface is thus some 33 °C warmer than it would be without the atmosphere, and the observed longwave radiation demonstrates that the greenhouse effect exists in the Earth's atmosphere. These windows also allow orbiting satellites to measure the IR energy leaving the planet, sea surface temperatures (SSTs), and other important quantities (see Electromagnetic absorption by water: Atmospheric effects). Water vapor's absorption of these IR wavelengths is mainly attributed to water being a polar molecule. Water's polarity allows it to absorb and release radiation at far-, near- and mid-infrared wavelengths; the polarity also largely shapes how water interacts in nature, permitting complexes of water such as the water dimer.

Water Vapor Continuum

Being one of the planet's most significant atmospheric gases, water vapor is important to study because of its role in climate change. Water vapor absorption mostly occurs in what is called the water vapor continuum, a combination of bands and windows that heavily influences radiation in the atmosphere. The continuum has two parts, the self-continuum and the foreign continuum; the self-continuum has a negative dependence on temperature and is significantly stronger at the edges of the windows.

Background

These windows were originally discovered by John Tyndall, who found that most of the infrared coming from the Universe is blocked and absorbed by water vapor and other greenhouse gases in the Earth's atmosphere. See also Greenhouse gas Effective temperature Infrared astronomy Electromagnetic spectrum Electromagnetic absorption by water References External links Atmospheric radiation Satellite meteorology Electromagnetic spectrum Climatology Climate change Atmosphere
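The two numerical claims above can be checked directly. A minimal sketch, using the standard value of the Stefan-Boltzmann constant (the small difference from the quoted -18.7 °C comes from rounding in the source figures):

```python
# Effective emission temperature from the Stefan-Boltzmann law: F = sigma * T^4
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
F_OLR = 238.5             # global-mean outgoing longwave radiation, W m^-2 (from the text)

T_eff = (F_OLR / SIGMA) ** 0.25
print(f"Effective temperature: {T_eff:.1f} K = {T_eff - 273.15:.1f} °C")  # ≈ -18.5 °C

# The mid-infrared window, 800-1250 cm^-1, expressed as wavelengths:
# wavelength (µm) = 10^4 / wavenumber (cm^-1)
for nu in (800, 1250):
    print(f"{nu} cm^-1  ->  {1e4 / nu:.1f} µm")
```

This places the mid-IR window at roughly 8–12.5 µm, the classic atmospheric window through which surface emission escapes to space.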
Water vapor windows
Physics
https://en.wikipedia.org/wiki/Fluid%20mechanics
Fluid mechanics is the branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them. It has applications in a wide range of disciplines, including mechanical, aerospace, civil, chemical, and biomedical engineering, as well as geophysics, oceanography, meteorology, astrophysics, and biology. It can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of the effect of forces on fluid motion. It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms; that is, it models matter from a macroscopic viewpoint rather than a microscopic one. Fluid mechanics, especially fluid dynamics, is an active field of research, typically mathematically complex. Many problems are partly or wholly unsolved and are best addressed by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow.

History

The study of fluid mechanics goes back at least to the days of ancient Greece, when Archimedes investigated fluid statics and buoyancy and formulated his famous law, now known as Archimedes' principle, which was published in his work On Floating Bodies, generally considered the first major work on fluid mechanics. The Iranian scholar Abu Rayhan Biruni and later Al-Khazini applied experimental scientific methods to fluid mechanics. Rapid advancement in fluid mechanics began with Leonardo da Vinci (observations and experiments), Evangelista Torricelli (invented the barometer), Isaac Newton (investigated viscosity) and Blaise Pascal (researched hydrostatics, formulated Pascal's law), and was continued by Daniel Bernoulli with the introduction of mathematical fluid dynamics in Hydrodynamica (1739).
Inviscid flow was further analyzed by various mathematicians (Jean le Rond d'Alembert, Joseph Louis Lagrange, Pierre-Simon Laplace, Siméon Denis Poisson) and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille and Gotthilf Hagen. Further mathematical justification was provided by Claude-Louis Navier and George Gabriel Stokes in the Navier–Stokes equations, and boundary layers were investigated (Ludwig Prandtl, Theodore von Kármán), while various scientists such as Osborne Reynolds, Andrey Kolmogorov, and Geoffrey Ingram Taylor advanced the understanding of fluid viscosity and turbulence. Main branches Fluid statics Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium, and is contrasted with fluid dynamics, the study of fluids in motion. Hydrostatics offers physical explanations for many phenomena of everyday life, such as why atmospheric pressure changes with altitude, why wood and oil float on water, and why the surface of water is always level whatever the shape of its container. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics (for example, in understanding plate tectonics and anomalies in the Earth's gravitational field), to meteorology, to medicine (in the context of blood pressure), and many other fields. Fluid dynamics Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow—the science of liquids and gases in motion. Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems.
The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, density, and temperature, as functions of space and time. It has several subdisciplines itself, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and movements on aircraft, determining the mass flow rate of petroleum through pipelines, predicting evolving weather patterns, understanding nebulae in interstellar space and modeling explosions. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics. Relationship to continuum mechanics Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress. Assumptions The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations. Fundamentally, every fluid mechanical system is assumed to obey: Conservation of mass Conservation of energy Conservation of momentum The continuum assumption For example, the assumption that mass is conserved means that for any fixed control volume (for example, a spherical volume)—enclosed by a control surface—the rate of change of the mass contained in that volume is equal to the rate at which mass is passing through the surface from outside to inside, minus the rate at which mass is passing from inside to outside. This can be expressed as an equation in integral form over the control volume. The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules.
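The control-volume mass balance can be illustrated numerically. Below is a minimal sketch (the grid size, velocity, and initial density profile are invented for illustration): a density field is advected around a periodic 1D finite-volume grid, and the total mass is checked before and after.

```python
import numpy as np

# 1D periodic finite-volume advection: d(rho)/dt + d(rho*u)/dx = 0
n, u = 200, 1.0
dx = 1.0 / n
dt = 0.4 * dx / u  # CFL-stable time step
x = (np.arange(n) + 0.5) * dx
rho = 1.0 + 0.5 * np.sin(2 * np.pi * x)  # arbitrary initial density

mass_before = rho.sum() * dx
for _ in range(500):
    # upwind flux through each cell face (u > 0)
    flux = u * rho
    rho = rho - dt / dx * (flux - np.roll(flux, 1))
mass_after = rho.sum() * dx

# each face flux is added to one cell and subtracted from its neighbour,
# so the total over the control volume is unchanged (to roundoff)
assert abs(mass_before - mass_after) < 1e-9
```

Because mass entering each cell through a face exactly equals the mass leaving the neighbouring cell through the same face, the discrete total is conserved, mirroring the integral balance described above.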
Under the continuum assumption, macroscopic (observed/measurable) properties such as density, pressure, temperature, and bulk velocity are taken to be well-defined at "infinitesimal" volume elements—small in comparison to the characteristic length scale of the system, but large in comparison to molecular length scale. Fluid properties can vary continuously from one volume element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic flows, or molecular flows on the nanoscale. Those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis, but a molecular approach (statistical mechanics) can be applied to find the fluid motion for larger Knudsen numbers. Navier–Stokes equations The Navier–Stokes equations (named after Claude-Louis Navier and George Gabriel Stokes) are differential equations that describe the force balance at a given point within a fluid. For an incompressible fluid with vector velocity field u, the Navier–Stokes equations are ∂u/∂t + (u · ∇)u = −(1/ρ)∇p + ν∇²u, together with the incompressibility condition ∇ · u = 0. These differential equations are the analogues for deformable materials to Newton's equations of motion for particles – the Navier–Stokes equations describe changes in momentum (force) in response to pressure and viscosity, parameterized by the kinematic viscosity ν. Occasionally, body forces, such as the gravitational force or Lorentz force are added to the equations. Solutions of the Navier–Stokes equations for a given physical problem must be sought with the help of calculus. In practical terms, only the simplest cases can be solved exactly in this way.
These cases generally involve non-turbulent, steady flow in which the Reynolds number is small. For more complex cases, especially those involving turbulence, such as global weather systems, aerodynamics, hydrodynamics and many more, solutions of the Navier–Stokes equations can currently only be found with the help of computers. This branch of science is called computational fluid dynamics. Inviscid and viscous fluids An inviscid fluid has no viscosity, ν = 0. In practice, an inviscid flow is an idealization, one that facilitates mathematical treatment. In fact, purely inviscid flows are only known to be realized in the case of superfluidity. Otherwise, fluids are generally viscous, a property that is often most important within a boundary layer near a solid surface, where the flow must match onto the no-slip condition at the solid. In some cases, the mathematics of a fluid mechanical system can be treated by assuming that the fluid outside of boundary layers is inviscid, and then matching its solution onto that for a thin laminar boundary layer. For fluid flow over a porous boundary, the fluid velocity can be discontinuous between the free fluid and the fluid in the porous media (this is related to the Beavers and Joseph condition). Further, it is useful at low subsonic speeds to assume that a gas is incompressible—that is, the density of the gas does not change even though the speed and static pressure change. Newtonian versus non-Newtonian fluids A Newtonian fluid (named after Isaac Newton) is defined to be a fluid whose shear stress is linearly proportional to the velocity gradient in the direction perpendicular to the plane of shear. This definition means that, regardless of the forces acting on a fluid, it continues to flow. For example, water is a Newtonian fluid, because it continues to display fluid properties no matter how much it is stirred or mixed.
A slightly less rigorous definition is that the drag of a small object being moved slowly through the fluid is proportional to the force applied to the object. (Compare friction). Important fluids, like water as well as most gases, behave—to good approximation—as a Newtonian fluid under normal conditions on Earth. By contrast, stirring a non-Newtonian fluid can leave a "hole" behind. This will gradually fill up over time—this behavior is seen in materials such as pudding, oobleck, or sand (although sand isn't strictly a fluid). Alternatively, stirring a non-Newtonian fluid can cause the viscosity to decrease, so the fluid appears "thinner" (this is seen in non-drip paints). There are many types of non-Newtonian fluids, as they are defined to be something that fails to obey a particular property—for example, most fluids with long molecular chains can react in a non-Newtonian manner. Equations for a Newtonian fluid The constant of proportionality between the viscous stress tensor and the velocity gradient is known as the viscosity. A simple equation to describe incompressible Newtonian fluid behavior is τ = μ (du/dy), where τ is the shear stress exerted by the fluid ("drag"), μ is the fluid viscosity (a constant of proportionality), and du/dy is the velocity gradient perpendicular to the direction of shear. For a Newtonian fluid, the viscosity, by definition, depends only on temperature, not on the forces acting upon it. If the fluid is incompressible the equation governing the viscous stress (in Cartesian coordinates) is τ_ij = μ (∂v_i/∂x_j + ∂v_j/∂x_i), where τ_ij is the shear stress on the i face of a fluid element in the j direction, v_i is the velocity in the i direction, and x_j is the j direction coordinate. If the fluid is not incompressible the general form for the viscous stress in a Newtonian fluid is τ_ij = μ (∂v_i/∂x_j + ∂v_j/∂x_i) + λ δ_ij (∇ · v), where λ is the second viscosity coefficient (or bulk viscosity). If a fluid does not obey this relation, it is termed a non-Newtonian fluid, of which there are several types.
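The Newtonian relation between shear stress and velocity gradient, τ = μ du/dy, translates directly into code. A small sketch with illustrative values (the viscosity figure for water and the plate geometry are assumptions for the example, not from the source):

```python
def newtonian_shear_stress(mu: float, du_dy: float) -> float:
    """tau = mu * du/dy for a Newtonian fluid in simple shear."""
    return mu * du_dy

# water at ~20 C has mu ~ 1.0e-3 Pa*s (approximate textbook value)
mu_water = 1.0e-3
# e.g. a plate moving at 1 m/s over a 1 mm fluid gap: du/dy = 1000 1/s
tau = newtonian_shear_stress(mu_water, 1.0 / 1.0e-3)
print(tau)  # 1.0 Pa

# doubling the shear rate doubles the stress: the linearity that
# defines a Newtonian fluid
assert newtonian_shear_stress(mu_water, 2000.0) == 2 * tau
```

The final assertion is the defining Newtonian property: stress scales linearly with the shear rate, with μ independent of the applied forces.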
Non-Newtonian fluids can be plastic, Bingham plastic, pseudoplastic, dilatant, thixotropic, rheopectic, or viscoelastic. In some applications, another rough broad division among fluids is made: ideal and non-ideal fluids. An ideal fluid is non-viscous and offers no resistance whatsoever to a shearing force. An ideal fluid really does not exist, but in some calculations, the assumption is justifiable. One example of this is the flow far from solid surfaces. In many cases, the viscous effects are concentrated near the solid boundaries (such as in boundary layers) while in regions of the flow field far away from the boundaries the viscous effects can be neglected and the fluid there is treated as if it were inviscid (ideal flow). When the viscosity is neglected, the term containing the viscous stress tensor in the Navier–Stokes equation vanishes. The equation in this reduced form is called the Euler equation. See also Transport phenomena Aerodynamics Applied mechanics Bernoulli's principle Communicating vessels Computational fluid dynamics Compressor map Secondary flow Different types of boundary conditions in fluid dynamics Fluid–structure interaction Immersed boundary method Stochastic Eulerian Lagrangian method Stokesian dynamics Smoothed-particle hydrodynamics References Further reading External links Free Fluid Mechanics books Annual Review of Fluid Mechanics. CFDWiki – the Computational Fluid Dynamics reference wiki. Educational Particle Image Velocimetry – resources and demonstrations Civil engineering
Fluid mechanics
Engineering
2,624
10,230,735
https://en.wikipedia.org/wiki/Panaeolus%20africanus
Panaeolus africanus is a little brown mushroom that contains irregular amounts of the hallucinogens psilocybin and psilocin. It has been found in central Africa and southern Sudan. Description This is a little brown mushroom that grows on hippopotamus and elephant dung and has black spores. The cap is up to 2 cm in diameter, gray, conic, and often with scaly cracks. It is viscid when moist and the flesh is grey to white. The gills are grayish when young and turn black with a mottled appearance as the spores mature. The stem is 4 cm long by 5 mm thick, and is pruinose at the top. The spores are black, rather variable, 13 x 9 μm, and shaped like almonds. Macroscopically, this species resembles Panaeolus semiovatus var. phalaenarum. Cap: 1.5–2 cm broad. Obtusely conic, hemispheric and rarely broadly convex in age. Surface smooth (but may crack to form scales when exposed to the sun), viscid when wet, especially in young specimens, sometimes reddish brown towards the disc, and becoming grayish brown in age. Margin incurved when young, often irregular, and non-translucent. Flesh greyish white. Gills: Attachment adnate to adnexed, sometimes sinuate, rarely subdecurrent, widely spaced, irregular, greyish at first, soon grayish black, blackish with age, and mottled as spores mature. Stem: 30–50 mm long by 4–6 mm thick, equal, firm, pruinose towards the apex. Whitish to white with pinkish tones, generally lighter than the cap, and lacking any veil remnants. Microscopic features: Spores nearly black in deposit, 11.5–14.5 by 7.9–10 micrometers, lemon-shaped, and often variable. Basidia 2- and 4-spored. Cheilocystidia clavate, 17–24 by 7.4–12 micrometers. Pleurocystidia present, with extended sharp apices, 25–60 by 10–17 (20) micrometers. Habitat and distribution Reported from central Africa to the southern regions of the Sudan. Probably more widely distributed. Found on hippopotamus and elephant dung in the spring or during the rainy seasons.
See also List of Panaeolus species References External links Mushroom John - Panaeolus africanus Entheogens Psychoactive fungi africanus Psychedelic tryptamine carriers Fungi of North America Fungi of Africa Fungus species
Panaeolus africanus
Biology
539
314,402
https://en.wikipedia.org/wiki/Liquid%20oxygen
Liquid oxygen, sometimes abbreviated as LOX or LOXygen, is a clear cyan liquid form of dioxygen . It was used as the oxidizer in the first liquid-fueled rocket invented in 1926 by Robert H. Goddard, an application which is ongoing. Physical properties Liquid oxygen has a clear cyan color and is strongly paramagnetic: it can be suspended between the poles of a powerful horseshoe magnet. Liquid oxygen has a density of , slightly denser than liquid water, and is cryogenic with a freezing point of and a boiling point of at . Liquid oxygen has an expansion ratio of 1:861 and because of this, it is used in some commercial and military aircraft as a transportable source of breathing oxygen. Because of its cryogenic nature, liquid oxygen can cause the materials it touches to become extremely brittle. Liquid oxygen is also a very powerful oxidizing agent: organic materials will burn rapidly and energetically in liquid oxygen. Further, if soaked in liquid oxygen, some materials such as coal briquettes, carbon black, etc., can detonate unpredictably from sources of ignition such as flames, sparks or impact from light blows. Petrochemicals, including asphalt, often exhibit this behavior. The tetraoxygen molecule (O4) was predicted in 1924 by Gilbert N. Lewis, who proposed it to explain why liquid oxygen defied Curie's law. Modern computer simulations indicate that, although there are no stable O4 molecules in liquid oxygen, O2 molecules do tend to associate in pairs with antiparallel spins, forming transient O4 units. Liquid nitrogen has a lower boiling point at −196 °C (77 K) than oxygen's −183 °C (90 K), and vessels containing liquid nitrogen can condense oxygen from air: when most of the nitrogen has evaporated from such a vessel, there is a risk that liquid oxygen remaining can react violently with organic material. 
Conversely, liquid nitrogen or liquid air can be oxygen-enriched by letting it stand in open air; atmospheric oxygen dissolves in it, while nitrogen evaporates preferentially. The surface tension of liquid oxygen at its normal pressure boiling point is . Uses In commerce, liquid oxygen is classified as an industrial gas and is widely used for industrial and medical purposes. Liquid oxygen is obtained from the oxygen found naturally in air by fractional distillation in a cryogenic air separation plant. Air forces have long recognized the strategic importance of liquid oxygen, both as an oxidizer and as a supply of gaseous oxygen for breathing in hospitals and high-altitude aircraft flights. In 1985, the USAF started a program of building its own oxygen-generation facilities at all major consumption bases. In rocket propellant Liquid oxygen is the most common cryogenic liquid oxidizer propellant for spacecraft rocket applications, usually in combination with liquid hydrogen, kerosene or methane. Liquid oxygen was used in the first liquid fueled rocket. The World War II V-2 missile also used liquid oxygen under the name A-Stoff and Sauerstoff. In the 1950s, during the Cold War both the United States' Redstone and Atlas rockets, and the Soviet R-7 Semyorka used liquid oxygen. Later, in the 1960s and 1970s, the ascent stages of the Apollo Saturn rockets, and the Space Shuttle main engines used liquid oxygen. 
As of 2024, many active rockets use liquid oxygen: Chinese space program CASC: Long March 5, Long March 6, Long March 7, Long March 8, Long March 12, Long March 9 (under development), Long March 10 (under development) Galactic Energy: Pallas-1 (under development) i-Space: Hyperbola-3 (under development) LandSpace: Zhuque-2 Orienspace: Gravity-2 (under development) Space Pioneer: Tianlong-2 European Space Agency: Ariane 6 Indian Space Research Organisation: GSLV JAXA (Japan): H-IIA, H3 Korea Aerospace Research Institute: Naro-1, Nuri Roscosmos (Russia): Soyuz-2, Angara United States Blue Origin: New Shepard, New Glenn (under development) Firefly Aerospace: Firefly Alpha NASA: Space Launch System Northrop Grumman: Antares 300 (under development) Rocket Lab: Electron, Neutron (under development) SpaceX: Falcon 9, Falcon Heavy, Starship United Launch Alliance: Atlas V, Vulcan History By 1845, Michael Faraday had managed to liquefy most gases then known to exist. Six gases, however, resisted every attempt at liquefaction and were known at the time as "permanent gases". They were oxygen, hydrogen, nitrogen, carbon monoxide, methane, and nitric oxide. In 1877, Louis Paul Cailletet in France and Raoul Pictet in Switzerland succeeded in producing the first droplets of liquid air. In 1883, Polish professors Zygmunt Wróblewski and Karol Olszewski produced the first measurable quantity of liquid oxygen. See also Oxygen storage Industrial gas Cryogenics Liquid hydrogen Liquid helium Liquid nitrogen List of stoffs Natterer compressor Rocket fuel Solid oxygen Tetraoxygen References Further reading Rocket oxidizers Cryogenics Oxygen Industrial gases Liquids 1883 in science
Liquid oxygen
Physics,Chemistry
1,082
3,552,700
https://en.wikipedia.org/wiki/Data%20fabrication
In scientific inquiry and academic research, data fabrication is the intentional misrepresentation of research results. As with other forms of scientific misconduct, it is the intent to deceive that marks fabrication as unethical, and thus different from scientists deceiving themselves. There are many ways data can be fabricated. Experimental data can be fabricated by reporting experiments that were never conducted, and accurate data can be manipulated or misrepresented to suit a desired outcome. One of the biggest problems with this form of scientific fraud is that "university investigations into research misconduct are often inadequate, opaque and poorly conducted. They challenge the idea that institutions can police themselves on research integrity." Sometimes intentional fabrication can be difficult to distinguish from unintentional academic incompetence or malpractice. Examples of this include the failure to account for measurement error, or the failure to adequately control experiments for any parameters being measured. Fabrication can also occur in the context of undergraduate or graduate studies wherein a student fabricates a laboratory or homework assignment. Such cheating, when discovered, is usually handled within the institution, and does not become a scandal within the larger academic community (as cheating by students seldom has any academic significance). Consequences A finding that a scientist engaged in fabrication will often mean the end to their career as a researcher. Scientific misconduct is grounds for dismissal of tenured faculty, as well as for forfeiture of research grants. 
Given the tight-knit nature of many academic communities, and the high stakes involved, researchers who are found to have committed fabrication are often effectively (and permanently) blacklisted from the profession, with reputable research organizations and universities refusing to hire them; funding sources refusing to sponsor them or their work, and journals refusing to consider any of their articles for publication. In some cases, however, especially if the researcher is senior and well-established, the academic community can close ranks to prevent injury to the scientist's career. Fabricators may also have previously earned academic credentials removed. Two cases: In 2004, Jan Hendrik Schön was stripped of his doctorate degree by the University of Konstanz after a committee formed by Bell Labs found him guilty of fabrication related to research done during his employment there. This action was undertaken even though Schön was not accused (in the matter in question) of any fabrication or other misconduct relating to his work which led to or supported the degree—the doctorate was revoked, according to University officials, solely due to Schön behaving "unworthily" in the Bell Labs affair. In 2001, peer reviewers at the academic journal Nutrition raised suspicions about a publication draft submitted by Dr. Ranjit Chandra, a world-renowned Canadian nutrition researcher at Memorial University of Newfoundland. It was confirmed through investigation that Dr. Chandra had published the results of a series of studies, stretching over more than a decade, which reported claims based on incorrectly represented analysis of, in some cases, non-existent data. By that time, Dr. Chandra had already been invited to the Order of Canada, which was subsequently revoked in 2015. Dr. Chandra retired in 2002, and Memorial University established the Marilyn Harvey Award to Recognize the Importance of Research Ethics in honour of the nurse researcher who first raised the alarm about Dr. 
Chandra's findings in the early 1990s. Not all alleged fraud is found to be so, and such debates are part of the scientific community. An interesting case is the accusation against Dr. Margaret Mead, a world-renowned anthropologist who published field work conducted early in her life, which proclaimed that Samoan culture was more relaxed and harmonious about sexual relations and mores. Her truthfulness and research process were roundly criticized by a later researcher in Samoa, Dr. Derek Freeman. More recently, Dr. Freeman's own research quality has come under scrutiny, with a hint that perhaps his own views on sexuality and his research with Elders in Samoa led him to reject Dr. Mead's findings. Dr. Freeman's allegations harmed Dr. Mead's reputation at a time when few women were scientists. See also Cyril Burt Junk science Research paper mill Retraction References Further reading Lewis-Kraus, Gideon, "Big Little Lies: Dan Ariely and Francesca Gino got famous studying dishonesty. Did they fabricate some of their work?", The New Yorker, 9 October 2023, pp. 40–53. Scientific misconduct
Data fabrication
Technology
888
14,880,045
https://en.wikipedia.org/wiki/GABRA3
Gamma-aminobutyric acid receptor subunit alpha-3 is a protein that in humans is encoded by the GABRA3 gene. Function GABA is the major inhibitory neurotransmitter in the mammalian brain, where it acts at GABAA receptors, which are ligand-gated chloride channels. Chloride conductance of these channels can be modulated by agents such as benzodiazepines that bind to the GABAA receptor. At least 16 distinct subunits of GABA-A receptors have been identified. GABA receptors are composed of 5 subunits with extracellular ligand-binding domains and ion channel domains that are integral to the membrane. Ligand binding to these receptors activates the channel. Subunit selective ligands Recent research has produced several ligands that are selective for GABAA receptors containing the α3 subunit. Subtype-selective agonists for α3 produce anxiolytic effects without sedation, amnesia, or ataxia. Selective α3 agonists also show a lack of dependence, which could make them superior to currently marketed drugs. Agonists Adipiplon PWZ-029 (partial agonist at α3, partial inverse agonist at α5) TP003 (selective full agonist at α3) Inverse agonists α3IA RNA editing The GABRA3 transcript undergoes pre-mRNA editing by the ADAR family of enzymes. A-to-I editing changes an isoleucine codon to code for a methionine residue. This editing is thought to be important for brain development, as the level of editing is low at birth and becomes almost 100% in an adult brain. The editing occurs in an RNA stem-loop found in exon 9. The structured locus was identified using a specialised bioinformatics screen of the human genome. The proposed function of the edit is to alter chloride permeability of the GABA receptor. At the time of discovery, Kv1.1 mRNA was the only previously known mammalian coding site containing both the edit sequence and the editing complementary sequence.
Type A-to-I RNA editing is catalyzed by a family of adenosine deaminases acting on RNA (ADARs) that specifically recognize adenosines within double-stranded regions of pre-mRNAs and deaminate them to inosine. Inosines are recognised as guanosine by the cell's translational machinery. There are three members of the ADAR family, ADARs 1–3, with ADAR1 and ADAR2 being the only enzymatically active members. ADAR3 is thought to have a regulatory role in the brain. ADAR1 and ADAR2 are widely expressed in tissues, while ADAR3 is restricted to the brain. The double-stranded regions of RNA are formed by base-pairing between residues in a region close to the editing site and residues usually located in a neighboring intron, though they can be in an exonic sequence. The region that base pairs with the editing region is known as an Editing Complementary Sequence (ECS). Location The editing site was previously believed to be a single nucleotide polymorphism. The editing site is found at amino acid 5 of transmembrane domain 3 of exon 9. The predicted double-stranded RNA structure is interrupted by three bulges and a mismatch at the editing site. The double-stranded region is 22 base pairs in length. As with editing of the KCNA1 gene product, the editing region and the editing complementary sequence are both found in exonic regions. In the pre-mRNA of GABRA3, both are found within exon 9. The other subunits of the receptor are thought not to be edited, as their predicted secondary structure is less likely to be edited. Also, alpha subunits 1 and 6 have a uridine instead of an adenosine at the site corresponding to the editing site in alpha subunit 3. Point mutation experiments determined that a cytidine 15 nucleotides from the editing site is the base opposite the edited base. Using a GABRA3 mini-gene that encodes exon 9, cotransfected into HEK293 cells with either ADAR1, ADAR2, or neither, it was determined that both active ADARs can efficiently edit the site in exon 9.
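The I-to-M recoding described above can be sketched as a toy translation exercise. The codon table below covers only the two codons this example needs, and the editing position follows the AUA → AUG change described in the text (written here in DNA sense for simplicity):

```python
# A-to-I editing: inosine is read as guanosine by the ribosome,
# so the edited adenosine behaves as G during translation
CODON_TABLE = {"ATA": "Ile", "ATG": "Met"}  # only the codons used here

def edit_a_to_i(codon: str, position: int) -> str:
    """Replace the adenosine at `position` with G (inosine read as G)."""
    assert codon[position] == "A", "editing targets an adenosine"
    return codon[:position] + "G" + codon[position + 1:]

unedited = "ATA"                   # isoleucine codon at the I/M site
edited = edit_a_to_i(unedited, 2)  # editing hits the third codon position
print(CODON_TABLE[unedited], "->", CODON_TABLE[edited])  # Ile -> Met
```

The single deamination at the codon's third position is enough to change the encoded amino acid, which is the structural consequence discussed below.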
Regulation The mRNA expression of the alpha 3 subunit is developmentally regulated. It is the dominant subunit in the forebrain tissue at birth, gradually decreasing in prominence as alpha subunit 1 takes over. Also, experiments with mice have demonstrated that editing of the pre-mRNA of the alpha 3 subunit increases from 50% at birth to nearly 100% in the adult. Editing levels are lower in the hippocampus. Conservation At the location corresponding to the I/M site of GABRA3 in frog and pufferfish, there is a genomically encoded methionine. In all other species, there is an isoleucine at the position. Consequences Structure Editing results in a codon change from (AUA) I to (AUG) M at the editing site. This results in translation of a methionine instead of an isoleucine at the I/M site. The amino acid change occurs in transmembrane domain 3. The 4 transmembrane domains of each of the 5 subunits that make up the receptor interact to form the receptor channel. It is likely that the change of amino acids disturbs the structure, affecting gating and inactivation of the channel. This is because methionine has a larger side chain. Function While the effect of editing on protein function is unknown, the developmental increase in editing does correspond to changes in function of the GABAA receptor. GABA binding leads to chloride channel activation, resulting in a rapid increase in concentration of the ion. Initially, the receptor is an excitatory receptor, mediating depolarisation (efflux of Cl− ions) in immature neurons, before changing to an inhibitory receptor, mediating hyperpolarisation (influx of Cl− ions) later on. GABAA converts to an inhibitory receptor from an excitatory receptor by the upregulation of the KCC2 cotransporter. This decreases the concentration of Cl− ions within cells. Therefore, the GABAA subunits are involved in determining the nature of the receptor in response to the GABA ligand.
These changes suggest that editing of the subunit is important in the developing brain by regulating the Cl− permeability of the channel during development. The unedited receptor is activated faster and deactivates slower than the edited receptor. See also GABAA receptor References Further reading External links Ion channels
GABRA3
Chemistry
1,366
26,781,470
https://en.wikipedia.org/wiki/Methallenestril
Methallenestril () (brand names Cur-men, Ercostrol, Geklimon, Novestrine, Vallestril), also known as methallenoestril () and as methallenestrol, as well as Horeau's acid, is a synthetic nonsteroidal estrogen and a derivative of allenolic acid and allenestrol (specifically, a methyl ether of it) that was formerly used to treat menstrual issues but is now no longer marketed. It is a seco-analogue of bisdehydrodoisynolic acid, and although methallenestril is potently estrogenic in rats, in humans it is only weakly so in comparison. Vallestril was a brand of methallenestril issued by G. D. Searle & Company in the 1950s. Methallenestril is taken by mouth. By the oral route, a dose of 25 mg methallenestril is approximately equivalent to 1 mg diethylstilbestrol, 4 mg dienestrol, 20 mg hexestrol, 25 mg estrone, 2.5 mg conjugated estrogens, and 0.05 mg ethinylestradiol. See also Carbestrol Fenestrel Doisynoestrol Doisynolic acid References Abandoned drugs Carboxylic acids Naphthalenes Synthetic estrogens
Methallenestril
Chemistry
295
2,146,848
https://en.wikipedia.org/wiki/Change%20of%20variables
In mathematics, a change of variables is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem. Change of variables is an operation that is related to substitution. However these are different operations, as can be seen when considering differentiation (chain rule) or integration (integration by substitution). A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth-degree polynomial x^6 − 9x^3 + 8 = 0. Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem). This particular equation, however, may be written (x^3)^2 − 9(x^3) + 8 = 0 (this is a simple case of a polynomial decomposition). Thus the equation may be simplified by defining a new variable u = x^3. Substituting x^3 by u into the polynomial gives u^2 − 9u + 8 = 0, which is just a quadratic equation with the two solutions: u = 1 and u = 8. The solutions in terms of the original variable are obtained by substituting x^3 back in for u, which gives x^3 = 1 and x^3 = 8. Then, assuming that one is interested only in real solutions, the solutions of the original equation are x = 1 and x = 2. Simple example Consider the system of equations xy + x + y = 71, x^2 y + x y^2 = 880, where x and y are positive integers with x < y. (Source: 1991 AIME) Solving this normally is not very difficult, but it may get a little tedious. However, we can rewrite the second equation as xy(x + y) = 880. Making the substitutions s = x + y and t = xy reduces the system to s + t = 71, st = 880. Solving this gives (s, t) = (16, 55) and (s, t) = (55, 16). Back-substituting the first ordered pair gives us x + y = 16, xy = 55, which gives the solution (x, y) = (5, 11). Back-substituting the second ordered pair gives us x + y = 55, xy = 16, which gives no solutions. Hence the solution that solves the system is (x, y) = (5, 11). Formal introduction Let A, B be smooth manifolds and let Φ : A → B be a C^k-diffeomorphism between them, that is: Φ is a k times continuously differentiable, bijective map from A to B with k times continuously differentiable inverse from B to A.
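The cube substitution can be checked numerically. The sketch below assumes the concrete sextic x^6 − 9x^3 + 8 = 0 as an instance of such a decomposable polynomial; it solves the quadratic in u and back-substitutes for the real roots:

```python
import math

# Solve x**6 - 9*x**3 + 8 = 0 via the substitution u = x**3,
# which turns it into the quadratic u**2 - 9*u + 8 = 0.
a, b, c = 1.0, -9.0, 8.0
disc = math.sqrt(b * b - 4 * a * c)
u_roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]  # u = 8, u = 1

# Back-substitute: each real solution x is the real cube root of u
x_roots = sorted(round(u ** (1 / 3), 9) for u in u_roots)
print(x_roots)  # [1.0, 2.0]

# Check the recovered roots against the original sextic
for x in x_roots:
    assert abs(x**6 - 9 * x**3 + 8) < 1e-6
```

The same pattern (substitute, solve the simpler problem, back-substitute) is exactly the workflow the AIME system above follows with s = x + y and t = xy.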
Here k may be any natural number (or zero), ∞ (smooth) or ω (analytic). The map Φ is called a regular coordinate transformation or regular variable substitution, where regular refers to the C^k-ness of Φ. Usually one will write x = Φ(y) to indicate the replacement of the variable x by the variable y by substituting the value of Φ in y for every occurrence of x. Other examples Coordinate transformation Some systems can be more easily solved when switching to polar coordinates. Consider for example an equation that may arise as a potential energy function for some physical problem. If one does not immediately see a solution, one might try the substitution given by x = r cos θ and y = r sin θ. Note that if θ runs outside a 2π-length interval, the map is no longer bijective. Therefore, the new variables should be limited to, for example, 0 < r and 0 ≤ θ < 2π. Notice how r = 0 is excluded, for the map is not bijective at the origin (θ can take any value there; the point will be mapped to (0, 0)). Then, replacing all occurrences of the original variables by the new expressions and using the identity sin^2 θ + cos^2 θ = 1, we obtain a transformed equation whose solutions can be readily found. Applying the inverse substitution shows which points in the original variables these correspond to. Indeed, we see that the function vanishes there, except for the origin. Note that, had we allowed r = 0, the origin would also have been a solution, though it is not a solution to the original problem. Here the bijectivity of the map is crucial. The function is always positive elsewhere, hence the absolute values. Differentiation The chain rule is used to simplify complicated differentiation. For example, consider the problem of calculating a derivative of a composite expression: introducing a new variable for the inner expression and applying the chain rule reduces the computation to two simpler derivatives. Integration Difficult integrals may often be evaluated by changing variables; this is enabled by the substitution rule and is analogous to the use of the chain rule above. Difficult integrals may also be solved by simplifying the integral using a change of variables given by the corresponding Jacobian matrix and determinant.
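In standard notation, the multidimensional transformation formula referred to here reads as follows (a sketch of the usual statement, with Φ the diffeomorphism, DΦ its Jacobian matrix, and Ω the open domain):

```latex
\int_{\Phi(\Omega)} f(\mathbf{y}) \, d\mathbf{y}
  = \int_{\Omega} f\bigl(\Phi(\mathbf{x})\bigr)\,
    \bigl|\det D\Phi(\mathbf{x})\bigr| \, d\mathbf{x}
```

The absolute value of the Jacobian determinant accounts for the local stretching or compression of volume under the change of variables.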
Using the Jacobian determinant and the corresponding change of variable that it gives is the basis of coordinate systems such as polar, cylindrical, and spherical coordinate systems. Change of variables formula in terms of Lebesgue measure The following theorem allows us to relate integrals with respect to Lebesgue measure to an equivalent integral with respect to the pullback measure under a parameterization G. The proof is due to approximations of the Jordan content. Suppose that Ω is an open subset of R^n and G : Ω → R^n is a C^1 diffeomorphism. If f is a Lebesgue measurable function on G(Ω), then f ∘ G is Lebesgue measurable on Ω. If f ≥ 0 or f ∈ L^1(G(Ω), m), then ∫_{G(Ω)} f(y) dy = ∫_Ω f(G(x)) |det D_x G| dx. If E ⊆ Ω and E is Lebesgue measurable, then G(E) is Lebesgue measurable, and m(G(E)) = ∫_E |det D_x G| dx. As a corollary of this theorem, we may compute the Radon–Nikodym derivatives of both the pullback and pushforward measures of m under G. Pullback measure and transformation formula The pullback measure in terms of a transformation G is defined as G^* μ(A) := μ(G(A)). The change of variables formula for pullback measures is ∫_Ω (f ∘ G) d(G^* μ) = ∫_{G(Ω)} f dμ. Pushforward measure and transformation formula The pushforward measure in terms of a transformation G is defined as G_* μ(A) := μ(G^{-1}(A)). The change of variables formula for pushforward measures is ∫_{G(Ω)} f d(G_* μ) = ∫_Ω (f ∘ G) dμ. As a corollary of the change of variables formula for Lebesgue measure, we have the Radon–Nikodym derivative of the pullback with respect to Lebesgue measure, d(G^* m)/dm (x) = |det D_x G|, and the Radon–Nikodym derivative of the pushforward with respect to Lebesgue measure, d(G_* m)/dm (y) = |det D_y G^{-1}|. From which we may obtain the change of variables formula for pullback measure, ∫_{G(Ω)} f(y) dy = ∫_Ω (f ∘ G)(x) |det D_x G| dx, and the change of variables formula for pushforward measure, ∫_Ω f(x) dx = ∫_{G(Ω)} (f ∘ G^{-1})(y) |det D_y G^{-1}| dy. Differential equations Variable changes for differentiation and integration are taught in elementary calculus and the steps are rarely carried out in full. The very broad use of variable changes is apparent when considering differential equations, where the independent variables may be changed using the chain rule or the dependent variables are changed, resulting in some differentiation to be carried out.
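Changing the independent variable of a differential equation via the chain rule can be illustrated with the common substitution x = e^t, used for instance on Cauchy–Euler equations; this concrete example is an illustration, not one drawn from the original text:

```latex
x = e^{t}: \qquad
x \frac{dy}{dx} = \frac{dy}{dt}, \qquad
x^{2} \frac{d^{2}y}{dx^{2}} = \frac{d^{2}y}{dt^{2}} - \frac{dy}{dt}
```

Under this substitution an equation with variable coefficients of the form x^2 y'' + axy' + by = 0 becomes a constant-coefficient equation in t, which is far easier to solve.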
Exotic changes, such as the mingling of dependent and independent variables in point and contact transformations, can be very complicated but allow much freedom. Very often, a general form for a change is substituted into a problem and parameters picked along the way to best simplify the problem. Scaling and shifting Probably the simplest change is the scaling and shifting of variables, that is replacing them with new variables that are "stretched" and "moved" by constant amounts. This is very common in practical applications to get physical parameters out of problems. For an nth order derivative, the change simply results in where This may be shown readily through the chain rule and linearity of differentiation. This change is very common in practical applications to get physical parameters out of problems, for example, the boundary value problem describes parallel fluid flow between flat solid walls separated by a distance δ; μ is the viscosity and the pressure gradient, both constants. By scaling the variables the problem becomes where Scaling is useful for many reasons. It simplifies analysis both by reducing the number of parameters and by simply making the problem neater. Proper scaling may normalize variables, that is make them have a sensible unitless range such as 0 to 1. Finally, if a problem mandates numeric solution, the fewer the parameters the fewer the number of computations. Momentum vs. velocity Consider a system of equations for a given function . The mass can be eliminated by the (trivial) substitution . Clearly this is a bijective map from to . Under the substitution the system becomes Lagrangian mechanics Given a force field , Newton's equations of motion are Lagrange examined how these equations of motion change under an arbitrary substitution of variables , He found that the equations are equivalent to Newton's equations for the function , where T is the kinetic, and V the potential energy. 
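The equations Lagrange obtained, referred to above, are the Euler–Lagrange equations for the Lagrangian L = T − V; in their standard form (stated here as a sketch, with q_i the new generalized coordinates):

```latex
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_i}
  - \frac{\partial L}{\partial q_i} = 0,
\qquad L(q, \dot{q}, t) = T - V
```

These hold in any regular coordinate system, which is precisely what makes the arbitrary substitution of variables legitimate.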
In fact, when the substitution is chosen well (exploiting for example symmetries and constraints of the system) these equations are much easier to solve than Newton's equations in Cartesian coordinates. See also Change of variables (PDE) Change of variables for probability densities Substitution property of equality Universal instantiation References Elementary algebra Mathematical physics
Change of variables
Physics,Mathematics
1,601
34,289,263
https://en.wikipedia.org/wiki/List%20of%20spectroscopists
Articles about notable spectroscopists. A William de Wiveleslie Abney David Alter Anders Jonas Ångström B Roman Balabin Johann Balmer G. Michael Bancroft Charles Glover Barkla Nikolay Basov Jane Blankenship Nicolaas Bloembergen Niels Bohr Frederick Sumner Brackett (1896–1988), discovered the hydrogen Brackett series. Bertram Brockhouse John Browning Robert Bunsen C Miguel A. Catalán (1894–1957) Arthur Compton William Crookes Robert Curl Clair Cameron Patterson D Louis de Broglie Peter Debye E Richard R. Ernst F Edward Robert Festing James Franck Willis H. Flygare G Roy J. Glauber Walter Gordy G. V. Pawan Kumar H August Hagenbach John L. Hall Theodor W. Hänsch Werner Heisenberg Brian Henderson (physicist) John Herschel William Herschel Gerhard Herzberg Victor Francis Hess Antony Hewish William Huggins I J Derek Jackson (1906–1982), made the first spectroscopic determination of a nuclear magnetic spin. Pierre Janssen K Michael Kasha Alfred Kastler Gustav Kirchhoff Harry Kroto L Willis Eugene Lamb George Downing Liveing (1827–1924) Norman Lockyer Derek Long Hendrik Lorentz Theodore Lyman M Alfred Maddock Gary E. Martin Thomas Melvill Allan Merchant, spectral properties of the Balescu–Lenard equation Robert Andrews Millikan Edward Morley Rudolf Mössbauer N O P August Herman Pfund (1879–1949), discovered the Pfund hydrogen spectral series. Max Planck (1858–1947) Bill Price Aleksandr Mikhailovich Prokhorov Q R Isidor Isaac Rabi Sir Chandrasekhara Venkata Raman Norman Foster Ramsey Jr. Carl David Tolmé Runge (1856–1927), Runge–Kutta method, Runge's phenomenon, Laplace–Runge–Lenz vector Johannes Rydberg Martin Ryle S Sérgio Pereira da Silva Porto Victor Schumann (1841–1913), discovered the vacuum ultraviolet. Arthur Leonard Schawlow Angelo Secchi Kai Siegbahn Manne Siegbahn Richard Smalley Johannes Stark Miriam Michael Stimson (1913–2002), pioneered the KBr disk technique for infrared spectroscopy of solid compounds.
T William Fox Talbot Matthew Pothen Thekaekara Charles Hard Townes U V Joseph von Fraunhofer Philipp Eduard Anton von Lenard W Charles Wheatstone Kurt Wüthrich X Xiaoliang Sunney Xie (born 1962), chemist at Harvard University, pioneer in the field of single molecule microscopy and coherent anti-Stokes Raman spectroscopy microscopy. Y Charles Augustus Young (1834–1908), solar spectroscopist and astronomer. Z Pieter Zeeman Ahmed Zewail See also List of chemists List of physicists Lists of physicists by field Spectroscopy
List of spectroscopists
Physics,Chemistry
578
2,216,689
https://en.wikipedia.org/wiki/Controlled%20explosion
A controlled explosion is the deliberate detonation of an explosive, generally as a means of demolishing a building or destroying a second improvised or manufactured explosive device. Demolition During demolition, controlled explosions can be used to collapse the exterior walls of a building inward, limiting damage to neighboring buildings. This requires careful placement of explosive charges on supports and load-bearing walls. Bomb disposal Controlled explosions are used by bomb disposal teams to detonate a bomb at a specific time in order to limit damage to nearby structures, vehicles, and people. The bomb may be moved to a clear location away from bystanders or buildings before detonation unless it has (or is suspected of having) an anti-handling mechanism. If an anti-handling mechanism is found, the bomb may be left in place while the site around it is cleared. If it is not possible to clear the area or move the bomb, a containment vessel may be used to limit the damage from the explosion or to transport the device to a cleared area. Once the area is clear, a second explosive or shaped charge is placed on the device by explosive ordnance disposal (EOD or police bomb disposal) personnel, either manually or with a bomb disposal robot, and detonated remotely. The controlled explosion should also detonate or disable the suspected bomb. See also Bomb disposal Demolition DEMIRA EOD References External links BBC News summary of "controlled explosions" Law enforcement techniques Bomb disposal
Controlled explosion
Chemistry
286
5,341,225
https://en.wikipedia.org/wiki/Headbanging
Headbanging is the act of violently shaking one's head in rhythm with music. It is common in rock, punk, heavy metal music and dubstep, where headbanging is often used by musicians on stage. Headbanging is also common in traditional Islamic Sufi music traditions such as Qawwali in the Indian subcontinent and Iran. History Sufi music Headbanging has been common in Islamic devotional Sufi music traditions dating back centuries, such as the Indian subcontinent's 600-year-old Qawwali tradition, and among dervishes in Iran's Kurdistan Province. Qawwali performances, particularly at Sufi shrines in the Indian subcontinent, usually in honour of Allah, Islamic prophets, or Sufi saints, often have performers and spectators induced into a trance-like state and headbanging in a manner similar to metal and rock concerts. A popular song often performed by Sufis and fakirs in the Indian subcontinent is the 600-year-old "Dama Dam Mast Qalandar" (in honour of 13th-century Sufi saint Lal Shahbaz Qalandar), which often has performers and spectators rapidly headbanging to the beats of naukat drum sounds. The most well-known Qawwali performer in modern times was the late Pakistani singer Nusrat Fateh Ali Khan, whose performances often induced trance-like headbanging experiences in the late 20th century. Khan's popularity in the Indian subcontinent led to the emergence of fusion genres such as Sufi rock and techno qawwali in South Asian popular music (Pakistani pop, Indi-pop, Bollywood music and British-Asian music) in the 1990s, which combine the traditional trance-like zikr headbanging of Qawwali with elements of modern rock, techno or dance music; this has occasionally been met with criticism and controversy from traditional Sufi and Qawwali circles. Rock music The origin of the term "headbanging" is contested. It is possible that the term "headbanger" was coined during Led Zeppelin's first US tour in 1969.
During a show at the Boston Tea Party concert venue, audience members in the first row were banging their heads against the stage in rhythm with the music. Furthermore, in concert footage of Led Zeppelin performing at the Royal Albert Hall on January 9, 1970, included on the Led Zeppelin DVD released in 2003, the front row can be seen headbanging throughout the performance. However, an instance of headbanging prior to the alleged coining of the term can be seen during Cream's Farewell Concert in November 1968, also at the Royal Albert Hall. Specifically, during the performance of "Sunshine of Your Love", front row audience members with particularly large amounts of hair are seen quickly bobbing their heads to the music in a fashion typically associated with modern headbanging. Ozzy Osbourne and Geezer Butler of Black Sabbath are among the first documented headbangers, as can be seen in footage of their gig in Paris in 1970. In the early 1970s, Status Quo was one of the first hard rock bands to headbang on stage. Lemmy from Motörhead, however, said in an interview in the documentary The Decline of Western Civilization II: The Metal Years that the term "headbanger" may have originated in the band's name, as in "Motorheadbanger". The practice itself and its association with the rock genre was popularized by guitarist Angus Young of the band AC/DC. Early televised performances in the 1950s of Jerry Lee Lewis depict young male fans who had grown their hair in the fashion of Lewis, whose front locks would fall in front of his face. Lewis would continuously flip his hair back away from his face, prompting the fans to mimic the movement in rapid repetition in a fashion resembling headbanging. Parrots At least one parrot, a cockatoo named Snowball, developed the habit of headbanging to music, causing something of an Internet sensation. Scientists were intrigued, as untrained dancing among animals is rare.
Health issues In the mid-1980s Metallica bassist Cliff Burton complained repeatedly about neck pain associated with his almost constant and heavy headbanging during concerts or even rehearsals. In 2005, Evanescence guitarist Terry Balsamo incurred a stroke which doctors postulated may have been caused by frequent headbanging. In 2007, Irish singer and former Moloko vocalist Róisín Murphy suffered an eye injury during a performance of her song "Primitive" when she headbanged into a chair on stage. In 2009, Slayer bassist/vocalist Tom Araya began experiencing spinal problems due to his aggressive form of headbanging, and had to undergo anterior cervical discectomy and fusion. After recuperating from the surgery, he can no longer headbang. In 2011, Megadeth guitarist Dave Mustaine said that his neck and spine condition, known as spinal stenosis, was caused by many years of headbanging. That same year, Stone Sour drummer Roy Mayorga suffered a stroke as a result of his frequent headbanging while drumming. The event led to him having to re-learn how to play drums. Slipknot sampler Craig Jones once suffered from whiplash after an extended case of powerful headbanging. Several case reports can be found in the medical literature which connect excessive headbanging to aneurysms and hematomas within the brain and damage to the arteries in the neck which supply the brain. More specifically, cases with damage to the basilar artery, the carotid artery and the vertebral artery have been reported. Several case reports also associated headbanging with subdural hematoma, sometimes fatal, and mediastinal emphysema similar to shaken baby syndrome. An observational study comparing headbanging to non-headbanging teenagers in a dance marathon concluded that the activity is associated with pain in varying parts of the body, most notably the neck, where it manifests as whiplash. 
See also Chronic traumatic encephalopathy Corpsegrinder Crowd surfing Dutty Wine Hard rock Headbangers Ball Headbanger (disambiguation) Heavy metal music Heavy metal subculture List of dances Moshing Punk rock Sign of the horns Stage diving Yoshiki (musician) References Syllabus-free dance Rock music Heavy metal subculture Gestures Qawwali
Headbanging
Biology
1,294
78,193,931
https://en.wikipedia.org/wiki/ENX-105
ENX-105 is an investigational new drug being developed by Engrail Therapeutics for the treatment of post-traumatic stress disorder (PTSD). It is currently in the preclinical stage, trailing behind a closely related Engrail compound, ENX-104, which is focused on depression and anhedonia. The drug is described as a dopamine D2 and D3 receptor antagonist and serotonin 5-HT1A and 5-HT2A receptor agonist. In terms of its serotonin 5-HT2A receptor agonism, it is said to not produce the head-twitch response in animals and hence to be putatively non-hallucinogenic. As with ENX-104, ENX-105 is a deuterated enantiomer of nemonapride. References Amines Benzamides Benzyl compounds Chlorobenzene derivatives D2 antagonists D3 antagonists Deuterated compounds Experimental antidepressants Experimental non-hallucinogens Methoxy compounds Non-hallucinogenic 5-HT2A receptor agonists Pyrrolidines
ENX-105
Chemistry
239
57,537,274
https://en.wikipedia.org/wiki/Alina%20Carmen%20Cojocaru
Alina Carmen Cojocaru is a Romanian mathematician who works in number theory and is known for her research on elliptic curves, arithmetic geometry, and sieve theory. She is a professor of mathematics at the University of Illinois at Chicago and a researcher in the Institute of Mathematics of the Romanian Academy. Cojocaru earned her Ph.D. from Queen's University in Kingston, Ontario, in 2002. Her dissertation, Cyclicity of Elliptic Curves Modulo p, was jointly supervised by M. Ram Murty and Ernst Kani. Cojocaru was elected to be an American Mathematical Society (AMS) Council member at large from February 1st, 2023, to January 31st, 2024. Books Cojocaru is an author of the book An Introduction to Sieve Methods and their Applications (with M. Ram Murty, London Mathematical Society Student Texts 66, Cambridge University Press, 2006). She is also an editor of Women in Numbers: Research Directions in Number Theory (with Kristin Lauter, Rachel Justine Pries, and Renate Scheidler, Fields Institute Communications 60, American Mathematical Society, 2011). Scholar: A Scientific Celebration Highlighting Open Lines of Arithmetic Research: Conference in Honour of M. Ram Murty's Mathematical Legacy on His 60th Birthday (with C. David and F. Pappalardi, Contemporary Mathematics 655, American Mathematical Society, 2016) Selected publications Cojocaru, Alina Carmen; Murty, M. Ram (2004), "Cyclicity of elliptic curves modulo and elliptic curve analogues of Linnik's problem". Math. Ann. 330, no. 3, 601–625. MR 2099195. Cojocaru, Alina Carmen (2005), "On the surjectivity of the Galois representations associated to non-CM elliptic curves. With an appendix by Ernst Kani". Canad. Math. Bull. 48, no. 1, 16–31. MR 2118760. Cojocaru, Alina Carmen; Hall, Chris (2005). "Uniform results for Serre's theorem for elliptic curves". Int. Math. Res. Not., no. 50, 3065–3080. MR 2189500. 
References External links Home page Year of birth missing (living people) Living people 21st-century Romanian mathematicians 21st-century American women mathematicians 21st-century American mathematicians University of Illinois Chicago faculty Number theorists Queen's University at Kingston alumni
Alina Carmen Cojocaru
Mathematics
514
70,180,865
https://en.wikipedia.org/wiki/Die%20shot
A die shot or die photograph is a photo or recording of the layout of an integrated circuit, showing its design with any packaging removed. A die shot can be compared with the cross-section of an (almost) two-dimensional computer chip, on which the design and construction of various tracks and components can be clearly seen. Due to the high complexity of modern computer chips, die shots are often displayed colourfully, with various parts coloured by diffraction within the parts of the die, using special lighting or even manually. Methods A die shot is a picture of a computer chip without its housing or packaging. There are two ways to capture such a chip "naked" in a photo: by either taking the photo before the chip is packaged or by removing its package. Avoiding the package Taking a photo before the chip ends up in a housing is typically reserved for the chip manufacturer, because the chip is packaged fairly quickly in the production process to protect the sensitive, very small parts against external influences. However, manufacturers may be reluctant to share die shots to prevent competitors from easily gaining insight into the technological progress and complexity of a chip. Removing the package Removing the housing from a chip is typically done with a chemical process called decapping; a chip is so small and its parts are so microscopic that opening a housing (also named delidding) with tools such as saws, sanders or dremels could damage the chip in such a way that a die shot is no longer possible or is less useful. For example, sulphuric acid can be used to dissolve the plastic housing of a chip. Chips are immersed in a glass jar with sulphuric acid, after which the sulphuric acid is boiled for up to 45 minutes at a temperature of 337 degrees Celsius. Once the plastic housing has decayed, there may be other processes to remove leftover carbon, such as a hot bath of concentrated nitric acid.
After this, the contents of a chip are relatively exposed and a picture can be made of the chip with macrophotography or microphotography. Legal aspects The circuit layout of integrated circuits is generally not subject to copyright, because it is functional in nature. Since die shots are usually much too low in resolution and picture many layers at once, it is impossible to reproduce the pictured circuits using a die shot. Therefore, die shots are not restricted under laws like the Semiconductor Chip Protection Act of 1984. Gallery See also Die (integrated circuit) References External links "Hardwarecop CPU Museum" - a website showing die shots sorted per chip series. CPU Graveyard dieshots - wiki with over 1000 die shots. YouTube video explaining how one can take a die shot. Integrated circuits Photography
Die shot
Technology,Engineering
550
40,384,270
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20S5
The Samsung Galaxy S5 is an Android-based smartphone unveiled, produced, released and marketed by Samsung Electronics as part of the Samsung Galaxy S series. Unveiled on 24 February 2014 at Mobile World Congress in Barcelona, Spain, it was released on 11 April 2014 in 150 countries as the immediate successor to the Galaxy S4. As with the S4, the S5 is an evolution of the prior year's model, placing a particular emphasis on an improved build with a textured rear cover, IP67 certification for dust and water resistance, a more refined user experience, new security features such as a fingerprint reader and private mode, expanded health-related features including a built-in heart rate monitor, a USB 3.0 port, and an updated camera featuring speedy auto-focus through phase detection. The video resolution has been upgraded to 2160p (4K) and the frame rate at 1080p has been doubled to 60 for a smoother appearance. The Galaxy S5 received mostly positive reviews; the phone was praised for its display, hardware, camera, long battery life, and incorporating water resistance while retaining a removable battery and MicroSD card slot, making it the final model in the series with a removable battery. However, the S5 was criticized for bloatware, its unresponsive fingerprint scanner, and its ostensibly non-"premium" polycarbonate body in light of rival smartphones' metal or glass bodies. In August 2015, following the release of its then-latest flagship, the Galaxy S6, Samsung released an updated version called the "Galaxy S5 Neo", which has an Exynos 7 Octa (7580) processor clocked at 1.6 GHz. It has 2 GB of RAM, 16 GB of internal storage, a USB 2.0 port, comes with Android 5.0.2 "Lollipop", and lacks fingerprint unlocking and 4K (2160p) video recording. Release date The Galaxy S5 was unveiled on 24 February 2014 as part of the company's presentation at Mobile World Congress in Barcelona, Spain.
Samsung Electronics president JK Shin explained that consumers did not want a phone dependent on "eye-popping" or "complex" technology, but one with "beautiful design and performance", a "simple, yet powerful camera", "faster and seamless connectivity", and fitness-oriented features. Samsung announced that it would release the S5 on 11 April 2014 in 150 countries—including the United Kingdom and United States. On 18 June 2014, Samsung unveiled an LTE-Advanced version of the S5, exclusively released in South Korea. Unlike other models, the LTE-A version also upgrades the display to a quad HD, 1440p panel. Shortly after the release of the S5, it was discovered that some Samsung Galaxy S5 devices—particularly those on Verizon Wireless—were suffering from a major bug that caused the device's camera hardware to permanently cease functioning, and display a "Camera failed" error on-screen whenever users attempted to use the camera. Both Samsung and Verizon confirmed the issue, which affected a limited number of Galaxy S5 devices; Samsung instructed users affected by the bug to contact the company or their carrier to have their phone replaced under warranty. Specifications Hardware and design Changes The design of the S5 evolves upon the design of the S4. It features a rounded, polycarbonate chassis carrying a "modern glam" look with a dot pattern similar to that on the 2012 Google Nexus 7 tablet computer, faux metal trim and a removable rear cover. Unlike past models, the S5's rear cover uses a higher quality soft plastic and is dimpled to improve grip. The S5 is IP67 certified for dust and water resistance. As such, the phone is able to be submerged in up to 1 metre of water for up to 30 minutes. The S5's Micro-USB 3.0 port uses a removable cover. The S5 is available in Charcoal Black, Electric Blue, Copper Gold, and Shimmery White color finishes.
The S5's screen is a 1080p Super AMOLED panel, which is slightly larger than that of the S4, and allows for automatic brightness and gamut adjustments. Its minimum dimmable brightness level has been lowered to facilitate vision in dark environments without the need for third-party screen filter overlay apps, which would interfere with screen captures. Front panel Below the screen are three buttons. The physical "Home" button in the centre contains a swipe-based fingerprint scanner. There is a "Recent apps" key (also known as "task key") on the left side and a "Back" key on the right side of the home button. As with all main models from the Galaxy S series, the navigation buttons left and right of the home button are capacitive. However, in accordance with Android 4.0 human interface guidelines, the S5 no longer uses a "Menu" key like its predecessors (left side of home button), although its button layout is still reversed in comparison to other Android devices (such as the HTC One X and Galaxy Nexus, whose "Back" buttons are to the left of "Home"). However, holding the new task key for one second simulates pressing the menu key. The S5 comes with an LED light at the top left of the bezel, which can be set to notify the user with a blue light when calls and texts are received while the device is not in use and the screen is turned off. It also notifies users with a red light when the phone is charging, and a blinking red light when the battery is low. This red light turns green when the phone is charged to 100% of battery capacity.
Camera The S5 includes a 16 megapixel (5312×2988) rear-facing camera, which offers 4K (2160p) video recording at 30 fps, phase detection autofocus (which Samsung claims to be able to focus in around 0.3 seconds), real-time HDR photos and video (of which the latter is available for the first time in a Samsung flagship mobile device), a BSI CMOS image sensor with Samsung's "Isocell" technology, which isolates the individual pixels inside the sensor to improve its ability to capture light and reduce crosstalk. The model number of the image sensor is S5K2P2XX. Compared to conventional BSI sensors, this reduces electrical crosstalk by about 30 percent. The camera is also able to record 1080p@60fps for smoother real-time playback. It can also record slow-motion videos with 720p at 120fps, but only encoded using the menial method. The front camera uses a Samsung CMOS S5K8B1YX03 image sensor, an aperture of 2.4 and captures both photos and videos at 1080p; the latter at 30 frames per second. Related section: Camera and gallery software. Miscellaneous Next to the camera's flash on the rear of the device is a new heart rate sensor, which can be used as part of the S Health software. The top of the device has an IR blaster and headphone jack. The IR blaster is a transmitter only and it has a built-in database of devices that can be controlled by Samsung's Smart Remote application. The Galaxy S5 lacks the thermometer (temperature) and hygrometer (humidity) sensors that both 2013's flagships, the Galaxy S4 and Galaxy Note 3 were equipped with. Internal specifications The S5 is powered by a 2.5 GHz quad-core Snapdragon 801 system-on-chip with 2 GB of RAM. Although not mentioned during the keynote presentation, a variant (SM-G900H) with an octa-core Exynos 5422 system-on-chip was also released in multiple markets. Like the previous model, it uses two clusters of four cores; four Cortex-A15 cores at 2.1 GHz, and four Cortex-A7 cores at 1.5 GHz. 
Depending on resource usage, the SoC can use the power-efficient A7 cores for lighter processing loads, and switch to the A15 cores for more demanding loads. Unlike previous iterations, however, the Exynos 5422 can run both sets of cores at the same time instead of only one at a time. Battery The S5 contains a 2800 mAh lithium ion battery, which is user-replaceable despite IP67 water resistance. It is Qi compatible (requires an optional Wireless Charging Cover) and also contains an "Ultra Power Saving" mode to extend further battery life; when enabled, all non-essential processes are disabled, and the screen switches to grey scale rendering. Samsung claims that with Ultra Power Saving on, an S5 with 10% charge remaining can last for an additional 24 hours in standby mode. Another improvement in power efficiency comes from the use of Qualcomm's envelope tracker, which reduces the power used in connectivity. Software The S5 shipped with Android 4.4.2 KitKat but has received updates, the most recent being 6.0.1 Marshmallow. It has Samsung's TouchWiz software, which for the S5 has a flatter, more geometric look than that found on the S4. Certain aspects of the changes were influenced by a recent patent licensing deal with Google, which requires that Samsung's TouchWiz interface follow the design of "stock" Android more closely. The S5 adds the Galaxy Note 3's "My Magazine" feature to the leftmost page on the home screen, the Settings menu was updated with a new grid-based layout, a Kids' Mode was added, while the S Health app was given expanded functionality, integrating with the new heart rate sensor on the device, along with the new Gear 2 smartwatch and Gear Fit activity tracker. The "Download Booster" tool allows internet usage to be split across LTE and Wi-Fi to improve download speed. Due to carrier policies, Download Booster was not available on Galaxy S5 models released in the United States running KitKat 4.4.2, excluding T-Mobile US and U.S. Cellular. 
The preloaded telephone application is equipped with additional options for noise cancellation, call holding, volume boosting and the ability to personalize the call sound. A disk space analyzer tool is included. The Galaxy S5 inherits the interaction functionality of the Galaxy S4, such as Air View finger hovering, air gesture and motion gesture controls, and Samsung SmartScreen. For Air View, a self-capacitive touch screen layer is used to detect the hovering finger. The "Quick Glance" feature from the Galaxy S4 was succeeded by "Air wake-up": hovering above the front proximity sensor next to the "Samsung" wordmark wakes the phone from stand-by mode rather than just showing the clock and status. Security The S5 contains a number of new security features. The fingerprint reader can be used to unlock the phone, while an SDK is available so third-party developers may offer fingerprint-oriented functionality in their apps; for example, PayPal integrated support for the fingerprint sensor to authenticate online purchases. The S5 also adds "Private Mode", which allows users to maintain hidden apps and file folders that cannot be accessed without additional authentication. Camera and gallery software The camera app was updated with a new "Shot & More" menu that captures eight still photos in quick succession, stored in a single JPG file, allowing users to make edits to photos after they are taken using Samsung's gallery software. It incorporates camera features from earlier Samsung phones (Best Photo, Best Face, Drama Shot, Eraser and Action panorama) into a single camera mode. 
The camera software is also equipped with a new selective focus mode that captures multiple photos at different focus settings (also known as focus bracketing, albeit stored in a single file), allowing the user to choose the desired focus setting (near focus, far focus, pan focus) at any time after capture and to export the photo at the desired focus setting into a separate file. Additional camera modes are downloadable from a store provided by Samsung. The camera software is equipped with a remote viewfinder feature, which allows using the display of one unit as the viewfinder of a second phone paired through Wi-Fi Direct. Besides the S5, only the Galaxy S4, Galaxy S4 Zoom, Galaxy K Zoom, Galaxy Note 3 and Galaxy Alpha are equipped with this functionality. Two new features, Virtual Tour and Spherical (360°) Panorama, have been added; they respectively stitch captured photos into a navigable 3D environment and into a spherical view of the surroundings, both inside the preloaded gallery software. 360-degree panoramas are navigable both by touch and by the gyroscope sensor (device movement). The camera setting shortcuts on the left (horizontal) side of the screen are customizable, allowing the user to select four shortcuts to frequently accessed camera settings. The camera software has been criticized for over-sharpening photos in post-processing. The gallery software is able to show Exif metadata of pictures such as exposure time, exposure value, light sensitivity (ISO), aperture, focal length, flash status and a histogram. Updates An update to Android 5.0 "Lollipop" was first released for S5 models in Poland in December 2014. The update incorporates performance improvements, an updated "Recent apps" view that utilizes a card-based layout, access to notifications on the lock screen, and modifications to the TouchWiz interface to adhere to the Material design language. 
In April 2016, Samsung released an update to Android 6.0.1 "Marshmallow" for international S5 models. It enables new features such as "Google Now on Tap", which allows users to perform searches within the context of information currently being displayed on-screen, and "Doze", which optimizes battery usage when the device is not being physically handled. Using a custom ROM such as LineageOS, Android 13 can be installed on Samsung Galaxy S5 devices. Variants Rugged variants Samsung released two rugged versions of the S5, the S5 Active and S5 Sport. Both models feature a version of the S5's design with a full set of physical navigation buttons and do not include the fingerprint scanner, but are otherwise identical to standard models of the S5. Both devices also include an exclusive "Activity Zone" app, which contains a barometer, compass, and stopwatch. The S5 Active adds an "Active Key" to the side of the device, which can be configured to launch certain apps on short and long presses; by default, the button launches Activity Zone. The Sprint S5 Sport has additional Sprint Fit Live software, which acts as a hub for health-oriented content and S Health, along with the Under Armour-owned MapMyFitness MVP service and the music streaming service Spotify—the device comes with complimentary subscriptions to both services. Both models come in different color schemes (grey, camouflage green, and red for the S5 Active, and blue and red for the S5 Sport), and the S5 Sport is slightly lighter in weight than the S5 Active, at instead of . The S5 Active and S5 Sport were released in the United States in June 2014, and are exclusive to AT&T and Sprint respectively. The S5 Active was released in Canada in October 2014. In June 2014, Samsung also released a dual SIM version of the Galaxy S5, called Samsung Galaxy S5 Duos, model SM-G900FD. The Duos has the LTE specification of the Galaxy S5. Neo variant A hardware revision called the Galaxy S5 Neo was quietly released in August 2015. 
The S5 Neo is a lower-cost variant of the original Galaxy S5 that downgrades the SoC to an Exynos 7580 Octa system-on-chip while improving the front camera to a 5-megapixel unit. Other changes include the removal of the fingerprint sensor and the replacement of the USB 3.0 port and its water-resistant cover with a conventional USB 2.0 port; the cover had been a source of complaints from users of the previous year's model, who found that it broke easily with regular use. A noticeable design distinction is the finer dot pattern on the removable rear cover. Samsung also released an update to Android 7.0 Nougat with Samsung Experience for the S5 Neo, which the original S5 did not receive. The S5 Neo was initially made available in Europe (SM-G903F) and Canada (SM-G903W). While the original Galaxy S5 is able to record videos at 2160p at 30fps, 1080p at 60fps and 720p at 120fps, the camera of the Galaxy S5 Neo can only record videos at 1080p at up to 30fps and lacks slow motion video recording entirely. The Galaxy S5 Neo also lacks wireless charging support entirely and cannot be retrofitted with a wireless charging back cover. Comparison table Reception Critical reception The S5 received mostly positive reviews; critics acknowledged that the S5 was primarily a technological evolution of its predecessor with few changes of significance. Although praised for an improved appearance and build quality, the design of the S5 was panned for retaining a nearly identical appearance and construction to the S4, and for not using higher quality materials such as metal or a higher-quality plastic. The display of the S5 was praised for its high quality, for not being as oversaturated as previous models, and for having a wide range of viewing angles, brightness states, and gamut settings to fine-tune its appearance. 
TechRadar also noticed that, despite the high power of its processor, some apps and interface functions suffered from performance issues, indicating that the S5's operating system may not have been completely optimized for its system-on-chip. The S5's interface was praised for having a cleaner appearance than previous iterations; however, it was still criticized for containing too many unnecessary features and settings. The S5's camera received mostly positive reviews for the improvements to image quality provided by its Isocell image sensor, but was deemed to still be not as good as its competitors, particularly in the case of low-light images. While the S5's camera interface was praised for having a streamlined design, it was criticized for taking too long to load, and the Selective Focus features were panned for being inconsistent in quality. While praised for providing more uses than the Touch ID function on the iPhone 5s, the fingerprint sensor was panned for requiring an unnatural vertical swiping gesture, having inconsistent and unforgiving results, and for being inconvenient in comparison to a password or PIN in most use cases due to these shortcomings. The Berlin-based Security Research Labs found that because the S5's fingerprint sensor could easily be spoofed, allowed unlimited attempts, did not require a PIN after 48 hours of inactivity or on startup (unlike Touch ID), and could be used for more than just unlocking the phone, it "gives a would-be attacker an even greater incentive to learn the simple skill of spoofing fingerprints." Engadget considered the heart rate sensor to be similarly unforgiving and sometimes inaccurate in comparison to other heart rate trackers, while The Verge felt that it was a redundant addition due to the concurrent introduction of the Samsung Gear Fit, which includes a heart rate tracker of its own, and is likely to also be purchased by those wanting to take full advantage of the S Health software on their S5. 
Sales The S5 shipped 10 million units to retailers in 25 days, making it the fastest-shipping smartphone in Samsung's history. Samsung shipped 11 million units of the S5 during its first month of availability, exceeding the S4's shipments in the same period by 1 million units. However, only 12 million units of the S5 were shipped in its first three months of availability, lower than the S4's sales over the same span. Due to the lower sales of the S5, in July 2014, Samsung reported its lowest profits in over two years, and a drop in market share from 32.3% to 25.2% over the past year. The loss in market share was attributed primarily to growing pressure from competitors – especially in the growing low-end smartphone market – and an already saturated market for high-end smartphones. By the end of 2014, it was reported that sales of the S5 were 40% down on the previous S4 model, prompting management changes at Samsung. See also Comparison of Samsung Galaxy S smartphones Comparison of smartphones Samsung Galaxy S series Samsung Rugby Smart Samsung Galaxy 5, a similarly-named smartphone from 2010 Samsung Galaxy Note 5, another similarly-named smartphone from 2015 Notes References External links Android (operating system) devices Discontinued flagship smartphones Samsung smartphones Samsung Galaxy Mobile phones introduced in 2014 Mobile phones with user-replaceable battery Mobile phones with infrared transmitter Mobile phones with self-capacitive touch screen layer Mobile phones with 4K video recording Discontinued Samsung Galaxy smartphones
Samsung Galaxy S5
Technology
4,388
64,117,353
https://en.wikipedia.org/wiki/Ytterbium%28II%29%20hydride
Ytterbium(II) hydride is the hydride of ytterbium with the chemical formula YbH2. In this compound, the ytterbium atom has an oxidation state of +2 and the hydrogen atoms have an oxidation state of −1. Its resistivity at room temperature is 10⁷ Ω·cm. Ytterbium hydride has high thermal stability. Production Ytterbium hydride can be produced by reacting ytterbium with hydrogen gas: Yb + H2 → YbH2 References Ytterbium(II) compounds Metal hydrides
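The production reaction above (Yb + H2 → YbH2) implies a one-to-one molar ratio of ytterbium to hydrogen gas. A minimal stoichiometry sketch; the function names are my own, and the molar masses are rounded standard atomic weights:

```python
# Stoichiometry of Yb + H2 -> YbH2 (1:1 molar ratio of Yb to H2).
M_YB = 173.05   # molar mass of ytterbium, g/mol (rounded)
M_H = 1.008     # molar mass of atomic hydrogen, g/mol (rounded)

def h2_mass_for_yb(yb_grams: float) -> float:
    """Mass of H2 (g) consumed for a given mass of Yb."""
    moles_yb = yb_grams / M_YB
    return moles_yb * 2 * M_H   # one mole of H2 (two H atoms) per mole of Yb

def ybh2_yield(yb_grams: float) -> float:
    """Theoretical mass of YbH2 produced, by conservation of mass."""
    return yb_grams + h2_mass_for_yb(yb_grams)

print(h2_mass_for_yb(100.0))   # about 1.165 g of H2 per 100 g of Yb
print(ybh2_yield(100.0))       # about 101.165 g of YbH2
```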
Ytterbium(II) hydride
Chemistry
130
54,932,727
https://en.wikipedia.org/wiki/Gagik%20Shmavonyan
Gagik Shmavonyan (Armenian: Գագիկ Շմավոնյան, May 12, 1963) is Professor at National Polytechnic University of Armenia, PhD in physics and D.Sc. in Engineering, Consultant at Nanolabs (Philippines), Expert at the Malta Council for Science and Technology, the Science Fund of the Republic of Serbia, the Cyprus Research Promotion Foundation, European Cooperation in Science and Technology (COST) and the Science Committee of Armenia, as well as President of the NanoHiTech Association (Nanotechnology). Education He received his PhD in physics in 1996 and his D.Sc. in Engineering in 2009 at National Polytechnic University of Armenia. He did postdoctoral work at National Taiwan University (2001–2002). He was an Invited/Visiting Professor/Scholar at the University of Hull, UK (2000, 2003), Polytechnic of Milan, Como, Italy (2004–2005), University of Bremen, Germany (2002, 2006), Free University Berlin, Germany (2011), Trinity College Dublin, CRANN, Ireland (2012), University of Santiago de Compostela, Spain (2013–2014), Institute of Nanoscience and Nanotechnology, National Center for Scientific Research "Demokritos", Athens, Greece (2017), University of Cergy-Pontoise, France (2016, 2017), and Eberhard Karls University of Tübingen, Germany (2019). Research interests 2D atomic materials (graphene, etc.), smart materials, 2D hybrid atomic structures, 2D atomic devices, 2D flexible electronics, and nanostructured optoelectronic devices: photoelectrochemical, photovoltaic and thermophotovoltaic cells, perovskite solar cells, semiconductor lasers and semiconductor optical amplifiers, etc. Selected publications Shmavonyan Gagik, Tran Tuan-Hoang, Cheshev Dmitry, Averkiev Andrey, Sheremet Evgeniya, Chapter 5.4. Nanospectroscopy of graphene and two-dimensional atomic materials and hybrid structures, in Vol. 3: Applications in 3-volume Textbook Optical Nanospectroscopy: Fundamentals (1st Vol., 330 p.), Methods (2nd Vol., 330 p.), Applications (3rd Vol., 468 p.), pp. 401-439, Alfred J. Meixner, Monika Fleischer, Dieter P. 
Kern, Evgeniya Sheremet, Norman McMillan (Eds.), Textbook series, Publisher de Gruyter, 1128 pages, Berlin, Boston, December 31, 2022 (in English), Hardcover Published: December 31, 2022, , eBook Published: December 31, 2022. López-Quintela M.A., Shmavonyan G.Sh., Vázquez Vázquez C. Method for producing sheets of graphene, USPTO patent US 10,968,104 B2, Filing date: November 17, 2017, Date of patent: April 6, 2021. López-Quintela M.A., Shmavonyan G.Sh., Vázquez Vázquez C. Method for producing sheets for graphene, Spanish patent ES2575711 B2, Nov. 3, 2016, WIPO patent WO/2016/107942 A1, July 7, 2016. EPO patent EP15875286.5, Nov. 22, 2017. Shmavonyan G.Sh., Vázquez Vázquez C., López-Quintela M.A. Single-step rubbing method for mass production of large-size and defect-free two-dimensional material nanostripes, films and hybrid nanostructures on any substrate, Transl. Mater. Res., IOP Publishing, 4 (2), 025001, 2017. Shmavonyan G.Sh. Monograph Semiconductor nanostructured optoelectronic devices, “Engineer” press, Khachatryan N.A. (Eds.), Yerevan, Armenia, 250 p., 2017 (in Armenian), 2019. Petrosyan O.H., Shmavonyan G.Sh., Vardanyan A.A. Nanoelectronic elements and devices, Academic manual, “Engineer” press, Khachatryan N.A. (Eds.), Yerevan, Armenia, 326 p., 2019 (in Armenian). Shmavonyan G.Sh. Basics of Nanotechnologies, Academic manual, “Engineer” press, Khachatryan N.A. (Eds.), Yerevan, Armenia, 224 p., 2011 (in Armenian), . Ispiryan N.P., Shmavonyan G.Sh. Physics of Semiconductors, Problem book, “Engineer” press, Khachatryan N.A. (Eds.), Yerevan, Armenia, 111 p., 2010 (in Armenian), . Buniatyan V.V., Shmavonyan G.Sh., Martirosyan N.V. Investigation methods of electronic and optoelectronic solid state materials and structures, Academic manual, “Engineer” press, Khachatryan N.A. (Eds.), Yerevan, Armenia, 418 p., 2005 (in Armenian). 
References External links Google Scholar gagikshmavonyan.am Armenian Public TV CA15107 Multicomp Autumn Meeting in Bucharest Nanotechnologists Materials scientists and engineers Engineers from Yerevan Optoelectronics 1963 births Living people
Gagik Shmavonyan
Materials_science,Engineering
1,158
2,038,941
https://en.wikipedia.org/wiki/Annus%20mirabilis%20papers
The annus mirabilis papers (from Latin annus mīrābilis, "miraculous year") are four papers that Albert Einstein published in the scientific journal Annalen der Physik (Annals of Physics) in 1905. As major contributions to the foundation of modern physics, these scientific publications were the ones for which he gained fame among physicists. They revolutionized science's understanding of the fundamental concepts of space, time, mass, and energy. Because Einstein published all four of these papers in a single year, 1905 is called his annus mirabilis (miraculous year). The first paper explained the photoelectric effect, which established the energy of the light quanta E = hf, and was the only specific discovery mentioned in the citation awarding Einstein the 1921 Nobel Prize in Physics. The second paper explained Brownian motion, which established the Einstein relation and compelled physicists to accept the existence of atoms. The third paper introduced Einstein's special theory of relativity, which proclaims the constancy of the speed of light and derives the Lorentz transformations. Einstein also examined relativistic aberration and the transverse Doppler effect. The fourth, a consequence of special relativity, developed the principle of mass–energy equivalence, expressed in the equation E = mc², which led to the discovery and use of nuclear power decades later. These four papers, together with quantum mechanics and Einstein's later general theory of relativity, are the foundation of modern physics. Background At the time the papers were written, Einstein did not have easy access to a complete set of scientific reference materials, although he did regularly read and contribute reviews to Annalen der Physik. Additionally, scientific colleagues available to discuss his theories were few. 
He worked as an examiner at the Patent Office in Bern, Switzerland, and he later said of a co-worker there, Michele Besso, that he "could not have found a better sounding board for my ideas in all of Europe". In addition, co-workers and the other members of the self-styled "Olympia Academy" (Maurice Solovine and Conrad Habicht) and his wife, Mileva Marić, had some influence on Einstein's work, but how much is unclear. Through these papers, Einstein tackled some of the era's most important physics questions and problems. In 1900, Lord Kelvin, in a lecture titled "Nineteenth-Century Clouds over the Dynamical Theory of Heat and Light", suggested that physics had no satisfactory explanations for the results of the Michelson–Morley experiment and for black body radiation. As introduced, special relativity provided an account for the results of the Michelson–Morley experiments. Einstein's explanation of the photoelectric effect extended the quantum theory which Max Planck had developed in his successful explanation of black-body radiation. Despite the greater fame achieved by his other works, such as that on special relativity, it was his work on the photoelectric effect that won him his Nobel Prize in 1921. The Nobel committee had waited patiently for experimental confirmation of special relativity; however, none was forthcoming until the time dilation experiments of Ives and Stilwell (1938 and 1941) and Rossi and Hall (1941). Papers Photoelectric effect The article "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" ("On a Heuristic Viewpoint Concerning the Production and Transformation of Light") received 18 March and published 9 June, proposed the idea of energy quanta. 
This idea, motivated by Max Planck's earlier derivation of the law of black-body radiation (which was preceded by the discovery of Wien's displacement law, by Wilhelm Wien, several years prior to Planck) assumes that luminous energy can be absorbed or emitted only in discrete amounts, called quanta. Einstein states, In explaining the photoelectric effect, the hypothesis that energy consists of discrete packets, as Einstein illustrates, can be directly applied to black bodies, as well. The idea of light quanta contradicts the wave theory of light that follows naturally from James Clerk Maxwell's equations for electromagnetic behavior and, more generally, the assumption of infinite divisibility of energy in physical systems. Einstein noted that the photoelectric effect depended on the wavelength, and hence the frequency of the light. At too low a frequency, even intense light produced no electrons. However, once a certain frequency was reached, even low intensity light produced electrons. He compared this to Planck's hypothesis that light could be emitted only in packets of energy given by hf, where h is the Planck constant and f is the frequency. He then postulated that light travels in packets whose energy depends on the frequency, and therefore only light above a certain frequency would bring sufficient energy to liberate an electron. Even after experiments confirmed that Einstein's equations for the photoelectric effect were accurate, his explanation was not universally accepted. Niels Bohr, in his 1922 Nobel address, stated, "The hypothesis of light-quanta is not able to throw light on the nature of radiation." By 1921, when Einstein was awarded the Nobel Prize and his work on photoelectricity was mentioned by name in the award citation, some physicists accepted that the equation was correct and light quanta were possible. In 1923, Arthur Compton's X-ray scattering experiment helped more of the scientific community to accept this formula. 
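The threshold behaviour described above (photon energy hf must exceed the material's work function before any electrons are liberated, regardless of intensity) can be illustrated numerically. A minimal sketch; the 2.0 eV work function and the light frequencies are illustrative assumptions, not values from the paper:

```python
# Einstein's photoelectric relation E = h*f and the threshold condition.
H = 6.62607015e-34     # Planck constant, J*s
EV = 1.602176634e-19   # joules per electronvolt

def photon_energy_ev(frequency_hz: float) -> float:
    """Energy of a single light quantum, in electronvolts."""
    return H * frequency_hz / EV

def ejects_electron(frequency_hz: float, work_function_ev: float) -> bool:
    # Electrons are liberated only above the threshold frequency,
    # no matter how intense the light is below it.
    return photon_energy_ev(frequency_hz) >= work_function_ev

# Example: light at 5.5e14 Hz vs 4.0e14 Hz on an assumed 2.0 eV work function.
print(ejects_electron(5.5e14, 2.0))   # True:  h*f is about 2.27 eV
print(ejects_electron(4.0e14, 2.0))   # False: h*f is about 1.65 eV
```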
The theory of light quanta was a strong indicator of wave–particle duality, a fundamental principle of quantum mechanics. A complete picture of the theory of photoelectricity was realized after the maturity of quantum mechanics. Brownian motion The article "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen" ("On the Motion of Small Particles Suspended in a Stationary Liquid, as Required by the Molecular Kinetic Theory of Heat"), received 11 May and published 18 July, delineated a stochastic model of Brownian motion. Einstein derived expressions for the mean squared displacement of particles. Using the kinetic theory of gases, which at the time was controversial, the article established that the phenomenon, which had lacked a satisfactory explanation even decades after it was first observed, provided empirical evidence for the reality of the atom. It also lent credence to statistical mechanics, which had been controversial at that time, as well. Before this paper, atoms were recognized as a useful concept, but physicists and chemists debated whether atoms were real entities. Einstein's statistical discussion of atomic behavior gave experimentalists a way to count atoms by looking through an ordinary microscope. Wilhelm Ostwald, one of the leaders of the anti-atom school, later told Arnold Sommerfeld that he had been convinced of the existence of atoms by Jean Perrin's subsequent Brownian motion experiments. Special relativity Einstein's (On the Electrodynamics of Moving Bodies), his third paper that year, was received on 30 June and published 26 September. It reconciles Maxwell's equations for electricity and magnetism with the laws of mechanics by introducing major changes to mechanics close to the speed of light. This later became known as Einstein's special theory of relativity. 
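Einstein's Brownian-motion analysis above predicts a mean squared displacement that grows linearly in time. This can be illustrated with a toy one-dimensional random walk; every parameter here (step size, walker count) is an arbitrary illustration, not from the papers:

```python
import random

# For a unit-step 1-D random walk, the mean squared displacement after
# n steps is approximately n, i.e. linear in "time" as Einstein predicted.
random.seed(0)

def msd(n_steps: int, n_walkers: int = 2000) -> float:
    """Estimate <x^2> over many independent walkers."""
    total = 0.0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += random.choice((-1, 1))
        total += x * x
    return total / n_walkers

print(msd(100))   # close to 100
print(msd(400))   # close to 400, i.e. four times longer walk, four times the MSD
```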
The paper mentions the names of only five other scientists: Isaac Newton, James Clerk Maxwell, Heinrich Hertz, Christian Doppler, and Hendrik Lorentz. It does not have any references to any other publications. Many of the ideas had already been published by others, as detailed in history of special relativity and relativity priority dispute. However, Einstein's paper introduces a theory of time, distance, mass, and energy that was consistent with electromagnetism, but omitted the force of gravity. At the time, it was known that Maxwell's equations, when applied to moving bodies, led to asymmetries (moving magnet and conductor problem), and that it had not been possible to discover any motion of the Earth relative to the 'light medium' (i.e. aether). Einstein puts forward two postulates to explain these observations. First, he applies the principle of relativity, which states that the laws of physics remain the same for any non-accelerating frame of reference (called an inertial reference frame), to the laws of electrodynamics and optics as well as mechanics. In the second postulate, Einstein proposes that the speed of light has the same value in all frames of reference, independent of the state of motion of the emitting body. Special relativity is thus consistent with the result of the Michelson–Morley experiment, which had not detected a medium of conductance (or aether) for light waves unlike other known waves that require a medium (such as water or air), and which had been crucial for the development of the Lorentz transformations and the principle of relativity. Einstein may not have known about that experiment, but states, The speed of light is fixed, and thus not relative to the movement of the observer. This was impossible under Newtonian classical mechanics. 
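A standard consequence of the two postulates discussed above is time dilation, governed by the Lorentz factor γ = 1/√(1 − v²/c²). A small numeric sketch (the velocities chosen are arbitrary examples):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def lorentz_factor(v: float) -> float:
    # gamma = 1 / sqrt(1 - v^2/c^2); grows without bound as v approaches c.
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time_s: float, v: float) -> float:
    # A moving clock's interval appears longer by gamma to a stationary observer.
    return proper_time_s * lorentz_factor(v)

print(lorentz_factor(0.6 * C))       # 1.25
print(dilated_time(1.0, 0.8 * C))    # about 1.667 s
```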
Einstein argues, It had previously been proposed, by George FitzGerald in 1889 and by Lorentz in 1892, independently of each other, that the Michelson–Morley result could be accounted for if moving bodies were contracted in the direction of their motion. Some of the paper's core equations, the Lorentz transforms, had been published by Joseph Larmor (1897, 1900), Hendrik Lorentz (1895, 1899, 1904) and Henri Poincaré (1905), in a development of Lorentz's 1904 paper. Einstein's presentation differed from the explanations given by FitzGerald, Larmor, and Lorentz, but was similar in many respects to the formulation by Poincaré (1905). His explanation arises from two axioms. The first is Galileo's idea that the laws of nature should be the same for all observers that move with constant speed relative to each other. Einstein writes, The second axiom is the rule that the speed of light is the same for every observer. The theory is now called the special theory of relativity to distinguish it from his later general theory of relativity, which considers all observers to be equivalent. Acknowledging the role of Max Planck in the early dissemination of his ideas, Einstein wrote in 1913: "The attention that this theory so quickly received from colleagues is surely to be ascribed in large part to the resoluteness and warmth with which he [Planck] intervened for this theory". In addition, the spacetime formulation by Hermann Minkowski in 1907 was influential in gaining widespread acceptance. Also, and most importantly, the theory was supported by an ever-increasing body of confirmatory experimental evidence. Mass–energy equivalence On 21 November Annalen der Physik published a fourth paper (received September 27) "Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?" ("Does the Inertia of a Body Depend Upon Its Energy Content?"), in which Einstein deduced what is sometimes described as the most famous of all equations: E = mc². 
Einstein considered the equivalency equation to be of paramount importance because it showed that a massive particle possesses an energy, the "rest energy", distinct from its classical kinetic and potential energies. The paper is based on James Clerk Maxwell's and Heinrich Rudolf Hertz's investigations and, in addition, the axioms of relativity, as Einstein states, The equation sets forth that the energy of a body at rest (E) equals its mass (m) times the speed of light (c) squared, or E = mc². The mass–energy relation can be used to predict how much energy will be released or consumed by nuclear reactions; one simply measures the mass of all constituents and the mass of all the products and multiplies the difference between the two by c². The result shows how much energy will be released or consumed, usually in the form of light or heat. When applied to certain nuclear reactions, the equation shows that an extraordinarily large amount of energy will be released, millions of times as much as in the combustion of chemical explosives, where the amount of mass converted to energy is negligible. This explains why nuclear weapons and nuclear reactors produce such phenomenal amounts of energy, as they release binding energy during nuclear fission and nuclear fusion, and convert a portion of subatomic mass to energy. Commemoration The International Union of Pure and Applied Physics (IUPAP) resolved to commemorate the 100th year of the publication of Einstein's extensive work in 1905 as the World Year of Physics 2005. This was subsequently endorsed by the United Nations. Notes References Citations Primary sources Secondary sources Gribbin, John, and Gribbin, Mary. Annus Mirabilis: 1905, Albert Einstein, and the Theory of Relativity, Chamberlain Bros., 2005. . (Includes DVD.) Renn, Jürgen, and Dieter Hoffmann, "1905 – a miraculous year". 2005 J. Phys. B: At. Mol. Opt. Phys. 38 S437-S448 (Max Planck Institute for the History of Science) [Issue 9 (14 May 2005)]. . 
Stachel, John, et al., Einstein's Miraculous Year. Princeton University Press, 1998. . External links Collection of the Annus Mirabilis papers and their English translations at the Library of Congress website 1905 documents 1905 in science Historical physics publications Old quantum theory Physics papers Works by Albert Einstein Works originally published in Annalen der Physik
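The mass–energy bookkeeping described earlier (measure the mass difference between reactants and products, then multiply by the speed of light squared) can be sketched numerically. The one-gram figure below is an illustrative assumption, not a value from the papers:

```python
C = 299_792_458.0   # speed of light, m/s

def energy_released(mass_reactants_kg: float, mass_products_kg: float) -> float:
    """E = (delta m) * c^2: energy in joules freed when products are lighter."""
    return (mass_reactants_kg - mass_products_kg) * C ** 2

# Illustrative extreme: converting 1 gram of mass entirely to energy.
joules = energy_released(0.001, 0.0)
print(f"{joules:.3e} J")   # about 9e13 J, roughly 21 kilotons of TNT equivalent
```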
Annus mirabilis papers
Physics
2,756
30,178,671
https://en.wikipedia.org/wiki/Candida%20blankii
Candida blankii is a species of budding yeast (Saccharomycotina) in the family Saccharomycetaceae. The yeast may be a dangerous pathogen and resistant to treatment in human hosts. Research on the fungus has therapeutic, medical and industrial implications. Taxonomy Candida blankii was discovered in the 1960s, after the analysis of the organs of infected mink in Canada by F. Blank. These mink were infected with the then-unknown yeast, and all died from mycosis. It was described in 1968 by H. R. Buckley and N. van Uden, who named it in honour of Blank. The description was published in the journal Mycopathologia et Mycologia Applicata, along with descriptions of four other new species. Identification On Sabouraud dextrose agar, C. blankii isolates present as typical yeast, i.e., cream-colored colonies, which then tend toward pink and later dark blue. DNA sequencing of the 26S ribosomal subunit from a blood sample can definitively identify C. blankii. Ecology In nature, Candida blankii forms symbiotic relationships with other organisms. An Indian study of seven bee species and nine plant species found 45 yeast species from 16 genera colonising the nectaries of flowers and the honey stomachs of bees. Most were members of the genus Candida; the most common species in honey bee stomachs was Dekkera intermedia, while the most common species colonising flower nectaries was C. blankii. Although the mechanics are not fully understood, it was found that Azadirachta indica flowers more if C. blankii is present. Human pathology A few human infections of Candida blankii have been found. Their existence suggests that the condition may have been under-reported. In 2015, the yeast was found in the airways of a patient with cystic fibrosis; this was the first recorded case of C. blankii infection in humans. A second case was reported in 2018. The fungus proved resistant to treatment with antifungals. The yeast was characterized as "an opportunist pathogen for lung transplant and/or CF patients". 
Because of its resistance, it was said to warrant further study. Different strains, it was suggested, should also be studied "to increase knowledge of genetic diversity and antifungal susceptibility profile". Fungal bloodstream infections (fungaemia) have been newly associated with C. blankii. Polyene antifungals have been identified as a possible treatment. The species has been detected in meat intended for human consumption, including Iberian ham. Biotechnology Like many yeasts, Candida blankii has been the subject of various biotechnological studies, including for use as a BOD (biochemical oxygen demand) biosensor. The metabolic process of C. blankii is aerobic; consequently, it oxidizes many forms of alcohol, amino acid, carbohydrate, and other organic compounds. As a BOD biosensor, practical applications may be limited due to its short-term effectiveness. A diploid isolate of C. blankii had an observed "potential for use in single cell protein production from hemicellulose hydrolysates", which is related to cellulosic ethanol production. This yeast is one of several studied extensively for use in xylose fermentation. Candida blankii has been tested as an aid for the degradation of hemicellulose hydrolysates. C. blankii "cultivated on a mixture of n-paraffins (6% vol/vol) has been shown to produce fumaric acid", which could be important in ethanol production, once the process is worked out. Notes References External links Candida blankii on MycoBank Candida blankii on Index Fungorum Candida blankii MicrobeWiki, Boston University blankii Fungi described in 1968 Pathogenic microbes Fungal pathogens of humans Animal fungal diseases Fungus species
Candida blankii
Biology
820
30,904,877
https://en.wikipedia.org/wiki/Age%20of%20the%20captain
The age of the captain is a mathematical word problem which cannot be answered even though there seems to be plenty of information supplied. It was first posed by Gustave Flaubert in a letter to his sister Caroline in 1841. More recently, a simpler version has been used to study how students react to word problems:

A captain owns 26 sheep and 10 goats. How old is the captain?

Many elementary-school children, from different parts of the world, attempt to "solve" this nonsensical problem by giving the answer 36, obtained by adding the numbers 26 and 10. It has been suggested that this indicates schooling and education fail to instill critical thinking in children, and do not teach them that a question may be unsolvable. Others have countered that students are taught that all questions have a solution and that giving any answer is better than leaving it blank, hence the attempt to "solve" it. The problem also appears in Richard Rusczyk's Introduction to Geometry, in the "extra" box at the end of chapter 18, and in Evan Chen's Euclidean Geometry in Mathematical Olympiads, at the beginning of chapter 5.

Variations
In 2018, the question "If a ship had 26 sheep and 10 goats onboard, how old is the ship's captain?" appeared in a fifth-grade examination for 11-year-old students of a Shunqing primary school. A Weibo commenter noted that the total weight of 26 average sheep and 10 average goats is . In China, one needs to have held a boat license for five years to drive a ship with over of cargo. As the minimum age to obtain a boat license is 23, the captain is at least 28. The statutory retirement age in China is 60 for men and up to 55 for women, so the captain's age is between 28 and 60.

Some versions begin with a statement like "Imagine you're a ship's captain" before distracting the answerer with much irrelevant information; the answer is simply the age of the answerer.
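The Weibo commenter's reasoning above can be sketched as a few lines of arithmetic, contrasted with the naive schoolchild answer. This is only an illustration of the constraints quoted in the text; the variable names are invented here:

```python
# Constraints quoted in the text for the Shunqing variation.
MIN_LICENSE_AGE = 23   # minimum age to obtain a boat license in China
YEARS_LICENSED = 5     # years the license must have been held for a large-cargo ship
RETIREMENT_AGE = 60    # statutory retirement age for men in China

# The captain must have held a license for five years, so:
min_captain_age = MIN_LICENSE_AGE + YEARS_LICENSED
print(f"Captain's age is between {min_captain_age} and {RETIREMENT_AGE}")

# The naive schoolchild answer, by contrast, just adds the only numbers given:
naive_answer = 26 + 10
print(naive_answer)  # 36
```

The point of the problem, of course, is that the second calculation answers nothing: the count of animals carries no information about the captain's age.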
References

Recreational mathematics Word puzzles Gustave Flaubert
Age of the captain
Mathematics
438
16,781,823
https://en.wikipedia.org/wiki/HD%2033564%20b
HD 33564 b is an extrasolar planet located approximately 68 light-years away in the constellation of Camelopardalis. It orbits the F6V star HD 33564 on an eccentric orbit, ranging in distance from 0.737 AU at periastron to 1.497 AU at apastron. HD 33564 b is a gas giant in the habitable zone of its star. If a probable 10⁻⁴ fraction of the planet's mass resides in satellites, it could host a Mars-sized moon with a habitable surface; alternatively, that mass could be distributed among many small satellites.

See also
HD 81040 b

References

External links

Camelopardalis Giant planets in the habitable zone Exoplanets discovered in 2005 Exoplanets detected by radial velocity
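The quoted periastron and apastron distances determine the orbit's semi-major axis and eccentricity via the standard relations a = (r_peri + r_apo)/2 and e = (r_apo − r_peri)/(r_apo + r_peri). A minimal sketch using the values from the text:

```python
# Periastron and apastron distances quoted in the article, in AU.
r_peri = 0.737
r_apo = 1.497

# Semi-major axis: the mean of the closest and farthest distances.
a = (r_peri + r_apo) / 2

# Orbital eccentricity from the same two distances.
e = (r_apo - r_peri) / (r_apo + r_peri)

print(f"a = {a:.3f} AU, e = {e:.2f}")  # a = 1.117 AU, e = 0.34
```

This is just the two-body geometry implied by the figures in the text, not an independent measurement.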
HD 33564 b
Astronomy
168
57,861,351
https://en.wikipedia.org/wiki/1%2C1-Dihydroxyethene
1,1-Dihydroxyethene is an organic compound consisting of two hydroxy groups as substituents on the same carbon atom of an ethene unit. The chemical is also called ketene hydrate because it is the carbonyl hydrate of ketene; its structure can also be considered the enol form of acetic acid. The compound is likely a key intermediate in the hydration reaction that converts ketene into acetic acid. Analysis of the possible pathways for this reaction has been cited as an example of the importance of considering the activation energy of each mechanistic step. The hydration of the ketene carbonyl, which formally involves the addition of one water molecule onto the carbonyl group, is likely catalyzed by a second water molecule. The compound has since been synthesized and identified spectroscopically.

References

Enols Geminal diols
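The relationships described above (ketene plus one water gives the hydrate, and the hydrate is a tautomer of acetic acid) can be checked as a simple atom balance. The sketch below only verifies molecular formulas; it says nothing about mechanism or energetics:

```python
# Atom-balance check for ketene hydration, using element counts.
from collections import Counter

ketene = Counter({"C": 2, "H": 2, "O": 1})    # CH2=C=O
water = Counter({"H": 2, "O": 1})             # H2O
hydrate = Counter({"C": 2, "H": 4, "O": 2})   # CH2=C(OH)2, 1,1-dihydroxyethene
acetic_acid = Counter({"C": 2, "H": 4, "O": 2})  # CH3COOH

# Formal addition of one water molecule onto ketene yields the hydrate.
assert ketene + water == hydrate

# The hydrate and acetic acid share a formula: they are enol/keto tautomers.
assert hydrate == acetic_acid
```

Tautomers by definition share a molecular formula, so the second check is exactly the enol-of-acetic-acid claim in the text, restated as counting.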
1,1-Dihydroxyethene
Chemistry
188