id: int64 (580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
250,556
https://en.wikipedia.org/wiki/Vacutainer
A vacutainer blood collection tube is a sterile glass or plastic test tube with a colored rubber stopper creating a vacuum seal inside the tube, which facilitates the drawing of a predetermined volume of liquid. Vacutainer tubes may contain additives designed to stabilize and preserve the specimen prior to analytical testing. Tubes are available with a safety-engineered stopper, with a variety of labeling options and draw volumes. The color of the top indicates the additives in the vial. Vacutainer tubes were invented by Joseph Kleiner in 1949. Vacutainer is a registered trademark of Becton Dickinson, which manufactures and sells the tubes today. Principles The Vacutainer needle is double-ended: the inner end is encased in a thin rubber coating that prevents blood from leaking out if the tubes are changed during a multi-draw, while the outer end is inserted into the vein. When the needle is screwed into the translucent plastic needle holder, the coated end is inside the holder. When a tube is inserted into the holder, its rubber cap is punctured by this inner needle and the vacuum in the tube pulls blood through the needle and into the tube. The filled tube is then removed and another can be inserted and filled the same way. The amount of air evacuated from the tube predetermines how much blood will fill the tube before blood stops flowing. Each tube is topped with a color-coded plastic or rubber cap. Tubes often include additives that mix with the blood when collected, and the color of each tube's cap indicates which additives it contains. Blood collection tubes expire because the vacuum is lost over time, after which blood will not be drawn into the tube when the needle punctures the cap. Types of tubes Vacutainer tubes may contain additional substances that preserve blood for processing in a medical laboratory. Using the wrong tube may make the blood sample unusable for the intended purpose. These additives are typically thin film coatings applied using an ultrasonic nozzle. The additives may include anticoagulants (EDTA, sodium citrate, heparin) or a gel with a density between those of blood cells and blood plasma. Additionally, some tubes contain additives that preserve certain components of, or substances within, the blood, such as glucose. When a tube is centrifuged, the materials within are separated by density, with the blood cells sinking to the bottom and the plasma or serum accumulating at the top. Tubes containing gel can be easily handled and transported after centrifugation without the blood cells and serum mixing. The meanings of the various colors are standardized across manufacturers. The term order of draw refers to the sequence in which tubes should be filled. The needle that pierces the tubes can carry additives from one tube into the next, so the sequence is standardized so that any cross-contamination of additives will not affect laboratory results. History Vacutainer technology was developed in 1947 by Joseph Kleiner, and is currently marketed by Becton Dickinson (B-D). The Vacutainer was preceded by other vacuum-based phlebotomy technology such as the Keidel vacuum. The plastic tube version, known as Vacutainer PLUS, was developed at B-D in the early 1990s by E. Vogler, D. Montgomery, and G. Harper, among others, of the Surface Science Group, and is described in US patents 5344611, 5326535, 5320812, 5257633 and 5246666. Vacutainers are widely used in phlebotomy in developed countries due to their safety and ease of use.
Vacutainers have the advantages of being prepared with additives, allowing easy multi-tube draws, and having a lower chance of hemolysis. In developing countries, it is still common to draw blood with a syringe. Many brands have now started manufacturing similar vacuum tubes, such as Vacu-8, Hemo Tube and Hemo Vac Plus. These tubes are now also available in pre-barcoded forms.
Vacutainer
Biology
859
15,072,390
https://en.wikipedia.org/wiki/NFKBIL1
NF-kappa-B inhibitor-like protein 1 is a protein that in humans is encoded by the NFKBIL1 gene. Function This gene encodes a divergent member of the I-kappa-B family of proteins. Its function is unclear. The gene lies within the major histocompatibility complex (MHC) class I region on chromosome 6.
NFKBIL1
Chemistry
80
3,981,043
https://en.wikipedia.org/wiki/Vesiculovirus
Vesiculovirus is a genus of negative-sense single-stranded RNA viruses in the family Rhabdoviridae, within the order Mononegavirales. Taxonomy The genus contains the following species: Alagoas vesiculovirus, Carajas vesiculovirus, Chandipura vesiculovirus, Cocal vesiculovirus, Eptesicus vesiculovirus, Indiana vesiculovirus, Isfahan vesiculovirus, Jurona vesiculovirus, Malpais Spring vesiculovirus, Maraba vesiculovirus, Morreton vesiculovirus, New Jersey vesiculovirus, Perinet vesiculovirus, Piry vesiculovirus, Radi vesiculovirus, Rhinolophus vesiculovirus, and Yug Bogdanovac vesiculovirus.
Vesiculovirus
Biology
174
26,659,481
https://en.wikipedia.org/wiki/Riding%20coat
A riding coat or jacket is a garment initially designed as outerwear for horseback riding. It protects the wearer's upper clothes from dirt and wear, and may provide additional protection in case of falls. History East Asia The Manchu "horse jacket" (magua) was a dark blue riding coat worn by Manchurian horsemen before becoming a staple item of menswear across the Qing Empire. It subsequently developed into the Burmese Taikpon and the Chinese Tangzhuang. Britain Original waterproof designs – similar to a Mackintosh – generally comprised a full-length coat with a wide skirt and leg straps to keep it in place. Other typical features included a belted waist, large patch pockets with protective flaps, raglan sleeves with tab and wind cuff, a fly front, a throat tab and a broad collar. In 1823, Charles Macintosh (1766–1843) patented his invention for waterproof rubberized cloth, pressing together two sheets of cotton material with dissolved Indian rubber placed in between. The process could make any fabric waterproof, and the first Macintosh coats were made at the family's dyestuffs factory, Charles Macintosh and Co. of Glasgow. The rubber processing pioneer Thomas Hancock (1786–1865) was aware of Macintosh's work, and in 1825 he took out a license to manufacture the patented "waterproof double textures". Hancock's solutions, which instead used masticated scrap rubber, had a higher rubber content than Macintosh's, giving a uniform film on the cloth while minimizing water penetration and odor. In 1831, John Hancock joined Charles Macintosh & Co. as a partner, leading to the merging of the two companies. This collaboration brought about the development of an automated spreading machine, which replaced the use of paint brushes in Macintosh's original designs. A significant setback for Hancock occurred in 1834 when his London factory was destroyed in a fire. It forced Macintosh to close his Glasgow factory, relocating all operations to Manchester. From then on, the manufacturing of "proper" raincoats or macs impervious to all weathers – constructed of two layers of rubber-coated cotton fabric or "double textured" – was concentrated, with all the necessary expertise and experience, in Manchester and the Lancastrian cotton towns. Such rubber or rubberized products amounted to a "cottage industry", as confirmed by the abundance of company records in the National Archives at Kew, Surrey. Classic, belted, double-textured trench coats in off-white or fawn for riding or walking were fashionable before World War II. They lasted until the end of the century as a specifically British fashion, flattering the human form and enhancing its magnetism. Typical wartime usage can be seen in Danger UXB (Anthony Andrews), first broadcast in the late 1970s, and in the 1976 film The Eagle Has Landed (Donald Sutherland). The military flavor of rubberized raincoats continued with the 1997 TV program Bodyguards (as sported by John Shrapnel playing Commander MacIntyre of the elite protection team). A model pictured in the December 1944 issue of Vogue showed the attractiveness and practicality of these garments for the fashion-conscious, while they appeared in favorite 1950s and 1960s feature films such as Genevieve (1953) (worn by Dinah Sheridan), Me and the Colonel (1958) (Nicole Maurey) and Twice Round the Daffodils (1962) (Sheila Hancock), always sharp, clean, rustling and making a bold statement.
Meanwhile, traditional gentlemen's outfitters, such as Cordings, Hackett, and Gieves & Hawkes, continued to sell plenty of the popular walking coats in thick rubberized cotton. Around 1960, zippered jackets with a cinched waist were common for young and old in Britain. Hooded anoraks were fashioned from the same materials, typically in dark green, for scouting, hiking, climbing, canoeing, and other outdoor activities. In 1970, double-textured "gangster" macs were the must-have, trendy outerwear for girls, originating from the Valstar "Gangster" brand designed by Maurice Attwood. The styles, featuring a signature yoke in front and back, a belt and peplum, and wrist straps with buckles, were sold in a range of colors and lengths, in either cotton or viscose, at major high street stores like Debenhams (under their Debroyal brand) and C&A (Vivienne style) at prices from £10 to £20. The yoked design was all the rage, even appearing in small sizes for children. This style, together with a similar style of rainwear, graced the foremost actors and actresses of the time. Cinema films included Country Dance (1970) (Susannah York), Hoffman (1970) (Sinéad Cusack), No Blade of Grass (1970) (Nigel Davenport, Jean Wallace, Lynne Frederick), The Ragman's Daughter (1972) (Victoria Tennant) and All Creatures Great and Small (1974) (Lisa Harrow, Simon Ward). Examples of the many TV series in that period containing Valstar "Gangster" type double-textured rainwear were Take Three Girls (Liza Goddard), The Lotus Eaters (Wanda Ventham), and Man About the House (Paula Wilcox). Since they provided effective insulation against the cold, the garments were later called "winter macs" by women, who would wear them buttoned, with short upturned collars and – to complete the look – a neckerchief giving a bright, contrasting slash of color. The retro "gangster" style has been revived as the "Chorlton", in a choice of five colors, by Lakeland Elements of Lancaster, since Chorlton-on-Medlock, now part of Greater Manchester, was the location of one of the early Macintosh factories. Over the years, other design initiatives and variants included the introduction of colorful, light double-textured and single-textured rubberized macs. There were ponchos, military-style capes, and, more recently, the short navy blue Margaret Howell hoody. In continental Europe, the green hooded anorak or slicker, with yellow rubber lining, retained its popularity. This can be seen in the classic French relationship movie The Aviator's Wife (1981). The similarly unisex Friesennerz, a reversible hooded anorak in yellow rubber with blue or sometimes fawn lining, was sold on Germany's high streets and sported by Glenda Jackson in her 1978 film The Class of Miss MacMichael. The latter mac was beloved by young German tourists making a pilgrimage to the fashion mecca of the Swinging Sixties, Carnaby Street in London W1.
Riding coat
Engineering
1,416
475,212
https://en.wikipedia.org/wiki/Yakov%20Zeldovich
Yakov Borisovich Zeldovich (8 March 1914 – 2 December 1987), also known as YaB, was a leading Soviet physicist of Belarusian origin, known for his prolific contributions to physical cosmology, the physics of thermonuclear reactions, combustion, and hydrodynamical phenomena. From 1943, Zeldovich, a self-taught physicist, played a crucial role in the development of the Soviet program of nuclear weapons. In 1963, he returned to academia to make pioneering contributions to the fundamental understanding of the thermodynamics of black holes and to expand the scope of physical cosmology. Biography Early life and education Yakov Zeldovich was born into a Belarusian Jewish family in his grandfather's house in Minsk. In mid-1914, the Zeldovich family moved to Saint Petersburg. They resided there until August 1941, when the family was evacuated together with the faculty of the Institute of Chemical Physics to Kazan to avoid the Axis invasion of the Soviet Union. They remained in Kazan until the summer of 1943, when Zeldovich moved to Moscow. His father, Boris Naumovich Zeldovich, was a lawyer; his mother, Anna Petrovna Zeldovich (née Kiveliovich), a translator from French to Russian, was a member of the Writers' Union. Despite being born into a devout and religious Jewish family, Zeldovich was an "absolute atheist". Zeldovich was an autodidact. He was regarded as having a remarkably versatile intellect, and during his life he explored and made major contributions to a wide range of scientific endeavors. In May 1931, he secured an appointment as a laboratory assistant at the Institute of Chemical Physics of the Academy of Sciences of the Soviet Union, and remained associated with the institute for the remainder of his life. As a laboratory assistant, he received preliminary instruction in physical chemistry and built up his reputation among his seniors at the Institute of Chemical Physics. From 1932 to 1934, Zeldovich attended undergraduate courses on physics and mathematics at Leningrad State University (now Saint Petersburg State University), and later attended technical lectures on introductory physics at the Leningrad Polytechnic Institute (now Peter the Great St. Petersburg Polytechnic University). In 1936, he earned the Candidate of Science degree (the Soviet equivalent of a PhD), having successfully defended his dissertation on adsorption and catalysis on heterogeneous surfaces. His thesis centered on the Freundlich (or classical) adsorption isotherm, for which Zeldovich discovered the theoretical foundation. In 1939, Zeldovich prepared his dissertation on the mathematical theory of nitrogen oxidation, and received the Doctor of Sciences degree in mathematical physics after it was reviewed by Alexander Frumkin. Zeldovich discovered the oxidation mechanism, known in physical chemistry as the thermal mechanism or Zeldovich mechanism. Soviet program of nuclear weapons Zeldovich is regarded as a principal figure of the secret Soviet nuclear weapons project; his travels abroad were highly restricted to Eastern Europe, under close Soviet security.
Soon after the discovery of nuclear fission (announced by the German chemist Otto Hahn in 1939), Russian physicists began investigating the scope of nuclear-fission physics and undertook seminars on the topic; Igor Kurchatov and Yulii Khariton became engaged in 1940. In May 1941, Zeldovich worked with Khariton on constructing a theory of the kinetics of nuclear chain reactions under critical conditions. The work of Khariton and Zeldovich was extended into theories of ignition, combustion and detonation; these accounted for features which had not previously been correctly predicted, observed, or explained. The modern theory of detonation is accordingly called the Zeldovich-von Neumann-Döring, or ZND, theory, and its development involved tedious fast-neutron calculations; this work was delayed by the German invasion of the Soviet Union in June 1941. In 1942, Zeldovich was relocated to Kazan and tasked by the People's Commissariat of Munitions to carry out work on conventional gunpowders to be supplied to the Soviet Army, while Khariton was asked to design new types of conventional weaponry. In 1943, Joseph Stalin decided to launch a build-up of nuclear weapons under the charge of Igor Kurchatov, who requested that Stalin relocate Zeldovich and Khariton to Moscow for the nuclear weapons program. Zeldovich joined Igor Kurchatov's small team at this secretive laboratory in Moscow to launch the work on nuclear combustion theory, and became head of the theoretical department at Arzamas-16 in 1946. With Isaak Gurevich, Isaak Pomeranchuk, and Khariton, Zeldovich prepared a scientific report on the feasibility of releasing energy through nuclear fusion triggered by an atomic explosion, and presented it to Igor Kurchatov. Zeldovich benefitted from physical and technical knowledge provided by the German physicist Klaus Fuchs and the American physicist Theodore Hall, each of whom had worked on the American Manhattan Project to develop nuclear weapons. In 1949, Zeldovich led a team of physicists that conducted the first Soviet nuclear test, RDS-1, based roughly on the American design obtained through the atomic spies in the United States, though he continued his fundamental work on explosive theory. Zeldovich then began working on modernizing successive designs of nuclear weapons and first proposed the idea of the hydrogen bomb to Andrei Sakharov and others. In the course of his work on nuclear weapons, Zeldovich did ground-breaking work in radiation hydrodynamics and the physics of matter at high pressure. Between 1950 and 1953, Zeldovich performed calculations necessary for the feasibility of the hydrogen bomb that were verified by Andrei Sakharov, although the two groups worked in parallel on the development of thermonuclear fusion. However, it was Sakharov who radically changed the approach to thermonuclear fusion, aided by Vitaly Ginzburg, in 1952. Zeldovich remained associated with the nuclear testing program, heading the experimental laboratories at Arzamas-16, until October 1963, when he left for academia. Academia and cosmology In 1952, Zeldovich began work in the field of elementary particles and their transformations. He predicted the beta decay of the pi meson. Together with Semyon Gershtein he noticed the analogy between the weak and electromagnetic interactions, and in 1960 he predicted the muon catalysis (more precisely, muon-catalysed dt-fusion) phenomenon.
In 1977, Zeldovich was awarded the Kurchatov Medal, the highest award in nuclear physics of the Soviet Union; the citation read "for prediction of characteristics of ultracold neutrons, their detection and investigation". He was elected academician of the USSR Academy of Sciences on 20 June 1958. He was head of a division at the Institute of Applied Mathematics of the USSR Academy of Sciences from 1965 until January 1983. In the early 1960s, Zeldovich started working in astrophysics and physical cosmology. In 1964, he and, independently, Edwin Salpeter were the first to suggest that accretion discs around massive black holes are responsible for the huge amounts of energy radiated by quasars. From 1965, he was a professor at the Department of Physics of Moscow State University and head of the division of Relativistic Astrophysics at the Sternberg Astronomical Institute. In 1966, he and Igor Novikov were the first to propose searching for black hole candidates among binary systems in which one star is optically bright and X-ray dark and the other optically dark but X-ray bright (the black hole candidate). Zeldovich worked on the theory of the evolution of the hot universe, the properties of the microwave background radiation, the large-scale structure of the universe, and the theory of black holes. He predicted, with Rashid Sunyaev, that the cosmic microwave background should undergo inverse Compton scattering off hot electrons in galaxy clusters. This is called the Sunyaev-Zeldovich effect, and measurements by telescopes such as the Atacama Cosmology Telescope and the South Pole Telescope have established it as one of the key observational probes of cluster cosmology. Zeldovich contributed sharp insights into the nature of the large-scale structure of the universe, in particular through the use of Lagrangian perturbation theory (the Zeldovich approximation) and the application of the Burgers' equation approach via the adhesion approximation. In 1974, in collaboration with A. G. Polnarev, he suggested the existence of a gravitational memory effect, in which a system of freely falling particles initially at relative rest is displaced after the passing of a burst of gravitational radiation. Black hole thermodynamics Zeldovich played a key role in developing the theory of black hole evaporation due to Hawking radiation. Zeldovich and Charles W. Misner concurrently predicted the possibility of particle generation by rotating Kerr black holes in 1971 and 1972. Previously, in 1965, Zeldovich had predicted that Kerr black holes would split the emission lines of photons, as in the Zeeman effect. During Stephen Hawking's visit to Moscow in 1973, the Soviet scientists Zeldovich and Alexei Starobinsky showed Hawking that, according to the quantum mechanical uncertainty principle, rotating black holes should create and emit particles. Family With his wife, Varvara Pavlovna Konstantinova, Yakov Zeldovich had a son and two daughters who were also physicists: a son, Boris Zeldovich, and daughters Olga Yakovlevna Zeldovich and Marina Yakovlevna Zeldovich. Zeldovich also had a daughter, Annushka, with O.K. Shiryaeva. He had one more daughter in 1945, Alexandra Varkovitskaya, with the linguist and folklorist Ludmila Varkovitskaya. Zeldovich had another son, Leonid Yakovlevich Agapov, with Nina Nikolaevna Agapova in 1958; he died in 2016 at the age of 58. Awards and honors Igor Kurchatov called him a "genius" and Andrei Sakharov named him "a man of universal scientific interests."
After their first meeting in Moscow, Stephen W. Hawking wrote to Zeldovich: "Now I know that you are a real person and not a group of scientists like Bourbaki." He was a member of the American Academy of Arts and Sciences (1975), the United States National Academy of Sciences (1979), and the American Philosophical Society (1979). His honors included the Dirac Medal of the ICTP (1985); the Bruce Medal (1983); the Gold Medal of the Royal Astronomical Society (1984); the Kurchatov Medal (1977); Hero of Socialist Labor three times (1949, 1953, 1957); the Stalin Prize (1943, 1949, 1951, 1953); the Lenin Prize (1957); three Orders of Lenin (1949, 1962, 1974); two Orders of the Red Banner of Labour (1945, 1964); and the Order of the October Revolution (1962). The asteroid 11438 Zeldovich was named in his honor in 2001.
Yakov Zeldovich
Physics,Chemistry
2,578
64,516,898
https://en.wikipedia.org/wiki/Music%20and%20sleep
Sleep problems are correlated with poor well-being and low quality of life. Persistent sleep disturbances can lead to fatigue, irritability, and various health issues. Numerous studies have examined the positive impact of music on sleep quality. As early as 2000 BC, lullabies were designed to aid infant sleep. For adults with sleep-related disorders, music serves as a useful intervention in reducing stress. Approximately 25% of the population facing sleep difficulties regularly use music as a tool for relaxation. This can be either self-prescribed or under the guidance of a music therapist. Music therapy was introduced into the medical field for treating sleep disorders following scientific experimentation and observation. Compared to pharmacological methods for improving sleep, music has no reported side effects and is easy to administer. In direct comparisons, music has improved sleep quality more than audiobooks and has been comparable to sedative hypnotics. In addition, music can be combined with relaxation techniques such as breathing exercises and progressive muscle relaxation. One review of non-pharmacological sleep aids identified music as the only sleep aid with adequate research. The influence of music on sleep has been investigated across various contexts, exploring how musical stimuli can influence different aspects of the sleeping experience. These findings help in building more effective music-therapy procedures to target sleep problems. A number of companies, such as Acoustic Sheep (producer of SleepPhones) and Snoozeband, produce headbands that play music to make it easier and more comfortable to listen while trying to get to sleep. Major empirical findings Influence on sleep quality Research suggests that music contributes to higher perceived sleep quality, greater sleep efficiency, longer sleep durations, less sleep disturbance, and less daytime dysfunction for older adults. This was assessed through improved scores on the Pittsburgh Sleep Quality Index (PSQI) questionnaire. Polysomnography investigations have found that listening to slow-tempo music increased slow-wave sleep (deep sleep) and reduced rapid eye movement sleep (a lighter sleep stage). Music facilitates a large improvement in sleep quality for insomnia patients. Interventions including music-assisted relaxation and listening to music effectively reduce sleep onset latency for people with insomnia. However, several studies found music to have neither positive nor negative effects on subjective sleep quality for normal individuals. Modulation of heart rate and blood pressure Music can reduce sympathetic nervous system activity, decreasing blood pressure and heart rate. The decrease in systolic blood pressure, diastolic blood pressure, and heart rate signals a state of calmness, which is essential for a good night's sleep. Sedative music, characterized by a slow tempo, repetitive rhythm, gentle contours, and strings, is effective in generating anxiolytic responses that aid sleep. Brainwave activity and hormonal responses Electroencephalogram (EEG) studies give insights into how music alters brainwave activity during sleep. Gentle and soothing music can lead to increased delta wave activity, which indicates deep sleep. Several experiments have found that listening to preferred music significantly decreases cortisol levels and reduces the amount of stress experienced.
Salivary melatonin, a hormone associated with sleep initiation, was found to be elevated among people using interactive music therapies. These hormonal changes help create a more conducive state for falling asleep and maintaining stable sleep stages. Mechanisms Dickson & Schubert's RPR Dickson & Schubert summarized and evaluated six researcher-proposed reasons (RPR) by which music could potentially aid sleep: Entrainment: the synchronization of certain biological processes, such as neural activity or heart rate, with the rhythmic structure of the music. When the rhythmic pattern of music aligns with the natural rhythm of the body, the listener is more likely to fall asleep quickly. Masking: using music to mitigate the impact of noxious background noises. By listening to music at a comfortable volume, individuals can block disruptive outside sounds and create a peaceful sleeping environment. Enjoyment: listening to preferred, emotionally relatable, or pleasant music can have a positive impact on mood. This induces positive emotions such as happiness, reducing felt stress and enhancing sleep patterns. Distraction: music acts as a distractor from inner stressful thoughts by providing a focal point of attention. Individuals focusing on the music itself can divert their attention away from worrying thoughts that are keeping them awake. Expectation: individuals hold the cultural belief that certain types of music can aid their sleep. This belief acts as a placebo rather than a direct contributor to sleep quality; it may improve the subjective sleeping experience through the power of suggestion. Relaxation: music can induce a relaxation response by reducing physiological and psychological stress. A slow tempo and calming melodies can reduce heart rate, decrease cortisol levels, and alleviate tension, making it easier for the individual to fall asleep. Habit formation Dickson & Schubert proposed habit formation as an additional RPR under the Arts on Prescription model. Based on classical conditioning, repeated pairing of the music with the intention of sleep can generate a conditioned response. Once this habit is formed, the music alone is effective in triggering a relaxation response, signaling the body that it is time to sleep. A minimum of three weeks is required for individuals suffering mild insomnia to become healthy sleepers, and sleep quality continues to improve over three months. Music improved sleep quality with increased exposure regardless of differences in demographics, music genre, duration of treatment, and exposure frequency. Dickson suggests "listening to music that you find relaxing, at the same time, every night for at least three weeks". Musical genres and features Typical genres of music used for sleep (sedative music) include classical music, ethnic music, ambient music, meditation music and lullabies, although researchers have recognised a wide diversity of music genres aiding sleep. The characteristics of music that have improved sleep quality in the music-sleep literature include a slow tempo, little change of rhythm, and moderate pitch variation in the melody. The selection of music (self-selected or researcher-selected) does not appear to impact sleep quality. Instrumental vs lyrics Instrumental music, such as sitar or violin, is recognized as more effective in inducing sleep than vocal music. Although lyrics give depth and meaning to the music, they also stimulate cognitive processes, making it more difficult to fall asleep.
Because instrumental music focuses on melody and rhythm, it allows for relaxation without the distraction of lyrics. Research has given evidence for the use of instrumental music in improving sleep quality. Nature sounds and binaural beats Nature sounds like birdsong or rainfall can evoke a feeling of peacefulness and tranquillity to facilitate sleep. Binaural beats work by presenting a slightly different frequency to each ear; the perceived beat at the difference frequency is thought to synchronize brainwave activity. These two methods can be combined to improve sleep quality by targeting both the sensory experience and brainwave alterations. Sedative music developed in collaboration with researchers includes Can't Sleep (app), with Gaelen Thomas Dickson (music psychology); Pzizz (app), with Maryanne Garry (psychology); Marconi Union's Weightless, with Lyz Cooper (sound therapy); and Max Richter's Sleep, with David Eagleman (neuroscience). Individual variability While many studies have shown the significant influence of soft, slow music on sleep, this effect is not uniform across all individuals. Extensive research has revealed variability in individual responses to musical stimuli, which can be due to personal preference, cultural background, and susceptibility to different music types. Some may find classical music entertaining, while others prefer ambient music for relaxation. Cultural background can also shape an individual's perception of and response to musical stimuli. The concept of music and sleep, although applicable to the general population, needs to take these differences into account to tailor interventions to each individual's taste. By customizing music choices, the overall effectiveness of music in improving sleep can be maximized, contributing to a better quality of life.
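The binaural-beat technique described above is easy to make concrete. Below is a minimal sketch in Python; the 440/444 Hz tone pair (a 4 Hz difference), the duration, and the output filename are illustrative choices, not values from the article. Each ear receives its own pure tone, and the brain perceives the 4 Hz difference as a slow beat.

import numpy as np
import wave

RATE = 44100                      # samples per second
DURATION = 10                     # seconds (illustrative)
F_LEFT, F_RIGHT = 440.0, 444.0    # 4 Hz difference = perceived beat frequency

t = np.arange(RATE * DURATION) / RATE
left = np.sin(2 * np.pi * F_LEFT * t)
right = np.sin(2 * np.pi * F_RIGHT * t)

# Interleave the two channels and scale to 16-bit PCM at half amplitude.
stereo = np.empty(2 * len(t), dtype=np.int16)
stereo[0::2] = (left * 32767 * 0.5).astype(np.int16)
stereo[1::2] = (right * 32767 * 0.5).astype(np.int16)

with wave.open("binaural_4hz.wav", "wb") as f:
    f.setnchannels(2)   # stereo: one tone per ear
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(stereo.tobytes())

Played through headphones (so each ear truly receives only its own channel), the file produces the slow 4 Hz beat that the technique relies on.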
Music and sleep
Biology
1,638
19,041,235
https://en.wikipedia.org/wiki/ArmaTrac
ArmaTrac is the tractor brand of Erkunt Tractor, the tractor division of the Erkunt group, owned by the Indian giant Mahindra & Mahindra. History The manufacturer, Erkunt, was established in Turkey in 1953 as a general foundry and pattern shop. By 1955, it had become a factory for machining parts as well as casting, chiefly engaged in the production of waste water pipes to NATO standards. Following eight years of growth, the company became a corporation in 1961. Erkunt was then producing intermediate goods for the automotive, agricultural tractor and motor industries, utilizing the most advanced technology of the day. Erkunt has grown to employ 174 white-collar and 851 blue-collar workers, totalling 1,025, in a complex with a 60,000 ton/year capacity that leads Turkey in grey iron and nodular casting production and mechanical processing. For the past 20 years the company has been exporting to Europe, and production has grown to 60,000 tons per year. Erkunt exports 85% of its total production, of which 76% is machined and 24% is raw castings. The company's products carry ISO/TS 16949 certification, the company is registered to ISO 9001, and it is one of the largest independent casting and machining companies in Europe. In 2003, the Erkunt Group started its tractor division with the establishment of Erkunt Tractor Industries, Inc. and became a member of the OEM group. Erkunt Tractor started manufacturing tractors in September 2004. In the following six years, 8,268 tractors were sold in the domestic market through 73 dealers and 105 sales points. The company, which entered the market with only two models, is now able to offer 46 different models between 50 and 110 hp to potential customers in different countries. At the end of 2010, Erkunt held an 18% market share in Turkey and was the second biggest tractor manufacturer in the country. Erkunt distributes tractors under the ArmaTrac brand in international markets and works on a distributorship basis. Currently, Erkunt has distributors in the United Kingdom, Latvia, Bulgaria, Hungary, Cyprus, Serbia, Poland and Croatia in Europe; Senegal and Algeria in Africa; Yemen and Jordan in the Middle East; and Antigua and Barbuda in the Americas. The tractors are designed by Turkish engineers, a first in the Turkish tractor market, as most other manufacturers build tractors under license. The tractors can also be adapted for different countries to meet each market's needs. The tractors use ZF and Carraro transmissions built under license in Turkey.
ArmaTrac
Engineering
531
424,964
https://en.wikipedia.org/wiki/Water%20clock
A water clock or clepsydra is a timepiece that measures time by the regulated flow of liquid into (inflow type) or out from (outflow type) a vessel, where the amount of liquid can then be measured. Water clocks are one of the oldest time-measuring instruments. The simplest form of water clock, with a bowl-shaped outflow, existed in Babylon, Egypt, and Persia around the 16th century BC. Other regions of the world, including India and China, also provide early evidence of water clocks, but the earliest dates are less certain. Water clocks were used in ancient Greece and in ancient Rome, as described by technical writers such as Ctesibius (died 222 BC) and Vitruvius (died after 15 BC). Designs A water clock uses the flow of water to measure time. If viscosity is neglected, the physical principle required to study such clocks is Torricelli's law. Two types of water clock exist: inflow and outflow. In an outflow water clock, a container is filled with water, and the water drains slowly and evenly out of the container. This container has markings that are used to show the passage of time. As the water leaves the container, an observer can see where the water is level with the lines and tell how much time has passed. An inflow water clock works in basically the same way, except that instead of flowing out of the container, the water fills the marked container. As the container fills, the observer can see where the water meets the lines and tell how much time has passed. Some modern timepieces are called "water clocks" but work differently from the ancient ones. Their timekeeping is governed by a pendulum, but they use water for other purposes, such as providing the power needed to drive the clock by using a water wheel or something similar, or by having water in their displays. The Greeks and Romans advanced water clock design to include the inflow clepsydra with an early feedback system, gearing, and an escapement mechanism, which were connected to fanciful automata and resulted in improved accuracy. Further advances were made in Byzantium, Syria, and Mesopotamia, where increasingly accurate water clocks incorporated complex segmental and epicyclic gearing, water wheels, and programmability, advances which eventually made their way to Europe. Independently, the Chinese developed their own advanced water clocks, incorporating gears, escapement mechanisms, and water wheels, and passed their ideas on to Korea and Japan. Some water clock designs were developed independently, and some knowledge was transferred through the spread of trade. These early water clocks were calibrated with a sundial. While never reaching a level of accuracy comparable to today's standards of timekeeping, the water clock was a commonly used timekeeping device for millennia, until it was replaced by more accurate verge escapement mechanical clocks in Europe around 1300. Regional development Egypt The oldest water clock of which there is physical evidence dates to c. 1417–1379 BC, in the New Kingdom of Egypt during the reign of the pharaoh Amenhotep III, where it was used in the Precinct of Amun-Re at Karnak. The oldest documentation of the water clock is the tomb inscription of the 16th century BC Egyptian court official Amenemhet, which identifies him as its inventor. These simple water clocks, which were of the outflow type, were stone vessels with sloping sides that allowed water to drip at a nearly constant rate from a small hole near the bottom.
There were twelve separate columns with consistently spaced markings on the inside to measure the passage of "hours" as the water level reached them. The columns, one for each of the twelve months, allowed for the variations of the seasonal hours. Priests used these clocks to determine the time at night so that the temple rites and sacrifices could be performed at the correct hour. Babylon In Babylon, water clocks were of the outflow type and were cylindrical in shape. Use of the water clock as an aid to astronomical calculations dates back to the Old Babylonian Empire (c. 2000 – c. 1600 BC). While there are no surviving water clocks from the Mesopotamian region, most evidence of their existence comes from writings on clay tablets. Two collections of tablets, for example, are the Enuma Anu Enlil (1600–1200 BC) and the MUL.APIN (7th century BC). In these tablets, water clocks are used for payment of the night and day watches (guards). These clocks were unique, as they did not have an indicator such as hands (as are typically used today) or grooved notches (as were used in Egypt). Instead, they measured time "by the weight of water flowing from" them. Volume was measured in capacity units called qa, and weight in mana (cognate with the Greek mina, a unit of about one pound). In Babylonian times, time was measured with temporal hours, so as the seasons changed, so did the length of a day. "To define the length of a 'night watch' at the summer solstice, one had to pour two mana of water into a cylindrical clepsydra; its emptying indicated the end of the watch. One-sixth of mana had to be added each succeeding half-month. At the equinox, three mana had to be emptied in order to correspond to one watch, and four mana was emptied for each watch of the winter solstitial night." This linear half-month schedule is worked through in the short sketch following this subsection. India N. Narahari Achar and Subhash Kak suggest that water clocks were used in ancient India as early as the 2nd millennium BC, based on their appearance in the Atharvaveda. According to N. Kameswara Rao, pots excavated from the Indus Valley Civilisation site of Mohenjo-daro may have been used as water clocks. They are tapered at the bottom, have a hole on the side, and are similar to the utensil used to perform abhiṣeka (ritual water pouring) on lingams. The Jyotisha, one of the six Vedanga disciplines, describes water clocks called ghati or kapala that measure time in units of nadika (around 24 minutes). A clepsydra in the form of a floating and sinking copper vessel is mentioned in the Sūrya Siddhānta (5th century AD). At Nalanda mahavihara, an ancient Buddhist university, four-hour intervals were measured by a water clock, which consisted of a similar copper bowl holding two large floats in a larger bowl filled with water. The bowl filled with water through a small hole at its bottom; it sank when full, and this was marked by the beating of a drum in the daytime. The amount of water added varied with the seasons, and students at the university operated the clock. Descriptions of similar water clocks are also given in the Pañca Siddhāntikā by the polymath Varāhamihira in the 6th century, which adds further detail to the account given in the Sūrya Siddhānta. Further descriptions are recorded in the Brāhmasphuṭasiddhānta by the mathematician Brahmagupta in the 7th century. A detailed description with measurements is also recorded by the astronomer Lalla in the 8th century, who describes the ghati as a hemispherical copper vessel with a hole that fills completely after one nadika.
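The following is a minimal sketch of the Babylonian half-month rule quoted in the Babylon passage above, assuming a strictly linear schedule (the function name and the use of Python fractions are illustrative choices, not from the source): two mana of water per night watch at the summer solstice, one-sixth of a mana more for each succeeding half-month, giving three mana at the equinox and four at the winter solstice.

from fractions import Fraction

def mana_per_watch(half_months_after_summer_solstice: int) -> Fraction:
    """Water (in mana) emptied per night watch, per the quoted rule."""
    return Fraction(2) + Fraction(half_months_after_summer_solstice, 6)

# Summer solstice (0 half-months), equinox (6), winter solstice (12):
for n in (0, 6, 12):
    print(n, mana_per_watch(n))   # -> 2, 3, 4 mana respectively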
China In ancient China, as well as throughout East Asia, water clocks were very important in the study of astronomy and astrology. The oldest written reference dates the use of the water clock in China to the 6th century BC. From about 200 BC onwards, the outflow clepsydra was replaced almost everywhere in China by the inflow type with an indicator-rod borne on a float (called fou chien lou, 浮箭漏). The Han dynasty philosopher and politician Huan Tan (40 BC – AD 30), a Secretary at the Court in charge of clepsydrae, wrote that he had to compare clepsydrae with sundials because of how temperature and humidity affected their accuracy, demonstrating that the effects of evaporation, as well as of temperature on the speed at which water flows, were known at this time. The liquid in water clocks was liable to freeze and had to be kept warm with torches, a problem that was solved in 976 by the Chinese astronomer and engineer Zhang Sixun. His invention – a considerable improvement on Yi Xing's clock – used mercury instead of water. Mercury is a liquid at room temperature and freezes at −38.8 °C (−37.9 °F), lower than any air temperature common outside polar regions. Later, instead of using water, the early Ming Dynasty engineer Zhan Xiyuan (c. 1360–1380) created a sand-driven wheel clock, improved upon by Zhou Shuxue (c. 1530–1558). The use of clepsydrae to drive mechanisms illustrating astronomical phenomena began with the Han Dynasty polymath Zhang Heng (78–139) in 117, who also employed a waterwheel. Zhang Heng was the first in China to add an extra compensating tank between the reservoir and the inflow vessel, which solved the problem of the falling pressure head in the reservoir tank. Zhang's ingenuity led to the creation, by the Tang dynasty mathematician and engineer Yi Xing (683–727) and Liang Lingzan in 725, of a clock driven by a waterwheel linkwork escapement mechanism. The same mechanism would be used by the Song dynasty polymath Su Song (1020–1101) in 1088 to power his astronomical clock tower, which also incorporated a chain drive. Su Song's clock tower possessed a bronze power-driven armillary sphere for observations, an automatically rotating celestial globe, and five front panels with doors that permitted the viewing of changing mannequins which rang bells or gongs and held tablets indicating the hour or other special times of the day. In the 2000s, an outflow clepsydra in Beijing's Drum Tower was operational and displayed for tourists, connected to automata so that every quarter-hour a small brass statue of a man claps his cymbals. Persia The use of water clocks in Greater Iran, especially in desert areas such as Yazd, Isfahan, Zibad, and Gonabad, dates back to 500 BC. Later, they were also used to determine the exact holy days of pre-Islamic religions, such as Nowruz (March equinox), Mehregan (September equinox), Tirgan (summer solstice) and Yaldā Night (winter solstice) – the shortest, longest, and equal-length days and nights of the year. The water clocks, called pengan (and later fenjan), were among the most practical ancient tools for timing the yearly calendar. The water clock was the most accurate and commonly used timekeeping device for calculating the amount of time a farmer could take water from a qanat or well for irrigation, until more accurate modern clocks replaced it. Persian water clocks were a practical, useful, and necessary tool for the qanat's shareholders to calculate the length of time they could divert water to their farms or gardens.
The qanat was the only water source for agriculture and irrigation in these arid areas, so just and fair water distribution was very important. Therefore, a fair and clever elder was elected as manager of the water clock, the mir āb, and at least two full-time managers were needed to control and observe the number of hours and announce the exact time of the days and nights from sunrise to sunset, because water shares were usually divided between day and night owners. The Persian water clock consisted of a large pot full of water and a bowl with a small hole in the center. When the bowl became full of water, it would sink into the pot, and the manager would empty the bowl and again put it on top of the water in the pot. He would record the number of times the bowl sank by putting small stones into a jar. The place where the clock was situated and its managers were collectively known as the khane pengān. Usually this would be the top floor of a public house, with west- and east-facing windows to show the time of sunset and sunrise. The Zibad water clock was in use until 1965, when it was replaced by modern clocks. Greco-Roman world The word "clepsydra" comes from the Greek meaning "water thief". The Greeks considerably advanced the water clock by tackling the problem of the diminishing flow. They introduced several types of the inflow clepsydra, one of which included the earliest feedback control system. Ctesibius invented an indicator system typical of later clocks, such as the dial and pointer. The Roman engineer Vitruvius described early alarm clocks, working with gongs or trumpets. A commonly used water clock was the simple outflow clepsydra. This small earthenware vessel had a hole in its side near the base. In both Greek and Roman times, this type of clepsydra was used in courts for allocating periods of time to speakers. In important cases, such as when a person's life was at stake, it was filled completely, but for more minor cases, only partially. If proceedings were interrupted for any reason, such as to examine documents, the hole in the clepsydra was stopped with wax until the speaker was able to resume his pleading. Clepsydrae for keeping time Some scholars suspect that the clepsydra may have been used as a stop-watch for imposing a time limit on clients' visits in Athenian brothels. Slightly later, in the early 3rd century BC, the Hellenistic physician Herophilos employed a portable clepsydra on his house visits in Alexandria for measuring his patients' pulse-beats. By comparing the rate by age group with empirically obtained data sets, he was able to determine the intensity of the disorder. Between 270 BC and AD 500, Hellenistic (Ctesibius, Hero of Alexandria, Archimedes) and Roman horologists and astronomers were developing more elaborate mechanized water clocks. The added complexity was aimed at regulating the flow and at providing fancier displays of the passage of time. For example, some water clocks rang bells and gongs, while others opened doors and windows to show figurines of people, or moved pointers and dials. Some even displayed astrological models of the universe. The 3rd century BC engineer Philo of Byzantium referred in his works to water clocks already fitted with an escapement mechanism, the earliest known of its kind.
The biggest achievement in the invention of clepsydrae during this time, however, was by Ctesibius, with his incorporation of gears and a dial indicator to automatically show the time as the lengths of the days changed throughout the year, a necessity because of the temporal timekeeping used in his day. Also, the Greek astronomer Andronicus of Cyrrhus supervised the construction of his Horologion, known today as the Tower of the Winds, in the Athens marketplace (or agora) in the first half of the 1st century BC. This octagonal clocktower showed scholars and shoppers both sundials and a wind vane. Inside it was a mechanized clepsydra, although the type of display it used cannot be known for sure; some possibilities are a rod that moved up and down to display the time, a water-powered automaton that struck a bell to mark the hours, or a moving star disk in the ceiling. Medieval Islamic world In the medieval Islamic world (632–1280), the use of water clocks had its roots in the work of Archimedes during the rise of Alexandria in Egypt, and continued on through Byzantium. The water clocks of the Arabic engineer Al-Jazari, however, are credited with going "well beyond anything" that had preceded them. In Al-Jazari's 1206 treatise, he describes one of his water clocks, the elephant clock. The clock recorded the passage of temporal hours, which meant that the rate of flow had to be changed daily to match the uneven length of days throughout the year. To accomplish this, the clock had two tanks: the top tank was connected to the time-indicating mechanisms, and the bottom was connected to the flow-control regulator. At daybreak, the tap was opened and water flowed from the top tank to the bottom tank via a float regulator that maintained a constant pressure in the receiving tank. The most sophisticated water-powered astronomical clock was Al-Jazari's castle clock of 1206, considered by some to be an early example of a programmable analog computer. It was a large, complex device that had multiple functions alongside timekeeping. It included a display of the zodiac and the solar and lunar orbits, and a pointer in the shape of the crescent moon which traveled across the top of a gateway, moved by a hidden cart, causing automatic doors to open, each revealing a mannequin, every hour. It was possible to re-program the length of day and night in order to account for the changing lengths of day and night throughout the year, and it also featured five musician automata who automatically played music when moved by levers operated by a hidden camshaft attached to a water wheel. Other components of the castle clock included a main reservoir with a float, a float chamber and flow regulator, a plate and valve trough, two pulleys, a crescent disc displaying the zodiac, and two falcon automata dropping balls into vases. The first water clocks to employ complex segmental and epicyclic gearing were invented earlier by the Arab engineer Ibn Khalaf al-Muradi in Islamic Iberia, c. 1000. His water clocks were driven by water wheels, as was also the case for several Chinese water clocks in the 11th century. Comparable water clocks were built in Damascus and Fez. The latter (Dar al-Magana) remains to this day, and its mechanism has been reconstructed. The first European clock to employ these complex gears was the astronomical clock created by Giovanni de Dondi in c. 1365. Like the Chinese, Arab engineers at the time also developed an escapement mechanism, which they employed in some of their water clocks.
The escapement mechanism was in the form of a constant-head system, while heavy floats were used as weights. Korea In 718, Unified Silla established the system of clepsydra for the first time in Korean history, imitating the Tang Dynasty. In 1434, during Joseon rule, Jang Yeong-sil, a palace guard and later chief court engineer, constructed the Borugak Jagyeongnu, or self-striking water clock of Borugak Pavilion, for Sejong the Great. What made his water clock self-striking (or automatic) was its use of jack-work mechanisms: three wooden figures or "jacks" struck objects to signal the time. This innovation removed the reliance on human workers, known as "rooster men", to constantly replenish it. The uniqueness of the clock was its capability to announce dual times automatically with visual and audible signals. Jang developed a signal conversion technique that made it possible to measure analog time and announce digital time simultaneously, as well as to separate the water mechanisms from the ball-operated striking mechanisms. The conversion device, called the pangmok, was placed above the inflow vessel that measured the time, the first device of its kind in the world. Thus, the Borugak water clock is the first hydro-mechanically engineered dual-time clock in the history of horology. Japan Emperor Tenji made Japan's first water clock, called a rōkoku. These clocks were highly socially significant. Temperature, water viscosity, and clock accuracy When viscosity can be neglected, the outflow rate of the water is governed by Torricelli's law, or more generally, by Bernoulli's principle. Viscosity will dominate the outflow rate if the water flows out through a nozzle that is sufficiently long and thin, as given by the Hagen–Poiseuille equation. For such a design, the flow rate is approximately inversely proportional to the viscosity, which depends on the temperature. Liquids generally become less viscous as the temperature increases. In the case of water, the viscosity varies by a factor of about seven between zero and 100 degrees Celsius. Thus, a water clock with such a nozzle would run about seven times faster at 100 °C than at 0 °C. Water is about 25 percent more viscous at 20 °C than at 30 °C, and a variation in temperature of one degree Celsius, in this "room temperature" range, produces a change of viscosity of about two percent. Therefore, a water clock with such a nozzle that keeps good time at some given temperature would gain or lose about half an hour per day if it were one degree Celsius warmer or cooler. Keeping it accurate to within one minute per day would require its temperature to be controlled to within a small fraction of a degree Celsius. There is no evidence that this was done in antiquity, so ancient water clocks with sufficiently thin and long nozzles (unlike the modern pendulum-controlled one described above) cannot have been reliably accurate by modern standards. However, while modern timepieces may not be reset for long periods, water clocks were likely reset every day, when refilled, based on a sundial, so the cumulative error would not have been great.
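The viscosity arithmetic above is easy to check numerically. The sketch below assumes a thin-nozzle clock whose flow rate is inversely proportional to viscosity (the Hagen–Poiseuille regime described above); the approximate handbook viscosity values and the helper function are illustrative additions, not from the article.

# Dynamic viscosity of water in mPa*s at various temperatures
# (approximate handbook values).
VISCOSITY = {0: 1.792, 20: 1.002, 21: 0.978, 25: 0.890, 30: 0.797, 100: 0.282}

def daily_drift_minutes(t_calibrated: float, t_actual: float) -> float:
    """Minutes per day gained (+) or lost (-) by a thin-nozzle water clock
    calibrated at t_calibrated (deg C) but running at t_actual, assuming
    flow rate proportional to 1 / viscosity."""
    rate_ratio = VISCOSITY[t_calibrated] / VISCOSITY[t_actual]
    return (rate_ratio - 1.0) * 24 * 60

print(VISCOSITY[0] / VISCOSITY[100])    # ~6.4: "a factor of about seven"
print(VISCOSITY[20] / VISCOSITY[30])    # ~1.26: ~25% more viscous at 20 C
print(daily_drift_minutes(20, 21))      # ~ +35 min: "about half an hour per day"

Running the sketch reproduces the figures quoted in the text: roughly a sevenfold viscosity change across the liquid range, about 25 percent between 20 °C and 30 °C, and roughly half an hour of drift per day for a one-degree error near room temperature.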
Water clock
Physics
4,605
8,745
https://en.wikipedia.org/wiki/Design%20pattern
A design pattern is the reusable form of a solution to a design problem. The idea was introduced by the architect Christopher Alexander and has been adapted for various other disciplines, particularly software engineering. Details An organized collection of design patterns that relate to a particular field is called a pattern language. This language gives a common terminology for discussing the situations designers are faced with. Documenting a pattern requires explaining why a particular situation causes problems, and how the components of the pattern relate to each other to give the solution. Christopher Alexander describes common design problems as arising from "conflicting forces"—such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern's author to determine which solution is "best", must also be documented within the pattern. Pattern documentation should also explain when it is applicable. Since two houses may be very different from one another, a design pattern for houses must be broad enough to apply to both of them, but not so vague that it doesn't help the designer make decisions. The range of situations in which a pattern can be used is called its context. Some examples might be "all houses", "all two-story houses", or "all places where people spend time". For instance, in Christopher Alexander's work, bus stops and waiting rooms in a surgery center are both within the context for the pattern "A PLACE TO WAIT". A sketch of these documentation elements in code appears below. Examples Software design pattern, in software design Architectural pattern, for software architecture Interaction design pattern, used in interaction design / human–computer interaction Pedagogical patterns, in teaching Pattern gardening, in gardening Business models also have design patterns. See also Style guide Design paradigm Anti-pattern Dark pattern References Further reading
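As a rough illustration (not from the article), the elements the article says a documented pattern needs, namely its context, the conflicting forces, the author's values, and a proposed solution, can be captured in a simple structure. The field names and the sample pattern below are invented for this example only.

```python
# Illustrative only: field names and the sample entry are invented here,
# mirroring the documentation elements described in the article.
from dataclasses import dataclass, field

@dataclass
class PatternDoc:
    name: str
    context: str            # range of situations where the pattern applies
    forces: list[str]       # the conflicting concerns that cause the problem
    solution: str           # guidance toward a decision, not a prescription
    values: list[str] = field(default_factory=list)  # what the author optimized for

sunny_room = PatternDoc(
    name="Sunny Room (hypothetical example)",
    context="all houses",
    forces=["the room should be sunny",
            "the room should not overheat on summer afternoons"],
    solution="include enough windows to direct light all around the room",
    values=["occupants' enjoyment of the room"],
)
print(sunny_room.context)
```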
Design pattern
Engineering
506
3,552,981
https://en.wikipedia.org/wiki/Surface%20freezing
Surface freezing is the appearance of long-range crystalline order in a near-surface layer of a liquid. The surface freezing effect is opposite to the far more common surface melting, or premelting. Surface freezing was experimentally discovered in melts of alkanes and related chain molecules in the early 1990s independently by two groups: John Earnshaw and his group (Queen's University of Belfast) used light scattering. This method did not allow a determination of the frozen layer's thickness, or of whether it is laterally ordered. A group led by Ben Ocko (Brookhaven National Laboratory), Eric Sirota (Exxon) and Moshe Deutsch (Bar-Ilan University, Israel) independently discovered the same effect using x-ray surface diffraction, which allowed them to show that the frozen layer is a crystalline monolayer, with molecules oriented roughly along the surface normal and ordered in a hexagonal lattice. A related effect, the existence of a smectic phase at the surface of a nematic liquid bulk, was observed in liquid crystals by Jens Als-Nielsen (Risø National Laboratory, Denmark) and Peter Pershan (Harvard University) in the early 1980s. However, the surface layer there was neither ordered nor confined to a single layer. Surface freezing has since been found in a wide range of chain molecules and at various interfaces: liquid-air, liquid-solid and liquid-liquid. References Phases of matter
Surface freezing
Physics,Chemistry
298
961,303
https://en.wikipedia.org/wiki/Messier%2034
Messier 34 (also known as M34, NGC 1039, or the Spiral Cluster) is a large and relatively near open cluster in Perseus. It was probably discovered by Giovanni Battista Hodierna before 1654 and included by Charles Messier in his catalog of comet-like objects in 1764. Messier described it as, "A cluster of small stars a little below the parallel of γ (Andromedae). In an ordinary telescope of 3 feet one can distinguish the stars." Based on its distance modulus of 8.38, it is about 1,500 light-years away. The cluster contains about 400 stars ranging from 0.12 to 1 solar mass. It spans about 35′ on the sky, which translates to a true radius of about 7.5 light-years at that distance. The cluster is just visible to the naked eye in very dark conditions, well away from city lights. It is possible to see it in binoculars when light pollution is low. The age of this cluster lies between the ages of the Pleiades open cluster at 100 million years and the Hyades open cluster at 800 million years. Specifically, comparison between observed stellar spectra and the values predicted by stellar evolutionary models suggests an age of 200–250 million years. This is roughly the age at which stars with half a solar mass enter the main sequence. By comparison, stars like the Sun enter the main sequence after 30 million years. The average proportion of elements with higher atomic numbers than helium is termed the metallicity by astronomers. This is expressed by the logarithm of the ratio of iron to hydrogen, compared to the same proportion in the Sun. For M34, the metallicity has a value of [Fe/H] = +0.07 ± 0.04. This is equivalent to a 17% higher proportion of iron compared to the Sun. Other elements show a similar abundance, save for nickel, which is underabundant. At least 19 members are white dwarfs. These are stellar remnants of progenitor stars of up to eight solar masses that have evolved through the main sequence and no longer sustain thermonuclear fusion to generate energy. Seventeen of these are of spectral type DA or DAZ, while one is a type DB and the last is a type DC. See also List of Messier objects References External links Messier 34, SEDS Messier pages Messier 34 – Image by Donald P. Waid Messier 034 Orion–Cygnus Arm
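The quantitative claims above (the distance implied by the distance modulus, and the iron abundance implied by [Fe/H]) can be checked with a few lines of arithmetic. The sketch below is my own addition, using only the numbers quoted in the article.

```python
import math

mu = 8.38                       # distance modulus quoted above
d_pc = 10 ** (mu / 5 + 1)       # from mu = 5 * log10(d / 10 pc)
d_ly = d_pc * 3.2616            # parsecs to light-years
print(round(d_ly))              # ~1550, i.e. "about 1,500 light-years"

fe_h = 0.07                     # [Fe/H] quoted above
print(round(10 ** fe_h, 2))     # ~1.17, the "17% higher proportion of iron"

# Half of the 35-arcminute span, projected at that distance:
r_ly = d_ly * math.tan(math.radians(17.5 / 60))
print(round(r_ly, 1))           # ~7.9, close to the quoted 7.5-light-year radius
```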
Messier 34
Astronomy
515
53,278,977
https://en.wikipedia.org/wiki/Language%20Server%20Protocol
The Language Server Protocol (LSP) is an open, JSON-RPC-based protocol for use between source code editors or integrated development environments (IDEs) and servers that provide "language intelligence tools": programming language-specific features like code completion, syntax highlighting and marking of warnings and errors, as well as refactoring routines. The goal of the protocol is to allow programming language support to be implemented and distributed independently of any given editor or IDE. In the early 2020s, LSP quickly became a "norm" for providers of language intelligence tools. History LSP was originally developed for Microsoft Visual Studio Code and is now an open standard. On June 27, 2016, Microsoft announced a collaboration with Red Hat and Codenvy to standardize the protocol's specification. The protocol was originally supported and adopted by these three companies. Its specification is hosted and developed on GitHub. Background Modern IDEs provide programmers with sophisticated features like code completion, refactoring, navigating to a symbol's definition, syntax highlighting, and error and warning markers. For example, in a text-based programming language, a programmer might want to rename a method read. The programmer could either manually edit the respective source code files and change the appropriate occurrences of the old method name into the new name, or instead use an IDE's refactoring capabilities to make all the necessary changes automatically. To be able to support this style of refactoring, an IDE needs a sophisticated understanding of the programming language that the program's source is written in. A programming tool without such an understanding—for example, one that performs a naive search-and-replace instead—could introduce errors. When renaming a read method, for example, the tool should not replace the partial match in a variable that might be called readyState, nor should it replace the portion of a code comment containing the word "already". Neither should renaming a local variable read end up altering identically-named variables in other scopes. Conventional compilers or interpreters for a specific programming language are typically unable to provide these language services, because they are written with the goal of either transforming the source code into object code or immediately executing the code. Additionally, language services must be able to handle source code that is not well-formed, e.g. because the programmer is in the middle of editing and has not yet finished typing a statement, procedure, or other construct. Moreover, small changes to a source code file made during typing usually change the semantics of the program. In order to provide instant feedback to the user, the editing tool must be able to very quickly evaluate the syntactical and semantic consequences of a specific modification. Compilers and interpreters are therefore poor candidates for producing the information needed for an editing tool to consume. Prior to the design and implementation of the Language Server Protocol for the development of Visual Studio Code, most language services were tied to a given IDE or other editor. In the absence of the Language Server Protocol, language services are typically implemented by using a tool-specific extension API. Providing the same language service to another editing tool requires effort to adapt the existing code so that the service may target the second editor's extension interfaces.
The Language Server Protocol allows for decoupling language services from the editor so that the services may be contained within a general-purpose language server. Any editor can inherit sophisticated support for many different languages by making use of existing language servers. Similarly, a programmer involved with the development of a new programming language can make services for that language available to existing editing tools. Making use of language servers via the Language Server Protocol thus also reduces the burden on vendors of editing tools, because vendors do not need to develop language services of their own for the languages the vendor intends to support, as long as the language servers have already been implemented. The Language Server Protocol also enables the distribution and development of servers contributed by an interested third party, such as end users, without additional involvement by either the vendor of the compiler for the programming language in use or the vendor of the editor to which the language support is being added. LSP is not restricted to programming languages. It can be used for any kind of text-based language, like specifications or domain-specific languages (DSL). Technical overview When a user edits one or more source code files using a language server protocol-enabled tool, the tool acts as a client that consumes the language services provided by a language server. The tool may be a text editor or IDE, and the language services could be refactoring, code completion, etc. The client informs the server about what the user is doing, e.g., opening a file or inserting a character at a specific text position. The client can also request the server to perform a language service, e.g. to format a specified range in the text document. The server answers a client's request with an appropriate response. For example, the formatting request is answered either by a response that transfers the formatted text to the client or by an error response containing details about the error. The Language Server Protocol defines the messages to be exchanged between client and language server. They are JSON-RPC messages preceded by HTTP-like headers; a framing sketch appears below. Messages may originate from the server or the client. The protocol does not make any provisions about how requests, responses and notifications are transferred between client and server. For example, client and server could be components within the same process exchanging JSON strings via method calls. They could also be different processes on the same or on different machines communicating via network sockets. Registry There are lists of LSP-compatible implementations, maintained by the community-driven Langserver.org and by Microsoft. References Further reading External links Communications protocols Open standards Programming tools
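To make the wire format above concrete, here is a small sketch (my own, not from the article) that frames a JSON-RPC request with the Content-Length header the protocol uses. The method name and parameter shape follow the published LSP specification; the file URI and position are made up for the example. How the bytes then travel (pipe, socket, or in-process call) is deliberately left open, as the article notes the protocol itself is transport-agnostic.

```python
import json

def frame(message: dict) -> bytes:
    """Serialize a JSON-RPC message and prepend the LSP Content-Length header."""
    body = json.dumps(message).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/definition",           # a "go to definition" request
    "params": {
        "textDocument": {"uri": "file:///project/main.py"},  # invented example URI
        "position": {"line": 10, "character": 4},  # zero-based, per the spec
    },
}
print(frame(request).decode("utf-8"))
```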
Language Server Protocol
Technology
1,195
442,816
https://en.wikipedia.org/wiki/Integrated%20Taxonomic%20Information%20System
The Integrated Taxonomic Information System (ITIS) is an American partnership of federal agencies designed to provide consistent and reliable information on the taxonomy of biological species. ITIS was originally formed in 1996 as an interagency group within the US federal government, involving several US federal agencies, and has now become an international body, with Canadian and Mexican government agencies participating. The database draws from a large community of taxonomic experts. Primary content staff are housed at the Smithsonian National Museum of Natural History and IT services are provided by a US Geological Survey facility in Denver. The primary focus of ITIS is North American species, but many biological groups exist worldwide and ITIS collaborates with other agencies to increase its global coverage. Reference database ITIS provides an automated reference database of scientific and common names for species. It contains over 839,000 scientific names, synonyms, and common names for terrestrial, marine, and freshwater taxa from all biological kingdoms (animals, plants, fungi, and microbes). While the system focuses on North American species, it also includes many species not found in North America, especially among birds, fishes, amphibians, mammals, bacteria, many reptiles, several plant groups, and many invertebrate animal groups. Data presented in ITIS are considered public information, and may be freely distributed and copied, though appropriate citation is requested. ITIS is frequently used as the de facto source of taxonomic data in biodiversity informatics projects. ITIS couples each scientific name with a stable and unique taxonomic serial number (TSN) as the "common denominator" for accessing information on such issues as invasive species, declining amphibians, migratory birds, fishery stocks, pollinators, agricultural pests, and emerging diseases; a TSN lookup sketch appears at the end of this article. It presents the names in a standard classification that contains author, date, distributional, and bibliographic information related to the names. In addition, common names are available through ITIS in the major official languages of the Americas (English, French, Spanish, and Portuguese). Catalogue of Life ITIS and its international partner, Species 2000, cooperate to annually produce the Catalogue of Life, a checklist and index of the world's species. The Catalogue of Life's goal was to complete the global checklist of 1.9 million species by 2011. As of May 2012, the Catalogue of Life had reached 1.4 million species—a major milestone in its quest to complete the first up-to-date comprehensive catalogue of all living organisms. ITIS and the Catalogue of Life are core to the Encyclopedia of Life initiative announced in May 2007. EOL will be built largely on various Creative Commons licenses. Legacy database Of the ~714,000 (May 2016) scientific names in the current database, approximately 210,000 were inherited from the database formerly maintained by the National Oceanographic Data Center (NODC) of the US National Oceanic and Atmospheric Administration (NOAA). The newer material has been checked to higher standards of taxonomic credibility, and over half of the original material has been checked and improved to the same standard. Building on efforts by Richard Swartz, Marvin Wass, and Donald Boesch in 1972 to establish an "intelligent" numeric coding system for taxonomy, the first edition of the NODC Taxonomic Code was published in 1977. Hard copy editions were published until 1984.
Subsequent editions were published digitally until 1996. 1996 marked the release of NODC version 8, which served as a bridge to ITIS, which abandoned "intelligent" numeric codes in favor of more stable, but "un-intelligent" Taxonomic Serial Numbers. Standards Biological taxonomy is not fixed, and opinions about the correct status of taxa at all levels, and their correct placement, are constantly revised as a result of new research. Many aspects of classification remain a matter of scientific judgment. The ITIS database is updated to take account of new research as it becomes available. Records within ITIS include information about how far it has been possible to check and verify them. Its information should be checked against other sources where these are available, and against the primary research scientific literature where possible. Member agencies Agriculture and Agri-Food Canada Comisión Nacional para el Conocimiento y Uso de la Biodiversidad (CONABIO) National Oceanic and Atmospheric Administration National Park Service NatureServe Smithsonian Institution United States Department of Agriculture United States Environmental Protection Agency United States Geological Survey United States Fish and Wildlife Service See also Encyclopedia of Life PlantList Wikispecies World Register of Marine Species References External links – Integrated Taxonomic Information System (ITIS) Canada Interface: Integrated Taxonomic Information System (ITIS*CA) Mexico Interface: Sistema Integrado de Información Taxonómica (SIIT*MX) (archived link) Taxonomy (biology) organizations International organizations based in the United States Online taxonomy databases Organizations established in 1996 Biology websites Biodiversity databases
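As an illustration of the TSN concept described above, the sketch below queries ITIS for a scientific name and pulls out the matching serial numbers. ITIS does publish web services at itis.gov, but the exact endpoint path and the response field names used here are assumptions made for this example and should be checked against the current ITIS web-service documentation before use.

```python
import json
import urllib.parse
import urllib.request

def lookup_tsns(scientific_name: str) -> list:
    """Return candidate TSNs for a scientific name via the ITIS web service.

    NOTE: the endpoint path and response keys below are assumed, not verified.
    """
    url = ("https://www.itis.gov/ITISWebService/jsonservice/"
           "searchByScientificName?srchKey="
           + urllib.parse.quote(scientific_name))
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    matches = data.get("scientificNames") or []   # assumed response shape
    return [m.get("tsn") for m in matches if m]

print(lookup_tsns("Ursus arctos"))
```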
Integrated Taxonomic Information System
Biology,Environmental_science
989
38,664,288
https://en.wikipedia.org/wiki/To%20This%20Day
"To This Day" is a 2011 spoken word poem written by Shane Koyczan. In the poem, Koyczan talks about bullying he and others received during their lives and its deep, long-term impact. Koyczan first came to international notice when he read his poetry at the 2010 Vancouver Olympics' Opening Ceremony. The poem was first released on Koyczan's 2012 album "Remembrance Year". Animated film An animated film for "To This Day" was released onto YouTube on February 19, 2013. It features the work of 12 animators, supported by 80 artists. The video is part of the To This Day project and was released to mark Pink Shirt Day, an anti-bullying initiative. The project aims to highlight the deep and long-term impact of bullying on the individual and help schools engage better with bullying and child suicide. Reception The video received 1.4 million hits in the first two days and currently has over 25 million. Reception for the poem has been overwhelmingly positive, receiving coverage on CBS and CBC News. Koyczan was chosen to read the poem and show to film at the TED conference, California, in 2013, accompanied by violinist Hannah Epperson. After the video's release Koyczan received hundreds of letters from people that have experienced bullying. In 2013, Koyczan commented: My hope is [that it] would reach some of the people who were just out there looking for something to get them through another day. When I wrote the poem two years ago and people started coming to me because they just needed to talk after hearing it, I realized this is not a Canadian problem or an American problem, it’s everywhere...I believe the bullies must be forgiven. That’s how we heal. Koyczan describes how, following torment at school, he became a bully himself around the age of 14, an image of the thing he hated. He says that keeping communication channels open and clear between parents and their children will help address bullying issues. He commented that he hopes the poem and the project will promote a connectivity between those who have suffered from bullying, that they might feel less isolated. The project aims to help schools engage better with bullying and child suicide. The books has since been published by Annick Press as a graphic novel, entitled "To This Day: For the Bullied and Beautiful". References External links YouTube video 2013 poems Canadian poems Harassment and bullying 2013 YouTube videos Viral videos
To This Day
Biology
505
40,615,963
https://en.wikipedia.org/wiki/Rainbow%20Loom
Rainbow Loom is a plastic tool used to weave colourful rubber and plastic bands (called loom bands) into decorative items such as bracelets and charms. It was invented in 2010 by Cheong Choon Ng in Novi, Michigan. Description The Rainbow Loom is a small plastic pegboard. It has push pin-type pegs over which small, coloured rubber bands are looped and pulled by a Rainbow Loom crochet hook. The resulting looped knots, known as Brunnian links, can be assembled on the loom into bracelets and other shapes. The Rainbow Loom kit includes the loom (the pegboard), a Rainbow Loom hook, 25 special C-shaped clips to connect both ends of the bracelet, and 600+ small rubber bands in assorted colours. History Rainbow Loom was created by Cheong Choon Ng, a Malaysian immigrant of Chinese descent who came to the United States in 1991 to attend Wichita State University, where he earned a graduate degree in mechanical engineering. He was employed as a crash-test engineer for Nissan Motor Company in 2010. He conceived the idea of a toy loom for rubber-band crafting after seeing his young daughters make rubber-band bracelets. He tried to show them how they could link the rubber bands together but was unsuccessful because of the large size of his fingers, so he fitted a scrap board with multiple rows of pegs on which the bands could be linked more easily. The bracelets became popular with the neighborhood children, and his daughter suggested that he sell them. He spent six months developing the loom kit and designed 28 versions. His prototype, which he called Twistz Bandz, used a wooden board, pegs, and dental hooks. He invested $10,000 and found a factory in China to manufacture the parts, which he and his wife assembled in their home in June 2011. Ng decided to rename his product after discovering that an elastic hair band on the market was named Twist Band, and his brother and niece came up with the name Rainbow Loom. Efforts to sell the loom online and in toy stores, however, were unsuccessful because customers did not understand how to use the product. Ng started a website and filmed instructional videos featuring his daughters and niece. In summer 2012, Ng received his first store orders from franchises of Learning Express Toys, a specialty crafts chain, and sales picked up. In June 2013 arts and crafts retail chain Michaels test-marketed the product in 32 stores; by August the chain was carrying Rainbow Loom in its 1,100 U.S. locations. Rainbow Loom is also sold at Mastermind Toys in Canada and specialty stores. As of August 2013, 600 retailers were selling Rainbow Loom at a retail price of $15 to $17. The kits are manufactured in China, and Ng supervises distribution out of a warehouse near his home. In 2013, Ng worked with The Beadery and Toner Plastics to produce the Wonder Loom, a redesigned version of the Rainbow Loom that is made in the United States. The Wonder Loom is sold by Walmart. In April 2014, Ng released a travel-sized version of the Rainbow Loom called the Monster Tail, which allows simple bracelets to be made on only eight pegs, arranged in a rectangle. In mid-May 2015, Rainbow Loom released two new products: The Alpha Loom, another travel-sized loom that can be used to make vibrantly coloured name bracelets with special types of new bands, which are twice as thick but half the size of regular bands. It has seven pegs on either side, and it comes with a special hook bar that carries seven hooks, so users can hook over seven bands at once instead of one.
It also comes with an instruction manual containing pixelated grids for users to photocopy, cut out, measure around the wrist, and use to design patterns themselves, with pictures and letters to spell words. The Hair Loom Studio, also released in May 2015, is used to make designs on the Rainbow Loom, Finger Loom, or Monster Tail, which can then be transferred onto the user's hair by pushing the design off a "guide tube" onto a long strand of hair. The bands for this are made of silicone and can be removed without pulling at the hair. There are two versions of the Hair Loom Studio, a large "double" and a small "single" loom. Reception Targeted at children aged 8 to 14, Rainbow Loom became a popular pastime in summer camps and summer clubs in 2013, according to The New York Times and Today. Grade school-age children make and swap their rubber-band bracelets in the same way as friendship bracelets, and children have posted thousands of their own instructional videos online. As of October 2013, Rainbow Loom's YouTube channel featured 66 how-to videos and had received nearly 4 million views. In November 2013 third-graders at St. John the Worker school in Orefield, Pennsylvania participated in a "Rainbow Loom-a-thon", weaving rubber-band bracelets for cancer patients. Rainbow Loom was named one of the three most popular toys of 2013 by Cyber Monday Awards and was the most-searched toy on Google that same year. It was described in a 2014 BBC News article as "one of the most popular toys in the world". Among the celebrities seen wearing Rainbow Loom bracelets given to them by fans are Prince William, Duke of Cambridge, Catherine, Duchess of Cambridge, David Beckham, Harry Styles, Miley Cyrus, and Pope Francis. In schools In October 2013 two New York City schools banned Rainbow Loom bracelets, stating they were distracting students in the classroom and breeding animosity in the playground. Two Orlando, Florida schools have also enforced strict rules on wearing and trading Rainbow Loom bracelets. A Wallingford, Connecticut school also banned the creation and exchange of Rainbow Loom bracelets, due to arguments arising from band exchanges.
However, it was confirmed that only some non-Rainbow-Loom-brand charms, and not the bands, were affected. See also Silly Bandz References External links Taking Stock: Rainbow Loom Inventor on How He Got Started Video interview on Bloomberg Television "Blockbuster Toy Rainbow Loom: Weaving, Rubber Bands, And Digital Literacy" Forbes, October 23, 2013 "Boys Love Rainbow Loom, Defying Stereotype and Delighting Moms Everywhere" Time, October 25, 2013 "For Power Suits in Executive Suites, the Latest Accessory Is Rainbow Loom" The Wall Street Journal, March 24, 2014 "I Invented the Loom Band" The Guardian, September 27, 2014 2010s toys Art and craft toys Products introduced in 2010 Weaving equipment 2010s fads and trends Malaysian inventions
Rainbow Loom
Engineering
1,693
228,900
https://en.wikipedia.org/wiki/Propylene%20glycol
Propylene glycol (IUPAC name: propane-1,2-diol) is a viscous, colorless liquid. It is almost odorless and has a faintly sweet taste. Its chemical formula is CH3CH(OH)CH2OH. As it contains two alcohol groups, it is classified as a diol. An aliphatic diol may also be called a glycol. It is miscible with a broad range of solvents, including water, acetone, and chloroform. In general, glycols are non-irritating and have very low volatility. For certain uses as a food additive, propylene glycol is considered GRAS by the US Food and Drug Administration, and is approved for food manufacturing. In the European Union, it has the E-number E1520 for food applications. For cosmetics and pharmacology, the number is E490. Propylene glycol is also present in propylene glycol alginate, which is known as E405. Propylene glycol is approved and used as a vehicle for topical, oral, and some intravenous pharmaceutical preparations in the US and Europe. Structure The compound is sometimes called (alpha) α-propylene glycol to distinguish it from the isomer propane-1,3-diol, known as (beta) β-propylene glycol. Propylene glycol is chiral. Commercial processes typically use the racemate. The S-isomer is produced by biotechnological routes. Production Industrial Industrially, propylene glycol is mainly produced from propylene oxide (for food-grade use). According to a 2018 source, 2.16 million tonnes are produced annually. Manufacturers use either a non-catalytic high-temperature process or a catalytic method that proceeds at lower temperatures in the presence of ion exchange resin or a small amount of sulfuric acid or alkali. Final products contain 20% propylene glycol, 1.5% dipropylene glycol, and small amounts of other polypropylene glycols. Further purification produces finished industrial grade or USP/JP/EP/BP grade propylene glycol that is typically 99.5% or greater. Use of USP (US Pharmacopoeia) propylene glycol can reduce the risk of Abbreviated New Drug Application (ANDA) rejection. Propylene glycol can also be obtained from glycerol, a byproduct of biodiesel production. This starting material is usually reserved for industrial use because of the noticeable odor and taste that accompanies the final product. Laboratory (S)-Propanediol is synthesized via fermentation methods. Lactic acid and lactaldehyde are common intermediates. Dihydroxyacetone phosphate, one of the two products of the breakdown (glycolysis) of fructose 1,6-bisphosphate, is a precursor to methylglyoxal. This conversion is the basis of a potential biotechnological route to the commodity chemical 1,2-propanediol. Three-carbon deoxysugars are also precursors to the 1,2-diol. A small-scale, nonbiological route starts from D-mannitol. Applications Polymers Forty-five percent of propylene glycol produced is used as a chemical feedstock for the production of unsaturated polyester resins. In this regard, propylene glycol reacts with a mixture of unsaturated maleic anhydride and isophthalic acid to give a copolymer. This partially unsaturated polymer undergoes further crosslinking to yield thermoset plastics. Related to this application, propylene glycol reacts with propylene oxide to give oligomers and polymers that are used to produce polyurethanes. Propylene glycol is used in water-based acrylic architectural paints to extend dry time, which it accomplishes by preventing the surface from drying too quickly, owing to its slower evaporation rate compared to water.
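As a small aside (my own addition, not from the article), the molar mass implied by the formula given at the top of this article, CH3CH(OH)CH2OH, i.e. C3H8O2, is easy to verify:

```python
# Standard atomic weights, rounded; the formula C3H8O2 is from the article.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
formula = {"C": 3, "H": 8, "O": 2}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")  # prints about 76.09 g/mol
```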
Food and drug In regulated amounts, propylene glycol is designated as safe for food manufacturing as an anticaking agent, emulsifier, flavor agent, humectant, texturizer, stabilizer, solvent, antioxidant, antimicrobial agent, and thickener. As regulated by the US FDA for substances deemed GRAS, propylene glycol is "not subject to premarket review and approval by FDA because it is generally recognized, by qualified experts, to be safe under the intended conditions of use." The scientific panel evaluating propylene glycol for food manufacturing stated its conclusion as: "There is no evidence in the available information on [propylene glycol] that demonstrates, or suggests reasonable grounds to suspect, a hazard to the public when they are used at levels that are now current or might reasonably be expected in the future." FDA regulations define maximum limits for the use of propylene glycol in various food categories under good manufacturing practices:
2.0% for general food categories
2.5% for frozen dairy products
5% for alcoholic beverages
5% for nuts and nut products
24% for confections and frostings
97% for seasonings and flavorings
The European Food Safety Authority authorizes propylene glycol for use in food manufacturing, establishing a safe daily intake of 25 mg per kg of body weight. Specifically for ice cream or ice milk products, Health Canada permits the use of propylene glycol mono fatty acid esters as an emulsifier and stabilizer at a maximum level of 0.35% of the ice cream mix. Propylene glycol is used in a variety of other edible items, such as baked goods, desserts, prepared meals, flavoring mixes, candy, popcorn, whipped dairy products, and soda. It is also used in beer to stabilize the foam. Vaporizers used for delivery of pharmaceuticals or personal-care products often include propylene glycol among the ingredients. In alcohol-based hand sanitizers, it is used as a humectant to prevent the skin from drying. Propylene glycol is used as a solvent in many pharmaceuticals, including oral, injectable, and topical formulations. Many pharmaceutical drugs which are insoluble in water utilize propylene glycol as a solvent and carrier; benzodiazepine tablets are one example. Propylene glycol is also used as a solvent and carrier for many pharmaceutical capsule preparations. Additionally, certain formulations of artificial tears use propylene glycol as an ingredient. Antifreeze The freezing point of water is depressed when mixed with propylene glycol. It is used as an aircraft de-icing and anti-icing fluid. A 50% water-diluted and heated solution is used for removal of icing accretions from the fuselages of commercial aircraft on the ground (de-icing), and 100% undiluted cold solution is used only on the wings and tail surfaces of an aircraft in order to prevent ice accretion from forming during a specific period of time before takeoff (anti-icing). Normally, this time frame is limited to 15–90 minutes, depending on the severity of snowfall and the outside air temperature. Water–propylene glycol mixtures, dyed pink to indicate that the mixture is relatively nontoxic, are sold under the name of RV or marine antifreeze. Propylene glycol is frequently used as a substitute for ethylene glycol in low-toxicity, environmentally friendly automotive antifreeze. It is also used to winterize the plumbing systems in vacant structures. The eutectic composition is 60:40 propylene glycol:water, which freezes at −60 °C.
The −50 °F/−45 °C commercial product is, however, water rich; a typical formulation is 40:60. Electronic cigarette liquid Propylene glycol, vegetable glycerin, or a mixture of both are the main ingredients in the e-liquid used in electronic cigarettes. They are aerosolized to resemble smoke and serve as carriers for substances such as nicotine and flavorants. Miscellaneous applications
As a solvent for many substances, both natural and synthetic.
As a humectant (E1520).
As a freezing point depressant for slurry ice.
In veterinary medicine, as an oral treatment for hyperketonaemia in ruminants.
In the cosmetics industry, where propylene glycol is very commonly used as a carrier or base for various types of makeup.
For trapping and preserving insects (including as a DNA preservative).
For the creation of theatrical smoke and fog in special effects for film and live entertainment. So-called 'smoke machines' or 'hazers' vaporize a mixture of propylene glycol and water to create the illusion of smoke. While many of these machines use a propylene glycol-based fluid, some use oil. Those which use propylene glycol do so in a process identical to how electronic cigarettes work, utilizing a heating element to produce a dense vapor. The vapor produced by these machines has the aesthetic look and appeal of smoke, but without exposing performers and stage crew to the harms and odors associated with actual smoke.
As an additive in the polymerase chain reaction (PCR) to reduce the melting temperature of nucleic acids for targeting of GC-rich sequences.
As a surfactant, to prevent water from beading up on objects. It is used in photography for this purpose to reduce the risk of water spots, or deposits of minerals from water used to process film or paper.
Safety in humans When used in ordinary quantities, propylene glycol has no measurable effect on development or reproduction in animals and probably does not adversely affect human development or reproduction. The safety of electronic cigarettes—which utilize propylene glycol-based preparations of nicotine or THC and other cannabinoids—is the subject of much controversy. Vitamin E acetate has also been identified in this controversy. Oral administration The acute oral toxicity of propylene glycol is very low, and large quantities are required to cause perceptible health effects in humans; in fact, the toxicity of propylene glycol is one third that of ethanol. Propylene glycol is metabolized in the human body into pyruvic acid (a normal part of the glucose-metabolism process, readily converted to energy), acetic acid (handled by ethanol metabolism), lactic acid (a normal acid generally abundant during digestion), and propionaldehyde (a potentially hazardous substance). According to the Dow Chemical Company, the LD50 (the dose that kills 50% of the test population) for rats is 20 g/kg (oral/rat). Toxicity generally occurs at plasma concentrations over 4 g/L, which requires extremely high intake over a relatively short period of time, or use as a vehicle for drugs or vitamins given intravenously or orally in large bolus doses. It would be nearly impossible to reach toxic levels by consuming foods or supplements, which contain at most 1 g/kg of PG, except for alcoholic beverages in the US, which are allowed 5 percent (50 g/kg). Cases of propylene glycol poisoning are usually related to either inappropriate intravenous administration or accidental ingestion of large quantities by children.
The potential for long-term oral toxicity is also low. In a National Toxicology Program continuous breeding study, no effects on fertility were observed in male or female mice that received propylene glycol in drinking water at doses up to 10,100 mg/kg bw/day. No effects on fertility were seen in either the first or second generation of treated mice. In a 2-year study, 12 rats were provided with feed containing as much as 5% propylene glycol and showed no apparent ill effects. Skin and eye contact Propylene glycol may be non-irritating to the skin; see the Allergic reaction section below for details on allergic reactions. Undiluted propylene glycol is minimally irritating to the eye, producing slight transient conjunctivitis; the eye recovers after the exposure is removed. A 2018 human volunteer study found that 10 male and female subjects undergoing 4-hour exposures to concentrations of up to 442 mg/m3, and 30-minute exposures to concentrations of up to 871 mg/m3 in combination with moderate exercise, did not show pulmonary function deficits or signs of ocular irritation, with only slight symptoms of respiratory irritation reported. Propylene glycol has not caused sensitization or carcinogenicity in laboratory animal studies, nor has it demonstrated genotoxic potential. Inhalation Inhalation of propylene glycol vapors appears to present no significant hazard in ordinary applications. Due to the lack of chronic inhalation data, it is recommended that propylene glycol not be used in inhalation applications such as theatrical productions, or in antifreeze solutions for emergency eye wash stations. Recently, propylene glycol (commonly alongside glycerol) has been included as a carrier for nicotine and other additives in e-cigarette liquids, the use of which presents a novel form of exposure. The potential hazards of chronic inhalation of propylene glycol in this setting are as yet unknown. According to a 2010 study, the concentration of PGEs (counted as the sum of propylene glycol and glycol ethers) in indoor air, particularly bedroom air, has been linked to an increased risk of developing numerous respiratory and immune disorders in children, including asthma, hay fever, eczema, and allergies, with the increased risk ranging from 50% to 180%. This concentration has been linked to the use of water-based paints and water-based system cleansers. However, the study authors write that glycol ethers, and not propylene glycol, are the likely culprit. Intravenous administration Studies with intravenously administered propylene glycol have resulted in LD50 values in rats and rabbits of 7 mL/kg BW. Ruddick (1972) also summarized intramuscular LD50 data of 13–20 mL/kg BW for the rat and 6 mL/kg BW for the rabbit. Adverse effects from intravenous administration of drugs that use propylene glycol as an excipient have been seen in a number of people, particularly with large bolus dosages. Responses may include CNS depression, "hypotension, bradycardia, QRS and T abnormalities on the ECG, arrhythmia, cardiac arrhythmias, seizures, agitation, serum hyperosmolality, lactic acidosis, and haemolysis". A high percentage (12–42%) of directly injected propylene glycol is eliminated or excreted in urine unaltered depending on dosage, with the remainder appearing in its glucuronide form. The speed of renal filtration decreases as dosage increases, which may be due to propylene glycol's mild anesthetic / CNS-depressant properties as an alcohol.
In one case, intravenous administration of propylene glycol-suspended nitroglycerin to an elderly man may have induced coma and acidosis. However, no confirmed lethality from propylene glycol was reported. Animals Propylene glycol is an approved food additive for dog and sugar glider food under the category of animal feed and is generally recognized as safe for dogs, with an LD50 of 9 mL/kg. The LD50 is higher for most laboratory animals (20 mL/kg). However, it is prohibited for use in food for cats due to links to Heinz body formation and a reduced lifespan of red blood cells. Heinz body formation from MPG has not been observed in dogs, cattle, or humans. PG has been used in the dairy industry since the 1950s for cows showing signs of ketosis. The negative energy balance during the early stages of lactation can cause the animal's body to have lower glucose levels, inducing the liver to make up for this by the conversion of body fat, leading to several health conditions, e.g. displaced abomasum. PG "reduces the ruminal ratio of acetate to propionate, while increasing conversion of ruminal PG to propionate, and aid[s] in the closure of energy deficit in cattle." Allergic reaction Estimates of the prevalence of propylene glycol allergy range from 0.8% (10% propylene glycol in aqueous solution) to 3.5% (30% propylene glycol in aqueous solution). The North American Contact Dermatitis Group (NACDG) data from 1996 to 2006 showed that the most common site for propylene glycol contact dermatitis was the face (25.9%), followed by a generalized or scattered pattern (23.7%). Investigators believe that the incidence of allergic contact dermatitis to propylene glycol may be greater than 2% in patients with eczema or fungal infections, which are very common in countries with lesser sun exposure and lower-than-normal vitamin D balances. Therefore, propylene glycol allergy is more common in those countries. Because of its potential for allergic reactions and frequent use across a variety of topical and systemic products, propylene glycol was named the American Contact Dermatitis Society's Allergen of the Year for 2018. A recent publication from the Mayo Clinic reported a 0.85% incidence of positive patch tests to propylene glycol (100/11,738 patients), with an overall irritant rate of 0.35% (41/11,738 patients), during the 20-year period 1997–2016. 87% of the reactions were classified as weak and 9% as strong. The positive reaction rates were 0%, 0.26%, and 1.86% for 5%, 10%, and 20% propylene glycol respectively, increasing with each increase in concentration. The irritant reaction rates were 0.95%, 0.24%, and 0.5% for 5%, 10%, and 20% propylene glycol, respectively. Propylene glycol skin sensitization occurred in patients sensitive to a number of other concomitant positive allergens, the most common of which were Myroxylon pereirae resin, benzalkonium chloride, carba mix, potassium dichromate, and neomycin sulfate; for positive propylene glycol reactions, an overall median of 5 and mean of 5.6 concomitant positive allergens was reported. Environmental impacts Propylene glycol occurs naturally, probably as the result of anaerobic catabolism of sugars in the human gut. It is degraded by vitamin B12-dependent enzymes, which convert it to propionaldehyde. Propylene glycol is expected to degrade rapidly in water from biological processes, but is not expected to be significantly influenced by hydrolysis, oxidation, volatilization, bioconcentration, or adsorption to sediment.
Propylene glycol is readily biodegradable under aerobic conditions in freshwater, in seawater, and in soil. Therefore, propylene glycol is considered not persistent in the environment. Propylene glycol exhibits a low degree of toxicity toward aquatic organisms. Several guideline studies are available for freshwater fish, with the lowest observed lethal concentration being a 96-h LC50 value of 40,613 mg/L in a study with Oncorhynchus mykiss. Similarly, the lethal concentration determined in marine fish is a 96-h LC50 of >10,000 mg/L in Scophthalmus maximus. Although propylene glycol has low toxicity, it exerts a high biochemical oxygen demand (BOD) during degradation in surface waters. This process can adversely affect aquatic life by consuming oxygen needed by aquatic organisms for survival. Large quantities of dissolved oxygen (DO) in the water column are consumed when microbial populations decompose propylene glycol. References External links Agency for Toxic Substances and Disease Registry Alton E. Martin - Frank H. Murphy, Dow Chemical Company Propylene glycol website WebBook page for C3H8O2 ATSDR - Case Studies in Environmental Medicine: Ethylene Glycol and Propylene Glycol Toxicity U.S. Department of Health and Human Services (public domain) Propylene Glycol - chemical product info: properties, production, applications. ChemSub Online: Propylene glycol Propylene Glycol Toxicity in Pets Household chemicals Alcohol solvents Cosmetics chemicals Food additives Excipients Alkanediols Commodity chemicals Vicinal diols E-number additives
Propylene glycol
Chemistry
4,480
17,838,802
https://en.wikipedia.org/wiki/Weight%20on%20bit
Weight on bit (WOB), as expressed in the oil industry, is the amount of downward force exerted on the drill bit, normally measured in thousands of pounds. Weight on bit is provided by drill collars, which are thick-walled tubular pieces machined from solid bars of steel, usually plain carbon steel but sometimes nonmagnetic nickel-copper alloy or other nonmagnetic premium alloys. Gravity acts on the large mass of the collars to provide the downward force needed for the bits to efficiently break rock. To accurately control the amount of force applied to the bit, the driller carefully monitors the surface weight measured while the bit is just off the bottom of the wellbore. Next, the drillstring (and the drill bit) is slowly and carefully lowered until it touches bottom. After that point, as the driller continues to lower the top of the drillstring, more and more weight is applied to the bit, and correspondingly less weight is measured as hanging at the surface. If the surface measurement shows 20,000 pounds [9,080 kg] less weight than with the bit off bottom, then there should be 20,000 pounds of force on the bit (in a vertical hole); the sketch below illustrates the calculation. Some downhole Measurement While Drilling (MWD) sensors can measure weight on bit more accurately and transmit the data to the surface. References Oilfield terminology
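The surface calculation described above is simple enough to state in code. The following is a minimal sketch (my own, not an industry tool): weight on bit is taken as the drop in measured hookload between the off-bottom reading and the reading while drilling, which holds for a vertical hole where friction can be neglected. The hookload figures in the usage line are invented for illustration.

```python
def weight_on_bit(off_bottom_hookload_lbs: float,
                  drilling_hookload_lbs: float) -> float:
    """WOB = off-bottom hookload minus hookload while drilling (vertical hole)."""
    wob = off_bottom_hookload_lbs - drilling_hookload_lbs
    if wob < 0:
        raise ValueError("hookload while drilling exceeds the off-bottom reading")
    return wob

# The article's example: a 20,000 lb drop at surface implies 20,000 lb on the bit.
print(weight_on_bit(350_000, 330_000))  # 20000.0
```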
Weight on bit
Chemistry
285
63,426,102
https://en.wikipedia.org/wiki/PR%20toxin
Penicillium roqueforti toxin (PR toxin) is a mycotoxin produced by the fungus Penicillium roqueforti. In 1973, PR toxin was first partially characterized following its isolation from moldy corn on which the fungus had grown. Although its lethal dose was determined shortly after the isolation of the chemical, details of its toxic effects were not fully clarified until 1982, in a study with mice, rats, anesthetized cats and preparations of isolated rat auricles. Structure and reactivity PR toxin contains multiple functional groups, including acetoxy (CH3COO-), aldehyde (-CHO), α,β-unsaturated ketone (-C=C-CO) and two epoxides. The aldehyde group on C-12 is directly involved in the biological activity, as its removal leads to inactivation of the compound. The two epoxide groups do not play an important role, as their removal showed no difference in activity. When exposed to air, PR toxin may decompose. How and why this happens is, however, not known. Synthesis PR toxin is derived from the 15-carbon hydrocarbon aristolochene, a sesquiterpene produced from farnesyl diphosphate in a reaction catalyzed by the enzyme aristolochene synthase. Aristolochene then gains an alcohol, a ketone, and an additional alkene, mediated by hydroxysterol oxidase and quinone oxidoreductase. Addition of the fused-epoxide oxygen by P450 monooxygenase gives eremofortin B. Epoxidation of the isopropenyl sidechain, again by P450 monooxygenase, and addition of the acetyl group by an acetyltransferase gives eremofortin A. A short-chain oxidoreductase oxidizes a methyl group on the side-chain to eremofortin C, the primary alcohol analog of PR toxin, which is then further oxidized by a short-chain alcohol dehydrogenase to give the aldehyde. Eremofortin C has been isolated from microbial sources and found to be in a spontaneous equilibrium between an open-chain hydroxy–ketone structure and a lactol form. Metabolism Different experiments have shown the effects of PR toxin on liver cells in culture (in vitro) and in the liver (in vivo). In vitro PR toxin caused an inhibition of the incorporation of amino acids. These results show that the toxin was responsible for altering the translation process. Together with some earlier experiments, this proved that PR toxin was indeed active on cell metabolism. Another interesting finding is the decreased activity of respiratory control and oxidative phosphorylation in the (isolated) mitochondria of the liver. Apparently the amount of polysomes was not the determining factor: the inhibition was not decreased by increasing the amount of polysomes. The concentration of pH 5 enzymes, on the other hand, had a significant influence: a higher concentration of pH 5 enzymes made the inhibition less effective. These findings indicated that PR toxin was not altering the polysomes but in some way impairs the pH 5 enzymes. In vivo When PR toxin was directly administered to rats, protein synthesis in the liver was not as high as it normally would be. This in vivo administration showed that the isolated cells from the rat's liver had a much lower transcriptional capacity. The process did not alter the uptake of amino acids in the liver; the translational process was exclusively affected. The toxicity of this toxin is, as expected, closely tied to the inhibition of protein synthesis. However, the real toxic effect could be that some required proteins are not made in adequate amounts.
Mechanism of action Multiple experiments have shown the different effects of PR toxin: it can cause damage to the liver and kidney, can induce carcinogenicity, and can inhibit DNA replication, protein synthesis, and transcription in vivo. Most experiments on the effect of PR toxin focus on the inhibition of protein synthesis and impairment of the liver. PR toxin disrupts the transcription process in the liver. RNA polymerases I and II, the two main RNA polymerase systems in the liver, are affected by the toxin. The toxin needs no further enzymatic conversion to exert its effects on these systems. The liver seems to be the organ most influenced by PR toxin. Toxicity The toxicity of PR toxin was measured both intraperitoneally and orally. The first determined median lethal dose of pure PR toxin given intraperitoneally to weanling rats was 11 mg/kg. The oral median lethal dose was 115 mg/kg. The same study reported that ten minutes after an oral dose of 160 mg/kg, the animals experienced breathing problems that eventually led to death.
Acute rat studies (mg/kg):
- LDLo, oral route: 115
- LD50, intraperitoneal route: 11.6
- LD50, intravenous route: 8.2
Acute mouse studies (mg/kg):
- LD50, oral route: 72
- LD50, intraperitoneal route: 2
- LD50, intravenous route: 2
An acute human study has yet to be done, so no human LD50 values are known. However, there is one case report from 1982 in which toxic effects on a human are described. This person was working in a factory in which blue cheese was produced. She inhaled the mold of Penicillium roqueforti and developed hypersensitivity pneumonitis. Because of this lung inflammation, she experienced, among other things, coughing, dyspnea, reduced lung volumes and hypoxemia. Antibodies against the mold were found afterwards in serum and lavage fluid. Effects on animals Studies of the effects on animals were done on mice, rats, anesthetized cats and preparations of isolated rat auricles. Toxic effects in mice and rats included abdominal writhing, decrease of motor activity and respiration rate, weakness of the hind legs and ataxia. The effects differed with the route by which PR toxin was taken up. When the median lethal dose was ingested orally, the pathology was described as a swollen, gas-filled stomach and intestines as well as edema and congestion in the lungs. The kidney showed degenerative changes as well as hemorrhage. If PR toxin was injected intraperitoneally, cats, mice and rats developed ascites fluid and edema of the lungs and scrotum. Intravenous injection produced, in the same animals, large pleural and pericardial fluid volumes as well as lung edema. In conclusion, tissue cells and blood vessels were directly damaged by PR toxin. This caused leakage of fluid, resulting among other things in edema of the lungs and ascites fluid. The damage to the blood vessels also resulted in increased capillary permeability. This increased permeability led to a decrease in blood volume and direct damage to the vital organs, including the lungs, kidneys, liver and heart. References Mycotoxins Epoxides Acetates Ketones Heterocyclic compounds with 3 rings
PR toxin
Chemistry
1,545
714,053
https://en.wikipedia.org/wiki/Transduction%20%28genetics%29
Transduction is the process by which foreign DNA is introduced into a cell by a virus or viral vector. An example is the viral transfer of DNA from one bacterium to another, which is therefore an example of horizontal gene transfer. Transduction does not require physical contact between the cell donating the DNA and the cell receiving the DNA (which occurs in conjugation), and it is DNase resistant (transformation is susceptible to DNase). Transduction is a common tool used by molecular biologists to stably introduce a foreign gene into a host cell's genome (both bacterial and mammalian cells). Discovery (bacterial transduction) Transduction was discovered in Salmonella by Norton Zinder and Joshua Lederberg at the University of Wisconsin–Madison in 1952. In the lytic and lysogenic cycles Transduction happens through either the lytic cycle or the lysogenic cycle. When bacteriophages (viruses that infect bacteria) that are lytic infect bacterial cells, they harness the replicational, transcriptional, and translational machinery of the host bacterial cell to make new viral particles (virions). The new phage particles are then released by lysis of the host. In the lysogenic cycle, the phage chromosome is integrated as a prophage into the bacterial chromosome, where it can stay dormant for extended periods of time. If the prophage is induced (by UV light, for example), the phage genome is excised from the bacterial chromosome and initiates the lytic cycle, which culminates in lysis of the cell and the release of phage particles. Generalized transduction (see below) occurs in both cycles during the lytic stage, while specialized transduction (see below) occurs when a prophage is excised in the lysogenic cycle. As a method for transferring genetic material Transduction by bacteriophages The packaging of bacteriophage DNA into phage capsids has low fidelity. Small pieces of bacterial DNA may be packaged into the bacteriophage particles. There are two ways that this can lead to transduction. Generalized transduction Generalized transduction occurs when random pieces of bacterial DNA are packaged into a phage. It happens when a phage is in the lytic stage, at the moment that the viral DNA is packaged into phage heads. If the virus replicates using 'headful packaging', it attempts to fill the head with genetic material. If the viral genome leaves spare capacity, viral packaging mechanisms may incorporate bacterial genetic material into the new virion. Alternatively, generalized transduction may occur via recombination. Generalized transduction is a rare event and occurs on the order of 1 phage in 11,000. The new virus capsule that contains part bacterial DNA then infects another bacterial cell. When the bacterial DNA packaged into the virus is inserted into the recipient cell, three things can happen to it: The DNA is recycled for spare parts. If the DNA was originally a plasmid, it will re-circularize inside the new cell and become a plasmid again. If the new DNA matches a homologous region of the recipient cell's chromosome, it will exchange DNA material, similar to the process of bacterial recombination. Specialized transduction Specialized transduction is the process by which a restricted set of bacterial genes is transferred to another bacterium. Those genes that are located adjacent to the prophage are transferred due to improper excision.
Specialized transduction
Specialized transduction is the process by which a restricted set of bacterial genes is transferred to another bacterium; the genes located adjacent to the prophage are transferred because of improper excision. Specialized transduction occurs when a prophage excises imprecisely from the chromosome, so that bacterial genes lying adjacent to it are included in the excised DNA. The excised DNA, along with the viral DNA, is then packaged into a new virus particle, which is delivered to a new bacterium when the phage attacks it. Here, the donor genes can be inserted into the recipient chromosome or remain in the cytoplasm, depending on the nature of the bacteriophage. An example of specialized transduction is λ phage in Escherichia coli.

Lateral transduction
Lateral transduction is the process by which very long fragments of bacterial DNA are transferred to another bacterium. So far, this form of transduction has only been described in Staphylococcus aureus, but it can transfer more genes, and at higher frequencies, than generalized and specialized transduction. In lateral transduction, the prophage starts its replication in situ before excision, in a process that leads to replication of the adjacent bacterial DNA. Packaging of the replicated phage then initiates in situ from its pac site (located around the middle of the phage genome) and extends into the adjacent bacterial genes, up to about 105% of a phage genome size. Successive packaging after initiation from the original pac site leads to several kilobases of bacterial genes being packaged into new viral particles that are transferred to new bacterial strains. If the transferred genetic material in these transducing particles provides sufficient DNA for homologous recombination, the genetic material will be inserted into the recipient chromosome. Because multiple copies of the phage genome are produced during in situ replication, some of these replicated prophages excise normally (instead of being packaged in situ), producing normal infectious phages.

Mammalian cell transduction with viral vectors
Transduction with viral vectors can be used to insert or modify genes in mammalian cells. It is often used as a tool in basic research and is actively researched as a potential means for gene therapy.

Process
In these cases, a plasmid is constructed in which the genes to be transferred are flanked by viral sequences that are used by viral proteins to recognize and package the viral genome into viral particles. This plasmid is inserted (usually by transfection) into a producer cell together with other plasmids (DNA constructs) that carry the viral genes required for the formation of infectious virions. In these producer cells, the viral proteins expressed by these packaging constructs bind the sequences on the DNA/RNA (depending on the type of viral vector) to be transferred and insert it into viral particles. For safety, none of the plasmids used contains all the sequences required for virus formation, so that simultaneous transfection of multiple plasmids is required to get infectious virions. Moreover, only the plasmid carrying the sequences to be transferred contains signals that allow the genetic materials to be packaged in virions, so that none of the genes encoding viral proteins are packaged. Viruses collected from these cells are then applied to the cells to be altered. The initial stages of these infections mimic infection with natural viruses and lead to expression of the genes transferred and (in the case of lentivirus/retrovirus vectors) insertion of the DNA to be transferred into the cellular genome.
However, since the transferred genetic material does not encode any of the viral genes, these infections do not generate new viruses (the viruses are "replication-deficient"). Enhancers such as polybrene, protamine sulfate, retronectin, and DEAE dextran have been used to improve transduction efficiency.

Medical applications
Gene therapy: correcting genetic diseases by direct modification of the genetic error.

See also
Electroporation – use of an electrical field to increase cell membrane permeability.
Phage therapy – therapeutic use of bacteriophages.
Transfection – means of inserting DNA into a cell.
Transformation (genetics) – means of inserting DNA into a cell.
Viral vector – commonly used tool to deliver genetic material into cells.

References

External links
Overview at ncbi.nlm.nih.gov
http://www.med.umich.edu/vcore/protocols/RetroviralCellScreenInfection13FEB2006.pdf (transduction protocol)
Generalized and Specialized transduction at sdsu.edu

Bacteriology Bacteriophages Modification of genetic information Molecular biology Virology
Transduction (genetics)
Chemistry,Biology
1,637
18,500,477
https://en.wikipedia.org/wiki/Intergradation
In zoology, intergradation is the way in which two distinct subspecies are connected via areas where populations are found that have the characteristics of both. There are two types of intergradation: primary and secondary intergradation.

Primary intergradation
This occurs in cases where two subspecies are connected via one or more intermediate populations, each of which is in turn intermediate to its adjacent populations and exhibits more or less the same amount of variability as any other population within the species. Adjacent populations and subspecies are subject to clinal intergradation, and in these situations it is usually taken for granted that the clines are causally related (by natural selection) to environmental gradients.

Secondary intergradation
When contact is reestablished between a geographically isolated subspecies and the main body of the species, or another isolated subspecies, interbreeding takes place as long as the isolate has not yet evolved an effective set of isolating mechanisms. Consequently, a relatively distinct zone or belt of hybridization will develop, its extent depending on the degree of genetic and phenotypic difference achieved by the previously isolated subspecies.

See also
Ring species

References

Hybridisation (biology) Population genetics Evolutionary biology
Intergradation
Biology
233
173,493
https://en.wikipedia.org/wiki/Small%20Magellanic%20Cloud
The Small Magellanic Cloud (SMC) is a dwarf galaxy near the Milky Way. Classified as a dwarf irregular galaxy, the SMC has a D25 isophotal diameter of about , and contains several hundred million stars. It has a total mass of approximately 7 billion solar masses. At a distance of about 200,000 light-years, the SMC is among the nearest intergalactic neighbors of the Milky Way and is one of the most distant objects visible to the naked eye. The SMC is visible from the entire Southern Hemisphere and can be fully glimpsed low above the southern horizon from latitudes south of about 15° north. The galaxy is located across the constellation of Tucana and part of Hydrus, appearing as a faint hazy patch resembling a detached piece of the Milky Way. The SMC has an average apparent diameter of about 4.2° (8 times the Moon's) and thus covers an area of about 14 square degrees (70 times the Moon's). Since its surface brightness is very low, this deep-sky object is best seen on clear moonless nights and away from city lights. The SMC forms a pair with the Large Magellanic Cloud (LMC), which lies 20° to the east, and, like the LMC, is a member of the Local Group. It is currently a satellite of the Milky Way, but is likely a former satellite of the LMC.

Observation history
In the southern hemisphere, the Magellanic clouds have long been included in the lore of native inhabitants, including south sea islanders and indigenous Australians. The Persian astronomer Al Sufi mentioned them in his Book of Fixed Stars, repeating a quote by the polymath Ibn Qutaybah, but had not observed them himself. European sailors may have first noticed the clouds during the Middle Ages, when they were used for navigation. Portuguese and Dutch sailors called them the Cape Clouds, a name that was retained for several centuries. During the circumnavigation of the Earth by Ferdinand Magellan in 1519–1522, they were described by Antonio Pigafetta as dim clusters of stars. In Johann Bayer's celestial atlas Uranometria, published in 1603, he named the smaller cloud Nubecula Minor; in Latin, nubecula means a little cloud. Between 1834 and 1838, John Frederick William Herschel made observations of the southern skies with his reflector from the Royal Observatory. While observing the Nubecula Minor, he described it as a cloudy mass of light with an oval shape and a bright center, and within the area of this cloud he catalogued a concentration of 37 nebulae and clusters. In 1891, Harvard College Observatory opened an observing station at Arequipa in Peru. Between 1893 and 1906, under the direction of Solon Bailey, the telescope at this site was used to survey photographically both the Large and Small Magellanic Clouds. Henrietta Swan Leavitt, an astronomer at the Harvard College Observatory, used the plates from Arequipa to study the variations in relative luminosity of stars in the SMC. The results of her study, published in 1908, showed that a type of variable star called a "cluster variable", later called a Cepheid variable after the prototype star Delta Cephei, showed a definite relationship between the variability period and the star's apparent brightness. Leavitt realized that since all the stars in the SMC are roughly the same distance from Earth, this result implied that there is a similar relationship between period and absolute brightness. This important period-luminosity relation allowed the distance to any other Cepheid variable to be estimated in terms of the distance to the SMC.
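In modern terms, once a Cepheid's period gives its absolute magnitude M, the distance follows from the distance modulus m − M = 5 log10(d / 10 pc). A minimal sketch of that final step (the formula is standard; the magnitudes below are illustrative values, not figures from the source):

# Standard-candle distance from the distance modulus:
#   m - M = 5 * log10(d / 10 pc)  =>  d = 10 pc * 10**((m - M) / 5)
def distance_pc(apparent_mag, absolute_mag):
    """Distance in parsecs implied by apparent and absolute magnitude."""
    return 10.0 * 10 ** ((apparent_mag - absolute_mag) / 5.0)

# Illustrative only: a Cepheid whose period implies M = -3.6, observed
# at m = 14.9, would lie at roughly 50,000 parsecs:
print(f"{distance_pc(14.9, -3.6):,.0f} pc")  # ~50,119 pc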
She hoped a few Cepheid variables could be found close enough to Earth that their parallax, and hence their distance from Earth, could be measured. This soon happened, allowing Cepheid variables to be used as standard candles, facilitating many astronomical discoveries. Using this period-luminosity relation, in 1913 the distance to the SMC was first estimated by Ejnar Hertzsprung. First he measured thirteen nearby Cepheid variables to find the absolute magnitude of a variable with a period of one day. By comparing this to the periodicity of the variables as measured by Leavitt, he was able to estimate a distance of 10,000 parsecs (30,000 light-years) between the Sun and the SMC. This later proved to be a gross underestimate of the true distance, but it did demonstrate the potential usefulness of the technique. Announced in 2006, measurements with the Hubble Space Telescope suggest that either the Large and Small Magellanic Clouds may be moving too fast to be orbiting the Milky Way, or that the Milky Way Galaxy is more massive than was thought.

Features
The SMC contains a central bar structure, and astronomers speculate that it was once a barred spiral galaxy that was disrupted by the Milky Way to become somewhat irregular. There is a bridge of gas connecting the Small Magellanic Cloud with the Large Magellanic Cloud (LMC), which is evidence of tidal interaction between the galaxies. This bridge of gas is a star-forming site. The Magellanic Clouds have a common envelope of neutral hydrogen, indicating they have been gravitationally bound for a long time. In 2017, using the Dark Energy Survey plus MagLiteS data, a stellar over-density associated with the Small Magellanic Cloud was discovered, which is probably the result of interactions between the SMC and LMC.

X-ray sources
The Small Magellanic Cloud contains a large and active population of X-ray binaries. Recent star formation has led to a large population of massive stars and high-mass X-ray binaries (HMXBs), which are the relics of the short-lived upper end of the initial mass function. The young stellar population and the majority of the known X-ray binaries are concentrated in the SMC's Bar. HMXB pulsars are rotating neutron stars in binary systems with Be-type (spectral type O9–B2, luminosity classes V–III) or supergiant stellar companions. Most HMXBs are of the Be type, which account for 70% in the Milky Way and 98% in the SMC. The Be-star equatorial disk provides a reservoir of matter that can be accreted onto the neutron star during periastron passage (most known systems have large orbital eccentricity) or during large-scale disk ejection episodes. This scenario leads to strings of X-ray outbursts with typical X-ray luminosities Lx = 10^36–10^37 erg/s, spaced at the orbital period, plus infrequent giant outbursts of greater duration and luminosity. Monitoring surveys of the SMC performed with NASA's Rossi X-ray Timing Explorer (RXTE) see X-ray pulsars in outburst at more than 10^36 erg/s and had counted 50 by the end of 2008. The ROSAT and ASCA missions detected many faint X-ray point sources, but the typical positional uncertainties frequently made positive identification difficult. Recent studies using XMM-Newton and Chandra have now cataloged several hundred X-ray sources in the direction of the SMC, of which perhaps half are considered likely HMXBs, and the remainder a mix of foreground stars and background AGN. No X-rays above background were observed from the Magellanic Clouds during the September 20, 1966, Nike-Tomahawk flight.
A balloon observation of the SMC from Mildura, Australia, on October 24, 1967, set an upper limit on X-ray detection. An X-ray astronomy instrument was carried aboard a Thor missile launched from Johnston Atoll on September 24, 1970, at 12:54 UTC, reaching altitudes above 300 km, to search for the Small Magellanic Cloud. The SMC was detected with an X-ray luminosity of 5 erg/s in the range 1.5–12 keV, and 2.5 erg/s in the range 5–50 keV, for an apparently extended source. The fourth Uhuru catalog lists an early X-ray source within the constellation Tucana: 4U 0115-73 (3U 0115-73, 2A 0116-737, SMC X-1). Uhuru observed the SMC on January 1, 12, 13, 16, and 17, 1971, and detected one source located at 01149-7342, which was then designated SMC X-1. Some X-ray counts were also received on January 14, 15, 18, and 19, 1971. The third Ariel 5 catalog (3A) also contains this early X-ray source within Tucana: 3A 0116-736 (2A 0116-737, SMC X-1). SMC X-1, an HMXB, is at J2000 right ascension (RA) and declination (Dec) . Two additional sources detected and listed in 3A are SMC X-2 at 3A 0042-738 and SMC X-3 at 3A 0049-726.

Mini Magellanic Cloud (MMC)
It has been proposed by astrophysicists D. S. Mathewson, V. L. Ford and N. Visvanathan that the SMC may in fact be split in two, with a smaller section of this galaxy behind the main part of the SMC (as seen from Earth's perspective), separated by about 30,000 light-years. They suggest the reason for this is a past interaction with the LMC that split the SMC, and that the two sections are still moving apart; they dubbed this smaller remnant the Mini Magellanic Cloud. In 2023, it was reported that the SMC is indeed two separate structures with distinct stellar and gaseous chemical compositions, separated by around 5 kiloparsecs.

See also
Large Magellanic Cloud
Magellanic Clouds
Objects within the Small Magellanic Cloud:
NGC 265
NGC 290
NGC 346
NGC 602

References

External links
NASA Extragalactic Database entry on the SMC
SEDS entry on the SMC
SMC at ESA/Hubble
Astronomy Picture of the Day 2010 January 7, The Tail of the Small Magellanic Cloud: likely stripped from the galaxy by gravitational tides, the tail contains mostly gas, dust, and newly formed stars.

Dwarf barred irregular galaxies Peculiar galaxies Low surface brightness galaxies Milky Way Subgroup Tucana NGC objects 03085 Astronomical objects known since antiquity Magellanic Clouds Local Group Hydrus Magellanic spiral galaxies
Small Magellanic Cloud
Astronomy
2,185
76,388,687
https://en.wikipedia.org/wiki/Ray%20Flannery
Martin Raymond (Ray) Flannery (January 8, 1941, in Claudy, County Londonderry – May 2, 2013, in Atlanta, Georgia) was Regents' Professor Emeritus in theoretical physics at Georgia Tech. He was known for his work in atomic, molecular, and optical physics (AMO), and published over 160 papers in that area, 66 as sole author.

Education and career
From 1952 to 1958 Flannery attended St Columb's College in Derry. In 1958 he entered Queen's University of Belfast (QUB), earning a B.Sc. in mathematics in 1961 and then a Ph.D. in 1964 under advisors Alan L. Stewart and Uno (Uuno) Öpik. His thesis was in two parts: Some properties of three-electron atomic systems and Photoionization of molecular hydrogen. His early academic career included faculty positions at Queen's University Belfast (1964–66), the University of Innsbruck (1966), the Georgia Institute of Technology (1967–68), and Harvard University (1968–71). At Georgia Tech, he rose through the ranks from Associate Professor (1971) to Professor (1974) and Regents' Professor (1993), formally retiring in 2007. He also held the following positions:
1977: Visiting Fellow of the Joint Institute for Laboratory Astrophysics (JILA) in Boulder, Colorado.
1979: Fellow of the American Physical Society (APS).
1980 & 2000: Fellow of the Institute of Physics in London.
1993 & 2002: Fellow of the Institute for Theoretical Atomic, Molecular and Optical Physics at Harvard.

Selected papers
1970: "Semiquantal theory of heavy-particle excitation, deexcitation, and ionization by neutral atoms: I. Slow and intermediate energy collisions", Annals of Physics, Vol. 61, No. 2.
1997: "Passive millimeter-wave camera" (with Yujiri, Larry, et al.), Passive Millimeter-Wave Imaging Technology, Vol. 3064, SPIE.
2011: "The elusive d'Alembert-Lagrange dynamics of nonholonomic systems", American Journal of Physics, 79:9.

Awards and honors
1961: Awarded the Purser Postgraduate Prize upon receiving his B.Sc. at QUB.
1997: Elected an Honorary Member of the Royal Irish Academy.
1998: Awarded the Will Allis Prize for the Study of Ionized Gases by the American Physical Society.
In 2012 the School of Mathematics and Physics at QUB established the Raymond Flannery Prize, "to be awarded annually to the graduate in the School of Mathematics and Physics with the best overall mark".

References

External links
Home page at Georgia Tech

1941 births 2013 deaths People from County Londonderry Scientists from Derry (city) Theoretical physicists Georgia Tech faculty People educated at St Columb's College Alumni of Queen's University Belfast
Ray Flannery
Physics
586
7,123,267
https://en.wikipedia.org/wiki/Static%20web%20page
A static web page, sometimes called a flat page or a stationary page, is a web page that is delivered to a web browser exactly as stored, in contrast to dynamic web pages, which are generated by a web application. Consequently, a static web page displays the same information for all users, in all contexts, subject to the modern capabilities of a web server to negotiate the content-type or language of the document where such versions are available and the server is configured to do so. However, a page's JavaScript can introduce dynamic functionality, making an otherwise static web page behave dynamically.

Overview
Static web pages are often HTML documents, stored as files in the file system and made available by the web server over HTTP (nevertheless, URLs ending with ".html" are not always static). However, loose interpretations of the term could include web pages stored in a database, and could even include pages formatted using a template and served through an application server, as long as the page served is unchanging and presented essentially as stored. The content of static web pages remains the same irrespective of the number of times it is viewed. Such web pages are suitable for content that rarely needs to be updated, though modern web template systems are changing this. Maintaining large numbers of static pages as files can be impractical without automated tools, such as static site generators. Any personalization or interactivity has to run client-side, which is limiting.

Advantages
- Improved security over dynamic websites (dynamic websites are at risk of web shell attacks if a vulnerability is present)
- Improved performance for end users compared to dynamic websites
- Fewer or no dependencies on systems such as databases or other application servers
- Cost savings from utilizing cloud storage, as opposed to a hosted environment
- Security configurations are easy to set up

Disadvantages
- Dynamic functionality must be performed on the client side

Static site generators
Static site generators are applications that compile static websites, typically populating HTML templates in a predefined folder and file structure, with content supplied in a format such as Markdown or AsciiDoc. Examples of static site generators include:
- Ruby: Jekyll (powers GitHub Pages), Middleman
- Go: Hugo
- JavaScript: Next.js, Astro.build
- Python: Pelican
- Julia: Franklin

References

External links
The definitive listing of Static Site Generators, a community-curated list of static site generators.

Web 1.0 Static website generators Web development
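As a rough illustration of what the static site generators described above do, the sketch below renders a folder of plain-text content files through a single HTML template, using only the Python standard library. The template, file names, and directory layout are invented for the example; real generators such as Jekyll or Hugo add Markdown conversion, themes, HTML escaping, and much more:

from pathlib import Path
from string import Template

# Invented single-page template; real generators ship configurable themes.
PAGE = Template("<html><head><title>$title</title></head>"
                "<body><h1>$title</h1><p>$body</p></body></html>")

def build_site(content_dir, output_dir):
    """Render every .txt file in content_dir to a .html file in output_dir."""
    out = Path(output_dir)
    out.mkdir(exist_ok=True)
    for src in Path(content_dir).glob("*.txt"):
        html = PAGE.substitute(title=src.stem, body=src.read_text())
        (out / (src.stem + ".html")).write_text(html)

# Example usage: build_site("content", "public")
# content/about.txt -> public/about.html

The key design point is that all rendering happens once, at build time; the web server then serves the finished files exactly as stored, which is what makes the result a static site.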
Static web page
Engineering
505
208,701
https://en.wikipedia.org/wiki/The%20Astronomical%20Journal
The Astronomical Journal (often abbreviated AJ in scientific papers and references) is a peer-reviewed monthly scientific journal owned by the American Astronomical Society (AAS) and currently published by IOP Publishing. It is one of the premier journals for astronomy in the world. Until 2008, the journal was published by the University of Chicago Press on behalf of the AAS. The reasons for the change to IOP were given by the society as the desire of the University of Chicago Press to revise its financial arrangement and its plans to move away from the particular software that had been developed in-house. The other two publications of the society, the Astrophysical Journal and its supplement series, followed in January 2009. The journal was established in 1849 by Benjamin A. Gould. It ceased publication in 1861 due to the American Civil War, but resumed in 1885. Between 1909 and 1941 the journal was edited in Albany, New York. In 1941, editor Benjamin Boss arranged to transfer responsibility for the journal to the AAS. The first electronic edition of The Astronomical Journal was published in January 1998. With the July 2006 issue, The Astronomical Journal began e-first publication, an electronic version of the journal released independently of the hardcopy issues. In 2016, all of the scientific AAS journals were placed under a single editor-in-chief. On January 1, 2022, the AAS journals, including AJ, transitioned to a gold open access model, with all new papers released under a Creative Commons Attribution license and access restrictions and subscription charges removed from previously published papers.

Editors
2016–present: Ethan Vishniac
2005–2015: John Gallagher III
1984–2004: Paul W. Hodge
1980–1983: Norman H. Baker
1975–1979: Norman H. Baker and Leon B. Lucy
1967–1974: Lodewijk Woltjer (with Baker and Lucy for later volumes)
1966–1967: Gerald Maurice Clemence
1965–1966: Dirk Brouwer and Gerald Maurice Clemence
1963–1965: Dirk Brouwer
1959–1963: Dirk Brouwer and Harlan James Smith
1941–1959: Dirk Brouwer
1912–1941: Benjamin Boss
1909–1912: Lewis Boss
1896–1909: Seth Carlo Chandler
1885–1896: Benjamin A. Gould, Jr.
1849–1861: Benjamin A. Gould, Jr.

See also
The Astronomical Almanac
The Astrophysical Journal

References

External links
Dudley Observatory, The Astronomical Journal
Scanned issues (1849-1997) from ADS

Astronomy journals Academic journals established in 1849 Monthly journals English-language journals IOP Publishing academic journals American Astronomical Society academic journals 1849 establishments in the United States Open access journals
The Astronomical Journal
Astronomy
524
3,749,493
https://en.wikipedia.org/wiki/GLI2
Zinc finger protein GLI2, also known as GLI family zinc finger 2, is a protein that in humans is encoded by the GLI2 gene. The protein encoded by this gene is a transcription factor. GLI2 belongs to the C2H2-type zinc finger protein subclass of the Gli family. Members of this subclass are characterized as transcription factors which bind DNA through zinc finger motifs; these motifs contain conserved H-C links. Gli family zinc finger proteins are mediators of Sonic hedgehog (Shh) signaling, and they are implicated as potent oncogenes in the embryonal carcinoma cell. The protein encoded by this gene localizes to the cytoplasm and activates patched Drosophila homolog (PTCH) gene expression. It is also thought to play a role during embryogenesis.

Isoforms
There are four isoforms: Gli2 alpha, beta, gamma and delta.

Structure
C-terminal activator and N-terminal repressor regions have been identified in both Gli2 and Gli3. However, the N-terminal part of human Gli2 is much smaller than its mouse or frog homologs, suggesting that it may lack repressor function.

Function
Gli2 affects ventroposterior mesodermal development by regulating at least three different genes: Wnt genes involved in morphogenesis, Brachyury genes involved in tissue specification, and Xhox3 genes involved in positional information. The anti-apoptotic protein BCL-2 is upregulated by Gli2 and, to a lesser extent, Gli1 – but not Gli3 – which may lead to carcinogenesis. Additionally, in the amphibian model organism Xenopus laevis, it has been shown that Gli2 plays a key role in the induction, specification, migration and differentiation of the neural crest. In this context, Gli2 responds to the Indian Hedgehog signaling pathway. It has been shown in mouse models that Gli1 can compensate for knocked-out Gli2 function when expressed from the Gli2 locus. This suggests that in mouse embryogenesis, Gli1 and Gli2 regulate a similar set of target genes. Mutant phenotypes do develop later in development, however, suggesting that Gli1/Gli2 transcriptional regulation is context dependent. Gli2 and Gli3 are important in the formation and development of lung, trachea and oesophagus tissue during embryo development. Studies have also shown that GLI2 plays a dual role as an activator of keratinocyte proliferation and a repressor of epidermal differentiation. There is a significant level of crosstalk and functional overlap between the Gli transcription factors. Gli2 has been shown to compensate for the loss of Gli1 in transgenic Gli1-/- mice, which are phenotypically normal. However, loss of Gli3 leads to abnormal patterning, and loss of Gli2 affects the development of ventral cell types, most significantly in the floor plate. Gli2 has been shown to compensate for Gli1 ventrally and Gli3 dorsally in transgenic mice. Gli2-null mouse embryos develop neural tube defects, which can be rescued by overexpression of Gli1 (Jacob and Briscoe, 2003). Gli1 has been shown to induce the two GLI2 α/β isoforms. Transgenic double homozygous Gli1-/- and Gli2-/- knockout mice display serious central nervous system and lung defects: they have small lungs, undescended testes, and a hopping gait, as well as an extra postaxial nubbin on the limbs. Gli2-/- and Gli3-/- double homozygous transgenic mice are not viable and do not survive beyond the embryonic stage. These studies suggest overlapping roles for Gli1 with Gli2 and Gli2 with Gli3 in embryonic development. Transgenic Gli1-/- and Gli2-/- mice have a similar phenotype to transgenic Gli1 gain-of-function mice.
This phenotype includes failure to thrive, early death, and a distended gut, although no tumors form in transgenic Gli1-/- and Gli2-/- mice. This could suggest that overexpression of human Gli1 in the mouse may have led to a dominant-negative rather than a gain-of-function phenotype. Transgenic mice over-expressing the transcription factor Gli2 under the K5 promoter in cutaneous keratinocytes develop multiple skin tumours on the ears, tail, trunk and dorsal aspect of the paw, resembling those of basal cell carcinoma (BCC). Unlike Gli1 transgenic mice, Gli2 transgenic mice only developed BCC-like tumors. Transgenic mice with an N-terminal deletion of Gli2 developed benign trichoblastomas, cylindromas and hamartomas but rarely developed BCCs. Gli2 is expressed in the interfollicular epidermis and the outer root sheath of hair follicles in normal human skin. This is significant as Shh regulates hair follicle growth and morphogenesis; when inappropriately activated, it causes hair-follicle-derived tumors, the most clinically significant being the BCC. Of the four Gli2 isoforms, the expression of Gli2beta mRNA was increased the most in BCCs. Gli2beta is an isoform spliced at the first splicing site, which contains a repression domain, and consists of an intact activation domain. Overexpression of this Gli2 splice variant may lead to the upregulation of the Shh signalling pathway, thereby inducing BCCs.

Clinical significance
Mutations of the GLI2 gene are associated with midline craniofacial anomalies, hypopituitarism, and sometimes holoprosencephaly (https://omim.org/entry/165230, Holoprosencephaly 9, Culler-Jones syndrome). In human keratinocytes, Gli2 activation upregulates a number of genes involved in cell cycle progression, including E2F1, CCND1, CDC2 and CDC45L. Gli2 is able to induce G1–S phase progression in contact-inhibited keratinocytes, which may drive tumour development. Although both Gli1 and Gli2 have been implicated, it is unclear whether one or both are needed for carcinogenesis. However, due to feedback loops, one may directly or indirectly induce the other.

Cis-regulatory catalog of GLI2
Minhas et al. 2015 have elucidated a subset of cis-regulatory elements controlling GLI2 expression. They have shown that conserved non-coding elements (CNEs) from the intron of the GLI2 gene act as tissue-specific enhancers, and that reporter gene expression induced by these elements correlates with previously reported endogenous gli2 expression in zebrafish. The regulatory activities of these elements are observed in several embryonic domains, including the neural tube and pectoral fin.

References

External links

Transcription factors
GLI2
Chemistry,Biology
1,510
35,534,124
https://en.wikipedia.org/wiki/C21H26N2O4
The molecular formula C21H26N2O4 (molar mass: 370.44 g/mol) may refer to:
- Ciladopa (AY-27,110)
- Samidorphan (ALKS-33)
- Scholarine

Molecular formulas
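The quoted molar mass can be checked directly from standard atomic weights; a minimal sketch (the atomic weights are standard IUPAC values; the small difference from 370.44 comes from rounding):

# Checking the molar mass of C21H26N2O4 from standard atomic weights.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
FORMULA = {"C": 21, "H": 26, "N": 2, "O": 4}  # C21H26N2O4

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # 370.45, consistent with the quoted 370.44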
C21H26N2O4
Physics,Chemistry
71
1,508,809
https://en.wikipedia.org/wiki/Dunes%20%28hotel%20and%20casino%29
The Dunes was a hotel and casino on the Las Vegas Strip in Paradise, Nevada. It opened on May 23, 1955, as the tenth resort on the Strip. It was initially owned by a group of businessmen from out of state, but failed to prosper under their management. It also opened at a time of decreased tourism, while the Strip was simultaneously becoming overbuilt with hotel rooms. A few months after the opening, management was taken over by the operators of the Sands resort, also on the Strip. This group failed to improve business and relinquished control less than six months later. Businessman Major Riddle turned the business around after taking over operations in 1956. He was involved with the resort until his death in 1980. He had several partners, including Sid Wyman, who worked for the Dunes from 1961 until his death in 1978. Mafia attorney Morris Shenker joined in 1975, following one of the most extensive routine investigations ever conducted by the Nevada Gaming Control Board. The Dunes had frequent connections with Mafia figures, some of whom were alleged to have hidden ownership in the resort, and state officials were concerned about Shenker's association with such figures. In 1957, the Dunes debuted Las Vegas' first topless show, Minsky Goes to Paris, prompting other resorts to follow suit. Two other successful shows, by Frederic Apcar, would later debut at the Dunes. The resort also offered amenities such as the Emerald Green golf course, which opened in 1964. The Dunes was one of two Strip resorts to include a golf course, the other being the Desert Inn. The Emerald Green was the longest course in Nevada, at 7,240 yards. The Dunes opened with 194 rooms; a 21-story tower, opened in 1965, brought the total to 960 and was among the tallest buildings in Nevada. By this time, the resort also had the tallest free-standing sign in the world, rising 181 feet. Several popular restaurants were also added in the 1960s, including the underwater-themed Dome of the Sea, and the Top O' the Strip, located at the top of the hotel tower. Another tower, 17 stories in height, was opened in 1979, giving the resort a total of 1,282 rooms. The Dunes added a second gaming facility, the Oasis Casino, in 1982. The Dunes experienced financial problems in the 1980s, and had many prospective buyers during this time, including businessman Steve Wynn. Japanese investor Masao Nangaku eventually bought the resort in 1987, at a cost of $157 million. Nangaku intended to renovate and expand the Dunes, although his plans were derailed by an unusually lengthy control board investigation, which dissuaded financiers. Wynn's company, Mirage Resorts, bought the Dunes in November 1992, paying $75 million. Plans were announced to replace it with a lake resort. The Dunes closed on January 26, 1993. The original North Tower was imploded on October 27, 1993, during a highly publicized ceremony which helped promote Wynn's new Treasure Island resort, located about a mile north. The demolition event drew 200,000 spectators. The newer South Tower was imploded on July 20, 1994, without the fanfare of the first implosion; it attracted 3,000 spectators. Wynn's new resort, Bellagio, eventually opened on the former Dunes site in 1998.

History
The Dunes was initially owned by a group of businessmen that included Robert Rice of Beverly Hills, James A. Sullivan of Rhode Island, Milton Gettinger of New York, and Alfred Gottesman, a wealthy theater operator in Florida. Rice and Gottesman were new to the gaming industry.
The group proposed the project, originally called the Araby, in July 1953. It was later renamed the Vegas Plaza, and then Hotel Deauville. Groundbreaking took place on June 22, 1954, with the resort now known as the Dunes. It was built by the Los Angeles-based McNeil Construction Company, which spent 11 months working on the resort. The Dunes opened on May 23, 1955, as the tenth resort on the Las Vegas Strip. The opening attracted many celebrities, including Cesar Romero, Spike Jones, and Rita Moreno. Gottesman and Sullivan were majority stockholders, and also served as 50-50 partners in the operation of the casino. Businessman Kirk Kerkorian bought a three-percent interest a couple months after the opening, marking his first Las Vegas investment. The Dunes was one of four new Las Vegas resorts to open within a six-week period, resulting in financial trouble for each of them. The Las Vegas Valley had been overbuilt with hotel rooms during a time of lessened demand, and the Dunes was also the southernmost resort on the Strip, located a considerable distance from other properties. A Dunes attorney blamed the resort's financial trouble on a persistent losing streak in its casino. Rice believed that the financial problems were the result of it competing with other resorts for expensive live entertainment. In addition, the Dunes had numerous creditors. Among these was McNeil Construction, which filed a $166,000 lien against the ownership group, representing unpaid salary. The group said it would not pay the balance, stating that the construction contract had been violated. In August 1955, an agreement was reached for Sands Hotel Corporation, owner of the Sands Hotel and Casino, to lease and operate the struggling Dunes. To mark the management change, a three-day celebration was held starting on September 9, 1955. Singer Frank Sinatra headlined the ceremony and entered on a camel. Sands closed the casino portion in January 1956, due to falling profits. It was the third Las Vegas casino to close in recent months, following the Moulin Rouge Hotel and Royal Nevada. Live entertainment also ceased, although the hotel remained open. Rice blamed disagreements within Sands for the casino's failure. The group lost $1.2 million operating the Dunes, and relinquished control of the resort on February 1, 1956. Businessman Major Riddle subsequently partnered with local hotel operator William Miller to reopen the casino. They would be equal partners with 44-percent ownership, while Rice would own the remainder. The Dunes casino reopened in June 1956. Seven months later, plans were announced for Sullivan and Gottesman to sell the property to Jacob Gottlieb, owner of a Chicago trucking firm. Gottlieb became the resort's landlord through Western Realty Company, and Miller departed the property as president and general manager. The resort was managed through Riddle's operating company, M&R Investment. The Dunes was sold in a Clark County sheriff's auction at the end of 1957, to satisfy the debt owed to McNeil Construction. It sold for $115,000, but was valued at $3.5 million. Gottesman, Sullivan, and Gettinger bought it back in November 1958. The resort thrived under Riddle, who added several new shows and facilities. On April 15, 1959, the Dunes hosted the first double groundbreaking ceremony in Las Vegas history: one for a convention center, built south of the existing resort facilities, and another for a 500-space parking lot directly north of the resort. In 1961, St. 
Louis businessmen Sid Wyman, Charlie Rich, and George Duckworth invested in the Dunes and became the new operators through a lease agreement. Wyman was put in charge of casino operations, and Riddle remained as the majority owner. The following year, he sold 15 percent of the operating corporation to the three men, reducing his interest to 37 percent. Several notable individuals were married at the Dunes, including Mary Tyler Moore and Grant Tinker (1962), Cary Grant and Dyan Cannon (1965), and Jane Fonda and Roger Vadim (1965). Mike Goodman, author of the best-selling 1963 book How to Win: At Cards, Dice, Races, Roulette, was a pit boss at the Dunes during the 1960s. Gambling author Barney Vinson also worked there. During the 1960s, the resort's western edge was condemned for construction of Interstate 15. The resort added a golf course in 1964. A 21-story hotel tower, initially known as the Diamond of the Dunes, was opened in May 1965, to mark the resort's 10th anniversary. It was part of a $20 million expansion project, and later became the North Tower, following the addition of another hotel building to the south. In 1969, M&R merged with Continental Connector Corporation, a New York-based electronics firm. M&R became a subsidiary of Continental Connector, which owned the Dunes and the land beneath it. Later in 1969, the U.S. Securities and Exchange Commission filed suit against Continental Connector, accusing it of making inaccurate financial statements regarding earnings at the Dunes. The company subsequently sought a buyer for the resort. In 1970, businessman Howard Hughes was in discussions to purchase the Dunes, although negotiations ended without a deal. Rapid-American Corporation began discussions to acquire the resort, but eventually dropped out. Rice, Wyman, Duckworth and three other top resort officials were indicted in 1971 by a federal grand jury, which alleged that they had filed false corporate income tax returns and conspired to skim money from the gaming tables. The officials pleaded innocent, and Wyman later divested his ownership, but remained with the Dunes as a consultant.

Mafia connections
The Dunes had numerous Mafia connections for much of its history. Sullivan's early ownership in the resort was actually held by Raymond Patriarca, and Gottlieb was affiliated with Jimmy Hoffa, president of the Teamsters Union. During the 1950s and 1960s, the union financed many casino expansions in Las Vegas through its pension fund. This included a $5 million loan for the Dunes' original hotel tower. Allen Dorfman, who handled negotiations on behalf of the pension fund, was alleged to have hidden ownership in the Dunes. The Dunes occasionally provided first-class treatment to Mafia figures such as Anthony Giordano, who was arrested at the resort in 1969, while visiting Wyman. The FBI planted surveillance bugs at the Dunes during the 1960s, and certain resort employees worked as informants for the agency during the 1970s. In 1972, a new group emerged as a prospective buyer for the resort, still under the ownership of Continental Connector. The group included San Diego developer Irvin Kahn and partner Morris Shenker, a St. Louis attorney who was representing Wyman and other resort officials in their case. The Nevada Gaming Control Board launched a routine investigation into Shenker and Kahn's financing, but halted its probe in 1973, following Kahn's death.
In 1974, Shenker owned 37 percent of the Dunes through stock holdings in Continental Connector, and he sought to buy out the remainder, prompting the control board to reopen and expand its investigation into his financial background. It was one of the most extensive investigations in Nevada gaming history, as state officials had concerns about Mafia figures with whom Shenker was associated. Shenker later denied allegations that his ownership in the resort was a front for Nick Civella, whom Shenker had represented previously as attorney. Civella had a comped visit at the resort in 1974, but Shenker noted that he had not yet taken control of the Dunes at that time, and said he would not have allowed Civella to stay there if he had been in charge. In 1975, Tony "The Ant" Spilotro began spending extensive time in the Dunes casino, where he would take phone calls routed to the poker room. The gaming control board accused him of treating the Dunes as his personal office, and questioned Shenker and Riddle as to why he was allowed on the premises, given his Black Book status. The men denied knowing Spilotro or his background, and said they only had an outdated photograph of him from 20 years earlier, making it difficult to identify him. The control board alleged that management was, in fact, aware of Spilotro and had already been warned about his presence at the resort. M&R had negotiated a $40 million loan from the Teamsters Union pension fund in 1974. A $75 million expansion was planned to begin in 1976, and would include two additional hotel towers. The project would be financed in part by the Teamsters loan. However, the union withheld the funds, citing the Employee Retirement Income Security Act of 1974. Specifically, the union stated that the loan could not be granted because Continental Connector owned a trucking company which employed teamsters who had contributed to the pension fund. Shenker criticized the pension fund's reasoning, saying that Continental Connector had already divested itself of ownership in the trucking company. A second tower, rising 17 stories, eventually opened in 1979. In 1980, members of the Colombo crime family received comped stays at the resort.

Later years
Wyman died of cancer in June 1978, and gaming at the Dunes was halted for two minutes in his honor. In 1979, Continental Connector was renamed Dunes Hotels and Casinos Inc., amid plans for a second Dunes resort in Atlantic City. Riddle died in 1980, and Shenker suffered a heart attack that year, prompting him to seriously consider selling the Dunes. In 1982, the resort added a second casino building, known as the Oasis Casino. In December 1982, it was announced that the resort would be sold to brothers Stuart and Clifford Perlman for $185 million, which would include the assumption of $105 million in debt. The Perlmans provided a $10 million loan to prevent the Dunes from being seized by the Internal Revenue Service, but later backed out of the purchase after learning that the debt would be $20 million more than initially expected. Circus Circus Enterprises subsequently considered a purchase, as did Golden Nugget chairman Steve Wynn, who made a $115 million offer. In May 1984, the Dunes was sold to John Anderson, a farmer in Davis, California, who also owned the Maxim hotel-casino in Las Vegas. Shenker maintained a 26-percent stake. M&R filed for Chapter 11 bankruptcy in November 1985.
Later that month, Wynn made another $115 million offer, which Anderson and Shenker rejected as too low; they valued the Dunes at $143.5 million. Numerous other offers would be made over the next two years, including one by New York businessman Donald Trump. Blumenfeld Properties, a Philadelphia real estate development company, made a $145.5 million offer for the Dunes, but ultimately did not purchase the resort. Burton Cohen was named as the resort's president in January 1986, following the departure of its previous president. Financial firm EF Hutton eventually formed a partnership that was interested in purchasing the Dunes, while a separate group led by Kerkorian was also in discussions. Talks with the two prospective buyers ended in February 1987, without a deal. Shortly thereafter, Texas-based lender Southmark Corporation purchased the first and second mortgages of the Dunes from Valley Bank and First Security Leasing, the Dunes' two major creditors. Later in 1987, Hilton Hotels and Japanese investor Masao Nangaku both considered buying the Dunes. Foreclosure was delayed to allow more time for a possible purchase. Hilton offered $122.5 million, and planned to refurbish the existing rooms while adding a third tower, at an additional cost of $110 million. Cohen believed that the resort needed 2,000 hotel rooms to adequately compete with other resorts. Kerkorian re-emerged as a prospective buyer, and Sheldon Adelson also considered purchasing the 163-acre resort. Nangaku ultimately prevailed, offering a $157.7 million bid in August 1987. His purchase was finalized four months later. While Nangaku waited to receive a gaming license, he hired Dennis Gomes to operate the Dunes, replacing Cohen as president. Nangaku underwent an unusually long gaming control board probe. Investigators suspected that unlicensed people from Nangaku's company, Minami Group, were involved in the resort. The control board encountered difficulty when looking into Nangaku's business associates because of differences in how Japan handles documents, which are generally kept confidential. Investigators also suspected that the associates were making attempts to hinder their efforts. In December 1988, Nangaku received a limited two-year gaming license while investigators continued their probe. Nangaku planned up to $280 million in renovations, including a new hotel tower and the demolition of the original motel-style structures, although little work had been done by mid-1989. He blamed the limited gaming license, stating that financiers were hesitant to lend money because of uncertainty about whether he would remain licensed in the near future. The first phase of Nangaku's multimillion-dollar renovation eventually began in September 1989. The following year, Nangaku announced a planned $200 million remodeling project. He also hired the architectural firm Hellmuth, Obata & Kassabaum to design the new high-rise tower. Nangaku eventually received a permanent gaming license in May 1991, at which point he was seeking a partner to help renovate and operate the Dunes. The resort had laid off hundreds of workers that year, due to financial troubles brought on by the early 1990s recession. Despite Nangaku's expansion plans for the resort, he ultimately invested only $12 million in basic repairs. The Las Vegas Review-Journal had written in 1988 that the Dunes had lost its "mystical luster" over the past 20 years, with its high rollers migrating to "more attractive" resorts. The newspaper's John L.
Smith wrote that the Dunes had lost its "classy resort" reputation and had become "a dump by Strip standards" despite its name recognition and prime location on the central Strip. The Dunes failed to stay competitive against new megaresorts opening on the Strip, including The Mirage in 1989, and the Excalibur a year later. During 1990, the resort was losing $500,000 monthly. Wynn's company, since renamed as Mirage Resorts, agreed to purchase the Dunes in October 1992. It was sold the following month for $75 million. At the time, the property was losing $2 million a month. Wynn planned to demolish the Dunes and redevelop the site. Gaming executive Richard Goeglein led a team which helped operate the Dunes in the months leading up to its closure.

Closure and demolition
The Dunes closed on January 26, 1993. Wynn said: "It's becoming in death a much better place than it was in life. This thing about melancholy in its passing is sorta strange. No one felt that while it [the Dunes] was laying there, terminally ill. It's been laying there on life support systems for many years". At the time of its closing, the Dunes employed more than 1,200 people. Employees held reunions each year following the closure. An on-site sale of the Dunes inventory, including light fixtures and carpeting, began in March 1993. Demolition started on September 16, 1993. A four-alarm fire began on-site that afternoon, after workers accidentally ran over an electrical outlet with a bulldozer. The fire affected a two-story hotel building and eventually spread across the property. More than 200 firefighters responded, and six blocks of the Strip were closed off for more than four hours until the fire was contained. The original North Tower was demolished on the night of October 27, 1993, one day after the opening of Wynn's new Strip resort Treasure Island, located about a mile north. The tower was imploded with great fanfare in an event emceed by Wynn that incorporated his new resort; on his command, a faux pirate ship at Treasure Island shot its cannon several times, simulating the Dunes' destruction by cannonballs as the implosion began. The tower was brought down around 10:10 p.m., following a six-minute fireworks show. The $1.5 million demolition event attracted 200,000 spectators. The Dunes was the first Las Vegas resort to be imploded, and numerous others would follow suit into the next decade. The tower's implosion was handled by Controlled Demolition, Inc. The demolition required 365 pounds of dynamite, and 550 gallons of aviation fuel were also used, creating fireballs that went up each floor of the tower's east side, facing the Strip and spectators. The Oasis Casino and the Dunes' two-story casino building were not part of the implosion. Fireworks sparked two small fires on the roof of the Oasis, and numerous small fires began in the Dunes' casino area, all put out by on-site firefighters. Both facilities were bulldozed following the implosion. A three-month clean-up project began to remove the debris left from the imploded tower. During the clean-up, workers discovered hundreds of $100 Dunes casino chips in the resort's foundation; some casino executives would dispose of outdated chips by burying them in the foundation of their buildings. The South Tower was briefly used as a job center for Treasure Island. It was eventually imploded on the morning of July 20, 1994, without the fanfare of the first implosion.
Mirage Resorts had urged people not to show up for the second implosion, which attracted approximately 3,000 spectators. Commenting on the end of the Dunes, Wynn said, "This is not an execution; this is a phoenix rising". His new resort, Bellagio, eventually opened on the former Dunes site in 1998. The resort's lake covers much of the land once occupied by the Dunes' casino and hotel structures.

Fire safety and 1986 arson spree
New fire-safety rules were implemented in Las Vegas following the MGM Grand fire (1980) and Las Vegas Hilton fire (1981). In 1985, the Dunes was one of seven hotels that failed to comply with the new safety rules, receiving six citations. The Dunes agreed to close its main showroom and convention center in exchange for a county extension, allowing time to raise the $13.5 million needed to bring the facilities up to standard. In February 1986, the Dunes won additional extensions to meet the fire-safety requirements. Later that month, a series of arson fires were set at several Strip resorts, including the Dunes, the Holiday Casino, and the Sands. As a precaution, 1,650 hotel guests were evacuated from the Dunes just before midnight. On the casino floor, many gamblers refused to leave and continued playing. Firefighters quickly determined that the fires posed no threat to the casino area. Crews battled a total of five fires at the Dunes, and guests were allowed to return to their rooms after three hours. Six people were treated for smoke inhalation, and damage was estimated at $55,000. The Dunes offered a $10,000 reward for information leading to the arrest of the arsonist. A man was eventually arrested for the arson spree and sentenced to 10 years in prison. In light of the recent fires, the county reconsidered the extensions previously granted to the Dunes. By May 1986, the resort had made significant progress on its fire retrofit work.

Features
The Dunes featured an Arabian theme, and was designed by Robert Dorr Jr. and John Replogle. The resort initially occupied 85 acres. The casino opened with 120 slot machines. The convention center, opened in 1959, included seating for 800 people. The casino was remodeled in 1961, and a keno lounge would be added 10 years later, part of a $2 million renovation project. In 1965, the Dunes became the first Strip business to offer a nursery, which would supervise children while their parents enjoyed the resort's amenities. By that point, the Dunes also had two swimming pools and a dozen shops, while additional retailers would be added in 1979. An addition, containing various amenities, was approved by the county in 1981. The expansion cost $15 million, and included the Oasis Casino, which opened on August 20, 1982. The structure, with an exterior of black mirrored glass, was built at a cost of $17 million. The Oasis provided the Dunes property with additional gaming space. Although the Oasis was a two-story building, it opened without the second floor, which was unfinished and sealed off. The Oasis Casino featured curved neon palm trees at its entrance, standing 70 feet tall with fronds 20 feet in length. They were designed by Ad-Art sign designer Jack DuBois, based on early design work by Raul Rodriguez. The palms were dismantled in April 1993, after being sold during the liquidation sale to a buyer in Taiwan. By 1999, the palms had been installed at the entrance to the NASA nightclub in Bangkok. The club closed some time after that, and the whereabouts of the palms are unknown.
Hotel
The Dunes opened with 194 rooms, and plans for additional rooms were already in the works, although it would be years before they came to fruition. In 1957, plans were announced for a $2 million expansion that would include a 14-story tower. A year later, the proposed tower was increased to 18 stories. An additional 246 rooms were eventually added in 1960, with the opening of the Olympic Wing, joining the existing Seahorse Wing. Groundbreaking for the tower eventually took place on October 20, 1962. It was designed by Milton Schwartz, and the opening was pushed back because of design changes. The tower eventually opened in May 1965. It had 510 rooms, bringing the total room count to 960. At 21 stories, it was among the tallest buildings in Nevada. The tower was originally known as Diamond of the Dunes, and was later called the North Tower, following the addition of the South Tower. Construction of the latter began on July 26, 1978, part of a $100 million expansion and remodeling project. The 17-story South Tower was topped off on April 12, 1979, and was opened that December. The second tower was designed by Maxwell Starkman and included 464 rooms, for a new total of 1,282.

Golf course
The Dunes opened its Emerald Green golf driving range in November 1961. The Emerald Green golf course debuted in 1964, and had its formal opening in April 1965. From then on, the resort was sometimes known as the Dunes Hotel and Country Club, reflecting its golf amenities. The Emerald Green measured 7,240 yards, making it the longest course in Las Vegas. It stretched south from Flamingo Road to Tropicana Avenue, occupying roughly 80 acres along the eastern edge of I-15. Riddle bought the site from banker Jerry Mack and Mel Close, bringing the resort a total of 163 acres. The Emerald Green's closure in 1993 left the Desert Inn as the only Strip resort with a golf course. At the time, the Emerald Green had seen an average of 65,000 golfers each year, second only to the Las Vegas Municipal Golf Course. It was especially popular among celebrities. The Emerald Green site is now occupied by parts of Park MGM (opened in 1996) and CityCenter (2009), as well as T-Mobile Arena (2016).

Sultan and neon sign
The Dunes originally featured a 30-foot-high (9-meter) sultan statue above its entrance. The fiberglass statue was created by sculptor Kermit Hawkins. The sultan's turban included a diamond that lit up at night, which was actually a car headlamp that had been put in place. In 1964, the sultan was moved to the edge of the golf course along I-15, serving as an advertisement to motorists. The sultan was destroyed by fire, caused by a short circuit, on the night of December 31, 1985. Lee Klay of the Federal Sign and Signal Company designed a roadside sign for the Dunes, activated on November 12, 1964. Klay recalled that the resort owners asked him to create "a big phallic symbol going up in the sky as far as you can make it". At 181 feet (55 meters), it was the tallest free-standing sign in the world. The foundation measured 80 feet in width, and supported two white-colored columns forming a bulbous onion dome or stylized spade shape at the top. Contained within this shape were two-story-high letters spelling out "Dunes", with a large diamond atop the lettering. The sign contained 16,000 feet of neon tubing, including 7,200 lamps. At night, the sign lit up in red coloring. Blackout curtains were added in hotel rooms facing the sign, as some guests had trouble sleeping because of the neon lighting.
Schwartz objected to the construction of the sign, believing that it conflicted with the design of his hotel tower, although Riddle overrode him. A full-time, three-man team worked to maintain the sign, which had a service elevator going up one of its columns to the top. The sign was intentionally destroyed as part of the 1993 implosion event, with the use of 18-grain detonating cord. Architectural historian Alan Hess had advocated for saving the sign, although Mirage Resorts stated that it was in extremely poor condition, with demolition being cheaper than preservation. Saving the sign would have required it to be disassembled in eight-foot sections, at a cost of up to $100,000. A smaller, similar sign exists at the city's Neon Museum. In 2019, filmmaker Tim Burton also debuted a Dunes-inspired sign as part of Lost Vegas: Tim Burton, an exhibit at the Neon Museum. An original neon entrance sign from the resort is also located at the Nevada State Museum in Las Vegas. Restaurants A popular fine-dining restaurant, Sultan's Table, opened on March 4, 1961. It was designed by Schwartz, and included live music for diners. Riddle was inspired to build Sultan's Table after visiting an upscale restaurant, the Villa Fontana, in Mexico City. Sultan's Table was the first gourmet restaurant to open on the Strip, and Diners Club named it "America's finest and most beautiful new restaurant". The Dunes opened its Dome of the Sea on June 12, 1964. It was a seafood restaurant with an underwater theme. It was also designed by Schwartz, who created the exterior as a circular building that "looked like it came from outer space". Schwartz collaborated with designer Sean Kenny on the interior, which had a budget of $150,000. Images of fish and seaweed were projected onto the restaurant's interior walls. It also featured a harpist, dressed as a mermaid, who performed in the center of the room. For a brief period starting in 1972, the restaurant would transform into Dome After Hours, offering cocktails and continuous live entertainment between the hours of 1:00 and 5:00 a.m. A restaurant and lounge, Top O' the Strip, opened on June 4, 1965. It was located on the top floor of the new hotel tower, providing views of the city. It was popular among tourists, and also featured live entertainment. It was renamed Top O' the Dunes in 1979. Live entertainment Comedian Wally Cox was an early entertainer at the Dunes, opening there in July 1955, although he was fired due to poor audience reception. Gottesman acknowledged that Cox was ill-prepared and brought no new material to his performances. Cox had been signed for four weeks, but only gave three performances. Comedian Stan Irwin briefly filled in for Cox, who was then hired back later in the month. Entertainers at Top O' the Strip included Art and Dotty Todd, Russ Morgan, and Bob Anderson. The Dunes also opened a Comedy Store location in 1984, hosting numerous comedians. It relocated to the Golden Nugget hotel-casino in 1990, but briefly returned to the Dunes in 1992. Shows The Dunes' 1955 opening included Vera-Ellen in a production show titled New York-Paris-Paradise, which was contracted for a four-week run. It was part of Gottesman's policy to focus on shows rather than big-name stars; he said, "There aren't enough name stars in the world to play all the Vegas hotels". New York-Paris-Paradise was directed by Robert Nesbitt and played in the Dunes' showroom, known as the Arabian Room. 
On January 10, 1957, Riddle debuted Las Vegas' first topless show, titled Minsky Goes to Paris. Riddle said, "We have something people can't get on television". The show's success inspired other resorts to debut their own topless shows. During 1958, the show was attracting 9,000 viewers weekly. Later known as Minsky's Follies, the show ran until 1961. Riddle brought Tenderloin, a Broadway musical, to the Dunes in May 1961. The Broadway show Guys and Dolls, starring Betty Grable and Dan Dailey, also played at the Dunes for about six months, starting in 1962. The Dunes opened a new venue, the Persian Room, in December 1961. It replaced the Sinbad Cocktail Lounge. The Persian Room debuted with Vive Les Girls, a French musical revue by Frederic Apcar. It was successful, becoming an annual show at the Dunes. It closed in 1971, when the Persian Room was replaced by the keno lounge. The Dunes had also debuted another show by Apcar in December 1963, titled Casino de Paris and initially starring Line Renaud. The show cost approximately $6 million to create, featuring 100 cast members and more than 500 costumes. The show incorporated a custom stage known as the Octopus or Octuramic. Designed by Schwartz and Kenny, the stage had several arms capable of extending 50 feet above the audience. Circular dancing platforms, 20 feet in diameter, were built at the end of each arm, allowing showgirls to dance above the audience. The show ended in June 1981, due to the high costs of putting it on each week. Showstoppers, a family show by Jeff Kutash, was planned to open in 1990, but was canceled before its premiere. Boxing Many major professional boxing events took place at the Dunes from 1975 to 1990, most notably the May 20, 1983, card on which Ossie Ocasio retained his WBA world cruiserweight title by fifteen-round unanimous decision over Randy Stephens; Greg Page beat Renaldo Snipes by twelve-round unanimous decision in a WBC heavyweight division elimination bout; Michael Dokes retained his WBA world heavyweight title with a fifteen-round draw against Mike Weaver in their rematch; and Larry Holmes defeated Tim Witherspoon by twelve-round split decision to retain his WBC world heavyweight title. This was the first time in history that two world heavyweight championship fights took place on the same day. In popular culture The Dunes made numerous appearances on television, including a 1964 episode of Arrest and Trial. It is featured in a 1977 episode of The Bionic Woman titled "Fembots in Las Vegas", and a 1978 episode of Charlie's Angels titled "Angels in Vegas". The Dunes sign is used in the intro of the television series Vega$, and the resort is seen in the pilot episode of the 1980s television series Knight Rider, titled "Knight of the Phoenix". It also appears in the season-two premiere episode "Goliath". The Dunes made film appearances as well, including the 1971 James Bond movie Diamonds Are Forever, in which it serves as the office of Whyte House casino manager Bert Saxby. The Dunes sign also makes an appearance in the film, and a deleted scene, available on home media releases, takes place in the Dome of the Sea restaurant. In the 1984 film Oxford Blues, the main character (portrayed by Rob Lowe) works as a parking attendant at the Dunes. The sign and hotel also appear in the 1984 film Cannonball Run II, and are seen in the closing credits of the 1989 film K-9. 
The sign also appears in the 1991 comedy Hot Shots!, when the pilot nicknamed "Wash Out" misses his runway and lands near the hotel. The 1991 film Harley Davidson and the Marlboro Man includes footage of the casino and hotel, including its rooftop. The hotel's 1993 implosion was filmed for Treasure Island: The Adventure Begins, a television special promoting Wynn's Treasure Island resort. The implosion is also among other Las Vegas resort demolitions featured during the closing credits of the 2003 film The Cooler. The Dunes is shown across from the fictional Tangiers casino at the beginning of the 1995 film Casino, directed by Martin Scorsese. The Dunes is also seen during the Las Vegas sequence of Scorsese's 2019 film The Irishman. See also List of Las Vegas Strip hotels Notes References External links Footage of the Dunes' grand opening with Frank Sinatra Implosion of the Dunes 1955 establishments in Nevada 1993 disestablishments in Nevada Casinos completed in 1955 Casino hotels Buildings and structures demolished by controlled implosion Demolished hotels in Clark County, Nevada Hotel buildings completed in 1955 Hotel buildings completed in 1965 Hotel buildings completed in 1979 Hotels established in 1955 Defunct casinos in the Las Vegas Valley Defunct hotels in the Las Vegas Valley Buildings and structures demolished in 1993 Buildings and structures demolished in 1994 Las Vegas Strip Resorts in the Las Vegas Valley Skyscraper hotels in Paradise, Nevada Former skyscraper hotels
Dunes (hotel and casino)
Engineering
7,571
9,330,860
https://en.wikipedia.org/wiki/Accepted%20and%20experimental%20value
In science, and most specifically chemistry, the accepted value is the value of a substance's property that is agreed upon by essentially all scientists, while the experimental value is the value of that property as measured in an individual (localized) laboratory. See also Accuracy and precision Error Approximation error References Analytical chemistry
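The gap between the two values is commonly expressed as a percent error relative to the accepted value. A minimal sketch in Python (the "experimental" density below is a hypothetical lab measurement; the accepted density of water at 20 °C is about 0.9982 g/mL):

```python
# Percent error of an experimental value relative to the accepted value.
def percent_error(experimental: float, accepted: float) -> float:
    return abs(experimental - accepted) / abs(accepted) * 100.0

accepted_density = 0.9982   # g/mL, accepted density of water at 20 degrees C
measured_density = 0.9913   # g/mL, hypothetical value from a localized lab

print(f"Percent error: {percent_error(measured_density, accepted_density):.2f}%")
# -> Percent error: 0.69%
```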
Accepted and experimental value
Chemistry
54
16,936,858
https://en.wikipedia.org/wiki/Stackable%20switch
A stackable switch is a network switch that is fully functional when operating standalone but which can also be set up to operate together with one or more other network switches, with this group of switches showing the characteristics of a single switch while having the port capacity of the sum of the combined switches. The term stack refers to the group of switches that have been set up in this way. The common characteristic of a stack acting as a single switch is that there is a single IP address for remote administration of the stack as a whole, not an IP address for the administration of each unit in the stack. Stackable switches are customarily Ethernet, rack-mounted, managed switches of 1–2 rack units (RU) in size, with a fixed set of data ports on the front. Some models have slots for optional slide-in modules to add ports or features to the base stackable unit. The most common configurations are 24-port and 48-port models. Comparison with other switch architectures A stackable switch is distinct from a standalone switch, which only operates as a single entity. It is also distinct from a modular chassis switch. Benefits Stackable switches have these benefits: Simplified network administration: Whether a stackable switch operates alone or “stacked” with other units, there is always just a single management interface for the network administrator to deal with. This simplifies the setup and operation of the network. Scalability: A small network can be formed around a single stackable unit, and then the network can grow with additional units over time if and when needed, with little added management complexity. Deployment flexibility: Stackable switches can operate together with other stackable switches or can operate independently. Units one day can be combined as a stack in a single site, and later can be run in different locations as independent switches. Resilient connections: In some vendor architectures, active connections can be spread across multiple units so that should one unit in a stack be removed or fail, data will continue to flow through other units that remain functional. Improved backplane: A series of switches, when stacked together, also increases the effective backplane capacity available to the switches in the stack. Drawbacks Compared with a modular chassis switch, stackable switches have these drawbacks: For locations needing numerous ports, a modular chassis may cost less. With stackable switching, each unit in a stack has its own enclosure and at minimum a single power supply. With modular switching, there is one enclosure and one set of power supplies. High-end modular switches have high-resiliency / high-redundancy features not available in all stackable architectures. Additional overhead when sending stacking data between switches. Some stacking protocols add additional headers to frames, further increasing overhead. Functionality Features associated with stackable switches can include: Single IP address for multiple units. Multiple switches can share one IP address for administrative purposes, thus conserving IP addresses. Single management view from multiple interfaces. Stack-level views and commands can be provided from a single command line interface (CLI) and/or embedded Web interface. The view into the stack can be unified. Stacking resiliency. Multiple switches can have ways to bypass a “down” switch in a stack, thus allowing the remaining units to function as a stack even with a failed or removed unit. Layer 3 redundancy. 
Some stackable architectures allow for continued Layer 3 routing if there is a “down” switch in a stack. If routing is centralized in one unit in the stack, and that unit fails, then there must be a recovery mechanism to move routing to a backup unit in the stack. Mix and match of technology. Some stackable architectures allow for mixing switches of different technologies or from different product families, yet still achieve unified management. For example, some stacking allows for mixing of 10/100 and gigabit switches in a stack. Dedicated stacking bandwidth. Some switches come with built-in ports dedicated for stacking, which can preserve other ports for data network connections and can avoid the possible expense of an additional module to add stacking. Proprietary data handling or cables can be used to achieve higher bandwidths than standard gigabit or 10-gigabit connections. Link aggregation of ports on different units in the stack. Some stacking technologies allow for link aggregation from ports on different stacked switches either to other switches not in the stack (for example a core network) or to allow servers and other devices to have multiple connections to the stack for improved redundancy and throughput. Not all stackable switches support link aggregation across the stack. There is no universal agreement as to the threshold for qualifying as a stackable rather than a standalone switch. Some companies call their switches stackable if they support a single IP address for multiple units even if they lack other features from this list. Some industry analysts have said a product is not a stackable switch if it lacks one of the above features (e.g., dedicated bandwidth). Terminology Here are other terms associated with stackable switches: Stacking backplane Used to describe the connections between stacked units, and the bandwidth of that connection. Most typically, switches that have primarily Fast Ethernet ports would have at minimum gigabit connections for its stacking backplane; likewise, switches that primarily have Gigabit Ethernet ports would have at minimum 10-gigabit connections. Clustering The term sometimes used for a stacking approach that focuses on unified management with a single IP address for multiple stackable units. Units can be distributed and of multiple types. Stack master or commander In some stack architectures, one unit is designated the main unit of the stack. All management is routed through that single master unit. Some call this the master or commander unit. Other units in the stack are referred to as slave or member units. See also Comparison of stackable switches Modular computer network switch Further reading What is a “Stackable Management Switch”?, EUSSO Technologies, 2003. Small Business Stackable Switch White Paper, NETGEAR Inc., 2001. Cisco StackWise and StackWise Plus Technology, Cisco Systems. Ethernet Networking hardware
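The defining behavior described above (many physical units, one management IP address, and a port capacity equal to the sum of the members) can be illustrated with a toy model in Python. This is a sketch of the concept only, not any vendor's stacking software; the class and field names are invented:

```python
# Toy model of a switch stack: one management IP, summed port capacity.
from dataclasses import dataclass, field

@dataclass
class SwitchUnit:
    model: str
    ports: int                 # fixed front-panel data ports, e.g. 24 or 48

@dataclass
class Stack:
    management_ip: str         # single IP for administering the whole stack
    members: list = field(default_factory=list)

    def add_member(self, unit: SwitchUnit) -> None:
        # In real stacks the first (or elected) member acts as master/commander.
        self.members.append(unit)

    @property
    def total_ports(self) -> int:
        # The stack presents the port capacity of the sum of its members.
        return sum(u.ports for u in self.members)

stack = Stack(management_ip="10.0.0.2")
stack.add_member(SwitchUnit("edge-48", 48))
stack.add_member(SwitchUnit("edge-24", 24))
print(stack.total_ports)       # 72 ports, administered through one IP address
```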
Stackable switch
Engineering
1,237
13,160,311
https://en.wikipedia.org/wiki/Airborne%20Real-time%20Cueing%20Hyperspectral%20Enhanced%20Reconnaissance
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance, also known by the acronym ARCHER, is an aerial imaging system that produces ground images far more detailed than plain sight or ordinary aerial photography can. It is the most sophisticated unclassified hyperspectral imaging system available, according to U.S. Government officials. ARCHER can automatically scan detailed imaging for a given signature of the object being sought (such as a missing aircraft), for abnormalities in the surrounding area, or for changes from previous recorded spectral signatures. It has direct applications for search and rescue, counterdrug, disaster relief and impact assessment, and homeland security, and has been deployed by the Civil Air Patrol (CAP) in the US on the Australian-built Gippsland GA8 Airvan fixed-wing aircraft. CAP, the civilian auxiliary of the United States Air Force, is a volunteer education and public-service non-profit organization that conducts aircraft search and rescue in the US. Overview ARCHER is a daytime non-invasive technology, which works by analyzing an object's reflected light. It cannot detect objects at night, underwater, under dense cover, underground, under snow or inside buildings. The system uses a special camera facing down through a quartz glass portal in the belly of the aircraft, which is typically flown at a standard mission altitude of 2,500 feet above ground level and 100 knots (about 50 meters per second) ground speed. The system software was developed by Space Computer Corporation of Los Angeles, and the system hardware is supplied by NovaSol Corp. of Honolulu, Hawaii, specifically for CAP. The ARCHER system is based on hyperspectral technology research and testing previously undertaken by the United States Naval Research Laboratory (NRL) and Air Force Research Laboratory (AFRL). CAP developed ARCHER in cooperation with the NRL, AFRL and the United States Coast Guard Research & Development Center in the largest interagency project CAP has undertaken in its 74-year history. Since 2003, almost US$5 million authorized under the 2002 Defense Appropriations Act has been spent on development and deployment. CAP reported completing the initial deployment of 16 aircraft throughout the U.S. and training over 100 operators, but had used the system on only a few search and rescue missions, and had not credited it with being the first to find any wreckage. In searches in Georgia and Maryland during 2007, ARCHER located the aircraft wreckage, though neither accident had survivors, according to Col. Drew Alexa, director of advanced technology and the ARCHER program manager at CAP. An ARCHER-equipped aircraft from the Utah Wing of the Civil Air Patrol was used in the search for adventurer Steve Fossett in September 2007. ARCHER did not locate Fossett, but was instrumental in uncovering eight previously uncharted crash sites in the high desert area of Nevada, some decades old. Col. Alexa described the system to the press in 2007: "The human eye sees basically three bands of light. The ARCHER sensor sees 50. It can see things that are anomalous in the vegetation such as metal or something from an airplane wreckage." Major Cynthia Ryan of the Nevada Civil Air Patrol, while also describing the system to the press in 2007, stated, "ARCHER is essentially something used by the geosciences. 
It's pretty sophisticated stuff … beyond what the human eye can generally see." She elaborated further: "It might see boulders, it might see trees, it might see mountains, sagebrush, whatever, but it goes 'not that' or 'yes, that'. The amazing part of this is that it can see as little as 10 per cent of the target, and extrapolate from there." In addition to the primary search and rescue mission, CAP has tested additional uses for ARCHER. For example, an ARCHER-equipped CAP GA8 was used in a pilot project in Missouri in August 2005 to assess the suitability of the system for tracking hazardous material releases into the environment, and one was deployed to track oil spills in the aftermath of Hurricane Rita in Texas during September 2005. Since then, the ARCHER system has further proved its usefulness: in October 2006, it found the wreckage of a flight originating in Missouri in Antlers, Oklahoma. The National Transportation Safety Board was extremely pleased with the data ARCHER provided, which was later used to locate aircraft debris spread over miles of rough, wooded terrain. In July 2007, the ARCHER system identified a flood-borne oil spill originating in a Kansas oil refinery that extended downstream and had invaded previously unsuspected reservoir areas. The client agencies (EPA, Coast Guard, and other federal and state agencies) found the data essential to quick remediation. In September 2008, a Civil Air Patrol GA8 from Texas Wing searched for a missing aircraft from Arkansas. It was found in Oklahoma, identified simultaneously by ground searchers and the overflying ARCHER system. Rather than a direct find, this was a validation of the system's accuracy and efficacy. In the subsequent recovery, it was found that ARCHER had plotted the debris area with great accuracy. Technical description The major ARCHER subsystem components include: an advanced hyperspectral imaging (HSI) system with a resolution of one square meter per pixel; a panchromatic high-resolution imaging (HRI) camera with a resolution of approximately three inches (7.6 centimeters) per pixel; and a global positioning system (GPS) integrated with an inertial navigation system (INS). Hyperspectral imager The passive hyperspectral imaging spectroscopy remote sensor observes a target in multi-spectral bands. The HSI camera separates the image spectra into 52 "bins" from 500 nanometers (nm) wavelength at the blue end of the visible spectrum to 1100 nm in the infrared, giving the camera a spectral resolution of 11.5 nm. Although ARCHER records data in all 52 bands, the computational algorithms only use the first 40 bands, from 500 nm to 960 nm, because the bands above 960 nm are too noisy to be useful. For comparison, the normal human eye responds to wavelengths from approximately 400 to 700 nm, and is trichromatic, meaning the eye's cone cells only sense light in three spectral bands. As the ARCHER aircraft flies over a search area, reflected sunlight is collected by the HSI camera lens. The collected light passes through a set of lenses that focus the light to form an image of the ground. The imaging system uses a pushbroom approach to image acquisition. With the pushbroom approach, the focusing slit reduces the image height to the equivalent of one vertical pixel, creating a horizontal line image. The horizontal line image is then projected onto a diffraction grating, which is a very finely etched reflecting surface that disperses light into its spectra. 
The diffraction grating is specially constructed and positioned to create a two-dimensional (2D) spectrum image from the horizontal line image. The spectra are projected vertically, i.e., perpendicular to the line image, by the design and arrangement of the diffraction grating. The 2D spectrum image projects onto a charge-coupled device (CCD) two-dimensional image sensor, which is aligned so that the horizontal pixels are parallel to the image's horizontal. As a result, the vertical pixels are coincident to the spectra produced from the diffraction grating. Each column of pixels receives the spectrum of one horizontal pixel from the original image. The arrangement of vertical pixel sensors in the CCD divides the spectrum into distinct and non-overlapping intervals. The CCD output consists of electrical signals for 52 spectral bands for each of 504 horizontal image pixels. The on-board computer records the CCD output signal at a frame rate of sixty times each second. At an aircraft altitude of 2,500 ft AGL and a speed of 100 knots, a 60 Hz frame rate equates to a ground image resolution of approximately one square meter per pixel. Thus, every frame captured from the CCD contains the spectral data for a ground swath that is approximately one meter long and 500 meters wide. High-resolution imager A high-resolution imaging (HRI) black-and-white, or panchromatic, camera is mounted adjacent to the HSI camera to enable both cameras to capture the same reflected light. The HRI camera uses a pushbroom approach just like the HSI camera with a similar lens and slit arrangement to limit the incoming light to a thin, wide beam. However, the HRI camera does not have a diffraction grating to disperse the incoming reflected light. Instead, the light is directed to a wider CCD to capture more image data. Because it captures a single line of the ground image per frame, it is called a line scan camera. The HRI CCD is 6,144 pixels wide and one pixel high. It operates at a frame rate of 720 Hz. At ARCHER search speed and altitude (100 knots over the ground at 2,500 ft AGL) each pixel in the black-and-white image represents a 3 inch by 3 inch area of the ground. This high resolution adds the capability to identify some objects. Processing A monitor in the cockpit displays detailed images in real time, and the system also logs the image and Global Positioning System data at a rate of 30 gigabytes (GB) per hour for later analysis. The on-board data processing system performs numerous real-time processing functions including data acquisition and recording, raw data correction, target detection, cueing and chipping, precision image geo-registration, and display and dissemination of image products and target cue information. ARCHER has three methods for locating targets: signature matching where reflected light is matched to spectral signatures anomaly detection using a statistical model of the pixels in the image to determine the probability that a pixel does not match the profile, and change detection which executes a pixel-by-pixel comparison of the current image against ground conditions that were obtained in a previous mission over the same area. In change detection, scene changes are identified, and new, moved or departed targets are highlighted for evaluation. In spectral signature matching, the system can be programmed with the parameters of a missing aircraft, such as paint colors, to alert the operators of possible wreckage. 
It can also be used to look for specific materials, such as petroleum products or other chemicals released into the environment, or even ordinary items like commonly available blue polyethylene tarpaulins. In an impact assessment role, information on the location of blue tarps used to temporarily repair buildings damaged in a storm can help direct disaster relief efforts; in a counterdrug role, a blue tarp located in a remote area could be associated with illegal activity. References External links NovaSol Corp Space Computer Corporation Civil Air Patrol Spectroscopy Earth observation remote sensors
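Of the three target-location methods described above, signature matching is the easiest to sketch. The fragment below (illustrative only, not ARCHER's actual algorithm) scores each pixel of one pushbroom line against a target spectrum using the spectral angle, a brightness-insensitive similarity measure; the 40 usable bands and roughly 500-pixel swath width follow the figures in the text, while the data and threshold are invented:

```python
import numpy as np

BANDS = 40   # ARCHER records 52 bands but uses the first 40 (500-960 nm)

def spectral_angle(pixel: np.ndarray, target: np.ndarray) -> float:
    """Angle in radians between two spectra; 0 means identical shape."""
    cos = np.dot(pixel, target) / (np.linalg.norm(pixel) * np.linalg.norm(target))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

rng = np.random.default_rng(0)
target = rng.random(BANDS)            # stand-in signature, e.g. aircraft paint
scene = rng.random((500, BANDS))      # one pushbroom line, ~500 pixels wide
scene[123] = 0.8 * target             # plant a dimmer copy of the target;
                                      # the angle measure ignores brightness

angles = np.array([spectral_angle(p, target) for p in scene])
cues = np.where(angles < 0.15)[0]     # cue threshold is arbitrary here
print(cues)                           # -> [123]
```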
Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance
Physics,Chemistry
2,169
29,184,426
https://en.wikipedia.org/wiki/Beta%20Sagittae
Beta Sagittae, Latinized from β Sagittae, is a single star in the northern constellation of Sagitta. It is a faint star but visible to the naked eye, with an apparent visual magnitude of 4.38. Based upon an annual parallax shift of 7.7237 mas as seen from the Gaia satellite, it is located about 420 light years from the Sun. The star is moving closer to the Sun with a radial velocity of −22 km/s. This is an evolved red giant; a suffix notation in its stellar classification indicates a mild overabundance of the cyanogen molecule in the spectrum. Beta Sagittae is an estimated 129 million years old with 4.33 times the mass of the Sun, and has expanded to roughly 27 times the Sun's radius. The star is radiating 392 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,850 K. Naming In Chinese, 左旗 (Zuǒ Qí), meaning Left Flag, refers to an asterism consisting of β Sagittae, α Sagittae, δ Sagittae, ζ Sagittae, γ Sagittae, 13 Sagittae, 11 Sagittae, 14 Sagittae and ρ Aquilae. Consequently, the Chinese name for β Sagittae itself is 左旗一 (Zuǒ Qí yī, the First Star of Left Flag). References G-type giants Sagitta Sagittae, Beta Durchmusterung objects Sagittae, 06 185958 096837 7488
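The quoted distance follows directly from the quoted parallax, since distance in parsecs is the reciprocal of the parallax in arcseconds. A quick check in Python:

```python
parallax_mas = 7.7237                  # annual parallax shift from the text
distance_pc = 1000.0 / parallax_mas    # parsecs = 1 / parallax in arcseconds
distance_ly = distance_pc * 3.26156    # light-years per parsec

print(f"{distance_pc:.1f} pc ~ {distance_ly:.0f} ly")
# -> 129.5 pc ~ 422 ly, consistent with the ~420 light years quoted above
```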
Beta Sagittae
Astronomy
324
36,561,481
https://en.wikipedia.org/wiki/HOBBIES%20%28electromagnetic%20solver%29
HOBBIES is a general purpose electromagnetic solver for various applications. The name is an acronym for Higher Order Basis Based Integral Equation Solver. The software is based on the Method of Moments (MoM), and it employs higher order polynomials as the basis functions for the frequency domain integral equation solver. The higher-order basis functions can significantly reduce the number of unknowns compared with the traditional piece-wise basis functions, e.g., Rao-Wilton-Glisson triangular patch basis functions (RWGs). HOBBIES can be used to solve various types of electromagnetic field problems including antenna design, antenna placement, scattering analysis, EMI/EMC analysis, etc. The software pioneered the commercial implementation of parallel computation for solving extremely electrically large problems using modest computational resources. References External links Numerical software Electronic design automation software Electromagnetic simulation software
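HOBBIES itself is proprietary, but the structure of any Method of Moments solver can be illustrated with the textbook electrostatic problem of a thin charged wire held at a fixed potential. The sketch below is not HOBBIES code: it uses the simplest pulse basis with point matching, whereas HOBBIES employs higher-order polynomial bases (which reduce the number of unknowns needed for a given accuracy); the geometry and segment count here are arbitrary:

```python
import numpy as np

eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
L, a, V = 1.0, 1e-3, 1.0   # wire length (m), wire radius (m), potential (V)
N = 100                    # number of pulse-basis segments
dx = L / N
centers = (np.arange(N) + 0.5) * dx

# Z[m, n]: potential at match point m due to unit line-charge density on
# segment n, from the closed-form integral of 1/sqrt((x_m - x')^2 + a^2).
Z = np.empty((N, N))
for m in range(N):
    u1 = (centers - dx / 2) - centers[m]    # segment start offsets
    u2 = (centers + dx / 2) - centers[m]    # segment end offsets
    Z[m, :] = (np.arcsinh(u2 / a) - np.arcsinh(u1 / a)) / (4 * np.pi * eps0)

sigma = np.linalg.solve(Z, np.full(N, V))   # charge density per segment
Q = np.sum(sigma) * dx                      # total charge on the wire
print(f"Capacitance ~ {Q / V * 1e12:.2f} pF")   # roughly 10 pF here
```

The matrix-fill-then-solve pattern is the core of MoM; a higher-order basis simply changes how the unknowns represent the solution on each element.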
HOBBIES (electromagnetic solver)
Mathematics
172
43,352,180
https://en.wikipedia.org/wiki/3%20Serpentis
3 Serpentis is a binary star in the constellation Serpens with an orbital period of approximately 66 years. It is dimly visible to the naked eye with an apparent magnitude of 5.337. It is an orange giant of spectral type K0III, a star that has used up its core hydrogen. The two components of 3 Serpentis can be resolved using speckle interferometry and were separated by 0.23" in 2014. The orbit is highly eccentric, and at periastron passage in 1997 the two are calculated to have been only 6 mas apart. Individual spectra for the two components of 3 Serpentis cannot be obtained, and the spectral type of K0III is for the two stars combined. The primary is 2.5 magnitudes brighter than the secondary, and is the cooler of the two. The combined spectral type indicates that the primary is likely to have evolved away from the main sequence, but comparison of the colour and brightness of the secondary suggests it is still a main sequence star. References Serpens Serpentis, 03 K-type giants Durchmusterung objects 5674 135482 074649 Binary stars
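Given the combined magnitude of 5.337 and the 2.5-magnitude difference quoted above, the individual component magnitudes follow from adding the two fluxes. A back-of-envelope check in Python:

```python
import math

m_comb, delta_m = 5.337, 2.5
flux_ratio = 10 ** (-0.4 * delta_m)    # secondary/primary flux ratio = 0.1
m_primary = m_comb + 2.5 * math.log10(1 + flux_ratio)
m_secondary = m_primary + delta_m
print(f"primary ~ {m_primary:.2f}, secondary ~ {m_secondary:.2f}")
# -> primary ~ 5.44, secondary ~ 7.94
```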
3 Serpentis
Astronomy
231
50,770,215
https://en.wikipedia.org/wiki/Benzyl%20isothiocyanate
Benzyl isothiocyanate (BITC) is an isothiocyanate found in plants of the mustard family. Occurrence It can be found in Alliaria petiolata, pilu oil, and papaya seeds, where it is the main product of glucotropaeolin breakdown by the enzyme myrosinase. Activity Benzyl isothiocyanate, and isothiocyanates in general, have been found to be protective against pancreatic carcinogenesis in vitro via expression of the p21/WAF1 gene. A recently published study showed its restraining impact on obesity, fatty liver, and insulin resistance in a diet-induced obesity mouse model. References Isothiocyanates Benzyl esters
Benzyl isothiocyanate
Chemistry
152
45,639,598
https://en.wikipedia.org/wiki/International%20Journal%20of%20Greenhouse%20Gas%20Control
The International Journal of Greenhouse Gas Control is a monthly scientific journal covering industry research on greenhouse gas control through carbon capture and storage at large stationary emitters in the power sector and in other major resource, manufacturing and production industries. It is peer-reviewed and published by Elsevier. As of 2024, the founding editor is John J Gale, employed by the International Energy Agency Greenhouse Gas Research and Development Programme, Cheltenham, United Kingdom. The editor-in-chief is Andrea Ramirez, Delft University of Technology. The International Energy Agency Greenhouse Gas Research and Development Programme states: "Our research actively contributes to the development and deployment of CCS technology." According to the 2021 Journal Citation Reports, the journal had a 2020 impact factor of 3.738. The journal started in 2007 as a quarterly publication, moved to bimonthly issues in 2009, and to monthly issues in 2014. References External links Elsevier academic journals Academic journals established in 2007 English-language journals Energy and fuel journals Environmental science journals Monthly journals Climate change and society Climate change journals
International Journal of Greenhouse Gas Control
Environmental_science
205
1,493,053
https://en.wikipedia.org/wiki/Cockayne%20syndrome
Cockayne syndrome (CS), also called Neill-Dingwall syndrome, is a rare and fatal autosomal recessive neurodegenerative disorder characterized by growth failure, impaired development of the nervous system, abnormal sensitivity to sunlight (photosensitivity), eye disorders and premature aging. Failure to thrive and neurological disorders are criteria for diagnosis, while photosensitivity, hearing loss, eye abnormalities, and cavities are other very common features. Problems with any or all of the internal organs are possible. It is associated with a group of disorders called leukodystrophies, which are conditions characterized by degradation of neurological white matter. There are two primary types of Cockayne syndrome: Cockayne syndrome type A (CSA), arising from mutations in the ERCC8 gene, and Cockayne syndrome type B (CSB), resulting from mutations in the ERCC6 gene. The underlying disorder is a defect in a DNA repair mechanism. Unlike other defects of DNA repair, patients with CS are not predisposed to cancer or infection. Cockayne syndrome is a rare but destructive disease usually resulting in death within the first or second decade of life. The mutation of specific genes in Cockayne syndrome is known, but the widespread effects and their relationship with DNA repair are yet to be well understood. It is named after English physician Edward Alfred Cockayne (1880–1956), who first described it in 1936 and re-described it in 1946. Neill-Dingwall syndrome was named after Mary M. Dingwall and Catherine A. Neill. These two scientists described the case of two brothers with Cockayne syndrome and asserted it was the same disease described by Cockayne. In their article, the two added to the known signs of the disease with their discovery of calcifications in the brain. They also compared Cockayne syndrome to what is now known as Hutchinson–Gilford progeria syndrome (HGPS), then called progeria, due to the advanced aging that characterizes both disorders. Types CS Type I, the "classic" form, is characterized by normal fetal growth with the onset of abnormalities in the first two years of life. Vision and hearing gradually decline. The central and peripheral nervous systems progressively degenerate until death in the first or second decade of life as a result of serious neurological degradation. Cortical atrophy is less severe in CS Type I. CS Type II is present from birth (congenital) and is much more severe than CS Type I. It involves very little neurological development after birth. Death usually occurs by age seven. This specific type has also been designated as cerebro-oculo-facio-skeletal (COFS) syndrome or Pena-Shokeir syndrome Type II. COFS syndrome is so named due to the effects it has on the brain, eyes, face, and skeletal system, as the disease frequently causes brain atrophy, cataracts, loss of fat in the face, and osteoporosis. COFS syndrome can be further subdivided into several conditions (COFS types 1, 2, 3 (associated with xeroderma pigmentosum) and 4). Typically patients with this early-onset form of the disorder show more severe brain damage, including reduced myelination of white matter, and more widespread calcifications, including in the cortex and basal ganglia. CS Type III, characterized by late onset, is typically milder than Types I and II. Often patients with Type III will live into adulthood. Xeroderma pigmentosum-Cockayne syndrome (XP-CS) occurs when an individual also has xeroderma pigmentosum, another DNA repair disease. Some symptoms of each disease are expressed. 
For instance, freckling and pigment abnormalities characteristic of XP are present. The neurological disorder, spasticity, and underdevelopment of sexual organs characteristic of CS are seen. However, hypomyelination and the facial features of typical CS patients are not present. Causes If hyperoxia or excess oxygen occurs in the body, cellular metabolism produces several highly reactive forms of oxygen called free radicals. These can cause oxidative damage to cellular components, including the DNA. In normal cells, the body repairs the damaged sections. In this disease, due to subtle defects in transcription, children's genetic machinery for synthesizing proteins needed by the body does not operate at normal capacity. Over time, according to this theory, the result is developmental failure and death. Every minute, the body pumps 10 to 20 liters of oxygen through the blood, carrying it to billions of cells. In its normal molecular form, oxygen is harmless, but cellular metabolism involving oxygen can generate several highly reactive free radicals, which can oxidatively damage cellular components including the DNA. In an average human cell, several thousand lesions occur in the DNA every day, many of them resulting from oxidative damage. Each lesion (a damaged section of DNA) must be snipped out and the DNA repaired to preserve its normal function. Unrepaired DNA can lose its ability to code for proteins, and mutations can also result. These mutations can activate oncogenes or silence tumor suppressor genes. According to research, oxidative damage to active genes is not preferentially repaired in these patients, and in the most severe cases repair is slowed throughout the whole genome. Normally, oxidative damage repair is faster in active genes (which make up less than five percent of the genome) than in inactive regions of the DNA; children with this disease, however, do not preferentially repair the active genes where oxidative damage occurs. The resulting accumulation of oxidative damage could impair the normal functions of the DNA and may even trigger a program of cell death (apoptosis). Genetics Cockayne syndrome is classified genetically as follows: Mutations in the ERCC8 (also known as CSA) gene or the ERCC6 (also known as CSB) gene are the cause of Cockayne syndrome type A and type B, respectively. Mutations in the ERCC6 gene account for about 70% of cases. The proteins made by these genes are involved in repairing damaged DNA via the transcription-coupled repair mechanism, particularly the DNA in active genes. DNA damage is caused by ultraviolet rays from sunlight, radiation, or free radicals in the body. A normal cell can repair DNA damage before it accumulates. If either the ERCC6 or the ERCC8 gene is altered (as in Cockayne syndrome), DNA damage encountered during transcription is not repaired, causing RNA polymerase to stall at that location and interfering with gene expression. As the unrepaired DNA damage accumulates, progressively more active gene expression is impeded, leading to malfunctioning cells or cell death, which likely contributes to the signs of Cockayne syndrome such as premature aging and neuronal hypomyelination. 
Mechanism In contrast to cells with normal repair capability, CSA- and CSB-deficient cells are unable to preferentially repair cyclobutane pyrimidine dimers induced by the action of ultraviolet (UV) light on the template strand of actively transcribed genes. This deficiency reflects the loss of ability to perform the DNA repair process known as transcription-coupled nucleotide excision repair (TC-NER). Within the damaged cell, the CSA protein normally localizes to sites of DNA damage, particularly inter-strand cross-links, double-strand breaks and some monoadducts. CSB protein is also normally recruited to sites of DNA damage, and its recruitment is most rapid and robust in the following order: interstrand crosslinks > double-strand breaks > monoadducts > oxidative damage. CSB protein forms a complex with another DNA repair protein, SNM1A (DCLRE1A), a 5' – 3' exonuclease, that localizes to inter-strand cross-links in a transcription-dependent manner. The accumulation of CSB protein at sites of DNA double-strand breaks occurs in a transcription-dependent manner and facilitates homologous recombinational repair of the breaks. During the G0/G1 phase of the cell cycle, DNA damage can trigger a CSB-dependent recombinational repair process that uses an RNA (rather than DNA) template. The premature aging features of CS are likely due, at least in part, to the deficiencies in DNA repair (see DNA damage theory of aging). Diagnosis People with this syndrome have smaller than normal head sizes (microcephaly), are of short stature (dwarfism), their eyes appear sunken, and they have an "aged" look. They often have long limbs with joint contractures (inability to relax the muscle at a joint), a hunched back (kyphosis), and they may be very thin (cachectic), due to a loss of subcutaneous fat. Their small chin, large ears, and pointy, thin nose often give an aged appearance. The skin of those with Cockayne syndrome is also frequently affected: hyperpigmentation, varicose or spider veins (telangiectasia), and serious sensitivity to sunlight are common, even in individuals without XP-CS. Often patients with Cockayne syndrome will severely burn or blister with very little sun exposure. The eyes of patients can be affected in various ways, and eye abnormalities are common in CS. Cataracts and cloudiness of the cornea (corneal opacity) are common. Loss of or damage to the optic nerve, causing optic atrophy, can occur. Nystagmus, or involuntary eye movement, and pupils that fail to dilate demonstrate a loss of control of voluntary and involuntary muscle movement. A "salt and pepper" retinal pigmentation is also a typical sign. Diagnosis is determined by a specific test for DNA repair, which measures the recovery of RNA after exposure to UV radiation. Despite being associated with genes involved in nucleotide excision repair (NER), unlike xeroderma pigmentosum, CS is not associated with an increased risk of cancer. Laboratory Studies In Cockayne syndrome patients, UV-irradiated cells show decreased DNA and RNA synthesis. Laboratory studies are mainly useful to eliminate other disorders. For example, skeletal radiography, endocrinologic tests, and chromosomal breakage studies can help in excluding disorders included in the differential diagnosis. Imaging Studies Brain CT scanning in Cockayne syndrome patients may reveal calcifications and cortical atrophy. Other Tests Prenatal evaluation is possible. 
Amniotic fluid cell culturing is used to demonstrate that fetal cells are deficient in RNA synthesis after UV irradiation. Neurology Imaging studies reveal a widespread absence of the myelin sheaths of the neurons in the white matter of the brain and general atrophy of the cortex. Calcifications have also been found in the putamen, an area of the forebrain that regulates movements and aids in some forms of learning, along with the cortex. Additionally, atrophy of the central area of the cerebellum found in patients with Cockayne syndrome could also result in the lack of muscle control, particularly involuntary, and poor posture typically seen. Treatment There is no permanent cure for this syndrome, although patients can be symptomatically treated. Treatment usually involves physical therapy and minor surgeries to the affected organs, such as cataract removal. Wearing high-factor sunscreen and protective clothing is also recommended, because Cockayne syndrome patients are very sensitive to UV radiation. Optimal nutrition can also help. Genetic counseling for the parents is recommended, as the disorder has a 25% chance of being passed to any future children, and prenatal testing is also a possibility. Another important aspect is the prevention of recurrence of CS in other siblings. Identification of the gene defects involved makes it possible to offer genetic counseling and antenatal diagnostic testing to parents who already have one affected child. Currently, there are two ongoing projects focused on the development of gene therapy for Cockayne syndrome. The first project, led by the Viljem Julijan Association for Children with Rare Diseases, aims to develop gene therapy specifically for Cockayne syndrome type B. The second project, led by the Riaan Research Initiative, is dedicated to the development of gene therapy for Cockayne syndrome type A. Prognosis The prognosis for those with Cockayne syndrome is poor, as death typically occurs by the age of 12. The prognosis for Cockayne syndrome varies by disease type. There are three types of Cockayne syndrome according to the severity and onset of the symptoms. However, the differences between the types are not always clear-cut, and some researchers believe the signs and symptoms reflect a spectrum instead of distinct types: Cockayne syndrome Type A (CSA) is marked by normal development until a child is 1 or 2 years old, at which point growth slows and developmental delays are noticed; symptoms often are not apparent until the child is about 1 year old. Life expectancy for type A is approximately 10 to 20 years. These symptoms are seen in CS type 1 children. Cockayne syndrome type B (CSB), also known as "cerebro-oculo-facio-skeletal (COFS) syndrome" (or "Pena-Shokeir syndrome type B"), is the most severe subtype. Symptoms are present at birth and normal brain development stops after birth. The average lifespan for children with type B is up to 7 years of age. These symptoms are seen in CS type 2 children. Cockayne syndrome type C (CSC) appears later in childhood with milder symptoms than the other types and a slower progression of the disorder. People with this type of Cockayne syndrome live into adulthood, with an average lifespan of 40 to 50 years. These symptoms are seen in CS type 3. Epidemiology Cockayne syndrome is rare worldwide. No racial predilection is reported for Cockayne syndrome. No sexual predilection is described for Cockayne syndrome; the male-to-female ratio is equal. Cockayne syndrome I (CS-A) manifests in childhood. 
Cockayne syndrome II (CS-B) manifests at birth or in infancy, and it has a worse prognosis. Recent research Research published in January 2018 compared CS features seen globally, noting both similarities and differences: CS has an incidence of about 1 in 250,000 live births and a prevalence of approximately 1 per 2.5 million, figures that are remarkably consistent across various regions globally. See also Accelerated aging disease Biogerontology Degenerative disease Genetic disorder CAMFAK syndrome, thought to be a form (or subset) of Cockayne syndrome References External links This article incorporates some public domain text from The U.S. National Library of Medicine Autosomal recessive disorders Rare diseases Neurological disorders Syndromes affecting the nervous system Genodermatoses DNA replication and repair-deficiency disorders Progeroid syndromes Diseases named after discoverers
Cockayne syndrome
Biology
3,151
50,446,231
https://en.wikipedia.org/wiki/Vogel%20conflict%20test
The Vogel conflict test (VCT) is a conflict-based experimental method, primarily used in pharmacology to determine the anxiolytic properties of drugs. The VCT predicts drugs that can manage generalized anxiety disorders and acute anxiety states. Conditioning Suppressing behaviour through punishment is commonly used to determine the anxiolytic properties of drugs. During the VCT, animals are punished with electrical shocks when trying to get either food or water. Therefore, the number of times the animal goes to get food or water decreases. When anxiolytic drugs are injected, the number of times animals go up to get food or water increases, even though the animal will still be punished. Procedure Experiments are done in a mouse operant conditioning chamber. Conditioning chambers are used to train animals to do simple tasks such as pulling a lever or pushing a button. The animals can be rewarded or punished for doing these tasks. The original VCT method included 48 hours of water deprivation followed by a mild electrical shock every 20 licks when water was finally given. Modern versions of the test are less severe: water deprivation lasts 18 hours, or water is provided for 1 hour a day for four days before beginning the test, and the electrical shock is only given during a 3-5 minute period. The VCT can be done with food deprivation too. Before beginning the test, animals must be acclimated to the cage and food pellets, and the conditioning chamber must be checked to ensure everything is in working order. On Day 1, animals are placed in the conditioning chamber. Whenever the animal pulls the lever, a food pellet drops. This takes place for 8 hours, after which the animal is placed back in its cage. Any animal that is unable to eat 15 or more food pellets is either removed from the experiment or receives further training. On Day 2, the animal is injected with saline, placed back into the conditioning chamber, and observed for 1 hour. On Day 3, animals are injected with the agent being tested. They are divided into two groups. One group is taken one by one and placed in the chamber; when these animals push the lever, they receive a mild electrical shock. The other group is also placed in the apparatus but does not receive an electrical shock when pushing the lever. The group that is injected with the agent but not shocked sets a baseline for the experiment, against which dose-dependent responses can be tested. Following long-term administration of the agent, animals should be observed to determine any potential rebound or withdrawal effects. Criticisms Since conditions such as anxiety are idiopathic, animal models are difficult to create and therefore flawed. However, animal models can be pharmacologically validated, usually with benzodiazepines, a common class of anti-anxiety medication. Other drugs that are known to treat anxiety, such as SSRIs, which theoretically should increase the number of responses, show no effect in the VCT. The VCT can give false positives: if the shock the animals receive is too low, animals might ignore the shock and continue to get food or water, which can skew results. Drugs that increase thirst or appetite can give inconsistent results. Animal training can take time. If doing the VCT using food, animals must be trained to pull the lever for the food dispenser and accept the pellet. If using water, animals must be trained to push the button and drink the water. 
Because of this, etiological tests which observe spontaneous fears are usually preferred by researchers. Lab personnel must be trained to avoid injury or disease to the animals. Animal housing has well-known effects on stress. The original Vogel study did not specify whether animals were group-housed or individually housed; modern studies therefore use both types of housing, which can produce different results. Different animal models can show different anxiety results: animals can show high levels of anxiety in one test and low levels in a different test. The VCT, which measures anxiety through decreased consumption, cannot be compared directly to tests such as the open field or elevated plus maze, which measure anxiety through locomotor activity. See also Behavioural despair test Learned helplessness Open field (animal test) Tail suspension test References Animal testing techniques Psychology experiments
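Analysis of VCT results typically reduces to comparing punished-response counts between a vehicle (saline) group and a drug-treated group. A minimal sketch of such a comparison (the counts are invented for illustration; a real study would use appropriate sample sizes, dose groups, and statistical corrections):

```python
from scipy import stats

vehicle = [12, 9, 15, 11, 8, 13, 10, 14]    # punished responses, saline group
drug    = [24, 31, 19, 27, 22, 29, 25, 30]  # punished responses, test agent

t, p = stats.ttest_ind(drug, vehicle)
print(f"t = {t:.2f}, p = {p:.4f}")
# A significantly higher count in the drug group is read as an
# anxiolytic-like (anti-conflict) effect in this paradigm.
```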
Vogel conflict test
Chemistry
854
8,934,260
https://en.wikipedia.org/wiki/VirtualBox
Oracle VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox and InnoTek VirtualBox) is a hosted hypervisor for x86 virtualization developed by Oracle Corporation. VirtualBox was originally created by InnoTek Systemberatung GmbH, which was acquired by Sun Microsystems in 2008, which was in turn acquired by Oracle in 2010. VirtualBox may be installed on Microsoft Windows, macOS, Linux, Solaris and OpenSolaris. There are also ports to FreeBSD and Genode. It supports the creation and management of guest virtual machines running Windows, Linux, BSD, OS/2, Solaris, Haiku, and OSx86, as well as limited virtualization of macOS guests on Apple hardware. For some guest operating systems, a "Guest Additions" package of device drivers and system applications is available, which typically improves performance, especially that of graphics, and allows changing the resolution of the guest OS automatically when the window of the virtual machine on the host OS is resized. Released under the terms of the GNU General Public License and, optionally, the CDDL for most files of the source distribution, VirtualBox is free and open-source software, though the Extension Pack is proprietary software, free of charge only to personal users. VirtualBox was later relicensed under GPLv3, with linking exceptions for the CDDL and other GPL-incompatible licenses. History VirtualBox was first offered by InnoTek Systemberatung GmbH, a German company based in Weinstadt, under a proprietary software license, making one version of the product available at no cost for personal or evaluation use, subject to the VirtualBox Personal Use and Evaluation License (PUEL). In January 2007, based on counsel by LiSoG, InnoTek released VirtualBox Open Source Edition (OSE) as free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2. InnoTek also contributed to the development of OS/2 and Linux support in virtualization, as well as OS/2 ports of products from Connectix, which was later acquired by Microsoft. Specifically, InnoTek developed the "additions" code in both Windows Virtual PC and Microsoft Virtual Server, which enables various host–guest OS interactions like shared clipboards or dynamic viewport resizing. Sun Microsystems acquired InnoTek in February 2008. Following the acquisition of Sun Microsystems by Oracle Corporation in January 2010, the product was re-branded as "Oracle VM VirtualBox". In December 2019, with version 6.1, VirtualBox removed support for software-based virtualization and exclusively performs hardware-assisted virtualization. Release history Licensing The core package, since version 4 in December 2010, is free software under GNU General Public License version 2 (GPLv2). A supplementary package, under a proprietary license, adds support for USB 2.0 and 3.0 devices, Remote Desktop Protocol (RDP), disk encryption, NVMe, and Preboot Execution Environment (PXE). This package is called the "Oracle VM VirtualBox Extension Pack". It includes closed-source components, so it is not source-available. The license is called the Personal Use and Evaluation License (PUEL). It allows gratis access for personal use, educational use, and evaluation. Since VirtualBox version 5.1.30, Oracle defines personal use as installation on a single computer for non-commercial purposes. Prior to version 4, there were two different packages of the VirtualBox software. The full package was offered gratis under the PUEL, with licenses for other commercial deployment purchasable from Oracle. 
A second package called the VirtualBox Open Source Edition (OSE) was released under GPLv2. This edition omitted the proprietary components, which were not available under GPLv2. Building the BIOS for VirtualBox requires the Open Watcom compiler, which is released under the Sybase Open Watcom Public License. The Open Source Initiative has approved this as "Open Source", but the Free Software Foundation and the Debian Free Software Guidelines do not consider it "free". VirtualBox has experimental support for macOS guests. However, macOS's end user license agreement does not permit it to run on non-Apple hardware. The operating system enforces this by calling the Apple System Management Controller (SMC) to verify the hardware's authenticity. All Apple machines have an SMC. Virtualization Users of VirtualBox can load multiple guest OSes under a single host operating system (host OS). Each guest can be started, paused and stopped independently within its own virtual machine (VM). The user can independently configure each VM and run it under a choice of software-based virtualization or hardware-assisted virtualization if the underlying host hardware supports this. The host OS, guest OSs, and applications can communicate with each other through a number of mechanisms, including a common clipboard and a virtualized network facility. Guest VMs can also directly communicate with each other if configured to do so. Hardware-assisted VirtualBox supports both Intel's VT-x and AMD's AMD-V hardware-assisted virtualization. Making use of these facilities, VirtualBox can run each guest VM in its own separate address space; the guest OS ring 0 code runs on the host at ring 0 in VMX non-root mode rather than in ring 1. Starting with version 6.1, VirtualBox only supports this method. Until then, VirtualBox specifically supported some guests (including 64-bit guests, SMP guests and certain proprietary OSs) only on hosts with hardware-assisted virtualization. Devices and peripherals VirtualBox emulates hard disks in three formats: the native VDI (Virtual Disk Image), VMware's VMDK, and Microsoft's VHD. It thus supports disks created by other hypervisor software. VirtualBox can also connect to iSCSI targets and to raw partitions on the host, using either as virtual hard disks. VirtualBox emulates IDE (PIIX4 and ICH6 controllers), SCSI, SATA (ICH8M controller), and SAS controllers, to which hard drives can be attached. VirtualBox has supported Open Virtualization Format (OVF) since version 2.2.0 (April 2009). Both ISO images and physical devices connected to the host can be mounted as CD or DVD drives. VirtualBox supports running operating systems from live CDs and DVDs. By default, VirtualBox provides graphics support through a custom virtual graphics card that is VBE or UEFI GOP compatible. The Guest Additions for Windows, Linux, Solaris, OpenSolaris, and OS/2 guests include a special video driver that increases video performance and includes additional features, such as automatically adjusting the guest resolution when resizing the VM window and desktop composition via virtualized WDDM drivers. 
For an Ethernet network adapter, VirtualBox virtualizes these network interface cards:
AMD PCnet PCI II (Am79C970A)
AMD PCnet-Fast III (Am79C973)
Intel Pro/1000 MT Desktop (82540EM)
Intel Pro/1000 MT Server (82545EM)
Intel Pro/1000 T Server (82543GC)
Paravirtualized network adapter (virtio-net)

The emulated network cards allow most guest OSes to run without the need to find and install drivers for networking hardware, as drivers for these cards ship as part of the guest OS. A special paravirtualized network adapter is also available, which improves network performance by eliminating the need to match a specific hardware interface but requires special driver support in the guest (many Linux distributions ship with this driver included). By default, VirtualBox uses NAT, through which end-user Internet software such as Firefox or ssh can operate. Bridged networking via a host network adapter or virtual networks between guests can also be configured. Up to 36 network adapters can be attached simultaneously, but only four are configurable through the graphical interface.

For a sound card, VirtualBox virtualizes Intel HD Audio, Intel ICH AC'97, and SoundBlaster 16 devices.

A USB 1.1 controller is emulated, so that any USB devices attached to the host can be seen in the guest. The proprietary extension pack adds a USB 2.0 or USB 3.0 controller and, if VirtualBox acts as an RDP server, it can also use USB devices on the remote RDP client as if they were connected to the host, although only if the client supports this VirtualBox-specific extension (Oracle provides clients for Solaris, Linux, and Sun Ray thin clients that can do this, and has promised support for other platforms in future versions).

Software-based

In the absence of hardware-assisted virtualization, versions 6.0.24 and earlier of VirtualBox could adopt a standard software-based virtualization approach. This mode supports 32-bit guest operating systems which run in rings 0 and 3 of the Intel ring architecture. The system reconfigures the guest OS code, which would normally run in ring 0, to execute in ring 1 on the host hardware. Because this code contains many privileged instructions which cannot run natively in ring 1, VirtualBox employs a Code Scanning and Analysis Manager (CSAM) to scan the ring 0 code recursively before its first execution to identify problematic instructions, and then calls the Patch Manager (PATM) to perform in-situ patching. This replaces the instruction with a jump to a VM-safe equivalent compiled code fragment in hypervisor memory. The guest user-mode code, running in ring 3, generally runs directly on the host hardware in ring 3. In both cases, VirtualBox uses CSAM and PATM to inspect and patch the offending instructions whenever a fault occurs. VirtualBox also contains a dynamic recompiler, based on QEMU, to recompile any real mode or protected mode code entirely (e.g. BIOS code, a DOS guest, or any operating system startup). Using these techniques, VirtualBox could achieve performance comparable to that of VMware. The feature was dropped starting with VirtualBox 6.1.

Features

Snapshots of the RAM and storage that allow reverting to a prior state
Screenshots and screen video capture
"Host key" for releasing the keyboard and mouse cursor to the host system if captured (coupled) to the guest system, and for keyboard shortcuts to features such as configuration, restarting, and screenshot; by default it is the right-side Ctrl key or, on Mac, the left Cmd key
Mouse pointer integration, meaning automatic coupling and uncoupling of the mouse cursor when moved inside and outside the virtual screen, if supported by the guest operating system
Seamless mode – the ability to run virtualized applications side by side with normal desktop applications
Shared clipboard
Shared folders through "Guest Additions" software
Special drivers and utilities to facilitate switching between systems
Ability to specify the amount of shared RAM, video memory, and CPU execution cap
Ability to emulate multiple screens
Command-line interaction (in addition to the GUI)
Public API (Java, Python, SOAP, XPCOM) to control VM configuration and execution
Nested paging for AMD-V and Intel VT (only for processors supporting SLAT and with SLAT enabled)
Limited support for 3D graphics acceleration (including OpenGL up to, but not including, 3.0 and Direct3D 9.0c via Wine's Direct3D-to-OpenGL translation in versions prior to 7.0, or DXVK in later releases)
SMP support (up to 32 virtual CPUs per virtual machine), since version 3.0
Teleportation (also known as live migration)
2D video output acceleration (not to be confused with video decoding acceleration), since version 3.1
EFI support since version 3.1 (EFI boot for Windows 7 guests is not supported)

Storage emulation

Ability to mount virtual hard disk drives and disk images; virtual optical disc images can be used for booting and for sharing files with guest systems lacking networking support
NCQ support for SATA, SCSI and SAS raw disks and partitions
SATA disk hotplugging
Pass-through mode for solid-state drives
Pass-through mode for CD/DVD/BD drives – allows users to play audio CDs, burn optical discs, and play encrypted DVDs
Can disable the host OS I/O cache
Allows limiting I/O bandwidth
PATA, SATA, SCSI, SAS, iSCSI, and floppy disk controllers
VM disk image encryption using AES-128/AES-256

Storage support includes:
Raw hard disk access – allows physical hard disk partitions on the host system to appear in the guest system
VMware Virtual Machine Disk (VMDK) format support – allows exchange of disk images with VMware
Microsoft VHD support
QEMU qed and qcow disks
HDD format disks (only version 2; versions 3 and 4 are not supported) used by Parallels virtualization products

Limitations

3D graphics acceleration for Windows guests earlier than Windows 7 was removed in version 6.1; this affected Windows XP and Windows Vista.
VirtualBox has a very low transfer rate to and from USB 2.0 devices.
For USB 3.0 equipment, device pass-through does not work in older guest OSes, such as Windows Vista and Windows XP, which lack appropriate drivers. However, since version 5.0, VirtualBox has included an experimental USB 3.0 controller (the Renesas uPD720201 xHCI), which enables USB 3.0 in these operating systems; this requires editing some configuration files.
Guest Additions for macOS are unavailable at this time.
Native Guest Additions for Windows 9x (Windows 95, 98 and ME) are not available. This results in poor performance due to the lack of graphics acceleration, with a limited default color depth. External third-party software is available to enable support for 32-bit color mode, resulting in better performance.
EFI support is incomplete; for example, EFI boot for a Windows 7 guest is not supported.
Only older versions of DirectX and OpenGL pass-through are supported (the feature can be enabled using the 3D Acceleration option for each VM individually).
Video RAM is limited to 128 MiB (256 MiB with 2D video acceleration enabled) due to technical difficulties; merely changing the GUI to allow the user to allocate more video RAM to a VM, or manually editing the VM's configuration file, will not work and results in a fatal error.
Windows 95/98/98SE/ME either cannot be installed or works unreliably with modern CPUs (AMD Zen and newer; Intel Tiger Lake and newer) and hardware-assisted virtualization (VirtualBox 6.1 and higher), due to defects in these OSes themselves. An open-source patch has been developed that fixes the issue and also addresses a Windows 95/98/98SE bug that makes the system crash when running on new, fast CPUs.
VirtualBox 7.0 or later is required to run an unmodified Windows 11 guest; full compatibility with Windows 11 is achieved in VirtualBox 7.0.14 and higher.

Host OS

The supported operating systems include:

Windows 10 64-bit and higher. Support for 64-bit Windows was added with VirtualBox 1.5, and support for 32-bit Windows was removed in 6.0. Support for Windows 2000 was removed in version 1.6, for Windows XP in 5.0, for Windows Vista in 5.2, for Windows 7 (64-bit) in 6.1, for Windows 8 (64-bit) in 7.0, and for Windows 8.1 (64-bit) in 7.1.
Windows Server 2019 and higher. Support for Windows Server 2003 was removed in 5.0, for Windows Server 2008 in 6.0, for Windows Server 2008 R2 in 7.0, and for Windows Server 2012 and 2016 in 7.1.
Linux distributions.
macOS from version 11 (Big Sur) to 14 (Sonoma), on both ARM and Intel. Preliminary Mac OS X support (beta stage) was added with VirtualBox 1.4, full support with 1.6. Support for Mac OS X 10.4 (Tiger) and earlier was removed with VirtualBox 3.1, for 10.5 (Leopard) with 4.2, for 10.6 (Snow Leopard) and 10.7 (Lion) with 5.0, for 10.8 (Mountain Lion) with 5.1, for 10.9 (Mavericks) with 5.2, for 10.10 (Yosemite) and 10.11 (El Capitan) with 6.0, for 10.12 (Sierra) officially with 6.1 (as of 6.1.16 it would still install and run, however), for 10.13 (High Sierra) and 10.14 (Mojave) with 7.0, and for 10.15 (Catalina) with 7.1.
Oracle Solaris.

Guest additions

Some features require the installation of the closed-source "VirtualBox Extension Pack":

Support for a virtual USB 2.0/3.0 controller (EHCI/xHCI); starting with VirtualBox 7.0, this functionality was integrated into the GPL version instead
VirtualBox RDP: support for the proprietary remote connection protocol developed by Microsoft and Citrix Systems
PXE boot for Intel cards
VM disk image encryption
Webcam support

While VirtualBox itself is free to use and is distributed under an open-source license, the VirtualBox Extension Pack is licensed under the VirtualBox Personal Use and Evaluation License (PUEL): personal use of the extension pack is free, but commercial users need to purchase a license. Guest Additions are installed within each guest virtual machine that supports them; the Extension Pack is installed on the host running VirtualBox.
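The snapshot capability listed under Features above lends itself to the same command-line scripting. A minimal sketch, reusing the assumed VM name from the earlier example; the snapshot label is likewise illustrative:

```python
import subprocess

def vbox(*args):
    """Run a VBoxManage command and raise if it fails."""
    subprocess.run(["VBoxManage", *args], check=True)

VM = "demo-vm"  # assumed name from the earlier example

# Take a named snapshot of the VM's current RAM and storage state...
vbox("snapshot", VM, "take", "before-upgrade")

# ...and later revert the VM to that saved state.
vbox("snapshot", VM, "restore", "before-upgrade")
```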
See also: Comparison of platform virtualization software; VMware Workstation; OS-level virtualization; x86 virtualization
In tissue and organ transplantation, the passenger leukocyte theory is the proposition that leukocytes within a transplanted allograft sensitize the recipient's alloreactive T-lymphocytes, causing transplant rejection. The concept was first proposed by George Davis Snell, and the term was coined in 1968 when Elkins and Guttmann showed that leukocytes present in a donor graft initiate an immune response in the recipient of a transplant.

See also: History of immunology
The mucous membrane of the soft palate is thin and covered with stratified squamous epithelium on both surfaces, except near the pharyngeal ostium of the auditory tube, where it is columnar and ciliated. According to Klein, the mucous membrane on the nasal surface of the soft palate in the fetus is covered throughout by columnar ciliated epithelium, which subsequently becomes squamous; some anatomists state that it is covered with columnar ciliated epithelium, except at its free margin, throughout life. Beneath the mucous membrane on the oral surface of the soft palate is a considerable amount of adenoid tissue. The palatine glands form a continuous layer on its posterior surface and around the uvula. They are primarily mucus-secreting glands, as opposed to serous or mixed-secreting glands.
Panaeolus venezolanus is a species of mushroom in the family Bolbitiaceae. Its cap is 20–35 mm in diameter and brownish gray to ashy gray in color.

See also: List of psilocybin mushrooms; Psilocybin mushrooms; Psilocybe
Phellodon confluens, commonly known as the fused cork hydnum, is a species of tooth fungus in the family Bankeraceae. It was originally described in 1825 as Hydnum confluens by Christiaan Hendrik Persoon. Czech mycologist Zdenek Pouzar transferred it to the genus Phellodon in 1956. The fungus is found in Asia, Europe, and North America. It is considered vulnerable in Switzerland.
Tamapin is a toxin from the Indian red scorpion (Hottentotta tamulus), which is a selective and potent blocker of SK2 channels.

Etymology

Tamapin is named after the scorpion from which it was isolated.

Sources

Tamapin has been isolated from Hottentotta tamulus, the Indian red scorpion.

Chemical structure and methods of isolation

Tamapin belongs to short-chain scorpion toxin subfamily 5, together with PO5 and scyllatoxin. Its sequence similarity to other toxins that can compete for the binding site of apamin is much lower. It is 31 amino acids long and weighs 3458 daltons. Its amino acid sequence is AFCNLRRCELSCRSLGLLGKCIGEECKCVPY, with disulfide bonds between Cys3-Cys21, Cys8-Cys26, and Cys12-Cys28 (chemical formula C146H234N42O42S6). Tamapin has been isolated by detecting the apamin-competing fraction of the scorpion's venom via Sephadex G-50 size-exclusion chromatography, followed by high-performance liquid chromatography (HPLC). An isoform of tamapin, tamapin-2, has been found, in which the tyrosine is replaced by a histidine. Tamapin-2 can also compete very effectively with apamin for binding to synaptosomes.

Target and mode of action

The target of tamapin is the small-conductance calcium-activated potassium (SK) channel. This scorpion toxin blocks SK2 channels, with selectivity for SK2 over SK1 channels, in a largely reversible manner. Despite completely different sequences, apamin (a bee venom toxin) and tamapin share, at least in part, the same binding sites on rat brain synaptosomes. Cloned SK2 channels are the most sensitive to apamin in binding assays and physiological recordings; however, tamapin displaces apamin in binding assays and is therefore a more potent blocker than apamin. SK1 and SK3 channels are affected only at high concentrations of tamapin, so the toxin inhibits SK2 with the highest affinity, SK3 with intermediate affinity, and SK1 with the lowest. A less closely related member of the SK family, the intermediate-conductance calcium-activated potassium channel SK4 (also known as IK1), is not sensitive to apamin and is also not affected by tamapin. The same holds for voltage-dependent potassium channels, and the block of SK2-mediated currents is itself not voltage-dependent. This specific channel block evokes a reduction in the current of small-conductance calcium-activated potassium channels.

Toxicity

Studies have shown that the effect of tamapin is largely reversible and depends on time and concentration. The Indian red scorpion (Hottentotta tamulus) causes a large number of deaths annually, especially among young children. Its venom contains highly specific potassium channel blockers such as iberiotoxin, a highly specific blocker of the high-conductance calcium-activated potassium channel, and tamulustoxin.
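The structural data quoted above can be checked mechanically. The Python sketch below locates the cysteines in the published sequence and estimates the peptide's average mass; the residue-mass table holds standard reference values rather than figures from the article, each disulfide bond removes two hydrogens, and the result lands within about a dalton of the quoted 3458 Da.

```python
# Standard average residue masses (Da) for the amino acids in tamapin.
RESIDUE_MASS = {
    "A": 71.08, "C": 103.14, "E": 129.12, "F": 147.18, "G": 57.05,
    "I": 113.16, "K": 128.17, "L": 113.16, "N": 114.10, "P": 97.12,
    "R": 156.19, "S": 87.08, "V": 99.13, "Y": 163.18,
}
WATER = 18.02   # added once per peptide chain
H = 1.008       # each disulfide bond removes two hydrogens

SEQUENCE = "AFCNLRRCELSCRSLGLLGKCIGEECKCVPY"  # 31 residues
DISULFIDES = [(3, 21), (8, 26), (12, 28)]     # 1-indexed Cys pairs

# Confirm that every stated disulfide position is actually a cysteine.
cys_positions = [i + 1 for i, aa in enumerate(SEQUENCE) if aa == "C"]
assert cys_positions == [3, 8, 12, 21, 26, 28]
assert all(SEQUENCE[a - 1] == SEQUENCE[b - 1] == "C" for a, b in DISULFIDES)

# Average mass: residue masses plus one water, minus 2 H per disulfide.
mass = sum(RESIDUE_MASS[aa] for aa in SEQUENCE) + WATER - 2 * H * len(DISULFIDES)
print(len(SEQUENCE), round(mass, 1))  # 31 residues, ~3459 Da (quoted: 3458)
```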
Positive behavior interventions and supports (PBIS) is a set of ideas and tools used in schools to improve students' behavior. PBIS uses evidence- and data-based programs, practices, and strategies to frame behavioral improvement relating to student growth in academic performance, safety, behavior, and establishing and maintaining a positive school culture. PBIS tries to address the behavioral needs of at-risk students and the multi-leveled needs of all students, in an effort to create an environment that promotes effective teaching and learning in schools. Educational researchers such as Robert H. Horner believe that PBIS enhances the school staff's time for delivering effective instruction and lessons to all students.

In contrast to PBIS, many schools used exclusionary discipline practices, including detentions, suspensions, or expulsions, to separate students from the classroom and from peers. PBIS emphasizes preventing problem behaviors before they happen, increasing the opportunity for students to learn by keeping them in the classroom. PBIS is a team-based framework for schools that borrows elements from response to intervention, an approach that uses diagnostic data to develop personalized learning and behavior intervention plans (BIPs) for all students.

Background

PBIS is an acronym for positive behavior interventions and supports. PBIS emphasizes the integrated use of classroom management and school-wide discipline strategies coupled with effective academic instruction to create a positive and safe school climate for all students. PBIS is based in a behaviorist psychology approach to improving student behavior, which means that teachers and students identify misbehavior, model appropriate behaviors, and provide clear consequences for behavior in the classroom context. In a PBIS model, schools must define, teach, and reinforce appropriate behaviors to ensure success. PBIS follows research showing that punishing students inconsistently, without a positive alternative, is ineffective and only offers short-term solutions; modeling and rewarding positive behaviors are more effective.

The goal of PBIS is to establish a positive school climate. To do this, a continuum of behavior support has been established which can be applied at the school level (primary level), for small groups of students (secondary level), and at the individual level (tertiary level). When PBIS is applied to the entire school, it is called schoolwide PBIS, or SWPBIS. PBIS can only work successfully if it is implemented with fidelity.

Core features

PBIS is a framework approach that helps schools to identify the key tasks in developing preventative positive behavior tailored to their own school. The approach is defined by the following core design components. High-fidelity implementation of school-wide PBIS has been linked with improvements in student and staff behavior, but less is known about which aspects of the model may be present in schools prior to training, and whether some features of PBIS are implemented faster than others.

Outcomes

PBIS is designed to enhance academic and social behavior outcomes for all students by (a) emphasizing the use of data for informing decisions about the selection, implementation, and progress monitoring of evidence-based behavioral practices; and (b) organizing resources and systems to improve durable implementation fidelity. The intended outcome is the goal for improved student behavior towards which the school community aims.
The goals must be measurable and must clearly be the result of implementing the PBIS model. Outcomes of a successful PBIS framework within a school can be measured in both behavior data and the academic achievement of the students. The academic and behavior goals are often defined and supported by the students, families, and teachers in tandem for the program to succeed.

Data

PBIS is grounded in data being used for all levels of decision making. The team often considers trends in the numbers, the locations where problems occurred, the classes they occurred in, and the individual students involved. Through the data, usually collected by staff or preferably a licensed Board Certified Behavior Analyst (BCBA), schools take stock of their current situation, pinpoint areas for change or improvement, and evaluate the effects of current and future interventions.

Practices

This component refers to the evidence-based curriculum, instruction, interventions, and strategies implemented within the school, which are meant to create a common and shared understanding of expectations. These practices are introduced through the team after a review of the data.

Systems

PBIS requires a review team of educators with buy-in from across the school and strong administrative support to design and enforce the PBIS system. The team is generally a representative group of all of the staff (classroom teachers, special education teachers, specialists, etc.). In a secondary school, students could also be included as part of this team. The team creates the systems used by the remainder of the staff and the students they serve.

Continuum of support

Each PBIS model uses a continuum of support. Tiers of support that range in intensity are offered to students as their needs fluctuate. Most students are supported by the tier 1 (universal) level of support, which describes the general school context. As students struggle with the behavioral expectations of the model, they may advance to higher tiers that provide more support.

Tier 1: Primary level (support for all)

The first level in PBIS is the universal level. In terms of PBIS, this refers to the school-wide expectations that are defined and taught to all school staff in each of the settings within the school. These expectations are developed by the team and taught to students by their regular classroom teachers, administrators, counselors, psychologists, behavior specialists or others who have contact with all students. Many schools that have adopted this framework use "Be Safe, Be Respectful, Be Responsible," but like all aspects, this is determined by the school. The fidelity of the expectations should be determined by the continuous collection of data. Along with the expectations, there should be a system of acknowledgement and reinforcement of expected behaviors.

The core principles of PBIS at the primary level are that schools can:
effectively teach appropriate behavior to all students
intervene early
use a multi-tier model
use research-based interventions
monitor student progress often
use data to make decisions
use assessments to screen, diagnose, and monitor progress

These principles make the PBIS program proactive rather than reactive. Furthermore, PBIS helps schools develop a common language, common practices, and consistent application of positive and negative reinforcement at a school-wide level.
Tier 2: Secondary level (support for some)

Although the primary level is taught and reinforced with all students, some students need additional intervention for at-risk behaviors, as determined by the data. Secondary prevention provides intensive or targeted interventions to support students not responding to the primary efforts. These behavioral interventions are taught by specialized staff such as special educators, school psychologists, behavior interventionists, and counselors. Some examples of tier 2 behavior interventions are targeted social skill groups and behavior plans with continuous progress monitoring.

Tier 3: Tertiary level (support for few)

PBIS also acknowledges that some students have high-risk behaviors and need specialized, individualized skill-building practice due to exhibited habits of problem behavior. Tier 3 behavioral interventions involve a functional behavioral assessment (FBA) and an individualized plan of support which includes:
new skills to replace problem behaviors
reorganization of the current environment or "triggers"
procedures for monitoring, evaluating, and reassessing the plan

To succeed in a tier 3 intervention, both tier 1 and tier 2 interventions must also be in place, and support must be conducted in a comprehensive and collaborative manner. The tier 3 process should include the student with the behavior issue and the people who know them best on a personal level, working together as a behavioral support team (BST). The BST should consist of teachers, administrators, school social workers, psychologists, counselors and/or a licensed behavior specialist. Support should be tailored to the student’s specific needs, and student interests should be taken into consideration. It should also include multiple interventions. The goal at this level is to diminish problematic behavior, increase adaptive skills, and attempt to increase the student's quality of life.

Alternatives

Culturally responsive PBIS

Culturally responsive PBIS (CR-PBIS) is also a framework aimed at restructuring school culture, much like PBIS, but CR-PBIS uses strategies that acknowledge the differences in behavioral norms for culturally and linguistically diverse students and incorporates trauma-informed practices. CR-PBIS aims to merge strategies deployed in culturally relevant teaching with PBIS, often to explicitly reduce disproportionate exclusionary discipline rates for African American children. The behavior of students of color, absent knowledge of the community they come from, may be seen as disruptive, resistant, and limiting to student success. CR-PBIS tries to address these concerns, which are left untouched by the traditional PBIS framework, by including families and the outside community as a large part of creating an environment for teaching and learning.

Responsive classroom

Like PBIS, responsive classroom (RC) centers on research-based approaches and strives to ensure high-quality education for all students. Both agree that positive behaviors and the skills needed to reach them must be explicitly taught to students using social skills taught in class. Both also hold that non-punitive strategies are more effective than what has been used in schools in the past. Where PBIS is a framework that coaches school staff to create their own resources and changes, responsive classroom outlines specific prescribed practices that schools should use to reach this goal.
Responsive classroom is a for-purchase program in which staff can be trained at multiple levels, with published tools and strategies for meeting these goals with students.

See also: Positive behavior support (PBS); Licensed behavior analyst (BCBA); Behavior modification
The 7 Habits of Highly Effective People is a business and self-help book written by Stephen R. Covey. First published in 1989, the book goes over Covey's ideas on how to spur and nurture personal change. He also explores the concept of effectiveness in achieving results, as well as the need to focus on character ethic rather than personality ethic in selecting value systems. As named, the book is laid out through seven habits Covey has identified as conducive to personal growth.

The seven habits

Covey intends the first three habits as a means of achieving independence, the next three as a means of achieving interdependence, and the seventh as a means of maintaining the previous six.

Be proactive

Proactivity is about taking responsibility for one's reaction to one's own experiences, taking the initiative to respond positively and improve the situation. Covey postulates, in a discussion of the work of psychiatrist Viktor Frankl, that between stimulus and response lies a person's ability to choose how to react, and that nothing can hurt a person without the person's consent. Covey discusses recognizing one's circle of influence and circle of concern, and focusing one's responses on the center of one's influence.

Begin with the end in mind

Covey discusses envisioning what one wants in the future (a personal mission statement) so one can work and plan towards it, and understanding how people make important life decisions. To be effective one needs to act based on principles and constantly review one's mission statement, says Covey. He asks: Are you – right now – who you want to be? What do you have to say about yourself? How do you want to be remembered? If habit 1 advises changing one's life to act and be proactive, habit 2 advises that "you are the programmer". Grow and stay humble, Covey says. Covey says that all things are created twice: before one acts, one should act in one's mind first. Before creating something, measure twice. Do not just act; think first: Is this how I want it to go, and are these the correct consequences?

Put first things first

Covey talks about what is important versus what is urgent. Priority should be given in the following order:
Quadrant I. Urgent and important (Do) – important deadlines and crises
Quadrant II. Not urgent but important (Plan) – long-term development
Quadrant III. Urgent but not important (Delegate) – distractions with deadlines
Quadrant IV. Not urgent and not important (Eliminate) – frivolous distractions

The order is important, says Covey: after completing items in quadrant I, people should spend the majority of their time on II, but many people spend too much time in III and IV. The calls to delegate and eliminate are reminders of their relative priority. If habit 1 advises that "you are the programmer" and habit 2 advises "write the program, become a leader", habit 3 advises "follow the program". Keep personal integrity by minimizing the difference between what you say and what you do, says Covey.

Think win–win

Seek mutually beneficial win–win solutions or agreements in your relationships, says Covey. Valuing and respecting people by seeking a "win" for all is ultimately a better long-term resolution than if only one person in the situation gets their way. Thinking win–win isn't about being nice, nor is it a quick-fix technique; it is a character-based code for human interaction and collaboration, says Covey.
Seek first to understand, then to be understood

Use empathetic listening to genuinely understand a person, which compels them to reciprocate the listening and take an open mind to being influenced. This creates an atmosphere of caring and positive problem-solving. Habit 5 is expressed in the ancient Greek philosophy of the three modes of persuasion:
Ethos is one's personal credibility, the trust that one inspires, one's "emotional bank account".
Pathos is the empathetic side, the alignment with the emotional trust of another person's communication.
Logos is the logic, the reasoning part of the presentation.
The order of the concepts indicates their relative importance, says Covey.

Synergize

Combine the strengths of people through positive teamwork, so as to achieve goals that no one could have reached alone, Covey exhorts.

Sharpen the saw

Covey says that one should balance and renew one's resources, energy, and health to create a sustainable, long-term, effective lifestyle. He primarily emphasizes exercise for physical renewal and prayer and good reading for mental renewal; he also mentions service to society for spiritual renewal.

Covey explains the "upward spiral" model: through conscience, along with meaningful and consistent progress, an upward spiral results in growth, change, and constant improvement. In essence, one is always attempting to integrate and master the principles outlined in The 7 Habits at progressively higher levels at each iteration. Subsequent development of any habit will render a different experience, and one will learn the principles with a deeper understanding. The upward spiral model consists of three parts: learn, commit, do. According to Covey, one must keep educating the conscience consistently and at increasing levels in order to grow and develop along the upward spiral. The idea of renewal by education will propel one along the path of personal freedom, security, wisdom, and power, says Covey.

Reception

At the end of 1994, U.S. President Bill Clinton invited Covey, along with other authors, to Camp David to counsel him on how to integrate the book's ideas into his presidency. In August 2011, Time listed 7 Habits as one of "The 25 Most Influential Business Management Books". Upon Covey's death in 2012, the book had sold more than 20 million copies.

Formats and adaptations

In addition to the book and audiobook versions, a VHS version also exists. In 1998, Covey's son Sean Covey wrote a version of the book for teens, The 7 Habits of Highly Effective Teens, which simplifies the seven habits for younger readers to make them easier to understand. This was later followed by The 6 Most Important Decisions You Will Ever Make: A Guide for Teens (2006), which highlights key times in the life of a teen and gives advice on how to deal with them, and The 7 Habits of Happy Kids (2008), a children's book illustrated by Stacy Curtis that further simplifies the seven habits for children and teaches them through stories with anthropomorphic animal characters.
S/2004 S 12 is a natural satellite of Saturn, a member of the Norse group of retrograde irregular moons. Its discovery was announced by Scott S. Sheppard, David C. Jewitt, Jan Kleyna, and Brian G. Marsden on 4 May 2005 from observations taken between 12 December 2004 and 9 March 2005. S/2004 S 12 is about 5 kilometres in diameter, and orbits Saturn at an average distance of 19,855,000 kilometres in about 1,044 days, at an inclination of 163.9° to the ecliptic, in a retrograde direction and with an eccentricity of 0.371. This moon was considered lost until its recovery was announced on 12 October 2022. (In 2021, it had also been found in Canada-France-Hawaii Telescope observations from 2019.)
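The quoted orbital distance and period are mutually consistent, as a quick Kepler's-third-law check shows. In the sketch below, Saturn's standard gravitational parameter is a textbook value, not a figure from the article:

```python
import math

GM_SATURN = 3.793e7          # km^3 / s^2, Saturn's gravitational parameter
a = 19_855_000.0             # km, the average orbital distance quoted above

# Kepler's third law: T = 2*pi * sqrt(a^3 / GM)
period_s = 2 * math.pi * math.sqrt(a**3 / GM_SATURN)
print(period_s / 86_400)     # ~1045 days, matching the quoted ~1,044 days
```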
Vandal was a river tanker designed by Karl Hagelin and Johny Johnson for Branobel. The Russian Vandal and the French Petite-Pierre, both launched in 1903, were the world's first diesel-powered ships (sources disagree over which of the two was the first). Vandal was the first ship equipped with a fully functional diesel-electric transmission.

In the 1890s the oil industry searched for an economical oil-burning engine, and the solution was found by German engineer Rudolph Diesel. Diesel marketed his technology to oil barons around the world; in February 1898 he granted exclusive licenses to build his engines in Sweden and Russia to Emanuel Nobel of the Nobel family. The Russian licence cost Nobel 800,000 marks in cash and stock of the newly founded Russian Diesel Company. The Saint Petersburg engine plant was a quick success; it started with diesel-powered industrial pumps for oil pipelines and soon grabbed the mass market for flour mill engines. It produced more diesel engines than any other concern in the world.

In 1902 Karl Hagelin, "a veteran of the Volga and sometime visionary", suggested mating diesel engines to river barges. He envisioned direct shipment of oil over a 1,800-mile route from the lower Volga to Saint Petersburg and Finland. The canals of the Volga–Baltic Waterway dictated the use of relatively small barges, making steam engines uneconomical, and the diesel engine seemed a natural choice. Hagelin believed that reversing the engine and regulating its speed could be done with an electrical transmission, and contracted the Swedish firm ASEA to test the electrical drive system. Hagelin then recruited naval architect Johny Johnson of Gothenburg to design the ship. Johnson placed the diesel engine and electric generator in the middle, and the electric motors in the stern, driving the propellers directly. The holds were separated by longitudinal (rather than transverse) bulkheads running the length of the ship, a feature that became common on ocean-going tankers.

The ship's power plant of three 120 hp diesel engines was built in Sweden by Swedish Diesel (Aktiebolaget Diesels Motorer) and ASEA. Each engine had three cylinders with a bore of 290 mm and a stroke of 430 mm. They ran at a constant 240 rpm, and the electrical transmission, controlled by a tram-like lever, varied propeller speed from 30 to 300 rpm. The hull was built at the Sormovo shipyard in Nizhny Novgorod and towed to Saint Petersburg for final assembly. Its size (244.5 × 31¾ × 8 feet) was tailored to the canals of the North rather than the Volga. Named Vandal, the ship commenced commercial operation in the spring of 1903. Vandal was accidentally damaged on its maiden voyage, repaired, and served on the Volga route for ten years.

The larger Sarmat, with four 180 hp engines, was launched the next summer. Unlike Vandal's, Sarmat's engines could be coupled to the propellers directly, bypassing the electrical drive and saving up to 15% of the engine power that would otherwise be lost in the electric transmission. Sarmat operated until 1923; the hulk was moored in Nizhny Novgorod until the 1970s.

The new ships attracted public and professional interest and brought in new orders. The plant payroll expanded to more than a thousand men, but growth brought management problems. Rolf Nobel, Ludwig Nobel Jr. and Hagelin split with Emanuel over the future of diesel-powered shipping. Hagelin's proposal to convert the existing steam-powered fleet to diesel engines was rejected by Emanuel.
Hagelin quit and accepted the post of Swedish consul general in Saint Petersburg. In 1907 Hagelin and Johnson designed a 4,500-ton tanker, and again Emanuel Nobel rejected the proposal. The inventors sold their blueprints to the Merkulyev Brothers of Kolomna, who built the world's first true seagoing diesel-powered tanker, Mysl, in 1908. This, at last, compelled Emanuel to grant Hagelin sweeping rights to modernize the company fleet, which reached 315 vessels in 1915.
In aviation, V-speeds are standard terms used to define airspeeds important or useful to the operation of all aircraft. These speeds are derived from data obtained by aircraft designers and manufacturers during flight testing for aircraft type-certification. Using them is considered a best practice to maximize aviation safety, aircraft performance, or both.

The actual speeds represented by these designators are specific to a particular model of aircraft. They are expressed by the aircraft's indicated airspeed (and not by, for example, the ground speed), so that pilots may use them directly, without having to apply correction factors, as aircraft instruments also show indicated airspeed.

In general aviation aircraft, the most commonly used and most safety-critical airspeeds are displayed as color-coded arcs and lines located on the face of an aircraft's airspeed indicator. The lower ends of the white arc and the green arc are the stalling speed with wing flaps in landing configuration, and the stalling speed with wing flaps retracted, respectively; these are the stalling speeds for the aircraft at its maximum weight. The yellow band is the range in which the aircraft may be operated in smooth air, and then only with caution to avoid abrupt control movement. The red line is VNE, the never-exceed speed. Proper display of V-speeds is an airworthiness requirement for type-certificated aircraft in most countries.

Regulations

The most common V-speeds are often defined by a particular government's aviation regulations. In the United States, these are defined in title 14 of the United States Code of Federal Regulations, known as the Federal Aviation Regulations (FARs). In Canada, the regulatory body, Transport Canada, defines 26 commonly used V-speeds in its Aeronautical Information Manual. V-speed definitions in FAR 23, 25 and equivalent are for the design and certification of airplanes, not for their operational use; the descriptions below are for use by pilots.

Regulatory V-speeds

These V-speeds are defined by regulations. They are typically defined with constraints such as weight, configuration, or phase of flight. Some of these constraints have been omitted here to simplify the description.

Other V-speeds

Some of these V-speeds are specific to particular types of aircraft and are not defined by regulations.

Mach numbers

Whenever a limiting speed is expressed by a Mach number, it is expressed relative to the local speed of sound, e.g. VMO (maximum operating speed) and MMO (maximum operating Mach number).

V1 definitions

V1 is the critical engine failure recognition speed or takeoff decision speed. It is the speed above which the takeoff will continue even if an engine fails or another problem occurs, such as a blown tire. The speed will vary among aircraft types and varies according to factors such as aircraft weight, runway length, wing flap setting, engine thrust used, and runway surface contamination; thus, it must be determined by the pilot before takeoff. Aborting a takeoff after V1 is strongly discouraged because the aircraft may not be able to stop before the end of the runway, thus suffering a runway overrun. V1 is defined differently in different jurisdictions, and definitions change over time as aircraft regulations are amended.
The US Federal Aviation Administration and the European Union Aviation Safety Agency define it as: "the maximum speed in the takeoff at which the pilot must take the first action (e.g., apply brakes, reduce thrust, deploy speed brakes) to stop the airplane within the accelerate-stop distance. V1 also means the minimum speed in the takeoff, following a failure of the critical engine at VEF, at which the pilot can continue the takeoff and achieve the required height above the takeoff surface within the takeoff distance." V1 thus includes reaction time. In addition to this reaction time, a safety margin equivalent to 2 seconds at V1 is added to the accelerate-stop distance.

Transport Canada defines it as: "Critical engine failure recognition speed" and adds: "This definition is not restrictive. An operator may adopt any other definition outlined in the aircraft flight manual (AFM) of TC type-approved aircraft as long as such definition does not compromise operational safety of the aircraft."

See also: ICAO recommendations on use of the International System of Units; Balanced field takeoff
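The trade-off embodied in V1 can be illustrated with a deliberately simplified constant-acceleration model of the accelerate-stop case. Every number below is an illustrative assumption rather than data for any real aircraft; certified accelerate-stop distances come from flight-test performance data, not from kinematics like this:

```python
# Toy accelerate-stop model: accelerate to V1, coast for a reaction time
# at constant speed, then brake at a constant deceleration.
v1 = 75.0       # m/s, assumed decision speed (~146 knots)
accel = 2.0     # m/s^2, assumed mean acceleration during the takeoff roll
decel = 3.5     # m/s^2, assumed mean deceleration while stopping
t_react = 2.0   # s, margin comparable to the "2 seconds at V1" above

accelerate = v1**2 / (2 * accel)   # distance to reach V1
react = v1 * t_react               # distance covered before braking starts
stop = v1**2 / (2 * decel)         # braking distance
print(accelerate + react + stop)   # total accelerate-stop distance, ~2360 m
```

Even in this toy model, raising V1 lengthens all three terms at once, which is why the decision speed must be computed afresh for each takeoff.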
David Ronald Wood (born in Christchurch, New Zealand, in 1971) is a Professor in the School of Mathematics at Monash University in Melbourne, Australia. His research area is discrete mathematics and theoretical computer science, especially structural graph theory, extremal graph theory, geometric graph theory, graph colouring, graph drawing, and combinatorial geometry.

Wood received a Ph.D. in computer science from Monash University in 2000. His thesis, "Three-Dimensional Orthogonal Graph Drawing", supervised by Graham Farr, was awarded a Mollie Holman Doctoral Medal. He held postdoctoral research positions at the University of Sydney, at Carleton University in Ottawa, at Charles University in Prague, at McGill University in Montreal, at Universitat Politècnica de Catalunya in Barcelona, and at the University of Melbourne. Since 2012 he has been at Monash University, where he was promoted to Professor in 2016. He has been awarded distinguished research fellowships including a Marie Curie Fellowship from the European Commission (2006–2008), a QEII Fellowship from the Australian Research Council (2008–2012), and a Future Fellowship from the Australian Research Council (2014–2017). David Wood was an invited speaker at the 9th European Congress of Mathematics.

Wood is a Fellow of the Australian Mathematical Society and a life member of the Combinatorial Mathematics Society of Australasia (CMSA). He was President of the CMSA in 2015–2016 and Vice-President in 2011–2014. He is a Deputy Director of The Mathematical Research Institute MATRIX. Wood is an Editor-in-Chief of the Electronic Journal of Combinatorics, Editor-in-Chief of the MATRIX Book Series, and an Editor of the Journal of Computational Geometry, Journal of Graph Theory, and SIAM Journal on Discrete Mathematics.

His main research contributions are in graph product structure theory, extremal graph minor theory, graph treewidth, graphs on surfaces, graph colouring, geometric graph theory, poset dimension, and graph drawing.
Kugelmugel, officially the Republic of Kugelmugel, is a spherical art object located in Vienna, Austria. It came about when the artist Edwin Lipburger constructed the spherical building, roughly eight meters in diameter, without permission from the authorities in Austria. After the dispute between the artist and the authorities, the artist declared it a micronation, and it was eventually granted asylum by then-mayor Helmut Zilk of Vienna, where it is housed in the Prater park. The 'Republic' is currently administered by Linda Treiber as president.

History

In 1971, the building was constructed by Edwin Lipburger and his son Nikolaus in Katzelsdorf near Wiener Neustadt in Lower Austria. A prolonged dispute between the authorities and Lipburger ensued over the following years over the unpermitted construction. Lipburger declared Kugelmugel its own micronation, even issuing passports. In August 1975, the township of Neudörfl offered to house the building, and options for transporting it, potentially via helicopter, were discussed, but ultimately nothing came of them. In 1979, Lipburger was sentenced to ten weeks in prison for "unlawful assumption of a public authority".

In 1982 the building was taken apart and transported to the Prater park in Vienna, near the Hauptallee, and surrounded by eight-foot-tall barbed-wire fences. The house occupies the only address within the proclaimed Republic: "2., Antifaschismusplatz" ("2nd district of Vienna; Anti-Fascism Square"). That year the building was officially adopted by the city of Vienna when then-mayor Helmut Zilk granted the object "asylum". However, the city assumed at the time that the artistic object would only be temporary, so the legal dispute with the city of Vienna continued after power and water were cut to Kugelmugel. Various lawsuits were subsequently filed, eventually reaching the Supreme Administrative Court of Austria in 2007; all failed, with no formal agreement ever reached between Lipburger and the city. The city has stopped any attempts at reclaiming it, has accepted it as part of Vienna's culture, and allows the building to exist as a "building built without a permit".

Lipburger died in January 2015, but the Republic retains an official population of more than 650 non-resident citizens. Following Lipburger's death, his son Nikolaus Lipburger took over the management of the nation. In 2022, Nikolaus Lipburger hired Linda Treiber as the new president of the nation to manage Kugelmugel. Closed for some time during the COVID-19 pandemic, the "republic" reopened its borders to visitors on 1 June 2024. Visitors are issued a "visa permit" with stamp and signature for a fee to visit the art nation.

In German, the word "Kugel" means "ball" or "sphere"; "Mugel" is an Austrian German expression for a bump or a hill on a field, from which mogul skiing is also derived.

In popular culture

Kugelmugel is a character in Hetalia: Axis Powers, a Japanese webcomic, manga and anime about personified countries. In the series, Kugelmugel is a short boy with braids who has a passion for art and claims everything is art, referring to how Kugelmugel is an art object itself.
The Utroba Cave, also known as Womb Cave, is a prehistoric cave sanctuary in Kardzhali Province, Bulgaria. Located in the Eastern Rhodope Mountains near the village of Ilinitsa, the cave resembles a human vulva and dates to the Thracian period. Historians believe that it was once used as a fertility shrine.

History

The cave is located 20 kilometers from the city of Kardzhali near the village of Ilinitsa, and it dates to 480 BC. It is also referred to as "The Cave Womb" or "Womb Cave" because the entrance is the shape of a vulva, and the inside of the cave resembles a uterus. Locally it is also called "The Blaring Rock". Researchers believe that the entrance to the cave was originally a slit, which was then widened by humans. An altar has been carved inside the cave.

Archaeologist Nikolay Ovcharov believes that the cave and altar were used by the Thracians, and that the cave served as a fertility shrine; several Thracian sanctuaries have been found in Bulgaria. The "cult" places of the Thracians are usually located at the tops of mountains and have running water, and there is constantly flowing water at the Utroba Cave as well, running from the cave to the foothills.

There is an opening in the ceiling which admits light into the cave. Every day at noon the light forms a phallus shape, but only on one day of the year, in February or March, does it penetrate deep into the cave all the way to the altar, entering a hole at the altar and flickering there for 1-2 minutes. The penetrating and flickering light is thought to symbolize fertilization. Even today, childless couples go to the cave hoping it will help them conceive a child.

See also: Thracian religion
Carbon-fiber tape is a flat material made of carbon fiber. It weighs one-seventh as much as steel for a given strength, and a carbon-fiber core lasts longer than conventional steel cable. The material is resistant to wear and abrasion and, unlike steel, does not densify and stretch.

Applications

In June 2013, the elevator company KONE announced Ultrarope as a replacement for steel cables in elevators. It seals the carbon fibers in a high-friction polymer. Unlike steel cable, Ultrarope was designed for buildings that require up to 1,000 meters of lift; steel-cable elevators top out at 500 meters. The company estimated that in a 500-meter-high building, an elevator would use 15 per cent less electrical power than a steel-cabled version. As of June 2013, the product had passed all European Union and US certification tests.

See also: Carbon fibers; Carbon nanotube
Physical organic chemistry, a term coined by Louis Hammett in 1940, refers to a discipline of organic chemistry that focuses on the relationship between chemical structures and reactivity, in particular, applying experimental tools of physical chemistry to the study of organic molecules. Specific focal points of study include the rates of organic reactions, the relative chemical stabilities of the starting materials, reactive intermediates, transition states, and products of chemical reactions, and non-covalent aspects of solvation and molecular interactions that influence chemical reactivity. Such studies provide theoretical and practical frameworks to understand how changes in structure in solution or solid-state contexts impact reaction mechanism and rate for each organic reaction of interest.

Application

Physical organic chemists use theoretical and experimental approaches to understand these foundational problems in organic chemistry, including classical and statistical thermodynamic calculations, quantum mechanical theory and computational chemistry, as well as experimental spectroscopy (e.g., NMR), spectrometry (e.g., MS), and crystallography approaches. The field therefore has applications in a wide variety of more specialized fields, including electro- and photochemistry, polymer and supramolecular chemistry, and bioorganic chemistry, enzymology, and chemical biology, as well as in commercial enterprises involving process chemistry, chemical engineering, materials science and nanotechnology, and pharmacology in drug discovery by design.

Scope

Physical organic chemistry is the study of the relationship between structure and reactivity of organic molecules. More specifically, it applies the experimental tools of physical chemistry to the study of the structure of organic molecules and provides a theoretical framework that interprets how structure influences both mechanisms and rates of organic reactions. It can be thought of as a subfield that bridges organic chemistry with physical chemistry. Physical organic chemists use both experimental and theoretical disciplines such as spectroscopy, spectrometry, crystallography, computational chemistry, and quantum theory to study both the rates of organic reactions and the relative chemical stability of the starting materials, transition states, and products. Chemists in this field work to understand the physical underpinnings of modern organic chemistry, and therefore physical organic chemistry has applications in specialized areas including polymer chemistry, supramolecular chemistry, electrochemistry, and photochemistry.

History

The term physical organic chemistry was itself coined by Louis Hammett in 1940 when he used the phrase as the title of his textbook.

Chemical structure and thermodynamics

Thermochemistry

Organic chemists use the tools of thermodynamics to study the bonding, stability, and energetics of chemical systems. This includes experiments to measure or determine the enthalpy (ΔH), entropy (ΔS), and Gibbs free energy (ΔG) of a reaction, transformation, or isomerization. Chemists may use various chemical and mathematical analyses, such as a Van 't Hoff plot, to calculate these values. Empirical constants such as bond dissociation energy, standard heat of formation (ΔfH°), and heat of combustion (ΔcH°) are used to predict the stability of molecules and the change in enthalpy (ΔH) over the course of a reaction.
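As an illustration of the Van 't Hoff analysis mentioned above, equilibrium constants measured at several temperatures can be fitted to ln K = −ΔH°/(RT) + ΔS°/R, so the slope and intercept of ln K plotted against 1/T yield ΔH° and ΔS°. The sketch below runs such a fit on made-up example data; the K values are illustrative, not measurements:

```python
import math

R = 8.314  # J / (mol K), gas constant

# Hypothetical equilibrium constants measured at several temperatures (K).
data = {298.0: 1.8e3, 308.0: 9.5e2, 318.0: 5.2e2, 328.0: 3.0e2}

x = [1.0 / T for T in data]               # 1/T values
y = [math.log(K) for K in data.values()]  # ln K values

# Ordinary least-squares slope and intercept for the Van 't Hoff line.
n = len(x)
xm, ym = sum(x) / n, sum(y) / n
slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
intercept = ym - slope * xm

dH = -slope * R        # J/mol:     slope = -dH/R
dS = intercept * R     # J/(mol K): intercept = dS/R
print(dH / 1000, dS)   # ~ -48 kJ/mol here: exothermic, since K falls with T
```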
For complex molecules, a ΔfH° value may not be available but can be estimated using molecular fragments with known heats of formation. This type of analysis is often referred to as Benson group increment theory, after chemist Sidney Benson, who spent a career developing the concept. The thermochemistry of reactive intermediates—carbocations, carbanions, and radicals—is also of interest to physical organic chemists. Group increment data are available for radical systems. Carbocation and carbanion stabilities can be assessed using hydride ion affinities and pKa values, respectively. Conformational analysis One of the primary methods for evaluating chemical stability and energetics is conformational analysis. Physical organic chemists use conformational analysis to evaluate the various types of strain present in a molecule to predict reaction products. Strain can be found in both acyclic and cyclic molecules, manifesting itself in diverse systems as torsional strain, allylic strain, ring strain, and syn-pentane strain. A-values provide a quantitative basis for predicting the conformation of a substituted cyclohexane, an important class of cyclic organic compounds whose reactivity is strongly guided by conformational effects. The A-value is the difference in the Gibbs' free energy between the axial and equatorial forms of substituted cyclohexane, and by adding together the A-values of various substituents it is possible to quantitatively predict the preferred conformation of a cyclohexane derivative. In addition to molecular stability, conformational analysis is used to predict reaction products. One commonly cited example of the use of conformational analysis is a bi-molecular elimination reaction (E2). This reaction proceeds most readily when the base attacks the hydrogen that is antiperiplanar to the leaving group. A molecular orbital analysis of this phenomenon suggests that this conformation provides the best overlap between the electrons in the R-H σ bonding orbital that is under attack and the empty σ* antibonding orbital of the R-X bond that is being broken. By exploiting this effect, conformational analysis can be used to design molecules that possess enhanced reactivity. The physical processes which give rise to bond rotation barriers are complex, and these barriers have been extensively studied through experimental and theoretical methods. A number of recent articles have investigated the predominance of the steric, electrostatic, and hyperconjugative contributions to rotational barriers in ethane, butane, and more substituted molecules. Non-covalent interactions Chemists use the study of intramolecular and intermolecular non-covalent bonding/interactions in molecules to evaluate reactivity. Such interactions include, but are not limited to, hydrogen bonding, electrostatic interactions between charged molecules, dipole-dipole interactions, polar-π and cation-π interactions, π-stacking, donor-acceptor chemistry, and halogen bonding. In addition, the hydrophobic effect—the association of organic compounds in water—is an electrostatic, non-covalent interaction of interest to chemists. The hydrophobic effect arises from many complex interactions, but it is believed to be the most important component of biomolecular recognition in water. For example, researchers elucidated the structural basis for folic acid recognition by folate receptor proteins.
The strong interaction between folic acid and the folate receptor was attributed to both hydrogen bonds and hydrophobic interactions. The study of non-covalent interactions is also used to study binding and cooperativity in supramolecular assemblies and macrocyclic compounds such as crown ethers and cryptands, which can act as hosts to guest molecules. Acid–base chemistry The properties of acids and bases are relevant to physical organic chemistry. Organic chemists are primarily concerned with Brønsted–Lowry acids/bases as proton donors/acceptors and Lewis acids/bases as electron acceptors/donors in organic reactions. Chemists use a series of factors developed from physical chemistry—electronegativity and induction, bond strengths, resonance, hybridization, aromaticity, and solvation—to predict relative acidities and basicities. The hard/soft acid/base principle is utilized to predict molecular interactions and reaction direction. In general, interactions between molecules of the same type are preferred. That is, hard acids will associate with hard bases, and soft acids with soft bases. The concept of hard acids and bases is often exploited in the synthesis of inorganic coordination complexes. Kinetics Physical organic chemists use the mathematical foundation of chemical kinetics to study the rates of reactions and reaction mechanisms. Unlike thermodynamics, which is concerned with the relative stabilities of the products and reactants (ΔG°) and their equilibrium concentrations, the study of kinetics focuses on the free energy of activation (ΔG‡)—the difference in free energy between the reactant structure and the transition state structure—of a reaction, and therefore allows a chemist to study the process of equilibration. Mathematically derived formalisms such as the Hammond postulate, the Curtin-Hammett principle, and the theory of microscopic reversibility are often applied to organic chemistry. Chemists have also used the principle of thermodynamic versus kinetic control to influence reaction products. Rate laws The study of chemical kinetics is used to determine the rate law for a reaction. The rate law provides a quantitative relationship between the rate of a chemical reaction and the concentrations or pressures of the chemical species present. Rate laws must be determined by experimental measurement and generally cannot be elucidated from the chemical equation. The experimentally determined rate law reflects the stoichiometry of the transition state structure relative to the ground state structure. Determination of the rate law was historically accomplished by monitoring the concentration of a reactant during a reaction through gravimetric analysis, but today it is almost exclusively done through fast and unambiguous spectroscopic techniques. In most cases, the determination of rate equations is simplified by adding a large excess ("flooding") of all but one of the reactants. Catalysis The study of catalysis and catalytic reactions is very important to the field of physical organic chemistry. A catalyst participates in the chemical reaction but is not consumed in the process. A catalyst lowers the activation energy barrier (ΔG‡), increasing the rate of a reaction by either stabilizing the transition state structure or destabilizing a key reaction intermediate, and as only a small amount of catalyst is required it can provide economic access to otherwise expensive or difficult-to-synthesize organic molecules. Catalysts may also influence a reaction rate by changing the mechanism of the reaction.
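The quantitative payoff of lowering ΔG‡ follows from transition state theory via the Eyring equation, k = (kB·T/h)·exp(−ΔG‡/RT). A minimal sketch in Python (the two barrier heights are arbitrary illustrative values, not measurements for any real catalyst):

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314           # gas constant, J/(mol*K)
T  = 298.15          # temperature, K

def eyring_rate(dG_act):
    """Rate constant from the Eyring equation for a barrier in J/mol."""
    return (kB * T / h) * math.exp(-dG_act / (R * T))

uncatalyzed = eyring_rate(90_000)  # hypothetical 90 kJ/mol barrier
catalyzed   = eyring_rate(80_000)  # catalyst lowers the barrier by 10 kJ/mol
print(f"rate enhancement: {catalyzed / uncatalyzed:.0f}x")  # about 56x
```

At room temperature each 5.7 kJ/mol of barrier reduction buys roughly a factor of ten in rate, which is why even modest transition-state stabilization by a catalyst has a large kinetic effect.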
Kinetic isotope effect Although a rate law provides the stoichiometry of the transition state structure, it does not provide any information about breaking or forming bonds. The substitution of an isotope near a reactive position often leads to a change in the rate of a reaction. Isotopic substitution changes the potential energy of reaction intermediates and transition states because heavier isotopes form stronger bonds with other atoms. Atomic mass affects the zero-point vibrational state of the associated molecules: bonds to heavier isotopes are effectively shorter and stronger, while bonds to lighter isotopes are longer and weaker. Because vibrational motions will often change during the course of a reaction, due to the making and breaking of bonds, the frequencies will be affected, and the substitution of an isotope can provide insight into the reaction mechanism and rate law. Substituent effects The study of how substituents affect the reactivity of a molecule or the rate of reactions is of significant interest to chemists. Substituents can exert an effect through both steric and electronic interactions, the latter of which include resonance and inductive effects. The polarizability of a molecule can also be affected. Most substituent effects are analyzed through linear free energy relationships (LFERs). The most common of these is the Hammett plot analysis. This analysis compares the effect of various substituents on the ionization of benzoic acid with their impact on diverse chemical systems. The parameters of the Hammett plots are sigma (σ) and rho (ρ). The value of σ indicates the acidity of substituted benzoic acid relative to the unsubstituted form. A positive σ value indicates the compound is more acidic, while a negative value indicates that the substituted version is less acidic. The ρ value is a measure of the sensitivity of the reaction to the change in substituent. The standard σ scale, however, does not capture direct resonance interaction between a substituent and a localized charge at the reaction center, so two new scales were produced that evaluate the stabilization of localized charge through resonance. One is σ+, which concerns substituents that stabilize positive charges via resonance, and the other is σ−, which is for groups that stabilize negative charges via resonance. Hammett analysis can be used to help elucidate the possible mechanisms of a reaction. For example, if it is predicted that the transition state structure has a build-up of negative charge relative to the ground state structure, then electron-withdrawing groups would be expected to increase the rate of the reaction. Other LFER scales have been developed. Steric and polar effects are analyzed through Taft parameters. Changing the solvent instead of the reactant can provide insight into changes in charge during the reaction. The Grunwald-Winstein plot provides quantitative insight into these effects.
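In practice, a Hammett analysis is a linear fit of log(k/k0) against tabulated σ constants, and the sign of the fitted ρ hints at the charge developing in the transition state. A minimal sketch in Python (the σ values are standard literature constants for para substituents; the relative rates are invented for illustration):

```python
import numpy as np

# Standard Hammett sigma constants for para substituents
sigma = np.array([-0.27, 0.00, 0.23, 0.78])   # p-OMe, H, p-Cl, p-NO2

# Hypothetical relative rate constants k/k0 for a reaction series
k_rel = np.array([0.29, 1.0, 2.9, 36.0])

# Hammett equation: log10(k/k0) = rho * sigma
rho, intercept = np.polyfit(sigma, np.log10(k_rel), 1)
print(f"rho = {rho:.2f}")
```

A fitted ρ near +2, as here, would be read as substantial negative charge build-up at the transition state, consistent with acceleration by electron-withdrawing groups.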
Solvent effects Solvents can have a powerful effect on solubility, stability, and reaction rate. A change in solvent can also allow a chemist to influence the thermodynamic or kinetic control of the reaction. Reactions proceed at different rates in different solvents due to the change in charge distribution during a chemical transformation. Solvent effects may operate on the ground state and/or transition state structures. An example of the effect of solvent on organic reactions is seen in the comparison of SN1 and SN2 reactions. Solvent can also have a significant effect on the thermodynamic equilibrium of a system, for instance as in the case of keto-enol tautomerizations. In non-polar aprotic solvents, the enol form is strongly favored due to the formation of an intramolecular hydrogen bond, while in polar aprotic solvents, such as methylene chloride, the enol form is less favored due to the interaction between the polar solvent and the polar diketone. In protic solvents, the equilibrium lies towards the keto form as the intramolecular hydrogen bond competes with hydrogen bonds originating from the solvent. A modern example of the study of solvent effects on chemical equilibrium can be seen in a study of the epimerization of chiral cyclopropylnitrile Grignard reagents. This study reports that the equilibrium constant for the cis to trans isomerization of the Grignard reagent is much greater—the preference for the cis form is enhanced—in THF as a reaction solvent, over diethyl ether. However, the faster rate of cis-trans isomerization in THF results in a loss of stereochemical purity. This is a case where understanding the effect of solvent on the stability of the molecular configuration of a reagent is important with regard to the selectivity observed in an asymmetric synthesis. Quantum chemistry Many aspects of the structure-reactivity relationship in organic chemistry can be rationalized through resonance, electron pushing, induction, the octet rule, and s-p hybridization, but these are only helpful formalisms and do not represent physical reality. Due to these limitations, a true understanding of physical organic chemistry requires a more rigorous approach grounded in quantum mechanics. Quantum chemistry provides a rigorous theoretical framework capable of predicting the properties of molecules through calculation of a molecule's electronic structure, and it has become a readily available tool for physical organic chemists in the form of popular software packages. The power of quantum chemistry is built on the wave model of the atom, in which the nucleus is a very small, positively charged sphere surrounded by a diffuse electron cloud. Particles are defined by their associated wavefunction, an equation which contains all information about the system; this information is extracted from the wavefunction through the use of mathematical operators. The energy associated with a particular wavefunction, perhaps the most important information a wavefunction contains, can be extracted by solving the Schrödinger equation, ĤΨ = EΨ, in which Ψ is the wavefunction, E is the energy, and Ĥ is an appropriate Hamiltonian operator. In the various forms of the Schrödinger equation, the overall size of a particle's probability distribution increases with decreasing particle mass. For this reason, nuclei are of negligible size in relation to much lighter electrons and are treated as point charges in practical applications of quantum chemistry. Due to complex interactions which arise from electron-electron repulsion, algebraic solutions of the Schrödinger equation are only possible for systems with one electron, such as the hydrogen atom, H₂⁺, H₃²⁺, etc.; however, from these simple models arise all the familiar atomic (s, p, d, f) and bonding (σ, π) orbitals. In systems with multiple electrons, an overall multielectron wavefunction describes all of their properties at once.
Such wavefunctions are generated through the linear addition of single-electron wavefunctions to generate an initial guess, which is repeatedly modified until its associated energy is minimized. Thousands of guesses are often required until a satisfactory solution is found, so such calculations are performed by powerful computers. Importantly, the solutions for atoms with multiple electrons give properties such as diameter and electronegativity which closely mirror experimental data and the patterns found in the periodic table. The solutions for molecules, such as methane, provide detailed representations of their electronic structure which are unobtainable by experimental methods. Instead of four discrete σ-bonds from carbon to each hydrogen atom, theory predicts a set of four bonding molecular orbitals which are delocalized across the entire molecule. Similarly, the true electronic structure of 1,3-butadiene shows delocalized π-bonding molecular orbitals stretching through the entire molecule rather than two isolated double bonds as predicted by a simple Lewis structure.
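The delocalized π system of 1,3-butadiene is also the standard illustration of simple Hückel theory, where the π orbital energies E = α + xβ follow from the eigenvalues x of the chain's connectivity matrix. A minimal sketch in Python using the usual Hückel conventions:

```python
import numpy as np

# Simple Hückel treatment of 1,3-butadiene: four p-orbitals in a chain.
# Energies are E = alpha + x*beta, where x are the eigenvalues of the
# adjacency matrix of the conjugated chain.
adjacency = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

x = np.sort(np.linalg.eigvalsh(adjacency))[::-1]
print(x)  # approximately [1.618, 0.618, -0.618, -1.618]
```

Because β is negative, the two filled molecular orbitals (x = +1.618 and +0.618) lie below α in energy, and both extend over all four carbons, matching the delocalized picture described above.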
A complete electronic structure offers great predictive power for organic transformations and dynamics, especially in cases concerning aromatic molecules, extended π systems, bonds between metal ions and organic molecules, molecules containing nonstandard heteroatoms like selenium and boron, and the conformational dynamics of large molecules such as proteins, wherein the many approximations in chemical formalisms make structure and reactivity prediction impossible. An example of how electronic structure determination is a useful tool for the physical organic chemist is the metal-catalyzed dearomatization of benzene. Chromium tricarbonyl is highly electrophilic due to the withdrawal of electron density from filled chromium d-orbitals into antibonding CO orbitals, and is able to covalently bond to the face of a benzene molecule through delocalized molecular orbitals. The CO ligands inductively draw electron density from benzene through the chromium atom, and dramatically activate benzene to nucleophilic attack. Nucleophiles are then able to react to make cyclohexadienes, which can be used in further transformations such as Diels-Alder cycloadditions. Quantum chemistry can also provide insight into the mechanism of an organic transformation without the collection of any experimental data. Because wavefunctions provide the total energy of a given molecular state, guessed molecular geometries can be optimized to give relaxed molecular structures very similar to those found through experimental methods. Reaction coordinates can then be simulated, and transition state structures solved. Solving a complete energy surface for a given reaction is therefore possible, and such calculations have been applied to many problems in organic chemistry where kinetic data is unavailable or difficult to acquire. Spectroscopy, spectrometry, and crystallography Physical organic chemistry often entails the identification of molecular structure, dynamics, and the concentration of reactants in the course of a reaction. The interaction of molecules with light can afford a wealth of data about such properties through nondestructive spectroscopic experiments, with light absorbed when the energy of a photon matches the difference in energy between two states in a molecule and emitted when an excited state in a molecule collapses to a lower energy state. Spectroscopic techniques are broadly classified by the type of excitation being probed, such as vibrational, rotational, electronic, nuclear magnetic resonance (NMR), and electron paramagnetic resonance spectroscopy. In addition to spectroscopic data, structure determination is often aided by complementary data collected from X-ray diffraction and mass spectrometric experiments. NMR and EPR spectroscopy One of the most powerful tools in physical organic chemistry is NMR spectroscopy. An external magnetic field applied to a magnetically active (nonzero-spin) nucleus generates two discrete states, with positive and negative spin values diverging in energy; the difference in energy can then be probed by determining the frequency of light needed to excite a change in spin state for a given magnetic field. Nuclei that are not chemically equivalent in a given molecule absorb at different frequencies, and the integrated peak area in an NMR spectrum is proportional to the number of nuclei responding to that frequency. It is possible to quantify the relative concentration of different organic molecules simply by integrating peaks in the spectrum, and many kinetic experiments can be easily and quickly performed by following the progress of a reaction within one NMR sample. Proton NMR is often used by the synthetic organic chemist because protons associated with certain functional groups give characteristic absorption energies, but NMR spectroscopy can also be performed on isotopes of nitrogen, carbon, fluorine, phosphorus, boron, and a host of other elements. In addition to simple absorption experiments, it is also possible to determine the rate of fast atom exchange reactions through suppression exchange measurements, interatomic distances through multidimensional nuclear Overhauser effect experiments, and through-bond spin-spin coupling through homonuclear correlation spectroscopy. In addition to the spin excitation properties of nuclei, it is also possible to study the properties of organic radicals through the same fundamental technique. Unpaired electrons also have a net spin, and an external magnetic field allows for the extraction of similar information through electron paramagnetic resonance (EPR) spectroscopy. Vibrational spectroscopy Vibrational spectroscopy, or infrared (IR) spectroscopy, allows for the identification of functional groups and, due to its low expense and robustness, is often used in teaching labs and for the real-time monitoring of reaction progress in difficult-to-reach environments (high pressure, high temperature, gas phase, phase boundaries). Molecular vibrations are quantized in an analogous manner to electronic wavefunctions, with integer steps in the vibrational quantum number leading to higher energy states. The difference in energy between vibrational states is nearly constant, often falling in the energy range corresponding to infrared photons, because at normal temperatures molecular vibrations closely resemble harmonic oscillators. IR spectroscopy allows for the crude identification of functional groups in organic molecules, but spectra are complicated by vibrational coupling between nearby functional groups in complex molecules. Therefore, its utility in structure determination is usually limited to simple molecules.
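In the harmonic-oscillator picture a stretching frequency scales as the square root of k/μ, so isotopic substitution shifts bands in a way that is easy to estimate; this is also the physical root of the kinetic isotope effects discussed earlier. A minimal sketch in Python for the C–H versus C–D stretch (harmonic approximation; the force constant cancels in the ratio):

```python
import math

def reduced_mass(m1, m2):
    """Reduced mass of a diatomic oscillator (atomic mass units)."""
    return m1 * m2 / (m1 + m2)

# Harmonic approximation: frequency ~ sqrt(k/mu). For the same force
# constant k, the frequency ratio depends only on the reduced masses.
mu_CH = reduced_mass(12.0, 1.008)  # C-H
mu_CD = reduced_mass(12.0, 2.014)  # C-D

ratio = math.sqrt(mu_CD / mu_CH)
print(f"nu(C-H)/nu(C-D) = {ratio:.2f}")  # ~1.36: a ~3000 cm^-1 C-H stretch
                                         # moves to roughly 2200 cm^-1 on deuteration
```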
Further complicating matters is that some vibrations do not induce a change in the molecular dipole moment and will not be observable with standard IR absorption spectroscopy. These can instead be probed through Raman spectroscopy, but this technique requires a more elaborate apparatus and is less commonly performed. However, as Raman spectroscopy relies on light scattering, it can be performed on microscopic samples such as the surface of a heterogeneous catalyst, a phase boundary, or on a one-microliter (μL) subsample within a larger liquid volume. Vibrational spectroscopy is also used by astronomers to study the composition of molecular gas clouds, extrasolar planetary atmospheres, and planetary surfaces. Electronic excitation spectroscopy Electronic excitation spectroscopy, or ultraviolet-visible (UV-vis) spectroscopy, is performed in the visible and ultraviolet regions of the electromagnetic spectrum and is useful for probing the difference in energy between the highest-energy occupied (HOMO) and lowest-energy unoccupied (LUMO) molecular orbitals. This information is useful to physical organic chemists in the design of organic photochemical systems and dyes, as absorption of different wavelengths of visible light gives organic molecules color. A detailed understanding of electronic structure is therefore helpful in explaining electronic excitations, and through careful control of molecular structure it is possible to tune the HOMO-LUMO gap to give desired colors and excited state properties. Mass spectrometry Mass spectrometry is a technique which allows for the measurement of molecular mass and offers complementary data to spectroscopic techniques for structural identification. In a typical experiment a gas-phase sample of an organic material is ionized and the resulting ionic species are accelerated by an applied electric field into a magnetic field. The deflection imparted by the magnetic field, often combined with the time it takes for the molecule to reach a detector, is then used to calculate the mass of the molecule. Often in the course of sample ionization large molecules break apart, and the resulting data show a parent mass and a number of smaller fragment masses; such fragmentation can give rich insight into the sequence of proteins and nucleic acid polymers. In addition to the mass of a molecule and its fragments, the distribution of isotopic variant masses can also be determined and the qualitative presence of certain elements identified due to their characteristic natural isotope distribution. The ratio of fragment mass population to the parent ion population can be compared against a library of empirical fragmentation data and matched to a known molecular structure. Combined gas chromatography and mass spectrometry is used to qualitatively identify molecules and quantitatively measure concentration with great precision and accuracy, and is widely used to test for small quantities of biomolecules and illicit narcotics in blood samples. For synthetic organic chemists it is a useful tool for the characterization of new compounds and reaction products.
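The characteristic isotope signatures mentioned above are straightforward to simulate: each element contributes a small mass distribution, and the molecular pattern is the convolution of all of them. A minimal sketch in Python for a species containing two chlorine atoms (natural 35Cl/37Cl abundances; all other elements treated as monoisotopic for simplicity):

```python
from itertools import product

# Natural isotope distribution of chlorine: (mass-number offset, abundance)
CL = [(0, 0.7576), (2, 0.2424)]  # 35Cl and 37Cl

def pattern(n_cl):
    """Isotope pattern from n_cl chlorine atoms (other atoms monoisotopic)."""
    peaks = {}
    for combo in product(CL, repeat=n_cl):
        shift = sum(offset for offset, _ in combo)
        abundance = 1.0
        for _, a in combo:
            abundance *= a
        peaks[shift] = peaks.get(shift, 0.0) + abundance
    return peaks

for shift, abundance in sorted(pattern(2).items()):
    print(f"M+{shift}: {abundance:.3f}")
# M+0: 0.574, M+2: 0.367, M+4: 0.059
```

The resulting M, M+2, M+4 intensities in a roughly 9:6:1 ratio are a fingerprint that two chlorines are present, before any exact mass is considered.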
Crystallography Unlike spectroscopic methods, X-ray crystallography allows for unambiguous structure determination and provides precise bond angles and lengths totally unavailable through spectroscopy. It is often used in physical organic chemistry to provide an absolute molecular configuration and is an important tool in improving the synthesis of a pure enantiomeric substance. It is also the only way to identify the position and bonding of elements that lack an NMR-active nucleus, such as oxygen. Indeed, before X-ray structural determination methods were made available in the early 20th century all organic structures were entirely conjectural: tetrahedral carbon, for example, was only confirmed by the crystal structure of diamond, and the delocalized structure of benzene was confirmed by the crystal structure of hexamethylbenzene. While crystallography provides organic chemists with highly satisfying data, it is not an everyday technique in organic chemistry because a perfect single crystal of a target compound must be grown. Only complex molecules, for which NMR data cannot be unambiguously interpreted, require this technique. In one notable example, the structure of a fullerene host–guest complex would have been quite difficult to solve without a single-crystal structure: there are no protons on the fullerene, and with no covalent bonds between the two halves of the organic complex, spectroscopy alone was unable to prove the hypothesized structure. See also Journal of Physical Organic Chemistry Gaussian, an example of a commercially available quantum mechanical software package used, particularly in academic settings References Further reading General Peter Atkins & Julio de Paula, 2006, "Physical chemistry," 8th Edn., New York, NY, USA: Macmillan, accessed 21 June 2015. [E.g., see p. 422 for a group theoretical/symmetry description of atomic orbitals contributing to bonding in methane, CH4, and pp. 390f for estimation of π-electron binding energy for 1,3-butadiene by the Hückel method.] Thomas H. Lowry & Kathleen Schueller Richardson, 1987, Mechanism and Theory in Organic Chemistry, 3rd Edn., New York, NY, USA: Harper & Row, accessed 20 June 2015. [The authoritative textbook on the subject, containing a number of appendices that provide technical details on molecular orbital theory, kinetic isotope effects, transition state theory, and radical chemistry.] Eric V. Anslyn & Dennis A. Dougherty, 2006, Modern Physical Organic Chemistry, Sausalito, Calif.: University Science Books. [A modernized and streamlined treatment with an emphasis on applications and cross-disciplinary connections.] Michael B. Smith & Jerry March, 2007, "March's Advanced Organic Chemistry: Reactions, Mechanisms, and Structure," 6th Edn., New York, NY, USA: Wiley & Sons, accessed 19 June 2015. Francis A. Carey & Richard J. Sundberg, 2006, "Advanced Organic Chemistry: Part A: Structure and Mechanisms," 4th Edn., New York, NY, USA: Springer Science & Business Media, accessed 19 June 2015. Hammett, Louis P. (1940) Physical Organic Chemistry, New York, NY, USA: McGraw Hill, accessed 20 June 2015. History [An outstanding starting point on the history of the field, from a critically important contributor, referencing and discussing the early Hammett text, etc.] Thermochemistry L. K. Doraiswamy, 2005, "Estimation of properties of organic compounds (Ch. 3)," pp. 36–51, 118–124 (refs.), in Organic Synthesis Engineering, Oxford, Oxon, ENG: Oxford University Press, accessed 22 June 2015. (This book chapter surveys a very wide range of physical properties and their estimation, including the narrow list of thermochemical properties appearing in the June 2015 WP article, placing the Benson et al. method alongside many other methods. L. K. Doraiswamy is Anson Marston Distinguished Professor of Engineering at Iowa State University.) Organic chemistry
Physical organic chemistry
Chemistry
6,076
74,760,455
https://en.wikipedia.org/wiki/Purpureocillium%20roseum
Purpureocillium roseum is a species of fungus in the genus Purpureocillium and the order Hypocreales. References Ophiocordycipitaceae Fungi described in 2020 Fungus species
Purpureocillium roseum
Biology
48
58,507,279
https://en.wikipedia.org/wiki/Aspergillus%20heyangensis
Aspergillus heyangensis is a species of fungus in the genus Aspergillus, belonging to the section Aenei. The species was first described in 1994. It has been reported to produce a decaturin. Growth and morphology A. heyangensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. References heyangensis Fungi described in 1994 Fungus species
Aspergillus heyangensis
Biology
115
62,306,648
https://en.wikipedia.org/wiki/Ratl
A ratl (رطل) is a medieval Middle Eastern unit of measurement found in several historic recipes. The term was used to measure both volume and weight (around a pound and a pint in 10th-century Baghdad, but anywhere from 8 ounces to 8 pounds depending on the time period and region). While there were a variety of names for different shapes of cups and mugs in use at the time, the ratl seems to have had a position roughly equivalent to a British pint in that the name of the drinking-vessel also implied a standardized measurement as opposed to merely the object's shape, in both 10th-century Baghdad and 13th-century Andalusia. However, those standardized measures varied both by region and by purpose: the spice-measuring ratl, the flax-measuring ratl, the oil-measuring ratl, and the quicksilver-measuring ratl all differed from each other. The ratl was a part of a sequence of measurements ranging from a grain of barley through the dirham (used as a common point of reference in both medieval European and Middle Eastern regions) on up to the Sa (Islamic measure). Measurement
1 mudd = 8/6 ratl.
1 sá = 4 mudd = 5+1/3 ratl.
1 ratl = 128+4/7 dirham, or 128 dirham, or 130 dirham.
1 uqiyyah = 40 dirham.
1 nashsh = 20 dirham.
7 mithqal = 10 dirham.
1 mithqal = 72 grains of average barley, both edges cut.
1 mithqal = 20 qirat (قِيراط) of Makkah = 21+3/7 qirat of Damascus.
1 dirham = 0.7 mithqal = 14 qirat of Makkah = 15 qirat of Damascus.
1 mil = 4000 zira.
1 wasq = 60 sá.
In al-Warraq's tenth-century cookbook, different regions used some of the same terms to mean different units of measurement, with different relationships between them; some of those relationships are reflected in the list above. References Customary units of measurement Units of measurement Cooking weights and measures
Ratl
Mathematics
448
48,415,829
https://en.wikipedia.org/wiki/Qcells
Hanwha Qcells (commonly known as simply Qcells) is a manufacturer of photovoltaic cells. The company is headquartered in Seoul, South Korea, after being founded in 1999 in Bitterfeld-Wolfen, Germany, where the company still has its engineering offices. Qcells was purchased out of bankruptcy in August 2012 by the Hanwha Group, a South Korean business conglomerate. Qcells now operates as a subsidiary of Hanwha Solutions, the group's energy and petrochemical company. Qcells has manufacturing facilities in the United States, Malaysia, and South Korea. The company was the sixth-largest producer of solar cells in 2019, with shipments totaling 7.3 gigawatts. History On 23 July 2001, the company produced its first working polycrystalline solar cell on its new production line in Thalheim. Qcells would grow to become one of the world's largest solar cell manufacturers, employing over 2,000 people and encouraging other companies to open facilities in the surrounding area, which would come to be known as Germany's "Solar Valley". The company went public on 5 October 2005, listing on the Frankfurt Stock Exchange. High share prices during the initial public offering poured money into the company and made the founders wealthy. Lemoine died in 2006, and shortly thereafter, Fest and Grunow left the company to go back into research. Only Milner remained and served as the company's CEO. In 2005, Qcells established the CdTe PV manufacturer Calyxo. In November 2007, Qcells agreed to a deal with Solar Fields, whose intellectual property and assets were merged into Calyxo's newly established subsidiary Calyxo USA. In 2011, Solar Fields took over Calyxo. In 2008, Qcells acquired a 17.9% stake in Renewable Energy Corporation. This stake was sold in 2009. In the same year, Qcells' subsidiary Sontor merged with the thin-film company Solarfilm. In June 2009, the company acquired Solibro, a joint venture it had established in 2006. Solibro manufactured thin-film solar cells based on copper-indium-gallium-diselenide. These modules were marketed until the sale of Solibro to Hanergy in 2012. Qcells was hit hard by the Great Recession in late 2008, with share prices slipping from over 80 euros to under 20. In response, the company laid off 500 employees. Milner resigned as CEO in early 2010, and by the end of the year, the company's finances appeared to stabilize. Just a few months later, in 2011, the global solar cell market crashed, with production overcapacity driving prices extremely low. Qcells saw sales slide by around 1 billion euros and ran a loss of 846 million euros, and on 3 April 2012 the company filed for bankruptcy. In August 2012, the Hanwha Group, a large South Korean business conglomerate, agreed to acquire Qcells, saying that it presented synergy opportunities. In 2010, Hanwha had purchased a 49.99% share in the Chinese manufacturer Solarfun, which had been renamed Hanwha SolarOne. SolarOne had been producing solar cells for Qcells under contract. Due to high costs, production in Germany ceased in 2015, with Hanwha moving the work to its SolarOne facilities in China and newly opened manufacturing facilities in Malaysia and South Korea. In 2019, Qcells opened its first manufacturing facility in the United States.
Hanwha has since worked to simplify the structure of its units: SolarOne was merged into Qcells in December 2014, Qcells was merged with the company's Advanced Materials (petrochemicals) group in 2018, Qcells & Advanced Materials acquired a solar company operated by the Hanwha Chemicals group in 2019, and in 2020 Hanwha Qcells & Advanced Materials merged with Hanwha Chemical to form the Hanwha Solutions group. In January 2023, Qcells made a commitment to invest more than $2.5 billion to build a fully integrated, silicon-based solar supply chain in the United States, from raw material to finished module, with full production expected by the end of 2024. In August 2024, Qcells received a conditional commitment for a future $1.45 billion loan from the U.S. Department of Energy to help finance the construction of a fully integrated solar cell manufacturing facility north of Atlanta, Georgia. The loan guarantee was approved in part because Qcells had received an order from Microsoft for 12 gigawatts of solar panels through 2032, demonstrating a market for its product. The loan was finalized in December 2024. Qcells also operates a residential solar financing platform in the United States, EnFin, offering loans for those who choose to install PV systems in their homes. In August 2023, the U.S. Department of Commerce ruled that Qcells had not circumvented tariffs on Chinese-made goods following an investigation involving multiple photovoltaic cell manufacturers. In July 2024, it was reported that Hanwha Qcells' factory in Dalton, Georgia, was importing cells made with Chinese wafers from TCL Zhonghuan Renewable Energy Technology Co. and Gokin Solar Co., wafer suppliers who source Xinjiang, China polysilicon from Daqo and GCL, both of which are on the UFLPA Entity List. However, the report stated that there was no evidence that components containing the banned polysilicon had turned up in Qcells panels. Large manufacturers have their own separate duty rates, and several big China-based producers received far lower rates than Hanwha Qcells: the Commerce Department calculated a subsidy rate of 14.72% for Hanwha Qcells products produced in Malaysia, based in part on government loans and below-market land provisions to the company in that country. Operations Qcells develops and produces monocrystalline silicon photovoltaic cells and solar panels. It produces and installs PV systems for commercial, industrial, and residential applications and provides EPC services for large-scale solar power plants. The company's engineering offices are located at the original headquarters in Thalheim, Germany. Production facilities are located in Dalton, Georgia, and Cartersville, Georgia, in the United States; Cyberjaya in Malaysia; and Jincheon in South Korea. See also List of photovoltaics companies Photovoltaic array Photovoltaics Theory of solar cells Thin-film cell References External links Technology companies established in 1999 Engineering companies of South Korea Manufacturing companies based in Seoul Photovoltaics manufacturers Thin-film cell manufacturers Silicon wafer producers Hanwha subsidiaries Companies formerly listed on the Nasdaq South Korean brands South Korean companies established in 1999
Qcells
Engineering
1,403
2,479,157
https://en.wikipedia.org/wiki/Monte%20Carlo%20N-Particle%20Transport%20Code
Monte Carlo N-Particle Transport (MCNP) is a general-purpose, continuous-energy, generalized-geometry, time-dependent Monte Carlo radiation transport code designed to track many particle types over broad ranges of energies; it is developed by Los Alamos National Laboratory. Specific areas of application include, but are not limited to, radiation protection and dosimetry, radiation shielding, radiography, medical physics, nuclear criticality safety, detector design and analysis, nuclear oil well logging, accelerator target design, fission and fusion reactor design, and decontamination and decommissioning. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and fourth-degree elliptical tori. Point-wise cross-section data are typically used, although group-wise data also are available. For neutrons, all reactions given in a particular cross-section evaluation (such as ENDF/B-VI) are accounted for. Thermal neutrons are described by both the free gas and S(α,β) models. For photons, the code accounts for incoherent and coherent scattering, the possibility of fluorescent emission after photoelectric absorption, absorption in pair production with local emission of annihilation radiation, and bremsstrahlung. A continuous-slowing-down model is used for electron transport that includes positrons, K x-rays, and bremsstrahlung but does not include external or self-induced fields. Important standard features that make MCNP very versatile and easy to use include a powerful general source, criticality source, and surface source; both geometry and output tally plotters; a rich collection of variance reduction techniques; a flexible tally structure; and an extensive collection of cross-section data. MCNP contains numerous flexible tallies: surface current and flux, volume flux (track length), point or ring detectors, particle heating, fission heating, pulse height tally for energy or charge deposition, mesh tallies, and radiography tallies. The key value MCNP provides is a predictive capability that can replace expensive or impossible-to-perform experiments. It is often used to design large-scale measurements, providing significant time and cost savings to the community. LANL's latest version of the MCNP code, version 6.2, represents one piece of a set of synergistic capabilities each developed at LANL; it includes evaluated nuclear data (ENDF) and the data processing code NJOY. The international user community's high confidence in MCNP's predictive capabilities is based on its performance with verification and validation test suites, comparisons to its predecessor codes, automated testing, underlying high-quality nuclear and atomic databases, and significant testing by its users. History The Monte Carlo method for radiation particle transport has its origins at LANL, dating back to 1946. The creators of these methods were Stanislaw Ulam, John von Neumann, Robert Richtmyer, and Nicholas Metropolis. Monte Carlo for radiation transport was conceived by Stanislaw Ulam in 1946 while playing solitaire during recovery from an illness. "After spending a lot of time trying to estimate success by combinatorial calculations, I wondered whether a more practical method...might be to lay it out say one hundred times and simply observe and count the number of successful plays."
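The idea Ulam describes, estimating a quantity by running many random trials and counting outcomes, is the same principle a transport code applies to particle histories. A minimal toy sketch in Python (one-speed neutrons in a purely absorbing slab; the cross-section and thickness are made-up illustrative values, and this is not MCNP's physics or input format):

```python
import math
import random

random.seed(1)

SIGMA_T = 0.5    # hypothetical total macroscopic cross-section, 1/cm
THICKNESS = 4.0  # hypothetical slab thickness, cm
N = 100_000      # number of particle histories

transmitted = 0
for _ in range(N):
    # Sample the distance to the first collision from an exponential
    # distribution, the standard free-flight kernel in transport methods.
    path = -math.log(1.0 - random.random()) / SIGMA_T
    if path > THICKNESS:  # the particle crosses the slab without colliding
        transmitted += 1

estimate = transmitted / N
exact = math.exp(-SIGMA_T * THICKNESS)
print(f"Monte Carlo: {estimate:.4f}, analytic exp(-Sigma*t): {exact:.4f}")
```

Real codes add scattering, fission, energy dependence, full 3-D geometry, and variance reduction, but the sampling loop above is the conceptual core.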
In 1947, John von Neumann sent a letter to Robert Richtmyer proposing the use of a statistical method to solve neutron diffusion and multiplication problems in fission devices. His letter contained an 81-step pseudocode and was the first formulation of a Monte Carlo computation for an electronic computing machine. Von Neumann's assumptions were: time-dependent transport, continuous energy, a spherical but radially varying geometry, one fissionable material, isotropic scattering and fission production, and fission multiplicities of 2, 3, or 4. He suggested running 100 neutrons for 100 collisions each and estimated the computational time to be five hours on ENIAC. Richtmyer suggested modifications to allow for multiple fissionable materials, no fission spectrum energy dependence, single neutron multiplicity, and running the computation for a fixed computer time rather than a fixed number of collisions. The code was finalized in December 1947. The first calculations were run in April/May 1948 on ENIAC. While waiting for ENIAC to be physically relocated, Enrico Fermi invented a mechanical device called FERMIAC to trace neutron movements through fissionable materials by the Monte Carlo method. Monte Carlo methods for particle transport have been driving computational developments since the beginning of modern computers; this continues today. In the 1950s and 1960s, these new methods were organized into a series of special-purpose Monte Carlo codes, including MCS, MCN, MCP, and MCG. These codes were able to transport neutrons and photons for specialized LANL applications. In 1977, these separate codes were combined when MCNG was merged with MCP to create MCNP, the first generalized Monte Carlo radiation particle transport code. The first release of the MCNP code, version 3, came in 1983. It is distributed by the Radiation Safety Information Computational Center in Oak Ridge, TN. Monte Carlo N-Particle eXtended Monte Carlo N-Particle eXtended (MCNPX) was also developed at Los Alamos National Laboratory, and is capable of simulating particle interactions of 34 different types of particles (nucleons and ions) and 2000+ heavy ions at nearly all energies, including those simulated by MCNP. Both codes can be used to judge whether or not nuclear systems are critical and to determine doses from sources, among other things. MCNP6 is a merger of MCNP5 and MCNPX. Comparison MCNP6 is less accurate than MCNPX. Geant4 is less accurate than MCNPX. Geant4 is less accurate than MCNP5. Geant4 is slower than MCNPX. See also Safety code (nuclear reactor) Nuclear data Monte Carlo method Monte Carlo methods for electron transport Nuclear reactor Nuclear engineering Neutron FLUKA Geant4 MELCOR RELAP5-3D Serpent (software) Notes External links LANL MCNP website Radiation Safety Information Computational Center Nuclear technology Nuclear safety and security Monte Carlo software Physics software Fortran software Scientific simulation software Monte Carlo particle physics software
Monte Carlo N-Particle Transport Code
Physics
1,283
21,504,556
https://en.wikipedia.org/wiki/Torrance%20Tests%20of%20Creative%20Thinking
The Torrance Tests of Creative Thinking, formerly the Minnesota Tests of Creative Thinking, is a test of creativity built on J. P. Guilford's work and created by Ellis Paul Torrance. The tests originally involved simple tests of divergent thinking and other problem-solving skills, which were scored on four scales: Fluency. The total number of interpretable, meaningful, and relevant ideas generated in response to the stimulus. Flexibility. The number of different categories of relevant responses. Originality. The statistical rarity of the responses. Elaboration. The amount of detail in the responses. History In 1976, Arasteh and Arasteh wrote that the most systematic assessment of creativity in elementary school children had been conducted by Torrance and his associates (1960a, 1960b, 1960c, 1961, 1962, 1962a, 1963a, and 1964) with the Minnesota Tests of Creative Thinking, later renamed the Torrance Tests of Creative Thinking, involving several thousand schoolchildren. The Minnesota group, in contrast to Guilford, devised scoring tasks involving both verbal and non-verbal aspects and relying on senses other than vision. They also differ from the battery developed by Wallach and Kogan (1965), which contains measures representing "creative tendencies" that are similar in nature. Several longitudinal studies have been conducted to follow up on the elementary school-aged students who were first administered the Torrance Tests in 1958 in Minnesota, including a 22-year follow-up, a 40-year follow-up, and a 50-year follow-up. Torrance (1962) grouped the different subtests of the Minnesota Tests of Creative Thinking into three categories: Verbal tasks using verbal stimuli Verbal tasks using non-verbal stimuli Non-verbal tasks The third edition of the Torrance Tests of Creative Thinking in 1984 removed the "flexibility" scale from the figural test but added "resistance to premature closure" (based on Gestalt psychology) and "abstractness of titles" as two new criterion-referenced scores on the figural test. Torrance called the new scoring procedure Streamlined Scoring. To the five norm-referenced measures that he had (fluency, originality, abstractness of titles, elaboration, and resistance to premature closure), he added 13 criterion-referenced measures, which include: emotional expressiveness, story-telling articulateness, movement or actions, expressiveness of titles, syntheses of incomplete figures, synthesis of lines and of circles, unusual visualization, extending or breaking boundaries, humor, richness of imagery, colourfulness of imagery, and fantasy. Tasks A brief description of the tasks used by Torrance is given below: Verbal tasks using verbal stimuli Unusual uses The unusual uses tasks using verbal stimuli are direct modifications of Guilford's Brick uses test. After preliminary tryouts, Torrance (1962) decided to substitute tin cans and books for bricks. It was believed that children would be able to handle tin cans and books more easily, since both are more available to children than bricks. Impossibilities task It was used originally by Guilford and his associates (1951) as a measure of fluency involving complex restrictions and large potential. In a course in personality development and mental hygiene, Torrance experimented with a number of modifications of the basic task, making the restrictions more specific. In this task the subjects are asked to list as many impossibilities as they can.
Consequences task The consequences task was also used originally by Guilford and his associates (1951). Torrance made several modifications in adapting it: he chose three improbable situations, and the children were required to list their consequences. Just suppose task It is an adaptation of the consequences type of test, designed to elicit a higher degree of spontaneity and to be more effective with children. As in the consequences task, the subject is confronted with an improbable situation and asked to predict the possible outcomes from the introduction of a new or unknown variable. Situations task The situations task was modeled after Guilford's (1951) test designed to assess the ability to see what needs to be done. Subjects were given three common problems and asked to think of as many solutions to these problems as they can. For example, if all schools were abolished, what would you do to try to become educated? Common problems task This task is an adaptation of Guilford's (1951) test designed to assess the ability to see defects, needs, and deficiencies, found to be one of the tests of the factor termed sensitivity to problems. Subjects are instructed that they will be given common situations and asked to think of as many problems as they can that may arise in connection with these situations. For example, doing homework while going to school in the morning. Improvement task This test was adapted from Guilford's (1952) apparatus test, which was designed to assess the ability to see defects and all aspects of sensitivity to problems. The subjects are given a list of common objects and are asked to suggest as many ways as they can to improve each object, without concern for whether the change is possible to implement. Mother-Hubbard problem This task was conceived as an adaptation of the situations task for oral administration in the primary grades and for older groups. This test concerns the development of ideas. The task asks children how they would solve the problem of Old Mother Hubbard going to the cupboard to get her dog a bone but finding it empty. Imaginative stories task The child is told to write the most interesting and exciting story they can think of. Topics are suggested (e.g., the dog that did not bark), or the child may use their own ideas. Cow jumping problem The cow jumping problem is a companion task to the Mother-Hubbard problem and has been administered to the same groups under the same conditions and scored according to similar procedures. The task is to think of all possible things which might have happened when the cow jumped over the moon. Verbal tasks using nonverbal stimuli Ask and guess task The ask and guess task requires the individual first to ask questions about a picture – questions which cannot be answered by just looking at the picture. Next they are asked to make guesses or formulate hypotheses about the possible causes of the event depicted, and then its consequences, both immediate and remote. Product improvement task In this task common toys are used and children are asked to think of as many improvements as they can which would make the toy 'more fun to play with'. Subjects are then asked to think of unusual uses of these toys other than 'something to play with'. Unusual uses task In this task, along with the product improvement task, another task (unusual uses) is used. The child is asked to think of the cleverest, most interesting and most unusual uses of the given toy, other than as a plaything.
These uses could be for the toy as it is, or for the toy as changed. Non-verbal tasks (figural) Incomplete figures task It is an adaptation of the 'Drawing completion test' developed by Kate Franck and used by Barron (1958). On an ordinary sheet of white paper, an area of fifty-four square inches is divided into ten squares, each containing a different stimulus figure. The subjects are asked to sketch some novel object or design by adding as many lines as they can to the ten figures. Picture construction task or shapes task In this task the children are given the shape of a triangle or a jelly bean and a sheet of white paper. The children are asked to think of a picture in which the given shape is an integral part. They should paste it wherever they want on the white sheet and add lines with pencil to make any novel picture. They have to think of a name for the picture and write it at the bottom. Circles and squares task It was originally designed as a nonverbal test of ideational fluency and flexibility, then modified in such a way as to stress originality and elaboration. Two printed forms are used in the test. In one form, the subject is confronted with a page of forty-two circles and asked to sketch objects or pictures which have circles as a major part. In the alternate form, squares are used instead of circles. Creative design task Hendrickson designed this task, which seems promising, but its scoring procedures are still being tested and have not yet been perfected. The materials consist of circles and strips of various sizes and colours, a four-page booklet, scissors and glue. Subjects are instructed to construct pictures or designs, making use of all of the coloured circles and strips, within a thirty-minute time limit. Subjects may use one, two, three, or four pages; alter the circles and strips or use them as they are; and add other symbols with pencil or crayon. See also Creativity References Creativity Psychological tests and scales
Torrance Tests of Creative Thinking
Biology
1,801
33,587,105
https://en.wikipedia.org/wiki/Gliese%20293
L 97-12 (or WD 0752-676, or LHS 34, or Gliese 293) is a nearby degenerate star (white dwarf) located in the constellation Volans, the single known component of its system. Distance Possibly, L 97-12 is the ninth-closest white dwarf, after Sirius B, Procyon B, van Maanen's star, Gliese 440, 40 Eridani B, Stein 2051 B, GJ 1221 and Gliese 223.2. (However, it is possible that the white dwarfs GJ 1087 and Gliese 518, and, with lesser probability, Gliese 915, are located closer.) The trigonometric parallax of L 97-12 was included in the YPC (Yale Parallax Catalog), and it was subsequently measured more precisely in the CTIOPI (Cerro Tololo Inter-American Observatory (CTIO) Parallax Investigation) 0.9 m telescope program. Physical parameters The mass of L 97-12 is 0.59 ± 0.01 solar masses, and its surface gravity is log g = 8.00 ± 0.02 (in cgs units, i.e. about 10^8 cm·s−2, or approximately 102,000 times Earth's), corresponding to a radius of about 139% of Earth's. L 97-12 has a temperature of 5,700 ± 90 K, close to the Sun's, and a cooling age (i.e. its age as a degenerate star, not including its lifetime as a main-sequence star and as a giant star) of 2.65 ± 0.10 Gyr. It has a white appearance due to its Sun-like temperature. See also List of star systems within 25–30 light-years Notes References Volans White dwarfs 0293 J07530814-6747314
Gliese 293
Astronomy
374
2,419,630
https://en.wikipedia.org/wiki/Chlorinated%20polyvinyl%20chloride
Chlorinated polyvinyl chloride (CPVC) is a thermoplastic produced by chlorination of polyvinyl chloride (PVC) resin. CPVC is significantly more flexible than PVC, and can also withstand higher temperatures. Uses include hot and cold water delivery pipes and industrial liquid handling. CPVC, like PVC, is deemed safe for the transport and use of potable water. History Genova Products, located in Michigan, created the first CPVC tubing and fittings for hot- and cold-water distribution systems in the early 1960s. The original tetrahydrofuran (THF) / methyl ethyl ketone (MEK) formulas for CPVC cements were developed by Genova in conjunction with the B.F. Goodrich Company, the original developer of the CPVC resin. Production process Chlorinated polyvinyl chloride (CPVC) is PVC that has been chlorinated via a free radical chlorination reaction, typically initiated by application of thermal or UV energy. In the process, chlorine gas is decomposed into free radical chlorine, which is then reacted with PVC in a post-production step, essentially replacing a portion of the hydrogen in the PVC with chlorine. Depending on the method, a varying amount of chlorine is introduced into the polymer, allowing the final properties to be fine-tuned in a measured way. The chlorine content may vary from manufacturer to manufacturer: it can range from the PVC base level of 56.7% to as high as 74% by mass, although most commercial resins have chlorine content from 63% to 69%. As the chlorine content in CPVC is increased, its glass transition temperature (Tg) increases significantly. Under normal operating conditions, CPVC becomes unstable at 70% mass of chlorine. Various additives are also introduced into the resin in order to make the material more receptive to processing. These additives may consist of stabilizers, impact modifiers, pigments and lubricants. Physical properties CPVC shares most of the features and properties of PVC, but also has some key differences. CPVC is readily workable, including machining, welding, and forming. Because of its excellent corrosion resistance at elevated temperatures, CPVC is ideally suited for self-supporting constructions where temperatures up to 200 °F (93 °C) are present. The ability to bend, shape, and weld CPVC enables its use in a wide variety of processes and applications. It exhibits fire-retardant properties. Comparison to polyvinyl chloride (PVC) Heat resistance CPVC can withstand corrosive water at temperatures greater than PVC, typically by 40–50 °C (72–90 °F), contributing to its popularity as a material for water-piping systems in residential and commercial construction. CPVC's maximum operating temperature peaks at 200 °F (93 °C). Mechanical properties The principal mechanical difference between CPVC and PVC is that CPVC is significantly more ductile, allowing greater flexure and crush resistance. Additionally, the mechanical strength of CPVC makes it a viable candidate to replace many types of metal pipe in conditions where metal's susceptibility to corrosion limits its use. CPVC and PVC pipe are commonly compared, for both Schedule 40 and Schedule 80 sizes, on maximum working pressure, tensile strength, and temperature limits. Additionally, CPVC is a thermoplastic and as such provides greater thermal insulation than copper pipe.
Due to this increased insulation, CPVC experiences less condensation formation and better maintains water temperature for both hot and cold applications. Bonding Due to its specific composition, bonding CPVC requires a specialized solvent cement different from that used for PVC, with high-strength formulas first introduced in 1965 by Genova Products, followed by alternatives such as IPS's Weld-On line. Primers, solvent cements, and bonding agents for CPVC reportedly must meet ASTM F493 specifications, differing from PVC solvent cements, which must adhere to ASTM D2564 standards. Fire properties CPVC is similar to PVC in resistance to fire. It is typically very difficult to ignite and tends to self-extinguish when not in a directly applied flame. Due to its chlorine content, the incineration of CPVC, either in a fire or in an industrial disposal process, can result in the creation of chlorinated dioxins and the similarly dangerous polychlorinated dibenzofurans, both of which bioaccumulate. References Vinyl polymers Plastics
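The chlorine mass fractions quoted in the production-process section (56.7% for base PVC, up to roughly 74% for CPVC) follow directly from the stoichiometry of the vinyl repeat unit. A minimal Python check, assuming one to two chlorine atoms per two-carbon repeat unit; the function and constants are illustrative, not from any standard:

```python
# Back-of-the-envelope check of the chlorine mass fractions quoted above.
# PVC's repeat unit is C2H3Cl; chlorination replaces some H atoms with Cl.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Cl": 35.453}  # g/mol

def cl_mass_fraction(n_cl_per_unit: float) -> float:
    """Chlorine mass fraction of a repeat unit C2 H(3-x) Cl(1+x),
    where x = n_cl_per_unit - 1 extra chlorines replace hydrogens."""
    extra = n_cl_per_unit - 1  # Cl atoms added beyond base PVC
    mass = (2 * ATOMIC_MASS["C"]
            + (3 - extra) * ATOMIC_MASS["H"]
            + n_cl_per_unit * ATOMIC_MASS["Cl"])
    return n_cl_per_unit * ATOMIC_MASS["Cl"] / mass

print(f"base PVC:            {cl_mass_fraction(1):.1%}")  # ~56.7%
print(f"one extra Cl per 2C: {cl_mass_fraction(2):.1%}")  # ~73.1%, near the 74% ceiling
```

Fully substituting one additional chlorine per repeat unit lands just under the 74% upper bound quoted above, which is consistent with commercial resins stopping at 63–69%.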
Chlorinated polyvinyl chloride
Physics
1,008
24,078,726
https://en.wikipedia.org/wiki/Frost%20crack
Frost crack or Southwest canker is a form of tree bark damage sometimes found on thin-barked trees, visible as vertical fractures on the southerly facing surfaces of tree trunks. Frost crack is distinct from sun scald and sun crack and physically differs from normal rough-bark characteristics as seen in mature oaks, pines, poplars and other tree species. Normal bark formation The sloughing or peeling of the bark is a normal process, especially in the spring when the tree begins to grow. The outer layers of the bark are dead tissue and therefore cannot grow; the outer bark splits in order for the tree to grow in circumference, increasing its diameter. The inner bark cambium and phloem tissues are living, and form a new protective layer of cells as the outer bark pulls apart. Normal furrowed bark has a layer of bark over the wood below; however, bark may peel or fall off the tree in sheets (river birch), plates (sycamore and pine), strips (cedar) or blocks (dogwood). Causes Frost cracks are frequently the result of some sort of weakness in the bark which occurred to the tree earlier. In late winter and early spring, water in the phloem (known as the inner bark) and in the xylem (known as the wood) expands and contracts under often significantly fluctuating temperatures. Wood that is in some way damaged does not contract to the same degree as healthy wood. Rapid expansion and contraction of water within the wood and bark, particularly under rapidly falling night temperatures, can result in a frost crack, often accompanied by a loud explosive report. Research suggests that the main cause is actually 'frost-shrinkage' due to the freezing-out of cell wall moisture into the lumens of wood cells. Other causes are the expansion of freezing water in cell lumens, and additionally the formation of ice lenses within wood. As stated, previous defects such as healed wounds, branch stubs, etc. in tree trunks function as stress raisers and trigger the frost cracking. In winter, when the sun sets or the sky clouds over, the temperature of the tree drops very quickly; as the bark cools more quickly and the wood contracts more slowly, the bark rips open in a long crack, sometimes with an audible report likened to a rifle crack. Cold, clear, sunny days are the most likely to result in frost cracking, particularly as the heat energy from the low Sun on a winter day can be higher than at any other time of year. Trees that are growing in poorly drained sites are more subject to frost cracking than are those growing in drier, better drained soils. Trees suddenly left exposed by felling are highly susceptible. Physical appearance Although frost cracks may be up to several metres long, these cracks usually only become apparent in early spring; they are often found on the southwest side of the tree. These cracks may heal in the summer and reopen again in winter, so that successive cracking and healing over a number of years results in the formation of 'frost ribs' on the sides of affected trees. The wood beneath the frost crack is rarely damaged. The cracks usually originate at the base of the trunk and extend from a metre to several metres upwards. Some discoloration is often found at the site of the damage. Effect of damage Frost cracks often act as sites of entry for wood decay organisms, including insects, fungi and bacteria. Timber damaged in this way is unsuitable for use in buildings, etc.
Tree species susceptibility Species such as crab-apple, beech, walnut, oaks, maples, sycamore, horse-chestnut, willows and lime are prone to developing frost crack given the right conditions. Prevention Avoiding the use of fertilizers late in the growing season can reduce the incidence of splits, as can protecting the bark of young trees from physical damage such as that caused by lawn mowers, car bumpers, grazing animals, spades, trimmers, etc. Young trees can be protected in winter with paper tree wrap from ground level to the first main branches. Repair Most tree species try to seal the edges of wounds by forming a callus. The wound's edges begin to form this callus during the first growing season after the crack appears; the callus layer will continue to grow, and after many years the wound may close over entirely. See also Exploding tree Sun scald References External links Video footage and commentary on Frost Crack Plant physiology Trees
Frost crack
Biology
906
25,646,409
https://en.wikipedia.org/wiki/Cycle%20rank
In graph theory, the cycle rank of a directed graph is a digraph connectivity measure proposed first by Eggan and Büchi. Intuitively, this concept measures how close a digraph is to a directed acyclic graph (DAG), in the sense that a DAG has cycle rank zero, while a complete digraph of order n with a self-loop at each vertex has cycle rank n. The cycle rank of a directed graph is closely related to the tree-depth of an undirected graph and to the star height of a regular language. It has also found use in sparse matrix computations and logic. Definition The cycle rank r(G) of a digraph G = (V, E) is inductively defined as follows: If G is acyclic, then r(G) = 0. If G is strongly connected and E is nonempty, then r(G) = 1 + min over v ∈ V of r(G − v), where G − v is the digraph resulting from deletion of vertex v and all edges beginning or ending at v. If G is not strongly connected, then r(G) is equal to the maximum cycle rank among all strongly connected components of G. The tree-depth of an undirected graph has a very similar definition, using undirected connectivity and connected components in place of strong connectivity and strongly connected components. History Cycle rank was introduced by Eggan in the context of the star height of regular languages. It was rediscovered as a generalization of undirected tree-depth, which had been developed beginning in the 1980s and applied to sparse matrix computations. Examples The cycle rank of a directed acyclic graph is 0, while a complete digraph of order n with a self-loop at each vertex has cycle rank n. Apart from these, the cycle rank of a few other digraphs is known: the undirected path of order n, which possesses a symmetric edge relation and no self-loops, has cycle rank ⌊log n⌋. For the directed (m × n)-torus T(m,n), i.e., the cartesian product of two directed circuits of lengths m and n, we have r(T(n,n)) = n and r(T(m,n)) = min{m, n} + 1 for m ≠ n. Computing the cycle rank Computing the cycle rank is computationally hard: the corresponding decision problem is NP-complete, even for sparse digraphs of maximum outdegree at most 2. On the positive side, the problem is solvable in time O*(1.9129^n) on digraphs of maximum outdegree at most 2, and in time O*(2^n) on general digraphs. There is an approximation algorithm with approximation ratio O((log n)^(3/2)). Applications Star height of regular languages The first application of cycle rank was in formal language theory, for studying the star height of regular languages. Eggan established a relation between the theories of regular expressions, finite automata, and directed graphs. In subsequent years, this relation became known as Eggan's theorem. In automata theory, a nondeterministic finite automaton with ε-moves (ε-NFA) is defined as a 5-tuple (Q, Σ, δ, q0, F), consisting of: a finite set of states Q; a finite set of input symbols Σ; a set of labeled edges δ ⊆ Q × (Σ ∪ {ε}) × Q, referred to as the transition relation, where ε denotes the empty word; an initial state q0 ∈ Q; and a set of states F ⊆ Q distinguished as accepting states. A word w ∈ Σ* is accepted by the ε-NFA if there exists a directed path from the initial state q0 to some final state in F using edges from δ, such that the concatenation of all labels visited along the path yields the word w. The set of all words over Σ* accepted by the automaton is the language accepted by the automaton A. When speaking of digraph properties of a nondeterministic finite automaton A with state set Q, we naturally address the digraph with vertex set Q induced by its transition relation. Now the theorem is stated as follows.
Eggan's Theorem: The star height of a regular language L equals the minimum cycle rank among all nondeterministic finite automata with ε-moves accepting L. Proofs of this theorem are given by Eggan, and more recently by Sakarovitch. Cholesky factorization in sparse matrix computations Another application of this concept lies in sparse matrix computations, namely for using nested dissection to compute the Cholesky factorization of a (symmetric) matrix in parallel. A given sparse (n × n)-matrix M may be interpreted as the adjacency matrix of some symmetric digraph G on n vertices, in such a way that the non-zero entries of the matrix are in one-to-one correspondence with the edges of G. If the cycle rank of the digraph G is at most k, then the Cholesky factorization of M can be computed in at most k steps on a parallel computer with n processors. See also Circuit rank References Graph connectivity Graph invariants NP-complete problems
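The inductive definition above translates directly into code. Below is a brute-force Python sketch (exponential time, consistent with the NP-completeness noted above); the helper names are mine, and it is only intended for small graphs:

```python
def sccs(nodes, edges):
    """Strongly connected components via Kosaraju's two-pass DFS."""
    adj = {v: [] for v in nodes}
    radj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    seen, order = set(), []
    def dfs(u):                       # first pass: record finish order
        seen.add(u)
        for w in adj[u]:
            if w not in seen:
                dfs(w)
        order.append(u)
    for v in nodes:
        if v not in seen:
            dfs(v)
    comp = {}
    def rdfs(u, root):                # second pass on the reversed graph
        comp[u] = root
        for w in radj[u]:
            if w not in comp:
                rdfs(w, root)
    for v in reversed(order):
        if v not in comp:
            rdfs(v, v)
    groups = {}
    for v, root in comp.items():
        groups.setdefault(root, set()).add(v)
    return list(groups.values())

def cycle_rank(nodes, edges):
    """Evaluate the inductive definition of r(G) directly."""
    nodes = set(nodes)
    edges = [(u, v) for u, v in edges if u in nodes and v in nodes]
    rank = 0
    for c in sccs(nodes, edges):
        ce = [(u, v) for u, v in edges if u in c and v in c]
        if not ce:        # trivial SCC without a self-loop: acyclic, contributes 0
            continue
        # strongly connected with an edge: 1 + min over vertex deletions
        rank = max(rank, 1 + min(cycle_rank(c - {v}, ce) for v in c))
    return rank

# A DAG has rank 0; a directed cycle has rank 1;
# the complete digraph of order 3 with self-loops has rank 3.
print(cycle_rank({1, 2, 3}, [(1, 2), (2, 3)]))          # 0
print(cycle_rank({1, 2, 3}, [(1, 2), (2, 3), (3, 1)]))  # 1
k3 = [(u, v) for u in range(3) for v in range(3)]
print(cycle_rank(set(range(3)), k3))                    # 3
```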
Cycle rank
Mathematics
1,018
25,776,254
https://en.wikipedia.org/wiki/American%20Astronomical%20Society%20215th%20meeting
The 215th meeting of the American Astronomical Society (AAS) took place in Washington, D.C., Jan. 3 to Jan. 7, 2010. It was one of the largest astronomy meetings ever to take place, as 3,500 astronomers and researchers were expected to attend and give more than 2,200 scientific presentations. The meeting was billed as the "largest astronomy meeting in the universe". An array of discoveries was announced, along with new views of the universe that we inhabit: for example, that quiet planets like Earth, where life could develop, are probably plentiful, even though such planets face an abundance of cosmic hurdles, such as those experienced by our own planet in the past. Infrared scanning the sky The mission of NASA's Wide-field Infrared Survey Explorer (WISE) is to use infrared light to scan the entire sky for millions of hidden objects, including asteroids, failed stars and powerful galaxies. Launched on December 14, 2009, the data from WISE will serve as a navigation tool for other probes in space missions, such as NASA's Hubble Space Telescope and the Spitzer Space Telescope. The first image was presented at the 215th AAS meeting: an infrared snapshot of a region in the constellation Carina, near the Milky Way, taken shortly after the survey telescope ejected its cover. In a patch of sky three times larger than the Moon, the picture shows about 3,000 stars in the Carina constellation. Planet formation around massive stars The focus for discovering new exoplanets has been on Sun-like stars. The catalog of more than 400 exoplanets has proved that these searches are successful, because exoplanets of various sizes have been discovered. However, other star types are also a likely place to discover new exoplanets. New research, announced at the meeting, confirms that planet formation is a natural by-product of star formation. Planet formation occurs even around stars much more massive than the Sun. However, the lives of the stars which these planets orbit are so short that intelligent extraterrestrial life is not very likely. A- and B-type stars were surveyed for the research, which involved NASA's Spitzer Space Telescope, the Two Micron All-Sky Survey, and astronomers from the Center for Astrophysics Harvard & Smithsonian (CfA) and the National Optical Astronomy Observatory (NOAO). Gravitational wave detection In a span of three months, seventeen millisecond pulsars were discovered in the Milky Way galaxy. Unknown high-energy sources detected by NASA's Fermi Gamma-ray Space Telescope revealed the existence and location of the pulsars. This is an accelerated pace for discovering such objects, which could be used as a "galactic GPS" to detect gravitational waves passing near Earth. Although the pulsars are relatively old, they have not slowed, because these millisecond pulsars have been kept rapidly rotating and renewed with material by accretion of matter from a companion star. The combined total of 60 known millisecond pulsars creates an all-sky array. Precise monitoring of timing changes, utilizing this array, may allow the first direct detection of gravitational waves. Temperature, gravity, and planet migration According to the classical model of planet migration, the Earth should have been drawn into the Sun as a planetoid, along with other planets. However, a new theoretical model was presented at the annual meeting. It shows that the assumption that a proto-planetary disk around a star has constant temperature across its whole span is erroneous.
Portions of the disk are actually opaque and so cannot cool quickly by radiating heat out to space. This creates temperature differences across the disk, and these differences had not been accounted for in previously applied models. The differences in temperature counteract the natural gravitational pull of the sun (or proto-sun) at a crucial time during planet formation. Kepler space telescope On 4 January 2010, the Kepler space telescope team announced the discovery of the telescope's first five new exoplanets, named Kepler-4b, 5b, 6b, 7b and 8b. These exoplanets had sizes ranging from comparable to Neptune to larger than Jupiter, with orbits ranging from 3.3 to 4.9 days, and estimated temperatures ranging from 2,200 °F to 3,000 °F (1,200 °C to 1,650 °C). Super-Earth HD156668b The discovery of HD156668b, a super-Earth class exoplanet, was announced on January 7, 2010, at the 215th meeting of the American Astronomical Society (AAS) in Washington, D.C. Overview A super-Earth is an extrasolar planet with a mass between that of Earth and the Solar System's gas giants. The term super-Earth refers only to the mass of the planet and does not imply anything about the surface conditions or habitability. Andrew Howard of the University of California at Berkeley announced the planet's discovery at the meeting, where the details of the findings were first presented by the research group, which had used the twin Keck Telescopes in Hawaii to detect the exoplanet. With the twin telescopes functioning as a single observatory by means of interferometry, it was determined that HD156668b is only four times larger than the Earth, and the second smallest exoplanet yet found. There are over 400 exoplanets discovered thus far, and only a very small number are categorized as super-Earth class. Finding planets such as HD156668b, which are closer to Earth in size, has become a priority in astronomy. For example, the Kepler mission is part of the intense popular interest surrounding the discovery of hundreds of planets orbiting other stars. The Kepler telescope, however, has a more specific mission: to discover hundreds of terrestrial planets, defined as exoplanets that are one half to twice the size of the Earth. A priority is to find those in the habitable zone of their stars, where liquid water and possibly life might exist. Discoveries such as HD156668b allow astronomers such as the Keck research group to demonstrate they are able to find smaller and smaller planets. Ultimately, results such as those from the Keck group and the Kepler mission will allow the Solar System to be placed within a continuum of planetary systems in the Galaxy. HD156668b is considered to be relatively close at just 80 light-years away, in the constellation Hercules. According to early measurements, it appears to orbit its parent star about once every four days. The wobble of the planet's star revealed the existence of HD156668b. The alignment probability for finding a planet in an Earth-like orbit about a solar-like star is 0.5%; for the giant planets discovered in four-day orbits, the alignment probability is more like 10%. Other researchers from the California Institute of Technology, Yale University and Penn State University also participated in the study. Black hole update Black holes, along with new data, were a notable topic at the conference.
Black hole pairs Almost every galaxy has a black hole with a mass of one million to one billion times that of the sun. A supermassive black hole of more than 4 million solar masses is located in the center of our own Milky Way galaxy. As the universe has evolved, galaxies often collide and merge, creating larger galaxies. This has led to the supposition that galaxies in mid-merge should have two great black holes (a pair) orbiting one another. Expectations were that this should be a common observation, hand in hand with mid-merge collisions. However, observation had not validated this supposition; only a few orbiting pairs had been found. When observation did not match expectation, this posed problems for theories of how galaxies merge and grow. These statistics have recently been altered: 33 pairs of orbiting supermassive black holes were recently discovered, the first 32 pairs by the DEEP2 Galaxy Redshift Survey conducted with the Keck II Telescope on Hawaii's Mauna Kea. This survey determined which black hole was moving toward Earth at which time. When a black hole moves toward Earth, its light is blueshifted, meaning it has a shorter wavelength. Orbiting pairs were identified by looking for instances when one black hole was blueshifted and the other redshifted. The pairs orbit each other at 200 km per second, several thousand light-years apart. Intermediate mass black hole In a globular cluster 65 million light years from Earth, evidence is accumulating that a black hole, one thousand times more massive than the sun, has caused the destruction of a white dwarf star. It appears that the white dwarf is heating up as it falls toward the black hole. This event creates an intense stellar astrophysical X-ray source, called an ultraluminous X-ray source (ULX). The indication of this type of strong X-ray source means that it is more luminous than any known stellar X-ray source, but less luminous than the X-ray intensity of supermassive black holes, which places it in the range of theorized intermediate black holes. The exact nature of ULXs has remained a mystery, but one suggestion is that some ULXs are black holes with masses between about a hundred and a thousand times that of the Sun. A mix of detected natural elements seems to indicate the actual source of the X-ray emissions is debris from the white dwarf. If evidence authenticates the observations from NASA's Chandra X-ray Observatory and the Magellan telescopes, it would mean the first actual observation of an intermediate black hole. Furthermore, it would be the first confirmed observation of a black hole destroying a star. And it would support theories which state that intermediate black holes exist in globular clusters. Prior to this, the disruption and destruction of stars had been attributed to the supermassive black holes in the centers of galaxies. However, observing such an event in a globular cluster is a first. To date no candidate for an intermediate black hole has been widely accepted. A possible candidate Data obtained in optical light with the Magellan I and II telescopes in Las Campanas, Chile, also provide intriguing information about this object, which is found in the elliptical galaxy NGC 1399 in the Fornax galaxy cluster. The spectrum reveals emission from oxygen and nitrogen but no hydrogen, a rare set of signals from within globular clusters. The physical conditions deduced from the spectra suggest that the gas is orbiting a black hole of at least 1,000 solar masses.
To explain these observations, researchers suggest that a white dwarf star strayed too close to an intermediate-mass black hole and was ripped apart by tidal forces. The black hole is swallowing material from the white dwarf star, and the material's velocity implies the size of the black hole. In this scenario the X-ray emission is produced by debris from the disrupted white dwarf star that is heated as it falls towards the black hole, and the optical emission comes from debris further out that is illuminated by these X-rays. Another interesting aspect of this object is that it is found within a globular cluster, a very old, very tight grouping of stars. Astronomers have long suspected that globular clusters contain intermediate-mass black holes, but there has been no conclusive evidence of their existence there to date. If confirmed, this finding would represent the first such substantiation. Galactic Dark Matter Halo The Milky Way, and probably most other galaxies too, is surrounded by a halo of dark matter. The shape of the Milky Way's dark matter halo has now been determined; the research marks the first time scientists have measured the three-dimensional shape of a dark matter halo. See also American Astronomical Society Astrophysical Journal List of astronomical societies List of physics conferences References External links 215th meeting of the American Astronomical Society. Earth-Like Planets May Abound in the Milky Way. AAAS Science Now. April 2010. Astronomy organizations Physics conferences 2010 in science 2010 in Washington, D.C. 2010 conferences Astronomy conferences Science events in the United States January 2010 events in the United States
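The alignment probabilities quoted in the HD156668b section (0.5% for an Earth-like orbit, roughly 10% for a four-day orbit) can be reproduced from simple geometry: for a circular orbit around a Sun-like star, the chance that a randomly oriented orbit transits is about R_star / a, with the semi-major axis a from Kepler's third law. A hedged Python sketch under those assumptions (the function name and the 1 solar mass / 1 solar radius star are mine):

```python
# Geometric alignment (transit) probability ~ R_star / a for a circular
# orbit, with the semi-major axis a from Kepler's third law.

R_SUN_AU = 1.0 / 215.0  # solar radius in astronomical units (~0.00465 AU)

def alignment_probability(period_days: float) -> float:
    """Transit probability for a planet around a Sun-like (1 solar mass,
    1 solar radius) star, assuming a circular orbit."""
    a_au = (period_days / 365.25) ** (2.0 / 3.0)  # Kepler's third law
    return R_SUN_AU / a_au

print(f"Earth-like orbit:  {alignment_probability(365.25):.1%}")  # ~0.5%
print(f"four-day orbit:    {alignment_probability(4.0):.0%}")     # ~9-10%
```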
American Astronomical Society 215th meeting
Astronomy
2,487
7,706,010
https://en.wikipedia.org/wiki/Professional%20organizing
Decluttering means removing unnecessary items, sorting and arranging, or putting things back in place. This article deals with the clearing of places of residence, such as homes and commercial buildings, but the principles can also be applied to other areas. The activity can be done independently, or with help from family, friends or professionals. There are many methods for systematic decluttering and organizing. Some examples are Julie Morgenstern's SPACE, danshari and konmari. In danshari, a distinction is made between minimalists (who try to minimize their belongings) and those who try to optimize their belongings. History Cutting out unnecessary things, letting go of superfluous things and becoming free of attachment to things has roots in Buddhist philosophy. In 1984, professional organizing emerged as an industry in Los Angeles, USA. In 2009, Hideko Yamashita introduced the danshari method in her book Danshari: Shin Katazukejutsu (original title: 人生を変える断捨離). Danshari is constructed from the words dan (refuse), sha (dispose) and ri (separate). In 2010, danshari was nominated for a prize for new buzzwords awarded by the Japanese publisher Jiyuukokuminsha. Since then, there has been a resurgence of other authors and influencers sharing their decluttering methodologies. A notable example is the konmari decluttering method named after Marie Kondo. In 2015, she was listed as one of the world's 100 most influential people by Time Magazine. Professional Organizers A professional organizer helps individuals and companies with organization. In addition to the actual organizing process and implementation of systems and processes, it can be just as important that the client learns methods so that they can maintain order and master organizing independently in the future. Professional organizers can help clients identify the severity of clutter with regard to safety. As one of their main jobs, professional organizers help clients reduce excessive clutter (paper, books, clothing, shoes, office supplies, home decor items, etc.) in the home or in the office. For homeowners, a professional organizer might plan and reorganize the space of a room, improve paper management, consult on organizing skills (space, data, objects) or productivity skills (time, information, priorities) such as calendaring or task management, or coach in time-management and goal-setting. The work may also include body doubling. In a business setting, professional organizers work closely with their clients to increase productivity by streamlining paper-filing, electronic organization, and employee time-management. Organizers may be additionally trained in brain-based challenges such as left and right-brained strength/dominance, ADHD, OCD, hoarding, autism, chronic disorganization (CD), dementia, Alzheimer's, other vulnerable populations, and special populations such as children, students, creative types and seniors. In popular culture The organizing industry has been popularized through a number of TV programs. Among others, the British reality show Life Laundry ran for three seasons from 2002 to 2004. Other examples of English-language programs include Clean Sweep, Neat, Mission: Organization, Tidying Up with Marie Kondo, Hot Mess House, and Get Organized with The Home Edit. Methods There are a number of different decluttering methods and frameworks that can be used either by individuals by themselves or under the guidance of professionals.
The methods can be applied to everything from simple tasks, such as designing a functional closet, to complex ones, such as organizing a cross-country move. SPACE method Writer Julie Morgenstern suggests communicating these principles by using the acronym "SPACE", interpreted as: Sort, Purge, Assign a home, Containerize, Equalize. The last step ("E") consists of monitoring how the new system that has been created is working, adjusting it if needed, and maintaining it. This principle is applicable to every type of organization. Danshari method In the danshari method of Hideko Yamashita, the three parts of the word dan-sha-ri refer to: Refuse: refrain from unnecessary things you come across or are offered; Dispose: throw away unnecessary or unused things; Separate: let go and free yourself from attachment to things or desires for superfluous things. Rejecting what is not needed, throwing it away and refraining from depending on it is said to open one's mind, approach perfection and lead to an easier and more comfortable life. Konmari method In the konmari method of Marie Kondo, one begins by collecting all of one's belongings, one category at a time, then chooses to keep only the things that spark joy, and chooses a place for everything from then on. Kondo advises starting the process of decluttering by quickly and completely throwing away whatever in the house does not inspire joy. Following this philosophy will help the owner recognize the utility of each item and learn more about themselves, which will make it easier to decide what to keep or discard. Kondo says her method is partly inspired by the Shintō religion. Decluttering and organizing things properly can be a spiritual practice in Shintoism, which is concerned with the energy or divine spirit (kami) of things and the right way of living (kannagara). This can be done by treating the objects you own as valuable (not necessarily in actual monetary terms) so that you can appreciate them. Certification NAPO In April 2007, the National Association of Productivity and Organizing Professionals launched the Certified Professional Organizer® (CPO®) credential, administered by the Board of Certification for Professional Organizers® (BCPO®) and recognized as the industry standard for professional organizers. Certified Professional Organizers will perform assessments of clients' habits and routines, perception, personal preferences (learning/behavior styles), organizing skills (e.g., space, data, objects), productivity skills (e.g., time, information, priorities), technological/computer skills, physical considerations (e.g., injury, illness, limited mobility), mental health considerations (e.g., ADHD, OCD, hoarding, dementia) and other factors (e.g., influence of age, religion, culture). They will evaluate the environment's characteristics of physical space (e.g., square footage, power source, doors/windows, furniture and equipment) and safety. They will identify external factors (e.g., company policies, family dynamics, lease agreements) and determine the available budget.
They will develop a project plan by reviewing their assessment, determining scope, prioritizing objectives, determining tasks, and identifying resources such as organizing tools (e.g., containers/labels), productivity tools (e.g., calendar/task management systems), technology tools (digital storage, cloud-based, online, devices, apps), furniture and equipment, referrals (e.g., other professionals, educational materials), and removal options (e.g., donation, disposal, selling, shredding). They will establish a timeline, estimate costs (e.g., consulting fees, supplies, vendors), and finalize the project plan. They will implement the approved project plan by teaching, transferring and applying organizing and productivity fundamentals and methodologies (e.g., consolidating, sorting, categorizing, eliminating excess, identifying and optimizing containers, decision-making, maximizing function and usability, process and workflow, goal setting and prioritization, planning and time management, maintaining systems, optimizing personal resources such as energy, money and health, creation of routines and habits, boundary-setting and delegation); use communication skills of clarification, negotiation and influence; address challenges and obstacles such as procrastination, perfectionism and scope creep; manage the project (e.g., resources, budget, schedule and expectations); and evaluate client satisfaction with processes, timeline and resources. They will follow up and maintain the project by evaluating the effectiveness and sustainability of changes and the transfer of skills, and by making recommendations of modifications and resources. They will recognize and apply the BCPO Code of Ethics and attend to protection of records, identity and cybersecurity. The Institute for Challenging Disorganization® The Institute for Challenging Disorganization® offers a certification program focused on chronic disorganization. The Virtual Organizer The Virtual Organizer offers the Certified Virtual Organizing Professional™ certification. Ultimate Academy Ultimate Academy offers a Certified Ultimate Professional Organizer™ certification. The American Society of Professional Organizers The American Society of Professional Organizers offers the Certified Home Organizer® program. Problematic decluttering In some cases, people can get so caught up in clearing that they end up throwing away or selling things that belong to family members without the permission of the owners. This can be done either intentionally or unintentionally. It can include collections that are valuable financially and/or emotionally, and can be a factor in divorces. It is not necessarily destructive to throw away other people's things, but to avoid misunderstandings it is important for couples who live together to communicate and agree on their values. Following the COVID-19 pandemic, the lack of availability of food and other necessities clarified possible disadvantages of living without stocks of basic supplies. Some minimalists thus changed their mindset accordingly, leading to speculation on whether the number of "preppers" will increase.
See also Adjustable shelving Bookcase Cabinetry Closet Eurobox, a system of reusable containers for transport and storage in standardised sizes Filing cabinet Kitchen cabinet Lean thinking, methods for improving the efficiency, effectiveness and quality of work Mobile shelving Pantry Personal organizer Shadow board, a method for organizing tools Shelf (storage) Small office/home office Study (room) Wardrobe References External links New York Times article on using professional organizing services. Institute for Challenging Disorganization on brain-based organizing. The National Association of Productivity and Organizing Professionals International OCD Foundation Alzheimer's Foundation of America The International Federation of Professional Organizing Associations American Society of Professional Organizers National Association of Senior and Specialty Move Managers Organization Coaching Time management Ordering
Professional organizing
Physics
2,085
74,925,213
https://en.wikipedia.org/wiki/Representational%20harm
Systems cause representational harm when they misrepresent a group of people in a negative manner. Representational harms include perpetuating harmful stereotypes about, or minimizing the existence of, a social group such as a racial, ethnic, gender, or religious group. Machine learning algorithms often commit representational harm when they learn patterns from data that have algorithmic bias, and this has been shown to be the case with large language models. While preventing representational harm in models is essential to prevent harmful biases, researchers often lack precise definitions of representational harm and conflate it with allocative harm, an unequal distribution of resources among social groups, which is more widely studied and easier to measure. However, recognition of representational harms is growing, and preventing them has become an active research area. Researchers have recently developed methods to effectively quantify representational harm in algorithms, making progress on preventing this harm in the future. Types Three prominent types of representational harm are stereotyping, denigration, and misrecognition. These subcategories present many dangers to individuals and groups. Stereotypes are oversimplified and usually undesirable representations of a specific group of people, usually by race and gender. This often leads to the denial of educational, employment, housing, and other opportunities. For example, the model minority stereotype of Asian Americans as highly intelligent and good at mathematics can be damaging professionally and academically. Denigration is the action of unfairly criticizing individuals, which frequently happens when social groups are demeaned. For example, when searching for "Black-sounding" names versus "white-sounding" ones, some retrieval systems bolster the false perception of criminality by displaying ads for bail-bonding businesses. A system may shift the representation of a group to be of lower social status, often resulting in disregard from society. Misrecognition, or incorrect recognition, can manifest in many forms, including, but not limited to, erasing and alienating social groups and denying people the right to self-identify. Erasing and alienating social groups involves the unequal visibility of certain social groups; specifically, systematic ineligibility in algorithmic systems perpetuates inequality by contributing to the underrepresentation of social groups. Not allowing people to self-identify is closely related, as people's identities can be 'erased' or 'alienated' in these algorithms. Misrecognition causes more than surface-level harm to individuals: psychological harm, social isolation, and emotional insecurity can emerge from this subcategory of representational harm. Quantification As the dangers of representational harm have become better understood, some researchers have developed methods to measure representational harm in algorithms. Modeling stereotyping is one way to identify representational harm. Representational stereotyping can be quantified by comparing the predicted outcomes for one social group with the ground-truth outcomes for that group observed in real data. For example, if individuals from group A achieve an outcome with a probability of 60%, stereotyping would be observed if the model predicted individuals from that group to achieve the outcome with a probability greater than 60%.
One research group modeled stereotyping in the context of classification, regression, and clustering problems, and developed a set of rules to quantitatively determine if the model predictions exhibit stereotyping in each of these cases. Other attempts to measure representational harms have focused on applications of algorithms in specific domains such as image captioning, the act of an algorithm generating a short description of an image. In a study on image captioning, researchers measured five types of representational harm. To quantify stereotyping, they measured the number of incorrect words included in the model-generated image caption when compared to a gold-standard caption. They manually reviewed each of the incorrectly included words, determining whether the incorrect word reflected a stereotype associated with the image or whether it was an unrelated error, which allowed them to have a proxy measure of the amount of stereotyping occurring in this caption generation. These researchers also attempted to measure demeaning representational harm. To measure this, they analyzed the frequency with which humans in the image were mentioned in the generated caption. It was hypothesized that if the individuals were not mentioned in the caption, then this was a form of dehumanization. Examples One of the most notorious examples of representational harm was committed by Google in 2015, when an algorithm in Google Photos classified Black people as gorillas. Developers at Google said that the problem was caused because there were not enough faces of Black people in the training dataset for the algorithm to learn the difference between Black people and gorillas. Google issued an apology and fixed the issue by blocking its algorithms from classifying anything as a primate. In 2023, Google's photos algorithm was still blocked from identifying gorillas in photos. Another prevalent example of representational harm is the possibility of stereotypes being encoded in word embeddings, which are trained using a wide range of text. These word embeddings are the representation of a word as an array of numbers in vector space, which allows an individual to calculate the relationships and similarities between words. However, recent studies have shown that these word embeddings may commonly encode harmful stereotypes, such as the common example that the phrase "computer programmer" is oftentimes more closely related to "man" than it is to "woman" in vector space. This could be interpreted as a misrepresentation of computer programming as a profession better performed by men, which would be an example of representational harm. References Wikipedia Student Program Technology Information ethics AI safety
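The rate-comparison idea described in the quantification section can be sketched in a few lines of Python. The function name, the toy data, and the interpretation of a positive "gap" are illustrative assumptions of mine, not the published method:

```python
# Sketch of the stereotyping check described above: compare a model's
# predicted outcome rate for a group against the ground-truth rate.

def stereotyping_gap(y_pred, y_true, groups, group):
    """Difference between predicted and observed positive rates for one group.
    A positive gap means the model over-predicts the outcome for the group."""
    idx = [i for i, g in enumerate(groups) if g == group]
    pred_rate = sum(y_pred[i] for i in idx) / len(idx)
    true_rate = sum(y_true[i] for i in idx) / len(idx)
    return pred_rate - true_rate

# Toy data: group "A" truly achieves the outcome 60% of the time,
# but the model predicts it 80% of the time -> gap of +0.2.
groups = ["A"] * 5 + ["B"] * 5
y_true = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
print(stereotyping_gap(y_pred, y_true, groups, "A"))  # 0.2
```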
Representational harm
Technology,Engineering
1,170
75,868,083
https://en.wikipedia.org/wiki/IEEE%20802.11bn
IEEE 802.11bn, dubbed Ultra High Reliability (UHR), is to be the next IEEE 802.11 standard. It is also designated Wi-Fi 8. As its name suggests, 802.11bn aims to improve the reliability of Wi-Fi. Notes References Wi-Fi
IEEE 802.11bn
Technology
61
48,164
https://en.wikipedia.org/wiki/Maximal%20ideal
In mathematics, more specifically in ring theory, a maximal ideal is an ideal that is maximal (with respect to set inclusion) amongst all proper ideals. In other words, I is a maximal ideal of a ring R if there are no other ideals contained between I and R. Maximal ideals are important because the quotients of rings by maximal ideals are simple rings, and in the special case of unital commutative rings they are also fields. In noncommutative ring theory, a maximal right ideal is defined analogously as being a maximal element in the poset of proper right ideals, and similarly, a maximal left ideal is defined to be a maximal element of the poset of proper left ideals. Since a one-sided maximal ideal A is not necessarily two-sided, the quotient R/A is not necessarily a ring, but it is a simple module over R. If R has a unique maximal right ideal, then R is known as a local ring, and the maximal right ideal is also the unique maximal left and unique maximal two-sided ideal of the ring, and is in fact the Jacobson radical J(R). It is possible for a ring to have a unique maximal two-sided ideal and yet lack unique maximal one-sided ideals: for example, in the ring of 2 by 2 square matrices over a field, the zero ideal is a maximal two-sided ideal, but there are many maximal right ideals. Definition There are other equivalent ways of expressing the definition of maximal one-sided and maximal two-sided ideals. Given a ring R and a proper ideal I of R (that is, I ≠ R), I is a maximal ideal of R if any of the following equivalent conditions hold: There exists no other proper ideal J of R so that I ⊊ J. For any ideal J with I ⊆ J, either J = I or J = R. The quotient ring R/I is a simple ring. There is an analogous list for one-sided ideals, for which only the right-hand versions will be given. For a right ideal A of a ring R, the following conditions are equivalent to A being a maximal right ideal of R: There exists no other proper right ideal B of R so that A ⊊ B. For any right ideal B with A ⊆ B, either B = A or B = R. The quotient module R/A is a simple right R-module. Maximal right/left/two-sided ideals are the dual notion to that of minimal ideals. Examples If F is a field, then the only maximal ideal is {0}. In the ring Z of integers, the maximal ideals are the principal ideals generated by a prime number. More generally, all nonzero prime ideals are maximal in a principal ideal domain. The ideal (2, x) is a maximal ideal in the ring Z[x]. Generally, the maximal ideals of Z[x] are of the form (p, f(x)), where p is a prime number and f(x) is a polynomial in Z[x] which is irreducible modulo p. Every prime ideal is a maximal ideal in a Boolean ring, i.e., a ring consisting of only idempotent elements. In fact, every prime ideal is maximal in a commutative ring whenever there exists an integer n > 1 such that x^n = x for any x in the ring. The maximal ideals of the polynomial ring C[x] are the principal ideals generated by x − c for some c ∈ C. More generally, the maximal ideals of the polynomial ring K[x1, ..., xn] over an algebraically closed field K are the ideals of the form (x1 − a1, ..., xn − an). This result is known as the weak Nullstellensatz. Properties An important ideal of the ring called the Jacobson radical can be defined using maximal right (or maximal left) ideals. If R is a unital commutative ring with an ideal m, then k = R/m is a field if and only if m is a maximal ideal. In that case, R/m is known as the residue field. This fact can fail in non-unital rings. For example, 4Z is a maximal ideal in 2Z, but the quotient 2Z/4Z is not a field. If L is a maximal left ideal, then R/L is a simple left R-module.
Conversely, in rings with unity, any simple left R-module arises this way. Incidentally, this shows that a collection of representatives of simple left R-modules is actually a set, since it can be put into correspondence with part of the set of maximal left ideals of R. Krull's theorem (1929): Every nonzero unital ring has a maximal ideal. The result is also true if "ideal" is replaced with "right ideal" or "left ideal". More generally, it is true that every nonzero finitely generated module has a maximal submodule. Suppose I is an ideal which is not R (respectively, A is a right ideal which is not R). Then R/I is a ring with unity (respectively, R/A is a finitely generated module), and so the above theorems can be applied to the quotient to conclude that there is a maximal ideal (respectively, maximal right ideal) of R containing I (respectively, A). Krull's theorem can fail for rings without unity. A radical ring, i.e. a ring in which the Jacobson radical is the entire ring, has no simple modules and hence has no maximal right or left ideals. See regular ideals for possible ways to circumvent this problem. In a commutative ring with unity, every maximal ideal is a prime ideal. The converse is not always true: for example, in any non-field integral domain the zero ideal is a prime ideal which is not maximal. Commutative rings in which prime ideals are maximal are known as zero-dimensional rings, where the dimension used is the Krull dimension. A maximal ideal of a noncommutative ring might not be prime in the commutative sense. For example, let M2(Z) be the ring of all 2 × 2 matrices over the integers Z. This ring has a maximal ideal M2(pZ) for any prime p, but this is not a prime ideal in the commutative (elementwise) sense, since (in the case p = 2) the matrix units e11 and e22, each having a single entry 1 on the diagonal and zeros elsewhere, are not in M2(2Z), but their product, the zero matrix, is. However, maximal ideals of noncommutative rings are prime in the generalized sense below. Generalization For an R-module A, a maximal submodule M of A is a proper submodule satisfying the property that for any other submodule N, M ⊆ N ⊆ A implies N = M or N = A. Equivalently, M is a maximal submodule if and only if the quotient module A/M is a simple module. The maximal right ideals of a ring R are exactly the maximal submodules of the module RR. Unlike rings with unity, a nonzero module does not necessarily have maximal submodules. However, as noted above, finitely generated nonzero modules have maximal submodules, and also projective modules have maximal submodules. As with rings, one can define the radical of a module using maximal submodules. Furthermore, maximal ideals can be generalized by defining a maximal sub-bimodule M of a bimodule B to be a proper sub-bimodule of B which is contained in no other proper sub-bimodule of B. The maximal ideals of R are then exactly the maximal sub-bimodules of the bimodule RRR. See also Prime ideal References Ideals (ring theory) Ring theory Prime ideals
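The commutative examples above can be made concrete in the finite rings Z/nZ, where every ideal is generated by a divisor of n and the maximal ideals are exactly those generated by the prime divisors of n (the quotients being the fields Z/pZ). A small Python sketch; the function names are mine:

```python
# Maximal ideals of Z/nZ: every ideal is generated by a divisor d of n,
# and the maximal ones are generated by the prime divisors of n.

def ideals(n):
    """Map each divisor d of n to the ideal it generates in Z/nZ."""
    return {d: frozenset((d * k) % n for k in range(n))
            for d in range(1, n + 1) if n % d == 0}

def maximal_ideals(n):
    ids = ideals(n)
    whole_ring = ids[1]
    proper = {d: s for d, s in ids.items() if s != whole_ring}
    # An ideal is maximal if no other proper ideal strictly contains it.
    return sorted(d for d, s in proper.items()
                  if not any(s < t for t in proper.values()))

print(maximal_ideals(12))  # [2, 3]: quotients are the fields Z/2Z and Z/3Z
print(maximal_ideals(30))  # [2, 3, 5]
```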
Maximal ideal
Mathematics
1,505
46,871,288
https://en.wikipedia.org/wiki/Phlegmacium%20balteaticlavatum
Phlegmacium balteaticlavatum is a species of fungus in the family Cortinariaceae. Taxonomy It was originally described in 2014 and placed in the large mushroom genus Cortinarius (subgenus Phlegmacium). In 2022 the species was transferred from Cortinarius and reclassified as Phlegmacium balteaticlavatum based on genomic data. Habitat and distribution The species is found in Finland, where it grows in mixed forests with trees such as birch, poplar, willow, spruce, and pine. Fruitbodies occur from mid-August to mid-September. Etymology The specific epithet balteaticlavatum refers to both its affinity to C. balteatus and its club-shaped (clavate) stipe. See also List of Cortinarius species References External links balteaticlavatum Fungi described in 2014 Fungi of Finland Fungus species
Phlegmacium balteaticlavatum
Biology
189
106,164
https://en.wikipedia.org/wiki/Globular%20protein
In biochemistry, globular proteins or spheroproteins are spherical ("globe-like") proteins and are one of the common protein types (the others being fibrous, disordered and membrane proteins). Globular proteins are somewhat water-soluble (forming colloids in water), unlike the fibrous or membrane proteins. There are multiple fold classes of globular proteins, since there are many different architectures that can fold into a roughly spherical shape. The term globin can refer more specifically to proteins including the globin fold. Globular structure and solubility The term globular protein is quite old (dating probably from the 19th century) and is now somewhat archaic given the hundreds of thousands of known proteins and the more elegant and descriptive structural motif vocabulary. The globular nature of these proteins cannot be determined without modern techniques such as ultracentrifugation or dynamic light scattering. The spherical structure is induced by the protein's tertiary structure. The molecule's apolar (hydrophobic) amino acids are bound towards the molecule's interior, whereas polar (hydrophilic) amino acids are bound outwards, allowing dipole–dipole interactions with the solvent, which explains the molecule's solubility. Globular proteins are only marginally stable because the free energy released when the protein folds into its native conformation is relatively small. This is because protein folding entails an entropic cost: whereas the primary sequence of a polypeptide chain can form numerous conformations, the native globular structure restricts its conformation to only a few. The resulting decrease in randomness is offset by non-covalent interactions, such as hydrophobic interactions, that stabilize the structure. Protein folding Although it is still unknown how proteins fold up naturally, new evidence has helped advance understanding. Part of the protein folding problem is that several non-covalent, weak interactions are formed, such as hydrogen bonds and Van der Waals interactions. Via several techniques, the mechanism of protein folding is currently being studied. Even from the protein's denatured state, it can fold into the correct structure. Globular proteins seem to have two mechanisms for protein folding, either the diffusion-collision model or the nucleation condensation model, although recent findings have shown globular proteins, such as PTP-BL PDZ2, that fold with characteristic features of both models. These new findings have shown that the transition states of proteins may affect the way they fold. The folding of globular proteins has also recently been connected to the treatment of diseases, and anti-cancer ligands have been developed which bind to the folded but not the natural protein. These studies have shown that the folding of globular proteins affects their function. By the second law of thermodynamics, the free energy difference between unfolded and folded states is contributed by enthalpy and entropy changes. As the free energy difference in a globular protein that results from folding into its native conformation is small, it is marginally stable, thus providing a rapid turnover rate and effective control of protein degradation and synthesis. Role Unlike fibrous proteins which only play a structural function, globular proteins can act as: Enzymes, by catalyzing organic reactions taking place in the organism in mild conditions and with a great specificity. Different esterases fulfill this role.
Messengers, by transmitting messages to regulate biological processes; this function is performed by hormones such as insulin. Transporters of other molecules through membranes. Stocks of amino acids. Regulatory roles are also performed by globular proteins rather than fibrous proteins. Structural proteins, e.g., actin and tubulin, which are globular and soluble as monomers, but polymerize to form long, stiff fibers. Members Among the best known globular proteins is hemoglobin, a member of the globin protein family. Other globular proteins are the alpha, beta and gamma globulins (IgA, IgD, IgE, IgG and IgM). See protein electrophoresis for more information on the different globulins. Nearly all enzymes with major metabolic functions are globular in shape, as are many signal transduction proteins. Albumins are also globular proteins, although, unlike all of the other globular proteins, they are completely soluble in water. They are not soluble in oil. References Proteins by structure Protein structure
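The marginal-stability argument in the structure section reduces to a single relation between enthalpy and entropy. The numerical range in the comment below is a commonly quoted textbook magnitude for small globular proteins, not a figure from this article:

```latex
% Folding free energy as the (small) difference of two larger terms:
\Delta G_{\mathrm{fold}} \;=\; \Delta H \;-\; T\,\Delta S
% Near physiological temperature, |\Delta G_{fold}| for a typical small
% globular protein is often quoted as only ~20--60 kJ/mol, which is why
% native states are described above as only marginally stable.
```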
Globular protein
Chemistry
928
45,696,106
https://en.wikipedia.org/wiki/UW%20Coronae%20Borealis
UW Coronae Borealis, also known as MS 1603.6+2600, is a low-mass X-ray binary star system in the constellation Corona Borealis. Astronomer Simon Morris and colleagues discovered the X-ray source in 1990 and were able to match it up with a faint star with an average visual magnitude of 19.4. The system is thought to be made up of a neutron star that has an accretion disk that draws material from its companion, a star less massive than the Sun. The disk is asymmetrical. The variability of the system is complex, with several periods identified: the two components orbit each other every 111 minutes, while there is another period of 112.6 minutes. The beat period of these is 5.5 days, which is thought to represent the precession of the asymmetrical accretion disk around the neutron star. References Corona Borealis X-ray binaries Coronae Borealis, UW
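The 5.5-day beat period quoted above can be checked directly from the two photometric periods: two close periods P1 and P2 beat at 1/P_beat = 1/P1 − 1/P2. A quick Python check:

```python
# Beat period of the 111-min orbital period and the 112.6-min photometric
# period, as described for UW CrB above.

p1, p2 = 111.0, 112.6  # minutes
beat_minutes = 1.0 / (1.0 / p1 - 1.0 / p2)
print(round(beat_minutes / (60 * 24), 2))  # ~5.43 days, consistent with ~5.5 d
```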
UW Coronae Borealis
Astronomy
199
3,805,460
https://en.wikipedia.org/wiki/Ardent%20spirit
Ardent spirits (ethyl alcohol), in alchemy, are those liquors obtained after repeated distillations from fermented vegetables. They are thus called because they will take fire and burn. Examples include brandy, spirits of wine, etc. References Alchemical substances
Ardent spirit
Chemistry
58
11,421,976
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD37
In molecular biology, SNORD37 (also known as U37) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA U37 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the C/D box family function in directing site-specific 2'-O-methylation of substrate RNAs. This snoRNA was originally identified by computational screening of vertebrate genomes for conserved C/D box motifs within intronic regions and expression experimentally verified by northern blotting. The mouse orthologue was identified. SNORD37 is predicted to guide the 2'O-ribose methylation of the 28S ribosomal RNA (rRNA) at residue A3697. References External links Small nuclear RNA
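The C/D box motifs given above (C box UGAUGA, D box CUGA) are plain sequence strings, so locating candidate boxes in an RNA sequence amounts to a simple pattern scan. An illustrative Python sketch; the demo sequence is a made-up fragment, not the real SNORD37:

```python
# Illustrative scan for the C/D box motifs described above.
import re

C_BOX = "UGAUGA"  # conserved C box
D_BOX = "CUGA"    # conserved D box

def find_boxes(rna: str):
    """Return (position, box-name) pairs for C and D box matches."""
    hits = []
    for name, motif in (("C", C_BOX), ("D", D_BOX)):
        for m in re.finditer(motif, rna):
            hits.append((m.start(), name))
    return sorted(hits)

demo = "GGAUGAUGACCAUUGCAAAUGCUGAGG"  # hypothetical snoRNA fragment
print(find_boxes(demo))  # [(3, 'C'), (21, 'D')]
```

In a real C/D box snoRNA the C box lies near the 5' end and the D box near the 3' end, so a fuller analysis would also check the positions of the hits, not just their presence.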
Small nucleolar RNA SNORD37
Chemistry
262
47,106,648
https://en.wikipedia.org/wiki/Wei-Shou%20Hu
Wei-Shou Hu (born November 5, 1951) is a Taiwanese-American chemical engineer. He is currently the Distinguished McKnight University Professor of Chemical Engineering and Materials Science at the University of Minnesota. Education He earned his B.S. in agricultural chemistry from National Taiwan University in 1974 and his Ph.D. in biochemical engineering from the Massachusetts Institute of Technology, under the guidance of Daniel I.C. Wang, in 1983. He has been a professor with the University of Minnesota since 1983. Hu has long influenced the field of cell culture bioprocessing, since its infancy, by steadfastly introducing quantitative and systematic analysis into the field. His work, which covers areas such as modeling and controlling cell metabolism, modulating glycosylation, and process data mining, has helped shape the advances of biopharmaceutical process technology. He recently led an industrial consortium to embark on genomic research on Chinese hamster ovary cells, the main workhorse of biomanufacturing, and to promote post-genomic research in cell bioprocessing. Hu's research focuses on the field of cell culture bioprocessing, particularly metabolic control of the physiological state of the cell. In addition to his work with Chinese hamster ovary cells, his work has enabled the use of process engineering for cell therapy, especially with liver cells. Hu has written four biotechnology books. One of his articles has been cited by 63 other publications. He is the 2005 recipient of the Marvin Johnson Award from the American Chemical Society, the distinguished service award of the Society of Biological Engineers, a special award from the Asia Pacific Biochemical Engineering Conference (2009), and the Amgen Award from Engineering Conferences International, as well as both the distinguished service award and the Division award from the Food, Pharmaceuticals and Bioengineering Division of the American Institute of Chemical Engineers. He has authored the books Bioseparations, Cell Culture Technology for Pharmaceutical and Cell-Based Therapies and Cell Culture Bioprocess Engineering. References External links Hu Group Website University of Minnesota page Cellular Bioprocess Technology Course 1951 births Living people 20th-century American engineers 21st-century American engineers American chemical engineers Biochemical engineering National Taiwan University alumni Minnesota CEMS MIT School of Engineering alumni Place of birth missing (living people) Taiwanese chemical engineers Taiwanese emigrants to the United States University of Minnesota faculty
Wei-Shou Hu
Chemistry,Engineering,Biology
478
39,739,775
https://en.wikipedia.org/wiki/Mohamed%20M.%20Atalla
Mohamed M. Atalla (August 4, 1924 – December 30, 2009) was an Egyptian-American engineer, physicist, cryptographer, inventor and entrepreneur. He was a semiconductor pioneer who made important contributions to modern electronics. He is best known for inventing, along with his colleague Dawon Kahng, the MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) in 1959, which, along with Atalla's earlier surface passivation processes, had a significant impact on the development of the electronics industry. He is also known as the founder of the data security company Atalla Corporation (now Utimaco Atalla), founded in 1972. He received the Stuart Ballantine Medal (now the Benjamin Franklin Medal in physics) and was inducted into the National Inventors Hall of Fame for his important contributions to semiconductor technology as well as data security. Born in Port Said, Egypt, he was educated at Cairo University in Egypt and then Purdue University in the United States, before joining Bell Labs in 1949 and later adopting the more anglicized "John" or "Martin" M. Atalla as professional names. He made several important contributions to semiconductor technology at Bell Labs, including his development of the surface passivation process and his demonstration of the MOSFET with Kahng in 1959. His work on the MOSFET was initially overlooked at Bell, which led to his resignation from Bell and joining Hewlett-Packard (HP), founding its Semiconductor Lab in 1962 and then HP Labs in 1966, before leaving to join Fairchild Semiconductor, founding its Microwave & Optoelectronics division in 1969. His work at HP and Fairchild included research on Schottky diode, gallium arsenide (GaAs), gallium arsenide phosphide (GaAsP), indium arsenide (InAs) and light-emitting diode (LED) technologies. He later left the semiconductor industry and became an entrepreneur in cryptography and data security. In 1972, he founded Atalla Corporation and filed a patent for a remote Personal Identification Number (PIN) security system. In 1973, he released the first hardware security module, the "Atalla Box", which encrypted PIN and ATM messages, and went on to secure the majority of the world's ATM transactions. He later founded the Internet security company TriStrata Security in the 1990s. He died in Atherton, California, on December 30, 2009. Early life and education (1924–1949) Mohamed Mohamed Atalla was born in Port Said, Kingdom of Egypt. He studied at Cairo University in Egypt, where he received his Bachelor of Science degree. He later moved to the United States to study mechanical engineering at Purdue University. There, he received his master's degree (MSc) in 1947 and his doctorate (PhD) in 1949, both in mechanical engineering. His MSc thesis was titled "High Speed Flow in Square Diffusers" and his PhD thesis was titled "High Speed Compressible Flow in Square Diffusers". Bell Telephone Laboratories (1949–1962) After completing his PhD at Purdue University, Atalla was employed at Bell Telephone Laboratories (BTL) in 1949. In 1950, he began working at Bell's New York City operations, where he worked on problems related to the reliability of electromechanical relays, and worked on circuit-switched telephone networks. With the emergence of transistors, Atalla was moved to the Murray Hill lab, where he began leading a small transistor research team in 1956.
Despite coming from a mechanical engineering background and having no formal education in physical chemistry, he proved himself to be a quick learner in physical chemistry and semiconductor physics, eventually demonstrating a high level of skill in these fields. He researched, among other things, the surface properties of silicon semiconductors and the use of silica as a protective layer of silicon semiconductor devices. He eventually adopted the pseudonyms "Martin" M. Atalla and "John" M. Atalla for his professional career. Between 1956 and 1960, Atalla led a small team of several BTL researchers, including Eileen Tannenbaum, Edwin Joseph Scheibner and Dawon Kahng. They were new recruits at BTL, like himself, with no senior researchers on the team. Their work was initially not taken seriously by senior management at BTL and its owner AT&T, both because the team consisted of new recruits and because its leader, Atalla, came from a mechanical engineering background, in contrast to the physicists, physical chemists and mathematicians who were taken more seriously, despite Atalla demonstrating advanced skills in physical chemistry and semiconductor physics. Despite working mostly on their own, Atalla and his team made significant advances in semiconductor technology. According to Fairchild Semiconductor engineer Chih-Tang Sah, the work of Atalla and his team during 1956–1960 was "the most important and significant technology advance" in silicon semiconductor technology.

Surface passivation by thermal oxidation
An initial focus of Atalla's research was to solve the problem of silicon surface states. At the time, the electrical conductivity of semiconductor materials such as germanium and silicon was limited by unstable quantum surface states, where electrons are trapped at the surface due to dangling bonds, which occur because unsaturated bonds are present at the surface. This prevented electricity from reliably penetrating the surface to reach the semiconducting silicon layer. Due to the surface state problem, germanium was the dominant semiconductor material of choice for transistors and other semiconductor devices in the early semiconductor industry, as germanium was capable of higher carrier mobility.

He made a breakthrough with his development of the surface passivation process. This is the process by which a semiconductor surface is rendered inert, so that it does not change semiconductor properties as a result of interaction with air or other materials in contact with the surface or edge of the crystal. The surface passivation process was first developed by Atalla in the late 1950s. He discovered that the formation of a thermally grown silicon dioxide (SiO2) layer greatly reduced the concentration of electronic states at the silicon surface, and discovered the important quality of SiO2 films to preserve the electrical characteristics of p–n junctions and prevent these electrical characteristics from deteriorating by the gaseous ambient environment. He found that silicon oxide layers could be used to electrically stabilize silicon surfaces. His surface passivation process was a new method of semiconductor device fabrication that involves coating a silicon wafer with an insulating layer of silicon oxide so that electricity could reliably penetrate to the conducting silicon below. By growing a layer of silicon dioxide on top of a silicon wafer, Atalla was able to overcome the surface states that prevented electricity from reaching the semiconducting layer.
His surface passivation method was a critical step that made possible the ubiquity of silicon integrated circuits, and it later became critical to the semiconductor industry. For the surface passivation process, he developed the method of thermal oxidation, which was a breakthrough in silicon semiconductor technology. Atalla first published his findings in BTL memos during 1957, before presenting his work in 1958 at an Electrochemical Society meeting and at the Radio Engineers' Semiconductor Device Research Conference. The semiconductor industry saw the potential significance of Atalla's surface oxidation method, with RCA calling it a "milestone in the surface field." The same year, he made further refinements to the process with his colleagues Eileen Tannenbaum and Edwin Joseph Scheibner, before they published their results in May 1959. According to Fairchild Semiconductor engineer Chih-Tang Sah, the surface passivation process developed by Atalla and his team "blazed the trail" that led to the development of the silicon integrated circuit. Atalla's silicon transistor passivation technique by thermal oxide was the basis for two important inventions in 1959: the MOSFET (MOS transistor) by Atalla and Dawon Kahng at Bell Labs, and the planar process by Jean Hoerni at Fairchild Semiconductor.

MOSFET (MOS transistor)
Building on his earlier pioneering research on the surface passivation and thermal oxidation processes, Atalla developed the metal–oxide–semiconductor (MOS) process. Atalla then proposed that a field-effect transistor, a concept first envisioned in the 1920s and confirmed experimentally in the 1940s but never achieved as a practical device, be built of metal-oxide-silicon. Atalla assigned the task of assisting him to Dawon Kahng, a Korean scientist who had recently joined his group. That led to the invention of the MOSFET (metal–oxide–semiconductor field-effect transistor) by Atalla and Kahng in November 1959. Atalla and Kahng first demonstrated the MOSFET in early 1960. With its high scalability, much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuit (IC) chips.

Nanolayer transistor
In 1960, Atalla and Kahng fabricated the first MOSFET, with a gate oxide thickness of 100 nm and a gate length of 20 μm. In 1962, Atalla and Kahng fabricated a nanolayer-base metal–semiconductor junction (M–S junction) transistor. This device has a metallic layer of nanometric thickness sandwiched between two semiconducting layers, with the metal forming the base and the semiconductors forming the emitter and collector. With its low resistance and short transit times in the thin metallic nanolayer base, the device was capable of high operation frequency compared to bipolar transistors. Their pioneering work involved depositing metal layers (the base) on top of single-crystal semiconductor substrates (the collector), with the emitter being a crystalline semiconductor piece with a top or a blunt corner pressed against the metallic layer (the point contact). They deposited gold (Au) thin films with a thickness of 10 nm on n-type germanium (n-Ge), while the point contact was n-type silicon (n-Si). Atalla resigned from BTL in 1962.

Schottky diode
Extending their work on MOS technology, Atalla and Kahng next did pioneering work on hot carrier devices, which used what would later be called a Schottky barrier.
The Schottky diode, also known as the Schottky-barrier diode, had been theorized for years, but was first practically realized as a result of the work of Atalla and Kahng during 1960–1961. They published their results in 1962 and called their device the "hot electron" triode structure with semiconductor-metal emitter. It was one of the first metal-base transistors. The Schottky diode went on to assume a prominent role in mixer applications.

Hewlett-Packard (1962–1969)
In 1962, Atalla joined Hewlett-Packard, where he co-founded Hewlett-Packard Associates (HP Associates), which provided Hewlett-Packard with fundamental solid-state capabilities. He was the Director of Semiconductor Research at HP Associates, and the first manager of HP's Semiconductor Lab. He continued research on Schottky diodes while working with Robert J. Archer at HP Associates. They developed high-vacuum metal-film deposition technology and fabricated stable evaporated/sputtered contacts, publishing their results in January 1963. Their work was a breakthrough in metal–semiconductor junction and Schottky barrier research, as it overcame most of the fabrication problems inherent in point-contact diodes and made it possible to build practical Schottky diodes. At the Semiconductor Lab during the 1960s, he launched a materials science investigation program that provided a base technology for gallium arsenide (GaAs), gallium arsenide phosphide (GaAsP) and indium arsenide (InAs) devices. These devices became the core technology used by HP's Microwave Division to develop sweepers and network analyzers that pushed frequencies to 20–40 GHz, giving HP more than 90% of the military communications market. Atalla helped create HP Labs in 1966, and he directed its solid-state division.

Fairchild Semiconductor (1969–1972)
In 1969, he left HP and joined Fairchild Semiconductor. He was the vice president and general manager of the Microwave & Optoelectronics division from its inception in May 1969 until November 1971. He continued his work on light-emitting diodes (LEDs), proposing in 1971 that they could be used for indicator lights and optical readers. He left Fairchild in 1972.

Atalla Corporation (1972–1990)
He left the semiconductor industry in 1972 and began a new career as an entrepreneur in data security and cryptography. In 1972, he founded Atalla Technovation, later called Atalla Corporation, which dealt with the safety problems of banking and financial institutions.

Hardware security module
He invented the first hardware security module (HSM), the so-called "Atalla Box", a security system that still secures the majority of ATM transactions today. At the same time, Atalla contributed to the development of the personal identification number (PIN) system, which became the standard for identification in the banking industry and elsewhere. His work in the early 1970s led to the use of hardware security modules. The "Atalla Box" encrypted PIN and ATM messages and protected offline devices with an un-guessable PIN-generating key. He commercially released the "Atalla Box" in 1973. The product was released as the Identikey. It was a card reader and customer identification system, providing a terminal with plastic card and PIN capabilities. The system was designed to let banks and thrift institutions switch to a plastic card environment from a passbook program. The Identikey system consisted of a card reader console, two customer PIN pads, an intelligent controller and a built-in electronic interface package.
The device consisted of two keypads, one for the customer and one for the teller. It allowed the customer to type in a secret code, which was transformed by the device, using a microprocessor, into another code for the teller. During a transaction, the customer's account number was read by the card reader. This process replaced manual entry and avoided possible keystroke errors. It allowed users to replace traditional customer verification methods such as signature verification and test questions with a secure PIN system. A key innovation of the Atalla Box was the key block, which is required to securely interchange symmetric keys or PINs with other actors in the banking industry (a conceptual sketch appears at the end of this entry). This secure interchange is performed using the Atalla Key Block (AKB) format, which lies at the root of all cryptographic block formats used within the Payment Card Industry Data Security Standard (PCI DSS) and American National Standards Institute (ANSI) standards. Fearful that Atalla would dominate the market, banks and credit card companies began working on an international standard. Its PIN verification process was similar to the later IBM 3624. Atalla was an early competitor to IBM in the banking market, and was cited as an influence by IBM employees who worked on the Data Encryption Standard (DES). In recognition of his work on the PIN system of information security management, Atalla has been referred to as the "Father of the PIN" and as a father of information security technology. The Atalla Box protected over 90% of all ATM networks in operation as of 1998, and secured 85% of all ATM transactions worldwide as of 2006. Atalla products still secured the majority of the world's ATM transactions as of 2014.

Online security
In 1972, Atalla filed a patent for a remote PIN verification system, which used encryption to assure telephone-link security while personal ID information was entered, transmitted as encrypted data over telecommunications networks to a remote location for verification. This was a precursor to telephone banking, Internet security and e-commerce. At the National Association of Mutual Savings Banks (NAMSB) conference in January 1976, Atalla announced an upgrade to its Identikey system, called the Interchange Identikey. It added the capabilities of processing online transactions and dealing with network security. Designed with the focus of taking bank transactions online, the Identikey system was extended to shared-facility operations. It was consistent and compatible with various switching networks, and was capable of resetting itself electronically to any one of 64,000 irreversible nonlinear algorithms as directed by card data information. The Interchange Identikey device was released in March 1976. It was one of the first products designed to deal with online transactions, along with Bunker Ramo Corporation products unveiled at the same NAMSB conference. In 1979, Atalla introduced the first network security processor (NSP). In 1987, Atalla Corporation merged with Tandem Computers. Atalla went into retirement in 1990. As of 2013, 250 million card transactions were protected by Atalla products every day.

TriStrata Security (1993–1999)
Before long, several executives of large banks persuaded him to develop security systems for the Internet. They worried that, without innovation in the computer and network security industry, no useful framework for electronic commerce would be possible.
Following a request from former Wells Fargo Bank president William Zuendt in 1993, Atalla began developing a new Internet security technology, allowing companies to scramble and transmit secure computer files, e-mail, and digital video and audio over the Internet. As a result of these activities, he founded the company TriStrata Security in 1996. In contrast to most conventional computer security systems at the time, which built walls around a company's entire computer network to protect the information within from thieves or corporate spies, TriStrata took a different approach. Its security system wrapped a secure, encrypted envelope around individual pieces of information (such as a word-processing file, a customer database, or e-mail) that could only be opened and deciphered with an electronic permit, allowing companies to control which users have access to this information and the necessary permits. It was considered a new approach to enterprise security at the time.

Later years and death (2000–2009)
Atalla was the chairman of A4 System as of 2003. He lived in Atherton, California. Atalla died on December 30, 2009, in Atherton.

Awards and honors
Atalla was awarded the Stuart Ballantine Medal (now the Benjamin Franklin Medal in physics) at the 1975 Franklin Institute Awards, for his important contributions to silicon semiconductor technology and his invention of the MOSFET. In 2003, Atalla received a Distinguished Alumnus doctorate from Purdue University. In 2009, he was inducted into the National Inventors Hall of Fame for his important contributions to semiconductor technology as well as data security. He was referred to as one of the "Sultans of Silicon" along with several other semiconductor pioneers. In 2014, the 1959 invention of the MOSFET was included on the list of IEEE milestones in electronics. In 2015, Atalla was inducted into the IT History Society's IT Honor Roll for his important contributions to information technology.

References

External links

1924 births 2009 deaths 20th-century American engineers 20th-century American inventors 21st-century American engineers 21st-century American inventors American cryptographers American electronics engineers American mechanical engineers American physical chemists Benjamin Franklin Medal (Franklin Institute) laureates Cairo University alumni Egyptian chemists Egyptian cryptographers Egyptian electrical engineers Egyptian emigrants to the United States Egyptian inventors Egyptian mechanical engineers Egyptian physicists Electronics engineers Internet pioneers Modern cryptographers MOSFETs People from Atherton, California People from Port Said Physical chemists Purdue University alumni Purdue University College of Engineering alumni
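The key block idea mentioned under "Hardware security module" can be illustrated with a short, heavily hedged sketch. The Python fragment below is not the Atalla Key Block format itself (whose exact field layout and ciphers are defined in ANSI X9 standards); it only illustrates the underlying principle, namely that a cleartext header describing a key's permitted usage is cryptographically bound to the wrapped key, so the key cannot be re-purposed by editing the header. The function names, the header syntax, and the toy XOR "cipher" are all assumptions made for illustration.

```python
# Illustrative sketch of the key-block principle; NOT the actual AKB format.
import hmac, hashlib, os

def wrap_key(kek: bytes, header: bytes, key: bytes) -> bytes:
    # Derive distinct encryption and MAC keys from the key-encryption key.
    enc_k = hmac.new(kek, b"enc", hashlib.sha256).digest()
    mac_k = hmac.new(kek, b"mac", hashlib.sha256).digest()
    # "Encrypt" the key with a header-dependent keystream (toy XOR; a real
    # key block uses a block cipher). Works for keys up to 32 bytes here.
    keystream = hmac.new(enc_k, header, hashlib.sha256).digest()
    wrapped = bytes(k ^ s for k, s in zip(key, keystream))
    # Bind header and wrapped key together with a MAC.
    tag = hmac.new(mac_k, header + wrapped, hashlib.sha256).digest()
    return header + wrapped + tag

header = b"usage=PIN-encryption;export=none;"   # hypothetical header syntax
block = wrap_key(os.urandom(32), header, os.urandom(16))
print(block.hex())
```

A receiver sharing the key-encryption key would recompute the tag over the header and wrapped key, and refuse to unwrap the key if either part had been modified; this is what binds the usage attributes to the key itself.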
Mohamed M. Atalla
Chemistry,Engineering
4,057
51,546,400
https://en.wikipedia.org/wiki/Extrasynaptic%20NMDA%20receptor
Extrasynaptic NMDA receptors are glutamate-gated neurotransmitter receptors that are localized to non-synaptic sites on the neuronal cell surface. In contrast to synaptic NMDA receptors, which promote acquired neuroprotection and synaptic plasticity, extrasynaptic NMDA receptors are coupled to activation of death-signaling pathways. Extrasynaptic NMDA receptors are responsible for initiating excitotoxicity and have been implicated in the etiology of neurodegenerative diseases, including stroke, Huntington's disease, Alzheimer's disease, and amyotrophic lateral sclerosis (ALS). Extrasynaptic NMDA receptors form a death-signaling complex with the transient receptor potential cation channel subfamily M member 4 (TRPM4). The NMDAR/TRPM4 complex is considered central to glutamate excitotoxicity. NMDAR/TRPM4 interaction interface inhibitors (also known as 'interface inhibitors') disrupt the NMDAR/TRPM4 complex, thereby detoxifying extrasynaptic NMDA receptors. In mouse disease models, interface inhibitors protect against stroke-induced brain damage and retinal ganglion cell degeneration.

References

Molecular neuroscience
Extrasynaptic NMDA receptor
Chemistry
254
22,025,044
https://en.wikipedia.org/wiki/Imagix%204D
Imagix 4D is a source code analysis tool from Imagix Corporation, used primarily for understanding, documenting, and evolving existing C, C++ and Java software. Applied technologies include full semantic source analysis. Software visualization supports program comprehension. Static data flow analysis-based verifications detect problems in variable usage, task interactions and concurrency. Software metrics measure design quality and identify potential testing and maintenance issues. See also Rational Rose Rigi Software visualization List of tools for static code analysis Sourcetrail References Use inside SEI's ARMIN Architecture Reconstruction and Mining Tool Use inside Bosch's Model-Centric Software Architecture Reconstruction External links Imagix Corp. website Code navigation tools Static program analysis tools Software metrics Documentation generators
Imagix 4D
Mathematics,Engineering
151
2,433,102
https://en.wikipedia.org/wiki/Bouillon%20cube
A bouillon cube (also known as a stock cube) is dehydrated broth or stock formed into a small cube or other cuboid shape. It is typically made from dehydrated vegetables or meat stock, a small portion of fat, MSG, salt, and seasonings, shaped into a small cube. The cube is the most common format. Vegetarian and vegan types are also made. Bouillon is also available in granular, powdered, liquid, and paste forms.

History
Dehydrated meat stock, in the form of tablets, was known in the 17th century to English food writer Anne Blencowe, who died in 1718, and elsewhere as early as 1735. Various French cooks in the early 19th century (Lefesse, Massué, and Martin) tried to patent bouillon cubes and tablets, but were turned down for lack of originality. Nicolas Appert also proposed such dehydrated bouillon in 1831. Portable soup was a kind of dehydrated food used in the 18th and 19th centuries. It was a precursor of meat extract and bouillon cubes, and of industrially dehydrated food. It is also known as pocket soup or veal glue. It is a cousin of the glace de viande of French cooking. It was long a staple of seamen and explorers, for it would keep for many months or even years. In this context, it was a filling and nutritious dish. Portable soup of less extended vintage was, according to the 1881 Household Cyclopedia, "exceedingly convenient for private families, for by putting one of the cakes in a saucepan with about a quart of water, and a little salt, a basin of good broth may be made in a few minutes." In the mid-19th century, German chemist Justus von Liebig developed meat extract, but it was more expensive than bouillon cubes. The invention of the bouillon cube is also attributed to Auguste Escoffier, one of the most accomplished French chefs of his time, who also pioneered many other advances in food preservation, such as the canning of tomatoes and vegetables. Industrially produced bouillon cubes were commercialized by Maggi in 1908, by Oxo in 1910, and by Knorr in 1912. By 1913, at least 10 brands were available, with salt contents of 59–72%.

Ingredients
The ingredients vary between manufacturers and may change from time to time. Typically, the ingredients consist of salt, hydrogenated fat, monosodium glutamate, flavor enhancers, and flavorings. Maggi bouillon cubes are manufactured from iodized salt, hydrogenated palm oil, wheat flour, flavor enhancers (monosodium glutamate, disodium inosinate, disodium guanylate), chicken fat, chicken meat, sugar, caramel, yeast extract, onion, spices (turmeric, white pepper, coriander), and parsley.

Production process
Stock cubes are made by mixing already-dry ingredients into a paste. The ingredients are usually mixed in a container (batch mixing), left to mature, and then shaped into the cube form. Alternatively, they can be mixed directly in an extruder.

See also
Instant dashi
List of dried foods
Portable soup

References

Convenience foods Cubes Food ingredients Dried foods Umami enhancers
Bouillon cube
Technology
707
67,685,236
https://en.wikipedia.org/wiki/Haline%20contraction%20coefficient
The haline contraction coefficient, abbreviated as β, is a coefficient that describes the change in ocean density due to a salinity change, while the potential temperature and the pressure are kept constant. It is a parameter in the equation of state (EOS) of the ocean. β is also described as the saline contraction coefficient and is measured in kg/g in the EOS that describes the ocean; an example is TEOS-10, the thermodynamic equation of state. β is the salinity counterpart of the thermal expansion coefficient α, for which the density changes due to a change in temperature instead of salinity. With these two coefficients, the density ratio can be calculated, which determines the contribution of the temperature and salinity to the density of a water parcel. β is called a contraction coefficient because water becomes denser when salinity increases, whereas water becomes less dense when temperature increases.

Definition
The haline contraction coefficient is defined as:

\[ \beta = \frac{1}{\rho} \left( \frac{\partial \rho}{\partial S} \right)_{\Theta,\, p} \]

where ρ is the density of a water parcel in the ocean and S is the absolute salinity. The subscripts Θ and p indicate that β is defined at constant potential temperature Θ and constant pressure p. The haline contraction coefficient is constant when a water parcel moves adiabatically along the isobars.

Application
The amount by which density is influenced by a change in salinity or temperature can be computed from the density formula that is derived from the thermal wind balance. The Brunt–Väisälä frequency can also be defined when β is known, in combination with α, Θ and S. This frequency is a measure of the stratification of a fluid column and is defined over depth as:

\[ N^2 = g \left( \alpha \frac{\partial \Theta}{\partial z} - \beta \frac{\partial S}{\partial z} \right) \]

The direction of the mixing and whether the mixing is temperature- or salinity-driven can be determined from the density difference and the Brunt–Väisälä frequency.

Computation
β can be computed when the Conservative Temperature, the Absolute Salinity and the pressure of a water parcel are known. Python offers the Gibbs SeaWater (GSW) oceanographic toolbox. It contains coupled non-linear equations that are derived from the Gibbs function. These equations are formulated in the equation of state of seawater, also called the equation of seawater, which relates the thermodynamic properties of the ocean (density, temperature, salinity and pressure). These equations are based on empirical thermodynamic properties, meaning that the properties of the ocean can be computed from other thermodynamic properties. The difference between earlier EOS versions and TEOS-10 is that in TEOS-10 salinity is stated as Absolute Salinity, while in the previous EOS version salinity was stated as conductivity-based salinity. Absolute Salinity is based on density, using the mass of all non-H2O molecules; conductivity-based salinity is calculated directly from conductivity measurements taken by (for example) buoys. The GSW function beta(SA,CT,p) can calculate β when the Absolute Salinity (SA), Conservative Temperature (CT) and pressure are known (a worked sketch appears at the end of this entry). The Conservative Temperature cannot be obtained directly from assimilation databases like GODAS, but it can be calculated with GSW.

Physical examples
β is not a constant; it changes mostly with latitude and depth. At locations where salinity is high, as in the tropics, β is low, and where salinity is low, β is high. A high β means that a given salinity change increases density more than it does where β is low. Near Antarctica, ocean salinity is low. This is because meltwater that runs off Antarctica dilutes the ocean. This water is dense because it is cold. β around Antarctica is relatively high. Near Antarctica, temperature is the main contributor to the high density there. Water near the tropics already has high salinity. Evaporation leaves salt behind in the water, increasing salinity and therefore density. As water temperatures are a lot higher, density in the tropics is lower than around the poles. In the tropics, salinity is the main contributor to density.

References

Oceanography
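The Computation section above can be made concrete with a minimal sketch using the GSW-Python package (gsw), which provides gsw.beta, gsw.alpha and gsw.Nsquared. The three-level profile below is invented illustrative data, not measurements.

```python
# Minimal sketch: haline contraction coefficient and density ratio with
# the Gibbs SeaWater toolbox ("pip install gsw"). Profile values are
# illustrative assumptions only.
import numpy as np
import gsw

SA = np.array([34.7, 35.0, 35.3])    # Absolute Salinity [g/kg]
CT = np.array([18.0, 10.0, 4.0])     # Conservative Temperature [deg C]
p = np.array([10.0, 500.0, 1500.0])  # sea pressure [dbar]

beta = gsw.beta(SA, CT, p)    # saline contraction coefficient [kg/g]
alpha = gsw.alpha(SA, CT, p)  # thermal expansion coefficient [1/K]

# Density ratio between adjacent levels, R_rho = (alpha dCT) / (beta dSA):
# it indicates whether temperature or salinity dominates the stratification.
R_rho = (alpha[:-1] * np.diff(CT)) / (beta[:-1] * np.diff(SA))

# Brunt-Vaisala frequency squared, returned at mid-point pressures.
N2, p_mid = gsw.Nsquared(SA, CT, p, lat=45.0)
print(beta, R_rho, N2)
```

The sign and magnitude of the resulting density ratio show whether the local density gradient is temperature- or salinity-controlled, as described under Application.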
Haline contraction coefficient
Physics,Environmental_science
882
53,718,775
https://en.wikipedia.org/wiki/DIDO-2
DIDO-2 (COSPAR 2017-008BE) is a nano-satellite of the Israeli/Swiss company SpacePharma. The nano-satellite is part of a research project whose goal is to test a miniaturized end-to-end pharmaceutical laboratory in space under microgravity conditions. The project includes two satellites, called DIDO-1 and DIDO-2. The platforms of the 3U CubeSats are developed and built by the Dutch company ISIS. The first satellite, DIDO-1, was originally to fly on a Falcon 9 in 2016; the current (as of 2019) status and plans for this satellite are unknown. DIDO-2 was successfully launched on February 15, 2017, at 3:58 UTC from the Satish Dhawan Space Centre on a PSLV-XL rocket (mission PSLV-C37) that released 104 satellites. In 2018, a third mission, DIDO-3, was being planned.

Specifications
DIDO-2 is a 3U CubeSat weighing 4.2 kg.

References

External links

Satellites of Switzerland Spacecraft launched in 2017 Spacecraft launched by PSLV rockets
DIDO-2
Astronomy
233
1,967,838
https://en.wikipedia.org/wiki/Radiosensitivity
Radiosensitivity is the relative susceptibility of cells, tissues, organs or organisms to the harmful effect of ionizing radiation.

Cell types affected
Cells are least sensitive when in the S phase, then the G1 phase, then the G2 phase, and most sensitive in the M phase of the cell cycle. This is described by the 'law of Bergonié and Tribondeau', formulated in 1906: X-rays are more effective on cells which have a greater reproductive activity. From their observations, they concluded that quickly dividing tumor cells are generally more sensitive than the majority of body cells. This is not always true. Tumor cells can be hypoxic and therefore less sensitive to X-rays, because most of their effects are mediated by the free radicals produced by ionizing oxygen. It has since been shown that the most sensitive cells are those that are undifferentiated, well nourished, dividing quickly and highly active metabolically. Among the body cells, the most sensitive are spermatogonia and erythroblasts, epidermal stem cells, and gastrointestinal stem cells. The least sensitive are nerve cells and muscle fibers. Oocytes and lymphocytes are also very sensitive, although they are resting cells and do not meet the criteria described above. The reasons for their sensitivity are not clear. There also appears to be a genetic basis for the varied vulnerability of cells to ionizing radiation. This has been demonstrated across several cancer types and in normal tissues.

Cell damage classification
The damage to the cell can be lethal (the cell dies) or sublethal (the cell can repair itself). Cell damage can ultimately lead to health effects, which can be classified as either tissue reactions or stochastic effects according to the International Commission on Radiological Protection.

Tissue reactions
Tissue reactions have a threshold of irradiation under which they do not appear and above which they typically appear. Fractionation of dose, dose rate, the application of antioxidants and other factors may affect the precise threshold at which a tissue reaction occurs. Tissue reactions include skin reactions (epilation, erythema, moist desquamation), cataracts, circulatory disease, and other conditions. Seven proteins were found in a systematic review to correlate with radiosensitivity in normal tissues: γH2AX, TP53BP1, VEGFA, CASP3, CDKN2A, IL6, and IL1B.

Stochastic effects
Stochastic effects do not have a threshold of irradiation, are coincidental, and cannot be avoided. They can be divided into somatic and genetic effects. Among the somatic effects, secondary cancer is the most important. It develops because radiation causes DNA mutations directly and indirectly. Direct effects are those caused by ionizing particles and rays themselves, while indirect effects are those caused by free radicals, generated especially in water radiolysis and oxygen radiolysis. The genetic effects confer a predisposition to radiosensitivity on the offspring. The process is not yet well understood.

Target structures
For decades, the main cellular target for radiation-induced damage was thought to be the DNA molecule. This view has been challenged by data indicating that, in order to increase survival, cells must protect their proteins, which in turn repair the damage in the DNA.
An important part of protection of proteins (but not DNA) against the detrimental effects of reactive oxygen species (ROS), which are the main mechanism of radiation toxicity, is played by non-enzymatic complexes of manganese ions and small organic metabolites. These complexes were shown to protect the proteins from oxidation in vitro and also increased radiation survival in mice. An application of the synthetically reconstituted protective mixture with manganese was shown to preserve the immunogenicity of viral and bacterial epitopes at radiation doses far above those necessary to kill the microorganisms, thus opening a possibility for a quick whole-organism vaccine production. The intracellular manganese content and the nature of complexes it forms (both measurable by electron paramagnetic resonance) were shown to correlate with radiosensitivity in bacteria, archaea, fungi and human cells. An association was also found between total cellular manganese contents and their variation, and clinically inferred radioresponsiveness in different tumor cells, a finding that may be useful for more precise radiodosages and improved treatment of cancer patients. See also Background radiation Cell death Lethal dose, LD50 LNT model, Linear no-threshold response model for ionizing radiation Radiation sensitivity, the susceptibility of a material to physical or chemical changes induced by radiation References Radiobiology Radioactivity Radiation health effects Oncology
Radiosensitivity
Physics,Chemistry,Materials_science,Biology
972
67,886,971
https://en.wikipedia.org/wiki/1991%20Ghotki%20train%20crash
On 8 June 1991, a train crash killed over 100 people in Ghotki, Sindh, Pakistan. A passenger train carrying 800 passengers from Karachi to Lahore crashed into a parked freight train. See also List of railway accidents and incidents in Pakistan References 1991 in Pakistan 1991 disasters in Pakistan 1991 train crash June 1991 events in Pakistan
1991 Ghotki train crash
Technology
66
161,804
https://en.wikipedia.org/wiki/Africanized%20bee
The Africanized bee, also known as the Africanized honey bee (AHB) and colloquially as the "killer bee", is a hybrid of the western honey bee (Apis mellifera), produced originally by crossbreeding of the East African lowland honey bee (A. m. scutellata) with various European honey bee subspecies such as the Italian honey bee (A. m. ligustica) and the Iberian honey bee (A. m. iberiensis). The East African lowland honey bee was first introduced to Brazil in 1956 in an effort to increase honey production, but 26 swarms escaped quarantine in 1957. Since then, the hybrid has spread throughout South America and arrived in North America in 1985. Hives were found in south Texas in the United States in 1990. Africanized honey bees are typically much more defensive, react to disturbances faster, and chase people farther than other varieties of honey bees. They have killed some 1,000 humans, with victims receiving 10 times more stings than from European honey bees. They have also killed horses and other animals.

History
There are 29 recognized subspecies of Apis mellifera, based largely on geographic variations. All subspecies are cross-fertile. Geographic isolation led to numerous local adaptations. These adaptations include brood cycles synchronized with the bloom period of local flora, forming a winter cluster in colder climates, migratory swarming in Africa, enhanced (long-distance) foraging behavior in desert areas, and numerous other inherited traits. The Africanized honey bees in the Western Hemisphere are descended from hives operated by biologist Warwick E. Kerr, who had interbred honey bees from Europe and southern Africa. Kerr was attempting to breed a strain of bees that would produce more honey in tropical conditions than the European strain of honey bee then in use throughout North, Central and South America. The hives containing this particular African subspecies were housed at an apiary near Rio Claro, São Paulo, in the southeast of Brazil, and were noted to be especially defensive. These hives had been fitted with special excluder screens (called queen excluders) to prevent the larger queen bees and drones from getting out and mating with the local population of European bees. According to Kerr, in October 1957 a visiting beekeeper, noticing that the queen excluders were interfering with the worker bees' movement, removed them, resulting in the accidental release of 26 Tanganyikan swarms of A. m. scutellata. Following this accidental release, the Africanized honey bee swarms spread out and crossbred with local European honey bee colonies. The descendants of these colonies have since spread throughout the Americas, moving through the Amazon basin in the 1970s, crossing into Central America in 1982, and reaching Mexico in 1985. Because their movement through these regions was rapid and largely unassisted by humans, Africanized honey bees have earned the reputation of being a notorious invasive species. The prospect of killer bees arriving in the United States caused a media sensation in the late 1970s, inspired several horror movies, and sparked debate about the wisdom of humans altering entire ecosystems. The first Africanized honey bees in the U.S. were discovered in 1985 at an oil field in the San Joaquin Valley of California. Bee experts theorized the colony had not traveled overland but instead "arrived hidden in a load of oil-drilling pipe shipped from South America." The first permanent colonies arrived in Texas from Mexico in 1990.
In the Tucson region of Arizona, a study of trapped swarms in 1994 found that only 15 percent had been Africanized; this number had grown to 90 percent by 1997.

Characteristics
Though Africanized honey bees display certain behavioral traits that make them less than desirable for commercial beekeeping, excessive defensiveness and swarming foremost, they have now become the dominant type of honey bee for beekeeping in Central and South America due to their genetic dominance as well as ability to out-compete their European counterpart, with some beekeepers asserting that they are superior honey producers and pollinators. Africanized honey bees, as opposed to other Western bee types:
Tend to swarm more frequently and go farther than other types of honey bees.
Are more likely to migrate as part of a seasonal response to lowered food supply.
Are more likely to "abscond"—the entire colony leaves the hive and relocates—in response to stress.
Have greater defensiveness when in a resting swarm, compared to other honey bee types.
Live more often in ground cavities than the European types.
Guard the hive aggressively, with a larger alarm zone around the hive.
Have a higher proportion of "guard" bees within the hive.
Deploy in greater numbers for defense and pursue perceived threats over much longer distances from the hive.
Cannot survive extended periods of forage deprivation, preventing introduction into areas with harsh winters or extremely dry late summers.
Live in dramatically higher population densities.

North American distribution
Africanized honey bees are considered an invasive species in the Americas. As of 2002, the Africanized honey bees had spread from Brazil south to northern Argentina and north to Central America, Trinidad (the West Indies), Mexico, Texas, Arizona, Nevada, New Mexico, Florida, and southern California. In June 2005, it was discovered that the bees had spread into southwest Arkansas. Their expansion stopped for a time at eastern Texas, possibly due to the large population of European honey bee hives in the area. However, discoveries of the Africanized honey bees in southern Louisiana show that they have gotten past this barrier, or have come as a swarm aboard a ship. On 11 September 2007, Commissioner Bob Odom of the Louisiana Department of Agriculture and Forestry said that Africanized honey bees had established themselves in the New Orleans area. In February 2009, Africanized honey bees were found in southern Utah. The bees had spread into eight counties in Utah, as far north as Grand and Emery Counties, by May 2017. In October 2010, a 73-year-old man was killed by a swarm of Africanized honey bees while clearing brush on his south Georgia property, as determined by Georgia's Department of Agriculture. In 2012, Tennessee state officials reported that a colony was found for the first time in a beekeeper's colony in Monroe County in the eastern part of the state. In June 2013, 62-year-old Larry Goodwin of Moody, Texas, was killed by a swarm of Africanized honey bees. In May 2014, Colorado State University confirmed that bees from a swarm which had aggressively attacked an orchardist near Palisade, in west-central Colorado, were from an Africanized honey bee hive. The hive was subsequently destroyed. In tropical climates they effectively out-compete European honey bees and, at their peak rate of expansion, they spread north at almost two kilometers (about 1¼ mile) a day.
There were discussions about slowing the spread by placing large numbers of docile European-strain hives in strategic locations, particularly at the Isthmus of Panama, but various national and international agricultural departments could not prevent the bees' expansion. Current knowledge of the genetics of these bees suggests that such a strategy, had it been tried, would not have been successful. As the Africanized honey bee migrates farther north, colonies continue to interbreed with European honey bees. In a study conducted in Arizona in 2004, it was observed that swarms of Africanized honey bees could take over weakened European honey bee hives by invading the hive, then killing the European queen and establishing their own queen. There are now relatively stable geographic zones in which either Africanized honey bees dominate, a mix of Africanized and European honey bees is present, or only non-Africanized honey bees are found, as in the southern portions of South America or northern North America. African honey bees abscond (abandon the hive and any food store to start over in a new location) more readily than European honey bees. This is not necessarily a severe loss in tropical climates where plants bloom all year, but in more temperate climates it can leave the colony with not enough stores to survive the winter. Thus Africanized honey bees are expected to be a hazard mostly in the southern states of the United States, reaching as far north as the Chesapeake Bay in the east. The cold-weather limits of the Africanized honey bee have driven some professional bee breeders from Southern California into the harsher wintering locales of the northern Sierra Nevada and southern Cascade Range. This is a more difficult area in which to prepare bees for early pollination placement, such as is required for the production of almonds. The reduced available winter forage in northern California means that bees must be fed for early spring buildup. The arrival of the Africanized honey bee in Central America is threatening the traditional craft of keeping Melipona stingless bees in log gums, although they do not interbreed or directly compete with each other. The honey production from an individual hive of Africanized honey bees can be considerably higher than the much smaller yields of the various Melipona stingless bee species. Thus economic pressures are forcing beekeepers to switch from the traditional stingless bees to the new reality of the Africanized honey bee. Whether this will lead to the extinction of the former is unknown, but they are well adapted to exist in the wild, and there are a number of indigenous plants that the Africanized honey bees do not visit, so the fate of the Melipona bees remains to be seen.

Foraging behavior
Africanized honey bees begin foraging at young ages and harvest a greater quantity of pollen compared to their European counterparts (Apis mellifera ligustica). This may be linked to the high reproductive rate of the Africanized honey bee, which requires pollen to feed its greater number of larvae. Africanized honey bees are also sensitive to sucrose at lower concentrations. This adaptation causes foragers to harvest resources with low concentrations of sucrose, including water, pollen, and unconcentrated nectar. A study comparing A. m. scutellata and A. m. ligustica, published by Fewell and Bertram in 2002, suggests that the differential evolution of this suite of behaviors is due to the different environmental pressures experienced by African and European subspecies.
Proboscis extension responses
Honey bee sensitivity to different concentrations of sucrose is determined by a reflex known as the proboscis extension response (PER). Different species of honey bees that employ different foraging behaviors will vary in the concentration of sucrose that elicits their proboscis extension response. For example, European honey bees (Apis mellifera ligustica) forage at older ages and harvest less pollen and more concentrated nectar. The differences in resources collected during harvesting are a result of the European honey bee's sensitivity to sucrose at higher concentrations.

Evolution
The differences in a variety of behaviors between different species of honey bees are the result of a directional selection that acts upon several foraging behavior traits as a common entity. Selection in natural populations of honey bees shows that positive selection of sensitivity to low concentrations of sucrose is linked to foraging at younger ages and collecting resources low in sucrose. Positive selection of sensitivity to high concentrations of sucrose is linked to foraging at older ages and collecting resources higher in sucrose. Additionally of interest, "change in one component of a suite of behaviors appear[s] to direct change in the entire suite." When resource density is low in Africanized honey bee habitats, it is necessary for the bees to harvest a greater variety of resources, because they cannot afford to be selective. Honey bees that are genetically inclined towards resources high in sucrose, such as concentrated nectar, will not be able to sustain themselves in harsher environments. The noted sensitivity to low sucrose concentrations in Africanized honey bees may be a result of selective pressure in times of scarcity, when their survival depends on their attraction to low-quality resources.

Morphology and genetics
The popular term "killer bee" has only limited scientific meaning today because there is no generally accepted fraction of genetic contribution used to establish a cut-off between a "killer" honey bee and an ordinary honey bee. Government and scientific documents prefer "Africanized honey bee" as an accepted scientific taxon.

Morphological tests
Although the native East African lowland honey bees (Apis mellifera scutellata) are smaller and build smaller comb cells than the European honey bees, their hybrids are not smaller. Africanized honey bees have slightly shorter wings, which can only be recognized reliably by performing a statistical analysis on micro-measurements of a substantial sample. One of the problems with this test is that there are other subspecies, such as A. m. iberiensis, which also have shortened wings. This trait is hypothesized to derive from ancient hybrid haplotypes thought to have links to evolutionary lineages from Africa. Some belong to A. m. intermissa, but others have an indeterminate origin; the Egyptian honeybee (Apis mellifera lamarckii), present in small numbers in the southeastern U.S., has the same morphology.

DNA tests
Current testing techniques have moved away from external measurements to DNA analysis, but this means the test can only be done by a sophisticated laboratory. Molecular diagnostics using the mitochondrial DNA (mtDNA) cytochrome b gene can differentiate A. m. scutellata from other A. mellifera lineages, though mtDNA only allows one to detect Africanized colonies that have Africanized queens, and not colonies where a European queen has mated with Africanized drones.
A test based on single nucleotide polymorphisms was created in 2015 to detect Africanized bees based on the proportion of African and European ancestry.

Western variants
The western honey bee is native to the continents of Europe, Asia, and Africa. As of the early 1600s, it was introduced to North America, with subsequent introductions of other European subspecies 200 years later. Since then, they have spread throughout the Americas. The 29 subspecies can be assigned to one of four major branches, based on work by Ruttner and subsequently confirmed by analysis of mitochondrial DNA. African subspecies are assigned to branch A, northwestern European subspecies to branch M, southwestern European subspecies to branch C, and Mideast subspecies to branch O. There are still regions with localized variations that may become identified subspecies in the near future, such as A. m. pomonella from the Tian Shan Mountains, which would be included in the Mideast subspecies branch. The western honey bee is the third insect whose genome has been mapped, and it is unusual in having very few transposons. According to the scientists who analyzed its genetic code, the western honey bee originated in Africa and spread to Eurasia in two ancient migrations. They also discovered that the number of genes in the honey bee related to smell outnumbers those for taste. The genome sequence revealed several groups of genes, particularly the genes related to circadian rhythms, to be closer to those of vertebrates than of other insects. Genes related to enzymes that control other genes were also vertebrate-like.

African variants
There are two lineages of the East African lowland subspecies (Apis mellifera scutellata) in the Americas: actual matrilineal descendants of the original escaped queens, and a much smaller number that are Africanized through hybridization. The matrilineal descendants carry African mtDNA but partially European nuclear DNA, while the honey bees that are Africanized through hybridization carry European mtDNA and partially African nuclear DNA. The matrilineal descendants are in the vast majority. This is supported by DNA analyses performed on the bees as they spread northwards; those that were at the "vanguard" were over 90% African mtDNA, indicating an unbroken matriline, but after several years in residence in an area interbreeding with the local European strains, as in Brazil, the overall representation of African mtDNA drops to some degree. However, these latter hybrid lines (with European mtDNA) do not appear to propagate themselves well or persist. Population genetics analysis of Africanized honey bees in the United States, using a maternally inherited genetic marker, found 12 distinct mitotypes, and the amount of genetic variation observed supports the idea that there have been multiple introductions of AHB into the United States. A newer publication shows the genetic admixture of the Africanized honey bees in Brazil. As told earlier in this article, the small number of honey bees with African ancestry that were introduced to Brazil in 1956, which dispersed and hybridized with existing managed populations of European origin and quickly spread across much of the Americas, is an example of a massive biological invasion. The study's authors analysed whole-genome sequences of 32 Africanized honey bees sampled from throughout Brazil to study the effect of this process on genome diversity.
By comparison with ancestral populations from Europe and Africa, they inferred that these samples had 84% African ancestry, with the remainder from western European populations. However, this proportion varied across the genome, and they identified signals of positive selection in regions with high European ancestry proportions. These observations are largely driven by one large gene-rich 1.4 Mbp segment on chromosome 11, where European haplotypes are present at a significantly elevated frequency and likely confer an adaptive advantage on the Africanized honey bee population.

Consequences of selection
The chief difference between the European subspecies of honey bees kept by beekeepers and the African ones is attributable to both selective breeding and natural selection. By selecting only the most gentle, non-defensive subspecies, beekeepers have, over centuries, eliminated the more defensive ones and created a number of subspecies suitable for apiculture. In Central and southern Africa there was formerly no tradition of beekeeping, and the hive was destroyed in order to harvest the honey, pollen and larvae. The bees adapted to the climate of Sub-Saharan Africa, including prolonged droughts. Having to defend themselves against aggressive insects such as ants and wasps, as well as voracious animals like the honey badger, African honey bees evolved as a subspecies group of highly defensive bees, unsuitable by a number of metrics for domestic use. As Africanized honey bees migrate into regions, hives with an old or absent queen can become hybridized by crossbreeding. The aggressive Africanized drones out-compete European drones for a newly developed queen of such a hive, ultimately resulting in hybridization of the existing colony. Requeening, that is, replacing the older existing queen with a new, already fertilized one, can avoid hybridization in apiaries. As a prophylactic measure, the majority of beekeepers in North America tend to requeen their hives annually, maintaining strong colonies and avoiding hybridization.

Defensiveness
Africanized honey bees exhibit far greater defensiveness than European honey bees and are more likely to deal with a perceived threat by attacking in large swarms. These hybrids have been known to pursue a perceived threat for a distance of well over 500 meters (1,640 ft). The venom of an Africanized honey bee is the same as that of a European honey bee, but since the former tends to sting in far greater numbers, deaths from them are naturally more numerous than from European honey bees. While allergies to the European honey bee may cause death, complications from Africanized honey bee stings are usually not caused by allergies to their venom. Humans stung many times by Africanized honey bees can exhibit serious side effects such as inflammation of the skin, dizziness, headaches, weakness, edema, nausea, diarrhea, and vomiting. Some cases even progress to affect different body systems by causing increased heart rates, respiratory distress, and even renal failure. Africanized honey bee sting cases can become very serious, but they remain relatively rare and are often limited to accidental discovery in highly populated areas.

Impact on humans

Fear factor
The Africanized honey bee is widely feared by the public, a reaction that has been amplified by sensationalist movies (such as The Swarm) and some media reports. Stings from Africanized honey bees kill on average two or three people per year.
As the Africanized honey bee spreads through Florida, a densely populated state, officials worry that public fear may force misguided efforts to combat them.

Misconceptions
"Killer bee" is a term frequently used in media, such as movies, that portray the bees as aggressive or as actively seeking to attack humans. "Africanized honey bee" is considered a more descriptive term, in part because their distinguishing behavior is increased defensiveness compared to European honey bees, which can exhibit similar defensive behaviors when disturbed. The sting of the Africanized honey bee is no more potent than that of any other variety of honey bee, and although they are similar in appearance to European honey bees, they tend to be slightly smaller and darker in color. Although Africanized honey bees do not actively search for humans to attack, they are more dangerous because they are more easily provoked, quicker to attack in greater numbers, and then pursue the perceived threat farther, for as much as a quarter of a mile (400 metres). While studies have shown that Africanized honey bees can infiltrate European honey bee colonies and then kill and replace their queen (thus usurping the hive), this is less common than other methods. Wild and managed colonies will sometimes be seen to fight over honey stores during the dearth (periods when plants are not flowering), but this behavior should not be confused with the aforementioned activity. The most common way that a European honey bee hive becomes Africanized is through crossbreeding during a new queen's mating flight. Studies have consistently shown that Africanized drones are more numerous, stronger and faster than their European cousins, and are therefore able to out-compete them during these mating flights. The result of mating between Africanized drones and European queens is almost always Africanized offspring.

Impact on apiculture
In areas of suitable temperate climate, the survival traits of Africanized honey bee colonies help them outperform European honey bee colonies. They also return later and work under conditions that often keep European honey bees hive-bound. This is the reason why they have gained a reputation as superior honey producers, and those beekeepers who have learned to adapt their management techniques now seem to prefer them to their European counterparts. Studies show that in areas of Florida that contain Africanized honey bees, honey production is higher than in areas in which they do not live. It is also becoming apparent that Africanized honey bees have another advantage over European honey bees, in that they seem to show a higher resistance to several health issues, including parasites such as Varroa destructor, some fungal diseases like chalkbrood, and even the mysterious colony collapse disorder which plagued beekeepers in the early 2000s. Despite all its negative factors, it is possible that the Africanized honey bee might actually end up being a boon to apiculture.

Queen management
In areas where Africanized honey bees are well established, bought and pre-fertilized (i.e., mated) European queens can be used to maintain a hive's European genetics and behavior. However, this practice can be expensive, since these queens must be bought and shipped from breeder apiaries in areas completely free of Africanized honey bees, such as the northern U.S. states or Hawaii.
As such, this is generally not practical for most commercial beekeepers outside the U.S., and it is one of the main reasons why Central and South American beekeepers have had to learn to manage and work with the existing Africanized honey bee. Any effort to crossbreed virgin European queens with Africanized drones will result in the offspring exhibiting Africanized traits; only 26 swarms escaped in 1957, and nearly 60 years later there does not appear to be a noticeable lessening of the typical Africanized characteristics. Gentleness Not all Africanized honey bee hives display the typical hyper-defensive behavior, which may provide bee breeders a point to begin breeding a gentler stock (gAHBs). Work has been done in Brazil towards this end, but in order to maintain these traits, it is necessary to develop a queen breeding and mating facility in order to requeen colonies and to prevent reintroduction of unwanted genes or characteristics through unintended crossbreeding with feral colonies. In Puerto Rico, some bee colonies are already beginning to show more gentle behavior. This is believed to be because the more gentle bees contain genetic material that is more similar to the European honey bee, although they also contain Africanized honey bee material. This degree of aggressiveness is surprisingly almost unrelated to individual genetics – instead being almost entirely determined by the entire hive's proportion of aggression genetics. Safety While bee incidents are much less common than they were during the first wave of Africanized honey bee colonization, this can be largely attributed to modified and improved bee management techniques. Prominent among these are locating bee-yards much farther away from human habitation, creating barriers to keep livestock at enough of a distance to prevent interaction, and education of the general public to teach them how to properly react when feral colonies are encountered and what resources to contact. The Africanized honey bee is now considered the honey bee of choice for beekeeping in Brazil. Impact on pets and livestock Africanized honey bees are a threat to outdoor pets, especially mammals. The most detailed information available pertains to dogs. Less is known about livestock as victims. There is a widespread consensus that cattle suffer occasional Africanized honey bee attacks in Brazil, but there is little relevant documentation. It appears that cows sustain hundreds of stings if they are attacked, but can survive such injuries. See also Bee removal Notes References Further reading External links Lists general information and resources for Africanized Honeybee. Western honey bee breeds Hybrid animals Agricultural pest insects Invasive insect species Pest insects Beekeeping in the United States Invasive animal species in the United States
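Returning to the admixture estimate quoted at the start of this section (84% African ancestry, the remainder western European): such a genome-wide mixing proportion can be estimated by least squares, under the simple model that an admixed population's allele frequency at each SNP is a weighted average of the two source populations' frequencies. The sketch below uses synthetic frequencies and illustrates only the general idea, not the cited study's data or method.

```python
import numpy as np

# Model: f_mix ≈ alpha * f_afr + (1 - alpha) * f_eur at each SNP,
# where alpha is the African ancestry proportion. All data here are synthetic.
rng = np.random.default_rng(0)
n_snps = 5000
f_afr = rng.uniform(0.05, 0.95, n_snps)   # African allele frequencies
f_eur = rng.uniform(0.05, 0.95, n_snps)   # European allele frequencies
true_alpha = 0.84
f_mix = (true_alpha * f_afr + (1 - true_alpha) * f_eur
         + rng.normal(0, 0.02, n_snps))   # admixed sample plus sampling noise

# Least-squares estimate of alpha from (f_mix - f_eur) = alpha * (f_afr - f_eur):
x = f_afr - f_eur
y = f_mix - f_eur
alpha_hat = (x @ y) / (x @ x)
print(f"estimated African ancestry proportion: {alpha_hat:.3f}")  # close to 0.84
```

Estimating the proportion window-by-window along a chromosome, rather than genome-wide, is what reveals local deviations like the chromosome 11 segment described above.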
Africanized bee
Biology
5,243
44,997,732
https://en.wikipedia.org/wiki/Differentiation%20%28journal%29
Differentiation is a peer-reviewed academic journal covering cell differentiation and cell development. It was established in 1973 and is published 10 times per year by Elsevier, on behalf of the International Society of Differentiation. The editors-in-chief are Loydie Jerome-Majewska (McGill University), Crystal Rogers (University of California, Davis), and Rosa Uribe (Rice University). According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.9. References External links International Society of Differentiation Molecular and cellular biology journals Developmental biology journals Academic journals established in 1973 Elsevier academic journals English-language journals
Differentiation (journal)
Chemistry
129
455,344
https://en.wikipedia.org/wiki/Frances%20Glessner%20Lee
Frances Glessner Lee (March 25, 1878 – January 27, 1962) was an American forensic scientist. She was influential in developing the science of forensics in the United States. To this end, she created the Nutshell Studies of Unexplained Death, twenty true crime scene dioramas recreated in minute detail at dollhouse scale, used for training homicide investigators. Eighteen of the Nutshell Studies of Unexplained Death are still in use for teaching purposes by the Maryland Office of the Chief Medical Examiner, and the dioramas are also now considered works of art. Glessner Lee also helped to establish the Department of Legal Medicine at Harvard University, and endowed the Magrath Library of Legal Medicine there. She became the first female police captain in the United States, and is known as the "mother of forensic science". Early life Glessner Lee was born in Chicago on March 25, 1878. Her father, John Jacob Glessner, was an industrialist who became wealthy from International Harvester. She and her brother were educated at home; her brother went to Harvard. As a child Frances fell ill with tonsillitis, and her mother took her to the doctor. When the first doctor prescribed a dangerous treatment for her illness, the Glessners sought a second opinion and Frances underwent a successful surgery at a time when surgery was very dangerous and often lethal. Frances became interested in learning more about medicine because of this experience. When summering in the White Mountains, local doctors allowed her to attend home visits with them. There Glessner learned the skills of nursing. She inherited the Harvester fortune and finally had the money to pursue an interest in how detectives could examine clues. Career Glessner Lee was inspired to pursue forensic investigation by one of her brother's classmates, George Burgess Magrath, with whom she was close friends. He was studying medicine at Harvard Medical School and was particularly interested in death investigation. Magrath would become a professor in pathology at Harvard Medical School and a chief medical examiner in Boston and together they lobbied to have coroners replaced by medical professionals. In 1931, Glessner Lee endowed the Harvard Department of Legal Medicine—the first such department in the country—and her gifts would later establish the George Burgess Magrath Library, a chair in legal medicine, and the Harvard Seminars in Homicide Investigation. She also endowed the Harvard Associates in Police Science, a national organization for the furtherance of forensic science; it has a division dedicated to her, called the Frances Glessner Lee Homicide School. Nutshell Studies of Unexplained Death In 1945 Glessner Lee donated her dioramas to Harvard for use in her seminars. She hosted a series of semi-annual seminars, where she presented 30 to 40 men with the "Nutshell Studies of Unexplained Death", intricately constructed dioramas of actual crime scenes, complete with working doors, windows and lights. The 20 models were based on composites of actual cases and were designed to test the abilities of students to collect all relevant evidence. The models depicted multiple causes of death, and were based on autopsies and crime scenes that Glessner Lee visited. Glessner Lee paid close attention to detail in creating the models. The rooms were filled with working mousetraps and rocking chairs, food in the kitchens, and more, and the corpses accurately represented discoloration or bloating that would be present at the crime scene. 
Each model cost about $3,000–$4,500 to create. Viewers were given 90 minutes to study the scene. Eighteen of the original dioramas were still used for training purposes by Harvard Associates in Police Science in 1999. As of 2020, the models could be found at the Maryland Office of the Chief Medical Examiner in Baltimore, where they were still used in the annual Frances Glessner Lee homicide investigation seminar. For her work, Glessner Lee was made a captain in the New Hampshire State Police on October 27, 1943, making her the first woman to join the International Association of Chiefs of Police. This has been reported as honorary, but in 18 Tiny Deaths by Bruce Goldfarb, New Hampshire Police Superintendent Ralph Caswell (who appointed her captain) is quoted as saying, "This was not an honorary post. She was actually a full fledged captain with all the authority and responsibility of the post." The crime scenes Glessner Lee depicted in the dioramas were as follows: three-room dwelling, log cabin, blue bedroom, dark bathroom, burned cabin, unpapered bedroom, pink bathroom, attic, woodsman's shack, barn, saloon and jail, striped bedroom, living room, two-story porch, kitchen, garage, parsonage parlor, and bedroom. They were once part of an exhibit in the Renwick Gallery of the Smithsonian American Art Museum. Personal life Glessner married a lawyer, Blewett Harrison Lee, of the family line of General Robert E. Lee; the couple had three children. The marriage ended in divorce in 1914. Glessner Lee's perfectionism and dioramas reflect her family background. Her father was an avid collector of fine furniture with which he furnished the family home. He wrote a book on the subject, and the family home, designed by Henry Hobson Richardson, is now the John J. Glessner House museum on the near South Side of Chicago. The first miniature Glessner built was of the Chicago Symphony Orchestra. She made it for her mother's birthday, and it was her biggest project at the time. Glessner Lee was fond of the stories of Sherlock Holmes, whose plot twists were often the result of overlooked details. Many of her dioramas featured female victims in domestic settings, illustrating the dark side of the "feminine roles she had rehearsed in her married life." In popular culture The first book about Frances Glessner Lee and her dioramas, "The Nutshell Studies of Unexplained Death" by Corinne May Botz, was published by Monacelli Press in 2004. Frances Glessner Lee's biography, 18 Tiny Deaths: The Untold Story of Frances Glessner Lee and the Invention of Modern Forensics, by Bruce Goldfarb, was released by Sourcebooks on February 4, 2020. The Nutshell Studies of Unexplained Death provided the inspiration for the Miniature Killer in the television show CSI: Crime Scene Investigation. Glessner Lee is paid tribute to in the book Encyclopedia Horrifica by Joshua Gee. Frances Glessner Lee and Erle Stanley Gardner were friends, and he dedicated several of his detective novels to her, including The Case of the Dubious Bridegroom. The character of Agnes Lesser in the Father Brown episode "The Smallest of Things" is based on Glessner Lee. The Renwick Gallery of the Smithsonian American Art Museum exhibited 18 of the Nutshell Studies of Unexplained Death from October 20, 2017 to January 28, 2018. Sponsors included the American Academy of Forensic Sciences. 
On November 18, 2017, the film Murder in a Nutshell: The Frances Glessner Lee Story, directed by Susan Marks, premiered at the Renwick Gallery, followed by a moderated discussion with filmmaker. Frances Glessner Lee and her pioneering work with crime scene dioramas is cited in some detail and plays a crucial role in episode 17 of the 17th season of NCIS, "In a Nutshell". In her book Gory Details: Adventures from the Dark Side of Science, science journalist Erika Engelhaupt describes her own experience working with a team on solving the crime of one of the Nutshell dioramas and discusses Frances Glessner Lee's contribution to forensic science. In the fantasy mystery series The Undetectables, Courtney Smyth loosely based the character Francine Leon on Frances Glessner Lee, and took inspiration from Glessner Lee's Nutshell Studies to create magical crime scene dioramas that are used by the characters to solve crimes. See also New Hampshire Historical Marker No. 257: Frances Glessner Lee (1878–1962) 'Mother of Forensic Science' References Further reading Botz, Corinne May. The Nutshell Studies of Unexplained Death. New York: Monacelli, 2004. , , October 23, 2017. Goldfarb, Bruce. 18 Tiny Deaths: The Untold Story of Frances Glessner Lee and the Invention of Modern Forensics Naperville, IL. Sourcebooks 2020 Jeltsen, Melissa. "These Bloody Dollhouse Scenes Reveal A Secret Truth About American Crime." Huffington Post, February 2, 2018. Rosberg, Gerald M. "A Colloquium on Violent Death Brings 30 Detectives to Harvard". The Harvard Crimson, December 6, 1966. External links The Nutshell Studies of Unexplained Death Photographs Of Dolls and Murder documentary website Glessner House website 1878 births 1962 deaths American forensic scientists Women forensic scientists American women philanthropists Philanthropists from Illinois Scientists from Chicago Women in law enforcement Model makers Harvard Medical School people
Frances Glessner Lee
Physics
1,849
60,166,827
https://en.wikipedia.org/wiki/Ortho%20effect
The ortho effect is an organic chemistry phenomenon in which a chemical group at the ortho position of a phenyl ring (the 1,2-relationship), relative to another functional group such as a carboxyl group, changes the chemical properties of the compound. It is caused by steric effects and bonding interactions, along with polar effects of the various substituents in a given molecule, resulting in changes to its chemical and physical properties. The ortho effect is associated with substituted benzene compounds, and takes three main forms: Steric hindrance causes benzoic acids substituted at the ortho position to become stronger acids. Steric inhibition of protonation causes ortho-substituted anilines to become weaker bases than their meta- and para-substituted isomers. In electrophilic aromatic substitution of disubstituted benzene compounds, steric effects determine the regioselectivity of the incoming electrophile. Ortho substituted benzoic acids When a substituent group is located ortho to the carboxyl group in a substituted benzoic acid, the compound becomes more acidic than unmodified benzoic acid. Generally, ortho-substituted benzoic acids are stronger acids than their meta and para isomers. Mechanism of action When ortho substitution occurs in benzoic acid, steric hindrance causes the carboxyl group to twist out of the plane of the benzene ring. The twisting inhibits the resonance of the carboxyl group with the phenyl ring, leading to increased acidity of the carboxyl group. This increased acidity contrasts with the reduced acidity caused by destabilizing cross-conjugation, the effect that makes benzoic acid a weaker acid than formic acid. pKa values Ortho-substituted benzoic acids accordingly show lower pKa values (greater acidity) than their meta- and para-substituted isomers. Ortho substituted aniline When any group is present ortho to the amino group (NH2) in aniline, the basic character of the compound becomes weaker. For example, note the order of basicity of the following substituted anilines: p-Toluidine > m-Toluidine > Aniline > o-Toluidine Aniline > m-Nitroaniline > p-Nitroaniline > o-Nitroaniline Aniline > p-Haloaniline > m-Haloaniline > o-Haloaniline p-Aminophenol pKb=8.50 > o-Aminophenol pKb=9.28 > Aniline pKb=9.38 > m-Aminophenol pKb=9.80 The protonation of a substituted aniline is inhibited by steric hindrance. When protonated, the nitrogen in the amino group changes its orbital hybridization from sp2 to sp3, becoming non-planar. This leads to steric hindrance between the ortho substituent and the hydrogen atoms of the amino group, reducing the stability of the conjugate acid and consequently decreasing the basicity of the substituted aniline. Electrophilic aromatic substitution of disubstituted benzene compounds The ortho effect also occurs when a meta-directing group is positioned meta to an ortho–para-directing group: a new substituent introduced into the molecule tends to preferentially occupy the position ortho to the meta-directing group rather than para to it. Currently, there is no definitive explanation for the ortho effect, but it is hypothesized that there may be intramolecular assistance from the meta-directing group influencing the positioning of the incoming substituent. 
For example, the electrophilic aromatic nitration of 1-methyl-3-nitrobenzene affords 4-methyl-1,2-dinitrobenzene and 1-methyl-2,3-dinitrobenzene in 60.1% and 28.4% yields, respectively. In contrast, 2-methyl-1,4-dinitrobenzene is isolated in only 9.9% yield. As seen in this example, when a π-acceptor substituent (πAS) is meta to a π-donor substituent (πDS), electrophilic aromatic nitration occurs ortho to the πAS rather than para. Similar results were observed in the nitration of 3-methylbenzoic acid, in which 5-methyl-2-nitrobenzoic acid and 3-methyl-2-nitrobenzoic acid were obtained as the major products, whereas 3-methyl-4-nitrobenzoic acid was reported as a minor product. Likewise, in the nitration of 3-bromobenzoic acid, 5-bromo-2-nitrobenzoic acid (83% yield) was obtained as the major product and 3-bromo-2-nitrobenzoic acid (13% yield) as the minor one; notably, the third possible isomer, 3-bromo-4-nitrobenzoic acid, was not detected. Diels-Alder reactions The ortho effect also occurs in Diels-Alder reactions, in which Z-substituted dienophiles react with 1-substituted butadienes to give 3,4-disubstituted cyclohexenes, independent of the nature of the diene substituents. References External links Supplemental Topics § The Ortho Effect – Department of Chemistry, Michigan State University Organic chemistry Benzene Isomerism Chemical bonding
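The acidity orderings above can be made concrete by converting pKa to Ka, since Ka = 10^(-pKa). The short sketch below uses approximate literature pKa values for benzoic acid and the nitrobenzoic acid isomers (standard handbook figures, quoted here as an assumption rather than taken from this article):

```python
# Compare acid strengths of benzoic acid and its nitro-substituted isomers.
# pKa figures are approximate handbook values, used only for illustration.
pka = {
    "benzoic acid":        4.20,
    "o-nitrobenzoic acid": 2.17,   # ortho isomer: strongest acid (ortho effect)
    "m-nitrobenzoic acid": 3.45,
    "p-nitrobenzoic acid": 3.44,
}

ka_benzoic = 10 ** (-pka["benzoic acid"])
for name, p in sorted(pka.items(), key=lambda kv: kv[1]):
    ka = 10 ** (-p)   # Ka = 10^(-pKa)
    print(f"{name:22s} pKa={p:.2f}  Ka={ka:.2e}  ({ka / ka_benzoic:6.1f}x benzoic acid)")
```

Running this shows the ortho isomer roughly a hundred times stronger as an acid than benzoic acid itself, while the meta and para isomers are only about six times stronger, which is exactly the pattern the ortho effect describes.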
Ortho effect
Physics,Chemistry,Materials_science
1,219
367,867
https://en.wikipedia.org/wiki/Citron
The citron (Citrus medica), historically cedrate, is a large fragrant citrus fruit with a thick rind. It is said to resemble a 'huge, rough lemon'. It is one of the original citrus fruits from which all other citrus types developed through natural hybrid speciation or artificial hybridization. Though citron cultivars take on a wide variety of physical forms, they are all closely related genetically. It is used in Asian and Mediterranean cuisine, traditional medicines, perfume, and religious rituals and offerings. Hybrids of citrons with other citrus are commercially more prominent, notably lemons and many limes. Etymology The fruit's English name "citron" derives ultimately from Latin, citrus, which is also the origin of the genus name. Other languages A source of confusion is that 'citron' in French and English are false friends: the French word 'citron' refers to what in English is a lemon, whereas the French word for the citron is 'cédrat'. Indeed, into the 16th century, the English term citron included the lemon and perhaps the lime as well. Other languages that use variants of citron to refer to the lemon include Armenian, Czech, Dutch, Finnish, German, Estonian, Latvian, Lithuanian, Hungarian, Esperanto, Polish and the Scandinavian languages. In Italian it is known as cedro, the same name also used to indicate the coniferous cedar tree. Similarly, in Latin, citrus (or thyine wood) referred to the wood of a North African cypress, Tetraclinis articulata. In Indo-Iranian languages, it is called turunj, as against naranj ('bitter orange'). Both names were borrowed into Arabic and introduced into Spain and Portugal after their occupation by Muslims in AD 711, whence the latter became the source of the name orange through rebracketing (and the former of 'toronja' and 'toranja', which today describe the grapefruit in Spanish and Portuguese respectively). Dutch merchants seasonally import for baked goods a thick, light-green, commercially candied half peel from Indonesia and other countries (named for the Indonesian word for love; Citrus medica variety 'Macrocarpa'), which can reach 2.5 kilograms in mass. A bitter taste is removed by salt treatment before processing into confectionery. In Hebrew it is called an etrog; in Yiddish, it is pronounced "esrog" or "esreg". The citron plays an important role in the harvest holiday of Sukkot, paired with lulavim (fronds of the date palm). Origin and distribution The citron is an old and original citrus species. There is molecular evidence that most cultivated citrus species arose by hybridization of a small number of ancestral types: the citron, pomelo, mandarin and, to a lesser extent, papedas and kumquat. The citron is usually fertilized by self-pollination, which results in its displaying a high degree of genetic homozygosity. In citrus hybrids, it has typically acted as the male (pollen) parent rather than the female one. Archaeological evidence for citrus fruits has been limited, as neither seeds nor pollen are likely to be routinely recovered in archaeology. The citron is thought to have been native to the southeast foothills of the Himalayas, and appears to have reached the Mediterranean world by the 4th century BC, when Theophrastus mentions the "Median apple." 
Despite its scientific designation, an adaptation of the fruit's old name in classical Greek sources ("Median pome"), this fruit was not indigenous to ancient Media; the citron was mostly cultivated in the Caspian Sea region (Mazandaran and Gilan) on its way to the Mediterranean basin, where it was cultivated in later centuries in different areas, as described by Erich Isaac. Many sources mention the role of Alexander the Great and his armies, as they attacked Iran and what is today Pakistan, in spreading the citron westward to European countries such as Greece and Italy (Biology of Citrus). Antiquity Leviticus mentions the "fruit of the beautiful ('hadar') tree" as being required for ritual use during the Feast of Tabernacles (Lev. 23:40). According to Jewish Rabbinical tradition, the "fruit of the tree hadar" refers to the citron. Mishna Sukkah deals with halakhic aspects of the citron. The Egyptologist and archaeologist Victor Loret said he had identified it depicted on the walls of the botanical garden at the Karnak Temple, which dates back to the time of Thutmosis III, approximately 3,500 years ago. Citron was also cultivated in Sumer as early as the 3rd millennium BC. The citron has been cultivated since ancient times, predating the cultivation of other citrus species. Theophrastus The following description of the citron was given by Theophrastus: In the east and south there are special plants ... i.e. in Media and Persia there are many types of fruit, and among them there is a fruit called the Median or Persian Apple. The tree has a leaf similar to and almost identical with that of the andrachn (Arbutus andrachne L.), but has thorns like those of the apios (the wild pear, Pyrus amygdaliformis Vill.) or the firethorn (Cotoneaster pyracantha Spach.), except that they are white, smooth, sharp and strong. The fruit is not eaten, but is very fragrant, as is also the leaf of the tree; and if the fruit is put among clothes, it keeps them from being moth-eaten. It is also useful when one has drunk deadly poison, for when it is administered in wine it upsets the stomach and brings up the poison. It is also useful to improve the breath, for if one boils the inner part of the fruit in a dish or squeezes it into the mouth in some other medium, it makes the breath more pleasant. The seed is removed from the fruit and sown in the spring in carefully tilled beds, and it is watered every fourth or fifth day. As soon as the plant is strong it is transplanted, also in the spring, to a soft, well watered site, where the soil is not very fine, for it prefers such places. And it bears its fruit at all seasons, for when some have been gathered, the flower of the others is on the tree and is ripening others. Of the flowers, I have said, those that have a sort of distaff [meaning the pistil] projecting from the middle are fertile, while those that do not have this are sterile. It is also sown, like date palms, in pots punctured with holes. This tree, as has been remarked, grows in Media and Persia. Pliny the Elder Citron was also described by Pliny the Elder, who called it nata Assyria malus. 
The following is from his book Natural History: There is another tree also with the same name of "citrus", and bears a fruit that is held by some persons in particular dislike for its smell and remarkable bitterness; while, on the other hand, there are some who esteem it very highly. This tree is used as an ornament to houses; it requires, however, no further description. The citron tree, called the Assyrian, and by some the Median or Persian apple, is an antidote against poisons. The leaf is similar to that of the arbute, except that it has small prickles running across it. As to the fruit, it is never eaten, but it is remarkable for its extremely powerful smell, which is the case, also, with the leaves; indeed, the odour is so strong, that it will penetrate clothes, when they are once impregnated with it, and hence it is very useful in repelling the attacks of noxious insects. The tree bears fruit at all seasons of the year; while some is falling off, other fruit is ripening, and other, again, just bursting into birth. Various nations have attempted to naturalize this tree among them, for the sake of its medica or Persian properties, by planting it in pots of clay, with holes drilled in them, for the purpose of introducing the air to the roots; and I would here remark, once for all, that it is as well to remember that the best plan is to pack all slips of trees that have to be carried to any distance, as close together as they can possibly be placed. It has been found, however, that this tree will grow nowhere except in Persia. It is this fruit, the pips of which, as we have already mentioned, the Parthian grandees employ in seasoning their ragouts, as being peculiarly conducive to the sweetening of the breath. We find no other tree very highly commended that is produced in Media. Citrons, either the pulp of them or the pips, are taken in wine as an antidote to poisons. A decoction of citrons, or the juice extracted from them, is used as a gargle to impart sweetness to the breath. The pips of this fruit are recommended for pregnant women to chew when affected with qualmishness. Citrons are good, also, for a weak stomach, but it is not easy to eat them except with vinegar. Medieval authors Ibn al-'Awwam's 12th-century agricultural encyclopedia, Book on Agriculture, contains an article on citron tree cultivation in Spain. Description and variation Fruit The citron fruit is usually ovate or oblong, narrowing towards the stylar end. However, the citron's fruit shape is highly variable, due to the large quantity of albedo, which forms independently according to the fruits' position on the tree, twig orientation, and many other factors. The rind is leathery, furrowed, and adherent. The inner portion is thick, white and hard; the outer is uniformly thin and very fragrant. The pulp is usually acidic, but also can be sweet, and some varieties are entirely pulpless. Most citron varieties contain a large number of monoembryonic seeds. The seeds are white with dark inner coats and red-purplish chalazal spots for the acidic varieties, and colorless for the sweet ones. Some citron varieties have persistent styles which do not fall off after fecundation. Those are usually preferred for ritual etrog use in Judaism. Some citrons have medium-sized oil bubbles at the outer surface, medially distant to each other. Some varieties are ribbed and faintly warted on the outer surface. A fingered citron variety is commonly called Buddha's hand. 
The color varies from green, when unripe, to yellow-orange when overripe. The citron does not fall off the tree and can reach 8–10 pounds (4–5 kg) if not picked before fully mature (The Search for the Authentic Citron: Historic and Genetic Analysis, HortScience 40(7):1963–1968, 2005). However, the fruits should be picked before the winter, as the branches might bend or break to the ground, which can expose the tree to numerous fungal diseases. Despite the wide variety of forms taken on by the fruit, citrons are all closely related genetically, representing a single species. Genetic analysis divides the known cultivars into three clusters: a Mediterranean cluster thought to have originated in India, and two clusters predominantly found in China, one representing the fingered citrons, and another consisting of non-fingered varieties. Plant Citrus medica is a slow-growing shrub or small tree. It has irregular straggling branches and stiff twigs and long spines at the leaf axils. The evergreen leaves are green and lemon-scented with slightly serrate edges, ovate-lanceolate or ovate-elliptic, 2.5 to 7.0 inches long. Petioles are usually wingless or with minor wings. The clustered flowers of the acidic varieties are purplish-tinted on the outside, but those of the sweet ones are white-yellowish. The citron tree is very vigorous, with almost no dormancy, blooming several times a year, and is therefore fragile and extremely sensitive to frost. Varieties and hybrids The acidic varieties include the Florentine and Diamante citron from Italy, the Greek citron and the Balady citron from Israel. The sweet varieties include the Corsican and Moroccan citrons. The pulpless varieties also include some fingered varieties and the Yemenite citron. There are also a number of citron hybrids, such as the ponderosa lemon, the lumia and the rhobs el Arsa. Some claim that even the Florentine citron is not pure citron, but a citron hybrid. Uses Culinary While the lemon and orange are primarily peeled to consume their pulpy and juicy segments, the citron's pulp is dry, containing a small quantity of juice, if any. The main content of a citron fruit is its thick white rind, which adheres to the segments and cannot easily be separated from them. The citron is halved and depulped, then its rind (the thicker the better) is cut into pieces. These are cooked in sugar syrup and used as a spoon sweet, known in Greek as "kítro glykó" (κίτρο γλυκό), or diced and candied with sugar and used as a confection in cakes. In Italy, a soft drink called "Cedrata" is made from the fruit. In Samoa a refreshing drink called "vai tipolo" is made from the squeezed juice. It is also added to a raw fish dish called "oka" and to a variation of palusami or luáu. Citron is a regularly used item in Asian cuisine. Today the citron is also used for the fragrance or zest of its flavedo, but the most important part is still the inner rind (known as pith or albedo), which is a fairly important article in international trade and is widely employed in the food industry as succade, as it is known when candied in sugar. The dozens of varieties of citron are collectively known as Lebu in Bangladesh and West Bengal, where it is the primary citrus fruit. 
In Iran the citron's thick white rind is used to make jam; in Pakistan the fruit is used to make jam but is also pickled; in South Indian cuisine, some varieties of citron (collectively referred to as "Narthangai" in Tamil and "Heralikayi" in Kannada) are widely used in pickles and preserves. In Karnataka, heralikayi (citron) is used to make lemon rice. In Kutch, Gujarat, it is used to make pickle, wherein whole slices of the fruit are salted, dried, and mixed with jaggery and spices to make a sweet, spicy pickle. In the United States, citron is an important ingredient in holiday fruitcakes. Folk medicine From ancient through medieval times, the citron was used mainly for supposed medicinal purposes, to combat seasickness, scurvy and other disorders. The essential oil of the flavedo (the outermost, pigmented layer of rind) was also regarded as an antibiotic. The juice of the citron has a high content of vitamin C, and dietary fiber (pectin) can be extracted from the thick albedo of the citron. Religious In Judaism The citron (the word for which in Hebrew is etrog) is used by Jews for a religious ritual during the Jewish harvest holiday of Sukkot, the Feast of Tabernacles; therefore, it is considered to be a Jewish symbol, one found on various Hebrew antiques and archaeological findings. In Buddhism A variety of citron native to China has sections that separate into finger-like parts and is used as an offering in Buddhist temples. In Hinduism In Nepal, the citron is worshipped during the Bhai Tika ceremony during Tihar. The worship is thought to stem from the belief that it is a favorite of Yama, Hindu god of death, and his sister Yami. Perfumery For many centuries, citron's fragrant essential oil (oil of cedrate) has been used in perfumery, the same oil that was used medicinally for its antibiotic properties. Its major constituent is limonene. See also Archaeological finds of citrons in Israel Gallery of Etrog citrons Gallery of Fingered citrons Candied Fruit Peel Gallery Citations Further reading H. Harold Hume, Citrus Fruits and Their Culture Frederick J. Simoons, Food in China: A Cultural and Historical Inquiry Pinhas Spiegel-Roy, Eliezer E. Goldschmidt, Biology of Citrus Alphonse de Candolle, Origin of Cultivated Plants External links USDA Plants Profile – Citrus medica "Citron" Purdue University University of California- "Citrus Diversity" Buddha's Hand citron by David Karp (pomologist) Citrus Essential oils False friends Four species (Sukkot) Fruit trees Fruits originating in Asia Garden plants of Asia Medicinal plants of Asia Ornamental trees Perfumes Sukkot
Citron
Chemistry
3,631
73,160,662
https://en.wikipedia.org/wiki/Nalepella
Nalepella, the rust mites, is a genus of very small Trombidiform mites in the family Phytoptidae. They are commonly found on a variety of conifers, including hemlock, spruce, balsam fir, and pine. They sometimes infest Christmas trees in nurseries. Nalepella mites are vagrants, meaning they circulate around the tree; females overwinter in bark cracks. Infested spruce emit a characteristic odour. Distribution The genus is holarctic, and species are found in North America, Europe, and China. Effects The mites feed on the cell sap of the tree's needles, sometimes causing severe damage. Typical effects from a Nalepella infestation include needle discolouration and premature needle drop. The colour of discolouration varies by species; for example, Nalepella tsugifoliae causes yellowed or grey discolouration, while Nalepella halourga's discolouration is more bronze in colour. Some species are considered serious pests of ornamental coniferous trees. They are commonly found on Christmas trees in North America and Europe, and they may seriously damage the tree. Spruce infested by Nalepella were found to increase emissions of certain compounds that may cause the characteristic smell of infested plants. Another study in 2009 found that some compounds emitted by infected spruce attracted or repelled Hylobius abietis, another pest of conifers. Life cycle Nalepella mite eggs overwinter on needles, then hatch early in the spring. As cold-season mites, they are most active in the early spring and the fall. The mites deposit eggs during the fall, but may continue to be active into the winter. They have multiple generations per year. Species Species details Nalepella brewrieanae N. brewrieanae, first discovered in 2003 on Picea breweriana. It was first described from Germany, but is also known from Poland. Besides P. breweriana, it is also known from P. abies and P. glauca. Nalepella danica Nalepella danica infests members of the Abies (fir) genus. Specifically, it has been recorded from A. alba, A. concolor, A. lasiocarpa, and A. nordmanniana. It causes small rusty brown to bronze spots on the needles of its host plant, but a severe infestation can result in defoliation. Nymphs typically grow between 90 and 108 μm, while female adults 145 and 240 μm. They are known exclusively from Denmark. Nalepella ednae Nalepella ednae is distributed across the central and Northwestern United States, as well as in British Columbia. They are of concern in Mexico, where they may be introduced via cut Christmas trees. Although it is only known from a few fir species, all may be hosts. The damage they cause is unknown. Nalepella haarlovi Nalepella haarlovi is known from Denmark and Finland. It has been recorded infesting Picea sitchensis. They are one of the most economically important members of the genus. This species has four to eight generations per year. Nalepella halourga Nalepella halourga, commonly known as the spruce rust mite, is restricted to Picea (spruce). Their colour varies throughout the year; during the growing season, they are colourless to pale yellow, but in the fall they turn reddish-purple. They are found in Eastern North America. Nalepella longoctonema Nalepella longoctonema was first described in 1991 from two fir species in Oregon. They grow to 206 μm in length, and have been collected in large numbers on fir plantations. They are one of the most economically important members of the genus. 
Nalepella shevtchenkoi Nalepella shevtchenkoi lives around the bases of the host plant's needles, as well as on its stems. It is known from Abies (fir) and Picea (spruce) species. The species is considered one of the most damaging of the eriophyoid mites. It is found in parts of central and eastern Europe. Nalepella tsugifoliae The hemlock rust mite is reddish-orange in colour and has relatively large eggs. They infest fir, hemlock, larch, and yew to high densities: there may be as many as 100 mites on a single needle. Infested trees turn bluish, then yellow, before beginning to drop needles. The mites feed on both sides of the tree's needles. Notes References Pest arthropods Trombidiformes Trombidiformes genera
Nalepella
Biology
1,004
33,193,885
https://en.wikipedia.org/wiki/Classical%20Mechanics%20%28Kibble%20and%20Berkshire%29
Classical Mechanics is a well-established textbook written by Thomas Walter Bannerman Kibble and Frank Berkshire of the Imperial College Mathematics Department. The book provides thorough coverage of the fundamental principles and techniques of classical mechanics, a long-standing subject which is at the base of all of physics. Publication history The English language editions were published as follows: The first edition was published by Kibble, as Kibble, T. W. B. Classical Mechanics. London: McGraw–Hill, 1966. 296 p. The second edition, also by Kibble alone, was published in 1973. The fourth edition, written jointly with F. H. Berkshire, was published in 1996. The fifth edition, also with Berkshire, was published in 2004. The book has been translated into several languages: French, by Michel Le Ray and Françoise Guérin as Mécanique classique Modern Greek, by Δ. Σαρδελής και Π. Δίτσας, επιμέλεια Γ. Ι. Παπαδόπουλος. Σαρδελής, Δ. Δίτσας, Π as Κλασσική μηχανική German Turkish, by Kemal Çolakoğlu as Klasik mekanik (stok kodu: 9789757477563) Spanish, as Mecánica clásica (ediciones Urmo, Bilbao, January 1987) Portuguese, as Mecânica clássica Reception The various editions are held in 1789 libraries. In comparison, the various (2011) editions of Herbert Goldstein's Classical Mechanics are held in 1772 libraries. The original edition was reviewed in Current Science. The fourth edition was reviewed by C. Isenberg in 1997 in the European Journal of Physics, and the fifth edition was reviewed in Contemporary Physics. Contents (5th edition) Preface Useful Constants and Units Chapter 1: Introduction Chapter 2: Linear motion Chapter 3: Energy and Angular momentum Chapter 4: Central Conservative Forces Chapter 5: Rotating Frames Chapter 6: Potential Theory Chapter 7: The Two-Body Problem Chapter 8: Many-Body Systems Chapter 9: Rigid Bodies Chapter 10: Lagrangian mechanics Chapter 11: Small oscillations and Normal modes Chapter 12: Hamiltonian mechanics Chapter 13: Dynamical systems and their geometry Chapter 14: Order and Chaos in Hamiltonian systems Appendix A: Vectors Appendix B: Conics Appendix C: Phase plane Analysis near Critical Points Appendix D: Discrete Dynamical Systems – Maps Answers to Problems Bibliography Index See also Newtonian mechanics Classical Mechanics (Goldstein book) List of textbooks on classical and quantum mechanics References External links 2004 non-fiction books Classical mechanics Mathematical physics Physics textbooks
Classical Mechanics (Kibble and Berkshire)
Physics,Mathematics
558
23,974,535
https://en.wikipedia.org/wiki/Omnivore
An omnivore is an animal that regularly consumes significant quantities of both plant and animal matter. Obtaining energy and nutrients from plant and animal matter, omnivores digest carbohydrates, protein, fat, and fiber, and metabolize the nutrients and energy of the sources absorbed. Often, they have the ability to incorporate food sources such as algae, fungi, and bacteria into their diet. Omnivores come from diverse backgrounds that often independently evolved sophisticated consumption capabilities. For instance, dogs evolved from primarily carnivorous organisms (Carnivora) while pigs evolved from primarily herbivorous organisms (Artiodactyla). Despite this, physical characteristics such as tooth morphology may be reliable indicators of diet in mammals, with such morphological adaptation having been observed in bears. The variety of different animals that are classified as omnivores can be placed into further sub-categories depending on their feeding behaviors. Frugivores include cassowaries, orangutans and grey parrots; insectivores include swallows and pink fairy armadillos; granivores include large ground finches and mice. All of these animals are omnivores, yet still fall into special niches in terms of feeding behavior and preferred foods. Being omnivores gives these animals more food security in stressful times or makes possible living in less consistent environments. Etymology and definitions The word omnivore derives from Latin omnis 'all' and vora, from vorare 'to eat or devour', having been coined by the French and later adopted by the English in the 1800s. Traditionally the definition for omnivory was entirely behavioral, by means of simply "including both animal and vegetable tissue in the diet." In more recent times, with the advent of advanced technological capabilities in fields like gastroenterology, biologists have formulated a standardized variation of omnivore used for labeling a species' actual ability to obtain energy and nutrients from materials. This has subsequently produced two context-specific definitions. Behavioral: This definition is used to specify if a species or individual is actively consuming both plant and animal materials. (e.g. "vegans do not participate in an omnivore-based diet.") In the fields of nutrition, sociology and psychology, the terms "omnivore" and "omnivory" are often used to distinguish prototypical, highly diverse human diet patterns from restricted diet patterns that exclude major categories of food. Physiological: This definition is often used in academia to specify species that have the capability to obtain energy and nutrients from both plant and animal matter. (e.g. "humans are omnivores due to their capability to obtain energy and nutrients from both plant and animal materials.") The taxonomic utility of the traditional, behavioral definition of omnivore is limited, since the diet, behavior, and phylogeny of one omnivorous species may be very different from that of another: for instance, an omnivorous pig digging for roots and scavenging for fruit and carrion is taxonomically and ecologically quite distinct from an omnivorous chameleon that eats leaves and insects. The term "omnivory" is also not always comprehensive because it does not deal with mineral foods such as salt licks or with non-omnivores that self-medicate by consuming either plant or animal material which they otherwise would not (i.e. zoopharmacognosy). 
Classification, contradictions and difficulties Though Carnivora is a taxon for species classification, no such equivalent exists for omnivores, as omnivores are widespread across multiple taxonomic clades. The Carnivora order does not include all carnivorous species, and not all species within the Carnivora taxon are carnivorous. (The members of Carnivora are formally referred to as carnivorans.) It is common to find physiological carnivores consuming materials from plants or physiological herbivores consuming material from animals, e.g. felines eating grass and deer eating birds. From a behavioral aspect, this would make them omnivores, but from the physiological standpoint, this may be due to zoopharmacognosy. Physiologically, animals must be able to obtain both energy and nutrients from plant and animal materials to be considered omnivorous. Thus, such animals are still able to be classified as carnivores and herbivores when they are just obtaining nutrients from materials originating from sources that do not seemingly complement their classification. For instance, it is well documented that animals such as giraffes, camels, and cattle will gnaw on bones, preferably dry bones, for particular minerals and nutrients. Felines, which are usually regarded as obligate carnivores, occasionally eat grass to regurgitate indigestibles (e.g. hair, bones), aid with hemoglobin production, and as a laxative. Occasionally, it is found that animals historically classified as carnivorous may deliberately eat plant material. For example, in 2013, it was considered that American alligators (Alligator mississippiensis) may be physiologically omnivorous once investigations had been conducted on why they occasionally eat fruits. It was suggested that alligators probably ate fruits both accidentally and deliberately. "Life-history omnivores" is a specialized classification given to organisms that change their eating habits during their life cycle. Some species, such as grazing waterfowl like geese, are known to eat mainly animal tissue at one stage of their lives, but plant matter at another. The same is true for many insects, such as beetles in the family Meloidae, which begin by eating animal tissue as larvae, but change to eating plant matter after they mature. Likewise, many mosquito species in early life eat plants or assorted detritus, but as they mature, males continue to eat plant matter and nectar whereas the females (such as those of Anopheles, Aedes and Culex) also eat blood to reproduce effectively. Omnivorous species General Although cases exist of herbivores eating meat and carnivores eating plant matter, the classification "omnivore" refers to the adaptation and main food source of the species in general, so these exceptions do not make either individual animals or the species as a whole omnivorous. For the concept of "omnivore" to be regarded as a scientific classification, some clear set of measurable and relevant criteria would need to be considered to differentiate between an "omnivore" and other categories, e.g. faunivore, folivore, and scavenger. Some researchers argue that evolution of any species from herbivory to carnivory or carnivory to herbivory would be rare except via an intermediate stage of omnivory. Omnivorous mammals Various mammals are omnivorous in the wild, such as species of hominids, pigs, badgers, bears, foxes, coatis, civets, hedgehogs, opossums, skunks, sloths, squirrels, raccoons, chipmunks, mice, hamsters and rats. 
Most bear species are omnivores, but individual diets can range from almost exclusively herbivorous (hypocarnivore) to almost exclusively carnivorous (hypercarnivore), depending on what food sources are available locally and seasonally. Polar bears are classified as carnivores, both taxonomically (they are in the order Carnivora), and behaviorally (they subsist on a largely carnivorous diet). Depending on the species of bear, there is generally a preference for one class of food, as plants and animals are digested differently. Canines including wolves, dogs, dingoes, and coyotes eat some plant matter, but they have a general preference and are evolutionarily geared towards meat. However, the maned wolf is a canid whose diet is naturally 50% plant matter. Like most arboreal species, squirrels are primarily granivores, subsisting on nuts and seeds. However, like virtually all mammals, squirrels avidly consume some animal food when it becomes available. For example, the American eastern gray squirrel has been introduced to parts of Britain, continental Europe and South Africa. Its effect on populations of nesting birds is often serious because of consumption of eggs and nestlings. Other species Various birds are omnivorous, with diets varying from berries and nectar to insects, worms, fish, and small rodents. Examples include cranes, cassowaries, chickens, crows and related corvids, kea, rallidae, and rheas. In addition, some lizards (such as Galapagos Lava Lizard), turtles, fish (such as piranhas and catfish), and invertebrates are omnivorous. Quite often, mainly herbivorous creatures will eagerly eat small quantities of animal food when it becomes available. Although this is trivial most of the time, omnivorous or herbivorous birds, such as sparrows, often will feed their chicks insects while food is most needed for growth. On close inspection it appears that nectar-feeding birds such as sunbirds rely on the ants and other insects that they find in flowers, not for a richer supply of protein, but for essential nutrients such as cobalt/vitamin b12 that are absent from nectar. Similarly, monkeys of many species eat maggoty fruit, sometimes in clear preference to sound fruit. When to refer to such animals as omnivorous, or otherwise, is a question of context and emphasis, rather than of definition. See also References Animals by eating behaviors Ethology
Omnivore
Biology
1,988
58,959,426
https://en.wikipedia.org/wiki/List%20of%20brightest%20natural%20objects%20in%20the%20sky
This is a list of the brightest natural objects in the sky, ordered by apparent magnitude as seen from Earth. The list refers to naked-eye viewing: all objects are listed by their visual magnitudes, and objects too close together to be distinguished are listed jointly. Objects are listed by their proper names or their most commonly used stellar designation. This list does not include transient objects such as comets, man-made objects, or supernovae. List See also Apparent magnitude Bayer designation Extraterrestrial sky Historical brightest stars List of brightest stars List of nearest bright stars List of nearest stars and brown dwarfs Notes References Citations Sources Brightest Stars, List of Brightest Stars, brightest
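The apparent magnitude scale used to order this list is logarithmic: a difference of 5 magnitudes corresponds to a brightness factor of exactly 100, so the flux ratio between two objects is 10^(0.4·(m2 − m1)). A quick sketch of the conversion (the magnitudes used, Sun −26.74, full Moon −12.7, Sirius −1.46, are standard approximate values rather than figures quoted from this list):

```python
# Convert an apparent-magnitude difference into a brightness (flux) ratio.
# Lower magnitude means brighter; 5 magnitudes = a factor of exactly 100.
def brightness_ratio(m_bright: float, m_faint: float) -> float:
    """How many times brighter the first object is than the second."""
    return 10 ** (0.4 * (m_faint - m_bright))

print(f"Sun vs full Moon: {brightness_ratio(-26.74, -12.7):,.0f}x")    # ~400,000x
print(f"full Moon vs Sirius: {brightness_ratio(-12.7, -1.46):,.0f}x")  # ~30,000x
```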
List of brightest natural objects in the sky
Astronomy
141
77,921,803
https://en.wikipedia.org/wiki/Hemipholiota%20populnea
Hemipholiota populnea is a mushroom-forming fungus commonly known as destructive Pholiota, although separate from the genus Pholiota. It is saprobic and fruits on the wood of hardwood logs, especially cottonwood. References Strophariaceae Fungus species Fungi described in 1828 Taxa named by Christiaan Hendrik Persoon
Hemipholiota populnea
Biology
72
1,409,609
https://en.wikipedia.org/wiki/Dyson%20series
In scattering theory, a part of mathematical physics, the Dyson series, formulated by Freeman Dyson, is a perturbative expansion of the time evolution operator in the interaction picture. Each term can be represented by a sum of Feynman diagrams. This series diverges asymptotically, but in quantum electrodynamics (QED) at the second order the difference from experimental data is on the order of $10^{-10}$. This close agreement holds because the coupling constant (also known as the fine-structure constant) of QED is much less than 1. Dyson operator In the interaction picture, a Hamiltonian $H$ can be split into a free part and an interacting part as $H = H_0 + V_S(t)$. The potential in the interacting picture is $V_I(t) = e^{iH_0 t/\hbar}\, V_S(t)\, e^{-iH_0 t/\hbar}$, where $H_0$ is time-independent and $V_S(t)$ is the possibly time-dependent interacting part of the Schrödinger picture. To avoid subscripts, $V(t)$ stands for $V_I(t)$ in what follows. In the interaction picture, the evolution operator is defined by the equation $|\Psi(t)\rangle = U(t, t_0)\, |\Psi(t_0)\rangle$. This is sometimes called the Dyson operator. The evolution operator forms a unitary group with respect to the time parameter. It has the group properties: Identity and normalization: $U(t, t) = 1$. Composition: $U(t, t_0) = U(t, t_1)\, U(t_1, t_0)$. Time reversal: $U^{-1}(t, t_0) = U(t_0, t)$. Unitarity: $U^\dagger(t, t_0)\, U(t, t_0) = 1$. From these it is possible to derive the time evolution equation of the propagator: $i\hbar \frac{d}{dt} U(t, t_0) = V(t)\, U(t, t_0)$. In the interaction picture, the Hamiltonian is the same as the interaction potential, $H_{\mathrm{int}} = V(t)$, and thus the equation can also be written in the interaction picture as $i\hbar \frac{d}{dt} |\Psi(t)\rangle = H_{\mathrm{int}}\, |\Psi(t)\rangle$. Caution: this time evolution equation is not to be confused with the Tomonaga–Schwinger equation. The formal solution is $U(t, t_0) = 1 - \frac{i}{\hbar} \int_{t_0}^{t} dt_1\, V(t_1)\, U(t_1, t_0)$, which is ultimately a type of Volterra integral. Derivation of the Dyson series An iterative solution of the Volterra equation above leads to the following Neumann series: $U(t, t_0) = 1 + \left(\frac{-i}{\hbar}\right) \int_{t_0}^{t} dt_1\, V(t_1) + \left(\frac{-i}{\hbar}\right)^2 \int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2\, V(t_1) V(t_2) + \cdots$. Here, $t_1 > t_2 > \cdots > t_n$ in each term, and so the fields are time-ordered. It is useful to introduce an operator $\mathcal{T}$, called the time-ordering operator, and to define $U_n(t, t_0) = \left(\frac{-i}{\hbar}\right)^n \frac{1}{n!} \int_{t_0}^{t} dt_1 \cdots \int_{t_0}^{t} dt_n\, \mathcal{T}\{V(t_1) \cdots V(t_n)\}$. The limits of the integration can be simplified. In general, given some symmetric function $K(t_1, \ldots, t_n)$, one may define the integrals $S_n = \int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2 \cdots \int_{t_0}^{t_{n-1}} dt_n\, K(t_1, \ldots, t_n)$ and $K_n = \int_{t_0}^{t} dt_1 \int_{t_0}^{t} dt_2 \cdots \int_{t_0}^{t} dt_n\, K(t_1, \ldots, t_n)$. The region of integration of the second integral can be broken into $n!$ sub-regions, defined by the orderings $t_{\sigma(1)} > t_{\sigma(2)} > \cdots > t_{\sigma(n)}$ over all permutations $\sigma$. Due to the symmetry of $K$, the integral in each of these sub-regions is the same and equal to $S_n$ by definition. It follows that $S_n = \frac{1}{n!} K_n$. Applied to the previous identity, this shows that $U_n(t, t_0)$ equals the $n$-th term of the Neumann series, $\left(\frac{-i}{\hbar}\right)^n \int_{t_0}^{t} dt_1 \int_{t_0}^{t_1} dt_2 \cdots \int_{t_0}^{t_{n-1}} dt_n\, V(t_1) V(t_2) \cdots V(t_n)$, since the time-ordered integrand is symmetric. Summing up all the terms, the Dyson series is obtained. It is a simplified version of the Neumann series above and includes the time-ordered products; it is the path-ordered exponential: $U(t, t_0) = \sum_{n=0}^{\infty} U_n(t, t_0) = \mathcal{T} \exp\left(-\frac{i}{\hbar} \int_{t_0}^{t} d\tau\, V(\tau)\right)$. This result is also called Dyson's formula. The group laws can be derived from this formula. Application on state vectors The state vector at time $t$ can be expressed in terms of the state vector at time $t_0$, for $t > t_0$, as $|\Psi(t)\rangle = U(t, t_0)\, |\Psi(t_0)\rangle$. The inner product of an initial state at $t_i$ with a final state at $t_f$ in the Schrödinger picture, for $t_f > t_i$, is $\langle \Psi_f | U(t_f, t_i) | \Psi_i \rangle = \langle \Psi_f |\, \mathcal{T} \exp\left(-\frac{i}{\hbar} \int_{t_i}^{t_f} d\tau\, V(\tau)\right) | \Psi_i \rangle$. The S-matrix may be obtained by writing this in the Heisenberg picture, taking the in and out states to be at infinity: $\langle \Psi_f | S | \Psi_i \rangle = \langle \Psi_f |\, \mathcal{T} \exp\left(-\frac{i}{\hbar} \int_{-\infty}^{+\infty} d\tau\, V(\tau)\right) | \Psi_i \rangle$, with $S = U(+\infty, -\infty)$. Note that the time ordering was reversed in the scalar product. See also Schwinger–Dyson equation Magnus series Peano–Baker series Picard iteration References Charles J. Joachain, Quantum collision theory, North-Holland Publishing, 1975, (Elsevier) Scattering theory Quantum field theory Freeman Dyson
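As a concrete check of the expansion, here is a small numerical sketch (entirely illustrative: a toy two-level Hamiltonian, ħ set to 1, and a weak coupling so the truncated series is accurate) that sums the Dyson series through second order and compares it with the evolution operator obtained by direct time-ordered stepping:

```python
import numpy as np

# Toy two-level system: H0 = (omega/2) * sigma_z and Schrödinger-picture
# coupling V_S = g * sigma_x; both are arbitrary choices for illustration.
# hbar = 1 throughout.
omega, g = 1.0, 0.2
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H0_diag = 0.5 * omega * np.diag(sz)   # diagonal entries of H0

def V(t):
    """Interaction-picture potential V_I(t) = e^{i H0 t} V_S e^{-i H0 t}."""
    U0 = np.diag(np.exp(1j * H0_diag * t))
    return U0 @ (g * sx) @ U0.conj().T

# Reference U(t, 0): time-ordered product of many small Euler steps.
t, N = 2.0, 4000
dt = t / N
ts = (np.arange(N) + 0.5) * dt
Vs = [V(x) for x in ts]
U_ref = np.eye(2, dtype=complex)
for Vk in Vs:
    U_ref = (np.eye(2) - 1j * Vk * dt) @ U_ref

# Dyson series through second order:
# U ≈ 1 - i ∫ dt1 V(t1) - ∫ dt1 ∫_0^{t1} dt2 V(t1) V(t2).
term1 = -1j * dt * sum(Vs)
term2 = np.zeros((2, 2), dtype=complex)
inner = np.zeros((2, 2), dtype=complex)   # running ∫_0^{t1} V(t2) dt2
for V1 in Vs:
    term2 += -(V1 @ inner) * dt
    inner += V1 * dt
U_dyson = np.eye(2) + term1 + term2

print(np.abs(U_ref - U_dyson).max())  # small: series truncated at O(g^2)
```

With the weak coupling used here the discrepancy is dominated by the dropped third-order term, of order (g·t)³/3!, which is the sense in which the Dyson series is a perturbative expansion in the coupling.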
Dyson series
Physics,Chemistry
657
1,010,567
https://en.wikipedia.org/wiki/Reflected-wave%20switching
Reflected-wave switching is a signalling technique used in backplane computer buses such as PCI. A backplane computer bus is a type of multilayer printed circuit board that has at least one (almost) solid layer of copper called the ground plane, and at least one layer of copper tracks that are used as wires for the signals. Each signal travels along a transmission line formed by its track and the narrow strip of ground plane directly beneath it. This structure is known in radio engineering as microstrip line. Each signal travels from a transmitter to one or more receivers. Most computer buses use binary digital signals, which are sequences of pulses of fixed amplitude. In order to receive the correct data, the receiver must detect each pulse once, and only once. To ensure this, the designer must take the high-frequency characteristics of the microstrip into account. When a pulse is launched into the microstrip by the transmitter, its amplitude depends on the ratio of the impedances of the transmitter and the microstrip. The impedance of the transmitter is simply its output resistance. The impedance of the microstrip is its characteristic impedance, which depends on its dimensions and on the materials used in the backplane's construction. As the leading edge of the pulse (the incident wave) passes the receiver, it may or may not have sufficient amplitude to be detected. If it does, then the system is said to use incident-wave switching. This is the system used in most computer buses predating PCI, such as the VME bus. When the pulse reaches the end of the microstrip, its behaviour depends on the circuit conditions at this point. If the microstrip is correctly terminated (usually with a combination of resistors), the pulse is absorbed and its energy is converted to heat. This is the case in an incident-wave switching bus. If, on the other hand, there is no termination at the end of the microstrip, and the pulse encounters an open circuit, it is reflected back towards its source. As this reflected wave travels back along the microstrip, its amplitude is added to that of the original pulse. As the reflected wave passes the receiver for a second time, this time from the opposite direction, it now has enough amplitude to be detected. This is what happens in a reflected-wave switching bus. In incident-wave switching buses, reflections from the end of the bus are undesirable and must be prevented by adding termination. Terminating an incident-wave trace varies in complexity from a DC-balanced, AC-coupled termination to a single resistor series terminator, but all incident wave terminations consume both power and space (Johnson and Graham, 1993). However, incident-wave switching buses can be significantly longer than reflected-wave switching buses operating at the same frequency. If the limited bus length is acceptable, a reflected-wave switching bus will use less power, and fewer components to operate at a given frequency. The bus has to be short enough, such that a pulse may travel twice the length of the backplane (one complete journey for the incident wave, and another for the reflected wave), and stabilize sufficiently to be read in a single bus cycle. The travel time can be calculated by dividing the round-trip length of the bus by the speed of propagation of the signal (which is roughly one half to two-thirds of c, the speed of light in vacuum). References Johnson, Howard; Graham, Martin (1993). High Speed Digital Design. Prentice Hall. . Computer engineering Computer buses
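The length constraint described in the last paragraph is easy to quantify. A minimal sketch follows (the specific numbers, a 33 MHz bus clock, propagation at one half of c, and a half-cycle timing budget, are illustrative assumptions rather than figures from this article):

```python
# Estimate the maximum trace length of a reflected-wave bus: the signal must
# make a round trip (incident wave out, reflected wave back) and settle
# within the portion of one bus cycle budgeted for propagation.
c = 3.0e8          # speed of light in vacuum, m/s
v = 0.5 * c        # assumed propagation speed in the microstrip (~c/2)
f_bus = 33e6       # assumed bus clock (PCI's classic 33 MHz), Hz
budget = 0.5       # assumed fraction of the cycle available for propagation

t_cycle = 1.0 / f_bus
t_prop = budget * t_cycle
max_length = v * t_prop / 2.0   # halve: the round trip covers the bus twice
print(f"maximum bus length ≈ {max_length * 100:.0f} cm")  # ≈ 114 cm here
```

This back-of-the-envelope figure is consistent with reflected-wave buses being confined to short backplanes, while incident-wave buses, which need only a one-way trip, can run correspondingly longer at the same frequency.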
Reflected-wave switching
Technology,Engineering
727
31,533,033
https://en.wikipedia.org/wiki/Morchella%20semilibera
Morchella semilibera, commonly called the half-free morel, is an edible species of fungus in the family Morchellaceae native to Europe and Asia. DNA analysis has shown that the half-free morels, which appear nearly identical on a macroscopic scale, are a cryptic species complex, consisting of at least three geographically isolated species. Because de Candolle originally described the species based on specimens from Europe, the scientific name M. semilibera should be restricted to the European species. In 2012, Morchella populiphila was described from western North America, while Peck's 1903 species name Morchella punctipes was reaffirmed for eastern North American half-free morels. M. semilibera and the other half-free morels are closely related to the black morels (M. elata and others). A proposal has been made to conserve the name Morchella semilibera against several earlier synonyms, including Phallus crassipes, P. gigas and P. undosus. These names, sanctioned by Elias Magnus Fries, have since been shown to be the same species as M. semilibera. References External links semilibera Edible fungi Fungi described in 1805 Fungi of Asia Fungi of Europe Taxa named by Augustin Pyramus de Candolle Fungus species
Morchella semilibera
Biology
274
16,083,822
https://en.wikipedia.org/wiki/Baily%E2%80%93Borel%20compactification
In mathematics, the Baily–Borel compactification is a compactification of a quotient of a Hermitian symmetric space by an arithmetic group, introduced by Walter L. Baily and Armand Borel in the 1960s. Example If C is the quotient of the upper half plane by a congruence subgroup of SL2(Z), then the Baily–Borel compactification of C is formed by adding a finite number of cusps to it. See also L² cohomology References Algebraic geometry Compactification (mathematics)
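In the notation of this example, writing H for the upper half plane and Γ for the congruence subgroup of SL2(Z), the construction can be summarized as follows (a standard formulation of this special case, not a quotation from Baily and Borel):

\[
  C = \Gamma \backslash \mathbb{H}, \qquad
  \overline{C}^{\,\mathrm{BB}} = \Gamma \backslash \mathbb{H}^{*}, \qquad
  \mathbb{H}^{*} = \mathbb{H} \cup \mathbb{P}^{1}(\mathbb{Q}),
\]

where the finitely many \(\Gamma\)-orbits in \(\mathbb{P}^{1}(\mathbb{Q}) = \mathbb{Q} \cup \{\infty\}\) are the cusps added to C.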
Baily–Borel compactification
Mathematics
102
59,131,452
https://en.wikipedia.org/wiki/Logeion
Logeion is an open-access database of Latin and Ancient Greek dictionaries. Developed by Josh Goldenberg and Matt Shanahan in 2011, it is hosted by the University of Chicago. Apart from simultaneous search capabilities across different dictionaries and reference works, Logeion offers access to frequency and collocation data from the Perseus Project. Features Having started out as an aggregator for Latin and Ancient Greek dictionaries, Logeion has implemented multiple new features over the course of its development. These include: the integration of reference works on antiquity; frequency and collocation data from the Perseus Project; corpus examples, likewise retrieved from the Perseus Project; and references to relevant chapters in a number of (English-language) textbooks. Furthermore, an iOS app was developed by Joshua Day in 2013. The app's second version, launched in 2018, is also available for Android devices. Dictionaries As of November 2018, Logeion contains the following dictionaries. Dictionaries with full-text search Ancient Greek dictionaries Autenrieth, G. (1891). A Homeric Dictionary for Schools and Colleges. New York: Harper and Brothers. Liddell, H. G., & Scott, R. (1889). An Intermediate Greek-English Lexicon. Oxford: Clarendon Press. Liddell, H. G., Scott, R., Jones, H. S., & McKenzie, R. (1940). A Greek-English Lexicon. Oxford: Clarendon Press. Slater, W. L. (1969). Lexicon to Pindar. Berlin: De Gruyter. Latin dictionaries Lewis, Ch. T., & Short, Ch. (1879). A Latin Dictionary. Oxford: Clarendon Press. Lewis, Ch. T. (1890). An Elementary Latin Dictionary. New York: American Book Company. Dictionaries without full-text search Ancient Greek dictionaries Adrados, F. R., & Somolinos, J. R. (Eds.). Diccionario Griego-Español. Madrid: CSIC. Muñoz Delgado, L. (2001). Léxico de magia y religión en los papiros mágicos griegos. Madrid: CSIC. Latin dictionaries Babeliowsky, J. K. L., den Hengst, D., Holtland, W., van Lakwijk, W., Marcelis, J. Th. K., Pinkster, H., Smolenaars, J. J. L. (1975). Basiswoordenlijst Latijn. Den Haag: Staatsuitgeverij. Du Cange, Ch. et al. (1883–1887). Glossarium mediae et infimae latinitatis. Niort: L. Favre. Frieze, H. S. (1902). Vergil's Aeneid Books I-XII, with an Introduction, Notes, and Vocabulary, revised by Walter Dennison. New York: American Book Company. Gaffiot, F. (1934). Dictionnaire Illustré Latin-Français. Paris: Hachette. Latham, R. E., Howlett, D. R., & Ashdowne, R. K. (1975–2013). The Dictionary of Medieval Latin from British Sources. London: British Academy. Pinkster, H. (Ed.) (2018). Woordenboek Latijn/Nederlands. Amsterdam: Amsterdam University Press. Reference works NA (n.d.). The Perseus Encyclopedia. Medford, MA: Tufts University. Peck, H. Th. (1898). Harper's Dictionary of Classical Antiquities. New York: Harper and Brothers. Smith, W. (1854). Dictionary of Greek and Roman Geography. London: Walter and Maberly; John Murray. Smith, W., Wayte, W., & Marindin, G. E. (1890). Dictionary of Greek and Roman Antiquities. London: John Murray. Stilwell, R. (1976). The Princeton Encyclopedia of Classical Sites. Princeton: Princeton University Press. See also List of academic databases and search engines References External links Logeion Computing in classical studies Online databases Digital humanities Online dictionaries
Logeion
Technology
910
38,365,166
https://en.wikipedia.org/wiki/E-COM
E-COM, short for Electronic Computer Originated Mail, was a hybrid mail process used from 1982 to 1985 by the U.S. Postal Service (USPS) to print electronically originated mail and deliver it in envelopes to customers within two days of transmission. Description The E-COM service allowed customers to transmit messages of up to two pages from their own computers, via telecommunication lines, to one or more of 25 serving post offices (SPOs) located in the following cities: Atlanta, Boston, Charlotte, Chicago, Cincinnati, Dallas, Denver, Detroit, Kansas City, Los Angeles, Milwaukee, Minneapolis, Nashville, New Orleans, New York, Orlando, Philadelphia, Phoenix, Pittsburgh, Richmond, St. Louis, San Antonio, San Francisco, Seattle, and Washington, D.C. After an electronic message was received by an SPO, it was processed and sorted by ZIP Code, then printed on letter-size bond paper, folded, and sealed in an envelope printed with a blue E-COM logo. To be eligible for the service, customers were required to send a minimum of 200 messages per transmission. History USPS began looking into electronic mail in 1977. E-COM was originally proposed on September 8, 1978, and service was expected to begin by December of that year. The proposal was caught up in a two-year regulatory dispute, and a modified version of the E-COM service, as recommended by the Postal Rate Commission, was approved on August 15, 1980, by the Postal Service Board of Governors. E-COM service began on January 4, 1982, and the original rates were 26 cents for the first page plus 2 cents for the second page of each transmission. In addition, there was an annual fee of $50 for the service. During its inaugural year of service, 3.2 million E-COM messages were sent, and more than 600 customers submitted applications for the service. Federal law prohibits the USPS from subsidizing a mail class by overcharging the users of other mail classes; however, E-COM was heavily subsidized from its introduction. During its first year of operation, the USPS lost $5.25 per letter. The House Government Operations Committee indicated that "The Postal Service deliberately manipulates the release of information about E-COM in order to make E-COM appear to be more successful than it really is." On June 18–21, 1982, the US Congress's Joint Subcommittee on Economic Goals and Intergovernmental Policy held a hearing on the future of mail delivery in the United States, and on whether the US Postal Service should be prevented from competing with the numerous commercial electronic mail providers then in operation. Subsequent to this, there were difficulties in securing approval for a competitive and profitable rate for the service, and beginning in June 1984 the Postal Service started trying to sell the E-COM service to a private firm. Having not received offers that were financially attractive enough to accept, Postmaster General Paul Carlin notified the board of governors at the June 3, 1985, meeting that the postal service would request, through the Postal Rate Commission, the authority to close down the operation as soon as possible. The E-COM service was officially discontinued on September 2, 1985. Another service, INTELPOST, was the Postal Service's international electronic mail venture; beginning in 1980, it provided a high-speed facsimile copy service between continents, and it too was shut down in the mid-1980s. 
Legacy The abbreviation E-COM led the journal Electronics to publish a headline in June 1979 reading “Postal Service pushes ahead with E-mail”, the first known use of the term E-mail. See also History of email Hybrid mail Print-to-mail References Notes Postal systems
E-COM
Technology
756
40,328,783
https://en.wikipedia.org/wiki/Fundidesulfovibrio%20putealis
Fundidesulfovibrio putealis is a sulfate-reducing bacterium. Its cells are motile by means of a polar flagellum and contain desulfoviridin. The type strain is B7-43ᵀ (= DSM 16056ᵀ = ATCC BAA-905ᵀ). Originally described under Desulfovibrio, it was reassigned to Fundidesulfovibrio by Waite et al. in 2020. References Further reading Staley, James T., et al. "Bergey's manual of systematic bacteriology, vol. 3." Williams and Wilkins, Baltimore, MD (1989): 2250–2251. External links LPSN Type strain of Desulfovibrio putealis at BacDive - the Bacterial Diversity Metadatabase Bacteria described in 2005 Desulfovibrionales
Fundidesulfovibrio putealis
Biology
180
2,526,857
https://en.wikipedia.org/wiki/Isotopes%20of%20uranium
Uranium (U) is a naturally occurring radioactive element (radioelement) with no stable isotopes. It has two primordial isotopes, uranium-238 and uranium-235, that have long half-lives and are found in appreciable quantity in Earth's crust. The decay product uranium-234 is also found. Other isotopes, such as uranium-233, have been produced in breeder reactors. In addition to the isotopes found in nature or nuclear reactors, many isotopes with far shorter half-lives have been produced, ranging from ²¹⁴U to ²⁴²U (except for ²²⁰U). The standard atomic weight of natural uranium is 238.02891(3). Natural uranium consists of three main isotopes, ²³⁸U (99.2739–99.2752% natural abundance), ²³⁵U (0.7198–0.7202%), and ²³⁴U (0.0050–0.0059%). All three isotopes are radioactive (i.e., they are radioisotopes), and the most abundant and longest-lived is uranium-238, with a half-life of 4.468×10⁹ years (about the age of the Earth). Uranium-238 is an alpha emitter, decaying through the 18-member uranium series into lead-206. The decay series of uranium-235 (historically called actino-uranium) has 15 members and ends in lead-207. The constant rates of decay in these series make comparison of the ratios of parent to daughter elements useful in radiometric dating. Uranium-233 is made from thorium-232 by neutron bombardment. Uranium-235 is important for both nuclear reactors (energy production) and nuclear weapons because it is the only isotope existing in nature to any appreciable extent that is fissile in response to thermal neutrons, i.e., thermal neutron capture has a high probability of inducing fission. A chain reaction can be sustained with a large enough (critical) mass of uranium-235. Uranium-238 is also important because it is fertile: it absorbs neutrons to produce a radioactive isotope that decays into plutonium-239, which is also fissile.
List of isotopes
Ground states of the known uranium isotopes are summarized below; nuclear isomers and the very small spontaneous fission and cluster decay branches of the heavier isotopes are omitted.

Isotope   Half-life          Decay mode (branch)      Daughter(s)     Spin     Natural abundance
²¹⁴U      ~0.5 ms            α                        ²¹⁰Th           0+       –
²¹⁵U      1.4(0.9) ms        α                        ²¹¹Th           5/2−#    –
²¹⁶U      –                  α                        ²¹²Th           0+       –
²¹⁷U      –                  α                        ²¹³Th           (1/2−)   –
²¹⁸U      –                  α                        ²¹⁴Th           0+       –
²¹⁹U      60(7) μs           α                        ²¹⁵Th           (9/2+)   –
²²¹U      0.66(14) μs        α                        ²¹⁷Th           (9/2+)   –
²²²U      4.7(0.7) μs        α                        ²¹⁸Th           0+       –
²²³U      65(12) μs          α                        ²¹⁹Th           7/2+#    –
²²⁴U      396(17) μs         α                        ²²⁰Th           0+       –
²²⁵U      62(4) ms           α                        ²²¹Th           5/2+#    –
²²⁶U      269(6) ms          α                        ²²²Th           0+       –
²²⁷U      1.1(0.1) min       α                        ²²³Th           (3/2+)   –
²²⁸U      9.1(0.2) min       α (97.5%), EC (2.5%)     ²²⁴Th, ²²⁸Pa    0+       –
²²⁹U      57.8(0.5) min      β⁺ (80%), α (20%)        ²²⁹Pa, ²²⁵Th    (3/2+)   –
²³⁰U      20.23(0.02) d      α                        ²²⁶Th           0+       –
²³¹U      4.2(0.1) d         EC, α (0.004%)           ²³¹Pa, ²²⁷Th    5/2+#    –
²³²U      68.9(0.4) y        α                        ²²⁸Th           0+       –
²³³U      1.592(2)×10⁵ y     α                        ²²⁹Th           5/2+     trace
²³⁴U      2.455(6)×10⁵ y     α                        ²³⁰Th           0+       0.0050–0.0059%
²³⁵U      7.038(1)×10⁸ y     α                        ²³¹Th           7/2−     0.7198–0.7202%
²³⁶U      2.342(3)×10⁷ y     α                        ²³²Th           0+       trace
²³⁷U      6.752(2) d         β⁻                       ²³⁷Np           1/2+     trace
²³⁸U      4.468(3)×10⁹ y     α                        ²³⁴Th           0+       99.2739–99.2752%
²³⁹U      23.45(0.02) min    β⁻                       ²³⁹Np           5/2+     trace
²⁴⁰U      14.1(0.1) h        β⁻                       ²⁴⁰Np           0+       trace
²⁴¹U      ~40 min            β⁻                       ²⁴¹Np           7/2+#    –
²⁴²U      16.8(0.5) min      β⁻                       ²⁴²Np           0+       –

Historic names: ²³⁴U was called uranium II, ²³⁵U actino-uranium, ²³⁶U thoruranium, and ²³⁸U uranium I.
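As noted above, the fixed decay rates of the uranium series make parent-to-daughter ratios useful in radiometric dating. The following Python sketch shows the standard relationship; the sample ratio is an invented illustration, and the formula assumes a closed system with no initial daughter nuclide.

import math

T_HALF_U238 = 4.468e9  # half-life of uranium-238, in years

def decay_constant(t_half):
    # lambda = ln(2) / t_half
    return math.log(2.0) / t_half

def age_from_ratio(daughter_per_parent, t_half):
    # For N_d daughter atoms per N_p surviving parent atoms:
    #   N_d / N_p = exp(lambda * t) - 1   =>   t = ln(1 + N_d/N_p) / lambda
    return math.log(1.0 + daughter_per_parent) / decay_constant(t_half)

# A hypothetical mineral with 0.5 lead-206 atoms per uranium-238 atom:
print(f"apparent age: {age_from_ratio(0.5, T_HALF_U238):.3e} years")  # ~2.6e9

A daughter-to-parent ratio of exactly 1 corresponds to one half-life, about 4.47 billion years.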
Uranium-214 Uranium-214 is the lightest known isotope of uranium. It was discovered at the Spectrometer for Heavy Atoms and Nuclear Structure (SHANS) at the Heavy Ion Research Facility in Lanzhou, China in 2021, produced by firing argon-36 at tungsten-182. It alpha-decays with a half-life of about half a millisecond. Uranium-232 Uranium-232 has a half-life of 68.9 years and is a side product in the thorium cycle. It has been cited as an obstacle to nuclear proliferation using ²³³U, because the intense gamma radiation from ²⁰⁸Tl (a daughter of ²³²U, produced relatively quickly) makes ²³³U contaminated with it more difficult to handle. Uranium-232 is a rare example of an even-even isotope that is fissile with both thermal and fast neutrons. Uranium-233 Uranium-233 is a fissile isotope that is bred from thorium-232 as part of the thorium fuel cycle. ²³³U was investigated for use in nuclear weapons and as a reactor fuel. It was occasionally tested but never deployed in nuclear weapons, and it has not been used commercially as a nuclear fuel. It has been used successfully in experimental nuclear reactors and has been proposed for much wider use as a nuclear fuel. It has a half-life of around 160,000 years. Uranium-233 is produced by neutron irradiation of thorium-232. When thorium-232 absorbs a neutron, it becomes thorium-233, which has a half-life of only 22 minutes. Thorium-233 beta decays into protactinium-233. Protactinium-233 has a half-life of 27 days and beta decays into uranium-233; some proposed molten salt reactor designs attempt to physically isolate the protactinium from further neutron capture before its beta decay can occur. Uranium-233 usually fissions on neutron absorption but sometimes retains the neutron, becoming uranium-234. Its capture-to-fission ratio is smaller than those of the other two major fissile fuels, uranium-235 and plutonium-239; it is also lower than that of short-lived plutonium-241, and bettered only by the very difficult-to-produce neptunium-236. Uranium-234 ²³⁴U occurs in natural uranium as an indirect decay product of uranium-238, but makes up only 55 parts per million of the uranium because its half-life of 245,500 years is only about 1/18,000 that of ²³⁸U. The path of production of ²³⁴U is this: ²³⁸U alpha decays to thorium-234; next, with a short half-life, ²³⁴Th beta decays to protactinium-234; finally, ²³⁴Pa beta decays to ²³⁴U. ²³⁴U alpha decays to thorium-230, except for a small percentage of nuclei that undergo spontaneous fission. Extraction of small amounts of ²³⁴U from natural uranium could be done using isotope separation, similar to normal uranium enrichment. However, there is no real demand in chemistry, physics, or engineering for isolating ²³⁴U. Very small pure samples of ²³⁴U can be extracted via the chemical ion-exchange process from samples of plutonium-238 that have aged somewhat to allow some alpha decay to ²³⁴U. Enriched uranium contains more ²³⁴U than natural uranium as a byproduct of the uranium enrichment process aimed at obtaining uranium-235, which concentrates lighter isotopes even more strongly than it does ²³⁵U. The increased percentage of ²³⁴U in enriched natural uranium is acceptable in current nuclear reactors, but (re-enriched) reprocessed uranium might contain even higher fractions of ²³⁴U, which is undesirable. 
This is because ²³⁴U is not fissile and tends to absorb slow neutrons in a nuclear reactor, becoming ²³⁵U. ²³⁴U has a neutron capture cross section of about 100 barns for thermal neutrons, and about 700 barns for its resonance integral (the average over neutrons of various intermediate energies). In a nuclear reactor, non-fissile isotopes capture neutrons, breeding fissile isotopes. ²³⁴U is converted to ²³⁵U more easily, and therefore at a greater rate, than uranium-238 is converted to plutonium-239 (via neptunium-239), because ²³⁸U has a much smaller neutron-capture cross section of just 2.7 barns. Uranium-235 Uranium-235 makes up about 0.72% of natural uranium. Unlike the predominant isotope uranium-238, it is fissile, i.e., it can sustain a fission chain reaction. It is the only fissile isotope that is a primordial nuclide or found in significant quantity in nature. Uranium-235 has a half-life of 703.8 million years. It was discovered in 1935 by Arthur Jeffrey Dempster. Its fission cross section for slow thermal neutrons is about 504.81 barns; for fast neutrons it is on the order of 1 barn. At thermal energies, about 5 of 6 neutron absorptions result in fission and 1 of 6 in neutron capture, forming uranium-236. The fission-to-capture ratio improves for faster neutrons. Uranium-236 Uranium-236 has a half-life of about 23 million years; it is neither fissile with thermal neutrons nor very good fertile material, and is generally considered a nuisance and long-lived radioactive waste. It is found in spent nuclear fuel and in the reprocessed uranium made from spent nuclear fuel. Uranium-237 Uranium-237 has a half-life of about 6.75 days. It decays into neptunium-237 by beta decay. It was discovered by Japanese physicist Yoshio Nishina in 1940, who, in a near-miss discovery, inferred the creation of element 93 but was unable to isolate the then-unknown element or measure its decay properties. Uranium-238 Uranium-238 (²³⁸U or U-238) is the most common isotope of uranium in nature. It is not fissile, but is fertile: it can capture a slow neutron and, after two beta decays, become fissile plutonium-239. Uranium-238 is fissionable by fast neutrons, but cannot support a chain reaction because inelastic scattering reduces neutron energy below the range where fast fission of one or more next-generation nuclei is probable. Doppler broadening of ²³⁸U's neutron absorption resonances, increasing absorption as fuel temperature increases, is also an essential negative feedback mechanism for reactor control. About 99.284% of natural uranium is uranium-238, which has a half-life of 1.41×10¹⁷ seconds (4.468×10⁹ years). Depleted uranium has an even higher concentration of ²³⁸U, and even low-enriched uranium (LEU) is still mostly ²³⁸U. Reprocessed uranium is also mainly ²³⁸U, with about as much uranium-235 as natural uranium, a comparable proportion of uranium-236, and much smaller amounts of other isotopes of uranium such as uranium-234, uranium-233, and uranium-232. Uranium-239 Uranium-239 is usually produced by exposing ²³⁸U to neutron radiation in a nuclear reactor. ²³⁹U has a half-life of about 23.45 minutes and beta decays into neptunium-239, with a total decay energy of about 1.29 MeV. The most common gamma decay, at 74.660 keV, accounts for the difference between the two major channels of beta emission energy, at 1.28 and 1.21 MeV. ²³⁹Np then, with a half-life of about 2.356 days, beta-decays to plutonium-239. 
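The 55 parts-per-million figure for ²³⁴U quoted above can be checked with a one-line secular-equilibrium estimate: in an undisturbed sample the activities of ²³⁴U and its ultimate parent ²³⁸U are equal, so their atom ratio equals the ratio of their half-lives. A minimal Python sketch, using the half-lives and abundance given earlier:

T_HALF_U234 = 2.455e5      # years
T_HALF_U238 = 4.468e9      # years
U238_FRACTION = 0.992742   # atom fraction of U-238 in natural uranium

# Equal activities: lambda_234 * N_234 = lambda_238 * N_238, and lambda is
# inversely proportional to half-life, so the atom fractions scale likewise.
u234_fraction = (T_HALF_U234 / T_HALF_U238) * U238_FRACTION
print(f"U-234 atom fraction: {u234_fraction:.2e}")  # ~5.5e-5, i.e. about 55 ppm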
Uranium-241 In 2023, in a paper published in Physical Review Letters, a group of researchers based in Korea reported that they had found uranium-241 in an experiment involving ²³⁸U + ¹⁹⁸Pt multinucleon transfer reactions. Its half-life is about 40 minutes. References Uranium
Isotopes of uranium
Chemistry
5,973