| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
63,762,870 | https://en.wikipedia.org/wiki/Volvation | Volvation (from Latin volvere "roll", and the suffix -(a)tion; sometimes called enrolment or conglobation) is a defensive behavior in certain animals, in which the animal rolls its own body into a ball, presenting only the hardest parts of its integument (the animal's "armor") or its spines to predators.
Among mammals, pangolins (Manidae) and hedgehogs (Erinaceidae) exhibit the ability to conglobate. Armadillos in the genus Tolypeutes (South American three-banded armadillos) are able to roll into a defensive ball; however, the nine-banded armadillo and other species have too many plates to do so.
Earthworms may volvate during periods of extreme heat or drought.
Among pill millipedes, volvation is both a protection against external threats and against dehydration.
Woodlice or pillbugs (Armadillidae) curl themselves into "pills" not only for defense, but also to conserve moisture while resting or sleeping, because they must keep their pseudotracheae ("gills") wet. Volvation is particularly well evolved in subterranean isopods, but only Caecosphaeroma burgundum is able to roll up into a hermetic sphere without any outward projections, and thus "approaches perfection in volvation".
Multi-shelled chitons also volvate, although evidence suggests that they do not use this behavior as an anti-predatory defense but rather as a form of locomotion.
In vertebrates, an animal's decision to volvate is mediated by the periaqueductal gray region.
See also
Rotating locomotion in living systems
References
Ethology
Predation
Antipredator adaptations | Volvation | Biology | 378 |
10,845,121 | https://en.wikipedia.org/wiki/Builder%27s%20Old%20Measurement | Builder's Old Measurement (BOM, bm, OM, and o.m.) is the method used in England from approximately 1650 to 1849 for calculating the cargo capacity of a ship. It is a volumetric measurement of cubic capacity. It estimated the tonnage of a ship based on length and maximum beam. It is expressed in "tons burden" and abbreviated "tons bm".
The formula is:
Tonnage = ((Length − 3/5 × Beam) × Beam × ½ Beam) / 94
where:
Length is the length, in feet, from the stem to the sternpost;
Beam is the maximum beam, in feet.
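As a worked illustration, the measurement can be computed directly from the two dimensions in feet. The sketch below is not taken from any standard library; the function name and example figures are chosen for illustration only.

```python
def bm_tonnage(length_ft: float, beam_ft: float) -> float:
    """Builder's Old Measurement: tons burden from length and maximum beam, both in feet."""
    return (length_ft - beam_ft * 3 / 5) * beam_ft * (beam_ft / 2) / 94

# Example: a ship measuring 150 ft from stem to sternpost with a 40 ft maximum beam
print(round(bm_tonnage(150, 40)))  # roughly 1072 tons bm
```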
The Builder's Old Measurement formula remained in effect until the advent of steam propulsion. Steamships required a different method of estimating tonnage, because the ratio of length to beam was larger and a significant volume of internal space was used for boilers and machinery. In 1849, the Moorsom System was created in the United Kingdom. The Moorsom system calculates the cargo-carrying capacity in cubic feet, another method of volumetric measurement. The capacity in cubic feet is then divided by 100 cubic feet of capacity per gross ton, resulting in a tonnage expressed in tons.
History and derivation
King Edward I levied the first tax on the hire of ships in England in 1303 based on tons burthen. Later, King Edward III levied a tax of 3 shillings on each "tun" of imported wine. At that time a "tun" was a wine container of 252 wine gallons, which weighed about 2,240 pounds, a weight known today as a long ton or imperial ton. In order to estimate the capacity of a ship in terms of 'tun' for tax purposes, an early formula used in England was:
Tonnage = (Length × Beam × Depth) / 100
where:
Length is the length (undefined), in feet
Beam is the beam, in feet.
Depth is the depth of the hold, in feet below the main deck.
The numerator yields the ship's volume expressed in cubic feet.
If a "tun" is deemed to be equivalent to 100 cubic feet, then the tonnage is simply the number of such 100 cubic feet 'tun' units of volume.
The divisor of 100 is unitless, so tonnage would be expressed in 'ft3 of tun'.
In 1678 Thames shipbuilders used a method assuming that a ship's burden would be 3/5 of its displacement. Since displacement is calculated by multiplying length × beam × draft × block coefficient, all divided by 35 ft3 per ton of seawater, the resulting formula would be:
Tonnage = 3/5 × (Length × Beam × Draft × Block coefficient) / 35
where:
Draft is estimated to be half of the beam.
Block coefficient is based on an assumed average of 0.62.
35 ft3 is the volume of one ton of sea water.
Or, substituting these values and solving:
Tonnage ≈ (Length × Beam × ½ Beam) / 94
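The divisor of 94 can be checked numerically from the assumptions above (draft of half the beam, block coefficient 0.62, 35 ft3 per ton, burden equal to 3/5 of displacement). The snippet below uses illustrative dimensions only.

```python
# Burden assumed to be 3/5 of displacement; displacement = L x B x draft x Cb / 35,
# with draft taken as half the beam and block coefficient Cb = 0.62.
length_ft, beam_ft = 150.0, 40.0
draft_ft = beam_ft / 2
burden = (3 / 5) * length_ft * beam_ft * draft_ft * 0.62 / 35

# Folding the constants together gives the familiar divisor of about 94:
divisor = 35 / (0.62 * 3 / 5)
print(round(divisor, 2))                                          # 94.09
print(round(burden, 1),
      round(length_ft * beam_ft * (beam_ft / 2) / divisor, 1))    # same result both ways
```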
In 1694 a new British law required that tonnage for tax purposes be calculated according to a similar formula:
This formula remained in effect until the Builder's Old Measurement rule (above) was put into use in 1720, and then mandated by Act of Parliament in 1773.
Depth
Depth to deck
The height from the underside of the hull, excluding the keel itself, at the ship's midpoint, to the top of the uppermost full length deck.
Depth in hold
Interior space: the height from the lowest part of the hull inside the ship, at its midpoint, to the ceiling formed by the uppermost full-length deck. For old warships it is measured to the ceiling formed by the lowermost full-length deck.
Main deck
The main deck, as used in the context of depth measurement, is usually defined as the uppermost full-length deck. For the 16th-century ship Mary Rose, the main deck is the second-uppermost full-length deck. In one calculation of the tonnage of Mary Rose, the draft was used instead of the depth.
American tons burthen
The British took the length measurement from the outside of the stem to the outside of the sternpost, whereas the Americans measured from inside the posts. The British measured breadth from outside the planks, whereas the Americans measured the breadth from inside the planks. Lastly, the British divided by 94, whereas the Americans divided by 95.
The upshot was that American calculations gave a lower number than the British ones. The British measure yields values about 6% greater than the American. For instance, when the British measured a captured ship, their calculations gave her a burthen of 1533 tons, whereas the American calculations gave the burthen as 1444 tons.
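The roughly 6% difference follows directly from the figures quoted; a quick check:

```python
british_tons, american_tons = 1533, 1444
print(f"{(british_tons / american_tons - 1) * 100:.1f}% higher")  # about 6.2% higher
```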
The US system was in use from 1789 until 1864, when a modified version of the Moorsom System was adopted.
See also
Thames Measurement
References
External links
"Concerning Measuring of Ships", The Sea-Man's Vade Mecum, London, 1707. pp 127–131.
"Of Finding the Tonnage or Burthen of Ships, &c.", David Steel, The Shipwright's Vade-Mecum, London, 1805. pp. 249–251.
"Burthen", or "Burden", William Falconer's Dictionary of the Marine, London, 1780, page 56
Mass
Nautical terminology
Sailing rules and handicapping
Ship measurements
Volume | Builder's Old Measurement | Physics,Mathematics | 1,039 |
3,416,085 | https://en.wikipedia.org/wiki/Sumatran%20water%20shrew | The Sumatran water shrew (Chimarrogale sumatrana) is a red-toothed shrew found only in the Padang highlands of western Sumatra, Indonesia. Its natural habitats are streams in montane forests. The species is only known from a holotype, which is damaged, and was previously listed as critically endangered by IUCN. It is believed to be severely threatened by habitat loss.
References
External links
Chimarrogale
Endemic fauna of Indonesia
Shrew, Sumatran Water
Mammals described in 1921
Taxa named by Oldfield Thomas | Sumatran water shrew | Biology | 123 |
4,556,078 | https://en.wikipedia.org/wiki/History%20of%20sound%20recording | The history of sound recording, which has progressed in waves driven by the invention and commercial introduction of new technologies, can be roughly divided into four main periods:
The Acoustic era (1877–1925)
The Electrical era (1925–1945)
The Magnetic era (1945–1975)
The Digital era (1975–present)
Experiments in capturing sound on a recording medium for preservation and reproduction began in earnest during the Industrial Revolution of the 1800s. Many pioneering attempts to record and reproduce sound were made during the latter half of the 19th century – notably Édouard-Léon Scott de Martinville's phonautograph of 1857 – and these efforts culminated in the invention of the phonograph by Thomas Edison in 1877. Digital recording emerged in the late 20th century and has since flourished with the popularity of digital music and online streaming services.
Overview
The Acoustic Era (1877–1925)
The earliest practical recording technologies were entirely mechanical devices. These recorders typically used a large conical horn to collect and focus the physical air pressure of the sound waves produced by the human voice or musical instruments. A sensitive membrane or diaphragm, located at the apex of the cone, was connected to an articulated scriber or stylus, and as the changing air pressure moved the diaphragm back and forth, the stylus scratched or incised an analog of the sound waves onto a moving recording medium, such as a roll of coated paper, or a cylinder or disc coated with a soft material such as wax or a soft metal.
These early recordings were necessarily of low fidelity and volume and captured only a narrow segment of the audible sound spectrum — typically only from around 250 Hz up to about 2,500 Hz — so musicians and engineers were forced to adapt to these sonic limitations. Musical ensembles of the period often favored louder instruments such as trumpet, cornet, and trombone; lower-register brass instruments such as the tuba and the euphonium doubled or replaced the double bass, and blocks of wood stood in for bass drums. Performers also had to arrange themselves strategically around the horn to balance the sound, and to play as loudly as possible. The reproduction of domestic phonographs was similarly limited in both frequency-range and volume.
By the end of the acoustic era, the disc had become the standard medium for sound recording, and its dominance in the domestic audio market lasted until the end of the 20th century.
The Electrical Era (1925–1945) (including sound on film)
The second wave of sound recording history was ushered in by the introduction of Western Electric's integrated system of electrical microphones, electronic signal amplifiers and electromechanical recorders, which was adopted by major US record labels in 1925. Sound recording now became a hybrid process — sound could now be captured, amplified, filtered, and balanced electronically, and the disc-cutting head was now electrically powered, but the actual recording process remained essentially mechanical – the signal was still physically inscribed into a wax master disc, and consumer discs were mass-produced mechanically by stamping a metal electroform made from the wax master into a suitable substance, originally a shellac-based compound and later polyvinyl plastic.
The Western Electric system greatly improved the fidelity of sound recording, increasing the reproducible frequency range to a much wider band (between 60 Hz and 6000 Hz) and allowing a new class of professional – the audio engineer – to capture a fuller, richer, and more detailed and balanced sound on record, using multiple microphones connected to multi-channel electronic amplifiers, compressors, filters and mixers. Electrical microphones led to a dramatic change in the performance style of singers, ushering in the age of the crooner, while electronic amplification had a wide-ranging impact in many areas, enabling the development of broadcast radio, public address systems, and electronically amplified home record players.
In addition, the development of electronic amplifiers for musical instruments now enabled quieter instruments such as the guitar and the string bass to compete on equal terms with the naturally louder wind and horn instruments, and musicians and composers also began to experiment with entirely new electronic musical instruments such as the Theremin, the Ondes Martenot, the electronic organ, and the Hammond Novachord, the world's first analog polyphonic synthesizer.
Contemporaneous with these developments, several inventors were engaged in a race to develop practical methods of providing synchronised sound with films. Some early sound films — such as the landmark 1927 film The Jazz Singer – used large soundtrack records which were played on a turntable mechanically interlocked with the projector. By the early 1930s, the movie industry had almost universally adopted sound-on-film technology, in which the audio signal to be recorded was used to modulate a light source that was imaged onto the moving film through a narrow slit, allowing it to be photographed as variations in the density or width of a soundtrack running along a dedicated area of the film. The projector used a steady light and a photoelectric cell to convert the variations back into an electrical signal, which was amplified and sent to loudspeakers behind the screen.
The adoption of sound-on-film also helped movie-industry audio engineers to make rapid advances in the process we now know as multi-tracking, by which multiple separately-recorded audio sources (such as voices, sound effects and background music) can be replayed simultaneously, mixed together, and synchronized with the action on film to create new blended audio tracks of great sophistication and complexity. One of the best-known examples of a constructed composite sound from that era is the famous "Tarzan yell" created for the series of Tarzan movies starring Johnny Weissmuller.
Among the vast and often rapid changes that have taken place over the last century of audio recording, it is notable that there is one crucial audio device, invented at the start of the Electrical Era, which has survived virtually unchanged since its introduction in the 1920s: the electro-acoustic transducer, or loudspeaker. The most common form is the dynamic loudspeaker – effectively a dynamic microphone in reverse. This device typically consists of a shallow conical diaphragm, usually of a stiff paper-like material concentrically pleated to make it more flexible, firmly fastened at its perimeter, with the coil of a moving-coil electromagnetic driver attached around its apex. When an audio signal from a recording, a microphone, or an electrified instrument is fed through an amplifier to the loudspeaker, the varying electromagnetic field created in the coil causes it and the attached cone to move backwards and forward, and this movement generates the audio-frequency pressure waves that travel through the air to our ears, which hear them as sound.
Although there have been numerous refinements to the technology, and other related technologies have been introduced (e.g. the electrostatic loudspeaker), the basic design and function of the dynamic loudspeaker has not changed substantially in 90 years, and it remains overwhelmingly the most common, sonically accurate and reliable means of converting electronic audio signals back into audible sound.
The Magnetic Era (1945–1975)
The third wave of development in audio recording began in 1945 when the allied nations gained access to a new German invention: magnetic tape recording. The technology was invented in the 1930s but remained restricted to Germany (where it was widely used in broadcasting) until the end of World War II. Magnetic tape provided another dramatic leap in audio fidelity—indeed, Allied observers first became aware of the existence of the new technology because they noticed that the audio quality of obviously pre-recorded programs was practically indistinguishable from live broadcasts.
From 1950 onwards, magnetic tape quickly became the standard medium of audio master recording in the radio and music industries and led to the development of the first hi-fi stereo recordings for the domestic market, the development of multi-track tape recording for music, and the demise of the disc as the primary mastering medium for sound. Magnetic tape also brought about a radical reshaping of the recording process—it made possible recordings of far longer duration and much higher fidelity than ever before, and it offered recording engineers the same exceptional plasticity that film gave to cinema editors—sounds captured on tape could now easily be manipulated sonically, edited, and combined in ways that were simply impossible with disc recordings.
These experiments reached an early peak in the 1950s with the recordings of Les Paul and Mary Ford, who pioneered the use of tape editing and multi-tracking to create large virtual ensembles of voices and instruments, constructed entirely from multiple taped recordings of their own voices and instruments. Magnetic tape fueled a rapid and radical expansion in the sophistication of popular music and other genres, allowing composers, producers, engineers and performers to realize previously unattainable levels of complexity. Other concurrent advances in audio technology led to the introduction of a range of new consumer audio formats and devices, on both disc and tape, including the development of full-frequency-range disc reproduction, the change from shellac to polyvinyl plastic for disc manufacture, the invention of the 33⅓ rpm 12-inch long-playing (LP) disc and the 45 rpm 7-inch single, the introduction of domestic and professional portable tape recorders (which enabled high-fidelity recordings of live performances), the popular 4-track cartridge and compact cassette formats, and even the world's first sampling keyboards: the pioneering tape-based keyboard instrument the Chamberlin, and its more famous successor, the Mellotron.
The Digital Era (1975–present)
The fourth and current phase, the digital era, has seen a rapid, dramatic and far-reaching series of changes. In a period of fewer than 20 years, all previous recording technologies were rapidly superseded by digital sound encoding; the Japanese electronics corporation Sony, which had worked on PCM recording since the 1970s, was instrumental in this transition, introducing the first consumer PCM encoder, the PCM-F1, in 1981. Unlike all previous technologies, which captured a continuous analog of the sounds being recorded, digital recording captured sound by means of a very dense and rapid series of discrete samples of the sound. When played back through a digital-to-analog converter, these audio samples are recombined to form a continuous flow of sound. The first all-digitally-recorded popular music album, Ry Cooder's Bop 'Til You Drop, was released in 1979, and from that point, digital sound recording and reproduction quickly became the new standard at every level, from the professional recording studio to the home hi-fi.
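A digital recorder of this kind measures the analog waveform at a fixed rate and stores each measurement as a quantized integer. The sketch below, assuming Python with NumPy and using the CD-era parameters of 44.1 kHz and 16 bits, illustrates the principle rather than any particular product's implementation; the function names are invented for this example.

```python
import numpy as np

SAMPLE_RATE = 44_100   # samples per second (the CD standard)
BIT_DEPTH = 16         # bits per sample

def record_pcm(duration_s: float, freq_hz: float) -> np.ndarray:
    """Sample a pure tone and quantize it to signed 16-bit integers (A/D conversion)."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    analog = np.sin(2 * np.pi * freq_hz * t)          # stand-in for the microphone signal
    return np.round(analog * (2 ** (BIT_DEPTH - 1) - 1)).astype(np.int16)

def play_back(samples: np.ndarray) -> np.ndarray:
    """Convert the stored integers back to a continuous-valued signal (D/A conversion)."""
    return samples.astype(np.float64) / (2 ** (BIT_DEPTH - 1) - 1)

pcm = record_pcm(0.01, 440.0)    # 10 ms of a 440 Hz tone -> 441 discrete samples
analog_again = play_back(pcm)
print(len(pcm), pcm[:5])
```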
Although a number of short-lived hybrid studio and consumer technologies appeared in this period (e.g. Digital Audio Tape or DAT, which recorded digital signal samples onto standard magnetic tape), Sony assured the preeminence of its new digital recording system by introducing, together with Philips, the digital compact disc (CD). The compact disc rapidly replaced both the 12" album and the 7" single as the new standard consumer format and ushered in a new era of high-fidelity consumer audio.
CDs were small, portable and durable, and they could reproduce the entire audible sound spectrum, with a large dynamic range (~96 dB), perfect clarity and no distortion. Because CDs were encoded and read optically, using a laser beam, there was no physical contact between the disc and the playback mechanism, so a well-cared-for CD could be played over and over, with absolutely no degradation or loss of fidelity. CDs also represented a considerable advance in both the physical size of the medium and its storage capacity. LPs could only practically hold about 20–25 minutes of audio per side because they were physically limited by the size of the disc itself and the density of the grooves that could be cut into it — the longer the recording, the closer together the grooves and thus the lower the overall fidelity. CDs, on the other hand, were less than half the overall size of the old 12" LP format, but offered about double the duration of the average LP, with up to 80 minutes of audio.
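Both figures quoted above follow from the format's basic parameters; the arithmetic below is a rough sanity check, not a specification.

```python
# Dynamic range of 16-bit linear PCM: roughly 6.02 dB per bit
print(16 * 6.02)                      # ~96 dB

# Raw data rate of CD audio: 44,100 samples/s x 2 channels x 2 bytes per sample
bytes_per_second = 44_100 * 2 * 2
eighty_minutes = bytes_per_second * 80 * 60
print(eighty_minutes / 1e6)           # ~847 MB of raw audio data for 80 minutes
```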
The compact disc almost totally dominated the consumer audio market by the end of the 20th century, but rapid developments in computing technology rendered it virtually redundant within another decade by the most significant new invention in the history of audio recording — the digital audio file (.wav, .mp3 and other formats). When combined with newly developed digital signal compression algorithms, which greatly reduced file sizes, digital audio files came to dominate the domestic market, thanks to commercial innovations such as Apple's iTunes media application and its popular iPod portable media player.
However, the introduction of digital audio files, in concert with the rapid developments in home computing, soon led to an unforeseen consequence — the widespread unlicensed distribution of audio and other digital media files. The uploading and downloading of large volumes of digital media files at high speed was facilitated by freeware file-sharing technologies such as Napster and BitTorrent.
Although infringement remains a significant issue for copyright owners, the development of digital audio has had considerable benefits for consumers and labels. In addition to facilitating the high-volume, low-cost transfer and storage of digital audio files, this new technology has also powered an explosion in the availability of so-called back-catalog titles stored in the archives of recording labels, thanks to the fact that labels can now convert old recordings and distribute them digitally at a fraction of the cost of physically reissuing albums on LP or CD. Digital audio has also enabled dramatic improvements in the restoration and remastering of acoustic and pre-digital electric recordings, and even freeware consumer-level digital software can very effectively eliminate scratches, surface noise and other unwanted sonic artifacts from old 78rpm and vinyl recordings and greatly enhance the sound quality of all but the most badly damaged records. In the field of consumer-level digital data storage, the continuing trend towards increasing capacity and falling costs means that consumers can now acquire and store vast quantities of high-quality digital media (audio, video, games and other applications), and build up media libraries consisting of tens or even hundreds of thousands of songs, albums, or videos — collections which, for all but the wealthiest, would have been both physically and financially impossible to amass in such quantities if they were on 78 or LP, yet which can now be contained on storage devices no larger than the average hardcover book.
The digital audio file marked the end of one era in recording and the beginning of another. Digital files effectively eliminated the need to create or use a discrete, purpose-made physical recording medium (a disc, or a reel of tape, etc.) as the primary means of capturing, manufacturing and distributing commercial sound recordings. Concurrent with the development of these digital file formats, dramatic advances in home computing and the rapid expansion of the Internet mean that digital sound recordings can now be captured, processed, reproduced, distributed and stored entirely electronically, on a range of magnetic and optical recording media, and these can be distributed anywhere in the world, with no loss of fidelity, and crucially, without the need to first transfer these files to some form of permanent recording medium for shipment and sale.
Music streaming services have gained popularity since the late 2000s. Streaming audio does not require the listener to own the audio files; instead, they listen over the internet. Streaming services offer an alternative method of consuming music, and some follow a freemium business model: services such as Spotify and Apple Music provide a limited amount of content for free and charge for premium features. Streaming services fall into two categories, radio and on-demand. Services such as Pandora use the radio model, allowing users to select playlists but not specific songs, while services such as Apple Music allow users to listen to both individual songs and pre-made playlists.
Acoustical recording
The earliest method of sound recording and reproduction involved the live recording of a performance directly to a recording medium by an entirely mechanical process, often called acoustical recording. In the standard procedure used until the mid-1920s, the sounds generated by the performance vibrated a diaphragm with a recording stylus connected to it while the stylus cut a groove into a soft recording medium rotating beneath it. To make this process as efficient as possible, the diaphragm was located at the apex of a hollow cone that served to collect and focus the acoustical energy, with the performers crowded around the other end. Recording balance was achieved empirically. A performer who recorded too strongly or not strongly enough would be moved away from or nearer to the mouth of the cone. The number and kind of instruments that could be recorded were limited. Brass instruments, which recorded well, often substituted for instruments such as cellos and bass fiddles, which did not. In some early jazz recordings, a block of wood was used in place of the snare drum, which could easily overload the recording diaphragm.
Phonautograph
In 1857, Édouard-Léon Scott de Martinville invented the phonautograph, the first device that could record sound waves as they passed through the air. It was intended only for visual study of the recording and could not play back the sound. The recording medium was a sheet of soot-coated paper wrapped around a rotating cylinder carried on a threaded rod. A stylus, attached to a diaphragm through a series of levers, traced a line through the soot, creating a graphic record of the motions of the diaphragm as it was minutely propelled back and forth by the audio-frequency variations in air pressure.
In the spring of 1877 another inventor, Charles Cros, suggested that the process could be reversed by using photoengraving to convert the traced line into a groove that would guide the stylus, causing the original stylus vibrations to be recreated, passed on to the linked diaphragm, and sent back into the air as sound. Edison's invention of the phonograph soon eclipsed this idea, and it was not until 1887 that yet another inventor, Emile Berliner, actually photoengraved a phonautograph recording into metal and played it back.
Scott's early recordings languished in French archives until 2008 when scholars keen to resurrect the sounds captured in these and other types of early experimental recordings tracked them down. Rather than using rough 19th-century technology to create playable versions, they were scanned into a computer and software was used to convert their sound-modulated traces into digital audio files. Brief excerpts from two French songs and a recitation in Italian, all recorded in 1860, are the most substantial results.
Phonograph/Gramophone
The phonograph, invented by Thomas Edison in 1877, could both record sound and play it back. The earliest type of phonograph sold recorded on a thin sheet of tinfoil wrapped around a grooved metal cylinder. A stylus connected to a sound-vibrated diaphragm indented the foil into the groove as the cylinder rotated. The stylus vibration was at a right angle to the recording surface, so the depth of the indentation varied with the audio-frequency changes in air pressure that carried the sound. This arrangement is known as vertical or hill-and-dale recording. The sound could be played back by tracing the stylus along the recorded groove and acoustically coupling its resulting vibrations to the surrounding air through the diaphragm and a so-called amplifying horn.
The crude tinfoil phonograph proved to be of little use except as a novelty. It was not until the late 1880s that an improved and much more useful form of the phonograph was marketed. The new machines recorded on easily removable hollow wax cylinders and the groove was engraved into the surface rather than indented. The targeted use was business communication, and in that context, the cylinder format had some advantages. When entertainment use proved to be the real source of profits, one seemingly negligible disadvantage became a major problem: the difficulty of making copies of a recorded cylinder in large quantities.
At first, cylinders were copied by acoustically connecting a playback machine to one or more recording machines through flexible tubing, an arrangement that degraded the audio quality of the copies. Later, a pantograph mechanism was used, but it could only produce about 25 fair copies before the original was too worn down. During a recording session, as many as a dozen machines could be arrayed in front of the performers to record multiple originals. Still, a single take would ultimately yield only a few hundred copies at best, so performers were booked for marathon recording sessions in which they had to repeat their most popular numbers over and over again. By 1902, successful molding processes for manufacturing prerecorded cylinders had been developed.
The wax cylinder got a competitor with the advent of the Gramophone, which was patented by Emile Berliner in 1887. The vibration of the Gramophone's recording stylus was horizontal, parallel to the recording surface, resulting in a zig-zag groove of constant depth. This is known as lateral recording. Berliner's original patent showed a lateral recording etched around the surface of a cylinder, but in practice, he opted for the disc format. The Gramophones he soon began to market were intended solely for playing prerecorded entertainment discs and could not be used to record. The spiral groove on the flat surface of a disc was relatively easy to replicate: a negative metal electrotype of the original record could be used to stamp out hundreds or thousands of copies before it wore out. Early on, the copies were made of hard rubber, and sometimes of celluloid, but soon a shellac-based compound was adopted.
Gramophone, Berliner's trademark name, was abandoned in the US in 1900 because of legal complications. As a result, in American English Gramophones and Gramophone records, along with disc records and players made by other manufacturers, were long ago brought under the umbrella term phonograph, a word which Edison's competitors avoided using but which was never his trademark, simply a generic term he introduced and applied to cylinders, discs, tapes and any other formats capable of carrying a sound-modulated groove. In the UK, proprietary use of the name Gramophone continued for another decade until, in a court case, it was adjudged to have become genericized and so could be used freely by competing disc record makers. Consequently, in British English a disc record is called a gramophone record, and phonograph record is traditionally assumed to mean a cylinder.
Not all cylinder records are alike. They were made of various soft or hard waxy formulations or early plastics, sometimes in unusual sizes; did not all use the same groove pitch; and were not all recorded at the same speed. Early brown wax cylinders were usually cut at about 120 rpm, whereas later cylinders ran at 160 rpm for clearer and louder sound at the cost of reduced maximum playing time. As a medium for entertainment, the cylinder was already losing the format war with the disc by 1910, but the production of entertainment cylinders did not entirely cease until 1929 and use of the format for business dictation purposes persisted into the 1950s.
Disc records, too, were sometimes made in unusual sizes, or from unusual materials, or otherwise deviated from the format norms of their eras in some substantial way. The speed at which disc records were rotated was eventually standardized at about 78 rpm, but other speeds were sometimes used. Around 1950, slower speeds became standard: 45, 33⅓, and the rarely used 16⅔ rpm. The standard material for discs changed from shellac to vinyl, although vinyl had been used for some special-purpose records since the early 1930s and some 78 rpm shellac records were still being made in the late 1950s.
Electrical recording
Until the mid-1920s records were played on purely mechanical record players usually powered by a wind-up spring motor. The sound was amplified by an external or internal horn that was coupled to the diaphragm and stylus, although there was no real amplification: the horn simply improved the efficiency with which the diaphragm's vibrations were transmitted into the open air. The recording process was, in essence, the same non-electronic setup operating in reverse, but with a recording stylus engraving a groove into a soft waxy master disc and carried slowly inward across it by a feed mechanism.
The advent of electrical recording in 1925 made it possible to use sensitive microphones to capture the sound and greatly improved the audio quality of records. A much wider range of frequencies could be recorded, the balance of high and low frequencies could be controlled by elementary electronic filters, and the signal could be amplified to the optimum level for driving the recording stylus. The leading record labels switched to the electrical process in 1925 and the rest soon followed, although one straggler in the US held out until 1929.
There was a period of nearly five years, from 1925 to 1930, when the top audiophile technology for home sound reproduction consisted of a combination of electrically recorded records with the specially developed Victor Orthophonic Victrola, an acoustic phonograph that used waveguide engineering and a folded horn to provide a reasonably flat frequency response. The first electronically amplified record players reached the market only a few months later, around the start of 1926, but at first they were much more expensive and their audio quality was impaired by their primitive loudspeakers; they did not become common until the late 1930s.
Electrical recording increased the flexibility of the process, but the performance was still cut directly to the recording medium, so if a mistake was made the whole recording was spoiled. Disc-to-disc editing was possible, by using multiple turntables to play parts of different takes and recording them to a new master disc, but switching sources with split-second accuracy was difficult and lower sound quality was inevitable, so except for use in editing some early sound films and radio recordings it was rarely done.
Electrical recording made it more feasible to record one part to disc and then play that back while playing another part, recording both parts to a second disc. This and conceptually related techniques, known as overdubbing, enabled studios to create recorded performances that feature one or more artists each singing multiple parts or playing multiple instrument parts and that therefore could not be duplicated by the same artist or artists performing live. The first commercially issued records using overdubbing were released by the Victor Talking Machine Company in the late 1920s. However, overdubbing was of limited use until the advent of audio tape. Use of tape overdubbing was pioneered by Les Paul in the 1940s.
Magnetic recording
Magnetic wire recording
Wire recording or magnetic wire recording is an analog type of audio storage in which a magnetic recording is made on thin steel or stainless steel wire.
The wire is pulled rapidly across a recording head, which magnetizes each point along the wire in accordance with the intensity and polarity of the electrical audio signal being supplied to the recording head at that instant. By later drawing the wire across the same or a similar head while the head is not being supplied with an electrical signal, the varying magnetic field presented by the passing wire induces a similarly varying electric current in the head, recreating the original signal at a reduced level.
Magnetic wire recording was replaced by magnetic tape recording, but devices employing one or the other of these media had been more or less simultaneously under development for many years before either came into widespread use. The principles and electronics involved are nearly identical. Wire recording initially had the advantage that the recording medium itself was already fully developed, while tape recording was held back by the need to improve the materials and methods used to manufacture the tape.
Magnetic recording was demonstrated in principle as early as 1898 by Valdemar Poulsen in his telegraphone. Magnetic wire recording, and its successor, magnetic tape recording, involve the use of a magnetized medium that moves with a constant speed past a recording head. An electrical signal, which is analogous to the sound that is to be recorded, is fed to the recording head, inducing a pattern of magnetization similar to the signal. A playback head can then pick up the changes in the magnetic field from the tape and convert it into an electrical signal.
With the addition of electronic amplification developed by Curt Stille in the 1920s, the telegraphone evolved into wire recorders which were popular for voice recording and dictation during the 1940s and into the 1950s. The reproduction quality of wire recorders was significantly lower than that achievable with phonograph disk recording technology. There were also practical difficulties, such as the tendency of the wire to become tangled or snarled. Splicing could be performed by knotting together the cut wire ends, but the results were not very satisfactory.
On Christmas Day, 1932 the British Broadcasting Corporation first used a steel tape recorder for its broadcasts. The device used was a Marconi-Stille recorder, a huge and dangerous machine which used steel tape with sharp edges. The tape was wide and thick and ran at high speed past the recording and reproducing heads, which meant that the length of tape required for a half-hour programme was enormous and a full reel was extremely heavy.
Magnetic tape sound recording
Engineers at AEG, working with the chemical giant IG Farben, created the world's first practical magnetic tape recorder, the 'K1', which was first demonstrated in 1935. During World War II, an engineer at the Reichs-Rundfunk-Gesellschaft discovered the AC biasing technique. With this technique, an inaudible high-frequency signal, typically in the range of 50 to 150 kHz, is added to the audio signal before being applied to the recording head. Biasing radically improved the sound quality of magnetic tape recordings. By 1943 AEG had developed stereo tape recorders.
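The principle of AC biasing can be illustrated in a few lines: a high-frequency tone, well above the audible range, is summed with the audio before it reaches the record head. The frequencies and amplitudes below are illustrative only, and the sketch assumes Python with NumPy.

```python
import numpy as np

SAMPLE_RATE = 400_000                      # high enough to represent a ~100 kHz bias tone
t = np.arange(0, 0.005, 1 / SAMPLE_RATE)   # 5 ms of signal

audio = 0.5 * np.sin(2 * np.pi * 1_000 * t)      # audible programme material (1 kHz)
bias = 2.0 * np.sin(2 * np.pi * 100_000 * t)     # inaudible bias tone (100 kHz)

head_current = audio + bias   # what is actually fed to the record head
```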
During the war, the Allies became aware of radio broadcasts that seemed to be transcriptions (much of this due to the work of Richard H. Ranger), but their audio quality was indistinguishable from that of a live broadcast and their duration was far longer than was possible with 78 rpm discs. At the end of the war, the Allies captured a number of German Magnetophon recorders from Radio Luxembourg which aroused great interest. These recorders incorporated all of the key technological features of analog magnetic recording, particularly the use of high-frequency bias.
American audio engineer John T. Mullin served in the U.S. Army Signal Corps and was posted to Paris in the final months of World War II. His unit was assigned to find out everything it could about German radio and electronics, including the investigation of claims that the Germans had been experimenting with high-energy directed radio beams as a means of disabling the electrical systems of aircraft. Mullin's unit soon amassed a collection of hundreds of low-quality magnetic dictating machines, but it was a chance visit to a studio at Bad Nauheim near Frankfurt while investigating radio beam rumors that yielded the real prize.
Mullin was given two suitcase-sized AEG 'Magnetophon' high-fidelity recorders and fifty reels of recording tape. He had them shipped home and over the next two years, he worked on the machines constantly, modifying them and improving their performance. His major aim was to interest Hollywood studios in using magnetic tape for movie soundtrack recording.
Mullin gave two public demonstrations of his machines, and they caused a sensation among American audio professionals—many listeners could not believe that what they were hearing was not a live performance. By luck, Mullin's second demonstration was held at MGM studios in Hollywood and in the audience that day was Bing Crosby's technical director, Murdo Mackenzie. He arranged for Mullin to meet Crosby and in June 1947 he gave Crosby a private demonstration of his magnetic tape recorders.
Crosby was stunned by the amazing sound quality and instantly saw the huge commercial potential of the new machines. Live music was the standard for American radio at the time and the major radio networks did not permit the use of disc recording in many programs because of their comparatively poor sound quality. But Crosby disliked the regimentation of live broadcasts, preferring the relaxed atmosphere of the recording studio. He had asked NBC to let him pre-record his 1944–45 series on transcription discs, but the network refused, so Crosby had withdrawn from live radio for a year, returning for the 1946–47 season only reluctantly.
Mullin's tape recorder came along at precisely the right moment. Crosby realized that the new technology would enable him to pre-record his radio show with a sound quality that equaled live broadcasts and that these tapes could be replayed many times with no appreciable loss of quality. Mullin was asked to tape one show as a test and was immediately hired as Crosby's chief engineer to pre-record the rest of the series.
Crosby became the first major American music star to use tape to pre-record radio broadcasts and the first to master commercial recordings on tape. The taped Crosby radio shows were painstakingly edited through tape-splicing to give them a pace and flow that was wholly unprecedented in radio. Mullin even claims to have been the first to use canned laughter; at the insistence of Crosby's head writer, Bill Morrow, he inserted a segment of raucous laughter from an earlier show into a joke in a later show that had not worked well.
Keen to make use of the new recorders as soon as possible, Crosby invested $50,000 of his own money into Ampex, and the tiny six-man concern soon became the world leader in the development of tape recording, revolutionizing radio and recording with its famous Ampex Model 200 tape deck, issued in 1948 and developed directly from Mullin's modified Magnetophones.
Development of magnetic tape recorders in the late 1940s and early 1950s is associated with the Brush Development Company and its licensee, Ampex; the equally important development of magnetic tape media itself was led by Minnesota Mining and Manufacturing corporation (now known as 3M).
Multitrack recording
The next major development in the magnetic tape was multitrack recording, in which the tape is divided into multiple tracks parallel with each other. Because they are carried on the same medium, the tracks stay in perfect synchronization. The first development in multitracking was stereo sound, which divided the recording head into two tracks. First developed by German audio engineers ca. 1943, two-track recording was rapidly adopted for modern music in the 1950s because it enabled signals from two or more microphones to be recorded separately at the same time (while the use of several microphones to record on the same track had been common since the emergence of the electrical era in the 1920s), enabling stereophonic recordings to be made and edited conveniently. (The first stereo recordings, on disks, had been made in the 1930s, but were never issued commercially.) Stereo (either true, two-microphone stereo or multi mixed) quickly became the norm for commercial classical recordings and radio broadcasts, although many pop music and jazz recordings continued to be issued in monophonic sound until the mid-1960s.
Much of the credit for the development of multitrack recording goes to guitarist, composer and technician Les Paul, who also helped design the famous electric guitar that bears his name. His experiments with tapes and recorders in the early 1950s led him to order the first custom-built eight-track recorder from Ampex, and his pioneering recordings with his then-wife, singer Mary Ford, were the first to make use of the technique of multitracking to record separate elements of a musical piece asynchronously — that is, separate elements could be recorded at different times. Paul's technique enabled him to listen to the tracks he had already taped and record new parts in time alongside them.
Multitrack recording was immediately taken up in a limited way by Ampex, who soon produced a commercial 3-track recorder. These proved extremely useful for popular music since they enabled backing music to be recorded on two tracks (either to allow the overdubbing of separate parts or to create a full stereo backing track) while the third track was reserved for the lead vocalist. Three-track recorders remained in widespread commercial use until the mid-1960s and many famous pop recordings — including many of Phil Spector's so-called Wall of Sound productions and early Motown hits — were taped on Ampex 3-track recorders. Engineer Tom Dowd was among the first to use multitrack recording for popular music production while working for Atlantic Records during the 1950s.
The next important development was 4-track recording. The advent of this improved system gave recording engineers and musicians vastly greater flexibility for recording and overdubbing, and 4-track was the studio standard for most of the later 1960s. Many of the most famous recordings by The Beatles and The Rolling Stones were recorded on 4-track, and the engineers at London's Abbey Road Studios became particularly adept at a technique called reduction mixes in the UK and bouncing down in the United States, in which several tracks were recorded onto one 4-track machine and then mixed together and transferred (bounced down) to one track of a second 4-track machine. In this way, it was possible to record literally dozens of separate tracks and combine them into finished recordings of great complexity.
All of the Beatles classic mid-1960s recordings, including the albums Revolver and Sgt. Pepper's Lonely Hearts Club Band, were recorded in this way. There were limitations, however, because of the build-up of noise during the bouncing-down process, and the Abbey Road engineers are still famed for their ability to create dense multitrack recordings while keeping background noise to a minimum.
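In digital terms, a reduction mix amounts to summing several recorded tracks into one, freeing the source tracks for new material; on tape, each bounce also re-recorded the accumulated hiss, which is why noise built up with every generation. The sketch below is a schematic of the idea only, assuming Python with NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def bounce_down(tracks: list[np.ndarray], hiss_level: float = 0.01) -> np.ndarray:
    """Mix several tracks down to one, adding a little 'tape hiss' for the extra generation."""
    mix = np.sum(tracks, axis=0) / len(tracks)
    return mix + rng.normal(0.0, hiss_level, size=mix.shape)

# Three backing parts bounced to a single track, freeing the others for overdubs;
# every further bounce adds another layer of hiss, which is why generations were kept few.
tracks = [rng.normal(size=1000) for _ in range(3)]
backing = bounce_down(tracks)
```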
4-track tape also enabled the development of quadraphonic sound, in which each of the four tracks was used to simulate a complete 360-degree sound field. A number of albums were released in both stereo and quadraphonic formats in the 1970s, but 'quad' failed to gain wide commercial acceptance. Although it is now considered a gimmick, it was the direct precursor of the surround sound technology that has become standard in many modern home theatre systems.
In a professional setting today, such as a studio, audio engineers may use 24 tracks or more for their recordings, using one or more tracks for each instrument played.
The combination of the ability to edit via tape splicing and the ability to record multiple tracks revolutionized studio recording. It became common studio recording practice to record on multiple tracks and bounce down afterward. The convenience of tape editing and multitrack recording led to the rapid adoption of magnetic tape as the primary technology for commercial musical recordings. Although 33⅓ rpm and 45 rpm vinyl records were the dominant consumer format, recordings were customarily made first on tape, then transferred to disc, with Bing Crosby leading the way in the adoption of this method in the United States.
Further developments
Analog magnetic tape recording introduces noise, usually called tape hiss, caused by the finite size of the magnetic particles in the tape. There is a direct tradeoff between noise and economics. Signal-to-noise ratio is increased at higher speeds and with wider tracks, and decreased at lower speeds and with narrower tracks.
By the late 1960s, disk reproducing equipment became so good that audiophiles soon became aware that some of the noise audible on recordings was not surface noise or deficiencies in their equipment, but reproduced tape hiss. A few specialist companies started making direct to disc recordings, made by feeding microphone signals directly to a disk cutter (after amplification and mixing), in essence reverting to the pre-War direct method of recording. These recordings never became popular, but they dramatically demonstrated the magnitude and importance of the tape hiss problem.
Before 1963, when Philips introduced the Compact audio cassette, almost all tape recording had used the reel-to-reel format. Previous attempts to package the tape in a convenient cassette that required no threading had met with limited success; the most successful was the 8-track cartridge, used primarily in automobiles for playback only. The Philips Compact audio cassette added much-needed convenience to the tape recording format, and a decade or so later it had begun to dominate the consumer market, although it was to remain lower in quality than open-reel formats.
In the 1970s, advances in solid-state electronics made the design and marketing of more sophisticated analog circuitry economically feasible. This led to a number of attempts to reduce tape hiss through the use of various forms of volume compression and expansion, the most notable and commercially successful being several systems developed by Dolby Laboratories. These systems divided the frequency spectrum into several bands and applied volume compression/expansion independently to each band (Engineers now often use the term compansion to refer to this process). The Dolby systems were very successful at increasing the effective dynamic range and signal-to-noise ratio of analog audio recording; to all intents and purposes, audible tape hiss could be eliminated. The original Dolby A was only used in professional recording. Successors found use in both professional and consumer formats; Dolby B became almost universal for prerecorded music on cassette. Subsequent forms, including Dolby C, (and the short-lived Dolby S) were developed for home use.
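The basic compansion idea can be sketched as a complementary pair of gain curves: quiet passages are boosted (compressed) before they reach the tape and cut back by the same amount (expanded) on playback, so the hiss added in between is pushed down. The single-band sketch below, assuming Python with NumPy, is a deliberate simplification; the actual Dolby systems split the signal into several frequency bands and act on each independently.

```python
import numpy as np

def compress(x: np.ndarray, exponent: float = 0.5) -> np.ndarray:
    """Boost low-level signals before recording (simplified single-band compressor)."""
    return np.sign(x) * np.abs(x) ** exponent

def expand(y: np.ndarray, exponent: float = 0.5) -> np.ndarray:
    """Complementary expander applied on playback."""
    return np.sign(y) * np.abs(y) ** (1 / exponent)

signal = np.array([0.001, 0.01, 0.1, 1.0])   # quiet to loud samples
taped = compress(signal) + 0.0005            # hiss added by the tape itself
restored = expand(taped)                     # the hiss is pushed down along with the boost
print(restored)
```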
In the 1980s, digital recording methods were introduced, and analog tape recording was gradually displaced, although it has not disappeared by any means. (Many professional studios, particularly those catering to big-budget clients, use analog recorders for multitracking and/or mixdown.) Digital audio tape never became important as a consumer recording medium, partially due to legal complications arising from piracy fears on the part of the record companies. They had opposed magnetic tape recording when it first became available to consumers, but the technical difficulty of juggling recording levels, overload distortion, and residual tape hiss was sufficiently high that unlicensed reproduction of magnetic tape never became an insurmountable commercial problem. With digital methods, copies of recordings could be exact, and copyright infringement might have become a serious commercial problem. Digital tape is still used in professional situations and the DAT variant has found a home in computer data backup applications. Many professional and home recordists now use hard-disk-based systems for recording, burning the final mixes to recordable CDs (CD-Rs).
Most police forces in the United Kingdom (and possibly elsewhere) still use analog compact cassette systems to record police interviews, as the medium is less prone to accusations of tampering.
Recording on film
The first attempts to record sound to an optical medium occurred around 1900. Prior to the use of recorded sound in film, theatres would have live orchestras present during silent films. The musicians would sit in the pit below the screen and would provide the background music and set the mood for whatever was occurring in the movie. In 1906, Eugene Augustin Lauste applied for a patent to record sound on film, but he was ahead of his time. In 1923, Lee de Forest applied for a patent to record to film; he also made a number of short experimental films, mostly of vaudeville performers. William Fox began releasing sound-on-film newsreels in 1926, the same year that Warner Bros. released Don Juan with music and sound effects recorded on discs, as well as a series of short films with fully synchronized sound on discs. In 1927, the sound film The Jazz Singer was released; while not the first sound film, it made a tremendous hit and made the public and the film industry realize that sound film was more than a mere novelty.
The Jazz Singer used a process called Vitaphone that involved synchronizing the projected film to sound recorded on a disc. It essentially amounted to playing a phonograph record, but one that was recorded with the best electrical technology of the time. Audiences used to acoustic phonographs and recordings would, in the theatre, have heard something resembling 1950s high fidelity.
However, in the days of analog technology, no process involving a separate disk could hold synchronization precisely or reliably. Vitaphone was quickly supplanted by technologies that recorded an optical soundtrack directly onto the side of the strip of motion picture film. This was the dominant technology from the 1930s through the 1960s and is still in use although the analog soundtrack is being replaced by digital sound on film formats.
There are two types of synchronized film soundtrack, optical and magnetic. Optical soundtracks are visual renditions of sound wave forms and provide sound through a light beam and optical sensor within the projector. Magnetic soundtracks are essentially the same as used in conventional analog tape recording.
Magnetic soundtracks can be joined with the moving image, but splicing creates an abrupt discontinuity because of the offset of the audio track relative to the picture. Whether optical or magnetic, the audio pickup must be located several inches ahead of the projection lamp, shutter and drive sprockets. There is usually a flywheel as well, to smooth out the film's motion and eliminate the flutter that would otherwise result from the pulldown mechanism. Films with a magnetic track should be kept away from strong magnetic sources, such as televisions, as these can weaken or wipe the magnetic sound signal.
For optical recording on film there are two methods utilized. Variable density recording uses changes in the darkness of the soundtrack side of the film to represent the soundwave. Variable area recording uses changes in the width of a dark strip to represent the soundwave.
In both cases, a light that is sent through the part of the film that corresponds to the soundtrack changes in intensity, proportional to the original sound, and that light is not projected on the screen but converted into an electrical signal by a light-sensitive device.
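A variable-area track can be modelled as a stripe whose clear width follows the audio waveform; on playback, the amount of light passing through that width is what the photocell measures. The sketch below uses arbitrary units and invented function names, assuming Python with NumPy, and is only a schematic of the idea.

```python
import numpy as np

MAX_WIDTH = 100   # width of the soundtrack area, in arbitrary units

def to_variable_area(audio: np.ndarray) -> np.ndarray:
    """Map audio samples in [-1, 1] to the clear (transparent) width of the track."""
    return (audio + 1) / 2 * MAX_WIDTH

def read_with_photocell(widths: np.ndarray) -> np.ndarray:
    """Light through the slit is proportional to the clear width; recover the audio."""
    return widths / MAX_WIDTH * 2 - 1

audio = np.sin(np.linspace(0, 2 * np.pi, 24))
track = to_variable_area(audio)
recovered = read_with_photocell(track)   # matches the original waveform
```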
Optical soundtracks are prone to the same sorts of degradation that affect the picture, such as scratching and copying.
Unlike the film image, which is a rapid series of still frames that creates the illusion of continuity, the soundtrack is continuous. This means that if film with a combined soundtrack is cut and spliced, the image will cut cleanly but the soundtrack will most likely produce a cracking sound. Fingerprints on the film may also produce cracking or interference.
In the late 1950s, the cinema industry, desperate to provide a theatre experience that would be overwhelmingly superior to television, introduced widescreen processes such as Cinerama, Todd-AO and CinemaScope. These processes at the same time introduced technical improvements in sound, generally involving the use of multitrack magnetic sound, recorded on an oxide stripe laminated onto the film. In subsequent decades, a gradual evolution occurred with more and more theatres installing various forms of magnetic-sound equipment.
In the 1990s, digital audio systems were introduced and began to prevail. In some of them, the sound recording is again recorded on a separate disk, as in Vitaphone; others use a digital, optical sound track on the film itself. Digital processes can now achieve reliable and perfect synchronization.
Digital recording
The first digital audio recorders were reel-to-reel decks introduced by companies such as Denon (1972), Soundstream (1979) and Mitsubishi. They used a digital technology known as PCM recording. Within a few years, however, many studios were using devices that encoded the digital audio data into a standard video signal, which was then recorded on a U-matic or other videotape recorder, using the rotating-head technology that was standard for video. A similar technology was used for a consumer format, known as Digital Audio Tape (DAT) which used rotating heads on a narrow tape contained in a cassette. DAT records at sampling rates of 48 kHz or 44.1 kHz, the latter being the same rate used on compact discs. Bit depth is 16 bits, also the same as compact discs. DAT was a failure in the consumer-audio field (too expensive, too finicky, and crippled by anti-copying regulations), but it became popular in studios (particularly home studios) and radio stations. A failed digital tape recording system was the Digital Compact Cassette (DCC).
Within a few years after the introduction of digital recording, multitrack recorders (using stationary heads) were being produced for use in professional studios. In the early 1990s, relatively affordable multitrack digital recorders were introduced for use in home studios; they returned to recording on videotape. The most notable of this type of recorder is the ADAT. Developed by Alesis and first released in 1991, the ADAT machine is capable of recording 8 tracks of digital audio onto a single S-VHS video cassette. The ADAT machine, followed by the Tascam equivalent, the DA-88, which used a smaller Hi-8 video cassette, was a common fixture in professional and home studios around the world until approximately 2000, when it was supplanted by various interfaces and DAWs (digital audio workstations), which allowed a computer's hard drive to be the recording medium.
In the consumer market, tapes and gramophone records were largely displaced by the compact disc (CD) and, to a lesser extent, the MiniDisc. These recording media are fully digital and require complex electronics to play back. Digital recording has progressed towards higher fidelity, with formats such as DVD-A offering sampling rates of up to 192 kHz.
Digital sound files can be stored on any computer storage medium. The development of the MP3 audio file format, and the legal issues involved in copying such files, has driven most of the innovation in music distribution since its introduction in the late 1990s.
As hard disk capacities and computer CPU speeds increased at the end of the 1990s, hard disk recording became more popular. As of early 2005, hard disk recording takes two forms. One is the use of standard desktop or laptop computers, with adapters for encoding audio into two or many tracks of digital audio. These adapters can be internal sound cards or external devices, connecting either to interface cards inside the computer or to the computer via USB or FireWire cables. The other common form of hard disk recording uses a dedicated recorder which contains analog-to-digital and digital-to-analog converters as well as one or two removable hard drives for data storage. Such recorders, packing 24 tracks in a few units of rack space, are actually single-purpose computers, which can in turn be connected to standard computers for editing.
The revival of vinyl
Vinyl records, or long-playing (LP) records, have become popular again as a way to consume music despite the rise of digital media. Over 15 thousand units were sold between 2008 and 2012, with sales reaching their highest level in 2012 since 1993. Popular artists have begun releasing their albums on vinyl, and stores such as Urban Outfitters and Whole Foods Market have started selling them. Major music corporations, such as Sony, have started manufacturing LPs for the first time since 1989 as the medium becomes more popular. However, some companies are facing production problems, as there are only 16 record pressing plants currently functioning in the United States.
Technique
The analog tape recorder made it possible to erase or record over a previous recording so that mistakes could be fixed. Another advantage of recording on tape is the ability to cut the tape and join it back together. This allows the recording to be edited. Pieces of the recording can be removed, or rearranged. See also audio editing, audio mixing, multitrack recording.
The advent of electronic instruments (especially keyboards and synthesizers), effects units and other equipment has led to the importance of MIDI in recording. For example, using MIDI timecode, it is possible to have different pieces of equipment 'trigger' without direct human intervention at the time of recording.
In more recent times, computers (digital audio workstations) have found an increasing role in the recording studio, as their use eases the tasks of cutting and looping, as well as allowing for instantaneous changes, such as duplication of parts, the addition of effects and the rearranging of parts of the recording.
See also
Binaural recording
Bootleg recording
High fidelity
Microphone technique
Timeline of audio formats
Volta Laboratory-Sound recording
References
Further reading
Bennett, H. Stith, On Becoming a Rock Musician, Amherst : University of Massachusetts Press, 1980.
Middleton, Richard (1990/2002). Studying Popular Music. Philadelphia: Open University Press.
Milner, Greg, "Perfecting Sound Forever: An Aural History of Recorded Music", Faber & Faber, 1st edition (June 9, 2009). Cf. p. 14 on H. Stith Bennett and "recording consciousness".
"Recording Technology History: notes revised July 6, 2005, by Steven Schoenherr", San Diego University (archived 2010)
External links
First Sounds (audio files of the earliest recorded sound, dating back to the 1850s)
Song recording guide
Recording History – The History of Sound Recording Technology
Listen to The Hen Convention - Australia's oldest surviving piece of recorded sound (1897) on the National Film and Sound Archive's australianscreen online
Sound recording history - Museum of Magnetic Sound Recording
Audio engineering
Sound recording
History of sound recording
| History of sound recording | Technology,Engineering | 10,770 |
24,470,919 | https://en.wikipedia.org/wiki/William%20Klemperer | William A. Klemperer (October 6, 1927 – November 5, 2017) was an American chemist, chemical physicist and molecular spectroscopist. Klemperer is most widely known for introducing molecular beam methods into chemical physics research; for greatly increasing the understanding of nonbonding interactions between atoms and molecules through development of the microwave spectroscopy of van der Waals molecules formed in supersonic expansions; and for pioneering astrochemistry, including developing the first gas-phase chemical models of cold molecular clouds, which predicted an abundance of the molecular HCO+ ion that was later confirmed by radio astronomy.
Biography
Bill Klemperer was born in New York City in 1927 as the child of two physicians. He and his younger brother were raised in New York and New Rochelle. He graduated from New Rochelle High School in 1944 and then enlisted in the U.S. Navy Air Corps, where he trained as a tail gunner. He obtained an A.B. from Harvard University in 1950, majoring in Chemistry, and obtained a Ph.D. in Physical Chemistry under the direction of George C. Pimentel at University of California, Berkeley, in early 1954.
After one semester as an instructor at Berkeley, Bill returned to Harvard in July 1954. Though his initial appointment was as an instructor of analytical chemistry, a position which was considered unlikely to lead to a faculty position, he was appointed full professor in 1965. He remained associated with Harvard Chemistry throughout his long career. He spent 1968–69 on sabbatical at Cambridge University and 1979–81 as Assistant Director for Mathematical and Physical Sciences at the U.S. National Science Foundation. He was a visiting scientist at Bell Laboratories. He also served as an advisor to NASA. Klemperer became an emeritus professor in 2002 but remained active in both research and teaching.
Science
Klemperer's early work concentrated on the infrared spectroscopy of small molecules that are only stable in the gas phase at high temperatures. Among these are the alkali halides, for many of which he obtained the first vibrational spectra. The work provided basic structural data for many oxides and fluorides, and gave insight into the details of the bonding. It also led Klemperer to recognize the potential of molecular beams in spectroscopy, and in particular the use of the electric resonance technique to address fundamental problems in structural chemistry.
Klemperer introduced the technique of supersonic cooling as a spectroscopic tool, which has increased the intensity of molecular beams and also simplified the spectra.
Klemperer helped to found the field of interstellar chemistry. In interstellar space, densities and temperatures are extremely low, and all chemical reactions must be exothermic, with no activation barriers. The chemistry is driven by ion-molecule reactions, and Klemperer's modeling of those that occur in molecular clouds has led to a remarkably detailed understanding of their rich highly non-equilibrium chemistry. Klemperer assigned HCO+ as the carrier of the mysterious but universal "X-ogen" radio-astronomical line at 89.6 GHz, which had been reported by D. Buhl and L.E. Snyder.
Klemperer arrived at this prediction by taking the data seriously. The radio telescope data showed an isolated transition with no hyperfine splitting; thus there were no nuclei in the carrier of the signal with spin of one or greater, nor was it a free radical with a magnetic moment. HCN is an extremely stable molecule, and thus its isoelectronic analog, HCO+, whose structure and spectra could be well predicted by analogy, would also be stable, linear, and have a strong but sparse spectrum. Further, the chemical models he was developing predicted that HCO+ would be one of the most abundant molecular species. Laboratory spectra of HCO+ (taken later by Claude Woods et al.) proved him right and thereby demonstrated that Herbst and Klemperer's models provided a predictive framework for our understanding of interstellar chemistry.
The greatest impact of Klemperer's work has been in the study of intermolecular forces, a field of fundamental importance for all of molecular- and nano-science. Before Klemperer introduced spectroscopy with supersonic beams, the spectra of weakly bound species were almost unknown, having been restricted to dimers of a few very light systems. Scattering measurements provided precise intermolecular potentials for atom–atom systems, but provided at best only limited information on the anisotropy of atom–molecule potentials.
He foresaw that he could synthesize dimers of almost any pair of molecules he could dilute in his beam and study their minimum energy structure in exquisite detail by rotational spectroscopy. This was later extended to other spectral regions by Klemperer and many others, and has qualitatively changed the questions that could be asked. Nowadays it is routine for microwave and infrared spectroscopists to follow his "two step synthesis" to obtain the spectrum of a weakly bound complex: "Buy the components and expand." Klemperer quite literally changed the study of the intermolecular forces between molecules from a qualitative to a quantitative science.
The dimer of hydrogen fluoride was the first hydrogen bonded complex to be studied by these new techniques, and it was a puzzle. Instead of the simple rigid-rotor spectrum, which would have produced a 1 to 0 transition at 12 GHz, the lowest frequency transition was observed at 19 GHz. Arguing by analogy to the well known tunneling-inversion spectrum of ammonia, Klemperer recognized that the key to understanding the spectrum was to recognize that HF–HF was undergoing quantum tunnelling to FH–FH, interchanging the roles of proton donor and acceptor.
Each rotational level was split into two tunneling states, with an energy separation equal to the tunneling frequency multiplied by the Planck constant. The observed microwave transitions all involved a simultaneous change in rotational and tunneling energy. The tunneling frequency is extremely sensitive to the height and shape of the inter-conversion barrier, and thus samples the potential in the classically forbidden regions. Resolved tunneling splittings proved to be common in the spectra of weakly bound molecular dimers.
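For orientation, the conversion between a tunneling splitting expressed as a frequency and the corresponding energy is just the Planck relation; the 10 GHz value below is an arbitrary illustrative figure, not a measured splitting for the HF dimer: $\Delta E = h\,\nu_{\text{tunneling}}$, so a splitting of $10\ \text{GHz}$ corresponds to $\Delta E = (6.626\times10^{-34}\ \text{J s})(10^{10}\ \text{s}^{-1}) \approx 6.6\times10^{-24}\ \text{J}$, or about $0.33\ \text{cm}^{-1}$.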
Awards
Bill Klemperer has had many awards and honors, which include:
Inducted as a Fellow of the American Physical Society, 1954
Elected to the American Academy of Arts and Sciences, 1963
Elected to the National Academy of Sciences, 1969
John Price Wetherill Medal, awarded by the Franklin Institute, 1978
Irving Langmuir Award, awarded by the American Chemical Society, 1980
The Distinguished Service Medal, awarded by the U.S. National Science Foundation, 1981
The Earle K. Plyler Prize for Molecular Spectroscopy, awarded by the American Physical Society, 1983
The Bomem-Michelson Award for the advancement of the field of vibrational spectroscopy, awarded by the Coblentz Society, 1990
Inaugural George C. Pimentel Memorial Lecturer, Chemistry Department, UC Berkeley, 1991–92
The Remsen Award from the Maryland Section of the American Chemical Society, 1992
The Peter Debye Award in Physical Chemistry, awarded by the American Chemical Society, 1994
The Faraday Medal and Lectureship from the Royal Society of Chemistry (England), 1995
Honorary Doctor of Science from the University of Chicago, 1996
Honorary Citizen of Toulouse, France, 2000
E. Bright Wilson Award in Spectroscopy from the American Chemical Society, 2001
References
External links
Faculty Homepage@Harvard
Brief Bio
Video of Klemperer's Faraday Medal Lecture on the Chemistry of Interstellar Space
Klemperer's C.V + publication list up to 2003.
1927 births
2017 deaths
Harvard University alumni
University of California, Berkeley alumni
20th-century American chemists
Spectroscopists
Astrochemists
Harvard University faculty
Members of the United States National Academy of Sciences
Scientists from New Rochelle, New York
Fellows of the American Physical Society
United States Navy personnel of World War II
United States Navy sailors
New Rochelle High School alumni | William Klemperer | Chemistry | 1,631 |
33,937,959 | https://en.wikipedia.org/wiki/Boletellus%20belizensis | Boletellus belizensis is a species of bolete fungus in the family Boletaceae. Found in Belize, it was described as new to science in 2007.
References
External links
belizensis
Fungi described in 2007
Fungi of Central America
Fungus species | Boletellus belizensis | Biology | 50 |
5,952,567 | https://en.wikipedia.org/wiki/Inmarsat | Inmarsat is a British satellite telecommunications company, offering global mobile services. It provides telephone and data services to users worldwide, via portable or mobile terminals which communicate with ground stations through fifteen geostationary telecommunications satellites.
Inmarsat's network provides communications services to a range of governments, aid agencies, media outlets and businesses (especially in the shipping, airline and mining industries) with a need to communicate in remote regions or where there is no reliable terrestrial network. The company was listed on the London Stock Exchange until it was acquired by Connect Bidco, a consortium consisting of Apax Partners, Warburg Pincus, the CPP Investment Board and the Ontario Teachers' Pension Plan, in December 2019.
On 8 November 2021, a deal was announced between Inmarsat's owners and Viasat, in which Viasat was to purchase Inmarsat. The acquisition was completed in May 2023.
History
Origins
The present company originates from the International Maritime Satellite Organization (INMARSAT), a non-profit intergovernmental organisation established in 1979 at the behest of the International Maritime Organization (IMO)—the United Nations maritime body—and pursuant to the Convention on the International Maritime Satellite Organization, signed by 28 countries in 1976. The organisation was created to establish and operate a satellite communications network for the maritime community. In coordination with the International Civil Aviation Organization in the 1980s, the convention governing INMARSAT was amended to include improvements to aeronautical communications, notably for public safety. The member states owned varying shares of the operational business. The main offices were originally located in the Euston Tower, Euston Road, London.
Privatization
In the mid-1990s, many member states were unwilling to invest in improvements to INMARSAT's network, especially owing to the competitive nature of the satellite communications industry, while many recognised the need to maintain the organisation's older systems and the need for an intergovernmental organisation to oversee public safety aspects of satellite communication networks. In 1998, an agreement was reached to modify INMARSAT's mission as an intergovernmental organisation and separate and privatise the organisation's operational business, with public safety obligations attached to the sale.
In April 1999, INMARSAT was succeeded by the International Mobile Satellite Organization (IMSO) as an intergovernmental regulatory body for satellite communications, while INMARSAT's operational unit was separated and became the UK-based company Inmarsat Ltd. The IMSO and Inmarsat Ltd. signed an agreement imposing public safety obligations on the new company. Inmarsat was the first international satellite organisation that was privatised.
In 2005, Apax Partners and Permira bought shares in the company. The company was also first listed on the London Stock Exchange in that year. In March 2008, it was disclosed that U.S. hedge fund Harbinger Capital owned 28% of the company. In 2009, Inmarsat completed the acquisition of satellite communications provider Stratos Global Corporation (Stratos) and acquired a 19-per cent stake in SkyWave Mobile Communications Inc., a provider of Inmarsat D+/IsatM2M network services which in turn purchased the GlobalWave business from TransCore. Inmarsat won the 2010 MacRobert Award for its Broadband Global Area Network (BGAN) service.
Inmarsat at first provided services using Marisat and MARECS, which were launched by the US Navy and ESA respectively. In the early 1990s Inmarsat launched its first dedicated satellite constellation, Inmarsat-2. These satellites provided the Inmarsat-A service for maritime uses. Between 1996 and 1998 Inmarsat's second constellation, Inmarsat-3, was launched. Consisting of five geostationary L-band satellites the constellation provides the Inmarsat-B and Inmarsat-C services, primarily providing low bandwidth communications and safety services for global shipping. Following privatisation in 1999 Inmarsat developed and launched the first satellite communications system offering global coverage, BGAN. This service was provided initially through the three Inmarsat-4 satellite launched between 2005 and 2008, and was then extended with the addition of Alphasat in 2013. In the 2010s, Inmarsat began development of the High Throughput Satellite (HTS) constellation Global Xpress, operating in the Ka-band portion of the spectrum. Global Xpress, launched in 2015, offers global satellite capacity to various markets including shipping and aviation. Global Xpress also marks a significant expansion of Inmarsat's commercial operations in the aviation markets. In 2017, Inmarsat launched its first S-band satellite, intended to provide (in association with an LTE ground network) inflight internet access across Europe. In March 2018, Inmarsat partnered with Isotropic Systems to develop a state-of-the-art, all electronic scanning antenna intended to be used with the Global Xpress network.
On 20 September 2018, Inmarsat announced its strategic collaboration with Panasonic Avionics Corporation for an initial ten-year period, to provide in-flight broadband for commercial airlines. Inmarsat will be the exclusive provider of Panasonic for connectivity using the Ka-band satellite signal. Inmarsat will now be offering Panasonic's portfolio of services to its commercial aviation customers.
Malaysia Airlines Flight 370
In March 2014, Malaysia Airlines Flight 370 disappeared with 239 passengers and crew en route from Kuala Lumpur to Beijing. After turning away from its planned path and disappearing from radar coverage, the aircraft's satellite data unit remained in contact with Inmarsat's ground station in Perth via the IOR satellite (Indian Ocean Region, 64° East). The aircraft used Inmarsat's Classic Aero satellite phone service. Analysis of these communications by Inmarsat and independently by other agencies determined that the aircraft flew into the southern Indian Ocean and was used to guide the search for the aircraft.
Takeover by Connect Bidco and privatisation
In March 2019 the company's board agreed to recommend a takeover offer of US$3.4 billion from Connect Bidco, a consortium consisting of Apax Partners, Warburg Pincus, the CPP Investment Board and the Ontario Teachers' Pension Plan. On 9 October 2019, Bloomberg reported that the UK government was set to approve the takeover with the final consultation for the deal set to conclude on 24 October 2019. In November 2019, Inmarsat rejected an eleventh-hour effort to derail the US$6 billion sale, in which it was accused of ignoring a potential boost to the company's value. Oaktree argued that the recommended offer for Inmarsat failed to take account of the potential value of spectrum assets used by Inmarsat's U.S. partner Ligado. Inmarsat delisted from London Stock Exchange, as the private equity funds took control of the company, on 5 December 2019; at the time, Inmarsat was operating 14 geostationary communications satellites.
Acquisition by Viasat
On 8 November 2021, a $7.3bn deal was announced between Inmarsat's owners, led by Apax and Warburg Pincus, and Viasat in which Viasat would purchase Inmarsat for $850m in cash, issuing approximately 46 million shares of Viasat stock and taking on $3.4bn in debt. Viasat has promised to honour a pledge made by the previous owners, when it was taken private in 2019, that Inmarsat would remain a UK-based company, and for other planned investments.
Provisional approval for the merger was given by the UK's Competition and Markets Authority in March 2023 with 25 May 2023 set as the date for a formal decision. On 31 May 2023, the acquisition was closed.
Operations
The Inmarsat head office is at Old Street Roundabout in the London Borough of Islington. Aside from its commercial services, Inmarsat provides Global Maritime Distress and Safety System (GMDSS) to ships and aircraft at no charge, as a public service.
Services include traditional voice calls, low-level data tracking systems, and high-speed Internet and other data services as well as distress and safety services. The Broadband Global Area Network (BGAN) network provides General Packet Radio Service (GPRS) - type services at up to 800 kbit/s at a latency of 900-1100 ms via an Internet Protocol (IP) satellite modem the size of a notebook computer, while the Global Xpress network offers up to 50 Mbit/s at a latency of 700 ms via antennas as small as 60 cm. Other services provide mobile Integrated Services Digital Network (ISDN) services used by the media for live reporting on world events via videophone, and inflight Internet access via the European Aviation Network.
The price of a call via Inmarsat has now dropped to a level where they are comparable to, and in many cases lower than, international roaming costs, or hotel phone calls. Voice call charges are the same for any location in the world where the service is used. Tariffs for calls to Inmarsat country codes vary, depending on the country in which they are placed. Inmarsat primarily uses country code 870 (see below). Newer Inmarsat services use an IP technology that features an always-on capability where the users are only charged for the amount of data they send and receive, rather than the length of time they are connected. In addition to its own satellites, Inmarsat has a collaboration agreement with ACeS regarding handheld voice services.
Coverage
There are three types of coverage related to each Inmarsat I-4 satellite.
Global beam coverage
Each satellite is equipped with a single global beam that covers up to one-third of the Earth's surface, apart from the poles. Overall, global beam coverage extends from latitudes of −82 to +82° regardless of longitude.
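The ±82° limit is close to what simple spherical geometry gives for a geostationary satellite; the estimate below ignores Earth's oblateness, atmospheric refraction and any minimum elevation-angle requirement, and uses round values for the Earth's radius and the geostationary orbital radius: $\cos\theta_{\max} = \dfrac{R_E}{r_{\text{GEO}}} \approx \dfrac{6378\ \text{km}}{42164\ \text{km}} \approx 0.151$, giving $\theta_{\max} \approx 81.3^\circ$ as the largest Earth-central angle (roughly, the highest latitude) from which the satellite is still above the horizon.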
Regional spot beam coverage
Each regional beam covers a fraction of the area covered by a global beam, but collectively all of the regional beams offer virtually the same coverage as the global beams. Use of regional beams allow user terminals (also called mobile earth stations) to operate with significantly smaller antennas. Regional beams were introduced with the I-3 satellites. Each I-3 satellite provides four to six spot beams; each I-4 satellite provides 19 regional beams.
Narrow spot beam coverage
Narrow beams are offered by the three Inmarsat-4 satellites. Narrow beams vary in size but tend to be several hundred kilometres across. The narrow beams, while much smaller than the global or regional beams, are far more numerous and hence offer the same global coverage. Narrow spot beams allow yet smaller antennas and much higher data rates. They form the backbone of Inmarsat's handheld (GSPS) and broadband services (BGAN). This coverage was introduced with the I-4 satellites. Each I-4 satellite provides around 200 narrow spot beams.
Global Xpress (I-5)
The Inmarsat I-5 satellites provide global coverage using four geostationary satellites. Each satellite supports 89 beams, giving a total coverage of approximately one-third of the Earth's surface per satellite. In addition, 6 steerable beams are available per satellite, which may be moved to provide higher capacity to selected locations.
On 26 November 2019, the first satellite to extend the original four-satellite, first-generation Global Xpress constellation was launched from Centre Spatial Guyanais (CSG) by an Ariane 5 launch vehicle.
Satellites
Country codes
The permanent telephone country code for calling Inmarsat destinations is:
870 SNAC (Single Network Access Code)
The 870 number is an automatic locator; it is not necessary to know to which satellite the destination Inmarsat terminal is logged-in. SNAC is now usable by all Inmarsat services.
Country codes phased out on 31 December 2008 were:
871 Atlantic Ocean Region – East (AOR-E)
872 Pacific Ocean Region (POR)
873 Indian Ocean Region (IOR)
874 Atlantic Ocean Region – West (AOR-W)
Since 18 July 2017, Inmarsat users using the service provided by China Transport Telecommunication & Information Center may apply for 11 digits Chinese mobile phone numbers starting with 1749. An international call function is not required when making phone calls to such numbers from Mainland China.
Networks
Inmarsat networks provide existing, evolved, and advanced services. Existing and evolved services are offered through land Earth stations which are neither owned nor operated by Inmarsat, but through companies which have a commercial agreement with Inmarsat. Advanced services are provided via distribution partners, but the satellite gateways are owned and operated by Inmarsat directly.
High Throughput Services
Global Xpress: Since 2015, Inmarsat has offered high throughput services through the Global Xpress network. This service provides an IP-based global service of up to 50 Mbit/s downlink and 5 Mbit/s uplink at a latency of 700 ms. Services are provided for maritime, aviation, government and enterprise markets. Global Xpress is supported by the existing BGAN L-band network, and services are offered using a combination of the two networks to increase availability and reliability. In March 2018, Inmarsat partnered with Isotropic Systems to develop an all-electronic scanning antenna intended to be used with the Global Xpress network.
European Aviation Network: Inmarsat offers aviation services through the European Aviation Network, developed in partnership with Deutsche Telekom. The European Aviation Network uses a ground-based LTE network and an Inmarsat S-band satellite to provide 50 Gbit/s capacity to aircraft in European airspace. The project faced a number of legal and regulatory challenges. In October 2017, Inmarsat stated that commercial service would begin in 2018. Construction of the ground network was completed in February 2018, and by mid-2019 the service was offered on over 100 routes from key destinations such as London, Madrid, Barcelona, Athens, Lisbon, Prague, Rome and Vienna. As of early 2021, the service has been used on 200,000 flights throughout Europe on flights on 250 aircraft operated by British Airways, Iberia and Vueling.
Advanced services
The "BGAN Family" is a set of IP-based shared-carrier services:
BGAN: Broadband Global Area Network for use on land. BGAN uses the I-4 satellites to offer a shared-channel IP packet-switched service of up to 800 kbit/s (uplink and downlink speeds may differ and depend on terminal model) and a streaming-IP service from 32 kbit/s up to X-Stream data rate (services depend on terminal model). Most terminals also offer circuit-switched Mobile Integrated Services Digital Network (ISDN) services at 64 kbit/s and even low speed (4.8 kbit/s) voice etc. services. BGAN service is available globally on all I4 satellites.
FleetBroadband (FB): A maritime service, FleetBroadband is based on BGAN technology, offering similar services and using the same infrastructure as BGAN. A range of Fleet Broadband user terminals are available, designed for ships.
SwiftBroadband (SB): An aeronautical service, SwiftBroadband is based on BGAN technology and offers similar services. SB terminals are designed for commercial, private, and military aircraft.
M2M communications
The "BGAN M2M Family" is a set of IP-based services designed for long-term machine-to-machine management of fixed assets:
BGAN M2M: launched in January 2012, is a global, IP-based low-data rate service, designed for high data availability and performance in permanently unmanned environments. With high-frequency, very low-latency data reporting, BGAN M2M is intended for monitoring fixed assets such as pipelines and oil well heads, or backhauling electricity consumption data within a utility.
IsatM2M: IsatM2M is a global, short burst data, store and forward service intended to deliver messages of 10.5 or 25.5 bytes in the send direction, and 100 bytes in the receive direction. The service is delivered to market via SkyWave Mobile Communications and Honeywell Global Tracking.
IsatData Pro: IsatData Pro is a global satellite data service designed for two-way text and data communications to remote assets with message size to mobile: 10 KB / from mobile: 6.4 KB with typical delivery time of 15 seconds. This service is intended for mission-critical applications for managing trucks, fishing vessels and oil and gas and heavy equipment, text message remote workers and security applications. It is provided by SkyWave Mobile Communications Inc, part of Orbcomm.
Global voice services
The company offers portable and fixed phone services:
IsatPhone 2: IsatPhone 2 is a mobile satellite phone offering voice telephony. It has a variety of data capabilities, including SMS, short message emailing and GPS look-up-and-send, as well as supporting a data service of up to 20 kbit/s.
IsatPhone Link: IsatPhone Link is a low-cost, fixed, global satellite phone service. It provides voice connectivity for those working or living in areas without cellular coverage and includes data capabilities.
FleetPhone: Inmarsat's FleetPhone service is a fixed phone service for use on smaller vessels where voice communications is the primary requirement or on vessels where additional voice lines are needed. It provides a low-cost, global satellite phone service option for those working or sailing outside cellular coverage.
Existing and evolved services
These are based on older technologies:
Aeronautical (Classic Aero): provides analogue voice/fax/data services for aircraft. Three levels of terminals, Aero-L (Low Gain Antenna) primarily for packet data including ACARS and ADS, Aero-H (High Gain Antenna) for medium quality voice and fax/data at up to 9600 bit/s, and Aero-I (Intermediate Gain Antenna) for low quality voice and fax/data at up to 2400 bit/s. There are also aircraft-rated versions of Inmarsat-C and mini-M/M4. The aircraft version of GAN is called Swift 64 (see below).
Inmarsat-C: this is effectively a "satellite telex" terminal with low-speed all-digital (transmission bit rate 1200 bit/s and information bit rate of 600 bit/s) store-and-forward, polling etc. capabilities. Certain models of Inmarsat-C terminals are approved for the Global Maritime Distress and Safety System (GMDSS) system, equipped with GPS.
Fleet: a family of networks that includes the Inmarsat-Fleet77, Inmarsat-Fleet55 and Inmarsat-Fleet33 members (The numbers 77, 55 and 33 come from the diameter of the antenna in centimetres). Much like GAN, it provides a selection of low speed services like voice at 4.8 kbit/s, fax/data at 2.4 kbit/s, medium speed services like fax/data at 9.6 kbit/s, ISDN like services at 64 kbit/s (called Mobile ISDN) and shared-channel IP packet-switched data services at 64 kbit/s (called Mobile Packet Data Service or MPDS - see below). However, not all services are available with all members of the family. The latest service to be supported is Mobile ISDN at 128 kbit/s on Inmarsat-Fleet77 terminals.
Swift 64: Similar to GAN, providing voice, low rate fax/data, 64 kbit/s ISDN, and MPDS services, for private, business, and commercial aircraft. Swift 64 is often sold in a multi-channel version, to support several times 64 kbit/s.
Inmarsat D/D+/IsatM2M: Inmarsat's pager, although much larger than terrestrial versions. Some units are equipped with GPS. The original Inmarsat-D terminals were one-way (to mobile) pagers. The newer Inmarsat-D+ terminals are the equivalent of a two-way pager. The main use of this technology nowadays is in tracking trucks and buoys and SCADA applications.
MPDS (Mobile Packet Data Service): Previously known as IPDS, this is an IP-based data service in which several users share a 64 kbit/s carrier in a manner similar to ADSL. MPDS-specific terminals are not sold; rather, this is a service which comes with most terminals that are designed for GAN, Fleet, and Swift64.
IsatPhone: provides voice services at 4.8 kbit/s and medium speed fax/data services at 2.4 kbit/s. This service emerged from a collaboration agreement with ACeS, and is available in the EMEA and APAC satellite regions. Coverage is available in Africa, the Middle-East, Asia and Europe, as well as in maritime areas of the EMEA and APAC coverage.
Retired services
Inmarsat-B: service was retired on 30 December 2016. It provided digital voice services, telex services, medium speed fax/data services at 9.6 kbit/s and high speed data services at 56, 64 or 128 kbit/s. There was also a 'leased' mode for Inmarsat-B available on the spare Inmarsat satellites.
Inmarsat-M: provides voice services at 4.8 kbit/s and medium speed fax/data services at 2.4 kbit/s. It paved the way towards Inmarsat-Mini-M. Service has ended.
Mini-M: provides voice services at 4.8 kbit/s and medium speed fax/data services at 2.4 kbit/s. One 2.4 kbit/s channel takes up 4.8 kbit/s on the satellite. Service was closed early January 2017.
GAN (Global Area Network): provides a selection of low speed services like voice at 4.8 kbit/s, fax and data at 2.4 kbit/s, ISDN like services at 64 kbit/s (called Mobile ISDN) and shared-channel IP packet-switched data services at 64 kbit/s (called Mobile Packet Data Service or MPDS, formerly Inmarsat Packet Data Service – IPDS). GAN is also known as "M4". Service was closed early in January 2017.
Projects since 2008
European Aviation Network
On 30 June 2008, the European Parliament and the European Council adopted a decision to establish a single selection and authorisation process (ESAP – European S-band Application Process) to ensure a coordinated introduction of mobile satellite services (MSS) in Europe. The late 2008 selection process attracted four applications by prospective operators (ICO Global Communications (ICO), Inmarsat, Solaris Mobile (EchoStar Mobile), and TerreStar).
In May 2009, the European Commission selected two operators, Inmarsat Ventures and Solaris Mobile, giving these operators "the right to use the specific radio frequencies identified in the Commission's decision and the right to operate their respective mobile satellite systems". EU Member States now have to ensure that the two operators have the right to use the specific radio frequencies identified in the commission's decision and the right to operate their respective mobile satellite systems for 18 years from the selection decision. The operators are compelled to start operations within 24 months (May 2011) from the selection decision.
Inmarsat's S-band satellite programme provides mobile multimedia broadcast, mobile two-way broadband telecommunications and next-generation MSS services across all member states of the European Union and as far east as Moscow and Ankara by means of a hybrid satellite/terrestrial network. It was built by Thales Alenia Space and launched in 2017. The complementary ground network consists of around 300 LTE base stations constructed by Deutsche Telekom.
The European Aviation Network faced legal challenges, including one from Viasat alleging unfair bidding practices and a misuse of spectrum, and a ruling by the Belgian telecommunications regulator revoking permission for the use of the ground network in Belgium.
Global Xpress Expansion
Inmarsat ordered a fifth Global Xpress satellite from Thales Group. The satellite launched on 26 November 2019 from Centre Spatial Guyanais (CSG) aboard an Ariane 5 launch vehicle. The satellite has been described as a 'very high throughput satellite' and provides services to the Middle East, India and Europe. Former CEO Rupert Pearce indicated that Inmarsat planned further expansion of the Global Xpress network; Rajeev Suri subsequently became CEO. Trials of new technologies demonstrated bandwidths of 330 Mbit/s over the existing Global Xpress network, far in excess of the existing 50 Mbit/s.
Polar Coverage
To provide GX coverage to users in the Arctic region, Inmarsat plans to extend Global Xpress connectivity to above 65° North.
Two high-capacity, multi-beam payloads, GX-10A and GX-10B, were planned in highly elliptical orbits (HEO) for reliable coverage. Inmarsat worked in partnership with Space Norway HEOSAT in the Arctic Satellite Broadband Mission. The satellites carrying the Inmarsat payload were to be manufactured by Northrop Grumman Innovation Systems (NGIS). Launch of GX-10A and 10B was scheduled for 2022.
Inmarsat-6
At the end of 2015, Inmarsat ordered two sixth-generation satellites from Airbus. These satellites were planned to carry both Ka- and L-band payloads and provide additional capacity for the existing BGAN and Global Xpress networks. In 2017, it was announced that the first of these satellites would be launched by MHI in December 2021. The first of the two satellites, Inmarsat-6 F1, was successfully launched on 22 December 2021 on an H-IIA rocket. The second satellite, Inmarsat-6 F2 (GX 6B), was launched on 18 February 2023 on a Falcon 9 Block 5 rocket, but suffered a power system failure in orbit that prevented it from becoming operational.
Inmarsat-8
In May 2023, Inmarsat ordered three eighth-generation satellites from SWISSto12 SA, which will be based on their HummingSat satellite platform.
IRIS and ICE
Inmarsat is participating in two ESA ARTES programs, IRIS and ICE:
IRIS is a project to improve tracking of aircraft, and to improve communications between aircraft and air traffic controllers. Inmarsat will provide high capacity satellite communications links for aircraft, and improve detection of aircraft locations in time and space.
ICE (Inmarsat Communications Evolution) is a partnership with industrial partners intended to identify innovative technologies that can expand and enhance the capabilities of the next generation of satellite communications.
Issues
INMARSAT and Iridium frequency bands abut each other at 1626.5 MHz; thus each satcom radio has the ability to interfere with the other. Usually, the far more powerful INMARSAT radio disrupts the Iridium radio up to away.
See also
Mobile-satellite service
Satellite phone
AeroMobile
DVB-SH
Globalstar
Globalsat Group
Intersputnik
Iridium Communications
Librestream
Maritime safety information
O3b Networks
OnAir (telecommunications)
Orbcomm
Radiotelephone
SES Broadband for Maritime
Sky and Space Global
Thuraya
Wideband Global SATCOM (WGS)
References
External links
Communications satellite operators
Telecommunications companies of the United Kingdom
Technology companies based in London
Satellite telephony
English brands
Telecommunications companies established in 1979
Apax Partners companies
Satellite Internet access
Special international telephone services
Companies listed on the London Stock Exchange
1979 establishments in England
British companies established in 1979
Warburg Pincus companies
2023 mergers and acquisitions
Permira companies | Inmarsat | Technology | 5,670 |
1,953,160 | https://en.wikipedia.org/wiki/Limnic%20eruption | A limnic eruption, also known as a lake overturn, is a very rare type of natural hazard in which dissolved carbon dioxide (CO2) suddenly erupts from deep lake waters, forming a gas cloud capable of asphyxiating wildlife, livestock, and humans. Scientists believe earthquakes, volcanic activity, and other explosive events can serve as triggers for limnic eruptions, as the rising CO2 displaces water. Lakes in which such activity occurs are referred to as limnically active lakes or exploding lakes. Some features of limnically active lakes include:
CO2-saturated incoming water
A cool lake bottom, indicating an absence of direct volcanic heating of lake waters
An upper and lower thermal layer with differing CO2 saturations
Proximity to areas with volcanic activity
Investigations of the Lake Monoun and Lake Nyos casualties led scientists to classify limnic eruptions as a distinct type of hazard event, even though they can be indirectly linked to volcanic eruptions.
Historical occurrences
Due to the largely invisible nature of the underlying cause ( gas) behind limnic eruptions, it is difficult to determine to what extent, and when, eruptions have occurred in the past. The Roman historian Plutarch reports that in 406BC, Lake Albano surged over the surrounding hills, despite there being no rain nor tributaries flowing into the lake to account for the rise in water level. The ensuing flood destroyed fields and vineyards before eventually pouring into the sea. This event is thought to have been caused by volcanic gases, trapped in sediment at the bottom of the lake and gradually building up until suddenly releasing, causing the water to overflow.
In recent history, this phenomenon has been observed twice. The first recorded limnic eruption occurred in Cameroon at Lake Monoun in 1984, causing the asphyxiation and death of 37 people living nearby. A second, deadlier eruption happened at neighboring Lake Nyos in 1986, releasing over 80 million m3 of CO2, killing around 1,700 people and 3,000 livestock, again by asphyxiation.
A third lake, the much larger Lake Kivu, rests on the border between the Democratic Republic of the Congo and Rwanda, and contains massive amounts of dissolved CO2. Sediment samples taken from the lake showed that an event caused living creatures in the lake to go extinct around every 1,000 years, and caused nearby vegetation to be swept back into the lake. Limnic eruptions can be detected and quantified on a CO2 concentration scale by taking air samples of the affected region. The Messel pit fossil deposits of Messel, Germany, show evidence of a limnic eruption there in the early Eocene. Among the victims are perfectly preserved insects, frogs, turtles, crocodiles, birds, anteaters, insectivores, early primates, and paleotheres.
Causes
For a lake to undergo a limnic eruption, the water must be nearly saturated with gas. CO2 was the primary component in the two observed cases, Lake Nyos and Lake Monoun. In Lake Kivu's case, scientists, including lake physicist Alfred Johny Wüest, were also concerned about the concentrations of methane. The CO2 may originate from volcanic gas emitted from under the lake, or from the decomposition of organic material.
Before a lake becomes saturated, it behaves like an unopened carbonated soft drink: the CO2 is dissolved in the water. In both lakes and soft drinks, CO2 dissolves much more readily at higher pressure due to Henry's law. When the pressure is released, the CO2 comes out of solution as bubbles of gas, which rise to the surface. CO2 also dissolves more readily in cooler water, so very deep lakes can dissolve very large amounts of CO2, since pressure increases, and temperature decreases, with depth. A small increase in water temperature can lead to the release of a large amount of CO2.
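Henry's law states that, at constant temperature, the dissolved gas concentration is proportional to the partial pressure of the gas; the constant quoted below is an approximate literature value for CO2 in water near 25 °C and is given only for illustration: $c = k_H \, p_{\mathrm{CO_2}}$, with $k_H \approx 3.4\times10^{-2}\ \mathrm{mol\,L^{-1}\,atm^{-1}}$. Because $k_H$ increases as the water cools, colder, deeper water under higher pressure can hold correspondingly more dissolved CO2.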
Once a lake is saturated, it is very unstable and it gives off a smell of rotten eggs and gunpowder, but a trigger is needed to set off an eruption. In the case of the 1986 Lake Nyos eruption, landslides were the suspected triggers, but a volcanic eruption, an earthquake, or even wind and rain storms can be potential triggers. Limnic eruptions can also be caused by gradual gas saturation at specific depths triggering spontaneous gas development. Regardless of cause, the trigger pushes gas-saturated water higher in the lake, where the reduced pressure is insufficient to keep the gas in solution. The buoyancy from the resulting bubbles lifts the water even higher, releasing yet more bubbles. This process forms a column of gas, at which point the water at the bottom is pulled up by suction, and it too loses CO2 in a runaway process. This eruption discharges the gas into the air and can displace enough water to form a tsunami.
Limnic eruptions are exceptionally rare for several reasons. First, a CO2 source must exist; regions with volcanic activity are most at risk. Second, the vast majority of lakes are holomictic (their layers mix regularly), preventing a buildup of dissolved gases. Only meromictic lakes are stratified, allowing CO2 to remain dissolved. It is estimated only one meromictic lake exists for every 1,000 holomictic lakes. Finally, a lake must be very deep in order to have sufficiently pressurized water that can dissolve large amounts of CO2.
Consequences
Once an eruption occurs, a large CO2 cloud forms above the lake and expands to the surrounding region. Because CO2 is denser than air, it has a tendency to sink to the ground, simultaneously displacing breathable air, resulting in asphyxia. CO2 can make human bodily fluids highly acidic and potentially cause CO2 poisoning. As victims gasp for air, they actually accelerate asphyxia by inhaling CO2.
At Lake Nyos, the gas cloud descended into a nearby village where it settled, killing nearly everyone; casualties as far as were reported. A change in skin color on some bodies led scientists to hypothesize the gas cloud may have contained dissolved acid such as hydrogen chloride, though this hypothesis is disputed. Many victims were found with blisters on their skin, thought to have been caused by pressure ulcers, which were likely caused by low blood oxygen levels in those asphyxiated by carbon dioxide. Nearby vegetation was largely unaffected, except any growing immediately adjacent to the lake. There, vegetation was damaged or destroyed by a high tsunami caused by the violent eruption.
Degassing
Efforts are underway to develop a solution for removing the gas from these lakes and to prevent a build-up which could lead to another disaster. A team led by French scientist Michel Halbwachs began experimenting at Lake Monoun and Lake Nyos in 1990 using siphons to degas the waters of these lakes in a controlled manner. The team positioned a pipe vertically in the lake with its upper end above the water surface. Water saturated with CO2 enters the bottom of the pipe and rises to the top. The lower pressure at the surface allows the gas to come out of solution. Only a small amount of water must be mechanically pumped initially through the pipe to start the flow. As saturated water rises, the CO2 comes out of solution and forms bubbles. The natural buoyancy of the bubbles draws the water up the pipe at high velocity, resulting in a fountain at the surface. The degassing water acts like a pump, drawing more water into the bottom of the pipe, and creating a self-sustaining flow. This is the same process which leads to a natural eruption, but in this case it is controlled by the size of the pipe.
Each pipe has a limited pumping capacity and several would be required for both Lake Monoun and Lake Nyos to degas a significant fraction of the deep lake water and render the lakes safe. The deep lake waters are slightly acidic due to the dissolved CO2, which causes corrosion to the pipes and electronics, necessitating ongoing maintenance. There is some concern that CO2 from the pipes could settle on the surface of the lake, forming a thin layer of unbreathable air and thus potentially causing problems for wildlife.
In January 2001, a single pipe was installed by the French-Cameroonian team on Lake Nyos, and two more pipes were installed in 2011 with funding support from the United Nations Development Programme. A pipe was installed at Lake Monoun in 2003 and two more were added in 2006. These three pipes are thought to be sufficient to prevent an increase in CO2 levels, removing approximately the same amount of gas that naturally enters at the lake bed. In January 2003, an 18-month project was approved to fully degas Lake Monoun, and the lake has since been rendered safe.
There is some evidence that Lake Michigan in the United States spontaneously degasses on a much smaller scale each fall.
Lake Kivu risks
Lake Kivu is not only about 1,700 times larger than Lake Nyos, but is also located in a far more densely populated area, with over two million people living along its shores. The part within the Democratic Republic of the Congo is a site of active armed conflict and low state capacity for the DRC government, which impedes both studies and any subsequent mitigating actions. Lake Kivu has not reached a high level of CO2 saturation yet; if the water were to become heavily saturated, a limnic eruption would pose a great risk to human and animal life, potentially killing millions.
Two significant changes in Lake Kivu's physical state have brought attention to a possible limnic eruption: the high rates of methane dissociation and a rising surface temperature. Research investigating historical and present-day temperatures shows that Lake Kivu's surface temperature is increasing by about 0.12 °C per decade. Lake Kivu is in close proximity to potential triggers: Mount Nyiragongo (an active volcano which erupted in January 2002 and May 2021), an active earthquake zone, and other active volcanoes.
While the lake could be degassed in a manner similar to Lake Monoun and Lake Nyos, due to the size of Lake Kivu and the volume of gas it contains, such an operation would be expensive, running into the millions of dollars. A scheme initiated in 2010 to use methane trapped in the lake as a fuel source to generate electricity in Rwanda has led to a degree of degassing. During the procedure for extracting the flammable methane gas used to fuel power stations on the shore, some CO2 is removed in a process known as catalyst scrubbing. It is unclear whether enough gas will be removed to eliminate the danger of a limnic eruption at Lake Kivu.
See also
Cold-water geyser
Mazuku
Natural disaster
Tsunamis in lakes
References
External links
Page of the team degassing Lake Nyos
Lake's silent killer to be disarmed
Lake Nyos (1986)
Degassing Lake Nyos
Cracking the Killer Lakes of Cameroon
Lake Monoun
BBC Cameroons "killer lake" degassed
Using Science to Solve Problems: The Killer Lakes of Cameroon
Carbon dioxide
Lakes
Volcanic eruption types
Geological hazards
Natural disasters | Limnic eruption | Physics,Chemistry,Environmental_science | 2,192 |
38,889,260 | https://en.wikipedia.org/wiki/Ewaso%20Lions | The Ewaso Lions Project was founded in 2007 for the protection of lions (Panthera leo) and their habitat in Northern Kenya. The project works to study and incorporate local communities in helping to protect the lions in the Samburu National Reserve, Buffalo Springs National Reserve and Shaba National Reserve of the Ewaso Nyiro ecosystem in Northern Kenya.
The lion is a vulnerable species, having seen a major population decline of 30–50% over the past two decades. Currently there are fewer than 2,000 lions in Kenya.
The Ewaso Lions Project research camp sits within the Westgate Community Conservancy, west of Samburu National Reserve. Shivani Bhalla, representing the Ewaso Lions is a regular featured guest speaker at the annual Wildlife Conservation Network Expo.
Programs
Scientific research
The Ewaso Lions Project's core research focuses on discovering the factors that drive lion pride locations and movements and the extent of conflict with humans, as well as the effect of habitat loss. Methods include a lion census to estimate population size and trends; surveys of local communities to gauge the extent of human-lion conflict and its impact; camera traps to document lion activity; fitting lions with radio and GPS collars to map movements in and out of reserves; and scat analysis to understand feeding patterns.
Community outreach
The Ewaso Lions Project works with local communities to find ways that people can coexist with lions. Community Outreach programs include: "Warrior Watch" which recruits and trains Samburu warriors to collect data and respond to community issues like livestock depredation; primary school education on wildlife and wildlife clubs in schools along with scholarships for students interested in conservation as well as taking children on safaris to see the animals first hand; a mobile film project that shows wildlife films in rural villages gives local people an opportunity to see the animals up close; a book compiling poems, stories, myths and drawings about lions from the local community; and a race each year to bring the community together and bring awareness to lion conservation.
See also
Conservation movement
Natural environment
References
External links
ewasolions.org, Ewaso Lions Project Website
Conservation projects
Endangered species
International environmental organizations
Animal welfare organisations based in Kenya
Organizations established in 2007
Cat conservation organizations | Ewaso Lions | Biology | 452 |
14,012,859 | https://en.wikipedia.org/wiki/Dunlap%27s%20Creek%20Bridge |
Dunlap's Creek Bridge is the first arch bridge in the United States built of cast iron. It was designed by Richard Delafield and built by the United States Army Corps of Engineers. Constructed from 1836 to 1839 on the National Road in Brownsville, Pennsylvania, it remains in use today. It is listed on the National Register of Historic Places and is a National Historic Civil Engineering Landmark (1978). It is located in the Brownsville Commercial Historic District and supports Market Street, the local main thoroughfare. Due to the steep sides of the Monongahela River valley, there is only room for two short streets parallel to the river's shore and graded gently enough to be comfortable to walk before the terrain rises too steeply for business traffic.
History
There have been four structures on this site. The first two collapsed in 1808 and 1820. The third, a wood-framed structure, needed replacement by 1832.
This bridge is constructed using five parallel tubular ribs, each made of 9 elliptical segments to form the arch.
See also
List of bridges documented by the Historic American Engineering Record in Pennsylvania
List of Registered Historic Places in Fayette County, Pennsylvania
List of historic civil engineering landmarks
Notes
References
External links
Dunlap’s Creek Bridge: Historical Marker Database
National Register nomination form
Dunlap's Creek Bridge: History and Heritage of Civil Engineering
Bridges in Fayette County, Pennsylvania
Road bridges on the National Register of Historic Places in Pennsylvania
Historic American Engineering Record in Pennsylvania
Historic Civil Engineering Landmarks
National Register of Historic Places in Fayette County, Pennsylvania
History of Fayette County, Pennsylvania
Iron bridges in the United States
Arch bridges in the United States
National Road | Dunlap's Creek Bridge | Engineering | 347 |
12,002,936 | https://en.wikipedia.org/wiki/Displacement%E2%80%93length%20ratio | The displacement–length ratio (DLR or D/L ratio) is a calculation used to express how heavy a boat is relative to its waterline length.
DLR was first published in
It is calculated by dividing a boat's displacement in long tons (2,240 pounds) by the cube of one one-hundredth of the waterline length (in feet):
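$\mathrm{DLR} = \dfrac{\text{displacement (long tons)}}{\left(0.01 \times L_{WL}\right)^{3}}$
This restates the sentence above in symbols, with $L_{WL}$ the waterline length in feet. As a purely illustrative example (the boat is hypothetical), a vessel displacing 10,000 lb (about 4.46 long tons) on a 28 ft waterline would have DLR ≈ 4.46 / 0.28³ ≈ 203.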
DLR can be used to compare the relative mass of various boats no matter what their length. A DLR less than 200 is indicative of a racing boat, while a DLR greater than 300 or so is indicative of a heavy cruising boat.
See also
Sail Area-Displacement ratio
References
Ship measurements
Nautical terminology
Engineering ratios
Naval architecture | Displacement–length ratio | Mathematics,Engineering | 143 |
28,822,858 | https://en.wikipedia.org/wiki/Holographic%20sensor | A holographic sensor is a device that comprises a hologram embedded in a smart material that detects certain molecules or metabolites. This detection is usually a chemical interaction that is transduced as a change in one of the properties of the holographic reflection (as in the Bragg reflector), either refractive index or spacing between the holographic fringes. The specificity of the sensor can be controlled by adding molecules in the polymer film that selectively interacts with the molecules of interest.
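For a volume reflection hologram read at normal incidence, the standard first-order Bragg condition links the reflected (peak) wavelength to the two properties mentioned above: $\lambda_{\text{peak}} = 2\, n\, \Lambda$, where $n$ is the average refractive index of the recording medium and $\Lambda$ is the fringe spacing. Swelling or shrinking of the polymer changes $\Lambda$, and analyte binding can change $n$; either shifts the reflected colour.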
A holographic sensor aims to integrate the sensor component, the transducer and the display in one device, allowing fast reading of molecular concentrations based on the color (wavelength) of the reflection.
Certain molecules that mimic biomolecule active sites or binding sites can be incorporated into the polymer that forms the holographic film in order to make the holographic sensor selective and/or sensitive to medically important molecules such as glucose.
The holographic sensors can be read from a fair distance because the transducer element is light that has been refracted and reflected by the holographic grating embedded in the sensor. Therefore, they can be used in industrial applications where non-contact with the sensor is required.
Other applications for holographic sensors include anti-counterfeiting.
Metabolites
Some of the metabolites detected by a holographic sensor are:
Ammonia
pH
Hydrocarbons
VOCs
Gases
Glucose
Water content
Lactate and other biomolecules
Metal ions
References
Holography
Sensors | Holographic sensor | Technology,Engineering | 308 |
23,718,510 | https://en.wikipedia.org/wiki/Daniel%20Garc%C3%ADa%20And%C3%BAjar | Daniel García Andújar (1966 in Almoradí) is a self-taught, outsider visual media artist, activist, and art theorist from Spain. He lives and works in Barcelona. His work has been exhibited widely, including Manifesta 4, the Venice Biennale and documenta 14 Athens, Kassel. He has directed numerous workshops for artists and social collectives worldwide.
Work and contributions
Andújar is one of the principal exponents of Net.art, founder of Technologies To The People and a member of irational.org. The most prominent projects in this sphere would be the Street Access Machine (1996), a machine allowing those begging in the street to access digital money; The Body Research Machine (1997), an interactive machine that scanned the body's DNA strands, processing them for scientific experiments, and x-devian by knoppix, an open-source operating system presented as part of the Individual Citizen Republic Project: The System (2003) project. Another course the work takes would be the critical reflection on the art world TTTP presents through the Technologies to the People Foundation with its collections distributed free of charge—Photo Collection (1997), Video Collection (1998) and Net Art Classics Collection (1999)—already calling the idea of material and intellectual property into question during this period. Andújar is the director of numerous internet projects, such as e-sevilla, e-valencia, e-madrid and e-barcelona.
From January to April 2015, the Museo Nacional Centro de Arte Reina Sofía (MNCARS) hosted a comprehensive solo exhibition of his work, curated by Manuel Borja-Villel under the title Operating System. His work is in major public and private collections, including the Museo Nacional Centro de Arte Reina Sofia's national collection.
Projects
1996–2011: Technologies To The People, begun in 1996, is a historical net.art project.
2003: X-devian. The New Technologies to the People System.
2004: Postcapital Archive (1989–2001).
2011: A vuelo de pájaro Let's Democratize Democracy.
Exhibitions
2006 – Postcapital, Palau de la Virreina, Barcelona, Spain.
2008 – Anna Kournikova Deleted By Memeright Trusted System – Art in the Age of Intellectual Property. Postcapital Archive. Hartware MedienKunstVerein, PHOENIX Halle Dortmund, Germany. Curated by: Inke Arns and Francis Hunger
2008 – Unrecorded, Akbank Sanat, Istanbul, Turkey. Curated by Basak Senova.
2008 – Herramientas del arte. Relecturas (Tools of Art: Re-readings), Parpalló, Valencia. With Rogelio López Cuenca and Isidoro Valcárcel Medina, Curated by: Álvaro de los Ángeles. Spain.
2008 -Banquete_nodos y redes. LABoral Centro de Arte y Creación Industrial, Gijón, Curated by: Karin Ohlenschläger. Spain.
2008 -The Wonderful World of irational.org: Tools, Techniques and Events 1996–2006. Museum of Contemporary Art Vojvodina, Novi Sad. Curators: Inke Arns (Dortmund) and Jacob Lillemose (Kopenhagen). Serbia.
2009 – Postcapital. Archive 1989–2001, Württembergischer Kunstverein, Stuttgart. Curated by Iris Dressler and Hans D. Christ.
2009 – Postcapital (Mauer), Museum for Modern Art, Bremen, Germany. Curated by Dr. Anne Thurmann-Jajes.
2009 – Subversive Praktiken, Württembergischer Kunstverein, Stuttgart.
2009 – Postcapital Archive. The Unavowable Community. Catalan Pavilion, 53. Biennale, Venice. Curated by Valentin Roma.
2009 – Postcapital Archive (1989–2001), Iberia Art Center, Beijing. Curated by Valentin Roma.
2010 – Postkapital Arşiv 1989–2001 Sedat Yazici Riva Foundation for Education, Culture and Art, Istanbul Curated by Basak Senova.
2010 – BARCELONA – VALÈNCIA – PALMA. A History of Confluence and Divergence. Objects of desire. Centre de Cultura Contemporània de Barcelona, Spain. Curated by Ignasi Aballí, Melcior Comes and Vicent Sanchis.
2010 – Postcapital Archive (1989–2001) . Total Museum of Contemporary Art, Seoul, South Korea. Curated by Nathalie Boseul Shin and Hans D. Christ.
2010 – Postcapital Archive (1989–2001) La comunidad inconfesable, Bòlit, Centre d’Art Contemporani, Girona, Spain. Curated by Valentín Roma.
2010 – The Wall. Postcapital Archive (1989–2001), Espai Visor, Valencia.
2015 – Sistema Operativo. Museo Nacional Centro de Arte Reina Sofía. Curated by Manuel Borja-Villel.
2015 – Naturaleza vigilada / Überwachte Natur, Museo Vostell Malpartida.
Museum Collections
Museo Nacional Centro de Arte Reina Sofía, Madrid
Institut Valencià d'Art Modern (IVAM), Valencia
Museu d'Art Contemporani de Barcelona (MACBA), Barcelona
ARTIUM – Basque Museum Center of Contemporary Art, Vitoria-Gasteiz
Museo de Arte Contemporáneo de Castilla y León (MUSAC), Léon
CA2M – Centro de Arte Dos de Mayo, Madrid
Centro de Artes Visuales Helga de Alvear, Cáceres
Es Baluard Museu d’Art Modern, Palma de Mallorca
Museu d´Art Jaume Morera, Lleida
Banco De España, Madrid
Walker Art Center, Minneapolis, MN
The Newark Museum, Newark, NJ
les Abattoirs – FRAC Midi-Pyrénées, Toulouse
Selected books
Hans D. Christ, Iris Dressler (ed.): Technologies To The People. "Postcapital Archive (1989–2001)" Daniel Garcia Andujar, Hatje Cantz Verlag, edited 2011. .
Daniel G. Andújar, Operating System, Authors: Jacob Lillemose, Iris Dressler, Javier de la Cueva, José Luis Pardo, Alberto López Cuenca, Isidoro Valcárcel Medina. Museo Nacional Centro de Arte Reina Sofía, Madrid, 2015, NIPO: 036-15-006-1
Daniel G. Andújar: Naturaleza vigilada. Überwachte Natur. Museo Vostell Malpartida, Cáceres, 2015, Deposito legal Cc-285-2015.
References
Hans D. Christ, Iris Dressler (Hrsg.): Technologies To The People. "Postcapital Archive (1989–2001)" Daniel Garcia Andujar, Hatje Cantz Verlag, Ostfildern 2011
External links
Technologies To The People
Ways of working. Interview with Iris Dressler
Oral Memories Video Daniel García Andújar
Metrópolis RTVE Video Daniel García Andújar
Projects links
Artist site
Postcapital
e-barcelona
e-valencia
e-sevilla
e-madrid
e-stuttgart
e-seoul
Irational
1966 births
Living people
Artists from Catalonia
Spanish contemporary artists
Self-taught artists
Net.artists | Daniel García Andújar | Technology | 1,499 |
1,452,979 | https://en.wikipedia.org/wiki/Zn%C3%A1m%27s%20problem | In number theory, Znám's problem asks which sets of integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972, although other mathematicians had considered similar problems around the same time.
The initial terms of Sylvester's sequence almost solve this problem, except that the last chosen term equals one plus the product of the others, rather than being a proper divisor. showed that there is at least one solution to the (proper) Znám problem for each . Sun's solution is based on a recurrence similar to that for Sylvester's sequence, but with a different set of initial values.
The Znám problem is closely related to Egyptian fractions. It is known that there are only finitely many solutions for any fixed . It is unknown whether there are any solutions to Znám's problem using only odd numbers, and there remain several other open questions.
The problem
Znám's problem asks which sets of integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. That is, given an integer k, what sets of integers
{n_1, n_2, …, n_k} are there such that, for each i, n_i divides 1 + n_1 ⋯ n_{i−1} n_{i+1} ⋯ n_k but is not equal to it?
A closely related problem concerns sets of integers in which each integer in the set is a divisor, but not necessarily a proper divisor, of one plus the product of the other integers in the set. This problem does not seem to have been named in the literature, and will be referred to as the improper Znám problem. Any solution to Znám's problem is also a solution to the improper Znám problem, but not necessarily vice versa.
History
Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972. The improper Znám problem had been posed earlier for small k, and, independently of Znám, all solutions to the improper problem for k ≤ 5 were found. It was later shown that Znám's problem is unsolvable for k < 5, and J. Janák is credited with finding the solution for k = 5.
Examples
Sylvester's sequence is an integer sequence in which each term is one plus the product of the previous terms. The first few terms of the sequence are 2, 3, 7, 43, 1807, 3263443, …
Stopping the sequence early produces a set such as {2, 3, 7, 43} that almost meets the conditions of Znám's problem, except that the largest value equals one plus the product of the other terms, rather than being a proper divisor. Thus, it is a solution to the improper Znám problem, but not a solution to Znám's problem as it is usually defined.
One solution to the proper Znám problem, for k = 5, is {2, 3, 7, 47, 395}. A few calculations will show that each element is a proper divisor of one plus the product of the others; for instance, 2 × 3 × 7 × 47 + 1 = 1975 = 5 × 395.
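A brute-force check of the defining property can make this concrete; the sketch below uses the sets given above (function and variable names are illustrative):

```python
from math import prod

def is_znam_solution(numbers, proper=True):
    """Each element must divide (product of the others) + 1;
    for the proper problem it must also differ from that value."""
    for i, n in enumerate(numbers):
        rest = prod(numbers[:i] + numbers[i + 1:]) + 1
        if rest % n != 0 or (proper and rest == n):
            return False
    return True

print(is_znam_solution([2, 3, 7, 47, 395]))           # True  (proper solution)
print(is_znam_solution([2, 3, 7, 43], proper=False))  # True  (improper: 2*3*7 + 1 = 43)
print(is_znam_solution([2, 3, 7, 43]))                # False (43 equals the product of the others plus 1)
```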
Connection to Egyptian fractions
Any solution to the improper Znám problem is equivalent (via division by the product of the values x_i) to a solution to the equation
1/x_1 + 1/x_2 + … + 1/x_k + 1/(x_1 x_2 ⋯ x_k) = y,
where y as well as each x_i must be an integer, and conversely any such solution corresponds to a solution to the improper Znám problem. However, all known solutions have y = 1, so they satisfy the equation
1/x_1 + 1/x_2 + … + 1/x_k + 1/(x_1 x_2 ⋯ x_k) = 1.
That is, they lead to an Egyptian fraction representation of the number one as a sum of unit fractions. Several of the cited papers on Znám's problem study also the solutions to this equation. describe an application of the equation in topology, to the classification of singularities on surfaces, and describe an application to the theory of nondeterministic finite automata.
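The correspondence can be verified with exact rational arithmetic; a small sketch using the k = 5 solution given above:

```python
from fractions import Fraction
from math import prod

xs = [2, 3, 7, 47, 395]
total = sum(Fraction(1, x) for x in xs) + Fraction(1, prod(xs))
print(total)   # 1, an Egyptian fraction representation of one
```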
Number of solutions
The number of solutions to Znám's problem for any fixed k is finite, so it makes sense to count the total number of solutions for each k. Sun showed that there is at least one solution to the (proper) Znám problem for each k ≥ 5. Sun's solution is based on a recurrence similar to that for Sylvester's sequence, but with a different set of initial values. The number of solutions for small values of k, starting with k = 5, forms the sequence
2, 5, 18, 96.
Presently, a few solutions are known for k = 9 and k = 10, but it is unclear how many solutions remain undiscovered for those values of k.
However, there are infinitely many solutions if k is not fixed:
it has been shown that there are at least 39 solutions for each sufficiently large k, improving earlier results proving the existence of fewer solutions; it has been conjectured that the number of solutions for each value of k grows monotonically with k.
It is unknown whether there are any solutions to Znám's problem using only odd numbers. With one exception, all known solutions start with 2. If all numbers in a solution to Znám's problem or the improper Znám problem are prime, their product is a primary pseudoperfect number; it is unknown whether infinitely many solutions of this type exist.
See also
Giuga number
Primary pseudoperfect number
References
Notes
Sources
.
.
.
.
.
.
.
.
.
.
.
.
External links
Number theory
Integer sequences
Egyptian fractions
Mathematical problems | Znám's problem | Mathematics | 1,002 |
3,021,435 | https://en.wikipedia.org/wiki/Primary%20ideal | In mathematics, specifically commutative algebra, a proper ideal Q of a commutative ring A is said to be primary if, whenever xy is an element of Q, then x or y^n is also an element of Q, for some n > 0. For example, in the ring of integers Z, (p^n) is a primary ideal if p is a prime number.
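A brute-force sketch of this definition for ideals (q) of the integers (the function name and test values are illustrative):

```python
def is_primary_in_Z(q: int) -> bool:
    """Test the definition for an ideal (q) of Z with q >= 2: whenever x*y is
    in (q), either x is in (q) or some power y**n is in (q).  Brute force,
    suitable for small q only."""
    for x in range(1, q):
        for y in range(1, q):
            if (x * y) % q == 0 and x % q != 0:
                # then y must be nilpotent modulo q: y**n ≡ 0 (mod q) for some n
                if all(pow(y, n, q) != 0 for n in range(1, q + 1)):
                    return False
    return True

print(is_primary_in_Z(2**3))   # True:  (p**n) is primary for a prime p
print(is_primary_in_Z(12))     # False: 4*3 is in (12), but 4 is not and no power of 3 is
```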
The notion of primary ideals is important in commutative ring theory because every ideal of a Noetherian ring has a primary decomposition, that is, can be written as an intersection of finitely many primary ideals. This result is known as the Lasker–Noether theorem. Consequently, an irreducible ideal of a Noetherian ring is primary.
Various methods of generalizing primary ideals to noncommutative rings exist, but the topic is most often studied for commutative rings. Therefore, the rings in this article are assumed to be commutative rings with identity.
Examples and properties
The definition can be rephrased in a more symmetric manner: a proper ideal Q is primary if, whenever xy ∈ Q, we have x ∈ Q or y ∈ Q or both x, y ∈ √Q. (Here √Q denotes the radical of Q.)
A proper ideal Q of R is primary if and only if every zero divisor in R/Q is nilpotent. (Compare this to the case of prime ideals, where P is prime if and only if every zero divisor in R/P is actually zero.)
Any prime ideal is primary, and moreover an ideal is prime if and only if it is primary and semiprime (also called radical ideal in the commutative case).
Every primary ideal is primal.
If Q is a primary ideal, then the radical of Q is necessarily a prime ideal P, and this ideal is called the associated prime ideal of Q. In this situation, Q is said to be P-primary.
On the other hand, an ideal whose radical is prime is not necessarily primary: for example, if , , and , then is prime and , but we have , , and for all n > 0, so is not primary. The primary decomposition of is ; here is -primary and is -primary.
An ideal whose radical is maximal, however, is primary.
Every ideal Q whose radical is a prime ideal P is contained in a smallest P-primary ideal: the set of all elements a such that ax ∈ Q for some x ∉ P. The smallest P-primary ideal containing P^n is called the nth symbolic power of P.
If P is a maximal prime ideal, then any ideal containing a power of P is P-primary. Not all P-primary ideals need be powers of P, but at least they contain a power of P; for example the ideal (x, y2) is P-primary for the ideal P = (x, y) in the ring k[x, y], but is not a power of P, however it contains P².
If A is a Noetherian ring and P a prime ideal, then the kernel of the map from A to the localization of A at P is the intersection of all P-primary ideals.
A finite nonempty product of P-primary ideals is P-primary, but an infinite product of P-primary ideals may not be P-primary; for example, in a Noetherian local ring with maximal ideal 𝔪, the intersection of all powers 𝔪^n is the zero ideal (Krull intersection theorem), where each 𝔪^n is 𝔪-primary; thus the infinite product of the maximal (and hence prime and hence primary) ideal of the local ring yields the zero ideal, which in this case is not primary (because the zero divisor is not nilpotent). In fact, in a Noetherian ring, a nonempty product of P-primary ideals is P-primary if and only if the product contains some power of P.
Footnotes
References
On primal ideals, Ladislas Fuchs
External links
Primary ideal at Encyclopaedia of Mathematics
Commutative algebra
Ideals (ring theory) | Primary ideal | Mathematics | 791 |
9,248,704 | https://en.wikipedia.org/wiki/30P/Reinmuth | Comet 30P/Reinmuth, also known as Comet Reinmuth 1, is a periodic comet in the Solar System, first discovered by Karl Reinmuth (Landessternwarte Heidelberg-Königstuhl, Germany) on February 22, 1928.
Initial calculations of the orbit gave a period of 25 years, but this was later revised down to seven years, prompting speculation that it was the same comet as Comet Taylor, which had been lost since 1915. Further calculations by George van Biesbroeck concluded that they were different comets.
The 1935 return was observed, though it was less favourable; in 1937 the comet passed close to Jupiter, which increased its perihelion distance and orbital period.
Due to miscalculations, the 1942 appearance was missed, but it has been observed on every subsequent appearance since.
The comet nucleus is estimated to be 7.8 kilometers in diameter.
References
External links
Orbital simulation from JPL (Java) / Horizons Ephemeris
30P/Reinmuth magnitude plot for 2010
30P at Kronk's Cometography
30P at Kazuo Kinoshita's Comets
30P at Seiichi Yoshida's Comet Catalog
Periodic comets
0030
Discoveries by Karl Wilhelm Reinmuth
030P
19280222 | 30P/Reinmuth | Astronomy | 256 |
41,051,864 | https://en.wikipedia.org/wiki/Hulsebos-Hesselman%20axial%20oil%20engines | Hulsebos-Hesselman axial oil engines were five cylinder, four stroke, wobble plate engines that originated in and were used throughout the Netherlands during the late 1930s. Numerous patents can be found concerning this engine, all of which appear to attribute the engine's "wabbler" operating principles to the inventor Wichert Hulsebos.
Combustion system
This engine used the Hesselman engine low compression combustion system where oils of varying grades were sprayed into the cylinder during compression but ignition was initiated and assisted by a spark plug. Overhead inlet and exhaust valves, water cooling and a magneto for ignition were standard features.
Dimensions
The capacity of 4 litres was achieved with a bore of 95 mm and a stroke of 114 mm and it made use of a compression ratio of 6 to 1. Power output was said to be 70 bhp at 2400 rpm.
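The quoted capacity follows from the bore, stroke and cylinder count; a quick arithmetic check (variable names are illustrative):

```python
import math

bore_mm, stroke_mm, cylinders = 95.0, 114.0, 5
# swept volume = number of cylinders * (pi/4) * bore^2 * stroke, converted to litres
swept_volume_l = cylinders * math.pi / 4 * (bore_mm / 10) ** 2 * (stroke_mm / 10) / 1000
print(round(swept_volume_l, 2))   # ~4.04 litres, matching the quoted 4-litre capacity
```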
Wobble plate arrangement
The wobble plate arrangement cleverly employed a bevel gear set to prevent its rotation around the crankshaft. One bevel gear was fitted to the rear of the wobble plate itself and meshed with another which was fixed to the main body of the engine.
Production
They were manufactured by Hulsemo N.V., Casimirlaan 5, Arnhem, The Netherlands, and were exhibited in the Berlin and Amsterdam Shows during 1938.
References
Diesel engines
Engine technology | Hulsebos-Hesselman axial oil engines | Technology | 274 |
5,259,465 | https://en.wikipedia.org/wiki/Diglyme | Diglyme, or bis(2-methoxyethyl) ether, is an organic compound with the chemical formula . It is a colorless liquid with a slight ether-like odor. It is a solvent with a high boiling point. It is the dimethyl ether of diethylene glycol. The name diglyme is a portmanteau of diglycol methyl ether. It is miscible with water as well as organic solvents.
It is prepared by a reaction of dimethyl ether and ethylene oxide over an acid catalyst.
Solvent
Because of its resistance to strong bases, diglyme is favored as a solvent for reactions of alkali metal reagents even at high temperatures. Rate enhancements in reactions involving organometallic reagents, such as Grignard reactions or metal hydride reductions, have been observed when using diglyme as a solvent.
Diglyme is also used as a solvent in hydroboration reactions with diborane.
It serves as a chelate for alkali metal cations, leaving anions more active.
Safety
The European Chemicals Agency lists diglyme as a substance of very high concern (SVHC) as a reproductive toxin.
At higher temperatures and in the presence of active metals diglyme is known to decompose, which can produce large amounts of gas and heat. This decomposition led to the T2 Laboratories reactor explosion in 2007.
References
Glycol ethers
Chelating agents
Hazardous air pollutants | Diglyme | Chemistry | 310 |
20,392,928 | https://en.wikipedia.org/wiki/Equianalgesic | An equianalgesic chart is a conversion chart that lists equivalent doses of analgesics (drugs used to relieve pain). Equianalgesic charts are used for calculation of an equivalent dose (a dose which would offer an equal amount of analgesia) between different analgesics. Tables of this general type are also available for NSAIDs, benzodiazepines, depressants, stimulants, anticholinergics and others.
Format
Equianalgesic tables are available in different formats, such as pocket-sized cards for ease of reference. A frequently-seen format has the drug names in the left column, the route of administration in the center columns and any notes in the right column.
Purpose
There are several reasons for switching a patient to a different pain medication. These include practical considerations such as lower cost or unavailability of a drug at the patient's preferred pharmacy, or medical reasons such as lack of effectiveness of the current drug or to minimize adverse effects. Some patients request to be switched to a different narcotic due to stigma associated with a particular drug (e.g. a patient refusing methadone due to its association with opioid addiction treatment). Equianalgesic charts are also used when calculating an equivalent dosage of the same drug, but with a different route of administration.
Precautions
An equianalgesic chart can be a useful tool, but the user must take care to correct for all relevant variables such as route of administration, cross tolerance, half-life and the bioavailability of a drug. For example, the narcotic levorphanol is 4–8 times stronger than morphine, but also has a much longer half-life. Simply switching the patient from 40 mg of morphine to 10 mg of levorphanol would be dangerous due to dose accumulation, and hence frequency of administration should also be taken into account.
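The underlying conversion is a simple proportion. The sketch below only reproduces the 4:1 levorphanol example quoted above and is not a dosing tool; real conversions must also account for half-life, cross-tolerance, route of administration and other clinical variables:

```python
def equianalgesic_dose(current_dose_mg: float, potency_ratio: float) -> float:
    """Convert a dose using a potency ratio expressed as
    (new drug potency) / (reference drug potency)."""
    return current_dose_mg / potency_ratio

# 40 mg of morphine, with levorphanol taken as roughly 4x as potent (illustrative only)
print(equianalgesic_dose(40, 4))   # 10 mg, before any adjustment for half-life
```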
There are other concerns about equianalgesic charts. Many charts derive their data from studies conducted on opioid-naive patients. Patients with chronic (rather than acute) pain may respond to analgesia differently. Repeated administration of a medication is also different from single dosing, as many drugs have active metabolites that can build up in the body. Patient variables such as sex, age, and organ function may also influence the effect of the drug on the system. These variables are rarely included in equianalgesic charts.
Opioid equivalency table
Opioids are a class of compounds that elicit analgesic (pain killing) effects in humans and animals by binding to the μ-opioid receptor within the central nervous system. The following table lists opioid and non-opioid analgesic drugs and their relative potencies. Values for the potencies represent opioids taken orally unless another route of administration is provided. As such, their bioavailabilities differ, and they may be more potent when taken intravenously.
Nonlinearities
This chart measures pain relief versus mass of medication. Not all medications have a fixed relationship on this scale. Methadone is different from most opioids because its potency can vary depending on how long it is taken. Acute use (1–3 days) yields a potency about 1.5× stronger than that of morphine and chronic use (7 days+) yields a potency about 2.5 to 5× that of morphine. Similarly, the effect of tramadol increases after consecutive dosing due to the accumulation of its active metabolite and an increase of the oral bioavailability in chronic use.
See also
Oripavine – for more on the comparative strength of oripavine derivatives
References
Explanatory notes
Citations
Bibliography
Books
, Extra information, including printable charts
Articles
Websites
Online opioid equianalgesia calculator Electronic calculator that includes logic for bidirectional and dose-dependent conversions
Anesthesia
Clinical pharmacology
Comparison of psychoactive substances
Medical terminology
Nociception
Opioids
Pain | Equianalgesic | Chemistry | 842 |
20,622,091 | https://en.wikipedia.org/wiki/Swype | Swype was a virtual keyboard for touchscreen smartphones and tablets originally developed by Swype Inc., founded in 2002, where the user enters words by sliding a finger or stylus from the first letter of a word to its last letter, lifting only between words. It uses error-correction algorithms and a language model to guess the intended word. It also includes a predictive text system, handwriting and speech recognition support. Swype was first commercially available on the Samsung Omnia II running Windows Mobile, and was originally pre-loaded on specific devices.
In October 2011, Swype Inc. was acquired by Nuance Communications where the company continued its development and implemented its speech recognition algorithm, Dragon Dictation.
In February 2018, Nuance announced that it had stopped development on the app and that no further updates would be made to it. The Android app was pulled from the Play Store, and the iOS app was pulled from the App Store. The trial version of Swype is no longer visible in the Play Store, except to users who previously installed the app and can reach it through the Play Store's list of installed apps. Cloud features of the paid version such as "Backup&Sync" no longer function, and Nuance Communications has refused to issue refunds to customers who have purchased the app and can no longer reinstall it.
Software
Swype consists of three major components that contribute to its accuracy and speed: an input path analyzer, word search engine with corresponding database, and a manufacturer customizable interface.
The creators of Swype predicted that users would achieve over 50 words per minute, with the chief technical officer (CTO) and founder Cliff Kushler claiming to have reached 55 words per minute. On 22 March 2010, a Swype employee named Franklin Page achieved a new Guinness World Record of 35.54 seconds for the fastest text message on a touchscreen mobile phone using Swype on the Samsung i8000, and reportedly improved this on 22 August of the same year to 25.94 seconds using a Samsung Galaxy S. The Guinness world record text message consists of 160 characters in 25 words and was at that time typed in 25.94 seconds, which corresponds to a speed of nearly 58 words per minute, or 370 characters per minute. However, it has since been bettered by the Fleksy app on an Android phone, at 18.19 seconds, in 2014.
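The words-per-minute figures quoted for the record follow directly from the numbers given (a trivial check):

```python
chars, words, seconds = 160, 25, 25.94
print(round(words / seconds * 60, 1))   # ~57.8 words per minute
print(round(chars / seconds * 60))      # ~370 characters per minute
```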
, Swype supports the following languages:
Swype was listed among Time magazine's 50 Best Android Applications for 2013.
Availability
In February 2018, the Android app was pulled from the Play Store. The iOS app was also pulled from the App Store.
Starting in 2018, users have needed a third-party service to download the full version of Swype.
In late February 2018, the full version of Swype was discontinued. The trial version of Swype is hidden from the Play Store and App Store. The Swype website was also discontinued and has become a redirect page to XT9 Smart Input.
In a statement emailed to The Verge, Nuance Communications said it would discontinue support of the Swype keyboard app and instead focus on other products. "The core technology behind Swype will continue to be utilized and improved upon across other Nuance offerings—and integrated into our broader AI-powered solutions—most notably in Android-based keyboard solutions for our automotive customers," the company said.
See also
Dasher (software)
Keyboard (computing)
Multi-touch
Shorthand
T9 (predictive text)
References
External links
United States Patent 7,098,896. C. Kushler, R. Marsden, "System and method for continuous stroke word-based text input"
United States Patent 7,250,938. D. Kirkland, D. Kumhyr, E. Ratliff, K. Smith, "System and method for improved user input on personal computing devices"
Android (operating system) software
Android virtual keyboards
Symbian software
Virtual keyboards
Input methods for handheld devices | Swype | Technology | 814 |
57,896,281 | https://en.wikipedia.org/wiki/TXS%200506%2B056 | TXS 0506+056 is a very high energy blazar – a quasar with a relativistic jet pointing directly towards Earth – of BL Lac-type. With a redshift of 0.3365 ± 0.0010, it has a luminosity distance of about 1.75 gigaparsecs (roughly 5.7 billion light-years). Its approximate location on the sky is off the left shoulder of the constellation Orion. Discovered as a radio source in 1983, the blazar has since been observed across the entire electromagnetic spectrum.
TXS 0506+056 is the first known source of high energy astrophysical neutrinos, identified following the IceCube-170922A neutrino event in an early example of multi-messenger astronomy. The only astronomical sources previously observed by neutrino detectors were the Sun and supernova 1987A, which were detected decades earlier at much lower neutrino energies.
Observational history
The object has been detected by numerous astronomical surveys, so has numerous valid source designations. The most commonly used, TXS 0506+056, comes from its inclusion in the Texas Survey of radio sources (standard abbreviation TXS) and its approximate equatorial coordinates in the B1950 equinox used by that survey.
TXS 0506+056 was first discovered as a radio source in 1983. It was identified as an active galaxy in the 1990s, and a possible blazar in the early 2000s. By 2009 it was regarded as a confirmed blazar and catalogued as a BL Lac object. Gamma rays from TXS 0506+056 were detected by the EGRET and Fermi Gamma-ray Space Telescope missions.
Radio observations using very-long-baseline interferometry have shown apparent superluminal motion in the blazar's jet. TXS 0506+056 is one of the blazars regularly monitored by the OVRO 40 meter Telescope, so has an almost-continuous radio light curve recorded from 2008 onwards.
The gamma-ray flux from TXS 0506+056 is highly variable, by at least a factor of a thousand, but on average it is in the top 4% of brightest gamma-ray sources on the sky. It is also very bright in radio waves, in the top 1% of sources. Given its distance, this makes TXS 0506+056 one of the most intrinsically powerful BL Lac objects known, particularly in high-energy gamma rays.
Neutrino emission
On September 22, 2017, the IceCube Neutrino Observatory detected a high energy muon neutrino, dubbed IceCube-170922A. The neutrino carried an energy of ~290 tera–electronvolts (TeV); for comparison, the Large Hadron Collider can generate a maximum energy of 13 TeV. Within one minute of the neutrino detection, IceCube sent an automated alert to astronomers around the world with coordinates to search for a possible source.
A search of this region in the sky, 1.33 degrees across, yielded only one likely source: TXS 0506+056, a previously-known blazar, which was found to be in a flaring state of high gamma ray emission. It was subsequently observed at other wavelengths of light across the electromagnetic spectrum, including radio, infrared, optical, X-rays and gamma-rays. The detection of both neutrinos and light from the same object was an early example of multi-messenger astronomy.
A search of archived neutrino data from IceCube found evidence for an earlier flare of lower-energy neutrinos in 2014-2015 (a form of precovery), which supports identification of the blazar as a source of neutrinos. An independent analysis found no gamma-ray flare during this earlier period of neutrino emission, but supported its association with the blazar. The neutrinos emitted by TXS 0506+056 are six orders of magnitude higher in energy than those from any previously-identified astrophysical neutrino source.
The observations of high energy neutrinos and gamma-rays from this source imply that it is also a source of cosmic rays, because all three should be produced by the same physical processes, though no cosmic rays from TXS 0506+056 have been directly observed. In the blazar, a charged pion was produced by the interaction of a high-energy proton or nucleus (i.e. a cosmic ray) with the radiation field or with matter. The pion then decayed into a lepton and the neutrino. The neutrino interacts only weakly with matter, so it escaped the blazar. Upon reaching Earth, the neutrino interacted with the Antarctic ice to produce a muon, which was observed by the Cherenkov radiation it generated as it moved through the IceCube detector.
Analysis of 16 very long baseline radio array 15-GHz observations between 2009 and 2018 of TXS 0506+056 revealed the presence of a curved jet or potentially a collision of two jets, which could explain the 2014-2015 neutrino generation at the time of a low gamma-ray flux and indicate that TXS 0506+056 might be an atypical blazar.
In 2020, a study using MASTER global telescope network found that TXS 0506+056 was in an 'off' state in the optical spectrum 1 minute after the alert for IceCube-170922A event and switched back on 2 hours later. This would indicate that the blazar was in a state of neutrino efficiency.
See also
Messier 77 – a second neutrino source reported by IceCube in November 2022
SN 1987A#Neutrino emissions – a burst of neutrinos observed to come from a supernova
Neutrino astronomy
GW170817 – the first multi-messenger event involving gravitational waves; occurred five weeks before IceCube-170922A
References
External links
Frankfurt Quasar Monitoring: MG 0509+0541 with finding chart.
Aladin Lite view of Fermi data centered on TXS 0506+056
Astronomical X-ray sources
BL Lacertae objects
Orion (constellation)
Neutrino astronomy
Radio galaxies | TXS 0506+056 | Astronomy | 1,297 |
15,217,336 | https://en.wikipedia.org/wiki/ZBTB21 | Zinc finger and BTB domain-containing protein 21 is a protein that in humans is encoded by the ZBTB21 gene.
See also
BTB domain
Zinc finger
References
Further reading
External links
Transcription factors | ZBTB21 | Chemistry,Biology | 42 |
10,297,209 | https://en.wikipedia.org/wiki/Quisqualic%20acid | Quisqualic acid is an agonist of the AMPA, kainate, and group I metabotropic glutamate receptors. It is one of the most potent AMPA receptor agonists known. It causes excitotoxicity and is used in neuroscience to selectively destroy neurons in the brain or spinal cord. Quisqualic acid occurs naturally in the seeds of Quisqualis species.
Research conducted by the USDA Agricultural Research Service, has demonstrated quisqualic acid is also present within the flower petals of zonal geranium (Pelargonium x hortorum) and is responsible for causing rigid paralysis of the Japanese beetle. Quisqualic acid is thought to mimic L-glutamic acid, which is a neurotransmitter in the insect neuromuscular junction and mammalian central nervous system.
History
Combretum indicum (Quisqualis indica var. villosa) is native to tropical Asia, though it remains in doubt whether it is also indigenous to Africa or was introduced there. Since the amino acid that can be isolated from its fruits can nowadays be made in the laboratory, the plant is mostly cultivated as an ornamental.
Its fruits are known for their anthelmintic effect and are therefore used to treat ascariasis. The dried seeds are used to reduce vomiting and to stop diarrhoea, but an oil extracted from the seeds can have purgative properties. The roots are taken as a vermifuge, and the leaf juice, softened in oil, is applied to treat ulcers, parasitic skin infections or fever.
The plant is used for pain relief, and in the Indian Ocean islands a decoction of the leaves is used to bathe children with eczema. In the Philippines, people chew the fruits to relieve coughs, and the crushed fruits and seeds are applied to ameliorate nephritis. In Vietnam, the root of the plant is used to treat rheumatism, and in Papua New Guinea the plant is taken as a contraceptive medicine.
The plant is not used only medicinally, however. In West Africa, the long, elastic stems are used for fish weirs, fish traps and basketry. The flowers are edible and are added to salads for color.
The seed oil contains palmitic, oleic, stearic, linoleic, myristic and arachidonic acids. The flowers are rich in the flavonoid glycosides pelargonidin-3-glucoside and rutin. The leaves and stem bark are rich in tannins, and several diphenylpropanoids have been isolated from the leafy stem.
The active compound (quisqualic acid) resembles the anthelmintic α-santonin in its action, so in some countries the seeds of the plant are used as a substitute for that drug. However, the acid has shown excitatory effects on cultured neurons, as well as in a variety of animal models, where it causes several types of limbic seizures and neuronal necrosis.
Quisqualic acid can now be synthesized commercially, and it functions as an agonist of its receptors, which are found in the mammalian central nervous system.
Chemistry
Structure
Quisqualic acid is an organic compound belonging to the class of L-alpha-amino acids, compounds that have the L configuration at the alpha carbon atom.
Its structure contains a five-membered heterocyclic ring consisting of one oxygen atom and two nitrogen atoms, at positions 2 and 4 of the 1,2,4-oxadiazolidine-3,5-dione ring system. Related 1,2,4-oxadiazole ring structures are present in many natural products of pharmacological importance. Quisqualic acid, which is extracted from the seeds of Quisqualis indica, is a strong agonist of the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors.
Reactivity and synthesis
Biosynthesis
L-Quisqualic acid is a glutamate receptor agonist, acting at AMPA receptors and at metabotropic glutamate receptors positively linked to phosphoinositide hydrolysis. It sensitizes neurons in the hippocampus to depolarization by L-AP6.
Being a 3, 5 disubstituted oxadiazole, quisqualic acid is a stable compound.
One way of obtaining quisqualic acid is by enzymatic synthesis. Cysteine synthase purified from the leaves of Quisqualis indica var. villosa occurs in two forms; both isolated isoenzymes catalyse the formation of cysteine from O-acetyl-L-serine and hydrogen sulphide, but only one of them catalyses the formation of L-quisqualic acid.
Industrial synthesis
Another route to the product uses L-serine as the starting material.
The initial step of the synthesis is the conversion of L-serine to its N-t-butoxycarbonyl (Boc) derivative: to protect the amine group of serine, di-tert-butyl dicarbonate in isopropanol and aqueous sodium hydroxide is added at room temperature, giving the N-Boc-protected acid. This acid is then acylated with O-benzylhydroxylamine hydrochloride: the Boc-protected serine is treated with one equivalent of isobutyl chloroformate and N-methylmorpholine in dry THF to form a mixed anhydride, which then reacts with O-benzylhydroxylamine to give the hydroxamate. The hydroxamate is converted into a β-lactam, which is hydrolyzed to the hydroxylamino acid by treatment with one equivalent of sodium hydroxide. After acidification with a saturated aqueous solution of citric acid, the final product, L-quisqualic acid, is isolated.
Functions
Molecular mechanisms of action
Quisqualic acid is functionally similar to glutamate, the endogenous agonist of the glutamate receptors, which acts as a neurotransmitter at the insect neuromuscular junction and in the mammalian CNS. Quisqualic acid crosses the blood–brain barrier and binds to cell-surface AMPA and kainate receptors in the brain.
The AMPA receptor is a type of ionotropic glutamate receptor: when bound to a ligand, it modulates excitability by gating the flow of calcium and sodium ions into the cell. Kainate receptors are less well understood than AMPA receptors, although their function is somewhat similar: the ion channel permits the flow of sodium and potassium ions and, to a lesser extent, calcium ions.
As mentioned, binding of quisqualic acid to these receptors leads to an influx of calcium and sodium ions into the neurons, which triggers downstream signaling cascades. Calcium signaling involves protein effectors such as kinases (CaMK, MAPK/ERKs), CREB-transcription factor and various phosphatases. It regulates gene expression and may modify the properties of the receptors.
Sodium and calcium ions together generate an excitatory postsynaptic potential (EPSP) that triggers action potentials. Overactivation of these glutamate and kainate receptors leads to excitotoxicity and neurological damage.
A larger dose of quisqualic acid overactivates these receptors, which can induce seizures due to prolonged action-potential firing in the neurons. Quisqualic acid has also been associated with various neurological disorders such as epilepsy and stroke.
Metabotropic glutamate receptors, also known as mGluRs, are a type of glutamate receptor belonging to the G-protein-coupled receptors. These receptors are important in neural communication, memory formation, learning and regulation. Like glutamate, quisqualic acid binds to these receptors, with even higher potency mainly at mGlu1 and mGlu5, and exerts its effects through a complex second-messenger system. Activation of these receptors leads to an increase in inositol trisphosphate (IP3) and diacylglycerol (DAG) through the activation of phospholipase C (PLC). IP3 then diffuses to IP3 receptors on the endoplasmic reticulum, which are calcium channels, and their opening increases the calcium concentration in the cell.
Modulation of NMDA receptor
The effects of quisqualic acid depend on location and context. These two receptors (mGlu1 and mGlu5) are known to potentiate the activity of N-methyl-D-aspartate receptors (NMDARs), a type of ionotropic glutamate receptor whose overactivation is neurotoxic. Excessive amounts of NMDA have been found to harm neurons in the presence of mGlu1 and mGlu5 receptors.
Effects on plasticity
Activation of group 1 mGluRs is implicated in synaptic plasticity and contributes to both neurotoxicity and neuroprotection, such as protection of the retina against the NMDA toxicity mentioned above. It also causes a reduction in ZENK expression, which leads to myopia in chickens.
Role in disease
Studies on mice have suggested that mGlu1 may be involved in the development of certain cancers. Because these receptors are mostly localized in the thalamus, hypothalamus and caudate nucleus regions of the brain, their overactivation by quisqualic acid suggests a potential role in movement disorders.
Use/purpose, availability, efficacy, side effects/ adverse effects
Quisqualic acid is an excitatory amino acid (EAA) and a potent agonist of metabotropic glutamate receptors; evidence shows that activation of these receptors may cause a long-lasting sensitization of neurons to depolarization, a phenomenon called the "Quis effect".
The first uses of quisqualic acid in research date back to 1975, when the first description of the acid noted that it had strong excitatory effects in the spinal cords of frogs and rats as well as on the neuromuscular junction of crayfish. Since then, its main use in research has been as a template for excitotoxic models in spinal cord injury (SCI) studies. When injected into the spinal cord, quisqualic acid can cause excessive activation of glutamate receptors, leading to neuronal damage and loss. This excitotoxic model has been used to study the mechanisms of SCI and to develop potential treatments for related conditions. Several studies have demonstrated experimentally the similarity between the pathology and symptoms of SCI induced by quisqualic acid injections and those observed in clinical spinal cord injuries.
After a quisqualic acid injection, spinal neurons located close to areas of neuronal degeneration and cavitation exhibit a decrease in mechanical threshold, meaning they become more sensitive to mechanical stimuli. This heightened sensitivity is accompanied by prolonged afterdischarge responses. These results suggest that excitatory amino acid agonists can induce morphological changes in the spinal cord, which can lead to physiological changes in adjacent neurons, ultimately resulting in altered mechanosensitivity.
There is evidence to suggest that excitatory amino acids like quisqualic acid play a significant role in the induction of cell death following stroke, hypoxia-ischemia, and traumatic brain injury.
Studies of quisqualic acid binding have indicated that the amino acid is not selective for a single receptor subtype, initially identified as the quisqualate receptor. Instead, it shows high affinity for other types of excitatory amino acid receptors, including kainate, AMPA and metabotropic receptors, as well as for some transport sites, such as the chloride-dependent L-AP4-sensitive sites. In addition, it exhibits affinity for certain enzymes responsible for cleaving dipeptides, including the enzyme responsible for cleaving N-acetyl-aspartylglutamate (NAALADase).
Regarding bioavailability, little information is available, as there is limited research on the pharmacokinetics of quisqualic acid. However, studies in rats suggest that age may play a role in the effects of administered quisqualic acid. An experiment on rats in two age groups (20 days old and 60 days old) showed that, when given quisqualic acid microinjections, the 60-day-old rats had more seizures than the younger rats. The rats were given the same amount of quisqualic acid, so the immature animals received a higher dose per body weight, implying that the harm inflicted by the excitatory amino acid may have been comparatively lower in the younger animals.
Quisqualic acid has not been used in clinical trials and currently has no medicinal use, therefore no information about adverse or side effects has been reported.
There has been a significant decrease in research done on quisqualic acid after the early 2000s, possibly attributed to a lack of specificity and/or lack of other clinical uses apart from SCI investigations, which have progressed with other methods of research.
Metabolism/Biotransformation
Quisqualic acid enters the body through different routes, such as ingestion, inhalation, or injection. The ADME (absorption, distribution, metabolism and excretion) process has been studied by means of various animal models in the laboratory.
Absorption: quisqualic acid is a small molecule, so its absorption is expected to be rapid. It is predicted to be absorbed in the human intestine, from where it circulates to the blood–brain barrier. Analysis of amino acid transport systems is complicated by the presence of multiple transporters with overlapping specificity. Since glutamate and quisqualic acid are similar, sodium/potassium-dependent transport in the gastrointestinal tract is predicted to be the absorption route of the acid.
Distribution: given the receptors it binds, the acid can be predicted to be present in regions such as the hippocampus, basal ganglia and olfactory regions.
Metabolism: quisqualic acid is thought to be metabolized in the liver by oxidative metabolism carried out by cytochrome P450 enzymes and by glutathione S-transferase (a detoxifying enzyme). A study of exposure to quisqualic acid indicated that P450 and GST were involved, which is also supported by the admetSAR tool for evaluating chemical ADMET properties. Its metabolites are thought to be NMDA and quinolinic acid.
Excretion: Mostly, as a rule of thumb, amino acids undergo transamination/deamination in the liver. Thus amino acids are converted into ammonia and keto acids, which are eventually excreted via the kidneys.
It is worth mentioning that the pharmacokinetics of quisqualic acid has not been extensively studied and there is sparse information available on its ADME process. Therefore, more research is needed to fully understand the metabolism of the acid in the body.
See also
Quisqualamine
Non-proteinogenic amino acids
References
2,3-Diaminopropionic acids
Convulsants
Ureas
Carbamates
Lactams
Imides
Oxadiazolidines
AMPA receptor agonists
Kainate receptor agonists
MGlu1 receptor agonists
MGlu5 receptor agonists
Non-proteinogenic amino acids
Toxic amino acids
Neurotoxins
Excitotoxins | Quisqualic acid | Chemistry | 3,327 |
58,687,359 | https://en.wikipedia.org/wiki/Precordial%20concordance | Precordial concordance, also known as QRS concordance, is when all precordial leads on an electrocardiogram are either positive (positive concordance) or negative (negative concordance). Negative concordance almost always represents a life-threatening condition called ventricular tachycardia, because no other condition produces such abnormal conduction spreading from the apex of the heart to the upper parts. In positive concordance, however, other rare conditions such as left-sided accessory pathways or blocks are also possible.
References
Electrophysiology
Physiology | Precordial concordance | Biology | 119 |
31,033,998 | https://en.wikipedia.org/wiki/Web%20audience%20measurement | Web Audience Measurement (WAM) is an audience measurement and website analytics tool that measures Internet usage in India. The system, a joint effort of IMRB International and the Internet and Mobile Association of India, surveys over 6,000 individuals across 8 metropolitan centers in India and tracks a variety of metrics such as time-on-site, exposure, reach and frequency of Internet usage.
WAM is a continuous tracking panel study that provides cross-sectional data on Internet usage segmented by gender, SEC and location. This panel-based approach uses metering technology, designed for an Indian context, that tracks computers.
Web Rating Points factor in multiple measures of Internet usage to provide a more comprehensive picture to web advertisers and are an attempt to standardize web analytics in India. The web analytics market in India is currently fragmented, with Comscore and Vizisense being IMRB's key competitors. Much discussion revolves around the differences between the numbers provided by the various competitors in the digital audience measurement space, which creates rifts between users of different audience measurement tools; choosing the right measurement partner is therefore important for media stakeholders.
References
Audience measurement
Internet radio
Web analytics
Digital marketing
Telecommunications in India
Internet in India | Web audience measurement | Technology | 244 |
12,200,818 | https://en.wikipedia.org/wiki/Dean%20number | The Dean number (De) is a dimensionless group in fluid mechanics, which occurs in the study of flow in curved pipes and channels. It is named after the British scientist W. R. Dean, who was the first to provide a theoretical solution of the fluid
motion through curved pipes for laminar flow by using a perturbation procedure from a Poiseuille flow in a straight pipe to a flow in a pipe with very small curvature.
Physical Context
If a fluid is moving along a straight pipe that after some point becomes curved, then the flow entering a curved portion develops a centrifugal force in an asymmetrical geometry. Such asymmetricity affects the parabolic velocity profile and causes a shift in the location of the maximum velocity compared to a straight pipe. Therefore, the maximum velocity shifts from the centerline towards the concave outer wall and forms an asymmetric velocity profile. There will be an adverse pressure gradient generated from the curvature with an increase in pressure, therefore a decrease in velocity close to the convex wall, and the contrary occurring towards the concave outer wall of the pipe. This gives rise to a secondary motion superposed on the primary flow, with the fluid in the centre of the pipe being swept towards the outer side of the bend and the fluid near the pipe wall will return towards the inside of the bend. This secondary motion is expected to appear as a pair of counter-rotating cells, which are called Dean vortices.
Definition
The Dean number is typically denoted by De (or Dn). For a flow in a pipe or tube it is defined as:
De = (ρ V D / μ) (D / (2 Rc))^(1/2) = Re (D / (2 Rc))^(1/2)
where
ρ is the density of the fluid
μ is the dynamic viscosity
V is the axial velocity scale
D is the diameter (for non-circular geometry, an equivalent diameter is used; see Reynolds number)
Rc is the radius of curvature of the path of the channel.
Re is the Reynolds number.
The Dean number is therefore the product of the Reynolds number (based on axial flow through a pipe of diameter D) and the square root of the curvature ratio D / (2 Rc).
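A small numerical sketch of this definition (the fluid properties and geometry below are illustrative values, not from the source):

```python
def dean_number(density, viscosity, velocity, diameter, radius_of_curvature):
    """De = Re * sqrt(D / (2 Rc)), with Re = rho * V * D / mu."""
    reynolds = density * velocity * diameter / viscosity
    return reynolds * (diameter / (2 * radius_of_curvature)) ** 0.5

# Water (rho ~1000 kg/m^3, mu ~1e-3 Pa.s) at 0.1 m/s in a 10 mm pipe
# bent with a 100 mm radius of curvature:
print(round(dean_number(1000, 1e-3, 0.1, 0.01, 0.1)))   # Re = 1000, De ~ 224
```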
Turbulence transition
The flow is completely unidirectional for low Dean numbers (De < 40~60). As the Dean number increases between 40~60 to 64~75, some wavy perturbations can be observed in the cross-section, which evidences some secondary flow. At higher Dean numbers than that (De > 64~75) the pair of Dean vortices becomes stable, indicating a primary dynamic instability. A secondary instability appears for De > 75~200, where the vortices present undulations, twisting, and eventually merging and pair splitting. Fully turbulent flow forms for De > 400. Transition from laminar to turbulent flow has also been examined in a number of studies, even though no universal solution exists since the parameter is highly dependent on the curvature ratio. Somewhat unexpectedly, laminar flow can be maintained for larger Reynolds numbers (even by a factor of two for the highest curvature ratios studied) than for straight pipes, even though curvature is known to cause instability.
Dean equations
The Dean number appears in the so-called Dean equations. These are an approximation to the full Navier–Stokes equations for the steady axially uniform flow of a Newtonian fluid in a toroidal pipe, obtained by retaining just the leading order curvature effects (i.e. the leading-order equations in the limit of a small curvature ratio).
We use orthogonal coordinates with corresponding unit vectors aligned with the centre-line of the pipe at each point. The axial direction is , with being the normal in the plane of the centre-line, and the binormal. For an axial flow driven by a pressure gradient , the axial velocity is scaled with . The cross-stream velocities are scaled with , and cross-stream pressures with . Lengths are scaled with the tube radius .
In terms of these non-dimensional variables and coordinates, the Dean equations are then
where
is the convective derivative.
The Dean number De is the only parameter left in the system, and encapsulates the leading order curvature effects. Higher-order approximations will involve additional parameters.
For weak curvature effects (small De), the Dean equations can be solved as a series expansion in De. The first correction to the leading-order axial Poiseuille flow is a pair of vortices in the cross-section carrying flow from the inside to the outside of the bend across the centre and back around the edges. This solution is stable up to a critical Dean number . For larger De, there are multiple solutions, many of which are unstable.
References
Further reading
Dimensionless numbers of fluid mechanics
Fluid dynamics | Dean number | Chemistry,Engineering | 936 |
24,983,492 | https://en.wikipedia.org/wiki/NN%20Serpentis | NN Serpentis (abbreviated NN Ser) is an eclipsing post-common envelope binary system approximately 1670 light-years away. The system comprises an eclipsing white dwarf and red dwarf. The two stars orbit each other every 0.13 days.
Planetary system
A planetary system has been inferred to exist around NN Ser by several teams. All of these teams rely on the fact that Earth sits in the same plane as the NN Serpentis binary star system, so humans can see the larger red dwarf eclipse the white dwarf every 0.13 days. Astronomers are then able to use these frequent eclipses to spot a pattern of small but significant irregularities in the orbit of stars, which could be attributed to the presence and gravitational influence of circumbinary planets.
Chen (2009) used these "eclipse timing variations" to suggest a putative companion with an orbital period of between 30 and 285 years and a minimum mass between 0.0043 and 0.18 solar masses.
In late 2009, Qian estimated a minimum mass of 10.7 Jupiter masses and an orbital period of 7.56 years for this planet, probably located at 3.29 astronomical units. This has since been disproven by further measurements of the eclipse times of the binary stars.
In late 2009 and 2010, researchers from the UK (University of Warwick and the University of Sheffield), Germany (Georg-August-Universitat in Göttingen, Eberhard-Karls-Universitat in Tübingen), Chile (Universidad de Valparaíso), and the United States (University of Texas at Austin) suggested that the eclipse timing variations are caused by two gas giant planets. The more massive gas giant is about 6 times the mass of Jupiter and orbits the binary star every 15.5 years, the other orbits every 7.75 years and is about 1.6 times the mass of Jupiter.
All published planetary models have failed to predict changes in eclipse timing since 2018, suggesting that a different explanation for the eclipse timing variations may be needed.
See also
Algol
HW Virginis
CM Draconis
Kepler-16
Kepler-47, another binary system with 3 planets
References
External links
The Extrasolar Planet Encyclopaedia — Catalog Listing
UK Astronomers Help Find Snooker Star System
Eclipsing binaries
Serpentis, NN
White dwarfs
M-type main-sequence stars
2
Hypothetical planetary systems
Serpens | NN Serpentis | Astronomy | 489 |
26,652,964 | https://en.wikipedia.org/wiki/Intersex | Intersex people are individuals born with any of several sex characteristics, including chromosome patterns, gonads, or genitals that, according to the Office of the United Nations High Commissioner for Human Rights, "do not fit typical binary notions of male or female bodies".
Sex assignment at birth usually aligns with a child's external genitalia. The number of births with ambiguous genitals is in the range of 1:4,500–1:2,000 (0.02%–0.05%). Other conditions involve the development of atypical chromosomes, gonads, or hormones. Some persons may be assigned and raised as a girl or boy but then identify with another gender later in life, while most continue to identify with their assigned sex. The number of births where the baby is intersex has been reported differently depending on who reports and which definition of intersex is used. Anne Fausto-Sterling and her book co-authors claim the prevalence of "nondimorphic sexual development" in humans might be as high as 1.7%. However, a response published by Leonard Sax reports this figure includes conditions such as late onset congenital adrenal hyperplasia and Klinefelter syndrome, which most clinicians do not recognize as intersex; Sax states, "if the term intersex is to retain any meaning, the term should be restricted to those conditions in which chromosomal sex is inconsistent with phenotypic sex, or in which the phenotype is not classifiable as either male or female", stating the prevalence of intersex is about 0.018% (one in 5,500 births), about 100 times less than Fausto-Sterling's estimate.
Terms used to describe intersex people are contested, and change over time and place. Intersex people were previously referred to as "hermaphrodites" or "congenital eunuchs". In the 19th and 20th centuries, some medical experts devised new nomenclature in an attempt to classify the characteristics that they had observed, the first attempt to create a taxonomic classification system of intersex conditions. Intersex people were categorized as either having "true hermaphroditism", "female pseudohermaphroditism", or "male pseudohermaphroditism". These terms are no longer used, and terms including the word "hermaphrodite" are considered to be misleading, stigmatizing, and scientifically specious in reference to humans. In biology, the term "hermaphrodite" is used to describe an organism that can produce both male and female gametes. Some people with intersex traits use the term "intersex", and some prefer other language. In clinical settings, the term "disorders of sex development" (DSD) has been used since 2006, a shift in language considered controversial since its introduction.
Intersex people face stigmatization and discrimination from birth, or following the discovery of intersex traits at stages of development such as puberty. Intersex people may face infanticide, abandonment, and stigmatization from their families. Globally, some intersex infants and children, such as those with ambiguous outer genitalia, are surgically or hormonally altered to create more socially acceptable sex characteristics. This is considered controversial, with no firm evidence of favorable outcomes. Such treatments may involve sterilization. Adults, including elite female athletes, have also been subjects of such treatment. Increasingly, these issues are considered human rights abuses, with statements from international and national human rights and ethics institutions. Intersex organizations have also issued statements about human rights violations, including the 2013 Malta declaration of the third International Intersex Forum. In 2011, Christiane Völling became the first intersex person known to have successfully sued for damages in a case brought for non-consensual surgical intervention. In April 2015, Malta became the first country to outlaw non-consensual medical interventions to modify sex anatomy, including that of intersex people.
Terminology
There is no clear consensus definition of intersex and no clear delineation of which specific conditions qualify an individual as intersex. The World Health Organization's International Classification of Diseases (ICD), the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM), and many medical journals classify intersex traits or conditions among disorders of sex development (DSD).
A common adjective for people with disorders of sex development (DSD) is "intersex".
Etymology and definitions
In 1917, Richard Goldschmidt created the term "intersexuality" to refer to a variety of physical sex ambiguities. However, according to The SAGE Encyclopedia of LGBTQ Studies, it was not until Anne Fausto-Sterling published her article "The Five Sexes: Why Male and Female Are Not Enough" in 1993 that the term reached popularity.
According to the UN Office of the High Commissioner for Human Rights:
Attitudes towards the term
Some intersex organizations reference "intersex people" and "intersex variations or traits" while others use more medicalized language such as "people with intersex conditions", or people "with intersex conditions or DSDs (differences of sex development)" and "children born with variations of sex anatomy". In May 2016, interACT published a statement recognizing "increasing general understanding and acceptance of the term 'intersex'".
Australian sociological research on 272 "people born with atypical sex characteristics", published in 2016, found that 60% of respondents used the term "intersex" to self-describe their sex characteristics, including people identifying themselves as intersex, describing themselves as having an intersex variation or, in smaller numbers, having an intersex condition. Respondents also commonly used diagnostic labels and referred to their sex chromosomes, with word choices depending on audience.
Research on 202 respondents by the Lurie Children's Hospital, Chicago, and the AIS-DSD Support Group (now known as InterConnect Support Group) published in 2017 found that 80% of Support Group respondents "strongly liked, liked or felt neutral about intersex" as a term, while caregivers were less supportive. The hospital reported that the use of the term "disorders of sex development" may negatively affect care.
Another study by a group of children's hospitals in the United States found that 53% of 133 parent and adolescent participants recruited at five clinics did not like the term "intersex". Participants who were members of support groups were more likely to dislike the term. A "dsd-LIFE" study in 2020 found that around 43% of 179 participants thought the term "intersex" was bad, 20% felt neutral about the term, while 37% thought the term was good.
The term "hermaphrodite"
Historically, the term "hermaphrodite" was used in law to refer to people whose sex was in doubt. The 12th-century Decretum Gratiani states that "Whether an hermaphrodite may witness a testament, depends on which sex prevails" ("Hermafroditus an ad testamentum adhiberi possit, qualitas sexus incalescentis ostendit"). Similarly, the 17th-century English jurist and judge Edward Coke (Lord Coke) wrote in his Institutes of the Lawes of England on laws of succession, "Every heire is either a male, a female, or an hermaphrodite, that is both male and female. And an hermaphrodite (which is also called Androgynus) shall be heire, either as male or female, according to that kind of sexe which doth prevaile."
During the Victorian era, medical authors attempted to ascertain whether or not humans could be hermaphrodites, adopting a precise biological definition for the term, and making distinctions between "male pseudohermaphrodite", "female pseudohermaphrodite" and especially "true hermaphrodite". These terms, which reflected the histology (microscopic appearance) of the gonads, are no longer used. Until the mid-20th century, "hermaphrodite" was used synonymously with "intersex". Medical terminology shifted in the early 21st century, not only because of concerns about language, but also because of a shift to understandings based on genetics. The term "hermaphrodite" is also controversial in that it implies the existence of a person who is fully male and fully female, a fantasy held by certain people who seek out "hermaphrodite" sexual partners; within the intersex movement, such people are called "wannafucks". As such, the term "hermaphrodite" is often seen as degrading and offensive, although many intersex activists use it as a direct form of self-empowerment and critique, as in the ISNA's first newsletter Hermaphrodites with Attitude.
The Intersex Society of North America has stated that hermaphrodites should not be confused with intersex people and that using "hermaphrodite" to refer to intersex individuals is considered to be stigmatizing and misleading.
Prevalence
Estimates of the number of people who are intersex vary, depending on which conditions are counted as intersex. The now-defunct Intersex Society of North America stated that:
Anne Fausto-Sterling and her co-authors stated in 2000 that "[a]dding the estimates of all known causes of nondimorphic sexual development suggests that approximately 1.7% of all live births do not conform to a Platonic ideal of absolute sex chromosome, gonadal, genital, and hormonal dimorphism"; these publications have been widely quoted by intersex activists. Of the 1.7%, 1.5% points (88% of those considered "nondimorphic sexual development" in this figure) consist of individuals with late onset congenital adrenal hyperplasia (LOCAH) which may be asymptomatic but can present after puberty and cause infertility.
Leonard Sax, in response to Fausto-Sterling, estimated that the prevalence of intersex was about 0.018% of the world's population, discounting several conditions included in Fausto-Sterling's estimate: LOCAH, Klinefelter syndrome (47,XXY), Turner syndrome (45,X), the chromosomal variants 47,XYY and 47,XXX, and vaginal agenesis. Sax reasons that in these conditions chromosomal sex is consistent with phenotypic sex and the phenotype is classifiable as either male or female.
In a 2003 letter to the editor, political scientist Carrie Hull analyzed the data used by Fausto-Sterling and said the estimated intersex rate should instead have been 0.37%, due to many errors. In a response letter published simultaneously, Fausto-Sterling welcomed the additional analysis and said "I am not invested in a particular final estimate, only that there BE an estimate." A 2018 review reported that the number of births with ambiguous genitals is in the range of 0.02% to 0.05%.
Intersex Human Rights Australia says it maintains 1.7% as its preferred upper limit "despite its flaws", stating both that the estimate "encapsulates the entire population of people who are stigmatized—or risk stigmatization—due to innate sex characteristics", and that Sax's definitions exclude individuals who experience such stigma and who have helped to establish the intersex movement. InterACT, a major organization for intersex rights in the US, states that 1.7% of people have some variation of sexual development, 0.5% have atypical genitalia, and 0.05% have mixed/ambiguous genitalia. A study relying on a nationally representative survey conducted in Mexico between 2021 and 2022 obtained similar estimates: around 1.6% of individuals aged 15 to 64 reported being born with sex variations.
The following summarizes prevalences of traits that medical experts consider to be intersex (where sex chromosome anomalies are involved, the karyotype is often summarized by the total number of chromosomes followed by the sex chromosomes present in each cell):
Notes:
History
From early history, societies have been aware of intersex people. Some of the earliest evidence is found in mythology: the Greek historian Diodorus Siculus wrote of the mythological Hermaphroditus in the first century BC, who was "born with a physical body which is a combination of that of a man and that of a woman", and reputedly possessed supernatural properties. He also recounted the lives of Diophantus of Abae and Callon of Epidaurus. Ardhanarishvara, an androgynous composite form of male deity Shiva and female deity Parvati, originated in Kushan culture as far back as the first century AD. A statue depicting Ardhanarishvara is included in India's Meenakshi Temple; this statue clearly shows both male and female bodily elements.
Hippocrates (c. 460 – c. 370 BC, Greek physician) and Galen (129 – c. 216 AD, Roman physician, surgeon, and philosopher) both viewed sex as a spectrum between men and women, with "many shades in between, including hermaphrodites, a perfect balance of male and female". Pliny the Elder (AD 23/24–79), a Roman naturalist, described "those who are born of both sexes, whom we call hermaphrodites, at one time androgyni" (from the Greek andro-, "man", and gyne, "woman"). Augustine (354 – 430 AD), the influential Catholic theologian, wrote in The Literal Meaning of Genesis that humans were created in two sexes, despite "as happens in some births, in the case of what we call androgynes".
In medieval and early modern European societies, Roman law, post-classical canon law, and later common law, referred to a person's sex as male, female or hermaphrodite, with legal rights as male or female depending on the characteristics that appeared most dominant. The 12th-century Decretum Gratiani states, "Whether an hermaphrodite may witness a testament, depends on which sex prevails." The foundation of common law, the 17th century Institutes of the Lawes of England, described how a hermaphrodite could inherit "either as male or female, according to that kind of sexe which doth prevaile". Legal cases have been described in canon law and elsewhere over the centuries.
Some non-European societies have sex or gender systems that recognize more than the two categories of male/man and female/woman. Some of these cultures, for instance the South-Asian Hijra communities, may include intersex people in a third gender category. Although—according to Morgan Holmes—early Western anthropologists categorized such cultures as "primitive", Holmes has argued that analyses of these cultures have been simplistic or romanticized and fail to take account of the ways that subjects of all categories are treated.
During the Victorian era, medical authors introduced the terms "true hermaphrodite" for an individual who has both ovarian and testicular tissue, "male pseudo-hermaphrodite" for a person with testicular tissue, but either female or ambiguous sexual anatomy, and "female pseudo-hermaphrodite" for a person with ovarian tissue, but either male or ambiguous sexual anatomy. Some later shifts in terminology have reflected advances in genetics, while other shifts are suggested to be due to pejorative associations.
The term "intersexuality" was coined by Richard Goldschmidt in 1917. The first suggestion to replace the term "hermaphrodite" with "intersex" was made by Cawadias in the 1940s.
Since the rise of modern medical science, some intersex people with ambiguous external genitalia have had their genitalia surgically modified to resemble either female or male genitals. Surgeons regarded the birth of an intersex baby as a "social emergency". An "optimal gender policy", initially developed by John Money, held that early intervention helped avoid gender identity confusion, but this lacks evidence. Early interventions have adverse consequences for psychological and physical health. Since advances in surgery have made it possible for intersex conditions to be concealed, many people are not aware of how frequently intersex conditions arise in human beings or that they occur at all.
Dialogue between what were once antagonistic groups of activists and clinicians has led to only slight changes in medical policies and how intersex patients and their families are treated in some locations. In 2011, Christiane Völling became the first intersex person known to have successfully sued for damages in a case brought for non-consensual surgical intervention. In April 2015, Malta became the first country to outlaw non-consensual medical interventions to modify sex anatomy, including that of intersex people. Many civil society organizations and human rights institutions now call for an end to unnecessary "normalizing" interventions, including in the Malta declaration.
Human rights and legal issues
Human rights institutions are placing increasing scrutiny on harmful practices and issues of discrimination against intersex people. These issues have been addressed by a rapidly increasing number of international institutions including, in 2015, the Council of Europe, the United Nations Office of the United Nations High Commissioner for Human Rights and the World Health Organization (WHO). In 2024, the United Nations Human Rights Council adopted its first resolution to protect the rights of intersex people. These developments have been accompanied by International Intersex Forums and increased cooperation among civil society organizations. However, the implementation, codification, and enforcement of intersex human rights in national legal systems remains slow.
Physical integrity and bodily autonomy
Stigmatization and discrimination from birth may include infanticide, abandonment, and the stigmatization of families. The birth of an intersex child was often viewed as a curse or a sign of a witch mother, especially in parts of Africa. Abandonments and infanticides have been reported in Uganda, Kenya, South Asia, and China.
Intersex infants, children and adolescents also experience medically unnecessary "normalising" interventions and the pathologisation of variations in sex characteristics. In countries where the human rights of intersex people have been studied, medical interventions to modify the sex characteristics of intersex people have still taken place without the consent of the intersex person. Interventions have been described by human rights defenders as a violation of many rights, including (but not limited to) rights to bodily integrity, non-discrimination, privacy, and freedom from experimentation. These interventions have frequently been performed with the consent of the intersex person's parents, when the person is legally too young to consent. Such interventions have been criticized by the WHO, other UN bodies such as the Office of the High Commissioner for Human Rights, and an increasing number of regional and national institutions due to their adverse consequences, including trauma, impact on sexual function and sensation, and violation of rights to physical and mental integrity. The UN organizations decided that infant intervention should not be allowed, in favor of waiting for the child to mature enough to be a part of the decision-making—this allows for a decision to be made with total consent. In April 2015, Malta became the first country to outlaw surgical intervention without consent. In the same year, the Council of Europe became the first institution to state that intersex people have the right not to undergo sex affirmation interventions.
Anti-discrimination and equal treatment
People born with intersex bodies are seen as different. Intersex infants, children, adolescents and adults "are often stigmatized and subjected to multiple human rights violations", including discrimination in education, healthcare, employment, sport, and public services. Researchers have documented significant disparities in mental, physical, and sexual health when comparing intersex individuals to the general population, including higher rates of bullying, stigmatization, harassment, violence, and suicidal intention, as well as substantial barriers in the workplace.
Several countries have so far explicitly protected intersex people from discrimination, with landmarks including South Africa, Australia, and, most comprehensively, Malta.
Remedies and claims for compensation
Claims for compensation and remedies for human rights abuses include the 2011 case of Christiane Völling in Germany. A second case was adjudicated in Chile in 2012, involving a child and his parents. A further successful case in Germany, taken by Michaela Raab, was reported in 2015. In the United States, the Minor Child (M.C. v Aaronson) lawsuit was "a medical malpractice case related to the informed consent for a surgery performed on the Crawfords' adopted child (known as M.C.) at [Medical University of South Carolina] in April 2006". The case was one of the first lawsuits of its type to challenge "legal, ethical, and medical issues regarding genital-normalizing surgery" in minors, and was eventually settled out of court by the Medical University of South Carolina for $440,000 in 2017.
Information and support
Key issues include access to information, medical records, and peer and other counselling and support. With the rise of modern medical science in Western societies, a secrecy-based model was also adopted, in the belief that this was necessary to ensure normal physical and psychosocial development.
Legal recognition
The Asia Pacific Forum of National Human Rights Institutions states that legal recognition is firstly "about intersex people who have been issued a male or a female birth certificate being able to enjoy the same legal rights as other men and women". In some regions, obtaining any form of birth certification may be an issue. A Kenyan court case in 2014 established the right of an intersex boy, "Baby A", to a birth certificate.
Like all individuals, some intersex individuals may be raised as a certain sex (male or female) but then identify with another later in life, while most do not. Recognition of third sex or gender classifications occurs in several countries, however, it is controversial when it becomes assumed or coercive, as is the case with some German infants. Sociological research in Australia, a country with a third 'X' sex classification, shows that 19% of people born with atypical sex characteristics selected an "X" or "other" option, while 75% of survey respondents self-described as male or female (52% as women, 23% as men), and 6% as unsure.
LGBT and LGBTI
Intersex conditions can be contrasted with transgender gender identities and the attached gender dysphoria a transgender person may feel, wherein their gender identity does not match their assigned sex. However, some people are both intersex and transgender; although intersex people by definition have variable sex characteristics that do not align with either typically male or female, this may be considered separate to an individual's assigned gender, the way they are raised and perceived, and their internal gender identity. A 2012 clinical review paper found that between 8.5% and 20% of people with intersex variations experienced gender dysphoria. In an analysis of the use of preimplantation genetic diagnosis to eliminate intersex traits, Behrmann and Ravitsky state: "Parental choice against intersex may ... conceal biases against same-sex attractedness and gender nonconformity."
The relationship of intersex people and communities to LGBTQ communities is complex, but intersex people are often added to the LGBT acronym, resulting in the acronym LGBTI (or when also including asexual people, LGBTQIA+). Emi Koyama describes how inclusion of intersex in LGBTI can fail to address intersex-specific human rights issues, including creating false impressions "that intersex people's rights are protected" by laws protecting LGBT people, and failing to acknowledge that many intersex people are not LGBT. Organisation Intersex International Australia states that some intersex individuals are homosexual, and some are heterosexual, but "LGBTI activism has fought for the rights of people who fall outside of expected binary sex and gender norms." Julius Kaggwa of SIPD Uganda has written that, while the gay community "offers us a place of relative safety, it is also oblivious to our specific needs". Mauro Cabral has written that transgender people and organizations "need to stop approaching intersex issues as if they were trans issues", including use of intersex conditions and people as a means of explaining being transgender; "we can collaborate a lot with the intersex movement by making it clear how wrong that approach is."
In society
Fiction, literature and media
An intersex character is the narrator in Jeffrey Eugenides' Pulitzer Prize-winning novel Middlesex.
The memoir, Born Both: An Intersex Life (Hachette Books, 2017), by intersex author and activist Hida Viloria, received strong praise from The New York Times Book Review, The Washington Post, Rolling Stone, People Magazine, and Psychology Today, was one of School Library Journal 2017 Top Ten Adult Books for Teens, and was a 2018 Lambda Literary Award nominee.
Television works about intersex and films about intersex are scarce. The Spanish-language film XXY won the Critics' Week grand prize at the 2007 Cannes Film Festival and the ACID/CCAS Support Award. Faking It is notable for providing both the first intersex main character in a television show, and television's first intersex character played by an intersex actor.
Civil society institutions
Intersex peer support and advocacy organizations have existed since at least 1985, with the establishment of the Androgen Insensitivity Syndrome Support Group Australia in 1985. The Androgen Insensitivity Syndrome Support Group (UK) was established in 1988. The Intersex Society of North America (ISNA) may have been one of the first intersex civil society organizations to have been open to people regardless of diagnosis; it was active from 1993 to 2008.
Events
Intersex Awareness Day is an internationally observed civil awareness day designed to highlight the challenges faced by intersex people, occurring annually on 26 October. It marks the first public demonstration by intersex people, which took place in Boston on 26 October 1996, outside a venue where the American Academy of Pediatrics was holding its annual conference.
Intersex Day of Remembrance, also known as Intersex Solidarity Day, is an internationally observed civil awareness day designed to highlight issues faced by intersex people, occurring annually on 8 November. It marks the birthday of Herculine Barbin, a French intersex person whose memoirs were later published by Michel Foucault in Herculine Barbin: Being the Recently Discovered Memoirs of a Nineteenth-century French Hermaphrodite.
Flags
The intersex flag was created in July 2013 by Morgan Carpenter of Intersex Human Rights Australia to create a flag "that is not derivative, but is yet firmly grounded in meaning". The circle is described as "unbroken and unornamented, symbolising wholeness and completeness, and our potentialities. We are still fighting for bodily autonomy and genital integrity, and this symbolises the right to be who and how we want to be."
In 2021, Valentino Vecchietti of Intersex Equality Rights UK redesigned the Progress Pride Flag to incorporate the intersex flag. This design added a yellow triangle with a purple circle in it to the chevron of the Progress Pride flag. It also changed the color of green to a lighter shade without adding new symbolism. Intersex Equality Rights UK posted the new flag on Instagram and Twitter.
Religion
In Judaism, the Talmud contains extensive discussion concerning the status of two types of intersex people in Jewish law; namely, the androgynos, who exhibit both male and female external sexual organs, and the tumtum, who exhibit neither. In the 1970s and 1980s, the treatment of intersex babies started to be discussed in Orthodox Jewish medical halacha by prominent rabbinic leaders, such as Eliezer Waldenberg and Moshe Feinstein.
Sport
Erik Schinegger, Foekje Dillema, Maria José Martínez-Patiño and Santhi Soundarajan were subjected to adverse sex verification testing, resulting in ineligibility to compete in organised competitive sport. Stanisława Walasiewicz, an athlete posthumously diagnosed with Turner syndrome, was also posthumously ruled ineligible to have competed.
The South African middle-distance runner Caster Semenya won 3 World Championships gold medals and 2 Olympic gold medals in the women's 800 metres. When Semenya won gold at the 2009 World Championships, the International Association of Athletics Federations (IAAF) requested sex verification tests on the very same day. The results were not released, and Semenya was ruled eligible to compete. In 2019, new IAAF rules came into force for athletes like Semenya with certain disorders of sex development (DSDs) requiring medication to suppress testosterone levels in order to participate in 400m, 800m, and 1500m women's events. Semenya objected to undergoing the treatment which is now mandatory. She has filed a series of legal cases to restore her ability to compete in these events without testosterone suppression, arguing that the World Athletics rules are discriminatory.
Katrina Karkazis, Rebecca Jordan-Young, Georgiann Davis and Silvia Camporesi have claimed that IAAF policies on "hyperandrogenism" in female athletes are "significantly flawed", arguing that the policy does not protect against breaches of privacy, requires athletes to undergo unnecessary treatment in order to compete, and intensifies "gender policing", and recommended that athletes be able to compete in accordance with their legally-recognised gender.
In April 2014, the BMJ reported that four elite women athletes with XY chromosomes and 5α-reductase 2 deficiency were subjected to sterilization and "partial clitoridectomies" in order to compete in sport. The authors noted that partial clitoridectomy was "not medically indicated" and "does not relate to real or perceived athletic 'advantage'". Intersex advocates regarded this intervention as "a clearly coercive process". In 2016, the United Nations Special Rapporteur on health, Dainius Pūras, criticized "current and historic" sex verification policies, describing how "a number of athletes have undergone gonadectomy (removal of reproductive organs) and partial clitoridectomy (a form of female genital mutilation) in the absence of symptoms or health issues warranting those procedures."
Biology
The notion of intersex individuals can be understood in the context of sexual system biology that varies across different types of organisms. Most animal species (~95%, including humans) are gonochoric, in which individuals are of either a female or male sex. Hermaphroditic species (some animals and most flowering plants) are represented by individuals that can express both sexes simultaneously or sequentially during their lifetimes. Intersex individuals in a number of gonochoric species, who express both female and male phenotypic characters to some degree, are known to exist at very low prevalences.
Although "hermaphrodite" and "intersex" have been used synonymously in humans, a hermaphrodite is specifically an individual capable of producing female and male gametes. While there are reports of individuals that seemed to have the potential to produce both types of gamete, in more recent years the term hermaphrodite as applied to humans has fallen out of favor, since female and male reproductive functions have not been observed together in the same individual.
Medical
Research in the late 20th century led to a growing medical consensus that diverse intersex bodies are normal, but relatively rare, forms of human biology. Clinician and researcher Milton Diamond stresses the importance of care in the selection of language related to intersex people:
Medical classifications
Sexual differentiation
The common pathway of sexual differentiation, where a productive human female has an XX chromosome pair, and a productive male has an XY pair, is relevant to the development of intersex conditions.
During fertilization, the sperm adds either an X (female) or a Y (male) chromosome to the X in the ovum. This determines the genetic sex of the embryo. During the first weeks of development, genetic male and female fetuses are "anatomically indistinguishable", with primitive gonads beginning to develop during approximately the sixth week of gestation. The gonads, in a bipotential state, may develop into either testes (the male gonads) or ovaries (the female gonads), depending on the consequent events. Up until and including the seventh week, genetically female and genetically male fetuses appear identical.
At around eight weeks of gestation, the gonads of an XY embryo differentiate into functional testes, secreting testosterone. Ovarian differentiation, for XX embryos, does not occur until approximately week 12 of gestation. In typical female differentiation, the Müllerian duct system develops into the uterus, fallopian tubes, and inner third of the vagina.
In males, the Müllerian duct-inhibiting hormone AMH causes this duct system to regress. Next, androgens cause the development of the Wolffian duct system, which develops into the vas deferens, seminal vesicles, and ejaculatory ducts.
By birth, the typical fetus has been completely sexed male or female, meaning that the genetic sex (XY-male or XX-female) corresponds with the phenotypical sex; that is to say, genetic sex corresponds with internal and external gonads, and external appearance of the genitals.
Signs
There are a variety of symptoms that can occur. Ambiguous genitalia is the most common sign. There can be micropenis, clitoromegaly, partial labial fusion, electrolyte abnormalities, delayed or absent puberty, unexpected changes at puberty, hypospadias, labial or inguinal (groin) masses (which may turn out to be testes) in girls and undescended testes (which may turn out to be ovaries) in boys.
Ambiguous genitalia
Ambiguous genitalia may appear as a large clitoris or as a small penis.
Because there is variation in all of the processes of the development of the sex organs, a child can be born with a sexual anatomy that is typically female or feminine in appearance with a larger-than-average clitoris (clitoral hypertrophy) or typically male or masculine in appearance with a smaller-than-average penis that is open along the underside. The appearance may be quite ambiguous, describable as female genitals (a vulva) with a very large clitoris and partially fused labia, or as male genitals with a very small penis, completely open along the midline ("hypospadic"), and empty scrotum. Fertility is variable.
Measurement systems for ambiguous genitalia
The orchidometer is a medical instrument to measure the volume of the testicles. It was developed by Swiss pediatric endocrinologist Andrea Prader. The Prader scale and Quigley scale are visual rating systems that measure genital appearance. These measurement systems were satirized in the Phall-O-Meter, created by the (now defunct) Intersex Society of North America.
Other signs
In order to help in classification, methods other than a genitalia inspection can be performed. For instance, a karyotype display of a tissue sample may determine which of the causes of intersex is prevalent in the case. Additionally, electrolyte tests, endoscopic exam, ultrasound and hormone stimulation tests can be done.
Causes
Intersex can be divided into four categories which are: 46, XX intersex; 46, XY intersex; true gonadal intersex; and complex or undetermined intersex.
46, XX intersex
This condition used to be called "female pseudohermaphroditism". Persons with this condition have female internal genitalia and karyotype (XX) and varying degrees of external genitalia virilization. The external genitalia are congenitally masculinized when a female fetus is exposed to an excess androgenic environment. Hence, the person has the chromosomes of a female and the ovaries of a female, but external genitals that appear male. The labia fuse, and the clitoris enlarges to appear like a penis. The causes of this can be male hormones taken during pregnancy, congenital adrenal hyperplasia, male-hormone-producing tumors in the mother, and aromatase deficiency.
46, XY intersex
This condition used to be called "male pseudohermaphroditism". This is defined as incomplete masculinization of the external genitalia. Thus, the person has male chromosomes, but the external genitals are incompletely formed, ambiguous, or clearly female. This condition is also called 46, XY with undervirilization. 46, XY intersex has many possible causes, which can be problems with the testes and testosterone formation. Also, there can be problems with using testosterone. Some people lack the enzyme needed to convert testosterone to dihydrotestosterone, which is a cause of 5-alpha-reductase deficiency. Androgen insensitivity syndrome is the most common cause of 46, XY intersex.
True gonadal intersex
This condition used to be called "true hermaphroditism". This is defined as having asymmetrical gonads with ovarian and testicular differentiation on either side separately or combined as an ovotestis. In most cases, the cause of this condition is unknown.
Complex or undetermined intersex
This is the condition of having any chromosome configuration other than 46, XX or 46, XY intersex. This condition does not result in an imbalance between internal and external genitalia. However, there may be problems with sex hormone levels, overall sexual development, and altered numbers of sex chromosomes.
Conditions
There are a variety of opinions on what conditions or traits are and are not intersex, dependent on the definition of intersex that is used. Current human rights based definitions stress a broad diversity of sex characteristics that differ from expectations for male or female bodies. During 2015, the Council of Europe, the European Union Agency for Fundamental Rights and Inter-American Commission on Human Rights have called for a review of medical classifications on the basis that they presently impede enjoyment of the right to health; the Council of Europe expressed concern that "the gap between the expectations of human rights organisations of intersex people and the development of medical classifications has possibly widened over the past decade."
Medical interventions
Rationales
Medical interventions take place to address physical health concerns and psychosocial risks. Both types of rationale are the subject of debate, particularly as the consequences of surgical (and many hormonal) interventions are lifelong and irreversible. Questions regarding physical health include accurately assessing risk levels, necessity, and timing. Psychosocial rationales are particularly susceptible to questions of necessity as they reflect social and cultural concerns.
There remains no clinical consensus about an evidence base, surgical timing, necessity, type of surgical intervention, and degree of difference warranting intervention. Such surgeries are the subject of significant contention due to consequences that include trauma, impact on sexual function and sensation, and violation of rights to physical and mental integrity. This includes community activism, and multiple reports by international human rights and health institutions and national ethics bodies.
In the cases where gonads may pose a cancer risk, as in some cases of androgen insensitivity syndrome, concern has been expressed that treatment rationales and decision-making regarding cancer risk may encapsulate decisions around a desire for surgical "normalization".
Types
Feminizing and masculinizing surgeries: Surgical procedures depend on the diagnosis, and there is often a concern as to whether surgery should be performed at all. Typically, surgery is performed shortly after birth. Defenders of the practice argue that individuals must be clearly identified as male or female for them to function socially and develop "normally". Psychosocial reasons are often stated. This is criticised by many human rights institutions, and authors. Unlike other aesthetic surgical procedures performed on infants, such as corrective surgery for a cleft lip, genital surgery may lead to negative consequences for sexual functioning in later life, or feelings of freakishness and unacceptability.
Hormone treatment: There is widespread evidence of prenatal testing and hormone treatment to prevent or eliminate intersex traits, associated also with the problematization of sexual orientation and gender non-conformity.
Psychosocial support: All stakeholders support psychosocial support. A joint international statement by participants at the Third International Intersex Forum in 2013 sought, among other demands: "Recognition that medicalization and stigmatisation of intersex people result in significant trauma and mental health concerns. In view of ensuring the bodily integrity and well-being of intersex people, autonomous non-pathologising psycho-social and peer support be available to intersex people throughout their life (as self-required), as well as to parents and/or care providers."
Genetic selection and terminations: The ethics of preimplantation genetic diagnosis to select against intersex traits was the subject of 11 papers in the October 2013 issue of the American Journal of Bioethics. There is widespread evidence of pregnancy terminations arising from prenatal testing, as well as prenatal hormone treatment to prevent intersex traits. Behrmann and Ravitsky find social concepts of sex, gender and sexual orientation to be "intertwined on many levels. Parental choice against intersex may thus conceal biases against same-sex attractedness and gender nonconformity."
Medical display. Photographs of intersex children's genitalia are circulated in medical communities for documentary purposes, and individuals with intersex traits may be subjected to repeated genital examinations and display to medical teams. Problems associated with experiences of medical photography of intersex children have been discussed along with their ethics, control and usage. "The experience of being photographed has exemplified for many people with intersex conditions the powerlessness and humiliation felt during medical investigations and interventions."
Gender dysphoria: The DSM-5 included a change from using gender identity disorder to gender dysphoria. This revised code now specifically includes intersex people who do not identify with their sex assigned at birth and experience clinically significant distress or impairment, using the language of disorders of sex development.
See also
Intersex Awareness Day
Intersex people and military service
Sexual differentiation in humans
Intersex healthcare
Gynandromorphism
Endosex
True hermaphroditism
Androgyny
46,XX/46,XY
Notes
References
Bibliography
External links
Intersex topics
Sex
Medical controversies | Intersex | Biology | 8,984 |
5,281,661 | https://en.wikipedia.org/wiki/Ponerosteus | Ponerosteus is a dubious genus of extinct archosauromorph from the Late Cretaceous (Cenomanian-aged) Korycany Formation of the Czech Republic that was initially identified as a species of the dinosaur Iguanodon.
The type, and currently only, species is P. exogyrarum.
Discovery and naming
The holotype, NAMU Ob 40, consisting solely of an internal cast of a tibia, was discovered near Holubice, Kralupy nad Vltavou, and was first identified as a dinosaur, which was named "Iguanodon exogirarum" (later "Iguanodon exogyrarum") by Antonín Frič in 1878. He later (1905) renamed it Procerosaurus, unaware that this name was already in use (von Huene, 1902) for what is now a synonym of Tanystropheus. NAMU Ob 40 was renamed Ponerosteus exogyrarum (species name amended) by George Olshevsky in 2000, and Olshevsky considered Ponerosteus to be a nomen dubium; the holotype has since been put on display at the National Museum in Prague.
The name Ponerosteus can be translated as "bad", "worthless", or "useless bone", which describes the fragmentary nature of the holotype.
Classification
Although initially identified as being a dinosaur belonging to the genus Iguanodon, Ponerosteus is currently classified within Archosauromorpha.
References
Prehistoric animals of Europe
Nomina dubia
Fossil taxa described in 2000
Taxa named by George Olshevsky | Ponerosteus | Biology | 342 |
3,938,286 | https://en.wikipedia.org/wiki/Mullard%205-10 | The Mullard 5-10 was a circuit for a valve amplifier designed in 1954 by the British valve company Mullard at the Mullard Applications Research Laboratory (ARL) in Mitcham, Surrey, UK, part of the New Road factory complex, to take advantage of their particular products. The circuit was first published in Practical Wireless magazine.
The amplifier featured five valves and an output of 10 watts - hence '5-10'. Of those valves, one was a full-wave rectifier (an EZ80 or EZ81), one was a pre-amplifier pentode EF86 and one a double-triode ECC83 as phase splitter. The power amplification was handled by a pair of EL84 working in push-pull configuration.
The frequency response of the circuit was from 40 Hz to 20,000 Hz with less than 0.2% THD.
In 1959 Mullard published its famous booklet "Mullard Circuits for Audio Amplifiers" covering a range of amplifier and pre-amplifier circuits using valves developed within the Receiving Valve Development Department at the New Road factory.
The circuit design of the Mullard 5-10, together with the recommended Partridge output transformers, was famous for its unique sound reproduction and many variations of this amplifier (including Mullard's own 20-watt version, the Mullard 5-20 using the EL34) were in widespread use until the end of the valve era; similar designs are still manufactured as expensive equipment for valve audiophiles.
External links
The original text from Practical Wireless magazine
Book extract by FJ Camm describing the Mullard 5-10
Valve amplifiers
Vacuum tubes | Mullard 5-10 | Physics | 348 |
10,355,154 | https://en.wikipedia.org/wiki/New%20Mexico%20statistical%20areas | The U.S. state of New Mexico currently has 19 statistical areas that have been delineated by the Office of Management and Budget (OMB). On July 21, 2023, the OMB delineated two combined statistical areas, four metropolitan statistical areas, and 13 micropolitan statistical areas in New Mexico. As of 2023, the largest of these is the Albuquerque-Santa Fe-Los Alamos, NM CSA, comprising the area around New Mexico's largest city of Albuquerque as well as its capital, Santa Fe.
Table
Primary statistical areas
Primary statistical areas (PSAs) include all combined statistical areas and any core-based statistical area that is not a constituent of a combined statistical area. Of the 19 statistical areas of New Mexico, 13 are PSAs comprising two combined statistical areas, one metropolitan statistical area and ten micropolitan statistical areas.
See also
Geography of New Mexico
Demographics of New Mexico
Notes
References
External links
Office of Management and Budget
United States Census Bureau
United States statistical areas
Statistical Areas Of New Mexico
1,313,639 | https://en.wikipedia.org/wiki/Skyrmion | In particle theory, the skyrmion () is a topologically stable field configuration of a certain class of non-linear sigma models. It was originally proposed as a model of the nucleon by (and named after) Tony Skyrme in 1961. As a topological soliton in the pion field, it has the remarkable property of being able to model, with reasonable accuracy, multiple low-energy properties of the nucleon, simply by fixing the nucleon radius. It has since found application in solid-state physics, as well as having ties to certain areas of string theory.
Skyrmions as topological objects are important in solid-state physics, especially in the emerging technology of spintronics. A two-dimensional magnetic skyrmion, as a topological object, is formed, e.g., from a 3D effective-spin "hedgehog" (in the field of micromagnetics: out of a so-called "Bloch point" singularity of homotopy degree +1) by a stereographic projection, whereby the positive north-pole spin is mapped onto a far-off edge circle of a 2D-disk, while the negative south-pole spin is mapped onto the center of the disk. In a spinor field such as for example photonic or polariton fluids the skyrmion topology corresponds to a full Poincaré beam (a spin vortex comprising all the states of polarization mapped by a stereographic projection of the Poincaré sphere to the real plane). A dynamical pseudospin skyrmion results from the stereographic projection of a rotating polariton Bloch sphere in the case of dynamical full Bloch beams.
Skyrmions have been reported, but not conclusively proven, to appear in Bose–Einstein condensates, thin magnetic films, and chiral nematic liquid crystals, as well as in free-space optics.
As a model of the nucleon, the topological stability of the skyrmion can be interpreted as a statement that the baryon number is conserved; i.e. that the proton does not decay. The Skyrme Lagrangian is essentially a one-parameter model of the nucleon. Fixing the parameter fixes the proton radius, and also fixes all other low-energy properties, which appear to be correct to about 30%, a significant level of predictive power.
Hollowed-out skyrmions form the basis for the chiral bag model (Cheshire Cat model) of the nucleon. The exact results for the duality between the fermion spectrum and the topological winding number of the non-linear sigma model have been obtained by Dan Freed. This can be interpreted as a foundation for the duality between a quantum chromodynamics (QCD) description of the nucleon (but consisting only of quarks, and without gluons) and the Skyrme model for the nucleon.
The skyrmion can be quantized to form a quantum superposition of baryons and resonance states. It could be predicted from some nuclear matter properties.
Topological soliton
In field theory, skyrmions are homotopically non-trivial classical solutions of a nonlinear sigma model with a non-trivial target manifold topology – hence, they are topological solitons. An example occurs in chiral models of mesons, where the target manifold is a homogeneous space of the structure group

$$\frac{\mathrm{SU}(N)_L \times \mathrm{SU}(N)_R}{\mathrm{SU}(N)_{\mathrm{diag}}}$$

where SU(N)L and SU(N)R are the left and right chiral symmetries, and SU(N)diag is the diagonal subgroup.
If spacetime has the topology S3×R, then classical configurations can be classified by an integral winding number because the third homotopy group

$$\pi_3\!\left(\frac{\mathrm{SU}(N)_L \times \mathrm{SU}(N)_R}{\mathrm{SU}(N)_{\mathrm{diag}}} \cong \mathrm{SU}(N)\right) = \mathbb{Z}$$

is equivalent to the ring of integers, with the congruence sign referring to homeomorphism.
A topological term can be added to the chiral Lagrangian, whose integral depends only upon the homotopy class; this results in superselection sectors in the quantised model. In (1 + 1)-dimensional spacetime, a skyrmion can be approximated by a soliton of the Sine–Gordon equation; after quantisation by the Bethe ansatz or otherwise, it turns into a fermion interacting according to the massive Thirring model.
Lagrangian
The Lagrangian for the skyrmion, as written for the original chiral SU(2) effective Lagrangian of the nucleon-nucleon interaction (in (3 + 1)-dimensional spacetime), can be written as

$$\mathcal{L} = -\frac{f_\pi^2}{4}\operatorname{tr}\left(L_\mu L^\mu\right) + \frac{1}{32 g^2}\operatorname{tr}\left([L_\mu, L_\nu]\,[L^\mu, L^\nu]\right)$$

where $L_\mu = U^\dagger \partial_\mu U$, $U = \exp\!\left(i\vec{\tau}\cdot\vec{\theta}\right)$, $\vec{\tau}$ are the isospin Pauli matrices, $[\,\cdot\,,\,\cdot\,]$ is the Lie bracket commutator, and tr is the matrix trace. The meson field (pion field, up to a dimensional factor) at spacetime coordinate $x$ is given by $\vec{\theta} = \vec{\theta}(x)$. A broad review of the geometric interpretation of $L_\mu$ is presented in the article on sigma models.

When written this way, the $U$ is clearly an element of the Lie group SU(2), and $\vec{\tau}\cdot\vec{\theta}$ an element of the Lie algebra su(2). The pion field can be understood abstractly to be a section of the tangent bundle of the principal fiber bundle of SU(2) over spacetime. This abstract interpretation is characteristic of all non-linear sigma models.
The first term, $\operatorname{tr}\left(L_\mu L^\mu\right)$, is just an unusual way of writing the quadratic term of the non-linear sigma model; it reduces to $\operatorname{tr}\left(\partial_\mu U^\dagger\, \partial^\mu U\right)$. When used as a model of the nucleon, one writes

$$U = \frac{1}{f_\pi}\left(\sigma + i\vec{\tau}\cdot\vec{\pi}\right)$$

with the dimensional factor of $f_\pi$ being the pion decay constant. (In 1 + 1 dimensions, this constant is not dimensional and can thus be absorbed into the field definition.)

The second term establishes the characteristic size of the lowest-energy soliton solution; it determines the effective radius of the soliton. As a model of the nucleon, it is normally adjusted so as to give the correct radius for the proton; once this is done, other low-energy properties of the nucleon are automatically fixed, to within about 30% accuracy. It is this result, of tying together what would otherwise be independent parameters, and doing so fairly accurately, that makes the Skyrme model of the nucleon so appealing and interesting. Thus, for example, the constant $g$ in the quartic term is interpreted as the vector-pion coupling ρ–π–π between the rho meson (the nuclear vector meson) and the pion; the skyrmion relates the value of this constant to the baryon radius.
Topological charge or winding number
The local winding number density (or topological charge density) is given by

$$B^\mu = \frac{1}{24\pi^2}\,\epsilon^{\mu\nu\alpha\beta}\operatorname{tr}\left(L_\nu L_\alpha L_\beta\right)$$

where $\epsilon^{\mu\nu\alpha\beta}$ is the totally antisymmetric Levi-Civita symbol (equivalently, the Hodge star, in this context).

As a physical quantity, this can be interpreted as the baryon current; it is conserved: $\partial_\mu B^\mu = 0$, and the conservation follows as a Noether current for the chiral symmetry.
The corresponding charge is the baryon number:

$$B = \int d^3x\, B^0(x)$$

which is conserved for topological reasons and is always an integer. For this reason, it is associated with the baryon number of the nucleus.

As a conserved charge, it is time-independent: $\dot{B} = 0$, the physical interpretation of which is that protons do not decay.
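The integer quantization of this charge can be illustrated numerically. For the standard spherically symmetric "hedgehog" ansatz, in which the pion field points radially and is characterized by a radial profile function F(r) with F(0) = π and F(∞) = 0, the charge reduces to the one-dimensional integral B = −(2/π) ∫ sin²F · F′ dr, which evaluates to 1 for any such profile. The short Python sketch below checks this for one arbitrary, illustrative profile; it is a toy demonstration, not a calculation drawn from the references.

```python
import numpy as np

def baryon_number(profile, r):
    """B = -(2/pi) * integral of sin(F)^2 * dF/dr over r, for a hedgehog ansatz."""
    F = profile(r)
    dF_dr = np.gradient(F, r)
    integrand = np.sin(F) ** 2 * dF_dr
    # simple trapezoidal rule
    return -(2.0 / np.pi) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

# An arbitrary smooth profile with F(0) = pi and F(r) -> 0 as r -> infinity.
profile = lambda r: 2.0 * np.arctan(1.0 / r**2)

r = np.linspace(1e-4, 50.0, 200001)
print(f"B = {baryon_number(profile, r):.4f}")  # prints a value very close to 1
```

Because the result depends only on the boundary values F(0) and F(∞), smoothly deforming the profile leaves B unchanged, which is the numerical face of the topological stability discussed above.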
In the chiral bag model, one cuts a hole out of the center and fills it with quarks. Despite this obvious "hackery", the total baryon number is conserved: the missing charge from the hole is exactly compensated by the spectral asymmetry of the vacuum fermions inside the bag.
Magnetic materials/data storage
One particular form of skyrmions is magnetic skyrmions, found in magnetic materials that exhibit spiral magnetism due to the Dzyaloshinskii–Moriya interaction, double-exchange mechanism or competing Heisenberg exchange interactions. They form "domains" as small as 1 nm (e.g. in Fe on Ir(111)). The small size and low energy consumption of magnetic skyrmions make them a good candidate for future data-storage solutions and other spintronics devices.
Researchers could read and write skyrmions using scanning tunneling microscopy. The topological charge, representing the existence and non-existence of skyrmions, can represent the bit states "1" and "0". Room-temperature skyrmions were reported.
Skyrmions operate at current densities that are several orders of magnitude weaker than conventional magnetic devices. In 2015 a practical way to create and access magnetic skyrmions under ambient room-temperature conditions was announced. The device used arrays of magnetized cobalt disks as artificial Bloch skyrmion lattices atop a thin film of cobalt and palladium. Asymmetric magnetic nanodots were patterned with controlled circularity on an underlayer with perpendicular magnetic anisotropy (PMA). Polarity is controlled by a tailored magnetic-field sequence and demonstrated in magnetometry measurements. The vortex structure is imprinted into the underlayer's interfacial region by suppressing the PMA by a critical ion-irradiation step. The lattices are identified with polarized neutron reflectometry and have been confirmed by magnetoresistance measurements.
A recent (2019) study demonstrated a way to move skyrmions, purely using electric field (in the absence of electric current). The authors used Co/Ni multilayers with a thickness slope and Dzyaloshinskii–Moriya interaction and demonstrated skyrmions. They showed that the displacement and velocity depended directly on the applied voltage.
In 2020, a team of researchers from the Swiss Federal Laboratories for Materials Science and Technology (Empa) has succeeded for the first time in producing a tunable multilayer system in which two different types of skyrmions – the future bits for "0" and "1" – can exist at room temperature.
See also
Hopfion, 3D counterpart of skyrmions
References
Further reading
Developments in Magnetic Skyrmions Come in Bunches, IEEE Spectrum 2015 web article
Hypothetical particles
Quantum chromodynamics | Skyrmion | Physics | 2,130 |
420,524 | https://en.wikipedia.org/wiki/Bottleneck%20traveling%20salesman%20problem | The Bottleneck traveling salesman problem (bottleneck TSP) is a problem in discrete or combinatorial optimization. The problem is to find the Hamiltonian cycle (visiting each node exactly once) in a weighted graph which minimizes the weight of the highest-weight edge of the cycle. It was first formulated with some additional constraints, and later in its full generality.
Complexity
The problem is known to be NP-hard. The decision problem version of this, "for a given length is there a Hamiltonian cycle in a graph with no edge longer than ?", is NP-complete. NP-completeness follows immediately by a reduction from the problem of finding a Hamiltonian cycle.
Algorithms
Another reduction, from the bottleneck TSP to the usual TSP (where the goal is to minimize the sum of edge lengths), allows any algorithm for the usual TSP to also be used to solve the bottleneck TSP.
If the edge weights of the bottleneck TSP are replaced by any other numbers that have the same relative order, then the bottleneck solution remains unchanged.
If, in addition, each number in the sequence exceeds the sum of all smaller numbers, then the bottleneck solution will also equal the usual TSP solution.
For instance, such a result may be attained by resetting each weight to where is the number of vertices in the graph and is the rank of the original weight of the edge in the sorted sequence of weights. For instance, following this transformation, the Held–Karp algorithm could be used to solve the bottleneck TSP in time .
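The rank-based re-weighting just described can be sketched as follows; the choice of 2 raised to the per-edge rank is simply one convenient way to make every transformed weight exceed the sum of all smaller ones, and is not necessarily the exact formula the article has in mind.

```python
def bottleneck_to_sum_weights(weights):
    """weights: dict mapping an edge to its weight.  Returns transformed weights
    with the same relative order, in which every value exceeds the sum of all
    smaller values, so a minimum-sum tour is also a minimum-bottleneck tour.
    Edges receive distinct ranks (ties broken arbitrarily); 2**rank is one
    convenient choice of growth function."""
    ranked = sorted(weights, key=weights.get)          # edges ordered by weight
    return {e: 2 ** i for i, e in enumerate(ranked)}
```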
Alternatively, the problem can be solved by performing a binary search or sequential search for the smallest such that the subgraph of edges of weight at most has a Hamiltonian cycle. This method leads to solutions whose running time is only a logarithmic factor larger than the time to find a Hamiltonian cycle.
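A small sketch of this threshold approach is shown below. The brute-force Hamiltonicity test is only a stand-in for whatever Hamiltonian-cycle algorithm one prefers (it is practical only for very small graphs), and the function names are ours.

```python
from itertools import permutations

def has_hamiltonian_cycle(n, allowed_edges):
    """Brute-force test on vertices 0..n-1; allowed_edges is a set of frozenset pairs."""
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm + (0,)
        if all(frozenset((cycle[i], cycle[i + 1])) in allowed_edges for i in range(n)):
            return True
    return False

def bottleneck_tsp_threshold(n, weights):
    """Binary-search the smallest threshold t such that the edges of weight <= t
    contain a Hamiltonian cycle; weights maps frozenset({u, v}) -> weight."""
    values = sorted(set(weights.values()))
    lo, hi, best = 0, len(values) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        allowed = {e for e, w in weights.items() if w <= values[mid]}
        if has_hamiltonian_cycle(n, allowed):
            best, hi = values[mid], mid - 1
        else:
            lo = mid + 1
    return best

# Example: complete graph on 4 vertices; the optimal bottleneck value is 5.
w = {frozenset(e): c for e, c in {(0, 1): 3, (0, 2): 9, (0, 3): 4,
                                  (1, 2): 5, (1, 3): 8, (2, 3): 2}.items()}
print(bottleneck_tsp_threshold(4, w))   # 5
```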
Variations
In an asymmetric bottleneck TSP, there are cases where the weight from node A to B is different from the weight from B to A (e.g. travel time between two cities with a traffic jam in one direction).
The Euclidean bottleneck TSP, or planar bottleneck TSP, is the bottleneck TSP with the distance being the ordinary Euclidean distance. The problem remains NP-hard. However, many heuristics work better for it than for other distance functions.
The maximum scatter traveling salesman problem is another variation of the traveling salesman problem in which the goal is to find a Hamiltonian cycle that maximizes the minimum edge length rather than minimizing the maximum length. Its applications include the analysis of medical images, and the scheduling of metalworking steps in aircraft manufacture to avoid heat buildup from steps that are nearby in both time and space. It can be translated into an instance of the bottleneck TSP problem by negating all edge lengths (or, to keep the results positive, subtracting them all from a large enough constant). However, although this transformation preserves the optimal solution, it does not preserve the quality of approximations to that solution.
Metric approximation algorithm
If the graph is a metric space then there is an efficient approximation algorithm that finds a Hamiltonian cycle with maximum edge weight being no more than twice the optimum.
This result follows from Fleischner's theorem, that the square of a 2-vertex-connected graph always contains a Hamiltonian cycle. It is easy to find a threshold value $\theta$, the smallest value such that the edges of weight at most $\theta$ form a 2-connected graph. Then $\theta$ provides a valid lower bound on the bottleneck TSP weight, for the optimal bottleneck tour is itself a 2-connected graph and necessarily contains an edge of weight at least $\theta$. However, the square of the subgraph of edges of weight at most $\theta$ is Hamiltonian. By the triangle inequality for metric spaces, its Hamiltonian cycle has edges of weight at most $2\theta$.
This approximation ratio is best possible. For, any unweighted graph can be transformed into a metric space by setting its edge weights to $1$ and setting the distance between all nonadjacent pairs of vertices to $2$. An approximation with ratio better than $2$ in this metric space could be used to determine whether the original graph contains a Hamiltonian cycle, an NP-complete problem.
Without the assumption that the input is a metric space, no finite approximation ratio is possible.
See also
Travelling salesman problem
References
Combinatorial optimization
Graph algorithms
Hamiltonian paths and cycles
NP-complete problems | Bottleneck traveling salesman problem | Mathematics | 906 |
23,644,361 | https://en.wikipedia.org/wiki/David%20B.%20A.%20Epstein | David Bernard Alper Epstein (born 1937) is a mathematician known for his work in hyperbolic geometry, 3-manifolds, and group theory, amongst other fields. He co-founded the University of Warwick mathematics department with Christopher Zeeman and is founding editor of the journal Experimental Mathematics.
Higher education and early career
In 1954, Epstein came to the UK after completing his bachelor's degree in mathematics in South Africa. Having received the exemption for Mathematical Tripos part I at the University of Cambridge, he completed Mathematical Tripos part II in 1955 and Mathematical Tripos part III in 1957. He completed his Ph.D. on the topic of three-dimensional manifolds under the supervision of Christopher Zeeman in 1960. He then travelled to Princeton University, where he spent one year attending the lectures of Norman Steenrod on cohomology operations, making notes and revisions to them, later published as a book by the Princeton University Press in 1962.
In 1961, Epstein moved to the Institute for Advanced Study (IAS) in Princeton, New Jersey. He returned to the UK in 1962 to become a research fellow of the newly founded Churchill College, Cambridge. In 1964, he moved to the Mathematics Institute of the University of Warwick to take up a Readership position there. He was the first academic at the University of Warwick to move into local accommodation, though many professors were appointed before him.
Awards and honours
Epstein was awarded the Senior Berwick Prize by the London Mathematical Society in 1988.
In 2004 he was elected a Fellow of the Royal Society. In 2012 he became a fellow of the American Mathematical Society.
Personal life
David Epstein was born in 1937 in Pretoria, South Africa to Ben Epstein and Pauline (or Polly) Alper, both Jewish of Lithuanian descent, though Polly was born in South Africa. David finished school at the age of 14, and graduated from the University of the Witwatersrand at the age of 17. He then won a scholarship to the University of Cambridge, where he did Parts II and III of the Mathematical Tripos, graduating in 1957. He married Rona in 1958, after dating her from when he was 16 and she was 14. He did a Ph.D. in Cambridge under Christopher Zeeman, which he completed at the age of 23 in 1960, when he was awarded a Research Fellowship at Trinity College, Cambridge, which he never took up.
After completing his Ph.D., Epstein went to Princeton University for one year, and then to the Institute for Advanced Study in Princeton, New Jersey for another year. He returned to Cambridge in 1962, where he was an assistant lecturer at the university and director of studies at the new Churchill College. In 1963 his younger sister Debbie left South Africa when she was considered to be in danger of arrest by the South African apartheid regime. At this stage, his father Ben was also having severe problems with the South African regime as a result of his ethical stand as a doctor. For example, he was instructed by the hospital administration to stop putting "starvation" as the cause of death on the death certificates of black children, an instruction that he refused to follow. His mother Polly was also active politically against the government. Polly and Ben at first wanted to emigrate to the United States, but they were denied visas, so they emigrated instead to the UK.
Selected publications
D.B.A. Epstein, Projective planes in 3-manifolds. Proceedings of the London Mathematical Society (3) 11 1961 469–484.
D.B.A. Epstein and R.L.E. Schwarzenberger, Imbeddings of real projective spaces. Annals of Mathematics (2) 76 1962 180–184.
D.B.A. Epstein, Steenrod operations in homological algebra. Inventiones Mathematicae 1 1966 152–208.
D.B.A. Epstein, Periodic flows on three-manifolds. Annals of Mathematics (2) 95 1972 66–82.
D.B.A. Epstein and E. Vogt, A counterexample to the periodic orbit conjecture in codimension 3. Annals of Mathematics (2) 108 (1978), no. 3, 539–552.
D.B.A. Epstein and A. Marden, Convex hulls in hyperbolic space, a theorem of Sullivan, and measured pleated surfaces. Analytical and geometric aspects of hyperbolic space (Coventry/Durham, 1984), 113–253, London Math. Soc. Lecture Note Ser., 111, Cambridge University Press, Cambridge, 1987.
D.B.A. Epstein and R.C. Penner, Euclidean decompositions of noncompact hyperbolic manifolds. Journal of Differential Geometry 27 (1988), no. 1, 67–80.
Epstein, David B. A.; Cannon, James W.; Holt, Derek F.; Levy, Silvio V. F.; Paterson, Michael S.; Thurston, William P. Word Processing in Groups. Jones and Bartlett Publishers, Boston, MA, 1992. xii+330 pp.
References
1937 births
Living people
20th-century British mathematicians
21st-century British mathematicians
Topologists
Academics of the University of Warwick
Fellows of the Royal Society
Fellows of the American Mathematical Society
Alumni of the University of Cambridge
South African people of Lithuanian-Jewish descent
People from Pretoria | David B. A. Epstein | Mathematics | 1,082 |
24,144,121 | https://en.wikipedia.org/wiki/C23H36O2 |
The molecular formula C23H36O2 (molar mass: 344.53 g/mol, exact mass: 344.2715 u) may refer to:
Cardanolide
Dimepregnen, or 6α,16α-dimethylpregn-4-en-3β-ol-20-one
Hexahydrocannabiphorol
Luteone (terpenoid) | C23H36O2 | Chemistry | 102 |
2,622,542 | https://en.wikipedia.org/wiki/Radisys | Radisys Corporation is an American technology company located in Hillsboro, Oregon, United States, that makes technology used by telecommunications companies in mobile networks. Founded in 1987 in Oregon by former employees of Intel, the company went public in 1995. The company's products are used in mobile network applications such as small cell radio access networks, wireless core network elements, deep packet inspection and policy management equipment; conferencing, and media services including voice, video and data. In 2015, the first-quarter revenues of Radisys totaled $48.7 million, and the company employed approximately 700 people. Arun Bhikshesvaran is the company's chief executive officer.
On 30 June 2018, multinational conglomerate Reliance Industries acquired Radisys for $74 million.
It now operates as an independent subsidiary.
History
Radisys was founded in 1987 as Radix Microsystems in Beaverton, Oregon, by former Intel engineers Dave Budde and Glen Myers. The first investors were employees who put up $50,000 each, with Tektronix later investing additional funds into the company. Originally located in space leased from Sequent Computer Systems, by 1994 the company had grown to annual sales of $20 million. The company's products were computers used in end products such as automated teller machines to paint mixers. On October 20, 1995, the company became a publicly traded company when it held an initial public offering (IPO). The IPO raised $19.6 million for Radisys after selling 2.7 million shares at $12 per share.
In 1996, the company moved its headquarters to a new campus in Hillsboro; that year sales reached $80 million, the company posted a profit of $9.6 million, and it had 175 employees. Company co-founder Dave Budde left the company in 1997, when company revenues stood at $81 million annually. The company grew in part through acquisitions, such as Sonitech International in 1997, part of IBM's Open Computing Platform unit and Texas Micro in 1999, all of S-Link in 2001, and Microware, also in 2001. Radisys also moved some production to China in order to take advantage of lower manufacturing costs.
In 2002, the company had grown to annual revenues of $200 million, and posted a profit in the fourth quarter for the first time in several quarters. That year Scott Grout was named as chief executive officer of the company and C. Scott Gibson became the chairman of the board, both replacing Glen Myers who co-founded the company. The company sold off its signaling gateway line in 2003.
They raised $97 million through selling convertible senior notes in November 2003. In 2004, the company stopped granting stock options to employees and transitioned to giving restricted shares for some compensation. Radisys grew to annual revenues of $320 million by 2005. The company continued to grow through acquisitions such as a $105 million deal that added Convedia Corp. in 2006. Radisys continued buying assets when it purchased part of Intel's communications business for about $30 million in 2007. After five-straight quarterly losses, the company posted a profit of $481,000 in their 2009 fourth quarter.
In May 2011, the company announced they were buying Continuous Computing for $105 million in stock and cash. Once the transaction was completed in July 2011, Continuous' CEO Mike Dagenais became the CEO of Radisys. Dagenais left the company in October 2012 with former CFO Brian Bronson taking over as CEO. In 2018, Reliance Industries acquired Radisys. Arun Bhikshesvaran took over as CEO in July 2019.
Products
Radisys supports two markets: communications networking and commercial systems. The latter covers products for use in the testing, medical imaging, defense, and industrial automation fields. For example, end products for which Radisys acts as a supplier to original equipment manufacturers include MRI scanners, ultrasound equipment, logic analyzers, and items used in semiconductor manufacturing. Communications networking products include those for wireless communications, switching, video distribution, and internet-protocol-based networking.
The company has engineering groups working on open telecom architectures, computer architecture, and systems integration. In 2009, Radisys' biggest customers were Philips Healthcare, Agilent, Fujitsu, Danaher Corporation, and Nokia Siemens Networks (NSN). NSN was the largest single customer, accounting for over 43% of revenues.
See also
RMX (operating system)
Silicon Forest
TenAsys
List of companies based in Oregon
References
External links
Computer companies of the United States
Embedded systems
Companies based in Hillsboro, Oregon
Manufacturing companies based in Oregon
Computer companies established in 1987
1987 establishments in Oregon
Companies formerly listed on the Nasdaq
Reliance Industries subsidiaries
1995 initial public offerings
2018 mergers and acquisitions
American subsidiaries of foreign companies
Computer hardware companies
Networking hardware companies
Jio | Radisys | Technology,Engineering | 989 |
13,067,121 | https://en.wikipedia.org/wiki/AES64 | The AES coarse-groove calibration discs (AES-S001-064) are a boxed set of two identical discs, one for routine use, one for master reference. The intent is to characterize the reproduction chain for the mass transfer of coarse-groove records to digital media, much like using a photographic calibration reference in image work.
Libraries and archives around the world have collections of many thousands of coarse-groove mechanical audio recordings, phonograph or gramophone records, largely 78s or 78 revolutions per minute (rpm) discs. This is a substantial recorded heritage of mankind's music and spoken word made over a period of 65 years. The 78 rpm disc was largely out of production by 1960. These mechanical recordings will not be available indefinitely, since the plastics used in their manufacture are deteriorating slowly but steadily. Preservation programs have been undertaken by a number of organizations. Decreasing costs of digital storage media now make it possible to consider all mechanical audio recordings for transfer to the digital domain. Thus a widespread need was recognized by the Audio Engineering Society (AES) to provide a calibration tool for standard transfer of mechanical coarse-groove audio recordings from the analog to the digital domain.
Specifications
Side A:
Gliding tone, 20 Hz to 20 kHz
Speed: 77.92 rpm
Lateral (mono) coarse groove
Time constants: 3180/450/0 µs
Separate outer & inner bands:
1 kHz trigger tone
Gliding tone, 20 Hz to 20 kHz
1 kHz reference level*
*20 mm Light Band Width (LBW);
approx 8 cm/s peak-to-peak, 5.7 cm/s rms
Side B:
Single tones, 18 kHz to 30 Hz
Speed: 77.92 rpm
Lateral (mono) coarse groove
Time constants: 3180/450/50 µs
(Pressed under license from EMI Records Ltd.)
A Closer Look At The Preservation Problem
According to Ted Kendall, maker of the Front End audio restoration unit also known as "The Mousetrap", the equalization time constants for post-1955 78s used in the Front End are 3180/450/50 µs. These time constants are identical to those used in the AES Coarse-groove Calibration Discs. Since the 78 rpm record was largely obsolete by 1960, there is a very large population of pre-1955 78s requiring different equalization settings depending on the vintage and label of the disc.
Type of Recording: Equalizer Settings
Acoustic recordings (pre-1925): Flat/AC/AC
FFRR 78s: Flat/636/25
EMI 78s 1945-1955: Flat/636/Flat
Most other UK 78s 1925-1945: Flat/531/Flat
Post-1955 78s: 3180/450/50
BBC direct recordings 1945-1960: Flat/BBC/BBC
CCIR standard coarse-groove transcriptions: Flat/450/50
AES (some early US Lps): Flat/400/63.6
Modern LPs (RIAA equalization): 3180/318/75
Lateral cut NAB transcriptions: 2250/250/100
Vertical cut NAB transcriptions: Flat/531/40*
Western Electric 78s: Flat/531/Flat*
Adjustments needed*
So, the dilemma is this: should coarse-groove recordings be transferred en masse to digital using an arbitrary phonoequalization curve such as with the AES calibration discs, or should each recording be matched to the curve appropriate to its vintage and label, then transferred to digital media?
Use of RIAA Equalization
Because the RIAA equalization standard has been in use internationally for phonograph records since 1953 and is based on recording practices used for many years by RCA Victor, a dominant record producer, the electronics needed for this purpose are as readily available as record players are. For vintage recordings, the Esoteric Sound Re-Equalizer can readily be connected as a standard item to the record playback equipment. The Re-Equalizer is used to modify the RIAA playback curve. Then, depending on the vintage and label of the 78 rpm record, the appropriate equalizer bass turnover and treble rolloff settings can be easily looked up in a reference guide.
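The time constants quoted in this article and the turnover/rolloff frequencies marked on equalizer controls are related by f = 1/(2*pi*tau). A small conversion sketch, using the RIAA constants as the worked example:

```python
import math

def turnover_frequency_hz(time_constant_us):
    """Convert an equalization time constant in microseconds to its turnover
    (or rolloff) frequency in hertz: f = 1 / (2 * pi * tau)."""
    return 1.0 / (2.0 * math.pi * time_constant_us * 1e-6)

# RIAA time constants 3180/318/75 us correspond to roughly 50 Hz, 500 Hz and 2122 Hz.
for tau_us in (3180, 318, 75):
    print(tau_us, "us ->", round(turnover_frequency_hz(tau_us), 1), "Hz")
```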
Other Equipment
Another approach to obtaining the right phonograph record equalization settings for transferring vintage recordings to digital media is to use the Chronologic Equalizer in the Souvenir Vintage Sound Processor – MK-2 made by K-A-B Electronics.
Equalizer Setting
AC: Acoustic recordings
AE: Early electric recordings; Victor (some 1925), Columbia (1925), and most European to 1955
E3: Recordings with a 300 Hz turnover; Columbia (1925-1938), and FFRR to 1955
E5: Recordings with a 500 Hz turnover; Victor (most 1925-1952)
E7: Recordings with a 700 Hz turnover (some NBC Orthacoustic transcriptions)
CO: Columbia 78 curve (1938 to 1955)
TR: Transcriptions (NAB)
MO: RIAA equalization
Notes
Audio engineering
Broadcast engineering
Sound technology
Audio storage | AES64 | Engineering | 1,028 |
701,289 | https://en.wikipedia.org/wiki/Cathartic | In medicine, a cathartic is a substance that accelerates defecation. This is similar to a laxative, which is a substance that eases defecation, usually by softening feces. It is possible for a substance to be both a laxative and a cathartic. However, agents such as psyllium seed husks increase the bulk of the feces.
Cathartics such as sorbitol, magnesium citrate, magnesium sulfate, or sodium sulfate were previously used as a form of gastrointestinal decontamination following poisoning via ingestion. They are no longer routinely recommended for poisonings. High-dose cathartics may be an effective means of ridding the lower gastrointestinal tract of toxins; however, they carry a risk of electrolyte imbalances and dehydration. Catharsis can be an effect of pesticide poisonings, such as with elemental sulfur.
References
Toxicology treatments | Cathartic | Environmental_science | 203 |
13,925,642 | https://en.wikipedia.org/wiki/Atlas%20Carver | The Atlas Carver (sometimes erroneously referred to as "CAVA") was a proposed South African twin-engine, delta wing fourth-generation fighter aircraft. In development during the 1980s and early 1990s, the Carver was ultimately cancelled during 1991.
The South African Border War played a considerable role in stimulating the demand for a modern fighter aircraft with which to equip the South African Air Force (SAAF) in the face of increasingly capable opposition. Additionally, South Africa was incapable of importing such aircraft due to a long-standing arms embargo placed upon the nation's government bodies by United Nations Security Council Resolution 418. The South African government decided to launch a pair of domestically conducted programmes: a short-term upgrade of the existing fleet of French-built Dassault Mirage III fighters, which became known as the Atlas Cheetah, and a longer-term, more extensive effort to design and manufacture a virtually clean-sheet fighter aircraft, known as Project Carver. Both programmes were headed by South African firm Atlas Aircraft Corporation.
As envisioned, Carver was intended to be a modern and capable successor aircraft to replace multiple, ageing types then in service with the SAAF, such as the British-built Blackburn Buccaneer, French-built Mirage IIIs, and the Atlas Cheetahs. A key objective for the new fighter was to achieve performance levels that were either equal to or in excess of the capabilities of late-generation Soviet fighters, which were increasingly likely to be deployed to neighbouring states, specifically Angola. Externally, the Carver bore some resemblance to the Dassault Mirage 4000 prototype; the design drew upon several elements of Dassault Aviation's Mirage family, including the decision to incorporate a number of Snecma engine components which were produced under licence in South Africa. While being a heavily indigenous effort, partially due to the embargo, South Africa was able to acquire substantial assistance on Project Carver from both France and Israel; many aerospace engineers and designers were hired from these nations, while technical information related to Israel's aborted IAI Lavi programme was also acquired under the Israel–South Africa Agreement. During February 1991, the cancellation of Project Carver was announced by South African President F. W. de Klerk, who stated that the programme's research and development costs were too great to justify during peacetime. In its place, the government preferred acquisition of foreign aircraft which had become possible again after the lifting of the international arms embargo against South Africa; ultimately, the Swedish-built Saab JAS 39 Gripen fighter was procured to equip the SAAF with instead.
Development
Background
Throughout much of the 1970s and 1980s, the development of South Africa's military equipment, including the assets of the South African Air Force (SAAF), became increasingly influenced by the changes in fortune and demands imposed by the lengthy South African Border War. Having originally been started as a limited-scale counter-insurgency campaign, it progressively escalated into a larger conflict that was being waged across areas of both South-West Africa (modern-day Namibia) and southern Angola against militants of the Communist-leaning South-West African People's Organisation (SWAPO). As South African forces came to frequently launch raids into neighbouring Angola, these attacks often provoked armed clashes with the members of the People's Armed Forces of Liberation of Angola (FAPLA), which was at that time bolstered by the provision of Soviet armaments alongside a sizeable contingent of Cuban troops dispatched to intervene in the theatre. During November 1985, FAPLA began acquiring more sophisticated combat aircraft and radar installations; gradually, the addition of these improved assets enabled air superiority over southern Angola to be seized from South Africa's expeditionary forces, rendering offensive operations more risky to conduct and increasing the likelihood of losses.
In response to the changing situation in Angola, South Africa sought to regain air superiority in the theatre by enacting several improvements of its own. As a short-term measure, it was decided to upgrade the majority of the SAAF's existing Dassault Mirage III fighter aircraft, equipping them with a range of new armaments, equipment, and avionics which were designed to allow the aircraft to operate while being less vulnerable to both Soviet-designed missiles and radar. However, these modified Mirages, which were known as Atlas Cheetahs, were considered only an interim solution until an entirely new multirole fighter could be deployed. As a consequence of the imposing of a mandatory arms embargo upon the South African government by United Nations Security Council Resolution 418, the means of acquiring such combat aircraft were limited; any new aircraft, along with its associated systems and support equipment, would either have to be sourced domestically or assembled using components that had been imported or licensed prior to the enactment of the embargo.
Project launch
Accordingly, South Africa embarked on a comprehensive development programme to design and eventually manufacture an envisioned modern-generation fighter aircraft to meet its requirement; this programme soon received the codename of Project Carver. The programme was organised as a joint-effort between the SAAF, Armscor subsidiary Atlas Aircraft Corporation and the National Research Laboratory. The objective of the programme was the replacement of all multirole fighters then in service with the SAAF from the mid-1990s onwards. The associated development costs were high, partially as a result of the effects of the arms embargo, which necessitated the development of all the new technology within South Africa. At that point in time, the country's aerospace industry lacked any prior experience with the production of anything more intricate than various models of helicopters and light trainer aircraft; thus, it was decided to recruit large numbers of foreign aircraft engineers from around the world, including some French nationals who had previously spearheaded design work on France's then-new fighter aircraft, the Dassault Mirage 2000.
Project Carver also received extensive support from the government of Israel. This assistance was provided in various forms, such as technical assistance and hundreds of skilled experts from Israel's cancelled IAI Lavi domestic fighter programme. Reportedly, various incentives were offered by Atlas to Israeli engineers, including starting salaries of US$7,000 per month paid in any currency, free accommodation and regular free or heavily discounted flights to Israel, to encourage them to join Project Carver. Overall, it was estimated that the programme would require in excess of 4,000 engineers at the peak of the development phase, which was scheduled to run for roughly six years.
Headed by ex-Dassault Aviation designer David Fabish, work commenced on the initial design phase, during which various concepts were explored for the aircraft. By 1986, Atlas had selected a design for a lightweight single-engined aircraft, being in length and possessing a wingspan of roughly , a single vertical stabiliser and a mid-mounted delta wing furnished with leading-edge root extensions (LERX) set above side-mounted curved air intakes for the engine. The concept called for composite materials to be used throughout the airframe, for reducing both the weight and the radar cross-section of the aircraft. Additionally, the aircraft was planned to be able to utilise all of the weapons then in the SAAF's arsenal or in development at that time; these munitions included the H-2 guided bomb, V3C and U-Darter short-range guided missiles and the then-planned R-Darter beyond visual range (BVR) air-to-air missile; the total payload capacity was intended to have been comparable to that of the Mirage 2000.
Delays and design switch
The Carver programme was beset by numerous delays, often resulting from changes to the aircraft's tactical requirements, as well as the necessity to design the aircraft around a preexisting engine type, namely the Snecma Atar 09K50; along with other design requirements, such as the need to equal the Buccaneer in terms of both range and load-carrying capability. Recognising the age of the Atar engine, South Africa made several covert attempts to acquire more modern jet engines, such as the Snecma M53 (which powered the Dassault Mirage 2000) and Snecma M88s (as used by the then-upcoming Dassault Rafale and the planned Yugoslavian Novi Avion), but such efforts were ultimately fruitless. At the same time, efforts were made to improve the engine via several domestically developed modifications aiming to increase its performance and reliability, such as the turbine being refitted with single-crystal blades and the riveted compressor being replaced by a welded counterpart, which reportedly boosted the engine's performance by 10 per cent and improved throttle management.
During early development, it became clear to the designers that the desired range and load-carrying capacities would be unachievable if the aircraft was powered by a single Atar engine. Consequently, they decided to adopt a twin-engine layout for the proposed fighter instead; the decision to abandon the initially-selected single engine format in favour of the twin-engine approach reportedly threw the project into chaos for some time. The change resulted in a delay of at least two years, as the adoption of a twin-engine layout necessitated a larger and heavier airframe along with more complex systems; essentially, the design team had to return to the drawing board. During 1988, the SAAF, having recognised that the project could no longer be ready within the original schedule, decided to approve a further interim programme, known as Project Tunny, to satisfy the nation's immediate air defence needs into the 2008–2012 period; this resulted in an improved version of the Atlas Cheetah, the Cheetah C, being produced from 1993 to 1994.
During 1988, Atlas commenced the construction of a single Carver prototype; according to reports, this aircraft was never fully completed and no test flights were known to have taken place. During mid-1989, aerospace publication Flight International reported that the in-development fighter aircraft was intended to be inducted into SAAF service within the latter half of the 1990s, and would be used to replace various types then in use, such as the French-built Mirage III fighters, British-built Blackburn Buccaneer and English Electric Canberra bombers, before eventually replacing the comparatively newer Dassault Mirage F1 fleet as well. In terms of its basic configuration, Carver resembled a delta wing layout; reportedly, the design had been externally influenced by Dassault's family of delta-winged Mirage fighters; specifically, the aircraft bore a large number of similarities to its Atlas Cheetah predecessor as well as to the Dassault Mirage 4000 prototype fighter.
Termination
The Angolan Tripartite Accord and the end of the South African Border War represented a major loss of impetus for Project Carver. During February 1991, President F. W. de Klerk formally announced the programme's cancellation. The principal official reason given at the time for the cancellation was that the expense of developing an indigenous fighter aircraft could not be justified in the light of the decreased threat in the newfound peacetime, changes in politics including the movement away from apartheid, and the gradual normalisation of international relationships. Recognising that new aircraft were still required, the South African government set about examining options for the procurement of an off-the-shelf fighter aircraft (which had been made possible due to the embargo having been lifted) in order to replace the SAAF's inventory of Dassault Mirage F1s and Atlas Cheetahs.
See also
References
External links
Carver
Secret military programs
Abandoned military aircraft projects of South Africa
1980s South African fighter aircraft
Israel–South Africa relations
Delta-wing aircraft
Twinjets | Atlas Carver | Engineering | 2,379 |
210,900 | https://en.wikipedia.org/wiki/Solar%20luminosity | The solar luminosity () is a unit of radiant flux (power emitted in the form of photons) conventionally used by astronomers to measure the luminosity of stars, galaxies and other celestial objects in terms of the output of the Sun.
One nominal solar luminosity is defined by the International Astronomical Union to be 3.828 × 10^26 W. The Sun is a weakly variable star, and its actual luminosity therefore fluctuates. The major fluctuation is the eleven-year solar cycle (sunspot cycle) that causes a quasi-periodic variation of about ±0.1%. Other variations over the last 200–300 years are thought to be much smaller than this.
Determination
Solar luminosity is related to solar irradiance (the solar constant). Solar irradiance is responsible for the orbital forcing that causes the Milankovitch cycles, which determine Earthly glacial cycles. The mean irradiance at the top of the Earth's atmosphere is sometimes known as the solar constant, $I_\odot$. Irradiance is defined as power per unit area, so the solar luminosity (total power emitted by the Sun) is the irradiance received at the Earth (solar constant) multiplied by the area of the sphere whose radius is the mean distance between the Earth and the Sun:
$L_\odot = 4\pi k I_\odot A^2$
where $A$ is the unit distance (the value of the astronomical unit in metres) and $k$ is a constant (whose value is very close to one) that reflects the fact that the mean distance from the Earth to the Sun is not exactly one astronomical unit.
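A minimal numerical check of this relation, taking a representative mean total solar irradiance of about 1361 W/m² (the exact value varies over the solar cycle) and neglecting the small correction factor:

```python
import math

AU_M = 1.495978707e11        # astronomical unit in metres (IAU definition)
SOLAR_CONSTANT = 1361.0      # representative mean total solar irradiance, W/m^2

# Total emitted power = irradiance at 1 au times the area of a sphere of radius 1 au.
luminosity_w = 4.0 * math.pi * AU_M ** 2 * SOLAR_CONSTANT
print(f"{luminosity_w:.3e} W")   # about 3.8e26 W, close to the IAU nominal value
```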
See also
Sun
Solar mass
Solar radius
Nuclear fusion
Active region
Triple-alpha process
References
Further reading
External links
LISIRD: LASP Interactive Solar Irradiance Datacenter
Stellar Luminosity Calculator
Solar Luminosity
Variation of Solar Luminosity
Luminosity
Stellar astronomy
Units of power
Units of measurement in astronomy | Solar luminosity | Physics,Astronomy,Mathematics | 375 |
11,840,868 | https://en.wikipedia.org/wiki/Entropy%20power%20inequality | In information theory, the entropy power inequality (EPI) is a result that relates to so-called "entropy power" of random variables. It shows that the entropy power of suitably well-behaved random variables is a superadditive function. The entropy power inequality was proved in 1948 by Claude Shannon in his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary.
Statement of the inequality
For a random vector X : Ω → Rn with probability density function f : Rn → R, the differential entropy of X, denoted h(X), is defined to be
$h(X) = -\int_{\mathbb{R}^n} f(x) \log f(x) \, dx$
and the entropy power of X, denoted N(X), is defined to be
$N(X) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(X)}.$
In particular, $N(X) = |K|^{1/n}$ when X is normally distributed with covariance matrix K.
Let X and Y be independent random variables with probability density functions in the Lp space Lp(Rn) for some p > 1. Then
$N(X + Y) \geq N(X) + N(Y).$
Moreover, equality holds if and only if X and Y are multivariate normal random variables with proportional covariance matrices.
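For multivariate normal variables the entropy power has the closed form N(X) = |K|^{1/n}, which allows a quick numerical illustration of the inequality. The sketch below only exercises this Gaussian case; the covariance matrices are arbitrary examples and the function name is ours.

```python
import numpy as np

def entropy_power_gaussian(K):
    """Entropy power N(X) = det(K)**(1/n) of an n-dimensional Gaussian with covariance K."""
    K = np.atleast_2d(np.asarray(K, dtype=float))
    n = K.shape[0]
    return np.linalg.det(K) ** (1.0 / n)

# Independent Gaussians: the sum has covariance K1 + K2, and N(X+Y) >= N(X) + N(Y),
# with equality exactly when K1 and K2 are proportional.
K1 = np.array([[2.0, 0.3], [0.3, 1.0]])
K2 = np.array([[1.0, -0.2], [-0.2, 0.5]])
lhs = entropy_power_gaussian(K1 + K2)
rhs = entropy_power_gaussian(K1) + entropy_power_gaussian(K2)
print(lhs >= rhs, round(lhs, 3), round(rhs, 3))   # True 2.119 2.06
```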
Alternative form of the inequality
The entropy power inequality can be rewritten in an equivalent form that does not explicitly depend on the definition of entropy power (see Costa and Cover reference below).
Let X and Y be independent random variables, as above. Then, let X' and Y' be independently distributed random variables with Gaussian distributions, such that
$h(X') = h(X)$
and
$h(Y') = h(Y).$
Then,
$h(X + Y) \geq h(X' + Y').$
See also
Information entropy
Information theory
Limiting density of discrete points
Self-information
Kullback–Leibler divergence
Entropy estimation
References
Information theory
Probabilistic inequalities
Statistical inequalities | Entropy power inequality | Mathematics,Technology,Engineering | 348 |
18,150,996 | https://en.wikipedia.org/wiki/Burning%20Index | Burning Index (BI) is a number used by the National Oceanic and Atmospheric Administration (NOAA) to describe the potential amount of effort needed to contain a single fire in a particular fuel type within a rating area. The National Fire Danger Rating System (NFDRS) uses a modified version of Bryam's equation for flame length – based on the Spread Component (SC) and the available energy (ERC) – to calculate flame length from which the Burning Index is computed.
The equation for flame length is listed below:
where:
j is a scaling factor,
SC is the spread component,
and ERC is the Energy Release Component.
Consequently, the equation for the Burning Index is:
where is the Burning Index scaling factor of (10/ft). Therefore, dividing the Burning Index by 10 produces a reasonable estimate of the flame length at the head of a fire. A unique Burning Index (BI) table is required for each fuel model.
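A sketch of the computation is given below. The numeric constants (0.301 and the 0.46 exponent) are the values commonly quoted for the 1978 NFDRS form of Byram's flame-length relation; they are assumptions of this sketch rather than values taken from this article, and should be checked against NFDRS documentation before use.

```python
def burning_index(spread_component, energy_release_component,
                  j=0.301, exponent=0.46):
    """Burning Index sketch: flame length FL = j * (SC * ERC)**exponent (feet),
    Burning Index BI = 10 * FL.  The constants j and exponent are assumed,
    commonly cited NFDRS values, not figures quoted in this article."""
    flame_length_ft = j * (spread_component * energy_release_component) ** exponent
    return 10.0 * flame_length_ft

# Example: SC = 20, ERC = 60 gives a flame length of roughly 8 ft, i.e. a BI of roughly 80.
print(round(burning_index(20, 60), 1))
```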
References
Fire
Firefighting | Burning Index | Chemistry | 198 |
49,033 | https://en.wikipedia.org/wiki/Epigenetics | In biology, epigenetics is the study of heritable traits, or a stable change of cell function, that happen without changes to the DNA sequence. The Greek prefix epi- ( "over, outside of, around") in epigenetics implies features that are "on top of" or "in addition to" the traditional (DNA sequence based) genetic mechanism of inheritance. Epigenetics usually involves a change that is not erased by cell division, and affects the regulation of gene expression. Such effects on cellular and physiological phenotypic traits may result from environmental factors, or be part of normal development. Epigenetic factors can also lead to cancer.
The term also refers to the mechanism of changes: functionally relevant alterations to the genome that do not involve mutation of the nucleotide sequence. Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence. Further, non-coding RNA sequences have been shown to play a key role in the regulation of gene expression. Gene expression can be controlled through the action of repressor proteins that attach to silencer regions of the DNA. These epigenetic changes may last through cell divisions for the duration of the cell's life, and may also last for multiple generations, even though they do not involve changes in the underlying DNA sequence of the organism; instead, non-genetic factors cause the organism's genes to behave (or "express themselves") differently.
One example of an epigenetic change in eukaryotic biology is the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. In other words, as a single fertilized egg cell – the zygote – continues to divide, the resulting daughter cells change into all the different cell types in an organism, including neurons, muscle cells, epithelium, endothelium of blood vessels, etc., by activating some genes while inhibiting the expression of others.
Definitions
The term epigenesis has a generic meaning of "extra growth" that has been used in English since the 17th century. In scientific publications, the term epigenetics started to appear in the 1930s. However, its contemporary meaning emerged only in the 1990s.
A definition of the concept of epigenetic trait as a "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence" was formulated at a Cold Spring Harbor meeting in 2008, although alternate definitions that include non-heritable traits are still being used widely.
Waddington's canalisation, 1940s
The hypothesis of epigenetic changes affecting the expression of chromosomes was put forth by the Russian biologist Nikolai Koltsov. From the generic meaning, and the associated adjective epigenetic, British embryologist C. H. Waddington coined the term epigenetics in 1942 as pertaining to epigenesis, in parallel to Valentin Haecker's 'phenogenetics' (). Epigenesis in the context of the biology of that period referred to the differentiation of cells from their initial totipotent state during embryonic development.
When Waddington coined the term, the physical nature of genes and their role in heredity was not known. He used it instead as a conceptual model of how genetic components might interact with their surroundings to produce a phenotype; he used the phrase "epigenetic landscape" as a metaphor for biological development. Waddington held that cell fates were established during development in a process he called canalisation much as a marble rolls down to the point of lowest local elevation. Waddington suggested visualising increasing irreversibility of cell type differentiation as ridges rising between the valleys where the marbles (analogous to cells) are travelling.
In recent times, Waddington's notion of the epigenetic landscape has been rigorously formalized in the context of the systems dynamics state approach to the study of cell-fate. Cell-fate determination is predicted to exhibit certain dynamics, such as attractor-convergence (the attractor can be an equilibrium point, limit cycle or strange attractor) or oscillatory.
Contemporary
Robin Holliday defined in 1990 epigenetics as "the study of the mechanisms of temporal and spatial control of gene activity during the development of complex organisms."
More recent usage of the word in biology follows stricter definitions. As defined by Arthur Riggs and colleagues, it is "the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence."
The term has also been used, however, to describe processes which have not been demonstrated to be heritable, such as some forms of histone modification. Consequently, there are attempts to redefine "epigenetics" in broader terms that would avoid the constraints of requiring heritability. For example, Adrian Bird defined epigenetics as "the structural adaptation of chromosomal regions so as to register, signal or perpetuate altered activity states." This definition would be inclusive of transient modifications associated with DNA repair or cell-cycle phases as well as stable changes maintained across multiple cell generations, but exclude others such as templating of membrane architecture and prions unless they impinge on chromosome function. Such redefinitions however are not universally accepted and are still subject to debate. The NIH "Roadmap Epigenomics Project", which ran from 2008 to 2017, uses the following definition: "For purposes of this program, epigenetics refers to both heritable changes in gene activity and expression (in the progeny of cells or of individuals) and also stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable." In 2008, a consensus definition of the epigenetic trait, a "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence," was made at a Cold Spring Harbor meeting.
The similarity of the word to "genetics" has generated many parallel usages. The "epigenome" is a parallel to the word "genome", referring to the overall epigenetic state of a cell, and epigenomics refers to global analyses of epigenetic changes across the entire genome. The phrase "genetic code" has also been adapted – the "epigenetic code" has been used to describe the set of epigenetic features that create different phenotypes in different cells from the same underlying DNA sequence. Taken to its extreme, the "epigenetic code" could represent the total state of the cell, with the position of each molecule accounted for in an epigenomic map, a diagrammatic representation of the gene expression, DNA methylation and histone modification status of a particular genomic region. More typically, the term is used in reference to systematic efforts to measure specific, relevant forms of epigenetic information such as the histone code or DNA methylation patterns.
Mechanisms
Covalent modification of either DNA (e.g. cytosine methylation and hydroxymethylation) or of histone proteins (e.g. lysine acetylation, lysine and arginine methylation, serine and threonine phosphorylation, and lysine ubiquitination and sumoylation) plays central roles in many types of epigenetic inheritance. Therefore, the word "epigenetics" is sometimes used as a synonym for these processes. However, this can be misleading. Chromatin remodeling is not always inherited, and not all epigenetic inheritance involves chromatin remodeling. In 2019, a further lysine modification, lactylation, appeared in the scientific literature, linking epigenetic modification to cell metabolism.
Because the phenotype of a cell or individual is affected by which of its genes are transcribed, heritable transcription states can give rise to epigenetic effects. There are several layers of regulation of gene expression. One way that genes are regulated is through the remodeling of chromatin. Chromatin is the complex of DNA and the histone proteins with which it associates. If the way that DNA is wrapped around the histones changes, gene expression can change as well. Chromatin remodeling is accomplished through two main mechanisms:
The first way is post translational modification of the amino acids that make up histone proteins. Histone proteins are made up of long chains of amino acids. If the amino acids that are in the chain are changed, the shape of the histone might be modified. DNA is not completely unwound during replication. It is possible, then, that the modified histones may be carried into each new copy of the DNA. Once there, these histones may act as templates, initiating the surrounding new histones to be shaped in the new manner. By altering the shape of the histones around them, these modified histones would ensure that a lineage-specific transcription program is maintained after cell division.
The second way is the addition of methyl groups to the DNA, mostly at CpG sites, to convert cytosine to 5-methylcytosine. 5-Methylcytosine performs much like a regular cytosine, pairing with a guanine in double-stranded DNA. However, when methylated cytosines are present in CpG sites in the promoter and enhancer regions of genes, the genes are often repressed. When methylated cytosines are present in CpG sites in the gene body (in the coding region excluding the transcription start site) expression of the gene is often enhanced. Transcription of a gene usually depends on a transcription factor binding to a (10 base or less) recognition sequence at the enhancer that interacts with the promoter region of that gene (Gene expression#Enhancers, transcription factors, mediator complex and DNA loops in mammalian transcription). About 22% of transcription factors are inhibited from binding when the recognition sequence has a methylated cytosine. In addition, presence of methylated cytosines at a promoter region can attract methyl-CpG-binding domain (MBD) proteins. All MBDs interact with nucleosome remodeling and histone deacetylase complexes, which leads to gene silencing. In addition, another covalent modification involving methylated cytosine is its demethylation by TET enzymes. Hundreds of such demethylations occur, for instance, during learning and memory forming events in neurons.
There is frequently a reciprocal relationship between DNA methylation and histone lysine methylation. For instance, the methyl binding domain protein MBD1, attracted to and associating with methylated cytosine in a DNA CpG site, can also associate with H3K9 methyltransferase activity to methylate histone 3 at lysine 9. On the other hand, DNA maintenance methylation by DNMT1 appears to partly rely on recognition of histone methylation on the nucleosome present at the DNA site to carry out cytosine methylation on newly synthesized DNA. There is further crosstalk between DNA methylation carried out by DNMT3A and DNMT3B and histone methylation so that there is a correlation between the genome-wide distribution of DNA methylation and histone methylation.
Mechanisms of heritability of histone state are not well understood; however, much is known about the mechanism of heritability of DNA methylation state during cell division and differentiation. Heritability of methylation state depends on certain enzymes (such as DNMT1) that have a higher affinity for 5-methylcytosine than for cytosine. If this enzyme reaches a "hemimethylated" portion of DNA (where 5-methylcytosine is in only one of the two DNA strands) the enzyme will methylate the other half. However, it is now known that DNMT1 physically interacts with the protein UHRF1. UHRF1 has been recently recognized as essential for DNMT1-mediated maintenance of DNA methylation. UHRF1 is the protein that specifically recognizes hemi-methylated DNA, therefore bringing DNMT1 to its substrate to maintain DNA methylation.
Although histone modifications occur throughout the entire sequence, the unstructured N-termini of histones (called histone tails) are particularly highly modified. These modifications include acetylation, methylation, ubiquitylation, phosphorylation, sumoylation, ribosylation and citrullination. Acetylation is the most highly studied of these modifications. For example, acetylation of the K14 and K9 lysines of the tail of histone H3 by histone acetyltransferase enzymes (HATs) is generally related to transcriptional competence (see Figure).
One mode of thinking is that this tendency of acetylation to be associated with "active" transcription is biophysical in nature. Because it normally has a positively charged nitrogen at its end, lysine can bind the negatively charged phosphates of the DNA backbone. The acetylation event converts the positively charged amine group on the side chain into a neutral amide linkage. This removes the positive charge, thus loosening the DNA from the histone. When this occurs, complexes like SWI/SNF and other transcriptional factors can bind to the DNA and allow transcription to occur. This is the "cis" model of the epigenetic function. In other words, changes to the histone tails have a direct effect on the DNA itself.
Another model of epigenetic function is the "trans" model. In this model, changes to the histone tails act indirectly on the DNA. For example, lysine acetylation may create a binding site for chromatin-modifying enzymes (or transcription machinery as well). This chromatin remodeler can then cause changes to the state of the chromatin. Indeed, a bromodomain – a protein domain that specifically binds acetyl-lysine – is found in many enzymes that help activate transcription, including the SWI/SNF complex. It may be that acetylation acts in this and the previous way to aid in transcriptional activation.
The idea that modifications act as docking modules for related factors is borne out by histone methylation as well. Methylation of lysine 9 of histone H3 has long been associated with constitutively transcriptionally silent chromatin (constitutive heterochromatin) (see bottom Figure). It has been determined that a chromodomain (a domain that specifically binds methyl-lysine) in the transcriptionally repressive protein HP1 recruits HP1 to K9 methylated regions. One example that seems to refute this biophysical model for methylation is that tri-methylation of histone H3 at lysine 4 is strongly associated with (and required for full) transcriptional activation (see top Figure). Tri-methylation, in this case, would introduce a fixed positive charge on the tail.
It has been shown that the histone lysine methyltransferase (KMT) is responsible for this methylation activity in the pattern of histones H3 & H4. This enzyme utilizes a catalytically active site called the SET domain (Suppressor of variegation, Enhancer of Zeste, Trithorax). The SET domain is a 130-amino acid sequence involved in modulating gene activities. This domain has been demonstrated to bind to the histone tail and causes the methylation of the histone.
Differing histone modifications are likely to function in differing ways; acetylation at one position is likely to function differently from acetylation at another position. Also, multiple modifications may occur at the same time, and these modifications may work together to change the behavior of the nucleosome. The idea that multiple dynamic modifications regulate gene transcription in a systematic and reproducible way is called the histone code, although the idea that histone state can be read linearly as a digital information carrier has been largely debunked. One of the best-understood systems that orchestrate chromatin-based silencing is the SIR protein based silencing of the yeast hidden mating-type loci HML and HMR.
DNA methylation
DNA methylation frequently occurs in repeated sequences, and helps to suppress the expression and mobility of 'transposable elements'. Because 5-methylcytosine can be spontaneously deaminated (replacing nitrogen by oxygen) to thymine, CpG sites are frequently mutated and become rare in the genome, except at CpG islands where they remain unmethylated. Epigenetic changes of this type thus have the potential to direct increased frequencies of permanent genetic mutation. DNA methylation patterns are known to be established and modified in response to environmental factors by a complex interplay of at least three independent DNA methyltransferases, DNMT1, DNMT3A, and DNMT3B, the loss of any of which is lethal in mice. DNMT1 is the most abundant methyltransferase in somatic cells, localizes to replication foci, has a 10–40-fold preference for hemimethylated DNA and interacts with the proliferating cell nuclear antigen (PCNA).
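The CpG depletion described above is usually quantified as an observed-to-expected CpG ratio; a simple sketch of that calculation follows. The thresholds used to call a region a CpG island (minimum length, GC content, ratio cutoff) vary between published definitions and are not taken from this article.

```python
def cpg_observed_expected(seq):
    """Observed/expected CpG ratio of a DNA sequence (A/C/G/T).  Bulk vertebrate
    genomic DNA typically scores well below 1 because methylated CpG sites decay
    by deamination, while unmethylated CpG islands approach 1."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    observed = sum(1 for i in range(n - 1) if seq[i:i + 2] == "CG")
    expected = (c * g) / n if n else 0.0
    return observed / expected if expected else 0.0

print(round(cpg_observed_expected("CGCGGCGCATCGCGGGCCGC"), 2))   # about 1.5 for this CpG-rich toy sequence
```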
By preferentially modifying hemimethylated DNA, DNMT1 transfers patterns of methylation to a newly synthesized strand after DNA replication, and therefore is often referred to as the 'maintenance' methyltransferase. DNMT1 is essential for proper embryonic development, imprinting and X-inactivation. To emphasize the difference of this molecular mechanism of inheritance from the canonical Watson-Crick base-pairing mechanism of transmission of genetic information, the term 'Epigenetic templating' was introduced. Furthermore, in addition to the maintenance and transmission of methylated DNA states, the same principle could work in the maintenance and transmission of histone modifications and even cytoplasmic (structural) heritable states.
RNA methylation
RNA methylation of N6-methyladenosine (m6A) as the most abundant eukaryotic RNA modification has recently been recognized as an important gene regulatory mechanism.
Histone modifications
Histones H3 and H4 can also be manipulated through demethylation using histone lysine demethylase (KDM). This recently identified enzyme has a catalytically active site called the Jumonji domain (JmjC). The demethylation occurs when JmjC utilizes multiple cofactors to hydroxylate the methyl group, thereby removing it. JmjC is capable of demethylating mono-, di-, and tri-methylated substrates.
Chromosomal regions can adopt stable and heritable alternative states resulting in bistable gene expression without changes to the DNA sequence. Epigenetic control is often associated with alternative covalent modifications of histones. The stability and heritability of states of larger chromosomal regions are suggested to involve positive feedback where modified nucleosomes recruit enzymes that similarly modify nearby nucleosomes. A simplified stochastic model for this type of epigenetics is found here.
It has been suggested that chromatin-based transcriptional regulation could be mediated by the effect of small RNAs. Small interfering RNAs can modulate transcriptional gene expression via epigenetic modulation of targeted promoters.
RNA transcripts
Sometimes a gene, after being turned on, transcribes a product that (directly or indirectly) maintains the activity of that gene. For example, Hnf4 and MyoD enhance the transcription of many liver-specific and muscle-specific genes, respectively, including their own, through the transcription factor activity of the proteins they encode. RNA signalling includes differential recruitment of a hierarchy of generic chromatin modifying complexes and DNA methyltransferases to specific loci by RNAs during differentiation and development. Other epigenetic changes are mediated by the production of different splice forms of RNA, or by formation of double-stranded RNA (RNAi). Descendants of the cell in which the gene was turned on will inherit this activity, even if the original stimulus for gene-activation is no longer present. These genes are often turned on or off by signal transduction, although in some systems where syncytia or gap junctions are important, RNA may spread directly to other cells or nuclei by diffusion. A large amount of RNA and protein is contributed to the zygote by the mother during oogenesis or via nurse cells, resulting in maternal effect phenotypes. A smaller quantity of sperm RNA is transmitted from the father, but there is recent evidence that this epigenetic information can lead to visible changes in several generations of offspring.
MicroRNAs
MicroRNAs (miRNAs) are a class of non-coding RNAs that range in size from 17 to 25 nucleotides. miRNAs regulate a large variety of biological functions in plants and animals. As of 2013, about 2,000 miRNAs had been discovered in humans, and these can be found online in a miRNA database. Each miRNA expressed in a cell may target about 100 to 200 messenger RNAs (mRNAs) that it downregulates. Most of the downregulation of mRNAs occurs by causing the decay of the targeted mRNA, while some downregulation occurs at the level of translation into protein.
It appears that about 60% of human protein-coding genes are regulated by miRNAs. Many miRNAs are themselves epigenetically regulated. About 50% of miRNA genes are associated with CpG islands, which may be repressed by epigenetic methylation. Transcription from methylated CpG islands is strongly and heritably repressed. Other miRNAs are epigenetically regulated by either histone modifications or by combined DNA methylation and histone modification.
mRNA
In 2011, it was demonstrated that the methylation of mRNA plays a critical role in human energy homeostasis. The protein encoded by the obesity-associated FTO gene was shown to demethylate N6-methyladenosine in RNA.
sRNAs
sRNAs are small (50–250 nucleotides), highly structured, non-coding RNA fragments found in bacteria. They control gene expression, including virulence genes in pathogens, and are viewed as new targets in the fight against drug-resistant bacteria. They play an important role in many biological processes by binding to mRNA and protein targets in prokaryotes. Their phylogenetic analyses, for example through sRNA–mRNA target interactions or protein-binding properties, are used to build comprehensive databases. sRNA-gene maps based on their targets in microbial genomes have also been constructed.
Long non-coding RNAs
Numerous investigations have demonstrated the pivotal involvement of long non-coding RNAs (lncRNAs) in the regulation of gene expression and chromosomal modifications, thereby exerting significant control over cellular differentiation. These long non-coding RNAs also contribute to genomic imprinting and the inactivation of the X chromosome.
In invertebrates such as the honey bee, a social insect, reciprocal crosses have identified long non-coding RNAs as a possible epigenetic mechanism acting through allele-specific expression of genes underlying aggression.
Prions
Prions are infectious forms of proteins. In general, proteins fold into discrete units that perform distinct cellular functions, but some proteins are also capable of forming an infectious conformational state known as a prion. Although often viewed in the context of infectious disease, prions are more loosely defined by their ability to catalytically convert other native state versions of the same protein to an infectious conformational state. It is in this latter sense that they can be viewed as epigenetic agents capable of inducing a phenotypic change without a modification of the genome.
Fungal prions are considered by some to be epigenetic because the infectious phenotype caused by the prion can be inherited without modification of the genome. PSI+ and URE3, discovered in yeast in 1965 and 1971, are the two best studied of this type of prion. Prions can have a phenotypic effect through the sequestration of protein in aggregates, thereby reducing that protein's activity. In PSI+ cells, the loss of soluble Sup35 protein (which is involved in termination of translation) causes ribosomes to have a higher rate of read-through of stop codons, an effect that results in suppression of nonsense mutations in other genes. The ability of Sup35 to form prions may be a conserved trait. It could confer an adaptive advantage by giving cells the ability to switch into a PSI+ state and express dormant genetic features normally terminated by stop codon mutations.
Prion-based epigenetics has also been observed in Saccharomyces cerevisiae.
Molecular basis
Epigenetic changes modify the activation of certain genes, but not the genetic code sequence of DNA. The microstructure (not code) of DNA itself or the associated chromatin proteins may be modified, causing activation or silencing. This mechanism enables differentiated cells in a multicellular organism to express only the genes that are necessary for their own activity. Epigenetic changes are preserved when cells divide. Most epigenetic changes only occur within the course of one individual organism's lifetime; however, these epigenetic changes can be transmitted to the organism's offspring through a process called transgenerational epigenetic inheritance. Moreover, if gene inactivation occurs in a sperm or egg cell that goes on to participate in fertilization, this epigenetic modification may also be transferred to the next generation.
Specific epigenetic processes include paramutation, bookmarking, imprinting, gene silencing, X chromosome inactivation, position effect, DNA methylation reprogramming, transvection, maternal effects, the progress of carcinogenesis, many effects of teratogens, regulation of histone modifications and heterochromatin, and technical limitations affecting parthenogenesis and cloning.
DNA damage
DNA damage can also cause epigenetic changes. DNA damage is very frequent, occurring on average about 60,000 times a day per cell of the human body (see DNA damage (naturally occurring)). These damages are largely repaired, but epigenetic changes can still remain at the site of DNA repair. In particular, a double strand break in DNA can initiate unprogrammed epigenetic gene silencing both by causing DNA methylation as well as by promoting silencing types of histone modifications (chromatin remodeling; see next section). In addition, the enzyme Parp1 (poly(ADP)-ribose polymerase) and its product poly(ADP)-ribose (PAR) accumulate at sites of DNA damage as part of the repair process. This accumulation, in turn, directs recruitment and activation of the chromatin remodeling protein ALC1, which can cause nucleosome remodeling. Nucleosome remodeling has been found to cause, for instance, epigenetic silencing of the DNA repair gene MLH1. DNA damaging chemicals, such as benzene, hydroquinone, styrene, carbon tetrachloride and trichloroethylene, cause considerable hypomethylation of DNA, some through the activation of oxidative stress pathways.
Diet is known to alter the epigenetics of rats. Some food components epigenetically increase the levels of DNA repair enzymes such as MGMT and MLH1, and of p53. Other food components, such as soy isoflavones, can reduce DNA damage. In one study, markers for oxidative stress, such as modified nucleotides that can result from DNA damage, were decreased by a 3-week diet supplemented with soy. A decrease in oxidative DNA damage was also observed 2 h after consumption of anthocyanin-rich bilberry (Vaccinium myrtillus L.) pomace extract.
DNA repair
Damage to DNA is very common and is constantly being repaired. Epigenetic alterations can accompany DNA repair of oxidative damage or double-strand breaks. In human cells, oxidative DNA damage occurs about 10,000 times a day and DNA double-strand breaks occur about 10 to 50 times a cell cycle in somatic replicating cells (see DNA damage (naturally occurring)). The selective advantage of DNA repair is to allow the cell to survive in the face of DNA damage. The selective advantage of epigenetic alterations that occur with DNA repair is not clear.
Repair of oxidative DNA damage can alter epigenetic markers
In the steady state (with endogenous damages occurring and being repaired), there are about 2,400 oxidatively damaged guanines that form 8-oxo-2'-deoxyguanosine (8-OHdG) in the average mammalian cell DNA. 8-OHdG constitutes about 5% of the oxidative damages commonly present in DNA. The oxidized guanines do not occur randomly among all guanines in DNA. There is a sequence preference for the guanine at a methylated CpG site (a cytosine followed by guanine along its 5' → 3' direction and where the cytosine is methylated (5-mCpG)). A 5-mCpG site has the lowest ionization potential for guanine oxidation.
Oxidized guanine has mispairing potential and is mutagenic. Oxoguanine glycosylase (OGG1) is the primary enzyme responsible for the excision of the oxidized guanine during DNA repair. OGG1 finds and binds to an 8-OHdG within a few seconds. However, OGG1 does not immediately excise 8-OHdG. In HeLa cells half maximum removal of 8-OHdG occurs in 30 minutes, and in irradiated mice, the 8-OHdGs induced in the mouse liver are removed with a half-life of 11 minutes.
When OGG1 is present at an oxidized guanine within a methylated CpG site it recruits TET1 to the 8-OHdG lesion (see Figure). This allows TET1 to demethylate an adjacent methylated cytosine. Demethylation of cytosine is an epigenetic alteration.
As an example, when human mammary epithelial cells were treated with H2O2 for six hours, 8-OHdG increased about 3.5-fold in DNA and this caused about 80% demethylation of the 5-methylcytosines in the genome. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene into messenger RNA. In cells treated with H2O2, one particular gene was examined, BACE1. The methylation level of the BACE1 CpG island was reduced (an epigenetic alteration) and this allowed about a 6.5-fold increase in expression of BACE1 messenger RNA.
While six-hour incubation with H2O2 causes considerable demethylation of 5-mCpG sites, shorter times of H2O2 incubation appear to promote other epigenetic alterations. Treatment of cells with H2O2 for 30 minutes causes the mismatch repair protein heterodimer MSH2-MSH6 to recruit DNA methyltransferase 1 (DNMT1) to sites of some kinds of oxidative DNA damage. This could cause increased methylation of cytosines (epigenetic alterations) at these locations.
Jiang et al. treated HEK 293 cells with agents causing oxidative DNA damage (potassium bromate (KBrO3) or potassium chromate (K2CrO4)). Base excision repair (BER) of oxidative damage occurred with the DNA repair enzyme polymerase beta localizing to oxidized guanines. Polymerase beta is the main human polymerase in short-patch BER of oxidative DNA damage. Jiang et al. also found that polymerase beta recruited the DNA methyltransferase protein DNMT3b to BER repair sites. They then evaluated the methylation pattern at the single nucleotide level in a small region of DNA including the promoter region and the early transcription region of the BRCA1 gene. Oxidative DNA damage from bromate modulated the DNA methylation pattern (caused epigenetic alterations) at CpG sites within the region of DNA studied. In untreated cells, CpGs located at −189, −134, −29, −19, +16, and +19 of the BRCA1 gene had methylated cytosines (where numbering is from the messenger RNA transcription start site, and negative numbers indicate nucleotides in the upstream promoter region). Bromate treatment-induced oxidation resulted in the loss of cytosine methylation at −189, −134, +16 and +19, while also leading to the formation of new methylation at the CpGs located at −80, −55, −21 and +8 after DNA repair was allowed.
Homologous recombinational repair alters epigenetic markers
At least four articles report the recruitment of DNA methyltransferase 1 (DNMT1) to sites of DNA double-strand breaks. During homologous recombinational repair (HR) of the double-strand break, the involvement of DNMT1 causes the two repaired strands of DNA to have different levels of methylated cytosines. One strand becomes frequently methylated at about 21 CpG sites downstream of the repaired double-strand break. The other DNA strand loses methylation at about six CpG sites that were previously methylated downstream of the double-strand break, as well as losing methylation at about five CpG sites that were previously methylated upstream of the double-strand break. When the chromosome is replicated, this gives rise to one daughter chromosome that is heavily methylated downstream of the previous break site and one that is unmethylated in the region both upstream and downstream of the previous break site. With respect to the gene that was broken by the double-strand break, half of the progeny cells express that gene at a high level and in the other half of the progeny cells expression of that gene is repressed. When clones of these cells were maintained for three years, the new methylation patterns were maintained over that time period.
In mice with a CRISPR-mediated homology-directed recombination insertion in their genome, there was a large number of increased methylations of CpG sites within the double-strand break-associated insertion.
Non-homologous end joining can cause some epigenetic marker alterations
Non-homologous end joining (NHEJ) repair of a double-strand break can cause a small number of demethylations of pre-existing cytosine DNA methylations downstream of the repaired double-strand break. Further work by Allen et al. showed that NHEJ of a DNA double-strand break in a cell could give rise to some progeny cells having repressed expression of the gene harboring the initial double-strand break and some progeny having high expression of that gene due to epigenetic alterations associated with NHEJ repair. The frequency of epigenetic alterations causing repression of a gene after an NHEJ repair of a DNA double-strand break in that gene may be about 0.9%.
Techniques used to study epigenetics
Epigenetic research uses a wide range of molecular biological techniques to further understanding of epigenetic phenomena. These techniques include chromatin immunoprecipitation (together with its large-scale variants ChIP-on-chip and ChIP-Seq), fluorescent in situ hybridization, methylation-sensitive restriction enzymes, DNA adenine methyltransferase identification (DamID) and bisulfite sequencing. Furthermore, the use of bioinformatics methods has a role in computational epigenetics.
Chromatin Immunoprecipitation
Chromatin Immunoprecipitation (ChIP) has helped bridge the gap between DNA and epigenetic interactions. With the use of ChIP, researchers are able to make findings with regard to gene regulation, transcription mechanisms, and chromatin structure.
Fluorescent in situ hybridization
Fluorescent in situ hybridization (FISH) is an important tool for understanding epigenetic mechanisms. FISH can be used to find the location of genes on chromosomes, as well as to find noncoding RNAs. FISH is predominantly used for detecting chromosomal abnormalities in humans.
Methylation-sensitive restriction enzymes
Methylation-sensitive restriction enzymes paired with PCR provide a way to evaluate methylation in DNA, specifically at CpG sites. If the DNA is methylated, the restriction enzymes will not cleave the strand, and the region can be amplified by PCR. Conversely, if the DNA is not methylated, the enzymes will cleave the strand, and no PCR product is obtained across the site.
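A minimal sketch of this digestion-then-PCR logic, assuming a HpaII-like enzyme whose CCGG recognition site is blocked by methylation of the internal CpG; the sequence and the methylated positions are made-up illustrative values.

```python
def pcr_product_detected(sequence, methylated_positions, site="CCGG"):
    """Return True if the amplicon survives digestion, i.e. every recognition site is methylated."""
    for i in range(len(sequence) - len(site) + 1):
        if sequence[i:i + len(site)] == site:
            internal_c = i + 1                  # the cytosine of the internal CpG within "CCGG"
            if internal_c not in methylated_positions:
                return False                    # unmethylated site is cleaved -> template destroyed
    return True                                 # all sites methylated -> template intact, PCR amplifies

amplicon = "ATGCCGGTTACCGGA"                    # contains CCGG sites starting at positions 3 and 10
print(pcr_product_detected(amplicon, {4, 11}))  # both sites methylated -> True (product detected)
print(pcr_product_detected(amplicon, {4}))      # one site unmethylated -> False (no product)
```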
Bisulfite sequencing
Bisulfite sequencing is another way to evaluate DNA methylation. Treatment with sodium bisulfite converts unmethylated cytosine to uracil (read as thymine after PCR and sequencing), whereas methylated cytosines are not affected.
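A minimal sketch of this conversion, assuming the methylation state of every cytosine is known in advance; the sequence and methylated positions are illustrative only.

```python
def bisulfite_read(sequence, methylated_positions):
    """Simulate the sequence observed after bisulfite treatment and PCR."""
    out = []
    for i, base in enumerate(sequence):
        if base == "C" and i not in methylated_positions:
            out.append("T")        # unmethylated cytosine -> uracil -> read as thymine
        else:
            out.append(base)       # methylated cytosines (and all other bases) are unchanged
    return "".join(out)

genomic = "ACGTCCGTACG"
print(bisulfite_read(genomic, methylated_positions={1}))  # "ACGTTTGTATG": only the protected C at position 1 survives
```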
Nanopore sequencing
Certain sequencing methods, such as nanopore sequencing, allow sequencing of native DNA. Native (unamplified) DNA retains the epigenetic modifications which would otherwise be lost during the amplification step. Nanopore basecaller models can distinguish between the signals obtained for epigenetically modified bases and unmodified bases, and provide an epigenetic profile in addition to the sequencing result.
Structural inheritance
In ciliates such as Tetrahymena and Paramecium, genetically identical cells show heritable differences in the patterns of ciliary rows on their cell surface. Experimentally altered patterns can be transmitted to daughter cells. It seems existing structures act as templates for new structures. The mechanisms of such inheritance are unclear, but there is reason to assume that multicellular organisms also use existing cell structures to assemble new ones.
Nucleosome positioning
Eukaryotic genomes have numerous nucleosomes. Nucleosome position is not random, and determines the accessibility of DNA to regulatory proteins. Promoters active in different tissues have been shown to have different nucleosome positioning features. This determines differences in gene expression and cell differentiation. It has been shown that at least some nucleosomes are retained in sperm cells (where most but not all histones are replaced by protamines). Thus nucleosome positioning is to some degree inheritable. Recent studies have uncovered connections between nucleosome positioning and other epigenetic factors, such as DNA methylation and hydroxymethylation.
Histone variants
Different histone variants are incorporated into specific regions of the genome non-randomly. Their differential biochemical characteristics can affect genome functions via their roles in gene regulation, and maintenance of chromosome structures.
Genomic architecture
The three-dimensional configuration of the genome (the 3D genome) is complex, dynamic and crucial for regulating genomic function and nuclear processes such as DNA replication, transcription and DNA-damage repair.
Functions and consequences
In the brain
Memory
Memory formation and maintenance are due to epigenetic alterations that cause the required dynamic changes in gene transcription that create and renew memory in neurons.
An event can set off a chain of reactions that result in altered methylations of a large set of genes in neurons, which give a representation of the event, a memory.
Areas of the brain important in the formation of memories include the hippocampus, medial prefrontal cortex (mPFC), anterior cingulate cortex and amygdala, as shown in the diagram of the human brain in this section.
When a strong memory is created, as in a rat subjected to contextual fear conditioning (CFC), one of the earliest events to occur is that more than 100 DNA double-strand breaks are formed by topoisomerase IIB in neurons of the hippocampus and the medial prefrontal cortex (mPFC). These double-strand breaks are at specific locations that allow activation of transcription of immediate early genes (IEGs) that are important in memory formation, allowing their expression in mRNA, with peak mRNA transcription at seven to ten minutes after CFC.
Two important IEGs in memory formation are EGR1 and the alternative promoter variant of DNMT3A, DNMT3A2. EGR1 protein binds to DNA at its binding motifs, 5′-GCGTGGGCG-3′ or 5′-GCGGGGGCGG-3′, and there are about 12,000 genome locations at which EGR1 protein can bind. EGR1 protein binds to DNA in gene promoter and enhancer regions. EGR1 associates with the demethylating enzyme TET1 and brings TET1 to about 600 locations on the genome where TET1 can then demethylate and activate the associated genes.
The DNA methyltransferases DNMT3A1, DNMT3A2 and DNMT3B can all methylate cytosines (see image this section) at CpG sites in or near the promoters of genes. As shown by Manzo et al., these three DNA methyltransferases differ in their genomic binding locations and DNA methylation activity at different regulatory sites. Manzo et al. located 3,970 genome regions exclusively enriched for DNMT3A1, 3,838 regions for DNMT3A2 and 3,432 regions for DNMT3B. When DNMT3A2 is newly induced as an IEG (when neurons are activated), many new cytosine methylations occur, presumably in the target regions of DNMT3A2. Oliveira et al. found that the neuronal activity-inducible IEG levels of Dnmt3a2 in the hippocampus determined the ability to form long-term memories.
Rats form long-term associative memories after contextual fear conditioning (CFC). Duke et al. found that 24 hours after CFC in rats, in hippocampus neurons, 2,097 genes (9.17% of the genes in the rat genome) had altered methylation. When newly methylated cytosines are present in CpG sites in the promoter regions of genes, the genes are often repressed, and when newly demethylated cytosines are present the genes may be activated. After CFC, there were 1,048 genes with reduced mRNA expression and 564 genes with upregulated mRNA expression. Similarly, when mice undergo CFC, one hour later in the hippocampus region of the mouse brain there are 675 demethylated genes and 613 hypermethylated genes. However, memories do not remain in the hippocampus, but after four or five weeks the memories are stored in the anterior cingulate cortex. In the studies on mice after CFC, Halder et al. showed that four weeks after CFC there were at least 1,000 differentially methylated genes and more than 1,000 differentially expressed genes in the anterior cingulate cortex, while at the same time the altered methylations in the hippocampus were reversed.
The epigenetic alteration of methylation after a new memory is established creates a different pool of nuclear mRNAs. As reviewed by Bernstein, the epigenetically determined new mix of nuclear mRNAs are often packaged into neuronal granules, or messenger RNP, consisting of mRNA, small and large ribosomal subunits, translation initiation factors and RNA-binding proteins that regulate mRNA function. These neuronal granules are transported from the neuron nucleus and are directed, according to 3′ untranslated regions of the mRNA in the granules (their "zip codes"), to neuronal dendrites. Roughly 2,500 mRNAs may be localized to the dendrites of hippocampal pyramidal neurons and perhaps 450 transcripts are present at excitatory synapses on dendritic spines. The altered assortments of transcripts (dependent on epigenetic alterations in the neuron nucleus) have different sensitivities in response to signals, which is the basis of altered synaptic plasticity. Altered synaptic plasticity is often considered the neurochemical foundation of learning and memory.
Aging
Epigenetics plays a major role in brain aging and age-related cognitive decline, with relevance to life extension.
Other and general
In adulthood, changes in the epigenome are important for various higher cognitive functions. Dysregulation of epigenetic mechanisms is implicated in neurodegenerative disorders and diseases. Epigenetic modifications in neurons are dynamic and reversible. Epigenetic regulation impacts neuronal action, affecting learning, memory, and other cognitive processes.
Early events, including during embryonic development, can influence development, cognition, and health outcomes through epigenetic mechanisms.
Epigenetic mechanisms have been proposed as "a potential molecular mechanism for effects of endogenous hormones on the organization of developing brain circuits".
Nutrients could interact with the epigenome to "protect or boost cognitive processes across the lifespan".
A review suggests neurobiological effects of physical exercise via epigenetics seem "central to building an 'epigenetic memory' to influence long-term brain function and behavior" and may even be heritable.
With the axo-ciliary synapse, there is communication between serotonergic axons and antenna-like primary cilia of CA1 pyramidal neurons that alters the neuron's epigenetic state in the nucleus via signalling distinct from that at the plasma membrane, and over a longer term.
Epigenetics also plays a major role in the evolution of the brain in humans and in the lineage leading to humans.
Development
Developmental epigenetics can be divided into predetermined and probabilistic epigenesis. Predetermined epigenesis is a unidirectional movement from structural development in DNA to the functional maturation of the protein. "Predetermined" here means that development is scripted and predictable. Probabilistic epigenesis, on the other hand, is a bidirectional structure-function development in which experience and external factors mold development.
Somatic epigenetic inheritance, particularly through DNA and histone covalent modifications and nucleosome repositioning, is very important in the development of multicellular eukaryotic organisms. The genome sequence is static (with some notable exceptions), but cells differentiate into many different types, which perform different functions, and respond differently to the environment and intercellular signaling. Thus, as individuals develop, morphogens activate or silence genes in an epigenetically heritable fashion, giving cells a memory. In mammals, most cells terminally differentiate, with only stem cells retaining the ability to differentiate into several cell types ("totipotency" and "multipotency"). In mammals, some stem cells continue producing newly differentiated cells throughout life, such as in neurogenesis, but mammals are not able to respond to loss of some tissues, for example, the inability to regenerate limbs, which some other animals are capable of. Epigenetic modifications regulate the transition from neural stem cells to glial progenitor cells (for example, differentiation into oligodendrocytes is regulated by the deacetylation and methylation of histones). Unlike animals, plant cells do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. While plants do utilize many of the same epigenetic mechanisms as animals, such as chromatin remodeling, it has been hypothesized that some kinds of plant cells do not use or require "cellular memories", resetting their gene expression patterns using positional information from the environment and surrounding cells to determine their fate.
Epigenetic changes can occur in response to environmental exposure – for example, in mice, maternal dietary supplementation with genistein (250 mg/kg) produced epigenetic changes affecting expression of the agouti gene, which affects the offspring's fur color, weight, and propensity to develop cancer. Ongoing research is focused on exploring the impact of other known teratogens, such as diabetic embryopathy, on methylation signatures.
Controversial results from one study suggested that traumatic experiences might produce an epigenetic signal that is capable of being passed to future generations. Mice were trained, using foot shocks, to fear a cherry blossom odor. The investigators reported that the mouse offspring had an increased aversion to this specific odor. They suggested that the cause was epigenetic changes increasing the expression of M71, a gene that governs the functioning of an odor receptor in the nose that responds specifically to this cherry blossom smell, rather than changes in the DNA sequence itself. There were physical changes that correlated with olfactory (smell) function in the brains of the trained mice and their descendants. Several criticisms were reported, including the study's low statistical power, which was taken as evidence of some irregularity such as bias in reporting results. Due to limits of sample size, there is a probability that an effect will not be demonstrated to within statistical significance even if it exists. The criticism suggested that the probability that all the experiments reported would show positive results if an identical protocol was followed, assuming the claimed effects exist, is merely 0.4%. The authors also did not indicate which mice were siblings, and treated all of the mice as statistically independent. The original researchers pointed out negative results in the paper's appendix that the criticism omitted in its calculations, and undertook to track which mice were siblings in the future.
Transgenerational
Epigenetic mechanisms were a necessary part of the evolutionary origin of cell differentiation. Although epigenetics in multicellular organisms is generally thought to be a mechanism involved in differentiation, with epigenetic patterns "reset" when organisms reproduce, there have been some observations of transgenerational epigenetic inheritance (e.g., the phenomenon of paramutation observed in maize). Although most of these multigenerational epigenetic traits are gradually lost over several generations, the possibility remains that multigenerational epigenetics could be another aspect to evolution and adaptation.
As mentioned above, some define epigenetics as heritable.
A sequestered germ line or Weismann barrier is specific to animals, and epigenetic inheritance is more common in plants and microbes. Eva Jablonka, Marion J. Lamb and Étienne Danchin have argued that these effects may require enhancements to the standard conceptual framework of the modern synthesis and have called for an extended evolutionary synthesis. Other evolutionary biologists, such as John Maynard Smith, have incorporated epigenetic inheritance into population-genetics models or are openly skeptical of the extended evolutionary synthesis (Michael Lynch). Thomas Dickins and Qazi Rahman state that epigenetic mechanisms such as DNA methylation and histone modification are genetically inherited under the control of natural selection and therefore fit under the earlier "modern synthesis".
Two important ways in which epigenetic inheritance can differ from traditional genetic inheritance, with important consequences for evolution, are:
rates of epimutation can be much faster than rates of mutation
the epimutations are more easily reversible
In plants, heritable DNA methylation mutations are 100,000 times more likely to occur compared to DNA mutations. An epigenetically inherited element such as the PSI+ system can act as a "stop-gap", good enough for short-term adaptation that allows the lineage to survive for long enough for mutation and/or recombination to genetically assimilate the adaptive phenotypic change. The existence of this possibility increases the evolvability of a species.
More than 100 cases of transgenerational epigenetic inheritance phenomena have been reported in a wide range of organisms, including prokaryotes, plants, and animals. For instance, mourning-cloak butterflies will change color through hormone changes in response to experimental variation in temperature.
The filamentous fungus Neurospora crassa is a prominent model system for understanding the control and function of cytosine methylation. In this organism, DNA methylation is associated with relics of a genome-defense system called RIP (repeat-induced point mutation) and silences gene expression by inhibiting transcription elongation.
The yeast prion PSI is generated by a conformational change of a translation termination factor, which is then inherited by daughter cells. This can provide a survival advantage under adverse conditions, exemplifying epigenetic regulation which enables unicellular organisms to respond rapidly to environmental stress. Prions can be viewed as epigenetic agents capable of inducing a phenotypic change without modification of the genome.
Direct detection of epigenetic marks in microorganisms is possible with single molecule real time sequencing, in which polymerase sensitivity allows for measuring methylation and other modifications as a DNA molecule is being sequenced. Several projects have demonstrated the ability to collect genome-wide epigenetic data in bacteria.
Epigenetics in bacteria
While epigenetics is of fundamental importance in eukaryotes, especially metazoans, it plays a different role in bacteria. Most importantly, eukaryotes use epigenetic mechanisms primarily to regulate gene expression, which bacteria rarely do. However, bacteria make widespread use of postreplicative DNA methylation for the epigenetic control of DNA-protein interactions. Bacteria also use DNA adenine methylation (rather than DNA cytosine methylation) as an epigenetic signal. DNA adenine methylation is important in bacterial virulence in organisms such as Escherichia coli, Salmonella, Vibrio, Yersinia, Haemophilus, and Brucella. In Alphaproteobacteria, methylation of adenine regulates the cell cycle and couples gene transcription to DNA replication. In Gammaproteobacteria, adenine methylation provides signals for DNA replication, chromosome segregation, mismatch repair, packaging of bacteriophage, transposase activity and regulation of gene expression. There exists a genetic switch controlling Streptococcus pneumoniae (the pneumococcus) that allows the bacterium to randomly change its characteristics into six alternative states that could pave the way to improved vaccines. Each form is randomly generated by a phase variable methylation system. The ability of the pneumococcus to cause deadly infections is different in each of these six states. Similar systems exist in other bacterial genera. In Bacillota such as Clostridioides difficile, adenine methylation regulates sporulation, biofilm formation and host-adaptation.
Medicine
Epigenetics has many and varied potential medical applications.
Twins
Direct comparisons of identical twins constitute an optimal model for interrogating environmental epigenetics. In the case of humans with different environmental exposures, monozygotic (identical) twins were epigenetically indistinguishable during their early years, while older twins had remarkable differences in the overall content and genomic distribution of 5-methylcytosine DNA and histone acetylation. The twin pairs who had spent less of their lifetime together and/or had greater differences in their medical histories were those who showed the largest differences in their levels of 5-methylcytosine DNA and acetylation of histones H3 and H4.
Dizygotic (fraternal) and monozygotic (identical) twins show evidence of epigenetic influence in humans. DNA sequence differences that would be abundant in a singleton-based study do not interfere with the analysis. Environmental differences can produce long-term epigenetic effects, and different developmental monozygotic twin subtypes may be different with respect to their susceptibility to be discordant from an epigenetic point of view.
A high-throughput study, which denotes technology that looks at extensive genetic markers, focused on epigenetic differences between monozygotic twins to compare global and locus-specific changes in DNA methylation and histone modifications in a sample of 40 monozygotic twin pairs. In this case, only healthy twin pairs were studied, but a wide range of ages was represented, between 3 and 74 years. One of the major conclusions from this study was that there is an age-dependent accumulation of epigenetic differences between the two siblings of twin pairs. This accumulation suggests the existence of epigenetic "drift". Epigenetic drift is the term given to epigenetic modifications as they occur as a direct function of age. While age is a known risk factor for many diseases, age-related methylation has been found to occur differentially at specific sites along the genome. Over time, this can result in measurable differences between biological and chronological age. Epigenetic changes have been found to be reflective of lifestyle and may act as functional biomarkers of disease before a clinical threshold is reached.
A more recent study, where 114 monozygotic twins and 80 dizygotic twins were analyzed for the DNA methylation status of around 6000 unique genomic regions, concluded that epigenetic similarity at the time of blastocyst splitting may also contribute to phenotypic similarities in monozygotic co-twins. This supports the notion that microenvironment at early stages of embryonic development can be quite important for the establishment of epigenetic marks. Congenital genetic disease is well understood and it is clear that epigenetics can play a role, for example, in the case of Angelman syndrome and Prader–Willi syndrome. These are conventional genetic diseases caused by gene deletions or gene inactivation, but they are unusually common because, owing to genomic imprinting, individuals are essentially hemizygous at the affected loci; a single gene knockout is therefore sufficient to cause the disease, whereas for most genes both copies would need to be knocked out.
Genomic imprinting
Some human disorders are associated with genomic imprinting, a phenomenon in mammals where the father and mother contribute different epigenetic patterns for specific genomic loci in their germ cells. The best-known case of imprinting in human disorders is that of Angelman syndrome and Prader–Willi syndrome – both can be produced by the same genetic mutation, chromosome 15q partial deletion, and the particular syndrome that will develop depends on whether the mutation is inherited from the child's mother or from their father.
In the Överkalix study, paternal (but not maternal) grandsons of Swedish men who were exposed during preadolescence to famine in the 19th century were less likely to die of cardiovascular disease. If food was plentiful, then diabetes mortality in the grandchildren increased, suggesting that this was a transgenerational epigenetic inheritance. The opposite effect was observed for females – the paternal (but not maternal) granddaughters of women who experienced famine while in the womb (and therefore while their eggs were being formed) lived shorter lives on average.
Examples of drugs altering gene expression from epigenetic events
Examples include beta-lactam antibiotics, which can alter glutamate receptor activity, and cyclosporine, which acts on multiple transcription factors. Additionally, lithium can impact autophagy of aberrant proteins, and chronic use of opioid drugs can increase the expression of genes associated with addictive phenotypes.
Parental nutrition, in utero exposure to stress or to endocrine-disrupting chemicals, male-induced maternal effects such as the attraction of mates of differing quality, and maternal as well as paternal age and offspring gender could all possibly influence whether a germline epimutation is ultimately expressed in offspring and the degree to which intergenerational inheritance remains stable throughout posterity. However, whether and to what extent epigenetic effects can be transmitted across generations remains unclear, particularly in humans.
Addiction
Addiction is a disorder of the brain's reward system which arises through transcriptional and neuroepigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., morphine, cocaine, sexual intercourse, gambling). Transgenerational epigenetic inheritance of addictive phenotypes has been noted to occur in preclinical studies. However, robust evidence in support of the persistence of epigenetic effects across multiple generations has yet to be established in humans; for example, an epigenetic effect of prenatal exposure to smoking that is observed in great-grandchildren who had not been exposed.
Research
The two forms of heritable information, namely genetic and epigenetic, are collectively called dual inheritance. Members of the APOBEC/AID family of cytosine deaminases may concurrently influence genetic and epigenetic inheritance using similar molecular mechanisms, and may be a point of crosstalk between these conceptually compartmentalized processes.
Fluoroquinolone antibiotics induce epigenetic changes in mammalian cells through iron chelation. This leads to epigenetic effects through inhibition of α-ketoglutarate-dependent dioxygenases that require iron as a co-factor.
Various pharmacological agents are applied for the production of induced pluripotent stem cells (iPSC) or to maintain the embryonic stem cell (ESC) phenotype via an epigenetic approach. Adult stem cells such as bone marrow stem cells have also shown a potential to differentiate into cardiac-competent cells when treated with the G9a histone methyltransferase inhibitor BIX01294.
Cell plasticity, which is the adaptation of cells to stimuli without changes in their genetic code, requires epigenetic changes. These have been observed in cell plasticity in cancer cells during the epithelial-to-mesenchymal transition and also in immune cells, such as macrophages. Metabolic changes underlie these adaptations, since various metabolites play crucial roles in the chemistry of epigenetic marks. These include, for instance, alpha-ketoglutarate, which is required for histone demethylation, and acetyl-coenzyme A, which is required for histone acetylation.
Epigenome editing
Epigenetic marks that could be altered or exploited in epigenome editing include mRNA/lncRNA modification, DNA methylation and histone modification.
CpG sites, SNPs and biological traits
Methylation is a widely characterized mechanism of genetic regulation that can determine biological traits. However, strong experimental evidence correlates methylation patterns at SNPs as an important additional feature beyond the classical activation/inhibition epigenetic dogma. Molecular interaction data, supported by colocalization analyses, identify multiple nuclear regulatory pathways, linking sequence variation to disturbances in DNA methylation and to molecular and phenotypic variation.
UBASH3B locus
UBASH3B encodes a protein with tyrosine phosphatase activity, which has previously been linked to advanced neoplasia. SNP rs7115089 was identified as influencing DNA methylation and expression of this locus, as well as body mass index (BMI). In fact, SNP rs7115089 is strongly associated with BMI and with genetic variants linked to other cardiovascular and metabolic traits in GWASs. New studies suggest UBASH3B is a potential mediator of adiposity and cardiometabolic disease. In addition, animal models demonstrated that UBASH3B expression is an indicator of caloric restriction that may drive programmed susceptibility to obesity, and it is associated with other measures of adiposity in human peripheral blood.
NFKBIE locus
SNP rs730775 is located in the first intron of NFKBIE and is a cis eQTL for NFKBIE in whole blood. Nuclear factor (NF)-κB inhibitor ε (NFKBIE) directly inhibits NF-κB1 activity and is significantly co-expressed with NF-κB1; it is also associated with rheumatoid arthritis. Colocalization analysis supports the view that, for the majority of the CpG sites involved, genetic variation at the NFKBIE locus tagged by SNP rs730775 is linked to rheumatoid arthritis through trans-acting regulation of DNA methylation by NF-κB.
FADS1 locus
Fatty acid desaturase 1 (FADS1) is a key enzyme in the metabolism of fatty acids. Moreover, rs174548 in the FADS1 gene shows increased correlation with DNA methylation in people with a high abundance of CD8+ T cells. SNP rs174548 is strongly associated with concentrations of arachidonic acid and other metabolites in fatty acid metabolism, with blood eosinophil counts, and with inflammatory diseases such as asthma. Interaction results indicated a correlation between rs174548 and asthma, providing new insights into fatty acid metabolism in CD8+ T cells and its relation to immune phenotypes.
Pseudoscience
As epigenetics is in the early stages of development as a science and is surrounded by sensationalism in the public media, David Gorski and geneticist Adam Rutherford have advised caution against the proliferation of false and pseudoscientific conclusions by new age authors making unfounded suggestions that a person's genes and health can be manipulated by mind control. Misuse of the scientific term by quack authors has produced misinformation among the general public.
See also
Baldwin effect
Behavioral epigenetics
Biological effects of radiation on the epigenome
Computational epigenetics
Contribution of epigenetic modifications to evolution
DAnCER database (2010)
Epigenesis (biology)
Epigenetics in forensic science
Epigenetics of autoimmune disorders
Epiphenotyping
Epigenetic therapy
Epigenetics of neurodegenerative diseases
Genetics
Lamarckism
Nutriepigenomics
Position-effect variegation
Preformationism
Somatic epitype
Synthetic genetic array
Sleep epigenetics
Transcriptional memory
Transgenerational epigenetic inheritance
References
Further reading
External links
The Human Epigenome Project (HEP)
The Epigenome Network of Excellence (NoE)
Canadian Epigenetics, Environment and Health Research Consortium (CEEHRC)
The Epigenome Network of Excellence (NoE) – public international site
"DNA Is Not Destiny" – Discover magazine cover story
"The Ghost In Your Genes", Horizon (2005), BBC
Epigenetics article at Hopkins Medicine
Towards a global map of epigenetic variation
Genetic mapping
Lamarckism | Epigenetics | Biology | 13,929 |
7,530,857 | https://en.wikipedia.org/wiki/Resonant-tunneling%20diode | A resonant-tunneling diode (RTD) is a diode with a resonant-tunneling structure in which electrons can tunnel through some resonant states at certain energy levels. The current–voltage characteristic often exhibits negative differential resistance regions.
All types of tunneling diodes make use of quantum mechanical tunneling.
Characteristic to the current–voltage relationship of a tunneling diode is the presence of one or more negative differential resistance regions, which enables many unique applications. Tunneling diodes can be very compact and are also capable of ultra-high-speed operation because the quantum tunneling effect through the very thin layers is a very fast process. One area of active research is directed toward building oscillators and switching devices that can operate at terahertz frequencies.
Introduction
An RTD can be fabricated using many different types of materials (such as III–V, type IV, II–VI semiconductor) and different types of resonant tunneling structures, such as the heavily doped p–n junction in Esaki diodes, double barrier, triple barrier, quantum well, or quantum wire. The structure and fabrication process of Si/SiGe resonant interband tunneling diodes are suitable for integration with modern Si complementary metal–oxide–semiconductor (CMOS) and Si/SiGe heterojunction bipolar technology.
One type of RTD is formed as a single quantum well structure surrounded by very thin barrier layers. This structure is called a double barrier structure. Carriers such as electrons and holes can only have discrete energy values inside the quantum well. When a voltage is placed across an RTD, a terahertz wave is emitted while the energy level inside the quantum well is aligned with the energy of the emitter side. As the voltage is increased, the terahertz wave dies out because the energy level in the quantum well moves outside the range of emitter energies.
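As a rough guide to where these discrete levels lie (an idealization, not a device calculation), the infinitely-deep-well approximation gives

E_n = \frac{n^2 \pi^2 \hbar^2}{2 m^* L^2}, \qquad n = 1, 2, 3, \ldots

where m* is the carrier effective mass and L the well width. For an illustrative 5 nm GaAs well with m* ≈ 0.067 m_e this gives E_1 ≈ 0.22 eV; the finite barriers of a real AlAs/GaAs device lower this value somewhat.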
Another feature seen in RTD structures is negative differential resistance on application of bias, as can be seen in the image generated from Nanohub. The formation of negative resistance is examined in detail in the Operation section below.
This structure can be grown by molecular beam heteroepitaxy. GaAs and AlAs in particular are used to form this structure; AlAs/InGaAs or InAlAs/InGaAs heterostructures can also be used.
The operation of electronic circuits containing RTDs can be described by a Liénard system of equations, which are a generalization of the Van der Pol oscillator equation.
Operation
The following process is also illustrated in the figure to the right. Depending on the number of barriers and the number of confined states inside the well, the process described below can repeat.
Positive resistance region
For low bias, as the bias increases, the 1st confined state between the potential barriers gets closer to the source Fermi level, so the current it carries increases.
Negative resistance region
As the bias increases further, the 1st confined state becomes lower in energy and gradually goes into the energy range of bandgap, so the current it carries decreases. At this time, the 2nd confined state is still too high above in energy to conduct significant current.
2nd positive resistance region
Similar to the first region, as the 2nd confined state becomes closer and closer to the source Fermi level, it carries more current, causing the total current to increase again.
Intraband resonant tunneling
In quantum tunneling through a single barrier, the transmission coefficient, or the tunneling probability, is always less than one (for incoming particle energy less than the potential barrier height). Considering a potential profile which contains two barriers (which are located close to each other), one can calculate the transmission coefficient (as a function of the incoming particle energy) using any of the standard methods.
Tunneling through a double barrier was first solved in the Wentzel-Kramers-Brillouin (WKB) approximation by David Bohm in 1951, who pointed out the resonances in the transmission coefficient occur at certain incident electron energies. It turns out that, for certain energies, the transmission coefficient is equal to one, i.e. the double barrier is totally transparent for particle transmission. This phenomenon is called resonant tunneling. It is interesting that while the transmission coefficient of a potential barrier is always lower than one (and decreases with increasing barrier height and width), two barriers in a row can be completely transparent for certain energies of the incident particle.
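As an illustrative numerical check (a sketch, not taken from the works cited here), the transmission through a piecewise-constant double-barrier potential can be computed with a standard transfer-matrix calculation; the dimensionless units (ħ²/2m = 1), barrier heights and layer widths below are arbitrary choices.

```python
import numpy as np

def wavevector(E, V):
    """Complex wave vector in a region of constant potential V (units with hbar^2/2m = 1)."""
    return np.sqrt(complex(E - V, 0.0))

def boundary(k, x):
    """Plane-wave values and derivatives at position x, used to match the wavefunction."""
    return np.array([[np.exp(1j * k * x),             np.exp(-1j * k * x)],
                     [1j * k * np.exp(1j * k * x), -1j * k * np.exp(-1j * k * x)]])

def transmission(E, potentials, edges):
    """Transmission coefficient through a piecewise-constant potential profile."""
    ks = [wavevector(E, V) for V in potentials]
    M = np.eye(2, dtype=complex)
    for j, x in enumerate(edges):
        # enforce continuity of psi and psi' at the interface between regions j and j+1
        M = np.linalg.inv(boundary(ks[j + 1], x)) @ boundary(ks[j], x) @ M
    r = -M[1, 0] / M[1, 1]            # reflection amplitude (no wave incident from the right)
    t = M[0, 0] + M[0, 1] * r         # transmission amplitude
    return abs(t) ** 2 * ks[-1].real / ks[0].real

# Symmetric double barrier: barriers of height 1.0 and width 1.0 enclosing a well of width 4.0
potentials = [0.0, 1.0, 0.0, 1.0, 0.0]
edges = [0.0, 1.0, 5.0, 6.0]
energies = np.linspace(0.01, 0.99, 981)
T = np.array([transmission(E, potentials, edges) for E in energies])
print(f"Peak T = {T.max():.3f} at E = {energies[T.argmax()]:.3f}, below the barrier height of 1.0")
```

For a symmetric structure the transmission reaches essentially unity at the quasi-bound levels of the well, which is the resonance described above.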
Later, in 1964, L. V. Iogansen discussed the possibility of resonant transmission of an electron through double barriers formed in semiconductor crystals. In the early 1970s, Tsu, Esaki, and Chang computed the two terminal current-voltage (I-V) characteristic of a finite superlattice, and predicted that resonances could be observed not only in the transmission coefficient but also in the I-V characteristic. Resonant tunneling also occurs in potential profiles with more than two barriers. Advances in the MBE technique led to observation of negative differential conductance (NDC) at terahertz frequencies, as reported by Sollner et al. in the early 1980s. This triggered a considerable research effort to study tunneling through multi-barrier structures.
The potential profiles required for resonant tunneling can be realized in semiconductor systems using heterojunctions, which utilize semiconductors of different types to create potential barriers or wells in the conduction band or the valence band.
III-V resonant tunneling diodes
Resonant tunneling diodes are typically realized in III-V compound material systems, where heterojunctions made up of various III-V compound semiconductors are used to create the double or multiple potential barriers in the conduction band or valence band. Reasonably high performance III-V resonant tunneling diodes have been realized. Such devices have not entered mainstream applications yet because the processing of III-V materials is incompatible with Si CMOS technology and the cost is high.
Most semiconductor optoelectronics use III-V semiconductors, so it is possible to combine III-V RTDs with them to make optoelectronic integrated circuits (OEICs) that use the negative differential resistance of the RTD to provide electrical gain for optoelectronic devices. Recently, the device-to-device variability in an RTD's current–voltage characteristic has been used as a way to uniquely identify electronic devices, in what is known as a quantum confinement physical unclonable function (QC-PUF). Spiking behaviour in RTDs is under investigation for optical neuromorphic computing.
Si/SiGe resonant tunneling diodes
Resonant tunneling diodes can also be realized using the Si/SiGe materials system. Both hole tunneling and electron tunneling have been observed. However, the performance of Si/SiGe resonant tunneling diodes was limited due to the limited conduction band and valence band discontinuities between Si and SiGe alloys. Resonant tunneling of holes through Si/SiGe heterojunctions was attempted first because of the typically relatively larger valence band discontinuity in Si/SiGe heterojunctions than the conduction band discontinuity for (compressively) strained Si1−xGex layers grown on Si substrates. Negative differential resistance was only observed at low temperatures but not at room temperature. Resonant tunneling of electrons through Si/SiGe heterojunctions was obtained later, with a limited peak-to-valley current ratio (PVCR) of 1.2 at room temperature. Subsequent developments have realized Si/SiGe RTDs (electron tunneling) with a PVCR of 2.9 at a peak current density (PCD) of 4.3 kA/cm2, and a PVCR of 2.43 at a PCD of 282 kA/cm2, at room temperature.
Interband resonant tunneling diodes
Resonant interband tunneling diodes (RITDs) combine the structures and behaviors of both intraband resonant tunneling diodes (RTDs) and conventional interband tunneling diodes, in which electronic transitions occur between the energy levels in the quantum wells in the conduction band and that in the valence band. Like resonant tunneling diodes, resonant interband tunneling diodes can be realized in both the III-V and Si/SiGe materials systems.
III-V RITDs
In the III-V materials system, InAlAs/InGaAs RITDs with peak-to-valley current ratios (PVCRs) higher than 70 and as high as 144 at room temperature and Sb-based RITDs with room temperature PVCR as high as 20 have been obtained. The main drawback of III-V RITDs is the use of III-V materials whose processing is incompatible with Si processing and is expensive.
Si/SiGe RITDs
In Si/SiGe materials system, Si/SiGe resonant interband tunneling diodes have also been developed which have the potential of being integrated into the mainstream Si integrated circuits technology.
Structure
The five key points to the design are:
(i) an intrinsic tunneling barrier,
(ii) delta-doped injectors,
(iii) offset of the delta-doping planes from the heterojunction interfaces,
(iv) low temperature molecular beam epitaxial growth (LTMBE), and
(v) postgrowth rapid thermal annealing (RTA) for activation of dopants and reduction of density of point defects.
Performance
A minimum PVCR of about 3 is needed for typical circuit applications. Low current density Si/SiGe RITDs are suitable for low-power memory applications, and high current density tunnel diodes are needed for high-speed digital/mixed-signal applications. Si/SiGe RITDs have been engineered to have room temperature PVCRs up to 4.0. The same structure was duplicated by another research group using a different MBE system, and PVCRs of up to 6.0 have been obtained. In terms of peak current density, values ranging from as low as 20 mA/cm2 to as high as 218 kA/cm2, spanning seven orders of magnitude, have been achieved. A resistive cut-off frequency of 20.2 GHz has been realized on photolithography-defined SiGe RITDs followed by wet etching to further reduce the diode size, which should improve when even smaller RITDs are fabricated using techniques such as electron beam lithography.
Integration with Si/SiGe CMOS and heterojunction bipolar transistors
Integration of Si/SiGe RITDs with Si CMOS has been demonstrated. Vertical integration of Si/SiGe RITD and SiGe heterojunction bipolar transistors was also demonstrated, realizing a 3-terminal negative differential resistance circuit element with adjustable peak-to-valley current ratio. These results indicate that Si/SiGe RITDs are a promising candidate for integration with Si integrated circuit technology.
Other Applications
Other applications of SiGe RITD have been demonstrated using breadboard circuits, including multi-state logic.
References
External links
For information on Optoelectronic applications of RTDs see http://userweb.elec.gla.ac.uk/i/ironside/RTD/RTDOpto.html.
Resonant Tunneling Diode Simulation Tool on Nanohub enables the simulation of resonant tunneling diodes under realistic bias conditions for realistically extended devices.
Terahertz technology
Diodes | Resonant-tunneling diode | Physics | 2,393 |
7,770,329 | https://en.wikipedia.org/wiki/Seidel%20adjacency%20matrix | In mathematics, in graph theory, the Seidel adjacency matrix of a simple undirected graph G is a symmetric matrix with a row and column for each vertex, having 0 on the diagonal, −1 for positions whose rows and columns correspond to adjacent vertices, and +1 for positions corresponding to non-adjacent vertices.
It is also called the Seidel matrix or—its original name—the (−1,1,0)-adjacency matrix.
It can be interpreted as the result of subtracting the adjacency matrix of G from the adjacency matrix of the complement of G.
The multiset of eigenvalues of this matrix is called the Seidel spectrum.
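A brief sketch of this construction (the graph chosen here, the path on three vertices, is an arbitrary example):

```python
import numpy as np

def seidel_matrix(A):
    """Seidel adjacency matrix of a simple undirected graph given its 0/1 adjacency matrix A."""
    n = A.shape[0]
    J = np.ones((n, n), dtype=int)
    I = np.eye(n, dtype=int)
    return J - I - 2 * A          # 0 on the diagonal, -1 where adjacent, +1 where non-adjacent

A = np.array([[0, 1, 0],          # path graph: edges 0-1 and 1-2
              [1, 0, 1],
              [0, 1, 0]])
S = seidel_matrix(A)
A_complement = np.ones_like(A) - np.eye(3, dtype=int) - A
assert (S == A_complement - A).all()     # adjacency matrix of the complement minus that of G
print(S)
print("Seidel spectrum:", np.linalg.eigvalsh(S))
```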
The Seidel matrix was introduced by J. H. van Lint and J. J. Seidel in 1966 and extensively exploited by Seidel and coauthors.
The Seidel matrix of G is also the adjacency matrix of a signed complete graph KG in which the edges of G are negative and the edges not in G are positive. It is also the adjacency matrix of the two-graph associated with G and KG.
The eigenvalue properties of the Seidel matrix are valuable in the study of strongly regular graphs.
References
van Lint, J. H., and Seidel, J. J. (1966), Equilateral point sets in elliptic geometry. Indagationes Mathematicae, vol. 28 (= Proc. Kon. Ned. Aka. Wet. Ser. A, vol. 69), pp. 335–348.
Seidel, J. J. (1976), A survey of two-graphs. In: Colloquio Internazionale sulle Teorie Combinatorie (Proceedings, Rome, 1973), vol. I, pp. 481–511. Atti dei Convegni Lincei, No. 17. Accademia Nazionale dei Lincei, Rome.
Seidel, J. J. (1991), ed. D.G. Corneil and R. Mathon, Geometry and Combinatorics: Selected Works of J. J. Seidel. Boston: Academic Press. Many of the articles involve the Seidel matrix.
Seidel, J. J. (1968), Strongly Regular Graphs with (−1,1,0) Adjacency Matrix Having Eigenvalue 3. Linear Algebra and its Applications 1, 281–298.
Algebraic graph theory
Matrices | Seidel adjacency matrix | Mathematics | 529 |
15,403,569 | https://en.wikipedia.org/wiki/HD%20122563 | HD 122563 is an extremely metal-poor red giant star, and the brightest known metal-poor star in the sky. Its low heavy element content was first recognized by spectroscopic analysis in 1963. For more than twenty years it was the most metal-poor star known, being more metal-poor than any known globular cluster, and it is the most accessible example of an extreme population II or Halo star.
As the most extreme metal-poor star known, HD 122563's composition was crucial in constraining theories for galactic chemical evolution; in particular, its composition peculiarities provided signposts for understanding the accumulation of heavy elements by stellar nucleosynthesis in the Galaxy. For example, it has an excess of oxygen, [O/Fe] = +0.6, while the proportions of strontium, yttrium, zirconium, barium and the lanthanide elements suggest that the s-process has made no contribution to the material present in the star: in HD 122563, all these elements are products of the r-process instead. The implication is that the star formed at a time and place where there had not been enough time for any previous generation of stars to have produced s-process elements, though there was r-process material present.
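Here the square-bracket notation is the standard astronomical abundance convention (not specific to this star): [X/Y] = log10(N_X/N_Y)_star − log10(N_X/N_Y)_Sun. An [O/Fe] of +0.6 therefore means the oxygen-to-iron ratio in HD 122563 is about 10^0.6 ≈ 4 times the solar ratio, even though both elements are individually far less abundant than in the Sun.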
Spectral type
The spectral type of HD 122563 is one of the characteristics that initially indicated its peculiarity. In the Bright Star Catalogue its spectral type is given as F8 IV, but its color index indicates a surface temperature much cooler than an F8 star should be. Because the spectral type of a star in the A to K star regime is judged by the relative strengths of the absorption lines of the metals relative to the hydrogen Balmer lines, the extreme metal deficiency results in weak metal lines and yields a spuriously early spectral type. If the spectral classification is performed including the metal deficiency, the result is a rather later type, G8:III: Fe-5.
References
External links
HR 5270
Image HD 122563
122563
5270
Boötes
068594
G-type giants
Population II stars
Durchmusterung objects
K-type subgiants | HD 122563 | Astronomy | 446 |
2,902,748 | https://en.wikipedia.org/wiki/10%20Arietis | 10 Arietis is a binary star system in the northern constellation of Aries. 10 Arietis is the Flamsteed designation. It is visible to the naked eye as a dim, yellow-white hued star with a combined apparent visual magnitude of 5.63. Based upon parallax measurements, it is located around 159 light years away from the Sun. The system is receding from the Earth with a heliocentric radial velocity of +12.9 km/s.
The pair orbit each other with a period of approximately 325 years and an eccentricity of 0.59. The semimajor axis of the orbit has an angular size of . The magnitude 5.92 primary, designated component A, is an aging F-type subgiant star with a stellar classification of F8 IV. The secondary star, component B, is a magnitude 7.95 F-type main-sequence star with a stellar classification of F9 V. There is a magnitude 13.5 visual companion, designated component C, at an angular separation of along a position angle of 150°, as of 2001.
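For a visual binary like this one, the orbit and the distance determine the total mass through Kepler's third law (a general relation, not a value given in this article): with the semimajor axis a expressed in astronomical units, the period P in years, and the masses in solar masses, M_1 + M_2 = a^3 / P^2, where a follows from the angular semimajor axis θ in arcseconds and the distance d in parsecs as a = θ × d.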
References
External links
HR 605 in the Bright Star Catalogue
CCDM J02037+2556
Image 10 Arietis
F-type main-sequence stars
F-type subgiants
Arietis, 10
Binary stars
Aries (constellation)
BD+25 0341
Arietis, 10
012558
009621
0605 | 10 Arietis | Astronomy | 299 |
51,742 | https://en.wikipedia.org/wiki/Drawing%20board | A drawing board (also drawing table, drafting table or architect's table) is, in its antique form, a kind of multipurpose desk which can be used for any kind of drawing, writing or impromptu sketching on a large sheet of paper or for reading a large format book or other oversized document or for drafting precise technical illustrations (such as engineering drawings or architectural drawings). The drawing table used to be a frequent companion to a pedestal desk in a study or private library, during the pre-industrial and early industrial era.
During the Industrial Revolution, draftsmanship gradually became a specialized trade and drawing tables slowly moved out of the libraries and offices of most gentlemen. They became more utilitarian and were built of steel and plastic instead of fine woods and brass.
More recently, engineers and draftsmen use the drawing board for making and modifying drawings on paper with ink or pencil. Different drawing instruments (set square, protractor, etc.) are used on it to draw parallel, perpendicular or oblique lines. There are instruments for drawing circles, arcs, other curves and symbols too (compass, French curve, stencil, etc.). However, with the gradual introduction of computer aided drafting and design (CADD or CAD) in the last decades of the 20th century and the first of the 21st century, the drawing board is becoming less common.
A drawing table is also sometimes called a mechanical desk because, for several centuries, most mechanical desks were drawing tables. Unlike the gadgety mechanical desks of the second part of the 18th century, however, the mechanical parts of drawing tables were usually limited to notches, ratchets, and perhaps a few simple gears, or levers or cogs to elevate and incline the working surface.
Very often a drawing table could look like a writing table or even a pedestal desk when the working surface was set at the horizontal and the height adjusted to 29 inches, in order to use it as a "normal" desk. The only giveaway was usually a lip on one of the sides of the desktop. This lip or edge stopped paper or books from sliding when the surface was given an angle. It was also sometimes used to hold writing implements. When the working surface was extended at its full height, a drawing table could be used as a standing desk.
Many reproductions have been made and are still being produced of drawing tables, copying the period styles they were originally made in during the 18th and 19th centuries.
History
In the 18th and 19th centuries, drawing paper was dampened and then its edges glued to the drawing board. After drying the paper would be flat and smooth. The completed drawing was then cut free. Paper could also be secured to the drawing board with drawing pins or even C-clamps. More recent practice is to use self-adhesive drafting tape to secure paper to the board, including the sophisticated use of individualized adhesive dots from a dispensing roll. Some drawing boards are magnetized, allowing paper to be held down by long steel strips. Boards used for overlay drafting or animation may include registration pins or peg bars to ensure alignment of multiple layers of drawing media.
Contemporary drafting tables
Despite the prevalence of computer aided drafting, many older architects and even some structural designers still rely on paper and pencil graphics produced on a drafting table.
Modern drafting tables typically rely on a steel frame. Steel provides as much strength as the old oak drafting table frames and much easier portability. Typically the drafting board surface is a thick sheet of compressed fibreboard with sheets of Formica laminated to all its surfaces. The drafting board surface is usually secured to the frame by screws which can easily be removed for drafting table transportation.
The steel frame allows mechanical linkages to be installed that control both the height and angle of the drafting board surface. Typically, a single foot pedal is used to control a clutch which clamps the board in the desired position. A heavy counterweight full of lead shot is installed in the steel linkage so that if the pedal is accidentally released, the drafting board will not spring into the upright position and injure the user. Drafting table linkages and clutches have to be maintained to ensure that this safety mechanism counterbalances the weight of the table surface.
The drafting table surface is usually covered with a thin vinyl sheet called a board cover. This provides an optimum surface for pen and pencil drafting. It allows compasses and dividers to be used without damaging the wooden surface of the board. A board cover must be frequently cleaned to prevent graphite buildup from making new drawings dirty. At the bottom edge of the table, a single strip of aluminum or steel may serve as a place to rest drafting pencils. More purpose-built trays are also used which hold pencils even while the board is being adjusted.
Various types of drafting machine may be attached to the board surface to assist the draftsperson or artist. Parallel rules often span the entire width of the board and are so named because they remain parallel to the top edge of the board as they are moved up and down. Drafting machines use pre-calibrated scales and built in protractors to allow accurate drawing measurement.
Some drafting tables incorporate electric motors to provide the up and down and angle adjustment of the drafting table surface. These tables are at least as heavy as the original oak and brass drafting tables and so sacrifice portability for the convenience of push button table adjustment.
Modern-day idiom
The expression "back to the drawing board" is used when a plan or course of action needs to be changed, often drastically; usually due to a very unsuccessful result; e.g., "The battle plan, the result of months of conferences, failed because the enemy retreated too far back. It was back to the drawing board for the army captains."
The phrase was coined in the caption to a Peter Arno cartoon of The New Yorker of March 1, 1941.
See also
List of desk forms and types
Studio
Surface computing
Drafting machine
Technical drawing tools
Plane table
References
External links
Drafting Table Use and Care
Architectural communication
Furniture
Tables (furniture)
Technical drawing tools | Drawing board | Engineering | 1,238 |
4,945,092 | https://en.wikipedia.org/wiki/Digital%20magnetofluidics | Digital magnetofluidics is a method for moving, combining, splitting, and controlling drops of water or biological fluids using magnetic fields. This is accomplished by adding superparamagnetic particles to a drop placed on a superhydrophobic surface. Normally this type of surface would exhibit a lotus effect and the drop of water would roll or slide off. But by using magnetic fields, the drop is stabilized and its movements and structure can be controlled.
References
A. Egatz-Gomez, S. Melle, A.A. García, S. Lindsay, M.A. Rubio, P. Domínguez, T. Picraux, J. Taraci, T. Clement, and M. Hayes, “Superhydrophobic Nanowire Surfaces for Drop Movement Using Magnetic Fields,” in Proc. NSTI Nanotechnology Conference and Trade Show, 2006, pp. 501–504.
Fluid mechanics | Digital magnetofluidics | Engineering | 193 |
30,507,978 | https://en.wikipedia.org/wiki/Mu%20Ophiuchi | μ Ophiuchi, Latinized as Mu Ophiuchi, is a solitary, blue-white hued star in the equatorial constellation of Ophiuchus. It is visible to the naked as a faint point of light with an apparent visual magnitude of 4.62. This object is located approximately 760 light years away from the Sun based on parallax, but is drifting closer with a radial velocity of −18.5 km/s.
This object has a stellar classification of B8II-IIIp:Mn, showing a luminosity class with mixed traits of a giant or bright giant star. The suffix notation indicates it is a candidate chemically peculiar star with an overabundance of manganese in its spectrum. It may be a mercury-manganese star. This object has 11 times the radius of the Sun and is radiating nearly 400 times the Sun's luminosity from its photosphere at an effective temperature of 7,748 K. It is spinning with a projected rotational velocity of 95 km/s.
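As a consistency check (a standard application of the Stefan–Boltzmann law, not a calculation from the article), the quoted radius and temperature reproduce the quoted luminosity: L/L_sun = (R/R_sun)^2 × (T_eff/T_sun)^4 ≈ 11^2 × (7,748 K / 5,772 K)^4 ≈ 3.9 × 10^2, i.e. roughly 400 times the Sun's luminosity, in agreement with the value stated above.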
In 2006, a new nearby star cluster, Mamajek 2 (), was discovered. Mu Ophiuchi is a candidate member. The cluster has an estimated age of million years.
References
B-type bright giants
B-type giants
Ophiuchus
Ophiuchi, Mu
BD-08 4472
Ophiuchi, 57
159975
086284
6567 | Mu Ophiuchi | Astronomy | 292 |
3,600,229 | https://en.wikipedia.org/wiki/Curb | A curb (American English) or kerb (British English) is the edge where a raised sidewalk or road median/central reservation meets a street or other roadway.
History
Although curbs have been used throughout modern history, and indeed were present in ancient Pompeii, their widespread construction and use only began in the 18th century, as a part of the various movements towards city beautification that were attempted in the period.
A series of Paving Acts in the 18th century, especially the 1766 Paving and Lighting Act, authorized the City of London Corporation to create footways along the streets of London, pave them with Purbeck stone (the thoroughfare in the middle was generally cobblestone) and raise them above street level with curbs forming the separation. The corporation was also made responsible for the regular upkeep of the roads, including their cleaning and repair, for which they charged a tax from 1766.
Previously, small wooden bollards had been put up to demarcate the area of the street reserved for pedestrian use. By the late 18th century, this method of separating pedestrians from carriageways had largely been supplanted by the use of curbs. With the introduction of macadam roads in the early 19th-century, curbs became ubiquitous in the streets of London.
Curbs present an obstacle for accessibility to physically disabled persons in public spaces. In 1945, Jack Fisher of Kalamazoo, Michigan, celebrated the installation of one of the nation's first curb cuts to facilitate mobility in the center of the city. In the United States, activism and passage of federal legislation on accessibility requirements such as the Americans with Disabilities Act of 1990 (ADA) have facilitated travel for wheelchair users and other people.
Function
Curbs may fulfill any or several of a number of functions. By delineating the edge of the pavement, they separate the road from the roadside and discourage drivers from parking or driving on sidewalks and lawns. They also provide structural support to the pavement edge. Curbs can be used to channel runoff water from rain or melted snow and ice into storm drains.
There is also an aesthetic aspect, in that curbs look formal and "finished".
Since curbs add to the cost of a road, they are generally limited to urban and suburban areas and are rarely found in rural areas except where certain drainage conditions (such as mountains or culverts) make them necessary. Curbs are not universally used, however, even in urban settings (see living street).
Safety
In low-speed environments, curbs are effective at channeling motor vehicle traffic and can provide some redirective capacity for low-speed impacts.
On higher speed roads, the main function of curbs is to provide drainage, and they are mostly used in areas of a bridge approach or other locations with erosion risk.
A high-speed vehicle that hits a curb may actually turn towards the sidewalk, rather than be directed away from it. A vehicle that strikes a curb can also be tripped into a rollover crash or vaulted into the air. The vehicle could be vaulted over a traffic barrier into the object the barrier is intended to shield. This is a reason why they are rarely used on rural or high-speed roads. Where a curb is used with a traffic barrier, the barrier should either be close to or well behind the curb, to reduce the chances of a vehicle going over the barrier.
Depending on the area and the distance between the travel lane and the edge of the pavement, an edge line can be used to indicate the outside (shoulder) edge of the road. Retroreflective road marking material can also be applied to the curb itself to make it more conspicuous.
Curbs are also meant to inform pedestrians to stop or slow down as they prepare to cross roadways. For example, cultural context and behavioral norms of a society may affect safety in that people are more likely to cross on a red light while standing alone than waiting with others at the curb.
Types of curb
There are a number of types of curbs, categorized by shape, material, height, and whether the curb is combined with a gutter. Most curb is constructed separately from the pavement, and the gutter is formed at the joint between the roadway and the curb. The combined curb and gutter (also called "curb and channel") has a concrete curb and gutter cast together in one piece. "Integral curb" is curbing constructed integrally as a part of the concrete pavement.
Shape
Curbs often have a vertical or nearly-vertical face, also called "barrier", "non-mountable", or "insurmountable curb". A vertical-faced curb is used to discourage motor vehicle drivers from leaving the roadway. The square (90°-edge) or close-to-square type is still almost always used in towns and cities, as it is a straight step down and thus less likely to be tripped-over by pedestrians. By contrast, a slope-faced curb allows motor vehicles to cross it at low speed. Slope-faced curb is most often used on major suburban thoroughfares.
In certain locales, such as California, there is an effort to standardize the design to achieve efficiencies in construction and lower costs. Trends include using a gutter that balances the increased initial price with lower maintenance costs.
At crosswalks and other pedestrian crossings, narrow dropped curb cuts are used to allow small wheeled vehicles such as wheelchairs, children's tricycles, prams, and strollers to cross. This makes it easier to traverse for some pedestrians, and especially for those in wheelchairs. Wider curb cuts are also used to allow motor vehicles to cross sidewalks at low speed, typically for driveways.
In Great Britain, "high containment kerbs" are used at locations with pedestrians, fuel station pumps, and other areas that need greater protection from vehicle traffic. These are high - much higher than standard curb, with a sloped lower portion and a concave face. These are also known as "trief" curbs.
Rounded curbs are most often used at driveways, and continuously along suburban residential streets where there are many driveways and the sidewalk has a grassy setback from the street. This type of curbing starts out nearly flat like the road, curves up in a concave manner to a gentle slope, then curves back in a convex manner to nearly flat again, making it much easier to drive over, and is also known as a "rolled" or "mountable curb" in some localities. These types of curbs are preferred by builders because they are less expensive than installing straight curbs and gutters. They are easier to lay using concrete and require less forming as steel templates can be used with only front and back forms needed. Their use also eliminates the need for driveway cuts, curbs, and aprons, thus further reducing costs.
Material
Curbs are constructed of many materials, including asphalt, stone, or masonry blocks, but most often are made of Portland cement concrete. The type of material may depend on the type of paving material used for the road and the desired function or need. For example, a Portland concrete curb used with an asphalt concrete road surface provides a highly visible barrier at the edge of the road surface. Other types of curb material include stone slabs, cobblestone, and manufactured pavers.
A concrete curb may be constructed by setting forms by hand, filling them, letting them set up, and then removing the forms. When large quantities of curb are to be constructed, it is often more efficient to use a slip form casting machine. Curbs can also be precast at a central location and trucked to the construction site.
Asphalt curb is usually made with a paving machine. It can be cheaper if it is formed at the same time that a road is paved, but is less durable than a concrete curb.
Stone curb, often made from granite, is durable and resistant to de-icing salt. It is also chosen for aesthetic reasons. In areas where granite is available, it may be cheaper than concrete curb. One disadvantage of granite curb is that it can cut a tire sidewall if it is rough-faced.
Belgian block curbs are made by placing blocks over a concrete slip. Then, more concrete is wedged in between the blocks to hold them together. These blocks can be vertical or angled in order to create a mountable curb.
Height
When designing a curbed roadway, engineers specify the "reveal" or "lip". The reveal is the height of the section that is visible (revealed) above the road surface. Typical reveals are in the range. Curbs at handicapped curb cuts (or "kerb ramps", for example in Australia) should have no reveal. One of the recommendations has been using a 4/12 batter in to accommodate automobile design because steeper batters tend to interfere with body trim, hubcaps, and lower door edges while curb faces in excess of in height may prevent the full opening of car doors. Most curb extends down into the ground below the pavement surface, to improve their stability over time. The total height, including the buried portion, is often .
Integral gutter
Curbs with integral gutters are used where better hydraulic flow performance is needed. However, this places a longitudinal joint (parallel to the direction of travel) near where bicyclists often ride. If the main roadway and gutter settle differently over time, the vertical edge that develops at the joint can cause a hazard for bicyclists.
Paint
In some places curbs are painted to increase visibility or mark a special street side.
Auto racing curbs
In auto racing, curbs are flat curbstones lining the corners or chicanes of racing tracks. They are often painted red and white, and are intended to prevent unauthorized short-cuts and keep the racers safely on the track. Although they are not considered part of the racing track, drivers sometimes "ride the curbs" in order to maintain momentum and gain a time advantage in cornering.
Cultural identifiers
In certain parts of the world, curb design signifies cultural association. For example, in countries of the former Portuguese Empire, such as Brazil, curbs are often distinctively decorated (along with the footpaths more broadly) in a style known as Portuguese pavement, marking a clear link to Portugal, where the style originated. More explicitly, curbstones can be painted (by official sanction or otherwise) to stress an identity or ideology; for instance, in Northern Ireland, curbstones are frequently painted in communities to identify a religious/political affiliation – typically either red, white, and blue for Unionist/Loyalist areas, and green, white, and orange for Nationalist areas.
See also
Curb appeal
Curb cut effect
Curb extension
Curb feeler
Curb stomp
Kassel kerb
Kerb guidance
Kerbside collection
Road surface
Wheel chock#Parking chocks—Curbs for parking spaces
References
External links
Road hazards
Road infrastructure | Curb | Technology | 2,227 |
13,193,620 | https://en.wikipedia.org/wiki/Bound%20graph | In graph theory, a bound graph expresses which pairs of elements of some partially ordered set have an upper bound. Rigorously, any graph G is a bound graph if there exists a partial order ≤ on the vertices of G with the property that for any vertices u and v of G, uv is an edge of G if and only if u ≠ v and there is a vertex w such that u ≤ w and v ≤ w.
The bound graphs are exactly the graphs that have a clique edge cover, a family of cliques that cover all edges, with the additional property that each clique includes a vertex that does not belong to any other clique in the family. For the bound graph of a given partial order, each clique can be taken to be the subset of elements less than or equal to some given element. A graph that is covered by cliques in this way is the bound graph of a partial order on its vertices, obtained by ordering the unique vertices in each clique as a chain, above all other vertices in that clique.
Bound graphs are sometimes referred to as upper bound graphs, but the analogously defined lower bound graphs comprise exactly the same class—any lower bound for ≤ is easily seen to be an upper bound for the dual partial order ≥.
References
Graph families
Order theory | Bound graph | Mathematics | 264 |
35,752,192 | https://en.wikipedia.org/wiki/Committed%20dose | The committed dose in radiological protection is a measure of the stochastic health risk due to an intake of radioactive material into the human body. Stochastic in this context is defined as the probability of cancer induction and genetic damage, due to low levels of radiation. The SI unit of measure is the sievert.
A committed dose from an internal source represents the same effective risk as the same amount of effective dose applied uniformly to the whole body from an external source, or the same amount of equivalent dose applied to part of the body. The committed dose is not intended as a measure for deterministic effects, such as radiation sickness, which are defined as the severity of a health effect which is certain to happen.
The radiation risk proposed by the International Commission on Radiological Protection (ICRP) predicts that an effective dose of one sievert carries a 5.5% chance of developing cancer. Such a risk is the sum of both internal and external radiation dose.
ICRP definition
The ICRP states "Radionuclides incorporated in the human body irradiate the tissues over time periods determined by their physical half-life and their biological retention
within the body. Thus they may give rise to doses to body tissues for many months or years after the intake. The need to regulate exposures to radionuclides and the
accumulation of radiation dose over extended periods of time has led to the definition of committed dose quantities".
The ICRP defines two dose quantities for individual committed dose.
Committed equivalent dose is the time integral of the equivalent dose rate in a particular tissue or organ that will be received by an individual following intake of radioactive material into the body by a Reference Person, where t is the integration time in years. This refers specifically to the dose in a specific tissue or organ, in a similar way to external equivalent dose.
Committed effective dose is the sum of the products of the committed organ or tissue equivalent doses and the appropriate tissue weighting factors WT, where t is the integration time in years following the intake. The commitment period is taken to be 50 years for adults, and to age 70 years for children. This refers specifically to the dose to the whole body, in a similar way to external effective dose. The committed effective dose is used to demonstrate compliance with dose limits and is entered into the "dose of record" for occupational exposures used for recording, reporting and retrospective demonstration of compliance with regulatory dose limits.
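The weighted sum in this definition can be written out as a minimal sketch (illustrative only: the function name, the organ doses, and the particular tissue weighting factors below are assumptions chosen for the example, not values taken from this article):

def committed_effective_dose(committed_equivalent_doses_sv, tissue_weights):
    # E(t) = sum over tissues T of w_T * H_T(t), with all doses in sievert.
    return sum(tissue_weights[t] * h for t, h in committed_equivalent_doses_sv.items())

# Hypothetical committed equivalent doses (Sv) from an intake, with illustrative
# weighting factors for just the tissues that received a dose in this example.
doses = {"lung": 0.004, "thyroid": 0.010, "bone_surface": 0.002}
weights = {"lung": 0.12, "thyroid": 0.04, "bone_surface": 0.01}
print(committed_effective_dose(doses, weights))  # about 0.0009 Sv committed effective dose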
The ICRP further states "For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities (e.g., activity retained in the body or in daily excreta). The radiation dose is determined from the intake using recommended dose coefficients".
Dose intake
The intake of radioactive material can occur through four pathways:
inhalation of airborne contaminants such as radon
ingestion of contaminated food or liquids
absorption of vapours such as tritium oxide through the skin
injection of medical radioisotopes such as technetium-99m
Some artificial radioisotopes such as iodine-131 are chemically identical to natural isotopes needed by the body, and may be more readily absorbed if the individual has a deficit of that element. For instance, potassium iodide (KI), administered orally immediately after exposure, may be used to protect the thyroid from ingested radioactive iodine in the event of an accident or attack at a nuclear power plant, or the detonation of a nuclear explosive which would release radioactive iodine.
Other radioisotopes have an affinity for particular tissues, such as plutonium into bone, and may be retained there for years in spite of their foreign nature.
In summary, not every intake of radioactive material is equally harmful. Material can enter the body through several pathways, and the resulting dose depends on the circumstances of the exposure. Radioisotopes of chemically essential elements are taken up most readily by people who are deficient in those elements, and even then the margin between a harmless and a harmful quantity can be very small. Absorption through the skin is a particularly troublesome pathway, because it is almost impossible to control how much material enters the body.
Physical factors
Since irradiation increases with proximity to the source of radiation, and as it is impossible to distance or shield an internal source, radioactive materials inside the body can deliver much higher doses to the host organs than they normally would from outside the body. This is particularly true for alpha and beta emitters that are easily shielded by skin and clothing. Some have hypothesized that the high relative biological effectiveness of alpha radiation might be attributable to the tendency of cells to absorb transuranic metals into the cell nucleus, where they would be in very close proximity to the genome, though an elevated effectiveness is also observed for external alpha radiation in cellular studies. As in the calculations for equivalent dose and effective dose, committed dose must include corrections for the relative biological effectiveness of the radiation type and weightings for tissue sensitivity.
Duration
The dose rate from a single uptake decays over time due to both radioactive decay, and biological decay (i.e. excretion from the body). The combined radioactive and biological half-life, called the effective half-life of the material, may range from hours for medical radioisotopes to decades for transuranic waste. Committed dose is the integral of this decaying dose rate over the presumed remaining lifespan of the organism. Most regulations require this integral to be taken over 50 years for uptakes during adulthood or over 70 years for uptakes during childhood. In dosimetry accounting, the entire committed dose is conservatively assigned to the year of uptake, even though it may take many years for the tissues to actually accumulate this dose.
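The effective half-life mentioned above combines the two elimination processes acting in parallel; as a standard relation (not quoted in this article): 1/T_eff = 1/T_physical + 1/T_biological, equivalently T_eff = (T_physical × T_biological) / (T_physical + T_biological). For example, a nuclide with a physical half-life of 8 days and a biological half-life of 80 days has an effective half-life of (8 × 80)/88 ≈ 7.3 days.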
Measurement
There is no direct way to measure committed dose. Estimates can be made by analyzing the data from whole body counting, blood samples, urine samples, fecal samples, biopsies, and measurement of intake.
Whole body counting (WBC) is the most direct approach, but has some limitations: it cannot detect beta emitters such as tritium; it provides no chemical information about any compound that the radioisotope may be bound to; it may be inconclusive regarding the nature of the radioisotope detected; and it is a complex measurement subject to many sources of measurement and calibration error.
Analysis of blood samples, urine samples, fecal samples, and biopsies can provide more exact information about the chemical and isotopic nature of the contaminant, its distribution in the body, and the rate of elimination. Urine samples are the standard way to measure tritium intake, while fecal samples are the standard way to measure transuranic intake.
If the nature and quantity of radioactive materials taken into the body is known, and a reliable biochemical model of this material is available, this can be sufficient to determine committed dose. In occupational or accident scenarios, approximate estimates can be based on measurements of the environment that people were exposed to, but this cannot take into account factors such as breathing rate and adherence to hygiene practices. Exact information about the intake and its biochemical impact is usually only available in medical situations where radiopharmaceuticals are measured in a radioisotope dose calibrator prior to injection.
Annual limit on intake (ALI) is the derived limit for the amount of radioactive material taken into the body of an adult worker by inhalation or ingestion in a year. ALI is the intake of a given radionuclide in a year that would result in:
a committed effective dose equivalent of 0.02 Sv (2 rems) for a "reference human body", or
a committed dose equivalent of 0.2 Sv (20 rems) to any individual organ or tissue,
whichever dose is smaller (a worked sketch of this calculation follows below).
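A minimal sketch of how an ALI follows from these two limits (illustrative only: the function name and the committed-dose-per-unit-intake coefficients are hypothetical, not values from this article or from published ICRP tables):

def annual_limit_on_intake(effective_dose_coeff_sv_per_bq, organ_dose_coeff_sv_per_bq):
    # Intake (Bq) at which the first of the two limits is reached:
    # 0.02 Sv committed effective dose equivalent for the whole body, or
    # 0.2 Sv committed dose equivalent to the most exposed organ or tissue.
    return min(0.02 / effective_dose_coeff_sv_per_bq,
               0.2 / organ_dose_coeff_sv_per_bq)

# Hypothetical dose coefficients for some nuclide and intake route
print(annual_limit_on_intake(2e-8, 3e-7))  # about 6.7e5 Bq; the organ limit governs here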
Health effects
Intake of radioactive materials into the body tends to increase the risk of cancer, and possibly other stochastic effects. The International Commission on Radiological Protection has proposed a model whereby the incidence of cancers increases linearly with effective dose at a rate of 5.5% per sievert. This model is widely accepted for external radiation, but its application to internal contamination has been disputed. This model fails to account for the low rates of cancer in early workers at Los Alamos National Laboratory who were exposed to plutonium dust, and the high rates of thyroid cancer in children following the Chernobyl accident. The informal European Committee on Radiation Risk has questioned the ICRP model used for internal exposure. However, a UK National Radiological Protection Board report endorses the ICRP approaches to the estimation of doses and risks from internal emitters and agrees with CERRIE conclusions that these should be best estimates and that associated uncertainties should receive more attention.
The true relationship between committed dose and cancer is almost certainly non-linear. For example, iodine-131 is notable in that high doses of the isotope are sometimes less dangerous than low doses, since they tend to kill thyroid tissues that would otherwise become cancerous as a result of the radiation. Most studies of very-high-dose I-131 for treatment of Graves disease have failed to find any increase in thyroid cancer, even though there is linear increase in thyroid cancer risk with I-131 absorption at moderate doses.
Internal exposure of the public is controlled by regulatory limits on the radioactive content of food and water. These limits are typically expressed in becquerel/kilogram, with different limits set for each contaminant.
Intake of very large amounts of radioactive material can cause acute radiation syndrome (ARS) in rare instances. Examples include the Alexander Litvinenko poisoning and Leide das Neves Ferreira. While there is no doubt that internal contamination was the cause of ARS in these cases, there is not enough data to establish what quantities of committed dose might cause ARS symptoms. In most scenarios where ARS is a concern, the external effective radiation dose is usually much more hazardous than the internal dose. Normally, the greatest concern with internal exposure is that the radioactive material may stay in the body for an extended period of time, "committing" the subject to accumulating dose long after the initial exposure has ceased. Over a hundred people, including Eben Byers and the radium girls, have received committed doses in excess of 10 Gy and went on to die of cancer or natural causes, whereas the same amount of acute external dose would invariably cause an earlier death by ARS.
Examples
Below are several examples of internal exposure.
Thorotrast
The exposure caused by Potassium-40 present within a normal person.
The exposure to the ingestion of a soluble radioactive substance, such as 89Sr in cows' milk.
A person who is being treated for cancer by means of an unsealed source radiotherapy method where a radioisotope is used as a drug (usually a liquid or pill). A review of this topic was published in 1999. Because the radioactive material becomes intimately mixed with the affected object it is often difficult to decontaminate the object or person in a case where internal exposure is occurring. While some very insoluble materials such as fission products within a uranium dioxide matrix might never be able to truly become part of an organism, it is normal to consider such particles in the lungs and digestive tract as a form of internal contamination which results in internal exposure.
Boron neutron capture therapy (BNCT) involves injecting a boron-10 tagged chemical that preferentially binds to tumor cells. Neutrons from a nuclear reactor are shaped by a neutron moderator to the neutron energy spectrum suitable for BNCT treatment. The tumor is selectively bombarded with these neutrons. The neutrons quickly slow down in the body to become low-energy thermal neutrons. These thermal neutrons are captured by the injected boron-10, forming excited boron-11, which breaks down into lithium-7 and a helium-4 alpha particle; both of these produce closely spaced ionizing radiation. This concept is described as a binary system using two separate components for the therapy of cancer. Each component in itself is relatively harmless to the cells, but when combined for treatment they produce a highly cytocidal (cytotoxic) effect which is lethal within a limited range of 5–9 micrometers, or approximately one cell diameter. Clinical trials, with promising results, are currently being carried out in Finland and Japan.
Related quantities
The US Nuclear Regulatory Commission defines some non-SI quantities for the calculation of committed dose for use only within the US regulatory system. They carry different names from those used within the international ICRP radiation protection system, thus:
Committed dose equivalent (CDE) is the equivalent dose received by a particular organ or tissue from an internal source, without weighting for tissue sensitivity. This is essentially an intermediate calculation result that cannot be directly compared to final dosimetry quantities
Committed effective dose equivalent (CEDE): as defined in Title 10, Section 20.1003, of the Code of Federal Regulations of the USA, the CEDE (HE,50) is the sum of the products of the committed dose equivalents for each of the irradiated body organs or tissues and the weighting factors (WT) applicable to each of those organs or tissues.
Confusion between the US and ICRP dose quantity systems can arise because, since 1991, the term "dose equivalent" has been used within the ICRP system only for quantities calculated using the quality factor Q (a function of linear energy transfer, LET), which the ICRP calls "operational quantities". However, within the US NRC system, "dose equivalent" is still used to name quantities calculated with tissue and radiation weighting factors, which in the ICRP system are now known as the "protection quantities", namely "effective dose" and "equivalent dose".
See also
Internal dosimetry
Radioactivity
Ionizing radiation
Collective dose
Total effective dose equivalent
Cumulative dose
Committed dose equivalent
References
US Nuclear Regulatory Commission glossary
Argonne National Laboratory glossary
Limitation of Exposure to Ionizing Radiation (Report No. 116). National Council on Radiation Protection and Measurements (NCRP).
External links
UK Govt COMARE website
Uk Govt CERRIE website
- "The confusing world of radiation dosimetry" - M.A. Boyd, 2009, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems.
Radioactivity
Radiation health effects
Radiation protection | Committed dose | Physics,Chemistry,Materials_science | 2,954 |
146,396 | https://en.wikipedia.org/wiki/Tracheal%20intubation | Tracheal intubation, usually simply referred to as intubation, is the placement of a flexible plastic tube into the trachea (windpipe) to maintain an open airway or to serve as a conduit through which to administer certain drugs. It is frequently performed in critically injured, ill, or anesthetized patients to facilitate ventilation of the lungs, including mechanical ventilation, and to prevent the possibility of asphyxiation or airway obstruction.
The most widely used route is orotracheal, in which an endotracheal tube is passed through the mouth and vocal apparatus into the trachea. In a nasotracheal procedure, an endotracheal tube is passed through the nose and vocal apparatus into the trachea. Other methods of intubation involve surgery and include the cricothyrotomy (used almost exclusively in emergency circumstances) and the tracheotomy, used primarily in situations where a prolonged need for airway support is anticipated.
Because it is an invasive and uncomfortable medical procedure, intubation is usually performed after administration of general anesthesia and a neuromuscular-blocking drug. It can, however, be performed in the awake patient with local or topical anesthesia or in an emergency without any anesthesia at all. Intubation is normally facilitated by using a conventional laryngoscope, flexible fiberoptic bronchoscope, or video laryngoscope to identify the vocal cords and pass the tube between them into the trachea instead of into the esophagus. Other devices and techniques may be used alternatively.
After the trachea has been intubated, a balloon cuff is typically inflated just above the far end of the tube to help secure it in place, to prevent leakage of respiratory gases, and to protect the tracheobronchial tree from receiving undesirable material such as stomach acid. The tube is then secured to the face or neck and connected to a T-piece, anesthesia breathing circuit, bag valve mask device, or a mechanical ventilator. Once there is no longer a need for ventilatory assistance or protection of the airway, the tracheal tube is removed; this is referred to as extubation of the trachea (or decannulation, in the case of a surgical airway such as a cricothyrotomy or a tracheotomy).
For centuries, tracheotomy was considered the only reliable method for intubation of the trachea. However, because only a minority of patients survived the operation, physicians undertook tracheotomy only as a last resort, on patients who were nearly dead. It was not until the late 19th century that advances in understanding of anatomy and physiology, as well as an appreciation of the germ theory of disease, had improved the outcome of this operation to the point that it could be considered an acceptable treatment option. Also at that time, advances in endoscopic instrumentation had improved to such a degree that direct laryngoscopy had become a viable means to secure the airway by the non-surgical orotracheal route. By the mid-20th century, the tracheotomy as well as endoscopy and non-surgical tracheal intubation had evolved from rarely employed procedures to becoming essential components of the practices of anesthesiology, critical care medicine, emergency medicine, and laryngology.
Tracheal intubation can be associated with complications such as broken teeth or lacerations of the tissues of the upper airway. It can also be associated with potentially fatal complications such as pulmonary aspiration of stomach contents which can result in a severe and sometimes fatal chemical aspiration pneumonitis, or unrecognized intubation of the esophagus which can lead to potentially fatal anoxia. Because of this, the potential for difficulty or complications due to the presence of unusual airway anatomy or other uncontrolled variables is carefully evaluated before undertaking tracheal intubation. Alternative strategies for securing the airway must always be readily available.
Indications
Tracheal intubation is indicated in a variety of situations when illness or a medical procedure prevents a person from maintaining a clear airway, breathing, and oxygenating the blood. In these circumstances, oxygen supplementation using a simple face mask is inadequate.
Depressed level of consciousness
Perhaps the most common indication for tracheal intubation is for the placement of a conduit through which nitrous oxide or volatile anesthetics may be administered. General anesthetic agents, opioids, and neuromuscular-blocking drugs may diminish or even abolish the respiratory drive. Although it is not the only means to maintain a patent airway during general anesthesia, intubation of the trachea provides the most reliable means of oxygenation and ventilation and the greatest degree of protection against regurgitation and pulmonary aspiration.
Damage to the brain (such as from a massive stroke, non-penetrating head injury, intoxication or poisoning) may result in a depressed level of consciousness. When this becomes severe to the point of stupor or coma (defined as a score on the Glasgow Coma Scale of less than 8), dynamic collapse of the extrinsic muscles of the airway can obstruct the airway, impeding the free flow of air into the lungs. Furthermore, protective airway reflexes such as coughing and swallowing may be diminished or absent. Tracheal intubation is often required to restore patency (the relative absence of blockage) of the airway and protect the tracheobronchial tree from pulmonary aspiration of gastric contents.
Hypoxemia
Intubation may be necessary for a patient with decreased oxygen content and oxygen saturation of the blood caused when their breathing is inadequate (hypoventilation), suspended (apnea), or when the lungs are unable to sufficiently transfer gasses to the blood. Such patients, who may be awake and alert, are typically critically ill with a multisystem disease or multiple severe injuries. Examples of such conditions include cervical spine injury, multiple rib fractures, severe pneumonia, acute respiratory distress syndrome (ARDS), or near-drowning. Specifically, intubation is considered if the arterial partial pressure of oxygen (PaO2) is less than 60 millimeters of mercury (mm Hg) while breathing an inspired O2 concentration (FIO2) of 50% or greater. In patients with elevated arterial carbon dioxide, an arterial partial pressure of CO2 (PaCO2) greater than 45 mm Hg in the setting of acidemia would prompt intubation, especially if a series of measurements demonstrate a worsening respiratory acidosis. Regardless of the laboratory values, these guidelines are always interpreted in the clinical context.
Airway obstruction
Actual or impending airway obstruction is a common indication for intubation of the trachea. Life-threatening airway obstruction may occur when a foreign body becomes lodged in the airway; this is especially common in infants and toddlers. Severe blunt or penetrating injury to the face or neck may be accompanied by swelling and an expanding hematoma, or injury to the larynx, trachea or bronchi. Airway obstruction is also common in people who have suffered smoke inhalation or burns within or near the airway or epiglottitis. Sustained generalized seizure activity and angioedema are other common causes of life-threatening airway obstruction which may require tracheal intubation to secure the airway.
Manipulation of the airway
Diagnostic or therapeutic manipulation of the airway (such as bronchoscopy, laser therapy or stenting of the bronchi) may intermittently interfere with the ability to breathe; intubation may be necessary in such situations.
Newborns
Syndromes such as respiratory distress syndrome, congenital heart disease, pneumothorax, and shock may lead to breathing problems in newborn infants that require endotracheal intubation and mechanically assisted breathing (mechanical ventilation). Newborn infants may also require endotracheal intubation during surgery while under general anaesthesia.
Equipment
Laryngoscopes
The vast majority of tracheal intubations involve the use of a viewing instrument of one type or another. The modern conventional laryngoscope consists of a handle containing batteries that power a light and a set of interchangeable blades, which are either straight or curved. This device is designed to allow the laryngoscopist to directly view the larynx. Due to the widespread availability of such devices, the technique of blind intubation of the trachea is rarely practiced today, although it may still be useful in certain emergency situations, such as natural or man-made disasters. In the prehospital emergency setting, digital intubation may be necessitated if the patient is in a position that makes direct laryngoscopy impossible. For example, digital intubation may be used by a paramedic if the patient is entrapped in an inverted position in a vehicle after a motor vehicle collision with a prolonged extrication time.
The decision to use a straight or curved laryngoscope blade depends partly on the specific anatomical features of the airway, and partly on the personal experience and preference of the laryngoscopist. The Miller blade, characterized by its straight, elongated shape with a curved tip, is frequently employed in patients with challenging airway anatomy, such as those with limited mouth opening or a high larynx. Its design allows for direct visualization of the epiglottis, facilitating precise glottic exposure.
Conversely, the Macintosh blade, with its curved configuration reminiscent of the letters "C" or "J," is favored in routine intubations for patients with normal airway anatomy. Its curved design enables indirect laryngoscopy, providing enhanced visualization of the vocal cords and glottis in most adult patients.
The choice between the Miller and Macintosh blades is influenced by specific anatomical considerations and the preferences of the laryngoscopist. While the Macintosh blade is the most commonly utilized curved laryngoscope blade, the Miller blade is the preferred option for straight blade intubation. Both blades are available in various sizes, ranging from size 0 (infant) to size 4 (large adult), catering to patients of different ages and anatomies. Additionally, there exists a myriad of specialty blades with unique features, including mirrors for enhanced visualization and ports for oxygen administration, primarily utilized by anesthetists and otolaryngologists in operating room settings.
Fiberoptic laryngoscopes have become increasingly available since the 1990s. In contrast to the conventional laryngoscope, these devices allow the laryngoscopist to indirectly view the larynx. This provides a significant advantage in situations where the operator needs to see around an acute bend in order to visualize the glottis, and deal with otherwise difficult intubations. Video laryngoscopes are specialized fiberoptic laryngoscopes that use a digital video camera sensor to allow the operator to view the glottis and larynx on a video monitor. Other "noninvasive" devices which can be employed to assist in tracheal intubation are the laryngeal mask airway (used as a conduit for endotracheal tube placement) and the Airtraq.
Stylets
An intubating stylet is a malleable metal wire designed to be inserted into the endotracheal tube to make the tube conform better to the upper airway anatomy of the specific individual. This aid is commonly used with a difficult laryngoscopy. Just as with laryngoscope blades, there are also several types of available stylets, such as the Verathon Stylet, which is specifically designed to follow the 60° blade angle of the GlideScope video laryngoscope.
The Eschmann tracheal tube introducer (also referred to as a "gum elastic bougie") is a specialized type of stylet used to facilitate difficult intubation. This flexible device is in length, 15 French (5 mm diameter) with a small "hockey-stick" angle at the far end. Unlike a traditional intubating stylet, the Eschmann tracheal tube introducer is typically inserted directly into the trachea and then used as a guide over which the endotracheal tube can be passed (in a manner analogous to the Seldinger technique). As the Eschmann tracheal tube introducer is considerably less rigid than a conventional stylet, this technique is considered to be a relatively atraumatic means of tracheal intubation.
The tracheal tube exchanger is a hollow catheter, in length, that can be used for removal and replacement of tracheal tubes without the need for laryngoscopy. The Cook Airway Exchange Catheter (CAEC) is another example of this type of catheter; this device has a central lumen (hollow channel) through which oxygen can be administered. Airway exchange catheters are long hollow catheters which often have connectors for jet ventilation, manual ventilation, or oxygen insufflation. It is also possible to connect the catheter to a capnograph to perform respiratory monitoring.
The lighted stylet is a device that employs the principle of transillumination to facilitate blind orotracheal intubation (an intubation technique in which the laryngoscopist does not view the glottis).
Tracheal tubes
A tracheal tube is a catheter that is inserted into the trachea for the primary purpose of establishing and maintaining a patent (open and unobstructed) airway. Tracheal tubes are frequently used for airway management in the settings of general anesthesia, critical care, mechanical ventilation, and emergency medicine. Many different types of tracheal tubes are available, suited for different specific applications. An endotracheal tube is a specific type of tracheal tube that is nearly always inserted through the mouth (orotracheal) or nose (nasotracheal). It is a breathing conduit designed to be placed into the airway of critically injured, ill or anesthetized patients in order to perform mechanical positive pressure ventilation of the lungs and to prevent the possibility of aspiration or airway obstruction. The endotracheal tube has a fitting designed to be connected to a source of pressurized gas such as oxygen. At the other end is an orifice through which such gases are directed into the lungs and may also include a balloon (referred to as a cuff). The tip of the endotracheal tube is positioned above the carina (before the trachea divides to each lung) and sealed within the trachea so that the lungs can be ventilated equally. A tracheostomy tube is another type of tracheal tube; this curved metal or plastic tube is inserted into a tracheostomy stoma or a cricothyrotomy incision.
Tracheal tubes can be used to ensure the adequate exchange of oxygen and carbon dioxide, to deliver oxygen in higher concentrations than found in air, or to administer other gases such as helium, nitric oxide, nitrous oxide, xenon, or certain volatile anesthetic agents such as desflurane, isoflurane, or sevoflurane. They may also be used as a route for administration of certain medications such as bronchodilators, inhaled corticosteroids, and drugs used in treating cardiac arrest such as atropine, epinephrine, lidocaine and vasopressin.
Originally made from latex rubber, most modern endotracheal tubes today are constructed of polyvinyl chloride. Tubes constructed of silicone rubber, wire-reinforced silicone rubber or stainless steel are also available for special applications. For human use, tubes range in size from in internal diameter. The size is chosen based on the patient's body size, with the smaller sizes being used for infants and children. Most endotracheal tubes have an inflatable cuff to seal the tracheobronchial tree against leakage of respiratory gases and pulmonary aspiration of gastric contents, blood, secretions, and other fluids. Uncuffed tubes are also available, though their use is limited mostly to children (in small children, the cricoid cartilage is the narrowest portion of the airway and usually provides an adequate seal for mechanical ventilation).
In addition to cuffed or uncuffed, preformed endotracheal tubes are also available. The oral and nasal RAE tubes (named after the inventors Ring, Adair and Elwyn) are the most widely used of the preformed tubes.
There are a number of different types of double-lumen endo-bronchial tubes that have endobronchial as well as endotracheal channels (Carlens, White and Robertshaw tubes). These tubes are typically coaxial, with two separate channels and two separate openings. They incorporate an endotracheal lumen which terminates in the trachea and an endobronchial lumen, the distal tip of which is positioned 1–2 cm into the right or left mainstem bronchus. There is also the Univent tube, which has a single tracheal lumen and an integrated endobronchial blocker. These tubes enable one to ventilate both lungs, or either lung independently. Single-lung ventilation (allowing the lung on the operative side to collapse) can be useful during thoracic surgery, as it can facilitate the surgeon's view and access to other relevant structures within the thoracic cavity.
The "armored" endotracheal tubes are cuffed, wire-reinforced silicone rubber tubes. They are much more flexible than polyvinyl chloride tubes, yet they are difficult to compress or kink. This can make them useful for situations in which the trachea is anticipated to remain intubated for a prolonged duration, or if the neck is to remain flexed during surgery. Most armored tubes have a Magill curve, but preformed armored RAE tubes are also available. Another type of endotracheal tube has four small openings just above the inflatable cuff, which can be used for suction of the trachea or administration of intratracheal medications if necessary. Other tubes (such as the Bivona Fome-Cuf tube) are designed specifically for use in laser surgery in and around the airway.
Methods to confirm tube placement
No single method for confirming tracheal tube placement has been shown to be 100% reliable. Accordingly, the use of multiple methods for confirmation of correct tube placement is now widely considered to be the standard of care. Such methods include direct visualization as the tip of the tube passes through the glottis, or indirect visualization of the tracheal tube within the trachea using a device such as a bronchoscope. With a properly positioned tracheal tube, equal bilateral breath sounds will be heard upon listening to the chest with a stethoscope, and no sound upon listening to the area over the stomach. Equal bilateral rise and fall of the chest wall will be evident with ventilatory excursions. A small amount of water vapor will also be evident within the lumen of the tube with each exhalation and there will be no gastric contents in the tracheal tube at any time.
Ideally, at least one of the methods utilized for confirming tracheal tube placement will be a measuring instrument. Waveform capnography has emerged as the gold standard for the confirmation of tube placement within the trachea. Other methods relying on instruments include the use of a colorimetric end-tidal carbon dioxide detector, a self-inflating esophageal bulb, or an esophageal detection device. The distal tip of a properly positioned tracheal tube will be located in the mid-trachea, roughly above the bifurcation of the carina; this can be confirmed by chest x-ray. If it is inserted too far into the trachea (beyond the carina), the tip of the tracheal tube is likely to be within the right main bronchus—a situation often referred to as a "right mainstem intubation". In this situation, the left lung may be unable to participate in ventilation, which can lead to decreased oxygen content due to ventilation/perfusion mismatch.
Special situations
Emergencies
Tracheal intubation in the emergency setting can be difficult with the fiberoptic bronchoscope due to blood, vomit, or secretions in the airway and poor patient cooperation. Because of this, patients with massive facial injury, complete upper airway obstruction, severely diminished ventilation, or profuse upper airway bleeding are poor candidates for fiberoptic intubation. Fiberoptic intubation under general anesthesia typically requires two skilled individuals. Success rates of only 83–87% have been reported using fiberoptic techniques in the emergency department, with significant nasal bleeding occurring in up to 22% of patients. These drawbacks limit the use of fiberoptic bronchoscopy somewhat in urgent and emergency situations.
Personnel experienced in direct laryngoscopy are not always immediately available in certain settings that require emergency tracheal intubation. For this reason, specialized devices have been designed to act as bridges to a definitive airway. Such devices include the laryngeal mask airway, cuffed oropharyngeal airway and the esophageal-tracheal combitube (Combitube). Other devices such as rigid stylets, the lightwand (a blind technique) and indirect fiberoptic rigid stylets, such as the Bullard scope, Upsher scope and the WuScope can also be used as alternatives to direct laryngoscopy. Each of these devices has its own unique set of benefits and drawbacks, and none of them is effective under all circumstances.
Rapid-sequence induction and intubation
Rapid sequence induction and intubation (RSI) is a particular method of induction of general anesthesia, commonly employed in emergency operations and other situations where patients are assumed to have a full stomach. The objective of RSI is to minimize the possibility of regurgitation and pulmonary aspiration of gastric contents during the induction of general anesthesia and subsequent tracheal intubation. RSI traditionally involves preoxygenating the lungs with a tightly fitting oxygen mask, followed by the sequential administration of an intravenous sleep-inducing agent and a rapidly acting neuromuscular-blocking drug, such as rocuronium, succinylcholine, or cisatracurium besilate, before intubation of the trachea.
One important difference between RSI and routine tracheal intubation is that the practitioner does not manually assist the ventilation of the lungs after the onset of general anesthesia and cessation of breathing, until the trachea has been intubated and the cuff has been inflated. Another key feature of RSI is the application of manual 'cricoid pressure' to the cricoid cartilage, often referred to as the "Sellick maneuver", prior to instrumentation of the airway and intubation of the trachea.
Named for British anesthetist Brian Arthur Sellick (1918–1996) who first described the procedure in 1961, the goal of cricoid pressure is to minimize the possibility of regurgitation and pulmonary aspiration of gastric contents. Cricoid pressure has been widely used during RSI for nearly fifty years, despite a lack of compelling evidence to support this practice. The initial article by Sellick was based on a small sample size at a time when high tidal volumes, head-down positioning and barbiturate anesthesia were the rule. Beginning around 2000, a significant body of evidence has accumulated which questions the effectiveness of cricoid pressure. The application of cricoid pressure may in fact displace the esophagus laterally instead of compressing it as described by Sellick. Cricoid pressure may also compress the glottis, which can obstruct the view of the laryngoscopist and actually cause a delay in securing the airway.
Cricoid pressure is often confused with the "BURP" (Backwards Upwards Rightwards Pressure) maneuver. While both of these involve digital pressure to the anterior aspect (front) of the laryngeal apparatus, the purpose of the latter is to improve the view of the glottis during laryngoscopy and tracheal intubation, rather than to prevent regurgitation. Both cricoid pressure and the BURP maneuver have the potential to worsen laryngoscopy.
RSI may also be used in prehospital emergency situations when a patient is conscious but respiratory failure is imminent (such as in extreme trauma). This procedure is commonly performed by flight paramedics. Flight paramedics often use RSI to intubate before transport because intubation in a moving fixed-wing or rotary-wing aircraft is extremely difficult to perform due to environmental factors. The patient will be paralyzed and intubated on the ground before transport by aircraft.
Cricothyrotomy
A cricothyrotomy is an incision made through the skin and cricothyroid membrane to establish a patent airway during certain life-threatening situations, such as airway obstruction by a foreign body, angioedema, or massive facial trauma. A cricothyrotomy is nearly always performed as a last resort in cases where orotracheal and nasotracheal intubation are impossible or contraindicated. Cricothyrotomy is easier and quicker to perform than tracheotomy, does not require manipulation of the cervical spine and is associated with fewer complications.
The easiest method to perform this technique is the needle cricothyrotomy (also referred to as a percutaneous dilational cricothyrotomy), in which a large-bore (12–14 gauge) intravenous catheter is used to puncture the cricothyroid membrane. Oxygen can then be administered through this catheter via jet insufflation. However, while needle cricothyrotomy may be life-saving in extreme circumstances, this technique is only intended to be a temporizing measure until a definitive airway can be established. While needle cricothyrotomy can provide adequate oxygenation, the small diameter of the cricothyrotomy catheter is insufficient for elimination of carbon dioxide (ventilation). After one hour of apneic oxygenation through a needle cricothyrotomy, one can expect a PaCO2 of greater than 250 mm Hg and an arterial pH of less than 6.72, despite an oxygen saturation of 98% or greater. A more definitive airway can be established by performing a surgical cricothyrotomy, in which an endotracheal tube or tracheostomy tube can be inserted through a larger incision.
Several manufacturers market prepackaged cricothyrotomy kits, which enable one to use either a wire-guided percutaneous dilational (Seldinger) technique, or the classic surgical technique to insert a polyvinylchloride catheter through the cricothyroid membrane. The kits may be stocked in hospital emergency departments and operating suites, as well as ambulances and other selected pre-hospital settings.
Tracheotomy
Tracheotomy consists of making an incision on the front of the neck and opening a direct airway through an incision in the trachea. The resulting opening can serve independently as an airway or as a site for a tracheostomy tube to be inserted; this tube allows a person to breathe without the use of his nose or mouth. The opening may be made by a scalpel or a needle (referred to as surgical and percutaneous techniques respectively) and both techniques are widely used in current practice. In order to limit the risk of damage to the recurrent laryngeal nerves (the nerves that control the voice box), the tracheotomy is performed as high in the trachea as possible. If only one of these nerves is damaged, the patient's voice may be impaired (dysphonia); if both of the nerves are damaged, the patient will be unable to speak (aphonia). In the acute setting, indications for tracheotomy are similar to those for cricothyrotomy. In the chronic setting, indications for tracheotomy include the need for long-term mechanical ventilation and removal of tracheal secretions (e.g., comatose patients, or extensive surgery involving the head and neck).
Children
There are significant differences in airway anatomy and respiratory physiology between children and adults, and these are taken into careful consideration before performing tracheal intubation of any pediatric patient. The differences, which are quite significant in infants, gradually disappear as the human body approaches a mature age and body mass index.
For infants and young children, orotracheal intubation is easier than the nasotracheal route. Nasotracheal intubation carries a risk of dislodgement of adenoids and nasal bleeding. Despite the greater difficulty, the nasotracheal route is preferable to orotracheal intubation in children undergoing intensive care and requiring prolonged intubation because it allows more secure fixation of the tube. As with adults, there are a number of devices specially designed for assistance with difficult tracheal intubation in children. Confirmation of proper position of the tracheal tube is accomplished as with adult patients.
Because the airway of a child is narrow, a small amount of glottic or tracheal swelling can produce critical obstruction. Inserting a tube that is too large relative to the diameter of the trachea can cause swelling. Conversely, inserting a tube that is too small can result in inability to achieve effective positive pressure ventilation due to retrograde escape of gas through the glottis and out the mouth and nose (often referred to as a "leak" around the tube). An excessive leak can usually be corrected by inserting a larger tube or a cuffed tube.
The tip of a correctly positioned tracheal tube will be in the mid-trachea, between the collarbones on an anteroposterior chest radiograph. The correct diameter of the tube is that which results in a small leak at a pressure of about of water. The appropriate inner diameter for the endotracheal tube is estimated to be roughly the same diameter as the child's little finger. The appropriate length for the endotracheal tube can be estimated by doubling the distance from the corner of the child's mouth to the ear canal. For premature infants internal diameter is an appropriate size for the tracheal tube. For infants of normal gestational age, internal diameter is an appropriate size. For normally nourished children 1 year of age and older, two formulae are used to estimate the appropriate diameter and depth for tracheal intubation. The internal diameter of the tube in mm is (patient's age in years + 16) / 4, while the appropriate depth of insertion in cm is 12 + (patient's age in years / 2).
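The two age-based estimates above are simple arithmetic. As a rough illustration, the sketch below encodes them for children one year of age and older; the function names are arbitrary choices for this example, and the formulae remain estimates rather than clinical rules.

```python
def estimate_uncuffed_tube_diameter_mm(age_years: float) -> float:
    """Estimated internal diameter (mm) for children >= 1 year: (age + 16) / 4."""
    return (age_years + 16) / 4

def estimate_insertion_depth_cm(age_years: float) -> float:
    """Estimated insertion depth (cm) for children >= 1 year: 12 + age / 2."""
    return 12 + age_years / 2

# Example: a 4-year-old child
print(estimate_uncuffed_tube_diameter_mm(4))  # 5.0 mm
print(estimate_insertion_depth_cm(4))         # 14.0 cm
```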
Newborn infants
Endotracheal suctioning is often used in intubated newborn infants to reduce the risk of a blocked tube due to secretions and of a collapsed lung, and to reduce pain. Suctioning may be performed at specifically scheduled intervals, on an "as needed" basis, or less frequently. Further research is necessary to determine the most effective suctioning schedule or frequency of suctioning in intubated infants.
In newborns, free-flow oxygen used to be recommended during intubation; however, as there is no evidence of benefit, the 2011 NRP guidelines no longer recommend it.
Predicting difficulty
Tracheal intubation is not a simple procedure and the consequences of failure are grave. Therefore, the patient is carefully evaluated for potential difficulty or complications beforehand. This involves taking the medical history of the patient and performing a physical examination, the results of which can be scored against one of several classification systems. The proposed surgical procedure (e.g., surgery involving the head and neck, or bariatric surgery) may lead one to anticipate difficulties with intubation. Many individuals have unusual airway anatomy, such as those who have limited movement of their neck or jaw, or those who have tumors, deep swelling due to injury or to allergy, developmental abnormalities of the jaw, or excess fatty tissue of the face and neck. Using conventional laryngoscopic techniques, intubation of the trachea can be difficult or even impossible in such patients. This is why all persons performing tracheal intubation must be familiar with alternative techniques of securing the airway. Use of the flexible fiberoptic bronchoscope and similar devices has become one of the preferred techniques in the management of such cases. However, these devices require a different skill set than that employed for conventional laryngoscopy and are expensive to purchase, maintain and repair.
When taking the patient's medical history, the subject is questioned about any significant signs or symptoms, such as difficulty in speaking or difficulty in breathing. These may suggest obstructing lesions in various locations within the upper airway, larynx, or tracheobronchial tree. A history of previous surgery (e.g., previous cervical fusion), injury, radiation therapy, or tumors involving the head, neck and upper chest can also provide clues to a potentially difficult intubation. Previous experiences with tracheal intubation, especially difficult intubation, intubation for prolonged duration (e.g., intensive care unit) or prior tracheotomy are also noted.
A detailed physical examination of the airway is important, particularly:
the range of motion of the cervical spine: the subject should be able to tilt the head back and then forward so that the chin touches the chest.
the range of motion of the jaw (the temporomandibular joint): three of the subject's fingers should be able to fit between the upper and lower incisors.
the size and shape of the upper jaw and lower jaw, looking especially for problems such as maxillary hypoplasia (an underdeveloped upper jaw), micrognathia (an abnormally small jaw), or retrognathia (misalignment of the upper and lower jaw).
the thyromental distance: three of the subject's fingers should be able to fit between the Adam's apple and the chin.
the size and shape of the tongue and palate relative to the size of the mouth.
the teeth, especially noting the presence of prominent maxillary incisors, any loose or damaged teeth, or crowns.
Many classification systems have been developed in an effort to predict difficulty of tracheal intubation, including the Cormack–Lehane classification system, the Intubation Difficulty Scale (IDS), and the Mallampati score. The Mallampati score is drawn from the observation that the size of the base of the tongue influences the difficulty of intubation. It is determined by looking at the anatomy of the mouth, and in particular the visibility of the base of the palatine uvula, the faucial pillars and the soft palate. Although such medical scoring systems may aid in the evaluation of patients, no single score or combination of scores can be trusted to specifically detect all and only those patients who are difficult to intubate. Furthermore, one study of experienced anesthesiologists found that they did not score the same patients consistently over time using the widely used Cormack–Lehane classification system, and that only 25% could correctly define all four of its grades. A Cochrane systematic review has examined the sensitivity and specificity of the various bedside tests commonly used for predicting difficulty in airway management. Under certain emergency circumstances (e.g., severe head trauma or suspected cervical spine injury), it may be impossible to fully utilize the physical examination and the various classification systems to predict the difficulty of tracheal intubation; in such cases, alternative techniques of securing the airway must be readily available.
Complications
Tracheal intubation is generally considered the best method for airway management under a wide variety of circumstances, as it provides the most reliable means of oxygenation and ventilation and the greatest degree of protection against regurgitation and pulmonary aspiration. However, tracheal intubation requires a great deal of clinical experience to master and serious complications may result even when properly performed.
Four anatomic features must be present for orotracheal intubation to be straightforward: adequate mouth opening (full range of motion of the temporomandibular joint), sufficient pharyngeal space (determined by examining the back of the mouth), sufficient submandibular space (distance between the thyroid cartilage and the chin, the space into which the tongue must be displaced in order for the larygoscopist to view the glottis), and adequate extension of the cervical spine at the atlanto-occipital joint. If any of these variables is in any way compromised, intubation should be expected to be difficult.
Minor complications are common after laryngoscopy and insertion of an orotracheal tube. These are typically of short duration, such as sore throat, lacerations of the lips or gums or other structures within the upper airway, chipped, fractured or dislodged teeth, and nasal injury. Other complications that are common but potentially more serious include accelerated or irregular heartbeat, high blood pressure, elevated intracranial and intraocular pressure, and bronchospasm.
More serious complications include laryngospasm, perforation of the trachea or esophagus, pulmonary aspiration of gastric contents or other foreign bodies, fracture or dislocation of the cervical spine, temporomandibular joint or arytenoid cartilages, decreased oxygen content, elevated arterial carbon dioxide, and vocal cord weakness. In addition to these complications, tracheal intubation via the nasal route carries a risk of dislodgement of adenoids and potentially severe nasal bleeding. Newer technologies such as flexible fiberoptic laryngoscopy have fared better in reducing the incidence of some of these complications, though the most frequent cause of intubation trauma remains a lack of skill on the part of the laryngoscopist.
Complications may also be severe and long-lasting or permanent, such as vocal cord damage, esophageal perforation and retropharyngeal abscess, bronchial intubation, or nerve injury. They may even be immediately life-threatening, such as laryngospasm and negative pressure pulmonary edema (fluid in the lungs), aspiration, unrecognized esophageal intubation, or accidental disconnection or dislodgement of the tracheal tube. Potentially fatal complications more often associated with prolonged intubation or tracheotomy include abnormal communication between the trachea and nearby structures such as the innominate artery (tracheoinnominate fistula) or esophagus (tracheoesophageal fistula). Other significant complications include airway obstruction due to loss of tracheal rigidity, ventilator-associated pneumonia and narrowing of the glottis or trachea. The cuff pressure is monitored carefully in order to avoid complications from over-inflation, many of which can be traced to excessive cuff pressure restricting the blood supply to the tracheal mucosa. A 2000 Spanish study of bedside percutaneous tracheotomy reported overall complication rates of 10–15% and procedural mortality of 0%, which is comparable to those of other series reported in the literature from the Netherlands and the United States.
Inability to secure the airway, with subsequent failure of oxygenation and ventilation, is a life-threatening complication which, if not immediately corrected, leads to decreased oxygen content, brain damage, cardiovascular collapse, and death. When performed improperly, the associated complications (e.g., unrecognized esophageal intubation) may be rapidly fatal. Without adequate training and experience, the incidence of such complications is high. One widely known case is that of Andrew Davis Hughes of Emerald Isle, NC, who was improperly intubated and, due to the lack of oxygen, sustained severe brain damage and died. Among paramedics in several United States urban communities, for example, unrecognized esophageal or hypopharyngeal intubation has been reported in 6% to 25% of cases. Although not common, where basic emergency medical technicians are permitted to intubate, reported success rates are as low as 51%. In one study, nearly half of patients with misplaced tracheal tubes died in the emergency room. Because of this, the American Heart Association's Guidelines for Cardiopulmonary Resuscitation have de-emphasized the role of tracheal intubation in favor of other airway management techniques such as bag-valve-mask ventilation, the laryngeal mask airway and the Combitube. Higher quality studies have demonstrated favorable evidence for this shift, showing no survival or neurological benefit with endotracheal intubation over supraglottic airway devices (laryngeal mask or Combitube).
One complication—unintentional and unrecognized intubation of the esophagus—is both common (as frequent as 25% in the hands of inexperienced personnel) and likely to result in a deleterious or even fatal outcome. In such cases, oxygen is inadvertently administered to the stomach, from where it cannot be taken up by the circulatory system, instead of the lungs. If this situation is not immediately identified and corrected, death will ensue from cerebral and cardiac anoxia.
Of 4,460 claims in the American Society of Anesthesiologists (ASA) Closed Claims Project database, 266 (approximately 6%) were for airway injury. Of these 266 cases, 87% of the injuries were temporary, 5% were permanent or disabling, and 8% resulted in death. Difficult intubation, age older than 60 years, and female gender were associated with claims for perforation of the esophagus or pharynx. Early signs of perforation were present in only 51% of perforation claims, whereas late sequelae occurred in 65%.
During the SARS and COVID-19 pandemics, tracheal intubation has been used with a ventilator in severe cases where the patient struggles to breathe. Performing the procedure carries a risk of the caregiver becoming infected.
Alternatives
Although it offers the greatest degree of protection against regurgitation and pulmonary aspiration, tracheal intubation is not the only means to maintain a patent airway. Alternative techniques for airway management and delivery of oxygen, volatile anesthetics or other breathing gases include the laryngeal mask airway, i-gel, cuffed oropharyngeal airway, continuous positive airway pressure (CPAP mask), nasal BiPAP mask, simple face mask, and nasal cannula.
General anesthesia is often administered without tracheal intubation in selected cases where the procedure is brief in duration, or procedures where the depth of anesthesia is not sufficient to cause significant compromise in ventilatory function. Even for longer duration or more invasive procedures, a general anesthetic may be administered without intubating the trachea, provided that patients are carefully selected, and the risk-benefit ratio is favorable (i.e., the risks associated with an unprotected airway are believed to be less than the risks of intubating the trachea).
Airway management can be classified into closed or open techniques depending on the system of ventilation used. Tracheal intubation is a typical example of a closed technique as ventilation occurs using a closed circuit. Several open techniques exist, such as spontaneous ventilation, apnoeic ventilation or jet ventilation. Each has its own specific advantages and disadvantages which determine when it should be used.
Spontaneous ventilation has been traditionally performed with an inhalational agent (i.e. gas induction or inhalational induction using halothane or sevoflurane) however it can also be performed using intravenous anaesthesia (e.g. propofol, ketamine or dexmedetomidine). SponTaneous Respiration using IntraVEnous anaesthesia and High-flow nasal oxygen (STRIVE Hi) is an open airway technique that uses an upwards titration of propofol which maintains ventilation at deep levels of anaesthesia. It has been used in airway surgery as an alternative to tracheal intubation.
History
Tracheotomy
The earliest known depiction of a tracheotomy is found on two Egyptian tablets dating back to around 3600 BC. The 110-page Ebers Papyrus, an Egyptian medical papyrus which dates to roughly 1550 BC, also makes reference to the tracheotomy. Tracheotomy was described in the Rigveda, a Sanskrit text of ayurvedic medicine written around 2000 BC in ancient India. The Sushruta Samhita from around 400 BC is another text from the Indian subcontinent on ayurvedic medicine and surgery that mentions tracheotomy. Asclepiades of Bithynia (–40 BC) is often credited as being the first physician to perform a non-emergency tracheotomy. Galen of Pergamon (AD 129–199) clarified the anatomy of the trachea and was the first to demonstrate that the larynx generates the voice. In one of his experiments, Galen used bellows to inflate the lungs of a dead animal. Ibn Sīnā (980–1037) described the use of tracheal intubation to facilitate breathing in 1025 in his 14-volume medical encyclopedia, The Canon of Medicine. In the 12th century medical textbook Al-Taisir, Ibn Zuhr (1092–1162)—also known as Avenzoar—of Al-Andalus provided a correct description of the tracheotomy operation.
The first detailed descriptions of tracheal intubation and subsequent artificial respiration of animals were from Andreas Vesalius (1514–1564) of Brussels. In his landmark book published in 1543, De humani corporis fabrica, he described an experiment in which he passed a reed into the trachea of a dying animal whose thorax had been opened and maintained ventilation by blowing into the reed intermittently. Antonio Musa Brassavola (1490–1554) of Ferrara successfully treated a patient with peritonsillar abscess by tracheotomy. Brassavola published his account in 1546; this operation has been identified as the first recorded successful tracheotomy, despite the many previous references to this operation. Towards the end of the 16th century, Hieronymus Fabricius (1533–1619) described a useful technique for tracheotomy in his writings, although he had never actually performed the operation himself. In 1620 the French surgeon Nicholas Habicot (1550–1624) published a report of four successful tracheotomies. In 1714, anatomist Georg Detharding (1671–1747) of the University of Rostock performed a tracheotomy on a drowning victim.
Despite the many recorded instances of its use since antiquity, it was not until the early 19th century that the tracheotomy finally began to be recognized as a legitimate means of treating severe airway obstruction. In 1852, French physician Armand Trousseau (1801–1867) presented a series of 169 tracheotomies to the Académie Impériale de Médecine. 158 of these were performed for the treatment of croup, and 11 were performed for "chronic maladies of the larynx". Between 1830 and 1855, more than 350 tracheotomies were performed in Paris, most of them at the Hôpital des Enfants Malades, a public hospital, with an overall survival rate of only 20–25%. This compares with 58% of the 24 patients in Trousseau's private practice, who fared better due to greater postoperative care.
In 1871, the German surgeon Friedrich Trendelenburg (1844–1924) published a paper describing the first successful elective human tracheotomy to be performed for the purpose of administration of general anesthesia. In 1888, Sir Morell Mackenzie (1837–1892) published a book discussing the indications for tracheotomy. In the early 20th century, tracheotomy became a life-saving treatment for patients affected with paralytic poliomyelitis who required mechanical ventilation. In 1909, Philadelphia laryngologist Chevalier Jackson (1865–1958) described a technique for tracheotomy that is used to this day.
Laryngoscopy and non-surgical techniques
In 1854, a Spanish singing teacher named Manuel García (1805–1906) became the first man to view the functioning glottis in a living human. In 1858, French pediatrician Eugène Bouchut (1818–1891) developed a new technique for non-surgical orotracheal intubation to bypass laryngeal obstruction resulting from a diphtheria-related pseudomembrane. In 1880, Scottish surgeon William Macewen (1848–1924) reported on his use of orotracheal intubation as an alternative to tracheotomy to allow a patient with glottic edema to breathe, as well as in the setting of general anesthesia with chloroform. In 1895, Alfred Kirstein (1863–1922) of Berlin first described direct visualization of the vocal cords, using an esophagoscope he had modified for this purpose; he called this device an autoscope.
In 1913, Chevalier Jackson was the first to report a high rate of success for the use of direct laryngoscopy as a means to intubate the trachea. Jackson introduced a new laryngoscope blade that incorporated a component that the operator could slide out to allow room for passage of an endotracheal tube or bronchoscope. Also in 1913, New York surgeon Henry H. Janeway (1873–1921) published results he had achieved using a laryngoscope he had recently developed. Another pioneer in this field was Sir Ivan Whiteside Magill (1888–1986), who developed the technique of awake blind nasotracheal intubation, the Magill forceps, the Magill laryngoscope blade, and several apparati for the administration of volatile anesthetic agents. The Magill curve of an endotracheal tube is also named for Magill. Sir Robert Macintosh (1897–1989) introduced a curved laryngoscope blade in 1943; the Macintosh blade remains to this day the most widely used laryngoscope blade for orotracheal intubation.
Between 1945 and 1952, optical engineers built upon the earlier work of Rudolph Schindler (1888–1968), developing the first gastrocamera. In 1964, optical fiber technology was applied to one of these early gastrocameras to produce the first flexible fiberoptic endoscope. Initially used in upper GI endoscopy, this device was first used for laryngoscopy and tracheal intubation by Peter Murphy, an English anesthetist, in 1967. The concept of using a stylet for replacing or exchanging orotracheal tubes was introduced by Finucane and Kupshik in 1978, using a central venous catheter.
By the mid-1980s, the flexible fiberoptic bronchoscope had become an indispensable instrument within the pulmonology and anesthesia communities. The digital revolution of the 21st century has brought newer technology to the art and science of tracheal intubation. Several manufacturers have developed video laryngoscopes which employ digital technology such as the CMOS active pixel sensor (CMOS APS) to generate a view of the glottis so that the trachea may be intubated.
See also
Intratracheal instillation
Notes
References
External links
Video of endotracheal intubation using C-MAC D-blade and bougie used as introducer.
Videos of direct laryngoscopy recorded with the Airway Cam (TM) imaging system
Examples of some devices for facilitation of tracheal intubation
Free image rich resource explaining various types of endotracheal tubes
Tracheal intubation live case 2022
Airway management
Anesthesia
Emergency medical procedures
First aid
Intensive care medicine
Medical equipment
Oral and maxillofacial surgery
Otorhinolaryngology
Respiratory system procedures
Respiratory therapy
Medical treatments | Tracheal intubation | Biology | 11,143 |
65,141,595 | https://en.wikipedia.org/wiki/Uzbekcosmos | The Space Research and Technology Agency under the Ministry of Digital Technologies of the Republic of Uzbekistan (Uzbek: O'zbekiston Respublikasi Raqamli texnologiyalar vazirligi huzuridagi Kosmik tadqiqotlar va texnologiyalar agentligi) also known as Uzbekspace Agency (Uzbek: "O'zbekkosmos" agentligi) is the official Uzbek state space agency. The agency is officially tasked with the development and implementation of a unified state policy and strategic directions in the field of space research and technology. Uzbekspace Agency was formed by decree of the President of the Republic of Uzbekistan Shavkat Mirziyoyev on August 30, 2019.
References
Space agencies
Space programs by country | Uzbekcosmos | Engineering | 165 |
41,325,698 | https://en.wikipedia.org/wiki/Marc%20Lee | Marc Lee (born March 17, 1969) is a Swiss new media artist working in the fields of interactive installation art, internet art, performance art and video art.
Biography
Lee was born in 1969 in Knutwil, Lucerne, in Switzerland. He studied at the Basel University of Art and Design installation and at the Zurich University of the Arts new media art through 2003.
Lee has been creating network-oriented interactive projects since 1999, experimenting with information and communication technologies. His projects identify and critically discuss economic, political, cultural and creative issues.
His artworks reflect the visions and limits of our information society in an intelligent and artistic manner.
Marc Lee has exhibited in major art exhibitions including: ZKM Karlsruhe, New Museum New York, Transmediale Berlin, Ars Electronica Linz, Contemporary Art Biennale Sevilla, Media Art Biennale Seoul, Viper and Shift Festival Basel, Read_Me Festival Moskau, CeC Delhi, MoMA Shanghai, ICC Tokyo and National Museum of Modern and Contemporary Art Seoul.
Lee's works are in private and public collections, including the Federal Art Collection Switzerland and the ZKM Karlsruhe, and he has won many prizes and honorary mentions at international festivals, including Transmediale Berlin and Ars Electronica Linz.
Art projects
10.000 Moving Cities – Same but Different explores how our planet is becoming increasingly homogeneous and how globalization creates more and more “places without a local identity” – as described in Marc Augé’s essay Non-place (1992). In 10.000 Moving Cities all the cities have identical buildings, but the information on the building facades is constantly different; it is searched in real time on social networks for the chosen location. This ongoing experimental research project is developed by Marc Lee in collaboration with the Intelligent Sensor-Actuator-Systems Laboratory (ISAS) at the Karlsruhe Institute of Technology and the ZKM Center for Art and Media Karlsruhe. Four versions have been created so far: an augmented reality (AR) version, a virtual reality (VR) version, a mobile app version, and a "real cubes" version. These versions are technologically very different but address the identical topic. A large-scale installation was shown at Connecting_Unfolding, the premiere exhibition of the National Museum of Modern and Contemporary Art Seoul.
Unfiltered – TikTok and the Emerging Face of Culture is an immersive installation. It explores the influence of digital accessibility and questioning its impact on public consciousness, visual aesthetics and identity structures. With the increasing access to social media, digital hierarchies are being broken. Platforms like TikTok are the new city town hall, whose "influence" is no longer limited to the urban elite.
Echolocation – Mapping the Free Flow of Information Around the World in Realtime deals with cultural diversity and at the same time with the powerful homogenization. It poses questions about the meaning of our culture which is becoming increasingly similar. In Echolocation, stories posted on social networks like YouTube, Flickr and Twitter can be searched in real time about self chosen location.
Corona TV Bot is the current version of Marc Lee’s TV bot, an ongoing project started in 2004 that filters the latest Twitter and YouTube posts according to self-definable keywords or hashtags. In response to the coronavirus pandemic, the latest Twitter and YouTube posts about COVID-19 and the coronavirus are mediated in a wild continuous TV show feed that reflects the pandemic 24/7 online. Since the start of the pandemic, 6-hour broadcasts have been recorded every week. These recordings can be compared in chronological order to make cultural, economic and political factors, differences, developments and change tangible.
360° VR Mobile Art Apps are research projects for interactive art installations. Visitors can interact using smartphones or tablets and become performers. The mobile display is projected on one or more walls in the exhibition space. The sonic sound experiences are specially composed for the apps and respond to movements and navigation modes.
Political Campaigns – Battle of Opinion on Social Media In political campaigns around the world, supporters of opposing parties have engaged in heated battles on social media. "Political Campaigns" filters the latest Twitter, Instagram, and YouTube posts, which include search terms of top candidates or parties, and weaves them into a wild TV news show (24/7). What counts today are likes and retweets, which travel across the screen fighting for victory and indicate their current online market value. A network-based TV show that confronts us with opinions that don't reflect just variants of our own.
Pic-Me – Fly to the Locations Where Users Send Posts With Pic-Me you can virtually fly to the places from where users send posts to Instagram, thus offering another view on how the media handles posts on social networks. This work makes us think what happens to the huge amounts of data generated by humans and collected by institutions worldwide?
Loogie.net generates television news programs on self-made topics at the push of a button. The first founded "interactive news television station" in 2003. This research project is a TV news channel, media satire and art installation at the same time.
Breaking the News – Be a News-Jockey Information on self-made topics is transmitted in real time from the internet and audiovisualized on four large projections. The user becomes a live performer, a news jockey.
Used to be my home too maps in real time our rich biodiversity and, at the same time, the continuous extinction of species. This experiment shows photos of plants, fungi and animals that are uploaded right now by unknown users to iNaturalist.org via mobile phone. These are mapped on Google Earth at the exact location where they were photographed. In addition, taxonomically similar species that occurred in the same country and became extinct within the last 30 years are assigned in real time via RedList.org.
YANTO – yaw and not tip over is a speculative simulation on the future of aquafarming that questions the limitations of techno-solutionist approaches to species depletion and climate collapse, such as genetic engineering, synthetic biology and artificial intelligence. The narrative sets in a speculative environment where AI and synthetic biology work together to create an optimized environment for farmed species. A simulator powered by AI creates hybrid species to balance a delicate ecosystem.
Speculative Evolution imagines a speculative ecosystem 30 years from now, where artificial intelligence and biotechnologies work together to create and optimize species to withstand the increasingly hostile environment. From the perspective of an AI agent, the audiences are invited to create new variations of animals, fungi, plants, and robots, fly with these engineered and mutated species, and observe the changing ecosystem.
Exhibitions (selection)
2023 – Transmediale Festival, Akademie der Künste, Berlin, Germany
2022 – Swiss Media Art - Pax Art Awards, HEK (Haus der Elektronischen Künste), Basel, Switzerland
2017–2018 – Aestetic of Changes, MAK - 150 Years of the University of Applied Arts Vienna, Austria
2013 – Connecting_Unfolding, MMCA - National Museum of Modern and Contemporary Art, Seoul, Korea
2004–2008 – Loogie.net Algorithmic Revolution, ZKM Medienmuseum, Karlsruhe, Germany
2002 – Open_Source_Art_Hack, New Museum, New York, USA
Publications
2022 - ESCH, ZKM Karlsruhe, Hacking Identity – Dancing Diversity
2020 – post-futuristisch, KUNSTFORUM International Bd. 267, Magazine
2019 – LUX AETERNA - ISEA 2019 Art, Catalogue
2019 – xCoAx 2019: Proceedings of the Seventh Conference on Computation, Communication, Aesthetics and X
2019 – FILE SÃO PAULO 2019: 20 Years of FILE 20 Years of Art and Technology
2019 – Research TECHNOLOGY URBANITY, Schafhof - European Center for Art Upper Bavaria
2017 – THE UNFRAMED WORLD, Virtual Reality as Artistic Medium, Sabine Himmelsbach
2014 – Inauguration, National Museum of Modern and Contemporary Art, Korea
2010 – Was tun. Figuren des Protests. Taktiken des Widerstands
2008 – Digital Playground 2008, "Hack the City!"
2004 – Read_me: Software Art & Cultures, Aarhus
2004 – MetaWorx – Young Swiss Interactive. Approaches to Interactivity
2004 – 56kTV - bastard channel MAGAZIN
References
External links
Official Website
Loogie.net
1969 births
Living people
New media artists
Net.artists
Swiss performance artists
Swiss video artists
Zurich University of the Arts alumni
Swiss contemporary artists
20th-century Swiss artists
21st-century Swiss artists | Marc Lee | Technology | 1,762 |
48,408,937 | https://en.wikipedia.org/wiki/Fort%20Saint-Jean%20%28Lyon%29 | The Fort Saint-Jean is located in the 1st arrondissement of Lyon and is part of the first fort belt of Lyon, which includes Fort de Loyasse and the now-demolished Fort Duchère and Fort de Caluire.
History
The fort was initially nothing but a bastion built as a component of the wall around the Croix-Rousse hill at the beginning of the 16th century by François I, to protect the town from the Swiss. In 1636 the Halincourt gate was built toward the Rhône. The fort was completed in the 18th century, but construction of the current building began in 1834. Fort Saint-Jean has an area of 17,000 m2 and dominates the Saône from a height of 40 m above the river. In 1932, the Military Health Service had its regional pharmacy there.
On 2 September 1944, when Lyon was occupied by the Germans, a group of volunteers gathered at the fort to prevent the occupiers from destroying the bridges over the Saône.
In 1984 the fort was occupied by the Veterinary Service of the Armed Forces.
Today
Rehabilitated in 2001 by architect Pierre Vurpas, Fort Saint-Jean has been home, since 2004, to the National Treasury School (ENT), which became the National School of Public Finance (ENFiP) on 4 August 2010. This school trains public finance controllers and occasionally hosts cultural events.
See also
Ceintures de Lyon
Bibliography
Les défenses de Lyon, François Dallemagne; Georges Fessy (photographer). Éditions Lyonnaises d'Art et d'Histoire, 2006.
1st arrondissement of Lyon
Fortifications of Lyon
Fortification lines
16th-century fortifications
18th-century fortifications
19th-century fortifications
18th-century architecture in France | Fort Saint-Jean (Lyon) | Engineering | 351 |
13,061,682 | https://en.wikipedia.org/wiki/Platelet-activating%20factor%20receptor | The platelet-activating factor receptor (PAF-R) is a G-protein coupled receptor which binds platelet-activating factor. It is encoded in the human by the PTAFR gene.
The PAF receptor shows structural characteristics of the rhodopsin (MIM 180380) gene family and binds platelet-activating factor (PAF). PAF is a phospholipid (1-O-alkyl-2-acetyl-sn-glycero-3-phosphorylcholine) that has been implicated as a mediator in diverse pathologic processes, such as allergy, asthma, septic shock, arterial thrombosis, and inflammatory processes. Its pathogenetic role in chronic kidney failure has also been reported.
Ligands
Agonists
Platelet activating factor
Antagonists
Apafant (WEB-2086)
Israpafant (Y-24180)
Lexipafant
Rupatadine
References
Further reading
External links
G protein-coupled receptors | Platelet-activating factor receptor | Chemistry | 228 |
4,141,655 | https://en.wikipedia.org/wiki/W%20state | The W state is an entangled quantum state of three qubits which in the bra-ket notation has the following shape:

|W⟩ = (|001⟩ + |010⟩ + |100⟩)/√3,

and which is remarkable for representing a specific type of multipartite entanglement and for occurring in several applications in quantum information theory. Particles prepared in this state reproduce the properties of Bell's theorem, which states that no classical theory of local hidden variables can produce the predictions of quantum mechanics. The state is named after Wolfgang Dür, who first reported the state together with Guifré Vidal and Ignacio Cirac in 2000.
Properties
The W state is the representative of one of the two non-biseparable classes of three-qubit states, the other being the Greenberger–Horne–Zeilinger (GHZ) state |GHZ⟩ = (|000⟩ + |111⟩)/√2; states of the two classes cannot be transformed (not even probabilistically) into each other by local quantum operations. Thus |W⟩ and |GHZ⟩ represent two very different kinds of tripartite entanglement.
This difference is, for example, illustrated by the following interesting property of the W state: if one of the three qubits is lost, the state of the remaining 2-qubit system is still entangled. This robustness of W-type entanglement contrasts strongly with the GHZ state, which is fully separable after loss of one qubit.
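A short worked calculation makes this concrete. Assuming the three-qubit W state written above, tracing out any one qubit leaves the remaining pair in a mixed state that still contains an entangled Bell component:

```latex
% Reduced state of qubits 1 and 2 after losing (tracing out) qubit 3:
\operatorname{Tr}_3\,|\mathrm{W}\rangle\langle\mathrm{W}|
  = \tfrac{1}{3}\,|00\rangle\langle 00|
  + \tfrac{2}{3}\,|\Psi^{+}\rangle\langle\Psi^{+}|,
\qquad
|\Psi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|01\rangle + |10\rangle\bigr).
% The weight 2/3 on the Bell state |Psi+> means the remaining pair is still
% entangled, whereas the same partial trace applied to the GHZ state gives
% (|00><00| + |11><11|)/2, which is separable.
```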
The states in the W class can be distinguished from all other 3-qubit states by means of multipartite entanglement measures. In particular, W states have non-zero entanglement across any bipartition, while their 3-tangle vanishes; the 3-tangle is non-zero for GHZ-type states.
Generalization
The notion of the W state has been generalized to n qubits and then refers to the quantum superposition, with equal expansion coefficients, of all possible pure states in which exactly one of the qubits is in an "excited state" |1⟩, while all other ones are in the "ground state" |0⟩:

|W_n⟩ = (|10…0⟩ + |01…0⟩ + … + |00…1⟩)/√n.
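A minimal sketch of this definition, assuming nothing beyond NumPy and an arbitrarily named helper, constructs the n-qubit W state as an explicit state vector:

```python
import numpy as np

def build_w_state(n: int) -> np.ndarray:
    """Return the n-qubit W state as a vector of length 2**n."""
    state = np.zeros(2 ** n)
    for k in range(n):
        # Basis state with a single excitation on qubit k:
        # the index has a 1 in bit position (n - 1 - k) and 0 elsewhere.
        state[1 << (n - 1 - k)] = 1.0
    return state / np.sqrt(n)

# Example: the 3-qubit W state has amplitude 1/sqrt(3) on |001>, |010>, |100>.
w3 = build_w_state(3)
print(np.nonzero(w3)[0])                     # indices 1, 2, 4
print(np.isclose(np.linalg.norm(w3), 1.0))   # normalized
```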
Both the robustness against particle loss and the LOCC-inequivalence with the (generalized) GHZ state also hold for the n-qubit W state.
Applications
In systems in which a single qubit is stored in an ensemble of many two-level systems, the logical "1" is often represented by the W state, while the logical "0" is represented by the state |00…0⟩ in which all two-level systems are in their ground state. Here the W state's robustness against particle loss is a very beneficial property, ensuring good storage properties of these ensemble-based quantum memories.
See also
NOON state
References
Quantum information theory
Quantum states | W state | Physics | 507 |
1,529,808 | https://en.wikipedia.org/wiki/Farrington%20Daniels | Farrington Daniels (March 8, 1889 – June 23, 1972) was an American physical chemist who is considered one of the pioneers of the modern direct use of solar energy.
Biography
Daniels was born in Minneapolis, Minnesota on March 8, 1889. Daniels began day school in 1895 at the Kenwood School and then on to Douglas School. As a boy, he was fascinated with Thomas Edison, Samuel F. B. Morse, Alexander Graham Bell, and John Charles Fields. He decided early that he wanted to be an electrician and inventor. He attended Central and East Side high schools. By this point he liked chemistry and physics, but equally enjoyed "Manual Training."
In 1906, he entered the University of Minnesota, majoring in chemistry and adding to the usual mathematics and analytical courses some courses in botany and scientific German. He was initiated into the Beta Chapter of Alpha Chi Sigma in 1908. He sometimes worked summers as a railroad surveyor. He took his degree in chemistry in 1910. The following year he spent half his time in teaching and received an MS for graduate work in physical chemistry. He entered Harvard in 1911, paying for his studies partly through a teaching fellowship, and received a PhD in 1914. His doctoral research on the electrochemistry of thallium alloys was supervised by Theodore William Richards.
In the summer of 1912, Daniels had visited England and Europe. After he earned his PhD, Harvard would have sent him on a traveling fellowship in Europe, but World War I broke out. So instead he accepted a position as instructor at the Worcester Polytechnic Institute, where, besides teaching, he found he had considerable time for research in calorimetry, for which he received a grant from the American Academy of Arts and Sciences. He joined the University of Wisconsin as an assistant professor in 1920 and remained there until his retirement in 1959, serving as chairman of the chemistry department.
During World War II, Daniels joined the staff of the Metallurgical Laboratory, a part of the Manhattan Project effort by the United States to develop the first nuclear weapons. He served first as associate director of the laboratory's chemistry division from the summer of 1944 before becoming overall director of the institution on July 1, 1945, a post he held until May 1946. He was active in the planning of the laboratory's immediate successor, the Argonne National Laboratory, serving as first chairman of its Board of Governors from 1946 until 1948. It was in that role, in 1947, that Daniels conceived the pebble bed reactor, a reactor design in which helium rises through fissioning uranium oxide or carbide pebbles, cooling them by carrying away heat for power production. The "Daniels pile" was an early version of the high-temperature gas-cooled reactor; it was developed further at ORNL without success, but the concept was later developed into a nuclear power reactor by Rudolf Schulten.
Daniels became concerned to limit or stop the nuclear arms race after the war. In that regard, he became a board member of the Bulletin of the Atomic Scientists.
Daniels is also known for writing several textbooks on physical chemistry, including Mathematical preparation for physical chemistry (1928), Experimental physical chemistry, co-authored with J. Howard Mathews and John Warren (1934), Chemical Kinetics (1938), Physical Chemistry, co-authored with Robert Alberty (1957). Some of these books went through many subsequent editions until about 1980.
He was elected in 1928 a Fellow of the American Association for the Advancement of Science (AAAS). He was elected to the United States National Academy of Sciences in 1947 and the American Philosophical Society in 1948. He was awarded the Priestley Medal and elected to the American Academy of Arts and Sciences in 1957.
Daniels died on June 23, 1972, from complications from liver cancer. He was survived by his wife, four children, and twelve grandchildren.
He was inducted posthumously to the Alpha Chi Sigma Hall of Fame in 1982.
Involvement with solar energy
Daniels became a leading American expert on the principles involved with the practical utilization of solar energy. He pursued an understanding of the heat and convection that can be derived from it, as well as the electrical energy. As Director of the University of Wisconsin–Madison's Solar Energy Laboratory, he explored such areas of practical application as cooking, space heating, agricultural and industrial drying, distillation, cooling and refrigeration, and photo- and thermo-electric conversion, and he was also interested in energy storage. In particular, he believed there were many practical applications of solar energy for ready use in the developing world.
Daniels was active with the Association for Applied Solar Energy in the mid-1950s. He suggested that AFASE embark upon the publication of a scientific journal, and the first issue of The Journal of Solar Energy Science and Engineering appeared in January, 1957. Later, as Professor Emeritus of Chemistry of the University of Wisconsin–Madison, he led a group of solar scientists who proposed that AFASE be reorganized, that its directors and officers be elected by the membership, and that the name be changed to The Solar Energy Society – all of which was done. He supported solar energy because, as he said in 1955, "We realize, as never before, that our fossil fuels – coal, oil, and gas – will not last forever."
One of his classic books is Direct Use of the Sun's Energy, published by Yale University Press in 1964. The book was reprinted in a mass market edition in 1974 by Ballantine Books, after the 1973 oil crisis, and was described as "The best book on solar energy that I know of" by the Whole Earth Catalog's Steve Baer.
References
External links
National Academy of Sciences Biographical Memoir
1889 births
1972 deaths
American Congregationalists
20th-century American engineers
American physical chemists
Harvard University alumni
University of Minnesota College of Liberal Arts alumni
University of Wisconsin–Madison faculty
Manhattan Project people
Worcester Polytechnic Institute faculty
People associated with renewable energy
Presidents of the Geochemical Society
Fellows of the American Association for the Advancement of Science
Members of the American Philosophical Society
20th-century American chemists | Farrington Daniels | Chemistry | 1,224 |
63,327,266 | https://en.wikipedia.org/wiki/Lunar%20penetrometer | The lunar penetrometer was a spherical electronic tool that served to measure the load-bearing characteristics of the Moon in preparation for spacecraft landings. It was designed by NASA to be dropped onto the surface from a vehicle orbiting overhead and transmit information to the spacecraft. However, despite it being proposed for several lunar and planetary missions, the device was never actually fielded by NASA.
History
The lunar penetrometer was first developed in the early 1960s as part of NASA Langley Research Center’s Lunar Penetrometer Program. At the time, immense pressures from the ongoing Space Race caused NASA to shift its focus from conducting purely scientific lunar expeditions to landing a man on the Moon before the Russians. As a result, the Jet Propulsion Laboratory's lunar flight projects, Ranger and Surveyor, were reconfigured to provide direct support to Project Apollo.
One of the major problems that NASA faced in preparation for the Apollo Moon landing was the inability to determine the surface characteristics of the Moon with regard to spacecraft landings and post-landing locomotion of exploratory vehicles and personnel. While radio and optical technology situated on Earth at the time could make out large-scale characteristics such as the size and distribution of mountains and craters, there wasn't an Earth-based method of measuring small-scale features, such as the lunar surface texture and topographical details, with adequate resolution. In 1961, NASA's chief engineer Abe Silverstein proposed to the U.S. Congress that Project Ranger would help provide important data on the Moon's surface topography to facilitate the Apollo lunar landing. Once funding was provided to the Ranger program, Silverstein directed NASA laboratories to investigate potential instruments that could return information on the hardness of the lunar surface.
Introduced shortly after Silverstein's directive, the Lunar Penetrometer Program devised the development of an impact-measuring instrumented projectile, or penetrometer, that provided preliminary information about the Moon's surface. The lunar penetrometer housed an impact accelerometer that measured the deceleration time history of the projectile as it made contact with the lunar surface to measure its hardness, bearing strength, and penetrability as well as a radio telemeter that could transmit the impact information to a remote receiver. Knowledge of the complete impact acceleration time history would have also made it possible for NASA researchers to ascertain the physical composition of the soil and whether it was granular, powdery, or brittle. If successful, the lunar penetrometer was planned for deployment for uncrewed landings in the Ranger and Surveyor programs as well as for the Apollo mission.
However, the Jet Propulsion Laboratory Space Sciences Division Manager Robert Meghreblian decided in August 1963 that the use of the lunar penetrometer to provide information on the lunar surface in situ was too risky. Instead, it was decided that the lunar surface composition would be determined by using gamma-ray spectrometry and surface topography via television photography and radar probing. In 1966, the lunar penetrometer was investigated as a potential sounding device for the Apollo missions, but no information exists on whether it was used in that manner.
Design
In order to function properly, the lunar penetrometer was designed to sense the accelerations encountered by the projectile body during the impact process and telemeter the collected information to a nearby receiving station. Doing so required the penetrometer to package an acceleration sensing device as well as an independent telemetry system with a power supply, transmitter, and antenna system. The components also needed to be housed within a casing that could withstand a wide range of impact loads.
The lunar penetrometer came in the form of a spherical omnidirectional penetrometer that did not have to account for the orientation of the penetrometer during impact, which was difficult to factor in an environment with little to no atmosphere like the lunar surface. The omnidirectional design packaged the accelerometer, computer, power supply, and the telemetry system within a 3-inch diameter sphere. The lunar penetrometer's spherical instrumentation compartment had an omnidirectional acceleration sensor located at the center surrounded by concentrically placed batteries and electronic modules. The components were enclosed within an electromagnetic shield that provided a uniform metallic reference for the omnidirectional antenna encircling the instrumentation compartment. Outside the compartment, an impact limiter made out of balsa wood provided shock absorption to limit the impact forces on the internal components to tolerable levels and provided a low overall penetrometer density to assure sensitivity to soft, weak target surfaces. The balsa impact limiter was coated in a thin outer shell made out of fiber-glass epoxy.
Accelerometer
As part of the Lunar Penetrometer Program, the NASA Langley Research Center tasked the Harry Diamond Laboratories (later consolidated to form the U.S. Army Research Laboratory) with the development of the omnidirectional accelerometer for the lunar penetrometer. The omnidirectional accelerometer, or the omnidirectional acceleration sensor, was an accelerometer capable of measuring the acceleration time histories independent of its angular acceleration or orientation at impact. The researchers at Harry Diamond Laboratories originally employed a hollow piezoelectric sphere but later transitioned to modifying a conventional triaxial accelerometer. The instantaneous magnitude of the acceleration was computed by obtaining the square root of the sum of the squares of the three orthogonal, acceleration-time signatures. The omnidirectional accelerometer withstood a maximum of 40,000 G during shock testing and operated using a 20V power supply drawing 10 mA.
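The magnitude computation described above is straightforward; the following minimal Python sketch illustrates it with made-up sample data (the channel values and array names are hypothetical, not measurements from the actual instrument):

import numpy as np

# Hypothetical deceleration-time signatures from the three orthogonal axes (in G)
ax = np.array([0.0, 120.0, 480.0, 260.0, 30.0])
ay = np.array([0.0,  40.0, 150.0,  90.0, 10.0])
az = np.array([0.0,  80.0, 310.0, 170.0, 20.0])

# Instantaneous magnitude: square root of the sum of the squares of the three channels
a_total = np.sqrt(ax**2 + ay**2 + az**2)

print(a_total.max())  # peak impact deceleration, used to characterize surface hardness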
Telemetry system
The telemetry system for the lunar penetrometer was commissioned by NASA to the Canadian defence contractor Computing Devices of Canada (now known as General Dynamics Mission Systems). It consisted of a network that fed the output of the accelerometer to a radio frequency power amplifier that was also connected to a master oscillator and a buffer amplifier. The amplifiers and the oscillator functioned together to act as a transmitter, whose outputs were fed to a spherical antenna that was embedded in the outer skin of the penetrometer.
Relay craft
Due to limitations in available power, antenna efficiency, and other factors, the impact acceleration information from the lunar penetrometers could not be transmitted for extensive distances. As a result, a relay craft needed to be placed within the transmission field of the lunar penetrometers to intercept the lunar penetrometer signals and transmit them to a distant receiving station. When located within moderate range of a receiving station like a parent spacecraft, the relay craft served to simply amplify and redirect the lunar penetrometer signals. At greater distances, the relay craft would perform data signal processing where it exchanged the peak power requirement of instantaneous data transmission for longer transmission time to decrease the demands placed upon the power supply. The relay craft functioned so that it would receive the lunar penetrometer signals and transmit them to the receiving station only after the lunar penetrometers landed on the surface and before the relay craft itself crashed onto the ground. As a result, a strict time limit would be imposed on the relay craft to deliver the necessary data sent by the penetrometers.
Operation
During lunar reconnaissance, a payload containing the lunar penetrometer and the relay station structure would be mounted on the spacecraft as it traveled to its destination. Above the lunar surface, the spacecraft would release the payload, which would spin for axis attitude stability and use the main retrorocket motor to reduce the descent velocity. At approximately 5,600 feet above the target area, the second retrorocket would fire once the main retrorocket was jettisoned from the payload. The centrifugal force resulting from the spin stabilization technique would cause a salvo of lunar penetrometers to disperse and free fall toward the lunar surface. The payload carriage would hold 16 lunar penetrometers in total that would be released in salvos of four at about 2 second intervals. The impact of the lunar penetrometers would be categorized as elastic, plastic, or penetration depending on the target surface. After the secondary retrorocket burned out, the payload would free fall to the lunar surface as well. Once the penetrometers made contact with the lunar surface, the impact information would be transmitted to the descending payload relay station, which would in turn relay the data to an antenna system on Earth. In short, this chain of communication would take place within the time interval between the release of the lunar penetrometers and the moment the payload relay station landed on the lunar surface.
Testing
Shock testing
Harry Diamond Laboratories was tasked with developing a high-energy shock testing method that monitored the omnidirectional accelerometer's behavior during acceleration peaking at 20,000 G. Components of the omnidirectional accelerometer, such as the resistors, capacitors, oscillators, and magnetic cores, were subjected to a modified air gun test. The component being tested was placed within a target body inside an extension tube in front of an air gun. The air gun would fire a projectile, impacting the target body and accelerating it to a peak of 20,000 G until it hit the lead target only a short distance away inside the extension tube. The results of the shock test showed that the resistors and capacitors changed very little during shock, while the commercial subcarrier oscillator and the tape-wound magnetic cores were affected considerably.
Impact testing
More than 200 impact tests were conducted with the spherical lunar penetrometer to investigate its soil penetration characteristics. Most consisted of impacting the penetrometers against a wide range of target materials at velocities ranging from 6 to 76 m/s and then recording the measured impact characteristics. Several experiments investigated the penetrometer's ability to predict the depth to which a lunar module would penetrate the surface of the landing zone. The results of these studies found that the lunar penetrometers were successful in not only identifying the nature of the impacted surface, i.e. whether the surface was rigid or collapsible, but also in distinguishing between particulate materials of different bearing strength from peak impact accelerations. The lunar penetrometers were able to accurately predict the conditions of the landing pad penetrations.
Sounding device application
The lunar penetrometer was studied as a potential sounding device for a crewed Apollo lunar module landing in 1966. The device was suggested to assist astronauts in on-the-spot decision making regarding whether a safe landing of the lunar module could be made. Once dropped individually or in salvo within the landing zone, the lunar penetrometers could autonomously transmit an acceleration-time profile upon impact and characterize the surface hardness of the landing zone. A short study on the feasibility of this application was conducted to determine the flight, trajectory, and impact parameters of the lunar penetrometers once launched from a lunar module. The study found that the lunar penetrometer's impact velocities were limited to a range from 120 ft/s to 200 ft/s, meaning that the impact angles would have to vary between 54 and 62 degrees from the vertical. The earliest that a lunar penetrometer had to be launched was at a range of 3,400 feet and an altitude of 1,075 feet, which would grant the crew in the lunar module 16 seconds to analyze the penetrometer data.
References
Measuring instruments
Impactor spacecraft
Exploration of the Moon | Lunar penetrometer | Technology,Engineering | 2,409 |
12,335,752 | https://en.wikipedia.org/wiki/Energy%20minimization | In the field of computational chemistry, energy minimization (also called energy optimization, geometry minimization, or geometry optimization) is the process of finding an arrangement in space of a collection of atoms where, according to some computational model of chemical bonding, the net inter-atomic force on each atom is acceptably close to zero and the position on the potential energy surface (PES) is a stationary point (described later). The collection of atoms might be a single molecule, an ion, a condensed phase, a transition state or even a collection of any of these. The computational model of chemical bonding might, for example, be quantum mechanics.
As an example, when optimizing the geometry of a water molecule, one aims to obtain the hydrogen-oxygen bond lengths and the hydrogen-oxygen-hydrogen bond angle which minimize the forces that would otherwise be pulling atoms together or pushing them apart.
The motivation for performing a geometry optimization is the physical significance of the obtained structure: optimized structures often correspond to a substance as it is found in nature and the geometry of such a structure can be used in a variety of experimental and theoretical investigations in the fields of chemical structure, thermodynamics, chemical kinetics, spectroscopy and others.
Typically, but not always, the process seeks to find the geometry of a particular arrangement of the atoms that represents a local or global energy minimum. Instead of searching for global energy minimum, it might be desirable to optimize to a transition state, that is, a saddle point on the potential energy surface. Additionally, certain coordinates (such as a chemical bond length) might be fixed during the optimization.
Molecular geometry and mathematical interpretation
The geometry of a set of atoms can be described by a vector of the atoms' positions. This could be the set of the Cartesian coordinates of the atoms or, when considering molecules, might be so called internal coordinates formed from a set of bond lengths, bond angles and dihedral angles.
Given a set of atoms and a vector, $\mathbf{r}$, describing the atoms' positions, one can introduce the concept of the energy as a function of the positions, $E(\mathbf{r})$. Geometry optimization is then a mathematical optimization problem, in which it is desired to find the value of $\mathbf{r}$ for which $E(\mathbf{r})$ is at a local minimum, that is, the derivative of the energy with respect to the position of the atoms, $\partial E/\partial \mathbf{r}$, is the zero vector and the second derivative matrix of the system, $\partial^{2} E/\partial r_i\,\partial r_j$, also known as the Hessian matrix, which describes the curvature of the PES at $\mathbf{r}$, has all positive eigenvalues (is positive definite).
A special case of a geometry optimization is a search for the geometry of a transition state; this is discussed below.
The computational model that provides an approximate $E(\mathbf{r})$ could be based on quantum mechanics (using either density functional theory or semi-empirical methods), force fields, or a combination of those in case of QM/MM. Using this computational model and an initial guess (or ansatz) of the correct geometry, an iterative optimization procedure is followed, for example (a schematic sketch of such a loop is given after the list):
calculate the force on each atom (that is, $\mathbf{F} = -\partial E/\partial \mathbf{r}$)
if the force is less than some threshold, finish
otherwise, move the atoms by some computed step that is predicted to reduce the force
repeat from the start
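Purely as an illustration of the loop above, the following Python sketch minimizes a toy one-dimensional potential energy surface (a single harmonic "bond" between two atoms on a line); the force constant, step size, and convergence threshold are arbitrary assumptions and are not drawn from any particular chemistry package:

import numpy as np

def energy_and_forces(x):
    # Toy PES: two atoms on a line joined by a harmonic "bond" (k and r0 are arbitrary)
    k, r0 = 500.0, 0.96
    r = abs(x[1] - x[0])
    energy = 0.5 * k * (r - r0)**2
    dEdr = k * (r - r0)
    grad = np.array([-dEdr, dEdr]) * np.sign(x[1] - x[0])
    return energy, -grad          # forces are the negative gradient

x = np.array([0.0, 1.20])         # initial guess of the geometry
for step in range(500):
    e, f = energy_and_forces(x)
    if np.abs(f).max() < 1e-4:    # forces below threshold: finished
        break
    x = x + 0.0005 * f            # move atoms a small step along the forces
print(step, e, x)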
Practical aspects of optimization
As described above, some method such as quantum mechanics can be used to calculate the energy, $E(\mathbf{r})$, the gradient of the PES, that is, the derivative of the energy with respect to the position of the atoms, $\partial E/\partial \mathbf{r}$, and the second derivative matrix of the system, $\partial^{2} E/\partial r_i\,\partial r_j$, also known as the Hessian matrix, which describes the curvature of the PES at $\mathbf{r}$.
An optimization algorithm can use some or all of $E(\mathbf{r})$, $\partial E/\partial \mathbf{r}$ and $\partial^{2} E/\partial r_i\,\partial r_j$ to try to minimize the forces and this could in theory be any method such as gradient descent, conjugate gradient or Newton's method, but in practice, algorithms which use knowledge of the PES curvature, that is the Hessian matrix, are found to be superior. For most systems of practical interest, however, it may be prohibitively expensive to compute the second derivative matrix, and it is estimated from successive values of the gradient, as is typical in a quasi-Newton optimization.
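In practice such a quasi-Newton minimization is usually delegated to a library routine. The sketch below, assuming SciPy is available, applies its BFGS implementation (which builds up a Hessian approximation from successive gradients) to an illustrative Morse potential for a diatomic; the parameters are loosely H2-like and chosen only for demonstration:

import numpy as np
from scipy.optimize import minimize

def morse(r, De=4.6, a=1.9, r0=0.74):
    # Toy diatomic potential; De, a, r0 are illustrative parameters
    return De * (1.0 - np.exp(-a * (r - r0)))**2

def morse_grad(r, De=4.6, a=1.9, r0=0.74):
    e = np.exp(-a * (r - r0))
    return 2.0 * De * a * e * (1.0 - e)

res = minimize(lambda x: morse(x[0]),
               x0=[1.5],                          # initial bond length guess
               jac=lambda x: np.array([morse_grad(x[0])]),
               method="BFGS",                     # quasi-Newton: Hessian built from gradients
               options={"gtol": 1e-8})
print(res.x, res.fun)   # optimized bond length and corresponding energy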
The choice of the coordinate system can be crucial for performing a successful optimization. Cartesian coordinates, for example, are redundant since a non-linear molecule with $N$ atoms has $3N-6$ vibrational degrees of freedom whereas the set of Cartesian coordinates has $3N$ dimensions. Additionally, Cartesian coordinates are highly correlated, that is, the Hessian matrix has many non-diagonal terms that are not close to zero. This can lead to numerical problems in the optimization, because, for example, it is difficult to obtain a good approximation to the Hessian matrix and calculating it precisely is too computationally expensive. However, in the case that the energy is expressed with standard force fields, computationally efficient methods have been developed that are able to derive the Hessian matrix analytically in Cartesian coordinates while preserving a computational complexity of the same order as that of gradient computations. Internal coordinates tend to be less correlated but are more difficult to set up, and it can be difficult to describe some systems, such as ones with symmetry or large condensed phases. Many modern computational chemistry software packages contain automatic procedures for the generation of reasonable coordinate systems for optimization.
Degree of freedom restriction
Some degrees of freedom can be eliminated from an optimization, for example, positions of atoms or bond lengths and angles can be given fixed values. Sometimes these are referred to as being frozen degrees of freedom.
Figure 1 depicts a geometry optimization of the atoms in a carbon nanotube in the presence of an external electrostatic field. In this optimization, the atoms on the left have their positions frozen. Their interactions with the other atoms in the system are still calculated, but alteration of the atoms' positions during the optimization is prevented.
Transition state optimization
Transition state structures can be determined by searching for saddle points on the PES of the chemical species of interest. A first-order saddle point is a position on the PES corresponding to a minimum in all directions except one; a second-order saddle point is a minimum in all directions except two, and so on. Defined mathematically, an nth order saddle point is characterized by the following: $\partial E/\partial \mathbf{r} = 0$ and the Hessian matrix, $\partial^{2} E/\partial r_i\,\partial r_j$, has exactly n negative eigenvalues.
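Numerically, a stationary point can be classified by counting the negative eigenvalues of its Hessian. The short Python sketch below does this for a made-up 3 × 3 example matrix:

import numpy as np

def classify_stationary_point(hessian, tol=1e-8):
    # Count negative eigenvalues of the (symmetric) Hessian at a stationary point
    eigenvalues = np.linalg.eigvalsh(hessian)
    n_negative = int(np.sum(eigenvalues < -tol))
    if n_negative == 0:
        return "local minimum"
    return f"saddle point of order {n_negative}"

H = np.array([[ 2.0, 0.3, 0.0],
              [ 0.3, 1.5, 0.0],
              [ 0.0, 0.0, -0.7]])   # one negative eigenvalue -> first-order saddle
print(classify_stationary_point(H))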
Algorithms to locate transition state geometries fall into two main categories: local methods and semi-global methods. Local methods are suitable when the starting point for the optimization is very close to the true transition state (very close will be defined shortly) and semi-global methods find application when it is sought to locate the transition state with very little a priori knowledge of its geometry. Some methods, such as the Dimer method (see below), fall into both categories.
Local searches
A so-called local optimization requires an initial guess of the transition state that is very close to the true transition state. Very close typically means that the initial guess must have a corresponding Hessian matrix with one negative eigenvalue, or, the negative eigenvalue corresponding to the reaction coordinate must be greater in magnitude than the other negative eigenvalues. Further, the eigenvector with the most negative eigenvalue must correspond to the reaction coordinate, that is, it must represent the geometric transformation relating to the process whose transition state is sought.
Given the above pre-requisites, a local optimization algorithm can then move "uphill" along the eigenvector with the most negative eigenvalue and "downhill" along all other degrees of freedom, using something similar to a quasi-Newton method.
Dimer method
The dimer method can be used to find possible transition states without knowledge of the final structure or to refine a good guess of a transition structure. The “dimer” is formed by two images very close to each other on the PES. The method works by moving the dimer uphill from the starting position whilst rotating the dimer to find the direction of lowest curvature (ultimately negative).
Activation Relaxation Technique (ART)
The Activation Relaxation Technique (ART) is also an open-ended method to find new transition states or to refine known saddle points on the PES. The method follows the direction of lowest negative curvature (computed using the Lanczos algorithm) on the PES to reach the saddle point, relaxing in the perpendicular hyperplane between each "jump" (activation) in this direction.
Chain-of-state methods
Chain-of-state methods can be used to find the approximate geometry of the transition state based on the geometries of the reactant and product. The generated approximate geometry can then serve as a starting point for refinement via a local search, which was described above.
Chain-of-state methods use a series of vectors, that is points on the PES, connecting the reactant and product of the reaction of interest, $\mathbf{r}_\text{reactant}$ and $\mathbf{r}_\text{product}$, thus discretizing the reaction pathway. Very commonly, these points are referred to as beads due to an analogy of a set of beads connected by strings or springs, which connect the reactant and products. The series of beads is often initially created by interpolating between $\mathbf{r}_\text{reactant}$ and $\mathbf{r}_\text{product}$, for example, for a series of $N$ beads, bead $i$ might be given by
$\mathbf{r}_i = \mathbf{r}_\text{reactant} + \frac{i}{N+1}\left(\mathbf{r}_\text{product} - \mathbf{r}_\text{reactant}\right)$,
where $i = 1, \ldots, N$. Each of the beads has an energy, $E(\mathbf{r}_i)$, and forces, $\mathbf{F}_i = -\nabla E(\mathbf{r}_i)$, and these are treated with a constrained optimization process that seeks to get an as accurate as possible representation of the reaction pathway. For this to be achieved, spacing constraints must be applied so that each bead does not simply get optimized to the reactant and product geometry.
Often this constraint is achieved by projecting out components of the force on each bead, or alternatively the movement of each bead during optimization, that are tangential to the reaction path. For example, if for convenience, it is defined that $\mathbf{g}_i = \nabla E(\mathbf{r}_i)$, then the energy gradient at each bead minus the component of the energy gradient that is tangential to the reaction pathway is given by
$\mathbf{g}_i^{\perp} = \mathbf{g}_i - \left(\mathbf{g}_i \cdot \hat{\boldsymbol{\tau}}_i\right)\hat{\boldsymbol{\tau}}_i = \left(\mathbf{I} - \hat{\boldsymbol{\tau}}_i \hat{\boldsymbol{\tau}}_i^{\mathsf{T}}\right)\mathbf{g}_i$,
where $\mathbf{I}$ is the identity matrix and $\hat{\boldsymbol{\tau}}_i$ is a unit vector representing the reaction path tangent at $\mathbf{r}_i$. By projecting out components of the energy gradient or the optimization step that are parallel to the reaction path, an optimization algorithm significantly reduces the tendency of each of the beads to be optimized directly to a minimum.
Synchronous transit
The simplest chain-of-state method is the linear synchronous transit (LST) method. It operates by taking interpolated points between the reactant and product geometries and choosing the one with the highest energy for subsequent refinement via a local search. The quadratic synchronous transit (QST) method extends LST by allowing a parabolic reaction path, with optimization of the highest energy point orthogonally to the parabola.
Nudged elastic band
In the Nudged elastic band (NEB) method, the beads along the reaction pathway have simulated spring forces in addition to the chemical forces, $-\mathbf{g}_i^{\perp}$, to cause the optimizer to maintain the spacing constraint. Specifically, the force on each point $i$ is given by
$\mathbf{F}_i = \mathbf{F}_i^{s\parallel} - \mathbf{g}_i^{\perp}$,
where
$\mathbf{F}_i^{s\parallel} = k\left(\left|\mathbf{r}_{i+1} - \mathbf{r}_i\right| - \left|\mathbf{r}_i - \mathbf{r}_{i-1}\right|\right)\hat{\boldsymbol{\tau}}_i$
is the spring force parallel to the pathway at each point ($k$ is a spring constant and $\hat{\boldsymbol{\tau}}_i$, as before, is a unit vector representing the reaction path tangent at $\mathbf{r}_i$).
In a traditional implementation, the point with the highest energy is used for subsequent refinement in a local search. There are many variations on the NEB method, such as the climbing image NEB, in which the point with the highest energy is pushed upwards during the optimization procedure so as to (hopefully) give a geometry which is even closer to that of the transition state. There have also been extensions to include Gaussian process regression for reducing the number of evaluations. For systems with non-Euclidean (R^2) geometry, like magnetic systems, the method is modified to the geodesic nudged elastic band approach.
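A compact sketch of the NEB force construction defined above is given below; the bead array, the gradient callback, and the spring constant are assumed to be supplied by the caller, and the names are illustrative rather than taken from any specific NEB implementation:

import numpy as np

def neb_forces(beads, grad, k=1.0):
    """beads: array of shape (n_beads, n_dof); grad(r) returns the PES gradient at r."""
    forces = np.zeros_like(beads)
    for i in range(1, len(beads) - 1):          # end beads (reactant, product) stay fixed
        # Tangent estimated from the neighbouring beads, then normalized
        tau = beads[i + 1] - beads[i - 1]
        tau /= np.linalg.norm(tau)
        # True force with its component along the path projected out
        g = grad(beads[i])
        g_perp = g - np.dot(g, tau) * tau
        # Spring force acting only along the path, keeping the beads evenly spaced
        spring = k * (np.linalg.norm(beads[i + 1] - beads[i])
                      - np.linalg.norm(beads[i] - beads[i - 1]))
        forces[i] = spring * tau - g_perp
    return forces

Each optimization step would then displace the interior beads by a small amount along these forces, while the end beads (reactant and product) remain fixed.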
String method
The string method uses splines connecting the points, $\mathbf{r}_i$, to measure and enforce distance constraints between the points and to calculate the tangent at each point. In each step of an optimization procedure, the points might be moved according to the force acting on them perpendicular to the path, and then, if the equidistance constraint between the points is no longer satisfied, the points can be redistributed, using the spline representation of the path to generate new vectors with the required spacing.
Variations on the string method include the growing string method, in which the guess of the pathway is grown in from the end points (that is the reactant and products) as the optimization progresses.
Comparison with other techniques
Geometry optimization is fundamentally different from a molecular dynamics simulation. The latter simulates the motion of molecules with respect to time, subject to temperature, chemical forces, initial velocities, Brownian motion of a solvent, and so on, via the application of Newton's laws of motion. This means that the trajectories of the atoms which get computed have some physical meaning. Geometry optimization, by contrast, does not produce a "trajectory" with any physical meaning – it is concerned with minimization of the forces acting on each atom in a collection of atoms, and the pathway via which it achieves this lacks meaning. Different optimization algorithms could give the same result for the minimum energy structure, but arrive at it via a different pathway.
See also
Constraint composite graph
Graph cuts in computer vision – apparatus for solving computer vision problems that can be formulated in terms of energy minimization
Energy principles in structural mechanics
References
External links
Numerical Recipes in Fortran 77
Additional references
Payne et al., "Iterative minimization techniques for ab initio total-energy calculations: Molecular dynamics and conjugate gradients", Reviews of Modern Physics 64 (4), pp. 1045–1097. (1992) (abstract)
Stich et al., "Conjugate gradient minimization of the energy functional: A new method for electronic structure calculation", Physical Review B 39 (8), pp. 4997–5004, (1989)
Chadi, "Energy-minimization approach to the atomic geometry of semiconductor surfaces", Physical Review Letters 41 (15), pp. 1062–1065 (1978)
Mathematical optimization
Computational chemistry | Energy minimization | Chemistry,Mathematics | 2,875 |
229,643 | https://en.wikipedia.org/wiki/Molality | In chemistry, molality is a measure of the amount of solute in a solution relative to a given mass of solvent. This contrasts with the definition of molarity which is based on a given volume of solution.
A commonly used unit for molality is moles per kilogram (mol/kg). A solution of concentration 1 mol/kg is also sometimes denoted as 1 molal. The unit mol/kg requires that molar mass be expressed in kg/mol, instead of the usual g/mol or kg/kmol.
Definition
The molality, b, of a solution is defined as the amount of substance (in moles) of solute, nsolute, divided by the mass (in kg) of the solvent, msolvent:
$b = \frac{n_\text{solute}}{m_\text{solvent}}$.
In the case of solutions with more than one solvent, molality can be defined for the mixed solvent considered as a pure pseudo-solvent. Instead of mole solute per kilogram solvent as in the binary case, units are defined as mole solute per kilogram mixed solvent.
Origin
The term molality is formed in analogy to molarity which is the molar concentration of a solution. The earliest known use of the intensive property molality and of its adjectival unit, the now-deprecated molal, appears to have been published by G. N. Lewis and M. Randall in the 1923 publication of Thermodynamics and the Free Energies of Chemical Substances. Though the two terms are subject to being confused with one another, the molality and molarity of a dilute aqueous solution are nearly the same, as one kilogram of water (solvent) occupies the volume of 1 liter at room temperature and a small amount of solute has little effect on the volume.
Unit
The SI unit for molality is moles per kilogram of solvent.
A solution with a molality of 3 mol/kg is often described as "3 molal" or "3 m". However, following the SI system of units, the National Institute of Standards and Technology, the United States authority on measurement, considers the term "molal" and the unit symbol "m" to be obsolete, and suggests mol/kg or a related unit of the SI.
Usage considerations
Advantages
The primary advantage of using molality as a measure of concentration is that molality only depends on the masses of solute and solvent, which are unaffected by variations in temperature and pressure. In contrast, solutions prepared volumetrically (e.g. molar concentration or mass concentration) are likely to change as temperature and pressure change. In many applications, this is a significant advantage because the mass, or the amount, of a substance is often more important than its volume (e.g. in a limiting reagent problem).
Another advantage of molality is the fact that the molality of one solute in a solution is independent of the presence or absence of other solutes.
Problem areas
Unlike all the other compositional properties listed in the "Relation to other compositional quantities" section (below), molality depends on the choice of the substance to be called “solvent” in an arbitrary mixture. If there is only one pure liquid substance in a mixture, the choice is clear, but not all solutions are this clear-cut: in an alcohol–water solution, either one could be called the solvent; in an alloy, or solid solution, there is no clear choice and all constituents may be treated alike. In such situations, mass or mole fraction is the preferred compositional specification.
Relation to other compositional quantities
In what follows, the solvent may be given the same treatment as the other constituents of the solution, such that the molality of the solvent of an n-solute solution, say b0, is found to be nothing more than the reciprocal of its molar mass, M0 (expressed in the unit kg/mol):
$b_0 = \frac{n_0}{n_0 M_0} = \frac{1}{M_0}$.
For the solutes the expression of molalities is similar:
$b_i = \frac{n_i}{n_0 M_0}$.
The expressions linking molalities to mass fractions and mass concentrations contain the molar masses of the solutes Mi:
$b_i = \frac{w_i}{w_0 M_i} = \frac{\rho_i}{\rho_0 M_i}$.
Similarly the equalities below are obtained from the definitions of the molalities and of the other compositional quantities.
The mole fraction of solvent can be obtained from the definition by dividing the numerator and denominator by the amount of solvent n0:
$x_0 = \frac{n_0}{n_0 + \sum_{i} n_i} = \frac{1}{1 + \sum_{i} \frac{n_i}{n_0}}$.
Then the ratios of the other mole amounts to the amount of solvent are substituted with expressions containing molalities ($n_i/n_0 = b_i M_0$),
giving the result
$x_0 = \frac{1}{1 + M_0 \sum_{i} b_i}$.
Mass fraction
The conversions to and from the mass fraction, w1, of the solute in a single-solute solution are
$w_1 = \frac{b_1 M_1}{1 + b_1 M_1}, \qquad b_1 = \frac{w_1}{\left(1 - w_1\right) M_1}$,
where b1 is the molality and M1 is the molar mass of the solute.
More generally, for an n-solute/one-solvent solution, letting bi and wi be, respectively, the molality and mass fraction of the i-th solute,
$b_i = \frac{w_i}{w_0 M_i}$,
where Mi is the molar mass of the ith solute, and w0 is the mass fraction of the solvent, which is expressible both as a function of the molalities as well as a function of the other mass fractions,
$w_0 = \frac{1}{1 + \sum_{j} b_j M_j} = 1 - \sum_{j} w_j$.
Substitution gives:
$b_i = \frac{w_i}{\left(1 - \sum_{j} w_j\right) M_i}$.
Mole fraction
The conversions to and from the mole fraction, x1, of the solute in a single-solute solution are
$x_1 = \frac{b_1 M_0}{1 + b_1 M_0}, \qquad b_1 = \frac{x_1}{M_0 \left(1 - x_1\right)}$,
where M0 is the molar mass of the solvent.
More generally, for an n-solute/one-solvent solution, letting xi be the mole fraction of the ith solute,
$b_i = \frac{x_i}{x_0 M_0}$,
where x0 is the mole fraction of the solvent, expressible both as a function of the molalities as well as a function of the other mole fractions:
$x_0 = \frac{1}{1 + M_0 \sum_{j} b_j} = 1 - \sum_{j} x_j$.
Substitution gives:
$b_i = \frac{x_i}{\left(1 - \sum_{j} x_j\right) M_0}$.
Molar concentration (molarity)
The conversions to and from the molar concentration, c1, for one-solute solutions are
$c_1 = \frac{\rho\, b_1}{1 + b_1 M_1}, \qquad b_1 = \frac{c_1}{\rho - c_1 M_1}$,
where ρ is the mass density of the solution, b1 is the molality, and M1 is the molar mass (in kg/mol) of the solute.
For solutions with n solutes, the conversions are
$b_i = \frac{c_i}{c_0 M_0}$,
where the molar concentration of the solvent c0 is expressible both as a function of the molalities as well as a function of the other molarities:
$c_0 = \frac{\rho}{M_0\left(1 + \sum_{j} b_j M_j\right)} = \frac{\rho - \sum_{j} c_j M_j}{M_0}$.
Substitution gives:
$b_i = \frac{c_i}{\rho - \sum_{j} c_j M_j}$,
Mass concentration
The conversions to and from the mass concentration, ρsolute, of a single-solute solution are
$\rho_\text{solute} = \frac{\rho\, b_1 M_1}{1 + b_1 M_1}$,
or
$b_1 = \frac{\rho_\text{solute}}{M_1\left(\rho - \rho_\text{solute}\right)}$,
where ρ is the mass density of the solution, b1 is the molality, and M1 is the molar mass of the solute.
For the general n-solute solution, the mass concentration of the ith solute, ρi, is related to its molality, bi, as follows:
$b_i = \frac{\rho_i}{\rho_0 M_i}$,
where the mass concentration of the solvent, ρ0, is expressible both as a function of the molalities as well as a function of the other mass concentrations:
$\rho_0 = \frac{\rho}{1 + \sum_{j} b_j M_j} = \rho - \sum_{j} \rho_j$.
Substitution gives:
$b_i = \frac{\rho_i}{\left(\rho - \sum_{j} \rho_j\right) M_i}$.
Equal ratios
Alternatively, one may use just the last two equations given for the compositional property of the solvent in each of the preceding sections, together with the relationships given below, to derive the remainder of properties in that set:
$\frac{b_i}{b_j} = \frac{x_i}{x_j} = \frac{c_i}{c_j} = \frac{w_i M_j}{w_j M_i} = \frac{\rho_i M_j}{\rho_j M_i}$,
where i and j are subscripts representing all the constituents, the n solutes plus the solvent.
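The single-solute relations above can be collected into a few lines of code. The sketch below was written for this article (it is not from any chemistry library); the NaCl example values, including the assumed solution density, are only illustrative:

def molality_to_mole_fraction(b1, M0):
    # x1 = b1*M0 / (1 + b1*M0), with M0 the solvent molar mass in kg/mol
    return b1 * M0 / (1.0 + b1 * M0)

def molality_to_mass_fraction(b1, M1):
    # w1 = b1*M1 / (1 + b1*M1), with M1 the solute molar mass in kg/mol
    return b1 * M1 / (1.0 + b1 * M1)

def molality_to_molarity(b1, M1, rho):
    # c1 = rho*b1 / (1 + b1*M1); rho in kg/L gives c1 in mol/L
    return rho * b1 / (1.0 + b1 * M1)

# Example: 1 mol/kg aqueous NaCl (M1 = 0.0585 kg/mol, M0 = 0.018 kg/mol),
# with the solution density assumed to be roughly 1.036 kg/L
b1, M1, M0, rho = 1.0, 0.0585, 0.018, 1.036
print(molality_to_mole_fraction(b1, M0))   # ~0.0177
print(molality_to_mass_fraction(b1, M1))   # ~0.0553
print(molality_to_molarity(b1, M1, rho))   # ~0.98 mol/L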
Example of conversion
An acid mixture consists of 0.76, 0.04, and 0.20 mass fractions of 70% HNO3, 49% HF, and H2O, where the percentages refer to mass fractions of the bottled acids carrying a balance of H2O. The first step is determining the mass fractions of the constituents:
.
The approximate molar masses in kg/mol are
.
First derive the molality of the solvent, in mol/kg,
,
and use that to derive all the others by use of the equal ratios:
.
Actually, bH2O cancels out, because it is not needed. In this case, there is a more direct equation: we use it to derive the molality of HF:
.
The mole fractions may be derived from this result:
,
,
.
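The same worked example can be reproduced in a short script, assuming rounded molar masses of 0.063, 0.020 and 0.018 kg/mol for HNO3, HF and H2O; the printed values are only as accurate as those rounded inputs:

# Bottled-acid mass fractions in the mixture and acid strengths (from the text)
mix   = {"HNO3": 0.76, "HF": 0.04, "H2O": 0.20}
assay = {"HNO3": 0.70, "HF": 0.49, "H2O": 1.00}
M     = {"HNO3": 0.063, "HF": 0.020, "H2O": 0.018}   # approximate molar masses, kg/mol

# Constituent mass fractions: active acid from each bottle, water makes up the balance
w = {"HNO3": mix["HNO3"] * assay["HNO3"], "HF": mix["HF"] * assay["HF"]}
w["H2O"] = 1.0 - w["HNO3"] - w["HF"]

# Molalities (mol per kg of water): b_i = w_i / (w_H2O * M_i)
b = {s: w[s] / (w["H2O"] * M[s]) for s in ("HNO3", "HF", "H2O")}

# Mole fractions follow directly from the molalities
b_total = sum(b.values())
x = {s: b[s] / b_total for s in b}

print(w)   # approximately {'HNO3': 0.532, 'HF': 0.020, 'H2O': 0.448}
print(b)   # roughly 18.8, 2.2 and 55.6 mol/kg for HNO3, HF and H2O
print(x)   # roughly 0.25, 0.03 and 0.73 for HNO3, HF and H2O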
Osmolality
Osmolality is a variation of molality that takes into account only solutes that contribute to a solution's osmotic pressure. It is measured in osmoles of the solute per kilogram of water. This unit is frequently used in medical laboratory results in place of osmolarity, because it can be measured simply by depression of the freezing point of a solution, or cryoscopy (see also: osmostat and colligative properties).
Relation to apparent (molar) properties
Molality appears in the expression of the apparent (molar) volume of a solute as a function of the molality b of that solute (and density of the solution and solvent):
,
.
For multicomponent systems the relation is slightly modified by the sum of molalities of solutes. Also a total molality and a mean apparent molar volume can be defined for the solutes together and also a mean molar mass of the solutes as if they were a single solute. In this case the first equality from above is modified with the mean molar mass M of the pseudosolute instead of the molar mass of the single solute:
,
, yi,j being ratios involving molalities of solutes i,j and the total molality bT.
The sum of the products of the molalities and the apparent molar volumes of the solutes in their binary solutions equals the product of the sum of the molalities of the solutes and the apparent molar volume in the ternary or multicomponent solution.
.
Relation to apparent molar properties and activity coefficients
For concentrated ionic solutions the activity coefficient of the electrolyte is split into electric and statistical components.
The statistical part includes molality b, hydration index number h, the number of ions from the dissociation and the ratio ra between the apparent molar volume of the electrolyte and the molar volume of water.
The statistical part of the activity coefficient of a concentrated solution is:
.
Molalities of a ternary or multicomponent solution
The molalities of solutes b1, b2 in a ternary solution obtained by mixing two binary aqueous solutions with different solutes (say a sugar and a salt or two different salts) are different than the initial molalities of the solutes bii in their binary solutions:
,
,
,
.
The content of solvent in mass fractions w01 and w02 from each solution of masses ms1 and ms2 to be mixed as a function of initial molalities is calculated. Then the amount (mol) of solute from each binary solution is divided by the sum of masses of water after mixing:
,
.
Mass fractions of each solute in the initial solutions w11 and w22
are expressed as a function of the initial molalities b11, b22:
,
.
These expressions of the mass fractions are substituted into the final molalities:
,
.
The results for a ternary solution can be extended to a multicomponent solution (with more than two solutes).
From the molalities of the binary solutions
The molalities of the solutes in a ternary solution can also be expressed from the molalities in the binary solutions and their masses:
,
.
The binary solution molalities are:
,
.
The masses of the solutes determined from the molalities of the solutes and the masses of water can be substituted in the expressions of the masses of solutions:
.
Similarly for the mass of the second solution:
.
The masses of water appearing in the denominators of the molalities of the solutes in the ternary solution can be obtained as functions of the binary molalities and the masses of the solutions:
,
.
Thus the ternary molalities are:
,
.
For solutions with three or more solutes the denominator is a sum of the masses of solvent in the n binary solutions which are mixed:
,
,
.
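As a numerical illustration of these mixing relations, the sketch below computes the ternary molalities obtained by mixing two binary aqueous solutions, given their masses, initial molalities, and solute molar masses (all input values are made-up examples):

def mix_binary_solutions(ms1, b11, M1, ms2, b22, M2):
    """Return the molalities (b1, b2) after mixing two binary aqueous solutions.
    ms: solution mass in kg, b: initial molality in mol/kg, M: solute molar mass in kg/mol."""
    # Water mass in each binary solution: m_w = ms / (1 + b*M)
    mw1 = ms1 / (1.0 + b11 * M1)
    mw2 = ms2 / (1.0 + b22 * M2)
    # Moles of each solute carried in by its binary solution
    n1 = b11 * mw1
    n2 = b22 * mw2
    # Final molalities: moles of each solute per total mass of water
    mw_total = mw1 + mw2
    return n1 / mw_total, n2 / mw_total

# Example: 0.5 kg of a 2 mol/kg sucrose solution mixed with 0.3 kg of a 1 mol/kg NaCl solution
print(mix_binary_solutions(0.5, 2.0, 0.342, 0.3, 1.0, 0.0585))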
See also
Molarity
References
Chemical properties
Mass-specific quantities
64,045,652 | https://en.wikipedia.org/wiki/2MASS%20J10475385%2B2124234 | 2MASS J10475385+2124234 (abbreviated to 2MASS J1047+21) is a brown dwarf of spectral class T6.5, in the constellation Leo about 34 light-years from Earth, hence in galactic topographical and interstellar medium study terms being in the Local Bubble and very nearby in the Orion Arm. The object first attracted attention by becoming the first brown dwarf of spectral class T from which radio waves were detected. This discovery then permitted its wind speeds to be computed.
Discovery
2MASS J1047+21 was discovered in 1999 along with eight other brown dwarf candidates during the Two Micron All-Sky Survey (2MASS), conducted from 1997 to 2001. Follow-up observations with the Keck I 10-meter telescope's Near Infrared Camera (NIRC) were conducted on 27 May 1999 and identified methane in 2MASS J1047+21's near-infrared spectrum, classifying it as a T-type brown dwarf.
Detection of Radio Emissions
In 2010, astronomers using the Arecibo radio telescope discovered bursts of low-frequency radio waves coming from 2MASS J1047+21. This radio emission comes from electrons spiraling around the magnetic field lines of the brown dwarf. Since the frequency of the radio emission is linked to the strength of the magnetic field, the team measured a magnetic field strength of 1.7 kG. The bursts were also found to drift in frequency, in a manner reminiscent of certain types of solar radio emission. The radio emissions, together with the detection of Hα, which is usually found in stellar chromospheres, shows that 2MASS J1047+21 is magnetically active.
Measurement of Wind Speed
The wind speed is inferred by comparing minute, regular cycles in the brown dwarf's visible and infrared appearance with the corresponding cycles at radio wavelengths. The radio emissions come from electrons interacting with the magnetic field, which is rooted deep in the interior, and therefore track the rotation of the interior. The visible and infrared (IR) data, on the other hand, reveal what is happening in the cloud tops of the atmosphere.
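The arithmetic behind this comparison is simple: if the cloud layer and the interior rotate with slightly different periods, the difference in their angular rates multiplied by the radius gives an equatorial wind speed. The sketch below uses made-up rotation periods and an assumed Jupiter-like radius purely to illustrate the method; the numbers are not the measured values for 2MASS J1047+21:

import math

R = 7.0e7                   # assumed radius in metres (roughly one Jupiter radius)
P_interior = 1.770 * 3600   # hypothetical interior (radio) rotation period, seconds
P_clouds   = 1.765 * 3600   # hypothetical atmospheric (infrared) rotation period, seconds

# Wind speed = radius times the difference in angular rotation rates
wind = 2.0 * math.pi * R * (1.0 / P_clouds - 1.0 / P_interior)
print(wind)   # m/s; positive means the atmosphere rotates faster than the interior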
Characteristics
2MASS J1047+21 is a T-type brown dwarf.
Distance
2MASS J1047+21 is about 34 light-years from Earth.
Magnetic Field
Radio emissions imply a magnetic field strength greater than 1.7 kG, or approximately 3000 times stronger than the Earth's magnetic field.
Wind speeds
Wind speeds on 2MASS J1047+21 were measured by combining infrared observations from the Spitzer Space Telescope with radio observations.
See also
List of stars in Leo
other T-dwarfs with radio emission:
SIMP J013656.5+093347.3 T2.5, planetary-mass object
WISEPC J112254.73+255021.5 T6
WISEPA J101905.63+652954.2 T5.5+T7.0
WISEPA J062309.94-045624.6 T8
References
External links
In a First, NASA Measures Wind Speed on a Brown Dwarf, Calla Cofield, Jet Propulsion Laboratory, 9 Apr 2020
Astronomers Measure Wind Speed on a Brown Dwarf, Dave Finley, National Radio Astronomy Observatory, 9 Apr 2020
Planet 2MASS J10475385+2124234, The Extrasolar Planets Encyclopaedia
Astronomers measure wind speed on a brown dwarf, Phys.org, 9 Apr 2020
Astronomers Measure the Wind Speed on a Brown Dwarf for the First Time. Spoiler: Insanely Fast, Evan Gough, Universe Today, 15 Apr 2020
Leo (constellation)
Brown dwarfs
T-type brown dwarfs
J10475385+2124234
Astronomical objects discovered in 1999
TIC objects | 2MASS J10475385+2124234 | Astronomy | 762 |
2,368,360 | https://en.wikipedia.org/wiki/Obstetrical%20dilemma | The obstetrical dilemma is a hypothesis to explain why humans often require assistance from other humans during childbirth to avoid complications, whereas most non-human primates give birth unassisted with relatively little difficulty. This occurs due to the tight fit of the fetal head to the maternal birth canal, which is additionally convoluted, meaning the head and therefore body of the infant must rotate during childbirth in order to fit, unlike in other, non-upright walking mammals. Consequently, there is an unusually high incidence of cephalopelvic disproportion and obstructed labor in humans.
The obstetrical dilemma claims that this difference is due to the biological trade-off imposed by two opposing evolutionary pressures in the development of the human pelvis: smaller birth canals in the mothers, and larger brains, and therefore skulls in the babies. Proponents believe bipedal locomotion (the ability to walk upright) decreased the size of the bony parts of the birth canal. They also believe that as hominids' and humans' skull and brain sizes increased over the millennia, that women needed wider hips to give birth, that these wider hips made women inherently less able to walk or run than men, and that babies had to be born earlier to fit through the birth canal, resulting in the so-called fourth trimester period for newborns (being born when the baby seems less developed than in other animals). Recent evidence has suggested bipedal locomotion is only a part of the strong evolutionary pressure constraining the expansion of the maternal birth canal. In addition to bipedal locomotion, the reduced strength of the pelvic floor due to a wider maternal pelvis also leads to fitness detriments in the mother pressuring the birth canal to remain relatively narrow.
This idea was widely accepted when first published in 1960, but has since been criticized by other scientists.
History
The term, obstetrical dilemma, was coined in 1960, by Sherwood Larned Washburn, a prominent early American physical anthropologist, in order to describe the evolutionary development of the human pelvis and its relation to childbirth and pregnancy in hominids and non-human primates. In the intervening decades, the term has been used broadly among anthropologists, biologists, and other scientists to describe aspects of this hypothesis and related topics.
Evolution of human birth
Human pelvis
The obstetrical dilemma hypothesizes that when hominids began to develop bipedal locomotion, the conflict between these two opposing evolutionary pressures became greatly exacerbated. Because humans are currently the only recognized extant obligately bipedal primates, meaning that their body plan requires them to use only two legs, major evolutionary developments had to occur in order to alter the shape of the female pelvis. Human males evolved narrower hips optimized for locomotion, whereas female hips evolved to be wider because of the needs of childbirth. Human pelves have no gross distinguishing skeletal markers for sex before puberty. With puberty, hormones alter the shape of the pelvis in females to cater to obstetrical demands. Overall, through evolution of the species, a number of structures in the body have changed size, proportion, or location in order to accommodate bipedal locomotion and allow a person to stand upright and face forward. To help support the upper body, a number of structural changes were made to the pelvis. The ilial pelvic bone shifted forward and broadened, while the ischial pelvic bone shrank, narrowing the pelvic canal. These changes were occurring at the same time as humans were developing larger craniums.
Male versus female
Examination of the pelvis is the most useful method for identifying biological sex through the skeleton. Distinguishing features between the human male and female pelvis stem from the selective pressures of childbearing and birth. Females must be able to carry out the process of childbirth but also be able to move bipedally. The human female pelvis has evolved to be as wide as possible while still being able to allow bipedal locomotion. The compromise between these two necessary functions of the female pelvis can be especially seen through the comparative skeletal anatomy between males and females.
The human pelvis is made up of three sections: the hip bones (ilium, ischium and pubis), the sacrum, and the coccyx. How these three segments articulate and what their dimensions are is key for differentiation between males and females. Females acquired the characteristic of the overall pelvic bone being thinner and denser than the pelvic bones of males. The female pelvis has also evolved to be much wider and allow for greater room in order to safely deliver a child. After sexual maturation, it can be observed that the pubic arch in females is generally an obtuse angle (between 90 and 100 degrees) while males tend to have more of an acute angle (approximately 70 degrees). This difference in angles can be attributed to the fact that the overall pelvis for a female is preferred to be wider and more open than a male pelvis. Another key difference can be seen in the sciatic notch. The sciatic notch in females tend to be wider than the sciatic notches of males. The pelvic inlet is also a key difference. The pelvic inlet can be observed as oval-shaped in females and more of a heart-shape in males. The difference in inlet shape is related to the distance between the ischium bones of the pelvis. To allow for a wider and more oval-shaped inlet, female ischium bones are further apart from one another than the ischium bones of a male.
Differences in the sacrum between males and females can also be attributed to the needs of childbirth. The female sacrum is wider than the male sacrum. The female sacrum can also be observed as being shorter than the sacrum of a male. The difference in width can be explained by the overall wider shape of the female pelvis. The female sacrum is also more curved posteriorly. This could be explained by the need for as much space as possible for a birthing canal. The articulating coccyx in females is also generally observed as being straighter and more flexible than the coccyx of a male for the same reason. Because of the female pelvic bones in general being further apart from one another than those of the male pelvis, the acetabula in a female are positioned more medially and further apart from one another. It is this orientation that allows for the stereotypical swinging motion of a female's hips while walking. The acetabula not only differ in distance, but depth as well. It has been found that female acetabula have a greater depth than those in males, but also paired with a smaller femoral head. This in turn creates a more stable hip joint. One of the last key differences can be seen in the auricular surface of the pelvic bones. The auricular surface where the sacroiliac joint articulates seen in females generally has a rougher texture compared to the surfaces seen in males. This difference in the texture of the articulating surface may be due to the differences in shape of the sacrum between males and females. These key differences can be examined and used to determine biological sex between two different sets of pelvic bones; all due to the need for bipedal locomotion while having the need for childbearing and childbirth in females.
Adaptations to ensure live birth
Early human ancestors, hominids, originally gave birth in a similar way that non-human primates do because early obligate quadrupedal individuals would have retained similar skeletal structure to great apes. Most non-human primates today have neonatal heads that are close in size to the mother's birth canal, as evidenced by observing female primates who do not need assistance in birthing, often seeking seclusion away from others of their species. In modern humans, parturition (childbirth) differs greatly from the rest of the primates because of both pelvic shape of the mother and neonatal shape of the infant. Further adaptations evolved to cope with bipedalism and larger craniums were also important such as neonatal rotation of the infant, shorter gestation length, assistance with birth, and a malleable neonatal head.
Neonatal rotation
Neonatal rotation was a solution for humans evolving larger brain sizes. Comparative zoological analysis has shown that the size of the human brain is anomalous, as humans have brains that are significantly larger than other animals of similar proportions. Even among the great apes, humans are distinctive in this regard, having brains three to four times larger than those of chimpanzees, humans' nearest relatives. Although the close correspondence between the neonatal cranium and the maternal pelvis in monkeys is also characteristic of humans, the orientation of the pelvic diameters differs. On average, a human fetus is nearly twice as large in relation to its mother's weight as would be expected for another similarly sized primate. The extremely close correspondence between the fetal head and the maternal pelvic dimensions requires that these dimensions line up at all points (inlet, midplane, and outlet) during the birth process. During delivery, neonatal rotation occurs when the body gets rotated to align head and shoulders transversely when entering the small pelvis, otherwise known as internal rotation. The fetus then rotates longitudinally to exit the birth canal, which is known as external rotation. In humans, the long axes of the inlet and the outlet of the obstetric canal lie perpendicular to each other. This is an important mechanism because growth in the size of the cranium as well as the width of the shoulders makes it more difficult for the infant to fit through the pelvis. This enables the largest dimensions of the fetal head to align with the largest dimensions of each plane of the maternal pelvis as labor progresses. This differs in non-human primates as there is no need for neonatal rotation in non-human primates because the birth canal is wide enough to accommodate the infant. This elaborate mechanism of labor, which requires a constant readjustment of the fetal head in relation to the bony pelvis (and which may vary somewhat depending on the shape of the pelvis in question), is completely different from the obstetrical mechanics of the other higher primates whose infants generally drop through the pelvis without any rotation or realignment. In contrast to the narrow shoulders of monkeys and higher primates, which are able to pass through the birth canal without any rotation, modern humans have broad, rigid shoulders, which generally require the same series of rotations that the head undergoes in order to travel through.
Due to the evolution of bipedalism in humans, the pelvis had evolved to have a shorter, more forward curved ilium and broader sacrum in order to support ambulating on two legs. This caused the birth canal to shrink and form a more oval shape, thus the infant must undergo specific movements to rotate itself in a certain position to be able to pass through the pelvis. These movements are referred to as the seven cardinal movements, in which the infant rotates itself at the widest diameter of the pelvis to allow the narrowest aspect of the fetal body to align with the narrowest diameter of the pelvis. These movements include engagement, descent, flexion, internal rotation, extension, external rotation, and expulsion.
Engagement is the first movement of labor where the first part of the head enters the pelvic inlet.
Descent refers to the deeper movement of the head through the pelvic inlet with the widest diameter of the infant's head.
Flexion occurs during descent, where tissues of the pelvis create resistance as the head moves down the pelvic cavity and brings the infant's chin to the chest. This allows for the smallest part of the head to begin to push through pelvis and actively promote delivery of the baby.
Internal rotation occurs when the head continues to descend and comes in contact with the pelvic floor, which has resistant muscles. These muscles allow the infant to rotate its head so that its head and shoulders can move through the pelvis. Due to the broad shape of the sacrum, the head of the fetus must be rotated from the occiput transverse to the occiput anterior position, which means the infant must rotate from the sideways position so that the front of its head faces the mother's buttocks.
Extension is the point where the head moves past the pubic symphysis, where it has to curve underneath the birth canal while the front of the head still faces the mother's bottom.
External rotation (or Restitution) occurs when the baby pauses after the head passes through the body. During this pause, the infant rotates itself sideways (facing the mother's thigh) to allow for the shoulder to fit though the birth canal.
Expulsion is the final step of labor. During this stage, the anterior shoulder moves past the birth canal first then the posterior shoulder. Once both shoulders are out, the baby is delivered through the birth canal completely.
While the seven cardinal movements are considered the normal mechanism for labor and delivery of human babies, pelvic sizes and shapes can vary among female humans, which can increase the risk of errors in rotations and delivery, especially since these movements are done completely by the baby. One of the biggest issues with the pelvic shape for childbirth is the ischial spine. Since the ischial spines support the pelvic floor, if the spines are too far apart it can lead to weakened pelvic floor muscles. This can cause issues as pregnancy progresses, such as difficulty carrying the fetus to full term. Another complication that can occur during human childbirth is shoulder dystocia, where the shoulder is stuck in the birth canal. This can lead to a fractured humerus or clavicle in the fetus and postpartum hemorrhaging in the mother. Thus, these neonatal rotations are important in allowing the baby to pass safely through the pelvis and in ensuring the health of the mother as well.
Gestation length and altriciality
Gestation length in humans is believed to be shorter than that of most other primates of comparable size. The gestation length for humans is 266 days, or eight days short of nine months, which is counted from the first day of the woman's last menstrual period. During gestation, mothers must support the metabolic cost of tissue growth, both of the fetus and the mother, as well as the ever-increasing metabolic rate of the growing fetus. Comparative data from across mammals and primates suggest that there is a metabolic constraint on how large and energetically expensive a fetus can grow before it must leave the mother's body. It is thought that this shorter gestation period is an adaptation to ensure the survival of mother and child because it leads to altriciality. Neonatal brain and body size have increased in the hominin lineage, and human maternal investment is greater than expected for a primate of similar body mass. The obstetrical dilemma hypothesis suggests that in order to successfully undergo childbirth, the infant must be born earlier and earlier, thereby making the child increasingly developmentally premature. The concept of the infant being born underdeveloped is called altriciality. Humans are born with an underdeveloped brain; only 25% of the brain is fully developed at birth, as opposed to non-human primates, whose infants are born with 45–50% brain development. Scientists believe that the shorter gestation period can be attributed to the narrower pelvis, as the baby must be born before its head reaches a volume that cannot be accommodated by the obstetric canal.
Social assistance
Human infants are also almost always born with assistance from other humans because of the way that the pelvis is shaped. Since the pelvis and opening of birth canal face backwards, humans have difficulty giving birth themselves because they cannot guide the baby out of the canal. Non-human primates seek seclusion when giving birth because they do not need any help due to the pelvis and opening being more forward. There is no evidence to ascertain at what point in human evolution birth assistance arose, but some researchers have suggested Homo habilis. Human infants depend on their parents much more and for much longer than other primates. Humans spend a lot of their time caring for their children as they develop whereas other species stand on their own from when they are born. The faster an infant develops, the higher the reproductive output of a female can be. So in humans, the cost of slow development of their infants is that humans reproduce relatively slowly. This phenomenon is also known as cooperative breeding.
Malleable cranium
Humans are born with a very malleable fetal head which is not fully developed when the infant exits the womb. This soft spot on the crown of the infant allows for the head to be compressed in order to better fit through the birth canal without obstructing it. This allows for the head to develop more after birth and for the cranium to continue growing without affecting the birthing process.
Challenges to the obstetrical dilemma hypothesis
The obstetrical dilemma hypothesis has had several challenges to it, as more data is collected and analyzed. Several different fields of study have taken an interest in understanding more about the human birth process and that of human ancestor species.
Early brain growth rates
Some studies have shown that higher brain growth rates happen earlier on in ontogeny than previously thought, which challenges the idea that the explanation of the obstetrical dilemma is that humans are born with underdeveloped brains. This is because if brain growth rates were largest in early development, that is when the brain size would increase the most. Premature birth would not allow for a much larger head size if most of the growth had already happened. Also, it has been suggested that maternal pelvic dimensions are sensitive to some ecological factors.
Maternal heat stress
There is considerable evidence linking body mass to brain mass, pointing to maternal metabolism as a key factor in the growth of the fetus. Maternal constraints could be largely due to thermal stress or energy availability. A larger brain mass in the neonate corresponds to more energy needed to sustain it. It takes much more energy for the mother if the brain fully develops in the womb. If maternal energy is the limiting factor, then an infant can only grow as much as the mother can sustain. Also, because fetal size is positively correlated to maternal energy use, thermal stress is an issue because the larger the fetus, the more the mother can suffer heat stress.
Environmental effects
Additional studies suggest that other factors may further complicate the obstetrical dilemma hypothesis. One of these is dietary shifts, possibly due to the emergence of agriculture; this can reflect both the change in diet and the increase in population density after agriculture developed, since more people leads to more disease. Twin studies have also suggested that pelvic size may owe more to the environment in which people live than to their genetics. Another study challenges the idea that narrower hips are optimized for locomotion: a Late Stone Age population in Southern Africa that depended heavily on terrestrial mobility included women of uncharacteristically small body size with large pelvic canals.
Energetics of gestation and growth hypothesis
The energetics of gestation and growth (EGG) hypothesis offers a direct challenge to the obstetrical dilemma hypothesis, attributing the constraints on gestation and parturition to the energy limitations of the mother. Studies of professional athletes and pregnant women have shown that there is an upper limit to the amount of energy a woman can expend before deleterious effects set in: approximately 2.1 times her basal metabolic rate. During pregnancy, the growing brain mass and body length of the fetus require ever more energy to sustain, producing a competing balance between the fetus's demand for energy and the mother's ability to meet that demand. At approximately nine months of gestation, the fetus's energy needs surpass the mother's energy limit, which correlates with the average time of birth. The newborn infant can then be sustained on breast milk, a more efficient, less energy-demanding mechanism of nutrient transfer between mother and child. Additionally, this hypothesis suggests that, contrary to the obstetrical dilemma, an increased pelvic size would not be deleterious to bipedalism: a study of the running mechanics of males and females found that an increased pelvic size imposed neither an increased metabolic nor an increased structural demand on a woman.
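A minimal numerical sketch of this logic is given below. The basal metabolic rate and the shape of the gestational demand curve are illustrative assumptions chosen only so that demand crosses the 2.1× ceiling near nine months; they are not measurements from the studies described above.

```python
import math

# Sketch of the EGG-hypothesis logic: the mother can sustain roughly 2.1x her
# basal metabolic rate (BMR), and birth is expected when total gestational
# energy demand reaches that ceiling. BMR value and demand curve are assumed.
bmr = 1400.0                      # maternal BMR in kcal/day (assumed)
ceiling = 2.1 * bmr               # maximum sustainable expenditure

def demand(week):
    """Illustrative maternal + fetal energy demand (kcal/day) by gestational week."""
    return bmr * (1 + 0.0105 * math.exp(0.12 * week))

week = next(w for w in range(1, 46) if demand(w) >= ceiling)
print(f"Demand first exceeds the maternal ceiling around week {week}")  # ~week 39
```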
Obstetrical dilemma revisited
The obstetrical dilemma hypothesis has also been challenged conceptually on the basis of newer studies. The authors argue that the obstetrical dilemma hypothesis assumes that human, and therefore hominid, childbirth has been a painful and dangerous experience throughout the species' evolution. This assumption may be fundamentally flawed, as many early analyses focused on maternal death data drawn primarily from females of European descent in Western Europe and the United States during the 19th and 20th centuries, a limited population. A recent study reports a covariation between human pelvis shape, stature, and head size: females with a large head tend to possess a birth canal that can better accommodate large-headed neonates, and mothers with large heads usually give birth to neonates with large heads. The detected pattern of covariation therefore helps to ease childbirth and has likely evolved in response to strong correlational selection. Another recent study aimed to evaluate the original ideas behind the 'obstetrical dilemma' and to provide a detailed, more complex explanation for the tight fetopelvic fit observed in humans. Its authors propose that the original obstetrical dilemma hypothesis remains valuable as a foundation for explaining the complex combination of evolutionary, ecological, and biocultural pressures that constrain maternal pelvic form and fetal size.
See also
Solitary birth
References
Obstetrics
Human evolution
Pelvis
Biological hypotheses | Obstetrical dilemma | Biology | 4,556 |
7,682,020 | https://en.wikipedia.org/wiki/5086%20aluminium%20alloy | 5086 aluminium alloy is an aluminium–magnesium alloy, primarily alloyed with magnesium. It is not strengthened by heat treatment, instead becoming stronger due to strain hardening, or cold mechanical working of the material.
Since heat treatment does not strongly affect the strength, 5086 can be readily welded and retains most of its mechanical strength. The good welding behavior and good corrosion resistance in seawater make 5086 extremely popular for vessel gangways and for building boat and yacht hulls.
Basic properties
5086 has a density of approximately 2.66 g/cm³, corresponding to a specific gravity of 2.66.
Melting point is .
Chemical properties
The alloy composition of 5086 is:
Chromium - 0.05%–0.25% by weight
Copper - 0.1% maximum
Iron - 0.5% maximum
Magnesium - 3.5%–4.5%
Manganese - 0.2%–0.7%
Silicon - 0.4% maximum
Titanium - 0.15% maximum
Zinc - 0.25% maximum
Others each 0.05% maximum
Others total 0.15% maximum
Remainder Aluminium
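A minimal sketch of how these composition limits might be checked against an assay is shown below; the element limits follow the list above, while the sample values are invented for illustration.

```python
# Compliance check of a hypothetical assay against the 5086 composition limits above.
limits = {                       # (min %, max %) by weight
    "Cr": (0.05, 0.25), "Cu": (0.0, 0.10), "Fe": (0.0, 0.50),
    "Mg": (3.5, 4.5),   "Mn": (0.20, 0.70), "Si": (0.0, 0.40),
    "Ti": (0.0, 0.15),  "Zn": (0.0, 0.25),
}

sample_wt_percent = {"Cr": 0.10, "Cu": 0.05, "Fe": 0.35, "Mg": 4.0,
                     "Mn": 0.45, "Si": 0.25, "Ti": 0.05, "Zn": 0.10}

for element, (lo, hi) in limits.items():
    value = sample_wt_percent[element]
    status = "OK" if lo <= value <= hi else "OUT OF SPEC"
    print(f"{element}: {value:.2f}% ({status})")

print(f"Balance (aluminium): ~{100 - sum(sample_wt_percent.values()):.2f}%")
```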
Mechanical properties
The mechanical properties of 5086 vary significantly with hardening and temperature.
–O hardening
Unhardened 5086 has a yield strength of and ultimate tensile strength of from . At cryogenic temperatures it is slightly stronger: at , yield of and ultimate tensile strength of ; above its strength is reduced.
Elongation, the strain before material failure, ranges from 46% at , 35% at , 32% at , 22% at , 30% at , 36% at , and increases above there.
–H32 hardening
H32 strain hardened 5086, with properties measured at , has yield strength of , ultimate tensile strength of , and elongation of 6-12%.
–H34 hardening
–H112 hardening
–H116 hardening
H116 strain hardened 5086, with properties measured at , has yield strength of , ultimate tensile strength of , and elongation of 12%.
Uses
5086 is the preferred hull material for small aluminium boats or larger yachts. Its high strength and good corrosion resistance make it an excellent match for yachting.
5086 has a tendency to undergo stress corrosion cracking and, as a result, is not used much in aircraft construction.
5086 has been used in vehicle armor, notably in the M113 Armored Personnel Carrier and M2 Bradley Infantry fighting vehicle.
Welding
5086 is often assembled using arc welding, typically MIG or TIG welding. The newer technique of friction stir welding has also been applied successfully but is not in common use.
Arc welding reduces mechanical properties to no worse than –O hardening condition. For –H116 base material, measured at ambient temperature, yield strength decreases from to and ultimate strength from . The relatively low decrease in ultimate strength (about 10%) is extremely good performance for an aluminium alloy.
References
Further reading
"Properties of Wrought Aluminum and Aluminum Alloys: 5086, Alclad 5086", Properties and Selection: Nonferrous Alloys and Special-Purpose Materials, Vol 2, ASM Handbook, ASM International, 1990, p. 93-4.
Aluminium alloy table
Aluminium–magnesium alloys | 5086 aluminium alloy | Chemistry | 647 |
237,868 | https://en.wikipedia.org/wiki/Product%20%28category%20theory%29 | In category theory, the product of two (or more) objects in a category is a notion designed to capture the essence behind constructions in other areas of mathematics such as the Cartesian product of sets, the direct product of groups or rings, and the product of topological spaces. Essentially, the product of a family of objects is the "most general" object which admits a morphism to each of the given objects.
Definition
Product of two objects
Fix a category C. Let X1 and X2 be objects of C. A product of X1 and X2 is an object, typically denoted X1 × X2, equipped with a pair of morphisms π1 : X1 × X2 → X1 and π2 : X1 × X2 → X2 satisfying the following universal property:
For every object Y and every pair of morphisms f1 : Y → X1 and f2 : Y → X2, there exists a unique morphism f : Y → X1 × X2 such that the following diagram commutes (that is, π1 ∘ f = f1 and π2 ∘ f = f2):
Whether a product exists may depend on C or on X1 and X2. If it does exist, it is unique up to canonical isomorphism, because of the universal property, so one may speak of the product. This has the following meaning: if (P, p1, p2) is another product, there exists a unique isomorphism h : P → X1 × X2 such that π1 ∘ h = p1 and π2 ∘ h = p2.
The morphisms π1 and π2 are called the canonical projections or projection morphisms; the letter π alliterates with projection. Given Y, f1 and f2 as above, the unique morphism f is called the product of the morphisms f1 and f2 and is denoted ⟨f1, f2⟩.
Product of an arbitrary family
Instead of two objects, we can start with an arbitrary family of objects indexed by a set
Given a family (Xi)i∈I of objects, a product of the family is an object X equipped with morphisms πi : X → Xi satisfying the following universal property:
For every object Y and every I-indexed family of morphisms fi : Y → Xi, there exists a unique morphism f : Y → X such that the following diagrams commute for all i in I (that is, πi ∘ f = fi):
The product is denoted ∏i∈I Xi. If I = {1, …, n}, then it is denoted X1 × ⋯ × Xn, and the product of morphisms is denoted ⟨f1, …, fn⟩.
Equational definition
Alternatively, the product may be defined through equations. So, for example, for the binary product:
Existence of f is guaranteed by the existence of the pairing operation ⟨f1, f2⟩.
Commutativity of the diagrams above is guaranteed by the equality πi ∘ ⟨f1, f2⟩ = fi for all i in {1, 2} and all morphisms f1 : Y → X1, f2 : Y → X2.
Uniqueness of f is guaranteed by the equality ⟨π1 ∘ g, π2 ∘ g⟩ = g for all g : Y → X1 × X2.
As a limit
The product is a special case of a limit. This may be seen by using a discrete category (a family of objects without any morphisms, other than their identity morphisms) as the diagram required for the definition of the limit. The discrete objects will serve as the index of the components and projections. If we regard this diagram as a functor, it is a functor from the index set I considered as a discrete category. The definition of the product then coincides with the definition of the limit, with the family {fi} being a cone and the projections being the limit (limiting cone).
Universal property
Just as the limit is a special case of the universal construction, so is the product. Starting with the definition given for the universal property of limits, take J as the discrete category with two objects, so that C^J is simply the product category C × C. The diagonal functor Δ : C → C × C assigns to each object X the ordered pair (X, X) and to each morphism f the pair (f, f). The product X1 × X2 in C is given by a universal morphism from the functor Δ to the object (X1, X2) in C × C. This universal morphism consists of an object X of C and a morphism (X, X) → (X1, X2), which contains the projections.
Examples
In the category of sets, the product (in the category theoretic sense) is the Cartesian product. Given a family of sets (Xi)i∈I, the product is defined as
∏i∈I Xi = {(xi)i∈I : xi ∈ Xi for every i ∈ I},
with the canonical projections πj : ∏i∈I Xi → Xj given by πj((xi)i∈I) = xj.
Given any set Y with a family of functions fi : Y → Xi, the universal arrow f : Y → ∏i∈I Xi is defined by f(y) = (fi(y))i∈I.
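The following short sketch illustrates this construction for two finite sets, using hypothetical example sets and functions: the Cartesian product plays the role of the product object, the coordinate maps are the canonical projections, and the pairing ⟨f1, f2⟩ is the universal arrow.

```python
# Product in the category of sets: Cartesian product, projections, pairing.
from itertools import product as cartesian

X1, X2 = {1, 2}, {"a", "b", "c"}
X = set(cartesian(X1, X2))                  # the product object X1 x X2

pi1 = lambda p: p[0]                        # canonical projections
pi2 = lambda p: p[1]

def pairing(f1, f2):
    """The unique arrow <f1, f2> : Y -> X1 x X2 with pi1 . <f1, f2> = f1, etc."""
    return lambda y: (f1(y), f2(y))

Y = {"p", "q"}
f1 = {"p": 1, "q": 2}.__getitem__           # example functions Y -> X1, Y -> X2
f2 = {"p": "a", "q": "c"}.__getitem__
f = pairing(f1, f2)

assert all(pi1(f(y)) == f1(y) and pi2(f(y)) == f2(y) for y in Y)
assert all(f(y) in X for y in Y)
print("Universal property verified on this example")
```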
Other examples:
In the category of topological spaces, the product is the space whose underlying set is the Cartesian product and which carries the product topology. The product topology is the coarsest topology for which all the projections are continuous.
In the category of modules over some ring the product is the Cartesian product with addition defined componentwise and distributive multiplication.
In the category of groups, the product is the direct product of groups given by the Cartesian product with multiplication defined componentwise.
In the category of graphs, the product is the tensor product of graphs.
In the category of relations, the product is given by the disjoint union. (This may come as a bit of a surprise given that the category of sets is a subcategory of the category of relations.)
In the category of algebraic varieties, the product is given by the Segre embedding.
In the category of semi-abelian monoids, the product is given by the history monoid.
In the category of Banach spaces and short maps, the product carries the supremum (ℓ∞) norm.
A partially ordered set can be treated as a category, using the order relation as the morphisms. In this case the products and coproducts correspond to greatest lower bounds (meets) and least upper bounds (joins).
Discussion
An example in which the product does not exist: In the category of fields, the product of Q and Fp does not exist, since there is no field admitting homomorphisms to both Q and Fp (their characteristics differ).
Another example: An empty product (that is, one where the index set I is empty) is the same as a terminal object, and some categories, such as the category of infinite groups, do not have a terminal object: given any infinite group G there are infinitely many morphisms Z → G, so G cannot be terminal.
If I is a set such that all products for families indexed with I exist, then one can treat each product as a functor C^I → C. How this functor maps objects is obvious. Mapping of morphisms is subtle, because the product of morphisms defined above does not fit. First, consider the binary product functor, which is a bifunctor. For f1 : X1 → Y1 and f2 : X2 → Y2 we should find a morphism X1 × X2 → Y1 × Y2. We choose ⟨f1 ∘ π1, f2 ∘ π2⟩. This operation on morphisms is called the Cartesian product of morphisms. Second, consider the general product functor. For families (Xi)i∈I, (Yi)i∈I and morphisms fi : Xi → Yi we should find a morphism ∏i∈I Xi → ∏i∈I Yi. We choose the product of morphisms ⟨fi ∘ πi⟩i∈I.
A category where every finite set of objects has a product is sometimes called a Cartesian category
(although some authors use this phrase to mean "a category with all finite limits").
The product is associative. Suppose C is a Cartesian category, product functors have been chosen as above, and 1 denotes a terminal object of C. We then have natural isomorphisms X × (Y × Z) ≅ (X × Y) × Z, X × 1 ≅ 1 × X ≅ X, and X × Y ≅ Y × X.
These properties are formally similar to those of a commutative monoid; a Cartesian category with its finite products is an example of a symmetric monoidal category.
Distributivity
For any objects X, Y, and Z of a category with finite products and coproducts, there is a canonical morphism X × Y + X × Z → X × (Y + Z), where the plus sign here denotes the coproduct. To see this, note that the universal property of the coproduct guarantees the existence of unique arrows filling out the following diagram (the induced arrows are dashed):
The universal property of the product then guarantees a unique morphism X × Y + X × Z → X × (Y + Z) induced by the dashed arrows in the above diagram. A distributive category is one in which this morphism is actually an isomorphism. Thus in a distributive category, there is the canonical isomorphism X × (Y + Z) ≅ (X × Y) + (X × Z).
See also
Coproduct – the dual of the product
Diagonal functor – the left adjoint of the product functor.
References
Chapter 5.
Definition 2.1.1 in
External links
Interactive Web page which generates examples of products in the category of finite sets. Written by Jocelyn Paine.
Limits (category theory) | Product (category theory) | Mathematics | 1,470 |
30,957,264 | https://en.wikipedia.org/wiki/Mountain%20Wave%20Project | The Mountain Wave Project (MWP) pursues global scientific research of gravity waves and associated turbulence. MWP seeks to develop new scientific insights and knowledge through high altitude and record seeking glider flights with the goal of increasing overall flight safety and improving pilot training.
Corporate history
Motivation
Wind movement over terrain and ground obstacles can create wavelike wind formations which can reach up to the stratosphere. In 1998 the pilots René Heise and Klaus Ohlmann founded the MWP, a project for global classification, research, and analysis of orographically created wind structures (e.g. Chinook, Foehn, Mistral, Zonda). The MWP is an independent non-profit-project of the Scientific and Meteorological Section of the Organisation Scientifique et Technique du Vol à Voile (OSTIV) and is supported by the Fédération Aéronautique Internationale (FAI).
The MWP was originally focused on achieving a better understanding of the complex thermal and dynamic air movements in the atmosphere, and on using that knowledge to achieve ever greater long-distance soaring flights. As MWP gained greater awareness of the power inherent in mountain-wave-like structures in the atmosphere, and of their strong vertical airflows, it became obvious that they present serious dangers to civil aviation in multiple ways. Therefore, the focus of the MWP shifted to a more scientific approach to these airflow phenomena, with the goal of discovering new ways to increase overall aviation safety. Through the support of other scientists and cooperation partners the core group grew stronger and gained greater depth of knowledge. The integration of Joerg Hacker from Airborne Research Australia (ARA) into the core group significantly enhanced the overall depth of knowledge of the group.
Airborne measurements
In order to learn more about the relevant physical process in the atmosphere, the MWP Team launched two expeditions in the Argentinean Andes in 1999 and 2006. For high altitude flights a modified Stemme S10 VT motorglider was used as a platform for airborne data acquisition and measurement. The pilots were assisted with life support equipment and physiological preparations by the renowned flight physicians of the German Aerospace Center (DLR) and by astronaut Ulf Merbold.
Thanks to the help of qualified scientists and state-of-the-art sensor technology, the MWP achieved its goal of gathering and analyzing wave-structure data, with impressive results, during the operation in Mendoza in October 2006. Research flights and operations were completed in the region between Tupungato (5,700 m) and Aconcagua (6,900 m), which is well known for its extremely treacherous turbulence.
Record flights
Between 2000 and 2004, MWP team member Klaus Ohlmann further developed and expanded the knowledge about wave systems gained in the Andes in 1999, and accumulated a wealth of experience. This learning process allowed him to win the OSTIV Kuettner Prize for the first 2,000 km straight-out wave flight, as well as to complete the world's longest recorded soaring flight of 3,008 km. He was supported by his MWP teammates in Germany, who used internet communications to provide him with specific weather predictions from a new weather-forecasting tool. During these flights, he in turn provided crucial in-flight data, which helped to improve the subsequent weather predictions made by the team in Germany.
Two MWP members participated in the 2006 field research campaign of the Terrain Induced Rotor Experiment (T-REX) which took place in the Sierra Nevada (U.S.A.). René Heise served as scientific reviewer for the National Science Foundation and contributed MWP wave forecasts to the data archive. Wolf-Dietrich Herold documented activities in Boulder/CO and Bishop/CA and produced a TV-report of the project for the German TV station RBB.
Programming objectives
Detection and determination of physical processes in the atmosphere, and their associated synoptic characteristics, which play the primary role in the generation and development of mountain waves.
Investigation of rotor bands: determination of their location, spatial extension and classification of associated turbulence
High resolution measurement of relevant meteorological variables (e.g., potential temperature, turbulence parameters, vertical and horizontal wind, humidity, etc.)
Visualisation of the rotors/regions of turbulence with a GeoInformationService (GIS).
Statistical analysis of wave flights (IGC-files of GPS flight loggers) to develop an empirical GIS-based representation of wave and rotor locations
Verification of mesoscale forecast models and fine tuning of the applied parameterisations
Application of the acquired data, scientific results, and prediction tools to enhance the safety and effectiveness of air traffic route planning, and improve pilot training. Furthermore, assisting in the development and creation of focused training methodology, tools and simulator scenarios.
Expeditions
Argentina '99: Base San Martín de los Andes (Argentina); several flights above 1,000 km, including a record flight (1,550 km) by Klaus Ohlmann down to Tierra del Fuego (Río Grande), the southernmost glider flight in the world
Serres (France) & Jaca (Spain) 2003: Measurement flights of southerly wave conditions in Provence, additionally wave flights under stormy weather conditions in the Lee of Pyrenees
Operation Mendoza 2006: Base Plumerillo (Argentina); Measurement Campaign at invitation of the Argentine Air Force, Flights with BATprobe up to 12,500 m height over the cordillera of the Tupungato-Aconcagua region.
Tibet 2010 – site visit: Presentation of the MWP field campaign in Lhasa; exploration of emergency landing strips along the route Shigatse–Tingri
Project results
Development of an operational lee-wave forecast in cooperation with the Bundeswehr Geo Information Service and the German Weather Service.
Global and regional assessment tool for wave activities and the risk of turbulence. This experimental forecast tool is used especially in combination with relocatable mesoscale models for the regions of Antarctica, Hindukush/Tian Shan, Kamchatka, Sierra Nevada, and Tibet.
First scientific measurement flights of turbulence over the Andes. Validation of airborne measurements of the parameters wind, temperature, moisture, and pressure in combination with soundings and vertical satellite measurements (radio occultation, remote sensing with GPS)
Cataloging of over 200 global positions of rotor–wave systems and their visualization in a Geographic Information System (GIS); analysis of accidents and incidents due to mountain wave turbulence in commercial and general aviation
Development of mathematical and statistical algorithms to filter wave climbs from GNSS flight-recorder data, used for the optimization of record flights (a minimal illustration of this kind of filtering is sketched after this list)
Aviation highlights: record flight to Rio Grande in Tierra del Fuego (MWP Argentina '99); world record flight (FAI category Free Distance) of 2,120 km (OSTIV Kuettner Prize)
High altitude physiology preparations and recommendations for pilots (Human Factors)
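As a rough illustration of the wave-climb filtering mentioned in the list above, the following sketch scans a barometric altitude trace from a flight recorder for sustained climb segments. The fix interval, climb-rate threshold, and altitude samples are illustrative assumptions, not MWP's actual algorithm or data.

```python
# Find sustained climb segments ("wave climbs") in an altitude trace.
altitudes_m = [3200, 3230, 3275, 3330, 3390, 3385, 3380, 3440, 3510, 3590, 3660]
fix_interval_s = 10          # seconds between recorded fixes (assumed)
min_climb_rate = 2.0         # m/s; treat stronger sustained lift as a wave climb

segments, start = [], None
for i in range(1, len(altitudes_m)):
    rate = (altitudes_m[i] - altitudes_m[i - 1]) / fix_interval_s
    if rate >= min_climb_rate:
        start = i - 1 if start is None else start
    else:
        if start is not None:
            segments.append((start, i - 1))
        start = None
if start is not None:
    segments.append((start, len(altitudes_m) - 1))

print(segments)   # index ranges of sustained climbs, here [(0, 4), (6, 10)]
```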
Awards
2003 OSTIV Kuettner Prize for the first 2000 km straight out wave flight for MWP chief pilot (Klaus Ohlmann)
2007 Second awardee of the Lilienthal-Preis
2011 Finalists of the Aerospace Medical Association Jeff Myers Young Investigator Award (Rene Heise)
2011 Outreach & Communication Award of the European Meteorological Society (EMS)
GEO-TV features
2003 Berlin-Brandenburg Broadcasting (RBB) - Rodeo in the Sky - Research for greater flight safety/Rodeo am Himmel - Forschung für mehr Flugsicherheit (45 min; German/English)
2007 ARTE 360° GEO- documentation - The Waveriders of the Andes/Die Windreiter der Anden/Les Enragés du vol à voile (45min; German/French)
2011 3sat TV feature in connection with the 6th Severe Weather Congress Hamburg 2011, Wellengang in der Luft – hinter Bergen entstehen gefährliche Luftwirbel (Waves in the air – dangerous air vortices form behind mountains) (6 min; German)
References
External links
Mountain Wave Project- official website
Website at MetPanel of OSTIV
FAI World Records - Class D (Gliders)
FAA-Seminar: Where Wild Winds Rule - Mountain Wave Flying Training
Mountain Wave Project - Website of the MWP at MetPanel of OSTIV
Scientific TV-feature about MWP site visit in Tibet 2010 at German Channel 3sat retrieved 2011-04-18
Gliding & Motorgliding International Dec 12, 2000
Frankfurter Zeitung vom 7. April 2010: Auf der perfekten Welle (On the perfect Wave)
Tagesspiegel vom 14. Mai 2007 Tödliche Turbulenzen (Deadly Turbulence)
Spiegel vom 16. Oktober 2006 Expressaufwind aus Sturmwinden (Express-Lift from Storm Winds)
SpektrumDirekt - Wissenschaft online retrieved 2002-11-07
Mountain meteorology
Atmospheric dynamics
Mesoscale meteorology
Waves
Gliding meteorology | Mountain Wave Project | Physics,Chemistry | 1,778 |
70,809,635 | https://en.wikipedia.org/wiki/Entoloma%20prunuloides | Entoloma prunuloides is a species of agaric (gilled mushroom) in the family Entolomataceae. It has been given the recommended English name of Mealy Pinkgill, based on its distinctive smell. The species has a European distribution, occurring mainly in agriculturally unimproved grassland. Threats to its habitat have resulted in the Mealy Pinkgill being assessed as globally "vulnerable" on the IUCN Red List of Threatened Species.
Taxonomy
The species was first described by Swedish mycologist Elias Magnus Fries in 1821 as Agaricus prunuloides. French mycologist Lucien Quélet transferred it to the genus Entoloma in 1872.
Description
Basidiocarps are agaricoid, up to 80 mm (3 in) tall, the cap convex to flat and broadly umbonate, up to 70 mm (2.75 in) across. The cap surface is smooth, finely fibrillose, cream to ochraceous grey. The lamellae (gills) are white becoming pink from the spores. The stipe (stem) is smooth, finely fibrillose, white, lacking a ring. The spore print is pink, the spores (under a microscope) multi-angled, inamyloid, measuring about 6.5 to 8 by 6.5 to 8 μm. The whole fungus has a distinctive, mealy smell.
Similar species
Entoloma ochreoprunuloides has the same mealy smell but differs in its darker, grey-brown cap.
Distribution and habitat
The Mealy Pinkgill is rare but widespread in Europe. Like many other European pinkgills, it occurs in old, agriculturally unimproved, short-sward grassland (pastures and lawns).
Conservation
Entoloma prunuloides is typical of waxcap grasslands, a declining habitat due to changing agricultural practices. As a result, the species is of global conservation concern and is listed as "vulnerable" on the IUCN Red List of Threatened Species.
References
External links
Fungi of Europe
Fungi described in 1821
Entolomataceae
Taxa named by Elias Magnus Fries
Fungus species | Entoloma prunuloides | Biology | 441 |
40,881,367 | https://en.wikipedia.org/wiki/Rhodoferax | Rhodoferax is a genus of Betaproteobacteria belonging to the purple nonsulfur bacteria. Originally, Rhodoferax species were included in the genus Rhodocyclus as the Rhodocyclus gelatinosus-like group. The genus Rhodoferax was first proposed in 1991 to accommodate the taxonomic and phylogenetic discrepancies arising from its inclusion in the genus Rhodocyclus. Rhodoferax currently comprises four described species: R. fermentans, R. antarcticus, R. ferrireducens, and R. saidenbachensis. R. ferrireducens lacks the typical phototrophic character common to two other Rhodoferax species. This difference has led researchers to propose the creation of a new genus, Albidoferax, to accommodate this divergent species. The genus name was later corrected to Albidiferax. Based on geno- and phenotypical characteristics, A. ferrireducens was reclassified in the genus Rhodoferax in 2014. R. saidenbachensis, a second non-phototrophic species of the genus Rhodoferax, was described by Kaden et al. in 2014.
Taxonomy
Rhodoferax species are Gram-negative rods, ranging in diameter from 0.5 to 0.9 μm with a single polar flagellum. The first two species described for the genus, R. fermentans and R. antarcticus, are facultative photoheterotrophs that can grow anaerobically when exposed to light and aerobically under dark conditions at atmospheric levels of oxygen. R. ferrireducens is a nonphototrophic facultative anaerobe capable of reducing Fe(III) at temperatures as low as 4 °C. R. saidenbachensis grows strictly aerobic and has a very low rate of cell division. All Rhodoferax species possess ubiquinone and rhodoquinone derivatives with eight unit isoprenoid side chains. Dominant fatty acids in Rhodoferax cells are palmitoleic acid (16:1) and palmitic acid (16:0), as well as 3-OH octanoic acid (8:0). Major carotenoids found in the phototrophic species are spheroidene, OH-spheroidene, and spirilloxanthin.
Genomes
As of 2014, three genomes have been sequenced from the genus Rhodoferax. Sequencing of the R. ferrireducens T118 genome was carried out by the Joint Genome Institute, and assembly was completed in 2005. The R. ferrireducens genome contains a 4.71 Mbp chromosome with 59.9% GC content and a 257-kbp plasmid with 54.4% GC content. It has 4,169 protein-coding genes, six rRNA genes, and 44 tRNA genes on the chromosome, as well as 75 pseudogenes. The plasmid contains 248 protein coding genes, one tRNA gene, and 2 pseudogenes. Examination of the R. ferrireducens genome indicates that though it cannot grow autotrophically, several genes associated with CO2 fixation are present. The genome contains the gene for the ribulose-1,5-bisphosphate carboxylase/oxygenase (rubisco) large subunit, while the small subunit is missing. Other Calvin-cycle enzymes are present, but the phosphoketolase and sedoheptulose-bisphosphatase genes are missing. The genome also contains several genes suggesting R. ferrireducens may have some ability to resist exposure to metalloids and heavy metals. These genes include a putative arsenite efflux pump and an arsenate reductase, as well as genes similar to those found in organisms capable of tolerating copper, chromium, cadmium, zinc, and cobalt. Despite its psychrotolerance, the genome appears to lack any known major cold-shock proteins.
Another sequenced genome in the genus Rhodoferax comes from R. antarcticus. This genome consists of a 3.8-Mbp chromosome with 59.1% GC content and a 198-kbp plasmid with 48.4% GC content. The chromosome contains 4,036 putative open reading frames (ORFs), and the plasmid contains 226 ORFs. Within the genome are 64 tRNA, and three rRNA genes. Analysis of the genome reveals the presence of two forms of rubisco. The presence of two forms may allow R. antarcticus to take advantage of changing CO2 concentrations.
The third Rhodoferax genome, Rhodoferax saidenbachensis , was sequenced by the Swedish Veterinary Institute SVA. The GC content of the 4.26 Mb genome is 60.9%. There are 3949 protein-coding genes, 46 tRNA, and six rRNA genes in the genome of the R. sidenbachensis type strain ED16 = DSM22694.
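The GC-content figures quoted above are simple sequence statistics. As a minimal illustration (not part of the cited genome analyses), the percentage can be computed from a nucleotide string as follows; the sequence used here is a made-up fragment.

```python
# Compute GC content (percentage of G and C bases) of a nucleotide sequence.
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

print(f"{gc_content('ATGCGGCCATAGGCCGCTTA'):.1f}% GC")  # 60.0% for this fragment
```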
Habitats
Rhodoferax species are frequently found in stagnant aquatic systems exposed to light. Isolates of R. fermentans used for the type description of the genus were first isolated from ditch water and activated sludge. Other environments from which this species has been isolated include pond water and sewage. In the case of R. antarcticus, strains were first isolated from microbial mats collected from saline ponds in Cape Royds, Ross Island, Antarctica. In contrast to other Rhodoferax species, where isolation sources were exposed to light, the isolation of the nonphototrophic R. ferrireducens was carried out using anaerobic subsurface aquifer sediments.
Physiology/biochemistry
Growth of some Rhodoferax species can be supported by anoxygenic photoorganotrophy, anaerobic-dark fermentation, or aerobic respiration. The species R. fermentans and R. antarcticus are capable of phototrophic growth using carbon sources such as acetate, pyruvate, lactate, succinate, malate, fumarate, glucose, fructose, citrate, and aspartate. Anaerobic growth via sugar fermentation can be carried out in the dark by R. fermentans, and is stimulated by the addition of bicarbonate. R. antarcticus has not yet demonstrated the ability to ferment under dark anaerobic conditions, but is capable of aerobic chemoorganotrophy. In contrast, R. ferrireducens is not capable of photoorganotrophy or fermentation, but is capable of anaerobic growth using organic electron donors (i.e. acetate, lactate, propionate, pyruvate, malate, succinate, and benzoate) to reduce Fe(III) to Fe(II).
Growth temperatures for Rhodoferax species range from 2 to 30 °C. R. fermentans is a mesophilic species with an optimal growth temperature between 25 and 30 °C. The other three species, R. antarcticus, R. ferrireducens, and R. saidenbachensis, are psychrotolerant, with optimal growth temperatures above 15 °C but capable of growth at temperatures near 0 °C.
Biotechnology
Currently, research in the area of sustainable energy is investigating the application and design of microbial fuel cells (MFCs) using R. ferrireducens. In an MFC, a bacterial suspension is provided with a reduced compound, which the bacteria use as a source of electrons. The bacteria metabolize this compound, shuttle the released electrons through their respiratory networks, and ultimately donate them to a synthetic electron acceptor, the anode. When connected to a cathode, the bacterial metabolism of the reduced compound generates electricity and CO2. The advantage of MFCs over conventional electricity generation is the direct conversion of chemical energy into electricity, improving energy conversion efficiency. A unique feature of R. ferrireducens is that, whereas many other bacteria require the addition of a mediator to shuttle electrons from the cells to the anode, R. ferrireducens transfers electrons directly from its membrane to the anode through an as yet unidentified membrane protein.
References
Phototrophic bacteria
Comamonadaceae
Bacteria genera | Rhodoferax | Chemistry,Biology | 1,798 |
9,770 | https://en.wikipedia.org/wiki/Eclipse | An eclipse is an astronomical event which occurs when an astronomical object or spacecraft is temporarily obscured, by passing into the shadow of another body or by having another body pass between it and the viewer. This alignment of three celestial objects is known as a syzygy. An eclipse is the result of either an occultation (completely hidden) or a transit (partially hidden). A "deep eclipse" (or "deep occultation") is when a small astronomical object is behind a bigger one.
The term eclipse is most often used to describe either a solar eclipse, when the Moon's shadow crosses the Earth's surface, or a lunar eclipse, when the Moon moves into the Earth's shadow. However, it can also refer to such events beyond the Earth–Moon system: for example, a planet moving into the shadow cast by one of its moons, a moon passing into the shadow cast by its host planet, or a moon passing into the shadow of another moon. A binary star system can also produce eclipses if the plane of the orbit of its constituent stars intersects the observer's position.
For the special cases of solar and lunar eclipses, these only happen during an "eclipse season", the two times of each year when the plane of the Earth's orbit around the Sun crosses with the plane of the Moon's orbit around the Earth and the line defined by the intersecting planes points near the Sun. The type of solar eclipse that happens during each season (whether total, annular, hybrid, or partial) depends on apparent sizes of the Sun and Moon. If the orbit of the Earth around the Sun and the Moon's orbit around the Earth were both in the same plane with each other, then eclipses would happen every month. There would be a lunar eclipse at every full moon, and a solar eclipse at every new moon. It is because of the non-planar differences that eclipses are not a common event. If both orbits were perfectly circular, then each eclipse would be the same type every month.
Lunar eclipses can be viewed from the entire nightside half of the Earth. But solar eclipses, particularly total eclipses occurring at any one particular point on the Earth's surface, are very rare events that can be many decades apart.
Etymology
The term is derived from the ancient Greek noun ἔκλειψις (ékleipsis), which means 'the abandonment', 'the downfall', or 'the darkening of a heavenly body', which is derived from the verb ἐκλείπω (ekleípō), which means 'to abandon', 'to darken', or 'to cease to exist', a combination of the prefix ἐκ- (ek-), from the preposition ἐκ (ek), 'out', and the verb λείπω (leípō), 'to be absent'.
Umbra, penumbra and antumbra
For any two objects in space, a line can be extended from the first through the second. The latter object will block some amount of light being emitted by the former, creating a region of shadow around the axis of the line. Typically these objects are moving with respect to each other and their surroundings, so the resulting shadow will sweep through a region of space, only passing through any particular location in the region for a fixed interval of time. As viewed from such a location, this shadowing event is known as an eclipse.
Typically the cross-section of the objects involved in an astronomical eclipse is roughly disk-shaped. The region of an object's shadow during an eclipse is divided into three parts:
The umbra (Latin for 'shadow'), within which the object completely covers the light source. For the Sun, this light source is the photosphere.
The antumbra (from Latin ante, 'before, in front of', plus umbra) extending beyond the tip of the umbra, within which the object is completely in front of the light source but too small to completely cover it.
The penumbra (from the Latin paene, 'almost, nearly', plus umbra), within which the object is only partially in front of the light source.
A total eclipse occurs when the observer is within the umbra, an annular eclipse when the observer is within the antumbra, and a partial eclipse when the observer is within the penumbra. During a lunar eclipse only the umbra and penumbra are applicable, because the antumbra of the Sun-Earth system lies far beyond the Moon. Analogously, Earth's apparent diameter from the viewpoint of the Moon is nearly four times that of the Sun and thus cannot produce an annular eclipse. The same terms may be used analogously in describing other eclipses, e.g., the antumbra of Deimos crossing Mars, or Phobos entering Mars's penumbra.
The first contact occurs when the eclipsing object's disc first starts to impinge on the light source; second contact is when the disc moves completely within the light source; third contact when it starts to move out of the light; and fourth or last contact when it finally leaves the light source's disc entirely.
For spherical bodies, when the occulting object is smaller than the star, the length (L) of the umbra's cone-shaped shadow is given by:
L = (r · Ro) / (Rs − Ro)
where Rs is the radius of the star, Ro is the occulting object's radius, and r is the distance from the star to the occulting object. For Earth, on average L is equal to 1.384×10⁶ km, which is much larger than the Moon's semimajor axis of 3.844×10⁵ km. Hence the umbral cone of the Earth can completely envelop the Moon during a lunar eclipse. If the occulting object has an atmosphere, however, some of the luminosity of the star can be refracted into the volume of the umbra. This occurs, for example, during an eclipse of the Moon by the Earth—producing a faint, ruddy illumination of the Moon even at totality.
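The quoted umbra length for the Earth can be reproduced directly from the formula above. The solar radius, Earth radius, and Sun–Earth distance used below are rounded mean values assumed for this check.

```python
# Check of the umbra-length formula L = r * Ro / (Rs - Ro) for the Sun-Earth system.
R_sun = 6.957e5      # radius of the star (Sun), km (assumed mean value)
R_earth = 6.371e3    # radius of the occulting object (Earth), km
r = 1.496e8          # mean Sun-Earth distance, km

L = r * R_earth / (R_sun - R_earth)
print(f"Umbra length ~ {L:.3e} km")   # ~1.38e6 km, well beyond the Moon at ~3.84e5 km
```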
On Earth, the shadow cast during an eclipse moves very approximately at 1 km per sec. This depends on the location of the shadow on the Earth and the angle in which it is moving.
Eclipse cycles
An eclipse cycle takes place when eclipses in a series are separated by a certain interval of time. This happens when the orbital motions of the bodies form repeating harmonic patterns. A particular instance is the saros, which results in a repetition of a solar or lunar eclipse every 6,585.3 days, or a little over 18 years. Because this is not a whole number of days, successive eclipses will be visible from different parts of the world. In one saros period there are 239.0 anomalistic periods, 241.0 sidereal periods, 242.0 nodical periods, and 223.0 synodic periods. Although the orbit of the Moon does not give exact integers, the numbers of orbit cycles are close enough to integers to give strong similarity for eclipses spaced at 18.03 yr intervals.
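As a rough arithmetic check of the near-integer relationships described above, the saros length and the corresponding numbers of draconic (nodical) and anomalistic months can be recomputed from standard mean month lengths. The month values used below are common mean figures assumed for this sketch, not taken from the article.

```python
# Recompute the saros interval from mean lunar month lengths (days).
synodic, draconic, anomalistic = 29.530589, 27.212221, 27.554550

saros_days = 223 * synodic
print(round(saros_days, 1))                 # 6585.3 days, a little over 18 years
print(round(saros_days / draconic, 3))      # ~242.0 draconic months (near-integer)
print(round(saros_days / anomalistic, 3))   # ~239.0 anomalistic months (near-integer)
```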
Earth–Moon system
An eclipse involving the Sun, Earth, and Moon can occur only when they are nearly in a straight line, allowing one to be hidden behind another, viewed from the third. Because the orbital plane of the Moon is tilted with respect to the orbital plane of the Earth (the ecliptic), eclipses can occur only when the Moon is close to the intersection of these two planes (the nodes). The Sun, Earth and nodes are aligned twice a year (during an eclipse season), and eclipses can occur during a period of about two months around these times. There can be from four to seven eclipses in a calendar year, which repeat according to various eclipse cycles, such as a saros.
Between 1901 and 2100 there are the maximum of seven eclipses in:
four (penumbral) lunar and three solar eclipses: 1908, 2038.
four solar and three lunar eclipses: 1918, 1973, 2094.
five solar and two lunar eclipses: 1934.
Excluding penumbral lunar eclipses, there are a maximum of seven eclipses in:
1591, 1656, 1787, 1805, 1918, 1935, 1982, and 2094.
Solar eclipse
As observed from the Earth, a solar eclipse occurs when the Moon passes in front of the Sun. The type of solar eclipse event depends on the distance of the Moon from the Earth during the event. A total solar eclipse occurs when the Earth intersects the umbra portion of the Moon's shadow. When the umbra does not reach the surface of the Earth, the Sun is only partially occulted, resulting in an annular eclipse. Partial solar eclipses occur when the viewer is inside the penumbra.
The eclipse magnitude is the fraction of the Sun's diameter that is covered by the Moon. For a total eclipse, this value is always greater than or equal to one. In both annular and total eclipses, the eclipse magnitude is the ratio of the angular sizes of the Moon to the Sun.
Solar eclipses are relatively brief events that can only be viewed in totality along a relatively narrow track. Under the most favorable circumstances, a total solar eclipse can last for 7 minutes, 31 seconds, and can be viewed along a track that is up to 250 km wide. However, the region where a partial eclipse can be observed is much larger. The Moon's umbra will advance eastward at a rate of 1,700 km/h, until it no longer intersects the Earth's surface.
During a solar eclipse, the Moon can sometimes perfectly cover the Sun because its apparent size is nearly the same as the Sun's when viewed from the Earth. A total solar eclipse is in fact an occultation while an annular solar eclipse is a transit.
When observed at points in space other than from the Earth's surface, the Sun can be eclipsed by bodies other than the Moon. Two examples include when the crew of Apollo 12 observed the Earth to eclipse the Sun in 1969 and when the Cassini probe observed Saturn to eclipse the Sun in 2006.
Lunar eclipse
Lunar eclipses occur when the Moon passes through the Earth's shadow. This happens only during a full moon, when the Moon is on the far side of the Earth from the Sun. Unlike a solar eclipse, an eclipse of the Moon can be observed from nearly an entire hemisphere. For this reason it is much more common to observe a lunar eclipse from a given location. A lunar eclipse lasts longer, taking several hours to complete, with totality itself usually averaging anywhere from about 30 minutes to over an hour.
There are three types of lunar eclipses: penumbral, when the Moon crosses only the Earth's penumbra; partial, when the Moon crosses partially into the Earth's umbra; and total, when the Moon crosses entirely into the Earth's umbra. Total lunar eclipses pass through all three phases. Even during a total lunar eclipse, however, the Moon is not completely dark. Sunlight refracted through the Earth's atmosphere enters the umbra and provides a faint illumination. Much as in a sunset, the atmosphere tends to more strongly scatter light with shorter wavelengths, so the illumination of the Moon by refracted light has a red hue, thus the phrase 'Blood Moon' is often found in descriptions of such lunar events as far back as eclipses are recorded.
Historical record
Records of solar eclipses have been kept since ancient times. Eclipse dates can be used for chronological dating of historical records. A Syrian clay tablet, in the Ugaritic language, records a solar eclipse which occurred on March 5, 1223, B.C., while Paul Griffin argues that a stone in Ireland records an eclipse on November 30, 3340 B.C. Positing classical-era astronomers' use of Babylonian eclipse records mostly from the 13th century BC provides a feasible and mathematically consistent explanation for the Greek finding all three lunar mean motions (synodic, anomalistic, draconitic) to a precision of about one part in a million or better. Chinese historical records of solar eclipses date back over 3,000 years and have been used to measure changes in the Earth's rate of spin.
The first person to give a scientific explanation of eclipses was Anaxagoras (c. 500–428 BC). Anaxagoras stated that the Moon shines by reflected light from the Sun.
In the 5th century AD, solar and lunar eclipses were scientifically explained by Aryabhata in his treatise Aryabhatiya. Aryabhata states that the Moon and planets shine by reflected sunlight and explains eclipses in terms of shadows cast by and falling on Earth. Aryabhata also provides the computation and the size of the eclipsed part during an eclipse. The Indian computations were so accurate that the 18th-century French scientist Guillaume Le Gentil, during a visit to Pondicherry, India, found the Indian computations of the duration of the lunar eclipse of 30 August 1765 to be short by only 41 seconds, whereas Le Gentil's own charts were long by 68 seconds.
By the 1600s, European astronomers were publishing books with diagrams explaining how lunar and solar eclipses occurred. In order to disseminate this information to a broader audience and decrease fear of the consequences of eclipses, booksellers printed broadsides explaining the event either using the science or via astrology.
Eclipses in mythology and religion
The American author Gene Weingarten described the tension between belief and eclipses thus: "I am a devout atheist but can't explain why the moon is exactly the right size, and gets positioned so precisely between the Earth and the sun, that total solar eclipses are perfect. It bothers me."
The Graeco-Roman historian Cassius Dio, writing between AD 211–229, relates the anecdote that Emperor Claudius considered it necessary to prevent disturbance among the Roman population by publishing a prediction for a solar eclipse which would fall on his birthday anniversary [1 August in the year AD 45]. In this context, Cassius Dio provides a detailed explanation of solar and lunar eclipses.
Typically in mythology, eclipses were understood to be one variation or another of a spiritual battle between the sun and evil forces or spirits of darkness. More specifically, in Norse mythology, it is believed that there is a wolf by the name of Fenrir that is in constant pursuit of the Sun, and eclipses are thought to occur when the wolf successfully devours the divine Sun. Other Norse tribes believed that there are two wolves by the names of Sköll and Hati that are in pursuit of the Sun and the Moon, known by the names of Sol and Mani, and these tribes believed that an eclipse occurs when one of the wolves successfully eats either the Sun or the Moon.
In most types of mythologies and certain religions, eclipses were seen as a sign that the gods were angry and that danger was soon to come, so people often altered their actions in an effort to dissuade the gods from unleashing their wrath. In the Hindu religion, for example, people often sing religious hymns for protection from the evil spirits of the eclipse, and many people of the Hindu religion refuse to eat during an eclipse to avoid the effects of the evil spirits. Hindu people living in India will also wash off in the Ganges River, which is believed to be spiritually cleansing, directly following an eclipse to clean themselves of the evil spirits. In early Judaism and Christianity, eclipses were viewed as signs from God, and some eclipses were seen as a display of God's greatness or even signs of cycles of life and death. However, more ominous eclipses such as a blood moon were believed to be a divine sign that God would soon destroy their enemies.
Other planets and dwarf planets
Gas giants
The gas giant planets have many moons and thus frequently display eclipses. The most striking involve Jupiter, which has four large moons and a low axial tilt, making eclipses more frequent as these bodies pass through the shadow of the larger planet. Transits occur with equal frequency. It is common to see the larger moons casting circular shadows upon Jupiter's cloudtops.
The eclipses of the Galilean moons by Jupiter became accurately predictable once their orbital elements were known. During the 1670s, it was discovered that these events were occurring about 17 minutes later than expected when Jupiter was on the far side of the Sun. Ole Rømer deduced that the delay was caused by the time needed for light to travel from Jupiter to the Earth. This was used to produce the first estimate of the speed of light.
The timing of the Jovian satellite eclipses was also used to calculate an observer's longitude upon the Earth. By knowing the expected time when an eclipse would be observed at a standard longitude (such as Greenwich), the time difference could be computed by accurately observing the local time of the eclipse. The time difference gives the longitude of the observer because every hour of difference corresponded to 15° around the Earth's equator. This technique was used, for example, by Giovanni D. Cassini in 1679 to re-map France.
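As a small worked example of the longitude method just described (with invented, non-historical times), the hour difference between the predicted Greenwich time of an eclipse of a Jovian moon and its locally observed time converts to longitude at 15° per hour:

```python
# Longitude from eclipse timing: 15 degrees of longitude per hour of time difference.
predicted_greenwich_hour = 22.50   # predicted time of the eclipse at Greenwich (h), illustrative
observed_local_hour = 20.25        # locally observed time of the same eclipse (h), illustrative

longitude = (observed_local_hour - predicted_greenwich_hour) * 15.0
print(f"Longitude ~ {longitude:.1f} degrees")   # negative = west of Greenwich
```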
On the other three gas giants (Saturn, Uranus and Neptune) eclipses only occur at certain periods during the planet's orbit, due to their higher inclination between the orbits of the moon and the orbital plane of the planet. The moon Titan, for example, has an orbital plane tilted about 1.6° to Saturn's equatorial plane. But Saturn has an axial tilt of nearly 27°. The orbital plane of Titan only crosses the line of sight to the Sun at two points along Saturn's orbit. As the orbital period of Saturn is 29.7 years, an eclipse is only possible about every 15 years.
Mars
On Mars, only partial solar eclipses (transits) are possible, because neither of its moons is large enough, at their respective orbital radii, to cover the Sun's disc as seen from the surface of the planet. Eclipses of the moons by Mars are not only possible, but commonplace, with hundreds occurring each Earth year. There are also rare occasions when Deimos is eclipsed by Phobos. Martian eclipses have been photographed from both the surface of Mars and from orbit.
Pluto
Pluto, with its proportionately largest moon Charon, is also the site of many eclipses. A series of such mutual eclipses occurred between 1985 and 1990. These daily events led to the first accurate measurements of the physical parameters of both objects.
Mercury and Venus
Eclipses are impossible on Mercury and Venus, which have no moons. However, as seen from the Earth, both have been observed to transit across the face of the Sun. Transits of Venus occur in pairs separated by an interval of eight years, but each pair of events happens less than once a century. According to NASA, the next pair of Venus transits will occur on December 10, 2117, and December 8, 2125. Transits of Mercury are much more common, occurring 13 times each century, on average.
Eclipsing binaries
A binary star system consists of two stars that orbit around their common centre of mass. The movements of both stars lie on a common orbital plane in space. When this plane is very closely aligned with the location of an observer, the stars can be seen to pass in front of each other. The result is a type of extrinsic variable star system called an eclipsing binary.
The maximum luminosity of an eclipsing binary system is equal to the sum of the luminosity contributions from the individual stars. When one star passes in front of the other, the luminosity of the system is seen to decrease. The luminosity returns to normal once the two stars are no longer in alignment.
The first eclipsing binary star system to be discovered was Algol, a star system in the constellation Perseus. Normally this star system has a visual magnitude of 2.1. However, every 2.867 days the system fades to magnitude 3.4 for more than nine hours. This is caused by the passage of the dimmer member of the pair in front of the brighter star. The concept that an eclipsing body caused these luminosity variations was introduced by John Goodricke in 1783.
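The quoted dip of Algol from magnitude 2.1 to 3.4 can be translated into a brightness ratio with the standard magnitude relation; this is a generic astronomy calculation, not taken from the article's sources.

```python
# Convert Algol's magnitude change into a brightness ratio: m2 - m1 = -2.5 log10(L2/L1).
m_bright, m_eclipse = 2.1, 3.4
ratio = 10 ** (-0.4 * (m_eclipse - m_bright))
print(f"System fades to ~{ratio:.2f} of its normal brightness")  # ~0.30
```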
Types
Sun – Moon – Earth: Solar eclipse | annular eclipse | hybrid eclipse | partial eclipse
Sun – Earth – Moon: Lunar eclipse | penumbral eclipse | partial lunar eclipse | central lunar eclipse
Sun – Phobos – Mars: Transit of Phobos from Mars | Solar eclipses on Mars
Sun – Deimos – Mars: Transit of Deimos from Mars | Solar eclipses on Mars
Other types: Solar eclipses on Jupiter | Solar eclipses on Saturn | Solar eclipses on Uranus | Solar eclipses on Neptune | Solar eclipses on Pluto
See also
List of solar eclipses in the 21st century
Mursili's eclipse
Transit of Venus
References
External links
A Catalogue of Eclipse Cycles
Search 5,000 years of eclipses
NASA eclipse home page
International Astronomical Union's Working Group on Solar Eclipses
Interactive eclipse maps site
Classroom demonstration of how an eclipse occurs
Image galleries
The World at Night Eclipse Gallery
Solar and Lunar Eclipse Image Gallery
Williams College eclipse collection of images
Astrological aspects
Astronomical events
Earth phenomena
Concepts in astronomy | Eclipse | Physics,Astronomy | 4,353 |
12,804,558 | https://en.wikipedia.org/wiki/Biosynthesis%20of%20doxorubicin | Doxorubicin (DXR) is a 14-hydroxylated version of daunorubicin, the immediate precursor of DXR in its biosynthetic pathway. Daunorubicin is more abundantly found as a natural product because it is produced by a number of different wild type strains of Streptomyces. In contrast, only one known non-wild type species, Streptomyces peucetius subspecies caesius ATCC 27952, was initially found to be capable of producing the more widely used doxorubicin. This strain was created by Arcamone et al. in 1969 by mutating a strain producing daunorubicin, but not DXR, at least in detectable quantities. Subsequently, Hutchinson's group showed that under special environmental conditions, or by the introduction of genetic modifications, other strains of streptomyces can produce doxorubicin. His group has also cloned many of the genes required for DXR production, although not all of them have been fully characterized. In 1996, Strohl's group discovered, isolated and characterized dox A, the gene encoding the enzyme that converts daunorubicin into DXR. By 1999, they produced recombinant Dox A, a Cytochrome P450 oxidase, and found that it catalyzes multiple steps in DXR biosynthesis, including steps leading to daunorubicin. This was significant because it became clear that all daunorubicin producing strains have the necessary genes to produce DXR, the much more therapeutically important of the two. Hutchinson's group went on to develop methods to improve the yield of DXR, from the fermentation process used in its commercial production, not only by introducing Dox A encoding plasmids, but also by introducing mutations to deactivate enzymes that shunt DXR precursors to less useful products, for example baumycin-like glycosides. Some triple mutants, that also over-expressed Dox A, were able to double the yield of DXR. This is of more than academic interest because at that time DXR cost about $1.37 million per kg and current production in 1999 was 225 kg per annum. More efficient production techniques have brought the price down to $1.1 million per kg for the non-liposomal formulation. Although DXR can be produced semi-synthetically from daunorubicin, the process involves electrophilic bromination and multiple steps and the yield is poor. Since daunorubicin is produced by fermentation, it would be ideal if the bacteria could complete DXR synthesis more effectively.
Overview
The anthracycline skeleton of doxorubicin (DXR) is produced by a Type II polyketide synthase (PKS) in Streptomyces peucetius. First, a 21-carbon decaketide chain (Fig. 1 (1)) is synthesized from a single 3-carbon propionyl group, derived from propionyl-CoA, and nine 2-carbon units derived from nine sequential (iterative) decarboxylative condensations of malonyl-CoA. Each malonyl-CoA unit contributes a 2-carbon ketide unit to the growing polyketide chain. Each addition is catalyzed by the "minimal PKS", consisting of an acyl carrier protein (ACP), a ketosynthase (KS)/chain length factor (CLF) heterodimer, and a malonyl-CoA:ACP acyltransferase (MAT) (refer to the top of Figure 1).
This process is very similar to fatty acid synthesis, by fatty acid synthases and to Type I polyketide synthesis. But, in contrast to fatty acid synthesis, the keto groups of the growing polyketide chain are not modified during chain elongation and they are not usually fully reduced. In contrast to Type I PKS systems, the synthetic enzymes (KS, CLF, ACP and AT) are not attached covalently to each other, and may not even remain associated during each step of the polyketide chain synthesis.
After the 21-carbon decaketide chain of DXR is completed, successive modifications are made to eventually produce a tetracyclic anthracycline aglycone (without the glycoside attached). The daunosamine amino sugar, activated by attachment to thymidine diphosphate (TDP), is created in another series of reactions. It is joined to the anthracycline aglycone, and further modifications are made to produce first daunorubicin and then DXR.
There are at least 3 gene clusters important to DXR biosynthesis: dps genes which specify the enzymes required for the linear polyketide chain synthesis and its first cyclizations, the dnr cluster is responsible for the remaining modifications of the anthracycline structure and the dnm genes involved in the amino sugar, daunosamine, synthesis. Additionally, there is a set of "self resistance" genes to reduce the toxic impact of the anthracycline on the producing organism. One mechanism is a membrane pump that causes efflux of the DXR out of the cell (drr loci). Since these complex molecules are only advantageous under specific conditions, and require a lot of energy to produce, their synthesis is tightly regulated.
Polyketide Chain Synthesis
Doxorubicin is synthesized by a specialized polyketide synthase.
The initial event in DXR synthesis is the selection of the propionyl-CoA starter unit and its decarboxylative addition to a two-carbon ketide unit, derived from malonyl-CoA, to produce the five-carbon β-ketovaleryl ACP. The five-carbon diketide is delivered by the ACP to the cysteine sulfhydryl group at the KS active site, by thioester exchange, and the ACP is released from the chain. The free ACP picks up another malonate group from malonyl-CoA, also by thioester exchange, with release of the CoA. The ACP brings the new malonate to the active site of the KS, where it is decarboxylated, possibly with the help of the CLF subunit, and joined to the chain to produce a 7-carbon triketide, now anchored to the ACP (see top of Figure 1). Again the ACP hands the chain off to the KS subunit and the process is repeated iteratively until the decaketide is completed.
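The carbon arithmetic behind this iterative scheme can be sketched as a short toy calculation (illustrative only; the constants below simply restate the 3-carbon starter, 2-carbon ketide extensions and 9 condensations described above, and no attempt is made to model the actual enzymology):

```python
STARTER_CARBONS = 3   # carbons contributed by the propionyl-CoA-derived starter
KETIDE_CARBONS = 2    # carbons retained from each malonyl-CoA unit after loss of CO2
EXTENSIONS = 9        # iterative decarboxylative condensations

def chain_length(extensions: int) -> int:
    """Carbon count of the polyketide after a given number of condensations."""
    return STARTER_CARBONS + extensions * KETIDE_CARBONS

for n in range(EXTENSIONS + 1):
    print(f"after {n} condensations: {chain_length(n)}-carbon chain")

# After 9 condensations: a 21-carbon chain, the decaketide precursor of DXR.
assert chain_length(EXTENSIONS) == 21
```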
In most Type II systems the initiating event is delivery by ACP of an acetate unit, derived from acetyl-CoA, to the active site of the ketosynthase (KS) subunit of the KS/CLF heterodimer. The default mode for Type II PKS systems is the incorporation of acetate as the primer unit, and that holds true for the DXR "minimal PKS". In other words, the action of KS/CLF/ACP (Dps A, B and G) from this system will not produce 21-carbon decaketides, but 20-carbon decaketides instead, because acetate is the “preferred” starter. The process of specifying propionate is not completely understood, but it is clear that it depends on an additional protein, Dps C, which may be acting as a ketosynthase or acyltransferase selective for propionyl-CoA, and possibly Dps D makes a contribution.
A dedicated MAT has been found to be dispensable for polyketide production under in vitro conditions. The PKS may "borrow" the MAT from its own fatty acid synthase and this may be the primary way ACP receives its malonate group in DXR biosynthesis. Additionally, there is excellent evidence that "self-malonylation" is an inherent characteristic of Type II ACPs. In summary, a given Type II PKS may provide its own MAT (s), it may borrow one from FAS, or its ACP may “self-malonylate”.
It is unknown whether the same KS/CLF/ACP ternary complex chaperones the growth of a full-length polyketide chain through the entire catalytic cycle, or whether the ACP dissociates after each condensation reaction. A 2.0-Å resolution structure of the actinorhodin KS/CLF, which is very similar to the dps KS/CLF, shows polyketides being elongated inside an amphipathic tunnel formed at the interface of the KS and CLF subunits. The tunnel is about 17-Å long and one side has many charged amino acid residues which appear to be stabilizing the carbonyl groups of the chain, while the other side is hydrophobic. This structure explains why both subunits are necessary for chain elongation and how the reactive growing chain is protected from random spontaneous reactions until it is positioned properly for orderly cyclization. The structure also suggests a mechanism for chain length regulation. Amino acid side groups extend into the tunnel and act as "gates". A couple of particularly bulky residues may be impassable by the chain, causing termination. Modifications to tunnel residues based on this structure were able to alter the chain length of the final product. The final condensation causes the polyketide chain to "buckle" allowing an intramolecular attack by the C-12 methylene carbanion, generated by enzyme catalyzed proton removal and stabilized by electrostatic interactions in the tunnel, on the C-7 carbonyl (see 3 in Figure 1). This tunnel aided intramolecular aldol condensation provides the first cyclization when the chain is still in the tunnel. The same C-7/C-12 attack occurs in the biosynthesis of DXR, in a similar fashion.
Conversion to 12-deoxyaklanonic acid
The 21-carbon decaketide is converted to 12-deoxyaklanonic acid (5), the first free, easily isolated intermediate in DXR biosynthesis, in 3 steps. These steps are catalyzed by the final 3 enzymes in the dps gene cluster and are considered part of the polyketide synthase.
While the decaketide is still associated with the KS/CLF heterodimer the 9-carbonyl group is reduced by Dps E, the 9-ketoreductase, using NADPH as the reducing agent/hydride donor. Dps F, the “1st ring cyclase” /aromatase, is very specific and is in the family of C-7/C-12 cyclases that require prior C-9 keto-reduction. These two reactions are felt to occur while the polyketide chain is still partially in the KS/CLF tunnel and it is not known what finally cleaves the chain from its covalent link to the KS or ACP. If the Dps F cyclase is inactivated by mutations or gene deletions, the chain will cyclize spontaneously in random fashion. Thus, Dps F is thought to “chaperone” or help fold the polyketide to ensure non-random cyclization, a reaction that is energetically favorable and leads to subsequent dehydration and resultant aromatization.
Next, Dps Y regioselectively promotes formation of the next two carbon-carbon bonds and then catalyzes dehydration leading to aromatization of one of the rings to give (5).
Conversion to ε-rhodomycinone
The next reactions are catalyzed by enzymes originating from the dnr gene cluster. Dnr G, a C-12 oxygenase (see (5) for numbering) introduces a keto group using molecular oxygen. It is an "anthrone type oxygenase", also called a quinone-forming monooxygenase, many of which are important 'tailoring enzymes' in the biosynthesis of several types of aromatic polyketide antibiotics. They have no cofactors: no flavins, metals or energy sources. Their mechanism is poorly understood but may involve a "protein radical".
Aklanonic acid (6), a quinone, is the product. Dnr C, aklanonic acid O-methyltransferase, methylates the carboxylic acid end of the molecule, forming an ester, using S-adenosyl methionine (SAM) as the cofactor/methyl group donor. The product is aklanonic acid methyl ester (7). The methyl group is removed later, but it serves to activate the adjacent methylene bridge, facilitating its attack on the terminal carbonyl group, a reaction catalyzed by DnrD.
Dnr D, the fourth ring cyclase (AAME cyclase), catalyzes an intramolecular aldol addition reaction. No cofactors are required and neither aromatization nor dehydration occurs. A simple base catalyzed mechanism is proposed. The product is aklaviketone (8).
Dnr H, aklaviketone reductase, stereospecifically reduces the 17-keto group of the new fourth ring to a 17-OH group to give aklavinone (9). This introduces a new chiral center and NADPH is a cofactor.
Dnr F, aklavinone-11-hydroxylase, is a FAD monooxygenase that uses NADPH to activate molecular oxygen for subsequent hydroxylation. ε-rhodomycinone (10) is the product.
Conversion to doxorubicin
Dnr S, daunosamine glycosyltransferase, catalyzes the addition of the TDP-activated glycoside, L-daunosamine-TDP, to ε-rhodomycinone to give rhodomycin D (Figure 2). The release of TDP drives the reaction forward. The enzyme has sequence similarity to glycosyltransferases of the other "unusual sugars" added to Type II PKS aromatic products. Dnr P, rhodomycin D methylesterase, removes the methyl group added previously by DnrC. It initially served to activate the adjacent methylene bridge, and after that it prevented its carboxyl group from leaving the C-10 carbon (see Fig 2). Had the carboxyl group not been esterified prior to the fourth ring cyclization, its departure as CO2 would have been favored by the formation of a bicyclic aromatic system. After C-7 reduction and glycosylation, the C-8 methylene bridge is no longer activated for deprotonation, thereby making aromatization less likely. Note that the non-isolable intermediate, with numbering, is the 3rd molecule in Figure 2. The numbering system is very odd and a vestige of early nomenclature. The decarboxylation of the intermediate occurs spontaneously, or by the influence of Dnr P, giving 13-deoxycarminomycin.
A crystal structure, with bound products, of aclacinomycin methylesterase, an enzyme from Streptomyces purpurascens with 53% sequence homology to Dnr P, has been solved. It is able to catalyze the same reaction and uses a classic Ser-His-Asp catalytic triad, with serine acting as the nucleophile and a Gly-Met pair providing stabilization of the transition state by forming an "oxyanion hole". The active site amino acids are almost entirely the same as in Dnr P, and the mechanism is almost certainly identical.
Although Dox A is shown next in the biosynthetic scheme (Figure 2), Dnr K, carminomycin 4-O-methyltransferase is able to O-methylate the 4-hydroxyl group of any of the glycosides in Figure 2. A 2.35 Å resolution crystal structure of the enzyme with bound products has recently been solved. The orientation of the products is consistent with a SN2 mechanism of methyl transfer. Site-directed mutagenesis of the potential acid/base residues in the active site did not affect catalysis leading to the conclusion that Dnr K most likely acts as an entropic enzyme in that rate enhancement is mainly due to orientational and proximity effects. This is in contrast to most other O-methyltransferases where acid/base catalysis has been demonstrated to be an essential contribution to rate enhancement.
Dox A catalyzes three successive oxidations in Streptomyces peucetius. Deficient DXR production is not primarily due to low levels of, or malfunctioning, Dox A, but to the fact that many products are diverted away from the pathway shown in Figure 2. Each of the glycosides is a potential target of shunt enzymes, not shown, some of which are products of the dnr gene cluster. Mutation of these enzymes does significantly boost DXR production. In addition, Dox A has a very low kcat/Km value for C-14 oxidation (130/M) compared to C-13 oxidation (up to 22,000/M for some substrates). Genetic manipulation to overexpress Dox A has also increased yields, particularly if the genes for the shunt enzymes are inactivated simultaneously.
Dox A is a cytochrome P-450 monooxygenase that has broad substrate specificity, catalyzing anthracycline hydroxylation at C-13 and C-14 ( Figure 2). The enzyme has an absolute requirement for molecular oxygen and NADPH. Initially, two successive oxidations are done at C-13, followed by a single oxidation of C-14 that converts daunorubicin to doxorubicin.
References
Topoisomerase inhibitors
Biosynthesis | Biosynthesis of doxorubicin | Chemistry | 3,835 |
4,899,945 | https://en.wikipedia.org/wiki/Shrink%20ray | In science fiction, a shrink ray is any device that uses energy to reduce the physical size of matter. Many are also capable of enlarging items. A growth ray typically only has the ability to enlarge.
Scientific
Science fiction writer and polymath Isaac Asimov wrote: Miniaturization doesn't actually make sense unless you miniaturize the very atoms which build up matter. Otherwise a tiny brain in a human the size of an insect, composed of normal atoms, is composed of too few atoms for the miniaturized human to be any more intelligent than the insect. Also, miniaturizing atoms is impossible according to the rules of quantum mechanics.
Depending on how those atoms were supposed to have been miniaturized, a miniature human may or may not weigh as much as they originally did, which is an observation that has been used for various effects over the years in fictions such as comic books.
However, the problems of a miniature human don't stop there. Basic geometry governs parameters such as relationships between cross-sectional area, volume, and surface area. It may be impossible for a one-inch high human to kill themselves in a fall of any conceivable height, but they may be able to drown themselves with a single drop of water.
Appearances in popular culture
Films and television
Dr. Cyclops from the 1940 horror film of that name shrank his victims by locking them inside an "atomic generator".
In the 1958 movie Attack of the Puppet People, a scientist captures people and shrinks them to 6 inches in height with an ultrasonic wave device, so he can keep them as company. (He keeps them sealed inside special suspended animation canisters between "puppet shows".)
The 1966 science-fiction film Fantastic Voyage (written by Harry Kleiner, novelization by Isaac Asimov) is plotted around such a device, allowing the miniaturized submarine Proteus to carry a crew inside a stricken scientist in an attempt to save his life. They have one hour to cure him before they expand back to normal size.
Dr. Shrinker was a segment that aired 16 episodes as part of the ABC network's The Krofft Supershow in 1976. Dr. Shrinker (Jay Robinson) is an evil scientist with a lab on an uncharted island. When teenagers Brad, B.J., and Gordie are stranded on the island, Dr. Shrinker subjects them to his shrinker machine. They manage to escape the lab in miniature form; the series follows their adventures as they try to evade the clutches of the mad scientist and his assistant Hugo (Billy Barty).
In the episode "The Big Break-In" of the 1987 Teenage Mutant Ninja Turtles animated TV series, Krang uses a remote-controlled minimizer device with a shrink ray to shrink down US Army bases, preventing them from attacking the Technodrome. In the episode "Funny, They Shrunk Michelangelo" of the same TV series captain Talbot Breech uses a miniaturizing ray to shrink US naval ships and put them into bottles as revenge on those who turned him down for the US Naval Academy. The same show also features the episode "Poor Little Rich Turtle" features Shredder using a shrink ray on the girl Buffy Shellhammer, who despite her young age runs a company, to force her telling a super rocket fuel formula her grandfather told her before he died.
In The Penguins of Madagascar episode "Jiggles", Kowalski uses his shrink ray to shrink Jiggles to normal size.
In an episode "Getting Antsy" of the 1991 Darkwing Duck cartoon, a villain Lilliput uses the ray to shrink buildings and landmarks of the fictional city St. Canard. He then employs ants to haul the shrunken buildings to a miniature golf, where they become part of the course. Eventually, Lilliput also shrinks the hero Darkwing Duck to an even tinier size.
In an episode of Codename: Kids Next Door, titled Operation: M.I.N.I.G.O.L.F., after Numbuh 2 bests champion golfer, Rupert Putkin, in a game of mini-golf, Rupert seeks revenge. He builds a shrink ray to shrink down the world's greatest monuments and uses them in his own miniature-golf course. Afterwards, he shrinks Numbuh 2 and forces him to play a rematch in his smaller height. Later, Rupert plans to use Numbuh 2 as the golfball as he hits him into a hole that will activate his shrink ray to shrink the world. His plan fails when he misses and Numbuh 2 hits the reverse setting on the shrink ray, which zaps Rupert and makes him grow bigger, causing him to step on and crush his golf course. The shrink ray is destroyed subsequently, but Numbuh 2 remains tiny. He runs back to his friends in the treehouse, and they use him for ping-pong.
In another episode of Kids Next Door, entitled "Operation: S.P.R.O.U.T.", when Numbuh 4 swallows a Brussels sprout, supposedly to make him an adult faster, his friends at Sector V take action. Numbuhs 1, 2, and 5 use a shrink ray to shrink down to the size of a booger. Afterwards, Numbuh 3 places them inside of Numbuh 4's nose, and they make their way to the stomach to recover the Brussels sprout, before the shrink effect wears off. They are able to retrieve the massive sprout and make it out of Numbuh 4's body just before they re-expand to normal size. In the aftermath, Numbuh 4 returns home and accidentally eats liver, implying that they have to go on the same mission again.
In the TV series, The Adventures of Jimmy Neutron: Boy Genius, one of Jimmy's most-used inventions is his shrink ray, first introduced in his film in which he shrank himself in order to sneak out of his house.
In Tim Burton's 1996 movie Mars Attacks!, the Martian Leader uses a shrinking ray to shrink and crush the President of the United States' main general, General Decker.
In "In the Belly of the Boss", the third segment of The Simpsons Halloween special episode "Treehouse of Horror XV", the Simpson family is shrunk by Professor Frink, the entire segment being a parody of the aforementioned Fantastic Voyage.
A shrink ray invented by Professor Wayne Szalinski is featured throughout the Honey, I Shrunk the Kids franchise, and is the primary invention used throughout. Not only could it shrink people or other objects, but it could also reverse the effect to bring them back to normal size.
In a 1995 episode of Captain Planet and the Planeteers called "No Small Problem", Dr. Blight invents a shrinking ray which Sly Sludge uses to shrink rubbish.
In an episode of Home Improvement, Tim and Al shrink themselves using shrink rays to work deep inside of the engine.
In Doctor Who, the Master uses a Tissue Compression Eliminator to shrink and kill people.
The Lilo & Stitch franchise has featured several instances of shrink rays:
In Lilo & Stitch: The Series, Dr. Jumba Jookiba has a "reducer ray" that can shrink objects. This device is used in "Poxy" to shrink Lilo and Stitch in the X-Buggy to get into Pleakley to retrieve a microscopic experiment (Experiment 222/Poxy) that makes people ill. Gantu is also shrunk down by the same device in the episode. In "Short Stuff", a different device called the "Protoplasmic Growth Ray" is used to enlarge Stitch to become big enough to ride a roller coaster, but he's accidentally made gigantic and thus too big to ride the coaster. The episode's experiment (Experiment 297/Shortstuff) is also enlarged by the device, getting into a fight and winning against Stitch (who was enlarged even further to try to defeat him) at the fair. Stitch is brought back to his regular size to defeat the experiment. Gantu also used the device to enlarge Experiment 625 to get him to crush Lilo, Stitch, Jumba, and Pleakley, but 625 shows no interest in them and instead uses his enlarged size to eat the world's largest sandwich.
Two episodes of the anime spin-off series Stitch! feature Experiment 001/Shrink, an experiment first introduced in the Lilo & Stitch: The Series finale film Leroy & Stitch (as a cameo) as a rare example of a living creature (albeit an artificial lifeform) being a shrink ray. In the episode "Shrink", the experiment shrinks several characters to a smaller size until he returns them to their regular sizes, although at the end of the episode he instead grows the alien insect BooGoo to become larger than the planet Earth. In "Experiment-a-palooza", Shrink grows Stitch into a giant; the latter goes on a rampage—as he has also reverted to his former destructive programming thanks to the abilities of Experiment 210/Retro—until Yuna manages to calm him down.
In the Aqua Teen Hunger Force episode "Unremarkable Voyage", Frylock builds a shrink ray (which can also enlarge items). Master Shake quickly gains control of the machine and proceeds to abuse its power for his own benefit. The shrink ray turns out to be faulty, however, as shrunken items return to normal size after a period of time.
A shrink ray is a recurring device shown in Venture Brothers, although it never seems to actually work properly.
In Despicable Me, Gru used a shrink ray to shrink the Moon and pocket it. The effects of the shrink ray are only temporary, however, and the bigger the mass of an object, the quicker the effect wears off; this is called the "Nefario Principle". Vector returns the Moon to satellite orbit before it returns to normal size using his escape pod, but the Moon expands before he could do so in time and his escape pod is consequently destroyed, trapping him and a Minion on the now normal-sized Moon.
In Innerspace, a naval aviator is selected as a guinea pig to participate in a project which places him in a submersible pod to be shrunk to microscopic size and injected into the body of a rabbit.
In Gravity Falls, Dipper Pines was teased by his twin Mabel for being slightly shorter than her, so he finds a crystal that he attaches to a flashlight. One side enlarges things and the other shrinks things. It falls into Gideon's hands until the twins stop Gideon from shrinking Grunkle Stan and taking the Mystery Shack.
In Dragon Ball, a Micro Band invented by Bulma can shrink its wearer. Also, a portable home called a Capsule House can be carried around in a capsule and deployed when desired.
In the episode "The Sound of Fear" from the Monsters vs. Aliens TV series, Dr. Crockroach uses a shrink gun to shrink Susan. After a few minutes, she is back to normal.
In an episode of Phineas and Ferb, Dr. Doofenshmirtz shrinks himself but misses his hand. Phineas and Ferb have also shrunk themselves twice.
In the Barbie: Life in the Dreamhouse episode entitled "The Shrinkerator", Ken builds a shrink ray and accidentally shrinks Barbie and Raquelle.
In the WordGirl episode entitled "Shrinkin' in the Ray", Dr. Two-Brains uses a shrink ray to shrink cheese and also shrinks Scoops and WordGirl.
In The Electric Company episode "Shrink, Shrank, Shrunk", Manny uses a "shrinkinator" to shrink the water bottle and the car but accidentally shrinks Jessica, Marcus, and himself instead. In the end, they return to normal size.
In the sixth season of Archer, episodes "Drastic Voyage: Part I" and "Drastic Voyage: Part II" involve a CIA-developed machine which can perform "molecular miniaturization" and which is essentially a shrink ray. The episodes both parody and reference Fantastic Voyage.
In the Mickey Mouse episode "Down the Hatch", Mickey and Goofy get trapped inside Donald's body after a shrink ray accidentally shrinks them to miniature proportions.
In the episode "Incredible Shrinking Cat" from The Tom and Jerry Comedy Show, Tom and Jerry end up in a scientist's laboratory where they discover a shrink and growth ray, which is used on Tom.
Radio
The "Pertwee System of Infinite Acceleration" in the Dimension X episode "The Professor Was a Thief" was a shrink ray (November 5, 1950).
Literature
Cold War in a Country Garden (Lindsay Gutteridge, Pocket 1973) concerns the adventures of miniaturized spies.
Small World (Tabitha King, 1982) is about a dollhouse enthusiast who gains a device that will shrink anything, and takes it too far.
The Atom, Ant-Man, The Wasp, and Doll Man are but a few of the comic book characters who had as a primary power the ability to shrink (usually by technological means).
Video games
Duke Nukem 3D and Duke Nukem Forever have a shrink gun capable of shrinking an enemy, which allows the player character Duke Nukem to step on the shrunken foe, instantly killing it. A similar weapon appears in Duke Nukem: Manhattan Project.
Engineers in World of Warcraft are capable of crafting a Gnomish shrink ray.
The Eiffel Tower is reduced in size, then stolen, in the game Evil Genius.
In the Men in Black game Crashdown, Agent Jay uses a shrink ray to shrink himself so he may fight a group of alien insects.
Pandemonium! features a shrink ray power-up.
Call of Duty: Black Ops Zombie mode includes weapon called 31-79JGb215, which reduces zombies' size for a short amount of time so they can be instantly killed with any weapon, or even by running towards them, although this weapon is only seen in the map Shangri-La.
The game Ratchet & Clank: Size Matters features a shrink ray item, which Ratchet uses to enter keyholes and unlock them.
Other
The term "grocery shrink ray" has been used to describe a manufacturer decreasing the amount of product in a package while keeping the package price the same, as a scheme to implement a hidden price increase.
See also
Raygun
Size change in fiction
References
Fictional energy weapons
Pseudoscience
Fiction about size change | Shrink ray | Physics,Mathematics | 2,991 |
27,249,357 | https://en.wikipedia.org/wiki/Beijing%20Planetarium | The Beijing Planetarium () is a planetarium in Xicheng District, Beijing, China.
The planetarium comprises two main buildings, Building A & B. Building A, which was built in 1957, contains the Celestial Theater, an Eastern Exhibition Hall and a Western Exhibition Hall. It was the first large-scale planetarium in China, and at one time the only planetarium in Asia. Building B, which began operations in 2004, contains a digital space theater, 3D and 4D theaters, several exhibition halls and two observatories.
See also
List of planetariums
References
External links
Official website
Museums in Beijing
Science museums in China
Planetaria in China
National first-grade museums of China
Xicheng District | Beijing Planetarium | Astronomy | 147 |
47,330,034 | https://en.wikipedia.org/wiki/Suillus%20triacicularis | Suillus triacicularis is a species of bolete fungus in the family Suillaceae. Described as new to science in 2014, it is found in the northwestern Himalayas, India, where it grows in association with Pinus roxburghii.
References
External links
tridentinus
Fungi described in 2014
Fungi of India
Fungus species | Suillus triacicularis | Biology | 67 |
4,354,277 | https://en.wikipedia.org/wiki/Channel%2037 | Channel 37 is an ultra-high frequency (UHF) television broadcasting channel that is intentionally left unused by countries in most of ITU Region 2, such as the United States, Canada, Mexico and Brazil. The frequency range allocated to this channel is important for radio astronomy, so all broadcasting is prohibited within a window of frequencies centred typically on . Similar reservations exist in portions of the Eurasian and Asian regions, although the channel numbering varies.
History
Channel 37 in System M and N countries occupied a band of UHF frequencies from . This band is particularly important to radio astronomy because it allows observation in a region of the spectrum in between the dedicated frequency allocations near 410 MHz and 1.4 GHz. The area reserved or unused differs from nation to nation and region to region (as for example the EU and British Isles have slightly different reserved frequency areas).
One radio astronomy application in this band is for very-long-baseline interferometry.
When UHF channels were being allocated in the United States in 1952, channel 37 was assigned to 18 communities across the country. One of them, Valdosta, Georgia, featured the only construction permit ever issued for channel 37: WGOV-TV, owned by Eurith Dickenson "Dee" Rivers Jr., son of the former governor of Georgia (hence the call letters). Rivers received the CP on February 26, 1953, but WGOV-TV never made it to the air; on October 28, 1955, they requested an allocation on channel 8, but the petition was denied.
In 1963, the Federal Communications Commission (FCC) adopted a 10-year moratorium on any allocation of stations to Channel 37. A new ban on such stations took effect at the beginning of 1974, and was made permanent by a number of later FCC actions. As a result of this, and similar actions by the Canadian Radio-television and Telecommunications Commission, Channel 37 has never been used by any over-the-air television station in Canada or the United States.
The 2016-2021 repack left no North American stations above UHF 36.
The low-power WNWT-LD in New York was given virtual channel 37 in August 2019, thus becoming the first American station to be so assigned via the digital television PSIP standard. While the channel is displayed as "37.1" or "37-1" on a digital television set, WNWT-LD's physical signal remains on VHF channel 3, causing no interference.
Allocation issues
Reservations and use outside the US have a non-exclusive legal status
The Canadian Radio-television and Telecommunications Commission (CRTC) enacted such a ban on Channel 37, but radio astronomy has no exclusive status on this channel.
Mexico observes a similar ban on the use of this TV channel, but the allocation, like Canada's, is not exclusive.
Guatemala has a ban on Channel 37.
The Bahamas has a similar ban to Canada's on Channel 37.
Belize has an absolute ban on Channel 37.
Most NTSC System-M countries have an informal ban on Channel 37 as well but give radio astronomy no exclusive use of the channel.
The 2016-2021 repack left no US, Canadian, and Mexican OTA TV broadcasters above UHF 36. Many small-market rebroadcasters were taken dark by their corporate owners. This left former UHF 38–83 in the hands of cellular telephone and land-mobile operators, with UHF 14-36 as the main OTA TV broadcast band and UHF 37 as a vacant guardband.
Since July 2000, Channel 37 may be used in the US for medical telemetry equipment on a co-primary basis. The equipment must emit no more than one watt of effective radiated power and is for use in hospitals and other such facilities.
The power level permitted by the FCC is many times more than the amount allowed for Part 15 unlicensed broadcasting.
In US areas set aside for radio-frequency silence, the equipment is banned by statute and regulation.
The seemingly-low power level can be troublesome for radio astronomy equipment, which depends on detecting extraordinarily low signal strengths. Any use of the same frequencies raises the noise floor, thereby decreasing the signal-to-noise ratio and making the work more difficult.
Channel 1 was also removed from the TV bandplan in the late 1940s, channels 70 to 83 (800 MHz band) by the 1980s mainly for cellular telephone and trunked two-way land mobile radio systems and, in June 2009, channels 52 to 69 (700 MHz band) for mobile phones, emergency services and mobile TV services such as Qualcomm's now-defunct MediaFLO (channel 55). Additional channels from 38 to 51 (600 MHz band) were auctioned in early 2017, leaving channel 37 as a guard band between repacked TV stations and more mobile networks, for which T-Mobile US won most of the licenses.
Certain channels, 14 through 20, are used for land mobile communications in some large metropolitan areas in the U.S. However, facilities using this decades-old co-allocation are treated as just another station with which interference must be avoided in their local area.
The channels displayed by cable converter boxes under these numbers are not on the same frequencies as their over-the-air counterparts; there are also virtual channel numbering schemes in use in digital television which do not map directly to fixed frequency channel assignments. As such, a "cable 37" channel may (and most often does) exist, but on a much lower frequency.
Outside North America
In NTSC-M countries
Outside North America, channel 37 is actively used in these countries where NTSC-M is used:
In the Philippines, GMA TV-5 in Davao uses UHF 37 as GMA Network's digital channel (ISDB-T) with analogue broadcasts on Channel 5. Channel 37 was also used by UNTV-37 in Metro Manila as Progressive Broadcasting Corporation's analogue channel, with digital broadcasts assigned to Channel 38 (ISDB-T).
In Trinidad and Tobago, WIN-TV is broadcast on Channels 37 and 39, using NTSC
In the Dominican Republic channel 37 is also used for the CDN news channel
In South Korea, channel 37 carried KBS 2TV's analog signal from the Gwanak-san transmission station.
In other countries
In these other countries, the frequency allocation for these TV channels is different:
in the UK (many transmitters used by the Five network actually broadcast in the past on PAL channel 37; as mentioned below, it was previously used as a local device output channel before Five's launch, and the network had to provide service regarding re-tunes of those devices to a new channel at no cost to the viewer).
in Western Europe, Channel 37 is used fairly widely as a relay transmitter frequency.
In Malaysia, NTV7 broadcast in PAL on CCIR Channel 37 (599.25 MHz) prior to analog shutdown. Currently the frequency is not in use.
In Indonesia, Channel 37 UHF has been used by many analog terrestrial networks depending on its location, such as MNCTV for the Jakarta metropolitan area, tvOne for Medan, GTV for Semarang, TVRI for Makassar, and Metro TV for Denpasar, while in digital, the frequency now used as multiplexing by Emtek in Samarinda and MNC Group in Mamuju.
In these countries, channel 37 does not correspond to the same frequencies as in the countries using the System-M/N standard. At least in the UK, 606–614 MHz is reserved for radio astronomy.
The UK's namesake "Channel 37", while different in frequency, was formerly part of a small group of channels reserved for non-broadcast purposes such as RF modulators for output devices such as game consoles and videocassette recorders. The UK-named 34-37 channel range is no longer reserved in this manner.
In Japan, UHF television channel frequencies are offset by one channel compared to North American channel naming convention. Japan's channel 36 is in use by TV Asahi in some regions.
Global UHF TV allocation table (605–615 MHz)
This Radio Astronomy Allocation is between the following wavelengths:
605 MHz = 0.49552 m = 49.55 cm
615 MHz = 0.48747 m = 48.75 cm
Assume either a 100 kHz or a 250 kHz guardband with respect to this allocation.
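These figures follow directly from λ = c/f. The short sketch below (illustrative only; the helper function name is made up) reproduces the wavelengths quoted above and also the conventional North American UHF channel plan, in which 6 MHz channel N (N ≥ 14) starts at 470 + 6·(N − 14) MHz, placing channel 37 at 608–614 MHz:

```python
C = 299_792_458.0  # speed of light in m/s

# Wavelengths of the radio astronomy allocation edges, lambda = c / f.
for f_mhz in (605, 615):
    wavelength_m = C / (f_mhz * 1e6)
    print(f"{f_mhz} MHz -> {wavelength_m:.5f} m ({wavelength_m * 100:.2f} cm)")
# 605 MHz -> 0.49552 m (49.55 cm)
# 615 MHz -> 0.48747 m (48.75 cm)

def uhf_channel_band_mhz(channel):
    """Lower and upper edge (MHz) of a North American 6 MHz UHF TV channel."""
    lower = 470 + 6 * (channel - 14)
    return lower, lower + 6

print(uhf_channel_band_mhz(37))  # (608, 614)
```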
DVB-T adoption note: The tables above are not accurate for nations that have adopted DVB-T. With DVB terrestrial television, the separate audio and video carrier frequencies are merged into a single signal, and the new DVB frequencies are, as a general rule, rounded to an even number of MHz.
National arrangements for radio astronomy different from ITU-R Radio Regulations
Central & Western Europe
Austria: no allocation - only mention of No. 5.149
Bulgaria: no allocation
Belgium: assignment to radio astronomy (shared with active services)
Finland: no allocation
Estonia: no allocation
Iceland: no allocation
Liechtenstein: no allocation
Luxembourg: no allocation
Netherlands: primary status
Portugal: no allocation
Spain: no allocation
Sweden: no allocation
United Kingdom: no reference to No. 5.149
Rest of World
Armenia: no allocation
Russian Federation: no allocation
Turkey: no allocation
New Zealand, Maori TV and others (not allocated to Radio Astronomy at all)
References
External links
North America
FCC database for Channel 37, had shown possible usage conflict with Mexico but now has the channel entirely clear in Canada, the U.S.A. & Mexico.
US FCC's attack on Channel 37
AE5D.com: Channel 37 - The last Empty Channel
W9WI.com: An article about channel 37 and channels above 69
craf.eu: Astrophysical importance of the band 608 - 614 MHz
Spare That Channel, Time, 10 May 1963
FREQUENCIES ALLOCATED TO RADIO ASTRONOMY USED BY THE DSN
McAdams On: Channel 37
Rest of World
NZ Long Term Digital TV Plan
Broadcast engineering
37
History of television
Fictional television stations | Channel 37 | Engineering | 2,023 |
6,030,254 | https://en.wikipedia.org/wiki/Digital%20backlot | A digital backlot or virtual backlot is a motion-picture set that is neither a genuine location nor a constructed studio; the shooting takes place entirely on a stage with a blank background (often a greenscreen) onto which an artificial environment is added during post-production. Digital backlots are mainly used for genres such as science fiction, where building a real set would be too expensive or outright impossible.
Notable films
Among the first films to introduce the technique was Mini Moni the Movie by Shinji Higuchi in 2002, predated by Rest In Peace by Stolpskott Film (2000). Others include:
Released
Rest in Peace (Sweden, 2000) – Shot entirely with green-screen. Some sections fully CGI.
Casshern (Japan, 2004) – Shot on celluloid. A few practical set pieces used.
Able Edwards (United States, 2004) – Shot digitally on Canon XL1 cameras.
Immortal (France, 2004) – Shot on celluloid. Also showed CGI characters interacting with live actors.
Sky Captain and the World of Tomorrow (United States, 2004) – Shot digitally on Sony CineAlta cameras.
Sin City (United States, 2005) – Shot digitally on CineAlta cameras. Three practical sets used.
MirrorMask (United States/United Kingdom, 2005) – Shot on celluloid. 80% of film uses digital backlot. Some practical set pieces used.
The Cabinet of Dr. Caligari (United States, 2005) – Shot digitally.
300 (United States, 2007) – Shot on celluloid. Two practical sets used.
Speed Racer (United States, 2008) – Directed by the Wachowskis. Three practical sets used.
The Spirit (United States, 2008) – Director Frank Miller shot the film with the same techniques he and Robert Rodriguez used on Sin City.
Avatar (United States, 2009) – Directed by James Cameron. Two practical sets used.
Goemon (Japan, 2009) – The second film from Casshern helmer Kazuaki Kiriya.
Alice in Wonderland (United States, 2010) – Directed by Tim Burton. Practical sets used.
Sin City: A Dame to Kill For (United States 2014) – Co-directed by Robert Rodriguez and Frank Miller. Sequel to Sin City.
Upcoming
Tribes of October
See also
Computer-generated imagery
Digital cinema
Digital cinematography
Filmizing
Live-action/animated film
Virtual studio
References
Film and video technology
Digital media
Film and video terminology | Digital backlot | Technology | 508 |
2,398,362 | https://en.wikipedia.org/wiki/Eta%20Aquilae | Eta Aquilae (η Aql, η Aquilae) is a multiple star in the equatorial constellation of Aquila, the eagle. It was once part of the former constellation Antinous. Its apparent visual magnitude varies between 3.49 and 4.3, making it one of the brighter members of Aquila. Based upon parallax measurements made by the Gaia spacecraft on its third data release (DR3), this star is located at a distance of roughly . The primary component is a Classical Cepheid variable.
System
The η Aquilae system contains at least two stars, probably three. The primary star η Aql A is by far the brightest and dominates the spectrum. An ultraviolet excess in the spectral energy distribution suggests the presence of a faint hot companion, η Aql B, which has been given a spectral type of B8.9 V. The fractional spectral type is an artefact of the mathematics used to model the spectrum, not an indication of any specific spectral features that would be intermediate between B8 and B9. Radial velocity measurements could not find a satisfactory fit, which suggests that the orbit of η Aql B may be face-on, or very large.
A companion has been resolved visually 0.66" distant, but measurements give this a spectral type of F1 - F5. It seems likely that the hot star detected in the spectrum is closer and unresolved. The resolved companion has not been shown to be physically associated, but it is estimated that it would have a period of nearly a thousand years. Measurements with the HST fine guidance sensors show variations likely to be due to orbital motion on a scale of two years, so η Aql would appear to be a triple system.
At Eta Aquilae's distance (), its apparent brightness is diminished by 0.74 magnitudes due to extinction caused by interstellar dust between Earth and the star.
Cepheid variable
η Aquilae A is a Cepheid variable star, discovered by Edward Pigott in 1784. It has an apparent magnitude that ranges from 3.49 to 4.3 over a period of 7.177 days. Along with Delta Cephei, Zeta Geminorum and Beta Doradus, it is one of the most prominent naked eye Cepheids; that is, both the star itself and the variation in its brightness can be distinguished with the naked eye. Some other Cepheids such as Polaris are bright but have only a very small variation in brightness.
This massive star, around 100–200 million years old, has burned through the hydrogen fuel at its core and evolved into a supergiant, giving it a baseline stellar classification of F6 Iab. The periodic pulsations of the star cause its stellar class to vary between F6.5Ib and G2Ib over the course of each cycle.
Compared to the Sun, Eta Aquilae has around 6 times the mass, 60 times the radius, and about 3,400 times the luminosity. This energy is being emitted from the outer envelope at an effective temperature of 5,700 K, giving it the yellowish-white glow of a G-type star. The radius of the star varies by () over the course of a pulsation cycle. Compared to its neighbors, this star has a high peculiar velocity of .
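As a rough consistency check (a back-of-the-envelope sketch, not taken from the cited sources), the quoted radius and effective temperature reproduce the quoted luminosity through the Stefan–Boltzmann relation L/L_Sun = (R/R_Sun)^2 (T/T_Sun)^4, assuming a nominal solar effective temperature of about 5,772 K:

```python
R_RATIO = 60      # radius in solar radii (value quoted above)
T_STAR = 5700.0   # effective temperature in K (value quoted above)
T_SUN = 5772.0    # assumed nominal solar effective temperature in K

luminosity_ratio = R_RATIO ** 2 * (T_STAR / T_SUN) ** 4
print(f"L is roughly {luminosity_ratio:.0f} times solar")  # about 3,400, as quoted
```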
Name
In Chinese, (), meaning Celestial Drumstick, refers to an asterism consisting of η Aquilae, θ Aquilae, 62 Aquilae and 58 Aquilae. Consequently, the Chinese name for η Aquilae itself is (, .)
This star, along with δ Aql and θ Aql, made up Al Mizān (ألميزان), the Scale-beam. According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, Al Mizān was the title for three stars: δ Aql as Al Mizān I, η Aql as Al Mizān II and θ Aql as Al Mizān III.
η Aquilae, together with θ Aql, δ Aql, ι Aql, κ Aql and λ Aql, was part of the obsolete constellation Antinous.
Notes
References
External links
Eta Aquilae
Image Eta Aquilae
Al Mizān II
187929
Aquila (constellation)
Aquilae, Eta
Classical Cepheid variables
F-type supergiants
F-type main-sequence stars
B-type main-sequence stars
Triple star systems
Aquilae, 55
7570
097804
J19522835+0100203
BD+00 4337 | Eta Aquilae | Astronomy | 978 |
32,867,978 | https://en.wikipedia.org/wiki/H%C3%A4stens | Hästens Sängar AB, or simply Hastens (, Swedish for "The horse's" as in "the horse's beds"), is a Swedish manufacturer established in 1852, that produces and trades in high-end mattress, bed linen, bedding pillows and lifestyle accessories.
The company was founded by Pehr Adolf Janson in 1852 as a saddlery business and remains family-owned. David Janson shifted its focus in the early 1900s from making saddles to making beds. By 1952, a century after its foundation, the company had become the official bedding supplier of Sweden's royal court, a title it has shared with IKEA since 1984. The company continues to manufacture all of its beds in its factory in Sweden.
Hästens manufactures beds and mattresses by using materials such as cotton, horse hair, wool and flax. Hästens retail stores also sell branded premium bedlinen, pillows, duvets and accessories.
History
Pehr Adolf Janson (1830-1885) was awarded his master certificate in 1852 by King Oscar I of Sweden. Master saddlers were also makers of mattresses, since horsetail hair was an essential material for the pads that went into the carriage. At that time, becoming a master saddler in Sweden required the certificate to be issued by the King himself.
In the late 1800s, the family moved to Hed and Pehr Adolf's son Per Thure Janson decided to follow his father's path in becoming a master saddler. Per Thure started a company together with his son David Janson. The business of making beds took off and they were soon producing more beds than saddles.
In 1939, British architect Ralph Erskine (1914–2005) travelled by bicycle to Sweden, where he later met David Janson, who commissioned him to build the Hästens factory, one of the first buildings Erskine designed in Sweden. He designed it in 1948 and also designed the factory's further expansion in 1998, in Köping, where the company is still located.
In 1978, Jack Ryde designed Hästens' blue check pattern, which was presented at a furniture trade fair. The blue check pattern is a protected, registered trademark for Hästens beds.
Hästens today
Markets
Hästens currently operates retail stores around the world in locations such as: Stockholm, London, Dallas, Houston, Paris, Amsterdam, Israel, Zurich, Frankfurt, Barcelona, Bari, Brussels, Malta, New York, México City, Beijing, Shanghai, Hong Kong, Taipei, Seoul and Singapore.
Awards
1952 - Purveyor to the Swedish Royal Court
2006 - Swedish Trade Council Export Award & Best International Growth Company by Ernst & Young
2010 - Wallpaper* Design Award Best Bed
2011 - Palme d'Or de la Literie de Prestige
2013 - Signum Priset Sweden
2015 - Hurun Report: The Best of the Best Awards 2015
Legal cases
In 2000, a Swedish court ruled that Hästens was not allowed to advertise with phrases such as "The finest beds in the world" and "due to [our unique manufacturing process in Köping], we can offer 25 years' warranty on springs and frames". The latter was, among other reasons, because the springs and frames were actually manufactured by a subcontractor. Many companies have tried to copy the check patterns and Hästens has successfully undertaken legal procedures in several countries against infringements and counterfeits. As of 2011, the phrase "At Hästens we set out to make the best beds in the world." is used.
References
External links
1852 introductions
Luxury brands
Swedish brands
Beds
Bed manufacturers
Mattress retailers of Sweden
Multinational companies headquartered in Sweden
Purveyors to the Court of Sweden
Swedish companies established in 1852
Manufacturing companies established in 1852
Shops in New York City
Companies based in Västmanland County
Sleep | Hästens | Biology | 775 |
22,761,653 | https://en.wikipedia.org/wiki/Endornaviridae | Endornaviridae is a family of viruses. Plants, fungi, and oomycetes serve as natural hosts. There are 31 species in this family, assigned to 2 genera (Alphaendornavirus and Betaendornavirus). Members of Alphaendornavirus infect plants, fungi and the oomycete Phytophthora sp., members of Betaendornavirus infect ascomycete fungi.
Taxonomy
The following genera are assigned to the family:
Alphaendornavirus
Betaendornavirus
Structure
Linear, single-stranded, positive-sense RNA genome of about 14 kb to 17.6 kb. A site specific break (nick) is found in the coding strand about 1 to 2 kb from the 5’ terminus. ViralZone conflicts with ICTV, listing Endornaviridae as dsRNA viruses.
As Endornaviridae genomes do not include a coat protein (CP) gene, no true virions are associated with members of this family. For Vicia faba endornavirus, the RNA genome has been associated with pleomorphic cytoplasmic membrane vesicles.
Life cycle
Viral replication is cytoplasmic.
The viral replicative form of the Endornaviridae is dsRNA. Replication follows the double-stranded RNA virus replication model. Double-stranded RNA virus transcription is the method of transcription.
As the replicative dsRNA form is relatively stable, it can be found in comparatively high quantities in host tissues, and therefore is a likely subject of isolations (this is the reason why Endornaviridae often are classified as dsRNA viruses, in contrast to the official ssRNA(+) ICTV classification).
The virus exits the host cell by cell to cell movement.
Plants, fungi, and oomycetes serve as the natural hosts. Transmission routes are pollen associated.
References
External links
ICTV: ICTV Report: Endornaviridae
SIB: Viralzone: Endornaviridae
NCBI: Endornaviridae (family)
Syunichi Urayama, Hiromitsu Moriyama, Nanako Aoki, Yukihiro Nakazawa, Ryo Okada, Eri Kiyota, Daisuke Miki, Ko Shimamoto, Toshiyuki Fukuhara: Knock-down of OsDCL2 in rice negatively affects maintenance of the endogenous dsRNA virus, Oryza sativa endornavirus. In: Plant Cell Physiol. 2010 Jan;51(1):58-67. doi:10.1093/pcp/pcp167. PMID 19933266. Epub 2009 Nov 19.
Double-stranded RNA viruses
Viral plant pathogens and diseases
Virus families
Riboviria | Endornaviridae | Biology | 576 |
3,421,465 | https://en.wikipedia.org/wiki/Ion%20Barbu | Ion Barbu (, pen name of Dan Barbilian; 18 March 1895 – 11 August 1961) was a Romanian mathematician and poet. His name is associated with the Mathematics Subject Classification number 51C05, a major posthumous recognition reserved for pioneers of investigations in an area of mathematical inquiry. As a poet, he is known for his volume Joc secund ("Mirrored Play"), in which he sought to fulfill his vision of a poetry that adhered to the same virtues he found in mathematics.
Early life
Born in Câmpulung-Muscel, Argeș County, he was the son of Constantin Barbilian and Smaranda, born Șoiculescu. He attended elementary school in Câmpulung, Dămienești, and Stâlpeni, and for secondary studies he went to the Ion Brătianu High School in Pitești, the Dinicu Golescu High School in Câmpulung, and finally the Gheorghe Lazăr High School and the Mihai Viteazul High School in Bucharest. During that time, he discovered that he had a talent for mathematics, and started publishing in ; it was also then that he discovered his passion for poetry.
He was a student at the University of Bucharest when World War I caused his studies to be interrupted by military service. After being sent to Botoșani in December 1916, he attended the Reserve Officers' School in Bârlad and was promoted to the rank of corporal in April 1917. Serving under the command of major Barbu Alinescu, he advanced to platoon leader by April 1918, and went into reserve as a sub-lieutenant in 1919. Barbilian completed his undergraduate degree in 1921. The next year he won a doctoral grant to go to the University of Göttingen, where he studied number theory with Edmund Landau for two years. However, he attended few classes, suffered from cocaine and ether addiction, and eventually abandoned his studies at Göttingen. Returning to Bucharest, chronically ill as a result of drug intoxication, he was hospitalized for rehabilitation from August 1924 to January 1925. In 1925 he began to teach mathematics at , along with his German wife, Gerda, who taught German literature. He then studied with Gheorghe Țițeica, completing in 1929 his Ph.D. thesis, Reprezentarea canonică a adunării funcțiilor ipereliptice (Canonical representation of the addition of hyperelliptic functions). The thesis defense committee was presided by David Emmanuel and included Țițeica and Dimitrie Pompeiu. In the spring of 1929 he bought a house at 8, Carol Davila Street, Bucharest, where he would live for the rest of his life. For a while, he taught at the Cantemir Vodă High School. In the summer of 1937, he served as president of the commission administering the Baccalaureate at the Gheorghe Lazăr High School in Sibiu, after which he issued a scathing report to the Ministry of Education.
Achievements in mathematics
Apollonian metric
In 1935, Barbilian published his article describing metrization of a region K, the interior of a simple closed curve J. Let xy denote the Euclidean distance from x to y. Barbilian's function for the distance from a to b in K is

d(a, b) = log [ (max over p in J of pa/pb) / (min over p in J of pa/pb) ],

that is, the logarithm of the ratio between the largest and the smallest value taken by pa/pb as p runs over the boundary curve J.
As Barbilian noted, this construction generates various geometries that are generalizations of the Klein projective model; he highlighted four special cases, including the Poincaré disk model in hyperbolic geometry. At the University of Missouri in 1938 Leonard Blumenthal wrote Distance Geometry. A Study of the Development of Abstract Metrics, where he used the term "Barbilian spaces" for metric spaces based on Barbilian's function to obtain their metric. And in 1954 the American Mathematical Monthly published an article by Paul J. Kelly on Barbilian's method of metrizing a region bounded by a curve. Barbilian claimed he did not have access to Kelly's publication, but he did read Blumenthal's review of it in Mathematical Reviews and he understood Kelly's construction. This motivated him to write in final form a series of four papers, which appeared after 1958, where the metric geometry of the spaces that today bears his name is investigated thoroughly.
He answered in 1959 with an article which described "a very general procedure of metrization through which the positive functions of two points, on certain sets, can be refined to a distance." Besides Blumenthal and Kelly, articles on "Barbilian spaces" have appeared in the 1990s from Patricia Souza, while Wladimir G. Boskoff, Marian G. Ciucă and Bogdan Suceavă wrote in the 2000s about "Barbilian's metrization procedure". Barbilian indicated in his paper Asupra unui principiu de metrizare that he preferred the term "Apollonian metric space", and articles from Alan F. Beardon, Frederick Gehring and Kari Hag, Peter A. Häströ, Zair Ibragimov and others use that term. According to Suceavă, "Barbilian's metrization procedure is important for at least three reasons: (1) It yields a natural generalization of Poincaré and Beltrami–Klein's hyperbolic geometries; (2) It has been studied in the context of the study of Apollonian metric; (3) Provides a large class of examples of Lagrange generalized metrics irreducible to Riemann, Finsler, or Lagrange metrics."
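The behaviour of this metric is easy to examine numerically. The sketch below (an illustrative approximation, not code from any of the cited papers) evaluates Barbilian's distance on the unit disk by sampling boundary points of J; for the centre and a point at distance r from it, it recovers log((1 + r)/(1 − r)), the hyperbolic distance between those points in the Poincaré disk model, consistent with that model arising as a special case:

```python
import math

def barbilian_distance(a, b, samples=100_000):
    """log(max / min) over boundary points p of the ratio |pa| / |pb|, with J the unit circle."""
    ratios = []
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        p = complex(math.cos(theta), math.sin(theta))
        ratios.append(abs(p - a) / abs(p - b))
    return math.log(max(ratios) / min(ratios))

r = 0.5
print(barbilian_distance(0j, complex(r, 0)))  # ~1.0986
print(math.log((1 + r) / (1 - r)))            # 1.0986..., Poincare distance from the origin
```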
Ring geometry
Barbilian made a contribution to the foundations of geometry with his articles in 1940 and 1941 in Jahresbericht der Deutschen Mathematiker-Vereinigung on projective planes with coordinates from a ring. According to Boskoff and Suceavă, this work "inspired research in ring geometries, nowadays associated with his, Hjelmslev's and Klingenberg's names."
A more critical stance was taken in 1995 by Ferdinand D. Veldkamp:
A systematic study of projective planes over large classes of associative rings was initiated by D. Barbilian. His very general approach in [1940 and 41] remained rather unsatisfactory, however, his axioms were partly of a geometric nature, partly algebraic as pertaining to the ring of coordinates, and there were a number of difficulties which Barbilian could not overcome.
Nevertheless, in 1989 John R. Faulkner wrote an article "Barbilian Planes" that clarified terminology and advanced the study. In his introduction, he wrote:
A classical result from projective geometry is that a Desarguesian projective plane is coordinatized by an associative division ring. A Barbilian plane is a geometric structure which extends the notion of a projective plane and thereby allows a coordinate ring which is not necessarily a division ring. There are advantages ...
The terms affine Barbilian plane and Barbilian domain were introduced by Werner Leissner in 1975, in two papers ("Affine Barbilian planes I and II"). Referring to these papers, Dirk Keppens says that Leissner introduced this terminology "as a tribute to Barbilian, who was one of the founders of (projective) ring geometry."
Textbooks
1956: , Bucharest.
1960: , Bucharest.
Academic career
In 1930, Barbilian returned to full-time mathematics and joined the academic staff at the University of Bucharest. In 1942, he was named professor, with some help from fellow mathematician Grigore Moisil.
As a mathematician, Barbilian authored 80 research papers and studies. His last paper, written in collaboration with Nicolae Radu, appeared posthumously, in 1962, and is the last in the cycle of four works where he investigates the Apollonian metric.
Poetry
Barbu made his literary debut in 1918 in Alexandru Macedonski's magazine , and then started contributing to Sburătorul, where Eugen Lovinescu saw him as a "new poet". His first volume of poetry, După melci ("After Snails"), was published in 1921. This was followed by his major work, Joc secund, published in 1930, to critical acclaim. The volume contains some 35 of Barbu’s total published output of around 100 poems.
His poem Ut algebra poesis (As Algebra, So Poetry), written in to his fellow poet Nina Cassian (with whom he had fallen in love), alludes to his regret at having abandoned his studies at Göttingen and an appreciation of two great mathematicians: Emmy Noether, who he had met there, and Carl Friedrich Gauss, who left a lasting legacy at Göttingen.
—translation by Sarah Glaz and JoAnne Growney
According to Loveday Kempthorne and Peter Donelan, Barbu "saw mathematics and poetry as equally capable of holding the answer to understanding and reaching a transcendental ideal." He is known as "one of the greatest Romanian poets of the twentieth century and perhaps the greatest of all" according to Romanian literary critic .
Political creed
Barbu was mostly apolitical, with one exception: around 1940 he became a sympathizer of the fascist Iron Guard movement (hoping to be promoted to full professor if it came to power), dedicating a poem to one of its leaders, Corneliu Zelea Codreanu. In 1940, he also wrote a poem praising Hitler. Suceavă regards these moves as opportunistic devices in a plan for professional advancement and ignores Barbu's own explanation, that he was attempting to deflect attention from the fact that he was hiding in his house his wife's brother, a German citizen who eluded conscription by staying hidden in Romania.
After the Communists came to power in the wake of World War II, his friend Alexandru Rosetti sought to convince Barbu to write poems praising the new regime. Barbu reluctantly wrote in early 1948 one poem that can be interpreted as pro-communist, namely "Bălcescu living", but he never relapsed and kept his dignified demeanor until the end.
Death and legacy
Ion Barbu died of liver failure in Bucharest in 1961. He is buried in the city's Bellu Cemetery.
The Dan Barbilian Theoretical High School in Câmpulung, the Ion Barbu Theoretical High School in Pitești, the Ion Barbu Technological High School in Giurgiu, and a secondary school in Galați are all named after him. There are Ion Barbu streets in Alba Iulia, Hărman, Murfatlar, Sânmartin, Șelimbăr, Tâncăbești, Timișoara, Zalău, and 1 Decembrie, and Dan Barbilian streets in Câmpulung and Giurgiu.
Presence in English language anthologies
Born in Utopia - An anthology of Modern and Contemporary Romanian Poetry - Carmen Firan and Paul Doru Mugur (editors) with Edward Foster - Talisman House Publishers - 2006 -
Testament - Anthology of Romanian Verse - American Edition - monolingual English language edition - Daniel Ioniță (editor and principal translator) with Eva Foster, Daniel Reynaud and Rochelle Bews - Australian-Romanian Academy for Culture - 2017 -
Testament – 400 Years of Romanian Poetry – 400 de ani de poezie românească – bilingual edition – Daniel Ioniță (editor and principal translator) with Daniel Reynaud, Adriana Paul & Eva Foster – Editura Minerva, 2019 –
Romanian Poetry from its Origins to the Present'' – bilingual edition English/Romanian – Daniel Ioniță (editor and principal translator) with Daniel Reynaud, Adriana Paul and Eva Foster – Australian-Romanian Academy Publishing – 2020 – ;
References
1895 births
1961 deaths
People from Câmpulung
Gheorghe Lazăr National College (Bucharest) alumni
Mihai Viteazul National College (Bucharest) alumni
Romanian military personnel of World War I
University of Bucharest alumni
Academic staff of the University of Bucharest
Romanian avant-garde
Romanian male poets
20th-century Romanian mathematicians
20th-century pseudonymous writers
20th-century Romanian inventors
20th-century Romanian poets
20th-century Romanian male writers
Romanian schoolteachers
Romanian textbook writers
Geometers
Pseudonymous mathematicians
Members of the Romanian Academy elected posthumously
Members of the Romanian Academy of Sciences
Deaths from liver failure
Burials at Bellu Cemetery | Ion Barbu | Mathematics | 2,548 |
60,263,786 | https://en.wikipedia.org/wiki/Data%20technology | Data technology (sometimes shortened to DataTech or DT) is technology connected to areas such as martech or adtech. The data technology sector includes solutions for data management and products or services based on data generated by both humans and machines. DataTech is an emerging industry that uses artificial intelligence, big data analysis and machine learning algorithms to improve business activities in various sectors, such as digital marketing or business analysis (e.g. predictive analytics).
Key areas
Data technology has been used to manage big data sets, build solutions for data management and integrate data from various sources to discover new business or analytical insights from collected information.
The growing global volume of generated data (forecast to reach 163 zettabytes in 2025) drives spending on technologies that help control data assets. The big data market is expected to reach $156.72 billion by 2026. Global spending on data, including data technologies, in digital marketing reached $26.0 billion in 2019.
Data technologies are developed to help manage data generated by humans or by machines, which were forecast to number 200 billion by 2020. Data technologies aim to manage growing data streams, extract valuable insights from data and find ways to integrate the most important data sources for companies and organizations. Therefore, the key areas for the DataTech sector are:
Data Management Technologies - technologies and platforms for managing growing sets of data, such as data generated by customers (1st-, 2nd- and 3rd-party data). Common platforms for managing data are the Data Management Platform and the Customer Data Platform.
Data Integration - services that match data from two or more sources to get more information about the stored data. If a company collects user data in a customer-relationship management system, it can enrich it with data from external sources to create a 360-degree customer view (by integrating data, the company will know e.g. the interests, demographics and intentions of the users in its databases).
Data Consulting - services based on analysing customer data and discovering insights from big data sets. These services use machine learning algorithms to find useful information in chaotic data.
Technologies for the AdTech sector - products and services that support the digital marketing environment, including the SSP, the Demand-side platform and services used for targeting the right group in online campaigns.
Building a strategic data ecosystem - services that allow an organization to build a data ecosystem by identifying and choosing the right data sources, integrating the data and preparing adequate analytical algorithms to discover new insights about customers.
Internet of Things - products and services that help store and manage data generated by machines.
Notes
References
Business analysis
Digital marketing
Big data | Data technology | Technology | 518 |
5,061,860 | https://en.wikipedia.org/wiki/Initiation%20factor | In molecular biology, initiation factors are proteins that bind to the small subunit of the ribosome during the initiation of translation, a part of protein biosynthesis.
Initiation factors can interact with repressors to slow down or prevent translation. They can also interact with activators to help start or increase the rate of translation. In bacteria, they are simply called IFs (IF1, IF2 and IF3) and in eukaryotes they are known as eIFs (e.g. eIF1, eIF2, eIF3). Translation initiation is sometimes described as a three-step process which initiation factors help to carry out. First, the tRNA carrying the amino acid methionine binds to the small subunit of the ribosome; this complex then binds to the mRNA, and finally joins with the large subunit of the ribosome. The initiation factors that help with this process each have different roles and structures.
Types
The initiation factors are divided into three major groups by taxonomic domain, and some homologies are shared between the domain-specific factors.
Structure and function
Many structural domains have been conserved through evolution, as prokaryotic initiation factors share similar structures with eukaryotic factors. The prokaryotic initiation factor IF3 assists with start site specificity, as well as mRNA binding. This is in comparison with the eukaryotic initiation factor eIF1, which also performs these functions. The eIF1 structure is similar to the C-terminal domain of IF3, as they each contain a five-stranded beta sheet against two alpha helices.
The prokaryotic initiation factors IF1 and IF2 are also homologs of the eukaryotic initiation factors eIF1A and eIF5B. IF1 and eIF1A, both containing an OB-fold, bind to the A site and assist in the assembly of initiation complexes at the start codon. IF2 and eIF5B assist in the joining of the small and large ribosomal subunits. The eIF5B factor is also related to the elongation factors. Domain IV of eIF5B is closely related to the C-terminal domain of IF2, as they both consist of a beta-barrel. eIF5B also contains a GTP-binding domain, which can switch between an active GTP-bound form and an inactive GDP-bound form. This switch helps to regulate the affinity of the ribosome for the initiation factor.
A eukaryotic initiation factor eIF3 plays an important role in translational initiation. It has a complex structure, composed of 13 subunits. It helps to create the 43S pre-initiation complex, composed of the small 40S subunit attached to other initiation factors. It also helps to create the 48S pre-initiation complex, consisting of the 43S complex with the mRNA. The eIF3 factor can also be used post-translation in order to separate the ribosomal complex and keep the small and large subunits apart. The initiation factor interacts with the eIF1 and eIF5 factors used for scanning and selection of the start codons. This can create changes in the selection of the factors, binding to different codons.
Another important eukaryotic initiation factor, eIF2, binds the tRNA containing methionine to the P site of the small ribosomal subunit. The P site is where the tRNA carrying an amino acid forms a peptide bond with the incoming amino acids and carries the peptide chain. The factor consists of an alpha, beta, and gamma subunit. The eIF2 gamma subunit is characterized by a GTP-binding domain and beta-barrel folds. It binds to the tRNA through GTP. Once the initiation factor helps the tRNA bind, the GTP is hydrolyzed and eIF2 is released. The eIF2 beta subunit is identified by its Zn-finger. The eIF2 alpha subunit is characterized by an OB-fold domain and two beta strands. This subunit helps to regulate translation, as it becomes phosphorylated to inhibit protein synthesis.
The eIF4F complex supports the cap-dependent translation initiation process and is composed of the initiation factors eIF4A, eIF4E, and eIF4G. The cap end of the mRNA, being the 5’ end, is brought to the complex where the 43S ribosomal complex can bind and scan the mRNA for the start codon. During this process, the 60S ribosomal subunit binds and the large 80S ribosomal complex is formed. The eIF4G plays a role, as it interacts with the polyA-binding protein, attracting the mRNA. The eIF4E then binds the cap of the mRNA and the small ribosomal subunit binds to the eIF4G to begin the process of creating the 80S ribosomal complex. The eIF4A works to make this process more successful, as it is a DEAD box helicase. It allows for the unwinding of the untranslated regions of the mRNA to allow for ribosomal binding and scanning.
In cancer
In cancerous cells, initiation factors assist in cellular transformation and development of tumors. The survival and growth of cancer is directly related to the modification of initiation factors and is used as a target for pharmaceuticals. Cells need increased energy when cancerous and derive this energy from proteins. Over-expression of initiation factors correlates with cancers, as they increase protein synthesis for proteins needed in cancers. Some initiation factors, such as eIF4E, are important in synthesizing specific proteins needed for the proliferation and survival of cancer. The careful selection of proteins ensures that proteins that are usually limited in translation and only proteins needed for cancer cell growth will be synthesized. This includes proteins involved in growth, malignancy, and angiogenesis. The eIF4E factor, along with eIF4A and eIF4G, also play a role in transitioning benign cancer cells to metastatic.
The largest initiation factor, eIF3, is another significant initiation factor in human cancers. Due to its role in creating the 43S pre-initiation complex, it helps to bind the ribosomal subunit to the mRNA. The initiation factor has been linked to cancers through over-expression. For example, one of the thirteen eIF3 proteins, eIF3c, interacts with and represses proteins used in tumor suppression. Limited expression of certain eIF3 proteins, such as eIF3a and eIF3d, has been shown to decrease the vigorous growth of cancer cells. The over-expression of eIF3a has been linked to breast, lung, cervix, esophagus, stomach, and colon cancers. It is prevalent during early stages of oncogenesis and likely selectively translates proteins needed for cell proliferation. When eIF3a is suppressed, it has been shown to decrease the malignancy of breast and lung cancer, most likely due to its role in tumor growth.
References
External links
See also
Ribosome
Eukaryotic translation
Eukaryotic initiation factor
Molecular biology
Protein biosynthesis
Gene expression | Initiation factor | Chemistry,Biology | 1,477 |
9,751,861 | https://en.wikipedia.org/wiki/Impromidine | Impromidine (INN) is a highly potent and specific histamine H2 receptor agonist.
It has been used diagnostically as a gastric secretion indicator.
See also
Histamine agonists
References
Imidazoles
Guanidines
Thioethers | Impromidine | Chemistry | 59 |
18,485,734 | https://en.wikipedia.org/wiki/Guanoxabenz | Guanoxabenz is a metabolite of guanabenz.
References
Alpha-2 adrenergic receptor agonists
Chloroarenes
Guanidines | Guanoxabenz | Chemistry | 39 |
1,532,268 | https://en.wikipedia.org/wiki/Epsilon%20Bo%C3%B6tis | Epsilon Boötis (ε Boötis, abbreviated Epsilon Boo, ε Boo), officially named Izar ( ), is a binary star in the northern constellation of Boötes. The star system can be viewed with the unaided eye at night, but resolving the pair with a small telescope is challenging; an aperture of or greater is required.
Nomenclature
ε Boötis (Latinised to Epsilon Boötis) is the star's Bayer designation.
It bore the traditional names Izar, Mirak and Mizar, as well as Pulcherrima, the last coined by Friedrich Georg Wilhelm von Struve. Izar and Mizar are from the Arabic مئزر Mi'zar ('kilt-like undergarment') and ('the loins'); Pulcherrima is Latin for 'loveliest'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Izar for this star on 21 August 2016 and it is now so entered in the IAU Catalog of Star Names.
In the catalogue of stars in the Calendarium of Al Achsasi Al Mouakket, this star was designated ( ), which was translated into Latin as , meaning 'belt of barker'.
In Chinese astronomy, ('Celestial Lance'), refers to an asterism consisting of Epsilon Boötis, Sigma Boötis and Rho Boötis. Consequently, the Chinese name for Epsilon Boötis itself is ('the First Star of Celestial Lance').
Properties
Epsilon Boötis consists of a pair of stars with an angular separation of at a position angle of . The brighter component (A) has an apparent visual magnitude of 2.37, making it readily visible to the naked eye at night. The fainter component (B) is at magnitude 5.12, which by itself would also be visible to the naked eye. Parallax measurements from the Hipparcos astrometry satellite put the system at a distance of about from the Earth. This means the pair has a projected separation of 185 Astronomical Units, and they orbit each other with a period of at least 1,000 years.
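The projected separation quoted above follows from the small-angle relation between distance and angular separation; as a rough worked form (using round illustrative numbers rather than this system's measured values):

s [AU] ≈ d [pc] × θ [arcsec],

so, for example, a pair separated by 3 arcseconds at a distance of about 60 parsecs lies roughly 180 AU apart in projection. The true separation is at least as large as the projected one, since the line joining the two stars is generally inclined to the plane of the sky.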
The brighter member has a stellar classification of K0 II-III, which means it is a fairly late-stage star well into its stellar evolution, having already exhausted its supply of hydrogen fuel at the core. With more than four times the mass of the Sun, it has expanded to about 38 times the Sun's radius and is emitting 650 times the luminosity of the Sun. This energy is being radiated from its outer envelope at an effective temperature of 4,755 K, giving it the orange hue of a K-type star.
The companion star has a classification of A2 V, so it is a main sequence star that is generating energy through the thermonuclear fusion of hydrogen at its core. This star is rotating rapidly, with a projected rotational velocity of . It has a surface temperature of about and a radius nearly three times the Sun, leading to a bolometric luminosity 45 times that of the Sun.
By the time the smaller main sequence star reaches the current point of the primary in its evolution, the larger star will have lost much of its mass in a planetary nebula and will have evolved into a white dwarf. The pair will have essentially changed roles: the brighter star becoming the dim dwarf, while the lesser companion will shine as a giant star.
In culture
In 1973, the Scottish astronomer and science fiction writer Duncan Lunan claimed to have managed to interpret a message caught in the 1920s by two Norwegian physicists that, according to his theory, came from a 13,000-year-old satellite in a polar orbit around the Earth, known as the Black Knight and sent there by the inhabitants of a planet orbiting Epsilon Boötis. The story was even reported in Time magazine. Lunan later withdrew his Epsilon Boötis theory, presenting proofs against it and clarifying why he had been led to formulate it in the first place, but later revoked his withdrawal.
References
External links
Information page for HR 5506 (Izar) on VizieR
Information page for HR 5505 (ε Boötes B) on VizieR
Information page for CCDM J14449+2704 (all component stars) on VizieR
Image of Epsilon Boötis
List of constellations and named stars
Izar star chart with viewing information on in-the-sky.org
ε Boötes B star chart with viewing information on in-the-sky.org
Binary stars
Bootis, 36
129988 9
072105
Bootis, Epsilon
Boötes
K-type bright giants
K-type giants
A-type main-sequence stars
Izar
5505 6
BD+27 2417 | Epsilon Boötis | Astronomy | 965 |
725,995 | https://en.wikipedia.org/wiki/Glovebox | A glovebox (or glove box) is a sealed container that is designed to allow one to manipulate objects where a separate atmosphere is desired. Built into the sides of the glovebox are gloves arranged in such a way that the user can place their hands into the gloves and perform tasks inside the box without breaking containment. Part or all of the box is usually transparent to allow the user to see what is being manipulated. A smaller antechamber compartment is used to transport items into or out of the main chamber without compromising the internal environment. Antechambers are much smaller than the main chambers so they can be exposed to ambient conditions more often and achieve inert conditions quickly.
Two types of gloveboxes exist. The first allows a person to work with hazardous substances, such as radioactive materials or infectious disease agents, and the second allows manipulation of substances that must be contained within a very high purity inert atmosphere, such as argon or nitrogen. It is also possible to use a glovebox for manipulation of items in a vacuum chamber.
Inert atmosphere work
The gas in a glovebox is pumped through a series of treatment devices which remove solvents, water and oxygen from the gas. Copper metal (or some other finely divided metal) is commonly used to remove oxygen; this oxygen-removing column is normally regenerated by passing a hydrogen/nitrogen mixture through it while it is heated, and the water formed is passed out of the box with the excess hydrogen and nitrogen. It is common to use molecular sieves to remove water by adsorbing it in the molecular sieves' pores. Such a box is often used by organometallic chemists to transfer dry solids from one container to another container.
An alternative to using a glovebox for air sensitive work is to employ Schlenk methods using a Schlenk line. One disadvantage of working in a glovebox is that organic solvents will attack the plastic seals. As a result, the box will start to leak and water and oxygen can then enter the box. Another disadvantage of a glovebox is that oxygen and water can diffuse through the plastic gloves. Also, coordinating solvents, such as tetrahydrofuran and dichloromethane, can bind irreversibly to the copper catalyst, reducing its effectiveness. One way to prolong the lifespan of the glovebox and catalyst is to turn off circulation when using solvents, followed by purging when work involving solvents is finished.
Inert atmosphere gloveboxes are typically kept at a higher pressure than the surrounding air, so that any microscopic leaks are mostly leaking inert gas out of the box instead of letting air in.
Hazardous materials work
At the now-deactivated Rocky Flats Plant, which manufactured plutonium triggers, also called "pits", production facilities consisted of linked stainless steel gloveboxes up to 64 feet, or 20 meters, in length, which contained the equipment which forged and machined the trigger parts. The gloves were lead-lined. Other materials used in the gloveboxes included acrylic viewing windows and Benelex shielding composed of wood fiber and plastic which shielded against neutron radiation. Manipulation of the lead-lined gloves was onerous work.
Some gloveboxes for radioactive work are under inert conditions, for instance, one nitrogen-filled box contains an argon-filled box. The argon box is fitted with a gas treatment system to keep the gas very pure to enable electrochemical experiments in molten salts.
Gloveboxes are also used in the biological sciences when dealing with anaerobes or high-biosafety level pathogens.
Gloveboxes used for hazardous materials are generally maintained at a lower pressure than the surrounding atmosphere, so that microscopic leaks result in air intake rather than hazard outflow. Gloveboxes used for hazardous materials generally incorporate HEPA filters into the exhaust, to keep the hazard contained.
Gallery
See also
Desiccators are used for storing chemicals which are moisture-sensitive, but do not react quickly or violently with water.
Fume hoods are used for hazardous material handling where less operator protection and the same atmosphere can be used.
Hot cells often use remote manipulators to provide radiological containment where more operator protection is required.
Sandblasting cabinets are a type of glovebox which shield the user from the high-velocity abrasive particles inside.
Schlenk lines are used for manipulating oxygen- and moisture-sensitive chemicals.
References
External links
American Glovebox Society
Hans-Jürgen Bässler und Frank Lehmann: Containment Technology: Progress in the Pharmaceutical and Food Processing Industry. Springer, Berlin 2013
Laboratory equipment
Gas technologies
Air-free techniques
Radiation protection | Glovebox | Chemistry,Engineering | 946 |
78,129 | https://en.wikipedia.org/wiki/TX-2 | The MIT Lincoln Laboratory TX-2 computer was the successor to the Lincoln TX-0 and was known for its role in advancing both artificial intelligence and human–computer interaction. Wesley A. Clark was the chief architect of the TX-2.
Specifications
The TX-2 was a transistor-based computer using what was at the time the huge amount of 64K 36-bit words of magnetic-core memory. The TX-2 became operational in 1958. Because of its powerful capabilities, Ivan Sutherland's revolutionary Sketchpad program was developed for and ran on the TX-2. One of its key features was the ability to interact with the computer directly through a graphical display.
The compiler was developed by Lawrence Roberts while he was studying at the MIT Lincoln Laboratory.
Relationship with DEC
Digital Equipment Corporation was a spin-off of the TX-0 and TX-2 projects. The TX-2 Tape System was a block addressable 1/2" tape developed for the TX-2 by Tom Stockebrand which evolved into LINCtape and DECtape.
Role in creating the Internet
Dr. Leonard Kleinrock developed the mathematical theory of packet networks which he successfully simulated on the TX-2 computer at Lincoln Lab.
References
External links
TX-2 documentation at bitsavers.org
Interview with UCLA's Dr. Leonard Kleinrock
One-of-a-kind computers
Transistorized computers
36-bit computers | TX-2 | Technology | 282 |
8,336 | https://en.wikipedia.org/wiki/Decision%20problem | In computability theory and computational complexity theory, a decision problem is a computational problem that can be posed as a yes–no question based on the given input values. An example of a decision problem is deciding with the help of an algorithm whether a given natural number is prime. Another example is the problem, "given two numbers x and y, does x evenly divide y?"
A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the decision problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division. If the remainder is zero the answer is 'yes', otherwise it is 'no'. A decision problem which can be solved by an algorithm is called decidable.
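A minimal sketch of such a decision procedure in Python, using the remainder operation to answer the divisibility question (the function and variable names are illustrative):

```python
def divides(x: int, y: int) -> bool:
    """Decision procedure for 'does x evenly divide y?'."""
    if x == 0:
        return y == 0      # edge case: only 0 is evenly divided by 0
    return y % x == 0      # remainder zero means the answer is 'yes'

print(divides(3, 12))  # True  -> 'yes'
print(divides(5, 12))  # False -> 'no'
```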
Decision problems typically appear in mathematical questions of decidability, that is, the question of the existence of an effective method to determine the existence of some object or its membership in a set; some of the most important problems in mathematics are undecidable.
The field of computational complexity categorizes decidable decision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of recursion theory, meanwhile, categorizes undecidable decision problems by Turing degree, which is a measure of the noncomputability inherent in any solution.
Definition
A decision problem is a yes-or-no question on an infinite set of inputs. It is traditional to define the decision problem as the set of possible inputs together with the set of inputs for which the answer is yes.
These inputs can be natural numbers, but can also be values of some other kind, like binary strings or strings over some other alphabet. The subset of strings for which the problem returns "yes" is a formal language, and often decision problems are defined as formal languages.
Using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. Therefore, the algorithm of a decision problem is to compute the characteristic function of a subset of the natural numbers.
Examples
A classic example of a decidable decision problem is the set of prime numbers. It is possible to effectively decide whether a given natural number is prime by testing every possible nontrivial factor. Although much more efficient methods of primality testing are known, the existence of any effective method is enough to establish decidability.
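The trial-factor idea can be written as a short decision procedure (a sketch only; practical primality tests are far more efficient):

```python
def is_prime(n: int) -> bool:
    """Decide membership in the set of primes by testing nontrivial factors."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:      # factors beyond sqrt(n) are redundant
        if n % d == 0:
            return False
        d += 1
    return True

print([n for n in range(2, 30) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```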
Decidability
A decision problem is decidable or effectively solvable if the set of inputs (or natural numbers) for which the answer is yes is a recursive set. A problem is partially decidable, semidecidable, solvable, or provable if the set of inputs (or natural numbers) for which the answer is yes is a recursively enumerable set. Problems that are not decidable are undecidable. For those it is not possible to create an algorithm, efficient or otherwise, that solves them.
The halting problem is an important undecidable decision problem; for more examples, see list of undecidable problems.
Complete problems
Decision problems can be ordered according to many-one reducibility and related to feasible reductions such as polynomial-time reductions. A decision problem P is said to be complete for a set of decision problems S if P is a member of S and every problem in S can be reduced to P. Complete decision problems are used in computational complexity theory to characterize complexity classes of decision problems. For example, the Boolean satisfiability problem is complete for the class NP of decision problems under polynomial-time reducibility.
Function problems
Decision problems are closely related to function problems, which can have answers that are more complex than a simple 'yes' or 'no'. A corresponding function problem is "given two numbers x and y, what is x divided by y?".
A function problem consists of a partial function f; the informal "problem" is to compute the values of f on the inputs for which it is defined.
Every function problem can be turned into a decision problem; the decision problem is just the graph of the associated function. (The graph of a function f is the set of pairs (x,y) such that f(x) = y.) If this decision problem were effectively solvable then the function problem would be as well. This reduction does not respect computational complexity, however. For example, it is possible for the graph of a function to be decidable in polynomial time (in which case running time is computed as a function of the pair (x,y)) when the function is not computable in polynomial time (in which case running time is computed as a function of x alone). The function f(x) = 2^x has this property.
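The asymmetry can be made concrete with a small sketch, assuming the usual binary encoding of natural numbers: deciding whether a pair (x, y) lies on the graph of f(x) = 2^x takes time polynomial in the length of the pair, even though writing down f(x) from x alone takes time exponential in the length of x.

```python
def in_graph(x: int, y: int) -> bool:
    """Decide whether (x, y) belongs to the graph {(x, 2**x)}.

    The check runs in time polynomial in the bit length of (x, y):
    y must be a positive power of two whose binary representation has x + 1 bits.
    """
    return y > 0 and y & (y - 1) == 0 and y.bit_length() == x + 1

print(in_graph(5, 32))   # True
print(in_graph(5, 33))   # False
```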
Every decision problem can be converted into the function problem of computing the characteristic function of the set associated to the decision problem. If this function is computable then the associated decision problem is decidable. However, this reduction is more liberal than the standard reduction used in computational complexity (sometimes called polynomial-time many-one reduction); for example, the complexity of the characteristic functions of an NP-complete problem and its co-NP-complete complement is exactly the same even though the underlying decision problems may not be considered equivalent in some typical models of computation.
Optimization problems
Unlike decision problems, for which there is only one correct answer for each input, optimization problems are concerned with finding the best answer to a particular input. Optimization problems arise naturally in many applications, such as the traveling salesman problem and many questions in linear programming.
Function and optimization problems are often transformed into decision problems by considering the question of whether the output is equal to or less than or equal to a given value. This allows the complexity of the corresponding decision problem to be studied; and in many cases the original function or optimization problem can be solved by solving its corresponding decision problem. For example, in the traveling salesman problem, the optimization problem is to produce a tour with minimal weight. The associated decision problem is: for each N, to decide whether the graph has any tour with weight less than N. By repeatedly answering the decision problem, it is possible to find the minimal weight of a tour.
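As a sketch of that reduction, suppose we are given a decision oracle has_tour_below(N) answering "is there a tour of weight less than N?" (a hypothetical function, not part of any standard library). With nonnegative integer edge weights, binary search over N recovers the minimal tour weight using only logarithmically many oracle queries:

```python
def minimal_tour_weight(has_tour_below, upper_bound: int) -> int:
    """Recover the optimum of the optimization problem from the decision problem.

    has_tour_below(N) is a hypothetical oracle answering "is there a tour of weight < N?".
    upper_bound must satisfy has_tour_below(upper_bound) == True, e.g. one more than
    the total weight of all edges.  Weights are assumed to be nonnegative integers.
    """
    lo, hi = 0, upper_bound          # invariant: has_tour_below(lo) is False, has_tour_below(hi) is True
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if has_tour_below(mid):
            hi = mid
        else:
            lo = mid
    return hi - 1                    # hi is the least N admitting a tour of weight < N

# Toy usage with a made-up optimal weight of 42:
print(minimal_tour_weight(lambda n: n > 42, 1000))   # 42
```

Each query is itself a decision-problem instance, so the optimization problem is no harder than polynomially many calls to the decision problem.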
Because the theory of decision problems is very well developed, research in complexity theory has typically focused on decision problems. Optimization problems themselves are still of interest in computability theory, as well as in fields such as operations research.
See also
ALL (complexity)
Computational problem
Decidability (logic) – for the problem of deciding whether a formula is a consequence of a logical theory.
Search problem
Counting problem (complexity)
Word problem (mathematics)
References
Computational problems
Computability theory | Decision problem | Mathematics | 1,421 |
3,651,405 | https://en.wikipedia.org/wiki/Michael%20Lynch%20%28geneticist%29 | Michael Lynch (born December 6, 1951) is an American geneticist who is the Director of the Biodesign Institute for Mechanisms of Evolution at Arizona State University, Tempe, Arizona.
Biography
Lynch held a Distinguished Professorship of Evolution, Population Genetics and Genomics at Indiana University, Bloomington, Indiana. Besides over 250 papers, especially in population genetics, he has written a two-volume textbook with Bruce Walsh. Alongside this textbook he has also published two other books. He promotes neutral theories to explain genomic architecture based on the effects of population sizes in different lineages; he presented this point of view in his 2007 book "The Origins of Genome Architecture". In 2009, he was elected to the National Academy of Sciences (Evolutionary Biology). Lynch was a Biology undergraduate at St. Bonaventure University and received a B.S. in Biology in 1973. He obtained his PhD from the University of Minnesota (Ecology and Behavioral Biology) in 1977.
Research
Evolution of genome architecture
Population genetics principles, phylogenetic analyses, rate calculations, and allele frequency spectra of derived SNPs are employed to understand the evolutionary mechanisms behind eukaryotic genome complexity. Hypotheses investigated include the idea that eukaryotic genome complexity evolved as a passive response to reduced population size, the fate of deleterious newly arisen introns in species of Daphnia, the genomic response to alterations in population size and mutation rate in E. coli, and the evolutionary fates of duplicate genes in species of Paramecium, studied using complete genome sequencing.
Role of mutation in evolution
Most mutations are mildly deleterious and can eventually lead to decreased evolutionary fitness in a species. Across the Tree of Life, Lynch investigates the significant variation in mutation rate among diverse invertebrates and simple eukaryotic and prokaryotic organisms using a mutation-accumulation strategy. To address this mutational diversity and the load that mutation places on survival in some species, a novel method involving a mutation-accumulation strategy followed by whole-genome sequencing allows estimation of error rates in transcription and of variation among eukaryotic lineages. The work done to estimate this variation translates into population-genetic theories of mutation rates and of how somatic mutations relate to the evolution of multicellularity. These approaches support the drift-barrier hypothesis.
Role of recombination in evolution
A major drawback of sexual recombination is the separation of complexes of alleles that have adapted together. Study of Daphnia pulex, a microcrustacean that can reproduce sexually or asexually depending on which is advantageous at a particular evolutionary time point, allows for direct quantification and comparison of recombination rates in mobile genetic elements in sexual and asexual lineages. The asexual lineages of this Daphnia species are evolutionarily young and rapidly go extinct. It is hypothesized that this rapid extinction is driven by the loss of heterozygosity that results from asexual reproduction, as well as by gene conversion exposing pre-existing deleterious mutations. A new reference genome assembly of this species has recently been generated, and attention to the role of recombination in Daphnia has been of hallmark importance to Lynch's research in recent years.
Evolutionary cell biology
Currently, no formal field of evolutionary cell biology exists. The link between the evolution of phenotypes and molecular evolution is found at the level of cellular architecture. Recent work spearheaded by Michael Lynch and his lab seeks to link traditional evolutionary theory with molecular and cellular biology, alongside observations from comparative cell biology. Using Paramecium as a model species, the lab studies the evolutionary basis of cellular surveillance mechanisms, the barriers that random genetic drift places on molecular perfection, multimeric proteins, vesicle transport and gene expression.
Honors and awards
2013: President of the Genetics Society of America
2022: Thomas Hunt Morgan Medal
See also
Drift-barrier hypothesis
Dysgenics
References
1951 births
Living people
University of Minnesota College of Biological Sciences alumni
American evolutionary biologists
Population geneticists
Indiana University faculty
Members of the United States National Academy of Sciences
Neutral theory | Michael Lynch (geneticist) | Biology | 837 |
6,643,829 | https://en.wikipedia.org/wiki/Animal%20migration%20tracking | Animal migration tracking is used in wildlife biology, conservation biology, ecology, and wildlife management to study animals' behavior in the wild. One of the first techniques was bird banding, placing passive ID tags on birds legs, to identify the bird in a future catch-and-release. Radio tracking involves attaching a small radio transmitter to the animal and following the signal with a RDF receiver. Sophisticated modern techniques use satellites to track tagged animals, and GPS tags which keep a log of the animal's location. With the Emergence of IoT the ability to make devices specific to the species or what is to be tracked is possible. One of the many goals of animal migration research has been to determine where the animals are going; however, researchers also want to know why they are going "there". Researchers not only look at the animals' migration but also what is between the migration endpoints to determine if a species is moving to new locations based on food density, a change in water temperature, or other stimulus, and the animal's ability to adapt to these changes. Migration tracking is a vital tool in efforts to control the impact of human civilization on populations of wild animals, and prevent or mitigate the ongoing extinction of endangered species.
Technologies
In the fall of 1803, the American naturalist John James Audubon wondered whether migrating birds returned to the same place each year. So he tied a string around the leg of a bird before it flew south. The following spring, Audubon saw that the bird had indeed come back.
Scientists today still attach tags, such as metal bands, to track movement of animals. Metal bands require the re-capture of animals for the scientists to gather data; the data is thus limited to the animal's release and destination points.
Recent technologies have helped solve this problem. Some electronic tags give off repeating signals that are picked up by radio devices or satellites while other electronic tags could include archival tags (or data loggers). Scientists can track the locations and movement of the tagged animals without recapturing them using this RFID technology or satellites. These electronic tags can provide a great deal of data. Modern technologies are also smaller, minimising the negative impact of the tag on the animal.
Recent advancements in tracking technology have improved the ability to monitor animal migration without the need for recapturing. Wildlife Drones, an Australian company, developed a drone-based radio telemetry system to track small, mobile species like the Swift Parrot, one of Australia’s most endangered birds. Traditional tracking methods for such species involved very high frequency (VHF) radio-tags and manual tracking with handheld receivers, which were labour-intensive and limited in range. The system allows researchers to collect data remotely from multiple tagged animals over large distances, increasing the efficiency and effectiveness of wildlife monitoring.
Radio tracking
Tracking an animal by radio telemetry involves two devices. Telemetry, in general, involves the use of a transmitter that is attached to an animal and sends out a signal in the form of radio waves, just as a radio station does. A scientist might place the transmitter around an animal's ankle, neck, wing, carapace, or dorsal fin. Alternatively, they may surgically implant it as internal radio transmitters have the advantage of remaining intact and functioning longer than traditional attachments, being protected from environmental variables and wear. The transmitter typically uses a frequency in the VHF band as antennas in this band are conveniently small. To conserve battery power the transmitter usually transmits brief pulses, perhaps one per second. A specialized radio receiver called a radio direction finding (RDF) receiver picks up the signal. The receiver is usually in a truck, an ATV, or an airplane. The receiver has a directional antenna (usually a simple Yagi antenna) which receives most strongly from a single direction, and some means of indicating the strength of the received signal, either by a meter or by the loudness of the pulses in earphones. The antenna is rotated until the received radio signal is strongest; then the antenna is pointing toward the animal. To keep track of the signal, the scientist follows the animal using the receiver. This approach of using radio tracking can be used to track the animal manually but is also used when animals are equipped with other payloads. The receiver is used to home in on the animal to get the payload back.
Another form of radio tracking that can be utilized, especially in the case of small bird migration, is the use of geolocators or "geologgers". This technology utilizes a light sensor that records light-level data at regular intervals in order to determine a location based on the length of the day and the time of solar noon. While there are benefits and challenges to using this method of tracking, it is one of the only practical means of tracking small birds over long distances during migration.
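The underlying astronomy can be sketched in a few lines of Python; the version below ignores the equation of time, atmospheric refraction and sensor shading, all of which real geolocator processing must model, and the function and variable names are illustrative rather than any standard API:

```python
import math

def rough_geolocation(day_length_h: float, solar_noon_utc_h: float,
                      sun_declination_deg: float):
    """Crude light-level geolocation: longitude from the timing of solar noon,
    latitude from day length.  Returns (latitude, longitude) in degrees,
    with east longitudes positive."""
    # The Sun crosses the local meridian one hour later for every 15 degrees west,
    # so the offset of solar noon from 12:00 UTC gives the longitude.
    longitude = (12.0 - solar_noon_utc_h) * 15.0

    # Half the day corresponds to the sunrise/sunset hour angle H0, which satisfies
    # cos(H0) = -tan(latitude) * tan(declination).  This breaks down near the
    # equinoxes (declination ~ 0), a well-known blind spot of light-level geolocation.
    H0 = math.radians(day_length_h * 15.0 / 2.0)
    dec = math.radians(sun_declination_deg)
    latitude = math.degrees(math.atan(-math.cos(H0) / math.tan(dec)))
    return latitude, longitude

# A 10-hour day with solar noon at 16:40 UTC and solar declination -20 degrees
# (northern-hemisphere winter) places the tag near 35 N, 70 W.
print(rough_geolocation(10.0, 16.0 + 40.0 / 60.0, -20.0))
```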
Passive integrated transponders (PIT) are another method of telemetry used to track the movements of a species. Passive integrated transponders, or "PIT tags", are electronic tags that allow researchers to collect data from a specimen without the need to recapture and handle the animal. Data are captured and monitored via electronic interrogation antennae, which record the time and location of the individual. PIT tags are a humane method of tracking that has little risk of infection or mortality due to the limited contact necessary to monitor the specimens. They are also cost-efficient in that they can be used repeatedly should the need arise to remove the tag from the animal.
The Motus wildlife tracking network is a program by Birds Canada. It was launched in 2014 in the US and Canada; by 2022, more than 40,000 transmitters had been placed on various animals, mostly birds, and 1,500 receiver stations had been installed in 34 countries, with most receivers concentrated in the United States and Canada.
Satellite tracking
Receivers can be placed in Earth-orbiting satellites such as ARGOS. Networks, or groups, of satellites are used to track animals. Each satellite in a network picks up electronic signals from a transmitter on an animal. Together, the signals from all satellites determine the precise location of the animal. The satellites also track the animal's path as it moves. Satellite tracking is especially useful because the scientists do not have to follow after the animal nor do they have to recover the tag to get the data on where the animal is going or has gone. Satellite networks have tracked the migration and territorial movements of caribou, sea turtles, whales, great white sharks, seals, elephants, bald eagles, ospreys and vultures. Additionally Pop-up satellite archival tags are used on marine mammals and various species of fish. There are two main systems, the above-mentioned Argos and the GPS.
Thanks to these systems, conservationists can find the key sites for migratory species. Another form of satellite tracking would be the use of acoustic telemetry. This involves the use of electronic tags that emit sound in order for the researchers to track and monitor an animal within three dimensions, which is helpful in instances when large quantities of a species are being tracked at a time.
IoT tracking
IoT, or the Internet of Things, is a potential resource for the future of wildlife tracking and research. The technology ranges from low-power wide-area (LPWA) sensor networks attached to wildlife with a safe adhesive, to internet-connected cameras that use machine learning to determine which images are interesting and to categorize the photos. With LPWA, the applications are wide-ranging; all that needs to be done is to develop sensors that can attach to a given animal, and the sensors' low power consumption means that changing their batteries becomes less of a problem. The program Where's The Bear is wildlife-monitoring software by the Computer Science Department at the University of California, Santa Barbara. It uses cameras as sensors and machine learning to filter out empty pictures triggered by wind and rain, reporting instead those showing different species of animals. To make the training of the algorithm rapid, the developers used edited photos with animals inserted into the view of each sensor. This training made the technology more accurate, with fewer false positives and false negatives, and increased its ability to categorize photos of animals, demonstrating a potential new technology for commercial and public use.
Stable isotopes
Stable isotopes are one of the intrinsic markers used for studying the migration of animals. One of the benefits of intrinsic markers in general, including stable isotope analysis, is that they do not require an organism to be captured and tagged and then recaptured at a later time. Each capture of an organism provides information on where it has been, based on diet. The three types of intrinsic markers that can be used as tools for animal migration studies are: (1) contaminants, parasites and pathogens, (2) trace elements, and (3) stable isotopes. Certain geographic regions have specific stable isotope ratios that affect the chemistry of organisms foraging in those locations; this creates "isoscapes" that scientists can use to understand where the organism has been eating. A couple of prerequisites must be met in order to use stable isotope analysis successfully: (1) the animal must have at least one light isotope of interest in specific tissues that can be sampled (this condition is almost always met since these light isotopes are building blocks of most animal tissues), and (2) the organism needs to migrate between isotopically different regions and these isotopes must be retained in the tissue in order for the differences to be measured.
Stable isotope analysis has a lot of benefits and has been used in terrestrial and aquatic organisms. For example, stable isotope analysis has been confirmed to work in determining foraging locations of nesting loggerhead sea turtles. Satellite telemetry was used to confirm that the location derived from the analysis were accurate to where these turtles actually traveled. This is important because it allows for greater sample sizes to be used in migration studies, since satellite telemetry is expensive and tissue, blood, and egg samples can be taken from the female turtles laying eggs.
Importance
Electronic tags are giving scientists a complete, accurate picture of migration patterns. For example, when scientists used radio transmitters to track one herd of caribou, they learned two important things. First, they learned that the herd moves much more than previously thought. Second, they learned that each year the herd returns to about the same place to give birth. This information would have been difficult or impossible to obtain with "low tech" tags.
Tracking migrations is an important tool to better understand and protect species. For example, Florida manatees are an endangered species, and therefore they need protection. Radio tracking showed that Florida manatees may travel as far as Rhode Island when they migrate. This information suggests that the manatees may need protection along much of the Atlantic Coast of the United States. Previously, protection efforts focused mainly in the Florida area.
In the wake of the BP oil spill, efforts to track animals in the Gulf have increased. Most researchers who use electronic tags have only a few options: pop-up satellite tags, archival tags, or satellite tags. Historically these tags were generally expensive and could cost several thousand dollars per tag. However, advances in technology have brought prices down, allowing researchers to tag more animals. With this increase in the number of species and individuals that can be tagged, it is important to record and acknowledge the potential negative effects these devices might have.
See also
Acoustic tag
Bird ringing
Data storage tag
History of wildlife tracking technology
Light level geolocator
GIS and aquatic science
Pop-up satellite archival tag
Tracking collar
Coded wire tag
Motus (wildlife tracking network)
References
External links
"Satellite Tracking." Space Today.
Tomkiewicz Jr, Stanley. "Tracking animal Wild life." telonics.
Zanoni, Mary. "Animal ID." Klamath Basin.
"John James Audubon." Audubon. National Audubon Society, Inc.
"Satellite Tracking Migratory Birds." Western Ecological Research Center.
"Satellite Tracking Threatened Manatees." Space Today.
"Tracking Manatee Movement." Save The Manatee Club.
"Manatee Migration Updates." Journey North. Learner.
Radio interview Robert and Kirk Miner remember their grandfather, Jack Miner, and talk about the Jack Miner Migratory Bird Sanctuary. Originally aired February 1, 2008.
Zoology
Animal migration
Geopositioning | Animal migration tracking | Biology | 2,506 |
31,847,498 | https://en.wikipedia.org/wiki/Lindstr%C3%B6m%E2%80%93Gessel%E2%80%93Viennot%20lemma | In mathematics, the Lindström–Gessel–Viennot lemma provides a way to count the number of tuples of non-intersecting lattice paths, or, more generally, paths on a directed graph. It was proved by Gessel–Viennot in 1985, based on previous work of Lindström published in 1973. The lemma is named after Bernt Lindström, Ira Gessel and Gérard Viennot.
Statement
Let G be a locally finite directed acyclic graph. This means that each vertex has finite degree, and that G contains no directed cycles. Consider base vertices a1, ..., an and destination vertices b1, ..., bn, and also assign a weight ωe to each directed edge e. These edge weights are assumed to belong to some commutative ring. For each directed path P between two vertices, let ω(P) be the product of the weights of the edges of the path. For any two vertices a and b, write e(a,b) for the sum of ω(P) over all paths P from a to b. This is well-defined if between any two points there are only finitely many paths; but even in the general case, this can be well-defined under some circumstances (such as all edge weights being pairwise distinct formal indeterminates, and e(a,b) being regarded as a formal power series). If one assigns the weight 1 to each edge, then e(a,b) counts the number of paths from a to b.
With this setup, write
M = (e(ai, bj))i,j = 1,...,n
for the n × n matrix whose (i, j) entry is e(ai, bj).
An n-tuple of non-intersecting paths from A to B means an n-tuple (P1, ..., Pn) of paths in G with the following properties:
There exists a permutation σ of {1, ..., n} such that, for every i, the path Pi is a path from ai to bσ(i).
Whenever i ≠ j, the paths Pi and Pj have no two vertices in common (not even endpoints).
Given such an n-tuple (P1, ..., Pn), we denote by σ(P) the permutation σ of {1, ..., n} from the first condition.
The Lindström–Gessel–Viennot lemma then states that the determinant of M is the signed sum over all n-tuples P = (P1, ..., Pn) of non-intersecting paths from A to B:
det(M) = Σ_P sign(σ(P)) ω(P1) ω(P2) ⋯ ω(Pn),
where the sum runs over all n-tuples P of non-intersecting paths from A to B.
That is, the determinant of M counts the weights of all n-tuples of non-intersecting paths starting at A and ending at B, each affected with the sign of the corresponding permutation σ(P), given by taking ai to bσ(P)(i).
In particular, if the only permutation possible is the identity (i.e., every n-tuple of non-intersecting paths from A to B takes ai to bi for each i) and we take the weights to be 1, then det(M) is exactly the number of non-intersecting n-tuples of paths starting at A and ending at B.
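As an illustration (not part of the original statement), the identity can be checked by brute force on a small grid: with monotone lattice paths (unit steps east or north) and unit edge weights, e(a, b) is a binomial coefficient, and choosing starting and ending points ordered from northwest to southeast forces the identity permutation, so the determinant counts the non-intersecting path triples directly. The particular points below are arbitrary illustrative choices.

```python
from itertools import permutations, product
from math import comb

def num_paths(a, b):
    """e(a, b) for monotone lattice paths when every edge has weight 1."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return comb(dx + dy, dx) if dx >= 0 and dy >= 0 else 0

def paths(a, b):
    """All monotone lattice paths from a to b, each given by its sequence of vertices."""
    if a == b:
        return [(a,)]
    result = []
    for step in ((1, 0), (0, 1)):
        nxt = (a[0] + step[0], a[1] + step[1])
        if nxt[0] <= b[0] and nxt[1] <= b[1]:
            result.extend((a,) + p for p in paths(nxt, b))
    return result

def det(m):
    """Exact determinant by permutation expansion (fine for small matrices)."""
    n = len(m)
    total = 0
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        term = 1
        for i in range(n):
            term *= m[i][p[i]]
        total += sign * term
    return total

A = [(0, 2), (1, 1), (2, 0)]          # starts, ordered from northwest to southeast
B = [(3, 5), (4, 4), (5, 3)]          # ends, ordered the same way
M = [[num_paths(a, b) for b in B] for a in A]

# Brute force: count triples of pairwise vertex-disjoint paths with a_i -> b_i.
count = 0
for triple in product(*(paths(A[i], B[i]) for i in range(3))):
    vs = [set(p) for p in triple]
    if all(vs[i].isdisjoint(vs[j]) for i in range(3) for j in range(i + 1, 3)):
        count += 1

print(det(M), count)   # the two numbers agree
```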
Proof
To prove the Lindström–Gessel–Viennot lemma, we first introduce some notation.
An n-path from an n-tuple (a1, ..., an) of vertices of G to an n-tuple (b1, ..., bn) of vertices of G will mean an n-tuple (P1, ..., Pn) of paths in G, with each Pi leading from ai to bi. This n-path will be called non-intersecting just in case the paths Pi and Pj have no two vertices in common (including endpoints) whenever i ≠ j. Otherwise, it will be called entangled.
Given an n-path P = (P1, ..., Pn), the weight ω(P) of this n-path is defined as the product ω(P1) ω(P2) ⋯ ω(Pn).
A twisted n-path from an n-tuple (a1, ..., an) of vertices of G to an n-tuple (b1, ..., bn) of vertices of G will mean an n-path from (a1, ..., an) to (bσ(1), ..., bσ(n)) for some permutation σ in the symmetric group Sn. This permutation σ will be called the twist of this twisted n-path, and denoted by σ(P) (where P is the n-path). This, of course, generalises the notation σ(P) introduced before.
Recalling the definition of M, we can expand det M as a signed sum of permutations; thus we obtain
det M = Σ_{σ in Sn} sign(σ) e(a1, bσ(1)) ⋯ e(an, bσ(n)) = Σ_P sign(σ(P)) ω(P),
where the second sum runs over all twisted n-paths P from A to B.
It remains to show that the sum of sign(σ(P)) ω(P) over all entangled twisted n-paths vanishes. Let E denote the set of entangled twisted n-paths. To establish this, we shall construct an involution f : E → E with the properties ω(f(P)) = ω(P) and sign(σ(f(P))) = −sign(σ(P)) for all P in E. Given such an involution, the rest-term
Σ_{P in E} sign(σ(P)) ω(P)
in the above sum reduces to 0, since its addends cancel each other out (namely, the addend corresponding to each P in E cancels the addend corresponding to f(P)).
Construction of the involution: The idea behind the definition of the involution f is to choose two intersecting paths within an entangled n-path, and to switch their tails after their point of intersection. There are in general several pairs of intersecting paths, which can also intersect several times; hence, a careful choice needs to be made. Let P = (P1, ..., Pn) be any entangled twisted n-path. Then f(P) is defined as follows. We call a vertex crowded if it belongs to at least two of the paths P1, ..., Pn. The fact that the graph is acyclic implies that this is equivalent to "appearing at least twice in all the paths". Since P is entangled, there is at least one crowded vertex. We pick the smallest i such that Pi contains a crowded vertex. Then, we pick the first crowded vertex v on Pi ("first" in the sense of "encountered first when travelling along Pi"), and we pick the largest j such that v belongs to Pj. The crowdedness of v implies j > i. Write the two paths Pi and Pj as concatenations
Pi = Hi Ti and Pj = Hj Tj,
where Hi is the initial portion of Pi from its starting point up to (and including) v, Ti is the remaining tail of Pi from v to its endpoint, and Hj and Tj are defined likewise for Pj. Now define the twisted n-path f(P) to coincide with P except for components Pi and Pj, which are replaced by
Qi = Hi Tj and Qj = Hj Ti.
It is immediately clear that f(P) is an entangled twisted n-path. Going through the steps of the construction, it is easy to see that the same index i, the same crowded vertex v and the same index j are selected for f(P), and furthermore that the heads Hi and Hj are unchanged, so that applying f again to f(P) involves swapping back the tails Ti and Tj and leaving the other components intact. Hence f(f(P)) = P. Thus f is an involution. It remains to demonstrate the desired antisymmetry properties:
From the construction one can see that σ(f(P)) coincides with σ(P) except that it swaps the values σ(P)(i) and σ(P)(j), thus yielding sign(σ(f(P))) = −sign(σ(P)). To show that ω(f(P)) = ω(P), we first compute, appealing to the tail-swap,
ω(Qi) ω(Qj) = ω(Hi) ω(Tj) ω(Hj) ω(Ti) = ω(Pi) ω(Pj).
Hence ω(f(P)) = ω(P), since the other components are untouched.
Thus we have found an involution with the desired properties and completed the proof of the Lindström–Gessel–Viennot lemma.
Remark. Arguments similar to the one above appear in several sources, with variations regarding the choice of which tails to switch. A version with j smallest (unequal to i) rather than largest appears in the Gessel-Viennot 1989 reference (proof of Theorem 1).
Applications
Schur polynomials
The Lindström–Gessel–Viennot lemma can be used to prove the equivalence of the following two different definitions of Schur polynomials. Given a partition λ = (λ1 ≥ λ2 ≥ ... ≥ λr) of n, the Schur polynomial can be defined as:
sλ = Σ_T w(T),
where the sum is over all semistandard Young tableaux T of shape λ, and the weight w(T) of a tableau T is defined as the monomial obtained by taking the product of the xi indexed by the entries i of T.
The second definition expresses the Schur polynomial as a determinant of complete homogeneous symmetric polynomials (the Jacobi–Trudi formula):
sλ = det( h{λi − i + j} )_{1 ≤ i, j ≤ r},
where hi are the complete homogeneous symmetric polynomials (with hi understood to be 0 if i is negative). For instance, for the partition (3,2,2,1), the corresponding determinant is
s(3,2,2,1) = det
| h3  h4  h5  h6 |
| h1  h2  h3  h4 |
| 1   h1  h2  h3 |
| 0   0   1   h1 |.
To prove the equivalence, given any partition λ as above, one considers the r starting points a1, ..., ar and the r ending points b1, ..., br, as points in the lattice Z^2, which acquires the structure of a directed graph by asserting that the only allowed directions are going one to the right or one up; the weight associated to any horizontal edge at height i is xi, and the weight associated to a vertical edge is 1. With this definition, r-tuples of non-intersecting paths from A to B are exactly semistandard Young tableaux of shape λ, and the weight of such an r-tuple is the corresponding summand in the first definition of the Schur polynomials.
On the other hand, the matrix M is exactly the matrix appearing in the determinant above. This shows the required equivalence. (See also §4.5 in Sagan's book, or the First Proof of Theorem 7.16.1 in Stanley's EC2, or §3.3 in Fulmek's arXiv preprint, or §9.13 in Martin's lecture notes, for slight variations on this argument.)
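As a small computational sanity check of this equivalence (an illustrative sketch, not part of the proof), one can enumerate the semistandard Young tableaux directly and compare the result with the Jacobi–Trudi determinant using sympy; the partition (2,1) and the choice of three variables below are arbitrary:

```python
from itertools import combinations_with_replacement, product
import sympy as sp

x = sp.symbols('x1:4')                 # three variables x1, x2, x3
n = len(x)

def h(k):
    """Complete homogeneous symmetric polynomial h_k (h_0 = 1, h_k = 0 for k < 0)."""
    if k < 0:
        return sp.Integer(0)
    if k == 0:
        return sp.Integer(1)
    monomials = combinations_with_replacement(range(n), k)
    return sp.expand(sum(sp.prod([x[i] for i in c]) for c in monomials))

lam = (2, 1)                           # the partition lambda = (2, 1)
r = len(lam)

# Jacobi-Trudi determinant: det( h_{lambda_i - i + j} ) for 1 <= i, j <= r.
jt = sp.Matrix(r, r, lambda i, j: h(lam[i] - i + j))
schur_det = sp.expand(jt.det())

# Direct definition: sum of weights over semistandard Young tableaux of shape lambda.
cells = [(i, j) for i in range(r) for j in range(lam[i])]
schur_ssyt = sp.Integer(0)
for filling in product(range(1, n + 1), repeat=len(cells)):
    T = dict(zip(cells, filling))
    rows_ok = all(T[i, j] <= T[i, j + 1] for (i, j) in cells if (i, j + 1) in T)
    cols_ok = all(T[i, j] < T[i + 1, j] for (i, j) in cells if (i + 1, j) in T)
    if rows_ok and cols_ok:
        schur_ssyt += sp.prod([x[v - 1] for v in filling])

print(sp.expand(schur_ssyt - schur_det))   # prints 0: the two definitions agree
```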
The Cauchy–Binet formula
One can also use the Lindström–Gessel–Viennot lemma to prove the Cauchy–Binet formula, and in particular the multiplicativity of the determinant.
Generalizations
Talaska's formula
The acyclicity of G is an essential assumption in the Lindström–Gessel–Viennot lemma; it guarantees (in reasonable situations) that the sums e(a,b) are well-defined, and it enters into the proof (if G is not acyclic, then f might transform a self-intersection of a path into an intersection of two distinct paths, which breaks the argument that f is an involution). Nevertheless, Kelli Talaska's 2012 paper establishes a formula generalizing the lemma to arbitrary digraphs. The sums e(a,b) are replaced by formal power series, and the sum over nonintersecting path tuples now becomes a sum over collections of nonintersecting and non-self-intersecting paths and cycles, divided by a sum over collections of nonintersecting cycles. The reader is referred to Talaska's paper for details.
See also
Matrix tree theorem
References
Lemmas
Combinatorics
Theorems in combinatorics | Lindström–Gessel–Viennot lemma | Mathematics | 2,040 |
356,782 | https://en.wikipedia.org/wiki/Nominal%20watt | Nominal wattage is used to simplify the measurement of the efficiency of a loudspeaker.
The impedance of a loudspeaker varies with frequency. This means that if different sine wave tones are fed into the loudspeaker at the same voltage (or the same current), the amount of electric power consumed will vary.
By convention, loudspeakers are designed to generate the same sound pressure level (SPL) at the listener for the same voltage at varying frequencies, regardless of the variation in electric power. This permits a loudspeaker to be used with an amplifier having a low internal impedance, so that a flat frequency response is realized for the combined amplifier/loudspeaker system.
However, an amplifier with a low internal impedance delivers more electrical output power when the load impedance reduces (until the impedances become approximately matched). Such high power levels could cause damage to either the amplifier or the amplifier's power supply, or the circuit connected to the amplifier's output (including the loudspeaker).
Therefore, an additional convention exists whereby loudspeaker manufacturers specify a conservative estimate of the average impedance that the loudspeaker will present while playing typical music. This is called the nominal impedance. Amplifiers can therefore be safely specified to operate into a load that has this nominal impedance (or higher, but not lower).
Typical nominal impedances for speakers include 4, 6, 8 and 16Ω (ohms), with 4Ω being most common in in-car loudspeakers, and 8Ω being most common elsewhere. A loudspeaker with an 8Ω nominal impedance may exhibit actual impedances ranging from approximately 5 to 100Ω depending on frequency.
In this context, the nominal wattage is the theoretical electric power that would be transferred from amplifier to speaker if the loudspeaker was actually exhibiting its nominal impedance. The actual electric power may vary from about twice the nominal power down to less than one tenth.
Loudspeaker efficiency is measured with respect to nominal power in order to emulate the situation outlined above, where a low-internal-impedance amplifier is used with a loudspeaker. The convention is to supply one nominal watt during testing, which corresponds to a voltage of √(1 W × nominal impedance). If the nominal impedance is 4 Ω, the test voltage is 2 volts; if the nominal impedance is 8 Ω, the test voltage is 2.83 volts.
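The relationship can be illustrated with a short Python sketch (not taken from any standard; the function names and example impedance values are chosen purely for illustration). It computes the test voltage corresponding to one nominal watt, and the electric power actually drawn at that fixed voltage when the true impedance differs from the nominal value:

import math

def nominal_watt_voltage(nominal_impedance_ohms, nominal_power_w=1.0):
    # Voltage a constant-voltage amplifier must supply to deliver the nominal
    # power into the nominal impedance: P = V^2 / Z, so V = sqrt(P * Z).
    return math.sqrt(nominal_power_w * nominal_impedance_ohms)

def actual_power(voltage_v, actual_impedance_ohms):
    # Electric power actually dissipated when the same voltage drives the
    # loudspeaker's true impedance at a given frequency.
    return voltage_v ** 2 / actual_impedance_ohms

v8 = nominal_watt_voltage(8.0)   # about 2.83 V for an 8-ohm speaker
v4 = nominal_watt_voltage(4.0)   # 2.00 V for a 4-ohm speaker
print(f"test voltages: {v4:.2f} V (4 ohm), {v8:.2f} V (8 ohm)")

# An "8-ohm" speaker whose impedance actually ranges from roughly 5 to 100 ohms:
for z in (5.0, 8.0, 100.0):
    print(f"impedance {z:5.1f} ohm -> actual power {actual_power(v8, z):.3f} W")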
References
EIA RS-299-A, Loudspeakers, Dynamic, Magnet Structures and Impedance
IEC 60268-5, Sound System Equipment - Part 5: Loudspeakers
Loudspeaker technology
Units of power | Nominal watt | Physics,Mathematics | 545 |
12,843,844 | https://en.wikipedia.org/wiki/Tryptophan%20tryptophylquinone | Tryptophan tryptophylquinone (TTQ) is an enzyme cofactor, generated by posttranslational modification of amino acids within the protein. Methylamine dehydrogenase (MADH), an amine dehydrogenase, requires TTQ for its catalytic function.
See also
Amicyanin
References
Alpha-Amino acids
Amino acid derivatives
Tryptamines
Indolequinones | Tryptophan tryptophylquinone | Chemistry,Biology | 87 |
76,568,876 | https://en.wikipedia.org/wiki/The%20Hagedorn%20Prize | The Hagedorn Prize is an annual award within the field of medical research, specifically recognizing outstanding contributions to diabetes research and endocrinology. Named after Hans Christian Hagedorn, a renowned Danish scientist and co-founder of Nordisk Insulinlaboratorium (now part of Novo Nordisk), the prize celebrates achievements in the understanding and treatment of diabetes.
Hagedorn's work significantly advanced the quality of insulin production and diabetes care, making this award a tribute to his legacy in the field. The Hagedorn Prize is recognized as the most prestigious award in internal medicine in Denmark.
Background
History
The Hagedorn Prize was established by the Danish Society of Internal Medicine in 1966 to recognize the contribution to medical science made by Hans Christian Hagedorn (1888–1971). The prize is awarded at the society's annual general meeting.
Initially, the Hagedorn Prize received its endowment from a distinct foundation, funded by contributions from Nordisk Insulinlaboratorium, with the Board of the Danish Society for Internal Medicine serving as its governing body. However, by 2008, the foundation's resources were deemed inadequate to sustain a meaningful award. Consequently, the remaining capital was transferred to the Novo Nordisk Foundation, which subsequently assumed the responsibility of bestowing the Hagedorn Prize, while maintaining the ongoing involvement of the Society. The recipient of the prize is determined by the society's board, relying on recommendations provided by its members.
About the Danish Society of Internal Medicine
The Danish Society of Internal Medicine, comprising nearly 4,500 members, serves as an overarching body for the nine internal medicine specialties within Denmark. Its objectives include the advancement of scientific research in internal medicine and the facilitation of ongoing education for specialist physicians in the discipline. Established in 1916, the Society operates with a board of directors composed of nine members, appointed by the boards of each respective internal medicine specialty.
Award
The Hagedorn Prize includes a monetary component, designed to support the ongoing research of the recipient. In addition, awardees are presented with a medal and a certificate recognizing their contributions to advancing diabetes research and treatment. The prize is intended not only to honor individuals for their past achievements but also to encourage further innovation and research in diabetes care.
Recipients
List of recipients of The Hagedorn Prize
References
Academic awards
Danish science and technology awards
Diabetes research
Medicine awards
Research awards | The Hagedorn Prize | Technology | 479 |
62,086,463 | https://en.wikipedia.org/wiki/Metal%E2%80%93inorganic%20framework | Metal–inorganic frameworks (MIFs) are a class of compounds consisting of metal ions or clusters coordinated to inorganic ligands to form one-, two-, or three-dimensional structures. They are a subclass of coordination polymers, with the special feature that they are often porous. They are the inorganic counterpart of metal–organic frameworks.
History
Millon's base, which has been known since the early 20th century, can be considered a MIF.
Linkers
A MIF with a borazocine linker was developed for hydrogen storage. Cu2I2Se6 has Se6 linkers. There are many MIFs with pnictogen linkers.
References
Crystal engineering
Porous media
Coordination polymers
Inorganic compounds | Metal–inorganic framework | Chemistry,Materials_science,Engineering | 146 |
380,532 | https://en.wikipedia.org/wiki/Biogen | Biogen Inc. is an American multinational biotechnology company based in Cambridge, Massachusetts, United States specializing in the discovery, development, and delivery of therapies for the treatment of neurological diseases to patients worldwide. Biogen operates in Argentina, Brazil, Canada, China, France, Germany, Hungary, India, Italy, Japan, Mexico, Netherlands, Poland, Sweden, and Switzerland.
History
Biogen was founded in 1978 in Geneva as Biotechnology Geneva by several prominent biologists, including Kenneth Murray from the University of Edinburgh, Phillip Allen Sharp from the Massachusetts Institute of Technology, Walter Gilbert from Harvard University (Gilbert served as CEO during the start-up phase of Biogen), Heinz Schaller from the University of Heidelberg, and Charles Weissmann from the University of Zurich (Weissmann contributed the company's first product, interferon alpha). Gilbert and Sharp were subsequently honored with Nobel Prizes: Gilbert was recognized in 1980 with the Nobel Prize in Chemistry for his work on DNA sequencing, and Sharp received the Nobel Prize in Physiology or Medicine in 1993 for his discovery of split genes.
In 2003, Biogen merged with San Diego, California-based IDEC Pharmaceuticals (formed in 1985 by University of California-San Diego's physicians and immunologists Ivor Royston and Robert E. Sobol, San Diego bio entrepreneur Howard Birndorf, and Stanford University cancer researchers Ron Levy and Richard Miller) and adopted the name Biogen Idec. After the merger, Biogen Idec became the third-largest biotechnology company in the world.
Following shifts in research core areas, the company has since shortened its name, reverting to simply Biogen. Biogen stock is a component of several stock indices such as the S&P 500, S&P 1500, and NASDAQ-100 and the company is listed on the NASDAQ stock exchange under the ticker symbol, BIIB.
In May 2006, the company announced it would acquire the cancer specialist Conforma Therapeutics for $250 million. Later in the same month, the company announced its intention to acquire Fumapharm AG, consolidating ownership of Fumaderm and BG-12, an oral fumarate, which was being studied for the treatment of multiple sclerosis and psoriasis.
In January 2007, the company announced it would acquire Syntonix Pharmaceuticals for up to $120 million, gaining Syntonix's lead product for hemophilia B as well as the technology for developing inhalable treatments.
In 2008, two new cases of the brain infection progressive multifocal leukoencephalopathy (PML) surfaced among Tysabri users in Europe, raising international concern about the drug and its association with the condition. Biogen is one of the drug's producers.
In 2011, Biogen announced that its drug Fampyra had received conditional marketing approval. Under the conditional approval, Biogen agreed to provide additional data on the long-term benefits and safety of Fampyra.
On December 10, 2012, Biogen announced its global collaboration agreement with Isis Pharmaceuticals to develop and research antisense drugs to treat neurological and neuromuscular diseases.
In February 2013, Bloomberg broke the news that Biogen was planning to pay Elan $3.25 billion for the full rights to Tysabri, used to treat multiple sclerosis.
In 2013, Biogen was the first U.S.-based biotechnology company to appear on the Dow Jones Sustainability World Index.
In January 2015, the company announced that it would acquire Convergence Pharmaceuticals for up to $675 million, with the acquisition aiming to accelerate the development of Convergence's pipeline, in particular CNV1014802 – a Phase II small molecule sodium channel blocking candidate. In October 2015, the company announced that it would lay off 11% of its workforce, effective immediately.
On May 3, 2016, Biogen announced to spin off its hemophilia business, known as Bioverativ. The hemophilia business would become an independent publicly traded company. Bioverativ offered two hemophilia drugs in 2016, Alprolix and Eloctate, and plans on developing its Hemophilia-focused goals.
In 2016, Biogen released Spinraza (nusinersen), a treatment for spinal muscular atrophy. The drug is among the most expensive treatments available, with a price of $750,000 for the first year of doses and $375,000 for each subsequent year, likely for the rest of a patient's life. While it is not a cure, Spinraza significantly improves the quality of life of infants and adults.
In 2017, Biogen announced that its drug Fampyra converted from conditional marketing authorization to standard marketing approval. EU multiple sclerosis (MS) patients use Fampyra to improve walking.
In February 2020, Biogen and Sangamo Therapeutics announced a global licensing deal to develop compounds for neuromuscular and neurological diseases.
In September 2020, Biogen Inc. made a $10 million deposit in OneUnited Bank to provide more capital to fund home loans and commercial development in Black communities. In November, the company announced it would acquire a $650 million stake in Cambridge-based Sage Therapeutics and make an upfront payment of $875 million, in order to jointly develop a number of depression treatments.
In July 2023, it was announced Biogen had acquired the Plano, Texas-headquartered biotech company, Reata Pharmaceuticals for nearly $6.5 billion.
In May 2024, Biogen acquired Human Immunology Biosciences (HI-Bio) for $1.15 billion.
Aducanumab
In 2007, the company reached a licensing agreement with Neurimmune, a spin-off from the University of Zurich, for the Alzheimer's disease drug, Aducanumab, developed by this Swiss company. Later, Neurimmune sold its rights for license fees for $200 million to Biogen.
In December 2014, Biogen announced that it was preparing to take Aducanumab, its experimental Alzheimer's disease treatment, into a late-stage trial after the medication dramatically improved cognition and reduced brain plaque levels in an early-stage study.
In March 2015, Aducanumab became the first experimental Alzheimer's treatment to show significant results in regard to slowing down cognitive decline and reducing brain-destroying plaques. In July 2015, Biogen initiated two late-stage studies called ENGAGE and EMERGE, which will assess Aducanumab in adults with early Alzheimer's disease.
According to a report published in Nature on August 31, 2016, Aducanumab decreased amyloid-beta in the brains of people with early-stage Alzheimer's disease. On March 21, 2019, Biogen announced that the Phase 3 clinical trials of Aducanumab, which it was developing along with Eisai, were halted, and with this setback in its drug research, Biogen's shares fell sharply that same month. Also in March 2019, Biogen announced it would acquire Nightstar Therapeutics, which focused on adeno-associated virus based gene therapies for inherited retinal disorders, for $25.50 per share ($800 million in total). In October 2019, however, Biogen announced that it would pursue FDA approval of Aducanumab together with Eisai.
On October 22, 2019, despite two Phase 3 clinical trials being previously halted for futility, Biogen announced its plan to submit for FDA's approval of Aducanumab. In May 2020, Biogen wrapped up construction on a state-of-the-art facility in Solothurn, Switzerland, which will produce Aducanumab by late 2021, alongside its North Carolina manufacturing facility. The monoclonal antibody, co-developed with Eisai, attracted considerable interest from biotech investors when Warren Buffett's Berkshire Hathaway bought 648,447 Biogen shares at a combined value of $192.4 million.
On July 8, 2020, Biogen and Eisai announced that both companies had together successfully submitted for Aducanumab's FDA regulatory and marketing approval.
On June 7, 2021, the FDA gave accelerated approval to Aducanumab under the name Aduhelm, a decision that proved to be controversial. The drug was priced at US$56,000 per year, but it was not covered by many insurers as they awaited further proof that the drug was effective. The US government did not subsidise it outside clinical trials. According to the FDA's website, the drug was proven to reduce amyloid-beta plaques in the brain, which was likely to benefit patients. The FDA stated that if the post-approval trial did not indicate that Aduhelm works, the drug may be taken off the market.
Biogen abandoned the drug in January 2024, for financial reasons.
Bioverativ
In May 2016, the company announced that it would spin off its hemophilia drug business (Eloctate and Alprolix) into a public company. In August, the company announced that the spun off company would be called Bioverativ, in order to show heritage with Biogen. The company would trade on the NASDAQ exchange under the ticker symbol BIVV and would look to be spun off in early 2017. Bioverativ was acquired by Sanofi in 2018.
Acquisition history
The following is an illustration of the company's major mergers and acquisitions and historical predecessors (this is not a comprehensive list):
Biogen
Biogen IDEC
Biogen (Est 1978)
IDEC Pharmaceuticals
Conforma Therapeutics (Acq 2006)
Fumapharm AG (Acq 2006)
Syntonix Pharmaceuticals (Acq 2007)
Convergence Pharmaceuticals (Acq 2015)
Nightstar Therapeutics (Acq 2019)
Reata Pharmaceuticals (Acq 2023)
Human Immunology Biosciences (HI-Bio)
COVID-19 pandemic
On March 5, 2020, Biogen reported that three individuals who met with their employees at a conference in Boston had tested positive for COVID-19 the previous week. On March 6, public health officials reported five new cases associated with the Biogen leadership meeting and by March 9, Massachusetts health officials had announced 30 new presumptive COVID-19 cases, all connected to the Biogen conference. Researchers first estimated that the conference would be linked to over 20,000 of the state's coronavirus cases. Researchers later estimated that up to 300,000 cases worldwide had been caused by the Biogen conference, including 1.6% of all U.S. cases of the coronavirus.
Finances
For the fiscal year 2017, Biogen reported earnings of US$2.539 billion, with an annual revenue of US$12.274 billion, an increase of 7.2% over the previous fiscal cycle. Biogen's shares traded at over $289 per share, and its market capitalization was valued at over US$63 billion in November 2018. The company ranked 228 on the 2021 Fortune 500 list of the largest United States corporations by revenue.
Products
Pipeline
Biogen focused its R&D efforts on the discovery and development of treatments for patients with high unmet medical needs in the areas of neurology, hematology, and immunology.
Investigational MS medicines:
Daclizumab High-Yield Process (DAC HYP): is being developed as a potential once-monthly subcutaneous injection in the treatment of relapsing-remitting multiple sclerosis (RRMS). DAC HYP is being developed in collaboration with Abbvie, Inc. In June 2014, the companies announced positive top-line results from the Phase III DECIDE clinical trial, where DAC HYP demonstrated superiority over interferon beta-1a in annualized relapse rate.
Anti-LINGO-1 (BIIB033) (Opicinumab): is the first candidate being investigated for its potential to remyelinate and repair neurons damaged by MS, currently in Phase 2 trials.
Biogen has several candidates in Phase 1 and 2 clinical trials in neurodegenerative and immunological diseases including MS, neuropathic pain, spinal muscular atrophy and lupus nephritis:
Phase 2a: anti-LINGO-1 molecule (Opicinumab) in acute optic neuritis
Phase 2b: anti-TWEAK monoclonal antibody in lupus nephritis
Phase 2a: STX-100 in patients with idiopathic pulmonary fibrosis
Phase 2: Neublastin for neuropathic pain in 2013
Phase 1/2: BIIB067 (ISIS-SOD1Rx) for amyotrophic lateral sclerosis, in collaboration with Ionis
Biogen also has several development agreements in place with Ionis Pharmaceuticals to collaborate to leverage antisense technology in advancing the treatment of neurological disorders.
In February 2012, Biogen formalized a joint venture with Samsung, creating Samsung Bioepis. This joint venture brings Biogen's expertise and capabilities in protein engineering, cell line development, and recombinant biologics manufacturing to position the joint venture so Biogen can participate in the emerging market for biosimilars.
In early 2014, Biogen entered into an agreement with Eisai, Inc., to jointly develop and commercialize two of their candidates for Alzheimer's disease, which have the potential to reduce Aβ plaques that form in the brains of patients, as well as to slow the formation of new plaques, potentially improving symptoms and suppressing disease progression.
Since 2015, Biogen has also had an agreement with AGTC to develop gene therapy for several genetic diseases, including the ophthalmologic diseases X-linked retinoschisis (XLRS) and X-linked retinitis pigmentosa (XLRP). To this end, Biogen paid AGTC $124 million, including an equity investment of $30 million, plus up to $1.1 billion in future milestones.
In March 2019, Biogen halted Phase 3 trials of Alzheimer's disease drug Aducanumab after "an independent group's analysis show[ed] that the trials were unlikely to 'meet their primary endpoint.'" However, in October 2019 they reversed their plans and said that they would be pursuing US FDA approval for Aducanumab. The reversal came after Biogen said a new analysis of a larger patient pool showed promising results. In July 2020, Biogen completed submission of a Biologics license application (BLA) to the FDA for review, and requested accelerated review. However, an advisory panel for the FDA voted against approval of this drug. On June 7, 2021, the FDA granted approval of Aducanumab for the treatment of Alzheimer's disease. Aducanumab was approved using the accelerated approval pathway, and Biogen will be required to conduct a post-approval clinical trial to verify clinical benefit for continued approval.
Lawsuits
In September 2022, Biogen agreed to pay $900 million to the U.S. federal government, states, and a whistleblower. Biogen had bribed doctors between 2009 and 2014 to increase prescriptions of Avonex, Tysabri, and Tecfidera (all for multiple sclerosis).
See also
Neurological diseases
Kenneth Murray
Eisai
Tim Harris (biochemist)
References
External links
Companies based in Cambridge, Massachusetts
Biotechnology companies of the United States
National Medal of Technology recipients
Pharmaceutical companies of the United States
Pharmaceutical companies established in 2003
2003 establishments in Massachusetts
Life science companies based in Massachusetts
Life sciences industry
Swiss companies established in 1978
Pharmaceutical companies established in 1978
Biotechnology companies established in 1978
Spinal muscular atrophy
Health care companies based in Massachusetts
1991 initial public offerings | Biogen | Biology | 3,241 |
1,750,722 | https://en.wikipedia.org/wiki/Terence%20Dickinson | Terence Dickinson (10 November 1943 – 1 February 2023) was a Canadian amateur astronomer and astrophotographer who lived near Yarker, Ontario, Canada. He was the author of 14 astronomy books for both adults and children. He was the founder and former editor of SkyNews magazine. Dickinson had been an astronomy commentator for Discovery Channel Canada and taught at St. Lawrence College. He made appearances at such places as the Ontario Science Centre. In 1994, the International Astronomical Union committee on Minor Planet Nomenclature named asteroid 5272 Dickinson in honour of his "ability to explain the universe in everyday language".
Biography
Dickinson was born in Toronto, Ontario, on 10 November 1943. He became interested in astronomy at age five after seeing a bright meteor from just outside his family's home. When he was 14 he received a 60 mm telescope as a Christmas present, the first of nearly 20 telescopes he owned. Past occupations include editor of Astronomy magazine (1974-75) and planetarium instructor. He became a full-time science writer in 1976. He received the 1993 Industry Canada's Michael Smith Award for Public Promotion of Science, the 1993 Canadian Science Writers' Association Award First Place for Science and Technology writing, and the Royal Canadian Institute's Sandford Fleming Medal in 1992. In 1995 Dickinson was made a Member of the Order of Canada, which is the nation's highest civilian achievement award. The Astronomical Society of the Pacific awarded him the Klumpke-Roberts Award in 1996. He received an honorary Doctor of Science degree from Queen's University in 2019.
In 1983, Dickinson published NightWatch: A Practical Guide to Viewing the Universe. The book includes star charts, tables of future solar and lunar eclipses, planetary conjunctions, planet locations, and other illustrations. The Journal of the Royal Astronomical Society described NightWatch as the essential star-watching guide for amateur astronomers of all levels of experience. The book has become the world's best-selling manual for amateur stargazing.
Dickinson internationally published twelve titles, primarily through Firefly Books.
Dickinson died on 1 February 2023 at the age of 79.
Publications
NightWatch: A Practical Guide to Viewing the Universe (March 4, 1983)
The Universe and Beyond (October 2, 1986)
Exploring the Night Sky: The Equinox Astronomy Guide for Beginners (February 22, 1987)
Exploring the Sky by Day: The Equinox Guide to Weather and the Atmosphere (September 10, 1988)
From the Big Bang to Planet X: The 50 Most-Asked Questions About the Universe... and Their Answers (September 1, 1993; Out of Print)
The Backyard Astronomer's Guide (January 15, 1994, with Alan Dyer)
Extraterrestrials: A Field Guide for Earthlings (October 1, 1994; Out of Print)
Other Worlds: A Beginner's Guide to Planets and Moons (September 5, 1995)
Splendors of the Universe: A Practical Guide to Photographing the Night Sky (November 16, 1997, with Jack Newton)
Summer Stargazing: A Practical Guide for Recreational Astronomers (April 2, 2005; Out of Print)
Hubble's Universe: Greatest Discoveries and Latest Images (September 6, 2012)
The Hubble Space Telescope: Our Eye on the Universe (September 27, 2019, with Tracy C. Read)
References
External links
SkyNews: The Canadian Magazine of Astronomy and Stargazing
The Backyard Astronomer's Guide
1943 births
2023 deaths
20th-century Canadian astronomers
21st-century Canadian astronomers
Amateur astronomers
Canadian astronomers
Canadian non-fiction writers
Canadian science writers
Scientists from Toronto
Writers from Toronto
Members of the Order of Canada
Sandford Fleming Award recipients | Terence Dickinson | Astronomy | 743 |
43,662,522 | https://en.wikipedia.org/wiki/Partial%20groupoid | In abstract algebra, a partial groupoid (also called halfgroupoid, pargoid, or partial magma) is a set endowed with a partial binary operation.
A partial groupoid is a partial algebra.
Partial semigroup
A partial groupoid $(G, \circ)$ is called a partial semigroup if the following associative law holds:
For all $x, y, z \in G$ such that $x \circ y$ and $y \circ z$ are defined, the following two statements hold:
$(x \circ y) \circ z$ is defined if and only if $x \circ (y \circ z)$ is defined, and
$(x \circ y) \circ z = x \circ (y \circ z)$ if $(x \circ y) \circ z$ is defined (and, because of 1., also $x \circ (y \circ z)$).
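For concreteness, here is a minimal Python sketch (not from the article; the dictionary representation and function names are chosen purely for illustration) that stores a partial binary operation on a finite set and checks the two conditions above by brute force:

from itertools import product

def is_partial_semigroup(elements, op):
    # op is a dict mapping (a, b) -> a∘b; pairs absent from the dict are undefined.
    for x, y, z in product(elements, repeat=3):
        if (x, y) in op and (y, z) in op:            # both x∘y and y∘z are defined
            left_defined = (op[(x, y)], z) in op     # is (x∘y)∘z defined?
            right_defined = (x, op[(y, z)]) in op    # is x∘(y∘z) defined?
            if left_defined != right_defined:
                return False                         # condition 1 fails
            if left_defined and op[(op[(x, y)], z)] != op[(x, op[(y, z)])]:
                return False                         # condition 2 fails
    return True

# Hypothetical example: a partial operation on {0, 1} with the pair (1, 0) left undefined.
elements = [0, 1]
op = {(0, 0): 0, (0, 1): 0, (1, 1): 1}
print(is_partial_semigroup(elements, op))   # True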
References
Further reading
Algebraic structures | Partial groupoid | Mathematics | 102 |
30,600,763 | https://en.wikipedia.org/wiki/Evolution%20of%20reptiles | Reptiles arose about 320 million years ago during the Carboniferous period. Reptiles, in the traditional sense of the term, are defined as animals that have scales or scutes, lay land-based hard-shelled eggs, and possess ectothermic metabolisms. So defined, the group is paraphyletic, excluding endothermic animals like birds that are descended from early traditionally-defined reptiles. A definition in accordance with phylogenetic nomenclature, which rejects paraphyletic groups, includes birds while excluding mammals and their synapsid ancestors. So defined, Reptilia is identical to Sauropsida.
Though few reptiles today are apex predators, many examples of apex reptiles have existed in the past. Reptiles have an extremely diverse evolutionary history that has led to biological successes, such as dinosaurs, pterosaurs, plesiosaurs, mosasaurs, and ichthyosaurs.
First reptiles
Early reptiles
The origin of the reptiles lies about 320–310 million years ago, in the swamps of the late Carboniferous period, when the first reptiles evolved from advanced labyrinthodonts.
The oldest known animal that may have been an amniote, a reptile rather than an amphibian, is Casineria (though it has also been argued to be a temnospondyl amphibian).
A series of footprints from the fossil strata of Nova Scotia, dated to 315 million years, show typical reptilian toes and imprints of scales.
The tracks are attributed to Hylonomus, the oldest unquestionable reptile known.
It was a small, lizard-like animal, about 20 to 30 cm (8–12 in) long, with numerous sharp teeth indicating an insectivorous diet.
Other examples include Westlothiana (sometimes considered a stem-amniote rather than a true amniote) and Paleothyris, both of similar build and presumably similar habit. One of the best known early reptiles is Mesosaurus, a genus from the Early Permian that had returned to water, feeding on fish.
The earliest reptiles were largely overshadowed by bigger labyrinthodont amphibians, such as Cochleosaurus, and remained a small, inconspicuous part of the fauna until after the small ice age at the end of the Carboniferous.
Anapsids, synapsids, diapsids and sauropsids
It was traditionally assumed that the first reptiles were anapsids, having a solid skull with holes only for the nose, eyes, spinal cord, etc.; the discoveries of synapsid-like openings in the skull roofs of several members of Parareptilia, including lanthanosuchoids, millerettids, bolosaurids, some nycteroleterids, some procolophonoids and at least some mesosaurs made the picture more ambiguous, and it is currently uncertain whether the ancestral reptile had an anapsid-like or synapsid-like skull. Very soon after the first reptiles appeared, they split into two branches. One branch, Synapsida (including modern mammals), had one opening in the skull roof behind each eye. The other branch, Sauropsida, is itself divided into two main groups. One of them, the aforementioned Parareptilia, contained taxa with anapsid-like skulls, as well as taxa with one opening behind each eye (see above). Members of the other group, Diapsida, possessed a hole in their skulls behind each eye, along with a second hole located higher on the skull. The function of the holes in both synapsids and diapsids was to lighten the skull and give room for the jaw muscles to move, allowing for a more powerful bite.
Turtles have been traditionally believed to be surviving anapsids, on the basis of their skull structure. The rationale for this classification was disputed, with some arguing that turtles are diapsids that reverted to this primitive state in order to improve their armor (see Parareptilia). Later morphological phylogenetic studies with this in mind placed turtles firmly within Diapsida. All molecular studies have strongly upheld the placement of turtles within diapsids, most commonly as a sister group to extant archosaurs.
Mammalian evolution
A basic cladogram of the origin of mammals.
Important developments in the transition from reptile to mammal were the evolution of warm-bloodedness, of molar occlusion, of the three-ossicle middle ear, of hair, and of mammary glands. By the end of the Triassic, there were many species that looked like modern mammals and, by the Middle Jurassic, the lineages leading to the three extant mammal groups — the monotremes, the marsupials, and the placentals — had diverged.
Rise of dinosaurs
Permian reptiles
Near the end of the Carboniferous, while the terrestrial reptiliomorph labyrinthodonts were still present, the synapsids evolved the first fully terrestrial large vertebrates, the pelycosaurs such as Edaphosaurus. In the mid-Permian period, the climate turned drier, resulting in a change of fauna: The primitive pelycosaurs were replaced by the more advanced therapsids.
The anapsid reptiles, whose massive skull roofs had no postorbital holes, continued and flourished throughout the Permian. The pareiasaurs reached giant proportions in the late Permian, eventually disappearing at the close of the period.
Late in the period, the diapsid reptiles split into two main lineages, the archosaurs (ancestors of crocodiles and dinosaurs) and the lepidosaurs (predecessors of modern tuataras, lizards, and snakes). Both groups remained lizard-like and relatively small and inconspicuous during the Permian.
The Mesozoic era, the "Age of Reptiles"
The close of the Permian saw the greatest mass extinction known (see the Permian–Triassic extinction event). Most of the earlier anapsid/synapsid megafauna disappeared, being replaced by the archosauromorph diapsids. The archosaurs were characterized by elongated hind legs and an erect pose, the early forms looking somewhat like long-legged crocodiles. The archosaurs became the dominant group during the Triassic period, developing into the well-known dinosaurs and pterosaurs, as well as the pseudosuchians. The Mesozoic is often called the "Age of Reptiles", a phrase coined by the early 19th-century paleontologist Gideon Mantell who recognized the dinosaurs and the ancestors of the crocodilians as the dominant land vertebrates. Some of the dinosaurs were the largest land animals ever to have lived while some of the smaller theropods gave rise to the first birds.
The sister group to Archosauromorpha is Lepidosauromorpha, containing squamates and rhynchocephalians, as well as their fossil relatives. Lepidosauromorpha contained at least one major group of the Mesozoic sea reptiles: the mosasaurs, which emerged during the Cretaceous period. The phylogenetic placement of other main groups of fossil sea reptiles – the sauropterygians and the ichthyosaurs, which evolved in the early Triassic and in the Middle Triassic respectively – is more controversial. Different authors linked these groups either to lepidosauromorphs or to archosauromorphs, and ichthyosaurs were also argued to be diapsids that did not belong to the least inclusive clade containing lepidosauromorphs and archosauromorphs.
The therapsids came under increasing pressure from the dinosaurs in the Jurassic; the mammals and the tritylodontids were the only survivors of the line by the end of the period.
Bird evolution
The main points to the transition from reptile to bird are the evolution from scales to feathers, the evolution of the beak (although independently evolved in other organisms), the hollowfication of bones, development of flight, and warm-bloodedness.
The evolution of birds is thought to have begun in the Jurassic Period, with the earliest birds derived from theropod dinosaurs. Birds are categorized as a biological class, Aves. The earliest known species in Aves is Archaeopteryx lithographica, from the Late Jurassic period. Modern phylogenetics place birds in the dinosaur clade Theropoda. According to the current consensus, Aves and Crocodilia are the sole living members of an unranked clade, the Archosauria.
Simplified cladogram from Senter (2007).
Demise of the dinosaurs
The close of the Cretaceous period saw the demise of the Mesozoic era reptilian megafauna. Along with the massive amount of volcanic activity at the time, the meteor impact that created the Cretaceous–Paleogene boundary is accepted as the main cause of this mass extinction event. Of the large marine reptiles, only sea turtles are left, and, of the dinosaurs, only the small feathered theropods survived in the form of birds. The end of the "Age of Reptiles" led to the "Age of Mammals". Despite the change in phrasing, reptile diversification continued throughout the Cenozoic. Today, squamates make up the majority of extant reptiles (over 90%). There are approximately 9,766 extant species of reptiles, compared with 5,400 species of mammals, so the number of reptilian species (without birds) is nearly twice the number of mammal species.
Role reversal
After the Cretaceous–Paleogene extinction event wiped out all of the non-avian dinosaurs (birds are generally regarded as the surviving dinosaurs) and several mammalian groups, placental and marsupial mammals diversified into many new forms and ecological niches throughout the Paleogene and Neogene eras. Some reached enormous sizes and almost as wide a variation as the dinosaurs once did. Nevertheless, mammalian megafauna never quite reached the skyscraper heights of some sauropods.
Nonetheless, large reptiles still composed important megafaunal components, such as giant tortoises, large crocodilians and, more locally, large varanids.
The four orders of Reptilia
Testudines
Testudines, or turtles, may have evolved from anapsids, but their exact origin is unknown and heavily debated. Fossils date back to around 220 million years ago and are remarkably similar to modern forms. These first turtles retain the same body plan as do all modern testudines and are mostly herbivorous, with some feeding exclusively on small marine organisms. The trademark shell is believed to have evolved from extensions of the backbone and widened ribs that fused together. This is supported by the fossil of Odontochelys semitestacea, which has an incomplete shell originating from the ribs and backbone. This species also had teeth with its beak, giving more support to it being a transitional fossil, although this claim is still controversial. The shell evolved to protect against predators, but it also slows down the land-based species by a great amount. This has contributed to many species going extinct in recent times. Because of alien species out-competing them for food and their inability to escape from humans, there are many endangered species in this order.
Sphenodontia
Sphenodontians arose in the mid Triassic and now consists of a single genus, tuatara, which comprises two endangered species that live on New Zealand and some of its minor surrounding islands. Their evolutionary history is filled with many species. Recent paleogenetic discoveries show that tuataras are prone to quick speciation.
Squamata
The most recent order of reptiles, squamates, are recognized by having a movable quadrate bone (giving them upper-jaw movement), possessing horny scales and hemipenes. They originate from the early Jurassic and are made up of the three suborders Lacertilia (paraphyletic), Serpentes, and Amphisbaenia. Although they are the most recent order, squamates contain more species than any of the other reptilian orders. Squamates are a monophyletic group included, with the Sphenodontia (e.g. tuataras), in the Lepidosauria. The latter superorder, together with some extinct animals like the plesiosaurs, constitute the Lepidosauromorpha, the sister infraclass to the group, the Archosauromorpha, that contains crocodiles, turtles, and birds. Although squamate fossils first appear in the early Jurassic, mitochondrial phylogenetics suggests that they evolved in the late Permian. Most evolutionary relationships within the squamates are not yet completely worked out, with the relationship of snakes to other groups being most problematic. From morphological data, Iguanid lizards have been thought to have diverged from other squamates very early, but recent molecular phylogenies, both from mitochondrial and nuclear DNA, do not support this early divergence. Because snakes have a faster molecular clock than other squamates, and there are few early snake and snake ancestor fossils, it is difficult to resolve the relationship between snakes and other squamate groups.
Crocodilia
The first organisms that showed characteristics similar to those of crocodilians were the Crurotarsi, which appeared during the Early Triassic, 250 million years ago. These quickly gave rise to the Eusuchia clade 220 million years ago, which would eventually lead to the order Crocodilia, the first members of which arose about 85 million years ago during the Late Cretaceous. The earliest fossil evidence of eusuchians is of the genus Isisfordia. Early species mainly fed on fish and vegetation. They were land-based, most having long legs (compared to modern crocodiles), and many were bipedal. As diversification increased, many apex predators arose, all of which are now extinct. Modern Crocodilia arose through specific evolutionary traits. Bipedalism was lost entirely in favor of a generally low quadrupedal stance, allowing an easy and less noticeable entrance into bodies of water. The shape of the skull and jaw changed to allow a stronger grasp, and the nostrils and eyes came to point upward. Mimicry is evident, as the backs of all crocodilians resemble some type of floating log, and their general color scheme of brown and green mimics moss or wood. The tail also took on a paddle shape to increase swimming speed. The only remaining groups of this order are the alligators, caimans, crocodiles, gharials, and false gharials.
References
reptiles
Herpetology
Prehistoric reptiles | Evolution of reptiles | Biology | 3,036 |