The first computer-based speech-synthesis systems originated in the late 1950s. Noriko Umeda et al. developed the first general English text-to-speech system in 1968, at the Electrotechnical Laboratory in Japan. In 1961, physicist John Larry Kelly, Jr and his colleague Louis Gerstman used an IBM 704 computer to synthesize speech, in one of the most prominent events in the history of Bell Labs. Kelly's voice recorder synthesizer (vocoder) recreated the song "Daisy Bell", with musical accompaniment from Max Mathews. Coincidentally, Arthur C. Clarke was visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his screenplay for his novel 2001: A Space Odyssey, where the HAL 9000 computer sings the same song as astronaut Dave Bowman puts it to sleep. Despite the success of purely electronic speech synthesis, research into mechanical speech-synthesizers continues.
Linear predictive coding (LPC), a form of speech coding, began development with the work of Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. Further developments in LPC technology were made by Bishnu S. Atal and Manfred R. Schroeder at Bell Labs during the 1970s. LPC was later the basis for early speech synthesizer chips, such as the Texas Instruments LPC Speech Chips used in the Speak & Spell toys from 1978.
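To make the idea concrete, the sketch below estimates LPC coefficients for a single windowed frame using the autocorrelation method and the Levinson-Durbin recursion, the classic textbook formulation of linear prediction. It is a minimal illustration (the frame, order, and variable names are ours, not those of any particular chip or codec):

```python
import numpy as np

def lpc(frame, order=10):
    """Estimate linear-prediction coefficients for one windowed frame
    via the autocorrelation method and Levinson-Durbin recursion."""
    n = len(frame)
    # Autocorrelation of the frame for lags 0..order
    r = np.array([frame[:n - k] @ frame[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this step of the recursion
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err
        a_prev = a.copy()
        a[1:i] = a_prev[1:i] + k * a_prev[i - 1:0:-1]
        a[i] = k
        err *= 1.0 - k * k
    return a, err  # predictor polynomial [1, a1..ap] and residual energy

# Demo on a synthetic vowel-like frame (two "formant" sinusoids)
fs = 8000
t = np.arange(160) / fs                     # one 20 ms frame
frame = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
coeffs, residual = lpc(frame * np.hamming(160))
```

The resulting polynomial models the vocal tract as an all-pole filter; a synthesizer excites that filter with a pulse train or noise to reconstruct speech.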
In 1975, Fumitada Itakura developed the line spectral pairs (LSP) method for high-compression speech coding, while at NTT. From 1975 to 1981, Itakura studied problems in speech analysis and synthesis based on the LSP method. In 1980, his team developed an LSP-based speech synthesizer chip. LSP is an important technology for speech synthesis and coding, and in the 1990s was adopted by almost all international speech coding standards as an essential component, contributing to the enhancement of digital speech communication over mobile channels and the internet.
In 1975, MUSA was released, and was one of the first speech synthesis systems. It consisted of stand-alone computer hardware and specialized software that enabled it to read Italian. A second version, released in 1978, was also able to sing Italian in an "a cappella" style.
Dominant systems in the 1980s and 1990s were the DECtalk system, based largely on the work of Dennis Klatt at MIT, and the Bell Labs system; the latter was one of the first multilingual language-independent systems, making extensive use of natural language processing methods.
Handheld electronics featuring speech synthesis began emerging in the 1970s. One of the first was the Telesensory Systems Inc. (TSI) Speech+ portable calculator for the blind in 1976. Other devices had primarily educational purposes, such as the Speak & Spell toy produced by Texas Instruments in 1978. Fidelity released a speaking version of its electronic chess computer in 1979. The first video game to feature speech synthesis was the 1980 shoot 'em up arcade game, Stratovox (known in Japan as Speak & Rescue), from Sun Electronics. The first personal computer game with speech synthesis was Manbiki Shoujo (Shoplifting Girl), released in 1980 for the PET 2001, for which the game's developer, Hiroshi Suzuki, developed a "zero cross" programming technique to produce a synthesized speech waveform. Another early example, the arcade version of Berzerk, also dates from 1980. The Milton Bradley Company produced the first multi-player electronic game using voice synthesis, Milton, in the same year.
In 1976, Computalker Consultants released their CT-1 Speech Synthesizer. Designed by D. Lloyd Rice and Jim Cooper, it was an analog synthesizer built to work with microcomputers using the S-100 bus standard.
Early electronic speech-synthesizers sounded robotic and were often barely intelligible. The quality of synthesized speech has steadily improved, but output from contemporary speech synthesis systems remains clearly distinguishable from actual human speech.
Synthesized voices typically sounded male until 1990, when Ann Syrdal, at AT&T Bell Laboratories, created a female voice.
Kurzweil predicted in 2005 that as the cost-performance ratio caused speech synthesizers to become cheaper and more accessible, more people would benefit from the use of text-to-speech programs.
Synthesizer technologies
The most important qualities of a speech synthesis system are naturalness and intelligibility. Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. The ideal speech synthesizer is both natural and intelligible. Speech synthesis systems usually try to maximize both characteristics.
The two primary technologies generating synthetic speech waveforms are concatenative synthesis and formant synthesis. Each technology has strengths and weaknesses, and the intended uses of a synthesis system will typically determine which approach is used.
Concatenation synthesis
Concatenative synthesis is based on the concatenation (stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis.
Unit selection synthesis
Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases, and sentences. Typically, the division into segments is done using a specially modified speech recognizer set to a "forced alignment" mode with some manual correction afterward, using visual representations such as the waveform and spectrogram. An index of the units in the speech database is then created based on the segmentation and acoustic parameters like the fundamental frequency (pitch), duration, position in the syllable, and neighboring phones. At run time, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted decision tree.
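The "best chain of candidate units" search can be illustrated with a small dynamic-programming (Viterbi-style) sketch. The cost functions and unit records here are hypothetical stand-ins for the pitch, duration and context features a real system would weight:

```python
import numpy as np

def select_units(targets, candidates, target_cost, join_cost):
    """Viterbi search for the cheapest chain of database units:
    summed target costs (fit to the spec) plus join costs (smooth splices)."""
    # best[i][j]: cheapest chain ending in candidate j at position i
    best = [[target_cost(targets[0], c) for c in candidates[0]]]
    back = []
    for i in range(1, len(targets)):
        row, ptr = [], []
        for c in candidates[i]:
            costs = [best[-1][k] + join_cost(prev, c)
                     for k, prev in enumerate(candidates[i - 1])]
            k_best = int(np.argmin(costs))
            row.append(costs[k_best] + target_cost(targets[i], c))
            ptr.append(k_best)
        best.append(row)
        back.append(ptr)
    # Trace back the chosen path from the cheapest final unit
    j = int(np.argmin(best[-1]))
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [candidates[i][j] for i, j in enumerate(path)]

# Hypothetical two-phone target and a miniature unit database
targets = [{"pitch": 110}, {"pitch": 120}]
candidates = [
    [{"file": "k_017", "pitch": 100}, {"file": "k_204", "pitch": 118}],
    [{"file": "ae_051", "pitch": 122}, {"file": "ae_340", "pitch": 90}],
]
tc = lambda t, c: abs(t["pitch"] - c["pitch"])   # fit to the target spec
jc = lambda u, v: abs(u["pitch"] - v["pitch"])   # penalize rough joins
chain = select_units(targets, candidates, tc, jc)  # -> k_204 then ae_051
```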
Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data, representing dozens of hours of speech. Also, unit selection algorithms have been known to select segments from a place that results in less than ideal synthesis (e.g. minor words become unclear) even when a better choice exists in the database. Recently, researchers have proposed various automated methods to detect unnatural segments in unit-selection speech synthesis systems.
Diphone synthesis
Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA or MBROLA, or more recent techniques such as pitch modification in the source domain using the discrete cosine transform. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because there are a number of freely available software implementations. An early example of diphone synthesis is a teaching robot, Leachim, that was invented by Michael J. Freeman. Leachim contained information regarding class curricula and certain biographical information about the students whom it was programmed to teach. It was tested in a fourth grade classroom in the Bronx, New York.
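A toy illustration of the unit inventory: given a phone sequence, the synthesizer looks up one recording per phone-to-phone transition. The phone names and silence-padding convention below are illustrative only:

```python
def phones_to_diphones(phones):
    """Map a phone sequence (padded with silence) to the diphone units
    a diphone synthesizer would fetch and concatenate."""
    padded = ["sil"] + phones + ["sil"]
    return [f"{a}-{b}" for a, b in zip(padded, padded[1:])]

# "cat" /k ae t/ -> ['sil-k', 'k-ae', 'ae-t', 't-sil']; each name denotes
# exactly one recording, and prosody is imposed afterwards (e.g. by PSOLA)
units = phones_to_diphones(["k", "ae", "t"])
```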
Domain-specific synthesis
Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports. The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings.
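A minimal sketch of the idea, using a hypothetical quarter-hour talking clock whose entire "database" is a handful of named recordings:

```python
# Hypothetical snippet names; each corresponds to one prerecorded file.
HOURS = ["twelve", "one", "two", "three", "four", "five", "six",
         "seven", "eight", "nine", "ten", "eleven"]
QUARTERS = {0: "oclock", 15: "fifteen", 30: "thirty", 45: "forty-five"}

def clock_utterance(hour, minute):
    """Ordered snippet names for a quarter-hour talking clock,
    e.g. (7, 30) -> ['the-time-is', 'seven', 'thirty']."""
    quarter = 15 * (minute // 15)      # snap to the preceding quarter hour
    return ["the-time-is", HOURS[hour % 12], QUARTERS[quarter]]
```

Playback is then just concatenating the corresponding audio files, which is why the prosody matches the original recordings so closely.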
Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed. The blending of words within naturally spoken language can still cause problems unless the many variations are taken into account. For example, in non-rhotic dialects of English the "r" in words like "clear" is usually only pronounced when the following word has a vowel as its first letter (so the "r" in "clear out" is realized, while in "clear table" it is not). Likewise in French, many final consonants are no longer silent if followed by a word that begins with a vowel, an effect called liaison. This alternation cannot be reproduced by a simple word-concatenation system, which would require additional complexity to be context-sensitive.
Formant synthesis
Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using additive synthesis and an acoustic model (physical modelling synthesis). Parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called rules-based synthesis; however, many concatenative systems also have rules-based components.
Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using a screen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used in embedded systems, where memory and microprocessor power are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies and intonations can be output, conveying not just questions and statements, but a variety of emotions and tones of voice.
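The sketch below shows the source-filter idea behind formant synthesis: an impulse-train glottal source passed through a cascade of second-order resonators, one per formant. The formant values roughly approximate the vowel /a/; the structure is a simplified, hypothetical Klatt-style resonator cascade, not any production synthesizer:

```python
import numpy as np
from scipy.signal import lfilter

def resonator(freq, bw, fs):
    """Second-order IIR resonator for one formant (Klatt-style),
    gain-normalized at DC; returns (numerator, denominator)."""
    r = np.exp(-np.pi * bw / fs)
    b1 = 2 * r * np.cos(2 * np.pi * freq / fs)
    b2 = -r * r
    return [1.0 - b1 - b2], [1.0, -b1, -b2]

fs = 16000
f0 = 120                                   # fundamental frequency (Hz)
source = np.zeros(fs // 2)                 # half a second of excitation
source[::fs // f0] = 1.0                   # impulse-train glottal source

speech = source
for freq, bw in [(730, 90), (1090, 110), (2440, 120)]:  # rough /a/ formants
    num, den = resonator(freq, bw, fs)
    speech = lfilter(num, den, speech)
speech /= np.max(np.abs(speech))           # normalize for playback
```

Changing the formant frequencies over time, rather than holding them fixed as here, is what lets such a system articulate different phones.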
Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for the Texas Instruments toy Speak & Spell, and in the early 1980s in Sega arcade machines and many Atari, Inc. arcade games that used the TMS5220 LPC chips. Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces.
Articulatory synthesis
Articulatory synthesis consists of computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed at Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed at Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues.
Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is the NeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of the University of Calgary, where much of the original research was conducted. Following the demise of the various incarnations of NeXT (started by Steve Jobs in the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing as gnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model".
More recent synthesizers, developed by Jorge C. Lucero and colleagues, incorporate models of vocal fold biomechanics, glottal aerodynamics and acoustic wave propagation in the bronchi, trachea, nasal and oral cavities, and thus constitute full systems of physics-based speech simulation.
HMM-based synthesis
HMM-based synthesis is a synthesis method based on hidden Markov models, also called Statistical Parametric Synthesis. In this system, the frequency spectrum (vocal tract), fundamental frequency (voice source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from HMMs themselves based on the maximum likelihood criterion.
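A heavily simplified sketch of the parametric generation step, assuming a hypothetical already-trained model. Without dynamic (delta) features, maximum-likelihood generation reduces to emitting each state's mean for its modeled duration; real systems add delta constraints to obtain smooth trajectories:

```python
import numpy as np

# Hypothetical trained model: each state holds the Gaussian mean of an
# acoustic feature vector (say, F0 plus two spectral coefficients) and
# a modeled duration in frames.
states = [
    {"mean": np.array([120.0, 1.2, -0.3]), "dur": 8},
    {"mean": np.array([135.0, 0.9,  0.1]), "dur": 12},
    {"mean": np.array([110.0, 1.4, -0.5]), "dur": 6},
]

# Without delta features, maximum-likelihood generation degenerates to
# repeating each state mean for its duration; the stacked trajectory
# (frames x features) would then drive a vocoder to produce audio.
trajectory = np.vstack([np.tile(s["mean"], (s["dur"], 1)) for s in states])
```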
Sinewave synthesis
Sinewave synthesis is a technique for synthesizing speech by replacing the formants (main bands of energy) with pure tone whistles.
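A minimal sketch: a few time-varying sinusoids that follow hypothetical formant tracks are summed, producing speech-like but unnatural "whistled" audio:

```python
import numpy as np

fs = 16000
t = np.arange(int(0.4 * fs)) / fs          # 400 ms of audio

# Hypothetical formant tracks (Hz): F1 and F2 glide, F3 stays put
f1 = np.linspace(300, 700, t.size)
f2 = np.linspace(2200, 1100, t.size)
f3 = np.full(t.size, 2500.0)

# Each formant is replaced by a pure tone following its track;
# integrating frequency gives each tone's instantaneous phase.
tone = lambda f: np.sin(2 * np.pi * np.cumsum(f) / fs)
audio = 0.5 * tone(f1) + 0.3 * tone(f2) + 0.2 * tone(f3)
```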
Deep learning-based synthesis
Deep learning speech synthesis uses deep neural networks (DNNs) to produce artificial speech from text (text-to-speech) or from a spectrum (vocoder).
The deep neural networks are trained using a large amount of recorded speech and, in the case of a text-to-speech system, the associated labels and/or input text.
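As a toy-scale sketch of the text-to-spectrogram half of such a pipeline (the model name and sizes are ours; real architectures such as Tacotron add attention or duration models so that one phoneme spans many frames):

```python
import torch
import torch.nn as nn

class TinyTTS(nn.Module):
    """Toy model mapping phoneme IDs to mel-spectrogram frames."""
    def __init__(self, n_phonemes=50, n_mels=80, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)

    def forward(self, phoneme_ids):
        x = self.embed(phoneme_ids)          # (batch, time, hidden)
        x, _ = self.encoder(x)
        return self.to_mel(x)                # (batch, time, n_mels)

model = TinyTTS()
mel = model(torch.randint(0, 50, (1, 12)))   # 12 phonemes -> 12 frames here
# A trained neural vocoder would then convert `mel` into a waveform.
```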
15.ai uses a multi-speaker model: hundreds of voices are trained concurrently rather than sequentially, decreasing the required training time and enabling the model to learn and generalize shared emotional context, even for voices with no exposure to such emotional context. The deep learning model used by the application is nondeterministic: each time that speech is generated from the same string of text, the intonation of the speech will be slightly different. The application also supports manually altering the emotion of a generated line using emotional contextualizers (a term coined by this project): a sentence or phrase that conveys the emotion of the take, which serves as a guide for the model during inference.
ElevenLabs is primarily known for its browser-based, AI-assisted text-to-speech software, Speech Synthesis, which can produce lifelike speech by synthesizing vocal emotion and intonation. The company states its software is built to adjust the intonation and pacing of delivery based on the context of language input used. It uses advanced algorithms to analyze the contextual aspects of text, aiming to detect emotions like anger, sadness, happiness, or alarm, which enables the system to understand the user's sentiment, resulting in a more realistic and human-like inflection. Other features include multilingual speech generation and long-form content creation with contextually-aware voices.
DNN-based speech synthesizers are approaching the naturalness of the human voice.
Disadvantages of the method include low robustness when training data are insufficient, a lack of controllability, and low performance in autoregressive models.
Tonal languages, such as Chinese or Taiwanese, require different levels of tone sandhi, and the output of a speech synthesizer may sometimes contain tone sandhi errors.
Audio deepfakes
In 2023, VICE reporter Joseph Cox published findings that he had recorded five minutes of himself talking and then used a tool developed by ElevenLabs to create voice deepfakes that defeated a bank's voice-authentication system.
Challenges
Text normalization challenges
The process of normalizing text is rarely straightforward. Texts are full of heteronyms, numbers, and abbreviations that all require expansion into a phonetic representation. There are many spellings in English which are pronounced differently based on context. For example, "My latest project is to learn how to better project my voice" contains two pronunciations of "project".
Most text-to-speech (TTS) systems do not generate semantic representations of their input texts, as processes for doing so are unreliable, poorly understood, and computationally ineffective. As a result, various heuristic techniques are used to guess the proper way to disambiguate homographs, like examining neighboring words and using statistics about frequency of occurrence.
Recently TTS systems have begun to use HMMs (discussed above) to generate "parts of speech" to aid in disambiguating homographs. This technique is quite successful for many cases such as whether "read" should be pronounced as "red" implying past tense, or as "reed" implying present tense. Typical error rates when using HMMs in this fashion are usually below five percent. These techniques also work well for most European languages, although access to required training corpora is frequently difficult in these languages.
Deciding how to convert numbers is another problem that TTS systems have to address. It is a simple programming challenge to convert a number into words (at least in English), like "1325" becoming "one thousand three hundred twenty-five". However, numbers occur in many different contexts; "1325" may also be read as "one three two five", "thirteen twenty-five" or "thirteen hundred and twenty five". A TTS system can often infer how to expand a number based on surrounding words, numbers, and punctuation, and sometimes the system provides a way to specify the context if it is ambiguous. Roman numerals can also be read differently depending on context. For example, "Henry VIII" reads as "Henry the Eighth", while "Chapter VIII" reads as "Chapter Eight".
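A sketch of such context-dependent number expansion, with deliberately crude heuristics (the rules and ranges are illustrative, not those of any deployed system):

```python
import re

ONES = ("zero one two three four five six seven eight nine ten eleven twelve "
        "thirteen fourteen fifteen sixteen seventeen eighteen nineteen").split()
TENS = "zero ten twenty thirty forty fifty sixty seventy eighty ninety".split()

def cardinal(n):
    """Spell out 0-9999 in words (enough for this demo)."""
    if n < 20:
        return ONES[n]
    if n < 100:
        return TENS[n // 10] + ("" if n % 10 == 0 else "-" + ONES[n % 10])
    if n < 1000:
        rest = "" if n % 100 == 0 else " " + cardinal(n % 100)
        return ONES[n // 100] + " hundred" + rest
    rest = "" if n % 1000 == 0 else " " + cardinal(n % 1000)
    return cardinal(n // 1000) + " thousand" + rest

def expand_number(token, context=""):
    """Guess a reading from context: year-like tokens in pairs, long digit
    runs one digit at a time, everything else as a cardinal number."""
    n = int(token)
    if re.search(r"\b(in|year|circa)\b", context) and 1100 <= n <= 1999:
        return f"{cardinal(n // 100)} {cardinal(n % 100)}"
    if len(token) >= 7:                     # looks like a phone number
        return " ".join(cardinal(int(d)) for d in token)
    return cardinal(n)

expand_number("1325")                  # 'one thousand three hundred twenty-five'
expand_number("1325", "in 1325, ...")  # 'thirteen twenty-five'
```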
Similarly, abbreviations can be ambiguous. For example, the abbreviation "in" for "inches" must be differentiated from the word "in", and the address "12 St John St." uses the same abbreviation for both "Saint" and "Street". TTS systems with intelligent front ends can make educated guesses about ambiguous abbreviations, while others provide the same result in all cases, resulting in nonsensical (and sometimes comical) outputs, such as "Ulysses S. Grant" being rendered as "Ulysses South Grant".
Text-to-phoneme challenges
Speech synthesis systems use two basic approaches to determine the pronunciation of a word based on its spelling, a process which is often called text-to-phoneme or grapheme-to-phoneme conversion (phoneme is the term used by linguists to describe distinctive sounds in a language). The simplest approach to text-to-phoneme conversion is the dictionary-based approach, where a large dictionary containing all the words of a language and their correct pronunciations is stored by the program. Determining the correct pronunciation of each word is a matter of looking up each word in the dictionary and replacing the spelling with the pronunciation specified in the dictionary. The other approach is rule-based, in which pronunciation rules are applied to words to determine their pronunciations based on their spellings. This is similar to the "sounding out", or synthetic phonics, approach to learning reading.
Each approach has advantages and drawbacks. The dictionary-based approach is quick and accurate, but completely fails if it is given a word which is not in its dictionary. As dictionary size grows, so too do the memory space requirements of the synthesis system. On the other hand, the rule-based approach works on any input, but the complexity of the rules grows substantially as the system takes into account irregular spellings or pronunciations. (Consider that the word "of" is very common in English, yet is the only word in which the letter "f" is pronounced [v].) As a result, nearly all speech synthesis systems use a combination of these approaches.
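A toy version of the hybrid: look the word up first, then fall back to greedy letter-to-sound rules. Both the dictionary and the rules below are tiny, hypothetical stand-ins for a real lexicon and rule inventory:

```python
# Tiny exception dictionary backed by letter-to-sound rules; both the
# entries and the rules are toy stand-ins, not a real phone set.
LEXICON = {
    "of": ["AH", "V"],                  # the irregular "f" pronounced [v]
    "one": ["W", "AH", "N"],
}

RULES = [                               # (grapheme, phones), longest first
    ("ch", ["CH"]), ("sh", ["SH"]), ("th", ["TH"]),
    ("a", ["AE"]), ("e", ["EH"]), ("i", ["IH"]), ("o", ["AA"]), ("u", ["AH"]),
    ("b", ["B"]), ("c", ["K"]), ("d", ["D"]), ("f", ["F"]), ("g", ["G"]),
    ("h", ["HH"]), ("k", ["K"]), ("l", ["L"]), ("m", ["M"]), ("n", ["N"]),
    ("p", ["P"]), ("r", ["R"]), ("s", ["S"]), ("t", ["T"]), ("v", ["V"]),
    ("w", ["W"]), ("y", ["Y"]), ("z", ["Z"]),
]

def to_phonemes(word):
    """Dictionary lookup first; greedy letter-to-sound rules as fallback."""
    word = word.lower()
    if word in LEXICON:
        return LEXICON[word]
    phones, i = [], 0
    while i < len(word):
        for graph, ph in RULES:
            if word.startswith(graph, i):
                phones += ph
                i += len(graph)
                break
        else:
            i += 1                      # skip letters no rule covers
    return phones

to_phonemes("of")    # ['AH', 'V'] from the dictionary
to_phonemes("ship")  # ['SH', 'IH', 'P'] from the rules
```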
Languages with a phonemic orthography have a very regular writing system, and the prediction of the pronunciation of words based on their spellings is quite successful. Speech synthesis systems for such languages often use the rule-based method extensively, resorting to dictionaries only for those few words, like foreign names and loanwords, whose pronunciations are not obvious from their spellings. On the other hand, speech synthesis systems for languages like English, which have extremely irregular spelling systems, are more likely to rely on dictionaries, and to use rule-based methods only for unusual words, or words that are not in their dictionaries.
Evaluation challenges
The consistent evaluation of speech synthesis systems may be difficult because of a lack of universally agreed objective evaluation criteria. Different organizations often use different speech data. The quality of speech synthesis systems also depends on the quality of the production technique (which may involve analogue or digital recording) and on the facilities used to replay the speech. Evaluating speech synthesis systems has therefore often been compromised by differences between production techniques and replay facilities.
Since 2005, however, some researchers have started to evaluate speech synthesis systems using a common speech dataset.
Prosodics and emotional content
A study in the journal Speech Communication by Amy Drahota and colleagues at the University of Portsmouth, UK, reported that listeners to voice recordings could determine, at better than chance levels, whether or not the speaker was smiling. It was suggested that identification of the vocal features that signal emotional content may be used to help make synthesized speech sound more natural. One of the related issues is modification of the pitch contour of the sentence, depending upon whether it is an affirmative, interrogative or exclamatory sentence. One of the techniques for pitch modification uses the discrete cosine transform in the source domain (linear prediction residual). Such pitch-synchronous pitch modification techniques need a priori pitch marking of the synthesis speech database using techniques such as epoch extraction, using a dynamic plosion index applied to the integrated linear prediction residual of the voiced regions of speech. In general, prosody remains a challenge for speech synthesizers, and is an active research topic.
Dedicated hardware
Icophone
General Instrument SP0256-AL2
National Semiconductor DT1050 Digitalker (Mozer – Forrest Mozer)
Texas Instruments LPC Speech Chips
Hardware and software systems
Popular systems offering speech synthesis as a built-in capability.
Texas Instruments
In the early 1980s, TI was known as a pioneer in speech synthesis, and a highly popular plug-in speech synthesizer module was available for the TI-99/4 and 4A. Speech synthesizers were offered free with the purchase of a number of cartridges and were used by many TI-written video games (games offered with speech during this promotion included Alpiner and Parsec). The synthesizer uses a variant of linear predictive coding and has a small in-built vocabulary. The original intent was to release small cartridges that plugged directly into the synthesizer unit, which would increase the device's built-in vocabulary. However, the success of software text-to-speech in the Terminal Emulator II cartridge canceled that plan.
Mattel
The Mattel Intellivision game console offered the Intellivoice Voice Synthesis module in 1982. It included the SP0256 Narrator speech synthesizer chip on a removable cartridge. The Narrator had 2 kB of read-only memory (ROM), which was used to store a database of generic words that could be combined to make phrases in Intellivision games. Since the Narrator chip could also accept speech data from external memory, any additional words or phrases needed could be stored inside the cartridge itself. The data consisted of strings of analog-filter coefficients to modify the behavior of the chip's synthetic vocal-tract model, rather than simple digitized samples.
SAM
Also released in 1982, Software Automatic Mouth was the first commercial all-software voice synthesis program. It was later used as the basis for MacinTalk. The program was available for non-Macintosh Apple computers (including the Apple II and the Lisa), various Atari models, and the Commodore 64. The Apple version preferred additional hardware that contained DACs, although it could instead use the computer's one-bit audio output (with the addition of much distortion) if the card was not present. The Atari made use of the embedded POKEY audio chip. Speech playback on the Atari normally disabled interrupt requests and shut down the ANTIC chip during vocal output; the audible output is extremely distorted speech when the screen is on. The Commodore 64 made use of the 64's embedded SID audio chip.
Atari
Arguably, the first speech system integrated into an operating system was that of the unreleased Atari 1400XL/1450XL computers, designed circa 1983. These used the Votrax SC01 chip and a finite-state machine to enable World English Spelling text-to-speech synthesis.
The Atari ST computers were sold with "stspeech.tos" on floppy disk.
Apple
The first speech system integrated into an operating system that shipped in quantity was Apple Computer's MacinTalk. The software was licensed from third-party developers Joseph Katz and Mark Barton (later, SoftVoice, Inc.) and was featured during the 1984 introduction of the Macintosh computer. This January demo required 512 kilobytes of RAM, so it could not run in the 128 kilobytes the first Mac actually shipped with. The demo was instead accomplished with a prototype 512k Mac, although those in attendance were not told of this, and the synthesis demo created considerable excitement for the Macintosh. In the early 1990s Apple expanded its capabilities, offering system-wide text-to-speech support. With the introduction of faster PowerPC-based computers, it included higher quality voice sampling. Apple also introduced speech recognition into its systems, which provided a fluid command set. More recently, Apple has added sample-based voices. Starting as a curiosity, the speech system of the Apple Macintosh has evolved into a fully supported program, PlainTalk, for people with vision problems. VoiceOver was featured for the first time in 2005 in Mac OS X Tiger (10.4). During 10.4 (Tiger) and the first releases of 10.5 (Leopard) there was only one standard voice shipping with Mac OS X. Starting with 10.6 (Snow Leopard), the user can choose from a wide list of voices. VoiceOver voices feature the taking of realistic-sounding breaths between sentences, as well as improved clarity at high read rates over PlainTalk. Mac OS X also includes say, a command-line application that converts text to audible speech. The AppleScript Standard Additions include a say verb that allows a script to use any of the installed voices and to control the pitch, speaking rate and modulation of the spoken text.
Amazon
Amazon's speech synthesis is used in Alexa and offered as Software as a Service in AWS (from 2017).
AmigaOS
The second operating system to feature advanced speech synthesis capabilities was AmigaOS, introduced in 1985. The voice synthesis was licensed by Commodore International from SoftVoice, Inc., who also developed the original MacinTalk text-to-speech system. It featured a complete system of voice emulation for American English, with both male and female voices and "stress" indicator markers, made possible through the Amiga's audio chipset. The synthesis system was divided into a translator library, which converted unrestricted English text into a standard set of phonetic codes, and a narrator device, which implemented a formant model of speech generation. AmigaOS also featured a high-level "Speak Handler", which allowed command-line users to redirect text output to speech. Speech synthesis was occasionally used in third-party programs, particularly word processors and educational software. The synthesis software remained largely unchanged from the first AmigaOS release and Commodore eventually removed speech synthesis support from AmigaOS 2.1 onward.
Despite the American English phoneme limitation, an unofficial version with multilingual speech synthesis was developed. This made use of an enhanced version of the translator library which could translate a number of languages, given a set of rules for each language.
Microsoft Windows
Modern Windows desktop systems can use SAPI 4 and SAPI 5 components to support speech synthesis and speech recognition. SAPI 4.0 was available as an optional add-on for Windows 95 and Windows 98. Windows 2000 added Narrator, a text-to-speech utility for people who have visual impairment. Third-party programs such as JAWS for Windows, Window-Eyes, Non-visual Desktop Access, Supernova and System Access can perform various text-to-speech tasks such as reading text aloud from a specified website, email account, text document, the Windows clipboard, the user's keyboard typing, etc. Not all programs can use speech synthesis directly. Some programs can use plug-ins, extensions or add-ons to read text aloud. Third-party programs are available that can read text from the system clipboard.
Microsoft Speech Server is a server-based package for voice synthesis and recognition. It is designed for network use with web applications and call centers.
Votrax
From 1971 to 1996, Votrax produced a number of commercial speech synthesizer components. A Votrax synthesizer was included in the first generation Kurzweil Reading Machine for the Blind.
Text-to-speech systems
Text-to-speech (TTS) refers to the ability of computers to read text aloud. A TTS engine converts written text to a phonemic representation, then converts the phonemic representation to waveforms that can be output as sound. TTS engines with different languages, dialects and specialized vocabularies are available through third-party publishers.
Android
Version 1.6 of Android added support for speech synthesis (TTS).
Internet
Currently, there are a number of applications, plugins and gadgets that can read messages directly from an e-mail client and web pages from a web browser or Google Toolbar. Some specialized software can narrate RSS feeds. On one hand, online RSS narrators simplify information delivery by allowing users to listen to their favourite news sources and to convert them to podcasts. On the other hand, online RSS readers are available on almost any personal computer connected to the Internet. Users can download generated audio files to portable devices, e.g. with the help of a podcast receiver, and listen to them while walking, jogging or commuting to work.
A growing field in Internet-based TTS is web-based assistive technology, e.g. 'Browsealoud' from a UK company and Readspeaker. It can deliver TTS functionality to anyone (for reasons of accessibility, convenience, entertainment or information) with access to a web browser. The non-profit project Pediaphon was created in 2006 to provide a similar web-based TTS interface to Wikipedia.
Other work is being done in the context of the W3C through the W3C Audio Incubator Group with the involvement of The BBC and Google Inc.
Open source
Some open-source software systems are available, such as:
eSpeak which supports a broad range of languages.
Festival Speech Synthesis System which uses diphone-based synthesis, as well as more modern and better-sounding techniques.
gnuspeech which uses articulatory synthesis from the Free Software Foundation.
Others
Following the commercial failure of the hardware-based Intellivoice, gaming developers sparingly used software synthesis in later games. Earlier systems from Atari, such as the Atari 5200 (Baseball) and the Atari 2600 (Quadrun and Open Sesame), also had games utilizing software synthesis.
Some e-book readers feature speech synthesis, such as the Amazon Kindle, Samsung E6, PocketBook eReader Pro, enTourage eDGe, and the Bebook Neo.
The BBC Micro incorporated the Texas Instruments TMS5220 speech synthesis chip.
Some models of Texas Instruments home computers produced in 1979 and 1981 (Texas Instruments TI-99/4 and TI-99/4A) were capable of text-to-phoneme synthesis or reciting complete words and phrases (text-to-dictionary), using a very popular Speech Synthesizer peripheral. TI used a proprietary codec to embed complete spoken phrases into applications, primarily video games.
IBM's OS/2 Warp 4 included VoiceType, a precursor to IBM ViaVoice.
GPS Navigation units produced by Garmin, Magellan, TomTom and others use speech synthesis for automobile navigation.
Yamaha produced a music synthesizer in 1999, the Yamaha FS1R which included a Formant synthesis capability. Sequences of up to 512 individual vowel and consonant formants could be stored and replayed, allowing short vocal phrases to be synthesized.
Digital sound-alikes
At the 2018 Conference on Neural Information Processing Systems (NeurIPS), researchers from Google presented the work 'Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis', which transfers learning from speaker verification to text-to-speech synthesis and can be made to sound almost like anybody from a speech sample of only 5 seconds.
Researchers from Baidu Research also presented a voice cloning system with similar aims at the same 2018 NeurIPS conference, though the result is rather unconvincing.
By 2019, digital sound-alikes had found their way into the hands of criminals, as Symantec researchers knew of three cases where digital sound-alike technology had been used for crime.
This adds to the stress on the disinformation situation, coupled with the facts that:
Human image synthesis since the early 2000s has improved beyond the point at which humans can reliably tell a real human imaged with a real camera from a simulation of a human imaged with a simulation of a camera.
2D video forgery techniques were presented in 2016 that allow near real-time counterfeiting of facial expressions in existing 2D video.
At SIGGRAPH 2017, an audio-driven digital look-alike of the upper torso of Barack Obama was presented by researchers from the University of Washington. It was driven only by a voice track as source data for the animation, after a training phase to acquire lip sync and wider facial information from training material consisting of 2D videos with audio had been completed.
In March 2020, a freeware web application called 15.ai that generates high-quality voices from an assortment of fictional characters from a variety of media sources was released. Initial characters included GLaDOS from Portal, Twilight Sparkle and Fluttershy from the show My Little Pony: Friendship Is Magic, and the Tenth Doctor from Doctor Who.
Speech synthesis markup languages
A number of markup languages have been established for the rendition of text as speech in an XML-compliant format. The most recent is Speech Synthesis Markup Language (SSML), which became a W3C recommendation in 2004. Older speech synthesis markup languages include Java Speech Markup Language (JSML) and SABLE. Although each of these was proposed as a standard, none of them have been widely adopted.
Speech synthesis markup languages are distinguished from dialogue markup languages. VoiceXML, for example, includes tags related to speech recognition, dialogue management and touchtone dialing, in addition to text-to-speech markup.
Applications
Speech synthesis has long been a vital assistive technology tool and its application in this area is significant and widespread. It allows environmental barriers to be removed for people with a wide range of disabilities. The longest application has been in the use of screen readers for people with visual impairment, but text-to-speech systems are now commonly used by people with dyslexia and other reading disabilities as well as by pre-literate children. They are also frequently employed to aid those with severe speech impairment, usually through a dedicated voice output communication aid. Work to personalize a synthetic voice to better match a person's personality or historical voice is becoming available. A noted application of speech synthesis was the Kurzweil Reading Machine for the Blind, which incorporated text-to-phonetics software based on work from Haskins Laboratories and a black-box synthesizer built by Votrax.
Speech synthesis techniques are also used in entertainment productions such as games and animations. In 2007, Animo Limited announced the development of a software application package based on its speech synthesis software FineSpeech, explicitly geared towards customers in the entertainment industries, able to generate narration and lines of dialogue according to user specifications. The application reached maturity in 2008, when NEC Biglobe announced a web service that allows users to create phrases from the voices of characters from the Japanese anime series Code Geass: Lelouch of the Rebellion R2. 15.ai has been frequently used for content creation in various fandoms, including the My Little Pony: Friendship Is Magic fandom, the Team Fortress 2 fandom, the Portal fandom, and the SpongeBob SquarePants fandom.
Text-to-speech aids for disability and impaired communication have become widely available. Text-to-speech is also finding new applications; for example, speech synthesis combined with speech recognition allows for interaction with mobile devices via natural language processing interfaces. Some users have also created AI virtual assistants using 15.ai and external voice control software.
Text-to-speech is also used in second language acquisition. Voki, for instance, is an educational tool created by Oddcast that allows users to create their own talking avatar, using different accents. They can be emailed, embedded on websites or shared on social media.
Content creators have used voice cloning tools to recreate their voices for podcasts, narration, and comedy shows. Publishers and authors have also used such software to narrate audiobooks and newsletters. Another area of application is AI video creation with talking heads. Webapps and video editors like Elai.io or Synthesia allow users to create video content involving AI avatars, who are made to speak using text-to-speech technology.
Speech synthesis is a valuable computational aid for the analysis and assessment of speech disorders. A voice quality synthesizer, developed by Jorge C. Lucero et al. at the University of Brasília, simulates the physics of phonation and includes models of vocal frequency jitter and tremor, airflow noise and laryngeal asymmetries. The synthesizer has been used to mimic the timbre of dysphonic speakers with controlled levels of roughness, breathiness and strain.
Singing synthesis
In meteorology, a cyclone is a large air mass that rotates around a strong center of low atmospheric pressure, counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere as viewed from above (opposite to an anticyclone). Cyclones are characterized by inward-spiraling winds that rotate about a zone of low pressure. The largest low-pressure systems are polar vortices and extratropical cyclones of the largest scale (the synoptic scale). Warm-core cyclones such as tropical cyclones and subtropical cyclones also lie within the synoptic scale. Mesocyclones, tornadoes, and dust devils lie within the smaller mesoscale.
Upper level cyclones can exist without the presence of a surface low, and can pinch off from the base of the tropical upper tropospheric trough during the summer months in the Northern Hemisphere. Cyclones have also been seen on extraterrestrial planets, such as Mars, Jupiter, and Neptune. Cyclogenesis is the process of cyclone formation and intensification. Extratropical cyclones begin as waves in large regions of enhanced mid-latitude temperature contrasts called baroclinic zones. These zones contract and form weather fronts as the cyclonic circulation closes and intensifies. Later in their life cycle, extratropical cyclones occlude as cold air masses undercut the warmer air and become cold core systems. A cyclone's track is guided over the course of its 2 to 6 day life cycle by the steering flow of the subtropical jet stream.
Weather fronts mark the boundary between two masses of air of different temperature, humidity, and density, and are associated with the most prominent meteorological phenomena. Strong cold fronts typically feature narrow bands of thunderstorms and severe weather, and may on occasion be preceded by squall lines or dry lines. Such fronts form west of the circulation center and generally move from west to east; warm fronts form east of the cyclone center and are usually preceded by stratiform precipitation and fog. Warm fronts move poleward ahead of the cyclone path. Occluded fronts form late in the cyclone life cycle near the center of the cyclone and often wrap around the storm center.
Tropical cyclogenesis describes the process of development of tropical cyclones. Tropical cyclones form due to latent heat driven by significant thunderstorm activity, and are warm core. Cyclones can transition between extratropical, subtropical, and tropical phases. Mesocyclones form as warm core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. In the Atlantic and the northeastern Pacific oceans, a tropical cyclone is generally referred to as a hurricane (from the name of the ancient Central American deity of wind, Huracan), in the Indian and south Pacific oceans it is called a cyclone, and in the northwestern Pacific it is called a typhoon.
The growth of instability in the vortices is not universal. For example, the size, intensity, moist-convection, surface evaporation, the value of potential temperature at each potential height can affect the nonlinear evolution of a vortex.
Nomenclature
Henry Piddington published 40 papers dealing with tropical storms from Calcutta between 1836 and 1855 in The Journal of the Asiatic Society. He also coined the term cyclone, meaning the coil of a snake. In 1842, he published his landmark thesis, Laws of the Storms.
Structure
There are a number of structural characteristics common to all cyclones. A cyclone is a low-pressure area. A cyclone's center (often known in a mature tropical cyclone as the eye), is the area of lowest atmospheric pressure in the region. Near the center, the pressure gradient force (from the pressure in the center of the cyclone compared to the pressure outside the cyclone) and the force from the Coriolis effect must be in an approximate balance, or the cyclone would collapse on itself as a result of the difference in pressure.
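Stated in standard symbols (the gradient wind relation, given here for reference rather than quoted from the article), the balance for air moving at speed V around a low at radius r is:

```latex
% Gradient wind balance around a low (per unit mass):
%   pressure gradient  =  Coriolis  +  centrifugal
\frac{1}{\rho}\frac{\partial p}{\partial r} \;=\; fV + \frac{V^{2}}{r},
\qquad f = 2\,\Omega \sin\varphi
```

where ρ is air density, p pressure, Ω the Earth's rotation rate and φ the latitude; dropping the curvature term V²/r recovers simple geostrophic balance.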
Because of the Coriolis effect, the wind flow around a large cyclone is counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. In the Northern Hemisphere, the fastest winds relative to the surface of the Earth therefore occur on the eastern side of a northward-moving cyclone and on the northern side of a westward-moving one; the opposite occurs in the Southern Hemisphere. In contrast to low-pressure systems, the wind flow around high-pressure systems are clockwise (anticyclonic) in the northern hemisphere, and counterclockwise in the southern hemisphere.
Formation
Cyclogenesis is the development or strengthening of cyclonic circulation in the atmosphere. Cyclogenesis is an umbrella term for several different processes that all result in the development of some sort of cyclone. It can occur at various scales, from the microscale to the synoptic scale.
Extratropical cyclones begin as waves along weather fronts before occluding later in their life cycle as cold-core systems. However, some intense extratropical cyclones can become warm-core systems when a warm seclusion occurs.
Tropical cyclones form as a result of significant convective activity, and are warm core. Mesocyclones form as warm core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. The opposite of cyclogenesis is cyclolysis; the high-pressure equivalent, dealing with the formation of high-pressure areas, is anticyclogenesis.
A surface low can form in a variety of ways. Topography can create a surface low. Mesoscale convective systems can spawn surface lows that are initially warm-core. The disturbance can grow into a wave-like formation along the front, with the low positioned at the crest. Around the low, the flow becomes cyclonic. This rotational flow moves polar air towards the equator on the west side of the low, while warm air moves towards the pole on the east side. A cold front appears on the west side, while a warm front forms on the east side. Usually, the cold front moves at a quicker pace than the warm front and "catches up" with it due to the slow erosion of the higher density air mass ahead of the cyclone. In addition, the higher density air mass sweeping in behind the cyclone strengthens the higher pressure, denser cold air mass. The cold front overtakes the warm front and reduces the length of the warm front. At this point an occluded front forms, where the warm air mass is pushed upwards into a trough of warm air aloft, also known as a trowal.
Tropical cyclogenesis is the development and strengthening of a tropical cyclone. The mechanisms by which tropical cyclogenesis occurs are distinctly different from those that produce mid-latitude cyclones. Tropical cyclogenesis, the development of a warm-core cyclone, begins with significant convection in a favorable atmospheric environment. There are six main requirements for tropical cyclogenesis:
sufficiently warm sea surface temperatures,
atmospheric instability,
high humidity in the lower to middle levels of the troposphere,
enough Coriolis force to develop a low-pressure center,
a preexisting low-level focus or disturbance, and
low vertical wind shear.
An average of 86 tropical cyclones of tropical storm intensity form annually worldwide, with 47 reaching hurricane/typhoon strength, and 20 becoming intense tropical cyclones (at least Category 3 intensity on the Saffir–Simpson hurricane scale).
Synoptic scale
The following types of cyclones are identifiable in synoptic charts.
Surface-based types
There are three main types of surface-based cyclones: extratropical cyclones, subtropical cyclones, and tropical cyclones.
Extratropical cyclone
An extratropical cyclone is a synoptic scale low-pressure weather system that does not have tropical characteristics, as it is connected with fronts and horizontal gradients (rather than vertical) in temperature and dew point, otherwise known as "baroclinic zones".
"Extratropical" is applied to cyclones outside the tropics, in the middle latitudes. These systems may also be described as "mid-latitude cyclones" due to their area of formation, or "post-tropical cyclones" when a tropical cyclone has moved (extratropical transition) beyond the tropics. They are often described as "depressions" or "lows" by weather forecasters and the general public. These are the everyday phenomena that, along with anticyclones, drive weather over much of the Earth.
Although extratropical cyclones are almost always classified as baroclinic since they form along zones of temperature and dewpoint gradient within the westerlies, they can sometimes become barotropic late in their life cycle when the temperature distribution around the cyclone becomes fairly uniform with radius. An extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone, if it dwells over warm waters sufficient to warm its core, and as a result develops central convection. A particularly intense type of extratropical cyclone that strikes during winter is known colloquially as a nor'easter.
Polar low
A polar low is a small-scale, short-lived atmospheric low-pressure system (depression) that is found over the ocean areas poleward of the main polar front in both the Northern and Southern Hemispheres. Polar lows were first identified on the meteorological satellite imagery that became available in the 1960s, which revealed many small-scale cloud vortices at high latitudes. The most active polar lows are found over certain ice-free maritime areas in or near the Arctic during the winter, such as the Norwegian Sea, Barents Sea, Labrador Sea and Gulf of Alaska. Polar lows dissipate rapidly when they make landfall. Antarctic systems tend to be weaker than their northern counterparts, since the air-sea temperature differences around the continent are generally smaller. However, vigorous polar lows can be found over the Southern Ocean. During winter, when cold-core lows with sufficiently low temperatures in the mid-levels of the troposphere move over open waters, deep convection forms, which allows polar low development to become possible. The systems usually have a small horizontal length scale and exist for no more than a couple of days. They are part of the larger class of mesoscale weather systems. Polar lows can be difficult to detect using conventional weather reports and are a hazard to high-latitude operations, such as shipping and gas and oil platforms. Polar lows have been referred to by many other terms, such as polar mesoscale vortex, Arctic hurricane, Arctic low, and cold air depression. Today the term is usually reserved for the more vigorous systems that have near-surface winds of at least 17 m/s.
Subtropical
A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form between the equator and the 50th parallel. As early as the 1950s, meteorologists were unclear whether they should be characterized as tropical cyclones or extratropical cyclones, and used terms such as quasi-tropical and semi-tropical to describe the cyclone hybrids. By 1972, the National Hurricane Center officially recognized this cyclone category. Subtropical cyclones began to receive names off the official tropical cyclone list in the Atlantic Basin in 2002. They have broad wind patterns with maximum sustained winds located farther from the center than typical tropical cyclones, and exist in areas of weak to moderate temperature gradient.
Since they form from extratropical cyclones, which have colder temperatures aloft than normally found in the tropics, the sea surface temperature required for their formation is around 23 °C (73 °F), three degrees Celsius (5 °F) lower than for tropical cyclones. This means that subtropical cyclones are more likely to form outside the traditional bounds of the hurricane season. Although subtropical storms rarely have hurricane-force winds, they may become tropical in nature as their cores warm.
Tropical
A tropical cyclone is a storm system characterized by a low-pressure center and numerous thunderstorms that produce strong winds and flooding rain. A tropical cyclone feeds on heat released when moist air rises, resulting in condensation of water vapour contained in the moist air. They are fueled by a different heat mechanism than other cyclonic windstorms such as nor'easters, European windstorms, and polar lows, leading to their classification as "warm core" storm systems.
The term "tropical" refers to both the geographic origin of these systems, which form almost exclusively in tropical regions of the globe, and their dependence on Maritime Tropical air masses for their formation. The term "cyclone" refers to the storms' cyclonic nature, with counterclockwise rotation in the Northern Hemisphere and clockwise rotation in the Southern Hemisphere. Depending on their location and strength, tropical cyclones are referred to by other names, such as hurricane, typhoon, tropical storm, cyclonic storm, tropical depression, or simply as a cyclone. | Cyclone | Wikipedia | 310 | 42806 | https://en.wikipedia.org/wiki/Cyclone | Physical sciences | Atmospheric circulation | null |
While tropical cyclones can produce extremely powerful winds and torrential rain, they are also able to produce high waves and a damaging storm surge. Their winds increase the wave size, and in so doing they draw more heat and moisture into their system, thereby increasing their strength. They develop over large bodies of warm water, and hence lose their strength if they move over land. This is the reason coastal regions can receive significant damage from a tropical cyclone, while inland regions are relatively safe from strong winds. Heavy rains, however, can produce significant flooding inland. Storm surges are rises in sea level caused by the reduced pressure of the core that in effect "sucks" the water upward and from winds that in effect "pile" the water up. Storm surges can produce extensive coastal flooding that reaches well inland from the coastline. Although their effects on human populations can be devastating, tropical cyclones can also relieve drought conditions. They also carry heat and energy away from the tropics and transport it toward temperate latitudes, which makes them an important part of the global atmospheric circulation mechanism. As a result, tropical cyclones help to maintain equilibrium in the Earth's troposphere.
Many tropical cyclones develop when the atmospheric conditions around a weak disturbance in the atmosphere are favorable. Others form when other types of cyclones acquire tropical characteristics. Tropical systems are then moved by steering winds in the troposphere; if the conditions remain favorable, the tropical disturbance intensifies, and can even develop an eye. On the other end of the spectrum, if the conditions around the system deteriorate or the tropical cyclone makes landfall, the system weakens and eventually dissipates. A tropical cyclone can become extratropical as it moves toward higher latitudes if its energy source changes from heat released by condensation to differences in temperature between air masses. A tropical cyclone is usually not considered to become subtropical during its extratropical transition.
Upper level types
Polar cyclone | Cyclone | Wikipedia | 384 | 42806 | https://en.wikipedia.org/wiki/Cyclone | Physical sciences | Atmospheric circulation | null |
A polar, sub-polar, or Arctic cyclone (also known as a polar vortex) is a vast area of low pressure that strengthens in the winter and weakens in the summer. A polar cyclone is a large low-pressure weather system in which the air circulates counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. The Coriolis acceleration acting on air masses moving poleward at high altitude causes this circulation aloft. The poleward movement of air originates from the air circulation of the Polar cell. The polar low is not driven by convection, as tropical cyclones are, nor by the interaction of cold and warm air masses, as extratropical cyclones are, but is an artifact of the global air movement of the Polar cell. The base of the polar low is in the mid to upper troposphere. In the Northern Hemisphere, the polar cyclone has two centers on average: one near Baffin Island and the other over northeast Siberia. In the Southern Hemisphere, it tends to be located near the edge of the Ross Ice Shelf near 160° west longitude. When the polar vortex is strong, its effect can be felt at the surface as a westerly wind (toward the east). When the polar cyclone is weak, significant cold outbreaks occur.
TUTT cell
Under specific circumstances, upper-level cold lows can break off from the base of the tropical upper tropospheric trough (TUTT), which is located mid-ocean in the Northern Hemisphere during the summer months. These upper tropospheric cyclonic vortices, also known as TUTT cells or TUTT lows, usually move slowly from east-northeast to west-southwest, and their bases generally do not extend into the lower troposphere. A weak inverted surface trough within the trade winds is generally found underneath them, and they may also be associated with broad areas of high-level clouds. Downward development results in an increase of cumulus clouds and the appearance of a surface vortex. In rare cases, they become warm-core tropical cyclones. Upper cyclones and the upper troughs that trail tropical cyclones can cause additional outflow channels and aid in their intensification. Developing tropical disturbances can help create or deepen upper troughs or upper lows in their wake due to the outflow jet emanating from the developing tropical disturbance/cyclone.
Mesoscale
The following types of cyclones are not identifiable in synoptic charts. | Cyclone | Wikipedia | 508 | 42806 | https://en.wikipedia.org/wiki/Cyclone | Physical sciences | Atmospheric circulation | null |
Mesocyclone
A mesocyclone is a vortex of air within a convective storm, typically a few kilometres in diameter (the mesoscale of meteorology). Air rises and rotates around a vertical axis, usually in the same direction as low-pressure systems in the same hemisphere. Mesocyclones are most often cyclonic, that is, associated with a localized low-pressure region within a supercell. Such storms can feature strong surface winds and severe hail. Mesocyclones often occur together with updrafts in supercells, where tornadoes may form. About 1,700 mesocyclones form annually across the United States, but only half produce tornadoes.
Tornado
A tornado is a violently rotating column of air that is in contact with both the surface of the earth and a cumulonimbus cloud or, in rare cases, the base of a cumulus cloud. Tornadoes are also referred to as twisters (a colloquial term in America) or as cyclones, although in meteorology the word cyclone is used in a wider sense, to name any closed low-pressure circulation.
Dust devil
A dust devil is a strong, well-formed, and relatively long-lived whirlwind, ranging from small (half a metre wide and a few metres tall) to large (more than 10 metres wide and more than 1000 metres tall). The primary vertical motion is upward. Dust devils are usually harmless, but can on rare occasions grow large enough to pose a threat to both people and property.
Waterspout
A waterspout is a columnar vortex forming over water that is, in its most common form, a non-supercell tornado over water that is connected to a cumuliform cloud. While it is often weaker than most of its land counterparts, stronger versions spawned by mesocyclones do occur.
Steam devil
A gentle vortex over calm water or wet land made visible by rising water vapour.
Fire whirl
A fire whirl – also colloquially known as a fire devil, fire tornado, firenado, or fire twister – is a whirlwind induced by a fire and often made up of flame or ash.
Other planets | Cyclone | Wikipedia | 453 | 42806 | https://en.wikipedia.org/wiki/Cyclone | Physical sciences | Atmospheric circulation | null |
Cyclones are not unique to Earth. Cyclonic storms are common on giant planets, such as the Small Dark Spot on Neptune. It is about one third the diameter of the Great Dark Spot and received the nickname "Wizard's Eye" because it looks like an eye. This appearance is caused by a white cloud in the middle of the Wizard's Eye. Mars has also exhibited cyclonic storms. Jovian storms like the Great Red Spot are often mistakenly described as giant hurricanes or cyclonic storms. However, this is inaccurate, as the Great Red Spot is in fact the inverse phenomenon, an anticyclone. | Cyclone | Wikipedia | 129 | 42806 | https://en.wikipedia.org/wiki/Cyclone | Physical sciences | Atmospheric circulation | null |
Radio frequency (RF) is the oscillation rate of an alternating electric current or voltage or of a magnetic, electric or electromagnetic field or mechanical system in the frequency range from around 20 kHz to around 300 GHz. This is roughly between the upper limit of audio frequencies and the lower limit of infrared frequencies, and also encompasses the microwave range. These are the frequencies at which energy from an oscillating current can radiate off a conductor into space as radio waves, so they are used in radio technology, among other uses. Different sources specify different upper and lower bounds for the frequency range.
Electric current
Electric currents that oscillate at radio frequencies (RF currents) have special properties not shared by direct current or lower audio frequency alternating current, such as the 50 or 60 Hz current used in electrical power distribution.
Energy from RF currents in conductors can radiate into space as electromagnetic waves (radio waves). This is the basis of radio technology.
RF current does not penetrate deeply into electrical conductors but tends to flow along their surfaces; this is known as the skin effect.
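The depth of that surface layer is conventionally characterized by the skin depth, δ = √(ρ / (π f μ)), at which the current density has fallen to 1/e of its surface value. Below is a minimal sketch of this standard formula, assuming textbook constants for copper; the constants and helper name are illustrative, not taken from this article:

```python
import math

# Skin depth: delta = sqrt(rho / (pi * f * mu)); assumed copper constants.
RHO_CU = 1.68e-8           # resistivity of copper, ohm-metres (textbook value)
MU_0 = 4 * math.pi * 1e-7  # vacuum permeability (copper's mu_r ~ 1), H/m

def skin_depth_m(freq_hz: float) -> float:
    """Depth at which RF current density falls to 1/e of its surface value."""
    return math.sqrt(RHO_CU / (math.pi * freq_hz * MU_0))

for f in (50, 1e6, 100e6):
    print(f"{f:>11,.0f} Hz: skin depth ≈ {skin_depth_m(f) * 1e6:,.1f} µm")
# 50 Hz mains current uses most of a thick wire (~9.2 mm depth); at 100 MHz
# the current flows in roughly the outer 6.5 µm, which is why RF conductors
# are often hollow tubes or plated with a highly conductive surface layer.
```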
RF currents applied to the body often do not cause the painful sensation and muscular contraction of electric shock that lower frequency currents produce. This is because the current changes direction too quickly to trigger depolarization of nerve membranes. However, this does not mean RF currents are harmless; they can cause internal injury as well as serious superficial burns called RF burns.
RF current can ionize air, creating a conductive path through it. This property is exploited by "high frequency" units used in electric arc welding, which use currents at higher frequencies than power distribution uses.
Another property is the ability to appear to flow through paths that contain insulating material, like the dielectric insulator of a capacitor. This is because capacitive reactance in a circuit decreases with increasing frequency.
In contrast, RF current can be blocked by a coil of wire, or even a single turn or bend in a wire, because the inductive reactance of a circuit increases with increasing frequency.
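The two reactance relations, X_C = 1/(2πfC) and X_L = 2πfL, make the contrast between these last two paragraphs concrete. A minimal sketch with assumed example component values (100 nF and 10 µH, chosen only for illustration):

```python
import math

def x_c(freq_hz: float, farads: float) -> float:
    """Capacitive reactance in ohms: falls as frequency rises."""
    return 1 / (2 * math.pi * freq_hz * farads)

def x_l(freq_hz: float, henries: float) -> float:
    """Inductive reactance in ohms: rises with frequency."""
    return 2 * math.pi * freq_hz * henries

C, L = 100e-9, 10e-6  # assumed example parts: 100 nF capacitor, 10 µH coil
for f in (60, 10e6):
    print(f"{f:>10,.0f} Hz: Xc = {x_c(f, C):10.3f} ohm, Xl = {x_l(f, L):10.3f} ohm")
# At 60 Hz the capacitor blocks (~26.5 kohm) while the coil passes (~4 mohm);
# at 10 MHz the roles reverse (~0.16 ohm vs ~628 ohm).
```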
When conducted by an ordinary electric cable, RF current has a tendency to reflect from discontinuities in the cable, such as connectors, and travel back down the cable toward the source, causing a condition called standing waves. RF current may be carried efficiently over transmission lines such as coaxial cables.
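The strength of such reflections is conventionally described by the reflection coefficient Γ = (Z_L − Z_0)/(Z_L + Z_0) and the voltage standing wave ratio (VSWR); these are standard transmission-line relations rather than anything specific to this article. A minimal sketch, assuming a 50 Ω line and illustrative load impedances:

```python
# Standard transmission-line relations: Gamma = (Zl - Z0) / (Zl + Z0),
# and VSWR = (1 + |Gamma|) / (1 - |Gamma|) for a mismatched load.
def mismatch(z_load: float, z_line: float = 50.0) -> tuple[float, float]:
    gamma = (z_load - z_line) / (z_load + z_line)
    vswr = (1 + abs(gamma)) / (1 - abs(gamma))
    return gamma, vswr

for zl in (50.0, 75.0, 300.0):
    g, s = mismatch(zl)
    print(f"load {zl:5.0f} ohm on 50 ohm coax: Gamma = {g:+.2f}, VSWR = {s:.1f}")
# A matched 50-ohm load reflects nothing (VSWR 1.0); a 75-ohm load gives
# VSWR 1.5; a 300-ohm load reflects strongly (VSWR 6.0) and sets up
# pronounced standing waves on the cable.
```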
Frequency bands | Radio frequency | Wikipedia | 478 | 42852 | https://en.wikipedia.org/wiki/Radio%20frequency | Physical sciences | Waves | Physics |
The radio spectrum of frequencies is divided into bands with conventional names designated by the International Telecommunication Union (ITU):
{| class="wikitable" style="text-align:right"
|-
! scope="col" rowspan="2" | Frequencyrange !! scope="col" rowspan="2" | Wavelengthrange !! scope="col" colspan="2" | ITU designation !! scope="col" rowspan="2" | IEEE bands
|-
! scope="col" | Full name
! scope="col" | Abbreviation
|-
! scope="row" | Below 3 Hz
| >10<sup>5</sup> km || || style="text-align:center" | ||
|-
! scope="row" | 3–30 Hz
| 10<sup>5</sup>–10<sup>4</sup> km|| Extremely low frequency || style="text-align:center" | ELF ||
|-
! scope="row" | 30–300 Hz
| 10<sup>4</sup>–10<sup>3</sup> km|| Super low frequency || style="text-align:center" | SLF ||
|-
! scope="row" | 300–3000 Hz
| 10<sup>3</sup>–100 km|| Ultra low frequency || style="text-align:center" | ULF ||
|-
! scope="row" | 3–30 kHz
| 100–10 km|| Very low frequency || style="text-align:center" | VLF ||
|-
! scope="row" | 30–300 kHz
| 10–1 km|| Low frequency || style="text-align:center" | LF ||
|-
! scope="row" | 300 kHz – 3 MHz
| 1 km – 100 m|| Medium frequency || style="text-align:center" | MF ||
|-
! scope="row" | 3–30 MHz
| 100–10 m|| High frequency || style="text-align:center" | HF || style="text-align:center" | HF
|-
! scope="row" | 30–300 MHz
| 10–1 m|| Very high frequency || style="text-align:center" | VHF || style="text-align:center" | VHF
|-
! scope="row" | 300 MHz – 3 GHz | Radio frequency | Wikipedia | 507 | 42852 | https://en.wikipedia.org/wiki/Radio%20frequency | Physical sciences | Waves | Physics |
| 1 m – 100 mm|| Ultra high frequency || style="text-align:center" | UHF || style="text-align:center" | UHF, L, S
|-
! scope="row" | 3–30 GHz
| 100–10 mm|| Super high frequency || style="text-align:center" | SHF || style="text-align:center" | S, C, X, Ku, K, Ka
|-
! scope="row" | 30–300 GHz
| 10–1 mm|| Extremely high frequency || style="text-align:center" | EHF || style="text-align:center" | Ka, V, W, mm
|-
! scope="row" | 300 GHz – 3 THz
| 1 mm – 0.1 mm|| Tremendously high frequency || style="text-align:center" | THF ||
|} | Radio frequency | Wikipedia | 208 | 42852 | https://en.wikipedia.org/wiki/Radio%20frequency | Physical sciences | Waves | Physics |
Frequencies of 1 GHz and above are conventionally called microwave, while frequencies of 30 GHz and above are designated millimeter wave.
More detailed band designations are given by the standard IEEE letter-band frequency designations and the EU/NATO frequency designations.
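As a worked illustration of the table above: free-space wavelength is λ = c/f, and each ITU band spans one decade, from 3×10ⁿ Hz up to 3×10ⁿ⁺¹ Hz. A minimal sketch (band edges taken from the table; the helper name and sample frequencies are illustrative):

```python
# Classify a frequency into its ITU band and compute lambda = c / f.
C = 299_792_458  # speed of light, m/s

ITU_BANDS = [  # (upper edge in Hz, abbreviation); edges from the table above
    (3, ""), (30, "ELF"), (300, "SLF"), (3e3, "ULF"), (30e3, "VLF"),
    (300e3, "LF"), (3e6, "MF"), (30e6, "HF"), (300e6, "VHF"),
    (3e9, "UHF"), (30e9, "SHF"), (300e9, "EHF"), (3e12, "THF"),
]

def itu_band(freq_hz: float) -> str:
    for upper, name in ITU_BANDS:
        if freq_hz < upper:
            return name or "below ITU designations"
    return "above THF"

for f in (60, 1e6, 100e6, 2.4e9):
    print(f"{f:>12.0f} Hz -> {itu_band(f):>4s}, wavelength ≈ {C / f:.3g} m")
# e.g. 2.4 GHz (a common Wi-Fi frequency) falls in UHF, wavelength ≈ 12.5 cm.
```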
Applications
Communications
Radio frequencies are used in communication devices such as transmitters, receivers, computers, televisions, and mobile phones, to name a few. Radio frequencies are also applied in carrier current systems including telephony and control circuits. MOS integrated circuit technology is the basis for the current proliferation of radio frequency wireless telecommunications devices such as cellphones.
Medicine
Medical applications of radio frequency (RF) energy, in the form of electromagnetic waves (radio waves) or electrical currents, have existed for over 125 years, and now include diathermy, hyperthermia treatment of cancer, electrosurgery scalpels used to cut and cauterize in operations, and radiofrequency ablation. Magnetic resonance imaging (MRI) uses radio frequency fields to generate images of the human body.
Non-surgical weight loss equipment
Radio frequency (RF) energy is also used in devices advertised for weight loss and fat removal. The possible effects RF might have on the body, and whether RF can lead to fat reduction, need further study. Currently, there are devices such as trusculpt ID, Venus Bliss and many others utilizing this type of energy alongside heat to target fat pockets in certain areas of the body. That being said, there are limited studies on how effective these devices are.
Measurement
Test apparatus for radio frequencies can include standard instruments at the lower end of the range, but at higher frequencies, the test equipment becomes more specialized.
Mechanical oscillations
While RF usually refers to electrical oscillations, mechanical RF systems are not uncommon: see mechanical filter and RF MEMS. | Radio frequency | Wikipedia | 368 | 42852 | https://en.wikipedia.org/wiki/Radio%20frequency | Physical sciences | Waves | Physics |
The human genome is a complete set of nucleic acid sequences for humans, encoded as the DNA within each of the 24 distinct chromosomes in the cell nucleus. A small DNA molecule is found within individual mitochondria. These are usually treated separately as the nuclear genome and the mitochondrial genome. Human genomes include both protein-coding DNA sequences and various types of DNA that do not encode proteins. The latter is a diverse category that includes DNA coding for non-translated RNA, such as that for ribosomal RNA, transfer RNA, ribozymes, small nuclear RNAs, and several types of regulatory RNAs. It also includes promoters and their associated gene-regulatory elements, DNA playing structural and replicatory roles, such as scaffolding regions, telomeres, centromeres, and origins of replication, plus large numbers of transposable elements, inserted viral DNA, non-functional pseudogenes and simple, highly repetitive sequences. Introns make up a large percentage of non-coding DNA. Some of this non-coding DNA is non-functional junk DNA, such as pseudogenes, but there is no firm consensus on the total amount of junk DNA.
Although the sequence of the human genome was completely determined by DNA sequencing in 2022 (including the methylome), it is not yet fully understood. Most, but not all, genes have been identified by a combination of high throughput experimental and bioinformatics approaches, yet much work still needs to be done to further elucidate the biological functions of their protein and RNA products.
Size of the human genome
In 2000, scientists reported the sequencing of 88% of the human genome, but as of 2020, at least 8% was still missing. In 2021, scientists reported sequencing the complete genome of a female (i.e., without a Y chromosome). The human Y chromosome, consisting of 62,460,029 base pairs from a different cell line and found in all males, was sequenced completely in January 2022. | Human genome | Wikipedia | 407 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
The current version of the standard reference genome is called GRCh38.p14 (July 2023). It consists of 22 autosomes plus one copy of the X chromosome and one copy of the Y chromosome. It contains approximately 3.1 billion base pairs (3.1 Gb or 3.1 × 10⁹ bp). This represents the size of a composite genome based on data from multiple individuals but it is a good indication of the typical amount of DNA in a haploid set of chromosomes because the Y chromosome is quite small. Most human cells are diploid so they contain twice as much DNA (~6.2 billion base pairs).
In 2023, a draft human pangenome reference was published. It is based on 47 genomes from persons of varied ethnicity. Plans are underway for an improved reference capturing still more biodiversity from a still wider sample.
While there are significant differences among the genomes of human individuals (on the order of 0.1% due to single-nucleotide variants and 0.6% when considering indels), these are considerably smaller than the differences between humans and their closest living relatives, the bonobos and chimpanzees (~1.1% fixed single-nucleotide variants and 4% when including indels).
Molecular organization and gene content
The total length of the human reference genome does not represent the sequence of any specific individual, nor does it represent the sequence of all of the DNA found within a cell. The human reference genome only includes one copy of each of the paired, homologous autosomes plus one copy of each of the two sex chromosomes (X and Y). The total amount of DNA in this reference genome is 3.1 billion base pairs (3.1 Gb).
Protein-coding genes
Protein-coding sequences represent the most widely studied and best understood component of the human genome. These sequences ultimately lead to the production of all human proteins, although several biological processes (e.g. DNA rearrangements and alternative pre-mRNA splicing) can lead to the production of many more unique proteins than the number of protein-coding genes.
The human reference genome contains somewhere between 19,000 and 20,000 protein-coding genes. These genes contain an average of 10 introns and the average size of an intron is about 6 kb (6,000 bp). This means that the average size of a protein-coding gene is about 62 kb and these genes take up about 40% of the genome. | Human genome | Wikipedia | 511 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
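These averages can be sanity-checked with simple arithmetic. In the sketch below, the ~2 kb of exon sequence per gene is an assumption chosen to complete the 62 kb total; the other figures come from the paragraph above:

```python
# Rough sanity check of the quoted averages (figures from the text, except
# the exon total, which is an assumption for illustration).
genes = 19_500            # midpoint of the 19,000-20,000 protein-coding genes
introns_per_gene = 10
intron_kb = 6.0           # ~6 kb average intron
exon_kb = 2.0             # assumed exon/UTR total completing the 62 kb figure
gene_kb = introns_per_gene * intron_kb + exon_kb   # ≈ 62 kb per gene
genome_kb = 3.1e6         # 3.1 Gb reference genome, in kb

print(f"average gene ≈ {gene_kb:.0f} kb")
print(f"fraction of genome in genes ≈ {genes * gene_kb / genome_kb:.0%}")
# -> average gene ≈ 62 kb; fraction ≈ 39%, matching the ~40% quoted above.
```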
Exon sequences consist of coding DNA and untranslated regions (UTRs) at either end of the mature mRNA. The total amount of coding DNA is about 1-2% of the genome.
Many people divide the genome into coding and non-coding DNA based on the idea that coding DNA is the most important functional component of the genome. About 98-99% of the human genome is non-coding DNA.
Non-coding genes
Noncoding RNA molecules play many essential roles in cells, especially in the many reactions of protein synthesis and RNA processing. Noncoding genes include those for tRNAs, ribosomal RNAs, microRNAs, snRNAs and long non-coding RNAs (lncRNAs). The number of reported non-coding genes continues to rise slowly but the exact number in the human genome is yet to be determined. Many RNAs are thought to be non-functional.
Many ncRNAs are critical elements in gene regulation and expression. Noncoding RNA also contributes to epigenetics, transcription, RNA splicing, and the translational machinery. The role of RNA in genetic regulation and disease offers a new potential level of unexplored genomic complexity.
Pseudogenes
Pseudogenes are inactive copies of protein-coding genes, often generated by gene duplication, that have become nonfunctional through the accumulation of inactivating mutations. The number of pseudogenes in the human genome is on the order of 13,000, and in some chromosomes is nearly the same as the number of functional protein-coding genes. Gene duplication is a major mechanism through which new genetic material is generated during molecular evolution.
For example, the olfactory receptor gene family is one of the best-documented examples of pseudogenes in the human genome. More than 60 percent of the genes in this family are non-functional pseudogenes in humans. By comparison, only 20 percent of genes in the mouse olfactory receptor gene family are pseudogenes. Research suggests that this is a species-specific characteristic, as the most closely related primates all have proportionally fewer pseudogenes. This genetic discovery helps to explain the less acute sense of smell in humans relative to other mammals. | Human genome | Wikipedia | 453 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
Regulatory DNA sequences
The human genome has many different regulatory sequences which are crucial to controlling gene expression. Conservative estimates indicate that these sequences make up 8% of the genome; however, extrapolations from the ENCODE project suggest that 20% or more of the genome is gene-regulatory sequence. Some types of non-coding DNA are genetic "switches" that do not encode proteins, but do regulate when and where genes are expressed (called enhancers).
Regulatory sequences have been known since the late 1960s. The first identification of regulatory sequences in the human genome relied on recombinant DNA technology. Later, with the advent of genomic sequencing, the identification of these sequences could be inferred by evolutionary conservation. The evolutionary branch between the primates and mouse, for example, occurred 70–90 million years ago. Computer comparisons of gene sequences that identify conserved non-coding sequences are therefore an indication of their importance in duties such as gene regulation.
Other genomes have been sequenced with the same intention of aiding conservation-guided methods, for example the pufferfish genome. However, regulatory sequences disappear and re-evolve during evolution at a high rate.
As of 2012, the efforts have shifted toward finding interactions between DNA and regulatory proteins by the technique ChIP-Seq, or gaps where the DNA is not packaged by histones (DNase hypersensitive sites), both of which tell where there are active regulatory sequences in the investigated cell type.
Repetitive DNA sequences
Repetitive DNA sequences comprise approximately 50% of the human genome.
About 8% of the human genome consists of tandem DNA arrays or tandem repeats, low complexity repeat sequences that have multiple adjacent copies (e.g. "CAGCAGCAG..."). The tandem sequences may be of variable lengths, from two nucleotides to tens of nucleotides. These sequences are highly variable, even among closely related individuals, and so are used for genealogical DNA testing and forensic DNA analysis. | Human genome | Wikipedia | 398 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
Repeated sequences of fewer than ten nucleotides (e.g. the dinucleotide repeat (AC)n) are termed microsatellite sequences. Among the microsatellite sequences, trinucleotide repeats are of particular importance, as they sometimes occur within coding regions of genes for proteins and may lead to genetic disorders. For example, Huntington's disease results from an expansion of the trinucleotide repeat (CAG)n within the Huntingtin gene on human chromosome 4. Telomeres (the ends of linear chromosomes) end with a microsatellite hexanucleotide repeat of the sequence (TTAGGG)n.
Tandem repeats of longer sequences (arrays of repeated sequences 10–60 nucleotides long) are termed minisatellites.
Transposable genetic elements, DNA sequences that can replicate and insert copies of themselves at other locations within a host genome, are an abundant component in the human genome. The most abundant transposon lineage, Alu, has about 50,000 active copies, and can be inserted into intragenic and intergenic regions. One other lineage, LINE-1, has about 100 active copies per genome (the number varies between people). Together with non-functional relics of old transposons, they account for over half of total human DNA. Sometimes called "jumping genes", transposons have played a major role in sculpting the human genome. Some of these sequences represent endogenous retroviruses, DNA copies of viral sequences that have become permanently integrated into the genome and are now passed on to succeeding generations. There are also a significant number of retroviruses in human DNA, at least 3 of which have been proven to possess an important function (i.e., HIV-like functional HERV-K; envelope genes of non-functional viruses HERV-W and HERV-FRD play a role in placenta formation by inducing cell-cell fusion).
Mobile elements within the human genome can be classified into LTR retrotransposons (8.3% of total genome), SINEs (13.1% of total genome) including Alu elements, LINEs (20.4% of total genome), SVAs (SINE-VNTR-Alu) and Class II DNA transposons (2.9% of total genome).
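Summing the class shares quoted above shows how mobile elements alone approach half the genome (SVAs, whose share is not quoted here, and degraded relics account for the rest):

```python
# Shares of the genome quoted above, by mobile-element class (percent).
shares = {
    "LTR retrotransposons": 8.3,
    "SINEs (incl. Alu)": 13.1,
    "LINEs": 20.4,
    "Class II DNA transposons": 2.9,
}
print(f"quoted classes total: {sum(shares.values()):.1f}% of the genome")
# -> 44.7%; adding SVAs and unclassified transposon relics brings the
#    mobile-element total to over half of human DNA, as stated earlier.
```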
Junk DNA | Human genome | Wikipedia | 488 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
There is no consensus on what constitutes a "functional" element in the genome since geneticists, evolutionary biologists, and molecular biologists employ different definitions and methods. Due to the ambiguity in the terminology, different schools of thought have emerged. In evolutionary definitions, "functional" DNA, whether it is coding or non-coding, contributes to the fitness of the organism, and therefore is maintained by negative evolutionary pressure whereas "non-functional" DNA has no benefit to the organism and therefore is under neutral selective pressure. This type of DNA has been described as junk DNA. In genetic definitions, "functional" DNA is related to how DNA segments manifest by phenotype and "nonfunctional" is related to loss-of-function effects on the organism. In biochemical definitions, "functional" DNA relates to DNA sequences that specify molecular products (e.g. noncoding RNAs) and biochemical activities with mechanistic roles in gene or genome regulation (i.e. DNA sequences that impact cellular level activity such as cell type, condition, and molecular processes). There is no consensus in the literature on the amount of functional DNA since, depending on how "function" is understood, estimates range from up to 90% of the human genome being likely nonfunctional DNA (junk DNA) to up to 80% being likely functional. It is also possible that junk DNA may acquire a function in the future and therefore may play a role in evolution, but this is likely to occur only very rarely. Finally, DNA that is deleterious to the organism and is under negative selective pressure is called garbage DNA.
Sequencing
The first human genome sequences were published in nearly complete draft form in February 2001 by the Human Genome Project and Celera Corporation. Completion of the Human Genome Project's sequencing effort was announced in 2004 with the publication of a draft genome sequence, leaving just 341 gaps in the sequence, representing highly repetitive and other DNA that could not be sequenced with the technology available at the time. The human genome was the first of all vertebrates to be sequenced to such near-completion, and as of 2018, the diploid genomes of over a million individual humans had been determined using next-generation sequencing.
These data are used worldwide in biomedical science, anthropology, forensics and other branches of science. Such genomic studies have led to advances in the diagnosis and treatment of diseases, and to new insights in many fields of biology, including human evolution. | Human genome | Wikipedia | 506 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
By 2018, the total number of genes had been raised to at least 46,831, plus another 2,300 micro-RNA genes. A 2018 population survey found another 300 million bases of human genome that were not in the reference sequence. Prior to the acquisition of the full genome sequence, estimates of the number of human genes ranged from 50,000 to 140,000 (with occasional vagueness about whether these estimates included non-protein coding genes). As genome sequence quality and the methods for identifying protein-coding genes improved, the count of recognized protein-coding genes dropped to 19,000–20,000.
In 2022, the Telomere-to-Telomere (T2T) consortium reported the complete sequence of a human female genome, filling all the gaps in the X chromosome (2020) and the 22 autosomes (May 2021). The previously unsequenced parts contain immune response genes that help to adapt to and survive infections, as well as genes that are important for predicting drug response. The completed human genome sequence will also provide better understanding of human formation as an individual organism and how humans vary both between each other and other species.
Although the 'completion' of the human genome project was announced in 2001, there remained hundreds of gaps, with about 5–10% of the total sequence remaining undetermined. The missing genetic information was mostly in repetitive heterochromatic regions and near the centromeres and telomeres, but also some gene-encoding euchromatic regions. There remained 160 euchromatic gaps in 2015 when the sequences spanning another 50 formerly unsequenced regions were determined. Only in 2020 was the first truly complete telomere-to-telomere sequence of a human chromosome determined, namely of the X chromosome. The first complete telomere-to-telomere sequence of a human autosomal chromosome, chromosome 8, followed a year later. The complete human genome (without Y chromosome) was published in 2021, while with Y chromosome in January 2022.
Genomic variation in humans
Human reference genome
With the exception of identical twins, all humans show significant variation in genomic DNA sequences. The human reference genome (HRG) is used as a standard sequence reference. | Human genome | Wikipedia | 502 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
There are several important points concerning the human reference genome:
The HRG is a haploid sequence. Each chromosome is represented once.
The HRG is a composite sequence, and does not correspond to any actual human individual.
The HRG is periodically updated to correct errors, ambiguities, and unknown "gaps".
The HRG in no way represents an "ideal" or "perfect" human individual. It is simply a standardized representation or model that is used for comparative purposes.
The Genome Reference Consortium is responsible for updating the HRG. Version 38 was released in December 2013.
Measuring human genetic variation
Most studies of human genetic variation have focused on single-nucleotide polymorphisms (SNPs), which are substitutions in individual bases along a chromosome. Most analyses estimate that SNPs occur about once in every 1,000 base pairs, on average, in the euchromatic human genome, although they do not occur at a uniform density. Thus follows the popular statement that "we are all, regardless of race, genetically 99.9% the same", although this would be somewhat qualified by most geneticists. For example, a much larger fraction of the genome is now thought to be involved in copy number variation. A large-scale collaborative effort to catalog SNP variations in the human genome is being undertaken by the International HapMap Project.
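The popular 99.9% figure follows directly from this SNP density and the ~3.1 Gb genome size quoted earlier; a back-of-the-envelope sketch:

```python
# Expected single-base differences between two individuals, using the
# figures quoted above (ignoring copy number and structural variation).
genome_bp = 3.1e9        # haploid reference genome size quoted earlier
snp_per_bp = 1 / 1000    # ~1 SNP per 1,000 bp between two individuals

expected_snps = genome_bp * snp_per_bp
print(f"expected single-base differences: ~{expected_snps / 1e6:.1f} million")
print(f"sequence identity: ~{1 - snp_per_bp:.1%}")
# -> ~3.1 million SNPs, i.e. ~99.9% identity at the single-base level.
```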
The genomic loci and length of certain types of small repetitive sequences are highly variable from person to person, which is the basis of DNA fingerprinting and DNA paternity testing technologies. The heterochromatic portions of the human genome, which total several hundred million base pairs, are also thought to be quite variable within the human population (they are so repetitive and so long that they cannot be accurately sequenced with current technology). These regions contain few genes, and it is unclear whether any significant phenotypic effect results from typical variation in repeats or heterochromatin.
Most gross genomic mutations in gamete germ cells probably result in inviable embryos; however, a number of human diseases are related to large-scale genomic abnormalities. Down syndrome, Turner syndrome, and a number of other diseases result from nondisjunction of entire chromosomes. Cancer cells frequently have aneuploidy of chromosomes and chromosome arms, although a cause-and-effect relationship between aneuploidy and cancer has not been established.
Mapping human genomic variation | Human genome | Wikipedia | 492 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
Whereas a genome sequence lists the order of every DNA base in a genome, a genome map identifies the landmarks. A genome map is less detailed than a genome sequence and aids in navigating around the genome.
An example of a variation map is the HapMap being developed by the International HapMap Project. The HapMap is a haplotype map of the human genome, "which will describe the common patterns of human DNA sequence variation." It catalogs the patterns of small-scale variations in the genome that involve single DNA letters, or bases.
Researchers published the first sequence-based map of large-scale structural variation across the human genome in the journal Nature in May 2008. Large-scale structural variations are differences in the genome among people that range from a few thousand to a few million DNA bases; some are gains or losses of stretches of genome sequence and others appear as re-arrangements of stretches of sequence. These variations include differences in the number of copies individuals have of a particular gene, deletions, translocations and inversions.
Structural variation
Structural variation refers to genetic variants that affect larger segments of the human genome, as opposed to point mutations. Often, structural variants (SVs) are defined as variants of 50 base pairs (bp) or greater, such as deletions, duplications, insertions, inversions and other rearrangements. About 90% of structural variants are noncoding deletions but most individuals have more than a thousand such deletions; the size of deletions ranges from dozens of base pairs to tens of thousands of bp. On average, individuals carry ~3 rare structural variants that alter coding regions, e.g. delete exons. About 2% of individuals carry ultra-rare megabase-scale structural variants, especially rearrangements. That is, millions of base pairs may be inverted within a chromosome; ultra-rare means that they are only found in individuals or their family members and thus have arisen very recently.
SNP frequency across the human genome | Human genome | Wikipedia | 417 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
Single-nucleotide polymorphisms (SNPs) do not occur homogeneously across the human genome. In fact, there is enormous diversity in SNP frequency between genes, reflecting different selective pressures on each gene as well as different mutation and recombination rates across the genome. However, because studies on SNPs are biased towards coding regions, the data generated from them are unlikely to reflect the overall distribution of SNPs throughout the genome. Therefore, the SNP Consortium protocol was designed to identify SNPs with no bias towards coding regions, and the Consortium's 100,000 SNPs generally reflect sequence diversity across the human chromosomes. The SNP Consortium aims to expand the number of SNPs identified across the genome to 300,000 by the end of the first quarter of 2001. Changes in non-coding sequence and synonymous changes in coding sequence are generally more common than non-synonymous changes, reflecting greater selective pressure reducing diversity at positions dictating amino acid identity. Transitional changes are more common than transversions, with CpG dinucleotides showing the highest mutation rate, presumably due to deamination.
Personal genomes
A personal genome sequence is a (nearly) complete sequence of the chemical base pairs that make up the DNA of a single person. Because medical treatments have different effects on different people due to genetic variations such as single-nucleotide polymorphisms (SNPs), the analysis of personal genomes may lead to personalized medical treatment based on individual genotypes. | Human genome | Wikipedia | 308 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
The first personal genome sequence to be determined was that of Craig Venter in 2007. Personal genomes had not been sequenced in the public Human Genome Project to protect the identity of volunteers who provided DNA samples. That sequence was derived from the DNA of several volunteers from a diverse population. However, early in the Venter-led Celera Genomics genome sequencing effort the decision was made to switch from sequencing a composite sample to using DNA from a single individual, later revealed to have been Venter himself. Thus the Celera human genome sequence released in 2000 was largely that of one man. Subsequent replacement of the early composite-derived data and determination of the diploid sequence, representing both sets of chromosomes, rather than a haploid sequence originally reported, allowed the release of the first personal genome. In April 2008, that of James Watson was also completed. In 2009, Stephen Quake published his own genome sequence derived from a sequencer of his own design, the Heliscope. A Stanford team led by Euan Ashley published a framework for the medical interpretation of human genomes implemented on Quake's genome and made whole genome-informed medical decisions for the first time. That team further extended the approach to the West family, the first family sequenced as part of Illumina's Personal Genome Sequencing program. Since then, hundreds of personal genome sequences have been released, including those of Desmond Tutu, and of a Paleo-Eskimo. In 2012, the whole genome sequences of two family trios among 1092 genomes were made public. In November 2013, a Spanish family made four personal exome datasets (about 1% of the genome) publicly available under a Creative Commons public domain license. The Personal Genome Project (started in 2005) is among the few to make both genome sequences and corresponding medical phenotypes publicly available.
The sequencing of individual genomes further unveiled levels of genetic complexity that had not been appreciated before. Personal genomics helped reveal the significant level of diversity in the human genome attributed not only to SNPs but structural variations as well. However, the application of such knowledge to the treatment of disease and in the medical field is only in its very beginnings. Exome sequencing has become increasingly popular as a tool to aid in diagnosis of genetic disease because the exome contributes only 1% of the genomic sequence but accounts for roughly 85% of mutations that contribute significantly to disease.
Human knockouts | Human genome | Wikipedia | 493 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
In humans, gene knockouts naturally occur as heterozygous or homozygous loss-of-function gene knockouts. These knockouts are often difficult to distinguish, especially within heterogeneous genetic backgrounds. They are also difficult to find as they occur in low frequencies.
Populations with high rates of consanguinity, such as countries with high rates of first-cousin marriages, display the highest frequencies of homozygous gene knockouts. Such populations include Pakistan, Iceland, and Amish populations. These populations with a high level of parental-relatedness have been subjects of human knockout research which has helped to determine the function of specific genes in humans. By distinguishing specific knockouts, researchers are able to use phenotypic analyses of these individuals to help characterize the gene that has been knocked out.
Knockouts in specific genes can cause genetic diseases, potentially have beneficial effects, or even result in no phenotypic effect at all. However, determining a knockout's phenotypic effect in humans can be challenging. Challenges to characterizing and clinically interpreting knockouts include difficulty in calling DNA variants, determining disruption of protein function (annotation), and considering the amount of influence mosaicism has on the phenotype.
One major study that investigated human knockouts is the Pakistan Risk of Myocardial Infarction study. It was found that individuals possessing a heterozygous loss-of-function gene knockout for the APOC3 gene had lower triglycerides in the blood after consuming a high fat meal as compared to individuals without the mutation. However, individuals possessing homozygous loss-of-function gene knockouts of the APOC3 gene displayed the lowest level of triglycerides in the blood after the fat load test, as they produce no functional APOC3 protein.
Human genetic disorders | Human genome | Wikipedia | 378 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
Most aspects of human biology involve both genetic (inherited) and non-genetic (environmental) factors. Some inherited variation influences aspects of our biology that are not medical in nature (height, eye color, ability to taste or smell certain compounds, etc.). Moreover, some genetic disorders only cause disease in combination with the appropriate environmental factors (such as diet). With these caveats, genetic disorders may be described as clinically defined diseases caused by genomic DNA sequence variation. In the most straightforward cases, the disorder can be associated with variation in a single gene. For example, cystic fibrosis is caused by mutations in the CFTR gene and is the most common recessive disorder in Caucasian populations, with over 1,300 different mutations known.
Disease-causing mutations in specific genes are usually severe in terms of gene function and are rare, thus genetic disorders are similarly individually rare. However, since there are many genes that can vary to cause genetic disorders, in aggregate they constitute a significant component of known medical conditions, especially in pediatric medicine. Molecularly characterized genetic disorders are those for which the underlying causal gene has been identified. Currently there are approximately 2,200 such disorders annotated in the OMIM database.
Studies of genetic disorders are often performed by means of family-based studies. In some instances, population-based approaches are employed, particularly in the case of so-called founder populations such as those in Finland, French Canada, Utah, Sardinia, etc. Diagnosis and treatment of genetic disorders are usually performed by a geneticist-physician trained in clinical/medical genetics. The results of the Human Genome Project are likely to provide increased availability of genetic testing for gene-related disorders, and eventually improved treatment. Parents can be screened for hereditary conditions and counselled on the consequences, the probability of inheritance, and how to avoid or ameliorate it in their offspring. | Human genome | Wikipedia | 378 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
There are many different kinds of DNA sequence variation, ranging from complete extra or missing chromosomes down to single nucleotide changes. It is generally presumed that much naturally occurring genetic variation in human populations is phenotypically neutral, i.e., has little or no detectable effect on the physiology of the individual (although there may be fractional differences in fitness defined over evolutionary time frames). Genetic disorders can be caused by any or all known types of sequence variation. To molecularly characterize a new genetic disorder, it is necessary to establish a causal link between a particular genomic sequence variant and the clinical disease under investigation. Such studies constitute the realm of human molecular genetics.
With the advent of the Human Genome and International HapMap Project, it has become feasible to explore subtle genetic influences on many common disease conditions such as diabetes, asthma, migraine, schizophrenia, etc. Although some causal links have been made between genomic sequence variants in particular genes and some of these diseases, often with much publicity in the general media, these are usually not considered to be genetic disorders per se as their causes are complex, involving many different genetic and environmental factors. Thus there may be disagreement in particular cases whether a specific medical condition should be termed a genetic disorder.
Additional genetic disorders of note are Kallman syndrome and Pfeiffer syndrome (gene FGFR1), Fuchs corneal dystrophy (gene TCF4), Hirschsprung's disease (genes RET and FECH), Bardet-Biedl syndrome 1 (genes CCDC28B and BBS1), Bardet-Biedl syndrome 10 (gene BBS10), and facioscapulohumeral muscular dystrophy type 2 (genes D4Z4 and SMCHD1). | Human genome | Wikipedia | 371 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
Genome sequencing is now able to narrow the genome down to specific locations to more accurately find mutations that will result in a genetic disorder. Copy number variants (CNVs) and single nucleotide variants (SNVs) can also be detected at the same time as genome sequencing with newer sequencing procedures, called next-generation sequencing (NGS). This analyzes only a small portion of the genome, around 1–2%. The results of this sequencing can be used for clinical diagnosis of a genetic condition, including Usher syndrome, retinal disease, hearing impairments, diabetes, epilepsy, Leigh disease, hereditary cancers, neuromuscular diseases, primary immunodeficiencies, severe combined immunodeficiency (SCID), and diseases of the mitochondria. NGS can also be used to identify carriers of diseases before conception. The diseases that can be detected in this sequencing include Tay-Sachs disease, Bloom syndrome, Gaucher disease, Canavan disease, familial dysautonomia, cystic fibrosis, spinal muscular atrophy, and fragile-X syndrome. Next-generation sequencing can be narrowed down to specifically look for diseases more prevalent in certain ethnic populations.
Evolution
Comparative genomics studies of mammalian genomes suggest that approximately 5% of the human genome has been conserved by evolution since the divergence of extant lineages approximately 200 million years ago, containing the vast majority of genes. The published chimpanzee genome differs from that of the human genome by 1.23% in direct sequence comparisons. Around 20% of this figure is accounted for by variation within each species, leaving only ~1.06% consistent sequence divergence between humans and chimps at shared genes. This nucleotide by nucleotide difference is dwarfed, however, by the portion of each genome that is not shared, including around 6% of functional genes that are unique to either humans or chimps. | Human genome | Wikipedia | 401 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
In other words, the considerable observable differences between humans and chimps may be due as much or more to genome-level variation in the number, function and expression of genes rather than DNA sequence changes in shared genes. Indeed, even within humans, there has been found to be a previously unappreciated amount of copy number variation (CNV) which can make up as much as 5–15% of the human genome. In other words, between humans, there could be +/- 500,000,000 base pairs of DNA, some being active genes, others inactivated, or active at different levels. The full significance of this finding remains to be seen. On average, a typical human protein-coding gene differs from its chimpanzee ortholog by only two amino acid substitutions; nearly one third of human genes have exactly the same protein translation as their chimpanzee orthologs. A major difference between the two genomes is human chromosome 2, which is equivalent to a fusion product of chimpanzee chromosomes 12 and 13 (later renamed chromosomes 2A and 2B, respectively).
Humans have undergone an extraordinary loss of olfactory receptor genes during our recent evolution, which explains our relatively crude sense of smell compared to most other mammals. Evolutionary evidence suggests that the emergence of color vision in humans and several other primate species has diminished the need for the sense of smell.
In September 2016, scientists reported that, based on human DNA genetic studies, all non-Africans in the world today can be traced to a single population that exited Africa between 50,000 and 80,000 years ago.
Mitochondrial DNA
The human mitochondrial DNA is of tremendous interest to geneticists, since it undoubtedly plays a role in mitochondrial disease. It also sheds light on human evolution; for example, analysis of variation in the human mitochondrial genome has led to the postulation of a recent common ancestor for all humans on the maternal line of descent (see Mitochondrial Eve). | Human genome | Wikipedia | 400 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
Due to the damage induced by exposure to reactive oxygen species, mitochondrial DNA (mtDNA) has a more rapid rate of variation than nuclear DNA. This 20-fold higher mutation rate allows mtDNA to be used for more accurate tracing of maternal ancestry. Studies of mtDNA in populations have allowed ancient migration paths to be traced, such as the migration of Native Americans from Siberia or of Polynesians from southeastern Asia. It has also been used to show that there is no trace of Neanderthal DNA in the European gene mixture inherited through purely maternal lineage. Due to the restrictive all-or-none manner of mtDNA inheritance, this result (no trace of Neanderthal mtDNA) would be likely unless there were a large percentage of Neanderthal ancestry, or there was strong positive selection for that mtDNA. For example, going back 5 generations, only 1 of a person's 32 ancestors contributed to that person's mtDNA, so if one of these 32 was pure Neanderthal an expected ~3% of that person's autosomal DNA would be of Neanderthal origin, yet they would have a ~97% chance of having no trace of Neanderthal mtDNA.
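The generational arithmetic in this example can be made explicit. A sketch under the idealized assumptions stated in the text (no pedigree collapse, equal autosomal contribution from each ancestor):

```python
# Idealized pedigree arithmetic for the example above.
generations = 5
ancestors = 2 ** generations          # 32 ancestors at generation 5
autosomal_share = 1 / ancestors       # expected autosomal DNA from each one
p_no_mtdna_trace = 1 - 1 / ancestors  # only the strict maternal line carries mtDNA

print(f"{ancestors} ancestors; each contributes ≈ {autosomal_share:.1%} autosomal DNA")
print(f"chance the Neanderthal ancestor is NOT on the maternal line: {p_no_mtdna_trace:.0%}")
# -> ≈ 3.1% expected autosomal contribution, yet ≈ 97% chance of carrying
#    no Neanderthal mtDNA at all, matching the figures in the text.
```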
Epigenome
Epigenetics describes a variety of features of the human genome that transcend its primary DNA sequence, such as chromatin packaging, histone modifications and DNA methylation, and which are important in regulating gene expression, genome replication and other cellular processes. Epigenetic markers strengthen and weaken transcription of certain genes but do not affect the actual sequence of DNA nucleotides. DNA methylation is a major form of epigenetic control over gene expression and one of the most highly studied topics in epigenetics. During development, the human DNA methylation profile experiences dramatic changes. In early germ line cells, the genome has very low methylation levels. These low levels generally describe active genes. As development progresses, parental imprinting tags lead to increased methylation activity. | Human genome | Wikipedia | 401 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
Epigenetic patterns can be identified between tissues within an individual as well as between individuals themselves. Identical genes that have differences only in their epigenetic state are called epialleles. Epialleles can be placed into three categories: those directly determined by an individual's genotype, those influenced by genotype, and those entirely independent of genotype. The epigenome is also influenced significantly by environmental factors. Diet, toxins, and hormones impact the epigenetic state. Studies in dietary manipulation have demonstrated that methyl-deficient diets are associated with hypomethylation of the epigenome. Such studies establish epigenetics as an important interface between the environment and the genome. | Human genome | Wikipedia | 145 | 42888 | https://en.wikipedia.org/wiki/Human%20genome | Biology and health sciences | Genetics and taxonomy | null |
Anthrax is an infection caused by the bacterium Bacillus anthracis or Bacillus cereus biovar anthracis. Infection typically occurs by contact with the skin, inhalation, or intestinal absorption. Symptom onset occurs between one day and more than two months after the infection is contracted. The skin form presents with a small blister with surrounding swelling that often turns into a painless ulcer with a black center. The inhalation form presents with fever, chest pain, and shortness of breath. The intestinal form presents with diarrhea (which may contain blood), abdominal pains, nausea, and vomiting.
According to the U.S. Centers for Disease Control and Prevention, the first clinical descriptions of cutaneous anthrax were given by Maret in 1752 and Fournier in 1769. Before that, anthrax had been described only in historical accounts. The German scientist Robert Koch was the first to identify Bacillus anthracis as the bacterium that causes anthrax.
Anthrax is spread by contact with the bacterium's spores, which often appear in infectious animal products. Contact is by breathing or eating or through an area of broken skin. It does not typically spread directly between people. Risk factors include people who work with animals or animal products, and military personnel. Diagnosis can be confirmed by finding antibodies or the toxin in the blood or by culture of a sample from the infected site.
Anthrax vaccination is recommended for people at high risk of infection. Immunizing animals against anthrax is recommended in areas where previous infections have occurred. A two-month course of antibiotics such as ciprofloxacin, levofloxacin and doxycycline after exposure can also prevent infection. If infection occurs, treatment is with antibiotics and possibly antitoxin. The type and number of antibiotics used depend on the type of infection. Antitoxin is recommended for those with widespread infection. | Anthrax | Wikipedia | 402 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
A rare disease, human anthrax is most common in Africa and central and southern Asia. It also occurs more regularly in Southern Europe than elsewhere on the continent and is uncommon in Northern Europe and North America. Globally, at least 2,000 cases occur a year, with about two cases a year in the United States. Skin infections represent more than 95% of cases. Without treatment, the risk of death from skin anthrax is 23.7%. For intestinal infection the risk of death is 25 to 75%, while respiratory anthrax has a mortality of 50 to 80%, even with treatment. Until the 20th century, anthrax infections killed hundreds of thousands of people and animals each year. In herbivorous animals, infection occurs when they eat or breathe in the spores while grazing. Carnivorous animals may become infected by killing or eating infected animals.
Several countries have developed anthrax as a weapon. It has been used in biowarfare and bioterrorism since 1914. In 1975, the Biological Weapons Convention prohibited the "development, production and stockpiling" of biological weapons. It has since been used in bioterrorism. Likely delivery methods of weaponized anthrax include aerial dispersal or dispersal through livestock; notable bioterrorism uses include the 2001 anthrax attacks and an incident in 1993 by the Aum Shinrikyo group.
Etymology
The English name comes from anthrax (ἄνθραξ), the Greek word for coal, possibly having Egyptian etymology, because of the characteristic black skin lesions people with a cutaneous anthrax infection develop. The central black eschar surrounded by vivid red skin has long been recognised as typical of the disease. The first recorded use of the word "anthrax" in English is in a 1398 translation of Bartholomaeus Anglicus's work De proprietatibus rerum (On the Properties of Things, 1240).
Anthrax was historically known by a wide variety of names, indicating its symptoms, location, and groups considered most vulnerable to infection. These names include Siberian plague, Cumberland disease, charbon, splenic fever, malignant edema, and woolsorter's disease.
Signs and symptoms
Skin | Anthrax | Wikipedia | 442 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Cutaneous anthrax, also known as hide-porter's disease, is the form in which anthrax occurs on the skin. It is the most common (>90% of cases) and least dangerous form (low mortality with treatment, 23.7% mortality without). Cutaneous anthrax presents as a boil-like skin lesion that eventually forms an ulcer with a black center (eschar). The black eschar often shows up as a large, painless, necrotic ulcer (beginning as an irritating and itchy skin lesion or blister that is dark and usually concentrated as a black dot, somewhat resembling bread mold) at the site of infection. In general, cutaneous infections form within the site of spore penetration two to five days after exposure. Unlike bruises or most other lesions, cutaneous anthrax infections normally do not cause pain. Nearby lymph nodes may become infected, reddened, swollen, and painful. A scab soon forms over the lesion and falls off within a few weeks; complete recovery may take longer. Cutaneous anthrax is typically caused when B. anthracis spores enter through cuts on the skin, and is found most commonly when humans handle infected animals or animal products.
Injection
In December 2009, an outbreak of anthrax occurred among injecting heroin users in the Glasgow and Stirling areas of Scotland, resulting in 14 deaths. It was the first documented non-occupational human anthrax outbreak in the UK since 1960. The source of the anthrax is believed to have been dilution of the heroin with bone meal in Afghanistan. Injected anthrax may have symptoms similar to cutaneous anthrax, except that the characteristic black eschar may be absent; it may also cause infection deep in the muscle and spread faster. This can make it harder to recognise and treat.
Lungs
Inhalation anthrax usually develops within a week after exposure, but may take up to 2 months. During the first few days of illness, most people have fever, chills, and fatigue. These symptoms may be accompanied by cough, shortness of breath, chest pain, and nausea or vomiting, making inhalation anthrax difficult to distinguish from influenza and community-acquired pneumonia. This is often described as the prodromal period. | Anthrax | Wikipedia | 474 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Over the next day or so, shortness of breath, cough, and chest pain become more common, and complaints not involving the chest, such as nausea, vomiting, altered mental status, sweats, and headache, develop in one-third or more of people. Upper respiratory tract symptoms occur in only a quarter of people, and muscle pains are rare. Altered mental status or shortness of breath is generally what prompts people to seek healthcare, and marks the fulminant phase of illness.
Inhalation anthrax first infects the lymph nodes in the chest rather than the lungs themselves, a condition called hemorrhagic mediastinitis, in which bloody fluid accumulates in the chest cavity, causing shortness of breath. The second (pneumonia) stage occurs when the infection spreads from the lymph nodes to the lungs. Symptoms of the second stage develop suddenly within hours or days after the first stage; they include high fever, extreme shortness of breath, shock, and, in fatal cases, rapid death within 48 hours.
Gastrointestinal
Gastrointestinal (GI) infection is most often caused by consuming anthrax-infected meat and is characterized by diarrhea, potentially with blood, abdominal pains, acute inflammation of the intestinal tract, and loss of appetite. Occasional vomiting of blood can occur. Lesions have been found in the intestines and in the mouth and throat. After the bacterium invades the gastrointestinal system, it spreads to the bloodstream and throughout the body, while continuing to make toxins.
Cause
Bacteria | Anthrax | Wikipedia | 311 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Bacillus anthracis is a rod-shaped, Gram-positive, facultatively anaerobic bacterium about 1 by 9 μm in size. It was shown to cause disease by Robert Koch in 1876 when he took a blood sample from an infected cow, isolated the bacteria, and put them into a mouse. The bacterium normally rests in spore form in the soil, and can survive for decades in this state. Herbivores are often infected while grazing, especially when eating rough, irritant, or spiky vegetation; the vegetation has been hypothesized to cause wounds within the gastrointestinal tract, permitting entry of the bacterial spores into the tissues. Once ingested or placed in an open wound, the bacteria begin multiplying inside the animal or human and typically kill the host within a few days or weeks. The spores germinate at the site of entry into the tissues and then spread by the circulation to the lymphatics, where the bacteria multiply.
The production of two powerful exotoxins, lethal toxin and edema toxin, by the bacteria causes death. Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence and the dark, nonclotting blood that oozes from the body orifices. Most anthrax bacteria inside the body after death are outcompeted and destroyed by anaerobic bacteria within minutes to hours post mortem, but anthrax vegetative bacteria that escape the body via oozing blood or the opening of the carcass may form hardy spores. The vegetative bacteria themselves are not contagious; one spore forms per vegetative bacterium. The triggers for spore formation are not known, though oxygen tension and lack of nutrients may play roles. Once formed, these spores are very hard to eradicate.
The infection of herbivores (and occasionally humans) by inhalation normally begins with inhaled spores being transported through the air passages into the tiny air sacs (alveoli) in the lungs. The spores are then picked up by scavenger cells (macrophages) in the lungs and transported through small vessels (lymphatics) to the lymph nodes in the central chest cavity (mediastinum). Damage caused by the anthrax spores and bacilli to the central chest cavity can cause chest pain and difficulty breathing. Once in the lymph nodes, the spores germinate into active bacilli that multiply and eventually burst the macrophages, releasing many more bacilli into the bloodstream to be transferred to the entire body. Once in the bloodstream, these bacilli release three proteins: lethal factor, edema factor, and protective antigen. The three are not toxic by themselves, but their combination is incredibly lethal to humans. Protective antigen combines with these other two factors to form lethal toxin and edema toxin, respectively. These toxins are the primary agents of tissue destruction, bleeding, and death of the host. If antibiotics are administered too late, even if the antibiotics eradicate the bacteria, some hosts still die of toxemia because the toxins produced by the bacilli remain in their systems at lethal dose levels. | Anthrax | Wikipedia | 283 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Exposure and transmission
Anthrax can enter the human body through the intestines (gastrointestinal), lungs (pulmonary), or skin (cutaneous), and causes distinct clinical symptoms based on its site of entry. Anthrax does not usually spread from an infected human to an uninfected human, but if the disease proves fatal, the person's body and its mass of anthrax bacilli become a potential source of infection to others, and special precautions should be used to prevent further contamination. Pulmonary anthrax, if left untreated, is almost always fatal. Historically, pulmonary anthrax was called woolsorters' disease because it was an occupational hazard for people who sorted wool. Today, this form of infection is extremely rare in industrialized nations. Cutaneous anthrax is the most common form of transmission but also the least dangerous of the three. Gastrointestinal anthrax is likely fatal if left untreated, but very rare.

The spores of anthrax are able to survive in harsh conditions for decades or even centuries. Such spores can be found on all continents, including Antarctica. Disturbed grave sites of infected animals have been known to cause infection after 70 years. In one such event, a young boy died from gastrointestinal anthrax after reindeer carcasses from 75 years earlier thawed; anthrax spores traveled through groundwater used for drinking and caused dozens of people, mostly children, to be hospitalized. Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual pathway of exposure for humans. Workers exposed to dead animals and animal products are at the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on open range where they mix with wild animals still occasionally occurs in the U.S. and elsewhere.
Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores, but most exposure levels are not sufficient to produce infection. A lethal infection is reported to result from inhalation of about 10,000–20,000 spores, though this dose varies among host species. | Anthrax | Wikipedia | 444 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Mechanism
The lethality of the anthrax disease is due to the bacterium's two principal virulence factors: the poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils; and the tripartite protein toxin, called anthrax toxin, consisting of protective antigen (PA), edema factor (EF), and lethal factor (LF). PA plus LF produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue swelling (edema), respectively.
To enter the cells, the edema and lethal factors use another protein produced by B. anthracis called protective antigen, which binds to two surface receptors on the host cell. A cell protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments forming a heptameric ring-shaped structure named a prepore. Once in this shape, the complex can competitively bind up to three EFs or LFs, forming a resistant complex. Receptor-mediated endocytosis occurs next, providing the newly formed toxic complex access to the interior of the host cell. The acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the cytosol. It is unknown how exactly the complex results in the death of the cell. | Anthrax | Wikipedia | 326 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Edema factor is a calmodulin-dependent adenylate cyclase. Adenylate cyclase catalyzes the conversion of ATP into cyclic AMP (cAMP) and pyrophosphate. The complexation of adenylate cyclase with calmodulin removes calmodulin from stimulating calcium-triggered signaling, thus inhibiting the immune response; by this process, edema factor inactivates neutrophils (a type of phagocytic cell) so they cannot phagocytose bacteria. Lethal factor was long presumed to cause macrophages to make TNF-alpha and interleukin 1 beta (IL1B). TNF-alpha is a cytokine whose primary role is to regulate immune cells, as well as to induce inflammation and apoptosis or programmed cell death. Interleukin 1 beta is another cytokine that also regulates inflammation and apoptosis. The overproduction of TNF-alpha and IL1B ultimately leads to septic shock and death. However, recent evidence indicates anthrax also targets endothelial cells lining the serous cavities (the pericardial, pleural, and peritoneal cavities), as well as lymph vessels and blood vessels, causing vascular leakage of fluid and cells, and ultimately hypovolemic shock and septic shock.
Diagnosis
Various techniques may be used for the direct identification of B. anthracis in clinical material. First, specimens may be Gram stained. Bacillus spp. are quite large (3 to 4 μm long), may grow in long chains, and stain Gram-positive. To confirm the organism is B. anthracis, rapid diagnostic techniques such as polymerase chain reaction-based assays and immunofluorescence microscopy may be used.
All Bacillus species grow well on 5% sheep blood agar and other routine culture media. Polymyxin-lysozyme-EDTA-thallous acetate can be used to isolate B. anthracis from contaminated specimens, and bicarbonate agar is used as an identification method to induce capsule formation. Bacillus spp. usually grow within 24 hours of incubation at 35 °C, in ambient air (room temperature) or in 5% CO2. If bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2. B. anthracis colonies are medium-large, gray, flat, and irregular with swirling projections, often referred to as having a "medusa head" appearance, and are not hemolytic on 5% sheep blood agar. The bacteria are not motile, susceptible to penicillin, and produce a wide zone of lecithinase on egg yolk agar. Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect hemagglutination, and enzyme-linked immunosorbent assay to detect antibodies. The best confirmatory precipitation test for anthrax is the Ascoli test.
Prevention
Precautions are taken to avoid contact with the skin and any fluids exuded through natural body openings of a deceased body that is suspected of harboring anthrax. The body should be put in strict quarantine. A blood sample is collected and sealed in a container and analyzed in an approved laboratory to ascertain if anthrax is the cause of death. The body should be sealed in an airtight body bag and incinerated to prevent the transmission of anthrax spores. Microscopic visualization of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome methylene blue (McFadyean stain) is fully diagnostic, though the culture of the organism is still the gold standard for diagnosis. Full isolation of the body is important to prevent possible contamination of others. | Anthrax | Wikipedia | 430 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Protective, impermeable clothing and equipment such as rubber gloves, rubber apron, and rubber boots with no perforations are used when handling the body. No skin, especially if it has any wounds or scratches, should be exposed. Disposable personal protective equipment is preferable, but if not available, decontamination can be achieved by autoclaving. Used disposable equipment is burned and/or buried after use. All contaminated bedding or clothing is isolated in double plastic bags and treated as biohazard waste. Respiratory equipment capable of filtering small particles, such as the US National Institute for Occupational Safety and Health- and Mine Safety and Health Administration-approved high-efficiency respirator, is worn. Addressing anthrax from a One Health perspective can reduce the risks of transmission and better protect both human and animal populations.
Anthrax in environmental sources such as air, water, and soil can be addressed by disinfection with effective microorganisms, applied by spraying, and by adding bokashi mudballs mixed with effective microorganisms to contaminated waterways.
Vaccines
Vaccines against anthrax for use in livestock and humans have had a prominent place in the history of medicine. The French scientist Louis Pasteur developed the first effective vaccine in 1881. Human anthrax vaccines were developed by the Soviet Union in the late 1930s and in the US and UK in the 1950s. The current FDA-approved US vaccine was formulated in the 1960s. | Anthrax | Wikipedia | 301 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Currently administered human anthrax vaccines include acellular subunit vaccine (United States) and live vaccine (Russia) varieties. All currently used anthrax vaccines show considerable local and general reactogenicity (erythema, induration, soreness, fever), and serious adverse reactions occur in about 1% of recipients. The American product, BioThrax, is licensed by the FDA and was formerly administered in a six-dose primary series at 0, 2, and 4 weeks and 6, 12, and 18 months, with annual boosters to maintain immunity. In 2008, the FDA approved omitting the week-2 dose, resulting in the currently recommended five-dose series. This five-dose series is available to military personnel, scientists who work with anthrax, and members of the public whose jobs put them at risk. New second-generation vaccines currently being researched include recombinant live vaccines and recombinant subunit vaccines. The use of a modern product (BioThrax) to protect American troops against the use of anthrax in biological warfare has been controversial.
Antibiotics
Preventive antibiotics are recommended in those who have been exposed. Early detection of sources of anthrax infection can allow preventive measures to be taken. In response to the anthrax attacks of October 2001, the United States Postal Service (USPS) installed biodetection systems (BDSs) in their large-scale mail processing facilities. BDS response plans were formulated by the USPS in conjunction with local responders including fire, police, hospitals, and public health. Employees of these facilities have been educated about anthrax, response actions, and prophylactic medication. Because of the time delay inherent in getting final verification that anthrax has been used, prophylactic antibiotic treatment of possibly exposed personnel must be started as soon as possible.
Treatment | Anthrax | Wikipedia | 386 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Anthrax cannot be spread from person to person, except in the rare case of skin exudates from cutaneous anthrax. However, a person's clothing and body may be contaminated with anthrax spores. Effective decontamination of people can be accomplished by a thorough wash-down with antimicrobial soap and water. Wastewater is treated with bleach or another antimicrobial agent. Effective decontamination of articles can be accomplished by boiling them in water for 30 minutes or longer. Chlorine bleach is ineffective in destroying spores and vegetative cells on surfaces, though formaldehyde is effective. Burning clothing is very effective in destroying spores. After decontamination, there is no need to immunize, treat, or isolate contacts of persons ill with anthrax unless they were also exposed to the same source of infection.
Antibiotics
Early antibiotic treatment of anthrax is essential; delay significantly lessens chances for survival. Treatment for anthrax infection and other bacterial infections includes large doses of intravenous and oral antibiotics, such as fluoroquinolones (ciprofloxacin), doxycycline, erythromycin, vancomycin, or penicillin. FDA-approved agents include ciprofloxacin, doxycycline, and penicillin. In possible cases of pulmonary anthrax, early antibiotic prophylaxis treatment is crucial to prevent possible death. Many attempts have been made to develop new drugs against anthrax, but existing drugs are effective if treatment is started soon enough.
Monoclonal antibodies
In May 2009, Human Genome Sciences submitted a biologic license application (BLA, permission to market) for its new drug, raxibacumab (brand name ABthrax) intended for emergency treatment of inhaled anthrax. On 14 December 2012, the US Food and Drug Administration approved raxibacumab injection to treat inhalational anthrax. Raxibacumab is a monoclonal antibody that neutralizes toxins produced by B. anthracis. In March 2016, FDA approved a second anthrax treatment using a monoclonal antibody which neutralizes the toxins produced by B. anthracis. Obiltoxaximab is approved to treat inhalational anthrax in conjunction with appropriate antibacterial drugs, and for prevention when alternative therapies are not available or appropriate. | Anthrax | Wikipedia | 512 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Biologic for Drug-, Antibody- or Vaccine-resistant Anthrax
Treatment of multi-drug-resistant, antibody-resistant or vaccine-resistant anthrax is also possible. Legler et al. showed that pegylated CapD (capsule depolymerase) could provide protection against 5 LD50 exposures to lethal Ames spores without the use of antibiotics, monoclonal antibodies, or vaccines. The CapD enzyme removes the poly-D-glutamate (PDGA) capsular material from the bacteria, rendering them susceptible to innate immune responses; the unencapsulated bacteria can then be cleared.
Prognosis
Cutaneous anthrax is rarely fatal if treated, because the infection area is limited to the skin, preventing the lethal factor, edema factor, and protective antigen from entering and destroying a vital organ. Without treatment, up to 20% of cutaneous infection cases progress to toxemia and death.
Before 2001, fatality rates for inhalation anthrax were 90%; since then, they have fallen to 45%. People who progress to the fulminant phase of inhalational anthrax nearly always die, with one case study showing a death rate of 97%. Anthrax meningoencephalitis is also nearly always fatal.
Gastrointestinal anthrax infections can be treated, but usually result in fatality rates of 25% to 60%, depending upon how soon treatment commences.
Injection anthrax is the rarest form of anthrax and has been observed only among groups of people who inject heroin.
Animals
Anthrax can have devastating effects on animals. It primarily affects herbivores such as cattle, sheep, and goats, but a wide range of mammals and birds can also be susceptible. Infection typically occurs through the ingestion of spores in contaminated soil or plants. Once inside the host, the spores transform into active bacteria, producing lethal toxins that lead to severe symptoms. Infected animals often exhibit high fever, rapid breathing, trembling, and convulsions; they may stagger and finally collapse and die within hours to days. Anthrax poses significant challenges to livestock management and wildlife conservation, and it is a concern for public health as well as animal health because it can occasionally be transmitted to humans through contact with infected animals or contaminated products.
Epidemiology
Globally, at least 2,000 cases occur a year.
United States
The last fatal case of natural inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after working with infected wool imported from Pakistan. To minimize the chance of spreading the disease, the body was transported to UCLA in a sealed plastic body bag within a sealed metal container for autopsy.
Gastrointestinal anthrax is exceedingly rare in the United States, with only two cases on record. The first case was reported in 1942, according to the Centers for Disease Control and Prevention. During December 2009, the New Hampshire Department of Health and Human Services confirmed a case of gastrointestinal anthrax in an adult female. The CDC investigated the source and the possibility that it was contracted from an African drum recently used by the woman taking part in a drum circle. The woman apparently inhaled anthrax, in spore form, from the hide of the drum. She became critically ill, but with gastrointestinal anthrax rather than inhaled anthrax, which made her unique in American medical history. The building where the infection took place was cleaned and reopened to the public and the woman recovered. The New Hampshire state epidemiologist, Jodie Dionne-Odom, stated "It is a mystery. We really don't know why it happened." | Anthrax | Wikipedia | 480 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
In 2007, two cases of cutaneous anthrax were reported in Danbury, Connecticut. The first case involved a maker of traditional African-style drums who was working with a goat hide purchased from a dealer in New York City which had previously been cleared by Customs. While the hide was being scraped, a spider bite led to the spores entering the bloodstream. His son also became infected.
Croatia
In July 2022, dozens of cattle in a nature park in Lonjsko Polje, a flood plain by the Sava river, died of anthrax, and six people were hospitalized with mild, skin-related symptoms.
United Kingdom
In November 2008, a drum maker in the United Kingdom who worked with untreated animal skins died from anthrax. In December 2009, an outbreak of anthrax occurred among heroin addicts in the Glasgow and Stirling areas of Scotland, resulting in 14 deaths. The source of the anthrax is believed to have been dilution of the heroin with bone meal in Afghanistan.
History
Discovery
Robert Koch, a German physician and scientist, first identified the bacterium that causes anthrax in 1875 in Wollstein (now Wolsztyn, Poland). His pioneering work in the late 19th century was one of the first demonstrations that diseases could be caused by microbes. In a groundbreaking series of experiments, he uncovered the lifecycle and means of transmission of anthrax. His experiments not only helped create an understanding of anthrax but also helped elucidate the role of microbes in causing illness at a time when debates still took place over spontaneous generation versus the germ theory of disease. Koch went on to study the mechanisms of other diseases and won the 1905 Nobel Prize in Physiology or Medicine for his discovery of the bacterium causing tuberculosis.
Although Koch arguably made the greatest theoretical contribution to understanding anthrax, other researchers were more concerned with the practical question of how to prevent the disease. In Britain, where anthrax affected workers in the wool, worsted, hides, and tanning industries, it was viewed with fear. John Henry Bell, a doctor born and based in Bradford, first made the link between the mysterious and deadly "woolsorter's disease" and anthrax, showing in 1878 that they were one and the same. In the early 20th century, Friedrich Wilhelm Eurich, a German bacteriologist who had settled in Bradford with his family as a child, carried out important research for the local Anthrax Investigation Board. Eurich also made valuable contributions to a Home Office Departmental Committee of Inquiry, established in 1913 to address the continuing problem of industrial anthrax. His work in this capacity, much of it in collaboration with the factory inspector G. Elmhirst Duckering, led directly to the Anthrax Prevention Act (1919).
First vaccination
Anthrax posed a major economic challenge in France and elsewhere during the 19th century. Horses, cattle, and sheep were particularly vulnerable, and national funds were set aside to investigate the production of a vaccine. French scientist Louis Pasteur was charged with the production of a vaccine, following his successful work in developing methods that helped to protect the important wine and silk industries.
In May 1881, Pasteur – in collaboration with his assistants Jean-Joseph Henri Toussaint, Émile Roux and others – performed a public experiment at Pouilly-le-Fort to demonstrate his concept of vaccination. He prepared two groups of 25 sheep, one goat, and several cattle. The animals of one group were twice injected with an anthrax vaccine prepared by Pasteur, at an interval of 15 days; the control group was left unvaccinated. Thirty days after the first injection, both groups were injected with a culture of live anthrax bacteria. All the animals in the unvaccinated group died, while all of the animals in the vaccinated group survived. | Anthrax | Wikipedia | 432 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
After this apparent triumph, which was widely reported in the local, national, and international press, Pasteur made strenuous efforts to export the vaccine beyond France. He used his celebrity status to establish Pasteur Institutes across Europe and Asia, and his nephew, Adrien Loir, travelled to Australia in 1888 to try to introduce the vaccine to combat anthrax in New South Wales. Ultimately, the vaccine was unsuccessful in the challenging climate of rural Australia, and it was soon superseded by a more robust version developed by local researchers John Gunn and John McGarvie Smith.
The human vaccine for anthrax became available in 1954. This was a cell-free vaccine instead of the live-cell Pasteur-style vaccine used for veterinary purposes. An improved cell-free vaccine became available in 1970.
Engineered strains
The Sterne strain of anthrax, named after the Trieste-born immunologist Max Sterne, is an attenuated strain used as a vaccine, which contains only the anthrax toxin virulence plasmid and not the polyglutamic acid capsule-expressing plasmid.
Strain 836, created by the Soviet bioweapons program in the 1980s, was later called by the Los Angeles Times "the most virulent and vicious strain of anthrax known to man".
The virulent Ames strain, which was used in the 2001 anthrax attacks in the United States, has received the most news coverage of any anthrax outbreak. The Ames strain contains two virulence plasmids, which separately encode for a three-protein toxin, called anthrax toxin, and a polyglutamic acid capsule.
Nonetheless, the Vollum strain, developed but never used as a biological weapon during the Second World War, is much more dangerous. The Vollum (also incorrectly referred to as Vellum) strain was isolated in 1935 from a cow in Oxfordshire. This same strain was used during the Gruinard bioweapons trials. A variation of Vollum, known as "Vollum 1B", was used during the 1960s in the US and UK bioweapon programs. Vollum 1B is widely believed to have been isolated from William A. Boyles, a 46-year-old scientist at the US Army Biological Warfare Laboratories at Camp (later Fort) Detrick, Maryland, who died in 1951 after being accidentally infected with the Vollum strain.
Society and culture | Anthrax | Wikipedia | 497 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Site cleanup
Anthrax spores can survive for very long periods of time in the environment after release. Chemical methods for cleaning anthrax-contaminated sites or materials may use oxidizing agents such as peroxides, ethylene oxide, Sandia Foam, chlorine dioxide (used in the Hart Senate Office Building), peracetic acid, ozone gas, hypochlorous acid, sodium persulfate, and liquid bleach products containing sodium hypochlorite. Nonoxidizing agents shown to be effective for anthrax decontamination include methyl bromide, formaldehyde, and metam sodium. These agents destroy bacterial spores. All of the aforementioned anthrax decontamination technologies have been demonstrated to be effective in laboratory tests conducted by the US EPA or others.
Decontamination techniques for Bacillus anthracis spores are affected by the material with which the spores are associated, environmental factors such as temperature and humidity, and microbiological factors such as the spore-forming species, the B. anthracis strain, and the test methods used.
A bleach solution for treating hard surfaces has been approved by the EPA. Chlorine dioxide has emerged as the preferred biocide against anthrax-contaminated sites, having been employed in the treatment of numerous government buildings over the past decade. Its chief drawback is that it must be generated in situ, on demand.
To speed the process, trace amounts of a nontoxic catalyst composed of iron and tetra-amido macrocyclic ligands are combined with sodium carbonate and bicarbonate and converted into a spray. The spray formula is applied to an infested area and is followed by another spray containing tert-butyl hydroperoxide.
Using the catalyst method, complete destruction of all anthrax spores can be achieved in under 30 minutes. A standard catalyst-free spray destroys fewer than half the spores in the same amount of time.
Cleanups at a Senate Office Building, several contaminated postal facilities, and other US government and private office buildings, carried out in a collaborative effort headed by the Environmental Protection Agency, showed decontamination to be possible but time-consuming and costly. Clearing the Senate Office Building of anthrax spores cost $27 million, according to the Government Accountability Office. Cleaning the Brentwood postal facility in Washington cost $130 million and took 26 months. Since then, newer and less costly methods have been developed.
Cleanup of anthrax-contaminated areas on ranches and in the wild is much more problematic. Carcasses may be burned, though often 3 days are needed to burn a large carcass and this is not feasible in areas with little wood. Carcasses may also be buried, though the burying of large animals deeply enough to prevent resurfacing of spores requires much manpower and expensive tools. Carcasses have been soaked in formaldehyde to kill spores, though this has environmental contamination issues. Block burning of vegetation in large areas enclosing an anthrax outbreak has been tried; this, while environmentally destructive, causes healthy animals to move away from an area with carcasses in search of fresh grass. Some wildlife workers have experimented with covering fresh anthrax carcasses with shadecloth and heavy objects. This prevents some scavengers from opening the carcasses, thus allowing the putrefactive bacteria within the carcass to kill the vegetative B. anthracis cells and preventing sporulation. This method also has drawbacks, as scavengers such as hyenas are capable of infiltrating almost any exclosure.
The experimental site at Gruinard Island is said to have been decontaminated with a mixture of formaldehyde and seawater by the Ministry of Defence. It is not clear whether similar treatments had been applied to US test sites.
Biological warfare
Anthrax spores have been used as a biological warfare weapon. The first modern use occurred when Nordic rebels, supplied by the German General Staff, used anthrax with unknown results against the Imperial Russian Army in Finland in 1916. Anthrax was first tested as a biological warfare agent by Unit 731 of the Japanese Kwantung Army in Manchuria during the 1930s; some of this testing involved intentional infection of prisoners of war, thousands of whom died. Anthrax, designated at the time as Agent N, was also investigated by the Allies in the 1940s.
In 1942, British scientists at Porton Down began research on Operation Vegetarian, an ultimately unused biowarfare military operation plan which called for animal feed pellets containing linseed infected with anthrax spores of the Vollum-14578 strain to be dropped by air over the countryside of Nazi Germany. The pellets would be eaten by cattle, which would in turn be eaten by the human population and as such severely disrupt the German war effort. In the same year, bioweapons tests were carried out on the uninhabited Gruinard Island in the Scottish Highlands, with Porton Down scientists studying the effect of anthrax on the island's population of sheep. Ultimately, five million pellets were created, though plans to drop them over Germany using Royal Air Force bombers in 1944 were scrapped after the success of Operation Overlord and the subsequent Allied liberation of France. All pellets were destroyed using incinerators in 1945.
Weaponized anthrax was part of the US stockpile prior to 1972, when the United States signed the Biological Weapons Convention. President Nixon ordered the dismantling of US biowarfare programs in 1969 and the destruction of all existing stockpiles of bioweapons. In 1978–79, the Rhodesian government used anthrax against cattle and humans during its campaign against rebels. The Soviet Union created and stored 100 to 200 tons of anthrax spores at Kantubek on Vozrozhdeniya Island; they were abandoned in 1992 and destroyed in 2002.
American military and British Army personnel are routinely vaccinated against anthrax prior to active service in places where biological attacks are considered a threat.
Sverdlovsk incident (2 April 1979) | Anthrax | Wikipedia | 352 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
Despite signing the 1972 agreement to end bioweapon production, the government of the Soviet Union had an active bioweapons program that included the production of hundreds of tons of anthrax after this period. On 2 April 1979, some of the over one million people living in Sverdlovsk (now called Yekaterinburg, Russia), east of Moscow, were exposed to an accidental release of anthrax from a biological weapons complex located near there. At least 94 people were infected, of whom at least 68 died. One victim died four days after the release, ten died over an eight-day period at the peak of the deaths, and the last died six weeks later. Extensive cleanup, vaccinations, and medical interventions managed to save about 30 of the victims. Extensive cover-ups and destruction of records by the KGB continued from 1979 until Russian President Boris Yeltsin admitted this anthrax accident in 1992. Jeanne Guillemin reported in 1999 that a combined Russian and United States team investigated the accident in 1992.
Nearly all of the night-shift workers of a ceramics plant directly across the street from the biological facility (compound 19) became infected, and most died. Since most were men, some NATO governments suspected the Soviet Union had developed a sex-specific weapon. The government blamed the outbreak on the consumption of anthrax-tainted meat, and ordered the confiscation of all uninspected meat that entered the city. It also ordered all stray dogs to be shot and instructed people to avoid contact with sick animals. A voluntary evacuation and anthrax vaccination program was established for people aged 18 to 55.
To support the cover-up story, Soviet medical and legal journals published articles about an outbreak in livestock that caused gastrointestinal anthrax in people having consumed infected meat, and cutaneous anthrax in people having come into contact with the animals. All medical and public health records were confiscated by the KGB. In addition to the medical problems the outbreak caused, it also prompted Western countries to be more suspicious of a covert Soviet bioweapons program and to increase their surveillance of suspected sites. In 1986, the US government was allowed to investigate the incident, and concluded the exposure was from aerosol anthrax from a military weapons facility. In 1992, President Yeltsin admitted he was "absolutely certain" that "rumors" about the Soviet Union violating the 1972 Bioweapons Treaty were true. The Soviet Union, like the US and UK, had agreed to submit information to the UN about their bioweapons programs, but omitted known facilities and never acknowledged their weapons program.
Anthrax bioterrorism
In theory, anthrax spores can be cultivated with minimal special equipment and a first-year collegiate microbiological education.
To make large amounts of an aerosol form of anthrax suitable for biological warfare requires extensive practical knowledge, training, and highly advanced equipment.
Concentrated anthrax spores were used for bioterrorism in the 2001 anthrax attacks in the United States, delivered by mailing postal letters containing the spores. The letters were sent to several news media offices and two Democratic senators: Tom Daschle of South Dakota and Patrick Leahy of Vermont. As a result, 22 people were infected and five died. Only a few grams of material were used in these attacks. In August 2008, the US Department of Justice announced it believed that Bruce Ivins, a senior biodefense researcher employed by the United States government, was responsible. These events also spawned many anthrax hoaxes.
Due to these events, the US Postal Service installed biohazard detection systems at its major distribution centers to actively scan for anthrax being transported through the mail. As of 2020, no positive alerts by these systems have occurred.
Decontaminating mail
In response to the postal anthrax attacks and hoaxes, the United States Postal Service sterilized some mail using gamma irradiation and treatment with a proprietary enzyme formula supplied by Sipco Industries. | Anthrax | Wikipedia | 484 | 42898 | https://en.wikipedia.org/wiki/Anthrax | Biology and health sciences | Infectious disease | null |
A scientific experiment performed by a high school student, later published in the Journal of Medical Toxicology, suggested that a domestic electric iron at its hottest setting, used for at least 5 minutes, should destroy all anthrax spores in a common postal envelope.
Other animals
Anthrax is especially rare in dogs and cats, as is evidenced by a single reported case in the United States in 2001. Anthrax outbreaks occur in some wild animal populations with some regularity.
Russian researchers estimate that arctic permafrost contains around 1.5 million anthrax-infected reindeer carcasses, and the spores may survive in the permafrost for up to 10⁵ years. A risk exists that global warming in the Arctic can thaw the permafrost, releasing anthrax spores in the carcasses. In 2016, an anthrax outbreak in reindeer was linked to a 75-year-old carcass that defrosted during a heat wave.
A photodiode is a semiconductor diode sensitive to photon radiation, such as visible light, infrared or ultraviolet radiation, X-rays and gamma rays. It produces an electrical current when it absorbs photons. This can be used for detection and measurement applications, or for the generation of electrical power in solar cells. Photodiodes are used in a wide range of applications throughout the electromagnetic spectrum from visible light photocells to gamma ray spectrometers.
Principle of operation
A photodiode is a PIN structure or p–n junction. When a photon of sufficient energy strikes the diode, it creates an electron–hole pair. This mechanism is also known as the inner photoelectric effect. If the absorption occurs in the junction's depletion region, or one diffusion length away from it, these carriers are swept from the junction by the built-in electric field of the depletion region. Thus holes move toward the anode, and electrons toward the cathode, and a photocurrent is produced. The total current through the photodiode is the sum of the dark current (current that is passed in the absence of light) and the photocurrent, so the dark current must be minimized to maximize the sensitivity of the device.
To first order, for a given spectral distribution, the photocurrent is linearly proportional to the irradiance.
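Stated compactly, these relations can be written as follows. This is a minimal sketch in standard (not article-specific) notation, where I_dark is the dark current, P_opt the incident optical power, and R(λ) the spectral responsivity defined under Features below:

```latex
% Total current is the sum of dark current and photocurrent;
% to first order the photocurrent scales linearly with optical power.
I_{\text{total}} = I_{\text{dark}} + I_{\text{ph}},
\qquad
I_{\text{ph}} \approx \mathcal{R}(\lambda)\, P_{\text{opt}}
```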
Photovoltaic mode
In photovoltaic mode (zero bias), photocurrent flows into the anode through a short circuit to the cathode. If the circuit is opened or has a load impedance, restricting the photocurrent out of the device, a voltage builds up in the direction that forward biases the diode, that is, anode positive with respect to cathode. If the circuit is shorted or the impedance is low, a forward current will consume all or some of the photocurrent. This mode exploits the photovoltaic effect, which is the basis for solar cells – a traditional solar cell is just a large area photodiode. For optimum power output, the photovoltaic cell will be operated at a voltage that causes only a small forward current compared to the photocurrent.
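To illustrate the operating point in photovoltaic mode, the ideal-diode model gives the open-circuit voltage at which the forward diode current exactly cancels the photocurrent. The sketch below is a minimal illustration under that model; the photocurrent and saturation-current values are assumptions chosen for the example, not figures from this article:

```python
import math

# Ideal-diode model in photovoltaic mode: the voltage V across the diode
# drives a forward current I_f = I_0 * (exp(V / V_T) - 1) that opposes the
# photocurrent I_ph. At open circuit the two cancel, giving V_oc.
V_T = 0.02585   # thermal voltage kT/q at ~300 K, in volts
I_ph = 1e-3     # assumed photocurrent: 1 mA (illustrative)
I_0 = 1e-10     # assumed reverse saturation current: 0.1 nA (illustrative)

V_oc = V_T * math.log(I_ph / I_0 + 1)
print(f"Open-circuit voltage: {V_oc:.3f} V")  # about 0.417 V
```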
Photoconductive mode | Photodiode | Wikipedia | 466 | 42937 | https://en.wikipedia.org/wiki/Photodiode | Technology | Components | null |
In photoconductive mode the diode is reverse biased, that is, with the cathode driven positive with respect to the anode. This reduces the response time because the additional reverse bias increases the width of the depletion layer, which decreases the junction's capacitance and increases the region with an electric field that will cause electrons to be quickly collected. The reverse bias also creates dark current without much change in the photocurrent.
Although this mode is faster, the photoconductive mode can exhibit more electronic noise due to dark current or avalanche effects. The leakage current of a good PIN diode is so low (<1 nA) that the Johnson–Nyquist noise of the load resistance in a typical circuit often dominates.
Related devices
Avalanche photodiodes are photodiodes with structure optimized for operating with high reverse bias, approaching the reverse breakdown voltage. This allows each photo-generated carrier to be multiplied by avalanche breakdown, resulting in internal gain within the photodiode, which increases the effective responsivity of the device.
A phototransistor is a light-sensitive transistor. A common type of phototransistor, the bipolar phototransistor, is in essence a bipolar transistor encased in a transparent case so that light can reach the base–collector junction. It was invented by John N. Shive at Bell Labs in 1948 but it was not announced until 1950. The electrons that are generated by photons in the base–collector junction are injected into the base, and this photodiode current is amplified by the transistor's current gain β (or h_FE). If the base and collector leads are used and the emitter is left unconnected, the phototransistor becomes a photodiode. While phototransistors have a higher responsivity for light, they are not able to detect low levels of light any better than photodiodes. Phototransistors also have significantly longer response times. Another type of phototransistor, the field-effect phototransistor (also known as photoFET), is a light-sensitive field-effect transistor. Unlike photobipolar transistors, photoFETs control drain-source current by creating a gate voltage.
A solaristor is a two-terminal gate-less phototransistor. A compact class of two-terminal phototransistors (or solaristors) was demonstrated in 2018 by ICN2 researchers. The novel concept is a two-in-one power source plus transistor device that runs on solar energy by exploiting a memristive effect in the flow of photogenerated carriers.
Materials
The material used to make a photodiode is critical to defining its properties, because only photons with sufficient energy to excite electrons across the material's bandgap will produce significant photocurrents.
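The bandgap therefore sets a long-wavelength cutoff: a photon with wavelength longer than hc/E_g cannot create an electron–hole pair. A minimal sketch of this calculation, using textbook bandgap values that are assumptions for illustration rather than figures from this article:

```python
H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def cutoff_wavelength_nm(bandgap_ev: float) -> float:
    """Longest wavelength whose photons can bridge the bandgap."""
    return H_C_EV_NM / bandgap_ev

print(f"Si (Eg ~ 1.12 eV): {cutoff_wavelength_nm(1.12):.0f} nm")  # ~1107 nm
print(f"Ge (Eg ~ 0.67 eV): {cutoff_wavelength_nm(0.67):.0f} nm")  # ~1851 nm
```

The silicon result is consistent with the roughly 1100 nm upper limit for silicon quoted later in this article.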
Materials commonly used to produce photodiodes include silicon, germanium, indium gallium arsenide, lead(II) sulfide, and mercury cadmium telluride.
Because of their greater bandgap, silicon-based photodiodes generate less noise than germanium-based photodiodes.
Binary materials, such as MoS₂, and graphene have emerged as new materials for the production of photodiodes.
Unwanted and wanted photodiode effects
Any p–n junction, if illuminated, is potentially a photodiode. Semiconductor devices such as diodes, transistors and ICs contain p–n junctions, and will not function correctly if they are illuminated by unwanted light. This is avoided by encapsulating devices in opaque housings. If these housings are not completely opaque to high-energy radiation (ultraviolet, X-rays, gamma rays), diodes, transistors and ICs can malfunction due to induced photo-currents. Background radiation from the packaging is also significant. Radiation hardening mitigates these effects.
In some cases, the effect is actually wanted, for example to use LEDs as light-sensitive devices (see LED as light sensor) or even for energy harvesting, then sometimes called light-emitting and light-absorbing diodes (LEADs).
Features
Critical performance parameters of a photodiode include spectral responsivity, dark current, response time and noise-equivalent power. | Photodiode | Wikipedia | 402 | 42937 | https://en.wikipedia.org/wiki/Photodiode | Technology | Components | null |
Spectral responsivity The spectral responsivity is the ratio of the generated photocurrent to the incident light power, expressed in A/W when used in photoconductive mode. The wavelength dependence may also be expressed as a quantum efficiency, the ratio of the number of photogenerated carriers to incident photons, which is a unitless quantity.
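Responsivity and quantum efficiency are linked through the photon energy hc/λ: a quantum efficiency η corresponds to a responsivity R = ηeλ/(hc). A minimal sketch of the conversion; the 80% quantum efficiency is an assumed, illustrative value:

```python
E = 1.602176634e-19  # elementary charge, C
H = 6.62607015e-34   # Planck constant, J·s
C = 299_792_458      # speed of light, m/s

def responsivity_a_per_w(wavelength_nm: float, quantum_efficiency: float) -> float:
    """Spectral responsivity R = eta * e * lambda / (h * c), in A/W."""
    wavelength_m = wavelength_nm * 1e-9
    return quantum_efficiency * E * wavelength_m / (H * C)

# Assumed example: a silicon photodiode near its response peak.
print(f"{responsivity_a_per_w(900, 0.8):.2f} A/W")  # about 0.58 A/W
```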
Dark current The dark current is the current through the photodiode in the absence of light, when it is operated in photoconductive mode. The dark current includes photocurrent generated by background radiation and the saturation current of the semiconductor junction. Dark current must be accounted for by calibration if a photodiode is used to make an accurate optical power measurement, and it is also a source of noise when a photodiode is used in an optical communication system.
Response time The response time is the time required for the detector to respond to an optical input. A photon absorbed by the semiconducting material will generate an electron–hole pair, which will in turn start moving in the material under the effect of the electric field and thus generate a current. The finite duration of this current is known as the transit-time spread and can be evaluated by using Ramo's theorem. One can also show with this theorem that the total charge generated in the external circuit is e and not 2e, as one might expect from the presence of the two carriers. Indeed, the integral of the current due to both electron and hole over time must be equal to e. The resistance and capacitance of the photodiode and the external circuitry give rise to another response time known as the RC time constant (τ = RC). This combination of R and C integrates the photoresponse over time and thus lengthens the impulse response of the photodiode. When used in an optical communication system, the response time determines the bandwidth available for signal modulation and thus data transmission.
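Because the diode capacitance and the circuit resistance form a low-pass filter, the RC time constant translates directly into a bandwidth limit of roughly 1/(2πRC). A minimal sketch with assumed component values (not taken from this article):

```python
import math

R = 50.0    # assumed load resistance, ohms
C = 10e-12  # assumed junction-plus-circuit capacitance, 10 pF

tau = R * C                       # RC time constant, seconds
f_3db = 1 / (2 * math.pi * tau)   # 3 dB bandwidth set by the RC roll-off

print(f"tau   = {tau * 1e9:.2f} ns")     # 0.50 ns
print(f"f_3dB = {f_3db / 1e6:.0f} MHz")  # about 318 MHz
```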
Noise-equivalent power Noise-equivalent power (NEP) is the minimum input optical power needed to generate a photocurrent equal to the rms noise current in a 1 hertz bandwidth. NEP is essentially the minimum detectable power. The related characteristic detectivity (D) is the inverse of NEP (1/NEP), and the specific detectivity (D*) is the detectivity multiplied by the square root of the area (A) of the photodetector, for a 1 Hz bandwidth. The specific detectivity allows different systems to be compared independent of sensor area and system bandwidth; a higher detectivity value indicates a low-noise device or system. Although it is traditional to give D* in many catalogues as a measure of the diode's quality, in practice, it is hardly ever the key parameter.
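Written out in standard notation, with A the detector area (in cm²) and Δf the measurement bandwidth, the definitions above read:

```latex
% Detectivity is the reciprocal of the noise-equivalent power;
% specific detectivity additionally normalizes for area and bandwidth.
D = \frac{1}{\mathrm{NEP}},
\qquad
D^{*} = \frac{\sqrt{A\,\Delta f}}{\mathrm{NEP}}
\quad \left[\mathrm{cm}\cdot\mathrm{Hz}^{1/2}/\mathrm{W}\right]
```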
When a photodiode is used in an optical communication system, all these parameters contribute to the sensitivity of the optical receiver which is the minimum input power required for the receiver to achieve a specified bit error rate.
Applications
P–n photodiodes are used in similar applications to other photodetectors, such as photoconductors, charge-coupled devices (CCD), and photomultiplier tubes. They may be used to generate an output which is dependent upon the illumination (analog for measurement), or to change the state of circuitry (digital, either for control and switching or for digital signal processing).
Photodiodes are used in consumer electronics devices such as compact disc players, smoke detectors, medical devices and the receivers for infrared remote control devices used to control equipment from televisions to air conditioners. For many applications either photodiodes or photoconductors may be used. Either type of photosensor may be used for light measurement, as in camera light meters, or to respond to light levels, as in switching on street lighting after dark.
Photosensors of all types may be used to respond to incident light or to a source of light which is part of the same circuit or system. A photodiode is often combined into a single component with an emitter of light, usually a light-emitting diode (LED), either to detect the presence of a mechanical obstruction to the beam (slotted optical switch) or to couple two digital or analog circuits while maintaining extremely high electrical isolation between them, often for safety (optocoupler). The combination of LED and photodiode is also used in many sensor systems to characterize different types of products based on their optical absorbance.
Photodiodes are often used for accurate measurement of light intensity in science and industry. They generally have a more linear response than photoconductors.
They are also widely used in various medical applications, such as detectors for computed tomography (coupled with scintillators), instruments to analyze samples (immunoassay), and pulse oximeters.
PIN diodes are much faster and more sensitive than p–n junction diodes, and hence are often used for optical communications and in lighting regulation. | Photodiode | Wikipedia | 456 | 42937 | https://en.wikipedia.org/wiki/Photodiode | Technology | Components | null |
P–n photodiodes are not used to measure extremely low light intensities. Instead, if high sensitivity is needed, avalanche photodiodes, intensified charge-coupled devices or photomultiplier tubes are used for applications such as astronomy, spectroscopy, night vision equipment and laser rangefinding.
Comparison with photomultipliers
Advantages compared to photomultipliers:
Excellent linearity of output current as a function of incident light
Spectral response from 190 nm to 1100 nm (silicon), longer wavelengths with other semiconductor materials
Low noise
Ruggedized to mechanical stress
Low cost
Compact and light weight
Long lifetime
High quantum efficiency, typically 60–80%
No high voltage required
Disadvantages compared to photomultipliers:
Small area
No internal gain (except avalanche photodiodes, but their gain is typically 10²–10³ compared to 10⁵–10⁸ for the photomultiplier)
Much lower overall sensitivity
Photon counting only possible with specially designed, usually cooled photodiodes, with special electronic circuits
Response time for many designs is slower
Latent effect
Pinned photodiode
The pinned photodiode (PPD) has a shallow implant (P+ or N+) in N-type or P-type diffusion layer, respectively, over a P-type or N-type (respectively) substrate layer, such that the intermediate diffusion layer can be fully depleted of majority carriers, like the base region of a bipolar junction transistor. The PPD (usually PNP) is used in CMOS active-pixel sensors; a precursor NPNP triple junction variant with the MOS buffer capacitor and the back-light illumination scheme with complete charge transfer and no image lag was invented by Sony in 1975. This scheme was widely used in many applications of charge transfer devices. | Photodiode | Wikipedia | 361 | 42937 | https://en.wikipedia.org/wiki/Photodiode | Technology | Components | null |
Early charge-coupled device image sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode. It was developed by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. Sony had recognized in 1975 that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD; this led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure, invented by Sony in 1975 and developed by NEC in 1982 and by Kodak in 1984, was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD sensors, becoming a fixture in consumer electronic video cameras and then digital still cameras.
A CMOS image sensor with low-voltage PPD technology was first fabricated in 1995 by a joint JPL and Kodak team. The CMOS sensor with PPD technology was further advanced and refined by R.M. Guidash in 1997, K. Yonemoto and H. Sumi in 2000, and I. Inoue in 2003. This enabled CMOS sensors to achieve imaging performance on par with CCD sensors, and later to exceed them.
Photodiode array
A one-dimensional array of hundreds or thousands of photodiodes can be used as a position sensor, for example as part of an angle sensor. A two-dimensional array is used in image sensors and optical mice.
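For the position-sensor use just described, one common approach is to locate a light spot by taking the intensity-weighted centroid of the per-element readings. A minimal sketch, with illustrative readings and an assumed 25 µm element pitch (neither is from this article):

```python
# Minimal sketch: estimating a light spot's position on a 1-D photodiode
# array by intensity-weighted centroid. Readings and pitch are illustrative.

def spot_position(readings: list[float], pixel_pitch_um: float) -> float:
    """Centroid of the intensity profile, in micrometres from element 0."""
    total = sum(readings)
    if total == 0:
        raise ValueError("no signal on the array")
    centroid_index = sum(i * r for i, r in enumerate(readings)) / total
    return centroid_index * pixel_pitch_um

# A spot centred between elements 2 and 3 of a small array:
print(spot_position([0.0, 0.1, 0.8, 0.8, 0.1, 0.0], pixel_pitch_um=25.0))  # 62.5
```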
In some applications, photodiode arrays allow for high-speed parallel readout, as opposed to integrating scanning electronics as in a charge-coupled device (CCD) or CMOS sensor. The optical mouse chip shown in the photo has parallel (not multiplexed) access to all 16 photodiodes in its 4 × 4 array. | Photodiode | Wikipedia | 457 | 42937 | https://en.wikipedia.org/wiki/Photodiode | Technology | Components | null |
Passive-pixel image sensor
The passive-pixel sensor (PPS) is a type of photodiode array. It was the precursor to the active-pixel sensor (APS). A passive-pixel sensor consists of passive pixels which are read out without amplification, with each pixel consisting of a photodiode and a MOSFET switch. In a photodiode array, pixels contain a p–n junction, integrated capacitor, and MOSFETs as selection transistors. A photodiode array was proposed by G. Weckler in 1968, predating the CCD. This was the basis for the PPS.
The noise of photodiode arrays is sometimes a limitation to performance. It was not possible to fabricate active pixel sensors with a practical pixel size in the 1970s, due to limited microlithography technology at the time. | Photodiode | Wikipedia | 178 | 42937 | https://en.wikipedia.org/wiki/Photodiode | Technology | Components | null |
An endospore is a dormant, tough, and non-reproductive structure produced by some bacteria in the phylum Bacillota. The name "endospore" is suggestive of a spore or seed-like form (endo means 'within'), but it is not a true spore (i.e., not an offspring). It is a stripped-down, dormant form to which the bacterium can reduce itself. Endospore formation is usually triggered by a lack of nutrients, and usually occurs in gram-positive bacteria. In endospore formation, the bacterium divides within its cell wall, and one side then engulfs the other. Endospores enable bacteria to lie dormant for extended periods, even centuries. There are many reports of spores remaining viable over 10,000 years, and revival of spores millions of years old has been claimed. There is one report of viable spores of Bacillus marismortui in salt crystals approximately 250 million years old. When the environment becomes more favorable, the endospore can reactivate itself into a vegetative state. Most types of bacteria cannot change to the endospore form. Examples of bacterial species that can form endospores include Bacillus cereus, Bacillus anthracis, Bacillus thuringiensis, Clostridium botulinum, and Clostridium tetani. Endospore formation is not found among Archaea.
The endospore consists of the bacterium's DNA, ribosomes and large amounts of dipicolinic acid. Dipicolinic acid is a spore-specific chemical that appears to help endospores maintain dormancy. This chemical accounts for up to 10% of the spore's dry weight. | Endospore | Wikipedia | 370 | 42957 | https://en.wikipedia.org/wiki/Endospore | Biology and health sciences | Basic anatomy | Biology |
Endospores can survive without nutrients. They are resistant to ultraviolet radiation, desiccation, high temperature, extreme freezing and chemical disinfectants. Thermo-resistant endospores were first hypothesized by Ferdinand Cohn after studying Bacillus subtilis growth on cheese after boiling the cheese. His notion of spores being the reproductive mechanism for the growth was a large blow to the previous suggestions of spontaneous generation. Astrophysicist Steinn Sigurdsson said "There are viable bacterial spores that have been found that are 40 million years old on Earth—and we know they're very hardened to radiation." Common antibacterial agents that work by destroying vegetative cell walls do not affect endospores. Endospores are commonly found in soil and water, where they may survive for long periods of time. A variety of different microorganisms form "spores" or "cysts", but the endospores of low G+C gram-positive bacteria are by far the most resistant to harsh conditions.
Some classes of bacteria can turn into exospores, also known as microbial cysts, instead of endospores. Exospores and endospores are two kinds of "hibernating" or dormant stages seen in some classes of microorganisms.
Life cycle of bacteria
The bacterial life cycle does not necessarily include sporulation. Sporulation is usually triggered by adverse environmental conditions, so as to help the survival of the bacterium. Endospores exhibit no signs of life and can thus be described as cryptobiotic. Endospores retain viability indefinitely and they can germinate into vegetative cells under the appropriate conditions. Endospores have survived thousands of years until environmental stimuli trigger germination. They have been characterized as the most durable cells produced in nature.
Structure | Endospore | Wikipedia | 392 | 42957 | https://en.wikipedia.org/wiki/Endospore | Biology and health sciences | Basic anatomy | Biology |
Bacteria produce a single endospore internally. The spore is sometimes surrounded by a thin covering known as the exosporium, which overlies the spore coat. The spore coat, which acts like a sieve that excludes large toxic molecules like lysozyme, is resistant to many toxic molecules and may also contain enzymes that are involved in germination. In Bacillus subtilis endospores, the spore coat is estimated to contain more than 70 coat proteins, which are organized into an inner and an outer coat layer. The X-ray diffraction pattern of purified B. subtilis endospores indicates the presence of a component with a regular periodic structure, which Kadota and Iijima speculated might be formed from a keratin-like protein. However, after further studies this group concluded that the structure of the spore coat protein was different from keratin. When the B. subtilis genome was sequenced, no ortholog of human keratin was detected. The cortex lies beneath the spore coat and consists of peptidoglycan. The core wall lies beneath the cortex and surrounds the protoplast or core of the endospore. The core contains the spore chromosomal DNA which is encased in chromatin-like proteins known as SASPs (small acid-soluble spore proteins), that protect the spore DNA from UV radiation and heat. The core also contains normal cell structures, such as ribosomes and other enzymes, but is not metabolically active.
Up to 20% of the dry weight of the endospore consists of calcium dipicolinate within the core, which is thought to stabilize the DNA. Dipicolinic acid could be responsible for the heat resistance of the spore, and calcium may aid in resistance to heat and oxidizing agents. However, mutants resistant to heat but lacking dipicolinic acid have been isolated, suggesting other mechanisms contributing to heat resistance are also at work. Small acid-soluble proteins (SASPs) are found in endospores. These proteins tightly bind and condense the DNA, and are in part responsible for resistance to UV light and DNA-damaging chemicals. | Endospore | Wikipedia | 460 | 42957 | https://en.wikipedia.org/wiki/Endospore | Biology and health sciences | Basic anatomy | Biology |
Visualising endospores under light microscopy can be difficult due to the impermeability of the endospore wall to dyes and stains. While the rest of a bacterial cell may stain, the endospore is left colourless. To combat this, a special stain technique called a Moeller stain is used. That allows the endospore to show up as red, while the rest of the cell stains blue. Another staining technique for endospores is the Schaeffer-Fulton stain, which stains endospores green and bacterial bodies red. The arrangement of spore layers is as follows:
Exosporium
Spore coat
Spore cortex
Core wall
Location
The position of the endospore differs among bacterial species and is useful in identification. The main types within the cell are terminal, subterminal, and centrally placed endospores. Terminal endospores are seen at the poles of cells, whereas central endospores are more or less in the middle. Subterminal endospores are those between these two extremes, usually seen far enough towards the poles but close enough to the center so as not to be considered either terminal or central. Lateral endospores are seen occasionally.
Examples of bacteria having terminal endospores include Clostridium tetani, the pathogen that causes the disease tetanus. Bacteria having a centrally placed endospore include Bacillus cereus. Sometimes the endospore can be so large the cell can be distended around the endospore. This is typical of Clostridium tetani.
Formation and destruction
Under conditions of starvation, especially the lack of carbon and nitrogen sources, a single endospore forms within some of the bacteria through a process called sporulation. | Endospore | Wikipedia | 365 | 42957 | https://en.wikipedia.org/wiki/Endospore | Biology and health sciences | Basic anatomy | Biology |
When a bacterium detects environmental conditions are becoming unfavourable it may start the process of endosporulation, which takes about eight hours. The DNA is replicated and a membrane wall known as a spore septum begins to form between it and the rest of the cell. The plasma membrane of the cell surrounds this wall and pinches off to leave a double membrane around the DNA, and the developing structure is now known as a forespore. Calcium dipicolinate, the calcium salt of dipicolinic acid, is incorporated into the forespore during this time. The dipicolinic acid helps stabilize the proteins and DNA in the endospore. Next the peptidoglycan cortex forms between the two layers and the bacterium adds a spore coat to the outside of the forespore. In the final stages of endospore formation the newly forming endospore is dehydrated and allowed to mature before being released from the mother cell. The cortex is what makes the endospore so resistant to temperature. Beneath the cortex lies an inner membrane surrounding the core; this inner membrane contributes to the endospore's resistance against UV light and harsh chemicals that would normally destroy microbes. Sporulation is now complete, and the mature endospore will be released when the surrounding vegetative cell is degraded.
Endospores are resistant to most agents that would normally kill the vegetative cells they formed from. Unlike persister cells, endospores are the result of a morphological differentiation process triggered by nutrient limitation (starvation) in the environment; endosporulation is initiated by quorum sensing within the "starving" population. Most disinfectants such as household cleaning products, alcohols, quaternary ammonium compounds and detergents have little effect on endospores. However, sterilant alkylating agents such as ethylene oxide (ETO), and 10% bleach are effective against endospores. To kill most anthrax spores, standard household bleach (with 5% sodium hypochlorite) must be in contact with the spores for at least several minutes; a very small proportion of spores can survive longer than 10 minutes in such a solution. Higher concentrations of bleach are not more effective, and can cause some types of bacteria to aggregate and thus survive. | Endospore | Wikipedia | 492 | 42957 | https://en.wikipedia.org/wiki/Endospore | Biology and health sciences | Basic anatomy | Biology |
While significantly resistant to heat and radiation, endospores can be destroyed by burning or by autoclaving at temperatures exceeding the boiling point of water, such as 120 °C. Endospores are able to survive at 100 °C for hours, although the longer the exposure, the fewer will survive. An indirect way to destroy them is to place them in an environment that reactivates them to their vegetative state. They will germinate within a day or two with the right environmental conditions, and then the vegetative cells, not as hardy as endospores, can be straightforwardly destroyed. This indirect method is called Tyndallization. It was the usual method for a while in the late 19th century before the introduction of inexpensive autoclaves. Prolonged exposure to ionising radiation, such as x-rays and gamma rays, will also kill most endospores.
The endospores of certain types of (typically non-pathogenic) bacteria, such as Geobacillus stearothermophilus, are used as probes to verify that an autoclaved item has been rendered truly sterile: a small capsule containing the spores is put into the autoclave with the items; after the cycle the content of the capsule is cultured to check whether anything grows from it. If nothing grows, the spores were destroyed and the sterilization was successful.
In hospitals, endospores on delicate invasive instruments such as endoscopes are killed by low-temperature, and non-corrosive, ethylene oxide sterilizers. Ethylene oxide is the only low-temperature sterilant to stop outbreaks on these instruments. In contrast, "high level disinfection" does not kill endospores but is used for instruments such as a colonoscope that do not enter sterile bodily cavities. This latter method uses only warm water, enzymes, and detergents. | Endospore | Wikipedia | 403 | 42957 | https://en.wikipedia.org/wiki/Endospore | Biology and health sciences | Basic anatomy | Biology |
Bacterial endospores are resistant to antibiotics, most disinfectants, and physical agents such as radiation, boiling, and drying. The impermeability of the spore coat is thought to be responsible for the endospore's resistance to chemicals. The heat resistance of endospores is due to a variety of factors:
Calcium dipicolinate, abundant within the endospore, may stabilize and protect the endospore's DNA.
Small acid-soluble proteins (SASPs) saturate the endospore's DNA and protect it from heat, drying, chemicals, and radiation. They also function as a carbon and energy source for the development of a vegetative bacterium during germination.
The cortex may osmotically remove water from the interior of the endospore and the dehydration that results is thought to be very important in the endospore's resistance to heat and radiation.
Finally, DNA repair enzymes contained within the endospore are able to repair damaged DNA during germination.
Reactivation
Reactivation of the endospore occurs when conditions are more favourable and involves activation, germination, and outgrowth. Even if an endospore is located in plentiful nutrients, it may fail to germinate unless activation has taken place. This may be triggered by heating the endospore. Germination involves the dormant endospore starting metabolic activity and thus breaking hibernation. It is commonly characterised by rupture or absorption of the spore coat, swelling of the endospore, an increase in metabolic activity, and loss of resistance to environmental stress.
Outgrowth follows germination and involves the core of the endospore manufacturing new chemical components and exiting the old spore coat to develop into a fully functional vegetative bacterial cell, which can divide to produce more cells.
Endospores possess five times more sulfur than vegetative cells. This excess sulfur is concentrated in the spore coats as the amino acid cysteine. It is believed that the macromolecule responsible for maintaining the dormant state has a protein coat rich in cystine, stabilized by S–S linkages. A reduction in these linkages has the potential to change the tertiary structure, causing the protein to unfold. This conformational change in the protein is thought to be responsible for exposing active enzymatic sites necessary for endospore germination. | Endospore | Wikipedia | 499 | 42957 | https://en.wikipedia.org/wiki/Endospore | Biology and health sciences | Basic anatomy | Biology |
Endospores can stay dormant for a very long time. For instance, endospores were found in the tombs of the Egyptian pharaohs. When placed in appropriate medium, under appropriate conditions, they were able to be reactivated. In 1995, Raul Cano of California Polytechnic State University found bacterial spores in the gut of a fossilized bee trapped in amber from a tree in the Dominican Republic. The bee fossilized in amber was dated to being about 25 million years old. The spores germinated when the amber was cracked open and the material from the gut of the bee was extracted and placed in nutrient medium. After the spores were analyzed by microscopy, it was determined that the cells were very similar to Lysinibacillus sphaericus which is found in bees in the Dominican Republic today.
Importance
As a simplified model for cellular differentiation, the molecular details of endospore formation have been extensively studied, specifically in the model organism Bacillus subtilis. These studies have contributed much to our understanding of the regulation of gene expression, transcription factors, and the sigma factor subunits of RNA polymerase.
Endospores of the bacterium Bacillus anthracis were used in the 2001 anthrax attacks. The powder found in contaminated postal letters consisted of anthrax endospores. This intentional distribution led to 22 known cases of anthrax (11 inhalation and 11 cutaneous). The case fatality rate among those patients with inhalation anthrax was 45% (5/11). The six other individuals with inhalation anthrax and all the individuals with cutaneous anthrax recovered. Had it not been for antibiotic therapy, many more might have been stricken.
According to WHO veterinary documents, B. anthracis sporulates when it detects oxygen instead of the carbon dioxide present in mammalian blood; this signals that the bacterium has reached the exterior of the animal, where an inactive, dispersible morphology is advantageous.
Sporulation requires the presence of free oxygen. In the natural situation, this means the vegetative cycles occur within the low oxygen environment of the infected host and, within the host, the organism is exclusively in the vegetative form. Once outside the host, sporulation commences upon exposure to the air and the spore forms are essentially the exclusive phase in the environment. | Endospore | Wikipedia | 484 | 42957 | https://en.wikipedia.org/wiki/Endospore | Biology and health sciences | Basic anatomy | Biology |
Biotechnology
Bacillus subtilis spores are useful for the expression of recombinant proteins and in particular for the surface display of peptides and proteins as a tool for fundamental and applied research in the fields of microbiology, biotechnology and vaccination.
Endospore-forming bacteria
Examples of endospore-forming bacteria include the genera:
Acetonema
Actinomyces
Alkalibacillus
Ammoniphilus
Amphibacillus
Anaerobacter
Anaerospora
Aneurinibacillus
Anoxybacillus
Bacillus
Brevibacillus
Caldanaerobacter
Caloramator
Caminicella
Cerasibacillus
Clostridium
Clostridiisalibacter
Cohnella
Coxiella (i.e. Coxiella burnetii)
Dendrosporobacter
Desulfotomaculum
Desulfosporomusa
Desulfosporosinus
Desulfovirgula
Desulfunispora
Desulfurispora
Filifactor
Filobacillus
Gelria
Geobacillus
Geosporobacter
Gracilibacillus
Halobacillus
Halonatronum
Heliobacterium
Heliophilum
Laceyella
Lentibacillus
Lysinibacillus
Mahella
Metabacterium
Moorella
Natroniella
Oceanobacillus
Orenia
Ornithinibacillus
Oxalophagus
Oxobacter
Paenibacillus
Paraliobacillus
Pelospora
Pelotomaculum
Piscibacillus
Planifilum
Pontibacillus
Propionispora
Salinibacillus
Salsuginibacillus
Seinonella
Shimazuella
Sporacetigenium
Sporoanaerobacter
Sporobacter
Sporobacterium
Sporohalobacter
Sporolactobacillus
Sporomusa
Sporosarcina
Sporotalea
Sporotomaculum
Syntrophomonas
Syntrophospora
Tenuibacillus
Tepidibacter
Terribacillus
Thalassobacillus
Thermoacetogenium
Thermoactinomyces
Thermoalkalibacillus
Thermoanaerobacter
Thermoanaeromonas
Thermobacillus
Thermoflavimicrobium
Thermovenabulum
Tuberibacillus
Virgibacillus
Vulcanobacillus | Endospore | Wikipedia | 504 | 42957 | https://en.wikipedia.org/wiki/Endospore | Biology and health sciences | Basic anatomy | Biology |
Snell's law (also known as the Snell–Descartes law, the ibn-Sahl law, and the law of refraction) is a formula used to describe the relationship between the angles of incidence and refraction, when referring to light or other waves passing through a boundary between two different isotropic media, such as water, glass, or air.
In optics, the law is used in ray tracing to compute the angles of incidence or refraction, and in experimental optics to find the refractive index of a material. The law is also satisfied in metamaterials, which allow light to be bent "backward" at a negative angle of refraction with a negative refractive index.
The law states that, for a given pair of media, the ratio of the sines of the angle of incidence ($\theta_1$) and the angle of refraction ($\theta_2$) is equal to the refractive index of the second medium with regard to the first ($n_{21}$), which is equal to the ratio of the refractive indices ($n_2/n_1$) of the two media, or equivalently, to the ratio of the phase velocities ($v_1/v_2$) in the two media:

$$\frac{\sin\theta_1}{\sin\theta_2} = n_{21} = \frac{n_2}{n_1} = \frac{v_1}{v_2}$$
The law follows from Fermat's principle of least time, which in turn follows from the propagation of light as waves.
History
Ptolemy, in Alexandria, Egypt, had found a relationship regarding refraction angles, but it was inaccurate for angles that were not small. Ptolemy was confident he had found an accurate empirical law, partially as a result of slightly altering his data to fit theory (see: confirmation bias).
The law was eventually named after Snell, although it was first discovered by the Persian scientist Ibn Sahl, at the Baghdad court in 984. In the manuscript On Burning Mirrors and Lenses, Sahl used the law to derive lens shapes that focus light with no geometric aberration.
Alhazen, in his Book of Optics (1021), came close to rediscovering the law of refraction, but he did not take this step. | Snell's law | Wikipedia | 409 | 42964 | https://en.wikipedia.org/wiki/Snell%27s%20law | Physical sciences | Optics | Physics |
The law was rediscovered by Thomas Harriot in 1602, who however did not publish his results although he had corresponded with Kepler on this very subject. In 1621, the Dutch astronomer Willebrord Snellius (1580–1626)—Snell—derived a mathematically equivalent form, that remained unpublished during his lifetime. René Descartes independently derived the law using heuristic momentum conservation arguments in terms of sines in his 1637 essay Dioptrique, and used it to solve a range of optical problems. Rejecting Descartes' solution, Pierre de Fermat arrived at the same solution based solely on his principle of least time. Descartes assumed the speed of light was infinite, yet in his derivation of Snell's law he also assumed the denser the medium, the greater the speed of light. Fermat supported the opposing assumptions, i.e., the speed of light is finite, and his derivation depended upon the speed of light being slower in a denser medium. Fermat's derivation also utilized his invention of adequality, a mathematical procedure equivalent to differential calculus, for finding maxima, minima, and tangents.
In his influential mathematics book Geometry, Descartes solves a problem that was worked on by Apollonius of Perga and Pappus of Alexandria. Given n lines L and a point P(L) on each line, find the locus of points Q such that the lengths of the line segments QP(L) satisfy certain conditions. For example, when n = 4, given the lines a, b, c, and d and a point A on a, B on b, and so on, find the locus of points Q such that the product QA*QB equals the product QC*QD. When the lines are not all parallel, Pappus showed that the loci are conics, but when Descartes considered larger n, he obtained cubic and higher degree curves. To show that the cubic curves were interesting, he showed that they arose naturally in optics from Snell's law. | Snell's law | Wikipedia | 430 | 42964 | https://en.wikipedia.org/wiki/Snell%27s%20law | Physical sciences | Optics | Physics |
According to Dijksterhuis, "In De natura lucis et proprietate (1662) Isaac Vossius said that Descartes had seen Snell's paper and concocted his own proof. We now know this charge to be undeserved but it has been adopted many times since." Both Fermat and Huygens repeated this accusation that Descartes had copied Snell. In French, Snell's Law is sometimes called "la loi de Descartes" or more frequently "loi de Snell-Descartes".
In his 1678 Traité de la Lumière, Christiaan Huygens showed how Snell's law of sines could be explained by, or derived from, the wave nature of light, using what we have come to call the Huygens–Fresnel principle.
With the development of modern optical and electromagnetic theory, the ancient Snell's law entered a new stage. In 1962, Nicolaas Bloembergen showed that at the boundary of a nonlinear medium, Snell's law should be written in a generalized form. In 2008 and 2011, plasmonic metasurfaces were also demonstrated to change the reflection and refraction directions of a light beam.
Explanation
Snell's law is used to determine the direction of light rays through refractive media with varying indices of refraction. The indices of refraction of the media, labeled $n_1$, $n_2$ and so on, are used to represent the factor by which a light ray's speed decreases when traveling through a refractive medium, such as glass or water, as opposed to its velocity in a vacuum.
As light passes the border between media, depending upon the relative refractive indices of the two media, the light will either be refracted to a lesser angle, or a greater one. These angles are measured with respect to the normal line, represented perpendicular to the boundary. In the case of light traveling from air into water, light would be refracted towards the normal line, because the light is slowed down in water; light traveling from water to air would refract away from the normal line.
Refraction between two surfaces is also referred to as reversible because if all conditions were identical, the angles would be the same for light propagating in the opposite direction. | Snell's law | Wikipedia | 481 | 42964 | https://en.wikipedia.org/wiki/Snell%27s%20law | Physical sciences | Optics | Physics |
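A minimal sketch of the scalar law in code; the indices for air and water (about 1.000 and 1.333) are the commonly quoted approximate values:

```python
import math

# Minimal sketch: applying Snell's law n1*sin(theta1) = n2*sin(theta2)
# to find the refraction angle. Indices are approximate (air ~1.000,
# water ~1.333).

def refraction_angle_deg(n1: float, n2: float, incidence_deg: float) -> float:
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Air into water: the ray bends toward the normal, as described above.
print(refraction_angle_deg(1.000, 1.333, 45.0))  # ~32.0 degrees
```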
Snell's law is generally true only for isotropic or specular media (such as glass). In anisotropic media such as some crystals, birefringence may split the refracted ray into two rays, the ordinary or o-ray which follows Snell's law, and the other extraordinary or e-ray which may not be co-planar with the incident ray.
When the light or other wave involved is monochromatic, that is, of a single frequency, Snell's law can also be expressed in terms of a ratio of wavelengths in the two media, $\lambda_1$ and $\lambda_2$:

$$\frac{\sin\theta_1}{\sin\theta_2} = \frac{\lambda_1}{\lambda_2}$$
Derivations and formula
Snell's law can be derived in various ways.
Derivation from Fermat's principle
Snell's law can be derived from Fermat's principle, which states that the light travels the path which takes the least time. By taking the derivative of the optical path length, the stationary point is found giving the path taken by the light. (There are situations of light violating Fermat's principle by not taking the least time path, as in reflection in a (spherical) mirror.) In a classic analogy, the area of lower refractive index is replaced by a beach, the area of higher refractive index by the sea, and the fastest way for a rescuer on the beach to get to a drowning person in the sea is to run along a path that follows Snell's law.
As shown in the figure to the right, assume the refractive indices of medium 1 and medium 2 are $n_1$ and $n_2$ respectively. Light enters medium 2 from medium 1 via point O.

$\theta_1$ is the angle of incidence, $\theta_2$ is the angle of refraction with respect to the normal.

The phase velocities of light in medium 1 and medium 2 are

$$v_1 = c/n_1$$

and

$$v_2 = c/n_2$$

respectively, where $c$ is the speed of light in vacuum.

Let T be the time required for the light to travel from point Q through point O to point P:

$$T = \frac{\sqrt{x^2 + a^2}}{v_1} + \frac{\sqrt{b^2 + (l - x)^2}}{v_2},$$

where a, b, l and x are as denoted in the right-hand figure, x being the varying parameter.

To minimize it, one can differentiate:

$$\frac{dT}{dx} = \frac{x}{v_1 \sqrt{x^2 + a^2}} - \frac{l - x}{v_2 \sqrt{b^2 + (l - x)^2}} = 0 \quad \text{(stationary point)}$$

Note that

$$\frac{x}{\sqrt{x^2 + a^2}} = \sin\theta_1$$

and

$$\frac{l - x}{\sqrt{b^2 + (l - x)^2}} = \sin\theta_2.$$

Therefore,

$$\frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2},$$

which is equivalent to $n_1 \sin\theta_1 = n_2 \sin\theta_2$.
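The stationary point can also be checked numerically: minimize the travel time T over x and verify that the result satisfies $n_1 \sin\theta_1 = n_2 \sin\theta_2$. A small sketch, with illustrative geometry values for a, b and l (assumptions, not taken from the figure):

```python
import math

# Numerical check of the Fermat derivation: minimize T(x) by ternary
# search (T is unimodal), then compare n1*sin(theta1) with n2*sin(theta2).
# Geometry (a, b, l) and indices are illustrative.
n1, n2 = 1.0, 1.5
a, b, l = 1.0, 1.0, 2.0
v1, v2 = 1.0 / n1, 1.0 / n2  # phase velocities in units where c = 1

def travel_time(x: float) -> float:
    return math.hypot(x, a) / v1 + math.hypot(l - x, b) / v2

lo, hi = 0.0, l
for _ in range(200):  # ternary search for the minimum
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if travel_time(m1) < travel_time(m2):
        hi = m2
    else:
        lo = m1

x = (lo + hi) / 2
sin1 = x / math.hypot(x, a)
sin2 = (l - x) / math.hypot(l - x, b)
print(n1 * sin1, n2 * sin2)  # the two sides agree at the minimum
```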
Derivation from Huygens's principle
Alternatively, Snell's law can be derived using interference of all possible paths of light wave from source to observer—it results in destructive interference everywhere except at the extrema of phase, where interference is constructive; these paths of constructive interference become the actual light paths.
Derivation from Maxwell's equations
Another way to derive Snell's Law involves an application of the general boundary conditions of Maxwell equations for electromagnetic radiation and induction. | Snell's law | Wikipedia | 510 | 42964 | https://en.wikipedia.org/wiki/Snell%27s%20law | Physical sciences | Optics | Physics |
Derivation from conservation of energy and momentum
Yet another way to derive Snell's law is based on translation symmetry considerations. For example, a homogeneous surface perpendicular to the z direction cannot change the transverse momentum. Since the propagation vector $\vec{k}$ is proportional to the photon's momentum, the transverse propagation direction must remain the same in both regions. Assume without loss of generality a plane of incidence in the $z$–$x$ plane: $k_{x,\text{region 1}} = k_{x,\text{region 2}}$. Using the well-known dependence of the wavenumber on the refractive index of the medium, $k = n k_0$, we derive Snell's law immediately:

$$n_1 k_0 \sin\theta_1 = n_2 k_0 \sin\theta_2$$
$$n_1 \sin\theta_1 = n_2 \sin\theta_2,$$

where $k_0 = \frac{2\pi}{\lambda_0} = \frac{\omega}{c}$ is the wavenumber in vacuum. Although no surface is truly homogeneous at the atomic scale, full translational symmetry is an excellent approximation whenever the region is homogeneous on the scale of the light wavelength.
Vector form
Given a normalized light vector $\vec{l}$ (pointing from the light source toward the surface) and a normalized plane normal vector $\vec{n}$, one can work out the normalized reflected and refracted rays, via the cosines of the angle of incidence $\theta_1$ and angle of refraction $\theta_2$, without explicitly using the sine values or any trigonometric functions or angles:

$$\cos\theta_1 = -\vec{n} \cdot \vec{l}$$

Note: $\cos\theta_1$ must be positive, which it will be if $\vec{n}$ is the normal vector that points from the surface toward the side where the light is coming from, the region with index $n_1$. If $\cos\theta_1$ is negative, then $\vec{n}$ points to the side without the light, so start over with $\vec{n}$ replaced by its negative.

$$\vec{v}_{\text{reflect}} = \vec{l} + 2\cos\theta_1 \, \vec{n}$$

This reflected direction vector points back toward the side of the surface where the light came from.

Now apply Snell's law to the ratio of sines to derive the formula for the refracted ray's direction vector:

$$\cos\theta_2 = \sqrt{1 - \left(\frac{n_1}{n_2}\right)^2 \left(1 - \cos^2\theta_1\right)}$$

$$\vec{v}_{\text{refract}} = \left(\frac{n_1}{n_2}\right)\vec{l} + \left(\frac{n_1}{n_2}\cos\theta_1 - \cos\theta_2\right)\vec{n}$$

The formula may appear simpler in terms of renamed simple values $r = n_1/n_2$ and $c = \cos\theta_1$, avoiding any appearance of trig function names or angle names:

$$\vec{v}_{\text{refract}} = r\,\vec{l} + \left(rc - \sqrt{1 - r^2\left(1 - c^2\right)}\right)\vec{n}$$

Example: a worked numerical case is given in the code sketch below.
The cosine values may be saved and used in the Fresnel equations for working out the intensity of the resulting rays.
Total internal reflection is indicated by a negative radicand in the equation for $\cos\theta_2$, which can only happen for rays crossing into a less-dense medium ($n_2 < n_1$).
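A minimal sketch of the vector form in code, using the $r$ and $c$ shorthand above; the input ray, normal, and indices in the usage lines are illustrative:

```python
import math

# Minimal sketch of the vector form above, with r = n1/n2 and c = cos(theta1).
# Vectors are plain (x, y, z) tuples and are assumed to be normalized.

def refract(light, normal, n1, n2):
    """Refracted direction, or None on total internal reflection."""
    r = n1 / n2
    c = -sum(l * n for l, n in zip(light, normal))  # cos(theta1) = -n . l
    if c < 0:  # normal points away from the light side; flip it, per the note
        normal = tuple(-n for n in normal)
        c = -c
    radicand = 1 - r * r * (1 - c * c)
    if radicand < 0:  # negative radicand: total internal reflection
        return None
    k = r * c - math.sqrt(radicand)
    return tuple(r * l + k * n for l, n in zip(light, normal))

# A 45-degree ray in the x-z plane entering water (~1.333) from air:
s = math.sqrt(0.5)
print(refract((s, 0.0, -s), (0.0, 0.0, 1.0), 1.000, 1.333))
# ~(0.531, 0.0, -0.848), i.e. a refraction angle of about 32 degrees
```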
Total internal reflection and critical angle | Snell's law | Wikipedia | 415 | 42964 | https://en.wikipedia.org/wiki/Snell%27s%20law | Physical sciences | Optics | Physics |
When light travels from a medium with a higher refractive index to one with a lower refractive index, Snell's law seems to require in some cases (whenever the angle of incidence is large enough) that the sine of the angle of refraction be greater than one. This of course is impossible, and the light in such cases is completely reflected by the boundary, a phenomenon known as total internal reflection. The largest possible angle of incidence which still results in a refracted ray is called the critical angle; in this case the refracted ray travels along the boundary between the two media.
For example, consider a ray of light moving from water to air with an angle of incidence of 50°. The refractive indices of water and air are approximately 1.333 and 1, respectively, so Snell's law gives us the relation

$$\sin\theta_2 = \frac{n_1}{n_2}\sin\theta_1 = \frac{1.333}{1}\sin 50^\circ \approx 1.333 \times 0.766 \approx 1.021,$$

which is impossible to satisfy. The critical angle $\theta_{\text{crit}}$ is the value of $\theta_1$ for which $\theta_2$ equals 90°:

$$\theta_{\text{crit}} = \arcsin\left(\frac{n_2}{n_1}\right) = \arcsin\left(\frac{1}{1.333}\right) \approx 48.6^\circ.$$
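The same computation as a quick sketch, using the indices quoted above:

```python
import math

# Critical angle for a water-to-air boundary, per the formula above.
n1, n2 = 1.333, 1.000
theta_crit = math.degrees(math.asin(n2 / n1))
print(round(theta_crit, 1))  # 48.6 degrees

# At 50 degrees the required sin(theta2) exceeds 1: no refracted ray exists.
print(n1 / n2 * math.sin(math.radians(50)))  # ~1.021
```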
Dispersion
In many wave-propagation media, wave velocity changes with frequency or wavelength of the waves; this is true of light propagation in most transparent substances other than a vacuum. These media are called dispersive. The result is that the angles determined by Snell's law also depend on frequency or wavelength, so that a ray of mixed wavelengths, such as white light, will spread or disperse. Such dispersion of light in glass or water underlies the origin of rainbows and other optical phenomena, in which different wavelengths appear as different colors.
In optical instruments, dispersion leads to chromatic aberration, a color-dependent blurring that sometimes is the resolution-limiting effect. This was especially true in refracting telescopes, before the invention of achromatic objective lenses.
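A small illustration of dispersion via Snell's law; the two index values (about 1.51 at red and 1.53 at blue wavelengths) are representative of a typical crown glass and are assumptions, not figures from this article:

```python
import math

# Dispersion sketch: the same incidence angle refracts blue light more than
# red, because the glass's index is higher at shorter wavelengths. The two
# index values are representative of a crown glass, not from the article.

def refraction_deg(n_glass: float, incidence_deg: float, n_air: float = 1.0) -> float:
    return math.degrees(math.asin(n_air * math.sin(math.radians(incidence_deg)) / n_glass))

incidence = 60.0
print(refraction_deg(1.51, incidence))  # red:  ~35.0 degrees
print(refraction_deg(1.53, incidence))  # blue: ~34.5 degrees
```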
Lossy, absorbing, or conducting media
In a conducting medium, permittivity and index of refraction are complex-valued. Consequently, so are the angle of refraction and the wave-vector. This implies that, while the surfaces of constant real phase are planes whose normals make an angle equal to the angle of refraction with the interface normal, the surfaces of constant amplitude, in contrast, are planes parallel to the interface itself. Since these two planes do not in general coincide with each other, the wave is said to be inhomogeneous. The refracted wave is exponentially attenuated, with exponent proportional to the imaginary component of the index of refraction. | Snell's law | Wikipedia | 502 | 42964 | https://en.wikipedia.org/wiki/Snell%27s%20law | Physical sciences | Optics | Physics |
Opahs, also commonly known as moonfish, sunfish (not to be confused with Molidae), cowfish, kingfish, and redfin ocean pan, are large, colorful, deep-bodied pelagic lampriform fishes comprising the small family Lampridae (also spelled Lamprididae).
The family comprises two genera: Lampris () and the monotypic Megalampris (known only from fossil remains). The extinct family Turkmenidae, from the Paleogene of Central Asia, is closely related, though its members were much smaller.
In 2015, Lampris guttatus was discovered to have near-whole-body endothermy in which the entire core of the body is maintained at around 5 °C above the surrounding water. This is unique among fish as most fish are entirely cold blooded or are capable of warming only some parts of their bodies.
Species
Two living species were traditionally recognized, but a taxonomic review in 2018 found that more should be recognized (the result of splitting L. guttatus into several species, each with a more restricted geographic range), bringing the total to six. The six species of Lampris have mostly non-overlapping geographical ranges, and can be recognized based on body shape and coloration pattern.
Lampris australensis Underkoffler, Luers, Hyde & Craig, 2018 Southern spotted opah – Southern hemisphere, in the Pacific and Indian oceans.
Lampris guttatus (Brünnich, 1788) North Atlantic opah – formerly thought to be cosmopolitan, but now thought to be restricted to the northeastern Atlantic including the Mediterranean Sea.
Lampris immaculatus Gilchrist, 1904 southern opah – confined to the Southern Ocean from 34° S to the Antarctic Polar Front.
Lampris incognitus Underkoffler, Luers, Hyde & Craig, 2018 smalleye Pacific opah – central and eastern North Pacific Ocean.
Lampris lauta Lowe, 1860 East Atlantic opah – eastern Atlantic Ocean, including the Mediterranean, Azores and Canary Islands.
Lampris megalopsis Underkoffler, Luers, Hyde & Craig, 2018 bigeye Pacific opah – cosmopolitan, including the Gulf of Mexico, Indian Ocean, the western Pacific Ocean and Chile. | Opah | Wikipedia | 451 | 1490711 | https://en.wikipedia.org/wiki/Opah | Biology and health sciences | Acanthomorpha | Animals |
Extinct species
† Lampris zatima, also known as "Diatomœca zatima", is a very small, extinct species from the late Miocene of what is now Southern California, known primarily from fragments and the occasional headless specimen.
† Megalampris keyesi is an extinct species estimated to be about 4 m in length. Fossil remains date back to the late Oligocene of what is now New Zealand, and it is the first fossil lampridiform found in the Southern Hemisphere.
Description
Opahs are deeply keeled, compressed, discoid fish with conspicuous coloration: the body is a deep red-orange grading to rosy on the belly, with white spots covering the flanks. Both the median and paired fins are a bright vermilion. The large eyes stand out, as well, ringed with golden yellow. The body is covered in minute cycloid scales and its silvery, iridescent guanine coating is easily abraded.
Opahs closely resemble in shape the unrelated butterfish (family Stromateidae). Both have falcated (curved) pectoral fins and forked, emarginated (notched) caudal fins. Aside from being significantly larger than butterfish, opahs have enlarged, falcated pelvic fins, positioned thoracically and bearing about 14 to 17 rays, which distinguish them from superficially similar carangids; adult butterfish lack pelvic fins. The pectorals of opahs are also inserted (more or less) horizontally rather than vertically. The anterior portion of an opah's single dorsal fin (with about 50–55 rays) is greatly elongated, also in a falcated profile similar to the pelvic fins. The anal fin (around 34 to 41 rays) is about as high and as long as the shorter portion of the dorsal fin, and both fins have corresponding grooves into which they can be depressed.
The snout is pointed and the mouth small, toothless, and terminal. The lateral line forms a high arch over the pectoral fins before sweeping down to the caudal peduncle. The larger species, Lampris guttatus, may reach a total length of 2 m (6.6 ft) and a weight of 270 kg (600 lb). The lesser-known Lampris immaculatus reaches a recorded total length of just 1.1 m (3.6 ft).
Endothermy | Opah | Wikipedia | 483 | 1490711 | https://en.wikipedia.org/wiki/Opah | Biology and health sciences | Acanthomorpha | Animals |