A DNA virus is a virus that has a genome made of deoxyribonucleic acid (DNA) that is replicated by a DNA polymerase. They can be divided between those that have two strands of DNA in their genome, called double-stranded DNA (dsDNA) viruses, and those that have one strand of DNA in their genome, called single-stranded DNA (ssDNA) viruses. dsDNA viruses primarily belong to two realms: Duplodnaviria and Varidnaviria, and ssDNA viruses are almost exclusively assigned to the realm Monodnaviria, which also includes some dsDNA viruses. Additionally, many DNA viruses are unassigned to higher taxa. Reverse transcribing viruses, which have a DNA genome that is replicated through an RNA intermediate by a reverse transcriptase, are classified into the kingdom Pararnavirae in the realm Riboviria.
DNA viruses are ubiquitous worldwide, especially in marine environments where they form an important part of marine ecosystems, and infect both prokaryotes and eukaryotes. They appear to have multiple origins, as viruses in Monodnaviria appear to have emerged from archaeal and bacterial plasmids on multiple occasions, though the origins of Duplodnaviria and Varidnaviria are less clear.
Prominent disease-causing DNA viruses include herpesviruses, papillomaviruses, and poxviruses.
== Baltimore classification ==
The Baltimore classification system is used to group viruses together based on their manner of messenger RNA (mRNA) synthesis and is often used alongside standard virus taxonomy, which is based on evolutionary history. DNA viruses constitute two Baltimore groups: Group I: double-stranded DNA viruses, and Group II: single-stranded DNA viruses. While Baltimore classification is chiefly based on transcription of mRNA, viruses in each Baltimore group also typically share their manner of replication. Viruses in a Baltimore group do not necessarily share genetic relation or morphology.
=== Double-stranded DNA viruses ===
The first Baltimore group of DNA viruses comprises those that have a double-stranded DNA genome. All dsDNA viruses have their mRNA synthesized in a three-step process. First, a transcription preinitiation complex binds to the DNA upstream of the site where transcription begins, allowing for the recruitment of a host RNA polymerase. Second, once the RNA polymerase is recruited, it uses the negative strand as a template for synthesizing mRNA strands. Third, the RNA polymerase terminates transcription upon reaching a specific signal, such as a polyadenylation site.
dsDNA viruses make use of several mechanisms to replicate their genome. Bidirectional replication, in which two replication forks are established at a replication origin site and move in opposite directions of each other, is widely used. A rolling circle mechanism that produces linear strands while progressing in a loop around the circular genome is also common. Some dsDNA viruses use a strand displacement method whereby one strand is synthesized from a template strand, and a complementary strand is then synthesized from the prior synthesized strand, forming a dsDNA genome. Lastly, some dsDNA viruses are replicated as part of a process called replicative transposition whereby a viral genome in a host cell's DNA is replicated to another part of a host genome.
dsDNA viruses can be subdivided between those that replicate in the cell nucleus, and as such are relatively dependent on host cell machinery for transcription and replication, and those that replicate in the cytoplasm, in which case they have evolved or acquired their own means of executing transcription and replication. dsDNA viruses are also commonly divided between tailed dsDNA viruses, referring to members of the realm Duplodnaviria, usually the tailed bacteriophages of the order Caudovirales, and tailless or non-tailed dsDNA viruses of the realm Varidnaviria.
=== Single-stranded DNA viruses ===
The second Baltimore group of DNA viruses comprises those that have a single-stranded DNA genome. ssDNA viruses have the same manner of transcription as dsDNA viruses. However, because the genome is single-stranded, it is first made into a double-stranded form by a DNA polymerase upon entering a host cell. mRNA is then synthesized from the double-stranded form. The double-stranded form of ssDNA viruses may be produced either directly after entry into a cell or as a consequence of replication of the viral genome. Eukaryotic ssDNA viruses are replicated in the nucleus.
Most ssDNA viruses contain circular genomes that are replicated via rolling circle replication (RCR). ssDNA RCR is initiated by an endonuclease that binds to and cleaves the positive strand, allowing a DNA polymerase to use the negative strand as a template for replication. Replication progresses in a loop around the genome by extending the 3'-end of the positive strand and displacing the previous positive strand; the endonuclease then cleaves the positive strand again to create a standalone genome that is ligated into a circular loop. The new ssDNA may be packaged into virions or replicated by a DNA polymerase to form a double-stranded form for transcription or continuation of the replication cycle.
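As a toy illustration of the loop just described, the following Python sketch treats the negative strand of a circular genome as a string; the sequence is made up, and polymerase extension plus endonuclease cleavage are collapsed into concatemer generation and slicing, so this shows only the bookkeeping of RCR, not its chemistry:

```python
def rolling_circle(template: str, passes: int) -> list[str]:
    """Toy model of rolling circle replication on a circular ssDNA genome.

    `template` is the negative strand written 5'->3'. Continuous synthesis
    around the circle yields a concatemer of positive-sense copies; cleavage
    at each origin copy (position 0 here) then releases unit-length genomes
    that would be ligated into circles.
    """
    comp = str.maketrans("ACGT", "TGCA")
    positive_unit = template.translate(comp)[::-1]  # reverse complement
    concatemer = positive_unit * passes             # continuous synthesis
    n = len(positive_unit)
    # Endonuclease cleavage at each origin copy releases unit genomes.
    return [concatemer[i:i + n] for i in range(0, len(concatemer), n)]

# Hypothetical 12-nt genome, for illustration only.
print(rolling_circle("ATGCGTACGTTA", passes=3))
```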
Parvoviruses contain linear ssDNA genomes that are replicated via rolling hairpin replication (RHR), which is similar to RCR. Parvovirus genomes have hairpin loops at each end of the genome that repeatedly unfold and refold during replication to change the direction of DNA synthesis to move back and forth along the genome, producing numerous copies of the genome in a continuous process. Individual genomes are then excised from this molecule by the viral endonuclease. For parvoviruses, either the positive or negative sense strand may be packaged into capsids, varying from virus to virus.
Nearly all ssDNA viruses have positive sense genomes, but a few exceptions and peculiarities exist. The family Anelloviridae is the only ssDNA family whose members have negative sense genomes, which are circular. Parvoviruses, as previously mentioned, may package either the positive or negative sense strand into virions. Lastly, bidnaviruses package both the positive and negative linear strands.
== ICTV classification ==
The International Committee on Taxonomy of Viruses (ICTV) oversees virus taxonomy and organizes viruses at the basal level at the rank of realm. Virus realms correspond to the rank of domain used for cellular life but differ in that viruses within a realm do not necessarily share common ancestry, nor do the realms share common ancestry with each other. As such, each virus realm represents at least one instance of viruses coming into existence. Within each realm, viruses are grouped together based on shared characteristics that are highly conserved over time. Three DNA virus realms are recognized: Duplodnaviria, Monodnaviria, and Varidnaviria.
=== Duplodnaviria ===
Duplodnaviria contains dsDNA viruses that encode a major capsid protein (MCP) that has the HK97 fold. Viruses in the realm also share a number of other characteristics involving the capsid and capsid assembly, including an icosahedral capsid shape and a terminase enzyme that packages viral DNA into the capsid during assembly. Two groups of viruses are included in the realm: tailed bacteriophages, which infect prokaryotes and are assigned to the order Caudovirales, and herpesviruses, which infect animals and are assigned to the order Herpesvirales.
Duplodnaviria is a very ancient realm, perhaps predating the last universal common ancestor (LUCA) of cellular life. Its origins are not known, nor is it known whether it is monophyletic or polyphyletic. A characteristic feature is the HK97 fold found in the MCP of all members, which is found outside the realm only in encapsulins, a type of nanocompartment found in bacteria; this relation is not fully understood.
The relation between caudoviruses and herpesviruses is also uncertain: they may share a common ancestor, or herpesviruses may be a divergent clade from within the order Caudovirales. A common trait among duplodnaviruses is that they cause latent infections without replication while still being able to replicate in the future. Tailed bacteriophages are ubiquitous worldwide, important in marine ecology, and the subject of much research. Herpesviruses are known to cause a variety of epithelial diseases, including herpes simplex, chickenpox and shingles, and Kaposi's sarcoma.
=== Monodnaviria ===
Monodnaviria contains ssDNA viruses that encode an endonuclease of the HUH superfamily that initiates rolling circle replication and all other viruses descended from such viruses. The prototypical members of the realm are called CRESS-DNA viruses and have circular ssDNA genomes. ssDNA viruses with linear genomes are descended from them, and in turn some dsDNA viruses with circular genomes are descended from linear ssDNA viruses.
Viruses in Monodnaviria appear to have emerged on multiple occasions from archaeal and bacterial plasmids, a type of extra-chromosomal DNA molecule that self-replicates inside its host. The kingdom Shotokuvirae in the realm likely emerged from recombination events that merged the DNA of these plasmids and complementary DNA encoding the capsid proteins of RNA viruses.
CRESS-DNA viruses include three kingdoms that infect prokaryotes: Loebvirae, Sangervirae, and Trapavirae. The kingdom Shotokuvirae contains eukaryotic CRESS-DNA viruses and the atypical members of Monodnaviria. Eukaryotic monodnaviruses are associated with many diseases, and they include papillomaviruses and polyomaviruses, which cause many cancers, and geminiviruses, which infect many economically important crops.
=== Varidnaviria ===
Varidnaviria contains DNA viruses that encode MCPs with a jelly roll (JR) fold in which the fold is perpendicular to the surface of the viral capsid. Many members also share a variety of other characteristics, including a minor capsid protein that has a single JR fold, an ATPase that packages the genome during capsid assembly, and a common DNA polymerase. Two kingdoms are recognized: Helvetiavirae, whose members have MCPs with a single vertical JR fold, and Bamfordvirae, whose members have MCPs with two vertical JR folds.
Whether Varidnaviria is monophyletic or polyphyletic is not known, and the realm may predate the LUCA. The kingdom Bamfordvirae is likely derived from the other kingdom, Helvetiavirae, via fusion of two MCPs, yielding an MCP with two jelly roll folds instead of one. The single jelly roll (SJR) fold MCPs of Helvetiavirae show a relation to a group of proteins that contain SJR folds, including the Cupin superfamily and nucleoplasmins.
Marine viruses in Varidnaviria are ubiquitous worldwide and, like tailed bacteriophages, play an important role in marine ecology. Most identified eukaryotic DNA viruses belong to the realm. Notable disease-causing viruses in Varidnaviria include adenoviruses, poxviruses, and the African swine fever virus. Poxviruses have been highly prominent in the history of modern medicine, especially Variola virus, which caused smallpox. Many varidnaviruses can become endogenized in their host's genome; a peculiar example is that of virophages, which, after infecting a host, can protect the host against giant viruses.
=== Relation to Baltimore classification ===
dsDNA viruses are classified into three realms and include many taxa that are unassigned to a realm:
All viruses in Duplodnaviria are dsDNA viruses.
In Monodnaviria, members of the class Papovaviricetes are dsDNA viruses.
All viruses in Varidnaviria are dsDNA viruses.
The following taxa that are unassigned to a realm exclusively contain dsDNA viruses:
Orders: Ligamenvirales
Families: Ampullaviridae, Baculoviridae, Bicaudaviridae, Clavaviridae, Fuselloviridae, Globuloviridae, Guttaviridae, Halspiviridae, Hytrosaviridae, Nimaviridae, Nudiviridae, Ovaliviridae, Plasmaviridae, Polydnaviridae, Portogloboviridae, Thaspiviridae, Tristromaviridae
Genera: Dinodnavirus, Rhizidiovirus
ssDNA viruses are classified into one realm and include several families that are unassigned to a realm:
In Monodnaviria, all members except viruses in Papovaviricetes are ssDNA viruses.
The unassigned families Anelloviridae and Spiraviridae are ssDNA virus families.
Viruses in the family Finnlakeviridae contain ssDNA genomes. Finnlakeviridae is unassigned to a realm but is a proposed member of Varidnaviria.
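The realm-to-genome-type mapping in the lists above can be condensed into a simple lookup structure. A minimal Python sketch follows; the taxon names come from the lists above, but the dictionary layout is illustrative rather than any official ICTV format:

```python
# Baltimore group by taxon, summarizing the lists above
# (Group I = dsDNA, Group II = ssDNA).
BALTIMORE_GROUP = {
    "Duplodnaviria": "I",                     # all members are dsDNA
    "Varidnaviria": "I",                      # all members are dsDNA
    "Monodnaviria/Papovaviricetes": "I",      # the dsDNA exception in Monodnaviria
    "Monodnaviria (all other members)": "II",
    "Anelloviridae": "II",                    # unassigned ssDNA family
    "Spiraviridae": "II",                     # unassigned ssDNA family
    "Finnlakeviridae": "II",                  # unassigned; proposed varidnavirus
}

def genome_type(taxon: str) -> str:
    """Return 'dsDNA' or 'ssDNA' for a taxon in the summary above."""
    return {"I": "dsDNA", "II": "ssDNA"}[BALTIMORE_GROUP[taxon]]

print(genome_type("Duplodnaviria"))  # dsDNA
```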
== References ==
=== Bibliography ===
The central nervous system (CNS) controls most of the functions of the body and mind. It comprises the brain, spinal cord and the nerve fibers that branch off to all parts of the body. The CNS viral diseases are caused by viruses that attack the CNS. Existing and emerging viral CNS infections are major sources of human morbidity and mortality.
Viral infections usually begin in peripheral tissues and can invade the mammalian nervous system by spreading into the peripheral nervous system and, more rarely, the CNS. The CNS is protected by effective immune responses and multi-layer barriers, but some viruses enter with high efficiency through the bloodstream while others enter by directly infecting the nerves that innervate peripheral tissues.
Most viruses that enter the CNS are opportunistic or accidental pathogens, but some, such as herpesviruses and rabies virus, have evolved to enter the nervous system efficiently by exploiting neuronal cell biology. While acute viral diseases come on quickly, chronic viral conditions have long incubation periods inside the body. Their symptoms develop slowly and follow a progressive, fatal course.
== Types ==
== Symptoms ==
Characteristics of a viral infection can include pain, swelling, redness, impaired function, fever, drowsiness, confusion and convulsions.
== Diagnosis ==
=== Classification ===
Acute - the most common diseases caused by acute viral infections are encephalitis, flaccid paralysis, aseptic meningitis, and postinfectious encephalomyelitis.
Chronic - the most common diseases caused by chronic viral infections are subacute-sclerosing panencephalitis, progressive multifocal leukoencephalopathy, retrovirus disease and spongiform encephalopathies.
== Prevention ==
Prophylactic vaccination is available against poliomyelitis, measles, Japanese encephalitis, and rabies. Hyperimmune immunoglobulin has been used for prophylaxis of measles, herpes zoster virus, HSV-2, vaccinia, rabies, and some other infections in high-risk groups.
== Treatment ==
Treatments of proven efficacy are currently limited mostly to herpes viruses and human immunodeficiency virus. The herpes virus is of two types: herpes type 1 (HSV-1, or oral herpes) and herpes type 2 (HSV-2, or genital herpes). Although there is no cure, treatments can relieve the symptoms. Famvir, Zovirax, and Valtrex are among the drugs used, but these medications can only decrease pain and shorten the healing time. They can also decrease the total number of outbreaks. Warm baths also may relieve the pain of genital herpes.
Human immunodeficiency virus (HIV) infection is treated with a combination of medications that fight the infection, an approach called antiretroviral therapy (ART). ART is not a cure, but it can control the virus so that a person can live a longer, healthier life with a reduced risk of transmitting HIV to others. ART involves taking a combination of HIV medicines (called an HIV regimen) every day, exactly as prescribed. These medicines prevent HIV from multiplying (making copies of itself), which reduces the amount of HIV in the body. Having less HIV in the body gives the immune system a chance to recover; even though some HIV remains, the immune system becomes strong enough to fight off infections and cancers. Reducing the amount of HIV in the body also reduces the risk of transmitting the virus to others. ART is recommended for all people with HIV, regardless of how long they have had the virus or how healthy they are. If left untreated, HIV will attack the immune system and eventually progress to AIDS.
=== New therapies ===
Development of new therapies has been hindered by the lack of appropriate animal model systems for some important viruses and also because of the difficulty in conducting human clinical trials for diseases that are rare. Nonetheless, numerous innovative approaches to antiviral therapy are available including candidate thiazolide and purazinecarboxamide derivatives with potential broad-spectrum antiviral efficacy. New herpes virus drugs include viral helicase-primase and terminase inhibitors. A promising new area of research involves therapies based on enhanced understanding of host antiviral immune responses.
== Epidemiology ==
Many viral infections of the central nervous system occur in seasonal peaks or as epidemics, whereas others, such as herpes simplex encephalitis, are sporadic. In endemic areas, such an infection is mostly a disease of children, but as the disease spreads to new regions, or nonimmune travelers visit endemic regions, nonimmune adults are also affected.
== Children ==
Meningitis is very common in children. Newborns can develop herpesvirus infections through contact with infected secretions in the birth canal. Other viral infections are acquired by breathing air contaminated with virus-containing droplets exhaled by an infected person. Arbovirus infections, sometimes called epidemic encephalitis, are acquired from bites by infected insects. Viral central nervous system infections in newborns and infants usually begin with fever. The inability of infants to communicate directly makes it difficult to understand their symptoms. Newborns may have no other symptoms and may initially not otherwise appear ill. Infants older than a month or so typically become irritable and fussy and refuse to eat. Vomiting is common. Sometimes the soft spot on top of a newborn's head (fontanelle) bulges, indicating an increase in pressure on the brain. Because irritation of the meninges is worsened by movement, an infant with meningitis may cry more, rather than calm down, when picked up and rocked. Some infants develop a strange, high-pitched cry. Infants with encephalitis often have seizures or other abnormal movements. Infants with severe encephalitis may become lethargic and comatose and then die. To diagnose meningitis or encephalitis in children, doctors perform a spinal tap (lumbar puncture) to obtain cerebrospinal fluid (CSF) for laboratory analysis.
== Risks for other diseases ==
A study using electronic health records indicates that 45 viral exposures (22 of which were replicated in the UK Biobank, and not all of which are necessarily central nervous system viral diseases) can significantly elevate the risk of neurodegenerative disease, including up to 15 years after infection.
== See also ==
List of central nervous system infections
Aging brain § Immune system and fluids
== References ==
== External links ==
Hepadnaviridae is a family of viruses. Humans, apes, and birds serve as natural hosts. The family contains five genera. Its best-known member is hepatitis B virus. Diseases associated with this family include: liver infections, such as hepatitis, hepatocellular carcinomas (chronic infections), and cirrhosis. It is the sole accepted family in the order Blubervirales.
== Taxonomy ==
The following genera are recognized:
Avihepadnavirus
Orthohepadnavirus
Herpetohepadnavirus
Metahepadnavirus
Parahepadnavirus
== History and discovery ==
Although liver diseases transmissible among human populations were identified early in the history of medicine, the first known hepatitis with a viral etiological agent was hepatitis A, in the family Picornaviridae. Hepatitis B virus (HBV) was identified as an infection distinct from hepatitis A through its contamination of yellow fever vaccine, which contained HBV-infected human serum used as a stabilizing agent. HBV was identified as a new DNA virus in the 1960s, followed a couple of decades later by the discovery of the flavivirus hepatitis C. HBV was first identified in the laboratory as the "Australia antigen" by Blumberg and colleagues in the blood of an Aboriginal transfusion patient. This work earned Blumberg the 1976 Nobel Prize in Medicine.
== Genome ==
Hepadnaviruses have very small genomes of partially double-stranded, partially single-stranded circular DNA (pdsDNA). The genome consists of two strands: a longer negative-sense strand and a shorter, positive-sense strand of variable length. In the virion these strands are arranged such that the two ends of the long strand meet but are not covalently bonded together. The shorter strand overlaps this divide and is connected to the longer strand on either side of the split through a direct repeat (DR) segment that pairs the two strands together. In replication, the viral pdsDNA is converted in the host cell nucleus to covalently closed circular DNA (cccDNA) by the viral polymerase.
Replication involves an RNA intermediate, as in viruses belonging to Group VII of the Baltimore classification. Four main open reading frames (ORFs) are encoded, and the virus has four known genes which encode seven proteins: the core capsid protein, the viral polymerase, the surface antigens (preS1, preS2, and S), the X protein, and HBeAg. The X protein is thought to be non-structural. Its function and significance are poorly understood, but it is suspected to be associated with modulation of host gene expression.
=== Viral polymerase ===
Members of the family Hepadnaviridae encode their own polymerase, rather than co-opting host machinery as some other viruses do. This enzyme is unique among viral polymerases in that it has reverse transcriptase activity to convert RNA into DNA for genome replication (the only other human-pathogenic virus family encoding a polymerase with this capability is Retroviridae), RNase activity (used to destroy the pgRNA template after virion-packaged pgRNA has been reverse-transcribed, producing the pdsDNA genome), and DNA-dependent DNA polymerase activity (used to create cccDNA from pdsDNA in the first step of the replication cycle).
=== Envelope proteins ===
The hepatitis B envelope proteins are composed of subunits made from the viral preS1, preS2, and S genes. The L (for "large") envelope protein contains all three subunits. The M (for "medium") protein contains only preS2 and S. The S (for "small") protein contains only S. The genome portions encoding these envelope protein subunits share both the same frame and the same stop codon, generating nested transcripts on a single open reading frame. The preS1 is encoded first (closest to the 5' end), followed directly by the preS2 and the S. When a transcript is made from the beginning of the preS1 region, all three genes are included in the transcript and the L protein is produced. When the transcript starts after the preS1, at the beginning of the preS2, the final protein contains only the preS2 and S subunits and is therefore an M protein. The S protein, containing just the S subunit, is produced in the greatest quantity because it is encoded closest to the 3' end and comes from the shortest transcript. These envelope proteins can assemble independently of the viral capsid and genome into non-infectious virus-like particles that give the virus a pleomorphic appearance and promote a strong immune response in hosts.
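To make the nesting concrete, here is a minimal Python sketch of the shared reading frame; the coordinates and placeholder sequence are hypothetical and chosen only to keep the three start sites in frame (real HBV coordinates differ):

```python
# Hypothetical in-frame nucleotide offsets on the shared envelope ORF.
PRE_S1, PRE_S2, S_START, STOP = 0, 324, 489, 1167

def envelope_cds(start: int, orf: str) -> str:
    """Coding sequence from a given start codon to the shared stop codon."""
    return orf[start:STOP]

orf = "A" * STOP  # placeholder sequence, for illustration only

l_protein_cds = envelope_cds(PRE_S1, orf)   # preS1 + preS2 + S subunits
m_protein_cds = envelope_cds(PRE_S2, orf)   # preS2 + S subunits
s_protein_cds = envelope_cds(S_START, orf)  # S subunit only

# The nesting property: each shorter CDS is a suffix of the longer ones,
# because all three share one reading frame and one stop codon.
assert l_protein_cds.endswith(m_protein_cds)
assert m_protein_cds.endswith(s_protein_cds)
```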
== Replication ==
Hepadnaviruses replicate through an RNA intermediate (which they transcribe back into cDNA using reverse transcriptase). The reverse transcriptase becomes covalently linked to a short 3- or 4-nucleotide primer. Most hepadnaviruses will only replicate in specific hosts, and this makes experiments using in vitro methods very difficult.
The virus binds to specific receptors on cells and the core particle enters the cell cytoplasm. This is then translocated to the nucleus, where the partially double-stranded DNA is 'repaired' by the viral polymerase to form a complete circular dsDNA genome (called covalently closed circular DNA, or cccDNA). The genome then undergoes transcription by the host cell RNA polymerase and the pregenomic RNA (pgRNA) is sent out of the nucleus. The pgRNA is inserted into an assembled viral capsid containing the viral polymerase. Inside this capsid the genome is converted from RNA to pdsDNA through activity of the polymerase, first as an RNA-dependent DNA polymerase and subsequently as an RNase to eliminate the pgRNA transcript. These new virions either leave the cell to infect others or are immediately dismantled so the new viral genomes can enter the nucleus and amplify the infection. The virions that leave the cell egress through budding.
== Structure ==
Viruses in Hepadnaviridae are enveloped, with spherical geometries and T=4 symmetry. The diameter is around 42 nm. Genomes are circular, around 3.2 kb in length, and code for seven proteins.
== Evolution ==
Based on the presence of viral genomes in bird DNA, it appears that the hepadnaviruses evolved more than 82 million years ago. Birds may be the original hosts of the Hepadnaviridae, with mammals becoming infected after a host switch from birds.
Endogenous hepatitis B virus genomes have been described in crocodilian, snake, and turtle genomes, suggesting that these viruses have infected vertebrates for over 200 million years.
Hepadnaviruses have also been described in fish and amphibians, suggesting that this family has co-evolved with the vertebrates.
Phylogenetic trees suggest that the bird viruses originated from those infecting reptiles. Those affecting mammals appear to be more closely related to those found in fish.
=== Nackednaviridae ===
A proposed family of viruses – the Nackednaviridae – has been isolated from fish. This family has a similar genomic organisation to that of members of the family Hepadnaviridae. These two families separated over 400 million years ago, suggesting an ancient origin for the family Hepadnaviridae.
Viruses in the family have a non-enveloped, icosahedral structure with T=3 symmetry, smaller than typical Hepadnaviridae virions (about 5% of the latter show a T=3 symmetry). The circular, monopartite genome is about 3 kb, much like that of Hepadnaviridae. The envelope protein S is accordingly not present, likely the ancestral state according to sequence analysis. Unlike Hepadnaviridae viruses, which usually diverge alongside their hosts, viruses in the family jump hosts more frequently. The "type" for this family is African cichlid nackednavirus (ACNDV), formerly African cichlid hepadnavirus (ACHBV), a proposed and not-yet-accepted species.
== Cell tropism ==
Hepadnaviruses, as their "hepa" name implies, infect liver cells and cause hepatitis. This is true not only of the human pathogen hepatitis B virus but also of the hepadnaviruses that infect other organisms. The "adhesion" step of the dynamic phase, in which an exterior viral protein stably interacts with a host cell protein, determines cell tropism. In the case of HBV, the host receptor is the human sodium taurocholate co-transporting polypeptide (NTCP), a mediator of bile acid uptake, and the viral anti-receptor is the abundant S envelope protein (HBsAg).
== See also ==
Transmission of hepadnaviruses
== Notes ==
== References ==
== External links ==
ICTV Report: Hepadnaviridae
Viralzone: Hepadnaviridae
"Hepadnaviridae". NCBI Taxonomy Browser. 10404. | Wikipedia/Hepadnaviridae |
Bacilladnaviridae is a family of single-stranded DNA viruses that primarily infect diatoms.
== Characteristics ==
Similar to other eukaryotic ssDNA viruses, bacilladnaviruses are likely to replicate their genomes by the rolling-circle mechanism, initiated by the virus-encoded endonuclease (Rep). However, the Rep protein of bacilladnaviruses displays unique conserved motifs and, in phylogenetic trees, forms a monophyletic clade separated from other groups of ssDNA viruses. The capsid protein of bacilladnaviruses has the jelly-roll fold and is most closely related to the corresponding proteins from members of the family Nodaviridae, which have ssRNA genomes.
== Taxonomy ==
The following genera are recognized:
Aberdnavirus
Diatodnavirus
Keisodnavirus
Kieseladnavirus
Protobacilladnavirus
Puahadnavirus
Seawadnavirus
== References ==
== External links ==
Yuji Tomaru, Kensuke Toyoda, Hidekazu Suzuki, Tamotsu Nagumo, Kei Kimura & Yoshitake Takao: "New single-stranded DNA virus with a unique genomic structure that infects marine diatom Chaetoceros setoensis". Sci Rep 3, 3337 (26 November 2013). doi:10.1038/srep03337. Proposal of the new species "Chaetoceros setoensis DNA virus" (CsetDNAV), with a circular ssDNA genome.
Adnaviria is a realm of viruses that includes archaeal viruses that have a filamentous virion (i.e. body) and a linear, double-stranded DNA genome. The genome exists in A-form (A-DNA) and encodes a dimeric major capsid protein (MCP) that contains the SIRV2 fold, a type of alpha-helix bundle containing four helices. The virion consists of the genome encased in capsid proteins to form a helical nucleoprotein complex. For some adnaviruses, this helix is surrounded by a lipid membrane called an envelope. Some contain an additional protein layer between the nucleoprotein helix and the envelope. Complete virions are long and thin and may be flexible or stiff like a rod.
Adnaviria was established in 2020 after cryogenic electron microscopy showed that the viruses in the realm were related due to a shared MCP, A-DNA, and general virion structure. Viruses in Adnaviria infect hyperthermophilic and acidophilic archaea, i.e. archaea that inhabit very high temperature environments and highly acidic environments. Their A-DNA genome may be an adaptation to this extreme environment. Viruses in Adnaviria have potentially existed for a long time, as it is thought that they may have infected the last archaeal common ancestor. In general, they show no genetic relation to any viruses outside the realm.
== Etymology ==
Adnaviria takes the first part of its name, Adna-, from A-DNA, which refers to the A-form genomic DNA of all viruses in the realm. The second part, -viria is the suffix used for virus realms. The sole kingdom in the realm, Zilligvirae, is named after Wolfram Zillig (1925–2005) for his research on hyperthermophilic archaea, with the virus kingdom suffix -virae. The name of the sole phylum, Taleaviricota, is derived from Latin talea, which means "rod" and refers to the morphology of viruses in the realm, and the virus phylum suffix -viricota. Lastly, the sole class in the realm, Tokiviricetes, is constructed from Georgian toki (თოკი), which means "thread", and the suffix used for virus classes, -viricetes.
== Characteristics ==
Viruses in Adnaviria infect hyperthermophilic and acidophilic archaea and have linear, double-stranded DNA (dsDNA) genomes that range from about 16 to 56 kilobase pairs in length. The ends of their genomes contain inverted terminal repeats. Their genomes exist in A-form, also called A-DNA, a type of DNA that has a compact right-handed helix with more base pairs per turn than B-form DNA. The creation of genomic A-DNA is caused by an interaction with major capsid protein (MCP) dimers, which, during virion assembly, cover pre-genomic B-DNA to form a helical nucleoprotein complex that contains genomic A-DNA. The A-form genome may be an adaptation to allow DNA survival under extreme conditions. Furthermore, viruses in Adnaviria have high genome redundancy, which also might be an adaptation to survive such extreme environments.
The nucleoprotein helix is composed of asymmetric units of two MCPs. For rudiviruses, this is a homodimer, a molecule formed by the bonding of two identical MCPs. For lipothrixviruses and tristromaviruses, it is a heterodimer, a molecule formed by the bonding of two different MCPs that are paralogous, i.e. the result of a gene duplication event. The MCPs of viruses in Adnaviria contain a folded structure consisting of a four-helix alpha-helix bundle, called the SIRV2 fold after Sulfolobus islandicus rod-shaped virus 2 (SIRV2). The four-helix bundle is found at the end (C-terminus) of the protein, while the beginning (N-terminus) has an extended α-helical arm that wraps tightly around the dsDNA genome to change it to A-form. Variations in the protein structure exist, but the same base structure is retained in all adnaviruses.
Adnaviruses have filamentous virions, i.e. they are long, thin, and cylindrical. Lipothrixviruses and ungulaviruses have flexible virions about 410–2,200 nanometers (nm) in length and 24–38 nm in width in which the nucleoprotein helix is surrounded by a lipid envelope. Tristromaviruses, about 400 by 32 nm, likewise have flexible virions with an envelope, and they contain an additional protein sheath layer between the nucleoprotein complex and the envelope. Rudiviruses have stiff, non-enveloped, rod-like virions about 600–900 by 23 nm. At both ends of the virion, lipothrixviruses and ungulaviruses have mop- or claw-like structures connected to a collar, whereas rudiviruses and tristromaviruses have plugs at each end from which bundles of thin filaments emanate.
== Phylogenetics ==
Viruses in Adnaviria have potentially existed for a long time, as it is thought that they may have infected the last archaeal common ancestor. In general, they show no genetic relation to viruses outside the realm. The only genes that are shared with other viruses are glycosyltransferases, ribbon-helix-helix transcription factors, and anti-CRISPR proteins. Adnaviruses are morphologically similar to non-archaeal filamentous viruses but their virions are built from different capsid proteins. Viruses of Clavaviridae, a family of filamentous archaeal viruses, likewise possess MCPs and virion organization that show no relation to the MCPs and virion organization of viruses in Adnaviria and for that reason are excluded from the realm.
== Classification ==
Adnaviria is monotypic down to the rank of its sole class, Tokiviricetes, which has three orders. This taxonomy is shown hereafter:
Realm: Adnaviria
Kingdom: Zilligvirae
Phylum: Taleaviricota
Class: Tokiviricetes
Order: Ligamenvirales
Order: Maximonvirales
Order: Primavirales
== History ==
Viruses of Adnaviria began to be discovered in the 1980s by Wolfram Zillig and his colleagues. To discover these viruses, Zillig developed the methods used to culture their hosts. The first of these to be described were TTV1, TTV2, and TTV3 in 1983. TTV1 was classified as the first lipothrixvirus but is now classified as a tristromavirus. SIRV2, a rudivirus, became a model for studying virus-host interactions after its discovery in 1998. The families Lipothrixviridae and Rudiviridae were then united under the order Ligamenvirales in 2012 based on evidence of their relation. Cryogenic electron microscopy would later show in 2020 that the MCPs of tristromaviruses contained a SIRV2-like fold like ligamenviruses, which provided justification for establishing Adnaviria in the same year.
== See also ==
List of higher virus taxa
== References ==
== External links ==
Media related to Adnaviria at Wikimedia Commons
Dinodnavirus is a genus of viruses that infect dinoflagellates. This genus belongs to the clade of nucleocytoplasmic large DNA viruses. The only species in the genus is Heterocapsa circularisquama DNA virus 01.
== Name ==
The name, Dinodnavirus, is a combination of dino, from the host dinoflagellate, and dna, from its DNA genome.
== Virology ==
The virus has an icosahedral capsid 180–210 nanometers in diameter.
The genome is a single molecule of double-stranded DNA of about 356 kilobases.
It infects the dinoflagellate Heterocapsa circularisquama.
During replication virions emerge from a specific cytoplasm compartment – the 'viroplasm' – which is created by the virus.
== Taxonomy ==
DNA studies have shown that the genus belongs in the family Asfarviridae.
== References ==
Castleman disease (CD) describes a group of rare lymphoproliferative disorders that involve enlarged lymph nodes, and a broad range of inflammatory symptoms and laboratory abnormalities. Whether Castleman disease should be considered an autoimmune disease, cancer, or infectious disease is currently unknown.
Castleman disease includes at least three distinct subtypes: unicentric Castleman disease (UCD), human herpesvirus 8 associated multicentric Castleman disease (HHV-8-associated MCD), and idiopathic multicentric Castleman disease (iMCD). These are differentiated by the number and location of affected lymph nodes and the presence of human herpesvirus 8, a known causative agent in a portion of cases. Correctly classifying the Castleman disease subtype is important, as the three subtypes vary significantly in symptoms, clinical findings, disease mechanism, treatment approach, and prognosis. All forms involve overproduction of cytokines and other inflammatory proteins by the body's immune system as well as characteristic abnormal lymph node features that can be observed under the microscope. In the United States, approximately 4,300 to 5,200 new cases are diagnosed each year.
Castleman disease is named after Benjamin Castleman, who first described the disease in 1954. The Castleman Disease Collaborative Network is the largest organization dedicated to accelerating research and treatment for Castleman disease as well as improving patient care.
== Classification ==
Castleman disease (CD) can involve one or more enlarged lymph nodes in a single region of the body (unicentric CD, UCD) or it can involve multiple enlarged lymph node regions (multicentric CD, MCD). Doctors classify the disease into different categories based on the number of enlarged lymph node regions and the underlying cause. There are four established subtypes of Castleman disease:
=== Unicentric Castleman disease ===
Unicentric Castleman disease (UCD) involves a single enlarged lymph node or multiple enlarged lymph nodes within a single region of the body that display microscopic features consistent with Castleman disease. It is also sometimes called localized Castleman disease.
The exact cause of UCD is unknown, but it appears to be due to a genetic change that occurs in the lymph node tissue, most similar to a benign tumor. In about half of UCD cases, individuals exhibit no symptoms (asymptomatic). Sometimes symptoms are secondary to compression of surrounding structures by rapidly enlarging lymph nodes.
Some UCD patients, however, experience systemic inflammatory symptoms such as fever, fatigue, night sweats, weight loss, and skin rash as well as laboratory abnormalities such as elevated C-reactive protein.
Surgery is considered by experts to be the first-line treatment option for all cases of UCD. Sometimes, removing the enlarged lymph node(s) is not possible. If surgical excision is not possible, treatment is recommended for symptomatic patients. If symptoms are due to compression, then rituximab is recommended. If symptoms are due to an inflammatory syndrome, then anti-interleukin-6 (IL-6) therapy is recommended. If these treatments are not effective, then radiation may be needed.
=== Multicentric Castleman disease (MCD) ===
In this form, patients have multiple regions of enlarged lymph nodes with characteristic microscopic features, flu-like symptoms, and organ dysfunction due to excessive cytokines or inflammatory proteins. MCD is further classified into three categories based on underlying cause: POEMS-associated MCD, HHV-8-associated MCD, and idiopathic MCD (iMCD).
=== POEMS-associated MCD ===
A cancerous cell population found in patients with POEMS syndrome (polyneuropathy, organomegaly, endocrinopathy, monoclonal plasma cell disorder, and skin changes) can cause MCD in a fraction of patients by producing cytokines that initiate a cytokine storm. In patients who have both POEMS syndrome and MCD, treatment should be directed at the POEMS syndrome.
=== HHV-8-associated multicentric Castleman disease (HHV-8-MCD) ===
HHV-8-associated MCD patients have multiple regions of enlarged lymph nodes and episodic inflammatory symptoms due to uncontrolled infection with HHV-8. HHV-8-associated MCD is most commonly diagnosed in HIV infected or otherwise immunocompromised individuals that are not able to control HHV-8 infection. Thus, HHV-8-associated MCD patients may experience additional symptoms related to their HIV infection or other conditions. First-line treatment of HHV-8-associated MCD is rituximab, a drug used to eliminate a type of immune cell called the B lymphocyte. It is highly effective for HHV-8-associated MCD, but occasionally antivirals and/or cytotoxic chemotherapies are needed.
=== Idiopathic multicentric Castleman disease (iMCD) ===
Idiopathic multicentric Castleman disease (iMCD), which is the most common form of MCD, has no known cause. There is no evidence of POEMS syndrome, HHV-8, or any associated cancer or infectious disease. Though all forms of MCD involve excessive production of cytokines and a cytokine storm, iMCD has important differences in symptoms, disease course, and treatment from POEMS-associated MCD and HHV-8-associated MCD. First-line treatment for iMCD is anti-IL-6 therapy with siltuximab (or tocilizumab, if siltuximab is not available). Siltuximab is the only FDA-approved treatment for iMCD, and patients who respond to siltuximab tend to have long-term responses. In critically ill patients, chemotherapy and corticosteroids are recommended if the patient is demonstrating disease progression while on siltuximab. Approximately half of iMCD patients do not improve with anti-IL-6 therapy. In patients for whom siltuximab is not effective, other treatments such as rituximab and sirolimus can be used.
iMCD can be further sub-classified into three clinical subgroups:
iMCD with TAFRO Syndrome (iMCD-TAFRO): characterized by acute episodes of Thrombocytopenia, Anasarca, Fever, Renal dysfunction or myelofibrosis, and Organomegaly (TAFRO syndrome).
iMCD with idiopathic plasmacytic lymphadenopathy (iMCD-IPL): characterized by thrombocytosis, hypergammaglobulinemia, and a more chronic disease course.
iMCD, not otherwise specified (iMCD-NOS): is diagnosed in iMCD patients who do not have iMCD-TAFRO or iMCD-IPL.
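The subtype definitions above amount to a small decision tree. The following Python sketch captures that logic only for orientation; the field names are hypothetical, and actual diagnosis rests on full consensus criteria, including lymph node histology, not on this simplification:

```python
def castleman_subtype(enlarged_regions: int, poems: bool, hhv8_positive: bool,
                      tafro: bool = False, ipl: bool = False) -> str:
    """Simplified decision tree for Castleman disease subtypes."""
    if enlarged_regions == 1:
        return "UCD (unicentric)"
    if poems:
        return "POEMS-associated MCD"
    if hhv8_positive:
        return "HHV-8-associated MCD"
    # Idiopathic MCD, sub-classified into clinical subgroups.
    if tafro:
        return "iMCD-TAFRO"
    if ipl:
        return "iMCD-IPL"
    return "iMCD-NOS"

print(castleman_subtype(enlarged_regions=3, poems=False,
                        hhv8_positive=False, tafro=True))  # iMCD-TAFRO
```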
== Pathology ==
Castleman disease is defined by a range of characteristic features seen on microscopic analysis (histology) of tissue from enlarged lymph nodes. Variations in the lymph node tissues of patients with CD have led to four histological classifications:
Plasmacytic: increased number of follicles with large hyperplastic germinal centers and sheetlike plasmacytosis (increased number of plasma cells). Germinal centers may also show regressed features.
Hyaline vascular: regressed germinal centers, follicular dendritic cell prominence or dysplasia, hypervascularity in interfollicular regions, sclerotic vessels, prominent mantle zones with an "onion-skin" appearance.
Hypervascular: similar to hyaline vascular features, but seen in iMCD rather than UCD. Includes regressed germinal centers, follicular dendritic cell prominence, hypervascularity in interfollicular regions, and prominent mantle zones with an "onion-skin" appearance.
Mixed: presence of a combination of hyaline vascular/hypervascular and plasmacytic features in the same lymph node.
UCD most commonly demonstrates hyaline vascular features, but plasmacytic features or a mix of features may also be seen. iMCD more commonly demonstrates plasmacytic features, but hypervascular features or a mix of features are also seen. All cases of HHV-8-associated MCD are thought to demonstrate plasmablastic features, similar to plasmacytic features but with plasmablasts present. The clinical utility of subtyping Castleman disease by histologic features is uncertain, as histologic subtypes do not consistently predict disease severity or treatment response. Guidelines recommend against using histologic subtype to guide treatment decisions. Staining for latency-associated nuclear antigen (LANA-1), a marker of HHV-8 infection, should be performed in all forms of Castleman disease but is positive only in HHV-8-associated MCD.
Diseases other than Castleman disease can present with similar histologic findings in lymph node tissue, including:
Infectious causes: Epstein-Barr virus, human immunodeficiency virus, tuberculosis
Autoimmune diseases: Systemic lupus erythematosus, rheumatoid arthritis
Lymphoproliferative disorders: lymphoma, autoimmune lymphoproliferative syndrome
== History ==
Unicentric Castleman disease was first described in a case series by Benjamin Castleman in 1956. By 1984, a number of case reports had been published describing a multicentric variant of the disease, with some reports describing an association with Kaposi's sarcoma. In 1995, the association between HHV-8 and Castleman disease was described in patients with HIV. Formal diagnostic criteria and a definition of the disease were established in 2016, allowing for better understanding and the ability to appropriately track and research CD. In 2017, international consensus diagnostic criteria for idiopathic multicentric Castleman disease (iMCD) were established for the first time. In 2018, the first treatment guidelines for iMCD were established. In 2020, the first evidence-based diagnostic criteria and treatment guidelines were established for unicentric Castleman disease.
World Castleman Disease Day was established in 2018 and is held every year on July 23. The date combines the month of Benjamin Castleman's initial case series describing the disease, published in July 1956, and the day of the month on which the diagnostic criteria for idiopathic multicentric Castleman disease were published in the journal Blood (March 23, 2017).
== Castleman Disease Collaborative Network ==
The Castleman Disease Collaborative Network (CDCN), founded in 2012, is the largest organization dedicated to Castleman disease. It is a global initiative dedicated to research and treatment of Castleman disease (CD) and to improving survival for all patients with CD. The CDCN works toward this by facilitating collaboration among the global research community, mobilizing resources, strategically investing in high-impact research, and supporting patients and their loved ones.
== References ==
== Further reading ==
Fajgenbaum, David (2019). Chasing My Cure: A Doctor's Race to Turn Hope into Action; a Memoir. New York: Ballantine Books. ISBN 9781524799618. OCLC 1144129598. Book by the founder of the Castleman Disease Collaborative Network.
Fajgenbaum, David (31 October 2023). "His Rare Disease's Cure Was Sitting on the Pharmacy Shelf - interview with David Fajgenbaum" (Interview). Interviewed by Eric J. Topol. Medscape. | Wikipedia/Castleman_disease |
Orf is a farmyard pox, a type of zoonosis. It causes small pustules, primarily in the skin of sheep and goats, but can also occur on the hands of humans. A pale halo forms around a red centre. It may persist for several weeks before crusting and then either resolves or leaves a granuloma. There is usually only one non-painful lesion, but there can be more. Lymph nodes may also become swollen.
It is caused by a parapoxvirus and can occur in humans who handle infected animals or contaminated objects. One third of cases may develop erythema multiforme. A person can be infected again after the infection resolves.
Treatment options once infection has occurred are limited. Injecting the lesion with cidofovir or applying imiquimod has been studied, but excision of the pustules is sometimes required. The vaccine used in sheep to prevent orf is live and has been known to cause disease in humans.
The disease is endemic in livestock herds worldwide. An outbreak emerged in southwest Ethiopia between October 2019 and May 2020.
== Humans ==
Orf is a zoonotic disease, meaning humans can contract this disorder through direct contact with infected sheep and goats or with fomites carrying the orf virus. It causes a purulent-appearing papule locally and generally no systemic symptoms. Infected locations can include the finger, hand, arm, face and even the penis (caused by infection either from contact with the hand during urination or from bestiality). Consequently, it is important to observe good personal hygiene and to wear gloves when treating infected animals. It may appear similar to cowpox and pseudocowpox.
While orf is usually a benign self-limiting illness which resolves in 3–6 weeks, in the immunocompromised it can be very progressive and even life-threatening. One percent topical cidofovir has been successfully used in a few patients with progressive disease. Serious damage may be inflicted on the eye if it is infected by orf, even among otherwise healthy patients. The virus can survive in the soil for at least six months.
== Other animals ==
Orf is primarily a disease of sheep and goats, although it has also been reported as a natural disease in humans, steenbok, alpacas, chamois, tahrs, reindeer, musk oxen, dogs, cats, mountain goats, bighorn sheep, Dall sheep, and red squirrels.
=== Sheep and goats ===
It has been recorded since the late 19th century and has been reported from most sheep- or goat-raising areas, including those in Europe, the Middle East, the United States, Africa, Asia, South America, Canada, New Zealand and Australia. Orf is spread by fomites and direct contact. In some environments, infection is introduced through scratches from thistles, both growing and felled. Symptoms include papules and pustules on the lips and muzzle, and less commonly in the mouth of young lambs and on the eyelids, feet, and teats of ewes. The lesions progress to thick crusts which may bleed. Orf in the mouths of lambs may prevent suckling and cause weight loss, and can infect the udder of the mother ewe, potentially leading to mastitis. Sheep are prone to reinfection. Occasionally the infection can be extensive and persistent if the animal does not produce an immune response.
A live virus vaccine (ATCvet code: QI04AD01 (WHO)) is made from scab material and is usually given to ewes at the age of two months, but typically only given to lambs when there is a confirmed outbreak. The live vaccine can cause minor cases of orf in humans exposed to it.
In sheep and goats, the lesions mostly appear on or near the hairline and elsewhere on the lips and muzzle. In some cases the lesions appear on and in the nostrils, around the eyes, on the thigh, coronet, vulva, udder, and axilla. In rare cases, mostly involving young lambs, lesions are found on the tongue, gums, roof of the mouth and the esophagus. It has also been reported a number of times to cause lesions in the rumen. In one case a severe form of orf virus was shown to have caused an outbreak involving the gastrointestinal tract, lungs, heart, as well as the buccal cavity, cheeks, tongue and lips. Another severe case reportedly involved pharyngitis, genital lesions and infection of the hooves, which led to lameness and, in some cases, sloughing of the hoof.
More typically, sheep will become free of orf within a week or so as the disease runs its course. Sheep custodians can assist by ensuring infected lambs receive sufficient milk and separating out the infected stock to slow down cross-transmission to healthy animals. It is advisable for those handling infected animals to wear disposable gloves to prevent cross infection and self-infection. A veterinarian must be contacted if there is a risk of misdiagnosis with other, more serious conditions.
== See also ==
Ecthyma
List of cutaneous conditions
List of immunofluorescence findings for autoimmune bullous conditions
Imiquimod
Cidofovir
== References ==
== External links ==
Farm Health Online: Disease Management of Orf Virus in Sheep | Wikipedia/Orf_(disease) |
Respiratory diseases, or lung diseases, are pathological conditions affecting the organs and tissues that make gas exchange possible in air-breathing animals. They include conditions of the respiratory tract, such as the trachea, bronchi, bronchioles, alveoli, pleurae, pleural cavity, and the nerves and muscles of respiration. Respiratory diseases range from mild and self-limiting, such as the common cold, influenza, and pharyngitis, to life-threatening diseases such as bacterial pneumonia, pulmonary embolism, tuberculosis, acute asthma, lung cancer, and severe acute respiratory syndromes, such as COVID-19. Respiratory diseases can be classified in many different ways, including by the organ or tissue involved, by the type and pattern of associated signs and symptoms, or by the cause of the disease.
The study of respiratory disease is known as pulmonology. A physician who specializes in respiratory disease is known as a pulmonologist, a chest medicine specialist, a respiratory medicine specialist, a respirologist or a thoracic medicine specialist.
== Obstructive lung disease ==
Asthma, chronic bronchitis, bronchiectasis and chronic obstructive pulmonary disease (COPD) are all obstructive lung diseases characterised by airway obstruction, which limits the amount of air that can enter the alveoli because inflammation constricts the bronchial tree. Obstructive lung diseases are often identified because of symptoms and diagnosed with pulmonary function tests such as spirometry. Many obstructive lung diseases are managed by avoiding triggers (such as dust mites or smoking), with symptom control such as bronchodilators, and with suppression of inflammation (such as through corticosteroids) in severe cases. One common cause of COPD, including emphysema and chronic bronchitis, is tobacco smoking, and common causes of bronchiectasis include severe infections and cystic fibrosis. The definitive cause of asthma is not yet known.
== Restrictive lung diseases ==
Restrictive lung diseases are a category of respiratory diseases characterized by a loss of lung compliance, causing incomplete lung expansion and increased lung stiffness, such as in infants with respiratory distress syndrome. Restrictive lung diseases can be divided into two categories: those caused by intrinsic factors and those caused by extrinsic factors. Restrictive lung diseases resulting from intrinsic factors originate within the lungs themselves, such as tissue death due to inflammation or toxins. Conversely, restrictive lung diseases caused by extrinsic factors result from conditions originating outside the lungs, such as neuromuscular dysfunction and irregular chest wall movements.
== Chronic respiratory disease ==
Chronic respiratory diseases are long-term diseases of the airways and other structures of the lung. They are characterized by high inflammatory cell recruitment (neutrophils) and/or a destructive cycle of infection (e.g. mediated by Pseudomonas aeruginosa). Some of the most common are asthma, chronic obstructive pulmonary disease, and acute respiratory distress syndrome. Most chronic respiratory diseases are not curable; however, various forms of treatment that help dilate major air passages and improve shortness of breath can help control symptoms and increase quality of life.
=== Telerehabilitation for chronic respiratory disease ===
The latest evidence suggests that primary pulmonary rehabilitation and maintenance rehabilitation delivered through telerehabilitation achieve outcomes similar to those of centre-based rehabilitation for people with chronic respiratory disease. While no safety issues were identified, the findings are based on evidence limited by a small number of studies.
== Respiratory tract infections ==
Infections can affect any part of the respiratory system. They are traditionally divided into upper respiratory tract infections and lower respiratory tract infections.
=== Upper respiratory tract infection ===
The upper airway is defined as all the structures connecting the glottis to the mouth and nose. The most common upper respiratory tract infection is the common cold. However, infections of specific organs of the upper respiratory tract such as sinusitis, tonsillitis, otitis media, pharyngitis and laryngitis are also considered upper respiratory tract infections.
Epiglottitis is a bacterial infection of the epiglottis, which can cause life-threatening swelling, with a mortality rate of 7% in adults and 1% in children. Haemophilus influenzae remains the primary cause even with vaccination. Streptococcus pyogenes can also cause epiglottitis. Symptoms include drooling, stridor, difficulty breathing and swallowing, and a hoarse voice.
Croup (laryngotracheobronchitis) is a viral infection of the vocal cords typically lasting five to six days. The main symptoms are a barking cough and low-grade fever. On an X-ray, croup can be recognized by the "steeple sign", a narrowing of the trachea. It most commonly occurs in winter months in children between the ages of 3 months and 5 years. A severe form caused by bacteria is called bacterial tracheitis.
Tonsillitis is swelling of the tonsils caused by a bacterial or viral infection. This inflammation can lead to airway obstruction. Tonsillitis can give rise to a peritonsillar abscess, the most common deep infection of the upper airway, which occurs primarily in young adults. It causes swelling in one of the tonsils, pushing the uvula to the unaffected side. Diagnosis is usually made based on the presentation and examination. Symptoms generally include fever, sore throat, trouble swallowing, and a "hot potato" voice.
=== Lower respiratory tract infection ===
The most common lower respiratory tract infection is pneumonia, an infection of the lungs which is usually caused by bacteria, particularly Streptococcus pneumoniae in Western countries. Worldwide, tuberculosis is an important cause of pneumonia. Other pathogens such as viruses and fungi can cause pneumonia, for example severe acute respiratory syndrome, COVID-19 and pneumocystis pneumonia. Pneumonia may develop complications such as a lung abscess, a round cavity in the lung caused by the infection, or may spread to the pleural cavity.
Poor oral care may be a contributing factor to lower respiratory disease, as bacteria from gum disease may travel through airways and into the lungs. There is also a co-occurrence between acute eosinophilic pneumonia, desquamative interstitial pneumonia and tobacco use.
=== Upper and lower respiratory tract infection ===
Primary ciliary dyskinesia is a genetic disorder causing the cilia to not move in a coordinated manner. This causes chronic respiratory infections, cough, and nasal congestion. This can lead to bronchiectasis, which can cause life-threatening breathing issues.
== Tumors ==
=== Malignant tumors ===
Malignant tumors of the respiratory system, particularly primary carcinomas of the lung, are a major health problem responsible for 15% of all cancer diagnoses and 30% of all cancer deaths. The majority of respiratory system cancers are attributable to smoking tobacco.
The major histological types of respiratory system cancer are:
Small cell lung cancer
Non-small cell lung cancer
Adenocarcinoma of the lung
Squamous cell carcinoma of the lung
Large cell lung carcinoma
Other lung cancers (carcinoid, Kaposi's sarcoma, melanoma)
Lymphoma
Head and neck cancer
Pleural mesothelioma, almost always caused by exposure to asbestos dust.
In addition, since many cancers spread via the bloodstream and the entire cardiac output passes through the lungs, it is common for cancer metastases to occur within the lung. Breast cancer may invade directly through local spread, and through lymph node metastases. After metastasis to the liver, colon cancer frequently metastasizes to the lung. Prostate cancer, germ cell cancer and renal cell carcinoma may also metastasize to the lung.
Treatment of respiratory system cancer depends on the type of cancer. Surgical removal of part of a lung (lobectomy, segmentectomy, or wedge resection) or of an entire lung (pneumonectomy), along with chemotherapy and radiotherapy, are all used. The chance of surviving lung cancer depends on the cancer stage at the time of diagnosis and to some extent on the histology, and is only about 14–17% overall. In the case of metastases to the lung, treatment can occasionally be curative, but only in certain, rare circumstances.
=== Benign tumors ===
Benign tumors are relatively rare causes of respiratory disease. Examples of benign tumors are:
Pulmonary hamartoma
Congenital malformations such as pulmonary sequestration and congenital cystic adenomatoid malformation (CCAM).
== Pleural cavity diseases ==
Pleural cavity diseases include pleural mesothelioma, which is mentioned above.
A collection of fluid in the pleural cavity is known as a pleural effusion. This may be due to fluid shifting from the bloodstream into the pleural cavity due to conditions such as congestive heart failure and cirrhosis. It may also be due to inflammation of the pleura itself as can occur with infection, pulmonary embolus, tuberculosis, mesothelioma and other conditions.
A pneumothorax is a hole in the pleura covering the lung allowing air in the lung to escape into the pleural cavity. The affected lung "collapses" like a deflated balloon. A tension pneumothorax is a particularly severe form of this condition where the air in the pleural cavity cannot escape, so the pneumothorax keeps getting bigger until it compresses the heart and blood vessels, leading to a life-threatening situation.
== Pulmonary vascular disease ==
Pulmonary vascular diseases are conditions that affect the pulmonary circulation. Examples are:
Pulmonary embolism, a blood clot that forms in a vein, breaks free, travels through the heart and lodges in the lungs (thromboembolism). Large pulmonary emboli can be fatal, causing sudden death. A number of other substances can also embolise (travel through the blood stream) to the lungs, but they are much rarer: fat embolism (particularly after bony injury), amniotic fluid embolism (with complications of labour and delivery), and air embolism (iatrogenic – caused by invasive medical procedures).
Pulmonary arterial hypertension, elevated pressure in the pulmonary arteries. Most commonly it is idiopathic (i.e. of unknown cause) but it can be due to the effects of another disease, particularly COPD. This can lead to strain on the right side of the heart, a condition known as cor pulmonale.
Pulmonary edema, leakage of fluid from capillaries of the lung into the alveoli (or air spaces). It is usually due to congestive heart failure.
Pulmonary hemorrhage, inflammation and damage to capillaries in the lung resulting in blood leaking into the alveoli. This may cause blood to be coughed up. Pulmonary hemorrhage can be due to auto-immune disorders such as granulomatosis with polyangiitis and Goodpasture's syndrome.
== Neonatal diseases ==
Pulmonary diseases also impact newborns, and these disorders are often distinct from those that affect adults.
Infant respiratory distress syndrome most commonly occurs within six hours of birth, in about 1% of all births in the United States. The main risk factor is prematurity, with the likelihood rising to 71% in infants under 750 g. Other risk factors include being the infant of a diabetic mother (IDM), method of delivery, fetal asphyxia, genetics, prolonged rupture of membranes (PROM), maternal toxemia, chorioamnionitis, and male sex. The widely accepted pathophysiology is that respiratory distress syndrome is caused by insufficient surfactant production and immature lung and vascular development. The lack of surfactant makes the lungs atelectatic, causing a ventilation–perfusion mismatch, lowered compliance, and increased airway resistance. This causes hypoxia and respiratory acidosis, which can lead to pulmonary hypertension. It has a ground-glass appearance on an x-ray. Symptoms can include tachypnea, nasal flaring, paradoxical chest movement, grunting, and subcostal retractions.
Bronchopulmonary dysplasia (BPD) is a condition that develops after birth, usually as a result of mechanical ventilation and oxygen use. It occurs almost exclusively in premature infants and is characterized by inflammation and damage to the alveoli and lung vasculature. Complications from BPD can follow a patient into adulthood. As children, patients may experience learning disabilities, pulmonary hypertension, and hearing problems; as adults, they have an increased likelihood of asthma and exercise intolerance.
Meconium aspiration syndrome occurs in full-term or post-term infants who aspirate meconium. Risk factors include a diabetic mother, fetal hypoxia, precipitous delivery, and maternal high blood pressure. Its diagnosis is based on meconium-stained amniotic fluid at delivery and staining of the skin, nails, and umbilical cord. Aspiration can cause airway obstruction, air-trapping, pneumonia, lung inflammation, and inactivated surfactant. It presents as patchy atelectasis and hyperinflation on an x-ray, with a pneumothorax or pneumomediastinum also possible.
Persistent pulmonary hypertension of the newborn (PPHN) is a syndrome that occurs from an abnormal transition to extra-uterine life. It is marked by elevated pulmonary vascular resistance and vasoconstriction causing a right-to-left shunt of blood through the foramen ovale or ductus arteriosus. The three main causes of PPHN are parenchymal diseases such as meconium aspiration syndrome, idiopathic causes, and hypoplastic vasculature, as in a diaphragmatic hernia. It eventually resolves in most infants. PPHN is the only syndrome for which inhaled nitric oxide is approved by the FDA.
Transient tachypnea of the newborn is caused by retention of alveolar fluid in the lungs. It commonly occurs in infants delivered via caesarean section without the onset of labor, because absorption of the amniotic fluid in the lungs has not yet commenced. Other risk factors are male sex, macrosomia, multiple gestation, and maternal asthma. It usually presents with tachypnea and increased work of breathing. On an x-ray, diffuse infiltrates, fluid in the interlobar fissures, and sometimes pleural effusions can be seen. It is a diagnosis of exclusion because of its similarity to other diseases, and CPAP is frequently used to help push the lung fluid into the pulmonary vasculature.
Pulmonary interstitial emphysema is the condition of air escaping from overdistended alveoli into the pulmonary interstitium. It is a rare disease that occurs most often in premature infants, although it can also appear in adults. It often presents as a slow deterioration with a need for increased ventilatory support. Chest x-ray is the standard for diagnosis, on which it appears as linear or cystic translucencies extending to the edges of the lungs.
Bronchiolitis is the swelling and buildup of mucus in the bronchioles. It is usually caused by respiratory syncytial virus (RSV), which is spread when an infant touches the nose or throat fluids of someone infected. The virus infects the cells, causing ciliary dysfunction and cell death. The debris, edema, and inflammation eventually lead to the symptoms. It is the most common reason for hospital admission of children under the age of one year. It can present widely, from a mild respiratory infection to respiratory failure. Since there is no medication to treat the disease, it is managed supportively with fluids and oxygen.
== Diagnosis ==
Respiratory diseases may be investigated by performing one or more of the following tests:
Biopsy of the lung or pleura
Blood test
Bronchoscopy
Chest X-ray
CT scan, including high-resolution computed tomography
Culture of microorganisms from secretions such as sputum
Ultrasound scanning can be useful to detect fluid such as pleural effusion
Pulmonary function test
Ventilation–perfusion scan
== Epidemiology ==
Respiratory disease is a common and significant cause of illness and death around the world. In the US, approximately one billion common colds occur each year. A study found that in 2010, there were approximately 6.8 million emergency department visits for respiratory disorders in the U.S. for patients under the age of 18. In 2012, respiratory conditions were the most frequent reasons for hospital stays among children.
In the UK, approximately 1 in 7 individuals is affected by some form of chronic lung disease, most commonly asthma and chronic obstructive pulmonary disease, which includes chronic bronchitis and emphysema.
Respiratory diseases (including lung cancer) are responsible for over 10% of hospitalizations and over 16% of deaths in Canada.
In 2011, respiratory disease with ventilator support accounted for 93.3% of ICU utilization in the United States.
== References ==
== External links == | Wikipedia/Lung_disease |
Polydnaviriformidae (PDV) is a family of insect viriforms; members are known as polydnaviruses. There are two genera in the family: Bracoviriform and Ichnoviriform. Polydnaviruses form a symbiotic relationship with parasitoid wasps. Ichnoviriforms (IV) occur in ichneumonid wasps and bracoviriforms (BV) in braconid wasps. The larvae of wasps in both of those groups are themselves parasitic on Lepidoptera (moths and butterflies), and the polydnaviruses are important in circumventing the immune response of their parasitized hosts. Little or no sequence homology exists between BV and IV, suggesting that the two genera have been evolving independently for a long time.
== Taxonomy ==
Bracoviriform
Ichnoviriform
== Structure ==
Viruses in Polydnaviridae are enveloped, with prolate ellipsoid and cylindrical geometries. Genomes are circular and segmented, composed of multiple segments of double-stranded, superhelical DNA packaged in capsid proteins; the segments are around 2.0–31 kb in length.
== Life cycle ==
Viral replication is nuclear. DNA-templated transcription is the method of transcription. The virus exits the host cell by nuclear pore export.
Parasitoid wasps serve as hosts for the virus, and Lepidoptera serve as hosts for these wasps. The female wasp injects one or more eggs into its host along with a quantity of virus. The virus and wasp are in a mutualistic symbiotic relationship: expression of viral genes prevents the wasp's host's immune system from killing the wasp's injected egg and causes other physiological alterations that ultimately cause the parasitized host to die. Transmission routes are parental.
== Biology ==
These viruses are part of a unique biological system consisting of an endoparasitic wasp (parasitoid), a host (usually lepidopteran) larva, and the virus. The full genome of the virus is endogenous, dispersed among the genome of the wasp. The virus only replicates in a particular part of the ovary, called the calyx, of pupal and adult female wasps. The virus is injected along with the wasp egg into the body cavity of a lepidopteran host caterpillar and infects cells of the caterpillar. The infection does not lead to replication of new viruses; rather, it affects the caterpillar's immune system, as the virion carries virulence genes instead of viral replication genes. It can be considered a type of viral vector.
Without the virus infection, phagocytic hemocytes (blood cells) will encapsulate and kill the wasp egg and larvae, but the immune suppression caused by the virus allows survival of the wasp egg and larvae, leading to hatching and complete development of the immature wasp in the caterpillar. Additionally, genes expressed from the polydnavirus in the parasitised host alter host development and metabolism to be beneficial for the growth and survival of the parasitoid larva.
=== Potential carrier subfamilies ===
Ichneumonoidea
Braconidae
Microgastrinae
Miracinae
Cheloninae
Cardiochilinae
Mendeselinae
Khoikhoiinae
Ichneumonidae
Campopleginae
Banchinae
== Characteristics ==
Both genera of PDV share certain characteristics:
the virus particles of each contain multiple segments of dsDNA (double-strand, or "normal" DNA, as contrasted with positive- or negative-sense single-strand DNA or RNA, as found in some other viruses) with each segment containing only part of the full genome (much like chromosomes in eukaryotic organisms)
the genome of the virus has eukaryotic characteristics such as the presence of introns (common for insect genes but rare for viruses) and a low coding density
the genome of each virus is integrated into the host wasp genome
the genome is organized in several multiple-member gene families (which differ between bracoviruses and ichnoviruses)
the virus particles are only produced in specific cell types in the female wasp's reproductive organs
The morphologies of the two genera are different when observed by electron microscopy. Ichnoviruses tend to be ovoid while bracoviruses are short rods. The virions of Bracoviruses are released by cell lysis; the virions of Ichnoviruses are released by budding.
== Evolution ==
Nucleic acid analysis suggests a very long association of the viruses with the wasps (estimated 73.7 million years ± 10 million).
=== Older wasp-derived theory ===
Two proposals have been advanced for how the wasp/virus association developed. The first suggests that the virus is derived from wasp genes. Many parasitoids that do not use PDVs inject proteins that provide many of the same functions, that is, a suppression of the immune response to the parasite egg. In this model, the braconid and ichneumonid wasps packaged genes for these functions into the viruses—essentially creating a gene-transfer system that results in the caterpillar producing the immune-suppressing factors. In this scenario, the PDV structural proteins (capsids) were probably "borrowed" from existing viruses.
=== Current endogenous virus theory ===
The alternative proposal suggests that ancestral wasps developed a beneficial association with an existing virus that eventually led to the integration of the virus into the wasp's genome. Following integration, the genes responsible for virus replication and the capsids were eventually no longer included in the PDV genome. This hypothesis is supported by the distinct morphological differences between IV and BV, suggesting different ancestral viruses for the two genera. BV likely evolved from a nudivirus, specifically a betanudivirus, ~100 million years ago. IV has a less clear origin: although earlier reports found a protein, p44/p53, with structural similarities to ascovirus proteins, the link was not confirmed in later studies. As a result, the current opinion is that IV originated from a yet-unidentified viral family with a weak link to the NCLDVs. In either case, both genera were formed through a single integration event in their respective wasp lineages.
The two groups of viruses in the family are not in fact phylogenetically related, suggesting that this taxon may need revision.
== Effect on host immunity ==
In the host, several mechanisms of the insect immune system can be triggered when the wasp lays its eggs and while the parasitic wasp is developing. When a large body (a wasp egg, or a small particle used experimentally) is introduced into an insect's body, the classic immune reaction is encapsulation by hemocytes. An encapsulated body can also be melanised in order to asphyxiate it, thanks to another type of hemocyte, which uses the phenoloxidase pathway to produce melanin. Small particles can be phagocytosed, and macrophage cells can then also be melanised in a nodule. Finally, insects can also respond with production of antiviral peptides.
Polydnaviruses protect the hymenopteran larvae from the host immune system, acting at different levels.
First, they can disable or destroy hemocytes. The polydnavirus associated with Cotesia rubecula codes for a protein, CrV1, that denatures actin filaments in hemocytes, so those cells become less able to move and adhere to the larvae. Microplitis demolitor bracovirus (MdBV) induces apoptosis of hemocytes through its gene PTP-H2. It also decreases the adhesion capacity of hemocytes through its gene Glc1.8, which also inhibits phagocytosis.
Polydnaviruses can also act on melanisation: MdBV interferes with the production of phenoloxidase.
Finally, polydnaviruses can produce viral ankyrins that interfere with the production of antiviral peptides. In some ichnoviruses, Vankyrin can also prevent apoptosis, the extreme reaction of a cell to block viral propagation.
The ichnoviruses produce proteins called vinnexins, homologues of the insect innexins, which are the structural units of gap junctions. These proteins may alter intercellular communication, which could explain the disruption of the encapsulation process.
== Virus-like particles ==
Another strategy used by parasitoid Hymenoptera to protect their offspring is the production of virus-like particles (VLPs). VLPs are similar to viruses in their structure, but they do not carry any nucleic acid. For example, Venturia canescens (Ichneumonidae) and Leptopilina sp. (Figitidae) produce VLPs.
VLPs can be compared to polydnaviruses because they are secreted in the same way, and both act to protect the larvae against the host's immune system. V. canescens VLPs (VcVLP1, VcVLP2, VcNEP ...) are produced in the calyx cells before moving to the oviducts. Work in 2006 found no link to any known virus and assumed a cellular origin; more recent comparisons link them to highly reshuffled, domesticated nudivirus sequences. This link gives rise to the name Venturia canescens endogenous nudivirus (VcENV), an alphanudivirus closely related to NlENV found in Nilaparvata lugens.
VLPs protect the Hymenoptera larvae locally, whereas polydnaviruses can have a more global effect. VLPs allow the larvae to escape the immune system: the larva is not recognised as harmful by its host, or the immune cells cannot interact with it because of the VLPs. Venturia canescens uses VLPs instead of polydnaviruses because its ichnovirus has been deactivated.
The wasp Leptopilina heterotoma secretes VLPs that are able to penetrate lamellocytes via specific receptors and then modify the shape and surface properties of those cells so they become ineffective, keeping the larvae safe from encapsulation. The Leptopilina VLPs, or mixed-strategy extracellular vesicles (MSEVs), contain some secretion systems. Their evolutionary picture is less clear, but a recently reported virus, L. boulardi filamentous virus (LbFV), shows significant similarities.
== Micro-RNA ==
MicroRNAs are small RNA fragments produced in host cells by a specific enzymatic mechanism. They promote viral RNA destruction: microRNAs attach to viral RNA because the sequences are complementary, and the complex is then recognised by an enzyme that destroys it. This phenomenon is known as PTGS (post-transcriptional gene silencing) or RNAi (RNA interference).
It is interesting to consider the microRNA phenomenon in the polydnavirus context. Several hypotheses can be formulated:
Braconidae carry nudivirus-related genes in their genome, so they may be able to produce microRNA against nudiviruses as a form of innate immunity.
Wasps perhaps use microRNA to control the viral genes they carry.
Polydnaviruses may also use PTGS to interfere with the host's gene expression.
PTGS is also used in organisms' development, relying on the same enzymes as antiviral gene silencing, so if the host uses PTGS against a polydnavirus, this might also affect its development.
== See also ==
Mutualism
== References ==
== External links ==
http://research.biology.arizona.edu/mosquito/willott/507/polydnaviruses.html
Viralzone: Polydnaviridae
ICTV | Wikipedia/Polydnaviridae |
Hand, foot, and mouth disease (HFMD) is a common infection caused by a group of enteroviruses. It typically begins with a fever and feeling generally unwell, followed a day or two later by flat discolored spots or bumps, which may blister, on the hands, feet, and mouth, and occasionally on the buttocks and groin. Signs and symptoms normally appear 3–6 days after exposure to the virus. The rash generally resolves on its own in about a week.
The viruses that cause HFMD are spread through close personal contact, through the air from coughing, and via the feces of an infected person. Contaminated objects can also spread the disease. Coxsackievirus A16 is the most common cause, and enterovirus 71 is the second-most common cause. Other strains of coxsackievirus and enterovirus can also be responsible. Some people may carry and pass on the virus despite having no symptoms of disease. Other animals are not involved. Diagnosis can often be made based on symptoms. Occasionally, a throat or stool sample may be tested for the virus.
Most people with hand, foot, and mouth disease get better on their own in 7 to 10 days. Most cases require no specific treatment. No antiviral medication or vaccine is available, but development efforts are underway. For fever and for painful mouth sores, over-the-counter pain medications such as ibuprofen may be used, though aspirin should be avoided in children. The illness is usually not serious. Occasionally, intravenous fluids are given to children who are dehydrated. Very rarely, viral meningitis or encephalitis may complicate the disease. Because HFMD is normally mild, some jurisdictions allow children to continue to go to child care and schools as long as they have no fever or uncontrolled drooling with mouth sores, and as long as they feel well enough to participate in classroom activities.
HFMD occurs in all areas of the world. It often occurs in small outbreaks in nursery schools or kindergartens. Large outbreaks have been occurring in Asia since 1997. It usually occurs during the spring, summer, and fall months. Typically it occurs in children less than five years old but can occasionally occur in adults. HFMD should not be confused with foot-and-mouth disease (also known as hoof-and-mouth disease), which mostly affects livestock.
== Signs and symptoms ==
Common constitutional signs and symptoms of HFMD include fever, nausea, vomiting, feeling tired, generalized discomfort, loss of appetite, and irritability in infants and toddlers. Skin lesions frequently develop in the form of a rash of flat discolored spots and bumps which may be followed by vesicular sores with blisters on palms of the hands, soles of the feet, buttocks, and sometimes on the lips. The rash is rarely itchy for children, but can be extremely itchy for adults. Painful facial ulcers, blisters, or lesions may also develop in or around the nose or mouth. HFMD usually resolves on its own after 7–10 days. Most cases of the disease are relatively harmless, but complications including encephalitis, meningitis, and paralysis that mimics the neurological symptoms of polio can occur.
== Cause ==
The viruses that cause the disease are of the Picornaviridae family. Coxsackievirus A16 is the most common cause of HFMD. Enterovirus 71 (EV-71) is the second-most common cause. Many other strains of coxsackievirus and enterovirus can also be responsible.
=== Transmission ===
HFMD is highly contagious and is transmitted by nasopharyngeal secretions such as saliva or nasal mucus, by direct contact, or by fecal–oral transmission. It is possible to be infectious for days to weeks after the symptoms have resolved.
Childcare settings are the most common places for HFMD to be contracted because of toilet training, diaper changes, and children's propensity to put their hands into their mouths. HFMD is contracted through nose and throat secretions such as saliva, sputum, and nasal mucus as well as fluid in blisters, and stool.
== Diagnosis ==
A diagnosis usually can be made by the presenting signs and symptoms alone. If the diagnosis is unclear, a throat swab or stool specimen may be taken to identify the virus by culture. The common incubation period (the time between infection and onset of symptoms) ranges from three to six days. Early detection of HFMD is important in preventing an outbreak in the pediatric population.
== Prevention ==
Preventive measures include avoiding direct contact with infected individuals (including keeping infected children home from school), proper cleaning of shared utensils, disinfecting contaminated surfaces, and proper hand hygiene. These measures are effective in decreasing the transmission of the viruses responsible for HFMD.
Protective habits include hand washing and disinfecting surfaces in play areas. Breastfeeding has also been shown to decrease rates of severe HFMD, though it does not reduce the risk of infection.
=== Vaccine ===
A vaccine known as the EV71 vaccine has been available in China to prevent HFMD since December 2015. No vaccine is currently available in the United States.
== Treatment ==
Medications are usually not needed as hand, foot, and mouth disease is a viral disease that typically resolves on its own. Currently, there is no specific curative treatment for hand, foot, and mouth disease. Disease management typically focuses on achieving symptomatic relief. Pain from the sores may be eased with the use of analgesic medications. Infection in older children, adolescents, and adults is typically mild and lasts approximately 1 week, but may occasionally run a longer course. Fever reducers can help decrease body temperature.
A minority of individuals with hand, foot, and mouth disease may require hospital admission due to complications such as inflammation of the brain, inflammation of the meninges, or acute flaccid paralysis. Non-neurologic complications such as inflammation of the heart, fluid in the lungs, or bleeding into the lungs may also occur.
== Complications ==
Complications from the viral infections that cause HFMD are rare but require immediate medical treatment if present. HFMD infections caused by Enterovirus 71 tend to be more severe and are more likely to have neurologic or cardiac complications including death than infections caused by Coxsackievirus A16. Viral or aseptic meningitis can occur with HFMD in rare cases and is characterized by fever, headache, stiff neck, or back pain. The condition is usually mild and clears without treatment; however, hospitalization for a short time may be needed. Other serious complications of HFMD include encephalitis (inflammation of the brain), or flaccid paralysis in rare circumstances.
Fingernail and toenail loss have been reported in children 4–8 weeks after having HFMD. The relationship between HFMD and the reported nail loss is unclear; however, it is temporary and nail growth resumes without treatment.
Minor complications, such as dehydration, can occur when mouth sores cause discomfort with the intake of food and fluids.
== Epidemiology ==
Hand, foot and mouth disease most commonly occurs in children under the age of 10 and more often under the age of 5, but it can also affect adults with varying symptoms. It tends to occur in outbreaks during the spring, summer, and autumn seasons. This is believed to be due to heat and humidity improving spread. HFMD is more common in rural areas than urban areas; however, socioeconomic status and hygiene levels need to be considered. Poor hygiene is a risk factor for HFMD.
=== Outbreaks ===
In 1997, an outbreak occurred in Sarawak, Malaysia, with 600 cases; over 30 children died.
In 1998, an outbreak in Taiwan affected mainly children. There were 405 severe cases, and 78 children died. The total number of cases in that epidemic is estimated to have been 1.5 million.
In 2008, an outbreak in China, beginning in March in Fuyang, Anhui, led to 25,000 infections and 42 deaths by May 13. Similar outbreaks were reported in Singapore (more than 2,600 cases as of April 20, 2008), Vietnam (2,300 cases, 11 deaths), Mongolia (1,600 cases), and Brunei (1,053 cases from June–August 2008).
In 2009, 17 children died in an outbreak during March and April in China's eastern Shandong Province, and 18 children died in the neighboring Henan Province. Out of 115,000 reported cases in China from January to April, 773 were severe and 50 were fatal.
In 2010, an outbreak occurred in southern China's Guangxi Autonomous Region as well as in Guangdong, Henan, Hebei, and Shandong provinces. By March, 70,756 children had been infected and 40 had died from the disease. By June, the peak season for the disease, 537 had died.
World Health Organization reporting for January to October 2011 recorded 1,340,259 cases in China, down approximately 315,000 from the 1,654,866 cases reported in 2010, with new cases peaking in June. There were 437 deaths, down from 537 in 2010.
In December 2011, the California Department of Public Health identified a severe form of the virus, coxsackievirus A6 (CVA6), in which nail loss in children is common.
In 2012, an outbreak of an unusual type of the disease occurred in Alabama, United States. It occurred in a season when it is not usually seen and affected teenagers and older adults. There were some hospitalizations due to the disease but no reported deaths.
In 2012 in Cambodia, 52 of 59 reviewed cases of children who had reportedly died (as of July 9, 2012) of a mysterious disease were attributed to a virulent form of HFMD. Although a significant degree of uncertainty exists with reference to the diagnosis, the WHO report states, "Based on the latest laboratory results, a significant proportion of the samples tested positive for enterovirus 71 (EV-71), which causes hand foot and mouth disease (HFMD). The EV-71 virus has been known to generally cause severe complications amongst some patients."
In China, HFMD infected 1,520,274 people in 2012, with up to 431 deaths reported by the end of July.
In 2018, more than 50,000 cases occurred in a nationwide outbreak in Malaysia, with two deaths also reported.
=== India 2022 ===
An outbreak of an illness referred to as tomato fever or tomato flu was identified in the Kollam district on May 6, 2022. The illness is endemic to Kerala, India, and gets its name from the red, round blisters it causes, which look like tomatoes. The disease may be a new variant of viral HFMD or an effect of chikungunya or dengue fever; "flu" may be a misnomer.
The condition mainly affects children under the age of five. An article in The Lancet states that the appearance of the blisters is similar to that seen in Mpox, and the illness is not thought to be related to SARS-CoV-2. Symptoms, treatment and prevention are similar to HFMD.
== History ==
HFMD cases were first described clinically in Canada and New Zealand in 1957. The disease was termed "hand, foot and mouth disease" by Thomas Henry Flewett after a similar outbreak in 1960.
== Research ==
Novel antiviral agents to prevent and treat infection with the viruses responsible for HFMD are currently under development. Preliminary studies have shown inhibitors of the EV-71 viral capsid to have potent antiviral activity.
== References ==
== External links ==
Media related to Hand, foot and mouth disease at Wikimedia Commons
Highly contagious Hand, foot and mouth disease killing China's children at Wikinews | Wikipedia/Hand,_foot_and_mouth_disease |
=== Single-stranded DNA viruses ===
The second Baltimore group of DNA viruses are those that have a single-stranded DNA genome. ssDNA viruses have the same manner of transcription as dsDNA viruses. However, because the genome is single-stranded, it is first made into a double-stranded form by a DNA polymerase upon entering a host cell. mRNA is then synthesized from the double-stranded form. The double-stranded form of ssDNA viruses may be produced either directly after entry into a cell or as a consequence of replication of the viral genome. Eukaryotic ssDNA viruses are replicated in the nucleus.
Most ssDNA viruses contain circular genomes that are replicated via rolling circle replication (RCR). ssDNA RCR is initiated by an endonuclease that binds to and cleaves the positive strand, allowing a DNA polymerase to use the negative strand as a template for replication. Replication progresses in a loop around the genome by means of extending the 3'-end of the positive strand, displacing the prior positive strand, and the endonuclease cleaves the positive strand again to create a standalone genome that is ligated into a circular loop. The new ssDNA may be packaged into virions or replicated by a DNA polymerase to form a double-stranded form for transcription or continuation of the replication cycle.
Parvoviruses contain linear ssDNA genomes that are replicated via rolling hairpin replication (RHR), which is similar to RCR. Parvovirus genomes have hairpin loops at each end of the genome that repeatedly unfold and refold during replication to change the direction of DNA synthesis to move back and forth along the genome, producing numerous copies of the genome in a continuous process. Individual genomes are then excised from this molecule by the viral endonuclease. For parvoviruses, either the positive or negative sense strand may be packaged into capsids, varying from virus to virus.
Nearly all ssDNA viruses have positive sense genomes, but a few exceptions and peculiarities exist. The family Anelloviridae is the only ssDNA family whose members have negative sense genomes, which are circular. Parvoviruses, as previously mentioned, may package either the positive or negative sense strand into virions. Lastly, bidnaviruses package both the positive and negative linear strands.
== ICTV classification ==
The International Committee on Taxonomy of Viruses (ICTV) oversees virus taxonomy and organizes viruses at the basal level at the rank of realm. Virus realms correspond to the rank of domain used for cellular life but differ in that viruses within a realm do not necessarily share common ancestry, nor do the realms share common ancestry with each other. As such, each virus realm represents at least one instance of viruses coming into existence. Within each realm, viruses are grouped together based on shared characteristics that are highly conserved over time. Three DNA virus realms are recognized: Duplodnaviria, Monodnaviria, and Varidnaviria.
=== Duplodnaviria ===
Duplodnaviria contains dsDNA viruses that encode a major capsid protein (MCP) that has the HK97 fold. Viruses in the realm also share a number of other characteristics involving the capsid and capsid assembly, including an icosahedral capsid shape and a terminase enzyme that packages viral DNA into the capsid during assembly. Two groups of viruses are included in the realm: tailed bacteriophages, which infect prokaryotes and are assigned to the order Caudovirales, and herpesviruses, which infect animals and are assigned to the order Herpesvirales.
Duplodnaviria is a very ancient realm, perhaps predating the last universal common ancestor (LUCA) of cellular life. Its origins are not known, nor is it known whether the realm is monophyletic or polyphyletic. A characteristic feature is the HK97 fold found in the MCP of all members, which is found outside the realm only in encapsulins, a type of nanocompartment found in bacteria; this relation is not fully understood.
The relation between caudoviruses and herpesviruses is also uncertain: they may share a common ancestor, or herpesviruses may be a divergent clade within the order Caudovirales. A common trait among duplodnaviruses is that they can cause latent infections, persisting without replication while remaining able to replicate in the future. Tailed bacteriophages are ubiquitous worldwide, important in marine ecology, and the subject of much research. Herpesviruses are known to cause a variety of epithelial diseases, including herpes simplex, chickenpox and shingles, and Kaposi's sarcoma.
=== Monodnaviria ===
Monodnaviria contains ssDNA viruses that encode an endonuclease of the HUH superfamily that initiates rolling circle replication and all other viruses descended from such viruses. The prototypical members of the realm are called CRESS-DNA viruses and have circular ssDNA genomes. ssDNA viruses with linear genomes are descended from them, and in turn some dsDNA viruses with circular genomes are descended from linear ssDNA viruses.
Viruses in Monodnaviria appear to have emerged on multiple occasions from archaeal and bacterial plasmids, a type of extra-chromosomal DNA molecule that self-replicates inside its host. The kingdom Shotokuvirae in the realm likely emerged from recombination events that merged the DNA of these plasmids and complementary DNA encoding the capsid proteins of RNA viruses.
CRESS-DNA viruses include three kingdoms that infect prokaryotes: Loebvirae, Sangervirae, and Trapavirae. The kingdom Shotokuvirae contains eukaryotic CRESS-DNA viruses and the atypical members of Monodnaviria. Eukaryotic monodnaviruses are associated with many diseases, and they include papillomaviruses and polyomaviruses, which cause many cancers, and geminiviruses, which infect many economically important crops.
=== Varidnaviria ===
Varidnaviria contains DNA viruses that encode MCPs with a jelly roll (JR) fold, in which the fold is perpendicular to the surface of the viral capsid. Many members also share a variety of other characteristics, including a minor capsid protein with a single JR fold, an ATPase that packages the genome during capsid assembly, and a common DNA polymerase. Two kingdoms are recognized: Helvetiavirae, whose members have MCPs with a single vertical JR fold, and Bamfordvirae, whose members have MCPs with two vertical JR folds.
Whether Varidnaviria is monophyletic or polyphyletic is not known, and the realm may predate the LUCA. The kingdom Bamfordvirae likely derived from the other kingdom, Helvetiavirae, via fusion of two MCP genes, yielding an MCP with two jelly roll folds instead of one. The single jelly roll (SJR) fold MCPs of Helvetiavirae show a relation to a group of proteins that contain SJR folds, including the Cupin superfamily and nucleoplasmins.
Marine viruses in Varidnaviria are ubiquitous worldwide and, like tailed bacteriophages, play an important role in marine ecology. Most identified eukaryotic DNA viruses belong to the realm. Notable disease-causing viruses in Varidnaviria include adenoviruses, poxviruses, and the African swine fever virus. Poxviruses have been highly prominent in the history of modern medicine, especially Variola virus, which caused smallpox. Many varidnaviruses can become endogenized in their host's genome; a peculiar example is the virophages, which, after infecting a host, can protect the host against giant viruses.
=== Baltimore classification ===
dsDNA viruses are classified into three realms and include many taxa that are unassigned to a realm:
All viruses in Duplodnaviria are dsDNA viruses.
In Monodnaviria, members of the class Papovaviricetes are dsDNA viruses.
All viruses in Varidnaviria are dsDNA viruses.
The following taxa that are unassigned to a realm exclusively contain dsDNA viruses:
Orders: Ligamenvirales
Families: Ampullaviridae, Baculoviridae, Bicaudaviridae, Clavaviridae, Fuselloviridae, Globuloviridae, Guttaviridae, Halspiviridae, Hytrosaviridae, Nimaviridae, Nudiviridae, Ovaliviridae, Plasmaviridae, Polydnaviridae, Portogloboviridae, Thaspiviridae, Tristromaviridae
Genera: Dinodnavirus, Rhizidiovirus
ssDNA viruses are classified into one realm and include several families that are unassigned to a realm:
In Monodnaviria, all members except viruses in Papovaviricetes are ssDNA viruses.
The unassigned families Anelloviridae and Spiraviridae are ssDNA virus families.
Viruses in the family Finnlakeviridae contain ssDNA genomes. Finnlakeviridae is unassigned to a realm but is a proposed member of Varidnaviria.
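Because these assignments amount to a simple lookup table, they can be made concrete in code. The following minimal Python sketch transcribes the realm-level rules from the lists above; the data structure and function names are hypothetical illustrations, not an ICTV resource.

```python
# Illustrative only: Baltimore-group assignments for the DNA virus realms,
# transcribed from the ICTV/Baltimore lists above.
DSDNA_TAXA = {
    "Duplodnaviria": "all members",
    "Varidnaviria": "all members",
    "Monodnaviria": "class Papovaviricetes only",
}

def baltimore_group(realm: str, taxon: str | None = None) -> str:
    """Return Baltimore group 'I' (dsDNA) or 'II' (ssDNA)."""
    if realm in ("Duplodnaviria", "Varidnaviria"):
        return "I"
    if realm == "Monodnaviria":
        # Papovaviricetes is the dsDNA exception inside Monodnaviria.
        return "I" if taxon == "Papovaviricetes" else "II"
    raise ValueError("taxon unassigned to a realm; see the lists above")

print(baltimore_group("Monodnaviria", "Papovaviricetes"))  # -> I
print(baltimore_group("Monodnaviria"))                     # -> II
```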
== References ==
=== Bibliography === | Wikipedia/Single-stranded_DNA_viruses |
Boston exanthem disease is a cutaneous condition that first occurred as an epidemic in Boston in 1951. It is caused by echovirus 16. The disease tends to afflict children more often than adults, although some adults can become infected, and the disease has never been fatal. It shows some clinical similarity to rubella and to human herpesvirus 6 infection.
== Outbreaks ==
=== Boston, 1951 ===
The first known outbreak of Boston exanthem disease occurred in the late summer of 1951 in Boston, Massachusetts. The initial cases were thought to be rubella, but the clinical features were different: patients exhibited no Koplik's spots, the course of the infection was shorter, and the skin lesions differed from those of rubella. Two physicians, Franklin A. Neva of the University of Pittsburgh and Ilse J. Gorbach, investigated the outbreak. Through surveys sent to physicians, 18 cases (15 children and 3 adults) were identified and specimens collected.
=== Pittsburgh, 1954 ===
An outbreak was first identified in a suburb of Pittsburgh, Pennsylvania, in June 1954. Investigation in this suburb revealed an additional 17 cases. After area physicians were notified, another 7 cases were identified in other parts of the city. Cases occurred in both children and adults, with one adult hospitalized.
== See also ==
Eruptive pseudoangiomatosis
Skin lesion
== References == | Wikipedia/Boston_exanthem_disease |
Liver disease, or hepatic disease, is any of many diseases of the liver. If long-lasting it is termed chronic liver disease. Although the diseases differ in detail, liver diseases often have features in common.
== Liver diseases ==
There are more than a hundred different liver diseases. Some of the most common are:
Fascioliasis, a parasitic infection of the liver caused by a liver fluke of the genus Fasciola, mostly Fasciola hepatica.
Hepatitis, inflammation of the liver, is caused by various viruses (viral hepatitis), by some liver toxins (e.g., alcoholic hepatitis), by autoimmunity (autoimmune hepatitis), or by hereditary conditions.
Alcoholic liver disease is a hepatic manifestation of alcohol overconsumption, including fatty liver disease, alcoholic hepatitis, and cirrhosis. Analogous terms such as "drug-induced" or "toxic" liver disease are also used to refer to disorders caused by various drugs.
Fatty liver disease (hepatic steatosis) is a reversible condition where large vacuoles of triglyceride fat accumulate in liver cells. Non-alcoholic fatty liver disease is a spectrum of disease associated with obesity and metabolic syndrome.
Hereditary diseases that cause damage to the liver include hemochromatosis, involving accumulation of iron in the body, and Wilson's disease. Liver damage is also a clinical feature of alpha 1-antitrypsin deficiency and glycogen storage disease type II.
In transthyretin-related hereditary amyloidosis, the liver produces a mutated transthyretin protein which has severe neurodegenerative or cardiopathic effects. Liver transplantation can be curative.
Gilbert's syndrome, a genetic disorder of bilirubin metabolism found in a small percent of the population, can cause mild jaundice.
Cirrhosis is the formation of fibrous tissue (fibrosis) in the place of liver cells that have died due to a variety of causes, including viral hepatitis, alcohol overconsumption, and other forms of liver toxicity. Cirrhosis causes chronic liver failure.
Primary liver cancer most commonly manifests as hepatocellular carcinoma or cholangiocarcinoma; rarer forms include angiosarcoma and hemangiosarcoma of the liver. (Many liver malignancies are secondary lesions that have metastasized from primary cancers in the gastrointestinal tract and other organs, such as the kidneys and lungs.)
Primary biliary cirrhosis is a serious autoimmune disease of the bile capillaries.
Primary sclerosing cholangitis is a serious chronic inflammatory disease of the bile duct, which is believed to be autoimmune in origin.
Budd–Chiari syndrome is the clinical picture caused by occlusion of the hepatic vein.
== Signs and symptoms ==
Some of the signs and symptoms of a liver disease are the following:
Jaundice
Confusion and altered consciousness caused by hepatic encephalopathy.
Thrombocytopenia and coagulopathy.
Risk of bleeding, particularly in the gastrointestinal tract
== Mechanisms ==
Liver diseases can develop through several mechanisms:
=== DNA damage ===
One general mechanism, increased DNA damage, is shared by some of the major liver diseases, including infection by hepatitis B virus or hepatitis C virus, heavy alcohol consumption, and obesity.
Viral infection by hepatitis B virus or hepatitis C virus causes an increase of reactive oxygen species. The increase in intracellular reactive oxygen species is about 10,000-fold with chronic hepatitis B virus infection and 100,000-fold following hepatitis C virus infection. This increase in reactive oxygen species causes inflammation and more than 20 types of DNA damage. Oxidative DNA damage is mutagenic and also causes epigenetic alterations at the sites of DNA repair. Epigenetic alterations and mutations affect the cellular machinery in ways that may cause cells to replicate at a higher rate or to avoid apoptosis, and thus contribute to liver disease. By the time accumulating epigenetic and mutational changes eventually cause hepatocellular carcinoma, epigenetic alterations appear to have an even larger role in carcinogenesis than mutations. Only one gene, TP53, is mutated in more than 20% of liver cancers, while 41 genes each have hypermethylated promoters (repressing gene expression) in more than 20% of liver cancers.
Alcohol consumption in excess causes a build-up of acetaldehyde. Acetaldehyde and free radicals generated by metabolizing alcohol induce DNA damage and oxidative stress. In addition, activation of neutrophils in alcoholic liver disease contributes to the pathogenesis of hepatocellular damage by releasing reactive oxygen species (which can damage DNA). The level of oxidative stress and acetaldehyde-induced DNA adducts due to alcohol consumption does not appear sufficient to cause increased mutagenesis. However, as reviewed by Nishida et al., alcohol exposure, causing oxidative DNA damage (which is repairable), can result in epigenetic alterations at the sites of DNA repair. Alcohol-induced epigenetic alterations of gene expression appear to lead to liver injury and ultimately carcinoma.
Obesity is associated with a higher risk of primary liver cancer. As shown with mice, obese mice are prone to liver cancer, likely due to two factors. Obese mice have increased pro-inflammatory cytokines. Obese mice also have higher levels of deoxycholic acid, a product of bile acid alteration by certain gut microbes, and these microbes are increased with obesity. The excess deoxycholic acid causes DNA damage and inflammation in the liver, which, in turn, can lead to liver cancer.
=== Other relevant aspects ===
Several liver diseases are due to viral infection. Viruses such as hepatitis B virus and hepatitis C virus can be vertically transmitted during birth via contact with infected blood. According to a 2012 NICE publication, "about 85% of hepatitis B infections in newborns become chronic". In occult cases, hepatitis B virus DNA is detectable even though testing for HBsAg is negative. High consumption of alcohol can lead to several forms of liver disease, including alcoholic hepatitis, alcoholic fatty liver disease, cirrhosis, and liver cancer. In the earlier stages of alcoholic liver disease, fat builds up in the liver's cells due to increased creation of triglycerides and fatty acids and a decreased ability to break down fatty acids. Progression of the disease can lead to liver inflammation from the excess fat in the liver. Scarring in the liver often occurs as the body attempts to heal, and extensive scarring can lead to the development of cirrhosis in more advanced stages of the disease. Approximately 3–10% of individuals with cirrhosis develop a form of liver cancer known as hepatocellular carcinoma. According to Tilg et al., the gut microbiome may well be involved in the pathophysiology of the various types of liver disease an individual may encounter. Insight into the exact causes and mechanisms mediating liver pathophysiology is progressing quickly due to the introduction of new technological approaches such as single-cell sequencing and kinome profiling.
=== Air pollutants ===
Particulate matter and carbon black are common pollutants. They have a direct toxic effect on the liver; they cause liver inflammation, thereby impacting lipid metabolism and fatty liver disease; and they can translocate from the lungs to the liver.
Because particulate matter and carbon black are very diverse and each has different toxicodynamics, detailed mechanisms of translocation are not clear. Water-soluble fractions of particulate matter are the most important part of translocation to the liver, through extrapulmonary circulation. When particulate matter gets into the bloodstream, it combines with immune cells and stimulates innate immune responses. Pro-inflammatory cytokines are released and cause disease progression.
== Epidemiology ==
Liver diseases, including conditions such as non-alcoholic fatty liver disease (NAFLD), alcohol-related liver disease (ALD), and viral hepatitis, are significant public health concerns worldwide. In the United States, NAFLD is the most common chronic liver condition, affecting approximately 24% of the population, with the prevalence rising due to increasing rates of obesity and metabolic syndrome. Alcohol-related liver disease accounts for about 4.5% of liver-related deaths globally, underscoring the substantial burden of alcohol misuse. Viral hepatitis, primarily hepatitis B and hepatitis C, remains a leading cause of liver cirrhosis and liver cancer worldwide, despite advances in antiviral therapies and vaccination efforts. Additionally, recent studies have highlighted lean steatotic liver disease (SLD), a subset of NAFLD, affecting over 12% of U.S. adults even in the absence of obesity. These data emphasize the importance of early detection and targeted interventions to manage liver disease and its associated complications effectively.
New research reports the prevalence of lean steatotic liver disease (SLD) in the United States: using data from the National Health and Nutrition Examination Survey (2017–2023), researchers estimated the age-adjusted prevalence of lean SLD at 12.8%. This includes 9.3% for lean metabolic dysfunction-associated steatotic liver disease, 1.3% for metabolic dysfunction and alcohol-related steatotic liver disease, and 1.0% for alcohol-related liver disease.
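For readers unfamiliar with the term, an age-adjusted prevalence is a weighted average of age-specific prevalences, with the weights taken from a standard population (direct standardization). The sketch below shows the arithmetic only; the age bands, weights, and stratum prevalences are hypothetical placeholders, not the NHANES figures.

```python
# Direct age standardization: adjusted prevalence = sum of w_i * p_i,
# where w_i are standard-population weights (summing to 1) and p_i are
# the observed prevalences in each age stratum. Numbers are hypothetical.
strata = [
    # (age band, standard-population weight, observed prevalence)
    ("20-39", 0.35, 0.08),
    ("40-59", 0.33, 0.14),
    ("60+",   0.32, 0.17),
]

adjusted = sum(w * p for _, w, p in strata)
print(f"age-adjusted prevalence: {adjusted:.1%}")  # -> 12.9%
```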
== Diagnosis ==
A number of liver function tests are available to assess the proper function of the liver. These test for the presence in blood of enzymes, metabolites, or other products that are normally most abundant in liver tissue. Commonly measured values include serum proteins, serum albumin, serum globulin, alanine transaminase, aspartate transaminase, prothrombin time, and partial thromboplastin time.
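As a toy illustration of how such a panel might be screened programmatically, the sketch below flags results that fall outside rough adult reference ranges. The ranges, test names, and function are illustrative assumptions, not clinical guidance.

```python
# Toy liver-panel screen against rough, illustrative reference ranges
# (conventional US units). Not clinical guidance.
REFERENCE_RANGES = {
    "ALT (U/L)": (7, 56),
    "AST (U/L)": (10, 40),
    "albumin (g/dL)": (3.5, 5.0),
    "prothrombin time (s)": (11.0, 13.5),
}

def flag_abnormal(panel: dict[str, float]) -> dict[str, str]:
    """Return {test: 'low' or 'high'} for out-of-range results."""
    flags = {}
    for test, value in panel.items():
        low, high = REFERENCE_RANGES[test]
        if value < low:
            flags[test] = "low"
        elif value > high:
            flags[test] = "high"
    return flags

print(flag_abnormal({"ALT (U/L)": 150.0, "albumin (g/dL)": 2.9}))
# -> {'ALT (U/L)': 'high', 'albumin (g/dL)': 'low'}
```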
Imaging tests such as transient elastography, ultrasound and magnetic resonance imaging can be used to show the liver tissue and the bile ducts. Liver biopsy can be performed to examine liver tissue to distinguish between various conditions; tests such as elastography may reduce the need for biopsy in some situations.
In liver disease, prothrombin time is longer than usual. In addition, the amounts of both coagulation factors and anticoagulation factors are reduced, as a diseased liver cannot synthesize them as productively as a healthy one. Nonetheless, there are two exceptions to this falling tendency: coagulation factor VIII and von Willebrand factor, a platelet-adhesive protein. Both, in contrast, rise in the setting of hepatic insufficiency, owing to reduced hepatic clearance and compensatory production at other sites in the body. Fibrinolysis generally proceeds faster in acute liver failure and advanced-stage liver disease, unlike in chronic liver disease, in which the concentration of fibrinogen remains unchanged.
A previously undiagnosed liver disease may first become evident at autopsy.
== Treatment ==
Anti-viral medications are available to treat infections such as hepatitis B. Other conditions may be managed by slowing down disease progression, for example:
By using steroid-based drugs in autoimmune hepatitis.
Regularly removing a quantity of blood from a vein (venesection) in the iron overload condition, hemochromatosis.
Wilson's disease, a condition where copper builds up in the body, can be managed with drugs that bind copper, allowing it to be passed from the body in urine.
In cholestatic liver disease (where the flow of bile is affected, for example by cystic fibrosis), a medication called ursodeoxycholic acid may be given.
== See also ==
Model for end-stage liver disease (MELD)
== References ==
== Further reading ==
Friedman LS, Keeffe EB (2011-08-03). Handbook of Liver Disease. Elsevier Health Sciences. ISBN 978-1-4557-2316-4.
== External links == | Wikipedia/Liver_disease |
Hand, foot, and mouth disease (HFMD) is a common infection caused by a group of enteroviruses. It typically begins with a fever and feeling generally unwell. This is followed a day or two later by flat discolored spots or bumps that may blister on the hands, feet, and mouth, and occasionally the buttocks and groin. Signs and symptoms normally appear 3–6 days after exposure to the virus. The rash generally resolves on its own in about a week.
The viruses that cause HFMD are spread through close personal contact, through the air from coughing, and via the feces of an infected person. Contaminated objects can also spread the disease. Coxsackievirus A16 is the most common cause, and enterovirus 71 is the second-most common cause. Other strains of coxsackievirus and enterovirus can also be responsible. Some people may carry and pass on the virus despite having no symptoms of disease. Other animals are not involved. Diagnosis can often be made based on symptoms. Occasionally, a throat or stool sample may be tested for the virus.
Most people with hand, foot, and mouth disease get better on their own in 7 to 10 days. Most cases require no specific treatment. No antiviral medication or vaccine is available, but development efforts are underway. For fever and for painful mouth sores, over-the-counter pain medications such as ibuprofen may be used, though aspirin should be avoided in children. The illness is usually not serious. Occasionally, intravenous fluids are given to children who are dehydrated. Very rarely, viral meningitis or encephalitis may complicate the disease. Because HFMD is normally mild, some jurisdictions allow children to continue to go to child care and schools as long as they have no fever or uncontrolled drooling with mouth sores, and as long as they feel well enough to participate in classroom activities.
HFMD occurs in all areas of the world. It often occurs in small outbreaks in nursery schools or kindergartens. Large outbreaks have been occurring in Asia since 1997. It usually occurs during the spring, summer, and fall months. Typically it occurs in children less than five years old but can occasionally occur in adults. HFMD should not be confused with foot-and-mouth disease (also known as hoof-and-mouth disease), which mostly affects livestock.
== Signs and symptoms ==
Common constitutional signs and symptoms of HFMD include fever, nausea, vomiting, feeling tired, generalized discomfort, loss of appetite, and irritability in infants and toddlers. Skin lesions frequently develop in the form of a rash of flat discolored spots and bumps which may be followed by vesicular sores with blisters on palms of the hands, soles of the feet, buttocks, and sometimes on the lips. The rash is rarely itchy for children, but can be extremely itchy for adults. Painful facial ulcers, blisters, or lesions may also develop in or around the nose or mouth. HFMD usually resolves on its own after 7–10 days. Most cases of the disease are relatively harmless, but complications including encephalitis, meningitis, and paralysis that mimics the neurological symptoms of polio can occur.
== Cause ==
The viruses that cause the disease are of the Picornaviridae family. Coxsackievirus A16 is the most common cause of HFMD. Enterovirus 71 (EV-71) is the second-most common cause. Many other strains of coxsackievirus and enterovirus can also be responsible.
=== Transmission ===
HFMD is highly contagious and is transmitted by nasopharyngeal secretions such as saliva or nasal mucus, by direct contact, or by fecal–oral transmission. It is possible to be infectious for days to weeks after the symptoms have resolved.
Childcare settings are the most common places for HFMD to be contracted because of toilet training, diaper changes, and children's propensity to put their hands into their mouths. HFMD is contracted through nose and throat secretions such as saliva, sputum, and nasal mucus as well as fluid in blisters, and stool.
== Diagnosis ==
A diagnosis usually can be made by the presenting signs and symptoms alone. If the diagnosis is unclear, a throat swab or stool specimen may be taken to identify the virus by culture. The common incubation period (the time between infection and onset of symptoms) ranges from three to six days. Early detection of HFMD is important in preventing an outbreak in the pediatric population.
== Prevention ==
Preventive measures include avoiding direct contact with infected individuals (including keeping infected children home from school), proper cleaning of shared utensils, disinfecting contaminated surfaces, and proper hand hygiene. These measures are effective in decreasing the transmission of the viruses responsible for HFMD.
Protective habits include hand washing and disinfecting surfaces in play areas. Breastfeeding has also been shown to decrease rates of severe HFMD, though it does not reduce the risk of infection.
=== Vaccine ===
A vaccine known as the EV71 vaccine is available to prevent HFMD in China as of December 2015. No vaccine is currently available in the United States.
== Treatment ==
Medications are usually not needed as hand, foot, and mouth disease is a viral disease that typically resolves on its own. Currently, there is no specific curative treatment for hand, foot, and mouth disease. Disease management typically focuses on achieving symptomatic relief. Pain from the sores may be eased with the use of analgesic medications. Infection in older children, adolescents, and adults is typically mild and lasts approximately 1 week, but may occasionally run a longer course. Fever reducers can help decrease body temperature.
A minority of individuals with hand, foot, and mouth disease may require hospital admission due to complications such as inflammation of the brain, inflammation of the meninges, or acute flaccid paralysis. Non-neurologic complications such as inflammation of the heart, fluid in the lungs, or bleeding into the lungs may also occur.
== Complications ==
Complications from the viral infections that cause HFMD are rare but require immediate medical treatment if present. HFMD infections caused by Enterovirus 71 tend to be more severe and are more likely to have neurologic or cardiac complications including death than infections caused by Coxsackievirus A16. Viral or aseptic meningitis can occur with HFMD in rare cases and is characterized by fever, headache, stiff neck, or back pain. The condition is usually mild and clears without treatment; however, hospitalization for a short time may be needed. Other serious complications of HFMD include encephalitis (inflammation of the brain), or flaccid paralysis in rare circumstances.
Fingernail and toenail loss have been reported in children 4–8 weeks after having HFMD. The relationship between HFMD and the reported nail loss is unclear; however, it is temporary and nail growth resumes without treatment.
Minor complications due to symptoms can occur such as dehydration, due to mouth sores causing discomfort with intake of foods and fluid.
== Epidemiology ==
Hand, foot and mouth disease most commonly occurs in children under the age of 10 and more often under the age of 5, but it can also affect adults with varying symptoms. It tends to occur in outbreaks during the spring, summer, and autumn seasons. This is believed to be due to heat and humidity improving spread. HFMD is more common in rural areas than urban areas; however, socioeconomic status and hygiene levels need to be considered. Poor hygiene is a risk factor for HFMD.
=== Outbreaks ===
In 1997, an outbreak occurred in Sarawak, Malaysia, with 600 cases; over 30 children died.
In 1998, there was an outbreak in Taiwan, affecting mainly children. There were 405 severe complications, and 78 children died. The total number of cases in that epidemic is estimated to have been 1.5 million.
In 2008, an outbreak in China, beginning in March in Fuyang, Anhui, led to 25,000 infections and 42 deaths by May 13. Similar outbreaks were reported in Singapore (more than 2,600 cases as of April 20, 2008), Vietnam (2,300 cases, 11 deaths), Mongolia (1,600 cases), and Brunei (1,053 cases from June to August 2008).
In 2009, 17 children died in an outbreak during March and April in China's eastern Shandong Province, and 18 children died in the neighboring Henan Province. Of 115,000 reported cases in China from January to April, 773 were severe and 50 were fatal.
In 2010, an outbreak occurred in China, in the southern Guangxi Autonomous Region as well as Guangdong, Henan, Hebei, and Shandong provinces. By March, 70,756 children had been infected and 40 had died from the disease. By June, the peak season for the disease, 537 had died.
World Health Organization reporting for January to October 2011 recorded 1,340,259 cases in China, a drop of approximately 300,000 from the corresponding 2010 figure (1,654,866 cases), with new cases peaking in June. There were 437 deaths, down from 537 in 2010.
In December 2011, the California Department of Public Health identified a virulent form of the virus, coxsackievirus A6 (CVA6), in which nail loss in children is common.
In 2012 in Alabama, United States there was an outbreak of an unusual type of the disease. It occurred in a season when it is not usually seen and affected teenagers and older adults. There were some hospitalizations due to the disease but no reported deaths.
In 2012 in Cambodia, 52 of 59 reviewed cases of children who had reportedly died (as of July 9, 2012) of a mysterious disease were attributed to a virulent form of HFMD. Although a significant degree of uncertainty exists with reference to the diagnosis, the WHO report states, "Based on the latest laboratory results, a significant proportion of the samples tested positive for enterovirus 71 (EV-71), which causes hand foot and mouth disease (HFMD). The EV-71 virus has been known to generally cause severe complications amongst some patients."
By the end of July 2012, HFMD had infected 1,520,274 people in China, with up to 431 deaths reported.
In 2018, more than 50,000 cases occurred through a nationwide outbreak in Malaysia with two deaths also reported.
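The figures quoted above permit a rough comparison of crude case-fatality ratios (deaths divided by reported or estimated cases). The sketch below uses only numbers given in this section; because the counts differ in completeness and case definition, the ratios are illustrative rather than epidemiological estimates.

```python
# Crude case-fatality ratios (CFR = deaths / cases) from the outbreak
# figures quoted above. Illustrative only; counts vary in completeness.
outbreaks = {
    "Taiwan 1998 (estimated total)": (1_500_000, 78),
    "China Jan-Apr 2009 (reported)": (115_000, 50),
    "China Jan-Jul 2012 (reported)": (1_520_274, 431),
}

for name, (cases, deaths) in outbreaks.items():
    print(f"{name}: CFR = {deaths / cases:.4%}")
# Taiwan 1998 (estimated total): CFR = 0.0052%
# China Jan-Apr 2009 (reported): CFR = 0.0435%
# China Jan-Jul 2012 (reported): CFR = 0.0284%
```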
=== India 2022 ===
An outbreak of an illness referred to as tomato fever or tomato flu was identified in the Kollam district on May 6, 2022. The illness is endemic to Kerala, India, and gets its name from the red, round blisters it causes, which look like tomatoes. The disease may be a new variant of viral HFMD or an effect of chikungunya or dengue fever; "flu" may be a misnomer.
The condition mainly affects children under the age of five. An article in The Lancet states that the appearance of the blisters is similar to that seen in Mpox, and the illness is not thought to be related to SARS-CoV-2. Symptoms, treatment and prevention are similar to HFMD.
== History ==
HFMD cases were first described clinically in Canada and New Zealand in 1957. The disease was termed "hand, foot and mouth disease" by Thomas Henry Flewett after a similar outbreak in 1960.
== Research ==
Novel antiviral agents to prevent and treat infection with the viruses responsible for HFMD are currently under development. Preliminary studies have shown inhibitors of the EV-71 viral capsid to have potent antiviral activity.
== References ==
== External links ==
Media related to Hand, foot and mouth disease at Wikimedia Commons
Highly contagious Hand, foot and mouth disease killing China's children at Wikinews | Wikipedia/Hand,_foot,_and_mouth_disease |
Fifth disease, also known as erythema infectiosum and slapped cheek syndrome, is a common and contagious disease caused by infection with parvovirus B19. This virus was discovered in 1975 and can cause other diseases besides fifth disease. Fifth disease typically presents as a rash and is most common in children. Parvovirus B19 can affect people of all ages; about two out of ten persons infected will have no symptoms.
== Pathogenicity ==
Parvovirus B19 is the only virus within the Parvoviridae family to cause disease in humans, especially in children. The most common disease derived from parvovirus B19 is fifth disease. This disease is spread through close contact via respiratory droplets from the nose or mouth, or through direct contact with an infected person. Fifth disease is most commonly spread in the winter and spring seasons among children aged six to fourteen years old. Parvovirus B19 begins replicating anywhere from four to eighteen days after exposure. Infected children are most contagious during this time, before they develop the most notable sign, a red rash on the cheeks, and other symptoms.
Since parvovirus B19 is a single-stranded DNA virus, replication can only occur in dividing cells. This is also why populations besides children can become infected with parvovirus B19, develop fifth disease, and have complications. Certain populations are at higher risk than the typical person if they have more dividing cells or a weakened immune system. These populations include pregnant women, fetuses, adults, and the immunocompromised. Over the last few years, first-time infections in pregnant women have been increasing throughout the world; about 1–5% of pregnant women become infected. Typically, the virus does not affect the outcome of the pregnancy, and 90% of cases of infected fetuses do not lead to any serious outcome. However, complications can still occur in both the fetus and the mother. For example, if fetuses contract parvovirus B19, possible complications include miscarriage or intrauterine fetal death. Infected adults have been documented to develop arthralgias, or joint pain. A specific group of immunocompromised people, those with bone marrow failure, have been shown to develop aplastic crisis when infected with parvovirus B19. Other notable complications of parvovirus B19 infection include gloves and socks syndrome.
== Signs and symptoms ==
The symptoms of fifth disease are usually mild and may start as a fever, headache, or runny nose. These symptoms pass, then a few days later the rash appears. The bright red rash most commonly appears on the face, particularly the cheeks. Infected children typically go through three stages. In the first, the rash appears on the face; this is a defining symptom of the infection in children (hence the name "slapped cheek disease"). In the second stage, children develop a red, lacy rash on the rest of the body, with the upper arms, torso, and legs being the most common locations; the rash typically lasts a few days and may itch, though some cases have been known to last for several weeks. People are usually no longer infectious once the rash has appeared. The third stage consists of recurring rashes, triggered by hot showers, sun exposure, or minor injuries, lasting about 11 days.
In children, the risk of parvovirus B19-related arthralgia (joint stiffness) is less than 10%, but 19% of those with new-onset arthritis may have developed B19 infection within the previous 10 weeks. Teenagers and adults may present with joint pain or swelling; about 60% of infected females and 30% of infected males report these symptoms. Of these, 20% of the females may experience continuous joint stiffness for several months or years. Symptoms can persist up to 3 weeks from onset. Sometimes fifth disease can also cause serious complications affecting the blood system, joints, or nerves, especially if the person is pregnant, has anemia, or is immunocompromised. Adults with fifth disease may have difficulty walking and bending joints such as the wrists, knees, ankles, fingers, and shoulders.
The disease is usually mild, but in certain risk groups and rare circumstances, it can have serious consequences:
In pregnancy, infection in the first trimester is considered more detrimental for the mother, but contraction of the infection in the second trimester has been linked to hydrops fetalis, an excessive build-up of fluid in the fetus's tissues and organs (edema), which can cause spontaneous miscarriage.
Those who are immunocompromised (HIV/AIDS, chemotherapy) may be at risk for complications if exposed.
In less than 5% of women with parvovirus B19 infection, a baby may develop severe anemia leading to miscarriage. This occurs most often during the first half of pregnancy.
== Causes ==
Fifth disease, also known as erythema infectiosum, is caused by parvovirus B19, which only infects humans. Infection by parvovirus B19 can lead to multiple clinical manifestations, but the most common is fifth disease.
Parvovirus B19 (B19V) is a small, single-stranded, non-enveloped DNA virus. Binding of the B19V capsid to the cellular receptor globoside (Gb4Cer) results in a cascade of structural changes and subsequent signal transduction processes facilitating the entry of parvovirus B19 into the host cell. After gaining access to the host cell, B19V binds to glycosphingolipid globoside (blood group P antigen), targeting the erythroid lineage in the bone marrow. Replication of the viral genome and release of virus from infected cells lead to various complex effects on the host's cellular environment, such as induction of DNA damage, hijacking of the cell cycle, and apoptosis (killing of infected cells).
B19V DNA has been found in a wide range of tissues in healthy and diseased individuals, indicating the persistence of B19V infection. According to a clinical microbiology review published by Jianming Qiu, "Persistence of viral DNA has been detected in up to 50% of biopsy specimens of the spleen, lymph nodes, tonsils, liver, heart, synovial tissues, skin, brain, and testes, for decades after infection."
Recovery from parvovirus B19 infection begins with the production of IgM antibodies, which are specific for the virus and are generated 10–12 days after infection. After day 16, when signs of fifth disease (red rashes) and arthralgia (joint pain) become apparent, specific anti-B19 IgG is produced by immune cells. Production of serum anti-B19 IgG keeps the infection under control and facilitates the recovery of erythroid cell production in the erythroid lineage cells that were targeted by parvovirus B19.
== Transmission ==
Fifth disease is transmitted primarily by respiratory droplets, such as from sneezing or coughing, or by direct contact with saliva or mucus, but it can also be spread by contact with infected blood, either directly or through blood transfusions. The incubation period (the time between exposure to an infection and the onset of symptoms) is usually between 4 and 21 days. Viremia (a condition that occurs when viruses enter the bloodstream and spread through the body) occurs within 5 to 10 days of exposure to parvovirus B19, and the person remains contagious for 5 days following viremia. Typically, school children, day-care workers, teachers, and parents are most likely to be exposed to the virus, making them a high-risk population. Rates of transmission of parvovirus B19 are highest in households where people live with infected persons, approaching 50%; moderate among adults generally, at about 40%; and variable among people working at daycare centers and schools, at roughly 10–60%. The most common time for the infection to spread in children, causing fifth disease, is late winter and early spring, with outbreaks occurring every 3–4 years. Vertical transmission from maternal infection may also occur, which can lead to hydrops fetalis, a disease of the fetus caused by the infection's detrimental effects on red blood cell production.
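To make the quoted household figure concrete, the sketch below treats it as a per-contact secondary attack rate and computes the chance that at least one of n susceptible household contacts becomes infected. The independence assumption and contact counts are simplifying hypotheticals, not data from the studies cited above.

```python
# If each susceptible household contact is infected with probability p
# (the secondary attack rate), and contacts are treated as independent,
# the chance that at least one of n contacts is infected is 1-(1-p)**n.
def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

sar = 0.5  # ~50% household transmission, as cited above
for n in (1, 2, 3):
    print(f"{n} susceptible contact(s): {p_at_least_one(sar, n):.1%}")
# 1 susceptible contact(s): 50.0%
# 2 susceptible contact(s): 75.0%
# 3 susceptible contact(s): 87.5%
```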
Parvovirus B19 can also be transmitted through blood products such as frozen plasma or cellular blood components (red blood cells, white blood cells, and platelets), as the virus is resistant to the common solvent-detergent techniques used to inactivate pathogens in these products.
== Diagnosis ==
The most common manifestation of fifth disease is a red, "slapped cheek" look on the face and a lace-like rash on the body and limbs. The "slapped cheek" appearance can be suggestive of fifth disease; however, the rash can be mistaken for that of other skin diseases or infections. Many other viral rashes, such as measles, rubella, roseola, and scarlet fever, can look similar to erythema infectiosum. In adults, for example, joint pain caused by parvovirus B19 infection might lead doctors to consider conditions such as the flu and mononucleosis during initial diagnosis. Doctors may also consider ruling out non-infectious causes such as drug allergies and certain types of arthritis, which can present with symptoms similar to those of fifth disease.
For this reason, blood sample testing can be definitive in confirming the diagnosis of fifth disease. These blood tests are commonly referred to as "diagnostic assays". An antibody assay uses antibodies designed to detect parvovirus antigen or protein in blood circulation. For example, the anti-parvovirus B19 IgM antibody serum assay is often the preferred method to detect recent infection; the assay can turn positive one week after initial infection, and a positive result can indicate an infection within the previous two to six months. Negative assay results may prompt retesting later to rule out sampling of blood serum too early. People acquire lifetime immunity if IgG antibodies are produced in response to parvovirus B19 exposure. Infection by parvovirus B19 can also be confirmed by isolation of viral DNA detected by polymerase chain reaction (PCR) or direct hybridization. PCR tests are considered significantly more sensitive for detecting parvovirus B19 than direct DNA hybridization, though a DNA hybridization assay can better detect variants of the virus; there exist three biologically similar genotypes of parvovirus B19, numbered one through three, of which genotype one is the most common in circulation. To diagnose fifth disease in a fetus, a PCR test is done on a sample of the amniotic fluid surrounding the baby (obtained by amniocentesis). Laboratory tests can also indicate complications of infection, including anemia, liver damage, and low platelet count.
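The serologic reasoning described above follows a simple decision table, sketched below. This is a simplification for illustration only; real-world interpretation also weighs sampling time, PCR results, and patient context, and the function itself is hypothetical.

```python
# Simplified parvovirus B19 serology triage, encoding the section above:
# IgM indicates recent infection (positive from ~1 week after infection,
# persisting roughly 2-6 months); IgG indicates prior exposure and
# presumed lifetime immunity.
def interpret_b19_serology(igm_positive: bool, igg_positive: bool) -> str:
    if igm_positive and not igg_positive:
        return "acute or very recent infection"
    if igm_positive and igg_positive:
        return "recent infection; seroconversion under way"
    if igg_positive:
        return "past infection; presumed lifetime immunity"
    return "no serologic evidence; possibly sampled too early (retest or PCR)"

print(interpret_b19_serology(igm_positive=True, igg_positive=False))
# -> acute or very recent infection
```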
Aside from diagnosing fifth disease with laboratory tests, it is crucial to monitor fetal blood flow in the brain, using ultrasound to look for signs of moderate to severe anemia.
== Treatment ==
Treatment for fifth disease is primarily symptomatic and supportive, as the infection is frequently self-limiting. A self-limiting infection typically does not require treatment, such as medication, and will heal independently. While there is currently no specific therapy recommendation for fifth disease, symptom management can be attempted with over-the-counter medications. Antipyretics, such as acetaminophen, are commonly used to reduce fevers. In cases of joint disease, such as arthritis or arthralgia, treatment options include medications that reduce inflammation, like non-steroidal anti-inflammatory drugs (NSAIDs). Aspirin should never be given to children for any of their symptoms because of the risk of Reye's syndrome. Conservative treatment to relieve symptoms of joint disease caused by fifth disease has also used acupuncture, physical therapy exercises, and chiropractic care alongside pharmacologic management. Other supportive measures include plenty of rest, increased daily fluids, nutritious meals, medication adherence, and attention to overall wellbeing.
The rash usually does not itch but can be mildly painful. The rash itself is not considered contagious. The infection generally lasts about 5 to 10 days. Stress, hot temperatures, exercise, and exposure to sunlight can contribute to recurrence within months of the initial infection. Upon resolution, immunity is considered life-long. Populations at greater risk of complications (see below) may need referral to a specialist. Anemia is a more severe complication that could result from parvovirus B19 infection and requires a blood transfusion as part of therapy.
== Prevention ==
Since there is no specific treatment for fifth disease, prevention is an important factor. Although fifth disease primarily occurs in children and typically resolves on its own, much like the common cold, vulnerable populations, such as those who are immunocompromised, pregnant, or anemic, are more at risk of developing complications because their compromised state makes it harder for the body to fight off the virus. Therefore, prevention of fifth disease is an important factor in decreasing the number of people who become sick with the B19 virus.
Primary prevention aims to prevent the virus from infecting the host's body and ultimately stop the disease from happening. In contrast, secondary prevention aims to detect the disease early in its course and stop its progression. An example of a primary prevention strategy is the use of vaccines. There are currently no approved vaccinations for fifth disease, and more research is needed to develop a safe and efficacious vaccine. However, clinical studies have suggested that vaccination against B19 could carry additional benefits for high-risk people, such as those who are pregnant or immunocompromised, organ transplant recipients, and children with anemia.
The abbreviation NPI stands for non-pharmaceutical interventions. As discussed under 'Transmission' above, fifth disease can spread from human to human through respiratory particles and blood, and from mother to baby during pregnancy. Because fifth disease can spread through respiratory particles, similar to the transmission of COVID-19, the CDC recommends following the general recommendations for respiratory viruses. Thus, many of the same NPIs used during the COVID-19 pandemic can serve as preventive strategies for fifth disease, such as practicing good hand hygiene, coughing and sneezing into the elbow, proper mask etiquette, and isolating when sick or contagious.
=== High risk/vulnerable population prevention ===
One population at high risk for severe complications from fifth disease, as discussed in the complications section, is pregnant women and their fetuses. The primary prevention for pregnant women is to reduce exposure to, or contact with, fifth disease. Because of the increased risk of severe complications for both mother and fetus, prevention strategies include raising awareness of the virus and the disease so that pregnant women have the knowledge and resources they need to manage their health effectively. Those at high risk for complications should also be advised on how the virus is transmitted and educated on additional safety measures, such as avoiding settings where transmission is typically high: childcare centers, close contact with school-age children (or with someone who works with school-age children, such as a teacher), and healthcare settings such as hospitals.
== Outcomes ==
In fifth disease, parvovirus B19 has the potential to affect various parts of the body, including the skin, heart, brain, joints, and liver. Thus, complications of fifth disease can be present in various populations and conditions, including pregnancy, fetal development, neurological conditions, and autoimmune disorders. While parvovirus B19 is typically transmitted via respiratory secretions or hand-to-mouth contact, it can also be passed from pregnant women to fetuses. Notably, known complications of fifth disease in pregnant women and fetuses range from mild to moderate, and in some cases severe complications can affect both. Roughly 50–75% of all pregnant women are immune to parvovirus B19, while the remainder are susceptible to mild illness. A 2024 review of parvovirus B19 infection in pregnancy found that pregnant women who lack immunity to parvovirus B19 are at higher risk of passing the infection to their baby, especially if they contract it during the first or second trimester, which can lead to more serious complications. Although the potential consequences of erythema infectiosum can be quite serious in pregnancy, pregnant women can be tested for immunity via the presence of IgG and IgM antibodies. A majority of fetuses who do contract parvovirus B19 show either no significant symptoms or complete resolution of the virus. However, the following serious complications are rare but possible: miscarriage, stillbirth, fetal anemia, hepatic failure, and abnormal neurodevelopmental outcome. In some cases, fetuses develop hydrops fetalis due to congenital parvovirus B19 infection. This condition was studied as a determinant of later fetal outcomes, such as miscarriage or perinatal death, in a 2016 systematic review, which showed that fetuses with parvovirus B19-associated hydrops fetalis had a higher risk of mortality and of complications.
In addition to fetuses, parvovirus B19 infection and its effects have been studied in adults. Parvovirus B19 infection has also been associated with the development of neurological complications, as identified in a 2014 systematic review. This analysis included a total of 89 studies covering complications of both the central and peripheral nervous systems, such as encephalitis, meningitis, and peripheral neuropathy. The specific pathophysiology of these complications has yet to be established, but the review encourages the use of antibody testing to determine a patient's risk. The effects of this virus are not limited to the nervous system: parvovirus B19 has also been linked to cases of cardiac inflammation that can cause structural damage to the heart over time. If the damage progresses and becomes significant, cardiac cell death may occur.
In people with weakened immune systems, parvovirus B19 infection often causes low blood cell counts and can lead to chronic infection. Sometimes an acute parvovirus B19 infection can mimic or even trigger autoimmune disease, because it can cause the body to produce antibodies against itself; this happens through processes such as molecular mimicry, cell death, and enzyme activation. Individuals living with HIV are also susceptible to complications if infected, because they are immunocompromised. While relatively rare, those with both HIV and parvovirus B19 infection may be unable to clear the B19 virus, which can result in substantial loss of red blood cells and cause anemia.
Recent research has found that children and adults infected with parvovirus B19 may develop acute arthritis and, in some cases, chronic joint problems. Some studies have detected the virus's DNA in the synovial tissue of people with rheumatoid arthritis, but other studies show mixed results.
In some cases, parvovirus B19 primarily affects the bone marrow, owing to the virus's strong affinity for receptors on bone marrow cells, often causing bone marrow function to decline. This is why infection with parvovirus B19 can be particularly harmful to people with hemolytic anemia or blood cancers, in whom it can lead to a condition called pure red cell aplasia.
== Epidemiology ==
Fifth disease, caused by human parvovirus B19 (B19V), is found throughout the world, occurring primarily during childhood. The virus spreads by inhalation of viral particles or in the womb during fetal development. The illness is very common and self-limiting. The modes of transmission include respiratory droplets, blood, and mother-to-fetus transmission. Fifth disease is most prevalent in children aged 5 to 15 years and occurs at lower rates in adults. The virus spreads easily, and once contracted, the body develops lasting immunity to reinfection. The prevalence of antibodies is 50% in children and 70% to 85% in adults. The virus affects men and women equally. Epidemic outbreaks are most likely to occur during the spring and winter; in the summer and fall, sporadic cases and outbreaks occur. Outbreaks most commonly occur in daycares and schools, with an outbreak cycle of three to seven years. The risk of acquiring the illness increases with exposure to an infected person or contaminated blood. Individuals whose occupations require close contact with infected people, such as healthcare workers and teachers, are at increased risk. Immunocompromised individuals are another at-risk group, and those with anemia are at higher risk of developing complications. Pregnant women are at risk of acquiring the illness, especially during the first half of pregnancy, though complications are very rare: fewer than 5% of these cases experience serious complications. The most common complication among pregnant women is anemia. In rare cases, severe anemia can occur and a buildup of fluid can develop, which can cause congestive heart failure or death; a blood transfusion or induction of labor may be necessary. No vaccine is available for human parvovirus B19, though attempts have been made to develop one.
=== Vulnerable populations ===
Parvovirus B19 can cause serious complications in certain groups of people:
Pregnant women - B19 infection in this population can harm the fetus, causing hydrops fetalis, as mentioned previously. The risk of fetal infection is not high, and even when the fetus is infected, the outcome is usually not adverse. According to one study, pregnant women infected with parvovirus B19 carry a 30% chance of infecting the fetus, with only 9% experiencing adverse outcomes.
Immunocompromised people are also at risk of infection and poor outcomes. People with a weakened immune system, whether due to HIV, transplants, or congenital immunodeficiencies, may experience prolonged, chronic anemia because their bodies cannot fight the virus effectively.
In people with sickle-cell disease or other forms of chronic hemolytic anemia, the infection can precipitate an aplastic crisis, wherein the individual's bone marrow suddenly stops producing red blood cells.
A 2019 systematic review examined rates of parvovirus B19 infection among daycare workers. Since transmission typically occurs through respiratory secretions, daycare workers were thought to be at increased risk of infection because young children readily spread saliva through drooling. The systematic review indicates that daycare workers are indeed at increased risk, and another review supports this finding. A 2019 meta-analysis examined rates of parvovirus B19 infection among those with sickle cell disease (SCD) using IgG and IgM antibody detection. Pooled data from Africa, Asia, and the Americas revealed a 48.8% prevalence of parvovirus B19 infection among persons with sickle cell disease. Prevalence also varied by geographic location: areas with reduced access to adequate housing had higher prevalence (55.5% in Africa). A 2020 literature review further supports the finding that persons with SCD, as well as those with the blood disorder beta thalassemia, are at higher risk of parvovirus B19 infection.
== History ==
=== Parvovirus B19 ===
Fifth disease is caused by a virus known as parvovirus B19. The virus was discovered in 1975 by Yvonne Cossart and her research group, who found it accidentally while analyzing hepatitis B virus surface antigen panels. The name B19 comes from the sample's position in the panel: row B, number 19. Using electron microscopy, the group found viral particles similar in size and shape to the parvoviruses then known to infect animals. Soon after, a Japanese research group also found the virus and named it the "Nakatani virus"; it turned out to be the same one discovered by Cossart's team. During this period the virus was labeled a "serum parvovirus-like particle"; in 1985, after confirmation that the two groups' findings matched, the name was fixed as B19. Parvovirus B19 is a single-stranded DNA virus in the family Parvoviridae, which includes the subfamilies Parvovirinae and Densovirinae. The family is named after the Latin word "parvum", meaning small, because before technological advancements these viruses were considered among the smallest known to infect mammals. However, these viruses also infect invertebrates, so additional genera were created based on how the viruses replicate, including Parvovirus, Dependovirus, and Erythrovirus. Parvovirus B19 was originally placed in the genus Parvovirus because, unlike viruses of the genus Dependovirus, it does not need assistance to replicate. However, because parvovirus B19 infects only erythroid cells, it is now placed in the genus Erythrovirus. One well-known parvovirus is canine parvovirus, which infects dogs and causes inflammation of the small intestine and heart muscle. While all of these viruses cause disease, only parvovirus B19 infects humans; specifically, it targets the P antigen on human stem cells that eventually develop into red blood cells.
=== Name ===
Fifth disease's name comes from its position as the fifth of the classically described childhood rash-causing diseases; the first four are measles, scarlet fever, rubella, and Dukes' disease. Fifth disease is also known as "slapped cheek disease" because of the red rash that spreads on the cheeks several days after infection with parvovirus B19.
== See also ==
List of cutaneous conditions
Roseola
Virals
== References ==
Katta R (2002-04-01). "Parvovirus B19: a review". Dermatologic Clinics. 20 (2): 333–342. doi:10.1016/S0733-8635(01)00013-4. ISSN 0733-8635. PMID 12120446.
== External links ==
Parvovirus B19 at the Centers for Disease Control and Prevention | Wikipedia/Fifth_disease |
Heck's disease, also known as focal epithelial hyperplasia, is an asymptomatic, benign neoplastic condition characterized by multiple white to pinkish papules that occur diffusely in the oral cavity. It can present with a slightly pale, smooth or roughened surface morphology. It is caused by human papillomavirus types 13 and 32. Histologically, it exhibits surface cells with vacuolated cytoplasm around irregular, pyknotic nuclei and occasional cells with mitosis-like changes within otherwise mature and well-differentiated epithelium; a distinguishing feature is elongated rete ridges with mitosoid bodies. Clinically, it shows a 'cobblestone' appearance. It was first identified in the Aboriginal population of North America.
Over time, the lesions spontaneously regress without treatment. Excisional biopsy may be performed for lesions of functional or aesthetic concern.
== References == | Wikipedia/Heck's_disease |
Infectious diseases (ID), also known as infectiology, is a medical specialty dealing with the diagnosis and treatment of infections. An infectious diseases specialist's practice consists of managing nosocomial (healthcare-acquired) and community-acquired infections. An ID specialist investigates and determines the cause of a disease (bacteria, virus, parasite, fungus, or prion). Once the cause is known, the specialist can run various tests to determine the best drug to treat the disease. While infectious diseases have always existed, the infectious disease specialty did not emerge until the late 20th century, after scientists and physicians of the 19th century paved the way with research on the sources of infectious disease and the development of vaccines.
== Scope ==
Infectious diseases specialists typically serve as consultants to other physicians in cases of complex infections, and often manage patients with HIV/AIDS and other forms of immunodeficiency. Although many common infections are treated by physicians without formal expertise in infectious diseases, specialists may be consulted for cases where an infection is difficult to diagnose or manage. They may also be asked to help determine the cause of a fever of unknown origin.
Specialists in infectious diseases can practice both in hospitals (inpatient) and clinics (outpatient). In hospitals, specialists in infectious diseases help ensure the timely diagnosis and treatment of acute infections by recommending the appropriate diagnostic tests to identify the source of the infection and by recommending appropriate management such as prescribing antibiotics to treat bacterial infections. For certain types of infections, involvement of specialists in infectious diseases may improve patient outcomes. In clinics, specialists in infectious diseases can provide long-term care to patients with chronic infections such as HIV/AIDS.
== History ==
Infectious diseases are historically associated with hygiene and epidemiology due to periodic outbreaks ravaging countries, especially in the cities before the advent of sanitation, but also with travel medicine and tropical medicine, as many diseases acquired in tropical and subtropical areas are infectious in nature.
Western innovations for treating infectious diseases originated in Ancient Greece where, before infectious disease was even conceptualized, the Greek physician Hippocrates and his school compiled the Hippocratic Corpus. Included in this collection of about 70 documents was a text describing illness-causing infectious diseases. This text, the Epidemiai volumes, played a key role in forming the Western approach to infectious disease. Galen of Pergamon, a physician during the Roman Empire, also shaped the Western perception of infectious disease with his treatises, which gave insight into the Antonine Plague, now recognized as smallpox based on the descriptions in Galen's treatises.
Between the 16th and 18th centuries, medical professionals educated more people, learned more from their research, and gained access to information from other professionals in the field thanks to printing presses such as Gutenberg's and the mass production of medical books. These books, now in many hands, included observations of infectious diseases such as syphilis, malaria, and smallpox. In the late 18th century, vaccination emerged, and the first vaccine, against smallpox, was established. Although records of individual infectious diseases were spread across medical documents, a combined perception of infectious disease as an area of medicine did not yet exist.
During the 19th century, modern medicine began to develop and the sources of infectious diseases became clearer. Robert Koch, a German physician who studied pathogens, discovered three major pathogens responsible for anthrax, tuberculosis, and cholera. Louis Pasteur was a pioneer in the creation of vaccines for infectious diseases, including a vaccine for anthrax. He also developed the germ theory of disease, which influenced Joseph Lister to adopt surgical methods that reduce the growth of disease-causing pathogens. Although infectious disease started to become a more unified concept in the 19th century, it was not considered a medical specialty until the 1970s, following a number of newly discovered diseases and vaccines.
== Investigations ==
When diagnosing, a medical professional must first determine whether a patient has an infectious disease or another condition that is not caused by infection but exhibits similar symptoms. Once the illness is confirmed to be caused by an infection, infectious diseases specialists employ a variety of diagnostic tests to identify the pathogen responsible. Common tests include staining, culture tests, serological tests, susceptibility tests, genotyping, nucleic acid-based tests, and polymerase chain reaction. Because samples of bodily fluid or tissue are used in these tests, a specialist must distinguish between the non-disease-causing and disease-causing bacteria inhabiting the body to effectively identify and treat the infection.
Staining is a testing method that uses a special dye to change the color of pathogens so they can be viewed under a microscope. The change in color helps doctors distinguish the pathogen from its surroundings and identify it. This method succeeds only when the pathogens are large and plentiful; it is therefore unsuccessful with viruses, which are too small to be seen under a light microscope. Staining is most useful for bacteria: in Gram staining, a violet-colored stain is applied, and bacteria that appear blue are considered Gram-positive while those that appear red are Gram-negative.
Culture tests are done when there is not enough of the pathogen to be detected through other tests. ID specialists grow the pathogen in the lab until they have enough to work with. Although cultures work for some pathogens, such as the bacteria that cause strep throat, they are ineffective for many others, such as the agent of syphilis. A test to identify the pathogen, such as staining, takes place after the culture test.
Susceptibility tests are done by ID specialists to discover which antimicrobial drug would be most effective at killing the pathogen. Cultures can also be used as a form of susceptibility testing by adding the drug to the cultured pathogens and observing whether or not it kills the pathogen and how much of the drug is needed to kill it.
Nucleic acid-based tests are used to detect genetic material. For pathogens that cannot be cultured, ID specialists can identify them by looking for specific DNA or RNA. Polymerase chain reaction (PCR), a type of nucleic acid-based test, is similar to culture tests in that genetic material from the pathogen is duplicated. This method is mainly used when a specific pathogen is suspected.
== Treatments ==
Infectious diseases specialists employ a variety of antimicrobial agents to help treat infections. The type of antimicrobial depends on the organism that is causing the infection. Antibiotics are used to treat bacterial infections; antiviral agents treat viral infections; and antifungal agents treat fungal infections.
== Training ==
=== United States ===
In the United States, infectious diseases is a subspecialty of internal medicine and pediatrics. In order to sit for the infectious diseases board certification exam (administered by the American Board of Internal Medicine or the American Board of Pediatrics), physicians must have completed a residency (in internal medicine or pediatrics) and then undergone additional fellowship training (at least two or three years, respectively). The exam has been given as a subspecialty of internal medicine since 1972 and as a subspecialty of pediatrics since 1994.
== References ==
== External links ==
IDSA - Infectious Diseases Society of America | Wikipedia/Infectious_disease_(medical_specialty) |
The Standard Industrial Classification (SIC) is a system for classifying industries by a four-digit code as a method of standardizing industry classification for statistical purposes across agencies. Established in the United States in 1937, it is used by government agencies to classify industry areas. Similar SIC systems are also used by agencies in other countries, e.g., by the United Kingdom's Companies House.
In the United States, the SIC system was last revised in 1987 and was last used by the Census Bureau for the 1992 Economic Census, and has been replaced by the North American Industry Classification System (NAICS code), which was released in 1997. Some U.S. government departments and agencies, such as the U.S. Securities and Exchange Commission (SEC), continue to use SIC codes.
The SIC code for an establishment, that is, a unique business with a registered U.S. headquarters, was determined by the industry appropriate for the overall largest product lines of the company or organization of which the establishment was a part. The later NAICS classification system has a different concept, assigning establishments into categories based on each one's output.
== History ==
The first edition of SIC was published in parts during 1938–1940, with revisions made in 1941–1942. The next edition was published in two parts in 1945 and 1949. Further revisions were issued in 1957, 1963, 1967, 1972, 1977, and 1987.
The SIC code system has been used since the 1930s. It was developed by the Interdepartmental Committee on Industrial Statistics, established by the Central Statistical Board, which produced the List of Industries for manufacturing, published in 1938, and the 1939 List of Industries for non-manufacturing industries; together these became the first Standard Industrial Classification for the United States.
The Office of Management and Budget, or OMB, was tasked with revising the SIC system to reflect changing economic conditions. The OMB established the Economic Classification Policy Committee in 1992 to develop a new system representative of the current industrial climate. The result was the North American Industry Classification System, or NAICS, a collaborative effort between Canada, the U.S. and Mexico. NAICS replaced the four-digit SIC code with a six-digit code, and it provided more flexibility in handling emerging industries (for example, the NAICS system more generally allows for "Other..." categories across industry groups). The new codes were implemented in Canada and the United States in 1997 and in Mexico one year later.
NAICS classified establishments (workplaces) by their main output, instead of classifying them with the larger firm or organization of which the establishment was a part. This gives more precise information on establishment and worker activities than the SIC system, but changed the meaning of the classifications somewhat, making some time series of data hard to sustain accurately. Using longitudinal data on establishments, Fort and Klimek (2016) found that the switch from SIC to NAICS reclassified large numbers of workers into different industries/sectors, notably moving some from the Manufacturing sector into Services.
== Purpose ==
In the early 1900s, each United States government agency conducted business analysis using its own methods and metrics, unknown and meaningless to other agencies. In the 1930s, the government needed standardized and meaningful methods to measure, analyze, and share data across its various agencies; thus the Standard Industrial Classification system was born. SIC codes are four-digit numerical representations of major businesses and industries, assigned based on common characteristics shared in the products, services, production, and delivery system of a business.
== Structure ==
SIC codes have a hierarchical, top-down structure that begins with general characteristics and narrows down to the specifics. The first two digits of the code represent the major industry sector to which a business belongs. The third and fourth digits describe the sub-classification of the business group and specialization, respectively. For example, "36" refers to a business that deals in "Electronic and Other Electrical Equipment." Adding "7" as a third digit to get "367" indicates that the business operates in "Electronic Components and Accessories." The fourth digit distinguishes the specific industry sector, so a code of "3672" indicates that the business is concerned with "Printed Circuit Boards."
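Because the hierarchy is purely positional, decomposing a code needs only string slicing. The following minimal Python sketch illustrates the idea; the function name and the returned structure are invented for this example and are not part of any official SIC tooling:

```python
def sic_hierarchy(sic_code: str) -> dict:
    """Split a four-digit SIC code into its hierarchical levels."""
    if len(sic_code) != 4 or not sic_code.isdigit():
        raise ValueError("SIC codes are exactly four digits")
    return {
        "major group": sic_code[:2],     # broad industry sector (first two digits)
        "industry group": sic_code[:3],  # sub-classification (third digit added)
        "industry": sic_code,            # specialization (fourth digit added)
    }

print(sic_hierarchy("3672"))
# {'major group': '36', 'industry group': '367', 'industry': '3672'}
```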
== Uses ==
The U.S. Census Bureau, Bureau of Labor Statistics, Internal Revenue Service and Social Security Administration utilize SIC codes in their reporting, although SIC codes are also used in academic and business sectors. The Bureau of Labor Statistics updates the codes every three years and uses SIC to report on work force, wages and pricing issues. The Social Security Administration assigns SIC codes to businesses based on the descriptions provided by employers under the primary business activity entry on employer ID applications.
== Limitations ==
Over the years, the U.S. Census has identified three major limitations of the SIC system. The first concerns its definition and mistaken classification of employee groups. For example, administrative assistants in the automotive industry support all levels of the business, yet the SIC defines these employees as part of the "Basic Sector" of manufacturing jobs when they should be reported as "Non-Basic." Second, SIC codes were developed for traditional industries prior to 1970, and business has changed considerably since then from manufacturing-based to mostly service-based. Third, the SIC has been slow to recognize new and emerging industries, such as those in the computer, software, and information technology sectors.
== Codes ==
=== Range ===
The SIC codes can be grouped into progressively broader industry classifications: industry group, major group, and division. The first 3 digits of the SIC code indicate the industry group, and the first two digits indicate the major group. Each division encompasses a range of SIC codes:
To look at a particular example of the hierarchy, SIC code 2024 (ice cream and frozen desserts) belongs to industry group 202 (dairy products), which is part of major group 20 (food and kindred products), which belongs to the division of manufacturing.
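Continuing the example, a division lookup reduces to a range check on the first two digits. The sketch below is illustrative only: it includes just the manufacturing division, and its major-group range of 20–39 is an assumption taken from standard SIC summaries rather than from this article, so it should be checked against the official tables.

```python
# Map of major-group ranges to divisions; only manufacturing is shown here,
# and its 20-39 range is an assumption to verify against the official tables.
DIVISIONS = {
    range(20, 40): "Manufacturing",
}

def division_of(sic_code: str) -> str:
    """Return the division for a SIC code, if its major group is known."""
    major_group = int(sic_code[:2])
    for group_range, name in DIVISIONS.items():
        if major_group in group_range:
            return name
    return "unknown (division table not reproduced here)"

print(division_of("2024"))  # -> Manufacturing (ice cream and frozen desserts)
```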
=== List ===
The following table is from the SEC's website, which allows searching for companies by SIC code in its database of filings. The acronym NEC stands for "not elsewhere classified".
== See also ==
North American Industry Classification System
International Standard Industrial Classification
Global Industry Classification Standard
Australian and New Zealand Standard Industrial Classification
United Kingdom Standard Industrial Classification of Economic Activities
Industry Classification Benchmark (ICB)
Merchant category code
== References ==
== External links ==
Official website
Bernard Guibert, Jean Laganier and Michel Volle, An Essay on Industrial Classifications, Économie et statistique n° 20, February 1971
North American Industry Classification System
SIC Tools and Resources | Wikipedia/Standard_Industrial_Classification |
Edible algae based vaccination is a vaccination strategy under preliminary research to combine a genetically engineered sub-unit vaccine and an immunologic adjuvant into Chlamydomonas reinhardtii microalgae. Microalgae can be freeze-dried and administered orally. While spirulina is accepted as safe to consume, edible algal vaccines remain under basic research with unconfirmed safety and efficacy as of 2018.
In 2003, the first documented algal-based vaccine antigen was reported, consisting of a foot-and-mouth disease antigen complexed with the cholera toxin subunit B, which delivered the antigen to digestive mucosal surfaces in mice. The vaccine was grown in C. reinhardtii algae and achieved oral vaccination in mice, but was hindered by low vaccine antigen expression levels.
Proteins expressed inside the chloroplast of algae (the most common site of genetic engineering and protein production) do not undergo glycosylation, a form of posttranslational modification. By contrast, in common expression systems such as yeast, proteins that are not naturally glycosylated, like the malaria vaccine candidate Pfs25, can undergo glycosylation.
== Notes ==
== References ==
U.S. Food and Drug Administration (2002) GRAS Notification for Spirulina Microalgae
Specht, Elizabeth A.; Mayfield, Stephen P. (2014). "Algae-based oral recombinant vaccines". Frontiers in Microbiology. 5: 60. doi:10.3389/fmicb.2014.00060. PMC 3925837. PMID 24596570.
Rasala, Beth A.; Muto, Machiko; Lee, Philip A.; Jager, Michal; Cardoso, Rosa M.F.; Behnke, Craig A.; Kirk, Peter; Hokanson, Craig A.; Crea, Roberto; Mendez, Michael; Mayfield, Stephen P. (2010). "Production of therapeutic proteins in algae, analysis of expression of seven human proteins in the chloroplast of Chlamydomonas reinhardtii". Plant Biotechnology Journal. 8 (6): 719–733. doi:10.1111/j.1467-7652.2010.00503.x. PMC 2918638. PMID 20230484.
Shimp, Richard L.; Rowe, Christopher; Reiter, Karine; Chen, Beth; Nguyen, Vu; Aebig, Joan; Rausch, Kelly M.; Kumar, Krishan; Wu, Yimin; Jin, Albert J.; Jones, David S.; Narum, David L. (2013). "Development of a Pfs25-EPA malaria transmission blocking vaccine as a chemically conjugated nanoparticle". Vaccine. 31 (28): 2954–2962. doi:10.1016/j.vaccine.2013.04.034. PMC 3683851. PMID 23623858.
Gregory, James A.; Li, Fengwu; Tomosada, Lauren M.; Cox, Chesa J.; Topol, Aaron B.; Vinetz, Joseph M.; Mayfield, Stephen; Hviid, Lars (2012). "Algae-Produced Pfs25 Elicits Antibodies That Inhibit Malaria Transmission". PLOS ONE. 7 (5): e37179. Bibcode:2012PLoSO...737179G. doi:10.1371/journal.pone.0037179. PMC 3353897. PMID 22615931. | Wikipedia/Edible_algae_vaccine |
Industrial enzymes are enzymes that are commercially used in a variety of industries such as pharmaceuticals, chemical production, biofuels, food and beverage, and consumer products. Due to advancements in recent years, biocatalysis using isolated enzymes is considered more economical than the use of whole cells. Enzymes may be used as a unit operation within a process to generate a desired product, or may themselves be the product of interest. Industrial biological catalysis through enzymes has experienced rapid growth in recent years due to their ability to operate at mild conditions and their exceptional chiral and positional specificity, advantages that traditional chemical processes lack. Isolated enzymes are typically used in hydrolytic and isomerization reactions, whereas whole cells are typically used when a reaction requires a co-factor. Although co-factors may be generated in vitro, it is typically more cost-effective to use metabolically active cells.
== Enzymes as a unit of operation ==
=== Immobilization ===
Despite their excellent catalytic capabilities, enzymes and their properties often must be improved prior to industrial implementation. Aspects that frequently need to be addressed include stability, activity, product inhibition, and selectivity towards non-natural substrates. This may be accomplished through immobilization of enzymes on a solid material, such as a porous support. Immobilization greatly simplifies the recovery process, enhances process control, and reduces operational costs. Many immobilization techniques exist, such as adsorption, covalent binding, affinity, and entrapment. Ideal immobilization processes avoid highly toxic reagents, to preserve the stability of the enzymes. After immobilization is complete, the enzymes are introduced into a reaction vessel for biocatalysis.
==== Adsorption ====
Enzyme adsorption onto carriers functions based on chemical and physical phenomena such as van der Waals forces, ionic interactions, and hydrogen bonding. These forces are weak, and as a result, do not affect the structure of the enzyme. A wide variety of enzyme carriers may be used. Selection of a carrier is dependent upon the surface area, particle size, pore structure, and type of functional group.
==== Covalent binding ====
Many binding chemistries may be used to adhere an enzyme to a surface, with varying degrees of success. The most successful covalent binding techniques include binding via glutaraldehyde to amino groups and via N-hydroxysuccinimide esters. These immobilization reactions proceed at ambient temperature under mild conditions, which limits the potential to modify the structure and function of the enzyme.
==== Affinity ====
Immobilization using affinity relies on the specificity of an enzyme to couple an affinity ligand to an enzyme to form a covalently bound enzyme-ligand complex. The complex is introduced into a support matrix for which the ligand has high binding affinity, and the enzyme is immobilized through ligand-support interactions.
==== Entrapment ====
Immobilization using entrapment relies on trapping enzymes within gels or fibers, using non-covalent interactions. Characteristics that define a successful entrapping material include high surface area, uniform pore distribution, tunable pore size, and high adsorption capacity.
=== Recovery ===
Enzymes typically constitute a significant operational cost for industrial processes and, in many cases, must be recovered and reused to ensure the economic feasibility of a process. Although some biocatalytic processes operate in organic solvents, the majority occur in aqueous environments, improving the ease of separation. Most biocatalytic processes run in batch mode, differentiating them from conventional chemical processes, so typical bioprocesses employ a separation step after bioconversion; during the batch, product accumulation may inhibit enzyme activity. Ongoing research aims to develop in situ separation techniques, in which product is removed from the batch during the conversion process. Enzyme separation may be accomplished through solid-liquid techniques such as centrifugation or filtration, with the product-containing solution fed downstream for product separation.
== Enzymes as a desired product ==
To industrialize an enzyme, the following upstream and downstream enzyme production processes are considered:
=== Upstream ===
Upstream processes are those that contribute to the generation of the enzyme.
==== Selection of a suitable enzyme ====
An enzyme must be selected based upon the desired reaction. The selected enzyme defines the required operational properties, such as pH, temperature, activity, and substrate affinity.
==== Identification and selection of a suitable source for the selected enzyme ====
The choice of a source of enzymes is an important step in the production of enzymes. It is common to examine the role of enzymes in nature and how they relate to the desired industrial process. Enzymes are most commonly sourced through bacteria, fungi, and yeast. Once the source of the enzyme is selected, genetic modifications may be performed to increase the expression of the gene responsible for producing the enzyme.
==== Process development ====
Process development is typically performed after genetic modification of the source organism, and involves the modification of the culture medium and growth conditions. In many cases, process development aims to reduce mRNA hydrolysis and proteolysis.
==== Large scale production ====
Scaling up enzyme production requires optimization of the fermentation process. Most enzymes are produced under aerobic conditions and therefore require constant oxygen input, which impacts fermenter design. Because dissolved oxygen, temperature, pH, and nutrients vary in their distribution throughout the vessel, the transport phenomena associated with these parameters must be considered. The highest possible productivity is achieved when the fermenter operates at its maximum transport capacity.
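As an illustration of that transport limit, the oxygen supply of an aerated fermenter is commonly summarized by the standard oxygen transfer rate relation; this is general bioprocess background rather than a model given in this article:

OTR = kLa × (C* − CL)

where kLa is the volumetric mass-transfer coefficient, C* the oxygen saturation concentration, and CL the dissolved-oxygen concentration in the broth. Productivity becomes oxygen-limited once the culture's oxygen uptake rate approaches the maximum achievable OTR.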
=== Downstream ===
Downstream processes are those that contribute to separation or purification of enzymes.
==== Removal of insoluble materials and recovery of enzymes from the source ====
The procedures for enzyme recovery depend on the source organism, and whether enzymes are intracellular or extracellular. Typically, intracellular enzymes require cell lysis and separation of complex biochemical mixtures. Extracellular enzymes are released into the culture medium, and are much simpler to separate. Enzymes must maintain their native conformation to ensure their catalytic capability. Since enzymes are very sensitive to pH, temperature, and ionic strength of the medium, mild isolation conditions must be used.
==== Concentration and primary purification of enzymes ====
Depending on the intended use of the enzyme, different levels of purity are required. For example, enzymes used for diagnostic purposes must be separated to a higher purity than bulk industrial enzymes, to prevent stray catalytic activity from producing erroneous results. Enzymes used for therapeutic purposes typically require the most rigorous purification. Most commonly, a combination of chromatography steps is employed for separation.
The purified enzymes are either sold in pure form to other industries or added to consumer goods.
== See also ==
Industrial ecology
Industrial fermentation
Industrial microbiology
== References == | Wikipedia/Industrial_enzyme |
Industrial processes are procedures involving chemical, physical, electrical, or mechanical steps to aid in the manufacturing of an item or items, usually carried out on a very large scale. Industrial processes are the key components of heavy industry.
== Chemical processes by main basic material ==
Certain chemical processes yield important basic materials for society, e.g., cement, steel, aluminium, and fertilizer. However, these processes contribute to climate change by emitting carbon dioxide, a greenhouse gas, both directly through the chemical reactions themselves and through the combustion of fossil fuels used to generate the high temperatures needed to reach the activation energies of the reactions.
=== Cement (the paste within concrete) ===
Calcination – Limestone, which is largely composed of fossilized calcium carbonate (CaCO3), breaks down at high temperatures into usable calcium oxide (CaO) and carbon dioxide gas (CO2), which is released as a by-product. This chemical reaction, called calcination, figures most prominently in creating cement (the paste within concrete). The reaction is also important in providing calcium oxide to act as a chemical flux (removing impurities) within a blast furnace.
CaCO3(s) → CaO(s) + CO2(g)
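To put a number on the direct emission, the molar masses of CaCO3 and CO2 fix the ratio of CO2 released per unit of limestone calcined. A back-of-the-envelope sketch, assuming pure CaCO3 and excluding fuel-combustion emissions:

```python
# Molar masses in g/mol
M_CaCO3 = 40.08 + 12.01 + 3 * 16.00  # calcium carbonate, ~100.09
M_CO2 = 12.01 + 2 * 16.00            # carbon dioxide, ~44.01

# Mass fraction of limestone released as CO2 by the calcination reaction alone
ratio = M_CO2 / M_CaCO3
print(f"~{ratio:.2f} kg of CO2 per kg of CaCO3 calcined")  # ~0.44
```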
=== Steel ===
Smelting – Inside a blast furnace, carbon monoxide (CO) is released by combusting coke (a high-carbon derivative of coal) and removes the undesired oxygen (O) within ores. CO2 is released as a by-product, carrying away the oxygen and leaving behind the desired pure metal. Most prominently, iron smelting is how steel (largely iron with small amounts of carbon) is created from mined iron ore and coal.
Fe2O3(s) + 3 CO(g) → 2 Fe(s) + 3 CO2(g)
=== Aluminium ===
Hall–Héroult process – Aluminium oxide (Al2O3) is reduced by electrolysis at high temperature using carbon (coke) anodes, yielding the desired pure aluminium (Al) and a mixture of CO and CO2.
Al2O3(s) + 3 C(s) → 2 Al(s) + 3 CO(g)
2 Al2O3(s) + 3 C(s) → 4 Al(s) + 3 CO2(g)
=== Fertilizer ===
Haber process – Atmospheric nitrogen (N2) is combined with hydrogen to yield ammonia (NH3), which is used to make all synthetic fertilizer. The Haber process uses a fossil carbon source, generally natural gas, to provide carbon monoxide via steam reforming for the water–gas shift reaction, yielding hydrogen (H2) and releasing CO2. The H2 is then used to break the strong triple bond in N2, yielding industrial ammonia.
CH4(g) + H2O(g) → CO(g) + 3 H2(g)
CO(g) + H2O(g) → H2(g) + CO2(g)
N2(g) + 3 H2(g) → 2 NH3(g)
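These three steps can be combined into an approximate net reaction by balancing the hydrogen: multiply the reforming and shift reactions by three and the ammonia synthesis by four. This simplification ignores side reactions and process losses, but it makes the carbon cost visible, roughly three molecules of CO2 for every eight of NH3 when methane supplies the hydrogen:

3 CH4(g) + 6 H2O(g) + 4 N2(g) → 3 CO2(g) + 8 NH3(g)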
=== Other chemical processes ===
Disinfection – chemical treatment to kill bacteria and viruses
Pyroprocessing – using heat to chemically combine materials, such as in cement
== Electrolysis ==
The availability of electricity and its effect on materials gave rise to several processes for plating or separating metals.
Electrolytic process – any process using electrolysis
Electrophoretic deposition – electrolytic deposition of colloidal particles in a liquid medium
Electropolishing – the reverse of electroplating
Electrotyping – using electroplating to produce printing plates
Gilding, electroplating, anodizing, electrowinning – depositing a material on an electrode
Isoelectric focusing a.k.a. electrofocusing – similar to electroplating, but separating molecules
Metallizing, plating, spin coating – the generic terms for giving non-metals a metallic coating
== Cutting ==
Electrical discharge machining (EDM)
Laser cutting
Machining – the mechanical cutting and shaping of metal which involves the loss of the material
Oxy-fuel welding and cutting
Plasma cutting
Sawing
Shearing
Water-jet cutting – cutting materials using a very high-pressure jet of water
== Metalworking ==
Case-hardening, differential hardening, shot peening – creating a wear-resistant surface
Casting – shaping of a liquid material by pouring it into moulds and letting it solidify
Die cutting – A "forme" or "die" is pressed onto a flat material in order to cut, score, punch and otherwise shape the material
Electric arc furnace — very-high-temperature processing
Forging – the shaping of metal by use of heat and hammer
Hydroforming – a tube of metal is expanded into a mould under pressure
Precipitation hardening – heat treatment used to strengthen malleable materials
Progressive stamping – the production of components from a strip or roll
Sandblasting – cleaning of a surface using sand or other particles
Smelting and direct reduction – extracting metals from ores
Soldering, brazing, welding – a process for joining metals
Stamping
Steelmaking – turning "pig iron" from smelting into steel
Tumble polishing – for polishing
Work hardening – adding strength to metals, alloys, etc.
=== Iron and steel ===
Basic oxygen steelmaking
Bessemer process
Blast furnace – produced cast iron
Catalan forge, open hearth furnace, bloomery – produced wrought iron
Cementation process
Crucible steel
Direct reduction – produced direct reduced iron
Smelting – the process of using furnaces to produce steel, copper, etc.
== Molding ==
The physical shaping of materials by forming their liquid state in a mould.
Blow molding as in plastic containers or in the glass container industry – making hollow objects by blowing them into a mould
Casting, sand casting – the shaping of molten metal or plastics using a mould
Compression molding
Sintering, powder metallurgy – the making of objects from metal or ceramic powder
== Separation ==
Many materials exist in an impure form. Purification or separation provides a usable product.
Comminution – reduces the size of physical particles (the general term covering crushing and grinding)
Frasch process – for extracting molten sulfur from the ground
Froth flotation, flotation process – separating minerals through flotation
Liquid–liquid extraction – dissolving one substance in another
== Distillation ==
Distillation is the purification of volatile substances by evaporation and condensation
Batch distillation
Continuous distillation
Fractional distillation, steam distillation, vacuum distillation
Fractionating column
Spinning cone
== Additive manufacturing ==
In additive manufacturing, material is progressively added to the piece until the desired shape and size are obtained.
Fused deposition modeling (FDM)
Photolithography
Selective laser sintering (SLS)
Stereolithography (SLA)
== Petroleum and organic compounds ==
The nature of an organic molecule means it can be transformed at the molecular level to create a range of products.
Alkylation – refining of crude oil
Burton process – cracking of hydrocarbons
Cracking (chemistry) – the generic term for breaking up the larger molecules
Cumene process – making phenol and acetone from benzene
Friedel–Crafts reaction, Kolbe–Schmitt reaction
Olefin metathesis, thermal depolymerization
Oxo process – produces aldehydes from alkenes
Polymerization
Raschig hydroxylamine process – produces hydroxylamine, a precursor of nylon
Transesterification – organic chemicals
== Organized by product ==
Aluminium – ( Hall-Héroult process, Deville process, Bayer process, Wöhler process)
Ammonia, used in fertilizer – (Haber process)
Bromine – (Dow process)
Chlorine, used in chemicals – (chloralkali process, Weldon process, Hooker process)
Fat – (rendering)
Fertilizer – (nitrophosphate process)
Glass – (Pilkington process)
Gold – (bacterial oxidation, Parkes process)
Graphite – (Acheson process)
Heavy water, used to refine radioactive products – (Girdler sulfide process)
Hydrogen – (water–gas shift reaction, steam reforming)
Lead (and bismuth) – (Betts electrolytic process, Betterton-Kroll process)
Nickel – (Mond process)
Nitric acid – (Ostwald process)
Paper – (pulping, Kraft process, Fourdrinier machine)
Rubber – (vulcanization)
Salt – (Alberger process, Grainer evaporation process)
Semiconductor crystals – (Bridgman–Stockbarger method, Czochralski method)
Silver – (Patio process, Parkes process)
Silicon carbide – (Acheson process, Lely process)
Sodium carbonate, used for soap – (Leblanc process, Solvay process, Leblanc-Deacon process)
Sulfuric acid – (lead chamber process, contact process)
Titanium – (Hunter process, Kroll process)
Zirconium – (Hunter process, Kroll process, van Arkel–de Boer process)
A list by process:
Alberger process, Grainer evaporation process – produces salt from brine
Bacterial oxidation – used to produce gold
Bayer process – the extraction of aluminium from ore
Chloralkali process, Weldon process – for producing chlorine and sodium hydroxide
Dow process – produces bromine from brine
Formox process – oxidation of methanol to produce formaldehyde
Girdler sulfide process – for making heavy water
Hunter process, Kroll process – produces titanium and zirconium
Industrial rendering – the separation of fat from bone and protein
Lead chamber process, contact process – production of sulfuric acid
Mond process – nickel
Nitrophosphate process – a number of similar process for producing fertilizer
Ostwald process – produces nitric acid
Packaging
Pidgeon process – produces magnesium, reducing the oxide using silicon
Steam reforming, water–gas shift reaction – produce hydrogen and carbon monoxide from methane (steam reforming), or hydrogen and carbon dioxide from water and carbon monoxide (water–gas shift)
Vacuum metalising – a finishing process
Van Arkel–de Boer process – for producing titanium, zirconium, hafnium, vanadium, thorium, or protactinium
== See also ==
Chemical engineering
Industrial Extraction
Mass production
Multilevel Flow Modeling
Process (engineering)
== References == | Wikipedia/Industrial_process |
Single-cell proteins (SCP) or microbial proteins refer to edible unicellular microorganisms. The biomass or protein extract from pure or mixed cultures of algae, yeasts, fungi or bacteria may be used as an ingredient or a substitute for protein-rich foods, and is suitable for human consumption or as animal feed. Industrial agriculture is marked by a high water footprint, high land use, biodiversity destruction, and general environmental degradation, and contributes to climate change by emitting a third of all greenhouse gases; production of SCP does not necessarily exhibit any of these serious drawbacks. Today, SCP is commonly grown on agricultural waste products, and as such inherits the ecological footprint and water footprint of industrial agriculture. However, SCP may also be produced entirely independently of agricultural waste through autotrophic growth. Thanks to the high diversity of microbial metabolism, autotrophic SCP offers several different modes of growth, versatile options for nutrient recycling, and substantially increased efficiency compared to crops. A 2021 publication showed that photovoltaic-driven microbial protein production could use 10 times less land for an equivalent amount of protein compared to soybean cultivation.
With the world population projected to reach 9 billion by 2050, there is strong evidence that agriculture will not be able to meet demand and that there is a serious risk of food shortage. Autotrophic SCP represents an option for fail-safe mass food production that can operate reliably even under harsh climate conditions.
== History ==
In 1781, processes for preparing highly concentrated forms of yeast were established. Research on single-cell protein technology started a century ago, when Max Delbrück and his colleagues recognized the high value of surplus brewer's yeast as a feed supplement for animals. During World War I and World War II, yeast SCP was employed on a large scale in Germany to counteract wartime food shortages. Inventions for SCP production often represented milestones for biotechnology in general: for example, in 1919, Sak in Denmark and Hayduck in Germany invented a method named "Zulaufverfahren" (fed-batch), in which a sugar solution was fed continuously to an aerated suspension of yeast, instead of the yeast being added once to a diluted sugar solution (batch). In the post-war period, the Food and Agriculture Organization of the United Nations (FAO) drew attention in 1960 to hunger and malnutrition in the world and introduced the concept of the protein gap, showing that 25% of the world population had a deficiency of protein intake in their diet. It was also feared that agricultural production would fail to meet humanity's increasing demand for food. By the mid-1960s, almost a quarter of a million tons of food yeast were being produced in different parts of the world, and the Soviet Union alone produced some 900,000 tons of food and fodder yeast by 1970.
In the 1960s, researchers at BP developed what they called the "proteins-from-oil process": a technology for producing single-cell protein from yeast fed on waxy n-paraffins, a byproduct of oil refineries. Initial research was done by Alfred Champagnat at BP's Lavera Refinery in France; a small pilot plant there started operations in March 1963, and construction of a second pilot plant, at Grangemouth Oil Refinery in Britain, was authorized.
The term SCP was coined in 1966 by Carroll L. Wilson of MIT.
The "food from oil" idea became quite popular by the 1970s, with Champagnat being awarded the UNESCO Science Prize in 1976, and paraffin-fed yeast facilities being built in a number of countries. The primary use of the product was as poultry and cattle feed.
The Soviets were particularly enthusiastic, opening large "BVK" (belkovo-vitaminny kontsentrat, i.e., "protein-vitamin concentrate") plants next to their oil refineries in Kstovo (1973) and Kirishi (1974). The Soviet Ministry of Microbiological Industry had eight plants of this kind by 1989. However, due to concerns about the toxicity of alkanes in SCP and under pressure from environmentalist movements, the government decided to close them down or convert them to other microbiological processes.
Quorn is a range of vegetarian and vegan meat-substitutes made from Fusarium venenatum mycoprotein, sold in Europe and North America.
Another type of single-cell protein meat analogue, based on bacteria rather than fungi, is produced by Calysta. Other producers are Unibio (Denmark), Circe Biotechnologie (Austria), and String Bio (India).
SCP has been argued to be a source of alternative or resilient food.
== Production process ==
Single-cell proteins develop when microbes ferment waste materials (including wood, straw, cannery, and food-processing wastes, residues from alcohol production, hydrocarbons, or human and animal excreta). With 'electric food' processes the inputs are electricity, CO2 and trace minerals and chemicals such as fertiliser. It is also possible to derive SCP from natural gas to use as a resilient food. Similarly SCP can be derived from waste plastic by upcycling.
The problem with extracting single-cell proteins from waste products is dilution and cost: they are found in very low concentrations, usually less than 5%. Engineers have developed ways to increase the concentration, including centrifugation, flotation, precipitation, coagulation, and filtration, or the use of semi-permeable membranes.
The single-cell protein must be dehydrated to approximately 10% moisture content and/or acidified to aid storage and prevent spoilage. The methods for increasing the concentration to adequate levels and for de-watering require equipment that is expensive and not always suitable for small-scale operations, so it is economically prudent to feed the product locally, soon after it is produced.
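A simple mass balance shows why de-watering dominates the cost. The figures below are illustrative only, assuming 5% solids in the broth and drying to 10% moisture:

```python
# Illustrative de-watering mass balance for a dilute SCP broth
broth_kg = 100.0
solids_frac = 0.05     # broth concentration, typically under 5%
final_moisture = 0.10  # target ~10% moisture in the dried product

solids = broth_kg * solids_frac          # 5.0 kg of cell mass
product = solids / (1 - final_moisture)  # ~5.6 kg of dried product
water_removed = broth_kg - product       # ~94.4 kg of water to remove

print(f"{water_removed:.1f} kg of water removed per {product:.1f} kg of product")
```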
== Microorganisms ==
Microbes employed include (brand names in parentheses for commercialized examples):
== Properties ==
Large-scale production of microbial biomass has many advantages over the traditional methods for producing proteins for food or feed.
Microorganisms have a much higher growth rate (typical doubling times: algae 2–6 hours, yeast 1–3 hours, bacteria 0.5–2 hours; an illustrative calculation follows this list). This also allows selection for strains with high yield and good nutritional composition more quickly and easily compared to breeding.
Whereas large parts of crops, such as stems, leaves and roots, are not edible, single-cell microorganisms can be used entirely. Whereas parts of the edible fraction of crops are indigestible, many microorganisms are digestible at a much higher fraction.
Microorganisms usually have a much higher protein content of 30–70% in the dry mass than vegetables or grains. The amino acid profiles of many SCP microorganisms often have excellent nutritional quality, comparable to hen's eggs.
Some microorganisms can build vitamins and nutrients which eukaryotic organisms such as plants cannot produce or not produce in significant amounts, including vitamin B12.
Microorganisms can utilize a broad spectrum of raw materials as carbon sources including alkanes, methanol, methane, ethanol and sugars. What was considered "waste product" often can be reclaimed as nutrients and support growth of edible microorganisms.
Like plants, autotrophic microorganisms are capable of growing on CO2. Some of them, such as bacteria using the Wood–Ljungdahl pathway or the reductive TCA cycle, can fix CO2 between 2–3 times and up to 10 times more efficiently than plants, when also considering the effects of photoinhibition.
Some bacteria, such as several homoacetogenic clostridia, are capable of performing syngas fermentation. This means they can metabolize synthesis gas, a gas mixture of CO, H2 and CO2 that can be made by gasification of residual intractable biowastes such as lignocellulose.
Some bacteria are diazotrophic, i.e. they can fix N2 from the air and are thus independent of chemical N-fertilizer, whose production, utilization and degradation causes tremendous harm to the environment, deteriorates public health, and fosters climate change.
Many bacteria can utilize H2 for energy supply, using enzymes called hydrogenases. Whereas hydrogenases are normally highly O2-sensitive, some bacteria are capable of performing O2-dependent respiration of H2. This feature allows autotrophic bacteria to grow on CO2 without light at a fast growth rate. Since H2 can be made efficiently by water electrolysis, in a manner of speaking, those bacteria can be "powered by electricity".
Microbial biomass production is independent of seasonal and climatic variations, and can easily be shielded from extreme weather events that are expected to cause crop failures with the ongoing climate-change. Light-independent microorganisms such as yeasts can continue to grow at night.
Cultivation of microorganisms generally has a much lower water footprint than agricultural food production. Whereas the global average blue-green water footprint (irrigation, surface, ground and rain water) of crops reaches about 1,800 liters per kg of crop due to evaporation, transpiration, drainage and runoff, closed bioreactors producing SCP exhibit none of these losses.
Cultivation of microorganisms does not require fertile soil and therefore does not compete with agriculture. Thanks to the low water requirements, SCP cultivation can even be done in dry climates with infertile soil and may provide a means of fail-safe food supply in arid countries.
Photosynthetic microorganisms can reach a higher solar-energy-conversion efficiency than plants, because in photobioreactors supply of water, CO2 and a balanced light distribution can be tightly controlled.
Unlike agricultural products which are processed towards a desired quality, it is easier with microorganisms to direct production towards a desired quality. Instead of extracting amino acids from soy beans and throwing away half of the plant body in the process, microorganisms can be genetically modified to overproduce or even secrete a particular amino acid. However, in order to keep a good consumer acceptance, it is usually easier to obtain similar results by screening for microorganisms which already have the desired trait or train them via selective adaptation.
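As flagged in the growth-rate point above, exponential doubling is what makes microbial biomass production so fast. A toy calculation with illustrative doubling times picked from the quoted ranges; real cultures are limited by substrate, oxygen transfer, and heat removal long before reaching such figures:

```python
def biomass_after(initial_kg: float, doubling_time_h: float, hours: float) -> float:
    """Biomass after a period of idealized, unrestricted exponential growth."""
    return initial_kg * 2 ** (hours / doubling_time_h)

# One day of growth starting from 1 kg of biomass
for organism, td in [("bacteria", 1.0), ("yeast", 2.0), ("algae", 4.0)]:
    print(f"{organism}: {biomass_after(1.0, td, 24.0):,.0f} kg after 24 h")
# bacteria: 16,777,216 kg; yeast: 4,096 kg; algae: 64 kg
```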
Although SCP shows very attractive features as a nutrient for humans, several problems deter its adoption on a global basis:
Fast-growing microorganisms such as bacteria and yeast have a high concentration of nucleic acid, notably RNA. Levels must be limited in the diets of monogastric animals to <50 g per day. Ingestion of purine compounds arising from RNA breakdown leads to increased plasma levels of uric acid, which can cause gout and kidney stones. Uric acid can be converted to allantoin, which is excreted in urine. Nucleic acid removal is not necessary for animal feeds but is for human foods (humans have lost parts of the uric acid catabolic pathway during their evolution). This problem can be remediated, however: one common method is a heat treatment that kills the cells, inactivates proteases, and allows endogenous RNases to hydrolyse RNA, releasing nucleotides from the cell into the culture broth.
Similar to plant cells, the cell wall of some microorganisms such as algae and yeast contains indigestible components, such as cellulose. The cells of some kind of SCP should be broken up in order to liberate the cell interior and allow complete digestion.
Some kind of SCP exhibits unpleasant color and flavors.
Depending on the kind of SCP and the cultivation conditions, care must be taken to prevent and control contamination by other microorganisms, because contaminants may produce toxins such as mycotoxins or cyanotoxins. An interesting approach to this problem was proposed with the fungus Scytalidium acidophilum, which grows at a pH as low as 1, outside the tolerance of most microorganisms; this allows it to grow on acid-hydrolysed paper waste at low cost.
Some yeast and fungal proteins are deficient in methionine.
== See also ==
Solein - a single-cell protein made by Solar Foods Ltd., Finland-based
Kiverdi, Inc. and its subsidiary Air Protein, founded by Lisa Dyson - California-based
Avecom - Belgium-based
Unibio - Denmark-based
Calysta - California-based
Circe Biotechnologie - Austria-based
Superbrewed Food (formerly White Dog Labs). Delaware-based
Deep Branch - UK-based
LanzaTech
Nature's Fynd - Chicago-based
Kyanos
NovoNutrients
Fermentative hydrogen production
Hydrogenotrophs
Alternative foods
Microbial food cultures
== References ==
== External links ==
Media related to Single-cell protein at Wikimedia Commons | Wikipedia/Single-cell_protein |
Protein production is the biotechnological process of generating a specific protein. It is typically achieved by the manipulation of gene expression in an organism such that it expresses large amounts of a recombinant gene. This includes the transcription of the recombinant DNA to messenger RNA (mRNA), the translation of mRNA into polypeptide chains, which are ultimately folded into functional proteins and may be targeted to specific subcellular or extracellular locations.
Protein production systems (also known as expression systems) are used in the life sciences, biotechnology, and medicine. Molecular biology research uses numerous proteins and enzymes, many of which are from expression systems; particularly DNA polymerase for PCR, reverse transcriptase for RNA analysis, and restriction endonucleases for cloning. Expression systems are also used to make proteins that are screened in drug discovery as biological targets or as potential drugs themselves. There are also significant applications for expression systems in industrial fermentation, notably the production of biopharmaceuticals such as human insulin to treat diabetes, and the manufacture of enzymes.
== Protein production systems ==
Commonly used protein production systems include those derived from bacteria, yeast, baculovirus/insect, mammalian cells, and more recently filamentous fungi such as Myceliophthora thermophila. When biopharmaceuticals are produced with one of these systems, process-related impurities termed host cell proteins also arrive in the final product in trace amounts.
=== Cell-based systems ===
The oldest and most widely used expression systems are cell-based and may be defined as the "combination of an expression vector, its cloned DNA, and the host for the vector that provide a context to allow foreign gene function in a host cell, that is, produce proteins at a high level". Overexpression is an abnormally and excessively high level of gene expression which produces a pronounced gene-related phenotype.
There are many ways to introduce foreign DNA to a cell for expression, and many different host cells may be used for expression — each expression system has distinct advantages and liabilities. Expression systems are normally referred to by the host and the DNA source or the delivery mechanism for the genetic material. For example, common hosts are bacteria (such as E. coli, B. subtilis), yeast (such as S. cerevisiae) or eukaryotic cell lines. Common DNA sources and delivery mechanisms are viruses (such as baculovirus, retrovirus, adenovirus), plasmids, artificial chromosomes and bacteriophage (such as lambda). The best expression system depends on the gene involved; for example, the yeast Saccharomyces cerevisiae is often preferred for proteins that require significant posttranslational modification, and insect or mammalian cell lines are used when human-like splicing of mRNA is required. Nonetheless, bacterial expression has the advantage of easily producing large amounts of protein, which is required for X-ray crystallography or nuclear magnetic resonance experiments for structure determination.
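As an illustration of the selection heuristics just described, the following minimal sketch (plain Python; the flags and the mapping are simplifications drawn from this paragraph, not an authoritative decision procedure) encodes the trade-offs between the host systems named above.

```python
# A hedged sketch of the host-selection heuristics described above. The
# requirement flags and the mapping are illustrative simplifications drawn
# from this section, not an authoritative decision procedure.

def suggest_host(needs_ptm: bool, needs_human_splicing: bool,
                 needs_bulk_yield: bool) -> str:
    """Suggest an expression host from coarse protein requirements."""
    if needs_human_splicing:
        # Human-like splicing of mRNA calls for insect or mammalian lines.
        return "insect or mammalian cell line"
    if needs_ptm:
        # Significant posttranslational modification favors yeast.
        return "yeast (e.g. Saccharomyces cerevisiae)"
    if needs_bulk_yield:
        # Large amounts of protein, e.g. for crystallography or NMR.
        return "bacteria (e.g. E. coli)"
    return "depends on the gene involved"

print(suggest_host(needs_ptm=True, needs_human_splicing=False,
                   needs_bulk_yield=False))
# -> yeast (e.g. Saccharomyces cerevisiae)
```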
Because bacteria are prokaryotes, they are not equipped with the full enzymatic machinery to accomplish the required post-translational modifications or molecular folding. Hence, multi-domain eukaryotic proteins expressed in bacteria are often non-functional. Also, many proteins become insoluble as inclusion bodies that are difficult to recover without harsh denaturants and cumbersome subsequent protein refolding.
To address these concerns, expression systems using eukaryotic cells were developed for applications requiring proteins folded as in, or closer to, eukaryotic organisms: cells of plants (e.g. tobacco), insects or mammals (e.g. bovines) are transfected with genes and cultured in suspension, or even as tissues or whole organisms, to produce fully folded proteins. Mammalian in vivo expression systems, however, have low yields and other limitations (time consumption, toxicity to host cells, etc.). To combine the high yield, productivity and scalability of bacterial and yeast systems with the advanced protein-processing features of plant, insect and mammalian systems, other protein production systems have been developed using unicellular eukaryotes (e.g. non-pathogenic Leishmania cells).
==== Bacterial systems ====
===== Escherichia coli =====
E. coli is one of the most widely used expression hosts, and DNA is normally introduced in a plasmid expression vector. The techniques for overexpression in E. coli are well developed and work by increasing the number of copies of the gene or increasing the binding strength of the promoter region, thereby assisting transcription.
For example, a DNA sequence for a protein of interest could be cloned or subcloned into a high copy-number plasmid containing the lac (often LacUV5) promoter, which is then transformed into the bacterium E. coli. Addition of IPTG (a lactose analog) activates the lac promoter and causes the bacteria to express the protein of interest.
E. coli strains BL21 and BL21(DE3) are commonly used for protein production. As members of the B lineage, they lack lon and OmpT proteases, protecting the produced proteins from degradation. The DE3 prophage found in BL21(DE3) provides T7 RNA polymerase (driven by the LacUV5 promoter), allowing vectors with the T7 promoter to be used instead.
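To make the cloning step above concrete, here is a minimal sketch (plain Python) of the kind of sanity check applied to a coding sequence before it is subcloned behind an inducible promoter; the sequences at the bottom are hypothetical toy inserts, not real genes.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def looks_like_orf(cds: str) -> bool:
    """Crude check that a DNA string is a plausible open reading frame:
    starts with ATG, consists of whole codons, ends with a stop codon,
    and contains no internal stop codons."""
    cds = cds.upper()
    if len(cds) % 3 != 0 or not cds.startswith("ATG"):
        return False
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    has_internal_stop = any(c in STOP_CODONS for c in codons[:-1])
    return codons[-1] in STOP_CODONS and not has_internal_stop

print(looks_like_orf("ATGGCTTAA"))  # True: ATG, one codon, stop
print(looks_like_orf("ATGTAAGCT"))  # False: internal stop codon
```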
===== Corynebacterium =====
Non-pathogenic species of the gram-positive Corynebacterium are used for the commercial production of various amino acids. The C. glutamicum species is widely used for producing glutamate and lysine, components of human food, animal feed and pharmaceutical products.
Expression of functionally active human epidermal growth factor has been achieved in C. glutamicum, demonstrating a potential for industrial-scale production of human proteins. Expressed proteins can be targeted for secretion through either the general secretory pathway (Sec) or the twin-arginine translocation pathway (Tat).
Unlike gram-negative bacteria, the gram-positive Corynebacterium lack lipopolysaccharides that function as antigenic endotoxins in humans.
===== Pseudomonas fluorescens =====
The non-pathogenic, gram-negative bacterium Pseudomonas fluorescens is used for high-level production of recombinant proteins, commonly for the development of biotherapeutics and vaccines. P. fluorescens is a metabolically versatile organism, allowing for high-throughput screening and rapid development of complex proteins. P. fluorescens is best known for its ability to rapidly and successfully produce high titers of active, soluble protein.
==== Eukaryotic systems ====
===== Yeasts =====
Expression systems using either S. cerevisiae or Pichia pastoris allow stable and lasting production of proteins that are processed similarly to those of mammalian cells, at high yield, in chemically defined media.
===== Filamentous fungi =====
Filamentous fungi, especially Aspergillus and Trichoderma, have long been used to produce diverse industrial enzymes from their own genomes ("native", "homologous") and from recombinant DNA ("heterologous").
More recently, Myceliophthora thermophila C1 has been developed into an expression platform for screening and production of native and heterologous proteins. The C1 expression system shows a low-viscosity morphology in submerged culture, enabling the use of complex growth and production media. C1 also does not "hyperglycosylate" heterologous proteins, as Aspergillus and Trichoderma tend to do.
===== Baculovirus-infected cells =====
Baculovirus-infected insect cells (Sf9, Sf21, High Five strains) or mammalian cells (HeLa, HEK 293) allow production of glycosylated or membrane proteins that cannot be produced using fungal or bacterial systems. The system is useful for producing proteins in high quantity. Genes are not expressed continuously, because infected host cells eventually lyse and die during each infection cycle.
===== Non-lytic insect cell expression =====
Non-lytic insect cell expression is an alternative to the lytic baculovirus expression system. In non-lytic expression, vectors are transiently transfected into insect cells or stably integrated into their chromosomal DNA for subsequent gene expression. This is followed by selection and screening of recombinant clones. The non-lytic system has been used to give higher protein yield and quicker expression of recombinant genes compared to baculovirus-infected cell expression. Cell lines used for this system include Sf9 and Sf21 from Spodoptera frugiperda, Hi-5 from Trichoplusia ni, and Schneider 2 and Schneider 3 cells from Drosophila melanogaster. With this system, cells do not lyse, and several cultivation modes can be used. Additionally, protein production runs are reproducible and give a homogeneous product. A drawback of this system is the requirement of an additional screening step for selecting viable clones.
===== Excavata =====
Expression systems based on Leishmania tarentolae (which cannot infect mammals) allow stable and lasting production of proteins at high yield in chemically defined media. Produced proteins exhibit fully eukaryotic post-translational modifications, including glycosylation and disulfide bond formation.
===== Mammalian systems =====
The most common mammalian expression systems are Chinese Hamster ovary (CHO) and Human embryonic kidney (HEK) cells.
Chinese hamster ovary cell
Mouse myeloma lymphoblastoid (e.g. NS0 cell)
Fully Human
Human embryonic kidney cells (HEK-293)
Human embryonic retinal cells (Crucell's Per.C6)
Human amniocyte cells (Glycotope and CEVEC)
=== Cell-free systems ===
Cell-free production of proteins is performed in vitro using purified RNA polymerase, ribosomes, tRNA and ribonucleotides. These reagents may be produced by extraction from cells or from a cell-based expression system. Due to the low expression levels and high cost of cell-free systems, cell-based systems are more widely used.
== See also ==
Cellosaurus, a database of cell lines
Gene expression
Single-cell protein
Protein purification
Precision fermentation
Host cell protein
List of recombinant proteins
== References ==
== Further reading ==
Higgins SJ, Hames BD (1999). Protein Expression: A Practical Approach. Oxford University Press. ISBN 978-0-19-963623-5.
Baneyx F (2004). Protein Expression Technologies: Current Status and Future Trends. Garland Science. ISBN 978-0-9545232-5-1.
== External links == | Wikipedia/Recombinant_protein |
Network equipment providers (NEPs) – sometimes called telecommunications equipment manufacturers (TEMs) – sell products and services to communication service providers such as fixed or mobile operators, as well as to enterprise customers. NEP technology allows for making calls on mobile phones, surfing the Internet, joining conference calls, and watching video on demand through IPTV (internet protocol TV). The history of the NEPs goes back to the mid-19th century, when the first telegraph networks were set up. Some of these players still exist today.
== Telecommunications equipment manufacturers ==
The terminology of the traditional telecommunications industry has rapidly evolved during the Information Age. The terms "network" and "telecoms" are often used interchangeably, and the same is true for "provider" and "manufacturer". Historically, NEPs have sold integrated hardware/software systems to carriers such as NTT DoCoMo, AT&T, Sprint, and so on. They purchase hardware from TEMs (telecom equipment manufacturers), such as Vertiv, Kontron, and NEC, to name a few. TEMs are responsible for manufacturing the hardware, devices, and equipment the telecommunications industry requires. The distinction between NEP and TEM is sometimes blurred, because all the following phrases may imply NEP:
Telecommunications equipment provider
Telecommunications equipment industry
Telecommunications equipment company
Telecommunications equipment manufacturer (TEM)
Telecommunications equipment technology
Network equipment provider (NEP)
Network equipment industry
Network equipment companies
Network equipment manufacturer
Network equipment technology
== Services ==
This is a highly competitive industry that includes telephone, cable, and data services segments. Products and services include:
Mobile networks like GSM (Global System for Mobile Communications), Enhanced Data Rates for GSM Evolution (EDGE) or GPRS (General Packet Radio Service). Networks of this kind are typically known as 2G and 2.5G networks. The 3G mobile networks are based on UMTS (Universal Mobile Telecommunications System), which allows much higher data rates than 2G or 2.5G.
Fixed networks which are typically based on PSTN (Public Switched Telephone Network).
Enterprise networks, like Unified Communication infrastructure
Internet infrastructures, like routers and switches
== Companies ==
Some providers in each customer segment are:
Majority of revenues from service providers:
Alcatel-Lucent
Ericsson
Huawei
Samsung
TP-Link
D-Link
Juniper Networks
NEC
Nokia Networks
Ciena
ZTE
Majority of revenues from enterprise customers:
Avaya
Cisco
Motorola
Unify
The NEPs have recently undergone significant consolidation and M&A activity, for example the joint venture of Nokia and Siemens (Nokia Siemens Networks), the acquisition of Marconi by Ericsson, the merger between Alcatel and Lucent, and numerous acquisitions by Cisco.
The financial performance of these players differs markedly according to the segment they serve.
== Power balance in the NEP ecosystem ==
NEPs face high pressure from old and new rivals and a stronger, more consolidated customer base.
Threat of new entrants:
The growing importance of software applications has led to the entry of new players such as system integrators (SIs) and other ISVs. (For some NEPs, SIs are considered competitors for selected network services, i.e. the application, services, and control layers of the network.)
In the area of managed and hosted services, NEPs are likely to face competition from new players like Google due to lower entry barriers
Bargaining Power of Suppliers:
Increasing standardization and commoditization of network components leads to more competition among component suppliers, thus lowering their bargaining position.
Overcapacities have led to lower bargaining power of semiconductor suppliers
As more standardized network components are expected to be used for NGNs, a shift in the current supplier structure may rebalance bargaining power between suppliers and NEPs
Bargaining Power of Buyers:
Consolidation among communication service providers due to convergence leads to greater dependence on a few large clients, which means higher bargaining strength of customers
Due to pressures on their profitability, service providers are increasingly looking at lowering their operating costs and capital expenditures (lowering cost per subscriber), and this is putting pressure on NEPs' margins.
Enterprises increasingly demand end-to-end solutions through a single vendor for their Unified Communication needs
Threat of Substitution:
Switch from PSTN to Next-Generation Network
Increasing use of standardized network components (COTS) compared to more proprietary equipment
Software to increasingly replace traditional network components
== Open Source Age ==
The SCOPE Alliance was an influential non-profit network equipment provider (NEP) industry group aimed at standardizing "carrier-grade" systems for telecom in the Information Age. It was successful in accelerating the NEP transformation towards carrier-grade open-source hardware, operating systems, middleware, virtualization, and cloud.
== NFV, SDN, 5G, Cloud transformation Age ==
From 2010 onwards, Telecom carriers (NEP customers) wanted direct involvement in driving transformation. The NEP-only SCOPE Alliance was retired, as the industry combined forces on Service Availability, ETSI Network function virtualization standardization, Software-defined networking adoption, and 5G network slicing initiatives.
== References ==
== External links ==
IBM study related to the NEP industry | Wikipedia/Network_equipment_provider |
Industrial microbiology is a branch of biotechnology that applies microbial sciences to create industrial products in mass quantities, often using microbial cell factories. There are multiple ways to manipulate a microorganism in order to increase product yields. Mutations can be introduced into an organism by exposing it to mutagens. Another way to increase production is gene amplification, achieved with plasmids and vectors: these are used to incorporate multiple copies of a specific gene so that more of the corresponding enzyme is produced, ultimately raising product yield. The manipulation of organisms to yield specific products has many real-world applications, including the production of antibiotics, vitamins, enzymes, amino acids, solvents, alcohol and daily products. Microorganisms play a big role in industry, and can be used in multiple ways. Medicinally, microbes can be used to create antibiotics for the treatment of infection. Microbes are also very useful in the food industry, where they create some of the mass-produced products consumed by people. The chemical industry uses microorganisms to synthesize amino acids and organic solvents, and in agriculture microbes can be applied as biopesticides, instead of dangerous chemicals, or as inoculants that help plant proliferation.
== Medical application ==
The medical application of industrial microbiology is the production of new drugs synthesized in a specific organism for medical purposes. Production of antibiotics is necessary for the treatment of many bacterial infections. Some naturally occurring antibiotics, and precursors of others, are produced through a process called fermentation. The microorganisms grow in a liquid medium where the population size is controlled in order to yield the greatest amount of product. In this environment, nutrients, pH, temperature, and oxygen are also controlled, to maximize the number of cells and to keep them from dying before the antibiotic of interest has been produced. Once the antibiotic is produced, it must be extracted in order to yield a marketable product.
Vitamins are also produced in massive quantities, either by fermentation or by biotransformation. Vitamin B2 (riboflavin), for example, is produced both ways. Biotransformation is mostly used for the production of riboflavin, with glucose as the carbon-source starting material, and a few strains of microorganisms have been engineered to increase the yield of riboflavin produced. The most common organism used for this reaction is Ashbya gossypii. Fermentation is another common way to produce riboflavin; the most common organism used for riboflavin production through fermentation is Eremothecium ashbyii. Once riboflavin is produced, it must be extracted from the broth: the cells are heated for a certain amount of time and then filtered out of solution, after which the riboflavin is purified and released as the final product.
Microbial biotransformation can be used to produce steroid medicaments, which can be administered either orally or by injection. Steroids play a big role in the control of arthritis: cortisone is an anti-inflammatory drug used against arthritis as well as several skin diseases. Another steroid produced this way is testosterone, which has been made from dehydroepiandrosterone using Corynebacterium species.
== Food industry application ==
=== Fermentation ===
Fermentation is a process in which sugar is converted into gases, alcohols or acids. Fermentation happens anaerobically, meaning that the microorganisms which carry it out can function without the presence of oxygen. Yeasts and bacteria are commonly used to mass-produce multiple products. Drinking alcohol, i.e. ethanol, is one such product, made by yeasts and bacteria from natural sugars like glucose; ethanol is also used as a fuel source to power automobiles.
Carbon dioxide is produced as a side product of this reaction and can be used to leaven bread and to carbonate beverages.
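The overall stoichiometry of this conversion is the classic alcoholic fermentation equation:

```latex
% One molecule of glucose yields two molecules of ethanol and two of CO2.
\mathrm{C_6H_{12}O_6 \;\longrightarrow\; 2\,C_2H_5OH + 2\,CO_2}
```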
Alcoholic beverages like beer and wine are fermented by microorganisms in the absence of oxygen.
In this process, once enough alcohol and carbon dioxide have accumulated in the medium, the yeast begin to die because the environment becomes toxic to them. Different strains of yeast and bacteria tolerate different amounts of alcohol before it becomes toxic, so different alcohol levels in beer and wine can be obtained simply by selecting a different microbial strain.
Most yeasts can tolerate between 10 and 15 percent alcohol, but some strains can tolerate up to 21 percent.
Dairy products like cheese and yogurt can also be made through fermentation using microbes.
Cheese was originally produced as a way to preserve the nutrients obtained from milk; fermentation elongates the shelf life of the product.
Microbes are used to convert the lactose sugars into lactic acid through fermentation. The bacteria used for such fermentation usually belong to the genera Lactococcus, Lactobacillus, or Streptococcus.
Sometimes these microbes are added before or after the acidification step needed for cheese production.
These microbes are also responsible for the different flavors of cheese, since they have enzymes that break down milk sugars and fats into multiple building blocks.
Some other microbes like mold may be purposely introduced during or before the aging of the cheese, in order to give it a different flavor.
The production of yogurt starts from the pasteurization of milk, where undesired microbes are reduced or eliminated.
Once the milk is pasteurized, it is ready to be processed to reduce the fat and liquid content, so that what remains is mostly solid content.
This can be done by drying the milk so that the liquid evaporates or by adding concentrated milk.
Increasing the solid content of the milk also increases the nutritional value since the nutrients are more concentrated.
After this step is accomplished, the milk is ready for fermentation: it is inoculated with bacteria in hygienic stainless-steel containers and then carefully monitored for lactic acid production, temperature and pH.
Enzymes can be produced through fermentation, either by submerged fermentation and/or by solid-state fermentation. Submerged fermentation refers to cultivation in which the microorganisms are suspended in liquid media; in this process, contact with oxygen is essential. The bioreactors (fermentors) used for such mass production can reach volumes of up to 500 cubic meters. Solid-state fermentation is less common than submerged fermentation but has several benefits: there is less need for the environment to be sterile since there is less water, and the end product has a higher stability and concentration. Insulin is also made by fermentation, using recombinant E. coli or yeast to produce human insulin, also called Humulin.
== Agriculture application ==
The demand for agricultural products is constantly increasing, and with it the need for various fertilizers and pesticides. The overuse of chemical fertilizers and pesticides has long-term effects: the soil becomes infertile and unsuitable for growing crops. Biofertilizers, biopesticides and organic farming offer alternatives.
A biopesticide is a pesticide derived from a living organism or naturally occurring substances. Biochemical pesticides produced from naturally occurring substances can control pest populations in a non-toxic manner. An example of a biochemical pesticide is garlic- and pepper-based insecticides, which work by repelling insects from the desired location. Microbial pesticides, usually a virus, bacterium, or fungus, are used to control pest populations in a more specific manner. The most commonly used microbe for the production of microbial biopesticides is Bacillus thuringiensis, also known as Bt. This spore-forming bacterium produces delta-endotoxins that cause the insect or pest to stop feeding on the crop or plant by destroying the lining of its digestive system.
== Chemical application ==
Synthesis of amino acids and organic solvents can also be carried out using microbes. Essential amino acids such as L-methionine, L-lysine and L-tryptophan, and the non-essential amino acid L-glutamic acid, are produced today mainly for the feed, food, and pharmaceutical industries. These amino acids are made by fermentation with Corynebacterium glutamicum, which has been engineered to produce L-lysine and L-glutamic acid in large quantities. Demand for L-glutamic acid is high because this amino acid is used to produce monosodium glutamate (MSG), a food flavoring agent; in 2012 the total production of L-glutamic acid was 2.2 million tons, made by a submerged fermentation technique inoculated with C. glutamicum. L-lysine was originally produced from diaminopimelic acid (DAP) by E. coli, but after C. glutamicum was discovered for the production of L-glutamic acid, this organism and other auxotrophic strains were modified to yield further amino acids such as lysine, aspartate, methionine, isoleucine and threonine. L-lysine is used in feed for pigs and chickens, as well as to treat nutrient deficiency, increase energy in a patient, and sometimes to treat viral infections. L-tryptophan is also produced through fermentation, by Corynebacterium and E. coli; though its production is not as large as that of the other amino acids, it is still produced for pharmaceutical purposes, since it can be converted into neurotransmitters.
The production of organic solvents like acetone, butanol, and isopropanol through fermentation was one of the first industrial uses of bacteria, since the necessary chirality of the products is easily achieved using living systems. Solvent fermentation uses a series of Clostridium species. At first, solvent fermentation was not as productive as it is today: the amount of bacteria required to yield a product was high, and the actual yield of product was low. Later technological advances allowed scientists to genetically alter these strains to achieve higher solvent yields. Clostridial strains were transformed to carry extra gene copies of the enzymes necessary for solvent production, and to be more tolerant of higher concentrations of the solvent being produced, since these bacteria can survive only within a certain range of product concentration before the environment becomes toxic. Developing strains that can use other substrates was another way to increase the productivity of these bacteria.
== References == | Wikipedia/Industrial_microbiology |
Biotransformation is the biochemical modification of one chemical compound or a mixture of chemical compounds. Biotransformations can be conducted with whole cells, their lysates, or purified enzymes. Increasingly, biotransformations are effected with purified enzymes. Major industries and life-saving technologies depend on biotransformations.
== Advantages and disadvantages ==
Compared to the conventional production of chemicals, biotransformations are often attractive because their selectivities can be high, limiting the formation of undesirable byproducts. Generally operating under mild temperatures and pressures in aqueous solutions, many biotransformations are "green". The catalysts, i.e. the enzymes, are amenable to improvement by genetic manipulation.
Biotransformation is usually restrained by substrate scope. Petrochemicals, for example, are often not amenable to biotransformations, especially on the scale required for some applications, e.g. fuels. Biotransformations can be slow and are often incompatible with the high temperatures employed in traditional chemical synthesis to increase rates. Enzymes are generally stable only below 100 °C, and usually at much lower temperatures. Enzymes, like other catalysts, can be poisoned. In some cases, performance or recyclability can be improved by using immobilized enzymes.
== Historical ==
Wine and beer making are examples of biotransformations that have been practiced since ancient times. Vinegar has long been produced by fermentation, involving the oxidation of ethanol to acetic acid. Cheesemaking traditionally relies on microbes to convert dairy precursors. Yogurt is produced by inoculating heat-treated milk with microorganisms such as Streptococcus thermophilus and Lactobacillus bulgaricus.
== Modern examples ==
=== Pharmaceuticals ===
Beta-lactam antibiotics, e.g. penicillin and cephalosporin, are produced by biotransformations in an industry valued at several billion dollars. Processes are conducted in vessels of up to 60,000 gal in volume. Sugars, methionine, and ammonium salts are used as carbon, sulfur, and nitrogen sources. Genetically modified Penicillium chrysogenum is employed for penicillin production.
Some steroids are hydroxylated in vitro to give drugs.
=== Sugars ===
High fructose corn syrup is generated by biotransformation of corn starch, which is converted to a mixture of glucose and fructose. Glucoamylase is one enzyme used in the process.
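The chemistry can be sketched in two steps. The text names glucoamylase for the hydrolysis; the subsequent partial isomerization of glucose to fructose is carried out industrially by glucose (xylose) isomerase.

```latex
% Step 1: enzymatic hydrolysis of starch to glucose.
\mathrm{(C_6H_{10}O_5)_n + n\,H_2O \;\xrightarrow{\text{glucoamylase}}\; n\,C_6H_{12}O_6}
% Step 2: partial enzymatic isomerization of glucose to fructose.
\mathrm{C_6H_{12}O_6\ (\text{glucose}) \;\rightleftharpoons\; C_6H_{12}O_6\ (\text{fructose})}
```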
Cyclodextrins are produced by transferases.
=== Amino acids ===
Amino acids are sometimes produced industrially by transaminases. In other cases, amino acids are obtained by biotransformations of peptides using peptidases.
=== Acrylamide ===
With acrylonitrile and water as substrates, nitrile hydratase enzymes are used to produce acrylamide, a valued monomer.
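The reaction is a single hydration of the nitrile group:

```latex
% Nitrile hydratase adds water across the C≡N bond of acrylonitrile,
% stopping at the amide rather than hydrolysing further to the acid.
\mathrm{CH_2{=}CH{-}CN + H_2O \;\xrightarrow{\text{nitrile hydratase}}\; CH_2{=}CH{-}CONH_2}
```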
=== Biofuels ===
Many kinds of fuels and lubricants are produced by processes that include biotransformations starting from natural precursors such as fats, cellulose, and sugars.
== See also ==
Biotechnology
Biodegradation
== References == | Wikipedia/Biotransformation |
Industrial gases are the gaseous materials that are manufactured for use in industry. The principal gases provided are nitrogen, oxygen, carbon dioxide, argon, hydrogen, helium and acetylene, although many other gases and mixtures are also available in gas cylinders. The industry producing these gases is also known as industrial gas, which is seen as also encompassing the supply of equipment and technology to produce and use the gases. Their production is a part of the wider chemical industry (where industrial gases are often seen as "specialty chemicals").
Industrial gases are used in a wide range of industries, which include oil and gas, petrochemicals, chemicals, power, mining, steelmaking, metals, environmental protection, medicine, pharmaceuticals, biotechnology, food, water, fertilizers, nuclear power, electronics and aerospace. Industrial gas is sold to other industrial enterprises; typically comprising large orders to corporate industrial clients, covering a size range from building a process facility or pipeline down to cylinder gas supply.
Some trade scale business is done, typically through tied local agents who are supplied wholesale. This business covers the sale or hire of gas cylinders and associated equipment to tradesmen and occasionally the general public. This includes products such as balloon helium, dispensing gases for beer kegs, welding gases and welding equipment, LPG and medical oxygen.
Retail sales of small-scale gas supply are not confined to just the industrial gas companies or their agents. A wide variety of hand-carried small gas containers, which may be called cylinders, bottles, cartridges, capsules or canisters, are available to supply LPG, butane, propane, carbon dioxide or nitrous oxide. Examples are whipped-cream chargers, powerlets, Campingaz and SodaStream.
== Early history of gases ==
The first gas from the natural environment used by humans was almost certainly air when it was discovered that blowing on or fanning a fire made it burn brighter. Humans also used the warm gases from a fire to smoke foods and steam from boiling water to cook foods.
Carbon dioxide has been known from ancient times as the byproduct of fermentation, particularly for beverages, which was first documented dating from 7000 to 6600 B.C. in Jiahu, China. Natural gas was used by the Chinese in about 500 B.C. when they discovered the potential to transport gas seeping from the ground in crude pipelines of bamboo to where it was used to boil sea water. Sulfur dioxide was used by the Romans in winemaking as it had been discovered that burning candles made of sulfur inside empty wine vessels would keep them fresh and prevent them gaining a vinegar smell.
Early understanding consisted of empirical evidence and the protoscience of alchemy; however with the advent of scientific method and the science of chemistry, these gases became positively identified and understood.
The history of chemistry tells us that a number of gases were identified and either discovered or first made in relatively pure form during the Industrial Revolution of the 18th and 19th centuries by notable chemists in their laboratories. The timeline of attributed discovery for various gases is: carbon dioxide (1754), hydrogen (1766), nitrogen (1772), nitrous oxide (1772), oxygen (1773), ammonia (1774), chlorine (1774), methane (1776), hydrogen sulfide (1777), carbon monoxide (1800), hydrogen chloride (1810), acetylene (1836), helium (1868), fluorine (1886), argon (1894), krypton, neon and xenon (1898), and radon (1899).
Carbon dioxide, hydrogen, nitrous oxide, oxygen, ammonia, chlorine, sulfur dioxide and manufactured fuel gas were already being used during the 19th century, mainly in food, refrigeration, medicine, and for fuel and gas lighting. For example, carbonated water was being made from 1772 and commercially from 1783, chlorine was first used to bleach textiles in 1785, and nitrous oxide was first used for dentistry anaesthesia in 1844. At this time gases were often generated for immediate use by chemical reactions. A notable example of a generator is Kipp's apparatus, which was invented in 1844 and could be used to generate gases such as hydrogen, hydrogen sulfide, chlorine, acetylene and carbon dioxide by simple gas-evolution reactions. Acetylene was manufactured commercially from 1893, and acetylene generators were used from about 1898 to produce gas for gas cooking and gas lighting; however, electricity took over as more practical for lighting, and once LPG was produced commercially from 1912, the use of acetylene for cooking declined.
Once gases had been discovered and produced in modest quantities, the process of industrialisation spurred on innovation and the invention of technology to produce larger quantities of these gases. Notable developments in the industrial production of gases include the electrolysis of water to produce hydrogen (in 1869) and oxygen (from 1888), the Brin process for oxygen production, invented in 1884, the chloralkali process to produce chlorine in 1892, and the Haber process to produce ammonia in 1908.
The development of uses in refrigeration also enabled advances in air conditioning and the liquefaction of gases. Carbon dioxide was first liquefied in 1823. The first vapor-compression refrigeration cycle, using ether, was invented by Jacob Perkins in 1834; a similar cycle using ammonia was invented in 1873, and another with sulfur dioxide in 1876. Liquid oxygen and liquid nitrogen were both first made in 1883; liquid hydrogen was first made in 1898, and liquid helium in 1908. LPG was first made in 1910. A patent for LNG was filed in 1914, with the first commercial production in 1917.
Although no one event marks the beginning of the industrial gas industry, many would take it to be the 1880s with the construction of the first high pressure gas cylinders. Initially cylinders were mostly used for carbon dioxide in carbonation or dispensing of beverages.
In 1895, refrigeration compression cycles were further developed to enable the liquefaction of air, most notably by Carl von Linde, allowing the production of larger quantities of oxygen. In 1896, the discovery that large quantities of acetylene could be dissolved in acetone and rendered nonexplosive allowed the safe bottling of acetylene.
A particularly important use was the development of welding and metal cutting done with oxygen and acetylene from the early 1900s.
As production processes for other gases were developed many more gases came to be sold in cylinders without the need for a gas generator.
== Gas production technology ==
Air separation plants refine air in a separation process and so allow the bulk production of nitrogen and argon in addition to oxygen - these three are often also produced as cryogenic liquid. To achieve the required low distillation temperatures, an Air Separation Unit (ASU) uses a refrigeration cycle that operates by means of the Joule–Thomson effect.
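The Joule–Thomson effect mentioned above can be stated compactly: a throttled gas changes temperature as its pressure drops at constant enthalpy, and it cools on expansion whenever the Joule–Thomson coefficient is positive, as it is for air below its inversion temperature.

```latex
% Joule–Thomson coefficient: temperature change per unit pressure drop
% at constant enthalpy; \mu_{JT} > 0 means throttling cools the gas.
\mu_{\mathrm{JT}} \;=\; \left( \frac{\partial T}{\partial P} \right)_{H}
```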
In addition to the main air gases, air separation is also the only practical source for production of the rare noble gases neon, krypton and xenon.
Cryogenic technologies also allow the liquefaction of natural gas, hydrogen and helium. In natural-gas processing, cryogenic technologies are used to remove nitrogen from natural gas in a Nitrogen Rejection Unit; a process that can also be used to produce helium from natural gas where natural gas fields contain sufficient helium to make this economic. The larger industrial gas companies have often invested in extensive patent libraries in all fields of their business, but particularly in cryogenics.
The other principal production technology in the industry is reforming. Steam reforming is a chemical process used to convert natural gas and steam into a syngas containing hydrogen and carbon monoxide, with carbon dioxide as a byproduct. Partial oxidation and autothermal reforming are similar processes, but these also require oxygen from an ASU. Synthesis gas is often a precursor to the chemical synthesis of ammonia or methanol. The carbon dioxide produced is an acid gas and is most commonly removed by amine treating. This separated carbon dioxide can potentially be sequestrated in a carbon capture reservoir or used for enhanced oil recovery.
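The overall reactions are the endothermic steam reforming step and the water-gas shift, which accounts for the carbon dioxide byproduct mentioned above:

```latex
% Steam methane reforming (endothermic, over a catalyst):
\mathrm{CH_4 + H_2O \;\longrightarrow\; CO + 3\,H_2}
% Water-gas shift (source of the CO2 byproduct):
\mathrm{CO + H_2O \;\longrightarrow\; CO_2 + H_2}
```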
Air separation and hydrogen reforming technologies are the cornerstone of the industrial gases industry and also form part of the technologies required for many fuel gasification (including IGCC), cogeneration and Fischer–Tropsch gas-to-liquids schemes. Hydrogen has many production methods and can be an almost carbon-neutral alternative fuel if produced by water electrolysis, assuming the electricity is produced in nuclear or other low-carbon power plants rather than by reforming natural gas, which is by far the dominant method. One example of displacing the use of hydrocarbons is Orkney; see hydrogen economy for more information on hydrogen's uses.
Liquid hydrogen was used by NASA as a rocket fuel for the Space Shuttle.
Simpler gas separation technologies, such as membranes or molecular sieves used in pressure swing adsorption or vacuum swing adsorption are also used to produce low purity air gases in nitrogen generators and oxygen plants. Other examples producing smaller amounts of gas are chemical oxygen generators or oxygen concentrators.
In addition to the major gases produced by air separation and syngas reforming, the industry provides many other gases. Some gases are simply byproducts from other industries and others are sometimes bought from other larger chemical producers, refined and repackaged; although a few have their own production processes. Examples are hydrogen chloride produced by burning hydrogen in chlorine, nitrous oxide produced by thermal decomposition of ammonium nitrate when gently heated, electrolysis for the production of fluorine, chlorine and hydrogen, and electrical corona discharge to produce ozone from air or oxygen.
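Two of the routes named above reduce to simple overall equations:

```latex
% Hydrogen burned in chlorine gives hydrogen chloride:
\mathrm{H_2 + Cl_2 \;\longrightarrow\; 2\,HCl}
% Gentle thermal decomposition of ammonium nitrate gives nitrous oxide:
\mathrm{NH_4NO_3 \;\xrightarrow{\;\Delta\;}\; N_2O + 2\,H_2O}
```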
Related services and technology can be supplied such as vacuum, which is often provided in hospital gas systems; purified compressed air; or refrigeration. Another unusual system is the inert gas generator. Some industrial gas companies may also supply related chemicals, particularly liquids such as bromine, hydrogen fluoride and ethylene oxide.
== Gas distribution ==
=== Mode of gas supply ===
Most materials that are gaseous at ambient temperature and pressure are supplied as compressed gas. A gas compressor is used to compress the gas into storage pressure vessels (such as gas canisters, gas cylinders or tube trailers) through piping systems. Gas cylinders are by far the most common gas storage and large numbers are produced at a "cylinder fill" facility.
However, not all industrial gases are supplied in the gaseous phase. A few gases are vapors that can be liquefied at ambient temperature under pressure alone, so they can also be supplied as a liquid in an appropriate container. This phase change also makes these gases useful as ambient refrigerants, and the most significant industrial gases with this property are ammonia (R717), propane (R290), butane (R600), and sulfur dioxide (R764). Chlorine also has this property but is too toxic, corrosive and reactive to ever have been used as a refrigerant. Some other gases exhibit this phase change if the ambient temperature is low enough; these include ethylene (R1150), carbon dioxide (R744), ethane (R170), nitrous oxide (R744A), and sulfur hexafluoride. However, these can only be liquefied under pressure if kept below their critical temperatures, which are 9 °C for C2H4; 31 °C for CO2; 32 °C for C2H6; 36 °C for N2O; and 45 °C for SF6. All of these substances are also provided as a gas (not a vapor) at the 200 bar pressure in a gas cylinder, because that pressure is above their critical pressure.
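The supply-mode logic in this paragraph can be captured in a short sketch (plain Python; the critical temperatures are the values quoted above, and the threshold test is the simplification described in the text, not an engineering rule):

```python
# Critical temperatures (degrees C) quoted in the text.
CRITICAL_TEMP_C = {
    "C2H4": 9.0,   # ethylene (R1150)
    "CO2": 31.0,   # carbon dioxide (R744)
    "C2H6": 32.0,  # ethane (R170)
    "N2O": 36.0,   # nitrous oxide (R744A)
    "SF6": 45.0,   # sulfur hexafluoride
}

def supply_mode(gas: str, ambient_c: float = 20.0) -> str:
    """Classify how a gas can be supplied at a given ambient temperature:
    below its critical temperature it can be liquefied by pressure alone."""
    t_crit = CRITICAL_TEMP_C.get(gas)
    if t_crit is None:
        return "no critical-temperature data"
    if ambient_c < t_crit:
        return "liquefiable under pressure alone at this temperature"
    return "supercritical at this temperature: supply as compressed gas"

for gas in CRITICAL_TEMP_C:
    print(f"{gas}: {supply_mode(gas)}")
```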
Permanent gases (those with a critical temperature below ambient) can only be supplied as liquid if they are also cooled. All gases can potentially be used as a refrigerant around the temperatures at which they are liquid; for example nitrogen (R728) and methane (R50) are used as refrigerant at cryogenic temperatures.
Exceptionally, carbon dioxide can be produced as a cold solid known as dry ice, which sublimes as it warms in ambient conditions; carbon dioxide cannot exist as a liquid at pressures below its triple point of 5.1 bar.
Acetylene is also supplied differently. Since it is so unstable and explosive, this is supplied as a gas dissolved in acetone within a packing mass in a cylinder. Acetylene is also the only other common industrial gas that sublimes at atmospheric pressure.
=== Gas delivery ===
The major industrial gases can be produced in bulk and delivered to customers by pipeline, but can also be packaged and transported.
Most gases are sold in gas cylinders and some sold as liquid in appropriate containers (e.g. Dewars) or as bulk liquid delivered by truck. The industry originally supplied gases in cylinders to avoid the need for local gas generation; but for large customers such as steelworks or oil refineries, a large gas production plant may be built nearby (typically called an "on-site" facility) to avoid using large numbers of cylinders manifolded together. Alternatively, an industrial gas company may supply the plant and equipment to produce the gas rather than the gas itself. An industrial gas company may also offer to act as plant operator under an operations and maintenance contract for a gases facility for a customer, since it usually has the experience of running such facilities for the production or handling of gases for itself.
Some materials are dangerous to use as a gas; for example, fluorine is highly reactive and industrial chemistry requiring fluorine often uses hydrogen fluoride (or hydrofluoric acid) instead. Another approach to overcoming gas reactivity is to generate the gas as and when required, which is done, for example, with ozone.
The delivery options are therefore local gas generation, pipelines, bulk transport (truck, rail, ship), and packaged gases in gas cylinders or other containers.
Bulk liquid gases are often transferred to end user storage tanks. Gas cylinders (and liquid gas containing vessels) are often used by end users for their own small scale distribution systems. Toxic or flammable gas cylinders are often stored by end users in gas cabinets for protection from external fire or from any leak.
=== Gas cylinder color coding ===
Despite attempts at standardization to facilitate the safety of users and first responders, no universal color code exists for industrial gas cylinders, so several color-coding standards are in use. In most developed countries, notably those of the European Union and the United Kingdom, EN 1089-3 is used, with cylinders of liquefied petroleum gas being an exception.
In the United States, no official regulation of color coding for gas cylinders exists and none is enforced.
== What defines an industrial gas ==
Industrial gases are a group of materials that are specifically manufactured for use in industry and are gaseous at ambient temperature and pressure. Each may be an elemental gas or a chemical compound, organic or inorganic, and they tend to be low-molecular-weight molecules. They can also be mixtures of individual gases. They have value as chemicals, whether as a feedstock, in process enhancement, as a useful end product, or for a particular use, as opposed to having value as a "simple" fuel.
The term "industrial gases" is sometimes narrowly defined as just the major gases sold: nitrogen, oxygen, carbon dioxide, argon, hydrogen, acetylene and helium. Many names are given to gases outside this main list by the different industrial gas companies, but generally the gases fall into the categories "specialty gases", "medical gases", "fuel gases" or "refrigerant gases". However, gases can also be known by their uses or the industries they serve, hence "welding gases" or "breathing gases", etc.; by their source, as in "air gases"; or by their mode of supply, as in "packaged gases". The major gases might also be termed "bulk gases" or "tonnage gases".
In principle, any gas or gas mixture sold by the "industrial gases industry" probably has some industrial use and might be termed an "industrial gas". In practice, an "industrial gas" is likely to be a pure compound or a mixture of precise chemical composition, supplied packaged or in relatively small quantities, with high purity or tailored to a specific use (e.g. for oxyacetylene welding).
The more significant gases are listed in the "Gases" section below.
There are cases when a gas is not usually termed an "industrial gas"; principally where the gas is processed for later use of its energy rather than manufactured for use as a chemical substance or preparation.
The oil and gas industry is seen as distinct. So, whilst natural gas is a "gas" used in "industry" (often as a fuel, sometimes as a feedstock) and in this generic sense is an "industrial gas", the term is not generally used by industrial enterprises for hydrocarbons produced by the petroleum industry directly from natural resources or in an oil refinery. Materials such as LPG and LNG are complex mixtures, often without precise chemical composition, which may also change in storage.
The petrochemical industry is also seen as distinct. So petrochemicals (chemicals derived from petroleum) such as ethylene are also generally not described as "industrial gases".
Sometimes the chemical industry is thought of as distinct from industrial gases; so materials such as ammonia and chlorine might be considered "chemicals" (especially if supplied as a liquid) instead of or sometimes as well as "industrial gases".
Small scale gas supply of hand-carried containers is sometimes not considered to be industrial gas as the use is considered personal rather than industrial; and suppliers are not always gas specialists.
These demarcations are based on perceived boundaries of these industries (although in practice there is some overlap), and an exact scientific definition is difficult. To illustrate "overlap" between industries:
Manufactured fuel gas (such as town gas) would historically have been considered an industrial gas. Syngas is often considered to be a petrochemical, although its production is a core industrial gases technology. Similarly, projects harnessing landfill gas or biogas, waste-to-energy schemes, and hydrogen production all exhibit overlapping technologies.
Helium is an industrial gas, even though its source is from natural gas processing.
Any gas is likely to be considered an industrial gas if it is put in a gas cylinder (except perhaps if it is used as a fuel).
Propane would be considered an industrial gas when used as a refrigerant, but not when used as a refrigerant in LNG production, even though this is an overlapping technology.
== Gases ==
=== Elemental gases ===
The chemical elements which are gaseous and which occur in, or can be obtained from, natural resources (without transmutation) are hydrogen, nitrogen, oxygen, fluorine and chlorine, plus the noble gases; collectively they are referred to by chemists as the "elemental gases". These elements are all primordial apart from the noble gas radon, a trace radioisotope which occurs naturally because all its isotopes are radiogenic nuclides produced by radioactive decay. These elements are all nonmetals.
(Synthetic elements have no relevance to the industrial gas industry; however, for scientific completeness, note that it has been suggested, but not scientifically proven, that the metallic elements 112 (copernicium) and 114 (flerovium) are gases.)
The elements which are stable diatomic (two-atom) homonuclear molecules at standard temperature and pressure (STP) are hydrogen (H2), nitrogen (N2) and oxygen (O2), plus the halogens fluorine (F2) and chlorine (Cl2). The noble gases are all monatomic.
In the industrial gases industry the term "elemental gases" (or sometimes less accurately "molecular gases") is used to distinguish these gases from molecules that are also chemical compounds.
Radon is chemically stable, but it is radioactive and has no stable isotope. Its most stable isotope, 222Rn, has a half-life of 3.8 days. Its uses derive from its radioactivity rather than its chemistry, and it requires specialist handling outside of industrial gas industry norms. It can, however, be produced as a by-product of processing uraniferous ores. Radon is also a trace naturally occurring radioactive material (NORM) encountered in the air processed in an air separation unit (ASU).
Chlorine is the only elemental gas that is technically a vapor, since STP is below its critical temperature; bromine and mercury are liquid at STP, so their vapors exist in equilibrium with their liquids at STP.
=== Other common industrial gases ===
This list shows the other most common gases sold by industrial gas companies.
=== Important liquefied gases ===
This list shows the most important liquefied gases:
Produced from air
liquid nitrogen (LIN)
liquid oxygen (LOX)
liquid argon (LAR)
Produced from various sources
liquid carbon dioxide
Produced from hydrocarbon feedstock
liquid hydrogen
liquid helium
Gas mixtures produced from hydrocarbon feedstock
Liquefied natural gas (LNG)
Liquefied petroleum gas (LPG)
== Industrial gas applications ==
The uses of industrial gases are diverse.
The following is a small list of areas of use:
== Companies ==
AGA AB (part of The Linde Group)
Airgas (part of Air Liquide)
Air Liquide
Air Products & Chemicals
BASF
BOC (part of The Linde Group)
Gulf Cryo
INOX Air Products (part of INOX Group)
The Linde Group (formerly Linde AG)
Messer Group
MOX-Linde Gases
Praxair (part of The Linde Group)
Pro Gases UK
Nippon Gases (part of Taiyo Nippon Sanso Corporation)
Matheson Tri-Gas (part of Taiyo Nippon Sanso Corporation)
Rotarex
== See also ==
== References ==
== External links ==
Media related to Industrial gases at Wikimedia Commons
In microeconomics, economies of scale are the cost advantages that enterprises obtain due to their scale of operation, typically measured by the amount of output produced per unit of cost (production cost). A decrease in cost per unit of output enables an increase in scale, that is, increased production with lowered cost. At the basis of economies of scale there may be technical, statistical, or organizational factors, or factors related to the degree of market control.
Economies of scale arise in a variety of organizational and business situations and at various levels, such as a production unit, a plant, or an entire enterprise. When average costs start falling as output increases, economies of scale occur. Some economies of scale, such as the capital cost of manufacturing facilities and friction loss of transportation and industrial equipment, have a physical or engineering basis. The economic concept dates back to Adam Smith and the idea of obtaining larger production returns through the use of division of labor. Diseconomies of scale are the opposite.
Economies of scale often have limits, such as passing the optimum design point where costs per additional unit begin to increase. Common limits include exceeding the nearby raw material supply, such as wood in the lumber, pulp and paper industry. A common limit for raw materials with a low cost per unit weight is saturating the regional market, so that products must be shipped uneconomic distances. Other limits include using energy less efficiently or having a higher defect rate.
Large producers are usually efficient at long runs of a product grade (a commodity) and find it costly to switch grades frequently. They will, therefore, avoid specialty grades even though they have higher margins. Often smaller (usually older) manufacturing facilities remain viable by changing from commodity-grade production to specialty products. Economies of scale must be distinguished from economies stemming from an increase in the production of a given plant. When a plant is used below its optimal production capacity, increases in its degree of utilization bring about decreases in the total average cost of production. Nicholas Georgescu-Roegen (1966) and Nicholas Kaldor (1972) both argue that these economies should not be treated as economies of scale.
== Overview ==
The simple meaning of economies of scale is doing things more efficiently with increasing size. Common sources of economies of scale are purchasing (bulk buying of materials through long-term contracts), managerial (increasing the specialization of managers), financial (obtaining lower-interest charges when borrowing from banks and having access to a greater range of financial instruments), marketing (spreading the cost of advertising over a greater range of output in media markets), and technological (taking advantage of returns to scale in the production function). Each of these factors reduces the long run average costs (LRAC) of production by shifting the short-run average total cost (SRATC) curve down and to the right.
Economies of scale is a concept that may explain patterns in international trade or in the number of firms in a given market. The exploitation of economies of scale helps explain why companies grow large in some industries. It is also a justification for free trade policies, since some economies of scale may require a larger market than is possible within a particular country—for example, it would not be efficient for Liechtenstein to have its own carmaker if they only sold to their local market. A lone carmaker may be profitable, but even more so if they exported cars to global markets in addition to selling to the local market. Economies of scale also play a role in a "natural monopoly". There is a distinction between two types of economies of scale: internal and external. An industry that exhibits an internal economy of scale is one where the costs of production fall when the number of firms in the industry drops, but the remaining firms increase their production to match previous levels. Conversely, an industry exhibits an external economy of scale when costs drop due to the introduction of more firms, thus allowing for more efficient use of specialized services and machinery.
Economies of scale exist whenever the total cost of producing two quantities of a product X is lower when a single firm instead of two separate firms produce it. See Economies of scope#Economics.
{\displaystyle TC((Q_{1}+Q_{2})X)<TC(Q_{1}X)+TC(Q_{2}X)}
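One minimal way to see this condition in action is with a hypothetical cost function that carries a fixed cost: when a single firm produces both quantities, the fixed cost is paid once rather than twice. The Python sketch below uses arbitrary illustrative figures.

# Hypothetical total cost: a fixed cost plus a constant marginal cost.
def total_cost(q: float, fixed: float = 100.0, marginal: float = 2.0) -> float:
    return fixed + marginal * q

q1, q2 = 30.0, 50.0
combined = total_cost(q1 + q2)              # one firm produces everything
separate = total_cost(q1) + total_cost(q2)  # two firms each pay the fixed cost
assert combined < separate                  # TC((Q1+Q2)X) < TC(Q1X) + TC(Q2X)
print(combined, separate)                   # 260.0 360.0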
== Determinants of economies of scale ==
=== Physical and engineering basis: economies of increased dimension ===
Some of the economies of scale recognized in engineering have a physical basis, such as the square–cube law, by which the surface of a vessel increases by the square of the dimensions while the volume increases by the cube. This law has a direct effect on the capital cost of such things as buildings, factories, pipelines, ships and airplanes.
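A minimal numerical sketch of the square–cube law follows, taking a spherical vessel and treating surface area as a proxy for material (and hence capital) cost; both the sphere and the proxy are illustrative assumptions, not claims from the sources cited here.

import math

def surface_per_unit_volume(radius: float) -> float:
    surface = 4.0 * math.pi * radius ** 2         # grows as r^2
    volume = (4.0 / 3.0) * math.pi * radius ** 3  # grows as r^3
    return surface / volume                       # equals 3/r for a sphere

for r in (1.0, 2.0, 4.0):
    # Doubling the radius halves the material needed per unit of capacity.
    print(r, surface_per_unit_volume(r))          # ~3.0, ~1.5, ~0.75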
In structural engineering, the strength of beams increases with the cube of the thickness.
Drag loss of vehicles like aircraft or ships generally increases less than proportional with increasing cargo volume, although the physical details can be quite complicated. Therefore, making them larger usually results in less fuel consumption per ton of cargo at a given speed.
Heat loss from industrial processes varies per unit of volume for pipes, tanks and other vessels in a relationship somewhat similar to the square–cube law. In some productions, an increase in the size of the plant reduces the average variable cost, thanks to the energy savings resulting from the lower dispersion of heat.
Economies of increased dimension are often misinterpreted because of the confusion between indivisibility and three-dimensionality of space. This confusion arises from the fact that three-dimensional production elements, such as pipes and ovens, once installed and operating, are always technically indivisible. However, the economies of scale due to the increase in size do not depend on indivisibility but exclusively on the three-dimensionality of space. Indeed, indivisibility only entails the existence of economies of scale produced by the balancing of productive capacities, considered above; or of increasing returns in the utilisation of a single plant, due to its more efficient use as the quantity produced increases. However, this latter phenomenon has nothing to do with the economies of scale which, by definition, are linked to the use of a larger plant.
=== Economies in holding stocks and reserves ===
At the base of economies of scale there are also returns to scale linked to statistical factors. In fact, the greater the number of resources involved, the smaller, in proportion, is the quantity of reserves necessary to cope with unforeseen contingencies (for instance, machine spare parts, inventories, circulating capital, etc.).
=== Transaction economies ===
One of the reasons firms emerge is to reduce transaction costs. A larger scale generally confers greater bargaining power over input prices, and a firm therefore benefits from pecuniary economies in purchasing raw materials and intermediate goods compared with companies that place orders for smaller amounts. In this case, we speak of pecuniary economies to highlight the fact that nothing changes from the "physical" point of view of the returns to scale. Furthermore, supply contracts entail fixed costs which lead to decreasing average costs as the scale of production increases. This is of particular importance in the study of corporate finance.
=== Economies deriving from the balancing of production capacity ===
Economies of productive capacity balancing derive from the possibility that a larger scale of production involves a more efficient use of the production capacities of the individual phases of the production process. If the inputs are indivisible and complementary, a small scale may be subject to idle times or to the underutilization of the productive capacity of some sub-processes. A higher production scale can make the different production capacities compatible. The reduction in machinery idle times is crucial where the cost of machinery is high.
=== Economies resulting from the division of labour and the use of superior techniques ===
A larger scale allows for a more efficient division of labour. The economies of division of labour derive from the increase in production speed, from the possibility of using specialized personnel and adopting more efficient techniques. An increase in the division of labour inevitably leads to changes in the quality of inputs and outputs.
=== Managerial economies ===
Many administrative and organizational activities are mostly cognitive and, therefore, largely independent of the scale of production. When the size of the company and the division of labour increase, there are a number of advantages due to the possibility of making organizational management more effective and perfecting accounting and control techniques. Furthermore, the procedures and routines that turned out to be the best can be reproduced by managers at different times and places.
=== Learning and growth economies ===
Learning and growth economies are at the base of dynamic economies of scale, associated with the process of growth of the scale dimension and not with the dimension of scale per se. Learning by doing implies improvements in the ability to perform and promotes the introduction of incremental innovations with a progressive lowering of average costs. Learning economies are directly proportional to cumulative production (the experience curve).
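The experience curve is commonly written as a power law in cumulative output. The sketch below assumes an 80% learning rate (unit cost falls to 80% of its previous level each time cumulative production doubles), a figure chosen purely for illustration and not taken from any source cited here.

import math

def unit_cost(first_unit_cost: float, cumulative_units: float,
              learning_rate: float = 0.80) -> float:
    b = -math.log(learning_rate, 2)          # progress exponent, ~0.322 here
    return first_unit_cost * cumulative_units ** (-b)

for n in (1, 2, 4, 8):
    # Each doubling of cumulative output cuts unit cost by 20%.
    print(n, round(unit_cost(100.0, n), 1))  # 100.0, 80.0, 64.0, 51.2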
Growth economies emerge if a company gains an added benefit by expanding its size. These economies are due to the presence of some resource or competence that is not fully utilized, or to the existence of specific market positions that create a differential advantage in expanding the size of the firm. Growth economies disappear once the expansion process is completed. For example, a company that owns a supermarket chain benefits from an economy of growth if, in opening a new supermarket, it gets an increase in the price of the land it owns around the new supermarket. Selling this land to economic operators who wish to open shops near the supermarket allows the company to profit from the revaluation of the building land.
=== Capital and operating cost ===
Overall costs of capital projects are known to be subject to economies of scale. A crude estimate is that if the capital cost for a given sized piece of equipment is known, changing the size will change the capital cost by the 0.6 power of the capacity ratio (the six-tenths power rule).
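A minimal sketch of the rule, assuming only the 0.6 exponent quoted above (the cost and capacity figures are hypothetical):

def scaled_capital_cost(known_cost: float, known_capacity: float,
                        new_capacity: float, exponent: float = 0.6) -> float:
    # Scale a known capital cost by the capacity ratio raised to ~0.6.
    return known_cost * (new_capacity / known_capacity) ** exponent

# Doubling capacity raises capital cost by only about 52%, not 100%.
print(scaled_capital_cost(1_000_000.0, 100.0, 200.0))  # ~1,515,717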
In estimating capital cost, it typically requires an insignificant amount of extra labor, and possibly not much more in materials, to install electrical wire or pipe of significantly greater capacity.
The cost of a unit of capacity of many types of equipment, such as electric motors, centrifugal pumps, diesel and gasoline engines, decreases as size increases. Also, the efficiency increases with size.
=== Crew size and other operating costs for ships, trains and airplanes ===
Operating crew size for ships, airplanes, trains, etc., does not increase in direct proportion to capacity. (Operating crew consists of pilots, co-pilots, navigators, etc. and does not include passenger service personnel.) Many aircraft models were significantly lengthened or "stretched" to increase payload.
Many manufacturing facilities, especially those making bulk materials like chemicals, refined petroleum products, cement and paper, have labor requirements that are not greatly influenced by changes in plant capacity. This is because labor requirements of automated processes tend to be based on the complexity of the operation rather than production rate, and many manufacturing facilities have nearly the same basic number of processing steps and pieces of equipment, regardless of production capacity.
=== Economical use of byproducts ===
Karl Marx noted that large scale manufacturing allowed economical use of products that would otherwise be waste. Marx cited the chemical industry as an example, which today along with petrochemicals, remains highly dependent on turning various residual reactant streams into salable products. In the pulp and paper industry, it is economical to burn bark and fine wood particles to produce process steam and to recover the spent pulping chemicals for conversion back to a usable form.
=== Economies of scale and the size of exporter ===
Large and more productive firms typically generate enough net revenues abroad to cover the fixed costs associated with exporting. However, in the event of trade liberalization, resources will have to be reallocated toward the more productive firm, which raises the average productivity within the industry.
Firms differ in their labor productivity and the quality of their products, so more efficient firms are more likely to generate more net income abroad and thus become exporters of their goods or services. There is a correlating relationship between a firm's total sales and its underlying efficiency: firms with higher productivity outperform firms with lower productivity, which consequently have lower sales. Through trade liberalization, organizations are able to drop their trade costs due to export growth. However, trade liberalization does not account for any tariff reduction or shipping logistics improvement. Total economies of scale depend on the individual exporter's frequency and size: large-scale companies are more likely to have a lower cost per unit than small-scale companies, and high-trade-frequency companies are able to reduce their overall cost per unit compared with low-trade-frequency companies.
== Economies of scale and returns to scale ==
Economies of scale is related to and can easily be confused with the theoretical economic notion of returns to scale. Where economies of scale refer to a firm's costs, returns to scale describe the relationship between inputs and outputs in a long-run (all inputs variable) production function. A production function has constant returns to scale if increasing all inputs by some proportion results in output increasing by that same proportion. Returns are decreasing if, say, doubling inputs results in less than double the output, and increasing if more than double the output. If a mathematical function is used to represent the production function, and if that production function is homogeneous, returns to scale are represented by the degree of homogeneity of the function. Homogeneous production functions with constant returns to scale are first degree homogeneous, increasing returns to scale are represented by degrees of homogeneity greater than one, and decreasing returns to scale by degrees of homogeneity less than one.
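In symbols, for a production function f homogeneous of degree k (taking a Cobb–Douglas form purely as an illustrative example, not one named in the text):

\[
f(tK, tL) = t^{k} f(K, L) \quad \text{for all } t > 0,
\]

with k = 1 giving constant, k > 1 increasing, and k < 1 decreasing returns to scale. For f(K, L) = K^{a}L^{b},

\[
f(tK, tL) = (tK)^{a}(tL)^{b} = t^{a+b} f(K, L),
\]

so the degree of homogeneity, and hence the returns to scale, is k = a + b.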
If the firm is a perfect competitor in all input markets, and thus the per-unit prices of all its inputs are unaffected by how much of the inputs the firm purchases, then it can be shown that at a particular level of output, the firm has economies of scale if and only if it has increasing returns to scale, has diseconomies of scale if and only if it has decreasing returns to scale, and has neither economies nor diseconomies of scale if it has constant returns to scale. In this case, with perfect competition in the output market the long-run equilibrium will involve all firms operating at the minimum point of their long-run average cost curves (i.e., at the borderline between economies and diseconomies of scale).
If, however, the firm is not a perfect competitor in the input markets, then the above conclusions are modified. For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input's per-unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts of an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range.
In essence, returns to scale refer to the variation in the relationship between inputs and output. This relationship is therefore expressed in "physical" terms. But when talking about economies of scale, the relation taken into consideration is that between the average production cost and the dimension of scale. Economies of scale therefore are affected by variations in input prices. If input prices remain the same as their quantities purchased by the firm increase, the notions of increasing returns to scale and economies of scale can be considered equivalent. However, if input prices vary in relation to their quantities purchased by the company, it is necessary to distinguish between returns to scale and economies of scale. The concept of economies of scale is more general than that of returns to scale since it includes the possibility of changes in the price of inputs when the quantity purchased of inputs varies with changes in the scale of production.
The literature assumed that due to the competitive nature of reverse auctions, and in order to compensate for lower prices and lower margins, suppliers seek higher volumes to maintain or increase the total revenue. Buyers, in turn, benefit from the lower transaction costs and economies of scale that result from larger volumes. In part as a result, numerous studies have indicated that the procurement volume must be sufficiently high to provide sufficient profits to attract enough suppliers, and provide buyers with enough savings to cover their additional costs.
However, Shalev and Asbjornse found, in their research based on 139 reverse auctions conducted in the public sector by public sector buyers, that higher auction volume, or economies of scale, did not lead to better success of the auction. They found that auction volume did not correlate with competition, nor with the number of bidders, suggesting that auction volume does not promote additional competition. They noted, however, that their data included a wide range of products and that the degree of competition in each market varied significantly, and suggested that further research should be conducted to determine whether these findings remain the same when purchasing the same product in both small and large volumes. Keeping competitive factors constant, increasing auction volume may further increase competition.
== Economies of scale in the history of economic analysis ==
=== Economies of scale in classical economists ===
The first systematic analysis of the advantages of the division of labour capable of generating economies of scale, both in a static and dynamic sense, was that contained in the famous First Book of Wealth of Nations (1776) by Adam Smith, generally considered the founder of political economy as an autonomous discipline.
John Stuart Mill, in Chapter IX of the First Book of his Principles, referring to the work of Charles Babbage (On the Economy of Machinery and Manufactures), widely analyses the relationships between increasing returns and scale of production inside the production unit.
=== Economies of scale in Marx and distributional consequences ===
In Das Kapital (1867), Karl Marx, referring to Charles Babbage, extensively analyzed economies of scale and concluded that they are one of the factors underlying the ever-increasing concentration of capital. Marx observes that in the capitalist system the technical conditions of the work process are continuously revolutionized in order to increase the surplus by improving the productive force of work. According to Marx, the cooperation of many workers brings about an economy in the use of the means of production and an increase in productivity due to the increase in the division of labour. Furthermore, the increase in the size of the machinery allows significant savings in construction, installation and operation costs. The tendency to exploit economies of scale entails a continuous increase in the volume of production which, in turn, requires a constant expansion of the size of the market. However, if the market does not expand at the same rate as production increases, overproduction crises can occur. According to Marx, the capitalist system is therefore characterized by two tendencies, connected to economies of scale: towards growing concentration and towards economic crises due to overproduction.
In his 1844 Economic and Philosophic Manuscripts, Karl Marx observes that economies of scale have historically been associated with an increasing concentration of private wealth and have been used to justify such concentration. Marx points out that concentrated private ownership of large-scale economic enterprises is a historically contingent fact, and not essential to the nature of such enterprises. In the case of agriculture, for example, Marx calls attention to the sophistical nature of the arguments used to justify the system of concentrated ownership of land:
As for large landed property, its defenders have always sophistically identified the economic advantages offered by large-scale agriculture with large-scale landed property, as if it were not precisely as a result of the abolition of property that this advantage, for one thing, received its greatest possible extension, and, for another, only then would be of social benefit.
Instead of concentrated private ownership of land, Marx recommends that economies of scale should instead be realized by associations:
Association, applied to land, shares the economic advantage of large-scale landed property, and first brings to realization the original tendency inherent in land-division, namely, equality. In the same way association re-establishes, now on a rational basis, no longer mediated by serfdom, overlordship and the silly mysticism of property, the intimate ties of man with the earth, for the earth ceases to be an object of huckstering, and through free labor and free enjoyment becomes once more a true personal property of man.
=== Economies of scale in Marshall ===
Alfred Marshall notes that Antoine Augustin Cournot and others have considered "the internal economies [...] apparently without noticing that their premises lead inevitably to the conclusion that, whatever firm first gets a good start will obtain a monopoly of the whole business of its trade … ". Marshall believes that there are factors that limit this trend toward monopoly, and in particular:
the death of the founder of the firm and the difficulty the successors may have in inheriting his or her entrepreneurial skills;
the difficulty of reaching new markets for one's goods;
the growing difficulty of being able to adapt to changes in demand and to new techniques of production;
the effects of external economies, that is, the particular type of economies of scale connected not with the production scale of an individual production unit, but with that of an entire sector.
=== Sraffa's critique ===
Piero Sraffa observes that Marshall, in order to justify the operation of the law of increasing returns without it coming into conflict with the hypothesis of free competition, tended to highlight the advantages of external economies linked to an increase in the production of an entire sector of activity. However, "those economies which are external from the point of view of the individual firm, but internal as regards the industry in its aggregate, constitute precisely the class which is most seldom to be met with." "In any case," Sraffa notes, "in so far as external economies of the kind in question exist, they are not liable to be called forth by small increases in production," as required by the marginalist theory of price. Sraffa points out that, in the equilibrium theory of individual industries, the presence of external economies cannot play an important role because this theory is based on marginal changes in the quantities produced.
Sraffa concludes that, if the hypothesis of perfect competition is maintained, economies of scale should be excluded. He then suggests the possibility of abandoning the assumption of free competition to address the study of firms that have their own particular market. This stimulated a whole series of studies on the cases of imperfect competition in Cambridge. However, in the succeeding years Sraffa followed a different path of research that led him to write and publish his main work, Production of Commodities by Means of Commodities (Sraffa 1966). In this book, Sraffa determines relative prices assuming no changes in output, so that no question arises as to the variation or constancy of returns.
=== Rule of six-tenths ===
In 1947, DuPont engineer Roger Williams Jr. published a rule of thumb that the costs of chemical processes are roughly proportional to tonnage raised to a power of about 0.6. In the following decades it became widely adopted in other engineering industries and in terrestrial mining, sometimes (e.g., in electrical power generation) with modified scaling exponents.
=== Economies of scale and the tendency towards monopoly: "Cournot's dilemma" ===
It has been noted that in many industrial sectors there are numerous companies with different sizes and organizational structures, despite the presence of significant economies of scale. This contradiction, between the empirical evidence and the logical incompatibility between economies of scale and competition, has been called the 'Cournot dilemma'. As Mario Morroni observes, Cournot's dilemma appears to be unsolvable if we only consider the effects of economies of scale on the dimension of scale. If, on the other hand, the analysis is expanded, including the aspects concerning the development of knowledge and the organization of transactions, it is possible to conclude that economies of scale do not always lead to monopoly. In fact, the competitive advantages deriving from the development of the firm's capabilities and from the management of transactions with suppliers and customers can counterbalance those provided by the scale, thus counteracting the tendency towards a monopoly inherent in economies of scale. In other words, the heterogeneity of the organizational forms and of the size of the companies operating in a sector of activity can be determined by factors regarding the quality of the products, the production flexibility, the contractual methods, the learning opportunities, the heterogeneity of preferences of customers who express a differentiated demand with respect to the quality of the product, and assistance before and after the sale. Very different organizational forms can therefore co-exist in the same sector of activity, even in the presence of economies of scale, such as, for example, flexible production on a large scale, small-scale flexible production, mass production, industrial production based on rigid technologies associated with flexible organizational systems and traditional artisan production. The considerations regarding economies of scale are therefore important, but not sufficient to explain the size of the company and the market structure. It is also necessary to take into account the factors linked to the development of capabilities and the management of transaction costs.
== External economies of scale ==
External economies of scale tend to be more prevalent than internal economies of scale. With external economies of scale, the entry of new firms benefits all existing competitors, as it creates greater competition and also reduces the average cost for all firms, whereas internal economies of scale benefit only the individual firm. Advantages that arise from external economies of scale include:
Expansion of the industry.
Benefits most or all of the firms within the industry.
Can lead to rapid growth of local governments.
== Sources ==
=== Purchasing ===
Firms are able to lower their average costs by buying their inputs required for the production process in bulk or from special wholesalers.
=== Managerial ===
Firms might be able to lower their average costs by improving their management structure within the firm, for example by hiring better-skilled or more experienced managers from the industry.
=== Technological ===
Technological advancements change production processes and subsequently reduce the overall cost per unit. Tim Hindle argues that the rollout of the internet "has completely reshaped the assumptions underlying economies of scale".
== See also ==
== Notes ==
== References ==
=== Citations ===
=== General and cited references ===
Arrow, Kenneth (1979). "The division of labor in the economy, the polity, and society". In O’Driscoll, Gerald P. Jr (ed.). Adam Smith and Modern Political Economy. Bicentennial Essays on the Wealth of Nations. Uckfield: The Iowa State University Press. pp. 153–164. ISBN 978-0813819006.
Babbage, Charles (1832). On the Economy of Machinery and Manufactures. London: Knight.
Baumol, William Jack (1961). Economic Theory and Operational Analysis (4 ed.). Englewood Cliffs, New Jersey: Prentice Hall. ISBN 9780132271240. {{cite book}}: ISBN / Date incompatibility (help)
Cournot, Antoine Augustin (1838). Recherches sur les Principes Mathématiques de la Théorie des Richesses (in French). Paris: Hachette. ISBN 978-2012871786. {{cite book}}: ISBN / Date incompatibility (help) New ed. with Appendices by Léon Walras, Joseph Bertrand and Vilfredo Pareto, Introduction and notes by Georges Lutfalla, Paris: Librairie des Sciences Politiques et Sociales Marcel Rivière, 1938. English translation: Cournot, Antoine Augustin (1927). Researches into the Mathematical Principles of the Theory of Wealth. Translated by Bacon, Nathaniel T. New York: Macmillan. Repr. New York: A.M. Kelley, 1971.
Demsetz, Harold (1995). The Economics of the Business Firm. Seven Critical Comments. Cambridge: Cambridge University Press. ISBN 0521588650. Repr. 1997.
Evangelista, Rinaldo (1999). Knowledge and Investment. The Source of Innovation in Industry. Cheltenham: Elgar.
Färe, Rolf; Grosskopf, Shawna; Lovell, C. A. Knox (June 1986). "Scale Economies and Duality". Journal of Economics. 46 (2): 175–182. doi:10.1007/BF01229228. S2CID 154480027.
Georgescu-Roegen, Nicholas (1966). Analytical Economics: Issues and Problems. Cambridge, Mass.: Harvard University Press. ISBN 9780674281639.
Hanoch, Giora (June 1975). "The Elasticity of Scale and the Shape of Average Costs". American Economic Review. 65 (3): 492–497. JSTOR 1804855.
Kaldor, Nicholas (December 1972). "The irrelevance of equilibrium economics". The Economic Journal. 82 (328): 1237–1255. doi:10.2307/2231304. JSTOR 2231304.
Levin, Richard C.; Klevorick, Alvin K.; Nelson, Richard R.; Winter, Sidney G. (1987). Baily, M.N.; Winston, C. (eds.). "Appropriating the returns from industrial research and development" (PDF). Brookings Papers on Economic Activity. 1987 (3): 783–820. doi:10.2307/2534454. JSTOR 2534454. S2CID 51821102.
Marshall, Alfred (1890). Principles of Economics (8 ed.). London: Macmillan. Repr. 1990.
Marx, Karl (1867). Das Kapital [Capital. A Critique to Political Economy]. Vol. 1. Translated by Fowkes, Ben. London: Penguin Books in association with New Left Review. Repr. 1990.
Marx, Karl (1894). Das Kapital [Capital. A Critique to Political Economy]. Vol. 3. Translated by Fernbach, David B.; introduced by Mandel, Ernest. London: Penguin Books in association with New Left Review.
Morroni, Mario (1992). Production Process and Technical Change. Cambridge: Cambridge University Press. ISBN 9780511599019.
Morroni, Mario (2006). Knowledge, Scale and Transactions in the Theory of the Firm. Cambridge: Cambridge University Press. ISBN 9781107321007. Repr. 2009.
Panzar, John; Willig, Robert D. (August 1977). "Economies of Scale in Multi-Output Production". The Quarterly Journal of Economics. 91 (3): 481–493. doi:10.2307/1885979. JSTOR 1885979.
Penrose, Edith (1959). The Theory of the Growth of the Firm (3 ed.). Oxford: Oxford University Press. ISBN 9780198289777. {{cite book}}: ISBN / Date incompatibility (help) Repr. (1997).
Pratten, Clifford Frederick (1991). The Competitiveness of Small Firms. Cambridge: Cambridge University Press.
Robinson, Austin (1958) [1931]. The Structure of Competitive Industry. Cambridge: Cambridge University Press.
Rosenberg, Nathan (1982). "Learning by using". Inside the Black Box. Technology and Economics. Cambridge: Cambridge University Press.
Scherer, F.M. (1980). Industrial Market Structure and Economic Performance (2 ed.). Chicago: Rand McNally. ISBN 9780528671029.
Scherer, F.M. (2000). "Professor Sutton's 'Technology and market structure'". The Journal of Industrial Economics. 48 (2): 215–223. doi:10.1111/1467-6451.00120.
Silvestre, Joaquim (1987). "Economies and Diseconomies of Scale". The New Palgrave: A Dictionary of Economics. Vol. 2. London: Macmillan. pp. 80–84. ISBN 978-0-333-37235-7.
Smith, Adam (1976) [1776]. An Inquiry into the Nature and Causes of the Wealth of Nations. Vol. 2. Oxford: Clarendon Press.
Sraffa, Piero (1925). "Sulle relazioni tra costo e quantità prodotta". Annali di Economia (in Italian). 2: 277–328. English translation: Sraffa, Piero (1998). "On the relations between cost and quantity produced". Italian Economic Papers. Volume III. Translated by Pasinetti, L.L. (1998 ed.). Bologna: Società Italiana degli Economisti, Oxford University Press, Il Mulino. ISBN 978-0198290346. Repr. in Kurz, H.D.; Salvadori, N. (2003). The Legacy of Piero Sraffa. Vol. 2 (2003 ed.). Cheltenham: An Elgar Reference Collection. pp. 3–43. ISBN 978-1-84064-439-5.
Sraffa, Piero (December 1926). "The law of returns under competitive conditions". The Economic Journal. 36 (144): 535–550. doi:10.2307/2959866. JSTOR 2959866. S2CID 6458099. Repr. in Kurz, H.D.; Salvadori, N. (2003). The Legacy of Piero Sraffa. Vol. 2 (2003 ed.). Cheltenham: An Elgar Reference Collection. pp. 44–59. ISBN 978-1-84064-439-5.
Sraffa, Piero (1966). Production of Commodities by Means of Commodities. Prelude to a Critique of Economic Theory. Cambridge: Cambridge University Press. ISBN 978-0521099691.
Zelenyuk, V. (2013). "A scale elasticity measure for directional distance function and its dual: Theory and DEA estimation". European Journal of Operational Research. 228 (3): 592–600. doi:10.1016/j.ejor.2013.01.012.
Zelenyuk, V. (2014). "Scale efficiency and homotheticity: equivalence of primal and dual measures". Journal of Productivity Analysis. 42 (1): 15–24. doi:10.1007/s11123-013-0361-z. S2CID 122978026.
== External links ==
Economies of Scale Definition by The Linux Information Project (LINFO)
Economies of Scale by Economics Online
In the production of phonograph records – discs that were commonly made of shellac, and later, vinyl – sound was recorded directly onto a master disc (also called the matrix, sometimes just the master) at the recording studio. From about 1950 on (earlier for some large record companies, later for some small ones) it became usual to have the performance first recorded on audio tape, which could then be processed and/or edited, and then dubbed on to the master disc.
== Background ==
The grooves are engraved into the master disc on a mastering lathe. Early versions of these master discs were soft wax, and later a harder lacquer was used.
The mastering process was originally something of an art as the operator had to manually allow for the changes in sound which affected how wide the space for the groove needed to be on each rotation. Sometimes the engineer would sign his work, or leave humorous or cryptic comments in the lead-out groove area, where it was normal to scratch or stamp identifying codes to distinguish each master.
== Mass producing ==
The original soft master, known as a "lacquer", was silvered using the same process as the silvering of mirrors. To prepare the master for making copies, soft masters made of wax were coated with fine graphite. Later masters made of lacquer were sprayed with a saponin mix, rinsed, and then sprayed with stannous chloride, which sensitized the surface. After another rinse, they were sprayed with a mix of the silver solution and dextrose reducer to create a silver coating. This coating provided the conductive layer to carry the current for the subsequent electroplating, commonly with a nickel alloy.
In the early days of microgroove records (1940–1960), nickel plating was only brief, just an hour or less. This was followed by copper plating, which was both quicker and simpler to manage at that time. Later, with the advent of nickel sulfamate plating solutions, all matrices were plated with solid nickel. Most factories transferred the master matrix, after an initial flash of nickel from a slow, warm nickel electroplating bath at around 15 amperes, to a hot 130-degree nickel plating bath. In this, the current would be raised at regular intervals until it reached between 110 A and 200 A, depending on the standard of the equipment and the skill of the operators. This and all subsequent metal copies were known as matrices.
When this metal master was removed from the lacquer (master), it would be a negative master or master matrix, since it was a negative copy of the lacquer. (In the UK, this was called the master; note the difference from soft master/lacquer disc above). In the earliest days the negative master was used as a mold to press records sold to the public, but as demand for mass production of records grew, another step was added to the process.
After removing the silver deposit and passivating (see below), the metal master was electroplated (electroformed) to create metal positive matrices, or "mothers". From these positives, stampers (negatives) would be formed. Producing mothers was similar to electroforming masters, except that the time allowed to turn up to full current was much shorter; the heavier mothers could be produced in as little as one hour, and stampers (145 grams) could be made in 45 minutes.
Prior to plating either the nickel master or nickel mother, it needed to be passivated to prevent the next matrix from adhering to it. There were several methods in use; EMI favoured the fairly difficult albumin soaking method, whereas CBS Records and Philips used the electrolytic method. Soaking in a dichromate solution was another popular method; however, it risked contaminating the nickel solution with chrome. The electrolytic method was similar to the standard electrolytic cleaning method, except that the cycles were reversed, finishing the process with the matrix as the anode. This also cleaned the surface of the matrix about to be copied. After separating from the master, a new mother was polished with a fine abrasive to remove (or at least round off) the microscopic "horns" at the top of the grooves produced by the cutting lathe. This allowed the vinyl to flow better in the pressing stage and reduced the non-fill problem.
Stampers produced from the mothers were, after separating, chrome plated to provide a hard, stain-free surface. Each stamper was next centre-punched for the pin on the playback turntable. Methods used included aligning the final locked groove over three pins, or tapping the edge while rotating under the punch until the grooves could be seen (through a microscope) to move constantly towards the centre. Both methods required considerable skill and took much effort to learn. The centre punch not only punched a hole but formed a lip which would be used to secure the stamper into the press.
The stamper was next trimmed to size, and the back sanded smooth, to ensure a smooth finish to the mouldings, and improve contact between the stamper and the press die. The edge was then pressed hydraulically to form another lip to clamp the edge down on the press. The stampers would be used in hydraulic presses to mould the LP discs. The advantages of this system over the earlier more-direct system included ability to make a large number of records quickly by using multiple stampers. Also, more records could be produced from each master since stampers would eventually get damaged, but rarely wear out.
Since the master was the unique source of the positive, made to produce the stampers, it was considered a library item. Accordingly, copy positives, required to replace worn positives, were made from unused early stampers. These were known as copy shells, and were the physical equivalent of the first positive.
The "pedigree" of any record can be traced through the positive/stamper identities used, by reading the lettering found on the record run-out area.
== Packaging and distribution ==
Singles are typically sold in plain or label-logo paper sleeves, though EPs are often given a cover in a similar style to an LP. LPs are universally packaged in paperboard covers with a paper liner (usually carrying additional artwork, photography, and/or lyrics) or a plastic or "poly-lined" paper liner protecting the delicate surface of the record. A few albums have had records packaged inside a 3 mil polyethylene plastic sleeve, either square or round-bottomed (also called U-shaped), with an accompanying 11×11 paper insert carrying the additional artwork, photography, and/or lyrics described above. The insert could be single- or double-sided, in color or grayscale, and glossy or matte.
Packaging methods have changed since the introduction of the LP record. The 'wrap-around' or 'flipback' sleeve initially became the standard packaging method for LPs during the 1950s. In this packaging method the front cover is printed in colour and laminated, whereas the back cover features only black text on a white background and is usually unlaminated. These sleeves are constructed in two parts: a laminated front section is wrapped around a separate back panel, and three 'flaps' fix the front and back panels together on the outside. As the unlaminated cardboard back section is prone to discolouration from exposure to natural light, in some instances a single printed sheet containing the back cover information is pasted over the entire back panel, covering the 'wrap-around' flaps but not reaching the outer edge of the sleeve, thus allowing some of the laminated 'flaps' to remain exposed. While discolouration still occurs with this method, it is often less evident than when the cardboard back cover alone is exposed. A common feature of flipback sleeves in the 1960s was for information specific to either the monaural or the stereo version of the record (typically a format-specific catalogue number and a "MONO" or "STEREO" disclaimer) to be printed on the same front cover artwork; the whole front panel was shifted up or down to expose the appropriate "version" on the front, while the unused one would be covered up (though often not very well) by the back cover panel.
Towards the end of the 1960s advances in printing and packaging technology led to the introduction of the 'fully laminated' sleeve. Rather than the two-part construction of the 'wrap-around' sleeve, this method consists of a single component part, which is printed in full colour and is completely laminated with the 'flaps' tucked inside the back sleeve section. This is the method generally used for all subsequent releases in the vinyl age and is considered superior not only because of the additional ease allowed in the use of a single component, but also because the fully laminated finish offers far better protection from discolouration caused by exposure to natural light.
With the advent of long-playing records, the album cover became more than just packaging and protection, and album cover art became an important part of the music marketing and consuming experience. In the 1970s it became more common to have picture covers on singles. Many singles with picture sleeves (especially from the 1960s) are sought out by collectors, and the sleeves alone can go for a high price. LPs can have embossed cover art (with some sections being raised), an effect rarely seen on CD covers. The label area on the disc itself may contain themed or custom artwork rather than the standard record company's logo layout.
Records are made at large manufacturing plants, either owned by the major labels or run by independent operators to whom smaller operations and independent labels can go for smaller runs. A band starting out might get a few hundred discs stamped, whereas big-selling artists need the presses running full-time to manufacture the hundreds of thousands of copies needed for the launch of a big album. For most bands today, using any of the large manufacturing plants, it is not cost-effective to produce fewer than one thousand records; to do so raises the cost of production almost prohibitively. The reason is that the start-up costs of making a record, as discussed earlier in this article, are high compared with the start-up costs of making, say, a compact disc.
Sometimes bands make a picture jacket for their record. Again, it is usually cost-prohibitive to make fewer than one thousand jackets. The average cost of manufacturing a 7" record with a picture jacket is approximately $2.50, at a run of one thousand records and jackets, if one uses any of the large manufacturing plants.
Records are generally sold through specialist shops, although some big chain stores also have record departments. Many records are sold from stock, but it is normal to place special orders for less common records. Stock is expensive, so only large city center stores can afford to keep several copies of a record.
While records are generally pressed on plain black vinyl, some are given a much more ornamental appearance. This can include a solid color (other than black), splatter art, a marble look, or transparency (either tinged with a color or clear). One of the best-known examples of this technique is the white vinyl repressing of The Beatles' White Album.
== Labels ==
Record companies organised their products into labels. These could either be subsidiary companies, or simply brand names. For example, EMI published records under the His Master's Voice label, its classical recording brand, and under Harvest, its progressive rock brand and home to Pink Floyd. It also had Music for Pleasure and Classics for Pleasure as economy labels, and used the Parlophone brand in the UK for Beatles records in the early 1960s.
In the 1970s successful musicians sought greater control, and one way they achieved this was with their own labels, though normally these were still operated by the large music corporations. Two of the most famous early examples were the Beatles' Apple Records and Led Zeppelin's Swan Song Records.
In the late 1970s the anarchic punk rock movement gave rise to the independent record labels. These were not owned or even distributed by the major corporations. In the UK, examples were Stiff Records, who published Ian Dury and the Blockheads, and 2 Tone Records, a label for the Specials. These allowed smaller bands to step onto the ladder without having to conform to the rigid rules of the large corporations.
== Home recording ==
One example of an "instantaneous recording" machine, available to the home recording enthusiast by about 1929 or 1930, was the "Sentinel Chromatron" machine. The Sentinel Chromatron recorded on a single side of uncoated aluminum; its records were read with a fibre needle. It was "rather unstable technology" which produced poor sound quality in comparison to shellac records and was rarely used after 1935.
RCA Victor introduced home phonograph disk recorders in October 1930. These phonographs featured a large counterbalanced tone arm with a horseshoe-magnet pick-up. Such pick-ups could also be "driven" to actually move the needle, and RCA took advantage of this by designing a system of home recording that used "pre-grooved" records. The material the records were made from (advertised as "Victrolac") was soft, and it was possible to somewhat modulate the grooves using the pick-up with a proper recording needle and a fairly heavy weight placed on the pick-up. The discs were only six inches in diameter, so recording time at 78 rpm was brief. Larger Victor blanks were introduced late in 1931, when RCA-Victor introduced the Radiola-Electrola RE-57. These machines were capable of recording at 33 1⁄3 rpm as well as 78 rpm. One could record something from the radio or record using the hand-held microphone. The RAE-59 sold for a hefty $350.00 at a time when many manufacturers had trouble finding buyers for $50.00 radios.
The home phonograph disk recorders of the 1930s were expensive machines that few could afford. Cheaper machines, such as the Wilcox-Gay Recordio line, were sold during the late 1930s through the early 1950s. They operated at 78 rpm only and were similar in appearance to (and not much larger than) a portable phonograph of the era. One 1941 model that included a radio sold for $39.95, approximately equivalent to $500 in 2005 dollars. The fidelity was adequate for clear voice recordings.
From approximately the 1940s through the 1970s, there were booths called Voice-O-Graphs that let the user record their own voice onto a record when money was inserted. These were often found at arcades and tourist attractions alongside other vending and game machines. The Empire State Building's 86th-floor observatory in New York City, Coney Island, NY, and Conneaut Lake Park, PA were among the locations with such machines. Gem Razors also created thousands of free Voice-O-Graph records during wartime for troops to send home to their families.
In the former USSR, records were commonly homemade using discarded medical X-rays. These records, nicknamed "Bones" or "Ribs" and usually produced as part of the samizdat movement, were typically inscribed with illegal copies of popular music banned by the government. They also became a popular means of distribution among Soviet punk bands; in addition to the high cost and low availability of vinyl, punk music was politically suppressed, and publishing outlets were limited.
Currently, two companies (Vestax and Vinylrecorder) offer disk recorders priced in the high four figures which enable "experienced professional users" and enthusiasts to produce high-fidelity stereo vinyl recordings. The Gakken Company in Japan also offers the Emile Berliner Gramophone Kit; while it does not cut actual records, it enables the user to physically inscribe sounds onto a CD (or any flat, smooth surface) with a needle and play them back on any similar machine.
Home recording equipment made a cameo appearance in the 1941 Marx Brothers film, The Big Store. A custom recording was also the original surprise Christmas present in the 1931 version of The Bobbsey Twins' Wonderful Secret (when the book was rewritten in 1962 as The Bobbsey Twins' Wonderful Winter Secret, it became an 8 mm movie).
== See also ==
Record production portal
== References == | Wikipedia/Production_of_phonograph_records |
An accounting network or accounting association is a professional services network whose principal purpose is to provide member firms with the resources to serve clients around the world, reducing uncertainty by bringing a greater pool of resources to bear on a problem. The networks and associations operate independently of their member firms. The largest accounting networks are known as the Big Four.
== The Big Four ==
The Big Four are the four largest professional services networks in the world: Deloitte, EY, KPMG, and PwC. They are the four largest global accounting networks as measured by revenue. The four are often grouped because they are comparable in size relative to the rest of the market, both in terms of revenue and workforce; they are considered equal in their ability to provide a wide scope of professional services to their clients; and, among those looking to start a career in professional services, particularly accounting, they are considered equally attractive networks to work in, because of the frequency with which these firms engage with Fortune 500 companies.
The Big Four all offer audit, assurance, taxation, management consulting, valuation, market research, actuarial, corporate finance, and legal services to their clients. A significant majority of the audits of public companies, as well as many audits of private companies, are conducted by these four networks.
Until the late 20th century, the market for professional services was dominated by eight networks which were nicknamed the "Big Eight". The Big Eight consisted of Arthur Andersen, Arthur Young, Coopers & Lybrand, Deloitte Haskins and Sells, Ernst & Whinney, Peat Marwick Mitchell, Price Waterhouse, and Touche Ross.
The Big Eight gradually shrank through mergers between these firms, as well as the 2002 collapse of Arthur Andersen, leaving four networks dominating the market at the turn of the 21st century. In the United Kingdom in 2011, it was reported that the Big Four audited 99% of the companies in the FTSE 100 Index and 96% of the companies in the FTSE 250 Index, an index of the leading mid-cap listed companies. Such a high level of industry concentration has caused concern, and a desire among some in the investment community for the UK's Competition & Markets Authority (CMA) to consider breaking up the Big Four. In October 2018, the CMA announced it would launch a detailed study of the Big Four's dominance of the audit sector. In July 2020, the UK Financial Reporting Council told the Big Four that they must submit plans by October 2020 to separate their audit and consultancy operations by 2024.
== History of accounting networks and associations ==
=== Foundations ===
Accounting networks were created to meet a specific need. “The accounting profession in the U.S. was built upon a state-established monopoly for audits of financial statements.” Accounting networks arose out of the necessity for public American companies to have audited financial statements for the Securities and Exchange Commission (SEC). For over 70 years, the SEC has continually sought greater coordination and consistent quality in audits everywhere in the world. Networks were the logical model to address these requirements. They expanded outside of the United States since financial results had to be audited wherever a company conducted business. In the US, the Public Company Accounting Oversight Board's (PCAOB) regulations provide for inspection of non-United States firms. Without a network with common standards and internal means of communication, conducting the required audits would not be possible.
There were other profession-based factors that favored the growth of accounting networks, and as a result of competition for audit work, consolidation was inevitable. Among these factors is the fact that a network can establish a brand. A brand establishes the credibility of the network and allows the individual members to charge more. Creating a brand is very difficult when all of the members of a network provide essentially the same services.
Being a network member establishes that the firm is part of a large group. Additionally, the larger the firm, the more likely it is to be invited to take on auditing engagements. A large organized network allows costs to be spread so as to price competitively. Ultimately, size is the only real means of differentiation readily available to accounting firms to assure clients that they can do international work.
Networks also reflect clients’ needs for seamless worldwide services, being more efficient and cost-effective. From the perspective of the accounting firm, a globally regulated organization with consistently applied standards significantly reduces risk. However, increasing the size of a network can also increase legal liability risks and create quality-control issues that have yet to be resolved.
With these factors in play, some networks continued to grow while others stagnated. Individual members of networks began to offer other services related to accounting, including forensic accounting, business appraisals, employee benefits planning, strategic planning, and almost anything associated with the financial side of a client’s business. The network’s structure easily accommodated these services and their geographical expansion.
As the Big Eight consolidated to become the Big Six, the Big Five, and then the Big Four, new networks naturally developed to emulate them. BDO and Grant Thornton were the earliest followers. Networks were then developed to serve mid-market companies and private businesses. New networks also sprang up as extensions of a single accounting firm, in the same way the Big Eight were formed. New structures were created to further extend the networks.
The largest accounting networks adopted trade names that each member used. The names of the original firms that joined the networks were dropped and replaced with these trade names, creating the perception that the networks were single entities rather than collections of completely independent firms. This was never the case. The result was the establishment of the Big Eight concept, which set the eight firms apart from all other accounting firms.
Another factor in the development of networks in accounting was the American Institute of Certified Public Accountants (AICPA)’s prohibition of advertising. While the largest firms indirectly advertised their services, the small firms complied with the rules and believed advertising to be unprofessional. Additionally, midsize firms were de facto restricted from advertising simply because of limited budgets. They could not create a brand that was able to compete with the one established by the Big Eight. The advertising restriction was lifted in the 1970s by the Federal Trade Commission.
=== Multidisciplinary expansion ===
In the 1990s, the large accounting firms reached another ceiling in the services they made available to their clients. Having reached their natural limit on growth, with more than 90% of the auditing of public companies, the Big Six branched out to become multidisciplinary, offering legal, technology, and employment services. Since the essential infrastructure was in place, it was thought to be relatively simple to incorporate other services into the existing network. As a network, it was natural to create independent entities in these other professions which could themselves be part of the network. The method and structures varied from firm to firm.
When the Big Six began their expansion into the legal profession, they were met with fierce opposition from law firms and bar associations. Commissions, panels and committees were established by legal and accounting firms to argue their positions, and government agencies were enlisted. For more than five years the debate escalated. The movement ended abruptly with the fall of Arthur Andersen as a result of its association with Enron; the Sarbanes-Oxley Act followed, which effectively ended this trend. Some international associations of independent firms, such as Alliott Group, now include law firms within their membership.
=== Global ranking ===
The largest global accounting networks are ranked by full-year 2023 revenue (where available).
== Vicarious liability ==
Accounting networks now face a new challenge that goes to the heart of the image the firms wish to project to clients. The perception has been that the Big Four, Grant Thornton and BDO are single entities that perform services around the world for clients of that single entity. Court cases have since introduced significant vicarious liability issues, requiring the networks to distance themselves from the perception of being a single entity. The Parmalat case is the best illustration of the issues.
While the firms have lost a number of cases, the facts and circumstances, or procedural elements, have reduced their actual liability.
== Networks versus associations ==
The vicarious liability issues carry over into operations. Regulations imposed in the EU require the “networks” to define whether they are "associations" of independent firms or more integrated networks operationally and financially. Additional standards on distinguishing networks from associations have been passed by the International Federation of Accountants, an independent organization representing the accounting industry. The objective in each case is to give clients an understanding of the degree of integration among the member firms. Examples of international associations of accounting firms include Alliott Group, Geneva Group International and Leading Edge Alliance.
A table captioned "Global Accountancy Associations Top 10" listed the top 10 global accounting associations in 2021.
== Conflicts of interest ==
Self-definition as a network or association may determine whether there is a conflict of interest. If the group is perceived as a network, it may be foreclosed from representing certain clients because a single entity cannot represent competitors. Association members would not be foreclosed from representation because the firms are perceived as independent by clients.
== Big 4 dominance of public company audits ==
Accounting scandals have again drawn attention to the fact that the Big Four hold a near monopoly on audits of public companies. Smaller networks are demanding auditing regulations that require auditor rotation and that include the smaller networks in this rotation. The demands also ask that mid-market firms be able to participate, to break up the monopoly of the Big Four.
== List of accounting networks and associations ==
Andersen Global
Alliott Group
Baker Tilly
BDO International (Binder Dijker Otte & Co)
Crowe Global
Deloitte (Deloitte Haskins & Sells/Deloitte, Haskins & Sells, Touche Ross, Tohmatsu)
Ernst & Young (Arthur Young, Ernst & Whinney/Ernst & Ernst, Whinney Smith & Murray)
Grant Thornton International
HLB International
Integra International Ltd
KPMG (Klynveld Main Goerdeler, Peat Marwick)
Mazars
MNP LLP
Morison KSi
Moore Global
PKF International
PwC (PricewaterhouseCoopers, Coopers & Lybrand/Cooper Brothers, Lybrand, Ross Brothers & Montgomery, Price Waterhouse)
RSM International
SW International
SMS Latinoamerica
SGA World
== See also ==
Umbrella organization
Business networking
Organization studies
Multidisciplinary professional services networks
Law firm network
Big Four accounting firms
== References == | Wikipedia/Accounting_network |
Industrial agitators are machines used to stir or mix fluids, employed in the chemical, food, pharmaceutical and cosmetic processing industries. Their uses include:
mixing liquids together
promoting the reactions of chemical substances
keeping a liquid bulk homogeneous during storage
increasing heat transfer (heating or cooling)
== Types ==
Several different kinds of industrial agitators exist:
mechanical agitators (rotating)
static agitators (pipe fitted with baffles)
rotating tank agitators (e.g., a concrete mixer)
paddle type mixers
agitators working with a pump blasting liquid
agitators working by injecting gas into the tank
The choice of the agitator depends on the phase that needs to be mixed (one or several phases): liquids only, liquid and solid, liquid and gas or liquid with solids and gas.
Depending on the type of phase and the viscosity of the bulk, the agitator may be called a mixer, a kneader, or a dough mixer, amongst other names. Agitators used in liquids can be placed on the top of the tank in a vertical position, horizontally on the side of the tank, or, less commonly, on the bottom of the tank.
== Principle of agitation ==
Agitation is achieved by movement of the heterogeneous mass (liquid-solid phase); in mechanical agitators, this is the result of the rotation of an impeller. The bulk can be composed of different substances, and the aim of the operation is to blend them or to improve the efficiency of a reaction through better contact between reactive products. Agitation may also be used to increase heat transfer or to maintain particles in suspension.
== Data of an agitator ==
The agitation of the liquid is performed by one or several agitation impellers.
Depending on its shape, the impeller can generate:
movement of the liquid, which is characterized by its velocity and direction;
turbulence, an erratic variation in space and time of the local fluid velocity;
shearing, given by a velocity gradient between two adjacent layers of fluid.
These latter two phenomena account for most of the energy consumption.
== Impellers ==
Propellers (marine or hydrofoil) have axial inlet and outlet flows, preferably directed downward; they are characterized by a strong pumping flow, low energy consumption, and low shear as well as low turbulence. (More generally, an impeller is a rotor that imparts motion to a fluid; in a pump, it is the part that produces suction.)
Turbines (flat blades or pitched blades), whose inlet flow is axial and outlet flow radial, provide shearing and turbulence, and need approximately 20 times more energy than propellers of the same diameter at the same rotation speed.
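The power draw behind this comparison can be estimated with the standard power-number correlation from mixing engineering, P = Np·ρ·N³·D⁵, valid in the turbulent regime. The following is a minimal illustrative sketch; the power numbers used are typical literature values for these impeller types, not figures taken from this article:

```python
# Illustrative sketch: shaft power of an impeller in the turbulent regime
# via the standard power-number correlation P = Np * rho * N^3 * D^5.
# The power numbers (Np) below are typical literature values (assumptions).

def shaft_power(power_number, density, speed, diameter):
    """Shaft power in watts.

    power_number: dimensionless Np for the impeller type
    density:      fluid density, kg/m^3
    speed:        rotation speed, rev/s
    diameter:     impeller diameter, m
    """
    return power_number * density * speed**3 * diameter**5

rho, n, d = 1000.0, 2.0, 0.5                 # water, 2 rev/s, 0.5 m impeller
propeller = shaft_power(0.3, rho, n, d)      # marine propeller, Np ~ 0.3
turbine = shaft_power(6.0, rho, n, d)        # flat-blade turbine, Np ~ 6

print(f"propeller: {propeller:.0f} W, turbine: {turbine:.0f} W")
print(f"ratio: {turbine / propeller:.0f}x")  # ~20x, consistent with the text
```

The fifth-power dependence on impeller diameter is why impeller sizing dominates the energy budget of an agitated tank.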
== Mechanical features ==
An agitator is composed of a drive device (motor, gear reducer, belts…), a shaft guiding system (a lantern fitted with bearings), a shaft, and impellers.
If the operating conditions involve high pressure or high temperature, the agitator must be equipped with a sealing system to keep the inside of the tank tight where the shaft passes through it.
If the shaft is long (>10 m), it can be guided by a bearing located at the bottom of the tank (a bottom bearing).
== References == | Wikipedia/Industrial_agitator |
Industrial architecture is the design and construction of buildings serving the needs of the industrial sector. Such architecture uses a variety of building designs and styles to support the safe flow, distribution and production of goods and labor. These buildings rose in importance with the Industrial Revolution, starting in Britain, and were some of the pioneering structures of modern architecture. Industrial buildings have allowed for the processing, manufacturing, distribution, and storage of goods and resources. Architects must also consider safety requirements and workflow to ensure smooth operation within the work environment the building houses.
== Industrial architect ==
Industrial architects specialize in the design and planning of industrial buildings and infrastructure. They integrate different processes, machinery, equipment and industrial building code requirements into functional industrial buildings, and follow quality standards to ensure that industrial buildings are safely built for production or human use. Industrial architects are responsible for the design and planning of markets, warehouses, factories, processing plants, power plants, commercial facilities, and similar structures.
== History ==
=== Industrial Revolution ===
Britain played an important role in the Industrial Revolution, which stimulated the expansion of trade and the distribution of goods across Europe and the Atlantic world. The technological advances from Europe later spread to the United States in the late 1700s. Samuel Slater emigrated to the United States and opened a textile mill in Rhode Island; shortly after that, the cotton gin was invented by Eli Whitney.
The first industrial buildings were built in Britain in the 1700s during the First Industrial Revolution and later inspired industrial architecture throughout the world. The First Industrial Revolution lasted from the mid-1700s to the mid-1800s; the Second Industrial Revolution that followed focused mainly on the use of new materials and the production of goods.
==== 1700s ====
The earliest industrial buildings were built at a relatively domestic scale, for instance workshops for local craftsmen.
==== 1700s–1850s ====
This time period saw the transformation of the British economy. The population of England had increased to 16 million people by around 1841, with many moving to the industrial towns of northern England. Factories had been built and factory production had become dominant, though not yet on a large scale.
=== Post-Industrial Revolution ===
Industrial architecture was born in England, and its continuing expansion was a product of the Industrial Revolution. The use and production of iron and steel became more prominent, since these materials served as the structural basis of industrial buildings. Steel is a durable material that was also used in other parts of industry, such as infrastructure, but it was difficult to make because it required high temperatures to melt the metal.
==== 1850s–1914 ====
Britain saw an increase in production during this time period. Railways played an important role in the transportation and distribution of resources throughout Europe and the United States. Industrial buildings were built at a larger scale to accommodate the large machinery used in food production, such as flour mills and breweries. With the implementation of the Planning Act of 1909, planning regulation had a significant impact on the siting and layout of industrial facilities as the industry continued to progress.
==== 1914 to present ====
As architecture modernized over the years, the more traditional industrial sites throughout Europe and the United States continued to decline. Coal mines, for instance, were characteristic of the earlier era, since coal was a raw material heavily used throughout the Industrial Revolution. Buildings continued to increase in size to accommodate mass production, and the overall design of modern-day buildings is sleeker and more spacious.
The early 20th century saw multi-story factories influenced by high land costs and the need for vertical movement of goods. However, later designs, such as the one-story factories of the World War II era, became more prevalent due to their flexibility, ease of construction, and suitability for assembly lines. These designs also focused on the well-being of workers, with features like natural light, air, and better working conditions to boost productivity.
=== The future ===
Modern industrial architecture integrates smart technology, adaptable designs, and sustainable materials. Abandoned industrial spaces are frequently transformed into residential, commercial, or mixed-use developments, supporting urban revitalization. This design style, characterized by open layouts, exposed utilities, and eco-friendly materials, is popular in both urban and suburban settings, highlighting green living and historic charm. Repurposed structures play a key role in urban renewal, revitalizing neglected areas into thriving hubs for housing, businesses, and cultural activities.
The future of industrial architecture is influenced by technological advancements such as automation, robotics, and integration of smart systems, which enhance efficiency, productivity, and safety. As manufacturing evolves, industrial buildings will continue to adapt, with a focus on sustainability and collaborative work environments.
=== Key elements of industrial buildings ===
Industrial buildings are typically characterized by large, open spaces, high ceilings, and minimal ornamentation, utilizing durable materials like concrete, brick, metal, and glass. The design prioritizes practicality, with elements like exposed structural components and raw materials. Functional principles include adaptability for changing production needs, efficient circulation, zoning for different tasks, and proper ventilation.
High ceilings
Functionality and design
Large windows
Large, open floor plans
Built to safety standards
== Types of industrial buildings ==
== References ==
== Further reading ==
Bradley, Betsy Hunter. The Works: The Industrial Architecture of the United States. New York: Oxford University Press, 1999.
Jefferies, Matthew. Politics and Culture in Wilhelmine Germany: The Case of Industrial Architecture. Washington, D.C.: Berg, 1995.
Jevremović, Ljiljana; Turnšek, Branko A. J.; Vasić, Milanka; and Jordanović, Marina. "Passive Design Applications: Industrial Architecture Perspective", Facta Universitatis Series: Architecture and Civil Engineering, Vol. 12, No. 2 (2014): 173–82.
Jones, Edgar (1985). Industrial Architecture in Britain: 1750–1939. Oxford: Facts on File. ISBN 978-0-8160-1295-4. OCLC 12286054.
McGowan, F.; Radosevic, S.; and Tunzelmann, N. von. Emerging Industrial Architecture in Europe. Hoboken: Taylor and Francis, 2004.
Pearson, Lynn (2016). Victorian and Edwardian British Industrial Architecture. Crowood Press. ISBN 978-1-78500-189-5. OCLC 959428302.
Pragnell, Hubert J. (2021) [2000]. Industrial Britain: an Architectural History. Batsford. ISBN 978-1-84994-733-6. OCLC 1259509747.
Winter, John (1970). Industrial Architecture: A Survey of Factory Building. London: Studio Vista. OCLC 473557982. | Wikipedia/Industrial_architecture |
A conglomerate is a type of multi-industry company consisting of several different and unrelated business entities that operate in various industries. A conglomerate usually has a parent company that owns and controls many subsidiaries, which are legally independent but financially and strategically dependent on the parent. Conglomerates are often large multinational corporations with a global presence and a diversified portfolio of products and services. They can be formed through mergers and acquisitions, spin-offs, or joint ventures.
Conglomerates are common in many countries and sectors, such as media, banking, energy, mining, manufacturing, retail, defense, and transportation. This type of organization aims to achieve economies of scale, market power, risk diversification, and financial synergy. However, they also face challenges such as complexity, bureaucracy, agency problems, and regulation.
The popularity of conglomerates has varied over time and across regions. In the United States, conglomerates became popular in the 1960s as a form of economic bubble driven by low interest rates and leveraged buyouts. However, many of them collapsed or were broken up in the 1980s due to poor performance, accounting scandals, and antitrust regulation. In contrast, conglomerates have remained prevalent in Asia, especially in China, Japan, South Korea, and India. In mainland China, many state-affiliated enterprises have gone through high-value mergers and acquisitions, resulting in some of the highest-value business transactions of all time. These conglomerates have strong ties with the government and enjoy preferential policies and access to capital.
== United States ==
=== The conglomerate fad of the 1960s ===
During the 1960s, the United States was caught up in a "conglomerate fad" which turned out to be a form of an economic bubble.
Due to a combination of low interest rates and a repeating bear-bull market, conglomerates were able to buy smaller companies in leveraged buyouts (sometimes at temporarily deflated values). Famous examples from the 1960s include Gulf and Western Industries, Ling-Temco-Vought, ITT Corporation, Litton Industries, Textron, and Teledyne. The trick was to look for acquisition targets with solid earnings and much lower price–earnings ratios than the acquirer. The conglomerate would make a tender offer to the target's shareholders at a princely premium to the target's current stock price. Upon obtaining shareholder approval, the conglomerate usually settled the transaction in something other than cash, like debentures, bonds, warrants or convertible debentures (issuing the latter two would effectively dilute its shareholders down the road, but many shareholders at the time were not thinking that far ahead). The conglomerate would then add the target's earnings to its earnings, thereby increasing the conglomerate's overall earnings per share. In finance jargon, the transaction was "accretive to earnings."
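A minimal worked sketch with hypothetical numbers (none drawn from the text) shows why such a stock-financed purchase of a lower-P/E target was "accretive":

```python
# Hypothetical figures illustrating 1960s-style EPS accretion.
# Acquirer trades at P/E 30; target at P/E 10; both earn $10M a year.

acq_earnings, acq_pe, acq_shares = 10_000_000, 30, 10_000_000
tgt_earnings, tgt_pe = 10_000_000, 10

acq_price = acq_earnings * acq_pe / acq_shares   # $30.00 per share
deal_value = tgt_earnings * tgt_pe * 1.25        # tender offer at a 25% premium
new_shares = deal_value / acq_price              # stock issued to target holders

eps_before = acq_earnings / acq_shares
eps_after = (acq_earnings + tgt_earnings) / (acq_shares + new_shares)
print(f"EPS before: ${eps_before:.2f}, after: ${eps_after:.2f}")
# EPS jumps from $1.00 to about $1.41 with no underlying growth; if the
# market keeps pricing the conglomerate at a P/E of 30, its share price
# rises, and the whole maneuver can be repeated on the next target.
```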
The relatively lax accounting standards of the time meant that accountants were often able to get away with creative mathematics in calculating the conglomerate's post-acquisition consolidated earnings numbers. In turn, the price of the conglomerate's stock would go up, thereby re-establishing its previous price-earnings ratio, and then it could repeat the whole process with a new target. In plain English, conglomerates were using rapid acquisitions to create the illusion of rapid growth. In 1968, the peak year of the conglomerate fad, U.S. corporations completed a record number of mergers: approximately 4,500. In that year, at least 26 of the country's 500 largest corporations were acquired, of which 12 had assets above $250 million.
All this complex company reorganization had very real consequences for people who worked for companies that were either acquired by conglomerates or were seen as likely to be acquired by them. Acquisitions were a disorienting and demoralizing experience for executives at acquired companies—those who were not immediately laid off found themselves at the mercy of the conglomerate's executives in some other distant city. Most conglomerates' headquarters were located on the West Coast or East Coast, while many of their acquisitions were located in the country's interior. Many interior cities were devastated by repeatedly losing the headquarters of corporations to mergers, in which independent ventures were reduced to subsidiaries of conglomerates based in New York or Los Angeles. Pittsburgh, for example, lost about a dozen. The terror instilled by the mere prospect of such harsh consequences for executives and their home cities meant that fending off takeovers, real or imagined, was a constant distraction for executives at all corporations seen as choice acquisition targets during this era.
The chain reaction of rapid growth through acquisitions could not last forever. When interest rates rose to offset rising inflation, conglomerate profits began to fall. The beginning of the end came in January 1968, when Litton shocked Wall Street by announcing a quarterly profit of only 21 cents per share, versus 63 cents for the previous year's quarter. This was "just a decline in earnings of about 19 percent", not an actual loss or a corporate scandal, and "yet the stock was crushed, plummeting from $90 to $53". It would take two more years before it was clear that the conglomerate fad was on its way out. The stock market eventually figured out that the conglomerates' bloated and inefficient businesses were as cyclical as any others—indeed, it was that cyclical nature that had caused such businesses to be such undervalued acquisition targets in the first place—and their descent put "the lie to the claim that diversification allowed them to ride out a downturn." A major selloff of conglomerate shares ensued. To keep going, many conglomerates were forced to shed the new businesses they had recently purchased, and by the mid-1970s most conglomerates had been reduced to shells. The conglomerate fad was subsequently replaced by newer ideas like focusing on a company's core competency and unlocking shareholder value (which often translate into spin-offs).
=== Genuine diversification ===
In other cases, conglomerates are formed for genuine interests of diversification rather than manipulation of paper return on investment. Companies with this orientation would only make acquisitions or start new branches in other sectors when they believed this would increase profitability or stability by sharing risks. Flush with cash during the 1980s, General Electric also moved into financing and financial services, which in 2005 accounted for about 45% of the company's net earnings. GE formerly owned a minority interest in NBCUniversal, which owns the NBC television network and several other cable networks. United Technologies was also a successful conglomerate until it was dismantled in the late 2010s.
=== Mutual funds ===
With the spread of mutual funds (especially index funds since 1976), investors could more easily obtain diversification by owning a small slice of many companies in a fund rather than owning shares in a conglomerate. Another example of a successful conglomerate is Warren Buffett's Berkshire Hathaway, a holding company which used surplus capital from its insurance subsidiaries to invest in businesses across a variety of industries.
== International ==
The end of the First World War caused a brief economic crisis in Weimar Germany, permitting entrepreneurs to buy businesses at rock-bottom prices. The most successful, Hugo Stinnes, established the most powerful private economic conglomerate in 1920s Europe – Stinnes Enterprises – which embraced sectors as diverse as manufacturing, mining, shipbuilding, hotels, newspapers, and other enterprises.
The best-known British conglomerate was Hanson plc. It followed a rather different timescale than the U.S. examples mentioned above, as it was founded in 1964 and ceased to be a conglomerate when it split itself into four separate listed companies between 1995 and 1997.
In Hong Kong, well-known conglomerates include:
Swire Group (founded 1816), or Swire Pacific, started by the Swire family, natives of Liverpool; it controls a wide range of businesses, including property (Swire Properties), aviation (e.g. Cathay Pacific), beverages (as a bottler of Coca-Cola), shipping and trading.
Jardine Matheson (founded 1824) operates businesses in the fields of property (Hongkong Land), finance (Jardine Lloyd Thompson), trading, retail (Dairy Farm) and hotels (e.g. Mandarin Oriental).
CK Hutchison Holdings Limited: Telecoms, Infrastructure, Ports (i.e. Hongkong International Terminals, River Trade Terminal), Health and Beauty Retail (i.e. AS Watson), Energy, Finance
The Wharf (Holdings): Telecoms (formerly i-Cable Communications), Retail, Transportation (i.e. Modern Terminals), Finance, Hotels (i.e. Marco Polo Hotels)
In Japan, a different model of conglomerate, the keiretsu, evolved. Whereas the Western model of conglomerate consists of a single corporation with multiple subsidiaries controlled by that corporation, the companies in a keiretsu are linked by interlocking shareholdings and a central role of a bank. Mitsui, Mitsubishi, Sumitomo are some of Japan's best-known keiretsu, reaching from automobile manufacturing to the production of electronics such as televisions. While not a keiretsu, Sony is an example of a modern Japanese conglomerate with operations in consumer electronics, video games, the music industry, television and film production and distribution, financial services, and telecommunications.
In China, many of the country's conglomerates are state-owned enterprises, but there is a substantial number of private conglomerates. Notable conglomerates include BYD, CIMC, China Merchants Bank, Huawei, JXD, Meizu, Ping An Insurance, TCL, Tencent, TP-Link, ZTE, Legend Holdings, Dalian Wanda Group, China Poly Group, Beijing Enterprises, and Fosun International. Fosun is currently China's largest civilian-run conglomerate by revenue.
In South Korea, the chaebol is a type of conglomerate owned and operated by a family. A chaebol is also inheritable, as most of the current presidents of chaebols succeeded their fathers or grandfathers. Some of the largest and most well-known Korean chaebols are Samsung, LG, Hyundai Kia and SK.
In India, family-owned enterprises became some of Asia's largest conglomerates, such as the Aditya Birla Group, Tata Group, Emami, Kirloskar Group, Larsen & Toubro, Mahindra Group, Bajaj Group, ITC Limited, Essar Group, Reliance Industries, Adani Group and the Bharti Enterprises.
In Brazil the largest conglomerates are J&F Investimentos, Odebrecht, Itaúsa, Camargo Corrêa, Votorantim Group, Andrade Gutierrez, and Queiroz Galvão.
In Turkey the largest conglomerates are Koç Holding, Sabancı Holding, Yıldız Holding, Çukurova Holding, Doğuş Holding, Doğan Holding.
In New Zealand, Fletcher Challenge was formed in 1981 from the merger of Fletcher Holdings, Challenge Corporation, and Tasman Pulp & Paper, in an attempt to create a New Zealand-based multi-national company. At the time, the newly merged company dealt in construction, building supplies, pulp and paper mills, forestry, and oil & gas. Following a series of bungled investments, the company demerged in the early 2000s to concentrate on building and construction.
In Pakistan, some of the examples are Adamjee Group, Dawood Hercules, House of Habib, Lakson Group and Nishat Group.
In the Philippines, the largest conglomerate of the country is the Ayala Corporation, which focuses on malls, banking, real estate development, and telecommunications. Other big conglomerates in the Philippines include JG Summit Holdings, Lopez Holdings Corporation, ABS-CBN Corporation, GMA Network, Inc., MediaQuest Holdings, TV5 Network, Inc., SM Investments Corporation, Metro Pacific Investments Corporation, and San Miguel Corporation.
In the United States, some of the examples are The Walt Disney Company, Warner Bros. Discovery and The Trump Organization (see below).
In Canada, one of the examples is Hudson's Bay Company. Another such conglomerate is J.D. Irving, Limited, which controls a large portion of the economic activities as well as media in the Province of New Brunswick.
== Advantages and disadvantages of conglomerates ==
=== Advantages ===
Diversification results in a reduction of investment risk. A downturn suffered by one subsidiary can be counterbalanced by stability, or even expansion, in another division: if Berkshire Hathaway's insurance business has a bad year, for example, the loss might be offset by a good year in its construction materials business. This advantage is enhanced by the fact that the business cycle affects industries in different ways; a brief numerical sketch follows at the end of this list.
A conglomerate creates an internal capital market if the external one is not developed enough. Through the internal market, different parts of the conglomerate allocate capital more effectively.
A conglomerate can show earnings growth, by acquiring companies whose shares are more discounted than its own. In fact, Teledyne, GE, and Berkshire Hathaway have delivered high earnings growth for a time.
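On the diversification point above, a standard back-of-the-envelope result (not taken from this article) quantifies the risk reduction: for a conglomerate made up of n equally sized business lines with uncorrelated earnings, each with earnings volatility σ, the combined earnings volatility per unit of size is

```latex
\sigma_{\text{combined}} = \frac{\sigma}{\sqrt{n}}
```

so two uncorrelated divisions cut volatility by roughly 29% and four cut it in half. In practice, business cycles are partly correlated across industries, which weakens the effect.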
=== Disadvantages ===
The extra layers of management increase costs.
Accounting disclosure is less useful; many numbers are disclosed grouped together rather than separately for each business. The complexity of a conglomerate's accounts makes them harder for managers, investors, and regulators to analyze, and makes it easier for management to hide issues.
Conglomerates can trade at a discount to the overall individual value of their businesses because investors can achieve diversification on their own simply by purchasing multiple stocks. The whole is often worth less than the sum of its parts.
Culture clashes can destroy value.
Inertia prevents the development of innovation.
Lack of focus, and inability to manage unrelated businesses equally well.
Brand dilution, where the brand loses its associations with a market segment, product area, or level of quality, price, or cachet.
Conglomerates more easily run the risk of being too big to fail.
Some cite the discounted price of conglomerate stock (a phenomenon known as the conglomerate discount) as evidence of these disadvantages, while other traders believe this tendency to be a market inefficiency which undervalues the true strength of these stocks.
== Media conglomerates ==
In her 1999 book No Logo, Naomi Klein provides several examples of mergers and acquisitions between media companies designed to create conglomerates to create synergy between them:
WarnerMedia included several tenuously linked businesses during the 1990s and 2000s, including Internet access, content, film, cable systems, and television. Their diverse portfolio of assets allowed for cross-promotion and economies of scale. However, the company has sold or spun off many of these businesses – including Warner Music Group, Warner Books, AOL, Time Warner Cable, and Time Inc. – since 2004.
Clear Channel Communications, a public company, at one point owned a variety of TV and radio stations and billboard operations, together with many concert venues across the US and a diverse portfolio of assets in the UK and other countries around the world. The concentration of bargaining power in this one entity allowed it to gain better deals for all of its business units. For example, the promise of playlisting (allegedly, sometimes, coupled with the threat of blacklisting) on its radio stations was used to secure better deals from artists performing in events organized by the entertainment division. These policies have been attacked as unfair and even monopolistic, but are a clear advantage of the conglomerate strategy. On December 21, 2005, Clear Channel completed the divestiture of Live Nation, and in 2007 the company divested their television stations to other firms, some of which Clear Channel holds a small interest in. Live Nation owns the events and concert venues previously owned by Clear Channel Communications.
Impact of conglomerates on the media: The four major media conglomerates in the United States are The Walt Disney Company, Comcast, Warner Bros. Discovery and Paramount Global. The Walt Disney Company's link with the American Broadcasting Company (ABC) created one of the largest media corporations, with revenue of roughly thirty-six billion dollars. Since Walt Disney owns ABC, it controls its news and programming. Walt Disney also acquired most of Fox for over $70 billion. When General Electric owned NBC, it did not allow negative reporting about General Electric on air (NBCUniversal is now owned by Comcast). Viacom merged with CBS in 2019 as ViacomCBS (now Paramount Global); the two companies had originally merged in 2000, with Viacom as the surviving company, before Viacom spun off CBS in 2006 due to FCC regulations at the time.
=== Internet conglomerates ===
A relatively new development, Internet conglomerates such as Alphabet, Google's parent company, belong to the modern media conglomerate group and play a major role within various industries, such as brand management. In most cases, Internet conglomerates consist of corporations that own several medium-sized online or hybrid online-offline projects. In many cases, newly joined corporations get higher returns on investment, access to business contacts, and better rates on loans from various banks.
== Food conglomerates ==
Similar to other industries, many food companies can be termed conglomerates.
The Philip Morris group, which was once the parent company of the Altria Group, Philip Morris International, and Kraft Foods, had an annual combined turnover of $80 bn. Philip Morris International and Kraft Foods were later spun off into independent companies.
Nestlé
== See also ==
== References ==
== Bibliography ==
Holland, Max (1989), When the Machine Stopped: A Cautionary Tale from Industrial America, Boston: Harvard Business School Press, ISBN 978-0-87584-208-0, OCLC 246343673.
McDonald, Paul and Wasko, Janet (2010), The Contemporary Hollywood Film Industry, Blackwell Publishing Ltd. ISBN 978-1-4051-3388-3
== External links ==
"Conglomerate". Encyclopædia Britannica. 2007. Encyclopædia Britannica Online. November 17, 2007.
Conglomerate Monkeyshines – an example of how conglomerates were used in the 1960s to manufacture earnings growth | Wikipedia/Conglomerate_(company) |
A corporate spin-off, also known as a spin-out, starburst or hive-off, is a type of corporate action where a company "splits off" a section as a separate business or creates a second incarnation, even if the first is still active. It is distinct from a sell-off, where a company sells a section to another company or firm in exchange for cash or securities.
== Characteristics ==
Spin-offs are divisions of companies or organizations that then become independent businesses with assets, employees, intellectual property, technology, or existing products that are taken from the parent company. Shareholders of the parent company receive equivalent shares in the new company in order to compensate for the loss of equity in the original stocks. However, shareholders may then buy and sell stocks from either company independently; this potentially makes investment in the companies more attractive, as potential share purchasers can invest narrowly in the portion of the business they think will have the most growth.
In contrast, divestment can also sever one business from another, but the assets are sold off rather than retained under a renamed corporate entity.
The management team of the new company often comes from the parent organization. A spin-off frequently offers a division the opportunity to be backed by the company without being affected by the parent company's image or history, making it possible to take existing ideas that had been languishing in an old environment and help them grow in a new one. Spin-offs also allow high-growth divisions, once separated from low-growth divisions, to command higher valuation multiples.
In most cases, the parent company or organization offers support doing one or more of the following:
Investing equity in the new firm
Being the first customer of the spin-off that helps create cash flow
Providing incubation space (desk, chairs, phones, Internet access, etc.)
Providing legal, finance, or technology services
All the support from the parent company is provided with the explicit purpose of helping the spin-off grow. One of the most critical antecedents of corporate spin-offs and corporate entrepreneurship is the CEO's ability to articulate a compelling vision, which can strengthen emotional bonds within the top management team and help foster such entrepreneurship.
=== U.S. Securities and Exchange Commission ===
The United States Securities and Exchange Commission's (SEC) definition of "spin-off" is more precise. Spin-offs occur when the equity owners of the parent company receive equity stakes in the newly spun off company. For example, when Agilent Technologies was spun off from Hewlett-Packard (HP) in 1999, the stockholders of HP received Agilent stock. A company not considered a spin-off in the SEC's definition (but considered by the SEC as a technology transfer or licensing of technology to the new company) may also be called a spin-off in common usage.
=== Other definitions ===
A second definition of a spin-out is a firm formed when an employee or group of employees leaves an existing entity to form an independent start-up firm. The prior employer can be a firm, a university, or another organization. Spin-outs typically operate at arm's length from the previous organizations and have independent sources of financing, products, services, customers, and other assets. In some cases, the spin-out may license technology from the parent or supply the parent with products or services; conversely, they may become competitors. Such spin-outs are important sources of technological diffusion in high-tech industries.
Terms such as hive-up, hive-down or hive-across are sometimes used for transferring a business to a parent company, a subsidiary company or a fellow subsidiary.
== Reasons for spin-offs ==
One of the main reasons for what The Economist has dubbed the 2011 "starburst revival" is that "companies seeking buyers for parts of their business are not getting good offers from other firms, or from private equity". For example, Foster's Group, an Australian beverage company, was prepared to sell its wine business. However, due to the lack of a decent offer, it decided to spin off the wine business, which is now called Treasury Wine Estates.
=== Conglomerate discount ===
According to The Economist, another driving force of the proliferation of spin-offs is what it calls the "conglomerate discount" — that "stockmarkets value a diversified group at less than the sum of its parts".
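A minimal numerical sketch, with hypothetical figures loosely patterned on the Foster's example above, shows how such a discount is measured and what a spin-off can unlock:

```python
# Hypothetical figures (not from the article) for a conglomerate discount.

wine_value = 4.0e9        # estimated standalone value of the wine business
beer_value = 6.0e9        # estimated standalone value of the beer business
group_market_cap = 8.5e9  # what the market pays for the combined group

sum_of_parts = wine_value + beer_value
discount = 1 - group_market_cap / sum_of_parts
print(f"conglomerate discount: {discount:.0%}")  # 15%
# Spinning off one division lets the market value each business on its
# own, potentially recovering the ~$1.5 billion gap.
```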
== Examples ==
Some examples of spin-offs (according to the SEC definition):
Guidant was spun off from Eli Lilly and Company in 1994, formed from Lilly's Medical Devices and Diagnostics Division.
Agilent Technologies spun off from Hewlett-Packard (HP) in 1999, formed from HP's former test-and-measurement equipment division. Later in 2014, Keysight was spun off from Agilent Technologies.
Expedia Group was spun off from Microsoft in 1999, with its eponymous subsidiary Expedia.
DreamWorks Animation was spun off from DreamWorks Pictures in 2004. In turn, DreamWorks Animation was acquired by Comcast and NBCUniversal in 2016.
Covidien was spun off from Tyco International in 2007.
TE Connectivity was spun off from Tyco International in 2007.
Cenovus Energy was spun off from Encana (now Ovintiv) in 2009.
AOL was a Time Warner spin-off in 2009; this effectively was a demerger, as AOL had previously merged into Time Warner.
Ocean Rig was spun off from DryShips in September 2011.
News Corporation's publishing operations (and its broadcasting operations in Australia) were spun off as News Corp in 2013. The previous News Corporation's remaining media properties were retained under the name 21st Century Fox. In turn, 21st Century Fox was acquired by The Walt Disney Company in 2019, but most of its broadcast and cable properties (the Fox broadcast network, Fox News Channel, Fox Business Network and Fox Sports) were spun off to the new Fox Corporation while Disney retained the film and television production units.
After being acquired by Sega, Index Corporation's video game operations were re-branded as Atlus, the name of a predecessor company, while its contents and solution businesses were spun off as a new company using the Index Corporation name in 2013.
Mallinckrodt Pharmaceuticals was spun off from Covidien in 2013.
Viacom was spun off from CBS in 1971; the two later re-merged in 2019 as ViacomCBS, now Paramount Global.
Fortive, Envista and Veralto were spun off from Danaher in 2016, 2019 and 2023 respectively.
In South Korea, the then-CJ E&M (now CJ ENM Entertainment Division) spun off its drama production and distribution division into a new subsidiary company called Studio Dragon in May 2016.
Examples following the second definition of spin-out:
Fairchild Semiconductor was a spin-out of Shockley Transistor; the founders were Shockley's "traitorous eight"
Intel was in turn a spin-out of Fairchild, as were many firms in the semiconductor industry
=== Academia ===
An example of companies created by technology transfer or licensing:
Since 1997, Oxford University Innovation has helped create more than 70 spin-out companies, and now, on average, every two months a new company is spun out of "academic research generated within and owned by the University of Oxford". Over £266 million in external investment has been raised by spin-out companies since 2000, and five are currently listed on the London Stock Exchange's Alternative Investment Market.
== See also ==
Demerger
Divestment
Equity carve-out
Stub (stock)
Successor company
== References ==
== Further reading ==
EIRMA (2003) "Innovation Through Spinning In and Out", Research Technology Management, Vol. 46, 63–64.
René Rohrbeck; Mario Döhler; Heinrich Arnold (20 April 2009). "Creating growth with externalization of R&D results—the spin‐along approach". Global Business and Organizational Excellence. 28 (4): 44–51. doi:10.1002/JOE.20267. ISSN 1932-2054. Wikidata Q104832450.
Rohrbeck, R., Hölzle K. and H. G. Gemünden (2009): "Opening up for competitive advantage: How Deutsche Telekom creates an open innovation ecosystem", R&D Management, Vol. 39, S. 420–430.
== External links == | Wikipedia/Corporate_spin-off |
A disease cluster is an unusually large aggregation of a relatively uncommon disease (medical condition) or event within a particular geographical location or period. Recognition of a cluster depends on its size being greater than would be expected by chance. Identification of a suspected disease cluster may initially depend on anecdotal evidence. Epidemiologists and biostatisticians then assess whether the suspected cluster corresponds to an actual increase of disease in the area. Typically, when clusters are recognized, they are reported to public health departments in the local area. If clusters are of sufficient size and importance, they may be re-evaluated as outbreaks.
John Snow's pioneering investigation of the 1854 cholera outbreak in Soho, London, is seen as a classic example of the study of such a cluster.
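A minimal sketch of the kind of calculation involved (an assumed approach, not one described in this article) models the expected case count in an area as a Poisson variable and asks how likely the observed count is by chance:

```python
# Sketch of a Poisson test for a suspected disease cluster.
# The counts below are invented for illustration.
from math import exp, factorial

def poisson_tail(observed, expected):
    """P(X >= observed) for X ~ Poisson(expected)."""
    below = sum(expected**k * exp(-expected) / factorial(k)
                for k in range(observed))
    return 1.0 - below

# e.g. 9 cases observed where background rates predict 3
p = poisson_tail(9, 3.0)
print(f"P(>=9 cases | 3 expected) = {p:.4f}")  # ~0.0038, unlikely by chance
```

In practice, epidemiologists must also correct for the many locations and time windows examined, since a few striking clusters are expected by chance alone.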
== See also ==
Cancer cluster
== References ==
== External links ==
Disease Clusters: An Overview, introduction to a course held by the Agency for Toxic Substances and Disease Registry | Wikipedia/Disease_cluster |
Antiviral Therapy is a peer-reviewed medical journal published by International Medical Press. It publishes primary papers and reviews on all aspects of the clinical development of antiviral drugs, including clinical trial results, drug resistance, viral diagnostics, drug safety, pharmacoepidemiology, and vaccines. Antiviral Therapy is an official publication of the International Society for Antiviral Research.
The journal was established in 1996 by Douglas D. Richman (University of California, San Diego) and Joep M.A. Lange (University of Amsterdam), who as of 2013 still served as the joint editors-in-chief. The first two issues were published by MediTech Media. The initial publication frequency was quarterly, rising to bimonthly in 2003 and to eight issues a year in 2005. The journal also publishes supplements containing abstracts from various conferences and workshops, including the International HIV Drug Resistance Workshop, the International Workshop on Adverse Drug Reactions and Lipodystrophy in HIV, and the Therapies for Viral Hepatitis Workshop.
Articles from 1998 are archived online in PDF format, with content over a year old being available for free. All online content is available free to those living in developing countries through HINARI.
== Abstracting and indexing ==
The journal is abstracted and indexed by BIOSIS Previews, Chemical Abstracts, Current Contents/Clinical Medicine, EMBASE/Excerpta Medica, MEDLINE/Index Medicus, and the Science Citation Index. According to the Journal Citation Reports, the journal has a 2014 impact factor of 3.02.
== See also ==
Antiviral Chemistry & Chemotherapy
== References ==
== External links ==
Official website | Wikipedia/Antiviral_Therapy_(journal) |
Drug repositioning (also known as drug repurposing, re-profiling, re-tasking, or therapeutic switching) is the repurposing of an approved drug for the treatment of a different disease or medical condition than that for which it was originally developed. This is one line of scientific research which is being pursued to develop safe and effective COVID-19 treatments. Other research directions include the development of a COVID-19 vaccine and convalescent plasma transfusion.
Several existing antiviral medications, previously developed or used as treatments for severe acute respiratory syndrome (SARS), Middle East respiratory syndrome (MERS), HIV/AIDS, and malaria, have been researched as potential COVID-19 treatments, with some moving into clinical trials.
In a statement to the journal Nature Biotechnology in February 2020, US National Institutes of Health Viral Ecology Unit chief Vincent Munster said, "The general genomic layout and the general replication kinetics and the biology of the MERS, SARS and [SARS-CoV-2] viruses are very similar, so testing drugs which target relatively generic parts of these coronaviruses is a logical step".
== Background ==
Outbreaks of novel emerging infections such as COVID-19 pose unique challenges for discovering treatments appropriate for clinical use, given the small amount of time available for drug discovery. Since the process of developing and licensing a new drug for COVID-19 was expected to take particularly long, researchers have been probing the existing compendium of approved antivirals and other drugs as a cost-effective interim strategy. In early 2020, hundreds of hospitals and universities began their own trials of existing safe drugs with repurposing potential against COVID-19.
Drug repurposing usually requires three steps before taking the drug across the development pipeline: recognition of the right drug; systematic evaluation of the drug effect in clinical models; and estimation of usefulness in phase II clinical trials.
One approach used in repositioning is to look for drugs that act through virus-related targets such as the RNA genome (e.g., remdesivir). Another approach concerns drugs acting through polypeptide packing (e.g., lopinavir).
The rush to publish papers about the pandemic resulted in scandals over inaccurate scientific publications. Early studies reporting the efficacy of hydroxychloroquine and remdesivir convinced drug agencies such as the Food and Drug Administration (FDA) and the European Medicines Agency to approve off-label use by issuing Emergency Use Authorizations, which were later revoked as new evidence showed these drugs had no effect on the course of COVID-19. These false-positive results can be explained in terms of the base-rate fallacy, and the rapid changes in clinical guidance on COVID-19 treatment could have been avoided if mechanistic evidence for and against repurposing candidates had been carefully assessed and standard evidence-amalgamation tools such as meta-analysis routinely applied.
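The base-rate point can be made concrete with a short calculation; the prior, power, and significance level below are assumed for illustration:

```python
# Why early repurposing "hits" were often false positives (assumed numbers).

prior = 0.05   # suppose 5% of repurposing candidates are truly effective
alpha = 0.05   # single-trial false-positive rate (significance level)
power = 0.80   # chance a trial detects a truly effective drug

p_positive = prior * power + (1 - prior) * alpha
p_effective_given_positive = prior * power / p_positive
print(f"P(effective | positive trial) = {p_effective_given_positive:.2f}")
# ~0.46: roughly half of nominally positive trials are false alarms,
# which is why mechanistic evidence and meta-analysis remain essential.
```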
== Monoclonal antibodies ==
Monoclonal antibodies under investigation for repurposing include anti-IL-6 agents (tocilizumab) and anti-IL-8 (BMS-986253). (This is in parallel to novel monoclonal antibody drugs developed specifically for COVID-19.)
Mavrilimumab is a human monoclonal antibody that inhibits the human granulocyte macrophage colony-stimulating factor receptor (GM-CSF-R). It has been studied to see if it can improve the prognosis for patients with COVID-19 pneumonia and systemic hyperinflammation. One small study indicated some beneficial effects in patients treated with mavrilimumab compared with those who were not.
In January 2021, the UK National Health Service issued guidance that the immune modulating drugs tocilizumab and sarilumab were beneficial when given promptly to people with COVID-19 admitted to intensive care, following research which found a reduction in the risk of death by 24%.
=== Tocilizumab ===
== Anticoagulants ==
Medications to prevent blood clotting have been suggested for treatment, and anticoagulant therapy with low-molecular-weight heparin appears to be associated with better outcomes in people with severe COVID-19 showing signs of coagulopathy (elevated D-dimer). Several anticoagulants have been tested in Italy, with low-molecular-weight heparin being widely used to treat patients, prompting the Italian Medicines Agency to publish guidelines on its use.
Scientists have identified an ability of heparin to bind to the spike protein of the SARS-CoV-2 virus, neutralising it, and proposed the drug as a possible antiviral.
A multicenter study on 300 patients researching the use of enoxaparin sodium at prophylactic and therapeutic dosages was announced in Italy on 14 April 2020.
The anticoagulant dipyridamole is proposed as a treatment for COVID-19, and a clinical trial is underway.
== Antidepressants ==
Many antidepressants have anti-inflammatory properties. An observational study in Paris-area hospitals found that COVID-19 patients who were already taking an antidepressant when admitted had a 44% lower risk of intubation or death. The potential mechanisms by which fluvoxamine and fluoxetine may help prevent the development of severe respiratory symptoms of COVID-19, by protecting type 2 lung alveolar cells, were summarized in a review in March 2022.
=== Fluvoxamine ===
In October 2021, the TOGETHER trial, a large clinical trial in Brazil, reported that treating high-risk outpatients with an early diagnosis of COVID-19 with 100 mg fluvoxamine twice daily for 10 days reduced the risk of hospitalization by up to about 65%. The effect was reduced to about 32% with low adherence, possibly due to intolerance. There was also a reduction in the number of deaths, by up to about 90% with high adherence. The drug was studied because of its anti-inflammatory effects, but the mechanism of action against COVID-19 remains uncertain.
On 16 December 2021, the NIH found that use of fluvoxamine did not affect the incidence of COVID-19-related hospitalizations and considered the evidence insufficient to recommend either for or against the drug.
On 23 December, based on very-low-certainty evidence, the Ontario clinical practice guideline suggested considering the drug to treat mildly ill patients within 7 days of symptom onset.
In May 2022, based on a review of the available scientific evidence, the US Food and Drug Administration (FDA) declined a request to issue an Emergency Use Authorization (EUA) for fluvoxamine to treat COVID-19, saying that the data were not sufficient to conclude that it may be effective in preventing serious illness or hospitalization in non-hospitalized people with COVID-19. University of Minnesota professor David Boulware, who filed the EUA application, said that fluvoxamine was being held to a different standard than the large pharmaceutical trials of Paxlovid, molnupiravir, and the monoclonal antibodies.
== Antioxidants ==
=== Acetylcysteine (NAC) ===
Acetylcysteine is being considered as a possible treatment for COVID-19.
== Antiparasitics ==
The idea of repurposing host-directed drugs for antiviral therapy has experienced a renaissance. In some cases the research has highlighted fundamental limitations to their use for the treatment of acute RNA virus infections. Antiparasitics that have been investigated include chloroquine, hydroxychloroquine, mefloquine, ivermectin, and atovaquone.
=== Chloroquine and hydroxychloroquine ===
=== Ivermectin ===
== Antivirals ==
Research is focused on repurposing approved antiviral drugs that have been previously developed against other viruses, such as MERS-CoV, SARS-CoV, and West Nile virus. These include favipiravir, remdesivir, ribavirin, triazavirin, and umifenovir.
The combination of artesunate/pyronaridine was found to have an inhibitory effect on SARS-CoV-2 in in vitro tests using HeLa cells, showing a virus titer inhibition rate of 99% or more after 24 hours with reduced cytotoxicity. A preprint published in July 2020 reported that pyronaridine and artesunate exhibit antiviral activity against SARS-CoV-2 and influenza viruses in human lung epithelial (Calu-3) cells. The combination is in phase II clinical trials in South Korea and South Africa.
Molnupiravir is a drug originally developed to treat influenza and now in Phase III trials as a treatment for COVID-19. In December 2020, scientists reported that molnupiravir can completely suppress SARS-CoV-2 transmission within 24 hours in ferrets, whose transmission of the virus they found to closely resemble SARS-CoV-2 spread in young-adult human populations. A clinical trial, which as of 1 October 2021 had not been peer reviewed, suggests molnupiravir taken orally can reduce the risk of hospitalization and prevent death in patients diagnosed with COVID-19; the drug needs to be given early to be effective. As of 1 January 2022, molnupiravir has been approved for emergency use against COVID-19 in the United Kingdom, India, and the United States.
Niclosamide was identified as a candidate antiviral in an in vitro drug screening assay done in South Korea.
Protease inhibitors that specifically target the viral protease 3CLpro, such as CLpro-1, GC376, and rupintrivir, are being researched and developed in the laboratory.
Coronavirus species possess an intrinsic resistance to ribavirin.
Sofosbuvir/daclatasvir is a drug combination developed to treat hepatitis C. In October 2020, a meta-analysis found a significantly lower risk of all-cause mortality with the drug combination when given to hospitalized patients.
=== Favipiravir ===
Favipiravir is an antiviral drug approved for the treatment of influenza in Japan. There is limited evidence suggesting that, compared to other antiviral drugs, favipiravir might improve outcomes for people with COVID-19, but more rigorous studies are needed before any conclusions can be drawn.
Chinese clinical trials in Wuhan and Shenzhen claimed to show that favipiravir was "clearly effective". The 35 patients in Shenzhen who received it tested negative in a median of 4 days, while the median length of illness was 11 days in the 45 patients who did not. In a study conducted in Wuhan on 240 patients with pneumonia, half were given favipiravir and half received umifenovir. The researchers found that patients recovered from coughs and fevers faster when treated with favipiravir, but that there was no change in how many patients in each group progressed to more advanced stages of illness requiring treatment with a ventilator.
On 22 March 2020, Italy approved the drug for experimental use against COVID-19 and began conducting trials in the three regions most affected by the disease. The Italian Pharmaceutical Agency reminded the public that the existing evidence in support of the drug is scant and preliminary.
On 30 May 2020, the Russian Health Ministry approved a generic version of favipiravir named Avifavir, which proved highly effective in the first phase of clinical trials.
In June 2020, India approved the use of a generic version of favipiravir called FabiFlu, developed by Glenmark Pharmaceuticals, in the treatment of mild to moderate cases of COVID-19.
On 26 May 2021, a systematic review found a 24% greater chance of clinical improvement when favipiravir was administered in the first seven days of hospitalization, but no statistically significant reduction in mortality for any of the groups, including hospitalized patients and those with mild or moderate symptoms.
=== Lopinavir/ritonavir ===
In March 2020, the main protease (3CLpro) of the SARS-CoV-2 virus was identified as a target for post-infection drugs. The enzyme is essential for processing the replication-related polyprotein. To find the enzyme, scientists used the genome published by Chinese researchers in January 2020 to isolate the main protease. Protease inhibitors approved for treating human immunodeficiency viruses (HIV) – lopinavir and ritonavir – have preliminary evidence of activity against the coronaviruses, SARS and MERS. As a potential combination therapy, they are used together in two Phase III arms of the 2020 global Solidarity project on COVID-19. A preliminary study in China of combined lopinavir and ritonavir found no effect in people hospitalized for COVID-19.
One study of lopinavir/ritonavir (Kaletra), a combination of the antivirals lopinavir and ritonavir, concluded that "no benefit was observed". The drugs were designed to inhibit HIV replication by binding to its protease. A team of researchers at the University of Colorado is trying to modify the drugs to find a compound that will bind to the protease of SARS-CoV-2. There are criticisms within the scientific community about directing resources to repurposing drugs specifically developed for HIV/AIDS, because such drugs are unlikely to be effective against a virus lacking the specific HIV-1 protease they target. The WHO included lopinavir/ritonavir in the international Solidarity trial.
On 29 June, the chief investigators of the UK RECOVERY Trial reported that there was no clinical benefit from use of lopinavir-ritonavir in 1,596 people hospitalized with severe COVID-19 infection over 28 days of treatment.
A study published in October 2020, which screened FDA-approved drugs targeting the SARS-CoV-2 spike (S) protein, proposed that the unbalanced lopinavir content of the current combination formula might in fact interfere with ritonavir's blocking of the receptor binding domain-human angiotensin converting enzyme 2 (RBD-hACE2) interaction, effectively limiting its therapeutic benefit in COVID-19 cases.
In 2022, the PANORAMIC trial is testing the effectiveness of nirmatrelvir combined with ritonavir, and of molnupiravir, in preventing hospitalization and speeding recovery for people aged over 50 and those at higher risk due to underlying health conditions. As of March 2022, it had over 16,000 participants enrolled, making it the largest study of COVID-19 antivirals.
=== Remdesivir ===
== Immunomodulatory treatments ==
=== Baricitinib ===
In May 2022, the US Food and Drug Administration (FDA) approved baricitinib for the treatment of COVID-19 in hospitalized adults requiring supplemental oxygen, non-invasive or invasive mechanical ventilation, or extracorporeal membrane oxygenation (ECMO). Baricitinib is the first immunomodulatory treatment for COVID-19 to receive FDA approval.
In the United States, baricitinib is authorized under an emergency use authorization (EUA) for the treatment of COVID-19 in hospitalized people aged 2 to less than 18 years who require supplemental oxygen, non-invasive or invasive mechanical ventilation, or extracorporeal membrane oxygenation.
== Immunosuppressants ==
=== Anakinra ===
In December 2021, anakinra (Kineret) was authorized in the European Union for the treatment of COVID-19 in adults with pneumonia requiring supplemental oxygen (low- or high-flow oxygen) who are at risk of developing severe respiratory failure, as determined by blood levels of a protein called suPAR (soluble urokinase plasminogen activator receptor) of at least 6 ng per ml.
== Interferons ==
Drugs with immune-modulating effects that may prove useful in COVID-19 treatment include type I interferons such as interferon-β and peginterferon alpha-2a and -2b.
IFN-β 1b has been shown, in an open-label randomised controlled trial, to significantly reduce viral load, alleviate symptoms, and reduce cytokine responses when given in combination with lopinavir/ritonavir and ribavirin, compared to lopinavir/ritonavir alone (Lancet 2020;395(10238):1695-1704). IFN-β will be included in the international Solidarity trial in combination with the HIV drugs lopinavir and ritonavir, as well as in the REMAP-CAP trial. Finnish biotech firm Faron Pharmaceuticals continues to develop IFN-β for ARDS and is involved in worldwide initiatives against COVID-19, including the Solidarity trial. UK biotech firm Synairgen has started conducting trials on IFN-β, a drug that was originally developed to treat COPD.
== Steroids ==
Systemic corticosteroids have a small but statistically significant beneficial effect in reducing 30-day all-cause mortality in individuals hospitalized with COVID-19.
=== Budesonide ===
Administration of inhaled budesonide early in the course of COVID-19 infection has been found to reduce the likelihood of needing urgent medical care and to shorten the time to recovery. More studies are ongoing. In April 2021, budesonide was approved by authorities in the UK for off-label use to treat COVID-19 on a case-by-case basis.
=== Ciclesonide ===
Ciclesonide, an inhaled corticosteroid for asthma, was identified as a candidate antiviral in an in vitro drug screening assay done in South Korea. It has been used for treatment of pre-symptomatic COVID-19 patients and is undergoing clinical trials.
=== Dexamethasone ===
Dexamethasone is a corticosteroid medication used for multiple conditions, such as rheumatic problems, skin diseases, asthma, and chronic obstructive lung disease, among others. A multi-center, randomized controlled trial of dexamethasone in treating acute respiratory distress syndrome (ARDS), published in February 2020, showed reduced need for mechanical ventilation and reduced mortality. Dexamethasone is only helpful in people requiring supplemental oxygen. Following an analysis of seven randomized trials, the WHO recommends the use of systemic corticosteroids in guidelines for treatment of people with severe or critical illness, and that they not be used in people who do not meet the criteria for severe illness.
On 16 June 2020, the Oxford University RECOVERY Trial issued a press release announcing preliminary results that the drug could reduce deaths by about a third in participants on ventilators and by about a fifth in participants on oxygen; it did not benefit patients who did not require respiratory support. The researchers estimated that treating 8 patients on ventilators with dexamethasone saved one life, as did treating 25 patients on oxygen. Several experts called for the full dataset to be published quickly to allow wider analysis of the results. A preprint was published on 22 June and the peer-reviewed article appeared on 17 July.
Based on those preliminary results, dexamethasone treatment has been recommended by the US National Institutes of Health (NIH) for patients with COVID-19 who are mechanically ventilated or who require supplemental oxygen but are not mechanically ventilated. The NIH recommends against using dexamethasone in patients with COVID-19 who do not require supplemental oxygen. In July 2020, the World Health Organization (WHO) stated they are in the process of updating treatment guidelines to include dexamethasone or other steroids.
The Infectious Diseases Society of America (IDSA) guideline panel suggests the use of glucocorticoids for patients with severe COVID-19; where severe is defined as patients with oxygen saturation (SpO2) ≤94% on room air, and those who require supplemental oxygen, mechanical ventilation, or extracorporeal membrane oxygenation (ECMO). The IDSA recommends against the use of glucocorticoids for those with COVID-19 without hypoxemia requiring supplemental oxygen.
In July 2020, the European Medicines Agency (EMA) started reviewing results from the RECOVERY study arm that involved the use of dexamethasone in the treatment of patients with COVID-19 admitted to the hospital to provide an opinion on the results. It focused particularly on the potential use of the drug for the treatment of adults with COVID-19.
In September 2020, the WHO released updated guidance on using corticosteroids for COVID-19. The WHO recommends systemic corticosteroids rather than no systemic corticosteroids for the treatment of people with severe and critical COVID-19 (strong recommendation, based on moderate certainty evidence). The WHO suggests not to use corticosteroids in the treatment of people with non-severe COVID-19 (conditional recommendation, based on low certainty evidence).
In September 2020, the European Medicines Agency (EMA) endorsed the use of dexamethasone in adults and adolescents (from twelve years of age and weighing at least 40 kilograms (88 lb)) who require supplemental oxygen therapy. Dexamethasone can be taken by mouth or given as an injection or infusion (drip) into a vein.
=== Hydrocortisone ===
In September 2020, a meta-analysis published by the WHO Rapid Evidence Appraisal for COVID-19 Therapies (REACT) Working Group found hydrocortisone to be effective in reducing the mortality rate of critically ill COVID-19 patients compared with usual care or placebo.
The use of corticosteroids can cause a severe and deadly "hyperinfection" syndrome for people with strongyloidiasis, which may be an underlying condition in populations exposed to the parasite Strongyloides stercoralis. This risk can be mitigated by the presumptive use of ivermectin before steroid treatment.
=== Methylprednisolone ===
In March–April 2020, a small bioinformatics company, AdvaitaBio, used its data analysis platform, iPathwayGuide, to analyze one of the first transcriptomics data sets available from COVID-19 patients. The analysis identified methylprednisolone as a drug that could potentially help patients with severe cases of the disease. The molecular data indicated that patients with severe COVID-19 suffered from cytokine storm syndrome, and also pointed to the specific pathways and mechanisms through which methylprednisolone could revert many of the important gene expression changes induced by the cytokine storm. A subsequent clinical trial undertaken in the Henry Ford Health System showed that methylprednisolone reduced mortality by approximately 44% (from 29.6% to 16.6%). These results flagrantly contradicted the recommendations of the World Health Organization, which at the time had a standing recommendation not to use systemic steroids in COVID-19 patients. This, together with the very tense scientific environment caused by the retraction of some early COVID-19-related papers, delayed the publication of the results by several months; the delay was unfortunate, since methylprednisolone is low-cost and widely available and could have prevented many thousands of deaths. Several months later, the results of the RECOVERY trial (see dexamethasone above) also showed steroids to be effective in reducing mortality, and helped change the general opinion about steroid treatments in COVID-19. The drug repurposing analysis that first proposed a steroid for severe COVID-19 was eventually published in the journal Bioinformatics. Steroids, including methylprednisolone and dexamethasone, are now part of the standard of care in severe cases of COVID-19.
For a composite end point of preventing ICU admission, need for mechanical ventilation, or mortality, the number needed to treat (NNT) to benefit a single patient was only 5 for methylprednisolone when used early in hospitalization. The NNT for methylprednisolone to avoid a death was only 8 for all hospitalized patients. This is in contrast to the RECOVERY trial (NCT04323592) for dexamethasone (see Dexamethasone above), where the NNT to prevent one death was 8 for patients on mechanical ventilation and 25 for patients needing oxygen.
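The NNT figures follow directly from the absolute risk reduction (ARR): NNT = 1/ARR. A minimal sketch using the mortality figures quoted above for methylprednisolone (29.6% vs. 16.6%); rounding up to the next whole patient is a common convention assumed here:

```python
# Number needed to treat (NNT) from the mortality figures quoted above.
# NNT = 1 / absolute risk reduction (ARR), rounded up to the next whole patient.
import math

control_mortality = 0.296   # mortality without methylprednisolone (from the text)
treated_mortality = 0.166   # mortality with methylprednisolone (from the text)

arr = control_mortality - treated_mortality   # 0.130 absolute risk reduction
rrr = arr / control_mortality                 # ≈ 0.44 relative risk reduction
nnt = math.ceil(1 / arr)                      # ≈ 8 patients treated per death avoided

print(f"ARR = {arr:.3f}, RRR = {rrr:.0%}, NNT = {nnt}")
```

The computed relative risk reduction (≈44%) and NNT (8) reproduce the figures reported in the text.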
== Vitamins ==
=== Vitamin C ===
Supplementation with vitamin C has been suggested as part of the supportive management of COVID-19, as serum levels of the vitamin are depleted in the acute stage of infection owing to increased metabolic demands. In April 2021, the US National Institutes of Health (NIH) COVID-19 Treatment Guidelines stated that "there are insufficient data to recommend either for or against the use of vitamin C for the prevention or treatment of COVID-19." In an update posted December 2022, the NIH position was unchanged:
There is insufficient evidence for the COVID-19 Treatment Guidelines Panel (the Panel) to recommend either for or against the use of vitamin C for the treatment of COVID-19 in nonhospitalized patients.
There is insufficient evidence for the Panel to recommend either for or against the use of vitamin C for the treatment of COVID-19 in hospitalized patients.
Three meta-analyses of people hospitalized with severe COVID-19, with a high overlap in the clinical trials included, reported a significant reduction in the risk of all-cause, in-hospital mortality with the administration of vitamin C relative to no vitamin C. There were no significant differences in ventilation incidence, hospitalization duration, or length of intensive care unit stay between the two groups. The majority of the trials used intravenous administration of the vitamin. Acute kidney injury was less frequent in people treated with vitamin C, and there were no differences in the frequency of other adverse events attributable to the vitamin. All three journal articles concluded that further large-scale studies are needed to confirm the mortality benefit before updated guidelines and recommendations are issued.
=== Vitamin D ===
During the COVID-19 pandemic, there has been interest in vitamin D status and supplements, given the significant overlap in the risk factors for severe COVID-19 and vitamin D deficiency. These include obesity, older age, and Black or Asian ethnic origin, and it is notable that vitamin D deficiency is particularly common within these groups.
The National Institutes of Health (NIH) COVID-19 Treatment Guidelines states "there is insufficient evidence to recommend either for or against the use of vitamin D for the prevention or treatment of COVID-19."
The general recommendation to consider taking vitamin D supplements, particularly given the levels of vitamin D deficiency in Western populations, has been repeated. As of February 2021, the English National Institute for Health and Care Excellence (NICE) continued to recommend small doses of supplementary vitamin D for people with little exposure to sunshine, but recommended that practitioners should not offer a vitamin D supplement solely to prevent or treat COVID-19, except as part of a clinical trial.
Multiple studies have reported links between pre-existing vitamin D deficiency and the severity of the disease. Several systematic reviews and meta-analyses of these show that vitamin D deficiency may be associated with a higher probability of becoming infected with COVID-19, and demonstrate significant associations between deficiency and greater disease severity, including relative increases in hospitalization and mortality rates of about 80%. The quality of some of the included studies, and whether this demonstrates a causal relationship, has been questioned.
Many clinical trials are underway or have been completed assessing the use of oral vitamin D and its metabolites such as calcifediol for prevention or treatment of COVID-19 infection, especially in people with vitamin D deficiency.
The effects of oral vitamin D supplementation on the need for intensive care unit (ICU) admission and on mortality in hospitalized COVID-19 patients have been the subject of a meta-analysis. A much lower ICU admission rate was found in patients who received vitamin D supplementation, at only 36% of the rate seen in patients without supplementation (p < 0.0001). No significant effect on mortality was found in this meta-analysis. The certainty of these analyses is limited by heterogeneity among the studies, which include both vitamin D3 (cholecalciferol) and calcifediol, but the findings indicate a potential role in reducing COVID-19 severity, with more robust data required to substantiate any effect on mortality.
Calcifediol, which is 25-hydroxyvitamin D, is more quickly activated, and has been used in several trials. Review of the published results suggests that calcifediol supplementation may have a protective effect on the risk of ICU admissions in COVID-19 patients.
== Minerals ==
=== Zinc ===
The National Institutes of Health (NIH) COVID-19 Treatment Guidelines states "there is insufficient evidence to recommend either for or against the use of zinc for the treatment of COVID-19" and that "the Panel recommends against using zinc supplementation above the recommended dietary allowance for the prevention of COVID-19, except in a clinical trial (BIII)."
== Others ==
Antibiotics: Some antibiotics have been identified as potentially repurposable as COVID-19 treatments, including:
Broad-spectrum antibiotics: In 2021, the importance of drug repurposing for COVID-19 drew attention to broad-spectrum therapeutics, which are effective against multiple types of pathogens. Such drugs have been suggested as potential emergency treatments for future pandemics.
Teicoplanin
Oritavancin
Dalbavancin
Monensin
Azithromycin
Bucillamine: On 31 July 2020, the U.S. Food and Drug Administration (FDA) authorized Revive Therapeutics to proceed with a randomized, double-blind, placebo-controlled confirmatory Phase III clinical trial protocol to evaluate the safety and efficacy of the antirheumatic agent bucillamine in patients with mild-moderate COVID-19.
Clofoctol, a bacteriostatic antibiotic, has been proposed as a treatment for COVID-19. A study in mice showed that clofoctol blocks the replication of SARS-CoV-2.
Colchicine: Researchers from the Montreal Heart Institute in Canada are studying the role of colchicine in reducing inflammation and pulmonary complications in patients with mild symptoms of COVID-19. The study, named COLCORONA, was recruiting 6,000 adults aged 40 and older who were diagnosed with COVID-19 and experienced mild symptoms not requiring hospitalization. Women who were pregnant or breastfeeding, or who did not have an effective contraceptive method, were not eligible. The trial results are favorable but inconclusive.
Fenofibrate and bezafibrate have been suggested for treatment of life-threatening symptoms of COVID-19. In an Israeli study, fenofibrate also lowered markers of severe progressive inflammation in hospitalized COVID-19 patients within 48 hours of treatment, showing promising results by interfering with how the coronavirus reproduces.
nanoFenretinide is a nanoparticle-sized formulation of fenretinide, a repurposed oncology drug approved to enter the clinic for a lymphoma indication. It was identified as a candidate antiviral in an in vitro drug screening assay done in South Korea. Fenretinide's clinical safety profile also makes it an ideal candidate for combination regimens.
Histamine H2 receptor antagonists are under investigation.
Cimetidine has been suggested as a treatment for COVID-19.
Famotidine has been suggested as a treatment for COVID-19, and a clinical study is underway.
Ibuprofen: A trial called "Liberate" has been started in the United Kingdom to determine the effectiveness of ibuprofen in reducing the severity and progression of lung injury which results in breathing difficulties for COVID-19 patients. Subjects are to receive three doses of a special formulation of the drug – lipid ibuprofen – in addition to usual care.
Influenza vaccine: A clinical cohort study in Brazil found that COVID-19 patients who received a recent influenza vaccine needed less intensive care support, less invasive respiratory support, and were less likely to die.
Sildenafil, more commonly known by the brand name Viagra, is proposed as a treatment for COVID-19, and a Phase III clinical trial is underway.
== Found ineffective ==
Aspirin, hydroxychloroquine, azithromycin, and colchicine were found ineffective against COVID-19. The combination of lopinavir and ritonavir was found ineffective against COVID-19, and the combination of etesevimab and bamlanivimab was found ineffective against the Omicron variant.
== References ==
== Further reading ==
== External links ==
"COVID-19 therapeutics tracker". Regulatory Affairs Professionals Society.
"STAT's Covid-19 Drugs and Vaccines Tracker". Stat. 27 April 2020.
Zimmer C, Wu KJ, Corum J, Kristoffersen M (16 July 2020). "Coronavirus Drug and Treatment Tracker". The New York Times.
"JHMI Clinical Recommendations for Available Pharmacologic Therapies for COVID-19" (PDF). Johns Hopkins Medicine.
World Health Organization (2021). Therapeutics and COVID-19: living guideline, 24 September 2021 (Report). World Health Organization (WHO). hdl:10665/345356. WHO/2019-nCoV/therapeutics/2021.3.
Velasquez-Manoff M (11 August 2020). "How Covid Sends Some Bodies to War With Themselves". The New York Times.
Zimmer C (30 April 2020). "Old Drugs May Find a New Purpose: Fighting the Coronavirus". The New York Times. | Wikipedia/COVID-19_drug_repurposing_research |
Drug repositioning (also called drug repurposing) involves the investigation of existing drugs for new therapeutic purposes.
== Repurposing achievements ==
Repurposing generics can have groundbreaking effects for patients: 35% of 'transformative' drugs approved by the US FDA are repurposed products. Repurposing is especially relevant for rare or neglected diseases.
A number of successes have been achieved, the foremost including sildenafil (Viagra) for erectile dysfunction and pulmonary hypertension and thalidomide for leprosy and multiple myeloma. Clinical trials have been performed on posaconazole and ravuconazole for Chagas disease.
The antifungal agents clotrimazole and ketoconazole have also been investigated for anti-trypanosome therapy. Successful repositioning of antimicrobials has led to the discovery of broad-spectrum therapeutics, which are effective against multiple infection types.
== Strategy ==
Drug repositioning is a "universal strategy" for neglected diseases due to 1) reduced number of required clinical trial steps could reduce the time and costs for the medicine to reach market, 2) existing pharmaceutical supply chains could facilitate "formulation and distribution" of the drug, 3) known possibility of combining with other drugs could allow more effective treatment, 4) the repositioning could facilitate the discovery of "new mechanisms of action for old drugs and new classes of medicines", 5) the removal of “activation barriers” of early research stages can enable the project to advance rapidly into disease-oriented research.
Though often considered a serendipitous approach, in which repurposable drugs are discovered by chance, drug repurposing has benefited heavily from advances in human genomics, network biology, and chemoproteomics. It is now possible to identify serious repurposing candidates by finding genes involved in a specific disease and checking whether they interact, in the cell, with other genes that are targets of known drugs. Drugs against targets supported by human genetics have been shown to be twice as likely to succeed as drugs overall in the pharmaceutical pipeline. Drug repurposing can be a time- and cost-effective strategy for treating serious diseases such as cancer, and it has been applied as a means of solution-finding to combat the COVID-19 pandemic.
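The gene-network idea above can be sketched in a few lines: flag an approved drug as a candidate when one of its targets falls within the interaction neighbourhood of the disease genes. This is a minimal illustration only; all gene and drug names are hypothetical, and real pipelines use curated interaction databases and statistical scoring rather than simple set overlap.

```python
# Minimal sketch of network-based repurposing: flag approved drugs whose targets
# interact with disease-associated genes. All gene and drug names are hypothetical.

disease_genes = {"GENE_A", "GENE_B", "GENE_C"}   # genes implicated in the disease

interactions = {                                 # toy protein-protein interaction map
    "GENE_A": {"GENE_X", "GENE_Y"},
    "GENE_B": {"GENE_Y"},
    "GENE_C": {"GENE_Z"},
}

drug_targets = {                                 # targets of approved drugs
    "drug_1": {"GENE_Y"},
    "drug_2": {"GENE_Q"},
}

def neighbourhood(genes, network):
    """Disease genes plus every gene they interact with."""
    hood = set(genes)
    for g in genes:
        hood |= network.get(g, set())
    return hood

candidates = {
    drug for drug, targets in drug_targets.items()
    if targets & neighbourhood(disease_genes, interactions)
}
print(candidates)  # {'drug_1'}: its target GENE_Y interacts with disease genes
```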
Computational drug repurposing is the in silico screening of approved drugs for use against new indications. It can use molecular, clinical, or biophysical data. Electronic health records and real-world evidence have gained popularity in drug repurposing, for instance for COVID-19. Computational drug repurposing is expected to reduce drug development costs and time. In 2020, during the COVID-19 pandemic, a European project, Exscalate4Cov, conducted drug repurposing experiments, leading to the identification of raloxifene as a possible candidate for treating early-stage COVID-19 patients.
== Challenges ==
According to a 2022 systematic review, inadequate resources (financial and subject-matter expertise), barriers to accessing shelved compounds and their trial data, and the lack of traditional IP protections for repurposed compounds are the key barriers to drug repurposing. There is a lack of financial incentives for pharmaceutical companies to explore the repurposing of generic drugs, since doctors can prescribe the drug off-label and pharmacists can switch the branded version for a cheaper generic alternative. According to pharmacologist Alasdair Breckenridge and patent judge Robin Jacob, this issue is so significant that: "If a generic version of a drug is available, developers have little or no opportunity to recoup their investment in the development of the drug for a new indication".
Drug repositioning presents other challenges. First, the dosage required for the treatment of a novel disease usually differs from that for the original target disease; when this happens, the discovery team must begin again from Phase I clinical trials, which effectively strips drug repositioning of its advantages over de novo drug discovery. Second, the search for new formulation and distribution mechanisms to bring existing drugs to areas affected by the novel disease rarely involves "pharmaceutical and toxicological" scientists. Third, patent issues can be very complicated for drug repurposing, owing to the scarcity of experts in the legal aspects of drug repositioning, the disclosure of repositioning online or via publications, and questions about the extent of the novelty of the new drug purpose.
== Drug repurposing in psychiatry ==
Drug repurposing is considered a rapid, cost-effective, and reduced-risk strategy for developing new treatment options for psychiatric disorders as well.
=== Bipolar disorder ===
In bipolar disorder, repurposed drugs are emerging as feasible augmentation options. Several agents, all sustained by a plausible biological rationale, have been evaluated. Evidence from meta-analyses showed that adjunctive allopurinol and tamoxifen were superior to placebo for mania, and add-on modafinil/armodafinil and pramipexole seemed to be effective for bipolar depression, while the efficacy of celecoxib and N-acetylcysteine appeared to be limited to certain outcomes.
Further, meta-analytic evidence also exists for adjunctive melatonin and ramelteon in mania, and for add-on acetylsalicylic acid, pioglitazone, memantine, and inositol in bipolar depression, but the findings were not significant.
The generally low quality of evidence does not allow making reliable recommendations for the use of repurposed drugs in clinical practice, but some of these drugs have shown promising results and deserve further attention in research.
== See also ==
COVID-19 drug repurposing research
Chemoproteomics
Exscalate4Cov
== References ==
== Further reading == | Wikipedia/Drug_repositioning |
Antiviral drugs are a class of medication used for treating viral infections. Most antivirals target specific viruses, while a broad-spectrum antiviral is effective against a wide range of viruses. Antiviral drugs are one class of antimicrobials, a larger group which also includes antibiotic (also termed antibacterial), antifungal, and antiparasitic drugs; some antivirals are based on monoclonal antibodies. Most antivirals are considered relatively harmless to the host, and therefore can be used to treat infections. They should be distinguished from virucides, which are not medication but deactivate or destroy virus particles, either inside or outside the body. Natural virucides are produced by some plants, such as eucalyptus and Australian tea trees.
== Medical uses ==
Most of the antiviral drugs now available are designed to help deal with HIV, herpes viruses, the hepatitis B and C viruses, and influenza A and B viruses.
Viruses use the host's cells to replicate and this makes it difficult to find targets for the drug that would interfere with the virus without also harming the host organism's cells. Moreover, the major difficulty in developing vaccines and antiviral drugs is due to viral variation.
The emergence of antivirals is the product of a greatly expanded knowledge of the genetic and molecular function of organisms, allowing biomedical researchers to understand the structure and function of viruses, major advances in the techniques for finding new drugs, and the pressure placed on the medical profession to deal with the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS).
The first experimental antivirals were developed in the 1960s, mostly to deal with herpes viruses, and were found using traditional trial-and-error drug discovery methods. Researchers grew cultures of cells and infected them with the target virus. They then introduced into the cultures chemicals which they thought might inhibit viral activity and observed whether the level of virus in the cultures rose or fell. Chemicals that seemed to have an effect were selected for closer study.
This was a very time-consuming, hit-or-miss procedure, and in the absence of a good knowledge of how the target virus worked, it was not efficient in discovering effective antivirals which had few side effects. Only in the 1980s, when the full genetic sequences of viruses began to be unraveled, did researchers begin to learn how viruses worked in detail, and exactly what chemicals were needed to thwart their reproductive cycle.
== Antiviral drug design ==
=== Antiviral targeting ===
The general idea behind modern antiviral drug design is to identify viral proteins, or parts of proteins, that can be disabled. These "targets" should generally be as unlike any proteins or parts of proteins in humans as possible, to reduce the likelihood of side effects and toxicity. The targets should also be common across many strains of a virus, or even among different species of virus in the same family, so a single drug will have broad effectiveness. For example, a researcher might target a critical enzyme synthesized by the virus, but not by the patient, that is common across strains, and see what can be done to interfere with its operation.
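As a toy illustration of these two selection criteria, conservation across strains and dissimilarity to host proteins, the sketch below reduces them to set operations over protein names. Everything here is hypothetical: real target selection relies on sequence alignment and structural comparison, not name matching.

```python
# Toy illustration of the two target-selection criteria described above:
# keep viral proteins conserved across all strains, then drop any with a
# (hypothetical) close human homolog. All protein names are invented.

strain_proteomes = [
    {"polymerase", "protease", "capsid", "accessory_1"},   # strain 1
    {"polymerase", "protease", "capsid", "accessory_2"},   # strain 2
    {"polymerase", "protease", "capsid"},                  # strain 3
]
human_like = {"capsid"}   # assumed to resemble a host protein too closely

conserved = set.intersection(*strain_proteomes)   # common to every strain
drug_targets = conserved - human_like             # unlike host proteins

print(drug_targets)  # {'polymerase', 'protease'}
```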
Once targets are identified, candidate drugs can be selected, either from drugs already known to have appropriate effects or by actually designing the candidate at the molecular level with a computer-aided design program.
The target proteins can be manufactured in the lab for testing with candidate treatments by inserting the gene that synthesizes the target protein into bacteria or other kinds of cells. The cells are then cultured for mass production of the protein, which can then be exposed to various treatment candidates and evaluated with "rapid screening" technologies.
=== Approaches by virus life cycle stage ===
Viruses consist of a genome and sometimes a few enzymes stored in a capsule made of protein (called a capsid), and sometimes covered with a lipid layer (sometimes called an 'envelope'). Viruses cannot reproduce on their own and instead propagate by subjugating a host cell to produce copies of themselves, thus producing the next generation.
Researchers working on such "rational drug design" strategies for developing antivirals have tried to attack viruses at every stage of their life cycles. Some species of mushrooms have been found to contain multiple antiviral chemicals with similar synergistic effects.
Compounds isolated from fruiting bodies and filtrates of various mushrooms have broad-spectrum antiviral activities, but successful production and availability of such compounds as frontline antiviral is a long way away.
Viral life cycles vary in their precise details depending on the type of virus, but they all share a general pattern:
Attachment to a host cell.
Release of viral genes and possibly enzymes into the host cell.
Replication of viral components using host-cell machinery.
Assembly of viral components into complete viral particles.
Release of viral particles to infect new host cells.
==== Before cell entry ====
One antiviral strategy is to interfere with the ability of a virus to infiltrate a target cell. The virus must go through a sequence of steps to do this, beginning with binding to a specific "receptor" molecule on the surface of the host cell and ending with the virus "uncoating" inside the cell and releasing its contents. Viruses that have a lipid envelope must also fuse their envelope with the target cell, or with a vesicle that transports them into the cell before they can uncoat.
This stage of viral replication can be inhibited in two ways:
Using agents which mimic the virus-associated protein (VAP) and bind to the cellular receptors. This may include VAP anti-idiotypic antibodies, natural ligands of the receptor, and anti-receptor antibodies.
Using agents which mimic the cellular receptor and bind to the VAP. This includes anti-VAP antibodies, receptor anti-idiotypic antibodies, extraneous receptor and synthetic receptor mimics.
This strategy of designing drugs can be very expensive, and since the process of generating anti-idiotypic antibodies is partly trial and error, it can be a relatively slow process until an adequate molecule is produced.
===== Entry inhibitor =====
A very early stage of viral infection is viral entry, when the virus attaches to and enters the host cell. A number of "entry-inhibiting" or "entry-blocking" drugs are being developed to fight HIV. HIV most heavily targets a specific type of lymphocyte known as "helper T cells", and identifies these target cells through T-cell surface receptors designated "CD4" and "CCR5". Attempts to interfere with the binding of HIV with the CD4 receptor have failed to stop HIV from infecting helper T cells, but research continues on trying to interfere with the binding of HIV to the CCR5 receptor in hopes that it will be more effective.
HIV infects a cell through fusion with the cell membrane, which requires two different cellular molecular participants, CD4 and a chemokine receptor (differing depending on the cell type). Approaches to blocking this virus/cell fusion have shown some promise in preventing entry of the virus into a cell. At least one of these entry inhibitors, a biomimetic peptide called enfuvirtide (brand name Fuzeon), has received FDA approval and has been in use for some time. One potential benefit of an effective entry-blocking or entry-inhibiting agent is that it may not only prevent the spread of the virus within an infected individual but also the spread from an infected to an uninfected individual.
One possible advantage of the therapeutic approach of blocking viral entry (as opposed to the currently dominant approach of viral enzyme inhibition) is that it may prove more difficult for the virus to develop resistance to this therapy than for the virus to mutate or evolve its enzymatic protocols.
===== Uncoating inhibitors =====
Inhibitors of uncoating have also been investigated.
Amantadine and rimantadine have been introduced to combat influenza. These agents act on penetration and uncoating.
Pleconaril works against rhinoviruses, which cause the common cold, by blocking a pocket on the surface of the virus that controls the uncoating process. This pocket is similar in most strains of rhinoviruses and enteroviruses, which can cause diarrhea, meningitis, conjunctivitis, and encephalitis.
Some scientists are making the case that a vaccine against rhinoviruses, the predominant cause of the common cold, is achievable.
Vaccines that combine dozens of varieties of rhinovirus at once are effective in stimulating antiviral antibodies in mice and monkeys, researchers reported in Nature Communications in 2016.
Rhinoviruses are the most common cause of the common cold; other viruses, such as respiratory syncytial virus, parainfluenza virus, and adenoviruses, can also cause it. Rhinoviruses also exacerbate asthma attacks. Although rhinoviruses come in many varieties, they do not drift to the same degree that influenza viruses do. A mixture of 50 inactivated rhinovirus types should be able to stimulate neutralizing antibodies against all of them to some degree.
==== During viral synthesis ====
A second approach is to target the processes that synthesize virus components after a virus invades a cell.
===== Reverse transcription =====
One way of doing this is to develop nucleotide or nucleoside analogues that look like the building blocks of RNA or DNA, but deactivate the enzymes that synthesize the RNA or DNA once the analogue is incorporated. This approach is more commonly associated with the inhibition of reverse transcriptase (RNA to DNA) than with "normal" transcriptase (DNA to RNA).
The first successful antiviral, aciclovir, is a nucleoside analogue, and is effective against herpesvirus infections. The first antiviral drug to be approved for treating HIV, zidovudine (AZT), is also a nucleoside analogue.
An improved knowledge of the action of reverse transcriptase has led to better nucleoside analogues to treat HIV infections. One of these drugs, lamivudine, has been approved to treat hepatitis B, which uses reverse transcriptase as part of its replication process. Researchers have gone further and developed inhibitors that do not look like nucleosides, but can still block reverse transcriptase.
Another target being considered for HIV antivirals is RNase H, a component of reverse transcriptase that splits the synthesized DNA from the original viral RNA.
===== Integrase =====
Another target is integrase, which integrates the synthesized DNA into the host cell genome. Examples of integrase inhibitors include raltegravir, elvitegravir, and dolutegravir.
===== Transcription =====
Once a virus genome becomes operational in a host cell, it then generates messenger RNA (mRNA) molecules that direct the synthesis of viral proteins. Production of mRNA is initiated by proteins known as transcription factors. Several antivirals are now being designed to block attachment of transcription factors to viral DNA.
===== Translation/antisense =====
Genomics has not only helped find targets for many antivirals, it has provided the basis for an entirely new type of drug, based on "antisense" molecules. These are segments of DNA or RNA designed as complementary molecules to critical sections of viral genomes, and the binding of these antisense segments to their target sections blocks the operation of those genomes. A phosphorothioate antisense drug named fomivirsen has been introduced, used to treat opportunistic eye infections in AIDS patients caused by cytomegalovirus, and other antisense antivirals are in development. An antisense structural type that has proven especially valuable in research is morpholino antisense.
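The design principle is simple base-pairing complementarity. A minimal sketch, with an invented target sequence, computes the antisense oligo as the reverse complement of a chosen stretch of viral RNA; real designs must also consider target accessibility, off-target binding, and backbone chemistry such as phosphorothioate or morpholino.

```python
# Minimal sketch of antisense design: the oligo is the reverse complement of a
# chosen target stretch of the viral RNA. The sequence below is invented.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(target_rna: str) -> str:
    """Reverse complement of an RNA target, read 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(target_rna))

viral_site = "AUGGCUAGC"     # hypothetical critical section of a viral genome
oligo = antisense(viral_site)
print(oligo)                  # GCUAGCCAU, which binds the target by base pairing
```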
Morpholino oligos have been used to experimentally suppress many viral types:
caliciviruses
flaviviruses (including West Nile virus)
dengue
HCV
coronaviruses
===== Translation/ribozymes =====
Yet another antiviral technique inspired by genomics is a set of drugs based on ribozymes, which are enzymes that will cut apart viral RNA or DNA at selected sites. In their natural course, ribozymes are used as part of the viral manufacturing sequence, but these synthetic ribozymes are designed to cut RNA and DNA at sites that will disable them.
A ribozyme antiviral to deal with hepatitis C has been suggested, and ribozyme antivirals are being developed to deal with HIV. An interesting variation of this idea is the use of genetically modified cells that can produce custom-tailored ribozymes. This is part of a broader effort to create genetically modified cells that can be injected into a host to attack pathogens by generating specialized proteins that block viral replication at various phases of the viral life cycle.
===== Protein processing and targeting =====
Interference with post-translational modifications or with targeting of viral proteins in the cell is also possible.
==== Protease inhibitors ====
Some viruses include an enzyme known as a protease that cuts viral protein chains apart so they can be assembled into their final configuration. HIV includes a protease, and so considerable research has been performed to find "protease inhibitors" to attack HIV at that phase of its life cycle. Protease inhibitors became available in the 1990s and have proven effective, though they can have unusual side effects, for example causing fat to build up in unusual places. Improved protease inhibitors are now in development.
Protease inhibitors have also been found in nature. A protease inhibitor was isolated from the shiitake mushroom (Lentinus edodes); its presence may explain the shiitake mushroom's noted antiviral activity in vitro.
===== Long dsRNA helix targeting =====
Most viruses produce long dsRNA helices during transcription and replication. In contrast, uninfected mammalian cells generally produce dsRNA helices of fewer than 24 base pairs during transcription. DRACO (double-stranded RNA activated caspase oligomerizer) is a group of experimental antiviral drugs initially developed at the Massachusetts Institute of Technology. In cell culture, DRACO was reported to have broad-spectrum efficacy against many infectious viruses, including dengue flavivirus, Amapari and Tacaribe arenavirus, Guama bunyavirus, H1N1 influenza and rhinovirus, and was additionally found effective against influenza in vivo in weanling mice. It was reported to induce rapid apoptosis selectively in virus-infected mammalian cells, while leaving uninfected cells unharmed. DRACO effects cell death via one of the last steps in the apoptosis pathway in which complexes containing intracellular apoptosis signalling molecules simultaneously bind multiple procaspases. The procaspases transactivate via cleavage, activate additional caspases in the cascade, and cleave a variety of cellular proteins, thereby killing the cell.
==== Assembly ====
Rifampicin acts at the assembly phase.
==== Release phase ====
The final stage in the life cycle of a virus is the release of completed viruses from the host cell, and this step has also been targeted by antiviral drug developers. Two drugs named zanamivir (Relenza) and oseltamivir (Tamiflu) that have been recently introduced to treat influenza prevent the release of viral particles by blocking a molecule named neuraminidase that is found on the surface of flu viruses, and also seems to be constant across a wide range of flu strains.
=== Immune system stimulation ===
Rather than attacking viruses directly, a second category of tactics for fighting viruses involves encouraging the body's immune system to attack them. Some antivirals of this sort do not focus on a specific pathogen, instead stimulating the immune system to attack a range of pathogens.
Among the best-known drugs of this class are the interferons, which inhibit viral synthesis in infected cells. One form of human interferon, named "interferon alpha", is well established as part of the standard treatment for hepatitis B and C, and other interferons are also being investigated as treatments for various diseases.
A more specific approach is to synthesize antibodies, protein molecules that can bind to a pathogen and mark it for attack by other elements of the immune system. Once researchers identify a particular target on the pathogen, they can synthesize quantities of identical "monoclonal" antibodies that bind to that target. A monoclonal drug is now being sold to help fight respiratory syncytial virus in babies, and antibodies purified from infected individuals are also used as a treatment for hepatitis B.
== Antiviral drug resistance ==
Antiviral resistance can be defined as a decreased susceptibility to a drug caused by changes in viral genotypes. In cases of antiviral resistance, drugs have either diminished or no effectiveness against their target virus. The issue remains a major obstacle to antiviral therapy, as resistance has developed to almost all specific and effective antimicrobials, including antiviral agents.
The Centers for Disease Control and Prevention (CDC) recommends that everyone six months of age and older get a yearly vaccination to protect them from influenza A viruses (H1N1 and H3N2) and up to two influenza B viruses (depending on the vaccination). Comprehensive protection starts by ensuring vaccinations are current and complete. However, vaccines are preventative and are not generally used once a patient has been infected with a virus. Additionally, the availability of these vaccines can be limited for financial or locational reasons, which can undermine the effectiveness of herd immunity, making effective antivirals a necessity.
The three FDA-approved neuraminidase-inhibitor flu antivirals available in the United States, recommended by the CDC, are oseltamivir (Tamiflu), zanamivir (Relenza), and peramivir (Rapivab). Influenza antiviral resistance often results from changes in the neuraminidase and hemagglutinin proteins on the viral surface. Currently, neuraminidase inhibitors (NAIs) are the most frequently prescribed antivirals because they are effective against both influenza A and B. However, antiviral resistance is known to develop if mutations in the neuraminidase proteins prevent NAI binding. This was seen in the H275Y mutation, which was responsible for oseltamivir resistance in H1N1 strains in 2009: the inability of NA inhibitors to bind to the virus allowed the strain carrying this resistance mutation to spread through natural selection. Furthermore, a study published in 2009 in Nature Biotechnology emphasized the urgent need to augment oseltamivir stockpiles with additional antiviral drugs, including zanamivir. This finding was based on a performance evaluation of these drugs supposing the 2009 H1N1 "Swine Flu" neuraminidase (NA) were to acquire the oseltamivir-resistance (His274Tyr) mutation, which is currently widespread in seasonal H1N1 strains.
=== Origin of antiviral resistance ===
The genetic makeup of viruses is constantly changing, which can cause a virus to become resistant to currently available treatments. Viruses can become resistant through spontaneous or intermittent mechanisms throughout the course of an antiviral treatment. Immunocompromised patients hospitalized with pneumonia are at higher risk of developing oseltamivir resistance during treatment than immunocompetent patients. Those who receive oseltamivir for "post-exposure prophylaxis" after exposure to someone with the flu are also at higher risk of resistance.
The mechanisms for antiviral resistance development depend on the type of virus in question. RNA viruses such as hepatitis C and influenza A have high error rates during genome replication because RNA polymerases lack proofreading activity. RNA viruses also have small genome sizes that are typically less than 30 kb, which allow them to sustain a high frequency of mutations. DNA viruses, such as HPV and herpesvirus, hijack host cell replication machinery, which gives them proofreading capabilities during replication. DNA viruses are therefore less error prone, are generally less diverse, and are more slowly evolving than RNA viruses. In both cases, the likelihood of mutations is exacerbated by the speed with which viruses reproduce, which provides more opportunities for mutations to occur in successive replications. Billions of viruses are produced every day during the course of an infection, with each replication giving another chance for mutations that encode for resistance to occur.
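To see why replication speed matters so much, a back-of-the-envelope calculation helps; the rates and counts below are rough, commonly cited orders of magnitude chosen for illustration, not measurements from any particular virus:

```python
# Back-of-the-envelope illustration of mutation supply in an RNA virus infection.
# All rates and counts are assumed orders of magnitude, not measured values.

error_rate = 1e-4        # assumed misincorporations per nucleotide per replication
genome_len = 30_000      # upper end of typical RNA virus genome sizes (30 kb)
virions_per_day = 1e9    # assumed new virions produced daily during infection

mutations_per_genome = error_rate * genome_len            # ≈ 3 per copy
mutant_genomes_per_day = mutations_per_genome * virions_per_day

print(f"{mutations_per_genome:.0f} mutations per new genome")
print(f"≈ {mutant_genomes_per_day:.1e} mutated genomes per day")
# With ~3e9 mutation events per day spread over a 3e4-site genome, every possible
# single-point mutant is expected to arise many times daily.
```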
Multiple strains of one virus can be present in the body at one time, and some of these strains may contain mutations that cause antiviral resistance. This effect, called the quasispecies model, results in immense variation in any given sample of virus, and gives the opportunity for natural selection to favor viral strains with the highest fitness every time the virus is spread to a new host. Recombination, the joining of two different viral variants, and reassortment, the swapping of viral gene segments among viruses in the same cell, also play a role in resistance, especially in influenza.
Antiviral resistance has been reported in antivirals for herpes, HIV, hepatitis B and C, and influenza, but antiviral resistance is a possibility for all viruses. Mechanisms of antiviral resistance vary between virus types.
=== Detection of antiviral resistance ===
National and international surveillance is performed by the CDC to determine the effectiveness of the current FDA-approved antiviral flu drugs. Public health officials use this information to make current recommendations about the use of flu antiviral medications. The WHO further recommends in-depth epidemiological investigations to control potential transmission of the resistant virus and prevent future progression. As novel treatments and techniques for detecting antiviral resistance are enhanced, so too can strategies be established to combat the inevitable emergence of antiviral resistance.
=== Treatment options for antiviral resistant pathogens ===
If a virus is not fully wiped out during a regimen of antivirals, treatment creates a bottleneck in the viral population that selects for resistance, and there is a chance that a resistant strain may repopulate the host. Viral treatment mechanisms must therefore account for the selection of resistant viruses.
The most commonly used method for treating resistant viruses is combination therapy, which uses multiple antivirals in one treatment regimen. This is thought to decrease the likelihood that one mutation could cause antiviral resistance, as the antivirals in the cocktail target different stages of the viral life cycle. This is frequently used in retroviruses like HIV, but a number of studies have demonstrated its effectiveness against influenza A, as well. Viruses can also be screened for resistance to drugs before treatment is started. This minimizes exposure to unnecessary antivirals and ensures that an effective medication is being used. This may improve patient outcomes and could help detect new resistance mutations during routine scanning for known mutants. However, this has not been consistently implemented in treatment facilities at this time.
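The logic behind combination therapy can be sketched with a simple independence argument: if resistance to each drug requires its own mutation, the chance that a single genome copy becomes resistant to the whole cocktail is roughly the product of the per-drug chances. The probabilities below are assumed illustrative values, not clinical data.

```python
# Assumed, illustrative per-copy probability of acquiring resistance to one
# given drug during replication (not a measured value).
P_ONE_DRUG = 1e-5
COPIES_PER_DAY = 1e9   # replications per day during an active infection

for n_drugs in (1, 2, 3):
    p_cocktail = P_ONE_DRUG ** n_drugs       # independence assumption
    expected_per_day = p_cocktail * COPIES_PER_DAY
    print(f"{n_drugs} drug(s): p = {p_cocktail:.0e} per copy, "
          f"~{expected_per_day:.0e} resistant copies per day")
```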
== Direct-acting antivirals ==
The term direct-acting antivirals (DAAs) has long been associated with the combination of antiviral drugs used to treat hepatitis C infections. These are more effective than older treatments such as ribavirin (partially indirect-acting) and interferon (indirect-acting). The DAA drugs against hepatitis C are taken orally, as tablets, for 8 to 12 weeks. The treatment depends on the type or types (genotypes) of hepatitis C virus causing the infection. Both during and at the end of treatment, blood tests are used to monitor the effectiveness of the treatment and the subsequent cure.
The DAA combination drugs used include:
Harvoni (sofosbuvir and ledipasvir)
Epclusa (sofosbuvir and velpatasvir)
Vosevi (sofosbuvir, velpatasvir, and voxilaprevir)
Zepatier (elbasvir and grazoprevir)
Mavyret (glecaprevir and pibrentasvir)
The United States Food and Drug Administration approved DAAs on the basis of a surrogate endpoint called sustained virological response (SVR). SVR is achieved in a patient when hepatitis C virus RNA remains undetectable 12–24 weeks after treatment ends. Whether through DAAs or older interferon-based regimens, SVR is associated with improved health outcomes and significantly decreased mortality. For those who already have advanced liver disease (including hepatocellular carcinoma), however, the benefits of achieving SVR may be less pronounced, though still substantial.
Despite its historical roots in hepatitis C research, the term "direct-acting antivirals" is increasingly used to include other antiviral drugs with a direct viral target, such as aciclovir (against herpes simplex virus), letermovir (against cytomegalovirus), or AZT (against human immunodeficiency virus). In this context it serves to distinguish these drugs from those with an indirect mechanism of action, such as immune modulators like interferon alfa. This distinction is of particular relevance for the potential development of drug-resistance mutations.
== Public policy ==
=== Use and distribution ===
Guidelines regarding viral diagnoses and treatments change frequently, which can limit quality of care. Even when physicians diagnose older patients with influenza, use of antiviral treatment can be low. Provider knowledge of antiviral therapies can improve patient care, especially in geriatric medicine. Furthermore, in local health departments (LHDs) with access to antivirals, guidelines may be unclear, causing delays in treatment. With time-sensitive therapies, delays could lead to lack of treatment.
Overall, national guidelines regarding infection control and management standardize care and improve healthcare worker and patient safety. Guidelines, such as those provided by the Centers for Disease Control and Prevention (CDC) during the 2009 flu pandemic caused by the H1N1 virus, recommend, among other things, antiviral treatment regimens, clinical assessment algorithms for coordination of care, and antiviral chemoprophylaxis guidelines for exposed persons. The roles of pharmacists and pharmacies have also expanded to meet the needs of the public during public health emergencies.
=== Stockpiling ===
Public Health Emergency Preparedness initiatives are managed by the CDC via the Office of Public Health Preparedness and Response. Funds aim to support communities in preparing for public health emergencies, including pandemic influenza. Also managed by the CDC, the Strategic National Stockpile (SNS) consists of bulk quantities of medicines and supplies for use during such emergencies. Antiviral stockpiles prepare for shortages of antiviral medications in cases of public health emergencies. During the H1N1 pandemic in 2009–2010, guidelines for SNS use by local health departments were unclear, revealing gaps in antiviral planning. For example, local health departments that received antivirals from the SNS did not have transparent guidance on the use of the treatments. This gap made it difficult to create plans and policies for their use and future availability, causing delays in treatment.
== See also ==
Antiretroviral drug (especially HAART for HIV)
CRISPR-Cas13
Discovery and development of CCR5 receptor antagonists (for HIV)
Monoclonal antibody
List of antiviral drugs
Antiprion drugs and Astemizole
Discovery and development of NS5A inhibitors
COVID-19 drug repurposing research
== References == | Wikipedia/Antiviral_Therapy |
The discovery of disease-causing pathogens is an important activity in the field of medical science. Many viruses, bacteria, protozoa, fungi, helminths (parasitic worms), and prions have been identified as confirmed or potential pathogens. In the United States, a Centers for Disease Control and Prevention program begun in 1995 identified over a hundred patients with life-threatening illnesses that were considered to be of an infectious cause but could not be linked to a known pathogen. The association of pathogens with disease can be a complex and controversial process, in some cases requiring decades or even centuries to achieve.
== Factors impairing identification of pathogens ==
Factors which have been identified as impeding the identification of pathogens include the following:
1. Lack of animal models: Experimental infection in animals has been used as a criterion to demonstrate a disease-causing ability, but for some pathogens (such as Vibrio cholerae, which causes disease only in humans), animal models do not exist. In cases where animal models were not available, scientists have sometimes infected themselves or others to determine an organism's disease-causing ability.
2. Pre-existing theories of disease: Before a pathogen is well recognized, scientists may attribute the symptoms of infection to other causes, such as toxicological, psychological, or genetic ones. Even once a pathogen has been associated with an illness, researchers have reported difficulty displacing these pre-existing theories.
3. Variable pathogenicity: Infection with pathogens can produce varying responses in hosts, complicating the process of showing a relationship between infection and the pathogen. In some infectious diseases, the severity of symptoms has been shown to be dependent on specific genetic traits of the host.
4. Organisms that look alike but behave differently: In some cases a harmless organism exists which looks identical to a disease-causing organism under a microscope, which complicates the discovery process.
5. Lack of research effort: Slow progress has been attributed to the small numbers of researchers working on a pathogen.
== 19th-century discoveries ==
=== Vibrio cholerae (1849–1884) ===
Vibrio cholerae bacteria are transmitted through contaminated water. Once ingested, the bacteria colonize the intestinal tract of the host and produce a toxin which causes body fluids to flow across the lining of the intestine. Death can result in 2–3 hours from dehydration if no treatment is provided.
Before the discovery of an infectious cause, the symptoms of cholera were thought to be caused by an excess of bile in the patient; the disease cholera takes its name from the Greek word χολή, meaning bile. This theory was consistent with humorism, and led to such medical practices as bloodletting. The bacterium was first reported in 1849 by Félix Pouchet, who observed it in stools from patients with cholera but did not appreciate the significance of its presence. The first scientist to understand the significance of Vibrio cholerae was the Italian anatomist Filippo Pacini, who published detailed drawings of the organism in "Microscopical observations and pathological deductions on cholera" in 1854. He published further papers in 1866, 1871, 1876, and 1880, which were ignored by the scientific community. He correctly described how the bacteria caused diarrhea, and developed treatments that were found to be effective. Whilst John Snow's epidemiological maps were well recognized and led to the removal of the Broad Street pump handle in the 1854 Broad Street cholera outbreak, in 1874 scientific representatives from 21 countries voted unanimously to resolve that cholera was caused by environmental toxins from miasmata, or clouds of unhealthy substances floating in the air. In 1884, Robert Koch re-discovered Vibrio cholerae as a causal element in cholera. Some scientists opposed the new theory, and even drank cholera cultures to disprove it:
Koch announced his discovery of the cholera vibrio in 1884. His conclusions were based upon the constant finding of the peculiar "comma bacillus" in the stools of cholera patients, and the failure to demonstrate this organism in the feces of other persons. It was not possible to reproduce typical cholera in laboratory animals. At the time the "germ theory" of disease had not yet obtained general acceptance, and Koch's announcement was received with considerable skepticism, particularly after it was found that similar "comma bacilli" could be found at times in the feces of persons not suffering from cholera, and often in all sorts of other environments - well and river waters, cheese, etc. We now know that these were saprotrophic species of Vibrio, which may be differentiated from the cholera vibrio by cultural and immunological methods. But the correctness of Koch's opinion was dramatically demonstrated by von Pettenkofer and Emmerich who, doubting the etiological relationship of Koch's organisms, deliberately drank cultures of it. Von Pettenkofer developed merely a transient diarrhea, but Emmerich suffered from a typical and severe attack of cholera.
Von Pettenkofer considered his experience proof that Vibrio cholerae was harmless, as he did not develop cholera from consuming the culture. Between 1849, when Pouchet discovered Vibrio cholerae, and 1891, over a million people died in cholera epidemics in Europe and Russia. In 1995, researchers published a study in Science explaining why some persons are able to be infected with cholera without symptoms, possibly explaining why Pettenkofer did not get sick. The study showed that a series of genetic mutations in some people provide resistance to cholera toxin; but these mutations come at a price. If too many of them occur in a person, they will develop cystic fibrosis, an incurable and often fatal genetic disorder.
== 20th-century discoveries ==
=== Giardia lamblia (1681–1975) ===
Giardiasis is a disease caused by infection with the protozoan Giardia lamblia. Infection with Giardia can produce diarrhea, gas, and abdominal pain in some people. If untreated, infection can be chronic. In children, chronic Giardia infection can cause stunting (stunted growth) and lowered intelligence. Infection with Giardia is now universally recognized as a disease and treated by physicians with antiprotozoal drugs. Since 2002, Giardia cases must be reported to the United States Centers for Disease Control and Prevention (CDC), according to the CDC's Reportable Disease Spreadsheet. The U.S. National Institutes of Health Gastrointestinal Parasites Lab studies Giardia almost exclusively.
However, Giardia underwent an extraordinarily long period of emergence, from its discovery in 1681 until the 1970s, when it was fully accepted that infection with Giardia was a treatable cause of chronic diarrhea:
Giardia lamblia was first discovered by Leeuwenhoeck (1681) who found the parasite in his own {diarrheal} stools. It was long considered to be a harmless commensal organism, but in recent years has been recognized as a cause of intestinal disease often acquired by travelers to foreign countries, persons drinking contaminated water in this country, children in day care nurseries and homosexual males. It is the most common pathogenic intestinal parasite in the United States, being found in 4% of stool specimens submitted to state public health laboratories for parasite examination. Attesting to its increasing importance in the United States, a symposium on Giardiasis, sponsored by the Environmental Protection Agency, was held in the fall of 1978.
Some of the first evidence in modern times of Giardia's pathogenicity came during World War II when soldiers were treated for malaria with the antiprotozoal quinacrine, and their diarrhea disappeared, as did the Giardia from their stool samples. In 1954, Dr. R.C. Rendtorff performed experiments on prisoner volunteers, infecting them with Giardia. In the experiment, although some prisoners experienced changes in stool habits, he concluded that these could not be conclusively linked to Giardia infection, and also indicated that all prisoners experienced spontaneous clearance of Giardia. His experiments were described at the EPA Symposium on Waterborne Transmission of Giardiasis in 1978:
[...] we also included Giardia lamblia, which at that time was not generally believed to be an invasive pathogenic parasite of man. Giardia was thought in the 1950's to cause occasional problems of diarrhea in children but its appearance was so common and, in adults so lacking in clinical symptomatology, that most considered it a non-pathogen. As a result, we felt safe in exposing prisoners to Giardia.
In 1954–1955, an outbreak of Giardia infection occurred in Oregon (United States), sickening 50,000 people. This was documented in a communication by Dr. Lyle Veazie, which was not published until 15 years later, in The New England Journal of Medicine. In the communication, Veazie notes that he had been unable to find a publisher for his account of the epidemic. The communication was re-published in the Proceedings of the EPA Symposium on Waterborne Transmission of Giardiasis in 1979, and that version included the following quote from the Director of the Oregon State Board of Health, suggesting that diarrhea from Giardia was still being attributed to other causes by health authorities in 1954:
While an unidentified virus seems the most likely etiologic agent, the unusual prevalence of Giardia lamblia cysts in stools of patients seems worthy of record.
=== Helicobacter pylori (1892–1982) ===
Infection with the bacterium Helicobacter pylori is the cause of most stomach ulcers. The discovery is generally credited to Australian gastroenterologists Dr. Barry Marshall and Dr. J. Robin Warren, who published their findings in 1983. The pair received the 2005 Nobel Prize in Physiology or Medicine for their work. Before this, the cause of stomach ulcers was unknown, though a popular belief was that stress played a role. Some researchers suggested that ulcers were a psychosomatic illness.
In H. pylori Pioneers, Dr. Marshall noted that other physicians had produced evidence of H. pylori infection as early as 1892. Marshall writes that these earlier reports were disregarded because they conflicted with existing belief. The first description of H. pylori came in 1892 from Giulio Bizzozero, who identified acid-tolerant bacteria living in a dog's stomach. Later, a theory developed that no bacteria could live in the stomach. Although the theory had no scientific basis, it became a stumbling block, discouraging scientists from searching for infective causes of stomach ulcers. In 1940, two physicians, Dr. A. Stone Freedberg and Dr. Louis E. Barron, published a paper describing spiral bacteria found in about half of their gastroenterology patients who had stomach ulcers. Dr. John Lykoudis, a Greek physician, was one of the first physicians to treat stomach ulcers as an infectious disease. Between 1960 and 1970, he treated over 10,000 ulcer patients in Athens with antibiotics. Lykoudis tried to publish a paper on his findings, but they conflicted with traditional theory, and his work was never published. Lykoudis' experience was followed in 1975 by a publication by Steer in the journal Gut describing spiral bacteria living on the borders of duodenal ulcers. The medical significance of Steer's findings was disregarded, but he "continued to publish papers on H. pylori, mostly as a hobby."
H. pylori can infect the stomach of some people without causing stomach ulcers. In investigating asymptomatic carriers of H. pylori, researchers identified a genetic trait called interleukin-1 beta-31, which causes increased production of stomach acid, resulting in ulcers if carriers become infected with H. pylori. Patients without the trait do not develop stomach ulcers in response to H. pylori infection, but instead have an increased risk of stomach cancer if they become infected. Investigation into other gastrointestinal infections has also shown that symptoms are the result of interaction between the infection and specific genetic mutations in the host.
=== Pathogenic variants of Escherichia coli (1947–1983) ===
There are different types of E. coli, some of which are found in humans and are harmless. Enterotoxigenic Escherichia coli (ETEC) is a type found to cause illness in humans, possessing a gene that allows it to manufacture a substance toxic to humans. Cattle are immune to its effects, but when people eat food contaminated with cattle feces, the organism can cause disease. Reports of pathogenic E. coli appear in the medical literature as early as 1947. Publications regarding disease-causing variants of E. coli appeared regularly in medical journals throughout the 1950s, '60s, and '70s, with fatalities being reported in humans, including infants, starting in the 1970s. Despite these earlier reports, pathogenic E. coli did not rise to public prominence until 1983, when a CDC researcher published a paper identifying ETEC as the cause of a series of outbreaks of unexplained hemorrhagic gastrointestinal illness. Despite the earlier publication of pathogenic variants of E. coli, researchers encountered significant difficulties in establishing ETEC as a pathogen.
=== Human immunodeficiency virus (1959–1984) ===
AIDS was first reported on June 5, 1981, when the CDC recorded a cluster of Pneumocystis carinii pneumonia (still classified as PCP, but now known to be caused by Pneumocystis jirovecii) in five homosexual men in Los Angeles. The discovery of the virus took several years of research, and was announced in 1984 by Dr. Robert Gallo of the U.S. National Cancer Institute, Dr. Luc Montagnier of the Pasteur Institute in Paris, and Dr. Jay Levy of the University of California, San Francisco.
However, HIV existed long before the 1981 CDC report. Three of the earliest known instances of HIV infection are as follows:
A plasma sample taken in 1959 from an adult male living in what is now the Democratic Republic of the Congo.
HIV found in tissue samples from a 15-year-old African-American teenager who died in St. Louis in 1969.
HIV found in tissue samples from a Norwegian sailor who died around 1976.
Two species of HIV infect humans: HIV-1 and HIV-2. More virulent and more easily transmitted, HIV-1 is the source of the majority of HIV infections throughout the world, while HIV-2 is less easily transmitted and is largely confined to West Africa. Both HIV-1 and HIV-2 are of primate origin. HIV-1 originated in the central common chimpanzee (Pan troglodytes troglodytes) found in southern Cameroon, and it is established that HIV-2 originated from the sooty mangabey (Cercocebus atys), an Old World monkey of Guinea-Bissau, Gabon, and Cameroon.
It is hypothesized that HIV probably transferred to humans as a result of direct contact with primates, for instance during hunting, butchery, or inter-species sexual contact.
=== Cyclospora (1995) ===
Cyclospora is a gastrointestinal pathogen that causes fever, diarrhea, vomiting, and severe weight loss. Outbreaks of the disease occurred in Chicago in 1989 and in other areas of the United States, but investigation by the CDC could not identify an infectious cause. The discovery of the cause was made by Mr. Ramachandran Rajah, the head of a medical clinic's laboratory in Kathmandu, Nepal. Mr. Rajah was trying to discover why local residents and visitors were becoming ill every summer. He identified an unusual-looking organism in stool samples from patients who were sick, but when the clinic sent slides of the organism to the CDC, it was identified as blue-green algae, which is harmless. Many pathologists had seen the same thing before, but dismissed it as irrelevant to the patient's disease. The organism was later identified as a distinct parasite, and treatment was developed to help patients with the infection. In the United States, Cyclospora infection must be reported to the CDC according to the CDC's Reportable Disease Chart.
== Present-day discoveries ==
The process of identifying new infectious agents continues. One study has suggested there are a large number of pathogens already causing illness in the population, but they have not yet been properly identified.
=== Gastrointestinal pathogens ===
Many recently emerged pathogens infect the gastrointestinal tract. For example, there are three gastrointestinal protozoal infections which must be reported to the CDC: Giardia, Cyclospora, and Cryptosporidium. None of these was known to be a significant pathogen in the 1970s.
Studies from the United States and Canada have measured the prevalence of gastrointestinal protozoa. The most prevalent protozoa in these studies are considered emerging infectious diseases by some researchers, because a consensus does not yet exist in the medical and public health spheres concerning their role in human disease. Researchers have suggested that their treatment may be complicated by differing opinions regarding pathogenicity, a lack of reliable testing procedures, and a lack of reliable treatments. As with newly discovered pathogens before them, researchers are reporting that these organisms may be responsible for illnesses for which no clear cause has been found, such as irritable bowel syndrome.
==== Dientamoeba fragilis ====
Dientamoeba fragilis is a single-celled parasite which infects the large intestine, causing diarrhea, gas, and abdominal pain. An Australian study identified patients with symptoms of IBS who were actually infected with Dientamoeba fragilis; their symptoms resolved following treatment. A study in Denmark identified a high incidence of Dientamoeba fragilis infection in a group of patients suspected of having gastrointestinal illness of an infectious nature. The study also suggested that special methods may be required to identify the infection.
==== Blastocystis ====
Blastocystis is a single-celled protozoan which infects the large intestine. Physicians report that infected patients show symptoms of abdominal pain, constipation, and diarrhea. One study found that 43% of IBS patients were infected with Blastocystis, versus 7% of controls. An additional study found that many IBS patients in whom Blastocystis could not be identified showed a strong antibody reaction to the organism, a type of test used to diagnose certain difficult-to-detect infections. Other researchers have also reported that special testing techniques may be necessary to identify the infection in some people. While some scientists believe the finding that IBS patients carry a protozoal infection is significant, other researchers believe the presence of the infection is not medically significant. Researchers report that the infection can be resistant to common antiprotozoal treatments, both in laboratory culture studies and in experience with patients; identifying Blastocystis infection may therefore not be of immediate help to a patient. A 2006 study of gastrointestinal infections in the United States suggested that Blastocystis infection has become the leading cause of protozoal diarrhea in that country, and Blastocystis was the most frequently identified protozoal infection found in patients in a 2006 Canadian study.
== See also ==
Spanish flu
Black Death
Bubonic plague
Pandemic
Smallpox
== References == | Wikipedia/Discovery_of_disease-causing_pathogens |
An infection rate (or incidence rate) is the probability or risk of an infection in a population. It is used to measure the frequency of occurrence of new instances of infection within a population during a specific time period.
$$\text{Rate of infection} = K \times \frac{\text{the number of infections}}{\text{the number of those at risk of infection}}$$
The number of infections equals the cases identified in the study or observed; an example would be HIV infections during a specific time period in a defined population. The population at risk comprises the people susceptible to infection in that population during the same time period; an example would be all the people in a city during a specific time period. The constant K is assigned a value of 100 to express the rate as a percentage. For example, to find the percentage of people in a city who are infected with HIV: 6,000 cases in March divided by the population of the city (one million), multiplied by the constant K (100), gives an infection rate of 0.6%.
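The worked example above translates directly into code; the sketch below simply restates that arithmetic.

```python
def infection_rate(infections: int, at_risk: int, k: float = 100.0) -> float:
    """Rate of infection = K x (number of infections) / (number at risk)."""
    return k * infections / at_risk

# The example from the text: 6,000 HIV cases in a city of one million people,
# with K = 100 so the rate is expressed as a percentage.
print(infection_rate(6_000, 1_000_000))  # -> 0.6
```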
Calculating the infection rate is used to analyze trends for the purposes of infection and disease control. The Centers for Disease Control and Prevention has developed an online infection rate calculator that allows the determination of the group A streptococcal infection rate in a population.
== Clinical applications ==
Health care facilities routinely track their infection rates according to the guidelines issued by the Joint Commission. Healthcare-associated infection (HAI) rates measure infection of patients in a particular hospital, allowing rates to be compared with those of other hospitals. These infections can often be prevented when healthcare facilities follow guidelines for safe care. To receive payment from Medicare, hospitals are required to report data about some infections to the Centers for Disease Control and Prevention's (CDC's) National Healthcare Safety Network (NHSN). Hospitals currently submit information on central line-associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), surgical site infections (SSIs), MRSA bacteremia, and C. difficile laboratory-identified events. The public reporting of these data is an effort by the Department of Health and Human Services.
For meaningful comparisons of infection rates, populations must be very similar between the two or more assessments. A limitation of mean rates, however, is that they cannot reflect differences in risk between populations.
== References ==
== External links ==
The Society for Healthcare Epidemiology of America: epidemiologists and physicians in infection control.
Association for Professionals in Infection Control and Epidemiology: infection prevention and control professionals.
The Certification Board of Infection Control and Epidemiology, Inc. | Wikipedia/Infection_rate |
Trial and error is a fundamental method of problem-solving characterized by repeated, varied attempts which are continued until success, or until the practitioner stops trying.
According to W.H. Thorpe, the term was devised by C. Lloyd Morgan (1852–1936) after trying out similar phrases "trial and failure" and "trial and practice". Under Morgan's Canon, animal behaviour should be explained in the simplest possible way. Where behavior seems to imply higher mental processes, it might instead be explained by trial-and-error learning. An example is the skillful way in which his terrier Tony opened the garden gate, easily misunderstood as an insightful act by someone seeing only the final behavior. Lloyd Morgan, however, had watched and recorded the series of approximations by which the dog had gradually learned the response, and could demonstrate that no insight was required to explain it.
Edward Lee Thorndike initiated the theory of trial-and-error learning, based on findings that showed how to manage a trial-and-error experiment in the laboratory. In his famous experiment, a cat was placed in a series of puzzle boxes in order to study the law of effect in learning. He plotted learning curves which recorded the timing for each trial. Thorndike's key observation was that learning was promoted by positive results, an idea later refined and extended by B. F. Skinner's operant conditioning.
Trial and error is also a method of problem solving, repair, tuning, or obtaining knowledge. In the field of computer science, the method is called generate and test (brute force). In elementary algebra, when solving equations, it is called guess and check.
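As a minimal sketch of generate and test (and of algebraic guess and check), the function below simply tries candidates in order until one passes the test; the equation solved is an arbitrary example chosen for illustration.

```python
def generate_and_test(candidates, is_solution):
    """Try candidates one by one; stop at the first that passes the test."""
    for x in candidates:          # generate
        if is_solution(x):        # test
            return x
    return None                   # exhausted without success

# Guess-and-check solution of x**2 + x == 20 over small integers.
print(generate_and_test(range(-10, 11), lambda x: x * x + x == 20))  # -> -5
```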
This approach can be seen as one of the two basic approaches to problem-solving, contrasted with an approach using insight and theory. However, there are intermediate methods that, for example, use theory to guide the method, an approach known as guided empiricism.
This way of thinking has become a mainstay of Karl Popper's critical rationalism.
== Methodology ==
The trial and error approach is used most successfully with simple problems and in games, and it is often the last resort when no apparent rule applies. This does not mean that the approach is inherently careless, for an individual can be methodical in manipulating the variables in an attempt to sort through possibilities that could result in success. Nevertheless, this method is often used by people who have little knowledge of the problem area. The trial-and-error approach has also been studied from a natural computational point of view.
=== Simplest applications ===
Ashby (1960, section 11/5) offers three simple strategies for dealing with the same basic exercise-problem, which have very different efficiencies. Suppose a collection of 1000 on/off switches has to be set to a particular combination by random-based testing, where each test is expected to take one second. [This is also discussed in Traill (1978–2006), section C1.2.] The strategies, illustrated by the sketch after this list, are:
the perfectionist all-or-nothing method, with no attempt at holding partial successes. This would be expected to take more than 10^301 seconds [i.e., 2^1000 seconds, or 3.5×10^291 centuries]
a serial-test of switches, holding on to the partial successes (assuming that these are manifest), which would take 500 seconds on average
parallel-but-individual testing of all switches simultaneously, which would take only one second
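A short sketch makes the gap between these strategies concrete. The all-or-nothing expectation is computed rather than simulated (2^1000 trials cannot be run), and the reading of the serial strategy below, one corrective second per switch found wrong, is an assumption about Ashby's setup rather than a quotation of it.

```python
import random

N = 1000  # number of on/off switches, as in Ashby's example

# Strategy 1 (all-or-nothing): expected number of one-second random trials.
print(f"all-or-nothing: ~2^{N} = {float(2**N):.3e} seconds expected")

# Strategy 2 (serial, holding partial successes): scan the switches once,
# spending one second correcting each switch found to be wrong (~N/2 of them).
target = [random.randint(0, 1) for _ in range(N)]
state = [random.randint(0, 1) for _ in range(N)]
seconds = sum(1 for s, t in zip(state, target) if s != t)
print(f"serial with held successes: {seconds} seconds this run (mean ~ {N // 2})")

# Strategy 3 (parallel-but-individual): every switch is tested simultaneously.
print("parallel: 1 second")
```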
Note the tacit assumption here that no intelligence or insight is brought to bear on the problem. However, the existence of different available strategies allows us to consider a separate ("superior") domain of processing — a "meta-level" above the mechanics of switch handling — where the various available strategies can be randomly chosen. Once again this is "trial and error", but of a different type.
=== Hierarchies ===
Ashby's book develops this "meta-level" idea, and extends it into a whole recursive sequence of levels, successively above each other in a systematic hierarchy. On this basis, he argues that human intelligence emerges from such organization: relying heavily on trial-and-error (at least initially at each new stage), but emerging with what we would call "intelligence" at the end of it all. Thus presumably the topmost level of the hierarchy (at any stage) will still depend on simple trial-and-error.
Traill (1978–2006) suggests that this Ashby-hierarchy probably coincides with Piaget's well-known theory of developmental stages. [This work also discusses Ashby's 1000-switch example; see §C1.2]. After all, it is part of Piagetian doctrine that children learn first by actively doing in a more-or-less random way, and then hopefully learn from the consequences — which all has a certain resemblance to Ashby's random "trial-and-error".
=== Application ===
Traill (2008, especially Table "S" on p. 31) follows Jerne and Popper in seeing this strategy as probably underlying all knowledge-gathering systems, at least in their initial phase.
Four such systems are identified:
Natural selection which "educates" the DNA of the species,
The brain of the individual (just discussed);
The "brain" of society-as-such (including the publicly held body of science); and
The adaptive immune system.
== Features ==
Trial and error has a number of features:
solution-oriented: trial and error makes no attempt to discover why a solution works, merely that it is a solution.
problem-specific: trial and error makes no attempt to generalize a solution to other problems.
non-optimal: trial and error is generally an attempt to find a solution, not all solutions, and not the best solution.
needs little knowledge: trial and error can proceed where there is little or no knowledge of the subject.
It is possible to use trial and error to find all solutions or the best solution, when a testably finite number of possible solutions exist. To find all solutions, one simply makes a note and continues, rather than ending the process, when a solution is found, until all solutions have been tried. To find the best solution, one finds all solutions by the method just described and then comparatively evaluates them based upon some predefined set of criteria, the existence of which is a condition for the possibility of finding a best solution. (Also, when only one solution can exist, as in assembling a jigsaw puzzle, then any solution found is the only solution and so is necessarily the best.)
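Continuing the earlier toy equation, the sketch below collects every solution rather than stopping at the first, then ranks them; the "best" criterion (the largest root) is an arbitrary assumption for illustration.

```python
# Exhaustive trial and error: note each solution and keep going, then pick
# a "best" one against an assumed criterion (here: the largest value).
candidates = range(-10, 11)
solutions = [x for x in candidates if x * x + x == 20]

print(solutions)       # -> [-5, 4]: all solutions found
print(max(solutions))  # -> 4: the "best" under the assumed criterion
```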
== Examples ==
Trial and error has traditionally been the main method of finding new drugs, such as antibiotics. Chemists simply try chemicals at random until they find one with the desired effect. In a more sophisticated version, chemists select a narrow range of chemicals thought likely to have some effect, using a technique called structure–activity relationship. (The latter case can alternatively be considered as a change of the problem rather than of the solution strategy: instead of "What chemical will work well as an antibiotic?" the problem in the sophisticated approach is "Which, if any, of the chemicals in this narrow range will work well as an antibiotic?") The method is used widely in many disciplines, such as polymer technology to find new polymer types or families.
Trial and error is also commonly seen in player responses to video games: when faced with an obstacle or boss, players often form a number of strategies to surpass the obstacle or defeat the boss, with each strategy being carried out before the player either succeeds or quits the game.
Sports teams also make use of trial and error to qualify for and/or progress through the playoffs and win the championship, attempting different strategies, plays, lineups and formations in hopes of defeating each and every opponent along the way to victory. This is especially crucial in playoff series in which multiple wins are required to advance, where a team that loses a game will have the opportunity to try new tactics to find a way to win, if they are not eliminated yet.
The scientific method can be regarded as containing an element of trial and error in its formulation and testing of hypotheses. Also compare genetic algorithms, simulated annealing and reinforcement learning – all varieties for search which apply the basic idea of trial and error.
Biological evolution can be considered as a form of trial and error. Random mutations and sexual genetic variations can be viewed as trials and poor reproductive fitness, or lack of improved fitness, as the error. Thus after a long time 'knowledge' of well-adapted genomes accumulates simply by virtue of them being able to reproduce.
Bogosort, a conceptual sorting algorithm (that is extremely inefficient and impractical), can be viewed as a trial and error approach to sorting a list. However, typical simple examples of bogosort do not track which orders of the list have been tried and may try the same order any number of times, which violates one of the basic principles of trial and error. Trial and error is actually more efficient and practical than bogosort; unlike bogosort, it is guaranteed to halt in finite time on a finite list, and might even be a reasonable way to sort extremely short lists under some conditions.
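The contrast drawn here can be shown directly: iterating over the permutations of a list tries each ordering exactly once, so, unlike bogosort, the search never repeats a failed trial and must halt on a finite list. This is a toy sketch, not a practical sorting method.

```python
from itertools import permutations

def trial_and_error_sort(items):
    """Try each ordering exactly once, never repeating a failed trial."""
    for candidate in permutations(items):                    # finite, no repeats
        if all(a <= b for a, b in zip(candidate, candidate[1:])):
            return list(candidate)                           # first sorted order

print(trial_and_error_sort([3, 1, 2]))  # -> [1, 2, 3] after at most 3! = 6 trials
```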
Jumping spiders of the genus Portia use trial and error to find new tactics against unfamiliar prey or in unusual situations, and remember the new tactics. Tests show that Portia fimbriata and Portia labiata can use trial and error in an artificial environment, where the spider's objective is to cross a miniature lagoon that is too wide for a simple jump; the spider must either jump then swim, or only swim.
== See also ==
== References ==
== Further reading ==
Ashby, W. R. (1960: Second Edition). Design for a Brain. Chapman & Hall: London.
Traill, R.R. (1978–2006). Molecular explanation for intelligence…, Brunel University Thesis, HDL.handle.net
Traill, R.R. (2008). Thinking by Molecule, Synapse, or both? — From Piaget’s Schema, to the Selecting/Editing of ncRNA. Ondwelle: Melbourne. Ondwelle.com — or French version Ondwelle.com.
Zippelius, R. (1991). Die experimentierende Methode im Recht (Trial and error in Jurisprudence), Academy of Science, Mainz, ISBN 3-515-05901-6 | Wikipedia/Trial-and-error |
Infection prevention and control (IPC) is the discipline concerned with preventing healthcare-associated infections; a practical rather than academic sub-discipline of epidemiology. In Northern Europe, infection prevention and control is expanded from healthcare into a component in public health, known as "infection protection" (smittevern, smittskydd, Infektionsschutz in the local languages). It is an essential part of the infrastructure of health care. Infection control and hospital epidemiology are akin to public health practice, practiced within the confines of a particular health-care delivery system rather than directed at society as a whole.
Infection control addresses factors related to the spread of infections within the healthcare setting, whether among patients, from patients to staff, from staff to patients, or among staff. This includes preventive measures such as hand washing, cleaning, disinfecting, sterilizing, and vaccinating. Other aspects include surveillance, monitoring, and investigating and managing suspected outbreaks of infection within a healthcare setting.
A subsidiary aspect of infection control involves preventing the spread of antimicrobial-resistant organisms such as MRSA. This in turn connects to the discipline of antimicrobial stewardship—limiting the use of antimicrobials to necessary cases, as increased usage inevitably results in the selection and dissemination of resistant organisms. Antimicrobial medications (aka antimicrobials or anti-infective agents) include antibiotics, antibacterials, antifungals, antivirals and antiprotozoals.
The World Health Organization (WHO) has set up an Infection Prevention and Control (IPC) unit in its Service Delivery and Safety department that publishes related guidelines.
== Infection prevention and control ==
Aseptic technique is a key component of all invasive medical procedures. Similar control measures are also recommended in any healthcare setting to prevent the spread of infection generally.
=== Hand hygiene ===
Hand hygiene is one of the basic, yet most important, steps in IPC (infection prevention and control). Hand hygiene drastically reduces the chances of HAIs (healthcare-associated infections) at a very low cost. Hand hygiene consists of either hand washing (water-based) or hand rubs (alcohol-based). According to WHO standards, hand washing involves seven steps, while hand rubs involve five.
The American Nurses Association (ANA) and American Association of Nurse Anesthesiology (AANA) have set specific checkpoints for nurses to clean their hands: before patient contact, before putting on protective equipment, before doing procedures, after contact with a patient's skin and surroundings, after contamination with foreign substances, after contact with bodily fluids and wounds, after taking off protective equipment, and after using the restroom. To ensure these checkpoints are met, precautions such as placing hand sanitizer dispensers filled with sodium hypochlorite, alcohol, or hydrogen peroxide (three approved disinfectants that kill bacteria) at certain points, and having nurses carry mini hand sanitizer dispensers, help increase sanitation in the field. In cases where equipment is placed in a container or bin and picked back up, nurses and doctors are required to wash their hands or use alcohol sanitizer before going back to the container to use the same equipment.
Independent studies by Ignaz Semmelweis in 1846 in Vienna and Oliver Wendell Holmes Sr. in 1843 in Boston established a link between the hands of health care workers and the spread of hospital-acquired disease. The U.S. Centers for Disease Control and Prevention (CDC) state that "It is well documented that the most important measure for preventing the spread of pathogens is effective handwashing". In the developed world, hand washing is mandatory in most health care settings and required by many different regulators.
In the United States, OSHA standards require that employers must provide readily accessible hand washing facilities, and must ensure that employees wash hands and any other skin with soap and water or flush mucous membranes with water as soon as feasible after contact with blood or other potentially infectious materials (OPIM).
In the UK healthcare professionals have adopted the 'Ayliffe Technique', based on the 6 step method developed by Graham Ayliffe, J. R. Babb, and A. H. Quoraishi.
Drying is an essential part of the hand hygiene process. In November 2008, a non-peer-reviewed study was presented to the European Tissue Symposium by the University of Westminster, London, comparing the bacteria levels present after the use of paper towels, warm air hand dryers, and modern jet-air hand dryers. Of those three methods, only paper towels reduced the total number of bacteria on hands, with "through-air dried" towels the most effective.
The presenters also carried out tests to establish whether there was the potential for cross-contamination of other washroom users and the washroom environment as a result of each type of drying method. They found that:
the jet air dryer, which blows air out of the unit at claimed speeds of 400 mph, was capable of blowing micro-organisms from the hands and the unit and potentially contaminating other washroom users and the washroom environment up to 2 metres away
use of a warm air hand dryer spread micro-organisms up to 0.25 metres from the dryer
paper towels showed no significant spread of micro-organisms.
In 2005, in a study conducted by TÜV Produkt und Umwelt, different hand drying methods were evaluated. The following changes in the bacterial count after drying the hands were observed:
=== Cleaning, Disinfection, Sterilization ===
The field of infection prevention describes a hierarchy of removal of microorganisms from surfaces including medical equipment and instruments. Cleaning is the lowest level, accomplishing substantial removal. Disinfection involves the removal of all pathogens other than bacterial spores. Sterilization is defined as the removal or destruction of ALL microorganisms including bacterial spores.
==== Cleaning ====
Cleaning is the first and simplest step in preventing the spread of infection via surfaces and fomites. Cleaning reduces microbial burden by chemical de-adsorption of organisms (loosening bioburden/organisms from surfaces via cleaning chemicals), simple mechanical removal (rinsing, wiping), and disinfection (killing of organisms by cleaning chemicals).
To reduce their chances of contracting an infection, individuals are recommended to maintain good hygiene by washing their hands after every contact with questionable areas or bodily fluids and by disposing of garbage at regular intervals to prevent germs from growing.
==== Disinfection ====
Disinfection uses liquid chemicals on surfaces and at room temperature to kill disease-causing microorganisms. Ultraviolet light has also been used to disinfect the rooms of patients infected with Clostridioides difficile after discharge. Disinfection is less effective than sterilization because it does not kill bacterial endospores.
Along with ensuring proper hand washing techniques are followed, another major component to decrease the spread of disease is the sanitation of all medical equipment. The ANA and AANA set guidelines for sterilization and disinfection based on the Spaulding Disinfection and Sterilization Classification Scheme (SDSCS). The SDSCS classifies sterilization techniques into three categories: critical, semi-critical, and non-critical. For critical situations, or situations involving contact with sterile tissue or the vascular system, sterilize devices with sterilants that destroy all bacteria, rinse with sterile water, and use chemical germicides. In semi-critical situations, or situations with contact of mucous membranes or non-intact skin, high-level disinfectants are required. Cleaning and disinfecting devices with high-level disinfectants, rinsing with sterile water, and drying all equipment surfaces to prevent microorganism growth are methods nurses and doctors must follow. For non-critical situations, or situations involving electronic devices, stethoscopes, blood pressure cuffs, beds, monitors and other general hospital equipment, intermediate level disinfection is required. "Clean all equipment between patients with alcohol, use protective covering for non-critical surfaces that are difficult to clean, and hydrogen peroxide gas. . .for reusable items that are difficult to clean."
==== Sterilization ====
Sterilization is a process intended to kill all microorganisms, and is the highest level of microbial kill that is possible. Sterilization, if performed properly, is an effective way of preventing infections from spreading. It should be used for the cleaning of medical instruments and any type of medical item that comes into contact with the bloodstream and sterile tissues.
There are four main ways in which such items are usually sterilized: autoclave (using high-pressure steam), dry heat (in an oven), chemical sterilants such as glutaraldehyde or formaldehyde solutions, and exposure to ionizing radiation. The first two are the most widely used methods of sterilization, mainly because of their accessibility and availability. Steam sterilization is one of the most effective types of sterilization, if done correctly, which is often hard to achieve. Instruments used in health care facilities are usually sterilized with this method. The general rule is that, for an effective sterilization, the steam must come into contact with all the surfaces that are meant to be disinfected. On the other hand, dry heat sterilization, performed with the help of an oven, is also an accessible type of sterilization, although it can only be used to disinfect instruments made of metal or glass. The very high temperatures needed to perform sterilization in this way can melt instruments that are not made of glass or metal.
The effectiveness of a sterilizer, such as a steam autoclave, is determined in three ways. First, mechanical indicators and gauges on the machine itself indicate proper operation. Second, heat-sensitive indicators or tape on the sterilizing bags change color to indicate proper levels of heat or steam. Third, and most importantly, biological testing is performed, in which a highly heat- and chemical-resistant microorganism (often the bacterial endospore) is selected as the standard challenge. If the process kills this microorganism, the sterilizer is considered effective.
Steam sterilization is done at a temperature of 121 °C (250 °F) with a pressure of 209 kPa (~2 atm). In these conditions, rubber items must be sterilized for 20 minutes, and wrapped items at 134 °C with a pressure of 310 kPa for 7 minutes. The time is counted once the required temperature has been reached. Steam sterilization requires four conditions in order to be efficient: adequate contact, a sufficiently high temperature, correct timing, and sufficient moisture. Sterilization using steam can also be done at a temperature of 132 °C (270 °F), at double pressure.
Dry heat sterilization is performed at 170 °C (340 °F) for one hour, or at 160 °C (320 °F) for two hours. Dry heat sterilization can also be performed at 121 °C for at least 16 hours.
Chemical sterilization, also referred to as cold sterilization, can be used to sterilize instruments that cannot normally be disinfected through the other two processes described above. The items sterilized with cold sterilization are usually those that can be damaged by regular sterilization. A variety of chemicals can be used including aldehydes, hydrogen peroxide, and peroxyacetic acid. Commonly, glutaraldehydes and formaldehyde are used in this process, but in different ways. When using the first type of disinfectant, the instruments are soaked in a 2–4% solution for at least 10 hours while a solution of 8% formaldehyde will sterilize the items in 24 hours or more. Chemical sterilization is generally more expensive than steam sterilization and therefore it is used for instruments that cannot be disinfected otherwise. After the instruments have been soaked in the chemical solutions, they must be rinsed with sterile water which will remove the residues from the disinfectants. This is the reason why needles and syringes are not sterilized in this way, as the residues left by the chemical solution that has been used to disinfect them cannot be washed off with water and they may interfere with the administered treatment. Although formaldehyde is less expensive than glutaraldehydes, it is also more irritating to the eyes, skin and respiratory tract and is classified as a potential carcinogen, so it is used much less commonly.
Ionizing radiation is typically used only for sterilizing items for which none of the above methods are practical, because of the risks involved in the process.
=== Personal protective equipment ===
Personal protective equipment (PPE) is specialized clothing or equipment worn by a worker for protection against a hazard. The hazard in a health care setting is exposure to blood, saliva, or other bodily fluids or aerosols that may carry infectious materials such as Hepatitis C, HIV, or other blood borne or bodily fluid pathogen. PPE prevents contact with a potentially infectious material by creating a physical barrier between the potential infectious material and the healthcare worker.
The United States Occupational Safety and Health Administration (OSHA) requires the use of personal protective equipment (PPE) by workers to guard against blood borne pathogens if there is a reasonably anticipated exposure to blood or other potentially infectious materials.
Components of PPE include gloves, gowns, bonnets, shoe covers, face shields, CPR masks, goggles, surgical masks, and respirators. How many components are used, and how the components are used, is often determined by regulations or the infection control protocol of the facility in question, which in turn are derived from knowledge of the mechanism of transmission of the pathogen(s) of concern. Many or most of these items are disposable to avoid carrying infectious materials from one patient to another and to avoid difficult or costly disinfection. In the US, OSHA requires the immediate removal and disinfection or disposal of a worker's PPE prior to leaving the work area where exposure to infectious material took place. For health care professionals who may come into contact with highly infectious bodily fluids, using personal protective coverings on exposed body parts improves protection. Breathable personal protective equipment improves user satisfaction and may offer a similar level of protection. In addition, adding tabs and other modifications to the protective equipment may reduce the risk of contamination during donning and doffing (putting on and taking off the equipment). Implementing an evidence-based donning and doffing protocol, such as a one-step glove and gown removal technique, giving oral instructions while donning and doffing, double gloving, and the use of glove disinfection may also improve protection for health care professionals.
Guidelines set by the ANA and AANA for proper use of disposable gloves include removing and replacing gloves frequently and whenever they are contaminated, damaged, or between treatment of multiple patients. When removing gloves, "grasp outer edge of glove near wrist, peel away from hand turning inside out, hold removed glove in opposite gloved hand, slide ungloved finger under wrist of gloved hand so finger is inside gloved area, peel off the glove from inside creating a 'bag' for both gloves, dispose of gloves in proper waste receptacle".
The inappropriate use of PPE such as gloves has been linked to an increase in rates of transmission of infection, and the use of such equipment must be compatible with the other particular hand hygiene agents used. Research studies in the form of randomized controlled trials and simulation studies are needed to determine the most effective types of PPE for preventing the transmission of infectious diseases to healthcare workers. There is low-quality evidence that supports making improvements or modifications to personal protective equipment in order to help decrease contamination. Examples of modifications include adding tabs to masks or gloves to ease removal and designing protective gowns so that gloves are removed at the same time. In addition, there is weak evidence that the following PPE approaches or techniques may lead to reduced contamination and improved compliance with PPE protocols: wearing double gloves, following specific doffing (removal) procedures such as those from the CDC, and providing people with spoken instructions while removing PPE.
=== Device-related infections ===
Healthcare-related infections such as (catheter-associated) urinary tract infections and (central-line) associated bloodstream infections can be caused by medical devices such as urinary catheters and central lines. Prudent use is essential in preventing infections associated with these medical devices. mHealth and patient participation have been used to improve risk awareness and prudent use (e.g. Participatient).
=== Antimicrobial surfaces ===
Microorganisms are known to survive on non-antimicrobial inanimate 'touch' surfaces (e.g., bedrails, over-the-bed trays, call buttons, bathroom hardware, etc.) for extended periods of time. This can be especially troublesome in hospital environments where patients with immunodeficiencies are at enhanced risk for contracting nosocomial infections.
Products made with antimicrobial copper alloy (brasses, bronzes, cupronickel, copper-nickel-zinc, and others) surfaces destroy a wide range of microorganisms in a short period.
The United States Environmental Protection Agency has approved the registration of 355 different antimicrobial copper alloys and one synthetic copper-infused hard surface that kill E. coli O157:H7, methicillin-resistant Staphylococcus aureus (MRSA), Staphylococcus, Enterobacter aerogenes, and Pseudomonas aeruginosa in less than 2 hours of contact. Other investigations have demonstrated the efficacy of antimicrobial copper alloys in destroying Clostridioides difficile, influenza A virus, adenovirus, and fungi. As a public hygienic measure in addition to regular cleaning, antimicrobial copper alloys are being installed in healthcare facilities in the UK, Ireland, Japan, Korea, France, Denmark, and Brazil. The synthetic hard surface is being installed in the United States as well as in Israel.
== Vaccination of health care workers ==
Healthcare workers may be exposed to certain infections in the course of their work. Vaccines are available to provide some protection to workers in a healthcare setting. Depending on regulation, recommendation, specific work function, or personal preference, healthcare workers or first responders may receive vaccinations for hepatitis B; influenza; COVID-19; measles, mumps, and rubella; tetanus, diphtheria, and pertussis; N. meningitidis; and varicella.
== Surveillance for infections ==
Surveillance is the act of investigating infections using standardized definitions such as those of the CDC. Determining the presence of a hospital-acquired infection requires an infection control practitioner (ICP) to review a patient's chart and see if the patient had the signs and symptoms of an infection. Surveillance definitions exist for infections of the bloodstream and urinary tract, pneumonia, surgical sites, and gastroenteritis.
Surveillance traditionally involved significant manual data assessment and entry in order to assess preventative actions such as isolation of patients with an infectious disease. Increasingly, computerized software solutions are becoming available that assess incoming risk messages from microbiology and other online sources. By reducing the need for data entry, software can reduce the data workload of ICPs, freeing them to concentrate on clinical surveillance.
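A minimal sketch of the kind of rule-based flagging such software performs is shown below. The message format, organism watch-list, and function names are invented for illustration and do not describe any specific commercial system; a real product would also parse incoming HL7 or laboratory messages and apply facility-specific rules.

```python
from dataclasses import dataclass

# Illustrative watch-list; a real system would load facility-specific
# rules rather than hard-coding organism names.
ALERT_ORGANISMS = {"MRSA", "VRE", "CRE", "Clostridioides difficile"}

@dataclass
class MicroResult:
    patient_id: str
    ward: str
    organism: str
    specimen: str

def flag_for_review(results):
    """Return the subset of incoming results an ICP should review."""
    return [r for r in results if r.organism in ALERT_ORGANISMS]

incoming = [
    MicroResult("P001", "ICU", "MRSA", "blood"),
    MicroResult("P002", "Ward 3", "Escherichia coli", "urine"),
]
for r in flag_for_review(incoming):
    print(f"ALERT: {r.organism} ({r.specimen}), patient {r.patient_id}, {r.ward}")
```

Automating this triage step is what frees ICPs from manual data entry: only results matching surveillance rules reach a human for clinical review.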
As of 1998, approximately one third of healthcare-acquired infections were preventable. Surveillance and preventative activities are increasingly a priority for hospital staff. The Study on the Efficacy of Nosocomial Infection Control (SENIC) project by the U.S. CDC found in the 1970s that hospitals reduced their nosocomial infection rates by approximately 32 per cent by focusing on surveillance activities and prevention efforts.
== Isolation and quarantine ==
In healthcare facilities, medical isolation refers to various physical measures taken to interrupt nosocomial spread of contagious diseases. Various forms of isolation exist, and are applied depending on the type of infection and agent involved, and its route of transmission, to address the likelihood of spread via airborne particles or droplets, by direct skin contact, or via contact with body fluids.
In cases where infection is merely suspected, individuals may be quarantined until the incubation period has passed and the disease manifests itself or the person remains healthy. Groups may undergo quarantine, or in the case of communities, a cordon sanitaire may be imposed to prevent infection from spreading beyond the community, or in the case of protective sequestration, into a community. Public health authorities may implement other forms of social distancing, such as school closings, when needing to control an epidemic.
== Barriers and facilitators of implementing infection prevention and control guidelines ==
Barriers to the ability of healthcare workers to follow PPE and infection control guidelines include communication of the guidelines, workplace support (manager support), the culture of use at the workplace, adequate training, the amount of physical space in the facility, access to PPE, and healthcare worker motivation to provide good patient care. Facilitators include involving all the staff in a facility, both healthcare workers and support staff, when guidelines are implemented.
== Outbreak investigation ==
When an unusual cluster of illness is noted, infection control teams undertake an investigation to determine whether there is a true disease outbreak, a pseudo-outbreak (a result of contamination within the diagnostic testing process), or just random fluctuation in the frequency of illness. If a true outbreak is discovered, infection control practitioners try to determine what permitted the outbreak to occur, and to rearrange the conditions to prevent ongoing propagation of the infection. Often, breaches in good practice are responsible, although sometimes other factors (such as construction) may be the source of the problem.
Outbreak investigations have more than a single purpose. These investigations are carried out in order to prevent additional cases in the current outbreak, to prevent future outbreaks, and to learn about a new disease or something new about an old disease. Reassuring the public, minimizing economic and social disruption, and teaching epidemiology are some other objectives of outbreak investigations.
According to the WHO, outbreak investigations are meant to detect what is causing the outbreak, how the pathogenic agent is transmitted, where it originated, what the carrier is, which population is at risk of becoming infected, and what the risk factors are.
== Training in infection control and health care epidemiology ==
Practitioners can come from several different educational streams: many begin as registered nurses, some as public health inspectors (environmental health officers), some as medical technologists (particularly in clinical microbiology), and some as physicians (typically infectious disease specialists). Specialized training in infection control and health care epidemiology is offered by the professional organizations described below. Physicians who desire to become infection control practitioners often are trained in the context of an infectious disease fellowship. Training that is conducted "face to face", via a computer, or via video conferencing may help improve compliance and reduce errors when compared with "folder-based" training (providing health care professionals with written information or instructions).
In the United States, the Certification Board of Infection Control and Epidemiology is a private company that certifies infection control practitioners based on their educational background and professional experience, in conjunction with testing their knowledge base with standardized exams. The credential awarded is the CIC, Certification in Infection Control and Epidemiology. It is recommended that candidates have two years of infection control experience before applying for the exam. Certification must be renewed every five years.
A course in hospital epidemiology (infection control in the hospital setting) is offered jointly each year by the Centers for Disease Control and Prevention (CDC) and the Society for Healthcare Epidemiology of America.
== Standardization ==
=== Australia ===
In 2002, the Royal Australian College of General Practitioners published a revised standard for office-based infection control, covering the management of immunisation, sterilisation, and disease surveillance. However, the document's treatment of the personal hygiene of health workers is limited to hand hygiene and waste and linen management, which may not be sufficient, since some pathogens are airborne and can be spread through air flow.
Since 1 November 2019, the Australian Commission on Safety and Quality in Health Care has managed the Hand Hygiene initiative in Australia, an initiative focused on improving hand hygiene practices to reduce the incidence of healthcare-associated infections.
=== United States ===
Currently, the federal regulation that describes infection control standards, as related to occupational exposure to potentially infectious blood and other materials, is found at 29 CFR Part 1910.1030 Bloodborne pathogens.
== See also ==
Pandemic prevention – Organization and management of preventive measures against pandemics
== References ==
== Further reading ==
Wong, P., & Lim, W. Y. (2020). Aligning difficult airway guidelines with the anesthetic COVID-19 guidelines to develop a COVID-19 difficult airway strategy: A narrative review. Journal of Anesthesia, 34(6), 924–943. https://doi.org/10.1007/s00540-020-02819-2
== External links ==
The Association for Professionals in Infection Control and Epidemiology is primarily composed of infection prevention and control professionals with nursing or medical technology backgrounds.
The Society for Healthcare Epidemiology of America is more heavily weighted towards practitioners who are physicians or doctoral-level epidemiologists.
Regional Infection Control Networks
The Certification Board of Infection Control and Epidemiology, Inc.
In infectious disease epidemiology, a sporadic disease is an infectious disease which occurs only infrequently, irregularly, or occasionally, in a few isolated places, with no discernible temporal or spatial pattern, as opposed to a recognizable epidemic outbreak or endemic pattern. The cases are so few (single or in a small cluster) and so widely separated in time and place that there is little or no discernible connection between them. They also show no recognizable common source of infection.
In the discussion of non-infectious diseases, a sporadic disease is a non-communicable disease (such as cancer) which occurs in people without any family history of that disease or without any inherited genetic predisposition for the disease (change in DNA which increases the risk of having that disease). Sporadic non-infectious diseases arise not due to any identifiable inherited gene, but because of randomly induced genetic mutations under the influence of environmental factors or of some unknown etiology. Sporadic non-infectious diseases typically occur late in life (late-onset), but early-onset sporadic non-infectious diseases also exist.
== Examples ==
=== Sporadic infectious diseases ===
Examples depend on time and place, because an infectious disease that is common in one area may be rare in another.
In the United States, tetanus, rabies, and plague are considered examples of sporadic diseases. Although the tetanus-causing bacterium Clostridium tetani is present in soil everywhere in the United States, tetanus infections are very rare and occur in scattered locations, because most individuals have either received vaccinations or clean wounds appropriately. Similarly, the country records a few scattered cases of plague each year, generally contracted from rodents in rural areas in the western part of the country.
In another example, the World Health Organization defines malaria as sporadic when autochthonous cases (i.e., cases acquired locally rather than imported) are too few and scattered to have any appreciable effect on the community.
=== Sporadic non-infectious diseases ===
Some examples of sporadic non-infectious diseases are sporadic Alzheimer's disease, sporadic Creutzfeldt–Jakob disease, sporadic cancers (such as sporadic basal cell carcinoma, sporadic breast cancer, sporadic medullary thyroid cancer and sporadic Kaposi's sarcoma), sporadic fatal insomnia, sporadic goitre, sporadic hemiplegic migraine, sporadic late-onset nemaline myopathy, sporadic neurofibroma and sporadic porphyria cutanea tarda.
== Potential source for an epidemic ==
If the conditions are favorable for its spread (pathogenicity, susceptibility of hosts, contact rate of individuals, population density, number of vaccinated or naturally immune individuals, etc.), a sporadic infectious disease may become the starting point of an epidemic.
For example, in developed countries, shigellosis (bacillary dysentery) is normally considered a sporadic disease, but in overcrowded places with poor sanitation and poor personal hygiene, it may become epidemic. Shigellosis was a sporadic disease in South Korea for many years until 1998, when the country experienced a sudden epidemic of shigellosis among school children. Contaminated school meals were identified as the major source of infection, and after several years, the infection rate declined significantly.
In another example, the South Asian country of Bangladesh experienced only sporadic cases of dengue fever, a mosquito-borne disease, from its first outbreak in 1964 until 1999. In 2000, however, a Thai/Myanmar strain of the highly pathogenic dengue type 3 virus arrived in the country and gave rise to a sudden epidemic, with 5,551 reported cases that year. Contributing factors included overpopulation and poor urbanization (which increase human–mosquito contact), highly favorable breeding grounds for the vector (such as open water reservoirs used by poor people and accumulated rainwater), and very little public awareness. The type 3 dengue virus subsided after 2002 and re-emerged in 2017, once again causing an outbreak in 2019.
== Difficulty of measuring ==
Molecular epidemiologist Lee Riley claims that most sporadic infections are actually part of unrecognized outbreaks, and that what appears to be endemic disease (from a traditional population-based epidemiology approach) actually consists of multiple small outbreaks (from a molecular epidemiology approach) in which seemingly unrelated (i.e., sporadic) cases are in reality epidemiologically related, because they belong to the same genotype of an infectious agent. Riley therefore considers the differentiation of a disease occurrence as either endemic or epidemic to be of little real meaning. According to Riley, since most so-called sporadic occurrences of an endemic disease are actually small epidemics, rapid public health interventions against such occurrences can be made in the same way as they are for recognized acute epidemics (i.e., epidemics in the traditional sense).
== Notes and references ==
=== Notes ===
=== References ===
Disease X is a placeholder name that was adopted by the World Health Organization (WHO) in February 2018 on their shortlist of blueprint priority diseases to represent a hypothetical, unknown pathogen. The WHO adopted the placeholder name to ensure that their planning was sufficiently flexible to adapt to an unknown pathogen (e.g., broader vaccines and manufacturing facilities). Former Director of the US National Institute of Allergy and Infectious Diseases Anthony Fauci stated that the concept of Disease X would encourage WHO projects to focus their research efforts on entire classes of viruses (e.g., flaviviruses), instead of just individual strains (e.g., zika virus), thus improving WHO capability to respond to unforeseen strains.
In 2020, experts, including some of the WHO's own expert advisors, speculated that COVID-19, caused by the SARS-CoV-2 virus strain, met the requirements to be the first Disease X. In December 2024, an unidentified disease in the Democratic Republic of the Congo that infected over 400 people and killed at least 79 was sometimes referred to as Disease X; it was later revealed to be an aggressive strain of malaria.
== Rationale ==
In May 2015, in pandemic preparations prior to the COVID-19 pandemic, the WHO was asked by member organizations to create an "R&D Blueprint for Action to Prevent Epidemics" to generate ideas that would reduce the time lag between the identification of viral outbreaks and the approval of vaccines/treatments, to stop the outbreaks from turning into a "public health emergency". The focus was to be on the most serious emerging infectious diseases (EIDs) for which few preventive options were available. A group of global experts, the "R&D Blueprint Scientific Advisory Group", was assembled by the WHO to draft a shortlist of less than ten "blueprint priority diseases".
Since 2015, the shortlist of EIDs has been reviewed annually and originally included widely known diseases such as Ebola and Zika which have historically caused epidemics, as well as lesser known diseases which have potential for serious outbreaks, such as SARS, Lassa fever, Marburg virus, Rift Valley fever, and Nipah virus. Since then, COVID-19 has been added to the list.
In February 2018, after the "2018 R&D Blueprint" meeting in Geneva, the WHO added Disease X to the shortlist as a placeholder for a "knowable unknown" pathogen. The Disease X placeholder acknowledged the potential for a future epidemic that could be caused by an unknown pathogen, and by its inclusion, challenged the WHO to ensure their planning and capabilities were flexible enough to adapt to such an event.
At the 2018 announcement of the updated shortlist of blueprint priority diseases, the WHO said: "Disease X represents the knowledge that a serious international epidemic could be caused by a pathogen currently unknown to cause human disease". John-Arne Røttingen, of the R&D Blueprint Special Advisory Group, said: "History tells us that it is likely the next big outbreak will be something we have not seen before", and "It may seem strange to be adding an 'X' but the point is to make sure we prepare and plan flexibly in terms of vaccines and diagnostic tests. We want to see 'plug and play' platforms developed which will work for any or a wide number of diseases; systems that will allow us to create countermeasures at speed". US expert Anthony Fauci said: "WHO recognizes it must 'nimbly move' and this involves creating platform technologies", and that to develop such platforms, WHO would have to research entire classes of viruses, highlighting flaviviruses. He added: "If you develop an understanding of the commonalities of those, you can respond more rapidly".
== Adoption ==
Jonathan D. Quick, the author of End of Epidemics, described the WHO's act of naming Disease X as "wise in terms of communicating risk", saying "panic and complacency are the hallmarks of the world's response to infectious diseases, with complacency currently in the ascendance". Women's Health wrote that the establishment of the term "might seem like an uncool move designed to incite panic" but that the whole purpose of including it on the list was to "get it on people's radars".
Richard Hatchett of the Coalition for Epidemic Preparedness Innovations (CEPI), wrote "It might sound like science fiction, but Disease X is something we must prepare for", noting that despite the success in controlling the 2014 Western African Ebola virus epidemic, strains of the disease had returned in 2018. In February 2019, CEPI announced funding of US$34 million to the German-based CureVac biopharmaceutical company to develop an "RNA Printer prototype", that CEPI said could "prepare for rapid response to unknown pathogens (i.e., Disease X)".
Parallels were drawn with the efforts by the United States Agency for International Development (USAID) and their PREDICT program, which was designed to act as an early warning pandemic system, by sourcing and researching animal viruses in particular "hot spots" of animal-human interaction.
In September 2019, The Daily Telegraph reported on how Public Health England (PHE) had launched its own investigation for a potential Disease X in the United Kingdom from the diverse range of diseases reported in their health system; they noted that 12 novel diseases and/or viruses had been recorded by PHE in the last decade.
In October 2019 in New York, the WHO's Health Emergencies Program ran a "Disease X dummy run" to simulate a global pandemic by Disease X, for its 150 participants from various world health agencies and public health systems to better prepare and share ideas and observations for combatting such an eventuality.
In March 2020, The Lancet Infectious Diseases published a paper titled "Disease X: accelerating the development of medical countermeasures for the next pandemic", which expanded the term to include Pathogen X (the pathogen that leads to Disease X), and identified areas of product development and international coordination that would help in combatting any future Disease X.
In April 2020, The Daily Telegraph described remdesivir, a drug being trialed to combat COVID-19, as an anti-viral that Gilead Sciences started working on a decade previously to treat a future Disease X.
In August 2023, the UK Government announced the creation of a new research center, located on the Porton Down campus, which is tasked with researching pathogens that have the potential to emerge as Disease X. Live viruses will be kept in specialist containment facilities in order to develop tests and potential vaccines within 100 days in case a new threat is identified.
In January 2024, during the World Economic Forum's annual meeting, Disease X was once again discussed as being a potential threat following the COVID-19 pandemic.
== Strategy ==
A paper published in 2022 listed the following strategies in preparation for Disease X:
reducing the risk of spillover and the consequent introduction and spread of a new disease in humans;
improving disease surveillance in humans and animals, to rapidly detect and sequence the infectious agent;
strengthening research programs to shorten the time lag between the development and production of medical countermeasures;
rapidly implementing pharmaceutical (e.g. vaccination) and non-pharmaceutical (e.g. social distancing) measures, to contain a large-scale epidemic;
developing international protocols to ensure fair distribution and global coverage of drugs and vaccines.
== Candidates ==
=== Zoonotic viruses ===
On the addition of Disease X in 2018, the WHO said it could come from many sources, citing hemorrhagic fevers and the more recent non-polio enteroviruses. However, Røttingen speculated that Disease X would be more likely to come from zoonotic transmission (an animal virus that jumps to humans), saying: "It's a natural process and it is vital that we are aware and prepare. It is probably the greatest risk". WHO special advisor Professor Marion Koopmans also noted that the rate at which zoonotic diseases were appearing was accelerating, saying: "The intensity of animal and human contact is becoming much greater as the world develops. This makes it more likely new diseases will emerge but also modern travel and trade make it much more likely they will spread".
==== COVID-19 (2019–present) ====
From the outset of the COVID-19 pandemic, experts have speculated whether COVID-19 met the criteria to be Disease X. In early February 2020, Chinese virologist Shi Zhengli of the Wuhan Institute of Virology suggested that the first Disease X had arrived in the form of a coronavirus. Later that month, Marion Koopmans, Head of Viroscience at Erasmus University Medical Center in Rotterdam and a member of the WHO's R&D Blueprint Special Advisory Group, wrote in the scientific journal Cell: "Whether it will be contained or not, this outbreak is rapidly becoming the first true pandemic challenge that fits the disease X category". At the same time, Peter Daszak, also a member of the WHO's R&D Blueprint, wrote in an opinion piece in The New York Times: "In a nutshell, Covid-19 is Disease X".
=== Synthetic viruses/bioweapons ===
At the 2018 announcement of the updated shortlist of blueprint priority diseases, the media speculated that a future Disease X could be created intentionally as a biological weapon. In 2018, WHO R&D Blueprint Special Advisor Group member Røttingen was questioned about the potential of Disease X to come from the ability of gene-editing technology to produce synthetic viruses (e.g., the 2017 synthesis of Orthopoxvirus in Canada was cited), which could be released through an accident or even an act of terror. Røttingen said it was unlikely that a future Disease X would originate from a synthetic virus or a bio-weapon. However, he noted the seriousness of such an event, saying, "Synthetic biology allows for the creation of deadly new viruses. It is also the case that where you have a new disease there is no resistance in the population and that means it can spread fast".
=== Bacterial infection ===
In September 2019, Public Health England (PHE) reported that the increasing antibiotic resistance of bacteria, even to "last-resort" antibiotics such as carbapenems and colistin, could also turn into a potential Disease X, citing the antibiotic resistance in gonorrhea as an example.
== In popular culture ==
In 2018, the Museum of London ran an exhibition titled "Disease X: London's next epidemic?", hosted to mark the centenary of the 1918 Spanish flu pandemic.
The term features in the title of several works of fiction that involve global pandemic diseases, such as Disease (2020) and Disease X: The Outbreak (2019).
== Conspiracy theories ==
Disease X has become the subject of several conspiracy theories, claiming that it may be a real disease, or conceived as a biological weapon, or engineered to create a planned epidemic.
== See also ==
Bioterrorism
Coalition for Epidemic Preparedness Innovations (CEPI)
Global Research Collaboration for Infectious Disease Preparedness (GloPID-R)
Synthetic virology
Nuremberg Code
== References ==
== External links ==
Blueprint priority diseases (Archived 2020-03-01 at the Wayback Machine)—World Health Organization (6–7 February 2018)
Prioritizing diseases for research and development in emergency contexts—World Health Organization (March 2018)
(Video) What is Disease X—World Health Organization (16 March 2018)
The mystery viruses far worse than flu—BBC News (November 2018)
Combination therapy or polytherapy is therapy that uses more than one medication or modality. Typically, the term refers to using multiple therapies to treat a single disease, and often all the therapies are pharmaceutical (although it can also involve non-medical therapy, such as the combination of medications and talk therapy to treat depression). 'Pharmaceutical' combination therapy may be achieved by prescribing/administering separate drugs, or, where available, dosage forms that contain more than one active ingredient (such as fixed-dose combinations).
Polypharmacy is a related term, referring to the use of multiple medications (without regard to whether they are for the same or separate conditions/diseases). Sometimes "polymedicine" is used to refer to pharmaceutical combination therapy. Most of these kinds of terms lack a universally consistent definition, so caution and clarification are often advisable.
== Uses ==
Conditions treated with combination therapy include tuberculosis, leprosy, cancer, malaria, and HIV/AIDS. One major benefit of combination therapies is that they reduce development of drug resistance since a pathogen or tumor is less likely to have resistance to multiple drugs simultaneously. Artemisinin-based monotherapies for malaria are explicitly discouraged to avoid the problem of developing resistance to the newer treatment.
Combination therapy may seem costlier than monotherapy in the short term, but when it is used appropriately, it causes significant savings: lower treatment failure rate, lower case-fatality ratios, fewer side-effects than monotherapy, slower development of resistance, and thus less money needed for the development of new drugs.
=== In oncology ===
Combination therapy has gained momentum in oncology, with various studies demonstrating higher response rates with combinations of drugs compared to monotherapies, and with the FDA approving therapeutic combination regimens that demonstrated superior safety and efficacy to monotherapies. In a study of solid cancers, Martin Nowak, Bert Vogelstein, and colleagues showed that in most clinical cases, combination therapies are needed to avoid the evolution of resistance to targeted drugs. Furthermore, they found that the simultaneous administration of multiple targeted drugs minimizes the chance of relapse when no single mutation confers cross-resistance to both drugs.
Various systems biology methods must be used to discover combination therapies to overcome drug resistance in select cancer types. Precision medicine approaches have focused on targeting multiple biomarkers found in individual tumors by using combinations of drugs. However, with 300 FDA-approved cancer drugs on the market, there are almost 45,000 possible two-drug combinations and almost 4.5 million three-drug combinations to choose from. That level of complexity is one of the primary impediments to the growth of combination therapy in oncology.
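These totals are simple binomial-coefficient counts; checking the figures quoted above:

$$\binom{300}{2} = \frac{300 \times 299}{2} = 44{,}850 \approx 45{,}000, \qquad \binom{300}{3} = \frac{300 \times 299 \times 298}{6} = 4{,}455{,}100 \approx 4.5 \text{ million}.$$

The combinatorial explosion continues with cocktail size: four-drug combinations of the same 300 drugs would already exceed 330 million possibilities, which is why exhaustive clinical testing of combinations is infeasible and computational prioritization is needed.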
The National Cancer Institute has recently highlighted combination therapy as a top research priority in oncology.
=== Bacterial infections ===
Combination therapy with two or more antibiotics is often used in an effort to treat multi-drug-resistant Gram-negative bacteria.
== Contrast to monotherapy ==
Monotherapy, or the use of a single therapy, can be applied to any therapeutic approach, but it is most commonly used to describe the use of a single medication. Normally, monotherapy is selected because a single medication is adequate to treat the medical condition. However, monotherapies may also be used because of unwanted side effects or dangerous drug interactions.
== See also ==
Polypill, a medication which contains a combination of multiple active ingredients
Combination drug
== References ==
== External links ==
Drug combination database, covering information on more than 1,300 drug combinations in either clinical use or different testing stages.
Perturbation biology method for the discovery of anti-resistance drug combinations with network pharmacology.
Tick-borne diseases, which afflict humans and other animals, are caused by infectious agents transmitted by tick bites. They are caused by infection with a variety of pathogens, including rickettsia and other types of bacteria, viruses, and protozoa.
The economic impact of tick-borne diseases is considered to be substantial in humans, and tick-borne diseases are estimated to affect ~80% of cattle worldwide. Most of these pathogens require passage through vertebrate hosts as part of their life cycle. Tick-borne infections in humans, farm animals, and companion animals are primarily associated with wildlife reservoirs. Many tick-borne infections in humans involve a complex cycle between wildlife reservoirs and tick vectors. The survival and transmission of tick-borne viruses are closely linked to their interactions with tick vectors and host cells. These viruses are classified into different families, including Asfarviridae, Reoviridae, Rhabdoviridae, Orthomyxoviridae, Bunyaviridae, and Flaviviridae.
The occurrence of ticks and tick-borne illnesses in humans is increasing. Tick populations are spreading into new areas, in part due to climate change. Tick populations are also affected by changes in the populations of their hosts (e.g. deer, cattle, mice, lizards) and those hosts' predators (e.g. foxes). Diversity and availability of hosts and predators can be affected by deforestation and habitat fragmentation.
Because individual ticks can harbor more than one disease-causing agent, patients can be infected with more than one pathogen at the same time, compounding the difficulty in diagnosis and treatment. As the incidence of tick-borne illnesses increases and the geographic areas in which they are found expand, health workers increasingly must be able to distinguish the diverse, and often overlapping, clinical presentations of these diseases.
As of 2020, 18 tick-borne pathogens have been identified in the United States, according to the Centers for Disease Control and Prevention, and at least 27 are known globally. New tick-borne diseases have been discovered in the 21st century, due in part to the use of molecular assays and next-generation sequencing.
== Prevention ==
=== Exposure ===
Ticks tend to be more active during warmer months, though this varies by geographic region and climate. Areas with woods, bushes, high grass, or leaf litter are likely to have more ticks. Those bitten commonly experience symptoms such as body aches, fever, fatigue, joint pain, or rashes. People can limit their exposure to tick bites by wearing light-colored clothing (including pants and long sleeves), using insect repellent with 20%–30% N,N-Diethyl-3-methylbenzamide (DEET), tucking their pants legs into their socks, checking for ticks frequently, and washing and drying their clothing in a hot dryer.
According to the World Health Organization, tick-to-animal transmission is difficult to prevent because infected animals do not show visible symptoms; the only effective prevention relies on killing ticks at the livestock production facility.
=== Symptoms ===
Ticks also have the potential to induce a motor illness characterized by acute, ascending flaccid paralysis. This condition can be fatal if not treated promptly, affecting both humans and animals, and is mainly associated with certain species of ticks. Symptoms typically range from fatigue, numbness in the legs, and muscle aches to, in some cases, paralysis and other severe neurological manifestations.
Tick-borne diseases (TBD) are a major health threat in the US. The number of pathogens and the burden of disease have been increasing over the last couple of decades. With improved diagnostics and surveillance, new pathogens are regularly identified, improving the understanding of TBDs. Unfortunately, diagnosis of these illnesses remains a challenge: many TBDs present with similar nonspecific symptoms, and diagnosis requires a battery of assays to assess patients adequately. Advanced molecular diagnostic methods, including next-generation sequencing and metagenomic analysis, promise improved detection of novel and emerging pathogens, with the ability to screen for a litany of potential pathogens in a single assay.
=== Tick removal ===
Ticks should be removed as soon as safely possible once discovered. They can be removed by grasping the tick with tweezers as close to the mouthparts as possible and pulling without rotation; some companies market grooved tools that rotate the hypostome to facilitate removal. Chemical methods to make the tick self-detach, or trying to pull the tick out with one's fingers, are not efficient. In Australia and New Zealand, where tick-borne infections are less common than tick reactions, the Australasian Society of Clinical Immunology and Allergy recommends seeking medical assistance or killing ticks in situ by freezing and then leaving them to fall out, to prevent allergic/anaphylactic reactions.
== Diagnosis ==
Diagnosing tick-borne diseases involves a dual approach: some diagnoses rely on clinical observations and symptom analysis, while others are confirmed through laboratory tests. Ticks can transmit a wide range of viruses, many of which are arboviruses. In general, specific laboratory tests are not available for rapid diagnosis of tick-borne diseases. Due to their seriousness, antibiotic treatment is often justified based on clinical presentation alone.
Diagnosing Lyme borreliosis relies on clinical criteria, with a history of a tick bite and associated symptoms being crucial. Laboratory diagnosis follows a 'two-tiered diagnostic protocol,' involving detecting specific antibodies using methods such as immunoenzymatic assays and Western blot tests, preferably with recombinant antigens. While ELISA and Western blot have similar sensitivity, Western blot is more specific due to the identification of specific immunoreactive bands. Seroconversion typically occurs around two weeks after symptom onset, but false positive ELISA results can be linked to poorly reactive antibodies against specific antigens, especially in patients with other infectious and non-infectious diseases.
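As a concrete illustration of the two-tiered protocol described above, the minimal sketch below encodes the decision logic in Python. The function name and parameters are invented for illustration; the band-counting thresholds (2 of 3 IgM bands within roughly the first four weeks, 5 of 10 IgG bands) follow the conventional CDC immunoblot interpretation criteria, and a real laboratory work-up involves more nuance than this.

```python
def two_tier_lyme_serology(elisa_reactive: bool,
                           igm_bands: int,
                           igg_bands: int,
                           weeks_since_onset: float) -> str:
    """Sketch of the two-tiered decision logic for Lyme serology."""
    # Tier 1: screening immunoassay (e.g., ELISA).
    # A non-reactive screen ends the work-up.
    if not elisa_reactive:
        return "negative (tier 1 non-reactive)"
    # Tier 2: immunoblot (Western blot) on reactive or equivocal samples.
    # The IgM blot is only interpretable early in infection, since
    # seroconversion takes roughly two weeks and IgM wanes afterwards.
    if weeks_since_onset <= 4 and igm_bands >= 2:
        return "positive (IgM immunoblot criteria met)"
    if igg_bands >= 5:
        return "positive (IgG immunoblot criteria met)"
    return "negative (tier 2 criteria not met)"

# Example: reactive ELISA three weeks after symptom onset,
# with 2 of 3 IgM bands present.
print(two_tier_lyme_serology(True, igm_bands=2, igg_bands=1,
                             weeks_since_onset=3))
```

The two-tier design trades the sensitivity of the screening assay against the specificity of the immunoblot, which is why a reactive ELISA alone is not reported as a positive diagnosis.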
Tick-borne encephalitis (TBE) presents non-specific clinical features, making laboratory diagnosis crucial. The diagnostic process typically involves identifying specific IgM- and IgG-serum antibodies through enzyme-linked immunosorbent assay (ELISA) since these antibodies are detectable in most cases upon hospitalization.
== Treatment ==
Patients with Lyme disease who are treated with appropriate antibiotics usually recover rapidly and completely. Antibiotics commonly used include doxycycline, amoxicillin, and cefuroxime axetil. For anaplasmosis, ehrlichiosis, and Rocky Mountain spotted fever, doxycycline is the first-line treatment for adults and children of all ages. For babesiosis, a combination therapy with atovaquone and azithromycin is most commonly recommended for treatment of mild to moderate disease and is usually continued for 7 to 10 days. A combination regimen of oral clindamycin and quinine has also proven effective, but the rate of adverse reactions is significantly higher with this combination. There are no medications for treating Powassan virus infections; medications can, however, help to relieve symptoms and prevent complications. People with severe disease are typically treated in a hospital, where they may be given intravenous fluids, fever-reducing medications, breathing support, and other therapies as needed.
== Assessing risk ==
For a person or pet to acquire a tick-borne disease, the individual must be bitten by a tick, and the tick must feed for a sufficient period of time. The feeding time required to transmit pathogens differs for different ticks and different pathogens. Transmission of the bacterium that causes Lyme disease is well understood to require a substantial feeding period. In general, soft ticks (Argasidae) transmit pathogens within minutes of attachment because they feed more frequently, whereas hard ticks (Ixodidae) take hours or days; the latter, however, are more common and harder to remove.
For an individual to acquire infection, the feeding tick must also be infected; not all ticks are infected. In most places in the US, 30–50% of deer ticks will be infected with Borrelia burgdorferi (the agent of Lyme disease). Other pathogens are much rarer. Ticks can be tested for infection using a highly specific and sensitive qPCR procedure. Several commercial labs provide this service to individuals for a fee. The Laboratory of Medical Zoology (LMZ), a nonprofit lab at the University of Massachusetts, provides a comprehensive TickReport for a variety of human pathogens and makes the data available to the public. Those wishing to know the incidence of tick-borne diseases in their town or state can search the LMZ surveillance database.
== Examples ==
Major tick-borne diseases include:
=== Bacterial ===
Lyme disease or borreliosis
Organism: Borrelia burgdorferi sensu lato (bacterium)
Vector: at least 15 species of ticks in the genus Ixodes, including deer tick (Ixodes scapularis (=I. dammini), I. pacificus, I. ricinus (Europe), I. persulcatus (Asia))
Endemic to: The Americas and Eurasia
Symptoms: Fever, arthritis, neuroborreliosis, erythema migrans, cranial nerve palsy, carditis, fatigue, and influenza-like illness
Treatment: Antibiotics – amoxicillin in pregnant adults and children, doxycycline in other adults
Relapsing fever (tick-borne relapsing fever, different from Lyme disease due to different Borrelia species and ticks)
Organisms: Borrelia species such as B. hermsii, B. parkeri, B. duttoni, B. miyamotoi
Vector: Ornithodoros species
Regions: Primarily in Africa, Spain, Saudi Arabia, and Asia, and in certain areas of Canada and the western United States
Symptoms: Relapsing fever typically presents as recurring high fevers, flu-like symptoms, headaches, and muscular pain, with less common symptoms including rigors, joint pain, altered mentation, cough, sore throat, painful urination, and rash
Treatment: Antibiotics are the treatment for relapsing fever, with doxycycline, tetracycline, or erythromycin being the treatment of choice.
Typhus: several diseases caused by Rickettsia bacteria (below)
Rocky Mountain spotted fever
Organism: Rickettsia rickettsii
Vector: Wood tick (Dermacentor variabilis), D. andersoni
Region (US): East, Southwest
Vector: Amblyomma cajennense
Region (Brazil): São Paulo, Rio de Janeiro, Minas Gerais.
Symptoms: Fever, headache, altered mental status, myalgia, and rash
Treatment: Antibiotic therapy, typically consisting of doxycycline or tetracycline
Helvetica spotted fever
Organism: Rickettsia helvetica
Region (R. helvetica): Confirmed common in ticks in Sweden, Switzerland, France, and Laos
Vector/region(s): Ixodes ricinus is the main European vector.
Symptoms: Most often small red spots, other symptoms are fever, muscle pain, headache and respiratory problems
Treatment: Broad-spectrum antibiotic therapy is needed, phenoxymethylpenicillin likely is sufficient.
Human granulocytic anaplasmosis (formerly human granulocytic ehrlichiosis or HGE)
Organism: Anaplasma phagocytophilum (formerly Ehrlichia phagocytophilum or Ehrlichia equi)
Vector: Lone star tick (Amblyomma americanum), I. scapularis
Region (US): South Atlantic, South-central
Bartonella: Bartonella transmission rates to humans via tick bite are not well established, but Bartonella is common in ticks; for example, Bartonella was found in 4.76% of 2,100 ticks tested in a study in Germany.
Tularemia
Organism: Francisella tularensis
Vector: A. americanum, D. variabilis, D. andersoni
Region (US): Southeast, South-central, West, widespread
=== Viral ===
Tick-borne meningoencephalitis
Organism: TBEV (FSME) virus, a flavivirus from family Flaviviridae
Vector: deer tick (Ixodes scapularis), Ixodes ricinus (Europe), Ixodes persulcatus (Russia and Asia)
Endemic to: Europe and northern Asia
Powassan virus/deer tick virus
Organism: Powassan virus (POWV), a flavivirus from family Flaviviridae. Lineage 2 POWV is also known as deer tick virus (DTV)
Vector: Ixodes cookei, Ix. scapularis, Ix. marxi, Ix. spinipalpis, Dermacentor andersoni, and D. variabilis
Endemic to: North America and eastern Russia
Colorado tick fever
Organism: Colorado tick fever virus (CTF), a coltivirus from the Reoviridae
Vector: Dermacentor andersoni
Region: US (West)
Crimean-Congo hemorrhagic fever
Organism: CCHF virus, a nairovirus, from the Bunyaviridae
Vector: Hyalomma marginatum, Rhipicephalus bursa
Region: Southern part of Asia, Northern Africa, Southern Europe
Severe febrile illness
Organism: Heartland virus, a phlebovirus, from the Bunyaviridae
Vector: Lone star tick (Amblyomma americanum)
Region: Missouri and Tennessee, United States
Severe febrile illness, headaches, coma in 1/3 of patients
Organism: tentatively Alongshan virus, jingmenvirus group in the flavivirus family
Vector: tick (likely Ixodes persulcatus, Ixodes ricinus), mosquitoes
Region: Inner Mongolia but potentially more widespread
=== Protozoan ===
Babesiosis
Organism: Babesia microti, Theileria equi
Vector: Ixodes scapularis (deer tick), I. pacificus (western black-legged tick)
Region (US): Northeast, West Coast
Cytauxzoonosis
Organism: Cytauxzoon felis
Vector: Amblyomma americanum (Lone star tick)
Region (US): South, Southeast
=== Toxin ===
Tick paralysis
Cause: Toxin
Vector (US): Dermacentor andersoni (Rocky Mountain wood tick), D. variabilis (American dog tick or wood tick)
Region (US): D. andersoni: East, D. variabilis: East, West coast
Vector (Australia): Ixodes holocyclus (Australian paralysis tick)
Region (Australia): East
=== Allergies ===
Alpha-gal allergy - Alpha-gal syndrome is likely caused by a hypersensitivity reaction to the alpha-gal (galactose-alpha-1,3-galactose) sugar molecule introduced by ticks while feeding on a human host. The immune reaction can leave people with an allergy to red meat and other mammalian-derived products.
How tick bites contribute to the development of AGS has been experimentally confirmed and investigated using a mouse model.
== See also ==
== References ==
== External links ==
UK's One Health Vector-Borne Diseases Hub
Tick-Borne Diseases: Recommendations for Workers and Employers—National Institute for Occupational Safety and Health
Tickborne Diseases—National Center for Infectious Diseases (CDC)
Tickborne Disease Website—Massachusetts Department of Public Health
Ixodes Scapularis—3D animation of Deer or Blacklegged Tick from US Army site
Parasitic Insects, Mites and Ticks: Genera of Medical and Veterinary Importance—Wikibooks
Surendra RS; Shahid Karim (2021). "Tick Saliva and the Alpha-Gal Syndrome: Finding a Needle in a Haystack". Frontiers in Cellular and Infection Microbiology. 11. doi:10.3389/fcimb.2021.680264. PMC 8331069. PMID 34354960.
Phage therapy, viral phage therapy, or phagotherapy is the therapeutic use of bacteriophages for the treatment of pathogenic bacterial infections. This therapeutic approach emerged at the beginning of the 20th century but was progressively replaced by the use of antibiotics in most parts of the world after the Second World War. Bacteriophages, known as phages, are a form of virus that attach to bacterial cells and inject their genome into the cell. The bacterium's production of the viral genome interferes with its ability to function, halting the bacterial infection. The bacterial cell causing the infection is unable to reproduce and instead produces additional phages. Phages are very selective in the strains of bacteria they are effective against.
Advantages include reduced side effects and reduced risk of the bacterium developing resistance, since bacteriophages are much more specific than antibiotics. They are typically harmless not only to the host organism but also to other beneficial bacteria, such as the gut microbiota, reducing the chances of opportunistic infections. They have a high therapeutic index; that is, phage therapy would be expected to give rise to few side effects, even at higher-than-therapeutic levels. Because phages replicate in vivo (in cells of living organism), a smaller effective dose can be used.
Disadvantages include the difficulty of finding an effective phage for a particular infection; a phage will kill a bacterium only if it matches the specific strain. However, virulent phages can be isolated much more easily than other compounds and natural products. Consequently, phage mixtures ("cocktails") are sometimes used to improve the chances of success. Alternatively, samples taken from recovering patients sometimes contain appropriate phages that can be grown to cure other patients infected with the same strain. Ongoing challenges include the need to increase phage collections from reference phage banks, the development of efficient phage screening methods for the fast identification of the therapeutic phage(s), the establishment of efficient phage therapy strategies to tackle infectious biofilms, the validation of feasible phage production protocols that assure quality and safety of phage preparations, and the guarantee of stability of phage preparations during manufacturing, storage, and transport.
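The rationale for cocktails can be made concrete with a toy probability model (an illustrative assumption, not a figure from the phage literature): if each of the $k$ phages in a cocktail independently matches a given clinical isolate with probability $p$, the chance that at least one phage matches is

$$P(\text{at least one match}) = 1 - (1 - p)^k.$$

For instance, with $p = 0.3$, a single phage fails 70% of the time, while a five-phage cocktail matches $1 - 0.7^5 \approx 83\%$ of isolates. Real phage host ranges overlap rather than being independent, so this only illustrates why mixtures raise the chance of success; it is not a substitute for empirical screening against the patient's isolate.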
Phages tend to be more successful than antibiotics where there is a biofilm covered by a polysaccharide layer, which antibiotics typically cannot penetrate. Phage therapy can disperse the biofilm generated by antibiotic-resistant bacteria. However, the interactions between phages and biofilms can be complex, with phages developing symbiotic as well as predatory relationships with biofilms.
Phages are currently being used therapeutically to treat bacterial infections that do not respond to conventional antibiotics, particularly in Russia and Georgia. There is also a phage therapy unit in Wrocław, Poland, established in 2005, which continues several-decades-long research by the Institute of Immunology and Experimental Therapy of the Polish Academy of Sciences, the only such centre in a European Union country. Phages are the subject of renewed clinical attention in Western countries, such as the United States. In 2019, the United States Food and Drug Administration approved the first US clinical trial for intravenous phage therapy.
Phage therapy has many potential applications in human medicine as well as dentistry, veterinary science, and agriculture. If the target host of a phage therapy treatment is not an animal, the term "biocontrol" (as in phage-mediated biocontrol of bacteria) is usually employed, rather than "phage therapy".
== History ==
The discovery of bacteriophages was reported by British bacteriologist Frederick Twort in 1915 and by French microbiologist Felix d'Hérelle in 1917. D'Hérelle said that the phages always appeared in the stools of Shigella dysentery patients shortly before they began to recover. He "quickly learned that bacteriophages are found wherever bacteria thrive: in sewers, in rivers that catch waste runoff from pipes, and in the stools of convalescent patients". Phage therapy was immediately recognized by many to be a key way forward for the eradication of pathogenic bacterial infections. A Georgian, George Eliava, was making similar discoveries. He travelled to the Pasteur Institute in Paris, where he met d'Hérelle, and in 1923, he founded the Institute of Bacteriology, which later became known as the George Eliava Institute, in Tbilisi, Georgia, devoted to the development of phage therapy. Phage therapy is used in Russia, Georgia and Poland, and was used prophylactically for a time in the Soviet army, most notably during the Second World War.
In the Soviet Union, extensive research and development soon began in this field. In the United States during the 1940s, commercialization of phage therapy was undertaken by Eli Lilly and Company.
While knowledge was being accumulated regarding the biology of phages and how to use phage cocktails correctly, early uses of phage therapy were often unreliable. Since the early 20th century, research into the development of viable therapeutic antibiotics had also been underway, and by 1942, the antibiotic penicillin G had been successfully purified and saw use during the Second World War. The drug proved to be extraordinarily effective in the treatment of injured Allied soldiers whose wounds had become infected. By 1944, large-scale production of penicillin had been made possible, and in 1945, it became publicly available in pharmacies. Due to the drug's success, it was marketed widely in the US and Europe, leading Western scientists to mostly lose interest in further use and study of phage therapy for some time.
Isolated from Western advances in antibiotic production in the 1940s, Soviet scientists continued to develop already successful phage therapy to treat the wounds of soldiers in field hospitals. During World War II, the Soviet Union used bacteriophages to treat soldiers infected with various bacterial diseases, such as dysentery and gangrene. Soviet researchers continued to develop and to refine their treatments and to publish their research and results. However, due to the scientific barriers of the Cold War, this knowledge was not translated and did not proliferate across the world. A summary of these publications was published in English in 2009 in "A Literature Review of the Practical Application of Bacteriophage Research".
There is an extensive library and research center at the George Eliava Institute in Tbilisi, Georgia. Phage therapy is today a widespread form of treatment in that region.
As a result of the development of antibiotic resistance since the 1950s and an advancement of scientific knowledge, there has been renewed interest worldwide in the ability of phage therapy to eradicate bacterial infections and chronic polymicrobial biofilm (including in industrial situations).
Phages have been investigated as a potential means to eliminate pathogens like Campylobacter in raw food and Listeria in fresh food or to reduce food spoilage bacteria. In agricultural practice, phages have been used to fight pathogens like Campylobacter, Escherichia, and Salmonella in farm animals, Lactococcus and Vibrio pathogens in fish aquaculture, and Erwinia, Xanthomonas, and others in plants of agricultural importance. The oldest use is, however, in human medicine. Phages have been used against diarrheal diseases caused by E. coli, Shigella, or Vibrio and against wound infections caused by facultative pathogens of the skin like staphylococci and streptococci. Recently, the phage therapy approach has been applied to systemic and even intracellular infections, and non-replicating phage and isolated phage enzymes like lysins have been added to the antimicrobial arsenal. However, actual proof for the efficacy of these phage approaches in the field or the hospital is not available.
Some of the interest in the West can be traced back to 1994, when James Soothill demonstrated (in an animal model) that the use of phages could improve the success of skin grafts by reducing the underlying Pseudomonas aeruginosa infection. Subsequent studies have provided additional support for these findings in this model system.
Although not "phage therapy" in the original sense, the use of phages as delivery mechanisms for traditional antibiotics constitutes another possible therapeutic use. The use of phages to deliver antitumor agents has also been described in preliminary in vitro experiments for cells in tissue culture.
In June 2015, the European Medicines Agency hosted a one-day workshop on the therapeutic use of bacteriophages, and in July 2015, the US National Institutes of Health hosted a two-day workshop titled "Bacteriophage Therapy: An Alternative Strategy to Combat Drug Resistance".
In January 2016, phages were used successfully at Yale University by Benjamin Chan to treat a chronic Pseudomonas aeruginosa infection in ophthalmologist Ali Asghar Khodadoust. This successful treatment of a life-threatening infection sparked a resurgence of interest in phage therapy in the United States.
In 2017, a pair of genetically engineered phages, along with one naturally occurring phage (so-called "phage Muddy"), each from among those catalogued by SEA-PHAGES (Science Education Alliance-Phage Hunters Advancing Genomics and Evolutionary Science) at the Howard Hughes Medical Institute by Graham Hatfull and colleagues, was used by microbiologist James Soothill at Great Ormond Street Hospital for Children in London to treat an antibiotic-resistant bacterial (Mycobacterium abscessus) infection in a young woman with cystic fibrosis.
In 2022, two mycobacteriophages were administered intravenously twice daily to a young man with treatment-refractory Mycobacterium abscessus pulmonary infection and severe cystic fibrosis lung disease. Airway cultures for M. abscessus became negative after approximately 100 days of combined phage and antibiotic treatment, and a variety of biomarkers confirmed the therapeutic response. The individual received a bilateral lung transplant after 379 days of treatment, and cultures from the explanted lung tissue confirmed eradication of the bacteria. In a second case, successful treatment of disseminated cutaneous Mycobacterium chelonae was reported with a single phage administered intravenously twice daily in conjunction with antibiotic and surgical management.
== Potential benefits ==
Bacteriophage treatment offers a possible alternative to conventional antibiotic treatments for bacterial infection. It is conceivable that, although bacteria can develop resistance to phages, the resistance might be easier to overcome than resistance to antibiotics. Viruses, just like bacteria, can evolve resistance to different treatments.
Bacteriophages are very specific, targeting only one or a few strains of bacteria. Traditional antibiotics have a more wide-ranging effect, killing both harmful and useful bacteria, such as those facilitating food digestion. The species and strain specificity of bacteriophages makes it unlikely that harmless or useful bacteria will be killed when fighting an infection.
A few research groups in the West are engineering broader-spectrum phages, as well as a variety of forms of MRSA treatment, including impregnated wound dressings, preventative treatment for burn victims, and phage-impregnated sutures. Enzybiotics are a newer development at Rockefeller University that derives enzymes from phages; purified recombinant phage enzymes can be used as separate antibacterial agents in their own right.
Phage therapy also has the potential to prevent or treat infectious diseases of corals. This could mitigate the global coral decline.
== Applications ==
=== Collection ===
Phages for therapeutic use can be collected from environmental sources likely to contain high quantities of bacteria and bacteriophages, such as effluent outlets, sewage, or even soil. Samples are applied to cultures of the bacteria to be targeted; if the bacteria die, the phages responsible can be propagated in liquid culture.
=== Modes of treatment ===
Phages are "bacterium-specific", and therefore, it is necessary in many cases to take a swab from the patient and culture it prior to treatment. Occasionally, isolation of therapeutic phages can require a few months to complete, but clinics generally keep supplies of phage cocktails for the most common bacterial strains in a geographical area.
Phage cocktails are commonly sold in pharmacies in Eastern European countries, such as Russia and Georgia. The composition of bacteriophagic cocktails has been periodically modified to add phages effective against emerging pathogenic strains.
In practice, phages are applied orally, topically on infected wounds, spread onto surfaces, or used during surgical procedures. Injection is rarely used, both to avoid trace chemical contaminants that may remain from the bacterial amplification stage and because the immune system naturally attacks viruses introduced into the bloodstream or lymphatic system.
Reviews of phage therapy indicate that more clinical and microbiological research is needed to meet current standards.
=== Clinical trials ===
Funding for phage therapy research and clinical trials is generally insufficient and difficult to obtain, since patenting bacteriophage products is a lengthy and complex process. Because of the specificity of phages, phage therapy would be most effective as an injected cocktail of multiple phages, a modality the US Food and Drug Administration (FDA) has generally rejected. Researchers and observers have therefore predicted that if phage therapy is to gain traction, the FDA must change its regulatory stance on combination drug cocktails. Public awareness of phage therapy remains largely confined to scientific and independent research rather than mainstream media.
In 2007, phase-1 and 2 clinical trials were completed at the Royal National Throat, Nose and Ear Hospital, London, for Pseudomonas aeruginosa infections (otitis).
Phase-1 clinical trials were conducted at the Southwest Regional Wound Care Center of Lubbock, Texas, for a cocktail of phages against P. aeruginosa, Staphylococcus aureus, and Escherichia coli, developed by Intralytix. PhagoBurn, a phase-1 and 2 trial of phage therapy against P. aeruginosa wound infection in France and Belgium in 2015–17, was terminated early due to lack of effectiveness.
Locus Biosciences has created a cocktail of three CRISPR-modified phages. A 2019 study examined its effectiveness against E. coli in the urinary tract, and a phase-1 trial was completed shortly before March 2021. In February 2019, the FDA approved the first clinical trial of intravenously administered phage therapy in the United States.
In July 2020, the FDA approved the first clinical trial of nebulized phage therapy in the United States. This double-blind, placebo-controlled study at Yale University will be focused on treating P. aeruginosa infections in patients with cystic fibrosis.
In February 2020, the FDA approved a clinical trial to evaluate bacteriophage therapy in patients with urinary tract infections. The study started in December 2020 and aims to identify ideal bacteriophage treatment regimens based on improvements in disease control rates.
In February 2021, the FDA approved a clinical trial to evaluate bacteriophage therapy in patients with chronic prosthetic joint infections (PJI). The study was to begin in October 2022 and be conducted by Adaptive Phage Therapeutics, in collaboration with the Mayo Clinic.
=== Administration ===
==== As pills ====
If administered as pills, phages can be freeze-dried; this procedure does not reduce their efficacy. Temperature stability up to 55 °C and shelf lives of 14 months have been demonstrated for some types of phages in pill form.
==== Liquid ====
Application in liquid form is possible, stored preferably in refrigerated vials. Oral administration works better when an antacid is included, as this increases the number of phages surviving passage through the stomach. Topical administration often involves application to gauzes that are laid on the area to be treated. Liquid bacteriophages are also utilized for local applications, such as wound dressings and topical treatments, as well as external administration, including sprays and rinses.
==== Via nebulizer ====
The July 2020 application for FDA approval of the first US clinical trial of nebulized phage therapy does not specify a particular type of nebulizer, such as a compressor or ultrasonic device. Bacteriophages are being studied as candidates for treating bacterial lung infections, especially those caused by multidrug-resistant (MDR) bacteria. In these studies, bacteriophage solutions are administered via nebulizers, mostly of the compressor type, which are commonly used because they generate a fine mist that can reach the lower respiratory tract. The stability and viability of phages during nebulization are crucial for therapeutic efficacy, and current studies focus on whether phages remain viable and effective when delivered this way; the choice of nebulizer can affect both phage stability and delivery efficiency.
In contrast to compressor nebulizers, ultrasonic nebulizers can impair bacteriophage viability: the ultrasonic waves used to generate the aerosol can physically damage the phages, potentially reducing their effectiveness. Preliminary research suggests that the high-frequency vibrations and heat generated during nebulization can cause a significant loss of phage activity, so a key challenge is ensuring that phages remain undamaged during the process. Studies have shown that phages can be sensitive to the shear forces generated during nebulization, but current research suggests that, with proper formulation and device selection, their viability can be maintained.
=== Successful treatments ===
Phages were used successfully at Yale University by Benjamin Chan to treat a Pseudomonas infection in 2016. Intravenous phage drip therapy was successfully used to treat a patient with multidrug-resistant Acinetobacter baumannii in Thornton Hospital at UC San Diego in 2017. Nebulized phage therapy has been used successfully to treat numerous patients with cystic fibrosis and multidrug-resistant bacteria at Yale University as part of their compassionate use program. In 2019, a Brownsville, Minnesota resident with a longstanding bacterial infection in his knee received a phage treatment at the Mayo Clinic that eliminated the need for amputation of his lower leg. Individualised phage therapy was also successfully used by Robert T. Schooley and others to treat a case of multi-drug-resistant Acinetobacter baumannii in 2015. In 2022, an individually adjusted phage-antibiotic combination as an antimicrobial resistance treatment was demonstrated and described in detail. The scientists called for scaling up the research and for further development of this approach.
=== Treatment of biofilm infections ===
Phage therapy is being used to great effect in the treatment of biofilm infections, especially those caused by Pseudomonas aeruginosa and Staphylococcus aureus. In 78 recent cases of biofilm infection treated with phage therapy, 96% of patients showed clinical improvement and 52% experienced complete symptom relief or full eradication of the infecting bacteria. Biofilm infections are very challenging to treat with antibiotics: the biofilm matrix and surrounding bacterial membranes can bind antibiotics, preventing them from penetrating the biofilm; the matrix may contain enzymes that deactivate antibiotics; and biofilms have low metabolic activity, so antibiotics that target growth processes have much lower efficacy. These factors make phage therapy an enticing option for such infections, and there are currently two approaches to treatment. The first is to isolate the infecting bacterium and prepare a specific phage to target it; the second is to use a combination of more general phages. The advantage of the second method is that it can easily be made commercially available, although there are concerns that it may be substantially less effective.
== Limitations ==
The high bacterial strain specificity of phage therapy may make it necessary for clinics to make different cocktails for treatment of the same infection or disease, because the bacterial components of such diseases may differ from region to region or even person to person. In addition, this means that "banks" containing many different phages must be kept and regularly updated with new phages.
Further, bacteria can evolve altered receptors before or during treatment, which can prevent phages from completely eradicating them.
The need for banks of phages makes regulatory safety testing harder and more expensive under current rules in most countries, which would make large-scale use of phage therapy difficult. Additionally, patent issues (specifically on living organisms) may complicate distribution for pharmaceutical companies wishing to hold exclusive rights over their "invention", discouraging commercial investment.
As has been known for at least thirty years, mycobacteria such as Mycobacterium tuberculosis have specific bacteriophages. No lytic phage has yet been discovered for Clostridioides difficile, which is responsible for many nosocomial diseases, but some temperate phages (integrated in the genome, also called lysogenic) are known for this species; this opens encouraging avenues but with additional risks, as discussed below.
The negative public perception of viruses may contribute to the reluctance to embrace phage therapy.
=== Development of resistance ===
One of the major concerns associated with phage therapy is the emergence of bacteriophage-insensitive mutants (BIMs) that could hinder the success of the therapy. Several in vitro studies have reported the rapid emergence of BIMs within a short time after phage treatment. The emergence of BIMs has also been observed in vivo in different animal models, although it usually occurs later than in vitro. This fast adaptation of bacteria to phage attack is usually caused by mutations in genes encoding phage receptors, which include lipopolysaccharides (LPS), outer membrane proteins, capsules, flagella, and pili, among others. However, some studies suggest that when phage resistance arises from mutations in phage receptors, it may impose fitness costs on the resistant bacterium, which ultimately becomes less virulent. Moreover, it has been shown that the evolution of bacterial resistance to phage attack can alter efflux pump mechanisms, causing increased sensitivity to drugs from several antibiotic classes. It is therefore conceivable that phage therapy using phages that select for multidrug-resistant bacteria to become antibiotic-sensitive could reduce the incidence of antibiotic-resistant infections.
Besides preventing phage adsorption through loss or modification of bacterial receptors, phage insensitivity can be caused by: prevention of phage DNA entry by superinfection exclusion systems; degradation of phage DNA by restriction-modification or CRISPR-Cas systems; and abortive infection systems that block phage replication, transcription, or translation, usually in conjunction with suicide of the host cell. Altogether, these mechanisms promote rapid adaptation of bacteria to phage attack, and the emergence of phage-resistant mutants is therefore frequent and unavoidable.
It is still unclear whether the wide use of phages would cause resistance similar to that observed for antibiotics. In theory, this is unlikely, since phages are very specific and their selective pressure would therefore affect a very narrow group of bacteria. However, many phage resistance systems are carried on mobile genetic elements, including prophages and plasmids, and may thus spread quite rapidly even without direct selection. Nevertheless, in contrast to antibiotics, phage preparations for therapeutic applications are expected to be developed in a personalized way because of the high specificity of phages. In addition, strategies have been proposed to counter the problem of phage resistance. One is the use of phage cocktails with complementary host ranges (different host ranges which, when combined, give an overall broader host range) that target different bacterial receptors, as sketched below. Another is the combination of phages with other antimicrobials, such as antibiotics, disinfectants, or enzymes, that can enhance their antibacterial activity. Genetic manipulation of phage genomes can also be a strategy to circumvent phage resistance.
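Cocktail design with complementary host ranges can be viewed as a set-cover problem: choose a small set of phages whose combined host ranges span all target strains. The article does not prescribe any algorithm for this; the following Python sketch uses a standard greedy set-cover heuristic, with invented phage names and host-range data, purely to make the idea concrete.

```python
def design_cocktail(phage_host_ranges, target_strains):
    """Greedy set-cover heuristic (illustrative sketch): repeatedly pick
    the phage that lyses the most still-uncovered target strains."""
    uncovered = set(target_strains)
    cocktail = []
    while uncovered:
        best = max(phage_host_ranges,
                   key=lambda p: len(phage_host_ranges[p] & uncovered))
        gained = phage_host_ranges[best] & uncovered
        if not gained:  # remaining strains are outside every host range
            break
        cocktail.append(best)
        uncovered -= gained
    return cocktail, uncovered  # chosen phages and any unreachable strains

# Invented host-range data for illustration only
host_ranges = {
    "phiA": {"K1", "K2"},
    "phiB": {"K2", "K3", "K4"},
    "phiC": {"K4", "K5"},
}
print(design_cocktail(host_ranges, ["K1", "K2", "K3", "K4", "K5"]))
# -> (['phiB', 'phiA', 'phiC'], set())
```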
== Safety aspects ==
Bacteriophages are bacterial viruses, evolved to infect bacterial cells. To do that, phages must use characteristic structures at cell surfaces (receptors), and to propagate they need appropriate molecular tools inside the cells. Bacteria are prokaryotes, and their cells differ substantially from eukaryotes, including humans or animals. For this reason, phages meet the major safety requirement: they do not infect treated individuals. Even engineered phages and induced artificial internalization of phages into mammalian cells do not result in phage propagation. Natural transcytosis of unmodified phages, that is, uptake and internal transport to the other side of a cell, which was observed in human epithelial cells, did not result in phage propagation or cell damage. Recently, however, it was reported that filamentous temperate phages of P. aeruginosa can be endocytosed into human and murine leukocytes, resulting in transcription of the phage DNA. In turn, the product RNA triggers maladaptive innate viral pattern-recognition responses and thus inhibits the immune clearance of the bacteria. Whether this also applies to dsDNA phages like Caudovirales has not yet been established; this is an important question to be addressed as it may affect the overall safety of phage therapy.
Due to many experimental treatments in human patients conducted in past decades, and to already existing RCTs (see section: Clinical experience and randomized controlled trials), phage safety can be assessed directly. The first safety trial in healthy human volunteers for a phage was conducted by Bruttin and Brüssow in 2005. They investigated the oral administration of Escherichia coli phage T4 and found no adverse effects of the treatment.
The historical record shows that phages are safe, with mild side effects, if any. Still, administering bacteriophages can induce an immune response, and macrophages, key cells of the innate immune system, play a central role in mediating it. The most frequent (though still rare) adverse reactions to phage preparations found in patients were digestive-tract symptoms, local reactions at the site of administration, superinfections, and a rise in body temperature. These reactions might have occurred either because toxins were released from bacteria destroyed by the phages (such toxin release can also happen with antibiotic use) or because unpurified preparations contained leftover bacterial fragments or residual components of the bacterial growth medium ("food for bacteria").
When bacteriophages are introduced into the body, they may be recognized as foreign entities by macrophages through pattern recognition receptors (PRRs) such as Toll-like receptors (TLRs). The binding of bacteriophages to these receptors triggers macrophage activation, leading to phagocytosis (macrophages engulf and digest the bacteriophages) and cytokine production: activated macrophages produce pro-inflammatory cytokines. These cytokines can modulate the immune response but generally do not result in significant fever when phages are used appropriately.
The route by which bacteriophages enter the body affects the degree of immune activation. Applying bacteriophages directly to the mucosa targets the site of infection with minimal systemic exposure, producing a localized immune response. Injection into muscle tissue exposes them to a larger number of macrophages in the muscle and regional lymph nodes, while intravenous injection exposes them to macrophages throughout the body, including those in the spleen and liver. Even so, significant elevations in body temperature are uncommon and are typically observed only with rapid administration or high doses. Anticipating these immune responses allows healthcare professionals to monitor patients appropriately and adjust treatment if necessary. Intravenous administration of bacteriophages is conducted under strict medical supervision by specialists in infectious diseases within a hospital setting, because of potential adverse reactions. These may include hypotension (a drop in blood pressure that can lead to loss of consciousness) and the Jarisch–Herxheimer reaction, a sudden drop (chills) and rise (fever) in body temperature caused by the rapid lysis of bacteria: rapid lysis releases endotoxins (e.g., lipopolysaccharides from gram-negative bacteria) that can trigger systemic inflammatory responses, including "cytokine storms". After intravenous phage administration, heart rate, blood pressure, and temperature are monitored continuously to detect early signs of adverse reactions. Successful treatment of life-threatening infections with intravenous phage therapy has been documented: patients have responded after one or several administrations, clearing infections that were unresponsive to conventional treatments, in part because phages can disrupt biofilms, which are often resistant to antibiotics.
Bacteriophages must be produced in bacteria that are lysed (i.e., fragmented) during phage propagation. As such, phage lysates contain bacterial debris that may affect the human organism even when the phage itself is harmless. For these and other reasons, purification of bacteriophages is considered important, and phage preparations need to be assessed for their safety as a whole, particularly when phages are to be administered intravenously. This is consistent with general procedures for other drug candidates. In 2015, a group of phage therapy experts summarized the quality and safety requirements for sustainable phage therapy.
Phage effects on the human microbiome also contribute to safety issues in phage therapy. Many phages, especially temperate ones, carry genes that can affect the pathogenicity of the host. Even lambda, a temperate phage of the E. coli K-12 laboratory strain, carries two genes that provide potential virulence benefits to the lysogenic host, one that increases intestinal adherence and the other that confers resistance to complement killing in the blood. For this reason, temperate phages are generally to be avoided as candidates for phage therapy, although in some cases, the lack of lytic phage candidates and emergency conditions may make such considerations moot. Another potential problem is generalized transduction, a term for the ability of some phages to transfer bacterial DNA from one host to another. This occurs because the systems for packaging of the phage DNA into capsids can mistakenly package host DNA instead. Indeed, with some well-characterized phages, up to 5% of the virus particles contain only bacterial DNA. Thus in a typical lysate, the entire genome of the propagating host is present in more than a million copies in every milliliter. For these reasons, it is imperative that any phage to be considered for therapeutic usage should be subjected to thorough genomic analysis and tested for the capacity for generalized transduction.
As antibacterials, phages may also affect the composition of microbiomes by infecting and killing phage-sensitive strains of bacteria. However, a major advantage of bacteriophages over antibiotics is their high specificity, which limits antibacterial activity to the sub-species level; typically, a phage kills only selected bacterial strains. For this reason, phages are much less likely than antibiotics to disturb the composition of a natural microbiome or to induce dysbiosis. This was demonstrated in experimental studies in which microbiome composition, assessed by next-generation sequencing, showed no important changes correlated with phage treatment in humans.
Much of the difficulty in obtaining regulatory approval stems from the risks of using a self-replicating entity that is capable of evolving.
As with antibiotic therapy and other methods of countering bacterial infections, endotoxins are released by the bacteria as they are destroyed within the patient (the Jarisch–Herxheimer reaction). This can cause fever and, in extreme cases, toxic shock (a problem also seen with antibiotics). Janakiraman Ramachandran argues that, in those types of infection where this reaction is likely, the complication can be avoided by using genetically engineered bacteriophages from which the gene responsible for producing endolysin has been removed. Without this gene, the host bacterium still dies but remains intact, because lysis is disabled. On the other hand, this modification stops the exponential growth of phages, so one administered phage means at most one dead bacterial cell. Eventually, the dead cells are consumed by the normal house-cleaning duties of the phagocytes, which use enzymes to break down the whole bacterium and its contents into harmless proteins, polysaccharides, and lipids.
Temperate (or lysogenic) bacteriophages are not generally used therapeutically, since this group can act as a vehicle for bacteria to exchange DNA, which can help spread antibiotic resistance or even, theoretically, make bacteria pathogenic, as in the case of cholera. Carl Merril has claimed that harmless strains of Corynebacterium may have been converted into C. diphtheriae that "probably killed a third of all Europeans who came to North America in the seventeenth century". Many phages, however, appear to be strictly lytic, with a negligible probability of becoming lysogenic.
== Regulation and legislation ==
Approval of phage therapy for use in humans has not been given in Western countries, with a few exceptions. In the United States, Washington and Oregon law allows naturopathic physicians to use any therapy that is legal anywhere in the world on an experimental basis, and in Texas, phages are considered natural substances and can be used in addition to (but not as a replacement for) traditional therapy (they have been used routinely in a wound care clinic in Lubbock since 2006).
In 2013, "the 20th biennial Evergreen International Phage Meeting ... conference drew 170 participants from 35 countries, including leaders of companies and institutes involved with human phage therapies from France, Australia, Georgia, Poland, and the United States."
In France, phage therapy officially disappeared with the withdrawal of phage preparations from the Vidal dictionary (France's official drug directory) in 1978. The last phage preparation, produced by l'Institut du Bactériophage, was an ointment against skin infections. Phage therapy research ceased at about the same time across the country, with the closure of the bacteriophage department at the Pasteur Institute. Some hospital physicians continued to offer phage therapy until the 1990s, when production died out.
On their rediscovery, at the end of the 1990s, phage preparations were classified as medicines, i.e., "medicinal products" in the EU or "drugs" in the US. However, the pharmaceutical legislation that had been implemented since their disappearance from Western medicine was mainly designed to cater for industrially-made pharmaceuticals, devoid of any customization and intended for large-scale distribution, and it was not deemed necessary to provide phage-specific requirements or concessions.
Today's phage therapy products need to comply with the entire battery of medicinal product licensing requirements: manufacturing according to GMP, preclinical studies, phase I, II, and III clinical trials, and marketing authorisation. Technically, industrially produced, predefined phage preparations could make it through the conventional pharmaceutical processes, allowing for some adaptations. However, phage specificity and resistance issues are likely to give these defined preparations a relatively short useful lifespan. The pharmaceutical industry is currently not pursuing phage therapy products, although a handful of small and medium-sized enterprises have shown interest, with the help of risk capital and/or public funding. Currently, no defined therapeutic phage product has reached the EU or US markets.
According to Jean-Paul Pirnay, therapeutic phages should be prepared individually and kept in large phage banks, ready to be used, upon testing for effectiveness against the patient's bacterial pathogen(s). Intermediary or combined (industrially made as well as precision phage preparations) approaches could be appropriate. However, it turns out to be difficult to reconcile classical phage therapy concepts, which are based on the timely adaptation of phage preparations, with current Western pharmaceutical R&D and marketing models. Repeated calls for a specific regulatory framework have not been heeded by European policymakers. A phage therapy framework based on the Biological Master File concept has been proposed as a (European) solution to regulatory issues, but European regulations do not allow for an extension of this concept to biologically active substances such as phages.
Meanwhile, representatives from the medical, academic, and regulatory communities have established some (temporary) national solutions. For instance, phage applications have been performed in Europe under the umbrella of Article 37 (Unproven Interventions in Clinical Practice) of the Helsinki Declaration. To enable the application of phage therapy after Poland had joined the EU in 2004, the Ludwik Hirszfeld Institute of Immunology and Experimental Therapy in Wrocław opened its own Phage Therapy Unit (PTU). Phage therapy performed at the PTU is considered an "experimental treatment", covered by the adapted Act of 5 December 1996 on the Medical Profession (Polish Law Gazette, 2011, No. 277 item 1634) and Article 37 of the Helsinki Declaration. Similarly, in the last few years, a number of phage therapy interventions have been performed in the US under the FDA's emergency Investigational New Drug (eIND) protocol.
Some patients have been treated with phages under the umbrella of "compassionate use", which is a treatment option that allows a physician to use a not-yet-authorized medicine in desperate cases. Under strict conditions, medicines under development can be made available for use in patients for whom no satisfactory authorized therapies are available and who cannot participate in clinical trials. In principle, this approach can only be applied to products for which earlier study results have demonstrated efficacy and safety, but have not yet been approved. Much like Article 37 of the Helsinki Declaration, the compassionate use treatment option can only be applied when the phages are expected to help in life-threatening or chronic and/or seriously debilitating diseases that are not treatable with formally approved products.
In France, ANSM, the French medicine agency, has organized a specific committee for phage therapy, the Comité Scientifique Spécialisé Temporaire (CSST), which consists of experts in various fields. Their task is to evaluate and guide each phage therapy request that reaches the ANSM. Phage therapy requests are discussed with the treating physicians, and consensus advice is sent to the ANSM, which then decides whether or not to grant permission. Between 2006 and 2018, fifteen patients were treated in France via this pathway, eleven of whom recovered.
In Belgium, in 2016 and in response to a number of parliamentary questions, Maggie De Block, the Minister of Social Affairs and Health, acknowledged that phages cannot readily be treated as industrially made drugs, and therefore proposed investigating whether the magistral preparation pathway could offer a solution. Magistral preparations (compounded in compounding pharmacies in the US) are not subject to certain constraints such as GMP compliance and marketing authorization. As the magistral preparation framework was created to allow for adapted patient treatments and/or the use of medicines for which there is no commercial interest, it seemed a suitable framework for precision phage therapy. Magistral preparations are medicines prepared in a pharmacy in accordance with a medical prescription for an individual patient. They are made by a pharmacist (or under a pharmacist's supervision) from their constituent ingredients, according to the technical and scientific standards of pharmaceutical technology. Phage active pharmaceutical ingredients to be included in magistral preparations must meet the requirements of a monograph, which describes their production and quality-control testing, and must be accompanied by a certificate of analysis issued by a "Belgian Approved Laboratory" accredited to perform batch-release testing of medicinal products. Since 2019, phages have been delivered as magistral preparations to nominal patients in Belgium.
The first phage therapy case in China can be traced back to 1958, at Shanghai Jiao Tong University School of Medicine. However, many regulations had not yet been established at the time, and interest in phage therapy soon faded owing to the prevalence of antibiotics, whose widespread use eventually contributed to the antimicrobial resistance crisis. That crisis prompted researchers in China, as well as the Chinese government, to turn their attention to phage therapy again, and following the first investigator-initiated trial (IIT) by the Shanghai Institute of Phage in 2019, phage therapy rapidly flourished. Currently, commercial phage therapy applications must go through one of two pathways: the first is for fixed-ingredient phage products; the second is for personalized phage products, which must go through IITs, in which case the products are regulated as restrictive medical technologies.
== Application in other species ==
=== Animals ===
Phage therapy has been a relevant mode of treatment in animals for decades. It has been proposed as a method of treating bacterial infections in the veterinary medical field in response to the rampant use of antibiotics. Studies have investigated the application of phage therapy in livestock species as well as companion animals. Brigham Young University has been researching the use of phage therapy to treat American foulbrood in honeybees. Phage therapy is also being investigated for potential applications in aquaculture.
=== Plants ===
Phage therapy has been studied for bacterial spot of stonefruit, caused by Xanthomonas pruni (syn. X. campestris pv. pruni, syn. X. arboricola pv. pruni) in Prunus species. Some treatments have been very successful.
== Cultural impact ==
The 1925 novel and 1926 Pulitzer Prize winner Arrowsmith by Sinclair Lewis used phage therapy as a plot point.
Greg Bear's 2002 novel Vitals features phage therapy, based on Soviet research, used to transfer genetic material.
The 2012 collection of military history essays about the changing role of women in warfare, Women in War – From Home Front to Front Line includes a chapter featuring phage therapy: "Chapter 17: Women who thawed the Cold War".
Steffanie A. Strathdee's book The Perfect Predator: An Epidemiologist's Journey to Save Her Husband from a Deadly Superbug, co-written with her husband, Thomas Patterson, was published by Hachette Book Group in 2019. It describes Strathdee's ultimately successful attempt to introduce phage therapy as a life-saving treatment for her husband, critically ill with a completely antibiotic-resistant Acinetobacter baumannii infection following severe pancreatitis.
== See also ==
Antimicrobial resistance
Paul E. Turner
Phage display
Phage monographs
Prophage
== References ==
This article was adapted from the following source under a CC BY 4.0 license (2021) (reviewer reports):
Joana Azeredo, Jean-Paul Pirnay, Diana Priscila Pires, Mzia Kutateladze, Krystyna Dabrowska, Rob Lavigne, Bob G Blasdel (15 December 2021). "Phage Therapy" (PDF). WikiJournal of Medicine. 8 (1). WikiJournal of Medicine: 4. doi:10.15347/WJM/2021.004. ISSN 2002-4436. Wikidata Q100400597.
== Further reading ==
== External links ==
iBiology video: Phage Therapy (2016)
Popular Science – "The Next Phage" (2009)
Agent-based models have many applications in biology, primarily due to the characteristics of the modeling method. Agent-based modeling is a rule-based, computational modeling methodology that focuses on rules and interactions among the individual components, or agents, of the system. The goal of this modeling method is to generate populations of the system components of interest and simulate their interactions in a virtual world. Agent-based models start with rules for behavior and seek to reconstruct, through computational instantiation of those behavioral rules, the observed patterns of behavior.
== Characteristics ==
Several of the characteristics of agent-based models important to biological studies include:
=== Modular structure ===
The behavior of an agent-based model is defined by the rules of its agents. Existing agent rules can be modified or new agents can be added without having to modify the entire model.
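As a concrete illustration of this modularity, consider the hypothetical Python sketch below (not drawn from any particular toolkit): each agent type owns a list of rule functions, so a new agent type or rule is added by registration alone, and the scheduling loop never changes.

```python
# Hypothetical sketch: behaviour lives in per-type rule lists, so new
# agent types or rules are registered without touching the scheduler.
rules = {}

def register(agent_type):
    def wrap(fn):
        rules.setdefault(agent_type, []).append(fn)
        return fn
    return wrap

@register("aphid")
def move(agent, world):
    agent["x"] = agent.get("x", 0) + 1            # placeholder movement rule

@register("aphid")
def age(agent, world):
    agent["age"] = agent.get("age", 0.0) + 0.01   # placeholder ageing rule

def step(agents, world):
    # The scheduler is generic: it simply applies each agent's own rules
    for agent in agents:
        for rule in rules.get(agent["type"], []):
            rule(agent, world)

population = [{"type": "aphid"} for _ in range(3)]
step(population, world={})
print(population[0])  # {'type': 'aphid', 'x': 1, 'age': 0.01}
```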
=== Emergent properties ===
Through the use of individual agents that interact locally according to rules of behavior, agent-based models produce a synergy that leads to a higher-level whole whose behavior is far more intricate than that of any individual agent.
=== Abstraction ===
By excluding non-essential details, or when details are simply not available, agent-based models can be constructed without complete knowledge of the system under study. This allows the model to be kept as simple and verifiable as possible.
=== Stochasticity ===
Biological systems exhibit behavior that appears to be random. The probability of a particular behavior can be determined for a system as a whole and then be translated into rules for the individual agents.
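For instance, if a population-level estimate says 2% of individuals die per day, the same figure can be applied as an independent Bernoulli trial for each agent at each time step. The snippet below is an illustrative sketch with an invented mortality rate, not a rule from any study cited here.

```python
import random

DAILY_MORTALITY = 0.02  # population-level estimate, applied per agent

def survives_today(agent):
    """Bernoulli trial: each agent independently dies with the
    population-level daily probability."""
    return random.random() >= DAILY_MORTALITY

agents = [{"id": i, "age": 0} for i in range(10_000)]
agents = [a for a in agents if survives_today(a)]
print(f"{len(agents)} of 10000 agents survived the first day")
```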
== Modelling different species behaviour ==
In an ecological context, agent-based modeling can be used to model the behaviour of different species, including forest insect infestations, other invasive species, aphids, aquatic populations, and the evolution of innate foraging behaviors.
=== Forest insect infestations ===
Agent-based modeling has been used to simulate attack behavior of the mountain pine beetle (MPB), Dendroctonus ponderosae, in order to evaluate how different harvesting policies influence spatial characteristics of the forest and spatial propagation of the MPB infestation over time. About two-thirds of the land in British Columbia, Canada, is covered by forests that are constantly being modified by natural disturbances such as fire, disease, and insect infestation. Forest resources make up approximately 15% of the province's economy, so infestations caused by insects such as the MPB can have significant economic impacts. MPB outbreaks are considered a major natural disturbance that can result in widespread mortality of the lodgepole pine, one of the most abundant commercial tree species in British Columbia. Insect outbreaks have resulted in the death of trees over areas of several thousand square kilometers.
The agent-based model developed for this study was designed to simulate the MPB attack behavior in order to evaluate how management practices influence the spatial distribution and patterns of the insect population and its preferences for attacked and killed trees. Three management strategies were considered by the model: 1) no management, 2) sanitation harvest, and 3) salvage harvest. In the model, the Beetle Agent represented the MPB behavior, the Pine Agent represented the forest environment and tree health evolution, and the Forest Management Agent represented the different management strategies. The Beetle Agent follows a series of rules to decide where to fly within the forest and to select a healthy tree to attack, feed on, and breed in. The MPB typically kills host trees in its natural environment in order to reproduce successfully: the beetle larvae feed on the inner bark of mature host trees, eventually killing them. For the beetles to reproduce, the host tree must be sufficiently large and have thick inner bark. MPB outbreaks end when the food supply decreases to the point that it can no longer sustain the population or when climatic conditions become unfavorable for the beetle. The Pine Agent simulates the resistance of the host tree, specifically the lodgepole pine, and monitors the state and attributes of each stand of trees. At some point in an MPB attack, the number of beetles per tree reaches the host tree's capacity; when this point is reached, the beetles release a chemical that directs other beetles to attack other trees. The Pine Agent models this behavior by calculating the beetle population density per stand and passing the information to the Beetle Agents. The Forest Management Agent was used, at the stand level, to simulate two common silviculture practices (sanitation and salvage), as well as a strategy in which no management practice was employed. With the sanitation harvest strategy, if a stand has an infestation rate greater than a set threshold, the stand is removed, along with any healthy neighboring stand whose average tree size exceeds a set threshold. With the salvage harvest strategy, a stand is removed even if it is not under MPB attack when a predetermined number of neighboring stands are under attack.
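The qualitative effect of the three policies can be caricatured in a few lines of code. The toy simulation below is not the published model: it is a one-dimensional sketch with invented spread dynamics and thresholds, intended only to show how policy rules plug into an agent-style update loop.

```python
import random

# Toy 1-D re-creation of the three harvesting policies (all dynamics
# and thresholds are invented; the published model is spatial and richer).
def simulate(policy, years=5, n_stands=100, seed=1):
    rng = random.Random(seed)
    infest = [rng.random() * 0.1 for _ in range(n_stands)]  # initial attack levels
    for _ in range(years):
        # Beetles spill over from the more infested neighbouring stand
        spread = [min(1.0, x + 0.5 * max(infest[max(i - 1, 0)],
                                         infest[min(i + 1, n_stands - 1)]))
                  for i, x in enumerate(infest)]
        if policy == "sanitation":
            # Remove stands whose own infestation exceeds the threshold
            infest = [0.0 if x > 0.3 else x for x in spread]
        elif policy == "salvage":
            # Remove stands whose neighbours are heavily attacked,
            # even if the stand itself is not
            infest = [0.0 if max(spread[max(i - 1, 0)],
                                 spread[min(i + 1, n_stands - 1)]) > 0.3 else x
                      for i, x in enumerate(spread)]
        else:
            infest = spread
    return sum(infest) / n_stands  # mean infestation after the run

for policy in ("none", "sanitation", "salvage"):
    print(policy, round(simulate(policy), 3))
```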
The study considered a forested area of approximately 560 hectares in the North-Central Interior of British Columbia, consisting primarily of lodgepole pine with smaller proportions of Douglas fir and white spruce. The model was executed for five time steps, each representing a single year, and thirty simulation runs were conducted for each management strategy. The results showed that the highest overall MPB infestation occurred when no management strategy was employed. They also showed that the salvage harvest technique reduced the number of forest stands killed by the MPB by 25%, compared with a 19% reduction under the sanitation harvest strategy. In summary, the results show that the model can be used as a tool to inform forest management policies.
=== Invasive species ===
Invasive species are "non-native" plants and animals that adversely affect the environments they invade. The introduction of invasive species may have environmental, economic, and ecological implications. An agent-based model can be developed to evaluate the impacts of port-specific and importer-specific enforcement regimes for a given agricultural commodity that presents an invasive species risk, with the goals of improving the allocation of enforcement resources and providing policy makers with a tool for answering further questions about border enforcement and invasive species risk.
The agent-based model developed for the study considered three types of agents: invasive species, importers, and border enforcement agents. In the model, the invasive species can only react to their surroundings, while the importers and border enforcement agents are able to make their own decisions based on their own goals and objectives. The invasive species has the ability to determine if it has been released in an area containing the target crop, and to spread to adjacent plots of the target crop. The model incorporates spatial probability maps that are used to determine if an invasive species becomes established. The study focused on shipments of broccoli from Mexico into California through the ports of entry Calexico, California and Otay Mesa, California. The selected invasive species of concern was the crucifer flea beetle (Phyllotreta cruciferae). California is by far the largest producer of broccoli in the United States and so the concern and potential impact of an invasive species introduction through the chosen ports of entry is significant. The model also incorporated a spatially explicit damage function that was used to model invasive species damage in a realistic manner. Agent-based modeling provides the ability to analyze the behavior of heterogeneous actors, so three different types of importers were considered that differed in terms of commodity infection rates (high, medium, and low), pretreatment choice, and cost of transportation to the ports. The model gave predictions on inspection rates for each port of entry and importer and determined the success rate of border agent inspection, not only for each port and importer but also for each potential level of pretreatment (no pretreatment, level one, level two, and level three).
The model was implemented and run in NetLogo, version 3.1.5. Spatial information on the location of the ports of entry, major highways, and transportation routes was included in the analysis, as well as a map of California broccoli crops layered with invasive species establishment probability maps. BehaviorSpace, a software tool integrated with NetLogo, was used to test the effects of different parameters (e.g., shipment value, pretreatment cost) in the model. On average, 100 iterations were calculated at each level of the parameter being varied, where an iteration represented a one-year run.
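The sweep pattern that BehaviorSpace automates, running the model many times at each parameter level and averaging the outcomes, reduces to two nested loops in any language. In the hypothetical Python sketch below, run_model is a stand-in for a full model run and its response curve is invented; it is not part of NetLogo's API or the study's code.

```python
import random
import statistics

def run_model(shipment_value, pretreatment_cost, seed):
    """Stand-in for one one-year model run (hypothetical function):
    returns simulated crop damage for a given parameter setting."""
    rng = random.Random(seed)
    # Toy response: damage falls as pretreatment becomes more attractive
    return max(0.0, rng.gauss(shipment_value / (1 + pretreatment_cost), 5.0))

# Sweep one parameter, ~100 replicate iterations per level, as in the study
for pretreatment_cost in (1, 2, 4):
    damages = [run_model(100, pretreatment_cost, seed) for seed in range(100)]
    print(pretreatment_cost, round(statistics.mean(damages), 1))
```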
The results of the model showed that as inspection efforts increase, importers exercise greater due care, or pretreatment of shipments, and the total monetary loss of California crops decreases. The model showed that importers respond to increased inspection effort in different ways: some responded by increasing pretreatment effort, while others chose to avoid shipping to a specific port or shopped for another port. An important outcome is that the model can inform policy makers about the point at which importers may begin to shop for ports, such as the inspection rate at which port shopping sets in and which importers, given their pest risk or transportation costs, are likely to make these changes. Another interesting outcome concerns inspector learning: when inspectors were not able to respond to an importer with previously infested shipments, damage to California broccoli crops was estimated at $150 million, whereas when inspectors were able to increase inspection rates for importers with previous violations, damage was reduced by approximately 12%. The model provides a mechanism for predicting the introduction of invasive species from agricultural imports and their likely damage and, equally important, gives policy makers and border control agencies a tool for determining the best allocation of inspection resources.
=== Aphid population dynamics ===
An agent-based model can be used to study the population dynamics of the bird cherry-oat aphid, Rhopalosiphum padi. The study was conducted in a five-square-kilometer region of North Yorkshire, a county located in the Yorkshire and the Humber region of England. The agent-based modeling method was chosen because of its focus on the behavior of individual agents rather than the population as a whole. The authors propose that traditional models focusing on whole populations do not take into account the complexity of concurrent interactions in ecosystems, such as reproduction and competition for resources, which may have significant impacts on population trends. The agent-based approach also allows modelers to create more generic and modular models that are more flexible and easier to maintain than approaches focusing on the population as a whole. Other proposed advantages of agent-based models include realistic representation of a phenomenon of interest through the interactions of a group of autonomous agents, and the ability to integrate quantitative variables, differential equations, and rule-based behavior in the same model.
The model was implemented in the modeling toolkit Repast using the Java programming language. It was run in daily time steps and focused on the autumn and winter seasons. Input data included habitat data; daily minimum, maximum, and mean temperatures; and wind speed and direction. For the Aphid agents, age, position, and morphology (alate or apterous) were considered. Age ranged from 0.00 to 2.00, with 1.00 being the point at which an agent becomes an adult. Reproduction by the Aphid agents depends on age, morphology, and the daily minimum, maximum, and mean temperatures. Once nymphs hatch, they remain in the same location as their parents; their morphology is related to population density and the nutrient quality of the aphid's food source. The model also considered mortality among the Aphid agents, which depends on age, temperature, and habitat quality, and the speed at which an Aphid agent ages, which is determined by the daily minimum, maximum, and mean temperatures. Movement of the Aphid agents occurs in two separate phases, a migratory phase and a foraging phase, both of which affect the overall population distribution.
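A stripped-down rendering of the Aphid agent's state and daily update might look like the following. The ageing and mortality functions are invented linear placeholders, since the article does not reproduce the published model's temperature and habitat response curves.

```python
import random
from dataclasses import dataclass

@dataclass
class Aphid:
    age: float = 0.0     # 0.00-2.00; adult at 1.00
    alate: bool = False  # winged (alate) vs wingless (apterous)
    x: int = 0
    y: int = 0

    def daily_step(self, t_mean, habitat_quality):
        """One daily time step: age with temperature, then test survival.
        Both response functions are invented placeholders, not the
        published model's curves."""
        self.age += max(0.0, 0.004 * t_mean)             # warmer days -> faster ageing
        risk = 0.01 * self.age + 0.02 * (1.0 - habitat_quality)
        return random.random() >= risk                   # True if the aphid survives

aphid = Aphid(alate=True)
print(aphid.daily_step(t_mean=8.0, habitat_quality=0.8), aphid.age)
```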
The study started the simulation run with an initial population of 10,000 alate aphids distributed across a grid of 25 meter cells. The simulation results showed that there were two major population peaks, the first in early autumn due to an influx of alate immigrants and the second due to lower temperatures later in the year and a lack of immigrants. Ultimately, it is the goal of the researchers to adapt this model to simulate broader ecosystems and animal types.
=== Aquatic population dynamics ===
A model has been proposed to study the population dynamics of two species of macrophytes. Aquatic plants play a vital role in their ecosystems, providing shelter and food for other aquatic organisms. However, they may also have harmful impacts, such as the excessive growth of non-native plants or eutrophication of the lakes in which they live, leading to anoxic conditions. Given these possibilities, it is important to understand how the environment and other organisms affect the growth of these aquatic plants so that harmful impacts can be mitigated or prevented.
Potamogeton pectinatus is one of the aquatic plant agents in the model. It is an annual plant that absorbs nutrients from the soil and reproduces through root tubers and rhizomes. Its reproduction is not affected by water flow, but can be influenced by animals, other plants, and humans. The plant can grow up to two meters tall, so it can only grow in water up to certain depths, and most of its biomass is concentrated at the top of the plant in order to capture the most sunlight possible. The second plant agent is Chara aspera, also a rooted aquatic plant. A major difference between the two plants is that Chara aspera reproduces through very small seeds called oospores, and through bulbils, which are spread by the flow of water. Chara aspera grows only up to 20 cm and requires very good light conditions as well as good water quality, all of which limit its growth. It has a higher growth rate than Potamogeton pectinatus but a much shorter life span. The model also considered environmental and animal agents. The environmental agents were water flow, light penetration, and water depth. Flow conditions, although of little importance to Potamogeton pectinatus, directly affect the seed dispersal of Chara aspera, determining both the direction and the distance over which the seeds are distributed. Light penetration strongly influences Chara aspera because it requires high water quality. The extinction coefficient (EC) is a measure of light penetration in water: as the EC increases, the growth rate of Chara aspera decreases. Finally, depth matters for both species: as water depth increases, light penetration decreases, making it difficult for either species to survive beyond certain depths.
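The interaction between extinction coefficient and depth is conventionally captured by the Beer-Lambert law, under which irradiance decays exponentially with depth. The snippet below applies this standard relation at the lake's average depth; the EC values are illustrative, not parameters from the study.

```python
import math

def light_at_depth(surface_light, extinction_coefficient, depth_m):
    """Beer-Lambert attenuation: irradiance decays exponentially with
    depth, and faster in turbid water (higher extinction coefficient)."""
    return surface_light * math.exp(-extinction_coefficient * depth_m)

# Compare clear vs turbid water at the lake's 1.55 m average depth
# (illustrative EC values, not parameters from the study)
for ec in (0.5, 2.0):
    frac = light_at_depth(1.0, ec, 1.55)
    print(f"EC={ec}: {frac:.0%} of surface light reaches the average depth")
```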
The area of interest in the model was Lake Veluwe in the Netherlands, a relatively shallow lake with an average depth of 1.55 meters covering about 30 square kilometers. The lake is under eutrophication stress, which means that nutrients are not a limiting factor for either of the plant agents in the model. The initial positions of the plant agents were randomly determined. The model was implemented using the Repast software package and was executed to simulate the growth and decay of the two plant agents, taking into account the environmental agents discussed above as well as interactions with other plant agents. The results show that the simulated population distribution of Chara aspera has a spatial pattern very similar to GIS maps of observed distributions. The authors conclude that the agent rules developed in the study are reasonable for simulating the spatial pattern of macrophyte growth in this particular lake.
== Cell-based modeling ==
Agent-based modeling is increasingly used to model the behaviour of individual cells within a tissue. These models are divided into on-lattice models, such as cellular automata and the cellular Potts model, and off-lattice models, such as center-based models, vertex-based models, immersed boundary method models, and models based on the subcellular element method. Some examples of specific applications of cell-based modeling are:
=== Bacteria aggregation leading to biofilm formation ===
An agent-based model can be used to model the colonisation of bacteria onto a surface, leading to the formation of biofilms. The purpose of iDynoMiCS (individual-based Dynamics of Microbial Communities Simulator) is to simulate the growth of populations and communities of individual microbes (small unicellular organisms such as bacteria, archaea, and protists) that compete for space and resources in biofilms immersed in aquatic environments. iDynoMiCS can be used to explore how individual microbial dynamics lead to emergent population- or biofilm-level properties and behaviours. Examining such formations is important in soil and river studies, dental hygiene studies, infectious disease and medical-implant-related infection research, and for understanding biocorrosion. An agent-based modelling paradigm was employed to make it possible to explore how each individual bacterium, of a particular species, contributes to the development of the biofilm. The initial illustration of iDynoMiCS considered how environmentally fluctuating oxygen availability affects the diversity and composition of a community of denitrifying bacteria, which induce the denitrification pathway under anoxic or low-oxygen conditions. The study explored the hypothesis that the diversity of denitrification strategies observed in an environment can be explained solely by assuming that a faster response incurs a higher cost. The agent-based model suggests that if metabolic pathways can be switched without cost, then the faster the switching the better; however, where faster switching incurs a higher cost, there is a strategy with an optimal response time for any given frequency of environmental fluctuations. This suggests that different denitrifying strategies win in different biological environments. Since its introduction, the applications of iDynoMiCS have continued to increase, a recent exploration of plasmid invasion in biofilms being one example. This study explored the hypothesis that poor plasmid spread in biofilms is caused by a dependence of conjugation on the growth rate of the plasmid donor agent. Through simulation, the paper suggests that plasmid invasion into a resident biofilm is limited only when plasmid transfer depends on growth. Sensitivity analysis techniques suggested that parameters relating to timing (the lag before plasmid transfer between agents) and spatial reach are more important for plasmid invasion into a biofilm than the receiving agent's growth rate or the probability of segregational loss. Further examples using iDynoMiCS continue to be published, including the modelling of a Pseudomonas aeruginosa biofilm with a glucose substrate.
iDynoMiCS has been developed by an international team of researchers to provide a common platform for further development of all individual-based models of microbial biofilms and similar systems. The model was originally the result of years of work by Laurent Lardon, Brian Merkey, and Jan-Ulrich Kreft, with code contributions from Joao Xavier. With additional funding from the National Centre for Replacement, Refinement, and Reduction of Animals in Research (NC3Rs) in 2013, the development of iDynoMiCS as a tool for biological exploration continues apace, with new features added when appropriate. From its inception, the team has committed to releasing iDynoMiCS as an open-source platform, encouraging collaborators to develop additional functionality that can then be merged into the next stable release. iDynoMiCS is implemented in the Java programming language, with MATLAB and R scripts provided for analysing results. Biofilm structures formed in simulation can be viewed as a movie using POV-Ray files generated as the simulation runs.
=== Mammary stem cell enrichment following irradiation during puberty ===
Experiments have shown that exposing pubertal mammary glands to ionizing radiation results in an increased ratio of mammary stem cells in the gland. This matters because stem cells are thought to be key targets for cancer initiation by ionizing radiation: they have the greatest long-term proliferative potential, and mutagenic events persist in multiple daughter cells. Additionally, epidemiological data show that children exposed to ionizing radiation incur a substantially greater breast cancer risk than adults. These experiments prompted questions about the underlying mechanism of the increase in mammary stem cells following radiation, which can be explored with two agent-based models used in parallel with in vivo and in vitro experiments to evaluate cell inactivation, dedifferentiation via epithelial-mesenchymal transition (EMT), and self-renewal (symmetric division) as mechanisms by which radiation could increase stem cell numbers.
The first agent-based model is a multiscale model of mammary gland development, starting from a rudimentary mammary ductal tree at the onset of puberty (during active proliferation) and running to a full mammary gland in adulthood (when there is little proliferation). The model consists of millions of agents, each representing a mammary stem cell, a progenitor cell, or a differentiated cell in the breast. Simulations were first run on the Lawrence Berkeley National Laboratory Lawrencium supercomputer to parameterize and benchmark the model against a variety of in vivo mammary gland measurements. The model was then used to test the three candidate mechanisms and determine which produced simulation results that best matched the in vivo experiments. Surprisingly, in the model, radiation-induced cell inactivation by death did not contribute to increased stem cell frequency at any dose delivered. Instead, the model revealed that the combination of increased self-renewal and cell proliferation during puberty led to stem cell enrichment. In contrast, epithelial-mesenchymal transition in the model increased stem cell frequency not only in pubertal mammary glands but also in adult glands, a prediction contradicted by the in vivo data, since irradiation of adult mammary glands did not lead to increased stem cell frequency. These simulations therefore pointed to self-renewal as the primary mechanism behind the pubertal stem cell increase.
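The self-renewal mechanism can be illustrated with a toy branching model in which each stem-cell division is symmetric (two stem daughters) with probability p and asymmetric (one stem, one differentiated daughter) otherwise; raising p during a proliferative burst enriches the stem-cell pool. All numbers below are invented for illustration and are not parameters of the published multiscale model.

```python
import random

def stem_fraction(p_symmetric, divisions=10_000, seed=0):
    """Toy branching process: each dividing stem cell yields two stem
    daughters with probability p, otherwise one stem + one differentiated."""
    rng = random.Random(seed)
    stem, diff = 1_000, 9_000          # invented starting population
    for _ in range(divisions):
        if rng.random() < p_symmetric:
            stem += 1                  # symmetric division: net +1 stem cell
        else:
            diff += 1                  # asymmetric: net +1 differentiated cell
    return stem / (stem + diff)

# Higher self-renewal during a proliferative burst raises the stem fraction
print(f"baseline p=0.1: {stem_fraction(0.1):.2%}")
print(f"elevated p=0.3: {stem_fraction(0.3):.2%}")
```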
To further evaluate self-renewal as the mechanism, a second agent-based model was created to simulate the growth dynamics of human mammary epithelial cells (containing stem/progenitor and differentiated cell subpopulations) in vitro after irradiation. By comparing the simulation results with data from the in vitro experiments, the second agent-based model further confirmed that cells must extensively proliferate to observe a self-renewal dependent increase in stem/progenitor cell numbers after irradiation.
The combination of the two agent-based models and the in vitro/in vivo experiments provides insight into why children exposed to ionizing radiation have a substantially greater breast cancer risk than adults. Together, they support the hypothesis that the breast is susceptible to a transient increase in stem cell self-renewal when exposed to radiation during puberty, which primes the adult tissue to develop cancer decades later.
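As a deliberately minimal illustration of the self-renewal mechanism these models point to, the following toy sketch (an invented example, not the published multiscale model; the cell types, probabilities, and population sizes are all assumptions) shows how raising the probability of symmetric self-renewal enriches stem cells only when the population is also proliferating:

```python
import random

def simulate(p_self: float, p_divide: float, steps: int = 20) -> float:
    """Return the final stem-cell fraction of a toy cell population.

    p_self   -- probability a stem-cell division is symmetric (stem + stem)
    p_divide -- probability a stem cell divides at each step (proliferation)
    """
    cells = ["stem"] * 50 + ["diff"] * 950   # start at a 5% stem-cell fraction
    for _ in range(steps):
        new = []
        for c in cells:
            new.append(c)
            if c == "stem" and random.random() < p_divide:
                # symmetric self-renewal vs. a differentiating division
                new.append("stem" if random.random() < p_self else "diff")
        cells = new
    return cells.count("stem") / len(cells)

random.seed(1)
print(simulate(p_self=0.5, p_divide=0.5))  # baseline self-renewal
print(simulate(p_self=0.8, p_divide=0.5))  # "irradiated": boosted self-renewal
print(simulate(p_self=0.8, p_divide=0.0))  # boosted, but no proliferation
```

In this toy, the boosted self-renewal run ends with a markedly higher stem-cell fraction than the baseline, while the non-proliferating run stays at its starting fraction, echoing the conclusion that self-renewal requires concurrent proliferation to enrich stem cells.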
== See also ==
Autonomous agent – Type of autonomous entity in software
Intelligent agent – Software agent which acts autonomously
== References == | Wikipedia/Agent-based_model_in_biology |
In epidemiology, force of infection (denoted λ) is the rate at which susceptible individuals acquire an infectious disease. Because it takes account of susceptibility, it can be used to compare the rate of transmission between different groups of the population for the same infectious disease, or even between different infectious diseases. That is to say, λ is directly proportional to β, the effective transmission rate:

\[ \lambda = \frac{\text{number of new infections}}{\text{number of susceptible persons exposed} \times \text{average duration of exposure}} \]
Such a calculation is difficult because not all new infections are reported, and it is often difficult to know how many susceptibles were exposed. However, λ can be calculated for an infectious disease in an endemic state if homogeneous mixing of the population and a rectangular population distribution (such as that generally found in developed countries), rather than a pyramid, are assumed. In this case, λ is given by:

\[ \lambda = \frac{1}{A} \]

where A is the average age of infection. In other words, A is the average time spent in the susceptible group before becoming infected. The rate of becoming infected (λ) is therefore 1/A (since a rate is 1/time). The advantage of this method of calculating λ is that data on the average age of infection are very easily obtainable, even if not all cases of the disease are reported.
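Both formulas lend themselves to a short calculation. The sketch below applies them to invented numbers (none of these figures come from the article; they are assumptions for illustration only):

```python
# 1) Direct calculation from surveillance-style counts:
new_infections = 120
susceptibles_exposed = 10_000
avg_duration_years = 0.5          # average duration of exposure

lam_direct = new_infections / (susceptibles_exposed * avg_duration_years)

# 2) From the average age of infection A, assuming an endemic disease,
#    homogeneous mixing, and a rectangular population distribution:
A = 4.2                           # hypothetical average age of infection (years)
lam_endemic = 1 / A

print(f"lambda (direct): {lam_direct:.4f} per person-year")
print(f"lambda (1/A):    {lam_endemic:.4f} per year")
```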
== See also ==
Basic reproduction number
Compartmental models in epidemiology
Epidemic
Mathematical modelling of infectious disease
== References ==
== Further reading ==
Muench, H. (1934) Derivation of rates from summation data by the catalytic curve. Journal of the American Statistical Association, 29: 25–38. | Wikipedia/Force_of_infection |
The quasispecies model is a description of the process of the Darwinian evolution of certain self-replicating entities within the framework of physical chemistry. A quasispecies is a large group or "cloud" of related genotypes that exist in an environment of high mutation rate (at stationary state), where a large fraction of offspring are expected to contain one or more mutations relative to the parent. This is in contrast to a species, which from an evolutionary perspective is a more-or-less stable single genotype, most of the offspring of which will be genetically accurate copies.
It is useful mainly in providing a qualitative understanding of the evolutionary processes of self-replicating macromolecules such as RNA or DNA or simple asexual organisms such as bacteria or viruses (see also viral quasispecies), and is helpful in explaining something of the early stages of the origin of life. Quantitative predictions based on this model are difficult because the parameters that serve as its input are impossible to obtain from actual biological systems. The quasispecies model was put forward by Manfred Eigen and Peter Schuster based on initial work done by Eigen.
== Simplified explanation ==
When evolutionary biologists describe competition between species, they generally assume that each species is a single genotype whose descendants are mostly accurate copies. (Such genotypes are said to have a high reproductive fidelity.) In evolutionary terms, we are interested in the behavior and fitness of that one species or genotype over time.
Some organisms or genotypes, however, may exist in circumstances of low fidelity, where most descendants contain one or more mutations. A group of such genotypes is constantly changing, so discussions of which single genotype is the most fit become meaningless. Importantly, if many closely related genotypes are only one mutation away from each other, then genotypes in the group can mutate back and forth into each other. For example, with one mutation per generation, a child of the sequence AGGT could be AGTT, and a grandchild could be AGGT again. Thus we can envision a "cloud" of related genotypes that is rapidly mutating, with sequences going back and forth among different points in the cloud. Though the proper definition is mathematical, that cloud, roughly speaking, is a quasispecies.
Quasispecies behavior exists for large numbers of individuals existing at a certain (high) range of mutation rates.
=== Quasispecies, fitness, and evolutionary selection ===
In a species, though reproduction may be mostly accurate, periodic mutations will give rise to one or more competing genotypes. If a mutation results in greater replication and survival, the mutant genotype may out-compete the parent genotype and come to dominate the species. Thus, the individual genotypes (or species) may be seen as the units on which selection acts and biologists will often speak of a single genotype's fitness.
In a quasispecies, however, mutations are ubiquitous and so the fitness of an individual genotype becomes meaningless: if one particular mutation generates a boost in reproductive success, it can't amount to much because that genotype's offspring are unlikely to be accurate copies with the same properties. Instead, what matters is the connectedness of the cloud. For example, the sequence AGGT has 12 (3+3+3+3) possible single point mutants AGGA, AGGG, and so on. If 10 of those mutants are viable genotypes that may reproduce (and some of whose offspring or grandchildren may mutate back into AGGT again), we would consider that sequence a well-connected node in the cloud. If instead only two of those mutants are viable, the rest being lethal mutations, then that sequence is poorly connected and most of its descendants will not reproduce. The analog of fitness for a quasispecies is the tendency of nearby relatives within the cloud to be well-connected, meaning that more of the mutant descendants will be viable and give rise to further descendants within the cloud.
When the fitness of a single genotype becomes meaningless because of the high rate of mutations, the cloud as a whole or quasispecies becomes the natural unit of selection.
=== Application to biological research ===
The quasispecies model has been used to represent the evolution of high-mutation-rate viruses such as HIV, and sometimes single genes or molecules within the genomes of other organisms. Quasispecies models have also been proposed by Jose Fontanari and Emmanuel David Tannenbaum to model the evolution of sexual reproduction. Quasispecies behavior was also shown in compositional replicators (based on the GARD model for abiogenesis) and has been suggested to be applicable to describing a cell's replication, which amongst other things requires the maintenance and evolution of the internal composition of the parent and bud.
== Formal background ==
The model rests on four assumptions:
The self-replicating entities can be represented as sequences composed of a small number of building blocks—for example, sequences of RNA consisting of the four bases adenine, guanine, cytosine, and uracil.
New sequences enter the system solely as the result of a copy process, either correct or erroneous, of other sequences that are already present.
The substrates, or raw materials, necessary for ongoing replication are always present in sufficient quantity. Excess sequences are washed away in an outgoing flux.
Sequences may decay into their building blocks. The probability of decay does not depend on the sequences' age; old sequences are just as likely to decay as young sequences.
In the quasispecies model, mutations occur through errors made in the process of copying already existing sequences. Further, selection arises because different types of sequences tend to replicate at different rates, which leads to the suppression of sequences that replicate more slowly in favor of sequences that replicate faster. However, the quasispecies model does not predict the ultimate extinction of all but the fastest replicating sequence. Although the sequences that replicate more slowly cannot sustain their abundance level by themselves, they are constantly replenished as sequences that replicate faster mutate into them. At equilibrium, removal of slowly replicating sequences due to decay or outflow is balanced by replenishing, so that even relatively slowly replicating sequences can remain present in finite abundance.
Due to the ongoing production of mutant sequences, selection does not act on single sequences, but on mutational "clouds" of closely related sequences, referred to as quasispecies. In other words, the evolutionary success of a particular sequence depends not only on its own replication rate, but also on the replication rates of the mutant sequences it produces, and on the replication rates of the sequences of which it is a mutant. As a consequence, the sequence that replicates fastest may even disappear completely in selection-mutation equilibrium, in favor of more slowly replicating sequences that are part of a quasispecies with a higher average growth rate. Mutational clouds as predicted by the quasispecies model have been observed in RNA viruses and in in vitro RNA replication.
The mutation rate and the general fitness of the molecular sequences and their neighbors is crucial to the formation of a quasispecies. If the mutation rate is zero, there is no exchange by mutation, and each sequence is its own species. If the mutation rate is too high, exceeding what is known as the error threshold, the quasispecies will break down and be dispersed over the entire range of available sequences.
== Mathematical description ==
A simple mathematical model for a quasispecies is as follows: let there be S possible sequences and let there be n_i organisms with sequence i. Let's say that each of these organisms asexually gives rise to A_i offspring. Some are duplicates of their parent, having sequence i, but some are mutant and have some other sequence. Let the mutation rate q_ij correspond to the probability that a j-type parent will produce an i-type organism. Then the expected fraction of offspring generated by j-type organisms that would be i-type organisms is

\[ w_{ij} = A_{j} q_{ij}, \]

where

\[ \sum_{i} q_{ij} = 1. \]
Then the total number of i-type organisms after the first round of reproduction, given as n′_i, is

\[ n'_{i} = \sum_{j} w_{ij} n_{j}. \]
Sometimes a death rate term D_i is included so that

\[ w_{ij} = A_{j} q_{ij} - D_{i} \delta_{ij}, \]

where δ_ij is equal to 1 when i = j and is zero otherwise. Note that the n-th generation can be found by taking the n-th power of W and substituting it in place of W in the above formula.
This is just a system of linear equations. The usual way to solve such a system is to first diagonalize the W matrix. Its diagonal entries will be the eigenvalues, corresponding to certain linear combinations of certain subsets of sequences, which will be the eigenvectors of the W matrix. These subsets of sequences are the quasispecies. Assuming that the matrix W is a primitive matrix (irreducible and aperiodic), then after very many generations only the eigenvector with the largest eigenvalue will prevail, and it is this quasispecies that will eventually dominate. The components of this eigenvector give the relative abundance of each sequence at equilibrium.
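This procedure is easy to carry out numerically. The sketch below (an assumed illustration: the replication rates and mutation matrix are invented, and no death term is included) builds W from A_j and q_ij and reads the equilibrium quasispecies off the leading eigenvector:

```python
import numpy as np

A = np.array([1.0, 1.2, 1.1])          # hypothetical replication rates A_j
q = np.array([[0.90, 0.05, 0.05],      # q[i, j]: probability that a j-type
              [0.05, 0.90, 0.05],      # parent produces an i-type offspring;
              [0.05, 0.05, 0.90]])     # each column sums to 1
W = q * A                              # w_ij = A_j * q_ij (A broadcasts over columns)

eigvals, eigvecs = np.linalg.eig(W)
lead = np.argmax(eigvals.real)         # the largest eigenvalue dominates long-term
v = np.abs(eigvecs[:, lead].real)      # Perron-Frobenius leading eigenvector
print("equilibrium relative abundances:", v / v.sum())
```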
=== Note about primitive matrices ===
W being primitive means that for some integer n > 0, the n-th power of W is > 0, i.e. all the entries are positive. If W is primitive then each type can, through a sequence of mutations (i.e. powers of W), mutate into all the other types after some number of generations. W is not primitive if it is periodic, where the population can perpetually cycle through different disjoint sets of compositions, or if it is reducible, where the dominant species (or quasispecies) that develops can depend on the initial population, as is the case in the simple example given below.
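Primitivity itself can be checked numerically. A naive sketch (an assumption for illustration, using the standard result that a primitive s × s matrix has some all-positive power with exponent at most (s − 1)² + 1):

```python
import numpy as np

def is_primitive(W: np.ndarray) -> bool:
    """Check whether a nonnegative square matrix W is primitive."""
    s = W.shape[0]
    P = np.eye(s)
    for _ in range((s - 1) ** 2 + 1):  # Wielandt's bound on the exponent
        P = P @ W
        if np.all(P > 0):
            return True
    return False
```

Applied to the W matrix of the simple example below, this check returns False: nothing ever mutates into sequence 1, so that matrix is reducible.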
=== Alternative formulations ===
The quasispecies formulae may be expressed as a set of linear differential equations. If we consider the difference between the new state n′_i and the old state n_i to be the state change over one moment of time, then we can state that the time derivative of n_i is given by this difference:

\[ \dot{n}_{i} = n'_{i} - n_{i}, \]

and we can write:

\[ \dot{n}_{i} = \sum_{j} w_{ij} n_{j} - n_{i}. \]
The quasispecies equations are usually expressed in terms of concentrations x_i, where

\[ x_{i}\ \overset{\mathrm{def}}{=}\ \frac{n_{i}}{\sum_{j} n_{j}} \qquad \text{and} \qquad x'_{i}\ \overset{\mathrm{def}}{=}\ \frac{n'_{i}}{\sum_{j} n'_{j}}. \]
The above equations for the quasispecies then become, for the discrete version:

\[ x'_{i} = \frac{\sum_{j} w_{ij} x_{j}}{\sum_{ij} w_{ij} x_{j}}, \]

or, for the continuum version:

\[ \dot{x}_{i} = \sum_{j} w_{ij} x_{j} - x_{i} \sum_{ij} w_{ij} x_{j}. \]
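The discrete update can be iterated directly; repeated application is just power iteration on W with renormalisation, so for a primitive W it converges to the dominant quasispecies. A minimal sketch, assuming W and the starting concentrations are given:

```python
import numpy as np

def equilibrate(W: np.ndarray, x0: np.ndarray, tol: float = 1e-12) -> np.ndarray:
    """Iterate x'_i = sum_j w_ij x_j / sum_ij w_ij x_j to equilibrium."""
    x = x0 / x0.sum()
    while True:
        y = W @ x
        y /= y.sum()            # the denominator is the sum_ij w_ij x_j term
        if np.max(np.abs(y - x)) < tol:
            return y
        x = y
```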
=== Simple example ===
The quasispecies concept can be illustrated by a simple system consisting of 4 sequences. Sequences [0,0], [0,1], [1,0], and [1,1] are numbered 1, 2, 3, and 4, respectively. Let's say the [0,0] sequence never mutates and always produces a single offspring. Let's say the other 3 sequences all produce, on average, 1 − k replicas of themselves, and k of each of the other two types, where 0 ≤ k ≤ 1. The W matrix is then:

\[ \mathbf{W} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1-k & k & k \\ 0 & k & 1-k & k \\ 0 & k & k & 1-k \end{bmatrix}. \]
The diagonalized matrix is:

\[ \mathbf{W'} = \begin{bmatrix} 1-2k & 0 & 0 & 0 \\ 0 & 1-2k & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1+k \end{bmatrix}. \]
The eigenvectors corresponding to these eigenvalues are [0, 1, −1, 0] and [0, 1, 0, −1] for the repeated eigenvalue 1 − 2k, [1, 0, 0, 0] for the eigenvalue 1, and [0, 1, 1, 1] for the eigenvalue 1 + k.
Only the eigenvalue 1 + k is more than unity. For the n-th generation, the corresponding eigenvalue will be (1 + k)^n and so will increase without bound as time goes by. This eigenvalue corresponds to the eigenvector [0, 1, 1, 1], which represents the quasispecies consisting of sequences 2, 3, and 4, which will be present in equal numbers after a very long time. Since all population numbers must be positive, the first two quasispecies are not legitimate. The third quasispecies consists of only the non-mutating sequence 1. It's seen that even though sequence 1 is the most fit in the sense that it reproduces more of itself than any other sequence, the quasispecies consisting of the other three sequences will eventually dominate (assuming that the initial population was not homogeneous of the sequence 1 type).
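This behavior can be verified numerically for a specific k (here k = 0.1 is an arbitrary illustrative choice):

```python
import numpy as np

k = 0.1
W = np.array([[1, 0,     0,     0    ],
              [0, 1 - k, k,     k    ],
              [0, k,     1 - k, k    ],
              [0, k,     k,     1 - k]])

print(sorted(np.linalg.eigvals(W).real))  # 1 - 2k (twice), 1, and 1 + k

# Repeated reproduction with renormalisation: the [0, 1, 1, 1] quasispecies wins.
x = np.array([0.1, 0.3, 0.3, 0.3])        # any start that is not pure sequence 1
for _ in range(2000):
    x = W @ x
    x /= x.sum()
print(np.round(x, 6))                     # approaches [0, 1/3, 1/3, 1/3]
```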
== References ==
== Further reading == | Wikipedia/Quasispecies_model |
The management of HIV/AIDS normally includes the use of multiple antiretroviral drugs as a strategy to control HIV infection. There are several classes of antiretroviral agents that act on different stages of the HIV life-cycle. The use of multiple drugs that act on different viral targets is known as highly active antiretroviral therapy (HAART). HAART decreases the patient's total burden of HIV, maintains function of the immune system, and prevents opportunistic infections that often lead to death. HAART also prevents the transmission of HIV between serodiscordant same-sex and opposite-sex partners so long as the HIV-positive partner maintains an undetectable viral load.
Treatment has been so successful that in many parts of the world, HIV has become a chronic condition in which progression to AIDS is increasingly rare. Anthony Fauci, former head of the United States National Institute of Allergy and Infectious Diseases, has written, "With collective and resolute action now and a steadfast commitment for years to come, an AIDS-free generation is indeed within reach." In the same paper, he noted that an estimated 700,000 lives were saved in 2010 alone by antiretroviral therapy. As another commentary noted, "Rather than dealing with acute and potentially life-threatening complications, clinicians are now confronted with managing a chronic disease that in the absence of a cure will persist for many decades."
The United States Department of Health and Human Services and the World Health Organization (WHO) recommend offering antiretroviral treatment to all patients with HIV. Because of the complexity of selecting and following a regimen, the potential for side effects, and the importance of taking medications regularly to prevent viral resistance, such organizations emphasize the importance of involving patients in therapy choices and recommend analyzing the risks and the potential benefits.
The WHO has defined health as more than the absence of disease. For this reason, many researchers have dedicated their work to better understanding the effects of HIV-related stigma, the barriers it creates for treatment interventions, and the ways in which those barriers can be circumvented.
== Classes of medication ==
There are six classes of drugs, which are usually used in combination, to treat HIV infection. Antiretroviral (ARV) drugs are broadly classified by the phase of the retrovirus life-cycle that the drug inhibits. Typical combinations include two nucleoside reverse-transcriptase inhibitors (NRTIs) as a "backbone" along with one non-nucleoside reverse-transcriptase inhibitor (NNRTI), protease inhibitor (PI), or integrase inhibitor (also known as integrase nuclear strand transfer inhibitors or INSTIs) as a "base".
=== Entry inhibitors ===
Entry inhibitors (or fusion inhibitors) interfere with binding, fusion, and entry of HIV-1 into the host cell by blocking one of several targets. Maraviroc, enfuvirtide, and ibalizumab are available agents in this class. Maraviroc works by targeting CCR5, a co-receptor located on human helper T-cells. Caution should be used when administering this drug, however, due to a possible shift in tropism that allows HIV to target an alternative co-receptor such as CXCR4. Ibalizumab is effective against both CCR5- and CXCR4-tropic HIV viruses.
In rare cases, individuals may have a mutation in the CCR5 gene (the CCR5-Δ32 deletion) which results in a nonfunctional CCR5 co-receptor and, in turn, a means of resistance or slow progression of the disease. However, as mentioned previously, this can be overcome if an HIV variant that targets CXCR4 becomes dominant. To prevent fusion of the virus with the host membrane, enfuvirtide can be used. Enfuvirtide is a peptide drug that must be injected and acts by interacting with the N-terminal heptad repeat of gp41 of HIV to form an inactive hetero six-helix bundle, therefore preventing infection of host cells.
=== Nucleoside/nucleotide reverse-transcriptase inhibitors ===
Nucleoside reverse-transcriptase inhibitors (NRTI) and nucleotide reverse-transcriptase inhibitors (NtRTI) are nucleoside and nucleotide analogues which inhibit reverse transcription. HIV is an RNA virus, so it cannot be integrated into the DNA in the nucleus of the human cell unless it is first "reverse" transcribed into DNA. Since the conversion of RNA to DNA is not naturally done in the mammalian cell, it is performed by a viral protein, reverse transcriptase, which makes it a selective target for inhibition. NRTIs are chain terminators: once NRTIs are incorporated into the DNA chain, their lack of a 3' OH group prevents the subsequent incorporation of other nucleosides. Both NRTIs and NtRTIs act as competitive substrate inhibitors. Examples of NRTIs include zidovudine, abacavir, lamivudine, and emtricitabine; examples of NtRTIs are tenofovir and adefovir.
=== Non-nucleoside reverse-transcriptase inhibitors ===
Non-nucleoside reverse-transcriptase inhibitors (NNRTI) inhibit reverse transcriptase by binding to an allosteric site of the enzyme; NNRTIs act as non-competitive inhibitors of reverse transcriptase. NNRTIs affect the handling of substrate (nucleotides) by reverse transcriptase by binding near the active site. NNRTIs can be further classified into 1st generation and 2nd generation NNRTIs. 1st generation NNRTIs include nevirapine and efavirenz. 2nd generation NNRTIs are etravirine and rilpivirine. HIV-2 is intrinsically resistant to NNRTIs.
=== Integrase inhibitors ===
Integrase inhibitors (also known as integrase nuclear strand transfer inhibitors or INSTIs) inhibit the viral enzyme integrase, which is responsible for integration of viral DNA into the DNA of the infected cell. There are several integrase inhibitors under clinical trial, and raltegravir became the first to receive FDA approval in October 2007. Raltegravir has two metal binding groups that compete for substrate with two Mg2+ ions at the metal binding site of integrase. As of early 2022, four other clinically approved integrase inhibitors are elvitegravir, dolutegravir, bictegravir, and cabotegravir.
=== Protease inhibitors ===
Protease inhibitors block the viral protease enzyme necessary to produce mature virions upon budding from the host membrane. Particularly, these drugs prevent the cleavage of gag and gag/pol precursor proteins. Virus particles produced in the presence of protease inhibitors are defective and mostly non-infectious. Examples of HIV protease inhibitors are lopinavir, indinavir, nelfinavir, amprenavir and ritonavir. Darunavir and atazanavir are recommended as first line therapy choices. Maturation inhibitors have a similar effect by binding to gag, but development of two experimental drugs in this class, bevirimat and vivecon, was halted in 2010. Resistance to some protease inhibitors is high. Second generation drugs have been developed that are effective against otherwise resistant HIV variants.
== Combination therapy ==
The life cycle of HIV can be as short as about 1.5 days from viral entry into a cell, through replication, assembly, and release of additional viruses, to infection of other cells. HIV lacks proofreading enzymes to correct errors made when it converts its RNA into DNA via reverse transcription. Its short life-cycle and high error rate cause the virus to mutate very rapidly, resulting in a high genetic variability. Most of the mutations either are inferior to the parent virus (often lacking the ability to reproduce at all) or convey no advantage, but some of them have a natural selection superiority to their parent and can enable them to slip past defenses such as the human immune system and antiretroviral drugs. The more active copies of the virus, the greater the possibility that one resistant to antiretroviral drugs will be made.
When antiretroviral drugs are used improperly, multi-drug resistant strains can become the dominant genotypes very rapidly. In the era before multiple drug classes were available (pre-1997), the reverse-transcriptase inhibitors zidovudine, didanosine, zalcitabine, stavudine, and lamivudine were used serially or in combination leading to the development of multi-drug resistant mutations.
In contrast, antiretroviral combination therapy defends against resistance by creating multiple obstacles to HIV replication. This keeps the number of viral copies low and reduces the possibility of a superior mutation. If a mutation that conveys resistance to one of the drugs arises, the other drugs continue to suppress reproduction of that mutation. With rare exceptions, no individual antiretroviral drug has been demonstrated to suppress an HIV infection for long; these agents must be taken in combinations in order to have a lasting effect. As a result, the standard of care is to use combinations of antiretroviral drugs. Combinations usually consist of three drugs from at least two different classes. This three drug combination is commonly known as a triple cocktail. Combinations of antiretrovirals are subject to positive and negative synergies, which limits the number of useful combinations.
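The protective arithmetic of combination therapy can be made concrete with a deliberately simplified back-of-envelope calculation (the probabilities and virion counts below are illustrative assumptions, not clinical data), treating resistance mutations against different drugs as independent events:

```python
p_single = 1e-5          # assumed chance a new virion resists any one drug
virions_per_day = 1e9    # hypothetical daily virion production in one patient

# Expected resistant virions per day against one drug vs. all three at once:
print("resistant to 1 drug per day:", p_single * virions_per_day)       # ~1e4
print("resistant to all 3 per day: ", p_single ** 3 * virions_per_day)  # ~1e-6
```

Under these assumptions, single-drug resistance arises thousands of times a day, while simultaneous resistance to three drugs is expected roughly once per million days, which is the intuition behind the triple cocktail.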
Because of HIV's tendency to mutate, when patients who have started an antiretroviral regimen fail to take it regularly, resistance can develop. On the other hand, patients who take their medications regularly can stay on one regimen without developing resistance. This greatly increases life expectancy and leaves more drugs available to the individual should the need arise.
Since 2000, drug companies have worked together to combine these complex regimens into single-pill fixed-dose combinations. More than 20 antiretroviral fixed-dose combinations have been developed. This greatly increases the ease with which they can be taken, which in turn increases the consistency with which medication is taken (adherence), and thus their effectiveness over the long term.
=== Adjunct treatment ===
Although antiretroviral therapy has helped to improve the quality of life of people living with HIV, there is still a need to explore other ways to further address the disease burden. One such potential strategy that was investigated was to add interleukin 2 as an adjunct to antiretroviral therapy for adults with HIV. A Cochrane review included 25 randomized controlled trials that were conducted across six countries. The researchers found that interleukin 2 increases the CD4 immune cells, but does not make a difference in terms of death and incidence of other infections. Furthermore, there is probably an increase in side-effects with interleukin 2. The findings of this review do not support the use of interleukin 2 as an add-on treatment to antiretroviral therapy for adults with HIV.
== Treatment guidelines ==
=== Initiation of antiretroviral therapy ===
Antiretroviral drug treatment guidelines have changed over time. Before 1987, no antiretroviral drugs were available and treatment consisted of treating complications from opportunistic infections and malignancies. After antiretroviral medications were introduced, most clinicians agreed that HIV positive patients with low CD4 counts should be treated, but no consensus formed as to whether to treat patients with high CD4 counts.
In April 1995, Merck and the National Institute of Allergy and Infectious Diseases began recruiting patients for a trial examining the effects of a three-drug combination of the protease inhibitor indinavir and two nucleoside analogues. Later that year, David Ho became an advocate of this "hit hard, hit early" approach, with aggressive treatment with multiple antiretrovirals early in the course of the infection. Later reviews in the late 90s and early 2000s noted that this approach ran significant risks of increasing side effects and the development of multidrug resistance, and it was largely abandoned. The only consensus was on treating patients with advanced immunosuppression (CD4 counts less than 350/μL). Treatment with antiretrovirals was expensive at the time, ranging from $10,000 to $15,000 a year.
The timing of when to start therapy has continued to be a core controversy within the medical community, though recent studies have led to more clarity. The NA-ACCORD study observed patients who started antiretroviral therapy either at a CD4 count of less than 500 or at less than 350, and showed that patients who started ART at the lower CD4 counts had a 69% increase in the risk of death. In 2015, the START and TEMPRANO studies both showed that patients lived longer if they started antiretrovirals at the time of their diagnosis, rather than waiting for their CD4 counts to drop to a specified level.
Other arguments for starting therapy earlier are that people who start therapy later have been shown to have less recovery of their immune systems, and higher CD4 counts are associated with less cancer.
The European Medicines Agency (EMA) has recommended the granting of marketing authorizations for two new antiretroviral (ARV) medicines, rilpivirine (Rekambys) and cabotegravir (Vocabria), to be used together for the treatment of people with human immunodeficiency virus type 1 (HIV-1) infection. The two medicines are the first ARVs that come in a long-acting injectable formulation. This means that instead of daily pills, people receive intramuscular injections monthly or every two months.
The combination of Rekambys and Vocabria injection is intended for maintenance treatment of adults who have undetectable HIV levels in the blood (viral load less than 50 copies/ml) with their current ARV treatment, and when the virus has not developed resistance to certain classes of anti-HIV medicines called non-nucleoside reverse-transcriptase inhibitors (NNRTIs) and integrase strand transfer inhibitors (INIs).
==== Treatment as prevention ====
A separate argument for starting antiretroviral therapy that has gained more prominence is its effect on HIV transmission. ART reduces the amount of virus in the blood and genital secretions. This has been shown to lead to dramatically reduced transmission of HIV when one partner with a suppressed viral load (<50 copies/ml) has sex with a partner who is HIV negative. In clinical trial HPTN 052, 1763 serodiscordant heterosexual couples in nine countries were planned to be followed for at least 10 years, with both groups receiving education on preventing HIV transmission and condoms, but only one group getting ART. The study was stopped early (after 1.7 years) for ethical reasons when it became clear that antiviral treatment provided significant protection. Of the 28 couples where cross-infection had occurred, all but one had taken place in the control group, consistent with a 96% reduction in risk of transmission while on ART. The single transmission in the experimental group occurred early after starting ART before viral load was likely to be suppressed. Pre-exposure prophylaxis (PrEP) provides HIV-negative individuals with medication—in conjunction with safer-sex education and regular HIV/STI screenings—in order to reduce the risk of acquiring HIV. In 2011, the journal Science gave the Breakthrough of the Year award to treatment as prevention.
In July 2016 a consensus document was created by the Prevention Access Campaign which has been endorsed by over 400 organisations in 58 countries. The consensus document states that the risk of HIV transmission from a person living with HIV who has been undetectable for a minimum of six months is negligible to non-existent, with negligible being defined as "so small or unimportant to be not worth considering". The Chair of the British HIV Association (BHIVA), Chloe Orkin, stated in July 2017 that 'there should be no doubt about the clear and simple message that a person with sustained, undetectable levels of HIV virus in their blood cannot transmit HIV to their sexual partners.'
Furthermore, the PARTNER study, which ran from 2010 to 2014, enrolled 1166 serodiscordant couples (where one partner is HIV positive and the other is negative) in a study that found that the estimated rate of transmission through any condomless sex with the HIV-positive partner taking ART with an HIV load less than 200 copies/ml was zero.
In summary, as the WHO HIV treatment guidelines state, "The ARV regimens now available, even in the poorest countries, are safer, simpler, more effective and more affordable than ever before."
There is a consensus among experts that, once initiated, antiretroviral therapy should never be stopped. This is because the selection pressure of incomplete suppression of viral replication in the presence of drug therapy causes the more drug sensitive strains to be selectively inhibited. This allows the drug resistant strains to become dominant. This in turn makes it harder to treat the infected individual as well as anyone else they infect. One trial showed higher rates of opportunistic infections, cancers, heart attacks and death in patients who periodically interrupted their ART.
=== Guideline sources ===
There are several treatment guidelines for HIV-1 infected adults in the developed world (that is, those countries with access to all or most therapies and laboratory tests). In the United States there are both the International AIDS Society-USA (IAS-USA) (a 501(c)(3) not-for-profit organization in the US) as well as the US government's Department of Health and Human Services guidelines. In Europe there are the European AIDS Clinical Society guidelines.
For resource limited countries, most national guidelines closely follow the World Health Organization (WHO) guidelines.
==== Guidelines ====
The guidelines use new criteria to consider starting HAART, as described below. However, there remain a range of views on this subject and the decision of whether to commence treatment ultimately rests with the patient and his or her doctor.
The US DHHS guidelines (published April 8, 2015) state:
Antiretroviral therapy (ART) is recommended for all HIV-infected individuals to reduce the risk of disease progression.
ART also is recommended for HIV-infected individuals for the prevention of transmission of HIV.
Patients starting ART should be willing and able to commit to treatment and understand the benefits and risks of therapy and the importance of adherence. Patients may choose to postpone therapy, and providers, on a case-by-case basis, may elect to defer therapy on the basis of clinical and/or psychosocial factors.
The newest WHO guidelines (dated September 30, 2015) now agree and state:
Antiretroviral therapy (ART) should be initiated in everyone living with HIV at any CD4 cell count
==== Baseline resistance ====
Baseline resistance is the presence of resistance mutations in patients who have never been treated before for HIV. In countries with a high rate of baseline resistance, resistance testing is recommended before starting treatment; or, if the initiation of treatment is urgent, then a "best guess" treatment regimen should be started, which is then modified on the basis of resistance testing. In the UK, there is 11.8% medium to high-level resistance at baseline to the combination of efavirenz + zidovudine + lamivudine, and 6.4% medium to high level resistance to stavudine + lamivudine + nevirapine. In the US, 10.8% of one cohort of patients who had never been on ART before had at least one resistance mutation in 2005. Various surveys in different parts of the world have shown increasing or stable rates of baseline resistance as the era of effective HIV therapy continues. With baseline resistance testing, a combination of antiretrovirals that are likely to be effective can be customized for each patient.
=== Regimens ===
Most HAART regimens consist of three drugs: two NRTIs (the "backbone") + a PI/NNRTI/INSTI (the "base"). Initial regimens use "first-line" drugs with high efficacy and a low side-effect profile.
The US DHHS preferred initial regimens for adults and adolescents in the United States, as of April 2015, are:
tenofovir/emtricitabine and raltegravir (an integrase inhibitor)
tenofovir/emtricitabine and dolutegravir (an integrase inhibitor)
abacavir/lamivudine (two NRTIs) and dolutegravir for patients who have been tested negative for the HLA-B*5701 gene allele
tenofovir/emtricitabine, elvitegravir (an integrase inhibitor) and cobicistat (inhibiting metabolism of the former) in patients with good kidney function (gfr > 70)
tenofovir/emtricitabine, ritonavir, and darunavir (both latter are protease inhibitors)
Both efavirenz and nevirapine showed similar benefits when combined with NRTIs.
In the case of the protease inhibitor based regimens, ritonavir is used at low doses to inhibit cytochrome p450 enzymes and "boost" the levels of other protease inhibitors, rather than for its direct antiviral effect. This boosting effect allows them to be taken less frequently throughout the day. Cobicistat is used with elvitegravir for a similar effect but does not have any direct antiviral effect itself.
The WHO preferred initial regimen for adults and adolescents as of June 30, 2013, is:
tenofovir + lamivudine (or emtricitabine) + efavirenz
=== Special populations ===
==== Acute infection ====
In the first six months after infection, HIV viral loads tend to be elevated and people are more often symptomatic than in later latent phases of HIV disease. There may be special benefits to starting antiretroviral therapy early during this acute phase, including lowering the viral "set-point" or baseline viral load, reducing the mutation rate of the virus, and reducing the size of the viral reservoir (see section below on viral reservoirs). The SPARTAC trial compared 48 weeks of ART vs 12 weeks vs no treatment in acute HIV infection and found that 48 weeks of treatment delayed the time to decline in CD4 count below 350 cells per μL by 65 weeks and kept viral loads significantly lower even after treatment was stopped.
Since viral loads are usually very high during acute infection, this period carries an estimated 26 times higher risk of transmission. By treating acutely infected patients, it is presumed that it could have a significant impact on decreasing overall HIV transmission rates since lower viral loads are associated with lower risk of transmission (See section on treatment as prevention). However an overall benefit has not been proven and has to be balanced with the risks of HIV treatment. Therapy during acute infection carries a grade BII recommendation from the US DHHS.
==== Children ====
HIV can be especially harmful to infants and children, with one study in Africa showing that 52% of untreated children born with HIV had died by age 2. By five years old, the risk of disease and death from HIV starts to approach that of young adults. The WHO recommends treating all children less than 5 years old, and starting all children older than 5 with stage 3 or 4 disease or CD4 < 500 cells/μL. DHHS guidelines are more complicated but recommend starting all children less than 12 months old and children of any age who have symptoms.
As for which antiretrovirals to use, this is complicated by the fact that many children who are born to mothers with HIV are given a single dose of nevirapine (an NNRTI) at the time of birth to prevent transmission. If this fails it can lead to NNRTI resistance. Also, a large study in Africa and India found that a PI based regimen was superior to an NNRTI based regimen in children less than 3 years who had never been exposed to NNRTIs in the past. Thus the WHO recommends PI based regimens for children less than 3.
The WHO recommends for children less than 3 years:
abacavir (or zidovudine) + lamivudine + lopinavir + ritonavir
and for children 3 years to less than 10 years and adolescents <35 kilograms:
abacavir + lamivudine + efavirenz
US DHHS guidelines are similar but include PI based options for children > 3 years old.
A systematic review assessed the effects and safety of abacavir-containing regimens as first-line therapy for children between 1 month and 18 years of age when compared to regimens with other NRTIs. This review included two trials and two observational studies with almost eleven thousand HIV infected children and adolescents. They measured virologic suppression, death and adverse events. The authors found that there is no meaningful difference between abacavir-containing regimens and other NRTI-containing regimens. The evidence is of low to moderate quality and therefore it is likely that future research may change these findings.
==== Pregnant women ====
The goals of treatment for pregnant women include the same benefits to the mother as in other infected adults, as well as prevention of transmission to her child. The risk of transmission from mother to child is proportional to the plasma viral load of the mother. Untreated mothers with a viral load > 100,000 copies/ml have a transmission risk of over 50%. The risk when the viral load is < 1000 copies/ml is less than 1%. ART for mothers both before and during delivery, and for mothers and infants after delivery, is recommended to substantially reduce the risk of transmission. The mode of delivery is also important, with a planned Caesarean section having a lower risk than vaginal delivery or emergency Caesarean section.
HIV can also be detected in breast milk of infected mothers and transmitted through breast feeding. The WHO balances the low risk of transmission through breast feeding from women who are on ART with the benefits of breastfeeding against diarrhea, pneumonia and malnutrition. It also strongly recommends that breastfeeding infants receive prophylactic ART. In the US, the DHHS recommends against women with HIV breastfeeding.
==== Older adults ====
With improvements in HIV therapy, several studies now estimate that patients on treatment in high-income countries can expect a normal life expectancy. This means that a higher proportion of people living with HIV are now older, and research is ongoing into the unique aspects of HIV infection in the older adult. There are data that older people with HIV have a blunted CD4 response to therapy but are more likely to achieve undetectable viral levels. However, not all studies have seen a difference in response to therapy. The guidelines do not have separate treatment recommendations for older adults, but it is important to take into account that older patients are more likely to be on multiple non-HIV medications, and to consider drug interactions with any potential HIV medications. There are also increased rates of HIV-associated non-AIDS conditions (HANA), such as heart disease, liver disease, and dementia, that are multifactorial complications from HIV, associated behaviors, coinfections like hepatitis B, hepatitis C, and human papilloma virus (HPV), as well as HIV treatment.
==== Adults with depression ====
Many factors may contribute to depression in adults living with HIV, such as the effects of the virus on the brain, other infections or tumours, antiretroviral drugs and other medical treatment. Rates of major depression are higher in people living with HIV compared to the general population, and this may negatively influence antiretroviral treatment. In a systematic review, Cochrane researchers assessed whether giving antidepressants to adults living with both HIV and depression may improve depression. Ten trials, of which eight were done in high-income countries, with 709 participants were included. Results indicated that antidepressants may be better in improving depression compared to placebo, but the quality of the evidence is low and future research is likely to impact on the findings.
== Concerns ==
There are several concerns about antiretroviral regimens that should be addressed before initiating:
Intolerance: The drugs can have serious side-effects which can lead to harm as well as keep patients from taking their medications regularly.
Resistance: Not taking medication consistently can lead to low blood levels that foster drug resistance.
Cost: The WHO maintains a database of world ART costs which have dropped dramatically in recent years as more first line drugs have gone off-patent. A one pill, once a day combination therapy has been introduced in South Africa for as little as $10 per patient per month. One 2013 study estimated an overall cost savings to ART therapy in South Africa given reduced transmission. In the United States, new on-patent regimens can cost up to $28,500 per patient, per year.
Public health: Individuals who fail to use antiretrovirals as directed can develop multi-drug resistant strains which can be passed onto others.
== Response to therapy ==
=== Virologic response ===
Suppressing the viral load to undetectable levels (< 50 copies per ml) is the primary goal of ART. This should happen by 24 weeks after starting combination therapy. Viral load monitoring is the most important predictor of response to treatment with ART. Lack of viral load suppression on ART is termed virologic failure. Levels higher than 200 copies per ml are considered virologic failure and should prompt further testing for potential viral resistance.
Research has shown that people with an undetectable viral load are unable to transmit the virus through condomless sex with a partner of either gender. The 'Swiss Statement' of 2008 described the chance of transmission as 'very low' or 'negligible,' but multiple studies have since shown that this mode of sexual transmission is impossible where the HIV-positive person has a consistently undetectable viral load. This discovery has led to the formation of the Prevention Access Campaign and their 'U=U' or 'Undetectable=Untransmittable' public information strategy, an approach that has gained widespread support amongst HIV/AIDS-related medical, charitable, and research organisations. The studies demonstrating that U=U is an effective strategy for preventing HIV transmission in serodiscordant couples so long as "the partner living with HIV [has] a durably suppressed viral load" include Opposites Attract, PARTNER 1, and PARTNER 2 (for male–male couples), and HPTN052 (for heterosexual couples). In these studies, couples where one partner was HIV-positive and one partner was HIV-negative were enrolled and regular HIV testing completed. In total, across the four studies, 4097 couples were enrolled over four continents and 151,880 acts of condomless sex were reported; there were zero phylogenetically linked transmissions of HIV where the positive partner had an undetectable viral load. Following this, the U=U consensus statement advocating the use of 'zero risk' was signed by hundreds of individuals and organisations, including the US CDC, British HIV Association, and The Lancet medical journal. The importance of the final results of the PARTNER 2 study was described by the medical director of the Terrence Higgins Trust as "impossible to overstate", while lead author Alison Rodger declared that the message that "undetectable viral load makes HIV untransmittable ... can help end the HIV pandemic by preventing HIV transmission." The authors summarised their findings in The Lancet as follows:
Our results provide a similar level of evidence on viral suppression and HIV transmission risk for gay men to that previously generated for heterosexual couples and suggest that the risk of HIV transmission in gay couples through condomless sex when HIV viral load is suppressed is effectively zero. Our findings support the message of the U=U (undetectable equals untransmittable) campaign, and the benefits of early testing and treatment for HIV.
This result is consistent with the conclusion presented by Anthony S. Fauci, the Director of the National Institute of Allergy and Infectious Diseases for the U.S. National Institutes of Health, and his team in a viewpoint published in the Journal of the American Medical Association, that U=U is an effective HIV prevention method when an undetectable viral load is maintained.
=== Immunologic response ===
CD4 cell counts are another key measure of immune status and ART effectiveness. CD4 counts should rise 50 to 100 cells per μL in the first year of therapy. There can be substantial fluctuation in CD4 counts of up to 25% based on the time of day or concomitant infections. In one long-term study, the majority of the increase in CD4 cell counts was in the first two years after starting ART, with little increase afterwards. This study also found that patients who began ART at lower CD4 counts continued to have lower CD4 counts than those who started at higher CD4 counts. When viral suppression on ART is achieved but without a corresponding increase in CD4 counts, it can be termed immunologic nonresponse or immunologic failure. While this is predictive of worse outcomes, there is no consensus on how to adjust therapy to immunologic failure and whether switching therapy is beneficial. DHHS guidelines do not recommend switching an otherwise suppressive regimen.
Innate lymphoid cells (ILCs) are another class of immune cell that is depleted during HIV infection. However, if ART is initiated before this depletion, at around 7 days post infection, ILC levels can be maintained. While CD4 cell counts typically replenish after effective ART, ILC depletion is irreversible when ART is initiated after the depletion, despite suppression of viremia. Since one of the roles of ILCs is to regulate the immune response to commensal bacteria and to maintain an effective gut barrier, it has been hypothesized that the irreversible depletion of ILCs plays a role in the weakened gut barrier of HIV patients, even after successful ART.
== Salvage therapy ==
In patients who have persistently detectable viral loads while taking ART, tests can be done to investigate whether there is drug resistance. Most commonly, a genotype is sequenced, which can be compared with databases of other HIV viral genotypes and resistance profiles to predict response to therapy. Resistance testing may improve virological outcomes in those who have treatment failures. However, there is a lack of evidence of the effectiveness of such testing in those who have not had any treatment before.
If there is extensive resistance a phenotypic test of a patient's virus against a range of drug concentrations can be performed, but is expensive and can take several weeks, so genotypes are generally preferred. Using information from a genotype or phenotype, a regimen of three drugs from at least two classes is constructed that will have the highest probability of suppressing the virus. If a regimen cannot be constructed from recommended first line agents it is termed salvage therapy, and when six or more drugs are needed it is termed mega-HAART.
== Structured treatment interruptions ==
Drug holidays (or "structured treatment interruptions") are intentional discontinuations of antiretroviral drug treatment. As mentioned above, randomized controlled studies of structured treatment interruptions have shown higher rates of opportunistic infections, cancers, heart attacks and death in patients who took drug holidays. With the exception of post-exposure prophylaxis (PEP), treatment guidelines do not call for the interruption of drug therapy once it has been initiated.
== Adverse effects ==
Each class and individual antiretroviral carries unique risks of adverse side effects.
=== NRTIs ===
The NRTIs can interfere with mitochondrial DNA synthesis and lead to high levels of lactate and lactic acidosis, liver steatosis, peripheral neuropathy, myopathy, and lipoatrophy. First-line NRTIs such as lamivudine/emtricitabine, tenofovir, and abacavir are less likely to cause mitochondrial dysfunction.
Mitochondrial haplogroups (mtDNA), non-pathologic variants inherited from the maternal line, have been linked to the efficacy of CD4+ count recovery following ART. Idiosyncratic toxicity associated with mtDNA haplogroup has also been studied (Boeisteril et al., 2007).
=== NNRTIs ===
NNRTIs are generally safe and well tolerated. The main reason for discontinuation of efavirenz is neuro-psychiatric effects including suicidal ideation. Nevirapine can cause severe hepatotoxicity, especially in women with high CD4 counts.
=== Protease inhibitors ===
Protease inhibitors (PIs) are often given with ritonavir, a strong inhibitor of cytochrome P450 enzymes, leading to numerous drug-drug interactions. They are also associated with lipodystrophy, elevated triglycerides and elevated risk of heart attack.
=== Integrase inhibitors ===
Integrase inhibitors (INSTIs) are among the best tolerated of the antiretrovirals, with excellent short- and medium-term outcomes. Given their relatively recent development, there are less long-term safety data. They are associated with an increase in creatine kinase levels and, rarely, myopathy.
== Post-exposure prophylaxis (PEP) ==
When people are exposed to HIV-positive infectious bodily fluids, either through skin puncture, contact with mucous membranes, or contact with damaged skin, they are at risk for acquiring HIV. Pooled estimates give a risk of transmission with puncture exposures of 0.3% and with mucous membrane exposures of 0.63%. United States guidelines state that "feces, nasal secretions, saliva, sputum, sweat, tears, urine, and vomitus are not considered potentially infectious unless they are visibly bloody." Given the rare nature of these events, rigorous studies of the protective abilities of antiretrovirals are limited, but do suggest that taking antiretrovirals afterwards can prevent transmission. It is unknown if three medications are better than two. The sooner after exposure that ART is started the better, but after what period it becomes ineffective is unknown, with the US Public Health Service Guidelines recommending starting prophylaxis up to a week after exposure. They also recommend treating for a duration of four weeks based on animal studies. Their recommended regimen is emtricitabine + tenofovir + raltegravir (an INSTI). The rationale for this regimen is that it is "tolerable, potent, and conveniently administered, and it has been associated with minimal drug interactions." People who are exposed to HIV should have follow-up HIV testing at 6, 12, and 24 weeks.
== Pregnancy planning ==
Women with HIV have been shown to have decreased fertility, which can affect available reproductive options. In cases where the woman is HIV-negative and the man is HIV-positive, the primary assisted reproductive method used to prevent HIV transmission is sperm washing followed by intrauterine insemination (IUI) or in vitro fertilization (IVF). Preferably this is done after the man has achieved an undetectable plasma viral load. In the past there have been cases of HIV transmission to an HIV-negative partner through processed artificial insemination, but in a large modern series that followed 741 couples where the man had a stable viral load and semen samples were tested for HIV-1, there were no cases of HIV transmission.
For cases where the woman is HIV positive and the man is HIV negative, the usual method is artificial insemination. With appropriate treatment the risk of mother-to-child infection can be reduced to below 1%.
== History ==
Several buyers' clubs sprang up from 1986 onward to combat HIV. The drug zidovudine (AZT), a nucleoside reverse-transcriptase inhibitor (NRTI), was not effective on its own. It was approved by the US FDA in 1987. The FDA bypassed stages of its review for safety and effectiveness in order to distribute this drug earlier. Subsequently, several more NRTIs were developed, but even in combination they were unable to suppress the virus for long periods of time and patients still inevitably died. To distinguish it from this early antiretroviral therapy (ART), the term highly active antiretroviral therapy (HAART) was introduced. In 1996, two sequential publications in The New England Journal of Medicine by Hammer and colleagues and Gulick and colleagues illustrated the substantial benefit of combining two NRTIs with a new class of antiretrovirals, protease inhibitors, namely indinavir. This concept of three-drug therapy was quickly incorporated into clinical practice and rapidly showed impressive benefit, with a 60% to 80% decline in rates of AIDS, death, and hospitalization. It also created a new period of optimism at the 11th International AIDS Conference, held in Vancouver that year.
As HAART became widespread, fixed-dose combinations were made available to ease administration. Later, the term combination antiretroviral therapy (cART) gained favor with some physicians as a more accurate name, one that did not convey to patients any misguided idea of the nature of the therapy. Today multidrug, highly effective regimens have long been the default in ART, which is why they are increasingly called simply ART instead of HAART or cART. This retronymic process is linguistically comparable to the way the terms electronic computer and digital computer were at first needed to make useful distinctions in computing technology; once the distinction became irrelevant, computer alone came to cover their meaning. Thus, as "all computers are digital now", so "all ART is combination ART now". However, the names HAART and cART, reinforced by thousands of earlier mentions in the medical literature that are still regularly cited, also remain in use. In 1997, the number of new HIV/AIDS cases in the United States saw its first significant decline, falling by 47%, with credit going to the effectiveness of HAART.
== Research ==
People living with HIV can expect to live a nearly normal life span if they are able to achieve durable viral suppression on combination antiretroviral therapy. However, this requires lifelong medication, and treated individuals still have higher rates of cardiovascular, kidney, liver and neurologic disease. This has prompted further research towards a cure for HIV.
=== Patients cured of HIV infection ===
The so-called "Berlin patient" has been potentially cured of HIV infection and has been off of treatment since 2006 with no detectable virus. This was achieved through two bone marrow transplants that replaced his immune system with a donor's that did not have the CCR5 cell surface receptor, which is needed for some variants of HIV to enter a cell. Bone marrow transplants carry their own significant risks including potential death and was only attempted because it was necessary to treat a blood cancer he had. Attempts to replicate this have not been successful and given the risks, expense and rarity of CCR5 negative donors, bone marrow transplant is not seen as a mainstream option. It has inspired research into other methods to try to block CCR5 expression through gene therapy. A procedure zinc-finger nuclease-based gene knockout has been used in a Phase I trial of 12 humans and led to an increase in CD4 count and decrease in their viral load while off antiretroviral treatment. Attempt to reproduce this failed in 2016. Analysis of the failure showed that gene therapy only successfully treats 11–28% of cells, leaving the majority of CD4+ cells capable of being infected. The analysis found that only patients where less than 40% of cells were infected had reduced viral load. The gene therapy was not effective if the native CD4+ cells remained. This is the main limitation which must be overcome for this treatment to become effective.
After the "Berlin patient", two additional patients with both HIV infection and cancer were reported to have no traceable HIV virus after successful stem cell transplants. Virologist Annemarie Wensing of the University Medical Center Utrecht announced this development during her presentation at the 2016 "Towards an HIV Cure" symposium. However, these two patients are still on antiretroviral therapy, which is not the case for the Berlin patient. Therefore, it is not known whether or not the two patients are cured of HIV infection. The cure might be confirmed if the therapy were to be stopped and no viral rebound occurred.
In March 2019, a second patient, referred to as the "London Patient", was confirmed to be in complete remission of HIV. Like the Berlin Patient, the London Patient received a bone marrow transplant from a donor with the same CCR5 mutation. He has been off antiviral drugs since September 2017, indicating the Berlin Patient was not a "one-off".
Alternative approaches that aim to mimic natural immunity to HIV conferred by the absence or mutation of the CCR5 gene are being pursued in current research efforts. These efforts introduce induced pluripotent stem cells in which CCR5 has been disrupted using the CRISPR/Cas9 gene-editing system.
=== Viral reservoirs ===
The main obstacle to complete elimination of HIV infection by conventional antiretroviral therapy is that HIV is able to integrate itself into the DNA of host cells and rest in a latent state, while antiretrovirals only attack actively replicating HIV. The cells in which HIV lies dormant are called the viral reservoir, and one of the main sources is thought to be central memory and transitional memory CD4+ T cells. In 2014 there were reports of the cure of HIV in two infants, presumably because treatment was initiated within hours of infection, preventing HIV from establishing a deep reservoir. Work is being done to try to activate reservoir cells into replication so that the virus is forced out of latency and can be attacked by antiretrovirals and the host immune system. Targets include histone deacetylase (HDAC), which represses transcription and, if inhibited, can lead to increased cell activation. The HDAC inhibitors valproic acid and vorinostat have been used in human trials with only preliminary results so far.
=== Immune activation ===
Even with all latent virus deactivated, it is thought that a vigorous immune response will need to be induced to clear all the remaining infected cells. Strategies include using cytokines to restore CD4+ cell counts as well as therapeutic vaccines to prime immune responses. One such candidate vaccine is Tat Oyi, developed by Biosantech. This vaccine is based on the HIV protein tat. Animal models have shown the generation of neutralizing antibodies and lower levels of HIV viremia.
=== Sequential mRNA vaccine ===
HIV vaccine development is an active area of research and an important tool for managing the global AIDS epidemic. Research into a vaccine for HIV has been ongoing for decades with no lasting success for preventing infection. The rapid development, though, of mRNA vaccines to deal with the COVID-19 pandemic may provide a new path forward.
Like SARS-CoV-2, the virus that causes COVID-19, HIV has a spike protein. In retroviruses like HIV, the spike protein is formed by two proteins expressed by the Env gene. This viral envelope binds to the host cell's receptor and is what gains the virus entry into the cell. In an mRNA vaccine, messenger RNA (mRNA) contains the instructions for how to make the spike protein. The mRNA is put into lipid-based nanoparticles for drug delivery, a key breakthrough in optimizing the efficiency and efficacy of in vivo delivery. When the vaccine is injected, the mRNA enters cells and joins up with a ribosome, which translates the mRNA instructions into the spike protein. The immune system detects the presence of the spike protein, and B cells, a type of white blood cell, begin to develop antibodies. Should the actual virus later enter the system, the external spike protein will be recognized by memory B cells, whose function is to memorize the characteristics of the original antigen. Memory B cells then produce the antibodies, hopefully destroying the virus before it can bind to another cell and repeat the HIV life cycle.
SARS-CoV-2 and HIV-1 have similarities—notably both are RNA viruses—but there are important differences. As a retrovirus, HIV-1 can insert a copy of its RNA genome into the host's DNA, making total eradication more difficult. The virus is also highly mutable making it a challenge for the adaptive immune system to develop a response. As a chronic infection, HIV-1 and the adaptive immune system undergo reciprocal selective pressures leading to the evolutionary arms race of coevolution.
Broadly neutralizing HIV-1 antibodies, or bnAbs, have been shown to attach to the Env spike protein regardless of the specific HIV mutations, which bodes well for vaccine development. Complicating matters, though, naive B cells—mature B cells that have not yet been exposed to any antigen and that are the progenitors of bnAbs—are rare. Further, the mutation events needed to turn these B cells into bnAb producers are also rare. Because of this, there is a growing consensus that an effective HIV vaccine will need to create not only humoral (antibody-mediated) immunity but also T-cell-mediated immunity.
mRNA vaccines have advantages over traditional vaccines that may help deal with some of the challenges presented by HIV. The mRNA in the vaccine codes only for the spike protein, not the whole virus, so there is no possibility of reverse transcription, in which the virus copies its genetic material into the host's genome. Another advantage compared to traditional vaccines is the speed of development: mRNA vaccines take months, not years, which makes a multipart sequential vaccine regime possible.
Attempts to elicit an immune response that triggers broadly neutralizing antibodies (bnAbs) with a single vaccine dose have been unsuccessful. A multipart sequential mRNA vaccine regime, however, might guide the immune response in the right direction. The first shot triggers an immune response from the correct naive B cells. Later vaccinations encourage the development of these cells further, eventually turning them into memory B cells, and later into plasma cells, which can secrete the broadly neutralizing antibodies:
In essence, the sequential immunization approach represents an attempt to mimic Env evolution that would occur with natural infection.... In contrast to traditional prime/boost strategies, in which the same immunogen is used repeatedly for vaccination, the sequential immunization approach relies on a series of different immunogens with the goal of eventually inducing bnAb(s).
A Phase 1 clinical trial by Scripps Research and the International AIDS Vaccine Initiative of an mRNA vaccine showed that 97 percent of participants had the desired initial "priming" immune response of naive B cells. This is a positive result for developing the first shot in a vaccine sequence. Moderna is partnering with Scripps and the International AIDS Vaccine Initiative on a follow-up Phase 1 clinical trial of an HIV mRNA vaccine (mRNA-1644) starting later in 2021.
== Drug advertisements ==
Direct-to-consumer and other advertisements for HIV drugs were criticized in the past for using healthy, glamorous models rather than typical people with HIV/AIDS, who often present with debilitating conditions or illnesses. Featuring people in unrealistically strenuous activities, such as mountain climbing, proved offensive and insensitive to the suffering of people who are HIV positive. The US FDA reprimanded multiple pharmaceutical manufacturers for publishing such adverts in 2001, as the misleading advertisements harmed consumers by implying unproven benefits and failing to disclose important information about the drugs. By choosing not to present their drugs realistically, some drug companies distorted the public's perceptions, suggesting that HIV was less serious than it is. This contributed to people not wanting to get tested, for fear of being HIV positive, because at the time (particularly in the 1980s and 1990s) having contracted HIV was seen as a death sentence, as there was no known cure. An example of such a case is Freddie Mercury, who died in 1991, aged 45, of AIDS-related pneumonia.
== Beyond medical management ==
The preamble to the World Health Organization's Constitution defines health as "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity." Those living with HIV today are met with other challenges that go beyond the singular goal of lowering their viral load. A 2009 meta-analysis studying the correlates of HIV stigma found that individuals living with a higher stigma burden were more likely to have poorer physical and mental health. Insufficient social support and delayed diagnosis due to decreased frequency of HIV testing and knowledge of risk reduction were cited as some of the reasons. People living with HIV (PLHIV) have lower health-related quality of life (HRQoL) scores than the general population. The stigma of having HIV is often compounded with the stigma of identifying with the LGBTQ community or the stigma of being an injecting drug user (IDU), even though heterosexual sexual transmission accounts for 85% of all HIV-1 infections worldwide. AIDS has been cited as the most heavily stigmatized medical condition among infectious diseases. Part of the consequence of this stigma toward PLHIV is the belief that they are responsible for their status and less deserving of treatment.
A 2016 study sharing the WHO's definition of health critiques its 90-90-90 target goal, part of a larger strategy aiming to eliminate the AIDS epidemic as a public health threat by 2030, arguing that it does not go far enough in ensuring the holistic health of PLHIV. The study suggests that management of HIV and AIDS should go beyond the suppression of viral load and the prevention of opportunistic infection. It proposes adding a 'fourth 90': a new 'quality of life' target that would focus specifically on increasing the quality of life of those able to suppress their viral load to undetectable levels, along with new metrics to track progress toward that target. This study serves as an example of the shifting paradigm in the dynamics of the health care system, from heavily 'disease-oriented' to more 'human-centered'. Though questions remain about what exactly a more 'human-centered' method of treatment looks like in practice, it generally asks what kind of support, other than medical support, PLHIV need to cope with and eliminate HIV-related stigma. One example is campaigns and marketing aimed at educating the general public in order to reduce misplaced fears of contracting HIV. Also encouraged is capacity-building and the guided development of PLHIV into leadership roles, with the goal of greater representation of this population in decision-making positions. Structural legal intervention has also been proposed, specifically legal protections against discrimination and improved access to employment opportunities. On the side of the practitioner, greater competence regarding the experience of people living with HIV is encouraged, alongside the promotion of an environment of nonjudgment and confidentiality.
Psychosocial group interventions such as psychotherapy, relaxation, group support, and education may have some beneficial effects on depression in HIV positive people.
== Food insecurity ==
The successful treatment and management of HIV/AIDS is affected by many factors, including adherence to prescribed medications, prevention of opportunistic infections, and access to food. Food insecurity is a condition in which households lack access to adequate food because of limited money or other resources. It is a global issue that affects billions of people yearly, including those living in developed countries.
Food insecurity is a major public health disparity in the United States, significantly affecting minority groups, people living at or below the poverty line, and those living with one or more comorbidities. As of December 31, 2017, there were approximately 126,742 people living with HIV/AIDS (PLWHA) in NYC, of whom 87.6% can be described as living with some level of poverty and food insecurity, as reported by the NYC Department of Health on March 31, 2019. Having access to a consistent food supply that is safe and healthy is an important part of the treatment and management of HIV/AIDS. PLWHA are also greatly affected by food inequities and food deserts, which cause them to be food insecure. Food insecurity, which can cause malnutrition, can also negatively impact HIV treatment and recovery from opportunistic infections. PLWHA require additional calories and nutritional support, including foods free from contamination, to prevent further immune compromise. Food insecurity can exacerbate the progression of HIV/AIDS and can prevent PLWHA from consistently following their prescribed regimen, leading to poor outcomes.
It is imperative that food insecurity among PLWHA be addressed and rectified to reduce this health inequity. It is important to recognize that socioeconomic status, access to medical care, geographic location, public policy, race and ethnicity all play a pivotal role in the treatment and management of HIV/AIDS. The lack of a sufficient and constant income limits the options for food, treatment, and medications. The same can be inferred for oppressed and marginalized groups in society, who may be less inclined or encouraged to seek care and assistance. Endeavors to address food insecurity should be included in HIV treatment programs and may help improve health outcomes if they focus on health equity among the diagnosed as much as on medications. Access to consistently safe and nutritious food is one of the most important facets of ensuring PLWHA are provided the best possible care. By altering the narratives around HIV treatment so that more support can be garnered to reduce food insecurity and other health disparities, mortality rates will decrease for people living with HIV/AIDS.
== See also ==
AV-HALT
Discovery and development of HIV-protease inhibitors
Discovery and development of non-nucleoside reverse-transcriptase inhibitors
Discovery and development of nucleoside and nucleotide reverse-transcriptase inhibitors
HIV capsid inhibition
== References ==
== Further reading ==
Strayer DS, Akkina R, Bunnell BA, Dropulic B, Planelles V, Pomerantz RJ, et al. (June 2005). "Current status of gene therapy strategies to treat HIV/AIDS". Molecular Therapy. 11 (6): 823–42. doi:10.1016/j.ymthe.2005.01.020. PMID 15922953.
== External links ==
HIVinfo at US Department of Health and Human Services | Wikipedia/Antiretroviral_drug |
Disease Informatics (also called Infectious Disease Informatics) addresses some major challenges to global public health, demanding solid medical intervention but also credible data-centric strategies [1]. With the rapid advancement of genetic tools that analyze the DNA and RNA of pathogens to identify, track, and characterize them, alongside artificial intelligence, the field of Infectious Disease Informatics (IDI) has emerged as an area of expertise.
Because infectious diseases contribute to millions of deaths every year, the ability to identify and understand disease diffusion is crucial for society to apply control and prevention measures. The knowledge gained by researchers in the field of disease informatics can be used to aid policymakers' decisions on issues such as spreading public awareness, updating the training of health professionals, and buying vaccines.
Aside from aiding policymakers' decisions, the goals of disease informatics include improved identification of biomarkers for transmissibility, improved vaccine design, a deeper understanding of host-pathogen interactions, and the optimization of antimicrobial development.
In parallel, recent insights from the COVID-19 pandemic emphasize the essential involvement of data science in epidemic forecasting, risk modeling, and policy support. Given the role infectious diseases play in large numbers of deaths each year, the ability to recognize disease transmission is pivotal for prevention and societal safeguarding. Together, these approaches mark a paradigm shift: managing infectious diseases no longer relies solely on biological knowledge, but equally on computational insights and collaborative information systems.
== Background ==
Throughout most of history, human understanding of epidemics and outbreaks combined false theories with a surprising degree of practical sense. Detection and management of infectious disease outbreaks long relied mostly on manual reporting, clinical observation, and delayed laboratory confirmation. A reportable condition is an infectious disease for which timely reporting of individual cases supports the management and prevention of an outbreak. These traditional methods often suffered from slow response times and limited scalability, factors that proved critical during fast-moving outbreaks such as SARS in 2003 and H1N1 in 2009. This need gave rise to Infectious Disease Informatics, a field that blends epidemiology, computer science, bioinformatics, and biosurveillance to enhance the management of infectious diseases.
=== Case study of HIV and SARS: a network analysis of comorbidity risk at the time of outbreak ===
Mortality and morbidity rates relate to the term "comorbidity", which refers to the co-existence of two or more diseases or disorders in an individual and is associated with an increased likelihood of health conditions due to infection. Viruses that attack the respiratory system have emerged as a threat to global medical security; Severe Acute Respiratory Syndrome (SARS), caused by a coronavirus (CoV) termed the SARS-associated coronavirus (SARS-CoV), reached pandemic proportions. This case study describes an approach to the quantitative discovery of disease comorbidities in a population, drawing on accessible mRNA expression data, disease-gene relations, protein mapping, relations between co-existing diseases, and drug-disease data.
=== Connection to broader Health Informatics domain ===
Disease Informatics is a branch of the broader field of Health Informatics, which focuses on the collection of information and the use of communication technology in medical facilities. Health Informatics comprises diverse domains that serve as a foundation for disease-focused applications such as telemedicine, including electronic health records (EHRs), clinical systems designed for the storage, retrieval and display of electronic data collected over the time a patient is under care.
The success of IDI systems depends on how well they can access and process clinical data from hospital information systems. Modern health informatics infrastructures facilitate real-time data sharing, the integration of diverse information systems, and the application of international standards such as HL7 FHIR (Fast Healthcare Interoperability Resources); these capabilities are essential for enabling timely and accurate infectious disease surveillance.
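As an illustration of the kind of interoperability FHIR enables, the following Python sketch constructs a laboratory result as a FHIR Observation resource and submits it to a surveillance endpoint. This is a minimal sketch, not a reference implementation: the server URL, patient reference and absence of authentication are hypothetical, and a real deployment would use its institution's FHIR endpoint and handle errors and terminology bindings properly.

```python
# Minimal sketch: reporting a lab result as an HL7 FHIR Observation.
# The endpoint and patient id below are hypothetical placeholders.
import requests

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {  # LOINC code identifying the test performed
        "coding": [{"system": "http://loinc.org",
                    "code": "94500-6",
                    "display": "SARS-CoV-2 RNA [Presence] by NAA"}]
    },
    "subject": {"reference": "Patient/example-123"},  # hypothetical id
    "valueCodeableConcept": {  # coded result value
        "coding": [{"system": "http://snomed.info/sct",
                    "code": "260373001",
                    "display": "Detected"}]
    },
}

FHIR_BASE = "https://fhir.example.org/r4"  # hypothetical FHIR server
resp = requests.post(f"{FHIR_BASE}/Observation",
                     json=observation,
                     headers={"Content-Type": "application/fhir+json"})
print(resp.status_code)
```

Because every conforming system exchanges the same resource shapes, a health department can in principle aggregate such Observations from many hospitals without site-specific parsers.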
== The emerging role of informatics in public health practice ==
Control of infectious disease is a cornerstone of public health. Emerging informatics in public health represents a radical shift in how health data are collected, monitored, analyzed and utilized to improve public and population health outcomes.
=== Essential services in public health practice ===
Public health practice is grounded in three core functions: assessment, assurance and policy development. These encompass several essential services aimed at improving public health:
Monitoring health status, diagnosing problems, and educating and empowering communities to overcome medical issues.
Enforcing government regulations and laws that prioritize the safety of societal health.
Assuring a capable health services workforce.
Conducting continuous study and research into innovative techniques.
=== Syndromic surveillance as a tool for early detection ===
Infectious Disease Informatics (IDI) plays a vital role in enabling early detection of and rapid response to outbreaks, particularly through tools like syndromic surveillance. Syndromic surveillance, a form of public health surveillance, focuses on how a contagious disease can be identified and studied through monitoring of current public health data. These tools are increasingly used for effective detection of and response to any infectious disease, whether a natural outbreak or the result of a laboratory accident.
The surveillance system works by applying natural language processing to identify the potential primary indicators of an epidemic:
Patients are encountered while seeking medical care at healthcare facilities.
De-identified data, such as symptoms and patient characteristics, are collected.
The data are shared with state or local health departments or health information exchanges (HIEs).
To enable early detection of public health risks, the National Syndromic Surveillance Program (NSSP), hosted by the CDC, aggregates this information via the BioSense platform.
The CDC supports surveillance by providing funding, training for health departments, technical and project assistance, and analytical tools for data analysis.
This NSSP network of public health professionals collaborates to build capacity through training, live webinars, and joint efforts to improve surveillance methods and emergency responses.
== Computational methods ==
=== Artificial intelligence ===
The use of artificial intelligence (AI) tools, such as machine learning and natural language processing (NLP), in disease informatics increases efficiency by automating and speeding up several data analysis processes. Advances in AI and the increased accessibility of data aid predictive modeling and public health surveillance. AI uses predictive modeling to examine vast data sets and forecast future outcomes, increasing the ability to predict disease outbreaks and helping to guide public health interventions. AI also provides a valuable avenue by combining spatial modeling with geographic information system (GIS) data to uncover geographic patterns (for example, disease clusters), supporting data-driven decision-making for local-level predictions of disease diffusion. As AI continues to grow, more advances in its use in disease informatics are expected.
=== Machine learning ===
Machine learning (ML) techniques aid disease informatics with their capability to predict, spatially and temporally, the progression and transmission of infectious diseases. ML algorithms can play a pivotal role in controlling the spread of an infectious disease over time by predicting its cause and further spread, analyzing extensive, complex data sets to identify patterns across varying types of data such as demographics, electronic health records and environmental conditions. Researchers apply algorithms to data sets (for example, genomic data, social media posts, and health records) to predict the potential sources of an outbreak, the likelihood of an individual contracting a certain disease, and the number of cases of a disease in a given region. To analyze large, complex data sets and identify trends, techniques such as support vector machines, ensemble learning, conditional random fields (CRFs) and decision trees are used.
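As a concrete, toy illustration of the kind of supervised model described above (not a validated surveillance system), the following Python sketch trains a random-forest ensemble on synthetic district-level features to flag elevated outbreak risk; the features, labels and thresholds are invented for the example.

```python
# Toy outbreak-risk classifier on synthetic data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 500
# Hypothetical features per district: population density, rainfall,
# and prior case rate, all scaled to [0, 1].
X = rng.random((n, 3))
# Synthetic label: risk rises with density and rainfall, plus noise.
risk_score = 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.2 * rng.normal(size=n)
y = (risk_score > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

A real system would replace the synthetic matrix with curated epidemiological features and validate the model prospectively before it informed any public health decision.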
=== Text mining ===
The use of text mining has become a beneficial avenue for querying large amounts of data to aid in gene mapping and the analysis of genomes. This tool provides the ability to query medical databases for processes such as genomic mapping, integrating genomic and proteomic data to map genes and highlight their interrelationships with various diseases. Data on targeted sequences can be retrieved in two ways: through a similarity search or a keyword search. A similarity search (using software like BLAST) is performed by entering a known sequence as a query to find sequences similar to it. A keyword search (public tools include SRS, Entrez, and ACNUC) uses annotations that define the features of genes, such as sequence positions, to retrieve the desired gene sequences.
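The two retrieval styles can be sketched in a few lines of Python with Biopython (assumed installed); both calls require network access, the contact e-mail is a placeholder, and the query sequence is an invented fragment rather than a real gene:

```python
# Keyword search vs. similarity search, sketched with Biopython.
from Bio import Entrez
from Bio.Blast import NCBIWWW

Entrez.email = "user@example.org"  # placeholder contact address

# Keyword search: query GenBank annotations through Entrez.
handle = Entrez.esearch(
    db="nucleotide",
    term="Plasmodium falciparum[Organism] AND circumsporozoite",
)
record = Entrez.read(handle)
print(record["IdList"][:5])  # IDs of the first few matching sequences

# Similarity search: submit a known sequence to BLAST against 'nt'.
query_seq = "ATGGCTTCTCGGAAACTGGCCGTTCTG"  # invented example fragment
result_handle = NCBIWWW.qblast("blastn", "nt", query_seq)
print(result_handle.read()[:200])  # beginning of the XML result
```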
=== NLP ===
Natural language processing (NLP) is widely considered for analyzing patient data describing symptoms, as this information is often provided in online health communities. It converts unstructured text into usable data for early detection and diagnosis. One NLP tool, PubTator 3.0, identifies relations across entities such as genes, chemicals, variants, infections, species and cell lines to support literature search.
== Limitations and future prospects ==
=== Accessibility concerns ===
The accuracy of these AI tools and techniques relies on providing them with high-quality, comprehensive data. Accessibility and collection of such data remain an ongoing challenge, because much of the data pulled is incomplete, noisy, and contains human errors (e.g., grammar, abbreviations, spelling), which means the data must undergo thorough cleaning (data cleansing) before it is eligible for use.
The formation of a standardized taxonomy for data analysis and predictive modeling would facilitate research collaboration, accelerate decisions, and help select the right predictive models to be used.
One method being used is federated learning, which allows a model to be trained across multiple centers without the need to share raw data, keeping the data safe within its source.
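The idea can be illustrated with a toy federated-averaging (FedAvg) loop in Python; the logistic-regression model, the random "center" data sets and the hyperparameters are all invented for the sketch, and a production system would add secure aggregation and privacy accounting:

```python
# Toy FedAvg: each center trains locally; only weights are shared.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One center's local training: logistic regression by gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on local data
    return w

rng = np.random.default_rng(1)
# Three hypothetical centers, each holding private features and labels.
centers = [(rng.random((100, 3)), rng.integers(0, 2, 100)) for _ in range(3)]

global_w = np.zeros(3)
for _ in range(10):                        # communication rounds
    local_ws = [local_update(global_w, X, y) for X, y in centers]
    global_w = np.mean(local_ws, axis=0)   # server averages the weights
print(global_w)                            # shared model; no raw data moved
```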
Another concern is the potential for bias and overfitting in the predictive models, which could lead to inaccurate predictions. Human error can persist even when these tools automate tasks, because AI tools trained incorrectly will produce inaccurate output. A relevant study suggests that pairing AI with wearable devices and other emerging technology would address some of these challenges by providing real-time data for the models, which could increase the accuracy of the data in its raw form, reduce the time spent cleaning it, and allow the models to make more accurate predictions.
=== Ethical concerns ===
A critical concern for using AI and predictive modeling in disease informatics is data security and privacy. The data sources being used (electronic health records, demographics, etc.) contain highly sensitive information that must be protected for all parties involved. Any models or techniques being used need to be in compliance with local governmental regulations and laws such as HIPAA in the United States. The data used must also undergo rigorous data anonymization and de-identification protocols to protect patient privacy.
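As a toy illustration of one small de-identification step (far short of a full protocol such as HIPAA Safe Harbor de-identification), the following Python sketch replaces a direct identifier with a salted one-way hash before a record leaves its source; the salt, field names and record are invented for the example:

```python
# Toy pseudonymization: salted hash of a direct identifier.
import hashlib

SALT = b"site-specific-secret"  # hypothetical per-site secret, kept private

def pseudonymize(patient_id: str) -> str:
    """Return a stable, non-reversible pseudonym for an identifier."""
    return hashlib.sha256(SALT + patient_id.encode()).hexdigest()[:16]

record = {"patient_id": "MRN-0042", "diagnosis": "dengue", "age": 34}
deidentified = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(deidentified)  # same record, identifier replaced by a pseudonym
```

Real de-identification also has to account for quasi-identifiers (age, location, rare diagnoses) that can re-identify patients in combination.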
Similarly, in Europe, the Health Technology Assessment Regulation (HTAR) provides for the evaluation of the benefits and consequences of new medical technologies in clinical trials. These rules apply when a computational medical experiment or tool, such as robotic surgery or diagnostic software, needs to demonstrate its safety and potential effectiveness in the medical field.
== References == | Wikipedia/Disease_informatics |
Vaccine resistance is the evolutionary adaptation of pathogens to infect and spread through vaccinated individuals, analogous to antimicrobial resistance. It concerns both human and animal vaccines. Although the emergence of a number of vaccine-resistant pathogens has been well documented, this phenomenon is nevertheless much rarer and less of a concern than antimicrobial resistance.
Vaccine resistance may be considered a special case of immune evasion, namely evasion of the immunity conferred by the vaccine. Since the immunity conferred by a vaccine may differ from that induced by infection with the pathogen, such immune evasion may be easier (in the case of an inefficient vaccine) or more difficult (as would be the case for a universal flu vaccine). We speak of vaccine resistance only if the immune evasion is the result of evolutionary adaptation of the pathogen (and not a feature the pathogen had before any evolutionary adaptation to the vaccine) and the adaptation is driven by the selective pressure induced by the vaccine (this excludes immune evasion resulting from genetic drift, which would be present even without vaccinating the population).
Some of the causes advanced for less frequent emergence of resistance are that
vaccines are mostly used for prophylaxis, that is before infection occurs, and usually act to suppress the pathogen before the host becomes infectious
most vaccines target multiple antigenic sites of the pathogen
different hosts may produce different immune responses to the same pathogen
For diseases that confer long lasting immunity after exposure, typically childhood diseases, it was argued that a vaccine may provide the same immune response as natural infection, so it is expected that there should be no vaccine resistance.
If vaccine resistance emerges, the vaccine may retain some level of protection against serious infection, possibly by modifying the immune response of the host away from immunopathology.
The best-known cases of vaccine resistance are for the following diseases:
animal diseases
Marek's disease, where more virulent strains actually emerged after vaccination because the vaccine did not protect against infection and transmission, only against serious forms of the disease
Yersinia ruckeri, because a single mutation was sufficient to generate vaccine resistance
avian metapneumovirus
human diseases
Streptococcus pneumoniae, because of recombination with another serotype not targeted by the vaccine
hepatitis B virus, because the vaccine targeted a single site formed by 9 amino acids
Bordetella pertussis, because not all serotypes were targeted and, later, because acellular vaccines targeted only a few antigens
Other less documented cases are for avian influenza, avian reovirus, Corynebacterium diphtheriae, feline calicivirus, H. influenzae, infectious bursal disease virus, Neisseria meningitidis, Newcastle disease virus, and porcine circovirus type 2.
== References == | Wikipedia/Vaccine_resistance |
In epidemiology, the attack rate is the proportion of an at-risk population that contracts the disease during a specified time interval. It is used in hypothetical predictions and during actual outbreaks of disease. An at-risk population is defined as one that has no immunity to the attacking pathogen, which can be either a novel pathogen or an established pathogen. It is used to project the number of infections to expect during an epidemic. This aids in marshalling resources for delivery of medical care as well as production of vaccines and/or anti-viral and anti-bacterial medicines.
The rate is arrived at by taking the number of new cases in the population at risk and dividing by the number of persons at risk in the population.
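Written as a formula, this is a direct restatement of the definition above:

```latex
\text{attack rate} \;=\; \frac{\text{number of new cases during the time interval}}{\text{number of persons at risk at the start of the interval}}
```

For example, if 28 of 400 susceptible attendees at a gathering become ill during the outbreak period, the attack rate is 28/400 = 7% (the figures here are invented for illustration).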
== See also ==
Incidence (epidemiology)
Compartmental models in epidemiology
Herd immunity
Risk assessment in public health
Vaccine-naive
== References ==
== External links ==
The International Biometric Society
The Collection of Biostatistics Research Archive
Guide to Biostatistics (MedPageToday.com) Archived 2012-05-22 at the Wayback Machine | Wikipedia/Attack_rate |
Mosquito-borne diseases or mosquito-borne illnesses are diseases caused by bacteria, viruses or parasites transmitted by mosquitoes. Nearly 700 million people contract mosquito-borne illnesses each year, resulting in more than a million deaths.
Diseases transmitted by mosquitoes include malaria, dengue, West Nile virus, chikungunya, yellow fever, filariasis, tularemia, dirofilariasis, Japanese encephalitis, Saint Louis encephalitis, Western equine encephalitis, Eastern equine encephalitis, Venezuelan equine encephalitis, Ross River fever, Barmah Forest fever, La Crosse encephalitis, and Zika fever, as well as the newly detected Keystone virus and Rift Valley fever. A preprint by an Australian research group argues that Mycobacterium ulcerans, the causative pathogen of Buruli ulcer, is also transmitted by mosquitoes.
There is no evidence as of April 2020 that COVID-19 can be transmitted by mosquitoes, and it is extremely unlikely this could occur.
== Types ==
=== Protozoa ===
The female mosquito of the genus Anopheles may carry the malaria parasite. Five different species of Plasmodium cause malaria in humans: Plasmodium falciparum, Plasmodium malariae, Plasmodium ovale, Plasmodium knowlesi and Plasmodium vivax (see Plasmodium). Worldwide, malaria is a leading cause of premature mortality, particularly in children under the age of five, with an estimated 207 million cases and more than half a million deaths in 2012, according to the World Malaria Report 2013 published by the World Health Organization (WHO). The death toll increased to one million as of 2018 according to the American Mosquito Control Association.
=== Bacterial ===
In January 2024, a publication by an Australian research group demonstrated significant genetic similarity between Mycobacterium ulcerans found in humans and possums and M. ulcerans detected by PCR screening of trapped Aedes notoscriptus mosquitoes, and concluded that M. ulcerans, the causative pathogen of Buruli ulcer, is transmitted by mosquitoes.
=== Myiasis ===
Botflies are known to parasitize humans or other mammalians, causing myiasis, and to use mosquitoes as intermediate vector agents to deposit eggs on a host. The human botfly Dermatobia hominis attaches its eggs to the underside of a mosquito, and when the mosquito takes a blood meal from a human or an animal, the body heat of the mammalian host induces hatching of the larvae.
=== Helminthiasis ===
Some species of mosquito can carry the filariasis worm, a parasite that causes a disfiguring condition (often referred to as elephantiasis) characterized by a great swelling of several parts of the body; worldwide, around 120 million people are living with a filariasis disability.
=== Virus ===
The viral diseases yellow fever, dengue fever, Zika fever and chikungunya are transmitted mostly by Aedes aegypti mosquitoes.
Other viral diseases like epidemic polyarthritis, Rift Valley fever, Ross River fever, St. Louis encephalitis, West Nile fever, Japanese encephalitis, La Crosse encephalitis and several other encephalitic diseases are carried by several different mosquitoes. Eastern equine encephalitis (EEE) and Western equine encephalitis (WEE) occur in the United States where they cause disease in humans, horses, and some bird species. Because of the high mortality rate, EEE and WEE are regarded as two of the most serious mosquito-borne diseases in the United States. Symptoms range from mild flu-like illness to encephalitis, coma, and death.
Viruses carried by arthropods such as mosquitoes or ticks are known collectively as arboviruses. West Nile virus was accidentally introduced into the US in 1999 and by 2003 had spread to almost every state with over 3,000 cases in 2006.
Other species of Aedes as well as Culex and Culiseta are also involved in the transmission of disease.
Myxomatosis is spread by biting insects, including mosquitoes.
== Transmission ==
A mosquito's period of feeding is often undetected; the bite only becomes apparent because of the immune reaction it provokes. When a mosquito bites a human, it injects saliva and anti-coagulants. With the initial bite to an individual, there is no reaction, but with subsequent bites, the body's immune system develops antibodies. The bites become inflamed and itchy within 24 hours. This is the usual reaction in young children. With more bites, the sensitivity of the human immune system increases, and an itchy red hive appears in minutes where the immune response has broken capillary blood vessels and fluid has collected under the skin. This type of reaction is common in older children and adults. Some adults can become desensitized to mosquitoes and have little or no reaction to their bites, while others can become hyper-sensitive with bites causing blistering, bruising, and large inflammatory reactions, a response known as skeeter syndrome.
One study found Dengue virus and Zika virus altered the skin bacteria of rats in a way that caused their body odor to be more attractive to mosquitoes.
== Signs and symptoms ==
Symptoms of illness are specific to the type of viral infection and vary in severity, based on the individuals infected.
=== Zika virus ===
Symptoms vary in severity, from mild unnoticeable symptoms to more common symptoms like fever, rash, headache, achy muscle and joints, and conjunctivitis. Symptoms can last several days to weeks, but death resulting from this infection is rare.
=== West Nile virus, dengue fever ===
Most people infected with the West Nile virus usually do not develop symptoms. However, some individuals can develop cases of severe fatigue, weakness, headaches, body aches, joint and muscle pain, vomiting, diarrhea, and rash, which can last for weeks or months. More serious symptoms have a greater risk of appearing in people over 60 years of age, or those with cancer, diabetes, hypertension, and kidney disease.
Dengue fever is mostly characterized by high fever, headaches, joint pain, and rash. However, more severe instances can lead to hemorrhagic fever, internal bleeding, and breathing difficulty, which can be fatal.
=== Chikungunya ===
People infected with this virus can develop sudden onset fever along with debilitating joint and muscle pain, rash, headache, nausea, and fatigue. Symptoms can last a few days or be prolonged to weeks and months. Although patients can recover completely, there have been cases in which joint pain has persisted for several months and can extend beyond that for years. Other people can develop heart complications, eye problems, and even neurological complications.
=== Malaria ===
Early symptoms of malaria can start anywhere from 10–15 days after exposure and can present as fever, headache and chills. Symptom severity may vary depending on age and previous exposures; children under five, pregnant women and the immunocompromised are at higher risk of more severe symptoms. Severe symptoms range from extreme lethargy, loss of consciousness, convulsions, difficulty breathing, bloody urine, jaundice and irregular bleeding to death.
== Mechanism ==
Mosquitoes carrying such arboviruses are able to stay healthy due to their immune system being able to recognize the virions as foreign particles and "chop off" the virus' genetic coding, rendering it inert. A human is infected with a mosquito-borne virus when a female mosquito carrying the virus, along with its viral particles that have yet to be destroyed by the mosquito, bites a human by penetrating the skin and releasing the virus into the bloodstream. It is not completely known how mosquitoes handle eukaryotic parasites to carry them without being harmed. Data has shown that the malaria parasite Plasmodium falciparum alters the mosquito vector's feeding behavior by increasing frequency of biting in infected mosquitoes, thus increasing the chance of transmitting the parasite.
The mechanism of transmission starts with the injection of the parasite into the victim's blood when a malaria-infected female Anopheles mosquito bites a human being. The parasite uses human liver cells as hosts for maturation, where it continues to replicate and grow, moving into other areas of the body via the bloodstream. The infection cycle continues when other mosquitoes bite the same individual: the biting mosquito ingests the parasite, allowing it to transmit malaria to another person through the same mode of bite injection.
Flaviviridae viruses transmissible via vectors like mosquitoes include West Nile virus and yellow fever virus, which are single stranded, positive-sense RNA viruses enveloped in a protein coat.
Once inside the host's body, the virus attaches itself to a cell's surface and enters through receptor-mediated endocytosis, meaning that the proteins and genetic material of the virus are taken into the host cell. The viral RNA undergoes several changes and processes inside the host cell so that more viral RNA can be released, replicated and assembled to infect neighboring host cells. Mosquito-borne flaviviruses also encode viral antagonists to the innate immune system in order to cause persistent infection in mosquitoes and a broad spectrum of diseases in humans. The data on transmissibility via insect vectors of hepatitis C virus, also belonging to the family Flaviviridae (as well as for hepatitis B virus, belonging to the family Hepadnaviridae), are inconclusive. The WHO states that "There is no insect vector or animal reservoir for HCV", while there are experimental data supporting at least the presence of PCR-detectable hepatitis C viral RNA in Culex mosquitoes for up to 13 days.
Currently, there are no specific vaccine therapies for West Nile virus approved for humans; however, vaccines are available and some show promise for animals, as a means to intervene with the mechanism of spreading such pathogens.
== Diagnosis ==
Doctors can typically identify a mosquito bite by sight.
A doctor will perform a physical examination and ask about medical and travel history. Patients should be ready to give details of any international trips, including travel dates, countries visited and any contact with mosquitoes.
=== Dengue fever ===
Diagnosing dengue fever can be difficult, as its symptoms often overlap with many other diseases such as malaria and typhoid fever. Laboratory tests can detect evidence of the dengue viruses, however the results often come back too late to assist in directing treatment.
=== West Nile virus ===
Medical testing can confirm the presence of West Nile fever or a West Nile-related illness, such as meningitis or encephalitis. If infected, a blood test may show a rising level of antibodies to the West Nile virus. A lumbar puncture (spinal tap) is the most common way to diagnose meningitis, by analyzing the cerebrospinal fluid surrounding the brain and spinal cord. The fluid sample may show an elevated white cell count and antibodies to the West Nile virus if the individual was exposed. In some cases, electroencephalography (EEG) or a magnetic resonance imaging (MRI) scan can help detect brain inflammation.
=== Zika virus ===
A Zika virus infection might be suspected if symptoms are present and an individual has traveled to an area with known Zika virus transmission. Zika virus can only be confirmed by a laboratory test of body fluids, such as urine or saliva, or by blood test.
=== Chikungunya ===
Laboratory blood tests can identify evidence of chikungunya or other similar viruses such as dengue and Zika. A blood test may confirm the presence of IgM and IgG anti-chikungunya antibodies. IgM antibodies are highest 3 to 5 weeks after the beginning of symptoms and continue to be present for about 2 months.
== Prevention ==
There is a re-emergence of mosquito vectored viruses (arthropod-borne viruses) called arboviruses carried by the Aedes aegypti mosquito. Examples are the Zika virus, chikungunya virus, yellow fever and dengue fever. The re-emergence of the viruses has been at a faster rate, and over a wider geographic area, than in the past. The rapid re-emergence is due to expanding global transportation networks, the mosquito's increasing ability to adapt to urban settings, the disruption of traditional land use and the inability to control expanding mosquito populations. Like malaria, arboviruses do not have a vaccine. (The only exception is yellow fever.) Prevention is focused on reducing the adult mosquito populations, controlling mosquito larvae and protecting individuals from mosquito bites. Depending on the mosquito vector, and the affected community, a variety of prevention methods may be deployed at one time.
Mosquito-borne diseases are indirectly contagious: a mosquito must first become infected by biting a patient and then transfer the pathogen to the next person, so both need to be in the same general area. Mosquito control measures during the construction of the Panama Canal provide a successful case study of reducing outbreaks to zero malaria and zero yellow fever; among the measures applied, patients were aggressively treated in off-site facilities. Most current testing for mosquito-borne diseases is extremely costly, often requiring expensive equipment, resources, and laboratory staff. There is an increasing need for low-cost, accessible, easily readable and dispensable assays that can detect the presence of these diseases. Further research into such point-of-care detection methods, especially in rural areas where dengue is most prevalent, would allow for increased monitoring, detection and prevention of mosquito-borne viruses.
=== Insecticidal nets and indoor residual spraying ===
The use of insecticide-treated mosquito nets (ITNs) is at the forefront of preventing the mosquito bites that cause malaria. The prevalence of ITNs in sub-Saharan Africa grew from 3% of households in 2000 to 50% in 2010, with over 254 million insecticide-treated nets distributed throughout sub-Saharan Africa for use against the mosquito vectors Anopheles gambiae and Anopheles funestus, which carry malaria. Because Anopheles gambiae feeds indoors (endophagic) and rests indoors after feeding (endophilic), ITNs interrupt the mosquito's feeding pattern. ITNs continue to offer protection even after there are holes in the nets, because their excito-repellency properties reduce the number of mosquitoes that enter the home. The World Health Organization (WHO) recommends treating ITNs with the pyrethroid class of insecticides. There is an emerging concern of mosquito resistance to the insecticides used in ITNs: twenty-seven sub-Saharan African countries have reported Anopheles vector resistance to pyrethroid insecticides.
Indoor spraying of insecticides is another prevention method widely used to control mosquito vectors. To help control the Aedes aegypti mosquito, homes are sprayed indoors with residual insecticide applications. Indoor residual spraying (IRS) reduces the female mosquito population and mitigates the risk of dengue virus transmission. Indoor residual spraying is completed usually once or twice a year. Mosquitoes rest on walls and ceilings after feeding and are killed by the insecticide. Indoor spraying can be combined with spraying the exterior of the building to help reduce the number of mosquito larvae and subsequently, the number of adult mosquitoes.
This measure works best in cities and urban areas with running water, where people do not need indoor water containers for their daily consumption. First, according to mosquito-rearing protocols, one larval habitat could release 1,000 adult mosquitoes in 6–10 days, meaning about 100 mosquitoes could emerge from a 1-liter habitat per day; where people store their water in much larger volumes, those containers become at-home mosquito habitats from which adults emerge not all at once but gradually throughout the day. At best, spraying kills all insects alive at the time, not those newly emerging. Second, people are wary, and think twice about any introduction of poison into their own home.
Therefore, for prevention to be effective, it is necessary to kill the larvae and pupae in people's houses without contaminating their water, for example by suffocating them.
=== Female mosquito trap ===
Only female mosquitoes bite, and only warm-blooded animals; they are able to identify and target their hosts from 1–3 miles away in real time. Just as humans identify distant targets through vision, by the rays they emit, mosquitoes must be able to see warmth, or thermal images, because warmth is an obligatory condition of their hunt and electromagnetic radiation is the only medium with miles-long atmospheric reach. For a trap to target only female mosquitoes, it must exploit this capacity to see thermal images by using warmth as an attractant, or a warm lure: given side-by-side 37 °C, 40 °C and 42 °C thermal footprints, mosquitoes show a distinct preference for the warmer one first. A 42 °C trap in front of a house can keep the front yard bite-free for humans and mammalian pets, but not for birds, whose body temperature is also around 42 °C.
=== Personal protection methods ===
There are other methods an individual can use for protection from mosquito bites: limiting exposure from dusk to dawn, when the majority of mosquitoes are active, and wearing long sleeves and long pants during that period. Placing screens on windows and doors is a simple and effective means of reducing the number of mosquitoes indoors. Anticipating mosquito contact and using a topical mosquito repellent with icaridin or DEET is also recommended. Draining or covering water receptacles, both indoors and outdoors, is another simple but effective prevention method. Removing debris and tires, cleaning drains, and cleaning gutters help control larvae and reduce the number of adult mosquitoes.
=== Vaccines ===
There is a vaccine for yellow fever, the 17D vaccine, which was developed in the 1930s and is still in use today. The initial yellow fever vaccination provides lifelong protection for most people, with immunity achieved within 30 days of the vaccine. Reactions to the yellow fever vaccine have included mild headache, fever, and muscle aches. There are rare cases of individuals presenting with symptoms that mirror the disease itself. The risk of complications from the vaccine is greater for individuals over 60 years of age. In addition, the vaccine is not usually administered to babies under nine months of age, pregnant women, people with allergies to egg protein, or individuals living with AIDS/HIV. The World Health Organization (WHO) reports that 105 million people were vaccinated for yellow fever in West Africa from 2000 to 2015.
To date, there are relatively few vaccines against mosquito-borne diseases, because most of the viruses and bacteria transmitted by mosquitoes are highly mutable. The National Institute of Allergy and Infectious Diseases (NIAID) began Phase 1 clinical trials of a new vaccine intended to be nearly universal in protecting against the majority of mosquito-borne diseases.
==== Dengvaxia ====
Dengvaxia, developed by Sanofi-Pasteur, was the first dengue vaccine available in the United States. Dengvaxia (CYD-TDV) is a live attenuated vaccine, meaning it consists of a weakened pathogen that provides the human immune system with protective antigens and greater long-term immunity. A laboratory-confirmed previous dengue infection is required to receive the vaccine. Three doses are required for full protection against dengue: dose 1 is given immediately after confirmation of a previous dengue infection, dose 2 six months after the first dose, and dose 3 six months after the second. Statistics have shown Dengvaxia to protect against dengue illness in 8 out of 10 children who contracted dengue virus prior to receiving the vaccine.
However, Sanofi-Pasteur, the manufacturer of Dengvaxia, has recently stopped manufacturing the vaccine, citing a lack of demand.
==== TAK-003 ====
In May 2024, TAK-003 became the second dengue vaccine to be prequalified by the World Health Organization (WHO). This live-attenuated vaccine, developed by Takeda, is similar to Dengvaxia in that it contains weakened versions of the four variants of dengue virus. The difference between the two vaccines is that TAK-003 can be administered without a prior dengue infection, and it also induces cellular immunity against dengue virus alongside antibody-based immunity. The vaccine is administered in two doses, three months apart.
=== Education and community involvement ===
The arboviruses have expanded their geographic range and infected populations that had no recent community knowledge of the diseases carried by the Aedes aegypti mosquito. Education and community awareness campaigns are necessary for prevention to be effective. Communities are educated on how the disease is spread, how they can protect themselves from infection, and the symptoms of infection. Community health education programs can identify and address the social, economic and cultural issues that can hinder preventative measures. Community outreach and education programs can identify which preventative measures a community is most likely to employ, leading to a targeted prevention method with a higher chance of success in that particular community. Community outreach and education include engaging community health workers and local healthcare providers, local schools and community organizations to educate the public on mosquito vector control and disease prevention.
== Treatments ==
=== Yellow fever ===
Numerous drugs have been used to treat yellow fever, to date with minimal success. Patients with multisystem organ involvement require critical care support, possibly including hemodialysis or mechanical ventilation. Rest, fluids, and acetaminophen relieve milder symptoms of fever and muscle pain. Because of the risk of hemorrhagic complications, aspirin should be avoided. Infected individuals should avoid mosquito exposure by staying indoors or using a mosquito net.
=== Dengue fever ===
Therapeutic management of dengue infection is simple, cost-effective, and successful in saving lives when timely, institution-based interventions are performed. Treatment options are limited, and no effective antiviral drugs for this infection have been available to date. Patients in the early phase of dengue infection may recover without hospitalization, and ongoing clinical research seeks specific anti-dengue drugs. Dengue fever is transmitted by the Aedes aegypti mosquito, which acts as the vector.
=== Zika virus ===
Zika virus vaccine clinical trials have yet to be conducted and established. Efforts are under way to advance antiviral therapeutics against Zika virus for swift control. Present-day Zika virus treatment is symptomatic, relying on antipyretics and analgesics. There are currently no publications regarding antiviral drug screening for Zika; nevertheless, therapeutics for this infection have been used.
=== Chikungunya ===
No specific treatment modalities currently exist for acute or chronic chikungunya. Most treatment plans use supportive and symptomatic care, such as analgesics for pain and anti-inflammatories for the inflammation caused by arthritis. In the acute stage of the disease, rest, antipyretics, and analgesics are used to subdue symptoms; most patients use non-steroidal anti-inflammatory drugs (NSAIDs). In some cases, joint pain may resolve with treatment but stiffness remains.
=== Latest treatment ===
The sterile insect technique (SIT) uses irradiation to sterilize insect pests before releasing them in large numbers to mate with wild females. Since they produce no offspring, the population, and consequently the disease incidence, is reduced over time. Used successfully for decades to combat fruit flies and livestock pests such as screwworm and tsetse flies, the technique can also be adapted to some disease-transmitting mosquito species. Pilot projects are being initiated or are under way in different parts of the world.
== Epidemiology ==
Mosquito-borne diseases, such as dengue fever and malaria, typically affect developing countries and areas with tropical climates. Mosquito vectors are sensitive to climate change and tend to follow seasonal patterns. Incidence rates often shift dramatically from year to year, and the occurrence of this phenomenon in endemic areas makes mosquito-borne viruses difficult to treat.
Dengue fever is caused by infection with viruses of the family Flaviviridae. The illness is most commonly transmitted by Aedes aegypti mosquitoes in tropical and subtropical regions. Dengue virus has four different serotypes, which are antigenically related but confer only limited cross-immunity to reinfection.
Although dengue fever has a global incidence of 50–100 million cases, only several hundred thousand of these cases are life-threatening. The geographic prevalence of the disease can be traced through the spread of Aedes aegypti, and over the last twenty years the disease has spread geographically. Dengue incidence rates have risen sharply within urban areas, which have recently become endemic hot spots for the disease. The recent spread of dengue can also be attributed to rapid population growth, increased crowding in urban areas, and global travel. Without sufficient vector control, the dengue virus has evolved rapidly over time, posing challenges to both government and public health officials.
Malaria is caused by protozoan parasites of the genus Plasmodium, most notably Plasmodium falciparum. P. falciparum parasites are transmitted mainly by the Anopheles gambiae complex in rural Africa. In this region alone, P. falciparum infections account for an estimated 200 million clinical cases and 1 million deaths annually, and 75% of the individuals affected are children. As with dengue, changing environmental conditions have led to novel disease characteristics. Owing to increased illness severity, treatment complications, and mortality rates, many public health officials concede that malaria patterns are rapidly transforming in Africa. Scarcity of health services, rising drug resistance, and changing vector migration patterns are factors that public health officials believe contribute to malaria's dissemination.
Climate heavily affects mosquito vectors of malaria and dengue. Climate patterns influence the lifespan of mosquitos as well as the rate and frequency of reproduction. Climate change impacts have been of great interest to those studying these diseases and their vectors. Additionally, climate impacts mosquito blood feeding patterns as well as extrinsic incubation periods. Climate consistency gives researchers an ability to accurately predict annual cycling of the disease but recent climate unpredictability has eroded researchers' ability to track the disease with such precision.
== Advances in biological control of arboviruses ==
In many insect species, such as Drosophila melanogaster, researchers have found that natural infection with the bacterial strain Wolbachia pipientis increases the fitness of the host by increasing resistance to RNA viral infections. Robert L. Glaser and Mark A. Meola investigated Wolbachia-induced resistance to West Nile virus (WNV) in Drosophila melanogaster (fruit flies). Two groups of fruit flies were naturally infected with Wolbachia, and Glaser and Meola then cured one group of Wolbachia using tetracycline. Both the infected and cured groups were then infected with WNV. Flies infected with Wolbachia were found to have a changed phenotype that conferred resistance to WNV, caused by a "dominant, maternally transmitted, cytoplasmic factor". The WNV-resistance phenotype was reversed by curing the fruit flies of Wolbachia; since Wolbachia is also maternally transmitted, this showed that the WNV-resistant phenotype is directly related to the Wolbachia infection. West Nile virus is transmitted to humans and animals through the southern house mosquito, Culex quinquefasciatus. Glaser and Meola knew vector competence could be reduced through Wolbachia infection from studies done with other mosquito species, mainly Aedes aegypti. Their goal was to transfer WNV resistance to Cx. quinquefasciatus by inoculating mosquito embryos with the same strain of Wolbachia that naturally occurred in the fruit flies. Upon infection, Cx. quinquefasciatus showed increased resistance to WNV that was transferable to offspring. The ability to infect mosquitoes in the lab and have them transmit the bacterium to their offspring showed that it was possible to spread the bacterium through wild populations to decrease human infections.
In 2011, Ary Hoffmann and associates produced the first case of Wolbachia-induced arbovirus resistance in wild populations of Aedes aegypti through a small project called Eliminate Dengue: Our Challenge. This was made possible by a Wolbachia strain termed wMel, derived from D. melanogaster. The transfer of wMel from D. melanogaster into field-caged populations of the mosquito Aedes aegypti induced resistance to dengue, yellow fever, and chikungunya viruses. Although other strains of Wolbachia also reduced susceptibility to dengue infection, they imposed greater fitness costs on Ae. aegypti; wMel was different in that it was thought to cost the organism only a small portion of its fitness. wMel-infected Ae. aegypti were released into two residential areas of Cairns, Australia over a 14-week period: Hoffmann and associates released a total of 141,600 infected adult mosquitoes in the suburb of Yorkeys Knob and 157,300 in the suburb of Gordonvale. After release, the populations were monitored for three years to record the spread of wMel, gauged by measuring larvae laid in traps. At the beginning of the monitoring period, but still within the release period, wMel-infected Ae. aegypti had doubled in Yorkeys Knob and increased 1.5-fold in Gordonvale, while uninfected Ae. aegypti populations were in decline. By the end of the three years, wMel infection had stabilized at frequencies of about 90%. However, these populations remained isolated to the Yorkeys Knob and Gordonvale suburbs because of unsuitable habitat surrounding the neighborhoods.
Although populations flourished in these areas with nearly 100% transmission, no signs of spread beyond them were noted, which some found disappointing. Following this experiment, Tom L. Schmidt and his colleagues conducted a release of Wolbachia-infected Aedes aegypti in other areas of Cairns during 2013, using different site-selection methods, and monitored the release sites over two years. This time the releases were made in urban areas adjacent to adequate habitat, to encourage mosquito dispersal. Over the two years, the population doubled and spatial spread increased, unlike in the first release, giving ample satisfactory results. By increasing the spread of the Wolbachia-infected mosquitoes, the researchers established that colonizing the mosquito population of a large city was possible if the mosquitoes were given adequate habitat to spread into upon release at multiple local sites throughout the city. In both of these studies, no adverse effects on public health or the natural ecosystem occurred, making the approach an extremely attractive alternative to traditional insecticide methods, given the increasing pesticide resistance arising from heavy use.
Following the success seen in Australia, the researchers were able to begin operating in more threatened parts of the world. The Eliminate Dengue program spread to 10 countries throughout Asia, Latin America, and the Western Pacific, growing into the non-profit organization the World Mosquito Program as of September 2017. The program still uses the same technique of infecting wild populations of Ae. aegypti as in Australia, but its target diseases now include Zika, chikungunya, and yellow fever as well as dengue. Although not alone in the effort to use Wolbachia-infected mosquitoes to reduce mosquito-borne disease, the World Mosquito Program's method is praised for being self-sustaining, in that it causes a permanent phenotype change rather than suppressing mosquito populations through cytoplasmic incompatibility via male-only releases.
Researchers working with dengue virus have also tried to introduce anti-dengue genes into the mosquito population through a gene drive mechanism, under which any female mosquitoes not inheriting the anti-dengue gene would die. However, this mechanism has only been demonstrated in Drosophila melanogaster and has not yet been successful in Aedes aegypti. One possible answer could be the CRISPR/Cas9 gene-editing system, which could potentially introduce anti-dengue genes into the offspring genome.
== See also ==
Climate change and infectious diseases
Disease vector
Economic entomology
Mosquito control
== External links ==
UK's One Health Vector-Borne Diseases Hub
== References ==
A vaccine-preventable disease is an infectious disease for which an effective preventive vaccine exists. If a person acquires a vaccine-preventable disease and dies from it, the death is considered a vaccine-preventable death.
The most common and serious vaccine-preventable diseases tracked by the World Health Organization (WHO) are: diphtheria, Haemophilus influenzae serotype b infection, hepatitis B, measles, meningitis, mumps, pertussis, poliomyelitis, rubella, tetanus, tuberculosis, and yellow fever. The WHO reports that licensed vaccines are available to prevent, or contribute to the prevention and control of, 31 vaccine-preventable infections.
== Background ==
In 2012, the World Health Organization estimated that vaccination prevents 2.5 million deaths each year. With 100% immunization and 100% vaccine efficacy, one in seven deaths among young children could be prevented, mostly in developing countries, making this an important global health issue. Four diseases were responsible for 98% of vaccine-preventable deaths: measles, Haemophilus influenzae serotype b, pertussis, and neonatal tetanus.
The Immunization Surveillance, Assessment and Monitoring program of the WHO monitors and assesses the safety and effectiveness of programs and vaccines at reducing illness and deaths from diseases that could be prevented by vaccines.
Vaccine-preventable deaths are usually caused by a failure to obtain the vaccine in a timely manner. This may be due to financial constraints or to lack of access to the vaccine. A vaccine that is generally recommended may be medically inappropriate for a small number of people due to severe allergies or a damaged immune system. In addition, a vaccine against a given disease may not be recommended for general use in a given country, or may be recommended only to certain populations, such as young children or older adults. Every country makes its own immunization recommendations, based on the diseases that are common in its area and its healthcare priorities. If a vaccine-preventable disease is uncommon in a country, then residents of that country are unlikely to receive a vaccine against it. For example, residents of Canada and the United States do not routinely receive vaccines against yellow fever, which leaves them vulnerable to infection if travelling to areas where risk of yellow fever is highest (endemic or transitional regions).
== List of vaccine-preventable diseases ==
The WHO lists 25 diseases for which vaccines are available:
Cholera
COVID-19
Dengue fever
Diphtheria
Haemophilus influenzae type b
Hepatitis (A and B only)
Human papillomavirus infection
Influenza
Japanese encephalitis
Malaria
Measles
Meningococcal meningitis
Mumps
Pertussis
Pneumococcal disease
Poliomyelitis
Rabies
Rotavirus
Rubella
Tetanus
Tick-borne encephalitis
Tuberculosis
Typhoid fever
Varicella
Yellow fever
=== Used in non-human animals ===
Bordetella
Canine distemper
Canine influenza
Canine parvovirus
Chlamydia
Feline calicivirus
Feline distemper
Feline leukemia
Feline viral rhinotracheitis
Leptospirosis
Lyme disease
== Vaccine-preventable diseases demonstrated in the laboratory on other animals ==
Enterococcus gallinarum on mice (to prevent bacteria-triggered autoimmune disease)
== See also ==
Vaccination policy
World Immunization Week
Measles resurgence in the United States
== References ==
== External links ==
Media related to Vaccine-preventable diseases at Wikimedia Commons
Vaccine efficacy or vaccine effectiveness is the percentage reduction of disease cases in a vaccinated group of people compared to an unvaccinated group. For example, a vaccine efficacy or effectiveness of 80% indicates an 80% decrease in the number of disease cases among a group of vaccinated people compared to a group in which nobody was vaccinated. When a study is carried out under the most favorable, ideal, or perfectly controlled conditions, such as those in a clinical trial, the term vaccine efficacy is used. When a study is carried out to show how well a vaccine works when used in a larger, typical population under less-than-perfectly controlled conditions, the term vaccine effectiveness is used.
Vaccine efficacy was first defined and calculated by Greenwood and Yule in 1915 for the cholera and typhoid vaccines. It is best measured using double-blind, randomized, controlled clinical trials, so that it is studied under "best case scenario" conditions.
Vaccine efficacy studies are used to measure several important and critical outcomes of interest, such as disease attack rates, hospitalizations due to the disease, deaths due to the disease, asymptomatic infection, serious adverse events due to vaccination, vaccine reactogenicity, and cost-effectiveness of the vaccine. Vaccine efficacy is calculated for a specific population (and is therefore not constant across populations) and may be misinterpreted as how efficacious the vaccine is in all populations.
== Testing ==
Vaccine efficacy differs from vaccine effectiveness in the same way that an explanatory clinical trial differs from an intention-to-treat trial: vaccine efficacy shows how effective a vaccine could be given ideal circumstances and 100% vaccine uptake (such as the conditions within a controlled clinical trial), while vaccine effectiveness measures how well a vaccine performs when it is used in routine circumstances in the community. What makes vaccine efficacy relevant is that it shows the disease attack rates as well as a tracking of vaccination status. Vaccine effectiveness is relatively inexpensive to measure compared with vaccine efficacy: measuring effectiveness relies on observational studies, which are usually easier to perform, whereas measuring efficacy requires randomized controlled trials, which are time- and capital-intensive. Because a clinical trial compares people who take the vaccine with those who do not, there is a risk of disease, and optimal treatment is needed for those who become infected.
The advantages of measuring vaccine efficacy include the ability to control for selection bias, prospective and active monitoring of disease attack rates, and careful tracking of vaccination status for the study population; there is normally also a subset with laboratory confirmation of the infectious outcome of interest and sampling of vaccine immunogenicity. The major disadvantages of vaccine efficacy trials are the complexity and expense of performing them, especially for relatively uncommon infectious outcomes, for which the required sample size is driven up to achieve clinically useful statistical power. Vaccine effectiveness estimates obtained from observational studies are usually subject to selection bias. Since 2014, epidemiologists have used quasi-experimental designs to obtain unbiased estimates of vaccine effectiveness.
Standardized statements of efficacy may be parametrically expanded to include multiple categories of efficacy in a table format. While conventional efficacy/effectiveness data typically show the ability to prevent a symptomatic infection, this expanded approach could include prevention of outcomes categorized by symptom class, minor or serious viral damage, hospital admission, ICU admission, death, various viral shedding levels, and so on. Capturing effectiveness at preventing each of these "outcome categories" is typically part of any study and could be provided in a table with clear definitions, rather than being presented inconsistently in study discussions as has typically been done in the past.
== Biological factors ==
Biological exposures such as parasites affect the immune responses after vaccination. This can be seen in areas with a high burden of parasitic infections where vaccine responses are low for vaccines such as BCG. Infections like malaria suppress immune responses to polysaccharide vaccines. A potential solution is to give curative treatment before vaccination in areas where malaria is present. The effect of parasites on vaccine response has also been observed in individuals infected by helminths in areas that have a high burden of infectious diseases. Established helminth infections at the time of vaccination affect vaccine responses.
Other biological factors such as smoking, age, sex, and nutrition also affect vaccine responses. In the case of hepatitis B vaccine, for example, increasing age, being male, having a body mass index > 25, and smoking can result in lower seroprotection rates.
The composition of the gut microbiota might impact responses to vaccination, although there is insufficient evidence for the gut microbiota directly affecting vaccine efficacy.
== Formula ==
The outcome data (vaccine efficacy) generally are expressed as a proportionate reduction in disease attack rate (AR) between the unvaccinated (ARU) and vaccinated (ARV), or can be calculated from the relative risk (RR) of disease among the vaccinated group.
The basic formula is written as:
{\displaystyle VE={\frac {ARU-ARV}{ARU}}\times 100\%,}
with VE = vaccine efficacy, ARU = attack rate of unvaccinated people, and ARV = attack rate of vaccinated people.
An alternative, equivalent formulation of vaccine efficacy is:
{\displaystyle VE=(1-RR)\times 100\%,}
where RR is the relative risk of developing the disease for vaccinated people compared to unvaccinated people.
The design of clinical trials ensures that regulatory approval is issued only for effective vaccines. However, during research, it is possible that an intervention actually increases the risk to participants, as in the STEP and Phambili studies, which were both intended to test an experimental HIV vaccine. In these cases, the formula yields a negative efficacy value because {\displaystyle ARV>ARU}. A negative efficacy value is sometimes present in the lower limit of a confidence interval of an estimate of vaccine efficacy for specific clinical endpoints. While this means that the intervention may actually have a negative effect, it could also be simply due to small sample size or sample variability.
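To make the two formulations concrete, the following short Python sketch computes vaccine efficacy both ways and shows how a negative value arises when ARV exceeds ARU. The attack rates here are hypothetical, chosen only for illustration; the two functions agree because RR = ARV/ARU, so (1 − RR) × 100 = (ARU − ARV)/ARU × 100.

```python
def vaccine_efficacy(aru, arv):
    """Vaccine efficacy (%) from attack rates.

    aru: attack rate among the unvaccinated (ARU)
    arv: attack rate among the vaccinated (ARV)
    """
    return (aru - arv) / aru * 100

def vaccine_efficacy_from_rr(rr):
    """Equivalent formulation: VE = (1 - RR) * 100, with RR = ARV / ARU."""
    return (1 - rr) * 100

# Hypothetical attack rates: 0.86% unvaccinated, 0.196% vaccinated.
aru, arv = 0.0086, 0.00196
print(vaccine_efficacy(aru, arv))            # ~77.2
print(vaccine_efficacy_from_rr(arv / aru))   # same value, by algebra

# If ARV > ARU, the intervention appears harmful and VE is negative.
print(vaccine_efficacy(0.002, 0.003))        # -50.0
```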
== Relative risk ==
First, the baseline risk (attack rate) is calculated for each group, and then the vaccine efficacy, equivalent to the relative risk reduction (RRR), as follows:
{\displaystyle {24 \over 12221}=0.196\%} for the vaccinated group (24 infections)
{\displaystyle {106 \over 12198}=0.86\%} for the placebo group (106 infections)
The relative risk is {\displaystyle RR={0.196 \over 0.86}\approx 0.23}.
Then, {\displaystyle VE=(1-RR)\times 100\implies (1-0.23)\times 100\approx 77\%}.
Also, the absolute risk reduction (ARR) for any vaccine can be obtained simply by calculating the difference in risks between the groups, i.e. 0.86% − 0.196%, which gives a value of about 0.66% for the above example.
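The worked example above can be reproduced in a few lines of Python; this is a minimal sketch of the same arithmetic, and the small discrepancies with the rounded figures in the text (77% vs. 77.4%, 0.66% vs. 0.67%) come from rounding the intermediate attack rates there.

```python
# Counts from the worked example: 24 infections among 12,221 vaccinated,
# 106 infections among 12,198 placebo recipients.
vax_cases, vax_n = 24, 12_221
plc_cases, plc_n = 106, 12_198

arv = vax_cases / vax_n   # attack rate, vaccinated (~0.196%)
aru = plc_cases / plc_n   # attack rate, placebo    (~0.869%)

rr = arv / aru            # relative risk            (~0.23)
ve = (1 - rr) * 100       # vaccine efficacy         (~77%)
arr = (aru - arv) * 100   # absolute risk reduction, in percentage points

print(f"RR = {rr:.2f}, VE = {ve:.1f}%, ARR = {arr:.2f} percentage points")
```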
== Cases studied ==
A study published in The New England Journal of Medicine examined the efficacy of a vaccine for the influenza A virus. A total of 1,952 subjects were enrolled and received study vaccines in the fall of 2007. Influenza activity occurred from January through April 2008, with the circulation of influenza types:
A (H3N2) (about 90%)
B (about 9%)
Absolute efficacy against both types of influenza, as measured by isolating the virus in culture, identifying it on real-time polymerase-chain-reaction assay, or both, was 68% (95% confidence interval [CI], 46 to 81) for the inactivated vaccine and 36% (95% CI, 0 to 59) for the live attenuated vaccine. In terms of relative efficacy, there was a 50% (95% CI, 20 to 69) reduction in laboratory-confirmed influenza among subjects who received the inactivated vaccine as compared with those given the live attenuated vaccine. Subjects were healthy adults. Against the influenza A virus specifically, efficacy was 72% for the inactivated vaccine and 29% for the live attenuated vaccine, a relative efficacy of 60%. The influenza vaccine is not 100% efficacious in preventing disease, but it is close to 100% safe, and much safer than the disease.
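As a back-of-the-envelope check (assuming both vaccine arms faced the same unvaccinated baseline attack rate, which the randomized design makes reasonable), the reported relative efficacies are consistent with the absolute figures: the residual risk under each vaccine is one minus its absolute efficacy, and the relative efficacy compares those residual risks. The study itself computed these figures from case counts, so this sketch is only a consistency check.

```python
# Relative efficacy of the inactivated vaccine versus the live attenuated one,
# derived from the residual risk (1 - absolute efficacy) remaining under each
# vaccine, relative to the same unvaccinated baseline.
def relative_efficacy(abs_better, abs_worse):
    return 1 - (1 - abs_better) / (1 - abs_worse)

print(f"{relative_efficacy(0.68, 0.36):.0%}")  # 50%, matching the reported overall figure
print(f"{relative_efficacy(0.72, 0.29):.0%}")  # ~61%, close to the reported ~60% for influenza A
```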
Since 2004, results from clinical trials testing the efficacy of the influenza vaccine have been slowly accumulating: 2,058 people were vaccinated in October and November 2005. Influenza activity was prolonged but of low intensity; type A (H3N2) was the virus generally circulating in the population, and it closely matched the vaccine strain. The efficacy of the inactivated vaccine was 16% (95% confidence interval [CI], −171% to 70%) for the virus-identification end point (virus isolation in cell culture or identification through polymerase chain reaction) and 54% (95% CI, 4%–77%) for the primary end point (virus isolation or increase in serum antibody titer). The absolute efficacies of the live attenuated vaccine for these end points were 8% (95% CI, −194% to 67%) and 43% (95% CI, −15% to 71%).
With serologic end points included, efficacy was demonstrated for the inactivated vaccine in a year with low influenza attack rates. Influenza vaccines are effective in reducing cases of influenza, especially when the vaccine content accurately matches the circulating types and circulation is high. However, they are less effective in reducing cases of influenza-like illness and have a modest impact on working days lost. There is insufficient evidence to assess their impact on complications.
== References ==
Inequality in disease refers to the unequal distribution or burden of disease among a population. This differs from the related topic of health disparities, which requires an inequality in disease that is linked to, at least in part, systemic differences faced by socially and economically disadvantaged groups. For example, an increased prevalence of soft tissue injuries among professional athletes in comparison to the rest of the population would be considered inequality in disease and not a health disparity, as this difference could not be attributed to social or economic disadvantages. Many variations in health outcomes in the United States can be seen across several social characteristics, such as gender, race, socioeconomic status, the environment, and educational attainment as well as in the intersections between these identities.
== Gender ==
First noted in the late 19th century, female life expectancy is generally greater than that of males. This is partially explained by biological factors. For instance, there is a cross-cultural trend that male fetal mortality rates are higher than female fetal mortality rates. Additionally, estrogen decreases the risk of females acquiring heart disease by lowering the amount of cholesterol in the blood, while testosterone suppresses the immune system in males and puts them at risk for acquiring serious illnesses. However, biological differences do not fully account for the large gender gap in the health outcomes of men and women. Social factors play a large role in gender disparities in health.
One of the main factors that contributes to the decreased life expectancy of males is their propensity to engage in risk-taking behaviors. Some commonly cited examples include heavy drinking, illicit drug use, violence, drunk driving, not wearing helmets, and smoking. These behaviors contribute to injuries that may lead to premature death in males. The effect of risk-taking behavior on health is especially visible in the case of smoking: as smoking rates have fallen in the United States overall, fewer men engage in this behavior, and the life expectancy gap between men and women has slightly decreased as a result.
The behavior of men and women also varies in regard to diet and exercise, leading to differential health outcomes. On average, men exercise more than women, but their diet is less nutritious. Consequently, men are more likely to be overweight, while women are at greater risk for obesity. Exposure to violence is another social factor that influences health: in general, women have a higher likelihood of experiencing sexual and intimate partner violence, while men are twice as likely to die from suicide or homicide.
Markedly, the impact of gender on health becomes especially salient in different socioeconomic contexts. In the United States, there is large economic gender inequality, with many economically disadvantaged women occupying far fewer positions of power than men. According to the Panel Study of Income Dynamics, "among adults with the strongest attachment to the labor force, only 9.6% of women earned more than $50,000 annually, compared with 44.5% of men." This gendered economic inequality is partly responsible for the gender-health paradox: the general trend that women live longer than men but experience a greater degree of non-life-threatening chronic illness over the course of a lifetime. Low socioeconomic status in women contributes to feelings of a lack of personal control over the events in their lives, increased stress, and low self-esteem. Perpetual states of stress inflict damage on the bodies and minds of women, placing them at risk for physical ailments, such as heart disease and arthritis, as well as mental health disorders, such as depression.
Another significant social factor is that men and women deal with their illnesses in different ways. Women generally have strong support networks and are able to rely on others for emotional support, with the potential to improve their states of health. In contrast, men are less likely to have strong support networks; they have fewer doctor visits and often cope with their illnesses on their own. Men and women also express pain in different ways: researchers have observed that women openly express feelings of pain, while men are more reserved and prefer to appear tough even when they experience severe mental or physical suffering. This pattern is attributed to socialization processes, in which women are taught to be submissive and emotional, while men are taught to be strong, powerful figures who do not show their emotions. The social stigma associated with expressions of pain prevents men from admitting their suffering to others, making it more difficult to overcome.
Moreover, neighborhood effects have a greater influence on women than men. For instance, research findings suggest that women living in impoverished neighborhoods are more likely to experience obesity, while this effect is not as strong for men. The physical environment also generally impacts a woman's self-rated health. This effect can be explained by the fact that women spend more time at home than their male counterparts, as a result of higher unemployment rates, and therefore may be more exposed to negative environmental characteristics that take a toll on their health.
Finally, gender effects also vary with race, ethnicity, and nativity status. Notably, Christy Erving conducted a study examining the gender differences in the health profiles of African Americans and Caribbean blacks (immigrants and U.S.-born). One finding from this research is that, on average, African American women report lower self-rated measures of health, worse physical health, and a greater likelihood of severe chronic illness than men. This finding contradicts the gender-health paradox in the sense that researchers would expect morbidity rates to be higher for women, but fewer of the illnesses they acquire should be debilitating. In contrast, the opposite trend is observed for U.S.-born Caribbean blacks, with men more likely to experience chronic, life-threatening illnesses than women. The health outcomes of Caribbean black immigrants fall somewhere between those of U.S.-born Caribbean blacks and African Americans: the women report lower self-rated health but experience rates of life-threatening, chronic disease equal to men's. These data illustrate that even within one racial category there can be stark gender differences in health, on the basis of social differences within the groups that compose the race.
== Race ==
Studies have shown that individuals who are racially and ethnically stigmatized, not just in the U.S. but globally, experience health issues such as mental and physical illness, and in some cases even death, at higher rates than the average individual. There has been some controversy around "race" as a determinant of disease and health issues, since there are unmeasured forms of background history that are potential factors in this research. Geographical origins and the types of environments individual populations were exposed to are significant contributors to the health of a certain group, especially when the environment they live in now differs from the one their ancestors originated from geographically.
Along with these factors, physical, psychological, social, and chemical environments are all included and accounted for, including exposure over the course of one's life and across generations, and biological adaptation to these environmental exposures, including gene expression. An example of this is a study of hypertension in black and white populations. Among West Africans and people of West African descent, levels of hypertension increased when they moved from Africa to the United States; their levels of hypertension were twice as high as those of black people who remained in Africa. While whites in the United States had higher rates of hypertension than black people in Africa, black people in the United States had higher rates of hypertension than some predominantly white populations in Europe. This suggests that when a population is removed from its original geographic environment, it becomes more prone to disease and illness, because its genetic make-up was adapted to a specific type of environment.
Moving on from the environmental aspect of race and disease, there is a direct correlation between race and socioeconomic status that contributes to racial disparities in health. Death rates from heart disease are about twice as high for black men as for white men. Death rates from heart disease are lower for both black and white women compared with their male counterparts, but the patterns of racial and educational disparities for women are similar to those of men: death from heart disease is about three times higher for black women than for white women. For both black men and women, racial differences in deaths from heart disease are evident at every level of education, with the racial gap larger at the higher levels of education than at the lowest. There are a number of reasons why race matters for health after socioeconomic status has been accounted for. For one, health is affected by adversity early in life, such as traumatic stress, poverty, and abuse, which affect the physical and mental health of an individual. Most people living in poverty in the United States are minorities, specifically African Americans, so it is unsurprising that these groups bear a disproportionate burden of health problems.
Race also remains relevant to health because of the non-equivalence of socioeconomic status indicators across racial groups: at the same level of education, minorities (black people and non-white Hispanic people) receive less income than their Anglo-white counterparts, and have less wealth and purchasing power. One of the biggest reasons that race matters for health is racism. Both personal and institutionalized racism are very prominent in today's society, perhaps less overt than in the past, but still present. Residential segregation by race, such as redlining, has created very distinct racial differences in education, employment, and opportunities, such as access to good healthcare. Institutional and cultural racism can also harm minorities' health through stereotypes and prejudices, which constrain socioeconomic mobility and can reduce and limit the resources and opportunities required for a healthy lifestyle.
Socioeconomic status is only one part of racial disparities in health that reflect larger social inequalities in society. Racism is a system that combines with, and sometimes changes, socioeconomic status to influence health, and race still matters for health when socioeconomic status is considered.
== Socioeconomic status ==
Socioeconomic status is a multidimensional classification, often defined using an individual's income and level of education. Other related metrics can round out this definition; for example, in a 2006 study by authors Cox, McKevitt, Rudd and Wolfe, further categories included "occupation, home and goods ownership, and area-based deprivation indices" in their determination of status.
Income inequality has risen rapidly in the United States, pushing greater amounts of the population into positions of lower socioeconomic status. A study published in 1993 examined Americans who had died between May and August 1960, and paired the mortality information with income, education and occupation data for each person. The work found an inverse correlation between socioeconomic status and mortality rate, as well as an increasing strength of this pattern and its reflection of the growth of income inequality in the United States.
These findings, although concerned with total mortality of any cause, reflect a similar relationship between socioeconomic status and disease incidence or death in the United States. Disease composes a very significant portion of U.S. mortality; as of May 2017, 6 out of 7 of the leading causes of death in America are non-communicable diseases, including heart disease, cancer, lower respiratory diseases, and cerebrovascular diseases (stroke). Indeed, these diseases have been seen to disproportionately affect the socioeconomically disadvantaged, albeit to different degrees and with differing magnitude. Mortality rates associated with cardiovascular disease (CVD), including coronary heart disease (CHD) and stroke, were assessed for individuals across areas of differing income and income inequality. The authors found that the mortality rates for each of the three respective diseases were greater by a factor of 1.36, 1.26, and 1.60, in areas of higher inequality compared to lower inequality areas of similar income. Across areas of differing income and constant income inequality, the rate of death due to CVD, CHD and stroke was increased by a factor of 1.27, 1.15, and 1.33 in the lower income areas. These trends across two measures of variation in socioeconomic status reflect the complexity and depth of the relationship between disease and economic standing. The authors are careful to state that while these patterns exist, they are not sufficiently described as related by cause and effect. While correlating, health and status have arisen in the U.S. from interrelated forces that may intricately accumulate or negate one another due to specific historical contexts.
As this lack of cause and effect simplicity indicates, exactly where disease-related health inequality arises is murky, and multiple factors likely contribute. Important to an examination of disease and health in the context of a complicated classification like socioeconomic status is the degree to which these measures are tied up with mechanisms that are dependent upon the individual, and those that are regionally variant. In the aforementioned 2006 study, the authors define individualized factors within three categories, "material (eg, income, possessions, environment), behavioural (eg, diet, smoking, exercise) and psychosocial (eg, perceived inequality, stress)", and provide two categories for external, regionally varying factors, "environmental influences (such as provision of and access to services) and psychosocial influences (such as social support)." The interactive and compounding nature of these forces can shape and be shaped by socioeconomic status, presenting a challenge to researchers to tease apart the intersecting factors of health and status. In the 2006 study, authors examined the specific drivers of the correlation between stroke occurrence and socioeconomic status. Identifying more nuanced and interlocking factors, they cited risk behaviors, early life influences, and access to care as tied to socioeconomic status and thus health inequality.
Inequality in disease is intricately tangled up with stratification of social class and economic status in the United States. Correlations, often disease-dependent, between health and socioeconomic attainment have been demonstrated in numerous studies for numerous diseases. The causes of these correlations are interlocking and often related to factors varying between regions and individuals, and design of future studies concerning inequality in disease require careful thought to the multifaceted driving mechanisms of social inequality.
== Environment ==
The neighborhood and area people live in, as well as their occupation, make up the environment in which they exist. People living in poverty-stricken neighborhoods are at greater risk for heart disease, possibly because the supermarkets in their area do not sell healthy foods and there is a greater availability of stores selling alcohol and tobacco than in more affluent parts of town. People living in rural areas are also more susceptible to heart disease: an agriculturally based diet rich in fat and cholesterol, combined with an isolated environment with limited access to health care and to channels for distributing information, probably creates a pattern in which people living in rural environments have higher levels of heart disease. Occupational cancer is one way in which the environment one works in can increase disease rates. Employees exposed to smoke, asbestos, diesel fumes, paint, and chemicals in factories can develop cancer from their workplace; these jobs tend to be low-paying and typically held by low-income individuals. The decreased availability of healthy food in stores located in low-income areas also contributes to increased rates of diabetes for people living in those neighborhoods. One of the best examples of this can be seen in the city of Jacksonville, Florida.
=== Food deserts in urban Jacksonville ===
In Jacksonville, Florida, it is hard to find grocery stores in some areas, which are instead surrounded by markets selling food high in fat, sugar, and cholesterol. In Duval County, there are 177,000 food-insecure individuals, including children, families, senior citizens, and veterans, who do not know when they will have their next meal. Nearly 60 percent of the food consumed in Duval County is processed. To combat this, agencies helped distribute food, averaging 12.3 million meals across eight counties in northern Florida; in Duval alone, 3.5 million meals were handed out to families. Hunger-relief partner agencies located within Jacksonville's food deserts receive food from Feeding Northeast Florida, which in all provided 4.2 million pounds of food to agencies in food deserts. These figures were recorded in 2016.
=== Water pollution ===
Like Flint, Jacksonville has had a water crisis: 23 different chemicals were found in its water supply, placing Jacksonville tenth on a list of the worst water in the nation. The chemicals most commonly found in the water at high volumes were trihalomethanes, a group of four disinfection byproducts that includes chloroform. Trihalomethanes are confirmed carcinogens. Over the five-year testing period, unsafe levels of trihalomethanes were found during 32 months of testing, and levels considered illegal by the EPA were found in 12 of those months; in one testing period, trihalomethanes were found at twice the EPA legal limit. Other chemicals that can cause health problems, such as lead and arsenic, were also found in the drinking water.
Another form of water pollution is nutrient overload, caused by manure and fertilizers, stormwater runoff, and wastewater treatment plants. This occurs in many Florida rivers, which become covered with blue-green algae that feed on those nutrients. Waste dumped into the rivers is consumed by other plants and animals that release toxins into the area, making the surrounding water toxic as well. These toxins can discolor rivers to a dark blue-green, so that most people can tell by looking at a river how dangerous it is to be around. If such water were to enter the supply of water companies, people could be seriously harmed by drinking or bathing in it.
== Education ==
Education level is an influential predictor of socioeconomic status. In 2022, among full-time workers over the age of 25 in the United States, those without a high school diploma earned a median weekly income of $682. Median weekly income among these workers increased with each advancement in educational level: high school graduates earned a median of $853, college graduates $1,432, and holders of doctorates $2,083. Furthermore, in 2022, those without high school degrees had the highest unemployment rate of the educational subgroups. This correlation between income and employment opportunity informs differences in disease burden, especially in the United States. On average, individuals with higher financial security can obtain higher-quality health insurance and choose a better living environment, both of which support better health outcomes compared to those with lower financial stability.
Educational attainment is also a predictor of how likely an individual is to engage in risky, possibly disease-causing, behaviors. In the case of smoking, which directly correlates with an increased risk for diseases like lung cancer, education is an important determinant of the likelihood that an individual smokes. As of 2009–10, 35 percent of adults who did not graduate high school were smokers, compared to 30 percent of high school graduates and just 13 percent of college graduates. High school graduates also smoked more packs, on average, each year than smokers who had graduated from college. Furthermore, individuals with a high school degree or less were 30% less likely to abstain from smoking for at least 3 months during their time as regular smokers. Other studies have found that binge drinking is higher among those with college degrees, implying that binge drinking is a habit developed by many during the college years.
Unhealthy dietary habits can also directly lead to diseases such as heart disease, hypertension, and type 2 diabetes. One of the leading causes of unhealthy eating habits is a lack of access to grocery stores, creating so-called "food deserts." Studies have found that immediate access to a grocery store (within a 1.5-mile radius) was 1.4 times less likely in areas where 27% or less of the population were college graduates. The negative effects of these food deserts are exacerbated by the fact that impoverished neighborhoods also have an oversupply of liquor stores, fast food restaurants, and convenience stores.
One significant risk for sexually active individuals is that of sexually transmitted diseases and infections. While studies have found that the correlation between education and carrying these infections is relatively low on average (and even lower for certain subsets such as Black women), there is a strong correlation between education and other risky sexual behaviors. Those with only a high school degree or less were significantly more likely to engage in risky practices such as early sexual initiation, sexual activity with people who use "shooting" street drugs such as heroin, and even prostitution. In addition, those with less education were less likely to adopt safe-sex practices such as condom use.
Studies have also found that adults with higher educational achievement were more likely to lead healthier lives. Intake of key nutrients such as vitamins A and C, potassium, and calcium was positively correlated with education level. This is an important statistic because such nutrients, vitamin C in particular, are critical in helping the body fight diseases and infections. There was also a correlation between education and exercise habits: a 2010 study found that while 85% of college graduates stated they had exercised in the last month, only 68% of high school graduates and 61% of non-high school graduates said the same. Because exercise is so crucial to preventing diseases like hypertension and type 2 diabetes, this stark distinction between exercise habits can have significant effects. By 2011, 15% of those with a high school education or less had diabetes, compared to just 7% of college graduates.
Arguably the best way of seeing the true effects of education on inequality in disease is to examine mortality levels, as heart disease, cancer, and lower respiratory diseases are, respectively, the top three killers of Americans every year. An individual at age 25 without at least a high school degree will die an average of 9 years earlier than an otherwise similar college graduate. A different national study found that individuals with only bachelor's degrees were 26% more likely to die in the next 5 years than individuals of the same age with professional degrees such as a master's. Even more starkly, Americans without a high school degree were almost twice as likely to die as those with a professional degree over the study's 5-year follow-up period.
== Intersectionality ==
Individuals often have multiple social identities (gender, sexuality, race, disability status, etc.), which, when taken into account in the context of intersectionality, can affect disease distribution in ways that looking at any single identity cannot. The interplay of these factors creates lived experiences and challenges unique to these individuals, especially among those from multiple historically disadvantaged groups. For example, LGBTQ+ people of color face different obstacles than those who identify as LGBTQ+ but not as a person of color, and likewise for those who identify as a person of color but not as LGBTQ+. This is especially relevant when thinking about intersectionality within discrimination and the specific health challenges populations may face as a result. For instance, Black women who face discriminatory events may have an increased risk of delivering babies that are both premature and underweight.
Previous public health literature has often lacked participation from underrepresented groups. This can be problematic when making recommendations for further research and to the general public, particularly in the context of intersectionality. One subset of the literature that illustrates this is research on older LGBTQ+ populations: the majority of health disparities research in this field has been conducted on wealthier, white, able-bodied individuals rather than the LGBTQ+ population as a whole. For this reason, less wealthy, non-white, and disabled counterparts make up comparatively less of the literature, and researchers know less about the unique disease burdens that they face. Researchers have continued to progress toward understanding interlocking systems of oppression both by better including underrepresented subgroups in aggregated studies (e.g., LGBTQ+ people of color) and by conducting more studies using disaggregated data (e.g., Black bisexual women).
== See also ==
== References ==
=== Citations ===
=== Sources ===
Disease ecology is a sub-discipline of ecology concerned with the mechanisms, patterns, and effects of host-pathogen interactions, particularly those of infectious diseases. For example, it examines how parasites spread through and influence wildlife populations and communities. By studying the flow of diseases within the natural environment, scientists seek to better understand how changes within the environment shape how pathogens and other diseases travel. Disease ecology therefore seeks to understand the links between ecological interactions and disease evolution. Emerging and re-emerging infectious diseases, infecting both wildlife and humans, are increasing at unprecedented rates, which can have lasting impacts on public health, ecosystem health, and biodiversity.
== Factors affecting spread of diseases ==
Parasitic infections, along with certain transmitted diseases, are present in wildlife and can have severe health effects on particular individuals and populations. These constant host-parasite interactions make disease ecology critical to conservation ecology.
=== Ecological factors ===
Ecological factors that can determine the persistence and spread of diseases are population size, density, and composition. Host population size is important in the context of host-parasite interactions, since the spread of disease requires a host population large enough to sustain parasitic interactions. The health of the overall population (and the number of weakened members within it) also influences the way that parasites and diseases transmit among members. Additionally, competition and predation dynamics in the ecosystem can influence the density of potential hosts, which can either propagate or limit the spread of diseases.
==== Predator-prey interactions ====
In some cases, when a parasite has weakened an animal, it becomes easier prey for a predator species. Occasionally predators will prefer feeding on sick or infected prey, even though the prey carry a parasite, because of the opportunity weak prey present. Without the presence of a predator species, the prey species would likely exceed manageable numbers, leading to the rapid spread of pathogens throughout the prey population; available host numbers increase when infected individuals are not removed through predation. However, there are some situations where predator feeding can disturb a pathogen that was previously dormant, leading to an epidemic that otherwise would not have occurred. Some parasites are able to survive when their host is consumed, leading to the parasite being distributed in the waste of the predator, which can continue the spread of disease.
==== Parasitism ====
Parasitism is important in disease ecology because parasites, as disease carriers, can shape the way many habitats function. The diseases they carry can alter the timing of events, biogeochemical cycles, and even the flow of energy in a habitat. Parasites are able to limit population growth and reproduction of species, which may lead to a shift in the balance of an ecosystem. Parasites also impact systems through nutrient cycles: they are able to create elemental imbalances in a system through their relationship with a host and the host's diet.
=== Biological factors ===
Biological factors that can determine the persistence of diseases include parameters at the level of the individual organism within the population. Sex differences are prevalent in disease transmission: for example, male American minks are larger and travel wider distances, making them more prone to come into contact with parasites and diseases. The age of the host may additionally affect the rate at which diseases are transmitted, as younger members of populations have yet to acquire immunity and are therefore more susceptible to parasitic infections.
=== Anthropogenic factors ===
Anthropogenic factors in disease spread include the introduction or translocation of wildlife by humans for conservation purposes. More broadly, human activity is changing the way diseases move through the natural environment.
== In relation to anthropogenic factors ==
Humans are strongly impacting how diseases spread by creating what are known as "novel species associations". Globalization, mainly through world travel and trade, has created a system in which pathogens, and other species, are more in contact with one another than before. Ecological disruption, including habitat fragmentation and road construction, degrades natural landscapes and has been studied as a driver of the recent emergence and re-emergence of infectious diseases worldwide. Scientists have speculated that habitat destruction and biodiversity loss are among the main factors behind the rapid spread of non-human, disease-carrying vectors. The loss of predators, which mitigate pathogen transmission, can increase the rate of disease transmission. Anthropogenic climate change is also becoming problematic, as parasites, and their associated diseases, can move to higher latitudes with increasing global temperatures. New diseases can therefore infect populations that were previously never in contact with certain pathogens.
=== Urbanization and biodiversity loss ===
Urbanization, considered one of the main forms of land-use change, is the growth in the area and number of people inhabiting cities and the creation of artificial landscapes of built-up structures for human use. With over 65% of the global human population living in cities by 2025, the ecological impacts of urbanization center on biodiversity loss, defined as the decline in species richness. With empirical evidence, scientists are coming to understand that biodiversity loss is associated with increased disease transmission and worsened disease severity for humans, wildlife, and certain plant species. As biodiversity is lost worldwide, it is often the larger, slower-reproducing animal species that go extinct first, leaving smaller, more adaptable, fast-reproducing species abundant. Research has shown that these smaller species are more likely to carry and transmit pathogens (key examples include bats, rats, and mice).
=== Invasive species ===
Globalization, especially world trade and travel, has facilitated the spread of non-native species worldwide. Newly introduced invasive species can alter ecological dynamics through local and regional extinction of native species, promoting changes to the ecosystem including shifts in the abundance and richness of native species. Invasive species, and the diseases they potentially carry, can escape into the environment and alter existing natural ecosystems and the ecosystem services that people depend upon, including water quality and nutrient availability.
=== Habitat fragmentation ===
Encroachment on natural ecosystems and wildlife through rapid urbanization exposes humans to a wide variety of disease-carrying animals. Habitat fragmentation increases edge effects and the contact between different communities, vectors, and pathogens, which can increase disease transmission. It has been argued that the West African Ebola virus disease (EVD) outbreak of 2013 to 2015 began due to deforestation and habitat degradation. In this case, frugivorous and insectivorous bat species had less forest serving as a barrier between them and dense human settlements. Transmission of the Ebola virus is believed to have occurred through direct contact between humans encroaching on natural ecosystems and bat species carrying the pathogen.
=== Climate change ===
Scientists have found vector-borne diseases to be sensitive to changes in weather and climate. The abundance of disease-carrying vectors in the environment depends on multiple factors, including temperature, relative humidity, and water availability, all of which are necessary for the reproductive success of disease-carrying vectors. Climate change predictions include rising temperatures and changes in rainfall patterns, which can create suitable habitats and increase the overall survival rate and fitness of pathogen-carrying species. With a warming climate, pathogens and parasites can begin shifting their native geographic ranges to higher latitudes and infect host species with which they have had no prior interaction. Shifting rainfall patterns can additionally signal the presence of disease-carrying vectors. For example, mosquitoes spread diseases such as malaria and lymphatic filariasis. The distribution of lymphatic filariasis via mosquitoes can be estimated from soil moisture content, an indicator of viable mosquito breeding habitat (mosquito larvae need shallow, stagnant water to survive). As temperature and precipitation patterns change, so will soil moisture levels and the corresponding mosquito populations.
As climate change continues to disrupt ecosystems around the world, it can make both human and non-human populations more or less vulnerable to disease, depending on the disease-specific effects of those changes. The subject of climate change and its impact on disease is increasingly attracting the attention of health professionals and climate-change scientists, particularly with respect to malaria and other vector-transmitted human diseases. More specifically, climate change can impact malaria transmission by extending the transmission season and by creating more breeding sites, due to increasing temperatures and rainfall respectively. Increases in malaria transmission and other vector-transmitted human diseases can have a devastating impact on communities that do not receive appropriate medical care and on people who have not had prior exposure to these diseases.
==== In relation to tropical, northern temperate zones, and the Arctic ====
It is thought that the effects of climate change on temperature will increase with latitude, meaning that northern temperate zones will experience greater temperature changes than tropical zones. Tropical zones experience less climate variability, so organisms there have adjusted to a stable climate, and even slight disruptions in climate can dramatically affect them. Climate change can affect organisms by elongating their reproductive cycles. In addition, climate change allows pathogens to expand beyond tropical zones, dramatically impacting species, including humans and human livestock, through the introduction of new pathogens.
Changes in northern temperate zones and the Arctic are also expected. More specifically, the effects of climate change on temperature increase with latitude, so the temperature in northern temperate zones is projected to increase and the temperature in the Arctic is projected to increase even more. As in tropical zones, climate change in northern temperate zones and the Arctic can cause species to move beyond their original niches. For example, climate change has allowed elk to move north into areas that overlap with other species such as caribou. When the elk move, they introduce new pathogens into the area, harming the caribou.
==== Models and predicting disease ecology ====
There are numerous approaches to predicting the impacts of climate change on diseases. Static approaches use reproduction rates to determine how climate change will affect vectors. An example of a static approach is a process-based model called MIASMA, which explores the relationship between different climate change scenarios and the reproduction rate of vectors. This model has been used specifically to look at mosquitoes in African highlands and to make predictions about their future development and feeding. Additionally, the model can be used to estimate the size of the biting mosquito population, allowing predictions for diseases such as dengue fever.
Another approach uses statistical models, which, unlike process-based models, rely on observations. An example of this type of model is CLIMEX, which maps vector species over geographical locations while accounting for climate factors. This approach does have limitations: CLIMEX does not include all factors that impact vector species.
Time-series models can also be used to determine how climate change will modify disease dynamics. However, this approach has a downside: only a limited number of locations and pathogens can be examined simultaneously using time-series models.
Predictions of ENSO (El Niño-Southern Oscillation) can also help predict diseases. ENSO events can create cooler temperatures in the Western Tropical Pacific and warmer temperatures in the Central and Eastern Tropical Pacific, leading to intense precipitation and storms. Changes in climate due to ENSO can affect the dynamics of diseases and the water sources humans use. For example, in 1991, cholera reappeared in Peru around the time an El Niño event occurred. ENSO events can be anticipated early on, so predicting ENSO allows predictions about disease transmission peaks to be made up to two months before they occur.
== Notable examples in disease ecology ==
=== Malaria ===
Malaria is a disease transmitted by the female Anopheles mosquito, located predominantly in sub-Saharan Africa, and a long-standing public health issue. It is strongly regulated by climatic factors, so climate change will have a notable impact on its transmission. As temperatures warm, the reproductive phase of the Plasmodium parasite within the gut of the female mosquito completes more quickly, ensuring that the female mosquito becomes infective before the end of its lifespan. Precipitation is also a critical factor in mosquito breeding and the transmission of malaria, and with climate change influencing regular precipitation patterns, studies are finding that mosquito breeding potential can increase as a direct result of climate change.
=== Lyme disease ===
Lyme disease is the most common tick-borne disease in the United States and Europe, with an estimated 476,000 cases per year in the United States and 200,000 in Europe. Recently, studies have concluded that there is an increased risk of Lyme disease in southern Canada due to the home-range expansion of the tick vector Ixodes scapularis, which carries the disease. Climate change creates milder winters and extended spring and autumn seasons, producing hospitable habitats in which ticks can thrive at higher latitudes where they are normally not found. Human infections of Lyme disease have become increasingly prominent in southern parts of Canadian provinces such as Ontario, Quebec, Manitoba, and Nova Scotia. According to published Canadian studies, other environmental factors contributing to the expansion of the Ixodes scapularis home range include the introduction of the vector through migratory birds and the density of deer populations.
=== West Nile virus ===
West Nile virus is transmitted between mosquitoes and birds of prey, including eagles, hawks, falcons, and owls. In the United States, West Nile virus is being increasingly studied in New York and Connecticut due to the effects of climate change on two disease-carrying vectors. Climate change is promoting hybridization between two mosquito vectors (Culex pipiens and Culex quinquefasciatus), which can alter the genetic composition of the hybrid, allowing it to become more effective at transmitting diseases and increasing its adaptability to different climatic conditions.
== See also ==
== References ==
== Bibliography == | Wikipedia/Disease_ecology |
Multidrug-resistant (MDR) bacteria are bacteria that are resistant to three or more classes of antimicrobial drugs. MDR bacteria have increased in prevalence in recent years and pose serious risks to public health. MDR bacteria can be broken into three main categories: Gram-positive, Gram-negative, and other (acid-fast). These bacteria employ various adaptations to avoid or mitigate the damage done by antimicrobials. With increased access to modern medicine, there has been a sharp increase in the amount of antibiotics consumed, and with this abundant use has come a considerable acceleration in the evolution of antimicrobial resistance factors, which now outpaces the development of new antibiotics.
== Examples identified as serious threats to public health ==
Examples of MDR bacteria identified as serious threats to public health include:
Gram-positive MDR bacteria
Clostridioides difficile
Staphylococcus aureus
Vancomycin-resistant Enterococcus
Streptococcus pneumoniae
Gram-negative MDR bacteria
Carbapenem-resistant Acinetobacter
Escherichia coli
Klebsiella pneumoniae
Enterobacter spp.
Neisseria gonorrhoeae
Campylobacter
Pseudomonas aeruginosa
Salmonella
Shigella
Other MDR bacteria
Mycobacterium tuberculosis
== Microbial adaptations ==
MDR bacteria employ a variety of adaptations to overcome the environmental insults caused by antibiotics. Bacteria are capable of sharing these resistance factors through a process called horizontal gene transfer, in which resistant bacteria pass genetic information encoding resistance to the naive population.
Antibiotic inactivation: bacteria produce proteins that prevent damage caused by antibiotics. They can do this in two ways: first, by inactivating or modifying the antibiotic so that it can no longer interact with its target; second, by degrading the antibiotic directly.
Multidrug efflux pumps: The use of transporter proteins to expel the antibiotic.
Modification of target sites: mutating or modifying elements of bacterial structure to prevent interaction with the antibiotic.
Structural modifications: mutating or modifying global elements of the cell to adapt to the antibiotic (such as increased acid tolerance in response to an acidic antimicrobial).
== Alternative antimicrobial methods ==
=== Phage therapy ===
Bacteriophage therapy, commonly known as 'phage therapy', uses bacteria-specific viruses to kill antibiotic-resistant bacteria. Phage therapy offers considerably higher specificity, as phages can be engineered to infect only a certain bacterial species. It also allows for the possibility of biofilm penetration in cases where antibiotics are ineffective due to the increased resistance of biofilm-forming pathogens. One major drawback of phage therapy is the evolution of phage-resistant microbes, which was seen in a majority of phage therapy experiments aimed at treating sepsis and intestinal infection. Recent studies suggest that the development of phage resistance comes as a trade-off against antibiotic resistance and can be used to create antibiotic-sensitive populations.
== References == | Wikipedia/Multidrug-resistant_bacteria |
The concept of One Health is the unity of multiple practices that work together locally, nationally, and globally to help achieve optimal health for people, animals, and the environment. Together, people, animals, and the environment make up the One Health Triad, which shows how the health of each is linked to the others. Each element affects and is affected by the others: diseases transmitted from animals can impact human health, while environmental factors like pollution or climate change can influence both animal and human health. As a worldwide concept, One Health makes it easier to advance health care in the 21st century. When this concept is applied properly, it can help protect people, animals, and the environment for present and future generations.
== Background ==
The origins of the One Health Model date as far back as 1821, with the first links between human and animal diseases being recognized by Rudolf Virchow. Virchow noticed links between human and animal disease, coining the term "zoonosis"; the major connection he made was between Trichinella spiralis in swine and human infections. It was over a century later that the ideas laid out by Virchow were integrated into a single health model connecting human health with animal health.
In 1964, Dr. Calvin Schwabe, a former member of the World Health Organization (WHO) and the founding chair of the Department of Epidemiology and Preventive Medicine at the Veterinary School at the University of California Davis, called for a "One Medicine" model emphasizing the need for collaboration between human and wildlife pathologists as a means of controlling and even preventing disease spread. This model sought to bridge the gap between human health and animal health, advocating for a more integrated approach to preventing and controlling diseases that impact both humans and animals. It would be another four decades before One Health became a reality with the 12 Manhattan Principles, where human and animal pathologists called for "One Health, One World."
The One Health Model has gained momentum in recent years due to the discovery of the multiple interconnections that exist between animal and human disease. Recent estimates place zoonotic diseases as the source of 60% of all human pathogens, and 75% of emerging human pathogens.
Greater awareness of food safety concerns has also prompted further support for the One Health Model. More than 60 percent of pathogens globally originate from the environment. In Canada, over 10 million illnesses per year are food-borne, with an estimated economic impact of $3.7 billion yearly. These illnesses highlight the need for a One Health approach that addresses public health concerns and reduces the economic costs associated with disease outbreaks: by promoting early intervention and collaboration across sectors, the One Health Model can significantly reduce these costs and improve health outcomes. Such early intervention has previously been overlooked but remains impactful to many economies.
== Applying the One Health Model ==
The One Health Model can be applied wherever humans and animals interact. One of the main examples is canine and feline obesity, which is linked to the obesity of the animals' owners. Obesity in humans and their pets can result in many health problems, such as diabetes mellitus and osteoarthritis. In some cases, if a pet's obesity is severe enough, the pet may be removed from its owner and put up for adoption. The main solution for this issue is to encourage owners to maintain a healthy lifestyle for both themselves and their animals. Zoonotic disease is another area to which the One Health Model can be applied, as discussed in the zoonotic disease section below.
== One Health and Antibiotic Resistance ==
Antibiotic resistance is becoming a serious problem for today's agriculture industry and for humans. One reason for this resistance is that natural resistomes are present in different environmental niches; these environmental resistomes function as reservoirs of antibiotic resistance genes. Further research is needed to determine whether environmental resistomes play a large role in the antibiotic resistance occurring in humans, animals, and plants. A recent study reported that 700,000 annual deaths were caused by infections due to drug-resistant pathogens, and that, if unchecked, this number will increase to 10 million by 2050. The National Antimicrobial Resistance Monitoring System is used to monitor antimicrobial resistance among bacteria isolated from animals used as food.
In 2013, the system found that about 29% of turkey, 18% of swine, 17% of beef, and 9% of chicken samples were multidrug-resistant, meaning they carried resistance to three or more classes of antimicrobials. Such resistance in both animals and humans makes it easier for zoonotic diseases to be transferred between them, and for resistance to these antimicrobials to be passed on. Many risk management options can help reduce this possibility; for animals, most of them can take place on the farm or at the slaughterhouse. For humans, risk management depends on individual responsibility for good hygiene, up-to-date vaccinations, and proper use of antibiotics. The same care is needed on farms: antibiotics should be used only when absolutely necessary, and general hygiene should be improved at all stages of production. Combined with research and knowledge about the amount of resistance in the environment, these management factors may make it possible to control antimicrobial resistance and reduce the number of zoonotic diseases passed between animals and humans.
== Zoonotic Diseases and One Health ==
Zoonosis, or zoonotic disease, can be defined as an infectious disease that can be transmitted between animals and humans. One Health plays a big role in helping to prevent and control zoonotic diseases. Approximately 75% of new and emerging infectious diseases in humans are defined as zoonotic. Zoonotic diseases can be spread in many different ways; the most common routes are direct contact, indirect contact, vector-borne transmission, and food-borne transmission. Table 1 lists different zoonotic diseases, their main reservoirs, and their modes of transmission.
Table 1: Zoonotic Diseases
== Environmental Stressors and Mental Health in the One Health Model ==
The One Health Model traditionally focuses on zoonotic diseases and antimicrobial resistance, but increasing attention is being given to the intersections of environmental stressors and mental health in both humans and animals. Environmental stressors refer to factors such as climate change, pollution, habitat destruction, and biodiversity loss that can negatively impact the physical and psychological health of both humans and animals. Both humans and animals face these stressors, which affect the ecosystem as a whole. Such instability also creates significant psychological challenges, leading to conditions like eco-anxiety, depression, and post-traumatic stress disorder (PTSD) in response to environmental changes or natural disasters.
== Climate Change and Mental Health ==
Climate change has been linked to increased psychological distress, particularly among communities experiencing extreme weather events such as hurricanes, wildfires, and prolonged droughts. Studies have shown that rising temperatures correlate with higher suicide rates, increased aggression, and worsening symptoms of psychiatric disorders. In animals, climate-induced stress disrupts behavior, migration patterns, and reproductive health, which can lead to ecosystem imbalances affecting both wildlife and human populations.
== Pollution and Neurodevelopmental Disorders ==
Air pollution, particularly fine particulate matter (PM2.5), has been associated with cognitive decline, neuroinflammation, and higher risks of anxiety and depression in humans. Research also suggests that exposure to heavy metals and pesticides affects neurological health in livestock and wildlife, potentially altering animal behavior in ways that impact human-animal interactions, food supply, and disease transmission.
== Deforestation and Emerging Zoonotic Threats ==
The destruction of natural habitats contributes to the displacement of wildlife, increasing the likelihood of human-wildlife conflict and the emergence of new zoonotic diseases. Psychological stress among affected human populations, particularly Indigenous communities, is compounded by the loss of biodiversity and traditional ways of life.
Habitat destruction leads to wildlife displacement, raising the chances of human-animal interactions. Consequently, this can cause zoonotic diseases, which are illnesses that transfer from animals to humans. This scenario illustrates the interconnectedness of animal health, human health, and environmental health as outlined in the One Health model.
== The Role of One Health in Addressing Environmental Mental Health Challenges ==
Integrating mental health considerations into the One Health framework can improve public health preparedness and resilience. Solutions include:
Urban planning to reduce heat-related mental distress and improve access to green spaces.
Wildlife conservation efforts that promote ecological stability and reduce human-animal conflict.
Interdisciplinary research combining epidemiology, psychology, and environmental science to study the long-term mental health effects of climate change.
By expanding One Health beyond infectious disease control to include environmental determinants of mental health, global health systems can develop more holistic and sustainable solutions.
== See also ==
Antibiotic resistance
Epidemiology
Exposome
== References == | Wikipedia/One_Health_Model |
Source control is a strategy for reducing disease transmission by blocking respiratory secretions produced through breathing, speaking, coughing, sneezing or singing. Multiple source control techniques can be used in hospitals, but for the general public wearing personal protective equipment during epidemics or pandemics, respirators provide the greatest source control, followed by surgical masks, with cloth face masks recommended for use by the public only when there are shortages of both respirators and surgical masks.
== Mechanisms ==
Infections in general may spread by direct contact (for example, shaking hands or kissing), by inhaling infectious droplets in the air (droplet transmission), by inhaling long-lasting aerosols with tiny particles (airborne transmission), and by touching objects with infectious material on their surfaces (fomites). Different diseases spread in different ways; some spread by only some of these routes. For instance, fomite transmission of COVID-19 is thought to be rare while aerosol, droplet and contact transmission appear to be the primary transmission modes, as of April 2021.
Coughs and sneezes can spread airborne droplets up to ~8 meters (26 ft). Speaking can spread droplets up to ~2 meters (6.6 ft).
Masking any person who may be a source of infectious droplets (or aerosols) thus reduces the unsafe range of physical distances. If a person can be infectious before they are symptomatic and diagnosed, then people who do not yet know if they are infectious may also be a source of infection.
For pathogens transmitted through the air, strategies to block cough air jets and to capture aerosols, e.g. the "Shield & Sink" approach, can be highly effective in minimizing exposure to respiratory secretions.
Outside of respiratory source control, handwashing helps to protect people against contact transmission and against indirect droplet transmission. Handwashing removes infectious droplets that a person's mask caught (from either side) and that transferred to their hands when they touched the mask.
=== Potentially ineffective methods of source control ===
In the past, suggestions have been made that covering the mouth and nose with an elbow, tissue, or hand would be a viable measure for reducing the transmission of airborne diseases. This method of source control was suggested, but not empirically tested, in the "Control of Airborne Infection" section of the 1974 edition of Riley's Airborne Infection. NIOSH also noted, in its guidelines for TB, that the use of a tissue as source control had not been tested as of 1992.
In 2013, Gustavo et al. examined the effectiveness of various methods of source control: covering with the arm, with a tissue, with bare hands, and with a surgical mask. They concluded that simply covering a cough was not an effective method of stopping transmission, and that even a surgical mask was not effective at reducing the amount of displaced droplets detected, compared with the other rudimentary forms of source control. Another paper noted that the fit of a face mask matters for its source control performance. (Note, however, that OSHA 29 CFR 1910.134 does not cover the fit of face masks other than NIOSH-approved respirators.)
== Contrast with personal protective equipment ==
While source control protects others from transmission arising from the wearer, personal protective equipment protects the wearer themselves. Cloth face masks can be used for source control (as a last resort) but are not considered personal protective equipment, as they have low filter efficiency (generally varying between 2–60%), although they are easy to obtain and reusable after washing. There are no standards or regulations for self-made cloth face masks, and source control from a well-fitted cloth mask is worse than from a surgical mask.
Surgical masks are designed to protect against splashes and sprays, but do not provide complete respiratory protection from germs and other contaminants because of the loose fit between the surface of the face mask and the face. Surgical masks are regulated by various national standards to have high bacterial filtration efficiency (BFE). N95/N99/N100 masks and other filtering facepiece respirators can provide source control in addition to respiratory protection, but respirators with an unfiltered exhalation valve may not provide source control and require additional measures to filter exhalation air when source control is required.
=== Exhalation source control with respirators ===
Some masks have an exhalation valve that lets exhaled air out unfiltered. The certification grade of the mask (such as N95) applies to the mask itself and does not guarantee anything about the air expelled by the wearer through the valve. A mask with a valve mainly increases the comfort of the wearer.
Unfiltered exhalation of air occurs on both filtering facepiece and elastomeric respirators with exhalation valves, and on powered air-purifying respirators, which cannot filter exhaled air. During the COVID-19 pandemic, masks with unfiltered exhalation valves ran counter to the requirements of some mandatory mask orders. Despite the aforementioned belief, 2020 research by NIOSH and the CDC showed that an uncovered exhalation valve already provides source control on a level similar to, or even better than, surgical masks.
It is possible to seal some unfiltered exhalation valves or to cover them with an additional surgical mask; this might be done where mask shortages make it necessary. However, so long as there are no shortages, respirators without exhalation valves should still be preferred in situations where source control is necessary.
== Source Control during TB Outbreaks ==
=== US HIV/AIDS epidemic ===
HIV was a noted co-infection in around 35% of those affected by TB in some regions of the US, despite extended close contact being a requisite factor for infection. Respirable particles can be created by handling TB-infected tissue or by the coughing of those actively infected. Once in the air, droplet nuclei can persist in unventilated spaces. Most people infected with TB are asymptomatic unless the immune system is weakened by some other factor, like HIV/AIDS, which can turn an infected person's latent TB into active TB.
1994 CDC guidelines brought three methods of source control for the prevention of TB: administrative controls, engineering controls, and personal protective equipment, particularly with the use of fit-checked respirators.
Administrative controls mainly involve people and areas in hospital responsible for TB controls, including training, skin-testing, and regulatory compliance, as well as those responsible for quantifying the amount of TB present in the hospital's community and in-hospital, like staff. To assist with this, OSHA proposed TB guidelines in 1997, but withdrew them in 2003 following the decline of TB.
Engineering controls mainly involve ventilation and planning isolation rooms, but can also involve environmental controls, like negative pressure, ultraviolet germicidal radiation, and the use of HEPA filters.
The use of personal protective equipment, in this system of TB controls, requires the use of respirators whenever personnel are in contact with someone suspected of having TB, including during transport. This includes anyone near the infected person, all of whom must be provided with some sort of personal protective equipment, to avoid contracting TB. If PPE cannot be provided in time, the infected patient should be delayed from being moved through an area not controlled by PPE until the controls are in place, unless the care of the infected patient is compromised by an administrative delay.
During TB outbreaks in the 1990s, multiple hospitals upgraded their controls and policies to attenuate the spread of TB.
== COVID-19 pandemic ==
=== United States ===
==== Pre-COVID ====
In 2007, the CDC HICPAC published a set of guidelines, called the 2007 Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings, suggesting that use of "barrier precautions", defined as "masks, gowns, [and] gloves", would not be required, so long as it was limited to "routine entry", patients were not confirmed to be infected, and no aerosol-generating procedures were being done. "Standard precautions" requiring the use of masks, face shields, and/or eye protection, would be needed if there was potential for the spraying of bodily fluids, like during intubation.
The guidelines are the same regardless of the type of pathogen, but the guidelines also note that, based on the experience of SARS-CoV in Toronto, that "N95 or higher respirators may offer additional protection to those exposed to aerosol-generating procedures and high risk activities".
Separate from "barrier precautions" and "standard precautions" are "airborne precautions", a protocol for "infectious agents transmitted by the airborne route", like with SARS-CoV and tuberculosis, requiring 12 air changes per hour for new facilities, and use of fitted N95 respirators. These measures are used whenever someone is suspected of harboring an "infectious agent".
==== Early measures ====
During the COVID-19 pandemic, cloth face masks for source control had been recommended by the U.S. Centers for Disease Control and Prevention (CDC) for members of the public who left their homes, and health care facilities were recommended to consider requiring face masks for all people who enter a facility. Health care personnel and patients with COVID-19 symptoms were recommended to use surgical masks if available, as they are more protective. Masking patients reduces the personal protective equipment recommended by CDC for health care personnel under crisis shortage conditions.
==== Post-2023 ====
By 2023, The New York Times noted that the CDC had dropped mandates for masks in hospitals during COVID, limiting the COVID policies to an advisory role. Use of masks for source control is still recommended in times of high viral activity, but the CDC did not provide numerical benchmarks. According to The New York Times, based on various citations to the medical literature, the new policies are thought to increase mortality among vulnerable patients, especially those with cancer.
The New York Times article cites a paper published in 2023, that suggests the high mortality of cancer patients following the Omicron wave may have been due to relaxing of policies preventing COVID-19 transmission (like source control policies). The 2023 paper also cites a research letter published in 2022, that suggests that the surge of COVID-19 cases in hospitals may have been due to the high contagiousness of Omicron, an article which suggested a high secondary attack rate relative to Delta, and papers finding increased mortality of cancer patients due to higher rates of breakthrough infections.
Also in 2023, new draft guidelines were proposed by the CDC HICPAC, to update the pre-COVID 2007 Guideline for Isolation Precautions: Preventing Transmission of Infectious Agents in Healthcare Settings. The proposed updates were met with disapproval by the National Nurses United union, as they felt the changes did not go far enough. Changes included clarifying by adding "source control" as a qualification for the use of "barrier precautions".
=== United Kingdom ===
A paper in the Journal of Hospital Infection, published in 2024 and focusing on hospitals in the UK, found that the removal of surgical-mask-based mandates in hospitals was not associated with an increase in SARS-CoV-2 infections during the weeks from December 4, 2021 to December 10, 2022. However, the authors noted that the end of mask mandates coincided with an increase in Omicron infections, and that more data would be needed despite evidence from 2022-2023 supporting the removal of mask mandates.
== See also ==
Face masks during the COVID-19 pandemic
N95 respirator
Respirator assigned protection factors
Permissible exposure limit
Workplace hazard controls for COVID-19
== Notes ==
== References ==
== Further reading == | Wikipedia/Source_control_(respiratory_disease) |
In epidemiology, an outbreak is a sudden increase in occurrences of a disease when cases are in excess of normal expectancy for the location or season. It may affect a small and localized group or impact upon thousands of people across an entire continent. The number of cases varies according to the disease-causing agent and the size and type of previous and existing exposure to the agent. Outbreaks include many epidemics (a term normally reserved for infectious diseases) as well as diseases with an environmental origin, such as water- or foodborne disease. They may affect a region in a country or a group of countries. Pandemics are near-global outbreaks, in which multiple countries around the world are infected.
== Definition ==
The terms "outbreak" and "epidemic" have often been used interchangeably. Researchers Manfred S. Green and colleagues propose that the latter term be restricted to larger events, pointing out that Chambers Concise Dictionary and Stedman's Medical Dictionary acknowledge this distinction.
== Outbreak investigation ==
When investigating disease outbreaks, the epidemiology profession has developed a number of widely accepted steps. As described by the United States Centers for Disease Control and Prevention, these include the following:
Identify the existence of the outbreak (Is the group of ill persons normal for the time of year, geographic area, etc.?)
Verify the diagnosis related to the outbreak
Create a case definition to define who/what is included as a case
Map the spread of the outbreak
Develop a hypothesis (What appears to be causing the outbreak?)
Study hypotheses (collect data and perform analysis)
Refine hypothesis and carry out further study
Develop and implement control and prevention systems
Release findings to greater communities
The order of the above steps, and the relative amount of effort and resources used in each, varies from outbreak to outbreak. For example, prevention and control measures are usually implemented very early in the investigation, often before the causative agent is known. In many situations, promoting good hygiene and hand-washing is one of the first things recommended; other interventions may be added as the investigation moves forward and more information is obtained. In outbreaks identified through notifiable disease surveillance, reports are often linked to laboratory results, and verifying the diagnosis is straightforward. In outbreaks of unknown etiology, determining and verifying the diagnosis can be a significant part of the investigation with respect to time and resources. Several steps are usually going on at any point in time during the investigation, and steps may be repeated: for example, initial case definitions are often intentionally broad but are later refined as more is learned about the outbreak. The above list has nine steps; other versions have more, often adding the implementation of active surveillance to identify additional cases.
Outbreak debriefing and review has also been recognized as an additional final step and iterative process by the Public Health Agency of Canada.
== Types ==
There are several outbreak patterns, which can be useful in identifying the transmission method or source, and predicting the future rate of infection. Each has a distinctive epidemic curve, or histogram of case infections and deaths.
Common source – All victims acquire the infection from the same source (e.g. a contaminated water supply).
Continuous source – Common source outbreak where the exposure occurs over multiple incubation periods
Point source – Common source outbreak where the exposure occurs in less than one incubation period
Propagated – Transmission occurs from person to person.
Outbreaks can also be:
Behavioral risk related (e.g., sexually transmitted diseases, increased risk due to malnutrition)
Zoonotic – The infectious agent is endemic to an animal population, infection is transferred to humans.
Patterns of occurrence are:
Endemic – a communicable disease, such as influenza, measles, mumps, pneumonia, colds, viruses, and smallpox, which is characteristic of a particular place, or among a particular group, or area of interest or activity.
Epidemic – when this disease is found to infect a significantly larger number of people at the same time than is common at that time, and among that population, and may spread through one or several communities.
Pandemic – occurs when an epidemic spreads worldwide.
== Condition for declaring an outbreak over ==
By convention, a communicable disease outbreak is declared over when a period of twice the incubation period of the infectious disease has elapsed without the identification of any new case. However, for organisms with a short incubation period (e.g. fewer than ten days), a period of three times the incubation period is preferred.
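This convention amounts to simple date arithmetic. As a minimal sketch in Python (the function name and example values are illustrative, not drawn from any official guidance):

```python
from datetime import date, timedelta

def outbreak_clear_date(last_case: date, incubation_days: int) -> date:
    """Earliest date an outbreak may be declared over, following the
    convention above: twice the incubation period after the last
    identified case, or three times it for short incubation periods
    (fewer than ten days)."""
    multiplier = 3 if incubation_days < 10 else 2
    return last_case + timedelta(days=multiplier * incubation_days)

# Illustrative example: a pathogen with a 21-day incubation period
# gives a 42-day waiting period after the last identified case.
print(outbreak_clear_date(date(2024, 1, 1), 21))  # 2024-02-12
```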
== Outbreak legislation ==
Outbreak legislation is still in its infancy, and few countries have a direct and complete set of provisions.
However, some countries do manage the outbreaks using relevant acts, such as public health law.
World Health Organization member states are obligated by International Health Regulations to report outbreaks. WHO member states are holding a special session in November 2021 to consider the International Treaty for Pandemic Preparedness and Response to establish further legal obligations in managing disease outbreaks.
== See also ==
1947 New York City smallpox outbreak
1993 Four Corners hantavirus outbreak
2003 Midwest monkeypox outbreak
2007 Yap Islands Zika virus outbreak
2014 Democratic Republic of the Congo Ebola virus outbreak
2019–present coronavirus pandemic by country and territory
Superspreading event
== References ==
== External links ==
Plague of Suspicion, audio hour on media coverage of outbreaks and epidemics | Wikipedia/Disease_outbreak |
Viral phylodynamics is the study of how epidemiological, immunological, and evolutionary processes act and potentially interact to shape viral phylogenies.
Since the term was coined in 2004, research on viral phylodynamics has focused on transmission dynamics in an effort to shed light on how these dynamics impact viral genetic variation. Transmission dynamics can be considered at the level of cells within an infected host, individual hosts within a population, or entire populations of hosts.
Many viruses, especially RNA viruses, rapidly accumulate genetic variation because of short generation times and high mutation rates.
Patterns of viral genetic variation are therefore heavily influenced by how quickly transmission occurs and by which entities transmit to one another.
Patterns of viral genetic variation will also be affected by selection acting on viral phenotypes.
Although viruses can differ with respect to many phenotypes, phylodynamic studies have to date tended to focus on a limited number of viral phenotypes.
These include virulence phenotypes, phenotypes associated with viral transmissibility, cell or tissue tropism phenotypes, and antigenic phenotypes that can facilitate escape from host immunity.
Due to the impact that transmission dynamics and selection can have on viral genetic variation, viral phylogenies can therefore be used to investigate important epidemiological, immunological, and evolutionary processes, such as epidemic spread, spatio-temporal dynamics including metapopulation dynamics, zoonotic transmission, tissue tropism, and antigenic drift.
The quantitative investigation of these processes through the consideration of viral phylogenies is the central aim of viral phylodynamics.
== Sources of phylodynamic variation ==
In coining the term phylodynamics, Grenfell and coauthors postulated that viral phylogenies "... are determined by a combination of immune selection, changes in viral population size, and spatial dynamics".
Their study showcased three features of viral phylogenies, which may serve as rules of thumb for identifying important epidemiological, immunological, and evolutionary processes influencing patterns of viral genetic variation.
The relative lengths of internal versus external branches will be affected by changes in viral population size over time
Rapid expansion of a virus in a population will be reflected by a "star-like" tree, in which external branches are long relative to internal branches. Star-like trees arise because viruses are more likely to share a recent common ancestor when the population is small, and a growing population has an increasingly smaller population size towards the past. Compared to a phylogeny of an expanding virus, a phylogeny of a viral population that stays constant in size will have external branches that are shorter relative to branches on the interior of the tree. The phylogeny of HIV provides a good example of a star-like tree, as the prevalence of HIV infection rose rapidly throughout the 1980s (exponential growth). The phylogeny of hepatitis B virus instead reflects a viral population that has remained roughly consistent (constant size). Similarly, trees reconstructed from viral sequences isolated from chronically infected individuals can be used to gauge changes in viral population sizes within a host.
The clustering of taxa on a viral phylogeny will be affected by host population structure
Viruses within similar hosts, such as hosts that reside in the same geographic region, are expected to be more closely related genetically if transmission occurs more commonly between them. The phylogenies of measles and rabies virus illustrate viruses with spatially structured host populations. These phylogenies stand in contrast to the phylogeny of human influenza, which does not appear to exhibit strong spatial structure over extended periods of time. Clustering of taxa, when it occurs, is not necessarily observed at all scales, and a population that appears structured at some scale may appear panmictic at another scale, for example at a smaller spatial scale. While spatial structure is the most commonly observed population structure in phylodynamic analyses, viruses may also have nonrandom admixture by attributes such as age, race, and risk behavior. This is because viral transmission can preferentially occur between hosts sharing any of these attributes.
Tree balance will be affected by selection, most notably immune escape
The effect of directional selection on the shape of a viral phylogeny is exemplified by contrasting the trees of influenza virus and HIV's surface proteins. The ladder-like phylogeny of influenza virus A/H3N2's hemagglutinin protein bears the hallmarks of strong directional selection, driven by immune escape (imbalanced tree). In contrast, a more balanced phylogeny may occur when a virus is not subject to strong immune selection or other source of directional selection. An example of this is the phylogeny of the HIV envelope protein inferred from sequences isolated from different individuals in a population (balanced tree). Phylogenies of the HIV envelope protein from chronically infected hosts resemble influenza's ladder-like tree. This highlights that the processes affecting viral genetic variation can differ across scales. Indeed, contrasting patterns of viral genetic variation within and between hosts has been an active topic in phylodynamic research since the field's inception.
Although these three phylogenetic features are useful rules of thumb to identify epidemiological, immunological, and evolutionary processes that might be impacting viral genetic variation, there is growing recognition that the mapping between process and phylogenetic pattern can be many-to-one. For instance, although ladder-like trees could reflect the presence of directional selection, ladder-like trees could also reflect sequential genetic bottlenecks that might occur with rapid spatial spread, as in the case of rabies virus. Because of this many-to-one mapping between process and phylogenetic pattern, research in the field of viral phylodynamics has sought to develop and apply quantitative methods to effectively infer process from reconstructed viral phylogenies (see Methods). The consideration of other data sources (e.g., incidence patterns) may aid in distinguishing between competing phylodynamic hypotheses. Combining disparate sources of data for phylodynamic analysis remains a major challenge in the field and is an active area of research.
== Applications ==
=== Viral origins ===
Phylodynamic models may aid in dating epidemic and pandemic origins.
The rapid rate of evolution in viruses allows molecular clock models to be estimated from genetic sequences, thus providing a per-year rate of evolution of the virus.
With the rate of evolution measured in real units of time, it is possible to infer the date of the most recent common ancestor (MRCA) for a set of viral sequences.
The age of the MRCA of these isolates is a lower bound; the common ancestor of the entire virus population must have existed earlier than the MRCA of the virus sample.
In April 2009, genetic analysis of 11 sequences of swine-origin H1N1 influenza suggested that the common ancestor existed at or before 12 January 2009.
This finding aided in making an early estimate of the basic reproduction number $R_0$ of the pandemic. Similarly, genetic analysis of sequences isolated from within an individual can be used to determine the individual's infection time.
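A simple non-Bayesian way to make such estimates is root-to-tip regression: the divergence of each sampled sequence from the tree root is regressed against its sampling date, with the slope estimating the substitution rate and the x-intercept the date of the MRCA. A minimal sketch, with invented data purely for illustration:

```python
import numpy as np

# Invented example data: decimal sampling dates and root-to-tip
# divergence (substitutions per site) for five sequenced isolates.
dates = np.array([2009.25, 2009.30, 2009.35, 2009.40, 2009.45])
divergence = np.array([0.0010, 0.0012, 0.0014, 0.0017, 0.0019])

# Fit divergence = rate * date + intercept; the slope is the clock
# rate and the x-intercept estimates the date of the MRCA.
rate, intercept = np.polyfit(dates, divergence, 1)
t_mrca = -intercept / rate
print(f"rate = {rate:.2e} subs/site/year, MRCA date ~ {t_mrca:.2f}")
```

With these invented numbers the fitted MRCA date lands in early 2009; full Bayesian phylogenetic methods additionally integrate over phylogenetic uncertainty, as described below.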
=== Viral spread ===
Phylodynamic models may provide insight into epidemiological parameters that are difficult to assess through traditional surveillance means.
For example, assessment of $R_0$ from surveillance data requires careful control of the variation of the reporting rate and the intensity of surveillance. Inferring the demographic history of the virus population from genetic data may help to avoid these difficulties and can provide a separate avenue for inference of $R_0$. Such approaches have been used to estimate $R_0$ in hepatitis C virus and HIV.
Additionally, differential transmission between groups, be they geographic-, age-, or risk-related, is very difficult to assess from surveillance data alone.
Phylogeographic models have the possibility of more directly revealing these otherwise hidden transmission patterns.
Phylodynamic approaches have mapped the geographic movement of the human influenza virus and quantified the epidemic spread of rabies virus in North American raccoons.
However, nonrepresentative sampling may bias inferences of both $R_0$ and migration patterns.
Phylodynamic approaches have also been used to better understand viral transmission dynamics and spread within infected hosts. For example, phylodynamic studies have been used to infer the rate of viral growth within infected hosts and to argue for the occurrence of viral compartmentalization in hepatitis C infection.
=== Viral control efforts ===
Phylodynamic approaches can also be useful in ascertaining the effectiveness of viral control efforts, particularly for diseases with low reporting rates. For example, the genetic diversity of the DNA-based hepatitis B virus declined in the Netherlands in the late 1990s, following the initiation of a vaccination program. This correlation was used to argue that vaccination was effective at reducing the prevalence of infection, although alternative explanations are possible.
Viral control efforts can also impact the rate at which virus populations evolve, thereby influencing phylogenetic patterns. Phylodynamic approaches that quantify how evolutionary rates change over time can therefore provide insight into the effectiveness of control strategies. For example, an application to HIV sequences within infected hosts showed that viral substitution rates dropped to effectively zero following the initiation of antiretroviral drug therapy. This decrease in substitution rates was interpreted as an effective cessation of viral replication following the commencement of treatment, and would be expected to lead to lower viral loads. This finding is especially encouraging because lower substitution rates are associated with slower progression to AIDS in treatment-naive patients.
Antiviral treatment also creates selective pressure for the evolution of drug resistance in virus populations, and can thereby affect patterns of genetic diversity. Commonly, there is a fitness trade-off between faster replication of susceptible strains in the absence of antiviral treatment and faster replication of resistant strains in the presence of antivirals. Thus, ascertaining the level of antiviral pressure necessary to shift evolutionary outcomes is of public health importance. Phylodynamic approaches have been used to examine the spread of oseltamivir resistance in influenza A/H1N1.
== Methods ==
Most often, the goal of phylodynamic analyses is to make inferences of epidemiological processes from viral phylogenies.
Thus, most phylodynamic analyses begin with the reconstruction of a phylogenetic tree.
Genetic sequences are often sampled at multiple time points, which allows the estimation of substitution rates and the time of the MRCA using a molecular clock model.
For viruses, Bayesian phylogenetic methods are popular because of the ability to fit complex demographic scenarios while integrating out phylogenetic uncertainty.
Traditional evolutionary approaches directly utilize methods from computational phylogenetics and population genetics to assess hypotheses of selection and population structure without direct regard for epidemiological models.
For example,
the magnitude of selection can be measured by comparing the rate of nonsynonymous substitution to the rate of synonymous substitution (dN/dS);
the population structure of the host population may be examined by calculation of F-statistics; and
hypotheses concerning panmixis and selective neutrality of the virus may be tested with statistics such as Tajima's D (a minimal implementation sketch follows this list).
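As an illustration of the last item, the following sketch computes Tajima's D from a small alignment using the standard constants. The function name and toy alignment are illustrative, the input must be equal-length aligned sequences with no missing data, and real analyses should rely on established population-genetics software:

```python
import math
from itertools import combinations

def tajimas_d(sequences):
    """Tajima's D compares the mean pairwise difference (pi) with the
    number of segregating sites (S) scaled by a1; under neutrality and
    constant population size, D is expected to be near zero."""
    n = len(sequences)
    pairs = list(combinations(sequences, 2))
    # Mean number of pairwise differences between sequences
    pi = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs) / len(pairs)
    # Number of segregating (polymorphic) sites across the alignment
    S = sum(len(set(site)) > 1 for site in zip(*sequences))
    if S == 0:
        return 0.0
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n**2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

print(tajimas_d(["ATGCA", "ATGCT", "ATGTT", "ACGTT"]))
```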
However, such analyses were not designed with epidemiological inference in mind and it may be difficult to extrapolate from standard statistics to desired epidemiological quantities.
In an effort to bridge the gap between traditional evolutionary approaches and epidemiological models, several analytical methods have been developed to specifically address problems related to phylodynamics.
These methods are based on coalescent theory, birth-death models, and simulation, and are used to more directly relate epidemiological parameters to observed viral sequences.
=== Coalescent theory and phylodynamics ===
==== Effective population size ====
The coalescent is a mathematical model that describes the ancestry of a sample of nonrecombining gene copies.
In modeling the coalescent process, time is usually considered to flow backwards from the present.
In a selectively neutral population of constant size $N$ and nonoverlapping generations (the Wright-Fisher model), the expected time for a sample of two gene copies to coalesce (i.e., find a common ancestor) is $N$ generations. More generally, the waiting time for two members of a sample of $n$ gene copies to share a common ancestor is exponentially distributed, with rate

$$\lambda_n = \binom{n}{2}\frac{1}{N}.$$
This time interval is labeled $T_n$, and at its end there are $n-1$ extant lineages remaining. These remaining lineages will coalesce at the rates $\lambda_{n-1}, \cdots, \lambda_2$ after intervals $T_{n-1}, \cdots, T_2$.
This process can be simulated by drawing exponential random variables with rates $\{\lambda_{n-i}\}_{i=0,\cdots,n-2}$ until there is only a single lineage remaining (the MRCA of the sample). In the absence of selection and population structure, the tree topology may be simulated by picking two lineages uniformly at random after each coalescent interval $T_i$.
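As a minimal sketch of this simulation in Python (the function name is illustrative), the successive exponential intervals are drawn and summed to give the TMRCA; the mean over replicates can be checked against the expectation derived just below:

```python
import random

def simulate_tmrca(n, N):
    """Simulate the neutral coalescent for a sample of n gene copies
    from a constant population of size N: while k lineages remain,
    the waiting time T_k is exponential with rate C(k, 2) / N."""
    tmrca = 0.0
    for k in range(n, 1, -1):
        rate = k * (k - 1) / 2 / N
        tmrca += random.expovariate(rate)
    return tmrca

# The mean over many replicates approaches 2N(1 - 1/n); here
# 2 * 1000 * (1 - 1/10) = 1800 generations.
n, N = 10, 1000
mean = sum(simulate_tmrca(n, N) for _ in range(5000)) / 5000
print(mean)
```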
The expected waiting time to find the MRCA of the sample is the sum of the expected values of the internode intervals,

$$\mathrm{E}[\mathrm{TMRCA}] = \mathrm{E}[T_n] + \mathrm{E}[T_{n-1}] + \cdots + \mathrm{E}[T_2] = 1/\lambda_n + 1/\lambda_{n-1} + \cdots + 1/\lambda_2 = 2N\left(1 - \frac{1}{n}\right).$$
Two corollaries follow:
The expected time to the MRCA (TMRCA) of a sample is bounded in the sample size: $\lim_{n\rightarrow\infty} \mathrm{E}[T_{\mathrm{MRCA}}] = 2N$.
Few samples are required for the expected TMRCA of the sample to be close to this theoretical upper bound, as the difference is $O(1/n)$.
Consequently, the TMRCA estimated from a relatively small sample of viral genetic sequences is an asymptotically unbiased estimate for the time that the viral population was founded in the host population.
For example, Robbins et al. estimated the TMRCA for 74 HIV-1 subtype-B genetic sequences collected in North America to be 1968.
Assuming a constant population size, the time back to 1968 is expected to represent $1 - 1/74 \approx 99\%$ of the TMRCA of the North American virus population.
If the population size $N(t)$ changes over time, the coalescent rate $\lambda_n(t)$ will also be a function of time. Donnelly and Tavaré derived this rate for a time-varying population size under the assumption of constant birth rates: $\lambda_n(t) = {n \choose 2}\frac{1}{N(t)}$.
Because all topologies are equally likely under the neutral coalescent, this model has the same properties as the constant-size coalescent under a rescaling of the time variable: $t \rightarrow \int_{\tau=0}^{t} \frac{\mathrm{d}\tau}{N(\tau)}$.
Very early in an epidemic, the virus population may be growing exponentially at rate $r$, so that $t$ units of time in the past, the population will have size $N(t) = N_0 e^{-rt}$.
In this case, the rate of coalescence becomes $\lambda_n(t) = {n \choose 2}\frac{1}{N_0 e^{-rt}}$.
This rate is small close to when the sample was collected ($t = 0$), so the external branches (those without descendants) of a gene genealogy will tend to be long relative to the branches close to the root of the tree. This is why rapidly growing populations yield trees with long tip branches.
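A hedged sketch of this effect, assuming illustrative values of $N_0$ and $r$: the standard coalescent is simulated on the rescaled clock introduced above and mapped back to real time by inverting the rescaling $s(t) = (e^{rt} - 1)/(rN_0)$.

```python
import math
import random

def exp_growth_coalescent(n, N0, r):
    """Return coalescent node times (real time, backwards from sampling at t = 0)."""
    s, times = 0.0, []
    for k in range(n, 1, -1):
        # Standard coalescent interval on the rescaled clock, rate C(k,2).
        s += random.expovariate(k * (k - 1) / 2)
        # Invert s = (exp(r*t) - 1) / (r * N0) to recover real time t.
        times.append(math.log(1 + r * N0 * s) / r)
    return times

times = exp_growth_coalescent(n=10, N0=1e4, r=1.0)
intervals = [times[0]] + [b - a for a, b in zip(times, times[1:])]
print(intervals)  # tip-side intervals are long; intervals near the root bunch up
```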
If the rate of exponential growth is estimated from a gene genealogy, it may be combined with knowledge of the duration of infection or the serial interval $D$ for a particular pathogen to estimate the basic reproduction number, $R_0$. The two may be linked by the following equation: $r = \frac{R_0 - 1}{D}$.
For example, one of the first estimates of $R_0$ for pandemic H1N1 influenza in 2009 came from a coalescent-based analysis of 11 hemagglutinin sequences combined with prior data about the infectious period of influenza.
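Rearranged as $R_0 = 1 + rD$, the relation is trivial to apply once $r$ has been estimated; the numbers below are illustrative, not the published 2009 estimates.

```python
# Estimate R0 from an epidemic growth rate via R0 = 1 + r * D.
r = 0.3   # exponential growth rate per day, e.g. from a coalescent fit
D = 3.0   # mean serial interval in days, from prior epidemiological data
R0 = 1 + r * D
print(R0)  # 1.9
```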
==== Compartmental models ====
Infectious disease epidemics are often characterized by highly nonlinear and rapid changes in the number of infected individuals and the effective population size of the virus. In such cases, birth rates are highly variable, which can diminish the correspondence between effective population size and the prevalence of infection. Many mathematical models have been developed in the field of mathematical epidemiology to describe the nonlinear time series of prevalence of infection and the number of susceptible hosts. A well-studied example is the Susceptible-Infected-Recovered (SIR) system of differential equations, which describes the fractions of the population $S(t)$ susceptible, $I(t)$ infected, and $R(t)$ recovered as a function of time:
$$\frac{dS}{dt} = -\beta SI, \qquad \frac{dI}{dt} = \beta SI - \gamma I, \qquad \frac{dR}{dt} = \gamma I.$$
Here, $\beta$ is the per capita rate of transmission to susceptible hosts, and $\gamma$ is the rate at which infected individuals recover, whereupon they are no longer infectious. In this case, the incidence of new infections per unit time is $f(t) = \beta SI$, which is analogous to the birth rate in classical population genetics models. The general formula for the rate of coalescence is:
$$\lambda_n(t) = {n \choose 2}\frac{2f(t)}{I(t)^2}.$$
The ratio $2{n \choose 2}/I(t)^2$ can be understood as arising from the probability that two lineages selected uniformly at random are both ancestral to the sample. This probability is the ratio of the number of ways to pick two lineages without replacement from the set of sampled lineages to the number of ways to pick two from the set of all infections: ${n \choose 2}/{I(t) \choose 2} \approx 2{n \choose 2}/I(t)^2$. Coalescent events occur with this probability at the rate given by the incidence function $f(t)$.
For the simple SIR model, this yields $\lambda_n(t) = {n \choose 2}\frac{2\beta S(t)}{I(t)}$.
This expression is similar to the Kingman coalescent rate, but is damped by the fraction susceptible $S(t)$.
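A minimal sketch of this correspondence, assuming illustrative parameter values: the SIR equations are integrated numerically and the coalescent rate $\lambda_n(t) = {n \choose 2}\,2\beta S(t)/I(t)$ is evaluated along the trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, n = 0.5, 0.2, 20      # per-day rates; n sampled lineages

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 120), [0.999, 0.001, 0.0], dense_output=True)
t = np.linspace(1, 120, 6)
S, I, _ = sol.sol(t)
lam = (n * (n - 1) / 2) * 2 * beta * S / I
print(np.c_[t, I, lam])   # the coalescent rate scales inversely with prevalence I(t)
```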
Early in an epidemic, $S(0) \approx 1$, so for the SIR model $\lambda_n(t) \approx {n \choose 2}\frac{2\beta}{I(t)}$.
This has the same mathematical form as the rate in the Kingman coalescent, substituting $N_e = I(t)/(2\beta)$. Consequently, estimates of effective population size based on the Kingman coalescent will be proportional to the prevalence of infection during the early period of exponential growth of the epidemic.
When a disease is no longer growing exponentially but has become endemic, the rate of lineage coalescence can also be derived for the epidemiological model governing the disease's transmission dynamics. This can be done by extending the Wright–Fisher model to allow for unequal offspring distributions. With a Wright–Fisher generation taking $\tau$ units of time, the rate of coalescence is given by $\lambda_n = {n \choose 2}\frac{1}{N_e \tau}$,
where the effective population size $N_e$ is the population size $N$ divided by the variance of the offspring distribution $\sigma^2$. The generation time $\tau$ for an epidemiological model at equilibrium is given by the duration of infection, and the population size $N$ is closely related to the equilibrium number of infected individuals. To derive the variance of the offspring distribution $\sigma^2$ for a given epidemiological model, one can imagine that infected individuals differ from one another in their infectivities, their contact rates, their durations of infection, or in other characteristics relating to their ability to transmit the virus with which they are infected. These differences can be acknowledged by assuming that the basic reproduction number is a random variable $\nu$ that varies across individuals in the population and follows some continuous probability distribution. The mean and variance of these individual basic reproduction numbers, $\mathrm{E}[\nu]$ and $\mathrm{Var}[\nu]$ respectively, can then be used to compute $\sigma^2$. The expression relating these quantities is given by:
$$\sigma^2 = \frac{\mathrm{Var}[\nu]}{\mathrm{E}[\nu]^2} + 1.$$
For example, for the SIR model above, modified to include births into the population and deaths out of the population, the population size $N$ is given by the equilibrium number of infected individuals, $I$. The mean basic reproduction number, averaged across all infected individuals, is $\beta/\gamma$, under the assumption that the background mortality rate is negligible compared with the rate of recovery $\gamma$. The variance in individuals' basic reproduction numbers is $(\beta/\gamma)^2$, because the duration of time individuals remain infected in the SIR model is exponentially distributed. The variance of the offspring distribution $\sigma^2$ is therefore 2, $N_e$ therefore becomes $I/2$, and the rate of coalescence becomes $\lambda_n = {n \choose 2}\frac{2\gamma}{I}$.
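The arithmetic of this example is compact enough to check directly; the sketch below uses illustrative values of $\beta$, $\gamma$, and $I$ and confirms that the general formula $\lambda_n = {n \choose 2}/(N_e\tau)$ reduces to ${n \choose 2}\,2\gamma/I$.

```python
from math import comb

beta, gamma, I, n = 0.5, 0.2, 1000, 20

E_nu = beta / gamma
Var_nu = (beta / gamma) ** 2
sigma2 = Var_nu / E_nu ** 2 + 1   # = 2 for exponentially distributed infections
Ne = I / sigma2                   # = I / 2
tau = 1 / gamma                   # generation time = mean duration of infection
lam = comb(n, 2) / (Ne * tau)     # should equal C(n,2) * 2*gamma / I
print(sigma2, lam, comb(n, 2) * 2 * gamma / I)
```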
This rate, derived for the SIR model at equilibrium, is equivalent to the rate of coalescence given by the more general formula. Rates of coalescence can similarly be derived for epidemiological models with superspreaders or other transmission heterogeneities, for models with individuals who are exposed but not yet infectious, and for models with variable infectious periods, among others. Given some epidemiological information (such as the duration of infection) and a specification of a mathematical model, viral phylogenies can therefore be used to estimate epidemiological parameters that might otherwise be difficult to quantify.
=== Phylogeography ===
At the most basic level, the presence of geographic population structure can be revealed by comparing the genetic relatedness of viral isolates to their geographic relatedness.
A basic question is whether geographic character labels are more clustered on a phylogeny than expected under a simple nonstructured model. This question can be answered by counting the number of geographic transitions on the phylogeny via parsimony, maximum likelihood, or Bayesian inference.
If population structure exists, then there will be fewer geographic transitions on the phylogeny than expected in a panmictic model.
This hypothesis can be tested by randomly scrambling the character labels on the tips of the phylogeny and counting the number of geographic transitions present in the scrambled data.
By repeatedly scrambling the data and calculating transition counts, a null distribution can be constructed and a p-value computed by comparing the observed transition counts to this null distribution.
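A minimal sketch of this permutation test, assuming a hypothetical binary tree stored as nested tuples with location labels ('A', 'B') at the tips; transitions are counted with Fitch parsimony.

```python
import random

def fitch_count(node):
    """Return (state set, parsimony transition count) for a subtree."""
    if isinstance(node, str):
        return {node}, 0
    l_set, l_cost = fitch_count(node[0])
    r_set, r_cost = fitch_count(node[1])
    inter = l_set & r_set
    if inter:
        return inter, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1   # one geographic transition

def tips(node):
    return [node] if isinstance(node, str) else tips(node[0]) + tips(node[1])

def relabel(node, labels):
    if isinstance(node, str):
        return labels.pop()
    return (relabel(node[0], labels), relabel(node[1], labels))

tree = ((("A", "A"), ("A", "B")), (("B", "B"), ("A", "B")))  # hypothetical
observed = fitch_count(tree)[1]
null = []
for _ in range(1000):
    labels = tips(tree)
    random.shuffle(labels)                       # scramble the tip labels
    null.append(fitch_count(relabel(tree, labels))[1])
p = sum(c <= observed for c in null) / len(null)  # fewer transitions => structure
print(observed, p)
```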
Beyond the presence or absence of population structure, phylodynamic methods can be used to infer the rates of movement of viral lineages between geographic locations and reconstruct the geographic locations of ancestral lineages.
Here, geographic location is treated as a phylogenetic character state, similar in spirit to 'A', 'T', 'G', 'C', so that geographic location is encoded as a substitution model.
The same phylogenetic machinery that is used to infer models of DNA evolution can thus be used to infer geographic transition matrices.
The end result is a rate, measured in terms of years or in terms of nucleotide substitutions per site, that a lineage in one region moves to another region over the course of the phylogenetic tree.
In a geographic transmission network, some regions may mix more readily and other regions may be more isolated.
Additionally, some transmission connections may be asymmetric, so that the rate at which lineages in region 'A' move to region 'B' may differ from the rate at which lineages in 'B' move to 'A'.
With geographic location thus encoded, ancestral state reconstruction can be used to infer ancestral geographic locations of particular nodes in the phylogeny. These types of approaches can be extended by substituting other attributes for geographic locations. For example, in an application to rabies virus, Streicker and colleagues estimated rates of cross-species transmission by considering host species as the attribute.
=== Simulation ===
As discussed above, it is possible to directly infer parameters of simple compartmental epidemiological models, such as SIR models, from sequence data by looking at genealogical patterns.
Additionally, general patterns of geographic movement can be inferred from sequence data, but these inferences do not involve an explicit model of transmission dynamics between infected individuals.
For more complicated epidemiological models, such as those involving cross-immunity, age structure of host contact rates, seasonality, or multiple host populations with different life history traits, it is often impossible to analytically predict genealogical patterns from epidemiological parameters.
As such, the traditional statistical inference machinery will not work with these more complicated models, and in this case, it is common to instead use a forward simulation-based approach.
Simulation-based models require specification of a transmission model for the infection process between infected hosts and susceptible hosts and for the recovery process of infected hosts.
Simulation-based models may be compartmental, tracking the numbers of hosts infected and recovered to different viral strains, or may be individual-based, tracking the infection state and immune history of every host in the population.
Generally, compartmental models offer significant advantages in terms of speed and memory usage, but may be difficult to implement for complex evolutionary or epidemiological scenarios.
A forward simulation model may account for geographic population structure or age structure by modulating transmission rates between host individuals of different geographic or age classes.
Additionally, seasonality may be incorporated by allowing time of year to influence transmission rate in a stepwise or sinusoidal fashion.
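For instance, sinusoidal seasonal forcing is often written as a time-varying transmission rate; a minimal sketch with illustrative parameter values:

```python
import math

beta0, eps = 0.5, 0.15   # baseline transmission rate and seasonal amplitude

def beta(t_days):
    """Sinusoidally forced transmission rate peaking once per year."""
    return beta0 * (1 + eps * math.cos(2 * math.pi * t_days / 365.0))

print(beta(0.0), beta(182.5))  # peak vs trough of the seasonal cycle
```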
Connecting the epidemiological model to viral genealogies requires that multiple viral strains, with different nucleotide or amino acid sequences, exist in the simulation, often denoted $I_1, \dots, I_n$ for the different infected classes.
In this case, mutation acts to convert a host in one infected class to another infected class.
Over the course of the simulation, viruses mutate and sequences are produced, from which phylogenies may be constructed and analyzed.
For antigenically variable viruses, it becomes crucial to model the risk of transmission from an individual infected with virus strain 'A' to an individual who has previously been infected with virus strains 'B', 'C', etc.
The level of protection against one strain of virus by a second strain is known as cross-immunity.
In addition to risk of infection, cross-immunity may modulate the probability that a host becomes infectious and the duration that a host remains infectious.
Often, the degree of cross-immunity between virus strains is assumed to be related to their sequence distance.
In general, because simulations must be run rather than likelihoods computed, it may be difficult to make fine-scale inferences on epidemiological parameters, and this work instead usually focuses on broader questions, such as testing whether overall genealogical patterns are consistent with one epidemiological model or another. Additionally, simulation-based methods are often used to validate inference results, providing test data where the correct answer is known ahead of time. Because computing likelihoods for genealogical data under complex simulation models has proven difficult, an alternative statistical approach called Approximate Bayesian Computation (ABC) is becoming popular in fitting these simulation models to patterns of genetic variation, following successful application of this approach to bacterial diseases; ABC replaces the likelihood itself with easily computable summary statistics.
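A hedged sketch of rejection-ABC with a stand-in simulator and a single summary statistic; in a real phylodynamic application the simulator would generate genealogies and the summaries would be tree statistics. All names and values here are hypothetical.

```python
import random

def simulate_summaries(theta):
    # Placeholder: in practice this runs the epidemic/sequence simulation
    # and returns tree summary statistics (e.g., imbalance, diversity).
    return random.gauss(theta, 1.0)

observed = 2.0          # summary statistic computed from the real genealogy
tolerance = 0.1
accepted = []
while len(accepted) < 500:
    theta = random.uniform(0, 5)                   # draw from the prior
    if abs(simulate_summaries(theta) - observed) < tolerance:
        accepted.append(theta)                     # approximate posterior sample
print(sum(accepted) / len(accepted))
```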
== Examples ==
=== Phylodynamics of influenza ===
Human influenza is an acute respiratory infection caused primarily by the influenza A and influenza B viruses.
Influenza A viruses can be further classified into subtypes, such as A/H1N1 and A/H3N2.
Here, subtypes are denoted according to their hemagglutinin (H or HA) and neuraminidase (N or NA) genes, which, as surface proteins, act as the primary targets of the humoral immune response.
Influenza viruses circulate in other species as well, most notably as swine influenza and avian influenza.
Through reassortment, genetic sequences from swine and avian influenza occasionally enter the human population.
If a particular hemagglutinin or neuraminidase has been circulating outside the human population, then humans will lack immunity to this protein and an influenza pandemic may follow a host switch event, as seen in 1918, 1957, 1968 and 2009.
After introduction into the human population, a lineage of influenza generally persists through antigenic drift, in which HA and NA continually accumulate mutations allowing viruses to infect hosts immune to earlier forms of the virus.
These lineages of influenza show recurrent seasonal epidemics in temperate regions and less periodic transmission in the tropics.
Generally, at each pandemic event, the new form of the virus outcompetes existing lineages.
The study of viral phylodynamics in influenza primarily focuses on the continual circulation and evolution of epidemic influenza, rather than on pandemic emergence.
Of central interest to the study of viral phylodynamics is the distinctive phylogenetic tree of epidemic influenza A/H3N2, which shows a single predominant trunk lineage that persists through time and side branches that persist for only 1–5 years before going extinct.
==== Selective pressures ====
Phylodynamic techniques have provided insight into the relative selective effects of mutations to different sites and different genes across the influenza virus genome.
The exposed location of hemagglutinin (HA) suggests that there should exist strong selective pressure for evolution to the specific sites on HA that are recognized by antibodies in the human immune system.
These sites are referred to as epitope sites.
Phylogenetic analysis of H3N2 influenza has shown that putative epitope sites of the HA protein evolve approximately 3.5 times faster on the trunk of the phylogeny than on side branches. This suggests that viruses possessing mutations to these exposed sites benefit from positive selection and are more likely than viruses lacking such mutations to take over the influenza population.
Conversely, putative nonepitope sites of the HA protein evolve approximately twice as fast on side branches as on the trunk of the H3 phylogeny, indicating that mutations at these sites are selected against and that viruses possessing such mutations are less likely to take over the influenza population.
Thus, analysis of phylogenetic patterns gives insight into underlying selective forces.
A similar analysis combining sites across genes shows that while both HA and NA undergo substantial positive selection, internal genes show low rates of amino acid fixation relative to levels of polymorphism, suggesting an absence of positive selection.
Further analysis of HA has shown it to have a very small effective population size relative to the census size of the virus population, as expected for a gene undergoing strong positive selection. However, across the influenza genome, there is surprisingly little variation in effective population size; all genes show similarly low values.
This finding suggests that reassortment between segments occurs slowly enough, relative to the actions of positive selection, that genetic hitchhiking causes beneficial mutations in HA and NA to reduce diversity in linked neutral variation in other segments of the genome.
Influenza A/H1N1 shows a larger effective population size and greater genetic diversity than influenza H3N2, suggesting that H1N1 undergoes less adaptive evolution than H3N2.
This hypothesis is supported by empirical patterns of antigenic evolution; there have been nine vaccine updates recommended by the WHO for H1N1 in the interpandemic period between 1978 and 2009, while there have been 20 vaccine updates recommended for H3N2 during this same time period.
Additionally, an analysis of patterns of sequence evolution on trunk and side branches suggests that H1N1 undergoes substantially less positive selection than H3N2. However, the underlying evolutionary or epidemiological cause for this difference between H3N2 and H1N1 remains unclear.
==== Circulation patterns ====
The extremely rapid turnover of the influenza population means that the rate of geographic spread of influenza lineages must also, to some extent, be rapid.
Surveillance data show a clear pattern of strong seasonal epidemics in temperate regions and less periodic epidemics in the tropics. The geographic origin of seasonal epidemics in the Northern and Southern Hemispheres had been a major open question in the field. However, temperate epidemics usually emerge from a global reservoir rather than from within the previous season's genetic diversity. This and subsequent work has suggested that the global persistence of the influenza population is driven by viruses being passed from epidemic to epidemic, with no individual region of the world showing continual persistence. However, there is considerable debate regarding the particular configuration of the global network of influenza, with one hypothesis suggesting a metapopulation in East and Southeast Asia that continually seeds influenza in the rest of the world, and another hypothesis advocating a more global metapopulation in which temperate lineages often return to the tropics at the end of a seasonal epidemic.
All of these phylogeographic studies necessarily suffer from limitations in the worldwide sampling of influenza viruses. For example, the relative importance of tropical Africa and India has yet to be uncovered. Additionally, the phylogeographic methods used in these studies (see section on phylogeographic methods) make inferences of the ancestral locations and migration rates on only the samples at hand, rather than on the population in which these samples are embedded.
Because of this, study-specific sampling procedures are a concern in extrapolating to population-level inferences. However, estimates of migration rates that are jointly based on epidemiological and evolutionary simulations appear robust to a large degree of undersampling or oversampling of a particular region. Further methodological progress is required to more fully address these issues.
==== Simulation-based models ====
Forward simulation-based approaches for addressing how immune selection can shape the phylogeny of influenza A/H3N2's hemagglutinin protein have been actively developed by disease modelers since the early 2000s.
These approaches include both compartmental models and agent-based models.
One of the first compartmental models for influenza was developed by Gog and Grenfell, who simulated the dynamics of many strains with partial cross-immunity to one another.
Under a parameterization of long host lifespan and short infectious period, they found that strains would form self-organized sets that would emerge and replace one another.
Although the authors did not reconstruct a phylogeny from their simulated results, the dynamics they found were consistent with a ladder-like viral phylogeny exhibiting low strain diversity and rapid lineage turnover.
Later work by Ferguson and colleagues adopted an agent-based approach to better identify the immunological and ecological determinants of influenza evolution.
The authors modeled influenza's hemagglutinin as four epitopes, each consisting of three amino acids.
They showed that under strain-specific immunity alone (with partial cross-immunity between strains based on their amino acid similarity), the phylogeny of influenza A/H3N2's HA was expected to exhibit 'explosive genetic diversity', a pattern that is inconsistent with empirical data.
This led the authors to postulate the existence of a temporary strain-transcending immunity: individuals were immune to reinfection with any other influenza strain for approximately six months following an infection.
With this assumption, the agent-based model could reproduce the ladder-like phylogeny of influenza A/H3N2's HA protein.
Work by Koelle and colleagues revisited the dynamics of influenza A/H3N2 evolution following the publication of a paper by Smith and colleagues which showed that the antigenic evolution of the virus occurred in a punctuated manner. The phylodynamic model designed by Koelle and coauthors argued that this pattern reflected a many-to-one genotype-to-phenotype mapping, with the possibility of strains from antigenically distinct clusters of influenza sharing a high degree of genetic similarity.
Through incorporating this mapping of viral genotype into viral phenotype (or antigenic cluster) into their model, the authors were able to reproduce the ladder-like phylogeny of influenza's HA protein without generalized strain-transcending immunity.
The reproduction of the ladder-like phylogeny resulted from the viral population passing through repeated selective sweeps.
These sweeps were driven by herd immunity and acted to constrain viral genetic diversity.
Instead of modeling the genotypes of viral strains, a compartmental simulation model by Gökaydin and colleagues considered influenza evolution at the scale of antigenic clusters (or phenotypes).
This model showed that antigenic emergence and replacement could result under certain epidemiological conditions.
These antigenic dynamics would be consistent with a ladder-like phylogeny of influenza exhibiting low genetic diversity and continual strain turnover.
In recent work, Bedford and colleagues used an agent-based model to show that evolution in a Euclidean antigenic space can account for the phylogenetic pattern of influenza A/H3N2's HA, as well as the virus's antigenic, epidemiological, and geographic patterns.
The model showed the reproduction of influenza's ladder-like phylogeny depended critically on the mutation rate of the virus as well as the immunological distance yielded by each mutation.
==== The phylodynamic diversity of influenza ====
Although most research on the phylodynamics of influenza has focused on seasonal influenza A/H3N2 in humans, influenza viruses exhibit a wide variety of phylogenetic patterns.
Qualitatively similar to the phylogeny of influenza A/H3N2's hemagglutinin protein, influenza A/H1N1 exhibits a ladder-like phylogeny with relatively low genetic diversity at any point in time and rapid lineage turnover.
However, the phylogeny of influenza B's hemagglutinin protein has two circulating lineages: the Yamagata and the Victoria lineage.
It is unclear how the population dynamics of influenza B contribute to this evolutionary pattern, although one simulation model has been able to reproduce this phylogenetic pattern with longer infectious periods of the host.
Genetic and antigenic variation of influenza is also present across a diverse set of host species.
The impact of host population structure can be seen in the evolution of equine influenza A/H3N8: instead of a single trunk with short side-branches, the hemagglutinin of influenza A/H3N8 splits into two geographically distinct lineages, representing American and European viruses.
The evolution of these two lineages is thought to have occurred as a consequence of quarantine measures.
Additionally, host immune responses are hypothesized to modulate virus evolutionary dynamics.
Swine influenza A/H3N2 is known to evolve antigenically at a rate that is six times slower than that of the same virus circulating in humans, although these viruses' rates of genetic evolution are similar.
Influenza in aquatic birds is hypothesized to exhibit 'evolutionary stasis', although recent phylogenetic work indicates that the rate of evolutionary change in these hosts is similar to those in other hosts, including humans.
In these cases, it is thought that short host lifespans prevent the build-up of host immunity necessary to effectively drive antigenic drift.
=== Phylodynamics of HIV ===
==== Origin and spread ====
The global diversity of HIV-1 group M is shaped by its origins in Central Africa around the turn of the 20th century.
The epidemic underwent explosive growth throughout the early 20th century with multiple radiations out of Central Africa.
While traditional epidemiological surveillance data are almost nonexistent for the early period of epidemic expansion, phylodynamic analyses based on modern sequence data can be used to estimate when the epidemic began and to estimate the early growth rate.
The rapid early growth of HIV-1 in Central Africa is reflected in the star-like phylogenies of the virus, with most coalescent events occurring in the distant past. Multiple founder events have given rise to distinct HIV-1 group M subtypes which predominate in different parts of the world.
Subtype B is most prevalent in North America and Western Europe, while subtypes A and C, which account for more than half of infections worldwide, are common in Africa.
HIV subtypes differ slightly in their transmissibility, virulence, effectiveness of antiretroviral therapy, and pathogenesis.
The rate of exponential growth of HIV in Central Africa in the early 20th century, preceding the establishment of the modern subtypes, has been estimated using coalescent approaches, with parametric exponential growth models fitted for different time periods, risk groups and subtypes. The early spread of HIV-1 has also been characterized using nonparametric ("skyline") estimates of $N_e$.
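A minimal sketch of the classic skyline idea: because $\mathrm{E}[T_k] = N/{k \choose 2}$, each internode interval $T_k$ (while $k$ lineages exist) yields the estimate $\hat{N} = {k \choose 2} T_k$. The interval lengths below are hypothetical, standing in for values read off a dated phylogeny.

```python
from math import comb

intervals = [0.8, 1.1, 2.5, 4.0]   # hypothetical T_5, T_4, T_3, T_2 in years
t = 0.0
for k, T in zip(range(5, 1, -1), intervals):
    print(f"{t:5.1f}-{t + T:5.1f} yrs before present: Ne ~ {comb(k, 2) * T:.1f}")
    t += T
```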
The early growth of subtype B in North America was quite rapid; however, the duration of exponential growth was relatively short, with saturation occurring in the mid- and late 1980s.
At the opposite extreme, HIV-1 group O, a relatively rare group that is geographically confined to Cameroon and that is mainly spread by heterosexual sex, has grown at a lower rate than either subtype B or C.
HIV-1 sequences sampled over a span of five decades have been used with relaxed molecular clock phylogenetic methods to estimate the time of cross-species viral spillover into humans around the early 20th century.
The estimated TMRCA for HIV-1 coincides with the appearance of the first densely populated large cities in Central Africa.
Similar methods have been used to estimate the time that HIV originated in different parts of the world.
The origin of subtype B in North America is estimated to be in the 1960s, where it went undetected until the AIDS epidemic in the 1980s.
There is evidence that progenitors of modern subtype B originally colonized the Caribbean before undergoing multiple radiations to North and South America.
Subtype C originated around the same time in Africa.
==== Contemporary epidemiological dynamics ====
At shorter time scales and finer geographical scales, HIV phylogenies may reflect epidemiological dynamics related to risk behavior and sexual networks.
Very dense sampling of viral sequences within cities over short periods of time has given a detailed picture of HIV transmission patterns in modern epidemics.
Sequencing of virus from newly diagnosed patients is now routine in many countries for surveillance of drug resistance mutations, which has yielded large databases of sequence data in those areas.
There is evidence that HIV transmission within heterogeneous sexual networks leaves a trace in HIV phylogenies, in particular making phylogenies more imbalanced and concentrating coalescent events on a minority of lineages.
By analyzing phylogenies estimated from HIV sequences from men who have sex with men in London, United Kingdom, Lewis et al. found evidence that transmission is highly concentrated in the brief period of primary HIV infection (PHI), which consists of approximately the first 6 months of the infectious period.
In a separate analysis, Volz et al. found that simple epidemiological dynamics explain phylogenetic clustering of viruses collected from patients with PHI.
Patients who were recently infected were more likely to harbor virus that is phylogenetically close to samples from other recently infected patients. Such clustering is consistent with observations in simulated epidemiological dynamics featuring an early period of intensified transmission during PHI. These results therefore provided further support for Lewis et al.'s findings that HIV transmission occurs frequently from individuals early in their infection.
==== Viral adaptation ====
Purifying immune selection dominates evolution of HIV within hosts, but evolution between hosts is largely decoupled from within-host evolution. Immune selection has relatively little influence on HIV phylogenies at the population level for three reasons.
First, there is an extreme bottleneck in viral diversity at the time of sexual transmission. Second, transmission tends to occur early in infection before immune selection has had a chance to operate. Finally, the replicative fitness of a viral strain (measured in transmissions per host) is largely extrinsic to virological factors, depending more heavily on behaviors in the host population. These include heterogeneous sexual and drug-use behaviors.
There is some evidence from comparative phylogenetic analysis and epidemic simulations that HIV adapts at the level of the population to maximize transmission potential between hosts. This adaptation is towards intermediate virulence levels, which balances the productive lifetime of the host (time until AIDS) with the transmission probability per act. A useful proxy for virulence is the set-point viral load (SPVL), which is correlated with the time until AIDS. SPVL is the quasi-equilibrium titer of viral particles in the blood during chronic infection. For adaptation towards intermediate virulence to be possible, SPVL needs to be heritable and a trade-off between viral transmissibility and the lifespan of the host needs to exist. SPVL has been shown to be correlated between HIV donor and recipients in transmission pairs, thereby providing evidence that SPVL is at least partly heritable. The transmission probability of HIV per sexual act is positively correlated with viral load, thereby providing evidence of the trade-off between transmissibility and virulence. It is therefore theoretically possible that HIV evolves to maximize its transmission potential. Epidemiological simulation and comparative phylogenetic studies have shown that adaptation of HIV towards optimum SPVL could be expected over 100–150 years. These results depend on empirical estimates for the transmissibility of HIV and the lifespan of hosts as a function of SPVL.
== Future directions ==
Up to this point, phylodynamic approaches have focused almost entirely on RNA viruses, which often have mutation rates on the order of $10^{-3}$ to $10^{-4}$ substitutions per site per year. At these rates, a sequence of around 1,000 bases accrues on the order of 0.1 to 1 substitutions per year, enough signal to estimate the underlying genealogy connecting sampled viruses with a fair degree of confidence.
However, other pathogens may have significantly slower rates of evolution.
DNA viruses, such as herpes simplex virus, evolve orders of magnitude more slowly.
These viruses have commensurately larger genomes.
Bacterial pathogens such as pneumococcus and tuberculosis evolve more slowly still and have even larger genomes.
In fact, there exists a very general negative correlation between genome size and mutation rate across observed systems.
Because of this, similar amounts of phylogenetic signal are likely to result from sequencing full genomes of RNA viruses, DNA viruses or bacteria.
As sequencing technologies continue to improve, it is becoming increasingly feasible to conduct phylodynamic analyses on the full diversity of pathogenic organisms.
Additionally, improvements in sequencing technologies will allow detailed investigation of within-host evolution, as the full diversity of an infecting quasispecies may be uncovered given enough sequencing effort.
== See also ==
Bacterial phylodynamics
== References ==
This article was adapted from the following source under a CC BY 4.0 license (2013) (reviewer reports):
Erik M Volz; Katia Koelle; Trevor Bedford (21 March 2013). "Viral phylodynamics". PLOS Computational Biology. 9 (3): e1002947. doi:10.1371/journal.pcbi.1002947. ISSN 1553-734X. PMC 3605911. PMID 23555203. Wikidata Q21045423. | Wikipedia/Viral_phylodynamics |
Antiprion drugs are hypothetical drugs that work against prions. Because prion diseases are currently untreatable and fatal, the discovery of effective antiprion drugs is a therapeutic priority. In 2019, research was published describing a drug that delayed the progression of a prion disease in mice and macaques.
== Mechanism of action ==
The disease progression in prion diseases is probably due to the conformational change of the prion protein (PrP). PrP changes from an alpha-helical conformation to a disease-associated, mainly beta-sheeted scrapie isoform (PrPSc), which forms amyloid aggregates. Drugs that contain an N′-benzylidene-benzohydrazide core structure are thought likely to slow this progression. Drugs that target PrPC, the normal prion isoform, are also hypothesized to delay the progression of prion diseases.
== References ==
== External links ==
Walsh, Daniel J.; Rees, Judy R.; Mehra, Surabhi; Bourkas, Matthew E. C.; Kaczmarczyk, Lech; et al. (1 April 2024). "Anti-prion drugs do not improve survival in novel knock-in models of inherited prion disease". PLOS Pathogens. 20 (4): e1012087. doi:10.1371/journal.ppat.1012087. ISSN 1553-7374. PMC 10984475. PMID 38557815. | Wikipedia/Antiprion_drug |
Zinc finger transcription factors, or ZF-TFs, are transcription factors composed of a zinc finger DNA-binding domain and any of a variety of transcription-factor effector domains that exert their modulatory effect in the vicinity of any sequence to which the protein domain binds.
Zinc finger protein transcription factors can be encoded by genes small enough to fit a number of such genes into a single vector, allowing the medical intervention and control of expression of multiple genes and the initiation of an elaborate cascade of events. In this respect, it is also possible to target a sequence that is common to multiple (usually functionally related) genes to control the transcription of all these genes with a single transcription factor. Also, it is possible to target a family of related genes by targeting and modulating the expression of the endogenous transcription factor(s) that control(s) them. They also have the advantage that the targeted sequence need not be symmetrical unlike most other DNA-binding motifs based on natural transcription factors that bind as dimers.
== Applications ==
By targeting the ZF-TF toward a specific DNA sequence and attaching the necessary effector domain, it is possible to downregulate or upregulate the expression of the gene(s) in question while using the same DNA-binding domain. The expression of a gene can also be downregulated by blocking elongation by RNA polymerase (without the need for an effector domain) in the coding region or RNA itself can also be targeted. Besides the obvious development of tools for the research of gene function, engineered ZF-TFs have therapeutic potential including correction of abnormal gene expression profiles (e.g., erbB-2 overexpression in human adenocarcinomas) and anti-retrovirals (e.g. HIV-1).
== See also ==
Artificial transcription factor, of which the ZF-TF is a type
Gene therapy
Zinc finger proteins
Zinc finger chimera
Zinc finger nuclease
== References == | Wikipedia/Zinc_finger_protein_transcription_factor |
The management of HIV/AIDS normally includes the use of multiple antiretroviral drugs as a strategy to control HIV infection. There are several classes of antiretroviral agents that act on different stages of the HIV life-cycle. The use of multiple drugs that act on different viral targets is known as highly active antiretroviral therapy (HAART). HAART decreases the patient's total burden of HIV, maintains function of the immune system, and prevents opportunistic infections that often lead to death. HAART also prevents the transmission of HIV between serodiscordant same-sex and opposite-sex partners so long as the HIV-positive partner maintains an undetectable viral load.
Treatment has been so successful that in many parts of the world, HIV has become a chronic condition in which progression to AIDS is increasingly rare. Anthony Fauci, former head of the United States National Institute of Allergy and Infectious Diseases, has written, "With collective and resolute action now and a steadfast commitment for years to come, an AIDS-free generation is indeed within reach." In the same paper, he noted that an estimated 700,000 lives were saved in 2010 alone by antiretroviral therapy. As another commentary noted, "Rather than dealing with acute and potentially life-threatening complications, clinicians are now confronted with managing a chronic disease that in the absence of a cure will persist for many decades."
The United States Department of Health and Human Services and the World Health Organization (WHO) recommend offering antiretroviral treatment to all patients with HIV. Because of the complexity of selecting and following a regimen, the potential for side effects, and the importance of taking medications regularly to prevent viral resistance, such organizations emphasize the importance of involving patients in therapy choices and recommend analyzing the risks and the potential benefits.
The WHO has defined health as more than the absence of disease. For this reason, many researchers have dedicated their work to better understanding the effects of HIV-related stigma, the barriers it creates for treatment interventions, and the ways in which those barriers can be circumvented.
== Classes of medication ==
There are six classes of drugs, which are usually used in combination, to treat HIV infection. Antiretroviral (ARV) drugs are broadly classified by the phase of the retrovirus life-cycle that the drug inhibits. Typical combinations include two nucleoside reverse-transcriptase inhibitors (NRTI) as a "backbone" along with one non-nucleoside reverse-transcriptase inhibitor (NNRTI), protease inhibitor (PI) or integrase inhibitors (also known as integrase nuclear strand transfer inhibitors or INSTIs) as a "base".
=== Entry inhibitors ===
Entry inhibitors (or fusion inhibitors) interfere with binding, fusion and entry of HIV-1 to the host cell by blocking one of several targets. Maraviroc, enfuvirtide and Ibalizumab are available agents in this class. Maraviroc works by targeting CCR5, a co-receptor located on human helper T-cells. Caution should be used when administering this drug, however, due to a possible shift in tropism which allows HIV to target an alternative co-receptor such as CXCR4. Ibalizumab is effective against both CCR5 and CXCR4 tropic HIV viruses.
In rare cases, individuals may carry the CCR5-Δ32 mutation, which results in a nonfunctional CCR5 co-receptor and, in turn, a means of resistance to or slow progression of the disease. However, as mentioned previously, this can be overcome if an HIV variant that targets CXCR4 becomes dominant. To prevent fusion of the virus with the host membrane, enfuvirtide can be used. Enfuvirtide is a peptide drug that must be injected and acts by interacting with the N-terminal heptad repeat of gp41 of HIV to form an inactive hetero six-helix bundle, therefore preventing infection of host cells.
=== Nucleoside/nucleotide reverse-transcriptase inhibitors ===
Nucleoside reverse-transcriptase inhibitors (NRTI) and nucleotide reverse-transcriptase inhibitors (NtRTI) are nucleoside and nucleotide analogues which inhibit reverse transcription. HIV is an RNA virus, so it can not be integrated into the DNA in the nucleus of the human cell unless it is first "reverse" transcribed into DNA. Since the conversion of RNA to DNA is not naturally done in the mammalian cell, it is performed by a viral protein, reverse transcriptase, which makes it a selective target for inhibition. NRTIs are chain terminators. Once NRTIs are incorporated into the DNA chain, their lack of a 3' OH group prevents the subsequent incorporation of other nucleosides. Both NRTIs and NtRTIs act as competitive substrate inhibitors. Examples of NRTIs include zidovudine, abacavir, lamivudine, emtricitabine, and of NtRTIs – tenofovir and adefovir.
=== Non-nucleoside reverse-transcriptase inhibitors ===
Non-nucleoside reverse-transcriptase inhibitors (NNRTI) inhibit reverse transcriptase by binding to an allosteric site of the enzyme; NNRTIs act as non-competitive inhibitors of reverse transcriptase. NNRTIs affect the handling of substrate (nucleotides) by reverse transcriptase by binding near the active site. NNRTIs can be further classified into 1st generation and 2nd generation NNRTIs. 1st generation NNRTIs include nevirapine and efavirenz. 2nd generation NNRTIs are etravirine and rilpivirine. HIV-2 is intrinsically resistant to NNRTIs.
=== Integrase inhibitors ===
Integrase inhibitors (also known as integrase nuclear strand transfer inhibitors or INSTIs) inhibit the viral enzyme integrase, which is responsible for integration of viral DNA into the DNA of the infected cell. There are several integrase inhibitors under clinical trial, and raltegravir became the first to receive FDA approval in October 2007. Raltegravir has two metal binding groups that compete for substrate with two Mg2+ ions at the metal binding site of integrase. As of early 2022, four other clinically approved integrase inhibitors are elvitegravir, dolutegravir, bictegravir, and cabotegravir.
=== Protease inhibitors ===
Protease inhibitors block the viral protease enzyme necessary to produce mature virions upon budding from the host membrane. Particularly, these drugs prevent the cleavage of gag and gag/pol precursor proteins. Virus particles produced in the presence of protease inhibitors are defective and mostly non-infectious. Examples of HIV protease inhibitors are lopinavir, indinavir, nelfinavir, amprenavir and ritonavir. Darunavir and atazanavir are recommended as first line therapy choices. Maturation inhibitors have a similar effect by binding to gag, but development of two experimental drugs in this class, bevirimat and vivecon, was halted in 2010. Resistance to some protease inhibitors is high. Second generation drugs have been developed that are effective against otherwise resistant HIV variants.
== Combination therapy ==
The life cycle of HIV can be as short as about 1.5 days from viral entry into a cell, through replication, assembly, and release of additional viruses, to infection of other cells. HIV lacks proofreading enzymes to correct errors made when it converts its RNA into DNA via reverse transcription. Its short life-cycle and high error rate cause the virus to mutate very rapidly, resulting in a high genetic variability. Most of the mutations either are inferior to the parent virus (often lacking the ability to reproduce at all) or convey no advantage, but some of them have a natural selection superiority to their parent and can enable them to slip past defenses such as the human immune system and antiretroviral drugs. The more active copies of the virus, the greater the possibility that one resistant to antiretroviral drugs will be made.
When antiretroviral drugs are used improperly, multi-drug resistant strains can become the dominant genotypes very rapidly. In the era before multiple drug classes were available (pre-1997), the reverse-transcriptase inhibitors zidovudine, didanosine, zalcitabine, stavudine, and lamivudine were used serially or in combination leading to the development of multi-drug resistant mutations.
In contrast, antiretroviral combination therapy defends against resistance by creating multiple obstacles to HIV replication. This keeps the number of viral copies low and reduces the possibility of a superior mutation. If a mutation that conveys resistance to one of the drugs arises, the other drugs continue to suppress reproduction of that mutation. With rare exceptions, no individual antiretroviral drug has been demonstrated to suppress an HIV infection for long; these agents must be taken in combinations in order to have a lasting effect. As a result, the standard of care is to use combinations of antiretroviral drugs. Combinations usually consist of three drugs from at least two different classes. This three drug combination is commonly known as a triple cocktail. Combinations of antiretrovirals are subject to positive and negative synergies, which limits the number of useful combinations.
Because of HIV's tendency to mutate, when patients who have started an antiretrovial regimen fail to take it regularly, resistance can develop. On the other hand, patients who take their medications regularly can stay on one regimen without developing resistance. This greatly increases life expectancy and leaves more drugs available to the individual should the need arise.
Since 2000, drug companies have worked together to combine these complex regimens into single-pill fixed-dose combinations. More than 20 antiretroviral fixed-dose combinations have been developed. This greatly increases the ease with which they can be taken, which in turn increases the consistency with which medication is taken (adherence), and thus their effectiveness over the long term.
=== Adjunct treatment ===
Although antiretroviral therapy has helped to improve the quality of life of people living with HIV, there is still a need to explore other ways to further address the disease burden. One such potential strategy that was investigated was to add interleukin 2 as an adjunct to antiretroviral therapy for adults with HIV. A Cochrane review included 25 randomized controlled trials that were conducted across six countries. The researchers found that interleukin 2 increases the CD4 immune cells, but does not make a difference in terms of death and incidence of other infections. Furthermore, there is probably an increase in side-effects with interleukin 2. The findings of this review do not support the use of interleukin 2 as an add-on treatment to antiretroviral therapy for adults with HIV.
== Treatment guidelines ==
=== Initiation of antiretroviral therapy ===
Antiretroviral drug treatment guidelines have changed over time. Before 1987, no antiretroviral drugs were available and treatment consisted of treating complications from opportunistic infections and malignancies. After antiretroviral medications were introduced, most clinicians agreed that HIV positive patients with low CD4 counts should be treated, but no consensus formed as to whether to treat patients with high CD4 counts.
In April 1995, Merck and the National Institute of Allergy and Infectious Diseases began recruiting patients for a trial examining the effects of a three-drug combination of the protease inhibitor indinavir and two nucleoside analogs; the trial illustrated the substantial benefit of combining two NRTIs with a new class of antiretrovirals, the protease inhibitors. Later that year David Ho became an advocate of this "hit hard, hit early" approach with aggressive treatment with multiple antiretrovirals early in the course of the infection. Later reviews in the late 90s and early 2000s noted that this approach ran significant risks of increasing side effects and the development of multidrug resistance, and it was largely abandoned. The only consensus was on treating patients with advanced immunosuppression (CD4 counts less than 350/μL). Treatment with antiretrovirals was expensive at the time, ranging from $10,000 to $15,000 a year.
The timing of when to start therapy has continued to be a core controversy within the medical community, though recent studies have led to more clarity. The NA-ACCORD study observed patients who started antiretroviral therapy either at a CD4 count of less than 500 versus less than 350 and showed that patients who started ART at lower CD4 counts had a 69% increase in the risk of death. In 2015 the START and TEMPRANO studies both showed that patients lived longer if they started antiretrovirals at the time of their diagnosis, rather than waiting for their CD4 counts to drop to a specified level.
Other arguments for starting therapy earlier are that people who start therapy later have been shown to have less recovery of their immune systems, and higher CD4 counts are associated with less cancer.
The European Medicines Agency (EMA) has recommended the granting of marketing authorizations for two new antiretroviral (ARV) medicines, rilpivirine (Rekambys) and cabotegravir (Vocabria), to be used together for the treatment of people with human immunodeficiency virus type 1 (HIV-1) infection. The two medicines are the first ARVs that come in a long-acting injectable formulation. This means that instead of daily pills, people receive intramuscular injections monthly or every two months.
The combination of Rekambys and Vocabria injection is intended for maintenance treatment of adults who have undetectable HIV levels in the blood (viral load less than 50 copies/ml) on their current ARV treatment, and in whom the virus has not developed resistance to certain classes of anti-HIV medicines called non-nucleoside reverse transcriptase inhibitors (NNRTIs) and integrase strand transfer inhibitors (INIs).
==== Treatment as prevention ====
A separate argument for starting antiretroviral therapy that has gained more prominence is its effect on HIV transmission. ART reduces the amount of virus in the blood and genital secretions. This has been shown to lead to dramatically reduced transmission of HIV when one partner with a suppressed viral load (<50 copies/ml) has sex with a partner who is HIV negative. In clinical trial HPTN 052, 1763 serodiscordant heterosexual couples in nine countries were planned to be followed for at least 10 years, with both groups receiving education on preventing HIV transmission and condoms, but only one group getting ART. The study was stopped early (after 1.7 years) for ethical reasons when it became clear that antiviral treatment provided significant protection. Of the 28 couples where cross-infection had occurred, all but one had taken place in the control group, consistent with a 96% reduction in risk of transmission while on ART. The single transmission in the experimental group occurred early after starting ART before viral load was likely to be suppressed. Pre-exposure prophylaxis (PrEP) provides HIV-negative individuals with medication—in conjunction with safer-sex education and regular HIV/STI screenings—in order to reduce the risk of acquiring HIV. In 2011, the journal Science gave the Breakthrough of the Year award to treatment as prevention.
In July 2016 a consensus document was created by the Prevention Access Campaign which has been endorsed by over 400 organisations in 58 countries. The consensus document states that the risk of HIV transmission from a person living with HIV who has been undetectable for a minimum of six months is negligible to non-existent, with negligible being defined as "so small or unimportant to be not worth considering". The Chair of the British HIV Association (BHIVA), Chloe Orkin, stated in July 2017 that 'there should be no doubt about the clear and simple message that a person with sustained, undetectable levels of HIV virus in their blood cannot transmit HIV to their sexual partners.'
Furthermore, the PARTNER study, which ran from 2010 to 2014, enrolled 1166 serodiscordant couples (where one partner is HIV positive and the other is negative) in a study that found that the estimated rate of transmission through any condomless sex with the HIV-positive partner taking ART with an HIV load less than 200 copies/ml was zero.
In summary, as the WHO HIV treatment guidelines state, "The ARV regimens now available, even in the poorest countries, are safer, simpler, more effective and more affordable than ever before."
There is a consensus among experts that, once initiated, antiretroviral therapy should never be stopped. This is because the selection pressure of incomplete suppression of viral replication in the presence of drug therapy causes the more drug sensitive strains to be selectively inhibited. This allows the drug resistant strains to become dominant. This in turn makes it harder to treat the infected individual as well as anyone else they infect. One trial showed higher rates of opportunistic infections, cancers, heart attacks and death in patients who periodically interrupted their ART.
=== Guideline sources ===
There are several treatment guidelines for HIV-1 infected adults in the developed world (that is, those countries with access to all or most therapies and laboratory tests). In the United States there are both the International AIDS Society-USA (IAS-USA) (a 501(c)(3) not-for-profit organization in the US) as well as the US government's Department of Health and Human Services guidelines. In Europe there are the European AIDS Clinical Society guidelines.
For resource limited countries, most national guidelines closely follow the World Health Organization (WHO) guidelines.
==== Guidelines ====
The guidelines use new criteria to consider starting HAART, as described below. However, there remains a range of views on this subject, and the decision of whether to commence treatment ultimately rests with the patient and his or her doctor.
The US DHHS guidelines (published April 8, 2015) state:
Antiretroviral therapy (ART) is recommended for all HIV-infected individuals to reduce the risk of disease progression.
ART also is recommended for HIV-infected individuals for the prevention of transmission of HIV.
Patients starting ART should be willing and able to commit to treatment and understand the benefits and risks of therapy and the importance of adherence. Patients may choose to postpone therapy, and providers, on a case-by-case basis, may elect to defer therapy on the basis of clinical and/or psychosocial factors.
The newest WHO guidelines (dated September 30, 2015) now agree and state:
Antiretroviral therapy (ART) should be initiated in everyone living with HIV at any CD4 cell count
==== Baseline resistance ====
Baseline resistance is the presence of resistance mutations in patients who have never been treated before for HIV. In countries with a high rate of baseline resistance, resistance testing is recommended before starting treatment; or, if the initiation of treatment is urgent, then a "best guess" treatment regimen should be started, which is then modified on the basis of resistance testing. In the UK, there is 11.8% medium to high-level resistance at baseline to the combination of efavirenz + zidovudine + lamivudine, and 6.4% medium to high level resistance to stavudine + lamivudine + nevirapine. In the US, 10.8% of one cohort of patients who had never been on ART before had at least one resistance mutation in 2005. Various surveys in different parts of the world have shown increasing or stable rates of baseline resistance as the era of effective HIV therapy continues. With baseline resistance testing, a combination of antiretrovirals that are likely to be effective can be customized for each patient.
=== Regimens ===
Most HAART regimens consist of three drugs: two NRTIs ("backbone") + a PI/NNRTI/INSTI ("base"). Initial regimens use "first-line" drugs with high efficacy and a low side-effect profile.
The US DHHS preferred initial regimens for adults and adolescents in the United States, as of April 2015, are:
tenofovir/emtricitabine and raltegravir (an integrase inhibitor)
tenofovir/emtricitabine and dolutegravir (an integrase inhibitor)
abacavir/lamivudine (two NRTIs) and dolutegravir for patients who have tested negative for the HLA-B*5701 allele
tenofovir/emtricitabine, elvitegravir (an integrase inhibitor) and cobicistat (inhibiting metabolism of the former) in patients with good kidney function (GFR > 70)
tenofovir/emtricitabine, ritonavir, and darunavir (the latter two are protease inhibitors)
Efavirenz and nevirapine showed similar benefits when each was combined with an NRTI backbone.
In the case of the protease inhibitor based regimens, ritonavir is used at low doses to inhibit cytochrome p450 enzymes and "boost" the levels of other protease inhibitors, rather than for its direct antiviral effect. This boosting effect allows them to be taken less frequently throughout the day. Cobicistat is used with elvitegravir for a similar effect but does not have any direct antiviral effect itself.
The WHO preferred initial regimen for adults and adolescents as of June 30, 2013, is:
tenofovir + lamivudine (or emtricitabine) + efavirenz
=== Special populations ===
==== Acute infection ====
In the first six months after infection, HIV viral loads tend to be elevated and people are more often symptomatic than in later latent phases of HIV disease. There may be special benefits to starting antiretroviral therapy early during this acute phase, including lowering the viral "set-point" or baseline viral load, reducing the mutation rate of the virus, and reducing the size of the viral reservoir (see section below on viral reservoirs). The SPARTAC trial compared 48 weeks of ART vs 12 weeks vs no treatment in acute HIV infection and found that 48 weeks of treatment delayed the decline in CD4 count below 350 cells/mm³ by 65 weeks and kept viral loads significantly lower even after treatment was stopped.
Since viral loads are usually very high during acute infection, this period carries an estimated 26 times higher risk of transmission. Treating acutely infected patients is presumed to have a significant impact on decreasing overall HIV transmission rates, since lower viral loads are associated with lower risk of transmission (see section on treatment as prevention). However, an overall benefit has not been proven and has to be balanced with the risks of HIV treatment. Therapy during acute infection carries a grade BII recommendation from the US DHHS.
==== Children ====
HIV can be especially harmful to infants and children, with one study in Africa showing that 52% of untreated children born with HIV had died by age 2. By five years old, the risk of disease and death from HIV starts to approach that of young adults. The WHO recommends treating all children less than 5 years old, and starting all children older than 5 with stage 3 or 4 disease or CD4 <500 cells/mm³. DHHS guidelines are more complicated but recommend starting all children less than 12 months old and children of any age who have symptoms.
As for which antiretrovirals to use, this is complicated by the fact that many children who are born to mothers with HIV are given a single dose of nevirapine (an NNRTI) at the time of birth to prevent transmission. If this fails it can lead to NNRTI resistance. Also, a large study in Africa and India found that a PI based regimen was superior to an NNRTI based regimen in children less than 3 years who had never been exposed to NNRTIs in the past. Thus the WHO recommends PI based regimens for children less than 3.
The WHO recommends for children less than 3 years:
abacavir (or zidovudine) + lamivudine + lopinavir + ritonavir
and for children 3 years to less than 10 years and adolescents <35 kilograms:
abacavir + lamivudine + efavirenz
US DHHS guidelines are similar but include PI based options for children > 3 years old.
A systematic review assessed the effects and safety of abacavir-containing regimens as first-line therapy for children between 1 month and 18 years of age, compared to regimens with other NRTIs. This review included two trials and two observational studies with almost eleven thousand HIV-infected children and adolescents. They measured virologic suppression, death and adverse events. The authors found no meaningful difference between abacavir-containing regimens and other NRTI-containing regimens. The evidence is of low to moderate quality, and future research may well change these findings.
==== Pregnant women ====
The goals of treatment for pregnant women include the same benefits to the mother as in other infected adults, as well as prevention of transmission to her child. The risk of transmission from mother to child is proportional to the plasma viral load of the mother. Untreated mothers with a viral load >100,000 copies/ml have a transmission risk of over 50%. The risk when viral loads are <1,000 copies/ml is less than 1%. ART for mothers both before and during delivery, and for mothers and infants after delivery, is recommended to substantially reduce the risk of transmission. The mode of delivery is also important, with a planned Caesarean section having a lower risk than vaginal delivery or emergency Caesarean section.
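As an illustration of the threshold figures above, here is a minimal sketch in Python; the function name is hypothetical, and the bands use only the two data points cited in the text, so intermediate viral loads are deliberately left uncharacterized:
<syntaxhighlight lang="python">
def approximate_mtct_risk(viral_load_copies_per_ml: float) -> str:
    """Rough untreated mother-to-child transmission risk band,
    based only on the two thresholds cited above (illustrative)."""
    if viral_load_copies_per_ml > 100_000:
        return "over 50%"
    if viral_load_copies_per_ml < 1_000:
        return "under 1%"
    return "intermediate (not characterized by the cited figures)"

print(approximate_mtct_risk(250_000))  # over 50%
print(approximate_mtct_risk(400))      # under 1%
</syntaxhighlight>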
HIV can also be detected in the breast milk of infected mothers and transmitted through breast feeding. The WHO balances the low risk of transmission through breast feeding from women who are on ART against the protection breastfeeding provides against diarrhea, pneumonia and malnutrition. It also strongly recommends that breastfeeding infants receive prophylactic ART. In the US, the DHHS recommends against women with HIV breastfeeding.
==== Older adults ====
With improvements in HIV therapy, several studies now estimate that patients on treatment in high-income countries can expect a normal life expectancy. This means that a higher proportion of people living with HIV are now older, and research is ongoing into the unique aspects of HIV infection in the older adult. There are data that older people with HIV have a blunted CD4 response to therapy but are more likely to achieve undetectable viral levels. However, not all studies have seen a difference in response to therapy. The guidelines do not have separate treatment recommendations for older adults, but it is important to take into account that older patients are more likely to be on multiple non-HIV medications and to consider drug interactions with any potential HIV medications. There are also increased rates of HIV-associated non-AIDS conditions (HANA), such as heart disease, liver disease and dementia, that are multifactorial complications from HIV, associated behaviors, coinfections like hepatitis B, hepatitis C, and human papilloma virus (HPV), as well as HIV treatment.
==== Adults with depression ====
Many factors may contribute to depression in adults living with HIV, such as the effects of the virus on the brain, other infections or tumours, antiretroviral drugs and other medical treatment. Rates of major depression are higher in people living with HIV compared to the general population, and this may negatively influence antiretroviral treatment outcomes. In a systematic review, Cochrane researchers assessed whether giving antidepressants to adults living with both HIV and depression may improve depression. Ten trials, of which eight were done in high-income countries, with 709 participants were included. Results indicated that antidepressants may be better at improving depression compared to placebo, but the quality of the evidence is low and future research is likely to affect the findings.
== Concerns ==
There are several concerns about antiretroviral regimens that should be addressed before initiating therapy:
Intolerance: The drugs can have serious side-effects which can lead to harm as well as keep patients from taking their medications regularly.
Resistance: Not taking medication consistently can lead to low blood levels that foster drug resistance.
Cost: The WHO maintains a database of world ART costs, which have dropped dramatically in recent years as more first-line drugs have gone off-patent. A one-pill, once-a-day combination therapy has been introduced in South Africa for as little as $10 per patient per month. One 2013 study estimated an overall cost savings to ART therapy in South Africa given reduced transmission. In the United States, new on-patent regimens can cost up to $28,500 per patient per year (a quick annual-cost comparison appears after this list).
Public health: Individuals who fail to use antiretrovirals as directed can develop multi-drug resistant strains which can be passed onto others.
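To put the cost figures cited above on a common footing, a quick annual-cost comparison (illustrative arithmetic using only the two price points mentioned; actual prices vary by regimen and country):
<syntaxhighlight lang="python">
# Annualizing the two cited price points for comparison.
generic_monthly_usd = 10        # one-pill, once-a-day therapy in South Africa
on_patent_annual_usd = 28_500   # new on-patent regimen in the United States

generic_annual_usd = generic_monthly_usd * 12
print(f"Generic: ${generic_annual_usd:,}/year vs on-patent: ${on_patent_annual_usd:,}/year")
print(f"The on-patent regimen costs about {on_patent_annual_usd / generic_annual_usd:.0f}x more")
</syntaxhighlight>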
== Response to therapy ==
=== Virologic response ===
Suppressing the viral load to undetectable levels (<50 copies per ml) is the primary goal of ART. This should happen by 24 weeks after starting combination therapy. Viral load monitoring is the most important predictor of response to treatment with ART. Lack of viral load suppression on ART is termed virologic failure. Levels higher than 200 copies per ml are considered virologic failure and should prompt further testing for potential viral resistance.
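The thresholds described above can be summarized in a small sketch; the function is hypothetical and encodes only the cut-offs named in this section, not clinical guidance:
<syntaxhighlight lang="python">
def virologic_status(viral_load_copies_per_ml: int) -> str:
    """Classify a viral load on ART using the cut-offs cited above."""
    if viral_load_copies_per_ml < 50:
        return "suppressed (undetectable) - the primary goal of ART"
    if viral_load_copies_per_ml > 200:
        return "virologic failure - prompt resistance testing"
    return "detectable but below the failure threshold"

print(virologic_status(30))   # suppressed (undetectable) - the primary goal of ART
print(virologic_status(500))  # virologic failure - prompt resistance testing
</syntaxhighlight>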
Research has shown that people with an undetectable viral load are unable to transmit the virus through condomless sex with a partner of either gender. The 'Swiss Statement' of 2008 described the chance of transmission as 'very low' or 'negligible,' but multiple studies have since shown that this mode of sexual transmission is impossible where the HIV-positive person has a consistently undetectable viral load. This discovery has led to the formation of the Prevention Access Campaign and its 'U=U' or 'Undetectable=Untransmittable' public information strategy, an approach that has gained widespread support amongst HIV/AIDS-related medical, charitable, and research organisations. The studies demonstrating that U=U is an effective strategy for preventing HIV transmission in serodiscordant couples so long as "the partner living with HIV [has] a durably suppressed viral load" include: Opposites Attract, PARTNER 1, PARTNER 2 (for male–male couples) and HPTN052 (for heterosexual couples). In these studies, couples where one partner was HIV-positive and one partner was HIV-negative were enrolled and regular HIV testing was completed. In total, across the four studies, 4097 couples were enrolled over four continents and 151,880 acts of condomless sex were reported; there were zero phylogenetically linked transmissions of HIV where the positive partner had an undetectable viral load. Following this, the U=U consensus statement advocating the use of 'zero risk' was signed by hundreds of individuals and organisations, including the US CDC, British HIV Association and The Lancet medical journal. The importance of the final results of the PARTNER 2 study was described by the medical director of the Terrence Higgins Trust as "impossible to overstate", while lead author Alison Rodger declared that the message that "undetectable viral load makes HIV untransmittable ... can help end the HIV pandemic by preventing HIV transmission." The authors summarised their findings in The Lancet as follows:
Our results provide a similar level of evidence on viral suppression and HIV transmission risk for gay men to that previously generated for heterosexual couples and suggest that the risk of HIV transmission in gay couples through condomless sex when HIV viral load is suppressed is effectively zero. Our findings support the message of the U=U (undetectable equals untransmittable) campaign, and the benefits of early testing and treatment for HIV.
This result is consistent with the conclusion presented by Anthony S. Fauci, the Director of the National Institute of Allergy and Infectious Diseases for the U.S. National Institutes of Health, and his team in a viewpoint published in the Journal of the American Medical Association, that U=U is an effective HIV prevention method when an undetectable viral load is maintained.
=== Immunologic response ===
CD4 cell counts are another key measure of immune status and ART effectiveness. CD4 counts should rise 50 to 100 cells/mm³ in the first year of therapy. There can be substantial fluctuation in CD4 counts, of up to 25%, based on the time of day or concomitant infections. In one long-term study, the majority of the increase in CD4 cell counts occurred in the first two years after starting ART, with little increase afterwards. This study also found that patients who began ART at lower CD4 counts continued to have lower CD4 counts than those who started at higher CD4 counts. When viral suppression on ART is achieved without a corresponding increase in CD4 counts, it can be termed immunologic nonresponse or immunologic failure. While this is predictive of worse outcomes, there is no consensus on how to adjust therapy in response to immunologic failure, or whether switching therapy is beneficial. DHHS guidelines do not recommend switching an otherwise suppressive regimen.
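Because counts can swing by up to 25% without any true change in immune status, a measured change is only meaningful once it exceeds that band. A minimal sketch of such a check, assuming the 25% figure applies symmetrically around the baseline (a simplification):
<syntaxhighlight lang="python">
def beyond_normal_fluctuation(baseline_cd4: int, measured_cd4: int) -> bool:
    """True when a CD4 change exceeds the ~25% day-to-day fluctuation band."""
    return abs(measured_cd4 - baseline_cd4) > 0.25 * baseline_cd4

print(beyond_normal_fluctuation(400, 320))  # False: within the 25% band
print(beyond_normal_fluctuation(400, 280))  # True: likely a real change
</syntaxhighlight>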
Innate lymphoid cells (ILC) are another class of immune cell that is depleted during HIV infection. However, if ART is initiated before this depletion, at around 7 days post infection, ILC levels can be maintained. While CD4 cell counts typically replenish after effective ART, the depletion of ILCs is irreversible if ART is initiated after the depletion has occurred, despite suppression of viremia. Since one of the roles of ILCs is to regulate the immune response to commensal bacteria and to maintain an effective gut barrier, it has been hypothesized that the irreversible depletion of ILCs plays a role in the weakened gut barrier of HIV patients, even after successful ART.
== Salvage therapy ==
In patients who have persistently detectable viral loads while taking ART, tests can be done to investigate whether there is drug resistance. Most commonly, a genotype is sequenced, which can be compared with databases of other HIV viral genotypes and resistance profiles to predict response to therapy. Resistance testing may improve virological outcomes in those who have experienced treatment failure. However, there is a lack of evidence for the effectiveness of such testing in those who have never received treatment.
If there is extensive resistance, a phenotypic test of a patient's virus against a range of drug concentrations can be performed, but this is expensive and can take several weeks, so genotypes are generally preferred. Using information from a genotype or phenotype, a regimen of three drugs from at least two classes is constructed that will have the highest probability of suppressing the virus. If a regimen cannot be constructed from recommended first-line agents, it is termed salvage therapy, and when six or more drugs are needed it is termed mega-HAART.
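The selection rule described above, three predicted-active drugs spanning at least two classes, can be sketched as a simple search. The drug names and the activity map below are illustrative placeholders standing in for a genotype report, not a clinical tool:
<syntaxhighlight lang="python">
from itertools import combinations

# Hypothetical genotype-report output: drugs predicted active, with class.
predicted_active = {
    "tenofovir": "NRTI",
    "lamivudine": "NRTI",
    "darunavir": "PI",
    "dolutegravir": "INSTI",
}

def candidate_regimens(drugs):
    """Yield three-drug combinations spanning at least two classes."""
    for combo in combinations(drugs, 3):
        if len({drugs[d] for d in combo}) >= 2:
            yield combo

for regimen in candidate_regimens(predicted_active):
    print(regimen)
</syntaxhighlight>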
== Structured treatment interruptions ==
Drug holidays (or "structured treatment interruptions") are intentional discontinuations of antiretroviral drug treatment. As mentioned above, randomized controlled studies of structured treatment interruptions have shown higher rates of opportunistic infections, cancers, heart attacks and death in patients who took drug holidays. With the exception of post-exposure prophylaxis (PEP), treatment guidelines do not call for the interruption of drug therapy once it has been initiated.
== Adverse effects ==
Each class and individual antiretroviral carries unique risks of adverse side effects.
=== NRTIs ===
The NRTIs can interfere with mitochondrial DNA synthesis and lead to high levels of lactate and lactic acidosis, liver steatosis, peripheral neuropathy, myopathy and lipoatrophy. First-line NRTIs such as lamivudine/emtricitabine, tenofovir, and abacavir are less likely to cause mitochondrial dysfunction.
Mitochondrial DNA (mtDNA) haplogroups, non-pathologic variants inherited through the maternal line, have been linked to differences in CD4+ count recovery following ART. Idiosyncratic toxicity associated with mtDNA haplogroups has also been studied (Boeisteril et al., 2007).
=== NNRTIs ===
NNRTIs are generally safe and well tolerated. The main reason for discontinuation of efavirenz is neuro-psychiatric effects including suicidal ideation. Nevirapine can cause severe hepatotoxicity, especially in women with high CD4 counts.
=== Protease inhibitors ===
Protease inhibitors (PIs) are often given with ritonavir, a strong inhibitor of cytochrome P450 enzymes, leading to numerous drug-drug interactions. They are also associated with lipodystrophy, elevated triglycerides and elevated risk of heart attack.
=== Integrase inhibitors ===
Integrase inhibitors (INSTIs) are among the best tolerated of the antiretrovirals, with excellent short- and medium-term outcomes. Given their relatively new development, there is less long-term safety data. They are associated with an increase in creatine kinase levels and, rarely, myopathy.
== Post-exposure prophylaxis (PEP) ==
When people are exposed to HIV-positive infectious bodily fluids, whether through skin puncture, contact with mucous membranes or contact with damaged skin, they are at risk of acquiring HIV. Pooled estimates give a risk of transmission of 0.3% for puncture exposures and 0.63% for mucous membrane exposures. United States guidelines state that "feces, nasal secretions, saliva, sputum, sweat, tears, urine, and vomitus are not considered potentially infectious unless they are visibly bloody." Given the rare nature of these events, rigorous studies of the protective abilities of antiretrovirals are limited, but they do suggest that taking antiretrovirals afterwards can prevent transmission. It is unknown if three medications are better than two. The sooner ART is started after exposure the better, but the point after which it becomes ineffective is unknown; the US Public Health Service Guidelines recommend starting prophylaxis up to a week after exposure. They also recommend treating for a duration of four weeks based on animal studies. Their recommended regimen is emtricitabine + tenofovir + raltegravir (an INSTI). The rationale for this regimen is that it is "tolerable, potent, and conveniently administered, and it has been associated with minimal drug interactions." People who are exposed to HIV should have follow-up HIV testing at 6, 12, and 24 weeks.
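The pooled per-exposure estimates quoted above are easier to grasp as odds; a small illustrative conversion:
<syntaxhighlight lang="python">
# Pooled per-exposure transmission risk estimates cited above.
exposure_risk = {
    "percutaneous (needlestick)": 0.003,   # 0.3%
    "mucous membrane": 0.0063,             # 0.63%
}

for route, p in exposure_risk.items():
    print(f"{route}: about 1 in {round(1 / p):,} exposures")
</syntaxhighlight>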
== Pregnancy planning ==
Women with HIV have been shown to have decreased fertility, which can affect available reproductive options. In cases where the woman is HIV negative and the man is HIV positive, the primary assisted reproductive method used to prevent HIV transmission is sperm washing followed by intrauterine insemination (IUI) or in vitro fertilization (IVF). Preferably this is done after the man has achieved an undetectable plasma viral load. In the past there have been cases of HIV transmission to an HIV-negative partner through processed artificial insemination, but in a large modern series that followed 741 couples in which the man had a stable viral load and semen samples were tested for HIV-1, there were no cases of HIV transmission.
For cases where the woman is HIV positive and the man is HIV negative, the usual method is artificial insemination. With appropriate treatment the risk of mother-to-child infection can be reduced to below 1%.
== History ==
Beginning in 1986, several buyers clubs sprang up to combat HIV. The drug zidovudine (AZT), a nucleoside reverse-transcriptase inhibitor (NRTI), was approved by the US FDA in 1987; the FDA bypassed stages of its review for safety and effectiveness in order to distribute the drug earlier, but AZT was not effective on its own. Subsequently, several more NRTIs were developed, but even in combination they were unable to suppress the virus for long periods of time and patients still inevitably died. To distinguish from this early antiretroviral therapy (ART), the term highly active antiretroviral therapy (HAART) was introduced. In 1996, two sequential publications in The New England Journal of Medicine by Hammer and colleagues and Gulick and colleagues illustrated the substantial benefit of combining two NRTIs with a new class of antiretrovirals, protease inhibitors, namely indinavir. This concept of three-drug therapy was quickly incorporated into clinical practice and rapidly showed impressive benefit, with a 60% to 80% decline in rates of AIDS, death, and hospitalization. It also created a new period of optimism at the 11th International AIDS Conference, held in Vancouver that year.
As HAART became widespread, fixed-dose combinations were made available to ease administration. Later, the term combination antiretroviral therapy (cART) gained favor with some physicians as a more accurate name, not conveying to patients any misguided idea of the nature of the therapy. Today multidrug, highly effective regimens have long since become the default in ART, which is why they are increasingly called simply ART instead of HAART or cART. This retronymic process is linguistically comparable to the way that the words electronic computer and digital computer at first were needed to make useful distinctions in computing technology, but with the later irrelevance of the distinction, computer alone now covers their meaning. Thus as "all computers are digital now", so "all ART is combination ART now." However, the names HAART and cART, reinforced by thousands of earlier mentions in medical literature still being regularly cited, also remain in use. In 1997, the number of new HIV/AIDS cases in the United States saw its first significant decline, falling 47%, with credit going to the effectiveness of HAART.
== Research ==
People living with HIV can expect to live a nearly normal life span if able to achieve durable viral suppression on combination antiretroviral therapy. However, this requires lifelong medication, and people on treatment still have higher rates of cardiovascular, kidney, liver and neurologic disease. This has prompted further research towards a cure for HIV.
=== Patients cured of HIV infection ===
The so-called "Berlin patient" has been potentially cured of HIV infection and has been off of treatment since 2006 with no detectable virus. This was achieved through two bone marrow transplants that replaced his immune system with a donor's that did not have the CCR5 cell surface receptor, which is needed for some variants of HIV to enter a cell. Bone marrow transplants carry their own significant risks including potential death and was only attempted because it was necessary to treat a blood cancer he had. Attempts to replicate this have not been successful and given the risks, expense and rarity of CCR5 negative donors, bone marrow transplant is not seen as a mainstream option. It has inspired research into other methods to try to block CCR5 expression through gene therapy. A procedure zinc-finger nuclease-based gene knockout has been used in a Phase I trial of 12 humans and led to an increase in CD4 count and decrease in their viral load while off antiretroviral treatment. Attempt to reproduce this failed in 2016. Analysis of the failure showed that gene therapy only successfully treats 11–28% of cells, leaving the majority of CD4+ cells capable of being infected. The analysis found that only patients where less than 40% of cells were infected had reduced viral load. The gene therapy was not effective if the native CD4+ cells remained. This is the main limitation which must be overcome for this treatment to become effective.
After the "Berlin patient", two additional patients with both HIV infection and cancer were reported to have no traceable HIV virus after successful stem cell transplants. Virologist Annemarie Wensing of the University Medical Center Utrecht announced this development during her presentation at the 2016 "Towards an HIV Cure" symposium. However, these two patients are still on antiretroviral therapy, which is not the case for the Berlin patient. Therefore, it is not known whether or not the two patients are cured of HIV infection. The cure might be confirmed if the therapy were to be stopped and no viral rebound occurred.
In March 2019, a second patient, referred to as the "London Patient", was confirmed to be in complete remission of HIV. Like the Berlin Patient, the London Patient received a bone marrow transplant from a donor with the same CCR5 mutation. He has been off antiviral drugs since September 2017, indicating the Berlin Patient was not a "one-off".
Alternative approaches that aim to mimic natural immunity to HIV through the absence or mutation of the CCR5 gene are being pursued in current research, by introducing induced pluripotent stem cells in which CCR5 has been disrupted using the CRISPR/Cas9 gene-editing system.
=== Viral reservoirs ===
The main obstacle to complete elimination of HIV infection by conventional antiretroviral therapy is that HIV is able to integrate itself into the DNA of host cells and rest in a latent state, while antiretrovirals only attack actively replicating HIV. The cells in which HIV lies dormant are called the viral reservoir, and one of the main sources is thought to be central memory and transitional memory CD4+ T cells. In 2014 there were reports of the cure of HIV in two infants, presumably because treatment was initiated within hours of infection, preventing HIV from establishing a deep reservoir. There is work being done to try to activate reservoir cells into replication so that the virus is forced out of latency and can be attacked by antiretrovirals and the host immune system. Targets include histone deacetylase (HDAC), which represses transcription and, if inhibited, can lead to increased cell activation. The HDAC inhibitors valproic acid and vorinostat have been used in human trials with only preliminary results so far.
=== Immune activation ===
Even with all latent virus deactivated, it is thought that a vigorous immune response will need to be induced to clear all the remaining infected cells. Strategies include using cytokines to restore CD4+ cell counts as well as therapeutic vaccines to prime immune responses. One such candidate vaccine is Tat Oyi, developed by Biosantech. This vaccine is based on the HIV protein tat. Animal models have shown the generation of neutralizing antibodies and lower levels of HIV viremia.
=== Sequential mRNA vaccine ===
HIV vaccine development is an active area of research and an important tool for managing the global AIDS epidemic. Research into a vaccine for HIV has been ongoing for decades with no lasting success for preventing infection. The rapid development, though, of mRNA vaccines to deal with the COVID-19 pandemic may provide a new path forward.
Like SARS-CoV-2, the virus that causes COVID-19, HIV has a spike protein. In retroviruses like HIV, the spike protein is formed by two proteins expressed by the Env gene. This viral envelope binds to the host cell's receptor and gains the virus entry into the cell. With mRNA vaccines, the messenger RNA (mRNA) contains the instructions for making the spike protein. The mRNA is put into lipid-based nanoparticles for drug delivery, a key breakthrough in optimizing the efficiency and efficacy of in vivo delivery. When the vaccine is injected, the mRNA enters cells and joins up with a ribosome. The ribosome then translates the mRNA instructions into the spike protein. The immune system detects the presence of the spike protein, and B cells, a type of white blood cell, begin to develop antibodies. Should the actual virus later enter the system, the external spike protein will be recognized by memory B cells, whose function is to memorize the characteristics of the original antigen. Memory B cells then produce the antibodies, hopefully destroying the virus before it can bind to another cell and repeat the HIV life cycle.
SARS-CoV-2 and HIV-1 have similarities—notably both are RNA viruses—but there are important differences. As a retrovirus, HIV-1 can insert a copy of its RNA genome into the host's DNA, making total eradication more difficult. The virus is also highly mutable making it a challenge for the adaptive immune system to develop a response. As a chronic infection, HIV-1 and the adaptive immune system undergo reciprocal selective pressures leading to the evolutionary arms race of coevolution.
Broadly neutralizing HIV-1 antibodies, or bnAbs, have been shown to attach to the Env spike protein regardless of the specific HIV mutations. This bodes well for vaccine development. Complicating matters, though, naive B cells—mature B cells that have not yet been exposed to any antigen and are the progenitors of bnAbs—are rare. Further, the mutation events needed to turn these B cells into bnAb-producing cells are also rare. Because of this, there is a growing consensus that an effective HIV vaccine will need to create not only humoral (antibody-mediated) immunity but also T-cell-mediated immunity.
mRNA vaccines have advantages over traditional vaccines which may help deal with some of the challenges presented by HIV. The mRNA in the vaccine codes only for the spike protein, not the whole virus, so the possibility of reverse transcription, where the virus copies its genetic material into the host's genome, is not present. Another advantage when compared to traditional vaccines is the speed of development: mRNA vaccines take months, not years, which means a multipart sequential vaccine regime is possible.
Attempts to elicit an immune response that triggers broadly neutralizing antibodies (bnAbs) with a single vaccine dose have been unsuccessful. A multipart sequential mRNA vaccine regime, however, might guide the immune response in the right direction. The first shot triggers an immune response from the correct naive B cells. Later vaccinations encourage the development of these cells further, eventually turning them into memory B cells, and later into plasma cells, which can secrete the broadly neutralizing antibodies:
In essence, the sequential immunization approach represents an attempt to mimic Env evolution that would occur with natural infection.... In contrast to traditional prime/boost strategies, in which the same immunogen is used repeatedly for vaccination, the sequential immunization approach relies on a series of different immunogens with the goal of eventually inducing bnAb(s).
A Phase 1 clinical trial by Scripps Research and the International AIDS Vaccine Initiative of an mRNA vaccine showed that 97 percent of participants had the desired initial "priming" immune response of naive B cells. This is a positive result for developing the first shot in a vaccine sequence. Moderna is partnering with Scripps and the International AIDS Vaccine Initiative for a follow-up Phase 1 clinical trial of an HIV mRNA vaccine (mRNA-1644) starting later in 2021.
== Drug advertisements ==
Direct-to-consumer and other advertisements for HIV drugs were criticized in the past for using healthy, glamorous models rather than typical people with HIV/AIDS, who at the time often presented with debilitating conditions or illnesses. Featuring people in unrealistically strenuous activities, such as mountain climbing, proved offensive and insensitive to the suffering of people who are HIV positive. The US FDA reprimanded multiple pharmaceutical manufacturers for publishing such adverts in 2001, as the misleading advertisements harmed consumers by implying unproven benefits and failing to disclose important information about the drugs. By not presenting their drugs in a realistic way, some drug companies distorted the general public's perception of HIV, suggesting it was less serious than it actually was. This discouraged people from getting tested, for fear of being HIV positive, because at the time (particularly in the 1980s and 1990s) having contracted HIV was seen as a death sentence, as there was no known cure. An example of such a case is Freddie Mercury, who died in 1991, aged 45, of AIDS-related pneumonia.
== Beyond medical management ==
The preamble to the World Health Organization's Constitution defines health as "a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity." Those living with HIV today are met with other challenges that go beyond the singular goal of lowering their viral load. A 2009 meta-analysis studying the correlates of HIV-stigma found that individuals living with higher stigma burden were more likely to have poorer physical and mental health. Insufficient social support and delayed diagnosis due to decreased frequency of HIV testing and knowledge of risk reduction were cited as some of the reasons. People living with HIV (PLHIV) have lower health related quality of life (HRQoL) scores than do the general population. The stigma of having HIV is often compounded with the stigma of identifying with the LGBTQ community or the stigma of being an injecting drug user (IDU) even though heterosexual sexual transmission accounts for 85% of all HIV-1 infections worldwide. AIDS has been cited as the most heavily stigmatized medical condition among infectious diseases. Part of the consequence of this stigma toward PLHIV is the belief that they are seen as responsible for their status and less deserving of treatment.
A 2016 study sharing the WHO's definition of health critiques its 90-90-90 target goal, which is part of a larger strategy that aims to eliminate the AIDS epidemic as a public health threat by 2030, by arguing that it does not go far enough in ensuring the holistic health of PLHIV. The study suggests that maintenance of HIV and AIDS should go beyond the suppression of viral load and the prevention of opportunistic infection. It proposes adding a 'fourth 90' addressing a new 'quality of life' target that would focus specifically on increasing the quality of life for those that are able to suppress their viral load to undetectable levels, along with new metrics to track the progress toward that target. This study serves as an example of the shifting paradigm in the dynamics of the health care system from being heavily 'disease-oriented' to more 'human-centered'. Though questions remain of what exactly a more 'human-centered' method of treatment looks like in practice, it generally aims to ask what kind of support, other than medical support, PLHIV need to cope with and eliminate HIV-related stigmas. Campaigns and marketing aimed at educating the general public in order to reduce misplaced fears of HIV contraction are one example. Also encouraged is the capacity-building and guided development of PLHIV into more leadership roles, with the goal of having a greater representation of this population in decision-making positions. Structural legal intervention has also been proposed, specifically referring to legal interventions to put in place protections against discrimination and improve access to employment opportunities. On the side of the practitioner, greater familiarity with the experience of people living with HIV is encouraged, alongside the promotion of an environment of nonjudgment and confidentiality.
Psychosocial group interventions such as psychotherapy, relaxation, group support, and education may have some beneficial effects on depression in HIV positive people.
== Food insecurity ==
The successful treatment and management of HIV/AIDS is affected by many factors, ranging from consistent adherence to prescribed medications and prevention of opportunistic infections to food access. Food insecurity is a condition in which households lack access to adequate food because of limited money or other resources. Food insecurity is a global issue that affects billions of people each year, including those living in developed countries.
Food insecurity is a major public health disparity in the United States of America, which significantly affects minority groups, people living at or below the poverty line, and those living with one or more morbidities. As of December 31, 2017, there were approximately 126,742 people living with HIV/AIDS (PLWHA) in NYC, of whom 87.6% can be described as living with some level of poverty and food insecurity, as reported by the NYC Department of Health on March 31, 2019. Having access to a consistent food supply that is safe and healthy is an important part of the treatment and management of HIV/AIDS. PLWHA are also greatly affected by food inequities and food deserts, which can leave them food insecure. Food insecurity, which can cause malnutrition, can also negatively impact HIV treatment and recovery from opportunistic infections. PLWHA also require additional calories and nutritional support, including foods free from contamination, to prevent further immune compromise. Food insecurity can further exacerbate the progression of HIV/AIDS and can prevent PLWHA from consistently following their prescribed regimen, leading to poor outcomes.
It is imperative that food insecurity among PLWHA be addressed and rectified to reduce this health inequity. It is important to recognize that socioeconomic status, access to medical care, geographic location, public policy, and race and ethnicity all play a pivotal role in the treatment and management of HIV/AIDS. The lack of sufficient and constant income limits the options for food, treatment, and medications. The same can be inferred for those who are among oppressed groups in society who are marginalized and may be less inclined or encouraged to seek care and assistance. Endeavors to address food insecurity should be included in HIV treatment programs and may help improve health outcomes if they focus on health equity among the diagnosed as much as on medications. Access to consistently safe and nutritious foods is one of the most important facets of ensuring PLWHA are provided the best possible care. By altering the narratives around HIV treatment so that more support can be garnered to reduce food insecurity and other health disparities, mortality rates will decrease for people living with HIV/AIDS.
== See also ==
AV-HALT
Discovery and development of HIV-protease inhibitors
Discovery and development of non-nucleoside reverse-transcriptase inhibitors
Discovery and development of nucleoside and nucleotide reverse-transcriptase inhibitors
HIV capsid inhibition
== References ==
== Further reading ==
Strayer DS, Akkina R, Bunnell BA, Dropulic B, Planelles V, Pomerantz RJ, et al. (June 2005). "Current status of gene therapy strategies to treat HIV/AIDS". Molecular Therapy. 11 (6): 823–42. doi:10.1016/j.ymthe.2005.01.020. PMID 15922953.
== External links ==
HIVinfo at US Department of Health and Human Services
Antiviral drugs are a class of medication used for treating viral infections. Most antivirals target specific viruses, while a broad-spectrum antiviral is effective against a wide range of viruses. Antiviral drugs are a class of antimicrobials, a larger group which also includes antibiotics (also termed antibacterials), antifungal and antiparasitic drugs, as well as antiviral drugs based on monoclonal antibodies. Most antivirals are considered relatively harmless to the host, and therefore can be used to treat infections. They should be distinguished from virucides, which are not medication but deactivate or destroy virus particles, either inside or outside the body. Natural virucides are produced by some plants such as eucalyptus and Australian tea trees.
== Medical uses ==
Most of the antiviral drugs now available are designed to help deal with HIV, herpes viruses, the hepatitis B and C viruses, and influenza A and B viruses.
Viruses use the host's cells to replicate and this makes it difficult to find targets for the drug that would interfere with the virus without also harming the host organism's cells. Moreover, the major difficulty in developing vaccines and antiviral drugs is due to viral variation.
The emergence of antivirals is the product of a greatly expanded knowledge of the genetic and molecular function of organisms, allowing biomedical researchers to understand the structure and function of viruses, major advances in the techniques for finding new drugs, and the pressure placed on the medical profession to deal with the human immunodeficiency virus (HIV), the cause of acquired immunodeficiency syndrome (AIDS).
The first experimental antivirals were developed in the 1960s, mostly to deal with herpes viruses, and were found using traditional trial-and-error drug discovery methods. Researchers grew cultures of cells and infected them with the target virus. They then introduced into the cultures chemicals which they thought might inhibit viral activity and observed whether the level of virus in the cultures rose or fell. Chemicals that seemed to have an effect were selected for closer study.
This was a very time-consuming, hit-or-miss procedure, and in the absence of a good knowledge of how the target virus worked, it was not efficient in discovering effective antivirals which had few side effects. Only in the 1980s, when the full genetic sequences of viruses began to be unraveled, did researchers begin to learn how viruses worked in detail, and exactly what chemicals were needed to thwart their reproductive cycle.
== Antiviral drug design ==
=== Antiviral targeting ===
The general idea behind modern antiviral drug design is to identify viral proteins, or parts of proteins, that can be disabled. These "targets" should generally be as unlike any proteins or parts of proteins in humans as possible, to reduce the likelihood of side effects and toxicity. The targets should also be common across many strains of a virus, or even among different species of virus in the same family, so a single drug will have broad effectiveness. For example, a researcher might target a critical enzyme synthesized by the virus, but not by the patient, that is common across strains, and see what can be done to interfere with its operation.
Once targets are identified, candidate drugs can be selected, either from drugs already known to have appropriate effects or by actually designing the candidate at the molecular level with a computer-aided design program.
The target proteins can be manufactured in the lab for testing with candidate treatments by inserting the gene that synthesizes the target protein into bacteria or other kinds of cells. The cells are then cultured for mass production of the protein, which can then be exposed to various treatment candidates and evaluated with "rapid screening" technologies.
=== Approaches by virus life cycle stage ===
Viruses consist of a genome and sometimes a few enzymes stored in a capsule made of protein (called a capsid), and sometimes covered with a lipid layer (sometimes called an 'envelope'). Viruses cannot reproduce on their own and instead propagate by subjugating a host cell to produce copies of themselves, thus producing the next generation.
Researchers working on such "rational drug design" strategies for developing antivirals have tried to attack viruses at every stage of their life cycles. Some species of mushrooms have been found to contain multiple antiviral chemicals with similar synergistic effects.
Compounds isolated from fruiting bodies and filtrates of various mushrooms have broad-spectrum antiviral activities, but successful production and availability of such compounds as frontline antivirals is a long way away.
Viral life cycles vary in their precise details depending on the type of virus, but they all share a general pattern (a minimal sketch pairing these stages with the drug strategies discussed below follows the list):
Attachment to a host cell.
Release of viral genes and possibly enzymes into the host cell.
Replication of viral components using host-cell machinery.
Assembly of viral components into complete viral particles.
Release of viral particles to infect new host cells.
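As a reading aid for the sections that follow, the sketch below pairs each life-cycle stage with example strategies discussed later in this article; the pairing is a summary, not an exhaustive mapping:
<syntaxhighlight lang="python">
# Stage-to-strategy map drawn from the sections below (illustrative only).
life_cycle_targets = {
    "attachment / entry": ["entry inhibitors (e.g., enfuvirtide)"],
    "uncoating": ["amantadine", "rimantadine", "pleconaril"],
    "synthesis / replication": ["nucleoside analogues (e.g., aciclovir)",
                                "integrase inhibitors (e.g., raltegravir)"],
    "assembly": ["protease inhibitors", "rifampicin"],
    "release": ["neuraminidase inhibitors (zanamivir, oseltamivir)"],
}

for stage, strategies in life_cycle_targets.items():
    print(f"{stage}: {', '.join(strategies)}")
</syntaxhighlight>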
==== Before cell entry ====
One antiviral strategy is to interfere with the ability of a virus to infiltrate a target cell. The virus must go through a sequence of steps to do this, beginning with binding to a specific "receptor" molecule on the surface of the host cell and ending with the virus "uncoating" inside the cell and releasing its contents. Viruses that have a lipid envelope must also fuse their envelope with the target cell, or with a vesicle that transports them into the cell before they can uncoat.
This stage of viral replication can be inhibited in two ways:
Using agents which mimic the virus-associated protein (VAP) and bind to the cellular receptors. This may include VAP anti-idiotypic antibodies, natural ligands of the receptor, and anti-receptor antibodies.
Using agents which mimic the cellular receptor and bind to the VAP. This includes anti-VAP antibodies, receptor anti-idiotypic antibodies, extraneous receptor and synthetic receptor mimics.
This strategy of designing drugs can be very expensive, and since the process of generating anti-idiotypic antibodies is partly trial and error, it can be a relatively slow process until an adequate molecule is produced.
===== Entry inhibitor =====
A very early stage of viral infection is viral entry, when the virus attaches to and enters the host cell. A number of "entry-inhibiting" or "entry-blocking" drugs are being developed to fight HIV. HIV most heavily targets a specific type of lymphocyte known as "helper T cells", and identifies these target cells through T-cell surface receptors designated "CD4" and "CCR5". Attempts to interfere with the binding of HIV with the CD4 receptor have failed to stop HIV from infecting helper T cells, but research continues on trying to interfere with the binding of HIV to the CCR5 receptor in hopes that it will be more effective.
HIV infects a cell through fusion with the cell membrane, which requires two different cellular molecular participants, CD4 and a chemokine receptor (differing depending on the cell type). Approaches to blocking this virus/cell fusion have shown some promise in preventing entry of the virus into a cell. At least one of these entry inhibitors—a biomimetic peptide called Enfuvirtide, or the brand name Fuzeon—has received FDA approval and has been in use for some time. One of the potential benefits of an effective entry-blocking or entry-inhibiting agent is that it may not only prevent the spread of the virus within an infected individual but also the spread from an infected to an uninfected individual.
One possible advantage of the therapeutic approach of blocking viral entry (as opposed to the currently dominant approach of viral enzyme inhibition) is that it may prove more difficult for the virus to develop resistance to this therapy than for the virus to mutate or evolve its enzymatic protocols.
===== Uncoating inhibitors =====
Inhibitors of uncoating have also been investigated.
Amantadine and rimantadine have been introduced to combat influenza. These agents act on penetration and uncoating.
Pleconaril works against rhinoviruses, which cause the common cold, by blocking a pocket on the surface of the virus that controls the uncoating process. This pocket is similar in most strains of rhinoviruses and enteroviruses, which can cause diarrhea, meningitis, conjunctivitis, and encephalitis.
Some scientists are making the case that a vaccine against rhinoviruses, the predominant cause of the common cold, is achievable.
Vaccines that combine dozens of varieties of rhinovirus at once are effective in stimulating antiviral antibodies in mice and monkeys, researchers reported in Nature Communications in 2016.
Rhinoviruses are the most common cause of the common cold; other viruses such as respiratory syncytial virus, parainfluenza virus and adenoviruses can cause colds too. Rhinoviruses also exacerbate asthma attacks. Although rhinoviruses come in many varieties, they do not drift to the same degree that influenza viruses do. A mixture of 50 inactivated rhinovirus types should be able to stimulate neutralizing antibodies against all of them to some degree.
==== During viral synthesis ====
A second approach is to target the processes that synthesize virus components after a virus invades a cell.
===== Reverse transcription =====
One way of doing this is to develop nucleotide or nucleoside analogues that look like the building blocks of RNA or DNA, but deactivate the enzymes that synthesize the RNA or DNA once the analogue is incorporated. This approach is more commonly associated with the inhibition of reverse transcriptase (RNA to DNA) than with "normal" transcriptase (DNA to RNA).
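The chain-termination idea can be illustrated with a toy simulation: extension proceeds base by base, and if an analogue is incorporated in place of the natural nucleotide, synthesis stops because the analogue cannot accept the next base. This is a conceptual sketch only; real incorporation rates depend on the enzyme and the analogue:
<syntaxhighlight lang="python">
import random

random.seed(0)  # reproducible demo

def extend_strand(template: str, analogue_fraction: float) -> str:
    """Toy model: copy the template until a chain-terminating
    analogue ('X') is incorporated by chance."""
    strand = []
    for base in template:
        if random.random() < analogue_fraction:
            strand.append("X")  # analogue lacks the 3'-OH; extension halts
            break
        strand.append(base)
    return "".join(strand)

print(extend_strand("ATGCGTACGTTAGC", analogue_fraction=0.3))  # e.g. 'ATGX'
</syntaxhighlight>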
The first successful antiviral, aciclovir, is a nucleoside analogue, and is effective against herpesvirus infections. The first antiviral drug to be approved for treating HIV, zidovudine (AZT), is also a nucleoside analogue.
An improved knowledge of the action of reverse transcriptase has led to better nucleoside analogues to treat HIV infections. One of these drugs, lamivudine, has been approved to treat hepatitis B, which uses reverse transcriptase as part of its replication process. Researchers have gone further and developed inhibitors that do not look like nucleosides, but can still block reverse transcriptase.
Another target being considered for HIV antivirals is RNase H, a component of reverse transcriptase that splits the synthesized DNA from the original viral RNA.
===== Integrase =====
Another target is integrase, which integrates the synthesized DNA into the host cell genome. Examples of integrase inhibitors include raltegravir, elvitegravir, and dolutegravir.
===== Transcription =====
Once a virus genome becomes operational in a host cell, it then generates messenger RNA (mRNA) molecules that direct the synthesis of viral proteins. Production of mRNA is initiated by proteins known as transcription factors. Several antivirals are now being designed to block attachment of transcription factors to viral DNA.
===== Translation/antisense =====
Genomics has not only helped find targets for many antivirals, it has provided the basis for an entirely new type of drug, based on "antisense" molecules. These are segments of DNA or RNA that are designed as complementary molecules to critical sections of viral genomes, and the binding of these antisense segments to these target sections blocks the operation of those genomes. A phosphorothioate antisense drug named fomivirsen has been introduced, used to treat opportunistic eye infections in AIDS patients caused by cytomegalovirus, and other antisense antivirals are in development. An antisense structural type that has proven especially valuable in research is morpholino antisense.
Morpholino oligos have been used to experimentally suppress many viral types:
caliciviruses
flaviviruses (including West Nile virus)
dengue
HCV
coronaviruses
===== Translation/ribozymes =====
Yet another antiviral technique inspired by genomics is a set of drugs based on ribozymes, which are enzymes that will cut apart viral RNA or DNA at selected sites. In their natural course, ribozymes are used as part of the viral manufacturing sequence, but these synthetic ribozymes are designed to cut RNA and DNA at sites that will disable them.
A ribozyme antiviral to deal with hepatitis C has been suggested, and ribozyme antivirals are being developed to deal with HIV. An interesting variation of this idea is the use of genetically modified cells that can produce custom-tailored ribozymes. This is part of a broader effort to create genetically modified cells that can be injected into a host to attack pathogens by generating specialized proteins that block viral replication at various phases of the viral life cycle.
===== Protein processing and targeting =====
Interference with post translational modifications or with targeting of viral proteins in the cell is also possible.
==== Protease inhibitors ====
Some viruses include an enzyme known as a protease that cuts viral protein chains apart so they can be assembled into their final configuration. HIV includes a protease, and so considerable research has been performed to find "protease inhibitors" to attack HIV at that phase of its life cycle. Protease inhibitors became available in the 1990s and have proven effective, though they can have unusual side effects, for example causing fat to build up in unusual places. Improved protease inhibitors are now in development.
Protease inhibitors have also been seen in nature. A protease inhibitor was isolated from the shiitake mushroom (Lentinus edodes). Its presence may explain the shiitake mushroom's noted antiviral activity in vitro.
===== Long dsRNA helix targeting =====
Most viruses produce long dsRNA helices during transcription and replication. In contrast, uninfected mammalian cells generally produce dsRNA helices of fewer than 24 base pairs during transcription. DRACO (double-stranded RNA activated caspase oligomerizer) is a group of experimental antiviral drugs initially developed at the Massachusetts Institute of Technology. In cell culture, DRACO was reported to have broad-spectrum efficacy against many infectious viruses, including dengue flavivirus, Amapari and Tacaribe arenavirus, Guama bunyavirus, H1N1 influenza and rhinovirus, and was additionally found effective against influenza in vivo in weanling mice. It was reported to induce rapid apoptosis selectively in virus-infected mammalian cells, while leaving uninfected cells unharmed. DRACO effects cell death via one of the last steps in the apoptosis pathway in which complexes containing intracellular apoptosis signalling molecules simultaneously bind multiple procaspases. The procaspases transactivate via cleavage, activate additional caspases in the cascade, and cleave a variety of cellular proteins, thereby killing the cell.
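The selectivity described above rests on a simple length cue, which can be expressed directly; the 24-base-pair threshold is the one cited in this paragraph:
<syntaxhighlight lang="python">
HOST_DSRNA_MAX_BP = 24  # uninfected mammalian cells rarely exceed this

def looks_viral(dsrna_length_bp: int) -> bool:
    """Long dsRNA helices are treated as a signature of viral replication."""
    return dsrna_length_bp > HOST_DSRNA_MAX_BP

print(looks_viral(20))   # False: within the host cell's normal range
print(looks_viral(400))  # True: long helix, consistent with infection
</syntaxhighlight>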
==== Assembly ====
Rifampicin acts at the assembly phase.
==== Release phase ====
The final stage in the life cycle of a virus is the release of completed viruses from the host cell, and this step has also been targeted by antiviral drug developers. Two drugs named zanamivir (Relenza) and oseltamivir (Tamiflu) that have been recently introduced to treat influenza prevent the release of viral particles by blocking a molecule named neuraminidase that is found on the surface of flu viruses, and also seems to be constant across a wide range of flu strains.
=== Immune system stimulation ===
Rather than attacking viruses directly, a second category of tactics for fighting viruses involves encouraging the body's immune system to attack them. Some antivirals of this sort do not focus on a specific pathogen, instead stimulating the immune system to attack a range of pathogens.
Among the best-known drugs of this class are the interferons, which inhibit viral synthesis in infected cells. One form of human interferon, named "interferon alpha", is well-established as part of the standard treatment for hepatitis B and C, and other interferons are also being investigated as treatments for various diseases.
A more specific approach is to synthesize antibodies, protein molecules that can bind to a pathogen and mark it for attack by other elements of the immune system. Once researchers identify a particular target on the pathogen, they can synthesize quantities of identical "monoclonal" antibodies to bind that target. A monoclonal antibody drug is now being sold to help fight respiratory syncytial virus in babies, and antibodies purified from infected individuals are also used as a treatment for hepatitis B.
== Antiviral drug resistance ==
Antiviral resistance can be defined as a decreased susceptibility to a drug caused by changes in viral genotypes. In cases of antiviral resistance, drugs have either diminished or no effectiveness against their target virus. The issue remains a major obstacle to antiviral therapy, as resistance has developed to almost all specific and effective antimicrobials, including antiviral agents.
The Centers for Disease Control and Prevention (CDC) recommends that everyone six months and older get a yearly vaccination to protect them from influenza A viruses (H1N1) and (H3N2) and up to two influenza B viruses (depending on the vaccination). Comprehensive protection starts by ensuring vaccinations are current and complete. However, vaccines are preventative and are not generally used once a patient has been infected with a virus. Additionally, the availability of these vaccines can be limited for financial or locational reasons, which can undermine the effectiveness of herd immunity, making effective antivirals a necessity.
The three FDA-approved neuraminidase antiviral flu drugs available in the United States, recommended by the CDC, are oseltamivir (Tamiflu), zanamivir (Relenza), and peramivir (Rapivab). Influenza antiviral resistance often results from changes in the neuraminidase and hemagglutinin proteins on the viral surface. Currently, neuraminidase inhibitors (NAIs) are the most frequently prescribed antivirals because they are effective against both influenza A and B. However, antiviral resistance is known to develop if mutations in the neuraminidase proteins prevent NAI binding. This was seen with the His274Tyr mutation, which was responsible for oseltamivir resistance in H1N1 strains in 2009. The inability of NA inhibitors to bind to the virus allowed strains carrying this resistance mutation to spread by natural selection. Furthermore, a study published in 2009 in Nature Biotechnology emphasized the urgent need to augment oseltamivir stockpiles with additional antiviral drugs, including zanamivir. This finding was based on a performance evaluation of these drugs under the assumption that the 2009 H1N1 'Swine Flu' neuraminidase (NA) were to acquire the oseltamivir-resistance (His274Tyr) mutation, which at the time was widespread in seasonal H1N1 strains.
=== Origin of antiviral resistance ===
The genetic makeup of viruses is constantly changing, which can cause a virus to become resistant to currently available treatments. Viruses can become resistant through spontaneous or intermittent mechanisms throughout the course of an antiviral treatment. Immunocompromised patients hospitalized with pneumonia are, more often than immunocompetent patients, at the highest risk of developing oseltamivir resistance during treatment. Those who receive oseltamivir for "post-exposure prophylaxis" after exposure to someone with the flu are also at higher risk of resistance.
The mechanisms of antiviral resistance development depend on the type of virus in question. RNA viruses such as hepatitis C and influenza A have high error rates during genome replication because their RNA polymerases lack proofreading activity. RNA viruses also have small genomes, typically less than 30 kb, which allows them to sustain a high frequency of mutations. DNA viruses, such as HPV and herpesviruses, hijack host cell replication machinery, which gives them proofreading capabilities during replication. DNA viruses are therefore less error-prone, generally less diverse, and more slowly evolving than RNA viruses. In both cases, the likelihood of mutations is exacerbated by the speed with which viruses reproduce, which provides more opportunities for mutations to occur in successive replications. Billions of viruses are produced every day during the course of an infection, with each replication giving another chance for mutations that confer resistance to occur.
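As a rough illustration of why this matters for resistance, the expected number of new mutations per genome copy is simply the per-site error rate times the genome length. The sketch below uses order-of-magnitude numbers that are illustrative assumptions, not measured values for any particular virus:

```python
# Back-of-the-envelope estimate of mutational load per replication cycle.
# All rates, genome sizes, and the daily virion count are illustrative
# assumptions chosen only to show the scale of the effect.

def mutations_per_genome(error_rate_per_site, genome_length):
    """Expected number of new mutations introduced in one genome copy."""
    return error_rate_per_site * genome_length

# A hypothetical RNA virus: no proofreading, ~1e-4 errors/site, 10 kb genome.
rna = mutations_per_genome(1e-4, 10_000)   # ~1 mutation per new genome

# A hypothetical large DNA virus: host proofreading, ~1e-8 errors/site, 150 kb.
dna = mutations_per_genome(1e-8, 150_000)  # ~0.0015 mutations per new genome

# With ~1e9 new virions per day in a host, even rare resistance mutations
# are sampled many times over in an RNA virus population.
virions_per_day = 1e9
print(f"RNA virus: {rna:.2f} mutations/genome; "
      f"{rna * virions_per_day:.2e} new mutations/day in the population")
print(f"DNA virus: {dna:.4f} mutations/genome; "
      f"{dna * virions_per_day:.2e} new mutations/day in the population")
```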
Multiple strains of one virus can be present in the body at one time, and some of these strains may contain mutations that cause antiviral resistance. This effect, called the quasispecies model, results in immense variation in any given sample of virus, and gives the opportunity for natural selection to favor viral strains with the highest fitness every time the virus is spread to a new host. Recombination, the joining of two different viral variants, and reassortment, the swapping of viral gene segments among viruses in the same cell, also play a role in resistance, especially in influenza.
Antiviral resistance has been reported in antivirals for herpes, HIV, hepatitis B and C, and influenza, but antiviral resistance is a possibility for all viruses. Mechanisms of antiviral resistance vary between virus types.
=== Detection of antiviral resistance ===
National and international surveillance is performed by the CDC to determine the effectiveness of the current FDA-approved antiviral flu drugs. Public health officials use this information to make current recommendations about the use of flu antiviral medications. The WHO further recommends in-depth epidemiological investigations to control potential transmission of resistant virus and prevent future progression. As novel treatments and techniques for detecting antiviral resistance are enhanced, so too can strategies be established to combat the inevitable emergence of antiviral resistance.
=== Treatment options for antiviral resistant pathogens ===
If a virus is not fully wiped out during a regimen of antivirals, treatment creates a bottleneck in the viral population that selects for resistance, and there is a chance that a resistant strain may repopulate the host. Viral treatment mechanisms must therefore account for the selection of resistant viruses.
The most commonly used method for treating resistant viruses is combination therapy, which uses multiple antivirals in one treatment regimen. This is thought to decrease the likelihood that one mutation could cause antiviral resistance, as the antivirals in the cocktail target different stages of the viral life cycle. This is frequently used in retroviruses like HIV, but a number of studies have demonstrated its effectiveness against influenza A, as well. Viruses can also be screened for resistance to drugs before treatment is started. This minimizes exposure to unnecessary antivirals and ensures that an effective medication is being used. This may improve patient outcomes and could help detect new resistance mutations during routine scanning for known mutants. However, this has not been consistently implemented in treatment facilities at this time.
== Direct-acting antivirals ==
The term direct-acting antivirals (DAAs) has long been associated with the combination of antiviral drugs used to treat hepatitis C infections. These are more effective than older treatments such as ribavirin (partially indirect-acting) and interferon (indirect-acting). The DAA drugs against hepatitis C are taken orally, as tablets, for 8 to 12 weeks. The treatment depends on the type or types (genotypes) of hepatitis C virus that are causing the infection. Both during and at the end of treatment, blood tests are used to monitor the effectiveness of the treatment and subsequent cure.
The DAA combination drugs used include:
Harvoni (sofosbuvir and ledipasvir)
Epclusa (sofosbuvir and velpatasvir)
Vosevi (sofosbuvir, velpatasvir, and voxilaprevir)
Zepatier (elbasvir and grazoprevir)
Mavyret (glecaprevir and pibrentasvir)
The United States Food and Drug Administration approved DAAs on the basis of a surrogate endpoint called sustained virological response (SVR). SVR is achieved in a patient when hepatitis C virus RNA remains undetectable 12–24 weeks after treatment ends. Whether through DAAs or older interferon-based regimens, SVR is associated with improved health outcomes and significantly decreased mortality. For those who already have advanced liver disease (including hepatocellular carcinoma), however, the benefits of achieving SVR may be less pronounced, though still substantial.
Despite its historical roots in hepatitis C research, the term "direct-acting antivirals" is becoming more broadly used to also include other antiviral drugs with a direct viral target, such as aciclovir (against herpes simplex virus), letermovir (against cytomegalovirus), or AZT (against human immunodeficiency virus). In this context it serves to distinguish these drugs from those with an indirect mechanism of action, such as immune modulators like interferon alfa. This distinction is of particular relevance for the potential development of drug resistance mutations.
== Public policy ==
=== Use and distribution ===
Guidelines regarding viral diagnoses and treatments change frequently, which can limit quality of care. Even when physicians diagnose older patients with influenza, use of antiviral treatment can be low. Provider knowledge of antiviral therapies can improve patient care, especially in geriatric medicine. Furthermore, in local health departments (LHDs) with access to antivirals, guidelines may be unclear, causing delays in treatment. With time-sensitive therapies, delays could lead to lack of treatment.
Overall, national guidelines regarding infection control and management standardize care and improve healthcare worker and patient safety. Guidelines, such as those provided by the Centers for Disease Control and Prevention (CDC) during the 2009 flu pandemic caused by the H1N1 virus, recommend, among other things, antiviral treatment regimens, clinical assessment algorithms for coordination of care, and antiviral chemoprophylaxis guidelines for exposed persons. The roles of pharmacists and pharmacies have also expanded to meet the needs of the public during public health emergencies.
=== Stockpiling ===
Public Health Emergency Preparedness initiatives are managed by the CDC via the Office of Public Health Preparedness and Response. Funds aim to support communities in preparing for public health emergencies, including pandemic influenza. Also managed by the CDC, the Strategic National Stockpile (SNS) consists of bulk quantities of medicines and supplies for use during such emergencies. Antiviral stockpiles prepare for shortages of antiviral medications in cases of public health emergencies. During the H1N1 pandemic in 2009–2010, guidelines for SNS use by local health departments were unclear, revealing gaps in antiviral planning. For example, local health departments that received antivirals from the SNS did not have transparent guidance on the use of the treatments. This gap made it difficult to create plans and policies for their use and future availability, causing delays in treatment.
== See also ==
Antiretroviral drug (especially HAART for HIV)
CRISPR-Cas13
Discovery and development of CCR5 receptor antagonists (for HIV)
Monoclonal antibody
List of antiviral drugs
Antiprion drugs and Astemizole
Discovery and development of NS5A inhibitors
COVID-19 drug repurposing research
== References == | Wikipedia/Antiviral_therapy |
In the fields of medicine, biotechnology, and pharmacology, drug discovery is the process by which new candidate medications are discovered.
Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery, as with penicillin. More recently, chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that had a desirable therapeutic effect in a process known as classical pharmacology. After sequencing of the human genome allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high throughput screening of large compound libraries against isolated biological targets which are hypothesized to be disease-modifying in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy.
Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase the affinity, selectivity (to reduce the potential of side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, the process of drug development can continue. If successful, clinical trials are developed.
Modern drug discovery is thus usually a capital-intensive process that involves large investments by pharmaceutical industry corporations as well as national governments (who provide grants and loan guarantees). Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with a low rate of new therapeutic discovery. In 2010, the research and development cost of each new molecular entity was about US$1.8 billion. In the 21st century, basic discovery research is funded primarily by governments and by philanthropic organizations, while late-stage development is funded primarily by pharmaceutical companies or venture capitalists. To be allowed to come to market, drugs must undergo several successful phases of clinical trials, and pass through a new drug approval process, called the New Drug Application in the United States.
Discovering drugs that may be a commercial success, or a public health success, involves a complex interaction between investors, industry, academia, patent laws, regulatory exclusivity, marketing, and the need to balance secrecy with communication. Meanwhile, for disorders whose rarity means that no large commercial success or public health effect can be expected, the orphan drug funding process ensures that people who experience those disorders can have some hope of pharmacotherapeutic advances.
== History ==
The idea that the effect of a drug in the human body is mediated by specific interactions of the drug molecule with biological macromolecules (proteins or nucleic acids in most cases) led scientists to the conclusion that individual chemicals are required for the biological activity of the drug. This marked the beginning of the modern era in pharmacology, as pure chemicals, instead of crude extracts of medicinal plants, became the standard drugs. Examples of drug compounds isolated from crude preparations are morphine, the active agent in opium, and digoxin, a heart stimulant originating from Digitalis lanata. Organic chemistry also led to the synthesis of many of the natural products isolated from biological sources.
Historically, substances, whether crude extracts or purified chemicals, were screened for biological activity without knowledge of the biological target. Only after an active substance was identified was an effort made to identify the target. This approach is known as classical pharmacology, forward pharmacology, or phenotypic drug discovery.
Later, small molecules were synthesized to specifically target a known physiological/pathological pathway, avoiding the mass screening of banks of stored compounds. This led to great success, such as the work of Gertrude Elion and George H. Hitchings on purine metabolism, the work of James Black on beta blockers and cimetidine, and the discovery of statins by Akira Endo. Another champion of the approach of developing chemical analogues of known active substances was Sir David Jack at Allen and Hanbury's, later Glaxo, who pioneered the first inhaled selective beta2-adrenergic agonist for asthma, the first inhaled steroid for asthma, ranitidine as a successor to cimetidine, and supported the development of the triptans.
Gertrude Elion, working mostly with a group of fewer than 50 people on purine analogues, contributed to the discovery of the first anti-viral; the first immunosuppressant (azathioprine) that allowed human organ transplantation; the first drug to induce remission of childhood leukemia; pivotal anti-cancer treatments; an anti-malarial; an anti-bacterial; and a treatment for gout.
Cloning of human proteins made possible the screening of large libraries of compounds against specific targets thought to be linked to specific diseases. This approach is known as reverse pharmacology and is the most frequently used approach today.
In the 2020s, quantum computing started to be applied to reduce the time needed for drug discovery.
== Targets ==
A "target" is produced within the pharmaceutical industry. Generally, the "target" is the naturally existing cellular or molecular structure involved in the pathology of interest where the drug-in-development is meant to act. However, the distinction between a "new" and "established" target can be made without a full understanding of just what a "target" is. This distinction is typically made by pharmaceutical companies engaged in the discovery and development of therapeutics. In an estimate from 2011, 435 human genome products were identified as therapeutic drug targets of FDA-approved drugs.
"Established targets" are those for which there is a good scientific understanding, supported by a lengthy publication history, of both how the target functions in normal physiology and how it is involved in human pathology. This does not imply that the mechanism of action of drugs that are thought to act through a particular established target is fully understood. Rather, "established" relates directly to the amount of background information available on a target, in particular functional information. In general, "new targets" are all those targets that are not "established targets" but which have been or are the subject of drug discovery efforts. The majority of targets selected for drug discovery efforts are proteins, such as G-protein-coupled receptors (GPCRs) and protein kinases.
== Screening and design ==
The process of finding a new drug against a chosen target for a particular disease usually involves high-throughput screening (HTS), wherein large libraries of chemicals are tested for their ability to modify the target. For example, if the target is a novel GPCR, compounds will be screened for their ability to inhibit or stimulate that receptor (see antagonist and agonist); if the target is a protein kinase, the chemicals will be tested for their ability to inhibit that kinase.
Another function of HTS is to show how selective the compounds are for the chosen target, as one wants to find a molecule which will interfere with only the chosen target, but not other, related targets. To this end, other screening runs will be made to see whether the "hits" against the chosen target will interfere with other related targets – this is the process of cross-screening. Cross-screening is useful because the more unrelated targets a compound hits, the more likely that off-target toxicity will occur with that compound once it reaches the clinic.
It is unlikely that a perfect drug candidate will emerge from these early screening runs. One of the first steps is to screen out compounds that are unlikely to be developed into drugs; for example, compounds that are hits in almost every assay, classified by medicinal chemists as "pan-assay interference compounds", are removed at this stage if they were not already removed from the chemical library. Often, several compounds are found to have some degree of activity, and if these compounds share common chemical features, one or more pharmacophores can then be developed. At this point, medicinal chemists will attempt to use structure–activity relationships (SAR) to improve certain features of the lead compound:
increase activity against the chosen target
reduce activity against unrelated targets
improve the druglikeness or ADME properties of the molecule.
This process will require several iterative screening runs, during which, it is hoped, the properties of the new molecular entities will improve, and allow the favoured compounds to go forward to in vitro and in vivo testing for activity in the disease model of choice.
The physicochemical properties associated with drug absorption include ionization (pKa) and solubility; permeability can be determined by PAMPA and Caco-2 assays. PAMPA is attractive as an early screen due to its low consumption of drug and low cost compared to tests such as Caco-2, gastrointestinal tract (GIT) and blood–brain barrier (BBB) models, with which it correlates highly.
A range of parameters can be used to assess the quality of a compound, or a series of compounds, as proposed in Lipinski's Rule of Five. Such parameters include calculated properties, such as cLogP to estimate lipophilicity, molecular weight, and polar surface area, and measured properties, such as potency and in-vitro enzymatic clearance. Some descriptors, such as ligand efficiency (LE) and lipophilic efficiency (LiPE), combine such parameters to assess druglikeness.
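These standard definitions can be computed directly: a Rule of Five check counts violations of four cut-offs (molecular weight ≤ 500, cLogP ≤ 5, ≤ 5 hydrogen-bond donors, ≤ 10 acceptors), LE is commonly approximated as 1.37 × pIC50 per heavy atom (in kcal/mol, since 1.37 converts pIC50 to an approximate binding free energy at 298 K), and LiPE is pIC50 minus cLogP. A minimal sketch; the example compound values below are hypothetical:

```python
# Minimal compound-quality triage: Lipinski's Rule of Five plus ligand
# efficiency (LE) and lipophilic efficiency (LiPE). Example values are
# hypothetical, not taken from any real screening campaign.

def rule_of_five_violations(mw, clogp, h_donors, h_acceptors):
    """Count violations of Lipinski's four criteria."""
    return sum([mw > 500, clogp > 5, h_donors > 5, h_acceptors > 10])

def ligand_efficiency(pic50, heavy_atoms):
    """LE in kcal/mol per heavy atom (1.37 * pIC50 approximates -dG at 298 K)."""
    return 1.37 * pic50 / heavy_atoms

def lipe(pic50, clogp):
    """Lipophilic efficiency: potency not explained by lipophilicity alone."""
    return pic50 - clogp

# Hypothetical screening hit: MW 342, cLogP 2.8, 2 HBD, 5 HBA,
# pIC50 = 7.2 (about 63 nM), 24 heavy atoms.
print(rule_of_five_violations(342, 2.8, 2, 5))   # 0 violations
print(round(ligand_efficiency(7.2, 24), 2))      # ~0.41 kcal/mol per atom
print(round(lipe(7.2, 2.8), 1))                  # 4.4
```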
While HTS is a commonly used method for novel drug discovery, it is not the only method. It is often possible to start from a molecule which already has some of the desired properties. Such a molecule might be extracted from a natural product or even be a drug on the market which could be improved upon (so-called "me too" drugs). Other methods, such as virtual high throughput screening, where screening is done using computer-generated models and attempting to "dock" virtual libraries to a target, are also often used.
Another method for drug discovery is de novo drug design, in which a prediction is made of the sorts of chemicals that might (e.g.) fit into an active site of the target enzyme. For example, virtual screening and computer-aided drug design are often used to identify new chemical moieties that may interact with a target protein. Molecular modelling and molecular dynamics simulations can be used as a guide to improve the potency and properties of new drug leads.
There is also a paradigm shift in the drug discovery community away from HTS, which is expensive and may cover only limited chemical space, towards the screening of smaller libraries (at most a few thousand compounds). These include fragment-based lead discovery (FBDD) and protein-directed dynamic combinatorial chemistry. The ligands in these approaches are usually much smaller, and they bind to the target protein with weaker binding affinity than hits identified from HTS. Further modifications through organic synthesis into lead compounds are often required. Such modifications are often guided by protein X-ray crystallography of the protein-fragment complex. The advantages of these approaches are that they allow more efficient screening and that the compound library, although small, typically covers a larger chemical space than an HTS library.
Phenotypic screens have also provided new chemical starting points in drug discovery. A variety of models have been used including yeast, zebrafish, worms, immortalized cell lines, primary cell lines, patient-derived cell lines and whole animal models. These screens are designed to find compounds which reverse a disease phenotype such as death, protein aggregation, mutant protein expression, or cell proliferation as examples in a more holistic cell model or organism. Smaller screening sets are often used for these screens, especially when the models are expensive or time-consuming to run. In many cases, the exact mechanism of action of hits from these screens is unknown and may require extensive target deconvolution experiments to ascertain. The growth of the field of chemoproteomics has provided numerous strategies to identify drug targets in these cases.
Once a lead compound series has been established with sufficient target potency and selectivity and favourable drug-like properties, one or two compounds will then be proposed for drug development. The best of these is generally called the lead compound, while the other will be designated as the "backup". These decisions are generally supported by computational modelling innovations.
== Nature as source ==
Traditionally, many drugs and other chemicals with biological activity have been discovered by studying chemicals that organisms create to affect the activity of other organisms for survival.
Despite the rise of combinatorial chemistry as an integral part of the lead discovery process, natural products still play a major role as starting material for drug discovery. A 2007 report found that of the 974 small-molecule new chemical entities developed between 1981 and 2006, 63% were naturally derived or semisynthetic derivatives of natural products. For certain therapy areas, such as antimicrobials, antineoplastics, antihypertensives and anti-inflammatory drugs, the numbers were higher.
Natural products may be useful as a source of novel chemical structures for modern techniques of development of antibacterial therapies.
=== Plant-derived ===
Many secondary metabolites produced by plants have potential therapeutic medicinal properties. These secondary metabolites can bind to and modify the function of proteins (receptors, enzymes, etc.). Consequently, plant-derived natural products have often been used as the starting point for drug discovery.
==== History ====
Until the Renaissance, the vast majority of drugs in Western medicine were plant-derived extracts. This has resulted in a pool of information about the potential of plant species as important sources of starting materials for drug discovery. Botanical knowledge about the different metabolites and hormones that are produced in different anatomical parts of the plant (e.g. roots, leaves, and flowers) is crucial for correctly identifying bioactive and pharmacological plant properties. Identifying new drugs and getting them approved for market has proved to be a stringent process due to regulations set by national drug regulatory agencies.
==== Jasmonates ====
Jasmonates are important in responses to injury and in intracellular signalling. They induce apoptosis and a protein cascade via proteinase inhibitors, have defense functions, and regulate plant responses to different biotic and abiotic stresses. Jasmonates also have the ability to act directly on mitochondrial membranes by inducing membrane depolarization via the release of metabolites.
Jasmonate derivatives (JADs) are also important in wound response and tissue regeneration in plant cells. They have also been identified as having anti-aging effects on the human epidermal layer. It is suspected that they interact with proteoglycan (PG) and glycosaminoglycan (GAG) polysaccharides, which are essential extracellular matrix (ECM) components, to help remodel the ECM. The discovery of JADs' effects on skin repair has introduced newfound interest in the effects of these plant hormones in therapeutic medicinal applications.
==== Salicylates ====
Salicylic acid (SA), a phytohormone, was initially derived from willow bark and has since been identified in many species. It is an important player in plant immunity, although its role is still not fully understood by scientists. Salicylates are involved in disease and immunity responses in plant and animal tissues, and salicylic acid binding proteins (SABPs) have been shown to affect multiple animal tissues. The first discovered medicinal use of the isolated compound was in pain and fever management. Salicylates also play an active role in the suppression of cell proliferation, and have the ability to induce death in lymphoblastic leukemia and other human cancer cells. One of the most common drugs derived from salicylates is aspirin, also known as acetylsalicylic acid, which has anti-inflammatory and anti-pyretic properties.
=== Animal-derived ===
Some drugs used in modern medicine have been discovered in animals or are based on compounds found in animals. For example, the anticoagulant drug hirudin and its synthetic congener bivalirudin are based on the saliva chemistry of the leech, Hirudo medicinalis. Exenatide, used to treat type 2 diabetes, was developed from saliva compounds of the Gila monster, a venomous lizard.
=== Microbial metabolites ===
Microbes compete for living space and nutrients. To survive in these conditions, many microbes have developed abilities to prevent competing species from proliferating. Microbes are the main source of antimicrobial drugs. Streptomyces isolates have been such a valuable source of antibiotics that they have been called medicinal molds. The classic example of an antibiotic discovered as a defense mechanism against another microbe is penicillin, found in 1928 in bacterial cultures contaminated by Penicillium fungi.
=== Marine invertebrates ===
Marine environments are potential sources of new bioactive agents. Arabinose nucleosides, discovered in marine invertebrates in the 1950s, demonstrated for the first time that sugar moieties other than ribose and deoxyribose can yield bioactive nucleoside structures. However, it took until 2004 for the first marine-derived drug to be approved: the cone snail toxin ziconotide, also known as Prialt, which treats severe neuropathic pain. Several other marine-derived agents are now in clinical trials for indications such as cancer, anti-inflammatory use and pain. One class of these agents are bryostatin-like compounds, under investigation as anti-cancer therapy.
=== Chemical diversity ===
As mentioned above, combinatorial chemistry was a key technology enabling the efficient generation of large screening libraries for the needs of high-throughput screening. However, after two decades of combinatorial chemistry, it has been pointed out that despite the increased efficiency in chemical synthesis, no corresponding increase in lead or drug candidates has been achieved. This has led to analysis of the chemical characteristics of combinatorial chemistry products, compared to existing drugs and natural products. The chemoinformatics concept of chemical diversity, depicted as the distribution of compounds in chemical space based on their physicochemical characteristics, is often used to describe the difference between combinatorial chemistry libraries and natural products. Synthetic, combinatorial library compounds seem to cover only a limited and quite uniform chemical space, whereas existing drugs, and particularly natural products, exhibit much greater chemical diversity, distributing more evenly across chemical space. The most prominent differences between natural products and compounds in combinatorial chemistry libraries are the number of chiral centers (much higher in natural compounds), structure rigidity (higher in natural compounds) and number of aromatic moieties (higher in combinatorial chemistry libraries). Other chemical differences between these two groups include the nature of heteroatoms (O and N enriched in natural products; S and halogen atoms more often present in synthetic compounds), as well as the level of non-aromatic unsaturation (higher in natural products). As both structure rigidity and chirality are well-established factors in medicinal chemistry known to enhance compound specificity and efficacy as a drug, it has been suggested that natural products compare favourably to today's combinatorial chemistry libraries as potential lead molecules.
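A minimal sketch of such a descriptor comparison, assuming the open-source RDKit toolkit is installed; the two compounds are arbitrary illustrations (a chiral, sp3-rich natural product versus a flat aromatic scaffold), not a validated natural-product-likeness score:

```python
# Sketch: compare simple diversity descriptors for two illustrative molecules
# with RDKit (assumes the rdkit package is available).
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

examples = {
    "menthol (natural product)": "CC1CCC(C(C)C)C(O)C1",
    "biphenyl (flat synthetic scaffold)": "c1ccc(-c2ccccc2)cc1",
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    # Possible chiral centers, including unassigned ones.
    chiral = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    aromatic_rings = rdMolDescriptors.CalcNumAromaticRings(mol)
    fsp3 = rdMolDescriptors.CalcFractionCSP3(mol)  # sp3 fraction, a 3D-ness proxy
    print(f"{name}: chiral centers={len(chiral)}, "
          f"aromatic rings={aromatic_rings}, Fsp3={fsp3:.2f}")
```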
=== Screening ===
Two main approaches exist for finding new bioactive chemical entities from natural sources.
The first is sometimes referred to as random collection and screening of material, but the collection is far from random. Biological (often botanical) knowledge is often used to identify families that show promise. This approach is effective because only a small part of the earth's biodiversity has ever been tested for pharmaceutical activity. Also, organisms living in a species-rich environment need to evolve defensive and competitive mechanisms to survive. Those mechanisms might be exploited in the development of beneficial drugs.
A collection of plant, animal and microbial samples from rich ecosystems can potentially give rise to novel biological activities worth exploiting in the drug development process. One example of the successful use of this strategy is the screening for antitumor agents by the National Cancer Institute, which started in the 1960s. Paclitaxel was identified from the Pacific yew tree Taxus brevifolia. Paclitaxel showed anti-tumour activity by a previously undescribed mechanism (stabilization of microtubules) and is now approved for clinical use for the treatment of lung, breast, and ovarian cancer, as well as for Kaposi's sarcoma. Early in the 21st century, cabazitaxel (made by Sanofi, a French firm), another relative of taxol, was shown to be effective against prostate cancer, also because it works by preventing the formation of microtubules, which pull the chromosomes apart in dividing cells (such as cancer cells). Other examples are: 1. Camptotheca (Camptothecin · Topotecan · Irinotecan · Rubitecan · Belotecan); 2. Podophyllum (Etoposide · Teniposide); 3a. Anthracyclines (Aclarubicin · Daunorubicin · Doxorubicin · Epirubicin · Idarubicin · Amrubicin · Pirarubicin · Valrubicin · Zorubicin); 3b. Anthracenediones (Mitoxantrone · Pixantrone).
The second main approach involves ethnobotany, the study of the general use of plants in society, and ethnopharmacology, an area inside ethnobotany, which is focused specifically on medicinal uses.
Artemisinin, an antimalarial agent from the sweet wormwood Artemisia annua, used in Chinese medicine since 200 BC, is one drug used as part of combination therapy for multiresistant Plasmodium falciparum.
Additionally, as machine learning has become more advanced, virtual screening is now an option for drug developers. AI algorithms are being used to perform virtual screening of chemical compounds, which involves predicting the activity of a compound against a specific target. By using machine learning algorithms to analyse large amounts of chemical data, researchers can identify potential new drug candidates that are more likely to be effective against a specific disease. Algorithms such as nearest-neighbour classifiers, random forests (RF), extreme learning machines, support vector machines (SVMs), and deep neural networks (DNNs) are used for virtual screening based on synthesis feasibility, and can also predict in vivo activity and toxicity.
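A minimal sketch of fingerprint-based virtual screening with one of these algorithms (a random forest, via scikit-learn); the fingerprints and activity labels below are randomly generated stand-ins, where a real workflow would use computed molecular fingerprints and measured assay outcomes:

```python
# Sketch: rank a virtual compound library by predicted activity using a
# random forest trained on binary molecular fingerprints (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_train, n_library, n_bits = 500, 2000, 1024

X_train = rng.integers(0, 2, size=(n_train, n_bits))  # stand-in fingerprints
y_train = rng.integers(0, 2, size=n_train)            # 1 = active vs. target

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score an untested virtual library and keep the top-ranked candidates.
X_library = rng.integers(0, 2, size=(n_library, n_bits))
scores = model.predict_proba(X_library)[:, 1]
top_hits = np.argsort(scores)[::-1][:20]  # indices of the 20 best-scoring compounds
print(top_hits)
```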
=== Structural elucidation ===
The elucidation of the chemical structure is critical to avoid the re-discovery of a chemical agent that is already known for its structure and chemical activity. Mass spectrometry is a method in which individual compounds are identified based on their mass-to-charge ratio after ionization. Chemical compounds occur in nature as mixtures, so the combination of liquid chromatography and mass spectrometry (LC-MS) is often used to separate the individual chemicals. Databases of mass spectra for known compounds are available and can be used to assign a structure to an unknown mass spectrum. Nuclear magnetic resonance spectroscopy is the primary technique for determining the chemical structures of natural products. NMR yields information about individual hydrogen and carbon atoms in the structure, allowing detailed reconstruction of the molecule's architecture.
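A small sketch of the database lookup this enables: an observed accurate mass is matched against reference masses within a parts-per-million (ppm) tolerance. The two reference values are standard monoisotopic [M+H]+ masses, used here purely for illustration:

```python
# Sketch of dereplication by accurate mass after LC-MS: match an observed
# m/z against a reference database within a ppm tolerance.

def ppm_error(observed, reference):
    return (observed - reference) / reference * 1e6

def match_mass(observed_mz, database, tol_ppm=5.0):
    """Return all database entries within tol_ppm of the observed m/z."""
    return [name for name, ref in database.items()
            if abs(ppm_error(observed_mz, ref)) <= tol_ppm]

database = {
    "caffeine [M+H]+": 195.0877,   # monoisotopic mass of C8H10N4O2 + H
    "morphine [M+H]+": 286.1438,   # monoisotopic mass of C17H19NO3 + H
}

print(match_mass(195.0880, database))  # ['caffeine [M+H]+'] (~1.5 ppm error)
```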
== New Drug Application ==
When a drug is developed with evidence throughout its history of research to show it is safe and effective for the intended use in the United States, the company can file an application – the New Drug Application (NDA) – to have the drug commercialized and available for clinical application. NDA status enables the FDA to examine all submitted data on the drug to reach a decision on whether to approve or not approve the drug candidate based on its safety, specificity of effect, and efficacy of doses.
== See also ==
== References ==
== Further reading ==
== External links == | Wikipedia/Drug_discovery |
Industrial enzymes are enzymes that are commercially used in a variety of industries such as pharmaceuticals, chemical production, biofuels, food and beverage, and consumer products. Due to advancements in recent years, biocatalysis through isolated enzymes is considered more economical than use of whole cells. Enzymes may be used as a unit operation within a process to generate a desired product, or may be the product of interest. Industrial biological catalysis through enzymes has experienced rapid growth in recent years due to their ability to operate at mild conditions, and exceptional chiral and positional specificity, properties that traditional chemical processes lack. Isolated enzymes are typically used in hydrolytic and isomerization reactions. Whole cells are typically used when a reaction requires a co-factor. Although co-factors may be generated in vitro, it is typically more cost-effective to use metabolically active cells.
== Enzymes as a unit of operation ==
=== Immobilization ===
Despite their excellent catalytic capabilities, the properties of enzymes must in many cases be improved prior to industrial implementation. Aspects that often must be improved include stability, activity, inhibition by reaction products, and selectivity towards non-natural substrates. This may be accomplished through immobilization of enzymes on a solid material, such as a porous support. Immobilization of enzymes greatly simplifies the recovery process, enhances process control, and reduces operational costs. Many immobilization techniques exist, such as adsorption, covalent binding, affinity, and entrapment. Ideal immobilization processes should not use highly toxic reagents in the immobilization technique, to ensure stability of the enzymes. After immobilization is complete, the enzymes are introduced into a reaction vessel for biocatalysis.
==== Adsorption ====
Enzyme adsorption onto carriers functions based on chemical and physical phenomena such as van der Waals forces, ionic interactions, and hydrogen bonding. These forces are weak, and as a result, do not affect the structure of the enzyme. A wide variety of enzyme carriers may be used. Selection of a carrier is dependent upon the surface area, particle size, pore structure, and type of functional group.
==== Covalent binding ====
Many binding chemistries may be used to adhere an enzyme to a surface, with varying degrees of success. The most successful covalent binding techniques include binding via glutaraldehyde to amino groups and via N-hydroxysuccinimide esters. These immobilization techniques occur at ambient temperatures under mild conditions, which limits the potential to modify the structure and function of the enzyme.
==== Affinity ====
Immobilization using affinity relies on the specificity of an enzyme to couple an affinity ligand to an enzyme to form a covalently bound enzyme-ligand complex. The complex is introduced into a support matrix for which the ligand has high binding affinity, and the enzyme is immobilized through ligand-support interactions.
==== Entrapment ====
Immobilization using entrapment relies on trapping enzymes within gels or fibers, using non-covalent interactions. Characteristics that define a successful entrapping material include high surface area, uniform pore distribution, tunable pore size, and high adsorption capacity.
=== Recovery ===
Enzymes typically constitute a significant operational cost for industrial processes, and in many cases, must be recovered and reused to ensure economic feasibility of a process. Although some biocatalytic processes operate using organic solvents, the majority of processes occur in aqueous environments, improving the ease of separation. Most biocatalytic processes occur in batch, differentiating them from conventional chemical processes. As a result, typical bioprocesses employ a separation technique after bioconversion. In this case, product accumulation may cause inhibition of enzyme activity. Ongoing research is performed to develop in situ separation techniques, where product is removed from the batch during the conversion process. Enzyme separation may be accomplished through solid-liquid extraction techniques such as centrifugation or filtration, and the product-containing solution is fed downstream for product separation.
== Enzymes as a desired product ==
To industrialize an enzyme, the following upstream and downstream enzyme production processes are considered:
=== Upstream ===
Upstream processes are those that contribute to the generation of the enzyme.
==== Selection of a suitable enzyme ====
An enzyme must be selected based upon the desired reaction. The selected enzyme defines the required operational properties, such as pH, temperature, activity, and substrate affinity.
==== Identification and selection of a suitable source for the selected enzyme ====
The choice of a source of enzymes is an important step in the production of enzymes. It is common to examine the role of enzymes in nature and how they relate to the desired industrial process. Enzymes are most commonly sourced from bacteria, fungi, and yeasts. Once the source of the enzyme is selected, genetic modifications may be performed to increase the expression of the gene responsible for producing the enzyme.
==== Process development ====
Process development is typically performed after genetic modification of the source organism, and involves the modification of the culture medium and growth conditions. In many cases, process development aims to reduce mRNA hydrolysis and proteolysis.
==== Large scale production ====
Scaling up enzyme production requires optimization of the fermentation process. Most enzymes are produced under aerobic conditions and, as a result, require constant oxygen input, which impacts fermenter design. Due to variations in the distribution of dissolved oxygen, as well as temperature, pH, and nutrients, the transport phenomena associated with these parameters must be considered. The highest possible productivity of the fermenter is achieved at its maximum transport capacity.
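The governing relationship for oxygen supply is the standard two-film oxygen transfer rate expression, OTR = kLa × (C* − CL). A minimal worked example; all numerical values are illustrative assumptions, not data from any real fermenter:

```python
# Sketch: estimate the oxygen transfer rate (OTR) of an aerobic fermenter
# from the standard two-film expression OTR = kLa * (C_sat - C_L).
# All numbers below are illustrative assumptions.

k_l_a = 150.0      # volumetric mass-transfer coefficient, 1/h (assumed)
c_sat = 7.0e-3     # O2 saturation concentration, g/L (air sparging, assumed)
c_liquid = 2.0e-3  # actual dissolved O2 concentration, g/L (assumed)

otr = k_l_a * (c_sat - c_liquid)   # g O2 per litre per hour
print(f"OTR = {otr:.2f} g O2/(L*h)")

# The culture's oxygen uptake rate must stay below this OTR; otherwise
# dissolved oxygen is depleted and productivity drops.
```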
=== Downstream ===
Downstream processes are those that contribute to separation or purification of enzymes.
==== Removal of insoluble materials and recovery of enzymes from the source ====
The procedures for enzyme recovery depend on the source organism, and whether enzymes are intracellular or extracellular. Typically, intracellular enzymes require cell lysis and separation of complex biochemical mixtures. Extracellular enzymes are released into the culture medium, and are much simpler to separate. Enzymes must maintain their native conformation to ensure their catalytic capability. Since enzymes are very sensitive to pH, temperature, and ionic strength of the medium, mild isolation conditions must be used.
==== Concentration and primary purification of enzymes ====
Depending on the intended use of the enzyme, different levels of purity are required. For example, enzymes used for diagnostic purposes must be separated to a higher purity than bulk industrial enzymes, to prevent catalytic activity that provides erroneous results. Enzymes used for therapeutic purposes typically require the most rigorous separation. Most commonly, a combination of chromatography steps is employed for separation.
The purified enzymes are either sold in pure form to other industries or added to consumer goods.
== See also ==
Industrial ecology
Industrial fermentation
Industrial microbiology
== References == | Wikipedia/Industrial_enzymes |
Fungal DNA barcoding is the process of identifying species of the biological kingdom Fungi through the amplification and sequencing of specific DNA sequences and their comparison with sequences deposited in a DNA barcode database such as the ISHAM reference database, or the Barcode of Life Data System (BOLD). In this attempt, DNA barcoding relies on universal genes that are ideally present in all fungi with the same degree of sequence variation. The interspecific variation, i.e., the variation between species, in the chosen DNA barcode gene should exceed the intraspecific (within-species) variation.
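This requirement is often called the "barcode gap", and checking it amounts to a simple distance comparison: the largest within-species distance should fall below the smallest between-species distance. A minimal sketch using uncorrected p-distances on toy, pre-aligned sequences; real analyses use curated alignments and far larger samples:

```python
# Sketch of a "barcode gap" check: compare the maximum intraspecific distance
# against the minimum interspecific distance. Sequences are short toy examples.

def p_distance(a, b):
    """Proportion of differing sites between two aligned, equal-length sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

species_a = ["ACGTACGTACGT", "ACGTACGTACGA"]   # two strains of species A
species_b = ["ACGTTCGAACGT", "ACGTTCGAACGA"]   # two strains of species B

intra = [p_distance(species_a[0], species_a[1]),
         p_distance(species_b[0], species_b[1])]
inter = [p_distance(a, b) for a in species_a for b in species_b]

print(f"max intraspecific: {max(intra):.3f}")   # 0.083
print(f"min interspecific: {min(inter):.3f}")   # 0.167
print("barcode gap present:", max(intra) < min(inter))
```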
A fundamental problem in fungal systematics is the existence of teleomorphic and anamorphic stages in their life cycles. These morphs usually differ drastically in their phenotypic appearance, preventing a straightforward association of the asexual anamorph with the sexual teleomorph. Moreover, fungal species can comprise multiple strains that can vary in their morphology or in traits such as carbon- and nitrogen utilisation, which has often led to their description as different species, eventually producing long lists of synonyms. Fungal DNA barcoding can help to identify and associate anamorphic and teleomorphic stages of fungi, and through that to reduce the confusing multitude of fungus names. For this reason, mycologists were among the first to spearhead the investigation of species discrimination by means of DNA sequences, at least 10 years earlier than the DNA barcoding proposal for animals by Paul D. N. Hebert and colleagues in 2003, who popularised the term "DNA barcoding".
The success of identifying fungi by means of DNA barcode sequences stands and falls with the quantitative (completeness) and qualitative (level of identification) aspects of the reference database. Without a database covering a broad taxonomic range of fungi, many identification queries will not result in a satisfyingly close match. Likewise, without a substantial curatorial effort to maintain the records at a high taxonomic level of identification, queries – even when they might have a close or exact match in the reference database – will not be informative if the closest match is only identified to phylum or class level.
Another crucial prerequisite for DNA barcoding is the ability to unambiguously trace the provenance of DNA barcode data back to the originally sampled specimen, the so-called voucher specimen. This is common practice in biology along with the description of new taxa, where the voucher specimens, on which the taxonomic description is based, become the type specimens. When the identity of a certain taxon (or a genetic sequence in the case of DNA barcoding) is in doubt, the original specimen can be re-examined to review and ideally solve the issue. Voucher specimens should be clearly labelled as such, including a permanent voucher identifier that unambiguously connects the specimen with the DNA barcode data derived from it. Furthermore, these voucher specimens should be deposited in publicly accessible repositories like scientific collections or herbaria to preserve them for future reference and to facilitate research involving the deposited specimens.
== Barcode DNA markers ==
=== Internal Transcribed Spacer (ITS) – the primary fungal barcode ===
In fungi, the Internal transcribed spacer (ITS) is a roughly 600-base-pair region in the ribosomal tandem repeat gene cluster of the nuclear genome. The region is flanked by the DNA sequences for the ribosomal small subunit (SSU), or 18S subunit, at the 5' end, and by the large subunit (LSU), or 28S subunit, at the 3' end. The Internal Transcribed Spacer itself consists of two parts, ITS1 and ITS2, which are separated from each other by the 5.8S subunit nested between them. Like the flanking 18S and 28S subunits, the 5.8S subunit contains a highly conserved DNA sequence, as these genes code for structural parts of the ribosome, which is a key component of intracellular protein synthesis.
Due to several advantages of ITS (see below) and a comprehensive amount of sequence data accumulated in the 1990s and early 2000s, Begerow et al. (2010) and Schoch et al. (2012) proposed the ITS region as primary DNA barcode region for the genetic identification of fungi.
UNITE is an open ITS barcoding database for fungi and all other eukaryotes.
==== Primers ====
The conserved flanking regions of 18S and 28S serve as anchor points for the primers used for PCR amplification of the ITS region. Moreover, the conserved nested 5.8S region allows for the construction of "internal" primers, i.e., primers attaching to complementary sequences within the ITS region. White et al. (1990) proposed such internal primers, named ITS2 and ITS3, along with the flanking primers ITS1 and ITS4 in the 18S and the 28S subunit, respectively. Due to their almost universal applicability to ITS sequencing in fungi, these primers are still in wide use today. Optimised primers specifically for ITS sequencing in Dikarya (comprising Basidiomycota and Ascomycota) have been proposed by Toju et al. (2012).
For the majority of fungi, the ITS primers proposed by White et al. (1990) have become the standard primers used for PCR amplification. These primers are: ITS1 (forward, 5'-TCCGTAGGTGAACCTGCGG-3'), ITS2 (reverse, 5'-GCTGCGTTCTTCATCGATGC-3'), ITS3 (forward, 5'-GCATCGATGAAGAACGCAGC-3'), and ITS4 (reverse, 5'-TCCTCCGCTTATTGATATGC-3').
==== Advantages and shortcomings ====
A major advantage of using the ITS region as a molecular marker and fungal DNA barcode is that the entire ribosomal gene cluster is arranged in tandem repeats, i.e., in multiple copies. This allows for its PCR amplification and Sanger sequencing even from small material samples (provided the DNA is not fragmented due to age or other degenerative influences). Hence, a high PCR success rate is usually observed when amplifying ITS. However, this success rate varies greatly among fungal groups, from 65% in non-Dikarya (including the now paraphyletic Mucoromycotina, the Chytridiomycota and the Blastocladiomycota) to 100% in Saccharomycotina and Basidiomycota (with the exception of very low success in Pucciniomycotina). Furthermore, the choice of primers for ITS amplification can introduce biases towards certain taxonomic fungus groups. For example, the "universal" ITS primers fail to amplify about 10% of tested fungal specimens.
The tandem repeats of the ribosomal gene cluster cause the problem of significant intragenomic sequence heterogeneity observed among ITS copies of several fungal groups. In Sanger sequencing, this will cause ITS sequence reads of different lengths to superimpose on each other, potentially rendering the resulting chromatogram unreadable. Furthermore, because of the non-coding nature of the ITS region, which can lead to a substantial number of indels, it is impossible to consistently align ITS sequences from highly divergent species for larger-scale phylogenetic analyses. The degree of intragenomic sequence heterogeneity can be investigated in more detail through molecular cloning of the initially PCR-amplified ITS sequences, followed by sequencing of the clones. This procedure of initial PCR amplification, followed by cloning of the amplicons and finally sequencing of the cloned PCR products, is the most common approach for obtaining ITS sequences in DNA metabarcoding of environmental samples, in which a multitude of different fungal species can be present simultaneously. However, this approach of sequencing after cloning was rarely used for the ITS sequences that make up the reference libraries used for DNA barcode-aided identification, thus potentially giving an underestimate of the existing ITS sequence variation in many samples.
The weighted arithmetic mean of the intraspecific (within-species) ITS variability among fungi is 2.51%. This variability, however, ranges from 0% in, for example, Serpula lacrymans (n=93 samples), through 0.19% in Tuber melanosporum (n=179), up to 15.72% in Rhizoctonia solani (n=608), or even 24.75% in Pisolithus tinctorius (n=113). In cases of high intraspecific ITS variability, applying a threshold of 3% sequence variability – a canonical upper value for intraspecific variation – will therefore lead to a higher estimate of operational taxonomic units (OTUs), i.e., putative species, than are actually present in a sample. In the case of medically relevant fungal species, a stricter threshold of 2.5% ITS variability allows only around 75% of all species to be accurately identified to the species level.
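To make the effect of such a threshold concrete, the sketch below clusters reads into OTUs with a greedy centroid approach at a 3% p-distance cut-off. The reads are toy examples; production pipelines (with proper alignment, quality filtering, and abundance sorting) are considerably more involved:

```python
# Greedy, centroid-based OTU clustering sketch at a fixed distance threshold,
# in the spirit of the canonical 3% cut-off discussed above.

def p_distance(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cluster_otus(reads, threshold=0.03):
    """Assign each read to the first OTU centroid within the threshold."""
    centroids, assignments = [], []
    for read in reads:
        for i, centroid in enumerate(centroids):
            if p_distance(read, centroid) <= threshold:
                assignments.append(i)
                break
        else:                       # no centroid close enough: start a new OTU
            centroids.append(read)
            assignments.append(len(centroids) - 1)
    return centroids, assignments

reads = ["ACGTACGTACGTACGTACGTACGTACGTACGTACGT",  # 36 bp toy "ITS" reads
         "ACGTACGTACGTACGTACGTACGTACGTACGTACGA",  # 1 diff (~2.8%) -> same OTU
         "ACGTTCGAACGTTCGAACGTACGTACGTACGTACGT"]  # 4 diffs (~11%) -> new OTU
centroids, assignments = cluster_otus(reads)
print(len(centroids), assignments)   # 2 OTUs, assignments [0, 0, 1]
```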
On the other hand, morphologically well-defined, but evolutionarily young species complexes or sibling species may only differ (if at all) in a few nucleotides of the ITS sequences. Solely relying on ITS barcode data for the identification of such species pairs or complexes may thus obscure the actual diversity and might lead to misidentification if not accompanied by the investigation of morphological and ecological features and/or comparison of additional diagnostic genetic markers. For some taxa, ITS (or its ITS2 part) is not variable enough as fungal DNA barcode, as for example has been shown in Aspergillus, Cladosporium, Fusarium and Penicillium. Efforts to define a universally applicable threshold value of ITS variability that demarcates intraspecific from interspecific (between-species) variability thus remain futile.
Nonetheless, the probability of correct species identification with the ITS region is high in the Dikarya, and especially so in Basidiomycota, where even the ITS1 part is often sufficient to identify the species. However, its discrimination power is partly superseded by that of the DNA-directed RNA polymerase II subunit RPB1 (see also below).
Due to the shortcomings of ITS as the primary fungal DNA barcode, the need to establish a second DNA barcode marker was expressed. Several attempts were made to establish other genetic markers that could serve as additional DNA barcodes, similar to the situation in plants, where the plastidial genes rbcL, matK and trnH-psbA, as well as the nuclear ITS, are often used in combination for DNA barcoding.
=== Translational elongation factor 1α (TEF1α) – the secondary fungal barcode ===
The translational elongation factor 1α is part of the eukaryotic elongation factor 1 complex, whose main function is to facilitate the elongation of the amino acid chain of a polypeptide during the translation process of gene expression.
Stielow et al. (2015) investigated the TEF1α gene, among a number of others, as a potential genetic marker for fungal DNA barcoding. The TEF1α gene coding for the translational elongation factor 1α is generally considered to have a slow mutation rate, and it is therefore generally better suited to investigating older splits deeper in the phylogenetic history of an organism group. Despite this, the authors concluded that TEF1α is the most promising candidate for an additional DNA barcode marker in fungi, as it also features sequence regions with higher mutation rates. Following this, a quality-controlled reference database was established and merged with the previously existing ISHAM-ITS database for fungal ITS DNA barcodes to form the ISHAM database.
TEF1α has been successfully used to identify a new species of Cantharellus from Texas and distinguish it from a morphologically similar species. In the genera Ochroconis and Verruconis (Sympoventuriaceae, Venturiales), however, the marker does not allow distinction of all species. TEF1α has also been used in phylogenetic analyses at the genus level, e.g. in the case of Cantharellus and the entomopathogenic Beauveria, and for the phylogenetics of early-diverging fungal lineages.
==== Primers ====
TEF1α primers used in the broad-scale screening of the performance of DNA barcode gene candidates by Stielow et al. (2015) were the forward primer EF1-983F with the sequence 5'-GCYCCYGGHCAYCGTGAYTTYAT-3', and the reverse primer EF1-1567R with the sequence 5'-ACHGTRCCRATACCACCRATCTT-3'. In addition, a number of new primers were developed, with one primer pair achieving a high average amplification success of 88%.
Primers used for the investigation of Rhizophydiales, and especially Batrachochytrium dendrobatidis, a pathogen of amphibians, are the forward primer tef1F with the nucleotide sequence 5'-TACAARTGYGGTGGTATYGACA-3', and the reverse primer tef1R with the sequence 5'-ACNGACTTGACYTCAGTRGT-3'. These primers also successfully amplified the majority of Cantharellus species investigated by Buyck et al. (2014), with the exception of a few species for which more specific primers were developed: the forward primer tef-1Fcanth with the sequence 5'-AGCATGGGTDCTYGACAAG-3', and the reverse primer tef-1Rcanth with the sequence 5'-CCAATYTTRTAYACATCYTGGAG-3'.
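Several of the primers above contain IUPAC degenerate bases (R, Y, H, D, N), each standing for a set of nucleotides, so a degenerate primer is really a mixture of concrete oligonucleotides. A short sketch expanding such a primer, using EF1-983F from above as the example:

```python
# Sketch: expand the IUPAC degenerate bases of a primer into the set of
# concrete oligonucleotides it represents.
from itertools import product

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(primer):
    """Yield every concrete oligonucleotide encoded by a degenerate primer."""
    for combo in product(*(IUPAC[base] for base in primer)):
        yield "".join(combo)

ef1_983f = "GCYCCYGGHCAYCGTGAYTTYAT"   # forward primer from Stielow et al. (2015)
variants = list(expand(ef1_983f))
print(len(variants))   # 2**5 * 3 = 96 distinct oligos (five Y, one H)
print(variants[0])     # e.g. 'GCCCCCGGACACCGTGACTTCAT'
```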
=== D1/D2 domain of the LSU ribosomal RNA ===
The D1/D2 domain is part of the nuclear large subunit (28S) ribosomal RNA, and it is therefore located in the same ribosomal tandem repeat gene cluster as the Internal Transcribed Spacer (ITS). Unlike the non-coding ITS sequences, however, the D1/D2 domain contains coding sequence. At about 600 base pairs, it is similar in length to ITS, which makes amplification and sequencing rather straightforward, an advantage that has led to the accumulation of an extensive amount of D1/D2 sequence data, especially for yeasts.
Regarding the molecular identification of basidiomycetous yeasts, D1/D2 (or ITS) can be used alone. However, Fell et al. (2000) and Scorzetti et al. (2002) recommend the combined analysis of the D1/D2 and ITS regions, a practice that later became the standard required information for describing new taxa of asco- and basidiomycetous yeasts. When attempting to identify early diverging fungal lineages, the study of Schoch et al. (2012), comparing the identification performance of different genetic markers, showed that the large subunit (as well as the small subunit) of the ribosomal RNA performs better than ITS or RPB1.
==== Primers ====
For basidiomycetous yeasts, the forward primer F63 with the sequence 5'-GCATATCAATAAGCGGAGGAAAAG-3', and the reverse primer LR3 with the sequence 5'-GGTCCGTGTTTCAAGACGG-3' have been successfully used for PCR amplification of the D1/D2 domain. The D1/D2 domain of ascomycetous yeasts like Candida can be amplified with the forward primer NL-1 (same as F63) and the reverse primer NL-4 (same as LR3).
=== RNA polymerase II subunit RPB1 ===
The RNA polymerase II subunit RPB1 is the largest subunit of RNA polymerase II. In Saccharomyces cerevisiae, it is encoded by the RPO21 gene. PCR amplification success of RPB1 is very taxon-dependent, ranging from 70 to 80% in Ascomycota down to 14% in early diverging fungal lineages. Apart from the early diverging lineages, RPB1 has a high rate of species identification in all fungal groups. In the species-rich Pezizomycotina it even outperforms ITS.
In a study comparing the identification performance of four genes, RPB1 was among the most effective genes when combining two genes in the analysis: combined analysis with either ITS or with the large subunit ribosomal RNA yielded the highest identification success.
Other studies also used RPB2, the second-largest subunit of the RNA polymerase II, e.g. for studying the phylogenetic relationships among species of the genus Cantharellus or for a phylogenetic study shedding light on the relationships among early-diverging lineages in the fungal kingdom.
==== Primers ====
Primers successfully amplifying RPB1, especially in Ascomycota, are the forward primer RPB1-Af with the sequence 5'-GARTGYCCDGGDCAYTTYGG-3', and the reverse primer RPB1-Cr with the sequence 5'-CCNGCDATNTCRTTRTCCATRTA-3'.
=== Intergenic Spacer (IGS) of ribosomal RNA genes ===
The Intergenic Spacer (IGS) is the region of non-coding DNA between individual tandem repeats of the ribosomal gene cluster in the nuclear genome, as opposed to the Internal Transcribed Spacer (ITS) that is situated within these tandem repeats.
IGS has been successfully used for the differentiation of strains of Xanthophyllomyces dendrorhous as well as for species distinction in the psychrophilic genus Mrakia (Cystofilobasidiales). Due to these results, IGS has been recommended as a genetic marker for additional differentiation (along with D1/D2 and ITS) of closely related species and even strains within one species in basidiomycete yeasts.
The recent discovery of additional non-coding RNA genes in the IGS region of some basidiomycetes cautions against uncritical use of IGS sequences for DNA barcoding and phylogenetic purposes.
=== Other genetic markers ===
The cytochrome c oxidase subunit I (COI) gene outperforms ITS in DNA barcoding of Penicillium (Ascomycota) species, with species-specific barcodes for 66% of the investigated species versus 25% in the case of ITS. Furthermore, a part of the β-Tubulin A (BenA) gene exhibits a higher taxonomic resolution in distinguishing Penicillium species as compared to COI and ITS. In the closely related Aspergillus niger complex, however, COI is not variable enough for species discrimination. In Fusarium, COI exhibits paralogues in many cases, and homologous copies are not variable enough to distinguish species.
COI also performs poorly in the identification of basidiomycete rusts of the order Pucciniales due to the presence of introns. Even when the obstacle of introns is overcome, ITS and the LSU rRNA (28S) outperform COI as a DNA barcode marker. In the subdivision Agaricomycotina, PCR amplification success was poor for COI, even with multiple primer combinations. Successfully sequenced COI samples also included introns and possible paralogous copies, as reported for Fusarium. Agaricus bisporus was found to contain up to 19 introns, making the COI gene of this species the longest recorded, with 29,902 nucleotides. Apart from the substantial difficulties of sequencing COI, COI and ITS generally perform equally well in distinguishing basidiomycete mushrooms.
Topoisomerase I (TOP1) was investigated as an additional DNA barcode candidate by Lewis et al. (2011) based on proteome data, with the resulting universal primer pair subsequently tested on actual samples by Stielow et al. (2015). The forward primer TOP1_501-F with the sequence 5'-TGTAAAACGACGGCCAGT-ACGAT-ACTGCCAAGGTTTTCCGTACHTACAACGC-3' (where the first section marks the universal M13 forward primer tail, the second section (ACGAT) a spacer, and the third section the actual primer) and the reverse primer TOP1_501-R with 5'-CAGGAAACAGCTATGA-CCCAGTCCTCGTCAACWGACTTRATRGCCCA-3' (the first section marking the universal M13 reverse primer tail, the second section the actual TOP1 reverse primer) amplify a fragment of approximately 800 base pairs.
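The three-part layout of these tailed primers can be made explicit programmatically. The sketch below is an illustration only: it splits the hyphen-delimited oligo strings quoted above (the hyphens mark section boundaries and are not part of the synthesized oligo) into tail, spacer and gene-specific parts.

```python
# Composite TOP1 primers as described above, with hyphens separating
# the M13 tail, the optional spacer, and the gene-specific primer.
TOP1_501_F = "TGTAAAACGACGGCCAGT-ACGAT-ACTGCCAAGGTTTTCCGTACHTACAACGC"
TOP1_501_R = "CAGGAAACAGCTATGA-CCCAGTCCTCGTCAACWGACTTRATRGCCCA"

def split_parts(oligo: str) -> dict:
    """Break a hyphen-annotated composite primer into its sections."""
    parts = oligo.split("-")
    if len(parts) == 3:
        tail, spacer, primer = parts
    else:  # the reverse primer has no spacer section
        (tail, primer), spacer = parts, ""
    return {"M13_tail": tail, "spacer": spacer, "gene_specific": primer}

for name, oligo in [("TOP1_501-F", TOP1_501_F), ("TOP1_501-R", TOP1_501_R)]:
    print(name, split_parts(oligo))
```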
TOP1 was found to be a promising DNA barcode candidate marker for ascomycetes, where it can distinguish species in Fusarium and Penicillium, genera in which the primary ITS barcode performs poorly. However, poor amplification success with the TOP1 universal primers is observed in early-diverging fungal lineages and in basidiomycetes other than Pucciniomycotina (where, conversely, ITS PCR success is poor).
Like TOP1, phosphoglycerate kinase (PGK) was among the genetic markers investigated by Lewis et al. (2011) and Stielow et al. (2015) as potential additional fungal DNA barcodes. A number of universal primers were developed, with the PGK533 primer pair, amplifying a fragment of circa 1,000 base pairs, being the most successful in most fungi except basidiomycetes. Like TOP1, PGK is superior to ITS for species differentiation in ascomycete genera such as Penicillium and Fusarium, and both PGK and TOP1 perform as well as TEF1α in distinguishing closely related species in these genera.
== Applications ==
=== Food safety ===
A citizen science project investigated the consensus between the labelling of dried, commercially sold mushrooms and the DNA barcoding results from these mushrooms. All samples were found to be correctly labelled. However, an obstacle was the unreliability of ITS reference databases with regard to the level of identification, as the two databases used for ITS sequence comparison (GenBank and UNITE) gave different identification results for some of the samples.
Correct labelling of mushrooms intended for consumption was also investigated by Raja et al. (2016), who used the ITS region for DNA barcoding from dried mushrooms, mycelium powders, and dietary supplement capsules. In only 30% of the 33 samples did the product label correctly state the binomial fungus name. In another 30%, the genus name was correct, but the species epithet did not match, and in 15% of the cases not even the genus name of the binomial name given on the product label matched the result of the obtained ITS barcode. For the remaining 25% of the samples, no ITS sequence could be obtained.
Xiang et al. (2013) showed that using ITS sequences, the commercially highly valuable caterpillar fungus Ophiocordyceps sinensis and its counterfeit versions (O. nutans, O. robertsii, Cordyceps cicadae, C. gunnii, C. militaris, and the plant Ligularia hodgsonii) can be reliably identified to the species level.
=== Pathogenic fungi ===
A study by Vi Hoang et al. (2019) focused on the identification accuracy of pathogenic fungi using both the primary (ITS) and secondary (TEF1α) barcode markers. Their results show that in Diutina (a segregate of Candida) and Pichia, species identification is straightforward with either ITS or TEF1α, as well as with a combination of both. In the Lodderomyces assemblage, which contains three of the five most common pathogenic Candida species (C. albicans, C. dubliniensis, and C. parapsilosis), ITS failed to distinguish Candida orthopsilosis and C. parapsilosis, which are part of the Candida parapsilosis complex of closely related species. TEF1α, on the other hand, allowed identification of all investigated species of the Lodderomyces clade. Similar results were obtained for Scedosporium species, which cause a wide range of diseases from localised to invasive infections: ITS could not distinguish between S. apiospermum and S. boydii, whereas with TEF1α all investigated species of this genus could be accurately identified. This study therefore underlines the usefulness of applying more than one DNA barcoding marker for fungal species identification.
=== Conservation of cultural heritage ===
Fungal DNA barcoding has been successfully applied to the investigation of foxing phenomena, a major concern in the conservation of paper documents. Sequeira et al. (2019) sequenced ITS from foxing stains and found Chaetomium globosum, Ch. murorum, Ch. nigricolor, Chaetomium sp., Eurotium rubrum, Myxotrichum deflexum, Penicillium chrysogenum, P. citrinum, P. commune, Penicillium sp. and Stachybotrys chartarum to inhabit the investigated paper stains.
Another study investigated fungi that act as biodeteriorating agents in the Old Cathedral of Coimbra, part of the University of Coimbra, a UNESCO World Heritage Site. Sequencing the ITS barcode of ten samples with classical Sanger as well as Illumina next-generation sequencing techniques, the researchers identified 49 fungal species. Aspergillus versicolor, Cladosporium cladosporioides, C. sphaerospermum, C. tenuissimum, Epicoccum nigrum, Parengyodontium album, Penicillium brevicompactum, P. crustosum, P. glabrum, Talaromyces amestolkiae and T. stollii were the most common species isolated from the samples.
Another study concerning objects of cultural heritage investigated the fungal diversity on a canvas painting by Paula Rego using the ITS2 subregion of the ITS marker. Altogether, 387 OTUs (putative species) in 117 genera of 13 different classes of fungi were observed.
== See also ==
DNA barcoding
Microbial DNA barcoding
Pollen DNA barcoding
DNA barcoding in diet assessment
Consortium for the Barcode of Life
== References ==
== Further reading ==
== External links ==
AFTOL primer listing (as used in James et al. 2006's six-gene phylogeny)
Metabarcoding is the barcoding of DNA/RNA (or eDNA/eRNA) in a manner that allows for the simultaneous identification of many taxa within the same sample. The main difference between barcoding and metabarcoding is that metabarcoding does not focus on one specific organism, but instead aims to determine species composition within a sample.
A barcode consists of a short, variable gene region that is useful for taxonomic assignment, flanked by highly conserved gene regions that can be used for primer design. The idea of general barcoding originated in 2003 with researchers at the University of Guelph.
The metabarcoding procedure, like general barcoding, proceeds in order through stages of DNA extraction, PCR amplification, sequencing and data analysis. Different genes are used depending on whether the aim is to barcode a single species or to metabarcode several species; in the latter case, a more universal gene is used. Metabarcoding does not use DNA/RNA from a single species as a starting point, but DNA/RNA from several different organisms derived from one environmental or bulk sample.
== Environmental DNA ==
Environmental DNA or eDNA describes the genetic material present in environmental samples such as sediment, water, and air, including whole cells, extracellular DNA and potentially whole organisms. eDNA can be captured from environmental samples and preserved, extracted, amplified, sequenced, and categorized based on its sequence. From this information, detection and classification of species is possible. eDNA may come from skin, mucous, saliva, sperm, secretions, eggs, feces, urine, blood, roots, leaves, fruit, pollen, and rotting bodies of larger organisms, while microorganisms may be obtained in their entirety. eDNA production is dependent on biomass, age and feeding activity of the organism as well as physiology, life history, and space use.
By 2019 methods in eDNA research had been expanded to be able to assess whole communities from a single sample. This process involves metabarcoding, which can be precisely defined as the use of general or universal polymerase chain reaction (PCR) primers on mixed DNA samples from any origin followed by high-throughput next-generation sequencing (NGS) to determine the species composition of the sample. This method has been common in microbiology for years, but, as of 2020, it is only just finding its footing in the assessment of macroorganisms. Ecosystem-wide applications of eDNA metabarcoding have the potential to not only describe communities and biodiversity, but also to detect interactions and functional ecology over large spatial scales, though it may be limited by false readings due to contamination or other errors. Altogether, eDNA metabarcoding increases speed, accuracy, and identification over traditional barcoding and decreases cost, but needs to be standardized and unified, integrating taxonomy and molecular methods for full ecological study.
eDNA metabarcoding has applications to diversity monitoring across all habitats and taxonomic groups, ancient ecosystem reconstruction, plant-pollinator interactions, diet analysis, invasive species detection, pollution responses, and air quality monitoring. eDNA metabarcoding is a unique method still in development and will likely remain in flux for some time as technology advances and procedures become standardized. However, as metabarcoding is optimized and its use becomes more widespread, it is likely to become an essential tool for ecological monitoring and global conservation study.
== Community DNA ==
Since the inception of high‐throughput sequencing (HTS), the use of metabarcoding as a biodiversity detection tool has drawn immense interest. However, there has yet to be clarity regarding what source material is used to conduct metabarcoding analyses (e.g., environmental DNA versus community DNA). Without clarity between these two source materials, differences in sampling, as well as differences in laboratory procedures, can impact subsequent bioinformatics pipelines used for data processing, and complicate the interpretation of spatial and temporal biodiversity patterns. It is therefore important to clearly differentiate between the prevailing source materials and their effects on downstream analysis and interpretation: environmental DNA metabarcoding of animals and plants as compared to community DNA metabarcoding.
With community DNA metabarcoding of animals and plants, the targeted groups are most often collected in bulk (e.g., soil, malaise trap or net), and individuals are removed from other sample debris and pooled together prior to bulk DNA extraction. In contrast, macro‐organism eDNA is isolated directly from an environmental material (e.g., soil or water) without prior segregation of individual organisms or plant material from the sample and implicitly assumes that the whole organism is not present in the sample. Of course, community DNA samples may contain DNA from parts of tissues, cells and organelles of other organisms (e.g., gut contents, cutaneous intracellular or extracellular DNA). Likewise, macro‐organism eDNA samples may inadvertently capture whole microscopic nontarget organisms (e.g., protists, bacteria). Thus, the distinction can at least partly break down in practice.
Another important distinction between community DNA and macro‐organism eDNA is that sequences generated from community DNA metabarcoding can be taxonomically verified when the specimens are not destroyed in the extraction process. In that case, sequences can be generated from voucher specimens using Sanger sequencing. As samples for eDNA metabarcoding lack whole organisms, no such in situ comparisons can be made. Taxonomic affinities can therefore only be established by directly comparing obtained sequences, or bioinformatically generated molecular operational taxonomic units (MOTUs), to taxonomically annotated sequences such as those in NCBI's GenBank nucleotide database, BOLD, or self‐generated reference databases from Sanger‐sequenced DNA. (A molecular operational taxonomic unit (MOTU) is a group identified through the use of cluster algorithms and a predefined percentage of sequence similarity, for example 97%.) Then, to at least partially corroborate the resulting list of taxa, comparisons are made with conventional physical, acoustic or visual‐based survey methods conducted at the same time, or with historical records from surveys of the location.
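As a rough illustration of how MOTUs can be formed, the sketch below implements naive greedy clustering at 97% identity in Python. The toy sequences are hypothetical, and real pipelines rely on optimized clustering tools and proper sequence alignment rather than this position-by-position comparison:

```python
def identity(a: str, b: str) -> float:
    """Crude pairwise identity over aligned positions (gaps not handled)."""
    n = min(len(a), len(b))
    return sum(a[i] == b[i] for i in range(n)) / n

def greedy_motus(reads, threshold=0.97):
    seeds = []     # one representative sequence per MOTU
    clusters = []  # reads assigned to each MOTU
    for read in reads:
        for i, seed in enumerate(seeds):
            if identity(read, seed) >= threshold:
                clusters[i].append(read)
                break
        else:  # no seed matched: this read founds a new MOTU
            seeds.append(read)
            clusters.append([read])
    return clusters

# Toy example: 100-bp reads; the second differs from the first at one site.
base = "ACGT" * 25
reads = [base, base[:50] + "T" + base[51:], "TGCA" * 25]
print([len(c) for c in greedy_motus(reads)])  # -> [2, 1]
```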
The difference in source material between community DNA and eDNA therefore has distinct ramifications for interpreting the scale of inference in time and space for the biodiversity detected. From community DNA, it is clear that the individual species were found at that time and place. For eDNA, however, the organism that produced the DNA may be upstream from the sampled location, the DNA may have been transported in the faeces of a more mobile predatory species (e.g., birds depositing fish eDNA), or the organism may have been previously present but no longer active in the community, with detection stemming from DNA shed years to decades before. This means that the scale of inference both in space and in time must be considered carefully when inferring the presence of a species in the community based on eDNA.
== Metabarcoding stages ==
There are six stages or steps in DNA barcoding and metabarcoding. The DNA barcoding of animals (and specifically of bats) is used as an example in the discussion immediately below.
First, suitable DNA barcoding regions are chosen to answer a specific research question. The most commonly used DNA barcode region for animals is a segment of about 600 base pairs of the mitochondrial gene cytochrome oxidase I (CO1). This locus provides large sequence variation between species yet a relatively small amount of variation within species. Other barcode regions commonly used for species identification of animals are ribosomal DNA (rDNA) regions such as 16S, 18S and 12S, and mitochondrial regions such as cytochrome B. These markers have advantages and disadvantages and are used for different purposes. Longer barcode regions (at least 600 base pairs long) are often needed for accurate species delimitation, especially to differentiate close relatives. Identification of the organism that produced remains such as faeces, hairs and saliva can serve as a proxy measure to verify the absence/presence of a species in an ecosystem. The DNA in such remains is usually of low quality and quantity, and therefore shorter barcodes of around 100 base pairs are used in these cases. Similarly, DNA in dung is often degraded, so short barcodes are needed to identify the prey consumed.
Second, a reference database needs to be built of all DNA barcodes likely to occur in a study. Ideally, these barcodes are generated from vouchered specimens deposited in a publicly accessible place, such as a natural history museum or another research institute. Building up such reference databases is currently being done all over the world. Partner organizations collaborate in international projects such as the International Barcode of Life Project (iBOL) and the Consortium for the Barcode of Life (CBOL), which aim to construct a DNA barcode reference that will be the foundation for DNA‐based identification of the world's biota. Well‐known barcode repositories are NCBI GenBank and the Barcode of Life Data System (BOLD).
Third, the cells containing the DNA of interest must be broken open to expose their DNA. This step, DNA extraction and purification, should be performed on the substrate under investigation. Several procedures are available for this. Specific techniques must be chosen to isolate DNA from substrates with partly degraded DNA, for example fossil samples, and from samples containing inhibitors, such as blood, faeces and soil. Extractions in which DNA yield or quality is expected to be low should be carried out in an ancient-DNA facility, using established protocols to avoid contamination with modern DNA. Experiments should always be performed in duplicate and with positive controls included.
Fourth, amplicons have to be generated from the extracted DNA, either from a single specimen or from complex mixtures, using primers based on the DNA barcodes selected in step 1. In the case of metabarcoding, labelled nucleotide tags (molecular identifiers, or MID labels) need to be added to keep track of each amplicon's origin. These labels are needed later in the analyses to trace reads from a bulk data set back to their samples of origin.
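A minimal sketch of the demultiplexing this enables, assuming each read begins with an intact MID tag (the tag sequences and reads below are hypothetical):

```python
# Hypothetical MID tags ligated to each sample's amplicons before pooling.
MIDS = {"ACGAGTGCGT": "sample_A", "ACGCTCGACA": "sample_B"}

def demultiplex(reads):
    """Sort pooled reads into per-sample bins by their leading MID tag."""
    bins = {sample: [] for sample in MIDS.values()}
    unassigned = []
    for read in reads:
        for tag, sample in MIDS.items():
            if read.startswith(tag):
                bins[sample].append(read[len(tag):])  # trim the tag off
                break
        else:
            unassigned.append(read)  # no known tag found
    return bins, unassigned

reads = ["ACGAGTGCGTTTGATTACGG", "ACGCTCGACAGGCCATTA", "TTTTGATTACGG"]
bins, lost = demultiplex(reads)
print({s: len(r) for s, r in bins.items()}, len(lost))
```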
Fifth, the appropriate techniques should be chosen for DNA sequencing. The classic Sanger chain‐termination method relies on the selective incorporation of chain‐terminating dideoxynucleotides by DNA polymerase during DNA replication; the resulting fragments are separated by size using electrophoresis and the terminal bases identified by laser detection. The Sanger method can produce only a single read at a time and is therefore suitable for generating DNA barcodes from substrates that contain only a single species. Emerging technologies such as nanopore sequencing have reduced the cost of DNA sequencing from about USD 30,000 per megabase in 2002 to about USD 0.60 in 2016. Modern next-generation sequencing (NGS) technologies can handle thousands to millions of reads in parallel and are therefore suitable for mass identification of a mix of different species present in a substrate, i.e. metabarcoding.
Finally, bioinformatic analyses need to be carried out to match the DNA barcodes obtained with Barcode Index Numbers (BINs) in reference libraries. Each BIN, or BIN cluster, can either be identified to species level, when it shows high (>97%) concordance with DNA barcodes linked to a species present in a reference library, or, when taxonomic identification to species level is still lacking, be assigned to an operational taxonomic unit (OTU) referring to a group of species at a higher taxonomic rank (e.g. genus or family; see binning (metagenomics)). The results of the bioinformatics pipeline must be pruned, for example by filtering out unreliable singletons, superfluous duplicates, low‐quality reads and/or chimeric reads. This is generally done by carrying out serial BLAST searches in combination with automatic filtering and trimming scripts. Standardized thresholds are needed to discriminate between different species and between correct and incorrect identifications.
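A minimal sketch of this pruning-and-assignment logic, assuming each unique sequence has already been matched against a reference library and given a best-hit identity (all names, counts and cutoffs below are illustrative, not from the cited pipelines):

```python
from collections import Counter

def prune_and_assign(reads, hits, min_copies=2, species_cutoff=0.97):
    """Drop unreliable singletons, then assign the rest by identity.

    reads: list of sequences; hits: dict mapping a sequence to the
    (taxon, identity) of its best reference-library match.
    """
    counts = Counter(reads)
    assignments = {}
    for seq, n in counts.items():
        if n < min_copies:          # filter out unreliable singletons
            continue
        taxon, ident = hits[seq]
        if ident > species_cutoff:  # >97%: accept a species-level match
            assignments[seq] = ("species", taxon)
        else:                       # otherwise report a coarser OTU
            assignments[seq] = ("OTU", taxon)
    return assignments

reads = ["AAA", "AAA", "CCC", "GGG", "GGG"]
hits = {"AAA": ("Myotis daubentonii", 0.99),
        "CCC": ("Myotis sp.", 0.99),
        "GGG": ("Vespertilionidae", 0.91)}
print(prune_and_assign(reads, hits))
```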
== Metabarcoding workflow ==
Despite the obvious power of the approach, eDNA metabarcoding is affected by precision and accuracy challenges distributed throughout the workflow: in the field, in the laboratory and at the keyboard. Following the initial study design (hypothesis/question, targeted taxonomic group, etc.), the current eDNA workflow consists of three components: field, laboratory and bioinformatics. The field component consists of sample collection (e.g., water, sediment, air) that is preserved or frozen prior to DNA extraction. The laboratory component has four basic steps: (i) DNA is concentrated (if not performed in the field) and purified, (ii) PCR is used to amplify a target gene or region, (iii) unique nucleotide sequences called "indexes" (also referred to as "barcodes") are incorporated using PCR or are ligated (bound) onto different PCR products, creating a "library" whereby multiple samples can be pooled together, and (iv) pooled libraries are then sequenced on a high‐throughput machine. The final step after laboratory processing of samples is to computationally process the output files from the sequencer using a robust bioinformatics pipeline.
== OTUs and the species concept ==
== Method and visualisation ==
The method requires each collected DNA sample to be archived with its corresponding "type specimen" (one for each taxon), in addition to the usual collection data. These types are stored in specific institutions (museums, molecular laboratories, universities, zoological gardens, botanical gardens, herbaria, etc.), one per country; in some cases, the same institution is assigned to hold the types of more than one country, where nations lack the technology or financial resources to do so.
In this way, the creation of type specimens of genetic codes represents a methodology parallel to that carried out by traditional taxonomy.
In a first stage, the region of DNA to be used for the barcode was defined. It had to be short and achieve a high percentage of unique sequences. For animals, algae and fungi, a region of around 648 base pairs of the mitochondrial gene coding for subunit 1 of the cytochrome oxidase enzyme (CO1) has provided high percentages of unique sequences (95%).
In the case of plants, the use of CO1 has not been effective, since they have low levels of variability in that region, in addition to difficulties produced by the frequent effects of polyploidy, introgression and hybridization; the chloroplast genome therefore seems more suitable.
== Applications ==
=== Pollinator networks ===
Pollination networks constructed from DNA metabarcoding can be compared with more traditional networks based on direct observations of insect visits to plants. By detecting numerous additional hidden interactions, metabarcoding data largely alter the properties of pollination networks compared to visit surveys. Molecular data show that pollinators are much more generalist than expected from visit surveys. However, pollinator species were composed of relatively specialized individuals and formed functional groups highly specialized upon floral morphs.
As a consequence of the ongoing global changes, a dramatic and parallel worldwide decline in pollinators and animal-pollinated plant species has been observed. Understanding the responses of pollination networks to these declines is urgently required to diagnose the risks the ecosystems may incur as well as to design and evaluate the effectiveness of conservation actions. Early studies on animal pollination dealt with simplified systems, i.e. specific pairwise interactions or involved small subsets of plant-animal communities. However, the impacts of disturbances occur through highly complex interaction networks and, nowadays, these complex systems are currently a major research focus. Assessing the true networks (determined by ecological process) from field surveys that are subject to sampling effects still provides challenges.
Recent research studies have clearly benefited from network concepts and tools to study the interaction patterns in large species assemblages. They showed that plant-pollinator networks are highly structured, deviating significantly from random associations. Commonly, networks have (1) a low connectance (the realized fraction of all potential links in the community), suggesting a low degree of generalization; (2) a high nestedness, i.e. the more-specialist organisms are more likely to interact with subsets of the species that the more-generalist organisms interact with; (3) a cumulative distribution of connectivity (number of links per species, s) that follows a power law or truncated power law, characterized by few supergeneralists with more links than expected by chance and many specialists; and (4) a modular organization, where a module is a group of plant and pollinator species that exhibits high within-module connectivity while being poorly connected to species of other modules.
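Two of these properties, connectance and per-species connectivity, can be computed directly from a binary interaction matrix. The sketch below uses a hypothetical plant-pollinator matrix for illustration; nestedness and modularity require dedicated algorithms not shown here:

```python
# Rows = pollinator species, columns = plant species; 1 = observed interaction.
matrix = [
    [1, 1, 1, 1],  # supergeneralist pollinator
    [1, 1, 0, 0],
    [1, 0, 0, 0],  # specialist
    [1, 0, 0, 0],  # specialist
]

links = sum(sum(row) for row in matrix)
pollinators, plants = len(matrix), len(matrix[0])

# Connectance: realized fraction of all possible pollinator-plant links.
connectance = links / (pollinators * plants)

# Connectivity s: number of links per pollinator species.
degrees = [sum(row) for row in matrix]

print(f"connectance = {connectance:.2f}")  # 8/16 = 0.50
print(f"degree distribution = {degrees}")  # few generalists, many specialists
```

Note that this toy matrix is also perfectly nested: each specialist's partner set is a subset of the generalist's, matching property (2) above.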
The low level of connectivity and the high proportion of specialists in pollination networks contrast with the view that generalization rather than specialization is the norm in networks. Indeed, most plant species are visited by a diverse array of pollinators which exploit floral resources from a wide range of plant species. A main cause evoked to explain this apparent contradiction is the incomplete sampling of interactions. Indeed, most network properties are highly sensitive to sampling intensity and network size. Network studies are basically phytocentric, i.e. based on observations of pollinator visits to flowers. This plant-centered approach nevertheless suffers from inherent limitations which may hamper the comprehension of mechanisms contributing to community assembly and biodiversity patterns. First, direct observations of pollinator visits to certain taxa such as orchids are often scarce, and rare interactions are generally very difficult to detect in the field. Pollinator and plant communities usually consist of a few abundant species and many rare species that are poorly recorded in visit surveys. These rare species appear as specialists, whereas in fact they could be typical generalists. Because of the positive relationship between interaction frequency (f) and connectivity (s), undersampled interactions may lead to overestimating the degree of specialization in networks. Second, network analyses have mostly operated at the species level. Networks have very rarely been scaled up to functional groups or down to individual-based networks, and most individual-based studies have focused on one or two species only. The behavior of either individuals or colonies is commonly ignored, although it may influence the structure of the species networks. Species counted as generalists in species networks could therefore comprise cryptic specialized individuals or colonies. Third, flower visitors are by no means always effective pollinators, as they may deposit no conspecific pollen and/or a lot of heterospecific pollen. Animal-centered approaches based on the investigation of pollen loads on visitors and plant stigmas may be more efficient at revealing plant-pollinator interactions.
=== Disentangling food webs ===
Metabarcoding offers new opportunities for deciphering trophic linkages between predators and their prey within food webs. Compared to traditional, time-consuming methods, such as microscopic or serological analyses, the development of DNA metabarcoding allows the identification of prey species without prior knowledge of the predator's prey range. In addition, metabarcoding can be used to characterize a large number of species in a single PCR reaction, and to analyze several hundred samples simultaneously. Such an approach is increasingly used to explore the functional diversity and structure of food webs in agroecosystems. Like other molecular-based approaches, metabarcoding only gives qualitative results on the presence or absence of prey species in gut or fecal samples. However, this knowledge of the identity of prey consumed by predators of the same species in a given environment provides a "pragmatic and useful surrogate" for truly quantitative information.
In food web ecology, "who eats whom" is a fundamental issue for gaining a better understanding of the complex trophic interactions existing between pests and their natural enemies within a given ecosystem. The dietary analysis of arthropod and vertebrate predators allows the identification of key predators involved in the natural control of arthropod pests and gives insights into the breadth of their diet (generalist vs. specialist) and intraguild predation.
A 2020 study used metabarcoding to untangle the functional diversity and structure of the food web associated with a couple of millet fields in Senegal. After the identified OTUs were assigned to species, 27 arthropod prey taxa were identified from nine arthropod predators. The mean number of prey taxa detected per sample was highest in carabid beetles, ants and spiders, and lowest in the remaining predators, including anthocorid bugs, pentatomid bugs and earwigs. Across predatory arthropods, a high diversity of arthropod prey was observed in spiders, carabid beetles, ants and anthocorid bugs. In contrast, the diversity of prey species identified in earwigs and pentatomid bugs was relatively low. Lepidoptera, Hemiptera, Diptera and Coleoptera were the most common insect prey taxa detected from predatory arthropods.
Conserving functional biodiversity and related ecosystem services, especially by controlling pests using their natural enemies, offers new avenues to tackle challenges for the sustainable intensification of food production systems. Predation of crop pests by generalist predators, including arthropods and vertebrates, is a major component of natural pest control. A particularly important trait of most generalist predators is that they can colonize crops early in the season by first feeding on alternative prey. However, the breadth of the "generalist" diet entails some drawbacks for pest control, such as intra-guild predation. A tuned diagnosis of diet breadth in generalist predators, including predation of non-pest prey, is thus needed to better disentangle food webs (e.g., exploitation competition and apparent competition) and ultimately to identify key drivers of natural pest control in agroecosystems. However, the importance of generalist predators in the food web is generally difficult to assess, due to the ephemeral nature of individual predator–prey interactions. The only conclusive evidence of predation results from direct observation of prey consumption, identification of prey residues within predators' guts, and analyses of regurgitates or feces.
=== Marine biosecurity ===
The spread of non-indigenous species (NIS) represents significant and increasing risks to ecosystems. In marine systems, NIS that survive the transport and adapt to new locations can have significant adverse effects on local biodiversity, including the displacement of native species, and shifts in biological communities and associated food webs. Once NIS are established, they are extremely difficult and costly to eradicate, and further regional spread may occur through natural dispersal or via anthropogenic transport pathways. While vessel hull fouling and ships' ballast waters are well known as important anthropogenic pathways for the international spread of NIS, comparatively little is known about the potential of regionally transiting vessels to contribute to the secondary spread of marine pests through bilge water translocation.
Recent studies have revealed that the water and associated debris entrained in bilge spaces of small vessels (<20 m) can act as a vector for the spread of NIS at regional scales. Bilge water is defined as any water that is retained on a vessel (other than ballast), and that is not deliberately pumped on board. It can accumulate on or below the vessel's deck (e.g., under floor panels) through a variety of mechanisms, including wave actions, leaks, via the propeller stern glands, and through the loading of items such as diving, fishing, aquaculture or scientific equipment. Bilge water, therefore, may contain seawater as well as living organisms at various life stages, cell debris and contaminants (e.g., oil, dirt, detergent, etc.), all of which are usually discharged using automatic bilge pumps or are self-drained using duckbill valves. Bilge water pumped from small vessels (manually or automatically) is not usually treated prior to discharge to sea, contrasting with larger vessels that are required to separate oil and water using filtration systems, centrifugation, or carbon absorption. If propagules are viable through this process, the discharge of bilge water may result in the spread of NIS.
In 2017, Fletcher et al. used a combination of laboratory and field experiments to investigate the diversity, abundance, and survival of biological material contained in bilge water samples taken from small coastal vessels. Their laboratory experiment showed that ascidian colonies or fragments, and bryozoan larvae, can survive passage through an unfiltered pumping system largely unharmed. They also conducted the first morpho-molecular assessment (using eDNA metabarcoding) on the biosecurity risk posed by bilge water discharges from 30 small vessels (sailboats and motorboats) of various origins and sailing time. Using eDNA metabarcoding they characterised approximately three times more taxa than via traditional microscopic methods, including the detection of five species recognised as non-indigenous in the study region.
To assist in understanding the risks associated with different NIS introduction vectors, traditional microscope biodiversity assessments are increasingly being complemented by eDNA metabarcoding. This allows a wide range of diverse taxonomic assemblages, at many life stages, to be identified. It can also enable the detection of NIS that may have been overlooked using traditional methods. Despite the great potential of eDNA metabarcoding tools for broad-scale taxonomic screening, a key challenge for eDNA in the context of environmental monitoring of marine pests, particularly when monitoring enclosed environments such as some bilge spaces or ballast tanks, is differentiating dead from viable organisms. Extracellular DNA can persist in dark/cold environments for extended periods of time (months to years); thus many of the organisms detected using eDNA metabarcoding may not have been viable at the location of sample collection for days or weeks. In contrast, ribonucleic acid (RNA) deteriorates rapidly after cell death, likely providing a more accurate representation of viable communities. Recent metabarcoding studies have explored the use of co-extracted eDNA and eRNA molecules for monitoring benthic sediment samples around marine fish farms and oil drilling sites, and have collectively found slightly stronger correlations between biological and physico-chemical variables along impact gradients when using eRNA. From a marine biosecurity perspective, the detection of living NIS may represent a more serious and immediate threat than detection of NIS based purely on a DNA signal. Environmental RNA may therefore offer a useful method for identifying living organisms in samples.
=== Miscellaneous ===
The construction of the genetic barcode library was initially focused on fish and birds, followed by butterflies and other invertebrates. In the case of birds, the DNA sample is usually obtained from the chest.
Researchers have already developed specific catalogs for large animal groups, such as bees, birds, mammals and fish. Another use is to analyze the complete zoocenosis of a given geographic area, such as the "Polar Life Bar Code" project, which aims to collect the genetic barcodes of all organisms living in the polar regions at both poles of the Earth. A related approach is the barcoding of the entire ichthyofauna of a hydrographic basin, for example the project begun in the Rio São Francisco basin in northeastern Brazil.
The potential uses of barcodes are very wide: the discovery of numerous cryptic species (an application that has already yielded numerous positive results), the identification of species at any stage of their life cycle, the secure identification of protected species that are illegally trafficked, and more.
It has also been used as a non-invasive tool to determine the diet of wildlife species, such as wombats and particularly critically endangered species, such as the northern hairy-nosed wombat (Lasiorhinus krefftii).
== Potentials and shortcomings ==
=== Potentials ===
DNA barcoding has been proposed as a way to distinguish species suitable even for non-specialists to use.
=== Shortcomings ===
In general, the shortcomings of DNA barcoding are valid also for metabarcoding. One particular drawback of metabarcoding studies is that there is no consensus yet regarding the optimal experimental design and bioinformatics criteria to be applied in eDNA metabarcoding. However, there are ongoing joint efforts, such as the COST Action DNAqua-Net of the European Cooperation in Science and Technology, to move forward by exchanging experience and knowledge to establish best-practice standards for biomonitoring.
The so-called barcode is a region of mitochondrial DNA within the gene for cytochrome c oxidase. A database, Barcode of Life Data Systems (BOLD), contains DNA barcode sequences from over 190,000 species. However, scientists such as Rob DeSalle have expressed concern that classical taxonomy and DNA barcoding, which they consider a misnomer, need to be reconciled, as they delimit species differently. Genetic introgression mediated by endosymbionts and other vectors can further make barcodes ineffective in the identification of species.
=== Status of barcode species ===
In microbiology, genes can move freely even between distantly related bacteria, possibly extending to the whole bacterial domain. As a rule of thumb, microbiologists have assumed that kinds of Bacteria or Archaea with 16S ribosomal RNA gene sequences more similar than 97% to each other need to be checked by DNA-DNA hybridisation to decide if they belong to the same species or not. This concept was narrowed in 2006 to a similarity of 98.7%.
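A small worked example of these cutoffs, with hypothetical sequences: two 16S rRNA genes of 1,500 bp differing at 30 sites are 98.0% similar, which passes the old 97% rule but falls below the revised 98.7% threshold:

```python
# Two hypothetical 16S rRNA gene sequences of 1,500 bp differing at 30 sites.
length, mismatches = 1500, 30
similarity = 1 - mismatches / length  # 0.980

# Old rule: sequences above 97% similarity required further checking to
# decide conspecificity; the 2006 revision narrowed this cutoff to 98.7%.
print(f"{similarity:.1%}", similarity >= 0.97, similarity >= 0.987)
# -> 98.0% True False
```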
DNA-DNA hybridisation is outdated, and results have sometimes led to misleading conclusions about species, as with the pomarine and great skua. Modern approaches compare sequence similarity using computational methods.
== See also ==
Barcode of Life Data System (BOLD)
Consortium for the Barcode of Life (CBOL)
International Nucleotide Sequence Database Collaboration (INSDC)
Molecular marker
Taxonomic impediment
== References ==
== Further references ==
DNA barcoding methods for fish are used to identify groups of fish based on DNA sequences within selected regions of a genome. These methods can be used to study fish, as genetic material, in the form of environmental DNA (eDNA) or cells, is freely diffused in the water. This allows researchers to identify which species are present in a body of water by collecting a water sample, extracting DNA from the sample and isolating DNA sequences that are specific for the species of interest. Barcoding methods can also be used for biomonitoring and food safety validation, animal diet assessment, assessment of food webs and species distribution, and for detection of invasive species.
In fish research, barcoding can be used as an alternative to traditional sampling methods. Barcoding methods can often provide information without damage to the studied animal.
Aquatic environments have unique properties that affect how genetic material from organisms is distributed. DNA material diffuses rapidly in aquatic environments, which makes it possible to detect organisms from a large area when sampling a specific spot. Due to rapid degradation of DNA in aquatic environments, detected species represent contemporary presence, without confounding signals from the past.
DNA-based identification is fast, reliable and accurate in its characterization across life stages and species. Reference libraries are used to connect barcode sequences to single species and can be used to identify the species present in DNA samples. Libraries of reference sequences are also useful in identifying species in cases of morphological ambiguity, such as with larval stages.
eDNA samples and barcoding methods are used in water management, as species composition can be used as an indicator of ecosystem health. Barcoding and metabarcoding methods are particularly useful in studying endangered or elusive fish, as species can be detected without catching or harming the animals.
== Applications ==
=== Ecological monitoring ===
Biomonitoring of aquatic ecosystems is required by national and international legislation (e.g. the Water Framework Directive and the Marine Strategy Framework Directive). Traditional methods are time-consuming and include destructive practices that can harm individuals of rare or protected species. DNA barcoding is a relatively cost-effective and quick method for identifying fish species in aquatic environments. The presence or absence of key fish species can be established using eDNA from water samples, and the spatio-temporal distribution of fish species (e.g. the timing and location of spawning) can be studied. This can help reveal, for example, the impacts of physical barriers such as dam constructions and other human disturbances. DNA tools are also used in dietary studies of fish and in the construction of aquatic food webs: metabarcoding of fish gut contents or feces identifies recently consumed prey species. However, secondary predation must be taken into consideration.
=== Invasive species ===
Early detection is vital for control and removal of non-indigenous, ecologically harmful species (e.g. lionfish (Pterois sp.) in the Atlantic and Caribbean). Metabarcoding of eDNA can be used to detect cryptic or invasive species in aquatic ecosystems.
=== Fisheries management ===
Barcoding and metabarcoding approaches yield rigorous and extensive data on recruitment, ecology and geographic ranges of fisheries resources. The methods also improve knowledge of nursery areas and spawning grounds, with benefits for fisheries management. Traditional methods for fishery assessment, such as gillnet sampling or trawling, can be highly destructive; molecular methods offer an alternative for non-invasive sampling. For example, barcoding and metabarcoding can help identify fish eggs to species level to ensure reliable data for stock assessment, having proven more reliable than identification via phenotypic characters. Barcoding and metabarcoding are also powerful tools for monitoring fisheries quotas and by-catch.
eDNA can detect and quantify the abundance of some anadromous species as well as their temporal distribution. This approach can be used to develop appropriate management measures, of particular importance for commercial fisheries.
=== Food safety ===
Globalisation of food supply chains has led to increased uncertainty about the origin and safety of fish-based products. Barcoding can be used to validate the labelling of products and to trace their origin. "Fish fraud" has been discovered across the globe. A recent study of supermarkets in the state of New York found that 26.92% of seafood purchases with an identifiable barcode were mislabelled.
Barcoding can also trace fish species implicated in human health hazards related to the consumption of fish. Furthermore, biotoxins can become concentrated as they move up the food chain. One example concerns coral reef species, where predatory fish such as barracuda have been found to cause ciguatera fish poisoning. Such new associations of fish poisoning can be detected through fish barcoding.
=== Protection of endangered species ===
Barcoding can be used in the conservation of endangered species through the prevention of illegal trading of CITES-listed species. There is a large black market for fish-based products, as well as in the aquarium and pet trades. To protect sharks from overexploitation, illegal use can be detected by barcoding shark fin soup and traditional medicines.
== Methodology ==
=== Sampling in aquatic environments ===
Aquatic environments have special attributes that need to be considered when sampling for fish eDNA metabarcoding. Seawater sampling is of particular interest for assessment of health of marine ecosystems and their biodiversity. Although the dispersion of eDNA in seawater is large and salinity negatively influences DNA preservation, a water sample can contain high amounts of eDNA from fish up to one week after sampling. Free molecules, intestinal lining and skin cell debris are the main sources of fish eDNA.
In comparison to marine environments, ponds have biological and chemical properties that can alter eDNA detection. The small size of ponds compared to other water bodies makes them more sensitive to environmental conditions such as exposure to UV light and changes in temperature and pH. These factors can affect the amount of eDNA. Moreover, trees and dense vegetation around ponds represent a barrier that prevents water aeration by wind. Such barriers can also promote the accumulation of chemical substances that damage eDNA integrity. The heterogeneous distribution of eDNA in ponds may affect the detection of fishes. The availability of fish eDNA also depends on life stage, activity, seasonality and behavior; the largest amounts of eDNA are obtained during spawning, larval stages and breeding activity.
=== Target regions ===
Primer design is crucial for metabarcoding success. Some studies on primer development have described cytochrome B and 16S as suitable target regions for fish metabarcoding. Evans et al. (2016) showed that the Ac16S and L2513/H2714 primer sets are able to detect fish species accurately in different mesocosms. Another study, by Valentini et al. (2016), showed that the L1848/H1913 primer pair, which amplifies a region of the 12S rRNA locus, was able to reach high taxonomic coverage and discrimination even with a short target fragment. This research also showed that at 89% of sampling sites, the metabarcoding approach performed similarly to or better than traditional methods (e.g. electrofishing and netting). Hänfling et al. (2016) performed metabarcoding experiments focused on lake fish communities using the 12S_F1/12S_R1 and CytB_L14841/CytB_H15149 primer pairs, whose targets are located in the mitochondrial 12S and cytochrome B regions respectively. The results demonstrate that detection of fish species was higher when using the 12S primers than CytB. This was due to the persistence of the shorter 12S fragments (~100 bp) in comparison to the larger CytB amplicon (~460 bp). In general, these studies show that special consideration must be given to primer design and selection according to the objectives and nature of the experiment.
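The persistence advantage of the shorter 12S fragment can be given some intuition with a simple decay model. This model is an illustrative assumption, not taken from the cited studies: if each base survives damage independently with probability 1 - p, an intact amplifiable template of length L remains with probability (1 - p)**L.

```python
# Illustrative eDNA degradation model (an assumption for intuition only):
# each base survives damage independently with probability 1 - p, so an
# intact amplifiable template of length L remains with probability (1-p)**L.
p = 0.005  # hypothetical per-base damage probability

for marker, length in [("12S (~100 bp)", 100), ("CytB (~460 bp)", 460)]:
    intact = (1 - p) ** length
    print(f"{marker}: P(intact template) = {intact:.2f}")

# 12S (~100 bp): P(intact template) = 0.61
# CytB (~460 bp): P(intact template) = 0.10
```

Under this toy assumption the short 12S target is roughly six times more likely than the long CytB target to survive as an amplifiable template, consistent with the detection difference reported above.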
== Fish reference databases ==
There are a number of open access databases available to researchers worldwide. The proper identification of fish specimens with DNA barcoding methods relies heavily on the quality and species coverage of available sequence databases. A fish reference database is an electronic database that typically contains DNA barcodes, images, and geospatial coordinates of examined fish specimens. The database can also contain linkages to voucher specimens, information on species distributions, nomenclature, authoritative taxonomic information, collateral natural history information and literature citations. Reference databases may be curated, meaning that the entries are subjected to expert assessment before being included, or uncurated, in which case they may include a large number of reference sequences but with less reliable identification of species.
=== FISH-BOL ===
Launched in 2005, the Fish Barcode of Life Initiative (FISH-BOL, www.fishbol.org) is a concerted global research collaboration assembling standardized DNA barcode sequences, with associated voucher provenance data, into a curated reference library to aid the molecular identification of all fish species.
If researchers wish to contribute to the FISH-BOL reference library, clear guidelines are provided for specimen collection, imaging, preservation, and archival, as well as meta-data collection and submission protocols. The Fish-BOL database functions as a portal to the Barcode of Life Data Systems (BOLD).
=== French Polynesia Fish Barcoding Base ===
The French Polynesia Fish Barcoding Database contains all the specimens captured during several field trips organised or participated in by CRIOBE (Centre for Island Research and Environmental Observatory) since 2006 in the Archipelagos of French Polynesia. For each classified specimen, the following information can be available: scientific name, picture, date, GPS coordinate, depth and method of capture, size, and Cytochrome Oxidase c Subunit 1 (CO1) DNA sequence. The database can be searched using name (genus or species) or using a part of the CO1 DNA sequence.
=== Aquagene ===
A collaborative product developed by several German institutions, Aquagene provides free access to curated genetic information of marine fish species. The database allows species identification by DNA sequence comparisons. All species are characterized by multiple gene sequences, presently including the standard CO1 barcoding gene together with CYTB, MYH6 and (coming shortly) RHOD, facilitating unambiguous species determination even for closely related species or those with high intraspecific diversity. The genetic data is complemented online with additional data of the sampled specimen, such as digital images, voucher number and geographic origin.
=== Additional resources ===
Other, more general reference databases that may also be useful for barcoding fish are the Barcode of Life Data System and GenBank.
== Advantages ==
Barcoding/metabarcoding provides quick and usually reliable species identification, meaning that morphological identification, i.e. taxonomic expertise, is not needed. Metabarcoding also makes it possible to identify species when organisms are degraded or only part of an organism is available. It is a powerful tool for the detection of rare and/or invasive species, which can be detected despite low abundance. Traditional methods to assess fish biodiversity, abundance and density rely on gear such as nets, electrofishing equipment, trawls, cages and fyke-nets, which show reliable presence results only for abundant species. In contrast, rare native species, as well as newly established alien species, are less likely to be detected via traditional methods, leading to incorrect absence/presence assumptions. Barcoding/metabarcoding can also be a non-invasive sampling method, as it provides the opportunity to analyze eDNA, or to sample living organisms without harming them.
For fish parasites, metabarcoding allows the detection of cryptic or microscopic parasites from aquatic environments, which is difficult with more direct methods (e.g. identifying species from samples with microscopy). Some parasites exhibit cryptic variation, and metabarcoding can be a helpful method in revealing this.
The application of eDNA metabarcoding is cost-effective in large surveys or when many samples are required. eDNA can reduce the costs of fishing, transport of samples and time invested by taxonomists, and in most cases requires only small amounts of DNA from target species to reach reliable detection. Constantly decreasing prices for barcoding/metabarcoding due to technical development is another advantage. The eDNA approach is also suitable for monitoring of inaccessible environments.
== Challenges ==
The results obtained from metabarcoding are limited to, or biased towards, frequency-of-occurrence data rather than true abundances. It is also problematic that reference barcodes are still lacking for far from all species.
Even though metabarcoding may overcome some practical limitations of conventional sampling methods, there is still no consensus regarding experimental design and the bioinformatic criteria for application of eDNA metabarcoding. The lack of criteria is due to the heterogeneity of experiments and studies conducted so far, which dealt with different fish diversities and abundances, types of aquatic ecosystems, numbers of markers and marker specificities.
Another significant challenge for the method is how to quantify fish abundance from molecular data. Although there are some cases in which quantification has been possible, there appears to be no consensus on how, or to what extent, molecular data can meet this aim for fish monitoring.
== See also ==
DNA barcoding
DNA barcoding in diet assessment
Algae DNA barcoding
Microbial DNA barcoding
Aquatic macroinvertebrate DNA barcoding
== References == | Wikipedia/Fish_DNA_barcoding |
DNA barcoding in diet assessment is the use of DNA barcoding to analyse the diet of organisms and further detect and describe their trophic interactions. This approach is based on identifying consumed species by characterizing the DNA present in dietary samples, e.g. individual food remains, regurgitates, gut and fecal samples, or the homogenized body of the organism that is the target of the diet study (for example, whole insect bodies).
The DNA sequencing approach to be adopted depends on the diet breadth of the target consumer. For organisms feeding on one or only a few species, traditional Sanger sequencing can be used. For polyphagous species with diet items that are more difficult to identify, all consumed species can instead be determined using next-generation sequencing (NGS) methodology.
The barcode markers used for amplification differ depending on the diet of the target organism. For herbivore diets, the choice of standard DNA barcode loci depends on the targeted plant taxonomic level: for identifying plant tissue to family or genus level, the markers rbcL and the trnL intron are used, whereas the loci ITS2, matK and trnH-psbA (a noncoding intergenic spacer) are used to identify diet items to genus and species level. For animal prey, the most broadly used DNA barcode markers for diet identification are the mitochondrial cytochrome c oxidase I (COI) and cytochrome b (cytb) genes. When the diet is broad and diverse, DNA metabarcoding is used to identify most of the consumed items.
== Advantages ==
A major benefit of using DNA barcoding in diet assessment is the ability to provide high taxonomic resolution of consumed species. Indeed, compared to traditional morphological analysis, DNA barcoding enables a more reliable separation of closely related taxa, reducing observer bias. Moreover, DNA barcoding makes it possible to detect soft and highly digested items that are not recognisable through morphological identification. For example, arachnids feed on the pre-digested bodies of insects or other small animals, and their stomach contents are too decomposed and morphologically unrecognizable to be analysed with traditional methods such as microscopy.
When investigating herbivore diets, DNA metabarcoding enables the detection of highly digested plant items, with a higher number of taxa identified compared to microhistology and macroscopic analysis. For instance, Nichols et al. (2016) highlighted the taxonomic precision of metabarcoding on rumen contents, with on average 90% of DNA sequences identified to genus or species level, compared to 75% of plant fragments recognised with macroscopy. Moreover, another empirically tested advantage of metabarcoding over traditional, time-consuming methods is its higher cost efficiency. Finally, with its fine resolution, DNA barcoding represents a crucial tool in wildlife management for identifying the feeding habits of endangered species and of animals that can cause feeding damage to the environment.
== Challenges ==
With DNA barcoding it is not possible to retrieve information about the sex or age of prey species, which can be crucial. This limitation can, however, be overcome with an additional analysis step using microsatellite polymorphism and Y-chromosome amplification. Moreover, DNA provides detailed information on the most recent feeding events (e.g. 24–48 h), but it cannot provide a longer dietary perspective unless sampling is continuous. Additionally, when using generic primers that amplify 'barcode' regions from a broad range of food species, amplifiable host DNA may largely outnumber prey DNA, complicating prey detection. A strategy to prevent host DNA amplification is the addition of a predator-specific blocking primer. Indeed, blocking primers that suppress the amplification of predator DNA allow the amplification of the other vertebrate groups and produce amplicon mixes that are predominantly food DNA.
Despite the improvement of diet assessment via DNA barcoding, secondary consumption (prey of the prey, parasites, etc.) still represents a confounding factor. Some secondary prey may appear in the analysis as primary prey items, introducing a bias. However, owing to a much lower total biomass and a higher level of degradation, DNA of secondary prey may represent only a minor fraction of the recovered sequences compared to primary prey.
The quantitative interpretation of DNA barcoding results is not straightforward. There have been attempts to use the number of sequences recovered to estimate the abundance of prey species in diet contents (e.g. gut contents, faeces). For example, if a wolf ate more moose than wild boar, there should be more moose DNA in its gut, and thus more moose sequences should be recovered. Despite evidence for general correlations between sequence numbers and biomass, actual evaluations of this method have been unsuccessful. This can be explained by the fact that tissues originally contain different densities of DNA and can be digested differently.
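To make the limitation concrete, read counts are usually summarized as relative read abundance (RRA) per prey taxon. A minimal Python sketch, using invented read counts rather than data from any real study:

```python
# Relative read abundance (RRA) per prey taxon in one hypothetical faecal sample.
# Read numbers only loosely track ingested biomass, because tissues differ
# in DNA density and in how thoroughly they are digested.
reads = {"moose": 4200, "wild boar": 1300, "beaver": 500}
total = sum(reads.values())
for prey, n in reads.items():
    print(f"{prey}: {n / total:.1%} of reads")
# moose: 70.0% of reads -- not necessarily 70% of the ingested biomass
```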
== Examples ==
=== Mammals ===
The diets of mammals are widely studied using DNA barcoding and metabarcoding. Some differences in methodology can be observed depending on the feeding strategy of the target mammal species, i.e. whether it is a herbivore, carnivore or omnivore.
For herbivorous mammal species, DNA is usually extracted from faeces or from rumen contents collected from road kills or animals killed during regular hunting. Within DNA barcoding, the trnL approach can be used to identify plant species using a very short but informative fragment of chloroplast DNA (the P6 loop of the chloroplast trnL (UAA) intron). This approach is potentially applicable to all herbivorous species feeding on angiosperms and gymnosperms. Alternatively, the markers rbcL, ITS2, matK and trnH-psbA can be used to amplify plant species. When studying small herbivores with a cryptic lifestyle, such as voles and lemmings, DNA barcoding of ingested plants can be a crucial tool for obtaining an accurate picture of food utilization. Additionally, the fine resolution in plant identification obtained with DNA barcoding allows researchers to understand changes in diet composition over time and variability among individuals, as observed in the alpine chamois (Rupicapra rupicapra): analysis of faeces composition via DNA barcoding revealed a shift in diet preferences between October and November, and different diet categories were observed among individuals within each month.
For carnivores, the use of non-invasive approaches is crucial, especially when dealing with elusive and endangered species. Diet assessment through DNA barcoding of faeces can be more efficient at detecting prey species than traditional diet analysis, which mostly relies on the morphological identification of undigested hard remains in the faeces. Estimating the vertebrate diet diversity of the leopard cat (Prionailurus bengalensis) in Pakistan, Shehzad et al. (2012) identified a total of 18 prey taxa using DNA barcoding of faeces. Eight distinct bird taxa were reported, while previous studies based on conventional methods had not identified any bird species in the leopard cat diet. Another example is the use of DNA barcoding to identify soft remains of prey in the stomach contents of predators, e.g. grey seals (Halichoerus grypus) and harbour porpoises (Phocoena phocoena). DNA metabarcoding is a game changer for the study of complex diets, such as those of omnivorous predators, which feed on many different species of both plant and animal origin. This methodology does not require prior knowledge of the food consumed by animals in the habitat they occupy. In a study of brown bear (Ursus arctos) diet, DNA metabarcoding allowed accurate reconstruction of a wide range of taxonomically different items present in faecal samples collected in the field.
=== Birds ===
=== Fish ===
=== Arthropods ===
== See also ==
Fish DNA barcoding
Aquatic macroinvertebrate DNA barcoding
Microbial DNA barcoding
Algae DNA barcoding
Pollen DNA barcoding
== References == | Wikipedia/DNA_barcoding_in_diet_assessment |
DNA barcoding of algae is commonly used for species identification and phylogenetic studies. Algae form a phylogenetically heterogeneous group, meaning that the application of a single universal barcode/marker for species delimitation is unfeasible, thus different markers/barcodes are applied for this aim in different algal groups.
== Diatoms ==
Diatom DNA barcoding is a method for the taxonomic identification of diatoms, even to species level. It is conducted by extracting DNA or RNA, amplifying and sequencing specific, conserved regions of the diatom genome, and then taxonomically assigning the resulting sequences.
One of the main challenges in identifying diatoms is that they are often collected as a mixture of several species. DNA metabarcoding is the process of identifying individual species from such a mixed sample of environmental DNA (also called eDNA), which is DNA extracted directly from the environment, such as soil or water samples.
A more recently applied method is diatom DNA metabarcoding, which is used for the ecological quality assessment of rivers and streams because of the specific responses of diatoms to particular ecological conditions. Since species identification via morphology is relatively difficult and requires much time and expertise, high-throughput sequencing (HTS) DNA metabarcoding enables taxonomic assignment, and therefore identification, for the complete sample, based on the group-specific primers chosen for the preceding DNA amplification.
Several DNA markers have been developed to date, mainly targeting the 18S rRNA gene. Using the V4 hypervariable region of the ribosomal small subunit DNA (SSU rDNA), DNA-based identification was found to be more efficient than the classical morphology-based approach. Other conserved genome regions frequently used as marker genes are ribulose-1,5-bisphosphate carboxylase (rbcL), cytochrome c oxidase I (cox1, COI), ITS and 28S. It has been shown repeatedly that the molecular data gained by diatom eDNA metabarcoding quite faithfully reflect the morphology-based biotic diatom indices and therefore provide a similar assessment of ecosystem status. Diatoms are meanwhile routinely used for the assessment of ecological quality in other freshwater ecosystems. Together with aquatic invertebrates, they are considered the best indicators of disturbance related to the physical, chemical or biological conditions of watercourses. Numerous studies use benthic diatoms for biomonitoring. Because no ideal diatom DNA barcode has been found, it has been proposed that different markers be used for different purposes: the highly variable cox1, ITS and 28S genes are considered more suitable for taxonomic studies, while the more conserved 18S and rbcL genes seem more appropriate for biomonitoring.
=== Advantages ===
Applying the DNA barcoding concept to diatoms holds great potential for resolving the problem of inaccurate species identification and thus facilitating analyses of the biodiversity of environmental samples.
Molecular methods based on NGS technology almost always lead to a higher number of identified taxa, whose presence can subsequently be verified by light microscopy. Such results provide evidence that eDNA barcoding of diatoms is suitable for water quality assessment and could complement or improve traditional methods. Stoeck et al. also showed that eDNA barcoding provides more insight into the diversity of diatoms and other protist communities and therefore could be used for ecological projections of global diversity. Other studies showed different results; for example, inventories obtained with the molecular method were closer to those obtained with the morphology-based method when abundant species were in focus.
DNA metabarcoding can also increase taxonomic resolution and comparability across geographic regions, which is often difficult using morphological characters only. Moreover, DNA-based identification allows the range of potential bioindicators to be extended, including inconspicuous taxonomic groups that may be highly sensitive or tolerant to particular stressors. Indirectly, molecular methods can also help fill gaps in the knowledge of species ecology, by increasing the number of samples processed while decreasing processing time (cost-effectiveness), as well as by increasing the accuracy and precision of correlations between species/MOTU occurrences and environmental factors.
=== Challenges ===
Currently there is no consensus concerning methods of DNA preservation and isolation, the choice of DNA barcodes and PCR primers, or the parameters for MOTU clustering and taxonomic assignment. Sampling and molecular steps need to be standardized through development studies. One of the major limitations is the availability of reference barcodes for diatom species. The reference database of bioindicator taxa is far from complete; despite the constant efforts of numerous national barcoding initiatives, many species still lack barcode information. Furthermore, most existing metabarcoding data are only locally available and geographically scattered, which hinders the development of globally useful tools. Visco et al. estimated that no more than 30% of European diatom species are currently represented in reference databases. For example, there is an important gap for a number of species from Fennoscandian communities (especially acidophilic diatoms, such as Eunotia incisa). It has also been shown that taxonomic identification with DNA barcoding is not accurate below species level, for example to discriminate varieties.
Another well-known limitation of barcoding for taxonomic identification is the clustering method used before taxonomic assignment: it often leads to a massive loss of genetic information, and the only reliable way to assess the effects of different clustering and taxonomic assignment processes would be to compare the species lists generated by different pipelines using the same reference database. This has yet to be done for the variety of pipelines used in the molecular assessment of diatom communities in Europe. Taxonomically validated databases that include accessible vouchers are also crucial for reliable taxon identification via NGS.
Additionally, primer bias is often found to be a major source of variation in barcoding, and PCR primer efficiency can differ between diatom species, i.e. some primers lead to preferential amplification of one taxon over another.
The inference of abundance from metabarcoding data is considered one of the most difficult issues in environmental applications. The number of sequences generated by HTS does not directly correspond to the number of specimens or to biomass, and different species can produce different numbers of reads (for example, due to differences in chloroplast size when using the rbcL marker). Vasselon et al. recently created a biovolume correction factor for the rbcL marker. For example, Achnanthidium minutissimum has a small biovolume and will therefore generate fewer copies of the rbcL fragment (located in the chloroplast) than larger species. This correction factor, however, requires extensive calibration against each species' biovolume and has so far been tested on only a few species. Fluctuations in gene copy number for other markers, such as the 18S marker, do not seem to be species-specific, but have not yet been tested.
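The logic of such a correction can be sketched as follows; the species factors below are hypothetical placeholders, not the calibration values published by Vasselon et al.:

```python
# Sketch of a biovolume-style correction applied to rbcL read counts.
# Larger cells carry more chloroplast rbcL copies, so raw read counts
# over-represent large species; dividing by a species-specific factor
# (calibrated against biovolume) rebalances the counts.
raw_reads = {"Achnanthidium minutissimum": 1200, "Large_species_X": 4800}
factor = {"Achnanthidium minutissimum": 0.4, "Large_species_X": 2.5}  # placeholders

corrected = {sp: raw_reads[sp] / factor[sp] for sp in raw_reads}
total = sum(corrected.values())
for sp, n in corrected.items():
    print(sp, f"{n / total:.1%}")  # corrected relative abundance
```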
=== Diatom target regions ===
Barcoding markers usually combine hypervariable regions of the genome (to allow distinction between species) with highly conserved regions (to ensure specificity for the target organism). Several DNA markers belonging to the nuclear, mitochondrial and chloroplast genomes (rbcL, COI, ITS+5.8S, SSU, 18S, ...) have been designed and successfully used for diatom identification with NGS.
==== 18S and V4 subunit ====
The 18S gene region has been widely used as a marker in other protist groups, and Jahn et al. were the first to test it for diatom barcoding. Zimmerman et al. proposed a 390–410 bp fragment of the 1800 bp 18S rRNA gene locus as a barcode marker for the analysis of environmental samples with HTS, and discussed its use and limitations for diatom identification. This fragment includes the V4 subunit, the largest and most complex of the highly variable regions within the 18S locus. They highlighted that this hypervariable region of the 18S gene has great potential for studying protist diversity at large scale, but limited efficiency for identification below species level or of cryptic species.
==== rbcL ====
The rbcL gene is used for taxonomic studies (Trobajo et al. 2009); its benefits include that it shows hardly any intragenomic variation and is easily aligned and compared. An open-access reference library, called R-Syst::diatom, includes data for two barcodes (18S and rbcL) and is freely accessible through a website. Kermarrec et al. also successfully used the rbcL gene for the ecological assessment of diatoms.
==== ITS-2 + 5.8S ====
Moniz and Kaczmarska investigated the amplification success of the SSU, COI and ITS2 markers and found that the 300–400 bp ITS-2 + 5.8S fragment provided the highest amplification success rate and good species resolution. This marker was subsequently used to separate morphologically defined species with a success rate of 99.5%. Despite this amplification success, Zimmerman et al. criticised the use of ITS-2 because of its intra-individual heterogeneity. It has been suggested that the SSU or rbcL markers (Mann et al., 2010) are less heterogeneous between individuals and therefore more useful for distinguishing between species.
=== Applications ===
==== Genetic tool for biomonitoring and bioassessment ====
Diatoms are routinely used as part of the suite of biomonitoring tools required under the European Water Framework Directive. They are used as indicators of ecosystem health in fresh waters because they are ubiquitous, are directly affected by changes in physico-chemical parameters and show a better relationship with environmental variables than other taxa, e.g. invertebrates, giving a better overall picture of water quality.
In recent years, researchers have developed and standardised tools for the metabarcoding and sequencing of diatoms to complement traditional microscopy-based assessment, opening up a new avenue of biomonitoring for aquatic systems. Applying a next-generation sequencing approach to benthic diatoms has shown good potential for river biomonitoring. Many studies have demonstrated that metabarcoding and HTS (high-throughput sequencing) can be used to estimate the quality status and diversity of fresh waters. Working with the Environment Agency, Kelly et al. developed a DNA-based metabarcoding approach to assess diatom communities in UK rivers. Vasselon et al. compared morphological and HTS approaches for diatoms and found that HTS gave a reliable indication of quality status for most rivers in terms of the Specific Polluosensitivity Index (SPI). Vasselon et al. also applied DNA metabarcoding of diatom communities to the river monitoring network of the tropical island of Mayotte (a French overseas department).
Rimet et al. also explored the possibility of using HTS to assess diatom diversity and showed that diversity indices from HTS and microscopic analysis were well correlated, although not perfectly.
DNA barcoding and metabarcoding can be used to establish molecular metrics and indices, which potentially provide conclusions broadly similar to those of the traditional approaches about the ecological and environmental status of aquatic ecosystems.
==== Forensics ====
Diatoms are used as a diagnostic tool for drowning in forensic practice. The diatom test is based on the principle that diatoms are inhaled with water into the lungs and then distributed and deposited around the body. DNA methods can be used to confirm whether the cause of death was indeed drowning and to locate the site of drowning. Diatom DNA metabarcoding provides the opportunity to quickly analyse the diatom community present within a body, locate the site of drowning, and investigate whether a body may have been moved from one place to another.
==== Cryptic species and databasing ====
Diatom metabarcoding may help delimit cryptic species that are difficult to identify using microscopy and help complete reference databases by comparing morphological assemblages to metabarcoding data.
== Other Microalgae ==
Chlorophytes form an ancient and taxonomically very diverse lineage (Fang et al. 2014), which also includes the terrestrial plants. Even though more than 14,000 species have been described based on structural and ultrastructural criteria (Hall et al. 2010), their morphological identification is often limited.
Several barcodes have been proposed for the DNA-based identification of chlorophytes in order to bypass the problems of morphological identification. Although the cytochrome c oxidase I (COI, COX) coding gene is the standard barcode for animals, it has proved unsatisfactory for chlorophytes because the gene contains several introns in this algal group (Turmel et al. 2002). Nuclear marker genes that have been used for chlorophytes include SSU rDNA, LSU rDNA and the rDNA ITS (Leliaert et al. 2014).
== Macroalgae ==
Macroalgae (a morphological rather than taxonomic grouping) can be very challenging to identify because of their simple morphology, phenotypic plasticity and alternating life-cycle stages. Algal systematics and identification have therefore come to rely heavily on genetic and molecular tools such as DNA barcoding. The SSU rDNA gene is a commonly used barcode for phylogenetic studies of macroalgae; however, SSU rDNA is highly conserved and typically lacks resolution for species identification.
Over the past two decades, standards for DNA barcoding aimed at species identification have been developed for each of the main groups of macroalgae. The cytochrome c oxidase subunit I (COI) gene is commonly used as a barcode for red and brown algae, while tufA (plastid elongation factor), rbcL (RuBisCO large subunit) and ITS (internal transcribed spacer) are commonly used for green algae. These barcodes are typically 600–700 bp long.
The barcodes typically differ between the three main groups of macroalgae (red, green and brown) because their evolutionary origins are very diverse. Macroalgae are a polyphyletic group, meaning that they do not all share a recent common ancestor, which makes it challenging to find a gene that is conserved across all groups yet variable enough for species identification.
== Target regions ==
== See also ==
Detailed information on DNA barcoding of different organisms can be found here:
Microbial DNA barcoding
DNA barcoding
Fish DNA barcoding
DNA barcoding in diet assessment
== References == | Wikipedia/Algae_DNA_barcoding |
DNA barcoding is a method of species identification using a short section of DNA from a specific gene or genes. The premise of DNA barcoding is that by comparison with a reference library of such DNA sections (also called "sequences"), an individual sequence can be used to uniquely identify an organism to species, just as a supermarket scanner uses the familiar black stripes of the UPC barcode to identify an item in its stock against its reference database. These "barcodes" are sometimes used in an effort to identify unknown species or parts of an organism, simply to catalog as many taxa as possible, or to compare with traditional taxonomy in an effort to determine species boundaries.
Different gene regions are used to identify the different organismal groups using barcoding. The most commonly used barcode region for animals and some protists is a portion of the cytochrome c oxidase I (COI or COX1) gene, found in mitochondrial DNA. Other genes suitable for DNA barcoding are the internal transcribed spacer (ITS) rRNA, often used for fungi, and RuBisCO, used for plants. Microorganisms are detected using different gene regions. The 16S rRNA gene, for example, is widely used in the identification of prokaryotes, whereas the 18S rRNA gene is mostly used for detecting microbial eukaryotes. These gene regions are chosen because they have less intraspecific (within species) variation than interspecific (between species) variation, which is known as the "Barcoding Gap".
Some applications of DNA barcoding include: identifying plant leaves even when flowers or fruits are not available; identifying pollen collected on the bodies of pollinating animals; identifying insect larvae which may have fewer diagnostic characters than adults; or investigating the diet of an animal based on its stomach content, saliva or feces. When barcoding is used to identify organisms from a sample containing DNA from more than one organism, the term DNA metabarcoding is used, e.g. DNA metabarcoding of diatom communities in rivers and streams, which is used to assess water quality.
== Background ==
DNA barcoding techniques were developed from early DNA sequencing work on microbial communities using the 5S rRNA gene. In 2003, specific methods and terminology of modern DNA barcoding were proposed as a standardized method for identifying species, as well as potentially allocating unknown sequences to higher taxa such as orders and phyla, in a paper by Paul D.N. Hebert et al. from the University of Guelph, Ontario, Canada. Hebert and his colleagues demonstrated the utility of the cytochrome c oxidase I (COI) gene, first utilized by Folmer et al. in 1994, whose published DNA primers had proven a suitable tool for species-level phylogenetic analyses capable of discriminating among metazoan invertebrates. The "Folmer region" of the COI gene is commonly used for distinguishing between taxa based on its patterns of variation at the DNA level. The relative ease of retrieving the sequence, and its variability mixed with conservation between species, are some of the benefits of COI. Calling the profiles "barcodes", Hebert et al. envisaged the development of a COI database that could serve as the basis for a "global bioidentification system".
== Methods ==
=== Sampling and preservation ===
Barcoding can be done from tissue from a target specimen, from a mixture of organisms (bulk sample), or from DNA present in environmental samples (e.g. water or soil). The methods for sampling, preservation or analysis differ between those different types of sample.
Tissue samples
To barcode a tissue sample from the target specimen, a small piece of skin, a scale, a leg or antenna is likely to be sufficient (depending on the size of the specimen). To avoid contamination, it is necessary to sterilize used tools between samples. It is recommended to collect two samples from one specimen, one to archive, and one for the barcoding process. Sample preservation is crucial to overcome the issue of DNA degradation.
Bulk samples
A bulk sample is a type of environmental sample containing several organisms from the taxonomic group under study. The difference between bulk samples (in the sense used here) and other environmental samples is that the bulk sample usually provides a large quantity of good-quality DNA. Examples of bulk samples include aquatic macroinvertebrate samples collected by kick-net, or insect samples collected with a Malaise trap. Filtered or size-fractionated water samples containing whole organisms like unicellular eukaryotes are also sometimes defined as bulk samples. Such samples can be collected by the same techniques used to obtain traditional samples for morphology-based identification.
eDNA samples
The environmental DNA (eDNA) method is a non-invasive approach to detect and identify species from cellular debris or extracellular DNA present in environmental samples (e.g. water or soil) through barcoding or metabarcoding. The approach is based on the fact that every living organism leaves DNA in the environment, and this environmental DNA can be detected even for organisms that are at very low abundance. Thus, for field sampling, the most crucial part is to use DNA-free material and tools on each sampling site or sample to avoid contamination, if the DNA of the target organism(s) is likely to be present in low quantities. On the other hand, an eDNA sample always includes the DNA of whole-cell, living microorganisms, which are often present in large quantities. Therefore, microorganism samples taken in the natural environment also are called eDNA samples, but contamination is less problematic in this context due to the large quantity of target organisms. The eDNA method is applied on most sample types, like water, sediment, soil, animal feces, stomach content or blood from e.g. leeches.
=== DNA extraction, amplification and sequencing ===
DNA barcoding requires that DNA in the sample is extracted. Several different DNA extraction methods exist, and factors like cost, time, sample type and yield affect the selection of the optimal method.
When DNA from organismal or eDNA samples is amplified using polymerase chain reaction (PCR), the reaction can be affected negatively by inhibitor molecules contained in the sample. Removal of these inhibitors is crucial to ensure that high-quality DNA is available for subsequent analysis.
Amplification of the extracted DNA is a required step in DNA barcoding. Typically, only a small fragment of the total DNA material is sequenced (usually 400–800 base pairs) to obtain the DNA barcode. Amplification of eDNA material is usually focused on smaller fragment sizes (<200 base pairs), as eDNA is more likely to be fragmented than DNA material from other sources. However, some studies argue that there is no relationship between amplicon size and detection rate of eDNA.
When the DNA barcode marker region has been amplified, the next step is to sequence the marker region using DNA sequencing methods. Many different sequencing platforms are available, and technical development is proceeding rapidly.
=== Marker selection ===
Markers used for DNA barcoding are called barcodes. In order to successfully characterize species based on DNA barcodes, selection of informative DNA regions is crucial. A good DNA barcode should have low intra-specific and high inter-specific variability and possess conserved flanking sites for developing universal PCR primers for wide taxonomic application. The goal is to design primers that will detect and distinguish most or all the species in the studied group of organisms (high taxonomic resolution). The length of the barcode sequence should be short enough to be used with current sampling source, DNA extraction, amplification and sequencing methods.
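The "low intra-specific, high inter-specific variability" criterion (the barcoding gap) can be illustrated with a toy calculation of uncorrected pairwise distances (p-distances) between aligned sequences; the sequences below are invented for illustration only:

```python
def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Toy aligned barcodes: two conspecific individuals vs. a second species.
species1_ind1 = "ACGTACGTACGTACGT"
species1_ind2 = "ACGTACGTACGAACGT"  # one difference   -> intra ~6%
species2_ind1 = "ACCTATGTTCGAATGA"  # six differences  -> inter ~38%

intra = p_distance(species1_ind1, species1_ind2)
inter = p_distance(species1_ind1, species2_ind1)
print(f"intra-specific: {intra:.1%}, inter-specific: {inter:.1%}")
# A clear gap between the two values is what makes the marker a good barcode.
```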
Ideally, one gene sequence would be used for all taxonomic groups, from viruses to plants and animals. However, no such gene region has been found yet, so different barcodes are used for different groups of organisms, or depending on the study question.
For animals, the most widely used barcode is the mitochondrial cytochrome c oxidase I (COI) locus. Other mitochondrial genes, such as Cytb, 12S or 16S, are also used. Mitochondrial genes are preferred over nuclear genes because of their lack of introns, their haploid mode of inheritance and their limited recombination. Moreover, each cell contains numerous mitochondria (up to several thousand), and each of them contains several circular DNA molecules. Mitochondria can therefore offer an abundant source of DNA even when sample tissue is limited.
In plants, however, mitochondrial genes are not appropriate for DNA barcoding because they exhibit low mutation rates. A few candidate genes have been found in the chloroplast genome, the most promising being maturase K gene (matK) by itself or in association with other genes. Multi-locus markers such as ribosomal internal transcribed spacers (ITS DNA) along with matK, rbcL, trnH or other genes have also been used for species identification. The best discrimination between plant species has been achieved when using two or more chloroplast barcodes.
For bacteria, the small subunit of ribosomal RNA (16S) gene can be used for different taxa, as it is highly conserved. Some studies suggest COI, type II chaperonin (cpn60) or β subunit of RNA polymerase (rpoB) also could serve as bacterial DNA barcodes.
Barcoding fungi is more challenging, and more than one primer combination might be required. The COI marker performs well in certain fungi groups, but not equally well in others. Therefore, additional markers are being used, such as ITS rDNA and the large subunit of nuclear ribosomal RNA (28S LSU rRNA).
Within the group of protists, various barcodes have been proposed, such as the D1–D2 or D2–D3 regions of 28S rDNA, V4 subregion of 18S rRNA gene, ITS rDNA and COI. Additionally, some specific barcodes can be used for photosynthetic protists, for example the large subunit of ribulose-1,5-bisphosphate carboxylase-oxygenase gene (rbcL) and the chloroplastic 23S rRNA gene.
== Reference libraries and bioinformatics ==
Reference libraries are used for the taxonomic identification, also called annotation, of sequences obtained from barcoding or metabarcoding. These databases contain the DNA barcodes assigned to previously identified taxa. Most reference libraries do not cover all species within an organism group, and new entries are continually created. In the case of macro- and many microorganisms (such as algae), these reference libraries require detailed documentation (sampling location and date, person who collected it, image, etc.) and authoritative taxonomic identification of the voucher specimen, as well as submission of sequences in a particular format. However, such standards are fulfilled for only a small number of species. The process also requires the storage of voucher specimens in museum collections, herbaria and other collaborating institutions. Both taxonomically comprehensive coverage and content quality are important for identification accuracy. In the microbial world, there is no DNA information for most species names, and many DNA sequences cannot be assigned to any Linnaean binomial. Several reference databases exist depending on the organism group and the genetic marker used. There are smaller, national databases (e.g. FinBOL), and large consortia like the International Barcode of Life Project (iBOL).
=== BOLD ===
Launched in 2007, the Barcode of Life Data System (BOLD) is one of the biggest databases, containing about 780 000 BINs (Barcode Index Numbers) in 2022. It is a freely accessible repository for the specimen and sequence records for barcode studies, and it is also a workbench aiding the management, quality assurance and analysis of barcode data. The database mainly contains BIN records for animals based on the COI genetic marker. For plant identification, BOLD accepts sequences from matK and rbcL.
=== UNITE ===
The UNITE database was launched in 2003 and is a reference database for the molecular identification of fungal (and, since 2018, all eukaryotic) species with the nuclear ribosomal internal transcribed spacer (ITS) genetic marker region. This database is based on the concept of species hypotheses: the user chooses the similarity percentage at which to work, and the sequences are sorted accordingly by comparison with sequences obtained from voucher specimens identified by experts.
=== Diat.barcode ===
The Diat.barcode database was first published under the name R-syst::diatom in 2016, starting with data from two sources: the Thonon culture collection (TCC) at the hydrobiological station of the French National Institute for Agricultural Research (INRA), and the NCBI (National Center for Biotechnology Information) nucleotide database. Diat.barcode provides data for two genetic markers, rbcL (ribulose-1,5-bisphosphate carboxylase/oxygenase) and 18S (18S ribosomal RNA). The database also includes additional trait information for species, such as morphological characteristics (biovolume, size dimensions, etc.), life-forms (mobility, colony type, etc.) and ecological features (pollution sensitivity, etc.).
=== Bioinformatic analysis ===
In order to obtain well-structured, clean and interpretable data, raw sequencing data must be processed using bioinformatic analysis. The FASTQ file with the sequencing data contains two types of information: the sequences detected in the sample (as in a FASTA file) and the quality scores (PHRED scores) associated with each nucleotide of each DNA sequence. A PHRED score indicates the probability that the associated nucleotide has been correctly called.
In general, the PHRED score decreases towards the end of each DNA sequence. Thus some bioinformatics pipelines simply cut the end of the sequences at a defined threshold.
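Concretely, a PHRED score Q corresponds to an error probability P = 10^(−Q/10), so Q20 means a 1% chance that the base call is wrong. A minimal Python sketch of threshold-based end-trimming, with made-up read data and an arbitrary Q20 cutoff (not any specific pipeline's implementation):

```python
def error_probability(q: int) -> float:
    """Convert a PHRED quality score to a base-call error probability."""
    return 10 ** (-q / 10)

def trim_3prime(seq: str, quals: list[int], threshold: int = 20) -> str:
    """Cut a read at the first position whose quality drops below the threshold."""
    for i, q in enumerate(quals):
        if q < threshold:
            return seq[:i]
    return seq

read = "ACGTACGTAA"
scores = [38, 37, 36, 35, 30, 28, 22, 19, 10, 8]  # typical end-of-read decay
print(error_probability(20))      # 0.01 -> 1% error at Q20
print(trim_3prime(read, scores))  # "ACGTACG": trimmed at the first Q < 20
```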
Some sequencing technologies, like MiSeq, use paired-end sequencing during which sequencing is performed from both directions producing better quality. The overlapping sequences are then aligned into contigs and merged. Usually, several samples are pooled in one run, and each sample is characterized by a short DNA fragment, the tag. In a demultiplexing step, sequences are sorted using these tags to reassemble the separate samples. Before further analysis, tags and other adapters are removed from the barcoding sequence DNA fragment. During trimming, the bad quality sequences (low PHRED scores), or sequences that are much shorter or longer than the targeted DNA barcode, are removed. The following dereplication step is the process where all of the quality-filtered sequences are collapsed into a set of unique reads (individual sequence units ISUs) with the information of their abundance in the samples. After that, chimeras (i.e. compound sequences formed from pieces of mixed origin) are detected and removed. Finally, the sequences are clustered into OTUs (Operational Taxonomic Units), using one of many clustering strategies. The most frequently used bioinformatic software include Mothur, Uparse, Qiime, Galaxy, Obitools, JAMP, Barque, and DADA2.
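As an example, the dereplication step described above can be reduced to counting identical reads. A toy sketch, not any particular pipeline's implementation:

```python
from collections import Counter

# Toy dereplication: collapse quality-filtered reads into unique sequences
# (ISUs) together with their abundance in the sample.
reads = ["ACGT", "ACGT", "ACGA", "ACGT", "ACGA", "TTGC"]
isus = Counter(reads)
# Sorting by abundance, as pipelines commonly do before chimera removal
# and OTU clustering.
for seq, count in isus.most_common():
    print(seq, count)   # ACGT 3 / ACGA 2 / TTGC 1
```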
Comparing the abundance of reads, i.e. sequences, between different samples is still a challenge because both the total number of reads in a sample as well as the relative amount of reads for a species can vary between samples, methods, or other variables. For comparison, one may then reduce the number of reads of each sample to the minimal number of reads of the samples to be compared – a process called rarefaction. Another way is to use the relative abundance of reads.
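A hedged sketch of rarefaction, assuming two hypothetical samples with unequal sequencing depth; each sample is randomly subsampled down to the smallest sample's total:

```python
import random

def rarefy(counts: dict[str, int], depth: int, seed: int = 42) -> dict[str, int]:
    """Randomly subsample a sample's reads down to a common depth."""
    rng = random.Random(seed)
    pool = [taxon for taxon, n in counts.items() for _ in range(n)]
    sub = rng.sample(pool, depth)
    return {t: sub.count(t) for t in set(sub)}

sample_a = {"OTU_1": 900, "OTU_2": 80, "OTU_3": 20}   # 1000 reads
sample_b = {"OTU_1": 300, "OTU_2": 150, "OTU_3": 50}  # 500 reads
depth = 500  # the smaller sample's total
print(rarefy(sample_a, depth))  # sample_a reduced to 500 reads
print(sample_b)                 # already at the common depth
```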
=== Species identification and taxonomic assignment ===
The taxonomic assignment of the OTUs to species is achieved by matching of sequences to reference libraries. The Basic Local Alignment Search Tool (BLAST) is commonly used to identify regions of similarity between sequences by comparing sequence reads from the sample to sequences in reference databases. If the reference database contains sequences of the relevant species, then the sample sequences can be identified to species level. If a sequence cannot be matched to an existing reference library entry, DNA barcoding can be used to create a new entry.
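The matching logic can be illustrated with a naive best-hit search against a tiny mock reference library; real pipelines use BLAST or dedicated classifiers, and the sequences and the 97% identity threshold here are illustrative assumptions:

```python
from difflib import SequenceMatcher

# Mock reference library mapping barcode sequences to species names.
reference = {
    "ACGTTGCAATCGGTA": "Species alpha",
    "ACGTAGCAATCGCTA": "Species beta",
}

def assign(query: str, min_identity: float = 0.97) -> str:
    """Return the best-matching species if identity exceeds the threshold."""
    best_name, best_id = "unassigned", 0.0
    for ref_seq, name in reference.items():
        identity = SequenceMatcher(None, query, ref_seq).ratio()
        if identity > best_id:
            best_name, best_id = name, identity
    return best_name if best_id >= min_identity else "unassigned"

print(assign("ACGTTGCAATCGGTA"))  # exact match -> "Species alpha"
print(assign("TTTTTTTTTTTTTTT"))  # no good hit -> "unassigned"
```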
In some cases, due to the incompleteness of reference databases, identification can only be achieved at higher taxonomic levels, such as assignment to a family or class. In some organism groups such as bacteria, taxonomic assignment to species level is often not possible. In such cases, a sample may be assigned to a particular operational taxonomic unit (OTU).
In some cases, specimens with identical (COI) DNA barcodes clearly belong to different species, e.g. species of the fish genus Chromis.
== Applications ==
Applications of DNA barcoding include the identification of new species, safety assessment of food, identification and assessment of cryptic species, detection of alien species, identification of endangered and threatened species, linking of egg and larval stages to adult species, securing of intellectual property rights for bioresources, framing of global management plans for conservation strategies, elucidation of feeding niches, and forensic science. DNA barcode markers can be applied to address basic questions in systematics, ecology, evolutionary biology and conservation, including community assembly, species interaction networks, taxonomic discovery, and the assessment of priority areas for environmental protection.
=== Identification of species ===
Specific short DNA sequences or markers from a standardized region of the genome can provide a DNA barcode for identifying species. Molecular methods are especially useful when traditional methods are not applicable. DNA barcoding has great applicability in identification of larvae for which there are generally few diagnostic characters available, and in association of different life stages (e.g. larval and adult) in many animals. Identification of species listed in the Convention of the International Trade of Endangered Species (CITES) appendixes using barcoding techniques is used in monitoring of illegal trade.
=== Detection of invasive species ===
Alien species can be detected via barcoding. Barcoding is suitable for the detection of species in, for example, border control, where rapid and accurate morphological identification is often not possible due to similarities between species, a lack of sufficient diagnostic characteristics and/or a lack of taxonomic expertise. Barcoding and metabarcoding can also be used to screen ecosystems for invasive species, and to distinguish an invasive species from native, morphologically similar species. DNA-based identification has been shown to be highly efficient compared with traditional monitoring of biological invasions.
=== Delimiting cryptic species ===
DNA barcoding enables the identification and recognition of cryptic species. The results of DNA barcoding analyses depend however upon the choice of analytical methods, so the process of delimiting cryptic species using DNA barcodes can be as subjective as any other form of taxonomy. Hebert et al. (2004) concluded that the butterfly Astraptes fulgerator in north-western Costa Rica actually consists of 10 different species. These results, however, were subsequently challenged by Brower (2006), who pointed out numerous serious flaws in the analysis, and concluded that the original data could support no more than the possibility of three to seven cryptic taxa rather than ten cryptic species. Smith et al. (2007) used cytochrome c oxidase I DNA barcodes for species identification of the 20 morphospecies of Belvosia parasitoid flies (Diptera: Tachinidae) reared from caterpillars (Lepidoptera) in Area de Conservación Guanacaste (ACG), northwestern Costa Rica. These authors discovered that barcoding raises the species count to 32, by revealing that each of the three parasitoid species, previously considered as generalists, actually are arrays of highly host-specific cryptic species. For 15 morphospecies of polychaetes within the deep Antarctic benthos studied through DNA barcoding, cryptic diversity was found in 50% of the cases. Furthermore, 10 previously overlooked morphospecies were detected, increasing the total species richness in the sample by 233%.
=== Diet analysis and food web application ===
DNA barcoding and metabarcoding can be useful in diet analysis studies, and is typically used if prey specimens cannot be identified based on morphological characters. There is a range of sampling approaches in diet analysis: DNA metabarcoding can be conducted on stomach contents, feces, saliva or whole body analysis. In fecal samples or highly digested stomach contents, it is often not possible to distinguish tissue from single species, and therefore metabarcoding can be applied instead. Feces or saliva represent non-invasive sampling approaches, while whole body analysis often means that the individual needs to be killed first. For smaller organisms, sequencing for stomach content is then often done by sequencing the entire animal.
=== Barcoding for food safety ===
DNA barcoding represents an essential tool for evaluating the quality of food products. Its purposes are to guarantee food traceability, to minimize food piracy, and to valorize local and typical agro-food production. Another purpose is to safeguard public health; for example, metabarcoding offers the possibility of identifying groupers that cause ciguatera fish poisoning from meal remnants, or of separating poisonous mushrooms from edible ones.
=== Biomonitoring and ecological assessment ===
DNA barcoding can be used to assess the presence of endangered species for conservation efforts, or the presence of indicator species reflective of specific ecological conditions, for example excess nutrients or low oxygen levels.
=== Forensic Science ===
DNA barcoding is often used for species identification in forensic science cases. Unknown animal or plant samples found at crime scenes can be collected and identified, in hopes of linking them to a suspect and securing a conviction. Poaching, the killing of endangered species, and animal abuse are examples of crimes in which DNA barcoding is used, since animal DNA is often found. Plant DNA, on the other hand, is usually used as trace evidence to link a suspect to a crime scene.
== Potentials and shortcomings ==
=== Potentials ===
Traditional bioassessment methods are well established internationally and serve biomonitoring well, for example in aquatic bioassessment under the EU Directives WFD and MSFD. However, DNA barcoding could improve on traditional methods for the following reasons: DNA barcoding (i) can increase taxonomic resolution and harmonize the identification of taxa which are difficult to identify or lack experts; (ii) can more accurately/precisely relate environmental factors to specific taxa; (iii) can increase comparability among regions; (iv) allows for the inclusion of early life stages and fragmented specimens; (v) allows delimitation of cryptic/rare species; (vi) allows the development of new indices, e.g. for rare/cryptic species which may be sensitive/tolerant to stressors; (vii) increases the number of samples which can be processed and reduces processing time, resulting in increased knowledge of species ecology; and (viii) is a non-invasive way of monitoring when using eDNA methods.
==== Time and cost ====
DNA barcoding is faster than traditional morphological methods all the way from training through to taxonomic assignment. It takes less time to gain expertise in DNA methods than becoming an expert in taxonomy. In addition, the DNA barcoding workflow (i.e. from sample to result) is generally quicker than traditional morphological workflow and allows the processing of more samples.
==== Taxonomic resolution ====
DNA barcoding allows the resolution of taxa from higher (e.g. family) to lower (e.g. species) taxonomic levels, that are otherwise too difficult to identify using traditional morphological methods, like e.g. identification via microscopy. For example, Chironomidae (the non-biting midge) are widely distributed in both terrestrial and freshwater ecosystems. Their richness and abundance make them important for ecological processes and networks, and they are one of many invertebrate groups used in biomonitoring. Invertebrate samples can contain as many as 100 species of chironomids which often make up as much as 50% of a sample. Despite this, they are usually not identified below the family level because of the taxonomic expertise and time required. This may result in different chironomid species with different ecological preferences grouped together, resulting in inaccurate assessment of water quality.
DNA barcoding provides the opportunity to resolve taxa, and directly relate stressor effects to specific taxa such as individual chironomid species. For example, Beermann et al. (2018) DNA barcoded Chironomidae to investigate their response to multiple stressors; reduced flow, increased fine-sediment and increased salinity. After barcoding, it was found that the chironomid sample consisted of 183 Operational Taxonomic Units (OTUs), i.e. barcodes (sequences) that are often equivalent to morphological species. These 183 OTUs displayed 15 response types rather than the previously reported two response types recorded when all chironomids were grouped together in the same multiple stressor study. A similar trend was discovered in a study by Macher et al. (2016) which discovered cryptic diversity within the New Zealand mayfly species Deleatidium sp. This study found different response patterns of 12 molecular distinct OTUs to stressors which may change the consensus that this mayfly is sensitive to pollution.
=== Shortcomings ===
Despite the advantages offered by DNA barcoding, it has also been suggested that DNA barcoding is best used as a complement to traditional morphological methods. This recommendation is based on multiple perceived challenges.
==== Physical parameters ====
It is not completely straightforward to connect DNA barcodes with ecological preferences of the barcoded taxon in question, as is needed if barcoding is to be used for biomonitoring. For example, detecting target DNA in aquatic systems depends on the concentration of DNA molecules at a site, which in turn can be affected by many factors. The presence of DNA molecules also depends on dispersion at a site, e.g. direction or strength of currents. It is not really known how DNA moves around in streams and lakes, which makes sampling difficult. Another factor might be the behavior of the target species, e.g. fish can have seasonal changes of movements, crayfish or mussels will release DNA in larger amounts just at certain times of their life (moulting, spawning). For DNA in soil, even less is known about distribution, quantity or quality.
The major limitation of the barcoding method is that it relies on barcode reference libraries for the taxonomic identification of sequences. Taxonomic identification is accurate only if a reliable reference is available. However, most databases are still incomplete, especially for smaller organisms, e.g. fungi, phytoplankton or nematodes. In addition, current databases contain misidentifications, spelling mistakes and other errors. Massive curation and completion efforts around the databases are necessary for all organisms, involving large barcoding projects (for example, the iBOL project for the Barcode of Life Data Systems (BOLD) reference database). However, completion and curation are difficult and time-consuming. Without vouchered specimens, there can be no certainty about whether the sequence used as a reference is correct.
DNA sequence databases like GenBank contain many sequences that are not tied to vouchered specimens (for example, herbarium specimens, cultured cell lines, or sometimes images). This is problematic in the face of taxonomic issues such as whether several species should be split or combined, or whether past identifications were sound. Reusing sequences of initially misidentified organisms that are not tied to vouchered specimens may support incorrect conclusions and must be avoided. Therefore, best practice for DNA barcoding is to sequence vouchered specimens. For many taxa it can, however, be difficult to obtain reference specimens, for example because specimens are difficult to catch, available specimens are poorly conserved, or adequate taxonomic expertise is lacking.
Importantly, DNA barcodes can also be used to create interim taxonomy, in which case OTUs can be used as substitutes for traditional Latin binomials – thus significantly reducing dependency on fully populated reference databases.
==== Technological bias ====
DNA barcoding also carries methodological bias, from sampling to bioinformatic data analysis. Besides the risk of contamination of the DNA sample by PCR inhibitors, primer bias is one of the major sources of error in DNA barcoding. The isolation of an efficient DNA marker and the design of primers is a complex process, and considerable effort has been made to develop primers for DNA barcoding in different taxonomic groups. However, primers often bind preferentially to some sequences, leading to differential primer efficiency and specificity, unrepresentative assessments of communities, and inflation of richness. Thus, the sequence composition of the sample's communities is mainly altered at the PCR step. In addition, PCR replication is often required, but leads to an exponential increase in the risk of contamination. Several studies have highlighted the possibility of using mitochondria-enriched samples or PCR-free approaches to avoid these biases, but as of 2018 the DNA metabarcoding technique is still based on the sequencing of amplicons. Other biases enter the picture during sequencing and during the bioinformatic processing of the sequences, such as the creation of chimeras.
==== Lack of standardization ====
Even as DNA barcoding becomes more widely used and applied, there is no agreement concerning the methods for DNA preservation or extraction, the choice of DNA markers and primer sets, or PCR protocols. The parameters of bioinformatics pipelines (for example OTU clustering, taxonomic assignment algorithms, thresholds, etc.) are at the origin of much debate among DNA barcoding users. Sequencing technologies are also evolving rapidly, together with the tools for analysing the massive amounts of DNA data generated, and standardization of the methods is urgently needed to enable collaboration and data sharing at greater spatial and temporal scales. This standardisation of barcoding methods at the European scale is part of the objectives of the European COST Action DNAqua-net and is also addressed by CEN (the European Committee for Standardization).
Another criticism of DNA barcoding is its limited efficiency for accurate discrimination below species level (for example, to distinguish between varieties), for hybrid detection, and that it can be affected by evolutionary rates.
==== Mismatches between conventional (morphological) and barcode-based identification ====
It is important to know that taxa lists derived from conventional (morphological) identification are not, and maybe never will be, directly comparable to taxa lists derived from barcode-based identification, for several reasons. The most important cause is probably the incompleteness and inaccuracy of the molecular reference databases, which prevent the correct taxonomic assignment of eDNA sequences. Taxa not present in reference databases will not be found by eDNA, and sequences linked to a wrong name will lead to incorrect identification. Other known causes are the different sampling scale and size of traditional versus molecular samples, the possible analysis of dead organisms (which affects the two methods differently depending on organism group), and the specific selectivity of identification in either method, i.e. varying taxonomic expertise or the ability to identify certain organism groups on the one hand, and primer bias leading to a potentially biased analysis of taxa on the other.
==== Estimates of richness/diversity ====
DNA barcoding can result in over- or underestimation of species richness and diversity. Some studies suggest that artifacts (identification of species not present in a community) are a major cause of inflated biodiversity. The most problematic issue is taxa represented by low numbers of sequencing reads. These reads are usually removed during the data filtering process, since different studies suggest that most such low-frequency reads may be artifacts. However, real rare taxa may exist among these low-abundance reads. Rare sequences can reflect unique lineages in communities, which makes them informative and valuable. Thus, there is a strong need for more robust bioinformatics algorithms that allow the differentiation of informative reads from artifacts. Complete reference libraries would also allow better testing of bioinformatics algorithms, by permitting better filtering of artifacts (i.e. the removal of sequences lacking a counterpart among extant species), and would therefore make more accurate species assignment possible. Cryptic diversity can also result in inflated biodiversity, as one morphological species may actually split into many distinct molecular sequences. Generating comprehensive DNA reference data is therefore crucial for environmental DNA-based biodiversity monitoring.
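A minimal sketch of the kind of abundance filter discussed above; the cutoff is an arbitrary illustration, and real studies tune it per dataset:

```python
# Remove OTUs supported by fewer reads than a minimum threshold.
# This suppresses artifacts but risks discarding genuinely rare taxa.
otu_table = {"OTU_1": 5400, "OTU_2": 310, "OTU_3": 4, "OTU_4": 1}
MIN_READS = 10  # arbitrary illustrative cutoff

filtered = {otu: n for otu, n in otu_table.items() if n >= MIN_READS}
print(filtered)  # {'OTU_1': 5400, 'OTU_2': 310}
discarded = sorted(set(otu_table) - set(filtered))
print(discarded)  # ['OTU_3', 'OTU_4'] -- artifacts, or real rare taxa?
```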
== Megabarcoding ==
Megabarcoding is a term used to describe high-throughput specimen-based DNA barcoding, in which thousands of specimens can be barcoded simultaneously for species identification and discovery. This is enabled by the use of third-generation sequencing platforms, including PacBio (Sequel I/II) from Pacific Biosciences and the MinION and PromethION from Oxford Nanopore Technologies. Compared with Sanger sequencing, megabarcoding is faster and cheaper, allowing for the large-scale generation of DNA barcodes for thousands of species.
=== Applications ===
Megabarcoding can help fill the "dark taxa" DNA barcode reference data gap for insects and accelerate species discovery, improve understanding of species diversity patterns, evaluate species richness, generate rapid biodiversity inventories, track baseline shifts, and match life-history stages.
== Metabarcoding ==
Metabarcoding is defined as the barcoding of DNA or eDNA (environmental DNA) that allows for the simultaneous identification of many taxa within the same (environmental) sample, though often within the same organism group. The main difference between the approaches is that metabarcoding, in contrast to barcoding, does not focus on one specific organism but instead aims to determine the species composition within a sample.
=== Methodology ===
The metabarcoding procedure, like general barcoding, covers the steps of DNA extraction, PCR amplification, sequencing, and data analysis. A barcode consists of a short variable gene region (for example, one of the markers discussed above), useful for taxonomic assignment, flanked by highly conserved gene regions that can be used for primer design. Different genes are used depending on whether the aim is to barcode a single species or to metabarcode several species; in the latter case, a more universal gene is used. Metabarcoding does not use single-species DNA/RNA as a starting point, but DNA/RNA from several different organisms derived from one environmental or bulk sample.
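To make the taxonomic assignment step more concrete, here is a minimal Python sketch that assigns a read to the best-matching reference barcode by simple per-position identity. It assumes pre-aligned, equal-length sequences and a hypothetical toy reference library; real pipelines use dedicated aligners, clustering tools, and curated databases, and the 90% threshold shown here is purely illustrative.

```python
def identity(a, b):
    """Fraction of matching positions between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def assign_taxon(read, reference, threshold):
    """Return the best-matching reference taxon, or None if the best
    hit falls below the identity threshold."""
    best_taxon, best_id = None, 0.0
    for taxon, barcode in reference.items():
        score = identity(read, barcode)
        if score > best_id:
            best_taxon, best_id = taxon, score
    return best_taxon if best_id >= threshold else None

# Hypothetical two-species reference library with 10-bp "barcodes".
reference = {"Species A": "ACGTACGTAC", "Species B": "ACGTTTGTAC"}
print(assign_taxon("ACGTACGTAT", reference, threshold=0.90))  # -> Species A
```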
=== Applications ===
Metabarcoding has the potential to complement biodiversity measures, and even replace them in some instances, especially as the technology advances and procedures gradually become cheaper, more optimized and widespread.
DNA metabarcoding applications include biodiversity monitoring in terrestrial and aquatic environments, paleontology and ancient ecosystems, plant–pollinator interactions, diet analysis, and food safety.
=== Advantages and challenges ===
The general advantages and shortcomings of barcoding reviewed above also apply to metabarcoding. One particular drawback of metabarcoding studies is that there is as yet no consensus regarding the optimal experimental design and bioinformatics criteria to be applied in eDNA metabarcoding. However, there are ongoing joint efforts, such as the EU COST network DNAqua-Net, to move forward by exchanging experience and knowledge to establish best-practice standards for biomonitoring.
== Artificial DNA barcoding ==
In 2014, researchers from ETH Zurich suggested using artificial, sub-micrometer-sized DNA barcodes as an "invisible oil tag". The barcodes consist of synthetic DNA sequences inside magnetically recoverable silica particles. They can be added to food oil in very small amounts (down to 1 ppb) as a label, and can be retrieved at any time for authenticity testing by PCR and sequencing. This method can be used to test olive oil for adulteration.
== See also ==
== References ==
== External links ==
SweBOL
FinBOL
International Barcode of Life Project (iBOL)
BOLD
UNITE
Diat.barcode | Wikipedia/DNA_barcoding |
Pollen DNA barcoding is the process of identifying pollen donor plant species through the amplification and sequencing of specific, conserved regions of plant DNA. Being able to accurately identify pollen has a wide range of applications though it has been difficult in the past due to the limitations of microscopic identification of pollen.
Pollen identified using DNA barcoding involves the specific targeting of gene regions that are found in most to all plant species but have high variation between members of different species. The unique sequence of base pairs for each species within these target regions can be used as an identifying feature.
The applications of pollen DNA barcoding range from forensics, to food safety, to conservation. Each of these fields benefits from the creation of plant barcode reference libraries. These libraries range largely in size and scope of their collections as well as what target region(s) they specialize in.
One of the main challenges of identifying pollen is that it is often collected as a mixture of pollen from several species. Metabarcoding is the process of identifying individual species' DNA from a mixed DNA sample; it is commonly used to catalog pollen in mixed pollen loads found on pollinating animals and in environmental DNA (also called eDNA), which is DNA extracted directly from the environment, such as from soil or water samples.
== Advantages ==
Some of the principal constraints of microscopic identification are its expertise and time requirements. Identifying pollen via microscopy requires a high level of expertise in the pollen characteristics of the specific plants being studied, and even with expertise it can be extremely difficult to identify pollen accurately at high taxonomic resolution. The skills required for DNA barcoding are much more common, making the approach easier to adopt. Pollen DNA barcoding is a technique that has grown in popularity due to the decreased costs associated with "next-generation sequencing" (NGS) techniques, and it is being continually improved in efficiency, including through the use of a dual-indexing approach. The other major advantages include the savings in time and resources compared to microscopic identification, which is time-consuming: it involves spreading pollen on a slide, staining the pollen to improve visibility, then focusing in on individual pollen grains and identifying them based on size and shape as well as the shape and number of pores. If a pollen reference library is not available, pollen has to be collected from wild specimens or from herbarium specimens and then added to a pollen reference library.
Rare plants visited by some pollinators can be difficult to detect; by using pollen DNA barcoding, researchers can uncover such "invisible" interactions between plants and pollinators.
== Challenges ==
There are many challenges when it comes to genetic barcoding of pollen. The amplification process means that even small pieces of plant DNA can be detected, including those from contaminants of a sample. Strict procedures to prevent contamination are therefore important; they are aided by the hardiness of the pollen coat, which allows the pollen to be washed of contaminants without damaging the internal pollen DNA.
DNA barcode reference libraries are still being built, and standardized target regions are being gradually adopted. These challenges are largely due to the newness of DNA barcoding and are likely to ease with its wider adoption as a tool used by taxonomists.
The amount of pollen contributed by each species to a mixed pollen load is difficult to quantify through DNA barcoding. However, scientists have been able to compare pollen amounts by rank order.
=== Alternatives ===
Innovations in automated microscopy and imaging software offer one potential alternative for the identification of pollen. Using pattern-recognition techniques, researchers have developed software that can characterize microscopic pollen images based on texture analyses.
== Target regions ==
Several different regions of plant DNA have been used as targets for genetic barcoding, including rbcL, matK, trnH-psbA, ITS1 and ITS2. A combination of rbcL and matK has been recommended for use in plant DNA barcoding. It has been found that trnL is better for degraded DNA and ITS1 is better for differentiating species within a genus.
== Applications ==
=== Use in pollination networks ===
Being able to identify pollen is especially important in the study of pollination networks, which are made up of all the interactions between plants and the animals that facilitate their pollination. Identifying the pollen carried by insects helps scientists understand which plants are visited by which insects. Insects can also have homologous features that make them difficult to identify, and they are themselves sometimes identified through genetic barcoding (usually of the CO1 region). Not every insect that visits a flower is a pollinator: many lack features such as hairs that allow them to carry pollen, while others avoid the pollen-laden anthers to steal nectar. Pollination networks are made more accurate by including which pollen is carried by which insects. Some scientists argue that pollination effectiveness (PE), measured by studying the germination rates of seeds produced from flowers visited only once by a single animal, is the best way to determine which animals are important pollinators; other scientists have used DNA barcoding to determine the genetic origin of pollen found on insects and argue that this, in conjunction with other traits, is a good indication of pollination effectiveness. By studying the composition and structure of pollination networks, conservationists can understand the stability of a pollination network and identify which species are most important and which are most at risk from perturbations leading to pollinator declines.
Another advantage of pollen DNA barcoding is that it can be used to determine the source of pollen found on museum specimens of insects, and these records of insect-plant interactions can then be compared to modern-day interactions to see how pollination networks have changed over time due to global warming, land use change, and other factors.
=== Forensics ===
Being able to accurately identify pollen found on evidence helps forensic investigators determine the regions the evidence originated from, based on the plants that are endemic to those regions. In addition, atmospheric pollen originating from illegal cannabis farms has been successfully detected by scientists, which in the future could allow law enforcement officials to narrow down the search areas for illegal farms.
=== Ancient pollen ===
Due to the hardy structure of pollen, which has evolved to survive transport over sometimes great distances while keeping the internal genetic information intact, the origin of pollen found mixed into ancient substrates can often be determined through DNA barcoding.
=== Food safety ===
Honeybees carry pollen as well as the nectar used in their production of honey. For food quality and safety purposes it is important to understand the plant provenance of human-consumed bee products, including honey, royal jelly, and pollen pellets. Investigators can test which plants honeybees foraged on, and thus the origin of the nectar used in honey, by collecting pollen packets from honeybees' corbicular loads and identifying the pollen via DNA metabarcoding.
== See also ==
Aeroplankton
== References == | Wikipedia/Pollen_DNA_barcoding |
DNA barcoding is an alternative method to the traditional morphological taxonomic classification, and has frequently been used to identify species of aquatic macroinvertebrates (generally considered those large enough to be seen without magnification). Many are crucial indicator organisms in the bioassessment of freshwater (e.g.: Ephemeroptera, Plecoptera, Trichoptera) and marine (e.g. Annelida, Echinoderms, Molluscs) ecosystems.
Since its introduction, the field of DNA barcoding has matured to bridge the gap between traditional taxonomy and molecular systematics. This technique has the ability to provide more detailed taxonomic information, particularly for cryptic, small, or rare species. DNA barcoding involves the specific targeting of gene regions that are found and conserved in most animal species but show high variation between members of different species. Accurate diagnosis depends on intraspecific variation being low compared with variation between species; a short DNA sequence such as the cytochrome c oxidase subunit I gene (COI) then allows precise allocation of an individual to a taxon.
== Methodology ==
While the concept of using DNA sequence divergence for species discrimination has been reported earlier, Hebert et al. (2003) were pioneers in proposing standardization of DNA barcoding as a method of molecularly distinguishing species.
Specimen collection for DNA barcoding does not differ from traditional methods, apart from the fact that samples should be preserved in high-concentration (>70%) ethanol. It has been indicated that the typical protocol of storing benthic samples in formalin has an adverse effect on DNA integrity.
The key concept for barcoding macroinvertebrates is the proper selection of DNA markers (DNA barcode regions) to amplify appropriate gene regions using PCR techniques. The DNA barcode region needs to be conserved within a species but variable among different (even closely related) species, so that its sequence can serve as a species-specific genetic tag. The selection of the marker therefore plays an important role. The cytochrome c oxidase subunit I gene (COI) is one of the most widely used markers in the barcoding of macroinvertebrates. Other markers that can be used are the ribosomal RNA genes 16S and 18S.
Moreover, sorting invertebrates into different size categories is useful, since specimens in a sample can vary widely in biomass, depending on species and life stage.
For further details on methods see DNA barcoding.
=== DNA metabarcoding ===
Due to the significant number of taxa that compose aquatic macroinvertebrate communities, DNA metabarcoding is generally used to assess distinct taxa within bulk or water samples. DNA metabarcoding is a method that consists of the same workflow as DNA barcoding, distinguished by the use of high-throughput sequencing (HTS) technologies. The potential of DNA metabarcoding in the assessment and monitoring of various taxonomic groups has been successfully demonstrated in several studies. Numerous researchers have used metabarcoding methods to classify benthic macroinvertebrates from tissue samples, demonstrating its feasibility and its higher sensitivity compared with classical taxonomic methods. Others validate the use of next-generation sequencing (NGS) technologies on environmental samples to evaluate water quality in marine ecosystems and in freshwater biodiversity studies, including macroinvertebrate species assessment. Applications of these technologies to environmental samples are constantly increasing. Most recent studies focus on advancing the implementation of eDNA approaches, field validation, platform and barcode choice, or database limitations.
== Application and challenges ==
Macroinvertebrates (meta)barcoding methods are often used in:
Biodiversity assessment. Because of the large number of macroinvertebrate species, sample processing (sorting and identification) is a laborious and often difficult task that can lead to errors during the assessment.
Environmental monitoring programs. Macroinvertebrates within the same system may be residents from several months to multiple years, depending on the lifespan of each organism. Consequently, macroinvertebrate communities inhabit aquatic ecosystems long enough to reflect the chronic effects of pollutants and yet short enough to respond to relatively acute changes in water quality. Because of the limited mobility of macroinvertebrates and their relative inability to move away from adverse conditions, the location of chronic sources of pollution often can be pinpointed by comparing communities of these organisms.
Detection of alien species. Applications of eDNA and (meta)barcoding techniques are constantly increasing in studies of invasion processes.
Species identification. Species-level identification requires a high level of taxonomic expertise. Different developmental stages of macroinvertebrates are often difficult to identify morphologically, even for experts, especially because of the lack of appropriate identification keys for aquatic macroinvertebrates. For some aquatic invertebrate taxa, taxonomic identification is only possible for males and some late instars, but coupling barcoding with traditional taxonomy provides a robust framework for biological identification. Often, species cannot be identified because they are morphologically cryptic, similar, or belong to poorly known groups. It has been suggested that a combined analysis of morphological and molecular data could provide the best solution, in what is called "integrative taxonomy". A number of studies have used barcoding or metabarcoding approaches on different groups, for example odonates, specifically the dragonflies (Anisoptera) and the damselflies (Zygoptera), with the recommendation to use a combination of markers.
Stress response. Individual freshwater invertebrate species, often merged to a higher taxonomic level for biomonitoring purposes, can differ substantially in their tolerance to stressors and respond in more complex ways than is observed at the genus level. Identifications based on DNA barcoding have the potential to improve the detection of small changes in stream conditions. Recent results have shown that DNA barcoding can increase taxonomic resolution and thereby increase the sensitivity of bioassessment metrics.
There are also many challenges when it comes to genetic barcoding of aquatic macroinvertebrates:
Reference libraries. Availability of reference libraries of DNA barcodes is very important in species' identification.
Missing species in databases. Information about existing species is usually incomplete or not correlated with ecological parameters such as depth, sampling technique, or salinity.
Validation of data quality. Databases' records are often not curated.
Outdated taxonomy. Species in databases can be sometimes named with outdated taxonomy (e.g. synonyms).
Quantitative measurement of species diversity (estimation of biomass and abundance of species).
Lacking DNA information. Species in the earlier literature were identified only by taxonomic features, and no DNA samples exist to confirm these identifications.
Technical challenges must be taken into consideration, such as the need to apply different protocols when working with different organisms, selection of an appropriate DNA barcoding markers, primer design (identification of conserved regions suitable as primer-binding sites, evaluation of the taxonomic coverage and the ability of the amplified regions to resolve taxa at the family level, etc.).
Costs related with sequencing.
== See also ==
DNA barcoding
Fish DNA barcoding
Algae DNA barcoding
PCR
Sequencing
== References == | Wikipedia/Aquatic_macroinvertebrate_DNA_barcoding |
Formal science is a branch of science studying disciplines concerned with abstract structures described by formal systems, such as logic, mathematics, statistics, theoretical computer science, artificial intelligence, information theory, game theory, systems theory, decision theory and theoretical linguistics. Whereas the natural sciences and social sciences seek to characterize physical systems and social systems, respectively, using theoretical and empirical methods, the formal sciences use language tools concerned with characterizing abstract structures described by formal systems and the deductions that can be made from them. The formal sciences aid the natural and social sciences by providing information about the structures used to describe the physical world, and what inferences may be made about them.
== Branches ==
Logic (also a branch of philosophy)
Mathematics
Statistics
Systems science
Data science
Information theory
Computer science
Cryptography
== Differences from other sciences ==
As Albert Einstein put it: "One reason why mathematics enjoys special esteem, above all other sciences, is that its laws are absolutely certain and indisputable, while those of other sciences are to some extent debatable and in constant danger of being overthrown by newly discovered facts."
Because of their non-empirical nature, formal sciences are construed by outlining a set of axioms and definitions from which other statements (theorems) are deduced. For this reason, in Rudolf Carnap's logical-positivist conception of the epistemology of science, theories belonging to formal sciences are understood to contain no synthetic statements, instead containing only analytic statements.
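To make the axioms-to-theorems picture concrete, here is a tiny illustrative sketch in the Lean proof assistant. The proposition names are arbitrary placeholders; the point is only that the theorem is deduced purely from the declared axioms, with no empirical input.

```lean
-- Two abstract propositions and two axioms about them.
axiom P : Prop
axiom Q : Prop
axiom p_holds : P            -- axiom: P is assumed true
axiom p_implies_q : P → Q    -- axiom: P entails Q

-- A theorem deduced from the axioms alone (modus ponens).
theorem q_holds : Q := p_implies_q p_holds
```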
== See also ==
== References ==
== Further reading ==
Mario Bunge (1985). Philosophy of Science and Technology. Springer.
Mario Bunge (1998). Philosophy of Science. Rev. ed. of: Scientific research. Berlin, New York: Springer-Verlag, 1967.
C. West Churchman (1940). Elements of Logic and Formal Science, J.B. Lippincott Co., New York.
James Franklin (1994). The formal sciences discover the philosophers' stone. In: Studies in History and Philosophy of Science. Vol. 25, No. 4, pp. 513–533, 1994
Stephen Leacock (1906). Elements of Political Science. Houghton, Mifflin Co, 417 pp.
Popper, Karl R. (2002) [1959]. The Logic of Scientific Discovery. New York, NY: Routledge Classics. ISBN 0-415-27844-9. OCLC 59377149.
Bernt P. Stigum (1990). Toward a Formal Science of Economics. MIT Press
Marcus Tomalin (2006), Linguistics and the Formal Sciences. Cambridge University Press
William L. Twining (1997). Law in Context: Enlarging a Discipline. 365 pp.
== External links ==
Media related to Formal sciences at Wikimedia Commons
Interdisciplinary conferences — Foundations of the Formal Sciences | Wikipedia/Formal_sciences |
Neuroscience is the scientific study of the nervous system (the brain, spinal cord, and peripheral nervous system), its functions, and its disorders. It is a multidisciplinary science that combines physiology, anatomy, molecular biology, developmental biology, cytology, psychology, physics, computer science, chemistry, medicine, statistics, and mathematical modeling to understand the fundamental and emergent properties of neurons, glia and neural circuits. The understanding of the biological basis of learning, memory, behavior, perception, and consciousness has been described by Eric Kandel as the "epic challenge" of the biological sciences.
The scope of neuroscience has broadened over time to include different approaches used to study the nervous system at different scales. The techniques used by neuroscientists have expanded enormously, from molecular and cellular studies of individual neurons to imaging of sensory, motor and cognitive tasks in the brain.
== History ==
The earliest study of the nervous system dates to ancient Egypt. Trepanation, the surgical practice of either drilling or scraping a hole into the skull for the purpose of curing head injuries or mental disorders, or relieving cranial pressure, was first recorded during the Neolithic period. Manuscripts dating to 1700 BC indicate that the Egyptians had some knowledge about symptoms of brain damage.
Early views on the function of the brain regarded it to be a "cranial stuffing" of sorts. In Egypt, from the late Middle Kingdom onwards, the brain was regularly removed in preparation for mummification. It was believed at the time that the heart was the seat of intelligence. According to Herodotus, the first step of mummification was to "take a crooked piece of iron, and with it draw out the brain through the nostrils, thus getting rid of a portion, while the skull is cleared of the rest by rinsing with drugs."
The view that the heart was the source of consciousness was not challenged until the time of the Greek physician Hippocrates. He believed that the brain was not only involved with sensation—since most specialized organs (e.g., eyes, ears, tongue) are located in the head near the brain—but was also the seat of intelligence. Plato also speculated that the brain was the seat of the rational part of the soul. Aristotle, however, believed the heart was the center of intelligence and that the brain regulated the amount of heat from the heart. This view was generally accepted until the Roman physician Galen, a follower of Hippocrates and physician to Roman gladiators, observed that his patients lost their mental faculties when they had sustained damage to their brains.
Abulcasis, Averroes, Avicenna, Avenzoar, and Maimonides, active in the Medieval Muslim world, described a number of medical problems related to the brain. In Renaissance Europe, Vesalius (1514–1564), René Descartes (1596–1650), Thomas Willis (1621–1675) and Jan Swammerdam (1637–1680) also made several contributions to neuroscience.
Luigi Galvani's pioneering work in the late 1700s set the stage for studying the electrical excitability of muscles and neurons. In 1843 Emil du Bois-Reymond demonstrated the electrical nature of the nerve signal, whose speed Hermann von Helmholtz proceeded to measure, and in 1875 Richard Caton found electrical phenomena in the cerebral hemispheres of rabbits and monkeys. Adolf Beck published in 1890 similar observations of spontaneous electrical activity of the brain of rabbits and dogs. Studies of the brain became more sophisticated after the invention of the microscope and the development of a staining procedure by Camillo Golgi during the late 1890s. The procedure used a silver chromate salt to reveal the intricate structures of individual neurons. His technique was used by Santiago Ramón y Cajal and led to the formation of the neuron doctrine, the hypothesis that the functional unit of the brain is the neuron. Golgi and Ramón y Cajal shared the Nobel Prize in Physiology or Medicine in 1906 for their extensive observations, descriptions, and categorizations of neurons throughout the brain.
In parallel with this research, in 1815 Jean Pierre Flourens induced localized lesions of the brain in living animals to observe their effects on motricity, sensibility and behavior. Work with brain-damaged patients by Marc Dax in 1836 and Paul Broca in 1865 suggested that certain regions of the brain were responsible for certain functions. At the time, these findings were seen as a confirmation of Franz Joseph Gall's theory that language was localized and that certain psychological functions were localized in specific areas of the cerebral cortex. The localization of function hypothesis was supported by observations of epileptic patients conducted by John Hughlings Jackson, who correctly inferred the organization of the motor cortex by watching the progression of seizures through the body. Carl Wernicke further developed the theory of the specialization of specific brain structures in language comprehension and production. Modern research, through neuroimaging techniques, still uses the Brodmann cerebral cytoarchitectonic map (cytoarchitecture being the study of cell structure), whose anatomical definitions date from this era, in continuing to show that distinct areas of the cortex are activated in the execution of specific tasks.
During the 20th century, neuroscience began to be recognized as a distinct academic discipline in its own right, rather than as studies of the nervous system within other disciplines. Eric Kandel and collaborators have cited David Rioch, Francis O. Schmitt, and Stephen Kuffler as having played critical roles in establishing the field. Rioch originated the integration of basic anatomical and physiological research with clinical psychiatry at the Walter Reed Army Institute of Research, starting in the 1950s. During the same period, Schmitt established a neuroscience research program within the Biology Department at the Massachusetts Institute of Technology, bringing together biology, chemistry, physics, and mathematics. The first freestanding neuroscience department (then called Psychobiology) was founded in 1964 at the University of California, Irvine by James L. McGaugh. This was followed by the Department of Neurobiology at Harvard Medical School, which was founded in 1966 by Stephen Kuffler.
In the process of treating epilepsy, Wilder Penfield produced maps of the location of various functions (motor, sensory, memory, vision) in the brain. He summarized his findings in a 1950 book called The Cerebral Cortex of Man. Wilder Penfield and his co-investigators Edwin Boldrey and Theodore Rasmussen are considered to be the originators of the cortical homunculus.
The understanding of neurons and of nervous system function became increasingly precise and molecular during the 20th century. For example, in 1952, Alan Lloyd Hodgkin and Andrew Huxley presented a mathematical model for the transmission of electrical signals in neurons of the giant axon of a squid, which they called "action potentials", and how they are initiated and propagated, known as the Hodgkin–Huxley model. In 1961–1962, Richard FitzHugh and J. Nagumo simplified Hodgkin–Huxley, in what is called the FitzHugh–Nagumo model. In 1962, Bernard Katz modeled neurotransmission across the space between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in Aplysia. In 1981 Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation.
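As an illustration of the kind of simplified biological neuron model mentioned above, the following short Python sketch integrates the FitzHugh–Nagumo equations with the forward Euler method. The parameter values, initial conditions, and step size are common illustrative choices, not values prescribed by the original papers.

```python
import numpy as np

def fitzhugh_nagumo(i_ext=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, steps=50_000):
    """Forward-Euler integration of the FitzHugh-Nagumo equations:
         dv/dt = v - v**3 / 3 - w + i_ext
         dw/dt = (v + a - b * w) / tau
    Returns the trace of the fast (membrane-like) variable v."""
    v, w = -1.0, 1.0
    trace = np.empty(steps)
    for k in range(steps):
        dv = v - v**3 / 3.0 - w + i_ext
        dw = (v + a - b * w) / tau
        v, w = v + dt * dv, w + dt * dw
        trace[k] = v
    return trace

trace = fitzhugh_nagumo()
print(f"v oscillates between {trace.min():.2f} and {trace.max():.2f}")
```

With a sustained external current such as this, the two-variable system settles into a limit cycle, reproducing the repetitive spiking that the full Hodgkin–Huxley model describes with four variables.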
As a result of the increasing interest in the nervous system, several prominent neuroscience organizations were formed during the 20th century to provide a forum for all neuroscientists. For example, the International Brain Research Organization was founded in 1961, the International Society for Neurochemistry in 1963, the European Brain and Behaviour Society in 1968, and the Society for Neuroscience in 1969. More recently, the application of neuroscience research results has also given rise to applied disciplines such as neuroeconomics, neuroeducation, neuroethics, and neurolaw.
Over time, brain research has gone through philosophical, experimental, and theoretical phases, with work on neural implants and brain simulation predicted to be important in the future.
== Modern neuroscience ==
The scientific study of the nervous system increased significantly during the second half of the twentieth century, principally due to advances in molecular biology, electrophysiology, and computational neuroscience. This has allowed neuroscientists to study the nervous system in all its aspects: how it is structured, how it works, how it develops, how it malfunctions, and how it can be changed.
For example, it has become possible to understand, in much detail, the complex processes occurring within a single neuron. Neurons are cells specialized for communication. They are able to communicate with neurons and other cell types through specialized junctions called synapses, at which electrical or electrochemical signals can be transmitted from one cell to another. Many neurons extrude a long thin filament of axoplasm called an axon, which may extend to distant parts of the body, is capable of rapidly carrying electrical signals, and influences the activity of other neurons, muscles, or glands at its termination points. A nervous system emerges from the assemblage of neurons that are connected to each other in neural circuits and networks.
The vertebrate nervous system can be split into two parts: the central nervous system (defined as the brain and spinal cord), and the peripheral nervous system. In many species—including all vertebrates—the nervous system is the most complex organ system in the body, with most of the complexity residing in the brain. The human brain alone contains around one hundred billion neurons and one hundred trillion synapses; it consists of thousands of distinguishable substructures, connected to each other in synaptic networks whose intricacies have only begun to be unraveled. At least one out of three of the approximately 20,000 genes belonging to the human genome is expressed mainly in the brain.
Due to the high degree of plasticity of the human brain, the structure of its synapses and their resulting functions change throughout life.
Making sense of the nervous system's dynamic complexity is a formidable research challenge. Ultimately, neuroscientists would like to understand every aspect of the nervous system, including how it works, how it develops, how it malfunctions, and how it can be altered or repaired. Analysis of the nervous system is therefore performed at multiple levels, ranging from the molecular and cellular levels to the systems and cognitive levels. The specific topics that form the main focus of research change over time, driven by an ever-expanding base of knowledge and the availability of increasingly sophisticated technical methods. Improvements in technology have been the primary drivers of progress: developments in electron microscopy, computer science, electronics, functional neuroimaging, and genetics and genomics have all been major enablers.
Advances in the classification of brain cells have been enabled by electrophysiological recording, single-cell genetic sequencing, and high-quality microscopy, which have been combined into a single method pipeline called patch-sequencing, in which all three methods are simultaneously applied using miniature tools. The efficiency of this method and the large amounts of data it generates have allowed researchers to make some general conclusions about cell types; for example, that the human and mouse brain have different versions of fundamentally the same cell types.
=== Molecular and cellular neuroscience ===
Basic questions addressed in molecular neuroscience include the mechanisms by which neurons express and respond to molecular signals and how axons form complex connectivity patterns. At this level, tools from molecular biology and genetics are used to understand how neurons develop and how genetic changes affect biological functions. The morphology, molecular identity, and physiological characteristics of neurons and how they relate to different types of behavior are also of considerable interest.
Questions addressed in cellular neuroscience include the mechanisms of how neurons process signals physiologically and electrochemically. These questions include how signals are processed by neurites and somas and how neurotransmitters and electrical signals are used to process information in a neuron. Neurites are thin extensions from a neuronal cell body, consisting of dendrites (specialized to receive synaptic inputs from other neurons) and axons (specialized to conduct nerve impulses called action potentials). Somas are the cell bodies of the neurons and contain the nucleus.
Another major area of cellular neuroscience is the investigation of the development of the nervous system. Questions include the patterning and regionalization of the nervous system, axonal and dendritic development, trophic interactions, synapse formation and the implication of fractones in neural stem cells, differentiation of neurons and glia (neurogenesis and gliogenesis), and neuronal migration.
Computational neurogenetic modeling (CNGM) is concerned with the development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes at the cellular level; such models can also be used to model larger neural systems.
=== Neural circuits and systems ===
Systems neuroscience research centers on the structural and functional architecture of the developing human brain, and the functions of large-scale brain networks, or functionally-connected systems within the brain. Alongside brain development, systems neuroscience also focuses on how the structure and function of the brain enables or restricts the processing of sensory information, using learned mental models of the world, to motivate behavior.
Questions in systems neuroscience include how neural circuits are formed and used anatomically and physiologically to produce functions such as reflexes, multisensory integration, motor coordination, circadian rhythms, emotional responses, learning, and memory. In other words, this area of research studies how connections are made and morphed in the brain, and the effect it has on human sensation, movement, attention, inhibitory control, decision-making, reasoning, memory formation, reward, and emotion regulation.
Specific areas of interest for the field include observations of how the structure of neural circuits affects skill acquisition, how specialized regions of the brain develop and change (neuroplasticity), and the development of brain atlases, or wiring diagrams of individual developing brains.
The related fields of neuroethology and neuropsychology address the question of how neural substrates underlie specific animal and human behaviors. Neuroendocrinology and psychoneuroimmunology examine interactions between the nervous system and the endocrine and immune systems, respectively. Despite many advancements, the way that networks of neurons perform complex cognitive processes and behaviors is still poorly understood.
=== Cognitive and behavioral neuroscience ===
Cognitive neuroscience addresses the questions of how psychological functions are produced by neural circuitry. The emergence of powerful new measurement techniques such as neuroimaging (e.g., fMRI, PET, SPECT), EEG, MEG, electrophysiology, optogenetics and human genetic analysis combined with sophisticated experimental techniques from cognitive psychology allows neuroscientists and psychologists to address abstract questions such as how cognition and emotion are mapped to specific neural substrates. Although many studies hold a reductionist stance looking for the neurobiological basis of cognitive phenomena, recent research shows that there is an interplay between neuroscientific findings and conceptual research, soliciting and integrating both perspectives. For example, neuroscience research on empathy solicited an interdisciplinary debate involving philosophy, psychology and psychopathology. Moreover, the neuroscientific identification of multiple memory systems related to different brain areas has challenged the idea of memory as a literal reproduction of the past, supporting a view of memory as a generative, constructive and dynamic process.
Neuroscience is also allied with the social and behavioral sciences, as well as with nascent interdisciplinary fields. Examples of such alliances include neuroeconomics, decision theory, social neuroscience, and neuromarketing to address complex questions about interactions of the brain with its environment. A study into consumer responses for example uses EEG to investigate neural correlates associated with narrative transportation into stories about energy efficiency.
=== Computational neuroscience ===
Questions in computational neuroscience can span a wide range of levels of traditional analysis, such as development, structure, and cognitive functions of the brain. Research in this field utilizes mathematical models, theoretical analysis, and computer simulation to describe and verify biologically plausible neurons and nervous systems. For example, biological neuron models are mathematical descriptions of spiking neurons which can be used to describe both the behavior of single neurons as well as the dynamics of neural networks. Computational neuroscience is often referred to as theoretical neuroscience.
=== Neuroscience and medicine ===
==== Clinical neuroscience ====
Neurology, psychiatry, neurosurgery, psychosurgery, anesthesiology and pain medicine, neuropathology, neuroradiology, ophthalmology, otolaryngology, clinical neurophysiology, addiction medicine, and sleep medicine are some medical specialties that specifically address the diseases of the nervous system. These terms also refer to clinical disciplines involving diagnosis and treatment of these diseases.
Neurology works with diseases of the central and peripheral nervous systems, such as amyotrophic lateral sclerosis (ALS) and stroke, and their medical treatment. Psychiatry focuses on affective, behavioral, cognitive, and perceptual disorders. Anesthesiology focuses on perception of pain, and pharmacologic alteration of consciousness. Neuropathology focuses upon the classification and underlying pathogenic mechanisms of central and peripheral nervous system and muscle diseases, with an emphasis on morphologic, microscopic, and chemically observable alterations. Neurosurgery and psychosurgery work primarily with surgical treatment of diseases of the central and peripheral nervous systems.
Neuroscience underlies the development of various neurotherapy methods to treat diseases of the nervous system.
==== Translational research ====
Recently, the boundaries between various specialties have blurred, as they are all influenced by basic research in neuroscience. For example, brain imaging enables objective biological insight into mental illnesses, which can lead to faster diagnosis, more accurate prognosis, and improved monitoring of patient progress over time.
Integrative neuroscience describes the effort to combine models and information from multiple levels of research to develop a coherent model of the nervous system. For example, brain imaging coupled with physiological numerical models and theories of fundamental mechanisms may shed light on psychiatric disorders.
Another important area of translational research is brain–computer interfaces (BCIs), or machines that are able to communicate and influence the brain. They are currently being researched for their potential to repair neural systems and restore certain cognitive functions. However, some ethical considerations have to be dealt with before they are accepted.
== Major branches ==
Modern neuroscience education and research activities can be very roughly categorized into the following major branches, based on the subject and scale of the system in examination as well as distinct experimental or curricular approaches. Individual neuroscientists, however, often work on questions that span several distinct subfields.
== Careers in neuroscience ==
=== Bachelor's Level ===
=== Master's Level ===
=== Advanced Degree ===
== Neuroscience organizations ==
The largest professional neuroscience organization is the Society for Neuroscience (SFN), which is based in the United States but includes many members from other countries. Since its founding in 1969 the SFN has grown steadily: as of 2010 it recorded 40,290 members from 83 countries. Annual meetings, held each year in a different American city, draw attendance from researchers, postdoctoral fellows, graduate students, and undergraduates, as well as educational institutions, funding agencies, publishers, and hundreds of businesses that supply products used in research.
Other major organizations devoted to neuroscience include the International Brain Research Organization (IBRO), which holds its meetings in a country from a different part of the world each year, and the Federation of European Neuroscience Societies (FENS), which holds a meeting in a different European city every two years. FENS comprises a set of 32 national-level organizations, including the British Neuroscience Association, the German Neuroscience Society (Neurowissenschaftliche Gesellschaft), and the French Société des Neurosciences. The first National Honor Society in Neuroscience, Nu Rho Psi, was founded in 2006. Numerous youth neuroscience societies which support undergraduates, graduates and early career researchers also exist, such as Simply Neuroscience and Project Encephalon.
In 2013, the BRAIN Initiative was announced in the US. The International Brain Initiative was created in 2017, currently integrated by more than seven national-level brain research initiatives (US, Europe, Allen Institute, Japan, China, Australia, Canada, Korea, and Israel) spanning four continents.
=== Public education and outreach ===
In addition to conducting traditional research in laboratory settings, neuroscientists have also been involved in the promotion of awareness and knowledge about the nervous system among the general public and government officials. Such promotions have been done by both individual neuroscientists and large organizations. For example, individual neuroscientists have promoted neuroscience education among young students by organizing the International Brain Bee, which is an academic competition for high school or secondary school students worldwide. In the United States, large organizations such as the Society for Neuroscience have promoted neuroscience education by developing a primer called Brain Facts, collaborating with public school teachers to develop Neuroscience Core Concepts for K-12 teachers and students, and cosponsoring a campaign with the Dana Foundation called Brain Awareness Week to increase public awareness about the progress and benefits of brain research. In Canada, the Canadian Institutes of Health Research's (CIHR) Canadian National Brain Bee is held annually at McMaster University.
Neuroscience educators formed a Faculty for Undergraduate Neuroscience (FUN) in 1992 to share best practices and provide travel awards for undergraduates presenting at Society for Neuroscience meetings.
Neuroscientists have also collaborated with other education experts to study and refine educational techniques to optimize learning among students, an emerging field called educational neuroscience. Federal agencies in the United States, such as the National Institute of Health (NIH) and National Science Foundation (NSF), have also funded research that pertains to best practices in teaching and learning of neuroscience concepts.
== Engineering applications of neuroscience ==
=== Neuromorphic computer chips ===
Neuromorphic engineering is a branch of neuroscience that deals with creating functional physical models of neurons for the purposes of useful computation. The emergent computational properties of neuromorphic computers are fundamentally different from conventional computers in the sense that they are complex systems, and that the computational components are interrelated with no central processor.
One example of such a computer is the SpiNNaker supercomputer.
Sensors can also be made smart with neuromorphic technology; an example is the event camera. A related system is BrainScaleS (brain-inspired multiscale computation in neuromorphic hybrid systems), a hybrid analog neuromorphic supercomputer located at Heidelberg University in Germany. It was developed as part of the Human Brain Project's neuromorphic computing platform and is the complement to the SpiNNaker supercomputer, which is based on digital technology. The architecture used in BrainScaleS mimics biological neurons and their connections on a physical level; additionally, since the components are made of silicon, these model neurons operate on average 864 times faster than their biological counterparts (24 hours of biological real time corresponds to 100 seconds in the machine simulation, and 86,400 s / 100 s = 864).
Recent advances in neuromorphic microchip technology have led a group of scientists to create an artificial neuron that can replace real neurons in diseases.
== Nobel prizes related to neuroscience ==
== See also ==
== References ==
== Further reading ==
== External links ==
Neuroscience on In Our Time at the BBC
Neuroscience Information Framework (NIF)
American Society for Neurochemistry
British Neuroscience Association (BNA)
Federation of European Neuroscience Societies
Neuroscience Online (electronic neuroscience textbook)
HHMI Neuroscience lecture series - Making Your Mind: Molecules, Motion, and Memory Archived 2013-06-24 at the Wayback Machine
Société des Neurosciences
Neuroscience For Kids | Wikipedia/neuroscience |
The Galves–Löcherbach model (or GL model) is a mathematical model for a network of neurons with intrinsic stochasticity.
In the most general definition, a GL network consists of a countable number of elements (idealized neurons) that interact by sporadic nearly-instantaneous discrete events (spikes or firings). At each moment, each neuron N fires independently, with a probability that depends on the history of the firings of all neurons since N last fired. Thus each neuron "forgets" all previous spikes, including its own, whenever it fires. This property is a defining feature of the GL model.
In specific versions of the GL model, the past network spike history since the last firing of a neuron N may be summarized by an internal variable, the potential of that neuron, that is a weighted sum of those spikes. The potential may include the spikes of only a finite subset of other neurons, thus modeling arbitrary synapse topologies. In particular, the GL model includes as a special case the general leaky integrate-and-fire neuron model.
== Formal definition ==
The GL model has been formalized in several different ways. The notations below are borrowed from several of those sources.
The GL network model consists of a countable set of neurons with some index set $I$. The state is defined only at discrete sampling times, represented by integers, with some fixed time step $\Delta$. For simplicity, one assumes that these times extend to infinity in both directions, implying that the network has existed forever.
In the GL model, all neurons are assumed to evolve synchronously and atomically between successive sampling times. In particular, within each time step, each neuron may fire at most once. A Boolean variable $X_i[t]$ denotes whether the neuron $i \in I$ fired ($X_i[t]=1$) or not ($X_i[t]=0$) between sampling times $t \in \mathbb{Z}$ and $t+1$.
Let $X[t'\,{:}\,t]$ denote the matrix whose rows are the histories of all neuron firings from time $t'$ to time $t$ inclusive, that is
$$X[t'\,{:}\,t] = \bigl((X_i[s])_{t'\leq s\leq t}\bigr)_{i\in I},$$
and let $X[-\infty\,{:}\,t]$ be defined similarly, but extending infinitely into the past. Let $\tau_i[t]$ be the time of the last firing of neuron $i$ before time $t$, that is
$$\tau_i[t] = \max\{s < t \mid X_i[s] = 1\}.$$
Then the general GL model says that
$$\Pr\bigl(X_i[t]=1 \bigm| X[-\infty\,{:}\,t-1]\bigr) \;=\; \Phi_i\bigl(X[\tau_i[t]\,{:}\,t-1]\bigr).$$
Moreover, the firings in the same time step are conditionally independent, given the past network history, with the above probabilities. That is, for each finite subset $K\subset I$ and any configuration $a_i\in\{0,1\},\, i\in K$, we have
$$\Pr\Bigl(\bigcap_{k\in K}\{X_k[t]=a_k\} \Bigm| X[-\infty\,{:}\,t-1]\Bigr) \;=\; \prod_{k\in K}\Pr\bigl(X_k[t]=a_k \bigm| X[\tau_k[t]\,{:}\,t-1]\bigr).$$
== Potential-based variants ==
In a common special case of the GL model, the part of the past firing history $X[\tau_i[t]\,{:}\,t-1]$ that is relevant to each neuron $i\in I$ at each sampling time $t$ is summarized by a real-valued internal state variable or potential $V_i[t]$ (which corresponds to the membrane potential of a biological neuron), basically a weighted sum of the past spike indicators since the last firing of neuron $i$. That is,
$$V_i[t] \;=\; \sum_{t'=\tau_i[t]}^{t-1}\Bigl(E_i[t'] + \sum_{j\in I} w_{j\to i}\,X_j[t']\Bigr)\,\alpha_i\bigl[t'-\tau_i[t],\; t-1-t'\bigr].$$
In this formula, $w_{j\to i}$ is a numeric weight that corresponds to the total weight or strength of the synapses from the axon of neuron $j$ to the dendrites of neuron $i$. The term $E_i[t']$, the external input, represents some additional contribution to the potential that may arrive between times $t'$ and $t'+1$ from other sources besides the firings of other neurons. The factor $\alpha_i[r,s]$ is a history weight function that modulates the contributions of firings that happened $r$ whole steps after the last firing of neuron $i$ and $s$ whole steps before the current time.
Then one defines
$$\Pr\bigl(X_i[t]=1 \bigm| X[-\infty\,{:}\,t-1]\bigr) \;=\; \phi_i(V_i[t]),$$
where $\phi_i$ is a monotonically non-decreasing function from $\mathbb{R}$ into the interval $[0,1]$.
If the synaptic weight $w_{j\to i}$ is negative, each firing of neuron $j$ causes the potential $V_i$ to decrease. This is the way inhibitory synapses are approximated in the GL model. The absence of a synapse between those two neurons is modeled by setting $w_{j\to i}=0$.
=== Leaky integrate and fire variants ===
In an even more specific case of the GL model, the potential $V_i$ is defined to be a decaying weighted sum of the firings of other neurons. Namely, when a neuron $i$ fires, its potential is reset to zero. Until its next firing, a spike from any neuron $j$ increments $V_i$ by the constant amount $w_{j\to i}$. Apart from those contributions, during each time step the potential decays by a fixed recharge factor $\mu_i$ towards zero.
In this variant, the evolution of the potential $V_i$ can be expressed by the recurrence formula
$$V_i[t+1] \;=\; \begin{cases} 0 & \text{if } X_i[t]=1 \\ \mu_i\,V_i[t] & \text{if } X_i[t]=0 \end{cases} \;+\; E_i[t] \;+\; \sum_{j\in I} w_{j\to i}\,X_j[t].$$
Or, more compactly,
$$V_i[t+1] \;=\; (1-X_i[t])\,\mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I} w_{j\to i}\,X_j[t].$$
This special case results from taking the history weight factor $\alpha[r,s]$ of the general potential-based variant to be $\mu_i^{\,s}$. It is very similar to the leaky integrate-and-fire model.
==== Reset potential ====
If, between times $t$ and $t+1$, neuron $i$ fires (that is, $X_i[t]=1$), no other neuron fires ($X_j[t]=0$ for all $j\neq i$), and there is no external input ($E_i[t]=0$), then $V_i[t+1]$ will be $w_{i\to i}$. This self-weight therefore represents the reset potential that the neuron assumes just after firing, apart from other contributions. The potential evolution formula can therefore also be written as
V
i
[
t
+
1
]
=
{
V
i
R
i
f
X
i
[
t
]
=
1
μ
i
V
i
[
t
]
i
f
X
i
[
t
]
=
0
}
+
E
i
[
t
]
+
∑
j
∈
I
∖
{
i
}
w
j
→
i
X
j
[
t
]
{\displaystyle V_{i}[t+1]\;\;=\;\;\left\{{\begin{array}{ll}V_{i}^{\mathsf {R}}&\mathrm {if} \;X_{i}[t]=1\\\mu _{i}\,V_{i}[t]&\mathrm {if} \;X_{i}[t]=0\end{array}}\right\}\;+\;E_{i}[t]\;+\;\sum _{j\in I\setminus \{i\}}w_{j\to i}\,X_{j}[t]}
where $V_i^{\mathsf{R}} = w_{i\to i}$ is the reset potential. Or, more compactly,
$$V_i[t+1] \;=\; X_i[t]\,V_i^{\mathsf{R}} \;+\; (1-X_i[t])\,\mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]$$
==== Resting potential ====
These formulas imply that the potential decays towards zero with time, when there are no external or synaptic inputs and the neuron itself does not fire. Under these conditions, the membrane potential of a biological neuron will tend towards some negative value, the resting or baseline potential
$V_i^{\mathsf{B}}$, on the order of −40 to −80 millivolts.
However, this apparent discrepancy exists only because it is customary in neurobiology to measure electric potentials relative to that of the extracellular medium. That discrepancy disappears if one chooses the baseline potential
$V_i^{\mathsf{B}}$ of the neuron as the reference for potential measurements. Since the potential $V_i$ has no influence outside of the neuron, its zero level can be chosen independently for each neuron.
==== Variant with refractory period ====
Some authors use a slightly different refractory variant of the integrate-and-fire GL neuron, which ignores all external and synaptic inputs (except possibly the self-synapse
$w_{i\to i}$) during the time step immediately after its own firing. The equation for this variant is
$$V_i[t+1] \;=\; \left\{\begin{array}{ll} V_i^{\mathsf{R}} & \text{if } X_i[t]=1 \\[1mm] \mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t] & \text{if } X_i[t]=0 \end{array}\right.$$
or, more compactly,
$$V_i[t+1] \;=\; X_i[t]\,V_i^{\mathsf{R}} \;+\; (1-X_i[t])\left(\mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]\right)$$
==== Forgetful variants ====
Even more specific sub-variants of the integrate-and-fire GL neuron are obtained by setting the recharge factor
$\mu_i$ to zero. In the resulting neuron model, the potential $V_i$ (and hence the firing probability) depends only on the inputs in the previous time step; all earlier firings of the network, including those of the same neuron, are ignored. That is, the neuron does not have any internal state, and is essentially a (stochastic) function block.
The evolution equations then simplify to
$$V_i[t+1] \;=\; \left\{\begin{array}{ll} V_i^{\mathsf{R}} & \text{if } X_i[t]=1 \\ 0 & \text{if } X_i[t]=0 \end{array}\right\} \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]$$
$$V_i[t+1] \;=\; X_i[t]\,V_i^{\mathsf{R}} \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]$$
for the variant without refractory step, and
$$V_i[t+1] \;=\; \left\{\begin{array}{ll} V_i^{\mathsf{R}} & \text{if } X_i[t]=1 \\[1mm] E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t] & \text{if } X_i[t]=0 \end{array}\right.$$
$$V_i[t+1] \;=\; X_i[t]\,V_i^{\mathsf{R}} \;+\; (1-X_i[t])\left(E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]\right)$$
for the variant with refractory step.
In these sub-variants, while the individual neurons do not store any information from one step to the next, the network as a whole still can have persistent memory because of the implicit one-step delay between the synaptic inputs and the resulting firing of the neuron. In other words, the state of a network with
$n$ neurons is a list of $n$ bits, namely the value of $X_i[t]$ for each neuron, which can be assumed to be stored in its axon in the form of a traveling depolarization zone.
== History ==
The GL model was defined in 2013 by mathematicians Antonio Galves and Eva Löcherbach. Its inspirations included Frank Spitzer's interacting particle system and Jorma Rissanen's notion of a stochastic chain with memory of variable length. Another influence was Bruno Cessac's study of the leaky integrate-and-fire model; Cessac was himself influenced by Hédi Soula. Galves and Löcherbach referred to the process that Cessac described as "a version in a finite dimension" of their own probabilistic model.
Prior integrate-and-fire models with stochastic characteristics relied on adding a noise term to simulate stochasticity. The Galves–Löcherbach model distinguishes itself because it is inherently stochastic, incorporating probabilistic measures directly into the calculation of spikes. It is also a model that may be applied relatively easily, from a computational standpoint, with a good ratio between cost and efficiency. It remains a non-Markovian model, since the probability of a given neuronal spike depends on the accumulated activity of the system since the last spike.
Later contributions to the model have considered the hydrodynamic limit of the interacting neuronal system, its long-range behavior, aspects of the process relevant to predicting and classifying behaviors as a function of parameters, and the generalization of the model to continuous time.
The Galves–Löcherbach model was a cornerstone of the NeuroMat project.
== See also ==
Biological neuron model
Hodgkin–Huxley model
Computational neuroscience
NeuroMat
== References == | Wikipedia/Galves–Löcherbach_model |
In neuroscience, classical cable theory uses mathematical models to calculate the electric current (and accompanying voltage) along passive neurites, particularly the dendrites that receive synaptic inputs at different sites and times. Estimates are made by modeling dendrites and axons as cylinders composed of segments with capacitances
$c_m$ and resistances $r_m$ combined in parallel (see Fig. 1). The capacitance of a neuronal fiber comes about because electrostatic forces are acting through the very thin lipid bilayer (see Figure 2). The resistance in series along the fiber $r_l$ is due to the axoplasm's significant resistance to movement of electric charge.
== History ==
Cable theory in computational neuroscience has roots leading back to the 1850s, when Professor William Thomson (later known as Lord Kelvin) began developing mathematical models of signal decay in submarine (underwater) telegraphic cables. The models resembled the partial differential equations used by Fourier to describe heat conduction in a wire.
The 1870s saw the first attempts by Hermann to model neuronal electrotonic potentials also by focusing on analogies with heat conduction. However, it was Hoorweg who first discovered the analogies with Kelvin's undersea cables in 1898 and then Hermann and Cremer who independently developed the cable theory for neuronal fibers in the early 20th century. Further mathematical theories of nerve fiber conduction based on cable theory were developed by Cole and Hodgkin (1920s–1930s), Offner et al. (1940), and Rushton (1951).
Experimental evidence for the importance of cable theory in modelling the behavior of axons began surfacing in the 1930s from work done by Cole, Curtis, Hodgkin, Sir Bernard Katz, Rushton, Tasaki and others. Two key papers from this era are those of Davis and Lorente de Nó (1947) and Hodgkin and Rushton (1946).
The 1950s saw improvements in techniques for measuring the electric activity of individual neurons. Thus cable theory became important for analyzing data collected from intracellular microelectrode recordings and for analyzing the electrical properties of neuronal dendrites. Scientists like Coombs, Eccles, Fatt, Frank, Fuortes and others now relied heavily on cable theory to obtain functional insights of neurons and for guiding them in the design of new experiments.
Later, cable theory with its mathematical derivatives allowed ever more sophisticated neuron models to be explored by workers such as Jack, Rall, Redman, Rinzel, Idan Segev, Tuckwell, Bell, and Iannella. More recently, cable theory has been applied to model electrical activity in bundled neurons in the white matter of the brain.
== Deriving the cable equation ==
Note that various conventions for rm exist. Here rm and cm, as introduced above, are measured per membrane-length unit (per meter (m)). Thus rm is measured in ohm·meters (Ω·m) and cm in farads per meter (F/m). This is in contrast to Rm (in Ω·m²) and Cm (in F/m²), which represent the specific resistance and capacitance respectively of one unit area of membrane (in m²). Thus, if the radius, a, of the axon is known, then its circumference is 2πa, and its rm and cm values can be calculated as:

$$r_m = \frac{R_m}{2\pi a} \qquad (1)$$

$$c_m = C_m\, 2\pi a \qquad (2)$$
These relationships make sense intuitively, because the greater the circumference of the axon, the greater the area for charge to escape through its membrane, and therefore the lower the membrane resistance (dividing Rm by 2πa); and the more membrane available to store charge (multiplying Cm by 2πa).
The specific electrical resistance, ρl, of the axoplasm allows one to calculate the longitudinal intracellular resistance per unit length, rl (in Ω·m⁻¹), by the equation:

$$r_l = \frac{\rho_l}{\pi a^2} \qquad (3)$$

The greater the cross-sectional area of the axon, πa², the greater the number of paths for the charge to flow through its axoplasm, and the lower the axoplasmic resistance.
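These relationships are easy to evaluate numerically. The sketch below (Python; the specific constants are rough order-of-magnitude textbook values chosen purely for illustration, not values taken from this article) computes the per-unit-length quantities for a thin axon:

```python
import math

# Illustrative specific constants (order-of-magnitude textbook values):
R_m = 2.0        # ohm * m^2  (specific membrane resistance)
C_m = 1e-2       # F / m^2    (specific membrane capacitance, ~1 uF/cm^2)
rho_l = 1.0      # ohm * m    (axoplasmic resistivity)
a = 2e-6         # m          (axon radius, 2 micrometres)

r_m = R_m / (2 * math.pi * a)        # ohm * m   per unit length, eq. (1)
c_m = C_m * 2 * math.pi * a          # F / m     per unit length, eq. (2)
r_l = rho_l / (math.pi * a ** 2)     # ohm / m   per unit length, eq. (3)

print(f"r_m = {r_m:.3e} ohm*m, c_m = {c_m:.3e} F/m, r_l = {r_l:.3e} ohm/m")
```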
Several important recent extensions of classical cable theory have introduced endogenous structures in order to analyze the effects of protein polarization within dendrites and different synaptic input distributions over the dendritic surface of a neuron.
To better understand how the cable equation is derived, first consider an idealized neuron with a perfectly sealed membrane (rm = ∞) with no loss of current to the outside, and no capacitance (cm = 0). A current injected into the fiber at position x = 0 would move along the inside of the fiber unchanged. Moving away from the point of injection and by using Ohm's law (V = IR) we can calculate the voltage change as:

$$\Delta V = -i_l\, r_l\, \Delta x \qquad (4)$$

where the negative sign is because current flows down the potential gradient.
Letting Δx go towards zero and having infinitely small increments of x, one can write (4) as:

$$\frac{\partial V}{\partial x} = -i_l\, r_l \qquad (5)$$

or

$$i_l = -\frac{1}{r_l}\,\frac{\partial V}{\partial x} \qquad (6)$$
Bringing rm back into the picture is like making holes in a garden hose. The more holes, the faster the water will escape from the hose, and the less water will travel all the way from the beginning of the hose to the end. Similarly, in an axon, some of the current traveling longitudinally through the axoplasm will escape through the membrane.
If im is the current escaping through the membrane per unit length (in m), then the total current escaping along y units must be y·im. Thus, the change of current in the axoplasm, Δil, at distance Δx from position x = 0 can be written as:

$$\Delta i_l = -i_m\, \Delta x \qquad (7)$$

or, using continuous, infinitesimally small increments:

$$\frac{\partial i_l}{\partial x} = -i_m \qquad (8)$$
$i_m$ can be expressed with yet another formula, by including the capacitance. The capacitance will cause a flow of charge (a current) towards the membrane on the side of the cytoplasm. This current is usually referred to as displacement current (here denoted $i_c$). The flow will only take place as long as the membrane's storage capacity has not been reached. $i_c$ can then be expressed as:

$$i_c = c_m\,\frac{\partial V}{\partial t} \qquad (9)$$
where $c_m$ is the membrane's capacitance and $\partial V/\partial t$ is the change in voltage over time.
The current that passes the membrane ($i_r$) can be expressed as:

$$i_r = \frac{V}{r_m} \qquad (10)$$
and because $i_m = i_r + i_c$, the following equation for $i_m$ can be derived if no additional current is added from an electrode:

$$i_m = -\frac{\partial i_l}{\partial x} = \frac{V}{r_m} + c_m\,\frac{\partial V}{\partial t} \qquad (11)$$
where $\partial i_l/\partial x$ represents the change per unit length of the longitudinal current.
Combining equations (6) and (11) gives a first version of a cable equation:

$$\frac{1}{r_l}\,\frac{\partial^2 V}{\partial x^2} = c_m\,\frac{\partial V}{\partial t} + \frac{V}{r_m} \qquad (12)$$
which is a second-order partial differential equation (PDE).
By a simple rearrangement of equation (12) (see later) it is possible to make two important terms appear, namely the length constant (sometimes referred to as the space constant) denoted $\lambda$ and the time constant denoted $\tau$. The following sections focus on these terms.
== Length constant ==
The length constant, $\lambda$ (lambda), is a parameter that indicates how far a stationary current will influence the voltage along the cable. The larger the value of $\lambda$, the farther the charge will flow. The length constant can be expressed as:

$$\lambda = \sqrt{\frac{r_m}{r_l}}$$
The larger the membrane resistance, rm, the greater the value of $\lambda$, and the more current will remain inside the axoplasm to travel longitudinally through the axon. The higher the axoplasmic resistance, $r_l$, the smaller the value of $\lambda$, the harder it will be for current to travel through the axoplasm, and the shorter the distance the current will be able to travel.
It is possible to solve equation (12) and arrive at the following equation (which is valid in steady-state conditions, i.e. when time approaches infinity):

$$V_x = V_0\, e^{-x/\lambda}$$
where $V_0$ is the depolarization at $x=0$ (the point of current injection), $e$ is the exponential constant (approximate value 2.71828), and $V_x$ is the voltage at a given distance $x$ from $x=0$. When $x=\lambda$, then

$$\frac{V_x}{V_0} = e^{-1}$$

and since

$$e^{-1} \approx 0.368,$$

when we measure $V$ at distance $\lambda$ from $x=0$ we get

$$V_\lambda = 0.368\, V_0.$$
Thus $V_\lambda$ is always 36.8 percent of $V_0$.
== Time constant ==
Neuroscientists are often interested in knowing how fast the membrane potential, $V_m$, of an axon changes in response to changes in the current injected into the axoplasm. The time constant, $\tau$, is an index that provides information about that value. $\tau$ can be calculated as:

$$\tau = r_m\, c_m$$
The larger the membrane capacitance, $c_m$, the more current it takes to charge and discharge a patch of membrane and the longer this process will take. The larger the membrane resistance $r_m$, the harder it is for a current to induce a change in membrane potential. So the higher the $\tau$, the slower the nerve impulse can travel. That means that the membrane potential (voltage across the membrane) lags more behind current injections. Response times vary from 1–2 milliseconds in neurons that are processing information that needs high temporal precision to 100 milliseconds or longer. A typical response time is around 20 milliseconds.
== Generic form and mathematical structure ==
If one multiplies equation (12) by $r_m$ on both sides of the equal sign we get:

$$\frac{r_m}{r_l}\,\frac{\partial^2 V}{\partial x^2} = r_m c_m\,\frac{\partial V}{\partial t} + V$$
and recognize $\lambda^2 = r_m/r_l$ on the left side and $\tau = c_m r_m$ on the right side. The cable equation can now be written in its perhaps best known form:

$$\lambda^2\,\frac{\partial^2 V}{\partial x^2} = \tau\,\frac{\partial V}{\partial t} + V$$
This is a 1D heat equation or diffusion equation for which many solution methods, such as Green's functions and Fourier methods, have been developed.
It is also a special degenerate case of the telegrapher's equation, where the inductance $L$ vanishes and the signal propagation speed $1/\sqrt{LC}$ is infinite.
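Because the cable equation is a diffusion-type PDE, it can also be integrated with a simple explicit finite-difference scheme. The sketch below (Python with NumPy; λ, τ, the grid, and the injection protocol are all illustrative choices) clamps a steady depolarization at the midpoint of a long cable and checks that the voltage relaxes toward the steady-state profile $V_x = V_0 e^{-x/\lambda}$ derived above:

```python
import numpy as np

lam, tau = 1.0, 1.0            # length and time constants (arbitrary units)
L, N = 10.0, 201               # cable length and number of grid points
dx = L / (N - 1)
dt = 0.2 * dx**2 * tau / lam**2    # small enough for explicit-scheme stability

V = np.zeros(N)
inj = N // 2                   # index of the current-injection site
for _ in range(20000):
    V_xx = np.zeros(N)
    V_xx[1:-1] = (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
    # lambda^2 V_xx = tau V_t + V   =>   V_t = (lambda^2 V_xx - V) / tau
    V += dt * (lam**2 * V_xx - V) / tau
    V[inj] = 1.0               # hold the injection site at a fixed depolarization

x = np.abs(np.arange(N) - inj) * dx
print("max deviation from exp(-x/lambda):", np.abs(V - np.exp(-x / lam)).max())
```

The deviation is small away from the cable ends, where the simple boundary treatment differs slightly from the infinite-cable solution.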
== See also ==
Nanophysiology
Axon
Bidomain model
Bioelectrochemistry
Biological neuron model
Dendrite
Hodgkin–Huxley model
Membrane potential
Monodomain model
Nernst–Planck equation
Patch clamp
Saltatory conduction
Soliton model in neuroscience
== References ==
Poznanski, Roman R. (2013). Mathematical Neuroscience. San Diego [California]: Academic Press.
Tuckwell, Henry C. (1988). Introduction to theoretical neurobiology. Cambridge [Cambridgeshire]: Cambridge University Press. ISBN 978-0521350969.
de Nó, Rafael Lorente (1947). A study of nerve physiology. Studies from the Rockefeller Institute for Medical Research. Reprints. Rockefeller Institute for Medical Research. Part I, 131:1–496; Part II, 132:1–548. ISBN 9780598674722. OCLC 6217290.
Lazarevich, Ivan A.; Kazantsev, Victor B. (2013). "Dendritic signal transition induced by intracellular charge in inhomogeneties". Phys. Rev. E. 88 (6): 062718. arXiv:1308.0821. Bibcode:2013PhRvE..88f2718L. doi:10.1103/PhysRevE.88.062718. PMID 24483497. S2CID 13353454.
Douglas, PK; Douglas, David B. (2019). "Reconsidering Spatial Priors in EEG Source Estimation : Does White Matter Contribute to EEG Rhythms?". 2019 7th International Winter Conference on Brain-Computer Interface (BCI). Vol. 88. pp. 1–12. arXiv:2111.08939. doi:10.1109/IWW-BCI.2019.8737307. ISBN 978-1-5386-8116-9. S2CID 195064621.
== Notes == | Wikipedia/Cable_theory |
The soliton hypothesis in neuroscience is a model that claims to explain how action potentials are initiated and conducted along axons, based on a thermodynamic theory of nerve pulse propagation. It proposes that the signals travel along the cell's membrane in the form of certain kinds of solitary sound (or density) pulses that can be modeled as solitons. The model is proposed as an alternative to the Hodgkin–Huxley model, in which action potentials arise as follows: voltage-gated ion channels in the membrane open and allow sodium ions to enter the cell (inward current); the resulting depolarization opens nearby voltage-gated sodium channels, thus propagating the action potential; the transmembrane potential is restored by delayed opening of potassium channels. Soliton hypothesis proponents assert that energy is mainly conserved during propagation, apart from dissipation losses, and that measured temperature changes are completely inconsistent with the Hodgkin–Huxley model.
The soliton model (and sound waves in general) depends on adiabatic propagation, in which the energy provided at the source of excitation is carried adiabatically through the medium, i.e. the plasma membrane. The measurement of a temperature pulse and the claimed absence of heat release during an action potential were the basis of the proposal that nerve impulses are an adiabatic phenomenon, much like sound waves. Synaptically evoked action potentials in the electric organ of the electric eel are associated with substantial positive (only) heat production followed by active cooling to ambient temperature. In the garfish olfactory nerve, the action potential is associated with a biphasic temperature change; however, there is a net production of heat. These published results are inconsistent with the soliton model, and the authors interpret their work in terms of the Hodgkin–Huxley model: the initial sodium current releases heat as the membrane capacitance is discharged; heat is absorbed during recharge of the membrane capacitance as potassium ions move with their concentration gradient but against the membrane potential. This mechanism is called the "condenser theory". Additional heat may be generated by membrane configuration changes driven by the changes in membrane potential. An increase in entropy during depolarization would release heat; an entropy increase during repolarization would absorb heat. However, any such entropic contributions are incompatible with the Hodgkin–Huxley model.
== History ==
Ichiji Tasaki pioneered a thermodynamic approach to the phenomenon of nerve pulse propagation which identified several phenomena that were not included in the Hodgkin–Huxley model. Along with measuring various non-electrical components of a nerve impulse, Tasaki investigated the physical chemistry of phase transitions in nerve fibers and its importance for nerve pulse propagation. Based on Tasaki's work, Konrad Kaufman proposed sound waves as a physical basis for nerve pulse propagation in an unpublished manuscript. The basic idea at the core of the soliton model is the balancing of the intrinsic dispersion of the two-dimensional sound waves in the membrane by nonlinear elastic properties near a phase transition. The initial impulse can acquire a stable shape under such circumstances, in general known as a solitary wave. Solitons are the simplest solution of the set of nonlinear wave equations governing such phenomena and were applied to model the nerve impulse in 2005 by Thomas Heimburg and Andrew D. Jackson, both at the Niels Bohr Institute of the University of Copenhagen. Heimburg heads the institute's Membrane Biophysics Group. The biological physics group of Matthias Schneider has studied the propagation of two-dimensional sound waves in lipid interfaces and their possible role in biological signalling.
== Justification ==
The model starts with the observation that cell membranes always have a freezing point (the temperature below which the consistency changes from fluid to gel-like) only slightly below the organism's body temperature, and this allows for the propagation of solitons. An action potential traveling along a mixed nerve results in a slight increase in temperature followed by a decrease in temperature. Soliton model proponents claim that no net heat is released during the overall pulse and that the observed temperature changes are inconsistent with the Hodgkin–Huxley model. However, this is untrue: the Hodgkin–Huxley model predicts a biphasic release and absorption of heat. In addition, the action potential causes a slight local thickening of the membrane and a force acting outwards; this effect is not predicted by the Hodgkin–Huxley model but does not contradict it, either.
The soliton model attempts to explain the electrical currents associated with the action potential as follows: the traveling soliton locally changes the density and thickness of the membrane, and since the membrane contains many charged and polar substances, this will result in an electrical effect, akin to piezoelectricity. Indeed, such nonlinear sound waves have now been shown to exist at lipid interfaces that show superficial similarity to action potentials (electro-opto-mechanical coupling, velocities, biphasic pulse shape, threshold for excitation, etc.). Furthermore, the waves remain localized in the membrane and do not spread out into the surroundings due to an impedance mismatch.
== Formalism ==
The soliton representing the action potential of nerves is the solution of the partial differential equation
$$\frac{\partial^2 \Delta\rho}{\partial t^2} = \frac{\partial}{\partial x}\left[\left(c_0^2 + p\,\Delta\rho + q\,\Delta\rho^2\right)\frac{\partial \Delta\rho}{\partial x}\right] - h\,\frac{\partial^4 \Delta\rho}{\partial x^4},$$
where t is time and x is the position along the nerve axon. Δρ is the change in membrane density under the influence of the action potential, c0 is the sound velocity of the nerve membrane, p and q describe the nature of the phase transition and thereby the nonlinearity of the elastic constants of the nerve membrane. The parameters c0, p and q are dictated by the thermodynamic properties of the nerve membrane and cannot be adjusted freely. They have to be determined experimentally. The parameter h describes the frequency dependence of the sound velocity of the membrane (dispersion relation). The above equation does not contain any fit parameters. It is formally related to the Boussinesq approximation for solitons in water canals. The solutions of the above equation possess a limiting maximum amplitude and a minimum propagation velocity that is similar to the pulse velocity in myelinated nerves. Under restrictive assumptions, there exist periodic solutions that display hyperpolarization and refractory periods.
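For readers who want to experiment, below is a rough numerical sketch of this equation (Python with NumPy) using an explicit time-stepping scheme on a periodic domain. The coefficient values for $c_0^2$, $p$, $q$ and $h$ are illustrative stand-ins, not the thermodynamically measured membrane parameters used in the published model:

```python
import numpy as np

# Illustrative (not fitted) coefficients; in the published model c0, p, q, h
# are determined from membrane thermodynamics.
c0_sq, p, q, h = 1.0, -10.0, 30.0, 0.05
N, L = 512, 50.0
dx = L / N
dt = 1e-4
x = np.arange(N) * dx

def d_dx(f):    # centered first derivative, periodic boundaries
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def d2_dx2(f):  # centered second derivative, periodic boundaries
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

rho = 0.3 * np.exp(-((x - L / 2) ** 2))    # localized density pulse
vel = np.zeros(N)                          # d(rho)/dt

for _ in range(50_000):
    flux = (c0_sq + p * rho + q * rho**2) * d_dx(rho)
    acc = d_dx(flux) - h * d2_dx2(d2_dx2(rho))   # right-hand side of the PDE
    vel += dt * acc
    rho += dt * vel

print("pulse max after integration:", rho.max())
```

For the amplitudes used here the nonlinear coefficient $(c_0^2 + p\Delta\rho + q\Delta\rho^2)$ stays positive, which keeps this short explicit integration stable.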
== Role of ion channels ==
Advocates of the soliton model claim that it explains several aspects of the action potential, which are not explained by the Hodgkin–Huxley model. Since it is of thermodynamic nature it does not address the properties of single macromolecules like ion channel proteins on a molecular scale. It is rather assumed that their properties are implicitly contained in the macroscopic thermodynamic properties of the nerve membranes. The soliton model predicts membrane current fluctuations during the action potential. These currents are of similar appearance as those reported for ion channel proteins. They are thought to be caused by lipid membrane pores spontaneously generated by the thermal fluctuations. Such thermal fluctuations explain the specific ionic selectivity or the specific time-course of the response to voltage changes on the basis of their effect on the macroscopic susceptibilities of the system.
== Application to anesthesia ==
The authors claim that their model explains the previously obscure mode of action of numerous anesthetics. The Meyer–Overton observation holds that the strength of a wide variety of chemically diverse anesthetics is proportional to their lipid solubility, suggesting that they do not act by binding to specific proteins such as ion channels but instead by dissolving in and changing the properties of the lipid membrane. Dissolving substances in the membrane lowers the membrane's freezing point, and the resulting larger difference between body temperature and freezing point inhibits the propagation of solitons. By increasing pressure, lowering pH or lowering temperature, this difference can be restored back to normal, which should cancel the action of anesthetics: this is indeed observed. The amount of pressure needed to cancel the action of an anesthetic of a given lipid solubility can be computed from the soliton model and agrees reasonably well with experimental observations.
== Differences between model predictions and experimental observations ==
The following is a list of some of the disagreements between experimental observations and the "soliton model":
Antidromic invasion of soma from axon
An action potential initiated anywhere on an axon will travel in an antidromic (backward) direction to the neuron soma (cell body) without loss of amplitude and produce a full-amplitude action potential in the soma. As the membrane area of the soma is orders of magnitude larger than the area of the axon, conservation of energy requires that an adiabatic mechanical wave decrease in amplitude. Since the absence of heat production is one of the claimed justifications of the 'soliton model', this is particularly difficult to explain within that model.
Persistence of action potential over wide temperature range
An important assumption of the soliton model is the presence of a phase transition near the ambient temperature of the axon ("Formalism", above). Then, rapid change of temperature away from the phase transition temperature would necessarily cause large changes in the action potential. Below the phase transition temperature, the soliton wave would not be possible. Yet, action potentials are present at 0 °C. The time course is slowed in a manner predicted by the measured opening and closing kinetics of the Hodgkin-Huxley ion channels.
Collisions
Nerve impulses traveling in opposite directions annihilate each other on collision. On the other hand, mechanical waves do not annihilate but pass through each other. Soliton model proponents have attempted to show that action potentials can pass through a collision; however, collision annihilation of orthodromic and antidromic action potentials is a routinely observed phenomenon in neuroscience laboratories and is the basis of a standard technique for identification of neurons. Solitons pass each other on collision (see the figure "Collision of Solitons"); solitary waves in general can pass through, annihilate, or bounce off each other, and solitons are only a special case of such solitary waves.
Ionic currents under voltage clamp
The voltage clamp, used by Hodgkin and Huxley (1952) (Hodgkin-Huxley Model) to experimentally dissect the action potential in the squid giant axon, uses electronic feedback to measure the current necessary to hold membrane voltage constant at a commanded value. A silver wire, inserted into the interior of the axon, forces a constant membrane voltage along the length of the axon. Under these circumstances, there is no possibility of a traveling 'soliton'. Any thermodynamic changes are very different from those resulting from an action potential. Yet, the measured currents accurately reproduce the action potential.
Single channel currents
The patch clamp technique isolates a microscopic patch of membrane on the tip of a glass pipette. It is then possible to record currents from single ionic channels. There is no possibility of propagating solitons or thermodynamic changes. Yet, the properties of these channels (temporal response to voltage jumps, ionic selectivity) accurately predict the properties of the macroscopic currents measured under conventional voltage clamp.
Selective ionic conductivity
The current underlying the action potential depolarization is selective for sodium. Repolarization depends on a selective potassium current. These currents have very specific responses to voltage changes which quantitatively explain the action potential. Substitution of non-permeable ions for sodium abolishes the action potential. The 'soliton model' cannot explain either the ionic selectivity or the responses to voltage changes.
Pharmacology
The drug tetrodotoxin (TTX) blocks action potentials at extremely low concentrations. The site of action of TTX on the sodium channel has been identified. Dendrotoxins block the potassium channels. These drugs produce quantitatively predictable changes in the action potential. The 'soliton model' provides no explanation for these pharmacological effects.
== Action waves ==
A recent theoretical model, proposed by Ahmed El Hady and Benjamin Machta, proposes that there is a mechanical surface wave which co-propagates with the electrical action potential. These surface waves are called "action waves". In the El Hady–Machta model, these co-propagating waves are driven by voltage changes across the membrane caused by the action potential.
== See also ==
Biological neuron models
Hodgkin–Huxley model
Vector soliton
== Sources ==
Federico Faraci (2013) "The 60th anniversary of the Hodgkin-Huxley model: a critical assessment from a historical and modeler’s viewpoint"
Revathi Appali, Ursula van Rienen, Thomas Heimburg (2012). "A comparison of the Hodgkin-Huxley model and the Soliton theory for the Action Potential in Nerves".
Action Waves in the Brain, The Guardian, 1 May 2015.
Ichiji Tasaki (1982) "Physiology and Electrochemistry of Nerve Fibers"
Konrad Kaufman (1989) "Action Potentials and Electrochemical Coupling in the Macroscopic Chiral Phospholipid Membrane".
Andersen, Jackson and Heimburg, "Towards a thermodynamic theory of nerve pulse propagation".
Pradip Das; W.H. Schwarz (4 November 1994). "Solitons in cell membrane". Physical Review E. 51 (4): 3588–3612. Bibcode:1995PhRvE..51.3588D. doi:10.1103/PhysRevE.51.3588. PMID 9963042.
Revisiting the mechanics of the action potential, Princeton University Journal watch, 1 April 2015.
On the (sound) track of anesthetics, Eurekalert, according to a press release University of Copenhagen, 6 March 2007
Kaare Græsbøll (2006). "Function of Nerves — Action of Anesthetics" (PDF). Gamma. 143. Archived from the original (PDF) on 2016-03-03. Retrieved 2007-03-11. An elementary introduction.
Solitary acoustic waves observed to propagate at a lipid membrane interface, Phys.org June 20, 2014
== References == | Wikipedia/Soliton_model |
The Hindmarsh–Rose model of neuronal activity aims to study the spiking-bursting behavior of the membrane potential observed in experiments made with a single neuron. The relevant variable is the membrane potential, x(t), which is written in dimensionless units. There are two more variables, y(t) and z(t), which take into account the transport of ions across the membrane through the ion channels. The transport of sodium and potassium ions is made through fast ion channels and its rate is measured by y(t), which is called the spiking variable. z(t) corresponds to an adaptation current, which is incremented at every spike, leading to a decrease in the firing rate. Then, the Hindmarsh–Rose model has the mathematical form of a system of three nonlinear ordinary differential equations on the dimensionless dynamical variables x(t), y(t), and z(t). They read:
$$\begin{aligned}\frac{dx}{dt}&=y+\phi(x)-z+I,\\ \frac{dy}{dt}&=\psi(x)-y,\\ \frac{dz}{dt}&=r\left[s(x-x_R)-z\right],\end{aligned}$$
where
$$\begin{aligned}\phi(x)&=-ax^3+bx^2,\\ \psi(x)&=c-dx^2.\end{aligned}$$
The model has eight parameters: a, b, c, d, r, s, xR and I. It is common to fix some of them and let the others be control parameters. Usually the parameter I, which represents the current entering the neuron, is taken as a control parameter. Other control parameters used often in the literature are a, b, c, d, or r, the first four modeling the working of the fast ion channels and the last one that of the slow ion channels. Frequently, the parameters held fixed are s = 4 and xR = −8/5. When a, b, c, d are fixed, the values given are a = 1, b = 3, c = 1, and d = 5. The parameter r governs the time scale of the neural adaptation and is of the order of 10⁻³, and I ranges between −10 and 10.
The third state equation:
$$\frac{dz}{dt}=r\left[s(x-x_R)-z\right],$$
allows a great variety of dynamic behaviors of the membrane potential, described by variable x, including unpredictable behavior, which is referred to as chaotic dynamics. This makes the Hindmarsh–Rose model relatively simple and provides a good qualitative description of the many different patterns that are observed empirically.
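A minimal simulation sketch in Python with NumPy, using the fixed parameter values quoted above; the choices $r=0.001$ and $I=2.0$ are illustrative picks from the stated ranges, and forward Euler with a small time step is used purely for simplicity:

```python
import numpy as np

a, b, c, d = 1.0, 3.0, 1.0, 5.0
s, x_R = 4.0, -8.0 / 5.0
r, I = 0.001, 2.0          # illustrative: r of order 1e-3, I within [-10, 10]

def hr_step(x, y, z, dt=0.01):
    # Forward-Euler step of the three coupled ODEs.
    dx = y - a * x**3 + b * x**2 - z + I     # dx/dt = y + phi(x) - z + I
    dy = c - d * x**2 - y                    # dy/dt = psi(x) - y
    dz = r * (s * (x - x_R) - z)             # dz/dt = r [s (x - x_R) - z]
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = -1.6, -10.0, 2.0                   # arbitrary initial condition
xs = np.empty(500_000)                       # long run so the slow variable z acts
for k in range(xs.size):
    x, y, z = hr_step(x, y, z)
    xs[k] = x

print("membrane-potential range:", xs.min(), xs.max())
```

Plotting xs against time shows the characteristic alternation between bursts of spikes and quiescent intervals as the adaptation variable z slowly rises and falls.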
== See also ==
Biological neuron models
Ephaptic coupling
Hodgkin–Huxley model
Computational neuroscience
Neural oscillation
Rulkov map
Chialvo map
== References ==
Hindmarsh J. L.; Rose R. M. (1984). "A model of neuronal bursting using three coupled first order differential equations". Proceedings of the Royal Society of London. Series B. Biological Sciences. 221 (1222): 87–102. Bibcode:1984RSPSB.221...87H. doi:10.1098/rspb.1984.0024. PMID 6144106. S2CID 117149. | Wikipedia/Hindmarsh–Rose_model |
The theta model, or Ermentrout–Kopell canonical model, is a biological neuron model originally developed to mathematically describe neurons in the animal Aplysia. The model is particularly well-suited to describe neural bursting, which is characterized by periodic transitions between rapid oscillations in the membrane potential followed by quiescence. This bursting behavior is often found in neurons responsible for controlling and maintaining steady rhythms such as breathing, swimming, and digesting. Of the three main classes of bursting neurons (square wave bursting, parabolic bursting, and elliptic bursting), the theta model describes parabolic bursting, which is characterized by a parabolic frequency curve during each burst.
The model consists of one variable that describes the membrane potential of a neuron along with an input current. The single variable of the theta model obeys relatively simple equations, allowing for analytic, or closed-form solutions, which are useful for understanding the properties of parabolic bursting neurons. In contrast, other biophysically accurate neural models such as the Hodgkin–Huxley model and Morris–Lecar model consist of multiple variables that cannot be solved analytically, requiring numerical integration to solve.
Similar models include the quadratic integrate and fire (QIF) model, which differs from the theta model only by a change of variables, and Plant's model, which consists of Hodgkin–Huxley type equations and also differs from the theta model by a series of coordinate transformations.
Despite its simplicity, the theta model offers enough complexity in its dynamics that it has been used for a wide range of theoretical neuroscience research as well as in research beyond biology, such as in artificial intelligence.
== Background and history ==
Bursting is "an oscillation in which an observable [part] of the system, such as voltage or chemical concentration, changes periodically between an active phase of rapid spike oscillations (the fast sub-system) and a phase of quiescence". Bursting comes in three distinct forms: square-wave bursting, parabolic bursting, and elliptic bursting. There exist some models that do not fit neatly into these categories by qualitative observation, but it is possible to sort such models by their topology (i.e. such models can be sorted "by the structure of the fast subsystem").
All three forms of bursting are capable of beating and periodic bursting. Periodic bursting (or just bursting) is of more interest because many phenomena are controlled by, or arise from, bursting. For example, bursting due to a changing membrane potential is common in various neurons, including but not limited to cortical chattering neurons, thalamocortical neurons, and pacemaker neurons. Pacemakers in general are known to burst and synchronize as a population, thus generating a robust rhythm that can maintain repetitive tasks like breathing, walking, and eating. Beating occurs when a cell bursts continuously with no periodic quiescent periods, but beating is often considered to be an extreme case and is rarely of primary interest.
Bursting cells are important for motor generation and synchronization. For example, the pre-Bötzinger complex in the mammalian brain stem contains many bursting neurons that control autonomous breathing rhythms. Various neocortical neurons (i.e. cells of the neocortex) are capable of bursting, which "contribute significantly to [the] network behavior [of neocortical neurons]". The R15 neuron of the abdominal ganglion in Aplysia, hypothesized to be a neurosecretory cell (i.e. a cell that produces hormones), is known to produce bursts characteristic of neurosecretory cells. In particular, it is known to produce parabolic bursts.
Since many biological processes involve bursting behavior, there is a wealth of various bursting models in scientific literature. For instance, there exist several models for interneurons and cortical spiking neurons. However, the literature on parabolic bursting models is relatively scarce.
Parabolic bursting models are mathematical models that mimic parabolic bursting in real biological systems. Each burst of a parabolic burster has a characteristic feature in the burst structure itself – the frequency at the beginning and end of the burst is low relative to the frequency in the middle of the burst. A frequency plot of one burst resembles a parabola, hence the name "parabolic burst". Furthermore, unlike elliptic or square-wave bursting, there is a slow modulating wave which, at its peak, excites the cell enough to generate a burst and inhibits the cell in regions near its minimum. As a result, the neuron periodically transitions between bursting and quiescence.
Parabolic bursting has been studied most extensively in the R15 neuron, which is one of six types of neurons of the Aplysia abdominal ganglion and one of thirty neurons comprising the abdominal ganglion. The Aplysia abdominal ganglion was studied and extensively characterized because its relatively large neurons and proximity of the neurons to the surface of the ganglion made it an ideal and "valuable preparation for cellular electrophysical studies".
Early attempts to model parabolic bursting were for specific applications, often related to studies of the R15 neuron. This is especially true of R. E. Plant and Carpenter, whose combined works comprise the bulk of parabolic bursting models prior to Ermentrout and Kopell's canonical model.
Though there was no specific mention of the term "parabolic bursting" in Plant's papers, Plant's model(s) do involve a slow, modulating oscillation which control bursting in the model(s). This is, by definition, parabolic bursting. Both of Plant's papers on the topic involve a model derived from the Hodgkin–Huxley equations and include extra conductances, which only add to the complexity of the model.
Carpenter developed her model primarily for a square wave burster. The model was capable of producing a small variety of square wave bursts and produced parabolic bursts as a consequence of adding an extra conductance. However, the model applied to only spatial propagation down axons and not situations where oscillations are limited to a small region in space (i.e. it was not suited for "space-clamped" situations).
The lack of a simple, generalizable, space-clamped, parabolic bursting model motivated Ermentrout and Kopell to develop the theta model.
== Characteristics of the model ==
=== General equations ===
It is possible to describe a multitude of parabolic bursting cells by deriving a simple mathematical model, called a canonical model. Derivation of the Ermentrout and Kopell canonical model begins with the general form for parabolic bursting, and notation will be fixed to clarify the discussion. The letters
$f$, $g$, $h$, $I$ are reserved for functions; $x$, $y$, $\theta$ for state variables; $\varepsilon$, $p$, and $q$ for scalars.
In the following generalized system of equations for parabolic bursting, the values of $f$ describe the membrane potential and ion channels, typical of many conductance-based biological neuron models. Slow oscillations are controlled by $h$, and ultimately described by $y$. These slow oscillations can be, for example, slow fluctuations in calcium concentration inside a cell. The function $g$ couples $\dot{y}$ to $\dot{x}$, thereby allowing the second system, $\dot{y}$, to influence the behavior of the first system, $\dot{x}$. In more succinct terms, "$x$ generates the spikes and $y$ generates the slow waves". The equations are:
$$\dot{x} = f(x) + \varepsilon^2\, g(x,y,\varepsilon),$$

$$\dot{y} = \varepsilon\, h(x,y,\varepsilon),$$
where $x$ is a vector with $p$ entries (i.e. $x\in\mathbb{R}^p$), $y$ is a vector with $q$ entries (i.e. $y\in\mathbb{R}^q$), $\varepsilon$ is small and positive, and $f$, $g$, $h$ are smooth (i.e. infinitely differentiable). Additional constraints are required to guarantee parabolic bursting. First, $\dot{x}=f(x)$ must produce a circle in phase space that is invariant, meaning it does not change under certain transformations. This circle must also be attracting in $\mathbb{R}^2$ with a critical point located at $x=0$. The second criterion requires that when $\dot{y}=h(0,y,0)$, there exists a stable limit cycle solution. These criteria can be summarized by the following points:
When $\varepsilon=0$, $\dot{x}=f(x)$ "has an attracting invariant circle with a single critical point", with the critical point located at $x=0$, and
When $x=0$, $\dot{y}=h(0,y,0)$ has a stable limit cycle solution.
The theta model can be used in place of any parabolic bursting model that satisfies the assumptions above.
=== Model equations and properties ===
The theta model is a reduction of the generalized system from the previous section and takes the form,
$$\frac{d\theta}{dt} = 1 - \cos\theta + (1+\cos\theta)\,I(t), \qquad \theta\in S^1.$$
This model is one of the simplest excitable neuron models. The state variable $\theta$ represents the angle in radians, and the input function, $I(t)$, is typically chosen to be periodic. Whenever $\theta$ reaches the value $\theta=\pi$, the model is said to produce a spike.
The theta model is capable of a single saddle-node bifurcation and can be shown to be the "normal form for the saddle-node on a limit cycle bifurcation". When $I<0$, the system is excitable, i.e., given an appropriate perturbation the system will produce a spike. Incidentally, when viewed in the plane ($\mathbb{R}^2$), the unstable critical point is actually a saddle point because $S^1$ is attracting in $\mathbb{R}^2$. When $I>0$, $\dot{\theta}$ is also positive, and the system will give rise to a limit cycle. Therefore, the bifurcation point is located at $I(t)=0$.
Near the bifurcation point, the theta model resembles the quadratic integrate and fire model:
$$\frac{dx}{dt} = x^2 + I.$$
For $I>0$, the solutions of this equation blow up in finite time. By resetting the trajectory $x(t)$ to $-\infty$ when it reaches $+\infty$, the total period is then

$$T = \frac{\pi}{\sqrt{I}}$$

(true for both the theta and QIF models).
Therefore, the period diverges as $I\rightarrow 0^{+}$ and the frequency converges to zero.
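The period formula can be checked directly: the time taken to travel from $-\infty$ to $+\infty$ under $\dot{x}=x^2+I$ is $\int_{-\infty}^{\infty} dx/(x^2+I)$, which equals $\pi/\sqrt{I}$. A quick numerical confirmation (a small sketch assuming Python with SciPy):

```python
import numpy as np
from scipy.integrate import quad

for I in (0.25, 1.0, 4.0):
    # Travel time from -inf to +inf under dx/dt = x^2 + I
    T, _ = quad(lambda x: 1.0 / (x**2 + I), -np.inf, np.inf)
    print(f"I = {I}: numeric T = {T:.6f}, pi/sqrt(I) = {np.pi / np.sqrt(I):.6f}")
```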
==== Example ====
When $I(t)$ is some slow wave which can be both negative and positive, the system is capable of producing parabolic bursts. Consider the simple example $I(t):=\sin(\alpha t)$, where $\alpha$ is relatively small. Then for $\alpha t\in(0,\pi)$, $I(t)$ is strictly positive and $\theta$ makes multiple passes through the angle $\pi$, resulting in multiple spikes. Note that whenever $\alpha t$ is near zero or $\pi$, the theta neuron will spike at a relatively low frequency, and whenever $\alpha t$ is near $\pi/2$ the neuron will spike with very high frequency. When $\alpha t=\pi$, the frequency of spikes is zero, since the period is infinite: $\theta$ can no longer pass through $\theta=\pi$. Finally, for $\alpha t\in(\pi,2\pi)$, the neuron is excitable and will no longer burst. This qualitative description highlights the characteristics that make the theta model a parabolic bursting model. Not only does the model have periods of quiescence between bursts which are modulated by a slow wave, but the frequency of spikes at the beginning and end of each burst is low relative to the frequency at the middle of the burst.
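This qualitative picture is easy to reproduce numerically. The sketch below (Python with NumPy) integrates the theta model under the slow drive $I(t)=\sin(\alpha t)$ and records spike times; the inter-spike intervals are long at the edges of the burst and short in its middle, which is exactly the parabolic frequency profile. The values of $\alpha$, the time step, and the run length are illustrative choices:

```python
import numpy as np

alpha, dt = 0.01, 0.001     # slow-wave rate ("relatively small") and Euler step
theta, t = 0.0, 0.0
spike_times = []

while t < 2 * np.pi / alpha:                 # one full period of the slow wave
    I = np.sin(alpha * t)
    theta += dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * I)
    if theta > np.pi:                        # crossing theta = pi is a spike
        spike_times.append(t)
        theta -= 2 * np.pi                   # wrap back onto the circle S^1
    t += dt

isi = np.diff(spike_times)                   # inter-spike intervals in the burst
print(f"{len(spike_times)} spikes; ISI range {isi.min():.1f} to {isi.max():.1f}")
```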
=== Derivation ===
The derivation comes in the form of two lemmas in Ermentrout and Kopell (1986). Lemma 1, in summary, states that when viewing the general equations above in a subset of $S^1\times\mathbb{R}^2$, the equations take the form:

$$\dot{x}_1 = \overline{f}(x_1) + \varepsilon^2\,\overline{g}(x_1,y,\varepsilon), \qquad x_1\in S^1,$$

$$\dot{y} = \varepsilon\,\overline{h}(x_1,y,\varepsilon), \qquad y\in\mathbb{R}^q.$$
By lemma 2 in Ermentrout and Kopell 1986, "There exists a change of coordinates... and a constant, c, such that in new coordinates, the two equations above converge pointwise as $\varepsilon\rightarrow 0$ to the equations

$$\dot{\theta} = (1-\cos\theta) + (1+\cos\theta)\,\overline{g}(0,y,0),$$

$$\dot{y} = \frac{1}{c}\,\overline{h}(0,y,0),$$

for all $\theta\neq\pi$. Convergence is uniform except near $\theta=\pi$." (Ermentrout and Kopell, 1986). By letting $I(t):=\overline{g}(0,y,0)$, the resemblance to the theta model is obvious.
=== Phase response curve ===
In general, given a scalar phase model of the form

$$\dot{\theta} = f(\theta) + g(\theta)\,S(t),$$

where $S(t)$ represents the perturbation current, a closed-form solution of the phase response curve (PRC) does not exist.
However, the theta model is a special case of such an oscillator and happens to have a closed-form solution for the PRC. The theta model is recovered by defining $f$ and $g$ as

$$f(\theta) = (1-\cos\theta) + I\,(1+\cos\theta),$$

$$g(\theta) = (1+\cos\theta).$$
In the appendix of Ermentrout 1996, the PRC is shown to be $Z(\theta) = K\,(1+\cos\theta)$, where $K$ is a constant.
== Similar models ==
=== Plant's model ===
The authors of Soto-Treviño et al. (1996) discuss in great detail the similarities between Plant's (1976) model and the theta model. At first glance, the mechanisms of bursting in the two systems are very different: in Plant's model, there are two slow oscillations – one for conductance of a specific current and one for the concentration of calcium. The calcium oscillations are active only when the membrane potential is capable of oscillating. This contrasts sharply with the theta model, in which one slow wave modulates the burst of the neuron and the slow wave has no dependence upon the bursts. Despite these differences, the theta model is shown to be similar to Plant's (1976) model by a series of coordinate transformations. In the process, Soto-Treviño et al. discovered that the theta model was more general than originally believed.
=== Quadratic integrate-and-fire ===
The quadratic integrate-and-fire (QIF) model was created by Latham et al. in 2000 to explore the many questions related to networks of neurons with low firing rates. It was unclear to Latham et al. why networks of neurons with "standard" parameters were unable to generate sustained low frequency firing rates, while networks with low firing rates were often seen in biological systems.
According to Gerstner and Kistler (2002), the quadratic integrate-and-fire (QIF) model is given by the following differential equation:
$$\tau\,\dot{u} = a_0\,(u-u_{\text{rest}})(u-u_c) + R_m I,$$
where $a_0$ is a strictly positive scalar, $u$ is the membrane potential, $u_{\text{rest}}$ is the resting potential, $u_c$ is the minimum potential necessary for the membrane to produce an action potential, $R_m$ is the membrane resistance, $\tau$ the membrane time constant, and $u_c > u_{\text{rest}}$. When there is no input current (i.e. $I=0$), the membrane potential quickly returns to rest following a perturbation. When the input current, $I$, is large enough, the membrane potential ($u$) surpasses its firing threshold and rises rapidly (indeed, it reaches arbitrarily large values in finite time); this represents the peak of the action potential. To simulate the recovery after the action potential, the membrane voltage is then reset to a lower value $u_r$. To avoid dealing with arbitrarily large values in simulation, researchers will often set an upper limit on the membrane potential, above which the membrane potential will be reset; for example Latham et al. (2000) reset the voltage from +20 mV to −80 mV. This voltage reset constitutes an action potential.
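A minimal QIF simulation sketch (Python with NumPy), using the numerical reset convention just described; all parameter values here are illustrative assumptions, not the ones used by Latham et al.:

```python
import numpy as np

tau, a0, R_m = 10.0, 0.02, 1.0       # ms, 1/mV, arbitrary resistance unit
u_rest, u_c = -65.0, -50.0           # resting and critical potentials (mV)
u_peak, u_reset = 20.0, -80.0        # numerical cutoff and reset values (mV)
I, dt = 50.0, 0.01                   # drive strong enough to elicit spiking

u = u_rest
spike_times = []
for step in range(100_000):          # 1000 ms of simulated time
    du = (a0 * (u - u_rest) * (u - u_c) + R_m * I) / tau
    u += dt * du
    if u >= u_peak:                  # u rises very rapidly near a spike;
        spike_times.append(step * dt)
        u = u_reset                  # the reset constitutes the action potential

print("spike count in 1 s:", len(spike_times))
```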
The theta model is very similar to the QIF model, since the theta model differs from the QIF model by means of a simple coordinate transform. By scaling the voltage appropriately and letting $\Delta I$ be the change in current from the minimum current required to elicit a spike, the QIF model can be rewritten in the form

$$\dot{u} = u^2 + \Delta I.$$
Similarly, the theta model can be rewritten as

$$\dot{\theta} = 1 - \cos\theta + (1+\cos\theta)\,\Delta I.$$
The following proof will show that the QIF model becomes the theta model given an appropriate choice for the coordinate transform.
Define $u(t) = \tan(\theta/2)$. Recall that $d\tan(x)/dx = 1/\cos^2(x)$, so taking the derivative yields

$$\dot{u} = \frac{1}{\cos^2\left(\frac{\theta}{2}\right)}\,\frac{1}{2}\,\dot{\theta} = u^2 + \Delta I.$$
An additional substitution and rearranging in terms of $\theta$ yields

$$\dot{\theta} = 2\left[\cos^2\left(\tfrac{\theta}{2}\right)\tan^2\left(\tfrac{\theta}{2}\right) + \cos^2\left(\tfrac{\theta}{2}\right)\Delta I\right] = 2\left[\sin^2\left(\tfrac{\theta}{2}\right) + \cos^2\left(\tfrac{\theta}{2}\right)\Delta I\right].$$
Using the trigonometric identities $\cos^2(x/2) = \frac{1+\cos x}{2}$, $\sin^2(x/2) = \frac{1-\cos x}{2}$, and $\dot{\theta}$ as defined above, we have that

$$\dot{\theta} = 2\left[\frac{1-\cos\theta}{2} + \left(\frac{1+\cos\theta}{2}\right)\Delta I\right] = 1 - \cos\theta + (1+\cos\theta)\,\Delta I.$$
Therefore, there exists a change of coordinates, namely $u(t) = \tan(\theta/2)$, which transforms the QIF model into the theta model. The reverse transformation also exists, and is attained by taking the inverse of the first transformation.
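This equivalence can also be verified numerically: integrating the two models from matched initial conditions, the relation $u(t) = \tan(\theta(t)/2)$ should hold along the trajectories up to integration error. A small sketch (Python with NumPy; the drive and step size are arbitrary):

```python
import numpy as np

dI, dt = 0.5, 1e-4
theta = -1.0
u = np.tan(theta / 2)                # matched initial conditions

for _ in range(10_000):              # integrate both models side by side
    theta += dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * dI)
    u += dt * (u**2 + dI)

print(np.tan(theta / 2), u)          # the two values should nearly agree
```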
== Applications ==
=== Neuroscience ===
==== Lobster stomatogastric ganglion ====
Though the theta model was originally used to model slow cytoplasmic oscillations that modulate fast membrane oscillations in a single cell, Ermentrout and Kopell found that the theta model could be applied just as easily to systems of two electrically coupled cells, such that the slow oscillations of one cell modulate the bursts of the other. Such cells serve as the central pattern generator (CPG) of the pyloric system in the lobster stomatogastric ganglion. In such a system, a slow oscillator, called the anterior burster (AB) cell, modulates the bursting cell called the pyloric dilator (PD), resulting in parabolic bursts.
==== Visual cortex ====
A group led by Boergers used the theta model to explain why exposure to multiple simultaneous stimuli can reduce the response of the visual cortex below the normal response to a single (preferred) stimulus. Their computational results showed that this may happen due to strong stimulation of a large group of inhibitory neurons. This effect not only inhibits neighboring populations, but has the extra consequence of leaving the inhibitory neurons in disarray, thus increasing the effectiveness of inhibition.
=== Theta networks ===
Osan et al. (2002) found that in a network of theta neurons, there exist two different types of waves that propagate smoothly over the network, given a sufficiently large coupling strength. Such traveling waves are of interest because they are frequently observed in pharmacologically treated brain slices, but are hard to measure in intact animal brains. The authors used a network of theta models in favor of a network of leaky integrate-and-fire (LIF) models due to two primary advantages: first, the theta model is continuous; second, the theta model retains information about "the delay between the crossing of the spiking threshold and the actual firing of an action potential". The LIF model satisfies neither condition.
=== Artificial intelligence ===
==== Steepest gradient descent learning rule ====
The theta model can also be applied to research beyond the realm of biology. McKennoch et al. (2008) derived a steepest gradient descent learning rule based on theta neuron dynamics. Their model is based on the assumption that "intrinsic neuron dynamics are sufficient to achieve consistent time coding, with no need to involve the precise shape of postsynaptic currents..." contrary to similar models like SpikeProp and Tempotron, which depend heavily on the shape of the postsynaptic potential (PSP). Not only could the multilayer theta network perform just about as well as Tempotron learning, but the rule trained the multilayer theta network to perform certain tasks neither SpikeProp nor Tempotron were capable of.
== Limitations ==
According to Kopell and Ermentrout (2004), a limitation of the theta model lies in its relative difficulty in electrically coupling two theta neurons. It is possible to create large networks of theta neurons – and much research has been done with such networks – but it may be advantageous to use Quadratic Integrate-and-Fire (QIF) neurons, which allow for electrical coupling in a "straightforward way".
== See also ==
Biological neuron model
Computational neuroscience
FitzHugh–Nagumo model
Hodgkin–Huxley model
Neuroscience
== References ==
== External links ==
Plant Model on Scholarpedia
== Further reading ==
Keener, James P., and James Sneyd. Mathematical Physiology. New York: Springer, 2009. ISBN 978-0-387-98381-3 | Wikipedia/Theta_model |
In biology, exponential integrate-and-fire models are compact and computationally efficient nonlinear spiking neuron models with one or two variables. The exponential integrate-and-fire model was first proposed as a one-dimensional model. The most prominent two-dimensional examples are the adaptive exponential integrate-and-fire model and the generalized exponential integrate-and-fire model. Exponential integrate-and-fire models are widely used in the field of computational neuroscience and spiking neural networks because of (i) a solid grounding of the neuron model in the field of experimental neuroscience, (ii) computational efficiency in simulations and hardware implementations, and (iii) mathematical transparency.
== Exponential integrate-and-fire (EIF) ==
The exponential integrate-and-fire model (EIF) is a biological neuron model, a simple modification of the classical leaky integrate-and-fire model describing how neurons produce action potentials. In the EIF, the threshold for spike initiation is replaced by a depolarizing non-linearity. The model was first introduced by Nicolas Fourcaud-Trocmé, David Hansel, Carl van Vreeswijk and Nicolas Brunel. The exponential nonlinearity was later confirmed by Badel et al. It is one of the prominent examples of a precise theoretical prediction in computational neuroscience that was later confirmed by experimental neuroscience.
In the exponential integrate-and-fire model, spike generation is exponential, following the equation:
{\displaystyle {\frac {dV}{dt}}-{\frac {R}{\tau _{m}}}I(t)={\frac {1}{\tau _{m}}}\left[E_{m}-V+\Delta _{T}\exp \left({\frac {V-V_{T}}{\Delta _{T}}}\right)\right],}
where {\displaystyle V} is the membrane potential, {\displaystyle V_{T}} is the intrinsic membrane potential threshold, {\displaystyle \tau _{m}} is the membrane time constant, {\displaystyle E_{m}} is the resting potential, and {\displaystyle \Delta _{T}} is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons. Once the membrane potential crosses {\displaystyle V_{T}}, it diverges to infinity in finite time. In numerical simulations the integration is stopped when the membrane potential hits an arbitrary threshold (much larger than {\displaystyle V_{T}}), at which point the membrane potential is reset to a value Vr. The voltage reset value Vr is one of the important parameters of the model.
Two important remarks: (i) The right-hand side of the above equation contains a nonlinearity that can be directly extracted from experimental data. In this sense the exponential nonlinearity is not an arbitrary choice but directly supported by experimental evidence. (ii) Even though it is a nonlinear model, it is simple enough to calculate the firing rate for constant input, and the linear response to fluctuations, even in the presence of input noise.
A didactic review of the exponential integrate-and-fire model (including fit to experimental data and relation to the Hodgkin–Huxley model) can be found in Chapter 5.2 of the textbook Neuronal Dynamics.
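As a concrete illustration of the divergence-and-reset mechanism described above, the following sketch (Python; all parameter values are illustrative assumptions, not fitted constants) integrates the EIF equation with forward Euler and applies the numerical reset:

```python
import numpy as np

# Illustrative EIF parameters (not fitted to any particular cell)
tau_m, R = 10.0, 100.0                  # ms, MOhm
E_m, V_T, Delta_T = -65.0, -50.0, 1.0   # mV
V_stop, V_r = 0.0, -65.0                # numerical spike threshold and reset (mV)
dt, I = 0.1, 0.25                       # ms, nA

V, spikes = E_m, []
for step in range(10000):               # 1 s of simulated time
    dV = (E_m - V + Delta_T * np.exp((V - V_T) / Delta_T) + R * I) / tau_m
    V += dt * dV
    if V >= V_stop:                     # potential has diverged: register a spike
        spikes.append(step * dt)
        V = V_r                         # reset to Vr
print(len(spikes), "spikes in 1 s")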
== Adaptive exponential integrate-and-fire (AdEx) ==
The adaptive exponential integrate-and-fire neuron (AdEx) is a two-dimensional spiking neuron model where the above exponential nonlinearity of the voltage equation is combined with an adaptation variable w
{\displaystyle \tau _{m}{\frac {dV}{dt}}=RI(t)+[E_{m}-V+\Delta _{T}\exp \left({\frac {V-V_{T}}{\Delta _{T}}}\right)]-Rw}
{\displaystyle \tau {\frac {dw(t)}{dt}}=-a[V_{\mathrm {m} }(t)-E_{\mathrm {m} }]-w+b\tau \delta (t-t^{f})}
where w denotes an adaptation current with time scale {\displaystyle \tau }. Important model parameters are the voltage reset value Vr, the intrinsic threshold {\displaystyle V_{T}}, the time constants {\displaystyle \tau } and {\displaystyle \tau _{m}}, as well as the coupling parameters a and b. The adaptive exponential integrate-and-fire model inherits the experimentally derived voltage nonlinearity of the exponential integrate-and-fire model. But going beyond this model, it can also account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting and initial bursting.
The adaptive exponential integrate-and-fire model is remarkable for three aspects: (i) its simplicity since it contains only two coupled variables; (ii) its foundation in experimental data since the nonlinearity of the voltage equation is extracted from experiments; and (iii) the broad spectrum of single-neuron firing patterns that can be described by an appropriate choice of AdEx model parameters. In particular, the AdEx reproduces the following firing patterns in response to a step current input: neuronal adaptation, regular bursting, initial bursting, irregular firing, regular firing.
A didactic review of the adaptive exponential integrate-and-fire model (including examples of single-neuron firing patterns) can be found in Chapter 6.1 of the textbook Neuronal Dynamics.
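A minimal simulation sketch of the AdEx update (Python; the parameter values are illustrative assumptions) shows how resetting V to Vr together with a jump of w by b implements the delta term of the adaptation equation, producing lengthening interspike intervals (adaptation):

```python
import numpy as np

# Illustrative AdEx parameters (chosen for adapting behavior, not fitted)
tau_m, tau_w, R = 10.0, 100.0, 100.0         # ms, ms, MOhm
E_m, V_T, Delta_T = -65.0, -50.0, 2.0        # mV
V_stop, V_r, a, b = 0.0, -58.0, 0.001, 0.05  # mV, mV, uS, nA
dt, I = 0.1, 0.3                             # ms, nA

V, w, spike_times = E_m, 0.0, []
for step in range(20000):                    # 2 s of simulated time
    dV = (E_m - V + Delta_T * np.exp((V - V_T) / Delta_T) + R * I - R * w) / tau_m
    dw = (a * (V - E_m) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw
    if V >= V_stop:                          # spike: reset voltage, increment adaptation
        spike_times.append(step * dt)
        V, w = V_r, w + b                    # the 'b' jump is the delta term in dw/dt
print(np.diff(spike_times))                  # interspike intervals lengthen: adaptation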
== Generalized exponential integrate-and-fire model (GEM) ==
The generalized exponential integrate-and-fire model (GEM) is a two-dimensional spiking neuron model where the exponential nonlinearity of the voltage equation is combined with a subthreshold variable x
{\displaystyle \tau _{m}{\frac {dV}{dt}}=RI(t)+[E_{m}-V+\Delta _{T}\exp \left({\frac {V-V_{T}}{\Delta _{T}}}\right)]-b\,[E_{x}-V]x}
{\displaystyle \tau _{x}(V){\frac {dx(t)}{dt}}=x_{0}(V_{\mathrm {m} }(t))-x}
where b is a coupling parameter, {\displaystyle \tau _{x}(V)} is a voltage-dependent time constant, and {\displaystyle x_{0}(V)} is a saturating nonlinearity, similar to the gating variable m of the Hodgkin–Huxley model. The term {\displaystyle b[E_{x}-V]x} in the first equation can be considered as a slow voltage-activated ion current.
The GEM is remarkable for two aspects: (i) the nonlinearity of the voltage equation is extracted from experiments; and (ii) the GEM is simple enough to enable a mathematical analysis of the stationary firing-rate and the linear response even in the presence of noisy input.
A review of the computational properties of the GEM and its relation to other spiking neuron models is available in the literature.
== References == | Wikipedia/Exponential_integrate-and-fire |
A multi-compartment model is a type of mathematical model used for describing the way materials or energies are transmitted among the compartments of a system. Sometimes the physical system being modeled is too complex to describe in full, so it is much easier to discretize the problem and reduce the number of parameters. Each compartment is assumed to be a homogeneous entity within which the entities being modeled are equivalent. A multi-compartment model is classified as a lumped-parameter model. Like more general mathematical models, multi-compartment models can treat variables as continuous, as in a differential equation, or as discrete, as in a Markov chain. Depending on the system being modeled, they can be treated as stochastic or deterministic.
Multi-compartment models are used in many fields including pharmacokinetics, epidemiology, biomedicine, systems theory, complexity theory, engineering, physics, information science and social science. Circuit systems can also be viewed as multi-compartment models. Most commonly, the mathematics of multi-compartment models is simplified to provide only a single parameter—such as concentration—within a compartment.
== In Systems Theory ==
In systems theory, a multi-compartment model describes a network whose components are compartments, each representing a population of elements that are equivalent with respect to the manner in which they process input signals to the compartment. Typical assumptions are the following:
Instant homogeneous distribution of materials or energies within a "compartment."
The exchange rate of materials or energies among the compartments is related to the densities of these compartments.
Usually, it is desirable that the materials do not undergo chemical reactions while being transmitted among the compartments.
When the concentration within a cell is of interest, the volume is typically assumed to be constant over time, though this may not hold exactly in reality.
== Single-compartment model ==
Possibly the simplest application of a multi-compartment model is single-cell concentration monitoring. If the volume of a cell is V, the mass of solute is q, the input is u(t), and the secretion of the solution is proportional to its density within the cell, then the concentration C of the solution within the cell over time is given by
{\displaystyle {\frac {\mathrm {d} q}{\mathrm {d} t}}=u(t)-kq}
{\displaystyle C={\frac {q}{V}}}
where k is the constant of proportionality.
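For a constant input u(t) = u0, this equation has the closed-form solution q(t) = u0/k + (q0 − u0/k)e^(−kt). The sketch below (Python; the parameter values are illustrative) compares that solution with a forward-Euler integration:

```python
import numpy as np

k, V, u0, q0 = 0.5, 2.0, 1.0, 0.0   # illustrative rate, volume, input, initial mass
dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)

# Forward-Euler integration of dq/dt = u(t) - k q
q = np.empty_like(t); q[0] = q0
for i in range(1, len(t)):
    q[i] = q[i-1] + dt * (u0 - k * q[i-1])

q_exact = u0/k + (q0 - u0/k) * np.exp(-k * t)   # closed-form solution
C = q / V                                       # concentration C = q/V
print(np.max(np.abs(q - q_exact)))              # small discretization error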
== Software ==
Simulation Analysis and Modeling 2 (SAAM II) is a software system designed specifically to aid in the development and testing of multi-compartment models. It has a user-friendly graphical user interface wherein compartmental models are constructed by creating a visual representation of the model. From this model, the program automatically creates systems of ordinary differential equations. The program can both simulate and fit models to data, returning optimal parameter estimates and associated statistics. It was developed by scientists working on metabolism and hormone kinetics (e.g., glucose, lipids, or insulin). It was then used for tracer studies and pharmacokinetics. Although a multi-compartment model can in principle be developed and run via other software, such as MATLAB or the C++ language, the user interface offered by SAAM II allows the modeler (and non-modelers) to better control the system, especially as the complexity increases.
== Discrete Compartmental Model ==
Discrete models are concerned with discrete variables, often a time interval {\displaystyle \Delta t}. An example of a discrete multi-compartmental model is a discrete version of the Lotka–Volterra model. Here, consider two compartments, prey and predators, denoted by {\displaystyle x(t)} and {\displaystyle y(t)} respectively. The compartments are coupled to each other by mass action terms in each equation. Over a discrete time-step {\displaystyle \Delta t}, we get
{\displaystyle {\begin{aligned}x(t+\Delta t)&=x(t)+\alpha x(t)\Delta t-\beta x(t)y(t)\Delta t\\y(t+\Delta t)&=y(t)+\delta x(t)y(t)\Delta t-\gamma y(t)\Delta t.\end{aligned}}}
Here
the {\displaystyle x(t)} and {\displaystyle y(t)} terms represent the number of that population at a given time {\displaystyle t};
the {\displaystyle \alpha x(t)\Delta t} term represents the birth of prey;
the mass action term {\displaystyle \beta x(t)y(t)\Delta t} is the number of prey dying due to predators;
the mass action term {\displaystyle \delta x(t)y(t)\Delta t} represents the birth of predators as a function of prey eaten;
the {\displaystyle \gamma y(t)\Delta t} term is the death of predators;
{\displaystyle \alpha ,\beta ,\delta ,} and {\displaystyle \gamma } are real-valued parameters determining the weights of each transitioning term.
These equations are easily solved iteratively.
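A minimal sketch of that iteration (Python; the weights and initial populations are illustrative choices):

```python
# Direct iteration of the discrete Lotka-Volterra update above
alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4   # illustrative weights
dt = 0.01                                        # time step
x, y = 10.0, 10.0                                # initial prey, predator numbers

for _ in range(5000):                            # simulate t = 0 .. 50
    x, y = (x + alpha * x * dt - beta * x * y * dt,
            y + delta * x * y * dt - gamma * y * dt)   # simultaneous update

# Populations cycle around the fixed point (gamma/delta, alpha/beta) = (4, 2.75);
# with this explicit update the orbit drifts slowly outward (discretization error).
print(x, y)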
== Continuous Compartmental Model ==
The discrete Lotka–Volterra example above can be turned into a continuous version by rearranging and taking the limit as {\displaystyle \Delta t\rightarrow 0}:
{\displaystyle {\begin{aligned}&\lim _{\Delta t\rightarrow 0}{\frac {x(t+\Delta t)-x(t)}{\Delta t}}\equiv {\frac {dx}{dt}}=\alpha x-\beta xy\\&\lim _{\Delta t\rightarrow 0}{\frac {y(t+\Delta t)-y(t)}{\Delta t}}\equiv {\frac {dy}{dt}}=\delta xy-\gamma y\end{aligned}}}
This yields a system of ordinary differential equations. Treating the model as differential equations allows the use of calculus methods to study the dynamics of the system in more depth.
== Multi-Compartment Model ==
As the number of compartments increases, the model can become very complex, and its solutions are usually beyond hand calculation.
The formulae for n-cell multi-compartment models become:
{\displaystyle {\begin{aligned}{\dot {q}}_{1}=q_{1}k_{11}+q_{2}k_{12}+\cdots +q_{n}k_{1n}+u_{1}(t)\\{\dot {q}}_{2}=q_{1}k_{21}+q_{2}k_{22}+\cdots +q_{n}k_{2n}+u_{2}(t)\\\vdots \\{\dot {q}}_{n}=q_{1}k_{n1}+q_{2}k_{n2}+\cdots +q_{n}k_{nn}+u_{n}(t)\end{aligned}}}
where {\displaystyle 0=\sum _{i=1}^{n}{k_{ij}}} for {\displaystyle j=1,2,\dots ,n} (as the total 'contents' of all compartments is constant in a closed system). In matrix form:
{\displaystyle \mathbf {\dot {q}} =\mathbf {Kq} +\mathbf {u} }
where
{\displaystyle \mathbf {K} ={\begin{bmatrix}k_{11}&k_{12}&\cdots &k_{1n}\\k_{21}&k_{22}&\cdots &k_{2n}\\\vdots &\vdots &\ddots &\vdots \\k_{n1}&k_{n2}&\cdots &k_{nn}\\\end{bmatrix}}\qquad \mathbf {q} ={\begin{bmatrix}q_{1}\\q_{2}\\\vdots \\q_{n}\end{bmatrix}}\qquad \mathbf {u} ={\begin{bmatrix}u_{1}(t)\\u_{2}(t)\\\vdots \\u_{n}(t)\end{bmatrix}}}
and
{\displaystyle {\begin{bmatrix}1&1&\cdots &1\\\end{bmatrix}}\mathbf {K} ={\begin{bmatrix}0&0&\cdots &0\\\end{bmatrix}}}
(as the total 'contents' of all compartments is constant in a closed system).
In the special case of a closed system (see below), i.e. where {\displaystyle \mathbf {u} =0}, there is a general solution:
{\displaystyle \mathbf {q} =c_{1}e^{\lambda _{1}t}\mathbf {v_{1}} +c_{2}e^{\lambda _{2}t}\mathbf {v_{2}} +\cdots +c_{n}e^{\lambda _{n}t}\mathbf {v_{n}} }
where {\displaystyle \lambda _{1}}, {\displaystyle \lambda _{2}}, ..., and {\displaystyle \lambda _{n}} are the eigenvalues of {\displaystyle \mathbf {K} }; {\displaystyle \mathbf {v_{1}} }, {\displaystyle \mathbf {v_{2}} }, ..., and {\displaystyle \mathbf {v_{n}} } are the respective eigenvectors of {\displaystyle \mathbf {K} }; and {\displaystyle c_{1}}, {\displaystyle c_{2}}, ..., and {\displaystyle c_{n}} are constants.
However, it can be shown that, given the above requirement that the 'contents' of a closed system remain constant, for every pair of an eigenvalue and its eigenvector either {\displaystyle \lambda =0} or {\displaystyle {\begin{bmatrix}1&1&\cdots &1\\\end{bmatrix}}\mathbf {v} =0}, and also that one eigenvalue is 0, say {\displaystyle \lambda _{1}}.
So
{\displaystyle \mathbf {q} =c_{1}\mathbf {v_{1}} +c_{2}e^{\lambda _{2}t}\mathbf {v_{2}} +\cdots +c_{n}e^{\lambda _{n}t}\mathbf {v_{n}} }
where {\displaystyle {\begin{bmatrix}1&1&\cdots &1\\\end{bmatrix}}\mathbf {v_{i}} =0} for {\displaystyle i=2,3,\dots ,n}.
This solution can be rearranged:
{\displaystyle \mathbf {q} ={\Bigg [}\mathbf {v_{1}} {\begin{bmatrix}c_{1}&0&\cdots &0\\\end{bmatrix}}+\mathbf {v_{2}} {\begin{bmatrix}0&c_{2}&\cdots &0\\\end{bmatrix}}+\dots +\mathbf {v_{n}} {\begin{bmatrix}0&0&\cdots &c_{n}\\\end{bmatrix}}{\Bigg ]}{\begin{bmatrix}1\\e^{\lambda _{2}t}\\\vdots \\e^{\lambda _{n}t}\\\end{bmatrix}}}
This somewhat inelegant equation demonstrates that all solutions of an n-cell multi-compartment model with constant or no inputs are of the form:
{\displaystyle \mathbf {q} =\mathbf {A} {\begin{bmatrix}1\\e^{\lambda _{2}t}\\\vdots \\e^{\lambda _{n}t}\\\end{bmatrix}}}
where {\displaystyle \mathbf {A} } is an n×n matrix, {\displaystyle \lambda _{2}}, {\displaystyle \lambda _{3}}, ..., and {\displaystyle \lambda _{n}} are constants, and {\displaystyle {\begin{bmatrix}1&1&\cdots &1\\\end{bmatrix}}\mathbf {A} ={\begin{bmatrix}a&0&\cdots &0\\\end{bmatrix}}}.
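The general solution can be evaluated numerically for a concrete closed system. The sketch below (Python/NumPy; a two-compartment system with illustrative rate constants) computes q(t) from the eigendecomposition of K:

```python
import numpy as np

# Closed two-compartment system: material moves 1->2 at rate 0.3, 2->1 at 0.1.
# Columns of K sum to zero, as required for a closed system.
K = np.array([[-0.3,  0.1],
              [ 0.3, -0.1]])
q0 = np.array([1.0, 0.0])           # all material starts in compartment 1

lam, V = np.linalg.eig(K)           # eigenvalues (one is 0) and eigenvectors
c = np.linalg.solve(V, q0)          # constants c_i fixed by the initial condition

def q(t):
    # q(t) = sum_i c_i e^{lam_i t} v_i, evaluated as a matrix-vector product
    return V @ (c * np.exp(lam * t))

print(q(0.0), q(50.0))  # total stays 1; steady state is proportional to (0.1, 0.3)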
== Model topologies ==
Generally speaking, as the number of compartments increases, it is challenging to find both the algebraic and the numerical solutions of the model. However, there are special classes of models, rarely found in nature, whose topologies exhibit regularities that make the solutions easier to find. The model can be classified according to the interconnection of cells and input/output characteristics:
Closed model: No sinks or sources, i.e. all koi = 0 and ui = 0;
Open model: There are sinks or/and sources among cells.
Catenary model: All compartments are arranged in a chain, with each pool connecting only to its neighbors. This model has two or more cells.
Cyclic model: A special case of the catenary model, with three or more cells, in which the first and last cells are connected, i.e. k1n ≠ 0 or/and kn1 ≠ 0.
Mammillary model: Consists of a central compartment with peripheral compartments connecting to it. There are no interconnections among other compartments.
Reducible model: A set of unconnected models, closely resembling the computer-science concept of a forest as opposed to a tree.
== See also ==
Mathematical model
Biomedical engineering
Biological neuron models
Compartmental models in epidemiology
Physiologically-based pharmacokinetic modelling
== References ==
Godfrey, K., Compartmental Models and Their Application, Academic Press, 1983 (ISBN 0-12-286970-2).
Anderson, D. H., Compartmental Modeling and Tracer Kinetics, Springer-Verlag Lecture Notes in Biomathematics #50, 1983 (ISBN 0-387-12303-2).
Jacquez, J. A, Compartmental Analysis in Biology and Medicine, 2nd ed., The University of Michigan Press, 1985.
Evans, W. C., Linear Systems, Compartmental Modeling, and Estimability Issues in IAQ Studies, in Tichenor, B., Characterizing Sources of Indoor Air Pollution and Related Sink Effects, ASTM STP 1287, pp. 239–262, 1996 (ISBN 0-8031-2030-3). | Wikipedia/Multi-compartment_model |
In neurophysiology, several mathematical models of the action potential have been developed, which fall into two basic types. The first type seeks to model the experimental data quantitatively, i.e., to reproduce the measurements of current and voltage exactly. The renowned Hodgkin–Huxley model of the axon from the Loligo squid exemplifies such models. Although qualitatively correct, the H-H model does not describe every type of excitable membrane accurately, since it considers only two ions (sodium and potassium), each with only one type of voltage-sensitive channel. However, other ions such as calcium may be important and there is a great diversity of channels for all ions. As an example, the cardiac action potential illustrates how differently shaped action potentials can be generated on membranes with voltage-sensitive calcium channels and different types of sodium/potassium channels. The second type of mathematical model is a simplification of the first type; the goal is not to reproduce the experimental data, but to understand qualitatively the role of action potentials in neural circuits. For such a purpose, detailed physiological models may be unnecessarily complicated and may obscure the "forest for the trees". The FitzHugh–Nagumo model is typical of this class, which is often studied for its entrainment behavior. Entrainment is commonly observed in nature, for example in the synchronized lighting of fireflies, which is coordinated by a burst of action potentials; entrainment can also be observed in individual neurons. Both types of models may be used to understand the behavior of small biological neural networks, such as the central pattern generators responsible for some automatic reflex actions. Such networks can generate a complex temporal pattern of action potentials that is used to coordinate muscular contractions, such as those involved in breathing or fast swimming to escape a predator.
== Hodgkin–Huxley model ==
In 1952 Alan Lloyd Hodgkin and Andrew Huxley developed a set of equations to fit their experimental voltage-clamp data on the axonal membrane. The model assumes that the membrane capacitance C is constant; thus, the transmembrane voltage V changes with the total transmembrane current Itot according to the equation
{\displaystyle C{\frac {dV}{dt}}=I_{\mathrm {tot} }=I_{\mathrm {ext} }+I_{\mathrm {Na} }+I_{\mathrm {K} }+I_{\mathrm {L} }}
where INa, IK, and IL are currents conveyed through the local sodium channels, potassium channels, and "leakage" channels (a catch-all), respectively. The initial term Iext represents the current arriving from external sources, such as excitatory postsynaptic potentials from the dendrites or a scientist's electrode.
The model further assumes that a given ion channel is either fully open or closed; if closed, its conductance is zero, whereas if open, its conductance is some constant value g. Hence, the net current through an ion channel depends on two variables: the probability popen of the channel being open, and the difference in voltage from that ion's equilibrium voltage, V − Veq. For example, the current through the potassium channel may be written as
{\displaystyle I_{\mathrm {K} }=g_{\mathrm {K} }\left(V-E_{\mathrm {K} }\right)p_{\mathrm {open,K} }}
which is equivalent to Ohm's law. By definition, no net current flows (IK = 0) when the transmembrane voltage equals the equilibrium voltage of that ion (when V = EK).
To fit their data accurately, Hodgkin and Huxley assumed that each type of ion channel had multiple "gates", so that the channel was open only if all the gates were open and closed otherwise. They also assumed that the probability of a gate being open was independent of the other gates being open; this assumption was later validated for the inactivation gate. Hodgkin and Huxley modeled the voltage-sensitive potassium channel as having four gates; letting pn denote the probability of a single such gate being open, the probability of the whole channel being open is the product of four such probabilities, i.e., popen, K = n4. Similarly, the probability of the voltage-sensitive sodium channel was modeled to have three similar gates of probability m and a fourth gate, associated with inactivation, of probability h; thus, popen, Na = m3h. The probabilities for each gate are assumed to obey first-order kinetics
{\displaystyle {\frac {dm}{dt}}=-{\frac {m-m_{\mathrm {eq} }}{\tau _{m}}}}
where both the equilibrium value meq and the relaxation time constant τm depend on the instantaneous voltage V across the membrane. If V changes on a time-scale more slowly than τm, the m probability will always roughly equal its equilibrium value meq; however, if V changes more quickly, then m will lag behind meq. By fitting their voltage-clamp data, Hodgkin and Huxley were able to model how these equilibrium values and time constants varied with temperature and transmembrane voltage. The formulae are complex and depend exponentially on the voltage and temperature. For example, the time constant for the sodium-channel inactivation probability h varies as 3^((θ−6.3)/10) with the Celsius temperature θ, and with voltage V as
{\displaystyle {\frac {1}{\tau _{h}}}=0.07e^{-V/20}+{\frac {1}{1+e^{3-V/10}}}.}
In summary, the Hodgkin–Huxley equations are complex, non-linear ordinary differential equations in four independent variables: the transmembrane voltage V, and the probabilities m, h and n. No general solution of these equations has been discovered. A less ambitious but generally applicable method for studying such non-linear dynamical systems is to consider their behavior in the vicinity of a fixed point. This analysis shows that the Hodgkin–Huxley system undergoes a transition from stable quiescence to bursting oscillations as the stimulating current Iext is gradually increased; remarkably, the axon becomes stably quiescent again as the stimulating current is increased further still. A more general study of the types of qualitative behavior of axons predicted by the Hodgkin–Huxley equations has also been carried out.
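The first-order gating kinetics can be illustrated in a few lines of code. In the sketch below (Python), the functions standing in for meq(V) and τm(V) are illustrative sigmoids, not the Hodgkin–Huxley fits; the point is only the relaxation of m toward its voltage-dependent equilibrium after a voltage step.

```python
import numpy as np

# First-order gating kinetics: dm/dt = -(m - m_eq(V)) / tau_m(V).
# m_eq and tau_m below are illustrative stand-ins, not the Hodgkin-Huxley fits.
def m_eq(V):  return 1.0 / (1.0 + np.exp(-(V + 40.0) / 7.0))
def tau_m(V): return 0.2 + 1.0 / (1.0 + np.exp((V + 40.0) / 10.0))  # ms

dt, m, V = 0.01, 0.05, -65.0
trace = []
for step in range(2000):
    if step == 500:
        V = -20.0                                 # depolarizing voltage step
    m += dt * (-(m - m_eq(V)) / tau_m(V))
    trace.append(m)

# After the step, m relaxes exponentially toward the new equilibrium m_eq(V)
print(trace[499], trace[-1], m_eq(-20.0))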
== FitzHugh–Nagumo model ==
Because of the complexity of the Hodgkin–Huxley equations, various simplifications have been developed that exhibit qualitatively similar behavior. The FitzHugh–Nagumo model is a typical example of such a simplified system. Based on the tunnel diode, the FHN model has only two independent variables, but exhibits a similar stability behavior to the full Hodgkin–Huxley equations. The equations are
{\displaystyle C{\frac {dV}{dt}}=I-g(V),}
{\displaystyle L{\frac {dI}{dt}}=E-V-RI}
where g(V) is a function of the voltage V that has a region of negative slope in the middle, flanked by one maximum and one minimum. A much-studied simple case of the FitzHugh–Nagumo model is the Bonhoeffer–van der Pol nerve model, which is described by the equations
{\displaystyle C{\frac {dV}{dt}}=I-\epsilon \left({\frac {V^{3}}{3}}-V\right),}
{\displaystyle L{\frac {dI}{dt}}=-V}
where the coefficient ε is assumed to be small. These equations can be combined into a second-order differential equation
{\displaystyle C{\frac {d^{2}V}{dt^{2}}}+\epsilon \left(V^{2}-1\right){\frac {dV}{dt}}+{\frac {V}{L}}=0.}
This van der Pol equation has stimulated much research in the mathematics of nonlinear dynamical systems. Op-amp circuits that realize the FHN and van der Pol models of the action potential have been developed by Keener.
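A minimal integration sketch of the Bonhoeffer–van der Pol equations above (Python; C, L and ε are set to illustrative values) shows the trajectory settling onto the stable limit cycle responsible for the oscillatory behavior:

```python
# Bonhoeffer-van der Pol form: C dV/dt = I - eps*(V^3/3 - V),  L dI/dt = -V
C, L, eps = 1.0, 1.0, 0.1      # illustrative constants (eps small, as assumed)
dt = 0.001
V, I = 0.5, 0.0                # small perturbation from rest

Vs = []
for _ in range(100000):        # simulate t = 0 .. 100
    dV = (I - eps * (V**3 / 3.0 - V)) / C
    dI = -V / L
    V, I = V + dt * dV, I + dt * dI
    Vs.append(V)

print(min(Vs), max(Vs))   # the trajectory settles onto a limit cycle (amplitude ~2)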
A hybrid of the Hodgkin–Huxley and FitzHugh–Nagumo models was developed by Morris and Lecar in 1981, and applied to the muscle fiber of barnacles. True to the barnacle's physiology, the Morris–Lecar model replaces the voltage-gated sodium current of the Hodgkin–Huxley model with a voltage-dependent calcium current. There is no inactivation (no h variable) and the calcium current equilibrates instantaneously, so that again, there are only two time-dependent variables: the transmembrane voltage V and the potassium gate probability n. The bursting, entrainment and other mathematical properties of this model have been studied in detail.
The simplest models of the action potential are the "flush and fill" models (also called "integrate-and-fire" models), in which the input signal is summed (the "fill" phase) until it reaches a threshold, firing a pulse and resetting the summation to zero (the "flush" phase). All of these models are capable of exhibiting entrainment, which is commonly observed in nervous systems.
== Extracellular potentials and currents ==
Whereas the above models simulate the transmembrane voltage and current at a single patch of membrane, other mathematical models pertain to the voltages and currents in the ionic solution surrounding the neuron. Such models are helpful in interpreting data from extracellular electrodes, which were common prior to the invention of the glass pipette electrode that allowed intracellular recording. The extracellular medium may be modeled as a normal isotropic ionic solution; in such solutions, the current follows the electric field lines, according to the continuum form of Ohm's Law
{\displaystyle \mathbf {j} =\sigma \mathbf {E} }
where j and E are vectors representing the current density and electric field, respectively, and where σ is the conductivity. Thus, j can be found from E, which in turn may be found using Maxwell's equations. Maxwell's equations can be reduced to a relatively simple problem of electrostatics, since the ionic concentrations change too slowly (compared to the speed of light) for magnetic effects to be important. The electric potential φ(x) at any extracellular point x can be solved using Green's identities
{\displaystyle \phi (\mathbf {x} )={\frac {1}{4\pi \sigma _{\mathrm {outside} }}}\oint _{\mathrm {membrane} }{\frac {\partial }{\partial n}}{\frac {1}{\left|\mathbf {x} -{\boldsymbol {\xi }}\right|}}\left[\sigma _{\mathrm {outside} }\phi _{\mathrm {outside} }({\boldsymbol {\xi }})-\sigma _{\mathrm {inside} }\phi _{\mathrm {inside} }({\boldsymbol {\xi }})\right]dS}
where the integration is over the complete surface of the membrane; {\displaystyle {\boldsymbol {\xi }}} is a position on the membrane; σinside and φinside are the conductivity and potential just within the membrane, and σoutside and φoutside the corresponding values just outside the membrane. Thus, given these σ and φ values on the membrane, the extracellular potential φ(x) can be calculated for any position x; in turn, the electric field E and current density j can be calculated from this potential field.
== See also ==
Biological neuron models
GHK current equation
Models of neural computation
Saltatory conduction
Bioelectronics
Cable theory
== References ==
== Further reading ==
Glass L, Mackey MC (1988). From Clocks to Chaos: The Rhythms of Life. Princeton, New Jersey: Princeton University. ISBN 978-0-691-08496-1. | Wikipedia/Quantitative_models_of_the_action_potential |
The linear-nonlinear-Poisson (LNP) cascade model is a simplified functional model of neural spike responses. It has been successfully used to describe the response characteristics of neurons in early sensory pathways, especially the visual system. The LNP model is generally implicit when using reverse correlation or the spike-triggered average to characterize neural responses with white-noise stimuli.
There are three stages of the LNP cascade model. The first stage consists of a linear filter, or linear receptive field, which describes how the neuron integrates stimulus intensity over space and time. The output of this filter then passes through a nonlinear function, which gives the neuron's instantaneous spike rate as its output. Finally, the spike rate is used to generate spikes according to an inhomogeneous Poisson process.
The linear filtering stage performs dimensionality reduction, reducing the high-dimensional spatio-temporal stimulus space to a low-dimensional feature space, within which the neuron computes its response. The nonlinearity converts the filter output to a (non-negative) spike rate, and accounts for nonlinear phenomena such as spike threshold (or rectification) and response saturation. The Poisson spike generator converts the continuous spike rate to a series of spike times, under the assumption that the probability of a spike depends only on the instantaneous spike rate.
The model offers a useful approximation of neural activity, allowing scientists to derive reliable estimates from a mathematically simple formula.
== Mathematical formulation ==
=== Single-filter LNP ===
Let {\displaystyle \mathbf {x} } denote the spatio-temporal stimulus vector at a particular instant, and {\displaystyle \mathbf {k} } denote a linear filter (the neuron's linear receptive field), which is a vector with the same number of elements as {\displaystyle \mathbf {x} }. Let {\displaystyle f} denote the nonlinearity, a scalar function with non-negative output. Then the LNP model specifies that, in the limit of small time bins,
{\displaystyle P({\textrm {spike}})\propto f(\mathbf {k} \cdot \mathbf {x} )}.
For finite-sized time bins, this can be stated precisely as the probability of observing y spikes in a single bin:
{\displaystyle P(y{\textrm {~spikes}})={\frac {\left(\Delta \lambda \right)^{y}}{y!}}e^{-\Delta \lambda }}
where {\displaystyle \lambda =f(\mathbf {k} \cdot \mathbf {x} )}, and {\displaystyle \Delta } is the bin size.
=== Multi-filter LNP ===
For neurons sensitive to multiple dimensions of the stimulus space, the linear stage of the LNP model can be generalized to a bank of linear filters, and the nonlinearity becomes a function of multiple inputs. Let {\displaystyle \mathbf {k_{1}} ,\mathbf {k_{2}} ,\ldots ,\mathbf {k_{n}} } denote the set of linear filters that capture a neuron's stimulus dependence. Then the multi-filter LNP model is described by
{\displaystyle P({\textrm {spike}})\propto f(\mathbf {k_{1}} \!\cdot \!\mathbf {x} ,\;\mathbf {k_{2}} \!\cdot \!\mathbf {x} ,\;\ldots ,\;\mathbf {k_{n}} \!\cdot \!\mathbf {x} )}
or
{\displaystyle P({\textrm {spike}})\propto f(K\mathbf {x} ),}
where {\displaystyle K} is a matrix whose columns are the filters {\displaystyle \mathbf {k_{i}} }.
== Estimation ==
The parameters of the LNP model consist of the linear filters
{
k
i
}
{\displaystyle \{{k_{i}}\}}
and the nonlinearity
f
{\displaystyle f}
. The estimation problem (also known as the problem of neural characterization) is the problem of determining these parameters from data consisting of a time-varying stimulus and the set of observed spike times. Techniques for estimating the LNP model parameters include:
moment-based techniques, such as the spike-triggered average or spike-triggered covariance
with information-maximization or maximum likelihood techniques.
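For instance, under a white-noise stimulus the spike-triggered average recovers the linear filter up to a scale factor. A self-contained sketch (Python; the ground-truth filter and nonlinearity are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_taps = 0.001, 20
k_true = np.exp(-np.arange(n_taps) / 5.0)          # ground-truth filter
stim = rng.standard_normal(100000)
drive = np.convolve(stim, k_true, mode="full")[:len(stim)]
spikes = rng.poisson(10.0 * np.maximum(drive, 0.0) * dt)   # LNP spike counts

# Spike-triggered average: mean stimulus window preceding each spike,
# weighted by the spike count in that bin.
sta = np.zeros(n_taps)
for t in np.nonzero(spikes)[0]:
    if t >= n_taps - 1:
        sta += spikes[t] * stim[t - n_taps + 1 : t + 1][::-1]
sta /= spikes.sum()

# Up to scale (and a bias from the rectifier), the STA aligns with k_true:
print(np.corrcoef(sta, k_true)[0, 1])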
== Related models ==
The LNP model provides a simplified, mathematically tractable approximation to more biophysically detailed single-neuron models such as the integrate-and-fire or Hodgkin–Huxley model.
If the nonlinearity {\displaystyle f} is a fixed invertible function, then the LNP model is a generalized linear model. In this case, {\displaystyle f} is the inverse link function.
An alternative to the LNP model for neural characterization is the Volterra kernel or Wiener kernel series expansion, which arises in classical nonlinear systems-identification theory. These models approximate a neuron's input-output characteristics using a polynomial expansion analogous to the Taylor series, but do not explicitly specify the spike-generation process.
== See also ==
Random neural network
Spike-triggered average
Spike-triggered covariance
== References == | Wikipedia/Linear-nonlinear-Poisson_cascade_model |
The spike response model (SRM) is a spiking neuron model in which spikes are generated by either a deterministic or a stochastic threshold process. In the SRM, the membrane voltage V is described as a linear sum of the postsynaptic potentials (PSPs) caused by spike arrivals, to which the effects of refractoriness and adaptation are added. The threshold is either fixed or dynamic; in the latter case it increases after each spike. The SRM is flexible enough to account for a variety of neuronal firing patterns in response to step current input. The SRM has also been used in the theory of computation to quantify the capacity of spiking neural networks, and in the neurosciences to predict the subthreshold voltage and the firing times of cortical neurons during stimulation with a time-dependent current. The name Spike Response Model points to the property that the two important filters {\displaystyle \varepsilon } and {\displaystyle \eta } of the model can be interpreted as the response of the membrane potential to an incoming spike (response kernel {\displaystyle \varepsilon }, the PSP) and to an outgoing spike (response kernel {\displaystyle \eta }, also called the refractory kernel). The SRM has been formulated in continuous time and in discrete time. The SRM can be viewed as a generalized linear model (GLM) or as an (integrated version of a) generalized integrate-and-fire model with adaptation.
== Model equations for SRM in continuous time ==
In the SRM, at each moment in time t, a spike can be generated stochastically with instantaneous stochastic intensity or 'escape function' {\displaystyle \rho (t)=f(V(t)-\vartheta (t))} that depends on the momentary difference between the membrane voltage V(t) and the dynamic threshold {\displaystyle \vartheta (t)}.
The membrane voltage V(t) at time t is given by
{\displaystyle V(t)=\sum _{f}\eta (t-t^{f})+\int _{0}^{\infty }\kappa (s)I(t-s)\,ds+V_{\mathrm {rest} }}
where tf is the firing time of spike number f of the neuron, Vrest is the resting voltage in the absence of input, I(t − s) is the input current at time t − s, and {\displaystyle \kappa (s)} is a linear filter (also called kernel) that describes the contribution of an input current pulse at time t − s to the voltage at time t. The contributions to the voltage caused by a spike at time {\displaystyle t^{f}} are described by the refractory kernel {\displaystyle \eta (t-t^{f})}. In particular, {\displaystyle \eta (t-t^{f})} describes the time course of the action potential starting at time {\displaystyle t^{f}} as well as the spike-afterpotential.
The dynamic threshold {\displaystyle \vartheta (t)} is given by
{\displaystyle \vartheta (t)=\vartheta _{0}+\sum _{f}\theta _{1}(t-t^{f})}
where {\displaystyle \vartheta _{0}} is the firing threshold of an inactive neuron and {\displaystyle \theta _{1}(t-t^{f})} describes the increase of the threshold after a spike at time {\displaystyle t^{f}}. In the case of a fixed threshold [i.e., {\displaystyle \theta _{1}(t-t^{f})=0}], the refractory kernel {\displaystyle \eta (t-t^{f})} should include only the spike-afterpotential, but not the shape of the spike itself.
A common choice for the 'escape rate' {\displaystyle f} (that is consistent with biological data) is
{\displaystyle f(V-\vartheta )={\frac {1}{\tau _{0}}}\exp[\beta (V-\vartheta )]}
where {\displaystyle \tau _{0}} is a time constant that describes how quickly a spike is fired once the membrane potential reaches the threshold, and {\displaystyle \beta } is a sharpness parameter. For {\displaystyle \beta \to \infty } the threshold becomes sharp and spike firing occurs deterministically at the moment when the membrane potential hits the threshold from below. The sharpness value found in experiments is {\displaystyle 1/\beta \approx 4mV}, which means that neuronal firing becomes non-negligible as soon as the membrane potential is a few mV below the formal firing threshold. The escape rate process via a soft threshold is reviewed in Chapter 9 of the textbook Neuronal Dynamics.
In a network of N SRM neurons {\displaystyle 1\leq i\leq N}, the membrane voltage of neuron {\displaystyle i} is given by
{\displaystyle V_{i}(t)=\sum _{f}\eta _{i}(t-t_{i}^{f})+\sum _{j=1}^{N}w_{ij}\sum _{f'}\varepsilon _{ij}(t-t_{j}^{f'})+V_{\mathrm {rest} }}
where {\displaystyle t_{j}^{f'}} are the firing times of neuron j (i.e., its spike train), {\displaystyle \eta _{i}(t-t_{i}^{f})} describes the time course of the spike and the spike-afterpotential for neuron i, and {\displaystyle w_{ij}} and {\displaystyle \varepsilon _{ij}(t-t_{j}^{f'})} describe the amplitude and time course of an excitatory or inhibitory postsynaptic potential (PSP) caused by the spike {\displaystyle t_{j}^{f'}} of the presynaptic neuron j. The time course {\displaystyle \varepsilon _{ij}(s)} of the PSP results from the convolution of the membrane filter {\displaystyle \kappa (s)} with the postsynaptic current {\displaystyle I(t)} caused by the arrival of a presynaptic spike from neuron j.
== Model equations for SRM in discrete time ==
For simulations, the SRM is usually implemented in discrete time. In a time step {\displaystyle t_{n}} of duration {\displaystyle \Delta t}, a spike is generated with probability {\displaystyle P_{F}(t_{n})=F(V(t_{n})-\vartheta (t_{n}))} that depends on the momentary difference between the membrane voltage V and the dynamic threshold {\displaystyle \vartheta }. The function F is often taken as a standard sigmoidal
{\displaystyle F(x)=0.5[1+\tanh(\gamma x)]}
with steepness parameter {\displaystyle \gamma }. But the functional form of F can also be calculated from the stochastic intensity {\displaystyle f} in continuous time as
{\displaystyle F(y_{n})\approx 1-\exp[-f(y_{n})\,\Delta t]}
where {\displaystyle y_{n}=V(t_{n})-\vartheta (t_{n})} is the distance to threshold.
The membrane voltage {\displaystyle V(t_{n})} in discrete time is given by
{\displaystyle V(t_{n})=\sum _{f}\eta (t_{n}-t^{f})+\sum _{m=1}^{\infty }\kappa (m\,\Delta t)I(t_{n}-m\,\Delta t)+V_{\mathrm {rest} }}
where tf is the discretized firing time of the neuron, Vrest is the resting voltage in the absence of input, and {\displaystyle I(t_{k})} is the input current at time {\displaystyle t_{k}} (integrated over one time step). The input filter {\displaystyle \kappa (s)} and the spike-afterpotential {\displaystyle \eta (s)} are defined as in the case of the SRM in continuous time.
For networks of SRM neurons in discrete time, we define the spike train of neuron j as a sequence of zeros and ones, {\displaystyle \{X_{j}(t_{m})\in \{0,1\};m=1,2,3,\dots \}}, and rewrite the membrane potential as
{\displaystyle V_{i}(t_{n})=\sum _{m}\eta _{i}(t_{n}-t_{m})X_{i}(t_{m})+\sum _{j}w_{ij}\sum _{m}\varepsilon _{ij}(t_{n}-t_{m})X_{j}(t_{m})+V_{\mathrm {rest} }}
In this notation, the refractory kernel {\displaystyle \eta (s)} and the PSP shape {\displaystyle \varepsilon _{ij}(s)} can be interpreted as linear response filters applied to the binary spike trains {\displaystyle X_{j}}.
== Main applications of the SRM ==
=== Theory of computation with pulsed neural networks ===
Since the formulation as an SRM provides an explicit expression for the membrane voltage (without the detour via differential equations), SRMs have been the dominant mathematical model in a formal theory of computation with spiking neurons.
=== Prediction of voltage and spike times of cortical neurons ===
The SRM with dynamic threshold has been used to predict the firing time of cortical neurons with a precision of a few milliseconds. Neurons were stimulated, via current injection, with time-dependent currents of different means and variance while the membrane voltage was recorded. The reliability of predicted spikes was close to the intrinsic reliability when the same time-dependent current was repeated several times. Moreover, extracting the shape of the filters {\displaystyle \kappa (s)} and {\displaystyle \eta (s)} directly from the experimental data revealed that adaptation extends over time scales from tens of milliseconds to tens of seconds. Thanks to the convexity properties of the likelihood in Generalized Linear Models, parameter extraction is efficient.
=== Associative memory in networks of spiking neurons ===
SRM0 neurons have been used to construct an associative memory in a network of spiking neurons. The SRM network, which stored a finite number of stationary patterns as attractors using a Hopfield-type connectivity matrix, was one of the first examples of attractor networks with spiking neurons.
=== Population activity equations in large networks of spiking neurons ===
For SRM neurons, an important variable characterizing the internal state of the neuron is the time since the last spike (or 'age' of the neuron), which enters into the refractory kernel {\displaystyle \eta (s)}. The population activity equations for SRM neurons can be formulated alternatively either as integral equations, or as partial differential equations for the 'refractory density'. Because the refractory kernel may include a time scale slower than that of the membrane potential, the population equations for SRM neurons provide powerful alternatives to the more broadly used partial differential equations for the 'membrane potential density'. Reviews of the population activity equation based on refractory densities can be found in the literature, as well as in Chapter 14 of the textbook Neuronal Dynamics.
=== Spike patterns and temporal code ===
SRMs are useful for understanding theories of neural coding. A network of SRM neurons can store attractors that form reliable spatio-temporal spike patterns (also known as synfire chains), an example of temporal coding for stationary inputs. Moreover, the population activity equations for SRM neurons exhibit temporally precise transients after a stimulus switch, indicating reliable spike firing.
== History and relation to other models ==
The Spike Response Model was introduced in a series of papers between 1991 and 2000. The name Spike Response Model probably appeared for the first time in 1993. Some papers used exclusively the deterministic limit with a hard threshold, others the soft threshold with escape noise. Precursors of the Spike Response Model are the integrate-and-fire model introduced by Lapicque in 1907, as well as models used in auditory neuroscience.
=== SRM0 ===
An important variant of the model is SRM0, which is related to time-dependent nonlinear renewal theory. The main difference from the voltage equation of the SRM introduced above is that, in the term containing the refractory kernel {\displaystyle \eta (s)}, there is no summation over past spikes: only the most recent spike matters. The model SRM0 is closely related to the inhomogeneous Markov interval process and to age-dependent models of refractoriness.
=== GLM ===
The equations of the SRM as introduced above are equivalent to Generalized Linear Models (GLMs) in neuroscience. In neuroscience, GLMs were introduced as an extension of the Linear-Nonlinear-Poisson model (LNP) by adding self-interaction of an output spike with the internal state of the neuron (therefore also called 'Recursive LNP'). The self-interaction is equivalent to the kernel {\displaystyle \eta (s)} of the SRM. The GLM framework makes it possible to formulate a maximum likelihood approach applied to the likelihood of an observed spike train under the assumption that an SRM could have generated the spike train. Despite the mathematical equivalence, there is a conceptual difference in interpretation: in the SRM the variable V is interpreted as the membrane voltage, whereas in the recursive LNP it is a 'hidden' variable to which no meaning is assigned. The SRM interpretation is useful if measurements of the subthreshold voltage are available, whereas the recursive LNP is useful in systems neuroscience, where spikes (in response to sensory stimulation) are recorded extracellularly without access to the subthreshold voltage.
=== Adaptive leaky integrate-and-fire models ===
A leaky integrate-and-fire neuron with spike-triggered adaptation has a subthreshold membrane potential generated by the following differential equations
{\displaystyle \tau _{\mathrm {m} }{\frac {dV(t)}{dt}}=RI(t)-[V(t)-E_{\mathrm {rest} }]-R\sum _{k}w_{k}}
{\displaystyle \tau _{k}{\frac {dw_{k}(t)}{dt}}=-w_{k}+b_{k}\tau _{k}\sum _{f}\delta (t-t^{f})}
where {\displaystyle \tau _{m}} is the membrane time constant, wk is an adaptation current with index k, Erest is the resting potential, tf is the firing time of the neuron, and the Greek delta denotes the Dirac delta function. Whenever the voltage reaches the firing threshold, the voltage is reset to a value Vr below the firing threshold. Integration of the linear differential equations gives a formula identical to the voltage equation of the SRM. However, in this case the refractory kernel {\displaystyle \eta (s)} does not include the spike shape but only the spike-afterpotential. In the absence of adaptation currents, we retrieve the standard LIF model, which is equivalent to a refractory kernel {\displaystyle \eta (s)} that decays exponentially with the membrane time constant {\displaystyle \tau _{m}}.
== External links ==
Spike Response Model, Chapter 6.4 of the textbook Neuronal Dynamics
'soft threshold' and escape noise, Chapter 9 of the textbook Neuronal Dynamics
Quasi-Renewal Theory Chapter 14 of the textbook Neuronal Dynamics.
Spike Response Model, from Scholarpedia
== References == | Wikipedia/Spike_response_model
A binding neuron (BN) is an abstract concept of the processing of input impulses in a generic neuron, based on their temporal coherence and the level of neuronal inhibition. Mathematically, the concept may be implemented by most neuronal models, including the well-known leaky integrate-and-fire model. The BN concept originated in 1996 and 1998 papers by A. K. Vidybida.
== Description of the concept ==
For a generic neuron, the stimuli are excitatory impulses. Normally, more than a single input impulse is necessary to excite the neuron to the level at which it fires and emits an output impulse.
Let the neuron receive {\displaystyle n} input impulses at consecutive moments of time {\displaystyle t_{1},t_{2},\dots ,t_{n}}. In the BN concept, the temporal coherence {\displaystyle tc} between input impulses is defined as follows:
{\displaystyle tc={\frac {1}{t_{n}-t_{1}}}\,.}
A high degree of temporal coherence between input impulses suggests that, in the external world, all {\displaystyle n} impulses could have been created by a single complex event. Correspondingly, if a BN is stimulated by a highly coherent set of input impulses, it fires and emits an output impulse. In BN terminology, the BN binds the elementary events (input impulses) into a single event (output impulse). The binding happens if the input impulses are sufficiently coherent in time, and does not happen if they lack the required degree of coherence.
Inhibition in the BN concept (essentially, the slow somatic potassium inhibition) controls the degree of temporal coherence required for binding: the higher the level of inhibition, the higher the degree of temporal coherence necessary for binding to occur.
The emitted output impulse is treated as an abstract representation of the compound event (the set of temporally coherent input impulses).
== Origin ==
"Although a neuron requires energy, its main function is to receive signals and to send them out that is, to handle information." --- this words by Francis Crick point at the necessity to describe neuronal functioning in terms of processing of abstract signals
Two abstract concepts have been offered along these lines: the "coincidence detector" and the "temporal integrator". The first expects that a neuron fires a spike if a number of input impulses are received at the same time. In the temporal-integrator concept, a neuron fires a spike after receiving a number of input impulses distributed in time. Each of the two takes into account some features of real neurons, since it is known that a realistic neuron can display both coincidence-detector and temporal-integrator modes of activity, depending on the stimulation applied.
At the same time, it is known that a neuron receives inhibitory stimulation together with excitatory impulses. A natural development of the two concepts above is one that endows inhibition with a signal-processing role of its own.
In neuroscience, there is the idea of the binding problem. For example, during visual perception, features such as form, color and stereopsis are represented in the brain by different neuronal assemblies. The mechanism ensuring that those features are perceived as belonging to a single real object is called "feature binding". The experimentally supported view is that precise temporal coordination between neuronal impulses is required for the binding to occur. This coordination mainly means that signals about different features must arrive at certain areas in the brain within a certain time window.
The BN concept reproduces, at the level of a single generic neuron, the requirement that is necessary for feature binding to occur, and which was formulated earlier at the level of large-scale neuronal assemblies. Its formulation is made possible by analysis of the response of the Hodgkin–Huxley model to stimuli similar to those real neurons receive in natural conditions; see "Mathematical implementations" below.
== Mathematical implementations ==
=== Hodgkin–Huxley (H-H) model ===
The Hodgkin–Huxley model is a physiologically substantiated neuronal model that operates in terms of transmembrane ionic currents and describes the mechanism of generation of the action potential.
In the original paper, the response of the H-H model was studied numerically for stimuli {\displaystyle U(t)} composed of many excitatory impulses distributed randomly within a time window {\displaystyle W}:
{\displaystyle U(t)=\sum _{k=1}^{NP}V(t-t_{k}),\qquad t_{k}\in [0;W].}
Here {\displaystyle V(t)} denotes the magnitude of the excitatory postsynaptic potential at moment {\displaystyle t}; {\displaystyle t_{k}} is the moment of arrival of the {\displaystyle k}-th impulse; and {\displaystyle NP} is the total number of impulses composing the stimulus. The numbers {\displaystyle t_{k}} are random, distributed uniformly within the interval {\displaystyle [0;W]}. The stimulating current applied in the H-H equations is as follows:
$I(t)=-C_{M}\,\frac{dU(t)}{dt},$
where $C_M$ is the capacitance of a unit area of excitable membrane. The probability of generating an action potential was calculated as a function of the window width $W$.
Different constant potassium conductances were added to the H-H equations in order to create certain levels of inhibitory potential. The dependencies obtained, if recalculated as functions of $TC=\frac{1}{W}$, which is analogous to the temporal coherence of impulses in the compound stimulus, have a step-like form. The location of the step is controlled by the level of the inhibitory potential; see Fig. 1. Due to this type of dependence, the H-H equations can be treated as a mathematical model of the BN concept.
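To make the stimulation protocol concrete, the sketch below builds the compound stimulus $U(t)$ from randomly timed EPSPs and derives the stimulating current $I(t)=-C_M\,dU/dt$ numerically. The alpha-function EPSP shape and all parameter values are illustrative assumptions, not values taken from the original study.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the original study)
DT = 0.01        # time step, ms
T_TOTAL = 50.0   # simulated interval, ms
NP = 60          # number of impulses in the compound stimulus
W = 10.0         # window width, ms
C_M = 1.0        # membrane capacitance per unit area, uF/cm^2

rng = np.random.default_rng(0)
t = np.arange(0.0, T_TOTAL, DT)

def epsp(t, tau_e=2.0, amp=0.5):
    """Single EPSP V(t), modeled as an alpha function (assumed shape), in mV."""
    return np.where(t >= 0.0, amp * (t / tau_e) * np.exp(1.0 - t / tau_e), 0.0)

# U(t) = sum_{k=1}^{NP} V(t - t_k), with t_k drawn uniformly from [0, W]
t_k = rng.uniform(0.0, W, size=NP)
U = sum(epsp(t - tk) for tk in t_k)

# Stimulating current I(t) = -C_M * dU/dt, here by finite differences
I = -C_M * np.gradient(U, DT)
print(f"peak U = {U.max():.2f} mV, peak |I| = {np.abs(I).max():.2f} uA/cm^2")
```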
=== Leaky integrate and fire neuron (LIF) ===
The leaky integrate-and-fire (LIF) neuron is a widely used abstract neuronal model. If a similar stimulation problem is posed for the LIF neuron with an appropriately chosen inhibition mechanism, then step-like dependencies similar to those in Fig. 1 can be obtained as well. Therefore, the LIF neuron can likewise be considered a mathematical model of the BN concept.
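A minimal numerical sketch of this kind of experiment is given below, assuming a simple LIF neuron whose voltage jumps by a fixed amount per input impulse and decays with a leak time constant between impulses; the leak here loosely stands in for the inhibition level, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fires(window, n_inputs=20, epsp=1.0, threshold=15.0, tau_m=5.0):
    """One trial: does a LIF neuron cross threshold for impulses spread
    uniformly over [0, window]? Between impulses the voltage decays as
    V *= exp(-dt / tau_m); each impulse adds `epsp` instantaneously."""
    times = np.sort(rng.uniform(0.0, window, size=n_inputs))
    v, t_prev = 0.0, 0.0
    for t in times:
        v *= np.exp(-(t - t_prev) / tau_m)  # leaky decay
        v += epsp                           # instantaneous EPSP
        if v >= threshold:
            return True
        t_prev = t
    return False

# Firing probability as a function of window width W: narrower windows
# (higher temporal coherence) make threshold crossing more likely,
# producing step-like curves of the kind discussed above.
for w in (1.0, 5.0, 10.0, 20.0, 40.0):
    p = np.mean([fires(w) for _ in range(500)])
    print(f"W = {w:5.1f} ms -> firing probability ~ {p:.2f}")
```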
=== Binding neuron model ===
The binding neuron model implements the BN concept in the most refined form.
In this model, each input impulse is stored in the neuron for a fixed time $\tau$ and then disappears. This kind of memory serves as a surrogate of the excitatory postsynaptic potential.
The model has a threshold $N_{th}$: if the number of impulses stored in the BN exceeds $N_{th}$, then the neuron fires a spike and clears its internal memory. The presence of inhibition results in a decreased $\tau$.
In the BN model it is necessary to track the time to live of every stored impulse while calculating the neuron's response to input stimulation. This makes the BN model more complicated for numerical simulation than the LIF model. On the other hand, any impulse spends a finite time $\tau$ in the BN model neuron, in contrast to the LIF model, where traces of an impulse can persist infinitely long. This property of the BN model makes it possible to obtain an exact description of the output activity of a BN stimulated with a random stream of input impulses.
The limiting case of a BN with infinite memory, $\tau\to\infty$, corresponds to the temporal integrator. The limiting case of a BN with infinitely short memory, $\tau\to 0$, corresponds to the coincidence detector.
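The rules above translate directly into a short event-driven simulation. The following is a minimal sketch of the BN model exactly as described (a fixed lifetime $\tau$ per stored impulse, a threshold $N_{th}$, and memory cleared on firing); parameter values are illustrative assumptions.

```python
from collections import deque

def binding_neuron(input_times, tau=5.0, n_th=3):
    """Event-driven BN model: each input impulse is stored for `tau`
    time units; if the number of simultaneously stored impulses reaches
    `n_th`, the neuron fires and clears its memory. Inhibition would
    act by decreasing `tau`. Returns the list of output (firing) times."""
    stored = deque()   # expiry times of impulses currently in memory
    output = []
    for t in sorted(input_times):
        # Drop impulses whose time to live has run out.
        while stored and stored[0] <= t:
            stored.popleft()
        stored.append(t + tau)
        if len(stored) >= n_th:
            output.append(t)
            stored.clear()  # binding: emit one spike, forget the inputs
    return output

# Three temporally coherent impulses bind into one output spike; the same
# three impulses spread more widely than tau produce no output at all.
print(binding_neuron([1.0, 2.0, 3.0]))    # -> [3.0]
print(binding_neuron([1.0, 8.0, 16.0]))   # -> []
```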
== Integrated circuit implementation ==
The above-mentioned and other neuronal models, and networks made of them, can be implemented in microchips. Among the different chip types, field-programmable gate arrays are worth mentioning. These chips can be used to implement any neuronal model, but the BN model can be programmed most naturally, because it can use only integers and does not require solving differential equations. These features have been exploited in hardware implementations.
== Limitations ==
As an abstract concept, the BN concept is subject to inherent limitations. Among these are the neglect of neuronal morphology, the identical magnitude of all input impulses, the replacement of the set of transients with different relaxation times known for a real neuron with a single time to live, $\tau$, of an impulse in the neuron, and the absence of refractoriness and of fast (chloride) inhibition. The BN model has the same limitations, yet some of them can be removed in a more elaborate model; see, for example, work in which the BN model is extended with refractoriness and fast inhibition.
== References == | Wikipedia/Binding_neuron |
In neuroscience, classical cable theory uses mathematical models to calculate the electric current (and accompanying voltage) along passive neurites, particularly the dendrites that receive synaptic inputs at different sites and times. Estimates are made by modeling dendrites and axons as cylinders composed of segments with capacitances $c_m$ and resistances $r_m$ combined in parallel (see Fig. 1). The capacitance of a neuronal fiber comes about because electrostatic forces act through the very thin lipid bilayer (see Fig. 2). The resistance in series along the fiber, $r_l$, is due to the axoplasm's significant resistance to the movement of electric charge.
== History ==
Cable theory in computational neuroscience has roots leading back to the 1850s, when Professor William Thomson (later known as Lord Kelvin) began developing mathematical models of signal decay in submarine (underwater) telegraphic cables. The models resembled the partial differential equations used by Fourier to describe heat conduction in a wire.
The 1870s saw the first attempts by Hermann to model neuronal electrotonic potentials also by focusing on analogies with heat conduction. However, it was Hoorweg who first discovered the analogies with Kelvin's undersea cables in 1898 and then Hermann and Cremer who independently developed the cable theory for neuronal fibers in the early 20th century. Further mathematical theories of nerve fiber conduction based on cable theory were developed by Cole and Hodgkin (1920s–1930s), Offner et al. (1940), and Rushton (1951).
Experimental evidence for the importance of cable theory in modelling the behavior of axons began surfacing in the 1930s from work done by Cole, Curtis, Hodgkin, Sir Bernard Katz, Rushton, Tasaki and others. Two key papers from this era are those of Davis and Lorente de Nó (1947) and Hodgkin and Rushton (1946).
The 1950s saw improvements in techniques for measuring the electric activity of individual neurons. Thus cable theory became important for analyzing data collected from intracellular microelectrode recordings and for analyzing the electrical properties of neuronal dendrites. Scientists like Coombs, Eccles, Fatt, Frank, Fuortes and others now relied heavily on cable theory to obtain functional insights of neurons and for guiding them in the design of new experiments.
Later, cable theory with its mathematical derivatives allowed ever more sophisticated neuron models to be explored by workers such as Jack, Rall, Redman, Rinzel, Idan Segev, Tuckwell, Bell, and Iannella. More recently, cable theory has been applied to model electrical activity in bundled neurons in the white matter of the brain.
== Deriving the cable equation ==
Note that various conventions for $r_m$ exist. Here $r_m$ and $c_m$, as introduced above, are measured per unit of membrane length (per meter). Thus $r_m$ is measured in ohm·meters (Ω·m) and $c_m$ in farads per meter (F/m). This is in contrast to $R_m$ (in Ω·m²) and $C_m$ (in F/m²), which represent the specific resistance and capacitance, respectively, of one unit area of membrane. Thus, if the radius $a$ of the axon is known, then its circumference is $2\pi a$, and its $r_m$ and $c_m$ values can be calculated as:
$r_m = \frac{R_m}{2\pi a}, \qquad c_m = C_m \cdot 2\pi a.$
These relationships make sense intuitively, because the greater the circumference of the axon, the greater the area for charge to escape through its membrane, and therefore the lower the membrane resistance (dividing $R_m$ by $2\pi a$); and the more membrane is available to store charge (multiplying $C_m$ by $2\pi a$).
The specific electrical resistance, $\rho_l$, of the axoplasm allows one to calculate the longitudinal intracellular resistance per unit length, $r_l$ (in Ω·m⁻¹), by the equation:
$r_l = \frac{\rho_l}{\pi a^2}.$
The greater the cross-sectional area of the axon, $\pi a^2$, the greater the number of paths for the charge to flow through its axoplasm, and the lower the axoplasmic resistance.
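For concreteness, the per-unit-length parameters can be computed from the specific constants as in the sketch below; the numerical values are generic textbook-style figures used purely as assumptions.

```python
import math

# Typical textbook-style values (assumptions for illustration)
R_m = 2.0       # specific membrane resistance, ohm * m^2
C_m = 1e-2      # specific membrane capacitance, F / m^2
rho_l = 2.0     # specific axoplasm resistivity, ohm * m
a = 2e-6        # axon radius, m

r_m = R_m / (2 * math.pi * a)      # ohm * m  (leak resistance per unit length)
c_m = C_m * (2 * math.pi * a)      # F / m    (capacitance per unit length)
r_l = rho_l / (math.pi * a ** 2)   # ohm / m  (axial resistance per unit length)

print(f"r_m = {r_m:.3e} ohm*m, c_m = {c_m:.3e} F/m, r_l = {r_l:.3e} ohm/m")
```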
Several important avenues for extending classical cable theory have recently been developed, introducing endogenous structures in order to analyze the effects of protein polarization within dendrites and of different synaptic input distributions over the dendritic surface of a neuron.
To better understand how the cable equation is derived, first consider an idealized neuron with a perfectly sealed membrane ($r_m = \infty$), so there is no loss of current to the outside, and no capacitance ($c_m = 0$). A current injected into the fiber at position $x = 0$ would move along the inside of the fiber unchanged. Moving away from the point of injection, and using Ohm's law ($V = IR$), we can calculate the voltage change as:
$\Delta V = -i_l\, r_l\, \Delta x, \qquad (4)$
where the negative sign reflects that current flows down the potential gradient. Letting $\Delta x$ go toward zero, with infinitely small increments of $x$, one can write (4) as:
$\frac{\partial V}{\partial x} = -i_l\, r_l, \qquad (5)$
or
$i_l = -\frac{1}{r_l}\,\frac{\partial V}{\partial x}. \qquad (6)$
Bringing rm back into the picture is like making holes in a garden hose. The more holes, the faster the water will escape from the hose, and the less water will travel all the way from the beginning of the hose to the end. Similarly, in an axon, some of the current traveling longitudinally through the axoplasm will escape through the membrane.
If $i_m$ is the current escaping through the membrane per unit length, then the total current escaping along $y$ units must be $y \cdot i_m$. Thus, the change of current in the axoplasm, $\Delta i_l$, at distance $\Delta x$ from position $x = 0$ can be written as:
$\Delta i_l = -i_m\, \Delta x, \qquad (7)$
or, using continuous, infinitesimally small increments:
$\frac{\partial i_l}{\partial x} = -i_m. \qquad (8)$
$i_m$ can be expressed with yet another formula, by including the capacitance. The capacitance will cause a flow of charge (a current) toward the membrane on the side of the cytoplasm. This current is usually referred to as displacement current (here denoted $i_c$). The flow will only take place as long as the membrane's storage capacity has not been reached. $i_c$ can then be expressed as:
$i_c = c_m\, \frac{\partial V}{\partial t}, \qquad (9)$
where $c_m$ is the membrane's capacitance and $\partial V/\partial t$ is the change in voltage over time.
The current that passes through the membrane ($i_r$) can be expressed as:
$i_r = \frac{V}{r_m}, \qquad (10)$
and because $i_m = i_r + i_c$, the following equation for $i_m$ can be derived, if no additional current is added from an electrode:
$i_m = -\frac{\partial i_l}{\partial x} = \frac{V}{r_m} + c_m\,\frac{\partial V}{\partial t}, \qquad (11)$
where $\partial i_l/\partial x$ represents the change per unit length of the longitudinal current.
Combining equations (6) and (11) gives a first version of a cable equation:
$\frac{1}{r_l}\,\frac{\partial^2 V}{\partial x^2} = \frac{V}{r_m} + c_m\,\frac{\partial V}{\partial t}, \qquad (12)$
which is a second-order partial differential equation (PDE).
By a simple rearrangement of equation (12) (see below) it is possible to make two important terms appear, namely the length constant (sometimes referred to as the space constant), denoted $\lambda$, and the time constant, denoted $\tau$. The following sections focus on these terms.
== Length constant ==
The length constant, $\lambda$ (lambda), is a parameter that indicates how far a stationary current will influence the voltage along the cable. The larger the value of $\lambda$, the farther the charge will flow. The length constant can be expressed as:
$\lambda = \sqrt{\frac{r_m}{r_l}}.$
The larger the membrane resistance, $r_m$, the greater the value of $\lambda$, and the more current will remain inside the axoplasm to travel longitudinally through the axon. The higher the axoplasmic resistance, $r_l$, the smaller the value of $\lambda$, the harder it will be for current to travel through the axoplasm, and the shorter the distance the current will be able to travel.
It is possible to solve equation (12) and arrive at the following equation, which is valid in steady-state conditions, i.e. when time approaches infinity:
$V_x = V_0\, e^{-x/\lambda},$
where $V_0$ is the depolarization at $x = 0$ (the point of current injection), $e$ is the exponential constant (approximately 2.71828), and $V_x$ is the voltage at a given distance $x$ from $x = 0$. When $x = \lambda$, then
$V_x = V_0\, e^{-1},$
and since
$e^{-1} \approx 0.368,$
measuring $V$ at distance $\lambda$ from $x = 0$ gives
$V_\lambda = 0.368\, V_0.$
Thus $V_\lambda$ is always 36.8 percent of $V_0$.
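A quick numerical check of these relations, reusing the per-unit-length values from the earlier sketch (still assumptions):

```python
import math

# Per-unit-length values consistent with the earlier sketch (assumptions)
r_m = 1.6e5    # ohm * m, membrane resistance per unit length
r_l = 1.6e11   # ohm / m, axial resistance per unit length

lam = math.sqrt(r_m / r_l)   # length constant, m (here 1e-3 m = 1 mm)
v0 = 10.0                    # depolarization at x = 0, mV

# Steady-state voltage decays exponentially with distance from x = 0.
for n in range(4):
    x = n * lam
    v = v0 * math.exp(-x / lam)
    print(f"x = {n} lambda -> V = {v:5.2f} mV ({v / v0:.1%} of V0)")
# At x = lambda the voltage is e^-1, i.e. about 36.8%, of V0.
```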
== Time constant ==
Neuroscientists are often interested in knowing how fast the membrane potential, $V_m$, of an axon changes in response to changes in the current injected into the axoplasm. The time constant, $\tau$, is an index that provides information about that value. $\tau$ can be calculated as:
$\tau = r_m\, c_m.$
The larger the membrane capacitance, $c_m$, the more current it takes to charge and discharge a patch of membrane, and the longer this process will take. The larger the membrane resistance, $r_m$, the harder it is for a current to induce a change in membrane potential. So the higher the $\tau$, the slower the nerve impulse can travel; that is, the membrane potential (the voltage across the membrane) lags further behind current injections. Response times vary from 1–2 milliseconds in neurons that process information requiring high temporal precision to 100 milliseconds or longer; a typical response time is around 20 milliseconds.
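For a single isopotential patch of membrane (no axial current flow), the voltage response to a step of injected current approaches its final value exponentially with this time constant. A minimal sketch, under that simplifying assumption and with illustrative values:

```python
import math

tau = 20.0    # time constant, ms (the typical value quoted above)
v_inf = 10.0  # final depolarization for a sustained current step, mV

# Charging curve of an isopotential membrane patch:
#   V(t) = V_inf * (1 - exp(-t / tau))
for t in (0.0, tau, 2 * tau, 5 * tau):
    v = v_inf * (1.0 - math.exp(-t / tau))
    print(f"t = {t:5.1f} ms -> V = {v:5.2f} mV")
# After one time constant the patch has reached ~63.2% of V_inf,
# which is the lag behind the current injection described above.
```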
== Generic form and mathematical structure ==
If one multiplies equation (12) by $r_m$ on both sides of the equals sign, we get:
$\frac{r_m}{r_l}\,\frac{\partial^2 V}{\partial x^2} = c_m r_m\,\frac{\partial V}{\partial t} + V,$
and recognize $\lambda^2 = r_m/r_l$ on the left side and $\tau = c_m r_m$ on the right side. The cable equation can now be written in its perhaps best-known form:
$\lambda^2\,\frac{\partial^2 V}{\partial x^2} = \tau\,\frac{\partial V}{\partial t} + V.$
This is a 1D heat equation or diffusion equation for which many solution methods, such as Green's functions and Fourier methods, have been developed.
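Because the equation has this diffusion form, an explicit finite-difference scheme is enough for a numerical sketch. The discretization below and all parameter values are illustrative assumptions; the time step is chosen small enough for the explicit scheme to be stable.

```python
import numpy as np

# Illustrative parameters (assumptions): lengths in mm, times in ms
lam, tau = 1.0, 20.0      # length constant, time constant
dx, dt = 0.05, 0.01       # grid spacing, time step (chosen for stability)
nx, nt = 201, 20000       # grid points, time steps (200 ms simulated)

v = np.zeros(nx)
inject = nx // 2          # index of the current-injection site

# Explicit update for  lambda^2 V_xx = tau V_t + V,  i.e.
#   V_t = (lambda^2 V_xx - V) / tau
for _ in range(nt):
    vxx = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    v[1:-1] += dt * (lam**2 * vxx - v[1:-1]) / tau
    v[inject] = 10.0      # clamp the injection site at 10 mV
    v[0] = v[-1] = 0.0    # grounded ends far from the injection site

# The steady profile should match V(x) = V0 * exp(-|x - x0| / lambda):
ratio = v[inject + int(lam / dx)] / v[inject]
print(f"V(lambda)/V(0) = {ratio:.3f}  (analytic value: {np.exp(-1):.3f})")
```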
It is also a special degenerate case of the telegrapher's equation, in which the inductance $L$ vanishes and the signal propagation speed $1/\sqrt{LC}$ is infinite.
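For orientation, the telegrapher's equation in a standard form is shown below; the notation ($R$, $G$, $L$, $C$ per unit length) follows the usual transmission-line conventions rather than anything defined in this article, so the mapping to $\lambda$ and $\tau$ is given only as a consistency check.

```latex
% Telegrapher's equation for a line with series inductance L and
% resistance R, and shunt capacitance C and conductance G (per unit length):
\[
  \frac{\partial^{2}V}{\partial x^{2}}
  = LC\,\frac{\partial^{2}V}{\partial t^{2}}
  + (RC + GL)\,\frac{\partial V}{\partial t}
  + RG\,V
\]
% Setting L = 0 removes the wave (second-order time) term, leaving
%   V_xx = RC V_t + RG V,
% which matches the cable equation with lambda^2 = 1/(RG) and tau = C/G.
```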
== See also ==
Nanophysiology
Axon
Bidomain model
Bioelectrochemistry
Biological neuron model
Dendrite
Hodgkin–Huxley model
Membrane potential
Monodomain model
Nernst–Planck equation
Patch clamp
Saltatory conduction
Soliton model in neuroscience
== References ==
Poznanski, Roman R. (2013). Mathematical Neuroscience. San Diego [California]: Academic Press.
Tuckwell, Henry C. (1988). Introduction to theoretical neurobiology. Cambridge [Cambridgeshire]: Cambridge University Press. ISBN 978-0521350969.
de Nó, Rafael Lorente (1947). A study of nerve physiology. Studies from the Rockefeller Institute for Medical Research. Rockefeller Institute for Medical Research. Part I, 131:1–496; Part II, 132:1–548. ISBN 9780598674722. OCLC 6217290.
Lazarevich, Ivan A.; Kazantsev, Victor B. (2013). "Dendritic signal transition induced by intracellular charge in inhomogeneties". Phys. Rev. E. 88 (6): 062718. arXiv:1308.0821. Bibcode:2013PhRvE..88f2718L. doi:10.1103/PhysRevE.88.062718. PMID 24483497. S2CID 13353454.
Douglas, PK; Douglas, David B. (2019). "Reconsidering Spatial Priors in EEG Source Estimation : Does White Matter Contribute to EEG Rhythms?". 2019 7th International Winter Conference on Brain-Computer Interface (BCI). Vol. 88. pp. 1–12. arXiv:2111.08939. doi:10.1109/IWW-BCI.2019.8737307. ISBN 978-1-5386-8116-9. S2CID 195064621.
== Notes == | Wikipedia/Cable_equation |
Pyramidal cells, or pyramidal neurons, are a type of multipolar neuron found in areas of the brain including the cerebral cortex, the hippocampus, and the amygdala. Pyramidal cells are the primary excitation units of the mammalian prefrontal cortex and the corticospinal tract. One of the main structural features of the pyramidal neuron is the conic shaped soma, or cell body, after which the neuron is named. Other key structural features of the pyramidal cell are a single axon, a large apical dendrite, multiple basal dendrites, and the presence of dendritic spines.
Pyramidal neurons are also one of two cell types where the characteristic sign, Negri bodies, are found in post-mortem rabies infection. Pyramidal neurons were first discovered and studied by Santiago Ramón y Cajal. Since then, studies on pyramidal neurons have focused on topics ranging from neuroplasticity to cognition.
== Structure ==
=== Apical dendrite ===
The apical dendrite rises from the apex of the pyramidal cell's soma. The apical dendrite is a single, long, thick dendrite that branches several times as distance from the soma increases and extends towards the cortical surface.
=== Basal dendrite ===
Basal dendrites arise from the base of the soma. The basal dendritic tree consists of three to five primary dendrites. As distance increases from the soma, the basal dendrites branch profusely.
Pyramidal cells are among the largest neurons in the brain. Both in humans and rodents, pyramidal cell bodies (somas) average around 20 μm in length. Pyramidal dendrites typically range in diameter from half a micrometer to several micrometers. The length of a single dendrite is usually several hundred micrometers. Due to branching, the total dendritic length of a pyramidal cell may reach several centimeters. The pyramidal cell's axon is often even longer and extensively branched, reaching many centimeters in total length.
=== Dendritic spines ===
Dendritic spines receive most of the excitatory impulses (EPSPs) that enter a pyramidal cell. Dendritic spines were first noted by Ramón y Cajal in 1888 using Golgi's method. Ramón y Cajal was also the first to propose their physiological role: increasing the receptive surface area of the neuron. The greater the pyramidal cell's surface area, the greater the neuron's ability to process and integrate large amounts of information. Dendritic spines are absent on the soma, and their number increases with distance from it. The typical apical dendrite in a rat has at least 3,000 dendritic spines. The average human apical dendrite is approximately twice the length of a rat's, so the number of dendritic spines on a human apical dendrite could be as high as 6,000.
== Growth and development ==
=== Differentiation ===
Pyramidal specification occurs during early development of the cerebrum. Progenitor cells are committed to the neuronal lineage in the subcortical proliferative ventricular zone (VZ) and the subventricular zone (SVZ). Immature pyramidal cells undergo migration to occupy the cortical plate, where they further diversify. Endocannabinoids (eCBs) are one class of molecules that have been shown to direct pyramidal cell development and axonal pathfinding. Transcription factors such as Ctip2 and Sox5 have been shown to contribute to the direction in which pyramidal neurons direct their axons.
=== Early postnatal development ===
Pyramidal cells in rats have been shown to undergo many rapid changes during early postnatal life. Between postnatal days 3 and 21, pyramidal cells have been shown to double the size of the soma, increase the length of the apical dendrite fivefold, and increase basal dendrite length thirteen-fold. Other changes include the lowering of the membrane's resting potential, reduction of membrane resistance, and an increase in the peak values of action potentials.
== Signaling ==
Like dendrites in most other neurons, the dendrites are generally the input areas of the neuron, while the axon is the neuron's output. Both axons and dendrites are highly branched. The large amount of branching allows the neuron to send and receive signals to and from many different neurons.
Pyramidal neurons, like other neurons, have numerous voltage-gated ion channels. In pyramidal cells, there is an abundance of Na+, Ca2+, and K+ channels in the dendrites, and some channels in the soma. Ion channels within pyramidal cell dendrites have different properties from the same ion channel type within the pyramidal cell soma. Voltage-gated Ca2+ channels in pyramidal cell dendrites are activated by subthreshold EPSPs and by back-propagating action potentials. The extent of back-propagation of action potentials within pyramidal dendrites depends upon the K+ channels. K+ channels in pyramidal cell dendrites provide a mechanism for controlling the amplitude of action potentials.
The ability of pyramidal neurons to integrate information depends on the number and distribution of the synaptic inputs they receive. A single pyramidal cell receives about 30,000 excitatory inputs and 1700 inhibitory (IPSPs) inputs. Excitatory (EPSPs) inputs terminate exclusively on the dendritic spines, while inhibitory (IPSPs) inputs terminate on dendritic shafts, the soma, and even the axon. Pyramidal neurons can be excited by the neurotransmitter glutamate, and inhibited by the neurotransmitter GABA.
=== Firing classifications ===
Pyramidal neurons have been classified into different subclasses based upon their firing responses to 400–1000 millisecond current pulses. These classifications are RSad, RSna, and IB neurons.
==== RSad ====
RSad pyramidal neurons, or adapting regular spiking neurons, fire with individual action potentials (APs), which are followed by a hyperpolarizing afterpotential. The afterpotential increases in duration which creates spike frequency adaptation (SFA) in the neuron.
==== RSna ====
RSna pyramidal neurons, or non-adapting regular spiking neurons, fire a train of action potentials after a pulse. These neurons show no signs of adaptation.
==== IB ====
IB pyramidal neurons, or intrinsically bursting neurons, respond to threshold pulses with a burst of two to five rapid action potentials. IB pyramidal neurons show no adaptation.
=== Molecular classifications ===
Several studies have shown that the morphological and electrical properties of pyramidal cells can be deduced from gene expression measured by single-cell sequencing. Several of these propose that single-cell classifications in mouse and human neurons based on gene expression could explain various neuronal properties. Neuronal types in these classifications are split into excitatory and inhibitory types and hundreds of corresponding subtypes. For example, pyramidal cells of layers 2–3 in humans are classified as the FREM3 type and often have a large Ih current generated by HCN channels.
== Function ==
=== Corticospinal tract ===
Pyramidal neurons are the primary neural cell type in the corticospinal tract. Normal motor control depends on the development of connections between the axons in the corticospinal tract and the spinal cord. Pyramidal cell axons follow cues such as growth factors to make specific connections. With proper connections, pyramidal cells take part in the circuitry responsible for vision guided motor function.
=== Cognition ===
Pyramidal neurons in the prefrontal cortex are implicated in cognitive ability. In mammals, the complexity of pyramidal cells increases from posterior to anterior brain regions. The degree of complexity of pyramidal neurons is likely linked to the cognitive capabilities of different anthropoid species. Pyramidal cells within the prefrontal cortex appear to be responsible for processing input from the primary auditory cortex, primary somatosensory cortex, and primary visual cortex, all of which process sensory modalities. These cells might also play a critical role in complex object recognition within the visual processing areas of the cortex. Relative to other species, the larger cell size and complexity of pyramidal neurons, along with certain patterns of cellular organization and function, correlates with the evolution of human cognition.
=== Memory and learning ===
The hippocampus's pyramidal cells are essential for certain types of memory and learning. They form synapses that aid in the integration of synaptic voltages throughout their complex dendritic trees through interactions with mossy fibers from granule cells. Since it affects the postsynaptic voltages produced by mossy fiber activation, the placement of thorny excrescences on basal and apical dendrites is important for memory formation. By enabling dynamic control of the sensitivity of CA3 pyramidal cells, this clustering of mossy fiber synapses on pyramidal cells may facilitate the initiation of somatic spikes.
The interactions between pyramidal cells and an estimated 41 mossy fiber boutons, each originating from a unique granule cell, highlight the role of these boutons in information processing and synaptic connectivity, which are essential for memory and learning. Fundamentally, mossy fiber input is received by pyramidal cells in the hippocampus, which integrate synaptic voltages within their dendritic architecture. The location of thorny excrescences and the clustering of synapses influence sensitivity and contribute to the processing of information pertaining to memory and learning.
== See also ==
Pyramidal tract
Chandelier cells - innervate initial segments of pyramidal axons
Rosehip neuron
== References ==
== External links ==
Pyramidal cell - Cell Centered Database
Diagram
Image
Diagram (as part of slideshow) Archived 2016-11-02 at the Wayback Machine | Wikipedia/Pyramidal_neurons |
Nervous system diseases, also known as nervous system or neurological disorders, refers to a broad class of medical conditions affecting the nervous system. This category encompasses over 600 different conditions, including genetic disorders, infections, cancer, seizure disorders (such as epilepsy), conditions with a cardiovascular origin (such as stroke), congenital and developmental disorders (such as spina bifida), and degenerative disorders (such as multiple sclerosis, Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis).
== Signs and symptoms ==
Signs and symptoms can vary depending on the condition. Given the significance of the nervous system in human physiology, symptoms can involve other organ systems and result in motor dysfunction, sensory impairment, and pain, among other things.
== Causes ==
=== Genetic ===
Some nervous system diseases are due to genetic mutations. For example, Huntington's disease is an inherited disease characterized by progressive neurodegeneration. Huntington's disease results from a mutation in either copy of the HTT gene, which results in an abnormally folded protein. The accumulation of mutated proteins results in brain damage of the basal ganglia.
=== Congenital/developmental defect ===
Developing babies can have birth defects that affect the formation of the nervous system. For example, anencephaly and spina bifida cause abnormalities in the nervous system due to neural tube defects.
=== Cancer ===
Specialized cells in the central nervous system, such as glial cells, may proliferate abnormally and form gliomas. Glioblastoma is an aggressive form of glioma.
=== Infection ===
Pathogens like fungi, bacteria, and viruses can affect the nervous system. For example, meningitis is a common infection of the central nervous system, where bacterial or viral infections cause an inflammation of the meninges.
=== Seizure disorder ===
It is suspected that seizures occur because of synchronized brain activity. Epilepsy, for example, is characterized by an abnormal electrical activity in the brain, which causes repeated seizures.
=== Vascular ===
The brain is rich in blood vessels because it requires a lot of nutrients and oxygen. A stroke may result from a blood clot or hemorrhage.
=== Degenerative ===
A neurodegenerative disease is a disease that causes damage to neurons. Examples of neurodegenerative disease include Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis. For example, multiple sclerosis (MS) is an inflammatory neurodegenerative disease in which the body initiates an inflammatory reaction in the central nervous system that damages neurons. Neurodegeneration differs in each disease; for example, MS results from a degenerative process called demyelination, whereas Parkinson's disease results from damage to neurons in the substantia nigra, which is important for initiating motor behavior.
== Anatomy ==
=== Central nervous system (CNS) ===
According to Tim Newman, the central nervous system is made up of the brain and spinal cord; it collects information from the entire body and also controls functions throughout the body.
==== Brain ====
The brain is the most complex organ in the human body. It is split into two hemispheres, each divided into four lobes: frontal, parietal, temporal, and occipital. The brain has over 100 billion neurons and consumes up to 20% of the energy used by the body.
==== Spinal cord ====
The spinal cord runs through most of the back. Thirty-one pairs of spinal nerves branch from the spinal cord, exiting between the vertebrae. These nerves connect to the peripheral nervous system.
=== Peripheral nervous system ===
The peripheral nervous system connects to the muscles and glands and sends information to the central nervous system.
== Diagnosis ==
There are a number of different tests that can be used to diagnose neurological disorders.
=== Lumbar puncture ===
A lumbar puncture (LP), also known as a spinal tap, is a procedure in which a hollow needle is inserted into the subarachnoid space of the spinal cord, allowing cerebrospinal fluid (CSF) to be collected for subsequent analysis. Red and white blood cell counts, protein and glucose levels, and the presence of abnormal cells or pathogens such as bacteria and viruses can all be screened for. The opacity and color of the fluid can also yield useful information that can assist in a diagnosis.
== Treatments ==
The treatments for nervous system disorders vary depending on the condition and can include interventions such as medication, surgery, and therapy.
== See also ==
Central nervous system disease
Peripheral neuropathy
== References ==
"Nervous System Side Effects". Cancer.Net. 2012-07-02. Retrieved 2019-04-05.
== External links == | Wikipedia/Nervous_system_disease |
Basic research, also called pure research, fundamental research, basic science, or pure science, is a type of scientific research with the aim of improving scientific theories for better understanding and prediction of natural or other phenomena. In contrast, applied research uses scientific theories to develop technology or techniques, which can be used to intervene and alter natural or other phenomena. Though often driven simply by curiosity, basic research often fuels the technological innovations of applied science. The two aims are often practiced simultaneously in coordinated research and development.
In addition to yielding innovations, basic research provides insight into nature and can build public appreciation of it, possibly improving conservation efforts. Insights from nature uncovered by such research may also influence engineering concepts, as when the beak of the kingfisher influenced the design of a high-speed bullet train.
== Overview ==
Basic research advances fundamental knowledge about the world. It focuses on creating and refuting or supporting theories that explain observed phenomena. Pure research is the source of most new scientific ideas and ways of thinking about the world. It can be exploratory, descriptive, or explanatory; however, explanatory research is the most common.
Basic research generates new ideas, principles, and theories, which may not be immediately utilized but nonetheless form the basis of progress and development in different fields. Today's computers, for example, could not exist without research in pure mathematics conducted over a century ago, for which there was no known practical application at the time. Basic research rarely helps practitioners directly with their everyday concerns; nevertheless, it stimulates new ways of thinking that have the potential to revolutionize and dramatically improve how practitioners deal with a problem in the future.
== By country ==
In the United States, basic research is funded mainly by the federal government and done mainly at universities and institutes. As government funding has diminished in the 2010s, however, private funding is increasingly important.
== Basic versus applied science ==
Applied science focuses on the development of technology and techniques. In contrast, basic science develops scientific knowledge and predictions, principally in natural sciences but also in other empirical sciences, which are used as the scientific foundation for applied science. Basic science develops and establishes information to predict phenomena and perhaps to understand nature, whereas applied science uses portions of basic science to develop interventions via technology or technique to alter events or outcomes. Applied and basic sciences can interface closely in research and development. The interface between basic research and applied research has been studied by the National Science Foundation, which described the motivation of the basic researcher as follows: "A worker in basic scientific research is motivated by a driving curiosity about the unknown. When his explorations yield new knowledge, he experiences the satisfaction of those who first attain the summit of a mountain or the upper reaches of a river flowing through unmapped territory. Discovery of truth and understanding of nature are his objectives. His professional standing among his fellows depends upon the originality and soundness of his work. Creativeness in science is of a cloth with that of the poet or painter." The NSF conducted a study tracing the relationship between basic scientific research efforts and the development of major innovations, such as oral contraceptives and videotape recorders. This study found that basic research played a key role in the development of all of the innovations. The number of basic research studies that assisted in the production of a given innovation peaked between 20 and 30 years before the innovation itself. While most innovation takes the form of applied science and most innovation occurs in the private sector, basic research is a necessary precursor to almost all applied science and associated instances of innovation. Roughly 76% of basic research is conducted by universities.
A distinction can be made between basic science and disciplines such as medicine and technology. They can be grouped as STM (science, technology, and medicine; not to be confused with STEM [science, technology, engineering, and mathematics]) or STS (science, technology, and society). These groups are interrelated and influence each other, although they may differ in the specifics such as methods and standards.
The Nobel Prize mixes basic with applied sciences for its award in Physiology or Medicine. In contrast, the Royal Society of London awards distinguish natural science from applied science.
== See also ==
Blue skies research
Hard and soft science
Metascience
Normative science
Physics
Precautionary principle
Pure mathematics
Pure Chemistry
== References ==
== Further reading ==
Levy, David M. (2002). "Research and Development". In David R. Henderson (ed.). Concise Encyclopedia of Economics (1st ed.). Library of Economics and Liberty. OCLC 317650570, 50016270, 163149563 | Wikipedia/Basic_science_research |