| text | source |
|---|---|
Chirurgia magna (Latin for "Great [work on] Surgery"), fully titled the Inventarium sive chirurgia magna (Latin for "The Inventory, or the Great [work on] Surgery"), is a guide to surgery and practical medicine completed in 1363. Guy de Chauliac, Pope Clement VI's attending physician, compiled the information from his own field experience and research of historical medical texts. The original text is in Latin and comprises 465 pages. It was translated into various European languages: the version in Middle English has been published. [1] This work became one of the most important reference manuals of practical medicine for the next three centuries. [2] It was translated into Irish by Cormac Mac Duinnshléibhe. [3]
The physician and bibliophile Tibulle Desbarreaux-Bernard (1798–1880) believed that the Chirurgia magna was originally written in Catalan at the medical school in Montpellier and that the extant Latin text is an early translation. [4]
A modern edition of the Latin text, with commentary on sources, has been printed. [5]
|
https://en.wikipedia.org/wiki/Chirurgia_magna
|
Chitosan-poly(acrylic acid) is a composite increasingly used to create chitosan-poly(acrylic acid) nanoparticles. [1] [2] [3] More recently, various composite forms have appeared in which poly(acrylic acid) is synthesized with chitosan, and these are often used in drug delivery. Chitosan, which is already strongly biodegradable and biocompatible, can be merged with poly(acrylic acid) to create hybrid nanoparticles with greater adhesion, while preserving the biocompatibility and hemostatic character of the chitosan-poly(acrylic acid) complex. [1] The synthesis of this material is essential in various applications: the resulting nanoparticles can exhibit a variety of dispersal and release behaviors and can encapsulate a wide range of drugs and particles.
Research on nanoparticles, including chitosan nanoparticles, grew in popularity in the early 1990s, [1] [2] [3] mainly because of chitosan's biodegradability and biocompatibility. Owing to its molecular structure, chitosan dissolves well in a variety of solvents, including organic acids such as formic and lactic acid. [3] An additional benefit of chitosan is that it can be extensively modified: combined with other natural or synthetic materials, conjugated with ligands, or functionalized by various techniques. [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] One such modification is synthesis with poly(acrylic acid). [7] [14] [19] The addition of poly(acrylic acid) induces amphiphilicity, allowing the complex to assemble spontaneously. [7] [14] [19] This is important because it benefits stimuli responsiveness and large-scale use. [7] [14] [19]
Chitosan is a polysaccharide derived from chitin, composed of an alkaline-deacetylated glucosamine monomer and an acetylated glucosamine monomer bound through β-1,4 glycosidic and hydrogen bonds. [2] [3] The benefit of chitosan comes from its reactive groups, such as -OH and -NH2. [11] Various mechanisms for chitosan exist, and various isolation techniques can be used for the fabrication of chitosan nanoparticles.
There are various mechanisms for chitosan nanoparticle synthesis. These include ionic gelation/polyelectrolyte complexation, emulsion droplet coalescence, emulsion solvent diffusion, reverse micellization, desolvation, emulsification cross-linking, nanoprecipitation, and spray-drying. [3] [15]
Ionic gelation/polyelectrolyte complexation involves combining a cationic chitosan solution with anionic tripolyphosphate and collecting the precipitate in the form of nanoparticles. [3] [20] [21] [22]
Emulsion droplet coalescence forms chitosan nanoparticles by creating two stable emulsions with liquid paraffin: one containing chitosan with a stabilizer, and another containing sodium hydroxide, again with a stabilizer. Mixing the two emulsions forms the nanoparticles. [3] [23]
Emulsion solvent diffusion mixes chitosan and a stabilizer into an organic solvent, such as methylene chloride/acetone, containing a hydrophilic drug; the acetone is allowed to diffuse out, and the chitosan nanoparticles are recovered by centrifugation. [3] [24]
Reverse micellization involves dissolving a lipophilic surfactant in an organic solvent and adding chitosan with a drug and a cross-linker such as glutaraldehyde. The nanoparticles are then extracted. [3] [25]
Desolvation involves preparing a chitosan solution and adding a stabilizing solution together with a desolvating agent such as acetone. Because chitosan is insoluble in the desolvating agent, a precipitate forms as the liquid surrounding the chitosan is removed. A cross-linker such as glutaraldehyde can be added to form the nanoparticles. [3] [26]
In emulsification cross-linking, a chitosan-based solution is dispersed in the oil phase and stabilized. A cross-linker such as glutaraldehyde can then be used to derive chitosan nanoparticles. [3] [27]
Nanoprecipitation involves dissolving chitosan in a solvent and injecting it, via a pump, into a dispersing phase containing Tween 80; the nanoparticles are then derived from the dispersing phase. [3] [28]
Spray drying involves dissolving chitosan in an acetic acid solution, which is then atomized. The droplets are mixed with a drying gas, and after further evaporation, nanoparticles can be derived. [3] [29]
Poly(acrylic acid) refers to polymerized acrylic acid. Poly(acrylic acid) is known to have a neutral pH and beneficial cross-linking properties, owing to the charge of its side chains and to poly(acrylic acid) being anionic. [1] [11] [12] [13] [21] [22] Poly(acrylic acid) is known to have good biocompatibility with chitosan, particularly through chitosan's amine groups (-NH2). [30]
An alternative method for fabricating chitosan nanoparticles incorporates polymerized groups onto the chitosan. This can improve the chitosan cross-linking mechanism and the overall release profiles of drugs such as amoxicillin and meloxicam. [1] [31] Additionally, when poly(acrylic acid) is localized within the inner shell, overall drug encapsulation can be improved. [19] [30]
Ionic gelation with radical polymerization starts from a chitosan solution; upon addition of an acrylic monomer, the cationic chitosan is gelled by the anion of the acrylic monomer. The nanoparticles are derived after the mixture is left to self-assemble overnight and the unreacted monomer is removed. This is the main method for formulating poly(acrylic acid)-based chitosan nanoparticles. [1] [3] [11] [14]
Biomedical applications of chitosan-based nanoparticles range from cancer treatment, regenerative medicine, and tissue engineering to inflammatory diseases, diabetes, cerebral diseases, cardiovascular diseases, infectious diseases, and even vaccine delivery. [3] Lung, breast, and colorectal cancer are the three most frequent cancers and are responsible for one in three cancer cases and deaths worldwide. [32] Chitosan-based nanoparticles lend themselves to targeted drug delivery systems for biomedical use and improve the potential of oral drug administration (Figure 3). [1] [3] [15] [33]
Figure 3: Advantages of chitosan nanoparticles. Adapted from Sharifi-Rad et al., 2021. [32]
One of the main uses of chitosan-based nanoparticles is in drug delivery devices. Drugs delivered with chitosan-based nanoparticles include: methotrexate, fucose-conjugated chitosan, 5-fluorouracil, doxorubicin, docetaxel, paclitaxel, propranolol-HCl, CyA, insulin, indomethacin, cefazolin, isoniazid, tetracycline, didanosine, rifampicin, folate, zaltoprofen, curcumin, cisplatin, camptothecin, bupivacaine, prothionamide, hydrocortisone, albumin, Ocimum gratissimum essential oil, triphosphate, RGD peptides, and morphine. [3] [32] [33] The targeting applications again span various drug systems, with a primary focus on targeting cancer within specific organs such as the lung or colon. The addition of poly(acrylic acid) has shown success in improving overall gene expression and protein delivery through the ability to modify pH sensitivity, chemosensitivity, and targeting. [2] [10] [14] [15] [17] [18] [19] [22] [26] [28] [29] [30]
Another main use of chitosan-based nanoparticles is their ability to carry various drugs, organic compounds, and even inorganic analytes. [5] [8] [9] [11] [12] [23] [24] [25] [28] [32] These analytes include Fe3O4 (Figure 4). [3] [5] [9] [11] A Fe3O4-based chitosan-poly(acrylic acid) nanoparticle or nanosphere can have applications such as toxic metal uptake for direct use in drug delivery systems, treatment of tumors, magnetic separation of biomolecules, and even MRI contrast enhancement. [3] [5] [9] [11]
Figure 4: Magnetic nanospheres with chitosan-poly(acrylic acid). Adapted from Feng et al., 2009. [9]
Chitosan alone or together with putrescine has been used successfully to slow the decay of fruits for up to 12 days when held at low temperatures. [ 34 ]
Continued improvement in stability, biocompatibility, degradability, and nontoxicity is needed to improve overall viability. [1] [3] [15] [33] Current limitations exist in routes of delivery, such as the limited work on orally administered nanoparticles and drug delivery devices. Absorption should be further improved in chitosan-poly(acrylic acid) nanoparticles to enhance solubility for targeted drug delivery. [1] [3] [15] [33] Further work on cell viability and cell proliferation is also needed before these nanoparticles can be used in tissue regeneration. Finally, limitations remain in fabrication techniques and large-scale implementation because of possible difficulties in the synthesis of chitosan-based nanoparticles. [1] [3] [15] [33]
|
https://en.wikipedia.org/wiki/Chitosan_nanoparticles
|
Chlamydia research [1] is the systematic study of the organisms in the bacterial taxon Chlamydiota (formerly Chlamydiae), the diagnostic procedures [2] for the infections they cause, the disease chlamydia, the epidemiology of infection, and the development of vaccines. Research can involve many researchers working in collaboration across separate organizations, governmental entities, and universities. [3]
The Centers for Disease Control and Prevention (CDC) offers funding for research on the biology, physiology, and epidemiology of Chlamydia species, for vaccine development, and for the publication of systematic reviews. Other funding sources include the National Chlamydia Coalition. [4] [5]
Studies continue to determine the organism's genetic makeup. NIAID-supported scientists have determined the complete genome (genetic blueprint) for C. trachomatis. [ 6 ]
The Max Planck Institute for Infection Biology continues its research into chlamydia infection. [ 7 ] [ 8 ] [ 9 ] The institute has published over 140 studies related to chlamydia. [ 10 ]
There are research projects in several areas at the Queensland University of Technology, including development of a human vaccine against chlamydial sexually transmitted disease and work on basic mechanisms of regulation, including the importance of chlamydial proteases. Chlamydia infections in wildlife, particularly koalas, are also part of this research, including genomics and gene regulation studies in chlamydia. [11]
A sample list of primary publications: [ 11 ]
Vaccine development at the University of Southampton continues. [ 12 ] [ 13 ]
Vaccine research is ongoing in independent and institutional settings. [14] [15] CTH522 has completed phase 1 trials. [16]
Clinical trials are used by researchers investigating the efficacy of interventions or protocol in the epidemiology, detection, prevention and treatment of chlamydia infections. Interventions are the use of medical products, medication, devices, procedures or changes in the participants' behavior. The effects on the participants are measured and compared to previous trials, placebo or a new medical approach, or to no intervention. [ 17 ] The National Institutes of Health support ongoing research in the study of chlamydia infection. At least 113 studies have been initiated as of 2015. [ 17 ] [ 18 ] One example was the clinical trial of eye prophylaxis in newborns in the prevention of neonatal conjunctivitis caused by Chlamydia trachomatis. [ 19 ]
Research related to chlamydia can take the form of an observational study. This type of study assesses outcomes in groups of participants according to a research plan or protocol. The volunteers in the study may receive interventions such as medical products, medications, devices, or procedures as part of their routine medical care, but they are not assigned to specific interventions as in a clinical trial. [20] An example of an observational study regarding chlamydia infection was "Non-Invasive Sexually Transmitted Disease (STD) Testing in Women Seeking Emergency Contraception or Urine Pregnancy Testing: Meeting the Needs of an At-Risk Population" in 2010. [21] Unlike randomised controlled trials, observational studies do not randomly assign participants to interventions. [22]
Case studies that research the prevalence and prevention of chlamydia can include personal contact, a detailed history of the participants, extensive physical examinations, and related contextual conditions. Chlamydia case studies also can be produced by following a formal research method. These case studies are likely to appear in formal research venues, such as journals, professional conferences, and administrative science. [ 23 ] [ 24 ]
In doing case study research, the case being studied may be an individual, organization, event, or action, existing in a specific time and place. For instance, clinical science has produced both well-known case studies of individuals and also case studies of clinical practices. [ 25 ] [ 26 ] [ 27 ]
Evidence-based chlamydia studies optimize decision-making by using information from well-designed research. This approach to the study of chlamydia holds that only research from meta-analyses, systematic reviews, and randomized controlled trials can yield widely applicable recommendations. [28] [29] [30] Some examples of evidence-based research on chlamydia include:
|
https://en.wikipedia.org/wiki/Chlamydia_research
|
Chloroquine and hydroxychloroquine are anti-malarial medications also used against some auto-immune diseases. [ 1 ] Chloroquine, along with hydroxychloroquine, was an early experimental treatment for COVID-19 . [ 2 ] Neither drug has been useful to prevent or treat SARS-CoV-2 infection. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] Administration of chloroquine or hydroxychloroquine to COVID-19 patients, either as monotherapies or in conjunction with azithromycin , has been associated with deleterious outcomes, such as QT prolongation . [ 9 ] [ 10 ] Scientific evidence does not substantiate the efficacy of hydroxychloroquine, with or without the addition of azithromycin, in the therapeutic management of COVID-19. [ 9 ] [ 11 ]
Cleavage of the SARS-CoV-2 S2 spike protein required for viral entry into cells can be accomplished by proteases TMPRSS2 located on the cell membrane, or by cathepsins (primarily cathepsin L ) in endolysosomes . [ 12 ] Hydroxychloroquine inhibits the action of cathepsin L in endolysosomes, but because cathepsin L cleavage is minor compared to TMPRSS2 cleavage, hydroxychloroquine does little to inhibit SARS-CoV-2 infection. [ 12 ]
Several countries initially used chloroquine or hydroxychloroquine for treatment of persons hospitalized with COVID-19 (as of March 2020), though the drugs were not formally approved through clinical trials. [13] [14] From April to June 2020, an emergency use authorization covered their use in the United States, [15] and they were used off label for potential treatment of the disease. [16] On 24 April 2020, citing the risk of "serious heart rhythm problems", the FDA posted a caution against using the drug for COVID-19 "outside of the hospital setting or a clinical trial". [17]
Their use was withdrawn as a possible treatment for COVID-19 when the drugs proved to have no benefit for hospitalized patients with severe COVID-19 illness in the international Solidarity trial and the UK RECOVERY Trial. [18] [19] On 15 June 2020, the FDA revoked its emergency use authorization, stating that it was "no longer reasonable to believe" that the drug was effective against COVID-19 or that its benefits outweighed "known and potential risks". [20] [21] [22] In fall of 2020, the National Institutes of Health issued treatment guidelines recommending against the use of hydroxychloroquine for COVID-19 except as part of a clinical trial. [1]
In 2021, hydroxychloroquine was part of the recommended treatment for mild cases in India. [ 23 ]
In 2020, the speculative use of hydroxychloroquine for COVID-19 threatened its availability for people with established indications (malaria and auto-immune diseases). [ 5 ]
Chloroquine is an anti-malarial medication that is also used against some auto-immune diseases. Hydroxychloroquine is more commonly available than chloroquine in the United States. [ 13 ] Hydroxychloroquine is used as a prophylactic in India. [ 24 ] [ 25 ]
Hydroxychloroquine and chloroquine have numerous, potentially serious, side effects , such as retinopathy , hypoglycemia , or life-threatening arrhythmia and cardiomyopathy . [ 26 ] Both drugs have extensive interactions with prescription drugs, affecting the therapeutic dose and disease mitigation. [ 26 ] [ 27 ] Some people have allergic reactions to these drugs. [ 26 ] [ 27 ] The NIH recommended against the use of a combination of hydroxychloroquine and azithromycin because of the resulting increased risk of sudden cardiac death. [ 28 ] Widespread administration of chloroquine or hydroxychloroquine, either alone or in combination with azithromycin, among COVID-19 patients, has been associated with increased mortality due to adverse effects, including QT prolongation. [ 9 ] [ 10 ]
In October 2021, a large network of companies selling hydroxychloroquine and ivermectin was disclosed in the US, targeting primarily right-wing and vaccine-hesitant groups through social media and conspiracy videos by anti-vaccine activists such as Simone Gold. The network had 72,000 customers who collectively paid $15 million for consultations and medications. [29]
Chloroquine was initially recommended by Indian, Chinese, South Korean and Italian health authorities for the treatment of COVID-19, [30] although these agencies and the US CDC noted contraindications for people with heart disease or diabetes. [13] [31] In February 2020, laboratory studies indicated that both drugs were effective against SARS-CoV-2 in vitro, and a further study concluded that hydroxychloroquine was more potent than chloroquine and had a more tolerable safety profile. [32] [33]
On 18 March 2020, the World Health Organization (WHO) announced that chloroquine and the related hydroxychloroquine would be among the four drugs studied as part of the multinational Solidarity clinical trial . [ 34 ]
On 19 March 2020, US president Donald Trump encouraged the use of chloroquine and hydroxychloroquine during a national press conference. These endorsements led to massive increases in public demand for the drugs in the United States. [ 35 ] Beginning in March 2020, Trump began promoting hydroxychloroquine to prevent or treat COVID-19 , citing small numbers of anecdotal reports . [ 36 ] Trump stated in June that he was taking the drug as a preventive measure, [ 21 ] stimulating unprecedented worldwide demand and causing shortages of hydroxychloroquine for its prescribed purpose of preventing malaria . [ 36 ]
New York governor Andrew Cuomo announced that New York State trials of chloroquine and hydroxychloroquine would begin on 24 March. [ 37 ] On 28 March, the US Food and Drug Administration (FDA) authorized the use of hydroxychloroquine sulfate and chloroquine phosphate under an Emergency Use Authorization (EUA), which was later revoked due to the risk of cardiac adverse events. [ 2 ] [ 38 ] The drug was authorized under the EUA as an experimental treatment for emergency use in hospitalized patients. [ 2 ] [ 38 ] [ 39 ]
In late March 2020, an Arizona man died of cardiac arrest and his wife was hospitalized after the couple ingested a version of chloroquine used as a parasite treatment for aquarium fish. The couple had incorrectly believed that the parasite treatment would have the same effects as the medication form of chloroquine. The surviving wife stated that the couple self-administered the chemical after listening to speeches by President Donald Trump that touted chloroquine as an effective treatment against COVID-19. [ 40 ] [ 41 ]
Beginning in March 2020, New Jersey state senator Joe Pennacchio began publicly calling for the use of hydroxychloroquine to combat the spread of COVID-19 based on a French study which showed a decrease in "viral shedding." [ 42 ] He received support from over 60 doctors and advocacy groups across the United States, including Sheila Page and Marilyn Singleton from the Association of American Physicians and Surgeons , Niran Al-Agba from Physicians for Patient Protection and Frank Alario of the American College of Physicians . [ 42 ]
On 28 March 2020, the FDA authorized the use of hydroxychloroquine and chloroquine under an emergency use authorization (EUA). [ 2 ] The experimental treatment was first authorized only for emergency use for people hospitalized but unable to receive treatment in a clinical trial. [ 39 ]
On 1 April 2020, the European Medicines Agency (EMA) issued guidance that chloroquine and hydroxychloroquine are only to be used in clinical trials or emergency use programs. [ 43 ]
On 9 April 2020, the National Institutes of Health began the first clinical trial to assess whether hydroxychloroquine is safe and effective to treat COVID-19. [ 44 ] [ 45 ] A Veterans Affairs study released results on 21 April suggesting COVID-19-hospitalized patients treated with hydroxychloroquine were more likely to die than those who received no drug treatment at all, after correcting for clinical characteristics. [ 46 ] [ 47 ]
On 24 April 2020, the FDA cautioned against using the drug outside a hospital setting or clinical trial after reviewing case reports of adverse effects including ventricular tachycardia, ventricular fibrillation and in some cases death. [17] According to Johns Hopkins' ABX Guide for COVID-19, "Hydroxychloroquine may cause prolonged QT, and caution should be used in critically ill COVID-19 patients who may have cardiac dysfunction or if combined with other drugs that cause QT prolongation". [48] Caution was also recommended as to the combination of chloroquine and hydroxychloroquine with treatments that might inhibit the CYP3A4 enzyme (by which these drugs are metabolized). Such combinations might indirectly result in higher plasma levels of chloroquine and hydroxychloroquine, and thus enhance the risk of significant QT prolongation. CYP3A4 inhibitors include azithromycin, ritonavir, and lopinavir. [49]
On 27 April 2020, the Association of American Physicians and Surgeons wrote a letter, signed by Jane Orient and Michael Robb, to Arizona governor Doug Ducey asking him to rescind his executive order forbidding the use of hydroxychloroquine as a treatment for COVID-19. [50] The executive order had been signed on 2 April 2020. [51]
On 5 June 2020, use of hydroxychloroquine in the UK RECOVERY Trial was discontinued when an interim analysis of 1,542 treatments showed it provided no mortality benefit to people hospitalized with severe COVID-19 infection over 28 days of observation. [ 19 ]
On 15 June 2020, the FDA revoked the emergency use authorization for hydroxychloroquine and chloroquine, stating that although the evaluation of both these drugs under clinical trials continues, the FDA (after interagency consultation with the Biomedical Advanced Research and Development Authority (BARDA)) concluded that, based on new information and other information discussed "... it is no longer reasonable to believe that oral formulations of hydroxychloroquine (HCQ) and chloroquine (CQ) may be effective in treating COVID-19, nor is it reasonable to believe that the known and potential benefits of these products outweigh their known and potential risks". [ 20 ] [ 52 ] [ 53 ] [ 22 ]
On 23 July 2020, results were published from a multicenter, randomized, open-label, three-group, controlled trial of 667 participants in Brazil which found no benefit from using hydroxychloroquine, alone or with azithromycin, to treat mild-to-moderate COVID-19. [ 54 ] In July, the U.S. President Donald Trump once again promoted the use of the drug contradicting various public health officials, including National Institute of Allergy and Infectious Diseases director Dr. Anthony Fauci . [ 55 ]
In November 2020, a U.S. National Institutes of Health clinical trial evaluating the safety and effectiveness of hydroxychloroquine for the treatment of adults with COVID-19 formally concluded that the drug provided no clinical benefit for COVID-19 treatment and recommended against its use. [ 56 ] [ 57 ] [ 1 ]
A Cochrane review from February 2021 concluded that hydroxychloroquine has little or no effect on the risk of death. In addition, adverse events were tripled compared with placebo. The authors concluded that no further trials of hydroxychloroquine or chloroquine for the treatment of COVID-19 should be carried out. [58]
On 26 April 2021, in its amended clinical management protocol for COVID-19, the Indian Ministry of Health lists hydroxychloroquine for use in patients during the early course of the disease. [ 23 ]
In August 2022, a meta-analysis led by Harvard epidemiologist Miguel Hernán found that the aggregate of pre-exposure prophylaxis trials with hydroxychloroquine suggested a reduction of around 28% in COVID-19 infections. [ 59 ] Evidence of effectiveness in this setting was also provided by a large multicenter study led by the Centre for Tropical Medicine and Global Health at the University of Oxford, published only in 2024, which found a 15% decrease in symptomatic infections with prophylaxis. [ 60 ] Both studies argued that the controversies surrounding the drug early in the pandemic led to the premature closure of studies and to difficulties in trial recruitment, ultimately hurting scientific enquiry about its effectiveness.
A French study published in 2024 found that the use of hydroxychloroquine may have been associated with 17,000 deaths in Belgium, Turkey, France, Italy, Spain, and the United States. [ 61 ] [ 62 ] This study was ultimately retracted on August 22, 2024, after "the Editor-in-Chief found the conclusions of the article to be unreliable".
Due to the properties of zinc as a cofactor in the immune response for producing antibodies during viral infections, [ 63 ] as of May 2020 it was being included among multiple-agent "cocktails" for investigating potential treatment of people hospitalized with COVID-19 infection. [ 64 ] One such cocktail – hydroxychloroquine combined with a high dose of zinc (as a sulfate , 220 mg (50 mg elemental Zn) per day for five days, a zinc dose ~4 times higher than the reference daily intake level ) [ 63 ] and an approved antibiotic , either azithromycin or doxycycline – began in May as a Phase IV trial in New York State . [ 65 ] However, caution was recommended about the combination of chloroquine or hydroxychloroquine with CYP3A4 inhibitors, such as azithromycin, [ 49 ] a treatment combination found to be ineffective for preventing death in hospitalized people with COVID-19. [ 66 ] There was preliminary evidence that combining hydroxychloroquine and azithromycin for treating non-hospitalized ("outpatient") people with COVID-19 infection with multiple comorbidities was effective, [ 67 ] but this evidence was not confirmed by later studies: co-administration of chloroquine or hydroxychloroquine with azithromycin has been associated with increased mortality due to adverse effects, including QT prolongation. [ 9 ] [ 10 ]
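As a rough arithmetic check of the dose comparison above (a back-of-the-envelope calculation; the ~11 mg/day figure is the US reference intake for adult men, stated here as an assumption):

$$\frac{50~\text{mg elemental Zn/day}}{11~\text{mg/day reference intake}} \approx 4.5$$

which is consistent with the "~4 times higher" characterization in the text.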
Zinc deficiency – which decreases immune capacity to defend against pathogens – is common among elderly people, and may be a susceptibility factor in viral infections. [ 63 ] The mechanism for any potential benefit of including zinc in a cocktail treatment for recovery from severe COVID-19 or any viral infection is unknown. [ 63 ] [ 64 ]
Drugs used for treatment of infectious diseases may also be considered for use as post-exposure prophylaxis. On 22 May, The Lancet published a response to criticism of the Indian government's decision to allow chemoprophylaxis with hydroxychloroquine for some high-risk persons who may have been exposed to COVID-19. Researchers supporting prophylactic administration of hydroxychloroquine note that results from human trials have suggested that hydroxychloroquine may decrease the duration of both viral shedding and symptoms if the drug is administered early. [68]
On 3 June, results were published from a randomized, double-blind, placebo-controlled trial of 821 participants which found that hydroxychloroquine did not prevent symptomatic COVID-19 illness when used for post-exposure prophylaxis . [ 69 ] [ 70 ] [ 71 ] A randomized, multicenter, placebo-controlled trial amongst healthcare workers found that oral hydroxychloroquine did not help prevent COVID-19 infections when used as pre-exposure prophylaxis. [ 72 ]
British researchers studied whether the drug is effective when used for prevention. 10,000 National Health Service (NHS) workers, along with 30,000 additional volunteers from Asia, South America, Africa, and other parts of Europe, participated in the global study, with results initially expected by 2021. [73] [74] Known as the COPCOV trial [75] and led by researchers at the University of Oxford, it was halted in 2021 after evidence emerged that hydroxychloroquine was not effective at either preventing COVID-19 or treating it. Several large, prestigious studies found that the drug neither halted infection nor improved outcomes. [76] [77] [78]
Due to safety concerns and evidence of heart arrhythmias leading to higher death rates, the WHO suspended the hydroxychloroquine arm of the multinational Solidarity trial in May 2020. [ 79 ] [ 80 ] [ 81 ] The WHO had enrolled 3,500 patients from 17 countries in the Solidarity trial. [ 79 ] The research surrounding this suspension, provided by a company called Surgisphere based in Chicago , came into question due to errors in the underlying data set. [ 82 ] [ 83 ] [ 84 ] The authors of the study corrected errors in the data later but initially remained firm on their conclusions. [ 82 ] Subsequently, a retraction of the study by three of its authors was published by The Lancet on 4 June 2020. [ 85 ] The authors stated that their reason behind the retraction was because Surgisphere had failed to cooperate with an independent review of the data used for the study by not allowing any such review to take place. [ 86 ] [ 87 ]
The WHO decided to resume the trial on 3 June, after reviewing the safety concerns which had been raised. Speaking at a press briefing, WHO's director-general, Tedros Adhanom Ghebreyesus stated that the board had reviewed the available mortality data and had found "no reasons to modify the trial". [ 88 ] [ 89 ]
On 4 July, the WHO discontinued the hydroxychloroquine trial based on evidence presented at the July WHO Summit on COVID-19 research and innovation. The WHO stated that "These interim trial results show that hydroxychloroquine and lopinavir/ritonavir produce little or no reduction in the mortality of hospitalized COVID-19 patients when compared to standard of care." [ 90 ]
|
https://en.wikipedia.org/wiki/Chloroquine_and_hydroxychloroquine_during_the_COVID-19_pandemic
|
The Choice and Partnership Approach (CAPA) is a model of engagement and clinical assessment, principally used in child and adolescent psychiatric services. It aims to work collaboratively with service users in order to make services more effective and to increase user satisfaction.
The CAPA model was developed in the early 2000s in two English NHS trust providers. [ 1 ] Originally it was an initiative designed to improve service effectiveness and the management of service demand and capacity in UK Child and Adolescent Mental Health Services (CAMHS). [ 2 ] [ 3 ]
CAPA focuses on the experience of the service user. It is a collaborative model in which the clinicians providing the assessment act as facilitators for the user and their family. [4] Once a referral is accepted by the service, the user is contacted to arrange a convenient time for an appointment. This is the Choice Appointment. The possible outcomes of this appointment are that the client:
The appointment aims to be collaborative and strengths-based. Once a user is accepted, they enter the Partnership phase. The clinician continues to act as a facilitator with expertise rather than an expert with power. The work is composed of aspects that are Core work and parts that are Specialist work.
The model is based on transparent collaboration, finding user strengths and developing a shared formulation. The original model refers to 11 key components, all of which should be implemented for the model to work best. [5]
The work is also guided by the 7 Helpful Habits. [ 6 ]
|
https://en.wikipedia.org/wiki/Choice_And_Partnership_Approach
|
Cholangiosarcoma is a tumor of the connective tissues of the bile ducts .
Primary risk factors for cholangiosarcoma are primary sclerosing cholangitis and infection by Clonorchis sinensis (a fluke found in undercooked fish).
This article incorporates public domain material from Dictionary of Cancer Terms . U.S. National Cancer Institute .
|
https://en.wikipedia.org/wiki/Cholangiosarcoma
|
A cholecystoenterostomy is a surgical procedure in which the gall bladder is joined to the small intestine . It is performed in order to allow bile to pass from the liver to the intestine when the common bile duct is obstructed by an irremovable cause. [ 1 ]
|
https://en.wikipedia.org/wiki/Cholecystenterostomy
|
The cholera belt was a flat strip of (usually red) flannel or knitted wool, about six feet long and six inches wide, that was wrapped around the bare abdomen. The item was standard army issue, and was purported to prevent the wearer from contracting cholera , dysentery , and other ailments believed to be caused by chilling of the abdomen. The belt's use continued decades after the causative link between pathogen-contaminated drinking water and cholera was established. [ 1 ]
Attempts to prevent illness by wearing flannel body wraps date to the early 1700s. In 1707 Jeremiah Wainewright wrote "'I was perswaded'(sic) ... to wear Flannel next to my Skin some ten Years ago for a severe Cough ... I received some advantage'", and in 1726 author Richard Towne wrote, "'those who are subject to habitual Looseness may receive great Benefit by wearing Flannel and keeping their Bodies warm'". By 1799 the British army promoted a "flannel bandage to the whole abdomen," with surgeon Robert Jackson in 1817 recommending "the application of 'flannel over the abdomen, adding such pressure to it by a flannel roller'" to prevent dysentery, and James Annesley writing in 1828 that "'use of a thick flannel banyan and cummerband during the Monsoon will ... exert considerable influence in preventing bowel complaints'". [ 1 ]
According to historian E.T. Renbourn, flannel waistcoats and belts were commonly worn by British soldiers before the 1830s. As cholera epidemics spread from 1817 to the 1830s, fear grew, leading to reports in the Cholera Gazette that soldiers should wear flannel to prevent cholera; the practice possibly originated in the Polish-Russian War of 1830-31, though a "cholera belt" was not mentioned. Renbourn writes that although the phrase "cholera belt" was not specifically mentioned in print, it was "being used fairly widely by the populace in general". It was not until 1848 that Instructions to Army Medical Officers for their Guidance on the Appearance of the Spasmodic Cholera included the suggestion that every soldier be provided with two "cholera belts". In 1849 an anonymous author published the pamphlet "What has Cholera done in London?" advising "readers to wear a folded flannel belt around the belly ... recommended by the Board of Health'". [1]
J. McGrigor Croft, M.D., writing in the Marylebone Mercury of Westminster, England, in 1866, claimed to be the inventor of the cholera belt. Croft states that he did this to "aid the poor" and then describes how to make the belt out of ordinary flannel, in the hope that anyone will be able to make their own. He states that two medical doctors of his acquaintance vouch for the invention. He calls it an "abdominal respirator; it permits the heated perspiration of the body to pass off without any chance of chill ... without any inconvenience such as found in the old cholera belt". Croft goes on to say that he has declined the patent and gives it to the public freely. [2]
Andrew Duncan, a surgeon in the Bengal Army, wrote in 1888 that "'Cholera belts must be stringently insisted on ... and there should be periodic inspection - and without warning - to see that men are wearing them'". [1] In 1898, The San Francisco Call reported that the belt was a "good thing for the troops", quoting advice from a Major Edward Field "that no soldier should think of going to Manila without a cholera belt", and that although it is impossible to take everything a soldier would need in the tropics, "the cholera belt holds the highest place in the emergency list". [3]
In 1914, donations by the tiny village of Middlemarch, New Zealand of items such as tobacco, shirts and tinned fruit for soldiers going to fight in World War I included "26 cholera belts". [4] The idea of abdominal chilling as a factor in illness was brought up as late as 1947, although it was rebutted by those who pointed out that the idea was not based on experimental evidence. [5]
Renbourn sums up the history of cholera belts by noting that interest in wearing them fluctuated with whether or not an outbreak was happening nearby. The argument seems to have been that the belt "prevented suppression of 'perspiration' and the consequent flow of blocked excretions to the bowel". [1]
In 1946, L. E. Napier wrote in Principles and Practice of Tropical Medicine that "The flannel cholera belt, whose powers of cholera prevention were of course mythical ... has fortunately gone out of fashion". [ 6 ]
|
https://en.wikipedia.org/wiki/Cholera_belt
|
A cholera pit was a burial place used in a time of emergency when the disease was prevalent. Such mass graves were often unmarked and were placed in remote or specially selected locations. Public fears of contagion, lack of space within existing churchyards [1] and restrictions placed on the movements of people from location to location [2] also contributed to their establishment and use. Many of the victims were poor and lacked the funds for memorial stones; however, memorials were sometimes added at a later date. [1]
Often the bodies of cholera victims were wrapped in cotton or linen and doused in coal-tar or pitch before being placed into a coffin. Each burial was in a pit 8 feet (2.4 m) deep and liberally sprinkled with quicklime. [3] The bodies were sometimes burnt before interment. [4]
The cholera risk posed by disturbance of 19th-century cholera pits is considered non-existent, as transmission is through contaminated water or food. [5]
An early 19th-century incidence of Asiatic cholera in Europe was recorded in Russia and other continental countries in the spring of 1831. The first occurrence in England was in the autumn of 1831, when it reached Sunderland; by 1832 it was at Exeter, and it spread rapidly through the British Isles, reaching Kilmarnock in July 1832. [3] [6] Other less severe outbreaks were recorded in 1849 and 1853. [7] In the United States of America, outbreaks of cholera took place in 1834, 1849, and 1861. [8]
At Barrmill in North Ayrshire the tradition is that the disease was passed on from a group of gypsies camped on Whin Hill that local boys had gone out to meet. Troops were regularly placed to prevent entry or exit during cholera outbreaks and normal burial in Beith was impossible and impractical, given the number of deaths. The burial site was fenced off and bordered by trees, kept in order by the Crawford Bros. from the factory until they died. It has been neglected since then. [ 2 ]
In 1834 cholera broke out in Beith, although 'clothes were burned, bedding fumigated, stairs and closes whitewashed, a nurse who was a veteran of the Dalry outbreak was engaged and a ban placed on entertainments at funerals.' There were 100 cases in September 1834; 205 people were eventually affected, with 105 deaths. Some of the people were buried in the parish churchyard, but others were buried in a field, close to what became Spier's School, on the little common south-west of where the Geilsland Road meets the Powgree Burn. [9] Robert Spier, the father of John Spier, was a member of the local Health Board. [10]
The burial at Cleeves Cove is said to be that of a member of the family who lived at Cleeves Farm. Tradition has it that "A prediction was uttered many long ages ago, that Cleaves [sic], on three successive occasions, would be the first place in the parish visited by the pestilence. The cholera of 1832 was called the fulfillment of the second visitation: accordingly, many of the older inhabitants talk of one still being in reserve." [11]
When an attempt was made to create a burial pit in Little Bury Meadow near Exeter, the locals attacked the grave-digger when he arrived to break the ground. [3]
In Kilmarnock a patch of ground was purchased in Howard's Park "partly because the common-burying ground of the town was considered too small to meet the necessities of the case, and partly to prevent apprehended infection, as the graves in the new locality might remain in an undisturbed condition for a longer period." [ 1 ]
The construction of the proposed rail link to Glasgow Airport involved disturbance of the Paisley cholera pit; however, the project was cancelled. [ 5 ]
There is also a cholera pit in Upton-upon-Severn. [16]
|
https://en.wikipedia.org/wiki/Cholera_pit
|
Cholesteatoma is a destructive and expanding growth consisting of keratinizing squamous epithelium in the middle ear and/or mastoid process . [ 1 ] [ 2 ] Cholesteatomas are not cancerous as the name may suggest, but can cause significant problems because of their erosive and expansile properties. This can result in the destruction of the bones of the middle ear ( ossicles ), as well as growth through the base of the skull into the brain. They often become infected and can result in chronically draining ears. Treatment almost always consists of surgical removal. [ 2 ] [ 3 ]
Other more common conditions (e.g. otitis externa ) may also present with these symptoms, but cholesteatoma is much more serious and should not be overlooked. If a patient presents to a doctor with ear discharge and hearing loss, the doctor should consider cholesteatoma until the disease is definitely excluded. [ 4 ] Other less common symptoms (all less than 15%) of cholesteatoma may include pain, balance disruption , tinnitus , earache , headaches and bleeding from the ear. [ 2 ] There can also be facial nerve weakness. Balance symptoms in the presence of a cholesteatoma raise the possibility that the cholesteatoma is eroding the balance organs in the inner ear . [ 1 ]
Doctors' initial inspections may only reveal an ear canal full of discharge. Until the doctor has cleaned the ear and inspected the entire tympanic membrane , cholesteatoma cannot be diagnosed. [ 2 ] Once the debris is cleared, cholesteatoma can give rise to a number of appearances. If there is significant inflammation, the tympanic membrane may be partially obscured by an aural polyp . If there is less inflammation, the cholesteatoma may present the appearance of 'semolina' discharging from a defect in the tympanic membrane. The posterior and superior parts of the tympanic membrane are most commonly affected. If the cholesteatoma has been dry, the cholesteatoma may present the appearance of ' wax over the attic'. The attic is just above the eardrum .
If untreated, a cholesteatoma can erode the three small bones located in the middle ear (the malleus, incus and stapes, collectively called the ossicles). [5] This can result in nerve deterioration, imbalance, vertigo, and deafness early in the disease. [6] It can also erode, through the enzymes it produces, the thin bone structure that isolates the top of the ear from the brain, laying the covering of the brain open to infection, with serious complications (rarely even death due to brain abscess and sepsis).
Both the acquired and the congenital types of the disease can affect the facial nerve, which extends from the brain to the face, passes through the inner and middle ear, leaves at the anterior tip of the mastoid bone, and then rises to the front of the ear and extends into the upper and lower face.
Cholesteatomas occur in two basic classifications: Acquired cholesteatomas, which are more common, are usually caused by pathological alteration of the ear drum leading to accumulation of keratin within the middle ear . [ 7 ] Congenital cholesteatomas are usually middle ear epidermal cysts that are identified deep within an intact ear drum often in the superior anterior portion. [ 8 ]
Cholesteatomas do not contain cholesterol or fat and should not be confused with cholesterol granulomas . [ 8 ]
Keratin-filled cysts that grow medial to the tympanic membrane are considered to be congenital if they fulfill the following criteria (Levenson's criteria): [ 3 ]
Congenital cholesteatomas occur at three important sites: the middle ear, the petrous apex, and the cerebellopontine angle. They are most often found deep to the anterior aspect of the ear drum, and a vestigial structure, the epidermoid formation, from which congenital cholesteatoma may originate, has been identified in this area. [4]
Not all middle ear epidermal cysts are congenital, as they can be acquired either by metaplasia of the middle ear mucosa or by traumatic implantation of ear canal or tympanic membrane skin. In addition, cholesteatoma inadvertently left by a surgeon usually regrows as an epidermal cyst. Some authors have also suggested hereditary factors. [ 9 ] [ 10 ]
More commonly, keratin accumulates in a pouch of tympanic membrane which extends into the middle ear space. This abnormal folding or 'retraction' of the tympanic membrane arises in one of the following ways:
Cholesteatoma may also arise as a result of metaplasia of the middle ear mucosa [ 15 ] or implantation following trauma.
Cholesteatoma is diagnosed by a medical doctor by physical examination of the ear. A CT scan may help to rule out other, often more serious causes for the patient's clinical presentation. Non-ionizing radiation imaging techniques ( MRI ) may be suitable to replace a CT scan, if determined necessary by a physician. [ 16 ] [ 17 ]
Cholesteatoma is a persistent disease. Once the diagnosis of cholesteatoma is made in a patient who can tolerate a general anesthetic, the standard treatment is to surgically remove the growth.
The challenge of cholesteatoma surgery is to permanently remove the cholesteatoma whilst retaining or reconstructing the normal functions of the structures housed within the temporal bone .
The general objective of cholesteatoma surgery has two parts. It is both directed against the underlying pathology and directed towards maintaining the normal functions of the temporal bone. These aims are conflicting and this makes cholesteatoma surgery extremely challenging.
Sometimes, the situation results in a clash of surgical aims. The need to fully remove a progressive disease like cholesteatoma is the surgeon's first priority. Preservation of hearing is secondary to this primary aim. If the disease can be removed easily so that there is no increased risk of residual disease, then the ossicles may be preserved. If the disease is difficult to remove, so that there is an increased risk of residual disease, then removal of involved ossicles in order to fully clear cholesteatoma has generally been regarded as necessary and reasonable.
In other words, the aims of cholesteatoma treatment form a hierarchy. The paramount objective is the complete removal of cholesteatoma. The remaining objectives, such as hearing preservation, are subordinate to the need for complete removal of cholesteatoma. This hierarchy of aims has led to the development of a wide range of strategies for the treatment of cholesteatoma.
The variation in technique in cholesteatoma surgery results from each surgeon's judgment whether to retain or remove certain structures housed within the temporal bone in order to facilitate the removal of cholesteatoma. This typically involves some form of mastoidectomy which may or may not involve removing the posterior ear canal wall and the ossicles.
Removal of the canal wall facilitates the complete clearance of cholesteatoma from the temporal bone in three ways:
Thus removal of the canal wall provides one of the most effective strategies for achieving the primary aim of cholesteatoma surgery, the complete removal of cholesteatoma. However, there is a trade-off, since the functional impact of canal wall removal is also important.
The removal of the ear canal wall results in:
The formation of a mastoid cavity by removal of the canal wall is the simplest and most effective procedure for facilitating the removal of cholesteatoma, but may bestow the most lasting infirmity due to loss of ear function upon the patient treated in this way.
The following strategies are employed to mitigate the effects of canal wall removal:
Clearly, preservation and restoration of ear function at the same time as total removal of cholesteatoma requires a high level of surgical expertise.
Traditionally, ear surgery has been performed using the surgical microscope. The direct line of view dictated by that approach necessitates using the mastoid as the access port to the middle ear. It has long been recognized that failure in cholesteatoma surgery occurs in hidden spaces of the tympanic cavity, such as the sinus tympani and facial recess, that are out of view using the traditional microscopic technique. [23] More recently, the endoscope has been increasingly utilized in the surgical management of cholesteatoma in one of two ways:
There are multiple advantages for the use of the endoscope in cholesteatoma surgery:
It is important that the patient attend periodic follow-up checks, because even after careful microscopic surgical removal, cholesteatomas may recur. Such recurrence may arise many years, or even decades, after treatment.
A 'residual cholesteatoma' may develop if the initial surgery failed to completely remove the original; residual cholesteatomas typically become evident within the first few years after the initial surgery.
A 'recurrent cholesteatoma' is a new cholesteatoma that develops when the underlying causes of the initial cholesteatoma are still present. Such causes can include, for example, poor Eustachian tube function, which results in retraction of the ear drum, and failure of the normal outward migration of skin. [ 27 ]
In a retrospective study of 345 patients with middle ear cholesteatoma operated on by the same surgeon, the overall 5-year recurrence rate was 11.8%. [28] In a different study with a mean follow-up period of 7.3 years, the recurrence rate was 12.3%, with the recurrence rate being higher in children than in adults. [29] The use of the endoscope as an ancillary instrument has been shown to reduce the incidence of residual cholesteatoma. [30] Although more studies are needed, new techniques addressing underlying Eustachian tube dysfunction, such as transtympanic dilatation of the Eustachian tube, have so far not been shown to change the outcomes of chronic ear surgery. [31]
Recent findings indicate that the keratinizing squamous epithelium of the middle ear could be subjected to human papillomavirus infection. [ 32 ] Indeed, DNA belonging to oncogenic HPV16 has been detected in cholesteatoma tissues, thereby underlining that keratinizing squamous epithelia could potentially be a target tissue for HPV infection. [ 32 ]
In one study, the number of new cases of cholesteatoma in Iowa was estimated in 1975–76 to be just under one new case per 10,000 citizens per year. [ 33 ] Cholesteatoma affects all age groups, from infants through to the elderly. The peak incidence occurs in the second decade. [ 33 ]
|
https://en.wikipedia.org/wiki/Cholesteatoma
|
In surgical pathology , strawberry gallbladder , more formally cholesterolosis of the gallbladder and gallbladder cholesterolosis , is a change in the gallbladder wall due to excess cholesterol . [ 1 ]
The name strawberry gallbladder comes from the typically stippled appearance of the mucosal surface on gross examination , which resembles a strawberry . The term was coined by surgical pathologist William C. MacCarty of the Mayo Clinic in 1910. [ 2 ] Cholesterolosis results from abnormal deposits of cholesterol esters in macrophages within the lamina propria ( foam cells ) and in mucosal epithelium. The gallbladder may be affected in a patchy localized form or in a diffuse form. The diffuse form macroscopically appears as a bright red mucosa with yellow mottling (due to lipid), hence the term strawberry gallbladder.
It is not tied to cholelithiasis ( gallstones ) or cholecystitis ( inflammation of the gallbladder). [ 3 ]
|
https://en.wikipedia.org/wiki/Cholesterolosis_of_gallbladder
|
Chondrodysplasia Blomstrand is a rare genetic disorder characterized by a mutation of the parathyroid hormone receptor, leading to the absence of a functional PTHR1. This condition causes abnormal ossification of endochondral and intramembranous tissues, [1] along with accelerated skeletal maturation. [2]
|
https://en.wikipedia.org/wiki/Chondrodysplasia_Blomstrand
|
A chondroid syringoma is a well circumscribed but unencapsulated, multilobulated sweat gland -derived tumor. [ 1 ] It is centered in the deep dermis or subcutaneous fat. [ 1 ] Microscopically it is a mixed tumor , characterized by prominent chondroid or myxoid stroma enveloping benign bland appearing epithelial and myoepithelial cells. [ 1 ] Its malignant counterpart is malignant chondroid syringoma .
|
https://en.wikipedia.org/wiki/Chondroid_syringoma
|
Chondroitinase treatment is a treatment of proteoglycans , a protein in the fluid among cells where (among other things) they affect neural activity (communication, plasticity ). [ 1 ] Chondroitinase treatment has been shown to allow adults vision to be restored as far as ocular dominance is concerned. [ 2 ] Moreover, there is some evidence that Chondroitinase could be used for the treatment of spinal injuries . [ 3 ]
In addition, the enzyme that is used in the chondroitinase treatment , chondroitinase ABC , derives from the bacterium Proteus vulgaris . [ 4 ] In recent years, pre-clinical research involving the chondroitinase ABC enzyme has been mainly directed towards utilizing it as a way of treating spinal cord injuries in test animals using viral vectors . [ 5 ] In general, the way chondroitinase ABC works in vivo is it cleaves off the side chains of molecules known as chondroitin sulfate proteoglycans (CSPGs) which are over produced by glial cells in the central nervous system when a spinal injury occurs. [ 4 ] [ 5 ] When chondroitin sulfate proteoglycans are bonded to their side chains called chondroitin sulfate glycosaminoglycans, these molecules are known to prevent neural restoration to the damaged region of the central nervous system because they form glial scar tissue which inhibits both neuroplasticity and repair of damaged axons . [ 5 ] [ 6 ] However, when the side chains of the chondroitin sulfate proteoglycans are cleaved by chondroitinase ABC, this promotes the damaged region of the CNS to recover from the spinal cord injury. [ 4 ]
It has recently been proposed that chondroitinase treatment promotes plasticity by activation of Tropomyosin receptor kinase B (TRKB), the receptor for brain-derived neurotrophic factor and a major plasticity orchestrator in the brain. [ 7 ] Cleavage of CSPGs by chondroitinase ABC leads to inactivation of PTPRS , the membrane receptor for CSPGs and a phosphatase that inactivates TRKB under normal physiological conditions; this subsequently promotes TRKB phosphorylation and activation of neuroplasticity. [ citation needed ]
|
https://en.wikipedia.org/wiki/Chondroitinase_treatment
|
Chondroplasty is surgery of the cartilage , the most common being corrective surgery of the cartilage of the knee .
Surgery known as thyroid chondroplasty (or tracheal shave ) is used to reduce the visibility of the Adam's apple in transgender women .
This surgery article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Chondroplasty
|
Choosing Wisely is a United States–based health educational campaign, led by the ABIM Foundation ( American Board of Internal Medicine ), about unnecessary health care . [ 1 ]
The campaign identifies over 500 tests and procedures and encourages doctors and patients to discuss, research, and possibly get second opinions, before proceeding with them. [ 2 ] To conduct the campaign, the ABIM Foundation asks medical specialty societies to make five to ten recommendations for preventing overuse of a treatment in their field. The foundation then publicizes this information, and the medical specialty societies disseminate it to their members.
The campaign has garnered both praise and criticism, and some of its ideas have spread to other countries. It does not include evaluation of its effects on costs, on discussions or on medical outcomes. [ 3 ] Some doctors have said they lack time for the recommended discussions. [ 2 ]
In 2002 the ABIM Foundation published Medical professionalism in the new millennium: a Physician Charter . [ 1 ] [ 4 ] The charter states that physicians have a responsibility to promote health equity when some health resources are scarce . [ 1 ] As a practical way of achieving distributive justice , in 2010 physician Howard Brody recommended that medical specialty societies, being stewards of a field, ought to publish a list of five things which they would like changed in their field and publicize it to their members. [ 1 ] [ 5 ] [ 6 ] In 2011, the National Physicians Alliance tested a project in which it organized the creation of some "top 5 lists". [ 1 ] [ 7 ] [ 8 ] Analysis of the National Physicians Alliance project predicted that the health field could have saved US$6.8 billion in 2009 by cutting spending on the 15 services in the lists from three societies, [ 9 ] out of total US health spending that year of US$2.5 trillion. [ 10 ] US$5.8 billion of the savings were from one recommendation: using generic rather than brand-name statins. [ 9 ]
Continuing this project, Choosing Wisely was created to organize the creation of more "lists of five," later ten, [ 11 ] and their distribution to more physicians and patients. [ 1 ] [ 12 ] Executive boards of societies, with or without participation by members, identify practices which their field may overuse. [ 13 ] [ 1 ] Each recommendation in the program must have the support of clinical guidelines, evidence, or expert opinion. [ 1 ]
To participate in Choosing Wisely , each society developed a list of tests, treatments, or services which that specialty commonly overuses. [ 1 ] The society shares this information with its members, as well as with organizations that can publicize it to local community groups, and in each community patients and doctors can consider the information as they like. [ 1 ] The ABIM Foundation gave grants to help societies participate. [ 14 ]
As of April 2018, there were 552 recommendations targeting a range of procedures to either question or avoid without special consideration. [ 15 ] They can be searched online by keywords, such as "back pain", but the numerous supporting footnotes on each recommendation are available only in a PDF on the clinician page, without links to the papers. [ 11 ]
Between 2012 and 2023, more than 80 specialty societies highlighted examples. While these examples are no longer maintained or available on the website (www.choosingwisely.org), specialty societies are encouraged to publish individual lists. Many of these lists are accessible through https://www.aafp.org/pubs/afp/collections/choosing-wisely.html .
Some examples of the information shared in Choosing Wisely include the following:
The Choosing Wisely campaign identifies the following difficulties in achieving its goals:
The American College of Emergency Physicians (ACEP) initially formed three independent task forces to evaluate whether to participate; by 2012 all three task forces had recommended against participation. They argued that the recommendations do not recognize that emergency physicians need extra tests because they do not know their patients and must rule out every life-threatening possibility, that insurers would refuse to cover items on the lists, that participation would let other medical societies tell emergency physicians what to do, that the campaign does not address the tort reform needed to curb defensive testing, and that the campaign publicizes the items as "unnecessary tests" even while describing them as tests to discuss carefully. [ 24 ]
In 2012 The New York Times said that the campaign was likely to "alter treatment standards in hospitals and doctors' offices nationwide" and one of their opinion writers said that many tests were unnecessary. [ 25 ] CBS News said that "the evidence is on the initiative's side." [ 26 ] USA Today noted that the campaign was "a rare coordinated effort among multiple medical societies". [ 27 ]
While expressing the need for evidence-based healthcare recommendations, in 2012 The Economist found the Choosing Wisely recommendations to be weak because they are not enforceable. [ 28 ] In an editorial published in the Southwest Journal of Pulmonary and Critical Care , Richard Robbin and Allen Thomas expressed concern that the campaign could be used by payers to limit options for doctors and patients. However, they declared the Choosing Wisely recommendations a "welcome start." [ 29 ]
Also in 2012, Robert Goldberg, writing for The American Spectator , criticized the program saying that it was "designed to sustain the rationale and ideology that shaped Obamacare" (the Patient Protection and Affordable Care Act ), that the lists were "redundant and highly subjective", and that participants in the effort would greedily benefit at the expense of others if the campaign succeeded. [ 30 ]
In February 2013 the Robert Wood Johnson Foundation provided US$2.5 million in funding for the campaign, saying that the foundation wanted to "help increase the tangible impact of the Choosing Wisely campaign". [ 31 ]
A 2013 editorial in the journal of the Netherlands Society of Cardiology reviewed the recommendations and recommended that something similar be proposed by the society; the piece did criticize the overly didactic nature of the recommendations, comparing them to the Ten Commandments , and expressed concern about whether they adequately addressed the difficulties of assessing risks for each patient. [ 32 ] In 2013 critics in the Southwest Journal of Pulmonary & Critical Care said, "the present Choosing Wisely campaign has fundamental flaws—not because it is medically wrong but because it attempts to replace choice and good judgment with a rigid set of rules that undoubtedly will have many exceptions. Based on what we have seen so far, we suspect that Choosing Wisely is much more about saving money than improving patient care. We also predict it will be used by the unknowing or unscrupulous to further interfere with the doctor-patient relationship." [ 33 ]
In 2015 the campaign was criticized by Bob Lanier, executive director of a medical specialty society and past president of the Texas Medical Association Foundation. He said that the recommendations were compiled by societies' executive committees without good evidence and without following standards of practice or research, would lead to refusals by insurers to cover items on the lists, were biased against diagnostic testing, were an effort by supporters of single-payer healthcare to reduce costs so that single-payer healthcare becomes affordable, would encourage biased studies by authors funded by insurers and health delivery systems seeking to cut their costs, and were influenced by grants available from the ABIM Foundation. [ 13 ]
In 2015 a piece in Newsweek by Kurt Eichenwald described a controversy around the ABIM Foundation's lack of transparency about its finances and functioning. [ 34 ]
In 2016 the campaign was described as an attempt to encourage doctors and patients to recognize the illusion of control or "therapeutic illusion" in choices to use treatments which have a basis outside of evidence-based medicine . [ 35 ]
In 2017 addiction specialists in Canada said the recommendation to wait for sobriety before treating depression was harmful and unjustified. [ 36 ]
A 2017 study reported that many patients and physicians found it challenging to use Choosing Wisely recommendations, particularly when the patient had symptoms and the doctor recommended against a test. Barriers "included malpractice concern, patient requests for services, lack of time for shared decision making, and the number of tests recommended by specialists." [ 2 ] Cedars–Sinai Medical Center in Los Angeles put 100 of the 552 Choosing Wisely items in its electronic medical records. These give warnings to doctors, but only after they have finished talking to patients and ordered a procedure or drug, too late to have the recommended discussion. [ 14 ]
The Choosing Wisely campaign makes no provision to scientifically research its own efficacy, but academic centers are making plans to independently report on the impact of the campaign. [ 37 ] The services targeted by the Choosing Wisely lists have broad variance in how much impact they can have on patients' care and costs. [ 38 ] Doctors analyzed many services listed as low value by Choosing Wisely and other sources, and found that 25% or 42% of Medicare patients received at least one of these services in an average year, depending on definitions. The services represented 0.6% or 2.7% of Medicare costs [ 39 ] and there was no significant pattern among types of physicians. [ 40 ]
The campaign has been cited as being part of a broader movement including many comparable campaigns. [ 41 ] The German Network for Evidence Based Medicine considered adapting concepts from the program into the German healthcare system. [ 42 ] In April 2014, Choosing Wisely Canada launched. [ 43 ] Choosing Wisely Canada is organized by the Canadian Medical Association and the University of Toronto, and is chaired by Dr. Wendy Levinson . By 2015 and following the Choosing Wisely precedent established in the United States, doctors in Australia, Canada, Denmark, England, France, Germany, Italy, Japan, the Netherlands, New Zealand, Switzerland, and Wales were exploring whether and how to bring ideas from Choosing Wisely to their countries. [ 44 ] English doctors "are worried how patients will perceive the initiative." [ 14 ] In 2018, Norway launched Gjør kloke valg ( lit. ' Make smart choices ' ) modeled on the Choosing Wisely program. [ 45 ] [ 46 ]
|
https://en.wikipedia.org/wiki/Choosing_Wisely
|
Spooning or choreic hand is flexion and dorsal arching of the wrists and hyperextension of the fingers when the hands are extended sideways, palms down. [ 1 ] [ 2 ]
Spooning is a recognized clinical sign in pediatric neurology during standard evaluation of the posture with extended arms. Spooning is often observed in children up to the age of 5. [ 3 ]
At older ages it is a clinical sign seen in children with chorea .
|
https://en.wikipedia.org/wiki/Choreic_hand
|
Choreoathetosis is the occurrence of involuntary movements in a combination of chorea (irregular migrating contractions) and athetosis (twisting and writhing).
It is caused by many different diseases and agents. It is a symptom of several diseases, including GLUT1 deficiency syndrome , Lesch–Nyhan syndrome , phenylketonuria , and Huntington disease , and can be a feature of kernicterus (rapidly increasing unconjugated bilirubin that crosses the blood–brain barrier in infants).
Choreoathetosis is also a common presentation of dyskinesia as a side effect of levodopa-carbidopa in the treatment of Parkinson disease. [ 1 ]
The use of crack cocaine or amphetamines can result in conditions nicknamed crack dancing or tweaking , respectively, described as choreoathetoid. [ 2 ]
This medical sign article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Choreoathetosis
|
Chorionic bump is a rare medical condition defined as an irregular, convex bulge or protrusion from the choriodecidual surface into the gestational sac . [ 1 ] [ 2 ] It is medically defined as a separate entity from a chorionic hematoma . [ 3 ]
Identification of a chorionic bump in early first trimester pregnancy represents a significant risk factor for pregnancy loss, given a live birth rate of less than 50%. [ 4 ] The incidence rate for chorionic bump is estimated to be between 1.5 and 7 per 1000 pregnancies. [ 3 ]
It is believed that a chorionic bump can start as a hematoma in the intervillous space . [ 5 ] Additionally, infertility treatments may be associated with an increased likelihood of chorionic bump. [ 4 ]
Existing literature suggests that chorionic bump causes first trimester pregnancy loss and doubles the miscarriage rate as compared to having no risk factors. [ 4 ]
This medical article is a stub . You can help Wikipedia by expanding it .
This human reproduction article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Chorionic_bump
|
Choristomas , a form of heterotopia , are masses of normal tissues found in abnormal locations. [ 1 ] [ 2 ] [ 3 ] [ 4 ] In contrast to a neoplasm or tumor , the growth of a choristoma is normally regulated. [ 5 ]
It is different from a hamartoma . The two can be differentiated as follows: a hamartoma is disorganized overgrowth of tissues in their normal location (e.g., Peutz–Jeghers polyps ), while a choristoma is normal tissue growth in an abnormal location (e.g., osseous choristoma, [ 6 ] gastric tissue located in distal ileum in Meckel diverticulum ).
This article related to pathology is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Choristoma
|
A choroidal fissure cyst is a cyst at the level of the choroidal fissure of the brain . These cysts are usually asymptomatic and do not require treatment.
This article about a medical condition affecting the nervous system is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Choroidal_fissure_cyst
|
Christian Friedrich Wilhelm Roller (11 January 1802 – 3 January 1878) was a German psychiatrist born in Pforzheim .
Roller studied medicine at the Universities of Tübingen and Göttingen , and following graduation returned to Pforzheim to practice medicine. In 1827 he became an assistant at a mental institution in Heidelberg , and from 1835 to 1842 was director of the asylum.
At the Heidelberg asylum he was distressed by the conditions he experienced, and in collaboration with physician Friedrich Groos (1768-1852), he developed plans for construction of a larger, more modern facility. Later his plans became reality when in 1842 he founded the Illenau Healing and Care Institution ( Heil- und Pflegeanstalt Illenau ) at Achern . Roller was director of the Illenau institution until his death in 1878.
As a psychiatrist Roller was vehemently opposed to "city asylums", a standpoint which placed him at odds with a number of his contemporaries. He believed that an isolated non-urban setting such as Illenau was beneficial for a patient's return to mental health. In addition, he stressed the importance of separating the patient from his or her familiar surroundings. Two of the better known psychiatrists who served under him at Illenau were Bernhard von Gudden (1824-1886) and Richard von Krafft-Ebing (1840-1902). [ 1 ] [ 2 ]
|
https://en.wikipedia.org/wiki/Christian_Friedrich_Wilhelm_Roller
|
The Christopher & Dana Reeve Foundation is a charitable organization headquartered in Short Hills, New Jersey , dedicated to finding treatments and cures for paralysis caused by spinal cord injury and other neurological disorders.
The organization's mission statement states: "We are dedicated to curing spinal cord injury by advancing innovative research and improving quality of life for individuals and families impacted by paralysis." [ 1 ] As of 2024 , it has distributed over $140 million to spinal cord researchers, [ 2 ] and $46 million to nonprofits that aim to support better quality-of-life for people with disabilities. [ 3 ]
The foundation was started in 1982 by Hank Stifel, whose son Henry had been injured in a motor vehicle accident. Its original name was the Stifel Paralysis Research Foundation. In the mid-1980s, Stifel approached the American Paralysis Association (APA) about a merger under the APA banner. [ 4 ] In 1995, actor Christopher Reeve became quadriplegic as a result of a horse riding accident. Reeve reached out to the APA and raised funds for it. He joined the board of directors and was elected chairman. In 1996, Reeve established the Christopher Reeve Foundation. In 1999, CRF and APA merged into the Christopher Reeve Paralysis Foundation. Later, the word "paralysis" was dropped from its name and the organization was called the Christopher Reeve Foundation. [ 5 ] [ 6 ]
After Reeve's death in October 2004, his widow, Dana Reeve , assumed the chairmanship of the Foundation. Dana Reeve herself died 17 months later, in March 2006, of lung cancer. [ 7 ]
On March 11, 2007, the Foundation announced that it had changed its name to the Christopher & Dana Reeve Foundation on the first anniversary of Dana Reeve's death. [ 8 ] As of 2024, all three of Christopher Reeve's children serve on the foundation's board of directors. [ 9 ] They are television reporter and anchor Will Reeve, film producer and director Matthew Reeve , and lawyer Alexandra Reeve Givens.
|
https://en.wikipedia.org/wiki/Christopher_and_Dana_Reeve_Foundation
|
In cellular neuroscience , chromatolysis is the dissolution of the Nissl bodies in the cell body of a neuron . It is an induced response of the cell usually triggered by axotomy , ischemia , toxicity to the cell, cell exhaustion, virus infections , and hibernation in lower vertebrates. Neuronal recovery through regeneration can occur after chromatolysis, but most often it is a precursor of apoptosis . The event of chromatolysis is also characterized by a prominent migration of the nucleus towards the periphery of the cell and an increase in the size of the nucleolus , nucleus, and cell body. [ 1 ] The term "chromatolysis" was initially used in the 1880s to describe the observed form of cell death characterized by the gradual disintegration of nuclear components, a process which is now called apoptosis. [ 2 ] Chromatolysis is still used as a term to distinguish the particular apoptotic process in neuronal cells, where Nissl substance disintegrates.
In 1885, researcher Walther Flemming described dying cells in degenerating mammalian ovarian follicles . The cells showed variable stages of pyknotic chromatin. These stages included chromatin condensation , which Flemming described as "half-moon" shaped and appearing as "chromatin balls," or structures resembling large, smooth, and round electron-dense chromatin masses. Other stages included cell fractionation into smaller bodies. Flemming named this degenerative process "chromatolysis" to describe the gradual disintegration of nuclear components. The process he described now fits with the relatively new term, apoptosis, to describe cell death . [ 2 ]
Around the same time as Flemming's research, chromatolysis was also studied in the lactating mammary glands and in breast cancer cells. From observations of the regression of ovarian follicles in mammals, it was argued that a cellular process must exist to counterbalance the proliferation of cells by mitosis. At this time, chromatolysis was proposed to play a major role in this physiological process. Chromatolysis was also thought to be responsible for necessary cell elimination in various organs during development. Again, these expanded definitions of chromatolysis are consistent with what we now term apoptosis.
In 1952, research further supported the role of chromatolysis in changing the physiology of cells during cell death processes in embryo development. It was also observed that the integrity of mitochondria is maintained during chromatolysis.
By the 1970s, the conserved structural features of chromatolysis were identified. The consistent features of chromatolysis included the condensation of the cytoplasm and chromatin, cell shrinkage, formation of "chromatin balls," intact normal organelles , and fragmentation of cells observed by the budding of fragments enclosed in the cell membrane. These budding fragments were termed "apoptotic bodies," thus coining the name "apoptosis" to describe this form of cell death. The authors of these studies, most likely unfamiliar with older publications on chromatolysis, were essentially describing apoptosis as a process identical to chromatolysis. [ 2 ]
Central chromatolysis is the most common form of chromatolysis and is characterized by the loss or dispersion of the Nissl bodies starting near the nucleus at the center of the neuron, and then extending peripherally towards the plasma membrane. Also characteristic of central chromatolysis is the displacement of the nucleus towards the periphery of the perikaryon . [ 3 ] [ 4 ] [ 5 ] Other cellular changes are observed during the process of central chromatolysis. The process of Nissl dissolution is less apparent toward the periphery of the cell body of the neuron, where normal-looking Nissl bodies may be present. [ 1 ] Hyperplasia of neurofilaments is frequently observed, although the extent varies. The number of autophagic vacuoles and lysosomal structures often increases during central chromatolysis. Changes can also occur in other organelles such as the Golgi apparatus and neurotubules . However, the exact significance of these changes is currently unknown. In neurons receiving axonal transection, central chromatolysis is observed in the area between the nucleus and the axon hillock following the injury. [ 6 ]
Peripheral chromatolysis is much less common, but has been reported to occur after axotomy and ischemia in certain species. Peripheral chromatolysis is essentially the reverse of central chromatolysis, in which the disintegration of Nissl bodies is initiated at the periphery of the neuron and extends inwards towards the nucleus of the cell. Peripheral chromatolysis has been observed to occur in lithium-induced chromatolysis, and it could be useful in investigating and countering the hypothesis that waves of enzymatic activity always progress from the perinuclear area, or the area situated around the nucleus, to the periphery of the cell. [ 7 ]
When an axon is injured, the whole neuron reacts to provide the increased metabolic activity that is necessary for regeneration of the axon. Part of this reaction includes structural alterations caused by the chromatolysis event. [ 9 ] The enlargement of nuclear components due to axotomy can be explained by the alteration of the cell's cytoskeleton . The cytoskeleton maintains the nuclear components of a cell and the size of the cell body in neurons. The increase in protein within the neuron leads to this change in the cytoskeleton. For example, there is an increase in phosphorylated neurofilament proteins and the cytoskeletal components tubulin and actin in neurons undergoing chromatolysis. [ 4 ] This increase in protein accounts for the growth in cytoskeleton size. Changes in the cell body cytoskeleton seem to be responsible for the enhanced nuclear eccentricity that follows axonal injury. [ 1 ] [ 3 ]
One hypothesis behind the incidence of chromatolysis following axotomy is that the shortening of the axon prevents the incorporation of the axonal cytoskeleton that undergoes formation in the injured neuron. Nuclear eccentricity can be attributed to the presence of excess axonal cytoskeleton between the nucleus and axon hillock, which causes chromatolysis. A second hypothesis proposes that blockage of axonal cytoskeletal proteins causes chromatolysis. [ 8 ]
Axotomy also induces the loss of basophilic staining in the event of central chromatolysis of the neuronal cell. The loss of staining begins near the nucleus and spreads toward the axon hillock. The basophilic rim is formed as chromatolysis compresses the cytoplasmic skeleton. [ 8 ]
Acrylamide intoxication has been shown to be an agent for the induction of chromatolysis. In one study, groups of rats were injected with acrylamide for 3, 6, and 12 days and the A- and B-cell perikarya of their L5 dorsal root ganglion were examined. There was no morphological change in the B-cell perikarya; the A-cell perikarya, however, exhibited chromatolysis in 11% and 23% of the population for the 6- and 12-day groups, respectively. For the purposes of the study, A-cells were defined as ganglion neurons whose nucleolus was large and centrally placed in the nucleus, while B-cells had many nucleoli distributed along the periphery of their nucleus. Acrylamide intoxication resembles neural axotomy histologically and mechanistically. In each case the neuron undergoes chromatolysis and atrophy of the cell body and axon. Both also seem to be mechanistically related to a disruption of the delivery of neurofilament to the axon due to a decreased transport of a trophic factor from the axon to the cell body. [ 10 ]
Exposure to lithium has also been used as a method to induce chromatolysis in rats. The study involved the injection of large doses of lithium chloride into female Lewis rats over periods of several days. Examination of the trigeminal and dorsal root ganglia revealed peripheral chromatolysis in these cells. The cells exhibited decreased numbers of Nissl bodies throughout the cell, especially in the peripheral cytoplasm, where the Nissl bodies were completely absent. Using lithium as a method to induce peripheral chromatolysis could be useful for future study of chromatolysis due to its simplicity and the fact that it does not cause nuclear displacement. [ 7 ]
Central chromatolysis has been observed in spinal anterior horn and motor neurons of patients with amyotrophic lateral sclerosis (ALS). [ 11 ] Patients with ALS appear to have significant alterations that occur within the chromatolyzed neuronal cells. [ 12 ] [ 13 ] These alterations include dense conglomerates of aggregated dark mitochondria and presynaptic vesicles , bundles of neurofilaments , and a marked increase of presynaptic vesicles. Changes to the function of the motor neurons have also been observed. The most typical functional change in chromatolytic motor neurons is the significant reduction in size of the monosynaptic excitatory postsynaptic potentials (EPSPs). These monosynaptic EPSPs also seem to be prolonged in the chromatolyzed cells of ALS patients. This functional change to the anterior horn neurons could result in the elimination of certain excitatory synaptic inputs and thus give rise to the clinical motor function impairment that is characteristic of the ALS disease. [ 13 ]
Alzheimer's disease is a major neurodegenerative disease that involves the dying off of neurons and synapses. Chromatolysis has been observed in neurons from Alzheimer's patients, often as a precursor to apoptosis. Chromatolytic cells have also been observed in a pathologically similar disease known as Pick's disease . [ 14 ] More recent studies have observed chromatolysis in cells from rats subjected to either copper or aluminum intoxication, both of which are hypothesized to be involved in the pathogenesis of Alzheimer's disease. [ 15 ] [ 16 ]
Severe neuronal chromatolysis has been detected in the brainstems of adult cattle with the neurodegenerative condition known as idiopathic brainstem neuronal chromatolysis (IBNC). The symptoms of IBNC in cattle are clinically similar to those of bovine spongiform encephalopathy , otherwise known as mad-cow disease. These symptoms include tremor, lack of muscle movement coordination, anxiety and weight loss. [ 17 ] At the cellular level, IBNC is marked by the degeneration of neurons and axons within the brainstem and cranial nerves . The disease also has a significant correlation with abnormal labeling for prion protein (PrP) in the brain. IBNC has been characterized by severe neuronal, axonal , and myelin degradation , accompanied by non-suppurative inflammation and spongiform changes in various regions of grey matter. A significant loss of neurons due to hippocampal degeneration has also been observed. The degenerating chromatolytic neurons seldom showed intracytoplasmic labeling for PrP. [ 18 ]
Chromatolysis has been reported in patients with alcoholic encephalopathies. Central chromatolysis was observed mainly among neurons in the brainstem, particularly in the pontine nuclei and the cerebellar dentate nuclei. Nuclei of cranial nerves, arcuate nuclei, and posterior horn cells were also affected. Studies examining patients with alcoholic encephalopathies give evidence of central chromatolysis. Mild to severe degeneration of spinal cord tracts has been observed in patients with Marchiafava–Bignami disease and Wernicke–Korsakoff syndrome , both forms of encephalopathy linked to alcohol. [ 19 ]
The mechanisms and signals for chromatolysis were first researched in depth in the 1960s and still merit further investigation. [ 9 ] [ 20 ] It is clear that axotomy is one of the most direct inducers of chromatolysis; if further research were put into elucidating the specific pathways that link axonal damage to chromatolysis, potential therapies could be developed for halting the chromatolytic response of neurons and ameliorating the detrimental effects of degenerative diseases such as Alzheimer's and ALS. [ 20 ]
|
https://en.wikipedia.org/wiki/Chromatolysis
|
Chronic allograft nephropathy ( CAN ) is a kidney disorder which is the leading cause of kidney transplant failure, [ 1 ] occurring months to years after the transplant.
CAN is characterized by a gradual decline in kidney function and is typically accompanied by high blood pressure and hematuria . [ 2 ]
The histopathology is characterized by interstitial fibrosis , tubular atrophy , fibrotic intimal thickening of arteries and glomerulosclerosis . [ 2 ] [ 3 ]
CAN is diagnosed by examination of tissue, e.g. a kidney biopsy . [ 4 ]
|
https://en.wikipedia.org/wiki/Chronic_allograft_nephropathy
|
Chronic care refers to medical care which addresses pre-existing or long-term illness, as opposed to acute care which is concerned with short term or severe illness of brief duration. Chronic medical conditions include asthma , diabetes , emphysema , chronic bronchitis , congestive heart disease, cirrhosis of the liver , hypertension and depression . Without effective treatment chronic conditions may lead to disability .
The incidence of chronic disease has increased as mortality rates have decreased. [ 1 ] It is estimated that by 2030 half of the population of the USA will have one or more chronic conditions. [ 2 ]
According to the CDC, 6 out of 10 adults in the U.S. are managing at least one chronic disease and 42% of adults have two or more chronic conditions. [ 3 ]
Conditions, injuries and diseases which were previously fatal can now be treated with chronic care. Chronic care aims to maintain wellness by keeping symptoms in remission while balancing treatment regimens and quality of life . [ 1 ] Many of the core functions of primary health care are central to chronic care. [ 4 ] Chronic care is complex in nature because it may extend over a prolonged period of time and requires input from a diverse set of health professionals , various medications and possibly monitoring equipment. [ 5 ]
According to 2008 figures from the Centers for Disease Control and Prevention chronic medical care accounts for more than 75% of health care spending in the US. [ 1 ] In response to the increased government expenditure in dealing with chronic care, policy makers are searching for effective interventions and strategies. These strategies can broadly be described within four categories: disease prevention and early detection; new providers, settings and qualifications; disease management programs; and integrated care models . [ 6 ]
One of the major problems from a health care system which is poorly coordinated for people with chronic conditions is the incidence of patients receiving conflicting advice from different providers. [ 2 ] Patients will often be given prescriptions for medications that adversely interact with one another. One recent study estimated that more than 20% of older patients in the USA took at least one medication which could negatively impact another condition. [ 7 ] This is referred to as therapeutic competition.
Effective chronic care requires an information platform to track patients' status and ensure appropriate treatments are given. [ 8 ]
There is a recognised gap between treatment guidelines and current practice for chronic care. [ 9 ] Individualised treatment plans are critical in treating chronic conditions because patients will place varying importance on health outcomes. For example, some patients will forgo complex, inconvenient medication regimens in favor of quality of life. [ 9 ]
One of the greatest challenges in this field of health care is dealing with the co-existence of multiple long-term conditions, also known as multimorbidity . [ 5 ] There are few incentives within current health care systems to coordinate care across multiple providers and varying services. [ 2 ] A 2001 survey by Mathematica Policy Research found that physicians feel they have inadequate training to deal with multiple chronic conditions. An increase in the number of chronic conditions correlates with an increase in the number of inappropriate hospitalizations. [ 2 ] Self-management can be challenging because recommended activities for one condition may be made difficult by another condition. [ 9 ]
Chronic care is a patient-based approach to provide chronically ill patients with the knowledge and resources to help them better understand their conditions and to help them adhere to treatment for better outcomes. Chronic care patients may require the services of a variety of care providers, including dietitians, nutritionists, occupational therapists, nurses, behavioral care, pain management, surgery, and pastoral care. Working in collaboration with the patient, the chronic care provider coordinates care among these and other specialist providers. Additionally, the patient may require palliative or hospice care, especially at end of life.
|
https://en.wikipedia.org/wiki/Chronic_care
|
A chronic condition (also known as chronic disease or chronic illness ) is a health condition or disease that is persistent or otherwise long-lasting in its effects, or a disease that develops over time. The term chronic is often applied when the course of the disease lasts for more than three months.
Common chronic diseases include diabetes , functional gastrointestinal disorder , eczema , arthritis , asthma , chronic obstructive pulmonary disease , autoimmune diseases , genetic disorders and some viral diseases such as hepatitis C and acquired immunodeficiency syndrome .
An illness which is lifelong because it ends in death is a terminal illness . It is possible and not unexpected for an illness to change in definition from terminal to chronic as medicine progresses. Diabetes and HIV for example were once terminal yet are now considered chronic, due to the availability of insulin for diabetics and daily drug treatment for individuals with HIV, which allow these individuals to live while managing symptoms. [ 1 ]
In medicine , chronic conditions are distinguished from those that are acute . An acute condition typically affects one portion of the body and responds to treatment. A chronic condition, on the other hand, usually affects multiple areas of the body, is not fully responsive to treatment, and persists for an extended period of time. [ 2 ]
Chronic conditions may have periods of remission or relapse, in which the disease temporarily goes away or subsequently reappears. Periods of remission and relapse are commonly discussed when referring to substance abuse disorders, which some consider to fall under the category of chronic condition. [ 3 ]
Chronic conditions are often associated with non-communicable diseases , which are distinguished by their non-infectious causes. Some chronic conditions, though, are caused by transmissible infections such as HIV/AIDS. [ citation needed ]
63% of all deaths worldwide are from chronic conditions. [ 4 ] Chronic diseases constitute a major cause of mortality , and the World Health Organization (WHO) attributes 38 million deaths a year to non-communicable diseases. [ 5 ] In the United States approximately 40% of adults have at least two chronic conditions. [ 6 ] [ 7 ]
Having more than one chronic condition is referred to as multimorbidity . [ 8 ]
The term chronic condition has often been used to describe various health-related states of the human body, such as syndromes, physical impairments, disabilities and diseases. Epidemiologists have taken an interest in chronic conditions because they contribute to disease, disability, and diminished physical and/or mental capacity. [ 9 ]
For example, high blood pressure or hypertension is considered to be not only a chronic condition itself but also correlated with diseases such as heart attack or stroke .
Researchers, particularly those studying the United States, utilize the Chronic Condition Indicator (CCI) which maps ICD codes as "chronic" or "non-chronic". [ 10 ]
The list below includes these chronic conditions and diseases:
In 2015 the World Health Organization produced a report on non-communicable diseases, citing the four major types as: [ 11 ]
Other examples of chronic diseases and health conditions include:
While risk factors vary with age and gender, many of the common chronic diseases in the US are caused by dietary, lifestyle and metabolic risk factors. [ 12 ] Therefore, these conditions might be prevented by behavioral changes , such as quitting smoking, adopting a healthy diet, and increasing physical activity. Social determinants are important risk factors for chronic diseases. [ 13 ] Social factors , e.g., socioeconomic status, education level, and race/ethnicity, are a major cause of the disparities observed in the care of chronic disease. [ 13 ] Lack of access and delay in receiving care result in worse outcomes for patients from minorities and underserved populations. [ 14 ] Those barriers to medical care complicate patient monitoring and continuity of treatment. [ citation needed ]
In the US, minorities and low-income populations are less likely to seek, access and receive preventive services necessary to detect conditions at an early stage. [ 15 ]
The majority of US health care and economic costs associated with medical conditions are incurred by chronic diseases and conditions and associated health risk behaviors. Eighty-four percent of all health care spending in 2006 was for the 50% of the population who have one or more common chronic medical conditions (CDC, 2014).
There are several psychosocial risk and resistance factors among children with chronic illness and their family members. Adults with chronic illness were significantly more likely to report life dissatisfaction than those without chronic illness. [ 16 ] Compared to their healthy peers, children with chronic illness have about a twofold increase in psychiatric disorders. [ 17 ] Higher parental depression and other family stressors predicted more problems among patients. [ 18 ] In addition, sibling problems along with the burden of illness on the family as a whole led to more psychological strain on the patients and their families. [ 18 ]
Africa
African countries are currently grappling with a double health burden—while infectious diseases continue to be a major cause of death, chronic illnesses are increasingly becoming more deadly, particularly in sub-Saharan Africa . This region reports some of the highest chronic disease mortality rates globally, impacting both men and women alike. [ 19 ] The surge in chronic conditions such as diabetes , hypertension , and cardiovascular disease is being driven by poor lifestyle choices like unhealthy diets, physical inactivity, smoking, and obesity. These modifiable behaviors are becoming widespread across both rural and urban areas. In addition to lifestyle factors, genetics also plays a role in the region’s chronic disease profile, particularly for conditions like high blood pressure and diabetes. [ 20 ]
Compounding the problem is the state of healthcare systems, which often lack the infrastructure, funding, and public awareness needed to respond effectively to this growing crisis.
Asia
Asia's chronic disease burden is rising sharply, driven by a mix of aging populations, genetic predispositions, and fast-paced urbanization. The transition to more sedentary lifestyles and Westernized diets, brought on by industrialization and economic growth, has contributed significantly to the growing number of non-communicable diseases (NCDs). South Asians , in particular, are at greater risk, developing these conditions earlier in life and often at lower body weights compared to global norms, resulting in higher healthcare costs and lower productivity. [ 21 ]
Tobacco use remains a critical risk factor across South Asia, with a strong link to chronic illnesses. For instance, the Maldives has reported some of the highest rates of NCD-related deaths among women. Poor diets and smoking rank among the top contributors to early death and disability, made worse by limited access to healthcare and low levels of health awareness in many communities.
Latin America and the Caribbean
In Latin America and the Caribbean , changing lifestyles and environmental conditions are key contributors to the rise in chronic diseases. Many young people, including students, are engaging in habits such as poor nutrition, high consumption of processed foods and sugary drinks, and low levels of physical activity, all of which increase their vulnerability to conditions like diabetes and heart disease. [ 22 ]
The region’s rapid urban growth and influence from global food and media trends have also shifted daily routines toward more sedentary and unhealthy patterns. Combined with existing social and economic challenges, these changes are putting additional pressure on public health systems, underscoring the urgent need for prevention strategies and stronger public policies.
Some people have suffered from chronic symptoms that developed soon after COVID-19 vaccination; this long-term condition is known as post-vaccination syndrome (PVS). In February 2025, research from Yale University School of Medicine showed that more frequent Epstein-Barr virus (EBV) reactivation and elevated levels of circulating spike protein were observed in PVS participants, including those who were not infected, compared to healthy controls. [ 23 ]
A growing body of evidence supports that prevention is effective in reducing the effect of chronic conditions; in particular, early detection results in less severe outcomes. Clinical preventive services include screening for the existence of the disease or predisposition to its development, counseling, and immunizations against infectious agents. Despite their effectiveness, the utilization of preventive services is typically lower than for regular medical services. In contrast to their apparent cost in time and money, the benefits of preventive services are not directly perceived by the patient, because their effects are long-term or may be greater for society as a whole than for the individual. [ 24 ]
Therefore, public health programs are important in educating the public, and promoting healthy lifestyles and awareness about chronic diseases. While those programs can benefit from funding at different levels (state, federal, private), their implementation is mostly the responsibility of local agencies and community-based organizations. [ 25 ]
Studies have shown that public health programs are effective in reducing mortality rates associated with cardiovascular disease, diabetes and cancer, but the results are somewhat heterogeneous depending on the type of condition and the type of programs involved. [ 26 ] For example, results from different approaches in cancer prevention and screening depended highly on the type of cancer. [ 27 ] The rising number of patients with chronic diseases has renewed the interest in prevention and its potential role in helping control costs. In 2008, the Trust for America's Health produced a report estimating that investing $10 per person annually in community-based programs of proven effectiveness and promoting healthy lifestyles (increase in physical activity, healthier diet and preventing tobacco use) could save more than $16 billion annually within a period of just five years. [ 28 ]
A 2017 review (updated in 2022) found that it is uncertain whether school-based policies targeting risk factors for chronic diseases, such as healthy eating policies, physical activity policies, and tobacco policies, can improve the health behaviours or knowledge of staff and students. [ 29 ] [ needs update ] The updated review in 2022 did find a slight improvement in measures of obesity and physical activity, as the use of improved strategies led to increased implementation of interventions, but continued to call for additional research to address questions related to alcohol use and risk. [ 29 ] Encouraging those with chronic conditions to continue with their outpatient ( ambulatory ) medical care and attend scheduled medical appointments may help improve outcomes and reduce medical costs due to missed appointments. [ 30 ] Finding patient-centered alternatives to doctors or consultants scheduling medical appointments has been suggested as a means of reducing the number of people with chronic conditions who miss medical appointments; however, there is no strong evidence that these approaches make a difference. [ 30 ]
Nursing can play an important role in assisting patients with chronic diseases achieve longevity and experience wellness. [ 31 ] Scholars point out that the current neoliberal era emphasizes self-care, in both affluent and low-income communities. [ 32 ] This self-care focus extends to the nursing of patients with chronic diseases, replacing a more holistic role for nursing with an emphasis on patients managing their own health conditions. Critics note that this is challenging if not impossible for patients with chronic disease in low-income communities where health care systems, and economic and social structures do not fully support this practice. [ 32 ]
A study in Ethiopia showcases a nursing-heavy approach to the management of chronic disease. Foregrounding the problem of distance from healthcare facilities, the study recommends that patients increase their requests for care. It uses nurses and health officers to fill, in a cost-efficient way, the large unmet need for chronic disease treatment. [ 33 ] Because the health centers in the program are staffed by nurses and health officers, specific training for involvement in the program must be carried out regularly to ensure that new staff are educated in administering chronic disease care. [ 33 ] The program shows that community-based care and education, primarily driven by nurses and health officers, works. [ 33 ] It highlights the importance of nurses following up with individuals in the community, and of allowing nurses flexibility in meeting their patients' needs and educating them for self-care in their homes. [ citation needed ]
The epidemiology of chronic disease is diverse, and the epidemiology of some chronic diseases can change in response to new treatments. In the treatment of HIV, the success of anti-retroviral therapies means that many patients will experience this infection as a chronic disease that for many will span several decades of their life. [ 34 ]
Some epidemiology of chronic disease can apply to multiple diagnoses. Obesity and body fat distribution, for example, contribute to and are risk factors for many chronic diseases such as diabetes, heart disease, and kidney disease. [ 35 ] Other epidemiological factors, such as social, socioeconomic, and environmental factors, do not have a straightforward cause-and-effect relationship with chronic disease diagnosis. While higher socioeconomic status is typically correlated with lower occurrence of chronic disease, it is not known whether there is a direct cause-and-effect relationship between these two variables. [ 36 ]
The epidemiology of communicable chronic diseases such as AIDS is also different from that of noncommunicable chronic disease. While social factors do play a role in AIDS prevalence, only exposure is truly needed to contract this chronic disease. Communicable chronic diseases are also typically treatable only with medication intervention, rather than with the lifestyle changes that can treat some non-communicable chronic diseases. [ 37 ]
As of 2003, there are a few programs which aim to gain more knowledge on the epidemiology of chronic disease using data collection. The hope of these programs is to gather epidemiological data on various chronic diseases across the United States and demonstrate how this knowledge can be valuable in addressing chronic disease. [ 38 ]
In the United States, as of 2004 nearly one in two Americans (133 million) has at least one chronic medical condition, with most subjects (58%) between the ages of 18 and 64. [ 10 ] The number is projected to increase by more than one percent per year by 2030, resulting in an estimated chronically ill population of 171 million. [ 10 ] The most common chronic conditions are high blood pressure , arthritis , respiratory diseases like emphysema , and high cholesterol . [ citation needed ]
Based on data from 2014 Medical Expenditure Panel Survey (MEPS), about 60% of adult Americans were estimated to have one chronic illness, with about 40% having more than one; this rate appears to be mostly unchanged from 2008. [ 39 ] MEPS data from 1998 showed 45% of adult Americans had at least one chronic illness, and 21% had more than one. [ 40 ]
According to research by the CDC , chronic disease is also especially a concern in the elderly population in America. Chronic diseases like stroke, heart disease, and cancer were among the leading causes of death among Americans aged 65 or older in 2002, accounting for 61% of all deaths among this subset of the population. [ 41 ] It is estimated that at least 80% of older Americans are currently living with some form of a chronic condition, with 50% of this population having two or more chronic conditions. [ 41 ] The two most common chronic conditions in the elderly are high blood pressure and arthritis, with diabetes, coronary heart disease, and cancer also being reported among the elder population. [ 42 ]
In examining the statistics of chronic disease among the living elderly, it is also important to make note of the statistics pertaining to fatalities as a result of chronic disease. Heart disease is the leading cause of death from chronic disease for adults older than 65, followed by cancer, stroke, diabetes, chronic lower respiratory diseases, influenza and pneumonia, and, finally, Alzheimer's disease. [ 41 ] Though the rates of chronic disease differ by race for those living with chronic illness, the statistics for leading causes of death among elderly are nearly identical across racial/ethnic groups. [ 41 ]
Chronic illnesses cause about 70% of deaths in the US and in 2002 chronic conditions (heart disease, cancers, stroke, chronic respiratory diseases, diabetes, Alzheimer's disease, mental illness and kidney diseases) were six of the top ten causes of mortality in the general US population. [ 43 ]
The government of Canada places a high emphasis on chronic conditions in the country [1] . At least 45.1% of Canadians will experience one chronic condition in their lifetime. On December 11, 2024, Sun Life, a prominent health insurance provider in Canada, reported an increase in chronic diseases across all age groups. They emphasize that chronic conditions affect both young individuals and the elderly. Sun Life highlights that a growing number of young people are facing chronic issues such as diabetes, asthma, high blood pressure, and elevated cholesterol levels. The report examined drug claims for chronic conditions from over three million Sun Life plan members [2] .
Diabetes is one of the fastest-growing chronic conditions in Canada, having increased by approximately 30% from 2019 to 2023. Claims for diabetes medications have surged more rapidly among Canadians under the age of 30 [3] .
Chronic diseases are prevalent among older Canadians. A report indicates that 73% of individuals aged 65 and older have at least one of ten common chronic conditions. The ten most frequent chronic diseases in Canada include hypertension , affecting 65.7% of the elderly, periodontal disease at 52.0%, osteoarthritis at 38.0%, ischemic heart disease at 27.0%, diabetes at 26.8%, osteoporosis at 25.1%, cancer at 21.5%, COPD at 20.2%, asthma at 10.7%, and mood and anxiety disorders at 10.5%. Additionally, COVID-19 has impacted chronic conditions in seniors, and its effects are currently being studied [4] .
Chronic diseases are a major factor in the continuous growth of medical care spending. [ 44 ] In 2002, the U.S. Department of Health and Human Services stated that health care for chronic diseases cost the most among all health problems in the U.S. [ 45 ] Healthy People 2010 reported that more than 75% of the $2 trillion spent annually on U.S. medical care is due to chronic conditions; spending is proportionally even higher for Medicare beneficiaries (aged 65 years and older). [ 15 ] Furthermore, in 2017 it was estimated that 90% of the $3.3 trillion spent on healthcare in the United States was due to the treatment of chronic diseases and conditions. [ 46 ] [ 39 ] Spending growth is driven in part by the greater prevalence of chronic illnesses and the longer life expectancy of the population. Also, improvements in treatment have significantly extended the lifespans of patients with chronic diseases but result in additional costs over long periods of time. A striking success is the development of combined antiviral therapies that led to remarkable improvement in survival rates and quality of life of HIV -infected patients. [ citation needed ]
In addition to direct costs in health care, chronic diseases are a significant burden to the economy, through limitations in daily activities, loss in productivity and loss of days of work. A particular concern is the rising rates of overweight and obesity in all segments of the U.S. population. [ 15 ] Obesity itself is a medical condition and not a disease, but it constitutes a major risk factor for developing chronic illnesses, such as diabetes, stroke, cardiovascular disease and cancers. Obesity results in significant health care spending and indirect costs, as illustrated by a recent study from the Texas comptroller reporting that obesity alone cost Texas businesses an extra $9.5 billion in 2009, including more than $4 billion for health care, $5 billion for lost productivity and absenteeism, and $321 million for disability. [ 47 ]
The Public Health Agency of Canada states that chronic disease has a negative impact on the labor force participation of individuals. In particular, people with chronic diseases “are likely to have recurrent sick leave, long-term absences from work, and often face an early retirement from the labour force.” [ 48 ]
In 2000, the Public Health Agency of Canada stated that the total economic burden of arthritis totaled 6.4 billion Canadian dollars per year, representing 28.9% of all musculoskeletal disease expenditures. 65% of the total economic cost was incurred by those aged 35-64 years old. It is anticipated that people aged 55 and older will most significantly contribute to the prevalence of arthritis. This is projected to result in reduced labor force participation and a substantial increase in morbidity costs. The Public Health Agency of Canada recommends focusing on prevention strategies, minimizing costs by improving health and reducing disability, and providing support to people with arthritis to remain active in the workforce. [ 49 ]
In Japan, the economic burden of chronic obstructive pulmonary disease (COPD) was estimated as of 2004 at 805.5 billion yen per year. Direct costs, which include inpatient care, outpatient care, and home oxygen therapy, account for 645.1 billion yen per year, while indirect costs account for an estimated 160.4 billion yen per year in lost productivity due to absenteeism from work. The high smoking rate and the growing elderly population are likely to exacerbate the economic impact of COPD in Japan. [ 50 ]
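The reported total can be cross-checked with simple arithmetic from the two components given above; the sketch below is purely illustrative, with variable names that are not from the cited study.

```python
# Cross-check of the Japanese COPD cost figures cited above
# (all values in billions of yen per year, as reported as of 2004).
direct_costs = 645.1    # inpatient care, outpatient care, home oxygen therapy
indirect_costs = 160.4  # lost productivity from work absenteeism

total = direct_costs + indirect_costs
print(f"Total economic burden: {total:.1f} billion yen/year")  # -> 805.5
```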
Major indirect costs of COPD are decreased labor force participation, increased healthcare costs due to assisted living expenses, increased prevalence of premature death, and caregiver support costs. In 1999, a survey found that patients with chronic bronchitis, COPD, or emphysema missed an average of 42.2 days of work per year due to their condition. [ 50 ]
Recent research has linked social factors to both the prevalence and the outcomes of chronic conditions.
The connection between loneliness, overall health, and chronic conditions has recently been highlighted. Some studies have shown that loneliness has detrimental health effects similar to those of smoking and obesity. [ 51 ] One study found that feelings of isolation are associated with more frequent self-reports of poor health, and that feelings of loneliness increase the likelihood of mental health disorders. [ 52 ]
The connection between chronic illness and loneliness is established, yet often ignored in treatment. One study, for example, found that a greater number of chronic illnesses per individual was associated with feelings of loneliness; possible reasons listed include an inability to maintain independence and the chronic illness itself acting as a source of stress. [ 53 ] A study of loneliness in adults over age 65 found that low levels of loneliness and high levels of familial support were associated with better outcomes for multiple chronic conditions such as hypertension and diabetes. [ 53 ]
There are recent movements in medicine to address these connections when treating patients with chronic illness. The biopsychosocial approach, for example, developed in 2006, focuses on the "patient's personality, family, culture, and health dynamics." [ 54 ] Physicians are leaning more toward a psychosocial approach to chronic illness to aid the increasing number of individuals diagnosed with these conditions. Despite this movement, there is still criticism that chronic conditions are not being treated appropriately and that not enough emphasis is placed on the behavioral aspects of chronic conditions [ 55 ] or on psychological support for patients. [ 56 ]
The intersection of mental health and chronic conditions is often overlooked by doctors, and the mental toll of chronic illness is frequently underestimated in society; chronic illness therapists are available to help patients cope with this burden. Adults with a chronic illness that restricts their daily life present with more depression and lower self-esteem than healthy adults and adults with non-restricting chronic illness. [ 57 ] The emotional influence of chronic illness also affects the intellectual and educational development of the individual. [ 58 ] For example, people living with type 1 diabetes endure a lifetime of monotonous and rigorous health care management, usually involving daily blood glucose monitoring, insulin injections, and constant self-care. The constant attention required by type 1 diabetes and other chronic illnesses can result in psychological maladjustment. Several theories, notably diabetes resilience theory, posit that protective processes buffer the impact of risk factors on the individual's development and functioning. [ 59 ]
People with chronic conditions pay more out-of-pocket; one study found that affected Americans spent $2,243 more on average. [ 60 ] This financial burden can increase medication non-adherence. [ 61 ] [ 62 ]
In some countries, laws protect patients with chronic conditions from excessive financial responsibility; for example, as of 2008 France limited copayments for those with chronic conditions, and Germany limits cost sharing to 1% of income versus 2% for the general public. [ 63 ]
Within the medical-industrial complex , chronic illness shapes the relationship between pharmaceutical companies and people with chronic conditions. The prices of life-saving or life-extending drugs can be inflated for profit . [ 64 ] There is little regulation of the cost of chronic illness drugs, and the absence of price caps can create a large market for drug revenue. [ 65 ] Likewise, certain chronic conditions last throughout one's lifetime, creating pathways for pharmaceutical companies to take advantage of long-term demand. [ 66 ]
Gender influences how chronic disease is viewed and treated in society. Women's chronic health issues are often considered to be most worthy of treatment or most severe when the chronic condition interferes with a woman's fertility. Historically, there is less of a focus on a woman's chronic conditions when it interferes with other aspects of her life or well-being. Many women report feeling less than or even "half of a woman" due to the pressures that society puts on the importance of fertility and health when it comes to typically feminine ideals. These kinds of social barriers interfere with women's ability to perform various other activities in life and fully work toward their aspirations. [ 67 ]
Race is also implicated in chronic illness, although many other factors may be involved. Racial minorities are 1.5–2 times more likely than white individuals to have most chronic diseases. Non-Hispanic blacks are 40% more likely to have high blood pressure than non-Hispanic whites, diagnosed diabetes is 77% higher among non-Hispanic blacks, and American Indians and Alaska Natives are 60% more likely to be obese than non-Hispanic whites. [ 68 ] Some of this prevalence has been attributed in part to environmental racism . Flint, Michigan, for example, had high levels of lead in its drinking water after waste was dumped into low-value housing areas. [ 69 ] There are also higher rates of asthma among children living in lower-income areas, where pollutants are released on a much larger scale. [ 70 ] [ 71 ]
In Europe, the European Chronic Disease Alliance was formed in 2011, which represents over 100,000 healthcare workers. [ 72 ]
In the United States, there are a number of nonprofits focused on chronic conditions, including entities focused on specific diseases such as the American Diabetes Association , Alzheimer's Association , or Crohn's and Colitis Foundation . There are also broader groups focused on advocacy or research into chronic illness in general, such as the National Association of Chronic Disease Directors, Partnership to Fight Chronic Disease, the Chronic Disease Coalition which arose in Oregon in 2015, [ 73 ] and the Chronic Policy Care Alliance. [ 74 ]
|
https://en.wikipedia.org/wiki/Chronic_condition
|
A chronic electrode implant is an electronic device implanted chronically (for a long period) into the brain or other electrically excitable tissue. It may record electrical impulses in the brain or may stimulate neurons with electrical impulses from an external source.
The potential for neural interfacing technology to restore lost sensory or motor function is staggering; victims of paralysis due to peripheral nerve injury could in principle achieve a full recovery through direct recording of the output of their motor cortex , but the technology is immature and unreliable. [1] [2] There are numerous examples in the literature of intra-cortical electrode recordings, used for a variety of ends, that fail after a few weeks, or a few months at best. [3] [4] [5] [6] [7] [8] [9] [10] This article reviews the current state of research into electrode failure, focusing on recording electrodes as opposed to stimulating electrodes.
Chronic brain-computer interfaces come in two varieties: stimulating and recording. Applications for stimulating interfaces include sensory prosthetics (cochlear implants , for example, are the most successful variety of sensory prosthetic) and deep brain stimulation therapies, while recording interfaces can be used for research applications [11] and to record the activity of speech or motor centers directly from the brain. In principle both systems are susceptible to the same tissue response that causes failure in implanted electrodes, but stimulating interfaces can overcome this problem by increasing signal strength. Recording electrodes, however, must rely on whatever signals are present where they are implanted, and cannot easily be made more sensitive.
Current implantable microelectrodes are unable to record single- or multi-unit activity reliably on a chronic scale. Lebedev and Nicolelis discuss in their 2006 review the specific needs for research in the field to truly improve the technology to the level of clinical implementation. They suggest four directions for improvement: [12] [13]
This article focuses on techniques pursued in the literature that are relevant to achieving the goal of consistent, long-term recordings. Research toward this end can be divided into two primary categories: characterizing the specific causes of recording failure, and techniques for preventing or delaying electrode failure.
As mentioned above, if there is to be significant progress toward long-term implantable electrodes, an important step is documenting the response of living tissue to electrode implantation on both acute and chronic timelines. It is ultimately this tissue response that causes electrodes to fail, by encapsulating the electrode in a protective layer called a "glial scar" (see below). One serious impediment to understanding the tissue response is the lack of true standardization of implantation technique or of electrode materials. Common materials for electrode or probe construction include silicon , platinum , iridium , polyimide , ceramic , and gold , among others. [14] [15] [16] [17] [18] [19] [20] In addition to the variety of materials used, electrodes are constructed in many different shapes, [21] including planar shanks, simple uniform microwires, and probes that taper to a thin tip from a wider base. Implantable electrode research also employs many different techniques for surgically implanting the electrodes; the most critical differences are whether or not the implant is anchored across the skull [22] and the speed of insertion. [23] The overall observed tissue response is caused by a combination of the traumatic injury of electrode insertion and the persistent presence of a foreign body in the neural tissue.
Short-term damage is caused by the insertion of the electrode into the tissue; consequently, research into minimizing it focuses on electrode geometry and proper insertion technique. The short-term effects of electrode insertion on surrounding tissue have been documented extensively. [24] They include cell death (both neuronal and glial ), severed neuronal processes and blood vessels, mechanical tissue compression, and accumulation of debris resulting from cell death.
In the Bjornsson et al. 2006 study, an ex vivo apparatus was constructed explicitly to study the deformation of and damage to neural tissue during electrode insertion. Electrodes were constructed from silicon wafers with three different sharpnesses (interior angle of 5° for sharp, 90° for medium, 150° for blunt). Insertion was also tested at three speeds: 2 mm/s, 0.5 mm/s, and 0.125 mm/s. Qualitative assessments of vascular damage were made by taking real-time images of electrodes being inserted into 500 μm thick coronal brain slices. To facilitate direct visualization of vascular deformation, tissue was labeled with fluorescent dextran and microbeads before viewing. The fluorescent dextran filled the blood vessels, allowing the initial geometry to be visualized along with any distortions or breakages. Fluorescent microbeads lodged throughout the tissue provided discrete coordinates that aided computerized calculations of strain and deformation. Analysis of the images prompted the division of tissue damage into four categories:
Fluid displacement by device insertion frequently resulted in ruptured vessels. Severing and dragging were consistently present along the insertion track, but did not correlate with tip geometry. Rather, these features were correlated with insertion speed, being more prevalent at medium and slow insertion speeds. Faster insertion of sharp probes was the only condition resulting in no reported vascular damage.
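As a compact summary of the experimental design and headline finding described above, the sketch below enumerates the nine tip-geometry/insertion-speed conditions and flags the one condition reported to produce no vascular damage. The dictionary names and flagging logic are illustrative restatements of the text, not code from the study.

```python
from itertools import product

# The 3 x 3 condition grid of the Bjornsson et al. 2006 study: three tip
# sharpnesses (interior angle) crossed with three insertion speeds.
tip_angle_deg = {"sharp": 5, "medium": 90, "blunt": 150}
speed_mm_per_s = [2.0, 0.5, 0.125]

for (tip, angle), speed in product(tip_angle_deg.items(), speed_mm_per_s):
    # Per the reported findings, only fast insertion of sharp probes
    # resulted in no observed vascular damage.
    outcome = ("no vascular damage reported"
               if tip == "sharp" and speed == 2.0
               else "vascular damage observed")
    print(f"{tip:6s} ({angle:3d} deg) at {speed:5.3f} mm/s -> {outcome}")
```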
When implanted in neural tissue for the long term, microelectrodes stimulate a foreign body response, mediated primarily by astrocytes and microglia . Each cell type performs many functions in supporting healthy, uninjured neural tissue, and each is also 'activated' by injury-related mechanisms, resulting in changes in morphology, expression profile, and function. The tissue response has also been shown to be greater in situations where the electrodes are anchored through the subject's skull; the tethering forces aggravate the injury caused by the electrode's insertion and sustain the tissue response. [25]
One function taken on by microglia when activated is to cluster around foreign bodies and degrade them enzymatically. It has been proposed that when the foreign body cannot be degraded, as in the case of implanted electrodes whose material composition is resistant to such enzymatic dissolution, this 'frustrated phagocytosis ' contributes to the failure of recordings, releasing necrotic substances into the immediate vicinity and contributing to cell death around the electrode. [26]
Activated astrocytes form the major component of the encapsulating tissue that forms around implanted electrodes. "Current theories hold that glial encapsulation, i.e. gliosis , insulates the electrode from nearby neurons, thereby hindering diffusion and increasing impedance, extends the distance between the electrode and its nearest target neurons, or creates an inhibitory environment for neurite extension, thus repelling regenerating neural processes away from recording sites". [27] [28] Either activated astrocytes or the buildup of cellular debris from cell death around the electrode would act to insulate the recording sites from other, active neurons. [29] Even very small increases in the separation between the electrode and the local nerve population can insulate the electrode completely, as electrodes must be within 100 μm to get a signal.
Another study addresses the problem of the tissue response. [30] Michigan-type electrodes (see article for detailed dimensions) were surgically inserted into the brains of adult male Fischer 344 rats; a control population underwent the same surgical procedure, but the electrode was implanted and immediately removed, allowing a comparison between the tissue response to acute injury and that to chronic presence. Animal subjects were sacrificed 2 and 4 weeks after implantation to quantify the tissue response with histological and immunostaining techniques. Samples were stained for ED1 and GFAP. ED1+ staining, indicative of the presence of macrophages , was observed in a densely packed region within approximately 50 μm of the electrode surface at both 2 and 4 weeks after implantation, with no significant difference between the time points. GFAP staining, indicating the presence of reactive astrocytes, was seen at 2 and 4 weeks after implantation, extending more than 500 μm from the electrode surface. Stab controls showed signs of inflammation and reactive gliosis as well, but the signals were significantly lower in intensity than those found in chronically implanted subjects and diminished noticeably from 2 to 4 weeks. This is strong evidence that glial scarring and the encapsulation, and eventual isolation, of implanted microelectrodes is primarily a result of chronic implantation, not acute injury.
Another study addressing the impact of chronically implanted electrodes indicates that tungsten-coated electrodes seem to be well tolerated by nervous tissue, inducing only a small, circumscribed inflammatory response in the vicinity of the implant, associated with limited cell death. [31]
Techniques for combating long-term failure of electrodes are understandably focused on disarming the foreign body response. This can most obviously be achieved by improving the biocompatibility of the electrode itself, thus reducing the tissue's perception of the electrode as a foreign substance. [32] As a result, much of the research towards alleviating the tissue response is focused on improved biocompatibility .
It is difficult to effectively evaluate progress towards improved electrode biocompatibility because of the variety of research in this field.
This section loosely categorizes the different approaches to improving biocompatibility seen in the literature. Descriptions are limited to a brief summary of theory and technique; results are presented in detail in the original publications. Thus far, no technique has achieved results sweeping enough to eliminate the encapsulation response.
Research focusing on bioactive coatings to alleviate the tissue response is conducted primarily on silicon-based electrodes. Techniques include the following:
Another body of research dedicated to improving the biocompatibility of electrodes focuses on functionalizing the electrode surface with relevant protein sequences. Studies have demonstrated that surfaces functionalized with sequences taken from adhesive peptides will decrease cellular motility and support higher neuronal populations. [37] [38] It has also been shown that peptides can be selected to specifically support neuronal growth or glial growth, and that peptides can be deposited in patterns to guide cellular outgrowth. [39] [40] [41] If populations of neurons can be induced to grow onto inserted electrodes, electrode failure should be minimized.
Kennedy's research details the use of a glass cone electrode containing a built-in microwire. [42] The microwire is used for recording, and the cone is filled with neurotrophic substances or neural tissue to promote the growth of local neurons into the electrode, allowing recording. This approach overcomes the tissue response by encouraging neurons to grow closer to the recording surface.
Some notable success has also been made in developing microfluidic delivery mechanisms that could deliver targeted pharmacological agents to electrode implantation sites to alleviate the tissue response. [43]
Just as in other fields, some effort is devoted explicitly to the development of standardized research tools. The goal of these tools is to provide a powerful, objective way of analyzing the failure of chronic neural electrodes in order to improve the reliability of the technology.
One such effort describes the development of an in vitro model to study the tissue response phenomenon. Midbrains are surgically removed from day 14 Fischer 344 rats and grown in culture to create a confluent layer of neurons, microglia, and astrocytes. This confluent layer can be used to study the foreign body response by scrape-injury or depositing electrode microwires on the monolayer, fixing the culture at defined time points after insertion/injury and studying tissue response with histological methods. [44]
Another research tool is a numerical model of the mechanical electrode-tissue interface. The goal of this model is not to detail the electrical or chemical characteristics of the interface, but the mechanical ones created by electrode-tissue adhesion, tethering forces, and strain mismatch. This model can be used to predict forces generated at the interface by electrodes of different material stiffnesses or geometries. [45]
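The article does not reproduce the model's equations, so the following is a minimal illustrative sketch only: it assumes the electrode-tissue interface can be idealized as a simple axial spring (stiffness k = EA/L) loaded by relative micromotion. The Young's modulus values are approximate textbook figures, the geometry is hypothetical, and the whole calculation is a toy stand-in for the far more detailed numerical model in the cited work; it illustrates only why material stiffness and geometry matter for interface forces.

```python
# Toy linear-elastic comparison of electrode-tissue interface forces.
# Idealizes the probe shank as an axial spring: k = E * A / L, F = k * delta.
# NOT the cited model -- just a sketch of how stiffness and geometry
# enter into strain-mismatch/tethering force estimates.

YOUNGS_MODULUS_PA = {      # approximate textbook values (assumption)
    "silicon":   170e9,
    "polyimide": 2.5e9,
}

def interface_force_n(material: str, cross_section_m2: float,
                      shank_length_m: float, micromotion_m: float) -> float:
    """Force transmitted to tissue for a given relative displacement."""
    stiffness = YOUNGS_MODULUS_PA[material] * cross_section_m2 / shank_length_m
    return stiffness * micromotion_m

# Hypothetical geometry: 50 um x 50 um cross-section, 3 mm shank, 2 um motion.
area, length, motion = 50e-6 * 50e-6, 3e-3, 2e-6
for material in YOUNGS_MODULUS_PA:
    force = interface_force_n(material, area, length, motion)
    print(f"{material:9s}: ~{force * 1e6:,.0f} uN")
```

Under these assumptions the stiffer silicon shank transmits a force roughly two orders of magnitude larger than the polyimide one for the same micromotion, which is the kind of comparison such a model is built to make.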
For studies requiring a massive quantity of identical electrodes, a bench-top technique has been demonstrated in the literature that uses a silicon shape as a master to produce multiple copies in polymeric materials via a PDMS intermediate. This is exceptionally useful for material studies or for labs that need a high volume of electrodes but cannot afford to buy them all. [46]
|
https://en.wikipedia.org/wiki/Chronic_electrode_implant
|
Chronic inflammatory demyelinating polyneuropathy ( CIDP ) is an acquired autoimmune disease of the peripheral nervous system characterized by progressive weakness and impaired sensory function in the legs and arms. [ 1 ] The disorder is sometimes called chronic relapsing polyneuropathy ( CRP ) or chronic inflammatory demyelinating polyradiculoneuropathy (because it involves the nerve roots). [ 2 ] CIDP is closely related to Guillain–Barré syndrome and it is considered the chronic counterpart of that acute disease. [ 3 ] Its symptoms are also similar to progressive inflammatory neuropathy . It is one of several types of neuropathy .
In its traditional manifestation, chronic inflammatory demyelinating polyneuropathy is characterized by symmetric, progressive limb weakness and sensory loss, typically starting in the legs. Patients report trouble getting out of a chair, walking, and climbing stairs, as well as falls. Upper limb involvement can cause problems with gripping objects, tying shoelaces, and using utensils. Proximal limb weakness is a fundamental clinical characteristic that sets chronic inflammatory demyelinating polyneuropathy apart from the vast majority of distal polyneuropathies , which are far more common. Sensory involvement brings on proprioception impairment, distal paresthesias , loss of feeling, and poor balance. Only a small percentage of cases involve neuropathic pain . [ 4 ]
Fatigue has been identified as common in CIDP patients, but it is unclear how much this is due to primary (due to the disease action on the body) or secondary effects (impacts on the whole person of being ill with CIDP). [ 5 ] [ 6 ] [ 7 ]
Numerous reports have outlined a range of clinical patterns that are thought to be chronic inflammatory demyelinating polyneuropathy variations. Different variations include ataxic, pure motor, and pure sensory patterns; additionally, there are multifocal patterns in which the distributions of specific nerve territories experience weakness and sensory loss. [ 4 ]
Chronic inflammatory demyelinating polyneuropathy (or polyradiculoneuropathy) is considered an autoimmune disorder destroying myelin, the protective covering of the nerves. Typical early symptoms are "tingling" (a sort of electrified vibration, or paresthesia ) or numbness in the extremities, frequent (night) leg cramps, loss of reflexes (in knees), muscle fasciculations , "vibration" feelings, loss of balance, general muscle cramping, and nerve pain. [ 8 ] [ 9 ] CIDP is extremely rare but under-recognized and under-treated due to its heterogeneous presentation (both clinical and electrophysiological) and the limitations of clinical, serologic, and electrophysiologic diagnostic criteria. Despite these limitations, early diagnosis and treatment are favoured to prevent irreversible axonal loss and improve functional recovery. [ 10 ]
There is a lack of awareness and treatment of CIDP. Although there are stringent research criteria for selecting patients for clinical trials, there are no generally agreed-upon clinical diagnostic criteria for CIDP due to its different presentations in symptoms and objective data. Application of the present research criteria to routine clinical practice often misses the diagnosis in a majority of patients, and patients are often left untreated despite progression of their disease. [ 11 ]
HIV infection is a factor in the occurrence of CIDP. At every stage of HIV infection, distinct patterns of CIDP, whether progressive or relapsing, have been noted. Increased protein content is linked to CSF pleocytosis in the majority of HIV-CIDP cases. [ 12 ] Pregnancy has been linked to a significantly greater risk of relapse. [ 13 ]
In one study, 32% of 92 CIDP patients had a history of infection within 6 weeks of the onset of neurological symptoms, with the majority of these infections being non-specific upper respiratory tract or gastrointestinal infections. [ 13 ] A different study showed that out of 100 patients, 16% had an infectious event six weeks or less prior to the onset of neurological symptoms: seven patients had CIDP that was related to or followed viral hepatitis , and six had a chronic infection with the hepatitis B virus . The other nine patients had vague symptoms similar to the flu. [ 14 ]
There is no known genetic predisposition to chronic inflammatory demyelinating polyneuropathy. [ 15 ]
Some variants of CIDP present autoimmunity against proteins of the node of Ranvier . These variants comprise a subgroup of inflammatory neuropathies with IgG4 autoantibodies against the paranodal proteins neurofascin -155, contactin -1 and caspr -1. [ 16 ]
These cases are special not only because of their pathology, but also because they are non-responsive to the standard treatment. They are responsive to Rituximab instead. [ 16 ]
Some cases of combined central and peripheral demyelination (CCPD) may also be produced by autoantibodies against neurofascins. [ 17 ]
Autoantibodies to components of the nodes of Ranvier, especially autoantibodies against contactin-associated protein 1 ( CASPR ), cause a form of CIDP with an acute " Guillain-Barre -like" phase followed by a chronic phase with progressive symptoms. Different IgG subclasses are associated with the different phases of the disease: IgG3 CASPR autoantibodies were found during the acute GBS-like phase, while IgG4 CASPR autoantibodies were present during the chronic phase. [ 18 ]
In the local tissue compartment of peripheral nerves , the immune system is carefully regulated by a normal, balanced collection of immunocompetent cells and soluble factors, maintaining the integrity of the system. Maintaining self-tolerance requires defense against immune reactions to autoantigens . Chronic inflammatory demyelinating polyneuropathy disrupts self-tolerance and activates autoreactive T and B cells , which are normally suppressed. This leads to the organ-specific damage typical of autoimmune disease . [ 19 ] Molecular mimicry may be particularly relevant to the tolerance breakdown linked to autoimmune neuropathies: molecular mimicry occurs when an infectious organism that shares epitopes with its host's afflicted tissue triggers an immune response in the host. However, only a small number of convincingly identified specific targets for such a response have been found in chronic inflammatory demyelinating polyneuropathy. [ 20 ]
Individuals with chronic inflammatory demyelinating polyneuropathy have evidence of activation of T cells in the systemic immune compartment; however, antigen specificity is still largely unknown. [ 21 ] [ 22 ]
It was proposed more than 20 years ago that autoantibodies play a role in the development of chronic inflammatory demyelinating polyneuropathy. This was supported by the detection of oligoclonal IgG bands in the cerebrospinal fluid [ 23 ] and immunoglobulin as well as complement deposition on myelinated nerve fibers. [ 24 ]
Target antigens may also include gangliosides and related glycolipids . There is serologic evidence of recent Campylobacter jejuni infection in a small number of individuals with chronic inflammatory demyelinating polyneuropathy. Because carbohydrate epitopes are expressed in both microbial lipopolysaccharides and nerve glycolipids , this discovery may, in rare cases, point to molecular mimicry as the root cause of chronic inflammatory demyelinating polyneuropathy. [ 25 ]
Apart from myelin-directed antibodies, other serum components that can cause demyelination as well as conduction block include complement, cytokines , and other inflammatory mediators. Individuals with chronic inflammatory demyelinating polyneuropathy have a low frequency of specific antibodies, which suggests that different antibodies and different mechanisms are involved in each patient. [ 20 ]
CIDP may be diagnosed when a patient presents with a non-length-dependent demyelinating polyneuropathy that either develops chronically over several months or progresses for more than a month. The course may be progressive, sometimes followed by a secondary progressive phase, or relapsing and remitting. Pathological investigations and electrophysiological studies, if necessary, show the underlying demyelinating process. [ 26 ]
The primary basis for diagnosing CIDP is electrophysiological study depicting an asymmetric demyelinating process. Comparison of the proximal and distal latencies of equivalent segments of two nerves in the same limb reveals that patients with acquired demyelinating neuropathy frequently have differential slowing of conduction velocity. There is always noticeable dispersion of the compound muscle action potential, and conduction block is commonly encountered. [ 26 ]
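The segmental slowing described above rests on the standard nerve-conduction calculation: velocity over a nerve segment is the distance between two stimulation sites divided by the difference in onset latencies. The sketch below applies that formula to hypothetical values; the distance, latencies, and the normal-limit figure in the comment are illustrative assumptions, not diagnostic criteria from the cited source.

```python
# Standard segmental motor conduction velocity: distance between two
# stimulation sites divided by the difference in onset latencies.
# Note that mm/ms is numerically equal to m/s.

def segmental_cv_m_per_s(distance_mm: float,
                         proximal_latency_ms: float,
                         distal_latency_ms: float) -> float:
    return distance_mm / (proximal_latency_ms - distal_latency_ms)

# Hypothetical example: 200 mm between stimulation sites on the same nerve.
cv = segmental_cv_m_per_s(200.0, proximal_latency_ms=12.0, distal_latency_ms=4.0)
print(f"Segmental conduction velocity: {cv:.1f} m/s")
# -> 25.0 m/s, well below a typical ~50 m/s normal motor value: the kind of
#    focal slowing suggestive of acquired demyelination.
```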
An MRI can show proximal nerve or root enlargement and gadolinium enhancement, which indicate active inflammation as well as demyelination in the brachial plexus [ 27 ] or cauda equina . [ 28 ]
Clinically, CIDP is divided into "typical" and "atypical" cases. A typical case of CIDP is a symmetrical polyneuropathy that affects the proximal and distal muscles equally. Atypical cases include multifocal acquired demyelinating sensory and motor neuropathy (MADSAM), also known as Lewis-Sumner syndrome (LSS), and distal acquired demyelinating symmetric (DADS) neuropathy. DADS is a symmetrical, length-dependent sensory or sensorimotor neuropathy, frequently linked to an IgM paraprotein and noticeably longer distal motor latencies. Its characteristics are typical of demyelinating neuropathy with anti-myelin-associated glycoprotein (MAG) antibodies; however, anti-MAG neuropathy is not included in the CIDP criteria according to the EFNS/PNS criteria, primarily due to the presence of a particular antibody and a different response to treatment. LSS exhibits a multifocal distribution, with conduction block serving as its electrophysiological hallmark. Furthermore, pure motor and pure sensory CIDP variants have been reported, the latter occasionally limited to sensory nerve roots (chronic immune sensory polyradiculopathy). The acronym CANOMAD refers to a rare chronic ataxic neuropathy linked to disialosyl ( ganglioside ) antibodies, IgM paraprotein , ophthalmoplegia , and cold agglutinins . [ 29 ]
CIDP variants are among several types of immune-mediated neuropathies recognised. [ 30 ] [ 31 ] These include:
Other possible diagnoses are
For this reason, a diagnosis of chronic inflammatory demyelinating polyneuropathy requires further investigation; the diagnosis is usually made provisionally through a clinical neurological examination .
Typical diagnostic tests include:
In some cases electrophysiological studies fail to show any evidence of demyelination. Even though conventional electrophysiological diagnostic criteria are not met, the patient may still respond to immunomodulatory treatments. In such cases, the presence of clinical characteristics suggestive of CIDP is critical, justifying full investigation, including sural nerve biopsy. [ 37 ]
First-line treatment for CIDP is currently intravenous immunoglobulin and other treatments include corticosteroids (e.g., prednisone ), and plasmapheresis (plasma exchange) which may be prescribed alone or in combination with an immunosuppressant drug . [ 38 ] Recent controlled studies show subcutaneous immunoglobulin appears to be as effective for CIDP treatment as intravenous immunoglobulin in most patients, and with fewer systemic side effects. [ 39 ]
Intravenous immunoglobulin and plasmapheresis have proven beneficial in randomized, double-blind, placebo-controlled trials. Despite less definitive published evidence of efficacy, corticosteroids are considered standard therapies because of their long history of use and cost effectiveness. Intravenous immunoglobulin is probably the first-line CIDP treatment, but is extremely expensive. For example, in the U.S., a single 65 g dose of Gamunex brand in 2010 might be billed at the rate of $8,000 just for the immunoglobulin—not including other charges such as nurse administration. [ citation needed ]
Immunosuppressive drugs are often of the cytotoxic ( chemotherapy ) class, including rituximab (Rituxan) which targets B cells , and cyclophosphamide , a drug which reduces the function of the immune system. Ciclosporin has also been used in CIDP but with less frequency as it is a newer approach. [ 40 ] Ciclosporin is thought to bind to immunocompetent lymphocytes , especially T-lymphocytes . [ citation needed ]
Non-cytotoxic immunosuppressive treatments usually include the anti-rejection transplant drugs azathioprine (Imuran/Azoran) and mycophenolate mofetil (Cellcept). In the U.S., these drugs are used "off-label", meaning that they do not have an indication for the treatment of CIDP in their package inserts. Before azathioprine is used, the patient should first have a blood test that ensures that azathioprine can safely be used. [ citation needed ]
Anti-thymocyte globulin , an immunosuppressive agent that selectively destroys T lymphocytes, is being studied for use in CIDP. Anti-thymocyte globulin is the gamma globulin fraction of antiserum from animals that have been immunized against human thymocytes; it is a polyclonal antibody. Although chemotherapeutic and immunosuppressive agents have been shown to be effective in treating CIDP, significant evidence is lacking, mostly due to the heterogeneous nature of the disease in the patient population and the lack of controlled trials. [ citation needed ]
A review of several treatments found that azathioprine, interferon alpha, and methotrexate were not effective. [ 41 ] Cyclophosphamide and rituximab seem to produce some response. Mycophenolate mofetil may be of use in milder cases. Immunoglobulin and steroids are the first-line choices for treatment. [ citation needed ]
In severe cases of CIDP, when second-line immunomodulatory drugs are not efficient, autologous hematopoietic stem cell transplantation (HSCT) is sometimes performed. The treatment may induce long-term remission even in severe treatment-refractory cases of CIDP. To improve outcome, it has been suggested that it should be initiated before irreversible axonal damage has occurred. However, a precise estimation of its clinical efficacy for CIDP is not available, as randomized controlled trials (RCT) have not been performed. [ 42 ] (In MS, the ASTIMS RCT provides evidence for superior effect of HSCT to the then-best practice for treatment of aggressive MS. [ 42 ] The more recent MIST RCT confirmed its superiority in MS. [ 43 ] )
Physical therapy and occupational therapy may improve muscle strength, activities of daily living , mobility, and minimize the shrinkage of muscles and tendons and distortions of the joints. [ citation needed ]
Ongoing specialist community support, information, advice, and guidance are available from a range of charities, non-governmental organisations (NGOs), and patient advisory groups around the world. In the United Kingdom this is provided by GAIN (Guillain–Barré and Associated Inflammatory Neuropathies), [ 44 ] in the USA by the GBS/CIDP Foundation International, [ 45 ] and in the European Union by a range of organisations under the umbrella of EPODIN (European Patient Organization for Disimmune & Inflammatory Neuropathies). [ 46 ]
As in multiple sclerosis , another demyelinating condition, it is not possible to predict with certainty how CIDP will affect patients over time. The pattern of relapses and remissions varies greatly with each patient. A period of relapse can be very disturbing, but many patients make significant recoveries. [ citation needed ]
If CIDP is diagnosed early, prompt initiation of treatment to prevent the loss of nerve axons is recommended. However, many individuals are left with residual numbness, weakness, tremors, fatigue, and other symptoms, which can lead to long-term morbidity and diminished quality of life . [ 2 ]
It is important to build a good relationship with doctors, both primary care and specialist. Because of the rarity of the illness, many doctors will not have encountered it before. Each case of CIDP is different, and relapses, if they occur, may bring new symptoms and problems. Because of the variability in severity and progression of the disease, doctors will not be able to give a definite prognosis. A period of experimentation with different treatment regimens is likely to be necessary in order to discover the most appropriate treatment regimen for a given patient. [ citation needed ]
In 1982, Lewis et al. reported a group of patients with a chronic asymmetrical sensorimotor neuropathy, mostly affecting the arms, with multifocal involvement of peripheral nerves. [ 47 ] Also in 1982, Dyck et al. reported a response to prednisolone in a condition they referred to as chronic inflammatory demyelinating polyradiculoneuropathy. [ 48 ] Parry and Clarke in 1988 described a neuropathy later found to be associated with IgM autoantibodies directed against GM1 gangliosides. [ 49 ] [ 50 ] This latter condition was later termed multifocal motor neuropathy. [ 51 ] The distinction is important because multifocal motor neuropathy responds to intravenous immunoglobulin alone, while chronic inflammatory demyelinating polyneuropathy responds to intravenous immunoglobulin, steroids, and plasma exchange. [ 52 ] It has been suggested that multifocal motor neuropathy is distinct from chronic inflammatory demyelinating polyneuropathy and that Lewis-Sumner syndrome is a distinct variant type of chronic inflammatory demyelinating polyneuropathy. [ 53 ]
The Lewis-Sumner form of this condition is considered a rare disease with only 50 cases reported up to 2004. [ 54 ] A total of 90 cases had been reported by 2009. [ 55 ]
The National Vaccine Injury Compensation Program has awarded money damages to patients who came down with CIDP after receiving one of the childhood vaccines listed on the Federal Government's vaccine injury table . These Vaccine Court awards often come with language stating that the Court denies that the specific vaccine "caused petitioner to suffer CIDP or any other injury. Nevertheless, the parties agree to the joint stipulation, attached hereto as Appendix A. The undersigned finds said stipulation reasonable and adopts it as the decision of the Court in awarding damages, on the terms set forth therein." [ 56 ] A keyword search on the Court of Federal Claims "Opinions/Orders" database for the term "CIDP" returns 202 opinions related to CIDP and vaccine injury compensation. [ 57 ]
|
https://en.wikipedia.org/wiki/Chronic_inflammatory_demyelinating_polyneuropathy
|
Chronic phase chronic myelogenous leukemia is a phase of chronic myelogenous leukemia in which 5% or fewer of the cells in the blood and bone marrow are blast cells (immature blood cells ). This phase may last from several months to several years, and there may be no symptoms of leukemia .
This article incorporates public domain material from Dictionary of Cancer Terms . U.S. National Cancer Institute .
This oncology article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Chronic_phase_chronic_myelogenous_leukemia
|
Chua Soi Lek ( simplified Chinese : 蔡细历 ; traditional Chinese : 蔡細歷 ; pinyin : Cài Xìlì ; Pe̍h-ōe-jī : Chhòa Sòe-le̍k ; born 2 January 1947), also known as Chua Kin Seng, is a Chinese Malaysian politician from the state of Johor . He was the 9th President of the Malaysian Chinese Association (MCA), a major component party of the Barisan Nasional (BN) coalition, from 2010 until 2013, and held the post of Minister of Health from 2004 until 2008. [ 1 ] He was also a one-term Member of Parliament (MP) for Labis (2004–2008) and a four-term Member of the Johor State Legislative Assembly (MLA) for Penggaram from 1986 to 2004.
He was born in Batu Pahat , Johor. Chua received his early education at Sekolah Kebangsaan Lim Poon, then Batu Pahat High School and Muar High School. He studied medicine at the University of Malaya from 1968 to 1973.
He was trained in psychology and practised psychiatry before entering politics. Chua set up his medical practice in 1977 after serving as a medical officer at the Batu Pahat Hospital . He sold the clinic in 1990 to pursue a full-time career in politics with MCA.
He was first elected as the state assemblyman for Penggaram , Johor, on an MCA ticket in 1986, and served Penggaram for 18 years through four consecutive state elections. He later became a Johor state government executive councillor. In the 2004 general election , he contested the Labis parliamentary seat under the Barisan Nasional coalition and won. Following that victory, then Prime Minister Tun Abdullah Ahmad Badawi appointed Chua to the Malaysian cabinet as Minister of Health.
He held several prominent posts throughout his later career: MP for Labis, MCA vice-president, Johor MCA state liaison committee chairman, and Batu Pahat MCA division chairman, until he resigned from all public and political offices on 2 January 2008 following the eruption of a sensational sex scandal. [ 2 ]
In the 2008 general elections , MCA won only 15 of the 40 parliamentary seats it contested. Some grassroots leaders and former top leaders, including Dr Chua, demanded that the president, Ong Ka Ting , step down to take responsibility.
He returned to active politics in the second half of 2008 and won the Batu Pahat division chairmanship uncontested. He then contested the MCA deputy presidency, defeating Ong Ka Chuan , Donald Lim , and Lee Hak Teik in a four-cornered fight. [ 3 ]
Despite that, Dr Chua was appointed only chairman of the Government Policy Monitoring Bureau and was left out of the MCA leadership in Johor by party president Ong Tee Keat . This was seen as a move to isolate Dr Chua politically. [ 4 ] Eventually, Chua was expelled from the party in August 2009 by the MCA Disciplinary Committee over his past sex scandal. [ 5 ]
In 2009, Chua's supporters initiated an Extraordinary General Meeting (EGM) to challenge Ong Tee Keat's presidency and to reinstate Dr Chua as an MCA member and deputy president.
The EGM was held on 10 October, where a number of resolutions were made challenging Chua's removal from MCA and his sacking as deputy president of MCA. [ 6 ] A vote of no confidence against Datuk Seri Ong Tee Keat passed by 14 votes. [ 7 ] In the other resolution, Dr Chua's expulsion was overturned. [ 8 ] Ong and Chua both refused to resign, and united under a "greater unity plan," putting their differences aside temporarily. However, some central committee members, led by Liow Tiong Lai , previously aligned with Ong, demanded fresh elections.
In early March 2010, Chua and his supporters in the central committee (CC) joined other CC members, led by Liow Tiong Lai , in resigning. With two-thirds of the central committee resigning, fresh elections had to be held as per the party constitution. Chua contested the presidency against incumbent Ong Tee Keat and former president Ong Ka Ting. In the three-cornered fight, Chua emerged victorious, while the incumbent finished third. [ 9 ] [ 10 ] After becoming president, Chua focused on rebuilding the appearance of unity within MCA after a year of public infighting. [ 11 ]
In February 2012, Chua broke from Malaysian political norms by holding a public debate with Lim Guan Eng , Chief Minister of Penang. It was followed by a second public debate, labelled Debate 2.0, held on 8 July 2012. Both debates generated tremendous public and media interest. [ 12 ] [ 13 ]
In the 2013 Malaysian general election , MCA won only 7 of the 37 federal seats and 11 of the 90 state seats it contested, down from 15 parliamentary and 32 state seats in the 2008 general election. Chua said MCA remained adamant about not accepting any government posts at either state or federal level, following its dismal performance in the just-concluded 13th general election. [ 14 ] [ 15 ] The poor showing led to calls for Chua's resignation. [ 16 ] Chua did not stand in the following party election for president, and in December 2013 Liow Tiong Lai was elected President of MCA. [ 17 ] [ 18 ]
On 1 January 2008, Chua Soi Lek admitted that he was the person featured in a sensational sex DVD that was being widely circulated in Johor . The two DVDs, distributed anonymously in Muar and other towns in Johor, show Chua engaging in sexual acts with a young woman, described by him as a "personal friend." The DVDs are believed to be recordings from a wireless hidden camera in a hotel suite. [ 19 ]
He claimed no involvement in the filming or production of the DVD in question. [ 20 ] On 2 January 2008, he formally announced his resignation from all posts including Member of Parliament for Labis , vice presidency of MCA , and Minister of Health at a press conference. [ 21 ]
Chua later remarked that his downfall was due to his dedication to his work as Health Minister and MCA Vice-President, which caused his political rivals to grow suspicious of him. [ 22 ]
Chua is married to Puan Sri Wong Sek Hin and the couple have three children. [ citation needed ] One of their sons, Chua Tee Yong , replaced him as the MP for Labis.
|
https://en.wikipedia.org/wiki/Chua_Soi_Lek
|
CIDEX is a brand name for a glutaraldehyde-free (0.55% ortho-phthalaldehyde) high-level disinfecting solution used within the field of medicine. [ 1 ]
The CIDEX brand name has been registered as a trademark since 1962. [ 2 ]
This medical article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Cidex
|
The concept of healthcare knowledge transfer using cinematography recognizes that films with carefully crafted and verified content, using graphics, animations and live-action video, can be one of the most efficient ways of transferring knowledge with clarity and speed, to both lay-people and healthcare professionals.
The use of cinematography to enhance healthcare practice and delivery dates back to the late 19th century in Western Europe. Étienne-Jules Marey (a French scientist and physiologist), Eugène-Louis Doyen (a French surgeon), Bolesław Matuszewski (a Polish cameraman, in France his first name was written as Boleslas), and Gheorghe Marinescu (a Romanian neurologist) are some of the pioneers of medical cinematography. [ 1 ]
In 1898, Doyen began having his surgeries captured on film. His films were brought into disrepute when they were copied and shown at fairgrounds, and the resulting social prejudice may explain the slow take-off of medical cinematography. A 1910 commentary on Doyen's films, for example, reads: "These pictures savoured of advertisement, and were never popular, save as a side show among the less scientifically inclined members of the profession." [ 2 ] Doyen's work, however, marked the distinction between the concepts of 'film for entertainment' and 'film for medicine'.
In 1893, Marey used the technique to study human physiology and movement. [ 3 ] In 1898 Boleslas Matuszewski recorded medical films in Paris (at the time, the world's neurologic capital). At Salpêtrière and Pitié hospitals he filmed surgeries and cases of people affected by nervous and mental disorders.
Gheorghe Marinescu studied under the renowned French professor Jean-Martin Charcot and returned to Bucharest as Chief Physician at Pantelimon Hospital. Between 1898 and 1902 he conducted a cinematographic project, recording and analyzing a series of neurological conditions in patients. He perfected the application of filming techniques to clinical neurology and published five articles based on cinematographic analysis. Marinescu wrote that the role of cinematography is "to complement and even replace, in whatever measure possible, the descriptive exposition of phenomena by more rigorous, more exact analysis, which consists of recording movement with the help of special procedures". [ 4 ]
In 1905 Arthur Van Gehuchten (Belgian anatomist and neurologist) began to film neurologic patients and built up a collection of films for teaching purposes. His collection is exceptional both quantitatively and qualitatively. [ 5 ]
In 1910, Thomas Edison produced the first ever public health education film, raising awareness about the prevention of tuberculosis . The view at the time was that "propagandists saw education as limited to providing facts. Propaganda was designed to control opinions and actions, without having to use direct force. Propaganda appealed to most progressive health reformers as a way of providing genuine popular democratic support for the trained medical elite." [ citation needed ]
"Dr. H. E. Kleinschmidt of the Tuberculosis Association defined good propaganda as 'mental inoculation' whose avowed goal was 'will-control through education'. The goal of health propagandists was not simply to inform but to persuade, to control individual thoughts and actions." [ 6 ]
Edison’s film came under criticism for “the distortions, exaggerations, and inaccuracies inherent in the melodrama format. The films clearly exaggerated the power of medicine to deal with tuberculosis; and they greatly over dramatized the effects and symptoms of the disease.” [ 6 ]
During World War I, the use of medical propaganda appealed to the US government and private agencies, which worked together “to produce and distribute anti- venereal disease motion pictures, including the feature-length melodramas Fit to Fight and The End of the Road and other lesser-known films. These movies, originally made for military trainees and war workers, were revised for showing to civilians, as evidence mounted that many soldiers had been infected prior to induction.” [ 7 ]
The 1930s saw the rise of medical cinematography employed to reach the goals of public health . In 1933, the Journal of Communication published an article regarding immunization propaganda in the US: “Dr. Nash has enlisted the cinematograph film. He has noticed that American films illustrating immunization never show the instant of puncture, so that spectators are left to infer something disagreeable. But in the 16 mm. film taken by Kodak Ltd. in his own clinic the whole operation is photographed, and although now and then there may be a grimace or a hint of nervousness, there is certainly no crying or resistance. The children in the film, quite unrehearsed, behave perfectly. It is an excellent film and should do much to dispel parental apprehensions.” [ 8 ]
In a 1935 issue of the British Medical Journal , one could read the following advice regarding the use of films in public health: "If the medical profession wanted, for example, to introduce widespread prophylaxis against diphtheria it could take a leaf out of Papworth's book and tell a story on the films that would help to wipe out diphtheria in a few years' time. It is no good trying to persuade people of the rightness of this or that measure of health with the sweet voice of reason. It is the emotional appeal that wins the day; reason follows humbly after. The film, with its forcible assault upon both eye and ear, is a powerful weapon of propaganda. And it could be used with effect for 'putting across' to the public the idea of preventive medicine. Perhaps the Minister of Health, Sir Kingsley Wood , a master of propaganda, might enlist the help of the moving picture in his campaign for the improved health of the people." [ 9 ]
By the 1980s, medical films had become an established vehicle for both medical and patient training. A 1987 issue of the American Journal of Nursing contained a full-page section on Films & Tapes covering broad topics such as aids to daily living for the elderly, working mothers, AIDS, alcohol abuse, and even emotional fitness. [ 10 ]
Today, the concept of healthcare knowledge transfer recognizes that film clips with carefully crafted and verified content, using graphics, animations, and live-action video, can be one of the most efficient ways of transferring knowledge with clarity and speed to both lay-people (patients, families, and friends) and healthcare professionals. [ citation needed ]
Traditionally, the transfer of generic (i.e., non-patient-specific) information tends to happen on an ad-hoc, live, one-to-one basis, which is generally less well structured, less comprehensive, and thus less successful, while also being more expensive and less time-efficient. [ citation needed ]
However, in order to be efficient and effective, the quality of the film's content must be very high. A patient's understanding of conditions and treatments is important to their care and well-being, so it is useful for carefully crafted clips to be available 24/7. It also helps patients when attending the all-important one-to-one consultations that are the bedrock of good healthcare. If patients are better informed and better prepared they can focus on their case rather than generic issues.
The literature in this area also contains examples of what can happen when the film content is not particularly well crafted and presented. In 2000, a UK cancer study highlighted the impact of poor-quality video. The authors acknowledge that “not all randomised trials of video education, however, have had similarly consistent results. A randomised trial in patients undergoing colonoscopy reported increased knowledge and satisfaction but failed to demonstrate a reduction in anxiety. A similar study in patients receiving genetic counseling reported similar benefits but again no reduction in anxiety. Two further randomised trials, the first in patients having breast surgery and the second in patients undergoing coronary angioplasty , failed to show any improvement in satisfaction or anxiety. The variation in these trial results suggests that, like all educational materials, the quality of the content is paramount and how it is used is vital to success. Involving patients in the development and showing patients recounting their personal experience undoubtedly helps. Using respected TV personalities offers the familiar face of respectability and professionalism. Above all, most studies fail to take advantage of the role which video has to play in continuing the educational process at home with their carers and friends, but instead ask patients to watch it in the unfamiliar environment of the clinic.” [ 11 ]
With the widespread use of the internet since the late 1990s, there is tremendous potential for health-related films to be made available, free at the point of need, over the internet. Moreover, the 21st century sees an expanding integration of social media and mobile devices into our lives, thus enabling an even wider adoption of film-based health knowledge transfer.
A recent slate of long-form films includes 'Outreach', a film about the workings of a specialist spinal injury unit; an award-winning film for new patients with spinal cord injuries; 'Choosing a Wheelchair', [ 12 ] a film about the correct protocol of wheelchair assessment and procurement for individual patients; and several films on cancer survivorship .
The 55-minute production is intended to help improve well-being and long-term outcomes for Britain's 1.2 million wheelchair users. Too many people do not understand the importance of getting the right wheelchair, the right set-up and support, and appropriate pressure relief. UK provision is patchy; indeed there is, surprisingly, no qualification requirement for a wheelchair services provider/specifier in the National Health Service (NHS). [ citation needed ]
The film includes leading experts and practitioners in this area, plus case studies, and even an introductory history of the wheelchair. It thus provides a comprehensive and holistic overview of best practice and assessment, and is supported by many voices of experience.
For the NHS, the long-term cost benefits of proper provision in an aging population are very considerable.
Newly injured spinal cord injury (SCI) patients are anxious about what is happening to them and the related long-term implications. When a patient is admitted to hospital with an SCI it is traumatic for their family, friends and loved ones too. One challenge faced by clinicians is communicating with a patient's ‘advocates’, people who may know little of spinal cord injury, the care process, or the problems of prognosis. They may be understandably overwhelmed, and they want answers: “doctor, will he/she walk again?”. [ citation needed ]
Inspired by patient feedback, award-winning director Marcus Dillistone created the film “From Darkness Into Light” [ 13 ] to help explain the core issues of SCI and to provide patient insight. The film is designed not to replace face-to-face discussion, but to complement it. [ citation needed ]
The film was conceived as a ‘prescriptive’ communications tool in hospitals, with the decision of when (and if) to introduce the film to the patient or their advocates being made by the clinical team.
When a person is diagnosed with cancer, treatment is often the focus of discussion among patients, health care providers and family. However, the next phase of care, cancer survivorship , is much less discussed. The 2005 Institute of Medicine (IOM) report “From Cancer Patient to Cancer Survivor: Lost in Transition” [ 14 ] raised awareness of the issue. Survivors of cancer face unique challenges, while health care providers do not always have the time, resources and training to provide counseling and assistance. [ citation needed ]
A 2005 randomized controlled study, the Moving Beyond Cancer Trial, [ 15 ] showed that a peer-modeling video used as a psychoeducation tool is more effective than print materials in the recovery of energy/vitality in post-treatment breast cancer patients. This 23-minute film, now available on the National Cancer Institute website , addressed re-entry challenges in four life domains: physical health, emotional well-being, interpersonal relations, and life perspectives. Designed to promote adaptive peer modeling, the film observes four breast cancer survivors as they describe their experience in each of the four domains, as well as the active coping skills they used to meet associated challenges. The film also includes commentary by an oncologist specializing in breast cancer on the re-entry experience and on active methods for approaching problems during re-entry. [ citation needed ]
Since then, many video resources have been made available on the internet to cancer survivors and their care providers. Examples include a general education video on the issues of cancer survivorship produced by the IOM, and a video series by Living Beyond Breast Cancer , a national education and support organization, addressing specific and sometimes sensitive issues during breast cancer survivorship. [ 16 ]
|
https://en.wikipedia.org/wiki/Cinematography_in_healthcare
|
Cinemeducation is the use of film in medical education . The term was originally coined by Matthew Alexander, Hall, and Pettice in the journal Family Medicine in 1994 [ 1 ] and later used by Matthew Alexander, Anna Pavlov, and Patricia Lenahan in their text of the same title. [ 2 ] Cinemeducation emphasises the psychosocial aspects of medicine. [ 3 ] It has been used in teaching family systems theory , [ 4 ] end-of-life care , [ 5 ] medical professionalism and medical ethics , [ 6 ] and in psychiatry [ 7 ] [ 8 ] and mental health services. [ 9 ]
This article relating to education is a stub . You can help Wikipedia by expanding it .
This medical article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Cinemeducation
|
Circulating tumor DNA (ctDNA) is tumor -derived fragmented DNA in the bloodstream that is not associated with cells. ctDNA should not be confused with cell-free DNA ( cfDNA ), a broader term which describes DNA that is freely circulating in the bloodstream, but is not necessarily of tumor origin. Because ctDNA may reflect the entire tumor genome , it has gained traction for its potential clinical utility; " liquid biopsies " in the form of blood draws may be taken at various time points to monitor tumor progression throughout the treatment regimen. [ 1 ] [ 2 ]
Recent studies have laid the foundation for inferring gene expression from cfDNA (and ctDNA), with EPIC-seq emerging as a notable advancement. [ 3 ] This method has substantially raised the bar for the noninvasive inference of expression levels of individual genes, thereby augmenting the assay's applicability in disease characterization, histological classification, and monitoring treatment efficacy. [ 3 ] [ 4 ] [ 5 ]
ctDNA originates directly from the tumor or from circulating tumor cells (CTCs), [ 6 ] which are viable, intact tumor cells that shed from primary tumors and enter the bloodstream or lymphatic system. The precise mechanism of ctDNA release is unclear. The biological processes postulated to be involved in ctDNA release include apoptosis and necrosis from dying cells, or active release from viable tumor cells. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] Studies in both humans (healthy individuals and cancer patients) [ 12 ] and xenografted mice [ 13 ] show that fragmented cfDNA is predominantly 166 bp long, which corresponds to the length of DNA wrapped around a nucleosome plus a linker. Fragmentation of this length might be indicative of apoptotic DNA fragmentation , suggesting that apoptosis may be the primary method of ctDNA release. The fragmentation of cfDNA is altered in the plasma of cancer patients. [ 14 ] [ 15 ] In healthy tissue, infiltrating phagocytes are responsible for clearance of apoptotic or necrotic cellular debris, which includes cfDNA. [ 16 ] ctDNA is present only at low levels in healthy individuals, but higher levels of ctDNA can be detected in cancer patients with increasing tumor size. [ 17 ] This possibly occurs due to inefficient immune cell infiltration to tumor sites, which reduces effective clearance of ctDNA from the bloodstream. [ 16 ] Comparison of mutations in ctDNA and DNA extracted from primary tumors of the same patients revealed the presence of identical cancer-relevant genetic changes. [ 18 ] [ 19 ] This led to the possibility of using ctDNA for earlier cancer detection and treatment follow-up. [ 20 ]
When blood is collected in EDTA tubes and stored, the white blood cells begin to lyse and release genomic wild-type DNA into the sample, typically in quantities many-fold higher than those of ctDNA. [ 21 ] This makes detection of mutations or other ctDNA biomarkers more difficult. [ 22 ] The use of commercially available cell-stabilisation tubes can prevent or delay the lysis of white cells, thereby reducing the dilution effect on the ctDNA. [ 23 ] Sherwood et al. demonstrated superior detection of KRAS mutations in matched samples collected in both EDTA K3 and Streck BCT tubes. [ 23 ] The advantages of cell-stabilisation tubes can be realised in situations where blood cannot be processed to plasma immediately.
Other procedures can also reduce the amount of "contaminating" wild-type DNA and make detection of ctDNA more feasible. [ 23 ]
The main appeal of ctDNA analysis is that it is extracted in a non-invasive manner through blood collection. Acquisition of cfDNA or ctDNA typically requires collection of approximately 3 mL of blood into EDTA -coated tubes. The use of EDTA is important to reduce coagulation of blood. The plasma and serum fractions of blood can be separated through a centrifugation step. ctDNA or cfDNA can be subsequently extracted from these fractions. Although serum tends to have greater levels of cfDNA, this is primarily attributed to DNA from lymphocytes. [ 25 ] High levels of contaminating cfDNA are suboptimal because they can decrease the sensitivity of ctDNA detection. Therefore, the majority of studies use plasma for ctDNA isolation. The plasma is then processed again by centrifugation to remove residual intact blood cells, and the supernatant is used for DNA extraction, which can be performed using commercially available kits. [ citation needed ]
The analysis of ctDNA after extraction requires the use of various amplification and sequencing methods. These methods can be separated into two main groups based on whether the goal is to interrogate all genes in an untargeted approach, or if the goal is to monitor specific genes and mutations in a targeted approach. [ citation needed ]
Whole-genome or whole-exome sequencing approaches may be necessary to discover new mutations in tumor DNA while monitoring disease burden or tracking drug resistance. [ 26 ] Untargeted approaches are also useful in research to observe tumor heterogeneity or to discover new drug targets. However, while untargeted methods may be necessary in certain applications, they are more expensive and have lower resolution. This makes it difficult to detect rare mutations, or to work in situations where ctDNA levels are low (such as minimal residual disease). Furthermore, there can be problems distinguishing between DNA from tumor cells and DNA from normal cells using a whole-genome approach. [ citation needed ]
Whole genome or exome sequencing typically use high throughput DNA sequencing technologies . Limiting the sequencing to only the whole exome instead can decrease expense and increase speed, but at the cost of losing information about mutations in the non-coding regulatory regions of DNA. [ 27 ] While simply looking at DNA polymorphisms through sequencing does not differentiate DNA from tumor or normal cells, this problem can be resolved by comparing against a control sample of normal DNA (for example, DNA obtained through a buccal swab .) Importantly, whole genome and whole exome sequencing are useful for initial mutation discovery. This provides information for the use of more sensitive targeted techniques, which can then be used for disease monitoring purposes.
Whole-genome sequencing enables recovery of the structural properties of cfDNA: the size of fragments and their fragmentation patterns. These unique patterns can be an important source of information to improve the detection of ctDNA or to localize the tissue of origin of these fragments. [ 28 ] Size-selection of short fragments (<150 bp) with in vitro or in silico methods can improve the recovery of mutations and copy number aberrations. [ 15 ]
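As an illustration of the in silico size-selection step, the sketch below (Python) filters a list of fragment lengths at a 150 bp cutoff. The function name and toy data are hypothetical; a real pipeline would derive fragment lengths from aligned paired-end reads.

```python
# In silico size selection of cfDNA fragments (illustrative sketch).
# Tumor-derived cfDNA is enriched among fragments shorter than the
# ~166 bp mononucleosomal peak, so filtering below 150 bp enriches
# for ctDNA before downstream mutation or copy-number analysis.

def size_select(fragment_lengths, cutoff=150):
    """Keep only fragments shorter than the cutoff (in base pairs)."""
    return [length for length in fragment_lengths if length < cutoff]

# Toy data: most fragments sit near the 166 bp nucleosomal peak.
lengths = [166, 142, 170, 134, 166, 149, 180]
selected = size_select(lengths)
print(selected)                                   # [142, 134, 149]
print(f"retained fraction: {len(selected) / len(lengths):.2f}")
```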
This method was originally developed by the laboratory of Bert Vogelstein , Luis Diaz, and Victor Velculescu at Johns Hopkins University . [ 29 ] Unlike normal karyotyping , where a dye is used to stain chromosomal bands in order to visualize the chromosomes, digital karyotyping uses DNA sequences of loci throughout the genome in order to calculate copy number variation . [ 29 ] Copy number variations are common in cancers: loss of heterozygosity of a gene may lead to decreased function due to lower expression, while duplication of a gene leads to overexpression.
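The copy-number arithmetic behind this idea can be sketched minimally: per-bin read depths from a tumor sample are compared against a matched normal sample and rescaled to a diploid baseline. This is a simplified illustration of the general principle, not the published digital karyotyping algorithm, and all names and numbers are invented.

```python
from statistics import median

# Simplified copy-number estimation from binned read depths
# (illustrative; the published method counts sequence tags at
# defined loci and applies more careful normalization).

def estimate_copy_number(tumor_counts, normal_counts, ploidy=2):
    """Per-bin copy number: depth ratio, rescaled so the median bin
    reports the baseline ploidy (assumes most bins are copy-neutral)."""
    ratios = [t / n for t, n in zip(tumor_counts, normal_counts)]
    baseline = median(ratios)
    return [ploidy * r / baseline for r in ratios]

tumor  = [100, 210, 95, 48]   # toy data: bin 2 gained, bin 4 lost
normal = [100, 100, 100, 100]
print([round(cn, 1) for cn in estimate_copy_number(tumor, normal)])
# -> [2.1, 4.3, 1.9, 1.0]
```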
After the whole genome is sequenced using a high throughput sequencing method, such as Illumina HiSeq, personalized analysis of rearranged ends (PARE) is applied to the data to analyze chromosomal rearrangements and translocations. This technique was originally designed to analyze solid tumor DNA but was modified for ctDNA applications. [ 29 ]
Proper epigenetic marking is essential for normal gene expression and cell function, and aberrant alterations in epigenetic patterns are a hallmark of cancer. [ 30 ] A normal epigenetic status is maintained in a cell at least in part through DNA methylation . [ 31 ] Measuring aberrant methylation patterns in ctDNA is possible due to stable methylation of regions of DNA referred to as “ CpG islands ”. Methylation of ctDNA can be detected through bisulfite treatment . Bisulfite treatment chemically converts unmethylated cytosines into uracil while leaving methylated cytosines unmodified. The DNA is subsequently sequenced, and any alterations to the DNA methylation pattern can be identified. DNA hydroxymethylation is a similarly associated mark that has been shown to be a predictive marker of healthy versus diseased conditions in cfDNA, including cancer. [ 32 ] [ 33 ]
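A minimal sketch of how methylation calls fall out of bisulfite data: comparing a converted read against the reference, a retained C marks a methylated cytosine, while a C-to-T change marks an unmethylated one. Function and variable names are illustrative; real callers work on aligned reads and aggregate over many molecules.

```python
# Calling methylation from bisulfite-converted reads (illustrative).
# Bisulfite converts unmethylated C to U (sequenced as T), while
# methylated C remains C; comparing a read to the reference
# therefore reveals the methylation state at each cytosine.

def call_methylation(reference, bisulfite_read):
    """Return (position, methylated?) for each reference cytosine."""
    calls = []
    for i, (ref_base, read_base) in enumerate(zip(reference, bisulfite_read)):
        if ref_base == "C":
            if read_base == "C":
                calls.append((i, True))    # protected: methylated
            elif read_base == "T":
                calls.append((i, False))   # converted: unmethylated
    return calls

ref  = "ACGTCGACGA"
read = "ATGTCGATGA"  # C at positions 1 and 7 converted; C at 4 retained
print(call_methylation(ref, read))  # [(1, False), (4, True), (7, False)]
```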
In a targeted approach, sequencing of ctDNA can be directed towards a genetic panel constructed based on mutational hotspots for the cancer of interest. This is especially important for informing treatment in situations where mutations are identified in druggable targets. [ 27 ] Personalizing targeted analysis of ctDNA to each patient is also possible by combining liquid biopsies with standard primary tissue biopsies. Whole genome or whole exome sequencing of the primary tumor biopsy allows for discovery of genetic mutations specific to a patient's tumor, and can be used for subsequent targeted sequencing of the patient's ctDNA. The highest sensitivity of ctDNA detection is accomplished through targeted sequencing of specific single nucleotide polymorphisms (SNPs). Commonly mutated genes, such as oncogenes, which typically have hotspot mutations, are good candidates for targeted sequencing approaches. Conversely, most tumor suppressor genes have a wide array of possible loss of function mutations throughout the gene, and as such are not suitable for targeted sequencing. [ citation needed ]
Targeted approaches have the advantage of amplifying ctDNA through polymerase chain reactions (PCR) or digital PCR . This is especially important when analyzing ctDNA not only because there are relatively low levels of DNA circulating in the bloodstream, but also because ctDNA makes up a small proportion of the total cell-free DNA extracted. [ 27 ] Therefore, amplification of regions of interest can drastically improve sensitivity of ctDNA detection. However, amplification through PCR can introduce errors given the inherent error rate of DNA polymerases. Errors introduced during sequencing can also decrease the sensitivity of detecting ctDNA mutations. [ citation needed ]
Droplet digital PCR (ddPCR) is derived from the digital polymerase chain reaction , originally named by Bert Vogelstein ’s group at Johns Hopkins University . Droplet digital PCR utilizes a droplet generator to partition single pieces of DNA into droplets using an oil/water emulsion. Individual polymerase chain reactions then occur in each droplet, using selected primers against regions of ctDNA, and proceed to endpoint. The presence of the sequences of interest is measured by fluorescent probes, which bind to the amplified region. ddPCR allows for highly quantitative assessment of allele and mutant frequencies in ctDNA but is limited by the number of fluorescent probes that can be used in one assay (up to 5). [ 34 ] The sensitivity of the assay can vary depending on the amount of DNA analyzed and is around 1 in 10,000. [ 34 ] Specificity can be augmented through the use of either minor groove binding (MGB) modified probes or alternatives such as locked nucleic acids (LNAs).
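The quantitative readout of ddPCR rests on Poisson statistics: because a droplet can hold more than one template molecule, the mean number of copies per droplet is estimated as the negative logarithm of the fraction of negative droplets. The sketch below applies this standard arithmetic to hypothetical droplet counts; the channel assignments and numbers are invented for illustration.

```python
import math

# Poisson-corrected quantification from ddPCR droplet counts
# (standard ddPCR arithmetic; counts and channels are illustrative).

def copies_per_droplet(positive, total):
    """Mean copies per droplet, correcting for multiply occupied droplets."""
    negative_fraction = (total - positive) / total
    return -math.log(negative_fraction)

total_droplets = 20000
mutant_positive = 35       # mutant-probe channel (toy numbers)
wildtype_positive = 11000  # wild-type-probe channel (toy numbers)

lam_mut = copies_per_droplet(mutant_positive, total_droplets)
lam_wt = copies_per_droplet(wildtype_positive, total_droplets)
fraction = lam_mut / (lam_mut + lam_wt)
print(f"mutant allele fraction ~ {fraction:.4f}")  # ~0.0022
```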
Beads, emulsification, amplification, and magnetics (BEAMing) is a technique that builds upon droplet digital PCR in order to identify mutations in ctDNA using flow cytometry. [ 35 ] After ctDNA is extracted from blood, PCR is performed with primers designed to target the regions of interest. These primers also contain specific DNA sequences, or tags. The amplified DNA is mixed with streptavidin-coated magnetic beads and emulsified into droplets. Biotinylated primers designed to bind to the tags are used to amplify the DNA. Biotinylation allows the amplified DNA to bind to the magnetic beads, which are coated with streptavidin. After the PCR is complete, the DNA-bound beads are separated using a magnet. The DNA on the beads is then denatured and allowed to hybridize with fluorescent oligonucleotides specific to each DNA template. The resulting bead-DNA complexes are then analyzed using flow cytometry. This technique is able to capture allele and mutation frequencies due to coupling with ddPCR. However, unlike with ddPCR, a larger number of DNA sequences can be interrogated due to the flexibility of using fluorescently bound probes. Another advantage of this system is that the DNA isolated can also be used for downstream sequencing. [ 36 ] Sensitivity is 1.6 in 10^4 to 4.3 in 10^5 . [ 34 ]
Cancer personalized profiling by deep sequencing (CAPP-Seq) was originally described by Ash Alizadeh and Maximilian Diehn's groups at Stanford University . This technique uses biotinylated oligonucleotide selector probes to target sequences of DNA relevant to ctDNA detection. [ 37 ] Publicly available cancer databases were used to construct a library of probes against recurrent mutations in cancer by calculating their recurrence index. The protocol was optimized for the low DNA levels observed in ctDNA collection. Then the isolated DNA undergoes deep sequencing for increased sensitivity. This technique allows for the interrogation of hundreds of DNA regions. The ctDNA detection sensitivity of CAPP-Seq is reported to be 2.5 molecules in 1,000,000. [ 38 ]
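The probe-selection idea behind the recurrence index can be sketched as ranking genomic regions by the number of distinct patients carrying mutations there, per kilobase of region length. This is a simplified illustration with invented region names and lengths, not the published CAPP-Seq design procedure.

```python
from collections import defaultdict

# Ranking candidate selector regions by a patients-per-kilobase
# recurrence index (illustrative sketch; region names, lengths,
# and patient IDs below are invented toy data).

def recurrence_index(mutations, region_lengths_bp):
    """mutations: list of (patient_id, region) pairs."""
    patients_per_region = defaultdict(set)
    for patient, region in mutations:
        patients_per_region[region].add(patient)
    return {
        region: len(patients) / (region_lengths_bp[region] / 1000)
        for region, patients in patients_per_region.items()
    }

muts = [("p1", "EGFR_ex19"), ("p2", "EGFR_ex19"), ("p3", "EGFR_ex19"),
        ("p1", "TP53_ex5"), ("p2", "KRAS_ex2")]
lengths = {"EGFR_ex19": 120, "TP53_ex5": 200, "KRAS_ex2": 120}
ri = recurrence_index(muts, lengths)
print(sorted(ri.items(), key=lambda kv: -kv[1]))  # rank candidates
```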
Tagged amplicon deep sequencing (TAM-Seq) allows targeted sequencing of entire genes to detect mutations in ctDNA. [ 39 ] First, a general amplification step is performed using primers that span the entire gene of interest in 150–200 bp sections. Then, a microfluidics system is used to attach adaptors with a unique identifier to each amplicon, in order to further amplify the DNA in parallel singleplex reactions. This technique was shown to successfully identify mutations scattered in the TP53 tumor suppressor gene in advanced ovarian cancer patients. The sensitivity of this technique is 1 in 50.
Safe-sequencing (Safe-Seq) was originally described by Bert Vogelstein and his group at Johns Hopkins University . Safe-Seq decreases the error rate of massively parallel sequencing in order to increase the sensitivity to rare mutants. [ 40 ] It achieves this by addition of a unique identifier (UID) sequence to each DNA template. The DNA is then amplified using the added UIDs and sequenced. All DNA molecules with the same UID (a UID family) should have the same reported DNA sequence since they were amplified from one molecule. However, mutations can be introduced through amplification, or incorrect base assignments may be called in the sequencing and analysis steps. The presence of the UID allows these methodology errors to be separated from true mutations of the ctDNA. A mutation is considered a ‘supermutant’ if 95% of the sequenced reads are in agreement. The sensitivity of this approach is 9 in 1 million. [ 34 ]
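The UID-family logic described above can be sketched in a few lines: reads are grouped by UID, and a family's base call is kept only when its members agree at or above a threshold (95% here, following the description above). The reads and UIDs below are toy data, and the function names are illustrative.

```python
from collections import defaultdict

# Grouping reads into UID families and calling consensus bases
# ('supermutants') to suppress PCR and sequencing errors.

def call_supermutants(reads, threshold=0.95):
    """reads: list of (uid, base_call) tuples for one genomic position.
    A family's base is accepted only if >= threshold of members agree."""
    families = defaultdict(list)
    for uid, base in reads:
        families[uid].append(base)
    consensus = {}
    for uid, bases in families.items():
        top = max(set(bases), key=bases.count)
        if bases.count(top) / len(bases) >= threshold:
            consensus[uid] = top  # consensus template base
        # Families without consensus are discarded as methodology errors.
    return consensus

reads = [("u1", "A")] * 20 + [("u2", "A")] * 19 + [("u2", "G")] \
      + [("u3", "T")] * 20  # u3 carries a real (super)mutation
print(call_supermutants(reads))  # {'u1': 'A', 'u2': 'A', 'u3': 'T'}
```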
Duplex sequencing is an improvement on the single UIDs added in the Safe-Seq technique. [ 41 ] In duplex sequencing, randomized double-stranded DNA sequences act as unique tags and are attached to an invariant spacer. Tags are attached to both ends of a DNA fragment (α and β tags), which results in two unique templates for PCR: one strand with an α tag on the 5’ end and a β tag on the 3’ end, and the other strand with a β tag on the 5’ end and an α tag on the 3’ end. These DNA fragments are then amplified with primers against the invariant sequences of the tags. The amplified DNA is sequenced and analyzed. DNA fragments with the duplex adaptors are compared, and mutations are only accepted if there is a consensus between both strands. This method takes into account both errors from sequencing and errors from early-stage PCR amplification. The sensitivity of the approach to discovering mutants is 1 in 10^7.
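A minimal sketch of the duplex rule: reads are keyed by their tag pair and strand orientation, and a call survives only when both orientations of the same original molecule report the same base. The tuple layout, tag names, and orientation labels are illustrative assumptions; real pipelines first build per-strand consensus sequences from many reads.

```python
from collections import defaultdict

# Duplex consensus (illustrative): accept a base only when the two
# strand families of the same tagged molecule agree.

def duplex_calls(reads):
    """reads: list of (alpha_tag, beta_tag, strand, base) tuples,
    where strand is 'ab' or 'ba' for the two orientations."""
    families = defaultdict(lambda: {"ab": [], "ba": []})
    for alpha, beta, strand, base in reads:
        families[(alpha, beta)][strand].append(base)
    calls = {}
    for molecule, strands in families.items():
        ab, ba = strands["ab"], strands["ba"]
        # Require reads from both strands and full agreement between them.
        if ab and ba and set(ab) == set(ba) and len(set(ab)) == 1:
            calls[molecule] = ab[0]
    return calls

reads = [
    ("t1", "t2", "ab", "T"), ("t1", "t2", "ba", "T"),  # true variant
    ("t3", "t4", "ab", "G"), ("t3", "t4", "ba", "A"),  # early PCR error
]
print(duplex_calls(reads))  # {('t1', 't2'): 'T'}
```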
Integrated digital error suppression (iDES) improves CAPP-Seq analysis of ctDNA in order to decrease error and therefore increase sensitivity of detection. [ 38 ] Reported in 2016, iDES combines CAPP-Seq with duplex barcoding sequencing technology and with a computational algorithm that removes stereotypical errors associated with the CAPP-Seq hybridization step. The method also integrates duplex sequencing where possible, and includes methods for more efficient duplex recovery from cell free DNA. The sensitivity of this improved version of CAPP-Seq is 4 in 100,000 copies.
Whole-genome sequencing investigations have been performed on ctDNA from patients with treatment-resistant prostate cancer (the vast majority, and in some cases, metastatic) or bladder cancer, as well as from control patients who did not present this DNA; these studies characterized somatic mutations and structural rearrangements in the genomes.
This novel and promising technique has provided information on resistance to treatment with androgen receptor signaling inhibitors , intratumoral heterogeneity (through phylogenetic evolution and molecular chronology), chromosomal instability , and the contribution of ctDNA to metastasis through global transcriptomic patterns (taking into account nucleosomes present at transcription start sites (TSSs) and AR-binding sites (ARBs)). In this way, the genomic and transcriptomic evolution of ctDNA can be observed in living patients who are developing resistance to treatment; ctDNA sequencing is therefore fundamental for identifying clinically relevant differences in the cancer phenotype and for seeing how therapy is affecting patients. Furthermore, the relative homogeneity in driver gene alterations among metastases suggests that genomic and functional alterations in prostate cancer are shared between ctDNA and tissue.
This makes ctDNA a powerful emerging tool for the detection of genetic mutations at the genomic scale in patients with metastatic cancer, allowing the clinical relevance of the clonal composition of these tumors to be observed and cancer control to be better understood. This subclonal reconstruction from ctDNA by whole-genome sequencing poses a unique set of challenges and opportunities for scientific research in oncology. Furthermore, serial ctDNA sampling increases the dimensionality of the data and reveals treatment-driven selection for androgen receptor augmentation.
Further work is needed to understand how metastatic location and size, in relation to tumor burden, influence circulating tumor DNA and the choice of new techniques to select other lesions that reflect clinically dominant disease. [ 42 ]
One of the challenges in using ctDNA as a cancer biomarker is whether ctDNA can be distinguished from cfDNA derived from normal cells. cfDNA is released by non-malignant cells during normal cellular turnover, but also during procedures such as surgery , radiotherapy , or chemotherapy . It is thought that leukocytes are the primary contributors to cfDNA in serum. [ 27 ]
The clinical utility of ctDNA for the detection of primary disease is in part limited by the sensitivity of current technology to detect small tumors with low levels of ctDNA present and a priori unknown somatic mutations. [ 17 ] [ 34 ]
Evidence of disease by traditional imaging methods, such as CT , PET or MRI may be absent after tumor resection. Therefore, ctDNA analysis poses a potential avenue to detect minimal residual disease (MRD), and thus the possibility of tumor recurrence, in cases where bulk tumors are absent by conventional imaging methods. [ 17 ] A comparison of MRD detection by CT imaging compared to ctDNA has been previously done in individuals with stage II colon cancer; in this study, researchers were able to detect ctDNA in individuals who showed no sign of clinical malignancy by a CT scan, suggesting that ctDNA detection has greater sensitivity to assess MRD. [ 27 ] However, the authors acknowledge that ctDNA analysis is not without limitations; plasma samples collected post-operatively were only able to predict recurrence at 36 months in 48% of cases. [ 27 ] Subsequently, ctDNA assays have been developed for both colorectal cancer [ 43 ] and melanoma . [ 44 ]
These approaches are now used in the detection of minimal residual disease using both tumor-informed and tumor-agnostic approaches. [ 45 ]
The question of whether measurement of the amount or qualities of ctDNA could be used to determine outcomes in people with cancer has been a subject of study. As of 2015 this was very uncertain. [ 46 ] Although some studies have shown a trend of higher ctDNA levels in people with high stage metastatic cancer, ctDNA burden does not always correlate with traditional cancer staging. [ 34 ] As of 2013 it appeared unlikely that ctDNA would be of clinical utility as a sole predictor of prognosis. [ 47 ]
The emergence of drug-resistant tumors due to intra- and inter-tumoral heterogeneity is an issue in treatment efficacy. A minor genetic clone within the tumor can expand after treatment if it carries a drug-resistant mutation. Initial biopsies can miss these clones due to low frequency or spatial separation of cells within the tumor. For example, since a biopsy only samples a small part of the tumor, clones that reside in a different location may go unnoticed. This can mislead research that focuses on studying the role of tumor heterogeneity in cancer progression and relapse. The use of ctDNA in research can alleviate these concerns because it could provide a more representative 'snapshot' of the genetic diversity of cancer at both primary and metastatic sites. For example, ctDNA has been shown to be useful in studying the clonal evolution of a patient's cancer before and after treatment regimens. [ 48 ] Early detection of cancer remains challenging, but recent progress in the analysis of the epigenetic features and fragmentation patterns of cfDNA has improved the sensitivity of liquid biopsy. [ 28 ] Furthermore, ctDNA analysis is an emerging tool for understanding the clonal composition of metastatic tumors, detecting different mutations on a genomic scale, and studying the subclonal diversity that affects the prognosis of the disease, as different resistant phenotypes can be found along with the appearance of new mechanisms of genomic and transcriptomic resistance to treatment. [ 42 ]
Implementation of ctDNA in clinical practice is largely hindered by the lack of standardized methods for ctDNA processing and analysis. Standardization of methods for sample collection (including time of collection), downstream processing (DNA extraction and amplification), quantification and validation must be established before ctDNA analysis can become a routine clinical assay. Furthermore, creation of a panel of ‘standard’ tumor-associated biomarkers may be necessary given the resolution of current ctDNA sequencing and detection methods. Sequencing tumor-specific aberrations from plasma samples may also help exclude contaminating cfDNA from analysis; elevated levels of cfDNA from normal cells may be attributed to non-cancer-related causes. [ 27 ] These sequencing techniques can also determine the clonal evolution of cancer, tumor heterogeneity and the drug resistance mechanisms involved in cancer. [ 42 ]
|
https://en.wikipedia.org/wiki/Circulating_tumor_DNA
|
A circulating tumor cell ( CTC ) is a cancer cell from a primary tumor that has shed into the blood of the circulatory system , or the lymph of the lymphatic system . [ 1 ] CTCs are carried around the body to other organs where they may leave the circulation and become the seeds for the subsequent growth of secondary tumors . [ 2 ] [ 1 ] This is known as metastasis , responsible for most cancer-related deaths. [ 3 ]
The detection and analysis of CTCs can assist early patient prognoses and determine appropriate tailored treatments. [ 4 ] Currently, there is one FDA-approved method for CTC detection, CellSearch , which is used to diagnose breast , colorectal and prostate cancer. [ 5 ]
The detection of CTCs, or liquid biopsy , presents several advantages over traditional tissue biopsies. It is non-invasive, can be used repeatedly, and provides more useful information on metastatic risk, disease progression, and treatment effectiveness. [ 6 ] [ 7 ] For example, analysis of blood samples from cancer patients has found a propensity for increased CTC detection as the disease progresses. [ 8 ] Blood tests are easy and safe to perform and multiple samples can be taken over time. By contrast, analysis of solid tumors necessitates invasive procedures that might limit patient compliance. The ability to monitor disease progression over time could facilitate appropriate modification to a patient's therapy, potentially improving their prognosis and quality of life. One important aspect of the ability to prognose the future progression of the disease is the elimination (at least temporarily) of the need for surgery when repeated CTC counts are low and not increasing; the obvious benefits of avoiding surgery include avoiding the risk related to the innate tumorigenicity of cancer surgeries. To this end, technologies with the requisite sensitivity and reproducibility to detect CTCs in patients with metastatic disease have recently been developed. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] On the other hand, CTCs are very rare, often present as only a few cells per milliliter of blood, which makes their detection challenging. In addition, they often express a variety of markers which vary from patient to patient, making it difficult to develop techniques with high sensitivity and specificity .
CTCs that originate from carcinomas (cancers of epithelial origin, which are the most prevalent) can be classified according to the expression of epithelial markers, as well as their size and whether they are apoptotic. In general, CTCs are anoikis -resistant, which means that they can survive in the bloodstream without attaching to a substrate. [ 17 ]
Circulating tumor cells are most often present in clusters. [ 21 ] CTC clusters are aggregates of two or more circulating tumor cells (CTCs) bound together. These clusters can consist of traditional, small, or cytokeratin-negative CTCs and carry cancer-specific biomarkers that distinguish them from other cells in circulation. Studies have shown that CTC clusters are associated with increased metastatic potential and poor prognosis. For example, research has demonstrated that patients with prostate cancer who have only single CTCs exhibit an eight-fold longer mean survival rate compared to those with CTC clusters. Similar findings have been reported for colorectal cancer as well. [ 22 ] [ 23 ]
There are two types of circulating tumor cell cluster. One that consists of cancer cells only is termed homotypic . A CTC cluster that also incorporates other cells, including white blood cells, fibroblasts, endothelial cells (i.e., cells that line the interior surface of blood vessels), and platelets, is termed heterotypic . [ 24 ] Heterotypic clusters are also known as microemboli . It has been suggested that these microemboli might enhance metastatic potential. [ 21 ]
The cancer exodus hypothesis suggests that CTC clusters remain intact throughout the metastatic process, rather than dissociating into single cells, which was previously assumed. According to this hypothesis, the clusters enter the bloodstream, travel as a cohesive unit, and exit circulation at distant metastatic sites without breaking apart. This allows the clusters to retain their multicellularity, enhancing their metastatic efficiency. The hypothesis posits that the survival advantage provided by intercellular support within clusters increases their metastatic potential compared to single CTCs. [ 22 ] [ 24 ]
CTC clusters exhibit distinct gene expression profiles, which confer resistance to certain cancer therapies, making them more resilient than individual tumor cells. Their ability to remain multicellular throughout metastasis may explain their superior survival and metastatic potential. [ 25 ]
Research on CTC clusters and their role in metastasis continues to evolve, with the cancer exodus hypothesis offering a new perspective on how these clusters contribute to cancer progression. Detecting and analyzing CTC clusters provides critical prognostic information and could help guide therapeutic decisions for cancer patients. [ 26 ]
The detection of CTCs may have important prognostic and therapeutic implications but because their numbers can be very small, these cells are not easily detected. [ 27 ] It is estimated that among the cells that have detached from the primary tumor, only 0.01% can form metastases. [ 28 ]
Circulating tumor cells are found in frequencies on the order of 1–10 CTCs per mL of whole blood in patients with metastatic disease. [ 29 ] For comparison, a mL of blood contains a few hundred circulating endothelial cells (CECs), a few million white blood cells, and a billion red blood cells. This low frequency, together with the difficulty of identifying cancerous cells, means that a key component of understanding the biological properties of CTCs is the availability of technologies and approaches capable of isolating 1 CTC per mL of blood, either by enrichment, or better yet with enrichment-free assays that identify all CTC subtypes in sufficiently high definition to satisfy diagnostic pathology image-quality requirements in patients with a variety of cancer types. [ 19 ] To date, CTCs have been detected in several epithelial cancers (breast, prostate, lung, and colon), [ 30 ] [ 31 ] [ 32 ] [ 33 ] and clinical evidence indicates that patients with metastatic lesions are more likely to have CTCs isolated.
As of 2011, CTCs were usually captured from the vasculature by using specific antibodies able to recognize a specific tumoral marker (usually EpCAM ); however, this approach is biased by the need for sufficient expression of the selected protein on the cell surface, which is necessary for the enrichment step. Moreover, since EpCAM and other proteins (e.g. cytokeratins ) are not expressed in some tumors and can be downregulated during the epithelial-to-mesenchymal transition ( EMT ), new enrichment strategies are required. [ 34 ]
First evidence indicates that CTC markers applied in human medicine are conserved in other species. Five of the more common markers, including CK19 , are also useful to detect CTCs in the blood of dogs with malignant mammary tumors. [ 35 ] [ 36 ] Newer approaches, such as IsoFlux or Maintrac, are able to identify more cells out of 7.5 mL of blood. [ 37 ] [ 38 ] In very rare cases, CTCs are present in large enough quantities to be visible on routine blood smear examination. This is referred to as carcinocythemia or carcinoma cell leukemia and is associated with a poor prognosis. [ 39 ]
To date, a variety of research methods have been developed to isolate and enumerate CTCs. [ 40 ] The only U.S. Food and Drug Administration (FDA) cleared methodology for enumeration of CTC in whole blood is the CellSearch system. [ 41 ] Extensive clinical testing done using this method shows that presence of CTCs is a strong prognostic factor for overall survival in patients with metastatic breast, colorectal or prostate cancer. [ 8 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ]
CTCs are pivotal to understanding the biology of metastasis and show promise as a biomarker to noninvasively evaluate tumor progression and response to treatment. However, isolation and characterization of CTCs represent a major technological challenge, since CTCs make up a minute fraction of the total cells in circulating blood: 1–10 CTCs per mL of whole blood compared to a few million white blood cells and a billion red blood cells. [ 48 ] Therefore, the major challenge for CTC researchers is the prevailing difficulty of CTC purification at a level that allows molecular characterization of CTCs. Several methods have been developed to isolate CTCs from peripheral blood; these essentially fall into two categories, biological methods and physical methods, as well as hybrid methods that combine both strategies. Techniques may also be classified based on whether they select CTCs for isolation (positive selection) or whether they exclude all blood cells (negative selection).
Biological methods isolate cells based on highly specific antigen binding, most commonly by monoclonal antibodies for positive selection. Antibodies against tumor-specific biomarkers including EpCAM , HER2 and PSA have been used. The most common technique is magnetic nanoparticle-based separation (immunomagnetic assay) as used in CellSearch or MACS . Other techniques under research include microfluidic separation [ 49 ] and the combination of immunomagnetic assay and microfluidic separation. [ 50 ] [ 51 ] [ 52 ] [ 53 ] With the development of microfabrication technology, microscale magnetic structures have been implemented to provide better control of the magnetic field and assist CTC detection. [ 54 ] [ 55 ] [ 56 ] Oncolytic viruses such as vaccinia viruses [ 57 ] have been developed to detect and identify CTCs. Alternative methods exist which use engineered proteins instead of antibodies, such as the malaria VAR2CSA protein, which binds to oncofetal chondroitin sulfate on the surface of CTCs. [ 58 ] CTCs may also be retrieved directly from the blood by a modified Seldinger technique , as developed by GILUPI GmbH. [ 59 ] [ 60 ] An antibody-coated metal wire is inserted into a peripheral vein and stays there for a defined period (30 min). During this time, CTCs from the blood can bind to the antibodies (currently anti-EpCAM). After the incubation time, the wire is removed, washed, and the native CTCs, isolated from the blood of the patient, can be further analysed. Molecular genetics as well as immunofluorescent staining and several other methods are possible. [ 61 ] [ 62 ] An advantage of this method is the higher blood volume that can be analysed for CTCs (approx. 750 mL in 30 min compared to 7.5 mL of a drawn blood sample).
CellSearch is the only FDA-approved platform for CTC isolation. This method is based on the use of iron nanoparticles coated with a polymer layer carrying biotin analogues and conjugated with antibodies against EpCAM for the capture of CTCs. Isolation is coupled to an analyzer to take images of isolated cells upon their staining with specific fluorescent antibody conjugates.
Blood is sampled in an EDTA tube with an added preservative. Upon arrival in the lab, 7.5 mL of blood is centrifuged and placed in a preparation system. This system first enriches the tumor cells immunomagnetically by means of ferrofluid nanoparticles and a magnet. Subsequently, recovered cells are permeabilized and stained with a nuclear stain, a fluorescent antibody conjugate against CD45 (a leukocyte marker), and cytokeratins 8 , 18 and 19 (epithelial markers). The sample is then scanned on an analyzer which takes images of the nuclear, cytokeratin, and CD45 stains. [ 63 ] To be considered a CTC, a cell must contain a nucleus, be positive for cytoplasmic expression of cytokeratin as well as negative for the expression of the CD45 marker, and have a diameter larger than 5 μm. If the total number of tumor cells meeting these criteria is 5 or more, the blood sample is considered positive. In studies done on prostate, breast and colon cancer patients, the median survival of metastatic patients with positive samples is about half the median survival of metastatic patients with negative samples. The system is characterized by a recovery capacity of 93% and a detection limit of one CTC per 7.5 mL of whole blood. For specific cancer types, alternative methods such as IsoFlux have shown greater sensitivity . [ 64 ]
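The calling rules above translate directly into a small filter. The sketch below encodes them with hypothetical field names; it is an illustration of the stated criteria, not vendor software.

```python
# Encoding the CTC-calling rules described above: a cell must have a
# nucleus, express cytokeratin, lack CD45, and exceed 5 um in
# diameter; a sample with five or more such cells is scored positive.
# Dictionary field names are illustrative, not part of any real API.

def is_ctc(cell):
    """Apply the morphological and marker criteria to one imaged cell."""
    return (cell["has_nucleus"]
            and cell["cytokeratin_positive"]
            and not cell["cd45_positive"]
            and cell["diameter_um"] > 5)

def sample_is_positive(cells, threshold=5):
    """A sample is positive when at least `threshold` cells qualify."""
    return sum(is_ctc(c) for c in cells) >= threshold

cells = [
    {"has_nucleus": True, "cytokeratin_positive": True,
     "cd45_positive": False, "diameter_um": 9.0},   # qualifies as a CTC
    {"has_nucleus": True, "cytokeratin_positive": False,
     "cd45_positive": True, "diameter_um": 12.0},   # leukocyte: excluded
]
print(sum(is_ctc(c) for c in cells), sample_is_positive(cells))  # 1 False
```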
This automated method uses size filtration to enrich larger and less compressible circulating tumor cells from other blood components. The Parsortix system can take in blood samples ranging from 1 mL to 40 mL. A disposable microfluidic cassette with a 6.5 micron gap allows the vast majority of red blood cells and white blood cells to pass through, while larger rare cells, including circulating tumor cells and fetal cells, are captured. Trapped cells can either be automatically stained with antibodies for identification or be released from the cassette for subsequent analysis. These released/harvested cells are alive and can be analyzed by downstream cellular and molecular techniques, as well as cultured. The filtration cassette captures a wide range of different cancer cell types. In May 2022, the Parsortix PC1 system was cleared by the FDA as a medical device for the capture and harvest of circulating tumor cells (CTCs) from metastatic breast cancer patient blood for subsequent analysis.
In addition to the IVD application, the PC1 may be used with the MBC-01 Metastatic Breast Cancer Kit for use in research studies or Lab Developed Tests (LDTs) that have been created and validated in a clinical laboratory.
This method involves technology to separate nucleated cells from red blood cells, which lack a nucleus. All nucleated cells, including normal white blood cells and CTCs, are exposed to fluorescent-tagged antibodies specific for cancer biomarkers. In addition, Epic's imaging system captures pictures of all the cells on the slide (approximately 3 million), records the precise coordinates of each cell, and analyzes each cell for 90 different parameters, including the fluorescence intensity of the four fluorescent markers and 86 different morphological parameters. Epic can also use FISH and other staining techniques to look for abnormalities such as duplications, deletions, and rearrangements. The imaging and analysis technology also allows for the coordinates of every cell on a slide to be known so that a single cell can be retrieved from the slide for analysis using next-generation sequencing. A hematopathology-trained algorithm incorporates numerous morphology measurements as well as expression from cytokeratin and CD45. The algorithm then proposes candidate CTCs that a trained reader confirms. Cells of interest are analyzed for relevant phenotypic and genotypic markers, with regional white blood cells included as negative controls. [ 65 ] Epic's molecular assays measure protein expression and also interrogate genomic abnormalities in CTCs for more than 20 different cancer types.
Maintrac is a diagnostic blood test platform applying microscopic in vitro diagnostic methods to identify rare cells in body fluids and their molecular characteristics. It is based on positive selection using EpCAM-specific antibodies. [ 66 ] Maintrac uses an approach based on microscopic identification of circulating tumor cells. To prevent damage and loss of the cells during the process, Maintrac uses just two steps for identification. In contrast to many other methods, Maintrac does not purify the cells or enrich them, but identifies them within the context of the other blood compounds. To obtain vital cells and to reduce stress on those cells, blood cells are prepared by only one centrifugation step and erythrocyte lysis. Like CellSearch, Maintrac uses an EpCAM antibody. It is, however, not used for enrichment but rather as a fluorescent marker to identify those cells. Together with nuclear staining with propidium iodide , the Maintrac method can distinguish between dead and living cells. Only vital, propidium-excluding, EpCAM-positive cells are counted as potential tumor cells. Only living cells can grow into tumors, so dying EpCAM-positive cells can do no harm. The suspension is analysed by fluorescence microscopy, which automatically counts the events. Simultaneously, event galleries are recorded to verify whether the software found a true living cell and to differentiate it from, for example, skin epithelial cells. Close validation of the method showed that additional antibodies against cytokeratins or CD45 did not offer any advantage. [ 38 ] [ 67 ]
Unlike other methods, Maintrac does not use the single cell count as a prognostic marker; rather, it utilizes the dynamics of the cell count. Rising tumor cell numbers are an important indicator that tumor activity is ongoing. [ 68 ] Decreasing cell counts are a sign of successful therapy. Therefore, Maintrac can be used to verify the success of a chemotherapy [ 38 ] [ 69 ] and to supervise the treatment during hormone or maintenance therapy. [ 70 ] [ 71 ] Maintrac has been used experimentally to monitor cancer recurrence. [ 72 ] [ 73 ] Studies using Maintrac have shown that EpCAM-positive cells can be found in the blood of patients without cancer. [ 74 ] Inflammatory conditions like Crohn's disease also show increased levels of EpCAM-positive cells. Patients with severe skin burns can also carry EpCAM-positive cells in the blood. Therefore, the use of EpCAM-positive cells as a tool for early diagnosis is not optimal.
Physical methods are often filter-based, enabling the capture of CTCs by size rather than by specific epitopes . [ 16 ] ScreenCell is a filtration-based device that allows sensitive and specific isolation of CTCs from human whole blood in a few minutes. [ 75 ] Peripheral blood is drawn and processed within 4 hours with a ScreenCell isolation device to capture CTCs. The captured cells are ready for cell culture or for direct characterization using the ViewRNA in situ hybridization assay. The Parsortix method separates CTCs based on their size and deformability. [ 76 ]
Hybrid methods combine physical separation (by gradients, magnetic fields, etc.) with antibody-mediated cell retrieval. An example of this is a sensitive double gradient centrifugation and magnetic cell sorting detection and enumeration method which has been used to detect circulating epithelial cancer cells in breast cancer patients by negative selection. [ 77 ] The principle of negative selection is based on the retrieval of all blood cells by using a panel of antibodies as well as traditional gradient centrifugation with Ficoll . A similar method known as ISET Test has been employed to detect circulating prostate cancer cells [ 78 ] [ 79 ] [ 80 ] and another technique known as RosetteStep has been used to isolate CTCs from small-cell lung cancer patients. [ 81 ] Similarly, researchers at Massachusetts General Hospital have developed a negative selection method which employs inertial focusing on a microfluidic device . The technique, called CTC-iChip, first removes cells too small to be CTCs, such as red blood cells, and then uses magnetic particles to remove white blood cells. [ 82 ]
Some drugs are particularly effective against cancers which fit certain requirements. For example, Herceptin is very effective in patients who are Her2 positive, but much less effective in patients who are Her2 negative. Once the primary tumor is removed, biopsy of the current state of the cancer through traditional tissue typing is no longer possible. [ 83 ] Often, tissue sections of the primary tumor, removed years prior, are used for typing. Further characterization of CTCs may help determine the current tumor phenotype. FISH assays have been performed on CTCs, as well as determination of IGF-1R , Her2, Bcl-2 , ERG , PTEN , and AR status using immunofluorescence . [ 7 ] [ 84 ] [ 85 ] [ 86 ] [ 87 ] Single-cell-level qPCR can also be performed with the CTCs isolated from blood. [ citation needed ]
The organ tropism of patient-derived CTCs has been investigated in a mouse model. [ 88 ] CTCs isolated from breast cancer patients and expanded in vitro could generate bone, lung, ovary and brain metastases in mice, partially reflecting the secondary lesions found in the corresponding patients. Remarkably, one CTC line, isolated long before the appearance of brain metastasis in the patient, was highly competent to generate brain metastases in mice. This was the first predictive case for brain metastasis and a proof of concept that intrinsic molecular features of metastatic precursors amongst CTCs could provide novel insights into the mechanisms of metastasis.
Morphological appearance is judged by human operators and is therefore subject to large inter-operator variation. [ 89 ] Several CTC enumeration methods exist which use morphological appearance to identify CTCs, and these may apply different morphological criteria. A recent study in prostate cancer showed that many different morphological definitions of circulating tumor cells have similar prognostic value, even though the absolute number of cells found in patients and normal donors varied by more than a factor of ten between different morphological definitions. [ 90 ]
CTCs were observed for the first time in 1869 in the blood of a man with metastatic cancer by Thomas Ashworth, who postulated that "cells identical with those of the cancer itself being seen in the blood may tend to throw some light upon the mode of origin of multiple tumours existing in the same person". A thorough comparison of the morphology of the circulating cells to tumor cells from different lesions led Ashworth to conclude that "One thing is certain, that if they [CTC] came from an existing cancer structure, they must have passed through the greater part of the circulatory system to have arrived at the internal saphena vein of the sound leg". [ 91 ]
The importance of CTCs in modern cancer research began in the mid 1990s with the demonstration that CTCs exist early on in the course of the disease. [ 92 ] Those results were made possible by exquisitely sensitive magnetic separation technology employing ferrofluids (colloidal magnetic nanoparticles) and high gradient magnetic separators invented by Paul Liberti and motivated by theoretical calculations by Liberti and Leon Terstappen that indicated very small tumors shedding cells at less than 1.0% per day should result in detectable cells in blood. [ 93 ] A variety of other technologies have been applied to CTC enumeration and identification since that time.
Modern cancer research has demonstrated that CTCs derive from clones in the primary tumor, validating Ashworth's remarks. [ 94 ] The significant efforts put into understanding the CTCs biological properties have demonstrated the critical role circulating tumor cells play in the metastatic spread of carcinoma . [ 95 ] Furthermore, highly sensitive, single-cell analysis demonstrated a high level of heterogeneity seen at the single cell level for both protein expression and protein localization [ 96 ] and the CTCs reflected both the primary biopsy and the changes seen in the metastatic sites. [ 97 ]
|
https://en.wikipedia.org/wiki/Circulating_tumor_cell
|
Circumcision surgical procedure in males involves either a conventional "cut and stitch" surgical procedure or use of a circumcision instrument or device . In the newborn period (less than 2 months of age), almost all circumcisions are done by generalist practitioners using one of three surgical instruments. In the US, the Gomco clamp is the most utilized instrument, followed by the Mogen clamp and the Plastibell. [ 1 ] They are also used worldwide. [ 2 ]
Complications may include bleeding , infection , reduction in sensation of the glans penis , [ 3 ] and too little or too much tissue removal. [ 4 ] Deaths are rare. [ 5 ] [ 4 ] After the newborn period, circumcision has a higher risk of complications, especially bleeding and anesthetic complications. [ 6 ]
In the 21st century, most circumcisions in boys and men are performed using one of three open surgical methods. The forceps-guided method, the dorsal slit method, and the sleeve resection method are well described by the World Health Organization in their Manual for Male Circumcision under Local Anaesthesia. [ 7 ] The Gomco clamp and Mogen clamp are sometimes used after the newborn period, in conjunction with either surgical sutures or cyanoacrylate tissue adhesive to prevent post-operative bleeding. [ 8 ]
Circumcision surgical instruments should be distinguished from circumcision devices. Circumcision instruments are used at the time of surgery, and the circumcision is complete at the end of the procedure. The Gomco clamp, the Mogen clamp, and Unicirc are surgical instruments. [ 9 ] Circumcision devices remain on the penis for 4 to 7 days and either spontaneously detach or are removed surgically at a subsequent visit. [ 10 ] Plastibell, the Shang Ring, and other plastic rings are all circumcision devices, also known as "in situ" devices. [ 9 ] Circumcision via instrument results in healing by primary intention , whereas healing via devices is by secondary intention , so healing is delayed. All circumcision procedures should involve adequate injectable or topical anesthesia . [ 6 ]
The Gomco clamp is a surgical instrument used to perform circumcision in all age groups, but is mainly used in newborn circumcision. [ 11 ] It is the leading instrument for newborn circumcision in the US. [ 1 ] The World Health Organization describes it as having "an impeccable safety record". [ 2 ]
After retracting the foreskin , the Gomco bell is placed over the glans at the level of the corona and the foreskin is replaced into the anatomic (natural) position. The yoke is then placed over the bell, trapping the foreskin between the bell and the yoke. The clamp is tightened, crushing the foreskin between the bell and the base plate, and left in place for five minutes. The crushed blood vessels provide hemostasis . The flared bottom of the bell fits tightly against the hole of the base plate, so the foreskin may be cut away with a scalpel from above the base plate, the intent being a lower risk of injuring the glans.
Circumcision is rapid and completed in a single session. The total procedure takes less than ten minutes, five minutes of which is spent in waiting for the crushing action to take place. In newborns (<2 months of age), no sutures are needed and bleeding is uncommon. [ 6 ] After the newborn period, either sutures or cyanoacrylate tissue adhesive can be used to seal the fused mucosal-skin edge to prevent post-operative bleeding. [ 8 ] Because the glans is protected by the bell of the Gomco clamp, injuries to the glans are rare. No parts are left on the penis, so late complications are rare compared to devices like the Plastibell which remain on the penis. [ 2 ]
Care must be taken to ensure that the device is properly sterilized between procedures, or transmission of infection may occur. The American Academy of Pediatrics reviewed one study of 1,000 newborn Gomco circumcisions in a hospital setting in Saudi Arabia and rated it "fair evidence". The study found an overall complication rate of 1.9%. Bleeding occurred in 0.6% of cases, infection in 0.4%, and insufficient foreskin removed in 0.3%. [ 6 ]
Because the Gomco clamp is made of three major parts, there is a chance that pieces could be incorrectly assembled from differently sized units or those produced by different manufacturers. Using mismatched parts results in a device that might not sufficiently crush the foreskin, potentially resulting in bleeding. [ 2 ]
The Gomco clamp was invented by Dr. Hiram S. Yellen and Aaron A. Goldstein in 1935. Yellen, an obstetrician-gynecologist in Buffalo, New York , sought an improved method of newborn circumcision. Goldstein was a prolific local inventor and manufacturer. [ 11 ] Gomco stands for the GOldstein Medical COmpany , the original manufacturer of the instrument. [ 11 ] The patent was in the name of Aaron Goldstein ( U.S. patent 119,180 , issued February 27, 1940). [ 11 ] The instrument was a quick success and was widely marketed and sold in the US and Canada. It has since been manufactured and marketed worldwide. [ 11 ]
The Gomco clamp is the leading instrument used to perform non-ritual male circumcision in the United States. [ 2 ] There is little information concerning prevalence of Gomco use outside of the US. A 1998 survey found that the Gomco clamp was the technique preferred by 67% of American physicians, whereas Plastibell was used by 19% and the Mogen clamp by 10%. [ 1 ]
The Mogen clamp is a surgical instrument which permits rapid circumcision. It is most often used in the newborn period, particularly for Jewish ceremonial circumcision ( Bris ), but is also used in older boys. The newborn version has two flat blades that open 2.5 mm. [ 2 ] The Mogen clamp is widely used around the world. [ 2 ]
The foreskin is first extended using several straight hemostats . The Mogen clamp is then slid over the foreskin. After confirming that the tip of the glans is free of the blades, the clamp is locked, and a scalpel is used to cut the skin from the flat (upper) side of the clamp. In newborns, no sutures are required. Outside of the newborn period, cyanoacrylate tissue adhesive can be used instead of sutures. [ 8 ]
The Mogen clamp has no parts to assemble, is easy to use, and results in a bloodless circumcision with minimal scarring. A single size can be used for infants, obviating any sizing errors. It is rapid, but requires five minutes of clamping to prevent post-operative bleeding. Any complications are immediate, because the instrument is not left on the penis, so they can be dealt with on site. [ 2 ] The clamp can be safely used by non-physician healthcare workers in resource-limited settings. [ 12 ] [ 13 ] [ 14 ]
Care must be taken to ensure that the device is properly sterilized between procedures, or transmission of infection may occur. The instrument does not directly protect the glans during the procedure, so there is a risk that the glans can be pulled into the slit and crushed or partially severed. [ 2 ]
In July 2010, one company manufacturing Mogen clamps (Mogen Circumcision Instruments of New York) went out of business following a lawsuit over a procedure in which the doctor completely severed the head of a child's penis. The court awarded the plaintiff $10 million in damages. [ 15 ] This followed similar lawsuits in 2007 and 2009, which awarded $7.5 million and $2.3 million, respectively.
According to the American Academy of Pediatrics, there are no good studies of complications of the Mogen clamp because complications are rare; thus, one can only rely on available case reports of glans injuries. [ 6 ]
The word mogen is derived from the Hebrew word for "shield". The Mogen clamp was introduced by Dr. Harry Bronstein in 1955. [ 2 ] Before the advent of the Mogen clamp, the Jewish shield was used; it has a narrow gap that protected the glans while the foreskin was pulled through and excised. Others modified this shield and began using instruments that produced a crushing action. In many parts of the world, bone cutters are still used to shield the glans, crush the foreskin tissue, and guide the scalpel for a clean incision. The Mogen clamp is a refinement of these ancient techniques. [ 2 ]
The Winkelmann clamp is a sterilizable Gomco-like instrument which consists of a single unit, so mismatching of parts cannot occur.
Unicirc is a disposable plastic and metal instrument which functions similarly to the Gomco clamp, and, according to WHO, has "nearly met the clinical evaluation study requirements described in the WHO Framework for the Clinical Evaluation of Devices." [ 16 ]
A meta-analysis of randomized controlled trials suggested that compressive instruments were associated with less blood loss, more rapid healing, and less pain compared to other techniques. [ 17 ]
All "in situ" devices are based upon steel circumcision rings patented by Cecil Ross in 1939. [ 2 ] Plastibell represents the first commercialization of the Ross device and is the progenitor of all subsequent "in situ" devices. Such devices consist of a plastic ring which is inserted beneath the foreskin at the level of the corona and has a ligature , or ligature device, which acts as a tourniquet. This necroses the remaining part of the foreskin and the device either detaches spontaneously after 4 to 7 days, or is removed surgically at one week. Implementation of "in situ" devices for HIV prevention has failed to demonstrate potential advantages with regard to efficiency or cost, compared to conventional surgical circumcision. [ 10 ]
The Plastibell plastic ring is placed under the foreskin and secured with a circumferential ligature, which prevents bleeding when the distal foreskin is excised. The entire procedure takes five to ten minutes. [ 18 ] The ring falls off after 4 to 7 days, leaving a circumferential wound that heals by secondary intention in one to two weeks.
Plastibell is a single-use-only disposable device, which prevents reuse and potential transmission of infection. The glans is protected during the procedure by the ring, so there is a reduced risk of injury to the glans, compared to the Mogen clamp. [ 2 ] It is a rapid procedure which can be done under clean (rather than sterile ) conditions. No bandage is required, allowing for easy monitoring for bleeding or infection.
The American Academy of Pediatrics estimates that overall complications occur in 2.4–5% of Plastibell procedures. [ 6 ] The risk of bleeding is 1%, similar to the risk with the Gomco clamp and Mogen clamp. [ 2 ] A significant complication can occur if the glans swells and herniates (protrudes) through the ring. This worsens the swelling and can reduce blood and urine flow, resulting in serious long-term sequelae. Unlike complications occurring with surgical instruments, which are dealt with immediately, this complication occurs hours to days after the patient leaves the clinic and must be managed promptly to prevent serious sequelae. Therefore, the Plastibell should only be used where follow-up is rapidly available. [ 2 ]
The idea of using a tourniquet approach to infant circumcision is attributed to Cecil J. Ross, who patented steel circumcision rings in 1939. [ 2 ] Subsequently, Kariher patented a plastic ring with a removable handle in 1955. [ 2 ] The Plastibell comes in a sterile package with a single ligature.
The Shang Ring is a disposable plastic "in situ" device for male circumcision. It has been studied in China and Africa, and has been approved by WHO for circumcision in males over 13 years of age to prevent HIV. The Shang Ring consists of two concentric medical grade plastic rings: an inner ring with a silicone band and an outer, hinged ring that acts as a ligature. The appropriate size is determined through use of a measuring strip. The inner ring is placed underneath the foreskin. The outer (hinged) ring is placed on the outside of the foreskin and locks against the inner ring when snapped together. The distal foreskin is then excised. The Shang Ring is removed after one week when the outer ring locking mechanism is opened using a special tool. A pair of scissors designed for this purpose is then used to remove the inner ring. [ 19 ]
The Shang Ring is marketed as simple, disposable, and easy to use, providing a sutureless circumcision that may be an acceptable alternative to conventional surgical techniques. [ 20 ]
Like other "in situ" devices, complications may occur up to several days following the placement procedure and must be dealt with promptly to prevent serious sequelae. Shang Ring should only be used where surgical care is rapidly available. In a review by WHO personnel, 0.4% men required rapid intervention with surgical circumcision as the excision had occurred but the foreskin slipped from the device and required suturing. No serious adverse events occurred; 1% experienced moderate adverse events from a total of 1983 successful device placements. All adverse events were managed with minor interventions and resolved without long-term sequelae. Rates were similar to those observed with conventional surgical circumcision. [ 21 ]
In settings where skilled surgeons are mostly located in urban centers, referral of clients who require surgical management of device-related complications within the recommended time frame of 6–12 hours may not be possible. [ 10 ] Healing is by secondary intention and is therefore delayed compared to techniques which allow for healing by primary intention. There is a risk of HIV transmission if men engage in unprotected sex before the wound is healed. Thus, Shang Ring circumcision requires a longer period of post-circumcision sexual abstinence than surgical or instrumental methods. [ 19 ]
The Shang Ring was developed by Jianzhong Shang in 2003. [ 19 ] The Shang Ring has been approved by WHO, [ 19 ] and is cleared by the U.S. FDA under the 510(k) mechanism with Plastibell as the predicate device. [ 22 ]
|
https://en.wikipedia.org/wiki/Circumcision_surgical_procedure
|
Cisatracurium besilate ( INN ; cisatracurium besylate ( USAN ); formerly known as 51W89; [ 1 ] trade name Nimbex ) is a bisbenzyltetrahydroisoquinolinium compound that acts as a non-depolarizing neuromuscular-blocking drug , used adjunctively in anesthesia to facilitate endotracheal intubation and to provide skeletal muscle relaxation during surgery or mechanical ventilation . It has an intermediate duration of action. Cisatracurium is one of the ten isomers of the parent molecule, atracurium , [ 2 ] and represents approximately 15% of the atracurium mixture. [ 3 ]
The generic name cisatracurium was conceived by scientists at Burroughs Wellcome Co. (now part of GlaxoSmithKline) by combining the name "atracurium" with "cis" [hence cis atracurium] because the molecule is one of the three cis - cis isomers comprising the ten isomers of the parent, atracurium . [ 2 ] Atracurium itself was invented at Strathclyde University and licensed to Burroughs Wellcome Co. , Research Triangle Park, NC , for further development and subsequent marketing as Tracrium. As the secondary pharmacology of atracurium was being developed, it became clear that the primary clinical disadvantage of atracurium was likely to be its propensity to elicit histamine release. To address this issue, a program was initiated to investigate the individual isomer constituents of atracurium to identify and isolate the isomer(s) associated with the undesirable histamine effects as well as identify the isomer that might possibly retain the desirable properties without the histamine release. Thus, in 1989, D A Hill and G L Turner, PhD (both chemists at Burroughs Wellcome Co., Dartford, UK) first synthesized cisatracurium as an individual isomer molecule. The pharmacological research of cisatracurium and the other individual isomers [ 4 ] was then developed further primarily by R. Brandt Maehr and William B. Wastila, PhD (both of whom were pharmacologists within the Division of Pharmacology at Burroughs Wellcome Co.) in collaboration with John J. Savarese MD (who at the time was an anesthesiologist in the Dept. of Anesthesia, Harvard Medical School at the Massachusetts General Hospital , Boston , MA). Thereafter, the entire clinical development of cisatracurium was completed in a record short period from 1992 to 1994: the team of scientists was led by J. Neal Weakly PhD, Martha M. Abou-Donia PhD, and Steve Quessy PhD, in the Division of Clinical Neurosciences at Burroughs Wellcome Co. , Research Triangle Park , NC. By the time of its approval for human use, in 1995, by the US Food and Drug Administration, Burroughs Wellcome Co. had merged with Glaxo Inc. , and cisatracurium was approved to be marketed as Nimbex by GlaxoWellcome Inc. The trade name "Nimbex" was derived from inserting an "i" to the original proposal "Nmb ex ," which stood for ex cellent N euro m uscular b locker. [ citation needed ]
In vitro studies using human plasma indicated that cisatracurium spontaneously degrades at physiological pH via Hofmann elimination to yield laudanosine and the quaternary monoacrylate. Subsequent ester hydrolysis of the monoacrylate generates the monoquaternary alcohol, although the rate-limiting step is Hofmann elimination . [ 3 ] In rat plasma, cisatracurium is also metabolized by non-specific carboxylesterases (a rate-limiting step) to the monoquaternary alcohol and the monoquaternary acid. [ 3 ]
As is evident with the parent molecule, atracurium, [ 5 ] [ 6 ] cisatracurium is also susceptible to degradation by Hofmann elimination and ester hydrolysis as components of the in vivo metabolic processes. [ citation needed ] See the atracurium page for information on Hofmann elimination in vivo versus the Hofmann degradation chemical reaction.
Because Hofmann elimination is a temperature- and plasma pH-dependent process, cisatracurium's rate of degradation in vivo is highly influenced by body pH and temperature just as it is with the parent molecule, atracurium: thus, an increase in body pH favors the elimination process, [ citation needed ] whereas a decrease in temperature slows down the process.
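To make the temperature and pH dependence concrete, the following minimal Python sketch models Hofmann elimination as simple first-order decay. The baseline rate constant and the pH/temperature adjustment factors are invented for illustration only; they are not clinical or published values for cisatracurium.

```python
import math

def fraction_remaining(t_min: float, k_per_min: float) -> float:
    """First-order decay: C(t)/C0 = exp(-k * t)."""
    return math.exp(-k_per_min * t_min)

# Assumed baseline rate constant at normal body pH and temperature
# (hypothetical value, chosen only to give a plausible half-life).
k_baseline = 0.026  # per minute; half-life = ln(2)/k, roughly 27 min

# Qualitative modifiers from the text: higher body pH speeds Hofmann
# elimination, lower temperature slows it. These factors are illustrative.
k_higher_ph = k_baseline * 1.3
k_lower_temp = k_baseline * 0.6

for label, k in [("baseline", k_baseline),
                 ("increased pH", k_higher_ph),
                 ("decreased temperature", k_lower_temp)]:
    half_life = math.log(2) / k
    print(f"{label}: {fraction_remaining(60, k):.1%} remaining after 60 min "
          f"(half-life ~{half_life:.0f} min)")
```

Running the sketch shows the direction of each effect: the assumed pH increase shortens the half-life, while the assumed cooling lengthens it, matching the qualitative behaviour described above.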
One of the metabolites of cisatracurium via Hofmann elimination is laudanosine – see the atracurium page for further discussion of the issue regarding this metabolite. 80% of cisatracurium is metabolized eventually to laudanosine and 20% is metabolized hepatically or excreted renally. [ citation needed ] 10-15% of the dose is excreted unchanged in the urine. [ citation needed ]
Since Hofmann elimination is an organ-independent chemodegradative mechanism, there is little or no risk to the use of cisatracurium in patients with liver or renal disease when compared with other neuromuscular-blocking agents. [ 7 ]
The two reverse ester linkages in the bridge between the two isoquinolinium groups make atracurium and cisatracurium poor targets for plasma cholinesterase , unlike mivacurium which has two conventional ester linkages.
To date, cisatracurium has not been reported to elicit bronchospasm at doses that are clinically prescribed.
Cisatracurium undergoes Hofmann elimination as a primary route of chemodegradation: consequently, one of the metabolites from this process is laudanosine , a tertiary amino alkaloid reported to be a modest CNS stimulant with epileptogenic activity [ 8 ] and cardiovascular effects such as low blood pressure and a slowed heart rate . [ 9 ] As a tertiary amine, laudanosine is un-ionised and readily crosses the blood–brain barrier. Presently, [ when? ] there is little evidence that laudanosine accumulation and related toxicity occur at the doses of cisatracurium administered in clinical practice, especially given that the plasma concentrations of laudanosine generated are lower with cisatracurium than with atracurium. [ 9 ]
A recent [ when? ] study showed that cisatracurium pretreatment effectively decreases the incidence and severity of pain induced by propofol during general anaesthesia. [ 10 ] Another study showed that hiccups accompanied by vomiting, insomnia, and shortness of breath can also be relieved by this nondepolarizing muscle relaxant during total intravenous anesthesia. [ 11 ]
Treatment of 1,5-pentanediol with 3-bromopropionyl chloride gives the corresponding ester; dehydrohalogenation of the ester with triethylamine then gives the bis-acrylate ( 2 ). Reaction of that unsaturated ester with tetrahydropapaverine [ 13 ] [ 14 ] ( 3 ) leads to conjugate addition of the secondary amine and formation of the intermediate ( 4 ). Alkylation with methyl benzenesulfonate forms the bis-quaternary salt, affording cisatracurium ( 5 ).
|
https://en.wikipedia.org/wiki/Cisatracurium_besilate
|
City physician ( German : Stadtphysicus, Stadtphysikus, Stadtarzt ; Swedish : stadsfysikus, stadsläkare , Finnish : kaupunginfysikus, kaupunginlääkäri , from Latin physicus ) was a historical title in the Late Middle Ages for a physician appointed by the city council. The city physician was responsible for the health of the population, particularly the poor, and the sanitary conditions in the city. His duties also included the supervision of pharmacies and the supervision of those engaged in medical tasks, such as midwives and barber surgeons . In addition, he had forensic duties such as assessing the injuries of living persons, external postmortem examinations, and conducting autopsies in cases of non-natural and unexplained deaths. In times of epidemic , many city physicians published small, printed books of guidelines. His functions combined aspects of the modern health minister , chief medical officer , coroner , and medical/pharmaceutical licencing authority.
The role existed in what are today a number of European countries, including Germany, Estonia, Finland, Norway, Poland, Sweden, and Switzerland. [ 1 ] [ 2 ] [ 3 ]
A Stadtphysicus or Stadtphysikus (learned "body" physician in contrast to the practice-oriented chirurgicus ) [ 4 ] or Stadtarzt [ 5 ] (also, in about the 15th century in Augsburg, referred to as Stadt-Leibarzt ) [ 6 ] was appointed by the city council and, in addition to his private practice, performed roughly the tasks of a modern-day health department . The designation physicus was the title for the civil servant physician in Prussia until 1901. [ 7 ]
Well-known early city physicians include Hugh of Lucca , who was appointed surgeon in Bologna, Italy, in 1214, and William of Saliceto , who was appointed city physician in Verona, Italy, in 1275. Other cities in the Holy Roman Empire established physician positions in the 13th and 14th centuries. Later, per the 1426 decree of Emperor Sigismund , all cities in the Empire were required to hire a city physician. [ 8 ]
In the late 16th and early 17th centuries, the preparation of calendars with astrological weather forecasts was also often performed by city physicians.
Some city physicians also acted as personal physicians ( Leibärzte [ de ] ) to noble or ecclesiastical dignitaries.
In less densely populated regions, the office was combined as city and district physician ( Stadt- und Kreisphysicus [ de ] ), who had to care for or supervise a specific medical district in addition to the city.
The deputy of the city physician was called Subphysicus , e.g. in Hamburg.
In Sweden, city physicians ( Swedish : stadsläkare , formerly stadsfysikus ) were responsible for the duties in cities which in rural areas belonged to provincial physicians ( provinsialläkare [ sv ] ). [ 9 ]
As early as the beginning of the 17th century, some of Sweden's cities ( Stockholm , Gothenburg , Falun , Gävle , Malmö and Kalmar ) hired a stadsfysikus in their service. In 1669, a city surgeon (city barber) was hired to work alongside the city physician in Stockholm, to assist in the treatment of external diseases and accidents. [ 10 ] In 1757, the first city district doctors in Stockholm (three in number) were employed to provide medical care for the city's ailing poor. By royal decree in 1827, the posts of city physician and city surgeon were transformed into those of first and second city physician.
In Stockholm, Gothenburg and Malmö, the chief city physician or city physician was equal to the chief provincial physician in the counties, with almost the same duties as the latter. City doctors were appointed by the city council ( stadsfullmäktige [ sv ] ), after the Medical Board had given an opinion on the competence of the respective applicants and the city's health board had been given the opportunity to give its opinion on the matter.
City district physicians ( stadsdistriktsläkare ), that is to say, persons who exercised the function of city physicians only within a certain district of the city, were appointed in the same order by the city council, unless the administration of the public health service was entrusted to the board of health, in which case the appointment of these physicians could also be entrusted to the same board.
In Stockholm, the role of city physician was established in 1827 and lasted until 1971. [ 11 ]
The position of city physician ( Finnish : kaupunginfysikus , later kaupunginlääkäri ) existed in Finland during the Swedish era and for a time after the country declared independence . Turku was the first city to hire a city physician, in 1755, and Helsinki was the second in 1774. [ 12 ] [ 13 ]
In Norway, Bergen was the first city to have a city physician ( stadsfysikus or bylege , lit. ' city doctor ' ), appointed in 1603. [ 14 ] Oslo's city physician role existed from 1626 until it was abolished in 1988; its city physician also held the role of head of the city's health council. [ 15 ] In Trondheim , the post was created in 1661, with Jens Nicolaisen as its first doctor. [ 16 ]
|
https://en.wikipedia.org/wiki/City_physician
|
Civil Resettlement Units ( CRUs ) was a scheme created during the Second World War by Royal Army Medical Corps psychiatrists to help British Army servicemen who had been prisoners of war (POWs) to return to civilian life, and to help their families and communities to adjust to having them back. Units were set up across Britain from 1945 and later expanded to provide for Far East Prisoners of War (FEPOWs) as well as those who had been captive in European camps. By March 1947, 19,000 European POWs and 4,500 FEPOWs had attended a unit.
During the First World War and shortly afterwards, many psychiatrists, including Sigmund Freud , assumed that soldiers who had been captured were 'virtually immune' to psychological harm because they were at a safe distance from battle. [ 1 ] This was linked with the belief that shell shock might be a way of escaping from danger. [ 2 ] Around the time of the Second World War , this view began to change. Psychiatrists and psychologists such as Millais Culpin and Adolf Vischer argued that POWs were at risk of mental harm, and Vischer coined the term "barbed-wire disease" to describe this condition. [ 1 ] Psychiatrists had been keen to look into these ideas, and the outbreak of war gave them the opportunity to conduct research. [ 3 ] The 1929 Geneva Convention had changed how POWs were dealt with by setting forth rules for prisoner exchange which made it possible for POWs to be returned to their home nations before the end of the war. [ 4 ]
In September 1943, Lieutenant General Sir Alexander Hood hosted an Army meeting at the Directorate of Army Psychiatry to discuss the repatriation of POWs, at which it was decided that British Army psychiatrists should investigate what difficulties POWs might experience on their return home, and how these difficulties might be dealt with. [ 5 ] As with much British Army psychiatry during the Second World War, work on rehabilitating POWs was headed by a group who called themselves the "Invisible College" and who formed the Tavistock Institute after the war. [ 3 ]
POWs experiencing the most apparently severe difficulties on repatriation were treated at military psychiatric hospitals such as Northfield Military Hospital . Psychiatrists Major Whiles and Alfred Torrie noted that patients were often 'markedly resentful of everyone and everything.' [ 6 ] Psychiatrists suggested that these feelings could lead to civil unrest after the war if experienced by the significant number of POWs who would be returning. [ 5 ]
Psychiatrist Major Wilfred Bion and psychologist Lieutenant Colonel Eric Trist conducted work at No. 21 War Office Selection Board (WOSB), Selsdon Court Hotel , Surrey where they attempted to adapt officer selection methods to the purpose of selecting POWs who might be capable of returning to active service. The "officer reception unit" was intended to 'provide them with advice on military retraining and re-employment, and on other problems.' [ 3 ] Bion suggested that resettlement should use 'psychiatric machinery; but the machinery need not cause irritation by creaking' and so any programme for handling POWs should appear more military than medical though it should incorporate psychiatric treatment in a subtle manner. [ 7 ]
At No. 1 RAMC Depot at Boyce Barracks in Crookham , psychiatrist Major A. T. M. "Tommy" Wilson headed an experimental programme to rehabilitate repatriated medical personnel. The experiment ran from November 1943 to February 1944, and involved 1200 POWs undergoing a four-week programme of rehabilitation and training. POW problems included low morale , absenteeism , high levels of sickness, and psychological disturbance. [ 3 ] Conclusions from the experiment were published in a memorandum titled The Prisoner of War Comes Home . This document argued that most POWs were not mentally ill but were maladjusted, and required support on their return home. [ 8 ]
In February 1944, the War Office agreed to establish a voluntary scheme to help POWs return to Britain based upon the Army psychiatrists' work. This scheme was announced in the House of Lords in July 1944. [ 9 ] In November 1944, a pilot unit called No. 10 Special Reception and Training Unit (SRTU) was set up in Derby . Wilson was selected to head this Unit as opposed to Bion , who expressed his dismay in a letter to fellow psychiatrist John Rickman . [ 10 ] Bion believed that the psychological principles underpinning the CRUs, which built on his earlier work at Northfield, were underdeveloped and needed further refinement. However, the first group of POWs were imminently due to return to Britain from Germany, which is likely why Wilson was selected to lead the SRTU. [ 5 ]
The SRTU pilot indicated to the Army psychiatrists that some changes were required before the scheme could be expanded. The "hutted camp" was too similar to a stalag , so more comfortable accommodation was to be provided in future, and the proposed six-week course was deemed too long and was cut to four weeks. Lectures were not very popular, but visits to workshops proved unexpectedly popular, so the team built connections with the Ministry of Labour to facilitate work placements and visits. [ 5 ] Food was a particular concern of POWs, so table service was provided rather than having men queue.
In March 1945, the War Office agreed for 20 Civil Resettlement Units to be created. In the spring of 1945, the CRU organisers made frantic preparations for the first large wave of POWs returning from Germany. They secured Hatfield House as CRU Headquarters and No. 1 CRU, and other country houses across Britain were adapted for use as CRUs so that men could attend a Unit close to where they lived.
The planning team who created the CRUs gave a great deal of thought to what they should be called. Based on the Crookham and No. 21 WOSB investigations, Army psychiatrists emphasised that POWs were very sensitive to accusations or implications that they were mentally "damaged." Accordingly, the Adjutant General Sir Ronald Adam issued official instructions that:
The word "rehabilitation is frequently taken to connote a process of mental or physical reconditioning made necessary as the prisoner of war is looked on as abnormal or even a "mental case" [thus] the expression "mental rehabilitation" or these words separately shall not be used in conversation or in writing. [ 11 ]
One of the participants at the SRTU had also strongly recommended that the planners change the name of the Unit. He stated that 'I would not call it a Special Training Unit to any man... I think the word "training" should be changed.' [ 12 ] In the end, the planners decided that 'the expressions "resettlement" or "resettlement training" will be employed instead.' [ 11 ]
Each unit had a Commanding Officer and Second-in-Command (who were military men), a Medical Officer (usually a psychiatrist, though often this was not acknowledged to the participants attending), Vocational Officer, Ministry of Labour Liaison and a Civil Liaison Officer (a social worker, usually a woman, trained in psychological methods).
A large proportion of the other CRU staff were Auxiliary Territorial Service staff: POWs might not have interacted with women for years, so these women staff were intended to help repatriates become more comfortable in mixed company as well as to facilitate the running of CRUs. [ 5 ]
The team at No. 1 CRU, the CRU Headquarters, consisted of Tommy Wilson as the head psychiatrist and Medical Officer, Colonel Richard Meadows Rendel as Commanding Officer, psychologists Eric Trist and Isabel Menzies Lyth , mathematician Harold Bridger, and military officers Ian Dawson and Dick Braund. [ 13 ]
A " syndicate " of 60 volunteers (in four batches of 15) arrived each week at the CRU. They listened to introductory talks from the Commanding Officer and Medical Officer. After this, the programme was entirely voluntary except for an interview when a participant left the CRU. Participants had the opportunity to attend workshops, visit nearby workplaces or have work-experience placements. They were able to attend group discussions, meet with the Vocational Officer to talk about careers, and meet with the Civil Liaison Officer to talk about social or relationship concerns. [ 14 ] Whist drives and dances were held at the CRUs, bringing the local civilian population to the Unit with the intention of helping civilians and repatriated POWs to interact and adjust to one another. Men were not required to wear their military uniforms except for the pay parade when they were given their salaries. [ 5 ]
To inform POWs about the scheme as early as possible, information was distributed through the British Red Cross and the officers of the Supreme Headquarters Allied Expeditionary Force , who had access to POWs whilst they were still in prisoner of war camps. [ 15 ]
A leaflet called Settling Down on Civvy Street was issued to POWs after they had been back in Britain for a week or two. This timing was intended to catch their attention when the initial excitement of repatriation had subsided and POWs might begin to experience some frustration or have questions. [ 5 ]
Many local or regional newspapers carried stories about local CRUs and the local men participating in the scheme. [ 5 ] National newspapers also reported the creation of the CRUs, and on 12 July 1945, the King and Queen visited Hatfield, which generated significant news coverage. [ 16 ] [ 17 ] [ 18 ]
All of those who attended the CRUs were volunteers. Those from the earlier studies were compelled by the Army to attend, but were due for discharge or release on completion of their course. [ 5 ]
With the atomic bombings of Hiroshima and Nagasaki, the War Office planned for CRUs to accept only Far East prisoners of war (FEPOWs), based on the assumption that the CRUs would not be able to manage the combined number of POWs from Europe and the Far East and that the FEPOWs were more in need of the service. Wilson and Rendel felt that European POWs should not be denied the opportunity to attend, and went to lengths to expand the programme where possible and make space for both groups. Rendel and Wilson were removed from heading the programme as a result. [ 19 ] By the end of March 1947, more than 19,000 European POWs and 4,500 FEPOWs had attended a CRU. [ 20 ]
Major Adam Curle and Eric Trist conducted a study to evaluate the efficacy of CRUs. They found that 26% of POWs who attended a CRU demonstrated "unsettlement" compared with 64% of POWs who did not attend a CRU. [ 14 ] Curle and Trist found that the "settled" men studied had better social relationships than a civilian control sample. They argued that this demonstrated the CRU's worth as a therapeutic community . However, they also noted that the results might have been affected by more "settled" men being more likely to attend a CRU in the first place. [ 14 ]
Edgar Jones and Simon Wessely have argued that the small sample size and the single location studied limit the validity of the evaluation study. [ 1 ]
The principles and some of the methods devised for the CRUs were later adapted and applied to European civilian refugees displaced by war.
CRUs represent one of the first controlled experiments in social psychology . [ 6 ] The work conducted at the CRUs contributed to the development of the concept and methods of therapeutic communities . Many of the staff of No. 1 CRU had worked on WOSBs, and their collaborative work on these two schemes resulted in them coming together after the war to establish the Tavistock Institute of Human Relations in 1947. [ 21 ]
The archives of the Tavistock Institute , which include extensive materials on the psychological principles behind and creation of the CRUs, have been catalogued and donated to the Wellcome Library where they can be ordered and viewed. [ 22 ] [ 23 ]
|
https://en.wikipedia.org/wiki/Civil_Resettlement_Units
|
Clade X: A Global Health Security Simulation was a pandemic modelling exercise led by Johns Hopkins University 's Center for Health Security , which occurred on Tuesday, May 15, 2018, at the Mandarin Oriental Hotel in Washington, D.C. [ 1 ] The exercise was named after a hypothetical novel virus, and simulated efforts to counter a fast-moving and deadly epidemic deliberately released by a terrorist group of scientists and their wealthy backers seeking to reduce overpopulation. [ 2 ] [ 3 ] In the simulation, the hypothetical pandemic resulted in 900 million simulated deaths. [ 4 ] The exercise was invitation-only and nearly 150 people attended. [ 5 ]
The exercise was co-hosted by the Program for Appropriate Technology in Health (PATH), the Global Health Council , and the Nuclear Threat Initiative (NTI). [ 6 ] It was funded through a grant from Open Philanthropy . [ 7 ]
This medical article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clade_X
|
Clara Bonanad is a clinical cardiologist in the department of Cardiology at the University Clinical Hospital, Valencia . [ 1 ] [ 2 ]
Bonanad earned her PhD in Medicine at the University of Valencia in 2016; her PhD thesis was titled "Prognostic Impact of Geriatric Syndromes in Acute Coronary Syndromes". [ 3 ] [ 4 ]
Her research focuses on sex differences in the influence of frailty in senior outpatients with heart failure, as well as on the Bending Oxygen Saturation Index (BOSI) and the risk of worsening heart failure events in chronic heart failure. [ 5 ] [ 6 ]
|
https://en.wikipedia.org/wiki/Clara_Bonanad
|
Clarence Crafoord (28 May 1899 – 25 February 1984) was a Swedish cardiovascular surgeon , best known for performing the first successful repair of aortic coarctation on 19 October 1944, one year before Robert E. Gross .
Crafoord also introduced heparin as thrombosis prophylaxis in the 1930s and he pioneered mechanical positive-pressure ventilation during thoracic operations in the 1940s.
Crafoord was professor of thoracic surgery at Karolinska Institute from 1948 to 1966.
This biographical article related to medicine in Sweden is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clarence_Crafoord
|
Classificatie van verrichtingen is a Dutch system of health coding procedures .
It is based on ICD-9-CM (the International Classification of Diseases, Clinical Modification ), but not identical to it. [ 1 ]
It is abbreviated "CvV". [ 2 ]
This medical article is a stub . You can help Wikipedia by expanding it .
This Netherlands -related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Classificatie_van_verrichtingen
|
The classification of mental disorders , also known as psychiatric nosology or psychiatric taxonomy , is central to the practice of psychiatry and other mental health professions .
The two most widely used psychiatric classification systems are chapter V of the International Classification of Diseases , 10th edition ( ICD-10 ), produced by the World Health Organization (WHO); and the Diagnostic and Statistical Manual of Mental Disorders , 5th edition (DSM-5), produced by the American Psychiatric Association (APA).
Both systems list disorders thought to be distinct types, and in recent revisions the two systems have deliberately converged their codes so that their manuals are often broadly comparable, though differences remain. Both classifications employ operational definitions . [ 1 ]
Other classification schemes, used more locally, include the Chinese Classification of Mental Disorders .
Manuals of limited use, by practitioners with alternative theoretical persuasions, include the Psychodynamic Diagnostic Manual .
In the scientific and academic literature on the definition or categorization of mental disorders, one extreme argues that it is entirely a matter of value judgments (including of what is normal ) while another proposes that it is or could be entirely objective and scientific (including by reference to statistical norms); [ 2 ] other views argue that the concept refers to a "fuzzy prototype " that can never be precisely defined, or that the definition will always involve a mixture of scientific facts (e.g. that a natural or evolved function is not working properly) and value judgments (e.g. that it is harmful or undesired). [ 3 ] Lay concepts of mental disorder vary considerably across different cultures and countries, and may refer to different sorts of individual and social problems. [ 4 ]
The WHO and national surveys report that there is no single consensus on the definition of mental disorder, and that the phrasing used depends on the social, cultural, economic and legal context in different societies. [ 5 ] [ 6 ] The WHO reports that there is intense debate about which conditions should be included under the concept of mental disorder; a broad definition can cover mental illness, intellectual disability, personality disorder and substance dependence, but inclusion varies by country and is reported to be a complex and debated issue. [ 5 ] There may be a criterion that a condition should not be expected to occur as part of a person's usual culture or religion. However, despite the term "mental", there is not necessarily a clear distinction drawn between mental (dys)functioning and brain (dys)functioning, or indeed between the brain and the rest of the body. [ 7 ]
Most international clinical documents avoid the term "mental illness", preferring the term "mental disorder". [ 5 ] However, some use "mental illness" as the main overarching term to encompass mental disorders. [ 8 ] Some consumer/survivor movement organizations oppose use of the term "mental illness" on the grounds that it supports the dominance of a medical model . [ 5 ] The term "serious mental impairment" (SMI) is sometimes used to refer to more severe and long-lasting disorders while " mental health problems" may be used as a broader term, or to refer only to milder or more transient issues. [ 9 ] [ 10 ] Confusion often surrounds the ways and contexts in which these terms are used. [ 11 ]
Mental disorders are generally classified separately to neurological disorders , learning disabilities or intellectual disabilities .
The International Classification of Diseases (ICD) is an international standard diagnostic classification for a wide variety of health conditions. The ICD-10 states that mental disorder is "not an exact term", although it is generally used "...to imply the existence of a clinically recognisable set of symptoms or behaviours associated in most cases with distress and with interference with personal functions." Chapter V focuses on "mental and behavioural disorders" and consists of 10 main groups: [ 12 ] F0, organic, including symptomatic, mental disorders; F1, mental and behavioural disorders due to psychoactive substance use; F2, schizophrenia, schizotypal and delusional disorders; F3, mood [affective] disorders; F4, neurotic, stress-related and somatoform disorders; F5, behavioural syndromes associated with physiological disturbances and physical factors; F6, disorders of adult personality and behaviour; F7, mental retardation; F8, disorders of psychological development; and F9, behavioural and emotional disorders with onset usually occurring in childhood and adolescence.
Within each group there are more specific subcategories. The WHO has revised ICD-10 to produce the latest version, ICD-11, which was adopted by the 72nd World Health Assembly in 2019 and came into effect on 1 January 2022. [ 13 ]
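As an illustration of this hierarchy, the short Python sketch below decomposes an ICD-10 Chapter V code such as F32.1 (a moderate depressive episode) into its main group, three-character category, and subcategory. The helper function is hypothetical, written only for this example; it is not part of any WHO tool or API.

```python
# Hypothetical illustration of the hierarchical structure of ICD-10
# Chapter V ("F") codes; not an official WHO tool.

ICD10_CHAPTER_V_GROUPS = {
    "F0": "Organic, including symptomatic, mental disorders",
    "F1": "Mental and behavioural disorders due to psychoactive substance use",
    "F2": "Schizophrenia, schizotypal and delusional disorders",
    "F3": "Mood [affective] disorders",
    "F4": "Neurotic, stress-related and somatoform disorders",
    "F5": "Behavioural syndromes associated with physiological disturbances",
    "F6": "Disorders of adult personality and behaviour",
    "F7": "Mental retardation",
    "F8": "Disorders of psychological development",
    "F9": "Behavioural and emotional disorders with onset in childhood",
}

def describe(code: str) -> dict:
    """Split an ICD-10 Chapter V code (e.g. 'F32.1') into its levels."""
    if not code.startswith("F"):
        raise ValueError("ICD-10 Chapter V codes begin with 'F'")
    group = code[:2]                              # main group, e.g. 'F3'
    category = code.split(".")[0]                 # category, e.g. 'F32'
    subcategory = code if "." in code else None   # subcategory, e.g. 'F32.1'
    return {"group": f"{group}: {ICD10_CHAPTER_V_GROUPS[group]}",
            "category": category,
            "subcategory": subcategory}

print(describe("F32.1"))
# {'group': 'F3: Mood [affective] disorders', 'category': 'F32',
#  'subcategory': 'F32.1'}
```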
The DSM -IV was originally published in 1994 and listed more than 250 mental disorders. It was produced by the American Psychiatric Association and it characterizes mental disorder as "a clinically significant behavioral or psychological syndrome or pattern that occurs in an individual,...is associated with present distress...or disability...or with a significantly increased risk of suffering" but that "...no definition adequately specifies precise boundaries for the concept of 'mental disorder'...different situations call for different definitions" (APA, 1994 and 2000). The DSM also states that "there is no assumption that each category of mental disorder is a completely discrete entity with absolute boundaries dividing it from other mental disorders or no mental disorders."
The DSM-IV-TR (Text Revision, 2000) consisted of five axes (domains) on which disorder could be assessed. The five axes were: Axis I, clinical disorders and other conditions that may be a focus of clinical attention; Axis II, personality disorders and mental retardation; Axis III, general medical conditions; Axis IV, psychosocial and environmental problems; and Axis V, global assessment of functioning.
The axis classification system was removed in the DSM-5 and is now mostly of historical significance. [ 14 ] The main categories of disorder in the DSM are: neurodevelopmental disorders; schizophrenia spectrum and other psychotic disorders; bipolar and related disorders; depressive disorders; anxiety disorders; obsessive-compulsive and related disorders; trauma- and stressor-related disorders; dissociative disorders; somatic symptom and related disorders; feeding and eating disorders; elimination disorders; sleep-wake disorders; sexual dysfunctions; gender dysphoria; disruptive, impulse-control, and conduct disorders; substance-related and addictive disorders; neurocognitive disorders; personality disorders; and paraphilic disorders.
Child and adolescent psychiatry sometimes uses specific manuals in addition to the DSM and ICD. The Diagnostic Classification of Mental Health and Developmental Disorders of Infancy and Early Childhood (DC:0-3) was first published in 1994 by Zero to Three to classify mental health and developmental disorders in the first four years of life. It has been published in 9 languages. [ 16 ] [ 17 ] The Research Diagnostic criteria-Preschool Age ( RDC-PA ) was developed between 2000 and 2002 by a task force of independent investigators with the goal of developing clearly specified diagnostic criteria to facilitate research on psychopathology in this age group. [ 18 ] [ 19 ] The French Classification of Child and Adolescent Mental Disorders (CFTMEA), operational since 1983, is the classification of reference for French child psychiatrists. [ 20 ]
The ICD and DSM classification schemes have achieved widespread acceptance in psychiatry. A survey of 205 psychiatrists from 66 countries across all continents found that ICD-10 was more frequently used and more valued in clinical practice and training, while the DSM-IV was more frequently used in clinical practice in the United States and Canada and was more valued for research; accessibility to either was limited, and usage by other mental health professionals, policy makers, patients and families was less clear. [ 21 ] A primary care (e.g. general or family physician) version of the mental disorder section of ICD-10 has been developed (ICD-10-PHC), which has also been used quite extensively internationally. [ 22 ] A survey of journal articles indexed in various biomedical databases between 1980 and 2005 indicated that 15,743 referred to the DSM and 3,106 to the ICD. [ 23 ]
In Japan , most university hospitals use either the ICD or DSM. The ICD appears to be somewhat more used for research or academic purposes, while both are used equally for clinical purposes. Other traditional psychiatric schemes may also be used. [ 24 ]
The classification schemes in common usage are based on separate (though possibly overlapping) categories of disorder, in schemes sometimes termed "neo-Kraepelinian" (after the psychiatrist Kraepelin ), [ 25 ] which are intended to be atheoretical with regard to etiology (causation). These classification schemes have achieved some widespread acceptance in psychiatry and other fields, and have generally been found to have improved inter-rater reliability , although routine clinical usage is less clear. Questions of validity and utility have been raised, both scientifically [ 26 ] and in terms of social, economic and political factors, notably over the inclusion of certain controversial categories, the influence of the pharmaceutical industry, [ 27 ] or the stigmatizing effect of being categorized or labelled .
Some approaches to classification do not use categories with single cut-offs separating the ill from the healthy or the abnormal from the normal (a practice sometimes termed "threshold psychiatry" or " dichotomous classification" [ 28 ] ). [ 29 ]
Classification may instead be based on broader underlying " spectra ", where each spectrum links together a range of related categorical diagnoses and nonthreshold symptom patterns. [ 30 ]
Some approaches go further and propose continuously varying dimensions that are not grouped into spectra or categories; each individual simply has a profile of scores across different dimensions. [ 31 ] DSM-5 planning committees sought to establish a research basis for a hybrid dimensional classification of personality disorders. [ 32 ] However, a problem with entirely dimensional classifications is that they are said to be of limited practical value in clinical practice, where yes/no decisions often need to be made, for example whether a person requires treatment; moreover, the rest of medicine is firmly committed to categories, which are assumed to reflect discrete disease entities. [ 33 ] While the Psychodynamic Diagnostic Manual has an emphasis on dimensionality and the context of mental problems, it has been structured largely as an adjunct to the categories of the DSM. The dimensional approach has also been criticized for its reliance on independent dimensions, whereas all systems of behavioral regulation show strong interdependence, feedback and contingent relationships. [ 34 ] [ 35 ]
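The practical difference between the two styles can be shown in a few lines of Python. The dimension names and the cut-off below are invented for this sketch and do not correspond to any real diagnostic instrument.

```python
# Invented scores on three illustrative dimensions for one individual.
scores = {"negative affect": 7.2, "detachment": 3.1, "disinhibition": 5.8}

# Categorical ("threshold") style: one yes/no decision per dimension,
# using a single assumed cut-off.
THRESHOLD = 6.0
categorical = {dim: score >= THRESHOLD for dim, score in scores.items()}
print(categorical)
# {'negative affect': True, 'detachment': False, 'disinhibition': False}

# Dimensional style: the profile of scores itself is the classification.
# No cut-off is applied, so a separate clinical decision (e.g. whether
# to treat) is still required -- the practical objection noted above.
profile = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(profile)
# [('negative affect', 7.2), ('disinhibition', 5.8), ('detachment', 3.1)]
```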
Descriptive classifications are based almost exclusively on either descriptions of behavior as reported by various observers, such as parents, teachers, and medical personnel; or symptoms as reported by individuals themselves. As such, they are quite subjective, not amenable to verification by third parties, and not readily transferable across chronologic and/or cultural barriers.
Somatic nosology, on the other hand, is based almost exclusively on the objective histologic and chemical abnormalities which are characteristic of various diseases and can be identified by appropriately trained pathologists. While not all pathologists will agree in all cases, the degree of uniformity allowed is orders of magnitude greater than that enabled by the constantly changing classification embraced by the DSM system. Some models, such as the Functional Ensemble of Temperament , suggest unifying the nosology of somatic, biologically based individual differences in healthy people (temperament) and their deviations in the form of mental disorders into one taxonomy. [ 35 ] [ 36 ]
Classification schemes may not apply to all cultures. The DSM is based on predominantly American research studies and has been said to have a decidedly American outlook, meaning that differing disorders or concepts of illness from other cultures (including personalistic rather than naturalistic explanations) may be neglected or misrepresented, while Western cultural phenomena may be taken as universal. [ 37 ] Culture-bound syndromes are those hypothesized to be specific to certain cultures (typically taken to mean non-Western or non-mainstream cultures); while some are listed in an appendix of the DSM-IV they are not detailed and there remain open questions about the relationship between Western and non-Western diagnostic categories and sociocultural factors, which are addressed from different directions by, for example, cross-cultural psychiatry or anthropology .
In Ancient Greece, Hippocrates and his followers are generally credited with the first classification system for mental illnesses, including mania , melancholia , paranoia , phobias and Scythian disease ( transvestism ). They held that these conditions were due to different kinds of imbalance in the four humors .
The Persian physicians 'Ali ibn al-'Abbas al-Majusi and Najib ad-Din Samarqandi elaborated upon Hippocrates' system of classification. [ 38 ] Avicenna (980−1037 CE) in the Canon of Medicine listed a number of mental disorders, including "passive male homosexuality".
Laws generally distinguished between "idiots" and "lunatics".
Thomas Sydenham (1624–1689), the "English Hippocrates", emphasized careful clinical observation and diagnosis and developed the concept of a syndrome , a group of associated symptoms having a common course, which would later influence psychiatric classification.
Evolution in the scientific concepts of psychopathology (literally referring to diseases of the mind) took hold in the late 18th and 19th centuries following the Renaissance and Enlightenment . Individual behaviors that had long been recognized came to be grouped into syndromes .
Boissier de Sauvages developed an extremely extensive psychiatric classification in the mid-18th century, influenced by the medical nosology of Thomas Sydenham and the biological taxonomy of Carl Linnaeus . It was only part of his classification of 2400 medical diseases. These were divided into 10 "classes", one of which comprised the bulk of the mental diseases, divided into four "orders" and 23 "genera". One genus, melancholia , was subdivided into 14 "species".
William Cullen advanced an influential medical nosology which included four classes of neuroses: coma, adynamias , spasms, and vesanias . The vesanias included amentia , melancholia, mania, and oneirodynia .
Towards the end of the 18th century and into the 19th, Pinel , influenced by Cullen's scheme, developed his own, again employing the terminology of genera and species. His simplified revision of this reduced all mental illnesses to four basic types. He argued that mental disorders are not separate entities but stem from a single disease that he called "mental alienation".
Attempts were made to merge the ancient concept of delirium with that of insanity, the latter sometimes described as delirium without fever.
On the other hand, Pinel had started a trend for diagnosing forms of insanity 'without delirium' (meaning hallucinations or delusions) – a concept of partial insanity . Attempts were made to distinguish this from total insanity by criteria such as intensity, content or generalization of delusions. [ 39 ]
Pinel's successor, Esquirol , extended Pinel's categories to five. Both made a clear distinction between insanity (including mania and dementia) as opposed to mental retardation (including idiocy and imbecility). Esquirol developed a concept of monomania —a periodic delusional fixation or undesirable disposition on one theme—that became a broad and common diagnosis and a part of popular culture for much of the 19th century. [ 40 ] The diagnosis of " moral insanity " coined by James Prichard also became popular; those with the condition did not seem delusional or intellectually impaired but seemed to have disordered emotions or behavior.
The botanical taxonomic approach was abandoned in the 19th century, in favor of an anatomical-clinical approach that became increasingly descriptive. There was a focus on identifying the particular psychological faculty involved in particular forms of insanity, including through phrenology , although some argued for a more central "unitary" cause . [ 39 ] French and German psychiatric nosology was in the ascendancy. The term "psychiatry" ("Psychiatrie") was coined by German physician Johann Christian Reil in 1808, from the Greek "ψυχή" ( psychē : "soul or mind") and "ιατρός" ( iatros : "healer or doctor"). The term "alienation" took on a psychiatric meaning in France, later adopted into medical English. The terms psychosis and neurosis came into use, the former viewed psychologically and the latter neurologically. [ 39 ]
In the second half of the century, Karl Kahlbaum and Ewald Hecker developed a descriptive categorization of syndromes , employing terms such as dysthymia , cyclothymia , catatonia , paranoia and hebephrenia . Wilhelm Griesinger (1817–1869) advanced a unitary scheme based on a concept of brain pathology. The French psychiatrists Jules Baillarger and Jean-Pierre Falret described, respectively, "folie à double forme" and " la folie circulaire ": alternating mania and depression.
The concept of adolescent insanity or developmental insanity was advanced by Scottish asylum superintendent and lecturer in mental diseases Thomas Clouston in 1873, describing a psychotic condition that generally affected those aged 18–24 years, particularly males, and in 30% of cases proceeded to "a secondary dementia". [ 41 ]
The concept of hysteria (wandering womb) had long been used, perhaps since ancient Egyptian times, and was later adopted by Freud. Descriptions of a specific syndrome now known as somatization disorder were first developed by the French physician, Paul Briquet in 1859.
An American physician, Beard, described " neurasthenia " in 1869. The German neurologist Westphal coined the term " obsessional neurosis ", now termed obsessive-compulsive disorder , as well as agoraphobia . Alienists created a whole new series of diagnoses that highlighted single types of impulsive behavior, such as kleptomania , dipsomania , pyromania , and nymphomania . The diagnosis of drapetomania was also developed in the Southern United States to explain the perceived irrationality of black slaves trying to escape what was thought to be a suitable role.
The scientific study of homosexuality began in the 19th century, informally viewed either as natural or as a disorder. Kraepelin included it as a disorder in his Compendium der Psychiatrie that he published in successive editions from 1883. [ 42 ]
In the late 19th century, Koch referred to "psychopathic inferiority" as a new term for moral insanity. In the 20th century the term became known as "psychopathy" or "sociopathy", related specifically to antisocial behavior. Related studies led to the DSM-III category of antisocial personality disorder .
Influenced by the approach of Kahlbaum and others, and developing his concepts in publications spanning the turn of the century, German psychiatrist Emil Kraepelin advanced a new system. He grouped together a number of existing diagnoses that appeared to all have a deteriorating course over time—such as catatonia , hebephrenia and dementia paranoides —under another existing term " dementia praecox " (meaning "early senility ", later renamed schizophrenia). Another set of diagnoses that appeared to have a periodic course and better outcome were grouped together under the category of manic-depressive insanity (mood disorder). He also proposed a third category of psychosis, called paranoia, involving delusions but not the more general deficits and poor course attributed to dementia praecox. In all he proposed 15 categories, also including psychogenic neurosis, psychopathic personality, and syndromes of defective mental development (mental retardation). He eventually included homosexuality in the category of "mental conditions of constitutional origin". [ citation needed ]
The neuroses were later split into anxiety disorders and other disorders.
Freud wrote extensively on hysteria and also coined the term, "anxiety neurosis", which appeared in DSM-I and DSM-II. Checklist criteria for this led to studies that were to define panic disorder for DSM-III.
Early 20th century schemes in Europe and the United States reflected a brain disease (or degeneration ) model that had emerged during the 19th century, as well as some ideas from Darwin 's theory of evolution and/or Freud 's psychoanalytic theories.
Psychoanalytic theory did not rest on classification of distinct disorders, but pursued analyses of unconscious conflicts and their manifestations within an individual's life. It dealt with neurosis, psychosis, and perversion. The concept of borderline personality disorder and other personality disorder diagnoses were later formalized from such psychoanalytic theories, though such ego psychology-based lines of development diverged substantially from the paths taken elsewhere within psychoanalysis.
The philosopher and psychiatrist Karl Jaspers made influential use of a "biographical method" and suggested ways to diagnose based on the form rather than content of beliefs or perceptions. In regard to classification in general he prophetically remarked that: "When we design a diagnostic schema, we can only do so if we forego something at the outset … and in the face of facts we have to draw the line where none exists... A classification therefore has only provisional value. It is a fiction which will discharge its function if it proves to be the most apt for the time". [ 33 ]
Adolph Meyer advanced a mixed biosocial scheme that emphasized the reactions and adaptations of the whole organism to life experiences.
In 1945, William C. Menninger advanced a classification scheme for the US army, called Medical 203, synthesizing ideas of the time into five major groups. This system was adopted by the Veterans Administration in the United States and strongly influenced the DSM .
The term stress , having emerged from endocrinology work in the 1930s, was popularized with an increasingly broad biopsychosocial meaning, and was increasingly linked to mental disorders. The diagnosis of post-traumatic stress disorder was later created. [ 43 ]
Mental disorders were first included in the sixth revision of the International Classification of Diseases (ICD-6) in 1949. [ 44 ] Three years later, in 1952, the American Psychiatric Association created its own classification system, DSM-I. [ 44 ]
The Feighner Criteria group described fourteen major psychiatric disorders for which careful research studies were available, including homosexuality . These developed as the Research Diagnostic Criteria , adopted and further developed by the DSM-III.
The DSM and ICD developed, partly in sync, in the context of mainstream psychiatric research and theory. Debates continued and developed about the definition of mental illness, the medical model , categorical vs dimensional approaches, and whether and how to include suffering and impairment criteria. [ 45 ] There is some attempt to construct novel schemes, for example from an attachment perspective where patterns of symptoms are construed as evidence of specific patterns of disrupted attachment , coupled with specific types of subsequent trauma. [ citation needed ]
The ICD-11 and DSM-5 were developed at the start of the 21st century. Any radical new developments in classification are said to be more likely to be introduced by the APA than by the WHO, mainly because the former only has to persuade its own board of trustees whereas the latter has to persuade the representatives of over 200 countries at a formal revision conference. In addition, while the DSM is a bestselling publication that makes huge profits for the APA, the WHO incurs major expense in determining international consensus for revisions to the ICD. Although there is an ongoing attempt to reduce trivial or accidental differences between the DSM and ICD, it is thought [ by whom? ] that the APA and the WHO are likely to continue to produce new versions of their manuals and, in some respects, to compete with one another. [ 33 ]
There is ongoing scientific doubt concerning the construct validity and reliability of psychiatric diagnostic categories and criteria [ 46 ] [ 47 ] [ 48 ] [ 49 ] even though they have been increasingly standardized to improve inter-rater agreement in controlled research. In the United States, there have been calls and endorsements for a congressional hearing to explore the nature and extent of harm potentially caused by this "minimally investigated enterprise". [ 50 ] [ 51 ]
Other specific criticisms of the current schemes include: attempts to demonstrate natural boundaries between related syndromes, or between a common syndrome and normality, have failed; inappropriateness of statistical (factor-analytic) arguments and lack of functionality considerations in the analysis of a structure of behavioral pathology; [ 34 ] the disorders of current classification are probably surface phenomena that can have many different interacting causes, yet "the mere fact that a diagnostic concept is listed in an official nomenclature and provided with a precise operational definition tends to encourage us to assume that it is a "quasi-disease entity" that can be invoked to explain the patient's symptoms"; and that the diagnostic manuals have led to an unintended decline in careful evaluation of each individual person's experiences and social context. [ 33 ]
Psychodynamic schemes have traditionally given the latter phenomenological aspect more consideration, but in psychoanalytic terms that have long been criticized on numerous grounds.
Some have argued that reliance on operational definitions demands that intuitive concepts such as depression be operationally defined before they become amenable to scientific investigation. However, John Stuart Mill pointed out the dangers of believing that anything that could be given a name must refer to a thing [ citation needed ] and Stephen Jay Gould and others have criticized psychologists for doing just that. One critic states that "Instead of replacing 'metaphysical' terms such as 'desire' and 'purpose', they used it to legitimize them by giving them operational definitions. Thus in psychology, as in economics, the initial, quite radical operationalist ideas eventually came to serve as little more than a 'reassurance fetish' (Koch 1992, 275) for mainstream methodological practice." [ 52 ] According to Tadafumi Kato, since the era of Kraepelin, psychiatrists have been trying to differentiate mental disorders by using clinical interviews. Kato argues there has been little progress over the last century and that only modest improvements are possible in this way; he suggests that only neurobiological studies using modern technology could form the basis for a new classification. [ 53 ]
According to Heinz Katschnig, expert committees have combined phenomenological criteria in variable ways into categories of mental disorders, repeatedly defined and redefined over the last half century. The diagnostic categories are termed "disorders" and yet, despite not being validated by biological criteria as most medical diseases are, are framed as medical diseases identified by medical diagnoses. He describes them as top-down classification systems similar to the botanic classifications of plants in the 17th and 18th centuries, when experts decided a priori which visible aspects of plants were relevant. Katschnig notes that while psychopathological phenomena are certainly observed and experienced, the conceptual basis of psychiatric diagnostic categories is questioned from various ideological perspectives. [ 44 ]
Psychiatrist Joel Paris argues that psychiatry is sometimes susceptible to diagnostic fads . Some have been based on theory (overdiagnosis of schizophrenia ), some based on etiological (causation) concepts (overdiagnosis of post-traumatic stress disorder ), and some based on the development of treatments. Paris points out that psychiatrists like to diagnose conditions they can treat, and gives examples of what he sees as prescribing patterns paralleling diagnostic trends, for example an increase in bipolar diagnosis once lithium came into use, and similar scenarios with the use of electroconvulsive therapy , neuroleptics , tricyclic antidepressants , and SSRIs . He notes that there was a time when every patient seemed to have "latent schizophrenia" and another time when everything in psychiatry seemed to be " masked depression ", and he fears that the boundaries of the bipolar spectrum concept, including in application to children, are similarly expanding. [ 54 ] Allen Frances has suggested fad diagnostic trends regarding autism and attention deficit hyperactivity disorder . [ 55 ]
Since the 1980s, psychologist Paula Caplan has had concerns about psychiatric diagnosis, and people being arbitrarily "slapped with a psychiatric label". Caplan says psychiatric diagnosis is unregulated, so doctors are not required to spend much time understanding patients' situations or to seek another doctor's opinion. The criteria for allocating psychiatric labels are contained in the Diagnostic and Statistical Manual of Mental Disorders , which can "lead a therapist to focus on narrow checklists of symptoms, with little consideration for what is causing the patient's suffering". So, according to Caplan, getting a psychiatric diagnosis and label often hinders recovery. [ 56 ]
The DSM and ICD approach remains under attack both because of the implied causality model [ 57 ] and because some researchers believe it better to aim at underlying brain differences which can precede symptoms by many years. [ 58 ]
|
https://en.wikipedia.org/wiki/Classification_of_mental_disorders
|
Clear cell papillary renal cell tumour ( CCPRCT ) is a rare subtype of renal cell carcinoma (RCC) that has microscopic morphologic features of papillary renal cell carcinoma and clear cell renal cell carcinoma , yet is pathologically distinct based on molecular changes and immunohistochemistry . [ 1 ]
CCPRCT classically has apical nuclei, i.e. the nucleus is adjacent to the luminal aspect. [ 2 ] In most glandular structures the nuclei are usually basally located, i.e. in the cytoplasm adjacent to the basement membrane .
CCPRCTs typically stain with CK7, show 'cuplike' staining for CAIX, and do not stain with TFE3 and AMACR . [ 1 ]
This oncology article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clear_cell_papillary_renal_cell_carcinoma
|
Sir Clement Price Thomas KCVO MRCS LRCP FRCS FRCP [ 1 ] (22 November 1893 – 19 March 1973) [ 2 ] was a pioneering Welsh thoracic surgeon most famous for his 1951 operation on King George VI . [ 3 ]
Following a scholarship to Westminster Hospital Medical School , Price Thomas was posted to the Middle East at the onset of the First World War . He resumed his surgical training on return and was ultimately elected on to the surgical team of the hospital.
Encouraged to pursue thoracic surgery by Tudor Edwards , Price Thomas took up, along with other posts, a thoracic surgery placement at the Royal Brompton Hospital , a specialist hospital for chest diseases. His reputation from his work on surgical techniques in pulmonary tuberculosis led to the decision that he would undertake the lung surgery on King George VI in 1951. Its success resulted in Price Thomas being appointed Knight Commander of the Royal Victorian Order (KCVO).
Price Thomas was elected president of numerous significant bodies during his career, including the British Medical Association , the Royal Society of Medicine , the Association of Thoracic Surgeons, the Thoracic Society and the Welsh National School of Medicine . A forerunner of thoracic surgery on the international stage, he delivered a number of eponymous lectures and received several honorary degrees.
Price Thomas, less well known for his cardiac surgery, also introduced surgery for coarctation of the aorta to the United Kingdom, a procedure he learnt from Clarence Crafoord .
A lifelong cigarette smoker , he suffered from lung cancer in his later years and died at the age of 79, leaving a wife and two sons, one of whom became a surgeon.
Clement Price Thomas was born in Abercarn , Monmouthshire . [ 4 ] He was the youngest child of a family of nine children and they lived on Islwyn Street. [ 5 ] His father William Thomas was a grocer and his mother, Rosamund Gertrude Price, was a clergyman's daughter. [ 6 ] He attended Newport High School before going to Caterham School at the age of 13 years, [ 7 ] a boarding school in Surrey. He then proceeded to the University College of South Wales and Monmouthshire . [ 8 ] [ 9 ]
Price Thomas was awarded the Hughes Medal in anatomy whilst he was a student at Cardiff Medical School . Although his ambition was initially to enter dental surgery , he was subsequently awarded a scholarship to study medicine at the Westminster Hospital Medical School. [ 6 ] [ 8 ]
Between 1914 and 1918, Price Thomas was posted to the Middle East, specifically Gallipoli , Macedonia and Palestine . He went as a private in the 32nd Field Ambulance , Royal Army Medical Corps . [ 8 ] [ 10 ]
In 1921, after the war, Price Thomas achieved the Conjoint Board Diploma (LRCP) and then, in 1923, attained the FRCS (Eng) . At the time, he continued to gain clinical experience at Westminster Hospital through his residential appointments and was subsequently retained as a permanent member of the clinical team there. He was introduced to thoracic surgery by Tudor Edwards, an eminent thoracic surgeon of the time, [ 6 ] and was similarly influenced by Ernest Rock Carling and G. T. Mullally. [ 8 ] In addition, encouragement in chest surgery came from J. E. H. Roberts and Morriston Davies. [ 10 ]
As well as his responsibilities at the Westminster Hospital , Price Thomas remained the predominant chest surgeon at the Brompton Hospital. He also served the Army and the Royal Air Force as a consultant in thoracic surgery. He became a consultant at the King Edward VII Sanatorium, Midhurst, and at the Royal National Hospital for Diseases of the Chest , Ventnor. In addition, he had responsibilities at the Welsh National Memorial Association in South Wales. Through his contacts and reputation, he became an advisor on thoracic surgery to the Ministry of Health . [ 8 ]
Price Thomas had been appointed Tudor Edwards' assistant surgeon in 1932 and they performed the first case of lobectomy of the lung for bronchiectasis the same year. This was the start of a particularly close friendship over many years. It was inevitable that Price Thomas should comfortably fit into Tudor Edwards' position in thoracic surgery at Westminster Hospital once Edwards resigned. [ 6 ] [ 11 ] Later, during a Tudor Edwards memorial lecture, Sir Russell Brock, Baron Brock commented that no-one knew Tudor Edwards better than Price Thomas. [ 11 ]
Price Thomas was predominantly known to operate on tuberculosis and lung tumours. He was the first surgeon to perform a bronchial sleeve resection, in 1947: the operation involved removing a bronchial carcinoid tumour . [ 6 ] [ 12 ] Price Thomas went on to show how a bronchial blockage from tuberculosis could be resected and the two ends of the bronchus sewn together, uniting in a similar way as two ends of intestine. [ 13 ] He had his own rationale for collapse therapy of the lung and specifically for selective partial thoracoplasty with apicolysis in the treatment of tuberculosis . [ 7 ] [ 10 ] He was also considered fortunate to have one of the best anaesthetists, Ivan Magill , to assist. [ 11 ]
From 1948 to 1952 he was affiliated with the Court of Examiners for the Royal College of Surgeons (RCS). He became vice-president of the RCS between 1962 and 1964, having served on its council since 1952. [ 8 ]
In 1958, he was the third Tudor Edwards Memorial lecturer [ 11 ] and in 1960 and 1963, Vicary Lecturer and Bradshaw lecturer respectively. [ 8 ]
Price Thomas was president of several medical organisations including the Association of Surgeons of Great Britain and Ireland, the Society of Thoracic Surgeons, the Royal Society of Medicine, the British Medical Association and the Welsh National School of Medicine. [ 8 ] He was a valuable expert counsellor, serving as president of the Medical Protection Society for many years. [ 10 ] He was a founder president of the Medical Council on Alcoholism. [ 14 ]
He was awarded honorary degrees by the universities of Wales, Belfast, Paris, Lisbon, Athens and Karachi. [ 6 ]
He was elected president of the Welsh National School of Medicine , a position he held between 1958 and 1970. [ 6 ]
Price Thomas was a notable medical educator, facilitating weekly surgical conferences at the Brompton and ongoing small-group teaching. [ 10 ]
Price Thomas led the team that removed a cancerous left lung from King George VI . His Times obituary noted that despite his huge fame and international reputation "the more honours that befell him, the more did his innate modesty come to the fore". [ 15 ]
The king had been unwell in 1951, and was advised by his physicians Sir Daniel Davies , Sir Horace Evans , Geoffrey Marshall and Sir John Weir , to return to London from Balmoral and confine himself to his room. [ 16 ] [ 17 ] He was described as having 'catarrhal inflammation' that rest might improve. However, he did not improve and was considerably weak, thin and pale, with little exercise tolerance due to intermittent claudication . [ 18 ] A series of X-rays, reported by Peter Kerley , [ 18 ] a Westminster Hospital radiologist, suggested a tumour, and after Sir Horace Evans consulted Price Thomas, a bronchoscopy was scheduled. The bronchoscopy and biopsy were performed on 16 September; the specimen was transported to the Brompton Hospital by Price Thomas' son, Brian, and the result confirmed a lung tumour. [ 19 ] The diagnosis was concealed from the king, who underwent the surgery with the understanding that it was to remove a lung blockage. The public was not informed of the King's health problems. [ 18 ]
On Sunday morning, 23 September 1951, the operation on the king's lung was performed by Price Thomas and his assistants Charles Edwin Drew and Peter Jones in the Buhl Room of Buckingham Palace . Even the changing of the King's Guard was switched to St James's Palace to avoid disturbance outside the operating theatre, where it would otherwise have taken place. The team performed the surgery in their routine manner, whereby the assistants sewed up the wound following removal of the tumour; Price Thomas is recalled to have remarked, "I haven't stitched up a chest for 25 years and I'm not going to start practising today!" [ 20 ] Following surgery, the king was moved back into his own bedroom. [ 21 ] [ 22 ]
Despite injury to the left recurrent laryngeal nerve and an effect on the king's voice, the cancerous lung was successfully removed. [ 18 ] Price Thomas declined the fee for the surgery, considering it an honour to have been of service to his king. [ 19 ] The king honoured him with the KCVO , [ 1 ] in December 1951, barely two months before he died from the effects of arterial disease. [ 23 ] [ 19 ]
In 1925, Price Thomas married Ethel Doris, whose father was Mortimer Ricks from Paignton in South Devon. He had interests in golf, photography and reading. [ 8 ] His son Martyn Price Thomas FRCS (1935–2000) was also a surgeon. [ 24 ]
Price Thomas lived in St John's Wood , was very welcoming and deeply religious. He was also dedicated to his wife and sons. [ 6 ] [ 10 ] He had numerous nicknames including 'Clem', 'CP' and 'Pricey'. Often people would be unsure as to how to address him. [ 10 ]
He retained his Welsh accent as well as his Welsh patriotism. [ 21 ]
Price Thomas was also well known for heart surgery. He had been involved with the first resection of coarctation of the aorta in 1946, with Clarence Crafoord. As cardiac surgery expanded and became more complex in the 1950s, he decided to leave it to his junior colleagues. [ 10 ] Charles Drew went on to research hypothermia and cardiac surgery , whilst Peter Jones carried on with thoracic surgery. [ 25 ]
Price Thomas was a chain-smoker himself, carrying at least 50 cigarettes in his pocket, and consequently suffered from lung cancer. His caricature in Ellis's Operations that made history (1996) shows a suited Price Thomas with numerous cigarette stubs at his feet. [ 21 ] In 1964, Price Thomas underwent a lobectomy for lung cancer, performed by the same surgeon (Charles Drew) who had assisted him in the King's operation in 1951. [ 26 ]
Despite his ill-health, he remained actively involved in his presidential projects. As president of the Welsh National School of Medicine , Price Thomas remained active in matters of medical education and the school progressed under his leadership. He continued to attend council meetings and ceremonies. In 1965 he laid the foundation stone of a large medical teaching centre in Heath Park, which is situated in the north of Cardiff . This centre incorporated the University Hospital of Wales, a new dental school and hospital and also the Tenovus Institute for Cancer Research. [ 6 ]
Price Thomas died at the age of 79 years, on 19 March 1973. [ 8 ]
He was buried in New Bethel Chapel cemetery, Mynyddislwyn , where his parents were buried. A memorial service was held in Westminster Abbey on 29 May 1973. [ 6 ]
For his work within the medical community Price Thomas received numerous decorations and honorary appointments. In 1951, Price Thomas was appointed a Knight Commander of the Royal Victorian Order after operating successfully on King George VI . [ 6 ] [ 27 ] [ 28 ]
Price Thomas was the third president of the Travelling Surgical Club, as it was known from 1952 to 1972. Now known as the Travelling Surgical Society of Great Britain, the Price Thomas Travelling Fellowship was established in the memory of Price Thomas and his surgeon son Martyn. Two bursaries are awarded annually to inspire education and encourage surgical exchanges. [ 29 ]
The operating table is on display at Westminster Hospital , while Cyril F. Scurr donated the ECG machine to the British Oxygen Company Museum at the Association of Anaesthetists of Great Britain. [ 18 ]
Price Thomas will be remembered for the thoracotomy on King George VI, [ 6 ] which was re-enacted in Stephen Daldry 's TV series The Crown in 2016. The highly realistic and accurate model of the king complete with surgical incisions was donated to the Gordon Museum of Pathology as an educational aid. [ 30 ] The controversies over the cause of the king's death were also touched on in the 2010 film The King's Speech . [ 18 ]
|
https://en.wikipedia.org/wiki/Clement_Price_Thomas
|
Clinical clerkships encompass a period of medical education in which students – medical , dental , veterinary , nursing or otherwise – practice medicine under the supervision of a health practitioner. [ 1 ]
In medical education , a clerkship , or rotation , refers to the practice of medicine by medical students ( M.D. , D.O. , D.P.M ) during their final year(s) of study. [ 2 ] Traditionally, the first half of medical school trains students in the classroom setting, and the second half takes place in a teaching hospital . [ 3 ] Clerkships give students experience in all parts of the hospital setting, including the operating room , emergency department , and various other departments that allow learning by viewing and doing.
Students are required to undergo a pre-clerkship course, which includes an introduction to clinical medicine, clinical skills, and clinical reasoning. [ 4 ] A performance assessment such as the Objective Structured Clinical Examination (OSCE) is conducted at the end of this period. [ 4 ] During the clerkship training, students are required to rotate through different medical specialties and treat patients under the supervision of physicians . Students elicit patient histories , complete physical examinations , write progress notes , and assist in surgeries and medical procedures . They are also actively involved in the diagnosis and treatment of patients under the supervision of a resident or faculty. [ 2 ]
Students undergoing two-year clerkships spend their first year in a patient care environment in month-long rotations with limited patient workloads. [ 5 ] In their final year, when they are sometimes referred to as sub-interns or externs , they are given more patient care responsibilities in a variety of elective rotations.
The work hours are that of a full-time job, generally similar to that of residents . Students may also be required to work on weekends and to be on call.
For medical students , clerkships occur after the basic science curriculum, and are supervised by medical specialists at a teaching hospital or medical school . Typically, certain clerkships are required to obtain the Doctor of Medicine degree or the Doctor of Osteopathic Medicine degree in the United States (e.g., internal medicine , surgery , pediatrics ), while others are elective (e.g., dermatology , pathology , and neurology ).
The intent of the clinical clerkship is to teach the medical student the fundamentals of clinical examination, evaluation, and care provision, and to enable the student to select the course of further study. Another purpose of the clerkship is for the student to determine whether they really want to pursue a career in the field of medicine. [ 6 ] During the clinical clerkship, the medical student will interact with real patients much as a physician does, but their evaluations and recommendations will be reviewed and approved by more senior physicians. The expectation is that students will not only master the knowledge needed to treat patients successfully but will also learn to assume the physician's role. [ 7 ]
In the United States, medical school typically lasts four years. Medical students spend their third and fourth years rotating through a combination of required clerkships and electives. Most medical schools require rotations in internal medicine , surgery , pediatrics , psychiatry , obstetrics and gynecology , family medicine , and neurology . Some schools may additionally require emergency medicine , anesthesiology , radiology , ambulatory medicine , or intensive-care medicine . Furthermore, a common graduation requirement is to complete a sub-internship in a specialty, where the medical student acts as an intern . [ citation needed ]
In the 2010s, the New South Wales administration partnered with the University of Wollongong to enroll its senior medical students in a year-long integrated experience of longitudinal clinical clerkship. Students were sent to regional, rural or remote areas of NSW and worked in interprofessional hospital and community teams in which, under supervision and review, they were given first access to acute and chronic care patients. Active and experiential learning was based on multi-professional general practices, primary health care clinics, hospital emergency departments, ward-based patient care and surgery.
Care and supervision had been modelled on the previous Cambridge community-based clinical course and on the Parallel Rural Community Curriculum introduced by South Australia in 2007. [ 8 ]
In nursing education , a clerkship refers to the clinical courses conducted by students during their final year of studies. Student satisfaction with the clerkship is a determining factor in the selection of a nursing field. [ 9 ] [ 10 ]
Physician assistant programs in the United States use the term in the same manner. [ 11 ] [ 12 ] [ 13 ]
|
https://en.wikipedia.org/wiki/Clerkship_(medicine)
|
Climacturia is urinary incontinence at the moment of sexual climax ( orgasm ). It can be a result of radical prostatectomy to treat prostate cancer . It is uncomfortable at times, but usually harmless. [ 1 ] [ 2 ]
This sexuality -related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Climacturia
|
Clinical Breast Cancer is a bimonthly peer-reviewed medical journal established in 2000 and published by Elsevier . It covers all areas related to breast cancer .
Clinical Breast Cancer is indexed by Index Medicus / PubMed , EMBASE Excerpta Medica, ISI Current Contents , CINAHL , Scopus , and Chemical Abstracts . According to the Journal Citation Reports , the journal has a 2017 impact factor of 2.703. [ 1 ]
This article about an oncology journal is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clinical_Breast_Cancer
|
Clinical Cancer Research is a peer-reviewed medical journal on oncology , including the cellular and molecular characterization, prevention, diagnosis, and therapy of human cancer, medical and hematological oncology, radiation therapy , pediatric oncology, pathology , surgical oncology, and clinical genetics . The applications of the disciplines of pharmacology , immunology , cell biology , and molecular genetics to intervention in human cancer are also included. One of the main interests of Clinical Cancer Research is on clinical trials that evaluate new treatments together with research on pharmacology and molecular alterations or biomarkers that predict response or resistance to treatment. [ 1 ] Another priority for Clinical Cancer Research is laboratory and animal studies of new drugs as well as molecule-targeted agents with the potential to lead to clinical trials, and studies of targetable mechanisms of oncogenesis , progression of the malignant phenotype , and metastatic disease. [ 1 ] The journal is published by the American Association for Cancer Research .
The first issue of Clinical Cancer Research was published in January 1995. [ 2 ] By 1 December 1994, 128 manuscripts had been submitted for publication by investigators representing a variety of clinical and laboratory disciplines not only from the United States but also from the international research community. In 1998, the number of manuscripts submitted had risen from 500 in the first year to almost 800. The journal reported an acceptance rate of 52% at that time. With the aim of publishing only papers of high quality, the editors decided to increase the stringency of review. [ 3 ]
The journal is abstracted and indexed in Chemical Abstracts , Index Medicus , MEDLINE , Science Citation Index , and Current Contents /Clinical Medicine. According to the Journal Citation Reports , the journal has a 2013 impact factor of 8.193, ranking it 13th out of 202 journals in the category "Oncology". [ 4 ]
|
https://en.wikipedia.org/wiki/Clinical_Cancer_Research
|
Clinical Colorectal Cancer is a peer-reviewed medical journal published by CIG Media Group (Cancer Information Group) from 2001 to 2010 and by Elsevier since 2011. It publishes original articles describing various aspects of clinical and translational research of gastrointestinal cancers. The journal is devoted to articles on detection, diagnosis, prevention, and treatment of colorectal, pancreatic, liver, and other gastrointestinal cancers. The main emphasis is on recent scientific developments in all areas related to gastrointestinal cancers. Specific areas of interest include clinical research and mechanistic approaches; drug sensitivity and resistance; gene and antisense therapy; pathology, markers, and prognostic indicators; chemoprevention strategies; multimodality therapy; and integration of various approaches.
Clinical Colorectal Cancer is indexed in Index Medicus / PubMed , EMBASE Excerpta Medica, ISI Current Contents , CINAHL (Cumulative Index to Nursing and Allied Health Literature), Chemical Abstracts , and Journal Citation Reports .
The journal publishes editorials, original research papers, comprehensive reviews, current treatment reports, case reports, brief communications, current trials, translational medicine pieces, and a "Meeting Highlights" section.
This article about an oncology journal is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clinical_Colorectal_Cancer
|
Clinical Genitourinary Cancer is a peer-reviewed medical journal published by Elsevier , and previously by CIG Media Group (Cancer Information Group). The journal publishes articles on detection, diagnosis, prevention, and treatment of genitourinary cancers. The main emphasis is on recent scientific developments in all areas related to genitourinary cancers. The journal was previously published as Clinical Prostate Cancer through September 2005.
Clinical Genitourinary Cancer is indexed in Index Medicus / PubMed , EMBASE Excerpta Medica, ISI Current Contents , CINAHL (Cumulative Index to Nursing and Allied Health Literature), Chemical Abstracts , and Journal Citation Reports .
The journal publishes articles in the categories Review, Perspective, Original Study, and Case Series.
This article about an oncology journal is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clinical_Genitourinary_Cancer
|
Clinical Lung Cancer is a peer-reviewed medical journal that has been published by Elsevier since 2011. It was established by the CIG Media Group in 1999.
Clinical Lung Cancer is indexed in Index Medicus / PubMed , EMBASE ( Excerpta Medica ), Current Contents /Clinical Medicine, Science Citation Index Expanded , CINAHL , and Chemical Abstracts . It is published six times annually.
Clinical Lung Cancer publishes articles on detection, diagnosis, prevention, and treatment of lung cancer. The emphasis is on recent scientific developments in all areas related to lung cancer.
Areas of interest include clinical research and mechanistic approaches; drug sensitivity and resistance; gene and antisense therapy; pathology, markers, and prognostic indicators; chemoprevention strategies; multimodality therapy; and integration of various approaches.
The journal publishes editorials, original research papers, comprehensive reviews, current treatment reports, case reports, brief communications, current trials, translational medicine pieces, and a "Meeting Highlights" section.
The editor-in-chief is David R. Gandara. He is a member of the board of directors of the International Association for the Study of Lung Cancer and the Addario Foundation. He served as a member of the board of directors of the American Society of Clinical Oncology (ASCO) and as secretary-treasurer. Gandara is a member of professional committees, including the NCI Investigational Drug Steering Committee and the NCI Science Correlates Committee.
He is the chair of the Southwest Oncology Group (SWOG) Lung Committee. Gandara has published more than 225 articles and 10 book chapters. He holds a BA from the University of Texas and an MD with honors from the University of Texas Medical Branch .
|
https://en.wikipedia.org/wiki/Clinical_Lung_Cancer
|
Clinical Lymphoma, Myeloma & Leukemia is a peer-reviewed medical journal published by Elsevier (previously by CIG Media Group ). It was established as Clinical Lymphoma in 2000, renamed Clinical Lymphoma & Myeloma in 2005, and obtained its current name in 2010. The journal covers research on detection, diagnosis, prevention, and treatment of lymphoma , myeloma , leukemia , and related disorders, including macroglobulinemia , amyloidosis , and plasma-cell dyscrasias .
The journal is abstracted and indexed in Index Medicus / MEDLINE / PubMed , EMBASE , Excerpta Medica , Current Contents /Clinical Medicine, CINAHL , Chemical Abstracts , Scopus , and the Science Citation Index Expanded . According to the Journal Citation Reports , the journal has a 2014 impact factor of 2.02. [ 1 ]
This article about an oncology journal is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clinical_Lymphoma,_Myeloma_&_Leukemia
|
Clinical Medicine Insights: Oncology is a peer-reviewed open access academic journal focusing on all aspects of cancer research and oncology . The journal was founded in 2007, and was originally published by Libertas Academica , but SAGE Publications became the publisher in September 2016. [ 1 ] The editor-in-chief is William Chi-shing Cho.
The journal is indexed in a number of bibliographic databases.
The specialized news platform, OncoDaily, has also highlighted papers published in the journal. [ 4 ] [ 5 ]
This article about an oncology journal is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clinical_Medicine_Insights:_Oncology
|
Clinical Oncology is a peer-reviewed medical journal covering oncology . It was established in 1989 and is published monthly by Elsevier . It is the official journal of the Royal College of Radiologists . The editor-in-chief is Thankamma Ajithkumar. [ 1 ] According to the Journal Citation Reports , the journal has a 2022 impact factor of 3.4. [ 2 ]
This article about an oncology journal is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clinical_Oncology
|
Clinical Ovarian Cancer & Other Gynecologic Malignancies was a peer-reviewed medical journal published by Elsevier . It covered research on the detection, diagnosis , prevention, and treatment of ovarian cancer . Specific areas of interest included clinical research and mechanistic approaches, drug sensitivity and resistance , gene and antisense therapy, pathology, markers, and prognostic indicators, chemoprevention strategies, multimodality therapy, and integration of various approaches. It was replaced by Clinical Ovarian and other Gynecologic Cancer , which was discontinued in 2016. [ 1 ]
The journal was abstracted and indexed in CINAHL , EMBASE , and Chemical Abstracts .
The journal published editorials, original research papers, reviews, current treatment reports, case reports, brief communications, current trials, translational medicine pieces, and a "Meeting Highlights" section.
|
https://en.wikipedia.org/wiki/Clinical_Ovarian_Cancer_&_Other_Gynecologic_Malignancies
|
Clinical and Translational Science Award ( CTSA ) is a type of U.S. federal grant administered by the National Center for Advancing Translational Sciences , part of the National Institutes of Health . The CTSA program began in October 2006 under the auspices of the National Center for Research Resources with a consortium of 12 academic health centers . The program was fully implemented in 2012, comprising 60 grantee institutions and their partners. [ 1 ]
The CTSA program helps institutions create an integrated academic home for clinical and translational science with the resources to support researchers and research teams working to apply new knowledge and techniques to patient care. The program is structured to encourage collaborations among researchers from different scientific fields. [ 2 ]
The CTSA program has raised awareness of clinical and translational science as a discipline among academic and industry researchers, philanthropists, government officials and the broader public. [ 3 ]
CTSA consortium leaders have set five broad goals to guide their activities. These include building national clinical and translational research capability, providing training and improving career development of clinical and translational scientists, enhancing consortium-wide collaborations, improving the health of U.S. communities and the nation, and advancing T1 translational research to move basic laboratory discoveries and knowledge into clinical testing. [ 4 ]
Institutions funded by the CTSA program are working with other research facilities to improve drug discovery and development. For example, several consortium institutions are collaborating with the Rat Resource and Research Center at the University of Missouri to increase the speed of drug screening so that drug research is translated into clinical uses more quickly. [ 5 ] Consortium institutions also are creating new fields of study or new uses for technologies. For example, researchers at the University of Rochester are pioneering the field of lipidomics , exploring how lipids affect human disease. Their work has led to lipid research collaborations among experts in community and preventive medicine , proteomics , nutrition , and pharmaceutical research . [ 6 ]
Some CTSA institutions are collaborating with community-based organizations to ensure research is translated successfully into clinical practice. Researchers at Duke University are working to prevent strokes by partnering with a local health care program to build stroke awareness among Latino immigrants. [ 7 ]
Others are pursuing public and private partnerships to speed innovation. For example, the Oregon Health and Science University and Intel are developing new wireless devices with sensors to detect symptoms in patients who have diabetes or those at high risk of stroke so they can be treated earlier. [ 8 ]
With the most recent awards, announced in July 2011, the consortium comprises 60 institutions in 30 states and the District of Columbia. [ 9 ] [ 10 ]
On 20 December 2011, the OIG published a report critical of the NIH's administration of the Clinical and Translational Science Awards (CTSA) program. [ 11 ] The report read in part:
For all 38 Clinical and Translational Science Awards (CTSA) cooperative agreements awarded from 2006 through 2008, CTSA program staff did not document awardees' progress in compliance with NIH policy.
CTSA program staff must ensure that awardees submit annual progress reports and financial status reports, determine whether awardee progress remains satisfactory before awardees receive continued funding, and maintain official files in accordance with Department of Health and Human Services (HHS) policy. Additionally, under cooperative agreements, CTSA program staff provide assistance to awardees above and beyond the levels usually required for program stewardship of grants. This level of stewardship is known as substantial involvement. CTSA program staff assign NIH Project Scientists to awardees to provide this substantial involvement through technical assistance, advice, and coordination. Names of substantially involved staff and an annual summary of staff involvement should be documented in the official files.
CTSA program staff documented a comparison of accomplishments to research objectives for only 1 of 38 awardees throughout our review period. Although reviews for six awardees' files mentioned an inability to fulfill goals, only one file included a note from CTSA program staff regarding resolution. Also, most progress reports and half of financial status reports were late, yet the files contained no evidence that CTSA program staff took action to address timeliness of reports. CTSA program staff did not maintain files in accordance with HHS policy. Finally, no files contained evidence that CTSA program staff provided substantial involvement to awardees in accordance with Federal requirements and NIH policy.
We recommend that NIH ensure that CTSA program staff (1) document their monitoring of awardee progress; (2) ensure timely submission of required reports; (3) maintain official files in accordance with Federal policy; and (4) as required for cooperative agreements, provide substantial involvement to CTSA awardees. NIH concurred with our recommendations.
|
https://en.wikipedia.org/wiki/Clinical_and_Translational_Science_Award
|
Clinical attachment loss ( CAL ) is the predominant clinical manifestation and determinant of periodontal disease .
Teeth are attached to the surrounding and supporting alveolar bone by periodontal ligament ( PDL ) fibers; these fibers run from the bone into the cementum that naturally exists on the entire root surface of teeth. They are also attached to the gingival (gum) tissue that covers the alveolar bone by an attachment apparatus; because this attachment exists superficial to the crest, or height, of the alveolar bone, it is termed the supracrestal attachment apparatus .
The supracrestal attachment apparatus is composed of two layers: the coronal junctional epithelium and the more apical gingival connective tissue fibers . [ 1 ] The two layers together form the thickness of the gingival tissue, and this dimension is termed the biologic width .
Plaque-induced periodontal diseases are generally classified as destructive or non-destructive. Clinical attachment loss is a sign of destructive (physiologically irreversible) periodontal disease.
The term clinical attachment loss is used almost exclusively to refer to connective tissue attachment loss: https://medical-dictionary.thefreedictionary.com/loss+of+attachment
Sites with periodontitis exhibit clinical signs of gingival inflammation and loss of connective tissue attachment. Connective tissue attachment loss refers to the pathological detachment of collagen fibers from the cemental surface with the concomitant apical migration of the junctional or pocket epithelium onto the root surface. [ 2 ]
|
https://en.wikipedia.org/wiki/Clinical_attachment_loss
|
Clinical cardiac electrophysiology (also referred to as cardiac electrophysiology or simply EP ) is a branch of the medical specialty of cardiology concerned with the study and treatment of rhythm disorders of the heart . [ 1 ] Cardiologists with expertise in this area are usually referred to as electrophysiologists. Electrophysiologists are trained in the mechanism, function, and performance of the electrical activities of the heart . Electrophysiologists work closely with other cardiologists and cardiac surgeons to assist or guide therapy for heart rhythm disturbances ( arrhythmias ). They are trained to perform interventional and surgical procedures to treat cardiac arrhythmia.
The training required to become an electrophysiologist is lengthy, requiring eight years after medical school (in the U.S.): three years of internal medicine residency , three years of clinical cardiology fellowship , and two years of clinical cardiac electrophysiology. This length is necessary due to the significant complexity of the patients that electrophysiologists usually treat and the constant advances in methods and equipment, which make the field of electrophysiology one of the most demanding subspecialties of modern medicine.
An electrophysiology study is any of a number of invasive (intracardiac) and non-invasive recordings of spontaneous electrical activity, as well as of cardiac responses to programmed electrical stimulation . These studies are performed to assess arrhythmias , elucidate symptoms, evaluate abnormal electrocardiograms , assess risk of developing arrhythmias in the future, and design treatment.
In addition to diagnostic testing of the electrical properties of the heart, electrophysiologists are trained in therapeutic and surgical methods to treat many of the rhythm disturbances of the heart. Therapeutic modalities employed in this field include antiarrhythmic drug therapy and surgical implantation of pacemakers and implantable cardioverter-defibrillators .
Common rhythms dealt with include atrial fibrillation , ventricular tachycardia , and the supraventricular tachycardias . Abnormal rhythms can be treated in multiple ways, and the choice of treatment is often individualized based on symptoms and patient preference.
Electrophysiologists commonly employ a number of diagnostic tests, which may be performed or interpreted exclusively by the electrophysiologist. Other tests, such as cardiac stress testing , may be included in an evaluation but are not exclusive to electrophysiology.
Electrophysiologists also oversee the initial administration and monitoring of the effect of drugs for the treatment of heart rhythm disorders. They are often involved when severe or life-threatening arrhythmias are being treated, or when multiple drugs must be used to treat an arrhythmia. Antiarrhythmic agents such as flecainide , dofetilide , and amiodarone are commonly used to try to control rhythms.
Ablation therapy is the catheter-based creation of lesions in the heart (with radiofrequency energy, cryotherapy (destructive freezing), microwave, or ultrasound energy) to cure or control arrhythmias (see radiofrequency ablation ). Ablation is usually performed during the same procedure as the electrophysiology study, during which attempts are made to induce the arrhythmia and to elucidate the mechanism of the arrhythmia for which ablation therapy is sought.
Implanted devices include pacemakers and implantable cardioverter-defibrillators.
Additionally, there are at times indications to remove these devices; extraction (i.e., removal) can also be performed by electrophysiologists.
Once devices are implanted, long-term clinical follow-up and reprogramming also fall to the electrophysiologist.
|
https://en.wikipedia.org/wiki/Clinical_cardiac_electrophysiology
|
Clinical death is the medical term for cessation of blood circulation and breathing, the two criteria necessary to sustain the lives of human beings and of many other organisms. [ 1 ] It occurs when the heart stops beating in a regular rhythm, a condition called cardiac arrest . The term is also sometimes used in resuscitation research.
Stopped blood circulation has historically proven irreversible in most cases. Prior to the invention of cardiopulmonary resuscitation (CPR), defibrillation , epinephrine injection, and other treatments in the 20th century, the absence of blood circulation (and vital functions related to blood circulation) was historically considered the official definition of death . With the advent of these strategies, cardiac arrest came to be called clinical death rather than simply death , to reflect the possibility of post-arrest resuscitation.
At the onset of clinical death, consciousness is lost within several seconds, and in dogs, measurable brain activity has been observed to stop within 20 to 40 seconds. [ 2 ] Irregular gasping may occur during this early time period, and is sometimes mistaken by rescuers as a sign that CPR is not necessary. [ 3 ] During clinical death, all tissues and organs in the body steadily accumulate a type of injury called ischemic injury .
Most tissues and organs of the body can survive clinical death for considerable periods. Blood circulation can be stopped in the entire body below the heart for at least 30 minutes, with injury to the spinal cord being a limiting factor. [ 4 ] Detached limbs may be successfully reattached after 6 hours of no blood circulation at warm temperatures. Bone, tendon, and skin can survive as long as 8 to 12 hours. [ 5 ]
The brain, however, appears to accumulate ischemic injury faster than any other organ. Without special treatment after circulation is restarted, full recovery of the brain after more than 3 minutes of clinical death at normal body temperature is rare. [ 6 ] [ 7 ] Usually brain damage or later brain death results after longer intervals of clinical death even if the heart is restarted and blood circulation is successfully restored. Brain injury is therefore the chief limiting factor for recovery from clinical death.
Although loss of function is almost immediate, there is no specific duration of clinical death at which the non-functioning brain clearly dies. The most vulnerable cells in the brain, CA1 neurons of the hippocampus , are fatally injured by as little as 10 minutes without oxygen. However, the injured cells do not actually die until hours after resuscitation. [ 8 ] This delayed death can be prevented in vitro by a simple drug treatment even after 20 minutes without oxygen. [ 9 ] In other areas of the brain, viable human neurons have been recovered and grown in culture hours after clinical death. [ 10 ] Brain failure after clinical death is now known to be due to a complex series of processes called reperfusion injury that occur after blood circulation has been restored, especially processes that interfere with blood circulation during the recovery period. [ 11 ] Control of these processes is the subject of ongoing research.
In 1990, the laboratory of resuscitation pioneer Peter Safar discovered that reducing body temperature by three degrees Celsius after restarting blood circulation could double the time window of recovery from clinical death without brain damage from 5 minutes to 10 minutes. This induced hypothermia technique is beginning to be used in emergency medicine. [ 12 ] [ 13 ] The combination of mildly reducing body temperature, reducing blood cell concentration, and increasing blood pressure after resuscitation was found especially effective – allowing for recovery of dogs after 12 minutes of clinical death at normal body temperature with practically no brain injury. [ 14 ] [ 15 ] The addition of a drug treatment protocol has been reported to allow recovery of dogs after 16 minutes of clinical death at normal body temperature with no lasting brain injury. [ 16 ] Cooling treatment alone has permitted recovery after 17 minutes of clinical death at normal temperature, but with brain injury. [ 17 ]
Under laboratory conditions at normal body temperature, the longest period of clinical death of a cat (after complete circulatory arrest) survived with eventual return of brain function is one hour. [ 18 ] [ 19 ]
Reduced body temperature, or therapeutic hypothermia , during clinical death slows the rate of injury accumulation, and extends the time period during which clinical death can be survived. The decrease in the rate of injury can be approximated by the Q 10 rule, which states that the rate of biochemical reactions decreases by a factor of two for every 10 °C reduction in temperature. As a result, humans can sometimes survive periods of clinical death exceeding one hour at temperatures below 20 °C. [ 20 ] The prognosis is improved if clinical death is caused by hypothermia rather than occurring prior to it; in 1999, 29-year-old Swedish woman Anna Bågenholm spent 80 minutes trapped in ice and survived with a near full recovery from a 13.7 °C core body temperature. It is said in emergency medicine that "nobody is dead until they are warm and dead." [ 21 ] In animal studies, up to three hours of clinical death can be survived at temperatures near 0 °C. [ 22 ] [ 23 ]
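As a rough illustration of the arithmetic behind the Q 10 rule, the Python sketch below scales a survivable window by the predicted slowdown in injury accumulation. This is a minimal sketch, not a clinical tool: the baseline window of about 5 minutes at a normal body temperature of 37 °C and the value Q 10 = 2 are illustrative assumptions drawn from the surrounding text.

```python
# Minimal sketch of the Q10 rule described above -- illustrative only.
# Assumptions: normal body temperature 37 C, Q10 = 2 (reaction rates halve
# for every 10 C of cooling), and a baseline window of ~5 minutes at 37 C.

def q10_factor(temp_c: float, normal_c: float = 37.0, q10: float = 2.0) -> float:
    """Factor by which biochemical reaction rates slow at temp_c."""
    return q10 ** ((normal_c - temp_c) / 10.0)

def approx_window_minutes(temp_c: float, baseline_min: float = 5.0) -> float:
    """Approximate survivable window of clinical death at temp_c, scaling
    the baseline window by the slowdown in injury accumulation."""
    return baseline_min * q10_factor(temp_c)

for t in (37.0, 30.0, 20.0, 18.0):
    print(f"{t:4.1f} C -> slowdown x{q10_factor(t):.2f}, "
          f"window ~{approx_window_minutes(t):.1f} min")
```

By this rule alone, cooling from 37 °C to 18 °C slows injury accumulation by a factor of about 3.7, stretching a 5-minute window to roughly 19 minutes; the survival of more than an hour below 20 °C noted above, and the roughly 30 minutes tolerated during deep hypothermic circulatory arrest described below, suggest that a Q 10 of two understates the observed protection.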
The purpose of cardiopulmonary resuscitation (CPR) during cardiac arrest is ideally reversal of the clinically dead state by restoration of blood circulation and breathing. However, there is great variation in the effectiveness of CPR for this purpose. Blood pressure is very low during manual CPR, [ 24 ] resulting in only a ten-minute average extension of survival. [ 25 ] Yet there are cases of patients regaining consciousness during CPR while still in full cardiac arrest. [ 26 ] In the absence of cerebral function monitoring or frank return to consciousness, the neurological status of patients undergoing CPR is intrinsically uncertain: it lies somewhere between the state of clinical death and a normal functioning state.
Patients supported by methods that certainly maintain enough blood circulation and oxygenation for sustaining life during stopped heartbeat and breathing, such as cardiopulmonary bypass , are not customarily considered clinically dead. All parts of the body except the heart and lungs continue to function normally. Clinical death occurs only if machines providing sole circulatory support are turned off, leaving the patient in a state of stopped blood circulation.
Certain surgeries for cerebral aneurysms or aortic arch defects require that blood circulation be stopped while repairs are performed. This deliberate temporary induction of clinical death is called circulatory arrest . It is typically performed by lowering body temperature to between 18 °C and 20 °C (64 and 68 °F) and stopping the heart and lungs. This state is called deep hypothermic circulatory arrest . At such low temperatures most patients can tolerate the clinically dead state for up to 30 minutes without incurring significant brain injury. [ 27 ] Longer durations are possible at lower temperatures, but the usefulness of longer procedures has not been established yet. [ 28 ]
Controlled clinical death has also been proposed as a treatment for exsanguinating trauma to create time for surgical repair. [ 29 ]
Death was historically believed to be an event that coincided with the onset of clinical death. It is now understood that death is a series of physical events, not a single one, and determination of permanent death is dependent on other factors beyond simple cessation of breathing and heartbeat. [ 11 ]
Clinical death that occurs unexpectedly is treated as a medical emergency. CPR is initiated. In a United States hospital, a Code Blue is declared and Advanced Cardiac Life Support procedures used to attempt to restart a normal heartbeat. This effort continues until either the heart is restarted, or a physician determines that continued efforts are useless and recovery is impossible. If this determination is made, the physician pronounces legal death and resuscitation efforts stop.
If clinical death is expected due to terminal illness or withdrawal of supportive care, often a Do Not Resuscitate (DNR) or "no code" order is in place. This means that no resuscitation efforts are made, and a physician or nurse may pronounce legal death at the onset of clinical death. [ citation needed ]
A patient with working heart and lungs who is determined to be brain dead can be pronounced legally dead without clinical death occurring. However, some courts have been reluctant to impose such a determination over the religious objections of family members, such as in the Jesse Koochin case. [ 30 ] Similar issues were also raised by the case of Mordechai Dov Brody, but the child died before a court could resolve the matter. [ 31 ] Conversely, in the case of Marlise Muñoz , a hospital refused to remove a brain dead woman from life support machines for nearly two months, despite her husband's requests, because she was pregnant . [ 32 ]
|
https://en.wikipedia.org/wiki/Clinical_death
|
Clinical ecology was the name given by proponents in the 1960s to the claim that exposure to low levels of certain chemical agents harms susceptible people, causing multiple chemical sensitivity and other disorders. Clinical ecologists are people who support and promote this offshoot of conventional medicine. [ 1 ] They often have a background in the field of allergy or otorhinolaryngology , and the theoretical approach is derived in part from classic concepts of allergic responses, first articulated by Theron Randolph and developed by Richard Mackarness . [ 2 ]
Clinical ecologists support a cause-and-effect relationship for non-specific symptoms reported by some people after low-dose exposure to chemical, biologic, or physical agents. This pattern of low-dose reaction is not generally accepted by toxicologists. [ 1 ] Although some in the mainstream medical community continue to reject these claims, the concept is gaining some recognition under the modern and more clearly articulated classification of environmental medicine . [ 3 ] [ 4 ]
"Clinical Ecologist" is an environmental approach that is consistent with the practice of holistic medicine. Practitioners with this orientation do not use the term "Clinical Ecologist," although those opposed to this complementary medicine approach to illness often still do. Unlike terms such as physician or nurse , the term clinical ecologist is not legally regulated in any jurisdiction, which means that any person may legally claim to be a clinical ecologist. If wanted, they may obtain an extralegal certification or membership from the unregulated private organization American Academy of Environmental Medicine upon payment of a fee. [ 1 ] [ 5 ]
Many clinical ecologists are traditionally licensed healthcare professionals who hold advanced traditional medical certifications. Others may have more alternative training. [ citation needed ]
Randolph published a number of books to promote clinical ecology and environmental medicine.
In 1965, Randolph founded the Society for Clinical Ecology as an organization to promote his theories based on the symptoms of his patients, known as multiple chemical sensitivities (MCS).
During the 1980s the movement was rejected by some medical organizations and judges, [ 1 ] and health insurance companies often refused to pay its practitioners' bills. The society changed its name from the Society for Clinical Ecology to the American Academy of Environmental Medicine, a move made, according to its opponents, in order to escape its bad reputation. [ 3 ]
Despite the confusion in the traditional medical establishment regarding the classification and treatment of MCS, MCS has achieved credibility in workers compensation claims, tort liability, and regulatory actions. The pragmatic determination of MCS includes four elements: (1) the syndrome is acquired after a documentable environmental exposure that may have caused objective evidence of health effects; (2) the symptoms are referable to multiple organ systems and vary predictably in response to environmental stimuli; (3) the symptoms occur in relation to measurable levels of chemicals, but the levels are below those known to harm health; and (4) no objective evidence of organ damage can be found. [ 6 ]
Randolph's theories about chemical effects have been criticized by toxicologists. His broader interpretation of "allergies", beyond the IgE-mediated responses of true allergy, conflicted with the traditional allergists of his time. However, Randolph did not claim that environmental sensitivities were "true allergies" mediated by IgE, holding that this fine point was irrelevant to people suffering from non-allergic sensitivities. The turf war waged by allergists and defense expert witnesses during those years also has less relevance today than it once did. Several National Academy of Sciences workshops and research council inquiries into Gulf War syndrome have validated the idiosyncratic effect of low chemical exposure on sensitized individuals. [ citation needed ]
Clinical ecology is not a recognized medical specialty . [ 7 ] Practitioners have been criticized for tricking mentally ill and suggestible patients into thinking that they were chemically sensitive. [ 3 ] Twentieth century critics of clinical ecology charged that multiple chemical sensitivity (MCS) had never been clearly defined, no scientifically plausible mechanism has been proposed for it, no diagnostic tests have been substantiated, and not a single case has been scientifically proven. Well-conducted studies establishing the theories and practices of clinical ecology were not found in reviews of evidence supporting its practices by the American Medical Association in 1992, [ 8 ] the American College of Physicians in 1989, [ 9 ] the Canadian Psychiatric Association, the International Society of Regulatory Toxicology and Pharmacology in 1993, [ 10 ] the American Academy of Allergy, Asthma and Immunology, [ 11 ] and more recently by the American College of Occupational and Environmental Medicine in 1999. [ 12 ]
The development of GMO food and the increased use of herbicides on food crops has resulted in an increased interest in the area of environmental sensitivities. A polarized debate has grown between supporters of the new agri-technology who characterize themselves as rational scientists and opponents as ignorant alarmists. On the other hand, the opponents characterize the supporters as dogmatic industry shills and themselves as critical thinkers and environmentalists. Both groups claim to be the majority opinion, although the only consensus that has weight is within the government organizations that rule on safety. At issue is the non-industry science that characterizes herbicides and the genetically engineered pesticides of GMO crops as endocrine disruptors. That disruption also triggers autoimmune system responses consistent with those observed by clinical ecologists. [ citation needed ]
|
https://en.wikipedia.org/wiki/Clinical_ecology
|
Clinical empathy is expressed as the skill of understanding what a patient says and feels, and effectively communicating this understanding to the patient. [ 1 ] The opposite of clinical empathy is clinical detachment. Detached concern, or clinical detachment, is the ability to distance oneself from the patient in order to serve the patient from an objective standpoint. [ 2 ] For physicians to maximize their role as providers, a balance must be developed between clinical detachment and clinical empathy. [ 3 ]
In 2001, an instrument was created to measure a physician's empathy towards each patient. This tool is called the Jefferson Scale of Physician Empathy. [ 4 ] The 20-item questionnaire was originally developed for administration to medical students and physicians but has extended to dentistry and nursing because it is easy to interpret, administer, and analyze. [ 5 ] [ 6 ]
From a student's first year to their fourth year in medical school, empathy scores on the Jefferson Scale of Physician Empathy (S-version) decrease. [ 7 ] Both gender and specialty choice affect empathy scores, favoring women and primary care specialties. [ 8 ]
Clinical empathy is a main component of the patient-provider relationship. It is seen as a commonly accepted pillar of professionalism for medical students. [ 9 ] Empathy involves both cognitive and affective aspects. [ 10 ] The cognitive domain revolves around understanding a patient's experiences and being able to understand the world from their point of view. This contrasts with the affective aspect of empathy, which involves joining in the patient's emotional experiences and feelings and correlates more closely with sympathy. [ 4 ] Empathetic physicians share understanding with patients, which serves to benefit the patient in their physical, mental and social well-being. Both a provider's ability to provide empathetic care as well as a perception of this care by the patient are important in diagnosis and treatment. [ 11 ] Developing the ability to understand a patient's thoughts and feelings lends itself to a successful medical interview and collaborative treatment. [ 12 ] Practicing empathy in a clinical setting leads to greater patient satisfaction, [ 13 ] better compliance, [ 14 ] and fewer lawsuits. [ 15 ]
Clinical detachment is a means of providing objective, detached medical care while maintaining enough concern for the patient to offer emotional understanding. [ 16 ] A close patient-provider relationship threatens objectivity, therefore a social distance is expected to ensure professionalism. [ 17 ] Students in medical school are taught clinical detachment as a protective mechanism for dealing with emotional experiences such as death and dying. [ 18 ] Clinical detachment is also a means of dealing with the pressure of making mistakes [ 19 ] and medical uncertainty. [ 20 ] Suppression and repression of emotions, intellectualization , and humor are mechanisms used to confront distressing situations in order to give an objective assessment. [ 21 ]
Because empathy is a multi-faceted and complex concept, measurement proves to be difficult. [ 22 ] Although there are scales to measure empathy such as the Interpersonal Reactivity Index , developed by Davis, the Emotional Empathy Scale, developed by Mehrabian and Epstein, and the Hogan Empathy Scale, they were not created explicitly to measure physician empathy. The Jefferson Scale of Physician Empathy was created at the Center for Research in Medical Education and Health Care (CRMEHC) at Jefferson Medical College to measure patient perceptions of empathy from their provider. Construct validity, criterion-related validity, predictive validity, internal consistency, and test-retest reliability all provide empirical support for the Jefferson Scale of Physician Empathy. [ 23 ] The scale was originally intended for distribution to medical students and physicians. [ 4 ] Since its creation, it has been translated into 53 languages [ 24 ] and applied to other medical professions such as dentistry and nursing. [ 5 ] Three versions of the scale now exist, one for medical students (S-version), one for health professions (HP-version), and one for health professions students (HPS-version). [ 24 ] Results of the 20-item questionnaire indicate that higher scores correspond to higher levels of empathy in interpersonal care.
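Scoring such Likert-type instruments is mechanically simple. The sketch below is a hypothetical illustration only: it assumes a 7-point response format summed across 20 items (giving a possible 20 to 140 range), and the reverse-scored item numbers shown are invented for the example; the actual scoring key is defined by the scale's authors.

```python
# Hypothetical responses to a 20-item, 7-point Likert questionnaire
# (1 = strongly disagree, 7 = strongly agree); values are invented.
responses = [6, 7, 5, 6, 7, 4, 6, 5, 7, 6, 5, 6, 7, 6, 5, 7, 6, 6, 5, 7]
reverse_scored = {3, 6, 11, 18}  # hypothetical item numbers, not the real key

# Reverse-scored items map 1<->7, 2<->6, etc., i.e. r -> 8 - r on a 1-7 scale.
total = sum((8 - r) if (i + 1) in reverse_scored else r
            for i, r in enumerate(responses))
print(f"Total score: {total} (possible range 20-140; higher = more empathy)")
```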
Medical students' first experience with a patient is often with a cadaver in a gross anatomy course. Working intimately with a cadaver during a gross anatomy course captures the essence of the patient-provider relationship. [ 25 ] Cadaver dissection is a challenging emotional and mental experience. Involvement, emotional coping, and ability are three themes that develop during the dissection experience. [ 26 ] Medical students in a gross anatomy course may experience mixed emotions and variable reactions to cadaver dissection. Students who view their donor as a scientific specimen are less opposed to dissection, whereas students who view their donor as a former living person face greater difficulty with dissection and foster feelings of empathy towards the cadaver . [ 27 ] Because of the emotional impact of dissection, students may develop detached concern to cope with these feelings. [ 28 ]
In western countries, medical education emphasizes a "body as first patient" philosophy for dissection. [ 29 ] This anonymizes cadavers, which fosters a different relationship than in eastern countries. Many eastern countries adopt a mindset of donor as "first teacher". For example, in Thailand, students are encouraged to develop a personal relationship with their donors. The students are instructed to view their donors with the highest honor and view the cadavers as a "great teacher". [ 30 ] This intention allows medical students to form a relationship that is familiar to them, one of a teacher and student, as opposed to approaching their donor as a doctor, a practice that is new and unfamiliar to students. [ 31 ] Although eastern and western countries handle cadaver relationships differently, it can be generalized that gross anatomy courses offer an opportunity for students to examine their feelings on life, death, and dying. [ 32 ] These courses also promote development of coping strategies for stressful situations. [ 29 ]
Over the course of medical education, males and females differ in their attitudes and execution of empathetic treatment. Students entering people-oriented specialties such as family medicine , general internal medicine , and other primary care specialties have higher scores on the Jefferson Scale of Physician Empathy, whereas students entering technology-oriented specialties such as pathology , radiology , and anesthesiology score lower on empathy. [ 8 ] Female students are more likely to enter people-oriented specialties whereas men are more likely to enter technology-oriented specialties. [ 12 ] Female students score higher than male students on the Jefferson Scale of Physician Empathy across all years of medical school education. Female students also have a greater likelihood than men to disagree with a need for detached concern in order to provide the best medical treatment. [ 33 ]
Several studies have indicated that clinical empathy may decline in students during medical school, with a change even being observed from the start to the end of first year. [ 34 ] If this is the case, there could be negative consequences, as it is feared that a reduction in empathy may affect professionalism and quality of care.
A recent study investigated the causes of the decline. [ 35 ] It seems that a "hidden curriculum" which includes a high workload, a paucity of adequate role models, and lack of support can cause adaptations such as cynicism and detachment. In addition, the decrease may be due to a medical curriculum that causes students to develop a scientific rather than a holistic approach to medicine. [ 36 ] [ 37 ] Another reason is that medical school is a competitive environment that can cause students to prioritise their performance in medical school rather than maintaining a caring demeanour. [ 38 ] Similarly, it has also been suggested that as the pressure to obtain medical knowledge increases throughout medical school, students become more worried about retaining this knowledge alongside having to remain empathetic and caring towards patients. Students are more likely to lose their empathic qualities as compensation to allow them to still feel as though they are capable of learning all of the information they are required to. [ 39 ] Furthermore, as students progress through medical school, they may be more likely to dehumanise patients to protect themselves from feelings of distress as they encounter increasingly challenging patients. As a result, their empathy for patients may suffer. [ 40 ]
Many methods have been put forward which aim to maintain the empathy of healthcare students and professionals with varying success. [ 41 ] Interventions have included medical humanities and creative arts around a patient narrative, writing interventions including creative writing and blogging, drama, formal communication and inter-personal skills training and problem based learning. [ 41 ]
|
https://en.wikipedia.org/wiki/Clinical_empathy
|
Clinical epidemiology is a subfield of epidemiology specifically focused on issues relevant to clinical medicine . The term was first introduced by virologist John R. Paul in his presidential address to the American Society for Clinical Investigation in 1938. [ 1 ] [ 2 ] It is sometimes referred to as "the basic science of clinical medicine". [ 3 ]
When he coined the term "clinical epidemiology" in 1938, John R. Paul defined it as "a marriage between quantitative concepts used by epidemiologists to study disease in populations and decision-making in the individual case which is the daily fare of clinical medicine". [ 4 ] According to Stephenson & Babiker (2000), "Clinical epidemiology can be defined as the investigation and control of the distribution and determinants of disease." [ 5 ] Walter O. Spitzer has highlighted the ways in which the field of clinical epidemiology is not clearly defined. However, he felt that, despite criticism of the term, it was a useful way to define a specific subfield of epidemiology. [ 6 ] In contrast, John M. Last felt that the term was an oxymoron, and that its increasing popularity in many different medical schools was a serious problem. [ 4 ]
Clinical epidemiology aims to optimise the diagnostic, treatment and prevention processes for an individual patient, based on an assessment of the diagnostic and treatment process using epidemiological research data. [ 7 ] [ 8 ] A central tenet of clinical epidemiology is that every clinical decision must be based on rigorous scientific evidence.
The objectives of clinical epidemiology are primarily to develop epidemiologically sound clinical guidelines and standards for diagnosis, disease progression, prognosis, treatment and prevention. The data obtained in epidemiological studies are also applicable for the epidemiological justification of preventive programmes for communicable and noncommunicable diseases. [ 9 ]
There are various types of epidemiological studies in use: case-control studies, cohort studies, and experimental randomised controlled trials (RCTs).
Experimentation is a general scientific method of testing causal hypotheses by means of an intervention (a controlled influence) in the natural course of the phenomenon under study. In order to assess the result of the intervention, the experiment necessarily involves comparable groups, experimental and control; that is, the study is controlled. The division of patients into groups should be done at random, by randomisation. [ citation needed ]
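As a concrete illustration of the randomisation step described above, the sketch below performs a simple 1:1 allocation of patients to experimental and control arms. It is a minimal example only; real trials typically use prespecified schemes such as blocked or stratified randomisation.

```python
import random

def randomise(patient_ids, seed=None):
    """Simple 1:1 allocation of patients into experimental and control groups."""
    rng = random.Random(seed)  # seeded only so the example is reproducible
    ids = list(patient_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"experimental": ids[:half], "control": ids[half:]}

print(randomise(range(1, 11), seed=42))
```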
A key aspect of clinical epidemiology is the evaluation of the effectiveness of treatment and prevention medicines. [ 10 ] The effectiveness of preventive and curative medicines is divided into potential effectiveness (the maximum achievable effect of interventions at a given level of science) and real effectiveness (the effect that is available in practice). [ citation needed ]
This medical article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clinical_epidemiology
|
A clinical formulation , also known as case formulation and problem formulation , is a theoretically-based explanation or conceptualisation of the information obtained from a clinical assessment. It offers a hypothesis about the cause and nature of the presenting problems and is considered an adjunct or alternative approach to the more categorical approach of psychiatric diagnosis . [ 1 ] In clinical practice, formulations are used to communicate a hypothesis and provide a framework for developing the most suitable treatment approach. It is most commonly used by clinical psychologists and is deemed to be a core component of that profession. [ 2 ] Mental health nurses, [ 3 ] social workers, and some psychiatrists [ 4 ] may also use formulations.
Different psychological schools or models utilize clinical formulations, including cognitive behavioral therapy (CBT) and related therapies: systemic therapy , [ 5 ] psychodynamic therapy , [ 6 ] and applied behavior analysis . [ 7 ] The structure and content of a clinical formulation is determined by the psychological model. Most systems of formulation contain the following broad categories of information: symptoms and problems; precipitating stressors or events; predisposing life events or stressors; and an explanatory mechanism that links the preceding categories together and offers a description of the precipitants and maintaining influences of the person's problems. [ 8 ]
Behavioral case formulations used in applied behavior analysis and behavior therapy are built on a ranked list of problem behaviors, [ 7 ] from which a functional analysis is conducted, [ 9 ] sometimes based on relational frame theory . [ 10 ] Such functional analysis is also used in third-generation behavior therapy or clinical behavior analysis such as acceptance and commitment therapy [ 11 ] and functional analytic psychotherapy . [ 12 ] Functional analysis looks at setting events (ecological variables, history effects, and motivating operations), antecedents, behavior chains, the problem behavior, and the consequences, short- and long-term, for the behavior. [ 9 ]
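The components named above (setting events, antecedents, behavior chains, the problem behavior, and short- and long-term consequences) suggest a natural record structure. The sketch below is a hypothetical data layout for one functional-analysis entry; the field names are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FunctionalAnalysisRecord:
    """One entry in a behavioral case formulation (illustrative structure)."""
    setting_events: List[str]       # ecological variables, history, motivating operations
    antecedents: List[str]          # triggers preceding the behavior
    behavior_chain: List[str]       # sequence of responses leading up to the behavior
    problem_behavior: str
    short_term_consequences: List[str]
    long_term_consequences: List[str]

record = FunctionalAnalysisRecord(
    setting_events=["poor sleep"],
    antecedents=["denied request"],
    behavior_chain=["raised voice", "pacing"],
    problem_behavior="property destruction",
    short_term_consequences=["escape from demand"],
    long_term_consequences=["social isolation"],
)
```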
A model of formulation that is more specific to CBT is described by Jacqueline Persons. [ 13 ] This has seven components: problem list, core beliefs, precipitants and activating situations, origins, working hypothesis, treatment plan, and predicted obstacles to treatment.
A psychodynamic formulation would consist of a summarizing statement, a description of nondynamic factors, description of core psychodynamics using a specific model (such as ego psychology , object relations or self psychology ), and a prognostic assessment which identifies the potential areas of resistance in therapy. [ 6 ]
One school of psychotherapy which relies heavily on the formulation is cognitive analytic therapy (CAT). [ 14 ] CAT is a fixed-term therapy, typically of around 16 sessions. At around session four, a formal written reformulation letter is offered to the patient which forms the basis for the rest of the treatment. This is usually followed by a diagrammatic reformulation to amplify and reinforce the letter. [ 15 ]
Many psychologists use an integrative psychotherapy approach to formulation. [ 16 ] [ 17 ] This is to take advantage of the benefits of resources from each model the psychologist is trained in, according to the patient's needs. [ 18 ]
The quality of specific clinical formulations, and the quality of the general theoretical models used in those formulations, can be evaluated against various criteria. [ 19 ]
Formulations can vary in temporal scope from case-based to episode-based or moment-based, and formulations may evolve during the course of treatment. [ 20 ] Therefore, ongoing monitoring, testing, and assessment during treatment are necessary: monitoring can take the form of session-by-session progress reviews using quantitative measures, and formulations can be modified if an intervention is not as effective as hoped. [ 21 ] [ 22 ]
Psychologist George Kelly , who developed personal construct theory in the 1950s, noted his complaint against traditional diagnosis in his book The Psychology of Personal Constructs (1955): "Much of the reform proposed by the psychology of personal constructs is directed towards the tendency for psychologists to impose preemptive constructions upon human behaviour. Diagnosis is all too frequently an attempt to cram a whole live struggling client into a nosological category." [ 23 ] : 154 In place of nosological categories, Kelly used the word "formulation" and mentioned two types of formulation: [ 24 ] : 337 a first stage of structuralization , in which the clinician tentatively organizes clinical case information "in terms of dimensions rather than in terms of disease entities" [ 23 ] : 192 while focusing on "the more important ways in which the client can change, and not merely ways in which the psychologist can distinguish him from other persons", [ 23 ] : 154 and a second stage of construction , in which the clinician seeks a kind of negotiated integration of the clinician's organization of the case information with the client's personal meanings. [ 25 ]
Psychologists Hans Eysenck , Monte B. Shapiro , Vic Meyer , and Ira Turkat were also among the early developers of systematic individualized alternatives to diagnosis. [ 26 ] : 4 Meyer has been credited with providing perhaps the first training course of behaviour therapy based on a case formulation model, at the Middlesex Hospital Medical School in London in 1970. [ 1 ] : 13 Meyer's original choice of words for clinical formulation were "behavioural formulation" or "problem formulation". [ 1 ] : 14
|
https://en.wikipedia.org/wiki/Clinical_formulation
|
Clinical pathology is a medical specialty that is concerned with the diagnosis of disease based on the laboratory analysis of bodily fluids , such as blood , urine , and tissue homogenates or extracts, using the tools of chemistry , microbiology , hematology , molecular pathology , and immunohaematology . This specialty requires a medical residency .
Clinical pathology is a term used in the US, UK, Ireland, many Commonwealth countries , Portugal, Brazil, Italy, Japan, and Peru; countries using the equivalent in the home language of "laboratory medicine" include Austria, Germany, Romania, Poland and other Eastern European countries; other terms are "clinical analysis" (Spain) and "clinical/medical biology" (France, Belgium, Netherlands, North and West Africa). [ 1 ]
The American Board of Pathology certifies clinical pathologists, and recognizes several secondary specialties of clinical pathology.
In some countries, other subspecialties fall under certified clinical biologists' responsibility. [ 2 ]
Clinical pathologists are often medical doctors. In some countries in South America , Europe , Africa or Asia , this specialty can also be practiced by non-physicians, such as holders of a Ph.D. or Pharm.D. , after a variable number of years of residency .
Clinical pathologists work in close collaboration with clinical scientists (clinical biochemists, clinical microbiologists, etc.), medical technologists , hospital administrators, and referring physicians to ensure the accuracy and optimal utilization of laboratory testing.
Clinical pathology is one of the two major divisions of pathology , the other being anatomical pathology . Often, pathologists practice both anatomical and clinical pathology, a combination sometimes known as general pathology . Similar specialties exist in veterinary pathology .
Clinical pathology is itself divided into subspecialties, the main ones being clinical chemistry , clinical hematology / blood banking , hematopathology and clinical microbiology , along with emerging subspecialties such as molecular diagnostics and proteomics . Many areas of clinical pathology overlap with anatomic pathology, and practitioners of either can serve as medical directors of CLIA certified laboratories. Under the CLIA law, only board-certified Ph.D. , DSc , MD or DO holders approved by the US Department of Health and Human Services can perform the duties of a medical or clinical laboratory director. This overlap includes immunoassays, flow cytometry, microbiology, cytogenetics, and any assay done on tissue. The overlap between anatomic and clinical pathology is expanding to molecular diagnostics and proteomics as the field moves towards making the best use of new technologies for personalized medicine. [ 3 ]
Clinical pathologists may assist physicians in interpreting complex tests such as platelet aggregometry, hemoglobin or serum protein electrophoresis , or coagulation profiles. If interfering substances are suspected, they may recommend alternate test methods. For example, hemolysis , icterus, lipemia , or heterophile antibodies may confound results obtained by traditional methods such as ion-selective electrodes, enzymatic assays or immunoassays . Alternate methods such as blood gas analysers, point-of-care testing or mass spectrometry may help resolve the clinical question.
Recently, EFLM has chosen the name of "Specialists in Laboratory Medicine" to define all European Clinical pathologists, regardless of their training (M.D., Ph.D. or Pharm.D.). [ 4 ]
In France, Clinical Pathology is called Medical Biology ("Biologie médicale") and is practiced by both M.D.s and Pharm.D.s. The residency lasts four years. Specialists in this discipline are called "Biologiste médical" which literally translates as Clinical Biologist rather than "Clinical pathologist ". [ 5 ]
Tangible tools include microscopes, analyzers, strips, and centrifuges.
Visual examination of the specimen may provide information to the pathologist or the physician. For example, fluid drained from an abscess may appear cloudy, or cerebrospinal fluid obtained by lumbar puncture may exhibit xanthochromia , suggesting a bleed has occurred. Laboratory technologists may provide qualitative descriptions accordingly.
Microscopic analysis is an important activity of the pathologist and the laboratory technologist. They have many different stains at their disposal ( Gram , MGG, Grocott , Ziehl–Neelsen , etc.). Immunofluorescence, cytochemistry, immunocytochemistry, and FISH are also used in order to make a correct diagnosis.
Pathologists may review samples such as pleural , peritoneal , synovial, or pericardial fluids to characterize them as "normal", tumoral, inflammatory, or even infectious. Microscopic examination can also determine the causal infectious agent – often a bacterium, mould, yeast, parasite, or (rarely) virus.
Automated analysers , combining robotics and spectrophotometry, have over recent decades allowed better reproducibility of results, in particular in medical biochemistry and hematology. [ 6 ]
Efficiency and productivity can be enhanced by automating the pre-analytical processing, including barcode reading, sorting, centrifuging, and aliquoting specimens.
The analysers must undergo daily controls prior to performing patient testing. Analysers must also undergo daily, weekly and monthly maintenance. Quality management involves reviewing quality control trends to detect emerging problems in instrument calibration, correlating results between instruments that perform similar testing, and running standardized samples to prove linearity and precision.
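As one simplified illustration of reviewing quality control results, the sketch below screens daily control values against fixed limits of the target mean plus or minus two standard deviations, in the spirit of a Levey-Jennings chart. Actual laboratories apply fuller rule sets (for example Westgard rules), so the threshold here is an assumption for the example.

```python
def qc_violations(results, target_mean, target_sd, k=2.0):
    """Return (index, value) pairs for control results outside mean +/- k*SD."""
    return [(i, x) for i, x in enumerate(results)
            if abs(x - target_mean) > k * target_sd]

# Hypothetical daily control measurements for one analyte
daily_controls = [4.9, 5.1, 5.0, 5.6, 4.8]
print(qc_violations(daily_controls, target_mean=5.0, target_sd=0.2))  # [(3, 5.6)]
```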
Some laboratory processes involve automated analysis combined with manual review by technologists. For example, when hematology analysers flag samples as abnormal, automated white blood cell differential counts may be superseded by manual differential counts using stained slides read at the microscope or scanned by digital imaging software. Laboratory technologists may flag abnormal samples for pathologist review. The pathologist may recommend additional testing, such as flow cytometry to identify lymphoma or leukemia cells, or cytology to characterize solid tumor cells.
Samples undergoing examination for pathogens, primarily in medical microbiology , may be incubated with culture media. These allow, for example, the identification of one or several infectious agents responsible for the clinical signs.
A reference range in medicine is the range or the interval of values that is deemed normal for a physiological measurement in healthy persons (for example, the amount of creatinine in the blood , or the partial pressure of oxygen ). It is a basis for comparison for a physician or other health professional to interpret a set of test results for a particular patient. Some important reference ranges in medicine are reference ranges for blood tests and reference ranges for urine tests .
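One common convention, assuming approximately normally distributed values in the healthy population, defines the reference range as the central 95% interval, i.e. the mean plus or minus 1.96 standard deviations. The sketch below applies this to invented serum creatinine values; real reference intervals are established from much larger, carefully selected reference populations, often with non-parametric methods.

```python
import statistics

# Invented serum creatinine values (mg/dL) from a hypothetical healthy sample
healthy_values = [0.74, 0.81, 0.95, 1.02, 0.88, 0.79, 1.10, 0.91, 0.85, 0.98]

mean = statistics.mean(healthy_values)
sd = statistics.stdev(healthy_values)  # sample standard deviation

# Central 95% interval under a normality assumption
lower, upper = mean - 1.96 * sd, mean + 1.96 * sd
print(f"Reference range: {lower:.2f}-{upper:.2f} mg/dL")
```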
|
https://en.wikipedia.org/wiki/Clinical_pathology
|
A clinical pathway , also known as care pathway , integrated care pathway , critical pathway , or care map , is one of the main tools used to manage quality in healthcare through the standardisation of care processes. [ 1 ] [ 2 ] It has been shown that implementation of clinical pathways reduces variability in clinical practice and improves outcomes. [ 3 ] [ 4 ] [ 5 ] Clinical pathways aim to promote organised and efficient patient care based on evidence-based medicine , [ 6 ] [ 7 ] and aim to optimise outcomes in settings such as acute care and home care . A single clinical pathway may refer to multiple clinical guidelines on several topics in a well specified context.
A clinical pathway is a multidisciplinary management tool based on evidence-based practice for a specific group of patients with a predictable clinical course, in which the different tasks (interventions) by the professionals involved in the patient care are defined, optimized and sequenced either by hour (ED), day (acute care) or visit (homecare). Outcomes are tied to specific interventions.
The concept of clinical pathways may have different meanings to different stakeholders. [ 8 ] Managed care organizations often view clinical pathways in a similar way as they view care plans, in which the care provided to a patient is definitive and deliberate. Clinical pathways can range in scope from simple medication utilization to a comprehensive treatment plan. Clinical pathways aim for greater standardization of treatment regimens and sequencing as well as improved outcomes, from both a quality of life and a clinical outcomes perspective.
The clinical pathway concept appeared for the first time at the New England Medical Center ( Boston , United States) in 1985, inspired by Karen Zander and Kathleen Bower. [ 9 ] [ non-primary source needed ] Clinical pathways appeared as a result of the adaptation to healthcare of documents used in industrial quality management , the standard operating procedures (SOPs).
Clinical pathways (integrated care pathways) can be seen as an application of process management thinking to the improvement of patient healthcare. An aim is to re-center the focus on the patient's overall journey, rather than the contribution of each specialty or caring function independently. Instead, all are emphasised to be working together, in the same way as a cross-functional team .
More than just a guideline or a protocol, a care pathway is typically recorded in a single all-encompassing bedside document that will stand as an indicator of the care a patient is likely to be provided in the course of the pathway going forward; and ultimately as a single unified legal record of the care the patient has received, and the progress of their condition, as the pathway has been undertaken.
The pathway design tries to capture the foreseeable actions which will most commonly represent best practice for most patients most of the time, and include prompts for them at the appropriate time in the pathway document to ascertain whether they have been carried out, and whether results have been as expected. In this way results are recorded, and important questions and actions are not overlooked. However, pathways are typically not prescriptive; the patient's journey is an individual one, and an important part of the purpose of the pathway documents is to capture information on "variances", where due to circumstances or clinical judgment different actions have been taken, or different results unfolded. The combined variances for a sufficiently large population of patients are then analysed to identify important or systematic features, which can be used to improve the next iteration of the pathway.
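The variance-tracking idea described above can be made concrete with a small data sketch. The records and field names below are hypothetical, showing how variances pooled from many patients might be tallied to spot systematic deviations worth addressing in the next pathway revision.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class PathwayStep:
    """One prompted action in a care pathway document (illustrative fields)."""
    step: str
    completed: bool
    variance: Optional[str] = None  # reason a different action or result occurred

# Illustrative records pooled across patients on the same pathway
records = [
    PathwayStep("antibiotics within 1 hour", True),
    PathwayStep("antibiotics within 1 hour", False, "pharmacy delay"),
    PathwayStep("mobilise on day 1", False, "patient hypotensive"),
    PathwayStep("antibiotics within 1 hour", False, "pharmacy delay"),
]

variance_counts = Counter(r.variance for r in records if r.variance)
print(variance_counts.most_common())  # frequent variances guide the next iteration
```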
A number of signals may indicate that it is useful to commit resources to establishing and implementing a clinical pathway for a particular condition.
|
https://en.wikipedia.org/wiki/Clinical_pathway
|
Clinical pluralism is a term used by some psychotherapists to denote an approach to clinical treatment that would seek to remain respectful towards divergences in meaning-making . It can signify both an undertaking to negotiate theoretical difference between clinicians, [ 1 ] and an undertaking to negotiate differences of belief occurring within the therapeutic relationship itself. [ 2 ] [ 3 ] [ 4 ] While the notion of clinical pluralism is associated with the practice of psychotherapy, similar issues have been raised within the field of medical ethics (see Medical ethics § Cultural concerns ). [ 5 ] [ 6 ]
Clinical pluralism can be applied within a particular approach to psychotherapy, such as psychoanalytic psychotherapy . [ 7 ] Modern psychoanalytic training involves not only many hours of training sessions but also the use of diverse clinical practices. [ 8 ] An example of psychoanalytic treatment following clinical pluralism is coparticipant psychoanalysis, which features an individualized treatment but is diverse in the practices employed. This technique holds that all analyses represent unique sets of practices, which depend on the varying characteristics of the personalities that make up the analytic dyad. [ 9 ]
Clinical pluralism is also associated with eclectic and integrative psychotherapy , which are distinguished from clinical practice that follows a specific theoretical school with its own therapeutic techniques. [ 10 ] These approaches to therapy all maintain that there is no single theory or therapeutic modality that can offer optimum efficacy. [ 10 ]
This psychology -related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clinical_pluralism
|
Clinical professor , sometimes known as professor of practice , is an academic appointment made to a member of a profession who is associated with a university or other academic body, and engages in practical (clinical) instruction of students (e.g., medical students, engineering students). [ 1 ] Titles in this category may include clinical instructor, assistant clinical professor, associate clinical professor, and clinical professor. [ 2 ]
Clinical professorship generally does not offer a " tenure track ," but can be either full- or part-time, and is typically noted for its emphasis on practical skills training as opposed to theoretical matters. Thus, most members of such faculty are expected to have considerable practical experience in their respective fields of expertise. Unlike with most other faculty, this is deemed at least as important as educational credentials. [ 2 ]
For administrative purposes, some universities classify such a designation as equivalent to " adjunct professor ." [ 3 ] Clinical professors may be salaried or may teach as a volunteer. [ 4 ]
In the field of medicine, the usage of the terms (in ascending order of rank) clinical instructor, clinical assistant professor, clinical associate professor, and clinical professor (as opposed to the same titles without the clinical modifier) is not well standardized. In some institutions, clinical faculty may receive a designation of rank with the "clinical" modifier as a courtesy, often on the basis of involvement in the education of medical (or other) students. In such a context, ascending rank may acknowledge seniority and/or reputation.
Medical faculty working full-time at an academic medical center with involvement in scholarly pursuits are typically assigned a rank without the clinical modifier, i.e. instructor, assistant professor, associate professor, or professor, with or without tenure depending upon the institution. The assistant clinical professor position may be almost entirely honorary. [ 5 ] [ 6 ]
In Canada, doctors who teach are called " preceptors ." [ 7 ]
The University of Sydney in Australia appoints "professors of practice". [ 8 ]
This job-, occupation-, or vocation-related article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Clinical_professor
|
The term " Clinical research center " (CRC) or " General clinical research center " ( GCRC ) refers to any designated medical facility used to conduct clinical research , such as at a hospital or medical clinic . [ 1 ] They have been used to perform clinical trials of various medical procedures. The medical profession has had specific uses for CRC facilities, including awarding grants to support various types of research.
For example, the U.S. National Institutes of Health had, for years, issued GCRC grants, but later changed to awarding a Clinical and Translational Science Award (CTSA). Many hospitals or clinics have included a wing, ward, or other area titled as "Clinical Research Center" (with capitalized words).
Examples of CRC facilities can be found at many hospitals and universities.
|
https://en.wikipedia.org/wiki/Clinical_research_center
|
Supervision is used in counselling, psychotherapy, and other mental health disciplines, as well as many other professions engaged in working with people. [ 1 ] Supervision may also be applied to practitioners in somatic disciplines, both for their preparatory work for patients and for their work alongside patients. Supervision serves as a replacement for formal retrospective inspection, delivering evidence about the skills of the supervised practitioners.
It consists of the practitioner meeting regularly with another professional, not necessarily more senior, but normally with training in the skills of supervision, to discuss casework and other professional issues in a structured way. [ 2 ] This is often known as clinical or counselling supervision (consultation differs in being optional advice from someone without a supervisor's formal authority). The purpose is to assist the practitioner to learn from his or her experience and progress in expertise, as well as to ensure good service to the client or patient. [ 3 ] This learning is applied to planning work as well as to diagnostic and therapeutic work. [ 4 ]
Derek Milne defined clinical supervision as: "The formal provision, by approved supervisors, of a relationship-based education and training that is work-focused and which manages, supports, develops and evaluates the work of colleague/s". [ 5 ] The main methods that supervisors use are corrective feedback on the supervisee's performance, teaching, and collaborative goal-setting. [ 6 ] It therefore differs from related activities, such as mentoring and coaching, by incorporating an evaluative component. Supervision's objectives are "normative" (e.g. quality control), "restorative" (e.g. encourage emotional processing) and "formative" (e.g. maintaining and facilitating supervisees' competence, capability and general effectiveness).
Some practitioners (e.g. art, music and drama therapists, chaplains, psychologists, and mental health occupational therapists) have used this practice for many years. [ 7 ] In other disciplines the practice may be a new concept. For NHS nurses, the use of clinical supervision is expected as part of good practice. [ 8 ] [ 9 ] In a randomised controlled trial in Australia, [ 10 ] White and Winstanley looked at the relationships between supervision, quality of nursing care and patient outcomes, and found that supervision had sustainable beneficial effects for supervisors and supervisees. Waskett believes that maintaining the practice of clinical supervision always requires managerial and systemic backing, and has examined the practicalities of introducing and embedding clinical supervision into large organisations such as NHS Trusts (2009, 2010). [ 11 ] [ 12 ] [ 13 ] Clinical supervision has some overlap with managerial activities, mentorship, and preceptorship, though all of these end or become less direct as staff develop into senior and autonomous roles. [ 14 ]
Key issues raised around clinical supervision in healthcare have included the time and financial investment required. [ 15 ] It has however been suggested that the quality improvement gained, reduced sick leave and burnout, and improved recruitment and retention make the process worthwhile. [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ]
Clinical supervision is used in many disciplines in the British National Health Service . Registered allied health professionals such as occupational therapists , [ 23 ] physiotherapists , [ 24 ] dieticians , [ 25 ] speech and language therapists [ 26 ] and art , [ 27 ] music and drama therapists are now expected to have regular clinical supervision. C. Waskett (2006) has written on the application of solution focused supervision skills to either counselling or clinical supervision work. Practising members of the British Association for Counselling and Psychotherapy [ 28 ] are bound to have supervision for at least 1.5 hours a month. Students and trainees must have it at a rate of one hour for every eight hours of client contact.
The concept is also well used in psychology, social work, the probation service and at other workplaces.
There are many different ways of developing supervision skills which can be helpful to the clinician or practitioner in their work. [ 29 ] Specific models or approaches to both counselling supervision and clinical supervision come from different historical strands of thinking and beliefs about relationships between people. A few examples are given below.
Peter Hawkins (1985 [ 30 ] ) developed an integrative process model which is used internationally in a variety of helping professions. His "Seven Eyed model of Supervision" was further developed by Hawkins along with Robin Shohet, Judy Ryde and Joan Wilmot in "Supervision in the Helping Professions" (1989, 2000, 2006 and 2012 [ 31 ] ) and with Nick Smith in "Coaching, Mentoring and organisational Consultancy: Supervision and Development" (2006 and 2013 [ 32 ] ) and is taught on the courses of the Centre for Supervision and Team Development [ 33 ] as well as many other supervision training courses.
S. Page and V. Wosket describe a cyclical structure. [ citation needed ]
F. Inskipp and B. Proctor (1993, 1995) developed an approach based on the normative, formative and restorative elements of the relationship between supervisor and supervisee. The Brief Therapy practice [ 34 ] teaches a solution focused approach based on the work of Steve de Shazer and Insoo Kim Berg which uses the concepts of respectful curiosity, the preferred future, recognition of strengths and resources, and the use of scaling to assist the practitioner to progress (described in [ 35 ]). Waskett has described teaching solution-focused supervision skills to a variety of professionals. [ 36 ]
Evidence-based CBT supervision is a distinctive and recent model that is based on cognitive-behaviour therapy (CBT), enhanced by relevant theories (e.g. experiential learning theory), expert consensus statements, and on applied research findings (Milne & Reiser, 2017). It is therefore an example of evidence-based practice, applied to supervision. CBT supervision meets the general definition of clinical supervision, adding some distinctive features that reflect CBT as a therapy. [ 37 ] This includes a high degree of session structure and direction (e.g. detailed agenda-setting), but within a fundamentally collaborative relationship. Also, there is a primary emphasis on cognitive case conceptualization, mainly through the use of case discussion, intended to develop diagrammatic CBT formulations. But discussion should properly be combined with other CBT techniques, including Socratic questioning, guided discovery, educational role-play, behavioural rehearsal, and corrective feedback. Another distinctive aspect is a focus on evidence-based principles and methods, including the use of reliable instruments for feedback and evaluation, in relation to both therapy and supervision. Perhaps the single most defining characteristic of evidence-based CBT supervision is the active and routine commitment to research methods and findings: where other approaches refer to theory and clinical/supervisory experience for guidance, evidence-based CBT supervision appeals ultimately to 'the data'. Examples of the use of relevant theories, expert consensus statements and research, together with six formally-developed supervision guidelines (illustrated through video clips), can be found in Milne & Reiser (2017).
Deliberate practice supervision is a focused and structured approach where therapists continuously work on refining specific skills through targeted exercises and feedback. [ 38 ] Supervisors help identify areas for improvement, set clear objectives, and provide real-time, constructive feedback. [ 39 ] Based on the work of K. Anders Ericsson , [ 40 ] deliberate practice supervision emphasizes repetitive practice and reflection to enhance clinical effectiveness and adaptability, ultimately aiming to bridge the gap between current capabilities and desired performance levels in therapeutic settings. [ 41 ] Over 20 peer-reviewed empirical studies have examined the process and outcome of deliberate practice supervision. [ 42 ] [ 43 ] [ 44 ] A review published in 2024 described two major models of deliberate practice supervision. [ 45 ] The Better Results model, created by Scott Miller, Mark Hubble, and Daryl Chow, uses data from Feedback Informed Treatment to guide deliberate practice supervision. [ 46 ] The Sentio Supervision Model, created by the Sentio Marriage and Family Therapy MA program in California, [ 47 ] systematically integrates psychotherapy skill building with the use of clinical videos and outcome data to increase trainees' clinical competence and confidence. [ 48 ] [ 49 ]
Developmental models of supervision view supervisees as progressing through distinct stages of professional growth, requiring different types of supervision at each stage. Stoltenberg & Delworth’s Integrated Developmental Model (IDM) proposes three levels of supervisee development (beginner, intermediate, advanced), each with increasing autonomy and complexity in clinical skills. [ 50 ] Loganbill, Hardy, & Delworth's developmental model describes cycles of stagnation, confusion, and integration as supervisees develop. [ 51 ]
In the Discrimination Model of supervision by Janine Bernard, supervisors take on three roles: Teacher (instructing and guiding), Counselor (helping with emotional reactions), and Consultant (collaborative problem-solving), and focus on three skill areas: Process (e.g., interpersonal dynamics), Conceptualization (understanding client issues), and Personalization (how the therapist uses themselves in therapy). [ 52 ] [ 53 ]
Some studies have suggested that supervision may improve clinical effectiveness. [ 54 ] [ 55 ] [ 56 ] Other studies have raised questions about the effectiveness of supervision. [ 57 ] [ 58 ] Three literature reviews of research in this topic raised concerns about the reliability of these findings and voiced caution in assuming that supervision may improve clinical effectiveness. [ 59 ] [ 60 ] [ 61 ]
Counselling or clinical supervisors will be experienced in their discipline and normally then have further training in any of the above-mentioned approaches, or others. [ 62 ] The guidelines of the American Psychological Association, [ 63 ] American Counseling Association, [ 64 ] and American Association for Marriage and Family Therapists [ 65 ] provide standards for supervisor competence and training. There are many different ways of developing supervision skills which can be helpful to the clinician or practitioner in their work. [ 29 ] Training programs in psychology, counseling, social work, and other allied fields often provide graduate-level coursework in supervision. Post-graduate supervisor training is also offered, often by non-profit organizations. For example, the non-profit Sentio Counseling Center offers a one-year Deliberate Practice supervisor training program [ 66 ] [ 67 ] that provides over 50 hours of video-based training in Deliberate Practice supervision methods, aiming to enhance supervisory skills through close mentorship with experienced trainers. [ 68 ]
|
https://en.wikipedia.org/wiki/Clinical_supervision
|
Clinical trials are often assigned contrived acronyms. [ 1 ] [ 2 ] Common themes include omitting words of the title from the acronym and taking letters from the middle of words. [ 3 ] It has been suggested that the use of acronyms in titles is associated with a higher citation rate of research publications. [ 4 ]
Acronyms were first used to identify clinical trials in the 1970s. [ 5 ] The first identified instance was "UGDP", an initialism for University Group Diabetes Program. The first trial title commonly pronounced as an English-language word or words came in 1982 with the publication of "MRFIT", referring to the Multiple Risk Factor Intervention Trial, and spoken as "Mr. Fit" or "the Mr. Fit trial". [ 5 ]
The term "acronymophilia" was coined in 1994 to refer to the overuse of acronyms in medicine. [ 6 ]
An article in the Annals of Internal Medicine classified clinical trial titles into six broad groups: un-abbreviated titles; initialisms that are not pronounced as English words; homonyms pronounced as a recognizable English word but spelled in a novel way; descriptive medical words relating to the study topic, such as CARDIAC and RALES ; medical or health words that are not related to the topic of the study, such as ALIVE or RESCUE; and other English words not related to the topic, with a wide variety of subjects, including myths, places, musical terms, animals, and space, such as ISIS, CASANOVA, and APRICOT. [ 5 ]
A scientific study ranking acronyms was published in the British Medical Journal . Some of the negatively graded criteria include using letters that do not begin a word, and including letters in the acronym that are not found in the title. According to their metric, some of the worst names included "METGO: A 48-week, randomized, double-blind, double-observer, placebo-controlled multicenter trial of combination METhotrexate and intramuscular GOld therapy in rheumatoid arthritis", "PERFORM: Prevention of cerebrovascular and cardiovascular Events of ischaemic origin with teRutroban in patients with a history oF ischaemic strOke or tRansient ischaeMic attack", and "TYPHOON: Trial to assess the use of the cYPHer sirolimus-eluting coronary stent in acute myocardial infarction treated with BallOON angioplasty". Their ranking of acronyms shows a decrease in measured quality between 2000 and 2012. [ 4 ]
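The two negatively graded criteria just mentioned are easy to check mechanically. The toy function below is a simplified sketch, not the BMJ study's actual scoring instrument: it reports acronym letters that do not begin a title word, and letters absent from the title altogether.

```python
def acronym_penalties(acronym: str, title: str) -> dict:
    """Check two of the criteria described above for a trial acronym."""
    words = [w.lower() for w in title.split()]
    initials = {w[0] for w in words if w}
    title_letters = set("".join(words))
    acro = acronym.lower()
    return {
        "letters_not_starting_a_word": [c for c in acro if c not in initials],
        "letters_not_in_title": [c for c in acro if c not in title_letters],
    }

print(acronym_penalties(
    "TYPHOON",
    "Trial to assess the use of the Cypher sirolimus-eluting coronary stent "
    "in acute myocardial infarction treated with balloon angioplasty"))
```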
In a letter to the International Journal of Cardiology , Tsung O. Cheng called out his own field as prone to overuse of contrived acronyms, calling it a "persistent problem". He was spurred to write the letter after he reviewed nine articles about a study named "ZAHARA" without finding any explanation of what the acronym meant. [ 3 ] [ 7 ] [ 8 ]
Other clinical trials that have been noted in publications for their acronyms include: TORPEDO (Thrombus Obliteration by Rapid Percutaneous Endovenous Intervention (PEVI) in Deep Venous Occlusion) [ 9 ] and BATMAN (Bisphosphonate and Anastrozole Trial – Bone Maintenance Algorithm Assessment). [ 9 ]
|
https://en.wikipedia.org/wiki/Clinical_trial_naming_conventions
|
A clinically isolated syndrome ( CIS ) is a clinical situation of an individual's first neurological episode, caused by inflammation or demyelination of nerve tissue. An episode may be monofocal , in which symptoms present at a single site in the central nervous system , or multifocal , in which multiple sites exhibit symptoms. CIS with enough paraclinical evidence can be considered as a clinical stage of multiple sclerosis (MS). It can also be retrospectively diagnosed as a kind of MS when more evidence is available.
Brain lesions associated with a clinically isolated syndrome may be indicative of several neurological diseases, such as multiple sclerosis (MS) or neuromyelitis optica . For such a diagnosis to be made, multiple sites in the central nervous system must present lesions, typically over multiple episodes, and no other diagnosis must be likely. A clinically definitive diagnosis of MS is made once an MRI detects lesions in the brain consistent with those typical of MS. Other diagnostics include cerebrospinal fluid analysis and evoked response testing. [ 1 ]
Currently, the best predictor of future development of clinical multiple sclerosis is considered to be the number of T2 lesions visualized by magnetic resonance imaging during the CIS [ 2 ] and their size. [ 3 ] Diagnostic criteria are normally evaluated against the time to conversion to clinically definite MS.
In 2001, the International Panel on the Diagnosis of multiple sclerosis issued the McDonald criteria , a revision of the previous diagnostic procedures to detect MS, known as the Poser criteria . "While maintaining the basic requirements of dissemination in time and space, the McDonald criteria provided specific guidelines for using findings on MRI and cerebrospinal fluid analysis to provide evidence of the second attack in those individuals who have had a single demyelinating episode and thereby confirm the diagnosis more quickly." [ 4 ] Further revisions were issued in 2005.
The 1996 definition of the clinical courses of MS (phenotypes) was updated in 2013 by an international panel (the International Advisory Committee on Clinical Trials). [ 5 ]
While the main classifier in 1996 was recovery from attacks (the clinical feature that separates relapsing-remitting (RR) from progressive courses), in the updated revision the main classifier is disease activity. [ 6 ]
MS courses in the new revision are divided into active and non-active, and CIS, when it is active on MRI, becomes a kind of RRMS (this, of course, can only be diagnosed retrospectively, after conversion to clinically definite MS). [ 7 ]
Some reviews describe CIS as "the prodromal stage of MS". [ 8 ]
Before the 2010 McDonald criteria , [ 9 ] when it was not possible to prove dissemination of the lesions in space and time, the condition was called CIS and was considered outside the MS spectrum. As soon as dissemination was clear (the development of a second lesion), the situation was called "conversion to MS".
The 2010 revision of the McDonald criteria allows the diagnosis of MS with only one proved lesion (CIS). Therefore, the 2013 revision of the phenotypes for the disease course, consistently, included CIS as one of the clinical phenotypes of MS. [ 7 ]
Therefore, the former expression "conversion from CIS to MS", which is still in use, should be redefined consistently with these changes, since CIS is now within the MS spectrum.
|
https://en.wikipedia.org/wiki/Clinically_isolated_syndrome
|
A clinician is a health care professional typically employed at a skilled nursing facility or clinic . Clinicians work directly with patients rather than in a laboratory, community health setting, or research. [ 1 ] A clinician may diagnose, treat and care for patients as a psychologist , clinical pharmacist , clinical scientist, nurse , occupational therapist , speech-language pathologist , physiotherapist , dentist , optometrist , physician assistant , clinical officer , physician , paramedic , or chaplain . Clinicians undergo comprehensive training and examinations to be licensed, and some complete graduate degrees (master's or doctorates) in their field of expertise. [ 2 ]
The main function of a clinician is to manage a sick person in order to cure their illness, reduce pain and suffering, and extend life considering the impact of illness upon the patient and their family as well as other social factors. [ 3 ]
|
https://en.wikipedia.org/wiki/Clinician
|
In medicine , clinophilia is a sleep disorder described as the tendency of a patient to remain in bed in a reclined position without sleeping for prolonged periods of time. [ 1 ] [ 2 ]
The word clinophilia means "liking to lie down" (from the Greek clino- [lying down] and -philia [love]).
It is one of the first symptoms of depression [ 3 ] or schizophrenia , [ 4 ] but is not in itself a disease. People with clinophilia generally experience feelings of isolation and repressed sadness. [ 5 ]
It is a psychologically based disorder sometimes found in depression or certain forms of schizophrenia. Care must be taken not to confuse it with true hypersomnia : in hypersomnia, patients genuinely sleep, and very deeply, whereas in clinophilia the long sleep times patients may describe are not objectively present. If patients with clinophilia complain of oversleeping, this is due to psychological problems rather than to a physiological defect in the wake/sleep system, as in idiopathic hypersomnia or narcolepsy . Similarly, it should not be confused with dysania, which describes difficulty getting out of bed: clinophilia describes not an "impediment" to getting up but a "willingness" to lie down. [ 6 ]
Clinophilia can also accompany a post-fall syndrome as part of an overall psychomotor regression in the elderly. Although it can affect anyone, clinophilia seems to be more prevalent in women aged between 20 and 40 (particularly after major hormonal changes) and in the elderly. [ 7 ]
|
https://en.wikipedia.org/wiki/Clinophilia
|
Clostridium perfringens (formerly known as C. welchii , or Bacillus welchii ) is a Gram-positive , bacillus (rod-shaped), anaerobic , spore-forming pathogenic bacterium of the genus Clostridium . [ 1 ] [ 2 ] C. perfringens is ubiquitous in nature and can be found as a normal component of decaying vegetation, marine sediment , the intestinal tract of humans and other vertebrates , insects , and soil . It has the shortest reported generation time of any organism, at 6.3 minutes in thioglycolate medium. [ 3 ]
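A 6.3-minute generation time implies very rapid exponential growth. The following sketch is a worked example of that doubling arithmetic only; apart from the 6.3-minute figure, the numbers are illustrative and not taken from the cited study.

```python
# Doubling arithmetic for a fixed generation (doubling) time.
GENERATION_TIME_MIN = 6.3  # shortest reported generation time, in minutes

def cells_after(minutes: float, initial_cells: float = 1.0) -> float:
    """Population size after exponential growth with a fixed doubling time."""
    return initial_cells * 2 ** (minutes / GENERATION_TIME_MIN)

# One hour allows 60 / 6.3 ≈ 9.5 doublings, i.e. a roughly 700-fold
# increase from each starting cell under ideal laboratory conditions.
print(f"{cells_after(60):,.0f} cells from a single cell after 60 minutes")
# -> 736 cells from a single cell after 60 minutes
```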
Clostridium perfringens is one of the most common causes of food poisoning in the United States, alongside norovirus , Salmonella , Campylobacter , and Staphylococcus aureus . [ 4 ] However, it can sometimes be ingested and cause no harm. [ 5 ]
Infections induced by C. perfringens are associated with tissue necrosis , bacteremia , emphysematous cholecystitis , and gas gangrene , which is also known as clostridial myonecrosis . [ 6 ] The specific name, perfringens, is derived from the Latin per (meaning "through") and frango ("burst"), referring to the disruption of tissue that occurs during gas gangrene. [ 7 ] Gas gangrene is caused by alpha toxin, or α-toxin , that embeds itself into the plasma membrane of cells and disrupts normal cellular function by altering membrane structure. [ 8 ] Research suggests that C. perfringens is capable of engaging in polymicrobial anaerobic infections . [ 9 ] It is commonly encountered in infections as a component of the normal flora . In this case, its role in disease is minor. [ 10 ]
C. perfringens toxins are a result of horizontal gene transfer of neighboring cells' plasmids. [ 11 ] Shifts in genomic make-up are common for this species of bacterium and contribute to novel pathogenesis. [ 12 ] Major toxins are expressed differently in certain populations of C. perfringens; these populations are organized into strains based on their expressed toxins. [ 13 ] This especially impacts the food industry, as controlling this microbe is important for preventing foodborne illness. [ 12 ] The species, provisionally regarded as non-motile, has also recently been found to be capable of hyper-motility. [ 14 ] Findings on its metabolic processes reveal more about its pathogenic nature. [ 15 ]
Clostridium perfringens has a stable G+C content of around 27 to 28 percent and an average genome size of 3.5 Mb. [ 16 ] Genomes of 56 C. perfringens strains have been made available on the NCBI genomes database for the scientific research community. Genomic research has revealed surprisingly high diversity in the C. perfringens pangenome , with only 12.6 percent core genes, making it the most divergent Gram-positive bacterium reported. [ 16 ] Nevertheless, 16S rRNA regions between C. perfringens strains are highly conserved ( sequence identity >99.1%). [ 16 ]
The Clostridium perfringens enterotoxin (CPE)–producing strains have been identified through genomic testing to be a small portion of the overall C. perfringens population (~1-5%). [ 17 ] Advances in genetic information on CPE-producing type A strains of C. perfringens have allowed techniques such as microbial source tracking (MST) to identify food contamination sources. [ 17 ] The CPE gene has been found within chromosomal DNA as well as plasmid DNA. Plasmid DNA has been shown to play an integral role in cell pathogenesis and encodes major toxins, including CPE. [ 11 ]
C. perfringens has been shown to carry plasmids containing genes for antibiotic resistance . The pCW3 plasmid is the primary conjugative plasmid responsible for antibiotic resistance in C. perfringens , and it also encodes multiple toxins found in pathogenic strains. [ 18 ] Antibiotic resistance genes observed thus far include genes for tetracycline resistance, an efflux protein, and aminoglycoside resistance. [ 19 ]
Within industrial contexts, such as food production , sequencing the genomes of pathogenic strains of C. perfringens has become an expanding field of research. Poultry production is directly affected by this trend, as antibiotic-resistant strains of C. perfringens are becoming more common. [ 12 ] By performing meta-genome analyses, researchers are able to identify novel strains of pathogenic bacteria, such as C. perfringens B20. [ 12 ]
Clostridium perfringens was provisionally identified as non-motile. It lacks flagella; however, recent research suggests gliding as a form of motility. [ 20 ] [ 21 ]
In agar plate cultures, hypermotile variants of strains such as SM101 frequently appear around the borders of colonies. Video imaging of their gliding motion shows that they create long, thin filaments that enable them to move quickly, much like bacteria with flagella. The genetic causes of the hypermotile phenotype were sought by sequencing the genomes of parent strains and their immediate hypermotile descendants: SM124 and SM127, the hypermotile offspring of strains SM101 and SM102, respectively, carried 10 and 6 single-nucleotide polymorphisms (SNPs) in comparison with their parent strains. A common trait of the hypermotile strains is mutations in genes related to cell division. [ 20 ]
Some strains of C. perfringens cause diseases such as gas gangrene and myonecrosis. Production of the toxins required for myonecrosis is regulated by the C. perfringens Agr-like (CpAL) system through the VirSR two-component system. The CpAL/VirSR system is a quorum-sensing system also encoded by other pathogenic clostridia. Myonecrosis starts at the infection site and involves bacteria migrating deeper via gliding motility. Researchers investigated whether the CpAL/VirSR system regulates gliding motility and demonstrated that it does. The study additionally suggests that gliding bacteria in myonecrosis show increased transcription of toxin genes. [ 21 ]
Two experimental methods of genetic manipulation have been shown to achieve genetic transformation in C. perfringens .
The first report of transformation in C. perfringens involved polyethylene glycol-mediated transformation of protoplasts . The procedure involved the addition of plasmid DNA to the protoplasts in the presence of high concentrations of polyethylene glycol . During the first protoplast transformation experiment, L-phase variants of C. perfringens were generated by penicillin treatment in the presence of 0.4 M sucrose. After the transformation procedure was completed, all of the transformed cells were still in the form of L-phase variants. Reversion to vegetative cells was not obtained, but it was observed that autoplasts (protoplasts derived from autolysis ) could be regenerated to produce rods with cell walls and could be transformed with C. perfringens plasmid DNA. [ 22 ]
Electroporation involves the application of a high-voltage electric field to vegetative bacterial cells for a very short period. This technique produced major advances in the genetic transformation of C. perfringens , because the bacterium often occurs in food as vegetative cells or dormant spores. [ 23 ] The electric pulse creates pores in the bacterial cell membrane and allows the passive influx of DNA molecules. [ 24 ]
C. perfringens is an aerotolerant anaerobic bacterium that lives in a variety of environments, including soil and the human intestinal tract. [ 15 ] It is incapable of synthesizing multiple amino acids because it lacks the genes required for their biosynthesis. [ 15 ] Instead, the bacterium produces enzymes and toxins to break down host cells and imports nutrients from the degrading cell. [ 15 ]
C. perfringens has a complete set of enzymes for glycolysis and glycogen metabolism. In the fermentation pathway, pyruvate is converted into acetyl-CoA by pyruvate-ferredoxin oxidoreductase , producing CO2 gas and reduced ferredoxin . [ 25 ] Electrons from the reduced ferredoxin are transferred to protons by hydrogenase, resulting in the formation of hydrogen molecules (H2) that are released from the cell along with CO2. Pyruvate is also converted to lactate by lactate dehydrogenase , whereas acetyl-CoA is converted into ethanol , acetate , and butyrate through various enzymatic reactions, completing the anaerobic glycolysis that serves as a potential main energy source for C. perfringens . C. perfringens utilizes a variety of sugars such as fructose , galactose , glycogen , lactose , maltose , mannose , raffinose , starch , and sucrose , and possesses various genes for glycolytic enzymes. These sugars, along with amino acids, can also be converted to propionate through propionyl-CoA , which results in energy production. [ 25 ]
Membrane-damaging enzymes, pore-forming toxins, intracellular toxins, and hydrolytic enzymes are the functional categories into which C. perfringens ' virulence factors may be divided. These virulence factor-encoding genes can be found on chromosomes and large plasmids. [ 13 ]
The human gastrointestinal tract is lined with intestinal mucosa that secrete mucus and act as a defense mechanism against pathogens, toxins, and harmful substances. Mucus is made up of mucins containing several O-linked glycan glycoproteins that recognize microbes and form a barrier around them, preventing them from attaching to endothelial cells and infecting them. [ 26 ] [ 27 ] C. perfringens can secrete different carbohydrate-active enzymes (CAZymes) that aid in degrading mucins and other O-glycans within the intestinal mucosa. These enzymes include sialidases, hexosaminidases, galactosidases, and fucosidases belonging to various glycoside hydrolase families . [ 27 ]
Sialidases , also called neuraminidases, break down mucin by hydrolyzing the terminal sialic acid residues located within the protein, a process known as desialylation . C. perfringens has three sialidases belonging to glycoside hydrolase family 33 (GH33) : NanH, NanI, and NanJ. All strains of C. perfringens encode at least one of these enzymes. [ 27 ] [ 28 ]
C. perfringens can secrete NanI and NanJ through secretion signal peptides located on each protein. Research suggests that NanH operates in the cytoplasm of C. perfringens , as it does not contain a secretion signal peptide. NanH contains only a catalytic domain, whereas NanI and NanJ contain a catalytic domain and additional carbohydrate-binding modules (CBMs) to aid in catalytic activity. Located on their N-terminals, NanI contains CBM40, whereas NanJ contains both CBM40 and CBM32. Based on studies analyzing the three-dimensional structure of NanI, its active site has a pocket-like orientation that aids in the removal of sialic acid residues from sialomucins in the intestinal mucosa. [ 27 ]
The mucus layer consists of intestinal mucin glycans, glycolipids, and glycoproteins that contain hexosamines , such as N-acetylglucosamine (GlcNAc) and N-acetylgalactosamine (GalNAc). C. perfringens encodes for eight hexosaminidases that break down hexosamines in the mucus. These hexosaminidases belong to four glycoside hydrolase families: GH36, GH84, GH89, and GH123. [ 27 ]
C. perfringens encodes AagA ( Cp GH36A) and Cp GH36B in glycoside hydrolase family 36 (GH36) : AagA removes GalNAc from O-glycans, and Cp GH36B is expected to have a similar structure to AagA, but the specifics of its function are unknown. NagH, NagI, NagJ, and NagK, belonging to glycoside hydrolase family 84 (GH84), cleave terminal GlcNAc residues using a substrate-assisted digestion mechanism. AgnC ( Cp GH89), belonging to glycoside hydrolase family 89 (GH89), both cleaves GlcNAc from the ends of mucin glycans and acts on gastric mucin. Belonging to glycoside hydrolase family 123 (GH123), Cp Nga123 cleaves GalNAc, but research suggests that it only breaks down glycans taken up by C. perfringens , owing to the absence of a secretion signal peptide. [ 27 ]
C. perfringens has four galactosidases that belong to the glycoside hydrolase family 2 (GH2) : Cp GH2A, Cp GH2B, Cp GH2C, and Cp GH2D. Research suggests that these enzymes are effective at breaking down core mucin glycan structures with the ability to bind galactose using CBM51. However, minimal research exists on the specific functioning of galactosidases in C. perfringens . [ 27 ]
Fucose monosaccharides are located on the terminal ends of core O-linked glycans. C. perfringens encodes for three fucosidases that belong to two glycoside hydrolase families: Afc1 and Afc2 in glycoside hydrolase family 29 (GH29), and Afc3 in glycoside hydrolase family 95 (GH95). Afc3 contains a C-terminal CBM51 and is the only fucosidase that contains a carbohydrate-binding module in C. perfringens . Fucosyl residues tend to cover the ends of glycans and protect them against enzymatic digestion, so research suggests that the ability of fucosidases to cleave complex and diverse fucosyl linkages is due to long-term adaptations in C. perfringens that persisted within close range of mucins. [ 27 ]
There are five major toxins produced by Clostridium perfringens. Alpha, beta, epsilon, and enterotoxin increase a cell's permeability, causing an ion imbalance, while iota toxin destroys the cell's actin cytoskeleton. [ 29 ] On the basis of which major ("typing") toxins are produced, C. perfringens can be classified into seven "toxinotypes": A, B, C, D, E, F, and G. [ 30 ]
Alpha toxin (CPA) is a zinc-containing phospholipase C, composed of two structural domains, that destroys cell membranes. Alpha toxin is produced by all toxinotypes of C. perfringens. This toxin is linked to gas gangrene in humans and animals. Most cases of gas gangrene have been related to a deep wound being contaminated by soil that harbors C. perfringens . [ 29 ] [ 32 ]
Beta toxin (CPB) is a protein that causes hemorrhagic necrotizing enteritis and enterotoxaemia in both animals (type B) and humans (type C), leading to bloody feces and necrosis of the intestines. [ 29 ] Proteolytic enzymes , such as trypsin, can break down CPB, rendering it ineffective. The presence of trypsin inhibitors in colostrum therefore makes CPB especially deadly for newborn mammals. [ 33 ]
Epsilon toxin (ETX) is a protein produced by type B and type D strains of C. perfringens. This toxin is currently ranked the third most potent bacterial toxin known. [ 34 ] ETX causes enterotoxaemia mainly in goats and sheep, but cattle are sometimes susceptible to it as well. An experiment in mice found that ETX had an LD50 of 50–110 ng/kg. [ 35 ] Excessive production of ETX increases the permeability of the intestines, causing severe edema in organs such as the brain and kidneys. [ 36 ]
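To give a sense of the scale of this potency, the reported LD50 range can be converted into an absolute dose. The sketch below is illustrative arithmetic only; the 20 g mouse mass is an assumed typical value, not a figure from the cited experiment.

```python
# Converting an LD50 expressed per kilogram into an absolute dose.
LD50_NG_PER_KG = (50, 110)  # murine LD50 range reported for epsilon toxin
MOUSE_MASS_KG = 0.020       # assumed mass of a typical laboratory mouse

low, high = (v * MOUSE_MASS_KG for v in LD50_NG_PER_KG)
print(f"Dose lethal to half of exposed mice: {low:.1f}-{high:.1f} ng")
# -> Dose lethal to half of exposed mice: 1.0-2.2 ng
```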
The very low LD50 of ETX has led to concern that it may be used as a bioweapon. It appeared on the select agent lists of the US CDC and USDA, until it was removed in 2012. There are no human vaccines for this toxin, but effective vaccines for animals exist. [ 37 ]
Iota toxin (ITX) is a protein produced by type E strains of C. perfringens. It is made up of two unlinked proteins that form a multimeric complex on cells. Iota toxin prevents the formation of filamentous actin, destroying the cell's cytoskeleton, which in turn leads to the death of the cell as it can no longer maintain homeostasis. [ 38 ]
The C. perfringens enterotoxin (CPE) causes food poisoning. It alters intracellular claudin tight junctions in gut epithelial cells. This pore-forming toxin can also bind to human ileal and colonic epithelium in vitro and necrotize it. Through the caspase-3 pathway, it can cause apoptosis of affected cells. The toxin is linked to type F strains, but has also been found to be produced by certain type C, D, and E strains. [ 39 ]
TpeL is a toxin found in type B, C, and G [ 40 ] strains. It is in the same protein family as C. difficile toxin A . [ 41 ] It does not appear important in the pathogenesis of types B and C infections, but may contribute to virulence in type G strains. It glycosylates Rho and Ras GTPases , disrupting host cell signaling. [ 40 ]
Tissue necrosis , bacteremia , emphysematous cholecystitis , and gas gangrene , also known as clostridial myonecrosis , have been linked to infections associated with C. perfringens . [ 8 ] Research suggests that C. perfringens is capable of engaging in polymicrobial anaerobic infections . [ 42 ]
Clostridium perfringens is a common cause of food poisoning in the United States. C. perfringens produces spores, and when these spores are consumed, the bacteria produce a toxin in the intestines that causes diarrhea. Foods cooked in large batches and held at unsafe temperatures (between 40 °F and 140 °F) are the source of C. perfringens food poisoning outbreaks. Meats such as poultry, beef, and pork are commonly linked to C. perfringens food poisoning. [ 43 ] C. perfringens can proliferate in improperly stored foods because its spores can survive normal cooking temperatures. Food poisoning is caused by the C. perfringens enterotoxin (CPE), produced chiefly by type F (formerly type A) strains. [ 44 ]
Clostridium perfringens is the most common bacterial agent for gas gangrene . [ 45 ] Gas gangrene is induced by α-toxin that embeds itself into the plasma membrane of cells and disrupts normal cellular function by altering membrane structure. [ 8 ] Some symptoms include blisters, tachycardia, swelling, and jaundice. [ 45 ]
C. perfringens is most commonly known for foodborne illness but can translocate from a gastrointestinal source into the bloodstream, causing bacteremia . C. perfringens bacteremia can lead to toxin-mediated intravascular hemolysis and septic shock. [ 46 ] This is rare, as it makes up less than 1% of bloodstream isolates, but it is often fatal, with a reported mortality rate of 27% to 58%. [ 47 ]
Clostridium perfringens food poisoning can also lead to another disease known as enteritis necroticans or clostridial necrotizing enteritis (also known as pigbel), caused by C. perfringens type C. This infection is often fatal. Large numbers of C. perfringens grow in the intestines and secrete exotoxin. This exotoxin causes necrosis of the intestines, varying levels of hemorrhaging, and perforation of the intestine. Inflammation usually occurs in sections of the jejunum, the midsection of the small intestine. [ 48 ] Perfringolysin O ( pfoA )-positive C. perfringens strains were also associated with the rapid onset of necrotizing enterocolitis in preterm infants. [ 49 ]
A strain of C. perfringens might be implicated in multiple sclerosis (MS) nascent ( Pattern III ) lesions. [ 50 ] Tests in mice found that two strains of intestinal C. perfringens that produced epsilon toxins (ETX) caused MS-like damage in the brain, and earlier work had identified this strain of C. perfringens in a human with MS. [ 51 ] [ 52 ] MS patients were found to be 10 times more likely [ 53 ] to be immune-reactive to the epsilon toxin than healthy people. [ 54 ] Greatly increased rates of gut colonization by type B and D C. perfringens are seen in MS patients. [ 55 ]
Tissue gas occurs when C. perfringens infects corpses. It causes extremely accelerated decomposition and can only be stopped by embalming the corpse. Tissue gas most commonly occurs to those who have died from gangrene, large decubitus ulcers, necrotizing fasciitis or to those who had soil, feces, or water contaminated with C. perfringens forced into an open wound. [ 56 ]
Clostridium perfringens infections can lead to various clinical manifestations, ranging from mild gastrointestinal symptoms to life-threatening conditions. The most common presentation is food poisoning, characterized by acute abdominal pain, diarrhea, and, in some cases, vomiting, typically occurring 6 to 24 hours after the ingestion of contaminated food. Unlike many other foodborne illnesses, fever is usually absent. Symptoms are usually self-limiting and resolve within 24 to 48 hours; however, severe dehydration can occur in cases of significant fluid loss. Symptoms of dehydration include dry mouth, decreased urine output, dizziness, and fatigue. Severe symptoms such as diarrhea that persists for more than 48 hours, the inability to keep fluids down, or signs of severe dehydration may necessitate medical attention. [ 57 ] Most people are able to recover from C. perfringens food poisoning without treatment. However, people who experience diarrhea are usually instructed to drink water or rehydration solutions. [ 58 ]
Gas gangrene caused by Clostridium perfringens is characterized by severe symptoms, including intense pain at the injury site, fever, rapid heart rate, sweating, and anxiety. The affected area may show signs of swelling, discoloration (ranging from pale to dark red or purplish), and large, discolored blisters filled with foul-smelling fluid. As the toxins spread, skin and muscle tissue are rapidly destroyed, leading to large areas of dead tissue, gas pockets under the skin (crepitus), and possible renal failure due to red blood cell destruction. Sepsis and septic shock may also occur, which can be fatal. [ 59 ]
Necrotizing enteritis caused by Clostridium perfringens presents with a wide range of symptoms, which can vary in severity. The clinical signs range from mild diarrhea to more severe manifestations such as intense abdominal pain, vomiting, bloody stools, and even septic shock. In the most serious cases, the infection can lead to death. [ 60 ]
The diagnosis of Clostridium perfringens food poisoning relies on laboratory detection of the bacterium or its toxin in either a patient's stool sample or contaminated food linked to the illness. A positive stool culture shows growth of at least 10⁶ CFU/g of C. perfringens . Stool studies include WBCs , ova , and parasites in order to rule out other potential etiologies . ELISA testing is used to detect the CPE toxin. Diagnosing C. perfringens food poisoning is relatively uncommon for several reasons. [ 61 ] Most individuals with this foodborne illness do not seek medical care or submit a stool sample for testing, and routine testing for C. perfringens is not typically performed in clinical laboratories. Additionally, public health laboratories generally conduct testing for this pathogen only in the event of an outbreak. [ 62 ]
The diagnosis of gas gangrene typically involves several methods to confirm the infection. Imaging techniques such as X-rays , CT scans , or MRIs can reveal gas bubbles or tissue changes indicative of muscle damage. Additionally, bacterial staining or culture of fluid taken from the wound helps identify Clostridium perfringens and other bacteria responsible for the infection. In some cases, a biopsy is performed, where a sample of the affected tissue is analyzed for signs of damage or necrosis. [ 59 ]
The diagnosis of clostridial necrotizing enteritis is primarily based on the patient's clinical symptoms, which can include severe abdominal pain, vomiting, and bloody diarrhea. Additionally, confirmation of the presence of Clostridium perfringens type C toxin in stool samples is crucial for accurate diagnosis. [ 60 ]
Clostridium perfringens is responsible for an estimated 966,000 cases annually, or about 10.3% of all foodborne illnesses in which a pathogen is identified. Transmission typically occurs when food contaminated with C. perfringens spores is consumed, allowing the bacteria to produce a toxin in the intestines that causes diarrhea. Outbreaks are often associated with foods cooked in large batches, such as poultry, meat, and gravy, and held at unsafe temperatures between 40 °F and 140 °F, which allows the bacteria to thrive. These outbreaks tend to occur in settings where large groups are served, such as hospitals, school cafeterias, prisons, nursing homes, and catered events.

In most cases, C. perfringens infection causes mild symptoms, including watery diarrhea and mild abdominal cramps, with symptoms typically appearing 8 to 12 hours after consuming contaminated food and resolving within 24 hours. About 90% of affected individuals recover without seeking medical attention, usually within two days. However, vulnerable groups such as the elderly, young children, and immunocompromised individuals face a higher risk of severe complications like dehydration, which can lead to more serious illness or, in rare cases, death.

Each year, C. perfringens infections result in approximately 438 hospitalizations and 26 deaths, accounting for 0.8% of foodborne illness-related hospitalizations and 1.9% of associated deaths. Outbreaks are most common in November and December, coinciding with holiday foods like turkey and roast beef. The economic burden of C. perfringens is significant, estimated at $342.7 million annually, including $53.2 million in medical costs, $64.3 million in productivity loss, and $225 million related to fatalities. [ 63 ] [ 64 ]
Clostridial necrotizing enteritis is rare in the United States and typically occurs in populations at higher risk. Data show that of the 9.4 million cases of foodborne illness in the United States each year, only about 11% are caused by Clostridium perfringens . [ 65 ] Risk factors for enteritis necroticans include a protein-deficient diet, unhygienic food preparation, sporadic feasts of meat (after long periods of a protein-deficient diet), diets containing large amounts of trypsin inhibitors (such as sweet potatoes ), and areas prone to infection with the parasite Ascaris (which produces a trypsin inhibitor). The disease occurs in populations living in New Guinea, parts of Africa, Central America, South America, and Asia. [ 48 ]
Risk factors for gas gangrene include severe injuries, abdominal surgeries, and underlying health conditions such as colon cancer , diseases of the blood vessels, diabetes , and diverticulitis . However, the most common route to gas gangrene is a traumatic injury. In the United States, there are only about 1,000 cases of gas gangrene per year. With adequate care, gas gangrene has a mortality rate of 20–30%; left untreated, it is invariably fatal. [ 66 ]
On May 7, 2010, 42 residents and 12 staff members at a Louisiana (USA) state psychiatric hospital experienced vomiting, abdominal cramps, and diarrhea. Three patients died within 24 hours. The outbreak was linked to chicken that was cooked a day before it was served and was not cooled down according to hospital guidelines. The outbreak affected 31% of the hospital's residents and 69% of the staff who ate the chicken; how many of the affected residents ate the chicken is unknown. [ 67 ]
In May 2011, a man died after allegedly eating food contaminated with the bacteria on a transatlantic American Airlines flight. The man's wife and daughter were suing American and LSG Sky Chefs , the German company that prepared the inflight food. [ 68 ]
In December 2012, a 46-year-old woman died two days after eating a Christmas Day meal at a pub in Hornchurch , Essex , England. She was among about 30 people to fall ill after eating the meal. Samples taken from the victims contained C. perfringens . The hotel manager and the cook were jailed for forging cooking records relating to the cooking of the turkey. [ 69 ]
In December 2014, 87-year-old Bessie Scott died three days after eating a church potluck supper in Nackawic , New Brunswick , Canada. Over 30 other people reported signs of gastrointestinal illness, diarrhea, and abdominal pain. The province's acting chief medical officer said Clostridium perfringens was most likely the bacterium that caused the woman's death. [ 70 ]
In October 2016, 66-year-old Alex Zdravich died four days after eating an enchilada, burrito, and taco at Agave Azul in West Lafayette, Indiana , United States. Three others who dined the same day reported signs of foodborne illness, which were consistent with the symptoms and rapid onset of C. perfringens infection. They later tested positive for the presence of the bacteria, but the leftover food brought home by Zdravich tested negative. [ 71 ] [ 72 ]
In November 2016, food contaminated with C. perfringens caused three individuals to die, and another 22 to be sickened, after a Thanksgiving luncheon hosted by a church in Antioch, California , United States. [ 73 ]
In January 2017, a mother and her son sued a restaurant in Rochester, New York , United States, as they and 260 other people were sickened after eating foods contaminated with C. perfringens . "Officials from the Monroe County Department of Public Health closed down the Golden Ponds after more than a fourth of its Thanksgiving Day guests became ill. An inspection revealed a walk-in refrigerator with food spills and mold, a damaged gasket preventing the door from closing, and mildew growing inside." [ 74 ]
In July 2018, 647 people reported symptoms after eating at a Chipotle Mexican Grill restaurant in Powell, Ohio , United States. Stool samples tested by the CDC tested positive for C. perfringens . [ 75 ]
In November 2018, approximately 300 people in Concord, North Carolina , United States, were sickened by food at a church barbecue that tested positive for C. perfringens . [ 76 ]
In 2021, a foodborne illness outbreak in Homer, Alaska , affected approximately 80 employees of South Peninsula Hospital and was traced to Cubano sandwiches served during staff meals. The Alaska Department of Health and Social Services identified the likely cause as C. perfringens . No hospitalizations were reported, and the outbreak was contained to hospital staff. Such localized outbreaks are considered uncommon in Alaska when not tied to a national foodborne incident. [ 77 ]
Preventing Clostridium perfringens contamination and growth involves careful food handling, proper cooking, and appropriate storage practices. Most foods, especially beef and chicken, can be made safe by cooking them to the recommended internal temperatures. Using a kitchen thermometer is the most reliable way to check that meats reach safe cooking temperatures. As a general rule, food should be avoided if it smells, tastes, or looks off, or if it has been left out at unsafe temperatures for a long period of time. [ 78 ]
C. perfringens can multiply within a temperature range of 59 °F (15 °C) to 122 °F (50 °C). [ 79 ] To prevent bacterial growth, leftovers should be refrigerated within two hours of preparation and chilled to below 40 °F (4 °C). Large portions of food that contain meat should be divided into smaller containers before refrigeration to ensure even cooling. Before serving, leftovers should be reheated to at least 165 °F (74 °C) to destroy any bacteria that may have grown during storage. [ 78 ]
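The storage guidance above reduces to a handful of temperature thresholds. The sketch below merely encodes those thresholds for illustration; it is not food-safety software, and the function name is invented.

```python
# Temperature thresholds stated in this section, in degrees Fahrenheit.
GROWTH_RANGE_F = (59, 122)  # range in which C. perfringens can multiply
FRIDGE_MAX_F = 40           # leftovers should be chilled below this
REHEAT_MIN_F = 165          # leftovers should be reheated to at least this

def leftover_status(temp_f: float) -> str:
    """Classify a leftover's temperature against the guidance above."""
    if temp_f >= REHEAT_MIN_F:
        return "reheated to a safe serving temperature"
    if GROWTH_RANGE_F[0] <= temp_f <= GROWTH_RANGE_F[1]:
        return "within the bacterial growth range: chill or reheat promptly"
    if temp_f < FRIDGE_MAX_F:
        return "safely chilled"
    return "outside the growth range but not yet at a safe serving temperature"

print(leftover_status(75))
# -> within the bacterial growth range: chill or reheat promptly
```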
High-risk foods, such as canned vegetables, smoked or cured meats, and salted or smoked fish, require additional attention. Improper processing or storage can allow bacteria to grow and produce dangerous toxins. Signs of contamination, such as unusual odors, changes in texture, or bulging cans (also known as "bombage"), indicate food spoilage, and such food should be discarded. [ 80 ]
Preventing gas gangrene involves taking precautions to avoid bacterial infections. Healthcare providers follow strict protocols to prevent infections, including those caused by Clostridium perfringens . To reduce the risk of gas gangrene, individuals should clean wounds thoroughly with soap and water and seek medical attention for deep or uncleanable wounds. It is also essential to monitor injuries for changes in skin condition or the onset of severe pain. Wearing protective gear when engaging in activities like biking or motorcycling can help prevent injury. Additionally, working with healthcare providers to manage underlying conditions that affect circulation or weaken the immune system can further reduce the risk of infection. [ 59 ]
The treatment of Clostridium perfringens infections depends on the type and severity of the condition. For severe infections, such as gas gangrene (clostridial myonecrosis), the primary approach involves surgical debridement of the affected area. This procedure removes devitalized tissue where bacteria grow, which limits the spread of the infection. Antimicrobial therapy is usually started at the same time, with penicillin being the most commonly used drug. [ 81 ] However, C. perfringens shows varying resistance patterns, with about 20% of strains resistant to clindamycin and 10% resistant to metronidazole. [ 82 ] C. perfringens is often more susceptible to vancomycin than other pathogenic Clostridia , making it an alternative treatment option in some cases. [ 81 ]
Therapies, such as hyperbaric oxygen therapy (HBOT), may also be used for severe clostridial tissue infections. HBOT increases oxygen delivery to infected tissues, creating an environment that inhibits the growth of anaerobic bacteria like C. perfringens . While not commonly used, HBOT can be beneficial in certain cases. [ 83 ]
For foodborne illness caused by C. perfringens , treatment is typically unnecessary. Most people who develop food poisoning caused by C. perfringens fight off the illness without the need for antibiotics. Extra fluids should be taken consistently until the diarrhea subsides. [ 84 ]
C. perfringens has shown increasing multidrug resistance , particularly in strains from humans and animals. High resistance levels have been found for antibiotics such as tetracycline, erythromycin, and sulfonamides. Genetic factors, misuse of antibiotics, and bacterial evolution contribute to this trend, which highlights the importance of finding new treatment strategies. [ 85 ]
Multilocus sequence typing (MLST) and whole-genome sequencing (WGS) have been used to characterize the genetic diversity of C. perfringens . These methods have identified 195 distinct sequence types grouped into 25 clonal complexes from 322 genomes. Phylogenetic groups were found across multiple hosts and environmental sources, highlighting the bacterium's transmission potential and adaptability across species. [ 86 ]
|
https://en.wikipedia.org/wiki/Clostridium_perfringens
|
A cloth face mask is a mask made of common textiles , usually cotton , worn over the mouth and nose. When more effective masks are not available, cloth face masks are recommended by public health agencies for disease source control in epidemic situations to protect others from virus-laden aerosols emitted by infected mask wearers as they breathe, talk, cough, or sneeze.
Cloth masks are also used to reduce the risk of transmission to the wearer. Because they are less effective than N95 masks in protecting the wearer against viruses and other airborne particles, [ 1 ] [ 2 ] they are not considered to be personal protective equipment by public health agencies. [ 3 ]
Cloth face masks were routinely used by healthcare workers starting from the late 19th century. They fell out of use in the developed world in favor of disposable surgical masks, and respirators with an electret (electrically charged) filter material , [ 4 ] but cloth masks persisted in developing countries . [ 5 ] During the COVID-19 pandemic , their use in developed countries was revived due to shortages , as well as for environmental concerns and practicality. Launderable cloth electret filters were also being developed. [ 6 ]
Prior to the COVID-19 pandemic , reusable cloth face masks were predominantly used by healthcare workers, and in the community, in developing countries and in Asia. Cloth face masks contrast with surgical masks and respirators such as N95 masks , which are made of nonwoven fabric and contain a layer or layers formed through a melt blowing process. Melt-blown materials have a fine, random structure, which leads to excellent filtration properties. Respirators are regulated for their effectiveness based upon filtration efficiency of sub-micron sodium chloride particles (count median diameter 0.075 ± 0.020 μm; mass median diameter 0.26 μm), [ 8 ] along with other criteria such as outer splash/spray protection, inner splash/spray absorption, contaminant accumulation and shedding, breathability (or pressure drop across the mask), and inflammability. [ 9 ] Like surgical masks, and unlike respirators, cloth face masks do not provide a seal around the face, and prior to the COVID-19 outbreak in 2019 they were generally not authorized by institutions for protection from diseases recognized as airborne (e.g., tuberculosis). [ 5 ]
In healthcare settings, they are used on sick patients as source control to reduce disease transmission through respiratory droplets and respiratory aerosols and by healthcare workers when surgical masks and respirators are unavailable. Cloth face masks are only recommended for use by healthcare workers as a last resort if supplies of surgical masks and respirators are exhausted. [ 5 ] They are also used by the general public in household and community settings to reduce the risks of both infectious diseases and particulate air pollution and to contain the wearer's exhaled virus-laden droplets and aerosols. [ 5 ] [ 10 ]
Several types of cloth face masks are available commercially, especially in Asia. [ 10 ] Homemade masks can also be improvised using bandanas , [ 7 ] T-shirts , [ 7 ] [ 9 ] handkerchiefs , [ 9 ] scarves , [ 9 ] or towels . [ 11 ] Depending on the design and materials, reusable cloth masks with incorporated filters can block particles as well as surgical masks. [ 12 ] [ 13 ] [ 14 ]
The World Health Organization (WHO) continues to recommend that masks be used as part of a comprehensive strategy of measures to suppress transmission and save lives; the use of a mask alone is not sufficient to provide an adequate level of protection against COVID-19. [ 15 ] The US Centers for Disease Control and Prevention , along with the Johns Hopkins University School of Medicine , the Mayo Clinic , and the Cleveland Clinic , concur with this recommendation. [ 16 ] [ 17 ] [ 18 ] [ 19 ] The World Health Organization has also advised that those aged over 60 or with underlying health risks require more protection and should wear medical masks in areas where there is community transmission. [ 20 ]
The World Health Organization recommends using masks with at least three layers of different materials. Two spunbond polypropylene layers offer useful increases in filtration with acceptable breathability. [ 21 ] [ 22 ] When producing cloth face masks, two parameters should be considered: filtration efficiency of the material and breathability.
The filter quality factor known as "Q" is commonly used as an integrated indicator of filter quality. It is a function of filtration efficiency and breathability, with higher values indicating better performance. Experts recommend a Q factor of three or higher. [ 21 ] The usefulness of this ratio in selecting materials for masks has, however, not yet been demonstrated.
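The cited sources describe Q qualitatively; in the filtration literature it is conventionally defined as Q = -ln(1 - η)/Δp, where η is the filtration efficiency and Δp the pressure drop across the material. The sketch below assumes that standard definition, and the example numbers are invented for illustration.

```python
# Standard filter quality factor: Q = -ln(1 - efficiency) / pressure_drop.
# Higher Q means more filtration per unit of breathing resistance.
from math import log

def quality_factor(efficiency: float, pressure_drop_kpa: float) -> float:
    """Q in kPa^-1 for a filter with the given efficiency and pressure drop."""
    if not 0 <= efficiency < 1:
        raise ValueError("efficiency must be in [0, 1)")
    return -log(1 - efficiency) / pressure_drop_kpa

# A hypothetical two-layer cotton swatch: 60% efficiency at a 0.1 kPa drop.
print(f"Q = {quality_factor(0.60, 0.1):.1f} kPa^-1")
# -> Q = 9.2 kPa^-1
```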
A peer-reviewed summary [ 23 ] of the filtration properties of cloth and cloth masks concluded that, pending further research, evidence is strongest for 2 to 4 layers of plain weave cotton or flannel, at least 100 thread count . A plain-language summary of this review is available. [ 24 ]
Cloth face masks can be used for source control to reduce disease transmission arising from the wearer's respiratory droplets and respiratory aerosols, and also to reduce risk for the wearer by filtering incoming aerosols, [ 1 ] [ 14 ] thereby reducing disease transmission, [ 2 ] but they are not considered personal protective equipment for the wearer in regulated or health-and-safety contexts. [ 25 ] [ 26 ] [ 27 ] Filtration efficiency of the material is typically low compared with the non-woven materials used in surgical masks and respirators. [ 28 ] [ 29 ] However, a well-designed cloth mask may fit very well, with little edge leak. In contrast, surgical mask material is an excellent filter, but surgical masks typically fit poorly, with visible gaps and large edge leaks. This explains why cloth masks (poorer filter, good fit) and surgical masks (good filter, poorer fit) perform similarly in tests of protection of the wearer from aerosol-sized particles. [ 1 ] [ 14 ]
As of 2015, there had been no randomized clinical trials or guidance on the use of reusable cloth face masks. [ 5 ] [ 11 ] Most research had been performed in the early 20th century, before disposable surgical masks became prevalent. One 2010 study found that 40–90% of particles in the 20–1000 nm range penetrated a cloth mask and other fabric materials. [ 28 ] The performance of cloth face masks varies greatly with the shape, fit, and type of fabric, [ 10 ] as well as the fabric fineness and number of layers. [ 11 ] As of 2006, no cloth face masks had been cleared by the U.S. Food and Drug Administration for use as surgical masks. [ 9 ] A Vietnamese study of healthcare workers compared influenza-like illness outcomes among those wearing cloth masks versus medical masks. [ 30 ] The authors concluded that cloth masks were ineffective at preventing transmission in high-risk clinical settings. Although discouraged in clinical settings, cloth masks may still serve a useful role in reducing disease transmission in public settings, according to a systematic review. [ 31 ] [ 32 ]
One role of masks worn by the general public is to "stop those who are already infected broadcasting the virus into the air around them" or source control. [ 33 ] This is of particular importance with the COVID-19 pandemic , as transmission from people who are asymptomatic is a key feature of its rapid spread. [ 34 ] For example, of the people on board the Diamond Princess cruise ship , 634 people were found to be infected—52% had no symptoms at the time of testing, including 18% who never developed symptoms. [ 35 ] Best practice is to implement multiple prevention techniques to reduce risk, for example, increasing the proportion of outside air used in ventilation, filtration systems built into HVAC, stand-alone filtration systems such as HEPA filters and Corsi-Rosenthal boxes , avoiding crowded spaces, and practicing staying home while sick and for the period of probable infectiousness following illness. [ 36 ]
Compared with bacteria recovery from unmasked volunteers, a mask made of muslin and flannel reduced bacteria recovered on agar sedimentation plates by 99%, total airborne microorganisms by 99%, and bacteria recovered from aerosols (< 4 μm) by 88% to 99%. [ 37 ] In 1975, 4 medical masks and 1 commercially produced reusable mask made of 4 layers of cotton muslin were compared. Filtration efficiency, assessed by bacterial counts, was 96% to 99% for the medical masks and 99% for the cloth mask; for aerosols (< 3.3 μm), it was 72% to 89% and 89%, respectively. [ 38 ]
An experiment carried out in 2013 by Public Health England, that country's health-protection agency, found that a commercially made surgical mask filtered 90% of virus particles from the air coughed out by participants, a vacuum cleaner bag filtered out 86%, a tea towel blocked 72%, and a cotton t-shirt 51%, though fitting any DIY mask properly and ensuring a good seal around the mouth and nose is crucial. [ 39 ] [ 33 ] The use of common fabrics in making face masks has been tested. [ 40 ] [ 41 ] [ 42 ] [ 43 ] Filter efficiency can be improved with multiple layers, but it is important also to consider breathability. Cotton is the most commonly used material, and filter efficiencies can reach over 80% for particles < 300 nm with fabric combinations such as cotton-silk, cotton-chiffon, or cotton-flannel, though these filtration efficiencies are not directly comparable with the reported properties of surgical masks and respirators, because of the non-standard methods used in that study, conducted in the early months of the COVID-19 pandemic. [ 43 ] The WHO recommended that cloth masks have three layers: a hydrophilic inner layer (e.g., cotton) to absorb moisture from the wearer's breathing, a filter layer, ideally spunbond polypropylene, [ 44 ] and a hydrophobic outer layer (e.g., polyester). [ 21 ] The benefits of a hydrophobic outer layer have not been clearly demonstrated, and the idea may be a throwback to the notion of protection from droplets, which is now recognized as not the dominant mode of transmission. [ 45 ] [ 46 ] [ 47 ] Masks should be cleaned after each use. They can either be laundered or hand-washed in hot, soapy water and dried with high heat. [ 48 ]
In Roman times, Pliny the Elder recommended that miners use animal bladders to protect against inhaling lead oxides. Some followers of Jainism , which originated in India around 500 BCE, wear cloth masks to avoid accidentally inhaling insects as part of practicing ahimsa . [ 50 ] [ 51 ] [ 52 ] In the 16th century, Leonardo da Vinci advised the use of a wet woven cloth to protect against the toxic agents of chemical warfare. [ 53 ] In the early modern period, the plague-doctor costume included a beaked face-mask worn to protect the wearer from infectious " miasma ".
Conventional cowboy attire in the American West often included a bandanna , which could protect the face from blown dust and also potentially doubled as a means of obscuring identity. [ 54 ]
In 1890, William Stewart Halsted pioneered the use of rubber gloves and surgical face masks, although some European surgeons such as Paul Berger and Jan Mikulicz-Radecki had worn cotton gloves and masks earlier. These masks became commonplace after World War I and the influenza pandemic of 1918. [ 55 ] [ 56 ] Cloth face masks were promoted by Wu Lien-teh in the 1910–11 Manchurian pneumonic plague outbreak , although Western medics doubted their efficacy in preventing the spread of disease. [ 57 ]
Cloth masks were largely supplanted by modern surgical masks made of nonwoven fabric in the 1960s, [ 9 ] [ 11 ] although peer-reviewed evidence supporting their equivalence appears to be lacking. The use of cloth masks continued in developing countries. [ 5 ] They were used in Asia during the 2002–2004 SARS outbreak , and in West Africa during the 2013–2016 Ebola epidemic . [ 5 ]
In the early years of the COVID-19 pandemic , most countries recommended the use of cloth masks to reduce the spread of the virus. [ 58 ]
On June 5, 2020, WHO changed its advice on face masks, recommending that the general public should wear fabric masks where widespread COVID-19 transmission exists and physical distancing is not possible (for example, "on public transport, in shops or in other confined or crowded environments"). [ 59 ] [ 60 ] The WHO continues to recommend their use: 'Masks should be used as part of a comprehensive strategy of measures to suppress transmission and save lives; the use of a mask alone is not sufficient to provide an adequate level of protection against COVID-19.' [ 15 ] The CDC also continues to recommend masking: 'Wearing a mask is an additional prevention strategy that you can choose to further protect yourself and others.' [ 61 ]
|
https://en.wikipedia.org/wiki/Cloth_face_mask
|
Clue cells are epithelial cells of the vagina that get their distinctive stippled appearance by being covered with bacteria . The term "clue cell" derives from the original research article by Gardner and Dukes describing the characteristic cells; the name was chosen for its brevity in describing the sine qua non of bacterial vaginosis . [ 1 ]
They are a medical sign of bacterial vaginosis, particularly that caused by Gardnerella vaginalis , [ 2 ] a group of Gram-variable bacteria. This bacterial infection is characterized by thin gray vaginal discharge , and an increase in vaginal pH from around 4.5 to over 5.5.
|
https://en.wikipedia.org/wiki/Clue_cell
|
Clumping factor A , or ClfA , is a virulence factor from Staphylococcus aureus ( S. aureus ) that binds to fibrinogen .
ClfA also has been shown to bind to complement regulator I protein. [ 1 ]
It is responsible for the clumping of blood plasma observed when adding S. aureus to human plasma. Clumping factor can be detected by the slide test .
|
https://en.wikipedia.org/wiki/Clumping_factor_A
|
Cluster headache is a neurological disorder characterized by recurrent severe headaches on one side of the head, typically around the eye(s) . [ 1 ] There is often accompanying eye watering, nasal congestion , or swelling around the eye on the affected side. [ 1 ] These symptoms typically last 15 minutes to 3 hours. [ 2 ] Attacks often occur in clusters which typically last for weeks or months and occasionally more than a year. [ 2 ] The disease is considered among the most painful conditions known to medical science. [ 6 ] [ 7 ]
The cause is unknown, [ 2 ] but is most likely related to dysfunction of the posterior hypothalamus . [ 8 ] Risk factors include a history of exposure to tobacco smoke and a family history of the condition. [ 2 ] Exposures which may trigger attacks include alcohol , nitroglycerin , and histamine . [ 2 ] They are a primary headache disorder of the trigeminal autonomic cephalalgias (TAC) type. [ 2 ] Diagnosis is based on symptoms. [ 2 ]
Recommended management includes lifestyle adaptations such as avoiding potential triggers. [ 2 ] Treatments for acute attacks include oxygen or a fast-acting triptan . [ 2 ] [ 4 ] Measures recommended to decrease the frequency of attacks include steroid injections , galcanezumab , civamide , verapamil , or oral glucocorticoids such as prednisone . [ 8 ] [ 4 ] [ 9 ] Nerve stimulation or surgery may occasionally be used if other measures are not effective. [ 2 ] [ 8 ]
The condition affects about 0.1% of the general population at some point in their life and 0.05% in any given year. [ 5 ] The condition usually first occurs between 20 and 40 years of age. [ 2 ] Men are affected about four times more often than women. [ 5 ] Cluster headaches are named for the occurrence of groups of headache attacks (clusters). [ 1 ] They have also been referred to as " suicide headaches ". [ 2 ]
Cluster headaches are recurring bouts of severe unilateral headache attacks. [ 10 ] [ 11 ] The duration of a typical cluster headache ranges from about 15 to 180 minutes. [ 2 ] About 75% of untreated attacks last less than 60 minutes. [ 12 ] However, women may have longer and more severe cluster headaches. [ 13 ]
The onset of an attack is rapid and typically without an aura . Preliminary sensations of pain in the general area of attack, referred to as "shadows", may signal an imminent cluster headache, or these symptoms may linger after an attack has passed, or between attacks. [ 14 ] Though cluster headaches are strictly unilateral, there are some documented cases of "side-shift" between cluster periods, [ 15 ] or, rarely, simultaneous (within the same cluster period) bilateral cluster headaches. [ 16 ]
The pain occurs only on one side of the head, around the eye, particularly behind or above the eye, or in the temple. The pain is typically greater than in other headache conditions, including migraines , and is usually described as burning, stabbing, drilling or squeezing. [ 17 ] While suicide is rare, those with cluster headaches may experience suicidal thoughts (giving the alternative name "suicide headache" or "suicidal headache"). [ 18 ] [ 19 ]
Dr. Peter Goadsby, Professor of Clinical Neurology at University College London and a leading researcher on the condition, has commented:
"Cluster headache is probably the worst pain that humans experience. I know that's quite a strong remark to make, but if you ask a cluster headache patient if they've had a worse experience, they'll universally say they haven't. Women with cluster headache will tell you that an attack is worse than giving birth. So you can imagine that these people give birth without anesthetic once or twice a day, for six, eight, or ten weeks at a time, and then have a break. It's just awful." [ 20 ]
The typical symptoms of cluster headache include the grouped occurrence and recurrence (cluster) of headache attacks and severe unilateral orbital, supraorbital and/or temporal pain. If left untreated, attack frequency may range from one attack every two days to eight attacks per day. [ 2 ] [ 21 ] A cluster headache attack is accompanied by at least one of the following autonomic symptoms: drooping eyelid , pupil constriction , redness of the conjunctiva , tearing , runny nose and, less commonly, facial blushing , swelling, or sweating, typically appearing on the same side of the head as the pain. [ 21 ] Similar to a migraine, sensitivity to light ( photophobia ) or noise ( hyperacusis ) may occur during a cluster headache. Nausea is a rare symptom, although it has been reported. [ 10 ]
Restlessness (for example, pacing or rocking back and forth) may occur. Secondary effects may include the inability to organize thoughts and plans, physical exhaustion, confusion, agitation, aggressiveness, depression, and anxiety. [ 18 ]
People with cluster headaches may dread facing another headache and adjust their physical or social activities around a possible future occurrence. Likewise they may seek assistance to accomplish what would otherwise be normal tasks. They may hesitate to make plans because of the regularity, or conversely, the unpredictability of the pain schedule. These factors can lead to generalized anxiety disorders , panic disorder , [ 18 ] serious depressive disorders , [ 22 ] social withdrawal and isolation. [ 23 ]
Cluster headaches have recently been associated with comorbid obstructive sleep apnea . [ 24 ]
Cluster headaches may occasionally be referred to as "alarm clock headache" because of the regularity of their recurrence. Cluster headaches often awaken individuals from sleep. Both individual attacks and the cluster grouping can have a metronomic regularity; attacks typically strike at a precise time of day each morning or night. The recurrence of headache cluster grouping may occur more often around solstices , or seasonal changes, sometimes showing circannual periodicity. Conversely, attack frequency may be highly unpredictable, showing no periodicity at all. These observations have prompted researchers to speculate about an involvement or dysfunction of the hypothalamus. The hypothalamus controls the body's "biological clock" and circadian rhythm . [ 25 ] [ 26 ] In episodic cluster headache, attacks occur once or more daily, often at the same time each day for a period of several weeks, followed by a headache-free period lasting weeks, months, or years. Approximately 10–15% of cluster headaches are chronic , with multiple headaches occurring every day for years, sometimes without any remission. [ 27 ]
In accordance with the International Headache Society (IHS) diagnostic criteria, cluster headaches occurring in two or more cluster periods, each lasting from 7 to 365 days with a pain-free remission of one month or longer between them, may be classified as episodic. If headache attacks occur for more than a year without pain-free remission of at least three months, the condition is classified as chronic. [ 21 ] Chronic cluster headaches both occur and recur without any remission periods between cycles; there may be variation in cycles, meaning the frequency and severity of attacks may change without predictability for a period of time. The frequency, severity, and duration of headache attacks experienced by people during these cycles varies between individuals and does not demonstrate complete remission of the episodic form. The condition may change unpredictably from chronic to episodic and from episodic to chronic. [ 28 ]
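To make the episodic/chronic distinction concrete, the following sketch encodes the criteria exactly as paraphrased above. It is a simplified illustration, not a clinical tool; the function name and data layout are invented for this example.

```python
def classify_cluster_headache(period_days, remission_days):
    """Simplified sketch of the IHS episodic/chronic distinction described above.

    period_days: durations of each cluster period, in days.
    remission_days: pain-free gaps between consecutive periods, in days.
    """
    # Episodic: two or more cluster periods of 7-365 days each,
    # separated by pain-free remissions of one month (~30 days) or longer.
    episodic = (
        len(period_days) >= 2
        and all(7 <= p <= 365 for p in period_days)
        and all(r >= 30 for r in remission_days)
    )
    # Chronic: attacks span more than a year without any
    # pain-free remission of at least three months (~90 days).
    total_span = sum(period_days) + sum(remission_days)
    chronic = total_span > 365 and all(r < 90 for r in remission_days)

    if episodic:
        return "episodic"
    if chronic:
        return "chronic"
    return "unclassified"

print(classify_cluster_headache([30, 45], [120]))  # episodic
print(classify_cluster_headache([400], []))        # chronic
```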
The specific causes and pathogenesis of cluster headaches are not fully understood. [ 8 ] The Third Edition of the International Classification of Headache disorders classifies cluster headaches as belonging to the trigeminal autonomic cephalalgias . [ 29 ]
Some experts consider the posterior hypothalamus to be important in the pathogenesis of cluster headaches. This is supported by a relatively high success ratio of deep-brain stimulation therapy on the posterior hypothalamic grey matter . [ 8 ]
Therapies acting on the vagus nerve ( cranial nerve X) and the greater occipital nerve have both shown efficacy in managing cluster headache, but the specific roles of these nerves are not well-understood. [ 8 ] Two nerves thought to play an important role in cluster headaches include the trigeminal nerve and the facial nerve . [ 30 ]
Cluster headache may run in some families in an autosomal dominant inheritance pattern. [ 31 ] [ 32 ] People with a first degree relative with the condition are about 14–48 times more likely to develop it themselves, [ 1 ] and around 8 to 10% of persons with cluster headaches have a family history. [ 31 ] [ 33 ] Several studies have found a higher number of relatives affected among females. [ 33 ] Others have suggested these observations may be due to lower numbers of females in these studies. [ 33 ] Possible genetic factors warrant further research; current evidence for genetic inheritance is limited. [ 32 ]
Genes that are thought to play a role in the disease are the hypocretin/orexin receptor type 2 (HCRTR2), alcohol dehydrogenase 4 (ADH4), G protein beta 3 (GNB3), pituitary adenylate cyclase-activating polypeptide type I receptor (ADCYAP1R1), and membrane metalloendopeptidase (MME) genes. [ 31 ]
About 65% of persons with cluster headache are, or have been, tobacco smokers. [ 1 ] Stopping smoking does not lead to improvement of the condition, and cluster headaches also occur in those who have never smoked (e.g., children); [ 1 ] it is thought unlikely that smoking is a cause. [ 1 ] People with cluster headaches may be predisposed to certain traits, including smoking or other lifestyle habits. [ 34 ]
A review suggests that the suprachiasmatic nucleus of the hypothalamus , which is the major biological clock in the human body, may be involved in cluster headaches, because cluster headaches occur with diurnal and seasonal rhythmicity. [ 35 ]
Positron emission tomography (PET) scans indicate the brain areas that are activated during attacks only, compared with pain-free periods; these regions are collectively referred to as the "pain matrix", and one region is activated only during cluster headaches. Voxel-based morphometry shows structural brain differences between individuals with and without CH confined to a portion of the hypothalamus. [ 36 ]
Cluster-like head pain may be diagnosed as secondary headache rather than cluster headache. [ 21 ]
A detailed oral history aids practitioners in correct differential diagnosis, as there are no confirmatory tests for cluster headache. A headache diary can be useful in tracking when and where pain occurs, how severe it is, and how long the pain lasts. A record of coping strategies used may help distinguish between headache types; data on frequency, severity and duration of headache attacks are a necessary tool for initial and correct differential diagnosis in headache conditions. [ 37 ]
Correct diagnosis presents a challenge, as the first cluster headache attack may present in settings where staff are not trained in the diagnosis of rare or complex chronic diseases. [ 12 ] Experienced ER staff are sometimes trained to detect headache types. [ 38 ] While cluster headache attacks themselves are not directly life-threatening, suicidal ideation has been observed. [ 18 ]
Individuals with cluster headaches typically experience diagnostic delay before correct diagnosis. [ 39 ] People are often misdiagnosed due to reported neck, tooth, jaw, and sinus symptoms and may unnecessarily endure many years of referral to ear, nose and throat (ENT) specialists for investigation of sinuses; dentists for tooth assessment; chiropractors and manipulative therapists for treatment; or psychiatrists , psychologists , and other medical disciplines before their headaches are correctly diagnosed. [ 40 ] Under-recognition of cluster headaches by health care professionals is reflected in consistent findings in Europe and the United States that the average time to diagnosis is around seven years. [ 41 ]
Cluster headache may be misdiagnosed as migraine or sinusitis . [ 41 ] Other types of headache are sometimes mistaken for, or may mimic closely, cluster headaches. Incorrect terms like "cluster migraine" confuse headache types, confound differential diagnosis and are often the cause of unnecessary diagnostic delay, [ 42 ] ultimately delaying appropriate specialist treatment.
Management for cluster headache is divided into three primary categories: abortive, transitional, and preventive. [ 48 ] Preventive treatments are used to reduce or eliminate cluster headache attacks; they are generally used in combination with abortive and transitional techniques. [ 10 ]
The recommended first-line preventive therapy is verapamil , a calcium channel blocker . [ 2 ] [ 49 ] Verapamil was previously underused in people with cluster headache. [ 10 ] Improvement can be seen in an average of 1.7 weeks for episodic cluster headache and 5 weeks for chronic cluster headache when using dosages ranging between 160 and 720 mg per day (mean 240 mg/day). [ 50 ] Preventive therapy with verapamil is believed to work because it has an effect on the circadian rhythm and on CGRPs , as CGRP release is controlled by voltage-gated calcium channels. [ 50 ]
There is little evidence to support a long-term benefit from glucocorticoids , [ 2 ] but, as they appear to be effective at three days, they may be used until other medications take effect. [ 2 ] They are generally discontinued after 8–10 days of treatment. [ 10 ] Prednisone is given at a starting dose of 60–80 milligrams daily, then reduced by 5 milligrams every day. Corticosteroids are also used to break cycles, especially in chronic patients. [ 51 ]
Nerve stimulators may be an option in the small number of people who do not improve with medications. [ 52 ] [ 53 ] Two procedures, deep brain stimulation or occipital nerve stimulation , may be useful; [ 2 ] early experience shows a benefit in about 60% of cases. [ 54 ] It typically takes weeks or months for this benefit to appear. [ 53 ] A non-invasive method using transcutaneous electrical nerve stimulation (TENS) is being studied. [ 53 ]
A number of surgical procedures, such as a rhizotomy or microvascular decompression , may also be considered, [ 53 ] but evidence to support them is limited and there are cases of people whose symptoms worsen after these procedures. [ 53 ]
Lithium , methysergide , and topiramate are recommended alternative treatments, [ 49 ] [ 55 ] although there is little evidence supporting the use of topiramate or methysergide. [ 2 ] [ 56 ] This is also true for tianeptine , melatonin , and ergotamine . [ 2 ] Valproate , sumatriptan , and oxygen are not recommended as preventive measures. [ 2 ] Botulinum toxin injections have shown limited success. [ 57 ] Evidence for baclofen , botulinum toxin , and capsaicin is unclear. [ 56 ]
There are two primary treatments for acute CH: oxygen and triptans , [ 2 ] but they are underused due to misdiagnosis of the syndrome. [ 10 ] During bouts of headaches, triggers such as alcohol , nitroglycerine , and naps during the day should be avoided. [ 12 ]
Oxygen therapy may help to abort attacks, though it does not prevent future episodes. [ 2 ] Typically it is given via a non-rebreather mask at 12–15 liters per minute for 15–20 minutes. [ 2 ] One review found about 70% of patients improve within 15 minutes. [ 12 ] The evidence for effectiveness of 100% oxygen, however, is weak. [ 12 ] [ 58 ] Hyperbaric oxygen at about twice atmospheric pressure may relieve cluster headaches. [ 58 ]
The other primarily recommended treatment of acute attacks is subcutaneous or intranasal sumatriptan . [ 49 ] [ 59 ] Sumatriptan and zolmitriptan have both been shown to improve symptoms during an attack with sumatriptan being superior. [ 60 ] Because of the vasoconstrictive side-effect of triptans, they may be contraindicated in people with ischemic heart disease . [ 2 ] The vasoconstrictor ergot compounds may be useful, [ 12 ] but have not been well studied in acute attacks. [ 60 ]
The use of opioid medication in management of cluster headache is not recommended [ 61 ] and may make headache syndromes worse. [ 62 ] [ 63 ] Long-term opioid use is associated with well known dependency, addiction, and withdrawal syndromes. [ 64 ] Prescription of opioid medication may additionally lead to further delay in differential diagnosis, undertreatment, and mismanagement. [ 61 ]
Intranasal lidocaine (sprayed in the ipsilateral nostril) may be an effective treatment for patients resistant to more conventional treatments. [ 13 ]
Octreotide administered subcutaneously has been demonstrated to be more effective than placebo for the treatment of acute attacks. [ 65 ]
Sub-occipital steroid injections have shown benefit and are recommended for use as a transitional therapy to provide temporary headache relief as more long term prophylactic therapies are instituted. [ 66 ]
Cluster headache affects about 0.1% of the general population at some point in their life. [ 5 ] Males are affected about four times more often than females. [ 5 ] The condition usually starts between the ages of 20 and 50 years, although it can occur at any age. [ 1 ] About one in five affected adults report the onset of cluster headache between 10 and 19 years of age. [ 67 ]
The first complete description of cluster headache was given by the London neurologist Wilfred Harris in 1926, who named the disease migrainous neuralgia . [ 68 ] [ 69 ] [ 70 ] Descriptions of cluster headache date to 1745 and probably earlier. [ 71 ]
The condition was originally named Horton's cephalalgia after Bayard Taylor Horton , a US neurologist who postulated the first theory as to its pathogenesis. His original paper describes the severity of the headaches as being able to drive normal men to attempt or die by suicide; his 1939 paper said:
"Our patients were disabled by the disorder and suffered from bouts of pain from two to twenty times a week. They had found no relief from the usual methods of treatment. Their pain was so severe that several of them had to be constantly watched for fear of suicide. Most of them were willing to submit to any operation which might bring relief." [ 72 ]
CH has alternately been called erythroprosopalgia of Bing, ciliary neuralgia, erythromelalgia of the head, Horton's headache, histaminic cephalalgia, petrosal neuralgia, sphenopalatine neuralgia, vidian neuralgia, Sluder's neuralgia, Sluder's syndrome, and hemicrania angioparalyticia. [ 73 ]
Robert Shapiro, a professor of neurology, says that while cluster headaches are about as common as multiple sclerosis with a similar disability level, as of 2013, the US National Institutes of Health had spent $1.872 billion on research into multiple sclerosis in one decade, but less than $2 million on cluster headache research in 25 years. [ 74 ]
Some case reports suggest that ingesting lysergamides such as LSD , tryptamines such as psilocybin (as found in hallucinogenic mushrooms), or DMT can abort attacks and interrupt cluster headache cycles. [ 75 ] [ 76 ] The hallucinogen DMT has a chemical structure that is similar to the triptan sumatriptan, indicating a possible shared mechanism in preventing or stopping migraine and TACs. [ 51 ] In a 2006 survey of 53 individuals, 18 of 19 psilocybin users reported extended remission periods. The survey was not a blinded or controlled study, and was "limited by recall and selection bias". [ 75 ] The safety and efficacy of psilocybin is currently being studied in cluster headache, with the extension phase of one randomized controlled trial demonstrating reduced cluster attack burden after a 3-dose pulse of psilocybin. [ 77 ] [ 78 ] [ 79 ] In Canada, the first cluster headache patient gained access to psychedelic-assisted therapy via Canada's special access scheme for psilocybin. [ 80 ]
Fremanezumab , a humanized monoclonal antibody directed against calcitonin gene-related peptides alpha and beta, was in phase 3 clinical trials for cluster headaches, but the studies were stopped early due to a futility analysis demonstrating that a successful outcome was unlikely. [ 81 ] [ 82 ]
|
https://en.wikipedia.org/wiki/Cluster_headache
|
The cluster of differentiation (also known as cluster of designation or classification determinant and often abbreviated as CD ) is a protocol used for the identification and investigation of cell surface molecules providing targets for immunophenotyping of cells. [ 1 ] In terms of physiology, CD molecules can act in numerous ways, often acting as receptors or ligands important to the cell. A signal cascade is usually initiated, altering the behavior of the cell (see cell signaling ). Some CD proteins do not play a role in cell signaling, but have other functions, such as cell adhesion . CD for humans is numbered up to 371 (as of 21 April 2016). [ 2 ] [ 3 ]
The CD nomenclature was proposed and established in the 1st International Workshop and Conference on Human Leukocyte Differentiation Antigens (HLDA), held in Paris in 1982. [ 4 ] [ 5 ] This system was intended for the classification of the many monoclonal antibodies (mAbs) generated by different laboratories around the world against epitopes on the surface molecules of leukocytes (white blood cells). Since then, its use has expanded to many other cell types, and more than 370 unique CD clusters and subclusters have been identified. The proposed surface molecule is assigned a CD number once two specific monoclonal antibodies are shown to bind to the molecule. If the molecule has not been well characterized or has only one mAb, it is usually given the provisional indicator "w" (as in " CDw186 "). [ 6 ]
For instance, CD2 mAbs are reagents that react with a 50‐kDa transmembrane glycoprotein expressed on T cells . The CD designations were used to describe the recognized molecules but had to be clarified by attaching the term antigen or molecule to the designation (e.g., CD2 molecule). Currently, "CD2" is generally used to designate the molecule, and "CD2 antibody " is used to designate the antibody. [ 7 ]
Cell populations are usually defined using a '+' or a '−' symbol to indicate whether a certain cell fraction expresses or lacks a CD molecule. For example, a " CD34 +, CD31 −" cell is one that expresses CD34 but not CD31. This CD combination typically corresponds to a stem cell , as opposed to a fully differentiated endothelial cell . Some cell populations can also be defined as hi , mid , or low (alternatively, bright , mid , or dim ), indicating an overall variability in CD expression , particularly when compared to other cells being studied. A review of the development of T cells in the thymus uses this nomenclature to identify cells transitioning from CD4 mid /CD8 mid double-positive cells to CD4 hi /CD8 mid . [ 8 ]
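As a concrete illustration of this '+/−' notation, the short sketch below selects a CD34+, CD31− population from a list of cells. The cell records, marker values, and helper name are invented for illustration; this is not a real cytometry library.

```python
# Minimal sketch of defining cell populations by CD marker expression.
# The cells and their marker values are hypothetical.
cells = [
    {"id": "A", "CD34": True,  "CD31": False},  # CD34+ CD31- : stem-cell-like
    {"id": "B", "CD34": True,  "CD31": True},
    {"id": "C", "CD34": False, "CD31": True},   # differentiated endothelial-like
]

def select(population, positive=(), negative=()):
    """Return cells expressing all 'positive' markers and none of the 'negative' ones."""
    return [
        c for c in population
        if all(c[m] for m in positive) and not any(c[m] for m in negative)
    ]

print([c["id"] for c in select(cells, positive=("CD34",), negative=("CD31",))])
# -> ['A']
```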
Since 1982 there have been nine Human Leukocyte Differentiation Antigen Workshops culminating in a conference.
The CD system is commonly used as cell markers in immunophenotyping , allowing cells to be defined based on what molecules are present on their surface. These markers are often used to associate cells with certain immune functions . While using one CD molecule to define populations is uncommon (though a few examples exist), combining markers has allowed for cell types with very specific definitions within the immune system. [ citation needed ]
CD molecules are utilized in cell sorting using various methods, including flow cytometry . [ citation needed ]
Two commonly used CD molecules are CD4 and CD8 , which are, in general, used as markers for helper and cytotoxic T cells, respectively. These molecules are defined in combination with CD3+, as some other leukocytes also express these CD molecules (some macrophages express low levels of CD4; dendritic cells express high levels of CD8). Human immunodeficiency virus binds CD4 and a chemokine receptor on the surface of a T helper cell to gain entry. The number of CD4 and CD8 T cells in blood is often used to monitor the progression of HIV infection . [ citation needed ]
While CD molecules are very useful in defining leukocytes, they are not merely markers on the cell surface . Though only a fraction of known CD molecules have been thoroughly characterised, most of them have important functions. In the example of CD4 and CD8, these molecules are critical in antigen recognition. Others (e.g., CD135 ) act as cell surface receptors for growth factors . Recently, the marker CD47 was found to have anti- phagocytic signals to macrophages and inhibit natural killer (NK) cells. This enabled researchers to apply CD47 as a potential target to attenuate immune rejection . [ 20 ] [ 21 ]
|
https://en.wikipedia.org/wiki/Cluster_of_differentiation
|
Cluttering is a speech and communication disorder characterized by a rapid rate of speech, erratic rhythm, and poor syntax or grammar, making speech difficult to understand.
It has also been described as a fluency disorder. [ 1 ]
It is defined as:
Cluttering is a fluency disorder characterized by a rate that is perceived to be abnormally rapid, irregular, or both for the speaker (although measured syllable rates may not exceed normal limits). These rate abnormalities further are manifest in one or more of the following symptoms: (a) an excessive number of disfluencies , the majority of which are not typical of people with stuttering ; (b) the frequent placement of pauses and use of prosodic patterns that do not conform to syntactic and semantic constraints; and (c) inappropriate (usually excessive) degrees of coarticulation among sounds, especially in multisyllabic words. [ 2 ]
Cluttering is sometimes confused with stuttering. Both communication disorders break the normal flow of speech, but they are distinct. A stutterer has a coherent pattern of thoughts, but may have a difficult time vocally expressing those thoughts; in contrast, a clutterer has no problem putting thoughts into words, but those thoughts become disorganized during speaking. Cluttering affects not only speech, but also thought patterns, writing, typing, and conversation. [ 3 ]
Stutterers are usually dysfluent on initial sounds, when beginning to speak, and become more fluent towards the ends of utterances. In contrast, clutterers are most clear at the start of utterances, but their speaking rate increases and intelligibility decreases towards the end of utterances.
Stuttering is characterized by struggle behavior, such as overtense speech production muscles. Cluttering, in contrast, is effortless. Cluttering is also characterized by slurred speech , especially dropped or distorted /r/ and /l/ sounds; and monotone speech that starts loud and trails off into a murmur.
A clutterer described the feeling associated with a clutter as:
It feels like 1) about twenty thoughts explode on my mind all at once, and I need to express them all, 2) that when I'm trying to make a point, that I just remembered something that I was supposed to say, so the person can understand, and I need to interrupt myself to say something that I should have said before, and 3) that I need to constantly revise the sentences that I'm working on, to get it out right. [ 4 ]
Cluttering can often be confused with various language disorders , learning disabilities , and attention deficit hyperactivity disorder (ADHD). [ 5 ] Clutterers often have reading and writing disabilities, especially sprawling, disorderly handwriting that poorly integrates ideas and space. [ 6 ] It can occur with Parkinson's disease . [ 7 ]
The common goals of treatment for cluttering include slowing the rate of speech, heightening monitoring, using clear articulation, using acceptable and organized language, interacting with listeners, speaking naturally, and reducing excessive disfluencies. [ 8 ]
Slowing the rate of speech can help many of the symptoms of cluttering, and can be achieved in a couple of different ways. It is important that speech language pathologists do not nag their clients to "slow down" incessantly, as this does not help and can actually hinder progress. Additionally, it is important to remember that speech rate often increases when emotional arousal or stress increases. Instead of constant verbal reminders, clinicians may use a combination of delayed auditory feedback (DAF), giving out "speeding tickets" (written reminders to slow down speech), or recording speech and having clients transcribe it, writing in where there is need for spaces and pauses. [ 8 ]
Many people who clutter are either unable or unwilling to think about their speech, particularly in casual speech. The strategies to slow speech down all require careful monitoring of speech, which can be very difficult for those who clutter. Imagination and careful observation are used to increase monitoring. For instance, an adult who clutters may be asked to visualize themselves speaking slowly and clearly before they actually speak. Additionally, video and audio recordings may be used to show those who clutter where communication starts to break down in their speech. [ 8 ]
In general, slowing the rate of speech and/or monitoring speech more effectively should lead to clearer articulation. However, if they do not, additional treatment is needed. These articulation treatment strategies include practicing short sentences with "over-articulated", unnatural but technically correct, speech. Reading multisyllabic words and focusing on including each of the sounds is another strategy to enhance articulation. [ 8 ]
Some individuals who clutter will need help learning to tell stories logically and sequentially. This can be aided by learning how to begin narratives with simple, short sentences, and slowly building to longer, more complex ones. Additionally, clinicians may transcribe cluttered speech to clients to show them run-ons and ramblings, and then ask them to just state the necessary, most important information in the utterance. [ 8 ]
Additional strategies that may help people who clutter include checking in, ensuring that they've understood any non-verbal or turn-taking cues in the conversation, imitating clinician models of speech to improve natural speech, and treating any stuttering that may be co-occurring with cluttering. The two are separate disorders, but many people who clutter also stutter. [ 8 ]
Battaros [ citation needed ] was a legendary Libyan king who spoke quickly and in a disorderly fashion. Others who spoke as he did were said to have battarismus . [ 9 ] This is the earliest record of the speech disorder of cluttering.
In the 1960s, cluttering was called tachyphemia , a word derived from the Greek for 'fast speech'. This word is no longer used to describe cluttering because fast speech is not a required element of cluttering.
Deso Weiss described cluttering as the outward manifestation of a "central language imbalance". [ 10 ]
The First World Conference on Cluttering was held in May 2007 in Razlog , Bulgaria . [ 11 ] It had over 60 participants from North America, Europe, the Middle East and Asia. [ 12 ]
Weiss claimed that Battaros, Demosthenes , Pericles , Justinian , Otto von Bismarck , and Winston Churchill were clutterers. He says about these people, "Each of these contributors to world history viewed his world holistically, and was not deflected by exaggerated attention to small details. Perhaps then, they excelled because of, rather than in spite of, their [cluttering]." [ 13 ]
|
https://en.wikipedia.org/wiki/Cluttering
|
The Coalition on Psychiatric Emergencies ( CPE ) is a collaborative working group of behavioral health , psychiatry , and emergency medicine professionals headed by the American College of Emergency Physicians . [ 1 ] CPE represents several professional organizations , making it a large collaborative in the field of emergency psychiatry in the United States . [ 2 ]
According to CPE's website, the coalition came out of a "psychiatric emergency summit" in December 2014. [ 3 ]
CPE hosted its "1st Annual Research Consensus Conference on Acute Mental Illness" on December 7–9, 2016 in Las Vegas, NV . [ 4 ] According to 2023-2024 CPE Chair Dr. Michael Wilson, CPE has conducted several activities to support the creation of a new Focused Practice Destination in Emergency Behavioral Health. [ 5 ]
The coalition's member organizations represent multiple healthcare disciplines, including emergency physicians, nurses, pharmacists, and other stakeholders. [ 3 ]
CPE past supporters (but not representative members) include Teva Pharmaceutical Industries , New Directions and Alexza Pharmaceuticals. [ citation needed ]
The formation of CPE has been widely reported in the medical media. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ]
Scott Zeller, MD, Chief of Psychiatric Emergency Services for the Alameda Health System, has described the collaborative as "unprecedented." [ 2 ] Peggy DeCarlis, chief operating and innovation officer of New Directions Behavioral Health, has expressed "excitement" towards her organization's partnership with CPE. [ 13 ]
David W. Covington, LPC, MBA, CEO and president of RI International, an international provider of recovery services, has suggested that the "reinforcements" that CPE will bring to American emergency departments are not enough to combat the problems that emergency departments face in dealing with acute psychiatric emergencies. [ 14 ]
|
https://en.wikipedia.org/wiki/Coalition_on_Psychiatric_Emergencies
|
Cobalt-chrome or cobalt-chromium ( CoCr ) is a metal alloy of cobalt and chromium . Cobalt-chrome has a very high specific strength and is commonly used in gas turbines , dental implants , and orthopedic implants . [ 1 ]
Co-Cr alloy was first developed by Elwood Haynes in the early 1900s by fusing cobalt and chromium. As first produced, the alloy contained several other elements, such as tungsten and molybdenum. Haynes reported that his alloy was capable of resisting oxidation and corrosive fumes and exhibited no visible sign of tarnish even when subjected to boiling nitric acid. [ 2 ] Under the name Stellite , Co-Cr alloy has been used in various fields where high wear resistance was needed, including the aerospace industry , [ 3 ] cutlery, bearings, and blades.
Co-Cr alloy started receiving more attention as its biomedical applications were found. In the 20th century, the alloy was first used in medical tool manufacturing, [ 4 ] and in 1960 the first Co-Cr prosthetic heart valve was implanted; it lasted over 30 years, demonstrating the alloy's high wear resistance. [ 5 ] Today, due to its excellent corrosion resistance, biocompatibility , high melting point, and strength at high temperatures, Co-Cr alloy is used to manufacture many artificial joints, including hips and knees, as well as dental partial bridge work, gas turbines, and many other components. [ 4 ]
The common Co-Cr alloy production route requires the extraction of cobalt and chromium from cobalt oxide and chromium oxide ores. Both ores need to go through a reduction process to obtain the pure metals. Chromium usually goes through an aluminothermic reduction technique , and pure cobalt can be obtained in many different ways depending on the characteristics of the specific ore. The pure metals are then fused together under vacuum, either by electric arc or by induction melting . [ 4 ] Due to the chemical reactivity of metals at high temperature, the process requires vacuum conditions or an inert atmosphere to prevent oxygen uptake by the metal. ASTM F75, a Co-Cr-Mo alloy, is produced in an inert argon atmosphere by ejecting molten metal through a small nozzle; the stream is immediately cooled to produce a fine powder of the alloy. [ 3 ]
However, synthesis of Co-Cr alloy through the method described above is very expensive and difficult. In 2010, scientists at the University of Cambridge produced the alloy through a novel electrochemical, solid-state reduction technique known as the FFC Cambridge Process , which involves the reduction of an oxide precursor cathode in a molten chloride electrolyte. [ 4 ]
Co-Cr alloys show high resistance to corrosion due to the spontaneous formation of a protective passive film, composed mostly of Cr 2 O 3 with minor amounts of cobalt and other metal oxides, on the surface. [ 6 ] CoCr has a melting point around 1,330 °C (2,430 °F). [ 7 ]
As its wide application in the biomedical industry indicates, Co-Cr alloys are well known for their biocompatibility. Biocompatibility also depends on the passive film and how this oxidized surface interacts with the physiological environment. [ 8 ] Good mechanical properties, similar to those of stainless steel, are a result of the multiphase structure and the precipitation of carbides, which increases the hardness of Co-Cr alloys tremendously. The hardness of Co-Cr alloys ranges from 550 to 800 MPa, and their tensile strength ranges from 145 to 270 MPa. [ 9 ] Moreover, tensile and fatigue strength increase radically when the alloys are heat-treated. [ 10 ] However, Co-Cr alloys tend to have low ductility , which can cause component fracture. This is a concern as the alloys are commonly used in hip replacements. [ 11 ] In order to overcome the low ductility, nickel , carbon , and/or nitrogen are added. These elements stabilize the γ phase, which has better mechanical properties compared to other phases of Co-Cr alloys. [ 12 ]
There are several Co-Cr alloys that are commonly produced and used in various fields. ASTM F75, ASTM F799, and ASTM F1537 are Co-Cr-Mo alloys with very similar compositions yet slightly different production processes; ASTM F90 is a Co-Cr-W-Ni alloy , and ASTM F562 is a Co-Ni-Cr-Mo-Ti alloy. [ 3 ]
Depending on the percent composition of cobalt or chromium and the temperature, Co-Cr alloys show different structures. The σ phase, in which the alloy contains approximately 60–75% chromium, tends to be brittle and subject to fracture . The FCC crystal structure is found in the γ phase, which shows improved strength and ductility compared to the σ phase. The FCC crystal structure is commonly found in cobalt-rich alloys, while chromium-rich alloys tend to have the BCC crystal structure. The γ phase Co-Cr alloy can be converted into the ε phase at high pressures, which shows an HCP crystal structure. [ 12 ]
Co-Cr alloys are most commonly used to make artificial joints, including knee and hip joints, due to their high wear resistance and biocompatibility. [ 4 ] Co-Cr alloys tend to be corrosion resistant, which reduces complications with the surrounding tissues when implanted, and chemically inert, which minimizes the possibility of irritation, allergic reaction , and immune response . [ 13 ] Co-Cr alloy has also been widely used in the manufacture of stents and other surgical implants, as Co-Cr alloy demonstrates excellent biocompatibility with blood and soft tissues as well. [ 14 ] The alloy composition used in orthopedic implants is described in industry standard ASTM -F75: mainly cobalt, with 27 to 30% chromium , 5 to 7% molybdenum , and upper limits on other important elements such as less than 1% each of manganese and silicon , less than 0.75% iron , less than 0.5% nickel , and very small amounts of carbon , nitrogen , tungsten , phosphorus , sulfur , boron , etc. [ 1 ]
Besides cobalt-chromium-molybdenum (CoCrMo), cobalt-nickel-chromium-molybdenum (CoNiCrMo) is also used for implants. [ citation needed ] The possible toxicity of released Ni ions from CoNiCr alloys and also their limited frictional properties are a matter of concern in using these alloys as articulating components. Thus, CoCrMo is usually the dominant alloy for total joint arthroplasty . [ citation needed ]
Co-Cr alloy dentures and cast partial dentures have been commonly manufactured since 1929 due to their lower cost and lower density compared to gold alloys; in addition, Co-Cr alloys tend to exhibit a higher modulus of elasticity and cyclic fatigue resistance, which are significant factors for dental prostheses. [ 15 ] The alloy is commonly used as a metal framework for dental partials. A well-known brand for this purpose is Vitallium .
Due to properties such as high resistance to corrosion and wear, Co-Cr alloys (e.g., Stellites ) are used in making wind turbines, engine components, and many other industrial/mechanical components where high wear resistance is needed. [ 3 ]
Co-Cr alloy is also very commonly used in the fashion industry to make jewellery, especially wedding bands.
Metals released from Co-Cr alloy tools and prosthetics may cause allergic reactions and skin eczema . [ 16 ] Prosthetics or any medical equipment with high nickel mass percentage Co-Cr alloy should be avoided due to low biocompatibility, as nickel is the most common metal sensitizer in the human body. [ 12 ]
|
https://en.wikipedia.org/wiki/Cobalt-chrome
|
Coblation tonsillectomy is a surgical procedure in which the patient's tonsils are removed by destroying the surrounding tissues that attach them to the pharynx . [ 1 ] [ 2 ] It was first implemented in 2001. The word coblation is short for ‘controlled ablation ’, which means a controlled procedure used to destroy soft tissue . [ 3 ]
This procedure uses low temperature radio frequency during the operation, which was found to cause less pain for the patient than previous technologies used for tonsillectomy . Data collected from coblation tonsillectomy operations showed that the healing of the tonsillar fossa is much faster when this low temperature technology is used instead of a heat based technology, such as electrocautery tonsillectomy. [ 4 ]
Since coblation has been introduced to the medical field, more than 10 million surgical operations have been performed, but as of 2019, research is still ongoing to determine the positive and negative effects of this procedure. [ 5 ]
The equipment used for coblation tonsillectomy consists of a radio frequency (RF) generator, foot pedal control, irrigation system, and a tonsil wand. The generator provides radio frequency, which is essential for the procedure, and connects the foot pedal system to the tonsil wand. The foot pedals are colour coded to prevent confusion: one is yellow and is used for controlling the coblation, while the other is blue and used for controlling the radio frequency cautery. The wand is connected to the RF generator so it can be controlled with the pedals. The wand consists of a base electrode and an active electrode, which have ceramic and flowing saline between them. The radio frequency current that is produced by the generator travels through the saline, breaking the molecular bonds and forming ions . This creates a plasma field around the electrodes, which is used for removing soft tissue. There should not be any smoke produced while the coblation wand is being used during the operation; if this occurs, it is a sign that ablated tissue has entered the coblation wand's electrode area. This means that the current is not able to break down the saline into ions properly, so smoke is produced. When this happens, the coblation wand needs to be cleaned out before using it again. [ citation needed ]
The plasma field has a radius of about 100–200 μm around the electrodes and is kept stable within the head of the coblation wand by the continuous supply of saline. Furthermore, the plasma field is controlled via the bipolar energy between the negative and positive ions produced by the plasma, so that a precise amount of plasma is applied and healthy tissues around the tonsils are not damaged. [ 5 ]
Plasma does not have a thermal effect on tissue: it affects tissue only on a chemical level. The plasma field produces positively charged hydrogen ions (H+) and negatively charged hydroxide ions (OH-), which enable the plasma to destroy tissue. Little or no damage is done to nearby tissues during the coblation procedure because charged particles move between the generated plasma field and the ablated tissue, hence the molecules break down without the temperature rising high. [ citation needed ]
The temperature for coblation tonsillectomy ranges from 60 °C to 70 °C, while other tonsillectomy procedures, such as electrosurgery , require much higher temperatures, ranging from 400 °C to 600 °C. Thus, coblation is considered a non-heat-focused medical procedure that is much better at causing minimal thermal damage to untargeted tissues near the targeted area. [ 6 ]
The need for removing an individual's tonsils using the coblation tonsillectomy surgical process can arise for four reasons. Firstly, the patient may have frequent, long-lasting tonsillitis. Secondly, the tonsils can become swollen and inflamed, which may cause breathing problems. Thirdly, blood loss can occur through the tonsils, which is a sign that they need to be removed. Lastly, in some cases the tonsils are affected by rare diseases and viruses , which can only be treated by removing the tonsils.
Tonsils are part of the first line of defense in the mouth, as they create white blood cells in order to fight the diseases, bacteria and viruses that enter through the mouth. Owing to this constant exposure, tonsils can become infected, a condition called tonsillitis . This occurs mainly in children, as this is the age at which the tonsils are the key providers of immune system functions. By the time individuals reach adulthood, they have been exposed to many diseases, bacteria and viruses, and have developed immunity against these infection-causing micro-organisms. Thus, adults normally no longer need the help of the tonsils.
When tonsillitis occurs regularly, meaning about seven times a year, removal of the tonsils is needed; otherwise it can have detrimental effects on the individual's health. Furthermore, tonsil surgery may be required when antibiotic treatment does not get rid of the bacteria causing the tonsillitis.
After several tonsil infections , tonsils have a tendency to become swollen and inflamed, though other factors, such as genetics , may make them enlarged as well. Some of the significant problems that can arise from enlarged tonsils are difficulties with breathing and swallowing, in which case removal of the tonsils is required.
After frequent tonsil infections or severe diseases, bleeding from the tonsil tissues can occur. Furthermore, cancer cells can develop in the tonsil tissues. These conditions are mostly uncommon but can still occur, and can only be treated by surgically removing the tonsils from both sides of the back of the throat. [ 7 ]
As with any other surgery, coblation tonsillectomy carries health risks, which may, but will not necessarily, occur as a result of the procedure. Firstly, reactions to the general anesthetic drug, which is used to make patients sleep during surgery, can cause both short-term and long-term health problems: minor problems include vomiting , nausea , muscle soreness and headache , while major problems can include death . Another risk factor is that swelling can occur throughout the whole mouth during the beginning of the surgery; swelling is most likely to occur on the tongue and in the tissues surrounding the tonsils at the back of the throat . Furthermore, in rare cases bleeding may occur in the tonsil area during the surgical procedure, which would require extra treatment on top of the coblation tonsillectomy surgery and a longer hospital stay for full recovery. Bleeding in areas around the tonsils can also occur during the healing stage in the first few weeks after surgery, because the wound is still fresh and can reopen on some occasions. In addition, infections may develop in the mouth after surgery, the main cause being surgical equipment that was not properly disinfected before the procedure. [ 7 ] [ 8 ]
Tonsillectomy is a surgical procedure that consists of taking out the patient's tonsils, which produce chemical substances in the back of the mouth that assist in maintaining good health by fighting off infections. Tonsils can become enlarged when they are infected by a virus or bacteria repeatedly; hence, to combat frequently recurring infections, it becomes necessary to surgically remove the tonsils. A high percentage of the population have their tonsils removed in childhood; however, some people who do not go through the operation in their early years have them removed later in life because their tonsil glands swell to a point where they struggle to breathe. [ 8 ] [ 9 ]
Before the surgery begins, the surgeon will take multiple blood tests , physically examine the patient, and check the patient's past medical records to make sure it is safe to conduct the surgical procedure. On top of that, the surgeon will ask about the types of medications the patient has taken in the 10 days leading up to the surgery, because certain medications, such as aspirin and naproxen , can increase the chance of bleeding during the procedure. Furthermore, the patient will be required to stop consuming any food and drink for several hours before the procedure begins. [ 8 ]
Coblation tonsillectomy is an outpatient surgical process, meaning patients are able to leave the hospital and go home after they have gone through the surgery and have woken up, so it is unnecessary for them to stay overnight.
At the start of the surgical procedure, the patient is given a specific amount of general anesthesia to induce a deep sleep, during which they are unable to feel any pain; by the time the anesthesia wears off, the operation is over, as it tends to take less than one hour to complete. Additionally, a breathing tube is inserted into the nose instead of the usual mouth breathing tube, in order to leave enough space in the mouth for the surgery to be conducted in a safe and precise manner.
The operation commences immediately after the patient has fallen asleep from the anesthesia. A mouth prop is wedged between the upper and lower teeth on one side of the mouth to keep the patient's mouth wide open and steady, allowing the surgeon to conduct the surgery with ease.
Both tonsils are removed with coblation technology by applying precise amounts of plasma to detach the tonsils from the surrounding tissues without causing thermal damage.
After the tonsils are fully removed, any bleeding that occurs at the back of the throat is stopped and the patient is woken up using medical drugs. Then the breathing tubes are completely removed and the patient is moved to the post-anesthesia care unit to recover and wake up. [ 9 ]
After the coblation tonsillectomy surgery, the patient will be taken to the recovery room , where they will wake up soon after the surgery. A nurse will check the patient's blood pressure , heart rate , and whether any bleeding is present. If bleeding is present in the tonsil area due to a slow recovery, the patient will be required to stay at the hospital overnight. Otherwise, if everything is stable, the patient can go home, but a relative or someone close to the patient will be required to take them home and care for them for as long as necessary.
One to two weeks is normally required for all wounds to fully heal after the coblation tonsillectomy surgery. While still recovering at home, the patient should try not to come into contact with sick people, because the recovery stage is critical and an infection can easily develop in the tonsil area. The patient will likely experience pain, most commonly in the throat, ears, jaw, and neck, so taking pain medication is recommended to reduce the pain in these areas. Furthermore, to assist the healing of the wounds and avoid setbacks during recovery, the patient should consume plenty of fluids to prevent dehydration . The patient should only consume food that can be swallowed with ease; soft and smooth food is recommended in order to prevent reopening the wounds from the surgery. The most vital part of recovery is taking plenty of bed rest and avoiding physical activity, such as sports, for about two weeks.
In some cases, it is vital for the patient to see a doctor immediately or go to a hospital emergency department if any health problems occur during recovery, for example bleeding, fever , dehydration or breathing problems. [ 7 ] [ 8 ] [ 10 ]
|
https://en.wikipedia.org/wiki/Coblation_tonsillectomy
|
The cochlea is the part of the inner ear involved in hearing . It is a spiral-shaped cavity in the bony labyrinth , in humans making 2.75 turns around its axis, the modiolus . [ 2 ] [ 3 ] A core component of the cochlea is the organ of Corti , the sensory organ of hearing, which is distributed along the partition separating the fluid chambers in the coiled tapered tube of the cochlea.
The name 'cochlea' is derived from the Latin word for snail shell , which in turn is from the Ancient Greek κοχλίας kokhlias ("snail, screw"), and from κόχλος kokhlos ("spiral shell") [ 4 ] in reference to its coiled shape; the cochlea is coiled in mammals with the exception of monotremes .
The cochlea ( pl. : cochleae) is a spiraled, hollow, conical chamber of bone, in which waves propagate from the base (near the middle ear and the oval window ) to the apex (the top or center of the spiral). The spiral canal of the cochlea is a section of the bony labyrinth of the inner ear that is approximately 30 mm long and makes 2¾ turns about the modiolus. The cochlear structures are described in the sections below.
The cochlea is a portion of the inner ear that looks like a snail shell ( cochlea is Greek for snail). [ 5 ] The cochlea receives sound in the form of vibrations, which cause the stereocilia to move. The stereocilia then convert these vibrations into nerve impulses which are carried up to the brain to be interpreted. Two of the three fluid-filled sections are canals, and the third is the cochlear duct, housing the organ of Corti , which detects pressure impulses whose signals travel along the auditory nerve to the brain. The two canals are called the vestibular canal and the tympanic canal.
The walls of the hollow cochlea are made of bone, with a thin, delicate lining of epithelial tissue . This coiled tube is divided through most of its length by an inner membranous partition. Two fluid-filled outer spaces (ducts or scalae ) are formed by this dividing membrane. At the top of the snailshell-like coiling tubes, there is a reversal of the direction of the fluid, thus changing the vestibular duct to the tympanic duct. This area is called the helicotrema. This continuation at the helicotrema allows fluid being pushed into the vestibular duct by the oval window to move back out via movement in the tympanic duct and deflection of the round window; since the fluid is nearly incompressible and the bony walls are rigid, it is essential for the conserved fluid volume to exit somewhere.
The lengthwise partition that divides most of the cochlea is itself a fluid-filled tube, the third 'duct'. This central column is called the cochlear duct. Its fluid, endolymph, also contains electrolytes and proteins, but is chemically quite different from perilymph. Whereas the perilymph is rich in sodium ions, the endolymph is rich in potassium ions, which produces an ionic , electrical potential.
The hair cells are arranged in four rows in the organ of Corti along the entire length of the cochlear coil. Three rows consist of outer hair cells (OHCs) and one row consists of inner hair cells (IHCs). The inner hair cells provide the main neural output of the cochlea. The outer hair cells, instead, mainly 'receive' neural input from the brain, which influences their motility as part of the cochlea's mechanical "pre-amplifier". The input to the OHC is from the olivary body via the medial olivocochlear bundle.
The cochlear duct is almost as complex on its own as the ear itself. The cochlear duct is bounded on three sides by the basilar membrane , the stria vascularis , and Reissner's membrane. The stria vascularis is a rich bed of capillaries and secretory cells; Reissner's membrane is a thin membrane that separates endolymph from perilymph; and the basilar membrane is a mechanically somewhat stiff membrane, supporting the receptor organ for hearing, the organ of Corti, and determines the mechanical wave propagation properties of the cochlear system.
Between males and females, there are differences in the shape of the human cochlea. The variation is in the twist at the end of the spiral. Because of this difference, and because the cochlea is one of the more durable bones in the skull, it is used in ascertaining the sexes of human remains found at archaeological sites. [ 6 ]
The cochlea is filled with a watery liquid, the endolymph , which moves in response to the vibrations coming from the middle ear via the oval window. As the fluid moves, the cochlear partition (basilar membrane and organ of Corti) moves; thousands of hair cells sense the motion via their stereocilia , and convert that motion to electrical signals that are communicated via neurotransmitters to many thousands of nerve cells. These primary auditory neurons transform the signals into electrochemical impulses known as action potentials , which travel along the auditory nerve to structures in the brainstem for further processing.
The stapes (stirrup) ossicle bone of the middle ear transmits vibrations to the fenestra ovalis (oval window) on the outside of the cochlea, which vibrates the perilymph in the vestibular duct (upper chamber of the cochlea). The ossicles are essential for efficient coupling of sound waves into the cochlea, since the cochlea environment is a fluid–membrane system, and it takes more pressure to move sound through fluid–membrane waves than it does through air. A pressure increase is achieved because the area of the tympanic membrane (drum) is about 20 times that of the oval window ( stapes footplate); since pressure = force/area, concentrating the same force on a smaller area results in a pressure gain of about 20 times relative to the original sound wave pressure in air. This gain is a form of impedance matching – to match the soundwave travelling through air to that travelling in the fluid–membrane system.
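As a rough worked example, using typical textbook values of about 55 mm² for the effective area of the tympanic membrane and about 3.2 mm² for the stapes footplate (exact figures vary between sources), the area ratio alone gives:

```latex
% Approximate middle-ear pressure gain from the area ratio alone
\frac{P_{\text{oval window}}}{P_{\text{eardrum}}}
  \approx \frac{A_{\text{eardrum}}}{A_{\text{oval window}}}
  \approx \frac{55\,\text{mm}^2}{3.2\,\text{mm}^2} \approx 17
```

The lever action of the ossicular chain contributes a further factor of roughly 1.3, which brings the overall pressure gain to approximately the factor of 20 quoted above.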
At the base of the cochlea, each 'duct' ends in a membranous portal that faces the middle ear cavity: The vestibular duct ends at the oval window , where the footplate of the stapes sits. The footplate vibrates when the pressure is transmitted via the ossicular chain. The wave in the perilymph moves away from the footplate and towards the helicotrema . Since those fluid waves move the cochlear partition that separates the ducts up and down, the waves have a corresponding symmetric part in perilymph of the tympanic duct, which ends at the round window, bulging out when the oval window bulges in.
The perilymph in the vestibular duct and the endolymph in the cochlear duct act mechanically as a single duct, being kept apart only by the very thin Reissner's membrane .
The vibrations of the endolymph in the cochlear duct displace the basilar membrane in a pattern that peaks a distance from the oval window depending upon the soundwave frequency. The organ of Corti vibrates due to outer hair cells further amplifying these vibrations. Inner hair cells are then displaced by the vibrations in the fluid, depolarise via an influx of K+ through their tip-link-connected channels, and send their signals via neurotransmitter to the primary auditory neurons of the spiral ganglion . [ 7 ]
The hair cells in the organ of Corti are tuned to certain sound frequencies by way of their location in the cochlea, due to the degree of stiffness in the basilar membrane. [ 8 ] This stiffness is due to, among other things, the thickness and width of the basilar membrane, [ 9 ] which is stiffest nearest its beginning at the oval window, where the stapes introduces the vibrations coming from the eardrum. Because the stiffness is high there, only high-frequency vibrations can move the basilar membrane, and thus the hair cells, near the base. The farther a wave travels towards the cochlea's apex (the helicotrema ), the less stiff the basilar membrane is; waves slow down as the stiffness drops, so each region responds best to progressively lower frequencies, and low frequencies travel far down the tube before peaking. In addition, in mammals, the cochlea is coiled, which has been shown to enhance low-frequency vibrations as they travel through the fluid-filled coil. [ 10 ] This spatial arrangement of sound reception is referred to as tonotopy .
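The place-frequency relationship underlying tonotopy is often summarized with Greenwood's empirical function; the sketch below uses the constants commonly quoted for the human cochlea, which should be treated as an approximation rather than exact anatomy.

```python
# Greenwood's empirical place-frequency map for the human cochlea:
#   f(x) = A * (10**(a * x) - k)
# where x is the fractional distance along the basilar membrane from the
# apex (x = 0) to the base at the oval window (x = 1). The constants are
# the values commonly quoted for humans.

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Approximate characteristic frequency (Hz) at fractional position x."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} (apex -> base): ~{greenwood_frequency(x):7.0f} Hz")
# Low frequencies peak near the apex; high frequencies near the stiff base.
```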
For very low frequencies (below 20 Hz), the waves propagate along the complete route of the cochlea – differentially up the vestibular duct and the tympanic duct all the way to the helicotrema . Frequencies this low still activate the organ of Corti to some extent, but are too low to elicit the perception of a pitch . Higher frequencies do not propagate to the helicotrema , due to the stiffness-mediated tonotopy.
A very strong movement of the basilar membrane due to very loud noise may cause hair cells to die. This is a common cause of partial hearing loss and is the reason why users of firearms or heavy machinery often wear earmuffs or earplugs .
To transmit the sensation of sound to the brain, where it can be processed into the perception of hearing , hair cells of the cochlea must convert their mechanical stimulation into the electrical signaling patterns of the nervous system. Hair cells are modified neurons , able to generate action potentials which can be transmitted to other nerve cells. These action potential signals travel through the vestibulocochlear nerve to eventually reach the anterior medulla , where they synapse and are initially processed in the cochlear nuclei . [ 11 ]
Some processing occurs in the cochlear nuclei themselves, but the signals must also travel to the superior olivary complex of the pons as well as the inferior colliculi for further processing. [ 11 ]
Not only does the cochlea "receive" sound; a healthy cochlea also generates and amplifies sound when necessary. Where the organism needs a mechanism to hear very faint sounds, the cochlea amplifies by the reverse transduction of the outer hair cells (OHCs), converting electrical signals back to mechanical motion in a positive-feedback configuration. The OHCs have a protein motor called prestin on their outer membranes; it generates additional movement that couples back to the fluid–membrane wave. This "active amplifier" is essential in the ear's ability to amplify weak sounds. [ 12 ] [ 13 ]
The active amplifier also leads to the phenomenon of soundwave vibrations being emitted from the cochlea back into the ear canal through the middle ear (otoacoustic emissions).
Otoacoustic emissions are due to a wave exiting the cochlea via the oval window, and propagating back through the middle ear to the eardrum, and out the ear canal, where it can be picked up by a microphone. Otoacoustic emissions are important in some types of tests for hearing impairment , since they are present when the cochlea is working well, and less so when it is suffering from loss of OHC activity. Otoacoustic emissions also show a sex difference: females tend to display higher magnitudes of otoacoustic emissions, and males tend to experience a reduction in otoacoustic emission magnitudes as they age, whereas women do not experience a change in magnitude with age. [ 14 ]
Gap-junction proteins, called connexins , expressed in the cochlea play an important role in auditory functioning. [ 15 ] Mutations in gap-junction genes have been found to cause syndromic and nonsyndromic deafness. [ 16 ] Certain connexins, including connexin 30 and connexin 26 , are prevalent in the two distinct gap-junction systems found in the cochlea. The epithelial-cell gap-junction network couples non-sensory epithelial cells, while the connective-tissue gap-junction network couples connective-tissue cells. [ 17 ] Gap-junction channels recycle potassium ions back to the endolymph after mechanotransduction in hair cells . [ 18 ] Importantly, gap junction channels are found between cochlear supporting cells, but not auditory hair cells . [ 19 ]
Damage to the cochlea can result from various incidents or conditions, such as a severe head injury, a cholesteatoma , an infection, or exposure to loud noise, which can kill hair cells in the cochlea.
Hearing loss associated with the cochlea is often a result of damage to or death of the outer and inner hair cells. Outer hair cells are more susceptible to damage, which can result in less sensitivity to weak sounds. Frequency sensitivity is also affected by cochlear damage, which can impair the patient's ability to distinguish between spectral differences of vowels. The effects of cochlear damage on different aspects of hearing loss, such as temporal integration, pitch perception, and frequency determination, are still being studied, given that multiple factors must be taken into account in cochlear research. [ 20 ]
In 2009, engineers at the Massachusetts Institute of Technology created an electronic chip that can quickly analyze a very large range of radio frequencies while using only a fraction of the power needed for existing technologies; its design specifically mimics a cochlea. [ 21 ] [ 22 ]
The coiled form of the cochlea is unique to mammals . In birds and in other non-mammalian vertebrates , the compartment containing the sensory cells for hearing is occasionally also called "cochlea," despite not being coiled. Instead, it forms a blind-ended tube, also called the cochlear duct. This difference apparently evolved in parallel with the differences in frequency range of hearing between mammals and non-mammalian vertebrates. The superior frequency range in mammals is partly due to their unique mechanism of pre-amplification of sound by active cell-body vibrations of outer hair cells . Frequency resolution is, however, not better in mammals than in most lizards and birds, but the upper frequency limit is – sometimes much – higher. Most bird species do not hear above 4–5 kHz, the currently known maximum being ~11 kHz in the barn owl. Some marine mammals hear up to 200 kHz. A long coiled compartment, rather than a short and straight one, provides more space for additional octaves of hearing range, and has made possible some of the highly derived behaviors involving mammalian hearing. [ 23 ]
As the study of the cochlea is fundamentally focused at the level of the hair cells, it is important to note the anatomical and physiological differences between the hair cells of various species. In birds, for instance, instead of outer and inner hair cells there are tall and short hair cells. Several points of comparison stand out in this comparative data. For one, the tall hair cell is very similar in function to the inner hair cell, and the short hair cell, lacking afferent auditory-nerve fiber innervation, resembles the outer hair cell. One unavoidable difference, however, is that while all hair cells are attached to a tectorial membrane in birds, only the outer hair cells are attached to the tectorial membrane in mammals.
|
https://en.wikipedia.org/wiki/Cochlea
|
Cochlear hydrops (or cochlear Meniere's or cochlear endolymphatic hydrops ) is a condition of the inner ear involving a pathological increase of fluid affecting the cochlea . This results in swelling that can lead to hearing loss or changes in hearing perception. It is a form of endolymphatic hydrops and related to Ménière's disease . Cochlear hydrops refers to a case of inner-ear hydrops that only involves auditory symptoms and does not cause vestibular issues. [ 1 ]
Cochlear hydrops refers to an increase in endolymphatic fluid in the inner ear. This build-up is either due to an overproduction or insufficient drainage of endolymph in the constant regulation of fluid in the inner ear. Usually, only one ear is affected. The root cause of the process is unclear and may vary from patient to patient, but can have auto-immune, viral, and/or allergic triggers, among others. [ 2 ]
The build-up of endolymph creates pressure in the scala media . This causes its diameter to increase, and the vestibular membrane to curve outwards in the direction of the vestibule . The changes to the membrane can result in changes to either the hearing perception or hearing threshold of a patient. [ 3 ]
Episodes are usually cyclical and symptoms fluctuate through time. Patients may be symptom-free between episodes, which themselves may progressively worsen, improve, or remain constant in severity or duration. For some, permanent damage occurs, and they may be left with long-term hearing loss, hearing distortion, tinnitus , and/or a feeling of fullness in the affected ear(s). [ 4 ]
A study looking at spiral ganglion cell counts compared to hair cell counts in the inner ear of patients who had Meniere's disease found that they maintained more hair cells than spiral ganglion cells. [ 5 ] Thus, it could be possible that hydrops affects auditory nerves more than hair cells. [ 6 ] In contrast, a 2021 article by Richard Gacek posits that the hearing loss is actually caused by toxic nucleic acids that are released to the outer hair cells: "Since the outer hair cells (OHC) are freely surrounded by perilymph, their walls and nerve terminals are also bathed in this fluid. The few type-II spiral ganglion cells in contact with the OHC are unlikely to play a significant role in hearing loss because of their low numbers and the lack of a known connection to the central auditory pathway." [ 7 ]
Cochlear hydrops preferentially affects the apex of the cochlea where low-frequency sounds are interpreted. Due to the fluid imbalance in this area, parts of the cochlea are stretched or under more tension than usual, which can lead to distortions of sound, changes in pitch perception, or hearing loss, all usually in the low frequencies.
Common symptoms include fluctuating low-frequency hearing loss, tinnitus , distortion of sounds, and a feeling of pressure or fullness in the affected ear.
As with Meniere's disease, atypical, early, or mild cases may only present some symptoms.
Diagnosis is based on symptoms and a hearing test that documents a loss in the low and mid frequencies, usually only in one ear. For patients with mild or atypical hydrops, the hearing thresholds may be normal, but they may experience a subjective, unilateral distortion of sounds in lower frequencies, such as diplacusis or voices sounding "robotic". Patients may also mention a feeling of pressure or fullness in the ear. [ 8 ]
It is also possible to reveal the presence of hydrops with an MRI. [ 9 ]
If vertigo is experienced, the diagnosis progresses to Meniere's disease. This occurs if the fluid increase leads to a leak or rupture of the membranes in the inner ear, causing a mixture of perilymph and endolymph. [ 10 ]
Treatment for cochlear hydrops is the same as for Meniere's disease. Currently, no cure exists for either. [ 11 ]
If a patient has undergone sudden sensorineural hearing loss , a course of steroids is often prescribed in an attempt to recover the hearing. Steroids may be injected directly through the eardrum. [ 12 ]
Like Meniere's Disease, a low salt diet is recommended as a preventative measure. A diuretic may be prescribed to help lower salt content. [ 13 ]
Betahistine is the most widely prescribed medication for the treatment of Meniere's disease. The drug is thought to increase blood flow to the inner ear and to reduce the frequency and intensity of episodes. While betahistine is considered safe, there is insufficient evidence that it is an effective treatment. [ 14 ] It is not FDA approved in the United States, yet it has still been clinically observed to benefit patients, and is considered safer and more effective than diuretics. [ 15 ] Betahistine at high doses (such as 144 mg/day) can yield similar vertigo control as intratympanic dexamethasone. [ 16 ] [ 17 ]
Antivirals have been reported to be effective for those with a suspected viral cause of their cochlear hydrops. [ 18 ]
For some, surgery may be effective, such as an endolymphatic sac decompression. Surgery is often reserved for cases where other measures have proven ineffective and/or when vestibular issues are the main complaint, as it runs the risk of causing hearing or other nerve damage. [ 19 ]
The symptoms of cochlear hydrops fluctuate, and the condition may stabilize or go away on its own after several years. However, because the organ of Corti undergoes stress during the hydrops episodes, long-term hearing loss, tinnitus, or hyperacusis is possible.
Some consider cochlear hydrops to be an early form of Meniere's disease. However, while all people with Meniere's disease have some form of hydrops, the majority of cochlear hydrops patients do not go on to develop Meniere's disease. [ 3 ] When progression does occur, it takes an average of one year from the onset of symptoms to develop full Meniere's disease.
The data on how often progression to Meniere's disease occurs is mixed, but the majority of recent studies suggest a low likelihood.
A 1984 study from Japan looked at patients with Meniere's disease and classified them into subcategories based on their first symptoms. The study found that the majority of patients with Meniere's disease (104 out of 163, or 63.80%) presented with vertigo among their first symptoms, and only 59 out of 163 (36.19%) presented with cochlear symptoms first, such as "tinnitus or deafness." However, the study found that 59 out of 74 (79.72%) patients who started out with a cochlear hydrops diagnosis progressed to Meniere's disease, and concluded that "cochlear Meniere's disease frequently develops into Meniere's disease." [ 20 ]
Conversely, a 2006 study from doctors at the House Ear Institute found that “conversion from cochlear hydrops to Meniere's disease occurred in 33%” of diagnosed patients in a study including 46 subjects. [ 21 ] A 2009 study from Japan found that only about 10% of their diagnosed patients with sudden low-frequency hearing loss (SLFHL) went on to develop full Meniere's disease, and about 18% with recurring SLFHL developed Meniere's disease. [ 22 ] [ 23 ] In this study, about 70% of patients who did not develop Meniere's disease maintained their hearing in the end, while 30% went on to have lasting hearing difficulty, as reported in a ten-year follow-up. [ 24 ] [ 25 ]
A 2018 study from Korea found the chance of progression to Meniere's disease of all participants with SLFHL to be 9.38% with an average progression time of 1.7±1.4 years, but when limited to patients with recurring symptoms "it was confirmed that about half (46.88%) of them progressed to Meniere's disease." However, the study was said to have limitations as "hearing fluctuations and the possibility of transitioning to Meniere's disease in the non-relapse group could not be completely ruled out." [ 26 ]
|
https://en.wikipedia.org/wiki/Cochlear_hydrops
|
The cochlear nerve (also auditory nerve or acoustic nerve ) is one of two parts of the vestibulocochlear nerve , a cranial nerve present in amniotes , the other part being the vestibular nerve. The cochlear nerve carries auditory sensory information from the cochlea of the inner ear directly to the brain . The other portion of the vestibulocochlear nerve is the vestibular nerve , which carries spatial orientation information to the brain from the semicircular canals , also known as semicircular ducts. [ 1 ]
In terms of anatomy, an auditory nerve fiber is either bipolar or unipolar , with its distal projection being called the peripheral process , and its proximal projection being called the axon ; these two projections are also known as the "peripheral axon" and the "central axon", respectively. The peripheral process is sometimes referred to as a dendrite , although that term is somewhat inaccurate. Unlike the typical dendrite, the peripheral process generates and conducts action potentials , which then "jump" across the cell body (or soma ) and continue to propagate along the central axon. In this respect, auditory nerve fibers are somewhat unusual in that action potentials pass through the soma. Both the peripheral process and the axon are myelinated .
In humans, there are on average 30,000 nerve fibers within the cochlear nerve. [ 2 ] The number of fibers varies significantly across species; the domestic cat, for example, has an average of 50,000 fibers. The peripheral axons of auditory nerve fibers form synaptic connections with the hair cells of the cochlea via ribbon synapses using the neurotransmitter glutamate . The central axons form synaptic connections with cells in the cochlear nucleus of the brainstem.
The cell bodies of the cochlear nerve lie within the cochlea and collectively form the spiral ganglion , named for the spiral shape it shares with the cochlea. The central axons exit the cochlea at its base and form a nerve trunk , which, in humans, is approximately one inch long. This travels in parallel with the vestibular nerves through the internal auditory canal , through which it connects to the brainstem. There, its fibers synapse with the cell bodies of the cochlear nucleus .
In mammals, cochlear nerve fibers are classified as either type I or type II. Type I fibers, which make up the large majority, are myelinated and contact the inner hair cells, whereas type II fibers are unmyelinated and contact the outer hair cells.
In mammals, the axons from each cochlear nerve terminate in the cochlear nuclear complex that is ipsilaterally located in the medulla of the brainstem. The cochlear nucleus is the first 'relay station' of the central auditory system and receives mainly ipsilateral afferent input.
The three major components of the cochlear nuclear complex are the anteroventral cochlear nucleus (AVCN), the posteroventral cochlear nucleus (PVCN), and the dorsal cochlear nucleus (DCN).
Each of the three cochlear nuclei is organized to sort sound according to a specific spatial arrangement . As such, sound frequencies detected by the cochlea are transmitted electrically to specific positions in the cochlear nuclei. The axons from the low-frequency region of the cochlea project to the ventral portion of the dorsal cochlear nucleus and the ventrolateral portions of the anteroventral cochlear nucleus. The axons from the high-frequency region project to the dorsal portion of the anteroventral cochlear nucleus and the uppermost dorsal portions of the dorsal cochlear nucleus. The axons from the intermediate frequency region project to intermediate targets. Through this process, a spatial representation of sound is created by electrical nerve impulses through the cochlear complex.
|
https://en.wikipedia.org/wiki/Cochlear_nerve
|
The cochlear nucleus ( CN ) or cochlear nuclear complex comprises two cranial nerve nuclei in the human brainstem , the ventral cochlear nucleus (VCN) and the dorsal cochlear nucleus (DCN).
The ventral cochlear nucleus is unlayered, whereas the dorsal cochlear nucleus is layered. Auditory nerve fibers, which travel through the auditory nerve (also known as the cochlear nerve or eighth cranial nerve), carry information from the inner ear, the cochlea , on the same side of the head, to the nerve root in the ventral cochlear nucleus.
At the nerve root the fibers branch to innervate the ventral cochlear nucleus and the deep layer of the dorsal cochlear nucleus. All acoustic information thus enters the brain through the cochlear nuclei, where the processing of acoustic information begins. The outputs from the cochlear nuclei are received in higher regions of the auditory brainstem .
The cochlear nuclei (CN) are located at the dorso-lateral side of the brainstem , spanning the junction of the pons and medulla .
The major input to the cochlear nucleus is from the auditory nerve, a part of cranial nerve VIII (the vestibulocochlear nerve ). The auditory nerve fibers form a highly organized system of connections according to their peripheral innervation of the cochlea. Axons from the spiral ganglion cells of the lower frequencies innervate the ventrolateral portions of the ventral cochlear nucleus and lateral-ventral portions of the dorsal cochlear nucleus. The axons from the higher-frequency organ of Corti hair cells project to the dorsal portion of the ventral cochlear nucleus and the dorsal-medial portions of the dorsal cochlear nucleus. The mid-frequency projections end up in between the two extremes; in this way the tonotopic organization that is established in the cochlea is preserved in the cochlear nuclei. This tonotopic organization is preserved because only a few inner hair cells synapse on the dendrites of a nerve cell in the spiral ganglion, and the axon from that nerve cell synapses on only a very few dendrites in the cochlear nucleus. In contrast with the VCN, which receives all of its acoustic input from the auditory nerve, the DCN receives input not only from the auditory nerve but also from neurons in the VCN (T stellate cells). The DCN is therefore, in a sense, a second-order sensory nucleus.
The cochlear nuclei have long been thought to receive input only from the ipsilateral ear. There is evidence, however, for stimulation from the contralateral ear via the contralateral CN, [ 2 ] and also the somatosensory parts of the brain. [ 3 ]
There are three major fiber bundles, axons of cochlear nuclear neurons, that carry information from the cochlear nuclei to targets that are mainly on the opposite side of the brain. Through the medulla , one projection travels to the contralateral superior olivary complex (SOC) via the trapezoid body , while the other half projects to the ipsilateral SOC. This pathway is called the ventral acoustic stria (VAS or, more commonly, the trapezoid body). Another pathway, called the dorsal acoustic stria (DAS, also known as the stria of von Monakow), rises above the medulla into the pons, where it reaches the nuclei of the lateral lemniscus along with the intermediate acoustic stria (IAS, also known as the stria of Held). The IAS decussates across the medulla before joining the ascending fibers in the contralateral lateral lemniscus. The lateral lemniscus contains cells of the nuclei of the lateral lemniscus, and in turn projects to the inferior colliculus . The inferior colliculus receives direct, monosynaptic projections from the superior olivary complex, the contralateral dorsal acoustic stria, some classes of stellate neurons of the VCN, as well as from the different nuclei of the lateral lemniscus.
Most of these inputs terminate in the inferior colliculus, although there are a few small projections that bypass the inferior colliculus and project to the medial geniculate, or other forebrain structures.
Three types of principal cells convey information out of the ventral cochlear nucleus: bushy cells, stellate cells, and octopus cells.
Two types of principal cells convey information out of the dorsal cochlear nucleus (DCN) to the contralateral inferior colliculus. The principal cells receive two systems of inputs. Acoustic input comes to the deep layer through several paths. Excitatory acoustic input comes from auditory nerve fibers and also from stellate cells of the VCN. Acoustic input is also conveyed through inhibitory interneurons (tuberculoventral cells of the DCN and "wide band inhibitors" in the VCN). Through the outermost molecular layer, the DCN receives other types of sensory information, most importantly information about the location of the head and ears, through parallel fibers. This information is distributed through a cerebellum-like circuit that also includes inhibitory interneurons.
The cochlear nuclear complex is the first integrative, or processing, stage in the auditory system . [ 4 ] Information is brought to the nuclei from the ipsilateral cochlea via the cochlear nerve . [ 5 ] Several tasks are performed in the cochlear nuclei. By distributing acoustic input to multiple types of principal cells, the auditory pathway is subdivided into parallel ascending pathways, which can simultaneously extract different types of information. The cells of the ventral cochlear nucleus extract information that is carried by the auditory nerve in the timing of firing and in the pattern of activation of the population of auditory nerve fibers. The cells of the dorsal cochlear nucleus perform a non-linear spectral analysis, place that analysis into the context of the location of the head, ears, and shoulders, and separate expected, self-generated spectral cues from more interesting, unexpected spectral cues, using input from the auditory cortex , pontine nuclei , trigeminal ganglion and nucleus, dorsal column nuclei and the second dorsal root ganglion . It is likely that these neurons help mammals use spectral cues for orienting toward those sounds. The information is used by higher brainstem regions to achieve further computational objectives (such as sound source location or improvement in signal-to-noise ratio ). The inputs from these other areas of the brain probably play a role in sound localization .
In order to understand in more detail the specific functions of the cochlear nuclei it is first necessary to understand the way sound information is represented by the fibers of the auditory nerve . Briefly, there are around 30,000 auditory nerve fibres in each of the two auditory nerves. Each fiber is an axon of a spiral ganglion cell that represents a particular frequency of sound, and a particular range of loudness. Information in each nerve fibre is represented by the rate of action potentials as well as the particular timing of individual action potentials. The particular physiology and morphology of each cochlear nucleus cell type enhances different aspects of sound information.
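As a toy illustration of rate coding, the sketch below draws spike counts for a single hypothetical fiber whose mean firing rate grows with sound level between an assumed threshold and saturation point; the numbers and the Poisson-style sampling are simplifications for illustration, not a physiological model.

```python
# Toy rate coding on one hypothetical auditory nerve fiber: louder sounds
# (up to saturation) drive higher firing rates. All parameters are assumed.
import random

def spike_count(sound_level_db, duration_s=1.0, threshold_db=20.0,
                saturation_db=80.0, max_rate_hz=200.0):
    """Sample a spike count whose mean rate grows with sound level."""
    # Clamp the level into the fiber's assumed dynamic range, then scale.
    level = min(max(sound_level_db, threshold_db), saturation_db)
    rate = max_rate_hz * (level - threshold_db) / (saturation_db - threshold_db)
    # Poisson-like sampling via independent 1 ms bins (adequate for a sketch).
    n_bins = int(duration_s * 1000)
    p = rate / 1000.0
    return sum(random.random() < p for _ in range(n_bins))

random.seed(0)
for db in (10, 30, 60, 90):
    print(f"{db} dB SPL -> ~{spike_count(db)} spikes per second")
```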
This article incorporates text in the public domain from page 788 of the 20th edition of Gray's Anatomy (1918).
Young ED, Spirou GA, Rice JJ, Voigt HF (June 1992). "Neural organization and responses to complex stimuli in the dorsal cochlear nucleus". Philos. Trans. R. Soc. Lond. B Biol. Sci. 336 (1278): 407–13. doi : 10.1098/rstb.1992.0076 . PMID 1354382 .
|
https://en.wikipedia.org/wiki/Cochlear_nucleus
|
Cock's peculiar tumour is a sebaceous cyst linked growth that can resemble a squamous cell carcinoma. [ 1 ] It is named after the 19th-century English surgeon Edward Cock . [ 2 ] The proliferating cyst is usually solitary, but it often arises from simple trichilemmal cysts in the hair follicle epithelium, and these are multiple in 70% of cases. They are most commonly found on the scalp, where the proliferating trichilemmal cyst will grow to a large size and ulcerate. Chronic inflammation can cause the cyst to take the form of a granuloma . This granuloma mimics a squamous-cell carcinoma (both clinically and histologically), and these ulcerating solitary cysts are called Cock's peculiar tumour. [ 3 ]
The most common sites are hair-bearing areas, namely the scalp and the scrotum .
This oncology article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Cock's_peculiar_tumour
|
Cockayne syndrome ( CS ), also called Neill-Dingwall syndrome , is a rare and fatal autosomal recessive neurodegenerative disorder characterized by growth failure, impaired development of the nervous system , abnormal sensitivity to sunlight ( photosensitivity ), eye disorders and premature aging . [ 1 ] [ 2 ] [ 3 ] Failure to thrive and neurological disorders are criteria for diagnosis, while photosensitivity, hearing loss, eye abnormalities, and cavities are other very common features. [ 3 ] Problems with any or all of the internal organs are possible. It is associated with a group of disorders called leukodystrophies , which are conditions characterized by degradation of neurological white matter . There are two primary types of Cockayne syndrome: Cockayne syndrome type A (CSA) , arising from mutations in the ERCC8 gene, and Cockayne syndrome type B (CSB) , resulting from mutations in the ERCC6 gene. [ 4 ]
The underlying disorder is a defect in a DNA repair mechanism. [ 5 ] Unlike other defects of DNA repair, patients with CS are not predisposed to cancer or infection. [ 6 ] Cockayne syndrome is a rare but destructive disease usually resulting in death within the first or second decade of life. The mutation of specific genes in Cockayne syndrome is known, but the widespread effects and its relationship with DNA repair is yet to be well understood. [ 6 ]
It is named after the English physician Edward Alfred Cockayne (1880–1956), who first described it in 1936 and re-described it in 1946. [ 7 ] Neill-Dingwall syndrome was named after Mary M. Dingwall and Catherine A. Neill. [ 7 ] These two scientists described the case of two brothers with Cockayne syndrome and asserted it was the same disease described by Cockayne. In their article, the two contributed to the signs of the disease through their discovery of calcifications in the brain. They also compared Cockayne syndrome to what is now known as Hutchinson–Gilford progeria syndrome (HGPS), then called progeria, due to the advanced aging that characterizes both disorders. [ 7 ]
Every minute, the body pumps 10 to 20 liters of oxygen through the blood , carrying it to billions of cells. In its normal molecular form, oxygen is harmless, but cellular metabolism involving oxygen can generate several highly reactive forms of oxygen called free radicals , particularly when hyperoxia (excess oxygen) occurs in the body. These free radicals can cause oxidative damage to cellular components, including the DNA . In an average human cell , several thousand DNA lesions occur every day, many of them resulting from oxidative damage . Each lesion (a damaged section of DNA) must be snipped out and the DNA repaired to preserve its normal function; unrepaired DNA can lose its ability to code for proteins, and mutations can result, which may activate oncogenes or silence tumor suppressor genes. In normal cells, oxidative damage is repaired faster in active genes (which make up less than five percent of the genome ) than in inactive regions of the DNA. According to research, in children with Cockayne syndrome oxidative damage to active genes is not preferentially repaired, and in the most severe cases the repair is slowed throughout the whole genome. The resulting accumulation of oxidative damage could impair the normal functions of the DNA and may even trigger a program of cell death ( apoptosis ). On this theory, subtle defects in transcription leave the children's genetic machinery for synthesizing proteins needed by the body operating below normal capacity, which over time results in developmental failure and death. [ 12 ]
Cockayne syndrome is classified genetically into type A (CSA), caused by mutations in the ERCC8 gene, and type B (CSB), caused by mutations in the ERCC6 gene. [ 13 ]
In contrast to cells with normal repair capability, CSA and CSB deficient cells are unable to preferentially repair cyclobutane pyrimidine dimers induced by the action of ultraviolet (UV) light on the template strand of actively transcribed genes . [ 14 ] This deficiency reflects the loss of ability to perform the DNA repair process known as transcription coupled nucleotide excision repair (TC-NER). [ 15 ]
Within the damaged cell, the CSA protein normally localizes to sites of DNA damage , particularly inter-strand cross-links, double-strand breaks and some monoadducts. [ 16 ] CSB protein is also normally recruited to DNA damaged sites, and its recruitment is most rapid and robust as follows: interstrand crosslinks > double-strand breaks > monoadducts > oxidative damage. [ 16 ] CSB protein forms a complex with another DNA repair protein, SNM1A ( DCLRE1A ), a 5' – 3' exonuclease , that localizes to inter-strand cross-links in a transcription dependent manner. [ 17 ] The accumulation of CSB protein at sites of DNA double-strand breaks occurs in a transcription dependent manner and facilitates homologous recombinational repair of the breaks. [ 18 ] During the G0 / G1 phase of the cell cycle, DNA damage can trigger a CSB-dependent recombinational repair process that uses an RNA (rather than DNA ) template. [ 19 ]
The premature aging features of CS are likely due, at least in part, to the deficiencies in DNA repair (see DNA damage theory of aging ). [ 15 ]
People with this syndrome have smaller than normal head sizes ( microcephaly ), are of short stature ( dwarfism ), their eyes appear sunken, and they have an "aged" look. They often have long limbs with joint contractures (inability to relax the muscle at a joint), a hunched back ( kyphosis ), and they may be very thin ( cachectic ), due to a loss of subcutaneous fat. Their small chin, large ears, and pointy, thin nose often give an aged appearance. [ 9 ] The skin of those with Cockayne syndrome is also frequently affected: hyperpigmentation, varicose or spider veins ( telangiectasia ), [ 9 ] and serious sensitivity to sunlight are common, even in individuals without XP-CS. Patients with Cockayne syndrome will often severely burn or blister with very little heat exposure.
The eyes of patients can be affected in various ways, and eye abnormalities are common in CS. Cataracts and cloudiness of the cornea ( corneal opacity ) are common. Loss of and damage to the fibers of the optic nerve, causing optic atrophy, can occur. [ 3 ] Nystagmus , or involuntary eye movement, and pupils that fail to dilate demonstrate a loss of control of voluntary and involuntary muscle movement. [ 9 ] A "salt and pepper" retinal pigmentation is also a typical sign.
Diagnosis is determined by a specific test for DNA repair, which measures the recovery of RNA after exposure to UV radiation. Despite being associated with genes involved in nucleotide excision repair (NER), unlike xeroderma pigmentosum , CS is not associated with an increased risk of cancer. [ 6 ]
In Cockayne syndrome patients, UV-irradiated cells show decreased DNA and RNA synthesis. [ 20 ] Laboratory studies are mainly useful to eliminate other disorders. For example, skeletal radiography, endocrinologic tests, and chromosomal breakage studies can help in excluding disorders included in the differential diagnosis. [ citation needed ]
Brain CT scanning in Cockayne syndrome patients may reveal calcifications and cortical atrophy. [ 15 ]
Prenatal evaluation is possible. Amniotic fluid cell culturing is used to demonstrate that fetal cells are deficient in RNA synthesis after UV irradiation. [ citation needed ]
Imaging studies reveal a widespread absence of the myelin sheaths of the neurons in the white matter of the brain and general atrophy of the cortex. [ 6 ] Calcifications have also been found in the putamen , an area of the forebrain that regulates movements and aids in some forms of learning, [ 9 ] along with the cortex. [ 7 ] Additionally, atrophy of the central area of the cerebellum found in patients with Cockayne syndrome could also result in the lack of muscle control, particularly involuntary, and poor posture typically seen. [ citation needed ]
There is no cure for this syndrome, although patients can be symptomatically treated. Treatment usually involves physical therapy and minor surgeries to the affected organs, such as cataract removal. [ 3 ] Wearing high-factor sunscreen and protective clothing is also recommended, because Cockayne syndrome patients are very sensitive to UV radiation. [ 21 ] Optimal nutrition can also help. Genetic counseling for the parents is recommended, as the disorder has a 25% chance of being passed to any future children, and prenatal testing is also a possibility. [ 3 ] Another important aspect is the prevention of recurrence of CS in other siblings. Identification of the gene defects involved makes it possible to offer genetic counseling and antenatal diagnostic testing to parents who already have one affected child. [ 22 ]
Currently, there are two ongoing projects focused on the development of gene therapy for Cockayne syndrome. The first project, led by the Viljem Julijan Association for Children with Rare Diseases, aims to develop gene therapy specifically for Cockayne syndrome type B. [ 23 ] The second project, led by the Riaan Research Initiative, is dedicated to the development of gene therapy for Cockayne syndrome type A. [ 24 ]
The prognosis for those with Cockayne syndrome is poor: death typically occurs by the age of 12, [ 25 ] though it varies by disease type. There are three types of Cockayne syndrome according to the severity and onset of the symptoms. However, the differences between the types are not always clear-cut, and some researchers believe the signs and symptoms reflect a spectrum instead of distinct types:
Cockayne syndrome type A (CSA) is marked by normal development until a child is 1 or 2 years old, at which point growth slows and developmental delays are noticed; symptoms are not apparent until about 1 year of age. Life expectancy for type A is approximately 10 to 20 years. These symptoms are seen in CS type 1 children.
Cockayne syndrome type B (CSB), also known as "cerebro-oculo-facio-skeletal (COFS) syndrome" (or "Pena-Shokeir syndrome type B"), is the most severe subtype. Symptoms are present at birth and normal brain development stops after birth. The average lifespan for children with type B is up to 7 years of age. These symptoms are seen in CS type 2 children.
Cockayne syndrome type C (CSC) appears later in childhood with milder symptoms than the other types and a slower progression of the disorder. People with this type of Cockayne syndrome live into adulthood, with an average lifespan of 40 to 50 years. These symptoms are seen in CS type 3. [ 15 ]
Cockayne syndrome is rare worldwide. No racial predilection is reported for Cockayne syndrome. No sexual predilection is described for Cockayne syndrome; the male-to-female ratio is equal. Cockayne syndrome I (CS-A) manifests in childhood. Cockayne syndrome II (CS-B) manifests at birth or in infancy, and it has a worse prognosis. [ 15 ]
Research published in January 2018 describes CS features that are seen globally, with both similarities and differences:
CS has an incidence of 1 in 250,000 live births, and a prevalence of approximately 1 per 2.5 million, which is remarkably consistent across various regions globally: [ 26 ] [ 27 ]
Calcification [55–95%] of the cerebral cortex (especially depths of sulci , basal ganglia , cerebellum, thalamus ; also of the arteries , arterioles , and capillaries ).
Vascular changes - string vessels , especially in areas of metachromatic leukodystrophy; calcification in leptomeningeal vessels; accelerated atherosclerosis and arteriolosclerosis . Gliosis is present.
Astrocytes and microglia may show irregular cytoplasm and multiple nuclei . These changes may appear as high-intensity white-matter signals on FLAIR MRI sequences.
No major brain malformations . Relative sparing of the cerebral cortex; slight thinning of the cortical ribbon may be seen. Normal gyral pattern with widening of sulci . Lamination, neuronal size, and configuration of the neocortex are preserved. May show parieto-occipital dominance.
Severe cerebellar atrophy . Loss of Purkinje cells, granular neurons , and in some cases neurons in the dentate nucleus . Dendrites of Purkinje cells may be grossly deformed ("cactus flowers"), with ferruginated dendrites; dendrites have fewer higher-order branches. Purkinje " axonal torpedoes " may be present. Ventricular enlargement and an enlarged cisterna magna are seen. Amyloid plaques , neurofibrillary tangles , and Hirano bodies are not commonly seen, although ubiquitin reactivity of axons is present.
Cataracts [36–86%]. Usually bilateral, most develop by 4 years of age. Pigmentary retinopathy ("salt and pepper")[43–89%]. Miotic pupils , Optic disk pallor, Enophthalmos , Narrow palpebral fissures .
|
https://en.wikipedia.org/wiki/Cockayne_syndrome
|
The cocktail party effect refers to a phenomenon wherein the brain focuses a person's attention on a particular stimulus, usually auditory . This focus excludes a range of other stimuli from conscious awareness, as when a partygoer follows a single conversation in a noisy room. [ 1 ] [ 2 ] This ability is widely distributed among humans, with most listeners more or less easily able to partition the totality of sound detected by the ears into distinct streams, and subsequently to decide which streams are most pertinent, excluding all or most others. [ 3 ]
It has been proposed that a person's sensory memory subconsciously parses all stimuli and identifies discrete portions of these sensations according to their salience . [ 4 ] This allows most people to tune effortlessly into a single voice while tuning out all others. The phenomenon is often described as a "selective attention" or " selective hearing ". It may also describe a similar phenomenon that occurs when one may immediately detect words of importance originating from unattended stimuli, for instance hearing one's name among a wide range of auditory input. [ 5 ] [ 6 ]
A person who lacks the ability to segregate stimuli in this way is often said to display the cocktail party problem [ 7 ] or cocktail party deafness . [ 8 ] This may also be described as auditory processing disorder or King-Kopetzky syndrome.
Auditory attention with regard to the cocktail party effect primarily occurs in the left hemisphere of the superior temporal gyrus , a non-primary region of auditory cortex; a fronto-parietal network involving the inferior frontal gyrus , superior parietal sulcus, and intraparietal sulcus also accounts for the acts of attention-shifting, speech processing , and attention control. [ 9 ] [ 10 ] Both the target stream (the more important information being attended to) and competing/interfering streams are processed in the same pathway within the left hemisphere, but fMRI scans show that target streams are treated with more attention than competing streams. [ 11 ]
Furthermore, activity in the superior temporal gyrus (STG) toward the target stream is decreased/interfered with when competing stimuli streams (that typically hold significant value) arise. The "cocktail party effect" – the ability to detect significant stimuli in multi-talker situations – has also been labeled the "cocktail party problem", because the ability to selectively attend simultaneously interferes with the effectiveness of attention at a neurological level. [ 11 ]
The cocktail party effect works best as a binaural effect, which requires hearing with both ears. People with only one functioning ear seem much more distracted by interfering noise than people with two typical ears. [ 12 ] The benefit of using two ears may be partially related to the localization of sound sources . The auditory system is able to localize at least two sound sources and assign the correct characteristics to these sources simultaneously. As soon as the auditory system has localized a sound source, it can extract the signals of this sound source out of a mixture of interfering sound sources. [ 13 ] However, much of this binaural benefit can be attributed to two other processes, better-ear listening and binaural unmasking . [ 12 ] Better-ear listening is the process of exploiting the better of the two signal-to-noise ratios available at the ears. Binaural unmasking is a process that involves a combination of information from the two ears in order to extract signals from noise.
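A minimal sketch of better-ear listening, assuming the signal-to-noise ratio at each ear is already known, is simply a selection of the more favorable ear; the dB values below are arbitrary illustrative levels.

```python
# Better-ear listening, reduced to its core: with two ears, the auditory
# system can exploit whichever ear happens to have the higher
# signal-to-noise ratio (SNR). Values are illustrative.

def better_ear_snr(snr_left_db: float, snr_right_db: float) -> float:
    """Return the more favorable of the two ear SNRs."""
    return max(snr_left_db, snr_right_db)

# Example: a noise source on the listener's right degrades the right ear,
# so the left ear offers the better SNR.
print(better_ear_snr(snr_left_db=6.0, snr_right_db=-3.0))  # -> 6.0
```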
Much of the early attention research, beginning in the early 1950s, can be traced to problems faced by air traffic controllers . At that time, controllers received messages from pilots over loudspeakers in the control tower . Hearing the intermixed voices of many pilots over a single loudspeaker made the controller's task very difficult. [ 14 ] The effect was first defined and named "the cocktail party problem" by Colin Cherry in 1953. [ 7 ] Cherry conducted attention experiments in which participants listened to two different messages from a single loudspeaker at the same time and tried to separate them; this was later termed a dichotic listening task. [ 15 ] His work reveals that the ability to separate sounds from background noise is affected by many variables, such as the sex of the speaker, the direction from which the sound is coming, the pitch , and the rate of speech. [ 7 ]
Cherry developed the shadowing task in order to further study how people selectively attend to one message amid other voices and noises. In a shadowing task participants wear a special headset that presents a different message to each ear. The participant is asked to repeat aloud the message (called shadowing) that is heard in a specified ear (called a channel). [ 15 ] Cherry found that participants were able to detect their name from the unattended channel, the channel they were not shadowing. [ 16 ] Later research using Cherry's shadowing task was done by Neville Moray in 1959. He concluded that almost none of the rejected message is able to penetrate the block that is set up, except for subjectively "important" messages. [ 16 ]
Selective attention shows up across all ages. Starting with infancy, babies begin to turn their heads toward a sound that is familiar to them, such as their parents' voices. [ 17 ] This shows that infants selectively attend to specific stimuli in their environment. Reviews of selective attention indicate that infants favor "baby" talk over speech with an adult tone. [ 15 ] [ 17 ] This preference indicates that infants can recognize physical changes in the tone of speech. The accuracy in noticing these physical differences, like tone, amid background noise improves over time. [ 17 ] Infants may simply ignore stimuli because something like their name, while familiar, holds no higher meaning to them at such a young age; research suggests that infants do not understand that the noise being presented to them amidst distracting noise is their own name, and thus do not respond. [ 18 ] The ability to filter out unattended stimuli reaches its prime in young adulthood. In reference to the cocktail party phenomenon, older adults have a harder time than younger adults focusing in on one conversation if competing stimuli, like "subjectively" important messages, make up the background noise. [ 17 ]
Examples of messages that catch people's attention include personal names and taboo words. [ 19 ] The ability to selectively attend to one's own name has been found in infants as young as 5 months of age and appears to be fully developed by 13 months. [ 18 ] Along with multiple experts in the field, Anne Treisman states that people are permanently primed to detect personally significant words, like names, and theorizes that they may require less perceptual information than other words to trigger identification. [ 20 ] Taboo words often contain sexually explicit material that cause an alert system in people that leads to decreased performance in shadowing tasks. [ 21 ] Taboo words do not affect children in selective attention until they develop a strong vocabulary with an understanding of language.
Selective attention begins to waver as we get older. Older adults have longer latency periods in discriminating between conversation streams, typically attributed to the fact that general cognitive ability begins to decay with old age (as exemplified with memory, visual perception, higher order functioning, etc.). [ 9 ] [ 22 ]
Even more recently, modern neuroscience techniques are being applied to study the cocktail party problem. Some notable examples of researchers doing such work include Edward Chang, Nima Mesgarani, and Charles Schroeder using electrocorticography ; Jonathan Simon, Mounya Elhilali, Adrian KC Lee, Shihab Shamma, Barbara Shinn-Cunningham, Daniel Baldauf, and Jyrki Ahveninen using magnetoencephalography ; Jyrki Ahveninen, Edmund Lalor, and Barbara Shinn-Cunningham using electroencephalography ; and Jyrki Ahveninen and Lee M. Miller using functional magnetic resonance imaging .
Not all the information presented to us can be processed. In theory, the selection of what to pay attention to can be random or nonrandom. [ 23 ] For example, when driving, drivers are able to focus on the traffic lights rather than on other stimuli present in the scene. In such cases it is mandatory to select which portion of presented stimuli is important. A basic question in psychology is when this selection occurs. [ 15 ] This issue has developed into the early versus late selection controversy. The basis for this controversy can be found in the Cherry dichotic listening experiments. Participants were able to notice physical changes, like pitch or change in gender of the speaker, and stimuli, like their own name, in the unattended channel. This brought about the question of whether the meaning, semantics , of the unattended message was processed before selection. [ 15 ] In an early selection attention model very little information is processed before selection occurs. In late selection attention models more information, like semantics, is processed before selection occurs. [ 23 ]
The earliest work in exploring mechanisms of early selective attention was performed by Donald Broadbent , who proposed a theory that came to be known as the filter model . [ 24 ] This model was established using the dichotic listening task. His research showed that most participants were accurate in recalling information that they actively attended to, but were far less accurate in recalling information that they had not attended to. This led Broadbent to the conclusion that there must be a "filter" mechanism in the brain that could block out information that was not selectively attended to. The filter model was hypothesized to work in the following way: as information enters the brain through sensory organs (in this case, the ears) it is stored in sensory memory , a buffer memory system that hosts an incoming stream of information long enough for us to pay attention to it. [ 15 ] Before information is processed further, the filter mechanism allows only attended information to pass through. The selected attention is then passed into working memory , the set of mechanisms that underlies short-term memory and communicates with long-term memory . [ 15 ] In this model, auditory information can be selectively attended to on the basis of its physical characteristics, such as location and volume. [ 24 ] [ 25 ] [ 26 ] Others suggest that information can be attended to on the basis of Gestalt features, including continuity and closure. [ 27 ] For Broadbent, this explained the mechanism by which people can choose to attend to only one source of information at a time while excluding others. However, Broadbent's model failed to account for the observation that words of semantic importance, for example the individual's own name, can be instantly attended to despite having been in an unattended channel.
Shortly after Broadbent's experiments, Oxford undergraduates Gray and Wedderburn repeated his dichotic listening tasks, altered with monosyllabic words that could form meaningful phrases, except that the words were divided across ears. [ 28 ] For example, the words, "Dear, one, Jane," were sometimes presented in sequence to the right ear, while the words, "three, Aunt, six," were presented in a simultaneous, competing sequence to the left ear. Participants were more likely to remember, "Dear Aunt Jane," than to remember the numbers; they were also more likely to remember the words in the phrase order than to remember the numbers in the order they were presented. This finding goes against Broadbent's theory of complete filtration because the filter mechanism would not have time to switch between channels. This suggests that meaning may be processed first.
In a later addition to this existing theory of selective attention, Anne Treisman developed the attenuation model . [ 29 ] In this model, information, when processed through a filter mechanism, is not completely blocked out as Broadbent might suggest. Instead, the information is weakened (attenuated), allowing it to pass through all stages of processing at an unconscious level. Treisman also suggested a threshold mechanism whereby some words, on the basis of semantic importance, may grab one's attention from the unattended stream. One's own name, according to Treisman, has a low threshold value (i.e. it has a high level of meaning) and thus is recognized more easily. The same principle applies to words like fire , directing our attention to situations that may immediately require it. The only way this can happen, Treisman argued, is if information was being processed continuously in the unattended stream.
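A schematic sketch of the attenuation model follows; the attenuation factor and per-word thresholds are invented for illustration. The point is only the mechanism: unattended input is weakened rather than blocked, so a word with a low enough threshold can still break through.

```python
# Schematic sketch of Treisman's attenuation model. Unattended channels are
# attenuated, not filtered out completely, and a word reaches awareness if
# its (possibly attenuated) strength clears a word-specific threshold.
# All numbers below are invented for illustration.

ATTENUATION = 0.3       # assumed weakening applied to unattended channels
THRESHOLDS = {          # lower threshold = higher personal significance
    "fire": 0.2,        # urgent word
    "anna": 0.2,        # the listener's own name (hypothetical listener)
    "table": 0.8,       # neutral word
}

def reaches_awareness(word, signal_strength=1.0, attended=False):
    strength = signal_strength if attended else signal_strength * ATTENUATION
    return strength >= THRESHOLDS.get(word, 0.8)

print(reaches_awareness("table"))   # False: attenuated below its threshold
print(reaches_awareness("anna"))    # True: low threshold lets it through
```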
Diana Deutsch , best known for her work in music perception and auditory illusions, has also made important contributions to models of attention. In order to explain in more detail how words can be attended to on the basis of semantic importance, Deutsch & Deutsch [ 30 ] and Norman [ 31 ] proposed a model of attention which includes a second selection mechanism based on meaning. In what came to be known as the Deutsch-Norman model, information in the unattended stream is not processed all the way into working memory, as Treisman's model would imply. Instead, information on the unattended stream is passed through a secondary filter after pattern recognition. If the unattended information is recognized and deemed unimportant by the secondary filter, it is prevented from entering working memory. In this way, only immediately important information from the unattended channel can come to awareness.
Daniel Kahneman also proposed a model of attention, but it differs from previous models in that he describes attention not in terms of selection, but in terms of capacity. For Kahneman, attention is a resource to be distributed among various stimuli, [ 32 ] a proposition which has received some support. [ 6 ] [ 4 ] [ 33 ] This model describes not when attention is focused, but how it is focused. According to Kahneman, attention is generally determined by arousal ; a general state of physiological activity. The Yerkes-Dodson law predicts that arousal will be optimal at moderate levels - performance will be poor when one is over- or under-aroused. Of particular relevance, Narayan et al. discovered a sharp decline in the ability to discriminate between auditory stimuli when background noises were too numerous and complex - this is evidence of the negative effect of overarousal on attention. [ 4 ] Thus, arousal determines our available capacity for attention. Then, an allocation policy acts to distribute our available attention among a variety of possible activities. Those deemed most important by the allocation policy will have the most attention given to them. The allocation policy is affected by enduring dispositions (automatic influences on attention) and momentary intentions (a conscious decision to attend to something). Momentary intentions requiring a focused direction of attention rely on substantially more attention resources than enduring dispositions . [ 34 ] Additionally, there is an ongoing evaluation of the particular demands of certain activities on attention capacity. [ 32 ] That is to say, activities that are particularly taxing on attention resources will lower attention capacity and will influence the allocation policy - in this case, if an activity is too draining on capacity, the allocation policy will likely cease directing resources to it and instead focus on less taxing tasks. Kahneman's model explains the cocktail party phenomenon in that momentary intentions might allow one to expressly focus on a particular auditory stimulus, but that enduring dispositions (which can include new events, and perhaps words of particular semantic importance) can capture our attention. It is important to note that Kahneman's model doesn't necessarily contradict selection models, and thus can be used to supplement them.
Some research has demonstrated that the cocktail party effect may not be simply an auditory phenomenon, and that relevant effects can be obtained when testing visual information as well. For example, Shapiro et al. were able to demonstrate an "own name effect" with visual tasks, where subjects were able to easily recognize their own names when presented as unattended stimuli. [ 35 ] They adopted a position in line with late selection models of attention such as the Treisman or Deutsch-Norman models, suggesting that early selection would not account for such a phenomenon. The mechanisms by which this effect might occur were left unexplained.
Animals that communicate acoustically in choruses, such as frogs, insects, and songbirds, can experience the cocktail party effect when multiple signals or calls occur concurrently. As with their human counterparts, acoustic mediation allows these animals to listen for what they need within their environments. For bank swallows , cliff swallows, and king penguins , acoustic mediation allows for parent/offspring recognition in noisy environments. Amphibians also demonstrate this effect, as evidenced in frogs: female frogs can listen for and differentiate male mating calls, while males can mediate other males' aggression calls. [ 36 ] There are two leading theories as to why acoustic signaling evolved among different species. Receiver psychology holds that the development of acoustic signaling can be traced back to the nervous system and the processing strategies it uses, specifically how the physiology of auditory scene analysis affects how a species interprets and gains meaning from sound. Communication network theory states that animals can gain information by eavesdropping on signals exchanged between others of their species; this is especially true among songbirds. [ 36 ]
Hearable devices like noise-canceling headphones have been designed to address the cocktail party problem. [ 37 ] [ 38 ] These types of devices could provide wearers with a degree of control over the sound sources around them. [ 39 ] [ 40 ] [ 41 ]
Deep learning headphone systems like target speech hearing have been proposed to give wearers the ability to hear a target person in a crowded room with multiple speakers and background noise. [ 37 ] This technology uses real-time neural networks to learn the voice characteristics of an enrolled target speaker, which are then used to focus on their speech while suppressing other speakers and noise. [ 39 ] [ 42 ] Semantic hearing headsets also use neural networks to enable wearers to hear specific sounds, such as birds tweeting or alarms ringing, based on their semantic description, while suppressing other ambient sounds in the environment. [ 38 ] Real-time neural networks have also been used to create programmable sound bubbles on headsets, allowing all speakers within the bubble to be audible while suppressing speakers and noise outside the bubble. [ 43 ] [ 41 ]
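A minimal sketch of the general idea behind speaker-conditioned target speech extraction is shown below. The architecture, layer sizes, and names are assumptions chosen for illustration; they do not reproduce the published systems, which are considerably more sophisticated and run under tight real-time constraints.

```python
# Minimal sketch of speaker-conditioned target speech extraction, the idea
# behind "target speech hearing". Architecture, layer sizes, and names are
# illustrative assumptions, not the published systems.
import torch
import torch.nn as nn

class TargetSpeechExtractor(nn.Module):
    def __init__(self, n_fft: int = 512, embed_dim: int = 128):
        super().__init__()
        freq_bins = n_fft // 2 + 1
        # Enrollment branch: summarize a clean example of the target
        # speaker into a fixed-length voice embedding.
        self.enroll = nn.GRU(freq_bins, embed_dim, batch_first=True)
        # Separation branch: predict a time-frequency mask for the mixture,
        # conditioned on the voice embedding.
        self.mask_net = nn.GRU(freq_bins + embed_dim, 256, batch_first=True)
        self.mask_out = nn.Sequential(nn.Linear(256, freq_bins), nn.Sigmoid())

    def forward(self, mixture_spec, enrollment_spec):
        # mixture_spec, enrollment_spec: (batch, frames, freq_bins) magnitudes
        _, h = self.enroll(enrollment_spec)           # h: (1, batch, embed_dim)
        emb = h[-1].unsqueeze(1)                      # (batch, 1, embed_dim)
        emb = emb.expand(-1, mixture_spec.size(1), -1)
        x = torch.cat([mixture_spec, emb], dim=-1)    # condition every frame
        y, _ = self.mask_net(x)
        mask = self.mask_out(y)                       # values in (0, 1)
        return mixture_spec * mask                    # estimated target speech

# Usage: spectrogram frames in, masked spectrogram out.
model = TargetSpeechExtractor()
mix = torch.rand(2, 100, 257)     # batch of 2 mixtures, 100 frames each
enroll = torch.rand(2, 50, 257)   # clean enrollment utterances
est = model(mix, enroll)          # shape (2, 100, 257)
```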
These devices could benefit individuals with hearing loss , sensory processing disorders, and misophonia , as well as people whose jobs require focused listening, such as health-care, military, factory, or construction workers.
|
https://en.wikipedia.org/wiki/Cocktail_party_effect
|
Coex is a biopolymer with flame-retardant properties derived from the functionalization of cellulosic fibers such as cotton , linen , jute , cannabis , coconut , ramie , bamboo , raffia palm , stipa , abacà , sisal , nettle and kapok . The formation of Coex has also been demonstrated on wood and on semi-synthetic fibers such as cellulose acetate , cellulose triacetate , viscose , modal , lyocell and cupro .
The material is obtained by sulfation and phosphorylation reactions on the glucan units, which are linked to each other in the 1,4 position. Typical reaction sites are the secondary and tertiary hydroxyl groups of the cellulosic fiber. [ 1 ] The chemical modification of the cellulosic fibers does not produce physical or visual alterations relative to the starting material.
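Schematically, and assuming direct esterification of the cellulose hydroxyls with the corresponding acids (the source does not specify the actual reagents or conditions), the two functionalizations can be written as:

```latex
% Generic esterification of cellulose hydroxyls (Cell--OH); the reagents
% shown are schematic assumptions, not the patented Coex process.
\begin{align*}
\text{Cell--OH} + \mathrm{H_2SO_4} &\longrightarrow \text{Cell--O--SO}_3\text{H} + \mathrm{H_2O} && \text{(sulfation)}\\
\text{Cell--OH} + \mathrm{H_3PO_4} &\longrightarrow \text{Cell--O--PO(OH)}_2 + \mathrm{H_2O} && \text{(phosphorylation)}
\end{align*}
```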
In 2015 the World Textile Information Network (WTiN) declared Coex the winner of the "Future Materials Award" as the best innovation in the Home Textile category. [ 2 ]
Coex preserves the physical and chemical characteristics of the raw material from which it is formed. The main features of Coex materials are comfort, hydrophilicity, antistatic properties, mechanical resistance and versatility in the textile sector, features shared with all natural and semi-synthetic cellulosic fibers.
Coex materials are resistant to moths, mildew and sunlight. The flame-resistant nature of Coex is unique in that it acts as a barrier to the flames rather than only delaying the spread of fire: the biopolymer fibres carbonize and thereby extinguish the flame. [ citation needed ] The resulting products are hypoallergenic and biodegradable.
This material -related article is a stub . You can help Wikipedia by expanding it .
This article related to medical technology is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Coex_(material)
|
The Cognitive Abilities Screening Instrument ( CASI ) is a cognitive test used to screen for dementia , monitor disease progression, and provide profiles of cognitive impairment. It examines abilities in attention , concentration , orientation , short-term memory , long-term memory , language abilities , visual construction , list-generating fluency , abstraction , and judgment , with total scores ranging from 0 to 100. [ 1 ] [ 2 ]
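As a rough illustration of how such an instrument yields a single 0-100 composite from domain subscores, consider the sketch below. The domains echo those listed above, but the maximum subscores and the clamping rule are invented for illustration and are not the published CASI scoring procedure.

```python
# Illustrative composite scoring for a CASI-like screen. The per-domain
# maxima below are hypothetical, not the published CASI scoring rules;
# only the 0-100 total range matches the instrument.
MAX_SUBSCORES = {
    "attention": 8, "concentration": 8, "orientation": 18,
    "short_term_memory": 12, "long_term_memory": 10, "language": 10,
    "visual_construction": 10, "fluency": 10, "abstraction": 8,
    "judgment": 6,
}  # maxima sum to 100

def casi_like_total(subscores: dict) -> int:
    """Clamp each domain subscore to [0, maximum] and sum to a 0-100 total."""
    total = 0
    for domain, maximum in MAX_SUBSCORES.items():
        total += max(0, min(subscores.get(domain, 0), maximum))
    return total

print(casi_like_total({"orientation": 18, "language": 9}))  # -> 27
```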
This medical article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Cognitive_Abilities_Screening_Instrument
|
Cognitive disengagement syndrome ( CDS ) is a syndrome characterized by developmentally inappropriate , impairing, and persistent levels of decoupled attentional processing from the ongoing external context and resultant hypoactivity . Symptoms often manifest in difficulties with staring , mind blanking , absent-mindedness , mental confusion , and maladaptive mind-wandering alongside delayed, sedentary, or slow motor movements. [ 2 ] To scientists in the field, it has reached the threshold of evidence and recognition as a distinct syndrome. [ 2 ]
Since 1798, the medical literature on attentional disorders has distinguished between at least two kinds: one a disorder of distractibility, lack of sustained attention, and poor inhibition (that is now known as ADHD ), and the other a disorder of low power, arousal, or oriented/selective attention (now known as CDS). [ 3 ]
Although it implicates attention, CDS is distinct from ADHD. Unlike ADHD, which is the result of deficient executive functioning and self-regulation, [ 4 ] [ 5 ] [ 6 ] CDS presents with problems in arousal, maladaptive daydreaming , and oriented or selective attention (distinguishing what is important from unimportant in information that has to be processed rapidly), as opposed to poor persistence or sustained attention, inhibition, and self-regulation. [ 7 ] In educational settings, CDS tends to result in decreased work accuracy, while ADHD impairs productivity. [ 8 ]
CDS can also occur as a comorbidity with ADHD in some people, leading to substantially higher impairment than when either condition occurs alone.
In contemporary science, it is clear that this set of symptoms is important because it is associated with unique impairments, above and beyond ADHD. CDS independently has a negative impact on functioning (such as a diminished quality of life , [ 9 ] increased stress, and suicidal behavior, [ 10 ] as well as lower educational attainment and socioeconomic status [ 11 ] ). CDS is also clinically relevant because multiple randomized controlled clinical trials (RCTs) have shown that it responds poorly to methylphenidate . [ 12 ] [ 13 ] [ 14 ] [ 15 ]
Originally, CDS was thought to represent about one in three persons with the inattentive presentation of ADHD [ 16 ] (in effect, a psychiatric misdiagnosis) and to be incompatible with hyperactivity. Subsequent research established that it can be comorbid with ADHD, and present in individuals without ADHD as well. For this reason, and due to many other lines of evidence, there is a scientific consensus that the condition is a distinct syndrome. [ 2 ]
When CDS and ADHD coexist, the problems are additive: those with both conditions had higher levels of impairment and inattention than adults with ADHD only [ 17 ] and were more likely to be unmarried, out of work, or on disability. [ 18 ] CDS alone is also present in the population and can be quite impairing in educational and occupational settings, even if it is not as pervasively impairing as ADHD. Studies on medical treatments are limited. However, research suggests that atomoxetine [ 19 ] [ 20 ] [ 21 ] [ 22 ] and lisdexamfetamine [ 19 ] [ 23 ] may be used to treat CDS.
The condition was previously called sluggish cognitive tempo ( SCT ). The terms concentration deficit disorder ( CDD ) or cognitive disengagement syndrome ( CDS ) have recently been preferred to SCT because they better and more accurately explain the condition and thus eliminate confusion. [ 18 ] [ 24 ]
In many ways, those who have a CDS profile have some of the opposite symptoms of those with predominantly hyperactive-impulsive or combined presentation of ADHD: instead of being hyperactive , extroverted , obtrusive, excessively energetic, and risk takers, those with CDS are drifting, absent-minded , listless, introspective , and daydreamy. They feel like they are "in the fog" and seem "out of it". [ 25 ]
The comorbid psychiatric problems often associated with CDS are more often of the internalizing types , such as anxiety , unhappiness, or depression . [ 16 ] Most consistent across studies was a pattern of reticence and social withdrawal in interactions with peers. The typically shy nature and slow response time of those with CDS have often been misinterpreted as aloofness or disinterest by others. In social group interactions, those with CDS may be ignored and neglected. People with classic ADHD are more likely to be rejected in these situations because of their social intrusiveness or aggressive behavior. Compared to children with CDS, they are also much more likely to show antisocial behaviors like substance abuse , oppositional-defiant disorder , or conduct disorder (frequent lying, stealing, fighting, etc.). [ 18 ] Fittingly, in terms of personality, ADHD seems to be associated with sensitivity to reward and fun seeking, while CDS may be associated with punishment sensitivity . [ 26 ] [ 18 ]
Individuals with CDS symptoms may show a qualitatively different kind of attention deficit that is more typical of a true information processing problem; such as poor focusing of attention on details or the capacity to distinguish important from unimportant information rapidly. In contrast, people with ADHD have more difficulties with persistence of attention and action toward goals coupled with impaired resistance to responding to distractions. Unlike CDS, those with classic ADHD have problems with inhibition but have no difficulty selecting and filtering sensory input. [ 27 ] [ 18 ]
Some think that CDS and ADHD produce different kinds of inattention: While those with ADHD can engage their attention but fail to sustain it over time, people with CDS seem to have difficulty with engaging their attention to a specific task. [ 28 ] [ 29 ] Accordingly, the ability to orient attention has been found to be abnormal in CDS. [ 30 ]
Both disorders interfere significantly with academic performance but may do so by different means. CDS may be more problematic with the accuracy of the work a child does in school and lead to making more errors. Conversely, ADHD may more adversely affect productivity that represents the amount of work done in a particular time interval. Children with CDS seem to have more difficulty with consistently remembering things that were previously learned and make more mistakes on memory retrieval tests than do children with ADHD. They have been found to perform much worse on psychological tests involving perceptual-motor speed or hand-eye coordination and speed. They also have a more disorganized thought process, a greater degree of sloppiness, and lose things more easily. The risk for additional learning disabilities seems equal in both ADHD and CDS (23–50%), but math disorders may be more frequent in the CDS group. [ 25 ]
A key behavioral characteristic of those with CDS symptoms is that they are more likely to appear to be lacking motivation and may even have an unusually higher frequency of daytime sleepiness. [ 31 ] They seem to lack energy to deal with mundane tasks and will consequently seek to concentrate on things that are mentally stimulating perhaps because of their underaroused state . Alternatively, CDS may involve a pathological form of excessive mind-wandering . [ 18 ]
The executive system of the human brain provides for the cross-temporal organization of behavior toward goals and the future and coordinates actions and strategies for everyday goal-directed tasks. Essentially, this system permits humans to self-regulate their behavior so as to sustain action and problem solving toward goals specifically and the future more generally. Dysexecutive syndrome is defined as a "cluster of impairments generally associated with damage to the frontal lobes of the brain" which includes "difficulties with high-level tasks such as planning, organising, initiating, monitoring, and adapting behaviour". [ 32 ] Such executive deficits pose serious problems for a person's ability to engage in self-regulation over time to attain their goals and anticipate and prepare for the future.
Adele Diamond postulated that the core cognitive deficit of those with ADHD-I is working memory , or, as she coined in a paper on the subject, "childhood-onset dysexecutive syndrome". [ 33 ] However, two more recent studies by Barkley found that while children and adults with CDS had some deficits in executive functions (EF) in everyday life activities, these were of far less magnitude and largely centered on problems with self-organization and problem-solving. Even then, analyses showed that most of the difficulties with executive function deficits were the result of overlapping ADHD symptoms that may co-exist with CDS rather than being attributable to CDS itself. More research on the link of CDS to executive function deficits is clearly indicated, but, as of this time, CDS does not seem to be as strongly associated with executive function deficits as is ADHD. [ 18 ]
Unlike ADHD, the general causes of CDS symptoms are largely unknown, though one recent study of twins suggested that the condition appears to be nearly as heritable or genetically influenced as ADHD. [ 34 ]
Little is known about the neurobiology of CDS. However, symptoms of CDS seem to indicate that the posterior attention networks may be more involved here than the prefrontal cortex region of the brain and difficulties with working memory so prominent in ADHD. This hypothesis gained greater support following a 2015 neuroimaging study comparing ADHD inattentive symptoms and CDS symptoms in adolescents: It found that CDS was associated with a decreased activity in the left superior parietal lobule (SPL), whereas inattentive symptoms were associated with other differences in activation. [ 35 ] A 2018 study showed an association between CDS and specific parts of the frontal lobes, differing from classical ADHD neuro-anatomy. [ 36 ]
A study showed only a small link between thyroid functioning and CDS symptoms, suggesting that thyroid dysfunction is not the cause of CDS. High rates of CDS have been observed in children who had prenatal alcohol exposure and in survivors of acute lymphoblastic leukemia , where they were associated with cognitive late effects . [ 37 ] [ 38 ] [ 39 ]
CDS is included, with its previous name of sluggish cognitive tempo, as a diagnostic descriptor in the current International Classification of Diseases (ICD) released in 2022 under the World Health Organization (WHO). [ 40 ] However, it is not included as a separate disorder in the ICD or current Diagnostic and Statistical Manual of Mental Disorders (DSM) (2013) [ 41 ] [ 42 ] although it may be in subsequent editions; to scientists in the field, it has reached the threshold of evidence and recognition as a distinct syndrome [ 2 ] and is diagnosed by some professional practices. [ 43 ] Screening tools have been created to assess CDS symptoms. [ 44 ] [ 45 ] Although some symptoms of other conditions are partially shared with CDS, they are distinct conditions. [ 46 ]
Treatment of CDS has not been well investigated. Initial drug studies were done only with the ADHD medication methylphenidate , and even then only with children who were diagnosed as ADD without hyperactivity (using DSM-III criteria) and not specifically for CDS. The research seems to have found that most children with ADD ( attention deficit disorder ) with hyperactivity (currently ADHD combined presentation) responded well at medium-to-high doses. [ 33 ] However, a sizable percentage of children with ADD without hyperactivity (currently ADHD inattentive presentation, so the results may apply to CDS) did not gain much benefit from methylphenidate , and when they did benefit, it was at a much lower dose. [ 47 ]
However, one study and a retrospective analysis of medical histories found that the presence or absence of CDS symptoms made no difference in response to methylphenidate in children with ADHD-I. [ 48 ] [ 18 ] These studies did not specifically and explicitly examine the effect of the drug on CDS symptoms in children. Atomoxetine may be used to treat CDS, [ 19 ] as multiple randomised controlled clinical trials ( RCTs ) have found that it is an effective treatment. [ 19 ] [ 20 ] [ 22 ] In contrast, multiple other RCTs have shown that it responds poorly to methylphenidate . [ 49 ] [ 50 ] [ 51 ] [ 52 ]
Only one study has investigated the use of behavior modification methods at home and school for children with predominantly CDS symptoms and it found good success. [ 53 ]
In April 2014, The New York Times reported that sluggish cognitive tempo is the subject of pharmaceutical company clinical drug trials, including ones by Eli Lilly that proposed that one of its biggest-selling drugs, Strattera , could be prescribed to treat proposed symptoms of sluggish cognitive tempo. [ 54 ] Other researchers believe that there is no effective treatment for CDS. [ 55 ]
The prognosis of CDS is unknown. In contrast, much is known about the adolescent and adult outcomes of children having ADHD. Those with CDS symptoms typically show a later onset of their symptoms than do those with ADHD, perhaps by as much as a year or two on average. Both groups had similar levels of learning problems and inattention, but CDS children had fewer externalizing symptoms and higher levels of unhappiness, anxiety/depression, withdrawn behavior, and social dysfunction. They do not have the same risks for oppositional defiant disorder, conduct disorder, or social aggression and thus may have different life course outcomes compared to children with the ADHD-HI and combined subtypes, who have far higher risks for these other " externalizing " disorders. [ 18 ]
However, unlike ADHD, there are no longitudinal studies of children with CDS that can shed light on the developmental course and adolescent or adult outcomes of these individuals.
Recent studies indicate that the symptoms of CDS in children form two dimensions: daydreamy-spacey and sluggish-lethargic, and that the former are more distinctive of the disorder from ADHD than the latter. [ 56 ] [ 57 ] This same pattern was recently found in the first study of adults with CDS by Barkley and also in more recent studies of college students. [ 18 ] These studies indicated that CDS is probably not a subtype of ADHD but a distinct disorder from it. Yet it is one that overlaps with ADHD in 30–50% of cases of each disorder, suggesting a pattern of comorbidity between two related disorders rather than subtypes of the same disorder. Nevertheless, CDS is strongly correlated with ADHD inattentive and combined subtypes. [ 56 ] [ 58 ] According to a Norwegian study, "[CDS] correlated significantly with inattentiveness, regardless of the subtype of ADHD." [ 59 ]
There have been descriptions in literature for centuries of children who are very inattentive and prone to foggy thought.
Symptoms similar to ADHD were first systematically described in 1775 by Melchior Adam Weikard and in 1798 by Alexander Crichton in their medical textbooks. Although Weikard mainly described a single disorder of attention resembling the combined presentation of ADHD, Crichton postulates an additional attention disorder, described as a "morbid diminution of its power or energy", and further explores possible "corporeal" and "mental" causes for the disorder (including "irregularities in diet, excessive evacuations, and the abuse of corporeal desires"). However, he does not further describe any symptoms of the disorder, making this an early but certainly non-specific reference to a CDS-like syndrome. [ 60 ] [ 18 ]
One example from fictional literature is Heinrich Hoffmann 's character of "Johnny Head-in-Air" ( Hanns Guck-in-die-Luft ), in Struwwelpeter (1845). (Some researchers see several characters in this book as showing signs of child psychiatric disorders). [ 61 ]
The Canadian pediatrician Guy Falardeau, besides working with hyperactive children, also wrote about very dreamy, quiet and well-behaved children that he encountered in his practice. [ 62 ]
In more modern times, research surrounding attention disorders has traditionally focused on hyperactive symptoms, but began to newly address inattentive symptoms in the 1970s. Influenced by this research, the DSM-III (1980) allowed for the first time a diagnosis of an ADD subtype that presented without hyperactivity. Researchers exploring this subtype created rating scales for children which included questions regarding symptoms such as short attention span, distractibility, drowsiness, and passivity. [ 63 ] In the mid-1980s, it was proposed that as opposed to the then accepted dichotomy of ADD with or without hyperactivity (ADD/H, ADD/noH), instead a three-factor model of ADD was more appropriate, consisting of hyperactivity-impulsivity, inattention-disorganization, and slow tempo subtypes. [ 64 ]
In the 1990s, Weinberg and Brumback proposed a new disorder: "primary disorder of vigilance" (PVD). Characteristic symptoms were difficulty sustaining alertness and arousal , daydreaming, difficulty focusing attention, losing one's place in activities and conversation, slow completion of tasks, and a kind personality. The most detailed case report in their article reads like a prototypical presentation of CDS. The authors acknowledged an overlap of PVD and ADHD but argued in favor of considering PVD to be distinct in its unique cognitive impairments. [ 65 ] [ 66 ] A problem with the paper is that it dismissed ADHD as a nonexistent disorder (despite several thousand research studies on it by then) and preferred the term PVD for this CDS-like symptom complex. A further difficulty with the PVD diagnosis is that it is based on only six cases, rather than the far larger samples of children with CDS used in other studies, and that the very term implies that science has established the underlying cognitive deficits giving rise to CDS symptoms, which is hardly the case. [ 18 ]
With the publication of DSM-IV in 1994, the disorder was labeled as ADHD , and was divided into three subtypes: predominantly inattentive, predominantly hyperactive-impulsive, and combined. Of the proposed CDS-specific symptoms discussed while developing the DSM-IV, only "forgetfulness" was included in the symptom list for ADHD-I, and no others were mentioned. However, several of the proposed CDS symptoms were included in the diagnosis of "ADHD, not otherwise specified". [ 63 ]
Prior to 2001, there were a total of four scientific journal articles specifically addressing symptoms of CDS. But then a researcher suggested that sluggish tempo symptoms (such as inconsistent alertness and orientation) were, in fact, adequate for the diagnosis of ADHD-I. Thus, he argued, their exclusion from DSM-IV was inappropriate. [ 67 ] The research article and its accompanying commentary urging the undertaking of more research on CDS spurred the publication of over 30 scientific journal articles to date which specifically address symptoms of CDS. [ 63 ]
However, with the publication of DSM-5 in 2013, ADHD continues to be classified as predominantly inattentive, predominantly hyperactive-impulsive, and combined type and there continues to be no mention of CDS as a diagnosis or a diagnosis subtype anywhere in the manual. The diagnosis of "ADHD, not otherwise specified" also no longer includes any mention of CDS symptoms. [ 68 ] Similarly, ICD-10 , the medical diagnostic manual, has no diagnosis code for CDS. Although CDS is not recognized as a disorder at this point, researchers continue to debate its usefulness as a construct and its implications for further attention disorder research. [ 63 ]
Significant skepticism has been raised within the medical and scientific communities as to whether CDS, currently considered a "symptom cluster," actually exists as a distinct disorder. [ 54 ]
Allen Frances , emeritus professor of psychiatry at Duke University , argues: "We're seeing a fad in evolution: Just as ADHD has been the diagnosis du jour for 15 years or so, this is the beginning of another. This is a public health experiment on millions of kids...I have no doubt there are kids who meet the criteria for this thing, but nothing is more irrelevant. The enthusiasts here are thinking of missed patients. What about the mislabeled kids who are called patients when there's nothing wrong with them? They are not considering what is happening in the real world." [ 54 ]
UCLA researcher and Journal of Abnormal Child Psychology editorial board member Steve S. Lee expresses concern that based on CDS's close relationship to ADHD, a pattern of overdiagnosis of the latter has "already grown to encompass too many children with common youthful behavior, or whose problems are derived not from a neurological disorder but from inadequate sleep, a different learning disability or other sources." Lee states: "The scientist part of me says we need to pursue knowledge, but we know that people will start saying their kids have [cognitive disengagement syndrome], and doctors will start diagnosing it and prescribing for it long before we know whether it's real...ADHD has become a public health, societal question, and it's a fair question to ask of [CDS]." [ 54 ]
Adding to the controversy are potential conflicts of interest among the condition's proponents, including the funding of prominent CDS researchers' work by the global pharmaceutical company Eli Lilly. [ 54 ] When referring to the "increasing clinical referrals occurring now and more rapidly in the near future driven by increased awareness of the general public in [CDS]", Dr. Barkley writes: "The fact that [CDS] is not recognized as yet in any official taxonomy of psychiatric disorders will not alter this circumstance given the growing presence of information on [CDS] at various widely visited internet sites such as YouTube and Wikipedia , among others." [ 69 ]
|
https://en.wikipedia.org/wiki/Cognitive_disengagement_syndrome
|
Cognitive neuropsychiatry is a growing multidisciplinary field arising out of cognitive psychology and neuropsychiatry that aims to understand mental illness and psychopathology in terms of models of normal psychological function. A concern with the neural substrates of impaired cognitive mechanisms links cognitive neuropsychiatry to basic neuroscience. Alternatively, CNP provides a way of uncovering normal psychological processes by studying the effects of their change or impairment.
The term "cognitive neuropsychiatry" was coined by Prof Hadyn Ellis ( Cardiff University ) in a paper "The cognitive neuropsychiatric origins of the Capgras delusion", presented at the International Symposium on the Neuropsychology of Schizophrenia, Institute of Psychiatry, London (Coltheart, 2007).
Although clinically useful, current syndrome classifications (e.g. DSM-IV; ICD-10) have no empirical basis as models of normal cognitive processes. CNP moves beyond diagnosis and classification to offer a cognitive explanation for established psychiatric behaviours, regardless of whether the symptoms are due to recognised brain pathology or to dysfunction in brain areas or networks without structural lesions.
CNP has been influential, not least because of its early success in explaining some psychiatric delusions, most notably the Capgras delusion , Fregoli delusion and other delusional misidentification syndromes . The Capgras delusion is "explained as the interruption in the covert route to face recognition, namely affective responses to familiar stimuli, localized in the dorsal route of vision from striate cortex to limbic system . According to standard molecular hypotheses, acute delusions are the result of a dysregulated activity of some neuromodulators." [ 1 ]
Additionally, the study of cognitive neuropsychiatry intersects with the study of philosophy. This intersection revolves around a reconsideration of the mind-body relationship and the contemplation of moral issues that can arise in fields such as neuropsychopathology. For example, it has been debated whether Parkinson's patients should be held morally accountable for their physical actions. This discussion arose from the discovery that, under certain circumstances, Parkinson's patients can initiate and control their own movement. Examples such as this call for difficult judgements, i.e. "about who is mad and who is bad" (Stein 1999). Cognitive neuropsychiatry has also explored the difference between implicit and explicit cognition, especially in catatonic patients. For more information on the bridge between neuropsychiatry and philosophy, see, e.g., Stein, Dan (1999), Philosophy, Psychiatry, & Psychology.
|
https://en.wikipedia.org/wiki/Cognitive_neuropsychiatry
|
Cognitive reserve is the mind's and brain's resistance to brain damage. The mind's resilience is evaluated behaviorally, whereas the neuropathological damage is evaluated histologically, although damage may also be estimated using blood-based markers and imaging methods. There are two models that can be used when exploring the concept of "reserve": brain reserve and cognitive reserve . These terms, albeit often used interchangeably in the literature, provide a useful way of discussing the models. Using a computer analogy, brain reserve can be seen as hardware and cognitive reserve as software. Both factors are currently believed to contribute to global reserve. In the literature, "cognitive reserve" is commonly used to refer to both brain and cognitive reserve.
In 1988 a study published in Annals of Neurology , reporting findings from post-mortem examinations of 137 elderly persons, unexpectedly revealed a discrepancy between the degree of Alzheimer's disease neuropathology and the clinical manifestations of the disease: [ 1 ] some participants whose brains had extensive Alzheimer's disease pathology had no or very few clinical manifestations of the disease. Furthermore, the study showed that these persons had higher brain weights and a greater number of neurons compared to age-matched controls. The investigators offered two possible explanations for this phenomenon: these people may have had incipient Alzheimer's disease but somehow avoided the loss of large numbers of neurons , or, alternatively, started with larger brains and more neurons and thus might be said to have had a greater "reserve". This was the first time the term was used in the literature in this context.
The study sparked interest in this area, and further studies were done to confirm these initial findings. Higher reserve was found to provide a greater threshold before clinical deficit appears. [ 2 ] [ 3 ] [ 4 ] Furthermore, those with higher capacity showed more rapid decline once they became clinically impaired, probably indicating a failure of all the compensatory systems and strategies that the individual with greater reserve had put in place to cope with the increasing neuropathological damage. [ 5 ]
Brain reserve may be defined as the brain's resilience, its ability to cope with increasing damage while still functioning adequately. This passive, threshold model presumes the existence of a fixed cut-off which, once reached, would inevitably lead to clinical manifestations of dementia.
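A toy simulation of this passive threshold model is sketched below; every constant is an illustrative assumption rather than a value fitted to data. It reproduces the two qualitative findings above: higher reserve delays symptom onset, and decline is steeper once the threshold is finally crossed.

```python
# Toy simulation of the passive threshold model of brain reserve.
# All constants are illustrative assumptions, not fitted to data.

def simulate(reserve_threshold: float, years: int = 30):
    """Return yearly cognitive performance for one individual.
    Pathology accrues linearly; performance is intact until accumulated
    pathology crosses the fixed reserve threshold, then declines, and
    declines faster for those whose higher threshold let more pathology
    accumulate silently."""
    performance, history = 100.0, []
    for year in range(years):
        pathology = 2.0 * year  # same accrual for everyone
        if pathology > reserve_threshold:
            # Post-threshold drop, scaled by how much pathology the
            # threshold had been masking (assumed functional form).
            performance -= 0.05 * reserve_threshold * (pathology - reserve_threshold)
        history.append(max(performance, 0.0))
    return history

low, high = simulate(reserve_threshold=10), simulate(reserve_threshold=30)
onset = lambda h: next(i for i, p in enumerate(h) if p < 100.0)
print(f"symptom onset: low reserve year {onset(low)}, high reserve year {onset(high)}")
```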
A 1997 study found that Alzheimer's disease pathology in large brains did not necessarily result in clinical dementia . [ 6 ] Another study reported head circumference to be independently associated with a reduced risk of clinical Alzheimer's disease. [ 7 ]
While some studies, like those mentioned, find an association, others do not. This is thought to be because head circumference and other approximations are indirect measures.
The amount of synapse loss is greater in early onset dementia than in late onset dementia. [ 8 ] This might indicate a vulnerability to the manifestation of clinical cognitive impairment, although there may be other explanations.
Structures like the cerebellum contribute to brain reserve. [ 9 ] The cerebellum contains the majority of neurons in the brain and participates in both cognitive and motor operations. [ 10 ] Cerebellar circuitry is a site of multiple forms of neuronal plasticity, a factor playing a major role in terms of brain reserve. [ 11 ]
Evidence from a twin study indicates a genetic contribution to cognitive functions. [ 12 ] Heritability estimates have been found to be high for general cognitive functions but low for memory itself. [ 13 ] After adjusting for the effects of education, 79% of the variance in executive function can be explained by genetic contribution. [ 14 ] A study combining twin and adoption studies found all cognitive functions to be heritable; speed of processing had the highest heritability in this particular study. [ 15 ]
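Twin designs of this kind typically estimate heritability from the difference in trait correlations between monozygotic (MZ) and dizygotic (DZ) pairs. The classic Falconer estimator is one simple version of this logic; the cited studies may instead fit full ACE structural models, of which this is the simplest case:

```latex
% Falconer's estimators from twin correlations r_MZ and r_DZ.
\begin{align*}
h^2 &= 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}) && \text{(additive genetic variance, i.e. heritability)}\\
c^2 &= 2\,r_{\mathrm{DZ}} - r_{\mathrm{MZ}} && \text{(shared environment)}\\
e^2 &= 1 - r_{\mathrm{MZ}} && \text{(nonshared environment)}
\end{align*}
```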
Cognitive reserve also indicates a resilience to neuropathological damage, but the emphasis here is in the way the brain uses its damaged resources. It could be defined as the ability to optimize or maximize performance through differential recruitment of brain networks and/or alternative cognitive strategies . This is an efficiency model, rather than a threshold model, and it implies that the task is processed using less resources or using neural resources more efficiently, resulting in better cognitive performance. Studies use factors like education, occupation, and lifestyle as proxies for cognitive reserve because they tend to positively correlate with higher cognitive reserve.
More education and cognitively complex occupations are among the factors that predict higher cognitive abilities in old age. [ 16 ] Therefore, the two most commonly used proxies to study cognitive reserve are education and occupation. Education is known to play a role in cognitive decline in normal aging, as well as in degenerative diseases and traumatic brain injuries. [ 17 ] A higher prevalence of dementia in individuals with fewer years of education has suggested that education may protect against Alzheimer's disease. [ 18 ] Moreover, the level of education has a strong impact on an adult's lifestyle. Level of education is measured by the number of years an individual spends in school or, alternatively, by the degree of literacy. [ 17 ] Possibly, the level of education itself provides a set of cognitive tools that allow the individual to compensate for the pathological changes. [ 18 ] The Cognitive Reserve Index questionnaire (CRIq), devised to assess the level of cognitive reserve in order to provide better diagnosis and treatment, takes into account years of education and training courses lasting at least six months to assess the education load on cognitive reserve. [ 17 ] Clinically, education is negatively correlated with dementia severity [ 19 ] but positively correlated with grey matter atrophy, intracranial volume, and overall global cognition. [ 20 ] [ 21 ] Neurologically, education is correlated with greater functional connectivity between fronto-parietal regions [ 22 ] and greater cortical thickness in the left inferior temporal gyrus . [ 23 ] In addition to the level of education, it has been shown that bilingualism enhances attention and cognitive control in both children and older adults and delays the onset of dementia; it allows the brain to better tolerate the underlying pathologies and can be considered a protective factor contributing positively to cognitive reserve. [ 24 ]

Another proxy for cognitive reserve is occupation. Studies suggest that occupation may provide an additive and independent source of cognitive reserve throughout a person's life. The last or the longest-held job is usually taken into account. Occupations may vary in terms of the cognitive load involved, and other common indices, such as prestige or salary, can also be considered. Working activity measured by the CRIq assesses adulthood professions; five different levels of working activity are distinguished, differing in the degree of intellectual involvement and personal responsibility, and working activity is recorded as the number of years in each profession over the lifespan. [ 17 ] Occupation as a proxy for cognitive reserve is positively correlated with local efficiency and functional connectivity in the right medial temporal lobe. [ 23 ] More cognitively stimulating occupations are weakly associated with greater memory but more strongly correlated with greater executive functioning. [ 21 ] These two proxies are typically measured together and are typically highly correlated with each other. [ 21 ] A genetic study using Mendelian randomization analysis demonstrated that high occupation levels were associated with a reduced risk for Alzheimer's disease. In addition, this study confirmed that occupational attainment had an independent effect on the risk for Alzheimer's disease even after taking educational attainment into account. [ 25 ]
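The sketch below shows how a CRIq-style index might combine the education and occupation proxies just described. The level scores, weights, and normalization are invented for illustration; the published CRIq uses age-standardized scoring that differs from this.

```python
# Simplified CRIq-style index combining education and occupation proxies.
# Weights, level scores, and the rescaling are invented for illustration;
# the published CRIq uses age-standardized scoring that differs from this.

OCCUPATION_LEVELS = {  # five levels, by cognitive load and responsibility
    "low_skill_manual": 1, "skilled_manual": 2, "skilled_nonmanual": 3,
    "professional": 4, "highly_intellectual": 5,
}

def education_points(years_school: int, months_training: int) -> float:
    # Training courses count only if they lasted at least six months.
    return years_school + (months_training // 6) * 0.5

def occupation_points(jobs: list) -> float:
    # jobs: list of (level_name, years_worked); weight years by level.
    return sum(OCCUPATION_LEVELS[level] * years for level, years in jobs)

def cri_like_index(years_school, months_training, jobs) -> float:
    edu = education_points(years_school, months_training)
    occ = occupation_points(jobs)
    # Arbitrary rescaling so a typical profile lands near 100.
    return round(50 + 2.0 * edu + 0.5 * occ, 1)

print(cri_like_index(16, 12, [("professional", 20), ("skilled_nonmanual", 5)]))
```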
Intellectual quotients derived from psychometric testing have been identified as valuable proxy measures of cognitive reserve, with higher scores relative to the mean being associated with slower rates of cognitive decline. [ 26 ] However, the rate of decline in some cognitive subdomains, such as processing speed, may be less affected by premorbid IQ. [ 27 ] The degree of association between IQ and cognitive reserve may vary between different types of dementia. [ 28 ]
For any given level of clinical impairment, there is a higher degree of neuropathological change in the brains of those Alzheimer's disease sufferers who are involved in a greater number of activities. This is true even when education and IQ are controlled for. It suggests that differences in lifestyle may increase cognitive reserve by making the individual more resilient. [ 29 ] In other words, everyday experience affecting cognition is analogous to physical exercise influencing musculoskeletal and cardiovascular functions. [ 30 ] Using cerebral blood flow (CBF) as an indirect measure of neuropathological damage, with lower CBF indicating more damage, it was found that at a given level of clinical impairment, leisure activity score was negatively correlated with CBF. [ 30 ] In other words, individuals with a greater activity score were able to withstand more brain damage and therefore can be said to have more reserve. In 1997, Mortimer et al. performed cognitive testing on a population of 678 nuns, showing that different levels of cognitive activity and performance were possible in patients diagnosed with Alzheimer's. One subject showing reduced neocortical plaques survived with mild deficits, despite (or due to) low brain weight.
More recent studies distinguish four modifiable lifestyle factors which influence cognitive health in later life and offer potential to reduce the risk of cognitive decline and dementia. [ 31 ] Between 2011 and 2013 the Cognitive Function and Aging Study Wales (CFAS-Wales) collected data from a cohort of 2,315 cognitively healthy participants aged 65 years and over, not only confirming the influence of these lifestyle factors but also detecting a mediating effect of cognitive reserve on the cross-sectional association between lifestyle factors and cognitive function in later life.
Cognitive and social activity: People with high levels of leisure activity of an intellectual (reading magazines, newspapers or books, playing cards, games or bingo, going to classes, etc.), social (visiting or being visited by friends or relatives, etc.), or engaging (helping others with daily tasks, paid work and volunteer work) nature have a significantly smaller risk of developing dementia. [ 30 ]
Physical activity: Has a strong impact on the risk of developing cognitive decline or dementia. [ 31 ]
Healthy diet: Research on healthy diets emphasizes the benefits of adhering to a Mediterranean-style diet as protective of cognitive health. [ 31 ]
Alcohol consumption: Studies suggest that light-to-moderate alcohol intake (once or twice a week, or three or four times a week) is associated with lower risk, whereas frequent drinking in earlier life has been identified as a risk factor for cognitive decline in later life. [ 31 ]
Due to the variety of the four lifestyle factors, many different self-report scales are used to specify the severity of each proxy.
Parkinson's disease is an example of a condition associated with the role of cognitive reserve in cognitive impairment. Previous investigation into Parkinson's disease has implicated a possible influence of cognitive reserve in the human brain.
According to some studies, [ 32 ] the so-called cognitive lifestyle is seen as a general protective factor that can be mediated through several different mechanisms.
A 2015 study [ 33 ] examined the effects of (cognitive) lifestyle on cross-sectional and longitudinal measures. 525 participants with Parkinson's disease completed baseline assessments of cognition and provided clinical, social and demographic data; after four years, 323 participated in a follow-up cognitive assessment. Using measures of global cognition and dementia severity, the study showed that, alongside educational level and socioeconomic status, a higher level of recent social engagement was associated with a decreased risk of dementia. Conversely, increasing age and low levels of social engagement may increase the risk of dementia in Parkinson's disease.
In spite of the differences in approach between the models of brain reserve and cognitive reserve, there is evidence that both might be interdependent and related. This is where the computer analogy ends, as with the brain it seems that hardware can be changed by software.
Exposure to an enriched environment , defined as a combination of more opportunities for physical activity, learning and social interaction, may produce structural and functional changes in the brain and influence the rate of neurogenesis in adult and senescent animal model hippocampi. [ 34 ] Many of these changes can be effected merely by introducing a physical exercise regimen rather than requiring cognitive activity per se. [ 35 ]
In humans, the posterior hippocampi of licensed London taxi drivers were famously found to be larger than those of matched controls, while the anterior hippocampi were smaller. [ 36 ] This study shows that people choosing taxi driving as a career (one whose barrier to entry, the ability to memorize London's streets, has been described as "the world's most demanding test of street knowledge") have larger posterior hippocampi, but it does not demonstrate change in volume as a result of driving. Similarly, while acquiring a second language requires extensive and sustained cognitive activity, it does not appear to reduce dementia risk compared to those who have not learned another language, [ 37 ] although lifelong bilingualism is associated with delayed onset of Alzheimer's disease. [ 38 ]
The clinical diagnosis of dementia is not perfectly linked to levels of underlying neuropathology : the severity of the pathology and the degree of cognitive deficit do not have a direct relationship. The theory of cognitive reserve explains this phenomenon. Katzman et al. (1988) studied the autopsy results of 10 people and found pathology consistent with Alzheimer's disease, [ 1 ] yet the same patients had shown no symptoms of Alzheimer's disease during their lifetimes. So, when pathology emerges in the brain, cognitive reserve helps the individual cope with cognitive decline. Thus, individuals with high cognitive reserve cope better than those with low cognitive reserve even if they have the same pathology. [ 39 ] This can cause people with high cognitive reserve to go undiagnosed until the damage becomes severe.
Cognitive reserve, which can be estimated clinically, is affected by many variables. The Cognitive Reserve Index questionnaire (CRIq) measures cognitive reserve from three main sources: education, work activity, and leisure activity throughout the individual's lifespan. [ 40 ]
It is important to note that cognitive reserve (and the variables associated with it) does not "protect" from Alzheimer's disease as a disease process; the definition of cognitive reserve is based precisely on the presence of disease pathology. This means that the traditional idea that education protects from Alzheimer's disease is false, although cognitive reserve is protective against the clinical manifestations of the disease. [ 34 ] As of 2010, there was insufficient evidence to recommend any way of increasing cognitive reserve to prevent dementia or Alzheimer's. [ 35 ] On the other hand, cognitive reserve has a very important impact on neurodegenerative diseases: patients with high cognitive reserve show a delay in cognitive decline when compared to patients with low cognitive reserve. However, once cognitive decline becomes symptomatic, patients with high cognitive reserve show rapid cognitive decline. [ 41 ]
The presence of cognitive reserve implies that people with greater reserve who are already suffering neuropathological changes in the brain will not be picked up by standard clinical cognitive testing. Conversely, anyone who has used these instruments clinically knows that they can yield false positives in people with very low reserve. From this point of view, the concept of an "adequate level of challenge" easily emerges. Conceivably, one could measure cognitive reserve and then offer specifically tailored tests that pose enough of a challenge to accurately detect early cognitive impairment in individuals with both high and low reserve. This has implications for treatment and care.
In people with high reserve, deterioration occurs rapidly once the threshold is reached. [ 36 ] For these individuals and their carers, early diagnosis might provide an opportunity to plan future care and to adjust to the diagnosis while they are still able to make decisions. A cognitive rehabilitation study conducted with dementia patients showed that patients with low cognitive reserve had better outcomes from cognitive training rehabilitation than those with high cognitive reserve. This is because the patients with high cognitive reserve had delayed cognitive symptoms, presenting only once the brain could no longer resist the pathology. Furthermore, the improvement seen in the patients with low cognitive reserve indicates that these patients can build their cognitive reserve as a life-long process. [ 42 ]
|
https://en.wikipedia.org/wiki/Cognitive_reserve
|
Coital incontinence ( CI ) is urinary leakage that occurs during either penetration or orgasm and can occur with a sexual partner or with masturbation . It has been reported to occur in 10% to 27% of sexually active women with urinary continence problems. [ 1 ] There is evidence to suggest links between urinary leakage at penetration and urodynamic stress incontinence, and between urinary leakage at orgasm and detrusor overactivity. [ 2 ]
Coital incontinence is physiologically distinct from female ejaculation , with which it is sometimes confused. [ 2 ] [ 3 ]
This sexuality -related article is a stub . You can help Wikipedia by expanding it .
This medical article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/Coital_incontinence
|