Chiropractic
Chiropractic is a form of alternative medicine concerned with the diagnosis, treatment and prevention of mechanical disorders of the musculoskeletal system, especially of the spine. It is based on several pseudoscientific ideas.
Many chiropractors (often known informally as chiros), especially those in the field's early history, have proposed that mechanical disorders of the joints, especially of the spine, affect general health, and that regular manipulation of the spine (spinal adjustment) improves general health. The main chiropractic treatment technique involves manual therapy, especially manipulation of the spine, other joints, and soft tissues, but may also include exercises and health and lifestyle counseling. A chiropractor may have a Doctor of Chiropractic (D.C.) degree and be referred to as "doctor" but is not a Doctor of Medicine (M.D.) or a Doctor of Osteopathic Medicine (D.O.). While many chiropractors view themselves as primary care providers, chiropractic clinical training does not meet the requirements for that designation.
Systematic reviews of controlled clinical studies of treatments used by chiropractors have found no evidence that chiropractic manipulation is effective, with the possible exception of treatment for back pain. A 2011 critical evaluation of 45 systematic reviews concluded that the data included in the study "fail[ed] to demonstrate convincingly that spinal manipulation is an effective intervention for any condition." Spinal manipulation may be cost-effective for sub-acute or chronic low back pain, but the results for acute low back pain were insufficient. No compelling evidence exists to indicate that maintenance chiropractic care adequately prevents symptoms or diseases.
There is insufficient data to establish the safety of chiropractic manipulation, which is frequently associated with mild to moderate adverse effects and, in rare cases, with serious or fatal complications. There is controversy regarding the degree of risk of vertebral artery dissection, which can lead to stroke and death, from cervical manipulation. Several deaths have been associated with this technique, and it has been suggested that the relationship is causative, a claim which is disputed by many chiropractors.
Chiropractic is well established in the United States, Canada, and Australia. It overlaps with other manual-therapy professions such as osteopathy and physical therapy. Most who seek chiropractic care do so for low back pain. Back and neck pain are considered the specialties of chiropractic, but many chiropractors treat ailments other than musculoskeletal issues. Chiropractic has two main groups: "straights", now the minority, emphasize vitalism, "Innate Intelligence", and consider vertebral subluxations to be the cause of all disease; and "mixers", the majority, are more open to mainstream views and conventional medical techniques, such as exercise, massage, and ice therapy.
D. D. Palmer founded chiropractic in the 1890s, claiming that he had received it from "the other world". Palmer maintained that the tenets of chiropractic were passed along to him by a doctor who had died 50 years previously. His son B. J. Palmer helped to expand chiropractic in the early 20th century. Throughout its history, chiropractic has been controversial. Its foundation is at odds with evidence-based medicine, and is underpinned by pseudoscientific ideas such as vertebral subluxation and Innate Intelligence. Despite the overwhelming evidence that vaccination is an effective public health intervention, there are significant disagreements among chiropractors over the subject, which has led to negative impacts on both public vaccination and mainstream acceptance of chiropractic. The American Medical Association called chiropractic an "unscientific cult" in 1966 and boycotted it until losing an antitrust case in 1987. Chiropractic has had a strong political base and sustained demand for services. In the last decades of the twentieth century, it gained more legitimacy and greater acceptance among conventional physicians and health plans in the United States. During the COVID-19 pandemic, chiropractic professional associations advised chiropractors to adhere to CDC, WHO, and local health department guidance. Despite these recommendations, a small but vocal and influential number of chiropractors spread vaccine misinformation.
Conceptual basis
Philosophy
Chiropractic is generally categorized as complementary and alternative medicine (CAM), which focuses on manipulation of the musculoskeletal system, especially the spine. Its founder, D.D. Palmer, called it "a science of healing without drugs".
Chiropractic's origins lie in the folk medicine of bonesetting, and as it evolved it incorporated vitalism, spiritual inspiration and rationalism. Its early philosophy was based on deduction from irrefutable doctrine, which helped distinguish chiropractic from medicine, provided it with legal and political defenses against claims of practicing medicine without a license, and allowed chiropractors to establish themselves as an autonomous profession. This "straight" philosophy, taught to generations of chiropractors, rejects the inferential reasoning of the scientific method, and relies on deductions from vitalistic first principles rather than on the materialism of science. However, most practitioners tend to incorporate scientific research into chiropractic, and most practitioners are "mixers" who attempt to combine the materialistic reductionism of science with the metaphysics of their predecessors and with the holistic paradigm of wellness. A 2008 commentary proposed that chiropractic actively divorce itself from the straight philosophy as part of a campaign to eliminate untestable dogma and engage in critical thinking and evidence-based research.
Although a wide diversity of ideas exist among chiropractors, they share the belief that the spine and health are related in a fundamental way, and that this relationship is mediated through the nervous system. Some chiropractors claim spinal manipulation can have an effect on a variety of ailments such as irritable bowel syndrome and asthma.
Chiropractic philosophy includes the following perspectives:
Holism assumes that health is affected by everything in an individual's environment; some sources also include a spiritual or existential dimension.
Reductionism in chiropractic, by contrast, reduces causes and cures of health problems to a single factor, vertebral subluxation.
Homeostasis emphasizes the body's inherent self-healing abilities. Chiropractic's early notion of innate intelligence can be thought of as a metaphor for homeostasis.
A large number of chiropractors fear that if they do not separate themselves from the traditional vitalistic concept of innate intelligence, chiropractic will continue to be seen as a fringe profession. A variant of chiropractic called naprapathy originated in Chicago in the early twentieth century. It holds that manual manipulation of soft tissue can reduce "interference" in the body and thus improve health.
Straights and mixers
Straight chiropractors adhere to the philosophical principles set forth by D.D. and B.J. Palmer, and retain metaphysical definitions and vitalistic qualities. Straight chiropractors believe that vertebral subluxation leads to interference with an "innate intelligence" exerted via the human nervous system and is a primary underlying risk factor for many diseases. Straights view the medical diagnosis of patient complaints, which they consider to be the "secondary effects" of subluxations, to be unnecessary for chiropractic treatment. Thus, straight chiropractors are concerned primarily with the detection and correction of vertebral subluxation via adjustment and do not "mix" other types of therapies into their practice style. Their philosophy and explanations are metaphysical in nature and they prefer to use traditional chiropractic terminology such as "perform spinal analysis", "detect subluxation", "correct with adjustment". They prefer to remain separate and distinct from mainstream health care. Although considered the minority group, "they have been able to transform their status as purists and heirs of the lineage into influence dramatically out of proportion to their numbers."
Mixer chiropractors "mix" diagnostic and treatment approaches from chiropractic, medical or osteopathic viewpoints and make up the majority of chiropractors. Unlike straight chiropractors, mixers believe subluxation is one of many causes of disease, and hence they tend to be open to mainstream medicine. Many of them incorporate mainstream medical diagnostics and employ conventional treatments including techniques of physical therapy such as exercise, stretching, massage, ice packs, electrical muscle stimulation, therapeutic ultrasound, and moist heat. Some mixers also use techniques from alternative medicine, including nutritional supplements, acupuncture, homeopathy, herbal remedies, and biofeedback.
Although mixers are the majority group, many of them retain belief in vertebral subluxation as shown in a 2003 survey of 1,100 North American chiropractors, which found that 88 percent wanted to retain the term "vertebral subluxation complex", and that when asked to estimate the percent of disorders of internal organs that subluxation significantly contributes to, the mean response was 62 percent. A 2008 survey of 6,000 American chiropractors demonstrated that most chiropractors seem to believe that a subluxation-based clinical approach may be of limited utility for addressing visceral disorders, and greatly favored non-subluxation-based clinical approaches for such conditions. The same survey showed that most chiropractors generally believed that the majority of their clinical approach for addressing musculoskeletal/biomechanical disorders such as back pain was based on subluxation. Chiropractors often offer conventional therapies such as physical therapy and lifestyle counseling, and it may be difficult for the lay person to distinguish the unscientific from the scientific.
Vertebral subluxation
In science-based medicine, the term "subluxation" refers to an incomplete or partial dislocation of a joint, from the Latin luxare, 'to dislocate'. While medical doctors use the term exclusively to refer to physical dislocations, chiropractic's founder D. D. Palmer imbued the word subluxation with a metaphysical and philosophical meaning drawn from pseudoscientific traditions such as vitalism.
Palmer claimed that vertebral subluxations interfered with the body's function and its inborn ability to heal itself. He later repudiated his earlier theory that vertebral subluxations caused pinched nerves in the intervertebral spaces, in favor of subluxations causing altered nerve vibration, either too tense or too slack, affecting the tone (health) of the end organ. He qualified this by noting that knowledge of innate intelligence was not essential to the competent practice of chiropractic. This concept was later expanded upon by his son, B. J. Palmer, and was instrumental in providing the legal basis for differentiating chiropractic from conventional medicine. In 1910, D. D. Palmer theorized that the nervous system controlled health.
Vertebral subluxation, a core concept of traditional chiropractic, remains unsubstantiated and largely untested, and a debate about whether to keep it in the chiropractic paradigm has been ongoing for decades. In general, critics of traditional subluxation-based chiropractic (including chiropractors) are skeptical of its clinical value, dogmatic beliefs and metaphysical approach. While straight chiropractic still retains the traditional vitalistic construct espoused by the founders, evidence-based chiropractic suggests that a mechanistic view will allow chiropractic care to become integrated into the wider health care community. This remains a source of debate within the chiropractic profession, with some schools of chiropractic still teaching the traditional/straight subluxation-based chiropractic, while others have moved towards an evidence-based chiropractic that rejects metaphysical foundations and limits itself to primarily neuromusculoskeletal conditions.
In 2005, the chiropractic subluxation was defined by the World Health Organization as "a lesion or dysfunction in a joint or motion segment in which alignment, movement integrity and/or physiological function are altered, although contact between joint surfaces remains intact. It is essentially a functional entity, which may influence biomechanical and neural integrity." This differs from the medical definition of subluxation as a significant structural displacement, which can be seen with static imaging techniques such as X-rays. The use of X-ray imaging in the case of vertebral subluxation exposes patients to harmful ionizing radiation for no evidentially supported reason. The 2008 book Trick or Treatment states "X-rays can reveal neither the subluxations nor the innate intelligence associated with chiropractic philosophy, because they do not exist." Attorney David Chapman-Smith, Secretary-General of the World Federation of Chiropractic, has stated that "Medical critics have asked how there can be a subluxation if it cannot be seen on X-ray. The answer is that the chiropractic subluxation is essentially a functional entity, not structural, and is therefore no more visible on static X-ray than a limp or headache or any other functional problem." The General Chiropractic Council, the statutory regulatory body for chiropractors in the United Kingdom, states that the chiropractic vertebral subluxation complex "is not supported by any clinical research evidence that would allow claims to be made that it is the cause of disease."
As of 2014, the US National Board of Chiropractic Examiners states "The specific focus of chiropractic practice is known as the chiropractic subluxation or joint dysfunction. A subluxation is a health concern that manifests in the skeletal joints, and, through complex anatomical and physiological relationships, affects the nervous system and may lead to reduced function, disability or illness."
Pseudoscience versus spinal manipulation therapy
While some chiropractors limit their practice to short-term treatment of musculoskeletal conditions, many falsely claim to be able to treat a myriad of other conditions. Some dissuade patients from seeking medical care; others have pretended to be qualified to act as a family doctor.
Quackwatch, an alternative medicine watchdog, cautions against seeing chiropractors who:
Treat young children
Discourage immunization
Pretend to be a family doctor
Take full spine X-rays
Promote unproven dietary supplements
Are antagonistic to scientific medicine
Claim to treat non-musculoskeletal problems
Writing for the Skeptical Inquirer, one physician cautioned against seeing even those chiropractors who claim to treat only musculoskeletal conditions.
Scope of practice
Chiropractors emphasize the conservative management of the neuromusculoskeletal system without the use of medicines or surgery, with special emphasis on the spine. Back and neck pain are the specialties of chiropractic, but many chiropractors treat ailments other than musculoskeletal issues. Opinions differ within the profession: some chiropractors believe that treatment should be confined to the spine, or to back and neck pain, while others disagree. For example, a 2009 survey of American chiropractors found that 73% classified themselves as "back pain/musculoskeletal specialists", whereas in a 2005 international survey 47% of respondents regarded the label "back and neck pain specialists" as the least desirable description. Chiropractic combines aspects from mainstream and alternative medicine, and there is no agreement about how to define the profession: although chiropractors have many attributes of primary care providers, chiropractic has more attributes of a medical specialty like dentistry or podiatry. It has been proposed that chiropractors specialize in nonsurgical spine care instead of attempting to also treat other problems, but the more expansive view of chiropractic is still widespread.
Mainstream health care and governmental organizations such as the World Health Organization consider chiropractic to be complementary and alternative medicine (CAM); a 2008 study reported that 31% of surveyed chiropractors categorized chiropractic as CAM, 27% as integrated medicine, and 12% as mainstream medicine. Many chiropractors, including US and UK chiropractors, believe they are primary care providers, but the length, breadth, and depth of chiropractic clinical training do not support the requirements to be considered primary care providers, so their role in primary care is limited and disputed.
Chiropractic overlaps with several other forms of manual therapy, including massage therapy, osteopathy, physical therapy, and sports medicine. Chiropractic is autonomous from and competitive with mainstream medicine, and osteopathy outside the US remains primarily a manual medical system; physical therapists work alongside and cooperate with mainstream medicine, and osteopathic medicine in the U.S. has merged with the medical profession. Practitioners may distinguish these competing approaches through claims that, compared to other therapists, chiropractors heavily emphasize spinal manipulation, tend to use firmer manipulative techniques, and promote maintenance care; that osteopaths use a wider variety of treatment procedures; and that physical therapists emphasize machinery and exercise.
Chiropractic diagnosis may involve a range of methods including skeletal imaging, observational and tactile assessments, and orthopedic and neurological evaluation. A chiropractor may also refer a patient to an appropriate specialist, or co-manage with another health care provider. Common patient management involves spinal manipulation (SM) and other manual therapies to the joints and soft tissues, rehabilitative exercises, health promotion, electrical modalities, complementary procedures, and lifestyle advice.
Chiropractors are not normally licensed to write medical prescriptions or perform major surgery in the United States (although New Mexico has become the first US state to allow "advanced practice" trained chiropractors to prescribe certain medications). In the US, their scope of practice varies by state, based on inconsistent views of chiropractic care: some states, such as Iowa, broadly allow treatment of "human ailments"; some, such as Delaware, use vague concepts such as "transition of nerve energy" to define scope of practice; others, such as New Jersey, specify a severely narrowed scope. US states also differ over whether chiropractors may conduct laboratory tests or diagnostic procedures, dispense dietary supplements, or use other therapies such as homeopathy and acupuncture; in Oregon they can become certified to perform minor surgery and to deliver children via natural childbirth. A 2003 survey of North American chiropractors found that a slight majority favored allowing them to write prescriptions for over-the-counter drugs. A 2010 survey found that 72% of Swiss chiropractors considered their ability to prescribe nonprescription medication as an advantage for chiropractic treatment.
A related field, veterinary chiropractic, applies manual therapies to animals and is recognized in many US states, but is not recognized by the American Chiropractic Association as being chiropractic. It remains controversial within certain segments of the veterinary and chiropractic professions.
No single profession "owns" spinal manipulation and there is little consensus as to which profession should administer SM, raising concerns among chiropractors that medical physicians could "steal" SM procedures from them. A focus on evidence-based SM research has also raised concerns that the resulting practice guidelines could limit the scope of chiropractic practice to treating backs and necks. Two US states (Washington and Arkansas) prohibit physical therapists from performing SM, some states allow them to do it only if they have completed advanced training in SM, and some states allow only chiropractors to perform SM, or only chiropractors and physicians. Bills to further prohibit non-chiropractors from performing SM are regularly introduced into state legislatures and are opposed by physical therapist organizations.
Treatments
Spinal manipulation, which chiropractors call "spinal adjustment" or "chiropractic adjustment", is the most common treatment used in chiropractic care. Spinal manipulation is a passive manual maneuver during which a three-joint complex is taken past the normal range of movement, but not so far as to dislocate or damage the joint. Its defining factor is a dynamic thrust, a sudden force that causes an audible release and attempts to increase a joint's range of motion. High-velocity, low-amplitude spinal manipulation (HVLA-SM) thrusts produce physiological effects, signaling neural discharge from paraspinal muscle tissues; the duration and amplitude of the thrust determine the degree of paraspinal muscle spindle activation. Clinical skill in employing HVLA-SM thrusts depends on the ability of the practitioner to control the duration and magnitude of the load. More generally, spinal manipulative therapy (SMT) describes techniques where the hands are used to manipulate, massage, mobilize, adjust, stimulate, apply traction to, or otherwise influence the spine and related tissues.
There are several schools of chiropractic adjustive techniques, although most chiropractors mix techniques from several schools. The following adjustive procedures were received by more than 10% of patients of licensed US chiropractors in a 2003 survey: Diversified technique (full-spine manipulation, employing various techniques), extremity adjusting, Activator technique (which uses a spring-loaded tool to deliver precise adjustments to the spine), Thompson Technique (which relies on a drop table and detailed procedural protocols), Gonstead (which emphasizes evaluating the spine along with specific adjustment that avoids rotational vectors), Cox/flexion-distraction (a gentle, low-force adjusting procedure which mixes chiropractic with osteopathic principles and utilizes specialized adjusting tables with movable parts), adjustive instrument, Sacro-Occipital Technique (which models the spine as a torsion bar), Nimmo Receptor-Tonus Technique, applied kinesiology (which emphasises "muscle testing" as a diagnostic tool), and cranial. Chiropractic biophysics technique uses inverse functions of rotations during spinal manipulation. Practitioners of the Koren Specific Technique (KST) may use their hands, or an electric device known as an "ArthroStim", for assessment and spinal manipulation. Insurers in the US and UK that cover other chiropractic techniques exclude KST from coverage because they consider it to be "experimental and investigational". Medicine-assisted manipulation, such as manipulation under anesthesia, involves sedation or local anesthetic and is done by a team that includes an anesthesiologist; a 2008 systematic review did not find enough evidence to make recommendations about its use for chronic low back pain.
Many other procedures are used by chiropractors for treating the spine, other joints and tissues, and general health issues. The following procedures were received by more than one-third of patients of licensed US chiropractors in a 2003 survey: Diversified technique (full-spine manipulation; mentioned in previous paragraph), physical fitness/exercise promotion, corrective or therapeutic exercise, ergonomic/postural advice, self-care strategies, activities of daily living, changing risky/unhealthy behaviors, nutritional/dietary recommendations, relaxation/stress reduction recommendations, ice pack/cryotherapy, extremity adjusting (also mentioned in previous paragraph), trigger point therapy, and disease prevention/early screening advice.
A 2010 study describing Belgian chiropractors and their patients found that chiropractors in Belgium mostly focus on neuromusculoskeletal complaints in adult patients, with emphasis on the spine. The Diversified technique was the most often applied technique (93%), followed by the Activator mechanical-assisted technique (41%). A 2009 study of chiropractic students giving or receiving spinal manipulations at a United States chiropractic college found that Diversified, Gonstead, and upper cervical manipulations were frequently used methods.
Practice guidelines
Reviews of research studies within the chiropractic community have been used to generate practice guidelines outlining standards that specify which chiropractic treatments are legitimate (i.e. supported by evidence) and conceivably reimbursable under managed care health payment systems. Evidence-based guidelines are supported by one end of an ideological continuum among chiropractors; the other end employs antiscientific reasoning and makes unsubstantiated claims. Chiropractic remains at a crossroads: in order to progress it would need to embrace science, and the promotion by some of chiropractic as a cure-all has been called both "misguided and irrational". A 2007 survey of Alberta chiropractors found that they do not consistently apply research in practice, which may have resulted from a lack of research education and skills. Specific guidelines concerning the treatment of nonspecific (i.e., unknown cause) low back pain are inconsistent between countries.
Effectiveness
Numerous controlled clinical studies of treatments used by chiropractors have been conducted, with varied results. There is no conclusive evidence that chiropractic manipulative treatment is effective for the treatment of any medical condition, except perhaps for certain kinds of back pain.
Generally, the research carried out into the effectiveness of chiropractic has been of poor quality. Research published by chiropractors is distinctly biased: reviews of SM for back pain tended to find positive conclusions when authored by chiropractors, while reviews by mainstream authors did not.
There is a wide range of ways to measure treatment outcomes. Chiropractic care benefits from the placebo response, but it is difficult to construct a trustworthy placebo for clinical trials of spinal manipulative therapy (SMT). The efficacy of maintenance care in chiropractic is unknown.
Available evidence covers the following conditions:
Low back pain. A 2013 Cochrane review found very low to moderate evidence that SMT was no more effective than inert interventions, sham SMT or as an adjunct therapy for acute low back pain. The same review found that SMT appears to be no better than other recommended therapies. A 2012 overview of systematic reviews found that, collectively, the evidence failed to show that SM is an effective intervention for pain. A 2011 Cochrane review found strong evidence that suggests there is no clinically meaningful difference between SMT and other treatments for reducing pain and improving function for chronic low back pain. A 2010 Cochrane review found no difference between the effects of combined chiropractic treatments and other treatments for chronic or mixed duration low back pain. A 2010 systematic review found that most studies suggest SMT achieves equivalent or superior improvement in pain and function when compared with other commonly used interventions for short, intermediate, and long-term follow-up.
Radiculopathy. A 2013 systematic review and meta-analysis found a statistically significant improvement in overall recovery from sciatica following SM, when compared to usual care, and suggested that SM may be considered. There is moderate quality evidence to support the use of SM for the treatment of acute lumbar radiculopathy and acute lumbar disc herniation with associated radiculopathy. There is low or very low evidence supporting SM for chronic lumbar spine-related extremity symptoms and cervical spine-related extremity symptoms of any duration and no evidence exists for the treatment of thoracic radiculopathy.
Whiplash and other neck pain. There is no consensus on the effectiveness of manual therapies for neck pain. A 2013 systematic review found that the data suggests that there are minimal short- and long-term treatment differences when comparing manipulation or mobilization of the cervical spine to physical therapy or exercise for neck pain improvement. A 2013 systematic review found that although there is insufficient evidence that thoracic SM is more effective than other treatments, it is a suitable intervention to treat some patients with non-specific neck pain. A 2011 systematic review found that thoracic SM may offer short-term improvement for the treatment of acute or subacute mechanical neck pain; although the body of literature is still weak. A 2010 Cochrane review found low quality evidence that suggests cervical manipulation may offer better short-term pain relief than a control for neck pain, and moderate evidence that cervical manipulation and mobilization produced similar effects on pain, function and patient satisfaction. A 2010 systematic review found low level evidence that suggests chiropractic care improves cervical range of motion and pain in the management of whiplash.
Headache. There is conflicting evidence surrounding the use of chiropractic SMT for the treatment and prevention of migraine headaches. A 2006 review found no rigorous evidence supporting SM or other manual therapies for tension headache. A 2005 review found that the evidence was weak for effectiveness of chiropractic manipulation for tension headache, and that it was probably more effective for tension headache than for migraine.
Extremity conditions. A 2011 systematic review and meta-analysis concluded that the addition of manual mobilizations to an exercise program for the treatment of knee osteoarthritis resulted in better pain relief than a supervised exercise program alone and suggested that manual therapists consider adding manual mobilization to optimize supervised active exercise programs. There is silver level evidence that manual therapy is more effective than exercise for the treatment of hip osteoarthritis, however this evidence could be considered to be inconclusive. There is a small amount of research into the efficacy of chiropractic treatment for upper limbs, limited to low level evidence supporting chiropractic management of shoulder pain and limited or fair evidence supporting chiropractic management of leg conditions.
Other. A 2012 systematic review found insufficient low bias evidence to support the use of spinal manipulation as a therapy for the treatment of hypertension. A 2011 systematic review found moderate evidence to support the use of manual therapy for cervicogenic dizziness. There is very weak evidence for chiropractic care for adult scoliosis (curved or rotated spine) and no scientific data for idiopathic adolescent scoliosis. A 2007 systematic review found that few studies of chiropractic care for nonmusculoskeletal conditions are available, and they are typically not of high quality; it also found that the entire clinical encounter of chiropractic care (as opposed to just SM) provides benefit to patients with cervicogenic dizziness, and that the evidence from reviews is negative, or too weak to draw conclusions, for a wide variety of other nonmusculoskeletal conditions, including ADHD/learning disabilities, dizziness, high blood pressure, and vision conditions. Other reviews have found no evidence of significant benefit for asthma, baby colic, bedwetting, carpal tunnel syndrome, fibromyalgia, gastrointestinal disorders, kinetic imbalance due to suboccipital strain (KISS) in infants, menstrual cramps, insomnia, postmenopausal symptoms, or pelvic and back pain during pregnancy. As there is no evidence of effectiveness or safety for cervical manipulation for baby colic, it is not endorsed.
Safety
The World Health Organization found chiropractic care in general is safe when employed skillfully and appropriately. There is insufficient data to establish the safety of chiropractic manipulations. Manipulation is regarded as relatively safe but complications can arise, and it has known adverse effects, risks and contraindications. Absolute contraindications to spinal manipulative therapy are conditions that should not be manipulated; these contraindications include rheumatoid arthritis and conditions known to result in unstable joints. Relative contraindications are conditions where increased risk is acceptable in some situations and where low-force and soft-tissue techniques are treatments of choice; these contraindications include osteoporosis. Although most contraindications apply only to manipulation of the affected region, some neurological signs indicate referral to emergency medical services; these include sudden and severe headache or neck pain unlike that previously experienced. Indirect risks of chiropractic involve delayed or missed diagnoses through consulting a chiropractor.
Spinal manipulation is associated with frequent, mild and temporary adverse effects, including new or worsening pain or stiffness in the affected region. These have been estimated to occur in 33% to 61% of patients, frequently appear within an hour of treatment, and disappear within 24 to 48 hours; adverse reactions appear to be more common following manipulation than mobilization. The most frequently stated adverse effects are mild headache, soreness, and briefly elevated pain and fatigue. Chiropractic is correlated with a very high incidence of minor adverse effects. Rarely, spinal manipulation, particularly of the upper spine, can also result in complications that can lead to permanent disability or death; these can occur in adults and children. Estimates vary widely for the incidence of these complications, and the actual incidence is unknown, due to high levels of underreporting and to the difficulty of linking manipulation to adverse effects such as stroke, which is a particular concern. Adverse effects are poorly reported in recent studies investigating chiropractic manipulations, and a 2016 systematic review concluded that the level of reporting is unsuitable and unacceptable. Serious adverse events resulting from spinal manipulation therapy of the lumbopelvic region have been reported. Estimates for serious adverse events vary from 5 strokes per 100,000 manipulations to 1.46 serious adverse events per 10 million manipulations and 2.68 deaths per 10 million manipulations, though it was determined that there was inadequate data to be conclusive. Several case reports show temporal associations between interventions and potentially serious complications. The published medical literature contains reports of 26 deaths since 1934 following chiropractic manipulations, and many more seem to remain unpublished.
Vertebrobasilar artery stroke (VAS) is statistically associated with chiropractic services in persons under 45 years of age, but it is similarly associated with general practitioner services, suggesting that these associations are likely explained by preexisting conditions. Weak to moderately strong evidence supports causation (as opposed to statistical association) between cervical manipulative therapy (CMT) and VAS. There is insufficient evidence to support either a strong association or no association between cervical manipulation and stroke. While the biomechanical evidence is not sufficient to support the statement that CMT causes cervical artery dissection (CD), clinical reports suggest that mechanical forces play a part in a substantial number of CDs, and the majority of population-controlled studies found an association between CMT and VAS in young people. It is strongly recommended that practitioners consider the plausibility of CD as a symptom, and that people be informed of the association between CD and CMT before administering manipulation of the cervical spine. There is controversy regarding the degree of risk of stroke from cervical manipulation. Many chiropractors state that the association between chiropractic therapy and vertebral arterial dissection is not proven. However, it has been suggested that the causality between chiropractic cervical manipulation beyond the normal range of motion and vascular accidents is probable or definite. There is very low evidence supporting a small association between internal carotid artery dissection and chiropractic neck manipulation. The incidence of internal carotid artery dissection following cervical spine manipulation is unknown. The literature infrequently reports helpful data to better understand the association between cervical manipulative therapy, cervical artery dissection and stroke. The limited evidence is inconclusive as to whether chiropractic spinal manipulation therapy is a cause of intracranial hypotension. Cervical intradural disc herniation is very rare following spinal manipulation therapy.
Chiropractors sometimes employ diagnostic imaging techniques such as X-rays and CT scans that rely on ionizing radiation. Although there is no clear evidence to justify the practice, some chiropractors still X-ray a patient several times a year. Practice guidelines aim to reduce unnecessary radiation exposure, which increases cancer risk in proportion to the amount of radiation received. Research suggests that the radiology instruction given at chiropractic schools worldwide is largely evidence-based, although there seems to be a disparity between some schools and the available evidence regarding radiography for patients with acute low back pain without indications of serious disease, which may contribute to chiropractic overuse of radiography for low back pain.
Risk-benefit
A 2012 systematic review concluded that no accurate assessment of risk-benefit exists for cervical manipulation. A 2010 systematic review stated that there is no good evidence to assume that neck manipulation is an effective treatment for any medical condition, and suggested adopting a precautionary principle toward chiropractic intervention even if a causal link with vertebral artery dissection after neck manipulation were merely a remote possibility. The same review concluded that the risk of death from manipulations to the neck outweighs the benefits. Chiropractors have criticized this conclusion, claiming that the author did not evaluate the potential benefits of spinal manipulation. Edzard Ernst stated "This detail was not the subject of my review. I do, however, refer to such evaluations and should add that a report recently commissioned by the General Chiropractic Council did not support many of the outlandish claims made by many chiropractors across the world." A 1999 review of 177 previously reported cases published between 1925 and 1997 in which injuries were attributed to manipulation of the cervical spine (MCS) concluded that "The literature does not demonstrate that the benefits of MCS outweigh the risks." The professions associated with each injury were assessed: physical therapists (PTs) were involved in less than 2% of all cases, with no deaths caused by PTs, while chiropractors were involved in a little more than 60% of all cases, including 32 deaths.
A 2009 review evaluating maintenance chiropractic care found that spinal manipulation is associated with considerable harm and no compelling evidence exists to indicate that it adequately prevents symptoms or diseases, thus the risk-benefit is not evidently favorable.
Cost-effectiveness
A 2012 systematic review suggested that the use of spine manipulation in clinical practice is a cost-effective treatment when used alone or in combination with other treatment approaches. A 2011 systematic review found evidence supporting the cost-effectiveness of using spinal manipulation for the treatment of sub-acute or chronic low back pain; the results for acute low back pain were insufficient.
A 2006 systematic cost-effectiveness review found that the reported cost-effectiveness of spinal manipulation in the United Kingdom compared favorably with other treatments for back pain, but that reports were based on data from clinical trials without placebo controls and that the specific cost-effectiveness of the treatment (as opposed to non-specific effects) remains uncertain. A 2005 American systematic review of economic evaluations of conservative treatments for low back pain found that significant quality problems in available studies meant that definite conclusions could not be drawn about the most cost-effective intervention. The cost-effectiveness of maintenance chiropractic care is unknown.
An analysis of clinical and cost utilization data from the years 2003 to 2005 by an integrative medicine independent physician association (IPA) examined the use of chiropractic services. Based on 70,274 member-months over a 7-year period, it found decreases of 60% in in-hospital admissions, 59% in hospital days, 62% in outpatient surgeries and procedures, and 85% in pharmaceutical costs when compared with conventional medicine (a medical doctor as primary care provider) within the same health maintenance organization product, geography, and time frame.
Education, licensing, and regulation
Requirements vary between countries. In the U.S., chiropractors obtain a non-medical accredited diploma in the field of chiropractic. Chiropractic education in the U.S. has been criticized for failing to meet generally accepted standards of evidence-based medicine. The curriculum content of North American chiropractic and medical colleges with regard to basic and clinical sciences has little similarity, both in the kinds of subjects offered and in the time assigned to each subject. Accredited chiropractic programs in the U.S. require that applicants have 90 semester hours of undergraduate education with a grade point average of at least 3.0 on a 4.0 scale. Many programs require at least three years of undergraduate education, and more are requiring a bachelor's degree. Canada requires a minimum three years of undergraduate education for applicants, and at least 4200 instructional hours (or the equivalent) of full-time chiropractic education for matriculation through an accredited chiropractic program. Graduates of the Canadian Memorial Chiropractic College (CMCC) are formally recognized to have at least 7–8 years of university level education. The World Health Organization (WHO) guidelines suggest three major full-time educational paths culminating in a DC, DCM, BSc, or MSc degree. Besides the full-time paths, they also suggest a conversion program for people with other health care education and limited training programs for regions where no legislation governs chiropractic.
Upon graduation, there may be a requirement to pass national, state, or provincial board examinations before being licensed to practice in a particular jurisdiction. Depending on the location, continuing education may be required to renew these licenses. Specialty training is available through part-time postgraduate education programs such as chiropractic orthopedics and sports chiropractic, and through full-time residency programs such as radiology or orthopedics.
In the U.S., chiropractic schools are accredited through the Council on Chiropractic Education (CCE) while the General Chiropractic Council (GCC) is the statutory governmental body responsible for the regulation of chiropractic in the UK. The U.S. CCE requires a mixing curriculum, which means a straight-educated chiropractor may not be eligible for licensing in states requiring CCE accreditation. CCEs in the U.S., Canada, Australia and Europe have joined to form CCE-International (CCE-I) as a model of accreditation standards with the goal of having credentials portable internationally. Today, there are 18 accredited Doctor of Chiropractic programs in the U.S., 2 in Canada, 6 in Australasia, and 5 in Europe. All but one of the chiropractic colleges in the U.S. are privately funded, but in several other countries they are in government-sponsored universities and colleges. Of the two chiropractic colleges in Canada, one is publicly funded (UQTR) and one is privately funded (CMCC). In 2005, CMCC was granted the privilege of offering a professional health care degree under the Post-secondary Education Choice and Excellence Act, which sets the program within the hierarchy of education in Canada as comparable to that of other primary contact health care professions such as medicine, dentistry and optometry.
Regulatory colleges and chiropractic boards in the U.S., Canada, Mexico, and Australia are responsible for protecting the public, standards of practice, disciplinary issues, quality assurance and maintenance of competency. There are an estimated 49,000 chiropractors in the U.S. (2008), 6,500 in Canada (2010), 2,500 in Australia (2000), and 1,500 in the UK (2000).
Chiropractors often argue that this education is as good as or better than medical physicians', but most chiropractic training is confined to classrooms, with much time spent learning theory, adjustment, and marketing. The fourth year of chiropractic education has persistently shown the highest stress levels, and students at every stage of training experience varying degrees of stress. Chiropractic leaders and colleges have had internal struggles: rather than cooperating, different factions have engaged in infighting, with a number of actions amounting to posturing as the colleges competed to enroll students.
In 2024, Oregon Public Broadcasting reported on the high debt burden of students who pursued degrees in alternative medicine. Ten different chiropractic programs were ranked among the 47 US graduate programs with highest debt to earnings ratios. Analyses by Quackwatch and the Sunlight Foundation found high rates of default on Health Education Assistance Loan (HEAL) student loans used for chiropractic programs. Among health professionals who were listed as in default on HEAL loans in 2012, 53% were chiropractors.
Ethics
The chiropractic oath is a modern variation of the classical Hippocratic Oath historically taken by physicians and other healthcare professionals swearing to practice their professions ethically. The American Chiropractic Association (ACA) has an ethical code "based upon the acknowledgement that the social contract dictates the profession's responsibilities to the patient, the public, and the profession; and upholds the fundamental principle that the paramount purpose of the chiropractic doctor's professional services shall be to benefit the patient." The International Chiropractors Association (ICA) also has a set of professional canons.
A 2008 commentary proposed that the chiropractic profession actively regulate itself to combat abuse, fraud, and quackery, which are more prevalent in chiropractic than in other health care professions, violating the social contract between patients and physicians. According to a 2015 Gallup poll of U.S. adults, the perception of chiropractors is generally favorable; two-thirds of American adults agree that chiropractors have their patient's best interest in mind and more than half also agree that most chiropractors are trustworthy. Less than 10% of US adults disagreed with the statement that chiropractors were trustworthy.
Chiropractors, especially in America, have a reputation for unnecessarily treating patients; in many circumstances the focus appears to be on economics rather than on health care. Sustained chiropractic care is promoted as a preventive tool, but unnecessary manipulation could possibly present a risk to patients. Some chiropractors are concerned by the routine unjustified claims other chiropractors have made. A 2010 analysis of chiropractic websites found that the majority of chiropractors and their associations made claims of effectiveness not supported by scientific evidence, while 28% of chiropractor websites advocated lower back pain care, for which there is some sound evidence.
The US Office of the Inspector General (OIG) estimated that for calendar year 2013, 82% of payments to chiropractors under Medicare Part B, a total of $359 million, did not comply with Medicare requirements. There have been at least 15 OIG reports about chiropractic billing irregularities since 1986.
In 2009, a backlash to the libel suit filed by the British Chiropractic Association (BCA) against Simon Singh inspired the filing of formal complaints of false advertising against more than 500 individual chiropractors within one 24-hour period. This prompted the McTimoney Chiropractic Association to write to its members advising them to remove leaflets that make claims about whiplash and colic from their practices, to be wary of new patients and telephone inquiries, and telling them: "If you have a website, take it down NOW" and "Finally, we strongly suggest you do NOT discuss this with others, especially patients." An editorial in Nature suggested that the BCA may have been trying to suppress debate, and that this use of English libel law was a burden on the right to freedom of expression, which is protected by the European Convention on Human Rights. The libel case ended with the BCA withdrawing its suit in 2010.
Reception
Chiropractic is established in the U.S., Canada, and Australia, and is present to a lesser extent in many other countries. It is viewed as a marginal, clinically unproven form of complementary and alternative medicine that has not been integrated into mainstream medicine.
Australia
In Australia, there are approximately 2488 chiropractors, or one chiropractor for every 7980 people. Most private health insurance funds in Australia cover chiropractic care, and the federal government funds chiropractic care when the patient is referred by a medical practitioner. In 2014, the chiropractic profession had a registered workforce of 4,684 practitioners in Australia represented by two major organizations – the Chiropractors' Association of Australia (CAA) and the Chiropractic and Osteopathic College of Australasia (COCA). Annual expenditure on chiropractic care (alone or combined with osteopathy) in Australia is estimated at between AUD$750 million and AUD$988 million, with musculoskeletal complaints such as back and neck pain making up the bulk of consultations; proportional expenditure is similar to that found in other countries. While Medicare (the Australian publicly funded universal health care system) coverage of chiropractic services is limited to only those directed by a medical referral to assist chronic disease management, most private health insurers in Australia do provide partial reimbursement for a wider range of chiropractic services in addition to limited third party payments for workers compensation and motor vehicle accidents.
Of the 2,005 chiropractors who participated in a 2015 survey, 62.4% were male and the average age was 42.1 (SD = 12.1) years. Nearly all chiropractors (97.1%) held a bachelor's degree or higher; the most common highest professional qualification was a bachelor's or double bachelor's degree (34.6%), followed by a master's degree (32.7%), Doctor of Chiropractic (28.9%), and PhD (0.9%). Only a small number held a diploma (2.1%) or advanced diploma (0.8%) as their highest professional qualification.
Germany
In Germany, chiropractic may be offered by medical doctors and alternative practitioners. Chiropractors qualified abroad must obtain a German non-medical practitioner license. Authorities have routinely required a comprehensive knowledge test for this, but in the recent past, some administrative courts have ruled that training abroad should be recognised.
Switzerland
In Switzerland, only trained medical professionals are allowed to offer chiropractic. There are 300 chiropractors in Switzerland.
United Kingdom
In the United Kingdom, there are over 2,000 chiropractors, representing one chiropractor per 29,206 people. Chiropractic is available on the National Health Service in some areas, such as Cornwall, where the treatment is only available for neck or back pain.
A 2010 questionnaire presented to UK chiropractors indicated that only 45% of chiropractors disclosed to patients the serious risks associated with manipulation of the cervical spine, and that 46% believed patients might refuse treatment if the risks were correctly explained. However, 80% acknowledged an ethical and moral responsibility to disclose risks to patients.
United States and Canada
The percentage of the population that utilizes chiropractic care at any given time generally falls into a range from 6% to 12% in the U.S. and Canada, with a global high of 20% in Alberta in 2006. In 2008, chiropractors were reported to be the most common CAM providers for children and adolescents, these patients representing up to 14% of all visits to chiropractors.
There were around 50,330 chiropractors practicing in North America in 2000; by 2008, this had increased by almost 20% to around 60,000. In 2002–03, the majority of those who sought chiropractic care did so for relief from back and neck pain and other neuromusculoskeletal complaints, most often specifically for low back pain. The majority of U.S. chiropractors participate in some form of managed care. Although the majority of U.S. chiropractors view themselves as specialists in neuromusculoskeletal conditions, many also consider chiropractic a type of primary care. In the majority of cases, the care that chiropractors and physicians provide divides the market; for some, however, their care is complementary.
In the U.S., chiropractors perform over 90% of all manipulative treatments. Satisfaction rates are typically higher for chiropractic care compared to medical care, with a 1998 U.S. survey reporting 83% of respondents satisfied or very satisfied with their care; quality of communication seems to be a consistent predictor of patient satisfaction with chiropractors.
Utilization of chiropractic care is sensitive to the costs incurred by the co-payment by the patient. Use of chiropractic declined from 9.9% of U.S. adults in 1997 to 7.4% in 2002; this was the largest relative decrease among CAM professions, which overall had a stable use rate. As of 2007, chiropractic was reaching 7% of the U.S. population. Chiropractors were the third largest medical profession in the US in 2002, following physicians and dentists. Employment of U.S. chiropractors was expected to increase 14% between 2006 and 2016, faster than the average for all occupations.
In the U.S., most states require insurers to cover chiropractic care, and most HMOs cover these services.
History
Chiropractic's origins lie in the folk medicine practice of bonesetting, in which untrained practitioners engaged in joint manipulation or resetting fractured bones.
Chiropractic was founded in 1895 by Daniel David (D. D.) Palmer in Davenport, Iowa. Palmer, a magnetic healer, hypothesized that manual manipulation of the spine could cure disease. His first chiropractic patient was Harvey Lillard, a worker in the building where Palmer's office was located. Lillard claimed that he had had severely reduced hearing for 17 years, beginning shortly after a "pop" in his spine, and that a few days after his adjustment his hearing was almost completely restored. Another of Palmer's patients, Samuel Weed, coined the term chiropractic, from Greek cheir- 'hand' and praktikos 'practical'. Chiropractic is classified as a field of pseudomedicine.
Chiropractic competed with its predecessor osteopathy, another medical system based on magnetic healing; both systems were founded by charismatic midwesterners in opposition to the conventional medicine of the day, and both postulated that manipulation improved health. Although initially keeping chiropractic a family secret, in 1898 Palmer began teaching it to a few students at his new Palmer School of Chiropractic. One student, his son Bartlett Joshua (B. J.) Palmer, became committed to promoting chiropractic, took over the Palmer School in 1906, and rapidly expanded its enrollment.
Early chiropractors believed that all disease was caused by interruptions in the flow of innate intelligence, a vitalistic nervous energy or life force that represented God's presence in man; chiropractic leaders often invoked religious imagery and moral traditions. D. D. Palmer said he "received chiropractic from the other world". D. D. and B. J. both seriously considered declaring chiropractic a religion, which might have provided legal protection under the U.S. constitution, but decided against it partly to avoid confusion with Christian Science. Early chiropractors also tapped into the Populist movement, emphasizing craft, hard work, competition, and advertisement, aligning themselves with the common man against intellectuals and trusts, among which they included the American Medical Association (AMA).
Chiropractic has seen considerable controversy and criticism. Although D. D. and B. J. were "straight" and disdained the use of instruments, some early chiropractors, whom B. J. scornfully called "mixers", advocated the use of instruments. In 1910, B. J. changed course and endorsed X-rays as necessary for diagnosis; this resulted in a significant exodus from the Palmer School of the more conservative faculty and students. The mixer camp grew until by 1924 B. J. estimated that only 3,000 of the United States' 25,000 chiropractors remained straight. That year, B. J.'s invention and promotion of the neurocalometer, a temperature-sensing device, was highly controversial among his fellow straights. By the 1930s, chiropractic was the largest alternative healing profession in the U.S.
Chiropractors faced heavy opposition from organized medicine. D. D. Palmer was jailed in 1907 for practicing medicine without a license. Thousands of chiropractors were prosecuted for practicing medicine without a license, and D. D. and many other chiropractors were jailed. To defend against medical statutes, B. J. argued that chiropractic was separate and distinct from medicine, asserting that chiropractors "analyzed" rather than "diagnosed", and "adjusted" subluxations rather than "treated" disease. B. J. cofounded the Universal Chiropractors' Association (UCA) to provide legal services to arrested chiropractors. Although the UCA won their first test case in Wisconsin in 1907, prosecutions instigated by state medical boards became increasingly common and in many cases were successful. In response, chiropractors conducted political campaigns to secure separate licensing statutes, eventually succeeding in all fifty states, from Kansas in 1913 through Louisiana in 1974. The longstanding feud between chiropractors and medical doctors continued for decades.
Restraint of trade decision 1987
The AMA labeled chiropractic an "unscientific cult" in 1966, and until 1980 advised its members that it was unethical for medical doctors to associate with "unscientific practitioners". This culminated in a landmark 1987 decision, Wilk v. AMA, in which the court found that the AMA had engaged in unreasonable restraint of trade and conspiracy, and which ended the AMA's de facto boycott of chiropractic.
Growing scholarly interest
Serious research to test chiropractic theories did not begin until the 1970s, and it continues to be hampered by the antiscientific and pseudoscientific ideas that sustained the profession in its long battle with organized medicine. By the mid-1990s there was a growing scholarly interest in chiropractic, which helped efforts to improve service quality and establish clinical guidelines that recommended manual therapies for acute low back pain.
In recent decades chiropractic gained legitimacy and greater acceptance by medical physicians and health plans, and enjoyed a strong political base and sustained demand for services. However, its future seemed uncertain: as the number of practitioners grew, evidence-based medicine insisted on treatments with demonstrated value, managed care restricted payment, and competition grew from massage therapists and other health professions. The profession responded by marketing natural products and devices more aggressively, and by reaching deeper into alternative medicine and primary care.
Public health
Some chiropractors oppose vaccination and water fluoridation, which are common public health practices. Within the chiropractic community there are significant disagreements about vaccination, one of the most cost-effective public health interventions available. Most chiropractic writings on vaccination focus on its negative aspects, claiming that it is hazardous, ineffective, and unnecessary. Some chiropractors have embraced vaccination, but a significant portion of the profession rejects it, as original chiropractic philosophy traces diseases to causes in the spine and states that vaccines interfere with healing. The extent to which anti-vaccination views persist within the current chiropractic profession is uncertain. The American Chiropractic Association and the International Chiropractors Association support individual exemptions to compulsory vaccination laws, and a 1995 survey of U.S. chiropractors found that about a third believed there was no scientific proof that immunization prevents disease. The Canadian Chiropractic Association supports vaccination; a survey in Alberta in 2002 found that 25% of chiropractors advised patients for, and 27% against, vaccinating themselves or their children.
Early opposition to water fluoridation included chiropractors, some of whom continue to oppose it as being incompatible with chiropractic philosophy and an infringement of personal freedom. Other chiropractors have actively promoted fluoridation, and several chiropractic organizations have endorsed scientific principles of public health. In addition to traditional chiropractic opposition to water fluoridation and vaccination, chiropractors' attempts to establish a positive reputation for their public health role are also compromised by their reputation for recommending repetitive lifelong chiropractic treatment.
Controversy
Throughout its history chiropractic has been the subject of internal and external controversy and criticism. According to Daniel D. Palmer, the founder of chiropractic, subluxation is the sole cause of disease and manipulation is the cure for all diseases of the human race. A 2003 profession-wide survey found "most chiropractors (whether 'straights' or 'mixers') still hold views of innate intelligence and of the cause and cure of disease (not just back pain) consistent with those of the Palmers." A critical evaluation stated "Chiropractic is rooted in mystical concepts. This led to an internal conflict within the chiropractic profession, which continues today." Chiropractors, including D. D. Palmer, were jailed for practicing medicine without a license. For most of its existence, chiropractic has battled with mainstream medicine, sustained by antiscientific and pseudoscientific ideas such as subluxation. Collectively, systematic reviews have not demonstrated that spinal manipulation, the main treatment method employed by chiropractors, is effective for any medical condition, with the possible exception of treatment for back pain. Chiropractic remains controversial, though to a lesser extent than in past years.
| Biology and health sciences | Alternative and traditional medicine | Health |
7739 | https://en.wikipedia.org/wiki/Carbide | Carbide | In chemistry, a carbide usually describes a compound composed of carbon and a metal. In metallurgy, carbiding or carburizing is the process for producing carbide coatings on a metal piece.
Interstitial / Metallic carbides
The carbides of the group 4, 5 and 6 transition metals (with the exception of chromium) are often described as interstitial compounds. These carbides have metallic properties and are refractory. Some exhibit a range of stoichiometries, being a non-stoichiometric mixture of various carbides arising due to crystal defects. Some of them, including titanium carbide and tungsten carbide, are important industrially and are used to coat metals in cutting tools.
The long-held view is that the carbon atoms fit into octahedral interstices in a close-packed metal lattice when the metal atom radius is greater than approximately 135 pm:
When the metal atoms are cubic close-packed, (ccp), then filling all of the octahedral interstices with carbon achieves 1:1 stoichiometry with the rock salt structure.
When the metal atoms are hexagonal close-packed, (hcp), as the octahedral interstices lie directly opposite each other on either side of the layer of metal atoms, filling only one of these with carbon achieves 2:1 stoichiometry with the CdI2 structure.
The following table shows structures of the metals and their carbides. (N.B. the body centered cubic structure adopted by vanadium, niobium, tantalum, chromium, molybdenum and tungsten is not a close-packed lattice.) The notation "h/2" refers to the M2C type structure described above, which is only an approximate description of the actual structures. The simple view that the lattice of the pure metal "absorbs" carbon atoms can be seen to be untrue as the packing of the metal atom lattice in the carbides is different from the packing in the pure metal, although it is technically correct that the carbon atoms fit into the octahedral interstices of a close-packed metal lattice.
For a long time the non-stoichiometric phases were believed to be disordered, with a random filling of the interstices; however, short- and longer-range ordering has been detected.
Iron forms a number of carbides: Fe3C, Fe7C3 and Fe2C. The best known is cementite, Fe3C, which is present in steels. These carbides are more reactive than the interstitial carbides; for example, the carbides of Cr, Mn, Fe, Co and Ni are all hydrolysed by dilute acids and sometimes by water, to give a mixture of hydrogen and hydrocarbons. These compounds share features with both the inert interstitials and the more reactive salt-like carbides.
Some metals, such as lead and tin, are believed not to form carbides under any circumstances. There exists however a mixed titanium-tin carbide, which is a two-dimensional conductor.
Chemical classification of carbides
Carbides can be generally classified by the chemical bonds type as follows:
salt-like (ionic),
covalent compounds,
interstitial compounds, and
"intermediate" transition metal carbides.
Examples include calcium carbide (CaC2), silicon carbide (SiC), tungsten carbide (WC; often called, simply, carbide when referring to machine tooling), and cementite (Fe3C), each used in key industrial applications. The naming of ionic carbides is not systematic.
Salt-like / saline / ionic carbides
Salt-like carbides are composed of highly electropositive elements such as the alkali metals, alkaline earth metals, lanthanides, actinides, and group 3 metals (scandium, yttrium, and lutetium). Aluminium from group 13 forms carbides, but gallium, indium, and thallium do not. These materials feature isolated carbon centers, often described as "C4−", in the methanides or methides; two-atom units, "C2 2−", in the acetylides; and three-atom units, "C3 4−", in the allylides. The graphite intercalation compound KC8, prepared from vapour of potassium and graphite, and the alkali metal derivatives of C60 are not usually classified as carbides.
Methanides
Methanides are a subset of carbides distinguished by their tendency to decompose in water producing methane. Three examples are aluminium carbide Al4C3, magnesium carbide Mg2C and beryllium carbide Be2C.
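As an illustration, the balanced hydrolysis equations for two of these methanides (standard chemistry, not given explicitly in the text above) are:

Al4C3 + 12 H2O → 4 Al(OH)3 + 3 CH4
Be2C + 4 H2O → 2 Be(OH)2 + CH4

In both cases the isolated carbon centers are released as methane.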
Transition metal carbides are not saline: their reaction with water is very slow and is usually neglected. For example, depending on surface porosity, 5–30 atomic layers of titanium carbide are hydrolyzed, forming methane within 5 minutes at ambient conditions, followed by saturation of the reaction.
Note that methanide in this context is a trivial historical name. According to the IUPAC systematic naming conventions, a compound such as NaCH3 would be termed a "methanide", although this compound is often called methylsodium. See Methyl group#Methyl anion for more information about the anion.
Acetylides/ethynides
Several carbides are assumed to be salts of the acetylide anion C2 2− (also called percarbide, by analogy with peroxide), which has a triple bond between the two carbon atoms. Alkali metals, alkaline earth metals, and lanthanoid metals form acetylides, for example, sodium carbide Na2C2, calcium carbide CaC2, and LaC2. Lanthanides also form carbides (sesquicarbides, see below) with formula M2C3. Metals from group 11 also tend to form acetylides, such as copper(I) acetylide and silver acetylide. Carbides of the actinide elements, which have stoichiometry MC2 and M2C3, are also described as salt-like derivatives of C2 2−.
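For example, the familiar carbide-lamp reaction of calcium carbide with water (standard chemistry, not stated explicitly above) releases the acetylide unit as acetylene:

CaC2 + 2 H2O → Ca(OH)2 + C2H2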
The C–C triple bond length ranges from 119.2 pm in CaC2 (similar to ethyne), to 130.3 pm in LaC2 and 134 pm in UC2. The bonding in LaC2 has been described in terms of LaIII with the extra electron delocalised into the antibonding orbital on C2 2−, explaining the metallic conduction.
Allylides
The polyatomic ion C3 4−, sometimes called allylide, is found in Li4C3 and Mg2C3. The ion is linear and is isoelectronic with CO2. The C–C distance in Mg2C3 is 133.2 pm. Mg2C3 yields methylacetylene, CH3CCH, and propadiene, CH2CCH2, on hydrolysis, which was the first indication that it contains C3 4−.
Covalent carbides
The carbides of silicon and boron are described as "covalent carbides", although virtually all compounds of carbon exhibit some covalent character. Silicon carbide has two similar crystalline forms, which are both related to the diamond structure. Boron carbide, B4C, on the other hand, has an unusual structure which includes icosahedral boron units linked by carbon atoms. In this respect boron carbide is similar to the boron rich borides. Both silicon carbide (also known as carborundum) and boron carbide are very hard materials and refractory. Both materials are important industrially. Boron also forms other covalent carbides, such as B25C.
Molecular carbides
Metal complexes containing C are known as metal carbido complexes. Most common are carbon-centered octahedral clusters, such as [Au6C(PPh3)6]2+ (where "Ph" represents a phenyl group) and [Fe6C(CO)16]2−. Similar species are known for the metal carbonyls and the early metal halides. A few terminal carbides have also been isolated.
Metallocarbohedrynes (or "met-cars") are stable clusters with the general formula M8C12 where M is a transition metal (Ti, Zr, V, etc.).
Related materials
In addition to the carbides, other groups of related carbon compounds exist:
graphite intercalation compounds
alkali metal fullerides
endohedral fullerenes, where the metal atom is encapsulated within a fullerene molecule
metallacarbohedrenes (met-cars) which are cluster compounds containing C2 units.
tunable nanoporous carbon, where gas chlorination of metallic carbides removes the metal atoms to form a highly porous, near-pure carbon material capable of high-density energy storage.
transition metal carbene complexes.
two-dimensional transition metal carbides: MXenes
| Physical sciences | Ceramic compounds | Chemistry |
7783 | https://en.wikipedia.org/wiki/Coriolis%20force | Coriolis force | In physics, the Coriolis force is an inertial (or fictitious) force that acts on objects in motion within a frame of reference that rotates with respect to an inertial frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object. In one with anticlockwise (or counterclockwise) rotation, the force acts to the right. Deflection of an object due to the Coriolis force is called the Coriolis effect. Though recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by French scientist Gaspard-Gustave de Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology.
Newton's laws of motion describe the motion of an object in an inertial (non-accelerating) frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis and centrifugal accelerations appear. When applied to objects with masses, the respective forces are proportional to their masses. The magnitude of the Coriolis force is proportional to the rotation rate, and the magnitude of the centrifugal force is proportional to the square of the rotation rate. The Coriolis force acts in a direction perpendicular to two quantities: the angular velocity of the rotating frame relative to the inertial frame and the velocity of the body relative to the rotating frame, and its magnitude is proportional to the object's speed in the rotating frame (more precisely, to the component of its velocity that is perpendicular to the axis of rotation). The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces, or pseudo forces. By introducing these fictitious forces to a rotating frame of reference, Newton's laws of motion can be applied to the rotating system as though it were an inertial system; these forces are correction factors that are not required in a non-rotating system.
In popular (non-technical) usage of the term "Coriolis effect", the rotating reference frame implied is almost always the Earth. Because the Earth spins, Earth-bound observers need to account for the Coriolis force to correctly analyze the motion of objects. The Earth completes one rotation for each sidereal day, so for motions of everyday objects the Coriolis force is imperceptible; its effects become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the ocean, or where high precision is important, such as artillery or missile trajectories. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right (with respect to the direction of travel) in the Northern Hemisphere and to the left in the Southern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and decreases to zero at the equator. Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator ("clockwise") and to the left of this direction south of it ("anticlockwise"). This effect is responsible for the rotation and thus formation of cyclones.
History
Italian scientist Giovanni Battista Riccioli and his assistant Francesco Maria Grimaldi described the effect in connection with artillery in the 1651 Almagestum Novum, writing that rotation of the Earth should cause a cannonball fired to the north to deflect to the east. In 1674, Claude François Milliet Dechales described in his Cursus seu Mundus Mathematicus how the rotation of the Earth should cause a deflection in the trajectories of both falling bodies and projectiles aimed toward one of the planet's poles. Riccioli, Grimaldi, and Dechales all described the effect as part of an argument against the heliocentric system of Copernicus. In other words, they argued that the Earth's rotation should create the effect, and so failure to detect the effect was evidence for an immobile Earth. The Coriolis acceleration equation was derived by Euler in 1749, and the effect was described in the tidal equations of Pierre-Simon Laplace in 1778.
Gaspard-Gustave de Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. That paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these supplementary forces into two categories. The second category contained a force that arises from the cross product of the angular velocity of a coordinate system and the projection of a particle's velocity into a plane perpendicular to the system's axis of rotation. Coriolis referred to this force as the "compound centrifugal force" due to its analogies with the centrifugal force already considered in category one. The effect was known in the early 20th century as the "acceleration of Coriolis", and by 1920 as "Coriolis force".
In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes with air being deflected by the Coriolis force to create the prevailing westerly winds.
The understanding of the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Late in the 19th century, the full extent of the large scale interaction of pressure-gradient force and deflecting force that in the end causes air masses to move along isobars was understood.
Formula
In Newtonian mechanics, the equation of motion for an object in an inertial reference frame is:

F = m a

where F is the vector sum of the physical forces acting on the object, m is the mass of the object, and a is the acceleration of the object relative to the inertial reference frame.
Transforming this equation to a reference frame rotating about a fixed axis through the origin with angular velocity Ω having variable rotation rate, the equation takes the form:

F − m dΩ/dt × r′ − 2m Ω × v′ − m Ω × (Ω × r′) = m a′

where the primed variables denote quantities measured in the rotating reference frame (the prime is not a derivative) and:

F is the vector sum of the physical forces acting on the object
Ω is the angular velocity of the rotating reference frame relative to the inertial frame
r′ is the position vector of the object relative to the rotating reference frame
v′ is the velocity of the object relative to the rotating reference frame
a′ is the acceleration of the object relative to the rotating reference frame
The fictitious forces as they are perceived in the rotating frame act as additional forces that contribute to the apparent acceleration just like the real external forces. The fictitious force terms of the equation are, reading from left to right:
Euler force, −m dΩ/dt × r′
Coriolis force, −2m Ω × v′
centrifugal force, −m Ω × (Ω × r′)
As seen in these formulas the Euler and centrifugal forces depend on the position vector of the object, while the Coriolis force depends on the object's velocity as measured in the rotating reference frame. As expected, for a non-rotating inertial frame of reference the Coriolis force and all other fictitious forces disappear.
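As a minimal numerical sketch of these three terms (the function name and example values are illustrative assumptions, not from the article), the forces can be evaluated directly from the cross products above:

import numpy as np

def fictitious_forces(m, omega, domega_dt, r_prime, v_prime):
    # Euler, Coriolis, and centrifugal forces on a mass m (kg), given the
    # frame's angular velocity omega (rad/s), its time derivative, and the
    # object's position r' (m) and velocity v' (m/s) in the rotating frame.
    euler = -m * np.cross(domega_dt, r_prime)
    coriolis = -2.0 * m * np.cross(omega, v_prime)
    centrifugal = -m * np.cross(omega, np.cross(omega, r_prime))
    return euler, coriolis, centrifugal

# Example: a 1 kg object one Earth radius from a steadily rotating z axis,
# moving at 10 m/s; the Euler term vanishes because dOmega/dt = 0.
omega = np.array([0.0, 0.0, 7.292e-5])
forces = fictitious_forces(1.0, omega, np.zeros(3),
                           np.array([6.371e6, 0.0, 0.0]),
                           np.array([0.0, 10.0, 0.0]))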
Direction of Coriolis force for simple cases
As the Coriolis force is proportional to a cross product of two vectors, it is perpendicular to both vectors, in this case the object's velocity and the frame's rotation vector. It therefore follows that:
if the velocity is parallel to the rotation axis, the Coriolis force is zero. For example, on Earth, this situation occurs for a body at the equator moving north or south relative to the Earth's surface. (At any latitude other than the equator, however, the north–south motion would have a component perpendicular to the rotation axis and a force specified by the inward or outward cases mentioned below).
if the velocity is straight inward to the axis, the Coriolis force is in the direction of local rotation. For example, on Earth, this situation occurs for a body at the equator falling downward, as in the Dechales illustration above, where the falling ball travels further to the east than does the tower. Note also that heading north in the northern hemisphere would have a velocity component toward the rotation axis, resulting in a Coriolis force to the east (more pronounced the further north one is).
if the velocity is straight outward from the axis, the Coriolis force is against the direction of local rotation. In the tower example, a ball launched upward would move toward the west.
if the velocity is in the direction of rotation, the Coriolis force is outward from the axis. For example, on Earth, this situation occurs for a body at the equator moving east relative to Earth's surface. It would move upward as seen by an observer on the surface. This effect (see Eötvös effect below) was discussed by Galileo Galilei in 1632 and by Riccioli in 1651.
if the velocity is against the direction of rotation, the Coriolis force is inward to the axis. For example, on Earth, this situation occurs for a body at the equator moving west, which would deflect downward as seen by an observer.
Intuitive explanation
For an intuitive explanation of the origin of the Coriolis force, consider an object, constrained to follow the Earth's surface and moving northward in the Northern Hemisphere. Viewed from outer space, the object does not appear to go due north, but has an eastward motion (it rotates around toward the right along with the surface of the Earth). The further north it travels, the smaller the "radius of its parallel (latitude)" (the minimum distance from the surface point to the axis of rotation, which is in a plane orthogonal to the axis), and so the slower the eastward motion of its surface. As the object moves north it has a tendency to maintain the eastward speed it started with (rather than slowing down to match the reduced eastward speed of local objects on the Earth's surface), so it veers east (i.e. to the right of its initial motion).
Though not obvious from this example, which considers northward motion, the horizontal deflection occurs equally for objects moving eastward or westward (or in any other direction). However, the theory that the effect determines the rotation of draining water in a household bathtub, sink or toilet has been repeatedly disproven by modern-day scientists; the force is negligibly small compared to the many other influences on the rotation.
Length scales and the Rossby number
The time, space, and velocity scales are important in determining the importance of the Coriolis force. Whether rotation is important in a system can be determined by its Rossby number (Ro), which is the ratio of the velocity, U, of a system to the product of the Coriolis parameter, f = 2Ω sin φ, and the length scale, L, of the motion:

Ro = U / (f L)
Hence, it is the ratio of inertial to Coriolis forces; a small Rossby number indicates a system is strongly affected by Coriolis forces, and a large Rossby number indicates a system in which inertial forces dominate. For example, in tornadoes, the Rossby number is large, so in them the Coriolis force is negligible, and balance is between pressure and centrifugal forces. In low-pressure systems the Rossby number is low, as the centrifugal force is negligible; there, the balance is between Coriolis and pressure forces. In oceanic systems the Rossby number is often around 1, with all three forces comparable.
An atmospheric system moving at U = 10 m/s and occupying a spatial distance of L = 1,000 km has a Rossby number of approximately 0.1.
A baseball pitcher may throw the ball at U = 45 m/s for a distance of L = 18.3 m. The Rossby number in this case would be 32,000 (at latitude 31°47'46.382").
Baseball players don't care about which hemisphere they're playing in. However, an unguided missile obeys exactly the same physics as a baseball, but can travel far enough and be in the air long enough to experience the effect of Coriolis force. Long-range shells in the Northern Hemisphere landed close to, but to the right of, where they were aimed until this was noted. (Those fired in the Southern Hemisphere landed to the left.) In fact, it was this effect that first drew the attention of Coriolis himself.
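The two examples above can be checked with a short sketch (illustrative only; the function name and the mid-latitude value for the atmospheric case are assumptions):

import math

def rossby(U, L, lat_deg, omega=7.292e-5):
    # Ro = U / (f * L), with Coriolis parameter f = 2 * omega * sin(latitude).
    f = 2.0 * omega * math.sin(math.radians(lat_deg))
    return U / (f * L)

print(rossby(10.0, 1.0e6, 45.0))   # atmospheric system: ~0.1
print(rossby(45.0, 18.3, 31.796))  # pitched baseball: ~32,000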
Simple cases
Tossed ball on a rotating carousel
The figure illustrates a ball tossed from 12:00 o'clock toward the center of a counter-clockwise rotating carousel. On the left, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line to the center, while the ball-thrower rotates counter-clockwise with the carousel. On the right, the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock. The figure shows how the trajectory of the ball as seen by the rotating observer can be constructed.
On the left, two arrows locate the ball relative to the ball-thrower. One of these arrows is from the thrower to the center of the carousel (providing the ball-thrower's line of sight), and the other points from the center of the carousel to the ball. (This arrow gets shorter as the ball approaches the center.) A shifted version of the two arrows is shown dotted.
On the right is shown this same dotted pair of arrows, but now the pair are rigidly rotated so the arrow corresponding to the line of sight of the ball-thrower toward the center of the carousel is aligned with 12:00 o'clock. The other arrow of the pair locates the ball relative to the center of the carousel, providing the position of the ball as seen by the rotating observer. By following this procedure for several positions, the trajectory in the rotating frame of reference is established as shown by the curved path in the right-hand panel.
The ball travels in the air, and there is no net force upon it. To the stationary observer, the ball follows a straight-line path, so there is no problem squaring this trajectory with zero net force. However, the rotating observer sees a curved path. Kinematics insists that a force (pushing to the right of the instantaneous direction of travel for a counter-clockwise rotation) must be present to cause this curvature, so the rotating observer is forced to invoke a combination of centrifugal and Coriolis forces to provide the net force required to cause the curved trajectory.
Bounced ball
The figure describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the "camera") that rotates with the carousel, and an inertial observer. The figure shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail, and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight).
On the carousel, instead of tossing the ball straight at a rail to bounce back, the tosser must throw the ball toward the right of the target, and the ball then seems to the camera to bear continuously to the left of its direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left from its direction of travel on both the inward and return trajectories. The curved path demands that this observer recognize a leftward net force on the ball. (This force is "fictitious" because it disappears for a stationary observer, as is discussed shortly.) For some angles of launch, a path has portions where the trajectory is approximately radial, and Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation, and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection.
The ball's path through the air is straight when viewed by observers standing on the ground (right panel). In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position 1. From the inertial viewer's standpoint, positions 1, 2, and 3 are occupied in sequence. At position 2, the ball strikes the rail, and at position 3, the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied.
Applied to the Earth
The acceleration affecting the motion of air "sliding" over the Earth's surface is the horizontal component of the Coriolis term −2 Ω × v. This component is orthogonal to the velocity over the Earth's surface and has magnitude 2 ω v sin φ, where

ω is the spin rate of the Earth
v is the speed of the moving object
φ is the latitude, positive in the northern hemisphere and negative in the southern hemisphere
In the northern hemisphere, where the latitude is positive, this acceleration, as viewed from above, is to the right of the direction of motion. Conversely, it is to the left in the southern hemisphere.
Rotating sphere
Consider a location with latitude φ on a sphere that is rotating around the north–south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards. The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system [listing components in the order east (e), north (n) and upward (u)] are:

Ω = ω (0, cos φ, sin φ)
v = (v_e, v_n, v_u)
a_C = −2 Ω × v = 2 ω (v_n sin φ − v_u cos φ, −v_e sin φ, v_e cos φ)
When considering atmospheric or oceanic dynamics, the vertical velocity is small, and the vertical component of the Coriolis acceleration (2 ω v_e cos φ) is small compared with the acceleration due to gravity (g, approximately 9.81 m/s² near Earth's surface). For such cases, only the horizontal (east and north) components matter. The restriction of the above to the horizontal plane is (setting v_u = 0):

(a_e, a_n) = f (v_n, −v_e)

where f = 2 ω sin φ is called the Coriolis parameter.
By setting vn = 0, it can be seen immediately that (for positive φ and ω) a movement due east results in an acceleration due south; similarly, setting ve = 0, it is seen that a movement due north results in an acceleration due east. In general, observed horizontally, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right (for positive φ) and of the same size regardless of the horizontal orientation.
In the case of equatorial motion, setting φ = 0° yields:

Ω = ω (0, 1, 0)
a_C = 2 ω (−v_u, 0, v_e)
Ω in this case is parallel to the north-south axis.
Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west.
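A short sketch (the function name and example values are assumptions for illustration) evaluates the same Coriolis acceleration directly from the cross product, in east-north-up components:

import numpy as np

def coriolis_enu(v_enu, lat_deg, omega=7.292e-5):
    # Coriolis acceleration a = -2 * Omega x v, with Omega resolved into
    # local east-north-up axes at the given latitude.
    lat = np.radians(lat_deg)
    Omega = omega * np.array([0.0, np.cos(lat), np.sin(lat)])
    return -2.0 * np.cross(Omega, v_enu)

# A 10 m/s northward wind at 45 N is accelerated eastward, i.e. deflected
# to the right of its motion, at about 1e-3 m/s^2:
print(coriolis_enu(np.array([0.0, 10.0, 0.0]), 45.0))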
Meteorology and oceanography
Perhaps the most important impact of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and oceanography, it is convenient to postulate a rotating frame of reference wherein the Earth is stationary. In accommodation of that provisional postulation, the centrifugal and Coriolis forces are introduced. Their relative importance is determined by the applicable Rossby numbers. Tornadoes have high Rossby numbers, so, while tornado-associated centrifugal forces are quite substantial, Coriolis forces associated with tornadoes are for practical purposes negligible.
Because surface ocean currents are driven by the movement of wind over the water's surface, the Coriolis force also affects the movement of ocean currents and cyclones as well. Many of the ocean's largest currents circulate around warm, high-pressure areas called gyres. Though the circulation is not as significant as that in the air, the deflection caused by the Coriolis effect is what creates the spiralling pattern in these gyres. The spiralling wind pattern helps the hurricane form. The stronger the force from the Coriolis effect, the faster the wind spins and picks up additional energy, increasing the strength of the hurricane.
Air within high-pressure systems rotates in a direction such that the Coriolis force is directed radially inwards, and nearly balanced by the outwardly radial pressure gradient. As a result, air travels clockwise around high pressure in the Northern Hemisphere and anticlockwise in the Southern Hemisphere. Air around low-pressure rotates in the opposite direction, so that the Coriolis force is directed radially outward and nearly balances an inwardly radial pressure gradient.
Flow around a low-pressure area
If a low-pressure area forms in the atmosphere, air tends to flow in towards it, but is deflected perpendicular to its velocity by the Coriolis force. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow. Because the Rossby number is low, the force balance is largely between the pressure-gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure.
Instead of flowing down the gradient, large scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow. On a non-rotating planet, fluid would flow along the straightest possible line, quickly eliminating pressure gradients. The geostrophic balance is thus very different from the case of "inertial motions" (see below), which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be.
This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is anticlockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction. Cyclones rarely form along the equator due to the weak Coriolis effect present in this region.
Inertial circles
An air or water mass moving with speed v subject only to the Coriolis force travels in a circular trajectory called an inertial circle. Since the force is directed at right angles to the motion of the particle, it moves with a constant speed around a circle whose radius R is given by:

R = v / f
where f is the Coriolis parameter 2 Ω sin φ, introduced above (where φ is the latitude). The time taken for the mass to complete a full circle is therefore 2π/f. The Coriolis parameter typically has a mid-latitude value of about 10−4 s−1; hence for a typical atmospheric speed of 10 m/s, the radius is 100 km, with a period of about 17 hours. For an ocean current with a typical speed of 10 cm/s, the radius of an inertial circle is 1 km. These inertial circles are clockwise in the northern hemisphere (where trajectories are bent to the right) and anticlockwise in the southern hemisphere.
If the rotating system is a parabolic turntable, then f is constant and the trajectories are exact circles. On a rotating planet, f varies with latitude and the paths of particles do not form exact circles. Since the parameter f varies as the sine of the latitude, the radius of the oscillations associated with a given speed is smallest at the poles (latitude of ±90°) and increases toward the equator.
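These figures can be verified with a brief sketch (illustrative; the chosen latitude merely makes f close to the quoted mid-latitude value of 10−4 s−1):

import math

def inertial_circle(v, lat_deg, omega=7.292e-5):
    # Radius R = v / f (m) and period T = 2*pi / f (s) of an inertial circle.
    f = 2.0 * omega * math.sin(math.radians(lat_deg))
    return v / f, 2.0 * math.pi / f

R, T = inertial_circle(10.0, 43.3)
print(R / 1000.0, T / 3600.0)   # ~100 km and ~17.5 h for a 10 m/s wind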
Other terrestrial effects
The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance.
Eötvös effect
The practical impact of the "Coriolis effect" is mostly caused by the horizontal acceleration component produced by horizontal motion.
There are other components of the Coriolis effect. Westward-traveling objects are deflected downwards, while eastward-traveling objects are deflected upwards. This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. The force produced by the Eötvös effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure suggest that it is unimportant in the hydrostatic equilibrium. However, in the atmosphere, winds are associated with small deviations of pressure from the hydrostatic equilibrium. In the tropical atmosphere, the order of magnitude of the pressure deviations is so small that the contribution of the Eötvös effect to the pressure deviations is considerable.
In addition, objects traveling upwards (i.e. out) or downwards (i.e. in) are deflected to the west or east respectively. This effect is also the greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect. For example, idealized numerical modeling studies suggest that this effect can directly affect tropical large-scale wind field by roughly 10% given long-duration (2 weeks or more) heating or cooling in the atmosphere. Moreover, in the case of large changes of momentum, such as a spacecraft being launched into orbit, the effect becomes significant. The fastest and most fuel-efficient path to orbit is a launch from the equator that curves to a directly eastward heading.
Intuitive example
Imagine a train on a frictionless railway line running along the equator. Assume that, when in motion, it moves at the necessary speed to complete a trip around the world in one day (465 m/s). The Coriolis effect can be considered in three cases: when the train travels west, when it is at rest, and when it travels east. In each case, the Coriolis effect can be calculated from the rotating frame of reference on Earth first, and then checked against a fixed inertial frame. The image below illustrates the three cases as viewed by an observer at rest in a (near) inertial frame from a fixed point above the North Pole along the Earth's axis of rotation; the train is denoted by a few red pixels, fixed at the left side in the leftmost picture, moving in the others.
The train travels toward the west: In that case, it moves against the direction of rotation. Therefore, on the Earth's rotating frame the Coriolis term is pointed inwards towards the axis of rotation (down). This additional force downwards should cause the train to be heavier while moving in that direction. If one looks at this train from the fixed non-rotating frame on top of the center of the Earth, at that speed it remains stationary as the Earth spins beneath it. Hence, the only force acting on it is gravity and the reaction from the track. This force is greater (by 0.34%) than the force that the passengers and the train experience when at rest (rotating along with Earth). This difference is what the Coriolis effect accounts for in the rotating frame of reference.
The train comes to a stop: From the point of view on the Earth's rotating frame, the velocity of the train is zero, thus the Coriolis force is also zero and the train and its passengers recuperate their usual weight.From the fixed inertial frame of reference above Earth, the train now rotates along with the rest of the Earth. 0.34% of the force of gravity provides the centripetal force needed to achieve the circular motion on that frame of reference. The remaining force, as measured by a scale, makes the train and passengers "lighter" than in the previous case.
The train travels east: In this case, because it moves in the direction of Earth's rotating frame, the Coriolis term is directed outward from the axis of rotation (up). This upward force makes the train seem lighter still than when at rest. From the fixed inertial frame of reference above Earth, the train traveling east now rotates at twice the rate it did when at rest, so the amount of centripetal force needed to cause that circular path increases, leaving less force from gravity to act on the track. This is what the Coriolis term accounts for in the previous paragraph. As a final check, one can imagine a frame of reference rotating along with the train. Such a frame would be rotating at twice the angular velocity of Earth's rotating frame. The resulting centrifugal force component for that imaginary frame would be greater. Since the train and its passengers are at rest within it, that would be the only component in that frame, explaining again why the train and the passengers are lighter than in the previous two cases.
This also explains why high-speed projectiles that travel west are deflected down, and those that travel east are deflected up. This vertical component of the Coriolis effect is called the Eötvös effect.
The above example can be used to explain why the Eötvös effect starts diminishing when an object is traveling westward as its tangential speed increases above Earth's rotation (465 m/s). If the westward train in the above example increases speed, part of the force of gravity that pushes against the track accounts for the centripetal force needed to keep it in circular motion on the inertial frame. Once the train doubles its westward speed, to 930 m/s, that centripetal force becomes equal to the force the train experiences when it stops. From the inertial frame, in both cases it rotates at the same speed but in opposite directions. Thus, the force is the same, cancelling completely the Eötvös effect. Any object that moves westward at a speed above 930 m/s experiences an upward force instead. In the figure, the Eötvös effect is illustrated for an object on the train at different speeds. The parabolic shape is because the centripetal force is proportional to the square of the tangential speed. On the inertial frame, the bottom of the parabola is centered at the origin. The offset is because this argument uses the Earth's rotating frame of reference. The graph shows that the Eötvös effect is not symmetrical, and that the resulting downward force experienced by an object that travels west at high velocity is less than the resulting upward force when it travels east at the same speed.
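The weight changes described in this example can be reproduced with a small sketch (the equatorial radius and g are assumed round values; the function is illustrative, not from the article):

def apparent_weight_factor(v_east, R=6.378e6, omega=7.292e-5, g=9.81):
    # Fraction of rest weight read by a scale on the equatorial train, from
    # the inertial-frame view: the total tangential speed is u = omega*R +
    # v_east, and the centripetal demand u^2/R is supplied out of gravity.
    u = omega * R + v_east
    return 1.0 - (u * u / R) / g

# Westward at 465 m/s (stationary in the inertial frame), at rest, and
# westward at 930 m/s (same apparent weight as at rest, as argued above):
for v in (-465.0, 0.0, -930.0):
    print(v, apparent_weight_factor(v))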
Draining in bathtubs and toilets
Contrary to popular misconception, bathtubs, toilets, and other water receptacles do not drain in opposite directions in the Northern and Southern Hemispheres. This is because the magnitude of the Coriolis force is negligible at this scale. Forces determined by the initial conditions of the water (e.g. the geometry of the drain, the geometry of the receptacle, preexisting momentum of the water, etc.) are likely to be orders of magnitude greater than the Coriolis force and hence will determine the direction of water rotation, if any. For example, identical toilets flushed in both hemispheres drain in the same direction, and this direction is determined mostly by the shape of the toilet bowl.
Under real-world conditions, the Coriolis force does not influence the direction of water flow perceptibly. Only if the water is so still that the effective rotation rate of the Earth is faster than that of the water relative to its container, and if externally applied torques (such as might be caused by flow over an uneven bottom surface) are small enough, may the Coriolis effect determine the direction of the vortex. Without such careful preparation, the Coriolis effect will be much smaller than various other influences on drain direction, such as any residual rotation of the water and the geometry of the container.
Laboratory testing of draining water under atypical conditions
In 1962, Ascher Shapiro performed an experiment at MIT to test the Coriolis force on a large basin of water, six feet across, with a small wooden cross above the plug hole to display the direction of rotation, covering it and waiting for at least 24 hours for the water to settle. Under these precise laboratory conditions, he demonstrated the effect and consistent counterclockwise rotation. The experiment required extreme precision, since the acceleration due to the Coriolis effect is only a tiny fraction of that of gravity. The vortex was measured by a cross made of two slivers of wood pinned above the draining hole. The basin took 20 minutes to drain, and the cross began to turn only after about 15 minutes; by the end it was rotating once every 3 to 4 seconds. Shapiro reported that, under such well-controlled conditions, an observer in the Northern Hemisphere will always see a counterclockwise vortex, even though everyday sinks and bathtubs drain in unpredictable directions.
Lloyd Trefethen reported clockwise rotation in the Southern Hemisphere at the University of Sydney in five tests with settling times of 18 h or more.
Ballistic trajectories
The Coriolis force is important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about . The Coriolis force minutely changes the trajectory of a bullet, affecting accuracy at extremely long distances. It is adjusted for by accurate long-distance shooters, such as snipers. At the latitude of Sacramento, California, a northward shot would be deflected to the right. There is also a vertical component, explained in the Eötvös effect section above, which causes westward shots to hit low, and eastward shots to hit high.
The effects of the Coriolis force on ballistic trajectories should not be confused with the curvature of the paths of missiles, satellites, and similar objects when the paths are plotted on two-dimensional (flat) maps, such as the Mercator projection. The projections of the three-dimensional curved surface of the Earth to a two-dimensional surface (the map) necessarily results in distorted features. The apparent curvature of the path is a consequence of the sphericity of the Earth and would occur even in a non-rotating frame.
The Coriolis force on a moving projectile depends on velocity components in all three directions, latitude, and azimuth. The directions are typically downrange (the direction that the gun is initially pointing), vertical, and cross-range. With the sign conventions listed below, the Coriolis acceleration components are:

A_X = −2 Ω (V_Y cos φ sin A + V_Z sin φ)
A_Y = 2 Ω (V_X cos φ sin A + V_Z cos φ cos A)
A_Z = 2 Ω (V_X sin φ − V_Y cos φ cos A)

where
A_X, down-range acceleration.
A_Y, vertical acceleration, with positive indicating acceleration upward.
A_Z, cross-range acceleration, with positive indicating acceleration to the right.
V_X, down-range velocity.
V_Y, vertical velocity, with positive indicating upward.
V_Z, cross-range velocity, with positive indicating velocity to the right.
Ω = 0.00007292 rad/s, the angular velocity of the Earth (based on a sidereal day).
φ, latitude, with positive indicating Northern Hemisphere.
A, azimuth measured clockwise from due north.
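These components can be cross-checked against a direct evaluation of −2 Ω × V (a sketch under the same sign conventions; the function name and test values are assumptions):

import numpy as np

def coriolis_range_frame(V, lat_deg, az_deg, omega=7.292e-5):
    # Evaluate a = -2 * Omega x V in the right-handed (downrange, up, right)
    # frame: Earth's rotation vector resolved onto those axes at latitude
    # lat_deg for a gun pointed at azimuth az_deg (clockwise from north).
    lat, az = np.radians(lat_deg), np.radians(az_deg)
    Omega = omega * np.array([np.cos(lat) * np.cos(az),
                              np.sin(lat),
                              -np.cos(lat) * np.sin(az)])
    return -2.0 * np.cross(Omega, V)

# A northward 800 m/s shot at 45 N deflects to the right (positive A_Z):
print(coriolis_range_frame(np.array([800.0, 0.0, 0.0]), 45.0, 0.0))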
Visualization
To demonstrate the Coriolis effect, a parabolic turntable can be used.
On a flat turntable, the inertia of a co-rotating object forces it off the edge. However, if the turntable surface has the correct paraboloid (parabolic bowl) shape (see the figure) and rotates at the corresponding rate, the force components shown in the figure make the component of gravity tangential to the bowl surface exactly equal to the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) This carefully contoured surface allows the Coriolis force to be displayed in isolation.
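Although the text above does not state the required shape explicitly, it follows from the balance just described: equating the tangential component of gravity, g dh/dr, to the centripetal acceleration ω²r needed at radius r gives dh/dr = ω²r/g, so the equilibrium surface is the paraboloid

h(r) = h₀ + ω² r² / (2g)

where ω is the turntable's rotation rate and h₀ is the height of the surface at its center.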
Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with the turntable, with results as shown in the figure. In the left panel of the figure, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared with analysis and observation of elliptical motion in the inertial frame.
Because this reference frame rotates several times a minute rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger and so easier to observe on small time and spatial scales than is the Coriolis acceleration caused by the rotation of the Earth.
In a manner of speaking, the Earth is analogous to such a turntable. The rotation has caused the planet to settle on a spheroid shape, such that the normal force, the gravitational force and the centrifugal force exactly balance each other on a "horizontal" surface. (See equatorial bulge.)
The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum.
In other areas
Coriolis flow meter
A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle involves inducing a vibration of the tube through which the fluid passes. The vibration, though not completely circular, provides the rotating reference frame that gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid.
Molecular physics
In polyatomic molecules, the motion of the molecule can be described by a rigid-body rotation and internal vibration of atoms about their equilibrium positions. As a result of the vibrations of the atoms, the atoms are in motion relative to the rotating coordinate system of the molecule. Coriolis effects are therefore present, and make the atoms move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels, from which Coriolis coupling constants can be determined.
Gyroscopic precession
When an external torque is applied to a spinning gyroscope along an axis that is at right angles to the spin axis, the rim velocity that is associated with the spin becomes radially directed in relation to the external torque axis. This causes a torque-induced force to act on the rim in such a way as to tilt the gyroscope at right angles to the direction that the external torque would have tilted it. This tendency has the effect of keeping spinning bodies in their rotational frame.
Insect flight
Flies (Diptera) and some moths (Lepidoptera) exploit the Coriolis effect in flight with specialized appendages and organs that relay information about the angular velocity of their bodies. Coriolis forces resulting from linear motion of these appendages are detected within the rotating frame of reference of the insects' bodies. In the case of flies, their specialized appendages are dumbbell shaped organs located just behind their wings called "halteres".
The fly's halteres oscillate in a plane at the same beat frequency as the main wings so that any body rotation results in lateral deviation of the halteres from their plane of motion.
In moths, the antennae are known to be responsible for the sensing of Coriolis forces in a similar manner to the halteres in flies. In both flies and moths, a collection of mechanosensors at the base of the appendage are sensitive to deviations at the beat frequency, correlating to rotation in the pitch and roll planes, and at twice the beat frequency, correlating to rotation in the yaw plane.
Lagrangian point stability
In astronomy, Lagrangian points are five positions in the orbital plane of two large orbiting bodies where a small object affected only by gravity can maintain a stable position relative to the two large bodies. The first three Lagrangian points (L1, L2, L3) lie along the line connecting the two large bodies, while the last two points (L4 and L5) each form an equilateral triangle with the two large bodies. The L4 and L5 points, although they correspond to maxima of the effective potential in the coordinate frame that rotates with the two large bodies, are stable due to the Coriolis effect. The stability can result in orbits around just L4 or L5, known as tadpole orbits, where trojans can be found. It can also result in orbits that encircle L3, L4, and L5, known as horseshoe orbits.
| Physical sciences | Classical mechanics | null |
7794 | https://en.wikipedia.org/wiki/Crystallography | Crystallography | Crystallography is the branch of science devoted to the study of molecular and crystalline structure and properties. The word crystallography is derived from the Ancient Greek word (; "clear ice, rock-crystal"), and (; "to write"). In July 2012, the United Nations recognised the importance of the science of crystallography by proclaiming 2014 the International Year of Crystallography.
Crystallography is a broad topic, and many of its subareas, such as X-ray crystallography, are themselves important scientific topics. Crystallography ranges from the fundamentals of crystal structure to the mathematics of crystal geometry, including structures that are not periodic, such as quasicrystals. At the atomic scale it can involve the use of X-ray diffraction to produce experimental data that the tools of X-ray crystallography can convert into detailed positions of atoms, and sometimes electron density. At larger scales it includes experimental tools such as orientational imaging to examine the relative orientations at the grain boundary in materials. Crystallography plays a key role in many areas of biology, chemistry, and physics, as well as in new developments in these fields.
History and timeline
Before the 20th century, the study of crystals was based on physical measurements of their geometry using a goniometer. This involved measuring the angles of crystal faces relative to each other and to theoretical reference axes (crystallographic axes), and establishing the symmetry of the crystal in question. The position in 3D space of each crystal face is plotted on a stereographic net such as a Wulff net or Lambert net. The pole to each face is plotted on the net. Each point is labelled with its Miller index. The final plot allows the symmetry of the crystal to be established.
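As an illustrative sketch of the plotting step (the function and conventions here are my own hypothetical helpers, not a standard crystallographic library API), a face pole can be mapped onto the equatorial plane of a stereographic net like so:

```python
import numpy as np

def stereographic_pole(normal):
    """Project a crystal-face pole (unit normal) onto the equatorial
    plane, as on a Wulff net: an upper-hemisphere point (x, y, z)
    maps to (x, y) / (1 + z)."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)      # normalize to a unit pole
    if n[2] < 0:
        n = -n                     # fold lower-hemisphere poles up
    return n[:2] / (1.0 + n[2])

# Example: the pole of a face whose normal lies along [101]
print(stereographic_pole([1, 0, 1]))   # -> [0.4142..., 0.0]
```

Labelling each projected point with its Miller index then reproduces, in code, the stereogram from which the crystal's symmetry is read off.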
The discovery of X-rays and electrons in the last decade of the 19th century enabled the determination of crystal structures on the atomic scale, which brought about the modern era of crystallography. The first X-ray diffraction experiment was conducted in 1912 by Max von Laue, while electron diffraction was first realized in 1927 in the Davisson–Germer experiment and parallel work by George Paget Thomson and Alexander Reid. These developed into the two main branches of crystallography, X-ray crystallography and electron diffraction. The quality and throughput of solving crystal structures greatly improved in the second half of the 20th century, with the developments of customized instruments and phasing algorithms. Nowadays, crystallography is an interdisciplinary field, supporting theoretical and experimental discoveries in various domains. Modern-day scientific instruments for crystallography vary from laboratory-sized equipment, such as diffractometers and electron microscopes, to dedicated large facilities, such as photoinjectors, synchrotron light sources and free-electron lasers.
Methodology
Crystallographic methods depend mainly on analysis of the diffraction patterns of a sample targeted by a beam of some type. X-rays are most commonly used; other beams used include electrons or neutrons. Crystallographers often explicitly state the type of beam used, as in the terms X-ray diffraction, neutron diffraction and electron diffraction. These three types of radiation interact with the specimen in different ways.
X-rays interact with the spatial distribution of electrons in the sample.
Neutrons are scattered by the atomic nuclei through the strong nuclear forces, but in addition the magnetic moment of neutrons is non-zero, so they are also scattered by magnetic fields. When neutrons are scattered from hydrogen-containing materials, they produce diffraction patterns with high noise levels, which can sometimes be resolved by substituting deuterium for hydrogen.
Electrons are charged particles and therefore interact with the total charge distribution of both the atomic nuclei and the electrons of the sample.
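Whatever the beam, the angles at which strong diffraction occurs are governed by Bragg's law (stated here for orientation; it is standard diffraction theory rather than part of the passage above):

$$n\lambda = 2d\sin\theta,$$

where $\lambda$ is the wavelength of the X-rays, neutrons or electrons, $d$ is the spacing of the lattice planes, $\theta$ is the angle between the incident beam and those planes, and $n$ is a positive integer giving the diffraction order.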
It is hard to focus X-rays or neutrons, but since electrons are charged they can be focused and are used in electron microscopes to produce magnified images. There are many ways that transmission electron microscopy and related techniques, such as scanning transmission electron microscopy and high-resolution electron microscopy, can be used to obtain images with, in many cases, atomic resolution, from which crystallographic information can be obtained. There are also other methods, such as low-energy electron diffraction, low-energy electron microscopy and reflection high-energy electron diffraction, which can be used to obtain crystallographic information about surfaces.
Applications in various areas
Materials science
Crystallography is used by materials scientists to characterize different materials. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically because the natural shapes of crystals reflect the atomic structure. In addition, physical properties are often controlled by crystalline defects. The understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Most materials do not occur as a single crystal, but are polycrystalline in nature (they exist as an aggregate of small crystals with different orientations). As such, powder diffraction techniques, which take diffraction patterns of samples with a large number of crystals, play an important role in structural determination.
Other physical properties are also linked to crystallography. For example, the minerals in clay form small, flat, platelike structures. Clay can be easily deformed because the platelike particles can slip along each other in the plane of the plates, yet remain strongly connected in the direction perpendicular to the plates. Such mechanisms can be studied by crystallographic texture measurements. Crystallographic studies help elucidate the relationship between a material's structure and its properties, aiding in developing new materials with tailored characteristics. This understanding is crucial in various fields, including metallurgy, geology, and materials science. Advancements in crystallographic techniques, such as electron diffraction and X-ray crystallography, continue to expand our understanding of material behavior at the atomic level.
In another example, iron transforms from a body-centered cubic (bcc) structure called ferrite to a face-centered cubic (fcc) structure called austenite when it is heated. The fcc structure is a close-packed structure unlike the bcc structure; thus the volume of the iron decreases when this transformation occurs.
Crystallography is useful in phase identification. When manufacturing or using a material, it is generally desirable to know what compounds and what phases are present in the material, as their composition, structure and proportions will influence the material's properties. Each phase has a characteristic arrangement of atoms. X-ray or neutron diffraction can be used to identify which structures are present in the material, and thus which compounds are present. Crystallography covers the enumeration of the symmetry patterns which can be formed by atoms in a crystal and for this reason is related to group theory.
Biology
X-ray crystallography is the primary method for determining the molecular conformations of biological macromolecules, particularly proteins and nucleic acids such as DNA and RNA. The double-helical structure of DNA was deduced from crystallographic data. The first crystal structure of a macromolecule was solved in 1958, a three-dimensional model of the myoglobin molecule obtained by X-ray analysis. The Protein Data Bank (PDB) is a freely accessible repository for the structures of proteins and other biological macromolecules. Computer programs such as RasMol, PyMOL or VMD can be used to visualize biological molecular structures.
Neutron crystallography is often used to help refine structures obtained by X-ray methods or to solve a specific bond; the methods are often viewed as complementary, as X-rays are sensitive to electron positions and scatter most strongly off heavy atoms, while neutrons are sensitive to nucleus positions and scatter strongly even off many light isotopes, including hydrogen and deuterium.
Electron diffraction has been used to determine some protein structures, most notably membrane proteins and viral capsids.
Notation
Coordinates in square brackets such as [100] denote a direction vector (in real space).
Coordinates in angle brackets or chevrons such as <100> denote a family of directions which are related by symmetry operations. In the cubic crystal system for example, <100> would mean [100], [010], [001] or the negative of any of those directions.
Miller indices in parentheses such as (100) denote a plane of the crystal structure, and regular repetitions of that plane with a particular spacing. In the cubic system, the normal to the (hkl) plane is the direction [hkl], but in lower-symmetry cases, the normal to (hkl) is not parallel to [hkl].
Indices in curly brackets or braces such as {100} denote a family of planes and their normals. In cubic materials the symmetry makes the members of the family equivalent, just as angle brackets denote a family of equivalent directions. In non-cubic materials, <hkl> is not necessarily perpendicular to {hkl}.
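A minimal sketch of the cubic relationships above (illustrative code with hypothetical helper names; it assumes a cubic lattice, where the normal to (hkl) is [hkl] and the interplanar spacing is $d = a/\sqrt{h^2+k^2+l^2}$):

```python
import itertools
import numpy as np

def cubic_d_spacing(a, h, k, l):
    """Interplanar spacing of the (hkl) planes in a cubic lattice
    with lattice parameter a."""
    return a / np.sqrt(h**2 + k**2 + l**2)

def direction_family(h, k, l):
    """Enumerate the <hkl> family in a cubic crystal: all sign
    changes and permutations of the indices."""
    members = set()
    for perm in itertools.permutations((h, k, l)):
        for signs in itertools.product((1, -1), repeat=3):
            members.add(tuple(s * p for s, p in zip(signs, perm)))
    return sorted(members)

print(cubic_d_spacing(3.615, 1, 1, 1))  # (111) spacing for a = 3.615 Å, ~2.087 Å
print(direction_family(1, 0, 0))        # the six <100> directions
```

In lower-symmetry systems the family enumeration must be restricted to the crystal's actual point-group operations, and the d-spacing formula changes accordingly.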
Reference literature
The International Tables for Crystallography is an eight-book series that outlines the standard notations for formatting, describing and testing crystals. The series contains books that cover analysis methods and the mathematical procedures for determining organic structure through X-ray crystallography, electron diffraction, and neutron diffraction. The International Tables are focused on procedures, techniques and descriptions and do not list the physical properties of individual crystals themselves. Each book is about 1000 pages and the titles of the books are:
Vol A - Space Group Symmetry,
Vol A1 - Symmetry Relations Between Space Groups,
Vol B - Reciprocal Space,
Vol C - Mathematical, Physical, and Chemical Tables,
Vol D - Physical Properties of Crystals,
Vol E - Subperiodic Groups,
Vol F - Crystallography of Biological Macromolecules, and
Vol G - Definition and Exchange of Crystallographic Data.
| Physical sciences | Crystallography | null |
7808 | https://en.wikipedia.org/wiki/Cyprinodontiformes | Cyprinodontiformes | Cyprinodontiformes is an order of ray-finned fish, comprising mostly small, freshwater fish. Many popular aquarium fish, such as killifish and live-bearers, are included. They are closely related to the Atheriniformes and are occasionally included with them. A colloquial term for the order as a whole is toothcarps, though they are not actually close relatives of the true carps – the latter belong to the superorder Ostariophysi, while the toothcarps are Acanthopterygii.
The families of Cyprinodontiformes can be informally divided into three groups based on reproductive strategy: viviparous and ovoviviparous (in which all species give live birth) and oviparous (in which all species lay eggs). The two live-bearing groups differ in whether the young are carried to term within (ovoviviparous) or without (viviparous) an enclosing eggshell. Phylogenetically, however, one of the two suborders – the Aplocheiloidei – contains oviparous species exclusively, as do two of the four superfamilies of the other suborder (the Cyprinodontoidea and Valencioidea of the Cyprinodontoidei). Vivipary and ovovivipary have evolved independently from oviparous ancestors, the latter possibly twice.
Description
Some members of this order are notable for inhabiting extreme environments, such as saline or very warm waters, heavily polluted waters, rain water pools devoid of minerals and made acidic by decaying vegetation, or isolated situations where no other types of fish occur.
They are typically carnivores, and often live near the surface, where the oxygen-rich water compensates for environmental disadvantages. Scheel (1968) observed that the gut contents were invariably ants; others have reported insects, worms and aquatic crustaceans. Aquarium specimens are invariably seen eating protozoans from the water column and the surfaces of leaves; however, these are not apparent as stomach contents. Many members of the family Cyprinodontidae (the pupfishes) eat plant material as well, and some have adapted to a diet very high in algae, to the point where one, the American Flag Fish, is a renowned algae eater in the aquarium, in spite of belonging to an order of fishes that does not generally consume any plant material. In addition, killifish derive some of the carotenoids and other chemicals required to make their body pigments from pollen grains on the surface of and in the gut of insects they eat from the surface of the water; this can be simulated in culture by the use of special color-enhancing foods that contain these compounds.
Although the Cyprinodontiformes are a diverse group, most species contained within are small to medium-sized fish, with small mouths, large eyes, a single dorsal fin, and a rounded caudal fin. The largest species is the cuatro ojos (Anableps dowei), which measures in length, while the smallest, the least killifish (Heterandria formosa), is just long as an adult.
Systematics
CYPRINODONTIFORMES
Suborder Aplocheiloidei (all oviparous)
Family Aplocheilidae - Asian killifishes
Family Nothobranchiidae - African killifishes
Family Rivulidae - New World killifishes
Suborder Cyprinodontoidei
Superfamily Funduloidea
Family Profundulidae – Central American killifishes (oviparous)
Family Goodeidae – splitfins (largely viviparous)
Family Fundulidae – topminnows and North American killifishes (oviparous)
Superfamily Valencioidea (oviparous)
Family Valenciidae – Mediterranean killifishes
Superfamily Cyprinodontoidea (oviparous)
Family Cyprinodontidae – pupfishes
Family Aphaniidae – Oriental killifishes
Superfamily Poecilioidea
Family Anablepidae – four-eyed fishes and relatives (largely ovoviviparous)
Family Poeciliidae – livebearers and relatives (some oviparous, some ovoviviparous)
The family Aplocheilidae has been expanded by some authorities to include all the killifishes with three subfamilies, Aplocheilinae, Cynolebiinae and Nothobranchiinae, but this is not the classification adopted in the 5th Edition of Fishes of the World.
| Biology and health sciences | Acanthomorpha | null |
7819 | https://en.wikipedia.org/wiki/Cactus | Cactus | A cactus (plural: cacti, cactuses, or, less commonly, cactus) is a member of the plant family Cactaceae, a family of the order Caryophyllales comprising about 127 genera with some 1,750 known species. The word cactus derives, through Latin, from the Ancient Greek word κάκτος (káktos), a name originally used by Theophrastus for a spiny plant whose identity is now not certain. Cacti occur in a wide range of shapes and sizes. They are native to the Americas, ranging from Patagonia in the south to parts of western Canada in the north, with the exception of Rhipsalis baccifera, which is also found in Africa and Sri Lanka. Cacti are adapted to live in very dry environments, including the Atacama Desert, one of the driest places on Earth. Because of this, cacti show many adaptations to conserve water. For example, almost all cacti are succulents, meaning they have thickened, fleshy parts adapted to store water. Unlike many other succulents, the stem is the only part of most cacti where this vital process takes place. Most species of cacti have lost true leaves, retaining only spines, which are highly modified leaves. As well as defending against herbivores, spines help prevent water loss by reducing air flow close to the cactus and providing some shade. In the absence of true leaves, cacti's enlarged stems carry out photosynthesis.
Cactus spines are produced from specialized structures called areoles, a kind of highly reduced branch. Areoles are an identifying feature of cacti. As well as spines, areoles give rise to flowers, which are usually tubular and multipetaled. Many cacti have short growing seasons and long dormancies and are able to react quickly to any rainfall, helped by an extensive but relatively shallow root system that quickly absorbs any water reaching the ground surface. Cactus stems are often ribbed or fluted, with the number of ribs often corresponding to a number in the Fibonacci sequence (2, 3, 5, 8, 13, 21, 34, etc.). This allows them to expand and contract easily for quick water absorption after rain, followed by retention over long drought periods. Like other succulent plants, most cacti employ a special mechanism called "crassulacean acid metabolism" (CAM) as part of photosynthesis. Transpiration, during which carbon dioxide enters the plant and water escapes, does not take place during the day at the same time as photosynthesis, but instead occurs at night. The plant stores the carbon dioxide it takes in as malic acid, retaining it until daylight returns, and only then using it in photosynthesis. Because transpiration takes place during the cooler, more humid night hours, water loss is significantly reduced.
Many smaller cacti have globe-shaped stems, combining the highest possible volume for water storage with the lowest possible surface area for water loss from transpiration. The tallest free-standing cactus is Pachycereus pringlei, with a maximum recorded height of , and the smallest is Blossfeldia liliputiana, only about in diameter at maturity. A fully grown saguaro (Carnegiea gigantea) is said to be able to absorb as much as of water during a rainstorm. A few species differ significantly in appearance from most of the family. At least superficially, plants of the genera Leuenbergeria, Rhodocactus and Pereskia resemble other trees and shrubs growing around them. They have persistent leaves, and when older, bark-covered stems. Their areoles identify them as cacti, and in spite of their appearance, they, too, have many adaptations for water conservation. Leuenbergeria is considered close to the ancestral species from which all cacti evolved. In tropical regions, other cacti grow as forest climbers and epiphytes (plants that grow on trees). Their stems are typically flattened, almost leaf-like in appearance, with fewer or even no spines, such as the well-known Christmas cactus or Thanksgiving cactus (in the genus Schlumbergera).
Cacti have a variety of uses: many species are used as ornamental plants, others are grown for fodder or forage, and others for food (particularly their fruit). Cochineal is the product of an insect that lives on some cacti.
Many succulent plants in both the Old and New World – such as some Euphorbiaceae (euphorbias) – are also spiny stem succulents and because of this are sometimes incorrectly referred to as "cactus".
Morphology
The 1,500 to 1,800 species of cacti mostly fall into one of two groups of "core cacti": opuntias (subfamily Opuntioideae) and "cactoids" (subfamily Cactoideae). Most members of these two groups are easily recognizable as cacti. They have fleshy succulent stems that are major organs of photosynthesis. They have absent, small, or transient leaves. They have flowers with ovaries that lie below the sepals and petals, often deeply sunken into a fleshy receptacle (the part of the stem from which the flower parts grow). All cacti have areoles—highly specialized short shoots with extremely short internodes that produce spines, normal shoots, and flowers.
The remaining cacti fall into only two groups: three tree-like genera, Leuenbergeria, Pereskia and Rhodocactus (all formerly placed in Pereskia), and the much smaller Maihuenia. These two groups are rather different from other cacti, which means any description of cacti as a whole must frequently make exceptions for them. Species of the first three genera superficially resemble other tropical forest trees. When mature, they have woody stems that may be covered with bark and long-lasting leaves that provide the main means of photosynthesis. Their flowers may have superior ovaries (i.e., above the points of attachment of the sepals and petals) and areoles that produce further leaves. The two species of Maihuenia have succulent but non-photosynthetic stems and prominent succulent leaves.
Growth habit
Cacti show a wide variety of growth habits, which are difficult to divide into clear, simple categories.
Cacti can be tree-like (arborescent), meaning they typically have a single more-or-less woody trunk topped by several to many branches. In the genera Leuenbergeria, Pereskia and Rhodocactus, the branches are covered with leaves, so the species of these genera may not be recognized as cacti. In most other cacti, the branches are more typically cactus-like, bare of leaves and bark and covered with spines, as in Pachycereus pringlei or the larger opuntias. Some cacti may become tree-sized but without branches, such as larger specimens of Echinocactus platyacanthus. Cacti may also be described as shrubby, with several stems coming from the ground or from branches very low down, such as in Stenocereus thurberi.
Smaller cacti may be described as columnar. They consist of erect, cylinder-shaped stems, which may or may not branch, without a very clear division into trunk and branches. The boundary between columnar forms and tree-like or shrubby forms is difficult to define. Smaller and younger specimens of Cephalocereus senilis, for example, are columnar, whereas older and larger specimens may become tree-like. In some cases, the "columns" may be horizontal rather than vertical. Thus, Stenocereus eruca can be described as columnar even though it has stems growing along the ground, rooting at intervals.
Cacti whose stems are even smaller may be described as globular (or globose). They consist of shorter, more ball-shaped stems than columnar cacti. Globular cacti may be solitary, such as Ferocactus latispinus, or their stems may form clusters that can create large mounds. All or some stems in a cluster may share a common root.
Other forms
Other cacti have a quite different appearance. In tropical regions, some grow as forest climbers and epiphytes. Their stems are typically flattened and almost leaf-like in appearance, with few or even no spines. Climbing cacti can be very large; a specimen of Hylocereus was reported as long from root to the most distant stem. Epiphytic cacti, such as species of Rhipsalis or Schlumbergera, often hang downwards, forming dense clumps where they grow in trees high above the ground.
Stems
The leafless, spiny stem is the characteristic feature of the majority of cacti (all belonging to the largest subfamily, the Cactoideae). The stem is typically succulent, meaning it is adapted to store water. The surface of the stem may be smooth (as in some species of Opuntia) or covered with protuberances of various kinds, which are usually called tubercles. These vary from small "bumps" to prominent, nipple-like shapes in the genus Mammillaria and outgrowths almost like leaves in Ariocarpus species. The stem may also be ribbed or fluted in shape. The prominence of these ribs depends on how much water the stem is storing: when full (up to 90% of the mass of a cactus may be water), the ribs may be almost invisible on the swollen stem, whereas when the cactus is short of water and the stems shrink, the ribs may be very visible.
The stems of most cacti are some shade of green, often bluish or brownish green. Such stems contain chlorophyll and are able to carry out photosynthesis; they also have stomata (small structures that can open and close to allow passage of gases). Cactus stems are often visibly waxy.
Areoles
Areoles are structures unique to cacti. Although variable, they typically appear as woolly or hairy areas on the stems from which spines emerge. Flowers are also produced from areoles. In the genus Leuenbergeria, believed similar to the ancestor of all cacti, the areoles occur in the axils of leaves (i.e. in the angle between the leaf stalk and the stem). In leafless cacti, areoles are often borne on raised areas on the stem where leaf bases would have been.
Areoles are highly specialized and very condensed shoots or branches. In a normal shoot, nodes bearing leaves or flowers would be separated by lengths of stem (internodes). In an areole, the nodes are so close together, they form a single structure. The areole may be circular, elongated into an oval shape, or even separated into two parts; the two parts may be visibly connected in some way (e.g. by a groove in the stem) or appear entirely separate (a dimorphic areole). The part nearer the top of the stem then produces flowers, the other part spines. Areoles often have multicellular hairs (trichomes) that give the areole a hairy or woolly appearance, sometimes of a distinct color such as yellow or brown.
In most cacti, the areoles produce new spines or flowers only for a few years and then become inactive. This results in a relatively fixed number of spines, with flowers being produced only from the ends of stems, which are still growing and forming new areoles. In Pereskia, a genus close to the ancestor of cacti, areoles remain active for much longer; this is also the case in Opuntia and Neoraimondia.
Leaves
The great majority of cacti have no visible leaves; photosynthesis takes place in the stems (which may be flattened and leaflike in some species). Exceptions occur in three (taxonomically, four) groups of cacti. All the species of Leuenbergeria, Pereskia and Rhodocactus are superficially like normal trees or shrubs and have numerous leaves with a midrib and a flattened blade (lamina) on either side. This group is paraphyletic, forming two taxonomic clades. Many cacti in the opuntia group (subfamily Opuntioideae) also have visible leaves, which may be long-lasting (as in Pereskiopsis species) or produced only during the growing season and then lost (as in many species of Opuntia). The small genus Maihuenia also relies on leaves for photosynthesis. The structure of the leaves varies somewhat between these groups. Opuntioids and Maihuenia have leaves that appear to consist only of a midrib.
Even those cacti without visible photosynthetic leaves do usually have very small leaves, less than long in about half of the species studied and almost always less than long. The function of such leaves cannot be photosynthesis; a role in the production of plant hormones, such as auxin, and in defining axillary buds has been suggested.
Spines
Botanically, "spines" are distinguished from "thorns": spines are modified leaves, and thorns are modified branches. Cacti produce spines, always from areoles as noted above. Spines are present even in those cacti with leaves, such as Pereskia, Pereskiopsis and Maihuenia, so they clearly evolved before complete leaflessness. Some cacti only have spines when young, possibly only when seedlings. This is particularly true of tree-living cacti, such as Rhipsalis and Schlumbergera, but also of some ground-living cacti, such as Ariocarpus.
The spines of cacti are often useful in identification, since they vary greatly between species in number, color, size, shape and hardness, as well as in whether all the spines produced by an areole are similar or whether they are of distinct kinds. Most spines are straight or at most slightly curved, and are described as hair-like, bristle-like, needle-like or awl-like, depending on their length and thickness. Some cacti have flattened spines (e.g. Sclerocactus papyracanthus). Other cacti have hooked spines. Sometimes, one or more central spines are hooked, while outer spines are straight (e.g., Mammillaria rekoi).
In addition to normal-length spines, members of the subfamily Opuntioideae have relatively short spines, called glochids, that are barbed along their length and easily shed. These enter the skin and are difficult to remove due to being very fine and easily broken, causing long-lasting irritation.
Roots
Most ground-living cacti have only fine roots, which spread out around the base of the plant for varying distances, close to the surface. Some cacti have taproots; in genera such as Ariocarpus, these are considerably larger and of a greater volume than the body. Taproots may aid in stabilizing the larger columnar cacti. Climbing, creeping and epiphytic cacti may have only adventitious roots, produced along the stems where these come into contact with a rooting medium.
Flowers
Like their spines, cactus flowers are variable. Typically, the ovary is surrounded by material derived from stem or receptacle tissue, forming a structure called a pericarpel. Tissue derived from the petals and sepals continues the pericarpel, forming a composite tube—the whole may be called a floral tube, although strictly speaking only the part furthest from the base is floral in origin. The outside of the tubular structure often has areoles that produce wool and spines. Typically, the tube also has small scale-like bracts, which gradually change into sepal-like and then petal-like structures, so the sepals and petals cannot be clearly differentiated (and hence are often called "tepals"). Some cacti produce floral tubes without wool or spines (e.g. Gymnocalycium) or completely devoid of any external structures (e.g. Mammillaria). Unlike the flowers of most other cacti, Pereskia flowers may be borne in clusters.
Cactus flowers usually have many stamens, but only a single style, which may branch at the end into more than one stigma. The stamens usually arise from all over the inner surface of the upper part of the floral tube, although in some cacti, the stamens are produced in one or more distinct "series" in more specific areas of the inside of the floral tube.
The flower as a whole is usually radially symmetrical (actinomorphic), but may be bilaterally symmetrical (zygomorphic) in some species. Flower colors range from white through yellow and red to magenta.
Adaptations for water conservation
All cacti have some adaptations to promote efficient water use. Most cacti—opuntias and cactoids—specialize in surviving in hot and dry environments (i.e. are xerophytes), but the first ancestors of modern cacti were already adapted to periods of intermittent drought. A small number of cactus species in the tribes Hylocereeae and Rhipsalideae have become adapted to life as climbers or epiphytes, often in tropical forests, where water conservation is less important.
Leaves and spines
The absence of visible leaves is one of the most striking features of most cacti. Pereskia (which is close to the ancestral species from which all cacti evolved) does have long-lasting leaves, which are, however, thickened and succulent in many species. Other species of cactus with long-lasting leaves, such as the opuntioid Pereskiopsis, also have succulent leaves. A key issue in retaining water is the ratio of surface area to volume. Water loss is proportional to surface area, whereas the amount of water present is proportional to volume. Structures with a high surface area-to-volume ratio, such as thin leaves, necessarily lose water at a higher rate than structures with a low area-to-volume ratio, such as thickened stems.
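A one-line worked example of this ratio (elementary geometry, added for illustration rather than taken from the source): a sphere of radius $r$ has

$$\frac{A}{V} \;=\; \frac{4\pi r^2}{\tfrac{4}{3}\pi r^3} \;=\; \frac{3}{r},$$

while a thin, leaf-like slab of thickness $t$ has $A/V \approx 2/t$ (ignoring its edges). Since a leaf's thickness is far smaller than a succulent stem's radius, the leaf exposes vastly more evaporative surface per unit of stored water, which is why losing leaves and thickening stems conserves water.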
Spines, which are modified leaves, are present on even those cacti with true leaves, showing the evolution of spines preceded the loss of leaves. Although spines have a high surface area-to-volume ratio, at maturity they contain little or no water, being composed of fibers made up of dead cells. Spines provide protection from herbivores and camouflage in some species, and assist in water conservation in several ways. They trap air near the surface of the cactus, creating a moister layer that reduces evaporation and transpiration. They can provide some shade, which lowers the temperature of the surface of the cactus, also reducing water loss. When sufficiently moist air is present, such as during fog or early morning mist, spines can condense moisture, which then drips onto the ground and is absorbed by the roots.
Stems
The majority of cacti are stem succulents, i.e., plants in which the stem is the main organ used to store water. Water may form up to 90% of the total mass of a cactus. Stem shapes vary considerably among cacti. The cylindrical shape of columnar cacti and the spherical shape of globular cacti produce a low surface area-to-volume ratio, thus reducing water loss, as well as minimizing the heating effects of sunlight. The ribbed or fluted stems of many cacti allow the stem to shrink during periods of drought and then swell as it fills with water during periods of availability. A mature saguaro (Carnegiea gigantea) is said to be able to absorb as much as of water during a rainstorm. The outer layer of the stem usually has a tough cuticle, reinforced with waxy layers, which reduce water loss. These layers are responsible for the grayish or bluish tinge to the stem color of many cacti.
The stems of most cacti have adaptations to allow them to conduct photosynthesis in the absence of leaves. This is discussed further below under Metabolism.
Roots
Many cacti have roots that spread out widely, but only penetrate a short distance into the soil. In one case, a young saguaro only tall had a root system with a diameter of , but no more than deep. Cacti can also form new roots quickly when rain falls after a drought. The concentration of salts in the root cells of cacti is relatively high. All these adaptations enable cacti to absorb water rapidly during periods of brief or light rainfall. Thus, Ferocactus cylindraceus reportedly can take up a significant amount of water within 12 hours from as little as of rainfall, becoming fully hydrated in a few days.
Although in most cacti, the stem acts as the main organ for storing water, some cacti have in addition large taproots. These may be several times the length of the above-ground body in the case of species such as Copiapoa atacamensis, which grows in one of the driest places in the world, the Atacama Desert in northern Chile.
Metabolism
Photosynthesis requires plants to take in carbon dioxide gas (CO2). As they do so, they lose water through transpiration. Like other types of succulents, cacti reduce this water loss by the way in which they carry out photosynthesis. "Normal" leafy plants use the C3 mechanism: during daylight hours, CO2 is continually drawn out of the air present in spaces inside leaves and converted first into a compound containing three carbon atoms (3-phosphoglycerate) and then into products such as carbohydrates. The access of air to internal spaces within a plant is controlled by stomata, which are able to open and close. The need for a continuous supply of CO2 during photosynthesis means the stomata must be open, so water vapor is continuously being lost. Plants using the C3 mechanism lose as much as 97% of the water taken up through their roots in this way. A further problem is that as temperatures rise, the enzyme that captures CO2 starts to capture more and more oxygen instead, reducing the efficiency of photosynthesis by up to 25%.
Crassulacean acid metabolism (CAM) is a mechanism adopted by cacti and other succulents to avoid the problems of the C3 mechanism. In full CAM, the stomata open only at night, when temperatures and water loss are lowest. CO2 enters the plant and is captured in the form of organic acids stored inside cells (in vacuoles). The stomata remain closed throughout the day, and photosynthesis uses only this stored CO2. CAM uses water much more efficiently at the price of limiting the amount of carbon fixed from the atmosphere and thus available for growth. CAM-cycling is a less water-efficient system whereby stomata open in the day, just as in plants using the C3 mechanism. At night, or when the plant is short of water, the stomata close and the CAM mechanism is used to store CO2 produced by respiration for use later in photosynthesis. CAM-cycling is present in Pereskia species.
By studying the ratio of 13C to 12C incorporated into a plant—its isotopic signature—it is possible to deduce how much CO2 is taken up at night and how much in the daytime. Using this approach, most of the Pereskia species investigated exhibit some degree of CAM-cycling, suggesting this ability was present in the ancestor of all cacti. Pereskia leaves are claimed to only have the C3 mechanism with CAM restricted to stems. More recent studies show that "it is highly unlikely that significant carbon assimilation occurs in the stem"; Pereskia species are described as having "C3 with inducible CAM." Leafless cacti carry out all their photosynthesis in the stem, using full CAM. It is not clear whether stem-based CAM evolved once only in the core cacti, or separately in the opuntias and cactoids; CAM is known to have evolved convergently many times.
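For orientation, such isotopic signatures are conventionally reported as δ13C values relative to a standard (the definition below is standard stable-isotope practice, not taken from the passage above):

$$\delta^{13}\mathrm{C} \;=\; \left(\frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{sample}}}{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\text{standard}}} - 1\right) \times 1000\ \text{‰}.$$

Because the carbon-fixing enzyme discriminates against 13C most strongly when CO2 is taken up in daylight through open stomata, tissue built by full CAM typically shows less negative δ13C values than C3 tissue, which is what allows the two pathways to be distinguished in plant material.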
To carry out photosynthesis, cactus stems have undergone many adaptations. Early in their evolutionary history, the ancestors of modern cacti (other than Leuenbergeria species) developed stomata on their stems and began to delay developing bark. However, this alone was not sufficient; cacti with only these adaptations appear to do very little photosynthesis in their stems. Stems needed to develop structures similar to those normally found only in leaves. Immediately below the outer epidermis, a hypodermal layer developed made up of cells with thickened walls, offering mechanical support. Air spaces were needed between the cells to allow carbon dioxide to diffuse inwards. The center of the stem, the cortex, developed "chlorenchyma" – a plant tissue made up of relatively unspecialized cells containing chloroplasts, arranged into a "spongy layer" and a "palisade layer" where most of the photosynthesis occurs.
Taxonomy and classification
Naming and classifying cacti has been both difficult and controversial since the first cacti were discovered for science. The difficulties began with Carl Linnaeus. In 1737, he placed the cacti he knew into two genera, Cactus and Pereskia. However, when he published Species Plantarum in 1753—the starting point for modern botanical nomenclature—he relegated them all to one genus, Cactus. The word "cactus" is derived through Latin from the Ancient Greek κάκτος (káktos), a name used by Theophrastus for a spiny plant, which may have been the cardoon (Cynara cardunculus).
Later botanists, such as Philip Miller in 1754, divided cacti into several genera, which, in 1789, Antoine Laurent de Jussieu placed in his newly created family Cactaceae. By the early 20th century, botanists came to feel Linnaeus's name Cactus had become so confused as to its meaning (was it the genus or the family?) that it should not be used as a genus name. The 1905 Vienna botanical congress rejected the name Cactus and instead declared Mammillaria was the type genus of the family Cactaceae. It did, however, conserve the name Cactaceae, leading to the unusual situation in which the family Cactaceae no longer contains the genus after which it was named.
The difficulties continued, partly because giving plants scientific names relies on "type specimens". Ultimately, if botanists want to know whether a particular plant is an example of, say, Mammillaria mammillaris, they should be able to compare it with the type specimen to which this name is permanently attached. Type specimens are normally prepared by compression and drying, after which they are stored in herbaria to act as definitive references. However, cacti are very difficult to preserve in this way; they have evolved to resist drying and their bodies do not easily compress. A further difficulty is that many cacti were given names by growers and horticulturalists rather than botanists; as a result, the provisions of the International Code of Nomenclature for algae, fungi, and plants (which governs the names of cacti, as well as other plants) were often ignored. Curt Backeberg, in particular, is said to have named or renamed 1,200 species without one of his names ever being attached to a specimen, which, according to David Hunt, ensured he "left a trail of nomenclatural chaos that will probably vex cactus taxonomists for centuries."
Classification
In 1984, it was decided that the Cactaceae Section of the International Organization for Succulent Plant Study should set up a working party, now called the International Cactaceae Systematics Group (ICSG), to produce consensus classifications down to the level of genera. Their system has been used as the basis of subsequent classifications. Detailed treatments published in the 21st century have divided the family into around 125–130 genera and 1,400–1,500 species, which are then arranged into a number of tribes and subfamilies. The ICSG classification of the cactus family recognized four subfamilies, the largest of which was divided into nine tribes. The subfamilies were:
Subfamily Pereskioideae K. Schumann
The only genus in the ICSG classification was Pereskia. It has features considered closest to the ancestors of the Cactaceae. Plants are trees or shrubs with leaves; their stems are smoothly round in cross section, rather than being ribbed or having tubercles. Two systems may be used in photosynthesis, both the "normal" C3 mechanism and crassulacean acid metabolism (CAM)—an "advanced" feature of cacti and other succulents that conserves water.
Molecular phylogenetic studies showed that when broadly circumscribed, Pereskia was not monophyletic, and it has been split into three genera, Leuenbergeria, Rhodocactus and a narrowly circumscribed Pereskia. Leuenbergeria is then placed on its own in a separate monogeneric subfamily, Leuenbergerioideae.
Subfamily Opuntioideae K. Schumann
Some 15 genera are included in this subfamily. They may have leaves when they are young, but these are lost later. Their stems are usually divided into distinct "joints" or "pads" (cladodes). Plants vary in size from the small cushions of Maihueniopsis to treelike species of Opuntia, rising to or more.
Subfamily Maihuenioideae P. Fearn
The only genus is Maihuenia, with two species, both of which form low-growing mats. It has some features that are primitive within the cacti. Plants have leaves, and crassulacean acid metabolism is wholly absent.
Subfamily Cactoideae
Divided into nine tribes, this is the largest subfamily, including all the "typical" cacti. Members are highly variable in habit, varying from tree-like to epiphytic. Leaves are normally absent, although sometimes very reduced leaves are produced by young plants. Stems are usually not divided into segments, and are ribbed or tuberculate. Two of the tribes, Hylocereeae and Rhipsalideae, contain climbing or epiphytic forms with a rather different appearance; their stems are flattened and may be divided into segments.
Molecular phylogenetic studies have supported the monophyly of three of these subfamilies (not Pereskioideae), but have not supported all of the tribes or even genera below this level; indeed, a 2011 study found only 39% of the genera in the subfamily Cactoideae sampled in the research were monophyletic. Classification of the cacti currently remains uncertain and is likely to change.
Phylogeny and evolution
Phylogeny
A 2005 study suggested the genus Pereskia as then circumscribed (Pereskia sensu lato) was basal within the Cactaceae, but confirmed earlier suggestions it was not monophyletic, i.e., did not include all the descendants of a common ancestor. The Bayesian consensus cladogram from this study is shown below with subsequent generic changes added.
A 2011 study using fewer genes but more species also found that Pereskia s.l. was divided into the same clades, but was unable to resolve the members of the "core cacti" clade. It was accepted that the relationships shown above are "the most robust to date."
Leuenbergeria species (Pereskia s.l. Clade A) always lack two key features of the stem present in most of the remaining "caulocacti": like most non-cacti, their stems begin to form bark early in the plants' life and also lack stomata—structures that control admission of air into a plant and hence control photosynthesis. By contrast, caulocacti, including species of Rhodocactus and the remaining species of Pereskia s.s., typically delay forming bark and have stomata on their stems, thus giving the stem the potential to become a major organ for photosynthesis. (The two highly specialized species of Maihuenia are something of an exception.)
The first cacti are thought to have been only slightly succulent shrubs or small trees whose leaves carried out photosynthesis. They lived in tropical areas that experienced periodic drought. If Leuenbergeria is a good model of these early cacti, then, although they would have appeared superficially similar to other trees growing nearby, they had already evolved strategies to conserve water (some of which are present in members of other families in the order Caryophyllales). These strategies included being able to respond rapidly to periods of rain, and keeping transpiration low by using water very efficiently during photosynthesis. The latter was achieved by tightly controlling the opening of stomata. Like Pereskia species today, early ancestors may have been able to switch from the normal C3 mechanism, where carbon dioxide is used continuously in photosynthesis, to CAM cycling, in which when the stomata are closed, carbon dioxide produced by respiration is stored for later use in photosynthesis.
The clade containing Rhodocactus and Pereskia s.s. marks the beginnings of an evolutionary switch to using stems as photosynthetic organs. Stems have stomata and the formation of bark takes place later than in normal trees. The "core cacti" show a steady increase in both stem succulence and photosynthesis accompanied by multiple losses of leaves, more-or-less complete in the Cactoideae. One evolutionary question at present unanswered is whether the switch to full CAM photosynthesis in stems occurred only once in the core cacti, in which case it has been lost in Maihuenia, or separately in Opuntioideae and Cactoideae, in which case it never evolved in Maihuenia.
Understanding evolution within the core cacti clade is difficult, since phylogenetic relationships are still uncertain and not well related to current classifications. Thus, a 2011 study found "an extraordinarily high proportion of genera" were not monophyletic, so were not all descendants of a single common ancestor. For example, of the 36 genera in the subfamily Cactoideae sampled in the research, 22 (61%) were found not monophyletic. Nine tribes are recognized within Cactoideae in the International Cactaceae Systematics Group (ICSG) classification; one, Calymmantheae, comprises a single genus, Calymmanthium. Only two of the remaining eight – Cacteae and Rhipsalideae – were shown to be monophyletic in a 2011 study by Hernández-Hernández et al. For a more detailed discussion of the phylogeny of the cacti, see Classification of the Cactaceae.
Evolutionary history
No known fossils of cacti exist to throw light on their evolutionary history. However, the geographical distribution of cacti offers some evidence. Except for a relatively recent spread of Rhipsalis baccifera to parts of the Old World, cacti are plants of South America and mainly southern regions of North America. This suggests the family must have evolved after the ancient continent of Gondwana split into South America and Africa, which occurred during the Early Cretaceous. Precisely when after this split cacti evolved is less clear. Older sources suggest an early origin around 90–66 million years ago, during the Late Cretaceous. More recent molecular studies suggest a much younger origin, perhaps in the very Late Eocene to early Oligocene, around 35–30 million years ago. Based on the phylogeny of the cacti, the earliest diverging group (Leuenbergeria) may have originated in Central America and northern South America, whereas the caulocacti, those with more-or-less succulent stems, evolved later in the southern part of South America, and then moved northwards. Core cacti, those with strongly succulent stems, are estimated to have evolved around 25 million years ago. A possible stimulus to their evolution may have been uplifting in the central Andes, some 25–20 million years ago, which was associated with increasing and varying aridity. However, the current species diversity of cacti is thought to have arisen only in the last 10–5 million years (from the late Miocene into the Pliocene). Other succulent plants, such as the Aizoaceae in South Africa, the Didiereaceae in Madagascar and the genus Agave in the Americas, appear to have diversified at the same time, which coincided with a global expansion of arid environments.
Distribution
Cacti inhabit diverse regions, from coastal plains to high mountain areas. With one exception, they are native to the Americas, where their range extends from Patagonia to British Columbia and Alberta in western Canada. A number of centers of diversity exist. For cacti adapted to drought, the three main centers are Mexico and the southwestern United States; the southwestern Andes, where they are found in Peru, Bolivia, Chile and Argentina; and eastern Brazil, away from the Amazon Basin. Tree-living epiphytic and climbing cacti necessarily have different centers of diversity, as they require moister environments. They are mainly found in the coastal mountains and Atlantic forests of southeastern Brazil; in Bolivia, which is the center of diversity for the subfamily Rhipsalideae; and in forested regions of Central America, where the climbing Hylocereeae are most diverse.
Rhipsalis baccifera is the exception; it is native to both the Americas and the Old World, where it is found in tropical Africa, Madagascar, and Sri Lanka. One theory is it was spread by being carried as seeds in the digestive tracts of migratory birds; the seeds of Rhipsalis are adapted for bird distribution. Old World populations are polyploid, and regarded as distinct subspecies, supporting the idea that the spread was not recent. The alternative theory is the species initially crossed the Atlantic on European ships trading between South America and Africa, after which birds may have spread it more widely.
Naturalized species
Many other species have become naturalized outside the Americas after having been introduced by people, especially in Australia, Hawaii, and the Mediterranean region. In Australia, species of Opuntia, particularly Opuntia stricta, were introduced in the 19th century for use as natural agricultural fences and in an attempt to establish a cochineal industry. They rapidly became a major weed problem, but are now controlled by biological agents, particularly the moth Cactoblastis cactorum. The weed potential of Opuntia species in Australia continues, however, leading to all opuntioid cacti except O. ficus-indica being declared Weeds of National Significance by the Australian Weeds Committee in April 2012.
The Arabian Peninsula has a wide variety of ever-increasing, introduced cactus populations. Some of these are cultivated, some are escapes from cultivation, and some are invasives that are presumed to be ornamental escapes.
Reproductive ecology
Cactus flowers are pollinated by insects, birds and bats. None are known to be wind-pollinated and self-pollination occurs in only a very few species; for example the flowers of some species of Frailea do not open (cleistogamy). The need to attract pollinators has led to the evolution of pollination syndromes, which are defined as groups of "floral traits, including rewards, associated with the attraction and utilization of a specific group of animals as pollinators."
Bees are the most common pollinators of cacti; bee-pollination is considered to have been the first to evolve. Day-flying butterflies and nocturnal moths are associated with different pollination syndromes. Butterfly-pollinated flowers are usually brightly colored, opening during the day, whereas moth-pollinated flowers are often white or pale in color, opening only in the evening and at night. As an example, Lophocereus schottii is pollinated by a particular species of moth, Upiga virescens, which also lays its eggs among the developing seeds its caterpillars later consume. The flowers of this cactus are funnel-shaped, white to deep pink, up to long, and open at night.
Hummingbirds are significant pollinators of cacti. Species showing the typical hummingbird-pollination syndrome have flowers with colors towards the red end of the spectrum, anthers and stamens that protrude from the flower, and a shape that is not radially symmetrical, with a lower lip that bends downwards; they produce large amounts of nectar with a relatively low sugar content. Schlumbergera species, such as S. truncata, have flowers that correspond closely to this syndrome. Other hummingbird-pollinated genera include Cleistocactus and Disocactus.
Bat-pollination is relatively uncommon in flowering plants, but about a quarter of the genera of cacti are known to be pollinated by bats—an unusually high proportion, exceeded among eudicots by only two other families, both with very few genera. Columnar cacti growing in semidesert areas are among those most likely to be bat-pollinated; this may be because bats are able to travel considerable distances, so are effective pollinators of plants growing widely separated from one another. The pollination syndrome associated with bats includes a tendency for flowers to open in the evening and at night, when bats are active. Other features include a relatively dull color, often white or green; a radially symmetrical shape, often tubular; a smell described as "musty"; and the production of a large amount of sugar-rich nectar. Carnegiea gigantea is an example of a bat-pollinated cactus, as are many species of Pachycereus and Pilosocereus.
The fruits produced by cacti after the flowers have been fertilized vary considerably; many are fleshy, although some are dry. All contain a large number of seeds. Fleshy, colorful and sweet-tasting fruits are associated with seed dispersal by birds. The seeds pass through their digestive systems and are deposited in their droppings. Fruit that falls to the ground may be eaten by other animals; giant tortoises are reported to distribute Opuntia seeds in the Galápagos Islands. Ants appear to disperse the seeds of a few genera, such as Blossfeldia. Drier spiny fruits may cling to the fur of mammals or be moved around by the wind.
Uses
Early history
There is still controversy as to the precise dates when humans first entered those areas of the New World where cacti are commonly found, and hence when they might first have used them. An archaeological site in Chile has been dated to around 15,000 years ago, suggesting cacti would have been encountered before then. Early evidence of the use of cacti includes cave paintings in the Serra da Capivara in Brazil, and seeds found in ancient middens (waste dumps) in Mexico and Peru, with dates estimated at 12,000–9,000 years ago. Hunter-gatherers likely collected cactus fruits in the wild and brought them back to their camps.
It is not known when cacti were first cultivated. Opuntias (prickly pears) were used for a variety of purposes by the Aztecs, whose empire, lasting from the 14th to the 16th century, had a complex system of horticulture. Their capital from the 15th century was Tenochtitlan (now Mexico City); one explanation for the origin of the name is that it includes the Nahuatl word nōchtli, referring to the fruit of an opuntia. The coat of arms of Mexico shows an eagle perched on a cactus while holding a snake, an image at the center of the myth of the founding of Tenochtitlan. The Aztecs symbolically linked the ripe red fruits of an opuntia to human hearts; just as the fruit quenches thirst, so offering human hearts to the sun god ensured the sun would keep moving.
Europeans first encountered cacti when they arrived in the New World late in the 15th century. Their first landfalls were in the West Indies, where relatively few cactus genera are found; one of the most common is the genus Melocactus. Thus, melocacti were possibly among the first cacti seen by Europeans. Melocactus species were present in English collections of cacti before the end of the 16th century (by 1570, according to one source), where they were called Echinomelocactus, later shortened to Melocactus by Joseph Pitton de Tournefort in the early 18th century. Cacti, both purely ornamental species and those with edible fruit, continued to arrive in Europe, so Carl Linnaeus was able to name 22 species by 1753. One of these, his Cactus opuntia (now part of Opuntia ficus-indica), was described as having "larger fruit ... now in Spain and Portugal", indicative of its early use in Europe.
Food
The plant now known as Opuntia ficus-indica, or the Indian fig cactus, has long been an important source of food. The original species is thought to have come from central Mexico, although this is now obscure because the indigenous people of southern North America developed and distributed a range of horticultural varieties (cultivars), including forms of the species and hybrids with other opuntias. Both the fruit and pads are eaten, the former often under the Spanish name tuna, the latter under the name nopal. Cultivated forms are often significantly less spiny or even spineless. The nopal industry in Mexico was said to be worth US$150 million in 2007. The Indian fig cactus was probably already present in the Caribbean when the Spanish arrived, and was soon after brought to Europe. It spread rapidly in the Mediterranean area, both naturally and by being introduced—so much so that early botanists assumed it was native to the area. Outside the Americas, the Indian fig cactus is an important commercial crop in Sicily, Algeria and other North African countries. Fruits of other opuntias are also eaten, generally under the same name, tuna. Flower buds, particularly of Cylindropuntia species, are also consumed.
Almost any fleshy cactus fruit is edible. The word pitaya or pitahaya (usually considered to have been taken into Spanish from Haitian creole) can be applied to a range of "scaly fruit", particularly those of columnar cacti. The fruit of the saguaro (Carnegiea gigantea) has long been important to the indigenous peoples of northwestern Mexico and the southwestern United States, including the Sonoran Desert. It can be preserved by boiling to produce syrup and by drying. The syrup can also be fermented to produce an alcoholic drink. Fruits of Stenocereus species have also been important food sources in similar parts of North America; Stenocereus queretaroensis is cultivated for its fruit. In more tropical southern areas, the climber Selenicereus undatus provides pitahaya orejona, now widely grown in Asia under the name dragon fruit. Other cacti providing edible fruit include species of Echinocereus, Ferocactus, Mammillaria, Myrtillocactus, Pachycereus, Peniocereus and Selenicereus. The bodies of cacti other than opuntias are less often eaten, although Anderson reported that Neowerdermannia vorwerkii is prepared and eaten like potatoes in upland Bolivia.
Psychoactive agents
A number of species of cacti have been shown to contain psychoactive agents, chemical compounds that can cause changes in mood, perception and cognition through their effects on the brain. Two species have a long history of use by the indigenous peoples of the Americas: peyote, Lophophora williamsii, in North America, and the San Pedro cactus, Trichocereus macrogonus var. pachanoi, in South America. Both contain mescaline.
L. williamsii is native to northern Mexico and southern Texas. Individual stems are low and squat, growing singly or in clumps, and a large part of the stem is usually below ground. Mescaline is concentrated in the photosynthetic portion of the stem above ground. The center of the stem, which contains the growing point (the apical meristem), is sunken. Experienced collectors of peyote remove a thin slice from the top of the plant, leaving the growing point intact, thus allowing the plant to regenerate. Evidence indicates peyote was in use more than 5,500 years ago; dried peyote buttons presumed to be from a site on the Rio Grande, Texas, were radiocarbon dated to around 3780–3660 BC. Peyote is perceived as a means of accessing the spirit world. Attempts by the Roman Catholic church to suppress its use after the Spanish conquest were largely unsuccessful, and by the middle of the 20th century, peyote was more widely used than ever by indigenous peoples as far north as Canada. It is now used formally by the Native American Church.
Trichocereus macrogonus var. pachanoi (syn. Echinopsis pachanoi) is native to Ecuador and Peru. It is very different in appearance from L. williamsii, having tall stems that branch from the base, giving the whole plant a shrubby or tree-like appearance. Archaeological evidence of the use of this cactus appears to date back to 2,000–2,300 years ago, with carvings and ceramic objects showing columnar cacti. Although church authorities under the Spanish attempted to suppress its use, this failed, as shown by the Christian element in the common name "San Pedro cactus" (Saint Peter cactus). Anderson attributes the name to the belief that just as St Peter holds the keys to heaven, the effects of the cactus allow users "to reach heaven while still on earth." It continues to be used for its psychoactive effects, both for spiritual and for healing purposes, often combined with other psychoactive agents, such as Datura ferox and tobacco. Several other species of Echinopsis, including E. peruviana and E. lageniformis, also contain mescaline.
Ornamental plants
Cacti were cultivated as ornamental plants from the time they were first brought from the New World. By the early 1800s, enthusiasts in Europe had large collections (often including other succulents alongside cacti). Rare plants were sold for very high prices. Suppliers of cacti and other succulents employed collectors to obtain plants from the wild, in addition to growing their own. In the late 1800s, collectors turned to orchids, and cacti became less popular, although never disappearing from cultivation.
Cacti are often grown in greenhouses, particularly in regions unsuited to the cultivation of cacti outdoors, such as the northern parts of Europe and North America. Here, they may be kept in pots or grown in the ground. Cacti are also grown as houseplants, many being tolerant of the often dry atmosphere. Cacti in pots may be placed outside in the summer to ornament gardens or patios, and then kept under cover during the winter. Less drought-resistant epiphytes, such as epiphyllum hybrids, Schlumbergera (the Thanksgiving or Christmas cactus) and Hatiora (the Easter cactus), are widely cultivated as houseplants.
Cacti may also be planted outdoors in regions with suitable climates. Concern for water conservation in arid regions has led to the promotion of gardens requiring less watering (xeriscaping). For example, in California, the East Bay Municipal Utility District sponsored the publication of a book on plants and landscapes for summer-dry climates. Cacti are one group of drought-resistant plants recommended for dry landscape gardening.
Other uses
Cacti have many other uses. They are used for human food and as fodder for animals, usually after burning off their spines. In addition to their use as psychoactive agents, some cacti are employed in herbal medicine. The practice of using various species of Opuntia in this way has spread from the Americas, where they naturally occur, to other regions where they grow, such as India.
Cochineal is a red dye produced by a scale insect that lives on species of Opuntia. The dye was long used by the peoples of Central and North America, but demand fell rapidly when European manufacturers began to produce synthetic dyes in the middle of the 19th century. Commercial production has now increased following a rise in demand for natural dyes.
Cacti are used as construction materials. Living cactus fences are employed as barricades around buildings to prevent people breaking in. They are also used to corral animals. The woody parts of cacti, such as Cereus repandus and Echinopsis atacamensis, are used in buildings and in furniture. The frames of wattle and daub houses built by the Seri people of Mexico may use parts of the saguaro (Carnegiea gigantea). The very fine spines and hairs (trichomes) of some cacti were used as a source of fiber for filling pillows and in weaving.
Conservation
All cacti are included in Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which "lists species that are not necessarily now threatened with extinction but that may become so unless trade is closely controlled." Control is exercised by making international trade in most specimens of cacti illegal unless permits have been issued, at least for exports. Some exceptions are allowed, e.g., for "naturalized or artificially propagated plants". Some cacti, such as all Ariocarpus and Discocactus species, are included in the more restrictive Appendix I, used for the "most endangered" species. These may only be moved between countries for non-commercial purposes, and only then when accompanied by both export and import permits.
The three main threats to cacti in the wild are development, grazing and over-collection. Development takes many forms. The construction of a dam near Zimapan, Mexico, caused the destruction of a large part of the natural habitat of Echinocactus grusonii. Urban development and highways have destroyed cactus habitats in parts of Mexico, New Mexico and Arizona, including the Sonoran Desert. The conversion of land to agriculture has affected populations of Ariocarpus kotschoubeyanus in Mexico, where dry plains were plowed for maize cultivation, and of Copiapoa and Eulychnia in Chile, where valley slopes were planted with vines. Grazing, in many areas by introduced animals, such as goats, has caused serious damage to populations of cacti (as well as other plants); two examples cited by Anderson are the Galápagos Islands generally and the effect on Browningia candelaris in Peru. Over-collection of cacti for sale has greatly affected some species. For example, the type locality of Pelecyphora strobiliformis near Miquihuana, Mexico, was virtually denuded of plants, which were dug up for sale in Europe. Illegal collecting of cacti from the wild continues to pose a threat.
Conservation of cacti can be in situ or ex situ. In situ conservation involves preserving habitats through enforcement of legal protection and the creation of specially protected areas such as national parks and reserves. Examples of such protected areas in the United States include Big Bend National Park, Texas; Joshua Tree National Park, California; and Saguaro National Park, Arizona. Latin American examples include Parque Nacional del Pinacate, Sonora, Mexico and Pan de Azúcar National Park, Chile. Ex situ conservation aims to preserve plants and seeds outside their natural habitats, often with the intention of later reintroduction. Botanical gardens play an important role in ex situ conservation; for example, seeds of cacti and other succulents are kept in long-term storage at the Desert Botanical Garden, Arizona.
Cultivation
The popularity of cacti means many books are devoted to their cultivation. Cacti naturally occur in a wide range of habitats and are now grown in many countries with different climates, so precisely replicating the conditions in which a species normally grows is usually not practical. A broad distinction can be made between semidesert cacti and epiphytic cacti, which need different conditions and are best grown separately. This section is primarily concerned with the cultivation of semidesert cacti in containers and under protection, such as in a greenhouse or in the home, rather than cultivation outside in the ground in those climates that permit it. For the cultivation of epiphytic cacti, see Cultivation of Schlumbergera (Christmas or Thanksgiving cacti), and Cultivation of epiphyllum hybrids.
Growing medium
The purpose of the growing medium is to provide support and to store water, oxygen and dissolved minerals to feed the plant. In the case of cacti, there is general agreement that an open medium with a high air content is important. When cacti are grown in containers, recommendations as to how this should be achieved vary greatly; Miles Anderson says that if asked to describe a perfect growing medium, "ten growers would give 20 different answers". Roger Brown suggests a mixture of two parts commercial soilless growing medium, one part hydroponic clay and one part coarse pumice or perlite, with the addition of soil from earthworm castings. The general recommendation of 25–75% organic-based material, the rest being inorganic such as pumice, perlite or grit, is supported by other sources. However, the use of organic material is rejected altogether by others; Hecht says that cacti (other than epiphytes) "want soil that is low in or free of humus", and recommends coarse sand as the basis of a growing medium.
Watering
Semi-desert cacti need careful watering. General advice is hard to give, since the frequency of watering required depends on where the cacti are being grown, the nature of the growing medium, and the original habitat of the cacti. Brown says that more cacti are lost through the "untimely application of water than for any other reason" and that even during the dormant winter season, cacti need some water. Other sources say that water can be withheld during winter (November to March in the Northern Hemisphere). Another issue is the hardness of the water; where it is necessary to use hard water, regular re-potting is recommended to avoid the build-up of salts. The general advice given is that during the growing season, cacti should be allowed to dry out between thorough waterings. A water meter can help in determining when the soil is dry.
Light and temperature
Although semi-desert cacti may be exposed to high light levels in the wild, they may still need some shading when subjected to the higher light levels and temperatures of a greenhouse in summer; allowing the greenhouse temperature to rise too high is not recommended. The minimum winter temperature required depends very much on the species of cactus involved. For a mixed collection, a minimum temperature modestly above freezing is often suggested, except for cold-sensitive genera such as Melocactus and Discocactus. Some cacti, particularly those from the high Andes, are fully frost-hardy when kept dry (e.g. Rebutia minuscula survives temperatures well below freezing in cultivation) and may flower better when exposed to a period of cold.
Propagation
Cacti can be propagated by seed, cuttings or grafting. Seed sown early in the year produces seedlings that benefit from a longer growing period. Seed is sown in a moist growing medium and then kept in a covered environment, until 7–10 days after germination, to avoid drying out. A very wet growing medium can cause both seeds and seedlings to rot. Warmth is required for germination, and moderately warm soil promotes the best root growth. Low light levels are sufficient during germination, but afterwards semi-desert cacti need higher light levels to produce strong growth, although acclimatization is needed to conditions in a greenhouse, such as higher temperatures and strong sunlight.
Reproduction by cuttings makes use of parts of a plant that can grow roots. Some cacti produce "pads" or "joints" that can be detached or cleanly cut off. Other cacti produce offsets that can be removed. Otherwise, stem cuttings can be made, ideally from relatively new growth. It is recommended that any cut surfaces be allowed to dry for a period of several days to several weeks until a callus forms over the cut surface. Rooting can then take place in an appropriate growing medium kept warm.
Grafting is used for species difficult to grow well in cultivation or that cannot grow independently, such as some chlorophyll-free forms with white, yellow or red bodies, or some forms that show abnormal growth (e.g., cristate or monstrose forms). For the host plant (the stock), growers choose one that grows strongly in cultivation and is compatible with the plant to be propagated: the scion. The grower makes cuts on both stock and scion and joins the two, binding them together while they unite. Various kinds of graft are used: flat grafts, where both scion and stock are of similar diameters, and cleft grafts, where a smaller scion is inserted into a cleft made in the stock.
Commercially, huge numbers of cacti are produced annually. For example, in 2002 in Korea alone, 49 million plants were propagated, with a value of almost US$9 million. Most of them (31 million plants) were propagated by grafting.
Pests and diseases
A range of pests attack cacti in cultivation. Those that feed on sap include mealybugs, living on both stems and roots; scale insects, generally only found on stems; whiteflies, which are said to be an "infrequent" pest of cacti; red spider mites, which are very small but can occur in large numbers, constructing a fine web around themselves and badly marking the cactus by sucking its sap, even if they do not kill it; and thrips, which particularly attack flowers. Some of these pests are resistant to many insecticides, although there are biological controls available. Roots of cacti can be eaten by the larvae of sciarid flies and fungus gnats. Slugs and snails also eat cacti.
Fungi, bacteria and viruses attack cacti, the first two particularly when plants are over-watered. Fusarium rot can gain entry through a wound and cause rotting accompanied by red-violet mold. "Helminthosporium rot" is caused by Bipolaris cactivora (syn. Helminthosporium cactivorum); Phytophthora species also cause similar rotting in cacti. Fungicides may be of limited value in combating these diseases. Several viruses have been found in cacti, including cactus virus X. These appear to cause only limited visible symptoms, such as chlorotic (pale green) spots and mosaic effects (streaks and patches of paler color). However, in an Agave species, cactus virus X has been shown to reduce growth, particularly when the roots are dry. There are no treatments for virus diseases.
| Biology and health sciences | Caryophyllales | null |
7834 | https://en.wikipedia.org/wiki/Chain%20reaction | Chain reaction | A chain reaction is a sequence of reactions where a reactive product or by-product causes additional reactions to take place. In a chain reaction, positive feedback leads to a self-amplifying chain of events.
Chain reactions are one way that systems which are not in thermodynamic equilibrium can release energy or increase entropy in order to reach a more stable state. For example, a system may not be able to reach a lower energy state by releasing energy into the environment, because it is hindered or prevented in some way from taking the path that will result in the energy release. If a reaction results in a small energy release making way for more energy releases in an expanding chain, then the system will typically collapse explosively until much or all of the stored energy has been released.
A macroscopic metaphor for chain reactions is thus a snowball causing a larger snowball until finally an avalanche results ("snowball effect"). This is a result of stored gravitational potential energy seeking a path of release over friction. Chemically, the equivalent to a snow avalanche is a spark causing a forest fire. In nuclear physics, a single stray neutron can result in a prompt critical event, which may finally be energetic enough for a nuclear reactor meltdown or (in a bomb) a nuclear explosion.
Another metaphor for a chain reaction is the domino effect, named after the act of domino toppling, where the simple action of toppling one domino leads to all dominoes eventually toppling, even if they are significantly larger.
Numerous chain reactions can be represented by a mathematical model based on Markov chains.
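As an illustration of this kind of model, the following is a minimal sketch, not taken from the source and with arbitrarily chosen probabilities, of a branching-process simulation in which each active particle independently terminates, propagates, or branches at every step:

```python
import random

def simulate_chain(n_initial=1, p_terminate=0.25, p_branch=0.30, max_steps=40):
    """Toy branching-process (Markov chain) model of a chain reaction.

    At each generation, every active particle independently either
    terminates (no successor), branches (two successors), or
    propagates (one successor). Returns the particle count per step.
    """
    counts = [n_initial]
    active = n_initial
    for _ in range(max_steps):
        nxt = 0
        for _ in range(active):
            r = random.random()
            if r < p_terminate:
                pass              # termination: the chain carrier is lost
            elif r < p_terminate + p_branch:
                nxt += 2          # chain branching: two new carriers
            else:
                nxt += 1          # ordinary propagation: one carrier
        counts.append(nxt)
        active = nxt
        if active == 0:
            break                 # the whole chain has died out
    return counts

# Mean offspring per particle = 2*0.30 + 1*0.45 = 1.05 > 1, so on
# average the population grows exponentially (a "supercritical" chain).
print(simulate_chain())
```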
Chemical chain reactions
History
In 1913, the German chemist Max Bodenstein first put forth the idea of chemical chain reactions: when two molecules react, not only molecules of the final reaction products are formed, but also some unstable molecules that can react further with the parent molecules with a far larger probability than the initial reactants. (In the new reaction, further unstable molecules are formed besides the stable products, and so on.)
In 1918, Walther Nernst proposed that the photochemical reaction between hydrogen and chlorine is a chain reaction in order to explain the observed quantum yield: one photon of light is responsible for the formation of as many as 10^6 molecules of the product HCl. Nernst suggested that the photon dissociates a Cl2 molecule into two Cl atoms, each of which initiates a long chain of reaction steps forming HCl.
In 1923, Danish and Dutch scientists J. A. Christiansen and Hendrik Anthony Kramers, in an analysis of the formation of polymers, pointed out that such a chain reaction need not start with a molecule excited by light, but could also start with two molecules colliding violently due to thermal energy, as previously proposed for initiation of chemical reactions by van 't Hoff.
Christiansen and Kramers also noted that if, in one link of the reaction chain, two or more unstable molecules are produced, the reaction chain would branch and grow. The result is in fact an exponential growth, thus giving rise to explosive increases in reaction rates, and indeed to chemical explosions themselves. This was the first proposal for the mechanism of chemical explosions.
A quantitative theory of chemical chain reactions was created later by Soviet physicist Nikolay Semyonov in 1934. Semyonov shared the Nobel Prize in 1956 with Sir Cyril Norman Hinshelwood, who independently developed many of the same quantitative concepts.
Typical steps
The main types of steps in a chain reaction are the following.
Initiation (formation of active particles or chain carriers, often free radicals, in either a thermal or a photochemical step)
Propagation (may comprise several elementary steps in a cycle, where the active particle through reaction forms another active particle which continues the reaction chain by entering the next elementary step). In effect the active particle serves as a catalyst for the overall reaction of the propagation cycle. Particular cases are:
chain branching (a propagation step where one active particle enters the step and two or more are formed);
chain transfer (a propagation step in which the active particle is a growing polymer chain which reacts to form an inactive polymer whose growth is terminated and an active small particle (such as a radical), which may then react to form a new polymer chain).
Termination (elementary step in which the active particle loses its activity; e. g. by recombination of two free radicals).
The chain length is defined as the average number of times the propagation cycle is repeated, and equals the overall reaction rate divided by the initiation rate.
Some chain reactions have complex rate equations with fractional order or mixed order kinetics.
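In symbols (a restatement of the definition above, with a worked number from the Nernst example described earlier rather than a figure given in this passage), the chain length can be written:

$$\nu = \frac{\text{overall reaction rate}}{\text{initiation rate}}, \qquad \text{e.g. } \nu \approx 10^{6} \text{ for the } \mathrm{H_2 + Cl_2} \text{ photochemical chain,}$$

since each initiating photon there leads to roughly a million product molecules.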
Detailed example: the hydrogen-bromine reaction
The reaction H2 + Br2 → 2 HBr proceeds by the following mechanism:
Initiation
Br2 → 2 Br• (thermal) or Br2 + hν → 2 Br• (photochemical)
each Br atom is a free radical, indicated by the symbol "•" representing an unpaired electron.
Propagation (here a cycle of two steps)
Br• + H2 → HBr + H•
H• + Br2 → HBr + Br•
the sum of these two steps corresponds to the overall reaction H2 + Br2 → 2 HBr, with catalysis by Br• which participates in the first step and is regenerated in the second step.
Retardation (inhibition)
H• + HBr → H2 + Br•
this step is specific to this example, and corresponds to the first propagation step in reverse.
Termination
2 Br• → Br2
recombination of two radicals, corresponding in this example to initiation in reverse.
As can be explained using the steady-state approximation, the thermal reaction has an initial rate of fractional order (3/2), and a complete rate equation with a two-term denominator (mixed-order kinetics).
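For concreteness, the classic empirical rate law that this mechanism reproduces is the following (a standard textbook result, stated here as a supplement rather than taken from the passage above):

$$\frac{d[\mathrm{HBr}]}{dt} = \frac{k\,[\mathrm{H_2}]\,[\mathrm{Br_2}]^{1/2}}{1 + k'\,[\mathrm{HBr}]/[\mathrm{Br_2}]}$$

At the start of the reaction, when [HBr] ≈ 0, the denominator is 1 and the initial rate is $k[\mathrm{H_2}][\mathrm{Br_2}]^{1/2}$, of overall order 3/2; the [HBr]/[Br2] term in the denominator expresses the retardation step.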
Further chemical examples
The reaction 2 H2 + O2 → 2 H2O provides an example of chain branching. The propagation is a sequence of two steps whose net effect is to replace an H atom by another H atom plus two OH radicals. This leads to an explosion under certain conditions of temperature and pressure.
H• + O2 → •OH + •O•
•O• + H2 → •OH + H•
In chain-growth polymerization, the propagation step corresponds to the elongation of the growing polymer chain. Chain transfer corresponds to transfer of the activity from this growing chain, whose growth is terminated, to another molecule which may be a second growing polymer chain. For polymerization, the kinetic chain length defined above may differ from the degree of polymerization of the product macromolecule.
Polymerase chain reaction, a technique used in molecular biology to amplify (make many copies of) a piece of DNA by in vitro enzymatic replication using a DNA polymerase.
Acetaldehyde pyrolysis and rate equation
The pyrolysis (thermal decomposition) of acetaldehyde, CH3CHO (g) → CH4 (g) + CO (g), proceeds via the Rice-Herzfeld mechanism:
Initiation (formation of free radicals):
CH3CHO (g) → •CH3 (g) + •CHO (g) k1
The methyl and CHO groups are free radicals.
Propagation (two steps):
•CH3 (g) + CH3CHO (g) → CH4 (g) + •CH3CO (g) k2
This reaction step provides methane, which is one of the two main products.
•CH3CO (g) → CO (g) + •CH3 (g) k3
The product •CH3CO (g) of the previous step gives rise to carbon monoxide (CO), which is the second main product.
The sum of the two propagation steps corresponds to the overall reaction CH3CHO (g) → CH4 (g) + CO (g), catalyzed by a methyl radical •CH3.
Termination:
•CH3 (g) + •CH3 (g) → C2H6 (g) k4
This reaction is the only source of ethane (minor product) and it is concluded to be the main chain ending step.
Although this mechanism explains the principal products, there are others that are formed in a minor degree, such as acetone (CH3COCH3) and propanal (CH3CH2CHO).
Applying the Steady State Approximation for the intermediate species CH3(g) and CH3CO(g), the rate law for the formation of methane and the order of reaction are found:
The rate of formation of the product methane is
$$\frac{d[\mathrm{CH_4}]}{dt} = k_2\,[\mathrm{{}^\bullet CH_3}][\mathrm{CH_3CHO}] \qquad (1)$$
For the intermediates, the steady-state conditions are
$$\frac{d[\mathrm{{}^\bullet CH_3}]}{dt} = k_1[\mathrm{CH_3CHO}] - k_2[\mathrm{{}^\bullet CH_3}][\mathrm{CH_3CHO}] + k_3[\mathrm{{}^\bullet CH_3CO}] - 2k_4[\mathrm{{}^\bullet CH_3}]^2 = 0 \qquad (2)$$
and
$$\frac{d[\mathrm{{}^\bullet CH_3CO}]}{dt} = k_2[\mathrm{{}^\bullet CH_3}][\mathrm{CH_3CHO}] - k_3[\mathrm{{}^\bullet CH_3CO}] = 0 \qquad (3)$$
Adding (2) and (3), we obtain
$$k_1[\mathrm{CH_3CHO}] - 2k_4[\mathrm{{}^\bullet CH_3}]^2 = 0,$$
so that
$$[\mathrm{{}^\bullet CH_3}] = \left(\frac{k_1}{2k_4}\right)^{1/2}[\mathrm{CH_3CHO}]^{1/2} \qquad (4)$$
Using (4) in (1) gives the rate law
$$\frac{d[\mathrm{CH_4}]}{dt} = k_2\left(\frac{k_1}{2k_4}\right)^{1/2}[\mathrm{CH_3CHO}]^{3/2},$$
which is order 3/2 in the reactant CH3CHO.
Nuclear chain reactions
A nuclear chain reaction was proposed by Leó Szilárd in 1933, shortly after the neutron was discovered, yet more than five years before nuclear fission was first discovered. Szilárd knew of chemical chain reactions, and he had been reading about an energy-producing nuclear reaction involving high-energy protons bombarding lithium, demonstrated by John Cockcroft and Ernest Walton in 1932. Szilárd proposed to use neutrons, theoretically produced from certain nuclear reactions in lighter isotopes, to induce further reactions in light isotopes that produced more neutrons. This would in theory produce a chain reaction at the level of the nucleus. He did not envision fission as one of these neutron-producing reactions, since this reaction was not known at the time. Experiments he proposed using beryllium and indium failed.
Later, after fission was discovered in 1938, Szilárd immediately realized the possibility of using neutron-induced fission as the particular nuclear reaction necessary to create a chain-reaction, so long as fission also produced neutrons. In 1939, with Enrico Fermi, Szilárd proved this neutron-multiplying reaction in uranium. In this reaction, a neutron plus a fissionable atom causes a fission resulting in a larger number of neutrons than the single one that was consumed in the initial reaction. Thus was born the practical nuclear chain reaction by the mechanism of neutron-induced nuclear fission.
Specifically, if one or more of the produced neutrons themselves interact with other fissionable nuclei, and these also undergo fission, then there is a possibility that the macroscopic overall fission reaction will not stop, but continue throughout the reaction material. This is then a self-propagating and thus self-sustaining chain reaction. This is the principle for nuclear reactors and atomic bombs.
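This behaviour is conventionally summarized with the effective neutron multiplication factor $k$, the average number of neutrons from one fission that go on to cause another fission (a standard formulation, added here for clarity rather than taken from the passage):

$$N_g = N_0\,k^{\,g}, \qquad \begin{cases} k < 1 & \text{subcritical: the chain dies out} \\ k = 1 & \text{critical: steady rate, as in a reactor} \\ k > 1 & \text{supercritical: exponential growth} \end{cases}$$

where $N_g$ is the neutron population after $g$ generations.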
Demonstration of a self-sustaining nuclear chain reaction was accomplished by Enrico Fermi and others, in the successful operation of Chicago Pile-1, the first artificial nuclear reactor, in late 1942.
Electron avalanche in gases
An electron avalanche happens between two unconnected electrodes in a gas when an electric field exceeds a certain threshold. Random thermal collisions of gas atoms may result in a few free electrons and positively charged gas ions, in a process called impact ionization. Acceleration of these free electrons in a strong electric field causes them to gain energy, and when they impact other atoms, the energy causes release of new free electrons and ions (ionization), which fuels the same process. If this process happens faster than it is naturally quenched by ions recombining, the new ions multiply in successive cycles until the gas breaks down into a plasma and current flows freely in a discharge.
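The growth of such an avalanche is conventionally described by Townsend's first ionization coefficient $\alpha$, the number of ionizing collisions an electron makes per unit drift length (a standard result, included here as clarification rather than drawn from the passage):

$$n(d) = n_0\, e^{\alpha d}$$

where $n_0$ electrons entering the gap grow to $n(d)$ after drifting a distance $d$; breakdown occurs when this multiplication, together with secondary processes, becomes self-sustaining.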
Electron avalanches are essential to the dielectric breakdown process within gases. The process can culminate in corona discharges, streamers, leaders, or in a spark or continuous electric arc that completely bridges the gap. The process may extend to huge sparks: streamers in lightning discharges propagate by forming electron avalanches in the high potential gradient ahead of their advancing tips. Once begun, avalanches are often intensified by the creation of photoelectrons as a result of ultraviolet radiation emitted by the excited medium's atoms in the aft-tip region. The extremely high temperature of the resulting plasma cracks the surrounding gas molecules, and the free ions recombine to create new chemical compounds.
The process can also be used to detect the radiation that initiates it, since the passage of a single particle can be amplified into a large discharge. This is the mechanism of the Geiger counter and also of the visualization possible with a spark chamber and other wire chambers.
Avalanche breakdown in semiconductors
An avalanche breakdown process can happen in semiconductors, which in some ways conduct electricity analogously to a mildly ionized gas. Semiconductors rely on free electrons knocked out of the crystal by thermal vibration for conduction. Thus, unlike metals, semiconductors become better conductors the higher the temperature. This sets up conditions for the same type of positive feedback—heat from current flow causes temperature to rise, which increases charge carriers, lowering resistance, and causing more current to flow. This can continue to the point of complete breakdown of normal resistance at a semiconductor junction, and failure of the device (this may be temporary or permanent depending on whether there is physical damage to the crystal). Certain devices, such as avalanche diodes, deliberately make use of the effect.
Living organisms
Examples of chain reactions in living organisms include excitation of neurons in epilepsy and lipid peroxidation. In peroxidation, a lipid radical reacts with oxygen to form a peroxyl radical (L• + O2 → LOO•). The peroxyl radical then oxidises another lipid, thus forming another lipid radical (LOO• + L–H → LOOH + L•). A chain reaction in glutamatergic synapses is the cause of synchronous discharge in some epileptic seizures.
| Physical sciences | Basics_3 | Chemistry |
7839 | https://en.wikipedia.org/wiki/Stellar%20corona | Stellar corona | A corona (plural: coronas or coronae) is the outermost layer of a star's atmosphere. It is a hot but relatively dim region of plasma populated by intermittent coronal structures known as solar prominences or filaments.
The Sun's corona lies above the chromosphere and extends millions of kilometres into outer space. Coronal light is typically obscured by diffuse sky radiation and glare from the solar disk, but can be easily seen by the naked eye during a total solar eclipse or with a specialized coronagraph. Spectroscopic measurements indicate strong ionization in the corona and a plasma temperature in excess of a million kelvin, much hotter than the surface of the Sun, known as the photosphere.
The word "corona" is Latin for 'crown', which is, in turn, derived from the Ancient Greek κορώνη ('garland' or 'wreath').
History
In 1724, French-Italian astronomer Giacomo F. Maraldi recognized that the aura visible during a solar eclipse belongs to the Sun, not to the Moon. In 1809, Spanish astronomer José Joaquín de Ferrer coined the term 'corona'. Based on his own observations of the 1806 solar eclipse at Kinderhook (New York), de Ferrer also proposed that the corona was part of the Sun and not of the Moon. English astronomer Norman Lockyer identified the first element unknown on Earth in the Sun's chromosphere, which was called helium (from Greek 'sun'). French astronomer Jules Janssen noted, after comparing his readings between the 1871 and 1878 eclipses, that the size and shape of the corona changes with the sunspot cycle. In 1930, Bernard Lyot invented the "coronograph" (now "coronagraph"), which allows viewing the corona without a total eclipse. In 1952, American astronomer Eugene Parker proposed that the solar corona might be heated by myriad tiny 'nanoflares', miniature brightenings resembling solar flares that would occur all over the surface of the Sun.
Historical theories
The high temperature of the Sun's corona gives it unusual spectral features, which led some in the 19th century to suggest that it contained a previously unknown element, "coronium". Instead, these spectral features have since been explained by highly ionized iron (Fe-XIV, or Fe13+). Bengt Edlén, following the work of Walter Grotrian in 1939, first identified the coronal spectral lines in 1940 (observed since 1869) as transitions from low-lying metastable levels of the ground configuration of highly ionised metals (the green Fe-XIV line from Fe13+ at 530.3 nm, but also the red Fe-X line from Fe9+ at 637.4 nm).
Observable components
The solar corona has three recognized, and distinct, sources of light that occupy the same volume: the "F-corona" (for "Fraunhofer"), the "K-corona" (for "Kontinuierlich"), and the "E-corona" (for "emission").
The "F-corona" is named for the Fraunhofer spectrum of absorption lines in ordinary sunlight, which are preserved by reflection off small material objects. The F-corona is faint near the Sun itself, but drops in brightness only gradually far from the Sun, extending far across the sky and becoming the zodiacal light. The F-corona is recognized to arise from small dust grains orbiting the Sun; these form a tenuous cloud that extends through much of the solar system.
The "K-corona" is named for the fact that its spectrum is a continuum, with no major spectral features. It is sunlight that is Thomson-scattered by free electrons in the hot plasma of the Sun's outer atmosphere. The continuum nature of the spectrum arises from Doppler broadening of the Sun's Fraunhofer absorption lines in the reference frame of the (hot and therefore fast-moving) electrons. Although the K-corona is a phenomenon of the electrons in the plasma, the term is frequently used to describe the plasma itself (as distinct from the dust that gives rise to the F-corona).
The "E-corona" is the component of the corona with an emission-line spectrum, either inside or outside the wavelength band of visible light. It is a phenomenon of the ion component of the plasma, as individual ions are excited by collision with other ions or electrons, or by absorption of ultraviolet light from the Sun.
Physical features
The Sun's corona is much hotter (by a factor from 150 to 450) than the visible surface of the Sun: the corona's temperature is 1 to 3 million kelvin compared to the photosphere's average temperature of around 5,800 kelvin. The corona is far less dense than the photosphere, and produces about one-millionth as much visible light. The corona is separated from the photosphere by the relatively shallow chromosphere. The exact mechanism by which the corona is heated is still the subject of some debate, but likely possibilities include episodic energy releases from the pervasive magnetic field and magnetohydrodynamic waves from below. The outer edges of the Sun's corona are constantly being transported away, creating the "open" magnetic flux entrained in the solar wind.
The corona is not always evenly distributed across the surface of the Sun. During periods of quiet, the corona is more or less confined to the equatorial regions, with coronal holes covering the polar regions. However, during the Sun's active periods, the corona is evenly distributed over the equatorial and polar regions, though it is most prominent in areas with sunspot activity. The solar cycle spans approximately 11 years, from one solar minimum to the following minimum. Since the solar magnetic field is continually wound up due to the faster rotation of mass at the Sun's equator (differential rotation), sunspot activity is more pronounced at solar maximum where the magnetic field is more twisted. Associated with sunspots are coronal loops, loops of magnetic flux, upwelling from the solar interior. The magnetic flux pushes the hotter photosphere aside, exposing the cooler plasma below, thus creating the relatively dark sunspots.
High-resolution X-ray images of the Sun's corona photographed by Skylab in 1973, by Yohkoh in 1991–2001, and by subsequent space-based instruments revealed the structure of the corona to be quite varied and complex, leading astronomers to classify various zones on the coronal disc.
Astronomers usually distinguish several regions, as described below.
Active regions
Active regions are ensembles of loop structures connecting points of opposite magnetic polarity in the photosphere, the so-called coronal loops. They are generally distributed in two zones of activity, which are parallel to the solar equator. The average temperature is between two and four million kelvin, while the density goes from 10^9 to 10^10 particles per cubic centimetre.
Active regions involve all the phenomena directly linked to the magnetic field, which occur at different heights above the Sun's surface: sunspots and faculae occur in the photosphere; spicules, Hα filaments and plages in the chromosphere; prominences in the chromosphere and transition region; and flares and coronal mass ejections (CMEs) happen in the corona and chromosphere. If flares are very violent, they can also perturb the photosphere and generate a Moreton wave. On the contrary, quiescent prominences are large, cool, dense structures which are observed as dark, "snake-like" Hα ribbons (appearing like filaments) on the solar disc. Their temperature is low by coronal standards, of the order of 10^4 K, and so they are usually considered chromospheric features.
In 2013, images from the High Resolution Coronal Imager revealed never-before-seen "magnetic braids" of plasma within the outer layers of these active regions.
Coronal loops
Coronal loops are the basic structures of the magnetic solar corona. These loops are the closed-magnetic flux cousins of the open-magnetic flux that can be found in coronal holes and the solar wind. Loops of magnetic flux well up from the solar body and fill with hot solar plasma. Due to the heightened magnetic activity in these coronal loop regions, coronal loops can often be the precursor to solar flares and CMEs.
The solar plasma that feeds these structures is heated from under 6,000 K to well over 10^6 K as it travels from the photosphere, through the transition region, and into the corona. Often, the solar plasma will fill these loops from one of the points where the loop is anchored (the foot points) and drain from the other (siphon flow due to a pressure difference, or asymmetric flow due to some other driver).
When the plasma rises from the foot points towards the loop top, as always occurs during the initial phase of a compact flare, it is defined as chromospheric evaporation. When the plasma rapidly cools and falls toward the photosphere, it is called chromospheric condensation. There may also be symmetric flow from both loop foot points, causing a build-up of mass in the loop structure. The plasma may cool rapidly in this region (owing to a thermal instability), forming dark filaments against the solar disk or prominences seen off the Sun's limb.
Coronal loops may have lifetimes on the order of seconds (in the case of flare events), minutes, hours or days. Where there is a balance in loop energy sources and sinks, coronal loops can last for long periods of time and are known as steady-state or quiescent coronal loops.
Coronal loops are very important to our understanding of the current coronal heating problem. Coronal loops are highly radiating sources of plasma and are therefore easy to observe by instruments such as TRACE. An explanation of the coronal heating problem remains elusive as these structures are observed remotely, where many ambiguities are present (i.e., radiation contributions along the line-of-sight propagation). In-situ measurements are required before a definitive answer can be reached, but due to the high plasma temperatures in the corona, in-situ measurements have long been impossible. NASA's Parker Solar Probe approaches the Sun very closely, allowing more direct observations.
Large-scale structures
Large-scale structures are very long arcs which can cover over a quarter of the solar disk but contain plasma less dense than in the coronal loops of the active regions.
They were first detected in the June 8, 1968, flare observation during a rocket flight.
The large-scale structure of the corona changes over the 11-year solar cycle and becomes particularly simple during the minimum period, when the magnetic field of the Sun is close to a dipolar configuration (plus a quadrupolar component).
Interconnections of active regions
The interconnections of active regions are arcs connecting zones of opposite magnetic field, of different active regions. Significant variations of these structures are often seen after a flare.
Some other features of this kind are helmet streamers – large, cap-like coronal structures with long, pointed peaks that usually overlie sunspots and active regions. Coronal streamers are considered to be sources of the slow solar wind.
Filament cavities
Filament cavities are zones which look dark in the X-rays and are above the regions where Hα filaments are observed in the chromosphere. They were first observed in the two 1970 rocket flights which also detected coronal holes.
Filament cavities are cooler clouds of plasma suspended above the Sun's surface by magnetic forces. The regions of intense magnetic field look dark in images because they are empty of hot plasma. In fact, the sum of the magnetic pressure and plasma pressure must be constant everywhere on the heliosphere in order to have an equilibrium configuration: where the magnetic field is higher, the plasma must be cooler or less dense. The plasma pressure can be calculated by the state equation of a perfect gas, $p = n k_B T$, where $n$ is the particle number density, $k_B$ the Boltzmann constant and $T$ the plasma temperature. It is evident from the equation that the plasma pressure lowers when the plasma temperature decreases with respect to the surrounding regions or when the zone of intense magnetic field empties. The same physical effect renders sunspots apparently dark in the photosphere.
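The equilibrium condition described above can be written explicitly (in SI units; a standard magnetohydrostatic relation, added here for clarity rather than taken from the passage):

$$p + \frac{B^2}{2\mu_0} = \text{const}, \qquad p = n k_B T,$$

so a region of stronger field $B$ must have lower $n k_B T$, i.e. cooler or less dense plasma, which is why filament cavities and sunspots appear dark.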
Bright points
Bright points are small active regions found on the solar disk. X-ray bright points were first detected on April 8, 1969, during a rocket flight.
The fraction of the solar surface covered by bright points varies with the solar cycle. They are associated with small bipolar regions of the magnetic field. Their average temperature ranges from 1.1 MK to 3.4 MK. The variations in temperature are often correlated with changes in the X-ray emission.
Coronal holes
Coronal holes are unipolar regions which look dark in the X-rays since they do not emit much radiation. These are wide zones of the Sun where the magnetic field is unipolar and opens towards the interplanetary space. The high speed solar wind arises mainly from these regions.
In the UV images of the coronal holes, some small structures, similar to elongated bubbles, are often seen as if they were suspended in the solar wind. These are the coronal plumes. More precisely, they are long, thin streamers that project outward from the Sun's north and south poles.
The quiet Sun
The solar regions which are not part of active regions and coronal holes are commonly identified as the quiet Sun.
The equatorial region has a faster rotation speed than the polar zones. The result of the Sun's differential rotation is that the active regions always arise in two bands parallel to the equator, and their extension increases during the periods of maximum of the solar cycle, while they almost disappear during each minimum. Therefore, the quiet Sun always coincides with the equatorial zone, and its surface is less active during the maximum of the solar cycle. Approaching the minimum of the solar cycle (whose migrating sunspot bands trace the so-called butterfly diagram), the extension of the quiet Sun increases until it covers the whole disk surface excluding some bright points on the hemisphere and the poles, where there are coronal holes.
Alfvén surface
The Alfvén surface is the boundary separating the corona from the solar wind, defined as the surface where the coronal plasma's Alfvén speed and the large-scale solar wind speed are equal.
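In terms of the standard expression for the Alfvén speed (added here as clarification; not part of the original passage), the surface is where

$$v_A = \frac{B}{\sqrt{\mu_0 \rho}} = v_{\mathrm{sw}},$$

with $B$ the magnetic field strength, $\rho$ the plasma mass density and $v_{\mathrm{sw}}$ the solar wind outflow speed; inside this surface Alfvén waves can travel back toward the Sun, while outside it they are carried away by the wind.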
Researchers were unsure exactly where the Alfvén critical surface of the Sun lay. Based on remote images of the corona, estimates had put it somewhere between 10 and 20 solar radii from the surface of the Sun. On April 28, 2021, during its eighth flyby of the Sun, NASA's Parker Solar Probe encountered the specific magnetic and particle conditions at 18.8 solar radii that indicated that it penetrated the Alfvén surface.
Variability of the corona
The main coronal structures are as varied in their dynamics as in their morphology, evolving on very different timescales. Studying coronal variability in its complexity is not easy, because the evolution times of the different structures can vary considerably, from seconds to several months; the typical sizes of the regions where coronal events take place span a similarly wide range.
Flares
Flares take place in active regions and are characterized by a sudden increase of the radiative flux emitted from small regions of the corona. They are very complex phenomena, visible at different wavelengths; they involve several zones of the solar atmosphere and many physical effects, both thermal and non-thermal, and sometimes large-scale reconnection of the magnetic field lines with expulsion of material.
Flares are impulsive phenomena, of average duration of 15 minutes, and the most energetic events can last several hours. Flares produce a high and rapid increase of the density and temperature.
Emission in white light is only seldom observed: usually, flares are seen only at extreme UV wavelengths and in the X-rays, typical of chromospheric and coronal emission.
In the corona, the morphology of flares is described by observations in the UV, soft and hard X-rays, and in Hα wavelengths, and is very complex. However, two kinds of basic structures can be distinguished:
Compact flares, when each of the two arches where the event is happening maintains its morphology: only an increase of the emission is observed, without significant structural variations. The emitted energy is of the order of 10^22–10^23 J.
Flares of long duration, associated with eruptions of prominences, transients in white light and two-ribbon flares: in this case the magnetic loops change their configuration during the event. The energies emitted during these flares are of such great proportion that they can reach 10^25 J.
As for temporal dynamics, three different phases are generally distinguished, whose durations are not comparable and depend on the range of wavelengths used to observe the event:
An initial impulsive phase, whose duration is on the order of minutes, during which strong emissions of energy are often observed even in the microwaves, EUV wavelengths and hard X-ray frequencies.
A maximum phase
A decay phase, which can last several hours.
Sometimes a phase preceding the flare can also be observed, usually called the "pre-flare" phase.
Coronal mass ejections
Often accompanying large solar flares and prominences are coronal mass ejections (CME). These are enormous emissions of coronal material and magnetic field that travel outward from the Sun at up to 3000 km/s, containing roughly 10 times the energy of the solar flare or prominence that accompanies them. Some larger CMEs can propel hundreds of millions of tons of material into interplanetary space at roughly 1.5 million kilometers an hour.
Stellar coronae
Coronal stars are ubiquitous among the stars in the cool half of the Hertzsprung–Russell diagram. These coronae can be detected using X-ray telescopes. Some stellar coronae, particularly in young stars, are much more luminous than the Sun's. For example, FK Comae Berenices is the prototype for the FK Com class of variable star. These are giants of spectral types G and K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous (Lx ≥ 10^32 erg·s^−1, or 10^25 W) and the hottest known, with dominant temperatures up to 40 MK.
The astronomical observations planned with the Einstein Observatory by Giuseppe Vaiana and his group showed that F-, G-, K- and M-stars have chromospheres and often coronae much like the Sun.
The O-B stars, which do not have surface convection zones, show strong X-ray emission. However, these stars do not have coronae; instead, their outer stellar envelopes emit this radiation during shocks due to thermal instabilities in rapidly moving gas blobs.
A-stars do not have convection zones either, but unlike O-B stars they do not emit at UV and X-ray wavelengths. Thus they appear to have neither chromospheres nor coronae.
Physics of the corona
The matter in the external part of the solar atmosphere is in the state of plasma, at very high temperature (a few million kelvin) and at very low density (of the order of 10^15 particles/m^3).
According to the definition of plasma, it is a quasi-neutral ensemble of particles which exhibits a collective behaviour.
The composition is similar to that in the Sun's interior, mainly hydrogen, but with much greater ionization of its heavier elements than that found in the photosphere. Heavier metals, such as iron, are partially ionized and have lost most of their outer electrons. The ionization state of a chemical element depends strictly on the temperature and is regulated by the Saha equation in the lowest atmosphere, but by collisional equilibrium in the optically thin corona. Historically, the presence of the spectral lines emitted from highly ionized states of iron allowed determination of the high temperature of the coronal plasma, revealing that the corona is much hotter than the internal layers of the chromosphere.
The corona behaves like a gas which is very hot but very light at the same time: the pressure in the corona is usually only 0.1 to 0.6 Pa in active regions, while on the Earth the atmospheric pressure is about 100 kPa, approximately a million times higher than at the solar surface. However it is not properly a gas, because it is made of charged particles, basically protons and electrons, moving at different velocities. Supposing that they have the same kinetic energy on average (by the equipartition theorem), electrons, having a mass roughly 1,800 times smaller than that of protons, acquire much higher velocities; metal ions are always the slowest. This fact has relevant physical consequences both for radiative processes (which are very different from photospheric radiative processes) and for thermal conduction.
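Quantitatively (a short derivation added for clarity), equal average kinetic energies imply

$$\tfrac{1}{2} m_e \langle v_e^2 \rangle = \tfrac{1}{2} m_p \langle v_p^2 \rangle \;\Rightarrow\; \frac{v_e}{v_p} = \sqrt{\frac{m_p}{m_e}} \approx \sqrt{1836} \approx 43,$$

so thermal electrons move roughly forty times faster than protons at the same temperature.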
Furthermore, the presence of electric charges induces the generation of electric currents and high magnetic fields.
Magnetohydrodynamic waves (MHD waves) can also propagate in this plasma, even though it is still not clear how they can be transmitted or generated in the corona.
Radiation
Coronal plasma is optically thin and therefore transparent to the electromagnetic radiation that it emits and to that coming from lower layers. The plasma is very rarefied, and the photon mean free path far exceeds all the other length scales, including the typical sizes of common coronal features.
Electromagnetic radiation from the corona has been identified coming from three main sources, located in the same volume of space:
The K-corona (K for kontinuierlich, "continuous" in German) is created by sunlight Thomson scattering off free electrons; Doppler broadening of the reflected photospheric absorption lines spreads them so greatly as to completely obscure them, giving the spectral appearance of a continuum with no absorption lines.
The F-corona (F for Fraunhofer) is created by sunlight bouncing off dust particles, and is observable because its light contains the Fraunhofer absorption lines that are seen in raw sunlight; the F-corona extends to very high elongation angles from the Sun, where it is called the zodiacal light.
The E-corona (E for emission) is due to spectral emission lines produced by ions that are present in the coronal plasma; it may be observed in broad or forbidden or hot spectral emission lines and is the main source of information about the corona's composition.
Thermal conduction
In the corona, thermal conduction occurs from the external, hotter atmosphere towards the inner, cooler layers. The electrons, being much lighter than ions and faster-moving (as explained above), are responsible for the diffusion of heat.
When there is a magnetic field the thermal conductivity of the plasma becomes higher in the direction which is parallel to the field lines rather than in the perpendicular direction.
A charged particle moving in the direction perpendicular to the magnetic field line is subject to the Lorentz force, which is normal to the plane defined by the velocity and the magnetic field. This force bends the path of the particle. In general, since particles also have a velocity component along the magnetic field line, the Lorentz force constrains them to bend and move along spirals around the field lines at the cyclotron frequency.
If collisions between the particles are very frequent, they are scattered in every direction. This happens in the photosphere, where the plasma carries the magnetic field in its motion. In the corona, on the contrary, the mean free path of the electrons is of the order of kilometres or more, so each electron can follow a long helical path before being scattered by a collision. Therefore, the heat transfer is enhanced along the magnetic field lines and inhibited in the perpendicular direction.
In the direction longitudinal to the magnetic field, the thermal conductivity of the corona is
$$k = 20\left(\frac{2}{\pi}\right)^{3/2}\frac{\left(k_B T\right)^{5/2} k_B}{m_e^{1/2}\, e^4\, \ln\Lambda} \approx 1.8\times 10^{-10}\,\frac{T^{5/2}}{\ln\Lambda}\ \mathrm{W\,m^{-1}\,K^{-1}},$$
where $k_B$ is the Boltzmann constant, $T$ is the temperature in kelvin, $m_e$ is the electron mass, $e$ is the electric charge of the electron,
$$\ln\Lambda = \ln\!\left(12\pi n \lambda_D^3\right)$$
is the Coulomb logarithm, and
$$\lambda_D = \sqrt{\frac{k_B T}{4\pi n e^2}}$$
is the Debye length of the plasma with particle density $n$. The Coulomb logarithm is roughly 20 in the corona, with a mean temperature of 1 MK and a density of 10^15 particles/m^3, and about 10 in the chromosphere, where the temperature is approximately 10 kK and the particle density is of the order of 10^18 particles/m^3, and in practice it can be assumed constant.
Hence, if we denote by $q$ the heat per unit volume, expressed in J m^-3, the Fourier equation of heat transfer, computed only along the direction $s$ of the field line, becomes
$$\frac{\partial q}{\partial t} = \frac{\partial}{\partial s}\!\left(k\,\frac{\partial T}{\partial s}\right),$$
which, since $k \propto T^{5/2}$, is proportional to the second derivative of $T^{7/2}$ along the field line.
Numerical calculations have shown that the thermal conductivity of the corona is comparable to that of copper.
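A quick numerical check of that comparison, using the approximate field-aligned (Spitzer) conductivity reconstructed above (the coefficient 1.8e-10 is itself an approximation, so the output should be read as an order-of-magnitude sketch):

```python
def spitzer_conductivity(T_kelvin, coulomb_log=20.0):
    """Approximate field-aligned (Spitzer) thermal conductivity of a
    hydrogen plasma: k ~ 1.8e-10 * T**(5/2) / ln(Lambda), in W/(m K)."""
    return 1.8e-10 * T_kelvin**2.5 / coulomb_log

k_corona = spitzer_conductivity(1e6)   # T ~ 1 MK, ln(Lambda) ~ 20
print(f"coronal plasma: ~{k_corona:.0f} W/(m K)")  # ~9000 W/(m K)
print("copper (room temperature): ~400 W/(m K)")
# The plasma value lands within one to two orders of magnitude of
# copper's, i.e. "comparable" in the sense used in the text.
```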
Coronal seismology
Coronal seismology is a method of studying the plasma of the solar corona with the use of magnetohydrodynamic (MHD) waves. MHD studies the dynamics of electrically conducting fluids – in this case, the fluid is the coronal plasma. Philosophically, coronal seismology is similar to the Earth's seismology, the Sun's helioseismology, and MHD spectroscopy of laboratory plasma devices. In all these approaches, waves of various kinds are used to probe a medium. The potential of coronal seismology in the estimation of the coronal magnetic field, density scale height, fine structure and heating has been demonstrated by different research groups.
Coronal heating problem
The coronal heating problem in solar physics relates to the question of why the temperature of the Sun's corona is millions of kelvins higher than the thousands of kelvins of the surface. Several theories have been proposed to explain this phenomenon, but it is still challenging to determine which is correct. The problem first emerged after the identification of unknown spectral lines in the solar spectrum with highly ionized iron and calcium atoms. Comparison of the coronal temperature with the photospheric temperature of about 6,000 K leads to the question of how the 200-times-hotter coronal temperature can be maintained. The problem is primarily concerned with how the energy is transported up into the corona and then converted into heat within a few solar radii.
The high temperatures require energy to be carried from the solar interior to the corona by non-thermal processes, because the second law of thermodynamics prevents heat from flowing directly from the solar photosphere (surface), which is at about 6,000 K, to the much hotter corona at about 1 to 3 MK (parts of the corona can even reach around 10 MK).
Between the photosphere and the corona, the thin region through which the temperature increases is known as the transition region. It ranges from only tens to hundreds of kilometers in thickness. Energy cannot be transferred from the cooler photosphere to the corona by conventional heat transfer, as this would violate the second law of thermodynamics. An analogy would be a light bulb raising the temperature of the air surrounding it above that of its own glass surface, which conventional heat transfer cannot do. Hence, some other manner of energy transfer must be involved in the heating of the corona.
The amount of power required to heat the solar corona can easily be calculated as the difference between coronal radiative losses and heating by thermal conduction toward the chromosphere through the transition region. It is about 1 kilowatt for every square meter of surface area on the Sun's chromosphere, only a tiny fraction (of order 10^-5) of the light energy that escapes the Sun.
Many coronal heating theories have been proposed, but two theories have remained as the most likely candidates: wave heating and magnetic reconnection (or nanoflares). Through most of the past 50 years, neither theory has been able to account for the extreme coronal temperatures.
In 2012, high resolution (<0.2″) soft X-ray imaging with the High Resolution Coronal Imager aboard a sounding rocket revealed tightly wound braids in the corona. It is hypothesized that the reconnection and unravelling of braids can act as primary sources of heating of the active solar corona to temperatures of up to 4 million kelvin. The main heat source in the quiescent corona (about 1.5 million kelvin) is assumed to originate from MHD waves.
NASA's Parker Solar Probe is intended to approach the Sun to a distance of approximately 9.5 solar radii to investigate coronal heating and the origin of the solar wind. It was successfully launched on August 12, 2018 and by late 2022 had completed the first 13 of more than 20 planned close approaches to the Sun.
Wave heating theory
The wave heating theory, proposed in 1949 by Évry Schatzman, proposes that waves carry energy from the solar interior to the solar chromosphere and corona. The Sun is made of plasma rather than ordinary gas, so it supports several types of waves analogous to sound waves in air. The most important types are magneto-acoustic waves and Alfvén waves. Magneto-acoustic waves are sound waves that have been modified by the presence of a magnetic field, and Alfvén waves are similar to ultra-low-frequency radio waves that have been modified by interaction with matter in the plasma. Both types of waves can be launched by the turbulence of granulation and supergranulation at the solar photosphere, and both can carry energy for some distance through the solar atmosphere before turning into shock waves that dissipate their energy as heat.
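For reference, the two wave families are characterized by the standard MHD phase speeds below; these are textbook results rather than quantities taken from any study cited here.

```latex
% Alfvén speed and adiabatic sound speed:
v_A = \frac{B}{\sqrt{\mu_0 \rho}}, \qquad c_s = \sqrt{\frac{\gamma p}{\rho}}
% Fast (+) and slow (-) magneto-acoustic phase speeds for propagation
% at angle \theta to the magnetic field:
v_{\pm}^2 = \tfrac{1}{2}\left[\, v_A^2 + c_s^2
    \pm \sqrt{\left(v_A^2 + c_s^2\right)^2 - 4\, v_A^2 c_s^2 \cos^2\theta}\,\right]
```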
One problem with wave heating is delivery of the heat to the appropriate place. Magneto-acoustic waves cannot carry sufficient energy upward through the chromosphere to the corona, both because of the low pressure present in the chromosphere and because they tend to be reflected back to the photosphere. Alfvén waves can carry enough energy, but do not dissipate that energy rapidly enough once they enter the corona. Waves in plasmas are notoriously difficult to understand and describe analytically, but computer simulations carried out by Thomas Bogdan and colleagues in 2003 seem to show that Alfvén waves can transmute into other wave modes at the base of the corona, providing a pathway that can carry large amounts of energy from the photosphere through the chromosphere and transition region and finally into the corona, where it is dissipated as heat.
Another problem with wave heating has been the complete absence, until the late 1990s, of any direct evidence of waves propagating through the solar corona. The first direct observation of waves propagating into and through the solar corona was made in 1997 with the Solar and Heliospheric Observatory (SOHO), the first space-borne platform capable of observing the Sun in the extreme ultraviolet (EUV) for long periods of time with stable photometry. Those were magneto-acoustic waves with a frequency of about 1 millihertz (mHz, corresponding to a wave period of roughly 17 minutes), which carry only about 10% of the energy required to heat the corona. Many observations exist of localized wave phenomena, such as Alfvén waves launched by solar flares, but those events are transient and cannot explain the uniform coronal heat.
It is not yet known exactly how much wave energy is available to heat the corona. Results published in 2004 using data from the TRACE spacecraft seem to indicate that there are waves in the solar atmosphere at frequencies as high as 100 mHz (10-second period). Measurements of the temperature of different ions in the solar wind with the UVCS instrument aboard SOHO give strong indirect evidence that there are waves at frequencies well into the range of human hearing. These waves are very difficult to detect under normal circumstances, but evidence collected during solar eclipses by teams from Williams College suggests the presence of such waves at frequencies above 1 Hz.
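The frequency–period conversions used in this section are simple reciprocals; a minimal sketch:

```python
def period_to_frequency_mhz(period_s: float) -> float:
    """Convert a wave period in seconds to a frequency in millihertz."""
    return 1e3 / period_s

# Period/frequency pairs mentioned in this section (values from the text):
for period_s in (1000.0, 10.0):   # ~17-minute SOHO waves; TRACE upper limit
    print(f"P = {period_s:6.1f} s  ->  f = {period_to_frequency_mhz(period_s):6.1f} mHz")
# 1000.0 s -> 1.0 mHz ; 10.0 s -> 100.0 mHz
```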
Recently, Alfvénic motions have been found in the lower solar atmosphere and also in the quiet Sun, in coronal holes and in active regions using observations with AIA on board the Solar Dynamics Observatory.
These Alfvénic oscillations have significant power, and seem to be connected to the chromospheric Alfvénic oscillations previously reported with the Hinode spacecraft.
Solar wind observations with the Wind spacecraft have recently shown evidence to support theories of Alfvén-cyclotron dissipation, leading to local ion heating.
Magnetic reconnection theory
The magnetic reconnection theory relies on the solar magnetic field to induce electric currents in the solar corona. The currents then collapse suddenly, releasing energy as heat and wave energy in the corona. This process is called "reconnection" because of the peculiar way that magnetic fields behave in plasma (or any electrically conductive fluid such as mercury or seawater). In a plasma, magnetic field lines are normally tied to individual pieces of matter, so that the topology of the magnetic field remains the same: if a particular north and south magnetic pole are connected by a single field line, then even if the plasma is stirred or if the magnets are moved around, that field line will continue to connect those particular poles. The connection is maintained by electric currents that are induced in the plasma. Under certain conditions, the electric currents can collapse, allowing the magnetic field to "reconnect" to other magnetic poles and release heat and wave energy in the process.
Magnetic reconnection is hypothesized to be the mechanism behind solar flares, the largest explosions in the Solar System. Furthermore, the surface of the Sun is covered with millions of small magnetized regions 50–1,000 km across. These small magnetic poles are buffeted and churned by the constant granulation. The magnetic field in the solar corona must undergo nearly constant reconnection to match the motion of this "magnetic carpet", so the energy released by the reconnection is a natural candidate for the coronal heat, perhaps as a series of "microflares" that individually provide very little energy but together account for the required energy.
The idea that nanoflares might heat the corona was proposed by Eugene Parker in the 1980s but is still controversial. In particular, ultraviolet telescopes such as TRACE and SOHO/EIT can observe individual micro-flares as small brightenings in extreme ultraviolet light, but there seem to be too few of these small events to account for the energy released into the corona. The additional energy not accounted for could be made up by wave energy, or by gradual magnetic reconnection that releases energy more smoothly than micro-flares and therefore does not show up clearly in the TRACE data. Variations on the micro-flare hypothesis use other mechanisms to stress the magnetic field or to release the energy, and remained a subject of active research as of 2005.
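To see why the counting argument matters, the sketch below estimates the Sun-wide nanoflare rate needed to supply the heating requirement quoted earlier; Parker's canonical nanoflare energy of roughly 10^24 erg is an assumed, order-of-magnitude figure.

```python
import math

SUN_RADIUS = 6.96e8                          # m
SURFACE_AREA = 4 * math.pi * SUN_RADIUS**2   # ~6.1e18 m^2

heating_flux = 1.0e3       # W/m^2, the heating requirement quoted earlier
nanoflare_energy = 1.0e17  # J (~1e24 erg), Parker's canonical value (assumed)

rate = heating_flux * SURFACE_AREA / nanoflare_energy
print(f"required rate = {rate:.1e} nanoflares per second, Sun-wide")
# ~6e4 events/s: heating the corona this way needs vastly more events than
# the discrete EUV brightenings that TRACE and SOHO/EIT actually resolve.
```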
Spicules (type II)
For decades, researchers believed spicules could send heat into the corona. However, following observational research in the 1980s, it was found that spicule plasma did not reach coronal temperatures, and so the theory was discounted.
According to studies performed in 2010 at the National Center for Atmospheric Research in Colorado, in collaboration with Lockheed Martin's Solar and Astrophysics Laboratory (LMSAL) and the Institute of Theoretical Astrophysics of the University of Oslo, a new class of spicules (type II), discovered in 2007, may account for the problem; these travel faster (up to 100 km/s) and have shorter lifespans than classical spicules. The jets inject heated plasma into the Sun's outer atmosphere.
If confirmed, this mechanism promises a much greater understanding of the corona and of the Sun's subtle influence on the Earth's upper atmosphere. The hypothesis was tested using the Atmospheric Imaging Assembly on NASA's recently launched Solar Dynamics Observatory and NASA's Focal Plane Package for the Solar Optical Telescope on the Japanese Hinode satellite; the high spatial and temporal resolution of these newer instruments reveals this coronal mass supply.
These observations reveal a one-to-one connection between plasma that is heated to millions of degrees and the spicules that inject this plasma into the corona.
| Physical sciences | Solar System | Astronomy |
7844 | https://en.wikipedia.org/wiki/Chimpanzee | Chimpanzee | The chimpanzee (; Pan troglodytes), also simply known as the chimp, is a species of great ape native to the forests and savannahs of tropical Africa. It has four confirmed subspecies and a fifth proposed one. When its close relative the bonobo was more commonly known as the pygmy chimpanzee, this species was often called the common chimpanzee or the robust chimpanzee. The chimpanzee and the bonobo are the only species in the genus Pan. Evidence from fossils and DNA sequencing shows that Pan is a sister taxon to the human lineage and is thus humans' closest living relative.
The chimpanzee is covered in coarse black hair but has a bare face, fingers, toes, palms of the hands, and soles of the feet. It is larger and more robust than the bonobo, with adult males heavier than females.
The chimpanzee lives in groups that range in size from 15 to 150 members, although individuals travel and forage in much smaller groups during the day. The species lives in a strict male-dominated hierarchy, where disputes are generally settled without the need for violence. Nearly all chimpanzee populations have been recorded using tools, modifying sticks, rocks, grass and leaves and using them for hunting and acquiring honey, termites, ants, nuts and water. The species has also been found creating sharpened sticks to spear small mammals. Its gestation period is eight months. The infant is weaned at about three years old but usually maintains a close relationship with its mother for several years more.
The chimpanzee is listed on the IUCN Red List as an endangered species. Between 170,000 and 300,000 individuals are estimated across its range. The biggest threats to the chimpanzee are habitat loss, poaching, and disease. Chimpanzees appear in Western popular culture as stereotyped clown-figures and have featured in entertainments such as chimpanzees' tea parties, circus acts and stage shows. Although chimpanzees have been kept as pets, their strength, aggressiveness, and unpredictability make them dangerous in this role. Some hundreds have been kept in laboratories for research, especially in the United States. Many attempts have been made to teach languages such as American Sign Language to chimpanzees, with limited success.
Etymology
The English word chimpanzee is first recorded in 1738. It is derived from Vili ci-mpenze or Tshiluba language chimpenze, with a meaning of "ape" or "mockman". The colloquialism "chimp" was most likely coined some time in the late 1870s. The genus name Pan derives from the name of the Greek god, while the specific name troglodytes was taken from the Troglodytae, a mythical race of cave-dwellers.
Taxonomy
The first great ape known to Western science in the 17th century was the "orang-outang" (genus Pongo), the local Malay name being recorded in Java by the Dutch physician Jacobus Bontius. In 1641, the Dutch anatomist Nicolaes Tulp applied the name to a chimpanzee or bonobo brought to the Netherlands from Angola. Another Dutch anatomist, Peter Camper, dissected specimens from Central Africa and Southeast Asia in the 1770s, noting the differences between the African and Asian apes. The German naturalist Johann Friedrich Blumenbach classified the chimpanzee as Simia troglodytes by 1775. Another German naturalist, Lorenz Oken, coined the genus Pan in 1816. The bonobo was recognised as distinct from the chimpanzee by 1933.
Evolution
Despite a large number of Homo fossil finds, Pan fossils were not described until 2005. Existing chimpanzee populations in West and Central Africa do not overlap with the major human fossil sites in East Africa, but chimpanzee fossils have now been reported from Kenya. This indicates that both humans and members of the Pan clade were present in the East African Rift Valley during the Middle Pleistocene.
According to studies published in 2017 by researchers at George Washington University, bonobos, along with chimpanzees, split from the human line about 8 million years ago; then bonobos split from the common chimpanzee line about 2 million years ago. Another 2017 genetic study suggests ancient gene flow (introgression) between 200,000 and 550,000 years ago from the bonobo into the ancestors of central and eastern chimpanzees.
Subspecies and population status
Four subspecies of the chimpanzee have been recognised, with the possibility of a fifth:
Central chimpanzee or the tschego (Pan troglodytes troglodytes), found in Cameroon, the Central African Republic, Equatorial Guinea, Gabon, the Republic of the Congo, and the Democratic Republic of the Congo, with about 140,000 individuals existing in the wild.
Western chimpanzee (P. troglodytes verus), found in Ivory Coast, Guinea, Liberia, Mali, Sierra Leone, Guinea-Bissau, Senegal, and Ghana with about 52,800 individuals still in existence.
Nigeria-Cameroon chimpanzee (P. troglodytes ellioti, also known as P. t. vellerosus), which lives in forested areas across Nigeria and Cameroon, with 6,000–9,000 individuals still in existence.
Eastern chimpanzee (P. troglodytes schweinfurthii), found in the Central African Republic, South Sudan, the Democratic Republic of the Congo, Uganda, Rwanda, Burundi, Tanzania, and Zambia, with approximately 180,000–256,000 individuals still existing in the wild.
Southeastern chimpanzee (P. troglodytes marungensis), found in Burundi, Rwanda, Tanzania, and Uganda. Colin Groves argues that this is a distinct subspecies, based on the degree of variation between the northern and southern populations of P. t. schweinfurthii, but it is not recognised by the IUCN.
Genome
A draft version of the chimpanzee genome was published in 2005 and encodes 18,759 proteins (compared to 20,383 in the human proteome). The DNA sequences of humans and chimpanzees are very similar, and the difference in protein number mostly arises from incomplete sequences in the chimpanzee genome. The two species differ by about 35 million single-nucleotide changes, five million insertion/deletion events and various chromosomal rearrangements. Typical human and chimpanzee protein homologs differ in an average of only two amino acids. About 30% of all human proteins are identical in sequence to the corresponding chimpanzee protein. Duplications of small parts of chromosomes have been the major source of differences between human and chimpanzee genetic material; about 2.7% of the corresponding modern genomes represent differences, produced by gene duplications or deletions, since humans and chimpanzees diverged from their common evolutionary ancestor.
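For orientation, the quoted counts imply the following rough divergence arithmetic; the genome size of ~3.1 Gb is an assumed round figure.

```python
genome_size_bp = 3.1e9   # assumed approximate genome size for both species
snp_differences = 35e6   # single-nucleotide changes quoted in the text

print(f"single-nucleotide divergence = {snp_differences / genome_size_bp:.2%}")
# -> about 1.1%. The separate ~2.7% figure quoted above counts sequence
# gained or lost through duplications and deletions since the lineages split,
# not single-nucleotide substitutions.
```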
Characteristics
Adult chimpanzees have an average standing height of about 150 cm. Wild adult males typically weigh between 40 and 70 kg, and females between 27 and 50 kg. In exceptional cases, certain individuals may considerably exceed these measurements, both in standing height and, particularly in captivity, in weight.
The chimpanzee is more robustly built than the bonobo but less than the gorilla. The arms of a chimpanzee are longer than its legs and can reach below the knees. The hands have long fingers with short thumbs and flat fingernails. The feet are adapted for grasping, and the big toe is opposable. The pelvis is long with an extended ilium. A chimpanzee's head is rounded with a prominent and prognathous face and a pronounced brow ridge. It has forward-facing eyes, a small nose, rounded non-lobed ears and a long mobile upper lip. Additionally, adult males have sharp canine teeth. Like all great apes, it has a dental formula of 2.1.2.3, that is, two incisors, one canine, two premolars, and three molars on both halves of each jaw. Chimpanzees lack the prominent sagittal crest and associated head and neck musculature of gorillas.
Chimpanzee bodies are covered by coarse hair, except for the face, fingers, toes, palms of the hands, and soles of the feet. Chimpanzees lose more hair as they age and develop bald spots. The hair of a chimpanzee is typically black but can be brown or ginger. As they get older, white or grey patches may appear, particularly on the chin and lower region. Chimpanzee skin that is covered with body hair is white, while exposed areas vary: white which ages into a dark muddy colour in eastern chimpanzees, freckled on white which ages to a heavily mottled muddy colour in central chimpanzees, and black with a butterfly-shaped white mask that darkens with age in western chimpanzees. Facial pigmentation increases with age and exposure to ultraviolet light. Females develop swelling pink skin when in oestrus. Like bonobos, male chimpanzees have a long filiform penis with a small baculum, but without a glans.
Chimpanzees are adapted for both arboreal and terrestrial locomotion. Arboreal locomotion consists of vertical climbing and brachiation. On the ground, chimpanzees move both quadrupedally and bipedally. These movements appear to have similar energy costs. As with bonobos and gorillas, chimpanzees move quadrupedally by knuckle-walking, which probably evolved independently in Pan and Gorilla. Their muscles are 50% stronger per unit weight than those of humans due to a higher content of fast-twitch muscle fibres, one of the chimpanzee's adaptations for climbing and swinging. Estimates of the grip strength of an adult chimpanzee vary widely between sources, including a figure published by Japan's Asahiyama Zoo.
Ecology
The chimpanzee is a highly adaptable species. It lives in a variety of habitats, including dry savanna, evergreen rainforest, montane forest, swamp forest, and dry woodland-savanna mosaic. In Gombe, the chimpanzee mostly uses semideciduous and evergreen forest as well as open woodland. At Bossou, the chimpanzee inhabits multistage secondary deciduous forest, which has grown after shifting cultivation, as well as primary forest and grassland. At Taï, it is found in the last remaining tropical rain forest in Ivory Coast. The chimpanzee has an advanced cognitive map of its home range and can repeatedly find food. The chimpanzee builds a sleeping nest in a tree in a different location each night, never using the same nest more than once. Chimpanzees sleep alone in separate nests except for infants or juvenile chimpanzees, which sleep with their mothers.
Diet
The chimpanzee is an omnivorous frugivore. It prefers fruit above all other food, but it also eats leaves, leaf buds, seeds, blossoms, stems, pith, bark, and resin. A study in Budongo Forest, Uganda found that 64.5% of their feeding time concentrated on fruits (84.6% of which being ripe), particularly those from two species of Ficus, Maesopsis eminii, and Celtis gomphophylla. In addition, 19% of feeding time was spent on arboreal leaves, mostly Broussonetia papyrifera and Celtis mildbraedii. While the chimpanzee is mostly herbivorous, it does eat honey, soil, insects, birds and their eggs, and small to medium-sized mammals, including other primates. Insect species consumed include the weaver ant Oecophylla longinoda, Macrotermes termites, and honey bees. The red colobus ranks at the top of preferred mammal prey. Other mammalian prey include red-tailed monkeys, infant and juvenile yellow baboons, bush babies, blue duikers, bushbucks, and common warthogs.
Although chimpanzees are known to hunt and to collect both insects and other invertebrates, such food actually makes up a very small portion of their diet, from as little as 2% yearly to as much as 65 grams of animal flesh per day for each adult chimpanzee in peak hunting seasons. This also varies from troop to troop and year to year. However, in all cases, the majority of their diet consists of fruits, leaves, roots, and other plant matter. Female chimpanzees appear to consume much less animal flesh than males, according to several studies. Jane Goodall documented many occasions within Gombe Stream National Park of chimpanzees and western red colobus monkeys ignoring each other despite close proximity.
Chimpanzees do not appear to directly compete with gorillas in areas where they overlap. When fruit is abundant, gorilla and chimpanzee diets converge, but when fruit is scarce gorillas resort to vegetation. The two apes may also feed on different species, whether fruit or insects. Interactions between them can range from friendly and even stable social bonding, to avoidance, to aggression and even predation of infants on the part of chimpanzees.
Mortality and health
The average lifespan of a wild chimpanzee is relatively short. They usually live less than 15 years, although individuals that reach 12 years may live an additional 15 years. On rare occasions, wild chimpanzees may live nearly 60 years. Captive chimpanzees tend to live longer than most wild ones, with median lifespans of 31.7 years for males and 38.7 years for females. The oldest-known male captive chimpanzee to have been documented lived to 66 years, and the oldest female, Little Mama, was nearly 80 years old.
Leopards prey on chimpanzees in some areas. It is possible that much of the mortality caused by leopards can be attributed to individuals that have specialised in killing chimpanzees. Chimpanzees may react to a leopard's presence with loud vocalising, branch shaking, and throwing objects. There is at least one record of chimpanzees killing a leopard cub after mobbing it and its mother in their den. Four chimpanzees could have fallen prey to lions at Mahale Mountains National Park. Although no other instances of lion predation on chimpanzees have been recorded, lions likely do kill chimpanzees occasionally, and the larger group sizes of savanna chimpanzees may have developed as a response to threats from these big cats. Chimpanzees may react to lions by fleeing up trees, vocalising, or hiding in silence.
Chimpanzees and humans share only 50% of their parasite and microbe species. This is due to the differences in environmental and dietary adaptations; human internal parasite species overlap more with omnivorous, savanna-dwelling baboons. The chimpanzee is host to the louse species Pediculus schaeffi, a close relative of P. humanus, which infests human head and body hair. By contrast, the human pubic louse Pthirus pubis is closely related to Pthirus gorillae, which infests gorillas. A 2017 study of gastrointestinal parasites of wild chimpanzees in degraded forest in Uganda found nine species of protozoa, five nematodes, one cestode, and one trematode. The most prevalent species was the protozoan Troglodytella abrassarti.
Behaviour
Recent studies have suggested that human observers influence chimpanzee behaviour. One suggestion is that drones, camera traps, and remote microphones should be used to record and monitor chimpanzees rather than direct human observation.
Group structure
Chimpanzees live in communities that typically range from around 15 to more than 150 members but spend most of their time traveling in small, temporary groups consisting of a few individuals. These groups may consist of any combination of age and sexes. Both males and females sometimes travel alone. This fission–fusion society may include groups of four types: all-male, adult females and offspring, adults of both sexes, or one female and her offspring. These smaller groups emerge in a variety of types, for a variety of purposes. For example, an all-male troop may be organised to hunt for meat, while a group consisting of lactating females serves to act as a "nursery group" for the young.
At the core of social structures are males, which patrol the territory, protect group members, and search for food. Males remain in their natal communities, while females generally emigrate at adolescence. Males in a community are more likely to be related to one another than females are to each other. Among males, there is generally a dominance hierarchy, and males are dominant over females. However, this unusual fission-fusion social structure, "in which portions of the parent group may on a regular basis separate from and then rejoin the rest," is highly variable in terms of which particular individual chimpanzees congregate at a given time. This is caused mainly by the large measure of individual autonomy that individuals have within their fission-fusion social groups. As a result, individual chimpanzees often forage for food alone, or in smaller groups, as opposed to the much larger "parent" group, which encompasses all the chimpanzees which regularly come into contact with each other and congregate into parties in a particular area.
Male chimpanzees exist in a linear dominance hierarchy. Top-ranking males tend to be aggressive even during dominance stability. This is probably due to the chimpanzee's fission-fusion society, with male chimpanzees leaving groups and returning after extended periods of time. With this, a dominant male is unsure if any "political maneuvering" has occurred in his absence and must re-establish his dominance. Thus, a large amount of aggression occurs within five to fifteen minutes after a reunion. During these encounters, displays of aggression are generally preferred over physical attacks.
Males maintain and improve their social ranks by forming coalitions, which have been characterised as "exploitative" and based on an individual's influence in agonistic interactions. Being in a coalition allows males to dominate a third individual when they could not by themselves, as politically apt chimpanzees can exert power over aggressive interactions regardless of their rank. Coalitions can also give an individual male the confidence to challenge a dominant or larger male. The more allies a male has, the better his chance of becoming dominant. However, most changes in hierarchical rank are caused by dyadic interactions. Chimpanzee alliances can be very fickle, and one member may suddenly turn on another if it is to his advantage.
Low-ranking males frequently switch sides in disputes between more dominant individuals. Low-ranking males benefit from an unstable hierarchy and often find increased sexual opportunities if a dispute or conflict occurs. In addition, conflicts between dominant males cause them to focus on each other rather than the lower-ranking males. Social hierarchies among adult females tend to be weaker. Nevertheless, the status of an adult female may be important for her offspring. Females in Taï have also been recorded to form alliances. While chimpanzee social structure is often referred to as patriarchal, it is not entirely unheard of for females to forge coalitions against males. There is also at least one recorded case of females securing a dominant position over males in their respective troop, albeit in a captive environment. Social grooming appears to be important in the formation and maintenance of coalitions. It is more common among adult males than either between adult females or between males and females.
Chimpanzees have been described as highly territorial and will frequently kill other chimpanzees, although Margaret Power wrote in her 1991 book The Egalitarians that the field studies from which the aggressive data came, Gombe and Mahale, used artificial feeding systems that increased aggression in the chimpanzee populations studied. Thus, the behaviour may not reflect innate characteristics of the species as a whole. In the years following her artificial feeding conditions at Gombe, Jane Goodall described groups of male chimpanzees patrolling the borders of their territory, brutally attacking chimpanzees that had split off from the Gombe group. A study published in 2010 found that the chimpanzees wage wars over territory, not mates. Patrols from smaller groups are more likely to avoid contact with their neighbours. Patrols from large groups even take over a smaller group's territory, gaining access to more resources, food, and females. While it was traditionally accepted that only female chimpanzees immigrate and males remain in their natal troop for life, there are confirmed cases of adult males safely integrating themselves into new communities among West African chimpanzees, suggesting they are less territorial than other subspecies.
Mating and parenting
Chimpanzees mate throughout the year, although the number of females in oestrus varies seasonally in a group. Female chimpanzees are more likely to come into oestrus when food is readily available. Oestrous females exhibit sexual swellings. Chimpanzees are promiscuous: during oestrus, females mate with several males in their community, while males have large testicles for sperm competition. Other forms of mating also exist. A community's dominant males sometimes restrict reproductive access to females. A male and female can form a consortship and mate outside their community. In addition, females sometimes leave their community and mate with males from neighboring communities. These alternative mating strategies give females more mating opportunities without losing the support of the males in their community. Infanticide has been recorded in chimpanzee communities in some areas, and the victims are often consumed. Male chimpanzees practice infanticide on unrelated young to shorten the interbirth intervals in the females. Females sometimes practice infanticide. This may be related to the dominance hierarchy in females or may simply be pathological.
Inbreeding was studied in a relatively undisturbed eastern chimpanzee community that displayed substantial bisexual philopatry. Despite an increased inbreeding risk incurred by females who do not disperse before reaching reproductive age, these females were still able to avoid producing inbred offspring.
Copulation is brief, lasting approximately seven seconds. The gestation period is eight months. Care for the young is provided mostly by their mothers. The survival and emotional health of the young is dependent on maternal care. Mothers provide their young with food, warmth, and protection, and teach them certain skills. In addition, a chimpanzee's future rank may be dependent on its mother's status. Male chimpanzees continue to associate with the females they impregnated and interact with and support their offspring. Newborn chimpanzees are helpless. For example, their grasping reflex is not strong enough to support them for more than a few seconds. For their first 30 days, infants cling to their mother's bellies. Infants are unable to support their own weight for their first two months and need their mothers' support.
When they reach five to six months, infants ride on their mothers' backs. They remain in continual contact for the rest of their first year. When they reach two years of age, they are able to move and sit independently and start moving beyond the arms' reach of their mothers. By four to six years, chimpanzees are weaned and infancy ends. The juvenile period for chimpanzees lasts from their sixth to ninth years. Juveniles remain close to their mothers, but interact an increasing amount with other members of their community. Adolescent females move between groups and are supported by their mothers in agonistic encounters. Adolescent males spend time with adult males in social activities like hunting and boundary patrolling. A captive study suggests males can safely immigrate to a new group if accompanied by immigrant females who have an existing relationship with this male. This gives the resident males reproductive advantages with these females, as they are more inclined to remain in the group if their male friend is also accepted.
Communication
Chimpanzees use facial expressions, postures, and sounds to communicate with each other. Chimpanzees have expressive faces that are important in close-up communications. When frightened, a "full closed grin" causes nearby individuals to be fearful, as well. Playful chimpanzees display an open-mouthed grin. Chimpanzees may also express themselves with the "pout", which is made in distress, the "sneer", which is made when threatening or fearful, and "compressed-lips face", which is a type of display. When submitting to a dominant individual, a chimpanzee crouches, bobs, and extends a hand. When in an aggressive mode, a chimpanzee swaggers bipedally, hunched over and arms waving, in an attempt to exaggerate its size. While travelling, chimpanzees keep in contact by beating their hands and feet against the trunks of large trees, an act that is known as "drumming". They also do this when encountering individuals from other communities.
Vocalisations are also important in chimpanzee communication. The most common call in adults is the "pant-hoot", which may signal social rank and bond along with keeping groups together. Pant-hoots consist of four parts: an introduction of soft "hoos"; a build-up, which grows louder and louder; a climax of screams and sometimes barks; and a letdown, in which the call dies back down to soft "hoos" as it ends. Grunting is made in situations like feeding and greeting. Submissive individuals make "pant-grunts" towards their superiors. Whimpering is made by young chimpanzees as a form of begging or when lost from the group. Chimpanzees use distance calls to draw attention to danger, food sources, or other community members. "Barks" may be made as "short barks" when hunting and "tonal barks" when sighting large snakes.
Hunting
When hunting small monkeys such as the red colobus, chimpanzees hunt where the forest canopy is interrupted or irregular. This allows them to easily corner the monkeys when chasing them in the appropriate direction. Chimpanzees may also hunt as a coordinated team, so that they can corner their prey even in a continuous canopy. During an arboreal hunt, each chimpanzee in the hunting groups has a role. "Drivers" serve to keep the prey running in a certain direction and follow them without attempting to make a catch. "Blockers" are stationed at the bottom of the trees and climb up to block prey that takes off in a different direction. "Chasers" move quickly and try to make a catch. Finally, "ambushers" hide and rush out when a monkey nears. While both adults and infants are taken, adult male colobus monkeys will attack the hunting chimps. When caught and killed, the meal is distributed to all hunting party members and even bystanders.
Male chimpanzees hunt in groups more than females, who tend to hunt solitarily. If a female participates in a hunting group and catches a red colobus, the prey is likely to be taken from her immediately by an adult male. Females are estimated to make roughly 10–15% of a community's vertebrate kills.
Intelligence
Chimpanzees display numerous signs of intelligence, from the ability to remember symbols to cooperation, tool use, and varied language capabilities. They are among species that have passed the mirror test, suggesting self-awareness. In one study, two young chimpanzees showed retention of mirror self-recognition after one year without access to mirrors. Chimpanzees have been observed to use insects to treat their own wounds and those of others. They catch them and apply them directly to the injury. Chimpanzees also display signs of culture among groups, with the learning and transmission of variations in grooming, tool use and foraging techniques leading to localized traditions.
A 30-year study at Kyoto University's Primate Research Institute has shown that chimpanzees are able to learn to recognise the numbers 1 to 9 and their values. The chimpanzees further show an aptitude for eidetic memory, demonstrated in experiments in which the jumbled digits are flashed onto a computer screen for less than a quarter of a second. One chimpanzee, Ayumu, was able to correctly and quickly point to the positions where they appeared in ascending order. Ayumu performed better than human adults who were given the same test.
In controlled experiments on cooperation, chimpanzees show a basic understanding of cooperation, and recruit the best collaborators. In a group setting with a device that delivered food rewards only to cooperating chimpanzees, cooperation first increased, then, due to competitive behaviour, decreased, before finally increasing to the highest level through punishment and other arbitrage behaviours.
Great apes show laughter-like vocalisations in response to physical contact, such as wrestling, play chasing, or tickling. This is documented in wild and captive chimpanzees. Chimpanzee laughter is not readily recognisable to humans as such, because it is generated by alternating inhalations and exhalations that sound more like breathing and panting. Instances in which nonhuman primates have expressed joy have been reported. Humans and chimpanzees share similar ticklish areas of the body, such as the armpits and belly. The enjoyment of tickling in chimpanzees does not diminish with age.
Chimpanzees have displayed different behaviours in response to a dying or dead group member. When witnessing a sudden death, the other group members act in frenzy, with vocalisations, aggressive displays, and touching of the corpse. In one case chimpanzees cared for a dying elder, then attended and cleaned the corpse. Afterward, they avoided the spot where the elder died and behaved in a more subdued manner. Mothers have been reported to carry around and groom their dead infants for several days.
Experimenters now and then witness behaviour that cannot be readily reconciled with chimpanzee intelligence or theory of mind. Wolfgang Köhler, for instance, reported insightful behaviour in chimpanzees, but he likewise often observed that they experienced "special difficulty" in solving simple problems. Researchers also reported that, when faced with a choice between two persons, chimpanzees were just as likely to beg food from a person who could see the begging gesture as from a person who could not, thereby raising the possibility that chimpanzees lack theory of mind.
Tool use
Nearly all chimpanzee populations have been recorded using tools. They modify sticks, rocks, grass, and leaves and use them when foraging for termites and ants, nuts, honey, algae or water. Despite the lack of complexity, forethought and skill are apparent in making these tools. Chimpanzees have used stone tools since at least 4,300 years ago.
A chimpanzee from the Kasakela chimpanzee community was the first nonhuman animal reported making a tool, by modifying a twig to use as an instrument for extracting termites from their mound. At Taï, chimpanzees simply use their hands to extract termites. When foraging for honey, chimpanzees use modified short sticks to scoop the honey out of the hive if the bees are stingless. For hives of the dangerous African honeybees, chimpanzees use longer and thinner sticks to extract the honey.
Chimpanzees also fish for ants using the same tactic. Ant dipping is difficult and some chimpanzees never master it. West African chimpanzees crack open hard nuts with stones or branches. Some forethought in this activity is apparent, as these tools are not found together or where the nuts are collected. Nut cracking is also difficult and must be learned. Chimpanzees also use leaves as sponges or spoons to drink water.
West African chimpanzees in Senegal were found to sharpen sticks with their teeth, which were then used to spear Senegal bushbabies out of small holes in trees. An eastern chimpanzee has been observed using a modified branch as a tool to capture a squirrel.
Whilst experimental studies on captive chimpanzees have found that many of their species-typical tool-use behaviours can be learnt individually by each chimpanzee, a 2021 study of their ability to make and use stone flakes, in a manner similar to that hypothesised for early hominins, did not find this behaviour across two populations of chimpanzees, suggesting that it lies outside the chimpanzee species-typical range.
Language
Scientists have attempted to teach human language to several species of great ape. One early attempt by Allen and Beatrix Gardner in the 1960s involved spending 51 months teaching American Sign Language to a chimpanzee named Washoe. The Gardners reported that Washoe learned 151 signs, and had spontaneously taught them to other chimpanzees, including her adopted son, Loulis. Over a longer period of time, Washoe was reported to have learned over 350 signs.
Debate is ongoing among scientists such as David Premack about chimpanzees' ability to learn language. Since the early reports on Washoe, numerous other studies have been conducted, with varying levels of success. One involved a chimpanzee jokingly named Nim Chimpsky (in allusion to the theorist of language Noam Chomsky), trained by Herbert Terrace of Columbia University. Although his initial reports were quite positive, in November 1979, Terrace and his team, including psycholinguist Thomas Bever, re-evaluated the videotapes of Nim with his trainers, analyzing them frame by frame for signs, as well as for exact context (what was happening both before and after Nim's signs). In the reanalysis, Terrace and Bever concluded that Nim's utterances could be explained merely as prompting on the part of the experimenters, as well as mistakes in reporting the data. "Much of the apes' behaviour is pure drill", he said. "Language still stands as an important definition of the human species." In this reversal, Terrace now argued Nim's use of ASL was not like human language acquisition. Nim never initiated conversations himself, rarely introduced new words, and mostly imitated what the humans did. More importantly, Nim's word strings varied in their ordering, suggesting that he was incapable of syntax. Nim's sentences also did not grow in length, unlike human children whose vocabulary and sentence length show a strong positive correlation.
Human relations
In culture
Chimpanzees are rarely represented in African culture, as people find them "too close for comfort". The Gio people of Liberia and the Hemba people of the Congo make chimpanzee masks. Gio masks are crude and blocky, and worn when teaching young people how not to behave. The Hemba masks have a smile that suggests drunken anger, insanity or horror and are worn during rituals at funerals, representing the "awful reality of death". The masks may also serve to guard households and protect both human and plant fertility. Stories have been told of chimpanzees kidnapping and raping women.
In Western popular culture, chimpanzees have occasionally been stereotyped as childlike companions, sidekicks or clowns. They are especially suited for the latter role on account of their prominent facial features, long limbs and fast movements, which humans often find amusing. Accordingly, entertainment acts featuring chimpanzees dressed up as humans with lip-synchronised human voices have been traditional staples of circuses, stage shows and TV shows like Lancelot Link, Secret Chimp (1970–1972) and The Chimp Channel (1999). From 1926 until 1972, London Zoo, followed by several other zoos around the world, held a chimpanzees' tea party daily, inspiring a long-running series of advertisements for PG Tips tea featuring such a party. Animal rights groups have urged a stop to such acts, considering them abusive.
Chimpanzees in media include Judy on the television series Daktari in the 1960s and Darwin on The Wild Thornberrys in the 1990s. In contrast to the fictional depictions of other animals, such as dogs (as in Lassie), dolphins (Flipper), horses (The Black Stallion) or even other great apes (King Kong), chimpanzee characters and actions are rarely relevant to the plot. Depictions of chimpanzees as individuals rather than stock characters, and as central rather than incidental to the plot can be found in science fiction. Robert A. Heinlein's 1947 short story "Jerry Was a Man" concerns a genetically enhanced chimpanzee suing for better treatment. The 1972 film Conquest of the Planet of the Apes, the third sequel of the 1968 film Planet of the Apes, portrays a futuristic revolt of enslaved apes led by the only talking chimpanzee, Caesar, against their human masters.
As pets
Chimpanzees have traditionally been kept as pets in a few African villages, especially in the Democratic Republic of Congo. In Virunga National Park in the east of the country, the park authorities regularly confiscate chimpanzees from people keeping them as pets. Outside their range, chimpanzees are popular as exotic pets despite their strength and aggression. Even in places where keeping non-human primates as pets is illegal, the exotic pet trade continues to prosper, leading to injuries from attacks.
Use in research
Hundreds of chimpanzees have been kept in laboratories for research. Most such laboratories either conduct or make the animals available for invasive research, defined as "inoculation with an infectious agent, surgery or biopsy conducted for the sake of research and not for the sake of the chimpanzee, and/or drug testing". Research chimpanzees tend to be used repeatedly over decades for up to 40 years, unlike the pattern of use of most laboratory animals. Two federally funded American laboratories use chimpanzees: the Yerkes National Primate Research Center at Emory University in Atlanta, Georgia, and the Southwest National Primate Center in San Antonio, Texas. Five hundred chimpanzees have been retired from laboratory use in the US and live in animal sanctuaries in the US or Canada.
A five-year moratorium was imposed by the US National Institutes of Health in 1996, because too many chimpanzees had been bred for HIV research, and it has been extended annually since 2001. With the publication of the chimpanzee genome, plans to increase the use of chimpanzees in America were reportedly increasing in 2006, some scientists arguing that the federal moratorium on breeding chimpanzees for research should be lifted. However, in 2007, the NIH made the moratorium permanent.
Other researchers argue that chimpanzees either should not be used in research, or should be treated differently, for instance with legal status as persons. Pascal Gagneux, an evolutionary biologist and primate expert at the University of California, San Diego, argues, given chimpanzees' sense of self, tool use, and genetic similarity to human beings, studies using chimpanzees should follow the ethical guidelines used for human subjects unable to give consent. A recent study suggests chimpanzees which are retired from labs exhibit a form of post-traumatic stress disorder. Stuart Zola, director of the Yerkes laboratory, disagrees. He told National Geographic: "I don't think we should make a distinction between our obligation to treat humanely any species, whether it's a rat or a monkey or a chimpanzee. No matter how much we may wish it, chimps are not human."
Only one European laboratory, the Biomedical Primate Research Centre in Rijswijk, the Netherlands, used chimpanzees in research. It formerly held 108 chimpanzees among 1,300 non-human primates. The Dutch ministry of science decided to phase out research at the centre from 2001. Trials already under way were however allowed to run their course. Chimpanzees including the female Ai have been studied at the Primate Research Institute of Kyoto University, Japan, formerly directed by Tetsuro Matsuzawa, since 1978. 12 chimpanzees are currently held at the facility.
Two chimpanzees have been sent into outer space as NASA research subjects. Ham, the first great ape in space, was launched in the Mercury-Redstone 2 capsule on 31 January 1961, and survived the suborbital flight. Enos, the third primate to orbit Earth after Soviet cosmonauts Yuri Gagarin and Gherman Titov, flew on Mercury-Atlas 5 on 29 November of the same year.
Field study
Jane Goodall undertook the first long-term field study of the chimpanzee, begun in Tanzania at Gombe Stream National Park in 1960. Other long-term studies begun in the 1960s include Adriaan Kortlandt's in the eastern Democratic Republic of the Congo and Toshisada Nishida's in Mahale Mountains National Park in Tanzania. Current understanding of the species' typical behaviours and social organisation has been formed largely from Goodall's ongoing 60-year Gombe research study.
Attacks
Chimpanzees have attacked humans. In Uganda, several attacks on children have happened, some of them fatal. Some of these attacks may have been due to the chimpanzees being intoxicated (from alcohol obtained from rural brewing operations) and becoming aggressive towards humans. Human interactions with chimpanzees may be especially dangerous if the chimpanzees perceive humans as potential rivals. At least six cases of chimpanzees snatching and eating human babies are documented.
A chimpanzee's strength and sharp teeth mean that attacks, even on adult humans, can cause severe injuries. This was evident after the attack on and near death of former NASCAR driver St. James Davis, who was mauled by two escaped chimpanzees while he and his wife were celebrating the birthday of their former pet chimpanzee. Another example of chimpanzees being aggressive toward humans occurred in 2009 in Stamford, Connecticut, when a 13-year-old pet chimpanzee named Travis attacked his owner's friend, who lost her hands, eyes, nose, and part of her maxilla in the attack.
Human immunodeficiency virus
Two primary classes of human immunodeficiency virus (HIV) infect humans: HIV-1 and HIV-2. HIV-1 is the more virulent and easily transmitted, and is the source of the majority of HIV infections throughout the world; HIV-2 occurs mostly in west Africa. Both types originated in west and central Africa, jumping from other primates to humans. HIV-1 has evolved from a simian immunodeficiency virus (SIVcpz) found in the subspecies P. t. troglodytes of southern Cameroon. Kinshasa, in the Democratic Republic of Congo, has the greatest genetic diversity of HIV-1 so far discovered, suggesting the virus has been there longer than anywhere else. HIV-2 crossed species from a different strain of simian immunodeficiency virus, found in sooty mangabey monkeys in Guinea-Bissau.
Conservation
The chimpanzee is on the IUCN Red List as an endangered species. Chimpanzees are legally protected in most of their range and are found both in and outside national parks. Between 172,700 and 299,700 individuals are thought to be living in the wild, a decrease from about a million chimpanzees in the early 1900s. Chimpanzees are listed in Appendix I of the Convention on International Trade in Endangered Species (CITES), meaning that commercial international trade in wild-sourced specimens is prohibited and all other international trade (including in parts and derivatives) is regulated by the CITES permitting system.
The biggest threats to the chimpanzee are habitat destruction, poaching, and disease. Chimpanzee habitats have been limited by deforestation in both West and Central Africa. Road building has caused habitat degradation and fragmentation of chimpanzee populations and may allow poachers more access to areas that had not been seriously affected by humans. Although deforestation rates are low in western Central Africa, selective logging may take place outside national parks.
Chimpanzees are a common target for poachers. In Ivory Coast, chimpanzees make up 1–3% of bushmeat sold in urban markets. They are also taken, often illegally, for the pet trade and are hunted for medicinal purposes in some areas. Farmers sometimes kill chimpanzees that threaten their crops; others are unintentionally maimed or killed by snares meant for other animals.
Infectious diseases are a main cause of death for chimpanzees. They succumb to many diseases that afflict humans because the two species are so similar. As the human population grows, so does the risk of disease transmission between humans and chimpanzees.
| Biology and health sciences | Primates | null |
7849 | https://en.wikipedia.org/wiki/Crystallographic%20defect | Crystallographic defect | A crystallographic defect is an interruption of the regular patterns of arrangement of atoms or molecules in crystalline solids. The positions and orientations of particles, which repeat at fixed distances determined by the unit cell parameters, give crystals a periodic structure, but this structure is usually imperfect. Several types of defects are often characterized: point defects, line defects, planar defects, and bulk defects. Topological homotopy provides a mathematical method for characterizing them.
Point defects
Point defects are defects that occur only at or around a single lattice point. They are not extended in space in any dimension. Strict limits for how small a point defect is are generally not defined explicitly. However, these defects typically involve at most a few extra or missing atoms. Larger defects in an ordered structure are usually considered dislocation loops. For historical reasons, many point defects, especially in ionic crystals, are called centers: for example, a vacancy in many ionic solids is called a luminescence center, a color center, or F-center. Such defects permit ionic transport through crystals, leading to electrochemical reactions. Point defects are frequently specified using Kröger–Vink notation.
Vacancy defects are lattice sites which would be occupied in a perfect crystal, but are vacant. If a neighboring atom moves to occupy the vacant site, the vacancy moves in the opposite direction to the site which used to be occupied by the moving atom. The stability of the surrounding crystal structure guarantees that the neighboring atoms will not simply collapse around the vacancy. In some materials, neighboring atoms actually move away from a vacancy, because they experience attraction from atoms in the surroundings. A vacancy (or pair of vacancies in an ionic solid) is sometimes called a Schottky defect.
Interstitial defects are atoms that occupy a site in the crystal structure at which there is usually not an atom. They are generally high energy configurations. Small atoms (mostly impurities) in some crystals can occupy interstices without high energy, such as hydrogen in palladium.
A nearby pair of a vacancy and an interstitial is often called a Frenkel defect or Frenkel pair. This is caused when an ion moves into an interstitial site and creates a vacancy.
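In the Kröger–Vink notation mentioned above, the Schottky and Frenkel defects just described can be written compactly; the NaCl and AgCl reactions below are standard textbook examples.

```latex
% Kröger–Vink notation: main symbol = species (V for a vacancy), subscript =
% lattice site occupied (i = interstitial), superscript = effective charge
% relative to the perfect lattice (\bullet = +1, ' = -1, \times = neutral).
% Schottky pair in NaCl (a cation and an anion vacancy, jointly neutral):
\varnothing \;\longrightarrow\; V'_{\mathrm{Na}} + V^{\bullet}_{\mathrm{Cl}}
% Frenkel pair in AgCl (a silver ion leaves its site for an interstice):
\mathrm{Ag}^{\times}_{\mathrm{Ag}} \;\longrightarrow\;
    \mathrm{Ag}^{\bullet}_{i} + V'_{\mathrm{Ag}}
```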
Due to fundamental limitations of material purification methods, materials are never 100% pure, which by definition induces defects in crystal structure. In the case of an impurity, the atom is often incorporated at a regular atomic site in the crystal structure. This is neither a vacant site nor is the atom on an interstitial site and it is called a substitutional defect. The atom is not supposed to be anywhere in the crystal, and is thus an impurity. In some cases where the radius of the substitutional atom (ion) is substantially smaller than that of the atom (ion) it is replacing, its equilibrium position can be shifted away from the lattice site. These types of substitutional defects are often referred to as off-center ions. There are two different types of substitutional defects: Isovalent substitution and aliovalent substitution. Isovalent substitution is where the ion that is substituting the original ion is of the same oxidation state as the ion it is replacing. Aliovalent substitution is where the ion that is substituting the original ion is of a different oxidation state than the ion it is replacing. Aliovalent substitutions change the overall charge within the ionic compound, but the ionic compound must be neutral. Therefore, a charge compensation mechanism is required. Hence either one of the metals is partially or fully oxidised or reduced, or ion vacancies are created.
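As a concrete example of aliovalent substitution with vacancy compensation, the textbook case of CaCl2 dissolved in NaCl can be written in Kröger–Vink notation:

```latex
% Ca^2+ on a Na+ site carries effective charge +1, balanced by one
% sodium vacancy per substituted ion, keeping the crystal neutral:
\mathrm{CaCl_2} \;\xrightarrow{\mathrm{NaCl}}\;
    \mathrm{Ca}^{\bullet}_{\mathrm{Na}} + V'_{\mathrm{Na}} + 2\,\mathrm{Cl}^{\times}_{\mathrm{Cl}}
```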
Antisite defects occur in an ordered alloy or compound when atoms of different type exchange positions. For example, some alloys have a regular structure in which every other atom is a different species; for illustration assume that type A atoms sit on the corners of a cubic lattice, and type B atoms sit in the center of the cubes. If one cube has an A atom at its center, the atom is on a site usually occupied by a B atom, and is thus an antisite defect. This is neither a vacancy nor an interstitial, nor an impurity.
Topological defects are regions in a crystal where the normal chemical bonding environment is topologically different from the surroundings. For instance, in a perfect sheet of graphite (graphene) all atoms are in rings containing six atoms. If the sheet contains regions where the number of atoms in a ring is different from six, while the total number of atoms remains the same, a topological defect has formed. An example is the Stone–Wales defect in nanotubes, which consists of two adjacent 5-membered and two 7-membered atom rings.
Amorphous solids may contain defects. These are naturally somewhat hard to define, but sometimes their nature can be quite easily understood. For instance, in ideally bonded amorphous silica all Si atoms have 4 bonds to O atoms and all O atoms have 2 bonds to Si atoms. Thus, for example, an O atom with only one Si bond (a dangling bond) can be considered a defect in silica. Moreover, defects can also be defined in amorphous solids based on empty or densely packed local atomic neighbourhoods, and the properties of such 'defects' can be shown to be similar to normal vacancies and interstitials in crystals.
Complexes can form between different kinds of point defects. For example, if a vacancy encounters an impurity, the two may bind together if the impurity is too large for the lattice. Interstitials can form 'split interstitial' or 'dumbbell' structures where two atoms effectively share an atomic site, resulting in neither atom actually occupying the site.
Line defects
Line defects can be described by gauge theories.
Dislocations are linear defects, around which the atoms of the crystal lattice are misaligned.
There are two basic types of dislocations, the edge dislocation and the screw dislocation. "Mixed" dislocations, combining aspects of both types, are also common.
Edge dislocations are caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the adjacent planes are not straight, but instead bend around the edge of the terminating plane so that the crystal structure is perfectly ordered on either side. The analogy with a stack of paper is apt: if half a sheet of paper is inserted in a stack of paper, the defect in the stack is only noticeable at the edge of the half sheet.
The screw dislocation is more difficult to visualise, but basically comprises a structure in which a helical path is traced around the linear defect (dislocation line) by the planes of atoms in the crystal lattice.
The presence of a dislocation results in lattice strain (distortion). The direction and magnitude of such distortion is expressed in terms of a Burgers vector (b). For an edge type, b is perpendicular to the dislocation line, whereas in the case of the screw type it is parallel. In metallic materials, b is aligned with close-packed crystallographic directions and its magnitude is equivalent to one interatomic spacing.
Dislocations can move if the atoms from one of the surrounding planes break their bonds and rebond with the atoms at the terminating edge.
It is the presence of dislocations and their ability to readily move (and interact) under the influence of stresses induced by external loads that leads to the characteristic malleability of metallic materials.
Dislocations can be observed using transmission electron microscopy, field ion microscopy and atom probe techniques.
Deep-level transient spectroscopy has been used for studying the electrical activity of dislocations in semiconductors, mainly silicon.
Disclinations are line defects corresponding to "adding" or "subtracting" an angle around a line. Basically, this means that if one tracks the crystal orientation around the line defect, one obtains a rotation. They were traditionally thought to play a role only in liquid crystals, but recent developments suggest that they might also have a role in solid materials, e.g. leading to the self-healing of cracks.
Planar defects
Grain boundaries occur where the crystallographic direction of the lattice abruptly changes. This usually occurs when two crystals begin growing separately and then meet.
Antiphase boundaries occur in ordered alloys: in this case, the crystallographic direction remains the same, but each side of the boundary has an opposite phase: For example, if the ordering is usually ABABABAB (hexagonal close-packed crystal), an antiphase boundary takes the form of ABABBABA.
Stacking faults occur in a number of crystal structures, but the common example is in close-packed structures. They are formed by a local deviation of the stacking sequence of layers in a crystal. An example would be the ABABCABAB stacking sequence.
A twin boundary is a defect that introduces a plane of mirror symmetry in the ordering of a crystal. For example, in cubic close-packed crystals, the stacking sequence of a twin boundary would be ABCABCBACBA.
On planes of single crystals, steps between atomically flat terraces can also be regarded as planar defects. It has been shown that such defects and their geometry have significant influence on the adsorption of organic molecules.
Bulk defects
Three-dimensional macroscopic or bulk defects include pores, cracks, and inclusions.
Voids — small regions where there are no atoms, and which can be thought of as clusters of vacancies
Impurities can cluster together to form small regions of a different phase. These are often called precipitates.
Mathematical classification methods
A successful mathematical classification method for physical lattice defects, which works not only with the theory of dislocations and other defects in crystals but also, e.g., for disclinations in liquid crystals and for excitations in superfluid 3He, is the topological homotopy theory.
Computer simulation methods
Density functional theory, classical molecular dynamics, and kinetic Monte Carlo simulations are widely used to study the properties of defects in solids with computer simulations.
Simulating the jamming of hard spheres of different sizes and/or in containers with non-commensurate sizes using the Lubachevsky–Stillinger algorithm can be an effective technique for demonstrating some types of crystallographic defects.
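As a minimal sketch of the kinetic Monte Carlo approach mentioned above, the following Python snippet simulates a single vacancy hopping on a 2D square lattice with a uniform migration barrier. The barrier height, attempt frequency, and temperature are illustrative assumptions, not material-specific data.

```python
import math
import random

# Minimal kinetic Monte Carlo sketch: one vacancy hopping on a 2D square
# lattice. All numerical parameters below are illustrative assumptions.
kB = 8.617e-5   # Boltzmann constant, eV/K
E_m = 0.7       # assumed vacancy migration barrier, eV
nu0 = 1e13      # assumed attempt frequency, 1/s
T = 600.0       # temperature, K

rate = nu0 * math.exp(-E_m / (kB * T))  # hop rate for each of the 4 directions
x = y = 0
t = 0.0
for _ in range(10_000):
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x, y = x + dx, y + dy
    # Residence time for 4 equal-rate events: exponential with total rate 4*rate.
    t += -math.log(1.0 - random.random()) / (4 * rate)

print(f"vacancy displacement after 10000 hops: ({x}, {y}), elapsed time {t:.3e} s")
```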
| Physical sciences | Crystallography | Physics |
7890 | https://en.wikipedia.org/wiki/Dune | Dune | A dune is a landform composed of wind- or water-driven sand. It typically takes the form of a mound, ridge, or hill. An area with dunes is called a dune system or a dune complex. A large dune complex is called a dune field, while broad, flat regions covered with wind-swept sand or dunes, with little or no vegetation, are called ergs or sand seas. Dunes occur in different shapes and sizes, but most kinds of dunes are longer on the stoss (upflow) side, where the sand is pushed up the dune, and have a shorter slip face in the lee side. The valley or trough between dunes is called a dune slack.
Dunes are most common in desert environments, where the lack of moisture hinders the growth of vegetation that would otherwise interfere with the development of dunes. However, sand deposits are not restricted to deserts, and dunes are also found along sea shores, along streams in semiarid climates, in areas of glacial outwash, and in other areas where poorly cemented sandstone bedrock disintegrates to produce an ample supply of loose sand. Subaqueous dunes can form from the action of water flow (fluvial processes) on sand or gravel beds of rivers, estuaries, and the sea-bed.
Some coastal areas have one or more sets of dunes running parallel to the shoreline directly inland from the beach. In most cases, the dunes are important in protecting the land against potential ravages by storm waves from the sea. Artificial dunes are sometimes constructed to protect coastal areas. The dynamic action of wind and water can sometimes cause dunes to drift, which can have serious consequences. For example, the town of Eucla, Western Australia, had to be relocated in the 1890s because of dune drift.
The modern word "dune" came into English from French around 1790, which in turn came from Middle Dutch dūne.
Formation
A universally precise distinction does not exist between ripples, dunes, and draas, which are all deposits of the same type of materials. Dunes are generally defined as greater than 7 cm tall and may have ripples, while ripples are deposits that are less than 3 cm tall. A draa is a very large aeolian landform, with a length of several kilometers and a height of tens to hundreds of meters, and which may have superimposed dunes.
Dunes are made of sand-sized particles, and may consist of quartz, calcium carbonate, snow, gypsum, or other materials. The upwind/upstream/upcurrent side of the dune is called the stoss side; the downflow side is called the lee side. Sand is pushed (creep) or bounces (saltation) up the stoss side, and slides down the lee side. A side of a dune that the sand has slid down is called a slip face (or slipface).
The Bagnold formula gives the rate at which sand can be transported by the wind.
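A commonly quoted form of the Bagnold formula gives the streamwise mass flux of wind-blown sand as q = C·sqrt(d/D)·(ρ/g)·u*³, where u* is the wind's friction velocity. The sketch below assumes this form; the constant C and the exact formulation vary between sources.

```python
def bagnold_flux(u_star, d, rho=1.225, g=9.81, C=1.8, D=250e-6):
    """Approximate mass flux of wind-blown sand, kg per metre width per second.

    u_star: friction (shear) velocity of the wind, m/s
    d:      typical grain diameter, m
    D:      reference grain diameter (about 250 micrometres)
    rho:    air density, kg/m^3
    C:      empirical constant (about 1.8 for naturally graded sand)
    """
    return C * (d / D) ** 0.5 * (rho / g) * u_star ** 3

# Example: friction velocity of 0.4 m/s over 300-micrometre sand
print(bagnold_flux(u_star=0.4, d=300e-6))  # roughly 0.016 kg/(m*s)
```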
Aeolian dunes
Aeolian dune shapes
Five basic dune types are recognized: crescentic, linear, star, dome, and parabolic. Dune areas may occur in three forms: simple (isolated dunes of basic type), compound (larger dunes on which smaller dunes of same type form), and complex (combinations of different types).
Barchan or crescentic
Barchan dunes are crescent-shaped mounds which are generally wider than they are long. The lee-side slipfaces are on the concave sides of the dunes. These dunes form under winds that blow consistently from one direction (unimodal winds). They form separate crescents when the sand supply is comparatively small. When the sand supply is greater, they may merge into barchanoid ridges, and then transverse dunes (see below).
Some types of crescentic dunes move more quickly over desert surfaces than any other type of dune. A group of dunes moved more than 100 metres per year between 1954 and 1959 in China's Ningxia Province, and similar speeds have been recorded in the Western Desert of Egypt. The largest crescentic dunes on Earth, with mean crest-to-crest widths of more than three kilometres, are in China's Taklamakan Desert.
Transverse dunes
Abundant barchan dunes may merge into barchanoid ridges, which then grade into linear (or slightly sinuous) transverse dunes, so called because they lie transverse, or across, the wind direction, with the wind blowing perpendicular to the ridge crest.
Seif or longitudinal dunes
Seif dunes are linear (or slightly sinuous) dunes with two slip faces. The dunes lie generally parallel to each other. The two slip faces make them sharp-crested. They are called seif dunes after the Arabic word for "sword". They may be more than 160 kilometres (100 miles) long, and thus easily visible in satellite images (see illustrations).
Seif dunes are associated with bidirectional winds. The long axes and ridges of these dunes extend along the resultant direction of sand movement (hence the name "longitudinal"). Some linear dunes merge to form Y-shaped compound dunes.
Formation is debated. Ralph Bagnold, in The Physics of Blown Sand and Desert Dunes, suggested that some seif dunes form when a barchan dune moves into a bidirectional wind regime, and one arm or wing of the crescent elongates. Others suggest that seif dunes are formed by vortices in a unidirectional wind. In the sheltered troughs between highly developed seif dunes, barchans may be formed, because the wind is constrained to be unidirectional by the dunes.
Seif dunes are common in the Sahara. They range up to in height and in length. In the southern third of the Arabian Peninsula, a vast erg, called the Rub' al Khali or Empty Quarter, contains seif dunes that stretch for almost and reach heights of over .
Linear loess hills known as pahas are superficially similar. These hills appear to have been formed during the last ice age under permafrost conditions dominated by sparse tundra vegetation.
Star
Star dunes are pyramidal sand mounds with slipfaces on three or more arms that radiate from the high center of the mound. They tend to accumulate in areas with multidirectional wind regimes. Star dunes grow upward rather than laterally. They dominate the Grand Erg Oriental of the Sahara. In other deserts, they occur around the margins of the sand seas, particularly near topographic barriers. In the southeast Badain Jaran Desert of China, the star dunes are up to 500 metres tall and may be the tallest dunes on Earth.
Dome
Oval or circular mounds that generally lack a slipface. Dome dunes are rare and occur at the far upwind margins of sand seas.
Lunettes
Fixed crescentic dunes that form on the leeward margins of playas and river valleys in arid and semiarid regions in response to the direction(s) of prevailing winds, are known as lunettes, source-bordering dunes, bourrelets and clay dunes. They may be composed of clay, silt, sand, or gypsum, eroded from the basin floor or shore, transported up the concave side of the dune, and deposited on the convex side. Examples in Australia are up to 6.5 km long, 1 km wide, and up to 50 metres high. They also occur in southern and West Africa, and in parts of the western United States, especially Texas.
Parabolic
U-shaped mounds of sand with convex noses trailed by elongated arms are parabolic dunes. These dunes are formed from blowout dunes where the erosion of vegetated sand leads to a U-shaped depression. The elongated arms are held in place by vegetation; the largest arm known on Earth reaches 12 km. Sometimes these dunes are called U-shaped, blowout, or hairpin dunes, and they are well known in coastal deserts. Unlike crescent shaped dunes, their crests point upwind. The bulk of the sand in the dune migrates forward.
In plan view, these are U-shaped or V-shaped mounds of well-sorted, very fine to medium sand with elongated arms that extend upwind behind the central part of the dune. There are slipfaces that often occur on the outer side of the nose and on the outer slopes of the arms.
These dunes often occur in semiarid areas where the precipitation is retained in the lower parts of the dune and underlying soils. The stability of the dunes was once attributed to the vegetative cover but recent research has pointed to water as the main source of parabolic dune stability. The vegetation that covers them—grasses, shrubs, and trees—help anchor the trailing arms. In inland deserts, parabolic dunes commonly originate and extend downwind from blowouts in sand sheets only partly anchored by vegetation. They can also originate from beach sands and extend inland into vegetated areas in coastal zones and on shores of large lakes.
Most parabolic dunes do not reach heights higher than a few tens of metres except at their nose, where vegetation stops or slows the advance of accumulating sand.
Simple parabolic dunes have only one set of arms that trail upwind, behind the leading nose. Compound parabolic dunes are coalesced features with several sets of trailing arms. Complex parabolic dunes include subsidiary superposed or coalesced forms, usually of barchanoid or linear shapes.
Parabolic dunes, like crescent dunes, occur in areas where very strong winds are mostly unidirectional. Although these dunes are found in areas now characterized by variable wind speeds, the effective winds associated with the growth and migration of both the parabolic and crescent dunes probably are the most consistent in wind direction.
The grain size for these well-sorted, very fine to medium sands is about 0.06 to 0.5 mm. Parabolic dunes have loose sand and steep slopes only on their outer flanks. The inner slopes are mostly well packed and anchored by vegetation, as are the corridors between individual dunes. Because all dune arms are oriented in the same direction and the inter-dune corridors are generally swept clear of loose sand, the corridors can usually be traversed between the trailing arms of the dune. Crossing straight over the dune by going over the trailing arms, however, can be very difficult, and traversing the nose is also very difficult, because the nose is usually made up of loose sand with little if any vegetation.
A type of extensive parabolic dune that lacks discernible slipfaces and has mostly coarse grained sand is known as a zibar. The term zibar comes from the Arabic word to describe "rolling transverse ridges ... with a hard surface". The dunes are small, have low relief, and can be found in many places across the planet from Wyoming (United States) to Saudi Arabia to Australia. Spacing between zibars ranges from 50 to 400 metres and they do not become more than 10 metres high. The dunes form at about ninety degrees to the prevailing wind which blows away the small, fine-grained sand leaving behind the coarser grained sand to form the crest.
Reversing dunes
Occurring wherever winds periodically reverse direction, reversing dunes are varieties of any of the above shapes. These dunes typically have major and minor slipfaces oriented in opposite directions. The minor slipfaces are usually temporary, as they appear after a reverse wind and are generally destroyed when the wind next blows in the dominant direction.
Draas
Draas are very large-scale dune bedforms; they may be tens or a few hundreds of metres in height, kilometres wide, and hundreds of kilometres in length. After a draa has reached a certain size, it generally develops superimposed dune forms. They are thought to be more ancient and slower-moving than smaller dunes, and to form by vertical growth of existing dunes. Draas are widespread in sand seas and are well-represented in the geological record.
Dune complexity
All these dune shapes may occur in three forms: simple (isolated dunes of basic type), compound (larger dunes on which smaller dunes of same type form), and complex (combinations of different types). Simple dunes are basic forms with the minimum number of slipfaces that define the geometric type. Compound dunes are large dunes on which smaller dunes of similar type and slipface orientation are superimposed. Complex dunes are combinations of two or more dune types. A crescentic dune with a star dune superimposed on its crest is the most common complex dune. Simple dunes represent a wind regime that has not changed in intensity or direction since the formation of the dune, while compound and complex dunes suggest that the intensity and direction of the wind has changed.
Dune movement
The sand mass of dunes can move either windward or leeward, depending on whether the wind is making contact with the dune from below or above its crest. If wind hits from above, the sand particles move leeward; the leeward flux of sand is greater than the windward flux. Conversely, if wind hits from below, sand particles move windward. Further, if the wind is carrying sand particles when it hits the dune, the dune's sand particles will saltate more than if the wind had hit the dune without carrying sand particles.
Coastal dunes
Coastal dunes form when wet sand is deposited along the coast and dries out and is blown along the beach. Dunes form where the beach is wide enough to allow for the accumulation of wind-blown sand, and where prevailing onshore winds tend to blow sand inland. The three key ingredients for coastal dune formation are a large sand supply, winds to move said sand supply, and a place for the sand supply to accumulate. Obstacles—for example, vegetation, pebbles and so on—tend to slow down the wind and lead to the deposition of sand grains. These small "incipient dunes" or "shadow dunes" tend to grow in the vertical direction if the obstacle slowing the wind can also grow vertically (i.e., vegetation). Coastal dunes expand laterally as a result of lateral growth of coastal plants via seed or rhizome. Models of coastal dunes suggest that their final equilibrium height is related to the distance between the water line and where vegetation can grow. Coastal dunes can be classified by where they develop, or begin to take shape. Dunes are commonly grouped into either the primary dune group or the secondary dune group. Primary dunes gain most of their sand from the beach itself, while secondary dunes gain their sand from the primary dune. Along the Florida Panhandle, most dunes are considered to be foredunes or hummocks. Different locations around the globe have dune formations unique to their given coastal profile.
Coastal sand dunes can provide privacy and/or habitats to support local flora and fauna. Animals such as sand snakes, lizards, and rodents can live in coastal sand dunes, along with insects of all types. Often the vegetation of sand dunes is discussed without acknowledging the importance that coastal dunes have for animals. Further, some animals, such as foxes and feral pigs can use coastal dunes as hunting grounds to find food. Birds are also known to utilize coastal dunes as nesting grounds. All these species find the coastal environment of the sand dune vital to their species' survival.
Over the course of time coastal dunes may be impacted by tropical cyclones or other intense storm activity, dependent on their location. Recent work has suggested that coastal dunes tend to evolve toward a high or low morphology depending on the growth rate of dunes relative to storm frequency. During a storm event, dunes play a significant role in minimizing wave energy as it moves onshore. As a result, coastal dunes, especially those in the foredune area affected by a storm surge, will retreat or erode. To counteract the damage from tropical activity on coastal dunes, short term post-storm efforts can be made by individual agencies through fencing to help with sand accumulation.
How much a dune erodes during any storm surge is related to its location on the coastal shoreline and the profile of the beach during a particular season. In those areas with harsher winter weather, during the summer a beach tends to take on more of a convex appearance due to gentler waves, while the same beach in the winter may take on more of a concave appearance. As a result, coastal dunes can get eroded much more quickly in the winter than in the summer. The converse is true in areas with harsher summer weather.
There are many threats to these coastal communities. Some coastal dunes, for example ones in San Francisco, have been completely altered by urbanization, reshaping the dune for human use. This puts native species at risk. Another danger, in California and places in the UK specifically, is the introduction of invasive species. Plant species such as Carpobrotus edulis were introduced from South Africa in an attempt to stabilize the dunes and provide horticultural benefits, but instead spread, taking land away from native species. Ammophila arenaria, known as European beachgrass, has a similar story, though it has no horticultural benefits. It has great ground coverage and, as intended, stabilized the dunes, but as an unintended side effect prevented native species from thriving in those dunes. One such example is the dune field at Point Reyes, California. There are now efforts to get rid of both of these invasive species.
Ecological succession on coastal dunes
As a dune forms, plant succession occurs. The conditions on an embryo dune are harsh, with salt spray from the sea carried on strong winds. The dune is well drained and often dry, and composed of calcium carbonate from seashells. Rotting seaweed, brought in by storm waves, adds nutrients to allow pioneer species to colonize the dune. For example, in the United Kingdom these pioneer species are often marram grass, sea wort grass and other sea grasses. These plants are well adapted to the harsh conditions of the foredune, typically having deep roots which reach the water table, root nodules that produce nitrogen compounds, and protected stomata, reducing transpiration. Also, the deep roots bind the sand together, and the dune grows into a foredune as more sand is blown over the grasses. The grasses add nitrogen to the soil, meaning other, less hardy plants can then colonize the dunes. Typically these are heather, heaths and gorses. These too are adapted to the low soil water content and have small, prickly leaves which reduce transpiration. Heather adds humus to the soil and is usually replaced by coniferous trees, which can tolerate low soil pH, caused by the accumulation and decomposition of organic matter with nitrate leaching. Coniferous forests and heathland are common climax communities for sand dune systems.
Young dunes are called yellow dunes and dunes which have high humus content are called grey dunes. Leaching occurs on the dunes, washing humus into the slacks, and the slacks may be much more developed than the exposed tops of the dunes. It is usually in the slacks that more rare species are developed and there is a tendency for the dune slacks' soil to be waterlogged where only marsh plants can survive. In Europe these plants include: creeping willow, cotton grass, yellow iris, reeds, and rushes. As for vertebrates in European dunes, natterjack toads sometimes breed here.
Coastal dune floral adaptations
Dune ecosystems are extremely difficult places for plants to survive. This is due to a number of pressures related to their proximity to the ocean and confinement to growth on sandy substrates. These include:
Little available soil moisture
Little available soil organic matter/nutrients/water
Harsh winds
Salt spray
Erosion/shifting and sometimes burial or exposure (from shifting)
Tidal influences
Plants have evolved many adaptations to cope with these pressures:
Deep taproot to reach water table (Pink Sand Verbena)
Shallow but extensive root systems
Rhizomes
Prostrate growth form to avoid wind/salt spray (Abronia spp., Beach Primrose)
Krummholz growth form (Monterey Cypress, not a dune plant but one that deals with similar pressures)
Thickened cuticle/Succulence to reduce moisture loss and reduce salt uptake (Ambrosia/Abronia spp., Calystegia soldanella)
Pale leaves to reduce insolation (Artemisia/Ambrosia spp.)
Thorny/Spiky seeds to ensure establishment in vicinity of parent, reduces chances of being blown away or swept out to sea (Ambrosia chamissonis)
Gypsum dunes
In deserts where large amounts of limestone mountains surround a closed basin, such as at White Sands National Park in south-central New Mexico, occasional storm runoff transports dissolved limestone and gypsum into a low-lying pan within the basin where the water evaporates, depositing the gypsum and forming crystals known as selenite. The crystals left behind by this process are eroded by the wind and deposited as vast white dune fields that resemble snow-covered landscapes. These types of dune are rare, and only form in closed arid basins that retain the highly soluble gypsum that would otherwise be washed into the sea.
Nabkha dunes
A nabkha, or coppice dune, is a small dune anchored by vegetation. They usually indicate desertification or soil erosion, and serve as nesting and burrow sites for animals.
Sub-aqueous dunes
Sub-aqueous (underwater) dunes form on a bed of sand or gravel under the actions of water flow. They are ubiquitous in natural channels such as rivers and estuaries, and also form in engineered canals and pipelines. Dunes move downstream as the upstream slope is eroded and the sediment deposited on the downstream or lee slope in typical bedform construction. In the case of sub-aqueous barchan dunes, sediment is lost by their extremities, known as horns.
These dunes most often form as a continuous 'train' of dunes, showing remarkable similarity in wavelength and height. The shape of a dune gives information about its formation environment. For instance, rivers produce asymmetrical ripples, with the steeper slip face facing downstream. Ripple marks preserved in sedimentary strata in the geological record can be used to determine the direction of current flow, and thus an indication of the source of the sediments.
Dunes on the bed of a channel significantly increase flow resistance, their presence and growth playing a major part in river flooding.
Lithified dunes
A lithified (consolidated) sand dune is a type of sandstone that is formed when a marine or aeolian sand dune becomes compacted and hardened. Once in this form, water passing through the rock can carry and deposit minerals, which can alter the colour of the rock. Cross-bedded layers of stacks of lithified dunes can produce cross-hatching patterns, such as those seen in Zion National Park in the western United States.
A slang term, used in the southwest US, for consolidated and hardened sand dunes is "slickrock", a name that was introduced by pioneers of the Old West because their steel-rimmed wagon wheels could not gain traction on the rock.
Desertification
Sand dunes can have a negative impact on humans when they encroach on human habitats. Sand dunes move via a few different means, all of them helped along by wind. One way that dunes can move is by saltation, where sand particles skip along the ground like a bouncing ball. When these skipping particles land, they may knock into other particles and cause them to move as well, in a process known as creep. With slightly stronger winds, particles collide in mid-air, causing sheet flows. In a major dust storm, dunes may move tens of metres through such sheet flows. As with snow, sand avalanches falling down the slipface of the dunes—the side that faces away from the winds—also move the dunes forward.
Sand threatens buildings and crops in Africa, the Middle East, and China. Drenching sand dunes with oil stops their migration, but this approach uses a valuable resource and is quite destructive to the dunes' animal habitats. Sand fences might also slow their movement to a crawl, but geologists are still analyzing results for the optimum fence designs. Preventing sand dunes from overwhelming towns, villages, and agricultural areas has become a priority for the United Nations Environment Programme. Planting dunes with vegetation also helps to stabilise them.
Conservation
Dune habitats provide niches for highly specialized plants and animals, including numerous rare species and some endangered species. Due to widespread human population expansion, dunes face destruction through land development and recreational usages, as well as alteration to prevent the encroachment of sand onto inhabited areas. Some countries, notably the United States, Australia, Canada, New Zealand, the United Kingdom, Netherlands, and Sri Lanka have developed significant programs of dune protection through the use of sand dune stabilization. In the U.K., a Biodiversity Action Plan has been developed to assess dunes loss and to prevent future dunes destruction.
Examples
Africa
Alexandria Coastal Dunefields, in the Eastern Cape, South Africa
Witsand Nature Reserve in the Kalahari Desert, South Africa
The white dunes of De Hoop Nature Reserve, South Africa
The dunes of the Suguta Valley, a desert part of the Great Rift Valley in northwestern Kenya
The dunes of the Danakil Depression, northeastern Ethiopia toward the border with Eritrea
The dunes of Sossusvlei in the greater Namib-Naukluft National Park, Namibia
Chad Basin National Park in northern Nigeria
The coastal dunes of Iona National Park in the southwesternmost part of Angola
Khawa dunes in the Kgalagadi Transfrontier Park, the southwesternmost part of Botswana
La Dune Rose in the city of Gao in northern Mali near the Niger River
Erg Aoukar in southeastern Mauritania extending into Mali
Erg Chech in southwestern Algeria and northern Mali
Erg Chebbi and Erg Chigaga in southern Morocco
Grand Erg Oriental in northeastern Algeria and southern Tunisia
Grand Erg Occidental in western Algeria
The Idehan Ubari and the Idehan Murzuq in southwestern Libya
The Corralejo dunes in the Canary Islands, Spain
The Rebiana Sand Sea in southeastern Libya
The Great Sand Sea in southeastern Libya and southwestern Egypt
The Selima Sand Sheet in northwestern Sudan
The dunes of the Bayuda Desert in northern Sudan
The dunes of the Lompoul Desert in northwestern Senegal
The coastal dunes of Bazaruto Island, Mozambique
Erg du Djourab in northern Chad
The dunes of the Mourdi Depression in northeastern Chad
The dunes of Tin Toumma Desert, in southeastern Niger
Grand Erg de Bilma in the Ténéré, in northern Niger
The dunes of Oursi in the Sahel Region, northern Burkina Faso
Tanzania's Shifting Sands near Olduvai Gorge
Asia
Sunset view dunes in the Alankuda village on Kalpitiya peninsula in Sri Lanka.
The dunes in the Thar Desert in India and Pakistan
Tottori Sand Dunes, Tottori Prefecture, Japan
Rig-e Jenn in the Central Desert of Iran.
Rig-e Lut in the Southeast of Iran.
The Ilocos Norte Sand Dunes in the Philippines, particularly Paoay Sand Dunes.
Moreeb Dune in Liwa Oasis, United Arab Emirates, used as an arena for drag motor sports and Sandboarding.
Gumuk Pasir Parangkusumo near Parangtritis beach in Yogyakarta, Indonesia.
Lautan Pasir, a volcanic dune in Bromo Tengger Semeru National Park, Indonesia.
Mui Ne, Vietnam.
Wahiba Sands, Oman
Teri, red dune complex in southern India
The dunes of the Taklamakan Desert in southwest Xinjiang in Northwest China
Tukulans of the Central Yakutian Lowland, Russia
Europe
The Dunes of Dyuni, near Pomorie, Bulgaria, vast area of sand dunes in the Burgas Province
The Dune of Pilat, not far from Bordeaux, France, is the largest known sand dune in Europe
The Dunes of Piscinas, in the south west of Sardinia island.
Sands of Forvie within the Ythan Estuary complex, Aberdeenshire, Scotland.
Oxwich Dunes, near Swansea, is on the Gower Peninsula in Wales.
Winterton Dunes – Norfolk, England
Słowiński National Park, Poland
Siedlec Desert, Poland
Starczynów Desert, now mostly forested dunes, Poland
Sand dunes of Lemnos, Lesbos Prefecture, Greece
Akrotiri Sand Dune, Lemesos, Cyprus
Råbjerg mile, Northern Jutland, Denmark
Thy National Park, North Denmark Region, Denmark
Dunes of Corrubedo, Spain
Cresmina Dune, Portugal
Northern Littoral Natural Park, Portugal
Dune of Salir, Portugal
São Jacinto Dunes Natural Reserve, Portugal
Rëra e Hedhur in Shëngjin, Albania
De Hoge Veluwe National Park, Veluwe, Netherlands
Kootwijkerzand, Veluwe, Netherlands, 7 km2
Dunes of Texel National Park, Texel, Netherlands
Zuid-Kennemerland National Park, North Holland, Netherlands
Berkheide, Netherlands
Ammothines Lemnou, Lemnos, Greece
Dunes of the Curonian Spit, Lithuania and Russia
Parnidis Dune, Vecekrugas Dune - Curonian Spit, Lithuania
Oleshky Sands, Ukraine
Ullahau, Sweden, Big Parabel Dune and dune system
North America
Victoria Island Sand Dunes, 160 km northwest of Cambridge Bay, Nunavut, Canada. At approximately 600 square kilometres, they are the largest in Canada, third largest in North America, and the largest in the Arctic. There are two lakes with direct float-plane access to the dunes.
Herring Cove, Race Point and The Province Lands bicycle path in Provincetown, Massachusetts as part of the US National Park Service of the Cape Cod National Seashore.
Great Kobuk Sand Dunes, Kobuk Valley National Park, Alaska
The Athabasca Sand Dunes, located in the Athabasca Sand Dunes Provincial Park, Saskatchewan.
The Cadiz Dunes in the Mojave Trails National Monument in California.
The Kelso Dunes in the Mojave Desert of California.
Eureka Valley Sand Dunes and Mesquite Flat Sand Dunes in Death Valley National Park, California.
Great Sand Dunes National Park, Colorado.
White Sands National Park, New Mexico.
Little Sahara Recreation Area, Utah.
Sleeping Bear Dunes National Lakeshore, Michigan, on the east shore of Lake Michigan.
Indiana Dunes National Park, Indiana, on the south shore of Lake Michigan.
Warren Dunes State Park, Michigan, on the east shore of Lake Michigan.
Grand Sable Dunes, in the Pictured Rocks National Lakeshore, Michigan.
Samalayuca Dunes, in the state of Chihuahua, Mexico
Algodones Dunes near Brawley, California.
Guadalupe-Nipomo Dunes, on the central coast of California.
Monahans Sandhills State Park near Odessa, Texas.
Beaver Dunes State Park near Beaver, Oklahoma.
The Killpecker sand dunes of the Red Desert in southwestern Wyoming.
Jockey's Ridge State Park – on the Outer Banks, North Carolina.
The Great Dune found in Cape Henlopen State Park in Lewes, Delaware.
Oregon Dunes National Recreation Area near Florence, Oregon, on the Pacific Coast.
Bruneau Dunes State Park – Owyhee Desert, Idaho
Hoffmaster State Park – Muskegon, Michigan
Silver Lake State Park — a sand dune area that allows off-road vehicle use, located near Mears, Michigan.
Carcross Desert near Carcross, Yukon
Grey Cloud Dunes – Minnesota
Nogahabara Sand Dunes in western Alaska.
South America
Lençóis Maranhenses National Park in the state of Maranhão, Brazil
Ilha Comprida Environmental Protection Area in the state of São Paulo, Brazil
Joaquina Beach Dunes in Florianópolis, in the state of Santa Catarina, Brazil
Jericoacoara National Park, in the state of Ceará, Brazil
Genipabu in Natal, Brazil
Medanos de Coro National Park near the town of Coro, in Falcón State, Venezuela
Duna Federico Kirbus in Catamarca Province, Argentina
Villa Gesell in Buenos Aires Province, Argentina
Cerro Blanco in Nazca Province, Peru
Huacachina in Ica Region, Peru
Cerro Medanoso in Atacama Region, Chile
Colún Beach, Valdivian Coastal Reserve in Chile
An urban dune in Iquique, Chile
Oceania
Simpson Desert sand dunes in the Northern Territory, Queensland and South Australia, Australia
Coorong National Park in South Australia, Australia
Lincoln National Park in South Australia, Australia
Coffin Bay National Park in South Australia, Australia
Fraser Island in Queensland, Australia
Cronulla sand dunes in New South Wales, Australia
Stockton Beach in New South Wales, Australia
Lancelin sand dunes in Western Australia, Australia
Te Paki sand dunes near Cape Reinga, New Zealand
Sand dune systems
(coastal dunes featuring succession)
Athabasca Sand Dunes Provincial Park, Alberta and Saskatchewan
Ashdod Sand Dune, Israel
Bamburgh Dunes, Northumberland, England
Bradley Beach, New Jersey
Circeo National Park, a Mediterranean dune area on the southwest coast of the Lazio region of Italy
Cronulla sand dunes, NSW, Australia
Crymlyn Burrows, Wales
Dawlish Warren, Devon, England
Fraser Island, Queensland Australia, largest sand island in the world
Indiana Dunes National Park, Indiana
Kenfig Burrows, Wales
Margam burrows, Wales
Murlough Sand Dunes, Newcastle, Co Down, Northern Ireland
Morfa Harlech sand dunes, Gwynedd, Wales
Newborough Warren, North Wales
Oregon Dunes National Recreation Area, near North Bend, Oregon
Penhale Sands, Cornwall, England
Sleeping Bear Dunes National Lakeshore, Michigan
Sandy Island Beach State Park, Richland, New York
Studland, Dorset, England
Thy National Park, North Denmark Region, Denmark
Winterton, Norfolk, England
Woolacombe, Devon, England
Ynyslas Sand Dunes, Wales
Extraterrestrial dunes
Dunes can likely be found in any environment where there is a substantial atmosphere, winds, and dust to be blown. Dunes are common on Mars and in the equatorial regions of Titan.
Titan's dunes include large expanses with modal lengths of about 20–30 km. The regions are not topographically confined, resembling sand seas. These dunes are interpreted to be longitudinal dunes whose crests are oriented parallel to the dominant wind direction, which generally indicates west-to-east wind flow. The sand is likely composed of hydrocarbon particles, possibly with some water ice mixed in.
Dunes are a popular theme in science fiction, featuring in depictions of dry desert planets as early as the 1956 film Forbidden Planet and Frank Herbert's 1965 novel Dune. The environment of the desert planet Arrakis (also known as Dune) in the Dune franchise in turn inspired the Star Wars franchise, which features dunes prominently on fictional planets such as Tatooine, Geonosis, and Jakku.
| Physical sciences | Aeolian landforms | null |
7903 | https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman%20key%20exchange | Diffie–Hellman key exchange | Diffie–Hellman (DH) key exchange is a mathematical method of securely generating a symmetric cryptographic key over a public channel and was one of the first public-key protocols as conceived by Ralph Merkle and named after Whitfield Diffie and Martin Hellman. DH is one of the earliest practical examples of public key exchange implemented within the field of cryptography. Published in 1976 by Diffie and Hellman, this is the earliest publicly known work that proposed the idea of a private key and a corresponding public key.
Traditionally, secure encrypted communication between two parties required that they first exchange keys by some secure physical means, such as paper key lists transported by a trusted courier. The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure channel. This key can then be used to encrypt subsequent communications using a symmetric-key cipher.
Diffie–Hellman is used to secure a variety of Internet services. However, research published in October 2015 suggests that the parameters in use for many DH Internet applications at that time are not strong enough to prevent compromise by very well-funded attackers, such as the security services of some countries.
The scheme was published by Whitfield Diffie and Martin Hellman in 1976, but in 1997 it was revealed that James H. Ellis, Clifford Cocks, and Malcolm J. Williamson of GCHQ, the British signals intelligence agency, had previously shown in 1969 how public-key cryptography could be achieved.
Although Diffie–Hellman key exchange itself is a non-authenticated key-agreement protocol, it provides the basis for a variety of authenticated protocols, and is used to provide forward secrecy in Transport Layer Security's ephemeral modes (referred to as EDH or DHE depending on the cipher suite).
The method was followed shortly afterwards by RSA, an implementation of public-key cryptography using asymmetric algorithms.
Expired US patent 4200770 from 1977 describes the now public-domain algorithm. It credits Hellman, Diffie, and Merkle as inventors.
Name
In 2006, Hellman suggested the algorithm be called Diffie–Hellman–Merkle key exchange in recognition of Ralph Merkle's contribution to the invention of public-key cryptography (Hellman, 2006), writing:
Description
General overview
Diffie–Hellman key exchange establishes a shared secret between two parties that can be used for secret communication for exchanging data over a public network. An analogy illustrates the concept of public key exchange by using colors instead of very large numbers:
The process begins by having the two parties, Alice and Bob, publicly agree on an arbitrary starting color that does not need to be kept secret. In this example, the color is yellow. Each person also selects a secret color that they keep to themselves – in this case, red and cyan. The crucial part of the process is that Alice and Bob each mix their own secret color together with their mutually shared color, resulting in orange-tan and light-blue mixtures respectively, and then publicly exchange the two mixed colors. Finally, each of them mixes the color they received from the partner with their own private color. The result is a final color mixture (yellow-brown in this case) that is identical to their partner's final color mixture.
If a third party listened to the exchange, they would only know the common color (yellow) and the first mixed colors (orange-tan and light-blue), but it would be very hard for them to find out the final secret color (yellow-brown). Bringing the analogy back to a real-life exchange using large numbers rather than colors, this determination is computationally expensive. It is impossible to compute in a practical amount of time even for modern supercomputers.
Cryptographic explanation
The simplest and the original implementation of the protocol, later formalized as Finite Field Diffie–Hellman in RFC 7919, uses the multiplicative group of integers modulo p, where p is prime, and g is a primitive root modulo p. These two values are chosen in this way to ensure that the resulting shared secret can take on any value from 1 to p − 1. Here is an example of the protocol, using small numbers; the values a, b, and s are secret, while all other values are public.
Alice and Bob publicly agree to use a modulus p = 23 and base g = 5 (which is a primitive root modulo 23).
Alice chooses a secret integer a = 4, then sends Bob A = g^a mod p
A = 5^4 mod 23 = 4 (in this example both A and a have the same value 4, but this is usually not the case)
Bob chooses a secret integer b = 3, then sends Alice B = g^b mod p
B = 5^3 mod 23 = 10
Alice computes s = B^a mod p
s = 10^4 mod 23 = 18
Bob computes s = A^b mod p
s = 4^3 mod 23 = 18
Alice and Bob now share a secret (the number 18).
Both Alice and Bob have arrived at the same value because, under mod p,
g^(ab) mod p = g^(ba) mod p
More specifically,
(g^a mod p)^b mod p = (g^b mod p)^a mod p
Only a and b are kept secret. All the other values – p, g, g^a mod p, and g^b mod p – are sent in the clear. The strength of the scheme comes from the fact that computing g^(ab) mod p = g^(ba) mod p from the knowledge of p, g, g^a mod p, and g^b mod p alone takes an extremely long time by any known algorithm. Such a function that is easy to compute but hard to invert is called a one-way function. Once Alice and Bob compute the shared secret they can use it as an encryption key, known only to them, for sending messages across the same open communications channel.
Of course, much larger values of a, b, and p would be needed to make this example secure, since there are only 23 possible results of n mod 23. However, if p is a prime of at least 600 digits, then even the fastest modern computers using the fastest known algorithm cannot find a given only g, p and g^a mod p. Such a problem is called the discrete logarithm problem. The computation of g^a mod p is known as modular exponentiation and can be done efficiently even for large numbers.
Note that g need not be large at all, and in practice is usually a small integer (like 2, 3, ...).
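The worked example above can be reproduced with Python's built-in three-argument pow, which performs modular exponentiation efficiently. This is only a toy sketch using the small numbers from the example (the variable names s_alice and s_bob are illustrative), not a secure parameter choice:

```python
p, g = 23, 5          # public: prime modulus and primitive root

a = 4                 # Alice's secret integer (randomly chosen in practice)
b = 3                 # Bob's secret integer

A = pow(g, a, p)      # Alice sends A = 4
B = pow(g, b, p)      # Bob sends B = 10

s_alice = pow(B, a, p)   # Alice computes 10^4 mod 23 = 18
s_bob = pow(A, b, p)     # Bob computes 4^3 mod 23 = 18
assert s_alice == s_bob == 18
```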
Secrecy chart
The chart below depicts who knows what; again, a and b are secret while all other values are public. Here Eve is an eavesdropper – she watches what is sent between Alice and Bob, but she does not alter the contents of their communications.
g, public (primitive root) base, known to Alice, Bob, and Eve. g = 5
p, public (prime) modulus, known to Alice, Bob, and Eve. p = 23
a, Alice's private key, known only to Alice. a = 6
b, Bob's private key, known only to Bob. b = 15
A, Alice's public key, known to Alice, Bob, and Eve. A = g^a mod p = 8
B, Bob's public key, known to Alice, Bob, and Eve. B = g^b mod p = 19
s, the shared secret key, known to Alice and Bob, but not to Eve. s = B^a mod p = A^b mod p = 2
Note that it is not helpful for Eve to compute AB, which equals g^(a+b) mod p.
Note: It should be difficult for Alice to solve for Bob's private key or for Bob to solve for Alice's private key. If it is not difficult for Alice to solve for Bob's private key (or vice versa), then an eavesdropper, Eve, may simply substitute her own private / public key pair, plug Bob's public key into her private key, produce a fake shared secret key, and solve for Bob's private key (and use that to solve for the shared secret key). Eve may attempt to choose a public / private key pair that will make it easy for her to solve for Bob's private key.
Generalization to finite cyclic groups
Here is a more general description of the protocol:
Alice and Bob agree on a natural number n and a generating element g in the finite cyclic group G of order n. (This is usually done long before the rest of the protocol; g and n are assumed to be known by all attackers.) The group G is written multiplicatively.
Alice picks a random natural number a with 1 < a < n, and sends the element g^a of G to Bob.
Bob picks a random natural number b with 1 < b < n, and sends the element g^b of G to Alice.
Alice computes the element (g^b)^a = g^(ba) of G.
Bob computes the element (g^a)^b = g^(ab) of G.
Both Alice and Bob are now in possession of the group element g^(ab) = g^(ba), which can serve as the shared secret key. The group G satisfies the requisite condition for secure communication as long as there is no efficient algorithm for determining g^(ab) given g, g^a, and g^b.
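A sketch of the generalized protocol in Python, with a hypothetical ModGroup wrapper standing in for any finite cyclic group written multiplicatively (here instantiated with the toy parameters from the earlier example):

```python
import secrets
from dataclasses import dataclass

@dataclass(frozen=True)
class ModGroup:
    """A finite cyclic group written multiplicatively: powers of g modulo p."""
    p: int   # modulus
    g: int   # generating element
    n: int   # order of g

    def power(self, x: int, e: int) -> int:
        return pow(x, e, self.p)

def keypair(group: ModGroup):
    """Pick a random exponent with 1 < a < n and return (a, g^a)."""
    a = secrets.randbelow(group.n - 2) + 2
    return a, group.power(group.g, a)

G = ModGroup(p=23, g=5, n=22)   # toy parameters; 5 has order 22 modulo 23
a, A = keypair(G)               # Alice
b, B = keypair(G)               # Bob
assert G.power(B, a) == G.power(A, b)   # both sides hold g^(ab)
```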
For example, the elliptic curve Diffie–Hellman protocol is a variant that represents an element of G as a point on an elliptic curve instead of as an integer modulo n. Variants using hyperelliptic curves have also been proposed. The supersingular isogeny key exchange is a Diffie–Hellman variant that was designed to be secure against quantum computers, but it was broken in July 2022.
Ephemeral and/or static keys
The keys used can be ephemeral or static (long-term), or even mixed (so-called semi-static DH). These variants have different properties and hence different use cases. An overview of many variants, with some discussion, can for example be found in NIST SP 800-56A. A basic list:
ephemeral, ephemeral: Usually used for key agreement. Provides forward secrecy, but no authenticity.
static, static: Would generate a long-term shared secret. Does not provide forward secrecy, but implicit authenticity. Since the keys are static, it would, for example, not protect against replay attacks.
ephemeral, static: For example, used in ElGamal encryption or Integrated Encryption Scheme (IES). If used in key agreement it could provide implicit one-sided authenticity (the ephemeral side could verify the authenticity of the static side). No forward secrecy is provided.
It is possible to use ephemeral and static keys in one key agreement to provide more security as for example shown in NIST SP 800-56A, but it is also possible to combine those in a single DH key exchange, which is then called triple DH (3-DH).
Triple Diffie–Hellman (3-DH)
In 1997 a kind of triple DH was proposed by Simon Blake-Wilson, Don Johnson, and Alfred Menezes, which was improved by C. Kudla and K. G. Paterson in 2005 and shown to be secure.
The long term secret keys of Alice and Bob are denoted by a and b respectively, with public keys A and B, as well as the ephemeral key pairs (x, X) and (y, Y). The protocol is:
The long term public keys need to be transferred somehow. That can be done beforehand in a separate, trusted channel, or the public keys can be encrypted using some partial key agreement to preserve anonymity. For more such details, as well as other improvements like side channel protection or explicit key confirmation, early messages, and additional password authentication, see e.g. US patent "Advanced modular handshake for key agreement and optional authentication".
Extended Triple Diffie–Hellman (X3DH)
X3DH was initially proposed as part of the Double Ratchet Algorithm used in the Signal Protocol. The protocol offers forward secrecy and cryptographic deniability. It operates on an elliptic curve.
The protocol uses five public keys. Alice has an identity key IKA and an ephemeral key EKA. Bob has an identity key IKB, a signed prekey SPKB, and a one-time prekey OPKB. Bob first publishes his three keys to a server, from which Alice downloads them and verifies the signature on the signed prekey. Alice then initiates the exchange with Bob. The OPK is optional.
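A sketch of the X3DH key-combination step using the X25519 primitives from the Python cryptography package. The four DH computations and their concatenation follow the published X3DH design; key distribution, the prekey signature, and encoding details are omitted, and the variable names simply mirror the prose above.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# In reality Bob publishes his public keys (and a signature over SPK_B)
# to a server; here all keys are generated locally for illustration.
IK_A, EK_A = X25519PrivateKey.generate(), X25519PrivateKey.generate()
IK_B, SPK_B, OPK_B = (X25519PrivateKey.generate() for _ in range(3))

# Alice's side: four DH results mixing identity and ephemeral keys
# with Bob's identity key, signed prekey, and one-time prekey.
dh1 = IK_A.exchange(SPK_B.public_key())
dh2 = EK_A.exchange(IK_B.public_key())
dh3 = EK_A.exchange(SPK_B.public_key())
dh4 = EK_A.exchange(OPK_B.public_key())   # omitted when no one-time prekey

shared_key = HKDF(algorithm=hashes.SHA256(), length=32,
                  salt=None, info=b"X3DH sketch").derive(dh1 + dh2 + dh3 + dh4)
```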
Operation with more than two parties
Diffie–Hellman key agreement is not limited to negotiating a key shared by only two participants. Any number of users can take part in an agreement by performing iterations of the agreement protocol and exchanging intermediate data (which does not itself need to be kept secret). For example, Alice, Bob, and Carol could participate in a Diffie–Hellman agreement as follows, with all operations taken to be modulo p:
The parties agree on the algorithm parameters p and g.
The parties generate their private keys, named a, b, and c.
Alice computes g^a and sends it to Bob.
Bob computes (g^a)^b = g^(ab) and sends it to Carol.
Carol computes (g^(ab))^c = g^(abc) and uses it as her secret.
Bob computes g^b and sends it to Carol.
Carol computes (g^b)^c = g^(bc) and sends it to Alice.
Alice computes (g^(bc))^a = g^(bca) = g^(abc) and uses it as her secret.
Carol computes g^c and sends it to Alice.
Alice computes (g^c)^a = g^(ca) and sends it to Bob.
Bob computes (g^(ca))^b = g^(cab) = g^(abc) and uses it as his secret.
An eavesdropper has been able to see g^a, g^b, g^c, g^(ab), g^(bc), and g^(ca), but cannot use any combination of these to efficiently reproduce g^(abc).
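The three-party run above can be sketched in Python (toy parameters again); each party applies its own secret exponent last, and all three arrive at the same value g^(abc) mod p:

```python
import secrets

p, g = 23, 5
a, b, c = (secrets.randbelow(p - 2) + 1 for _ in range(3))

# Round 1: each party exponentiates the bare generator.
ga, gb, gc = pow(g, a, p), pow(g, b, p), pow(g, c, p)

# Round 2: each party exponentiates the intermediate value it received.
gab = pow(ga, b, p)   # Bob, from Alice's g^a
gbc = pow(gb, c, p)   # Carol, from Bob's g^b
gca = pow(gc, a, p)   # Alice, from Carol's g^c

# Final step: each party applies its own secret exponent last.
s_carol = pow(gab, c, p)
s_alice = pow(gbc, a, p)
s_bob = pow(gca, b, p)
assert s_alice == s_bob == s_carol   # all equal g^(abc) mod p
```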
To extend this mechanism to larger groups, two basic principles must be followed:
Starting with an "empty" key consisting only of g, the secret is made by raising the current value to every participant's private exponent once, in any order (the first such exponentiation yields the participant's own public key).
Any intermediate value (having up to N−1 exponents applied, where N is the number of participants in the group) may be revealed publicly, but the final value (having had all N exponents applied) constitutes the shared secret and hence must never be revealed publicly. Thus, each user must obtain their copy of the secret by applying their own private key last (otherwise there would be no way for the last contributor to communicate the final key to its recipient, as that last contributor would have turned the key into the very secret the group wished to protect).
These principles leave open various options for choosing in which order participants contribute to keys. The simplest and most obvious solution is to arrange the N participants in a circle and have N keys rotate around the circle, until eventually every key has been contributed to by all N participants (ending with its owner) and each participant has contributed to N keys (ending with their own). However, this requires that every participant perform N modular exponentiations.
By choosing a more desirable order, and relying on the fact that keys can be duplicated, it is possible to reduce the number of modular exponentiations performed by each participant to log2(N) + 1 using a divide-and-conquer-style approach, given here for eight participants:
Participants A, B, C, and D each perform one exponentiation, yielding g^(abcd); this value is sent to E, F, G, and H. In return, participants A, B, C, and D receive g^(efgh).
Participants A and B each perform one exponentiation, yielding g^(efghab), which they send to C and D, while C and D do the same, yielding g^(efghcd), which they send to A and B.
Participant A performs an exponentiation, yielding g^(efghcda), which it sends to B; similarly, B sends g^(efghcdb) to A. C and D do similarly.
Participant A performs one final exponentiation, yielding the secret g^(efghcdba) = g^(abcdefgh), while B does the same to get g^(efghcdab) = g^(abcdefgh); again, C and D do similarly.
Participants E through H simultaneously perform the same operations using g^(abcd) as their starting point.
Once this operation has been completed all participants will possess the secret g^(abcdefgh), but each participant will have performed only four modular exponentiations, rather than the eight implied by a simple circular arrangement.
Security and practical considerations
The protocol is considered secure against eavesdroppers if G and g are chosen properly. In particular, the order of the group G must be large, particularly if the same group is used for large amounts of traffic. The eavesdropper has to solve the Diffie–Hellman problem to obtain gab. This is currently considered difficult for groups whose order is large enough. An efficient algorithm to solve the discrete logarithm problem would make it easy to compute a or b and solve the Diffie–Hellman problem, making this and many other public key cryptosystems insecure. Fields of small characteristic may be less secure.
The order of G should have a large prime factor to prevent use of the Pohlig–Hellman algorithm to obtain a or b. For this reason, a Sophie Germain prime q is sometimes used to calculate p = 2q + 1, called a safe prime, since the order of G is then only divisible by 2 and q. Sometimes g is chosen to generate the order-q subgroup of G, rather than G itself, so that the Legendre symbol of g^a never reveals the low-order bit of a. A protocol using such a choice is for example IKEv2.
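As a sketch of these parameter checks (using sympy's primality test, an assumed dependency), one can verify that p is a safe prime and pick a generator of the order-q subgroup by squaring a random element, since the quadratic residues modulo a safe prime form exactly that subgroup:

```python
import secrets
from sympy import isprime

def is_safe_prime(p: int) -> bool:
    """True if p and q = (p - 1) / 2 are both prime."""
    return isprime(p) and isprime((p - 1) // 2)

def order_q_generator(p: int) -> int:
    """Return a generator of the subgroup of prime order q = (p - 1) / 2."""
    while True:
        h = secrets.randbelow(p - 3) + 2   # random h in [2, p - 2]
        g = pow(h, 2, p)                   # squaring lands in the subgroup
        if g != 1:
            return g

p = 23                      # toy safe prime: 23 = 2 * 11 + 1
assert is_safe_prime(p)
g = order_q_generator(p)
assert pow(g, 11, p) == 1   # g has order q = 11
```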
The generator g is often a small integer such as 2. Because of the random self-reducibility of the discrete logarithm problem, a small g is equally as secure as any other generator of the same group.
If Alice and Bob use random number generators whose outputs are not completely random and can be predicted to some extent, then it is much easier to eavesdrop.
In the original description, the Diffie–Hellman exchange by itself does not provide authentication of the communicating parties and can be vulnerable to a man-in-the-middle attack.
Mallory (an active attacker executing the man-in-the-middle attack) may establish two distinct key exchanges, one with Alice and the other with Bob, effectively masquerading as Alice to Bob, and vice versa, allowing her to decrypt, then re-encrypt, the messages passed between them. Note that Mallory must be in the middle from the beginning and continue to be so, actively decrypting and re-encrypting messages every time Alice and Bob communicate. If she arrives after the keys have been generated and the encrypted conversation between Alice and Bob has already begun, the attack cannot succeed. If she is ever absent, her previous presence is then revealed to Alice and Bob. They will know that all of their private conversations had been intercepted and decoded by someone in the channel. In most cases it will not help them get Mallory's private key, even if she used the same key for both exchanges.
A method to authenticate the communicating parties to each other is generally needed to prevent this type of attack. Variants of Diffie–Hellman, such as STS protocol, may be used instead to avoid these types of attacks.
Denial-of-service attack
A CVE released in 2021 (CVE-2002-20001) disclosed a denial-of-service (DoS) attack against protocol variants that use ephemeral keys, called the D(HE)at attack. The attack exploits the fact that the Diffie–Hellman key exchange allows attackers to send arbitrary numbers that are not actually public keys, triggering expensive modular exponentiation calculations on the victim's side. Further CVEs disclosed that Diffie–Hellman key exchange implementations may use long private exponents (CVE-2022-40735), which arguably makes modular exponentiation calculations unnecessarily expensive, or may unnecessarily check the peer's public key (CVE-2024-41996), which has a similar resource requirement to key calculation using a long exponent. An attacker can exploit both vulnerabilities together.
Practical attacks on Internet traffic
The number field sieve algorithm, which is generally the most effective in solving the discrete logarithm problem, consists of four computational steps. The first three steps only depend on the order of the group G, not on the specific number whose discrete log is desired. It turns out that much Internet traffic uses one of a handful of groups that are of order 1024 bits or less. By precomputing the first three steps of the number field sieve for the most common groups, an attacker need only carry out the last step, which is much less computationally expensive than the first three steps, to obtain a specific logarithm. The Logjam attack used this vulnerability to compromise a variety of Internet services that allowed the use of groups whose order was a 512-bit prime number, so-called export grade. The authors needed several thousand CPU cores for a week to precompute data for a single 512-bit prime. Once that was done, individual logarithms could be solved in about a minute using two 18-core Intel Xeon CPUs.
As estimated by the authors behind the Logjam attack, the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would cost on the order of $100 million, well within the budget of a large national intelligence agency such as the U.S. National Security Agency (NSA). The Logjam authors speculate that precomputation against widely reused 1024-bit DH primes is behind claims in leaked NSA documents that NSA is able to break much of current cryptography.
To avoid these vulnerabilities, the Logjam authors recommend use of elliptic curve cryptography, for which no similar attack is known. Failing that, they recommend that the order, p, of the Diffie–Hellman group should be at least 2048 bits. They estimate that the pre-computation required for a 2048-bit prime is 10^9 times more difficult than for 1024-bit primes.
Security against quantum computers
Quantum computers can break public-key cryptographic schemes, such as RSA, finite-field DH and elliptic-curve DH key-exchange protocols, using Shor's algorithm for solving the factoring problem, the discrete logarithm problem, and the period-finding problem. A post-quantum variant of the Diffie–Hellman algorithm was proposed in 2023, and relies on a combination of the quantum-resistant CRYSTALS-Kyber protocol and the older elliptic-curve X25519 protocol. A quantum Diffie–Hellman key-exchange protocol, which relies on a quantum one-way function and whose security rests on fundamental principles of quantum mechanics, has also been proposed in the literature.
Other uses
Encryption
Public key encryption schemes based on the Diffie–Hellman key exchange have been proposed. The first such scheme is the ElGamal encryption. A more modern variant is the Integrated Encryption Scheme.
Forward secrecy
Protocols that achieve forward secrecy generate new key pairs for each session and discard them at the end of the session. The Diffie–Hellman key exchange is a frequent choice for such protocols, because of its fast key generation.
Password-authenticated key agreement
When Alice and Bob share a password, they may use a password-authenticated key agreement (PAKE) form of Diffie–Hellman to prevent man-in-the-middle attacks. One simple scheme is to compare the hash of the shared secret s concatenated with the password, calculated independently on both ends of the channel. A feature of these schemes is that an attacker can only test one specific password on each iteration with the other party, and so the system provides good security with relatively weak passwords. This approach is described in ITU-T Recommendation X.1035, which is used by the G.hn home networking standard.
An example of such a protocol is the Secure Remote Password protocol.
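A minimal sketch of the comparison step described above, assuming SHA-256 as the hash: each side independently hashes the shared secret together with the password, and the two digests are compared. Real PAKE protocols such as SRP are substantially more involved; the function and values here are illustrative only.

```python
# Illustrative only: each side hashes the shared secret s together with the
# password and the digests are compared. This sketches the comparison idea,
# not a complete password-authenticated key agreement protocol.
import hashlib
import hmac

def confirmation_tag(shared_secret: int, password: str) -> bytes:
    data = shared_secret.to_bytes(256, "big") + password.encode()
    return hashlib.sha256(data).digest()

# Both ends compute the tag independently; compare in constant time.
tag_alice = confirmation_tag(12345, "correct horse")
tag_bob = confirmation_tag(12345, "correct horse")
print(hmac.compare_digest(tag_alice, tag_bob))  # True only if both match
```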
Public key
It is also possible to use Diffie–Hellman as part of a public key infrastructure, allowing Bob to encrypt a message so that only Alice will be able to decrypt it, with no prior communication between them other than Bob having trusted knowledge of Alice's public key. Alice's public key is (g^a mod p, g, p). To send her a message, Bob chooses a random b and then sends Alice g^b mod p (unencrypted) together with the message encrypted with the symmetric key (g^a)^b mod p. Only Alice can determine the symmetric key and hence decrypt the message, because only she has a (the private key). A pre-shared public key also prevents man-in-the-middle attacks.
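A rough Python sketch of this hybrid pattern follows, with a toy XOR keystream standing in for a real symmetric cipher and deliberately small parameters; every name and constant here is an illustrative assumption, not a usable design.

```python
# An illustrative sketch of the hybrid scheme just described. The XOR
# "cipher" and the toy modulus are placeholders, not a usable design.
import hashlib
import secrets

p, g = 0xFFFFFFFB, 5        # toy parameters; real use needs a large safe prime

def keystream(shared: int, n: int) -> bytes:
    return hashlib.shake_256(shared.to_bytes(16, "big")).digest(n)

# Alice publishes (g^a mod p, g, p) once.
a = secrets.randbelow(p - 2) + 1
alice_pub = pow(g, a, p)

# Bob encrypts: fresh b, sends (g^b mod p, ciphertext).
msg = b"hello"
b = secrets.randbelow(p - 2) + 1
eph = pow(g, b, p)
ct = bytes(x ^ y for x, y in zip(msg, keystream(pow(alice_pub, b, p), len(msg))))

# Alice decrypts with her private a, recovering the same symmetric key.
pt = bytes(x ^ y for x, y in zip(ct, keystream(pow(eph, a, p), len(ct))))
assert pt == msg
```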
In practice, Diffie–Hellman is not used in this way, with RSA being the dominant public key algorithm. This is largely for historical and commercial reasons, namely that RSA Security created a certificate authority for key signing that became Verisign. Diffie–Hellman, as elaborated above, cannot directly be used to sign certificates. However, the ElGamal and DSA signature algorithms are mathematically related to it, as well as MQV, STS and the IKE component of the IPsec protocol suite for securing Internet Protocol communications.
| Technology | Computer security | null |
7921 | https://en.wikipedia.org/wiki/Derivative | Derivative | In mathematics, the derivative is a fundamental tool that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation.
There are multiple different notations for differentiation, two of the most commonly used being Leibniz notation and prime notation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the object's acceleration, how the velocity changes as time advances.
Derivatives can be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector.
Definition
As a limit
A function f of a real variable is differentiable at a point a of its domain, if its domain contains an open interval containing a, and the limit $L = \lim_{h\to 0}\frac{f(a+h)-f(a)}{h}$
exists. This means that, for every positive real number ε, there exists a positive real number δ such that, for every h such that |h| < δ and h ≠ 0, then f(a + h) is defined, and $\left|L - \frac{f(a+h)-f(a)}{h}\right| < \varepsilon$,
where the vertical bars denote the absolute value. This is an example of the (ε, δ)-definition of limit.
If the function f is differentiable at a, that is if the limit L exists, then this limit is called the derivative of f at a. Multiple notations for the derivative exist. The derivative of f at a can be denoted f′(a), read as "f prime of a"; or it can be denoted df/dx(a), read as "the derivative of f with respect to x at a" or "df by (or over) dx at a". See below. If f is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point x to the value of the derivative of f at x. This function is written f′ and is called the derivative function or the derivative of f. The function f sometimes has a derivative at most, but not all, points of its domain. The function whose value at a equals f′(a) whenever f′(a) is defined and elsewhere is undefined is also called the derivative of f. It is still a function, but its domain may be smaller than the domain of f.
For example, let f be the squaring function: f(x) = x². Then the quotient in the definition of the derivative is
$\frac{f(a+h)-f(a)}{h} = \frac{(a+h)^2 - a^2}{h} = \frac{a^2 + 2ah + h^2 - a^2}{h} = 2a + h.$
The division in the last step is valid as long as h ≠ 0. The closer h is to 0, the closer this expression becomes to the value 2a. The limit exists, and for every input a the limit is 2a. So, the derivative of the squaring function is the doubling function: f′(x) = 2x.
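A short numerical check of this worked example (a minimal sketch; names are illustrative): for the squaring function, the difference quotient approaches 2a as h shrinks.

```python
# Numerical check: for f(x) = x^2 the difference quotient
# (f(a+h) - f(a)) / h tends to 2a as h shrinks.
def difference_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

f = lambda x: x * x
a = 3.0
for h in (1.0, 0.1, 0.001, 1e-6):
    print(h, difference_quotient(f, a, h))   # tends to 2a = 6.0
```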
The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function f, specifically the points (a, f(a)) and (a + h, f(a + h)). As h is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to the graph of f at a. In other words, the derivative is the slope of the tangent.
Using infinitesimals
One way to think of the derivative is as the ratio of an infinitesimal change in the output of the function to an infinitesimal change in its input. In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required. The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contain numbers greater than anything of the form $1 + 1 + \cdots + 1$ for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. This provides a way to define the basic concepts of calculus such as the derivative and integral in terms of infinitesimals, thereby giving a precise meaning to the d in the Leibniz notation. Thus, the derivative of f(x) becomes $f'(x) = \operatorname{st}\left(\frac{f(x+dx)-f(x)}{dx}\right)$ for an arbitrary infinitesimal dx, where st denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Taking the squaring function f(x) = x² as an example again,
$f'(x) = \operatorname{st}\left(\frac{(x+dx)^2 - x^2}{dx}\right) = \operatorname{st}(2x + dx) = 2x.$
Continuity and differentiability
If f is differentiable at a, then f must also be continuous at a. As an example, choose a point a and let f be the step function that returns the value 1 for all x less than a, and returns a different value 10 for all x greater than or equal to a. The function f cannot have a derivative at a. If h is negative, then a + h is on the low part of the step, so the secant line from a to a + h is very steep; as h tends to zero, the slope tends to infinity. If h is positive, then a + h is on the high part of the step, so the secant line from a to a + h has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by f(x) = |x| is continuous at x = 0, but it is not differentiable there. If h is positive, then the slope of the secant line from 0 to h is one; if h is negative, then the slope of the secant line from 0 to h is −1. This can be seen graphically as a "kink" or a "cusp" in the graph at x = 0. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: For instance, the function given by f(x) = x^{1/3} is not differentiable at x = 0. In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative.
Most functions that occur in practice have derivatives at all points or almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. Under mild conditions (for example, if the function is a monotone or a Lipschitz function), this is true. However, in 1872, Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous functions have a derivative at even one point.
Notation
One common way of writing the derivative of a function is Leibniz notation, introduced by Gottfried Wilhelm Leibniz in 1675, which denotes a derivative as the quotient of two differentials, such as dy and dx. It is still commonly used when the equation y = f(x) is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by dy/dx, read as "the derivative of y with respect to x". This derivative can alternately be treated as the application of a differential operator to a function, $\frac{dy}{dx} = \frac{d}{dx}f(x)$. Higher derivatives are expressed using the notation $\frac{d^n y}{dx^n}$ for the n-th derivative of y = f(x). These are abbreviations for multiple applications of the derivative operator; for example, $\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{d}{dx}f(x)\right)$. Unlike some alternatives, Leibniz notation involves explicit specification of the variable for differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if u = g(x) and y = f(g(x)) then $\frac{dy}{dx} = \frac{dy}{du}\cdot\frac{du}{dx}$.
Another common notation for differentiation is by using the prime mark in the symbol of a function f(x). This is known as prime notation, due to Joseph-Louis Lagrange. The first derivative is written as f′(x), read as "f prime of x", or y′, read as "y prime". Similarly, the second and the third derivatives can be written as f′′ and f′′′, respectively. For denoting the number of higher derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses, such as f^{iv} or f^{(4)}. The latter notation generalizes to yield the notation f^{(n)} for the nth derivative of f.
In Newton's notation or the dot notation, a dot is placed over a symbol to represent a time derivative. If y is a function of t, then the first and second derivatives can be written as ẏ and ÿ, respectively. This notation is used exclusively for derivatives with respect to time or arc length. It is typically used in differential equations in physics and differential geometry. However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables.
Another notation is D-notation, which represents the differential operator by the symbol D. The first derivative is written Df and higher derivatives are written with a superscript, so the n-th derivative is D^n f. This notation is sometimes called Euler notation, although it seems that Leonhard Euler did not use it, and the notation was introduced by Louis François Antoine Arbogast. To indicate a partial derivative, the variable differentiated by is indicated with a subscript; for example, given the function u = f(x, y), its partial derivative with respect to x can be written D_x u or D_x f(x, y). Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g. $D_{xy}f$ and $D_{x^2}f$.
Rules of computation
In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as differentiation.
Rules for basic functions
The following are the rules for the derivatives of the most common basic functions. Here, a is a real number, and e is the base of the natural logarithm, approximately 2.71828.
Derivatives of powers: $\frac{d}{dx}x^a = ax^{a-1}$
Functions of exponential, natural logarithm, and logarithm with general base:
$\frac{d}{dx}e^x = e^x$
$\frac{d}{dx}a^x = a^x \ln a$, for $a > 0$
$\frac{d}{dx}\ln x = \frac{1}{x}$, for $x > 0$
$\frac{d}{dx}\log_a x = \frac{1}{x \ln a}$, for $x, a > 0$
Trigonometric functions:
$\frac{d}{dx}\sin(x) = \cos(x)$
$\frac{d}{dx}\cos(x) = -\sin(x)$
$\frac{d}{dx}\tan(x) = \sec^2(x) = \frac{1}{\cos^2(x)}$
Inverse trigonometric functions:
$\frac{d}{dx}\arcsin(x) = \frac{1}{\sqrt{1-x^2}}$, for $-1 < x < 1$
$\frac{d}{dx}\arccos(x) = -\frac{1}{\sqrt{1-x^2}}$, for $-1 < x < 1$
$\frac{d}{dx}\arctan(x) = \frac{1}{1+x^2}$
Rules for combined functions
Given functions f and g, the following are some of the most basic rules for deducing the derivative of functions from derivatives of basic functions.
Constant rule: if f is constant, then for all x, f′(x) = 0.
Sum rule: $(\alpha f + \beta g)' = \alpha f' + \beta g'$ for all functions f and g and all real numbers α and β.
Product rule: $(fg)' = f'g + fg'$ for all functions f and g. As a special case, this rule includes the fact $(\alpha f)' = \alpha f'$ whenever α is a constant, because $\alpha' f = 0 \cdot f = 0$ by the constant rule.
Quotient rule: $\left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^2}$ for all functions f and g at all inputs where g ≠ 0.
Chain rule for composite functions: If $f(x) = h(g(x))$, then $f'(x) = h'(g(x)) \cdot g'(x)$.
Computation example
The derivative of the function given by $f(x) = x^4 + \sin(x^2) - \ln(x)e^x + 7$ is
$f'(x) = 4x^3 + 2x\cos(x^2) - \frac{1}{x}e^x - \ln(x)e^x.$
Here the second term was computed using the chain rule and the third term using the product rule. The known derivatives of the elementary functions $x^2$, $x^4$, $\sin(x)$, $\ln(x)$, and $e^x$, as well as the constant 7, were also used.
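The same computation can be checked symbolically, assuming the SymPy library is available; sympy.diff applies the sum, product, and chain rules automatically.

```python
# Symbolic check of the worked example above, assuming SymPy is available.
import sympy

x = sympy.symbols("x", positive=True)
f = x**4 + sympy.sin(x**2) - sympy.log(x) * sympy.exp(x) + 7
expected = 4*x**3 + 2*x*sympy.cos(x**2) - sympy.exp(x)/x - sympy.log(x)*sympy.exp(x)
print(sympy.simplify(sympy.diff(f, x) - expected))  # prints 0
```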
Higher-order derivatives
Higher order derivatives are the result of differentiating a function repeatedly. Given that f is a differentiable function, the derivative of f is the first derivative, denoted as f′. The derivative of f′ is the second derivative, denoted as f′′, and the derivative of f′′ is the third derivative, denoted as f′′′. By continuing this process, if it exists, the nth derivative is the derivative of the (n − 1)th derivative or the derivative of order n. As has been discussed above, the nth derivative of a function f may be denoted as f^{(n)}. A function that has n successive derivatives is called n times differentiable. If the nth derivative is continuous, then the function is said to be of differentiability class C^n. A function that has infinitely many derivatives is called infinitely differentiable or smooth. Any polynomial function is infinitely differentiable; taking derivatives repeatedly will eventually result in a constant function, and all subsequent derivatives of that function are zero.
One application of higher-order derivatives is in physics. Suppose that a function represents the position of an object at time t. The first derivative of that function is the velocity of the object with respect to time, the second derivative is the acceleration of the object with respect to time, and the third derivative is the jerk.
In other dimensions
Vector-valued functions
A vector-valued function y of a real variable sends real numbers to vectors in some vector space R^n. A vector-valued function can be split up into its coordinate functions $y_1(t), y_2(t), \dots, y_n(t)$, meaning that $\mathbf y(t) = (y_1(t), \dots, y_n(t))$. This includes, for example, parametric curves in R² or R³. The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of $\mathbf y(t)$ is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is,
$\mathbf y'(t) = \lim_{h\to 0}\frac{\mathbf y(t+h) - \mathbf y(t)}{h},$
if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars. If the derivative of y exists for every value of t, then y′ is another vector-valued function.
Partial derivatives
Functions can depend upon more than one variable. A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Partial derivatives are used in vector calculus and differential geometry. As with ordinary derivatives, multiple notations exist: the partial derivative of a function f with respect to the variable x is variously denoted by $f_x$, $f'_x$, $\partial_x f$, or $\frac{\partial f}{\partial x}$,
among other possibilities. It can be thought of as the rate of change of the function in the x-direction. Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee". For example, let $f(x, y) = x^2 + xy + y^2$; then the partial derivatives of f with respect to the variables x and y are, respectively:
$\frac{\partial f}{\partial x} = 2x + y, \qquad \frac{\partial f}{\partial y} = x + 2y.$
In general, the partial derivative of a function $f(x_1, \dots, x_n)$ in the direction $x_i$ at the point $(a_1, \dots, a_n)$ is defined to be:
$\frac{\partial f}{\partial x_i}(a_1, \dots, a_n) = \lim_{h\to 0}\frac{f(a_1, \dots, a_i + h, \dots, a_n) - f(a_1, \dots, a_n)}{h}.$
This is fundamental for the study of the functions of several real variables. Let $f(x_1, \dots, x_n)$ be such a real-valued function. If all partial derivatives of f are defined at the point $(a_1, \dots, a_n)$, these partial derivatives define the vector
$\nabla f(a_1, \dots, a_n) = \left(\frac{\partial f}{\partial x_1}(a_1, \dots, a_n), \dots, \frac{\partial f}{\partial x_n}(a_1, \dots, a_n)\right),$
which is called the gradient of f at a. If f is differentiable at every point in some domain, then the gradient is a vector-valued function ∇f that maps the point $(a_1, \dots, a_n)$ to the vector $\nabla f(a_1, \dots, a_n)$. Consequently, the gradient determines a vector field.
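As an illustrative numerical sketch, the gradient can be approximated componentwise with central differences; the example below checks this against the partial derivatives of f(x, y) = x² + xy + y² computed above. The helper name is an assumption for illustration.

```python
# Numerical gradient via central differences, checked against the example
# f(x, y) = x^2 + xy + y^2 discussed above.
def gradient(f, point, h=1e-6):
    grad = []
    for i in range(len(point)):
        up = list(point); up[i] += h
        dn = list(point); dn[i] -= h
        grad.append((f(*up) - f(*dn)) / (2 * h))
    return grad

f = lambda x, y: x*x + x*y + y*y
print(gradient(f, [1.0, 2.0]))   # approximately [2*1 + 2, 1 + 2*2] = [4, 5]
```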
Directional derivatives
If f is a real-valued function on R^n, then the partial derivatives of f measure its variation in the direction of the coordinate axes. For example, if f is a function of x and y, then its partial derivatives measure the variation in f in the x and y direction. However, they do not directly measure the variation of f in any other direction, such as along the diagonal line y = x. These are measured using directional derivatives. Given a vector $\mathbf v = (v_1, \dots, v_n)$, the directional derivative of f in the direction of v at the point x is:
$D_{\mathbf v}f(\mathbf x) = \lim_{h\to 0}\frac{f(\mathbf x + h\mathbf v) - f(\mathbf x)}{h}.$
If all the partial derivatives of f exist and are continuous at x, then they determine the directional derivative of f in the direction v by the formula:
$D_{\mathbf v}f(\mathbf x) = \sum_{j=1}^{n} v_j \frac{\partial f}{\partial x_j}(\mathbf x).$
Total derivative, total differential and Jacobian matrix
When f is a function from an open subset of R^n to R^m, then the directional derivative of f in a chosen direction is the best linear approximation to f at that point and in that direction. However, when n > 1, no single directional derivative can give a complete picture of the behavior of f. The total derivative gives a complete picture by considering all directions at once. That is, for any vector v starting at a, the linear approximation formula holds:
$f(\mathbf a + \mathbf v) \approx f(\mathbf a) + f'(\mathbf a)\,\mathbf v.$
As with the single-variable derivative, $f'(\mathbf a)$ is chosen so that the error in this approximation is as small as possible. The total derivative of f at a is the unique linear transformation $f'(\mathbf a)\colon \mathbb R^n \to \mathbb R^m$ such that
$\lim_{\mathbf h\to 0}\frac{\lVert f(\mathbf a + \mathbf h) - (f(\mathbf a) + f'(\mathbf a)\,\mathbf h)\rVert}{\lVert \mathbf h\rVert} = 0.$
Here h is a vector in R^n, so the norm in the denominator is the standard length on R^n. However, $f'(\mathbf a)\mathbf h$ is a vector in R^m, and the norm in the numerator is the standard length on R^m. If v is a vector starting at a, then $f'(\mathbf a)\mathbf v$ is called the pushforward of v by f.
If the total derivative exists at a, then all the partial derivatives and directional derivatives of f exist at a, and for all vectors v, $f'(\mathbf a)\mathbf v$ is the directional derivative of f in the direction v. If f is written using coordinate functions, so that $f = (f_1, f_2, \dots, f_m)$, then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of f at a:
$f'(\mathbf a) = \operatorname{Jac}_{\mathbf a} = \left(\frac{\partial f_i}{\partial x_j}\right)_{ij}.$
Generalizations
The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point.
An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers C to C. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. If C is identified with R² by writing a complex number z as x + iy, then a differentiable function from C to C is certainly differentiable as a function from R² to R² (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear and this imposes relations between the partial derivatives called the Cauchy–Riemann equations – see holomorphic functions.
Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking such a manifold M is a space that can be approximated near each point x by a vector space called its tangent space: the prototypical example is a smooth surface in R³. The derivative (or differential) of a (differentiable) map f: M → N between manifolds, at a point x in M, is then a linear map from the tangent space of M at x to the tangent space of N at f(x). The derivative function becomes a map between the tangent bundles of M and N. This definition is used in differential geometry.
Differentiation can also be defined for maps between vector spaces, such as Banach spaces, in which case those generalizations are the Gateaux derivative and the Fréchet derivative.
One deficiency of the classical derivative is that very many functions are not differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average".
Properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology; an example is differential algebra. Here, derivations are defined on structures from abstract algebra, such as rings, ideals, and fields.
The discrete equivalent of differentiation is finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus.
The arithmetic derivative is a function defined on the integers via their prime factorization, by analogy with the product rule.
| Mathematics | Calculus and analysis | null |
7938 | https://en.wikipedia.org/wiki/Diatomic%20molecule | Diatomic molecule | Diatomic molecules are molecules composed of only two atoms, of the same or different chemical elements. If a diatomic molecule consists of two atoms of the same element, such as hydrogen (H2) or oxygen (O2), then it is said to be homonuclear. Otherwise, if a diatomic molecule consists of two different atoms, such as carbon monoxide (CO) or nitric oxide (NO), the molecule is said to be heteronuclear. The bond in a homonuclear diatomic molecule is non-polar.
The only chemical elements that form stable homonuclear diatomic molecules at standard temperature and pressure (STP) (or at typical laboratory conditions of 1 bar and 25 °C) are the gases hydrogen (H2), nitrogen (N2), oxygen (O2), fluorine (F2), and chlorine (Cl2), and the liquid bromine (Br2).
The noble gases (helium, neon, argon, krypton, xenon, and radon) are also gases at STP, but they are monatomic. The homonuclear diatomic gases and noble gases together are called "elemental gases" or "molecular gases", to distinguish them from other gases that are chemical compounds.
At slightly elevated temperatures, the halogens bromine (Br2) and iodine (I2) also form diatomic gases. All halogens have been observed as diatomic molecules, except for astatine and tennessine, which are uncertain.
Other elements form diatomic molecules when evaporated, but these diatomic species repolymerize when cooled. Heating ("cracking") elemental phosphorus gives diphosphorus (P2). Sulfur vapor is mostly disulfur (S2). Dilithium (Li2) and disodium (Na2) are known in the gas phase. Ditungsten (W2) and dimolybdenum (Mo2) form with sextuple bonds in the gas phase. Dirubidium (Rb2) is diatomic.
Heteronuclear molecules
All other diatomic molecules are chemical compounds of two different elements. Many elements can combine to form heteronuclear diatomic molecules, depending on temperature and pressure.
Examples are gases carbon monoxide (CO), nitric oxide (NO), and hydrogen chloride (HCl).
Many 1:1 binary compounds are not normally considered diatomic because they are polymeric at room temperature, but they form diatomic molecules when evaporated, for example gaseous MgO, SiO, and many others.
Occurrence
Hundreds of diatomic molecules have been identified in the environment of the Earth, in the laboratory, and in interstellar space. About 99% of the Earth's atmosphere is composed of two species of diatomic molecules: nitrogen (78%) and oxygen (21%). The natural abundance of hydrogen (H2) in the Earth's atmosphere is only of the order of parts per million, but H2 is the most abundant diatomic molecule in the universe. The interstellar medium is dominated by hydrogen atoms.
Molecular geometry
All diatomic molecules are linear and characterized by a single parameter which is the bond length or distance between the two atoms. Diatomic nitrogen has a triple bond, diatomic oxygen has a double bond, and diatomic hydrogen, fluorine, chlorine, iodine, and bromine all have single bonds.
Historical significance
Diatomic elements played an important role in the elucidation of the concepts of element, atom, and molecule in the 19th century, because some of the most common elements, such as hydrogen, oxygen, and nitrogen, occur as diatomic molecules. John Dalton's original atomic hypothesis assumed that all elements were monatomic and that the atoms in compounds would normally have the simplest atomic ratios with respect to one another. For example, Dalton assumed water's formula to be HO, giving the atomic weight of oxygen as eight times that of hydrogen, instead of the modern value of about 16. As a consequence, confusion existed regarding atomic weights and molecular formulas for about half a century.
As early as 1805, Gay-Lussac and von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen, and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the assumption of diatomic elemental molecules. However, these results were mostly ignored until 1860, partly due to the belief that atoms of one element would have no chemical affinity toward atoms of the same element, and also partly due to apparent exceptions to Avogadro's law that were not explained until later in terms of dissociating molecules.
At the 1860 Karlsruhe Congress on atomic weights, Cannizzaro resurrected Avogadro's ideas and used them to produce a consistent table of atomic weights, which mostly agree with modern values. These weights were an important prerequisite for the discovery of the periodic law by Dmitri Mendeleev and Lothar Meyer.
Excited electronic states
Diatomic molecules are normally in their lowest or ground state, which conventionally is also known as the X state. When a gas of diatomic molecules is bombarded by energetic electrons, some of the molecules may be excited to higher electronic states, as occurs, for example, in the natural aurora; high-altitude nuclear explosions; and rocket-borne electron gun experiments. Such excitation can also occur when the gas absorbs light or other electromagnetic radiation. The excited states are unstable and naturally relax back to the ground state. Over various short time scales after the excitation (typically a fraction of a second, or sometimes longer than a second if the excited state is metastable), transitions occur from higher to lower electronic states and ultimately to the ground state, and in each transition a photon is emitted. This emission is known as fluorescence. Successively higher electronic states are conventionally named A, B, C, etc. (but this convention is not always followed, and sometimes lower case letters and alphabetically out-of-sequence letters are used, as in the example given below). The excitation energy must be greater than or equal to the energy of the electronic state in order for the excitation to occur.
In quantum theory, an electronic state of a diatomic molecule is represented by the molecular term symbol
$^{2S+1}\Lambda(v),$
where S is the total electronic spin quantum number, Λ is the total electronic angular momentum quantum number along the internuclear axis, and v is the vibrational quantum number. Λ takes on values 0, 1, 2, ..., which are represented by the electronic state symbols Σ, Π, Δ, ...
For example, the following table lists the common electronic states (without vibrational quantum numbers) along with the energy of the lowest vibrational level (v = 0) of diatomic nitrogen (N2), the most abundant gas in the Earth's atmosphere.
The subscripts and superscripts after Λ give additional quantum mechanical details about the electronic state. The superscript + or − determines whether reflection in a plane containing the internuclear axis introduces a sign change in the wavefunction. The subscript g or u applies to molecules of identical atoms; when reflecting the state along a plane perpendicular to the molecular axis, states that do not change are labelled g (gerade), and states that change sign are labelled u (ungerade).
The aforementioned fluorescence occurs in distinct regions of the electromagnetic spectrum, called "emission bands": each band corresponds to a particular transition from a higher electronic state and vibrational level to a lower electronic state and vibrational level (typically, many vibrational levels are involved in an excited gas of diatomic molecules). For example, N2 A–X emission bands (a.k.a. Vegard-Kaplan bands) are present in the spectral range from 0.14 to 1.45 μm (micrometres). A given band can be spread out over several nanometers in electromagnetic wavelength space, owing to the various transitions that occur in the molecule's rotational quantum number, J. These are classified into distinct sub-band branches, depending on the change in J: the R branch corresponds to ΔJ = +1, the P branch to ΔJ = −1, and the Q branch to ΔJ = 0. Bands are spread out even further by the limited spectral resolution of the spectrometer that is used to measure the spectrum. The spectral resolution depends on the instrument's point spread function.
Energy levels
The molecular term symbol is a shorthand expression of the angular momenta that characterize the electronic quantum states of a diatomic molecule, which are also eigenstates of the electronic molecular Hamiltonian. It is also convenient, and common, to represent a diatomic molecule as two point masses connected by a massless spring. The energies involved in the various motions of the molecule can then be broken down into three categories: the translational, rotational, and vibrational energies. The rotational energy levels of diatomic molecules can be described theoretically using the treatment below, while the vibrational energy levels can be described using the harmonic oscillator approximation or using quantum vibrational interaction potentials. These potentials give more accurate energy levels because they take multiple vibrational effects into account.
Concerning history, the first treatment of diatomic molecules with quantum mechanics was made by Lucy Mensing in 1926.
Translational energies
The translational energy of the molecule is given by the kinetic energy expression:
$E_{\text{trans}} = \frac{1}{2}mv^2,$
where m is the mass of the molecule and v is its velocity.
Rotational energies
Classically, the kinetic energy of rotation is
$E_{\text{rot}} = \frac{L^2}{2I},$
where L is the angular momentum and I is the moment of inertia of the molecule.
For microscopic, atomic-level systems like a molecule, angular momentum can only have specific discrete values given by
$L^2 = l(l+1)\hbar^2,$
where l is a non-negative integer and ħ is the reduced Planck constant.
Also, for a diatomic molecule the moment of inertia is
$I = \mu r_0^2,$
where $\mu = \frac{m_1 m_2}{m_1 + m_2}$ is the reduced mass of the molecule and $r_0$ is the average distance between the centers of the two atoms in the molecule.
So, substituting the angular momentum and moment of inertia into $E_{\text{rot}}$, the rotational energy levels of a diatomic molecule are given by:
$E_{\text{rot}} = \frac{l(l+1)\hbar^2}{2\mu r_0^2}, \qquad l = 0, 1, 2, \dots$
Vibrational energies
Another type of motion of a diatomic molecule is for each atom to oscillate—or vibrate—along the line connecting the two atoms. The vibrational energy is approximately that of a quantum harmonic oscillator:
$E_{\text{vib}} = \left(n + \frac{1}{2}\right)\hbar\omega, \qquad n = 0, 1, 2, \dots$
where n is an integer, ħ is the reduced Planck constant, and ω is the angular frequency of the vibration.
Comparison between rotational and vibrational energy spacings
The spacing, and the energy of a typical spectroscopic transition, between vibrational energy levels is about 100 times greater than that of a typical transition between rotational energy levels.
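A back-of-the-envelope check of this comparison, using carbon monoxide with approximate literature values (bond length about 1.13 Å, vibrational wavenumber about 2143 cm⁻¹) and assuming the scipy.constants module is available; the ratio comes out in the hundreds for CO, consistent with vibrational spacings being roughly two orders of magnitude larger than rotational ones.

```python
# Rough estimate of the vibrational/rotational spacing ratio for CO,
# using approximate literature values; orders of magnitude only.
import scipy.constants as c  # assumed available

m1 = 12.000 * c.atomic_mass          # carbon-12 mass (kg)
m2 = 15.995 * c.atomic_mass          # oxygen-16 mass (kg)
mu = m1 * m2 / (m1 + m2)             # reduced mass
r0 = 1.13e-10                        # approximate bond length (m)

E_rot = c.hbar**2 / (mu * r0**2)     # spacing between l = 0 and l = 1 levels
E_vib = c.h * c.c * 2143e2           # hbar*omega from the wavenumber (m^-1)
print(E_vib / E_rot)                 # several hundred for CO
```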
Hund's cases
The good quantum numbers for a diatomic molecule, as well as good approximations of rotational energy levels, can be obtained by modeling the molecule using Hund's cases.
Mnemonics
The mnemonics BrINClHOF, pronounced "Brinklehof", HONClBrIF, pronounced "Honkelbrif", “HOBrFINCl”, pronounced “Hoberfinkel”, and HOFBrINCl, pronounced "Hofbrinkle", have been coined to aid recall of the list of diatomic elements. Another method, for English-speakers, is the sentence: "Never Have Fear of Ice Cold Beer" as a representation of Nitrogen, Hydrogen, Fluorine, Oxygen, Iodine, Chlorine, Bromine.
| Physical sciences | Substance | Chemistry |
7955 | https://en.wikipedia.org/wiki/DNA | DNA | Deoxyribonucleic acid (; DNA) is a polymer composed of two polynucleotide chains that coil around each other to form a double helix. The polymer carries genetic instructions for the development, functioning, growth and reproduction of all known organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids. Alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life.
The two DNA strands are known as polynucleotides as they are composed of simpler monomeric units called nucleotides. Each nucleotide is composed of one of four nitrogen-containing nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds (known as the phosphodiester linkage) between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules (A with T and C with G), with hydrogen bonds to make double-stranded DNA. The complementary nitrogenous bases are divided into two groups, the single-ringed pyrimidines and the double-ringed purines. In DNA, the pyrimidines are thymine and cytosine; the purines are adenine and guanine.
Both strands of double-stranded DNA store the same biological information. This information is replicated when the two strands separate. A large part of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences. The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases (or bases). It is the sequence of these four nucleobases along the backbone that encodes genetic information. RNA strands are created using DNA strands as a template in a process called transcription, where DNA bases are exchanged for their corresponding bases except in the case of thymine (T), for which RNA substitutes uracil (U). Under the genetic code, these RNA strands specify the sequence of amino acids within proteins in a process called translation.
Within eukaryotic cells, DNA is organized into long structures called chromosomes. Before typical cell division, these chromosomes are duplicated in the process of DNA replication, providing a complete set of chromosomes for each daughter cell. Eukaryotic organisms (animals, plants, fungi and protists) store most of their DNA inside the cell nucleus as nuclear DNA, and some in the mitochondria as mitochondrial DNA or in chloroplasts as chloroplast DNA. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm, in circular chromosomes. Within eukaryotic chromosomes, chromatin proteins, such as histones, compact and organize DNA. These compacting structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
Properties
DNA is a long polymer made from repeating units called nucleotides. The structure of DNA is dynamic along its length, being capable of coiling into tight loops and other shapes. In all species it is composed of two helical chains, bound to each other by hydrogen bonds. Both chains are coiled around the same axis, and have the same pitch of 34 ångströms (3.4 nm). The pair of chains have a radius of 10 Å (1.0 nm). According to another study, when measured in a different solution, the DNA chain measured 22–26 Å (2.2–2.6 nm) wide, and one nucleotide unit measured 3.3 Å (0.33 nm) long. The buoyant density of most DNA is 1.7 g/cm³.
DNA does not usually exist as a single strand, but instead as a pair of strands that are held tightly together. These two long strands coil around each other, in the shape of a double helix. The nucleotide contains both a segment of the backbone of the molecule (which holds the chain together) and a nucleobase (which interacts with the other DNA strand in the helix). A nucleobase linked to a sugar is called a nucleoside, and a base linked to a sugar and to one or more phosphate groups is called a nucleotide. A biopolymer comprising multiple linked nucleotides (as in DNA) is called a polynucleotide.
The backbone of the DNA strand is made from alternating phosphate and sugar groups. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These are known as the 3′-end (three prime end), and 5′-end (five prime end) carbons, the prime symbol being used to distinguish these carbon atoms from those of the base to which the deoxyribose forms a glycosidic bond.
Therefore, any DNA strand normally has one end at which there is a phosphate group attached to the 5′ carbon of a ribose (the 5′ phosphoryl) and another end at which there is a free hydroxyl group attached to the 3′ carbon of a ribose (the 3′ hydroxyl). The orientation of the 3′ and 5′ carbons along the sugar-phosphate backbone confers directionality (sometimes called polarity) to each DNA strand. In a nucleic acid double helix, the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are antiparallel. The asymmetric ends of DNA strands are said to have a directionality of five prime end (5′ ), and three prime end (3′), with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the related pentose sugar ribose in RNA.
The DNA double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases. The four bases found in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar-phosphate to form the complete nucleotide, as shown for adenosine monophosphate. Adenine pairs with thymine and guanine pairs with cytosine, forming A-T and G-C base pairs.
Nucleobase classification
The nucleobases are classified into two types: the purines, A and G, which are fused five- and six-membered heterocyclic compounds, and the pyrimidines, the six-membered rings C and T. A fifth pyrimidine nucleobase, uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. In addition to RNA and DNA, many artificial nucleic acid analogues have been created to study the properties of nucleic acids, or for use in biotechnology.
Non-canonical bases
Modified bases occur in DNA. The first of these recognized was 5-methylcytosine, which was found in the genome of Mycobacterium tuberculosis in 1925. The reason for the presence of these noncanonical bases in bacterial viruses (bacteriophages) is to avoid the restriction enzymes present in bacteria. This enzyme system acts at least in part as a molecular immune system protecting bacteria from infection by viruses. Modifications of the bases cytosine and adenine, the most commonly modified DNA bases, play vital roles in the epigenetic control of gene expression in plants and animals.
A number of noncanonical bases are known to occur in DNA. Most of these are modifications of the canonical bases plus uracil.
Modified Adenine
N6-carbamoyl-methyladenine
N6-methyladenine
Modified Guanine
7-Deazaguanine
7-Methylguanine
Modified Cytosine
N4-Methylcytosine
5-Carboxylcytosine
5-Formylcytosine
5-Glycosylhydroxymethylcytosine
5-Hydroxycytosine
5-Methylcytosine
Modified Thymidine
α-Glutamylthymidine
α-Putrescinylthymine
Uracil and modifications
Base J
Uracil
5-Dihydroxypentauracil
5-Hydroxymethyldeoxyuracil
Others
Deoxyarchaeosine
2,6-Diaminopurine (2-Aminoadenine)
Grooves
Twin helical strands form the DNA backbone. Another double helix may be found tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not symmetrically located with respect to each other, the grooves are unequally sized. The major groove is 22 ångströms (2.2 nm) wide, while the minor groove is 12 Å (1.2 nm) in width. Due to the larger width of the major groove, the edges of the bases are more accessible in the major groove than in the minor groove. As a result, proteins such as transcription factors that can bind to specific sequences in double-stranded DNA usually make contact with the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in width that would be seen if the DNA was twisted back into the ordinary B form.
Base pairing
Top, a G-C base pair with three hydrogen bonds. Bottom, an A-T base pair with two hydrogen bonds. Non-covalent hydrogen bonds between the pairs are shown as dashed lines.
In a DNA double helix, each type of nucleobase on one strand bonds with just one type of nucleobase on the other strand. This is called complementary base pairing. Purines form hydrogen bonds to pyrimidines, with adenine bonding only to thymine in two hydrogen bonds, and cytosine bonding only to guanine in three hydrogen bonds. This arrangement of two nucleotides binding together across the double helix (from six-carbon ring to six-carbon ring) is called a Watson-Crick base pair. DNA with high GC-content is more stable than DNA with low GC-content. A Hoogsteen base pair (hydrogen bonding the 6-carbon ring to the 5-carbon ring) is a rare variation of base-pairing. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can thus be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this base pair complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. This reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in organisms.
ssDNA vs. dsDNA
Most DNA molecules are actually two polymer strands, bound together in a helical fashion by noncovalent bonds; this double-stranded (dsDNA) structure is maintained largely by the intrastrand base stacking interactions, which are strongest for G,C stacks. The two strands can come apart—a process known as melting—to form two single-stranded DNA (ssDNA) molecules. Melting occurs at high temperatures, low salt and high pH (low pH also melts DNA, but since DNA is unstable due to acid depurination, low pH is rarely used).
The stability of the dsDNA form depends not only on the GC-content (% G,C base pairs) but also on sequence (since stacking is sequence specific) and also length (longer molecules are more stable). The stability can be measured in various ways; a common way is the melting temperature (also called Tm value), which is the temperature at which 50% of the double-strand molecules are converted to single-strand molecules; melting temperature is dependent on ionic strength and the concentration of DNA. As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determines the strength of the association between the two strands of DNA. Long DNA helices with a high GC-content have more strongly interacting strands, while short helices with high AT content have more weakly interacting strands. In biology, parts of the DNA double helix that need to separate easily, such as the Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart.
In the laboratory, the strength of this interaction can be measured by finding the melting temperature Tm necessary to break half of the hydrogen bonds. When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules have no single common shape, but some conformations are more stable than others.
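As a minimal illustration of how base composition feeds into a melting-temperature estimate, the sketch below uses the simple Wallace rule of thumb for short oligonucleotides (Tm ≈ 2(A+T) + 4(G+C) in °C); this rule ignores the ionic-strength and concentration effects noted above and does not apply to long DNA.

```python
# Wallace rule sketch for short oligonucleotides: Tm ~ 2(A+T) + 4(G+C)
# degrees C. A rough rule of thumb only, not accurate for long DNA.
def wallace_tm(seq: str) -> int:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGCATGCATGC"))  # GC-rich sequences give higher estimates
```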
Amount
In humans, the total female diploid nuclear genome per cell extends for 6.37 Gigabase pairs (Gbp), is 208.23 cm long and weighs 6.51 picograms (pg). Male values are 6.27 Gbp, 205.00 cm, 6.41 pg. Each DNA polymer can contain hundreds of millions of nucleotides, such as in chromosome 1. Chromosome 1 is the largest human chromosome with approximately 220 million base pairs, and would be 85 mm long if straightened.
In eukaryotes, in addition to nuclear DNA, there is also mitochondrial DNA (mtDNA) which encodes certain proteins used by the mitochondria. The mtDNA is usually relatively small in comparison to the nuclear DNA. For example, the human mitochondrial DNA forms closed circular molecules, each of which contains 16,569 DNA base pairs, with each such molecule normally containing a full set of the mitochondrial genes. Each human mitochondrion contains, on average, approximately 5 such mtDNA molecules. Each human cell contains approximately 100 mitochondria, giving a total number of mtDNA molecules per human cell of approximately 500. However, the number of mitochondria per cell also varies by cell type, and an egg cell can contain 100,000 mitochondria, corresponding to up to 1,500,000 copies of the mitochondrial genome (constituting up to 90% of the DNA of the cell).
Sense and antisense
A DNA sequence is called a "sense" sequence if it is the same as that of a messenger RNA copy that is translated into protein. The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands can contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear. One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.
A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes. In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription, while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome.
Supercoiling
DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.
Alternative DNA structures
DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although only B-DNA and Z-DNA have been directly observed in functional organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, and the presence of polyamines in solution.
The first published reports of A-DNA X-ray diffraction patterns—and also B-DNA—used analyses based on Patterson functions that provided only a limited amount of structural information for oriented fibers of DNA. An alternative analysis was proposed by Wilkins et al. in 1953 for the in vivo B-DNA X-ray diffraction-scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions. In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double helix.
Although the B-DNA form is most common under the conditions found in cells, it is not a well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.
Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partly dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, and in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.
Alternative DNA chemistry
For many years, exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA. A report in 2010 of the possibility in the bacterium GFAJ-1 was announced, though the research was disputed, and evidence suggests the bacterium actively prevents the incorporation of arsenic into the DNA backbone and other biomolecules.
Quadruplex structures
At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.
These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases, known as a guanine tetrad, form a flat plate. These flat four-base units then stack on top of each other to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.
In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.
Branched DNA
Branched DNA can form networks containing multiple branches.
In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double-strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced and contains adjoining regions able to hybridize with the frayed regions of the pre-existing double-strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible. Branched DNA can be used in nanotechnology to construct geometric shapes, see the section on uses in technology below.
Artificial bases
Several artificial nucleobases have been synthesized, and successfully incorporated in the eight-base DNA analogue named Hachimoji DNA. Dubbed S, B, P, and Z, these artificial bases are capable of bonding with each other in a predictable way (S–B and P–Z), maintaining the double helix structure of DNA, and being transcribed to RNA. Their existence could be seen as an indication that there is nothing special about the four natural nucleobases that evolved on Earth. On the other hand, DNA is closely related to RNA, which not only acts as a transcript of DNA but also performs many tasks in cells as molecular machines. For this purpose, it has to fold into a structure. It has been shown that at least four bases are required for the corresponding RNA to form all possible structures; a higher number is also possible, but this would go against the natural principle of least effort.
Acidity
The phosphate groups of DNA give it acidic properties similar to those of phosphoric acid, and it can be considered a strong acid. It is fully ionized at normal cellular pH, releasing protons and leaving negative charges on the phosphate groups. These negative charges protect DNA from breakdown by hydrolysis by repelling nucleophiles that could hydrolyze it.
Macroscopic appearance
Pure DNA extracted from cells forms white, stringy clumps.
Chemical modifications and altered DNA packaging
Base modifications and DNA packaging
Structure of cytosine with and without the 5-methyl group. Deamination converts 5-methylcytosine into thymine.
The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. DNA packaging and its influence on gene expression can also occur by covalent modifications of the histone protein core around which DNA is wrapped in the chromatin structure or else by remodeling carried out by chromatin remodeling complexes (see Chromatin remodeling). There is, further, crosstalk between DNA methylation and histone modification, so they can coordinately affect chromatin and gene expression.
For one example, cytosine methylation produces 5-methylcytosine, which is important for X-inactivation of chromosomes. The average level of methylation varies between organisms—the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations. Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain, and the glycosylation of uracil to produce the "J-base" in kinetoplastids.
Damage
DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents, and high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks. A typical human cell contains about 150,000 bases that have suffered oxidative damage. Of these oxidative lesions, the most dangerous are double-strand breaks, as they are difficult to repair and can produce point mutations, insertions and deletions in the DNA sequence, and chromosomal translocations. These mutations can cause cancer. Because of inherent limits in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. DNA damage also arises naturally and frequently from normal cellular processes that produce reactive oxygen species, from the hydrolytic activities of cellular water, and so on. Although most of this damage is repaired, in any cell some DNA damage may remain despite the action of repair processes. This remaining damage accumulates with age in mammalian postmitotic tissues, and the accumulation appears to be an important underlying cause of aging.
Many mutagens fit into the space between two adjacent base pairs; this is called intercalation. Most intercalators are aromatic and planar molecules; examples include ethidium bromide, acridines, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen. Others, such as benzo[a]pyrene diol epoxide and aflatoxin, form DNA adducts that induce errors in replication. Nevertheless, due to their ability to inhibit DNA transcription and replication, similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells.
Biological functions
DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. In an alternative fashion, a cell may copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here the focus is on the interactions between DNA and other molecules that mediate the function of the genome.
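As a concrete illustration of complementary base pairing in transcription, the following minimal Python sketch pairs each base of a hypothetical template DNA strand with its RNA complement (A–U, T–A, G–C, C–G); the sequence and the function name are invented for illustration.

```python
# A minimal sketch of transcription as complementary base pairing.
# The template strand here is invented for illustration.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand: str) -> str:
    """Pair each template DNA base with its RNA complement."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

print(transcribe("TACGGT"))  # -> AUGCCA
```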
Genes and genomes
Genomic DNA is tightly and orderly packed in a process called DNA condensation, to fit within the small available volume of the cell. In eukaryotes, DNA is located in the cell nucleus, with small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid. The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, and regulatory sequences such as promoters and enhancers, which control transcription of the open reading frame.
In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. The reasons for the presence of so much noncoding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species, represent a long-standing puzzle known as the "C-value enigma". However, some DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression.
Some noncoding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes but are important for the function and stability of chromosomes. An abundant form of noncoding DNA in humans is the pseudogene, a copy of a gene that has been disabled by mutation. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.
Transcription and translation
A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT).
In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases read in 3-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAG, TAA, and TGA codons (UAG, UAA, and UGA on the mRNA).
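The codon arithmetic can be checked directly: enumerating all three-letter combinations of the four RNA bases yields exactly 64 codons. The Python sketch below does this and translates a short, invented mRNA using a deliberately partial codon table; the few entries shown follow the standard genetic code.

```python
from itertools import product

BASES = "UCAG"
codons = ["".join(c) for c in product(BASES, repeat=3)]
print(len(codons))  # 64 = 4**3

# A deliberately partial codon table; entries follow the standard code.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "ACU": "Thr",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA in steps of three bases until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGUUUACUUAA"))  # -> ['Met', 'Phe', 'Thr']
```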
Replication
Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.
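The copying rule described above can be sketched in a few lines of Python: each base on the old strand dictates its partner on the new strand, and because the strands are antiparallel the new strand is read back in the opposite direction. The sequence is invented for illustration.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(strand: str) -> str:
    """Build the antiparallel complement, reported 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

template = "ATGCCGTA"
copy = complementary_strand(template)
print(copy)  # TACGGCAT
# Copying the copy restores the original, as in semi-conservative replication.
assert complementary_strand(copy) == template
```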
Extracellular nucleic acids
Naked extracellular DNA (eDNA), most of it released by cell death, is nearly ubiquitous in the environment. Its concentration in soil may be as high as 2 μg/L, and its concentration in natural aquatic environments may be as high as 88 μg/L. Various possible functions have been proposed for eDNA: it may be involved in horizontal gene transfer; it may provide nutrients; and it may act as a buffer to recruit or titrate ions or antibiotics. Extracellular DNA acts as a functional extracellular matrix component in the biofilms of several bacterial species. It may act as a recognition factor to regulate the attachment and dispersal of specific cell types in the biofilm; it may contribute to biofilm formation; and it may contribute to the biofilm's physical strength and resistance to biological stress.
Cell-free fetal DNA is found in the blood of the mother, and can be sequenced to determine a great deal of information about the developing fetus.
Under the name of environmental DNA, eDNA has seen increased use in the natural sciences as a survey tool for ecology, monitoring the movements and presence of species in water, air, or on land, and assessing an area's biodiversity.
Neutrophil extracellular traps
Neutrophil extracellular traps (NETs) are networks of extracellular fibers, primarily composed of DNA, which allow neutrophils, a type of white blood cell, to kill extracellular pathogens while minimizing damage to the host cells.
Interactions with proteins
All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.
DNA-binding proteins
Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones, making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are thus largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation, and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes.
A distinct group of DNA-binding proteins are those that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination, and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases.
In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This changes the accessibility of the DNA template to the polymerase.
As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA comes from the proteins making multiple contacts with the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base interactions are made in the major groove, where the bases are most accessible.
DNA-modifying enzymes
Nucleases and ligases
Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GATATC-3′ and makes a blunt cut at its centre. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
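In software terms, the sequence specificity of a restriction enzyme is just a string search. The Python sketch below scans a made-up sequence for the EcoRV site GATATC and splits it at the centre of each site, mimicking the blunt cut; it is an illustration, not a model of the enzyme's chemistry.

```python
SITE = "GATATC"  # EcoRV recognition sequence, cut as GAT|ATC (blunt ends)

def cut_positions(seq: str, site: str = SITE) -> list[int]:
    """Indices at which the sequence would be cut (middle of each site)."""
    cuts, start = [], seq.find(site)
    while start != -1:
        cuts.append(start + len(site) // 2)
        start = seq.find(site, start + 1)
    return cuts

def digest(seq: str) -> list[str]:
    """Split the sequence at every cut position, like a restriction digest."""
    fragments, previous = [], 0
    for cut in cut_positions(seq):
        fragments.append(seq[previous:cut])
        previous = cut
    fragments.append(seq[previous:])
    return fragments

dna = "CCGATATCGGTTGATATCAA"   # invented sequence with two EcoRV sites
print(cut_positions(dna))      # [5, 15]
print(digest(dna))             # ['CCGAT', 'ATCGGTTGAT', 'ATCAA']
```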
Enzymes called DNA ligases can rejoin cut or broken DNA strands. Ligases are particularly important in lagging strand DNA replication, as they join the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.
Topoisomerases and helicases
Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.
Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly adenosine triphosphate (ATP), to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases.
Polymerases
Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequence of their products is created based on existing polynucleotide chains—which are called templates. These enzymes function by repeatedly adding a nucleotide to the 3′ hydroxyl group at the end of the growing polynucleotide chain. As a consequence, all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use.
In DNA replication, DNA-dependent DNA polymerases make copies of DNA polynucleotide chains. To preserve biological information, it is essential that the sequence of bases in each copy is precisely complementary to the sequence of bases in the template strand. Many DNA polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed. In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.
RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. HIV reverse transcriptase, for example, is essential for the replication of the virus that causes AIDS. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure. It synthesizes telomeres at the ends of chromosomes, which prevent fusion of the ends of neighboring chromosomes and protect chromosome ends from damage.
Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.
Genetic recombination
Structure of the Holliday junction intermediate in genetic recombination. The four separate DNA strands are coloured red, blue, green and yellow.
A DNA helix usually does not interact with other segments of DNA, and in human cells, the different chromosomes even occupy separate areas in the nucleus called "chromosome territories". This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is in chromosomal crossover which occurs during sexual reproduction, when genetic recombination occurs. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin.
Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.
The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break caused by either an endonuclease or damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA. Only strands of like polarity exchange DNA during recombination. There are two types of cleavage: east–west cleavage, which leaves one strand of DNA intact, and north–south cleavage, which nicks both strands of DNA. The formation of a Holliday junction during recombination makes possible genetic diversity, the exchange of genes between chromosomes, and the expression of wild-type viral genomes.
Evolution
DNA contains the genetic information that allows all forms of life to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world, where nucleic acid would have been used for both catalysis and genetics, may have influenced the evolution of the current genetic code based on four nucleotide bases. Such a code could have arisen because the number of different bases in such an organism is a trade-off between a small number of bases increasing replication accuracy and a large number of bases increasing the catalytic efficiency of ribozymes. However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible because DNA survives in the environment for less than one million years, and slowly degrades into short fragments in solution. Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old, but these claims are controversial.
Building blocks of DNA (adenine, guanine, and related organic molecules) may have been formed extraterrestrially in outer space. Complex DNA and RNA organic compounds of life, including uracil, cytosine, and thymine, have also been formed in the laboratory under conditions mimicking those found in outer space, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar cosmic dust and gas clouds.
Ancient DNA has been recovered from ancient organisms at a timescale where genome evolution can be directly observed, including from extinct organisms up to millions of years old, such as the woolly mammoth.
Uses in technology
Genetic engineering
Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be transformed into organisms in the form of plasmids or, in the appropriate format, by using a viral vector. The genetically modified organisms produced can be used to produce products such as recombinant proteins, used in medical research, or be grown in agriculture.
DNA profiling
Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to identify the DNA of a matching individual, such as a perpetrator. This process is formally termed DNA profiling, also called DNA fingerprinting. In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying matching DNA. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys, and was first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case.
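The comparison step in STR-based profiling can be sketched as counting how many consecutive copies of a repeat unit occur at a locus and checking whether two samples carry the same count. The sequences and the repeat unit below are invented; real profiling compares many loci, typically measured by capillary electrophoresis rather than from raw sequence.

```python
def repeat_count(seq: str, unit: str) -> int:
    """Longest run of back-to-back copies of `unit` in `seq`."""
    best = run = i = 0
    while i <= len(seq) - len(unit):
        if seq[i:i + len(unit)] == unit:
            run += 1
            best = max(best, run)
            i += len(unit)
        else:
            run = 0
            i += 1
    return best

suspect_locus = "CC" + "TATC" * 4 + "GG"   # invented flanking sequence
crime_scene   = "AA" + "TATC" * 4 + "TT"
print(repeat_count(suspect_locus, "TATC"))  # 4
# True: the alleles at this (single, illustrative) locus match.
print(repeat_count(crime_scene, "TATC") == repeat_count(suspect_locus, "TATC"))
```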
The development of forensic science and the ability to obtain genetic matches from minute samples of blood, skin, saliva, or hair has led to the re-examination of many cases. Evidence can now be uncovered that was scientifically impossible to obtain at the time of the original examination. Combined with the removal of the double jeopardy law in some places, this can allow cases to be reopened where prior trials have failed to produce sufficient evidence to convince a jury. People charged with serious crimes may be required to provide a sample of DNA for matching purposes. The most obvious defense to DNA matches obtained forensically is to claim that cross-contamination of evidence has occurred. This has resulted in meticulously strict handling procedures for new cases of serious crime.
DNA profiling is also used successfully to positively identify victims of mass casualty incidents, bodies or body parts in serious accidents, and individual victims in mass war graves, via matching to family members.
DNA profiling is also used in DNA paternity testing to determine if someone is the biological parent or grandparent of a child; the probability of parentage is typically 99.99% when the alleged parent is biologically related to the child. Normal DNA sequencing methods happen after birth, but there are new methods to test paternity while a mother is still pregnant.
DNA enzymes or catalytic DNA
Deoxyribozymes, also called DNAzymes or catalytic DNA, were first discovered in 1994. They are mostly single-stranded DNA sequences isolated from a large pool of random DNA sequences through a combinatorial approach called in vitro selection or systematic evolution of ligands by exponential enrichment (SELEX). DNAzymes catalyze a variety of chemical reactions, including RNA-DNA cleavage, RNA-DNA ligation, amino acid phosphorylation and dephosphorylation, and carbon-carbon bond formation. DNAzymes can enhance the catalytic rate of chemical reactions up to 100,000,000,000-fold over the uncatalyzed reaction. The most extensively studied class of DNAzymes is the RNA-cleaving type, which has been used to detect different metal ions and to design therapeutic agents. Several metal-specific DNAzymes have been reported, including the GR-5 DNAzyme (lead-specific), the CA1-3 DNAzymes (copper-specific), the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific). The NaA43 DNAzyme, which is reported to be more than 10,000-fold selective for sodium over other metal ions, was used to make a real-time sodium sensor in cells.
Bioinformatics
Bioinformatics involves the development of techniques to store, data mine, search and manipulate biological data, including DNA nucleic acid sequence data. These have led to widely applied advances in computer science, especially string searching algorithms, machine learning, and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. The DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally. Entire genomes may also be compared, which can shed light on the evolutionary history of particular organisms and permit the examination of complex evolutionary events.
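As a toy example of the gene-finding idea mentioned above, the Python sketch below scans the forward reading frames of a made-up DNA sequence for open reading frames: stretches that begin with the start codon ATG and run to an in-frame stop codon. Real gene finders rely on far richer statistical models than this.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, min_codons: int = 2) -> list[str]:
    """Return ATG-to-stop stretches (inclusive) in the forward frames."""
    orfs = []
    for start in range(len(seq) - 2):
        if seq[start:start + 3] != "ATG":
            continue
        for i in range(start + 3, len(seq) - 2, 3):
            if seq[i:i + 3] in STOP_CODONS:
                if (i - start) // 3 >= min_codons:  # codons before the stop
                    orfs.append(seq[start:i + 3])
                break
    return orfs

print(find_orfs("GGATGAAATTTTAACC"))  # ['ATGAAATTTTAA']
```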
DNA nanotechnology
DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based and using the DNA origami method) and three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins. DNA and other nucleic acids are the basis of aptamers, synthetic oligonucleotide ligands for specific target molecules used in a range of biotechnology and biomedical applications.
History and anthropology
Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology.
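The comparison step underlying such inferences can be illustrated with a crude distance measure: counting the differing sites between aligned sequences gives a distance matrix from which a tree could be built (for example by neighbour joining). The aligned sequences and species names below are invented.

```python
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Differing sites between two aligned, equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

aligned = {  # invented alignment
    "speciesA": "ATGCTACGT",
    "speciesB": "ATGCTACGA",
    "speciesC": "ATGTTATGA",
}

for (n1, s1), (n2, s2) in combinations(aligned.items(), 2):
    print(n1, n2, hamming(s1, s2))
# speciesA speciesB 1   <- most similar pair, so A and B would group together
# speciesA speciesC 3
# speciesB speciesC 2
```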
Information storage
DNA as a storage device for information has enormous potential, since it has a much higher storage density than electronic devices. However, high costs, slow read and write times (memory latency), and insufficient reliability have prevented its practical use.
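The storage-density claim rests on simple arithmetic: with four bases, each base can carry two bits. The Python sketch below shows the bare mapping from bytes to bases and back; practical schemes add error correction and avoid long single-base runs, none of which is modelled here.

```python
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Pack each byte into four bases (two bits per base)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    """Reverse the mapping: four bases back into one byte."""
    bits = "".join(BASE_TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

dna = encode(b"Hi")
print(dna)                   # CAGACGGC
assert decode(dna) == b"Hi"
```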
History
DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein". In 1878, Albrecht Kossel isolated the non-protein component of "nuclein", nucleic acid, and later isolated its five primary nucleobases.
In 1909, Phoebus Levene identified the base, sugar, and phosphate nucleotide unit of RNA (then named "yeast nucleic acid"). In 1929, Levene identified deoxyribose sugar in "thymus nucleic acid" (DNA). Levene suggested that DNA consisted of a string of four nucleotide units linked together through the phosphate groups ("tetranucleotide hypothesis"). Levene thought the chain was short and the bases repeated in a fixed order. In 1927, Nikolai Koltsov proposed that hereditary traits would be transmitted via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". In 1928, Frederick Griffith discovered in his experiment that traits of the "smooth" form of Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This system provided the first clear suggestion that DNA carries genetic information.
In 1933, while studying virgin sea urchin eggs, Jean Brachet suggested that DNA is found in the cell nucleus and that RNA is present exclusively in the cytoplasm. At the time, "yeast nucleic acid" (RNA) was thought to occur only in plants, while "thymus nucleic acid" (DNA) only in animals. The latter was thought to be a tetramer, with the function of buffering cellular pH.
In 1937, William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.
In 1943, Oswald Avery, along with co-workers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle, supporting Griffith's suggestion (Avery–MacLeod–McCarty experiment). Erwin Chargaff developed and published observations now known as Chargaff's rules, stating that in DNA from any species of any organism, the amount of guanine should be equal to cytosine and the amount of adenine should be equal to thymine.
Late in 1951, Francis Crick started working with James Watson at the Cavendish Laboratory within the University of Cambridge. DNA's role in heredity was confirmed in 1952 when Alfred Hershey and Martha Chase in the Hershey–Chase experiment showed that DNA is the genetic material of the enterobacteria phage T2.
In May 1952, Raymond Gosling, a graduate student working under the supervision of Rosalind Franklin, took an X-ray diffraction image, labeled as "Photo 51", at high hydration levels of DNA. This photo was given to Watson and Crick by Maurice Wilkins and was critical to their obtaining the correct structure of DNA. Franklin told Crick and Watson that the backbones had to be on the outside. Before then, Linus Pauling, and Watson and Crick, had erroneous models with the chains inside and the bases pointing outwards. Franklin's identification of the space group for DNA crystals revealed to Crick that the two DNA strands were antiparallel. In February 1953, Linus Pauling and Robert Corey proposed a model for nucleic acids containing three intertwined chains, with the phosphates near the axis, and the bases on the outside. Watson and Crick completed their model, which is now accepted as the first correct model of the double helix of DNA. On 28 February 1953 Crick interrupted patrons' lunchtime at The Eagle pub in Cambridge, England to announce that he and Watson had "discovered the secret of life".
The 25 April 1953 issue of the journal Nature published a series of five articles giving the Watson and Crick double-helix structure of DNA and evidence supporting it. The structure was reported in a letter titled "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid", in which they said, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material." This letter was followed by a letter from Franklin and Gosling, which was the first publication of their own X-ray diffraction data and of their original analysis method. Then followed a letter by Wilkins and two of his colleagues, which contained an analysis of in vivo B-DNA X-ray patterns, and which supported the presence in vivo of the Watson and Crick structure.
In April 2023, scientists concluded, based on new evidence, that Rosalind Franklin was a contributor and "equal player" in the discovery of DNA's structure, rather than the lesser figure she had often been portrayed as after the discovery.
In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine. Nobel Prizes are awarded only to living recipients. A debate continues about who should receive credit for the discovery.
In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson–Stahl experiment. Further work by Crick and co-workers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley, and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology.
In 1986, DNA analysis was first used for criminal investigative purposes when police in the UK requested Alec Jeffreys of the University of Leicester to verify or disprove a suspect's rape-murder "confession". In this particular case, the suspect had confessed to two rape-murders, but had later retracted his confession. DNA testing at the university labs soon disproved the veracity of the suspect's original "confession", and the suspect was exonerated from the murder-rape charges.
| Biology and health sciences | Chemistry | null |
7985 | https://en.wikipedia.org/wiki/Dill | Dill | Dill (Anethum graveolens) is an annual herb in the celery family Apiaceae. It is native to North Africa, Iran, and the Arabian Peninsula; it is grown widely in Eurasia, where its leaves and seeds are used as a herb or spice for flavouring food.
Etymology
The word dill and its close relatives are found in most of the Germanic languages; its ultimate origin is unknown.
Taxonomy
The genus name Anethum is the Latin form of Greek ἄνῑσον / ἄνησον / ἄνηθον / ἄνητον, which meant both "dill" and "anise". The form 'anīsum' came to be used for anise, and 'anēthum' for dill. The Latin word is the origin of dill's names in the Western Romance languages ('anet', 'aneldo' etc.), and also of the obsolete English 'anet'.
Botany
Dill grows from a taproot like a carrot. Its stems are slender and hollow, with finely divided, softly delicate, alternately arranged leaves; the ultimate leaf divisions are slightly broader than the similar leaves of fennel, which are threadlike and harder in texture.
In hot or dry weather, small, scented, white to yellow flowers form in small umbels on a single long stalk. The seeds come from the dried fruit, which is straight to slightly curved with a longitudinally ridged surface.
Cultivation
Successful cultivation requires warm to hot summers with high sunshine levels; even partial shade will reduce the yield substantially. It also prefers rich, well-drained soil. The seed is harvested by cutting the flower heads off the stalks when the seed is beginning to ripen. The seed heads are placed upside down in a paper bag and left in a warm, dry place for a week. The seeds then separate from the stems easily for storage in an airtight container.
These plants, like their fennel and parsley relatives, often are eaten by black swallowtail caterpillars in areas where that species occurs. For this reason, they may be included in some butterfly gardens.
History
Dill has been found in the tomb of Egyptian Pharaoh Amenhotep II, dating to around 1400 BC. It was also later found in the Greek city of Samos, around the 7th century BC, and mentioned in the writings of Theophrastus (371–287 BC). In Greek mythology, dill was originally a young man named Anethus who was transformed into the plant.
Uses
Culinary
Aroma profile
Apiole and dillapiole
Carvone
Limonene
Myristicin
Umbelliferone
Fresh and dried dill leaves (sometimes called "dill weed" or "dillweed" to distinguish it from dill seed) are widely used as herbs in Europe and in central and south-eastern Asia.
Like caraway, the fern-like leaves of dill are aromatic and are used to flavour many foods such as gravlax (cured salmon) and other fish dishes, borscht, and other soups, as well as pickles (where the dill flower is sometimes used). Dill is best when used fresh, as it loses its flavor rapidly if dried. However, freeze-dried dill leaves retain their flavour relatively well for a few months.
Dill oil is extracted from the leaves, stems, and seeds of the plant. The oil from the seeds is distilled and used in the manufacturing of soaps.
Dill is the eponymous ingredient in dill pickles.
Central and eastern Europe
In central and eastern Europe, the Nordic countries, the Baltic states, Ukraine, and Russia, dill is a staple culinary herb along with chives and parsley. Fresh, finely cut dill leaves are used as a topping in soups, especially the hot red borscht and the cold borscht mixed with curds, kefir, yogurt, or sour cream, which is served during hot summer weather and is called 'okroshka'. It also is popular in summer to drink fermented milk (curds, kefir, yogurt, or buttermilk) mixed with dill (and sometimes other herbs).
In the same way, dill is used as a topping for boiled potatoes covered with fresh butter – especially in summer when there are so-called new, or young, potatoes. The dill leaves may be mixed with butter, making a dill butter, to serve the same purpose. Dill leaves mixed with tvorog form one of the traditional cheese spreads used for sandwiches. Fresh dill leaves are used throughout the year as an ingredient in salads, e.g., one made of lettuce, fresh cucumbers, and tomatoes, as basil leaves are used in Italy and Greece.
Russian cuisine is noted for its liberal use of dill, where it is known as 'ukrop' (укроп). It is supposed to have antiflatulent properties; some Russian cosmonauts recommended its use in human spaceflight due to such properties being beneficial in confined quarters with a closed air supply.
In Polish cuisine, fresh dill leaves mixed with sour cream are the basis for dressings. It is especially popular to use this kind of sauce with freshly cut cucumbers, which are almost wholly immersed in the sauce, making a salad called 'mizeria'. Dill sauce is used hot for baked freshwater fish and for chicken or turkey breast, or used hot or cold for hard-boiled eggs. A dill-based soup, (zupa koperkowa), served with potatoes and hard-boiled eggs, is popular in Poland. Whole stems including roots and flower buds are used traditionally to prepare Polish-style pickled cucumbers (ogórki kiszone), especially the so-called low-salt cucumbers (ogórki małosolne). Whole stems of dill (often including the roots) also are cooked with potatoes, especially the potatoes of autumn and winter, so they resemble the flavour of the newer potatoes found in summer. Some kinds of fish, especially trout and salmon, traditionally are baked with the stems and leaves of dill.
In the Czech Republic, white dill sauce made of cream (or milk), butter, flour, vinegar, and dill is called 'koprová omáčka' (also 'koprovka' or 'kopračka') and is served either with boiled eggs and potatoes, or with dumplings and boiled beef. Another Czech dish with dill is a soup called 'kulajda' that contains mushrooms (traditionally wild ones).
In Germany, dill is popular as a seasoning for fish and many other dishes, chopped as a garnish on potatoes, and as a flavouring in pickles.
In the UK, dill may be used in fish pie.
In Bulgaria dill is widely used in traditional vegetable salads, and most notably the yogurt-based cold soup Tarator. It is also used in the preparation of sour pickles, cabbage, and other dishes.
In Romania dill (mărar) is widely used as an ingredient for soups such as 'borş' (pronounced "borsh"), pickles, and other dishes, especially those based on peas, beans, and cabbage. It is popular for dishes based on potatoes and mushrooms and may be found in many summer salads (especially cucumber salad, cabbage salad and lettuce salad). During springtime, it is used in omelets with spring onions. It often complements sauces based on sour cream or yogurt and is mixed with salted cheese and used as a filling. Another popular dish with dill as a main ingredient is dill sauce, which is served with eggs and fried sausages.
In Hungary, dill is very widely used. It is popular as a sauce or filling, and mixed with a type of cottage cheese. Dill is also used for pickling and in salads. The Hungarian name for dill is 'kapor'.
In Serbia, dill is known as 'mirodjija' and is used as an addition to soups, potato and cucumber salads, and French fries. It features in the Serbian proverb, "бити мирођија у свакој чорби" /biti mirodjija u svakoj čorbi/ (to be a dill in every soup), which corresponds to the English proverb "to have a finger in every pie".
In Greece, dill is known as 'άνηθος' (anithos). In antiquity it was used as an ingredient in wines that were called "anithites oinos" (wine with anithos-dill). In modern days, dill is used in salads, soups, sauces, and fish and vegetable dishes.
In Santa Maria, Azores, dill (endro) is the most important ingredient of the traditional Holy Ghost soup (sopa do Espírito Santo). Dill is found ubiquitously in Santa Maria, yet is rare in the other Azorean islands.
In Sweden, dill is a common spice or herb. The flowers of fully grown dill are called 'krondill' (crown dill) and used when cooking crayfish. The krondill is put into the water after the crayfish are boiled, while the water is still hot and salty. The entire dish is then refrigerated for at least 24 hours before being served (with toasted bread and butter). Krondill is also used to flavor pickles and vodka. After a month or two of fermentation, the cucumber pickles are ready to eat, for instance, with pork, brown sauce, and potatoes, as a sweetener. The thinner parts of dill and young plants may be used with boiled fresh potatoes (especially the first potatoes of the year, new potatoes, which usually are small and have a very thin skin). In salads it is used together with, or instead of, other green herbs, such as parsley, chives, and basil, and it is often paired with chives. Dill is often used to flavour fish and seafood in Sweden, for example, gravlax and various herring pickles, among them the traditional 'sill i dill' (literally 'herring in dill'). In contrast to the various fish dishes flavoured with dill, there is also a traditional Swedish dish called 'dillkött', a meaty stew flavoured with dill. The dish commonly contains pieces of veal or lamb that are boiled until tender and then served together with a vinegary dill sauce. Dill seeds may be used in breads or 'akvavit'. A newer, non-traditional use of dill is to pair it with chives as a flavouring for potato chips. These are called 'dillchips' and are quite popular in Sweden.
In Finland, the uses of dill are very similar to those in Sweden, including flavouring potato chips and, less popularly, in a dish similar to 'dillkött' ('tilliliha'). However, the use of dill in Finland is not as extensive as in large parts of central and eastern Europe, particularly Russia but including even the ethnolinguistically close Estonia.
Asia and Middle East
In Iran, dill is known as 'shevid' and is sometimes used with rice in a dish called 'shevid-polo'. It is also used in Iranian 'aash' recipes.
In India, dill is known as 'sholpa' in Bengali, 'shepu' (शेपू) in Marathi, 'sheppi' (शेप्पी) in Konkani, 'soya' (सोया) in Hindi, and by a similar name in Punjabi. In Telugu, it is called 'soa-kura' (herb greens). It is also called 'sabbasige soppu' (ಸಬ್ಬಸಿಗೆ ಸೊಪ್ಪು) in Kannada, 'sathakuppi' (சதகுப்பி) in Tamil, 'chathakuppa' (ചതകുപ്പ) in Malayalam, and 'suva' (સૂવા) in Gujarati. In India, dill is prepared in the manner of yellow 'moong dal', as a main-course dish. It is considered to have very good antiflatulent properties, so it is used as 'mukhwas', an after-meal digestive. Traditionally, it is given to mothers immediately after childbirth. In the state of Uttar Pradesh in India, a small amount of fresh dill is cooked along with cut potatoes and fresh fenugreek leaves (Hindi: आलू-मेथी-सोया).
In Manipur, dill is an essential ingredient of a traditional Manipuri dish made with fermented soybean and rice.
In Laos and parts of northern Thailand, dill is known in English as Lao coriander, and served as a side with salad yum or papaya salad. In the Lao language, it is called 'phak see', and in Thai, it is known as 'phak chee Lao'. In Lao cuisine, Lao coriander is used extensively in traditional Lao dishes such as 'mok pa' (steamed fish in banana leaf) and several coconut milk curries that contain fish or prawns.
In China, dill is colloquially called 'huíxiāng' (perfume of the Hui people), or more properly 'shíluó'. It is a common filling in 'baozi', 'jiaozi', and 'xianbing', and may be used as a vegetarian filling with rice vermicelli, or combined with either meat or eggs. Vegetarian dill baozi are a common part of a Beijing breakfast. In baozi and xianbing, it often is interchangeable with non-bulbing fennel, and the term may also refer to fennel, similarly to caraway and coriander leaf, sharing a name in Chinese as well. Dill also may be stir-fried as a potherb, often with egg, in the same manner as Chinese chives. In Northern China, Beijing, Inner Mongolia, Ningxia, Gansu, and Xinjiang, dill seeds commonly are called 'zīrán', but also 'kūmíng', 'kūmíngzi', 'shíluózi', and 'xiǎohuíxiāngzi', and are used with pepper for lamb meat. Throughout China, 'yángchuàn' or 'yángròu chuàn', lamb brochette, a speciality of the Uyghurs, uses cumin and pepper.
In Taiwan, it is also commonly used as a filling in steamed buns (baozi) and dumplings (jiaozi).
In Vietnam, the use of dill in cooking is regional. It is used mainly in northern Vietnamese cuisine.
Middle East
In Arab countries, dill seed, known as 'grasshopper's eye', is used as a spice in cold dishes such as 'fattoush' and pickles. In Arab countries of the Persian Gulf, dill is called 'shibint' and is used mostly in fish dishes. In Egypt, dillweed is commonly used to flavour cabbage dishes, including 'mahshi koronb' (stuffed cabbage leaves). In Israel, dill weed is used in salads and also to flavour omelettes, often alongside parsley.
Companion planting
When used as a companion plant, dill attracts many beneficial insects as the umbrella flower heads go to seed. It makes a good companion plant for cucumbers and broccoli.
Tomato plants benefit from dill when it is young, because it repels harmful pests while attracting pollinators; however, the dill must be pruned before it flowers, or it can slow or stop the growth of the tomatoes.
| Biology and health sciences | Herbs and spices | Plants |
7992 | https://en.wikipedia.org/wiki/Devonian | Devonian | The Devonian is a geologic period and system of the Paleozoic era during the Phanerozoic eon, spanning 60.3 million years from the end of the preceding Silurian period at 419.2 million years ago (Ma) to the beginning of the succeeding Carboniferous period at 358.9 Ma. It is the fourth period of both the Paleozoic and the Phanerozoic. It is named after Devon, South West England, where rocks from this period were first studied.
The first significant evolutionary radiation of life on land occurred during the Devonian, as free-sporing land plants (pteridophytes) began to spread across dry land, forming extensive coal forests which covered the continents. By the middle of the Devonian, several groups of vascular plants had evolved leaves and true roots, and by the end of the period the first seed-bearing plants (pteridospermatophytes) appeared. This rapid evolution and colonization process, which had begun during the Silurian, is known as the Silurian-Devonian Terrestrial Revolution. The earliest land animals, predominantly arthropods such as myriapods, arachnids and hexapods, also became well-established early in this period, after beginning their colonization of land at least as early as the Ordovician period.
Fishes, especially jawed fish, reached substantial diversity during this time, leading the Devonian to often be dubbed the Age of Fishes. The armored placoderms began dominating almost every known aquatic environment. In the oceans, cartilaginous fishes such as primitive sharks became more numerous than in the Silurian and Late Ordovician. Tetrapodomorphs, which include the ancestors of all four-limbed vertebrates (i.e. tetrapods), began diverging from freshwater lobe-finned fish as their more robust and muscled pectoral and pelvic fins gradually evolved into forelimbs and hindlimbs, though they were not fully established for life on land until the Late Carboniferous.
The first ammonites, a subclass of cephalopod molluscs, appeared. Trilobites, brachiopods and the great coral reefs were still common during the Devonian. The Late Devonian extinction, which started about 375 Ma, severely affected marine life, killing off most of the reef systems, most of the jawless fish, the placoderms, and nearly all trilobites save for a few species of the order Proetida. The subsequent end-Devonian extinction, which occurred at around 359 Ma, further impacted the ecosystems and completed the extinction of all calcite sponge reefs and placoderms.
Devonian palaeogeography was dominated by the supercontinent Gondwana to the south, the small continent of Siberia to the north, and the medium-sized continent of Laurussia to the east. Major tectonic events include the closure of the Rheic Ocean, the separation of South China from Gondwana, and the resulting expansion of the Paleo-Tethys Ocean. The Devonian experienced several major mountain-building events as Laurussia and Gondwana approached; these include the Acadian Orogeny in North America and the beginning of the Variscan Orogeny in Europe. These early collisions preceded the formation of the single supercontinent Pangaea in the Late Paleozoic.
History
The period is named after Devon, a county in southwestern England, where a controversial argument in the 1830s over the age and structure of the rocks found throughout the county was resolved by adding the Devonian Period to the geological timescale. The Great Devonian Controversy was a lengthy debate between Roderick Murchison, Adam Sedgwick and Henry De la Beche over the naming of the period. Murchison and Sedgwick won the debate and named it the Devonian System.
While the rock beds that define the start and end of the Devonian Period are well identified, the exact dates are uncertain. According to the International Commission on Stratigraphy, the Devonian extends from the end of the Silurian at 419.2 Ma to the beginning of the Carboniferous at 358.9 Ma – in North America, at the beginning of the Mississippian subperiod of the Carboniferous.
In 19th-century texts, the Devonian has been called the "Old Red Age", after the red and brown terrestrial deposits known in the United Kingdom as the Old Red Sandstone in which early fossil discoveries were found. Another common term is "Age of the Fishes", referring to the evolution of several major groups of fish that took place during the period. Older literature on the Anglo-Welsh basin divides it into the Downtonian, Dittonian, Breconian, and Farlovian stages, the latter three of which are placed in the Devonian.
The Devonian has also erroneously been characterised as a "greenhouse age", due to sampling bias: most of the early Devonian-age discoveries came from the strata of Western Europe and eastern North America, which at the time straddled the Equator as part of the supercontinent of Euramerica where fossil signatures of widespread reefs indicate tropical climates that were warm and moderately humid. In fact, the climate in the Devonian differed greatly during its epochs and between geographic regions. For example, during the Early Devonian, arid conditions were prevalent through much of the world including Siberia, Australia, North America, and China, but Africa and South America had a warm temperate climate. In the Late Devonian, by contrast, arid conditions were less prevalent across the world and temperate climates were more common.
Subdivisions
The Devonian Period is formally broken into Early, Middle and Late subdivisions. The rocks corresponding to those epochs are referred to as belonging to the Lower, Middle and Upper parts of the Devonian System.
Early Devonian
The Early Devonian lasted from 419.2 to 393.3 Ma. It began with the Lochkovian Stage, from 419.2 to 410.8 Ma, which was followed by the Pragian, from 410.8 to 407.6 Ma, and then by the Emsian, which lasted until the Middle Devonian began at 393.3 Ma.
During this time, the first ammonoids appeared, descending from bactritoid nautiloids. Ammonoids during this time period were simple and differed little from their nautiloid counterparts. These ammonoids belong to the order Agoniatitida, which in later epochs evolved to new ammonoid orders, for example Goniatitida and Clymeniida. This class of cephalopod molluscs would dominate the marine fauna until the beginning of the Mesozoic Era.
Middle Devonian
The Middle Devonian comprised two subdivisions: first the Eifelian, which then gave way to the Givetian at 387.7 Ma. During this time, the jawless agnathan fishes began to decline in diversity in freshwater and marine environments, partly due to drastic environmental changes and partly due to the increasing competition, predation, and diversity of jawed fishes. The shallow, warm, oxygen-depleted waters of Devonian inland lakes, surrounded by primitive plants, provided the environment necessary for certain early fish to develop such essential characteristics as well-developed lungs and the ability to crawl out of the water and onto the land for short periods of time.
Late Devonian
Finally, the Late Devonian started with the Frasnian, from 382.7 to 372.2 Ma, during which the first forests took shape on land. The first tetrapods appeared in the fossil record in the ensuing Famennian subdivision, the beginning and end of which are marked by extinction events. This stage lasted until the end of the Devonian at 358.9 Ma.
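A compact way to keep these subdivisions straight is to tabulate the boundary ages and derive each stage's duration. The following minimal Python sketch uses the rounded ICS boundary ages assumed in the text above; these values carry uncertainties of a few million years and are periodically revised:

# Devonian stages with rounded ICS boundary ages in Ma (assumed values;
# the ICS revises them periodically and each carries an uncertainty).
stages = [
    ("Lochkovian", 419.2, 410.8),
    ("Pragian",    410.8, 407.6),
    ("Emsian",     407.6, 393.3),
    ("Eifelian",   393.3, 387.7),
    ("Givetian",   387.7, 382.7),
    ("Frasnian",   382.7, 372.2),
    ("Famennian",  372.2, 358.9),
]
for name, start, end in stages:
    # Each line prints the stage's span and its length in millions of years.
    print(f"{name:<10} {start:6.1f}-{end:5.1f} Ma, duration {start - end:4.1f} Myr")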
Climate
The Devonian was a relatively warm period, although significant glaciers may have existed during the Early and Middle Devonian. The temperature gradient from the equator to the poles was not as large as it is today. The weather was also very arid, mostly along the equator, where it was driest. Reconstruction of tropical sea surface temperature from conodont apatite implies an average value of around 30 °C in the Early Devonian. Early Devonian mean annual surface temperatures were approximately 16 °C. Atmospheric CO2 levels dropped steeply throughout the Devonian Period: the newly evolved forests drew carbon out of the atmosphere, which was then buried in sediments. This may be reflected by a Mid-Devonian cooling of around 5 °C. The Late Devonian warmed to levels equivalent to the Early Devonian; while there is no corresponding increase in CO2 concentrations, continental weathering increased (as predicted by warmer temperatures), and a range of evidence, such as plant distribution, points to a Late Devonian warming. The climate would have affected the dominant organisms in reefs: microbes would have been the main reef-forming organisms in warm periods, with corals and stromatoporoid sponges taking the dominant role in cooler times. The warming at the end of the Devonian may even have contributed to the extinction of the stromatoporoids. At the terminus of the Devonian, Earth rapidly cooled into an icehouse state, marking the beginning of the Late Paleozoic icehouse.
Paleogeography
The Devonian world involved many continents and ocean basins of various sizes. The largest continent, Gondwana, was located entirely within the Southern Hemisphere. It corresponds to modern day South America, Africa, Australia, Antarctica, and India, as well as minor components of North America and Asia. The second-largest continent, Laurussia, was northwest of Gondwana, and corresponds to much of modern-day North America and Europe. Various smaller continents, microcontinents, and terranes were present east of Laurussia and north of Gondwana, corresponding to parts of Europe and Asia. The Devonian Period was a time of great tectonic activity, as the major continents of Laurussia and Gondwana drew closer together.
Sea levels were high worldwide, and much of the land lay under shallow seas, where tropical reef organisms lived. The enormous "world ocean", Panthalassa, occupied much of the Northern Hemisphere as well as wide swathes east of Gondwana and west of Laurussia. Other minor oceans were the Paleo-Tethys Ocean and Rheic Ocean.
Laurussia
By the early Devonian, the continent Laurussia (also known as Euramerica) was fully formed through the collision of the continents Laurentia (modern day North America) and Baltica (modern day northern and eastern Europe). The tectonic effects of this collision continued into the Devonian, producing a string of mountain ranges along the southeastern coast of the continent. In present-day eastern North America, the Acadian Orogeny continued to raise the Appalachian Mountains. Further east, the collision also extended the rise of the Caledonian Mountains of Great Britain and Scandinavia. As the Caledonian Orogeny wound down in the later part of the period, orogenic collapse facilitated a cluster of granite intrusions in Scotland.
Most of Laurussia was located south of the equator, but in the Devonian it moved northwards and began to rotate counterclockwise towards its modern position. While the northernmost parts of the continent (such as Greenland and Ellesmere Island) established tropical conditions, most of the continent was located within the natural dry zone along the Tropic of Capricorn, which (as nowadays) is a result of the convergence of two great air masses, the Hadley cell and the Ferrel cell. In these near-deserts, the Old Red Sandstone sedimentary beds formed, made red by the oxidised iron (hematite) characteristic of drought conditions. The abundance of red sandstone on continental land also lends Laurussia the name "the Old Red Continent". For much of the Devonian, the majority of western Laurussia (North America) was covered by subtropical inland seas which hosted a diverse ecosystem of reefs and marine life. Devonian marine deposits are particularly prevalent in the midwestern and northeastern United States. Devonian reefs also extended along the southeast edge of Laurussia, a coastline now corresponding to southern England, Belgium, and other mid-latitude areas of Europe.
In the Early and Middle Devonian, the west coast of Laurussia was a passive margin with broad coastal waters, deep silty embayments, river deltas and estuaries, found today in Idaho and Nevada. In the Late Devonian, an approaching volcanic island arc reached the steep slope of the continental shelf and began to uplift deep water deposits. This minor collision sparked the start of a mountain-building episode called the Antler orogeny, which extended into the Carboniferous. Mountain building could also be found in the far northeastern extent of the continent, as minor tropical island arcs and detached Baltic terranes re-joined the continent. Deformed remnants of these mountains can still be found on Ellesmere Island and Svalbard. Many of the Devonian collisions in Laurussia produced both mountain chains and foreland basins, which are frequently fossiliferous.
Gondwana
Gondwana was by far the largest continent on the planet. It was completely south of the equator, although the northeastern sector (now Australia) did reach tropical latitudes. The southwestern sector (now South America) was located to the far south, with Brazil situated near the South Pole. The northwestern edge of Gondwana was an active margin for much of the Devonian, and saw the accretion of many smaller land masses and island arcs. These include Chilenia, Cuyania, and Chaitenia, which now form much of Chile and Patagonia. These collisions were associated with volcanic activity and plutons, but by the Late Devonian the tectonic situation had relaxed and much of South America was covered by shallow seas. These south polar seas hosted a distinctive brachiopod fauna, the Malvinokaffric Realm, which extended eastward to marginal areas now equivalent to South Africa and Antarctica. Malvinokaffric faunas even managed to approach the South Pole via a tongue of Panthalassa which extended into the Paraná Basin.
The northern rim of Gondwana was mostly a passive margin, hosting extensive marine deposits in areas such as northwest Africa and Tibet. The eastern margin, though warmer than the west, was equally active. Numerous mountain building events and granite and kimberlite intrusions affected areas equivalent to modern day eastern Australia, Tasmania, and Antarctica.
Asian terranes
Several island microcontinents (which would later coalesce into modern day Asia) stretched over a low-latitude archipelago to the north of Gondwana. They were separated from the southern continent by an oceanic basin: the Paleo-Tethys. Although the western Paleo-Tethys Ocean had existed since the Cambrian, the eastern part only began to rift apart as late as the Silurian. This process accelerated in the Devonian. The eastern branch of the Paleo-Tethys was fully opened when South China and Annamia (a terrane equivalent to most of Indochina), together as a unified continent, detached from the northeastern sector of Gondwana. Nevertheless, they remained close enough to Gondwana that their Devonian fossils were more closely related to Australian species than to north Asian species. Other Asian terranes remained attached to Gondwana, including Sibumasu (western Indochina), Tibet, and the rest of the Cimmerian blocks.
While the South China-Annamia continent was the newest addition to the Asian microcontinents, it was not the first. North China and the Tarim Block (now northwesternmost China) were located westward and continued to drift northwards, overriding older oceanic crust in the process. Further west was a small ocean (the Turkestan Ocean), followed by the larger microcontinents of Kazakhstania, Siberia, and Amuria. Kazakhstania was a volcanically active region during the Devonian, as it continued to assimilate smaller island arcs. The island arcs of the region, such as the Balkhash-West Junggar Arc, exhibited biological endemism as a consequence of their location.
Siberia was located just north of the equator as the largest landmass in the Northern Hemisphere. At the beginning of the Devonian, Siberia was inverted (upside down) relative to its modern orientation. Later in the period it moved northwards and began to twist clockwise, though it was not near its modern location. Siberia approached the eastern edge of Laurussia as the Devonian progressed, but it was still separated by a seaway, the Ural Ocean. Although Siberia's margins were generally tectonically stable and ecologically productive, rifting and deep mantle plumes impacted the continent with flood basalts during the Late Devonian. The Altai-Sayan region was shaken by volcanism in the Early and Middle Devonian, while Late Devonian magmatism was magnified further to produce the Vilyuy Traps, flood basalts which may have contributed to the Late Devonian Mass Extinction. The last major round of volcanism, the Yakutsk Large Igneous Province, continued into the Carboniferous to produce extensive kimberlite deposits.
Similar volcanic activity also affected the nearby microcontinent of Amuria (now Manchuria, Mongolia and their vicinities). Though certainly close to Siberia in the Devonian, the precise location of Amuria is uncertain due to contradictory paleomagnetic data.
Closure of the Rheic Ocean
The Rheic Ocean, which separated Laurussia from Gondwana, was wide at the start of the Devonian, having formed after the drift of Avalonia away from Gondwana. It steadily shrank as the period continued, as the two major continents approached each other near the equator in the early stages of the assembly of Pangaea. The closure of the Rheic Ocean began in the Devonian and continued into the Carboniferous. As the ocean narrowed, the endemic marine faunas of Gondwana and Laurussia combined into a single tropical fauna. The history of the western Rheic Ocean is a subject of debate, but there is good evidence that Rheic oceanic crust experienced intense subduction and metamorphism under Mexico and Central America.
The closure of the eastern part of the Rheic Ocean is associated with the assembly of central and southern Europe. In the early Paleozoic, much of Europe was still attached to Gondwana, including the terranes of Iberia, Armorica (France), Palaeo-Adria (the western Mediterranean area), Bohemia, Franconia, and Saxothuringia. These continental blocks, collectively known as the Armorican Terrane Assemblage, split away from Gondwana in the Silurian and drifted towards Laurussia through the Devonian. Their collision with Laurussia led to the beginning of the Variscan Orogeny, a major mountain-building event which would escalate further in the Late Paleozoic. Franconia and Saxothuringia collided with Laurussia near the end of the Early Devonian, pinching out the easternmost Rheic Ocean. The rest of the Armorican terranes followed, and by the end of the Devonian they were fully connected with Laurussia. This sequence of rifting and collision events led to the successive creation and destruction of several small seaways, including the Rheno-Hercynian, Saxo-Thuringian, and Galicia-Moldanubian oceans. Their sediments were eventually compressed and completely buried as Gondwana fully collided with Laurussia in the Carboniferous.
Life
Marine biota
Sea levels in the Devonian were generally high. Marine faunas continued to be dominated by conodonts, bryozoans, diverse and abundant brachiopods, the enigmatic hederellids, microconchids, and corals. Lily-like crinoids (animals, their resemblance to flowers notwithstanding) were abundant, and trilobites were still fairly common. Bivalves became commonplace in deep water and outer shelf environments. The first ammonites also appeared during or slightly before the early Devonian Period around 400 Ma. Bactritoids made their first appearance in the Early Devonian as well; their radiation, along with that of ammonoids, has been attributed by some authors to increased environmental stress resulting from decreasing oxygen levels in the deeper parts of the water column. Among vertebrates, jawless armored fish (ostracoderms) declined in diversity, while the jawed fish (gnathostomes) simultaneously increased in both the sea and fresh water. Armored placoderms were numerous during the early ages of the Devonian Period and became extinct in the Late Devonian, perhaps because of competition for food against the other fish species. Early cartilaginous (Chondrichthyes) and bony fishes (Osteichthyes) also became diverse and played a large role within the Devonian seas. The first abundant genus of cartilaginous fish, Cladoselache, appeared in the oceans during the Devonian Period. The great diversity of fish around at the time has led to the Devonian being given the name "The Age of Fishes" in popular culture.
The Devonian saw significant expansion in the diversity of nektonic marine life, driven by the abundance of planktonic microorganisms in the free water column as well as by high ecological competition in extremely saturated benthic habitats; this diversification has been labeled the Devonian Nekton Revolution by many researchers. However, other researchers have questioned whether this revolution existed at all. A 2018 study found that although the proportion of biodiversity constituted by nekton increased across the Silurian–Devonian boundary, it decreased across the span of the Devonian, particularly during the Pragian, and that the overall diversity of nektonic taxa did not increase significantly during the Devonian compared to other geologic periods; it was in fact higher during the intervals spanning from the Wenlock to the Lochkovian and from the Carboniferous to the Permian. The study's authors instead attribute the increased overall diversity of nekton in the Devonian to a broader, gradual trend of nektonic diversification across the entire Palaeozoic.
Reefs
A now-dry barrier reef, located in the present-day Kimberley Basin of northwest Australia, once extended 350 km (220 mi), fringing a Devonian continent. Reefs are generally built by various carbonate-secreting organisms that can erect wave-resistant structures near sea level. Although modern reefs are constructed mainly by corals and calcareous algae, Devonian reefs were either microbial reefs built up mostly by autotrophic cyanobacteria, or coral-stromatoporoid reefs built up by coral-like stromatoporoids and by tabulate and rugose corals. Microbial reefs dominated under the warmer conditions of the early and late Devonian, while coral-stromatoporoid reefs dominated during the cooler middle Devonian.
Terrestrial biota
By the Devonian Period, life was well underway in its colonization of the land. The moss forests and bacterial and algal mats of the Silurian were joined early in the period by primitive rooted plants that created the first stable soils and harbored arthropods like mites, scorpions, trigonotarbids, and myriapods (although arthropods had appeared on land much earlier than the Early Devonian, and the existence of fossils such as Protichnites suggests that amphibious arthropods may have appeared as early as the Cambrian). By far the largest land organism at the beginning of this period was the enigmatic Prototaxites, which was possibly the fruiting body of an enormous fungus, a rolled liverwort mat, or another organism of uncertain affinities; it stood more than 8 m (26 ft) tall and towered over the low, carpet-like vegetation during the early part of the Devonian. Also, the first possible fossils of insects appeared around 416 Ma, in the Early Devonian. Evidence for the earliest tetrapods takes the form of trace fossils in shallow lagoon environments within a marine carbonate platform/shelf during the Middle Devonian, although these traces have been questioned and an interpretation as fish feeding traces (Piscichnus) has been advanced.
The greening of land
Many Early Devonian plants did not have true roots or leaves like extant plants, although vascular tissue is observed in many of them. Some early land plants, such as Drepanophycus, likely spread by vegetative growth and spores. The earliest land plants, such as Cooksonia, consisted of leafless, dichotomous axes with terminal sporangia and were generally very short-statured, growing hardly more than a few centimetres tall. Fossils of Armoricaphyton chateaupannense, about 400 million years old, represent the oldest known plants with woody tissue. By the Middle Devonian, shrub-like forests of primitive plants existed: lycophytes, horsetails, ferns, and progymnosperms had evolved. Most of these plants had true roots and leaves, and many were quite tall. The earliest-known trees appeared in the Middle Devonian; these included a lineage of lycopods and another group of arborescent, woody vascular plants, the cladoxylopsids and the progymnosperm Archaeopteris. These tracheophytes were able to grow to large size on dry land because they had evolved the ability to biosynthesize lignin, which gave them physical rigidity, improved the effectiveness of their vascular system, and gave them resistance to pathogens and herbivores. In the Eifelian Age, cladoxylopsid trees formed the first forests in Earth's history. By the end of the Devonian, the first seed-forming plants had appeared. This rapid appearance of many plant groups and growth forms has been referred to as the Devonian Explosion or the Silurian-Devonian Terrestrial Revolution.
The 'greening' of the continents acted as a carbon sink, and atmospheric concentrations of carbon dioxide may have dropped. This may have cooled the climate and led to a massive extinction event. (See Late Devonian extinction).
Animals and the first soils
Primitive arthropods co-evolved with this diversified terrestrial vegetation structure. The evolving co-dependence of insects and seed plants that characterized a recognizably modern world had its genesis in the Late Devonian Epoch. The development of soils and plant root systems probably led to changes in the speed and pattern of erosion and sediment deposition. The rapid evolution of a terrestrial ecosystem that contained copious animals opened the way for the first vertebrates to seek terrestrial living. By the end of the Devonian, arthropods were solidly established on the land.
Late Devonian extinction
The Late Devonian extinction was not a single event, but rather a series of pulsed extinctions at the Givetian-Frasnian boundary, the Frasnian-Famennian boundary, and the Devonian-Carboniferous boundary. Together, these are considered one of the "Big Five" mass extinctions in Earth's history. The Devonian extinction crisis primarily affected the marine community, and selectively affected shallow warm-water organisms rather than cool-water organisms. The most important group to be affected by this extinction event was the reef-builders of the great Devonian reef systems.
Amongst the severely affected marine groups were the brachiopods, trilobites, ammonites, and acritarchs; the world also saw the disappearance of an estimated 96% of vertebrates, such as conodonts and bony fishes, and all of the ostracoderms and placoderms. Land plants, as well as freshwater species such as our tetrapod ancestors, were relatively unaffected by the Late Devonian extinction event (although there is a counterargument that the Devonian extinctions nearly wiped out the tetrapods).
The reasons for the Late Devonian extinctions are still unknown, and all explanations remain speculative. Canadian paleontologist Digby McLaren suggested in 1969 that the Devonian extinction events were caused by an asteroid impact. However, while there were Late Devonian collision events (see the Alamo bolide impact), little evidence supports the existence of a large enough Devonian crater.
| Physical sciences | Geological timescale | Earth science |
8005 | https://en.wikipedia.org/wiki/Dentistry | Dentistry | Dentistry, also known as dental medicine and oral medicine, is the branch of medicine focused on the teeth, gums, and mouth. It consists of the study, diagnosis, prevention, management, and treatment of diseases, disorders, and conditions of the mouth, most commonly focused on dentition (the development and arrangement of teeth) as well as the oral mucosa. Dentistry may also encompass other aspects of the craniofacial complex including the temporomandibular joint. The practitioner is called a dentist.
The history of dentistry is almost as ancient as the history of humanity and civilization, with the earliest evidence dating from 7000 BC to 5500 BC. Dentistry is thought to have been the first specialization in medicine which has gone on to develop its own accredited degree with its own specializations. Dentistry is often also understood to subsume the now largely defunct medical specialty of stomatology (the study of the mouth and its disorders and diseases) for which reason the two terms are used interchangeably in certain regions. However, some specialties such as oral and maxillofacial surgery (facial reconstruction) may require both medical and dental degrees to accomplish. In European history, dentistry is considered to have stemmed from the trade of barber surgeons.
Dental treatments are carried out by a dental team, which often consists of a dentist and dental auxiliaries (such as dental assistants, dental hygienists, dental technicians, and dental therapists). Most dentists either work in private practices (primary care), dental hospitals, or (secondary care) institutions (prisons, armed forces bases, etc.).
The modern movement of evidence-based dentistry calls for the use of high-quality scientific research and evidence to guide decision-making such as in manual tooth conservation, use of fluoride water treatment and fluoride toothpaste, dealing with oral diseases such as tooth decay and periodontitis, as well as systematic diseases such as osteoporosis, diabetes, celiac disease, cancer, and HIV/AIDS which could also affect the oral cavity. Other practices relevant to evidence-based dentistry include radiology of the mouth to inspect teeth deformity or oral malaises, haematology (study of blood) to avoid bleeding complications during dental surgery, cardiology (due to various severe complications arising from dental surgery with patients with heart disease), etc.
Terminology
The term dentistry comes from dentist, which comes from French dentiste, which comes from the French and Latin words for tooth. The term for the associated scientific study of teeth is odontology (from ) – the study of the structure, development, and abnormalities of the teeth.
Dental treatment
Dentistry usually encompasses practices related to the oral cavity. According to the World Health Organization, oral diseases are major public health problems due to their high incidence and prevalence across the globe, with the disadvantaged affected more than other socio-economic groups.
The majority of dental treatments are carried out to prevent or treat the two most common oral diseases, which are dental caries (tooth decay) and periodontal disease (gum disease or pyorrhea). Common treatments involve the restoration of teeth, extraction or surgical removal of teeth, scaling and root planing, endodontic root canal treatment, and cosmetic dentistry.
By nature of their general training, dentists without specialization can carry out the majority of dental treatments, such as restorative (fillings, crowns, bridges), prosthetic (dentures), endodontic (root canal) therapy, periodontal (gum) therapy, and extraction of teeth, as well as performing examinations, radiographs (x-rays), and diagnosis. Dentists can also prescribe medications used in the field such as antibiotics, sedatives, and any other drugs used in patient management. Depending on their licensing boards, general dentists may be required to complete additional training to perform sedation, dental implants, etc.
Dentists also encourage the prevention of oral diseases through proper hygiene and regular, twice or more yearly, checkups for professional cleaning and evaluation. Oral infections and inflammations may affect overall health and conditions in the oral cavity may be indicative of systemic diseases, such as osteoporosis, diabetes, celiac disease or cancer. Many studies have also shown that gum disease is associated with an increased risk of diabetes, heart disease, and preterm birth. The concept that oral health can affect systemic health and disease is referred to as "oral-systemic health".
Education and licensing
John M. Harris started the world's first dental school in Bainbridge, Ohio, and helped to establish dentistry as a health profession. It opened on 21 February 1828, and today is a dental museum. The first dental college, Baltimore College of Dental Surgery, opened in Baltimore, Maryland, US in 1840. The second in the United States was the Ohio College of Dental Surgery, established in Cincinnati, Ohio, in 1845. The Philadelphia College of Dental Surgery followed in 1852. In 1907, Temple University accepted a bid to incorporate the school.
Studies show that dentists that graduated from different countries, or even from different dental schools in one country, may make different clinical decisions for the same clinical condition. For example, dentists that graduated from Israeli dental schools may recommend the removal of asymptomatic impacted third molar (wisdom teeth) more often than dentists that graduated from Latin American or Eastern European dental schools.
In the United Kingdom, the first dental schools, the London School of Dental Surgery and the Metropolitan School of Dental Science, both in London, opened in 1859. The British Dentists Act of 1878 and the 1879 Dentists Register limited the title of "dentist" and "dental surgeon" to qualified and registered practitioners. However, others could legally describe themselves as "dental experts" or "dental consultants". The practice of dentistry in the United Kingdom became fully regulated with the 1921 Dentists Act, which required the registration of anyone practising dentistry. The British Dental Association, formed in 1880 with Sir John Tomes as president, played a major role in prosecuting dentists practising illegally. Dentists in the United Kingdom are now regulated by the General Dental Council.
In many countries, dentists usually complete between five and eight years of post-secondary education before practising. Though not mandatory, many dentists choose to complete an internship or residency focusing on specific aspects of dental care after they have received their dental degree. In a few countries, to become a qualified dentist one must usually complete at least four years of postgraduate study. Dental degrees awarded around the world include the Doctor of Dental Surgery (DDS) and Doctor of Dental Medicine (DMD) in North America (US and Canada), and the Bachelor of Dental Surgery/Baccalaureus Dentalis Chirurgiae (BDS, BDent, BChD, BDSc) in the UK and current and former British Commonwealth countries.
All dentists in the United States undergo at least three years of undergraduate studies, but nearly all complete a bachelor's degree. This schooling is followed by four years of dental school to qualify as a "Doctor of Dental Surgery" (DDS) or "Doctor of Dental Medicine" (DMD). Specialization in dentistry is available in the fields of Anesthesiology, Dental Public Health, Endodontics, Oral Radiology, Oral and Maxillofacial Surgery, Oral Medicine, Orofacial Pain, Pathology, Orthodontics, Pediatric Dentistry (Pedodontics), Periodontics, and Prosthodontics.
Specialties
Some dentists undertake further training after their initial degree in order to specialize. Exactly which subjects are recognized by dental registration bodies varies according to location. Examples include:
Anesthesiology – The specialty of dentistry that deals with the advanced use of general anesthesia, sedation and pain management to facilitate dental procedures.
Cosmetic dentistry – Focuses on improving the appearance of the mouth, teeth and smile.
Dental public health – The study of epidemiology and social health policies relevant to oral health.
Endodontics (also called endodontology) – Root canal therapy and study of diseases of the dental pulp and periapical tissues.
Forensic odontology – The gathering and use of dental evidence in law. This may be performed by any dentist with experience or training in this field. The function of the forensic dentist is primarily documentation and verification of identity.
Geriatric dentistry or geriodontics – The delivery of dental care to older adults involving the diagnosis, prevention, and treatment of problems associated with normal aging and age-related diseases as part of an interdisciplinary team with other health care professionals.
Oral and maxillofacial pathology – The study, diagnosis, and sometimes the treatment of oral and maxillofacial related diseases.
Oral and maxillofacial radiology – The study and radiologic interpretation of oral and maxillofacial diseases.
Oral and maxillofacial surgery (also called oral surgery) – Extractions, implants, and surgery of the jaws, mouth and face.
Oral biology – Research in dental and craniofacial biology
Oral Implantology – The art and science of replacing extracted teeth with dental implants.
Oral medicine – The clinical evaluation and diagnosis of oral mucosal diseases
Orthodontics and dentofacial orthopedics – The straightening of teeth and modification of midface and mandibular growth.
Pediatric dentistry (also called pedodontics) – Dentistry for children
Periodontology (also called periodontics) – The study and treatment of diseases of the periodontium (non-surgical and surgical) as well as placement and maintenance of dental implants
Prosthodontics (also called prosthetic dentistry) – Dentures, bridges and the restoration of implants.
Some prosthodontists super-specialize in maxillofacial prosthetics, which is the discipline originally concerned with the rehabilitation of patients with congenital facial and oral defects such as cleft lip and palate or patients born with an underdeveloped ear (microtia). Today, most maxillofacial prosthodontists return function and esthetics to patients with acquired defects secondary to surgical removal of head and neck tumors, or secondary to trauma from war or motor vehicle accidents.
Special needs dentistry (also called special care dentistry) – Dentistry for those with developmental and acquired disabilities.
Sports dentistry – the branch of sports medicine dealing with prevention and treatment of dental injuries and oral diseases associated with sports and exercise. The sports dentist works as an individual consultant or as a member of the Sports Medicine Team.
Veterinary dentistry – The field of dentistry applied to the care of animals. It is a specialty of veterinary medicine.
History
Tooth decay was low in pre-agricultural societies, but the advent of farming society about 10,000 years ago correlated with an increase in tooth decay (cavities). An infected tooth from Italy, partially cleaned with flint tools between 13,820 and 14,160 years ago, represents the oldest known dentistry, although a 2017 study suggests that Neanderthals were already using rudimentary dentistry tools 130,000 years ago. In Italy, evidence dated to the Paleolithic, around 13,000 years ago, points to bitumen used to fill a tooth, and in Neolithic Slovenia, 6,500 years ago, beeswax was used to close a fracture in a tooth – the earliest known dental filling. The Indus Valley has yielded evidence of dentistry being practised as far back as 7000 BC, during the Stone Age. The Neolithic site of Mehrgarh (now in Pakistan's southwestern province of Balochistan) indicates that this form of dentistry involved curing tooth-related disorders with bow drills operated, perhaps, by skilled bead-crafters. The reconstruction of this ancient form of dentistry showed that the methods used were reliable and effective. Dentistry was also practised in prehistoric Malta, as evidenced by a skull with a dental abscess lanced from the root of a tooth, dating back to around 2500 BC.
An ancient Sumerian text describes a "tooth worm" as the cause of dental caries. Evidence of this belief has also been found in ancient India, Egypt, Japan, and China. The legend of the worm is also found in the Homeric Hymns, and as late as the 14th century AD the surgeon Guy de Chauliac still promoted the belief that worms cause tooth decay.
Recipes for the treatment of toothache, infections and loose teeth are spread throughout the Ebers Papyrus, Kahun Papyri, Brugsch Papyrus, and Hearst papyrus of Ancient Egypt. The Edwin Smith Papyrus, written in the 17th century BC but which may reflect previous manuscripts from as early as 3000 BC, discusses the treatment of dislocated or fractured jaws. In the 18th century BC, the Code of Hammurabi referenced dental extraction twice as it related to punishment. Examination of the remains of some ancient Egyptians and Greco-Romans reveals early attempts at dental prosthetics. However, it is possible the prosthetics were prepared after death for aesthetic reasons.
Ancient Greek scholars Hippocrates and Aristotle wrote about dentistry, including the eruption pattern of teeth, treating decayed teeth and gum disease, extracting teeth with forceps, and using wires to stabilize loose teeth and fractured jaws. Use of dental appliances, bridges and dentures was applied by the Etruscans in northern Italy, from as early as 700 BC, of human or other animal teeth fastened together with gold bands. The Romans had likely borrowed this technique by the 5th century BC. The Phoenicians crafted dentures during the 6th–4th century BC, fashioning them from gold wire and incorporating two ivory teeth. In ancient Egypt, Hesy-Ra is the first named "dentist" (greatest of the teeth). The Egyptians bound replacement teeth together with gold wire. Roman medical writer Cornelius Celsus wrote extensively of oral diseases as well as dental treatments such as narcotic-containing emollients and astringents. The earliest dental amalgams were first documented in a Tang dynasty medical text written by the Chinese physician Su Kung in 659, and appeared in Germany in 1528.
During the Islamic Golden Age, dentistry was discussed in several famous books of medicine, such as The Canon of Medicine written by Avicenna and Al-Tasreef by Al-Zahrawi, who is considered the greatest surgeon of the Middle Ages. Avicenna said that jaw fractures should be reduced according to the occlusal guidance of the teeth; this principle is still valid in modern times. Al-Zahrawi invented over 200 surgical tools that resemble their modern counterparts.
Historically, dental extractions have been used to treat a variety of illnesses. During the Middle Ages and throughout the 19th century, dentistry was not a profession in itself, and often dental procedures were performed by barbers or general physicians. Barbers usually limited their practice to extracting teeth which alleviated pain and associated chronic tooth infection. Instruments used for dental extractions date back several centuries. In the 14th century, Guy de Chauliac most probably invented the dental pelican (resembling a pelican's beak) which was used to perform dental extractions up until the late 18th century. The pelican was replaced by the dental key which, in turn, was replaced by modern forceps in the 19th century.
The first book focused solely on dentistry was the "Artzney Buchlein" in 1530, and the first dental textbook written in English was called "Operator for the Teeth" by Charles Allen in 1685.
In the United Kingdom, there was no formal qualification for the providers of dental treatment until 1859, and it was only in 1921 that the practice of dentistry was limited to those who were professionally qualified. The Royal Commission on the National Health Service in 1979 reported that there were then more than twice as many registered dentists per 10,000 population in the UK as there had been in 1921.
Modern dentistry
It was between 1650 and 1800 that the science of modern dentistry developed. The English physician Thomas Browne, in his A Letter to a Friend (pub. 1690), made an early dental observation with characteristic humour.
The French surgeon Pierre Fauchard became known as the "father of modern dentistry". Despite the limitations of the primitive surgical instruments during the late 17th and early 18th century, Fauchard was a highly skilled surgeon who made remarkable improvisations of dental instruments, often adapting tools from watchmakers, jewelers and even barbers, that he thought could be used in dentistry. He introduced dental fillings as treatment for dental cavities. He asserted that sugar-derived acids like tartaric acid were responsible for dental decay, and also suggested that tumors surrounding the teeth and in the gums could appear in the later stages of tooth decay.
Fauchard was the pioneer of dental prosthesis, and he invented many methods to replace lost teeth. He suggested that substitutes could be made from carved blocks of ivory or bone. He also introduced dental braces; although they were initially made of gold, he discovered that tooth positions could be corrected, as the teeth would follow the pattern of the wires. Waxed linen or silk threads were usually employed to fasten the braces. His contributions to the world of dental science consist primarily of his 1728 publication Le chirurgien dentiste or The Surgeon Dentist. The French text included "basic oral anatomy and function, dental construction, and various operative and restorative techniques, and effectively separated dentistry from the wider category of surgery".
After Fauchard, the study of dentistry rapidly expanded. Two important books, Natural History of Human Teeth (1771) and Practical Treatise on the Diseases of the Teeth (1778), were published by British surgeon John Hunter. In 1763, he entered into a period of collaboration with the London-based dentist James Spence. He began to theorise about the possibility of tooth transplants from one person to another. He realised that the chances of a successful tooth transplant (initially, at least) would be improved if the donor tooth was as fresh as possible and was matched for size with the recipient. These principles are still used in the transplantation of internal organs. Hunter conducted a series of pioneering operations, in which he attempted a tooth transplant. Although the donated teeth never properly bonded with the recipients' gums, one of Hunter's patients stated that he had three which lasted for six years, a remarkable achievement for the period.
Major advances in science were made in the 19th century, and dentistry evolved from a trade to a profession. The profession came under government regulation by the end of the 19th century. In the UK, the Dentist Act was passed in 1878 and the British Dental Association formed in 1879. In the same year, Francis Brodie Imlach was the first ever dentist to be elected President of the Royal College of Surgeons (Edinburgh), raising dentistry onto a par with clinical surgery for the first time.
Hazards in modern dentistry
Long-term occupational noise exposure can contribute to permanent hearing loss, referred to as noise-induced hearing loss (NIHL), and tinnitus. Noise exposure can cause excessive stimulation of the hearing mechanism, which damages the delicate structures of the inner ear. According to the Occupational Safety and Health Administration (OSHA), NIHL can occur when an individual is exposed to sound levels above 90 dBA; OSHA regulations set the permissible exposure level at 90 dBA. The National Institute for Occupational Safety and Health (NIOSH) sets its recommended exposure limit at 85 dBA, with exposures below 85 dBA not considered hazardous. Time limits are placed on how long an individual can stay in an environment above these levels before risking hearing loss: NIOSH places that limitation at 8 hours for 85 dBA, while OSHA permits 8 hours at 90 dBA. The permissible exposure time becomes shorter as the sound level increases.
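Both limits follow a halving rule: each increase of one "exchange rate" (5 dB under OSHA, 3 dB under NIOSH) halves the allowed duration. The following Python sketch illustrates that arithmetic using the published OSHA and NIOSH criteria; it is illustrative only, not regulatory guidance:

# Permissible noise exposure durations, a minimal sketch.
# OSHA: 90 dBA criterion, 5-dB exchange rate; NIOSH: 85 dBA, 3-dB exchange rate.

def osha_hours(level_dba: float) -> float:
    """Permissible exposure time in hours under the OSHA rule."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

def niosh_hours(level_dba: float) -> float:
    """Recommended exposure time in hours under the NIOSH rule."""
    return 8.0 / 2 ** ((level_dba - 85.0) / 3.0)

print(osha_hours(95))   # 4.0 hours: 5 dB above the criterion halves the time
print(niosh_hours(88))  # 4.0 hours: 3 dB above the criterion halves the time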
Within the field of dentistry, a variety of cleaning tools are used including piezoelectric and sonic scalers, and ultrasonic scalers and cleaners. While a majority of the tools do not exceed 75 dBA, prolonged exposure over many years can lead to hearing loss or complaints of tinnitus. Few dentists have reported using personal hearing protective devices, which could offset any potential hearing loss or tinnitus.
Evidence-based dentistry
There is a movement in modern dentistry to place a greater emphasis on high-quality scientific evidence in decision-making. Evidence-based dentistry (EBD) uses current scientific evidence to guide decisions. It is an approach to oral health that requires the application and examination of relevant scientific data related to the patient's oral and medical health. Along with the dentist's professional skill and expertise, EBD allows dentists to stay up to date on the latest procedures and patients to receive improved treatment. A new paradigm for medical education designed to incorporate current research into education and practice was developed to help practitioners provide the best care for their patients. It was first introduced by Gordon Guyatt and the Evidence-Based Medicine Working Group at McMaster University in Ontario, Canada in the 1990s. It is part of the larger movement toward evidence-based medicine and other evidence-based practices, especially since a major part of dentistry involves dealing with oral and systemic diseases. Other issues relevant to the dental field in terms of evidence-based research and evidence-based practice include population oral health, dental clinical practice, tooth morphology etc.
Ethical and medicolegal issues
Dentistry is unique in that it requires dental students to have competence-based clinical skills that can only be acquired through supervised specialized laboratory training and direct patient care. This necessitates a scientific and professional basis of care with a foundation of extensive research-based education. According to some experts, the accreditation of dental schools can enhance the quality and professionalism of dental education.
| Biology and health sciences | Health, fitness, and medicine | null |
8011 | https://en.wikipedia.org/wiki/Alcohol%20intoxication | Alcohol intoxication | Alcohol intoxication, commonly described in higher doses as drunkenness or inebriation, and known in overdose as alcohol poisoning, is the behavior and physical effects caused by recent consumption of alcohol. The technical term intoxication in common speech may suggest that a large amount of alcohol has been consumed, leading to accompanying physical symptoms and deleterious health effects. Mild intoxication is mostly referred to by slang terms such as tipsy or buzzed. In addition to the toxicity of ethanol, the main psychoactive component of alcoholic beverages, other physiological symptoms may arise from the activity of acetaldehyde, a metabolite of alcohol. These effects may not arise until hours after ingestion and may contribute to a condition colloquially known as a hangover.
Symptoms of intoxication at lower doses may include mild sedation and poor coordination. At higher doses, there may be slurred speech, trouble walking, and vomiting. Extreme doses may result in respiratory depression, coma, or death. Complications may include seizures, aspiration pneumonia, low blood sugar, and injuries or self-harm such as suicide. Alcohol intoxication can lead to alcohol-related crime, with perpetrators more likely to be intoxicated than victims.
Alcohol intoxication typically begins after two or more alcoholic drinks. Alcohol has the potential for abuse. Risk factors include a social situation where heavy drinking is common and a person having an impulsive personality. Diagnosis is usually based on the history of events and physical examination. Verification of events by witnesses may be useful. Legally, alcohol intoxication is often defined as a blood alcohol concentration (BAC) of greater than 5.4–17.4 mmol/L (25–80 mg/dL or 0.025–0.080%). This can be measured by blood or breath testing. Alcohol is broken down in the human body at a rate of about 3.3 mmol/L (15 mg/dL) per hour, depending on an individual's metabolic rate (metabolism). The DSM-5 defines alcohol intoxication as at least one of the following symptoms that developed during or close after alcohol ingestion: slurred speech, incoordination, unsteady walking/movement, nystagmus (uncontrolled eye movement), attention or memory impairment, or near unconsciousness or coma.
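To make the unit relationships above concrete, the following Python sketch converts between the three BAC units quoted (percent, mg/dL, and mmol/L). The molar-mass constant is the standard value for ethanol, and the printed figures reproduce the 25–80 mg/dL and 5.4–17.4 mmol/L ranges given above:

# BAC unit conversions, checking the figures quoted in the text.
# Here "%" means grams of ethanol per dL of blood (w/v), so 0.080% = 80 mg/dL.

ETHANOL_MOLAR_MASS = 46.07  # g/mol, standard value for ethanol

def percent_to_mgdl(bac_percent: float) -> float:
    return bac_percent * 1000.0  # 1% = 1 g/dL = 1000 mg/dL

def mgdl_to_mmoll(mgdl: float) -> float:
    return mgdl * 10.0 / ETHANOL_MOLAR_MASS  # mg/dL -> mg/L -> mmol/L

for pct in (0.025, 0.080):
    mgdl = percent_to_mgdl(pct)
    print(f"{pct:.3f}% = {mgdl:.0f} mg/dL = {mgdl_to_mmoll(mgdl):.1f} mmol/L")
    # Prints 25 mg/dL = 5.4 mmol/L and 80 mg/dL = 17.4 mmol/L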
Management of alcohol intoxication involves supportive care. Typically this includes putting the person in the recovery position, keeping the person warm, and making sure breathing is sufficient. Gastric lavage and activated charcoal have not been found to be useful. Repeated assessments may be required to rule out other potential causes of a person's symptoms.
Acute intoxication has been documented throughout history, and alcohol remains one of the world's most widespread recreational drugs. Some religions, such as Islam, consider alcohol intoxication to be a sin.
Symptoms
Vomiting
Slow breathing (fewer than eight breaths per minute)
Seizures
Blue, grey, or pale skin
Hypothermia (low body temperature)
Lethargy (trouble staying conscious)
Alcohol intoxication leads to negative health effects due to the recent drinking of a large amount of ethanol (alcohol). When severe, it may become a medical emergency. Some effects of alcohol intoxication, such as euphoria and lowered social inhibition, are central to alcohol's desirability.
As drinking increases, people become sleepy or fall into a stupor. At very high blood alcohol concentrations, for example above 0.3%, the respiratory system becomes depressed and the person may stop breathing. Comatose patients may aspirate their vomit (resulting in vomitus in the lungs, which may cause "drowning" and later pneumonia if survived). CNS depression and impaired motor coordination along with poor judgment increase the likelihood of accidental injury occurring. It is estimated that about one-third of alcohol-related deaths are due to accidents and another 14% are from intentional injury.
In addition to respiratory failure and accidents caused by its effects on the central nervous system, alcohol causes significant metabolic derangements. Hypoglycaemia occurs due to ethanol's inhibition of gluconeogenesis, especially in children, and may cause lactic acidosis, ketoacidosis, and acute kidney injury. Metabolic acidosis is compounded by respiratory failure. Patients may also present with hypothermia.
Pathophysiology
Alcohol is metabolized by a normal liver at the rate of about 8 grams of pure ethanol per hour; 8 grams (about 10 ml) is one British standard unit. An "abnormal" liver with conditions such as hepatitis, cirrhosis, gall bladder disease, or cancer is likely to have a slower rate of metabolism.
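Since one British unit is 8 g of ethanol and an average liver clears roughly 8 g per hour, clearance time in hours is roughly the number of units consumed. A trivial sketch of that arithmetic, assuming the average rate quoted above (individual metabolic rates vary widely, so this is not a sobriety calculator):

# Time to metabolize a given number of British units (1 unit = 8 g ethanol),
# assuming the average clearance rate of ~8 g/hour quoted in the text.

GRAMS_PER_UK_UNIT = 8.0

def hours_to_metabolize(units: float, rate_g_per_h: float = 8.0) -> float:
    return units * GRAMS_PER_UK_UNIT / rate_g_per_h

print(hours_to_metabolize(4))     # 4.0 hours for four units at the average rate
print(hours_to_metabolize(4, 6))  # ~5.3 hours with a slower ("abnormal") liver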
Diagnosis
Alcohol intoxication is described as a mental and behavioural disorder by the International Classification of Diseases (ICD-10). Definitive diagnosis relies on a blood test for alcohol, usually performed as part of a toxicology screen. Law enforcement officers in the United States and other countries often use breathalyzer units and field sobriety tests as more convenient and rapid alternatives to blood tests. There are also various models of breathalyzer units available for consumer use. Because these may have varying reliability and may produce different results than the tests used for law-enforcement purposes, the results from such devices should be interpreted conservatively.
Many informal intoxication tests exist, which, in general, are unreliable and not recommended as deterrents to excessive intoxication or as indicators of the safety of activities such as motor vehicle driving, heavy equipment operation, machine tool use, etc.
For determining whether someone is intoxicated by alcohol by some means other than a blood-alcohol test, it is necessary to rule out other conditions such as hypoglycemia, stroke, usage of other intoxicants, mental health issues, and so on. It is best if their behavior has been observed while the subject is sober to establish a baseline. Several well-known criteria can be used to establish a probable diagnosis. For a physician in the acute-treatment setting, acute alcohol intoxication can mimic other acute neurological disorders or is frequently combined with other recreational drugs that complicate diagnosis and treatment.
Management
Acute alcohol poisoning is a medical emergency due to the risk of death from respiratory depression or aspiration of vomit if vomiting occurs while the person is unresponsive. Emergency treatment strives to stabilize and maintain an open airway and sufficient breathing while waiting for the alcohol to metabolize. This can be done by removal of any vomit or, if the person is unconscious or has impaired gag reflex, intubation of the trachea.
Other measures may include:
Administration of the vitamin thiamine to prevent Wernicke–Korsakoff syndrome, which can cause a seizure (more usually a treatment for chronic alcoholism, but in the acute context usually co-administered to ensure maximal benefit).
Hemodialysis if the blood concentration is very high at >130 mmol/L (>600 mg/dL).
Oxygen therapy as needed via nasal cannula or non-rebreather mask.
Intravenous fluids in cases involving hypoglycemia and electrolyte imbalance.
While the medication metadoxine may speed the breakdown of alcohol, use in alcohol intoxication requires further study as of 2017. It is approved in a number of countries in Europe, as well as India and Brazil.
Additional medication may be indicated for treatment of nausea, tremor, and anxiety.
Clinical findings
Hospital admissions
Alcohol intoxication was found to be prevalent in clinical populations within the United States involving people treated for trauma, particularly among those aged 18–24 (in a study covering the years 1999–2004). In the United States during the years 2010–2012, acute intoxication was found to be the direct cause of an average of 2,221 deaths per year among those aged 15 or older. The same mortality route is thought to indirectly cause more than 30,000 deaths per year.
Prognosis
A normal liver detoxifies the blood of alcohol over a period of time that depends on the initial level and the patient's overall physical condition. An abnormal liver will take longer but still succeeds, provided the alcohol does not cause liver failure.
People having drunk heavily for several days or weeks may have withdrawal symptoms after the acute intoxication has subsided.
A person consuming a dangerous amount of alcohol persistently can develop memory blackouts and idiosyncratic intoxication or pathological drunkenness symptoms. Long-term persistent consumption of excessive amounts of alcohol can cause liver damage and have other deleterious health effects.
Society and culture
Alcohol intoxication is a risk factor in some cases of catastrophic injury, in particular for unsupervised recreational activity. A study in the province of Ontario based on epidemiological data from 1986, 1989, 1992, and 1995 states that 79.2% of the 2,154 catastrophic injuries recorded for the study were preventable, of which 346 (17%) involved alcohol consumption. The activities most commonly associated with alcohol-related catastrophic injury were snowmobiling (124), fishing (41), diving (40), boating (31) and canoeing (7), swimming (31), riding an all-terrain vehicle (24), and cycling (23). These events are often associated with unsupervised young males, often inexperienced in the activity, and may result in drowning. Alcohol use is also associated with unsafe sex.
Legal issues
Laws on drunkenness vary. In the United States, it is a criminal offense for a person to be drunk while driving a motorized vehicle, except in Wisconsin, where it is only a fine for the first offense. It is also a criminal offense to fly an aircraft or (in some American states) to assemble or operate an amusement park ride while drunk. Similar laws also exist in the United Kingdom and most other countries.
In some jurisdictions, it is also an offense to serve alcohol to an already-intoxicated person, and, often, alcohol can only be sold by persons qualified to serve responsibly through alcohol server training.
The blood alcohol content (BAC) for legal operation of a vehicle is typically measured as a percentage of a unit volume of blood. This percentage ranges from 0.00% in Romania and the United Arab Emirates; to 0.05% in Australia, South Africa, Germany, Scotland, and New Zealand (0.00% for underage individuals); to 0.08% in England and Wales, the United States and Canada.
The United States Federal Aviation Administration prohibits crew members from performing their duties within eight hours of consuming an alcoholic beverage, while under the influence of alcohol, or with a BAC greater than 0.04%.
In the United States, the United Kingdom, and Australia, public intoxication is a crime (also known as "being drunk and disorderly" or "being drunk and incapable").
In some countries, there are special facilities, sometimes known as "drunk tanks", for the temporary detention of persons found to be drunk.
Religious views
Some religious groups permit the consumption of alcohol; some permit consumption but prohibit intoxication; others prohibit any amount of alcohol consumption altogether.
Christianity
Most denominations of Christianity, such as Catholicism, Orthodoxy and Lutheranism, use wine as a part of the Eucharist (Holy Communion) and permit its consumption, but consider it sinful to become intoxicated.
Romans 13:13–14, 1 Corinthians 6:9–11, Galatians 5:19–21 and Ephesians 5:18 are among a number of other Bible passages that speak against intoxication.
Some Protestant Christian denominations prohibit the consumption of alcohol based upon biblical passages that condemn drunkenness, but others allow a moderate rate of consumption.
In the Church of Jesus Christ of Latter-day Saints, alcohol consumption is forbidden, and teetotalism has become a distinguishing feature of its members. Jehovah's Witnesses allow moderate alcohol consumption among its members.
Islam
In the Quran, there is a prohibition on the consumption of grape-based alcoholic beverages, and intoxication is considered an abomination in the hadith of Muhammad. The schools of thought of Islamic jurisprudence have interpreted this as a strict prohibition of the consumption of all types of alcohol and declared it to be haram (), although other uses may be permitted.
Buddhism
In Buddhism, in general, the consumption of intoxicants is discouraged for both monastics and lay followers. Many Buddhists observe a basic code of ethics known as the five precepts, of which the fifth precept is an undertaking to refrain from the consumption of intoxicating substances (except for medical reasons). In the bodhisattva vows of the Brahmajala Sutra, observed by Mahayana Buddhist communities, distribution of intoxicants is likewise discouraged, as well as consumption.
Hinduism
In the Gaudiya Vaishnavism branch of Hinduism, one of the four regulative principles forbids the taking of intoxicants, including alcohol.
Judaism
In the Bible, the Book of Proverbs contains several chapters related to the negative effects of drunkenness and warns to stay away from intoxicating beverages. The Book of Genesis refers to the use of wine by Lot's daughters to rape him. The story of Samson in the Book of Judges tells of a man from the Israelite tribe of Dan who, as a Nazirite, is prohibited from cutting his hair and drinking wine. Proverbs 31:4 warns against kings and other rulers drinking wine and similar alcoholic beverages, while Proverbs 31:6–7 promotes giving such beverages to the perishing, and wine to those whose lives are bitter, as a coping mechanism against the likes of poverty and other troubles.
In Judaism, in accordance with the biblical stance against drunkenness, the drinking of wine is restricted for priests. The biblical command to sanctify the Sabbath and other holidays has been interpreted as requiring three ceremonial meals with wine or grape juice, known as Kiddush. A number of Jewish marriage ceremonies end with the bride and groom drinking a shared cup of wine after reciting seven blessings; this occurs after a fasting day in some Ashkenazi traditions. It has been customary, and in many cases even mandated, to drink only moderately so as to stay sober, and only after the prayers are over.
During the Seder on Passover, there is an obligation to drink four ceremonial cups of wine while reciting the Haggadah. This has been suggested as the source of the wine-drinking ritual at communion in some Christian groups. During Purim, there is an obligation to become intoxicated; however, as with many other decrees, this has been avoided in many communities by allowing sleep during the day as a replacement.
During the U.S. Prohibition era in the 1920s, a rabbi from the Reform Judaism movement proposed using grape juice for the ritual instead of wine. Although initially rejected, the practice became widely accepted by Orthodox Jews as well.
Other animals
In the film Animals Are Beautiful People, an entire section was dedicated to showing many different animals, including monkeys, elephants, hogs, giraffes, and ostriches, eating over-ripe marula tree fruit, which caused them to sway and lose their footing in a manner similar to human drunkenness. Birds may become intoxicated with fermented berries, and some die after colliding with hard objects when flying under the influence.
In elephant warfare, practiced by the Greeks during the Maccabean revolt and by Hannibal during the Punic wars, it has been recorded that the elephants were given wine before the attack, and would charge forward only after being agitated by their drivers.
It is a regular practice to give small amounts of beer to race horses in Ireland. Ruminant farm animals have natural fermentation occurring in their stomach, and adding alcoholic beverages in small amounts to their drink will generally do them no harm, and will not cause them to become drunk.
Alcoholic beverages are extremely harmful to dogs, often because of additives such as xylitol, an artificial sweetener used in some mixers. Dogs can absorb ethyl alcohol in dangerous amounts through their skin as well as by drinking the liquid or consuming it in foods. Even fermenting bread dough can be dangerous to dogs. In 1999, one of the royal footmen for Britain's Queen Elizabeth II was demoted from Buckingham Palace for his "party trick" of spiking the meals and drinks of the Queen's pet corgis with alcohol, which would lead the dogs to run around drunk.
| Biology and health sciences | Drugs and pharmacology | null |
8013 | https://en.wikipedia.org/wiki/Data%20compression | Data compression | In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.
The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding is done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding (for error detection and correction) or line coding (the means for mapping data onto a signal).
Data compression algorithms present a space-time complexity trade-off between the bytes needed to store or transmit information and the computational resources needed to perform the encoding and decoding. The design of data compression schemes involves balancing the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources or time required to compress and decompress the data.
Lossless
Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy.
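The run-length idea can be made concrete in a few lines of code. The following Python sketch is illustrative only (real formats pack the runs into a compact binary form rather than Python tuples):

```python
# Minimal run-length encoding sketch (illustrative, not a production codec).
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated symbols into (symbol, count) pairs."""
    runs = []
    for symbol in data:
        if runs and runs[-1][0] == symbol:
            runs[-1] = (symbol, runs[-1][1] + 1)
        else:
            runs.append((symbol, 1))
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Expand (symbol, count) pairs back into the original string."""
    return "".join(symbol * count for symbol, count in runs)

pixels = "RRRRRRRRRGB"            # 9 red pixels, then green and blue
encoded = rle_encode(pixels)      # [('R', 9), ('G', 1), ('B', 1)]
assert rle_decode(encoded) == pixels
```

The scheme only pays off when runs are long; on data without repetition it can even expand the input, which is why practical formats fall back to literal encoding.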
The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. In the mid-1980s, following work by Terry Welch, the Lempel–Ziv–Welch (LZW) algorithm rapidly became the method of choice for most general-purpose compression systems. LZW is used in GIF images, programs such as PKZIP, and hardware devices such as modems. LZ methods use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded. A related family, grammar-based codes, can compress highly repetitive input extremely effectively, for instance, a biological data collection of the same or closely related species, a huge versioned document collection, internet archives, etc. The basic task of grammar-based codes is constructing a context-free grammar that derives a single string. Practical grammar compression algorithms include Sequitur and Re-Pair.
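As a rough illustration of the table-based approach described above, here is a minimal LZW encoder in Python. The dictionary is built dynamically from earlier input, exactly as the text describes; packing the codes into bits and the matching decoder are omitted:

```python
def lzw_compress(text: str) -> list[int]:
    """Dictionary-based LZW sketch: emit the code for the longest
    already-seen string, then add that string plus the next character
    to the table."""
    table = {chr(i): i for i in range(256)}   # start with all single bytes
    next_code = 256
    current = ""
    codes = []
    for ch in text:
        candidate = current + ch
        if candidate in table:
            current = candidate           # keep growing the match
        else:
            codes.append(table[current])  # emit code for longest match
            table[candidate] = next_code  # learn the new string
            next_code += 1
            current = ch
    if current:
        codes.append(table[current])
    return codes

print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))
```

Note how repeated substrings such as "TOBEOR" are emitted as single codes on their second and third occurrences, which is where the compression comes from.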
The strongest modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of statistical modelling. In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled to an algorithm called arithmetic coding. Arithmetic coding is a more modern coding technique that uses the mathematical calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols. It can achieve superior compression compared to other techniques such as the better-known Huffman algorithm. It uses an internal memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary and are context-dependent, as it can be easily coupled with an adaptive model of the probability distribution of the input data. An early example of the use of arithmetic coding was in an optional (but not widely used) feature of the JPEG image coding standard. It has since been applied in various other designs including H.263, H.264/MPEG-4 AVC and HEVC for video coding.
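A complete arithmetic coder is too long to show compactly, but the better-known Huffman algorithm mentioned above, which assigns a whole number of bits to each symbol, can be sketched in a few lines. This illustrative Python version merges partial codebooks on a heap rather than building an explicit tree (it assumes at least two distinct symbols):

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict[str, str]:
    """Build a Huffman code: repeatedly merge the two least frequent
    subtrees; the two branches of each merge prepend '0' and '1'."""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)          # keeps tuple comparison away from the dicts
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_code("abracadabra")
# frequent symbols such as 'a' receive shorter codewords than rare ones
print(codes)
```

The integer-bits-per-symbol restriction visible here is precisely what arithmetic coding avoids, which is why it can compress closer to the entropy limit.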
Archive software typically has the ability to adjust the "dictionary size", where a larger size demands more random-access memory during compression and decompression, but compresses stronger, especially on repeating patterns in files' content.
Lossy
In the late 1980s, digital images became more common, and standards for lossless image compression emerged. In the early 1990s, lossy compression methods began to be widely used. In these schemes, some loss of information is accepted as dropping nonessential detail can save storage space. There is a corresponding trade-off between preserving information and reducing size. Lossy data compression schemes are designed by research on how people perceive the data in question. For example, the human eye is more sensitive to subtle variations in luminance than it is to the variations in color. JPEG image compression works in part by rounding off nonessential bits of information. A number of popular compression formats exploit these perceptual differences, including psychoacoustics for sound, and psychovisuals for images and video.
Most forms of lossy compression are based on transform coding, especially the discrete cosine transform (DCT). It was first proposed in 1972 by Nasir Ahmed, who then developed a working algorithm with T. Natarajan and K. R. Rao in 1973, before introducing it in January 1974. DCT is the most widely used lossy compression method, and is used in multimedia formats for images (such as JPEG and HEIF), video (such as MPEG, AVC and HEVC) and audio (such as MP3, AAC and Vorbis).
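The DCT itself is a simple linear transform. The following sketch evaluates the DCT-II definition directly with NumPy; it is an O(N²) illustration, whereas production codecs use fast factorizations and two-dimensional block transforms:

```python
import numpy as np

def dct_ii(x: np.ndarray) -> np.ndarray:
    """Direct evaluation of the DCT-II definition:
    X_k = sum_n x_n * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.cos(np.pi / N * (n + 0.5) * k) @ x

signal = np.array([8.0, 8.0, 8.0, 8.0, 7.0, 7.0, 7.0, 7.0])
coeffs = dct_ii(signal)
print(coeffs)
# Energy concentrates in the first few coefficients; a lossy codec would
# quantize or discard the small high-frequency terms.
```

The concentration of energy into a few low-frequency coefficients is the property that makes the DCT so effective for lossy coding.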
Lossy image compression is used in digital cameras to increase storage capacities. Similarly, DVDs, Blu-ray and streaming video use lossy video coding formats. Lossy compression is extensively used in video.
In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of the audio signal. Compression of human speech is often performed with even more specialized techniques; speech coding is distinguished as a separate discipline from general-purpose audio compression. Speech coding is used in internet telephony, for example, while audio compression is used for CD ripping and is decoded by audio players.
Lossy compression can cause generation loss.
Theory
The theoretical basis for compression is provided by information theory and, more specifically, Shannon's source coding theorem; domain-specific theories include algorithmic information theory for lossless compression and rate–distortion theory for lossy compression. These areas of study were essentially created by Claude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Other topics associated with compression include coding theory and statistical inference.
Machine learning
There is a close connection between machine learning and compression. A system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding on the output distribution). Conversely, an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence".
An alternative view holds that compression algorithms implicitly map strings into implicit feature space vectors, and that compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) an associated vector space ℵ is defined, such that C(.) maps an input string x to a feature vector with norm ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is precluded by space; instead, this view typically examines three representative lossless compression methods: LZW, LZ77, and PPM.
According to AIXI theory, a connection more directly explained in the Hutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since one cannot decompress the data without both, but there may be an even smaller combined form.
Examples of AI-powered audio/video compression software include NVIDIA Maxine, AIVC. Examples of software that can perform AI-powered image compression include OpenCV, TensorFlow, MATLAB's Image Processing Toolbox (IPT) and High-Fidelity Generative Image Compression.
In unsupervised machine learning, k-means clustering can be utilized to compress data by grouping similar data points into clusters. This technique simplifies handling extensive datasets that lack predefined labels and finds widespread use in fields such as image compression.
Data compression aims to reduce the size of data files, enhancing storage efficiency and speeding up data transmission. K-means clustering, an unsupervised machine learning algorithm, is employed to partition a dataset into a specified number of clusters, k, each represented by the centroid of its points. This process condenses extensive datasets into a more compact set of representative points. Particularly beneficial in image and signal processing, k-means clustering aids in data reduction by replacing groups of data points with their centroids, thereby preserving the core information of the original data while significantly decreasing the required storage space.
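A minimal sketch of this idea, assuming RGB pixels stored as an N×3 NumPy array, replaces each pixel with the centroid of its cluster, shrinking the palette to k representative colors:

```python
import numpy as np

def kmeans_quantize(pixels: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Toy k-means color quantization: replace each pixel with the
    centroid of its cluster, reducing the palette to k colors."""
    rng = np.random.default_rng(0)
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return centroids[labels]

pixels = np.random.default_rng(1).integers(0, 256, size=(1000, 3)).astype(float)
compressed = kmeans_quantize(pixels, k=16)   # 16-color palette approximation
```

After quantization, only the k centroid colors plus a per-pixel cluster index need to be stored, which is the data reduction described above.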
Large language models (LLMs) are also capable of lossless data compression, as demonstrated by DeepMind's research with the Chinchilla 70B model. Developed by DeepMind, Chinchilla 70B effectively compressed data, outperforming conventional methods such as Portable Network Graphics (PNG) for images and Free Lossless Audio Codec (FLAC) for audio. It achieved compression of image and audio data to 43.4% and 16.4% of their original sizes, respectively.
Data differencing
Data compression can be viewed as a special case of data differencing. Data differencing consists of producing a difference given a source and a target, with patching reproducing the target given a source and a difference. Since there is no separate source and target in data compression, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a difference from nothing. This is the same as considering absolute entropy (corresponding to data compression) as a special case of relative entropy (corresponding to data differencing) with no initial data.
The term differential compression is used to emphasize the data differencing connection.
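One concrete way to see the connection is zlib's preset-dictionary feature: compressing a target with the source supplied as the dictionary produces, in effect, a difference against that source, while plain compression is a difference against nothing. A hedged Python sketch (the wording of the strings is invented for illustration):

```python
import zlib

source = (b"Data differencing produces a difference from a source to a target; "
          b"patching applies that difference to the source to reconstruct the "
          b"target. Compression is the special case in which the source is empty.")
target = source.replace(b"difference", b"delta")

plain = zlib.compress(target)                 # difference against "nothing"

# Difference against the source: supply it as a preset dictionary, so shared
# content is encoded as back-references into the source.
comp = zlib.compressobj(zdict=source)
delta = comp.compress(target) + comp.flush()

decomp = zlib.decompressobj(zdict=source)
assert decomp.decompress(delta) == target     # "patching" recovers the target
print(len(plain), len(delta))                 # the delta is typically far smaller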
Uses
Image
Entropy coding originated in the 1940s with the introduction of Shannon–Fano coding, the basis for Huffman coding which was developed in 1950. Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969.
An important image compression technique is the discrete cosine transform (DCT), a technique developed in the early 1970s. DCT is the basis for JPEG, a lossy compression format which was introduced by the Joint Photographic Experts Group (JPEG) in 1992. JPEG greatly reduces the amount of data required to represent an image at the cost of a relatively small reduction in image quality and has become the most widely used image file format. Its highly efficient DCT-based compression algorithm was largely responsible for the wide proliferation of digital images and digital photos.
Lempel–Ziv–Welch (LZW) is a lossless compression algorithm developed in 1984. It is used in the GIF format, introduced in 1987. DEFLATE, a lossless compression algorithm specified in 1996, is used in the Portable Network Graphics (PNG) format.
Wavelet compression, the use of wavelets in image compression, began after the development of DCT coding. The JPEG 2000 standard was introduced in 2000. In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004.
Audio
Audio data compression, not to be confused with dynamic range compression, has the potential to reduce the transmission bandwidth and storage requirements of audio data. Audio compression algorithms are implemented in software as audio codecs. In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, quantization, DCT and linear prediction to reduce the amount of information used to represent the uncompressed data.
Lossy audio compression algorithms provide higher compression and are used in numerous audio applications including Vorbis and MP3. These algorithms almost all rely on psychoacoustics to eliminate or reduce fidelity of less audible sounds, thereby reducing the space required to store or transmit them.
The acceptable trade-off between loss of audio quality and transmission or storage size depends upon the application. For example, one 640 MB compact disc (CD) holds approximately one hour of uncompressed high fidelity music, less than 2 hours of music compressed losslessly, or 7 hours of music compressed in the MP3 format at a medium bit rate. A digital sound recorder can typically store around 200 hours of clearly intelligible speech in 640 MB.
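These figures follow from simple bit-rate arithmetic. A back-of-the-envelope check, assuming CD-quality PCM (44.1 kHz, 16 bits, two channels) and taking roughly 192 kbit/s as the "medium" MP3 rate mentioned above:

```python
# Rough check of the CD figures (parameter choices are assumptions).
cd_bytes = 640 * 10**6
pcm_bytes_per_s = 44_100 * 2 * 2          # 176,400 B/s uncompressed stereo PCM
mp3_bytes_per_s = 192_000 // 8            # 24,000 B/s at 192 kbit/s

print(cd_bytes / pcm_bytes_per_s / 3600)  # ~1.0 hour of uncompressed audio
print(cd_bytes / mp3_bytes_per_s / 3600)  # ~7.4 hours of MP3 audio
```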
Lossless audio compression produces a representation of digital data that can be decoded to an exact digital duplicate of the original. Compression ratios are around 50–60% of the original size, which is similar to those for generic lossless data compression. Lossless codecs use curve fitting or linear prediction as a basis for estimating the signal. Parameters describing the estimation and the difference between the estimation and the actual signal are coded separately.
A number of lossless audio compression formats exist. See list of lossless codecs for a listing. Some formats are associated with a distinct system, such as Direct Stream Transfer, used in Super Audio CD and Meridian Lossless Packing, used in DVD-Audio, Dolby TrueHD, Blu-ray and HD DVD.
Some audio file formats feature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack, and OptimFROG DualStream.
When audio files are to be processed, either by further compression or for editing, it is desirable to work from an unchanged original (uncompressed or losslessly compressed). Processing of a lossily compressed file for some purpose usually produces a final result inferior to the creation of the same compressed file from an uncompressed original. In addition to sound editing or mixing, lossless audio compression is often used for archival storage, or as master copies.
Lossy audio compression
Lossy audio compression is used in a wide range of applications. In addition to standalone audio-only applications of file playback in MP3 players or computers, digitally compressed audio streams are used in most video DVDs, digital television, streaming media on the Internet, satellite and cable radio, and increasingly in terrestrial radio broadcasts. Lossy compression typically achieves far greater compression than lossless compression, by discarding less-critical data based on psychoacoustic optimizations.
Psychoacoustics recognizes that not all data in an audio stream can be perceived by the human auditory system. Most lossy compression reduces redundancy by first identifying perceptually irrelevant sounds, that is, sounds that are very hard to hear. Typical examples include high frequencies or sounds that occur at the same time as louder sounds. Those irrelevant sounds are coded with decreased accuracy or not at all.
Due to the nature of lossy algorithms, audio quality suffers a digital generation loss when a file is decompressed and recompressed. This makes lossy compression unsuitable for storing the intermediate results in professional audio engineering applications, such as sound editing and multitrack recording. However, lossy formats such as MP3 are very popular with end-users as the file size is reduced to 5–20% of the original size and a megabyte can store about a minute's worth of music at adequate quality.
Several proprietary lossy compression algorithms have been developed that provide higher quality audio performance by using a combination of lossless and lossy algorithms with adaptive bit rates and lower compression ratios. Examples include aptX, LDAC, LHDC, MQA and SCL6.
Coding methods
To determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain, typically the frequency domain. Once transformed, component frequencies can be prioritized according to how audible they are. Audibility of spectral components is assessed using the absolute threshold of hearing and the principles of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency—and, in some cases, temporal masking—where a signal is masked by another signal separated by time. Equal-loudness contours may also be used to weigh the perceptual importance of components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models.
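For illustration, the MDCT can be written down directly from its definition: 2N time samples map to N frequency coefficients. The sketch below omits the windowing and 50% overlap that real codecs apply, and evaluates the transform naively rather than with a fast algorithm:

```python
import numpy as np

def mdct(x: np.ndarray) -> np.ndarray:
    """Direct evaluation of the MDCT definition:
    X_k = sum_{n=0}^{2N-1} x_n * cos(pi/N * (n + 0.5 + N/2) * (k + 0.5))."""
    N = len(x) // 2
    n = np.arange(2 * N)
    k = np.arange(N).reshape(-1, 1)
    return np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5)) @ x

frame = np.sin(2 * np.pi * 3 * np.arange(16) / 16)   # 16 samples -> 8 coefficients
print(mdct(frame))
```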
Other types of lossy compressors, such as the linear predictive coding (LPC) used with speech, are source-based coders. LPC uses a model of the human vocal tract to analyze speech sounds and infer the parameters used by the model to produce them moment to moment. These changing parameters are transmitted or stored and used to drive another model in the decoder which reproduces the sound.
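A hedged sketch of LPC parameter estimation: the predictor coefficients can be found by solving the autocorrelation (Yule–Walker) normal equations R a = r. Real speech coders use the Levinson–Durbin recursion plus windowing; the plain linear-algebra version below, with an invented test signal, just shows the idea:

```python
import numpy as np

def lpc_coefficients(x: np.ndarray, order: int) -> np.ndarray:
    """Estimate LPC coefficients by solving the Yule-Walker equations."""
    # autocorrelation at lags 0..order
    r = np.array([np.dot(x[:len(x) - lag], x[lag:]) for lag in range(order + 1)])
    # Toeplitz autocorrelation matrix
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# toy "speech" frame: a decaying resonance (exactly a 2nd-order process)
t = np.arange(200)
frame = np.sin(0.3 * t) * np.exp(-0.01 * t)
a = lpc_coefficients(frame, order=2)

# predict each sample from the previous two; only the small residual
# (plus the coefficients) would need to be transmitted
pred = np.convolve(frame, np.concatenate(([0.0], a)))[:len(frame)]
residual = frame - pred
```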
Lossy formats are often used for the distribution of streaming audio or interactive communication (such as in cell phone networks). In such applications, the data must be decompressed as the data flows, rather than after the entire data stream has been transmitted. Not all audio codecs can be used for streaming applications.
Latency is introduced by the methods used to encode and decode the data. Some codecs will analyze a longer segment, called a frame, of the data to optimize efficiency, and then code it in a manner that requires a larger segment of data at one time to decode. The inherent latency of the coding algorithm can be critical; for example, when there is a two-way transmission of data, such as with a telephone conversation, significant delays may seriously degrade the perceived quality.
In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, here latency refers to the number of samples that must be analyzed before a block of audio is processed. In the minimum case, latency is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony. In algorithms such as MP3, however, a large number of samples have to be analyzed to implement a psychoacoustic model in the frequency domain, and latency is on the order of 23 ms.
Speech encoding
Speech encoding is an important category of audio data compression. The perceptual models used to estimate what aspects of speech a human ear can hear are generally somewhat different from those used for music. The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex. As a result, speech can be encoded at high quality using a relatively low bit rate.
This is accomplished, in general, by some combination of two approaches:
Only encoding sounds that could be made by a single human voice.
Throwing away more of the data in the signal—keeping just enough to reconstruct an "intelligible" voice rather than the full frequency range of human hearing.
The earliest algorithms used in speech encoding (and audio data compression in general) were the A-law algorithm and the μ-law algorithm.
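The μ-law algorithm is a companding curve that can be stated in a few lines: amplitudes are compressed logarithmically so that quiet sounds retain more resolution after coarse quantization. An illustrative NumPy version, assuming μ = 255 and signals normalized to [-1, 1]:

```python
import numpy as np

def mu_law_compress(x: np.ndarray, mu: float = 255.0) -> np.ndarray:
    """mu-law companding: F(x) = sgn(x) * ln(1 + mu*|x|) / ln(1 + mu)."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y: np.ndarray, mu: float = 255.0) -> np.ndarray:
    """Inverse of mu-law companding."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

samples = np.linspace(-1, 1, 5)
roundtrip = mu_law_expand(mu_law_compress(samples))
assert np.allclose(roundtrip, samples)
```

In a real telephony codec, the companded value is then quantized to 8 bits; the logarithmic curve is what keeps quiet speech intelligible at that resolution.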
History
Early audio research was conducted at Bell Labs. There, in 1950, C. Chapin Cutler filed the patent on differential pulse-code modulation (DPCM). In 1973, Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan.
Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC). Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966. During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with the code-excited linear prediction (CELP) algorithm which achieved a significant compression ratio for its time. Perceptual coding is used by modern audio compression formats such as MP3 and AAC.
Discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, provided the basis for the modified discrete cosine transform (MDCT) used by modern audio compression formats such as MP3, Dolby Digital, and AAC. MDCT was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987, following earlier work by Princen and Bradley in 1986.
The world's first commercial broadcast automation audio compression system was developed by Oscar Bonello, an engineering professor at the University of Buenos Aires.
In 1983, using the psychoacoustic principle of the masking of critical bands first published in 1967, he started developing a practical application based on the recently developed IBM PC computer, and the broadcast automation system was launched in 1987 under the name Audicom.
Thirty-five years later, almost all the radio stations in the world were using this technology, manufactured by a number of companies, because the inventor refused to patent his invention, preferring to declare it public domain and publish it.
A literature compendium for a large variety of audio coding systems was published in the IEEE's Journal on Selected Areas in Communications (JSAC), in February 1988. While there were some papers from before that time, this collection documented an entire variety of finished, working audio coders, nearly all of them using perceptual techniques and some kind of frequency analysis and back-end noiseless coding.
Video
Uncompressed video requires a very high data rate. Although lossless video compression codecs perform at a compression factor of 5 to 12, a typical H.264 lossy compression video has a compression factor between 20 and 200.
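The scale of the problem is easy to verify with arithmetic. Assuming 1080p frames at 24-bit color and 30 frames per second (illustrative parameters, not a statement about any particular format):

```python
# Data-rate arithmetic for uncompressed 1080p video (assumed parameters).
width, height, bytes_per_pixel, fps = 1920, 1080, 3, 30
raw_rate = width * height * bytes_per_pixel * fps   # bytes per second

print(raw_rate / 1e6)          # ~186.6 MB/s uncompressed
print(raw_rate / 200 / 1e6)    # ~0.93 MB/s at a 200:1 lossy compression factor
```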
The two key video compression techniques used in video coding standards are the DCT and motion compensation (MC). Most video coding standards, such as the H.26x and MPEG formats, typically use motion-compensated DCT video coding (block motion compensation).
Most video codecs are used alongside audio compression techniques to store the separate but complementary data streams as one combined package using so-called container formats.
Encoding theory
Video data may be represented as a series of still image frames. Such data usually contains abundant amounts of spatial and temporal redundancy. Video compression algorithms attempt to reduce redundancy and store information more compactly.
Most video compression formats and codecs exploit both spatial and temporal redundancy (e.g. through difference coding with motion compensation). Similarities can be encoded by only storing differences between e.g. temporally adjacent frames (inter-frame coding) or spatially adjacent pixels (intra-frame coding). Inter-frame compression (a temporal delta encoding) (re)uses data from one or more earlier or later frames in a sequence to describe the current frame. Intra-frame coding, on the other hand, uses only data from within the current frame, effectively being still-image compression.
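A toy example of inter-frame difference coding, using small NumPy arrays as stand-in frames: when little changes between frames, the delta is mostly zeros and is therefore cheap to entropy-code (a sketch of the concept, not of any real codec):

```python
import numpy as np

# Store frame 2 as a delta from frame 1.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, size=(8, 8)).astype(np.int16)
frame2 = frame1.copy()
frame2[2:4, 2:4] += 5            # only a small region changed

delta = frame2 - frame1          # mostly zeros -> compresses well
reconstructed = frame1 + delta   # decoder rebuilds frame 2 from frame 1 + delta
assert np.array_equal(reconstructed, frame2)
print(np.count_nonzero(delta), "of", delta.size, "values changed")
```

Real codecs go further by searching for moved blocks (motion compensation) rather than differencing pixels in place.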
The intra-frame video coding formats used in camcorders and video editing employ simpler compression that uses only intra-frame prediction. This simplifies video editing software, as it prevents a situation in which a compressed frame refers to data that the editor has deleted.
Usually, video compression additionally employs lossy compression techniques like quantization that reduce aspects of the source data that are (more or less) irrelevant to the human visual perception by exploiting perceptual features of human vision. For example, small differences in color are more difficult to perceive than are changes in brightness. Compression algorithms can average a color across these similar areas in a manner similar to those used in JPEG image compression. As in all lossy compression, there is a trade-off between video quality and bit rate, cost of processing the compression and decompression, and system requirements. Highly compressed video may present visible or distracting artifacts.
Methods other than the prevalent DCT-based transform formats, such as fractal compression, matching pursuit and the use of a discrete wavelet transform (DWT), have been the subject of some research, but are typically not used in practical products. Wavelet compression is used in still-image coders and video coders without motion compensation. Interest in fractal compression seems to be waning, due to recent theoretical analysis showing a comparative lack of effectiveness of such methods.
Inter-frame coding
In inter-frame coding, individual frames of a video sequence are compared from one frame to the next, and the video compression codec records the differences to the reference frame. If the frame contains areas where nothing has moved, the system can simply issue a short command that copies that part of the previous frame into the next one. If sections of the frame move in a simple manner, the compressor can emit a (slightly longer) command that tells the decompressor to shift, rotate, lighten, or darken the copy. This longer command still remains much shorter than data generated by intra-frame compression. Usually, the encoder will also transmit a residue signal which describes the remaining more subtle differences to the reference imagery. Using entropy coding, these residue signals have a more compact representation than the full signal. In areas of video with more motion, the compression must encode more data to keep up with the larger number of pixels that are changing. Commonly during explosions, flames, flocks of animals, and in some panning shots, the high-frequency detail leads to quality decreases or to increases in the variable bitrate.
Hybrid block-based transform formats
Many commonly used video compression methods (e.g., those in standards approved by the ITU-T or ISO) share the same basic architecture that dates back to H.261 which was standardized in 1988 by the ITU-T. They mostly rely on the DCT, applied to rectangular blocks of neighboring pixels, and temporal prediction using motion vectors, as well as nowadays also an in-loop filtering step.
In the prediction stage, various deduplication and difference-coding techniques are applied that help decorrelate data and describe new data based on already transmitted data.
Then rectangular blocks of remaining pixel data are transformed to the frequency domain. In the main lossy processing stage, frequency domain data gets quantized in order to reduce information that is irrelevant to human visual perception.
In the last stage statistical redundancy gets largely eliminated by an entropy coder which often applies some form of arithmetic coding.
In an additional in-loop filtering stage various filters can be applied to the reconstructed image signal. By computing these filters also inside the encoding loop they can help compression because they can be applied to reference material before it gets used in the prediction process and they can be guided using the original signal. The most popular example are deblocking filters that blur out blocking artifacts from quantization discontinuities at transform block boundaries.
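The quantization stage in this pipeline reduces to dividing transform coefficients by a step size and rounding. A toy sketch with made-up coefficients and step sizes (real codecs derive the steps from perceptual models and rate control):

```python
import numpy as np

coeffs = np.array([312.4, -41.7, 12.9, 4.2, -1.3, 0.6, 0.2, -0.1])
steps = np.array([16, 16, 24, 24, 40, 40, 64, 64], dtype=float)

quantized = np.round(coeffs / steps)   # small high-frequency terms become 0
dequantized = quantized * steps        # the decoder's lossy reconstruction

print(quantized)      # many zeros: easy work for the entropy coder
print(dequantized)    # close to, but not exactly, the original coefficients
```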
History
In 1967, A.H. Robinson and C. Cherry proposed a run-length encoding bandwidth compression scheme for the transmission of analog television signals. The DCT, which is fundamental to modern video compression, was introduced by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974.
H.261, which debuted in 1988, commercially introduced the prevalent basic architecture of video compression technology. It was the first video coding format based on DCT compression. H.261 was developed by a number of companies, including Hitachi, PictureTel, NTT, BT and Toshiba.
The most popular video coding standards used for codecs have been the MPEG standards. MPEG-1 was developed by the Moving Picture Experts Group (MPEG) in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which was developed by a number of companies, primarily Sony, Thomson and Mitsubishi Electric. MPEG-2 became the standard video format for DVD and SD digital television. In 1999, it was followed by MPEG-4/H.263. It was also developed by a number of companies, primarily Mitsubishi Electric, Hitachi and Panasonic.
H.264/MPEG-4 AVC was developed in 2003 by a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics. AVC commercially introduced the modern context-adaptive binary arithmetic coding (CABAC) and context-adaptive variable-length coding (CAVLC) algorithms. AVC is the main video encoding standard for Blu-ray Discs, and is widely used by video sharing websites and streaming internet services such as YouTube, Netflix, Vimeo, and iTunes Store, web software such as Adobe Flash Player and Microsoft Silverlight, and various HDTV broadcasts over terrestrial and satellite television.
Genetics
Genetics compression algorithms are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and genetic algorithms adapted to the specific datatype. In 2012, a team of scientists from Johns Hopkins University published a genetic compression algorithm that does not use a reference genome for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression than the leading general-purpose compression utilities while being less computationally intensive. For this, Chanda, Elhaik, and Bader introduced MAF-based encoding (MAFE), which reduces the heterogeneity of the dataset by sorting SNPs by their minor allele frequency, thus homogenizing the dataset. Other algorithms developed in 2009 and 2013 (DNAZip and GenomeZip) have compression ratios of up to 1200-fold, allowing 6 billion basepair diploid human genomes to be stored in 2.5 megabytes (relative to a reference genome or averaged over many genomes).
Outlook and currently unused potential
It is estimated that the total amount of data that is stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provided 1,300 exabytes of hardware digits in 2007; when the corresponding content is optimally compressed, this represents only 295 exabytes of Shannon information.
| Mathematics | Discrete mathematics | null |
8072 | https://en.wikipedia.org/wiki/Disease | Disease | A disease is a particular abnormal condition that adversely affects the structure or function of all or part of an organism and is not immediately due to any external injury. Diseases are often known to be medical conditions that are associated with specific signs and symptoms. A disease may be caused by external factors such as pathogens or by internal dysfunctions. For example, internal dysfunctions of the immune system can produce a variety of different diseases, including various forms of immunodeficiency, hypersensitivity, allergies, and autoimmune disorders.
In humans, disease is often used more broadly to refer to any condition that causes pain, dysfunction, distress, social problems, or death to the person affected, or similar problems for those in contact with the person. In this broader sense, it sometimes includes injuries, disabilities, disorders, syndromes, infections, isolated symptoms, deviant behaviors, and atypical variations of structure and function, while in other contexts and for other purposes these may be considered distinguishable categories. Diseases can affect people not only physically but also mentally, as contracting and living with a disease can alter the affected person's perspective on life.
Death due to disease is called death by natural causes. There are four main types of disease: infectious diseases, deficiency diseases, hereditary diseases (including both genetic and non-genetic hereditary diseases), and physiological diseases. Diseases can also be classified in other ways, such as communicable versus non-communicable diseases. The deadliest diseases in humans are coronary artery disease (blood flow obstruction), followed by cerebrovascular disease and lower respiratory infections. In developed countries, the diseases that cause the most sickness overall are neuropsychiatric conditions, such as depression and anxiety.
The study of disease is called pathology, which includes the study of etiology, or cause.
Terminology
Concepts
In many cases, terms such as disease, disorder, morbidity, sickness and illness are used interchangeably; however, there are situations when specific terms are considered preferable.
Disease
The term disease broadly refers to any condition that impairs the normal functioning of the body. For this reason, diseases are associated with the dysfunction of the body's normal homeostatic processes. Commonly, the term is used to refer specifically to infectious diseases, which are clinically evident diseases that result from the presence of pathogenic microbial agents, including viruses, bacteria, fungi, protozoa, multicellular organisms, and aberrant proteins known as prions. An infection or colonization that does not and will not produce clinically evident impairment of normal functioning, such as the presence of the normal bacteria and yeasts in the gut, or of a passenger virus, is not considered a disease. By contrast, an infection that is asymptomatic during its incubation period, but expected to produce symptoms later, is usually considered a disease. Non-infectious diseases are all other diseases, including most forms of cancer, heart disease, and genetic disease.
Acquired disease
An acquired disease is one that began at some point during one's lifetime, as opposed to disease that was already present at birth, which is congenital disease. Acquired sounds like it could mean "caught via contagion", but it simply means acquired sometime after birth. It also sounds like it could imply secondary disease, but acquired disease can be primary disease.
Acute disease
An acute disease is one of a short-term nature (acute); the term sometimes also connotes a fulminant nature.
Chronic condition or chronic disease
A chronic disease is one that persists over time, often for at least six months, but may also include illnesses that are expected to last for the entirety of one's natural life.
Congenital disorder or congenital disease
A congenital disorder is one that is present at birth. It is often a genetic disease or disorder and can be inherited. It can also be the result of a vertically transmitted infection from the mother, such as HIV/AIDS.
Genetic disease
A genetic disorder or disease is caused by one or more genetic mutations. It is often inherited, but some mutations are random and de novo.
Hereditary or inherited disease
A hereditary disease is a type of genetic disease caused by genetic mutations that are hereditary (and can run in families).
Iatrogenic disease
An iatrogenic disease or condition is one that is caused by medical intervention, whether as a side effect of a treatment or as an inadvertent outcome.
Idiopathic disease
An idiopathic disease has an unknown cause or source. As medical science has advanced, many diseases with entirely unknown causes have had some aspects of their sources explained and therefore shed their idiopathic status. For example, when germs were discovered, it became known that they were a cause of infection, but particular germs and diseases had not been linked. In another example, it is known that autoimmunity is the cause of some forms of diabetes mellitus type 1, even though the particular molecular pathways by which it works are not yet understood. It is also common to know certain factors are associated with certain diseases; however, association does not necessarily imply causality. For example, a third factor might be causing both the disease, and the associated phenomenon.
Incurable disease
A disease that cannot be cured. Incurable diseases are not necessarily terminal diseases, and sometimes a disease's symptoms can be treated sufficiently for the disease to have little or no impact on quality of life.
Primary disease
A primary disease is a disease that is due to a root cause of illness, as opposed to secondary disease, which is a sequela, or complication that is caused by the primary disease. For example, a common cold is a primary disease, where rhinitis is a possible secondary disease, or sequela. A doctor must determine what primary disease, a cold or bacterial infection, is causing a patient's secondary rhinitis when deciding whether or not to prescribe antibiotics.
Secondary disease
A secondary disease is a disease that is a sequela or complication of a prior, causal disease, which is referred to as the primary disease or simply the underlying cause (root cause). For example, a bacterial infection can be primary, wherein a healthy person is exposed to bacteria and becomes infected, or it can be secondary to a primary cause, that predisposes the body to infection. For example, a primary viral infection that weakens the immune system could lead to a secondary bacterial infection. Similarly, a primary burn that creates an open wound could provide an entry point for bacteria, and lead to a secondary bacterial infection.
Terminal disease
A terminal disease is one that is expected to have the inevitable result of death. Previously, AIDS was a terminal disease; it is now incurable, but can be managed indefinitely using medications.
Illness
The terms illness and sickness are both generally used as synonyms for disease; however, the term illness is occasionally used to refer specifically to the patient's personal experience of their disease. In this model, it is possible for a person to have a disease without being ill (to have an objectively definable, but asymptomatic, medical condition, such as a subclinical infection, or to have a clinically apparent physical impairment but not feel sick or distressed by it), and to be ill without being diseased (such as when a person perceives a normal experience as a medical condition, or medicalizes a non-disease situation in their life – for example, a person who feels unwell as a result of embarrassment, and who interprets those feelings as sickness rather than normal emotions). Symptoms of illness are often not directly the result of infection, but a collection of evolved responses – sickness behavior by the body – that helps clear infection and promote recovery. Such aspects of illness can include lethargy, depression, loss of appetite, sleepiness, hyperalgesia, and inability to concentrate.
Disorder
A disorder is a functional abnormality or disturbance that may or may not show specific signs and symptoms. Medical disorders can be categorized into mental disorders, physical disorders, genetic disorders, emotional and behavioral disorders, and functional disorders. The term disorder is often considered more value-neutral and less stigmatizing than the terms disease or illness, and therefore is preferred terminology in some circumstances. In mental health, the term mental disorder is used as a way of acknowledging the complex interaction of biological, social, and psychological factors in psychiatric conditions; however, the term disorder is also used in many other areas of medicine, primarily to identify physical disorders that are not caused by infectious organisms, such as metabolic disorders.
Medical condition or health condition
A medical condition or health condition is a broad concept that includes all diseases, lesions, disorders, or nonpathologic condition that normally receives medical treatment, such as pregnancy or childbirth. While the term medical condition generally includes mental illnesses, in some contexts the term is used specifically to denote any illness, injury, or disease except for mental illnesses. The Diagnostic and Statistical Manual of Mental Disorders (DSM), the widely used psychiatric manual that defines all mental disorders, uses the term general medical condition to refer to all diseases, illnesses, and injuries except for mental disorders. This usage is also commonly seen in the psychiatric literature. Some health insurance policies also define a medical condition as any illness, injury, or disease except for psychiatric illnesses.
As it is more value-neutral than terms like disease, the term medical condition is sometimes preferred by people with health issues that they do not consider deleterious. However, by emphasizing the medical nature of the condition, this term is sometimes rejected, such as by proponents of the autism rights movement.
The term medical condition is also a synonym for medical state, in which case it describes an individual patient's current state from a medical standpoint. This usage appears in statements that describe a patient as being in critical condition, for example.
Morbidity
Morbidity is a diseased state, disability, or poor health due to any cause. The term may refer to the existence of any form of disease, or to the degree that the health condition affects the patient. Among severely ill patients, the level of morbidity is often measured by ICU scoring systems. Comorbidity, or co-existing disease, is the simultaneous presence of two or more medical conditions, such as schizophrenia and substance abuse.
In epidemiology and actuarial science, the term morbidity (also morbidity rate or morbidity frequency) can refer to either the incidence rate, the prevalence of a disease or medical condition, or the percentage of people who experience a given condition within a given timeframe (e.g., 20% of people will get influenza in a year). This measure of sickness is contrasted with the mortality rate of a condition, which is the proportion of people dying during a given time interval. Morbidity rates are used in actuarial professions, such as health insurance, life insurance, and long-term care insurance, to determine the premiums charged to customers. Morbidity rates help insurers predict the likelihood that an insured will contract or develop any number of specified diseases.
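The distinction between these rates reduces to simple proportions. A toy illustration in Python (all numbers invented for the example):

```python
# Toy illustration of morbidity vs. mortality rates (numbers invented).
population = 100_000
new_flu_cases_this_year = 20_000
flu_deaths_this_year = 150

morbidity_rate = new_flu_cases_this_year / population   # 0.20 -> "20% in a year"
mortality_rate = flu_deaths_this_year / population      # 0.0015

print(f"morbidity: {morbidity_rate:.1%}, mortality: {mortality_rate:.2%}")
```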
Pathosis or pathology
Pathosis (plural pathoses) is synonymous with disease. The word pathology also has this sense, in which it is commonly used by physicians in the medical literature, although some editors prefer to reserve pathology for its other senses. Sometimes a slight connotative shade causes preference for pathology or pathosis, implying "some [as yet poorly analyzed] pathophysiologic process" rather than disease, implying "a specific disease entity as defined by diagnostic criteria being already met". This is hard to quantify denotatively, but it explains why cognitive synonymy is not invariable.
Syndrome
A syndrome is the association of several signs and symptoms, or other characteristics that often occur together, regardless of whether the cause is known. Some syndromes, such as Down syndrome, are known to have only one cause (an extra chromosome at birth). Others, such as Parkinsonian syndrome, are known to have multiple possible causes. Acute coronary syndrome, for example, is not a single disease itself but is rather the manifestation of any of several diseases, including myocardial infarction secondary to coronary artery disease. In yet other syndromes, the cause is unknown. A familiar syndrome name often remains in use even after an underlying cause has been found or when there are a number of different possible primary causes. Examples of the first-mentioned type are Turner syndrome and DiGeorge syndrome, which are still often called by the "syndrome" name despite the fact that they can also be viewed as disease entities and not solely as sets of signs and symptoms.
Predisease
Predisease is a subclinical or prodromal vanguard of a disease. Prediabetes and prehypertension are common examples. The nosology or epistemology of predisease is contentious, though, because there is seldom a bright line differentiating a legitimate concern for subclinical or premonitory status and the conflict of interest–driven over-medicalization (e.g., by pharmaceutical manufacturers) or de-medicalization (e.g., by medical and disability insurers). Identifying legitimate predisease can result in useful preventive measures, such as motivating the person to get a healthy amount of physical exercise, but labeling a healthy person with an unfounded notion of predisease can result in overtreatment, such as taking drugs that only help people with severe disease or paying for treatments with a poor benefit–cost ratio.
One review proposed three criteria for predisease:
a high risk for progression to disease, making one "far more likely to develop" it than others are – for example, a pre-cancer will almost certainly turn into cancer over time
actionability for risk reduction – for example, removal of the precancerous tissue prevents it from turning into a potentially deadly cancer
benefit that outweighs the harm of any interventions taken – removing the precancerous tissue prevents cancer, and thus prevents a potential death from cancer.
Types by body system
Mental
Mental illness is a broad, generic label for a category of illnesses that may include affective or emotional instability, behavioral dysregulation, and cognitive dysfunction or impairment. Specific illnesses known as mental illnesses include major depression, generalized anxiety disorders, schizophrenia, and attention deficit hyperactivity disorder, to name a few. Mental illness can be of biological (e.g., anatomical, chemical, or genetic) or psychological (e.g., trauma or conflict) origin. It can impair the affected person's ability to work or study and can harm interpersonal relationships.
Organic
An organic disease is one caused by a physical or physiological change to some tissue or organ of the body. The term sometimes excludes infections. It is commonly used in contrast with mental disorders. It includes emotional and behavioral disorders if they are due to changes to the physical structures or functioning of the body, such as after a stroke or a traumatic brain injury, but not if they are due to psychosocial issues.
Stages
In an infectious disease, the incubation period is the time between infection and the appearance of symptoms. The latency period is the time between infection and the ability of the disease to spread to another person, which may precede, follow, or be simultaneous with the appearance of symptoms. Some viruses also exhibit a dormant phase, called viral latency, in which the virus hides in the body in an inactive state. For example, varicella zoster virus causes chickenpox in the acute phase; after recovery from chickenpox, the virus may remain dormant in nerve cells for many years, and later cause herpes zoster (shingles).
Acute disease
An acute disease is a short-lived disease, like the common cold.
Chronic disease
A chronic disease is one that lasts for a long time, usually at least six months. During that time, it may be constantly present, or it may go into remission and periodically relapse. A chronic disease may be stable (does not get any worse) or it may be progressive (gets worse over time). Some chronic diseases can be permanently cured. Most chronic diseases can be beneficially treated, even if they cannot be permanently cured.
Clinical disease
One that has clinical consequences; in other words, the stage of the disease that produces the characteristic signs and symptoms of that disease. AIDS is the clinical disease stage of HIV infection.
Cure
A cure is the end of a medical condition or a treatment that is very likely to end it, while remission refers to the disappearance, possibly temporarily, of symptoms. Complete remission is the best possible outcome for incurable diseases.
Flare-up
A flare-up can refer to either the recurrence of symptoms or an onset of more severe symptoms.
Progressive disease
Progressive disease is a disease whose typical natural course is the worsening of the disease until death, serious debility, or organ failure occurs. Slowly progressive diseases are also chronic diseases; many are also degenerative diseases. The opposite of progressive disease is stable disease or static disease: a medical condition that exists, but does not get better or worse.
A refractory disease is a disease that resists treatment, especially an individual case that resists treatment more than is normal for the specific disease in question.
Subclinical disease
Also called silent disease, silent stage, or asymptomatic disease. This is a stage in some diseases before the symptoms are first noted.
Terminal phase
If a person will die soon from a disease, regardless of whether that disease typically causes death, then the stage between the earlier disease process and active dying is the terminal phase.
Recovery
Recovery can refer to the repairing of physical processes (tissues, organs, etc.) and the resumption of healthy functioning after damage-causing processes have been cured.
Extent
Localized disease
A localized disease is one that affects only one part of the body, such as athlete's foot or an eye infection.
Disseminated disease
A disseminated disease has spread to other parts; with cancer, this is usually called metastatic disease.
Systemic disease
A systemic disease is a disease that affects the entire body, such as influenza or high blood pressure.
Classification
Diseases may be classified by cause, pathogenesis (mechanism by which the disease is caused), or by symptoms. Alternatively, diseases may be classified according to the organ system involved, though this is often complicated since many diseases affect more than one organ.
A chief difficulty in nosology is that diseases often cannot be defined and classified clearly, especially when cause or pathogenesis are unknown. Thus diagnostic terms often only reflect a symptom or set of symptoms (syndrome).
Classical classification of human disease derives from the observational correlation between pathological analysis and clinical syndromes. Today it is preferred to classify them by their cause if it is known.
The best-known and most widely used classification of diseases is the World Health Organization's ICD, which is periodically updated. The current revision is ICD-11.
Causes
Diseases can be caused by any number of factors and may be acquired or congenital. Microorganisms, genetics, the environment or a combination of these can contribute to a diseased state.
Only some diseases, such as influenza, are contagious and commonly recognized as infectious. The microorganisms that cause these diseases are known as pathogens and include varieties of bacteria, viruses, protozoa, and fungi. Infectious diseases can be transmitted, e.g. by hand-to-mouth contact with infectious material on surfaces, by bites of insects or other carriers of the disease, and from contaminated water or food (often via fecal contamination), etc. There are also sexually transmitted diseases. In some cases, microorganisms that are not readily spread from person to person play a role, while other diseases can be prevented or ameliorated with appropriate nutrition or other lifestyle changes.
Some diseases, such as most (but not all) forms of cancer, heart disease, and mental disorders, are non-infectious diseases. Many non-infectious diseases have a partly or completely genetic basis (see genetic disorder) and may thus be transmitted from one generation to another.
Social determinants of health are the social conditions in which people live that determine their health. Illnesses are generally related to social, economic, political, and environmental circumstances. Social determinants of health have been recognized by several health organizations, such as the Public Health Agency of Canada and the World Health Organization, as greatly influencing collective and personal well-being. The World Health Organization's Social Determinants Council also recognizes the role of social determinants of health in poverty.
When the cause of a disease is poorly understood, societies tend to mythologize the disease or use it as a metaphor or symbol of whatever that culture considers evil. For example, until the bacterial cause of tuberculosis was discovered in 1882, experts variously ascribed the disease to heredity, a sedentary lifestyle, depressed mood, and overindulgence in sex, rich food, or alcohol, all of which were social ills at the time.
When a disease is caused by a pathogenic organism (e.g., when malaria is caused by Plasmodium), one should not confuse the pathogen (the cause of the disease) with the disease itself. For example, West Nile virus (the pathogen) causes West Nile fever (the disease). The misuse of basic definitions in epidemiology is frequent in scientific publications.
Types of causes
Airborne
An airborne disease is any disease that is caused by pathogens and transmitted through the air.
Foodborne
Foodborne illness or food poisoning is any illness resulting from the consumption of food contaminated with pathogenic bacteria, toxins, viruses, prions, or parasites.
Infectious
Infectious diseases, also known as transmissible diseases or communicable diseases, comprise clinically evident illness (i.e., characteristic medical signs or symptoms of disease) resulting from the infection, presence, and growth of pathogenic biological agents in an individual host organism. Included in this category are contagious diseases – infections, such as influenza or the common cold, that commonly spread from one person to another – and communicable diseases – diseases that can spread from one person to another but do not necessarily spread through everyday contact.
Lifestyle
A lifestyle disease is any disease that appears to increase in frequency as countries become more industrialized and people live longer, especially if the risk factors include behavioral choices such as a sedentary lifestyle or a diet high in unhealthful foods such as refined carbohydrates, trans fats, or alcoholic beverages.
Non-communicable
A non-communicable disease is a medical condition or disease that is non-transmissible. Non-communicable diseases cannot be spread directly from one person to another. Heart disease and cancer are examples of non-communicable diseases in humans.
Prevention
Many diseases and disorders can be prevented through a variety of means. These include sanitation, proper nutrition, adequate exercise, vaccinations, and other self-care and public health measures.
Treatments
Medical therapies or treatments are efforts to cure or improve a disease or other health problems. In the medical field, therapy is synonymous with the word treatment. Among psychologists, the term may refer specifically to psychotherapy or "talk therapy". Common treatments include medications, surgery, medical devices, and self-care. Treatments may be provided by an organized health care system, or informally, by the patient or family members.
Preventive healthcare is a way to avoid an injury, sickness, or disease in the first place. A treatment or cure is applied after a medical problem has already started. A treatment attempts to improve or remove a problem, but treatments may not produce permanent cures, especially in chronic diseases. Cures are a subset of treatments that reverse diseases completely or end medical problems permanently. Many diseases that cannot be completely cured are still treatable. Pain management (also called pain medicine) is that branch of medicine employing an interdisciplinary approach to the relief of pain and improvement in the quality of life of those living with pain.
Treatment for medical emergencies must be provided promptly, often through an emergency department or, in less critical situations, through an urgent care facility.
Epidemiology
Epidemiology is the study of the factors that cause or encourage diseases. Some diseases are more common in certain geographic areas, among people with certain genetic or socioeconomic characteristics, or at different times of the year.
Epidemiology is considered a cornerstone methodology of public health research and is highly regarded in evidence-based medicine for identifying risk factors for diseases. In the study of communicable and non-communicable diseases, the work of epidemiologists ranges from outbreak investigation to study design, data collection, and analysis including the development of statistical models to test hypotheses and the documentation of results for submission to peer-reviewed journals. Epidemiologists also study the interaction of diseases in a population, a condition known as a syndemic. Epidemiologists rely on a number of other scientific disciplines such as biology (to better understand disease processes), biostatistics (the current raw information available), Geographic Information Science (to store data and map disease patterns) and social science disciplines (to better understand proximate and distal risk factors). Epidemiology can help identify causes as well as guide prevention efforts.
In studying diseases, epidemiology faces the challenge of defining them. Especially for poorly understood diseases, different groups might use significantly different definitions. Without an agreed-on definition, different researchers may report different numbers of cases and characteristics of the disease.
Some morbidity databases are compiled with data supplied by state and territorial health authorities at the national level or on a larger scale (such as the European Hospital Morbidity Database, HMDB) and may contain hospital discharge data by detailed diagnosis, age, and sex. The European HMDB data were submitted by European countries to the World Health Organization Regional Office for Europe.
Burdens of disease
Disease burden is the impact of a health problem in an area measured by financial cost, mortality, morbidity, or other indicators.
There are several measures used to quantify the burden imposed by diseases on people. The years of potential life lost (YPLL) is a simple estimate of the number of years that a person's life was shortened due to a disease. For example, if a person dies at the age of 65 from a disease, and would probably have lived until age 80 without that disease, then that disease has caused a loss of 15 years of potential life. YPLL measurements do not account for how disabled a person is before dying, so the measurement treats a person who dies suddenly and a person who dies at the same age after decades of illness as equivalent. In 2004, the World Health Organization calculated that 932 million years of potential life were lost to premature death.
The quality-adjusted life year (QALY) and disability-adjusted life year (DALY) metrics are similar but take into account whether the person was healthy after diagnosis. In addition to the number of years lost due to premature death, these measurements add part of the years lost to being sick. Unlike YPLL, these measurements show the burden imposed on people who are very sick, but who live a normal lifespan. A disease that has high morbidity, but low mortality, has a high DALY and a low YPLL. In 2004, the World Health Organization calculated that 1.5 billion disability-adjusted life years were lost to disease and injury. In the developed world, heart disease and stroke cause the most loss of life, but neuropsychiatric conditions like major depressive disorder cause the most years lost to being sick.
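To make the difference between these metrics concrete, here is a minimal Python sketch; the disability weight, years of illness, and reference life expectancy are illustrative assumptions, not official WHO parameters.

def ypll(age_at_death, reference_age=80):
    """Years of potential life lost: ignores illness before death."""
    return max(reference_age - age_at_death, 0)

def daly(age_at_death, years_lived_sick, disability_weight, reference_age=80):
    """Simplified DALY: years of life lost plus years lived with disability,
    each sick year counting only partially (the disability weight)."""
    yll = max(reference_age - age_at_death, 0)
    yld = years_lived_sick * disability_weight
    return yll + yld

print(ypll(65))                                              # -> 15, the example above
print(daly(65, years_lived_sick=10, disability_weight=0.3))  # -> 18.0

The sketch shows why a high-morbidity, low-mortality disease scores high on DALY but low on YPLL: the yld term accumulates even when death is not premature.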
Society and culture
How a society responds to diseases is the subject of medical sociology.
A condition may be considered a disease in some cultures or eras but not in others. For example, obesity has been associated with prosperity and abundance, and this perception persists in many African regions, especially since the beginning of the HIV/AIDS epidemic. Epilepsy is considered a sign of spiritual gifts among the Hmong people.
Sickness confers the social legitimization of certain benefits, such as illness benefits, work avoidance, and being looked after by others. The person who is sick takes on a social role called the sick role. A person who responds to a dreaded disease, such as cancer, in a culturally acceptable fashion may be publicly and privately honored with higher social status. In return for these benefits, the sick person is obligated to seek treatment and work to become well once more. As a comparison, consider pregnancy, which is not interpreted as a disease or sickness, even if the mother and baby may both benefit from medical care.
Most religions grant exceptions from religious duties to people who are sick. For example, one whose life would be endangered by fasting on Yom Kippur or during the month of Ramadan is exempted from the requirement, or even forbidden from participating. People who are sick are also exempted from social duties. For example, ill health is the only socially acceptable reason for an American to refuse an invitation to the White House.
The identification of a condition as a disease, rather than as simply a variation of human structure or function, can have significant social or economic implications. The controversial recognition of diseases such as repetitive stress injury (RSI) and post-traumatic stress disorder (PTSD) has had a number of positive and negative effects on the financial and other responsibilities of governments, corporations, and institutions towards individuals, as well as on the individuals themselves. The social implication of viewing aging as a disease could be profound, though this classification is not yet widespread.
Lepers were people who were historically shunned because they had an infectious disease, and the term "leper" still evokes social stigma. Fear of disease can still be a widespread social phenomenon, though not all diseases evoke extreme social stigma.
Social standing and economic status affect health. Diseases of poverty are diseases that are associated with poverty and low social status; diseases of affluence are diseases that are associated with high social and economic status. Which diseases are associated with which states varies according to time, place, and technology. Some diseases, such as diabetes mellitus, may be associated with both poverty (poor food choices) and affluence (long lifespans and sedentary lifestyles), through different mechanisms. The term lifestyle diseases describes diseases that are associated with longevity and are more common among older people. For example, cancer is far more common in societies in which most members live until they reach the age of 80 than in societies in which most members die before they reach the age of 50.
Language of disease
An illness narrative is a way of organizing a medical experience into a coherent story that illustrates the sick individual's personal experience.
People use metaphors to make sense of their experiences with disease. The metaphors move disease from an objective thing that exists to an affective experience. The most popular metaphors draw on military concepts: Disease is an enemy that must be feared, fought, battled, and routed. The patient or the healthcare provider is a warrior, rather than a passive victim or bystander. The agents of communicable diseases are invaders; non-communicable diseases constitute internal insurrection or civil war. Because the threat is urgent, perhaps a matter of life and death, unthinkably radical, even oppressive, measures are society's and the patient's moral duty as they courageously mobilize to struggle against destruction. The War on Cancer is an example of this metaphorical use of language. This language is empowering to some patients, but leaves others feeling like they are failures.
Another class of metaphors describes the experience of illness as a journey: The person travels to or from a place of disease, and changes himself, discovers new information, or increases his experience along the way. He may travel "on the road to recovery" or make changes to "get on the right track" or choose "pathways". Some are explicitly immigration-themed: the patient has been exiled from the home territory of health to the land of the ill, changing identity and relationships in the process. This language is more common among British healthcare professionals than the language of physical aggression.
Some metaphors are disease-specific. Slavery is a common metaphor for addictions: The alcoholic is enslaved by drink, and the smoker is captive to nicotine. Some cancer patients treat the loss of their hair from chemotherapy as a metonymy or metaphor for all the losses caused by the disease.
Some diseases are used as metaphors for social ills: "Cancer" is a common description for anything that is endemic and destructive in society, such as poverty, injustice, or racism. AIDS was seen as a divine judgment for moral decadence, and only by purging itself from the "pollution" of the "invader" could society become healthy again. More recently, when AIDS seemed less threatening, this type of emotive language was applied to avian flu and type 2 diabetes mellitus. Authors in the 19th century commonly used tuberculosis as a symbol and a metaphor for transcendence. People with the disease were portrayed in literature as having risen above daily life to become ephemeral objects of spiritual or artistic achievement. In the 20th century, after its cause was better understood, the same disease became the emblem of poverty, squalor, and other social problems.
| Biology and health sciences | Science and medicine | null |
8078 | https://en.wikipedia.org/wiki/Dynamite | Dynamite | Dynamite is an explosive made of nitroglycerin, sorbents (such as powdered shells or clay), and stabilizers. It was invented by the Swedish chemist and engineer Alfred Nobel in Geesthacht, Northern Germany, and was patented in 1867. It rapidly gained wide-scale use as a more robust alternative to the traditional black powder explosives. It allows the use of nitroglycerin's favorable explosive properties while greatly reducing its risk of accidental detonation.
History
Dynamite was invented by Swedish chemist Alfred Nobel in 1866 and was the first safely manageable explosive stronger than black powder.
Alfred Nobel's father, Immanuel Nobel, was an industrialist, engineer, and inventor. He built bridges and buildings in Stockholm and founded Sweden's first rubber factory. His construction work inspired him to research new methods of blasting rock that were more effective than black powder. After some bad business deals in Sweden, in 1838 Immanuel moved his family to Saint Petersburg, where Alfred and his brothers were educated privately under Swedish and Russian tutors. At the age of 17, Alfred Nobel was sent abroad for two years; in the United States he met Swedish engineer John Ericsson and in France studied under famed chemist Théophile-Jules Pelouze and his pupil Ascanio Sobrero, who had first synthesized nitroglycerin in 1847. Pelouze cautioned Nobel against using nitroglycerine as a commercial explosive because of its great sensitivity to shock.
In 1857, Nobel filed the first of several hundred patents, mostly concerning air pressure, gas and fluid gauges, but remained fascinated with nitroglycerin's potential as an explosive. Nobel, along with his father and brother Emil, experimented with various combinations of nitroglycerin and black powder. Nobel came up with a way to safely detonate nitroglycerin by inventing the detonator, or blasting cap, that allowed a controlled explosion set off from a distance using a fuse. In 1863 Nobel performed his first successful detonation of pure nitroglycerin, using a blasting cap made of a copper percussion cap and mercury fulminate. In 1864, Alfred Nobel filed patents for both the blasting cap and his method of synthesizing nitroglycerin, using sulfuric acid, nitric acid and glycerin. On 3 September 1864, while experimenting with nitroglycerin, Emil and several others were killed in an explosion at the factory at Immanuel Nobel's estate at Heleneborg. After this, Alfred founded the company Nitroglycerin Aktiebolaget in Vinterviken to continue work in a more isolated area and the following year moved to Germany, where he founded another company, Dynamit Nobel.
Despite the invention of the blasting cap, the instability of nitroglycerin rendered it useless as a commercial explosive. To solve this problem, Nobel sought to combine it with another substance that would make it safe for transport and handling but would not reduce its effectiveness as an explosive. He tried combinations of cement, coal, and sawdust, but was unsuccessful. Finally, he tried diatomaceous earth (fossilized algae) that he had brought from the Elbe River near his factory in Hamburg; it successfully stabilized the nitroglycerin into a portable explosive.
Nobel obtained patents for his inventions in England on 7 May 1867 and in Sweden on 19 October 1867. After its introduction, dynamite rapidly gained wide-scale use as a safe alternative to black powder and nitroglycerin. Nobel tightly controlled the patents, and unlicensed duplicating companies were quickly shut down. A few American businessmen got around the patent by using absorbents other than diatomaceous earth, such as resin.
Nobel originally sold dynamite as "Nobel's Blasting Powder" and later changed the name to dynamite, from the Ancient Greek word dýnamis (δύναμις), meaning "power".
Manufacture
Form
Dynamite is usually sold in the form of cardboard cylinders ("sticks") of standardized length, diameter, and mass. A stick of dynamite thus produced contains roughly 1 MJ (megajoule) of energy. Other sizes also exist, rated either by portion (quarter-stick or half-stick) or by weight.
Dynamite is usually rated by "weight strength" (the amount of nitroglycerin it contains), usually from 20% to 60%. For example, 40% dynamite is composed of 40% nitroglycerin and 60% "dope" (the absorbent storage medium mixed with the stabilizer and any additives).
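As a small illustration of the weight-strength labeling, the sketch below works out the composition of a hypothetical 190 g stick of 40% dynamite; the stick mass is an assumption chosen for illustration, not a figure from this article.

# Nitroglycerin content implied by "weight strength", for a hypothetical
# 190 g stick of 40% dynamite (the stick mass is an assumed value).
stick_mass_g = 190.0
weight_strength = 0.40          # "40% dynamite"
nitroglycerin_g = stick_mass_g * weight_strength
dope_g = stick_mass_g * (1 - weight_strength)
print(f"{nitroglycerin_g:.0f} g nitroglycerin, {dope_g:.0f} g dope")
# -> 76 g nitroglycerin, 114 g dope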
Storage considerations
The maximum shelf life of nitroglycerin-based dynamite is recommended as one year from the date of manufacture under good storage conditions.
Over time, regardless of the sorbent used, sticks of dynamite will "weep" or "sweat" nitroglycerin, which can then pool in the bottom of the box or storage area. For that reason, explosive manuals recommend the regular up-ending of boxes of dynamite in storage. Crystals will form on the outside of the sticks, causing them to be even more sensitive to shock, friction, and temperature. Therefore, while the risk of an explosion without the use of a blasting cap is minimal for fresh dynamite, old dynamite is dangerous. Modern packaging helps eliminate this by placing the dynamite into sealed plastic bags and using wax-coated cardboard.
Dynamite is moderately sensitive to shock. Shock resistance tests are usually carried out with a drop hammer: about 100 mg of explosive is placed on an anvil, upon which a weight is dropped from different heights until detonation is achieved. With a hammer of 2 kg, mercury fulminate detonates with a drop distance of 1 to 2 cm, nitroglycerin with 4 to 5 cm, dynamite with 15 to 30 cm, and ammoniacal explosives with 40 to 50 cm.
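These drop distances can be compared on an energy basis, since the impact energy delivered by the hammer is simply E = mgh; a quick sketch using the 2 kg hammer figures quoted above (taking the upper end of each range):

# Impact energy E = m*g*h for the 2 kg drop hammer at the detonation
# heights quoted above (upper end of each range).
g = 9.81                                   # m/s^2
m = 2.0                                    # kg hammer
heights_cm = {"mercury fulminate": 2, "nitroglycerin": 5,
              "dynamite": 30, "ammoniacal explosives": 50}
for name, h_cm in heights_cm.items():
    energy_j = m * g * h_cm / 100.0        # convert cm to m
    print(f"{name}: ~{energy_j:.2f} J to detonate")
# Dynamite needs roughly an order of magnitude more impact energy
# than pure nitroglycerin, which is the point of the sorbent.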
Major manufacturers
South Africa
For several decades beginning in the 1940s, the largest producer of dynamite in the world was the Union of South Africa. There, the De Beers company established a factory in 1902 at Somerset West. The explosives factory was later operated by AECI (African Explosives and Chemical Industries). The demand for the product came mainly from the country's vast gold mines, centered on the Witwatersrand. The factory at Somerset West was in operation by 1903, and by 1907 it was already producing 340,000 cases annually. A rival factory at Modderfontein was producing another 200,000 cases per year.
There were two large explosions at the Somerset West plant during the 1960s. Some workers died, but the loss of life was limited by the modular design of the factory and its earth works, and the planting of trees that directed the blasts upward. There were several other explosions at the Modderfontein factory. After 1985, pressure from trade unions forced AECI to phase out the production of dynamite. The factory then went on to produce ammonium nitrate emulsion-based explosives that are safer to manufacture and handle.
United States
Dynamite was first manufactured in the US by the Giant Powder Company of San Francisco, California, whose founder had obtained the exclusive rights from Nobel in 1867. Giant was eventually acquired by DuPont, which produced dynamite under the Giant name until Giant was dissolved by DuPont in 1905.
Thereafter, DuPont produced dynamite under its own name until 1911–12, when its explosives monopoly was broken up by the U.S. Circuit Court in the "Powder Case". Two new companies were formed upon the breakup, the Hercules Powder Company and the Atlas Powder Company, which took up the manufacture of dynamite (in different formulations).
Currently, only Dyno Nobel manufactures dynamite in the US. The only facility producing it is located in Carthage, Missouri, but the material is purchased from Dyno Nobel by other manufacturers who put their labels on the dynamite and boxes.
Non-dynamite explosives
Other explosives are often referred to as, or confused with, dynamite:
TNT
Trinitrotoluene (TNT) is often assumed to be the same as (or confused for) dynamite largely because of the ubiquity of both explosives during the 20th century. This incorrect connection between TNT and dynamite was enhanced by cartoons such as Bugs Bunny, where animators labeled any kind of bomb (ranging from sticks of dynamite to kegs of black powder) as TNT, because the acronym was shorter and more memorable and did not require literacy to recognize that TNT meant "bomb".
Aside from both being high explosives, TNT and dynamite have little in common. TNT is a second-generation castable explosive adopted by the military, while dynamite, in contrast, has never been popular in warfare because it degenerates quickly under severe conditions and can be detonated by either fire or a wayward bullet. The German armed forces adopted TNT as a filling for artillery shells in 1902, some 40 years after the invention of dynamite, which is a first-generation phlegmatized explosive primarily intended for civilian earthmoving. TNT has never been popular or widespread in civilian earthmoving, as it is considerably more expensive and less powerful by weight than dynamite, as well as being slower to mix and pack into boreholes. TNT's primary asset is its remarkable insensitivity and stability: it is waterproof and incapable of detonating without the extreme shock and heat provided by a blasting cap (or a sympathetic detonation); this stability also allows it to be melted at about 80 °C, poured into high-explosive shells, and allowed to re-solidify, with no extra danger or change in the TNT's characteristics. Accordingly, more than 90% of the TNT produced in America was always for the military market, with most TNT used for filling shells, hand grenades, and aerial bombs, and the remainder packaged in brown "bricks" (not red cylinders) for use as demolition charges by combat engineers.
"Extra" dynamite
In the United States, in 1885, the chemist Russell S. Penniman invented "ammonium dynamite", a form of explosive that used ammonium nitrate as a substitute for the more costly nitroglycerin. Ammonium nitrate has only 85% of the chemical energy of nitroglycerin.
It is rated by either "weight strength" (the amount of ammonium nitrate in the medium) or "cartridge strength" (the potential explosive strength generated by an amount of explosive of a certain density and grain size used in comparison to the explosive strength generated by an equivalent density and grain size of a standard explosive). For example, high-explosive 65% Extra dynamite has a weight strength of 65% ammonium nitrate and 35% "dope" (the absorbent medium mixed with the stabilizers and additives). Its "cartridge strength" would be its weight in pounds times its strength in relation to an equal amount of ANFO (the civilian baseline standard) or TNT (the military baseline standard). For example, 65% ammonium dynamite with a 20% cartridge strength would mean the stick was equal to an equivalent weight strength of 20% ANFO.
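A minimal sketch of how the two ratings combine, assuming a hypothetical one-pound stick (the stick weight is illustrative only, not a standard from the text):

# Sketch of the two ratings described above for a hypothetical 1 lb stick
# of 65% ammonium ("Extra") dynamite with a 20% cartridge strength.
stick_weight_lb = 1.0
weight_strength = 0.65      # fraction of ammonium nitrate in the medium
cartridge_strength = 0.20   # explosive strength relative to the ANFO baseline
anfo_equivalent_lb = stick_weight_lb * cartridge_strength
print(f"{weight_strength:.0%} ammonium nitrate by weight")
print(f"equivalent to {anfo_equivalent_lb:.2f} lb of ANFO")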
"Military dynamite"
"Military dynamite" (or M1 dynamite) is a dynamite substitute made with more stable ingredients than nitroglycerin. It contains 75% RDX, 15% TNT and 10% desensitizers and plasticizers. It has only 60% equivalent strength as commercial dynamite, but is much safer to store and handle.
Regulation
Various countries around the world have enacted explosives laws and require licenses to manufacture, distribute, store, use, and possess explosives or ingredients.
| Technology | Material and chemical | null |
8082 | https://en.wikipedia.org/wiki/Diamond | Diamond | Diamond is a solid form of the element carbon with its atoms arranged in a crystal structure called diamond cubic. Diamond as a form of carbon is a tasteless, odourless, strong, brittle solid, colourless in pure form, a poor conductor of electricity, and insoluble in water. Another solid form of carbon known as graphite is the chemically stable form of carbon at room temperature and pressure, but diamond is metastable and converts to it at a negligible rate under those conditions. Diamond has the highest hardness and thermal conductivity of any natural material, properties that are used in major industrial applications such as cutting and polishing tools. They are also the reason that diamond anvil cells can subject materials to pressures found deep in the Earth.
Because the arrangement of atoms in diamond is extremely rigid, few types of impurity can contaminate it (two exceptions are boron and nitrogen). Small numbers of defects or impurities (about one per million of lattice atoms) can color a diamond blue (boron), yellow (nitrogen), brown (defects), green (radiation exposure), purple, pink, orange, or red. Diamond also has a very high refractive index and a relatively high optical dispersion.
Most natural diamonds have ages between 1 billion and 3.5 billion years. Most were formed at depths between 150 and 250 km in the Earth's mantle, although a few have come from as deep as 800 km. Under high pressure and temperature, carbon-containing fluids dissolved various minerals and replaced them with diamonds. Much more recently (hundreds to tens of millions of years ago), they were carried to the surface in volcanic eruptions and deposited in igneous rocks known as kimberlites and lamproites.
Synthetic diamonds can be grown from high-purity carbon under high pressures and temperatures or from hydrocarbon gases by chemical vapor deposition (CVD).
Properties
Diamond is a solid form of pure carbon with its atoms arranged in a crystal. Solid carbon comes in different forms known as allotropes depending on the type of chemical bond. The two most common allotropes of pure carbon are diamond and graphite. In graphite, the bonds are sp2 orbital hybrids and the atoms form in planes, with each bound to three nearest neighbors, 120 degrees apart. In diamond, they are sp3 and the atoms form tetrahedra, with each bound to four nearest neighbors. Tetrahedra are rigid, the bonds are strong, and, of all known substances, diamond has the greatest number of atoms per unit volume, which is why it is both the hardest and the least compressible. It also has a high density, ranging from 3150 to 3530 kilograms per cubic metre (over three times the density of water) in natural diamonds and 3520 kg/m³ in pure diamond. In graphite, the bonds between nearest neighbors are even stronger, but the bonds between parallel adjacent planes are weak, so the planes easily slip past each other. Thus, graphite is much softer than diamond. However, the stronger bonds make graphite less flammable.
Diamonds have been adopted for many uses because of the material's exceptional physical characteristics. It has the highest thermal conductivity and the highest sound velocity. It has low adhesion and friction, and its coefficient of thermal expansion is extremely low. Its optical transparency extends from the far infrared to the deep ultraviolet and it has high optical dispersion. It also has high electrical resistance. It is chemically inert, not reacting with most corrosive substances, and has excellent biological compatibility.
Thermodynamics
The equilibrium pressure and temperature conditions for a transition between graphite and diamond are well established theoretically and experimentally. The equilibrium pressure varies linearly with temperature between absolute zero and the diamond/graphite/liquid triple point. However, the phases have a wide region about this line where they can coexist. At standard temperature and pressure, the stable phase of carbon is graphite, but diamond is metastable and its rate of conversion to graphite is negligible. At sufficiently high temperatures, however, diamond rapidly converts to graphite, and rapid conversion of graphite to diamond requires pressures well above the equilibrium line.
Above the graphite–diamond–liquid carbon triple point, the melting point of diamond increases slowly with increasing pressure; but at pressures of hundreds of GPa, it decreases. At high pressures, silicon and germanium have a BC8 body-centered cubic crystal structure, and a similar structure is predicted for carbon at sufficiently high pressures.
Results published in an article in the scientific journal Nature Physics in 2010 suggest that, at ultra-high pressures and temperatures (about 10 million atmospheres or 1 TPa and 50,000 °C), diamond melts into a metallic fluid. The extreme conditions required for this to occur are present in the ice giants Neptune and Uranus. Both planets are made up of approximately 10 percent carbon and could hypothetically contain oceans of liquid carbon. Since large quantities of metallic fluid can affect the magnetic field, this could serve as an explanation as to why the geographic and magnetic poles of the two planets are unaligned.
Crystal structure
The most common crystal structure of diamond is called diamond cubic. It is formed of unit cells stacked together. Although a drawing of a single unit cell shows 18 atoms, each corner atom is shared by eight unit cells and each atom in the center of a face is shared by two, so there are a total of eight atoms per unit cell. The length of each side of the unit cell is denoted by a and is 3.567 angstroms.
The nearest-neighbor distance in the diamond lattice is (√3/4)a ≈ 1.545 Å, where a is the lattice constant, usually given in ångströms as a = 3.567 Å, which is 0.3567 nm.
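These lattice figures are enough to recover both the bond length and the theoretical density of diamond; a short back-of-the-envelope sketch, using the standard atomic mass of carbon (12.011 u):

# Bond length and theoretical density of diamond from the lattice constant.
a = 3.567e-10                      # lattice constant in meters (3.567 Å)
atoms_per_cell = 8                 # diamond cubic unit cell
u = 1.66054e-27                    # atomic mass unit in kg
m_carbon = 12.011 * u              # average atomic mass of carbon

bond_length = (3 ** 0.5) * a / 4   # nearest-neighbor distance = (sqrt(3)/4)a
density = atoms_per_cell * m_carbon / a ** 3

print(f"nearest-neighbor distance: {bond_length * 1e10:.3f} Å")  # ~1.545 Å
print(f"theoretical density: {density:.0f} kg/m^3")              # ~3516 kg/m^3

The computed density agrees with the 3520 kg/m³ figure quoted for pure diamond above.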
A diamond cubic lattice can be thought of as two interpenetrating face-centered cubic lattices, with one displaced by one quarter of the body diagonal of the cubic cell, or as one lattice with two atoms associated with each lattice point. Viewed from a <111> crystallographic direction, it is formed of layers stacked in a repeating ABCABC ... pattern. Diamonds can also form an ABAB ... structure, which is known as hexagonal diamond or lonsdaleite, but this is far less common and is formed under different conditions from cubic carbon.
Crystal habit
Diamonds occur most often as euhedral or rounded octahedra and twinned octahedra known as macles. As diamond's crystal structure has a cubic arrangement of the atoms, they have many facets that belong to a cube, octahedron, rhombic dodecahedron, tetrakis hexahedron, or disdyakis dodecahedron. The crystals can have rounded-off and unexpressive edges and can be elongated. Diamonds (especially those with rounded crystal faces) are commonly found coated in nyf, an opaque gum-like skin.
Some diamonds contain opaque fibers. They are referred to as opaque if the fibers grow from a clear substrate or fibrous if they occupy the entire crystal. Their colors range from yellow to green or gray, sometimes with cloud-like white to gray impurities. Their most common shape is cuboidal, but they can also form octahedra, dodecahedra, macles, or combined shapes. The structure is the result of numerous impurities with sizes between 1 and 5 microns. These diamonds probably formed in kimberlite magma and sampled the volatiles.
Diamonds can also form polycrystalline aggregates. There have been attempts to classify them into groups with names such as boart, ballas, stewartite, and framesite, but there is no widely accepted set of criteria. Carbonado, a type in which the diamond grains were sintered (fused without melting by the application of heat and pressure), is black in color and tougher than single crystal diamond. It has never been observed in a volcanic rock. There are many theories for its origin, including formation in a star, but no consensus.
Mechanical
Hardness
Diamond is the hardest material on the qualitative Mohs scale. To conduct the quantitative Vickers hardness test, samples of materials are struck with a pyramid of standardized dimensions using a known force – a diamond crystal is used for the pyramid to permit a wide range of materials to be tested. From the size of the resulting indentation, a Vickers hardness value for the material can be determined. Diamond's great hardness relative to other materials has been known since antiquity, and is the source of its name. This does not mean that it is infinitely hard, indestructible, or unscratchable. Indeed, diamonds can be scratched by other diamonds and worn down over time even by softer materials, such as vinyl phonograph records.
Diamond hardness depends on its purity, crystalline perfection, and orientation: hardness is higher for flawless, pure crystals oriented to the <111> direction (along the longest diagonal of the cubic diamond lattice). Therefore, whereas it might be possible to scratch some diamonds with other materials, such as boron nitride, the hardest diamonds can only be scratched by other diamonds and nanocrystalline diamond aggregates.
The hardness of diamond contributes to its suitability as a gemstone. Because it can only be scratched by other diamonds, it maintains its polish extremely well. Unlike many other gems, it is well-suited to daily wear because of its resistance to scratching—perhaps contributing to its popularity as the preferred gem in engagement or wedding rings, which are often worn every day.
The hardest natural diamonds mostly originate from the Copeton and Bingara fields located in the New England area in New South Wales, Australia. These diamonds are generally small, perfect to semiperfect octahedra, and are used to polish other diamonds. Their hardness is associated with the crystal growth form, which is single-stage crystal growth. Most other diamonds show more evidence of multiple growth stages, which produce inclusions, flaws, and defect planes in the crystal lattice, all of which affect their hardness. It is possible to treat regular diamonds under a combination of high pressure and high temperature to produce diamonds that are harder than the diamonds used in hardness gauges.
Diamonds cut glass, but this does not positively identify a diamond because other materials, such as quartz, also lie above glass on the Mohs scale and can also cut it. Diamonds can scratch other diamonds, but this can result in damage to one or both stones. Hardness tests are infrequently used in practical gemology because of their potentially destructive nature. The extreme hardness and high value of diamond means that gems are typically polished slowly, using painstaking traditional techniques and greater attention to detail than is the case with most other gemstones; these tend to result in extremely flat, highly polished facets with exceptionally sharp facet edges. Diamonds also possess an extremely high refractive index and fairly high dispersion. Taken together, these factors affect the overall appearance of a polished diamond and most diamantaires still rely upon skilled use of a loupe (magnifying glass) to identify diamonds "by eye".
Toughness
Somewhat related to hardness is another mechanical property, toughness, which is a material's ability to resist breakage from forceful impact. The toughness of natural diamond has been measured as 50–65 MPa·m^1/2. This value is good compared to other ceramic materials, but poor compared to most engineering materials such as engineering alloys, which typically exhibit toughness over 80 MPa·m^1/2. As with any material, the macroscopic geometry of a diamond contributes to its resistance to breakage. Diamond has a cleavage plane and is therefore more fragile in some orientations than others. Diamond cutters use this attribute to cleave some stones before faceting them. "Impact toughness" is one of the main indexes to measure the quality of synthetic industrial diamonds.
Yield strength
Diamond has a compressive yield strength of 130–140 GPa. This exceptionally high value, along with the hardness and transparency of diamond, is the reason that diamond anvil cells are the main tool for high-pressure experiments. These anvils have reached extremely high pressures. Much higher pressures may be possible with nanocrystalline diamonds.
Elasticity and tensile strength
Usually, attempting to deform bulk diamond crystal by tension or bending results in brittle fracture. However, when single-crystalline diamond is in the form of micro- or nanoscale wires or needles (roughly 100–300 nanometers in diameter and micrometers long), it can be elastically stretched by as much as 9–10 percent tensile strain without failure, with a maximum local tensile stress very close to the theoretical limit for this material.
Electrical conductivity
Other specialized applications also exist or are being developed, including use as semiconductors: some blue diamonds are natural semiconductors, in contrast to most diamonds, which are excellent electrical insulators. The conductivity and blue color originate from boron impurity. Boron substitutes for carbon atoms in the diamond lattice, donating a hole into the valence band.
Substantial conductivity is commonly observed in nominally undoped diamond grown by chemical vapor deposition. This conductivity is associated with hydrogen-related species adsorbed at the surface, and it can be removed by annealing or other surface treatments.
Thin needles of diamond can be made to vary their electronic band gap from the normal 5.6 eV to near zero by selective mechanical deformation.
High-purity diamond wafers 5 cm in diameter exhibit perfect resistance in one direction and perfect conductance in the other, creating the possibility of using them for quantum data storage. The material contains only 3 parts per million of nitrogen. The diamond was grown on a stepped substrate, which eliminated cracking.
Surface property
Diamonds are naturally lipophilic and hydrophobic, which means the diamond surface cannot be wetted by water but is easily wetted by and stuck to oil. This property can be utilized to extract diamonds using oil when making synthetic diamonds. However, when diamond surfaces are chemically modified with certain ions, they are expected to become so hydrophilic that they can stabilize multiple layers of water ice at human body temperature.
The surface of diamonds is partially oxidized. The oxidized surface can be reduced by heat treatment under hydrogen flow; that is, the heat treatment partially removes oxygen-containing functional groups. However, diamond (sp3 carbon) is unstable above a certain temperature under atmospheric pressure, gradually converting to sp2 carbon, so the reducing heat treatment must be carried out below that temperature.
Chemical stability
At room temperature, diamonds do not react with any chemical reagents including strong acids and bases.
In an atmosphere of pure oxygen, diamond has an ignition point that varies with crystal size; smaller crystals tend to burn more easily. A burning diamond increases in temperature from red to white heat and burns with a pale blue flame, and continues to burn after the source of heat is removed. By contrast, in air the combustion will cease as soon as the heat is removed, because the oxygen is diluted with nitrogen. A clear, flawless, transparent diamond is completely converted to carbon dioxide; any impurities will be left as ash. Heat generated from cutting a diamond will not ignite the diamond, and neither will a cigarette lighter, but house fires and blowtorches are hot enough. Jewelers must be careful when molding the metal in a diamond ring.
Diamond powder of an appropriate grain size (around 50 microns) burns with a shower of sparks after ignition from a flame. Consequently, pyrotechnic compositions based on synthetic diamond powder can be prepared. The resulting sparks are of the usual red-orange color, comparable to charcoal, but show a very linear trajectory, which is explained by their high density. Diamond also reacts with fluorine gas at elevated temperatures.
Color
Diamond has a wide band gap of about 5.5 eV, corresponding to the deep ultraviolet wavelength of 225 nanometers. This means that pure diamond should transmit visible light and appear as a clear colorless crystal. Colors in diamond originate from lattice defects and impurities. The diamond crystal lattice is exceptionally strong, and only atoms of nitrogen, boron, and hydrogen can be introduced into diamond during growth at significant concentrations (up to the atomic-percent level). Transition metals nickel and cobalt, which are commonly used for growth of synthetic diamond by high-pressure high-temperature techniques, have been detected in diamond as individual atoms; the maximum concentration is 0.01% for nickel and even less for cobalt. Virtually any element can be introduced into diamond by ion implantation.
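The band gap and the absorption-edge wavelength quoted here are related by E = hc/λ; using the convenient constant hc ≈ 1240 eV·nm, λ ≈ 1240/5.5 ≈ 225 nm, matching the deep-ultraviolet edge stated above.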
Nitrogen is by far the most common impurity found in gem diamonds and is responsible for the yellow and brown color in diamonds. Boron is responsible for the blue color. Color in diamond has two additional sources: irradiation (usually by alpha particles), that causes the color in green diamonds, and plastic deformation of the diamond crystal lattice. Plastic deformation is the cause of color in some brown and perhaps pink and red diamonds. In order of increasing rarity, yellow diamond is followed by brown, colorless, then by blue, green, black, pink, orange, purple, and red. "Black", or carbonado, diamonds are not truly black, but rather contain numerous dark inclusions that give the gems their dark appearance. Colored diamonds contain impurities or structural defects that cause the coloration, while pure or nearly pure diamonds are transparent and colorless. Most diamond impurities replace a carbon atom in the crystal lattice, known as a carbon flaw. The most common impurity, nitrogen, causes a slight to intense yellow coloration depending upon the type and concentration of nitrogen present. The Gemological Institute of America (GIA) classifies low saturation yellow and brown diamonds as diamonds in the normal color range, and applies a grading scale from "D" (colorless) to "Z" (light yellow). Yellow diamonds of high color saturation or a different color, such as pink or blue, are called fancy colored diamonds and fall under a different grading scale.
In 2008, the Wittelsbach Diamond, a blue diamond once belonging to the King of Spain, fetched over US$24 million at a Christie's auction. In May 2009, a blue diamond fetched the highest price per carat ever paid for a diamond when it was sold at auction for 10.5 million Swiss francs (6.97 million euros, or US$9.5 million at the time). That record was, however, beaten the same year: a vivid pink diamond was sold for US$10.8 million in Hong Kong on December 1, 2009.
Clarity
Clarity is one of the four Cs (color, clarity, cut, and carat weight) that help in identifying the quality of diamonds. The Gemological Institute of America (GIA) developed an 11-grade clarity scale to assess the quality of a diamond for its sale value. The GIA clarity scale spans from Flawless (FL) to Included (I), with Internally Flawless (IF), Very, Very Slightly Included (VVS), Very Slightly Included (VS), and Slightly Included (SI) in between. Impurities in natural diamonds are due to the presence of natural minerals and oxides. The clarity scale grades the diamond based on the color, size, location, and quantity of inclusions visible under 10x magnification. Inclusions in diamond can be detected by optical methods: images are taken before enhancement, the regions containing inclusions are identified, and artifacts from the diamond facets and image noise are removed.
Fluorescence
Between 25% and 35% of natural diamonds exhibit some degree of fluorescence when examined under invisible long-wave ultraviolet light or higher-energy radiation sources such as X-rays and lasers. Incandescent lighting will not cause a diamond to fluoresce. Diamonds can fluoresce in a variety of colors including blue (most common), orange, yellow, white, green, and, very rarely, red and purple. Although the causes are not well understood, variations in the atomic structure, such as in the number of nitrogen atoms present, are thought to contribute to the phenomenon.
Thermal conductivity
Diamonds can be identified by their high thermal conductivity (about 900–2320 W/(m·K)). Their high refractive index is also indicative, but other materials have similar refractivity.
Geology
Diamonds are extremely rare, with concentrations of at most parts per billion in source rock. Before the 20th century, most diamonds were found in alluvial deposits. Loose diamonds are also found along existing and ancient shorelines, where they tend to accumulate because of their size and density. Rarely, they have been found in glacial till (notably in Wisconsin and Indiana), but these deposits are not of commercial quality. These types of deposit were derived from localized igneous intrusions through weathering and transport by wind or water.
Most diamonds come from the Earth's mantle, and most of this section discusses those diamonds. However, there are other sources. Some blocks of the crust, or terranes, have been buried deep enough as the crust thickened so they experienced ultra-high-pressure metamorphism. These have evenly distributed microdiamonds that show no sign of transport by magma. In addition, when meteorites strike the ground, the shock wave can produce high enough temperatures and pressures for microdiamonds and nanodiamonds to form. Impact-type microdiamonds can be used as an indicator of ancient impact craters. Popigai impact structure in Russia may have the world's largest diamond deposit, estimated at trillions of carats, and formed by an asteroid impact.
A common misconception is that diamonds form from highly compressed coal. Coal is formed from buried prehistoric plants, and most diamonds that have been dated are far older than the first land plants. It is possible that diamonds can form from coal in subduction zones, but diamonds formed in this way are rare, and the carbon source is more likely carbonate rocks and organic carbon in sediments, rather than coal.
Surface distribution
Diamonds are far from evenly distributed over the Earth. A rule of thumb known as Clifford's rule states that they are almost always found in kimberlites on the oldest part of cratons, the stable cores of continents with typical ages of 2.5 billion years or more. However, there are exceptions. The Argyle diamond mine in Australia, the largest producer of diamonds by weight in the world, is located in a mobile belt, also known as an orogenic belt, a weaker zone surrounding the central craton that has undergone compressional tectonics. Instead of kimberlite, the host rock is lamproite. Lamproites with diamonds that are not economically viable are also found in the United States, India, and Australia. In addition, diamonds in the Wawa belt of the Superior province in Canada and microdiamonds in the island arc of Japan are found in a type of rock called lamprophyre.
Kimberlites can be found in narrow (1 to 4 meters) dikes and sills, and in pipes with diameters that range from about 75 m to 1.5 km. Fresh rock is dark bluish green to greenish gray, but after exposure rapidly turns brown and crumbles. It is a hybrid rock with a chaotic mixture of small minerals and rock fragments (clasts) up to the size of watermelons. They are a mixture of xenocrysts and xenoliths (minerals and rocks carried up from the lower crust and mantle), pieces of surface rock, altered minerals such as serpentine, and new minerals that crystallized during the eruption. The texture varies with depth. The composition forms a continuum with carbonatites, but the latter have too much oxygen for carbon to exist in a pure form. Instead, it is locked up in the mineral calcite (CaCO3).
All three of the diamond-bearing rocks (kimberlite, lamproite and lamprophyre) lack certain minerals (melilite and kalsilite) that are incompatible with diamond formation. In kimberlite, olivine is large and conspicuous, while lamproite has Ti-phlogopite and lamprophyre has biotite and amphibole. They are all derived from magma types that erupt rapidly from small amounts of melt, are rich in volatiles and magnesium oxide, and are less oxidizing than more common mantle melts such as basalt. These characteristics allow the melts to carry diamonds to the surface before they dissolve.
Exploration
Kimberlite pipes can be difficult to find. They weather quickly (within a few years after exposure) and tend to have lower topographic relief than surrounding rock. If they are visible in outcrops, the diamonds are never visible because they are so rare. In any case, kimberlites are often covered with vegetation, sediments, soils, or lakes. In modern searches, geophysical methods such as aeromagnetic surveys, electrical resistivity, and gravimetry, help identify promising regions to explore. This is aided by isotopic dating and modeling of the geological history. Then surveyors must go to the area and collect samples, looking for kimberlite fragments or indicator minerals. The latter have compositions that reflect the conditions where diamonds form, such as extreme melt depletion or high pressures in eclogites. However, indicator minerals can be misleading; a better approach is geothermobarometry, where the compositions of minerals are analyzed as if they were in equilibrium with mantle minerals.
Finding kimberlites requires persistence, and only a small fraction contain diamonds that are commercially viable. The only major discoveries since about 1980 have been in Canada. Since existing mines have lifetimes of as little as 25 years, there could be a shortage of new natural diamonds in the future.
Ages
Diamonds are dated by analyzing inclusions using the decay of radioactive isotopes. Depending on the elemental abundances, one can look at the decay of rubidium to strontium, samarium to neodymium, uranium to lead, argon-40 to argon-39, or rhenium to osmium. Those found in kimberlites have ages ranging from 1 to 3.5 billion years, and there can be multiple ages in the same kimberlite, indicating multiple episodes of diamond formation. The kimberlites themselves are much younger. Most of them have ages between tens of millions and 300 million years old, although there are some older exceptions (Argyle, Premier and Wawa). Thus, the kimberlites formed independently of the diamonds and served only to transport them to the surface. Kimberlites are also much younger than the cratons they have erupted through. The reason for the lack of older kimberlites is unknown, but it suggests there was some change in mantle chemistry or tectonics. No kimberlite has erupted in human history.
Origin in mantle
Most gem-quality diamonds come from depths of 150–250 km in the lithosphere. Such depths occur below cratons in mantle keels, the thickest part of the lithosphere. These regions have high enough pressure and temperature to allow diamonds to form and they are not convecting, so diamonds can be stored for billions of years until a kimberlite eruption samples them.
Host rocks in a mantle keel include harzburgite and lherzolite, two types of peridotite. The most dominant rock type in the upper mantle, peridotite is an igneous rock consisting mostly of the minerals olivine and pyroxene; it is low in silica and high in magnesium. However, diamonds in peridotite rarely survive the trip to the surface. Another common source that does keep diamonds intact is eclogite, a metamorphic rock that typically forms from basalt as an oceanic plate plunges into the mantle at a subduction zone.
A smaller fraction of diamonds (about 150 have been studied) come from depths of 330–660 km, a region that includes the transition zone. They formed in eclogite but are distinguished from diamonds of shallower origin by inclusions of majorite (a form of garnet with excess silicon). A similar proportion of diamonds comes from the lower mantle at depths between 660 and 800 km.
Diamond is thermodynamically stable at high pressures and temperatures, with the phase transition from graphite occurring at greater temperatures as the pressure increases. Thus, underneath continents it becomes stable at temperatures of 950 degrees Celsius and pressures of 4.5 gigapascals, corresponding to depths of 150 kilometers or greater. In subduction zones, which are colder, it becomes stable at temperatures of 800 °C and pressures of 3.5 gigapascals. At depths greater than 240 km, iron–nickel metal phases are present and carbon is likely to be either dissolved in them or in the form of carbides. Thus, the deeper origin of some diamonds may reflect unusual growth environments.
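The depth figures can be checked against the pressures with the lithostatic relation P = ρgh; a rough sketch, in which the mean density of the overlying rock is an assumed round number:

# Rough depth corresponding to a given lithostatic pressure, P = rho*g*h
# (a back-of-the-envelope check of the 4.5 GPa ~ 150 km figure above;
# the overburden density is an assumption, not a value from the text).
rho = 3300.0          # kg/m^3, assumed mean density of overlying rock
g = 9.8               # m/s^2
P = 4.5e9             # Pa (4.5 GPa)
depth_km = P / (rho * g) / 1000.0
print(f"{depth_km:.0f} km")   # ~139 km, consistent with "150 km or greater"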
In 2018 the first known natural samples of a phase of ice called Ice VII were found as inclusions in diamond samples. The inclusions formed at depths between 400 and 800 km, straddling the upper and lower mantle, and provide evidence for water-rich fluid at these depths.
Carbon sources
The mantle has roughly one billion gigatonnes of carbon (for comparison, the atmosphere-ocean system has about 44,000 gigatonnes). Carbon has two stable isotopes, 12C and 13C, in a ratio of approximately 99:1 by mass. This ratio has a wide range in meteorites, which implies that it also varied considerably in the early Earth. It can also be altered by surface processes like photosynthesis. The fraction is generally compared to a standard sample using a ratio δ13C expressed in parts per thousand. Common rocks from the mantle such as basalts, carbonatites, and kimberlites have ratios between −8 and −2. On the surface, organic sediments have an average of −25 while carbonates have an average of 0.
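For reference, δ13C is the conventional per-mil deviation of a sample's 13C/12C ratio R from that of a standard: δ13C = (R_sample / R_standard − 1) × 1000 ‰, so negative values indicate depletion in 13C relative to the standard.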
Populations of diamonds from different sources have distributions of δ13C that vary markedly. Peridotitic diamonds are mostly within the typical mantle range; eclogitic diamonds have values from −40 to +3, although the peak of the distribution is in the mantle range. This variability implies that they are not formed from carbon that is primordial (having resided in the mantle since the Earth formed). Instead, they are the result of tectonic processes, although (given the ages of diamonds) not necessarily the same tectonic processes that act in the present. Diamond-forming carbon originates in the top 700 kilometers (430 mi) or so of the upper mantle closest to the surface, known as the asthenosphere.
Formation and growth
Diamonds in the mantle form through a metasomatic process where a C–O–H–N–S fluid or melt dissolves minerals in a rock and replaces them with new minerals. (The vague term C–O–H–N–S is commonly used because the exact composition is not known.) Diamonds form from this fluid either by reduction of oxidized carbon (e.g., CO2 or CO3) or oxidation of a reduced phase such as methane.
Using probes such as polarized light, photoluminescence, and cathodoluminescence, a series of growth zones can be identified in diamonds. The characteristic pattern in diamonds from the lithosphere involves a nearly concentric series of zones with very thin oscillations in luminescence and alternating episodes where the carbon is resorbed by the fluid and then grown again. Diamonds from below the lithosphere have a more irregular, almost polycrystalline texture, reflecting the higher temperatures and pressures as well as the transport of the diamonds by convection.
Transport to the surface
Geological evidence supports a model in which kimberlite magma rises at 4–20 meters per second, creating an upward path by hydraulic fracturing of the rock. As the pressure decreases, a vapor phase exsolves from the magma, and this helps to keep the magma fluid. At the surface, the initial eruption explodes out through fissures at high speeds. Then, at lower pressures, the rock is eroded, forming a pipe and producing fragmented rock (breccia). As the eruption wanes, there is a pyroclastic phase, and then metamorphism and hydration produce serpentinites.
Double diamonds
In rare cases, diamonds have been found that contain a cavity within which is a second diamond. The first double diamond, the Matryoshka, was found by Alrosa in Yakutia, Russia, in 2019. Another one was found in the Ellendale Diamond Field in Western Australia in 2021.
In space
Although diamonds on Earth are rare, they are very common in space. In meteorites, about three percent of the carbon is in the form of nanodiamonds, having diameters of a few nanometers. Sufficiently small diamonds can form in the cold of space because their lower surface energy makes them more stable than graphite. The isotopic signatures of some nanodiamonds indicate they were formed outside the Solar System in stars.
High pressure experiments predict that large quantities of diamonds condense from methane into a "diamond rain" on the ice giant planets Uranus and Neptune. Some extrasolar planets may be almost entirely composed of diamond.
Diamonds may exist in carbon-rich stars, particularly white dwarfs. One theory for the origin of carbonado, the toughest form of diamond, is that it originated in a white dwarf or supernova. Diamonds formed in stars may have been the first minerals.
Industry
The most familiar uses of diamonds today are as gemstones used for adornment, and as industrial abrasives for cutting hard materials. The markets for gem-grade and industrial-grade diamonds value diamonds differently.
Gem-grade diamonds
The dispersion of white light into spectral colors is the primary gemological characteristic of gem diamonds. In the 20th century, experts in gemology developed methods of grading diamonds and other gemstones based on the characteristics most important to their value as a gem. Four characteristics, known informally as the four Cs, are now commonly used as the basic descriptors of diamonds: these are its mass in carats (a carat being equal to 0.2 grams), cut (quality of the cut is graded according to proportions, symmetry and polish), color (how close to white or colorless, or, for fancy diamonds, how intense the hue is), and clarity (how free it is from inclusions). A large, flawless diamond is known as a paragon.
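As a small worked example of the carat convention (an illustrative sketch; the function name is ours, not an industry API):

```python
# Worked example of the carat convention described above.
# 1 carat = 0.2 grams (from the text); the function name is ours.
CARAT_IN_GRAMS = 0.2

def carats_to_grams(carats: float) -> float:
    """Convert a gem's mass in carats to grams."""
    return carats * CARAT_IN_GRAMS

print(f"{carats_to_grams(1.5):.2f} g")  # a 1.5-carat stone weighs 0.30 g
```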
A large trade in gem-grade diamonds exists. Although most gem-grade diamonds are sold newly polished, there is a well-established market for resale of polished diamonds (e.g. pawnbroking, auctions, second-hand jewelry stores, diamantaires, bourses, etc.). One hallmark of the trade in gem-quality diamonds is its remarkable concentration: wholesale trade and diamond cutting are limited to just a few locations; in 2003, 92% of the world's diamonds were cut and polished in Surat, India. Other important centers of diamond cutting and trading are the Antwerp diamond district in Belgium, where the International Gemological Institute is based, London, the Diamond District in New York City, the Diamond Exchange District in Tel Aviv and Amsterdam. One contributory factor is the geological nature of diamond deposits: several large primary kimberlite-pipe mines each account for significant portions of market share (such as the Jwaneng mine in Botswana, a single large-pit mine). Secondary alluvial diamond deposits, on the other hand, tend to be fragmented amongst many different operators because they can be dispersed over many hundreds of square kilometers (e.g., alluvial deposits in Brazil).
The production and distribution of diamonds is largely consolidated in the hands of a few key players, and concentrated in traditional diamond trading centers, the most important being Antwerp, where 80% of all rough diamonds, 50% of all cut diamonds and more than 50% of all rough, cut and industrial diamonds combined are handled. This makes Antwerp a de facto "world diamond capital". The city of Antwerp also hosts the Antwerpsche Diamantkring, created in 1929 to become the first and biggest diamond bourse dedicated to rough diamonds. Another important diamond center is New York City, where almost 80% of the world's diamonds are sold, including auction sales.
The De Beers company, as the world's largest diamond mining company, holds a dominant position in the industry, and has done so since soon after its founding in 1888 by the British businessman Cecil Rhodes. De Beers is currently the world's largest operator of diamond production facilities (mines) and distribution channels for gem-quality diamonds. The Diamond Trading Company (DTC) is a subsidiary of De Beers and markets rough diamonds from De Beers-operated mines. De Beers and its subsidiaries own mines that produce some 40% of annual world diamond production. For most of the 20th century over 80% of the world's rough diamonds passed through De Beers, but by 2001–2009 the figure had decreased to around 45%, and by 2013 the company's market share had further decreased to around 38% in value terms and even less by volume. De Beers sold off the vast majority of its diamond stockpile in the late 1990s – early 2000s and the remainder largely represents working stock (diamonds that are being sorted before sale). This was well documented in the press but remains little known to the general public.
As a part of reducing its influence, De Beers withdrew from purchasing diamonds on the open market in 1999 and ceased, at the end of 2008, purchasing Russian diamonds mined by the largest Russian diamond company Alrosa. As of January 2011, De Beers states that it only sells diamonds from the following four countries: Botswana, Namibia, South Africa and Canada. Alrosa had to suspend its sales in October 2008 due to the global economic crisis, but the company reported that it had resumed selling rough diamonds on the open market by October 2009. Apart from Alrosa, other important diamond mining companies include BHP, which is the world's largest mining company; Rio Tinto, the owner of the Argyle (100%), Diavik (60%), and Murowa (78%) diamond mines; and Petra Diamonds, the owner of several major diamond mines in Africa.
Further down the supply chain, members of The World Federation of Diamond Bourses (WFDB) act as a medium for wholesale diamond exchange, trading both polished and rough diamonds. The WFDB consists of independent diamond bourses in major cutting centers such as Tel Aviv, Antwerp, Johannesburg and other cities across the US, Europe and Asia. In 2000, the WFDB and The International Diamond Manufacturers Association established the World Diamond Council to prevent the trading of diamonds used to fund war and inhumane acts. WFDB's additional activities include sponsoring the World Diamond Congress every two years, as well as the establishment of the International Diamond Council (IDC) to oversee diamond grading.
Once purchased by Sightholders (which is a trademark term referring to the companies that have a three-year supply contract with DTC), diamonds are cut and polished in preparation for sale as gemstones ('industrial' stones are regarded as a by-product of the gemstone market; they are used for abrasives). The cutting and polishing of rough diamonds is a specialized skill that is concentrated in a limited number of locations worldwide. Traditional diamond cutting centers are Antwerp, Amsterdam, Johannesburg, New York City, and Tel Aviv. Recently, diamond cutting centers have been established in China, India, Thailand, Namibia and Botswana. Cutting centers with lower cost of labor, notably Surat in Gujarat, India, handle a larger number of smaller carat diamonds, while smaller quantities of larger or more valuable diamonds are more likely to be handled in Europe or North America. The recent expansion of this industry in India, employing low cost labor, has allowed smaller diamonds to be prepared as gems in greater quantities than was previously economically feasible.
Diamonds prepared as gemstones are sold on diamond exchanges called bourses. There are 28 registered diamond bourses in the world. Bourses are the final tightly controlled step in the diamond supply chain; wholesalers and even retailers are able to buy relatively small lots of diamonds at the bourses, after which they are prepared for final sale to the consumer. Diamonds can be sold already set in jewelry, or sold unset ("loose"). According to Rio Tinto, in 2002 the diamonds produced and released to the market were valued at US$9 billion as rough diamonds, US$14 billion after being cut and polished, US$28 billion in wholesale diamond jewelry, and US$57 billion in retail sales.
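A quick back-of-envelope pass over those 2002 figures shows the value multiplier at each stage of the chain (using only the numbers quoted above):

```python
# Stage-to-stage value multipliers implied by the Rio Tinto 2002
# figures quoted above (US$ billions at each stage of the chain).
stages = [("rough", 9), ("cut and polished", 14),
          ("wholesale jewelry", 28), ("retail", 57)]

for (a, va), (b, vb) in zip(stages, stages[1:]):
    print(f"{a} -> {b}: x{vb / va:.2f}")
# rough -> cut and polished: x1.56
# cut and polished -> wholesale jewelry: x2.00
# wholesale jewelry -> retail: x2.04
```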
Cutting
Mined rough diamonds are converted into gems through a multi-step process called "cutting". Diamonds are extremely hard, but also brittle, and can be split by a single blow. Therefore, diamond cutting is traditionally considered a delicate procedure requiring skill, scientific knowledge, tools and experience. Its final goal is to produce a faceted jewel in which the specific angles between the facets optimize the diamond's luster, that is, its dispersion of white light, while the number and area of facets determine the weight of the final product. The weight reduction upon cutting is significant and can be of the order of 50%. Several possible shapes are considered, but the final decision is often determined not only by scientific but also by practical considerations. For example, the diamond might be intended for display or for wear, in a ring or a necklace, single or surrounded by other gems of a certain color and shape. Some shapes are considered classical, such as round, pear, marquise, oval, and hearts and arrows diamonds. Others are proprietary, produced by particular companies, for example the Phoenix, Cushion, and Sole Mio diamonds.
The most time-consuming part of the cutting is the preliminary analysis of the rough stone. It must address a large number of issues, carries much responsibility, and in the case of unique diamonds can last years. The following issues are considered:
The hardness of diamond and its ability to cleave strongly depend on the crystal orientation. Therefore, the crystallographic structure of the diamond to be cut is analyzed using X-ray diffraction to choose the optimal cutting directions.
Most diamonds contain visible non-diamond inclusions and crystal flaws. The cutter has to decide which flaws are to be removed by the cutting and which could be kept.
Splitting a diamond with a hammer blow is risky: a well-calculated, angled blow can cleave the diamond piece by piece, but a miscalculation can ruin the stone. Alternatively, it can be cut with a diamond saw, which is a more reliable method.
After initial cutting, the diamond is shaped in numerous stages of polishing. Unlike cutting, which is a high-stakes but quick operation, polishing removes material by gradual erosion and is extremely time-consuming. The associated technique is well developed; it is considered routine and can be performed by technicians. After polishing, the diamond is reexamined for possible flaws, either remaining or induced by the process. Those flaws are concealed through various diamond enhancement techniques, such as repolishing, crack filling, or clever arrangement of the stone in the jewelry. Remaining non-diamond inclusions are removed through laser drilling and filling of the voids produced.
Marketing
Marketing has significantly affected the image of diamond as a valuable commodity.
N. W. Ayer & Son, the advertising firm retained by De Beers in the mid-20th century, succeeded in reviving the American diamond market and the firm created new markets in countries where no diamond tradition had existed before. N. W. Ayer's marketing included product placement, advertising focused on the diamond product itself rather than the De Beers brand, and associations with celebrities and royalty. Without advertising the De Beers brand, De Beers was advertising its competitors' diamond products as well, but this was not a concern as De Beers dominated the diamond market throughout the 20th century. De Beers' market share dipped temporarily to second place in the global market below Alrosa in the aftermath of the global economic crisis of 2008, down to less than 29% in terms of carats mined, rather than sold. The campaign lasted for decades but was effectively discontinued by early 2011. De Beers still advertises diamonds, but the advertising now mostly promotes its own brands, or licensed product lines, rather than completely "generic" diamond products. The campaign was perhaps best captured by the slogan "a diamond is forever". This slogan is now being used by De Beers Diamond Jewelers, a jewelry firm which is a 50/50% joint venture between the De Beers mining company and LVMH, the luxury goods conglomerate.
Brown-colored diamonds constituted a significant part of diamond production and were predominantly used for industrial purposes. They were seen as worthless for jewelry (not even being assessed on the diamond color scale). After the development of the Argyle diamond mine in Australia in 1986, and subsequent marketing, brown diamonds have become acceptable gems. The change was mostly due to the numbers: the Argyle mine accounts for about one-third of the global production of natural diamonds, and 80% of Argyle diamonds are brown.
Industrial-grade diamonds
Industrial diamonds are valued mostly for their hardness and thermal conductivity, making many of the gemological characteristics of diamonds, such as the 4 Cs, irrelevant for most applications. Eighty percent of mined diamonds are unsuitable for use as gemstones and are used industrially. In addition to mined diamonds, synthetic diamonds found industrial applications almost immediately after their invention in the 1950s; as of 2014, some 90% of the world's synthetic diamond output was produced in China. Approximately 90% of diamond grinding grit is currently of synthetic origin.
The boundary between gem-quality diamonds and industrial diamonds is poorly defined and partly depends on market conditions (for example, if demand for polished diamonds is high, some lower-grade stones will be polished into low-quality or small gemstones rather than being sold for industrial use). Within the category of industrial diamonds, there is a sub-category comprising the lowest-quality, mostly opaque stones, which are known as bort.
Industrial use of diamonds has historically been associated with their hardness, which makes diamond the ideal material for cutting and grinding tools. As the hardest known naturally occurring material, diamond can be used to polish, cut, or wear away any material, including other diamonds. Common industrial applications of this property include diamond-tipped drill bits and saws, and the use of diamond powder as an abrasive. Less expensive industrial-grade diamonds (bort), with more flaws and poorer color than gems, are used for such purposes. Diamond is not suitable for machining ferrous alloys at high speeds, as carbon is soluble in iron at the high temperatures created by high-speed machining, leading to greatly increased wear on diamond tools compared to alternatives.
Specialized applications include use in laboratories as containment for high-pressure experiments (see diamond anvil cell), high-performance bearings, and limited use in specialized windows. With the continuing advances being made in the production of synthetic diamonds, future applications are becoming feasible. The high thermal conductivity of diamond makes it suitable as a heat sink for integrated circuits in electronics.
Mining
Diamonds mined each year have a total value of nearly US$9 billion; a substantially larger quantity, by carat, is synthesized annually.
Roughly 49% of diamonds originate from Central and Southern Africa, although significant sources of the mineral have been discovered in Canada, India, Russia, Brazil, and Australia. They are mined from kimberlite and lamproite volcanic pipes, which can bring diamond crystals, originating from deep within the Earth where high pressures and temperatures enable them to form, to the surface. The mining and distribution of natural diamonds are subjects of frequent controversy such as concerns over the sale of blood diamonds or conflict diamonds by African paramilitary groups. The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly concentrated in a small number of locations around the world.
Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during which care is required not to destroy larger diamonds, and then sorted by density. Today, diamonds are located in the diamond-rich density fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a stronger tendency to stick to grease than the other minerals in the ore.
Historically, diamonds were found only in alluvial deposits in Guntur and Krishna district of the Krishna River delta in Southern India. India led the world in diamond production from the time of their discovery in approximately the 9th century BC to the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late 18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were found in 1725. Currently, one of the most prominent Indian mines is located at Panna.
Diamond extraction from primary deposits (kimberlites and lamproites) started in the 1870s after the discovery of the Diamond Fields in South Africa. Production has increased over time, and a large accumulated total has been mined since that date. Twenty percent of that amount has been mined in the last five years, and during the last 10 years, nine new mines have started production; four more are waiting to be opened soon. Most of these mines are located in Canada, Zimbabwe, and Angola, with one in Russia.
In the U.S., diamonds have been found in Arkansas, Colorado, New Mexico, Wyoming, and Montana. In 2004, the discovery of a microscopic diamond in the U.S. led to the January 2008 bulk-sampling of kimberlite pipes in a remote part of Montana. The Crater of Diamonds State Park in Arkansas is open to the public, and is the only mine in the world where members of the public can dig for diamonds.
Today, most commercially viable diamond deposits are in Russia (mostly in Sakha Republic, for example Mir pipe and Udachnaya pipe), Botswana, Australia (Northern and Western Australia) and the Democratic Republic of the Congo. In 2005, Russia produced almost one-fifth of the global diamond output, according to the British Geological Survey. Australia boasts the richest diamantiferous pipe, with production from the Argyle diamond mine reaching peak levels of 42 metric tons per year in the 1990s. There are also commercial deposits being actively mined in the Northwest Territories of Canada and Brazil. Diamond prospectors continue to search the globe for diamond-bearing kimberlite and lamproite pipes.
Political issues
In some of the more politically unstable central African and west African countries, revolutionary groups have taken control of diamond mines, using proceeds from diamond sales to finance their operations. Diamonds sold through this process are known as conflict diamonds or blood diamonds.
In response to public concerns that their diamond purchases were contributing to war and human rights abuses in central and western Africa, the United Nations, the diamond industry and diamond-trading nations introduced the Kimberley Process in 2002. The Kimberley Process aims to ensure that conflict diamonds do not become intermixed with the diamonds not controlled by such rebel groups. This is done by requiring diamond-producing countries to provide proof that the money they make from selling the diamonds is not used to fund criminal or revolutionary activities. Although the Kimberley Process has been moderately successful in limiting the number of conflict diamonds entering the market, some still find their way in. According to the International Diamond Manufacturers Association, conflict diamonds constitute 2–3% of all diamonds traded. Two major flaws still hinder the effectiveness of the Kimberley Process: (1) the relative ease of smuggling diamonds across African borders, and (2) the violent nature of diamond mining in nations that are not in a technical state of war and whose diamonds are therefore considered "clean".
The Canadian Government has set up a body known as the Canadian Diamond Code of Conduct to help authenticate Canadian diamonds. This is a stringent tracking system of diamonds and helps protect the "conflict free" label of Canadian diamonds.
Mineral resource exploitation in general causes irreversible environmental damage, which must be weighed against the socio-economic benefits to a country.
Synthetics, simulants, and enhancements
Synthetics
Synthetic diamonds are diamonds manufactured in a laboratory, as opposed to diamonds mined from the Earth. The gemological and industrial uses of diamond have created a large demand for rough stones. This demand has been satisfied in large part by synthetic diamonds, which have been manufactured by various processes for more than half a century. However, in recent years it has become possible to produce gem-quality synthetic diamonds of significant size. It is possible to make colorless synthetic gemstones that, on a molecular level, are identical to natural stones and so visually similar that only a gemologist with special equipment can tell the difference.
The majority of commercially available synthetic diamonds are yellow and are produced by so-called high-pressure high-temperature (HPHT) processes. The yellow color is caused by nitrogen impurities. Other colors may also be reproduced, such as blue, green or pink; these result from the addition of boron or from irradiation after synthesis.
Another popular method of growing synthetic diamond is chemical vapor deposition (CVD). The growth occurs under low pressure (below atmospheric pressure). It involves feeding a mixture of gases (typically a small fraction of methane in hydrogen) into a chamber and splitting them into chemically active radicals in a plasma ignited by microwaves, hot filament, arc discharge, welding torch, or laser. This method is mostly used for coatings, but can also produce single crystals several millimeters in size.
As of 2010, nearly all 5,000 million carats (1,000 tonnes) of synthetic diamonds produced per year are for industrial use. Around 50% of the 133 million carats of natural diamonds mined per year end up in industrial use. Mining companies' expenses average 40 to 60 US dollars per carat for natural colorless diamonds; producing synthetic gem-quality colorless diamonds costs considerably more per carat. However, a purchaser is more likely to encounter a synthetic when looking for a fancy-colored diamond because only 0.01% of natural diamonds are fancy-colored, while most synthetic diamonds are colored in some way.
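The reasoning in the last sentence can be made concrete with a rough Bayes-style sketch. The 0.01% fancy-color rate for natural diamonds comes from the text above; the synthetic share of the gem market and the coloration rate of synthetics below are hypothetical placeholders, chosen only to illustrate the direction of the effect:

```python
# Back-of-envelope estimate of P(synthetic | fancy-coloured stone).
p_synthetic = 0.02        # ASSUMED gem-market share of synthetics
p_fancy_given_nat = 1e-4  # 0.01% of natural diamonds are fancy-coloured (text)
p_fancy_given_syn = 0.9   # ASSUMED: most synthetics are coloured in some way

p_fancy = (p_synthetic * p_fancy_given_syn
           + (1 - p_synthetic) * p_fancy_given_nat)
p_syn_given_fancy = p_synthetic * p_fancy_given_syn / p_fancy
print(f"P(synthetic | fancy) ~ {p_syn_given_fancy:.1%}")  # ~99.5% here
```

Even with a small assumed synthetic market share, the tiny natural fancy-color rate makes a fancy-colored stone overwhelmingly likely to be synthetic under these numbers.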
Simulants
A diamond simulant is a non-diamond material that is used to simulate the appearance of a diamond, and may be referred to as diamante. Cubic zirconia is the most common. The gemstone moissanite (silicon carbide) also serves as a diamond simulant, though it is more costly to produce than cubic zirconia. Both are produced synthetically.
Enhancements
Diamond enhancements are specific treatments performed on natural or synthetic diamonds (usually those already cut and polished into a gem), which are designed to better the gemological characteristics of the stone in one or more ways. These include laser drilling to remove inclusions, application of sealants to fill cracks, treatments to improve a white diamond's color grade, and treatments to give fancy color to a white diamond.
Coatings are increasingly used to give a diamond simulant such as cubic zirconia a more "diamond-like" appearance. One such substance is diamond-like carbon—an amorphous carbonaceous material that has some physical properties similar to those of the diamond. Advertising suggests that such a coating would transfer some of these diamond-like properties to the coated stone, hence enhancing the diamond simulant. Techniques such as Raman spectroscopy should easily identify such a treatment.
Identification
Early diamond identification tests included a scratch test relying on the superior hardness of diamond. This test is destructive, as one diamond can scratch another, and is rarely used nowadays. Instead, diamond identification relies on diamond's superior thermal conductivity. Electronic thermal probes are widely used in gemological centers to separate diamonds from their imitations. These probes consist of a pair of battery-powered thermistors mounted in a fine copper tip. One thermistor functions as a heating device while the other measures the temperature of the copper tip: if the stone being tested is a diamond, it will conduct the tip's thermal energy rapidly enough to produce a measurable temperature drop. This test takes about two to three seconds.
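The probe's decision logic amounts to a threshold test on the measured temperature drop. A minimal sketch, with a made-up threshold rather than any real instrument's calibration:

```python
# Sketch of the thermal-probe decision logic described above. The
# threshold is a hypothetical placeholder, not a calibration value
# from any real gemological instrument.
TEMP_DROP_THRESHOLD_C = 5.0

def is_probably_diamond(measured_drop_c: float) -> bool:
    """Diamond conducts the tip's heat away rapidly, so a large,
    fast temperature drop is consistent with diamond."""
    return measured_drop_c >= TEMP_DROP_THRESHOLD_C

print(is_probably_diamond(8.2))  # True: consistent with diamond
print(is_probably_diamond(1.1))  # False: likely a simulant
# Note: moissanite also passes this test, as the next paragraph explains.
```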
Whereas the thermal probe can separate diamonds from most of their simulants, distinguishing between various types of diamond, for example synthetic or natural, irradiated or non-irradiated, requires more advanced optical techniques. Those techniques are also used for some diamond simulants, such as silicon carbide, which pass the thermal conductivity test. Optical techniques can distinguish between natural diamonds and synthetic diamonds. They can also identify the vast majority of treated natural diamonds. "Perfect" crystals (at the atomic lattice level) have never been found, so both natural and synthetic diamonds always possess characteristic imperfections, arising from the circumstances of their crystal growth, that allow them to be distinguished from each other.
Laboratories use techniques such as spectroscopy, microscopy, and luminescence under shortwave ultraviolet light to determine a diamond's origin. They also use specially made instruments to aid them in the identification process. Two screening instruments are the DiamondSure and the DiamondView, both produced by the DTC and marketed by the GIA.
Several methods for identifying synthetic diamonds can be performed, depending on the method of production and the color of the diamond. CVD diamonds can usually be identified by an orange fluorescence. D–J colored diamonds can be screened through the Swiss Gemmological Institute's Diamond Spotter. Stones in the D–Z color range can be examined through the DiamondSure UV/visible spectrometer, a tool developed by De Beers. Similarly, natural diamonds usually have minor imperfections and flaws, such as inclusions of foreign material, that are not seen in synthetic diamonds.
Screening devices based on diamond type detection can be used to make a distinction between diamonds that are certainly natural and diamonds that are potentially synthetic. Those potentially synthetic diamonds require more investigation in a specialized lab. Examples of commercial screening devices are D-Screen (WTOCD / HRD Antwerp), Alpha Diamond Analyzer (Bruker / HRD Antwerp), and D-Secure (DRC Techno).
Etymology, earliest use and composition discovery
The name diamond is derived from the Ancient Greek ἀδάμας (adámas), 'proper, unalterable, unbreakable, untamed', from ἀ- (a-), 'not' + δαμάω (damáō), 'to overpower, tame'. Diamonds are thought to have been first recognized and mined in India, where significant alluvial deposits of the stone could be found many centuries ago along the rivers Penner, Krishna, and Godavari. Diamonds have been known in India for at least 3,000 years, but most likely 6,000 years.
Diamonds have been treasured as gemstones since their use as religious icons in ancient India. Their usage in engraving tools also dates to early human history. The popularity of diamonds has risen since the 19th century because of increased supply, improved cutting and polishing techniques, growth in the world economy, and innovative and successful advertising campaigns.
In 1772, the French scientist Antoine Lavoisier used a lens to concentrate the rays of the sun on a diamond in an atmosphere of oxygen, and showed that the only product of the combustion was carbon dioxide, proving that diamond is composed of carbon. Later, in 1797, the English chemist Smithson Tennant repeated and expanded that experiment. By demonstrating that burning diamond and graphite releases the same amount of gas, he established the chemical equivalence of these substances.
Dysprosium
Dysprosium is a chemical element; it has symbol Dy and atomic number 66. It is a rare-earth element in the lanthanide series with a metallic silver luster. Dysprosium is never found in nature as a free element, though, like other lanthanides, it is found in various minerals, such as xenotime. Naturally occurring dysprosium is composed of seven isotopes, the most abundant of which is 164Dy.
Dysprosium was first identified in 1886 by Paul Émile Lecoq de Boisbaudran, but it was not isolated in pure form until the development of ion-exchange techniques in the 1950s. Dysprosium has relatively few applications where it cannot be replaced by other chemical elements. It is used for its high thermal neutron absorption cross-section in making control rods in nuclear reactors, for its high magnetic susceptibility in data-storage applications, and as a component of Terfenol-D (a magnetostrictive material). Soluble dysprosium salts are mildly toxic, while the insoluble salts are considered non-toxic.
Characteristics
Physical properties
Dysprosium is a rare-earth element and has a metallic, bright silver luster. It is quite soft and can be machined without sparking if overheating is avoided. Dysprosium's physical characteristics can be greatly affected by even small amounts of impurities.
Dysprosium and holmium have the highest magnetic strengths of the elements, especially at low temperatures. Dysprosium has a simple ferromagnetic ordering at temperatures below its Curie temperature, at which point it undergoes a first-order phase transition from the orthorhombic crystal structure to hexagonal close-packed (hcp). It then has a helical antiferromagnetic state, in which all of the atomic magnetic moments in a particular basal plane layer are parallel and oriented at a fixed angle to the moments of adjacent layers. This unusual antiferromagnetism transforms into a disordered (paramagnetic) state at a higher transition temperature, and at still higher temperatures the metal transforms from the hcp phase to the body-centered cubic phase.
Chemical properties
Dysprosium metal retains its luster in dry air but it will tarnish slowly in moist air, and it burns readily to form dysprosium(III) oxide:
4 Dy + 3 O2 → 2 Dy2O3
Dysprosium is quite electropositive and reacts slowly with cold water (and quickly with hot water) to form dysprosium hydroxide:
2 Dy (s) + 6 H2O (l) → 2 Dy(OH)3 (aq) + 3 H2 (g)
Dysprosium hydroxide decomposes to form DyO(OH) at elevated temperatures, which then decomposes again to dysprosium(III) oxide.
Dysprosium metal reacts vigorously with all the halogens above 200 °C:
2 Dy (s) + 3 F2 (g) → 2 DyF3 (s) [green]
2 Dy (s) + 3 Cl2 (g) → 2 DyCl3 (s) [white]
2 Dy (s) + 3 Br2 (l) → 2 DyBr3 (s) [white]
2 Dy (s) + 3 I2 (g) → 2 DyI3 (s) [green]
Dysprosium dissolves readily in dilute sulfuric acid to form solutions containing the yellow Dy(III) ions, which exist as a [Dy(OH2)9]3+ complex:
2 Dy (s) + 3 H2SO4 (aq) → 2 Dy3+ (aq) + 3 SO42− (aq) + 3 H2 (g)
The resulting compound, dysprosium(III) sulfate, is noticeably paramagnetic.
Compounds
Dysprosium halides, such as DyF3 and DyBr3, tend to take on a yellow color. Dysprosium oxide, also known as dysprosia, is a white powder that is highly magnetic, more so than iron oxide.
Dysprosium combines with various non-metals at high temperatures to form binary compounds with varying composition and oxidation states +3 and sometimes +2, such as DyN, DyP, DyH2 and DyH3; DyS, DyS2, Dy2S3 and Dy5S7; DyB2, DyB4, DyB6 and DyB12, as well as Dy3C and Dy2C3.
Dysprosium carbonate, Dy2(CO3)3, and dysprosium sulfate, Dy2(SO4)3, result from similar reactions. Most dysprosium compounds are soluble in water, though dysprosium carbonate tetrahydrate (Dy2(CO3)3·4H2O) and dysprosium oxalate decahydrate (Dy2(C2O4)3·10H2O) are both insoluble in water. Two of the most abundant dysprosium carbonates, Dy2(CO3)3·2–3H2O (similar to the mineral tengerite-(Y)), and DyCO3(OH) (similar to minerals kozoite-(La) and kozoite-(Nd)), are known to form via a poorly ordered (amorphous) precursor phase with a formula of Dy2(CO3)3·4H2O. This amorphous precursor consists of highly hydrated spherical nanoparticles of 10–20 nm diameter that are exceptionally stable under dry treatment at ambient and high temperatures.
Dysprosium forms several intermetallics, including the dysprosium stannides.
Isotopes
Naturally occurring dysprosium is composed of seven isotopes: 156Dy, 158Dy, 160Dy, 161Dy, 162Dy, 163Dy, and 164Dy. These are all observationally stable, although only the last two are predicted to be truly stable; the others can theoretically undergo alpha decay. Of the naturally occurring isotopes, 164Dy is the most abundant at 28%, followed by 162Dy at 26%. The least abundant is 156Dy at 0.06%. Dysprosium is the heaviest element with isotopes predicted to be truly stable; the observationally stable isotopes of heavier elements are all predicted to be radioactive.
Twenty-nine radioisotopes have been synthesized, ranging in atomic mass from 138 to 173. The most stable of these is 154Dy, with a half-life of approximately three million years, followed by 159Dy with a half-life of 144.4 days. The least stable is 138Dy, with a half-life of 200 ms. As a general rule, isotopes that are lighter than the stable isotopes tend to decay primarily by β+ decay, while those that are heavier tend to decay by β− decay. However, 154Dy decays primarily by alpha decay, and 152Dy and 159Dy decay primarily by electron capture. Dysprosium also has at least 11 metastable isomers, ranging in atomic mass from 140 to 165. The most stable of these is 165mDy, which has a half-life of 1.257 minutes. 149Dy has two metastable isomers, the second of which, 149m2Dy, has a half-life of 28 ns.
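Those half-lives translate into remaining fractions via the standard decay law N/N0 = 2^(−t/T½); a minimal sketch using the 144.4-day half-life of 159Dy quoted above:

```python
# Remaining fraction of a radioisotope after time t: N/N0 = 2**(-t/T_half).
# Uses the 144.4-day half-life of 159Dy quoted above.
def remaining_fraction(t_days: float, half_life_days: float) -> float:
    return 2 ** (-t_days / half_life_days)

# After one year, about 17% of a 159Dy sample remains.
print(f"{remaining_fraction(365.0, 144.4):.3f}")  # 0.173
```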
History
In 1878, erbium ores were found to contain the oxides of holmium and thulium. French chemist Paul Émile Lecoq de Boisbaudran, while working with holmium oxide, separated dysprosium oxide from it in Paris in 1886. His procedure for isolating the dysprosium involved dissolving dysprosium oxide in acid, then adding ammonia to precipitate the hydroxide. He was only able to isolate dysprosium from its oxide after more than 30 attempts at his procedure. On succeeding, he named the element dysprosium from the Greek dysprositos (δυσπρόσιτος), meaning "hard to get". The element was not isolated in relatively pure form until after the development of ion exchange techniques by Frank Spedding at Iowa State University in the early 1950s.
Due to its role in permanent magnets used for wind turbines, it has been argued that dysprosium will be one of the main objects of geopolitical competition in a world running on renewable energy. But this perspective has been criticised for failing to recognise that most wind turbines do not use permanent magnets and for underestimating the power of economic incentives for expanded production.
In 2021, dysprosium was used to create a two-dimensional supersolid quantum gas.
Occurrence
While dysprosium is never encountered as a free element, it is found in many minerals, including xenotime, fergusonite, gadolinite, euxenite, polycrase, blomstrandine, monazite and bastnäsite, often with erbium and holmium or other rare earth elements. No dysprosium-dominant mineral (that is, with dysprosium prevailing over other rare earths in the composition) has yet been found.
In the high-yttrium versions of these minerals, dysprosium happens to be the most abundant of the heavy lanthanides, comprising up to 7–8% of the concentrate (as compared to about 65% for yttrium). The concentration of Dy in the Earth's crust is about 5.2 mg/kg and in sea water 0.9 ng/L.
Production
Dysprosium is obtained primarily from monazite sand, a mixture of various phosphates. The metal is obtained as a by-product in the commercial extraction of yttrium. In isolating dysprosium, most of the unwanted metals can be removed magnetically or by a flotation process. Dysprosium can then be separated from other rare earth metals by an ion exchange displacement process. The resulting dysprosium ions can then react with either fluorine or chlorine to form dysprosium fluoride, DyF3, or dysprosium chloride, DyCl3. These compounds can be reduced using either calcium or lithium metals in the following reactions:
3 Ca + 2 DyF3 → 2 Dy + 3 CaF2
3 Li + DyCl3 → Dy + 3 LiCl
The components are placed in a tantalum crucible and fired in a helium atmosphere. As the reaction progresses, the resulting halide compounds and molten dysprosium separate due to differences in density. When the mixture cools, the dysprosium can be cut away from the impurities.
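The stoichiometry of the calcium reduction shown above can be checked with standard atomic weights (a sketch; the molar masses are textbook values, not process data from any producer):

```python
# Stoichiometry sketch for the calcium reduction shown above:
# 3 Ca + 2 DyF3 -> 2 Dy + 3 CaF2. Molar masses are standard values (g/mol).
M_DY, M_CA = 162.50, 40.08

def calcium_needed_per_kg_dy() -> float:
    """Grams of Ca consumed per kilogram of Dy metal produced."""
    mol_dy = 1000.0 / M_DY       # moles of Dy in 1 kg
    mol_ca = mol_dy * 3.0 / 2.0  # 3 mol Ca per 2 mol Dy
    return mol_ca * M_CA

print(f"{calcium_needed_per_kg_dy():.0f} g Ca per kg Dy")  # ~370 g
```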
About 100 tonnes of dysprosium are produced worldwide each year, with 99% of that total produced in China. Dysprosium prices have climbed nearly twentyfold, from $7 per pound in 2003, to $130 a pound in late 2010. The price increased to $1,400/kg in 2011 but fell to $240 in 2015, largely due to illegal production in China which circumvented government restrictions.
Currently, most dysprosium is being obtained from the ion-adsorption clay ores of southern China. The Browns Range Project pilot plant, 160 km south-east of Halls Creek, Western Australia, has also begun production.
According to the United States Department of Energy, the wide range of its current and projected uses, together with the lack of any immediately suitable replacement, makes dysprosium the single most critical element for emerging clean energy technologies; even their most conservative projections predicted a shortfall of dysprosium before 2015. As of late 2015, there is a nascent rare earth (including dysprosium) extraction industry in Australia.
Applications
Dysprosium is used, in conjunction with vanadium and other elements, in making laser materials and commercial lighting. Because of dysprosium's high thermal-neutron absorption cross-section, dysprosium-oxide–nickel cermets are used in neutron-absorbing control rods in nuclear reactors. Dysprosium–cadmium chalcogenides are sources of infrared radiation, which is useful for studying chemical reactions. Because dysprosium and its compounds are highly susceptible to magnetization, they are employed in various data-storage applications, such as in hard disks. Dysprosium is increasingly in demand for the permanent magnets used in electric-car motors and wind-turbine generators.
Neodymium–iron–boron magnets can have up to 6% of the neodymium substituted by dysprosium to raise the coercivity for demanding applications, such as drive motors for electric vehicles and generators for wind turbines. This substitution would require up to 100 grams of dysprosium per electric car produced. Based on Toyota's projected 2 million units per year, the use of dysprosium in applications such as this would quickly exhaust its available supply. The dysprosium substitution may also be useful in other applications because it improves the corrosion resistance of the magnets.
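The arithmetic behind "would quickly exhaust its available supply" is direct, combining the per-vehicle figure above with the roughly 100 tonnes of annual world production mentioned earlier in this article:

```python
# Scale of the supply problem described above: per-vehicle dysprosium
# demand versus the ~100 tonnes/year of world production mentioned
# earlier in this article.
grams_per_vehicle = 100
vehicles_per_year = 2_000_000
annual_production_t = 100

demand_t = grams_per_vehicle * vehicles_per_year / 1e6  # grams -> tonnes
print(f"EV demand ~{demand_t:.0f} t/yr vs production ~{annual_production_t} t/yr")
# ~200 t/yr of demand against ~100 t/yr of supply
```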
Dysprosium is one of the components of Terfenol-D, along with iron and terbium. Terfenol-D has the highest room-temperature magnetostriction of any known material; it is employed in transducers, wide-band mechanical resonators, and high-precision liquid-fuel injectors.
Dysprosium is used in dosimeters for measuring ionizing radiation. Crystals of calcium sulfate or calcium fluoride are doped with dysprosium. When these crystals are exposed to radiation, the dysprosium atoms become excited and luminescent. The luminescence can be measured to determine the degree of exposure to which the dosimeter has been subjected.
Nanofibers of dysprosium compounds have high strength and a large surface area. Therefore, they can be used to reinforce other materials and act as a catalyst. Fibers of dysprosium oxide fluoride can be produced by heating an aqueous solution of DyBr3 and NaF to 450 °C at 450 bars for 17 hours. This material is remarkably robust, surviving over 100 hours in various aqueous solutions at temperatures exceeding 400 °C without redissolving or aggregating. Additionally, dysprosium has been used to create a two-dimensional supersolid in a laboratory environment. Supersolids are expected to exhibit unusual properties, including superfluidity.
Dysprosium iodide and dysprosium bromide are used in high-intensity metal-halide lamps. These compounds dissociate near the hot center of the lamp, releasing isolated dysprosium atoms. The latter re-emit light in the green and red part of the spectrum, thereby effectively producing bright light.
Several paramagnetic crystal salts of dysprosium (dysprosium gallium garnet, DGG; dysprosium aluminium garnet, DAG; dysprosium iron garnet, DyIG) are used in adiabatic demagnetization refrigerators.
The trivalent dysprosium ion (Dy3+) has been studied due to its downshifting luminescence properties. Dy-doped yttrium aluminium garnet (Dy:YAG) excited in the ultraviolet region of the electromagnetic spectrum results in the emission of photons of longer wavelength in the visible region. This idea is the basis for a new generation of UV-pumped white light-emitting diodes.
The stable isotopes of dysprosium have been laser cooled and confined in magneto-optical traps for quantum physics experiments. The first Bose and Fermi quantum degenerate gases of an open shell lanthanide were created with dysprosium. Because dysprosium is highly magnetic—indeed it is the most magnetic fermionic element and nearly tied with terbium for most magnetic bosonic atom—such gases serve as the basis for quantum simulation with strongly dipolar atoms.
Due to their strong magnetic properties, dysprosium alloys are used in the marine industry's sound navigation and ranging (SONAR) systems. The inclusion of dysprosium alloys in the design of SONAR transducers and receivers can improve sensitivity and accuracy by providing more stable and efficient magnetic fields.
Precautions
Like many powders, dysprosium powder may present an explosion hazard when mixed with air and when an ignition source is present. Thin foils of the substance can also be ignited by sparks or by static electricity. Dysprosium fires cannot be extinguished with water. It can react with water to produce flammable hydrogen gas. Dysprosium chloride fires can be extinguished with water. Dysprosium fluoride and dysprosium oxide are non-flammable. Dysprosium nitrate, Dy(NO3)3, is a strong oxidizing agent and readily ignites on contact with organic substances.
Soluble dysprosium salts, such as dysprosium chloride and dysprosium nitrate, are mildly toxic when ingested. Based on the toxicity of dysprosium chloride to mice, it is estimated that the ingestion of 500 grams or more could be fatal to a human (cf. a lethal dose of about 300 grams of common table salt for a 100-kilogram human). The insoluble salts are non-toxic.
Desertification
Desertification is a type of gradual land degradation in which fertile land becomes arid desert through a combination of natural processes and human activities.
The immediate cause of desertification is the loss of most vegetation. This is driven by a number of factors, alone or in combination, such as drought, climatic shifts, tillage for agriculture, overgrazing and deforestation for fuel or construction materials. Though vegetation plays a major role in determining the biological composition of the soil, studies have shown that, in many environments, the rate of erosion and runoff decreases exponentially with increased vegetation cover. Unprotected, dry soil surfaces blow away with the wind or are washed away by flash floods, leaving infertile lower soil layers that bake in the sun and become an unproductive hardpan. This spread of arid areas is caused by a variety of factors, such as overexploitation of soil as a result of human activity and the effects of climate change.
At least 90% of the inhabitants of drylands live in developing countries, where they also suffer from poor economic and social conditions. This situation is exacerbated by land degradation because of the reduction in productivity, the precariousness of living conditions and the difficulty of access to resources and opportunities.
Geographic areas most affected are located in Africa (Sahel region), Asia (Gobi Desert and Mongolia) and parts of South America. Drylands occupy approximately 40–41% of Earth's land area and are home to more than 2 billion people. Effects of desertification include sand and dust storms, food insecurity, and poverty.
Methods of mitigating or reversing desertification include improving soil quality, greening deserts, managing grazing, and tree-planting (reforestation and afforestation).
Throughout geological history, the development of deserts has occurred naturally over long intervals of time. The modern study of desertification emerged from the study of the 1980s drought in the Sahel.
Definitions
Desertification is a gradual process of increased soil aridity. Desertification has been defined in the text of the United Nations Convention to Combat Desertification (UNCCD) as "land degradation in arid, semi-arid and dry sub-humid regions resulting from various factors, including climatic variations and human activities."
A desert is commonly defined as an area where the combined rain and snowfall is far lower than in other regions, with average annual precipitation below 25 cm. A 1995 United Nations definition describes desertification as land degradation in arid, semi-arid and dry sub-humid areas resulting from climatic variations and human activities.
As of 2005, considerable controversy existed over the proper definition of the term desertification with more than 100 formal definitions in existence. The most widely accepted of these was that of the Princeton University Dictionary which defined it as "the process of fertile land transforming into desert typically as a result of deforestation, drought or improper/inappropriate agriculture". This definition clearly demonstrated the interconnectedness of desertification and human activities, in particular land use and land management practices. It also highlighted the economic, social and environmental implications of desertification. However, this original understanding that desertification involved the physical expansion of deserts has been rejected as the concept has further evolved since then.
There exists also controversy around the sub-grouping of types of desertification, including, for example, the validity and usefulness of such terms as "man-made desert" and "non-pattern desert".
Causes
Immediate causes
The immediate cause of desertification is the loss of most vegetation. This is driven by a number of factors, alone or in combination, such as drought, climatic shifts, tillage for agriculture, overgrazing and deforestation for fuel or construction materials. Though vegetation plays a major role in determining the biological composition of the soil, studies have shown that, in many environments, the rate of erosion and runoff decreases exponentially with increased vegetation cover. Unprotected, dry soil surfaces blow away with the wind or are washed away by flash floods, leaving infertile lower soil layers that bake in the sun and become an unproductive hardpan.
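The exponential relationship reported in those studies can be written as a simple illustrative model (the notation is ours; published studies fit various functional forms):

$$E(V) = E_0 \, e^{-kV}$$

where E is the erosion or runoff rate, V is the fractional vegetation cover, E0 is the bare-soil rate, and k is an empirically fitted constant. Each added increment of cover then removes the same proportion of the remaining erosion, which is why even partial revegetation can yield large reductions.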
Influence of human activities
Early studies argued one of the most common causes of desertification was overgrazing, over consumption of vegetation by cattle or other livestock. However, the role of local overexploitation in driving desertification in the recent past is controversial. Drought in the Sahel region is now thought to be principally the result of seasonal variability in rainfall caused by large-scale sea surface temperature variations, largely driven by natural variability and anthropogenic emissions of aerosols (reflective sulphate particles) and greenhouse gases. As a result, changing ocean temperature and reductions in sulfate emissions have caused a re-greening of the region. This has led some scholars to argue that agriculture-induced vegetation loss is a minor factor in desertification.
Human population dynamics have a considerable impact on overgrazing, over-farming and deforestation, as previously acceptable techniques have become unsustainable.
There are multiple reasons farmers use intensive rather than extensive farming, but the main one is to maximize yields. Higher productivity requires more fertilizer, more pesticides, and more labor to maintain machinery. This continuous use of the land rapidly depletes soil nutrients, causing desertification to spread.
Natural variations
Scientists agree that the existence of a desert in the place where the Sahara desert is now located is due to natural variations in solar insolation due to orbital precession of the Earth. Such variations influence the strength of the West African Monsoon, inducing feedback in vegetation and dust emission that amplify the cycle of wet and dry Sahara climate. There is also a suggestion the transition of the Sahara from savanna to desert during the mid-Holocene was partially due to overgrazing by the cattle of the local population.
Climate change
Research into desertification is complex, and there is no single metric which can define all aspects. However, more intense climate change is still expected to increase the current extent of drylands on the Earth's continents: from 38% in late 20th century to 50% or 56% by the end of the century, under the "moderate" and high-warming Representative Concentration Pathways 4.5 and 8.5. Most of the expansion will be seen over regions such as "southwest North America, the northern fringe of Africa, southern Africa, and Australia".
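To put those percentages on an absolute scale (a rough sketch; Earth's land area of about 149 million km2 is a standard figure, not a number from this article):

```python
# Absolute scale of the projected dryland expansion quoted above.
# Earth's land area (~149 million km^2) is a standard figure, not a
# number from this article.
LAND_MKM2 = 149  # million square kilometers

for scenario, share in [("late 20th century", 0.38),
                        ("RCP4.5, end of century", 0.50),
                        ("RCP8.5, end of century", 0.56)]:
    print(f"{scenario}: ~{share * LAND_MKM2:.0f} million km^2")
# late 20th century: ~57; RCP4.5: ~74; RCP8.5: ~83 million km^2
```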
Drylands cover 41% of the earth's land surface and include 45% of the world's agricultural land. These regions are among the most vulnerable ecosystems to anthropogenic climate and land use change and are under threat of desertification. An observation-based attribution study of desertification was carried out in 2020 which accounted for climate change, climate variability, CO2 fertilization as well as both the gradual and rapid ecosystem changes caused by land use. The study found that, between 1982 and 2015, 6% of the world's drylands underwent desertification driven by unsustainable land use practices compounded by anthropogenic climate change. Despite an average global greening, anthropogenic climate change has degraded 12.6% (5.43 million km2) of drylands, contributing to desertification and affecting 213 million people, 93% of whom live in developing economies.
Effects
Sand and dust storms
There has been a 25% increase in global annual dust emissions from the late nineteenth century to the present day. The increase of desertification has also increased the amount of loose sand and dust that the wind can pick up, ultimately resulting in storms. For example, dust storms in the Middle East “are becoming more frequent and intense in recent years” because “long-term reductions in rainfall [cause] lower soil moisture and vegetative cover”.
Dust storms can contribute to certain respiratory disorders such as pneumonia, skin irritations, asthma and many more. They can pollute open water, reduce the effectiveness of clean energy efforts, and halt most forms of transportation.
Dust and sand storms can have a negative effect on the climate, which can make desertification worse. Dust particles in the air scatter incoming radiation from the sun (Hassan, 2012). The dust can momentarily shade and cool the ground, but the atmospheric temperature increases. This can deform and shorten the lifetime of clouds, which can result in less rainfall.
Food insecurity
Global food security is threatened by desertification. As population grows, more food has to be grown. The agricultural business is being displaced from one country to another; for example, Europe on average imports over 50% of its food. Meanwhile, 44% of agricultural land is located in drylands, and it supplies 60% of the world's food production. Desertification is decreasing the amount of land suitable for agriculture even as demand continues to grow; in the near future, demand will outstrip supply. The violent herder–farmer conflicts in Nigeria, Sudan, Mali and other countries in the Sahel region have been exacerbated by climate change, land degradation and population growth.
Increasing poverty
At least 90% of the inhabitants of drylands live in developing countries, where they also suffer from poor economic and social conditions. This situation is exacerbated by land degradation because of the reduction in productivity, the precariousness of living conditions and the difficulty of access to resources and opportunities.
Many underdeveloped countries are affected by overgrazing, land exhaustion and overdrafting of groundwater due to pressures to exploit marginal drylands for farming. Decision-makers are understandably averse to investing in arid zones with low potential. This absence of investment contributes to the marginalization of these zones. When unfavorable agri-climatic conditions are combined with an absence of infrastructure and access to markets, as well as poorly adapted production techniques and an underfed and undereducated population, most such zones are excluded from development.
Desertification often causes rural lands to become unable to support the same sized populations that previously lived there. This results in mass migrations out of rural areas and into urban areas particularly in Africa creating unemployment and slums. The number of these environmental refugees grows every year, with projections for sub-Saharan Africa showing a probable increase from 14 million in 2010 to nearly 200 million by 2050. This presents a future crisis for the region, as neighboring nations do not always have the ability to support large populations of refugees.
In Mongolia, the land is 90% fragile dry land, which causes many herders to migrate to the city for work. With very limited resources, the herders that stay on the dry land graze very carefully in order to preserve the land.
Agriculture is a main source of income for many desert communities. The increase in desertification in these regions has degraded the land to such an extent where people can no longer productively farm and make a profit. This has negatively impacted the economy and increased poverty rates.
There is, however, increased global advocacy, such as UN SDG 15, to combat desertification and restore affected lands.
Geographic areas affected
Drylands occupy approximately 40–41% of Earth's land area and are home to more than 2 billion people. It has been estimated that some 10–20% of drylands are already degraded, the total area affected by desertification being between 6 and 12 million square kilometers, that about 1–6% of the inhabitants of drylands live in desertified areas, and that a billion people are under threat from further desertification.
Sahel
The impact of climate change and human activities on desertification are exemplified in the Sahel region of Africa. The region is characterized by a dry hot climate, high temperatures and low rainfall (100–600 mm per year). So, droughts are the rule in the Sahel region. The Sahel has lost approximately 650,000 km2 of its productive agricultural land over the past 50 years; the propagation of desertification in this area is considerable.
The climate of the Sahara has undergone enormous variations over the last few hundred thousand years, oscillating between wet (grassland) and dry (desert) every 20,000 years (a phenomenon believed to be caused by long-term changes in the North African climate cycle that alters the path of the North African Monsoon, caused by an approximately 40,000-year cycle in which the axial tilt of the earth changes between 22° and 24.5°). Some statistics have shown that, since 1900, the Sahara has expanded by 250 km to the south over a stretch of land from west to east 6,000 km long.
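The area implied by that expansion statistic is easy to check using only the two figures quoted above (a throwaway arithmetic sketch):

```python
# Area implied by the expansion statistic above: a 250 km southward
# advance along a roughly 6,000 km west-to-east front.
advance_km, front_km = 250, 6_000
area_mkm2 = advance_km * front_km / 1e6
print(f"~{area_mkm2:.1f} million km^2")  # ~1.5 million km^2
```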
Lake Chad, located in the Sahel region, has undergone desiccation due to water withdrawal for irrigation and decrease in rainfall. The lake has shrunk by over 90% since 1987, displacing millions of inhabitants. Recent efforts have managed to make some progress toward its restoration, but it is still considered to be at risk of disappearing entirely.
To limit desertification, the Great Green Wall (Africa) initiative was started in 2007, involving the planting of a belt of vegetation 7,775 km long and 15 km wide across 22 countries, to be completed by 2030. The purpose of this mammoth planting initiative is to enhance retention of water in the ground following the seasonal rainfall, thus promoting land rehabilitation and future agriculture. Senegal has already contributed to the project by planting 50,000 acres of trees, which is said to have improved land quality and increased economic opportunity in the region.
Gobi Desert and Mongolia
Another major area being impacted by desertification is the Gobi Desert, located in northern China and southern Mongolia. The Gobi Desert is the fastest-expanding desert on Earth, transforming large areas of grassland into wasteland annually. Although the Gobi Desert itself is still a distance away from Beijing, reports from field studies state there are large sand dunes forming only 70 km (43.5 mi) outside the city.
In Mongolia, around 90% of grassland is considered vulnerable to desertification by the UN. An estimated 13% of desertification in Mongolia is caused by natural factors; the rest is due to human influence, particularly overgrazing and increased erosion of soils in cultivated areas. During the period 1940 to 2015, the mean air temperature increased by 2.24 °C, with the warmest ten-year period being the decade to 2021. Precipitation decreased by 7% over this period, resulting in increasingly arid conditions throughout Mongolia. The Gobi desert continues to expand northward, with over 70% of Mongolia's land degraded through overgrazing, deforestation, and climate change. In addition, the Mongolian government has listed forest fires, blights, unsustainable forestry and mining activities as leading causes of desertification in the country. The transition from sheep to goat farming in order to meet export demands for cashmere wool has caused degradation of grazing lands; compared to sheep, goats do more damage to grazing lands by eating roots and flowers.
To mitigate the financial impact of desertification in Inner Mongolia, Bai Jingying teaches women how to do traditional embroidery, which they then sell to provide additional income.
South America
South America is another area vulnerable to desertification: 25% of its land is classified as drylands and over 68% of the land area has undergone soil erosion as a result of deforestation and overgrazing. Between 27% and 43% of the land areas of Bolivia, Chile, Ecuador and Peru are at risk of desertification. In Argentina, Mexico and Paraguay, more than half the land area is degraded by desertification and cannot be used for agriculture. In Central America, drought has increased unemployment, decreased food security and driven migration. Similar impacts have been seen in rural parts of Mexico, where about 1,000 km2 of land are lost yearly to desertification. In Argentina, desertification has the potential to disrupt the nation's food supply.
Reversing desertification
Techniques and countermeasures exist for mitigating or reversing desertification. For some of these measures there are numerous barriers to implementation; for others, little more than the exercise of human reason is required.
One proposed barrier is that the costs of adopting sustainable agricultural practices sometimes exceed the benefits for individual farmers, even where these practices are socially and environmentally beneficial. Another is a lack of political will and of funding to support land reclamation and anti-desertification programs.
Desertification is recognized as a major threat to biodiversity. Some countries have developed biodiversity action plans to counter its effects, particularly in relation to the protection of endangered flora and fauna.
Improving soil quality
Techniques focus on two aspects: provisioning of water, and fixating and hyper-fertilizing the soil. Fixating the soil is often done through the use of shelter belts, woodlots and windbreaks. Windbreaks are made from trees and bushes and are used to reduce soil erosion and evapotranspiration.
Some soils (for example, clay) can, through lack of water, become consolidated rather than porous (as in the case of sandy soils). Techniques such as zaï pits or tillage are then used to still allow the planting of crops.
Another useful technique is contour trenching. This involves digging trenches 150 m long and 1 m deep in the soil, parallel to the contour lines of the landscape, which prevents water from flowing along the trenches and causing erosion. Stone walls are placed around the trenches to prevent them from closing up again. The method was invented by Peter Westerveld.
Enriching the soil and restoring its fertility is often achieved by plants. Of these, leguminous plants, which extract nitrogen from the air and fix it in the soil, succulents (such as Opuntia), and food crops and trees such as grains, barley, beans and dates are the most important. Sand fences can also be used to control drifting of soil and sand erosion.
Another way to restore soil fertility is through the use of nitrogen-rich fertilizer. Due to its higher cost, many smallholder farmers are reluctant to use it, especially in areas where subsistence farming is common. Several nations, including India, Zambia, and Malawi, have responded by implementing subsidies to encourage adoption of this technique.
Some research centres (such as the Bel-Air Research Center IRD/ISRA/UCAD) are also experimenting with the inoculation of tree species with mycorrhiza in arid zones. Mycorrhiza are fungi that attach themselves to the roots of plants, creating a symbiotic relationship that greatly increases the effective surface area of the tree's roots and allows the tree to gather far more nutrients from the soil.
The bioengineering of soil microbes, particularly photosynthesizers, has also been suggested and theoretically modeled as a method to protect drylands. The aim would be to enhance the existing cooperative loops between soil microbes and vegetation.
Desert greening
As there are many different types of deserts, there are also different types of desert reclamation methodologies. An example of this is the salt flats in the Rub' al Khali desert in Saudi Arabia, which are among the most promising desert areas for seawater agriculture and could be revitalized without the use of freshwater or much energy.
Farmer-managed natural regeneration (FMNR) is another technique that has produced successful results for desert reclamation. Since 1980, this method of reforesting degraded landscapes has been applied with some success in Niger. This simple and low-cost method has enabled farmers to regenerate some 30,000 square kilometers in Niger. The process involves enabling native sprouting tree growth through selective pruning of shrub shoots. The residue from pruned trees can be used to provide mulching for fields, thus increasing soil water retention and reducing evaporation. Additionally, properly spaced and pruned trees can increase crop yields. The Humbo Assisted Regeneration Project, which uses FMNR techniques in Ethiopia, has received money from the World Bank's BioCarbon Fund, which supports projects that sequester or conserve carbon in forests or agricultural ecosystems.
The Food and Agriculture Organization of the United Nations launched the FAO Drylands Restoration Initiative in 2012 to draw together knowledge and experience on dryland restoration. In 2015, FAO published global guidelines for the restoration of degraded forests and landscapes in drylands, in collaboration with the Turkish Ministry of Forestry and Water Affairs and the Turkish Cooperation and Coordination Agency.
The "Green Wall of China" is a high-profile example of one method that has been finding success in this battle with desertification. This wall is a much larger-scale version of what American farmers did in the 1930s to stop the great Midwest dust bowl. This plan was proposed in the late 1970s, and has become a major ecological engineering project that is not predicted to end until the year 2055. According to Chinese reports, there have been nearly 66 billion trees planted in China's great green wall. The green wall of China has decreased desert land in China by an annual average of 1,980 square km. The frequency of sandstorms nationwide have fallen 20% due to the green wall. Due to the success that China has been finding in stopping the spread of desertification, plans are currently being made in Africa to start a "wall" along the borders of the Sahara desert as well to be financed by the United Nations Global Environment Facility trust.
In 2007 the African Union started the Great Green Wall of Africa project in order to combat desertification in 20 countries. The wall is 8,000 km long, stretching across the entire width of the continent, and has US$8 billion in support of the project. The project has restored 36 million hectares of land, and by 2030 the initiative plans to restore a total of 100 million hectares. The Great Green Wall has created many job opportunities for the participating countries, with over 20,000 jobs created in Nigeria alone.
Better managed grazing
Restored grasslands store CO2 from the atmosphere as organic plant material. Grazing livestock, usually not left to wander, consume the grass and minimize its regrowth. A method proposed to restore grasslands uses fences to create many small paddocks and moves herds from one paddock to another after a day or two, in order to mimic natural grazers and allow the grass to grow optimally. Proponents estimate that wider adoption of such managed grazing could increase the carbon content of the soils in the world's 3.5 billion hectares of agricultural grassland and offset nearly 12 years of CO2 emissions.
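The scale of that claim can be made concrete with rough arithmetic. In the sketch below, the annual global CO2 emissions figure and the CO2-to-carbon conversion are outside assumptions introduced for illustration; only the grassland area and the 12-year figure come from the text.

```python
# Back-of-the-envelope check of the "12 years of CO2 emissions" estimate.
# Assumptions (not from the text): global emissions ~37 Gt CO2 per year;
# 44 t of CO2 corresponds to 12 t of carbon (molar mass ratio).
GRASSLAND_HA = 3.5e9        # world agricultural grassland, hectares (from text)
ANNUAL_CO2_GT = 37.0        # assumed annual global CO2 emissions, gigatonnes
YEARS_OFFSET = 12           # years of emissions to offset (from text)

total_co2_t = YEARS_OFFSET * ANNUAL_CO2_GT * 1e9   # tonnes of CO2 overall
co2_per_ha = total_co2_t / GRASSLAND_HA            # tonnes of CO2 per hectare
carbon_per_ha = co2_per_ha * 12 / 44               # tonnes of soil carbon per hectare

print(f"Required drawdown: ~{co2_per_ha:.0f} t CO2/ha, "
      f"i.e. ~{carbon_per_ha:.0f} t of added soil carbon per hectare")
# About 127 t CO2/ha, or roughly 35 t of extra soil carbon per hectare.
```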
History
The world's most noted deserts have been formed by natural processes interacting over long intervals of time. During most of these times, deserts have grown and shrunk independently of human activities. Paleodeserts are large sand seas now inactive because they are stabilized by vegetation, some extending beyond the present margins of core deserts, such as the Sahara, the largest hot desert.
Historical evidence shows that the serious and extensive land deterioration occurring several centuries ago in arid regions had three centers: the Mediterranean, the Mesopotamian Valley, and the Loess Plateau of China, where population was dense.
The earliest known discussion of the topic arose soon after the French colonization of West Africa, when the Comité d'Etudes commissioned a study on desséchement progressif ("progressive desiccation") to explore the prehistoric expansion of the Sahara Desert. The modern study of desertification emerged from the study of the 1980s drought in the Sahel.
| Physical sciences | Biomes: General | Earth science |
8131 | https://en.wikipedia.org/wiki/Dendrite | Dendrite | A dendrite (from Greek δένδρον déndron, "tree") or dendron is a branched cytoplasmic process that extends from a nerve cell and propagates the electrochemical stimulation received from other neural cells to the cell body, or soma, of the neuron from which the dendrites project. Electrical stimulation is transmitted onto dendrites by upstream neurons (usually via their axons) through synapses, which are located at various points throughout the dendritic tree.
Dendrites play a critical role in integrating these synaptic inputs and in determining the extent to which action potentials are produced by the neuron.
Structure and function
Dendrites are one of two types of cytoplasmic processes that extrude from the cell body of a neuron, the other type being an axon. Axons can be distinguished from dendrites by several features including shape, length, and function. Dendrites often taper off in shape and are shorter, while axons tend to maintain a constant radius and can be very long. Typically, axons transmit electrochemical signals and dendrites receive the electrochemical signals, although some types of neurons in certain species lack specialized axons and transmit signals via their dendrites. Dendrites provide an enlarged surface area to receive signals from axon terminals of other neurons. The dendrite of a large pyramidal cell receives signals from about 30,000 presynaptic neurons. Excitatory synapses terminate on dendritic spines, tiny protrusions from the dendrite with a high density of neurotransmitter receptors. Most inhibitory synapses directly contact the dendritic shaft.
Synaptic activity causes local changes in the electrical potential across the plasma membrane of the dendrite. This change in membrane potential will passively spread along the dendrite, but becomes weaker with distance without an action potential. To generate an action potential, many excitatory synapses have to be active at the same time, leading to strong depolarization of the dendrite and the cell body (soma). The action potential, which typically starts at the axon hillock, propagates down the length of the axon to the axon terminals where it triggers the release of neurotransmitters, but also backwards into the dendrite (retrograde propagation), providing an important signal for spike-timing-dependent plasticity (STDP).
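This threshold behaviour is commonly illustrated with a leaky integrate-and-fire model. The sketch below is a generic textbook simplification, not a model of any particular neuron; all parameter values are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire sketch: a spike requires many
# near-simultaneous excitatory inputs, because the membrane constantly
# leaks back toward rest. All parameter values are illustrative assumptions.
V_REST, V_THRESHOLD = -70.0, -55.0   # membrane potentials, mV
TAU_M = 20.0                         # membrane time constant, ms
EPSP = 0.5                           # depolarization per synaptic event, mV
DT = 1.0                             # simulation time step, ms

def spikes(active_synapses_per_ms: int, steps: int = 100) -> bool:
    """Return True if summed synaptic input drives the membrane
    potential past threshold within the simulated window."""
    v = V_REST
    for _ in range(steps):
        leak = -(v - V_REST) / TAU_M                     # decay toward rest
        v += DT * leak + active_synapses_per_ms * EPSP   # leak plus summed EPSPs
        if v >= V_THRESHOLD:
            return True                                  # action potential fires
    return False

print(spikes(1))   # sparse input decays away before reaching threshold: False
print(spikes(5))   # strong coincident input sums to threshold: True
```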
Most synapses are axodendritic, involving an axon signaling to a dendrite. There are also dendrodendritic synapses, signaling from one dendrite to another. An autapse is a synapse in which the axon of one neuron transmits signals to its own dendrite.
The general structure of the dendrite is used to classify neurons into multipolar, bipolar and unipolar types. Multipolar neurons are composed of one axon and many dendritic trees. Pyramidal cells are multipolar cortical neurons with pyramid-shaped cell bodies and large dendrites that extend towards the surface of the cortex (apical dendrite). Bipolar neurons have two main dendrites at opposing ends of the cell body. Many inhibitory neurons have this morphology. Unipolar neurons, typical for insects, have a stalk that extends from the cell body that separates into two branches with one containing the dendrites and the other with the terminal buttons. In vertebrates, sensory neurons detecting touch or temperature are unipolar. Dendritic branching can be extensive and in some cases is sufficient to receive as many as 100,000 inputs to a single neuron.
History
The term dendrites was first used in 1889 by Wilhelm His to describe the numerous smaller "protoplasmic processes" attached to a nerve cell. German anatomist Otto Friedrich Karl Deiters is generally credited with the discovery of the axon by distinguishing it from the dendrites.
Some of the first intracellular recordings in a nervous system were made in the late 1930s by Kenneth S. Cole and Howard J. Curtis. Swiss Rüdolf Albert von Kölliker and German Robert Remak were the first to identify and characterize the axonal initial segment. Alan Hodgkin and Andrew Huxley also employed the squid giant axon (1939) and by 1952 they had obtained a full quantitative description of the ionic basis of the action potential, leading to the formulation of the Hodgkin–Huxley model. Hodgkin and Huxley were awarded jointly the Nobel Prize for this work in 1963. The formulas detailing axonal conductance were extended to vertebrates in the Frankenhaeuser–Huxley equations. Louis-Antoine Ranvier was the first to describe the gaps or nodes found on axons and for this contribution these axonal features are now commonly referred to as the Nodes of Ranvier. Santiago Ramón y Cajal, a Spanish anatomist, proposed that axons were the output components of neurons. He also proposed that neurons were discrete cells that communicated with each other via specialized junctions, or spaces, between cells, now known as a synapse. Ramón y Cajal improved a silver staining process known as Golgi's method, which had been developed by his rival, Camillo Golgi.
Dendrite development
During the development of dendrites, several factors can influence differentiation. These include modulation of sensory input, environmental pollutants, body temperature, and drug use. For example, rats raised in dark environments were found to have a reduced number of spines in pyramidal cells located in the primary visual cortex and a marked change in distribution of dendrite branching in layer 4 stellate cells. Experiments done in vitro and in vivo have shown that the presence of afferents and input activity per se can modulate the patterns in which dendrites differentiate.
Little is known about the process by which dendrites orient themselves in vivo and are compelled to create the intricate branching pattern unique to each specific neuronal class. One theory on the mechanism of dendritic arbor development is the Synaptotropic Hypothesis. The synaptotropic hypothesis proposes that input from a presynaptic to a postsynaptic cell (and maturation of excitatory synaptic inputs) eventually can change the course of synapse formation at dendritic and axonal arbors.
This synapse formation is required for the development of neuronal structure in the functioning brain. A balance between metabolic costs of dendritic elaboration and the need to cover the receptive field presumably determine the size and shape of dendrites. A complex array of extracellular and intracellular cues modulates dendrite development including transcription factors, receptor-ligand interactions, various signaling pathways, local translational machinery, cytoskeletal elements, Golgi outposts and endosomes. These contribute to the organization of the dendrites on individual cell bodies and the placement of these dendrites in the neuronal circuitry. For example, it was shown that β-actin zipcode binding protein 1 (ZBP1) contributes to proper dendritic branching.
Other important transcription factors involved in the morphology of dendrites include CUT, Abrupt, Collier, Spineless, ACJ6/drifter, CREST, NEUROD1, CREB, NEUROG2 etc. Secreted proteins and cell surface receptors include neurotrophins and tyrosine kinase receptors, BMP7, Wnt/dishevelled, EPHB 1–3, Semaphorin/plexin-neuropilin, slit-robo, netrin-frazzled, reelin. Rac, CDC42 and RhoA serve as cytoskeletal regulators, and the motor protein includes KIF5, dynein, LIS1. Dendritic arborization has been found to be induced in cerebellum Purkinje cells by substance P. Important secretory and endocytic pathways controlling the dendritic development include DAR3 /SAR1, DAR2/Sec23, DAR6/Rab1 etc. All these molecules interplay with each other in controlling dendritic morphogenesis including the acquisition of type specific dendritic arborization, the regulation of dendrite size and the organization of dendrites emanating from different neurons.
Types of dendritic patterns
Dendritic arborization, also known as dendritic branching, is a multi-step biological process by which neurons form new dendritic trees and branches to create new synapses. Dendrites in many organisms assume different morphological patterns of branching. The morphology of dendrites such as branch density and grouping patterns are highly correlated to the function of the neuron. Malformation of dendrites is also tightly correlated to impaired nervous system function.
Branching morphologies may assume an adendritic structure (not having a branching structure, or not tree-like), or a tree-like radiation structure. Tree-like arborization patterns can be spindled (where two dendrites radiate from opposite poles of a cell body with few branches, see bipolar neurons), spherical (where dendrites radiate in a part or in all directions from a cell body, see cerebellar granule cells), laminar (where dendrites can either radiate planarly, offset from cell body by one or more stems, or multi-planarly, see retinal horizontal cells, retinal ganglion cells, retinal amacrine cells respectively), cylindrical (where dendrites radiate in all directions in a cylinder, disk-like fashion, see pallidal neurons), conical (dendrites radiate like a cone away from cell body, see pyramidal cells), or fanned (where dendrites radiate like a flat fan as in Purkinje cells).
Electrical properties
The structure and branching of a neuron's dendrites, as well as the availability and variation of voltage-gated ion conductances, strongly influence how the neuron integrates the input from other neurons. This integration is both temporal, involving the summation of stimuli that arrive in rapid succession, and spatial, entailing the aggregation of excitatory and inhibitory inputs from separate branches.
Dendrites were once thought to merely convey electrical stimulation passively. This passive transmission means that voltage changes measured at the cell body are the result of activation of distal synapses propagating the electric signal towards the cell body without the aid of voltage-gated ion channels. Passive cable theory describes how voltage changes at a particular location on a dendrite transmit this electrical signal through a system of converging dendrite segments of different diameters, lengths, and electrical properties. Based on passive cable theory one can track how changes in a neuron's dendritic morphology impact the membrane voltage at the cell body, and thus how variation in dendrite architectures affects the overall output characteristics of the neuron.
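Passive spread of this kind is described quantitatively by cable theory. The steady-state form below is the standard textbook statement, included here for reference rather than drawn from this article:

```latex
% Steady-state passive cable equation for the membrane potential V(x),
% measured relative to rest, along a uniform dendrite:
\lambda^{2} \frac{d^{2}V}{dx^{2}} - V = 0,
\qquad \lambda = \sqrt{\frac{r_m}{r_i}},
% where r_m is the membrane resistance and r_i the axial (intracellular)
% resistance per unit length. For a long cable with a steady voltage V_0
% applied at x = 0, the solution decays exponentially with distance:
V(x) = V_0 \, e^{-x/\lambda}.
```

The length constant λ makes precise the earlier statement that a synaptic potential "becomes weaker with distance": inputs arriving more than a few length constants from the soma contribute little unless boosted by active conductances.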
Action potentials initiated at the axon hillock propagate back into the dendritic arbor. These back-propagating action potentials depolarize the dendritic membrane and provide a crucial signal for synapse modulation and long-term potentiation. Back-propagation is not completely passive, but modulated by the presence of dendritic voltage-gated potassium channels. Furthermore, in certain types of neurons, a train of back-propagating action potentials can induce a calcium action potential (a dendritic spike) at dendritic initiation zones.
Plasticity
Dendrites themselves appear to be capable of plastic changes during the adult life of animals, including invertebrates. Neuronal dendrites have various compartments known as functional units that are able to compute incoming stimuli. These functional units are involved in processing input and are composed of the subdomains of dendrites such as spines, branches, or groupings of branches. Therefore, plasticity that leads to changes in the dendrite structure will affect communication and processing in the cell. During development, dendrite morphology is shaped by intrinsic programs within the cell's genome and extrinsic factors such as signals from other cells. But in adult life, extrinsic signals become more influential and cause more significant changes in dendrite structure compared to intrinsic signals during development. In females, the dendritic structure can change as a result of physiological conditions induced by hormones during periods such as pregnancy, lactation, and following the estrous cycle. This is particularly visible in pyramidal cells of the CA1 region of the hippocampus, where the density of dendrites can vary up to 30%.
Recent experimental observations suggest that adaptation is performed in the neuronal dendritic trees, where the timescale of adaptation was observed to be as low as several seconds. Certain machine learning architectures based on dendritic trees have been shown to simplify the learning algorithm without affecting performance.
| Biology and health sciences | Nervous system | Biology |
8143 | https://en.wikipedia.org/wiki/December | December | December is the twelfth and final month of the year in the Julian and Gregorian calendars. Its length is 31 days.
December's name derives from the Latin word decem (meaning ten) because it was originally the tenth month of the year in the calendar of Romulus, which began in March. The winter days following December were not included as part of any month. Later, the months of January and February were created out of the monthless period and added to the beginning of the calendar, but December retained its name.
In Ancient Rome, as one of the four Agonalia, this day in honour of Sol Indiges was held on December 11, as was Septimontium. Dies natalis (birthday) was held at the temple of Tellus on December 13, Consualia was held on December 15, Saturnalia was held December 17–23, Opiconsivia was held on December 19, Divalia was held on December 21, Larentalia was held on December 23, and the dies natalis of Sol Invictus was held on December 25. These dates do not correspond to the modern Gregorian calendar.
The Anglo-Saxons referred to December–January as Ġēolamonaþ (modern English: "Yule month"). The French Republican Calendar contained December within the months of Frimaire and Nivôse.
Astronomy
December contains the winter solstice in the Northern Hemisphere, the day with the fewest daylight hours, and the summer solstice in the Southern Hemisphere, the day with the most daylight hours (excluding polar regions in both cases, which consistently have none or 24 hours, respectively, near the solstice). December in the Northern Hemisphere is the seasonal equivalent to June in the Southern Hemisphere and vice versa. In the Northern Hemisphere, the beginning of the astronomical winter is traditionally 21 December or the date of the solstice.
Meteor showers occurring in December are the Andromedids (September 25 – December 6, peaking around November 9), the Canis-Minorids (December 4 – December 15, peaking around December 10–11), the Coma Berenicids (December 12 to December 23, peaking around December 16), the Delta Cancrids (December 14 to February 14, the main shower from January 1 to January 24, peaking on January 17), the Geminids (December 13–14), the Monocerotids (December 7 to December 20, peaking on December 9. This shower can also start in November), the Phoenicids (November 29 to December 9, with a peak occurring around 5/6 December), the Quadrantids (typically a January shower but can also start in December), the Sigma Hydrids (December 4–15), and the Ursids (December 17 to December 25/26, peaking around December 22).
Astrology
The zodiac signs for the month of December are Sagittarius (until December 21) and Capricorn (December 22 onward).
Symbols
December's birth flower is the narcissus. Its birthstones are turquoise, zircon and tanzanite.
Observances
This list does not necessarily imply either official status or general observance.
Non-Gregorian
(All Baháʼí, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.)
List of observances set by the Baháʼí calendar
List of observances set by the Chinese calendar
List of observances set by the Hebrew calendar
List of observances set by the Islamic calendar
List of observances set by the Solar Hijri calendar
Month-long
In Catholic tradition, December typically marks the beginning of the Season of Advent. It is also devoted to the Immaculate Conception.
National Egg Nog Month (United States)
National Impaired Driving Prevention Month (United States)
National Fruit Cake Month (United States)
National Pear Month (United States)
Movable
| Technology | Months | null |
8179 | https://en.wikipedia.org/wiki/Dye | Dye | A dye is a colored substance that chemically bonds to the substrate to which it is being applied. This distinguishes dyes from pigments which do not chemically bind to the material they color. Dye is generally applied in an aqueous solution and may require a mordant to improve the fastness of the dye on the fiber.
The majority of natural dyes are derived from non-animal sources such as roots, berries, bark, leaves, wood, fungi and lichens; some are extracted from insects or minerals. However, due to large-scale demand and technological improvements, most dyes used in the modern world are synthetically produced from substances such as petrochemicals.
Synthetic dyes are produced from various chemicals. The great majority of dyes are obtained in this way because of their lower cost and superior optical properties (color) and resilience (fastness, mordancy). Both dyes and pigments are colored because they absorb only some wavelengths of visible light. Dyes are usually soluble in some solvent, whereas pigments are insoluble. Some dyes can be rendered insoluble with the addition of salt to produce a lake pigment.
History
Textile dyeing dates back to the Neolithic period. Throughout history, people have dyed their textiles using common, locally available materials. Scarce dyestuffs that produced brilliant and permanent colors such as the natural invertebrate dyes Tyrian purple and crimson kermes were highly prized luxury items in the ancient and medieval world. Plant-based dyes such as woad, indigo, saffron, and madder were important trade goods in the economies of Asia and Europe. Across Asia and Africa, patterned fabrics were produced using resist dyeing techniques to control the absorption of color in piece-dyed cloth. Dyes from the New World such as cochineal and logwood were brought to Europe by the Spanish treasure fleets, and the dyestuffs of Europe were carried by colonists to America.
Dyed flax fibers have been found in the Republic of Georgia in a prehistoric cave dated to 36,000 BP. Archaeological evidence shows that, particularly in India and Phoenicia, dyeing has been widely carried out for over 5,000 years. Early dyes were obtained from animal, vegetable or mineral sources, with little to no processing. By far the greatest source of dyes has been the plant kingdom, notably roots, berries, bark, leaves and wood, only a few of which are used on a commercial scale.
Early industrialization was conducted by J. Pullar and Sons in Scotland. The first synthetic dye, mauve, was discovered serendipitously by William Henry Perkin in 1856. The discovery of mauveine started a surge in synthetic dyes and in organic chemistry in general. Other aniline dyes followed, such as fuchsine, safranine, and induline. Many thousands of synthetic dyes have since been prepared.
The discovery of mauve also led to developments within immunology and chemotherapy. In 1863 the forerunner to Bayer AG was formed in what became Wuppertal, Germany. In 1891, Paul Ehrlich discovered that certain cells or organisms took up certain dyes selectively. He then reasoned that a sufficiently large dose could be injected to kill pathogenic microorganisms, if the dye did not affect other cells. Ehrlich went on to use a compound to target syphilis, the first time a chemical was used in order to selectively kill bacteria in the body. He also used methylene blue to target the plasmodium responsible for malaria.
Chemistry
The color of a dye is dependent upon the ability of the substance to absorb light within the visible region of the electromagnetic spectrum (380–750 nm). An earlier theory known as Witt theory stated that a colored dye had two components, a chromophore which imparts color by absorbing light in the visible region (some examples are nitro, azo, quinoid groups) and an auxochrome which serves to deepen the color. This theory has been superseded by modern electronic structure theory which states that the color in dyes is due to excitation of valence π-electrons by visible light.
Types
Dyes are classified according to their solubility and chemical properties.
Acid dyes are water-soluble anionic dyes that are applied to fibers such as silk, wool, nylon and modified acrylic fibers using neutral to acid dye baths. Attachment to the fiber is attributed, at least partly, to salt formation between anionic groups in the dyes and cationic groups in the fiber. Acid dyes are not substantive to cellulosic fibers. Most synthetic food colors fall in this category. Examples of acid dyes are Alizarine Pure Blue B and Acid Red 88.
Basic dyes are water-soluble cationic dyes that are mainly applied to acrylic fibers, but find some use for wool and silk. Usually acetic acid is added to the dye bath to help the uptake of the dye onto the fiber. Basic dyes are also used in the coloration of paper.
Direct or substantive dyeing is normally carried out in a neutral or slightly alkaline dye bath, at or near boiling point, with the addition of either sodium chloride (NaCl) or sodium sulfate (Na2SO4) or sodium carbonate (Na2CO3). Direct dyes are used on cotton, paper, leather, wool, silk and nylon. They are also used as pH indicators and as biological stains.
Laser dyes are used in the production of some lasers, optical media (CD-R), and camera sensors (color filter array).
Mordant dyes require a mordant, which improves the fastness of the dye against water, light and perspiration. The choice of mordant is very important as different mordants can change the final color significantly. Most natural dyes are mordant dyes and there is therefore a large literature base describing dyeing techniques. The most important mordant dyes are the synthetic mordant dyes, or chrome dyes, used for wool; these comprise some 30% of dyes used for wool, and are especially useful for black and navy shades. The mordant potassium dichromate is applied as an after-treatment. It is important to note that many mordants, particularly those in the heavy metal category, can be hazardous to health and extreme care must be taken in using them.
Vat dyes are essentially insoluble in water and incapable of dyeing fibres directly. However, reduction in alkaline liquor produces the water-soluble alkali metal salt of the dye. This form is often colorless, in which case it is referred to as a Leuco dye, and has an affinity for the textile fibre. Subsequent oxidation reforms the original insoluble dye. The color of denim is due to indigo, the original vat dye.
Reactive dyes utilize a chromophore attached to a substituent that is capable of directly reacting with the fiber substrate. The covalent bonds that attach reactive dye to natural fibers make them among the most permanent of dyes. "Cold" reactive dyes, such as Procion MX, Cibacron F, and Drimarene K, are very easy to use because the dye can be applied at room temperature. Reactive dyes are by far the best choice for dyeing cotton and other cellulose fibers at home or in the art studio.
Disperse dyes were originally developed for the dyeing of cellulose acetate, and are water-insoluble. The dyes are finely ground in the presence of a dispersing agent and sold as a paste, or spray-dried and sold as a powder. Their main use is to dye polyester, but they can also be used to dye nylon, cellulose triacetate, and acrylic fibers. In some cases, a high dyeing temperature is required, and a pressurized dyebath is used. The very fine particle size gives a large surface area that aids dissolution to allow uptake by the fiber. The dyeing rate can be significantly influenced by the choice of dispersing agent used during the grinding.
Azoic dyeing is a technique in which an insoluble Azo dye is produced directly onto or within the fiber. This is achieved by treating a fiber with both diazoic and coupling components. With suitable adjustment of dyebath conditions the two components react to produce the required insoluble azo dye. This technique of dyeing is unique, in that the final color is controlled by the choice of the diazoic and coupling components. This method of dyeing cotton is declining in importance due to the toxic nature of the chemicals used.
Sulfur dyes are inexpensive dyes used to dye cotton with dark colors. Dyeing is effected by heating the fabric in a solution of an organic compound, typically a nitrophenol derivative, and sulfide or polysulfide. The organic compound reacts with the sulfide source to form dark colors that adhere to the fabric. Sulfur Black 1, the largest-selling dye by volume, does not have a well-defined chemical structure.
Food dyes
One other class that describes the role of dyes, rather than their mode of use, is the food dye. Because food dyes are classed as food additives, they are manufactured to a higher standard than some industrial dyes. Food dyes can be direct, mordant and vat dyes, and their use is strictly controlled by legislation. Many are azo dyes, although anthraquinone and triphenylmethane compounds are used for colors such as green and blue. Some naturally occurring dyes are also used.
Other important dyes
A number of other classes have also been established, including:
Oxidation bases, mainly for hair and fur
Laser dyes: rhodamine 6G and coumarin dyes.
Leather dyes, for leather
Fluorescent brighteners, for textile fibres and paper
Solvent dyes, for wood staining and producing colored lacquers, solvent inks, coloring oils, waxes.
Contrast dyes, injected for magnetic resonance imaging, are essentially the same as clothing dye except they are coupled to an agent that has strong paramagnetic properties.
Mayhems dyes, used in water cooling for aesthetics; often rebranded RIT dye
Chromophoric dyes
By the nature of their chromophore, dyes are divided into:
Acridine dyes, derivatives of acridine
Anthraquinone dyes, derivatives of anthraquinone
Arylmethane dyes
Diarylmethane dyes, based on diphenylmethane
Triarylmethane dyes, derivatives of triphenylmethane
Azo dyes, based on the -N=N- azo structure
Phthalocyanine dyes, derivatives of phthalocyanine
Quinone-imine dyes, derivatives of quinone
Azin dyes
Eurhodin dyes
Safranin dyes, derivatives of safranin
Indamins
Indophenol dyes, derivatives of indophenol
Oxazin dyes, derivatives of oxazin
Oxazone dyes, derivatives of oxazone
Thiazine dyes
Thiazole dyes
Safranin dyes
Xanthene dyes
Fluorene dyes, derivatives of fluorene
Pyronin dyes
Fluorone dyes, based on fluorone
Rhodamine dyes, derivatives of rhodamine
Pollution
Dyes produced by the textile, printing and paper industries are a source of pollution of rivers and waterways. An estimated 700,000 tons of dyestuffs are produced annually (1990 data). The disposal of that material has received much attention, using chemical and biological means.
Vital dyes
A "vital dye" or stain is a dye capable of penetrating living cells or tissues without causing immediate visible degenerative changes. Such dyes are useful in medical and pathological fields in order to selectively color certain structures (such as cells) in order to distinguish them from surrounding tissue and thus make them more visible for study (for instance, under a microscope). As the visibility is meant to allow study of the cells or tissues, it is usually important that the dye not have other effects on the structure or function of the tissue that might impair objective observation.
A distinction is drawn between dyes that are meant to be used on cells that have been removed from the organism prior to study (supravital staining) and dyes that are used within a living body, administered by injection or other means (intravital staining). The latter is subject to higher safety standards, and must typically be a chemical known to avoid adverse effects on any biochemistry until cleared from the tissue, not merely on the tissue being studied or in the short term.
The term "vital stain" is occasionally used interchangeably with both intravital and supravital stains, the underlying concept in either case being that the cells examined are still alive.
In a stricter sense, the term "vital staining" means the polar opposite of "supravital staining": whereas living cells absorb the stain during supravital staining, they exclude it during vital staining. Living cells thus color negatively while only dead cells color positively, and viability can be determined by counting the percentage of total cells that stain negatively.
Because the dye determines whether the staining is supravital or intravital, a combination of supravital and vital dyes can be used to more accurately classify cells into various groups (e.g., viable, dead, dying).
| Technology | Techniques_2 | null |
8187 | https://en.wikipedia.org/wiki/Descriptive%20statistics | Descriptive statistics | A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features from a collection of information, while descriptive statistics (in the mass noun sense) is the process of using and analysing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics) by its aim to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent. This generally means that descriptive statistics, unlike inferential statistics, is not developed on the basis of probability theory, and is frequently nonparametric. Even when a data analysis draws its main conclusions using inferential statistics, descriptive statistics are generally also presented. For example, in papers reporting on human subjects, typically a table is included giving the overall sample size, sample sizes in important subgroups (e.g., for each treatment or exposure group), and demographic or clinical characteristics such as the average age, the proportion of subjects of each sex, the proportion of subjects with related co-morbidities, etc.
Some measures that are commonly used to describe a data set are measures of central tendency and measures of variability or dispersion. Measures of central tendency include the mean, median and mode, while measures of variability include the standard deviation (or variance), the minimum and maximum values of the variables, kurtosis and skewness.
Use in statistical analysis
Descriptive statistics provide simple summaries about the sample and about the observations that have been made. Such summaries may be either quantitative, i.e. summary statistics, or visual, i.e. simple-to-understand graphs. These summaries may either form the basis of the initial description of the data as part of a more extensive statistical analysis, or they may be sufficient in and of themselves for a particular investigation.
For example, the shooting percentage in basketball is a descriptive statistic that summarizes the performance of a player or a team. This number is the number of shots made divided by the number of shots taken. For example, a player who shoots 33% is making approximately one shot in every three. The percentage summarizes or describes multiple discrete events. Consider also the grade point average. This single number describes the general performance of a student across the range of their course experiences.
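As a minimal illustration of how such a statistic collapses many observations into one number, the sketch below computes a shooting percentage and a grade point average from made-up data:

```python
# Two everyday descriptive statistics, computed from made-up sample data.
shots = [1, 0, 1, 0, 0, 1, 0, 0, 0]          # 1 = shot made, 0 = missed
shooting_pct = 100 * sum(shots) / len(shots)
print(f"Shooting percentage: {shooting_pct:.0f}%")   # 33% -> ~1 make in 3

grade_points = {"A": 4.0, "B": 3.0, "C": 2.0}
transcript = ["A", "B", "B", "A", "C"]                # hypothetical courses
gpa = sum(grade_points[g] for g in transcript) / len(transcript)
print(f"GPA: {gpa:.2f}")                              # 3.20
```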
The use of descriptive and summary statistics has an extensive history and, indeed, the simple tabulation of populations and of economic data was the first way the topic of statistics appeared. More recently, a collection of summarisation techniques has been formulated under the heading of exploratory data analysis: an example of such a technique is the box plot.
In the business world, descriptive statistics provides a useful summary of many types of data. For example, investors and brokers may use a historical account of return behaviour by performing empirical and analytical analyses on their investments in order to make better investing decisions in the future.
Univariate analysis
Univariate analysis involves describing the distribution of a single variable, including its central tendency (including the mean, median, and mode) and dispersion (including the range and quartiles of the data-set, and measures of spread such as the variance and standard deviation). The shape of the distribution may also be described via indices such as skewness and kurtosis. Characteristics of a variable's distribution may also be depicted in graphical or tabular format, including histograms and stem-and-leaf display.
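These univariate summaries are all available in Python's standard library; the sketch below applies them to a small made-up data set:

```python
import statistics as st

data = [2, 4, 4, 4, 5, 5, 7, 9]                # a single made-up variable

# Central tendency
print("mean:", st.mean(data))                  # 5.0
print("median:", st.median(data))              # 4.5
print("mode:", st.mode(data))                  # 4

# Dispersion
print("range:", max(data) - min(data))         # 7
print("variance:", st.pvariance(data))         # population variance: 4.0
print("std dev:", st.pstdev(data))             # 2.0
print("quartiles:", st.quantiles(data, n=4))   # [4.0, 4.5, 6.5]
```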
Bivariate and multivariate analysis
When a sample consists of more than one variable, descriptive statistics may be used to describe the relationship between pairs of variables. In this case, descriptive statistics include:
Cross-tabulations and contingency tables
Graphical representation via scatterplots
Quantitative measures of dependence
Descriptions of conditional distributions
The main reason for differentiating univariate and bivariate analysis is that bivariate analysis is not only a simple descriptive analysis: it also describes the relationship between two different variables. Quantitative measures of dependence include correlation (such as Pearson's r when both variables are continuous, or Spearman's rho if one or both are not) and covariance (which reflects the scale the variables are measured on). The slope, in regression analysis, also reflects the relationship between variables. The unstandardised slope indicates the unit change in the criterion variable for a one-unit change in the predictor; the standardised slope indicates this change in standardised (z-score) units. Highly skewed data are often transformed by taking logarithms, which makes graphs more symmetrical and closer in appearance to the normal distribution, and hence easier to interpret intuitively.
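Most of these bivariate measures are likewise one call each in Python's statistics module (version 3.10 or later); the data here are made up for illustration:

```python
import statistics as st

x = [1.0, 2.0, 3.0, 4.0, 5.0]    # predictor
y = [2.1, 3.9, 6.2, 8.0, 9.8]    # criterion (made-up paired observations)

print("covariance:", st.covariance(x, y))    # scale-dependent: 4.875
print("Pearson r:", st.correlation(x, y))    # unit-free, close to 1 here
slope, intercept = st.linear_regression(x, y)
print("slope:", slope)          # unit change in y per one-unit change in x
print("intercept:", intercept)
```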
| Mathematics | Statistics | null |
8214 | https://en.wikipedia.org/wiki/Decimal | Decimal | The decimal numeral system (also called the base-ten positional numeral system and denary or decanary) is the standard system for denoting integer and non-integer numbers. It is the extension to non-integer numbers (decimal fractions) of the Hindu–Arabic numeral system. The way of denoting numbers in the decimal system is often referred to as decimal notation.
A decimal numeral (also often just decimal or, less correctly, decimal number) refers generally to the notation of a number in the decimal numeral system. Decimals may sometimes be identified by a decimal separator (usually "." or ",", as in 25.9703 or 3,1415).
Decimal may also refer specifically to the digits after the decimal separator, such as in "3.14 is the approximation of π to two decimals". Zero-digits after a decimal separator serve the purpose of signifying the precision of a value.
The numbers that may be represented in the decimal system are the decimal fractions. That is, fractions of the form a/10^n, where a is an integer and n is a non-negative integer. Decimal fractions also result from the addition of an integer and a fractional part; the resulting sum sometimes is called a fractional number.
Decimals are commonly used to approximate real numbers. By increasing the number of digits after the decimal separator, one can make the approximation errors as small as one wants, when one has a method for computing the new digits.
Originally and in most uses, a decimal has only a finite number of digits after the decimal separator. However, the decimal system has been extended to infinite decimals for representing any real number, by using an infinite sequence of digits after the decimal separator (see decimal representation). In this context, the usual decimals, with a finite number of non-zero digits after the decimal separator, are sometimes called terminating decimals. A repeating decimal is an infinite decimal that, after some place, repeats indefinitely the same sequence of digits (e.g., 0.333..., repeating the digit 3). An infinite decimal represents a rational number, the quotient of two integers, if and only if it is a repeating decimal or has a finite number of non-zero digits.
Origin
Many numeral systems of ancient civilizations use ten and its powers for representing numbers, possibly because there are ten fingers on two hands and people started counting by using their fingers. Examples are firstly the Egyptian numerals, then the Brahmi numerals, Greek numerals, Hebrew numerals, Roman numerals, and Chinese numerals. Very large numbers were difficult to represent in these old numeral systems, and only the best mathematicians were able to multiply or divide large numbers. These difficulties were completely solved with the introduction of the Hindu–Arabic numeral system for representing integers. This system has been extended to represent some non-integer numbers, called decimal fractions or decimal numbers, for forming the decimal numeral system.
Decimal notation
For writing numbers, the decimal system uses ten decimal digits, a decimal mark, and, for negative numbers, a minus sign "−". The decimal digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9; the decimal separator is the dot "." in many countries (mostly English-speaking), and a comma "," in other countries.
For representing a non-negative number, a decimal numeral consists of
either a (finite) sequence of digits (such as "2017"), where the entire sequence represents an integer: a_m a_{m−1} ... a_0
or a decimal mark separating two sequences of digits (such as "20.70828"): a_m a_{m−1} ... a_0 . b_1 b_2 ... b_n.
If m ≥ 1, that is, if the first sequence contains at least two digits, it is generally assumed that the first digit a_m is not zero. In some circumstances it may be useful to have one or more 0's on the left; this does not change the value represented by the decimal: for example, 3.14 = 03.14 = 003.14. Similarly, if the final digit on the right of the decimal mark is zero, that is, if b_n = 0, it may be removed; conversely, trailing zeros may be added after the decimal mark without changing the represented number; for example, 15 = 15.0 = 15.00 and 5.2 = 5.20 = 5.200.
For representing a negative number, a minus sign is placed before a_m.
The numeral a_m a_{m−1} ... a_0 . b_1 b_2 ... b_n represents the number
a_m 10^m + a_{m−1} 10^{m−1} + ... + a_0 10^0 + b_1/10^1 + b_2/10^2 + ... + b_n/10^n.
The integer part or integral part of a decimal numeral is the integer written to the left of the decimal separator (see also truncation). For a non-negative decimal numeral, it is the largest integer that is not greater than the decimal. The part from the decimal separator to the right is the fractional part, which equals the difference between the numeral and its integer part.
When the integral part of a numeral is zero, it may occur, typically in computing, that the integer part is not written (for example, .1234, instead of 0.1234). In normal writing, this is generally avoided, because of the risk of confusion between the decimal mark and other punctuation.
In brief, the contribution of each digit to the value of a number depends on its position in the numeral. That is, the decimal system is a positional numeral system.
Decimal fractions
Decimal fractions (sometimes called decimal numbers, especially in contexts involving explicit fractions) are the rational numbers that may be expressed as a fraction whose denominator is a power of ten. For example, the decimal expressions 0.8, 14.89, 0.00079 and 3.14159 represent the fractions 4/5, 1489/100, 79/100000 and 314159/100000, and therefore denote decimal fractions. An example of a fraction that cannot be represented by a decimal expression (with a finite number of digits) is 1/3, 3 not being a power of 10.
More generally, a decimal with n digits after the separator (a point or comma) represents the fraction with denominator 10^n, whose numerator is the integer obtained by removing the separator.
It follows that a number is a decimal fraction if and only if it has a finite decimal representation.
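In code, this removal of the separator is exactly how a decimal string can be turned into an exact fraction; Python's Fraction type (used below as one convenient illustration) performs the same reduction automatically:

```python
from fractions import Fraction

# A decimal with n digits after the separator equals the integer obtained
# by removing the separator, divided by 10**n.
s = "20.70828"
n = len(s) - s.index(".") - 1             # digits after the separator: 5
frac = Fraction(int(s.replace(".", "")), 10 ** n)
print(frac)                               # 517707/25000, fully reduced
print(Fraction(s) == frac)                # True: Fraction parses decimals too
```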
Expressed as fully reduced fractions, the decimal numbers are those whose denominator is a product of a power of 2 and a power of 5. Thus the smallest denominators of decimal numbers are 1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, ...
Approximation using decimal numbers
Decimal numerals do not allow an exact representation for all real numbers. Nevertheless, they allow approximating every real number with any desired accuracy, e.g., the decimal 3.14159 approximates π, being less than 10^−5 off; so decimals are widely used in science, engineering and everyday life.
More precisely, for every real number x and every positive integer n, there are two decimals L and u with at most n digits after the decimal mark such that L ≤ x ≤ u and (u − L) = 10^−n.
Numbers are very often obtained as the result of measurement. As measurements are subject to measurement uncertainty with a known upper bound, the result of a measurement is well-represented by a decimal with n digits after the decimal mark, as soon as the absolute measurement error is bounded from above by 10^−n. In practice, measurement results are often given with a certain number of digits after the decimal point, which indicate the error bounds. For example, although 0.080 and 0.08 denote the same number, the decimal numeral 0.080 suggests a measurement with an error less than 0.001, while the numeral 0.08 indicates an absolute error bounded by 0.01. In both cases, the true value of the measured quantity could be, for example, 0.0803 or 0.0796 (see also significant figures).
Infinite decimal expansion
For a real number x and an integer n ≥ 0, let [x]_n denote the (finite) decimal expansion of the greatest number that is not greater than x and has exactly n digits after the decimal mark. Let d_n denote the last digit of [x]_n. It is straightforward to see that [x]_n may be obtained by appending d_n to the right of [x]_{n−1}. This way one has
[x]_n = [x]_0.d_1 d_2 ... d_{n−1} d_n,
and the difference of [x]_{n−1} and [x]_n amounts to
|[x]_n − [x]_{n−1}| = d_n · 10^−n < 10^{−n+1},
which is either 0, if d_n = 0, or gets arbitrarily small as n tends to infinity. According to the definition of a limit, x is the limit of [x]_n when n tends to infinity. This is written as
x = lim_{n→∞} [x]_n or x = [x]_0.d_1 d_2 ... d_n ...,
which is called an infinite decimal expansion of x.
Conversely, for any integer [x]_0 and any sequence of digits (d_n) for n ≥ 1, the (infinite) expression [x]_0.d_1 d_2 ... d_n ... is an infinite decimal expansion of a real number x. This expansion is unique if neither all d_n are equal to 9 nor all d_n are equal to 0 for n large enough (for all n greater than some natural number N).
If all d_n for n > N equal 9 and [x]_n = [x]_0.d_1 d_2 ... d_n, the limit of the sequence ([x]_n) is the decimal fraction obtained by replacing the last digit that is not a 9, i.e.: d_N, by d_N + 1, and replacing all subsequent 9s by 0s (see 0.999...).
Any such decimal fraction, i.e.: d_n = 0 for n > N, may be converted to its equivalent infinite decimal expansion by replacing d_N by d_N − 1 and replacing all subsequent 0s by 9s (see 0.999...).
In summary, every real number that is not a decimal fraction has a unique infinite decimal expansion. Each decimal fraction has exactly two infinite decimal expansions, one containing only 0s after some place, which is obtained by the above definition of [x]_n, and the other containing only 9s after some place, which is obtained by defining [x]_n as the greatest number that is less than x, having exactly n digits after the decimal mark.
Rational numbers
Long division allows computing the infinite decimal expansion of a rational number. If the rational number is a decimal fraction, the division stops eventually, producing a decimal numeral, which may be prolongated into an infinite expansion by adding infinitely many zeros. If the rational number is not a decimal fraction, the division may continue indefinitely. However, as all successive remainders are less than the divisor, there are only a finite number of possible remainders, and after some place, the same sequence of digits must be repeated indefinitely in the quotient. That is, one has a repeating decimal. For example,
1/81 = 0.012345679012... (with the group 012345679 indefinitely repeating).
The converse is also true: if, at some point in the decimal representation of a number, the same string of digits starts repeating indefinitely, the number is rational.
For example, if x = 0.4156156156..., then 10,000x = 4156.156156... and 10x = 4.156156..., so 10,000x − 10x = 9990x = 4152, and hence x = 4152/9990 or, dividing both numerator and denominator by 6, 692/1665.
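The shift-and-subtract argument in this example generalizes directly to code. The helper below is a hypothetical illustration (not a standard library function) that converts any repeating decimal, given its non-repeating prefix and repetend, into an exact fraction:

```python
from fractions import Fraction

def repeating_to_fraction(integer_part: int, prefix: str, repetend: str) -> Fraction:
    """Convert a repeating decimal such as 0.4(156) to an exact fraction,
    using the shift-and-subtract argument from the example above."""
    k, m = len(prefix), len(repetend)
    # Multiplying by 10**(k+m) and by 10**k aligns the repeating tails,
    # so subtracting the two cancels the repetend, leaving an integer.
    high = int(str(integer_part) + prefix + repetend)
    low = int(str(integer_part) + prefix)
    return Fraction(high - low, 10 ** (k + m) - 10 ** k)

print(repeating_to_fraction(0, "4", "156"))   # 692/1665, as derived above
print(repeating_to_fraction(0, "", "3"))      # 1/3, since 0.333... = 3/9
```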
Decimal computation
Most modern computer hardware and software systems commonly use a binary representation internally (although many early computers, such as the ENIAC or the IBM 650, used decimal representation internally).
For external use by computer specialists, this binary representation is sometimes presented in the related octal or hexadecimal systems.
For most purposes, however, binary values are converted to or from the equivalent decimal values for presentation to or input from humans; computer programs express literals in decimal by default. (123.1, for example, is written as such in a computer program, even though many computer languages are unable to encode that number precisely.)
Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data which are encoded using some variant of binary-coded decimal, especially in database implementations, but there are other decimal representations in use (including decimal floating point such as in newer revisions of the IEEE 754 Standard for Floating-Point Arithmetic).
Decimal arithmetic is used in computers so that decimal fractional results of adding (or subtracting) values with a fixed length of their fractional part are always computed to this same length of precision. This is especially important for financial calculations, e.g., requiring in their results integer multiples of the smallest currency unit for bookkeeping purposes. This is not possible in binary, because the negative powers of 10 have no finite binary fractional representation; and it is generally impossible for multiplication (or division). See Arbitrary-precision arithmetic for exact calculations.
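Python's decimal module is one concrete example of such a representation; the snippet contrasts it with binary floating point, which cannot represent 0.1 exactly:

```python
from decimal import Decimal

# Binary floating point accumulates error on decimal fractions, while a
# decimal type gives exact results with the fractional length preserved.
print(0.1 + 0.1 + 0.1)                    # 0.30000000000000004
print(Decimal("0.1") * 3)                 # 0.3
print(Decimal("0.10") + Decimal("0.20"))  # 0.30 (fraction length kept)
```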
History
Many ancient cultures calculated with numerals based on ten, perhaps because two human hands have ten fingers. Standardized weights used in the Indus Valley Civilisation (c. 3300–1300 BCE) were based on the ratios 1/20, 1/10, 1/5, 1/2, 1, 2, 5, 10, 20, 50, 100, 200, and 500, while their standardized ruler – the Mohenjo-daro ruler – was divided into ten equal parts. Egyptian hieroglyphs, in evidence since around 3000 BCE, used a purely decimal system, as did the Linear A script (c. 1800–1450 BCE) of the Minoans and the Linear B script (c. 1400–1200 BCE) of the Mycenaeans. The Únětice culture in central Europe (2300–1600 BCE) used standardised weights and a decimal system in trade. The number system of classical Greece also used powers of ten, including an intermediate base of 5, as did Roman numerals. Notably, the polymath Archimedes (c. 287–212 BCE) invented a decimal positional system in his Sand Reckoner which was based on $10^8$. Hittite hieroglyphs (since the 15th century BCE) were also strictly decimal.
The Egyptian hieratic numerals, the Greek alphabet numerals, the Hebrew alphabet numerals, the Roman numerals, the Chinese numerals and early Indian Brahmi numerals are all non-positional decimal systems, and required large numbers of symbols. For instance, Egyptian numerals used different symbols for 10, 20 to 90, 100, 200 to 900, 1000, 2000, 3000, 4000, to 10,000.
The world's earliest positional decimal system was the Chinese rod calculus.
History of decimal fractions
Starting from the 2nd century BCE, some Chinese units for length were based on divisions into ten; by the 3rd century CE these metrological units were used to express decimal fractions of lengths, non-positionally. Calculations with decimal fractions of lengths were performed using positional counting rods, as described in the 3rd–5th century CE Sunzi Suanjing. The 5th century CE mathematician Zu Chongzhi calculated a 7-digit approximation of $\pi$. Qin Jiushao's book Mathematical Treatise in Nine Sections (1247) explicitly writes a decimal fraction representing a number rather than a measurement, using counting rods; the number 0.96644 is denoted with counting-rod numerals.
Historians of Chinese science have speculated that the idea of decimal fractions may have been transmitted from China to the Middle East.
Al-Khwarizmi introduced fractions to Islamic countries in the early 9th century CE, written with a numerator above and denominator below, without a horizontal bar. This form of fraction remained in use for centuries.
Positional decimal fractions appear for the first time in a book by the Arab mathematician Abu'l-Hasan al-Uqlidisi written in the 10th century. The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350 but did not develop any notation to represent them. The Persian mathematician Jamshid al-Kashi used, and claimed to have discovered, decimal fractions in the 15th century.
A forerunner of modern European decimal notation was introduced by Simon Stevin in the 16th century. Stevin's influential booklet De Thiende ("the art of tenths") was first published in Dutch in 1585 and translated into French as La Disme.
John Napier introduced using the period (.) to separate the integer part of a decimal number from the fractional part in his book on constructing tables of logarithms, published posthumously in 1620.
Natural languages
A method of expressing every possible natural number using a set of ten symbols emerged in India. Several Indian languages show a straightforward decimal system. Dravidian languages have numbers between 10 and 20 expressed in a regular pattern of addition to 10.
The Hungarian language also uses a straightforward decimal system. All numbers between 10 and 20 are formed regularly (e.g. 11 is expressed as "tizenegy" literally "one on ten"), as with those between 20 and 100 (23 as "huszonhárom" = "three on twenty").
A straightforward decimal rank system with a word for each order (10 十, 100 百, 1000 千, 10,000 万), and in which 11 is expressed as ten-one and 23 as two-ten-three, and 89,345 is expressed as 8 (ten thousands) 9 (thousand) 3 (hundred) 4 (tens) 5, is found in Chinese, and in Vietnamese with a few irregularities. Japanese, Korean, and Thai have imported the Chinese decimal system. Many other languages with a decimal system have special words for the numbers between 10 and 20, and decades. For example, in English 11 is "eleven" not "ten-one" or "one-teen".
Incan languages such as Quechua and Aymara have an almost straightforward decimal system, in which 11 is expressed as ten with one and 23 as two-ten with three.
Some psychologists suggest irregularities of the English names of numerals may hinder children's counting ability.
Other bases
Some cultures do, or did, use other bases of numbers.
Pre-Columbian Mesoamerican cultures such as the Maya used a base-20 system (perhaps based on using all twenty fingers and toes).
The Yuki language in California and the Pamean languages in Mexico have octal (base-8) systems because the speakers count using the spaces between their fingers rather than the fingers themselves.
The existence of a non-decimal base in the earliest traces of the Germanic languages is attested by the presence of words and glosses meaning that the count is in decimal (cognates to "ten-count" or "tenty-wise"); such words would be expected if normal counting were not decimal, and unusual if it were. Where this counting system is known, it is based on the "long hundred" of 120, and a "long thousand" of 1200. Descriptions like "long" only appear after the "small hundred" of 100 appeared with the Christians. Gordon's Introduction to Old Norse (p. 293) gives number names that belong to this system. An expression cognate to "one hundred and eighty" translates to 200, and the cognate to "two hundred" translates to 240. Goodare details the use of the long hundred in Scotland in the Middle Ages, giving examples such as calculations where the carry implies i C (i.e. one hundred) as 120. That the general population was not alarmed to encounter such numbers suggests the usage was common. It was also possible to avoid hundred-like numbers by using intermediate units, such as stones and pounds, rather than a long count of pounds; Goodare gives examples of numbers like vii score, where the hundred is avoided by using extended scores. There is also a paper by W. H. Stevenson on the "Long Hundred and its uses in England".
Many or all of the Chumashan languages originally used a base-4 counting system, in which the names for numbers were structured according to multiples of 4 and 16.
Many languages use quinary (base-5) number systems, including Gumatj, Nunggubuyu, Kuurn Kopan Noot and Saraveca. Of these, Gumatj is the only true 5–25 language known, in which 25 is the higher group of 5.
Some Nigerians use duodecimal systems. So did some small communities in India and Nepal, as indicated by their languages.
The Huli language of Papua New Guinea is reported to have base-15 numbers. Ngui means 15, ngui ki means 15 × 2 = 30, and ngui ngui means 15 × 15 = 225.
Umbu-Ungu, also known as Kakoli, is reported to have base-24 numbers. Tokapu means 24, tokapu talu means 24 × 2 = 48, and tokapu tokapu means 24 × 24 = 576.
Ngiti is reported to have a base-32 number system with base-4 cycles.
The Ndom language of Papua New Guinea is reported to have base-6 numerals. Mer means 6, mer an thef means 6 × 2 = 12, nif means 36, and nif thef means 36 × 2 = 72.
Death
Death is the end of life; the irreversible cessation of all biological functions that sustain a living organism. The remains of a former organism normally begin to decompose shortly after death. Death eventually and inevitably occurs in all organisms. Some organisms, such as Turritopsis dohrnii, are biologically immortal; however, they can still die from means other than aging. Death is generally applied to whole organisms; the equivalent for individual components of an organism, such as cells or tissues, is necrosis. Something that is not considered an organism, such as a virus, can be physically destroyed but is not said to die, as a virus is not considered alive in the first place.
As of the early 21st century, 56 million people die per year. The most common reason is aging, followed by cardiovascular disease, which is a disease that affects the heart or blood vessels. As of 2022, an estimated total of almost 110 billion humans have died, or roughly 94% of all humans to have ever lived. A substudy of gerontology known as biogerontology seeks to eliminate death by natural aging in humans, often through the application of natural processes found in certain organisms. However, as humans do not have the means to apply this to themselves, they have to use other ways to reach the maximum lifespan for a human, often through lifestyle changes, such as calorie reduction, dieting, and exercise. The idea of lifespan extension is considered and studied as a way for people to live longer.
Determining when a person has definitively died has proven difficult. Initially, death was defined as occurring when breathing and the heartbeat ceased, a status still known as clinical death. However, the development of cardiopulmonary resuscitation (CPR) meant that such a state was no longer strictly irreversible. Brain death was then considered a more fitting option, but several definitions exist for this. Some people believe that all brain functions must cease. Others believe that even if the brainstem is still alive, the personality and identity are irretrievably lost, so therefore, the person should be considered entirely dead. Brain death is sometimes used as a legal definition of death. For all organisms with a brain, death can instead be focused on this organ. The cause of death is usually considered important, and an autopsy can be done. There are many causes, from accidents to diseases.
Many cultures and religions have a concept of an afterlife that may hold the idea of judgment of good and bad deeds in one's life. There are also different customs for honoring the body, such as a funeral, cremation, or sky burial. After a death, an obituary may be posted in a newspaper, and the "survived by" kin and friends usually go through the grieving process.
Diagnosis
Definition
There are many scientific approaches and various interpretations of the concept. Additionally, the advent of life-sustaining therapy and the numerous criteria for defining death from both a medical and legal standpoint have made it difficult to create a single unifying definition.
Defining life to define death
One of the challenges in defining death is in distinguishing it from life. As a point in time, death seems to refer to the moment when life ends. Determining when death has occurred is difficult, as cessation of life functions is often not simultaneous across organ systems. Such determination, therefore, requires drawing precise conceptual boundaries between life and death. This is difficult due to there being little consensus on how to define life.
It is possible to define life in terms of consciousness. When consciousness ceases, an organism can be said to have died. One of the flaws in this approach is that there are many organisms that are alive but probably not conscious. Another problem is in defining consciousness, which has many different definitions given by modern scientists, psychologists and philosophers. Additionally, many religious traditions, including Abrahamic and Dharmic traditions, hold that death does not (or may not) entail the end of consciousness. In certain cultures, death is more of a process than a single event. It implies a slow shift from one spiritual state to another.
Other definitions for death focus on the character of cessation of organismic functioning and human death, which refers to irreversible loss of personhood. More specifically, death occurs when a living entity experiences irreversible cessation of all functioning. As it pertains to human life, death is an irreversible process where someone loses their existence as a person.
Definition of death by heartbeat and breath
Historically, attempts to define the exact moment of a human's death have been subjective or imprecise. Death was defined as the cessation of heartbeat (cardiac arrest) and breathing, but the development of CPR and prompt defibrillation have rendered that definition inadequate because breathing and heartbeat can sometimes be restarted. This type of death where circulatory and respiratory arrest happens is known as the circulatory definition of death (CDD). Proponents of the CDD believe this definition is reasonable because a person with permanent loss of circulatory and respiratory function should be considered dead. Critics of this definition state that while cessation of these functions may be permanent, it does not mean the situation is irreversible because if CPR is applied fast enough, the person could be revived. Thus, the arguments for and against the CDD boil down to defining the actual words "permanent" and "irreversible," which further complicates the challenge of defining death. Furthermore, events causally linked to death in the past no longer kill in all circumstances; without a functioning heart or lungs, life can sometimes be sustained with a combination of life support devices, organ transplants, and artificial pacemakers.
Brain death
Today, where a definition of the moment of death is required, doctors and coroners usually turn to "brain death" or "biological death" to define a person as being dead; people are considered dead when the electrical activity in their brain ceases. It is presumed that an end of electrical activity indicates the end of consciousness. Suspension of consciousness must be permanent and not transient, as occurs during certain sleep stages, and especially a coma. In the case of sleep, electroencephalograms (EEGs) are used to tell the difference.
The category of "brain death" is seen as problematic by some scholars. For instance, Dr. Franklin Miller, a senior faculty member at the Department of Bioethics, National Institutes of Health, notes: "By the late 1990s... the equation of brain death with death of the human being was increasingly challenged by scholars, based on evidence regarding the array of biological functioning displayed by patients correctly diagnosed as having this condition who were maintained on mechanical ventilation for substantial periods of time. These patients maintained the ability to sustain circulation and respiration, control temperature, excrete wastes, heal wounds, fight infections and, most dramatically, to gestate fetuses (in the case of pregnant "brain-dead" women)."
While "brain death" is viewed as problematic by some scholars, there are proponents of it that believe this definition of death is the most reasonable for distinguishing life from death. The reasoning behind the support for this definition is that brain death has a set of criteria that is reliable and reproducible. Also, the brain is crucial in determining our identity or who we are as human beings. The distinction should be made that "brain death" cannot be equated with one in a vegetative state or coma, in that the former situation describes a state that is beyond recovery.
EEGs can detect spurious electrical impulses, while certain drugs, hypoglycemia, hypoxia, or hypothermia can suppress or even stop brain activity temporarily; because of this, hospitals have protocols for determining brain death involving EEGs at widely separated intervals under defined conditions.
Neocortical brain death
People maintaining that only the neo-cortex of the brain is necessary for consciousness sometimes argue that only electrical activity should be considered when defining death. Eventually, the criterion for death may be the permanent and irreversible loss of cognitive function, as evidenced by the death of the cerebral cortex. All hope of recovering human thought and personality is then gone, given current and foreseeable medical technology. Even by whole-brain criteria, the determination of brain death can be complicated.
Total brain death
At present, in most places, the more conservative definition of death (irreversible cessation of electrical activity in the whole brain, as opposed to just in the neo-cortex) has been adopted. One example is the Uniform Determination Of Death Act in the United States. In the past, the adoption of this whole-brain definition was a conclusion of the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research in 1980. They concluded that this approach to defining death sufficed in reaching a uniform definition nationwide. A multitude of reasons was presented to support this definition, including uniformity of standards in law for establishing death, consumption of a family's fiscal resources for artificial life support, and legal establishment for equating brain death with death to proceed with organ donation.
Problems in medical practice
Aside from the issue of support of or dispute against brain death, there is another inherent problem in this categorical definition: the variability of its application in medical practice. In 1995, the American Academy of Neurology (AAN) established the criteria that became the medical standard for diagnosing neurologic death. At that time, three clinical features had to be satisfied to determine "irreversible cessation" of the total brain, including coma with clear etiology, cessation of breathing, and lack of brainstem reflexes. These criteria were updated again, most recently in 2010, but substantial discrepancies remain across hospitals and medical specialties.
Donations
The problem of defining death is especially imperative as it pertains to the dead donor rule, which could be understood as one of the following interpretations of the rule: there must be an official declaration of death in a person before starting organ procurement, or that organ procurement cannot result in the death of the donor. A great deal of controversy has surrounded the definition of death and the dead donor rule. Advocates of the rule believe that the rule is legitimate in protecting organ donors while also countering any moral or legal objection to organ procurement. Critics, on the other hand, believe that the rule does not uphold the best interests of the donors and that the rule does not effectively promote organ donation.
Signs
Signs of death or strong indications that a warm-blooded animal is no longer alive are:
Respiratory arrest (no breathing)
Cardiac arrest (no pulse)
Brain death (no neuronal activity)
The stages that follow after death are:
Pallor mortis, paleness which happens in the 15–120 minutes after death
Algor mortis, the reduction in body temperature following death. This is generally a steady decline until matching ambient temperature
Rigor mortis, the limbs of the corpse become stiff (Latin rigor) and difficult to move or manipulate
Livor mortis, a settling of the blood in the lower (dependent) portion of the body
Putrefaction, the beginning signs of decomposition
Decomposition, the reduction into simpler forms of matter, accompanied by a strong, unpleasant odor.
Skeletonization, the end of decomposition, where all soft tissues have decomposed, leaving only the skeleton.
Fossilization, the natural preservation of the skeletal remains formed over a very long period
Legal
The death of a person has legal consequences that may vary between jurisdictions. Most countries follow the whole-brain death criteria, where all functions of the brain must have completely ceased. However, in other jurisdictions, some follow the brainstem version of brain death. Afterward, a death certificate is issued in most jurisdictions, either by a doctor or by an administrative office, upon presentation of a doctor's declaration of death.
Misdiagnosis
There are many anecdotal references to people being declared dead by physicians and then "coming back to life," sometimes days later in their coffin or when embalming procedures are about to begin. From the mid-18th century onwards, there was an upsurge in the public's fear of being mistakenly buried alive and much debate about the uncertainty of the signs of death. Various suggestions were made to test for signs of life before burial, ranging from pouring vinegar and pepper into the corpse's mouth to applying red hot pokers to the feet or into the rectum. Writing in 1895, the physician J.C. Ouseley claimed that as many as 2,700 people were buried prematurely each year in England and Wales, although some estimates peg the figure to be closer to 800.
In cases of electric shock, cardiopulmonary resuscitation (CPR) for an hour or longer can allow stunned nerves to recover, allowing an apparently dead person to survive. People found unconscious under icy water may survive if their faces are kept continuously cold until they arrive at an emergency room. This "diving response", in which metabolic activity and oxygen requirements are minimal, is known as the mammalian diving reflex and is something humans share with cetaceans.
As medical technologies advance, ideas about when death occurs may have to be reevaluated in light of the ability to restore a person to vitality after longer periods of apparent death (as happened when CPR and defibrillation showed that cessation of heartbeat is inadequate as a decisive indicator of death). The lack of electrical brain activity may not be enough to consider someone scientifically dead. Therefore, the concept of information-theoretic death has been suggested as a better means of defining when true death occurs, though the concept has few practical applications outside the field of cryonics.
Causes
The leading cause of human death in developing countries is infectious disease. The leading causes in developed countries are atherosclerosis (heart disease and stroke), cancer, and other diseases related to obesity and aging. By an extremely wide margin, the largest unifying cause of death in the developed world is biological aging, leading to various complications known as aging-associated diseases. These conditions cause loss of homeostasis, leading to cardiac arrest, causing loss of oxygen and nutrient supply, causing irreversible deterioration of the brain and other tissues. Of the roughly 150,000 people who die each day across the globe, about two thirds die of age-related causes. In industrialized nations, the proportion is much higher, approaching 90%. With improved medical capability, dying has become a condition to be managed.
In developing nations, inferior sanitary conditions and lack of access to modern medical technology make death from infectious diseases more common than in developed countries. One such disease is tuberculosis, a bacterial disease that killed 1.8 million people in 2015. As of 2004, malaria was causing about 2.7 million deaths annually. The AIDS death toll in Africa may reach 90–100 million by 2025.
According to Jean Ziegler, the United Nations Special Rapporteur on the Right to Food from 2000 to March 2008, mortality due to malnutrition accounted for 58% of the total mortality rate in 2006. Ziegler says that worldwide, approximately 62 million people died from all causes, and of those deaths, more than 36 million died of hunger or diseases due to deficiencies in micronutrients.
Tobacco smoking killed 100 million people worldwide in the 20th century and could kill 1 billion people worldwide in the 21st century, a World Health Organization report warned.
Many leading developed world causes of death can be postponed by diet and physical activity, but the accelerating incidence of disease with age still imposes limits on human longevity. The evolutionary cause of aging is, at best, only beginning to be understood. It has been suggested that direct intervention in the aging process may now be the most effective intervention against major causes of death.
Selye proposed a unified non-specific approach to many causes of death. He demonstrated that stress decreases the adaptability of an organism and proposed to describe adaptability as a special resource, adaptation energy. The animal dies when this resource is exhausted. Selye assumed that adaptability is a finite supply presented at birth. Later, Goldstone proposed the concept of production or income of adaptation energy which may be stored (up to a limit) as a capital reserve of adaptation. In recent works, adaptation energy is considered an internal coordinate on the "dominant path" in the model of adaptation. It is demonstrated that oscillations of well-being appear when the reserve of adaptability is almost exhausted.
In 2012, suicide overtook car crashes as the leading cause of human injury deaths in the U.S., followed by poisoning, falls, and murder.
Accidents and disasters, from nuclear disasters to structural collapses, also claim lives. One of the deadliest incidents of all time is the 1975 Banqiao Dam failure, with varying estimates of up to 240,000 dead. Other incidents with high death tolls are the Wanggongchang explosion (the explosion of a gunpowder factory that caused some 20,000 deaths), the collapse of a wall of the Circus Maximus that killed 13,000 people, and the Chernobyl disaster, which killed between 95 and 4,000 people.
Natural disasters kill around 45,000 people annually, although the toll can vary from thousands to millions on a per-decade basis. Some of the deadliest natural disasters are the 1931 China floods, which killed an estimated 4 million people, although estimates vary widely; the 1887 Yellow River flood, which killed an estimated 2 million people in China; and the 1970 Bhola cyclone, which killed as many as 500,000 people in Pakistan. If naturally occurring famines are considered natural disasters, the Chinese famine of 1906–1907, which killed 15–20 million people, can be considered the deadliest natural disaster in recorded history.
In animals, predation can be a common cause of death. Livestock have a 6% death rate from predation. Younger animals are more susceptible: for example, 50% of young foxes are killed by birds, bobcats, coyotes, and even other foxes, and young bear cubs in Yellowstone National Park have only a 40% chance of surviving to adulthood, owing to other bears and predators.
Autopsy
An autopsy, also known as a postmortem examination or an obduction, is a medical procedure that consists of a thorough examination of a human corpse to determine the cause and manner of a person's death and to evaluate any disease or injury that may be present. It is usually performed by a specialized medical doctor called a pathologist.
Autopsies are either performed for legal or medical purposes. A forensic autopsy is carried out when the cause of death may be a criminal matter, while a clinical or academic autopsy is performed to find the medical cause of death and is used in cases of unknown or uncertain death, or for research purposes. Autopsies can be further classified into cases where external examination suffices, and those where the body is dissected and an internal examination is conducted. Permission from next of kin may be required for internal autopsy in some cases. Once an internal autopsy is complete the body is generally reconstituted by sewing it back together.
A necropsy, which is not always a medical procedure, was a term previously used to describe an unregulated postmortem examination. In modern times, this term is more commonly associated with the corpses of animals.
Death before birth
Death before birth can happen in several ways: stillbirth, when the fetus dies before or during the delivery process; miscarriage, when the embryo dies before independent survival; and abortion, the artificial termination of the pregnancy. Stillbirth and miscarriage can happen for various reasons, while abortion is carried out purposely.
Stillbirth
Stillbirth can happen right before or after the delivery of a fetus. It can result from defects of the fetus or risk factors present in the mother. Reductions of these factors, caesarean sections when risks are present, and early detection of birth defects have lowered the rate of stillbirth. However, 1% of births in the United States end in a stillbirth.
Miscarriage
A miscarriage is defined by the World Health Organization as "the expulsion or extraction from its mother of an embryo or fetus weighing 500g or less." Miscarriage is one of the most frequent problems in pregnancy, and is reported in around 12–15% of all clinical pregnancies; however, by including pregnancy losses during menstruation, it could be up to 17–22% of all pregnancies. There are many risk factors involved in miscarriage: consumption of caffeine, tobacco, alcohol, or drugs, having had a previous miscarriage, and previous abortion can all increase the chances of having a miscarriage.
Abortion
An abortion may be performed for many reasons, such as pregnancy from rape, financial constraints of having a child, teenage pregnancy, and the lack of support from a significant other. There are two forms of abortion: a medical abortion and an in-clinic abortion, sometimes referred to as a surgical abortion. A medical abortion involves taking a pill that will terminate the pregnancy no more than 11 weeks past the last period, while an in-clinic abortion involves a medical procedure using suction to empty the uterus; this is possible after 12 weeks, but it may be more difficult to find a doctor willing to perform the procedure.
Senescence
Senescence refers to a scenario when a living being can survive all calamities but eventually dies due to causes relating to old age. Conversely, premature death can refer to a death that occurs before old age arrives, for example, human death before a person reaches the age of 75. Animal and plant cells normally reproduce and function during the whole period of natural existence, but the aging process derives from the deterioration of cellular activity and the ruination of regular functioning. The aptitude of cells for gradual deterioration and mortality means that cells are naturally sentenced to stable and long-term loss of living capacities, even despite continuing metabolic reactions and viability. In the United Kingdom, for example, nine out of ten of all the deaths that occur daily relate to senescence, while around the world, it accounts for two-thirds of the 150,000 deaths that take place daily.
Almost all animals who survive external hazards to their biological functioning eventually die from biological aging, known in life sciences as "senescence." Some organisms experience negligible senescence, even exhibiting biological immortality. These include the jellyfish Turritopsis dohrnii, the hydra, and the planarian. Unnatural causes of death include suicide and predation. Of all causes, roughly 150,000 people die around the world each day. Of these, two-thirds die directly or indirectly due to senescence, but in industrialized countries – such as the United States, the United Kingdom, and Germany – the rate approaches 90% (i.e., nearly nine out of ten of all deaths are related to senescence).
Physiological death is now seen as a process, more than an event: conditions once considered indicative of death are now reversible. Where in the process, a dividing line is drawn between life and death depends on factors beyond the presence or absence of vital signs. In general, clinical death is neither necessary nor sufficient for a determination of legal death. A patient with working heart and lungs determined to be brain dead can be pronounced legally dead without clinical death occurring.
Life extension
Life extension refers to an increase in maximum or average lifespan, especially in humans, by slowing or reversing aging processes through anti-aging measures. Aging is the most common cause of death worldwide. Aging is seen as inevitable, so according to Aubrey de Grey little is spent on research into anti-aging therapies, a phenomenon known as pro-aging trance.
The average lifespan is determined by vulnerability to accidents and age or lifestyle-related afflictions such as cancer or cardiovascular disease. Extension of lifespan can be achieved by good diet, exercise, and avoidance of hazards such as smoking. Maximum lifespan is determined by the rate of aging for a species inherent in its genes. A recognized method of extending maximum lifespan is calorie restriction. Theoretically, the extension of the maximum lifespan can be achieved by reducing the rate of aging damage, by periodic replacement of damaged tissues, molecular repair, or rejuvenation of deteriorated cells and tissues.
A United States poll found religious and irreligious people, as well as men and women and people of different economic classes, have similar rates of support for life extension, while Africans and Hispanics have higher rates of support than white people. 38% said they would desire to have their aging process cured.
Researchers of life extension can be known as "biomedical gerontologists." They try to understand aging, and develop treatments to reverse aging processes, or at least slow them for the improvement of health and maintenance of youthfulness. Those who use life extension findings and apply them to themselves are called "life extensionists" or "longevists." The primary life extension strategy currently is to apply anti-aging methods to attempt to live long enough to benefit from a cure for aging.
Cryonics
Cryonics (from Greek κρύος 'kryos-' meaning 'icy cold') is the low-temperature preservation of animals, including humans, who cannot be sustained by contemporary medicine, with the hope that healing and resuscitation may be possible in the future.
Cryopreservation of people and other large animals is not reversible with current technology. The stated rationale for cryonics is that people who are considered dead by current legal or medical definitions, may not necessarily be dead according to the more stringent 'information-theoretic' definition of death.
Some scientific literature is claimed to support the feasibility of cryonics. Medical science and cryobiologists generally regard cryonics with skepticism.
Location
Around 1930, most people in Western countries died in their own homes, surrounded by family, and comforted by clergy, neighbors, and doctors making house calls. By the mid-20th century, half of all Americans died in a hospital. By the start of the 21st century, only about 20 to 25% of people in developed countries died outside of a medical institution. The shift from dying at home towards dying in a professional medical environment has been termed the "Invisible Death." This shift occurred gradually over the years until most deaths now occur outside the home.
Psychology
Death studies is a field within psychology. To varying degrees, people inherently fear death, both the process and the eventuality; this fear is hard-wired and part of the "survival instinct" of all animals. Discussing, thinking about, or planning for their deaths causes them discomfort. This fear may cause them to put off financial planning, preparing a will and testament, or requesting help from a hospice organization.
Mortality salience is the awareness that death is inevitable. Self-esteem and culture are ways to reduce the anxiety this awareness can cause. Awareness of one's own death can deepen bonds within one's in-group as a defense mechanism, but it can also make a person more judgmental. In one study, two groups were formed; one group was asked to reflect upon their mortality, the other was not; afterwards, both groups were asked to set a bond for a prostitute. The group that did not reflect on death set an average of $50, while the group reminded of their death set an average of $455.
Different people have different responses to the idea of their deaths. Philosopher Galen Strawson writes that the death that many people wish for is an instant, painless, unexperienced annihilation. In this unlikely scenario, the person dies without realizing it and without being able to fear it. One moment the person is walking, eating, or sleeping, and the next moment, the person is dead. Strawson reasons that this type of death would not take anything away from the person, as he believes a person cannot have a legitimate claim to ownership in the future.
Society and culture
In society, the nature of death and humanity's awareness of its mortality has, for millennia, been a concern of the world's religious traditions and philosophical inquiry. These traditions include belief in resurrection or an afterlife (associated with Abrahamic religions), reincarnation or rebirth (associated with Dharmic religions), and the belief that consciousness permanently ceases to exist, known as eternal oblivion (associated with secular humanism).
Commemoration ceremonies after death may include various mourning, funeral practices, and ceremonies of honoring the deceased. The physical remains of a person, commonly known as a corpse or body, are usually interred whole or cremated, though among the world's cultures, there are a variety of other methods of mortuary disposal. In the English language, blessings directed towards a dead person include rest in peace (originally the Latin, requiescat in pace) or its initialism RIP.
Death is the center of many traditions and organizations; customs relating to death are a feature of every culture around the world. Much of this revolves around the care of the dead, as well as the afterlife and the disposal of bodies upon the onset of death. The disposal of human corpses does, in general, begin with the last offices before significant time has passed, and ritualistic ceremonies often occur, most commonly interment or cremation. This is not a unified practice; in Tibet, for instance, the body is given a sky burial and left on a mountain top. Proper preparation for death and techniques and ceremonies for producing the ability to transfer one's spiritual attainments into another body (reincarnation) are subjects of detailed study in Tibet. Mummification or embalming is also prevalent in some cultures to retard the rate of decay. The rise of secularism resulted in material mementos of death declining.
Some parts of death in culture are legally based, having laws for when death occurs, such as the receiving of a death certificate, the settlement of the deceased estate, and the issues of inheritance and, in some countries, inheritance taxation.
Capital punishment is also a culturally divisive aspect of death. In most jurisdictions where capital punishment is carried out today, the death penalty is reserved for premeditated murder, espionage, treason, or as part of military justice. In some countries, sexual crimes, such as adultery and sodomy, carry the death penalty, as do religious crimes, such as apostasy, the formal renunciation of one's religion. In many retentionist countries, drug trafficking is also a capital offense. In China, human trafficking and serious cases of corruption are also punished by the death penalty. In militaries around the world, courts-martial have imposed death sentences for offenses such as cowardice, desertion, insubordination, and mutiny. Mutiny is punishable by death in the United States.
Death in warfare and suicide attacks also have cultural links, and the idea of dulce et decorum est pro patria mori, which translates to "it is sweet and proper to die for one's country", dates to antiquity. Additionally, grieving relatives of dead soldiers and death notification are embedded in many cultures. Recently in the Western world—with the increase in terrorism following the September 11 attacks, but also further back in time with suicide bombings, kamikaze missions in World War II, and suicide missions in a host of other conflicts in history—death for a cause by way of suicide attack, including martyrdom, has had significant cultural impacts.
Suicide, in general, and particularly euthanasia, are also points of cultural debate. Both acts are understood very differently in different cultures. In Japan, for example, ending a life with honor by seppuku was considered a desirable death, whereas according to traditional Christian and Islamic cultures, suicide is viewed as a sin.
Death is personified in many cultures, with such symbolic representations as the Grim Reaper, Azrael, the Hindu god Yama, and Father Time. The Grim Reaper, or figures similar to it, is the most popular depiction of death in Western cultures.
In Brazil, death is counted officially when it is registered by existing family members at a cartório, a government-authorized registry. Before being able to file for an official death, the deceased must have been registered for an official birth at the cartório. Though a Public Registry Law guarantees all Brazilian citizens the right to register deaths, regardless of the financial means of their family members (often children), the Brazilian government has not taken away the burden, the hidden costs, and fees of filing for a death. For many impoverished families, the indirect costs and burden of filing for a death lead to a more appealing, unofficial, local, and cultural burial, which, in turn, raises the debate about inaccurate mortality rates.
Talking about death and witnessing it is a difficult issue in most cultures. Western societies may like to treat the dead with the utmost material respect, with an official embalmer and associated rites. Eastern societies (like India) may be more open to accepting it as a fait accompli, with a funeral procession of the dead body ending in an open-air burning-to-ashes.
Origins of death
The origin of death is a theme or myth of how death came to be. It is present in nearly all cultures across the world, as death is a universal happening. This makes it an origin myth, a myth that describes how a feature of the natural or social world appeared. There can be some similarities between myths and cultures. In North American mythology, the theme of a man who wants to be immortal and a man who wants to die can be seen across many Indigenous people. In Christianity, death is the result of the fall of man after eating the fruit from the tree of the knowledge of good and evil. In Greek mythology, the opening of Pandora's box releases death upon the world.
Consciousness
Much interest and debate surround the question of what happens to one's consciousness as one's body dies. The belief in the permanent loss of consciousness after death is often called eternal oblivion. The belief that the stream of consciousness is preserved after physical death is described by the term afterlife.
Near-death experiences (NDEs) describe the subjective experiences associated with impending death. Some survivors of such experiences report it as "seeing the afterlife while they were dying". Seeing a being of light and talking with it, life flashing before the eyes, and the confirmation of cultural beliefs of the afterlife are common themes in NDEs.
In biology
After death, the remains of a former organism become part of the biogeochemical cycle, during which animals may be consumed by a predator or a scavenger. Organic material may then be further decomposed by detritivores, organisms that recycle detritus, returning it to the environment for reuse in the food chain, where these chemicals may eventually end up being consumed and assimilated into the cells of an organism. Examples of detritivores include earthworms, woodlice, and millipedes.
Microorganisms also play a vital role, raising the temperature of the decomposing matter as they break it down into yet simpler molecules. Not all materials need to be fully decomposed. Coal, a fossil fuel formed over vast tracts of time in swamp ecosystems, is one example.
Natural selection
The contemporary evolutionary theory sees death as an important part of the process of natural selection. Organisms less adapted to their environment are more likely to die, having produced fewer offspring, thereby reducing their contribution to the gene pool. Their genes are thus eventually bred out of a population, leading at worst to extinction and, more positively, making possible the process referred to as speciation. Frequency of reproduction plays an equally important role in determining species survival: an organism that dies young but leaves numerous offspring displays, according to Darwinian criteria, much greater fitness than a long-lived organism leaving only one.
Death also has a role in competition, where if a species out-competes another, there is a risk of death for the population, especially in the case where they are directly fighting over resources.
Extinction
Death plays a role in extinction, the cessation of existence of a species or group of taxa, reducing biodiversity, due to extinction being generally considered to be the death of the last individual of that species (although the capacity to breed and recover may have been lost before this point). Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively.
Evolution of aging and mortality
Inquiry into the evolution of aging aims to explain why so many living things and the vast majority of animals weaken and die with age. However, there are exceptions, such as Hydra and the jellyfish Turritopsis dohrnii, which research shows to be biologically immortal.
Organisms showing only asexual reproduction, such as bacteria and some protists like the euglenoids and many amoebozoans, and unicellular organisms with sexual reproduction, colonial or not, like the volvocine algae Pandorina and Chlamydomonas, are "immortal" to some extent, dying only due to external hazards, such as being eaten or meeting a fatal accident. In multicellular organisms, and also in multinucleate ciliates with a Weismannist development, that is, with a division of labor between mortal somatic (body) cells and "immortal" germ (reproductive) cells, death becomes an essential part of life, at least for the somatic line.
The Volvox algae are among the simplest organisms to exhibit that division of labor between two completely different cell types, and as a consequence include the death of the somatic line as a regular, genetically regulated part of their life history.
Grief in animals
Animals have sometimes shown grief for their partners or "friends." When two chimpanzees form a bond, sexual or not, and one of them dies, the surviving chimpanzee will show signs of grief, ripping out its hair in anger and starting to cry. If the body is removed, it will resist; it will eventually go quiet once the body is gone, but upon seeing the body again, the chimpanzee will return to a violent state.
Furthermore, anthropologist Barbara J. King has suggested that one way to evaluate the expression of grief in animals is to look for altered behaviors such as social withdrawal, disrupted eating or sleeping, expression of affect, or increased stress reactions in response to the death of a family member, mate, or friend. These criteria do not assume the ability to anticipate death, understand its finality, or experience emotions equivalent to those of humans, but at the same time do not rule out the possibility of those abilities existing in some animals or that different kinds of emotional experiences might constitute grief. Based on these criteria, King gives examples of observed potential mourning behaviors in animals such as cetaceans, apes and monkeys, elephants, domesticated animals (including dogs, cats, rabbits, horses, and farmed animals), giraffes, peccaries, donkeys, prairie voles, and some species of birds.
Death of abiotic factors
Some non-living things can be considered dead. For example, a volcano, batteries, electrical components, and stars are all nonliving things that can "die," whether from destruction or cessation of function.
A volcano, a break in the earth's crust that allows lava, ash, and gases to escape, may be in one of three states: active, dormant, or extinct. An active volcano is currently erupting or has erupted recently; a dormant volcano has not erupted for a significant amount of time but may erupt again; an extinct volcano has been cut off from its supply of lava and is never expected to erupt again, so it can be considered dead.
A battery can be considered dead after its charge is fully used up. Electrical components are similar: when a component can no longer be used, such as after water has spilled on it, the component can be considered dead.
Stars also have a lifespan and, therefore, can die. As a star starts to run out of fuel, it expands, which can be seen as analogous to the star aging. After it exhausts all its fuel, it may explode in a supernova, collapse into a black hole, or turn into a neutron star.
Religious views
Buddhism
In Buddhist doctrine and practice, death plays an important role. Awareness of death motivated Prince Siddhartha to strive to find the "deathless" and finally attain enlightenment. In Buddhist doctrine, death functions as a reminder of the value of having been born as a human being. Rebirth as a human being is considered the only state in which one can attain enlightenment. Therefore, death helps remind oneself that one should not take life for granted. The belief in rebirth among Buddhists does not necessarily remove death anxiety since all existence in the cycle of rebirth is considered filled with suffering, and being reborn many times does not necessarily mean that one progresses.
Death is part of several key Buddhist tenets, such as the Four Noble Truths and dependent origination.
Christianity
While there are different sects of Christianity with different branches of belief, the overarching ideology on death grows from the knowledge of the afterlife. After death, the individual will undergo a separation from mortality to immortality; their soul leaves the body, entering a realm of spirits. Following this separation of body and spirit (death), resurrection will occur. Representing the same transformation Jesus Christ embodied after his body was placed in the tomb for three days, each person's body will be resurrected, reuniting the spirit and body in a perfect form. This process allows the individual's soul to withstand death and transform into life after death.
Hinduism
In Hindu texts, death is described as the individual eternal spiritual jiva-atma (soul or conscious self) exiting the current temporary material body. The soul exits this body when the body can no longer sustain the conscious self (life), which may be due to mental or physical reasons or, more accurately, the inability to act on one's kama (material desires). During conception, the soul enters a compatible new body based on the remaining merits and demerits of one's karma (good/bad material activities based on dharma) and the state of one's mind (impressions or last thoughts) at the time of death.
Usually, the process of reincarnation makes one forget all memories of one's previous life. Because nothing really dies and the temporary material body is always changing, both in this life and the next, death means forgetfulness of one's previous experiences.
Islam
The Islamic view is that death is the separation of the soul from the body as well as the beginning of the afterlife. The afterlife, or akhirah, is one of the six main beliefs in Islam. Rather than seeing death as the end of life, Muslims consider death as a continuation of life in another form. In Islam, life on earth right now is a short, temporary life and a testing period for every soul. True life begins with the Day of Judgement when all people will be divided into two groups. The righteous believers will be welcomed to janna (heaven), and the disbelievers and evildoers will be punished in jahannam (hellfire).
Muslims believe death to be wholly natural and predetermined by God. Only God knows the exact time of a person's death. The Quran emphasizes that death is inevitable: no matter how much people try to escape it, death will reach everyone (Q50:16). Life on earth is the one and only chance for people to prepare themselves for the life to come and to choose whether or not to believe in God, and death is the end of that learning opportunity.
Judaism
There are a variety of beliefs about the afterlife within Judaism, but none of them contradict the preference for life over death. This is partially because death puts an end to the possibility of fulfilling any commandments.
Language
The word "death" comes from Old English dēaþ, which in turn comes from Proto-Germanic *dauþuz (reconstructed by etymological analysis). This comes from the Proto-Indo-European stem *dheu- meaning the "process, act, condition of dying."
The concept and symptoms of death, and varying degrees of delicacy used in discussion in public forums, have generated numerous scientific, legal, and socially acceptable terms or euphemisms. When a person has died, it is also said they have "passed away", "passed on", "expired", or "gone", among other socially accepted, religiously specific, slang, and irreverent terms.
As a formal reference to a dead person, it has become common practice to use the participle form of "decease", as in "the deceased"; another noun form is "decedent".
Bereft of life, the dead person is a "corpse", "cadaver", "body", "set of remains", or when all flesh is gone, a "skeleton". The terms "carrion" and "carcass" are also used, usually for dead non-human animals. The ashes left after a cremation are lately called "cremains".
Debian
Debian, also known as Debian GNU/Linux, is a free and open source Linux distribution, developed by the Debian Project, which was established by Ian Murdock in August 1993. Debian is one of the oldest operating systems based on the Linux kernel, and is the basis for many other Linux distributions.
As of September 2023, Debian is the second oldest Linux distribution still in active development, only behind Slackware. The project is coordinated over the Internet by a team of volunteers guided by the Debian Project Leader and three foundational documents: the Debian Social Contract, the Debian Constitution, and the Debian Free Software Guidelines.
In general, Debian has been developed openly and distributed freely according to some of the principles of the GNU Project and Free Software. Because of this, the Free Software Foundation sponsored the project from November 1994 to November 1995. However, it is no longer endorsed by GNU and the FSF due to the distribution's long-term practice of hosting non-free software repositories and, since 2022, its inclusion of non-free firmware in its installation media by default. On June 16, 1997, the Debian Project founded the nonprofit organization Software in the Public Interest to continue financially supporting development.
History
Debian version history
Debian distribution codenames are based on the names of characters from the Toy Story films. Debian's unstable trunk is named after Sid, a character who regularly destroyed his toys.
Founding (1993–1998)
First announced on August 16, 1993, Debian was founded by Ian Murdock, who initially called the system "the Debian Linux Release". The word "Debian" was formed as a portmanteau of the first name of his then-girlfriend (later ex-wife) Debra Lynn and his own first name. Before Debian's release, the Softlanding Linux System (SLS) had been a popular Linux distribution and the basis for Slackware. The perceived poor maintenance and prevalence of bugs in SLS motivated Murdock to launch a new distribution.
Debian 0.01, released on September 15, 1993, was the first of several internal releases. Version 0.90 was the first public release, providing support through mailing lists hosted at Pixar. The release included the Debian Linux Manifesto, outlining Murdock's view for the new operating system. In it he called for the creation of a distribution to be maintained "openly in the spirit of Linux and GNU."
The Debian project released the 0.9x versions in 1994 and 1995. During this time it was sponsored by the Free Software Foundation for one year. Ian Murdock delegated the base system, the core packages of Debian, to Bruce Perens, while Murdock focused on the management of the growing project. The first ports to non-IA-32 architectures began in 1995, and Debian 1.1 was released in 1996. By that time, and thanks to Ian Jackson, the dpkg package manager was already an essential part of Debian.
In 1996, Bruce Perens assumed the project leadership. Perens was a controversial leader, regarded as authoritarian and strongly attached to Debian. He drafted a social contract and edited suggestions from a month-long discussion into the Debian Social Contract and the Debian Free Software Guidelines. After the FSF withdrew their sponsorship in the midst of the free software vs. open source debate, Perens initiated the creation of the legal umbrella organization Software in the Public Interest instead of seeking renewed involvement with the FSF. He led the conversion of the project from a.out to ELF. He created the BusyBox program to make it possible to run a Debian installer on a single floppy disk, and wrote a new installer. By the time Debian 1.2 was released, the project had grown to nearly two hundred volunteers. Perens left the project in 1998.
Ian Jackson became the leader in 1998. Debian 2.0 introduced the second official port, m68k. During this time the first port to a non-Linux kernel, Debian GNU/Hurd, was started. On December 2, the first Debian Constitution was ratified.
Leader election (1999–2005)
From 1999, the project leader was elected yearly. APT was deployed with Debian 2.1. The number of applicants was overwhelming and the project established the new member process. The first Debian derivatives, namely Libranet, Corel Linux and Stormix's Storm Linux, were started in 1999. The 2.2 release in 2000 was dedicated to Joel Klecker, a developer who died of Duchenne muscular dystrophy.
In late 2000, the project reorganized the archive with new package "pools" and created the Testing distribution, made up of packages considered stable, to reduce the freeze for the next release. In the same year, developers began holding an annual conference called DebConf with talks and workshops for developers and technical users. In May 2001, Hewlett-Packard announced plans to base its Linux development on Debian.
In July 2002, the project released version 3.0, code-named Woody, the first release to include cryptographic software, a freely licensed KDE and internationalization. During these last release cycles, the Debian project drew considerable criticism from the free software community because of the long time between stable releases.
Several events disrupted the project while it was working on Sarge, as Debian servers were damaged by fire and compromised by hackers. One of the most memorable episodes was the Vancouver prospectus. After a meeting held in Vancouver, release manager Steve Langasek announced a plan to reduce the number of supported ports to four in order to shorten future release cycles. There was a large reaction, both because the proposal looked more like a decision and because such a drop would damage Debian's aim to be "the universal operating system".
The first version of the Debian-based Ubuntu, named "4.10 Warty Warthog", was released on October 20, 2004. Because it was distributed as a free download, it became one of the most popular and successful operating systems, with more than "40 million users" according to Canonical Ltd. However, Murdock was critical of the differences between Ubuntu's packages and Debian's, stating that they lead to incompatibilities.
Sarge and later releases (2005–present)
The 3.1 Sarge release was made in June 2005. This release updated 73% of the software and included over 9,000 new packages. A new installer with a modular design, Debian-Installer, allowed installations with RAID, XFS and LVM support, improved hardware detection, made installations easier for novice users, and was translated into almost forty languages. An installation manual and release notes were available in ten and fifteen languages respectively. The efforts of Skolelinux, Debian-Med and Debian-Accessibility raised the number of packages aimed at education, at medicine, and at people with disabilities.
In 2006, as a result of a much-publicized dispute, Mozilla software was rebranded in Debian, with Firefox forked as Iceweasel and Thunderbird as Icedove. The Mozilla Corporation stated that software with unapproved modifications could not be distributed under the Firefox trademark. Two reasons that Debian modified the Firefox software were to change non-free artwork and to provide security patches. In February 2016, it was announced that Mozilla and Debian had reached an agreement and Iceweasel would revert to the name Firefox; similar agreement was anticipated for Icedove/Thunderbird.
A fund-raising experiment, Dunc-Tank, was created to solve the release cycle problem and release managers were paid to work full-time; in response, unpaid developers slowed down their work and the release was delayed.
Debian 4.0 (Etch) was released in April 2007, featuring the x86-64 port and a graphical installer.
Debian 5.0 (Lenny) was released in February 2009, supporting Marvell's Orion platform and netbooks such as the Asus Eee PC. The release was dedicated to Thiemo Seufer, a developer who died in a car crash.
In July 2009, the policy of time-based development freezes on a two-year cycle was announced. Time-based freezes are intended to blend the predictability of time based releases with Debian's policy of feature based releases, and to reduce overall freeze time. The Squeeze cycle was going to be especially short; however, this initial schedule was abandoned. In September 2010, the backports service became official, providing more recent versions of some software for the stable release.
Debian 6.0 (Squeeze) was released in February 2011, featuring Debian GNU/kFreeBSD as a technology preview, along with adding a dependency-based boot system, and moving problematic firmware to the non-free section.
Debian 7 (Wheezy) was released in May 2013, featuring multiarch support.
Debian 8 (Jessie) was released in April 2015, using systemd as the new init system.
Debian 9 (Stretch) was released in June 2017, with nftables as a replacement for iptables, support for Flatpak apps, and MariaDB as the replacement for MySQL.
Debian 10 (Buster) was released in July 2019, adding support for Secure Boot and enabling AppArmor by default.
Debian 11 (Bullseye) was released in August 2021, enabling persistency in the system journal, adding support for driverless scanning, and containing kernel-level support for exFAT filesystems.
Debian 12 (Bookworm) was released on June 10, 2023, including various improvements and features, updating the Linux kernel to version 6.1, and introducing the new "Emerald" artwork.
Debian is still in development and new packages are uploaded to unstable every day.
Debian used to be released as a very large set of CDs for each architecture, but with the release of Debian 9 (Stretch) in 2017, many of the images were dropped from the archive; they remain buildable via jigdo.
Throughout Debian's lifetime, both the Debian distribution and its website have won various awards from different organizations, including Server Distribution of the Year 2011, The best Linux distro of 2011, and a Best of the Net award for October 1998.
On December 2, 2015, Microsoft announced that they would offer Debian GNU/Linux as an endorsed distribution on the Azure cloud platform. Microsoft has also added a user environment to their Windows 10 desktop operating system called Windows Subsystem for Linux that offers a Debian subset.
Features
Debian has access to online repositories that contain over 51,000 packages. Debian officially contains only free software, but non-free software can be downloaded and installed from the Debian repositories. Debian includes popular free programs such as LibreOffice, Firefox web browser, Evolution mail, K3b disc burner, VLC media player, GIMP image editor, and Evince document viewer. Debian is a popular choice for servers, for example as the operating system component of a LAMP stack.
Kernels
Several flavors of the Linux kernel exist for each port. For example, the i386 port has flavors for IA-32 PCs supporting Physical Address Extension and real-time computing, for older PCs, and for x86-64 PCs. The Linux kernel does not officially contain firmware lacking source code, although such firmware is available in non-free packages and alternative installation media.
Desktop environments
Debian offers CD and DVD images specifically built for Xfce, GNOME, KDE, MATE, Cinnamon, LXDE, and LXQt. MATE support was added in 2014, and Cinnamon support was added with Debian 8 Jessie. Less common window managers such as Enlightenment, Openbox, Fluxbox, IceWM, Window Maker and others are available.
The default desktop environment of version 7 Wheezy was temporarily switched to Xfce, because GNOME 3 did not fit on the first CD of the set. The default for version 8 Jessie was changed again to Xfce in November 2013, and back to GNOME in September 2014.
Localization
Several parts of Debian are translated into languages other than American English, including package descriptions, configuration messages, documentation and the website. The level of software localization depends on the language, ranging from the highly supported German and French to the barely translated Creek and Samoan. The Debian 10 installer is available in 76 languages.
Multimedia support
Multimedia support has been problematic in Debian with respect to codecs that are threatened by possible patent infringements, lack source code, or are under overly restrictive licenses. Even though packages with problems related to their distribution can go into the non-free area, software such as libdvdcss is not hosted by Debian.
A notable third party repository exists, formerly named Debian-multimedia.org, providing software not present in Debian such as Windows codecs, libdvdcss and the Adobe Flash Player. Even though this repository is maintained by Christian Marillat, a Debian developer, it is not part of the project and is not hosted on a Debian server. The repository provides packages already included in Debian, interfering with the official maintenance. Eventually, project leader Stefano Zacchiroli asked Marillat to either settle an agreement about the packaging or to stop using the "Debian" name. Marillat chose the latter and renamed the repository to deb-multimedia.org. The repository was so popular that the switchover was announced by the official blog of the Debian project.
Distribution
Debian offers DVD and CD images for installation that can be downloaded using BitTorrent or jigdo. Physical discs can also be bought from retailers. The full sets are made up of several discs (the amd64 port consists of 13 DVDs or 84 CDs), but only the first disc is required for installation, as the installer can retrieve software not contained in the first disc image from online repositories.
Debian offers different network installation methods. A minimal install of Debian is available via the netinst CD, whereby only a base system is installed, and additional software can be downloaded later over the Internet. Another option is to boot the installer from the network.
The default bootstrap loader is GNU GRUB version 2, though the package name is simply grub, while version 1 was renamed to grub-legacy. This naming differs from other distributions (e.g., Fedora Linux), where GRUB version 2 is packaged as grub2.
The default desktop may be chosen from the DVD boot menu among GNOME, KDE Plasma, Xfce and LXDE, and from special disc 1 CDs.
Debian releases live install images for CDs, DVDs and USB thumb drives, for IA-32 and x86-64 architectures, and with a choice of desktop environments. These Debian Live images allow users to boot from removable media and run Debian without affecting the contents of their computer. A full install of Debian to the computer's hard drive can be initiated from the live image environment. Personalized images can be built with the live-build tool for discs, USB drives and for network booting purposes. Installation images are hybrid on some architectures and can be used to create a bootable USB drive (Live USB).
Packages
Package management operations can be performed with different tools available on Debian, from the lowest level command dpkg to graphical front-ends like Synaptic. The recommended standard for administering packages on a Debian system is the apt toolset.
dpkg provides the low-level infrastructure for package management. The dpkg database contains the list of installed software on the current system. The dpkg command tool does not know about repositories. The command can work with local .deb package files, and information from the dpkg database.
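As an illustration, the dpkg database can be queried from a script. The following is a minimal Python sketch that shells out to the dpkg-query front-end; the helper function and the package queried are illustrative, not part of dpkg itself:

```python
import subprocess

def installed_version(package: str):
    """Return the installed version of a package, or None if not installed."""
    result = subprocess.run(
        ["dpkg-query", "--show", "--showformat=${Version}", package],
        capture_output=True, text=True,
    )
    # dpkg-query exits with a non-zero status if the package is unknown
    return result.stdout if result.returncode == 0 else None

print(installed_version("bash"))
```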
APT tools
An Advanced Packaging Tool (APT) allows a Debian system to retrieve and resolve package dependencies from repositories. APT tools share dependency information and cached packages.
The apt command itself is intended as an end-user interface, and by default enables some options better suited for interactive usage than the more specialized APT tools like apt-get and apt-cache, explained below.
apt-get and apt-cache are command tools of the standard apt package. apt-get installs and removes packages, and apt-cache is used for searching packages and displaying package information.
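The same operations are also scriptable. A minimal sketch using the python-apt bindings, assuming a recent python-apt is installed (packaged on Debian as python3-apt); the package name looked up is an arbitrary example, and committing changes requires root privileges:

```python
import apt  # python-apt bindings

cache = apt.Cache()                  # read the APT/dpkg package database
pkg = cache["hello"]                 # look up a package by name
print(pkg.is_installed, pkg.candidate.version)

if not pkg.is_installed:
    pkg.mark_install()               # resolve dependencies, as apt-get would
    cache.commit()                   # download and install; requires root
```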
Aptitude is a command line tool that also offers a text-based user interface. The program comes with enhancements such as better search on package metadata.
GDebi and other front-ends
GDebi is an APT tool which can be used in command-line and on the GUI. GDebi can install a local .deb file via the command line like the dpkg command, but with access to repositories to resolve dependencies. Other graphical front-ends for APT include Software Center, Synaptic and Apper.
GNOME Software is a graphical front-end for PackageKit, which itself can work on top of various software packaging systems.
Repositories
The Debian Free Software Guidelines (DFSG) define the distinctive meaning of the word "free" as in "free and open-source software". Packages that comply with these guidelines, usually under the GNU General Public License, Modified BSD License or Artistic License, are included inside the main area; otherwise, they are included inside the non-free and contrib areas. These last two areas are not distributed within the official installation media, but they can be added manually.
Non-free includes packages that do not comply with the DFSG, such as documentation with invariant sections and proprietary software, as well as legally questionable packages. Contrib includes packages which do comply with the DFSG but fail other requirements; for example, they may depend on packages in non-free, or require such packages in order to build.
Richard Stallman and the Free Software Foundation have criticized the Debian project for hosting the non-free repository and because the contrib and non-free areas are easily accessible, an opinion echoed by some in Debian including the former project leader Wichert Akkerman. The internal dissent in the Debian project regarding the non-free section has persisted, but the last time it came to a vote in 2004, the majority decided to keep it.
Cross-distribution package managers
The most popular cross-distribution package managers are optional graphical front-ends available in the official Debian repository, though they are not installed by default. They are popular with both Debian users and Debian software developers who want to install the most recent versions of applications, or to use the sandbox environments these package managers provide, while remaining in control of security.
The four most popular cross-distribution package managers, in alphabetical order, are:
AppImage: Linux distribution-agnostic binary software deployment.
Flatpak: software code is owned and maintained by the non-profit Flatpak Team, under the open-source LGPL-2.1-or-later license.
Homebrew: software code is owned and maintained by its original author Max Howell, under the open-source BSD 2-Clause License.
Snap: software code is owned and maintained by the for-profit Canonical Group Limited, under the open-source GNU General Public License, version 3.0.
Branches
Three branches of Debian (also called releases, distributions or suites) are regularly maintained:
Stable is the current release and targets stable and well-tested software needs. Stable is made by freezing Testing for a few months where bugs are fixed and packages with too many bugs are removed; then the resulting system is released as stable. It is updated only if major security or usability fixes are incorporated. This branch has an optional backports service that provides more recent versions of some software. Stable's CDs and DVDs can be found on the Debian website. The current version of Stable is codenamed bookworm.
Testing is the preview branch that will eventually become the next major release. The packages included in this branch have had some testing in unstable but they may not be fit for release yet. It contains newer packages than stable but older than unstable. This branch is updated continually until it is frozen. Testing's CDs and DVDs can be found on the Debian website. The current version of Testing is codenamed trixie.
Unstable, always codenamed sid, is the trunk. Packages are accepted without checking the distribution as a whole. This branch is usually run by software developers who participate in a project and need the latest libraries available, and by those who prefer bleeding-edge software. Debian does not provide full Sid installation discs, but rather a minimal ISO that can be used to install over a network connection. Additionally, this branch can be installed through a system upgrade from stable or testing.
Other branches in Debian:
Oldstable is the prior stable release. It is supported by the Debian Security Team until one year after a new stable is released, and since the release of Debian 6, for another two years through the Long Term Support project. Eventually, oldstable is moved to a repository for archived releases. Debian 11 is the current Oldstable release (since 2023-06-10).
Oldoldstable is the prior oldstable release. It is supported by the Long Term Support community. Eventually, oldoldstable is moved to a repository for archived releases. Debian 10 is the current Oldoldstable release (since 2023-06-10).
Experimental is a temporary staging area for highly experimental software that is likely to break the system. It is not a full distribution; dependencies missing from it are commonly found in unstable, where new software that carries no such risk of damage is normally uploaded.
The snapshot archive provides older versions of the branches. They may be used to install a specific older version of some software.
Numbering scheme
Stable and oldstable get minor updates, called point releases; examples include version 11.7 for Debian 11 and version 10.10 for Debian 10.
The numbering scheme for the point releases up to Debian 4.0 was to include the letter r (for revision) after the main version number and then the number of the point release; for example, the latest point release of version 4.0 is 4.0r9. This scheme was chosen because a new dotted version would make the old one look obsolete and vendors would have trouble selling their CDs.
From Debian 5.0, the numbering scheme of point releases was changed, conforming to the GNU version numbering standard; the first point release of Debian 5.0 was 5.0.1 instead of 5.0r1. The numbering scheme was once again changed for the first Debian 7 update, which was version 7.1. The r scheme is no longer in use, but point release announcements include a note about not throwing away old CDs.
Branding
Debian has two logos. The official logo (also known as the open use logo) contains the well-known Debian swirl and best represents the visual identity of the Debian Project. A separate logo also exists, for use by the Debian Project and its members only.
The Debian "swirl" logo was designed by Raul Silva in 1999 as part of a contest to replace the semi-official logo that had been used. The winner of the contest received an @Debian.org email address, and a set of Debian 2.1 install CDs for the architecture of their choice. Initially, the swirl was magic smoke arising from an also included bottle of an Arabian-style genie presented in black profile, but shortly after was reduced to the red smoke swirl for situations where space or multiple colours were not an option, and before long the bottle version effectively was superseded. There has been no official statement from the Debian project on the logo's meaning, but at the time of the logo's selection, it was suggested that the logo represented the magic smoke that made computers work.
One theory about the origin of the Debian logo is that Buzz Lightyear, the chosen character for the first named Debian release, has a swirl in his chin. Stefano Zacchiroli also suggested that this swirl is the Debian one. Buzz Lightyear's swirl is a more likely candidate as the codenames for Debian are names of Toy Story characters. The former Debian project leader Bruce Perens used to work for Pixar and is credited as a studio tools engineer on Toy Story 2 (1999).
Hardware
Hardware requirements are at least those of the kernel and the GNU toolsets. Debian's recommended system requirements depend on the level of installation, which corresponds to increased numbers of installed components.
The real minimum memory requirements depend on the architecture and may be much less than the recommended numbers. It is possible to install Debian with 170 MB of RAM for x86-64; the installer will run in low memory mode, and it is recommended to create a swap partition. The installer for z/Architecture requires about 20 MB of RAM, but relies on network hardware. Similarly, disk space requirements, which depend on the packages to be installed, can be reduced by manually selecting the packages needed. No Pure Blend currently exists that would lower the hardware requirements easily.
It is possible to run graphical user interfaces on older or low-end systems. However, the installation of window managers instead of desktop environments is recommended, as desktop environments are more resource intensive. Requirements for individual software vary widely and must be considered, with those of the base operating environment.
Architectures
The official ports are:
amd64: x86-64 architecture with 64-bit userland and supporting 32-bit software
arm64: ARMv8-A architecture
armel: Little-endian ARM architecture (ARMv4T instruction set) on various embedded systems (embedded application binary interface (EABI)), although support ended after Buster
armhf: ARM hard-float architecture (ARMv7 instruction set) requiring hardware with a floating-point unit
i386: IA-32 architecture with 32-bit userland, compatible with x86-64 machines
mips64el: Little-endian 64-bit MIPS
mipsel: Little-endian 32-bit MIPS
ppc64el: Little-endian PowerPC architecture supporting POWER7+ and POWER8 CPUs
riscv64: 64-bit RISC-V
s390x: z/Architecture with 64-bit userland, intended to replace s390
Unofficial ports are available as part of the unstable distribution:
alpha: DEC Alpha architecture
hppa: HP PA-RISC architecture
hurd-i386: GNU Hurd kernel on IA-32 architecture
ia64: Intel Itanium
loong64: LoongArch
m68k: Motorola 68k architecture on Amiga, Atari, Macintosh and various embedded VME systems
powerpc: 32-bit PowerPC
ppc64: PowerPC64 architecture supporting 64-bit PowerPC CPUs with VMX
sh4: Hitachi SuperH architecture
sparc64: Sun SPARC architecture with 64-bit userland
x32: x32 ABI userland for x86-64
Debian supports a variety of ARM-based NAS devices. The NSLU2 was supported by the installer in Debian 4.0 and 5.0, and Martin Michlmayr has provided installation tarballs since version 6.0. Other supported NAS devices are the Buffalo Kurobox Pro, GLAN Tank, Thecus N2100 and QNAP Turbo Stations.
Devices based on the Kirkwood system on a chip (SoC) are supported too, such as the SheevaPlug plug computer and OpenRD products. There are efforts to run Debian on mobile devices, but this is not a project goal yet since the Debian Linux kernel maintainers would not apply the needed patches. Nevertheless, there are packages for resource-limited systems.
There are efforts to support Debian on wireless access points. Debian is known to run on set-top boxes. Work is ongoing to support the AM335x processor, which is used in electronic point of service solutions. Debian may be customized to run on cash machines.
BeagleBoard, a low-power open-source hardware single-board computer (made by Texas Instruments), has switched to Debian Linux preloaded on its BeagleBone Black board's flash.
Roqos Core, manufactured by Roqos, is an x86-64-based IPS firewall router running Debian Linux.
Organization
Debian's policies and team efforts focus on collaborative software development and testing processes. As a result, a new major release tends to occur every two years with revision releases that fix security issues and important problems. The Debian project is a volunteer organization with three foundational documents:
The Debian Social Contract defines a set of basic principles by which the project and its developers conduct affairs.
The Debian Free Software Guidelines define the criteria for "free software" and thus what software is permissible in the distribution. These guidelines have been adopted as the basis of the Open Source Definition. Although this document can be considered separate, it formally is part of the Social Contract.
The Debian Constitution describes the organizational structure for formal decision-making within the project, and enumerates the powers and responsibilities of the Project Leader, the Secretary and other roles.
Debian developers are organized in a web of trust. There are about one thousand active Debian developers, but it is possible to contribute to the project without being an official developer.
The project maintains official mailing lists and conferences for communication and coordination between developers. For issues with single packages and other tasks, a public bug tracking system is used by developers and end users. Internet Relay Chat is also used for communication among developers and to provide real time help.
Debian is supported by donations made to organizations authorized by the leader. The largest supporter is Software in the Public Interest, the owner of the Debian trademark, manager of the monetary donations and umbrella organization for various other community free software projects.
A Project Leader is elected once per year by the developers. The leader has special powers, but they are not absolute, and appoints delegates to perform specialized tasks. Delegates make decisions as they think is best, taking into account technical criteria and consensus. By way of a General Resolution, the developers may recall the leader, reverse a decision made by the leader or a delegate, amend foundational documents and make other binding decisions. The voting method is based on the Schulze method (Cloneproof Schwartz Sequential Dropping).
Project leadership has occasionally been distributed. Branden Robinson was helped by Project Scud, a team of developers that assisted the leader, but there were concerns that such leadership would split Debian into two developer classes. Anthony Towns created a supplemental position, Second In Charge (2IC), that shared some powers of the leader. Steve McIntyre was 2IC and had a 2IC himself.
One important role in Debian's leadership is that of a release manager. The release team sets goals for the next release, supervises the processes and decides when to release. The team is led by the next release managers and stable release managers. Release assistants were introduced in 2003.
Developers
The Debian Project has an influx of applicants wishing to become developers. These applicants must undergo a vetting process which establishes their identity, motivation, understanding of the project's principles, and technical competence. This process has become much harder throughout the years.
Debian developers join the project for many reasons. Some that have been cited include:
Debian is their main operating system and they want to promote Debian
To improve the support for their favorite technology
They are involved with a Debian derivative
A desire to contribute back to the free-software community
To make their Debian maintenance work easier
Debian developers may resign their positions at any time or, when deemed necessary, they can be expelled. Those who follow the retiring protocol are granted the "emeritus" status and they may regain their membership through a shortened new member process.
Development
Flowchart of the life cycle of a Debian package
Each software package has a maintainer that may be either one person or a team of Debian developers and non-developer maintainers. The maintainer keeps track of upstream releases, and ensures that the package coheres with the rest of the distribution and meets the standards of quality of Debian. Packages may include modifications introduced by Debian to achieve compliance with Debian Policy, even to fix non-Debian specific bugs, although coordination with upstream developers is advised.
The maintainer releases a new version by uploading the package to the "incoming" system, which verifies the integrity of the packages and their digital signatures. If the package is found to be valid, it is installed in the package archive into an area called the "pool" and distributed every day to hundreds of mirrors worldwide. The upload must be signed using OpenPGP-compatible software. All Debian developers have individual cryptographic key pairs. Developers are responsible for any package they upload even if the packaging was prepared by another contributor.
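For illustration, a signature on an upload can be checked with any OpenPGP-compatible tool. This minimal Python sketch uses the python-gnupg wrapper; the file name is hypothetical, and the keyring is assumed to already contain the developer's public key:

```python
import gnupg  # python-gnupg wrapper around the gpg binary

gpg = gnupg.GPG()  # uses the default GnuPG home directory and keyring
# A .changes file is a signed control file describing an upload
with open("example_1.0-1_amd64.changes", "rb") as changes:
    verified = gpg.verify_file(changes)

print(verified.valid, verified.key_id)
```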
Initially, an accepted package is only available in the unstable branch. For a package to become a candidate for the next release, it must migrate to the Testing branch by meeting the following:
It has been in unstable for a certain length of time that depends on the urgency of the changes.
It does not have "release-critical" bugs, except for the ones already present in Testing. Release-critical bugs are those considered serious enough that they make the package unsuitable for release.
There are no outdated versions in unstable for any release ports.
The migration does not break any packages in Testing.
Its dependencies can be satisfied by packages already in Testing or by packages being migrated at the same time.
The migration is not blocked by a freeze.
Thus, a release-critical bug in a new version of a shared library on which many packages depend may prevent those packages from entering Testing, because the updated library must meet the requirements too. From the branch viewpoint, the migration process happens twice per day, rendering Testing in perpetual beta.
Periodically, the release team publishes guidelines to the developers in order to ready the release. A new release occurs after a freeze, when all important software is reasonably up-to-date in the Testing branch and any other significant issues are solved. At that time, all packages in the testing branch become the new stable branch. Although freeze dates are time-based, release dates are not; they are announced by the release managers a couple of weeks beforehand.
A version of a package can belong to more than one branch, usually testing and unstable. It is possible for a package to keep the same version between stable releases and be part of oldstable, stable, testing and unstable at the same time. Each branch can be seen as a collection of pointers into the package "pool" mentioned above.
One way to resolve the challenge of a release-critical bug in a new application version is the use of optional package managers. They allow software developers to use sandbox environments, while at the same time remaining in control of security. Another benefit of a cross-distribution package manager is that they allow application developers to directly provide updates to users without going through distributions, and without having to package and test the application separately for each distribution.
Release cycle
A new stable branch of Debian gets released approximately every two years. It receives official support for about three years, with updates for major security or usability fixes. Point releases are made available every several months, as determined by the Stable Release Managers (SRM).
Debian also launched its Long Term Support (LTS) project with Debian 6 (Debian Squeeze). Each Debian release receives two years of extra security updates, provided by the LTS Team after its end of life (EOL); however, no point releases are made during this period. Each Debian release can now receive five years of security support in total.
Security
The Debian project handles security through public disclosure. Debian security advisories are compatible with the Common Vulnerabilities and Exposures dictionary, are usually coordinated with other free software vendors and are published the same day a vulnerability is made public. There used to be a security audit project that focused on packages in the stable release looking for security bugs; Steve Kemp, who started the project, retired in 2011 but resumed his activities and applied to rejoin in 2014.
The stable branch is supported by the Debian security team; oldstable is supported for one year. Although Squeeze is not officially supported, Debian is coordinating an effort to provide long-term support (LTS) until February 2016, five years after the initial release, but only for the IA-32 and x86-64 platforms. Testing is supported by the testing security team, but does not receive updates in as timely a manner as stable. Unstable's security is left to the package maintainers.
The Debian project offers documentation and tools to harden a Debian installation both manually and automatically. AppArmor support is available and enabled by default since Buster. Debian provides an optional hardening wrapper; unlike operating systems such as OpenBSD, it does not harden all of its software by default using gcc features such as PIE and buffer overflow protection, but it tries to build as many packages as possible with hardening flags.
In May 2008, a Debian developer discovered that the OpenSSL package distributed with Debian and derivatives such as Ubuntu made a variety of security keys vulnerable to a random number generator attack, since only 32,767 different keys were generated. The security weakness was caused by changes made in 2006 by another Debian developer in response to memory debugger warnings. The complete resolution procedure was cumbersome because patching the security hole was not enough; it involved regenerating all affected keys and certificates.
Value
The cost of developing all of the packages included in Debian 5.0 Lenny (323 million lines of code) has been estimated using one method based on the COCOMO model. Black Duck Open Hub has estimated the cost of developing the current codebase (74 million lines of code) using a different method based on the same model.
Forks and derivatives
A large number of forks and derivatives have been built upon Debian over the years. Among the more notable are Ubuntu, developed by Canonical Ltd. and first released in 2004, which has surpassed Debian in popularity with desktop users; Knoppix, first released in the year 2000 and one of the first distributions optimized to boot from external storage; and Devuan, which gained attention in 2014 when it forked in disagreement over Debian's adoption of the systemd software suite, and has been mirroring Debian releases since 2017. The Linux Mint Debian Edition (LMDE) uses Debian Stable as the software source base since 2014.
Derivatives and flavors
Debian is one of the most popular Linux distributions, and many other distributions have been created from the Debian codebase. DistroWatch lists 121 active Debian derivatives. The Debian project provides its derivatives with guidelines for best practices and encourages derivatives to merge their work back into Debian.
Debian Pure Blends are subsets of a Debian release configured out-of-the-box for users with particular skills and interests. For example, Debian Jr. is made for children, while Debian Science is for researchers and scientists. The complete Debian distribution includes all available Debian Pure Blends. "Debian Blend" (without "Pure") is a term for a Debian-based distribution that strives to become part of mainstream Debian, and have its extra features included in future releases.
Debian GNU/Hurd
Debian GNU/Hurd is a flavor based on the Hurd kernel (which, in turn, runs on the GNU Mach microkernel), instead of the Linux kernel. Debian GNU/Hurd has been in development since 1998, and made a formal release in May 2013, with 78% of the software packaged for Debian GNU/Linux ported to the GNU Hurd. Hurd is not yet an official Debian release, and is maintained and developed as an unofficial port. Debian GNU/Hurd is distributed as an installer CD (running the official Debian installer) or ready-to-run virtual disk image (Live CD, Live USB). The CD uses the IA-32 architecture, making it compatible with IA-32 and x86-64 PCs. The current version of Debian GNU/Hurd is 2023, published in June 2023.
Debian GNU/kFreeBSD
Debian GNU/kFreeBSD is a discontinued Debian flavor. It used the FreeBSD kernel and GNU userland. The majority of software in Debian GNU/kFreeBSD was built from the same sources as Debian, with some kernel packages from FreeBSD. The k in kFreeBSD is an abbreviation for kernel, which refers to the FreeBSD kernel. Before discontinuing the project, Debian maintained i386 and amd64 ports. The last version of Debian kFreeBSD was Debian 8 (Jessie) RC3. Debian GNU/kFreeBSD was created in 2002. It was included in Debian 6.0 (Squeeze) as a technology preview, and in Debian 7 (Wheezy) as an official port. Debian GNU/kFreeBSD was discontinued as an officially supported platform as of Debian 8. Debian developers cited OSS, pf, jails, NDIS, and ZFS as reasons for being interested in the FreeBSD kernel. It has not been officially updated since Debian 8. However, starting in July 2019, the operating system continued to be maintained unofficially. As of July 2023, the development of Debian GNU/kFreeBSD has officially terminated due to the lack of interest and developers.
| Technology | Operating Systems | null |
8254 | https://en.wikipedia.org/wiki/Diode | Diode | A diode is a two-terminal electronic component that conducts current primarily in one direction (asymmetric conductance). It has low (ideally zero) resistance in one direction and high (ideally infinite) resistance in the other.
A semiconductor diode, the most commonly used type today, is a crystalline piece of semiconductor material with a p–n junction connected to two electrical terminals. It has an exponential current–voltage characteristic. Semiconductor diodes were the first semiconductor electronic devices. The discovery of asymmetric electrical conduction across the contact between a crystalline mineral and a metal was made by German physicist Ferdinand Braun in 1874. Today, most diodes are made of silicon, but other semiconducting materials such as gallium arsenide and germanium are also used.
The obsolete thermionic diode is a vacuum tube with two electrodes, a heated cathode and a plate, in which electrons can flow in only one direction, from the cathode to the plate.
Among many uses, diodes are found in rectifiers to convert alternating current (AC) power to direct current (DC) and in demodulation in radio receivers, and they can even be used for logic or as temperature sensors. A common variant of a diode is a light-emitting diode, which is used as electric lighting and in status indicators on electronic devices.
Main functions
Unidirectional current flow
The most common function of a diode is to allow an electric current to pass in one direction (called the diode's forward direction), while blocking it in the opposite direction (the reverse direction). Its hydraulic analogy is a check valve. This unidirectional behavior can convert alternating current (AC) to direct current (DC), a process called rectification. As rectifiers, diodes can be used for such tasks as extracting modulation from radio signals in radio receivers.
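A numerical sketch of this behavior, simulating half-wave rectification with an idealized diode model (zero forward drop, no reverse leakage); the waveform values are illustrative:

```python
import math

def ideal_diode(v_in: float) -> float:
    """Ideal diode: passes positive (forward) voltage, blocks negative."""
    return max(0.0, v_in)

# Half-wave rectification of a 10 V peak, 50 Hz sine wave over one period
frequency = 50.0
for step in range(8):
    t = step / (8 * frequency)
    v = 10.0 * math.sin(2 * math.pi * frequency * t)
    print(f"t={t*1000:5.2f} ms  in={v:+6.2f} V  out={ideal_diode(v):5.2f} V")
```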
Threshold voltage
A diode's behavior is often simplified as having a forward threshold voltage (also called turn-on or cut-in voltage), above which there is significant current and below which there is almost no current; the threshold depends on a diode's composition.
This voltage may loosely be referred to simply as the diode's forward voltage drop or just voltage drop, since a consequence of the steepness of the exponential is that a diode's voltage drop will not significantly exceed the threshold voltage under normal forward bias operating conditions. Datasheets typically quote a typical or maximum forward voltage (VF) for a specified current and temperature (e.g. 20 mA and 25 °C for LEDs), so the user has a guarantee about when a certain amount of current will kick in. At higher currents, the forward voltage drop of the diode increases. For instance, a drop of 1 V to 1.5 V is typical at full rated current for silicon power diodes. ( | Technology | Components | null |
8263 | https://en.wikipedia.org/wiki/Dissociation%20constant | Dissociation constant | In chemistry, biochemistry, and pharmacology, a dissociation constant (KD) is a specific type of equilibrium constant that measures the propensity of a larger object to separate (dissociate) reversibly into smaller components, as when a complex falls apart into its component molecules, or when a salt splits up into its component ions. The dissociation constant is the inverse of the association constant. In the special case of salts, the dissociation constant can also be called an ionization constant. For a general reaction:
A_xB_y <=> x A + y B
in which a complex breaks down into x A subunits and y B subunits, the dissociation constant is defined as
K_D = \frac{[A]^x [B]^y}{[A_x B_y]}
where [A], [B], and [Ax By] are the equilibrium concentrations of A, B, and the complex Ax By, respectively.
One reason for the popularity of the dissociation constant in biochemistry and pharmacology is that in the frequently encountered case where x = y = 1, KD has a simple physical interpretation: when [A] = KD, then [B] = [AB] or, equivalently, [AB]/([B] + [AB]) = 1/2. That is, KD, which has the dimensions of concentration, equals the concentration of free A at which half of the total molecules of B are associated with A. This simple interpretation does not apply for higher values of x or y. It also presumes the absence of competing reactions, though the derivation can be extended to explicitly allow for and describe competitive binding. It is useful as a quick description of the binding of a substance, in the same way that EC50 and IC50 describe the biological activities of substances.
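For the case x = y = 1, the half-occupancy statement can be verified directly from the definition; a short derivation:

```latex
K_D = \frac{[A][B]}{[AB]}
\quad\Longrightarrow\quad
\frac{[AB]}{[B]+[AB]}
= \frac{[A]/K_D}{1 + [A]/K_D}
= \frac{[A]}{K_D + [A]}
= \tfrac{1}{2}
\quad\text{when } [A] = K_D
```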
Concentration of bound molecules
Molecules with one binding site
Experimentally, the concentration of the molecule complex [AB] is obtained indirectly from the measurement of the concentration of the free molecules, either [A] or [B].
In principle, the total amounts of molecule [A]0 and [B]0 added to the reaction are known.
They separate into free and bound components according to the mass conservation principle:
[A]_0 = [A] + [AB]
[B]_0 = [B] + [AB]
To track the concentration of the complex [AB], one substitutes the concentration of the free molecules ([A] or [B]) in the respective conservation equation with the definition of the dissociation constant,
[A]_0 = K_D \frac{[AB]}{[B]} + [AB]
This yields the concentration of the complex related to the concentration of either one of the free molecules:
[AB] = \frac{[A]_0 [B]}{K_D + [B]} = \frac{[B]_0 [A]}{K_D + [A]}
Macromolecules with identical independent binding sites
Many biological proteins and enzymes can possess more than one binding site.
Usually, when a ligand binds with a macromolecule, it can influence the binding kinetics of other ligands binding to the macromolecule.
A simplified mechanism can be formulated if the affinity of all binding sites can be considered independent of the number of ligands bound to the macromolecule.
This is valid for macromolecules composed of more than one, mostly identical, subunit. It can then be assumed that each of these n subunits is identical and symmetric, and that each possesses only a single binding site. Then the concentration of bound ligands [L]_{bound} becomes
[L]_{bound} = \frac{n [M]_0 [L]}{K_D + [L]}
In this case, [L]_{bound} \ne [ML], but comprises all partially saturated forms of the macromolecule:
[L]_{bound} = [ML] + 2[ML_2] + 3[ML_3] + \cdots + n[ML_n]
where the saturation occurs stepwise:
M + L <=> ML, \qquad K'_1 = \frac{[M][L]}{[ML]}
ML + L <=> ML_2, \qquad K'_2 = \frac{[ML][L]}{[ML_2]}
\cdots
ML_{n-1} + L <=> ML_n, \qquad K'_n = \frac{[ML_{n-1}][L]}{[ML_n]}
For the derivation of the general binding equation, a saturation function r is defined as the quotient of the portion of bound ligand to the total amount of the macromolecule:
r = \frac{[L]_{bound}}{[M]_0}
K′n are so-called macroscopic or apparent dissociation constants and can result from multiple individual reactions. For example, if a macromolecule M has three binding sites, K′1 describes a ligand being bound to any of the three binding sites. In this example, K′2 describes two molecules being bound and K′3 three molecules being bound to the macromolecule. The microscopic or individual dissociation constant describes the equilibrium of ligands binding to specific binding sites. Because we assume identical binding sites with no cooperativity, the microscopic dissociation constant must be equal for every binding site and can be abbreviated simply as KD. In our example, K′1 is the amalgamation of a ligand binding to either of the three possible binding sites (I, II and III), hence three microscopic dissociation constants and three distinct states of the ligand–macromolecule complex. For K′2 there are six different microscopic dissociation constants (I–II, I–III, II–I, II–III, III–I, III–II) but only three distinct states (it does not matter whether you bind pocket I first and then II or II first and then I). For K′3 there are three different dissociation constants — there are only three possibilities for which pocket is filled last (I, II or III) — and one state (I–II–III).
Even when the microscopic dissociation constant is the same for each individual binding event, the macroscopic outcome (K′1, K′2 and K′3) is not equal. This can be understood intuitively for our example of three possible binding sites. K′1 describes the reaction from one state (no ligand bound) to three states (one ligand bound to either of the three binding sides). The apparent K′1 would therefore be three times smaller than the individual KD. K′2 describes the reaction from three states (one ligand bound) to three states (two ligands bound); therefore, K′2 would be equal to KD. K′3 describes the reaction from three states (two ligands bound) to one state (three ligands bound); hence, the apparent dissociation constant K′3 is three times bigger than the microscopic dissociation constant KD.
The general relationship between both types of dissociation constants for n binding sites is
K'_i = K_D \frac{i}{n - i + 1}
Hence, the ratio of bound ligand to macromolecules becomes
r = \frac{[L]_{bound}}{[M]_0} = \frac{\sum_{i=1}^{n} i \binom{n}{i} \left( \frac{[L]}{K_D} \right)^{i}}{\sum_{i=0}^{n} \binom{n}{i} \left( \frac{[L]}{K_D} \right)^{i}}
where \binom{n}{i} = \frac{n!}{(n-i)!\, i!} is the binomial coefficient. Then the first equation is proved by applying the binomial rule:
r = \frac{n \frac{[L]}{K_D} \left(1 + \frac{[L]}{K_D}\right)^{n-1}}{\left(1 + \frac{[L]}{K_D}\right)^{n}} = \frac{n [L]}{K_D + [L]}
Protein–ligand binding
The dissociation constant is commonly used to describe the affinity between a ligand L (such as a drug) and a protein P; i.e., how tightly a ligand binds to a particular protein. Ligand–protein affinities are influenced by non-covalent intermolecular interactions between the two molecules such as hydrogen bonding, electrostatic interactions, hydrophobic and van der Waals forces. Affinities can also be affected by high concentrations of other macromolecules, which causes macromolecular crowding.
The formation of a ligand–protein complex LP can be described by a two-state process
L + P <=> LP
the corresponding dissociation constant is defined as
K_D = \frac{[P][L]}{[LP]}
where [P], [L], and [LP] represent molar concentrations of the protein, ligand, and protein–ligand complex, respectively.
The dissociation constant has molar units (M) and corresponds to the ligand concentration [L] at which half of the proteins are occupied at equilibrium, i.e., the concentration of ligand at which the concentration of protein with ligand bound [LP] equals the concentration of protein with no ligand bound [P]. The smaller the dissociation constant, the more tightly bound the ligand is, or the higher the affinity between ligand and protein. For example, a ligand with a nanomolar (nM) dissociation constant binds more tightly to a particular protein than a ligand with a micromolar (μM) dissociation constant.
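This relationship can be sketched numerically. A minimal Python example computing the equilibrium fraction of protein occupied from the standard occupancy expression [L]/(KD + [L]); it assumes the free ligand concentration is known, and the values below are illustrative:

```python
def fraction_bound(ligand_m: float, kd_m: float) -> float:
    """Equilibrium fraction of protein occupied: [L] / (KD + [L])."""
    return ligand_m / (kd_m + ligand_m)

# At 10 nM free ligand, a 1 nM-affinity ligand occupies ~91% of the protein,
# while a 1 uM-affinity ligand occupies only ~1%.
for kd in (1e-9, 1e-6):
    print(f"KD = {kd:.0e} M -> {fraction_bound(10e-9, kd):.1%} bound")
```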
Sub-picomolar dissociation constants as a result of non-covalent binding interactions between two molecules are rare. Nevertheless, there are some important exceptions. Biotin and avidin bind with a dissociation constant of roughly 10−15 M = 1 fM = 0.000001 nM.
Ribonuclease inhibitor proteins may also bind to ribonuclease with a similar 10−15 M affinity.
The dissociation constant for a particular ligand–protein interaction can change with solution conditions (e.g., temperature, pH and salt concentration). The effect of different solution conditions is to effectively modify the strength of any intermolecular interactions holding a particular ligand–protein complex together.
Drugs can produce harmful side effects through interactions with proteins for which they were not meant to or designed to interact. Therefore, much pharmaceutical research is aimed at designing drugs that bind to only their target proteins (negative design) with high affinity (typically 0.1–10 nM) or at improving the affinity between a particular drug and its in vivo protein target (positive design).
Antibodies
In the specific case of antibodies (Ab) binding to antigen (Ag), usually the term affinity constant refers to the association constant.
Ab + Ag <=> AbAg
This chemical equilibrium is also the ratio of the on-rate (kforward or ka) and off-rate (kback or kd) constants. Two antibodies can have the same affinity, but one may have both a high on- and off-rate constant, while the other may have both a low on- and off-rate constant.
Acid–base reactions
For the deprotonation of acids, K is known as Ka, the acid dissociation constant. Strong acids, such as sulfuric or phosphoric acid, have large dissociation constants; weak acids, such as acetic acid, have small dissociation constants.
The symbol Ka, used for the acid dissociation constant, can lead to confusion with the association constant, and it may be necessary to see the reaction or the equilibrium expression to know which is meant.
Acid dissociation constants are sometimes expressed by pKa, which is defined by
\mathrm{p}K_a = -\log_{10} K_a
This notation is seen in other contexts as well; it is mainly used for covalent dissociations (i.e., reactions in which chemical bonds are made or broken) since such dissociation constants can vary greatly.
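As a worked conversion, using the commonly cited pKa of acetic acid (about 4.76):

```latex
K_a = 10^{-\mathrm{p}K_a}
\qquad
\mathrm{p}K_a(\text{acetic acid}) \approx 4.76
\;\Longrightarrow\;
K_a \approx 10^{-4.76} \approx 1.7 \times 10^{-5}
```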
A molecule can have several acid dissociation constants. Depending on the number of protons they can give up, acids are classified as monoprotic, diprotic and triprotic. The first (e.g., acetic acid or ammonium) have only one dissociable group, the second (e.g., carbonic acid, bicarbonate, glycine) have two dissociable groups and the third (e.g., phosphoric acid) have three dissociable groups. In the case of multiple pK values they are designated by indices: pK1, pK2, pK3 and so on. For amino acids, the pK1 constant refers to its carboxyl (–COOH) group, pK2 refers to its amino (–NH2) group and pK3 is the pK value of its side chain.
Dissociation constant of water
The dissociation constant of water is denoted Kw:
K_w = [\mathrm{H^+}][\mathrm{OH^-}]
The concentration of water, [H2O], is omitted by convention, which means that the value of Kw differs from the value of Keq that would be computed using that concentration.
The value of Kw varies with temperature, as shown in the table below. This variation must be taken into account when making precise measurements of quantities such as pH.
Water temperature    Kw (× 10−14)    pKw
0 °C                 0.112           14.95
25 °C                1.023           13.99
50 °C                5.495           13.26
75 °C                19.95           12.70
100 °C               56.23           12.25
| Physical sciences | Thermodynamics | Chemistry |
8267 | https://en.wikipedia.org/wiki/Dimensional%20analysis | Dimensional analysis | In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae.
Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless.
Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation.
The concept of physical dimension or quantity dimension, and of dimensional analysis, was introduced by Joseph Fourier in 1822.
Formulation
The Buckingham π theorem describes how every physically meaningful equation involving n variables can be equivalently rewritten as an equation of n − m dimensionless parameters, where m is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables.
A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis, and involves scaling quantities by characteristic units of a system or physical constants of nature. This may give insight into the fundamental properties of the system, as illustrated in the examples below.
The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary".
There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols:
time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J).
The symbols are by convention usually written in roman sans serif typeface. Mathematically, the dimension of the quantity Q is given by
\dim Q = \mathsf{T}^{a}\mathsf{L}^{b}\mathsf{M}^{c}\mathsf{I}^{d}\mathsf{\Theta}^{e}\mathsf{N}^{f}\mathsf{J}^{g}
where a, b, c, d, e, f, g are the dimensional exponents. Other physical quantities could be defined as the base quantities, as long as they form a basis – for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since Q = TI.
A quantity that has only the length exponent b nonzero (with all other exponents zero) is known as a geometric quantity. A quantity that has only nonzero length and time exponents is known as a kinematic quantity. A quantity that has only nonzero length, time and mass exponents is known as a dynamic quantity.
A quantity that has all exponents null is said to have dimension one.
The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, 1 in = 2.54 cm; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity.
There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity, although this does not invalidate the usefulness of dimensional analysis.
Simple cases
As examples, the dimension of the physical quantity speed is T−1L.
The dimension of the physical quantity acceleration is T−2L.
The dimension of the physical quantity force (mass times acceleration) is T−2LM.
The dimension of the physical quantity pressure (force per area) is T−2L−1M.
The dimension of the physical quantity energy (force times distance) is T−2L2M.
The dimension of the physical quantity power (energy per time) is T−3L2M.
The dimension of the physical quantity electric charge (current times time) is TI.
The dimension of the physical quantity voltage (power per current) is T−3L2MI−1.
The dimension of the physical quantity capacitance (charge per voltage) is T4L−2M−1I2.
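This bookkeeping is mechanical enough to automate. A minimal Python sketch that tracks dimensional exponents over the base (T, L, M, I) and checks the homogeneity of force = mass × acceleration; the helper names are illustrative:

```python
def dim(T=0, L=0, M=0, I=0):
    """A dimension, stored as exponents over the base quantities (T, L, M, I)."""
    return (T, L, M, I)

def mul(a, b):
    """Multiplying two quantities adds their dimensional exponents."""
    return tuple(x + y for x, y in zip(a, b))

length = dim(L=1)
mass = dim(M=1)

speed = mul(length, dim(T=-1))   # T^-1 L
accel = mul(speed, dim(T=-1))    # T^-2 L
force = mul(mass, accel)         # T^-2 L M

# Dimensional homogeneity: both sides of F = m*a must have equal exponents
assert force == dim(T=-2, L=1, M=1)
print("dim(force) over (T, L, M, I):", force)
```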
Rayleigh's method
In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh.
The method involves the following steps:
Gather all the independent variables that are likely to influence the dependent variable.
If X is a variable that depends upon independent variables X₁, X₂, X₃, ..., Xₙ, then the functional equation can be written as X = F(X₁, X₂, X₃, ..., Xₙ).
Write the above equation in the form X = C X₁ᵃ X₂ᵇ X₃ᶜ ⋯ Xₙᵐ, where C is a dimensionless constant and a, b, c, ..., m are arbitrary exponents.
Express each of the quantities in the equation in some base units in which the solution is required.
By using dimensional homogeneity, obtain a set of simultaneous equations involving the exponents a, b, c, ..., m.
Solve these equations to obtain the values of the exponents a, b, c, ..., m.
Substitute the values of exponents in the main equation, and form the non-dimensional parameters by grouping the variables with like exponents.
As a drawback, Rayleigh's method does not provide any information regarding the number of dimensionless groups to be obtained as a result of the dimensional analysis.
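As a hedged illustration of these steps, consider the classic simple pendulum (an assumed example, not taken from the text above): postulate t = C lᵃ mᵇ gᶜ for the period t, length l, mass m and gravity g, and equate exponents of T, L and M. A minimal Python sketch using sympy:

    from sympy import symbols, linsolve

    a, b, c = symbols("a b c")

    # Dimensional homogeneity of t = C * l**a * m**b * g**c,
    # with [t] = T, [l] = L, [m] = M, [g] = L T^-2.
    # Each expression below must equal zero:
    equations = [
        a + c,       # L: 0 = a + c
        b,           # M: 0 = b
        -2*c - 1,    # T: 1 = -2c
    ]
    print(linsolve(equations, a, b, c))  # {(1/2, 0, -1/2)}

Solving the simultaneous equations gives a = 1/2, b = 0, c = −1/2, i.e. t = C √(l/g), with the dimensionless constant C (equal to 2π for small oscillations) left undetermined by the method.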
Concrete numbers and base units
Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h. Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m2 for square metres), or combinations thereof.
A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed. For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units.
Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s−2). The newton is defined as 1 N = 1 kg⋅m⋅s−2.
Percentages, derivatives and integrals
Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since 1% = 1/100.
Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to. Thus:
position (x) has the dimension L (length);
the derivative of position with respect to time (dx/dt, velocity) has dimension T⁻¹L—length from position, time due to the gradient;
the second derivative (d²x/dt², acceleration) has dimension T⁻²L.
Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator.
force has the dimension T⁻²LM (mass multiplied by acceleration);
the integral of force with respect to the distance (s) the object has travelled (∫ F ds, work) has dimension T⁻²L²M.
In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year).
In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance) and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged.
Dimensional homogeneity (commensurability)
The most basic rule of dimensional analysis is that of dimensional homogeneity: only commensurable quantities (physical quantities having the same dimension) may be compared, equated, added, or subtracted.
However, the dimensions form an abelian group under multiplication, so one may take ratios of incommensurable quantities (quantities with different dimensions) and multiply or divide them.
For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h.
The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. For example, if m_man, m_rat and L_man denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression m_man + m_rat is meaningful, but the heterogeneous expression m_man + L_man is meaningless. However, m_man/L_man² is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions.
Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension T⁻²L²M, they are fundamentally different physical quantities.
To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use 1 yd = 0.9144 m to convert 35 yards to 32.004 m.
A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables. For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that a conversion factor between two units that measure the same dimension must take: multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres.
Conversion factor
In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, 5 bar × 100 kPa / 1 bar = 500 kPa, because 5 × 100 / 1 = 500 and bar/bar cancels out, so 5 bar = 500 kPa.
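This multiply-by-one manipulation is mechanical enough to code directly. A minimal Python sketch (constant and function names are illustrative):

    # The conversion factor 100 kPa / 1 bar equals the dimensionless 1,
    # so multiplying by it changes only the unit, not the quantity.
    KPA_PER_BAR = 100.0

    def bar_to_kpa(pressure_bar: float) -> float:
        return pressure_bar * KPA_PER_BAR

    print(bar_to_kpa(5.0))  # 500.0, i.e. 5 bar = 500 kPa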
Applications
Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well.
Mathematics
A simple application of dimensional analysis to mathematics is in computing the form of the volume of an n-ball (the solid ball in n dimensions), or the area of its surface, the (n−1)-sphere: being an n-dimensional figure, the volume scales as xⁿ, while the surface area, being (n−1)-dimensional, scales as xⁿ⁻¹. Thus the volume of the n-ball in terms of the radius r is Cₙrⁿ, for some constant Cₙ. Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone.
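The scaling (though not the constant) can be checked numerically. A minimal Python sketch, using the known closed form Cₙ = π^(n/2)/Γ(n/2 + 1) only to supply concrete values:

    import math

    def ball_volume(n: int, r: float) -> float:
        """Volume of the n-ball: C_n * r**n, with C_n = pi**(n/2) / Gamma(n/2 + 1)."""
        return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

    # Doubling the radius must multiply the volume by 2**n:
    for n in (1, 2, 3):
        print(ball_volume(n, 2.0) / ball_volume(n, 1.0))  # 2.0, 4.0, 8.0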
Finance, economics, and accounting
In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios.
For example, the P/E ratio has dimensions of time (unit: year), and can be interpreted as "years of earnings to earn the price paid".
In economics, debt-to-GDP ratio also has the unit year (debt has a unit of currency, GDP has a unit of currency/year).
Velocity of money has a unit of 1/years (GDP/money supply has a unit of currency/year over currency): how often a unit of currency circulates per year.
Annual continuously compounded interest rates and simple interest rates are often expressed as a percentage (adimensional quantity) while time is expressed as an adimensional quantity consisting of the number of years. However, if the time includes year as the unit of measure, the dimension of the rate is 1/year. Of course, there is nothing special (apart from the usual convention) about using year as a unit of time: any other time unit can be used. Furthermore, if rate and time include their units of measure, the use of different units for each is not problematic. In contrast, rate and time need to refer to a common period if they are adimensional. (Note that effective interest rates can only be defined as adimensional quantities.)
In financial analysis, bond duration can be defined as D = −(1/V)(dV/dr), where V is the value of a bond (or portfolio) and r is the continuously compounded interest rate. From the previous point, the dimension of r is 1/time. Therefore, the dimension of duration is time (usually expressed in years) because dr is in the "denominator" of the derivative.
Fluid mechanics
In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships. In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include:
Reynolds number (Re), generally important in all types of fluid problems: Re = ρud/μ.
Froude number (Fr), modeling flow with a free surface: Fr = u/√(gL).
Euler number (Eu), used in problems in which pressure is of interest: Eu = Δp/(ρu²).
Mach number (Ma), important in high speed flows where the velocity approaches or exceeds the local speed of sound: Ma = u/c, where c is the local speed of sound.
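With sample values these groups are one-liners to evaluate. A minimal Python sketch (all numbers are assumed, illustrative flow parameters in SI units):

    import math

    rho = 1000.0   # density, kg/m^3
    mu = 1.0e-3    # dynamic viscosity, Pa*s
    u = 2.0        # flow speed, m/s
    d = 0.05       # characteristic length (e.g. pipe diameter), m
    g = 9.81       # gravitational acceleration, m/s^2
    dp = 4.0e3     # pressure difference, Pa
    c = 343.0      # local speed of sound, m/s

    reynolds = rho * u * d / mu     # inertial vs. viscous forces
    froude = u / math.sqrt(g * d)   # inertial vs. gravitational forces
    euler = dp / (rho * u * u)      # pressure vs. inertial forces
    mach = u / c                    # flow speed vs. sound speed
    print(reynolds, froude, euler, mach)

Because each group is dimensionless, the same numbers come out whatever consistent unit system is used, which is what makes them useful for comparing a model with a prototype.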
History
The origins of dimensional analysis have been disputed by historians. The first written application of dimensional analysis has been credited to François Daviet, a student of Joseph-Louis Lagrange, in a 1799 article at the Turin Academy of Science.
This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually formalized in the Buckingham π theorem.
Siméon Poisson also treated the same problem of the parallelogram law as Daviet, in his treatise of 1811 and 1833 (vol I, p. 39). In the second edition of 1833, Poisson explicitly introduced the term dimension instead of the Daviet homogeneity.
In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions based on the idea that physical laws like F = ma should be independent of the units employed to measure the physical variables.
James Clerk Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived. Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant is taken as unity, thereby defining M = T⁻²L³. By assuming a form of Coulomb's law in which the Coulomb constant ke is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were Q = T⁻¹L^(3/2)M^(1/2), which, after substituting his equation for mass, results in charge having the same dimensions as mass, viz. Q = T⁻²L³.
Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book The Theory of Sound.
The original meaning of the word dimension, in Fourier's Théorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time. This was slightly changed by Maxwell, who said the dimensions of acceleration are T−2L, instead of just the exponents.
Examples
A simple example: period of a harmonic oscillator
What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g.
The four quantities have the following dimensions: T [T]; m [M]; k [M/T²]; and g [L/T²]. From these we can form only one dimensionless product of powers of our chosen variables, G₁ = T²k/m, and putting G₁ = C for some dimensionless constant C gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well.
The variable g does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines g with T, m, and k, because g is the only quantity that involves the dimension L. This implies that in this problem the g is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of g: it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: T = κ√(m/k), for some dimensionless constant κ (equal to √C from the original dimensionless equation).
When faced with a case where dimensional analysis rejects a variable (g, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here.
When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as κ.
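The single group can also be found mechanically as the null space of the dimensional matrix. A minimal Python sketch using sympy (variable ordering T, m, k, g over base dimensions T, L, M):

    from sympy import Matrix

    # Columns are the (T, L, M) exponents of T, m, k, g respectively.
    dim_matrix = Matrix([
        [1, 0, -2, -2],   # T exponents
        [0, 0,  0,  1],   # L exponents
        [0, 1,  1,  0],   # M exponents
    ])

    # Each null-space basis vector is the exponent tuple of a dimensionless product.
    for v in dim_matrix.nullspace():
        print(v.T)  # exponents (2, -1, 1, 0), i.e. T**2 * k / m

The nullity is one, so there is exactly one independent group, matching the argument above; the column for g cannot participate because it alone carries the L exponent.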
A more complex example: energy of a vibrating wire
Consider the case of a vibrating wire of length ℓ (L) vibrating with an amplitude A (L). The wire has a linear density ρ (M/L) and is under tension s (LM/T²), and we want to know the energy E (L²M/T²) in the wire. Let π₁ and π₂ be two dimensionless products of powers of the variables chosen, given by
π₁ = E/(As), π₂ = ℓ/A.
The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation
F(π₁, π₂) = 0,
where F is some unknown function, or, equivalently as
E = As f(ℓ/A),
where f is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical analysis, we might proceed to experiments to discover the form for the unknown function f. But our experiments are simpler than in the absence of dimensional analysis. We'd perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to ℓ, and so infer that E = ℓs. The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident.
The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated: the set of variables involved is not apparent, and the underlying equations are hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis.
A third example: demand versus capacity for a rotating disc
Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness t (L) and radius R (L). The disc has a density ρ (M/L³), rotates at an angular velocity ω (T⁻¹) and this leads to a stress σ (T⁻²L⁻¹M) in the material. There is a theoretical linear elastic solution, given by Lamé, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius, the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined through consideration of three-dimensional elasticity and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following (5 − 3 = 2) non-dimensional groups:
demand/capacity = ρR²ω²/σ
thickness/radius or aspect ratio = t/R
Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs.
Properties
Mathematical properties
The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: the identity is written as 1; L⁰ = 1, and the inverse of L is 1/L or L⁻¹. L raised to any integer power p is a member of the group, having an inverse of L⁻ᵖ or 1/Lᵖ. The operation of the group is multiplication, having the usual rules for handling exponents (Lᵃ × Lᵇ = Lᵃ⁺ᵇ). Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second).
An abelian group is equivalent to a module over the integers, with the dimensional symbol TⁱLʲMᵏ corresponding to the tuple (i, j, k). When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one another, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module.
A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa).
The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, (0, 0, 0).
In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like L^(1/2). However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions.
One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions M and L, one has the vector spaces V_M and V_L, and can define V_ML = V_M ⊗ V_L as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions: the dual of V_L corresponds to L⁻¹. This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar.
The set of units of the physical quantities involved in a problem correspond to a set of vectors (or a matrix). The nullity describes some number (e.g., m) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, π₁, ..., πₘ. (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity X can be expressed in the general form
X = π₁^k₁ π₂^k₂ ⋯ πₘ^kₘ X₀,
where X₀ is any one fixed product of powers of the measured quantities with the same unit as X.
Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form
f(π₁, π₂, ..., πₘ) = 0.
Knowing this restriction can be a powerful tool for obtaining new insight into the system.
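As a concrete, hedged illustration, the dimensionless groups of the vibrating-wire example above drop out of the null space of its dimensional matrix. A minimal Python sketch using sympy:

    from sympy import Matrix

    # Columns are the (T, L, M) exponents of E, l, A, s, rho respectively.
    dim_matrix = Matrix([
        [-2, 0, 0, -2,  0],   # T exponents
        [ 2, 1, 1,  1, -1],   # L exponents
        [ 1, 0, 0,  1,  1],   # M exponents
    ])

    print(dim_matrix.rank())   # 3, so nullity = 5 - 3 = 2 groups
    for v in dim_matrix.nullspace():
        print(v.T)             # a basis equivalent to E/(A*s) and l/A

Any rearrangement of the resulting equation into the form of a dimensionless function set to zero is then guaranteed to be commensurate.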
Mechanics
The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space, and be linearly independent.
For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T²], L, M, while the latter can be expressed as [T = (LM/F)^(1/2)], L, M.
On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons:
There is no way to obtain mass – or anything derived from it, such as force – without introducing another base dimension (thus, they do not span the space).
Velocity, being expressible in terms of length and time (V = L/T), is redundant (the set is not linearly independent).
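The basis requirement can be checked with simple linear algebra. A minimal Python sketch writing each candidate dimension as an exponent vector over (T, L, M):

    from sympy import Matrix

    # Candidate base dimensions as (T, L, M) exponent vectors:
    T = [1, 0, 0]
    L = [0, 1, 0]
    V = [-1, 1, 0]   # velocity = length / time

    candidate = Matrix([T, L, V]).T
    print(candidate.rank())  # 2, not 3: no M component and V = L/T, so not a basis

A full-rank 3 × 3 matrix (as with T, L, M or F, L, M) would confirm a valid choice of base dimensions.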
Other fields of physics and chemistry
Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ 6.02 × 10²³ mol⁻¹) is also defined as a base dimension, N.
In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma-, electron- and critical-densities in addition to the electromagnetic vector potential. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communications are common and necessary features.
Polynomials and transcendental functions
Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor. This excludes polynomials of more than one term or transcendental functions not of that form.
Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the squares of certain dimensioned quantities are dimensionless.)
While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity log(a/b) = log a − log b, where the logarithm is taken in any base, holds for dimensionless numbers a and b, but it does not hold if a and b are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not.
Similarly, while one can evaluate monomials (xⁿ) of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for x², the expression (3 m)² = 9 m² makes sense (as an area), while for x² + x, the expression (3 m)² + 3 m = 9 m² + 3 m does not make sense.
However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example,
y = (500 m/s) t − (4.9 m/s²) t².
This is the height to which an object rises in time t if the acceleration of gravity is 9.8 metres per second per second and the initial upward speed is 500 metres per second. It is not necessary for t to be in seconds. For example, suppose t = 0.01 minutes. Then the first term would be
(500 m/s) × (0.01 min) × (60 s/min) = 300 m.
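Because the coefficients carry units, the formula stays valid under any consistent handling of the time unit. A minimal Python sketch (function and constant names illustrative):

    V0 = 500.0    # initial upward speed, m/s
    HALF_G = 4.9  # half the gravitational acceleration, m/s^2

    def height_m(t_seconds: float) -> float:
        """y = (500 m/s)*t - (4.9 m/s^2)*t**2, with t supplied in seconds."""
        return V0 * t_seconds - HALF_G * t_seconds ** 2

    # t = 0.01 minutes: apply the factor 60 s/min before evaluating.
    print(height_m(0.01 * 60.0))  # 300 - 1.764 = 298.236 m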
Combining units and numerical values
The value of a dimensional physical quantity Z is written as the product of a unit [Z] within the dimension and a dimensionless numerical value or numerical factor, n: Z = n × [Z] = n [Z].
When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed:
1 ft = 0.3048 m, which is identical to 1 = 0.3048 m / 1 ft.
The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted.
Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units.
Quantity equations
A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities.
In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical value is referenced to a specific unit.
For example, a quantity equation for displacement d as speed v multiplied by time difference t would be:
d = v t,
for v = 5 m/s, where t and d may be expressed in any units, converted if necessary.
In contrast, a corresponding numerical-value equation would be:
D = 5 T,
where T is the numeric value of t when expressed in seconds and D is the numeric value of d when expressed in metres.
Generally, the use of numerical-value equations is discouraged.
Dimensionless concepts
Constants
The dimensionless constants that arise in the results obtained, such as the C in the Poiseuille's law problem and the κ in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc.
Formalisms
Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, ξ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be ~ 1/ξ^d, where d is the dimension of the lattice.
It has been argued by some physicists, e.g., Michael J. Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants c, ħ, and G, in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other.
Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants ħ, c, and G (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit c → ∞, ħ → 0 and G → 0. In problems involving a gravitational field the latter limit should be taken such that the field stays finite.
Dimensional equivalences
Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force.
SI units
Programming languages
Dimensional correctness as part of type checking has been studied since 1977.
Implementations for Ada and C++ were described in 1985 and 1988.
Kennedy's 1996 thesis describes an implementation in Standard ML, and later in F#. There are implementations for Haskell, OCaml, Rust and Python, and a code checker for Fortran.
Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices.
McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure.
Mathematica 13.2 has a function for transformations with quantities named NondimensionalizationTransform that applies a nondimensionalization transform to an equation. Mathematica also has a function to find the dimensions of a unit such as 1 J, named UnitDimensions. Mathematica also has a function that will find dimensionally equivalent combinations of a subset of physical quantities, named DimensionalCombinations. Mathematica can also factor out certain dimensions by supplying an argument to UnityDimensions; for example, UnityDimensions can be used to factor out angles. In addition to UnitDimensions, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions.
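Even without compile-time types, dimensional homogeneity can be enforced at run time. A minimal Python sketch (illustrative, not the API of any of the libraries above):

    class Quantity:
        """A numeric value tagged with (T, L, M) exponents."""
        def __init__(self, value, dim):
            self.value, self.dim = value, tuple(dim)

        def __add__(self, other):
            if self.dim != other.dim:  # dimensional homogeneity check
                raise TypeError(f"cannot add {self.dim} to {other.dim}")
            return Quantity(self.value + other.value, self.dim)

        def __mul__(self, other):
            return Quantity(self.value * other.value,
                            tuple(a + b for a, b in zip(self.dim, other.dim)))

    metre = Quantity(1.0, (0, 1, 0))
    second = Quantity(1.0, (1, 0, 0))

    print((metre * metre).dim)  # (0, 2, 0): an area
    metre + second              # raises TypeError: heterogeneous addition

Static versions of the same idea, as in the Standard ML, F# and Haskell implementations mentioned above, move this check to compile time.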
Geometry: position vs. displacement
Affine quantities
Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. In mathematics scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin. While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change).
Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable:
adding two displacements should yield a new displacement (walking ten paces then twenty paces gets you thirty paces forward),
adding a displacement to a position should yield a new position (walking one block down the street from an intersection gets you to the next intersection),
subtracting two positions should yield a displacement,
but one may not add two positions.
This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement).
Vector quantities may be added to each other, yielding a new vector quantity, and a vector quantity may be added to a suitable affine quantity (a vector space acts on an affine space), yielding a new affine quantity.
Affine quantities cannot be added, but may be subtracted, yielding relative quantities which are vectors, and these relative differences may then be added to each other or to an affine quantity.
Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a vector unit only requires a unit of measurement.
Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis.
This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero,
−273.15 °C ≘ 0 K = 0 °R ≘ −459.67 °F,
where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated.
For temperature differences,
1 K = 1 °C ≠ 1 °F = 1 °R.
(Here °R refers to the Rankine scale, not the Réaumur scale).
Unit conversion for temperature differences is simply a matter of multiplying by a fixed ratio, e.g., 1.8 °F / 1 K (a ratio that is not equal to 1). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that offset. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C.
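The affine/vector distinction can be made explicit in code, so that a (here run-time) check rules out adding two absolute temperatures. A minimal Python sketch with illustrative class names:

    class TempDelta:
        """A temperature difference (vector quantity), in kelvins."""
        def __init__(self, dk):
            self.dk = dk
        def __add__(self, other):
            return TempDelta(self.dk + other.dk)  # delta + delta -> delta

    class Temp:
        """An absolute temperature (affine quantity), in kelvins."""
        def __init__(self, k):
            self.k = k
        def __add__(self, delta):
            return Temp(self.k + delta.dk)        # position + displacement -> position
        def __sub__(self, other):
            return TempDelta(self.k - other.k)    # position - position -> displacement

    boiling, freezing = Temp(373.15), Temp(273.15)
    print((boiling - freezing).dk)  # ~100.0 K difference
    # boiling + freezing fails (a Temp has no .dk): adding two positions is meaningless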
Orientation and frame of reference
Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference.
This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis.
Huntley's extensions
Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank of the dimensional matrix.
He introduced two approaches:
The magnitudes of the components of a vector are to be considered dimensionally independent. For example, rather than an undifferentiated length dimension L, we may have Lx represent dimension in the x-direction, and so forth. This requirement stems ultimately from the requirement that each component of a physically meaningful equation (scalar, vector, or tensor) must be dimensionally consistent.
Mass as a measure of the quantity of matter is to be considered dimensionally independent from mass as a measure of inertia.
Directed dimensions
As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component Vy and a horizontal velocity component Vx, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then R, the distance travelled, with dimension L; Vx and Vy, both dimensioned as T⁻¹L; and g, the downward acceleration of gravity, with dimension T⁻²L.
With these four quantities, we may conclude that the equation for the range R may be written:
R ∝ Vxᵃ Vyᵇ gᶜ.
Or dimensionally
L = (T⁻¹L)ᵃ⁺ᵇ (T⁻²L)ᶜ,
from which we may deduce that a + b + c = 1 and a + b + 2c = 0, which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions T and L, and four parameters, with one equation.
However, if we use directed length dimensions, then Vx will be dimensioned as T⁻¹Lx, Vy as T⁻¹Ly, R as Lx and g as T⁻²Ly. The dimensional equation becomes:
Lx = (T⁻¹Lx)ᵃ (T⁻¹Ly)ᵇ (T⁻²Ly)ᶜ
and we may solve completely as a = 1, b = 1 and c = −1. The increase in deductive power gained by the use of directed length dimensions is apparent.
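Both systems of exponent equations are small enough to solve mechanically. A minimal Python sketch with sympy, mirroring the two cases above:

    from sympy import symbols, linsolve

    a, b, c = symbols("a b c")

    # Undirected lengths: only L and T constraints, so one exponent stays free.
    print(linsolve([a + b + c - 1, a + b + 2*c], a, b, c))
    # {(2 - b, b, -1)}

    # Directed lengths: Lx and Ly give separate constraints and pin the answer.
    print(linsolve([a - 1, b + c, -a - b - 2*c], a, b, c))
    # {(1, 1, -1)} -> R = C * Vx * Vy / g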
Huntley's concept of directed length dimensions however has some serious limitations:
It does not deal well with vector equations involving the cross product,
nor does it handle well the use of angles as physical variables.
It also is often quite difficult to assign the Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: it is unclear to which parts of the problem the notion of "symmetry" is being applied. Is it the symmetry of the physical body that forces are acting upon, or of the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries?
Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems.
Quantity of matter
In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass) and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity proportional to inertial mass, but not implicating inertial properties. No further restrictions are added to its definition.
For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables: the mass flow rate ṁ (MT⁻¹), the pressure gradient along the pipe p_x (ML⁻²T⁻²), the density ρ (ML⁻³), the dynamic fluid viscosity η (ML⁻¹T⁻¹), and the radius of the pipe r (L).
There are three fundamental variables, so the above five quantities will yield two independent dimensionless variables, which may be taken as π₁ = ṁ/(ηr) and π₂ = p_x ρ r⁵/ṁ².
If we distinguish between inertial mass with dimension M_i and quantity of matter with dimension M_m, then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written:
ṁη/(p_x ρ r⁴) = C,
where now only C is an undetermined constant (found to be equal to π/8 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law, ṁ = (π/8) ρ p_x r⁴/η.
Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable.
Siano's extension: orientational analysis
Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested). As an example, consider again the projectile problem in which a point mass is launched from the origin at a speed v and angle θ above the x-axis, with the force of gravity directed along the negative y-axis. It is desired to find the range R, at which point the mass returns to the x-axis. Conventional analysis will yield the dimensionless variable π = Rg/v², but offers no insight into the relationship between R and θ.
Siano has suggested that the directed dimensions of Huntley be replaced by using orientational symbols 1x, 1y, 1z to denote vector directions, and an orientationless symbol 10. Thus, Huntley's Lx becomes L·1x, with L specifying the dimension of length, and 1x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1i⁻¹ = 1i, the following multiplication table for the orientation symbols results: 10 acts as the identity; 1i·1i = 10 for each i; and the product of any two distinct symbols among 1x, 1y, 1z is the third (e.g., 1x·1y = 1z).
The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1z. For angles, consider an angle θ that lies in the xy-plane. Form a right triangle in the xy-plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1x and the side opposite has an orientation 1y. Since tan θ ~ 1y/1x = 1z (using ~ to indicate orientational equivalence) we conclude that an angle in the xy-plane must have an orientation 1z, which is not unreasonable. Analogous reasoning forces the conclusion that sin θ has orientation 1z while cos θ has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form a cos θ + b sin θ, where a and b are real scalars. An expression such as sin(θ + π/2) is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written:
sin(a 1z + b 1z) = sin(a 1z) cos(b 1z) + cos(a 1z) sin(b 1z),
which for a = θ and b = π/2 yields sin(θ 1z + [π/2] 1z) = cos(θ 1z). Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is 10.
The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd.
As an example, for the projectile problem, using orientational symbols, θ, being in the xy-plane, will thus have dimension 1z and the range of the projectile R will be of the form
R = gᵃ vᵇ θᶜ, which means L 1x ~ (L 1y/T²)ᵃ (L/T)ᵇ (1z)ᶜ.
Dimensional homogeneity will now correctly yield a = −1 and b = 2, and orientational homogeneity requires that 1x = 1yᵃ 1zᶜ = 1y 1zᶜ, hence that c must be an odd integer. In fact, the required function of theta will be sin θ cos θ, which is a series consisting of odd powers of θ.
It is seen that the Taylor series of sin θ and cos θ are orientationally homogeneous using the above multiplication table, while expressions like cos θ + sin θ and exp(θ) are not, and are (correctly) deemed unphysical.
Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis.
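Since the orientational symbols form the Klein four-group, their algebra can be encoded as bit pairs under XOR (one convenient representation of Z₂ × Z₂; the encoding is an assumption of this sketch, not Siano's notation). A minimal Python sketch:

    # 1_0, 1_x, 1_y, 1_z as bit pairs; the group product is component-wise XOR.
    O0, OX, OY, OZ = (0, 0), (1, 0), (0, 1), (1, 1)

    def omul(a, b):
        """Product of two orientational symbols; each symbol is its own inverse."""
        return (a[0] ^ b[0], a[1] ^ b[1])

    assert omul(OX, OX) == O0  # 1x * 1x = 1_0
    assert omul(OX, OY) == OZ  # 1x * 1y = 1z

    def power_orientation(base, n):
        out = O0
        for _ in range(n):
            out = omul(out, base)
        return out

    # Odd powers of an angle (orientation 1z) stay 1z, even powers are 1_0,
    # which is why sin(theta) ~ 1z and cos(theta) ~ 1_0 cannot be added.
    print(power_orientation(OZ, 3), power_orientation(OZ, 2))  # (1, 1) (0, 0)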
| Physical sciences | Basics | Basics and measurement |
8271 | https://en.wikipedia.org/wiki/Digital%20television | Digital television | Digital television (DTV) is the transmission of television signals using digital encoding, in contrast to the earlier analog television technology which used analog signals. At the time of its development it was considered an innovative advancement and represented the first significant evolution in television technology since color television in the 1950s. Modern digital television is transmitted in high-definition television (HDTV) with greater resolution than analog TV. It typically uses a widescreen aspect ratio (commonly 16:9) in contrast to the narrower format (4:3) of analog TV. It makes more economical use of scarce radio spectrum space; it can transmit up to seven channels in the same bandwidth as a single analog channel, and provides many new features that analog television cannot. A transition from analog to digital broadcasting began around 2000. Different digital television broadcasting standards have been adopted in different parts of the world; below are the more widely used standards:
Digital Video Broadcasting (DVB) uses coded orthogonal frequency-division multiplexing (OFDM) modulation and supports hierarchical transmission. This standard has been adopted in Europe, Africa, Asia and Australia, for a total of approximately 60 countries.
Advanced Television Systems Committee (ATSC) standard uses eight-level vestigial sideband (8VSB) for terrestrial broadcasting. This standard has been adopted by 9 countries: the United States, Canada, Mexico, South Korea, the Bahamas, Jamaica, the Dominican Republic, Haiti and Suriname.
Integrated Services Digital Broadcasting (ISDB) is a system designed to provide good reception to fixed receivers and also portable or mobile receivers. It utilizes OFDM and two-dimensional interleaving. It supports hierarchical transmission of up to three layers and uses MPEG-2 video and Advanced Audio Coding. This standard has been adopted in Japan and the Philippines. ISDB-T International is an adaptation of this standard using H.264/MPEG-4 AVC, which has been adopted in most of South America as well as Botswana and Angola.
Digital Terrestrial Multimedia Broadcast (DTMB) adopts time-domain synchronous (TDS) OFDM technology with a pseudo-random signal frame to serve as the guard interval (GI) of the OFDM block and the training symbol. The DTMB standard has been adopted in China, including Hong Kong and Macau.
Digital Multimedia Broadcasting (DMB) is a digital radio transmission technology developed in South Korea as part of the national information technology project for sending multimedia such as TV, radio and datacasting to mobile devices such as mobile phones, laptops and GPS navigation systems.
History
Background
Digital television's roots are tied to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital TV became a real possibility; digital television was previously not practically feasible due to the impractically high bandwidth requirements of uncompressed video, around 200 Mbit/s for a standard-definition television (SDTV) signal, and over 1 Gbit/s for high-definition television (HDTV).
Development
In the mid-1980s, Toshiba released a television set with digital capabilities, using integrated circuit chips such as a microprocessor to convert analog television broadcast signals to digital video signals, enabling features such as freezing pictures and showing two channels at once. In 1986, Sony and NEC Home Electronics announced their own similar TV sets with digital video capabilities. However, they still relied on analog TV broadcast signals, with true digital TV broadcasts not yet being available at the time.
A digital TV broadcast service was proposed in 1986 by Nippon Telegraph and Telephone (NTT) and the Ministry of Posts and Telecommunication (MPT) in Japan, where there were plans to develop an "Integrated Network System" service. However, it was not possible to practically implement such a digital TV service until the adoption of motion-compensated DCT video compression formats such as MPEG made it possible in the early 1990s.
In the mid-1980s, Japanese consumer electronics firms forged ahead with the development of HDTV technology, and the analog MUSE format proposed by Japan's public broadcaster NHK was a candidate for a worldwide standard. Japanese advancements were seen as pacesetters that threatened to eclipse US electronics companies. Until June 1990, the Japanese MUSE standard—based on an analog system—was the front-runner among the more than 23 different technical concepts under consideration.
Between 1988 and 1991, several European organizations were working on DCT-based digital video coding standards for both SDTV and HDTV. The EU 256 project by the CMTT and ETSI, along with research by Italian broadcaster RAI, developed a DCT video codec that broadcast SDTV at 34 Mbit/s and near-studio-quality HDTV at about 70–140 Mbit/s. RAI demonstrated this with a 1990 FIFA World Cup broadcast in March 1990. An American company, General Instrument, also demonstrated the feasibility of a digital television signal in 1990. This led to the FCC being persuaded to delay its decision on an advanced television (ATV) standard until a digitally based standard could be developed.
When it became evident that a digital standard might be achieved in March 1990, the FCC took several important actions. First, the Commission declared that the new TV standard must be more than an enhanced analog signal and must be able to provide a genuine HDTV signal with at least twice the resolution of existing television images. Then, to ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being simulcast on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements.
The FCC's final standard did not produce a universal standard for scanning formats, aspect ratios, or lines of resolution. This outcome resulted from a dispute between the consumer electronics industry (joined by some broadcasters) and the computer industry (joined by the film industry and some public interest groups) over which of the two scanning processes—interlaced or progressive—is superior. Interlaced scanning, which is used in televisions worldwide, scans even-numbered lines first, then odd-numbered ones. Progressive scanning, which is the format used in computers, scans lines in sequence, from top to bottom. The computer industry argued that progressive scanning is superior because it does not flicker in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet and is more cheaply converted to interlaced formats than vice versa. The film industry also supported progressive scanning because it offers a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then (and currently) feasible, i.e., 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not readily compatible with a progressive format.
Inaugural launches
DirecTV in the US launched the first commercial digital satellite platform in May 1994, using the Digital Satellite System (DSS) standard. Digital cable broadcasts were tested and launched in the US in 1996 by TCI and Time Warner. The first digital terrestrial platform was launched in November 1998 as ONdigital in the UK, using the DVB-T standard.
Technical information
Formats and bandwidth
Digital television supports many different picture formats defined by the broadcast television systems which are a combination of size and aspect ratio (width to height ratio).
With digital terrestrial television (DTT) broadcasting, the range of formats can be broadly divided into two categories: high-definition television (HDTV) for the transmission of high-definition video and standard-definition television (SDTV). These terms by themselves are not very precise and many subtle intermediate cases exist.
Among the several different HDTV formats that can be transmitted over DTV are: 1280 × 720 pixels in progressive scan mode (abbreviated 720p) or 1920 × 1080 pixels in interlaced video mode (1080i). Each of these uses a 16:9 aspect ratio. HDTV cannot be transmitted over analog television channels because of channel capacity issues.
SDTV, by comparison, may use one of several different formats taking the form of various aspect ratios depending on the technology used in the country of broadcast. NTSC can deliver a 640 × 480 resolution in 4:3 and 854 × 480 in 16:9, while PAL can give 768 × 576 in 4:3 and 1024 × 576 in 16:9. However, broadcasters may choose to reduce these resolutions to reduce bit rate (e.g., many DVB-T channels in the UK use a horizontal resolution of 544 or 704 pixels per line).
Each commercial broadcasting terrestrial television DTV channel in North America is allocated enough bandwidth to broadcast up to 19 megabits per second. However, the broadcaster does not need to use this entire bandwidth for just one broadcast channel. Instead, the broadcast can use Program and System Information Protocol and subdivide across several video subchannels (a.k.a. feeds) of varying quality and compression rates, including non-video datacasting services.
A broadcaster may opt to use a standard-definition (SDTV) digital signal instead of an HDTV signal, because current convention allows the bandwidth of a DTV channel (or "multiplex") to be subdivided into multiple digital subchannels, (similar to what most FM radio stations offer with HD Radio), providing multiple feeds of entirely different television programming on the same channel. This ability to provide either a single HDTV feed or multiple lower-resolution feeds is often referred to as distributing one's bit budget or multicasting. This can sometimes be arranged automatically, using a statistical multiplexer. With some implementations, image resolution may be less directly limited by bandwidth; for example in DVB-T, broadcasters can choose from several different modulation schemes, giving them the option to reduce the transmission bit rate and make reception easier for more distant or mobile viewers.
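The bit-budget arithmetic is straightforward. A minimal Python sketch (the subchannel rates are assumed round numbers for illustration, not values from any broadcast standard):

    # One terrestrial multiplex carries roughly 19 Mbit/s in North America.
    MUX_CAPACITY = 19.0  # Mbit/s

    subchannels = {"HD main": 12.0, "SD news": 3.0, "SD weather": 2.5, "datacast": 1.0}

    used = sum(subchannels.values())
    print(f"used {used} of {MUX_CAPACITY} Mbit/s, headroom {MUX_CAPACITY - used:.1f}")
    assert used <= MUX_CAPACITY, "bit budget exceeded"

A statistical multiplexer performs a dynamic version of this allocation, shifting bits between subchannels as scene complexity varies.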
Reception
There are several different ways to receive digital television. One of the oldest means of receiving DTV (and TV in general) is from terrestrial transmitters using an antenna (known as an aerial in some countries). This delivery method is known as digital terrestrial television (DTT). With DTT, viewers are limited to channels that have a terrestrial transmitter within range of their antenna.
Other delivery methods include digital cable and digital satellite. In some countries where transmissions of TV signals are normally achieved by microwaves, digital multichannel multipoint distribution service is used. Other standards, such as digital multimedia broadcasting (DMB) and digital video broadcasting - handheld (DVB-H), have been devised to allow handheld devices such as mobile phones to receive TV signals. Another way is Internet Protocol television (IPTV), which is the delivery of TV over a computer network. Finally, an alternative way is to receive digital TV signals via the open Internet (Internet television), whether from a central streaming service or a P2P (peer-to-peer) system.
Some signals are protected by encryption and backed up with the force of law under the WIPO Copyright Treaty and national legislation implementing it, such as the US Digital Millennium Copyright Act. Access to encrypted channels can be controlled by a removable card, for example via the Common Interface or CableCard.
Protection parameters
Digital television signals must not interfere with each other, and they must also coexist with analog television until it is phased out. Regulators therefore specify allowable signal-to-noise and signal-to-interference ratios for various interference scenarios; these protection ratios are a crucial regulatory tool for controlling the placement and power levels of stations. Digital TV is more tolerant of interference than analog TV.
Interaction
People can interact with a DTV system in various ways. One can, for example, browse the electronic program guide. Modern DTV systems sometimes use a return path providing feedback from the end user to the broadcaster. This is possible over cable TV or through an Internet connection but is not possible with a standard antenna alone.
Some of these systems support video on demand using a communication channel localized to a neighborhood rather than a city (terrestrial) or an even larger area (satellite).
1seg
1seg (1-segment) is a special form of ISDB. Each channel is divided into 13 segments: twelve are allocated to HDTV and the remaining one to narrow-band receivers such as mobile televisions and cell phones.
Comparison to analog
DTV has several advantages over analog television, the most significant being that digital channels take up less bandwidth and the bandwidth allocations are flexible depending on the level of compression and resolution of the transmitted image. This means that digital broadcasters can provide more digital channels in the same space, provide high-definition television service, or provide other non-television services such as multimedia or interactivity. DTV also permits special services such as multiplexing (more than one program on the same channel), electronic program guides and additional languages (spoken or subtitled). The sale of non-television services may provide an additional revenue source to broadcasters.
Digital and analog signals react to interference differently. For example, common problems with analog television include ghosting of images, noise from weak signals and other problems that degrade the quality of the image and sound, although the program material may still be watchable. With digital television, because of the cliff effect, reception of the digital signal must be very nearly complete; otherwise, neither audio nor video will be usable.
Analog TV began with monophonic sound and later developed multichannel television sound with two independent audio signal channels. DTV allows up to 5 audio signal channels plus a subwoofer bass channel, producing broadcasts similar in quality to movie theaters and DVDs.
Digital TV signals require less transmission power than analog TV signals to be broadcast and received satisfactorily.
Compression artifacts, picture quality monitoring and allocated bandwidth
DTV images have some picture defects that are not present on analog television or motion picture cinema, because of present-day limitations of bit rate and compression algorithms such as MPEG-2. This defect is sometimes referred to as mosquito noise.
Because of the way the human visual system works, defects in an image that are localized to particular features of the image or that come and go are more perceptible than defects that are uniform and constant. However, the DTV system is designed to take advantage of other limitations of the human visual system to help mask these flaws, e.g., by allowing more compression artifacts during fast motion where the eye cannot track and resolve them as easily and, conversely, minimizing artifacts in still backgrounds that, because time allows, may be closely examined in a scene.
Broadcast, cable, satellite and Internet DTV operators control the picture quality of television signal encoders using sophisticated, neuroscience-based algorithms, such as the structural similarity index measure (SSIM) video quality measurement tool. Another tool called visual information fidelity (VIF), is used in the Netflix VMAF video quality monitoring system.
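As an illustration of such quality metrics, the sketch below computes SSIM between a reference frame and a degraded copy. This is a minimal sketch, assuming scikit-image is available and using synthetic arrays as stand-ins for decoded video frames; it is not the broadcasters' actual tooling.

```python
# A minimal SSIM sketch using scikit-image (assumed available);
# the "frames" are synthetic stand-ins for decoded video frames.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((480, 640))  # stand-in reference frame
degraded = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

score = structural_similarity(reference, degraded,
                              data_range=1.0)  # pixel values span [0, 1]
print(f"SSIM: {score:.3f}  (1.0 = identical; lower = more degradation)")
```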
Quantising effects can create contours—rather than smooth gradations—on areas with small graduations in amplitude. Typically, a very flat scene, such as a cloudless sky, will exhibit visible steps across its expanse, often appearing as concentric circles or ellipses. This is known as color banding. Similar effects can be seen in very dark scenes, where true black backgrounds are overlaid by dark gray areas. These transitions may be smooth, or may show a scattering effect as the digital processing dithers and is unable to consistently allocate a value of either absolute black or the next step up the greyscale.
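A tiny numerical sketch of the effect: quantizing a smooth luminance ramp to a small number of levels (deliberately exaggerated here) turns a gradation into discrete, visible steps.

```python
import numpy as np

gradient = np.linspace(0.0, 1.0, 1920)  # a smooth sky-like ramp across one scanline
banded = np.round(gradient * 15) / 15   # coarse quantization to 16 levels
print(f"{len(np.unique(banded))} discrete steps replace the smooth gradation")
```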
Effects of poor reception
Changes in signal reception from factors such as degrading antenna connections or changing weather conditions may gradually reduce the quality of analog TV. Digital TV, by contrast, delivers a perfectly decodable picture until interference overpowers the desired signal or the signal becomes too weak to decode. Some equipment will show a garbled picture with significant damage, while other devices may go directly from perfectly decodable video to no video at all or lock up. This phenomenon is known as the digital cliff effect.
Block errors may occur when transmission is done with compressed images. A block error in a single frame often results in black boxes in several subsequent frames, making viewing difficult.
For remote locations, distant channels that, as analog signals, were previously usable in a snowy and degraded state may, as digital signals, be perfectly decodable or may become completely unavailable. The use of higher frequencies adds to these problems, especially where a clear line of sight from the receiving antenna to the transmitter is not available, because higher-frequency signals cannot pass through obstacles as easily.
Effect on old analog technology
Television sets with only analog tuners cannot decode digital transmissions. When analog broadcasting over the air ceases, users of sets with analog-only tuners may use other sources of programming (e.g., cable, recorded media) or may purchase set-top converter boxes to tune in the digital signals. In the United States, a government-sponsored coupon was available to offset the cost of an external converter box.
The digital television transition began around the late 1990s and has been completed on a country-by-country basis in most parts of the world.
Disappearance of TV-audio receivers
Prior to the conversion to digital TV, analog television broadcast audio for TV channels on a separate FM carrier signal from the video signal. This FM audio signal could be heard using standard radios equipped with the appropriate tuning circuits.
However, since the digital television transition, no portable radio manufacturer has developed a way for portable radios to play just the audio of digital TV channels; DTV radio is a different technology.
Environmental issues
The adoption of a broadcast standard incompatible with existing analog receivers has created the problem of large numbers of analog receivers being discarded. One superintendent of public works was quoted in 2009 as saying: "some of the studies I've read in the trade magazines say up to a quarter of American households could be throwing a TV out in the next two years following the regulation change." In Michigan in 2009, one recycler estimated that as many as one household in four would dispose of or recycle a TV set in the following year. The digital television transition, the migration to high-definition receivers, and the replacement of CRTs with flat screens are all factors in the increasing number of discarded analog CRT-based television receivers. In 2009, an estimated 99 million analog TV receivers were sitting unused in homes in the US alone, and, while some obsolete receivers are being retrofitted with converters, many more are simply dumped in landfills, where they represent a source of toxic metals such as lead as well as lesser amounts of materials such as barium, cadmium and chromium.
| Technology | Broadcasting | null |
8301 | https://en.wikipedia.org/wiki/Distillation | Distillation | Distillation, also classical distillation, is the process of separating the component substances of a liquid mixture of two or more chemically discrete substances; the separation process is realized by way of the selective boiling of the mixture and the condensation of the vapors in a still.
Distillation can operate over a wide range of pressures, from 0.14 bar (e.g., ethylbenzene/styrene) to nearly 21 bar (e.g., propylene/propane), and is capable of separating feeds with high volumetric flowrates and components that cover a range of relative volatilities from only 1.17 (o-xylene/m-xylene) to 81.2 (water/ethylene glycol). Distillation provides a convenient and time-tested solution for separating a diversity of chemicals in a continuous manner with high purity. However, distillation has an enormous environmental footprint, consuming approximately 25% of all industrial energy. The key issue is that distillation operates through phase changes, a separation mechanism that requires vast energy inputs.
Dry distillation (thermolysis and pyrolysis) is the heating of solid materials to produce gases that condense either into fluid products or into solid products. The term dry distillation includes the separation processes of destructive distillation and of chemical cracking, which break down large hydrocarbon molecules into smaller ones. A partial distillation yields nearly pure components, or it may be used simply to increase the concentration of selected components in the mixture. In either case, the separation exploits differences in the relative volatility of the components of the heated mixture.
In industrial applications, the term distillation denotes a unit operation of physical separation, not a chemical reaction; thus an industrial installation that produces distilled beverages is a distillery. Some applications of the chemical separation process that is distillation include:
Distilling fermented products to yield alcoholic beverages with a high content by volume of ethyl alcohol.
Desalination to produce potable water and for medico-industrial applications.
Crude oil stabilisation, a partial distillation to reduce the vapor pressure of crude oil, which thus is safe to store and to transport, and thereby reduces the volume of atmospheric emissions of volatile hydrocarbons.
Fractional distillation used in the midstream operations of an oil refinery for producing fuels and chemical feedstocks.
Cryogenic air separation into its component gases — oxygen, nitrogen, and argon — for use as industrial gases.
Chemical synthesis to separate impurities and unreacted materials.
History
Iron Age
Early evidence of distillation was found on Akkadian tablets describing perfumery operations. The tablets provide textual evidence that an early, primitive form of distillation was known to the Babylonians of ancient Mesopotamia.
Classical antiquity
Greek and Roman terminology
According to British chemist T. Fairley, neither the Greeks nor the Romans had any term for the modern concept of distillation. Words like "distill" would have referred to something else, in most cases a part of some process unrelated to what is now known as distillation, a point also made by German chemical engineer Norbert Kockmann.
According to Dutch chemical historian Robert J. Forbes, the word distillare (to drip off) when used by the Romans, e.g. Seneca and Pliny the Elder, was "never used in our sense".
Aristotle
Aristotle knew that water condensing from evaporating seawater is fresh.
Letting seawater evaporate and condense into freshwater cannot be called "distillation", because distillation involves boiling, but the experiment may have been an important step towards distillation.
Alexandrian chemists
Early evidence of distillation has been found related to alchemists working in Alexandria in Roman Egypt in the 1st century CE.
Distilled water has been in use since at least c. 200 CE, when Alexander of Aphrodisias described the process. Work on distilling other liquids continued in early Byzantine Egypt under Zosimus of Panopolis in the 3rd century.
Ancient India and China (1–500 CE)
Distillation was practiced in the ancient Indian subcontinent, as is evident from baked clay retorts and receivers found at Taxila, Shaikhan Dheri, and Charsadda in Pakistan and Rang Mahal in India, dating to the early centuries of the Common Era. Frank Raymond Allchin says these terracotta distillation tubes were "made to imitate bamboo". These "Gandhara stills" were only capable of producing very weak liquor, as there was no efficient means of collecting the vapors at low heat.
Distillation in China may have begun at the earliest during the Eastern Han dynasty (1st–2nd century CE).
Islamic Golden Age
Medieval Muslim chemists such as Jābir ibn Ḥayyān (Latin: Geber, ninth century) and Abū Bakr al-Rāzī (Latin: Rhazes) experimented extensively with the distillation of various substances. The fractional distillation of organic substances plays an important role in the works attributed to Jābir, such as in "The Book of Seventy", translated into Latin by Gerard of Cremona. The Jabirian experiments with fractional distillation of animal and vegetable substances, and to a lesser degree also of mineral substances, are the main topic of an originally Arabic work falsely attributed to Avicenna that was translated into Latin and would go on to form the most important alchemical source for Roger Bacon.
The distillation of wine is attested in Arabic works attributed to al-Kindī and to al-Fārābī, and in the 28th book of al-Zahrāwī (Latin: Abulcasis, 936–1013), later translated into Latin. In the twelfth century, recipes for the production of "burning water" (i.e., ethanol) by distilling wine with salt started to appear in a number of Latin works, and by the end of the thirteenth century it had become a widely known substance among Western European chemists. The works of Taddeo Alderotti (1223–1296) describe a method for concentrating alcohol involving repeated distillation through a water-cooled still, by which an alcohol purity of 90% could be obtained.
Medieval China
The distillation of beverages began in the Southern Song (10th–13th century) and Jin (12th–13th century) dynasties, according to archaeological evidence. A still was found in an archaeological site in Qinglong, Hebei province, China, dating back to the 12th century. Distilled beverages were common during the Yuan dynasty (13th–14th century).
Modern era
In 1500, German alchemist Hieronymus Brunschwig published the first book solely dedicated to the subject of distillation, The Book of the Art of Distillation out of Simple Ingredients, followed in 1512 by a much expanded version. Soon after, in 1518, the oldest surviving distillery in Europe, The Green Tree Distillery, was founded.
In 1651, John French published The Art of Distillation, the first major English compendium on the practice, though it has been claimed that much of it derives from Brunschwig's work. The book includes diagrams with people in them, showing the industrial rather than bench scale of the operation.
As alchemy evolved into the science of chemistry, vessels called retorts became used for distillations. Both alembics and retorts are forms of glassware with long necks pointing to the side at a downward angle to act as air-cooled condensers to condense the distillate and let it drip downward for collection. Later, copper alembics were invented. Riveted joints were often kept tight by using various mixtures, for instance a dough made of rye flour. These alembics often featured a cooling system around the beak, using cold water, for instance, which made the condensation of alcohol more efficient. These were called pot stills. Today, the retorts and pot stills have been largely supplanted by more efficient distillation methods in most industrial processes. However, the pot still is still widely used for the elaboration of some fine alcohols, such as cognac, Scotch whisky, Irish whiskey, tequila, rum, cachaça, and some vodkas. Pot stills made of various materials (wood, clay, stainless steel) are also used by bootleggers in various countries. Small pot stills are also sold for use in the domestic production of flower water or essential oils.
Early forms of distillation involved batch processes using one vaporization and one condensation. Purity was improved by further distillation of the condensate. Greater volumes were processed by simply repeating the distillation. Chemists reportedly carried out as many as 500 to 600 distillations in order to obtain a pure compound.
In the early 19th century, the basics of modern techniques, including pre-heating and reflux, were developed. In 1822, Anthony Perrier developed one of the first continuous stills, and then, in 1826, Robert Stein improved that design to make his patent still. In 1830, Aeneas Coffey got a patent for improving the design even further. Coffey's continuous still may be regarded as the archetype of modern petrochemical units. The French engineer Armand Savalle developed his steam regulator around 1846. In 1877, Ernest Solvay was granted a U.S. Patent for a tray column for ammonia distillation, and the same and subsequent years saw developments in this theme for oils and spirits.
With the emergence of chemical engineering as a discipline at the end of the 19th century, scientific rather than empirical methods could be applied. The developing petroleum industry in the early 20th century provided the impetus for the development of accurate design methods, such as the McCabe–Thiele method by Ernest Thiele and the Fenske equation. The first industrial plant in the United States to use distillation as a means of ocean desalination opened in Freeport, Texas in 1961 with the hope of bringing water security to the region.
The availability of powerful computers has allowed direct computer simulations of distillation columns.
Applications
The application of distillation can roughly be divided into four groups: laboratory scale, industrial distillation, distillation of herbs for perfumery and medicinals (herbal distillate), and food processing. The latter two are distinctly different from the former two in that distillation is not used as a true purification method but more to transfer all volatiles from the source materials to the distillate in the processing of beverages and herbs.
The main difference between laboratory-scale distillation and industrial distillation is that laboratory-scale distillation is often performed batchwise, whereas industrial distillation often occurs continuously. In batch distillation, the composition of the source material, the vapors of the distilling compounds, and the distillate change during the distillation. A still is charged (supplied) with a batch of feed mixture, which is then separated into its component fractions, collected sequentially from most to least volatile, with the bottoms (the remaining least volatile or non-volatile fraction) removed at the end. The still can then be recharged and the process repeated.
In continuous distillation, the source materials, vapors, and distillate are kept at a constant composition by carefully replenishing the source material and removing fractions from both vapor and liquid in the system. This allows finer control of the separation process.
Idealized model
The boiling point of a liquid is the temperature at which the vapor pressure of the liquid equals the pressure around the liquid, enabling bubbles to form without being crushed. A special case is the normal boiling point, where the vapor pressure of the liquid equals the ambient atmospheric pressure.
It is a misconception that in a liquid mixture at a given pressure, each component boils at the boiling point corresponding to the given pressure, allowing the vapors of each component to collect separately and purely. However, this does not occur, even in an idealized system. Idealized models of distillation are essentially governed by Raoult's law and Dalton's law and assume that vapor–liquid equilibria are attained.
Raoult's law states that the vapor pressure of a solution is dependent on 1) the vapor pressure of each chemical component in the solution and 2) the fraction of solution each component makes up, a.k.a. the mole fraction. This law applies to ideal solutions, or solutions that have different components but whose molecular interactions are the same as or very similar to pure solutions.
Dalton's law states that the total pressure is the sum of the partial pressures of each individual component in the mixture. When a multi-component liquid is heated, the vapor pressure of each component will rise, thus causing the total vapor pressure to rise. When the total vapor pressure reaches the pressure surrounding the liquid, boiling occurs and liquid turns to gas throughout the bulk of the liquid. A mixture with a given composition has one boiling point at a given pressure when the components are mutually soluble. A mixture of constant composition does not have multiple boiling points.
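A minimal sketch of these two laws in combination: using Antoine vapor-pressure correlations for benzene and toluene (the constants below are common textbook values, taken here as illustrative assumptions), the boiling (bubble) point of an equimolar mixture at 1 atm is the single temperature at which the Raoult/Dalton total vapor pressure reaches 760 mmHg.

```python
import math

# Antoine equation: log10(P/mmHg) = A - B / (C + T/degC)
# Constants are common textbook values, used here for illustration.
ANTOINE = {"benzene": (6.90565, 1211.033, 220.790),
           "toluene": (6.95464, 1344.800, 219.480)}

def p_sat(species, t_c):
    a, b, c = ANTOINE[species]
    return 10 ** (a - b / (c + t_c))

def total_pressure(t_c, x_benzene=0.5):
    # Raoult's law for each component, Dalton's law for the sum
    return (x_benzene * p_sat("benzene", t_c)
            + (1 - x_benzene) * p_sat("toluene", t_c))

# Bisection for the bubble point at 760 mmHg
lo, hi = 80.0, 110.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if total_pressure(mid) < 760.0 else (lo, mid)
print(f"bubble point of equimolar benzene/toluene: ~{lo:.1f} degC")  # ~92 degC
```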
An implication of one boiling point is that lighter components never cleanly "boil first". At the boiling point, all volatile components boil, but a component's percentage in the vapor equals its percentage of the total vapor pressure. Lighter components have a higher partial pressure and thus are concentrated in the vapor, but heavier volatile components also have a (smaller) partial pressure and necessarily vaporize as well, albeit at a lower concentration in the vapor. Indeed, batch distillation and fractionation succeed by varying the composition of the mixture. In batch distillation, the batch vaporizes, which changes its composition; in fractionation, liquid higher in the fractionation column contains more of the light components and boils at lower temperatures. Therefore, a given mixture appears to have a boiling range instead of a boiling point, but only because its composition changes: each intermediate mixture has its own, singular boiling point.
The idealized model is accurate in the case of chemically similar liquids, such as benzene and toluene. In other cases, severe deviations from Raoult's law and Dalton's law are observed, most famously in the mixture of ethanol and water. These compounds, when heated together, form an azeotrope, which is when the vapor phase and liquid phase contain the same composition. Although there are computational methods that can be used to estimate the behavior of a mixture of arbitrary components, the only way to obtain accurate vapor–liquid equilibrium data is by measurement.
It is not possible to completely purify a mixture of components by distillation, as this would require each component in the mixture to have a zero partial pressure. If ultra-pure products are the goal, then further chemical separation must be applied. When a binary mixture is vaporized and the other component, e.g., a salt, has zero partial pressure for practical purposes, the process is simpler.
Batch or differential distillation
Heating an ideal mixture of two volatile substances, A and B, with A having the higher volatility, or lower boiling point, in a batch distillation setup until the mixture boils results in a vapor above the liquid that contains a mixture of A and B. The ratio between A and B in the vapor will be different from the ratio in the liquid. The ratio in the liquid will be determined by how the original mixture was prepared, while the ratio in the vapor will be enriched in the more volatile compound, A (due to Raoult's law, see above). The vapor goes through the condenser and is removed from the system. This, in turn, means that the ratio of compounds in the remaining liquid is now different from the initial ratio (i.e., more enriched in B than in the starting liquid).
The result is that the ratio in the liquid mixture is changing, becoming richer in component B. This causes the boiling point of the mixture to rise, which results in a rise in the temperature in the vapor, which results in a changing ratio of A : B in the gas phase (as distillation continues, there is an increasing proportion of B in the gas phase). This results in a slowly changing ratio of A : B in the distillate.
If the difference in vapour pressure between the two components A and B is large – generally expressed as the difference in boiling points – the mixture in the beginning of the distillation is highly enriched in component A, and when component A has distilled off, the boiling liquid is enriched in component B.
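A small simulation makes this drift concrete. It is a sketch assuming an ideal binary mixture with constant relative volatility α (a simplification), boiling off the batch in small increments and tracking how the liquid becomes enriched in the heavier component B.

```python
# Differential (Rayleigh) batch distillation sketch for an ideal binary
# mixture; alpha and the starting charge are illustrative assumptions.
alpha = 2.5          # relative volatility of A over B (assumed constant)
x = 0.50             # mole fraction of A in the liquid charge
liquid = 1.0         # moles of liquid in the still
step = 0.001         # moles vaporized per increment

while liquid > 0.5:  # boil off half the charge
    y = alpha * x / (1 + (alpha - 1) * x)          # equilibrium vapor composition
    x = (liquid * x - step * y) / (liquid - step)  # component-A mole balance
    liquid -= step

print(f"after boiling off 50%, liquid is only {x:.3f} mole fraction A")
# The residue is enriched in B, so the boiling temperature has risen.
```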
Continuous distillation
Continuous distillation is an ongoing distillation in which a liquid mixture is continuously (without interruption) fed into the process, and separated fractions are removed continuously as output streams over the course of the operation. Continuous distillation produces a minimum of two output fractions, including at least one volatile distillate fraction, which has boiled and been separately captured as a vapor and then condensed to a liquid. There is always a bottoms (or residue) fraction, which is the least volatile residue that has not been separately captured as a condensed vapor.
Continuous distillation differs from batch distillation in the respect that concentrations should not change over time. Continuous distillation can be run at a steady state for an arbitrary amount of time. For any source material of specific composition, the main variables that affect the purity of products in continuous distillation are the reflux ratio and the number of theoretical equilibrium stages, in practice determined by the number of trays or the height of packing. Reflux is a flow from the condenser back to the column, which generates a recycle that allows a better separation with a given number of trays. Equilibrium stages are ideal steps where compositions achieve vapor–liquid equilibrium, repeating the separation process and allowing better separation given a reflux ratio. A column with a high reflux ratio may have fewer stages, but it refluxes a large amount of liquid, giving a wide column with a large holdup. Conversely, a column with a low reflux ratio must have a large number of stages, thus requiring a taller column.
General improvements
Both batch and continuous distillations can be improved by making use of a fractionating column on top of the distillation flask. The column improves separation by providing a larger surface area for the vapor and condensate to come into contact. This helps it remain at equilibrium for as long as possible. The column can even consist of small subsystems ('trays' or 'dishes') which all contain an enriched, boiling liquid mixture, all with their own vapor–liquid equilibrium.
There are differences between laboratory-scale and industrial-scale fractionating columns, but the principles are the same. Examples of laboratory-scale fractionating columns (in increasing efficiency) include:
Air condenser
Vigreux column (usually laboratory scale only)
Packed column (packed with glass beads, metal pieces, or other chemically inert material)
Spinning band distillation system.
Laboratory procedures
Laboratory scale distillations are almost exclusively run as batch distillations. The device used in distillation, sometimes referred to as a still, consists at a minimum of a reboiler or pot in which the source material is heated, a condenser in which the heated vapor is cooled back to the liquid state, and a receiver in which the concentrated or purified liquid, called the distillate, is collected. Several laboratory scale techniques for distillation exist (see also distillation types).
A completely sealed distillation apparatus could experience extreme and rapidly varying internal pressure, which could cause it to burst open at the joints. Therefore, some path is usually left open (for instance, at the receiving flask) to allow the internal pressure to equalize with atmospheric pressure. Alternatively, a vacuum pump may be used to keep the apparatus at a lower than atmospheric pressure. If the substances involved are air- or moisture-sensitive, the connection to the atmosphere can be made through one or more drying tubes packed with materials that scavenge the undesired air components, or through bubblers that provide a movable liquid barrier. Finally, the entry of undesired air components can be prevented by pumping a low but steady flow of suitable inert gas, like nitrogen, into the apparatus.
Simple distillation
In simple distillation, the vapor is immediately channeled into a condenser. Consequently, the distillate is not pure but rather its composition is identical to the composition of the vapors at the given temperature and pressure. That concentration follows Raoult's law.
As a result, simple distillation is effective only when the liquid boiling points differ greatly (rule of thumb is 25 °C) or when separating liquids from non-volatile solids or oils. For these cases, the vapor pressures of the components are usually different enough that the distillate may be sufficiently pure for its intended purpose.
In a typical simple distillation setup, the starting liquid in a boiling flask is heated by a combined hotplate and magnetic stirrer via a silicone oil bath. The vapor flows through a short Vigreux column and then through a Liebig condenser, where it is cooled by circulating water. The condensed liquid drips into a receiving flask sitting in a cooling bath. An adapter with a side connection may be fitted to a vacuum pump, and the components are connected by ground glass joints.
Fractional distillation
For many cases, the boiling points of the components in the mixture will be sufficiently close that Raoult's law must be taken into consideration. Therefore, fractional distillation must be used to separate the components by repeated vaporization-condensation cycles within a packed fractionating column. This separation, by successive distillations, is also referred to as rectification.
As the solution to be purified is heated, its vapors rise to the fractionating column. As it rises, it cools, condensing on the condenser walls and the surfaces of the packing material. Here, the condensate continues to be heated by the rising hot vapors; it vaporizes once more. However, the composition of the fresh vapors is determined once again by Raoult's law. Each vaporization-condensation cycle (called a theoretical plate) will yield a purer solution of the more volatile component. In reality, each cycle at a given temperature does not occur at exactly the same position in the fractionating column; theoretical plate is thus a concept rather than an accurate description.
More theoretical plates lead to better separations. A spinning band distillation system uses a spinning band of Teflon or metal to force the rising vapors into close contact with the descending condensate, increasing the number of theoretical plates.
Steam distillation
Like vacuum distillation, steam distillation is a method for distilling compounds which are heat-sensitive. The temperature of the steam is easier to control than the surface of a heating element and allows a high rate of heat transfer without heating at a very high temperature. This process involves bubbling steam through a heated mixture of the raw material. By Raoult's law, some of the target compound will vaporize (in accordance with its partial pressure). The vapor mixture is cooled and condensed, usually yielding a layer of oil and a layer of water.
Steam distillation of various aromatic herbs and flowers can result in two products: an essential oil as well as a watery herbal distillate. The essential oils are often used in perfumery and aromatherapy while the watery distillates have many applications in aromatherapy, food processing and skin care.
Vacuum distillation
Some compounds have very high boiling points. To boil such compounds, it is often better to lower the pressure at which such compounds are boiled instead of increasing the temperature. Once the pressure is lowered to the vapor pressure of the compound (at the given temperature), boiling and the rest of the distillation process can commence. This technique is referred to as vacuum distillation and it is commonly found in the laboratory in the form of the rotary evaporator.
This technique is also very useful for compounds which boil beyond their decomposition temperature at atmospheric pressure and which would therefore be decomposed by any attempt to boil them under atmospheric pressure.
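The benefit of reduced pressure can be estimated with the Clausius–Clapeyron relation; this is a rough sketch that assumes a constant enthalpy of vaporization, applied here to water at a typical rotary-evaporator pressure.

```python
import math

# Clausius-Clapeyron estimate: 1/T2 = 1/T1 - (R / dHvap) * ln(P2 / P1).
# Treating the enthalpy of vaporization as constant is an approximation.
R = 8.314                    # J/(mol K)
dHvap = 40_700               # J/mol, water near its normal boiling point
T1, P1 = 373.15, 101_325.0   # normal boiling point of water (K, Pa)
P2 = 2_000.0                 # a typical rotary-evaporator pressure (Pa, assumed)

T2 = 1.0 / (1.0 / T1 - (R / dHvap) * math.log(P2 / P1))
print(f"estimated boiling point at {P2/1000:.0f} kPa: {T2 - 273.15:.0f} degC")
# ~14 degC -- hence solvent can be stripped near room temperature under vacuum.
```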
Molecular distillation
Molecular distillation is vacuum distillation below the pressure of 0.01 torr. 0.01 torr is one order of magnitude above high vacuum, where fluids are in the free molecular flow regime, i.e., the mean free path of molecules is comparable to the size of the equipment. The gaseous phase no longer exerts significant pressure on the substance to be evaporated, and consequently, rate of evaporation no longer depends on pressure. That is, because the continuum assumptions of fluid dynamics no longer apply, mass transport is governed by molecular dynamics rather than fluid dynamics. Thus, a short path between the hot surface and the cold surface is necessary, typically by suspending a hot plate covered with a film of feed next to a cold plate with a line of sight in between. Molecular distillation is used industrially for purification of oils.
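The free-molecular claim can be checked with the standard kinetic-theory formula λ = k_BT/(√2·π·d²·p); the molecular diameter below is an assumed, order-of-magnitude value, so the result is only indicative.

```python
import math

# Mean free path of vapor molecules at 0.01 torr, from kinetic theory.
kB = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0             # K (assumed)
d = 4e-10             # m, assumed effective molecular diameter
p = 0.01 * 133.322    # 0.01 torr converted to Pa

mfp = kB * T / (math.sqrt(2) * math.pi * d**2 * p)
print(f"mean free path: {mfp * 1000:.1f} mm")  # a few millimetres
# Comparable to the hot-to-cold plate gap, so molecules travel in nearly
# straight lines from the evaporating film to the condenser.
```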
Short path distillation
Short path distillation is a distillation technique that involves the distillate travelling a short distance, often only a few centimeters, and is normally done at reduced pressure. A classic example would be a distillation involving the distillate travelling from one glass bulb to another, without the need for a condenser separating the two chambers. This technique is often used for compounds which are unstable at high temperatures or to purify small amounts of compound. The advantage is that the heating temperature can be considerably lower (at reduced pressure) than the boiling point of the liquid at standard pressure, and the distillate only has to travel a short distance before condensing. A short path ensures that little compound is lost on the sides of the apparatus. The Kugelrohr is a kind of short-path distillation apparatus which often contains multiple chambers to collect distillate fractions.
Air-sensitive vacuum distillation
Some compounds have high boiling points as well as being air sensitive. A simple vacuum distillation system as exemplified above can be used, whereby the vacuum is replaced with an inert gas after the distillation is complete. However, this is a less satisfactory system if one desires to collect fractions under a reduced pressure. To do this a "cow" or "pig" adaptor can be added to the end of the condenser, or for better results or for very air sensitive compounds a Perkin triangle apparatus can be used.
The Perkin triangle uses a series of glass or Teflon taps to allow fractions to be isolated from the rest of the still without removing the main body of the distillation from either the vacuum or the heat source, so that it can remain at reflux. To do this, the sample is first isolated from the vacuum by means of the taps; the vacuum over the sample is then replaced with an inert gas (such as nitrogen or argon), and the receiving vessel can be stoppered and removed. A fresh collection vessel can then be added to the system, evacuated, and linked back into the distillation system via the taps to collect a second fraction, and so on, until all fractions have been collected.
Zone distillation
Zone distillation is carried out in a long container: the material to be refined is partially melted in a moving liquid zone, and the vapor condenses as a solid as the condensate is drawn into a cold area. The process has been described theoretically. When the zone heater moves from the top to the bottom of the container, a solid condensate with an irregular impurity distribution forms; the purest part of the condensate can then be extracted as product. The process may be iterated many times by moving the received condensate (without turning it over) to the bottom of the container in place of the refined matter. The irregularity of the impurity distribution in the condensate (that is, the efficiency of purification) increases with the number of iterations.
Zone distillation is the distillation analog of zone recrystallization. The impurity distribution in the condensate is described by the known equations of zone recrystallization, with the distribution coefficient k of crystallization replaced by the separation factor α of distillation.
Closed-system vacuum distillation (cryovap)
Non-condensable gas can be expelled from the apparatus by the vapor of a relatively volatile co-solvent, which spontaneously evaporates during initial pumping; this can be achieved with a regular oil or diaphragm pump.
Other types
The process of reactive distillation involves using the reaction vessel as the still. In this process, the product is usually significantly lower boiling than its reactants. As the product is formed from the reactants, it is vaporized and removed from the reaction mixture. This technique is an example of a continuous vs. a batch process; advantages include less downtime to charge the reaction vessel with starting material, and less workup. Distillation "over a reactant" could be classified as a reactive distillation. It is typically used to remove volatile impurity from the distillation feed. For example, a little lime may be added to remove carbon dioxide from water followed by a second distillation with a little sulfuric acid added to remove traces of ammonia.
Catalytic distillation is the process by which the reactants are catalyzed while being distilled to continuously separate the products from the reactants. This method is used to assist equilibrium reactions in reaching completion.
Pervaporation is a method for the separation of mixtures of liquids by partial vaporization through a non-porous membrane.
Extractive distillation is defined as distillation in the presence of a miscible, high boiling, relatively non-volatile component, the solvent, that forms no azeotrope with the other components in the mixture.
Flash evaporation (or partial evaporation) is the partial vaporization that occurs when a saturated liquid stream undergoes a reduction in pressure by passing through a throttling valve or other throttling device. This process is one of the simplest unit operations, being equivalent to a distillation with only one equilibrium stage.
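Since a flash is a single equilibrium stage, it can be solved with the classic Rachford–Rice equation. The sketch below, placed here as an aside to the item above, finds the vaporized fraction for a hypothetical binary feed with assumed equilibrium K-values.

```python
# Rachford-Rice flash sketch: solve sum(z_i*(K_i - 1)/(1 + V*(K_i - 1))) = 0
# for the vapor fraction V. Feed composition and K-values are illustrative.
z = [0.5, 0.5]   # feed mole fractions
K = [2.0, 0.5]   # equilibrium ratios y_i/x_i at the flash T and P (assumed)

def rachford_rice(v):
    return sum(zi * (ki - 1) / (1 + v * (ki - 1)) for zi, ki in zip(z, K))

lo, hi = 1e-9, 1 - 1e-9          # the function is decreasing in V on (0, 1)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if rachford_rice(mid) > 0 else (lo, mid)

v = (lo + hi) / 2
x = [zi / (1 + v * (ki - 1)) for zi, ki in zip(z, K)]  # liquid compositions
y = [ki * xi for ki, xi in zip(K, x)]                  # vapor compositions
print(f"vapor fraction V = {v:.3f}, liquid x = {x}, vapor y = {y}")
```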
Codistillation is distillation which is performed on mixtures in which the two compounds are not miscible. In the laboratory, the Dean-Stark apparatus is used for this purpose to remove water from synthesis products. The Bleidner apparatus is another example with two refluxing solvents.
Membrane distillation is a type of distillation in which vapors of a mixture to be separated are passed through a membrane, which selectively permeates one component of mixture. Vapor pressure difference is the driving force. It has potential applications in seawater desalination and in removal of organic and inorganic components.
The unit process of evaporation may also be called "distillation":
In rotary evaporation a vacuum distillation apparatus is used to remove bulk solvents from a sample. Typically the vacuum is generated by a water aspirator or a membrane pump.
A Kugelrohr is a short-path distillation apparatus typically used (generally in combination with a (high) vacuum) to distill high-boiling (> 300 °C) compounds. The apparatus consists of an oven in which the compound to be distilled is placed, a receiving portion outside of the oven, and a means of rotating the sample. The vacuum is normally generated by using a high-vacuum pump.
Other uses:
Dry distillation or destructive distillation, despite the name, is not truly distillation, but rather a chemical reaction known as pyrolysis in which solid substances are heated in an inert or reducing atmosphere and any volatile fractions, containing high-boiling liquids and products of pyrolysis, are collected. The destructive distillation of wood to give methanol is the root of its common name – wood alcohol.
Freeze distillation is an analogous method of purification using freezing instead of evaporation. It is not truly distillation but a recrystallization in which the product is the mother liquor, and it does not produce products equivalent to distillation. This process is used in the production of ice beer and ice wine to increase ethanol and sugar content, respectively. It is also used to produce applejack. Unlike distillation, freeze distillation concentrates poisonous congeners rather than removing them; as a result, many countries prohibit such applejack as a health measure. Distillation by evaporation, by contrast, can separate these congeners, since they have different boiling points.
Distillation by filtration: In early alchemy and chemistry (then called natural philosophy), a form of "distillation" by capillary filtration was recognized as distillation at the time. A series of cups or bowls was set upon a stepped support with a "wick" of cotton or felt-like material wetted with water or another clear liquid; the liquid dripped down through the wetted cloth by capillary action from step to step, leaving solid materials behind in the upper bowls and "purifying" the product at each succeeding step. Practitioners called this "distillatio" by filtration.
Azeotropic process
Interactions between the components of the solution create properties unique to the solution, as most processes entail non-ideal mixtures, where Raoult's law does not hold. Such interactions can result in a constant-boiling azeotrope which behaves as if it were a pure compound (i.e., boils at a single temperature instead of a range). At an azeotrope, the solution contains the given component in the same proportion as the vapor, so that evaporation does not change the purity, and distillation does not result in separation. For example, 95.6% ethanol (by mass) in water forms an azeotrope at 78.1 °C.
If the azeotrope is not considered sufficiently pure for use, there exist some techniques to break the azeotrope to give a more pure distillate. These techniques are known as azeotropic distillation. Some techniques achieve this by "jumping" over the azeotropic composition (by adding another component to create a new azeotrope, or by varying the pressure). Others work by chemically or physically removing or sequestering the impurity. For example, to purify ethanol beyond 95%, a drying agent (or desiccant, such as potassium carbonate) can be added to convert the soluble water into insoluble water of crystallization. Molecular sieves are often used for this purpose as well.
Immiscible liquids, such as water and toluene, easily form azeotropes. Commonly, these azeotropes are referred to as low-boiling azeotropes because the boiling point of the azeotrope is lower than the boiling point of either pure component. The temperature and composition of the azeotrope are easily predicted from the vapor pressures of the pure components, without use of Raoult's law. The azeotrope is easily broken in a distillation set-up by using a liquid–liquid separator (a decanter) to separate the two liquid layers that are condensed overhead. Only one of the two liquid layers is refluxed to the distillation set-up.
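This prediction is simple because each immiscible layer exerts its full pure-component vapor pressure: the pair boils at the temperature where the two pure vapor pressures sum to the ambient pressure. A sketch for water/toluene follows, with Antoine constants taken as illustrative textbook values.

```python
# Co-boiling point of immiscible water/toluene at 760 mmHg:
# find T where P_water(T) + P_toluene(T) = 760.
ANTOINE = {"water": (8.07131, 1730.630, 233.426),
           "toluene": (6.95464, 1344.800, 219.480)}

def p_sat(species, t_c):
    a, b, c = ANTOINE[species]
    return 10 ** (a - b / (c + t_c))

lo, hi = 50.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    total = p_sat("water", mid) + p_sat("toluene", mid)
    lo, hi = (mid, hi) if total < 760.0 else (lo, mid)

t = (lo + hi) / 2
y_toluene = p_sat("toluene", t) / 760.0
print(f"co-boiling point: ~{t:.1f} degC, vapor ~{y_toluene:.0%} toluene (mole)")
# Below the boiling point of either pure liquid, as the text notes.
```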
High boiling azeotropes, such as a 20 percent by weight mixture of hydrochloric acid in water, also exist. As implied by the name, the boiling point of the azeotrope is greater than the boiling point of either pure component.
Breaking an azeotrope with unidirectional pressure manipulation
The boiling points of components in an azeotrope overlap to form a band. By exposing an azeotrope to a vacuum or positive pressure, it is possible to bias the boiling point of one component away from the other by exploiting the differing vapor pressure curves of each; the curves may overlap at the azeotropic point, but are unlikely to remain identical further along the pressure axis to either side of the azeotropic point. When the bias is great enough, the two boiling points no longer overlap and so the azeotropic band disappears.
This method can remove the need to add other chemicals to a distillation, but it has two potential drawbacks.
Under negative pressure, power for a vacuum source is needed, and the reduced boiling points of the distillates require that the condenser be run cooler to prevent distillate vapors from being lost to the vacuum source. Increased cooling demands will often require additional energy and possibly new equipment or a change of coolant.
Alternatively, if positive pressures are required, standard glassware cannot be used, energy must be used for pressurization, and there is a higher chance of side reactions occurring in the distillation, such as decomposition, due to the higher temperatures required to effect boiling.
A unidirectional distillation will rely on a pressure change in one direction, either positive or negative.
Pressure-swing distillation
Pressure-swing distillation is essentially the same as the unidirectional distillation used to break azeotropic mixtures, but here both positive and negative pressures may be employed.
This improves the selectivity of the distillation and allows a chemist to optimize distillation by avoiding extremes of pressure and temperature that waste energy. This is particularly important in commercial applications.
One example of the application of pressure-swing distillation is during the industrial purification of ethyl acetate after its catalytic synthesis from ethanol.
Industrial process
Large scale industrial distillation applications include both batch and continuous fractional, vacuum, azeotropic, extractive, and steam distillation. The most widely used industrial applications of continuous, steady-state fractional distillation are in petroleum refineries, petrochemical and chemical plants and natural gas processing plants.
To control and optimize such industrial distillation, a standardized laboratory method, ASTM D86, has been established. This test method covers the atmospheric distillation of petroleum products, using a laboratory batch distillation unit to quantitatively determine the boiling range characteristics of petroleum products.
Industrial distillation is typically performed in large, vertical cylindrical columns known as distillation towers or distillation columns, whose diameters and heights vary widely with throughput and the required separation. When the process feed has a diverse composition, as in distilling crude oil, liquid outlets at intervals up the column allow for the withdrawal of different fractions or products having different boiling points or boiling ranges. The "lightest" products (those with the lowest boiling point) exit from the top of the column, and the "heaviest" products (those with the highest boiling point) exit from the bottom and are often called the bottoms.
Industrial towers use reflux to achieve a more complete separation of products. Reflux refers to the portion of the condensed overhead liquid product from a distillation or fractionation tower that is returned to the upper part of the tower as shown in the schematic diagram of a typical, large-scale industrial distillation tower. Inside the tower, the downflowing reflux liquid provides cooling and condensation of the upflowing vapors thereby increasing the efficiency of the distillation tower. The more reflux that is provided for a given number of theoretical plates, the better the tower's separation of lower boiling materials from higher boiling materials. Alternatively, the more reflux that is provided for a given desired separation, the fewer the number of theoretical plates required. Chemical engineers must choose what combination of reflux rate and number of plates is both economically and physically feasible for the products purified in the distillation column.
Such industrial fractionating towers are also used in cryogenic air separation, producing liquid oxygen, liquid nitrogen, and high purity argon. Distillation of chlorosilanes also enables the production of high-purity silicon for use as a semiconductor.
Design and operation of a distillation tower depends on the feed and desired products. Given a simple, binary component feed, analytical methods such as the McCabe–Thiele method or the Fenske equation can be used. For a multi-component feed, simulation models are used both for design and operation. Moreover, the efficiencies of the vapor–liquid contact devices (referred to as "plates" or "trays") used in distillation towers are typically lower than that of a theoretical 100% efficient equilibrium stage. Hence, a distillation tower needs more trays than the number of theoretical vapor–liquid equilibrium stages. A variety of models have been postulated to estimate tray efficiencies.
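As a small example of such a shortcut method, the sketch below evaluates the Fenske equation for the minimum number of theoretical stages at total reflux in a binary column; the compositions and relative volatility are illustrative assumptions, not design values.

```python
import math

# Fenske equation: N_min = ln[(xD/(1-xD)) * ((1-xB)/xB)] / ln(alpha)
# for a binary separation at total reflux. Numbers are illustrative.
x_distillate = 0.95   # mole fraction of light component in the distillate
x_bottoms = 0.05      # mole fraction of light component in the bottoms
alpha = 2.5           # average relative volatility across the column (assumed)

separation = (x_distillate / (1 - x_distillate)) * ((1 - x_bottoms) / x_bottoms)
n_min = math.log(separation) / math.log(alpha)
print(f"minimum theoretical stages at total reflux: {n_min:.1f}")  # ~6.4
# A real column needs more stages, since it runs at a finite reflux ratio
# and its trays are less than 100% efficient.
```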
In modern industrial uses, a packing material is used in the column instead of trays when low pressure drops across the column are required. Other factors that favor packing are: vacuum systems, smaller diameter columns, corrosive systems, systems prone to foaming, systems requiring low liquid holdup, and batch distillation. Conversely, factors that favor plate columns are: presence of solids in feed, high liquid rates, large column diameters, complex columns, columns with wide feed composition variation, columns with a chemical reaction, absorption columns, columns limited by foundation weight tolerance, low liquid rate, large turn-down ratio and those processes subject to process surges.
This packing material can either be random or dumped packing (such as Raschig rings) or structured sheet metal. Liquids tend to wet the surface of the packing, and the vapors pass across this wetted surface, where mass transfer takes place. Unlike conventional tray distillation, in which every tray represents a separate point of vapor–liquid equilibrium, the vapor–liquid equilibrium curve in a packed column is continuous. However, when modeling packed columns, it is useful to compute a number of "theoretical stages" to denote the separation efficiency of the packed column with respect to more traditional trays. Differently shaped packings have different surface areas and void space between packings. Both these factors affect packing performance.
Another factor, in addition to the packing shape and surface area, that affects the performance of random or structured packing is the liquid and vapor distribution entering the packed bed. The number of theoretical stages required to make a given separation is calculated using a specific vapor-to-liquid ratio. If the liquid and vapor are not evenly distributed across the superficial tower area as they enter the packed bed, the liquid-to-vapor ratio will not be correct in the packed bed and the required separation will not be achieved. The packing will appear to not be working properly, and the height equivalent to a theoretical plate (HETP) will be greater than expected. The problem is not the packing itself but the mal-distribution of the fluids entering the packed bed. Liquid mal-distribution is more frequently the problem than vapor. The design of the liquid distributors used to introduce the feed and reflux to a packed bed is critical to making the packing perform to its maximum efficiency. Methods of evaluating the effectiveness of a liquid distributor in evenly distributing the liquid entering a packed bed can be found in the literature. Considerable work has been done on this topic by Fractionation Research, Inc. (commonly known as FRI).
Multi-effect distillation
The goal of multi-effect distillation is to increase the energy efficiency of the process, for use in desalination or, in some cases, as one stage in the production of ultrapure water. The energy required per cubic metre of recovered water is inversely proportional to the number of effects; a single effect requires roughly 636 kW·h/m3 (an idealized scaling sketched after this list):
Multi-stage flash distillation can achieve more than 20 effects with thermal energy input, as mentioned in the article.
Vapor compression evaporation – Commercial large-scale units can achieve around 72 effects with electrical energy input, according to manufacturers.
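Under the idealized inverse proportionality stated above, the energy per cubic metre can be sketched as follows; real plants fall short of this ideal because of thermal losses, so treat the figures as rough bounds.

```python
# Idealized multi-effect energy estimate: energy per m3 scales as
# (single-effect figure) / (number of effects). Losses are ignored.
def energy_kwh_per_m3(effects, single_effect=636.0):
    return single_effect / effects

for n in (1, 20, 72):  # single effect, typical MSF, large vapor-compression unit
    print(f"{n:>3} effects: ~{energy_kwh_per_m3(n):6.1f} kWh/m3")
```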
There are many other types of multi-effect distillation processes, including one referred to as simply multi-effect distillation (MED), in which multiple chambers, with intervening heat exchangers, are employed.
In food processing
Beverages
Carbohydrate-containing plant materials are allowed to ferment, producing a dilute solution of ethanol in the process. Spirits such as whiskey and rum are prepared by distilling these dilute solutions of ethanol. Components other than ethanol, including water, esters, and other alcohols, are collected in the condensate and account for the flavor of the beverage. Some of these beverages are then stored in barrels or other containers to acquire more flavor compounds and characteristic flavors.
| Physical sciences | Separation processes | null |
8303 | https://en.wikipedia.org/wiki/Down%20syndrome | Down syndrome | Down syndrome or Down's syndrome, also known as trisomy 21, is a genetic disorder caused by the presence of all or part of a third copy of chromosome 21. It is usually associated with developmental delays, mild to moderate intellectual disability, and characteristic physical features.
The parents of the affected individual are usually genetically normal. The incidence of the syndrome increases with the age of the mother, from less than 0.1% for 20-year-old mothers to 3% for those of age 45. It is believed to occur by chance, with no known behavioral activity or environmental factor that changes the probability. Usually, babies get 23 chromosomes from each parent for a total of 46, whereas in Down syndrome, a third 21st chromosome is attached. The extra chromosome is provided at conception as the egg and sperm combine. In 1–2% of cases, the additional chromosome is added in the embryo stage and only impacts some of the cells in the body; this is known as Mosaic Down syndrome. Translocation Down syndrome is another rare type. Down syndrome can be identified during pregnancy by prenatal screening, followed by diagnostic testing, or after birth by direct observation and genetic testing. Since the introduction of screening, Down syndrome pregnancies are often aborted (rates varying from 50 to 85% depending on maternal age, gestational age, and maternal race/ethnicity).
There is no cure for Down syndrome. Education and proper care have been shown to provide better quality of life. Some children with Down syndrome are educated in typical school classes, while others require more specialized education. Some individuals with Down syndrome graduate from high school, and a few attend post-secondary education. In adulthood, about 20% in the United States do some paid work, with many requiring a sheltered work environment. Caretaker support in financial and legal matters is often needed. Life expectancy is around 50 to 60 years in the developed world, with proper health care. Regular screening for health issues common in Down syndrome is recommended throughout the person's life.
Down syndrome is the most common chromosomal abnormality, occurring in about 1 in 1,000 babies born worldwide, and one in 700 in the US. In 2015, there were 5.4 million people with Down syndrome globally, of whom 27,000 died, down from 43,000 deaths in 1990. The syndrome is named after British physician John Langdon Down, who fully described it in 1866. Some aspects were described earlier by French psychiatrist Jean-Étienne Dominique Esquirol in 1838 and French physician Édouard Séguin in 1844. The genetic cause was discovered in 1959.
Signs and symptoms
Those with Down syndrome nearly always have physical and intellectual disabilities. As adults, their mental abilities are typically similar to those of an 8- or 9-year-old. At the same time, their emotional and social awareness is very high. They can have poor immune function and generally reach developmental milestones at a later age. They have an increased risk of a number of health concerns, such as congenital heart defect, epilepsy, leukemia, and thyroid diseases.
Physical
People with Down syndrome may have these physical characteristics: a small chin, epicanthic folds, low muscle tone, a flat nasal bridge, and a protruding tongue. A protruding tongue is caused by low tone and weak facial muscles, and is often corrected with myofunctional exercises. Some characteristic airway features can lead to obstructive sleep apnea in around half of those with Down syndrome. Other common features include: excessive joint flexibility, extra space between the big toe and second toe, a single crease of the palm, and short fingers.
Instability of the atlantoaxial joint occurs in about 1–2%. Atlantoaxial instability may cause myelopathy due to cervical spinal cord compression later in life; this often manifests as new-onset weakness, problems with coordination, bowel or bladder incontinence, and gait dysfunction. Serial imaging cannot reliably predict future cervical cord compression, but changes can be seen on neurological exam. The condition is corrected with spine surgery.
Growth in height is slower, resulting in adults who tend to have short stature. Individuals with Down syndrome are at increased risk for obesity as they age due to hypothyroidism, other medical issues, and lifestyle. Growth charts have been developed specifically for children with Down syndrome.
Neurological
This syndrome causes about a third of cases of intellectual disability. Many developmental milestones are delayed, with the ability to crawl typically occurring around 8–22 months rather than 6–12 months, and the ability to walk independently typically occurring around 1–4 years rather than 9–18 months. Walking is acquired in 50% of children after 24 months.
Most individuals with Down syndrome have mild (IQ: 50–69) or moderate (IQ: 35–50) intellectual disability, with some cases having severe (IQ: 20–35) difficulties. Those with mosaic Down syndrome typically have IQ scores 10–30 points higher than those with full trisomy 21. As they age, the gap tends to widen between people with Down syndrome and their same-age peers.
Commonly, individuals with Down syndrome have better language understanding than ability to speak. Babbling typically emerges around 15 months on average. 10–45% of those with Down syndrome have either a stutter or rapid and irregular speech, making it difficult to understand them. After reaching 30 years of age, some may lose their ability to speak.
They typically do fairly well with social skills. Behavior problems are not generally as great an issue as in other syndromes associated with intellectual disability. In children with Down syndrome, mental illness occurs in nearly 30%, with autism occurring in 5–10%. People with Down syndrome experience a wide range of emotions. While people with Down syndrome are generally happy, symptoms of depression and anxiety may develop in early adulthood.
Children and adults with Down syndrome are at increased risk of epileptic seizures, which occur in 5–10% of children and up to 50% of adults. This includes an increased risk of a specific type of seizure called infantile spasms. Many (15%) who live 40 years or longer develop Alzheimer's disease. In those who reach 60 years of age, 50–70% have the disease.
Down syndrome regression disorder is a sudden regression with neuropsychiatric symptoms such as catatonia, possibly caused by an autoimmune disease. It primarily appears in teenagers and younger adults.
Senses
Hearing and vision disorders occur in more than half of people with Down syndrome.
Ocular findings
Brushfield spots (small white or grayish-brown spots on the periphery of the iris), upward-slanting palpebral fissures (the opening between the upper and lower eyelids), and epicanthal folds (folds of skin between the upper eyelid and the nose) are clinical signs at birth suggesting the diagnosis of Down syndrome, especially in the Western world. None of these requires treatment.
Visually significant congenital cataracts (clouding of the lens of the eye) occur more frequently with Down syndrome. Neonates with Down syndrome should be screened for cataract because early recognition and referral reduce the risk of vision loss from amblyopia. Dot-like opacities in the cortex of the lens (cerulean cataract) are present in up to 50% of people with Down syndrome, but may be followed without treatment if they are not visually significant.
Strabismus, nystagmus and nasolacrimal duct obstruction occur more frequently in children with Down syndrome. Screening for these diagnoses should begin within six months of birth. Strabismus is more often acquired than congenital. Early diagnosis and treatment of strabismus reduces the risk of vision loss from amblyopia. In Down syndrome, the presence of epicanthal folds may give the false impression of strabismus, referred to as pseudostrabismus. Nasolacrimal duct obstruction, which causes tearing (epiphora), is more frequently bilateral and multifactorial than in children without Down syndrome.
Refractive error is more common with Down syndrome, though the rate may not differ from that of children without Down syndrome until after twelve months of age. Early screening is recommended to identify and treat significant refractive error with glasses or contact lenses. Poor accommodation (the ability to focus on close objects) is associated with Down syndrome, which may mean bifocals are indicated.
In keratoconus, the cornea progressively thins and bulges into a cone shape, causing visual blurring or distortion. Keratoconus first presents in the teen years and progresses into the thirties. Down syndrome is a strong risk factor for developing keratoconus, and onset may occur at a younger age than in those without Down syndrome. Eye rubbing is also a risk factor for developing keratoconus. It is speculated that chronic eye irritation from blepharitis may increase eye rubbing in Down syndrome, contributing to the increased prevalence of keratoconus.
An association between glaucoma and Down syndrome is often cited. Glaucoma in children with Down syndrome is uncommon, with a prevalence of less than 1%. It is currently unclear if the prevalence of glaucoma in those with Down syndrome differs from that in the absence of Down syndrome.
Estimates of the prevalence of ocular findings in Down syndrome vary widely depending on the study. Some prevalence estimates follow. Vision problems have been observed in 38–80% of cases. Brushfield spots are present in 38–85% of individuals. Between 20 and 50% have strabismus. Cataracts occur in 15%, and may be present at birth. Keratoconus may occur in as many as 21–30%.
Hearing loss
Hearing problems are found in 50–90% of children with Down syndrome. This is often the result of otitis media with effusion which occurs in 50–70% and chronic ear infections which occur in 40–60%. Ear infections often begin in the first year of life and are partly due to poor eustachian tube function. Excessive ear wax can also cause hearing loss due to obstruction of the outer ear canal. Even a mild degree of hearing loss can have negative consequences for speech, language understanding, and academics. It is important to rule out hearing loss as a factor in social and cognitive deterioration. Age-related hearing loss of the sensorineural type occurs at a much earlier age and affects 10–70% of people with Down syndrome.
Heart
The rate of congenital heart disease in newborns with Down syndrome is around 40%. Of those with heart disease, about 80% have an atrial septal defect or ventricular septal defect, with the former being more common. Congenital heart disease can also put individuals at a higher risk of pulmonary hypertension, where arteries in the lungs narrow and cause inadequate blood oxygenation. Some of the genetic contributions to pulmonary hypertension in individuals with Down syndrome are abnormal lung development, endothelial dysfunction, and proinflammatory genes. Mitral valve problems become common as people age, even in those without heart problems at birth. Other problems that may occur include tetralogy of Fallot and patent ductus arteriosus. People with Down syndrome have a lower risk of hardening of the arteries.
Cancer
Although the overall risk of cancer in Down syndrome is not changed, the risk of testicular cancer and of certain blood cancers, including acute lymphoblastic leukemia (ALL) and acute megakaryoblastic leukemia (AMKL), is increased, while the risk of other non-blood cancers is decreased. People with Down syndrome are believed to have an increased risk of developing cancers derived from germ cells, whether these cancers are blood- or non-blood-related. In 2008, the World Health Organization (WHO) introduced a distinct classification for myeloid proliferation in individuals with Down syndrome.
Blood cancers
Leukemia is 10 to 15 times more common in children with Down syndrome. In particular, acute lymphoblastic leukemia is 20 times more common, and the megakaryoblastic form of acute myeloid leukemia (acute megakaryoblastic leukemia) is 500 times more common. Acute megakaryoblastic leukemia (AMKL) is a leukemia of megakaryoblasts, the precursor cells to the megakaryocytes which form blood platelets. Acute lymphoblastic leukemia in Down syndrome accounts for 1–3% of all childhood cases of ALL. It occurs most often in those older than nine years or those with a white blood cell count greater than 50,000 per microliter, and is rare in those younger than one year old. ALL in Down syndrome tends to have poorer outcomes than ALL in people without Down syndrome.
In Down syndrome, myeloid leukemia is typically preceded by transient abnormal myelopoiesis (TAM), also called transient myeloproliferative disease (TMD), a disorder of blood cell production in which non-cancerous megakaryoblasts with a mutation in the GATA1 gene rapidly divide during the later period of pregnancy, generally disrupting the differentiation of megakaryocytes and erythrocytes. GATA1 mutations combined with trisomy 21 contribute to a predisposition to TAM. In trisomy 21, the process of leukemogenesis starts in early fetal life, with genetic factors, including GATA1 mutations, contributing to the development of TAM on the preleukemic pathway. The condition affects 3–10% of babies with Down syndrome. While it often spontaneously resolves within three months of birth, it can cause serious blood, liver, or other complications. In about 10% of cases, TMD progresses to AMKL during the three months to five years following its resolution.
Non-blood cancers
People with Down syndrome have a lower risk of all major solid cancers, including those of lung, breast, and cervix, with the lowest relative rates occurring in those aged 50 years or older. This low risk is thought to be due to an increase in the expression of tumor suppressor genes present on chromosome 21. One exception is testicular germ cell cancer which occurs at a higher rate in Down syndrome.
Endocrine
Problems of the thyroid gland occur in 20–50% of individuals with Down syndrome. Low thyroid is the most common form, occurring in almost half of all individuals. Thyroid problems can be due to a poorly or nonfunctioning thyroid at birth (known as congenital hypothyroidism) which occurs in 1% or can develop later due to an attack on the thyroid by the immune system resulting in Graves' disease or autoimmune hypothyroidism. Type 1 diabetes mellitus is also more common.
Gastrointestinal
Constipation occurs in nearly half of people with Down syndrome and may result in changes in behavior. One potential cause is Hirschsprung's disease, occurring in 2–15%, which is due to a lack of nerve cells controlling the colon. Other congenital problems can include duodenal atresia, imperforate anus and gastroesophageal reflux disease. Celiac disease affects about 7–20%.
Teeth
People with Down syndrome tend to be more susceptible to gingivitis as well as early, severe periodontal disease, necrotising ulcerative gingivitis, and early tooth loss, especially in the lower front teeth. While plaque and poor oral hygiene are contributing factors, the severity of these periodontal diseases cannot be explained solely by external factors. Research suggests that the severity is likely a result of a weakened immune system. The weakened immune system also contributes to increased incidence of yeast infections in the mouth (from Candida albicans).
People with Down syndrome also tend to have a more alkaline saliva resulting in a greater resistance to tooth decay, despite decreased quantities of saliva, less effective oral hygiene habits, and higher plaque indexes.
Higher rates of tooth wear and bruxism are also common. Other common oral manifestations of Down syndrome include enlarged hypotonic tongue, crusted and hypotonic lips, mouth breathing, narrow palate with crowded teeth, class III malocclusion with an underdeveloped maxilla and posterior crossbite, delayed exfoliation of baby teeth and delayed eruption of adult teeth, shorter roots on teeth, and often missing and malformed (usually smaller) teeth. Less common manifestations include cleft lip and palate and enamel hypocalcification (20% prevalence).
Taurodontism, an elongation of the pulp chamber, has a high prevalence in people with Down syndrome.
Fertility
Males with Down syndrome usually do not father children, while females have lower rates of fertility relative to those who are unaffected. Fertility is estimated to be present in 30–50% of females. Menopause usually occurs at an earlier age. The poor fertility in males is thought to be due to problems with sperm development; however, it may also be related to not being sexually active. As of 2006, three instances of males with Down syndrome fathering children and 26 cases of females having children have been reported. Without assisted reproductive technologies, around half of the children of someone with Down syndrome will also have the syndrome.
Cause
The cause of the extra full or partial chromosome is still unknown. Most of the time, Down syndrome is caused by a random error in cell division during early development of the fetus and is not inherited; no scientific research shows that environmental factors or the parents' activities contribute to Down syndrome. The only factor that has been linked to an increased chance of having a baby with Down syndrome is advanced parental age. This is mostly associated with advanced maternal age, but about 10% of cases are associated with advanced paternal age.
Down syndrome is caused by having three copies of the genes on chromosome 21, rather than the usual two. The parents of the affected individual are typically genetically normal. Those who have one child with Down syndrome have about a 1% possibility of having a second child with the syndrome, if both parents are found to have normal karyotypes.
The extra chromosome content can arise in several different ways. The most common cause (about 92–95% of cases) is a complete extra copy of chromosome 21, resulting in trisomy 21. In 1–2.5% of cases, some of the cells in the body are normal and others have trisomy 21, known as mosaic Down syndrome. The other mechanisms that can give rise to Down syndrome include a Robertsonian translocation, an isochromosome, or a ring chromosome; these contain additional material from chromosome 21 and occur in about 2.5% of cases. An isochromosome results when the two long arms of a chromosome separate together, rather than the long arm and short arm separating together, during egg or sperm development.
Trisomy 21
Down syndrome (also known by the karyotype 47,XX,+21 for females and 47,XY,+21 for males) is mostly caused by a failure of the 21st chromosome to separate during egg or sperm development, known as nondisjunction. As a result, a sperm or egg cell is produced with an extra copy of chromosome 21; this cell thus has 24 chromosomes. When combined with a normal cell from the other parent, the baby has 47 chromosomes, with three copies of chromosome 21. About 88% of cases of trisomy 21 result from nonseparation of the chromosomes in the mother, 8% from nonseparation in the father, and 3% after the egg and sperm have merged.
Mosaic Down syndrome
Mosaic Down syndrome is diagnosed when there is a mixture of two types of cells: some cells have three copies of chromosome 21 but some cells have the typical two copies of chromosome 21. This type is the least common form of Down syndrome and accounts for only about 1% of all cases. Children with mosaic Down syndrome may have the same features as other children with Down syndrome. However, they may have fewer characteristics of the condition due to the presence of some (or many) cells with a typical number of chromosomes.
Translocation Down syndrome
The extra chromosome 21 material may also occur due to a Robertsonian translocation in 2–4% of cases. In this translocation Down syndrome, the long arm of chromosome 21 is attached to another chromosome, often chromosome 14. In a male affected with Down syndrome, it results in a karyotype of 46XY,t(14q21q). This may be a new mutation or previously present in one of the parents. The parent with such a translocation is usually normal physically and mentally; however, during production of egg or sperm cells, a higher chance of creating reproductive cells with extra chromosome 21 material exists. This results in a 15% chance of having a child with Down syndrome when the mother is affected and a less than 5% probability if the father is affected. The probability of this type of Down syndrome is not related to the mother's age. Some children without Down syndrome may inherit the translocation and have a higher probability of having children of their own with Down syndrome. In this case it is sometimes known as familial Down syndrome.
Mechanism
The extra genetic material present in Down syndrome results in overexpression of a portion of the 310 genes located on chromosome 21. This overexpression has been estimated at 50%, due to the third copy of the chromosome present. Some research has suggested that the Down syndrome critical region is located at bands 21q22.1–q22.3, with this area including genes for the amyloid precursor protein, superoxide dismutase, and likely the ETS2 proto-oncogene. Other research, however, has not confirmed these findings. MicroRNAs are also proposed to be involved.
The dementia that occurs in Down syndrome is due to an excess of amyloid beta peptide produced in the brain and is similar to Alzheimer's disease, which also involves amyloid beta build-up. Amyloid beta is processed from amyloid precursor protein, the gene for which is located on chromosome 21. Senile plaques and neurofibrillary tangles are present in nearly all by 35 years of age, though dementia may not be present. It is hypothesized that those with Down syndrome lack a normal number of lymphocytes and produce fewer antibodies, which is thought to present an increased risk of infection.
Epigenetics
Down syndrome is associated with an increased risk of some chronic diseases that are typically associated with older age, such as Alzheimer's disease. It is believed that accelerated aging occurs and increases the biological age of tissues, but molecular evidence for this hypothesis is sparse. According to a biomarker of tissue age known as the epigenetic clock, it is hypothesized that trisomy 21 increases the age of blood and brain tissue (on average by 6.6 years).
Diagnosis
Screening before birth
Guidelines recommend screening for Down syndrome to be offered to all pregnant women, regardless of age. A number of tests are used, with varying levels of accuracy. They are typically used in combination to increase the detection rate. None can be definitive; thus, if screening predicts a high possibility of Down syndrome, either amniocentesis or chorionic villus sampling is required to confirm the diagnosis.
Ultrasound
Prenatal ultrasound can be used to screen for Down syndrome. Findings that indicate increased chances when seen at 14 to 24 weeks of gestation include a small or absent nasal bone, large ventricles, increased nuchal fold thickness, and an abnormal right subclavian artery, among others. Assessing the presence or absence of several markers together is more accurate than relying on any single one. Increased fetal nuchal translucency (NT) indicates an increased possibility of Down syndrome, picking up 75–80% of cases with a false-positive rate of 6%.
Blood tests
Several blood markers can be measured to predict the chances of Down syndrome during the first or second trimester. Testing in both trimesters is sometimes recommended, and test results are often combined with ultrasound results. In the second trimester, two or three of the following markers are often used in combination: α-fetoprotein, unconjugated estriol, total hCG, and free βhCG, detecting about 60–70% of cases.
Testing of the mother's blood for fetal DNA is being studied and appears promising in the first trimester. The International Society for Prenatal Diagnosis considers it a reasonable screening option for those women whose pregnancies are at a high likelihood of trisomy 21. Accuracy has been reported at 98.6% in the first trimester of pregnancy. Confirmatory testing by invasive techniques (amniocentesis, CVS) is still required to confirm the screening result.
Combinations
Efficacy
For combinations of ultrasonography and non-genetic blood tests, screening in both the first and second trimesters is better than screening in the first trimester alone. The different screening techniques in use are able to pick up 90–95% of cases, with a false-positive rate of 2–5%. Because unaffected pregnancies greatly outnumber affected ones, even a low false-positive rate dominates the pool of positive results: if Down syndrome occurs in one in 500 pregnancies and the test has a 90% detection rate and a 5% false-positive rate, then of roughly every 29 women who test positive on screening, only about one will actually have a fetus with Down syndrome. If the screening test instead has a 2% false-positive rate, about one of every 12 women who test positive will have an affected fetus.
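These proportions follow from Bayes' rule: the positive predictive value of a screen is the rate of detected affected pregnancies divided by the total rate of positive results. The short Python sketch below reproduces the figures above; it illustrates the arithmetic only, and the function name is ours rather than part of any screening tool.

```python
def screen_ppv(prevalence: float, detection_rate: float, false_positive_rate: float) -> float:
    """Share of positive screens that reflect a truly affected pregnancy (Bayes' rule)."""
    true_pos = prevalence * detection_rate              # affected and flagged
    false_pos = (1 - prevalence) * false_positive_rate  # unaffected but flagged
    return true_pos / (true_pos + false_pos)

# Illustrative values from the text: 1-in-500 prevalence, 90% detection rate.
for fpr in (0.05, 0.02):
    ppv = screen_ppv(1 / 500, 0.90, fpr)
    print(f"false-positive rate {fpr:.0%}: PPV ≈ {ppv:.1%} (about 1 in {round(1 / ppv)})")
# false-positive rate 5%: PPV ≈ 3.5% (about 1 in 29)
# false-positive rate 2%: PPV ≈ 8.3% (about 1 in 12)
```

The positive pool is dominated by false positives because unaffected pregnancies outnumber affected ones by roughly 500 to 1, which is why confirmatory testing by amniocentesis or chorionic villus sampling is still required.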
Invasive genetic testing
Amniocentesis and chorionic villus sampling are more reliable tests, but they increase the risk of miscarriage by 0.5–1%. The risk of limb problems in the offspring may be increased if chorionic villus sampling is performed before 10 weeks.
The risk from the procedure is greater the earlier it is performed, thus amniocentesis is not recommended before 15 weeks gestational age and chorionic villus sampling before 10 weeks gestational age.
Abortion rates
About 92% of pregnancies in Europe with a diagnosis of Down syndrome are terminated. As a result, there is almost no one with Down syndrome in Iceland and Denmark, where screening is commonplace. In the United States, the termination rate after diagnosis is around 75%, but varies from 61 to 93% depending on the population surveyed. Rates are lower among younger women and have decreased over time. When pregnant women were asked if they would have a termination if their fetus tested positive, 23–33% said yes; when high-risk pregnant women were asked, 46–86% said yes; and when women who had screened positive were asked, 89–97% said yes.
After birth
A diagnosis can often be suspected based on the child's physical appearance at birth. An analysis of the child's chromosomes is needed to confirm the diagnosis, and to determine if a translocation is present, as this may help determine the chances of the child's parents having further children with Down syndrome.
Management
Efforts such as early childhood intervention, therapies, screening for common medical issues, a good family environment, and work-related training can improve the development of children with Down syndrome and provide good quality of life. Common therapies utilized include physical therapy, occupational therapy and speech therapy. Education and proper care can provide a positive quality of life. Typical childhood vaccinations are recommended.
Health screening
A number of health organizations have issued recommendations for screening those with Down syndrome for particular diseases. This is recommended to be done systematically.
At birth, all children should get an electrocardiogram and ultrasound of the heart. Surgical repair of heart problems may be required as early as three months of age. Heart valve problems may occur in young adults, and further ultrasound evaluation may be needed in adolescents and in early adulthood. Due to the elevated risk of testicular cancer, some recommend checking the person's testicles yearly.
Cognitive development
Some people with Down syndrome experience hearing loss. In this instance, hearing aids or other amplification devices can be useful for language learning. Speech therapy may be useful and is recommended to be started around nine months of age. As those with Down syndrome typically have good hand-eye coordination, learning sign language is a helpful communication tool. Augmentative and alternative communication methods, such as pointing, body language, objects, or pictures, are often used to help with communication. Behavioral issues and mental illness are typically managed with counseling or medications.
Education programs before reaching school age may be useful. School-age children with Down syndrome may benefit from inclusive education (whereby students of differing abilities are placed in classes with their peers of the same age), provided some adjustments are made to the curriculum. In the United States, the Individuals with Disabilities Education Act of 1975 requires public schools generally to allow attendance by students with Down syndrome.
Individuals with Down syndrome may learn better visually. Drawing may help with language, speech, and reading skills. Children with Down syndrome still often have difficulty with sentence structure and grammar, as well as developing the ability to speak clearly. Several types of early intervention can help with cognitive development. Efforts to develop motor skills include physical therapy, speech and language therapy, and occupational therapy. Physical therapy focuses specifically on motor development and teaching children to interact with their environment. Speech and language therapy can help prepare for later language. Lastly, occupational therapy can help with skills needed for later independence.
Other
Tympanostomy tubes are often needed, and often more than one set is required during the person's childhood. Tonsillectomy is also often done to help with sleep apnea and throat infections. Surgery does not correct every instance of sleep apnea, and a continuous positive airway pressure (CPAP) machine may be useful in those cases.
Efforts to prevent respiratory syncytial virus (RSV) infection with human monoclonal antibodies should be considered, especially in those with heart problems. In those who develop dementia there is no evidence for memantine, donepezil, rivastigmine, or galantamine.
Prognosis
About 5–15% of children with Down syndrome in Sweden attend regular school. Some graduate from high school; however, most do not. Of those with intellectual disability in the United States who attended high school, about 40% graduated. Many learn to read and write, and some are able to do paid work. In adulthood, about 20% in the United States do paid work in some capacity; in Sweden, however, less than 1% have regular jobs. Many are able to live semi-independently, but they often require help with financial, medical, and legal matters. Those with mosaic Down syndrome usually have better outcomes.
Individuals with Down syndrome have a higher risk of early death than the general population, most often from heart problems or infections. Following improved medical care, particularly for heart and gastrointestinal problems, life expectancy has increased: from 12 years in 1912, to 25 years in the 1980s, to 50 to 60 years in the developed world in the 2000s. Data collected between 1985 and 2003 showed that 4–12% of infants with Down syndrome die in the first year of life. The probability of long-term survival is partly determined by the presence of heart problems. Research from the turn of the century tracked those with congenital heart problems, showing that 60% survived to at least 10 years and 50% survived to at least 30 years of age; it did not track aging beyond 30 years. In those without heart problems, 85% of those studied survived to at least 10 years and 80% survived to at least 30 years of age. It is estimated that 10% lived to 70 years of age in the early 2000s. Much of this data is outdated, and life expectancy has improved considerably with more equitable healthcare and continuing advances in surgical practice. The National Down Syndrome Society provides information regarding raising a child with Down syndrome.
Epidemiology
Down syndrome is the most common chromosomal abnormality in humans. Globally, Down syndrome occurs in about 1 per 1,000 births and results in about 17,000 deaths. More children are born with Down syndrome in countries where abortion is not allowed and in countries where pregnancy more commonly occurs at a later age. About 1.4 per 1,000 live births in the United States and 1.1 per 1,000 live births in Norway are affected. In the 1950s, in the United States, it occurred in 2 per 1,000 live births, with the decrease since then due to prenatal screening and abortions. The number of pregnancies with Down syndrome is more than two times greater, with many spontaneously aborting. It is the cause of 8% of all congenital disorders.
Maternal age affects the chances of having a pregnancy with Down syndrome. At age 20, the chance is 1 in 1,441; at age 30, it is 1 in 959; at age 40, it is 1 in 84; and at age 50 it is 1 in 44. Although the probability increases with maternal age, 70% of children with Down syndrome are born to women 35 years of age and younger, because younger people have more children. The father's older age is also a risk factor in women older than 35, but not in women younger than 35, and may partly explain the increase in risk as women age.
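To compare these odds at a glance, each "1 in N" figure can be converted to a percentage and an expected count per 10,000 pregnancies; the small sketch below simply restates the numbers quoted above.

```python
# Maternal-age odds quoted above, as "1 in N" chances of a Down syndrome pregnancy.
odds_by_age = {20: 1441, 30: 959, 40: 84, 50: 44}

for age, n in odds_by_age.items():
    per_10k = 10_000 / n  # expected affected pregnancies per 10,000
    print(f"age {age}: 1 in {n} ≈ {per_10k:.1f} per 10,000 ({1 / n:.2%})")
# age 20: 1 in 1441 ≈ 6.9 per 10,000 (0.07%)
# age 50: 1 in 44 ≈ 227.3 per 10,000 (2.27%)
```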
History
English physician John Langdon Down first described Down syndrome in 1862, recognizing it as a distinct type of mental disability, and again in a more widely published report in 1866. Édouard Séguin described it as separate from cretinism in 1844. By the 20th century, Down syndrome had become the most recognizable form of mental disability.
Due to his perception that children with Down syndrome shared facial similarities with those of Blumenbach's Mongoloid race, John Langdon Down used the term "mongoloid". He felt that the existence of Down syndrome confirmed that all peoples were genetically related. In the 1950s with discovery of the underlying cause as being related to chromosomes, concerns about the race-based nature of the name increased.
In 1961, a group of nineteen scientists suggested that "mongolism" had "misleading connotations" and had become "an embarrassing term". The World Health Organization (WHO) dropped the term in 1965 after a request by the delegation from the Mongolian People's Republic. While this terminology continued to be used until the late twentieth century, it is now considered unacceptable and is no longer in common use.
In antiquity, many infants with disabilities were either killed or abandoned.
In June 2020, the earliest incidence of Down syndrome was found in genomic evidence from an infant that was buried before 3200 BC at Poulnabrone dolmen in Ireland.
Researchers believe that a number of historical pieces of art portray Down syndrome, including pottery from the pre-Columbian Tumaco-La Tolita culture in present-day Colombia and Ecuador, and the 16th-century painting The Adoration of the Christ Child.
In the 20th century, many individuals with Down syndrome were institutionalized, few of the associated medical problems were treated, and most people died in infancy or early adulthood. With the rise of the eugenics movement, 33 of the then 48 U.S. states and several countries began programs of forced sterilization of individuals with Down syndrome and comparable degrees of disability. Action T4 in Nazi Germany saw the systematic murder of people with Down syndrome made public policy.
With the discovery of karyotype techniques in the 1950s it became possible to identify abnormalities of chromosomal number or shape. In 1959 Jérôme Lejeune reported the discovery that Down syndrome resulted from an extra chromosome. However, Lejeune's claim to the discovery has been disputed, and in 2014 the Scientific Council of the French Federation of Human Genetics unanimously awarded its Grand Prize to his colleague Marthe Gautier for her role in this discovery. The discovery took place in the laboratory of Raymond Turpin at the Hôpital Trousseau in Paris, France. Jérôme Lejeune and Marthe Gautier were both his students.
As a result of this discovery, the condition became known as trisomy 21. Even before the discovery of its cause, the presence of the syndrome in all races, its association with older maternal age, and its rarity of recurrence had been noticed. Medical texts had assumed it was caused by a combination of inheritable factors that had not been identified. Other theories had focused on injuries sustained during birth.
Society and culture
Name
Down syndrome is named after John Langdon Down, the first person to provide an accurate description of the syndrome. His research, published in 1866, earned him recognition as the father of the syndrome. While others had previously recognized components of the condition, Down described the syndrome as a distinct, unique medical condition.
In 1975, the United States National Institutes of Health (NIH) convened a conference to standardize the naming and recommended replacing the possessive form, "Down's syndrome", with "Down syndrome". However, both the possessive and nonpossessive forms remain in use by the general population. The term "trisomy 21" is also commonly used.
Ethics
Obstetricians routinely offer antenatal screenings for various conditions, including Down syndrome. When results from testing become available, it is considered an ethical requirement to share the results with the patient.
Some bioethicists deem it reasonable for parents to select a child who would have the highest well-being. One criticism of this reasoning is that it often values those with disabilities less. Some parents argue that Down syndrome should not be prevented or cured and that eliminating Down syndrome amounts to genocide. The disability rights movement does not have a position on screening, although some members consider testing and abortion discriminatory. Some in the United States who are anti-abortion support abortion if the fetus is disabled, while others do not. Of a group of 40 mothers in the United States who have had one child with Down syndrome, half agreed to screening in the next pregnancy.
Within the US, some Protestant denominations see abortion as acceptable when a fetus has Down syndrome while Orthodox Christianity and Roman Catholicism do not. Women may face disapproval whether they choose abortion or not. Some of those against screening refer to it as a form of eugenics.
Advocacy groups
Advocacy groups for individuals with Down syndrome began to be formed after the Second World War. These were organizations advocating for the inclusion of people with Down syndrome in the general school system and for a greater understanding of the condition among the general population, as well as groups providing support for families with children living with Down syndrome. Before this, individuals with Down syndrome were often placed in mental hospitals or asylums. Organizations included the Royal Society for Handicapped Children and Adults, founded in the UK in 1946 by Judy Fryd; Kobato Kai, founded in Japan in 1964; the National Down Syndrome Congress, founded in the United States in 1973 by Kathryn McGee and others; and the National Down Syndrome Society, founded in the United States in 1979. The first Roman Catholic order of nuns for women with Down syndrome, the Little Sisters Disciples of the Lamb, was founded in France in 1985.
The first World Down Syndrome Day was held on 21 March 2006. The day and month (the 21st day of the 3rd month) were chosen to correspond with chromosome 21 and trisomy, respectively. It was recognized by the United Nations General Assembly in 2011.
Special21.org, founded in 2015, advocates for a specific classification category that would give swimmers with Down syndrome the opportunity to qualify for and compete at the Paralympic Games. The project began when the international Down syndrome swimmer Filipe Santos broke the world record in the 50m butterfly event but was unable to compete at the Paralympic Games.
Paralympic Swimming
International Paralympic Committee para-swimming classification codes are based on a single impairment only, whereas individuals with Down syndrome have both physical and intellectual impairments.
Although Down syndrome swimmers are able to compete in the Paralympic Swimming S14 intellectual impairment category (provided they score low in IQ tests), they are often outmatched by the superior physicality of their opponents.
At present there is no designated Paralympic category for swimmers with Down syndrome, meaning they must compete as intellectually impaired athletes; this disregards their physical disabilities.
A number of advocacy groups globally have been lobbying for the inclusion of a distinct classification category for Down syndrome swimmers within the IPC Classification Codes framework.
Despite ongoing advocacy, the issue remains unresolved, and swimmers with Down syndrome continue to face challenges in accessing appropriate classification pathways.
Research
Efforts are underway to determine how the extra chromosome 21 material causes Down syndrome, as this is currently unknown, and to develop treatments to improve intelligence in those with the syndrome. Two approaches being studied are the use of stem cells and gene therapy. Other methods being studied include the use of antioxidants, gamma secretase inhibition, adrenergic agonists, and memantine. Research is often carried out on an animal model, the Ts65Dn mouse.
Other hominids
Down syndrome may also occur in hominids other than humans. In great apes chromosome 22 corresponds to the human chromosome 21 and thus trisomy 22 causes Down syndrome in apes. The condition was observed in a common chimpanzee in 1969 and a Bornean orangutan in 1979, but neither lived very long. The common chimpanzee Kanako (born around 1993, in Japan) has become the longest-lived known example of this condition. Kanako has some of the same symptoms that are common in human Down syndrome. It is unknown how common this condition is in chimps, but it is plausible it could be roughly as common as Down syndrome is in humans.
Fossilized remains of a Neanderthal aged approximately 6 at death were described in 2024. The child, nicknamed Tina, suffered from a malformation of the inner ear that only occurs in people with Down syndrome, and would have caused hearing loss and disabling vertigo. The fact that a Neanderthal with such a condition survived to such an age was taken as evidence of compassion and extra-maternal care among Neanderthals.
In popular culture
Individuals
Jamie Brewer is an American actress and model. She is best known for her roles in the FX horror anthology television series American Horror Story. In its first season, Murder House, she portrayed Adelaide "Addie" Langdon; in the third season, Coven, she portrayed Nan, an enigmatic and clairvoyant witch; in the fourth season Freak Show, she portrayed Chester Creb's vision of his doll, Marjorie; in the seventh season Cult, she portrayed Hedda, a member of the 'SCUM' crew, led by feminist Valerie Solanas; and she also returned to her role as Nan in the eighth season, Apocalypse. In February 2015, Brewer became the first woman with Down syndrome to walk the red carpet at New York Fashion Week, for designer Carrie Hammer.
Sofía Jirau is a Puerto Rican model with Down syndrome who has worked with top designers and renowned media outlets such as Vogue Mexico, People, and Hola!. In February 2020, Jirau made her debut at New York Fashion Week. In February 2022, she became the first model with Down syndrome to be hired by the American retail company Victoria's Secret. She walked the LA Fashion Week runway in 2022. In 2021, Jirau launched a campaign called Sin Límites (No Limits), "which seeks to make visible the challenges facing the Down syndrome community, demonstrate our ability to achieve our goals, and raise awareness about the condition throughout the world."
Chris Nikic is the first person with Down syndrome to finish an Ironman Triathlon. He was awarded the Jimmy V Award for Perseverance at the 2021 ESPY Awards. Nikic continues to run races around the world, using his platform to promote his 1% Better message and bring awareness to the endless possibilities for people with Down syndrome.
Grace Strobel is an American model and the first person with Down syndrome to represent an American skin-care brand. She first joined Obagi in 2020 and continues to be an ambassador for the brand as of 2022. She walked the runway representing Tommy Hilfiger for Runway of Dreams at New York Fashion Week 2020 and at Atlantic City Fashion Week. Strobel has been featured in Forbes, on The Today Show and Good Morning America, and by Rihanna's Fenty Beauty and Lady Gaga's Kindness Channel, among others. She is also a public speaker and gives a presentation called #TheGraceEffect about what it is like to live with Down syndrome.
Television and film
Life Goes On is an American drama television series that aired on ABC from September 12, 1989, to May 23, 1993. The show centers on the Thatcher family living in suburban Chicago: Drew, his wife Libby, and their children Paige, Rebecca and Charles. Charles, called Corky on the show and portrayed by Chris Burke, was the first major character on a television series with Down syndrome. Burke's revolutionary role conveyed a realistic portrayal of people with Down syndrome and changed the way audiences viewed people with disabilities.
Struck by Lightning, an Australian film by Jerzy Domaradzki and starring Garry McDonald, is a comedy-drama depicting the efforts by a newly appointed physical education teacher to introduce soccer to a specialized school for youths with Down syndrome.
Champions (2023) is a film starring four main actors with Down syndrome: Madison Tevlin, Kevin Iannucci, Matthew Von Der Ahe and James Day Keith. It is an American sports comedy film directed by Bobby Farrelly in his solo directorial debut, from a screenplay written by Mark Rizzo. The film stars Woody Harrelson as a temperamental minor-league basketball coach who after an arrest must coach a team of players with intellectual disabilities as community service; Kaitlin Olson, Ernie Hudson, and Cheech Marin also star.
Born This Way is an American reality television series produced by Bunim/Murray Productions featuring seven adults with Down syndrome who work hard to achieve goals and overcome obstacles. The show received a Television Academy Honor in 2016.
The Peanut Butter Falcon is a 2019 American comedy-drama film written and directed by Tyler Nilson and Michael Schwartz, in their directorial film debut, and starring Zack Gottsagen, Shia LaBeouf, Dakota Johnson and John Hawkes. The plot follows a young man with Down syndrome who escapes from an assisted living facility, in order to follow his dream of being a wrestler, and befriends a wayward fisherman on the run. As the two men form a rapid bond, a social worker attempts to track them.
Music
The Devo song "Mongoloid" is about someone with Down syndrome.
The Amateur Transplants song "Your Baby" is about a fetus with Down syndrome.
Toys
In 2023, Mattel released a Barbie doll with characteristics of a person having Down syndrome as a way to promote diversity.
| Biology and health sciences | Disability | null |
8305 | https://en.wikipedia.org/wiki/Dyslexia | Dyslexia | Dyslexia, previously known as word blindness, is a learning disability that affects either reading or writing. Different people are affected to different degrees. Problems may include difficulties in spelling words, reading quickly, writing words, "sounding out" words in the head, pronouncing words when reading aloud and understanding what one reads. Often these difficulties are first noticed at school. The difficulties are involuntary, and people with this disorder have a normal desire to learn. People with dyslexia have higher rates of attention deficit hyperactivity disorder (ADHD), developmental language disorders, and difficulties with numbers.
Dyslexia is believed to be caused by the interaction of genetic and environmental factors. Some cases run in families. Dyslexia that develops due to a traumatic brain injury, stroke, or dementia is sometimes called "acquired dyslexia" or alexia. The underlying mechanisms of dyslexia result from differences within the brain's language processing. Dyslexia is diagnosed through a series of tests of memory, vision, spelling, and reading skills. Dyslexia is separate from reading difficulties caused by hearing or vision problems or by insufficient teaching or opportunity to learn.
Treatment involves adjusting teaching methods to meet the person's needs. While not curing the underlying problem, it may decrease the degree or impact of symptoms. Treatments targeting vision are not effective. Dyslexia is the most common learning disability and occurs in all areas of the world. It affects 3–7% of the population; however, up to 20% of the general population may have some degree of symptoms. While dyslexia is more often diagnosed in boys, this is partly explained by a self-fulfilling referral bias among teachers and professionals. It has even been suggested that the condition affects men and women equally. Some believe that dyslexia is best considered as a different way of learning, with both benefits and downsides.
Classification
Dyslexia is divided into developmental and acquired forms. Acquired dyslexia occurs subsequent to neurological insult, such as traumatic brain injury or stroke. People with acquired dyslexia exhibit some of the signs or symptoms of the developmental disorder, but require different assessment strategies and treatment approaches. Pure alexia, also known as agnosic alexia or pure word blindness, is one form of alexia which makes up "the peripheral dyslexia" group.
Signs and symptoms
In early childhood, symptoms that correlate with a later diagnosis of dyslexia include delayed onset of speech and a lack of phonological awareness. A common myth closely associates dyslexia with mirror writing and reading letters or words backwards. These behaviors are seen in many children as they learn to read and write, and are not considered to be defining characteristics of dyslexia.
School-age children with dyslexia may exhibit signs of difficulty in identifying or generating rhyming words, or counting the number of syllables in words—both of which depend on phonological awareness. They may also show difficulty in segmenting words into individual sounds (such as sounding out the three sounds of k, a, and t in cat) or may struggle to blend sounds, indicating reduced phonemic awareness.
Difficulties with word retrieval or naming things are also associated with dyslexia. People with dyslexia are commonly poor spellers, a feature sometimes called dysorthographia or dysgraphia, which depends on the skill of orthographic coding.
Problems persist into adolescence and adulthood and may include difficulties with summarizing stories, memorization, reading aloud, or learning foreign languages. Adults with dyslexia can often read with good comprehension, though they tend to read more slowly than others without a learning difficulty and perform worse in spelling tests or when reading nonsense words—a measure of phonological awareness.
Associated conditions
Dyslexia often co-occurs with other learning disorders, but the reasons for this comorbidity have not been clearly identified. These associated disabilities include:
Dysgraphia A disorder involving difficulties with writing or typing, sometimes due to problems with eye–hand coordination; it also can impede direction- or sequence-oriented processes, such as tying knots or carrying out repetitive tasks. In dyslexia, dysgraphia is often multifactorial, due to impaired letter-writing automaticity, organizational and elaborative difficulties, and impaired visual word forming, which makes it more difficult to retrieve the visual picture of words required for spelling.
Attention deficit hyperactivity disorder (ADHD) A disorder characterized by problems sustaining attention, hyperactivity, or acting impulsively. Dyslexia and ADHD commonly occur together. Approximately 15% of people with dyslexia have ADHD (estimates range from 12 to 24%), and up to 35% of people with ADHD have dyslexia.
Auditory processing disorder A listening disorder that affects the ability to process auditory information. This can lead to problems with auditory memory and auditory sequencing. Many people with dyslexia have auditory processing problems, and may develop their own logographic cues to compensate for this type of deficit. Some research suggests that auditory processing skills could be the primary shortfall in dyslexia.
Developmental coordination disorder A neurological condition characterized by difficulty in carrying out routine tasks involving balance, fine-motor control and kinesthetic coordination; difficulty in the use of speech sounds; and problems with short-term memory and organization.
Causes
Researchers have been trying to find the neurobiological basis of dyslexia since the condition was first identified in 1881. For example, some have tried to associate the common problem among people with dyslexia of not being able to see letters clearly to abnormal development of their visual nerve cells.
Neuroanatomy
Neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), have shown a correlation between both functional and structural differences in the brains of children with reading difficulties. Some people with dyslexia show less activation in parts of the left hemisphere of the brain involved with reading, such as the inferior frontal gyrus, inferior parietal lobule, and the middle and ventral temporal cortex. Over the past decade, brain activation studies using PET to study language have produced a breakthrough in the understanding of the neural basis of language. Neural bases for the visual lexicon and for auditory verbal short-term memory components have been proposed, with some implication that the observed neural manifestation of developmental dyslexia is task-specific (i.e., functional rather than structural). fMRIs of people with dyslexia indicate an interactive role of the cerebellum and cerebral cortex as well as other brain structures in reading.
The cerebellar theory of dyslexia proposes that impairment of cerebellum-controlled muscle movement affects the formation of words by the tongue and facial muscles, resulting in the fluency problems that some people with dyslexia experience. The cerebellum is also involved in the automatization of some tasks, such as reading. The fact that some children with dyslexia have motor task and balance impairments could be consistent with a cerebellar role in their reading difficulties. However, the cerebellar theory has not been supported by controlled research studies.
Genetics
Research into potential genetic causes of dyslexia has its roots in post-autopsy examination of the brains of people with dyslexia. Observed anatomical differences in the language centers of such brains include microscopic cortical malformations known as ectopias, and more rarely, vascular micro-malformations, and microgyrus—a smaller than usual size for the gyrus. The previously cited studies and others suggest that abnormal cortical development, presumed to occur before or during the sixth month of fetal brain development, may have caused the abnormalities. Abnormal cell formations in people with dyslexia have also been reported in non-language cerebral and subcortical brain structures. Several genes have been associated with dyslexia, including DCDC2 and KIAA0319 on chromosome 6, and DYX1C1 on chromosome 15.
Gene–environment interaction
The contribution of gene–environment interaction to reading disability has been intensely studied using twin studies, which estimate the proportion of variance associated with a person's environment and the proportion associated with their genes. Both environmental and genetic factors appear to contribute to reading development. Studies examining the influence of environmental factors such as parental education and teaching quality have determined that genetics have greater influence in supportive, rather than less optimal, environments. However, more optimal conditions may just allow those genetic risk factors to account for more of the variance in outcome because the environmental risk factors have been minimized.
As environment plays a large role in learning and memory, it is likely that epigenetic modifications play an important role in reading ability. Measures of gene expression, histone modifications, and methylation in the human periphery are used to study epigenetic processes; however, all of these have limitations in the extrapolation of results for application to the human brain.
Language
The orthographic complexity of a language directly affects how difficult it is to learn to read it. English and French have comparatively "deep" phonemic orthographies within the Latin alphabet writing system, with complex structures employing spelling patterns on several levels: letter-sound correspondence, syllables, and morphemes. Languages such as Spanish, Italian and Finnish primarily employ letter-sound correspondence—so-called "shallow" orthographies—which makes them easier to learn for people with dyslexia. Logographic writing systems, such as Chinese characters, have extensive symbol use; and these also pose problems for dyslexic learners.
Pathophysiology
For most people who are right-hand dominant, the left hemisphere of their brain is more specialized for language processing. With regard to the mechanism of dyslexia, fMRI studies suggest that this specialization is less pronounced or absent in people with dyslexia. In other studies, dyslexia is correlated with anatomical differences in the corpus callosum, the bundle of nerve fibers that connects the left and right hemispheres.
Data via diffusion tensor MRI indicate changes in connectivity or in gray matter density in areas related to reading and language. Finally, the left inferior frontal gyrus has shown differences in phonological processing in people with dyslexia. Neurophysiological and imaging procedures are being used to ascertain phenotypic characteristics in people with dyslexia, thus identifying the effects of dyslexia-related genes.
Dual route theory
The dual-route theory of reading aloud was first described in the early 1970s. This theory suggests that two separate mental mechanisms, or cognitive routes, are involved in reading aloud. One mechanism is the lexical route, which is the process whereby skilled readers can recognize known words by sight alone, through a "dictionary" lookup procedure. The other mechanism is the nonlexical or sublexical route, which is the process whereby the reader can "sound out" a written word. This is done by identifying the word's constituent parts (letters, phonemes, graphemes) and applying knowledge of how these parts are associated with each other, for example, how a string of neighboring letters sounds together. The dual-route system could explain the different rates of dyslexia occurrence between different languages (e.g., the consistency of phonological rules in the Spanish language could account for the fact that Spanish-speaking children show a higher level of performance in non-word reading when compared to English speakers).
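In computational terms, the dual-route model behaves like a dictionary lookup with a rule-based fallback. The sketch below is a toy illustration of that control flow, not a model from the reading literature; the mini-lexicon and grapheme-to-phoneme rules are invented for demonstration.

```python
# Hypothetical mini-lexicon for the lexical route: whole-word pronunciations
# recognized "by sight". Entries and phoneme symbols are invented.
LEXICON = {"cat": "kat", "yacht": "jot", "have": "hav"}

# Hypothetical grapheme-to-phoneme rules for the sublexical route.
GP_RULES = {"ch": "tS", "sh": "S", "a": "a", "c": "k", "t": "t", "h": "h", "e": "e"}

def read_aloud(word: str) -> str:
    """Return a toy pronunciation via the lexical or the sublexical route."""
    # Lexical route: a skilled reader recognizes a known word by sight alone.
    if word in LEXICON:
        return LEXICON[word]
    # Sublexical route: "sound out" the word from its constituent graphemes,
    # preferring two-letter graphemes such as "ch" over single letters.
    phonemes, i = [], 0
    while i < len(word):
        for size in (2, 1):
            chunk = word[i:i + size]
            if chunk in GP_RULES:
                phonemes.append(GP_RULES[chunk])
                i += len(chunk)
                break
        else:
            i += 1  # letter not covered by the toy rule set; skip it
    return "".join(phonemes)

print(read_aloud("yacht"))  # "jot"  -- irregular word, readable only via the lexicon
print(read_aloud("chat"))   # "tSat" -- unfamiliar word, decoded by the rules
```

Irregular words such as "yacht" can only be read correctly through the lexical route, while unfamiliar words and non-words must be decoded sublexically, which is consistent with non-word reading being used as a measure of phonological skill.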
Diagnosis
Dyslexia is a heterogeneous, dimensional learning disorder that impairs accurate and fluent word reading and spelling. Typical—but not universal—features include difficulties with phonological awareness; inefficient and often inaccurate processing of sounds in oral language (phonological processing); and verbal working memory deficits.
Dyslexia is a neurodevelopmental disorder, subcategorized in diagnostic guides as a learning disorder with impairment in reading (ICD-11 prefixes "developmental" to "learning disorder"; DSM-5 uses "specific"). Dyslexia is not a problem with intelligence. Emotional problems often arise secondary to learning difficulties. The National Institute of Neurological Disorders and Stroke describes dyslexia as "difficulty with phonological processing (the manipulation of sounds), spelling, and/or rapid visual-verbal responding".
The British Dyslexia Association defines dyslexia as "a learning difficulty that primarily affects the skills involved in accurate and fluent word reading and spelling" and is characterized by "difficulties in phonological awareness, verbal memory and verbal processing speed". Phonological awareness enables one to identify, discriminate, remember (working memory), and mentally manipulate the sound structures of language—phonemes, onset-rime segments, syllables, and words.
Assessment
The following can be done to assess for dyslexia:
Apply a multidisciplinary team approach involving the child's parent(s) and teacher(s), school psychologist, pediatrician, and, as appropriate, speech and language pathologist (speech therapist), and occupational therapist.
Gain familiarity with typical ages children reach various general developmental milestones, and domain-specific milestones, such as phonological awareness (recognizing rhyming words; identifying the initial sounds in words).
Do not rely on tests exclusively. Careful observation of the child in the school and home environments, and sensitive, comprehensive parental interviews are just as important as tests.
Look at the empirically supported response to intervention (RTI) approach, which "... involves monitoring the progress of a group of children through a programme of intervention rather than undertaking a static assessment of their current skills. Children with the most need are those who fail to respond to effective teaching, and they are readily identified using this approach."
Assessment tests
There is a wide range of tests that are used in clinical and educational settings to evaluate the possibility of dyslexia. If initial testing suggests that a person might have dyslexia, such tests are often followed up with a full diagnostic assessment to determine the extent and nature of the disorder. Some tests can be administered by a teacher or computer; others require specialized training and are given by psychologists. Some test results indicate how to carry out teaching strategies. Because a variety of different cognitive, behavioral, emotional, and environmental factors all could contribute to difficulty learning to read, a comprehensive evaluation should consider these different possibilities. These tests and observations can include:
General measures of cognitive ability, such as the Wechsler Intelligence Scale for Children, Woodcock-Johnson Tests of Cognitive Abilities, or Stanford-Binet Intelligence Scales. Low general cognitive ability would make reading more difficult. Cognitive ability measures also often try to measure different cognitive processes, such as verbal ability, nonverbal and spatial reasoning, working memory, and processing speed. There are different versions of these tests for different age groups. Almost all of these require additional training to give and score correctly, and are done by psychologists. According to Mather and Schneider (2015), no profile or pattern of scores on cognitive tests has yet been identified that confirms or rules out a reading disorder.
Screening or evaluation for mental health conditions: Parents and teachers can complete rating scales or behavior checklists to gather information about emotional and behavioral functioning for younger people. Many checklists have similar versions for parents, teachers, and younger people old enough to read reasonably well (often 11 years and older) to complete. Examples include the Behavioral Assessment System for Children and the Strengths and Difficulties Questionnaire. All of these have nationally representative norms, making it possible to compare the level of symptoms to what would be typical for the younger person's age and biological sex. Other checklists link more specifically to psychiatric diagnoses, such as the Vanderbilt ADHD Rating Scales or the Screen for Child Anxiety Related Emotional Disorders (SCARED). Screening uses brief tools designed to catch cases with a disorder, but such tools often produce false positives for people who do not have the disorder, so a positive screen should be followed up by a more accurate test or a diagnostic interview. Depressive disorders and anxiety disorders are two to three times more common in people with dyslexia, and attention-deficit/hyperactivity disorder is more common as well.
Review of academic achievement and skills: Average spelling and reading ability for a person with dyslexia is a percentile ranking below 16, well below normal. In addition to reviewing grades and teacher notes, standardized test results are helpful in evaluating progress. These include group-administered tests, such as the Iowa Tests of Educational Development, that a teacher may give to a group or whole classroom of younger people at the same time. They could also include individually administered tests of achievement, such as the Wide Range Achievement Test or the Woodcock-Johnson (which also includes a set of achievement tests). The individually administered tests again require more specialized training.
Screening
Screening procedures seek to identify children who show signs of possible dyslexia. In the preschool years, a family history of dyslexia, particularly in biological parents and siblings, predicts an eventual dyslexia diagnosis better than any test. In primary school (ages 5–7), the ideal screening procedure consists of training primary school teachers to carefully observe and record their pupils' progress through the phonics curriculum, and thereby identify children progressing slowly. When teachers identify such students they can supplement their observations with screening tests such as the Phonics screening check used by United Kingdom schools during Year one.
In the medical setting, child and adolescent psychiatrist M. S. Thambirajah emphasizes that "[g]iven the high prevalence of developmental disorders in school-aged children, all children seen in clinics should be systematically screened for developmental disorders irrespective of the presenting problem/s." Thambirajah recommends screening for developmental disorders, including dyslexia, by conducting a brief developmental history, a preliminary psychosocial developmental examination, and obtaining a school report regarding academic and social functioning.
Management
Through the use of compensation strategies, therapy and educational support, individuals with dyslexia can learn to read and write. There are techniques and technical aids that help to manage or conceal symptoms of the disorder. Reducing stress and anxiety can sometimes improve written comprehension. For dyslexia intervention with alphabet-writing systems, the fundamental aim is to increase a child's awareness of correspondences between graphemes (letters) and phonemes (sounds), and to relate these to reading and spelling by teaching how sounds blend into words. Reinforced collateral training focused on reading and spelling may yield longer-lasting gains than oral phonological training alone. Early intervention can be successful in reducing reading failure.
Research does not suggest that specially-tailored fonts (such as Dyslexie and OpenDyslexic) help with reading. Children with dyslexia read text set in a regular font such as Times New Roman and Arial just as quickly, and they show a preference for regular fonts over specially-tailored fonts. Some research has pointed to increased letter-spacing being beneficial.
There is currently no evidence showing that music education significantly improves the reading skills of adolescents with dyslexia.
Prognosis
Dyslexic children require special instruction for word analysis and spelling from an early age. The prognosis, generally speaking, is positive for individuals who are identified in childhood and receive support from friends and family. The New York educational system (NYED) recommends "a daily uninterrupted 90-minute block of instruction in reading" and "instruction in phonemic awareness, phonics, vocabulary development, reading fluency" to improve the individual's reading ability.
Epidemiology
The prevalence of dyslexia is unknown, but it has been estimated to be as low as 5% and as high as 17% of the population. Dyslexia is diagnosed more often in males.
There are different definitions of dyslexia used throughout the world. Further, differences in writing systems may affect development of written language ability due to the interplay between auditory and written representations of phonemes. Dyslexia is not limited to difficulty in converting letters to sounds, and Chinese people with dyslexia may have difficulty converting Chinese characters into their meanings. The Chinese vocabulary uses logographic, monographic, non-alphabet writing where one character can represent an individual phoneme.
The phonological-processing hypothesis attempts to explain why dyslexia occurs in a wide variety of languages. Furthermore, the relationship between phonological capacity and reading appears to be influenced by orthography.
History
Dyslexia was clinically described by Oswald Berkhan in 1881, but the term dyslexia was coined in 1883 by Rudolf Berlin, an ophthalmologist in Stuttgart. He used the term to refer to the case of a young boy who had severe difficulty learning to read and write, despite showing typical intelligence and physical abilities in all other respects. In 1896, W. Pringle Morgan, a British physician from Seaford, East Sussex, published a description of a reading-specific learning disorder in a report to the British Medical Journal titled "Congenital Word Blindness". The distinction between phonological versus surface types of dyslexia is only descriptive, and without any etiological assumption as to the underlying brain mechanisms. However, studies have alluded to potential differences due to variation in performance. Over time, the consensus has changed from an intelligence-based model to an age-based model for dyslexia.
Society and culture
As is the case with any disorder, society often makes an assessment based on incomplete information. Before the 1980s, dyslexia was thought to be a consequence of education, rather than a neurological disability. As a result, society often misjudges those with the disorder. There is also sometimes a workplace stigma and negative attitude towards those with dyslexia. If the instructors of a person with dyslexia lack the necessary training to support a child with the condition, there is often a negative effect on the student's learning participation.
Since at least the 1960s in the UK, children diagnosed with developmental dyslexia have consistently come from privileged families. Although half of prisoners in the UK have significant reading difficulties, very few have ever been evaluated for dyslexia. Access to some special educational resources and funding is contingent upon having a diagnosis of dyslexia. As a result, when Staffordshire and Warwickshire proposed in 2018 to teach reading to all children with reading difficulties, using techniques proven to be successful for most children with a diagnosis of dyslexia, without first requiring the families to obtain an official diagnosis, dyslexia advocates and parents of children with dyslexia were fearful that they were losing a privileged status.
Stigma and success
Because dyslexia affects a range of cognitive processes and carries considerable societal stigma, individuals with dyslexia often cope through self-stigma and perfectionistic self-presentation. Perfectionistic self-presentation involves attempting to project an ideal image of oneself while hiding any imperfections. This behavior carries serious risk, as it is often associated with mental health problems and a refusal to seek help.
Research
Most dyslexia research relates to alphabetic writing systems, and especially to European languages. However, substantial research is also available regarding people with dyslexia who speak Arabic, Chinese, Hebrew, or other languages. In some respects, the outward presentation of individuals with reading disability and that of generally poor readers is the same.
| Biology and health sciences | Disability | null |
8311 | https://en.wikipedia.org/wiki/Dinosaur | Dinosaur | Dinosaurs are a diverse group of reptiles of the clade Dinosauria. They first appeared during the Triassic period, between 243 and 233.23 million years ago (mya), although the exact origin and timing of the evolution of dinosaurs is a subject of active research. They became the dominant terrestrial vertebrates after the Triassic–Jurassic extinction event 201.3 mya and their dominance continued throughout the Jurassic and Cretaceous periods. The fossil record shows that birds are feathered dinosaurs, having evolved from earlier theropods during the Late Jurassic epoch, and are the only dinosaur lineage known to have survived the Cretaceous–Paleogene extinction event approximately 66 mya. Dinosaurs can therefore be divided into avian dinosaurs—birds—and the extinct non-avian dinosaurs, which are all dinosaurs other than birds.
Dinosaurs are varied from taxonomic, morphological and ecological standpoints. Birds, at over 11,000 living species, are among the most diverse groups of vertebrates. Using fossil evidence, paleontologists have identified over 900 distinct genera and more than 1,000 different species of non-avian dinosaurs. Dinosaurs are represented on every continent by both extant species (birds) and fossil remains. Through the first half of the 20th century, before birds were recognized as dinosaurs, most of the scientific community believed dinosaurs to have been sluggish and cold-blooded. Most research conducted since the 1970s, however, has indicated that dinosaurs were active animals with elevated metabolisms and numerous adaptations for social interaction. Some were herbivorous, others carnivorous. Evidence suggests that all dinosaurs were egg-laying, and that nest-building was a trait shared by many dinosaurs, both avian and non-avian.
While dinosaurs were ancestrally bipedal, many extinct groups included quadrupedal species, and some were able to shift between these stances. Elaborate display structures such as horns or crests are common to all dinosaur groups, and some extinct groups developed skeletal modifications such as bony armor and spines. While the dinosaurs' modern-day surviving avian lineage (birds) are generally small due to the constraints of flight, many prehistoric dinosaurs (non-avian and avian) were large-bodied—the largest sauropod dinosaurs are estimated to have reached lengths of nearly 40 meters and heights of about 18 meters, and were the largest land animals of all time. The misconception that non-avian dinosaurs were uniformly gigantic is based in part on preservation bias, as large, sturdy bones are more likely to last until they are fossilized. Many dinosaurs were quite small, some measuring about 50 centimeters in length.
The first dinosaur fossils were recognized in the early 19th century, with the name "dinosaur" (meaning "terrible lizard") being coined by Sir Richard Owen in 1842 to refer to these "great fossil lizards". Since then, mounted fossil dinosaur skeletons have been major attractions at museums worldwide, and dinosaurs have become an enduring part of popular culture. The large sizes of some dinosaurs, as well as their seemingly monstrous and fantastic nature, have ensured their regular appearance in best-selling books and films, such as the Jurassic Park franchise. Persistent public enthusiasm for the animals has resulted in significant funding for dinosaur science, and new discoveries are regularly covered by the media.
Definition
Under phylogenetic nomenclature, dinosaurs are usually defined as the group consisting of the most recent common ancestor (MRCA) of Triceratops and modern birds (Neornithes), and all its descendants. It has also been suggested that Dinosauria be defined with respect to the MRCA of Megalosaurus and Iguanodon, because these were two of the three genera cited by Richard Owen when he recognized the Dinosauria. Both definitions cover the same known genera: Dinosauria = Ornithischia + Saurischia. This includes major groups such as ankylosaurians (armored herbivorous quadrupeds), stegosaurians (plated herbivorous quadrupeds), ceratopsians (bipedal or quadrupedal herbivores with neck frills), pachycephalosaurians (bipedal herbivores with thick skulls), ornithopods (bipedal or quadrupedal herbivores including "duck-bills"), theropods (mostly bipedal carnivores and birds), and sauropodomorphs (mostly large herbivorous quadrupeds with long necks and tails).
Birds are the sole surviving dinosaurs. In traditional taxonomy, birds were considered a separate class that had evolved from dinosaurs, a distinct superorder. However, most contemporary paleontologists reject the traditional style of classification based on anatomical similarity, in favor of phylogenetic taxonomy based on deduced ancestry, in which each group is defined as all descendants of a given founding genus. Birds belong to the dinosaur subgroup Maniraptora, which are coelurosaurs, which are theropods, which are saurischians.
Research by Matthew G. Baron, David B. Norman, and Paul M. Barrett in 2017 suggested a radical revision of dinosaurian systematics. Phylogenetic analysis by Baron et al. recovered the Ornithischia as being closer to the Theropoda than the Sauropodomorpha, as opposed to the traditional union of theropods with sauropodomorphs. This would cause sauropods and kin to fall outside traditional dinosaurs, so they re-defined Dinosauria as the last common ancestor of Triceratops horridus, Passer domesticus and Diplodocus carnegii, and all of its descendants, to ensure that sauropods and kin remain included as dinosaurs. They also resurrected the clade Ornithoscelida to refer to the group containing Ornithischia and Theropoda.
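A node-based definition of this kind is mechanical: the clade is the most recent common ancestor (MRCA) of the anchor taxa together with all of its descendants. The Python sketch below makes the procedure concrete on a deliberately simplified toy tree; the tree and helper names are illustrative assumptions, not a published phylogeny.

    # Sketch of a node-based clade definition: MRCA of two anchors plus all descendants.
    # TREE is a highly simplified toy phylogeny (parent -> children), for illustration only.
    TREE = {
        "Dinosauria": ["Ornithischia", "Saurischia"],
        "Ornithischia": ["Triceratops"],
        "Saurischia": ["Sauropodomorpha", "Theropoda"],
        "Sauropodomorpha": ["Diplodocus"],
        "Theropoda": ["Passer"],  # Passer (a sparrow) stands in for modern birds
    }
    PARENT = {child: p for p, kids in TREE.items() for child in kids}

    def path_to_root(node):
        # List a taxon and all of its ancestors, tip first.
        path = [node]
        while path[-1] in PARENT:
            path.append(PARENT[path[-1]])
        return path

    def clade(anchor_a, anchor_b):
        # MRCA = first ancestor of anchor_a that is also an ancestor of anchor_b.
        ancestors_b = set(path_to_root(anchor_b))
        mrca = next(n for n in path_to_root(anchor_a) if n in ancestors_b)
        # The clade is the MRCA and everything descending from it.
        members, stack = set(), [mrca]
        while stack:
            node = stack.pop()
            members.add(node)
            stack.extend(TREE.get(node, []))
        return members

    print(sorted(clade("Triceratops", "Passer")))  # returns the whole toy tree: Dinosauria

Anchoring the definition on Triceratops and a modern bird automatically sweeps in every branch descending from their common ancestor. This is why Baron and colleagues added Diplodocus as a third anchor: under their tree topology, it keeps sauropods inside Dinosauria however the branches rearrange.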
General description
Using one of the above definitions, dinosaurs can be generally described as archosaurs with hind limbs held erect beneath the body. Other prehistoric animals, including pterosaurs, mosasaurs, ichthyosaurs, plesiosaurs, and Dimetrodon, while often popularly conceived of as dinosaurs, are not taxonomically classified as dinosaurs. Pterosaurs are distantly related to dinosaurs, being members of the clade Ornithodira. The other groups mentioned are, like dinosaurs and pterosaurs, members of Sauropsida (the reptile and bird clade), except Dimetrodon (which is a synapsid). None of them had the erect hind limb posture characteristic of true dinosaurs.
Dinosaurs were the dominant terrestrial vertebrates of the Mesozoic Era, especially the Jurassic and Cretaceous periods. Other groups of animals were restricted in size and niches; mammals, for example, rarely exceeded the size of a domestic cat and were generally rodent-sized carnivores of small prey. Dinosaurs have always been recognized as an extremely varied group: over 900 non-avian dinosaur genera have been confidently identified as of 2018, with 1,124 species identified as of 2016. Estimates put the total number of dinosaur genera preserved in the fossil record at 1,850, nearly 75% of them still undiscovered, and the number that ever existed (in or out of the fossil record) at 3,400. A 2016 estimate put the number of dinosaur species living in the Mesozoic at 1,543–2,468, compared to the number of modern-day birds (avian dinosaurs) at 10,806 species.
Extinct dinosaurs, as well as modern birds, include genera that are herbivorous and others carnivorous, including seed-eaters, fish-eaters, insectivores, and omnivores. While dinosaurs were ancestrally bipedal (as are all modern birds), some evolved into quadrupeds, and others, such as Anchisaurus and Iguanodon, could walk as easily on two or four legs. Cranial modifications like horns and crests are common dinosaurian traits, and some extinct species had bony armor. Although the best-known genera are remarkable for their large size, many Mesozoic dinosaurs were human-sized or smaller, and modern birds are generally small in size. Dinosaurs today inhabit every continent, and fossils show that they had achieved global distribution by the Early Jurassic epoch at latest. Modern birds inhabit most available habitats, from terrestrial to marine, and there is evidence that some non-avian dinosaurs (such as Microraptor) could fly or at least glide, and others, such as spinosaurids, had semiaquatic habits.
Distinguishing anatomical features
While recent discoveries have made it more difficult to present a universally agreed-upon list of their distinguishing features, nearly all dinosaurs discovered so far share certain modifications to the ancestral archosaurian skeleton, or are clearly descendants of older dinosaurs showing these modifications. Although some later groups of dinosaurs featured further modified versions of these traits, they are considered typical for Dinosauria; the earliest dinosaurs had them and passed them on to their descendants. Such modifications, originating in the most recent common ancestor of a certain taxonomic group, are called the synapomorphies of such a group.
A detailed assessment of archosaur interrelations by Sterling Nesbitt confirmed or found the following twelve unambiguous synapomorphies, some previously known:
In the skull, a supratemporal fossa (excavation) is present in front of the supratemporal fenestra, the main opening in the rear skull roof
Epipophyses, obliquely backward-pointing processes on the rear top corners of the anterior (front) neck vertebrae behind the atlas and axis, the first two neck vertebrae
Apex of a deltopectoral crest (a projection on which the deltopectoral muscles attach) located at or more than 30% down the length of the humerus (upper arm bone)
Radius, a lower arm bone, shorter than 80% of humerus length
Fourth trochanter (projection where the caudofemoralis muscle attaches on the inner rear shaft) on the femur (thigh bone) is a sharp flange
Fourth trochanter asymmetrical, with distal, lower, margin forming a steeper angle to the shaft
On the astragalus and calcaneum, upper ankle bones, the proximal articular facet, the top connecting surface, for the fibula occupies less than 30% of the transverse width of the element
Exoccipitals (bones at the back of the skull) do not meet along the midline on the floor of the endocranial cavity, the inner space of the braincase
In the pelvis, the proximal articular surfaces of the ischium with the ilium and the pubis are separated by a large concave surface (on the upper side of the ischium a part of the open hip joint is located between the contacts with the pubic bone and the ilium)
Cnemial crest on the tibia (protruding part of the top surface of the shinbone) arcs anterolaterally (curves to the front and the outer side)
Distinct proximodistally oriented (vertical) ridge present on the posterior face of the distal end of the tibia (the rear surface of the lower end of the shinbone)
Concave articular surface for the fibula of the calcaneum (the top surface of the calcaneum, where it touches the fibula, has a hollow profile)
Nesbitt found a number of further potential synapomorphies and discounted a number of synapomorphies previously suggested. Some of these are also present in silesaurids, which Nesbitt recovered as a sister group to Dinosauria, including a large anterior trochanter, metatarsals II and IV of subequal length, reduced contact between ischium and pubis, the presence of a cnemial crest on the tibia and of an ascending process on the astragalus, and many others.
A variety of other skeletal features are shared by dinosaurs. However, because they either are common to other groups of archosaurs or were not present in all early dinosaurs, these features are not considered to be synapomorphies. For example, as diapsids, dinosaurs ancestrally had two pairs of temporal fenestrae (openings in the skull behind the eyes), and as members of the diapsid group Archosauria, had additional openings in the snout and lower jaw. Additionally, several characteristics once thought to be synapomorphies are now known to have appeared before dinosaurs, or were absent in the earliest dinosaurs and independently evolved by different dinosaur groups. These include an elongated scapula, or shoulder blade; a sacrum composed of three or more fused vertebrae (three are found in some other archosaurs, but only two are found in Herrerasaurus); and a perforate acetabulum, or hip socket, with a hole at the center of its inside surface (closed in Saturnalia tupiniquim, for example). Another difficulty of determining distinctly dinosaurian features is that early dinosaurs and other archosaurs from the Late Triassic epoch are often poorly known and were similar in many ways; these animals have sometimes been misidentified in the literature.
Dinosaurs stand with their hind limbs erect in a manner similar to most modern mammals, but distinct from most other reptiles, whose limbs sprawl out to either side. This posture is due to the development of a laterally facing recess in the pelvis (usually an open socket) and a corresponding inwardly facing distinct head on the femur. Their erect posture enabled early dinosaurs to breathe easily while moving, which likely permitted stamina and activity levels that surpassed those of "sprawling" reptiles. Erect limbs probably also helped support the evolution of large size by reducing bending stresses on limbs. Some non-dinosaurian archosaurs, including rauisuchians, also had erect limbs but achieved this by a "pillar-erect" configuration of the hip joint, where instead of having a projection from the femur insert on a socket on the hip, the upper pelvic bone was rotated to form an overhanging shelf.
History of study
Pre-scientific history
Dinosaur fossils have been known for millennia, although their true nature was not recognized. The Chinese considered them to be dragon bones and documented them as such. For example, Huayang Guo Zhi, a gazetteer compiled by Chang Qu during the Western Jin Dynasty (265–316), reported the discovery of dragon bones at Wucheng in Sichuan Province. Villagers in central China have long unearthed fossilized "dragon bones" for use in traditional medicines. In Europe, dinosaur fossils were generally believed to be the remains of giants and other biblical creatures.
Early dinosaur research
Scholarly descriptions of what would now be recognized as dinosaur bones first appeared in the late 17th century in England. Part of a bone, now known to have been the femur of a Megalosaurus, was recovered from a limestone quarry at Cornwell near Chipping Norton, Oxfordshire, in 1676. The fragment was sent to Robert Plot, Professor of Chemistry at the University of Oxford and first curator of the Ashmolean Museum, who published a description in his The Natural History of Oxford-shire (1677). He correctly identified the bone as the lower extremity of the femur of a large animal, and recognized that it was too large to belong to any known species. He therefore concluded it to be the femur of a huge human, perhaps a Titan or another type of giant featured in legends. Edward Lhuyd, a friend of Sir Isaac Newton, published Lithophylacii Britannici ichnographia (1699), the first scientific treatment of what would now be recognized as a dinosaur. In it he described and named a sauropod tooth, "Rutellum implicatum", that had been found in Caswell, near Witney, Oxfordshire.
Between 1815 and 1824, the Rev William Buckland, the first Reader of Geology at the University of Oxford, collected more fossilized bones of Megalosaurus and became the first person to describe a non-avian dinosaur in a scientific journal. The second non-avian dinosaur genus to be identified, Iguanodon, was purportedly discovered in 1822 by Mary Ann Mantell, the wife of English geologist Gideon Mantell, though this is disputed and some historians say Gideon had acquired remains years earlier. Gideon Mantell recognized similarities between his fossils and the bones of modern iguanas and published his findings in 1825.
The study of these "great fossil lizards" soon became of great interest to European and American scientists, and in 1842 the English paleontologist Sir Richard Owen coined the term "dinosaur", using it to refer to the "distinct tribe or sub-order of Saurian Reptiles" that were then being recognized in England and around the world. The term is derived from the Greek deinos ("terrible" or "fearfully great") and sauros ("lizard" or "reptile"). Though the taxonomic name has often been interpreted as a reference to dinosaurs' teeth, claws, and other fearsome characteristics, Owen intended it also to evoke their size and majesty. Owen recognized that the remains that had been found so far, Iguanodon, Megalosaurus and Hylaeosaurus, shared distinctive features, and so decided to present them as a distinct taxonomic group. As clarified by British geologist and historian Hugh Torrens, Owen had given a presentation about fossil reptiles to the British Association for the Advancement of Science in 1841, but reports of the time show that Owen did not mention the word "dinosaur", nor recognize dinosaurs as a distinct group of reptiles in his address. He introduced the Dinosauria only in the revised text version of his talk published in April 1842. With the backing of Prince Albert, the husband of Queen Victoria, Owen established the Natural History Museum, London, to display the national collection of dinosaur fossils and other biological and geological exhibits.
Discoveries in North America
In 1858, William Parker Foulke discovered the first known American dinosaur, in marl pits in the small town of Haddonfield, New Jersey. (Although fossils had been found before, their nature had not been correctly discerned.) The creature was named Hadrosaurus foulkii. It was an extremely important find: Hadrosaurus was one of the first nearly complete dinosaur skeletons found (the first was in 1834, in Maidstone, England), and it was clearly a bipedal creature. This was a revolutionary discovery as, until that point, most scientists had believed dinosaurs walked on four feet, like other lizards. Foulke's discoveries sparked a wave of interest in dinosaurs in the United States, known as dinosaur mania.
Dinosaur mania was exemplified by the fierce rivalry between Edward Drinker Cope and Othniel Charles Marsh, both of whom raced to be the first to find new dinosaurs in what came to be known as the Bone Wars. This fight between the two scientists lasted for over 30 years, ending in 1897 when Cope died after spending his entire fortune on the dinosaur hunt. Many valuable dinosaur specimens were damaged or destroyed due to the pair's rough methods: for example, their diggers often used dynamite to unearth bones. Modern paleontologists would find such methods crude and unacceptable, since blasting easily destroys fossil and stratigraphic evidence. Despite their unrefined methods, the contributions of Cope and Marsh to paleontology were vast: Marsh unearthed 86 new species of dinosaur and Cope discovered 56, a total of 142 new species. Cope's collection is now at the American Museum of Natural History in New York City, while Marsh's is at the Peabody Museum of Natural History at Yale University.
"Dinosaur renaissance" and beyond
World War II caused a pause in palaeontological research; after the war, research attention was also diverted increasingly to fossil mammals rather than dinosaurs, which were seen as sluggish and cold-blooded. At the end of the 1960s, however, the field of dinosaur research experienced a surge in activity that remains ongoing. Several seminal studies led to this activity. First, John Ostrom discovered the bird-like dromaeosaurid theropod Deinonychus and described it in 1969. Its anatomy indicated that it was an active predator that was likely warm-blooded, in marked contrast to the then-prevailing image of dinosaurs. Concurrently, Robert T. Bakker published a series of studies that likewise argued for active lifestyles in dinosaurs based on anatomical and ecological evidence, which were subsequently summarized in his 1986 book The Dinosaur Heresies.
New revelations were supported by an increase in dinosaur discoveries. Major new dinosaur discoveries have been made by paleontologists working in previously unexplored regions, including India, South America, Madagascar, Antarctica, and most significantly China. Across theropods, sauropodomorphs, and ornithischians, the number of named genera began to increase exponentially in the 1990s. By 2008, over 30 new species of dinosaurs were being named each year. Sauropodomorphs, at least, experienced a further increase in the number of named species in the 2010s, with an average of 9.3 new species named each year between 2009 and 2020. As a consequence, more sauropodomorphs were named between 1990 and 2020 than in all previous years combined. These new localities also led to improvements in overall specimen quality, with new species being increasingly named not on scrappy fossils but on more complete skeletons, sometimes from multiple individuals. Better specimens also led to new species being invalidated less frequently. Asian localities have produced the most complete theropod specimens, while North American localities have produced the most complete sauropodomorph specimens.
Prior to the dinosaur renaissance, dinosaurs were mostly classified using the traditional rank-based system of Linnaean taxonomy. The renaissance was also accompanied by the increasingly widespread application of cladistics, a more objective method of classification based on ancestry and shared traits, which has proved tremendously useful in the study of dinosaur systematics and evolution. Cladistic analysis, among other techniques, helps to compensate for an often incomplete and fragmentary fossil record. Reference books summarizing the state of dinosaur research, such as David B. Weishampel and colleagues' The Dinosauria, made knowledge more accessible and spurred further interest in dinosaur research. The release of the first and second editions of The Dinosauria in 1990 and 2004, and of a review paper by Paul Sereno in 1998, were accompanied by increases in the number of published phylogenetic trees for dinosaurs.
Soft tissue and molecular preservation
Dinosaur fossils are not limited to bones, but also include imprints or mineralized remains of skin coverings, organs, and other tissues. Of these, skin coverings based on keratin proteins are most easily preserved because of their cross-linked, hydrophobic molecular structure. Fossils of keratin-based skin coverings or bony skin coverings are known from most major groups of dinosaurs. Dinosaur fossils with scaly skin impressions have been found since the 19th century. Samuel Beckles discovered a sauropod forelimb with preserved skin in 1852 that was incorrectly attributed to a crocodile; it was correctly attributed by Marsh in 1888 and subject to further study by Reginald Hooley in 1917. Among ornithischians, in 1884 Jacob Wortman found skin impressions on the first known specimen of Edmontosaurus annectens, which were largely destroyed during the specimen's excavation. Owen and Hooley subsequently described skin impressions of Hypsilophodon and Iguanodon in 1885 and 1917. Since then, scale impressions have been most frequently found among hadrosaurids, where the impressions are known from nearly the entire body across multiple specimens.
Starting from the 1990s, major discoveries of exceptionally preserved fossils in deposits known as conservation Lagerstätten contributed to research on dinosaur soft tissues. Chief among these were the rocks that produced the Jehol (Early Cretaceous) and Yanliao (Mid-to-Late Jurassic) biotas of northeastern China, from which hundreds of dinosaur specimens bearing impressions of feather-like structures (both in taxa closely related to birds and in others) have been described by Xing Xu and colleagues. In living reptiles and mammals, pigment-storing cellular structures known as melanosomes are partially responsible for producing colouration. Both chemical traces of melanin and characteristically shaped melanosomes have been reported from feathers and scales of Jehol and Yanliao dinosaurs, including both theropods and ornithischians. This has enabled multiple full-body reconstructions of dinosaur colouration, such as for Sinosauropteryx and Psittacosaurus by Jakob Vinther and colleagues, and similar techniques have also been extended to dinosaur fossils from other localities. (However, some researchers have also suggested that fossilized melanosomes represent bacterial remains.) Stomach contents in some Jehol and Yanliao dinosaurs closely related to birds have also provided indirect indications of diet and digestive system anatomy (e.g., crops). More concrete evidence of internal anatomy has been reported in Scipionyx from the Pietraroja Plattenkalk of Italy. It preserves portions of the intestines, colon, liver, muscles, and windpipe.
Concurrently, a line of work led by Mary Higby Schweitzer, Jack Horner, and colleagues reported various occurrences of preserved soft tissues and proteins within dinosaur bone fossils. Various mineralized structures that likely represented red blood cells and collagen fibres had been found by Schweitzer and others in tyrannosaurid bones as early as 1991. However, in 2005, Schweitzer and colleagues reported that a femur of Tyrannosaurus preserved soft, flexible tissue within, including blood vessels, bone matrix, and connective tissue (bone fibers) that had retained their microscopic structure. This discovery suggested that original soft tissues could be preserved over geological time, with multiple mechanisms having been proposed. Later, in 2009, Schweitzer and colleagues reported that a Brachylophosaurus femur preserved similar microstructures, and immunohistochemical techniques (based on antibody binding) demonstrated the presence of proteins such as collagen, elastin, and laminin. Both specimens yielded collagen protein sequences that were viable for molecular phylogenetic analyses, which grouped them with birds as would be expected. The extraction of fragmentary DNA has also been reported for both of these fossils, along with a specimen of Hypacrosaurus. In 2015, Sergio Bertazzo and colleagues reported the preservation of collagen fibres and red blood cells in eight Cretaceous dinosaur specimens that did not show any signs of exceptional preservation, indicating that soft tissue may be preserved more commonly than previously thought. Suggestions that these structures represent bacterial biofilms have been rejected, but cross-contamination remains a possibility that is difficult to detect.
Evolutionary history
Origins and early evolution
Dinosaurs diverged from their archosaur ancestors during the Middle to Late Triassic epochs, roughly 20 million years after the devastating Permian–Triassic extinction event wiped out an estimated 96% of all marine species and 70% of terrestrial vertebrate species approximately 252 million years ago. The oldest dinosaur fossils known from substantial remains date to the Carnian epoch of the Triassic period and have been found primarily in the Ischigualasto and Santa Maria Formations of Argentina and Brazil, and the Pebbly Arkose Formation of Zimbabwe.
The Ischigualasto Formation (radiometrically dated at 231–230 million years old) has produced the early saurischian Eoraptor, originally considered a member of the Herrerasauridae but now considered to be an early sauropodomorph, along with the herrerasaurids Herrerasaurus and Sanjuansaurus, and the sauropodomorphs Chromogisaurus, Eodromaeus, and Panphagia. Eoraptor's likely resemblance to the common ancestor of all dinosaurs suggests that the first dinosaurs would have been small, bipedal predators. The Santa Maria Formation (radiometrically dated to be older, at 233.23 million years old) has produced the herrerasaurids Gnathovorax and Staurikosaurus, along with the sauropodomorphs Bagualosaurus, Buriolestes, Guaibasaurus, Macrocollum, Nhandumirim, Pampadromaeus, Saturnalia, and Unaysaurus. The Pebbly Arkose Formation, which is of uncertain age but was likely comparable to the other two, has produced the sauropodomorph Mbiresaurus, along with an unnamed herrerasaurid.
Less well-preserved remains of the sauropodomorphs Jaklapallisaurus and Nambalia, along with the early saurischian Alwalkeria, are known from the Upper Maleri and Lower Maleri Formations of India. The Carnian-aged Chañares Formation of Argentina preserves primitive, dinosaur-like ornithodirans such as Lagosuchus and Lagerpeton, making it another important site for understanding dinosaur evolution. These ornithodirans support the model of early dinosaurs as small, bipedal predators. Dinosaurs may have appeared as early as the Anisian epoch of the Triassic, approximately 243 million years ago, which is the age of Nyasasaurus from the Manda Formation of Tanzania. However, its known fossils are too fragmentary to identify it as a dinosaur or only a close relative. The referral of the Manda Formation to the Anisian is also uncertain. Regardless, dinosaurs existed alongside non-dinosaurian ornithodirans for a period of time, with estimates ranging from 5–10 million years to 21 million years.
When dinosaurs appeared, they were not the dominant terrestrial animals. The terrestrial habitats were occupied by various types of archosauromorphs and therapsids, like cynodonts and rhynchosaurs. Their main competitors were the pseudosuchians, such as aetosaurs, ornithosuchids and rauisuchians, which were more successful than the dinosaurs. Most of these other animals became extinct in the Triassic, in one of two events. First, at about 215 million years ago, a variety of basal archosauromorphs, including the protorosaurs, became extinct. This was followed by the Triassic–Jurassic extinction event (about 201 million years ago), that saw the end of most of the other groups of early archosaurs, like aetosaurs, ornithosuchids, phytosaurs, and rauisuchians. Rhynchosaurs and dicynodonts survived (at least in some areas) at least as late as early –mid Norian and late Norian or earliest Rhaetian stages, respectively, and the exact date of their extinction is uncertain. These losses left behind a land fauna of crocodylomorphs, dinosaurs, mammals, pterosaurians, and turtles. The first few lines of early dinosaurs diversified through the Carnian and Norian stages of the Triassic, possibly by occupying the niches of the groups that became extinct. Also notably, there was a heightened rate of extinction during the Carnian pluvial event.
Evolution and paleobiogeography
Dinosaur evolution after the Triassic followed changes in vegetation and the location of continents. In the Late Triassic and Early Jurassic, the continents were connected as the single landmass Pangaea, and there was a worldwide dinosaur fauna mostly composed of coelophysoid carnivores and early sauropodomorph herbivores. Gymnosperm plants (particularly conifers), a potential food source, radiated in the Late Triassic. Early sauropodomorphs did not have sophisticated mechanisms for processing food in the mouth, and so must have employed other means of breaking down food farther along the digestive tract. The general homogeneity of dinosaurian faunas continued into the Middle and Late Jurassic, where most localities had predators consisting of ceratosaurians, megalosauroids, and allosauroids, and herbivores consisting of stegosaurian ornithischians and large sauropods. Examples of this include the Morrison Formation of North America and Tendaguru Beds of Tanzania. Dinosaurs in China show some differences, with specialized metriacanthosaurid theropods and unusual, long-necked sauropods like Mamenchisaurus. Ankylosaurians and ornithopods were also becoming more common, but primitive sauropodomorphs had become extinct. Conifers and pteridophytes were the most common plants. Sauropods, like earlier sauropodomorphs, were not oral processors, but ornithischians were evolving various means of dealing with food in the mouth, including potential cheek-like organs to keep food in the mouth, and jaw motions to grind food. Another notable evolutionary event of the Jurassic was the appearance of true birds, descended from maniraptoran coelurosaurians.
By the Early Cretaceous and the ongoing breakup of Pangaea, dinosaurs were becoming strongly differentiated by landmass. The earliest part of this time saw the spread of ankylosaurians, iguanodontians, and brachiosaurids through Europe, North America, and northern Africa. These were later supplemented or replaced in Africa by large spinosaurid and carcharodontosaurid theropods, and rebbachisaurid and titanosaurian sauropods, also found in South America. In Asia, maniraptoran coelurosaurians like dromaeosaurids, troodontids, and oviraptorosaurians became the common theropods, and ankylosaurids and early ceratopsians like Psittacosaurus became important herbivores. Meanwhile, Australia was home to a fauna of basal ankylosaurians, hypsilophodonts, and iguanodontians. The stegosaurians appear to have gone extinct at some point in the late Early Cretaceous or early Late Cretaceous. A major change in the Early Cretaceous, which would be amplified in the Late Cretaceous, was the evolution of flowering plants. At the same time, several groups of dinosaurian herbivores evolved more sophisticated ways to orally process food. Ceratopsians developed a method of slicing with teeth stacked on each other in batteries, and iguanodontians refined a method of grinding with dental batteries, taken to its extreme in hadrosaurids. Some sauropods also evolved tooth batteries, best exemplified by the rebbachisaurid Nigersaurus.
There were three general dinosaur faunas in the Late Cretaceous. In the northern continents of North America and Asia, the major theropods were tyrannosaurids and various types of smaller maniraptoran theropods, with a predominantly ornithischian herbivore assemblage of hadrosaurids, ceratopsians, ankylosaurids, and pachycephalosaurians. In the southern continents that had made up the now-splitting supercontinent Gondwana, abelisaurids were the common theropods, and titanosaurian sauropods the common herbivores. Finally, in Europe, dromaeosaurids, rhabdodontid iguanodontians, nodosaurid ankylosaurians, and titanosaurian sauropods were prevalent. Flowering plants were greatly radiating, with the first grasses appearing by the end of the Cretaceous. Grinding hadrosaurids and shearing ceratopsians became very diverse across North America and Asia. Theropods were also radiating as herbivores or omnivores, with therizinosaurians and ornithomimosaurians becoming common.
The Cretaceous–Paleogene extinction event, which occurred approximately 66 million years ago at the end of the Cretaceous, caused the extinction of all dinosaur groups except for the neornithine birds. Some other diapsid groups, including crocodilians, dyrosaurs, sebecosuchians, turtles, lizards, snakes, sphenodontians, and choristoderans, also survived the event.
The surviving lineages of neornithine birds, including the ancestors of modern ratites, ducks and chickens, and a variety of waterbirds, diversified rapidly at the beginning of the Paleogene period, entering ecological niches left vacant by the extinction of Mesozoic dinosaur groups such as the arboreal enantiornithines, aquatic hesperornithines, and even the larger terrestrial theropods (in the form of Gastornis, eogruiids, bathornithids, ratites, geranoidids, mihirungs, and "terror birds"). It is often stated that mammals out-competed the neornithines for dominance of most terrestrial niches but many of these groups co-existed with rich mammalian faunas for most of the Cenozoic Era. Terror birds and bathornithids occupied carnivorous guilds alongside predatory mammals, and ratites are still fairly successful as midsized herbivores; eogruiids similarly lasted from the Eocene to Pliocene, becoming extinct only very recently after over 20 million years of co-existence with many mammal groups.
Classification
Dinosaurs belong to a group known as archosaurs, which also includes modern crocodilians. Within the archosaur group, dinosaurs are differentiated most noticeably by their gait. Dinosaur legs extend directly beneath the body, whereas the legs of lizards and crocodilians sprawl out to either side.
Collectively, dinosaurs as a clade are divided into two primary branches, Saurischia and Ornithischia. Saurischia includes those taxa sharing a more recent common ancestor with birds than with Ornithischia, while Ornithischia includes all taxa sharing a more recent common ancestor with Triceratops than with Saurischia. Anatomically, these two groups can be distinguished most noticeably by their pelvic structure. Early saurischians—"lizard-hipped", from the Greek () meaning "lizard" and () meaning "hip joint"—retained the hip structure of their ancestors, with a pubis bone directed cranially, or forward. This basic form was modified by rotating the pubis backward to varying degrees in several groups (Herrerasaurus, therizinosauroids, dromaeosaurids, and birds). Saurischia includes the theropods (exclusively bipedal and with a wide variety of diets) and sauropodomorphs (long-necked herbivores which include advanced, quadrupedal groups).
By contrast, ornithischians—"bird-hipped", from the Greek ornitheios (ὀρνίθειος) meaning "of a bird" and ischion (ἰσχίον) meaning "hip joint"—had a pelvis that superficially resembled a bird's pelvis: the pubic bone was oriented caudally (rear-pointing). Unlike birds, the ornithischian pubis also usually had an additional forward-pointing process. Ornithischia includes a variety of species that were primarily herbivores.
Despite the terms "bird hip" (Ornithischia) and "lizard hip" (Saurischia), birds are not part of Ornithischia. Birds instead belong to Saurischia, the "lizard-hipped" dinosaurs—birds evolved from earlier dinosaurs with "lizard hips".
Taxonomy
The following is a simplified classification of dinosaur groups based on their evolutionary relationships, and those of the main dinosaur groups Theropoda, Sauropodomorpha and Ornithischia, compiled by Justin Tweet. Further details and other hypotheses of classification may be found on individual articles.
Dinosauria
†Ornithischia ("bird-hipped"; diverse bipedal and quadrupedal herbivores)
†Saphornithischia ("true" ornithischians)
†Heterodontosauridae (small herbivores/omnivores with prominent canine-like teeth)
†Genasauria ("cheeked lizards")
†Thyreophora (armored dinosaurs; bipeds and quadrupeds)
†Eurypoda (heavy, quadrupedal thyreophorans)
†Stegosauria (spikes and plates as primary armor)
†Huayangosauridae (small stegosaurs with flank osteoderms and tail clubs)
†Stegosauridae (large stegosaurs)
†Ankylosauria (scutes as primary armor)
†Parankylosauria (small, southern ankylosaurs with macuahuitl-like tails)
†Nodosauridae (mostly spiky, club-less ankylosaurs)
†Ankylosauridae (characterized by flat scutes)
†Ankylosaurinae (club-tailed ankylosaurids)
†Neornithischia ("new ornithischians")
†Pyrodontia ("fire teeth")
†Thescelosauridae ("wondrous lizards")
†Orodrominae (burrowers)
†Thescelosaurinae (large thescelosaurids)
†Cerapoda ("horned feet")
†Marginocephalia (characterized by a cranial growth)
†Pachycephalosauria (bipeds with domed or knobby growth on skulls)
†Ceratopsia (bipeds and quadrupeds; many had neck frills and horns)
†Chaoyangsauridae (small, frill-less basal ceratopsians)
†Neoceratopsia ("new ceratopsians")
†Leptoceratopsidae (little to no frills, hornless, with robust jaws)
†Protoceratopsidae (basal ceratopsians with small frills and stubby horns)
†Ceratopsoidea (large-horned ceratopsians)
†Ceratopsidae (large, elaborately ornamented ceratopsians)
†Chasmosaurinae (ceratopsids with enlarged brow horns)
†Centrosaurinae (ceratopsids mostly characterized by frill and nasal ornamentation)
†Nasutoceratopsini (centrosaurines with enlarged nasal cavities)
†Centrosaurini (centrosaurines with enlarged nasal horns)
†Pachyrhinosaurini (mostly had nasal bosses instead of horns)
†Ornithopoda (various sizes; bipeds and quadrupeds; evolved a method of chewing using skull flexibility and numerous teeth)
†Hypsilophodontidae (small European neornithischians)
†Iguanodontia ("iguana teeth"; advanced ornithopods)
†Rhabdodontomorpha (with distinctive dentition)
†Tenontosauridae (North American rhabdodontomorphs; bipeds and quadrupeds)
†Rhabdodontidae (European rhabdodontomorphs)
†Euiguanodontia ("true iguanodonts")
†Elasmaria (mostly southern ornithopods with mineralized plates along the ribs; may be thescelosaurids)
†Dryomorpha (Dryosaurus and more advanced ornithopods)
†Dryosauridae (mid-sized, small headed)
†Ankylopollexia (early members mid-sized, stocky)
†Styracosterna ("spiked sterna")
†Hadrosauriformes (ancestrally had a thumb spike; large quadrupedal herbivores, with teeth merged into dental batteries)
†Hadrosauromorpha (hadrosaurids and their closest relatives)
†Hadrosauridae ("duck-billed dinosaurs"; often with crests)
†Saurolophinae (hadrosaurids with solid, small, no crests)
†Brachylophosaurini (short-crested)
†Kritosaurini (enlarged, solid nasal crests)
†Saurolophini (small, spike-like crests)
†Edmontosaurini (flat-headed saurolophines)
†Lambeosaurinae (hadrosaurids often with hollow crests)
†Aralosaurini (solid-crested)
†Tsintaosaurini (vertical, tube-like crests)
†Parasaurolophini (long, backwards-arcing crests)
†Lambeosaurini (usually rounded crests)
Saurischia
†Herrerasauridae (early bipedal carnivores)
†Sauropodomorpha (herbivores with small heads, long necks, and long tails)
†Unaysauridae (primitive, strictly bipedal "prosauropods")
†Plateosauria (diverse; bipeds and quadrupeds)
†Massopoda ("heavy feet")
†Massospondylidae (long-necked, primitive sauropodomorphs)
†Riojasauridae (large, primitive sauropodomorphs)
†Sauropodiformes (heavy, bipeds and quadrupeds)
†Sauropoda (very large and heavy; quadrupedal)
†Lessemsauridae (gigantic yet lacking several weight-saving adaptations)
†Gravisauria ("heavy lizards")
†Eusauropoda ("true sauropods")
†Turiasauria (often large, widespread sauropods)
†Neosauropoda ("new sauropods"; columnar limbs)
†Diplodocoidea (skulls and tails elongated; teeth typically narrow and pencil-like)
†Rebbachisauridae (short-necked, low-browsing diplodocoids often with high backs)
†Flagellicaudata (whip-tailed)
†Dicraeosauridae (small, short-necked diplodocoids with enlarged cervical and dorsal vertebrae)
†Diplodocidae (extremely long-necked)
†Apatosaurinae (robust cervical vertebrae)
†Diplodocinae (long, thin necks)
†Macronaria (boxy skulls; spoon- or pencil-shaped teeth)
†Titanosauriformes ("titan lizard forms")
†Brachiosauridae (long-necked, long-armed macronarians)
†Somphospondyli ("porous vertebrae")
†Euhelopodidae (stocky, mostly Asian)
†Diamantinasauria (horse-like skulls; restricted to the Southern Hemisphere; may be titanosaurs)
†Titanosauria (diverse; stocky, with wide hips; most common in the Late Cretaceous of southern continents)
Theropoda (carnivorous)
Neotheropoda ("new theropods")
†Coelophysoidea (early theropods; includes Coelophysis and close relatives)
†"Dilophosaur-grade neotheropods" (larger kink-snouted dinosaurs)
Averostra ("bird snouts")
†Ceratosauria (generally elaborately horned carnivores that existed from the Jurassic to Cretaceous periods, originally included Coelophysoidea)
†Ceratosauridae (ceratosaurs with large teeth)
†Abelisauroidea (ceratosaurs exemplified by reduced arms and hands)
†Abelisauridae (large abelisauroids with short arms and oftentimes elaborate facial ornamentation)
†Noasauridae (diverse, generally light theropods; may include several obscure taxa)
†Elaphrosaurinae (bird-like; omnivorous as juveniles but herbivorous as adults)
†Noasaurinae (small carnivores)
Tetanurae (stiff-tailed dinosaurs)
†Megalosauroidea (early group of large carnivores)
†Piatnitzkysauridae (small basal megalosauroids endemic to the Americas)
†Megalosauridae (large megalosauroids with powerful arms and hands)
†Spinosauridae (crocodile-like, semiaquatic carnivores)
Avetheropoda ("bird theropods")
†Carnosauria (large meat-eating dinosaurs; megalosauroids sometimes included)
†Metriacanthosauridae (primitive Asian allosauroids)
†Allosauridae (Allosaurus and its very closest relatives)
†Carcharodontosauridae (robust allosauroids; includes some of the largest purely terrestrial carnivores)
Coelurosauria (feathered theropods, with a range of body sizes and niches)
†Megaraptora? (theropods with large hand claws; potentially tyrannosauroids or neovenatorids)
†"Nexus of basal coelurosaurs" (used by Tweet to denote well-known taxa with unstable positions at the base of Coelurosauria)
Tyrannoraptora ("tyrant thieves")
†Tyrannosauroidea (mostly large, primitive coelurosaurs)
†Proceratosauridae (tyrannosauroids with head crests)
†Tyrannosauridae (Tyrannosaurus and close relatives)
Maniraptoriformes (bird-like dinosaurs)
†Ornithomimosauria (small-headed, mostly toothless; omnivorous or possibly herbivorous)
†Ornithomimidae (very ostrich-like dinosaurs)
Maniraptora (dinosaurs with pennaceous feathers)
†Alvarezsauroidea (small hunters with reduced forelimbs)
†Alvarezsauridae (insectivores with only one enlarged digit)
†Therizinosauria (tall, long-necked theropods; omnivores and herbivores)
†Therizinosauroidea (larger therizinosaurs)
†Therizinosauridae (sloth-like herbivores, often with enlarged claws)
†Oviraptorosauria (omnivorous, beaked dinosaurs)
†Caudipteridae (bird-like, basal oviraptorosaurs)
†Caenagnathoidea (cassowary-like oviraptorosaurs)
†Caenagnathidae (toothless oviraptorosaurs known from North America and Asia)
†Oviraptoridae (characterized by two bony projections at the back of the mouth; exclusive to Asia)
Paraves (avialans and their closest relatives)
†Scansoriopterygidae (small tree-climbing theropods with membranous wings)
†Deinonychosauria (toe-clawed dinosaurs; may not form a natural group)
†Archaeopterygidae (small, winged theropods or primitive birds)
†Troodontidae (omnivores; enlarged brain cavities)
†Dromaeosauridae ("raptors")
†Microraptoria (characterized by large wings on both the arms and legs; may have been capable of powered flight)
†Eudromaeosauria (hunters with greatly enlarged sickle claws)
†Unenlagiidae (piscivores; may be dromaeosaurids)
†Halszkaraptorinae (duck-like; potentially semiaquatic)
†Unenlagiinae (long-snouted)
Avialae (modern birds and extinct relatives)
Timeline of major groups
Timeline of major dinosaur groups.
Paleobiology
Knowledge about dinosaurs is derived from a variety of fossil and non-fossil records, including fossilized bones, feces, trackways, gastroliths, feathers, impressions of skin, internal organs and other soft tissues. Many fields of study contribute to our understanding of dinosaurs, including physics (especially biomechanics), chemistry, biology, and the Earth sciences (of which paleontology is a sub-discipline). Two topics of particular interest and study have been dinosaur size and behavior.
Size
Current evidence suggests that dinosaur average size varied through the Triassic, Early Jurassic, Late Jurassic, and Cretaceous. Predatory theropod dinosaurs, which occupied most terrestrial carnivore niches during the Mesozoic, most often fall into the 100-to-1,000-kilogram category when sorted by estimated weight into categories based on order of magnitude, whereas recent predatory carnivoran mammals peak in the 10-to-100-kilogram category. The mode of Mesozoic dinosaur body masses is between one and ten metric tons. This contrasts sharply with the average size of Cenozoic mammals, estimated by the National Museum of Natural History as about 2 to 5 kilograms.
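The order-of-magnitude grouping used in such comparisons is straightforward to make concrete. Below is a minimal Python sketch; the mass values are hypothetical placeholders for illustration, not estimates taken from this article.

```python
import math
from collections import Counter

# Hypothetical body-mass estimates in kilograms (illustrative values only).
masses_kg = [2.1, 15.0, 85.0, 230.0, 540.0, 1200.0, 6000.0, 9500.0, 40000.0]

# Bin each mass by order of magnitude: 100-1000 kg, 1000-10000 kg, etc.
def mass_category(mass_kg: float) -> str:
    exponent = math.floor(math.log10(mass_kg))
    return f"{10**exponent:g}-{10**(exponent + 1):g} kg"

categories = Counter(mass_category(m) for m in masses_kg)
print(categories.most_common())  # the modal bin is the "peak" category
```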
The sauropods were the largest and heaviest dinosaurs. For much of the dinosaur era, the smallest sauropods were larger than anything else in their habitat, and the largest was an order of magnitude more massive than anything else that has since walked the Earth. Giant prehistoric mammals such as Paraceratherium (the largest land mammal ever) were dwarfed by the giant sauropods, and only modern whales approach or surpass them in size. There are several proposed advantages for the large size of sauropods, including protection from predation, reduction of energy use, and longevity, but it may be that the most important advantage was dietary. Large animals are more efficient at digestion than small animals, because food spends more time in their digestive systems. This also permits them to subsist on food with lower nutritive value than smaller animals. Sauropod remains are mostly found in rock formations interpreted as dry or seasonally dry, and the ability to eat large quantities of low-nutrient browse would have been advantageous in such environments.
Largest and smallest
Scientists will probably never be certain of the largest and smallest dinosaurs to have ever existed. This is because only a tiny percentage of animals were ever fossilized and most of these remain buried in the earth. Few non-avian dinosaur specimens that are recovered are complete skeletons, and impressions of skin and other soft tissues are rare. Rebuilding a complete skeleton by comparing the size and morphology of bones to those of similar, better-known species is an inexact art, and reconstructing the muscles and other organs of the living animal is, at best, a process of educated guesswork.
The tallest and heaviest dinosaur known from good skeletons is Giraffatitan brancai (previously classified as a species of Brachiosaurus). Its remains were discovered in Tanzania between 1907 and 1912. Bones from several similar-sized individuals were incorporated into the skeleton now mounted and on display at the Museum für Naturkunde in Berlin; this mount is 12 metres (39 ft) tall and 21.8–22.5 metres (72–74 ft) long, and would have belonged to an animal that weighed between 30,000 and 60,000 kilograms (66,000 and 132,000 lb). The longest complete dinosaur is the 27-metre-long (89 ft) Diplodocus, which was discovered in Wyoming in the United States and displayed in Pittsburgh's Carnegie Museum of Natural History in 1907. The longest dinosaur known from good fossil material is Patagotitan: the skeleton mount in the American Museum of Natural History in New York is 37 metres (121 ft) long. The Museo Municipal Carmen Funes in Plaza Huincul, Argentina, has an Argentinosaurus reconstructed skeleton mount that is 39.7 metres (130 ft) long.
There were larger dinosaurs, but knowledge of them is based entirely on a small number of fragmentary fossils. Most of the largest herbivorous specimens on record were discovered in the 1970s or later, and include the massive Argentinosaurus, which may have weighed 80,000 to 100,000 kilograms and reached lengths of 30 to 40 metres (98 to 131 ft); some of the longest were the 33.5-metre-long (110 ft) Diplodocus hallorum (formerly Seismosaurus), the 33-to-34-metre-long (108 to 112 ft) Supersaurus, and the 37-metre-long (121 ft) Patagotitan; and the tallest was the 18-metre-tall (59 ft) Sauroposeidon, which could have reached a sixth-floor window. A few other dinosaurs have been considered contenders for the heaviest or longest of all. The most famous is Amphicoelias fragillimus, known only from a now-lost partial vertebral neural arch described in 1878. Extrapolating from the illustration of this bone, the animal may have been 58 metres (190 ft) long and weighed over 120,000 kilograms. However, recent research has moved Amphicoelias from the long, gracile diplodocids to the shorter but much stockier rebbachisaurids; now renamed Maraapunisaurus, this sauropod is reconstructed as considerably shorter, though still immensely heavy. Another contender for the title is Bruhathkayosaurus, a controversial taxon whose fossils were recently confirmed to have existed after archived photographs were uncovered. Bruhathkayosaurus was a titanosaur and would most likely have weighed more than even Maraapunisaurus. Size estimates published in 2023 placed this sauropod at lengths of up to about 44 metres (144 ft) and a colossal weight of around 110–170 metric tons; if these upper estimates hold true, Bruhathkayosaurus would have rivaled the blue whale and Perucetus colossus as one of the largest animals ever to have existed.
The largest carnivorous dinosaur was Spinosaurus, reaching a length of 12.6 to 18 metres (41 to 59 ft) and weighing 7 to 20.9 metric tons. Other large carnivorous theropods included Giganotosaurus, Carcharodontosaurus, and Tyrannosaurus. Therizinosaurus and Deinocheirus were among the tallest of the theropods. The largest ornithischian dinosaur was probably the hadrosaurid Shantungosaurus giganteus, which measured 16.6 metres (54 ft); the largest individuals may have weighed as much as 16 metric tons.
The smallest dinosaur known is the bee hummingbird, with a length of only 5 centimetres (2.0 in) and a mass of around 1.8 grams (0.063 oz). The smallest known non-avialan dinosaurs were about the size of pigeons and were those theropods most closely related to birds. For example, Anchiornis huxleyi is currently the smallest non-avialan dinosaur described from an adult specimen, with an estimated weight of 110 grams (3.9 oz) and a total skeletal length of 34 centimetres (13 in). The smallest herbivorous non-avialan dinosaurs included Microceratus and Wannanosaurus, at about 60 centimetres (2.0 ft) long each.
Behavior
Many modern birds are highly social, often found living in flocks. There is general agreement that some behaviors that are common in birds, as well as in crocodilians (closest living relatives of birds), were also common among extinct dinosaur groups. Interpretations of behavior in fossil species are generally based on the pose of skeletons and their habitat, computer simulations of their biomechanics, and comparisons with modern animals in similar ecological niches.
The first potential evidence for herding or flocking as a widespread behavior common to many dinosaur groups in addition to birds was the 1878 discovery of 31 Iguanodon, ornithischians that were then thought to have perished together in Bernissart, Belgium, after they fell into a deep, flooded sinkhole and drowned. Other mass-death sites have been discovered subsequently. Those, along with multiple trackways, suggest that gregarious behavior was common in many early dinosaur species. Trackways of hundreds or even thousands of herbivores indicate that duck-billed dinosaurs (hadrosaurids) may have moved in great herds, like the American bison or the African springbok. Sauropod tracks document that these animals traveled in groups composed of several different species, at least in Oxfordshire, England, although there is no evidence for specific herd structures. Congregating into herds may have evolved for defense, for migratory purposes, or to provide protection for young. There is evidence that many types of slow-growing dinosaurs, including various theropods, sauropods, ankylosaurians, ornithopods, and ceratopsians, formed aggregations of immature individuals. One example is a site in Inner Mongolia that has yielded remains of over 20 Sinornithomimus, from one to seven years old. This assemblage is interpreted as a social group that was trapped in mud. The interpretation of dinosaurs as gregarious has also extended to depicting carnivorous theropods as pack hunters working together to bring down large prey. However, this lifestyle is uncommon among modern birds, crocodiles, and other reptiles, and the taphonomic evidence suggesting mammal-like pack hunting in such theropods as Deinonychus and Allosaurus can also be interpreted as the results of fatal disputes between feeding animals, as is seen in many modern diapsid predators.
The crests and frills of some dinosaurs, like the marginocephalians, theropods and lambeosaurines, may have been too fragile to be used for active defense, and so they were likely used for sexual or aggressive displays, though little is known about dinosaur mating and territorialism. Head wounds from bites suggest that theropods, at least, engaged in active aggressive confrontations.
From a behavioral standpoint, one of the most valuable dinosaur fossils was discovered in the Gobi Desert in 1971. It included a Velociraptor attacking a Protoceratops, providing evidence that dinosaurs did indeed attack each other. Additional evidence for attacking live prey is the partially healed tail of an Edmontosaurus, a hadrosaurid dinosaur; the tail is damaged in such a way that shows the animal was bitten by a tyrannosaur but survived. Cannibalism amongst some species of dinosaurs was confirmed by tooth marks found in Madagascar in 2003, involving the theropod Majungasaurus.
Comparisons between the scleral rings of dinosaurs and modern birds and reptiles have been used to infer daily activity patterns of dinosaurs. Although it has been suggested that most dinosaurs were active during the day, these comparisons have shown that small predatory dinosaurs such as dromaeosaurids, Juravenator, and Megapnosaurus were likely nocturnal. Large and medium-sized herbivorous and omnivorous dinosaurs such as ceratopsians, sauropodomorphs, hadrosaurids, and ornithomimosaurs may have been cathemeral, active during short intervals throughout the day, although the small ornithischian Agilisaurus was inferred to be diurnal.
Based on fossil evidence from dinosaurs such as Oryctodromeus, some ornithischian species seem to have led a partially fossorial (burrowing) lifestyle. Many modern birds are arboreal (tree climbing), and this was also true of many Mesozoic birds, especially the enantiornithines. While some early bird-like species may have already been arboreal as well (including dromaeosaurids such as Microraptor), most non-avialan dinosaurs seem to have relied on land-based locomotion. A good understanding of how dinosaurs moved on the ground is key to models of dinosaur behavior; the science of biomechanics, pioneered by Robert McNeill Alexander, has provided significant insight in this area. For example, studies of the forces exerted by muscles and gravity on dinosaurs' skeletal structure have investigated how fast dinosaurs could run, whether diplodocids could create sonic booms via whip-like tail snapping, and whether sauropods could float.
Communication
Modern birds communicate by visual and auditory signals, and the wide diversity of visual display structures among fossil dinosaur groups, such as horns, frills, crests, sails, and feathers, suggests that visual communication has always been important in dinosaur biology. Reconstruction of the plumage color of Anchiornis suggests the importance of color in visual communication in non-avian dinosaurs. Vocalization in non-avian dinosaurs is less certain. In birds, the larynx plays no role in sound production. Instead, birds vocalize with a novel organ, the syrinx, farther down the trachea. The earliest remains of a syrinx were found in a specimen of the duck-like Vegavis iaai dated to 69–66 million years ago, and this organ is unlikely to have existed in non-avian dinosaurs.
On the basis that non-avian dinosaurs did not have syrinxes and that their next closest living relatives, crocodilians, use the larynx, Phil Senter, a paleontologist, has suggested that the non-avians could not vocalize, because the common ancestor would have been mute. He states that they relied mostly on visual displays and possibly non-vocal sounds, such as hissing, jaw-grinding or -clapping, splashing, and wing-beating (possible in winged maniraptoran dinosaurs). Other researchers have countered that vocalizations also exist in turtles, the closest relatives of archosaurs, suggesting that the trait is ancestral to their lineage. In addition, vocal communication in dinosaurs is indicated by the development of advanced hearing in nearly all major groups. Hence the syrinx may have supplemented and then replaced the larynx as a vocal organ, without a "silent period" in bird evolution.
In 2023, a fossilized larynx was described from a specimen of the ankylosaurid Pinacosaurus. The structure was composed of cricoid and arytenoid cartilages, similar to those of non-avian reptiles; but the mobile cricoid–arytenoid joint and long arytenoid cartilages would have allowed air-flow control similar to that of birds, meaning the animal could have made bird-like vocalizations. In addition, the cartilages were ossified, implying that laryngeal ossification is a feature of some non-avian dinosaurs. A 2016 study concludes that some dinosaurs may have produced closed-mouth vocalizations, such as cooing, hooting, and booming. These occur in both reptiles and birds and involve inflating the esophagus or tracheal pouches. Such vocalizations evolved independently in extant archosaurs numerous times, following increases in body size. The crests of some hadrosaurids and the nasal chambers of ankylosaurids may have been resonators.
Reproductive biology
All dinosaurs laid amniotic eggs. Dinosaur eggs were usually laid in a nest. Most species created somewhat elaborate nests, which could be cups, domes, plates, beds, scrapes, mounds, or burrows. Some species of modern bird have no nests; the cliff-nesting common guillemot lays its eggs on bare rock, and male emperor penguins keep eggs between their body and feet. Primitive birds and many non-avialan dinosaurs often laid eggs in communal nests, with males primarily incubating the eggs. While modern birds have only one functional oviduct and lay one egg at a time, more primitive birds and dinosaurs had two oviducts, like crocodiles. Some non-avialan dinosaurs, such as Troodon, exhibited iterative laying, where the adult might lay a pair of eggs every one or two days, and then ensured simultaneous hatching by delaying brooding until all eggs were laid.
When laying eggs, females grow a special type of bone between the hard outer bone and the marrow of their limbs. This medullary bone, which is rich in calcium, is used to make eggshells. A discovery of features in a Tyrannosaurus skeleton provided evidence of medullary bone in extinct dinosaurs and, for the first time, allowed paleontologists to establish the sex of a fossil dinosaur specimen. Further research has found medullary bone in the carnosaur Allosaurus and the ornithopod Tenontosaurus. Because the line of dinosaurs that includes Allosaurus and Tyrannosaurus diverged from the line that led to Tenontosaurus very early in the evolution of dinosaurs, this suggests that the production of medullary tissue is a general characteristic of all dinosaurs.
Another widespread trait among modern birds (but see below in regards to fossil groups and extant megapodes) is parental care for young after hatching. Jack Horner's 1978 discovery of a Maiasaura ("good mother lizard") nesting ground in Montana demonstrated that parental care continued long after birth among ornithopods. A specimen of the oviraptorid Citipati osmolskae was discovered in a chicken-like brooding position in 1993, which may indicate that they had begun using an insulating layer of feathers to keep the eggs warm. An embryo of the basal sauropodomorph Massospondylus was found without teeth, indicating that some parental care was required to feed the young dinosaurs. Trackways have also confirmed parental behavior among ornithopods from the Isle of Skye in northwestern Scotland.
However, there is ample evidence of precociality or superprecociality among many dinosaur species, particularly theropods. For instance, non-ornithuromorph birds have been abundantly demonstrated to have had slow growth rates, megapode-like egg-burying behavior, and the ability to fly soon after birth. Both Tyrannosaurus and Troodon had juveniles with clear superprecociality that likely occupied different ecological niches than the adults. Superprecociality has also been inferred for sauropods.
Genital structures are unlikely to fossilize as they lack scales that may allow preservation via pigmentation or residual calcium phosphate salts. In 2021, the best preserved specimen of a dinosaur's cloacal vent exterior was described for Psittacosaurus, demonstrating lateral swellings similar to crocodylian musk glands used in social displays by both sexes and pigmented regions which could also reflect a signalling function. However, this specimen on its own does not offer enough information to determine whether this dinosaur had sexual signalling functions; it only supports the possibility. Cloacal visual signalling can occur in either males or females in living birds, making it unlikely to be useful to determine sex for extinct dinosaurs.
Physiology
Because both modern crocodilians and birds have four-chambered hearts (albeit modified in crocodilians), it is likely that this is a trait shared by all archosaurs, including all dinosaurs. While all modern birds have high metabolisms and are endothermic ("warm-blooded"), a vigorous debate has been ongoing since the 1960s regarding how far back in the dinosaur lineage this trait extended. Various researchers have supported dinosaurs as being endothermic, ectothermic ("cold-blooded"), or somewhere in between. An emerging consensus among researchers is that, while different lineages of dinosaurs would have had different metabolisms, most of them had higher metabolic rates than other reptiles but lower than living birds and mammals, which is termed mesothermy by some. Evidence from crocodiles and their extinct relatives suggests that such elevated metabolisms could have developed in the earliest archosaurs, which were the common ancestors of dinosaurs and crocodiles.
After non-avian dinosaurs were discovered, paleontologists first posited that they were ectothermic. This was used to imply that the ancient dinosaurs were relatively slow, sluggish organisms, even though many modern reptiles are fast and light-footed despite relying on external sources of heat to regulate their body temperature. The idea of dinosaurs as ectothermic remained a prevalent view until Robert T. Bakker, an early proponent of dinosaur endothermy, published an influential paper on the topic in 1968. Bakker specifically used anatomical and ecological evidence to argue that sauropods, which had hitherto been depicted as sprawling aquatic animals with their tails dragging on the ground, were endotherms that lived vigorous, terrestrial lives. In 1972, Bakker expanded on his arguments based on energy requirements and predator-prey ratios. This was one of the seminal results that led to the dinosaur renaissance.
One of the greatest contributions to the modern understanding of dinosaur physiology has been paleohistology, the study of microscopic tissue structure in dinosaurs. From the 1960s forward, Armand de Ricqlès suggested that the presence of fibrolamellar bone—bony tissue with an irregular, fibrous texture and filled with blood vessels—was indicative of consistently fast growth and therefore endothermy. Fibrolamellar bone was common in both dinosaurs and pterosaurs, though not universally present. This has led to a significant body of work in reconstructing growth curves and modeling the evolution of growth rates across various dinosaur lineages, which has suggested overall that dinosaurs grew faster than living reptiles. Other lines of evidence suggesting endothermy include the presence of feathers and other types of body coverings in many lineages (see Feathers, below); more consistent ratios of the isotope oxygen-18 in bony tissue compared to ectotherms, particularly as latitude and thus air temperature varied, which suggests stable internal temperatures (although these ratios can be altered during fossilization); and the discovery of polar dinosaurs, which lived in Australia, Antarctica, and Alaska when these places would have had cool, temperate climates.
In saurischian dinosaurs, higher metabolisms were supported by the evolution of the avian respiratory system, characterized by an extensive system of air sacs that extended the lungs and invaded many of the bones in the skeleton, making them hollow. Such respiratory systems, which may have appeared in the earliest saurischians, would have provided them with more oxygen compared to a mammal of similar size, while also having a larger resting tidal volume and requiring a lower breathing frequency, which would have allowed them to sustain higher activity levels. The rapid airflow would also have been an effective cooling mechanism, which in conjunction with a lower metabolic rate would have prevented large sauropods from overheating. These traits may have enabled sauropods to grow quickly to gigantic sizes. Sauropods may also have benefitted from their size—their small surface area to volume ratio meant that they would have been able to thermoregulate more easily, a phenomenon termed gigantothermy.
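The gigantothermy argument rests on simple geometry: surface area grows with the square of linear size while volume grows with the cube, so relative heat loss falls as an animal gets larger. A minimal sketch using idealized spheres, with arbitrary radii chosen only for illustration:

```python
import math

# For a sphere, surface area scales with r^2 but volume with r^3,
# so the SA:V ratio falls as 3/r as the body gets larger.
def sa_to_volume_ratio(radius_m: float) -> float:
    surface = 4.0 * math.pi * radius_m**2
    volume = (4.0 / 3.0) * math.pi * radius_m**3
    return surface / volume  # algebraically equal to 3 / radius_m

for r in (0.1, 1.0, 3.0):  # small animal vs. very large animal (illustrative radii)
    print(f"radius {r:>4} m -> SA:V = {sa_to_volume_ratio(r):.2f} per metre")
```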
Like other reptiles, dinosaurs are primarily uricotelic, that is, their kidneys extract nitrogenous wastes from their bloodstream and excrete them as uric acid, instead of urea or ammonia, via the ureters into the intestine. This would have helped them to conserve water. In most living species, uric acid is excreted along with feces as a semisolid waste. However, at least some modern birds (such as hummingbirds) can be facultatively ammonotelic, excreting most of the nitrogenous wastes as ammonia. This material, as well as the output of the intestines, emerges from the cloaca. In addition, many species regurgitate pellets, and fossil pellets are known as early as the Jurassic from Anchiornis.
The size and shape of the brain can be partly reconstructed based on the surrounding bones. In 1896, Marsh calculated ratios between brain weight and body weight of seven species of dinosaurs, showing that the brain of dinosaurs was proportionally smaller than in today's crocodiles, and that the brain of Stegosaurus was smaller than in any living land vertebrate. This contributed to the widespread public notion of dinosaurs as being sluggish and extraordinarily stupid. Harry Jerison, in 1973, showed that proportionally smaller brains are expected at larger body sizes, and that brain size in dinosaurs was not smaller than expected when compared to living reptiles. Later research showed that relative brain size progressively increased during the evolution of theropods, with the highest intelligence – comparable to that of modern birds – calculated for the troodontid Troodon.
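Jerison's correction for body size can be illustrated with the encephalization quotient (EQ), which divides actual brain mass by the mass expected from body size; the 2/3 exponent and 0.12 constant follow his commonly cited formulation, and the masses below are hypothetical examples, not published measurements:

```python
# Encephalization quotient (EQ) in the form popularized by Jerison (1973):
# expected brain mass scales roughly with body mass^(2/3), masses in grams.
def encephalization_quotient(brain_g: float, body_g: float) -> float:
    expected_brain_g = 0.12 * body_g ** (2.0 / 3.0)
    return brain_g / expected_brain_g

# Illustrative (hypothetical) masses, not measurements from the article:
print(encephalization_quotient(brain_g=56.0, body_g=50_000.0))     # small theropod
print(encephalization_quotient(brain_g=80.0, body_g=2_000_000.0))  # large sauropod
```

An EQ near 1 means the brain is about the size expected for the body; values well below 1, as for the hypothetical sauropod above, are small for the body size but not anomalously so once the allometric scaling is taken into account.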
Origin of birds
The possibility that dinosaurs were the ancestors of birds was first suggested in 1868 by Thomas Henry Huxley. After the work of Gerhard Heilmann in the early 20th century, the theory of birds as dinosaur descendants was abandoned in favor of the idea of them being descendants of generalized thecodonts, with the key piece of evidence being the supposed lack of clavicles in dinosaurs. However, as later discoveries showed, clavicles (or a single fused wishbone, which derived from separate clavicles) were not actually absent; they had been found as early as 1924 in Oviraptor, but misidentified as an interclavicle. In the 1970s, John Ostrom revived the dinosaur–bird theory, which gained momentum in the coming decades with the advent of cladistic analysis, and a great increase in the discovery of small theropods and early birds. Of particular note have been the fossils of the Jehol Biota, where a variety of theropods and early birds have been found, often with feathers of some type. Birds share over a hundred distinct anatomical features with theropod dinosaurs, which are now generally accepted to have been their closest ancient relatives. They are most closely allied with maniraptoran coelurosaurs. A minority of scientists, most notably Alan Feduccia and Larry Martin, have proposed other evolutionary paths, including revised versions of Heilmann's basal archosaur proposal, or that maniraptoran theropods are the ancestors of birds but themselves are not dinosaurs, only convergent with dinosaurs.
Feathers
Feathers are one of the most recognizable characteristics of modern birds, and a trait that was also shared by several non-avian dinosaurs. Based on the current distribution of fossil evidence, it appears that feathers were an ancestral dinosaurian trait, though one that may have been selectively lost in some species. Direct fossil evidence of feathers or feather-like structures has been discovered in a diverse array of species in many non-avian dinosaur groups, both among saurischians and ornithischians. Simple, branched, feather-like structures are known from heterodontosaurids, primitive neornithischians and theropods, and primitive ceratopsians. Evidence for true, vaned feathers similar to the flight feathers of modern birds has been found only in the theropod subgroup Maniraptora, which includes oviraptorosaurs, troodontids, dromaeosaurids, and birds. Feather-like structures known as pycnofibres have also been found in pterosaurs.
However, researchers do not agree regarding whether these structures share a common origin between lineages (i.e., they are homologous), or if they were the result of widespread experimentation with skin coverings among ornithodirans. If the former is the case, filaments may have been common in the ornithodiran lineage and evolved before the appearance of dinosaurs themselves. Research into the genetics of American alligators has revealed that crocodylian scutes do possess feather-keratins during embryonic development, but these keratins are not expressed by the animals before hatching. The description of feathered dinosaurs has not been without controversy in general; perhaps the most vocal critics have been Alan Feduccia and Theagarten Lingham-Soliar, who have proposed that some purported feather-like fossils are the result of the decomposition of collagenous fibers that underlay the dinosaurs' skin, and that maniraptoran dinosaurs with vaned feathers were not actually dinosaurs, but convergent with dinosaurs. However, their views have for the most part not been accepted by other researchers, to the point that the scientific nature of Feduccia's proposals has been questioned.
Archaeopteryx was the first fossil found that revealed a potential connection between dinosaurs and birds. It is considered a transitional fossil, in that it displays features of both groups. Brought to light just two years after Charles Darwin's seminal On the Origin of Species (1859), its discovery spurred the nascent debate between proponents of evolutionary biology and creationism. This early bird is so dinosaur-like that, without a clear impression of feathers in the surrounding rock, at least one specimen was mistaken for the small theropod Compsognathus. Since the 1990s, a number of additional feathered dinosaurs have been found, providing even stronger evidence of the close relationship between dinosaurs and modern birds. Many of these specimens were unearthed in the lagerstätten of the Jehol Biota. If feather-like structures were indeed widely present among non-avian dinosaurs, the lack of abundant fossil evidence for them may be due to the fact that delicate features like skin and feathers are seldom preserved by fossilization and thus often absent from the fossil record.
Skeleton
Because feathers are often associated with birds, feathered dinosaurs are often touted as the missing link between birds and dinosaurs. However, the multiple skeletal features also shared by the two groups represent another important line of evidence for paleontologists. Areas of the skeleton with important similarities include the neck, pubis, wrist (semi-lunate carpal), arm and pectoral girdle, furcula (wishbone), and breast bone. Comparison of bird and dinosaur skeletons through cladistic analysis strengthens the case for the link.
Soft anatomy
Large meat-eating dinosaurs had a complex system of air sacs similar to those found in modern birds, according to a 2005 investigation led by Patrick M. O'Connor. The lungs of theropod dinosaurs (carnivores that walked on two legs and had bird-like feet) likely pumped air into hollow sacs in their skeletons, as is the case in birds. "What was once formally considered unique to birds was present in some form in the ancestors of birds", O'Connor said. In 2008, scientists described Aerosteon riocoloradensis, the skeleton of which supplies the strongest evidence to date of a dinosaur with a bird-like breathing system. CT scanning of Aerosteon's fossil bones revealed evidence for the existence of air sacs within the animal's body cavity.
Behavioral evidence
Fossils of the troodonts Mei and Sinornithoides demonstrate that some dinosaurs slept with their heads tucked under their arms. This behavior, which may have helped to keep the head warm, is also characteristic of modern birds. Several deinonychosaur and oviraptorosaur specimens have also been found preserved on top of their nests, likely brooding in a bird-like manner. The ratio between egg volume and body mass of adults among these dinosaurs suggests that the eggs were primarily brooded by the male and that the young were highly precocial, similar to many modern ground-dwelling birds.
Some dinosaurs are known to have used gizzard stones like modern birds. These stones are swallowed by animals to aid digestion and break down food and hard fibers once they enter the stomach. When found in association with fossils, gizzard stones are called gastroliths.
Extinction of major groups
All non-avian dinosaurs and most lineages of birds became extinct in a mass extinction event, called the Cretaceous–Paleogene (K-Pg) extinction event, at the end of the Cretaceous period. Above the Cretaceous–Paleogene boundary, which has been dated to 66.038 ± 0.025 million years ago, fossils of non-avian dinosaurs disappear abruptly; the absence of dinosaur fossils was historically used to assign rocks to the ensuing Cenozoic. The nature of the event that caused this mass extinction has been extensively studied since the 1970s, leading to the development of two mechanisms that are thought to have played major roles: an extraterrestrial impact event in the Yucatán Peninsula, along with flood basalt volcanism in India. However, the specific mechanisms of the extinction event and the extent of its effects on dinosaurs are still areas of ongoing research. Alongside dinosaurs, many other groups of animals became extinct: pterosaurs, marine reptiles such as mosasaurs and plesiosaurs, several groups of mammals, ammonites (nautilus-like mollusks), rudists (reef-building bivalves), and various groups of marine plankton. In all, approximately 47% of genera and 76% of species on Earth became extinct during the K-Pg extinction event. The relatively large size of most dinosaurs and the low diversity of small-bodied dinosaur species at the end of the Cretaceous may have contributed to their extinction; the extinction of the bird lineages that did not survive may also have been caused by a dependence on forest habitats or a lack of adaptations to eating seeds for survival.
Pre-extinction diversity
Just before the K-Pg extinction event, the number of non-avian dinosaur species that existed globally has been estimated at between 628 and 1078. It remains uncertain whether the diversity of dinosaurs was in gradual decline before the K-Pg extinction event, or whether dinosaurs were actually thriving prior to the extinction. Rock formations from the Maastrichtian epoch, which directly preceded the extinction, have been found to have lower diversity than the preceding Campanian epoch, which led to the prevailing view of a long-term decline in diversity. However, these comparisons did not account either for varying preservation potential between rock units or for different extents of exploration and excavation. In 1984, Dale Russell carried out an analysis to account for these biases, and found no evidence of a decline; another analysis by David Fastovsky and colleagues in 2004 even showed that dinosaur diversity continually increased until the extinction, but this analysis has been rebutted. Since then, different approaches based on statistics and mathematical models have variously supported either a sudden extinction or a gradual decline. End-Cretaceous trends in diversity may have varied between dinosaur lineages: it has been suggested that sauropods were not in decline, while ornithischians and theropods were in decline.
Impact event
The bolide impact hypothesis, first brought to wide attention in 1980 by Walter Alvarez, Luis Alvarez, and colleagues, attributes the K-Pg extinction event to a bolide (extraterrestrial projectile) impact. Alvarez and colleagues proposed that a sudden increase in iridium levels, recorded around the world in rock deposits at the Cretaceous–Paleogene boundary, was direct evidence of the impact. Shocked quartz, indicative of a strong shockwave emanating from an impact, was also found worldwide. The actual impact site remained elusive until a crater measuring 180 kilometres (110 mi) wide was discovered in the Yucatán Peninsula of southeastern Mexico, and was publicized in a 1991 paper by Alan Hildebrand and colleagues. Now, the bulk of the evidence suggests that a bolide 5 to 15 kilometres (3 to 9 mi) wide impacted the Yucatán Peninsula 66 million years ago, forming this crater and creating a "kill mechanism" that triggered the extinction event.
Within hours, the Chicxulub impact would have created immediate effects such as earthquakes, tsunamis, and a global firestorm that likely killed unsheltered animals and started wildfires. However, it would also have had longer-term consequences for the environment. Within days, sulfate aerosols released from rocks at the impact site would have contributed to acid rain and ocean acidification. Soot aerosols are thought to have spread around the world over the ensuing months and years; they would have cooled the surface of the Earth by reflecting thermal radiation, and greatly slowed photosynthesis by blocking out sunlight, thus creating an impact winter. (This role was ascribed to sulfate aerosols until experiments demonstrated otherwise.) The cessation of photosynthesis would have led to the collapse of food webs depending on leafy plants, which included all dinosaurs save for grain-eating birds.
Deccan Traps
At the time of the K-Pg extinction, the Deccan Traps flood basalts of India were actively erupting. The eruptions can be separated into three phases around the K-Pg boundary, two prior to the boundary and one after. The second phase, which occurred very close to the boundary, would have extruded 70 to 80% of the volume of these eruptions in intermittent pulses that occurred around 100,000 years apart. Greenhouse gases such as carbon dioxide and sulfur dioxide would have been released by this volcanic activity, resulting in climate change through temperature perturbations of a few degrees Celsius, and possibly more. Like the Chicxulub impact, the eruptions may also have released sulfate aerosols, which would have caused acid rain and global cooling. However, due to large error margins in the dating of the eruptions, the role of the Deccan Traps in the K-Pg extinction remains unclear.
Before 2000, arguments that the Deccan Traps eruptions—as opposed to the Chicxulub impact—caused the extinction were usually linked to the view that the extinction was gradual. Prior to the discovery of the Chicxulub crater, the Deccan Traps were used to explain the global iridium layer; even after the crater's discovery, the impact was still thought to only have had a regional, not global, effect on the extinction event. In response, Luis Alvarez rejected volcanic activity as an explanation for the iridium layer and the extinction as a whole. Since then, however, most researchers have adopted a more moderate position, which identifies the Chicxulub impact as the primary progenitor of the extinction while also recognizing that the Deccan Traps may also have played a role. Walter Alvarez himself has acknowledged that the Deccan Traps and other ecological factors may have contributed to the extinctions in addition to the Chicxulub impact. Some estimates have placed the start of the second phase in the Deccan Traps eruptions within 50,000 years after the Chicxulub impact. Combined with mathematical modelling of the seismic waves that would have been generated by the impact, this has led to the suggestion that the Chicxulub impact may have triggered these eruptions by increasing the permeability of the mantle plume underlying the Deccan Traps.
Whether the Deccan Traps were a major cause of the extinction, on par with the Chicxulub impact, remains uncertain. Proponents consider the climatic impact of the sulfur dioxide released to have been on par with the Chicxulub impact, and also note the role of flood basalt volcanism in other mass extinctions like the Permian-Triassic extinction event. They consider the Chicxulub impact to have worsened the ongoing climate change caused by the eruptions. Meanwhile, detractors point out the sudden nature of the extinction and that other pulses in Deccan Traps activity of comparable magnitude did not appear to have caused extinctions. They also contend that the causes of different mass extinctions should be assessed separately. In 2020, Alfio Chiarenza and colleagues suggested that the Deccan Traps may even have had the opposite effect: they suggested that the long-term warming caused by its carbon dioxide emissions may have dampened the impact winter from the Chicxulub impact.
Possible Paleocene survivors
Non-avian dinosaur remains have occasionally been found above the K-Pg boundary. In 2000, Spencer Lucas and colleagues reported the discovery of a single hadrosaur right femur in the San Juan Basin of New Mexico, and described it as evidence of Paleocene dinosaurs. The rock unit in which the bone was discovered has been dated to the early Paleocene epoch, approximately 64.8 million years ago. If the bone was not re-deposited by weathering action, it would provide evidence that some dinosaur populations survived at least half a million years into the Cenozoic. Other evidence includes the presence of dinosaur remains in the Hell Creek Formation up to 1.3 metres (4.3 ft) above the Cretaceous–Paleogene boundary, representing 40,000 years of elapsed time. This has been used to support the view that the K-Pg extinction was gradual. However, these supposed Paleocene dinosaurs are considered by many other researchers to be reworked, that is, washed out of their original locations and then reburied in younger sediments. The age estimates have also been considered unreliable.
Cultural depictions
By human standards, dinosaurs were creatures of fantastic appearance and often enormous size. As such, they have captured the popular imagination and become an enduring part of human culture. The entry of the word "dinosaur" into the common vernacular reflects the animals' cultural importance: in English, "dinosaur" is commonly used to describe anything that is impractically large, obsolete, or bound for extinction.
Public enthusiasm for dinosaurs first developed in Victorian England, where in 1854, three decades after the first scientific descriptions of dinosaur remains, a menagerie of lifelike dinosaur sculptures was unveiled in London's Crystal Palace Park. The Crystal Palace dinosaurs proved so popular that a strong market in smaller replicas soon developed. In subsequent decades, dinosaur exhibits opened at parks and museums around the world, ensuring that successive generations would be introduced to the animals in an immersive and exciting way. The enduring popularity of dinosaurs, in its turn, has resulted in significant public funding for dinosaur science, and has frequently spurred new discoveries. In the United States, for example, the competition between museums for public attention led directly to the Bone Wars of the 1880s and 1890s, during which a pair of feuding paleontologists made enormous scientific contributions.
The popular preoccupation with dinosaurs has ensured their appearance in literature, film, and other media. Beginning in 1852 with a passing mention in Charles Dickens's Bleak House, dinosaurs have been featured in large numbers of fictional works. Jules Verne's 1864 novel Journey to the Center of the Earth, Sir Arthur Conan Doyle's 1912 book The Lost World, the 1914 animated film Gertie the Dinosaur (featuring the first animated dinosaur), the iconic 1933 film King Kong, the 1954 Godzilla and its many sequels, and the best-selling 1990 novel Jurassic Park by Michael Crichton and its 1993 film adaptation are just a few notable examples of dinosaur appearances in fiction. Authors of general-interest non-fiction works about dinosaurs, including some prominent paleontologists, have often sought to use the animals as a way to educate readers about science in general. Dinosaurs are ubiquitous in advertising; numerous companies have referenced dinosaurs in printed or televised advertisements, either in order to sell their own products or in order to characterize their rivals as slow-moving, dim-witted, or obsolete.
| Biology and health sciences | Biology | null |
8315 | https://en.wikipedia.org/wiki/Diamagnetism | Diamagnetism | Diamagnetism is the property of materials that are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances, the weak diamagnetic force is overcome by the attractive force of magnetic dipoles in the material. The magnetic permeability of diamagnetic materials is less than the permeability of vacuum, μ0. In most materials, diamagnetism is a weak effect which can be detected only by sensitive laboratory instruments, but a superconductor acts as a strong diamagnet because it entirely expels any magnetic field from its interior (the Meissner effect).
Diamagnetism was first discovered when Anton Brugmans observed in 1778 that bismuth was repelled by magnetic fields. In 1845, Michael Faraday demonstrated that it was a property of matter and concluded that every material responded (in either a diamagnetic or paramagnetic way) to an applied magnetic field. On a suggestion by William Whewell, Faraday first referred to the phenomenon as diamagnetic (the prefix dia- meaning through or across), then later changed it to diamagnetism.
A simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: If all electrons in the particle are paired, then the substance made of this particle is diamagnetic; If it has unpaired electrons, then the substance is paramagnetic.
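A minimal sketch of this rule of thumb, considering only a single subshell filled according to Hund's rule (a simplification for illustration; real atoms require the full electron configuration):

```python
# Fill a subshell's orbitals singly first (Hund's rule), then pair up;
# any singly occupied orbital leaves the particle with unpaired spins.
def unpaired_electrons(electrons: int, orbitals: int) -> int:
    if electrons <= orbitals:          # every electron sits alone
        return electrons
    return 2 * orbitals - electrons    # pairing removes unpaired spins

def classify(electrons: int, orbitals: int) -> str:
    return "paramagnetic" if unpaired_electrons(electrons, orbitals) else "diamagnetic"

print(classify(6, 3))  # a full p subshell, as in a noble gas: diamagnetic
print(classify(4, 3))  # a p4 configuration, as in a neutral O atom: paramagnetic
```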
Materials
Diamagnetism is a property of all materials, and always makes a weak contribution to the material's response to a magnetic field. However, other forms of magnetism (such as ferromagnetism or paramagnetism) are so much stronger that, when different forms of magnetism are present in a material, the diamagnetic contribution is usually negligible. Substances in which the diamagnetic behaviour is the strongest effect are termed diamagnetic materials, or diamagnets. Diamagnetic materials are those that most people generally think of as non-magnetic, and include water, wood, most organic compounds such as petroleum and some plastics, and many metals including copper, particularly the heavy ones with many core electrons, such as mercury, gold and bismuth. The magnetic susceptibility values of various molecular fragments are called Pascal's constants (named after the French chemist Paul Pascal).
Diamagnetic materials, like water, or water-based materials, have a relative magnetic permeability that is less than or equal to 1, and therefore a magnetic susceptibility less than or equal to 0, since susceptibility is defined as $\chi_v = \mu_v - 1$. This means that diamagnetic materials are repelled by magnetic fields. However, since diamagnetism is such a weak property, its effects are not observable in everyday life. For example, the magnetic susceptibility of diamagnets such as water is $\chi_v = -9.05 \times 10^{-6}$. The most strongly diamagnetic material is bismuth, $\chi_v = -1.66 \times 10^{-4}$, although pyrolytic carbon may have a susceptibility of $\chi_v = -4.00 \times 10^{-4}$ in one plane. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Because $\chi_v$ is derived from the ratio of the internal magnetic field to the applied field, it is a dimensionless value.
In rare cases, the diamagnetic contribution can be stronger than the paramagnetic contribution. This is the case for gold, which has a magnetic susceptibility less than 0 (and is thus by definition a diamagnetic material), but when measured carefully with X-ray magnetic circular dichroism, has an extremely weak paramagnetic contribution that is overcome by a stronger diamagnetic contribution.
Superconductors
Superconductors may be considered perfect diamagnets ($\chi_v = -1$), because they expel all magnetic fields (except in a thin surface layer) due to the Meissner effect.
Demonstrations
Curving water surfaces
If a powerful magnet (such as a supermagnet) is covered with a layer of water (that is thin compared to the diameter of the magnet) then the field of the magnet significantly repels the water. This causes a slight dimple in the water's surface that may be seen by a reflection in its surface.
Levitation
Diamagnets may be levitated in stable equilibrium in a magnetic field, with no power consumption. Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem applies only to objects with positive susceptibilities, such as ferromagnets (which have a permanent positive moment) and paramagnets (which induce a positive moment). These are attracted to field maxima, which do not exist in free space. Diamagnets (which induce a negative moment) are attracted to field minima, and there can be a field minimum in free space.
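The field strength needed can be estimated from the standard force-balance condition for diamagnetic levitation, in which the magnetic force density $(\chi/\mu_0)\,B\,(dB/dz)$ offsets gravity $\rho g$. A minimal sketch, assuming the commonly quoted volume susceptibility of water:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
g = 9.81                   # gravitational acceleration, m/s^2

# Levitation requires B * dB/dz = mu0 * rho * g / |chi|.
def required_B_dBdz(chi: float, density_kg_m3: float) -> float:
    return MU_0 * density_kg_m3 * g / abs(chi)

# Assumed susceptibility of water (the commonly quoted value):
print(f"{required_B_dBdz(chi=-9.05e-6, density_kg_m3=1000.0):.0f} T^2/m")
# ~1360 T^2/m, which is why research magnets in the ~16 T class are
# needed to float water-rich objects such as the frog mentioned below.
```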
A thin slice of pyrolytic graphite, which is an unusually strongly diamagnetic material, can be stably floated in a magnetic field, such as that from rare earth permanent magnets. This can be done with all components at room temperature, making a visually effective and relatively convenient demonstration of diamagnetism.
The Radboud University Nijmegen, the Netherlands, has conducted experiments where water and other substances were successfully levitated. Most spectacularly, a live frog (see figure) was levitated.
In September 2009, NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California announced it had successfully levitated mice using a superconducting magnet, an important step forward since mice are closer biologically to humans than frogs. JPL said it hopes to perform experiments regarding the effects of microgravity on bone and muscle mass.
Recent experiments studying the growth of protein crystals have led to a technique using powerful magnets to allow growth in ways that counteract Earth's gravity.
A simple homemade device for demonstration can be constructed out of bismuth plates and a few permanent magnets that levitate a permanent magnet.
Theory
The electrons in a material generally settle in orbitals, which have effectively zero resistance and act like current loops. Thus it might be imagined that diamagnetic effects in general would be common, since any applied magnetic field would generate currents in these loops that would oppose the change, in a similar way to superconductors, which are essentially perfect diamagnets. However, since the electrons are rigidly held in orbitals by the charge of the protons and are further constrained by the Pauli exclusion principle, many materials exhibit diamagnetism but typically respond very little to the applied field.
The Bohr–Van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. However, the classical theory of Langevin for diamagnetism gives the same prediction as the quantum theory. The classical theory is given below.
Langevin diamagnetism
Paul Langevin's theory of diamagnetism (1905) applies to materials containing atoms with closed shells (see dielectrics). A field with intensity $B$, applied to an electron with charge $e$ and mass $m$, gives rise to Larmor precession with frequency $\omega = eB/2m$. The number of revolutions per unit time is $\omega/2\pi$, so the current for an atom with $Z$ electrons is (in SI units)

$$I = -\frac{Z e^2 B}{4\pi m}.$$
The magnetic moment of a current loop is equal to the current times the area of the loop. Suppose the field is aligned with the $z$ axis. The average loop area can be given as $\pi\langle\rho^2\rangle$, where $\langle\rho^2\rangle$ is the mean square distance of the electrons perpendicular to the $z$ axis. The magnetic moment is therefore

$$\mu = -\frac{Z e^2 B}{4m}\,\langle\rho^2\rangle.$$
If the distribution of charge is spherically symmetric, we can suppose that the distributions of the $x$, $y$, and $z$ coordinates are independent and identically distributed. Then $\langle x^2\rangle = \langle y^2\rangle = \langle z^2\rangle = \tfrac{1}{3}\langle r^2\rangle$, where $\langle r^2\rangle$ is the mean square distance of the electrons from the nucleus. Therefore, $\langle\rho^2\rangle = \langle x^2\rangle + \langle y^2\rangle = \tfrac{2}{3}\langle r^2\rangle$. If $n$ is the number of atoms per unit volume, the volume diamagnetic susceptibility in SI units is

$$\chi = \frac{\mu_0 n \mu}{B} = -\frac{\mu_0 e^2 Z n \langle r^2\rangle}{6m}.$$
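As a rough numerical check of the formula above, a short script with assumed, generic inputs (ten electrons per atom, a typical solid-state number density, and a mean atomic radius of one ångström) reproduces the right order of magnitude for real diamagnets:

```python
# Numerical check of the Langevin result chi = -mu0 e^2 Z n <r^2> / (6 m),
# using illustrative (assumed) values for a generic closed-shell solid.
import math

MU_0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
E = 1.602176634e-19            # elementary charge, C
M_E = 9.1093837015e-31         # electron mass, kg

def langevin_chi(Z: int, n_atoms_m3: float, r2_m2: float) -> float:
    return -MU_0 * E**2 * Z * n_atoms_m3 * r2_m2 / (6 * M_E)

# Assumed inputs: Z = 10 electrons, n = 5e28 atoms/m^3, <r^2> = (1 angstrom)^2.
print(f"{langevin_chi(10, 5e28, (1e-10)**2):.1e}")  # about -3e-5, the right order of magnitude
```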
In atoms, Langevin susceptibility is of the same order of magnitude as Van Vleck paramagnetic susceptibility.
In metals
The Langevin theory is not the full picture for metals because there are also non-localized electrons. The theory that describes diamagnetism in a free electron gas is called Landau diamagnetism, named after Lev Landau, and instead considers the weak counteracting field that forms when the electrons' trajectories are curved due to the Lorentz force. Landau diamagnetism, however, should be contrasted with Pauli paramagnetism, an effect associated with the polarization of delocalized electrons' spins. For the bulk case of a 3D system and low magnetic fields, the (volume) diamagnetic susceptibility can be calculated using Landau quantization, which in SI units is

$$\chi = -\frac{\mu_0 e^2}{12\pi^2 m \hbar}\sqrt{2m E_{\mathrm{F}}},$$

where $E_{\mathrm{F}}$ is the Fermi energy. This is equivalent to $-\tfrac{1}{3}\mu_0 \mu_{\mathrm{B}}^2 g(E_{\mathrm{F}})$, exactly $-\tfrac{1}{3}$ times the Pauli paramagnetic susceptibility, where $\mu_{\mathrm{B}} = e\hbar/2m$ is the Bohr magneton and $g(E_{\mathrm{F}})$ is the density of states (number of states per energy per volume). This formula takes into account the spin degeneracy of the carriers (spin-1/2 electrons).
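Evaluating this expression for an assumed free-electron Fermi energy of 5 eV, a typical metallic value, gives a susceptibility of order $10^{-6}$:

```python
# Evaluating chi = -mu0 e^2 sqrt(2 m E_F) / (12 pi^2 m hbar)
# for an assumed Fermi energy of 5 eV (illustrative, not a specific metal).
import math

MU_0 = 4 * math.pi * 1e-7  # T*m/A
E = 1.602176634e-19        # elementary charge, C
M_E = 9.1093837015e-31     # electron mass, kg
HBAR = 1.054571817e-34     # reduced Planck constant, J*s

def landau_chi(fermi_energy_J: float) -> float:
    return -MU_0 * E**2 * math.sqrt(2 * M_E * fermi_energy_J) / (12 * math.pi**2 * M_E * HBAR)

print(f"{landau_chi(5 * E):.1e}")  # about -3e-6, dimensionless volume susceptibility
```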
In doped semiconductors the ratio between Landau and Pauli susceptibilities may change due to the effective mass of the charge carriers differing from the electron mass in vacuum, increasing the diamagnetic contribution. The formula presented here only applies for the bulk; in confined systems like quantum dots, the description is altered due to quantum confinement. Additionally, for strong magnetic fields, the susceptibility of delocalized electrons oscillates as a function of the field strength, a phenomenon known as the De Haas–Van Alphen effect, also first described theoretically by Landau.
| Physical sciences | Magnetostatics | Physics |
8324 | https://en.wikipedia.org/wiki/Difference%20engine | Difference engine | A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. It was designed in the 1820s, and was first created by Charles Babbage. The name difference engine is derived from the method of finite differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients. Some of the most common mathematical functions used in engineering, science and navigation are built from logarithmic and trigonometric functions, which can be approximated by polynomials, so a difference engine can compute many useful tables.
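The tabulation scheme can be sketched in a few lines of Python: once the initial value and its leading differences are known, every subsequent table entry follows from additions alone, which is all the machine's wheels had to perform. The polynomial here is an arbitrary example, not one of Babbage's actual tables.

```python
# Tabulating a polynomial by repeated addition only, as a difference engine does.
# Example polynomial (an assumption for illustration): p(x) = 2x^2 + 3x + 5.
def tabulate(initial_differences, steps):
    """initial_differences = [p(0), Delta p(0), Delta^2 p(0), ...]."""
    diffs = list(initial_differences)
    values = []
    for _ in range(steps):
        values.append(diffs[0])
        # Each column is updated by adding the column to its right -- the
        # only arithmetic operation the machine needs is addition.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

# For p(x) = 2x^2 + 3x + 5: p(0) = 5, Delta p(0) = p(1) - p(0) = 5,
# and the second difference of a quadratic is constant: Delta^2 p = 4.
print(tabulate([5, 5, 4], 6))  # [5, 10, 19, 32, 49, 70]
```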
History
The notion of a mechanical calculator for mathematical functions can be traced back to the Antikythera mechanism of the 2nd century BC, while early modern examples are attributed to Pascal and Leibniz in the 17th century.
In 1784 J. H. Müller, an engineer in the Hessian army, devised and built an adding machine and described the basic principles of a difference machine in a book published in 1786 (the first written reference to a difference machine is dated to 1784), but he was unable to obtain funding to progress with the idea.
Charles Babbage's difference engines
Charles Babbage began to construct a small difference engine around 1819 and had completed it by 1822 (Difference Engine 0). He announced his invention on 14 June 1822, in a paper to the Royal Astronomical Society, entitled "Note on the application of machinery to the computation of astronomical and mathematical tables". This machine used the decimal number system and was powered by cranking a handle. The British government was interested, since producing tables was time-consuming and expensive and they hoped the difference engine would make the task more economical.
In 1823, the British government gave Babbage £1700 to start work on the project. Although Babbage's design was feasible, the metalworking techniques of the era could not economically make parts in the precision and quantity required. Thus the implementation proved to be much more expensive and doubtful of success than the government's initial estimate. According to the 1830 design for Difference Engine No. 1, it would have about 25,000 parts, weigh 4 tons, and operate on 20-digit numbers by sixth-order differences. In 1832, Babbage and Joseph Clement produced a small working model (one-seventh of the plan), which operated on 6-digit numbers by second-order differences. Lady Byron described seeing the working prototype in 1833: "We both went to see the thinking machine (or so it seems) last Monday. It raised several Nos. to the 2nd and 3rd powers, and extracted the root of a Quadratic equation." Work on the larger engine was suspended in 1833.
By the time the government abandoned the project in 1842, Babbage had received and spent over £17,000 on development, which still fell short of achieving a working engine. The government valued only the machine's output (economically produced tables), not the development (at unpredictable cost) of the machine itself. Babbage refused to recognize that predicament. Meanwhile, Babbage's attention had moved on to developing an analytical engine, further undermining the government's confidence in the eventual success of the difference engine. By improving the concept as an analytical engine, Babbage had made the difference engine concept obsolete, and the project to implement it an utter failure in the view of the government.
The incomplete Difference Engine No. 1 was put on display to the public at the 1862 International Exhibition in South Kensington, London.
Babbage went on to design his much more general analytical engine, but later returned to produce an improved "Difference Engine No. 2" (31-digit numbers and seventh-order differences) between 1846 and 1849. Babbage was able to take advantage of ideas developed for the analytical engine to make the new difference engine calculate more quickly while using fewer parts.
Scheutzian calculation engine
Inspired by Babbage's difference engine in 1834, the Swedish inventor Per Georg Scheutz built several experimental models. In 1837 his son Edward proposed to construct a working model in metal, and in 1840 finished the calculating part, capable of calculating series with 5-digit numbers and first-order differences, which was later extended to third-order (1842). In 1843, after adding the printing part, the model was completed.
In 1851, funded by the government, construction of the larger and improved (15-digit numbers and fourth-order differences) machine began, and finished in 1853. The machine was demonstrated at the World's Fair in Paris, 1855, and then sold in 1856 to the Dudley Observatory in Albany, New York. Delivered in 1857, it was the first printing calculator sold. In 1857 the British government ordered the next of Scheutz's difference machines, which was built in 1859 with the same basic construction as the previous one.
Others
Martin Wiberg improved Scheutz's construction (his machine had the same capacity as Scheutz's: 15-digit numbers and fourth-order differences) but used his device only for producing and publishing printed tables (interest tables in 1860, and logarithmic tables in 1875).
Alfred Deacon of London produced a small difference engine (20-digit numbers and third-order differences).
American George B. Grant started working on his calculating machine in 1869, unaware of the works of Babbage and Scheutz. One year later (1870) he learned about difference engines and proceeded to design one himself, describing his construction in 1871. In 1874 the Boston Thursday Club raised a subscription for the construction of a large-scale model, which was built in 1876. It could be expanded to enhance precision.
Christel Hamann built one machine (16-digit numbers and second-order differences) in 1909 for the "Tables of Bauschinger and Peters" ("Logarithmic-Trigonometrical Tables with eight decimal places"), which was first published in Leipzig in 1910.
Burroughs Corporation in about 1912 built a machine for the Nautical Almanac Office which was used as a difference engine of second-order. It was later replaced in 1929 by a Burroughs Class 11 (13-digit numbers and second-order differences, or 11-digit numbers and [at least up to] fifth-order differences).
Alexander John Thompson about 1927 built an integrating and differencing machine (13-digit numbers and fifth-order differences) for his table of logarithms "Logarithmetica Britannica". This machine was composed of four modified Triumphator calculators.
Leslie Comrie in 1928 described how to use the Brunsviga-Dupla calculating machine as a difference engine of second-order (15-digit numbers). He also noted in 1931 that the National Accounting Machine Class 3000 could be used as a difference engine of sixth-order.
Construction of two working No. 2 difference engines
During the 1980s, Allan G. Bromley, an associate professor at the University of Sydney, Australia, studied Babbage's original drawings for the Difference and Analytical Engines at the Science Museum library in London. This work led the Science Museum to construct a working calculating section of difference engine No. 2 from 1985 to 1991, under Doron Swade, the then Curator of Computing. This was to celebrate the 200th anniversary of Babbage's birth in 1991. In 2002, the printer which Babbage originally designed for the difference engine was also completed. The conversion of the original design drawings into drawings suitable for engineering manufacturers' use revealed some minor errors in Babbage's design (possibly introduced as a protection in case the plans were stolen), which had to be corrected. The difference engine and printer were constructed to tolerances achievable with 19th-century technology, resolving a long-standing debate as to whether Babbage's design could have worked using Georgian-era engineering methods. The machine contains 8,000 parts and weighs about 5 tons.
The printer's primary purpose is to produce stereotype plates for use in printing presses, which it does by pressing type into soft plaster to create a flong. Babbage intended that the Engine's results be conveyed directly to mass printing, having recognized that many errors in previous tables were not the result of human calculating mistakes but from slips in the manual typesetting process. The printer's paper output is mainly a means of checking the engine's performance.
In addition to funding the construction of the output mechanism for the Science Museum's difference engine, Nathan Myhrvold commissioned the construction of a second complete Difference Engine No. 2, which was on exhibit at the Computer History Museum in Mountain View, California, from May 2008 to January 2016. It has since been transferred to Intellectual Ventures in Seattle where it is on display just outside the main lobby.
Operation
The difference engine consists of a number of columns, numbered from 1 to N. The machine is able to store one decimal number in each column. The machine can only add the value of column n + 1 to column n to produce the new value of column n. Column N can only store a constant; column 1 displays (and possibly prints) the value of the calculation on the current iteration.
The engine is programmed by setting initial values to the columns. Column 1 is set to the value of the polynomial at the start of computation. Column 2 is set to a value derived from the first and higher derivatives of the polynomial at the same value of X. Each of the columns from 3 to N is set to a value derived from the first and higher derivatives of the polynomial.
Timing
In the Babbage design, one iteration (i.e. one full set of addition and carry operations) happens for each rotation of the main shaft. Odd and even columns alternately perform an addition in one cycle. The sequence of operations for column n is thus:
Count up, receiving the value from column n + 1 (Addition step)
Perform carry propagation on the counted up value
Count down to zero, adding to column n − 1
Reset the counted-down value to its original value
Steps 1,2,3,4 occur for every odd column, while steps 3,4,1,2 occur for every even column.
While Babbage's original design placed the crank directly on the main shaft, it was later realized that the force required to crank the machine would have been too great for a human to handle comfortably. Therefore, the two models that were built incorporate a 4:1 reduction gear at the crank, and four revolutions of the crank are required to perform one full cycle.
Steps
Each iteration creates a new result, and is accomplished in four steps corresponding to four complete turns of the handle at the far right of the engine. The four steps are:
All even numbered columns (2,4,6,8) are added to all odd numbered columns (1,3,5,7) simultaneously. An interior sweep arm turns each even column to cause whatever number is on each wheel to count down to zero. As a wheel turns to zero, it transfers its value to a sector gear located between the odd/even columns. These values are transferred to the odd column causing them to count up. Any odd column value that passes from "9" to "0" activates a carry lever.
This is like Step 1, except that odd columns (3,5,7) are added to even columns (2,4,6), and column 1 has its value transferred by a sector gear to the print mechanism on the left end of the engine; this value, the result of the polynomial for the current iteration, is sent to the attached printer. Any even column value that passes from "9" to "0" activates a carry lever.
The carry levers activated in Step 1 now propagate their carries up the odd columns, while the even columns, which counted down to zero, are returned to their original values.
This is like Step 3, but for doing carries on even columns, and returning odd columns to their original values.
Subtraction
The engine represents negative numbers as ten's complements. Subtraction amounts to addition of a negative number. This works in the same manner that modern computers perform subtraction using two's complement.
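As a worked illustration (the numbers are chosen here, not taken from Babbage's documentation): on a four-digit column, −3 is stored as its ten's complement, 10000 − 3 = 9997. Adding 9997 to 125 gives 10122; the overflowing fifth digit falls off the column, leaving 122 = 125 − 3.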
Method of differences
The principle of a difference engine is Newton's method of divided differences. If the initial value of a polynomial (and of its finite differences) is calculated by some means for some value of X, the difference engine can calculate any number of nearby values, using the method generally known as the method of finite differences. For example, consider the quadratic polynomial

p(x) = 2x² − 3x + 2
with the goal of tabulating the values p(0), p(1), p(2), p(3), p(4), and so forth. The table below is constructed as follows: the second column contains the values of the polynomial, the third column contains the differences of the two left neighbors in the second column, and the fourth column contains the differences of the two neighbors in the third column:

    x    p(x) = 2x² − 3x + 2    diff1(x) = p(x+1) − p(x)    diff2(x) = diff1(x+1) − diff1(x)
    0     2                     −1                          4
    1     1                      3                          4
    2     4                      7                          4
    3    11                     11
    4    22
The numbers in the third values-column, the second differences, are constant. In fact, by starting with any polynomial of degree n, the values-column number n + 1 will always be constant. This is the crucial fact behind the success of the method.
This table was built from left to right, but it is possible to continue building it from right to left down a diagonal in order to compute more values. To calculate p(5) use the values from the lowest diagonal. Start with the fourth column constant value of 4 and copy it down the column. Then continue the third column by adding 4 to 11 to get 15. Next continue the second column by taking its previous value, 22, and adding the 15 from the third column. Thus p(5) is 22 + 15 = 37. In order to compute p(6), we iterate the same algorithm on the p(5) values: take 4 from the fourth column, add that to the third column's value 15 to get 19, then add that to the second column's value 37 to get 56, which is p(6). This process may be continued ad infinitum. The values of the polynomial are produced without ever having to multiply. A difference engine only needs to be able to add. From one loop to the next, it needs to store two numbers: in this example, the last elements in the second and third columns. To tabulate polynomials of degree n, one needs sufficient storage to hold n numbers.
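This scheme is easy to mimic in software. A minimal sketch in Python (it reproduces the tabulated values, but deliberately ignores the engine's odd/even timing and carry mechanics):

    # A minimal software analogue of the method of finite differences.
    def difference_engine(columns, steps):
        """columns[0] plays the role of column 1 (the tabulated value);
        columns[-1] holds the constant highest-order difference.
        Yields successive values of the polynomial."""
        cols = list(columns)
        for _ in range(steps):
            yield cols[0]
            # Each column receives its higher neighbour's value; updating in
            # ascending order uses the not-yet-updated neighbour, mirroring
            # how the table is extended along a diagonal.
            for n in range(len(cols) - 1):
                cols[n] += cols[n + 1]

    # Initial values for p(x) = 2x^2 - 3x + 2: p(0) = 2, diff1 = -1, diff2 = 4.
    print(list(difference_engine([2, -1, 4], 7)))
    # -> [2, 1, 4, 11, 22, 37, 56]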
Babbage's difference engine No. 2, finally built in 1991, can hold 8 numbers of 31 decimal digits each and can thus tabulate 7th degree polynomials to that precision. The best machines from Scheutz could store 4 numbers with 15 digits each.
Initial values
The initial values of columns can be calculated by first manually calculating N consecutive values of the function and by backtracking (i.e. calculating the required differences).
Col 1 gets the value of the function at the start of computation, f(0). Col 2 is the difference between f(1) and f(0), and each higher column is set to the corresponding higher-order difference.
If the function to be calculated is a polynomial function, expressed as
the initial values can be calculated directly from the constant coefficients a0, a1, a2, ..., an without calculating any data points. The initial values are thus (a sketch computing them follows the list):
Col 1 = a0
Col 2 = a1 + a2 + a3 + a4 + ... + an
Col 3 = 2a2 + 6a3 + 14a4 + 30a5 + ...
Col 4 = 6a3 + 36a4 + 150a5 + ...
Col 5 = 24a4 + 240a5 + ...
Col 6 = 120a5 + ...
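These closed forms are just the forward differences of p at x = 0, 1, ..., n. A small sketch that computes them directly (the function name is illustrative):

    # Compute the initial column values as forward differences at x = 0.
    def initial_columns(coeffs):
        """coeffs = [a0, a1, ..., an] of p(x) = a0 + a1*x + ... + an*x^n;
        returns the n+1 initial column values Col 1, Col 2, ..., Col n+1."""
        n = len(coeffs) - 1
        values = [sum(a * x**k for k, a in enumerate(coeffs)) for x in range(n + 1)]
        cols = []
        while values:
            cols.append(values[0])
            values = [b - a for a, b in zip(values, values[1:])]
        return cols

    # For p(x) = 2x^2 - 3x + 2 (a0 = 2, a1 = -3, a2 = 2):
    print(initial_columns([2, -3, 2]))   # -> [2, -1, 4]
    # matching Col 1 = a0 = 2, Col 2 = a1 + a2 = -1, Col 3 = 2*a2 = 4.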
Use of derivatives
Many commonly used functions are analytic functions, which can be expressed as power series, for example as a Taylor series. The initial values can be calculated to any degree of accuracy; if done correctly the engine will give exact results for the first N steps. After that, the engine will only give an approximation of the function.
The Taylor series expresses the function as a sum obtained from its derivatives at one point. For many functions the higher derivatives are trivial to obtain; for instance, the sine function at 0 has values of 0, 1, or −1 for all derivatives. Setting 0 as the start of computation we get the simplified Maclaurin series

sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯
The same method of calculating the initial values from the coefficients can be used as for polynomial functions. The polynomial constant coefficients will now have the value

aₙ = f⁽ⁿ⁾(0) / n!
Curve fitting
The problem with the methods described above is that errors will accumulate and the series will tend to diverge from the true function. A solution which guarantees a constant maximum error is to use curve fitting. A minimum of N values are calculated evenly spaced along the range of the desired calculations. Using a curve-fitting technique like Gaussian reduction, a polynomial interpolation of the function of degree N − 1 is found. With the optimized polynomial, the initial values can be calculated as above.
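A minimal sketch of this approach (the sample function and point count are arbitrary; np.polyfit performs the least-squares solve that Gaussian elimination would carry out, and with N samples at degree N − 1 the fit interpolates exactly):

    # Fit a degree-(N-1) polynomial to N evenly spaced samples of a function,
    # then read off the coefficients used to set the engine's initial values.
    import numpy as np

    N = 8                                # an 8-column engine
    xs = np.linspace(0.0, 1.0, N)        # evenly spaced sample points
    ys = np.sin(xs)                      # the function to tabulate
    coeffs = np.polyfit(xs, ys, N - 1)   # fitted polynomial, degree N-1
    a = coeffs[::-1]                     # reverse to a0, a1, ..., a(N-1)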
| Technology | Early computers | null |
8328 | https://en.wikipedia.org/wiki/Divergence | Divergence | In vector calculus, divergence is a vector operator that operates on a vector field, producing a scalar field giving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
As an example, consider air as it is heated or cooled. The velocity of the air at each point defines a vector field. While air is heated in a region, it expands in all directions, and thus the velocity field points outward from that region. The divergence of the velocity field in that region would thus have a positive value. While the air is cooled and thus contracting, the divergence of the velocity has a negative value.
Physical interpretation of divergence
In physical terms, the divergence of a vector field is the extent to which the vector field flux behaves like a source or a sink at a given point. It is a local measure of its "outgoingness" – the extent to which there are more of the field vectors exiting from an infinitesimal region of space than entering it. A point at which the flux is outgoing has positive divergence, and is often called a "source" of the field. A point at which the flux is directed inward has negative divergence, and is often called a "sink" of the field. The greater the flux of field through a small surface enclosing a given point, the greater the value of divergence at that point. A point at which there is zero flux through an enclosing surface has zero divergence.
The divergence of a vector field is often illustrated using the simple example of the velocity field of a fluid, a liquid or gas. A moving gas has a velocity, a speed and direction at each point, which can be represented by a vector, so the velocity of the gas forms a vector field. If a gas is heated, it will expand. This will cause a net motion of gas particles outward in all directions. Any closed surface in the gas will enclose gas which is expanding, so there will be an outward flux of gas through the surface. So the velocity field will have positive divergence everywhere. Similarly, if the gas is cooled, it will contract. There will be more room for gas particles in any volume, so the external pressure of the fluid will cause a net flow of gas volume inward through any closed surface. Therefore, the velocity field has negative divergence everywhere. In contrast, in a gas at a constant temperature and pressure, the net flux of gas out of any closed surface is zero. The gas may be moving, but the volume rate of gas flowing into any closed surface must equal the volume rate flowing out, so the net flux is zero. Thus the gas velocity has zero divergence everywhere. A field which has zero divergence everywhere is called solenoidal.
If the gas is heated only at one point or small region, or a small tube is introduced which supplies a source of additional gas at one point, the gas there will expand, pushing fluid particles around it outward in all directions. This will cause an outward velocity field throughout the gas, centered on the heated point. Any closed surface enclosing the heated point will have a flux of gas particles passing out of it, so there is positive divergence at that point. However any closed surface not enclosing the point will have a constant density of gas inside, so just as many fluid particles are entering as leaving the volume, thus the net flux out of the volume is zero. Therefore, the divergence at any other point is zero.
Definition
The divergence of a vector field F(x) at a point x₀ is defined as the limit of the ratio of the surface integral of F out of the closed surface of a volume V enclosing x₀ to the volume of V, as V shrinks to zero:

div F(x₀) = lim_{V→0} (1/|V|) ∮_{S(V)} F · n̂ dS

where |V| is the volume of V, S(V) is the boundary of V, and n̂ is the outward unit normal to that surface. It can be shown that the above limit always converges to the same value for any sequence of volumes that contain x₀ and approach zero volume. The result, div F, is a scalar function of the point x₀.
Since this definition is coordinate-free, it shows that the divergence is the same in any coordinate system. However the above definition is not often used practically to calculate divergence; when the vector field is given in a coordinate system the coordinate definitions below are much simpler to use.
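As an illustrative numerical check (a sketch with an arbitrarily chosen field F = (x², y, z), whose coordinate-formula divergence is 2x + 2), the flux out of a small cube around a point, divided by the cube's volume, approaches that value:

    # Estimate div F at a point as (flux out of a small cube) / (cube volume),
    # sampling F at the centre of each face.
    def flux_over_volume(x, y, z, h=1e-4):
        F = lambda u, v, w: (u**2, v, w)       # the example field
        area, volume = (2*h)**2, (2*h)**3
        flux = (F(x + h, y, z)[0] - F(x - h, y, z)[0]) * area \
             + (F(x, y + h, z)[1] - F(x, y - h, z)[1]) * area \
             + (F(x, y, z + h)[2] - F(x, y, z - h)[2]) * area
        return flux / volume

    print(flux_over_volume(1.0, 0.5, -2.0))    # ≈ 4.0 = 2*1 + 2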
A vector field with zero divergence everywhere is called solenoidal – in which case any closed surface has no net flux across it.
Definition in coordinates
Cartesian coordinates
In three-dimensional Cartesian coordinates, the divergence of a continuously differentiable vector field F = F_x i + F_y j + F_z k is defined as the scalar-valued function:

div F = ∇ · F = ∂F_x/∂x + ∂F_y/∂y + ∂F_z/∂z
Although expressed in terms of coordinates, the result is invariant under rotations, as the physical interpretation suggests. This is because the trace of the Jacobian matrix of an n-dimensional vector field in n-dimensional space is invariant under any invertible linear transformation.
The common notation for the divergence, ∇ · F, is a convenient mnemonic, where the dot denotes an operation reminiscent of the dot product: take the components of the ∇ operator (see del), apply them to the corresponding components of F, and sum the results. Because applying an operator is different from multiplying the components, this is considered an abuse of notation.
Cylindrical coordinates
For a vector expressed in local unit cylindrical coordinates as

F = F_r e_r + F_θ e_θ + F_z e_z,

where e_a is the unit vector in direction a, the divergence is

div F = ∇ · F = (1/r) ∂(r F_r)/∂r + (1/r) ∂F_θ/∂θ + ∂F_z/∂z
The use of local coordinates is vital for the validity of the expression. If we consider x the position vector and the functions r(x), θ(x), and z(x), which assign the corresponding global cylindrical coordinate to a vector, in general r(F(x)) ≠ F_r(x), θ(F(x)) ≠ F_θ(x), and z(F(x)) ≠ F_z(x). In particular, if we consider the identity function F(x) = x, we find that θ(F(x)) = θ ≠ F_θ(x) = 0.
Spherical coordinates
In spherical coordinates, with θ the angle with the z axis and φ the rotation around the z axis, and F again written in local unit coordinates, the divergence is

div F = ∇ · F = (1/r²) ∂(r² F_r)/∂r + (1/(r sin θ)) ∂(sin θ F_θ)/∂θ + (1/(r sin θ)) ∂F_φ/∂φ
Tensor field
Let T be a continuously differentiable second-order tensor field defined as follows:

T = T_ij e_i ⊗ e_j

the divergence in the Cartesian coordinate system is a first-order tensor field and can be defined in two ways:

(div T)_j = ∂T_ij/∂x_i

and

(∇ · T)_i = ∂T_ij/∂x_j

We have

div(T) = ∇ · Tᵀ

If the tensor T is symmetric, then div(T) = ∇ · T. Because of this, often in the literature the two definitions (and symbols div and ∇·) are used interchangeably (especially in mechanics equations where tensor symmetry is assumed).
Expressions of ∇ · T in cylindrical and spherical coordinates are given in the article del in cylindrical and spherical coordinates.
General coordinates
Using Einstein notation we can consider the divergence in general coordinates, which we write as x¹, …, xⁱ, …, xⁿ, where n is the number of dimensions of the domain. Here, the upper index refers to the number of the coordinate or component, so x² refers to the second component, and not the quantity squared. The index variable i is used to refer to an arbitrary component, such as xⁱ. The divergence can then be written via the Voss–Weyl formula, as:

div F = (1/ρ) ∂(ρ Fⁱ)/∂xⁱ

where ρ is the local coefficient of the volume element and Fⁱ are the components of F with respect to the local unnormalized covariant basis (sometimes written as eᵢ = ∂x/∂xⁱ). The Einstein notation implies summation over i, since it appears as both an upper and lower index.
The volume coefficient ρ is a function of position which depends on the coordinate system. In Cartesian, cylindrical and spherical coordinates, using the same conventions as before, we have ρ = 1, ρ = r and ρ = r² sin θ, respectively. The volume can also be expressed as ρ = √|det g|, where g is the metric tensor. The determinant appears because it provides the appropriate invariant definition of the volume, given a set of vectors. Since the determinant is a scalar quantity which doesn't depend on the indices, these can be suppressed, writing ρ = √|det g|. The absolute value is taken in order to handle the general case where the determinant might be negative, such as in pseudo-Riemannian spaces. The reason for the square-root is a bit subtle: it effectively avoids double-counting as one goes from curved to Cartesian coordinates, and back. The volume (the determinant) can also be understood as the Jacobian of the transformation from Cartesian to curvilinear coordinates, which for n = 3 gives ρ = |∂(x, y, z)/∂(x¹, x², x³)|.
Some conventions expect all local basis elements to be normalized to unit length, as was done in the previous sections. If we write êᵢ for the normalized basis, and F̂ⁱ for the components of F with respect to it, we have that

F = Fⁱ eᵢ = Fⁱ ‖eᵢ‖ (eᵢ / ‖eᵢ‖) = Fⁱ √(gᵢᵢ) êᵢ = F̂ⁱ êᵢ,

using one of the properties of the metric tensor, ‖eᵢ‖ = √(gᵢᵢ). By dotting both sides of the last equality with the contravariant element êⁱ, we can conclude that Fⁱ = F̂ⁱ/√(gᵢᵢ). After substituting, the formula becomes:

div F = (1/ρ) ∂(ρ F̂ⁱ / √(gᵢᵢ))/∂xⁱ
Properties
The following properties can all be derived from the ordinary differentiation rules of calculus. Most importantly, the divergence is a linear operator, i.e.,

div(aF + bG) = a div F + b div G

for all vector fields F and G and all real numbers a and b.
There is a product rule of the following type: if φ is a scalar-valued function and F is a vector field, then

div(φF) = grad φ · F + φ div F,

or in more suggestive notation

∇ · (φF) = (∇φ) · F + φ (∇ · F).
Another product rule for the cross product of two vector fields F and G in three dimensions involves the curl and reads as follows:

div(F × G) = curl F · G − F · curl G,

or

∇ · (F × G) = (∇ × F) · G − F · (∇ × G).
The Laplacian of a scalar field φ is the divergence of the field's gradient:

div(grad φ) = Δφ.
The divergence of the curl of any vector field F (in three dimensions) is equal to zero:

∇ · (∇ × F) = 0.
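Both identities can be checked symbolically; a short sketch using sympy with arbitrarily chosen fields:

    # Verify div(curl F) == 0 and div(grad phi) == Laplacian(phi) symbolically.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    Fx, Fy, Fz = x**2 * y, sp.sin(z), y * z       # an arbitrary smooth field

    curl = (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))
    div_curl = sum(sp.diff(c, v) for c, v in zip(curl, (x, y, z)))
    print(sp.simplify(div_curl))                  # 0

    phi = x * y**2 * sp.exp(z)                    # an arbitrary scalar field
    grad = [sp.diff(phi, v) for v in (x, y, z)]
    div_grad = sum(sp.diff(g, v) for g, v in zip(grad, (x, y, z)))
    laplacian = sum(sp.diff(phi, v, 2) for v in (x, y, z))
    print(sp.simplify(div_grad - laplacian))      # 0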
If a vector field F with zero divergence is defined on a ball in R³, then there exists some vector field G on the ball with F = curl G. For regions in R³ more topologically complicated than this, the latter statement might be false (see Poincaré lemma). The degree of failure of the truth of the statement, measured by the homology of the chain complex

{scalar fields on U} --grad--> {vector fields on U} --curl--> {vector fields on U} --div--> {scalar fields on U}

serves as a nice quantification of the complicatedness of the underlying region U. These are the beginnings and main motivations of de Rham cohomology.
Decomposition theorem
It can be shown that any stationary flux v(r) that is twice continuously differentiable in R³ and vanishes sufficiently fast for |r| → ∞ can be decomposed uniquely into an irrotational part E(r) and a source-free part B(r). Moreover, these parts are explicitly determined by the respective source densities (see above) and circulation densities (see the article Curl):

For the irrotational part one has

E = −∇Φ(r),

with

Φ(r) = ∫_{R³} d³r′ (div v(r′)) / (4π |r − r′|).

The source-free part, B, can be similarly written: one only has to replace the scalar potential Φ(r) by a vector potential A(r) and the terms −∇Φ by +∇ × A, and the source density div v
by the circulation density ∇ × v.
This "decomposition theorem" is a by-product of the stationary case of electrodynamics. It is a special case of the more general Helmholtz decomposition, which works in dimensions greater than three as well.
In arbitrary finite dimensions
The divergence of a vector field can be defined in any finite number n of dimensions. If

F = (F₁, F₂, …, Fₙ)

in a Euclidean coordinate system with coordinates x₁, x₂, …, xₙ, define

div F = ∇ · F = ∂F₁/∂x₁ + ∂F₂/∂x₂ + ⋯ + ∂Fₙ/∂xₙ.

In the 1D case, F reduces to a regular function, and the divergence reduces to the derivative.
For any n, the divergence is a linear operator, and it satisfies the "product rule"

∇ · (φF) = (∇φ) · F + φ (∇ · F)

for any scalar-valued function φ.
Relation to the exterior derivative
One can express the divergence as a particular case of the exterior derivative, which takes a 2-form to a 3-form in R³. Define the current two-form as

j = F₁ dy ∧ dz + F₂ dz ∧ dx + F₃ dx ∧ dy.

It measures the amount of "stuff" flowing through a surface per unit time in a "stuff fluid" of density ρ = 1 dx ∧ dy ∧ dz moving with local velocity F. Its exterior derivative dj is then given by

dj = (∂F₁/∂x + ∂F₂/∂y + ∂F₃/∂z) dx ∧ dy ∧ dz = (div F) ρ,

where ∧ is the wedge product.

Thus, the divergence of the vector field F can be expressed as:

div F = ⋆ d ⋆ (F♭).

Here the superscript ♭ is one of the two musical isomorphisms, and ⋆ is the Hodge star operator. When the divergence is written in this way, the operator ⋆ d ⋆ is referred to as the codifferential. Working with the current two-form j and the exterior derivative is usually easier than working with the vector field and divergence, because unlike the divergence, the exterior derivative commutes with a change of (curvilinear) coordinate system.
In curvilinear coordinates
The appropriate expression is more complicated in curvilinear coordinates. The divergence of a vector field extends naturally to any differentiable manifold of dimension n that has a volume form (or density) μ, e.g. a Riemannian or Lorentzian manifold. Generalising the construction of a two-form for a vector field on R³, on such a manifold a vector field X defines an (n − 1)-form j = ι_X μ obtained by contracting X with μ. The divergence is then the function defined by

dj = (div X) μ.
The divergence can be defined in terms of the Lie derivative as

L_X μ = (div X) μ.
This means that the divergence measures the rate of expansion of a unit of volume (a volume element) as it flows with the vector field.
On a pseudo-Riemannian manifold, the divergence with respect to the volume can be expressed in terms of the Levi-Civita connection ∇:

div X = ∇ · X = X^a_{;a},

where the second expression is the contraction of the vector-field-valued 1-form ∇X with itself and the last expression is the traditional coordinate expression from Ricci calculus.
An equivalent expression without using a connection is

div X = (1/√|det g|) ∂_a (√|det g| X^a),

where g is the metric and ∂_a denotes the partial derivative with respect to coordinate x^a. The square-root of the (absolute value of the determinant of the) metric appears because the divergence must be written with the correct conception of the volume. In curvilinear coordinates, the basis vectors are no longer orthonormal; the determinant encodes the correct idea of volume in this case. It appears twice here: once so that X^a can be transformed into "flat space" (where coordinates are actually orthonormal), and once again so that ∂_a is also transformed into "flat space", so that finally, the "ordinary" divergence can be written with the "ordinary" concept of volume in flat space (i.e. unit volume, i.e. one, i.e. not written down). The square-root appears in the denominator, because the derivative transforms in the opposite way (contravariantly) to the vector (which is covariant). This idea of getting to a "flat coordinate system" where local computations can be done in a conventional way is called a vielbein. A different way to see this is to note that the divergence is the codifferential in disguise. That is, the divergence corresponds to the expression ⋆ d ⋆ with the differential d and the Hodge star ⋆. The Hodge star, by its construction, causes the volume form to appear in all of the right places.
The divergence of tensors
Divergence can also be generalised to tensors. In Einstein notation, the divergence of a contravariant vector F^μ is given by

div F = ∇_μ F^μ,

where ∇_μ denotes the covariant derivative. In this general setting, the correct formulation of the divergence is to recognize that it is a codifferential; the appropriate properties follow from there.
Equivalently, some authors define the divergence of a mixed tensor by using the musical isomorphism ♯: if T is a (p, q)-tensor (p for the contravariant vector and q for the covariant one), then we define the divergence of T to be the (p, q − 1)-tensor

(div T)(Y₁, …, Y_{q−1}) = trace( X ↦ ♯(∇T)(X, ·, Y₁, …, Y_{q−1}) );

that is, we take the trace over the first two covariant indices of the covariant derivative. The symbol ♯ refers to the musical isomorphism.
| Mathematics | Multivariable and vector calculus | null |
8336 | https://en.wikipedia.org/wiki/Decision%20problem | Decision problem | In computability theory and computational complexity theory, a decision problem is a computational problem that can be posed as a yes–no question based on the given input values. An example of a decision problem is deciding with the help of an algorithm whether a given natural number is prime. Another example is the problem, "given two numbers x and y, does x evenly divide y?"
A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the decision problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division. If the remainder is zero the answer is 'yes', otherwise it is 'no'. A decision problem which can be solved by an algorithm is called decidable.
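A minimal sketch of such a decision procedure in Python (the repeated-subtraction loop stands in for long division; names are illustrative):

    # Decision procedure for "does x evenly divide y?": compute the remainder
    # and answer "yes" exactly when it is zero.
    def divides(x: int, y: int) -> bool:
        if x == 0:
            return y == 0          # conventionally, 0 divides only 0
        r = abs(y)
        while r >= abs(x):         # remainder by repeated subtraction
            r -= abs(x)
        return r == 0

    print(divides(3, 12))   # True  ("yes")
    print(divides(5, 12))   # False ("no")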
Decision problems typically appear in mathematical questions of decidability, that is, the question of the existence of an effective method to determine the existence of some object or its membership in a set; some of the most important problems in mathematics are undecidable.
The field of computational complexity categorizes decidable decision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of recursion theory, meanwhile, categorizes undecidable decision problems by Turing degree, which is a measure of the noncomputability inherent in any solution.
Definition
A decision problem is a yes-or-no question on an infinite set of inputs. It is traditional to define the decision problem as the set of possible inputs together with the set of inputs for which the answer is yes.
These inputs can be natural numbers, but can also be values of some other kind, like binary strings or strings over some other alphabet. The subset of strings for which the problem returns "yes" is a formal language, and often decision problems are defined as formal languages.
Using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. Therefore, an algorithm for a decision problem amounts to computing the characteristic function of a subset of the natural numbers.
Examples
A classic example of a decidable decision problem is the set of prime numbers. It is possible to effectively decide whether a given natural number is prime by testing every possible nontrivial factor. Although much more efficient methods of primality testing are known, the existence of any effective method is enough to establish decidability.
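The trial-division method described above, as a short sketch (deliberately naive; it tests every possible nontrivial factor):

    # Decide primality by testing every possible nontrivial factor.
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        for d in range(2, n):      # every possible nontrivial factor
            if n % d == 0:
                return False
        return True

    print([k for k in range(20) if is_prime(k)])  # [2, 3, 5, 7, 11, 13, 17, 19]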
Decidability
A decision problem is decidable or effectively solvable if the set of inputs (or natural numbers) for which the answer is yes is a recursive set. A problem is partially decidable, semidecidable, solvable, or provable if the set of inputs (or natural numbers) for which the answer is yes is a recursively enumerable set. Problems that are not decidable are undecidable. For those it is not possible to create an algorithm, efficient or otherwise, that solves them.
The halting problem is an important undecidable decision problem; for more examples, see list of undecidable problems.
Complete problems
Decision problems can be ordered according to many-one reducibility and related to feasible reductions such as polynomial-time reductions. A decision problem P is said to be complete for a set of decision problems S if P is a member of S and every problem in S can be reduced to P. Complete decision problems are used in computational complexity theory to characterize complexity classes of decision problems. For example, the Boolean satisfiability problem is complete for the class NP of decision problems under polynomial-time reducibility.
Function problems
Decision problems are closely related to function problems, which can have answers that are more complex than a simple 'yes' or 'no'. A corresponding function problem is "given two numbers x and y, what is x divided by y?".
A function problem consists of a partial function f; the informal "problem" is to compute the values of f on the inputs for which it is defined.
Every function problem can be turned into a decision problem; the decision problem is just the graph of the associated function. (The graph of a function f is the set of pairs (x,y) such that f(x) = y.) If this decision problem were effectively solvable then the function problem would be as well. This reduction does not respect computational complexity, however. For example, it is possible for the graph of a function to be decidable in polynomial time (in which case running time is computed as a function of the pair (x,y)) when the function is not computable in polynomial time (in which case running time is computed as a function of x alone). The function f(x) = 2x has this property.
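A sketch of the graph construction for the example f(x) = 2x, where the decision problem asks whether a pair (x, y) belongs to the graph:

    # The function problem "compute f(x) = 2x" turned into a decision problem:
    # decide membership of the pair (x, y) in the graph of f.
    def in_graph(x: int, y: int) -> bool:
        return y == 2 * x

    print(in_graph(3, 6))   # True:  (3, 6) is in the graph
    print(in_graph(3, 7))   # False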
Every decision problem can be converted into the function problem of computing the characteristic function of the set associated to the decision problem. If this function is computable then the associated decision problem is decidable. However, this reduction is more liberal than the standard reduction used in computational complexity (sometimes called polynomial-time many-one reduction); for example, the complexity of the characteristic functions of an NP-complete problem and its co-NP-complete complement is exactly the same even though the underlying decision problems may not be considered equivalent in some typical models of computation.
Optimization problems
Unlike decision problems, for which there is only one correct answer for each input, optimization problems are concerned with finding the best answer to a particular input. Optimization problems arise naturally in many applications, such as the traveling salesman problem and many questions in linear programming.
Function and optimization problems are often transformed into decision problems by considering the question of whether the output is equal to or less than or equal to a given value. This allows the complexity of the corresponding decision problem to be studied; and in many cases the original function or optimization problem can be solved by solving its corresponding decision problem. For example, in the traveling salesman problem, the optimization problem is to produce a tour with minimal weight. The associated decision problem is: for each N, to decide whether the graph has any tour with weight less than N. By repeatedly answering the decision problem, it is possible to find the minimal weight of a tour.
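A sketch of recovering the optimum from repeated decision queries, as described above. Here has_tour_below is a hypothetical stand-in for the decision oracle "is there a tour of weight less than N?" on some fixed graph:

    # Recover the minimal tour weight by binary search over thresholds,
    # using only the yes/no decision oracle.
    def minimal_weight(has_tour_below, upper_bound: int) -> int:
        lo, hi = 0, upper_bound          # the minimal weight lies in [lo, hi]
        while lo < hi:
            mid = (lo + hi) // 2
            if has_tour_below(mid + 1):  # some tour has weight <= mid?
                hi = mid
            else:
                lo = mid + 1
        return lo

    # Example with a stand-in oracle whose true optimum is 42:
    print(minimal_weight(lambda n: 42 < n, 1000))  # 42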
Because the theory of decision problems is very well developed, research in complexity theory has typically focused on decision problems. Optimization problems themselves are still of interest in computability theory, as well as in fields such as operations research.
| Mathematics | Computability theory | null |
8376 | https://en.wikipedia.org/wiki/Day | Day | A day is the time period of a full rotation of the Earth with respect to the Sun. On average, this is 24 hours (86,400 seconds). As a day passes at a given location it experiences morning, noon, afternoon, evening, and night. This daily cycle drives circadian rhythms in many organisms, which are vital to many life processes.
A collection of sequential days is organized into calendars as dates, almost always into weeks, months and years. A solar calendar organizes dates based on the Sun's annual cycle, giving consistent start dates for the four seasons from year to year. A lunar calendar organizes dates based on the Moon's lunar phase.
In common usage, a day starts at midnight, written as 00:00 or 12:00 am in 24- or 12-hour clocks, respectively. Because the time of midnight varies between locations, time zones are set up to facilitate the use of a uniform standard time. Other conventions are sometimes used, for example the Jewish religious calendar counts days from sunset to sunset, so the Jewish Sabbath begins at sundown on Friday. In astronomy, a day begins at noon so that observations throughout a single night are recorded as happening on the same day.
In specific applications, the definition of a day is slightly modified, such as in the SI day (exactly 86,400 seconds) used for computers and standards keeping, local mean time accounting for the Earth's natural fluctuation of a solar day, and the stellar day and sidereal day (using the celestial sphere) used for astronomy. In some countries outside of the tropics, daylight saving time is practiced, and each year there will be one 23-hour civil day and one 25-hour civil day. Due to slight variations in the rotation of the Earth, a leap second is occasionally inserted at the end of a UTC day, so while almost all days have a duration of 86,400 seconds, there are exceptional days with 86,401 seconds; in the half-century spanning 1972 through 2022, a total of 27 leap seconds were inserted, roughly one every other year.
Etymology
The term comes from the Old English term dæġ (), with its cognates such as dagur in Icelandic, Tag in German, and dag in Norwegian, Danish, Swedish and Dutch – all stemming from a Proto-Germanic root *dagaz.
Definitions
Apparent and mean solar day
Several definitions of this universal human concept are used according to context, need, and convenience. Besides the day of 24 hours (86,400 seconds), the word day is used for several different spans of time based on the rotation of the Earth around its axis. An important one is the solar day, the time it takes for the Sun to return to its culmination point (its highest point in the sky). Due to an orbit's eccentricity, the Sun resides in one of the orbit's foci instead of the middle. Consequently, due to Kepler's second law, the planet travels at different speeds at various positions in its orbit, and thus a solar day is not the same length of time throughout the orbital year. Because the Earth moves along an eccentric orbit around the Sun while the Earth spins on an inclined axis, this period can be up to 7.9 seconds more than (or less than) 24 hours. In recent decades, the average length of a solar day on Earth has been about 86,400.002 seconds (24.000 000 6 hours). There are currently about 365.2421875 solar days in one mean tropical year.
Ancient custom has a new day starting at either the rising or setting of the Sun on the local horizon (Italian reckoning, for example, being 24 hours from sunset, old style). The exact moment of, and the interval between, two sunrises or sunsets depends on the geographical position (longitude and latitude, as well as altitude), and the time of year (as indicated by ancient hemispherical sundials).
A more constant day can be defined by the Sun passing through the local meridian, which happens at local noon (upper culmination) or midnight (lower culmination). The exact moment is dependent on the geographical longitude, and to a lesser extent on the time of the year. The length of such a day is nearly constant (24 hours ± 30 seconds). This is the time as indicated by modern sundials.
A further improvement defines a fictitious mean Sun that moves with constant speed along the celestial equator; the speed is the same as the average speed of the real Sun, but this removes the variation over a year as the Earth moves along its orbit around the Sun (due to both its velocity and its axial tilt).
In terms of Earth's rotation, the average solar day corresponds to about 360.9856° of rotation. A day lasts for more than 360° of rotation because of the Earth's revolution around the Sun. With a full year being slightly more than 360 days, the Earth's daily motion along its orbit around the Sun is slightly less than 1°, so the solar day is slightly less than 361° of rotation.
Elsewhere in the Solar System or other parts of the universe, a day is a full rotation of another large astronomical object with respect to its star.
Civil day
For civil purposes, a common clock time is typically defined for an entire region based on the local mean solar time at a central meridian. Such time zones began to be adopted about the middle of the 19th century when railroads with regularly occurring schedules came into use, with most major countries having adopted them by 1929. As of 2015, throughout the world, 40 such zones are now in use: the central zone, from which all others are defined as offsets, is known as UTC+00, which uses Coordinated Universal Time (UTC).
The most common convention starts the civil day at midnight: this is near the time of the lower culmination of the Sun on the central meridian of the time zone. Such a day may be called a calendar day.
A day is commonly divided into 24 hours, with each hour being made up of 60 minutes, and each minute composed of 60 seconds.
Sidereal day
A sidereal day or stellar day is the span of time it takes for the Earth to make one entire rotation with respect to the celestial background or a distant star (assumed to be fixed). Measuring a day as such is used in astronomy. A sidereal day is about 4 minutes less than a solar day of 24 hours (23 hours 56 minutes and 4.09 seconds), or 0.99726968 of a solar day of 24 hours. There are about 366.2422 stellar days in one mean tropical year (one stellar day more than the number of solar days).
Besides a stellar day on Earth, other bodies in the Solar System have day times of their own, with durations set by each body's rotation.
In the International System of Units
In the International System of Units (SI), a day is not an official unit, but is accepted for use with SI. A day, with symbol d, is defined using SI units as 86,400 seconds; the second is the base unit of time in SI units. In 1967–68, during the 13th CGPM (Resolution 1), the International Bureau of Weights and Measures (BIPM) redefined a second as "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between two hyperfine levels of the ground state of the caesium-133 atom". This makes the SI-based day last exactly 794,243,384,928,000 of those periods.
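The figure follows directly from the two definitions: 86,400 s/day × 9,192,631,770 periods/s = 794,243,384,928,000 periods per day.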
In decimal and metric time
Various decimal or metric time proposals have been made, but they do not redefine the day, using the day or sidereal day as a base unit. Metric time uses metric prefixes to keep time. It uses the day as the base unit, with smaller units being fractions of a day: a metric hour (deci) is 1/10 of a day; a metric minute (milli) is 1/1000 of a day; etc. Similarly, in decimal time, the length of the day is the same as in normal time. A day is also split into 10 hours, and 10 days comprise a décade – the equivalent of a week. 3 décades make a month. Various decimal time proposals did not redefine the day: Henri de Sarrauton's proposal kept days, and subdivided hours into 100 minutes; in Mendizábal y Tamborel's proposal, the sidereal day was the basic unit, with subdivisions made upon it; and Rey-Pailhade's proposal divided the day into 100 cés.
Other definitions
The word refers to various similarly defined ideas, such as:
Full day
24 hours (exactly) (a nychthemeron)
A day counting approximation, for example "See you in three days." or "the following day"
The full day covering both the dark and light periods, beginning from the start of the dark period or from a point near the middle of the dark period
A full dark and light period, sometimes called a nychthemeron in English, from the Greek for night-day. Many other languages have a separate word for a full day.
Part of a date: the day of the year (doy) in ordinal dates, day of the month (dom) in calendar dates or day of the week (dow) in week dates.
Time regularly spent at paid work on a single work day, cf. man-day and workweek.
Daytime
The period of light when the Sun is above the local horizon (that is, the time period from sunrise to sunset)
The time period from 06:00 (6:00 am) to 18:00 (6:00 pm) or to 21:00 (9:00 pm), or another fixed clock period overlapping or offset from other time periods such as "morning", "afternoon", or "evening".
The time period from first-light "dawn" to last-light "dusk".
Other
A specific period of the day, which may vary by context, such as "the school day" or "the work day".
Variations in length
Mainly due to tidal deceleration – the Moon's gravitational pull slowing down the Earth's rotation – the Earth's rotational period is slowing. Because of the way the second is defined, the mean length of a solar day is now about 86,400.002 seconds, and is increasing by about 2 milliseconds per century.
Since the rotation rate of the Earth is slowing, the length of a second fell out of sync with a second derived from the rotational period. This created the need for leap seconds, which insert extra seconds into Coordinated Universal Time (UTC). Although typically 86,400 seconds in duration, a civil day can be either 86,401 or 86,399 SI seconds long on such a day. Other than the two-millisecond variation from tidal deceleration, other factors minutely affect the day's length, which creates an irregularity in the placement of leap seconds. Leap seconds are announced in advance by the International Earth Rotation and Reference Systems Service (IERS), which measures the Earth's rotation and determines whether a leap second is necessary.
Geological day lengths
Paleontologist John W. Wells discovered that the day lengths of geological periods can be estimated by measuring growth rings in coral fossils, because some biological systems are affected by the tide. The length of a day at the Earth's formation is estimated at 6 hours. Arbab I. Arbab plotted day lengths over time and found a curved line, which he attributed to changes in the volume of water present affecting Earth's rotation.
Boundaries
For most diurnal animals, the day naturally begins at dawn and ends at sunset. Humans, with their cultural norms and scientific knowledge, have employed several different conceptions of the day's boundaries.
In the Hebrew Bible, Genesis 1:5 defines a day in terms of "evening" and "morning" before recounting the creation of the Sun to illuminate it: "And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day."
The Jewish day begins at either sunset or nightfall (when three second-magnitude stars appear).
Medieval Europe also followed this tradition, known as Florentine reckoning: in this system, a reference like "two hours into the day" meant two hours after sunset, and thus times during the evening need to be shifted back one calendar day in modern reckoning.
Days such as Christmas Eve, Halloween (“All Hallows’ Eve”), and the Eve of Saint Agnes are remnants of the older pattern when holidays began during the prior evening.
The common convention among the ancient Romans, ancient Chinese and in modern times is for the civil day to begin at midnight, i.e. 00:00, and to last a full 24 hours until 24:00, i.e. 00:00 of the next day. The International Meridian Conference of 1884 resolved
That the Conference expresses the hope that as soon as may be practicable the astronomical and nautical days will be arranged everywhere to begin at midnight.
In ancient Egypt the day was reckoned from sunrise to sunrise.
Prior to 1926, Turkey had two time systems: Turkish, counting the hours from sunset, and French, counting the hours from midnight.
Parts
Humans have divided the day in rough periods, which can have cultural implications, and other effects on humans' biological processes. The parts of the day do not have set times; they can vary by lifestyle or hours of daylight in a given place.
Daytime
Daytime is the part of the day during which sunlight directly reaches the ground, assuming that there are no obstacles. The length of daytime averages slightly more than half of the 24-hour day. Two effects make daytime on average longer than night. The Sun is not a point but has an apparent size of about 32 minutes of arc. Additionally, the atmosphere refracts sunlight in such a way that some of it reaches the ground even when the Sun is below the horizon by about 34 minutes of arc. So the first light reaches the ground when the centre of the Sun is still below the horizon by about 50 minutes of arc. Since the Sun's apparent motion is about 15 minutes of arc per minute of time, this corresponds to roughly 3.3 extra minutes of light at sunrise and again at sunset. Thus, daytime is on average around 7 minutes longer than 12 hours.
Daytime is further divided into morning, afternoon, and evening. Morning occurs between sunrise and noon. Afternoon occurs between noon and sunset, or between noon and the start of evening. This period of time sees humans' highest body temperature, an increase in traffic collisions, and a decrease in productivity. Evening begins around 5 or 6 pm, or when the Sun sets, and ends when one goes to bed.
Twilight
Twilight is the period before sunrise and after sunset in which there is natural light but no direct sunlight. The morning twilight begins at dawn and ends at sunrise, while the evening twilight begins at sunset and ends at dusk. Both periods of twilight can be divided into civil twilight, nautical twilight, and astronomical twilight. Civil twilight is when the sun is up to 6 degrees below the horizon; nautical when it is up to 12 degrees below, and astronomical when it is up to 18 degrees below.
Night
Night is the period in which the sky is dark; the period between dusk and dawn when no light from the sun is visible. Light pollution during night can impact human and animal life, for example by disrupting sleep.
| Physical sciences | Physics | null |
8377 | https://en.wikipedia.org/wiki/Database | Database | In computing, a database is an organized collection of data or a type of data store based on the use of a database management system (DBMS), the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a database system. Often the term "database" is also used loosely to refer to any of the DBMS, the database system or an application associated with the database.
Small databases can be stored on a file system, while large databases are hosted on computer clusters or cloud storage. The design of databases spans formal techniques and practical considerations, including data modeling, efficient data representation and storage, query languages, security and privacy of sensitive data, and distributed computing issues, including supporting concurrent access and fault tolerance.
Computer scientists may classify database management systems according to the database models that they support. Relational databases became dominant in the 1980s. These model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular, collectively referred to as NoSQL, because they use different query languages.
Terminology and overview
Formally, a "database" refers to a set of related data accessed through the use of a "database management system" (DBMS), which is an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used to refer to any collection of related data (such as a spreadsheet or a card index) as size and usage requirements typically necessitate use of a database management system.
Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups (a short example follows the list):
Data definition – Creation, modification and removal of definitions that detail how the data is to be organized.
Update – Insertion, modification, and deletion of the data itself.
Retrieval – Selecting data according to specified criteria (e.g., a query, a position in a hierarchy, or a position in relation to other data) and providing that data either directly to the user, or making it available for further processing by the database itself or by other applications. The retrieved data may be made available in a more or less direct form without modification, as it is stored in the database, or in a new form obtained by altering it or combining it with existing data from the database.
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
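A minimal sketch of the first three groups using Python's built-in sqlite3 module (the table and data are illustrative):

    # Data definition, update, and retrieval with Python's built-in sqlite3.
    import sqlite3

    con = sqlite3.connect(":memory:")        # an in-memory example database
    cur = con.cursor()

    # Data definition: describe how the data is to be organized.
    cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

    # Update: insert (and later modify or delete) the data itself.
    cur.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                    [(1, "Ada", "Engineering"), (2, "Grace", "Research")])

    # Retrieval: select data according to specified criteria.
    for row in cur.execute("SELECT name FROM employee WHERE dept = ?", ("Engineering",)):
        print(row)                           # ('Ada',)

    con.commit()
    con.close()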
Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database.
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.
Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
History
The sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. These performance increases were enabled by the technology progress in the areas of processors, computer memory, computer storage, and computer networks. The concept of a database was made possible by the emergence of direct access storage media such as magnetic disks, which became widely available in the mid-1960s; earlier systems relied on sequential storage of data on magnetic tape. The subsequent development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational.
The two main early navigational data models were the hierarchical model and the CODASYL model (network model). These were characterized by the use of pointers (often physical disk addresses) to follow relationships from one record to another.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and they remain dominant: IBM Db2, Oracle, MySQL, and Microsoft SQL Server are the most searched DBMSs. The dominant database language, standardized SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object–relational impedance mismatch, which led to the coining of the term "post-relational" and also the development of hybrid object–relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key–value stores and document-oriented databases. A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL systems, which commercially available relational DBMSs had struggled to provide.
1960s, navigational DBMS
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the Database Task Group within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the CODASYL approach, and soon a number of commercial products based on this approach entered the market.
The CODASYL approach offered applications the ability to navigate around a linked data set which was formed into a large network. Applications could find records by one of three methods:
Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a declarative query language for end users (as distinct from the navigational API). However, CODASYL databases were complex and required significant training and effort to produce useful applications.
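A minimal Python sketch of the three access styles (illustrative only, not real CODASYL syntax; the record layout and keys are hypothetical):

    # 1. CALC key: the record is located by hashing its key (here, a dict).
    records = {
        "C42": {"type": "customer", "name": "Ada", "first_order": "O7"},
        "O7":  {"type": "order", "item": "disk pack", "next_order": "O9"},
        "O9":  {"type": "order", "item": "tape reel", "next_order": None},
    }
    customer = records["C42"]

    # 2. Set navigation: follow pointers from the owner record to its members.
    order_key = customer["first_order"]
    while order_key is not None:
        print(records[order_key]["item"])
        order_key = records[order_key]["next_order"]

    # 3. Sequential scan: examine every record in turn.
    orders = [r for r in records.values() if r["type"] == "order"]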
IBM also had its own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed: the term was popularized by Bachman's 1973 Turing Award presentation The Programmer as Navigator. IMS is classified by IBM as a hierarchical database. IDMS and Cincom Systems' TOTAL databases are classified as network databases. IMS remains in use.
1970s, relational DBMS
Edgar F. Codd worked at IBM in San Jose, California, in one of their offshoot offices that were primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.
In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to organize the data as a number of "tables", each table being used for a different type of entity. Each table would contain a fixed number of columns containing the attributes of the entity. One or more columns of each table were designated as a primary key by which the rows of the table could be uniquely identified; cross-references between tables always used these primary keys, rather than disk addresses, and queries would join tables based on these key relationships, using a set of operations based on the mathematical system of relational calculus (from which the model takes its name). Splitting the data into a set of normalized tables (or relations) aimed to ensure that each "fact" was only stored once, thus simplifying update operations. Virtual tables called views could present the data in different ways for different users, but views could not be directly updated.
Codd used mathematical terms to define the model: relations, tuples, and domains rather than tables, rows, and columns. The terminology that is now familiar came from early implementations. Codd would later criticize the tendency for practical implementations to depart from the mathematical foundations on which the model was based.
The use of primary keys (user-oriented identifiers) to represent cross-table relationships, rather than disk addresses, had two primary motivations. From an engineering perspective, it enabled tables to be relocated and resized without expensive database reorganization. But Codd was more interested in the difference in semantics: the use of explicit identifiers made it easier to define update operations with clean mathematical definitions, and it also enabled query operations to be defined in terms of the established discipline of first-order predicate calculus; because these operations have clean mathematical properties, it becomes possible to rewrite queries in provably correct ways, which is the basis of query optimization. There is no loss of expressiveness compared with the hierarchic or network models, though the connections between tables are no longer so explicit.
In the hierarchic and network models, records were allowed to have a complex internal structure. For example, the salary history of an employee might be represented as a "repeating group" within the employee record. In the relational model, the process of normalization led to such internal structures being replaced by data held in multiple tables, connected only by logical keys.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single variable-length record. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
As well as identifying rows/records using logical identifiers rather than disk addresses, Codd changed the way in which applications assembled data from multiple records. Rather than requiring applications to gather data one record at a time by navigating the links, they would use a declarative query language that expressed what data was required, rather than the access path by which it should be found. Finding an efficient access path to the data became the responsibility of the database management system, rather than the application programmer. This process, called query optimization, depended on the fact that queries were expressed in terms of mathematical logic.
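A minimal sketch of both ideas, using Python's built-in sqlite3 module (table and column names are hypothetical): the user and phone data are normalized into separate tables, and the join below states what data is wanted while the DBMS chooses the access path:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE user  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE phone (user_id INTEGER REFERENCES user(id), number TEXT);
    """)
    con.execute("INSERT INTO user VALUES (1, 'Ada')")
    con.execute("INSERT INTO phone VALUES (1, '555-0100')")   # optional record
    # Declarative query: say WHAT is wanted, not HOW to find it.
    rows = con.execute("""
        SELECT user.name, phone.number
        FROM user JOIN phone ON phone.user_id = user.id
    """).fetchall()
    print(rows)   # [('Ada', '555-0100')]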
Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System. The system remained in production until 1998.
Integrated approach
In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at a lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued in certain applications by some companies like Netezza and Oracle (Exadata).
Late 1970s, SQL DBMS
IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (IBM Db2).
Larry Ellison's Oracle Database (or more simply, Oracle) started from a different chain, based on IBM's papers on System R. Though Oracle V1 implementations were completed in 1978, it was with Oracle Version 2 in 1979 that Ellison beat IBM to market.
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd's paper was also read and Mimer SQL was developed in the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise.
Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.
1980s, on the desktop
The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top selling software titles in the 1980s and early 1990s.
1990s, object-oriented
The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows relationships between data to be expressed as relations between objects and their attributes, rather than between individual fields. The term "object–relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object–relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object–relational mappings (ORMs) attempt to solve the same problem.
2000s, NoSQL and NewSQL
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in applications where the data is conveniently viewed as a collection of documents, with a structure that can vary from the very flexible to the highly rigid: examples include scientific articles, patents, tax filings, and personnel records.
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally.
In recent years, there has been a strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem, it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases are using what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
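The trade-off can be illustrated with a toy Python sketch (an assumption-laden model, not a real protocol: the Replica class, timestamps, and last-write-wins merge are all simplifications):

    # Replicas accept writes independently and later converge via an
    # "anti-entropy" merge that keeps the newest version of each key.
    class Replica:
        def __init__(self):
            self.store = {}                       # key -> (timestamp, value)

        def write(self, key, value, ts):
            self.store[key] = (ts, value)

        def read(self, key):
            entry = self.store.get(key)
            return entry[1] if entry else None

    def anti_entropy(a, b):
        # Keep the newest version of every key on both replicas.
        for key in set(a.store) | set(b.store):
            newest = max(a.store.get(key, (0, None)), b.store.get(key, (0, None)))
            a.store[key] = b.store[key] = newest

    r1, r2 = Replica(), Replica()
    r1.write("x", "red", ts=1)        # accepted by r1 while partitioned from r2
    print(r2.read("x"))               # None -- the replicas temporarily disagree
    anti_entropy(r1, r2)              # background synchronization
    print(r2.read("x"))               # 'red' -- the replicas have converged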
NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system.
Use cases
Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software).
Databases are used to hold administrative information and more specialized data, such as engineering data or economic models. Examples include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database.
Classification
One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface type. This section lists a few of the adjectives used to characterize different kinds of databases.
An in-memory database is a database that primarily resides in main memory, but is typically backed up by non-volatile computer data storage. Main memory databases are faster than disk databases, and so are often used where response time is critical, such as in telecommunications network equipment.
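A minimal sketch: SQLite (via Python's built-in sqlite3 module) can keep an entire database in main memory, with no disk file at all (table and column names are hypothetical):

    import sqlite3

    con = sqlite3.connect(":memory:")   # the whole database lives in RAM
    con.execute("CREATE TABLE sensor (id INTEGER PRIMARY KEY, reading REAL)")
    con.execute("INSERT INTO sensor (reading) VALUES (?)", (21.7,))
    print(con.execute("SELECT reading FROM sensor").fetchall())   # [(21.7,)]
    con.close()                         # contents vanish with the connection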
An active database includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many databases provide active database features in the form of database triggers.
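Triggers are the most common active-database feature. A minimal sketch using Python's built-in sqlite3 module (table, column, and trigger names are hypothetical):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL);
        CREATE TABLE audit   (account_id INTEGER, old REAL, new REAL);
        -- The trigger fires automatically on every balance change.
        CREATE TRIGGER log_balance AFTER UPDATE OF balance ON account
        BEGIN
            INSERT INTO audit VALUES (OLD.id, OLD.balance, NEW.balance);
        END;
    """)
    con.execute("INSERT INTO account VALUES (1, 100.0)")
    con.execute("UPDATE account SET balance = 75.0 WHERE id = 1")
    print(con.execute("SELECT * FROM audit").fetchall())   # [(1, 100.0, 75.0)]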
A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the cloud", while its applications are developed by programmers and then maintained and used by end-users through a web browser and open APIs.
Data warehouses archive data from operational databases and often from external sources such as market research firms. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Essential components of data warehousing include extracting, transforming, and loading (ETL) data, as well as analyzing, mining, and managing the data so as to make it available for further use.
A deductive database combines logic programming with a relational database.
A distributed database is one in which both the data and the DBMS span multiple computers.
A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi-structured, information. Document-oriented databases are one of the main categories of NoSQL databases.
An embedded database system is a DBMS which is tightly integrated with application software that requires access to stored data in such a way that the DBMS is hidden from the application's end-users and requires little or no ongoing maintenance.
End-user databases consist of data developed by individual end-users. Examples of these are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases.
A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a single database by a federated database management system (FDBMS), which transparently integrates multiple autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system), and provides them with an integrated conceptual view.
Sometimes the term multi-database is used as a synonym for federated database, though it may refer to a less integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single application. In this case, typically middleware is used for distribution, which typically includes an atomic commit protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating databases.
A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
An array DBMS is a kind of NoSQL DBMS that allows modeling, storage, and retrieval of (usually large) multi-dimensional arrays such as satellite images and climate simulation output.
In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias, where users can conveniently jump around the text. The World Wide Web is thus a large distributed hypertext database.
A knowledge base (abbreviated KB, kb or Δ) is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. It may also be a collection of data representing problems together with their solutions and related experiences.
A mobile database can be carried on or synchronized from a mobile computing device.
Operational databases store detailed data about the operations of an organization. They typically process relatively high volumes of updates using transactions. Examples include customer databases that record contact, credit, and demographic information about a business's customers, personnel databases that hold information such as salary, benefits, skills data about employees, enterprise resource planning systems that record details about product components, parts inventory, and financial databases that keep track of the organization's money, accounting and financial dealings.
A parallel database seeks to improve performance through parallelization for tasks such as loading data, building indexes and evaluating queries.
The major parallel DBMS architectures which are induced by the underlying hardware architecture are:
Shared memory architecture, where multiple processors share the main memory space, as well as other data storage.
Shared disk architecture, where each processing unit (typically consisting of multiple processors) has its own main memory, but all units share the other storage.
Shared-nothing architecture, where each processing unit has its own main memory and other storage.
Probabilistic databases employ fuzzy logic to draw inferences from imprecise data.
Real-time databases process transactions fast enough for the result to come back and be acted on right away.
A spatial database can store the data with multidimensional features. The queries on such data include location-based queries, like "Where is the closest hotel in my area?".
A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL. More specifically the temporal aspects usually include valid-time and transaction-time.
A terminology-oriented database builds upon an object-oriented database, often customized for a specific field.
An unstructured data database is intended to store in a manageable and protected way diverse objects that do not fit naturally and conveniently in common databases. It may include email messages, documents, journals, multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now support unstructured data in various ways, and new dedicated DBMSs are emerging.
Database management system
Connolly and Begg define database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database." Examples of DBMSs include MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle Database, and Microsoft Access.
The DBMS acronym is sometimes extended to indicate the underlying database model, with RDBMS for the relational, OODBMS for the object (oriented) and ORDBMS for the object–relational model. Other extensions can indicate some other characteristics, such as DDBMS for a distributed database management system.
The functionality provided by a DBMS can vary enormously. The core functionality is the storage, retrieval and update of data. Codd proposed the following functions and services a fully-fledged general purpose DBMS should provide:
Data storage, retrieval and update
User accessible catalog or data dictionary describing the metadata
Support for transactions and concurrency
Facilities for recovering the database should it become damaged
Support for authorization of access and update of data
Access support from remote locations
Enforcing constraints to ensure data in the database abides by certain rules
The DBMS is also generally expected to provide a set of utilities for such purposes as may be necessary to administer the database effectively, including import, export, monitoring, defragmentation, and analysis utilities. The core part of the DBMS that mediates between the database and the application interface is sometimes referred to as the database engine.
Often DBMSs will have configuration parameters that can be statically and dynamically tuned, for example the maximum amount of main memory on a server the database can use. The trend is to minimize the amount of manual configuration, and for cases such as embedded databases the need to target zero-administration is paramount.
Major enterprise DBMSs have tended to increase in size and functionality and have involved thousands of person-years of development effort over their lifetimes.
Early multi-user DBMSs typically allowed the application to reside only on the same computer, with access via terminals or terminal emulation software. The client–server architecture was a development where the application resided on a client desktop and the database on a server, allowing the processing to be distributed. This evolved into a multitier architecture incorporating application servers and web servers, with the end-user interface accessed via a web browser and the database only directly connected to the adjacent tier.
A general-purpose DBMS will provide public application programming interfaces (API) and optionally a processor for database languages such as SQL to allow applications to be written to interact with and manipulate the database. A special-purpose DBMS may use a private API and be specifically customized and linked to a single application. For example, an email system performs many of the functions of a general-purpose DBMS, such as message insertion, message deletion, attachment handling, blocklist lookup, and associating messages with an email address; however, these functions are limited to what is required to handle email.
Application
External interaction with the database will be via an application program that interfaces with the DBMS. This can range from a database tool that allows users to execute SQL queries textually or graphically, to a website that happens to use a database to store and search information.
Application program interface
A programmer will code interactions to the database (sometimes referred to as a datasource) via an application program interface (API) or via a database language. The particular API or language chosen will need to be supported by the DBMS, possibly indirectly via a preprocessor or a bridging API. Some APIs aim to be database independent, ODBC being a commonly known example. Other common APIs include JDBC and ADO.NET.
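A minimal sketch of such an API: Python's DB-API (PEP 249) plays a role analogous to ODBC or JDBC, and the built-in sqlite3 module implements it (the table name is hypothetical):

    import sqlite3   # any PEP 249 driver exposes the same connect/cursor shape

    con = sqlite3.connect(":memory:")
    cur = con.cursor()
    cur.execute("CREATE TABLE t (x INTEGER)")
    cur.execute("INSERT INTO t VALUES (?)", (42,))   # parameterized, not string-built
    cur.execute("SELECT x FROM t WHERE x > ?", (0,))
    print(cur.fetchall())                            # [(42,)]
    con.commit()
    con.close()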
Database languages
Database languages are special-purpose languages, which allow one or more of the following tasks, sometimes distinguished as sublanguages (a short illustrative statement of each kind appears after the list):
Data control language (DCL) – controls access to data;
Data definition language (DDL) – defines data types such as creating, altering, or dropping tables and the relationships among them;
Data manipulation language (DML) – performs tasks such as inserting, updating, or deleting data occurrences;
Data query language (DQL) – allows searching for information and computing derived information.
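One illustrative ANSI-style SQL statement per sublanguage, shown as Python strings since some of them (GRANT, for example) are standard SQL but not supported by every engine; table and role names are hypothetical:

    # One illustrative statement per sublanguage (hypothetical names).
    statements = {
        "DCL": "GRANT SELECT ON employee TO payroll_clerk",
        "DDL": "CREATE TABLE employee (id INTEGER PRIMARY KEY, name VARCHAR(80))",
        "DML": "INSERT INTO employee (id, name) VALUES (1, 'Ada')",
        "DQL": "SELECT name FROM employee WHERE id = 1",
    }
    for kind, sql in statements.items():
        print(f"{kind}: {sql}")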
Database languages are specific to a particular data model. Notable examples include:
SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.
OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL.
XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and Db2, and also by in-memory XML processors such as Saxon.
SQL/XML combines XQuery with SQL.
A database language may also incorporate features like:
DBMS-specific configuration and storage engine management
Computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing
Constraint enforcement (e.g. in an automotive database, only allowing one engine type per car)
Application programming interface version of the query language, for programmer convenience
Storage
Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Databases as digital objects contain three layers of information which must be stored: the data, the structure, and the semantics. Proper storage of all three layers is needed for future preservation and longevity of the database. Putting data into permanent storage is generally the responsibility of the database engine, a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often using the operating system's file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the way the data look at the conceptual and external levels, but in ways that attempt to optimize the reconstruction of these levels when needed by users and programs, as well as the computation of additional types of needed information from the data (e.g., when querying the database).
Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be used in the same database.
Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases.
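A toy Python sketch of the two layouts (names and values are made up): the same records stored row-wise and column-wise, with a single-column aggregate touching only one array in the columnar form:

    # Row-oriented: each record is stored contiguously.
    rows = [(1, "Ada", 36), (2, "Bob", 29), (3, "Cy", 51)]

    # Column-oriented: each attribute is stored contiguously, so a scan over
    # one column (e.g. an aggregate) reads only that column's data.
    columns = {
        "id":   [1, 2, 3],
        "name": ["Ada", "Bob", "Cy"],
        "age":  [36, 29, 51],
    }
    print(sum(columns["age"]) / len(columns["age"]))   # average age from one column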
Materialized views
Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. Storing such views avoids the expense of computing them each time they are needed. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy.
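A minimal sketch using Python's built-in sqlite3 module, which lacks native materialized views, so a plain table simulates one (table and column names are hypothetical):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE sale (region TEXT, amount REAL);
        INSERT INTO sale VALUES ('north', 10.0), ('north', 5.0), ('south', 7.0);
    """)
    # Precompute the aggregate once instead of on every query.
    con.execute("""CREATE TABLE sales_by_region AS
                   SELECT region, SUM(amount) AS total FROM sale GROUP BY region""")
    print(con.execute("SELECT * FROM sales_by_region ORDER BY region").fetchall())
    # After further inserts into 'sale', this table is stale until recomputed --
    # the synchronization overhead described above.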
Replication
Occasionally a database employs storage redundancy by replicating database objects (with one or more copies) to increase data availability (both to improve performance of simultaneous multiple end-user accesses to the same database object, and to provide resiliency in a case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object copies. In many cases, the entire database is replicated.
Virtualization
With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach.
Security
Database security deals with all aspects of protecting the database content, its owners, and its users. It ranges from protection against intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or the use of specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by specially authorized (by the database owner) personnel who use dedicated, protected DBMS security interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
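A minimal sketch of the payroll example, using Python's built-in sqlite3 module (table, column, and view names are hypothetical; views stand in for subschemas here, since SQLite has no user accounts of its own):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE employee (id INTEGER, name TEXT, salary REAL, medical TEXT);
        INSERT INTO employee VALUES (1, 'Ada', 90000.0, 'none');
        -- Each view exposes only the columns one user group may see.
        CREATE VIEW payroll_view AS SELECT id, name, salary  FROM employee;
        CREATE VIEW medical_view AS SELECT id, name, medical FROM employee;
    """)
    print(con.execute("SELECT * FROM payroll_view").fetchall())  # no medical data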
Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, destruction, or removal; see physical security) and against unauthorized interpretation of them, or parts of them, into meaningful information (e.g., by inspecting the strings of bits that they comprise and deducing specific valid credit-card numbers; see data encryption).
Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this in the database. Monitoring can be set up to attempt to detect security breaches. Organizations should therefore take database security seriously: it helps safeguard them from breaches and attacks such as firewall intrusion, virus propagation, and ransomware, and protects essential information that must not be disclosed to outsiders.
Transactions and concurrency
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring or releasing a lock, etc.), an abstraction supported in databases and other systems. Each transaction has well-defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).
The acronym ACID describes some ideal properties of a database transaction: atomicity, consistency, isolation, and durability.
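A minimal sketch of atomicity using Python's built-in sqlite3 module (the account table and the simulated failure are hypothetical):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
    con.execute("INSERT INTO account VALUES (1, 100.0), (2, 50.0)")
    con.commit()

    def transfer(con, amount, fail=False):
        try:
            con.execute("UPDATE account SET balance = balance - ? WHERE id = 1", (amount,))
            if fail:
                raise RuntimeError("simulated crash mid-transfer")
            con.execute("UPDATE account SET balance = balance + ? WHERE id = 2", (amount,))
            con.commit()      # both legs become durable together
        except RuntimeError:
            con.rollback()    # atomicity: the half-done leg is undone

    transfer(con, 30.0, fail=True)
    print(con.execute("SELECT balance FROM account ORDER BY id").fetchall())
    # [(100.0,), (50.0,)] -- the failed transfer left no trace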
Migration
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership or TCOs), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should keep (if possible) the database-related applications (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may also be desirable to maintain some aspects of the internal architectural level. A complex or large database migration may be a complicated and costly (one-time) project by itself, which should be factored into the decision to migrate, even though tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help import databases from other popular DBMSs.
Building, maintaining, and tuning
After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be used for this purpose. A DBMS provides the needed user interfaces to be used by database administrators to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (such as security-related and storage allocation parameters).
When the database is ready (all its data structures and other needed components are defined), it is typically populated with the application's initial data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases, the database becomes operational while empty of application data, and data are accumulated during its operation.
After the database is created, initialized, and populated, it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added, new related application programs may be written to add to the application's functionality, and so on.
Backup and restore
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., cases when the database is found corrupted due to a software error, or if it has been updated with erroneous data). To achieve this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values of its data and their embedding in the database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When a database administrator decides to bring the database back to this state (e.g., by specifying a desired point in time when the database was in that state), these files are used to restore it.
Static analysis
Static analysis techniques for software verification can be applied also in the scenario of query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques. The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular, for security purposes, such as fine-grained access control, watermarking, etc.
Miscellaneous features
Other DBMS features might include:
Database logs – Help keep a history of the executed functions.
Graphics component for producing graphs and charts, especially in a data warehouse system.
Query optimizer – Performs query optimization on every query to choose an efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine; a small sketch follows this list.
Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
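As a small illustration of the optimizer at work, SQLite (via Python's built-in sqlite3 module) can be asked which plan it chose; table and index names are hypothetical:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, kind TEXT)")
    con.execute("CREATE INDEX event_kind ON event (kind)")
    # EXPLAIN QUERY PLAN reports the plan the optimizer chose for a query.
    for row in con.execute(
            "EXPLAIN QUERY PLAN SELECT * FROM event WHERE kind = ?", ("login",)):
        print(row)   # detail reads like: SEARCH event USING INDEX event_kind (kind=?)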
Increasingly, there are calls for a single system that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some market such offerings as "DevOps for database".
Design and modeling
The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity–relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organization, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modeling notation used to express that design).
The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary "fact" is only recorded in one place, so that insertions, updates, and deletions automatically maintain consistency.
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like, which depend on the particular DBMS. This is often called physical database design, and the output is the physical data model. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: Physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control to database objects as well as defining security levels and methods for the data itself.
Models
A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format.
Common logical data models for databases include:
Navigational databases
Hierarchical database model
Network model
Graph database
Relational model
Entity–relationship model
Enhanced entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
An object–relational database combines the two related structures.
Physical data models include:
Inverted index
Flat file
Other models include:
Multidimensional model
Array model
Multivalue model
Specialized models are optimized for particular types of data:
XML database
Semantic model
Content store
Event store
Time series model
External, conceptual, and internal views
A database management system provides three views of the database data:
The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
The conceptual level (or logical level) unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.
While there is typically only one conceptual and internal view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are in the interest of the human resources department. Thus different departments need different views of the company's database.
The three-level database architecture relates to the concept of data independence which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance.
The conceptual view provides a level of indirection between internal and external. On the one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structures.
Research
Database technology has been an active research topic since the 1960s, both in academia and in the research and development groups of companies (for example IBM Research). Research activity includes theory and development of prototypes. Notable research topics have included models, the atomic transaction concept, related concurrency control techniques, query languages and query optimization methods, RAID, and more.
The database research area has several dedicated academic journals (for example, ACM Transactions on Database Systems-TODS, Data and Knowledge Engineering-DKE) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE).
| Technology | Computer software | null |
8378 | https://en.wikipedia.org/wiki/Dipole | Dipole | In physics, a dipole () is an electromagnetic phenomenon which occurs in two ways:
An electric dipole deals with the separation of the positive and negative electric charges found in any electromagnetic system. A simple example of this system is a pair of charges of equal magnitude but opposite sign separated by some typically small distance. (A permanent electric dipole is called an electret.)
A magnetic dipole is the closed circulation of an electric current system. A simple example is a single loop of wire with constant current through it. A bar magnet is an example of a magnet with a permanent magnetic dipole moment.
Dipoles, whether electric or magnetic, can be characterized by their dipole moment, a vector quantity. For the simple electric dipole, the electric dipole moment points from the negative charge towards the positive charge, and has a magnitude equal to the strength of each charge times the separation between the charges. (To be precise: for the definition of the dipole moment, one should always consider the "dipole limit", where, for example, the distance of the generating charges should converge to 0 while simultaneously, the charge strength should diverge to infinity in such a way that the product remains a positive constant.)
For the magnetic (dipole) current loop, the magnetic dipole moment points through the loop (according to the right hand grip rule), with a magnitude equal to the current in the loop times the area of the loop.
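In symbols, the two definitions above read (with $q$ the charge magnitude, $\mathbf{d}$ the displacement from the negative to the positive charge, $I$ the loop current, and $\mathbf{A}$ the loop's vector area):

$$\mathbf{p} = q\,\mathbf{d} \, , \qquad \mathbf{m} = I\,\mathbf{A} \, ,$$

and the point-dipole limit takes $|\mathbf{d}| \to 0$ and $q \to \infty$ with the product $p = q\,|\mathbf{d}|$ held constant.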
Similar to magnetic current loops, the electron particle and some other fundamental particles have magnetic dipole moments, as an electron generates a magnetic field identical to that generated by a very small current loop. However, an electron's magnetic dipole moment is not due to a current loop, but to an intrinsic property of the electron. The electron may also have an electric dipole moment though such has yet to be observed (see electron electric dipole moment).
A permanent magnet, such as a bar magnet, owes its magnetism to the intrinsic magnetic dipole moment of the electron. The two ends of a bar magnet are referred to as poles (not to be confused with monopoles, see Classification below) and may be labeled "north" and "south". In terms of the Earth's magnetic field, they are respectively "north-seeking" and "south-seeking" poles: if the magnet were freely suspended in the Earth's magnetic field, the north-seeking pole would point towards the north and the south-seeking pole would point towards the south. The dipole moment of the bar magnet points from its magnetic south to its magnetic north pole. In a magnetic compass, the north pole of a bar magnet points north. However, that means that Earth's geomagnetic north pole is the south pole (south-seeking pole) of its dipole moment and vice versa.
The only known mechanisms for the creation of magnetic dipoles are by current loops or quantum-mechanical spin since the existence of magnetic monopoles has never been experimentally demonstrated.
Classification
A physical dipole consists of two equal and opposite point charges: in the literal sense, two poles. Its field at large distances (i.e., distances large in comparison to the separation of the poles) depends almost entirely on the dipole moment as defined above. A point (electric) dipole is the limit obtained by letting the separation tend to 0 while keeping the dipole moment fixed. The field of a point dipole has a particularly simple form, and the order-1 term in the multipole expansion is precisely the point dipole field.
Although there are no known magnetic monopoles in nature, there are magnetic dipoles in the form of the quantum-mechanical spin associated with particles such as electrons (although the accurate description of such effects falls outside of classical electromagnetism). A theoretical magnetic point dipole has a magnetic field of exactly the same form as the electric field of an electric point dipole. A very small current-carrying loop is approximately a magnetic point dipole; the magnetic dipole moment of such a loop is the product of the current flowing in the loop and the (vector) area of the loop.
Any configuration of charges or currents has a 'dipole moment', which describes the dipole whose field is the best approximation, at large distances, to that of the given configuration. This is simply one term in the multipole expansion when the total charge ("monopole moment") is 0, as it always is for the magnetic case, since there are no magnetic monopoles. The dipole term is the dominant one at large distances: its field falls off in proportion to $1/r^3$, as compared to $1/r^4$ for the next (quadrupole) term and higher powers of $1/r$ for higher terms, or $1/r^2$ for the monopole term.
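For reference (standard notation, valid away from the origin), the field of an electric point dipole $\mathbf{p}$ placed at the origin is

$$\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \, \frac{3(\mathbf{p}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} - \mathbf{p}}{r^3} \, ,$$

which makes the $1/r^3$ falloff of the dipole term explicit; a magnetic point dipole $\mathbf{m}$ produces a field of exactly the same form with $\mathbf{p}/\varepsilon_0$ replaced by $\mu_0\,\mathbf{m}$.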
Molecular dipoles
Many molecules have dipole moments due to non-uniform distributions of positive and negative charges on the various atoms. Such is the case with polar compounds like hydrogen fluoride (HF), where electron density is shared unequally between atoms. Therefore, a molecule's dipole is an electric dipole with an inherent electric field that should not be confused with a magnetic dipole, which generates a magnetic field.
The physical chemist Peter J. W. Debye was the first scientist to study molecular dipoles extensively, and, as a consequence, dipole moments are measured in the non-SI unit named debye in his honor.
For molecules there are three types of dipoles:
Permanent dipoles: These occur when two atoms in a molecule have substantially different electronegativity: one atom attracts electrons more than another, becoming more negative, while the other atom becomes more positive. A molecule with a permanent dipole moment is called a polar molecule. See dipole–dipole attractions.
Instantaneous dipoles: These occur due to chance when electrons happen to be more concentrated in one place than another in a molecule, creating a temporary dipole. These dipoles are smaller in magnitude than permanent dipoles, but still play a large role in chemistry and biochemistry due to their prevalence. See instantaneous dipole.
Induced dipoles: These can occur when one molecule with a permanent dipole repels another molecule's electrons, inducing a dipole moment in that molecule. A molecule is polarized when it carries an induced dipole. See induced-dipole attraction.
More generally, an induced dipole of any polarizable charge distribution ρ (remember that a molecule has a charge distribution) is caused by an electric field external to ρ. This field may, for instance, originate from an ion or polar molecule in the vicinity of ρ or may be macroscopic (e.g., a molecule between the plates of a charged capacitor). The size of the induced dipole moment is equal to the product of the strength of the external field and the dipole polarizability of ρ.
Dipole moment values can be obtained from measurement of the dielectric constant. Some typical gas phase values given with the unit debye are:
carbon dioxide: 0
carbon monoxide: 0.112 D
ozone: 0.53 D
phosgene: 1.17 D
ammonia: 1.42 D
water vapor: 1.85 D
hydrogen cyanide: 2.98 D
cyanamide: 4.27 D
potassium bromide: 10.41 D
Potassium bromide (KBr) has one of the highest dipole moments because it is an ionic compound that exists as a molecule in the gas phase.
The overall dipole moment of a molecule may be approximated as a vector sum of bond dipole moments. As a vector sum it depends on the relative orientation of the bonds, so that information about the molecular geometry can be deduced from the dipole moment.
For example, the zero dipole of CO2 implies that the two C=O bond dipole moments cancel so that the molecule must be linear. For H2O the O−H bond moments do not cancel because the molecule is bent. For ozone (O3) which is also a bent molecule, the bond dipole moments are not zero even though the O−O bonds are between similar atoms. This agrees with the Lewis structures for the resonance forms of ozone which show a positive charge on the central oxygen atom.
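A worked Python sketch of the vector-sum rule for water (the ~1.5 D O−H bond moment is an assumed, textbook-style value; the bond angle is the accepted 104.5°):

    import math

    bond_moment = 1.5                  # debye, assumed approximate O-H bond moment
    angle = math.radians(104.5)        # H-O-H bond angle
    # Components perpendicular to the symmetry axis cancel; the components
    # along the axis add, each contributing cos(angle/2) of a bond moment.
    total = 2 * bond_moment * math.cos(angle / 2)
    print(f"{total:.2f} D")            # ~1.84 D, close to the measured 1.85 D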
An example in organic chemistry of the role of geometry in determining dipole moment is the cis and trans isomers of 1,2-dichloroethene. In the cis isomer the two polar C−Cl bonds are on the same side of the C=C double bond and the molecular dipole moment is 1.90 D. In the trans isomer, the dipole moment is zero because the two C−Cl bonds are on opposite sides of the C=C and cancel (and the two bond moments for the much less polar C−H bonds also cancel).
Another example of the role of molecular geometry is boron trifluoride, which has three polar bonds with a difference in electronegativity greater than the traditionally cited threshold of 1.7 for ionic bonding. However, due to the equilateral triangular distribution of the fluoride ions centered on and in the same plane as the boron cation, the symmetry of the molecule results in its dipole moment being zero.
Quantum-mechanical dipole operator
Consider a collection of N particles with charges qi and position vectors ri. For instance, this collection may be a molecule consisting of electrons, all with charge −e, and nuclei with charge eZi, where Zi is the atomic number of the i-th nucleus.
The dipole observable (physical quantity) has the quantum mechanical dipole operator:

$$\mathfrak{p} = \sum_{i=1}^{N} q_i \, \mathbf{r}_i \, .$$

Notice that this definition is valid only for neutral atoms or molecules, i.e. total charge equal to zero. In the ionized case, we have

$$\mathfrak{p} = \sum_{i=1}^{N} q_i \, (\mathbf{r}_i - \mathbf{r}_c) \, ,$$

where $\mathbf{r}_c$ is the center of mass of the molecule/group of particles.
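A minimal numerical sketch of the dipole sum (illustrative only; the charges and positions below are arbitrary assumptions) also demonstrates the origin-independence that holds for neutral systems:

```python
# Hedged sketch: classical/expectation-value analogue of the dipole sum
# p = sum_i q_i r_i for a collection of point charges. The charge
# positions are arbitrary illustrative values, not data from this article.
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge, C
DEBYE = 3.33564e-30         # 1 debye in C m

charges = np.array([+1.0, -1.0]) * E_CHARGE          # net charge zero
positions = np.array([[0.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0e-10]])          # 1 angstrom apart, m

p = np.sum(charges[:, None] * positions, axis=0)     # dipole moment, C m

# For a neutral system the result is independent of the chosen origin:
shifted = positions + np.array([5.0e-10, 0.0, 0.0])
p_shifted = np.sum(charges[:, None] * shifted, axis=0)
assert np.allclose(p, p_shifted)

print(f"|p| = {np.linalg.norm(p) / DEBYE:.2f} D")    # ~4.8 D for +/-e at 1 angstrom
```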
Atomic dipoles
A non-degenerate (S-state) atom can have only a zero permanent dipole. This fact follows quantum mechanically from the inversion symmetry of atoms. All 3 components of the dipole operator are antisymmetric under inversion with respect to the nucleus,

$$\mathfrak{I} \, \mathfrak{p} \, \mathfrak{I}^{-1} = -\mathfrak{p} \, ,$$

where $\mathfrak{p}$ is the dipole operator and $\mathfrak{I}$ is the inversion operator.
The permanent dipole moment of an atom in a non-degenerate state (see degenerate energy level) is given as the expectation (average) value of the dipole operator,

$$\langle \mathfrak{p} \rangle = \langle S \,|\, \mathfrak{p} \,|\, S \rangle \, ,$$

where $|S\rangle$ is an S-state, non-degenerate, wavefunction, which is symmetric or antisymmetric under inversion: $\mathfrak{I}\,|S\rangle = \pm |S\rangle$. Since the product of the wavefunction (in the ket) and its complex conjugate (in the bra) is always symmetric under inversion and its inverse,

$$\langle \mathfrak{p} \rangle = \langle \mathfrak{I}^{-1}\, S \,|\, \mathfrak{p} \,|\, \mathfrak{I}^{-1}\, S \rangle = \langle S \,|\, \mathfrak{I}\, \mathfrak{p} \, \mathfrak{I}^{-1} \,|\, S \rangle = -\langle \mathfrak{p} \rangle \, ,$$

it follows that the expectation value changes sign under inversion. We used here the fact that $\mathfrak{I}$, being a symmetry operator, is unitary: $\mathfrak{I}^{-1} = \mathfrak{I}^{*}$, and by definition the Hermitian adjoint $\mathfrak{I}^{*}$ may be moved from bra to ket and then becomes $\mathfrak{I}^{**} = \mathfrak{I}$. Since the only quantity that is equal to minus itself is the zero, the expectation value vanishes,

$$\langle \mathfrak{p} \rangle = 0 \, .$$
In the case of open-shell atoms with degenerate energy levels, one could define a dipole moment by the aid of the first-order Stark effect. This gives a non-vanishing dipole (by definition proportional to a non-vanishing first-order Stark shift) only if some of the wavefunctions belonging to the degenerate energies have opposite parity; i.e., have different behavior under inversion. This is a rare occurrence, but happens for the excited H-atom, where 2s and 2p states are "accidentally" degenerate (see article Laplace–Runge–Lenz vector for the origin of this degeneracy) and have opposite parity (2s is even and 2p is odd).
Field of a static magnetic dipole
Magnitude
The far-field strength, B, of a dipole magnetic field is given by

$$B(m, r, \lambda) = \frac{\mu_0}{4\pi} \, \frac{m}{r^3} \sqrt{1 + 3\sin^2 \lambda} \, ,$$
where
B is the strength of the field, measured in teslas
r is the distance from the center, measured in metres
λ is the magnetic latitude (equal to 90° − θ) where θ is the magnetic colatitude, measured in radians or degrees from the dipole axis
m is the dipole moment, measured in ampere-square metres or joules per tesla
μ0 is the permeability of free space, measured in henries per metre.
Conversion to cylindrical coordinates is achieved using $r^2 = z^2 + \rho^2$ and

$$\lambda = \arcsin\left(\frac{z}{\sqrt{z^2 + \rho^2}}\right),$$

where ρ is the perpendicular distance from the z-axis. Then,

$$B(\rho, z) = \frac{\mu_0 m}{4\pi \left(z^2 + \rho^2\right)^{3/2}} \sqrt{1 + \frac{3 z^2}{z^2 + \rho^2}} \, .$$
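As an illustrative sketch (not from the source article), the magnitude formula can be evaluated for an Earth-like dipole. Earth's dipole moment and radius below are rough assumed values, not figures from this article:

```python
# Hedged sketch of the far-field dipole magnitude
#   B = (mu0 / 4 pi) * (m / r^3) * sqrt(1 + 3 sin^2(lambda)).
# Earth's dipole moment (~8.0e22 A m^2) and radius (~6.371e6 m) are
# rough illustrative assumptions.
import math

MU0 = 4 * math.pi * 1e-7       # permeability of free space, H/m

def dipole_B(m, r, lam_deg):
    lam = math.radians(lam_deg)
    return MU0 / (4 * math.pi) * m / r**3 * math.sqrt(1 + 3 * math.sin(lam)**2)

m_earth = 8.0e22               # dipole moment, A m^2 (assumed)
r_surface = 6.371e6            # radius, m (assumed)

print(f"equator (lambda=0):  {dipole_B(m_earth, r_surface, 0) * 1e6:.1f} microtesla")
print(f"pole    (lambda=90): {dipole_B(m_earth, r_surface, 90) * 1e6:.1f} microtesla")
# The polar value is twice the equatorial one, reflecting the sqrt(1+3) factor.
```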
Vector form
The field itself is a vector quantity:

$$\mathbf{B}(\mathbf{m}, \mathbf{r}) = \frac{\mu_0}{4\pi} \, \frac{3\hat{\mathbf{r}}\left(\hat{\mathbf{r}} \cdot \mathbf{m}\right) - \mathbf{m}}{r^3} \, ,$$
where
B is the field
r is the vector from the position of the dipole to the position where the field is being measured
r is the absolute value of r: the distance from the dipole
r̂ = r/r is the unit vector parallel to r
m is the (vector) dipole moment
μ0 is the permeability of free space
This is exactly the field of a point dipole, exactly the dipole term in the multipole expansion of an arbitrary field, and approximately the field of any dipole-like configuration at large distances.
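A hedged numerical sketch of the vector formula (the dipole moment and field point below are arbitrary illustrative values) also cross-checks it against the scalar magnitude formula of the previous subsection:

```python
# Hedged sketch of the vector dipole field
#   B(r) = (mu0 / 4 pi) * (3 rhat (rhat . m) - m) / r^3,
# cross-checked against the scalar magnitude formula above.
import math
import numpy as np

MU0 = 4 * math.pi * 1e-7                  # permeability of free space, H/m

def dipole_B_vec(m, r_vec):
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return MU0 / (4 * math.pi) * (3 * rhat * np.dot(rhat, m) - m) / r**3

m = np.array([0.0, 0.0, 1.0])             # 1 A m^2 along z (assumed)
r_vec = np.array([0.3, 0.0, 0.4])         # field point, m (assumed)

B = dipole_B_vec(m, r_vec)

# Scalar check: |B| = (mu0 m / 4 pi r^3) * sqrt(1 + 3 sin^2 lambda),
# where sin(lambda) = z / r for a moment along the z-axis.
r = np.linalg.norm(r_vec)
sin_lam = r_vec[2] / r
B_scalar = MU0 / (4 * math.pi) * np.linalg.norm(m) / r**3 * math.sqrt(1 + 3 * sin_lam**2)
assert math.isclose(np.linalg.norm(B), B_scalar, rel_tol=1e-9)
print(B, np.linalg.norm(B))
```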
Magnetic vector potential
The vector potential A of a magnetic dipole is

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi} \, \frac{\mathbf{m} \times \hat{\mathbf{r}}}{r^2} \, ,$$
with the same definitions as above.
Field from an electric dipole
The electrostatic potential at position r due to an electric dipole at the origin is given by:

$$\Phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \, \frac{\mathbf{p} \cdot \hat{\mathbf{r}}}{r^2} \, ,$$

where p is the (vector) dipole moment, and ε0 is the permittivity of free space.
This term appears as the second term in the multipole expansion of an arbitrary electrostatic potential Φ(r). If the source of Φ(r) is a dipole, as it is assumed here, this term is the only non-vanishing term in the multipole expansion of Φ(r). The electric field from a dipole can be found from the gradient of this potential:

$$\mathbf{E} = -\nabla\Phi = \frac{1}{4\pi\varepsilon_0} \, \frac{3\left(\mathbf{p} \cdot \hat{\mathbf{r}}\right)\hat{\mathbf{r}} - \mathbf{p}}{r^3} - \frac{1}{3\varepsilon_0} \mathbf{p} \, \delta^3(\mathbf{r}) \, .$$
This is of the same form as the expression for the magnetic field of a point magnetic dipole, ignoring the delta function.
In a real electric dipole, however, the charges are physically separated and the electric field diverges or converges at the point charges.
This differs from the magnetic field of a real magnetic dipole, which is continuous everywhere. The delta function represents the strong field pointing in the opposite direction between the point charges, which is often omitted since one is rarely interested in the field at the dipole's position.
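The far-field expression can be sketched numerically as well (illustrative only; the delta term is omitted, and the dipole moment and field point below are assumed values):

```python
# Hedged sketch of the far field of an electric dipole (delta term omitted):
#   E(r) = (1 / 4 pi eps0) * (3 (p . rhat) rhat - p) / r^3.
# The dipole moment (1 debye along z) and the field point are assumptions.
import numpy as np

EPS0 = 8.8541878128e-12         # vacuum permittivity, F/m
DEBYE = 3.33564e-30             # 1 debye in C m

def dipole_E(p, r_vec):
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (3 * rhat * np.dot(rhat, p) - p) / (4 * np.pi * EPS0 * r**3)

p = np.array([0.0, 0.0, 1.0]) * DEBYE      # assumed dipole moment
r_vec = np.array([1.0e-9, 0.0, 0.0])       # point 1 nm away, broadside

print(dipole_E(p, r_vec))  # broadside field is antiparallel to p, falling as 1/r^3
```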
Torque on a dipole
Since the direction of an electric field is defined as the direction of the force on a positive charge, electric field lines point away from a positive charge and toward a negative charge.
When placed in a homogeneous electric or magnetic field, equal but opposite forces arise on each side of the dipole, creating a torque τ:

$$\boldsymbol{\tau} = \mathbf{p} \times \mathbf{E}$$

for an electric dipole moment p (in coulomb-meters), or

$$\boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}$$

for a magnetic dipole moment m (in ampere-square meters).
The resulting torque will tend to align the dipole with the applied field, which in the case of an electric dipole, yields a potential energy of

$$U = -\mathbf{p} \cdot \mathbf{E} \, .$$

The energy of a magnetic dipole is similarly

$$U = -\mathbf{m} \cdot \mathbf{B} \, .$$
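A short hedged sketch of these two relations (all numerical inputs below are illustrative assumptions):

```python
# Hedged sketch: torque tau = p x E and energy U = -p . E for an electric
# dipole in a uniform field. The numbers are illustrative assumptions.
import numpy as np

DEBYE = 3.33564e-30                     # 1 debye in C m

p = np.array([1.0, 0.0, 0.0]) * DEBYE   # dipole along x (assumed)
E = np.array([0.0, 0.0, 1.0e6])         # field along z, 1 MV/m (assumed)

tau = np.cross(p, E)                    # torque, N m
U = -np.dot(p, E)                       # potential energy, J

print(f"torque magnitude: {np.linalg.norm(tau):.3e} N m")
print(f"energy: {U:.3e} J")             # zero here, since p is perpendicular to E
# Rotating p to lie along E makes tau vanish and U minimal (-|p||E|),
# which is the alignment tendency described above.
```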
Dipole radiation
In addition to dipoles in electrostatics, it is also common to consider an electric or magnetic dipole that is oscillating in time. It is an extension, or a more physical next step, beyond spherical wave radiation.
In particular, consider a harmonically oscillating electric dipole, with angular frequency ω and a dipole moment p0 along the ẑ direction of the form

$$\mathbf{p}(\mathbf{r}, t) = \mathbf{p}(\mathbf{r})\, e^{-i\omega t} = p_0 \hat{\mathbf{z}} \, \delta(\mathbf{r}) \, e^{-i\omega t} \, .$$
In vacuum, the exact field produced by this oscillating dipole can be derived using the retarded potential formulation as:

$$\mathbf{E} = \frac{1}{4\pi\varepsilon_0} \left\{ \frac{\omega^2}{c^2 r} \left(\hat{\mathbf{r}} \times \mathbf{p}\right) \times \hat{\mathbf{r}} + \left( \frac{1}{r^3} - \frac{i\omega}{c r^2} \right) \left[ 3 \hat{\mathbf{r}} \left(\hat{\mathbf{r}} \cdot \mathbf{p}\right) - \mathbf{p} \right] \right\} e^{i\omega r/c} \, e^{-i\omega t}$$

$$\mathbf{B} = \frac{\omega^2}{4\pi\varepsilon_0 c^3} \left(\hat{\mathbf{r}} \times \mathbf{p}\right) \left( 1 - \frac{c}{i\omega r} \right) \frac{e^{i\omega r/c}}{r} \, e^{-i\omega t} \, .$$
For $\omega r / c \gg 1$, the far-field takes the simpler form of a radiating "spherical" wave, but with angular dependence embedded in the cross-product:

$$\mathbf{B} = \frac{\omega^2}{4\pi\varepsilon_0 c^3} \left(\hat{\mathbf{r}} \times \mathbf{p}\right) \frac{e^{i\omega(r/c - t)}}{r} = -\frac{\omega^2 \mu_0 p_0}{4\pi c} \sin\theta \, \frac{e^{i\omega(r/c - t)}}{r} \hat{\boldsymbol{\phi}}$$

$$\mathbf{E} = c \, \mathbf{B} \times \hat{\mathbf{r}} = -\frac{\omega^2 \mu_0 p_0}{4\pi} \sin\theta \, \frac{e^{i\omega(r/c - t)}}{r} \hat{\boldsymbol{\theta}} \, .$$
The time-averaged Poynting vector

$$\langle \mathbf{S} \rangle = \left( \frac{\mu_0 \, p_0^2 \, \omega^4}{32 \pi^2 c} \right) \frac{\sin^2\theta}{r^2} \, \hat{\mathbf{r}}$$
is not distributed isotropically, but concentrated around the directions lying perpendicular to the dipole moment, as a result of the non-spherical electric and magnetic waves. In fact, the spherical harmonic function (sin θ) responsible for such toroidal angular distribution is precisely the l = 1 "p" wave.
The total time-average power radiated by the field can then be derived from the Poynting vector as

$$P = \frac{\mu_0 \, \omega^4 \, p_0^2}{12 \pi c} \, .$$
Notice that the dependence of the power on the fourth power of the frequency of the radiation is in accordance with Rayleigh scattering, the underlying effect that explains why the sky appears mainly blue.
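A hedged numerical sketch of this power law and its Rayleigh-scattering consequence (all inputs below are illustrative assumptions):

```python
# Hedged sketch: radiated power P = mu0 * omega^4 * p0^2 / (12 pi c) and the
# omega^4 (Rayleigh) scaling. All numerical inputs are illustrative assumptions.
import math

MU0 = 4 * math.pi * 1e-7        # permeability of free space, H/m
C = 2.99792458e8                # speed of light, m/s
DEBYE = 3.33564e-30             # 1 debye in C m

def radiated_power(p0, omega):
    return MU0 * omega**4 * p0**2 / (12 * math.pi * C)

p0 = 1.0 * DEBYE                        # assumed oscillating dipole amplitude
omega_blue = 2 * math.pi * C / 450e-9   # blue light, ~450 nm
omega_red = 2 * math.pi * C / 700e-9    # red light, ~700 nm

ratio = radiated_power(p0, omega_blue) / radiated_power(p0, omega_red)
print(f"blue/red power ratio: {ratio:.1f}")  # (700/450)^4 ~ 5.9, the blue-sky bias
```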
A circularly polarized dipole can be described as a superposition of two linear dipoles.
| Physical sciences | Basics_9 | null |
8389 | https://en.wikipedia.org/wiki/Major%20depressive%20disorder | Major depressive disorder | Major depressive disorder (MDD), also known as clinical depression, is a mental disorder characterized by at least two weeks of pervasive low mood, low self-esteem, and loss of interest or pleasure in normally enjoyable activities. Introduced by a group of US clinicians in the mid-1970s, the term was adopted by the American Psychiatric Association for this symptom cluster under mood disorders in the 1980 version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III), and has become widely used since. The disorder causes the second-most years lived with disability, after lower back pain.
The diagnosis of major depressive disorder is based on the person's reported experiences, behavior reported by family or friends, and a mental status examination. There is no laboratory test for the disorder, but testing may be done to rule out physical conditions that can cause similar symptoms. The most common time of onset is in a person's 20s, with females affected about three times as often as males. The course of the disorder varies widely, from one episode lasting months to a lifelong disorder with recurrent major depressive episodes.
Those with major depressive disorder are typically treated with psychotherapy and antidepressant medication. Medication appears to be effective, but the effect may be significant only in the most severely depressed. Hospitalization (which may be involuntary) may be necessary in cases with associated self-neglect or a significant risk of harm to self or others. Electroconvulsive therapy (ECT) may be considered if other measures are not effective.
Major depressive disorder is believed to be caused by a combination of genetic, environmental, and psychological factors, with about 40% of the risk being genetic. Risk factors include a family history of the condition, major life changes, childhood traumas, environmental lead exposure, certain medications, chronic health problems, and substance use disorders. It can negatively affect a person's personal life, work life, or education, and cause issues with a person's sleeping habits, eating habits, and general health.
Signs and symptoms
A person having a major depressive episode usually exhibits a low mood, which pervades all aspects of life, and an inability to experience pleasure in previously enjoyable activities. Depressed people may be preoccupied with or ruminate over thoughts and feelings of worthlessness, inappropriate guilt or regret, helplessness or hopelessness.
Other symptoms of depression include poor concentration and memory, withdrawal from social situations and activities, reduced sex drive, irritability, and thoughts of death or suicide. Insomnia is common; in the typical pattern, a person wakes very early and cannot get back to sleep. Hypersomnia, or oversleeping, can also happen, as well as day-night rhythm disturbances, such as diurnal mood variation. Some antidepressants may also cause insomnia due to their stimulating effect. In severe cases, depressed people may have psychotic symptoms. These symptoms include delusions or, less commonly, hallucinations, usually unpleasant. People who have had previous episodes with psychotic symptoms are more likely to have them with future episodes.
A depressed person may report multiple physical symptoms such as fatigue, headaches, or digestive problems; physical complaints are the most common presenting problem in developing countries, according to the World Health Organization's criteria for depression. Appetite often decreases, resulting in weight loss, although increased appetite and weight gain occasionally occur.
Major depression significantly affects a person's family and personal relationships, work or school life, sleeping and eating habits, and general health. Family and friends may notice agitation or lethargy. Older depressed people may have cognitive symptoms of recent onset, such as forgetfulness, and a more noticeable slowing of movements.
Depressed children may often display an irritable rather than a depressed mood; most lose interest in school and show a steep decline in academic performance. Diagnosis may be delayed or missed when symptoms are interpreted as "normal moodiness". Elderly people may not present with classical depressive symptoms. Diagnosis and treatment is further complicated in that the elderly are often simultaneously treated with a number of other drugs, and often have other concurrent diseases.
Cause
The etiology of depression is not yet fully understood. The biopsychosocial model proposes that biological, psychological, and social factors all play a role in causing depression. The diathesis–stress model specifies that depression results when a preexisting vulnerability, or diathesis, is activated by stressful life events. The preexisting vulnerability can be either genetic, implying an interaction between nature and nurture, or schematic, resulting from views of the world learned in childhood. American psychiatrist Aaron Beck suggested that a triad of automatic and spontaneous negative thoughts about the self, the world or environment, and the future may lead to other depressive signs and symptoms.
Genetics
Genes play a major role in the development of depression. Family and twin studies find that nearly 40% of individual differences in risk for major depressive disorder can be explained by genetic factors. Like most psychiatric disorders, major depressive disorder is likely influenced by many individual genetic changes. In 2018, a genome-wide association study discovered 44 genetic variants linked to risk for major depression; a 2019 study found 102 variants in the genome linked to depression. However, it appears that major depression is less heritable compared to bipolar disorder and schizophrenia. Research focusing on specific candidate genes has been criticized for its tendency to generate false positive findings. There are also other efforts to examine interactions between life stress and polygenic risk for depression.
Other health problems
Depression can also arise after a chronic or terminal medical condition, such as HIV/AIDS or asthma, and may be labeled "secondary depression". It is unknown whether the underlying diseases induce depression through effect on quality of life, or through shared etiologies (such as degeneration of the basal ganglia in Parkinson's disease or immune dysregulation in asthma). Depression may also be iatrogenic (the result of healthcare), such as drug-induced depression. Therapies associated with depression include interferons, beta-blockers, isotretinoin, contraceptives, cardiac agents, anticonvulsants, antimigraine drugs, antipsychotics, and hormonal agents such as gonadotropin-releasing hormone agonist (GnRH agonist). Celiac disease is another possible contributing factor.
Substance use in early age is associated with increased risk of developing depression later in life. Depression occurring after giving birth is called postpartum depression and is thought to be the result of hormonal changes associated with pregnancy. Seasonal affective disorder, a type of depression associated with seasonal changes in sunlight, is thought to be triggered by decreased sunlight.
Vitamin B2, B6 and B12 deficiency may cause depression in females.
Environmental
Adverse childhood experiences (incorporating childhood abuse, neglect and family dysfunction) markedly increase the risk of major depression, especially if they are of more than one type. Childhood trauma also correlates with severity of depression, poor responsiveness to treatment and length of illness. Some people are more susceptible than others to developing mental illness such as depression after trauma, and various genes have been suggested to control susceptibility. Couples in unhappy marriages have a higher risk of developing clinical depression.
There appears to be a link between air pollution and depression and suicide. There may be an association between long-term PM2.5 exposure and depression, and a possible association between short-term PM10 exposure and suicide.
Pathophysiology
The pathophysiology of depression is not completely understood, but current theories center around monoaminergic systems, the circadian rhythm, immunological dysfunction, HPA-axis dysfunction and structural or functional abnormalities of emotional circuits.
Derived from the effectiveness of monoaminergic drugs in treating depression, the monoamine theory posits that insufficient activity of monoamine neurotransmitters is the primary cause of depression. Evidence for the monoamine theory comes from multiple areas. First, acute depletion of tryptophan—a necessary precursor of serotonin and a monoamine—can cause depression in those in remission or relatives of people who are depressed, suggesting that decreased serotonergic neurotransmission is important in depression. Second, the correlation between depression risk and polymorphisms in 5-HTTLPR, a polymorphic region of the serotonin transporter gene, suggests a link. Third, decreased size of the locus coeruleus, decreased activity of tyrosine hydroxylase, increased density of alpha-2 adrenergic receptor, and evidence from rat models suggest decreased adrenergic neurotransmission in depression. Furthermore, decreased levels of homovanillic acid, altered response to dextroamphetamine, responses of depressive symptoms to dopamine receptor agonists, decreased dopamine receptor D1 binding in the striatum, and polymorphism of dopamine receptor genes implicate dopamine, another monoamine, in depression. Lastly, increased activity of monoamine oxidase, which degrades monoamines, has been associated with depression. However, the monoamine theory is inconsistent with observations that serotonin depletion does not cause depression in healthy persons, that antidepressants instantly increase levels of monoamines but take weeks to work, and the existence of atypical antidepressants which can be effective despite not targeting this pathway.
One proposed explanation for the therapeutic lag, and further support for the deficiency of monoamines, is a desensitization of self-inhibition in raphe nuclei by the increased serotonin mediated by antidepressants. However, disinhibition of the dorsal raphe has been proposed to occur as a result of decreased serotonergic activity in tryptophan depletion, resulting in a depressed state mediated by increased serotonin. Further countering the monoamine hypothesis is the fact that rats with lesions of the dorsal raphe are not more depressive than controls, the finding of increased jugular 5-HIAA in people who are depressed that normalized with selective serotonin reuptake inhibitor (SSRI) treatment, and the preference for carbohydrates in people who are depressed. Already limited, the monoamine hypothesis has been further oversimplified when presented to the general public. A 2022 review found no consistent evidence supporting the serotonin hypothesis, linking serotonin levels and depression.
HPA-axis abnormalities have been suggested in depression given the association of CRHR1 with depression and the increased frequency of dexamethasone test non-suppression in people who are depressed. However, this abnormality is not adequate as a diagnostic tool, because its sensitivity is only 44%. These stress-related abnormalities are thought to be the cause of hippocampal volume reductions seen in people who are depressed. Furthermore, a meta-analysis yielded decreased dexamethasone suppression and increased response to psychological stressors. Further abnormal results have been observed in the cortisol awakening response, with increased response being associated with depression.
There is also a connection between the gut microbiome and the central nervous system, known as the gut-brain axis, a two-way communication system between the brain and the gut. Experiments have shown that gut microbiota can play an important role in depression, as people with MDD often have gut-brain dysfunction. One analysis showed that those with MDD have different bacteria living in their guts: the bacterial phyla Bacteroidetes and Firmicutes were most affected, and they are also impacted in people with irritable bowel syndrome (IBS). Another study showed that people with IBS have a higher chance of developing depression, suggesting the two are connected. There is also evidence that altering the microbes in the gut can have regulatory effects on the development of depression.
Theories unifying neuroimaging findings have been proposed. The first model proposed is the limbic-cortical model, which involves hyperactivity of the ventral paralimbic regions and hypoactivity of frontal regulatory regions in emotional processing. Another model, the cortico-striatal model, suggests that abnormalities of the prefrontal cortex in regulating striatal and subcortical structures result in depression. Another model proposes hyperactivity of salience structures in identifying negative stimuli, and hypoactivity of cortical regulatory structures resulting in a negative emotional bias and depression, consistent with emotional bias studies.
Immune pathogenesis theories on depression
The newer field of psychoneuroimmunology, the study of the interactions between the immune system, the nervous system, and emotional state, suggests that cytokines may impact depression.
Immune system abnormalities have been observed, including increased levels of cytokines (signaling molecules produced by immune cells that regulate inflammation) involved in generating sickness behavior, creating a pro-inflammatory profile in MDD. Some people with depression have increased levels of pro-inflammatory cytokines and some have decreased levels of anti-inflammatory cytokines. Research suggests that treatments can reduce pro-inflammatory cell production, as with the experimental use of ketamine in treatment-resistant depression. People with MDD are also more likely to have a Th-1 dominant immune profile, which is pro-inflammatory. This suggests that components of the immune system affect the pathology of MDD.
Another way cytokines can affect depression is through the kynurenine pathway; when this pathway is overactivated, it can cause depression. Overactivation can result from too much microglial activation and too little astrocytic activity. When microglia are activated, they release pro-inflammatory cytokines that increase the production of COX2. This, in turn, causes the production of PGE2, a prostaglandin, which induces the enzyme indoleamine 2,3-dioxygenase (IDO). IDO converts tryptophan into kynurenine, and kynurenine is further metabolized into quinolinic acid. Quinolinic acid is an agonist of NMDA receptors, so it activates the pathway. Studies have shown that the post-mortem brains of patients with MDD have higher levels of quinolinic acid than those of people who did not have MDD. Researchers have also found that the concentration of quinolinic acid correlates with the severity of depressive symptoms.
Diagnosis
Assessment
A diagnostic assessment may be conducted by a suitably trained general practitioner, or by a psychiatrist or psychologist, who records the person's current circumstances, biographical history, current symptoms, family history, and alcohol and drug use. The assessment also includes a mental state examination, which is an assessment of the person's current mood and thought content, in particular the presence of themes of hopelessness or pessimism, self-harm or suicide, and an absence of positive thoughts or plans. Specialist mental health services are rare in rural areas, and thus diagnosis and management is left largely to primary-care clinicians. This issue is even more marked in developing countries. Rating scales are not used to diagnose depression, but they provide an indication of the severity of symptoms for a time period, so a person who scores above a given cut-off point can be more thoroughly evaluated for a depressive disorder diagnosis. Several rating scales are used for this purpose; these include the Hamilton Rating Scale for Depression, the Beck Depression Inventory or the Suicide Behaviors Questionnaire-Revised.
Primary-care physicians have more difficulty with underrecognition and undertreatment of depression compared to psychiatrists. These cases may be missed because, in some people with depression, physical symptoms accompany or overshadow the mood symptoms. In addition, there may also be barriers related to the person, provider, and/or the medical system. Non-psychiatrist physicians have been shown to miss about two-thirds of cases, although there is some evidence of improvement in the number of missed cases.
A doctor generally performs a medical examination and selected investigations to rule out other causes of depressive symptoms. These include blood tests measuring TSH and thyroxine to exclude hypothyroidism; basic electrolytes and serum calcium to rule out a metabolic disturbance; and a full blood count including ESR to rule out a systemic infection or chronic disease. Adverse affective reactions to medications or alcohol misuse may be ruled out, as well. Testosterone levels may be evaluated to diagnose hypogonadism, a cause of depression in men. Vitamin D levels might be evaluated, as low levels of vitamin D have been associated with greater risk for depression. Subjective cognitive complaints appear in older depressed people, but they can also be indicative of the onset of a dementing disorder, such as Alzheimer's disease. Cognitive testing and brain imaging can help distinguish depression from dementia. A CT scan can exclude brain pathology in those with psychotic, rapid-onset or otherwise unusual symptoms. No biological tests confirm major depression. In general, investigations are not repeated for a subsequent episode unless there is a medical indication.
DSM and ICD criteria
The most widely used criteria for diagnosing depressive conditions are found in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM) and the World Health Organization's International Statistical Classification of Diseases and Related Health Problems (ICD). The latter system is typically used in European countries, while the former is used in the US and many other non-European nations, and the authors of both have worked towards conforming one with the other. Both DSM and ICD mark out typical (main) depressive symptoms. The most recent edition of the DSM is the Fifth Edition, Text Revision (DSM-5-TR), and the most recent edition of the ICD is the Eleventh Edition (ICD-11).
Under mood disorders, ICD-11 classifies major depressive disorder as either single episode depressive disorder (where there is no history of depressive episodes, or of mania) or recurrent depressive disorder (where there is a history of prior episodes, with no history of mania). ICD-11 symptoms, present nearly every day for at least two weeks, are a depressed mood or anhedonia, accompanied by other symptoms such as "difficulty concentrating, feelings of worthlessness or excessive or inappropriate guilt, hopelessness, recurrent thoughts of death or suicide, changes in appetite or sleep, psychomotor agitation or retardation, and reduced energy or fatigue." These symptoms must affect work, social, or domestic activities. The ICD-11 system allows further specifiers for the current depressive episode: the severity (mild, moderate, severe, unspecified); the presence of psychotic symptoms (with or without psychotic symptoms); and the degree of remission if relevant (currently in partial remission, currently in full remission). These two disorders are classified as "Depressive disorders", in the category of "Mood disorders".
According to DSM-5, at least one of the symptoms is either depressed mood or loss of interest or pleasure. Depressed mood occurs nearly every day as subjective feelings like sadness, emptiness, and hopelessness or observations made by others (e.g. appears tearful). Loss of interest or pleasure occurs in all, or almost all activities of the day, nearly every day. These symptoms, as well as five out of the nine more specific symptoms listed, must frequently occur for more than two weeks (to the extent in which it impairs functioning) for the diagnosis. Major depressive disorder is classified as a mood disorder in the DSM-5. The diagnosis hinges on the presence of single or recurrent major depressive episodes. Further qualifiers are used to classify both the episode itself and the course of the disorder. The category Unspecified Depressive Disorder is diagnosed if the depressive episode's manifestation does not meet the criteria for a major depressive episode.
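The counting rule described above can be sketched in code. The following is a hedged, purely illustrative encoding of the "at least five of nine symptoms, including at least one core symptom, for at least two weeks" logic; the symptom names are paraphrases, and it deliberately omits the impairment, exclusion, and clinical-judgment components of a real diagnosis:

```python
# Hedged sketch of the DSM-5 counting rule for a major depressive episode:
# at least 5 of 9 symptoms, at least one of which is a core symptom
# (depressed mood or loss of interest/pleasure), present for >= 2 weeks.
# This illustrates the rule only; it is not a diagnostic instrument.

CORE = {"depressed_mood", "loss_of_interest"}
OTHER = {"weight_or_appetite_change", "sleep_disturbance",
         "psychomotor_change", "fatigue", "worthlessness_or_guilt",
         "poor_concentration", "thoughts_of_death"}

def meets_symptom_criteria(symptoms, duration_weeks):
    """Check only the symptom-count and duration portion of the criteria."""
    recognized = symptoms & (CORE | OTHER)
    has_core = bool(symptoms & CORE)
    return has_core and len(recognized) >= 5 and duration_weeks >= 2

print(meets_symptom_criteria(
    {"depressed_mood", "fatigue", "sleep_disturbance",
     "poor_concentration", "worthlessness_or_guilt"}, 3))  # True
```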
Major depressive episode
A major depressive episode is characterized by the presence of a severely depressed mood that persists for at least two weeks. Episodes may be isolated or recurrent and are categorized as mild (few symptoms in excess of minimum criteria), moderate, or severe (marked impact on social or occupational functioning). An episode with psychotic features—commonly referred to as psychotic depression—is automatically rated as severe. If the person has had an episode of mania or markedly elevated mood, a diagnosis of bipolar disorder is made instead. Depression without mania is sometimes referred to as unipolar because the mood remains at one emotional state or "pole".
Bereavement is not an exclusion criterion in the DSM-5, and it is up to the clinician to distinguish between normal reactions to a loss and MDD. Excluded are a range of related diagnoses, including dysthymia, which involves a chronic but milder mood disturbance; recurrent brief depression, consisting of briefer depressive episodes; minor depressive disorder, whereby only some symptoms of major depression are present; and adjustment disorder with depressed mood, which denotes low mood resulting from a psychological response to an identifiable event or stressor.
Subtypes
The DSM-5 recognizes six further subtypes of MDD, called specifiers, in addition to noting the length, severity and presence of psychotic features:
"Melancholic depression" is characterized by a loss of pleasure in most or all activities, a failure of reactivity to pleasurable stimuli, a quality of depressed mood more pronounced than that of grief or loss, a worsening of symptoms in the morning hours, early-morning waking, psychomotor retardation, excessive weight loss (not to be confused with anorexia nervosa), or excessive guilt.
"Atypical depression" is characterized by mood reactivity (paradoxical anhedonia) and positivity, significant weight gain or increased appetite (comfort eating), excessive sleep or sleepiness (hypersomnia), a sensation of heaviness in limbs known as leaden paralysis, and significant long-term social impairment as a consequence of hypersensitivity to perceived interpersonal rejection.
"Catatonic depression" is a rare and severe form of major depression involving disturbances of motor behavior and other symptoms. Here, the person is mute and almost stuporous, and either remains immobile or exhibits purposeless or even bizarre movements. Catatonic symptoms also occur in schizophrenia or in manic episodes, or may be caused by neuroleptic malignant syndrome.
"Depression with anxious distress" was added into the DSM-5 as a means to emphasize the common co-occurrence between depression or mania and anxiety, as well as the risk of suicide of depressed individuals with anxiety. Specifying in such a way can also help with the prognosis of those diagnosed with a depressive or bipolar disorder.
"Depression with peri-partum onset" refers to the intense, sustained and sometimes disabling depression experienced by women after giving birth or while a woman is pregnant. DSM-IV-TR used the classification "postpartum depression", but this was changed to not exclude cases of depressed woman during pregnancy. Depression with peripartum onset has an incidence rate of 3–6% among new mothers. The DSM-5 mandates that to qualify as depression with peripartum onset, onset occurs during pregnancy or within one month of delivery.
"Seasonal affective disorder" (SAD) is a form of depression in which depressive episodes come on in the autumn or winter, and resolve in spring. The diagnosis is made if at least two episodes have occurred in colder months with none at other times, over a two-year period or longer.
Differential diagnoses
To confirm major depressive disorder as the most likely diagnosis, other potential diagnoses must be considered, including dysthymia, adjustment disorder with depressed mood, or bipolar disorder. Dysthymia is a chronic, milder mood disturbance in which a person reports a low mood almost daily over a span of at least two years. The symptoms are not as severe as those for major depression, although people with dysthymia are vulnerable to secondary episodes of major depression (sometimes referred to as double depression). Adjustment disorder with depressed mood is a mood disturbance appearing as a psychological response to an identifiable event or stressor, in which the resulting emotional or behavioral symptoms are significant but do not meet the criteria for a major depressive episode.
Other disorders need to be ruled out before diagnosing major depressive disorder. They include depressions due to physical illness, medications, and substance use disorders. Depression due to physical illness is diagnosed as a mood disorder due to a general medical condition. This condition is determined based on history, laboratory findings, or physical examination. When the depression is caused by a medication, non-medical use of a psychoactive substance, or exposure to a toxin, it is then diagnosed as a specific mood disorder (previously called substance-induced mood disorder).
Screening and prevention
Preventive efforts may result in decreases in rates of the condition of between 22 and 38%. Since 2016, the United States Preventive Services Task Force (USPSTF) has recommended screening for depression among those over the age 12; though a 2005 Cochrane review found that the routine use of screening questionnaires has little effect on detection or treatment. Screening the general population is not recommended by authorities in the UK or Canada.
Behavioral interventions, such as interpersonal therapy and cognitive-behavioral therapy, are effective at preventing new onset depression. Because such interventions appear to be most effective when delivered to individuals or small groups, it has been suggested that they may be able to reach their large target audience most efficiently through the Internet.
The Netherlands mental health care system provides preventive interventions, such as the "Coping with Depression" course (CWD) for people with sub-threshold depression. The course is claimed to be the most successful of psychoeducational interventions for the treatment and prevention of depression (both for its adaptability to various populations and its results), with a risk reduction of 38% in major depression and an efficacy as a treatment comparing favorably to other psychotherapies.
Management
The most common and effective treatments for depression are psychotherapy, medication, and electroconvulsive therapy (ECT); a combination of treatments is the most effective approach when depression is resistant to treatment. American Psychiatric Association treatment guidelines recommend that initial treatment should be individually tailored based on factors including severity of symptoms, co-existing disorders, prior treatment experience, and personal preference. Options may include pharmacotherapy, psychotherapy, exercise, ECT, transcranial magnetic stimulation (TMS) or light therapy. Antidepressant medication is recommended as an initial treatment choice in people with mild, moderate, or severe major depression, and should be given to all people with severe depression unless ECT is planned. There is evidence that collaborative care by a team of health care practitioners produces better results than routine single-practitioner care.
Psychotherapy is the treatment of choice (over medication) for people under 18, and cognitive behavioral therapy (CBT), third wave CBT and interpersonal therapy may help prevent depression. The UK National Institute for Health and Care Excellence (NICE) 2004 guidelines indicate that antidepressants should not be used for the initial treatment of mild depression because the risk-benefit ratio is poor. The guidelines recommend that antidepressant treatment in combination with psychosocial interventions should be considered for:
People with a history of moderate or severe depression
Those with mild depression that has been present for a long period
As a second line treatment for mild depression that persists after other interventions
As a first line treatment for moderate or severe depression.
The guidelines further note that antidepressant treatment should be continued for at least six months to reduce the risk of relapse, and that SSRIs are better tolerated than tricyclic antidepressants.
Treatment options are more limited in developing countries, where access to mental health staff, medication, and psychotherapy is often difficult. Development of mental health services is minimal in many countries; depression is viewed as a phenomenon of the developed world despite evidence to the contrary, and not as an inherently life-threatening condition. There is insufficient evidence to determine the effectiveness of psychological versus medical therapy in children.
Lifestyle
Physical exercise has been found to be effective for major depression, and may be recommended to people who are willing, motivated, and healthy enough to participate in an exercise program as treatment. It is equivalent to the use of medications or psychological therapies in most people. In older people it does appear to decrease depression. Sleep and diet may also play a role in depression, and interventions in these areas may be an effective add-on to conventional methods. In observational studies, smoking cessation has benefits in depression as large as or larger than those of medications.
Talking therapies
Talking therapy (psychotherapy) can be delivered to individuals, groups, or families by mental health professionals, including psychotherapists, psychiatrists, psychologists, clinical social workers, counselors, and psychiatric nurses. A 2012 review found psychotherapy to be better than no treatment but not other treatments. With more complex and chronic forms of depression, a combination of medication and psychotherapy may be used. There is moderate-quality evidence that psychological therapies are a useful addition to standard antidepressant treatment of treatment-resistant depression in the short term. Psychotherapy has been shown to be effective in older people. Successful psychotherapy appears to reduce the recurrence of depression even after it has been stopped or replaced by occasional booster sessions.
The most-studied form of psychotherapy for depression is CBT, which teaches clients to challenge self-defeating, but enduring ways of thinking (cognitions) and change counter-productive behaviors. CBT can perform as well as antidepressants in people with major depression. CBT has the most research evidence for the treatment of depression in children and adolescents, and CBT and interpersonal psychotherapy (IPT) are preferred therapies for adolescent depression. In people under 18, according to the National Institute for Health and Clinical Excellence, medication should be offered only in conjunction with a psychological therapy, such as CBT, interpersonal therapy, or family therapy. Several variables predict success for cognitive behavioral therapy in adolescents: higher levels of rational thoughts, less hopelessness, fewer negative thoughts, and fewer cognitive distortions. CBT is particularly beneficial in preventing relapse. Cognitive behavioral therapy and occupational programs (including modification of work activities and assistance) have been shown to be effective in reducing sick days taken by workers with depression. Several variants of cognitive behavior therapy have been used in those with depression, the most notable being rational emotive behavior therapy, and mindfulness-based cognitive therapy. Mindfulness-based stress reduction programs may reduce depression symptoms. Mindfulness programs also appear to be a promising intervention in youth. Problem solving therapy, cognitive behavioral therapy, and interpersonal therapy are effective interventions in the elderly.
Psychoanalysis is a school of thought, founded by Sigmund Freud, which emphasizes the resolution of unconscious mental conflicts. Psychoanalytic techniques are used by some practitioners to treat clients presenting with major depression. A more widely practiced therapy, called psychodynamic psychotherapy, is in the tradition of psychoanalysis but less intensive, meeting once or twice a week. It also tends to focus more on the person's immediate problems, and has an additional social and interpersonal focus. In a meta-analysis of three controlled trials of Short Psychodynamic Supportive Psychotherapy, this modification was found to be as effective as medication for mild to moderate depression.
Antidepressants
Conflicting results have arisen from studies that look at the effectiveness of antidepressants in people with acute, mild to moderate depression. A review commissioned by the National Institute for Health and Care Excellence (UK) concluded that there is strong evidence that SSRIs, such as escitalopram, paroxetine, and sertraline, have greater efficacy than placebo on achieving a 50% reduction in depression scores in moderate and severe major depression, and that there is some evidence for a similar effect in mild depression. Similarly, a Cochrane systematic review of clinical trials of the generic tricyclic antidepressant amitriptyline concluded that there is strong evidence that its efficacy is superior to placebo. Antidepressants work less well for the elderly than for younger individuals with depression.
To find the most effective antidepressant medication with minimal side-effects, the dosages can be adjusted, and if necessary, combinations of different classes of antidepressants can be tried. Response rates to the first antidepressant administered range from 50 to 75%, and it can take at least six to eight weeks from the start of medication to improvement. Antidepressant medication treatment is usually continued for 16 to 20 weeks after remission, to minimize the chance of recurrence, and even up to one year of continuation is recommended. People with chronic depression may need to take medication indefinitely to avoid relapse.
SSRIs are the primary medications prescribed, owing to their relatively mild side-effects, and because they are less toxic in overdose than other antidepressants. People who do not respond to one SSRI can be switched to another antidepressant, and this results in improvement in almost 50% of cases. Another option is to add the atypical antidepressant bupropion to the SSRI as an adjunctive treatment. Venlafaxine, an antidepressant with a different mechanism of action, may be modestly more effective than SSRIs. However, venlafaxine is not recommended in the UK as a first-line treatment because of evidence suggesting its risks may outweigh benefits, and it is specifically discouraged in children and adolescents as it increases the risk of suicidal thoughts or attempts.
For children and adolescents with moderate-to-severe depressive disorder, fluoxetine seems to be the best treatment (either with or without cognitive behavioural therapy) but more research is needed to be certain. Sertraline, escitalopram, duloxetine might also help in reducing symptoms. Some antidepressants have not been shown to be effective. Medications are not recommended in children with mild disease.
There is also insufficient evidence to determine effectiveness in those with depression complicated by dementia. Any antidepressant can cause low blood sodium levels; nevertheless, it has been reported more often with SSRIs. It is not uncommon for SSRIs to cause or worsen insomnia; the sedating atypical antidepressant mirtazapine can be used in such cases.
Irreversible monoamine oxidase inhibitors, an older class of antidepressants, have been plagued by potentially life-threatening dietary and drug interactions. They are still used only rarely, although newer and better-tolerated agents of this class have been developed. The safety profile is different with reversible monoamine oxidase inhibitors, such as moclobemide, where the risk of serious dietary interactions is negligible and dietary restrictions are less strict.
It is unclear whether antidepressants affect a person's risk of suicide. For children, adolescents, and probably young adults between 18 and 24 years old, there is a higher risk of both suicidal ideations and suicidal behavior in those treated with SSRIs. For adults, it is unclear whether SSRIs affect the risk of suicidality. One review found no connection; another an increased risk; and a third no risk in those 25–65 years old and a decreased risk in those more than 65. A black box warning was introduced in the United States in 2007 on SSRIs and other antidepressant medications due to the increased risk of suicide in people younger than 24 years old. Similar precautionary notice revisions were implemented by the Japanese Ministry of Health.
Other medications and supplements
The combined use of antidepressants plus benzodiazepines demonstrates improved effectiveness when compared to antidepressants alone, but these effects may not endure. The addition of a benzodiazepine is balanced against possible harms and other alternative treatment strategies when antidepressant mono-therapy is considered inadequate.
For treatment-resistant depression, adding on the atypical antipsychotic brexpiprazole for short-term or acute management may be considered. Brexpiprazole may be effective for some people, however, the evidence as of 2023 supporting its use is weak and this medication has potential adverse effects including weight gain and akathisia. Brexpiprazole has not been sufficiently studied in older people or children and the use and effectiveness of this adjunctive therapy for longer term management is not clear.
Ketamine may have a rapid antidepressant effect lasting less than two weeks; there is limited evidence of any effect after that, common acute side effects, and longer-term studies of safety and adverse effects are needed. A nasal spray form of esketamine was approved by the FDA in March 2019 for use in treatment-resistant depression when combined with an oral antidepressant; risk of substance use disorder and concerns about its safety, serious adverse effects, tolerability, effect on suicidality, lack of information about dosage, whether the studies on it adequately represent broad populations, and escalating use of the product have been raised by an international panel of experts.
Nonsteroidal anti-inflammatory drugs (NSAIDs) and cytokine inhibitors may be effective in treating depression. For instance, celecoxib, an NSAID, selectively inhibits COX-2, an enzyme involved in producing mediators of pain and inflammation. In recent clinical trials, this NSAID has been shown to help with treatment-resistant depression, as it inhibits pro-inflammatory signaling.
Statins, anti-inflammatory medications prescribed to lower cholesterol levels, have also been shown to have antidepressant effects. When prescribed for patients already taking SSRIs, this add-on treatment improved the antidepressant effects of the SSRIs compared with placebo. Statins have also been shown to be effective in preventing depression in some cases.
There is insufficient high quality evidence to suggest omega-3 fatty acids are effective in depression. There is limited evidence that vitamin D supplementation is of value in alleviating the symptoms of depression in individuals who are vitamin D-deficient. Lithium appears effective at lowering the risk of suicide in those with bipolar disorder and unipolar depression to nearly the same levels as the general population. There is a narrow range of effective and safe dosages of lithium, so close monitoring may be needed. Low-dose thyroid hormone may be added to existing antidepressants to treat persistent depression symptoms in people who have tried multiple courses of medication. Limited evidence suggests stimulants, such as amphetamine and modafinil, may be effective in the short term, or as adjuvant therapy. Also, it is suggested that folate supplements may have a role in depression management. There is tentative evidence for benefit from testosterone in males.
Electroconvulsive therapy
Electroconvulsive therapy (ECT) is a standard psychiatric treatment in which seizures are electrically induced in a person with depression to provide relief from psychiatric illnesses. ECT is used with informed consent as a last line of intervention for major depressive disorder. A round of ECT is effective for about 50% of people with treatment-resistant major depressive disorder, whether it is unipolar or bipolar. Follow-up treatment is still poorly studied, but about half of people who respond relapse within twelve months. Aside from effects in the brain, the general physical risks of ECT are similar to those of brief general anesthesia. Immediately following treatment, the most common adverse effects are confusion and memory loss. ECT is considered one of the least harmful treatment options available for severely depressed pregnant women.
A usual course of ECT involves multiple administrations, typically given two or three times per week, until the person no longer has symptoms. ECT is administered under anesthesia with a muscle relaxant. Electroconvulsive therapy can differ in its application in three ways: electrode placement, frequency of treatments, and the electrical waveform of the stimulus. These three forms of application have significant differences in both adverse side effects and symptom remission. After treatment, drug therapy is usually continued, and some people receive maintenance ECT.
ECT appears to work in the short term via an anticonvulsant effect mostly in the frontal lobes, and longer term via neurotrophic effects primarily in the medial temporal lobe.
Other
Transcranial magnetic stimulation (TMS) or deep transcranial magnetic stimulation is a noninvasive method used to stimulate small regions of the brain. TMS was approved by the FDA for treatment-resistant major depressive disorder (trMDD) in 2008 and as of 2014 evidence supports that it is probably effective. The American Psychiatric Association, the Canadian Network for Mood and Anxiety Disorders, and the Royal Australia and New Zealand College of Psychiatrists have endorsed TMS for trMDD. Transcranial direct current stimulation (tDCS) is another noninvasive method used to stimulate small regions of the brain with a weak electric current. Several meta-analyses have concluded that active tDCS was useful for treating depression.
There is a small amount of evidence that sleep deprivation may improve depressive symptoms in some individuals, with the effects usually showing up within a day. This effect is usually temporary. Besides sleepiness, this method can cause a side effect of mania or hypomania. There is insufficient evidence for Reiki and dance movement therapy in depression. Cannabis is specifically not recommended as a treatment.
The microbiome of people with major depressive disorder differs from that of healthy people, and probiotic and synbiotic treatment may achieve a modest reduction in depressive symptoms. Fecal microbiota transplants (FMT) are also being researched as add-on therapy for people who do not respond to typical treatments. Depressive symptoms have been shown to improve after an FMT, with only minor gastrointestinal issues, and with improvements in symptoms lasting at least four weeks after the transplant.
Prognosis
Studies have shown that 80% of those with a first major depressive episode will have at least one more during their life, with a lifetime average of four episodes. Other general population studies indicate that around half those who have an episode recover (whether treated or not) and remain well, while the other half will have at least one more, and around 15% of those experience chronic recurrence. Studies recruiting from selective inpatient sources suggest lower recovery and higher chronicity, while studies of mostly outpatients show that nearly all recover, with a median episode duration of 11 months. Around 90% of those with severe or psychotic depression, most of whom also meet criteria for other mental disorders, experience recurrence. Cases when outcome is poor are associated with inappropriate treatment, severe initial symptoms including psychosis, early age of onset, previous episodes, incomplete recovery after one year of treatment, pre-existing severe mental or medical disorder, and family dysfunction.
A high proportion of people who experience full symptomatic remission still have at least one not fully resolved symptom after treatment. Recurrence or chronicity is more likely if symptoms have not fully resolved with treatment. Current guidelines recommend continuing antidepressants for four to six months after remission to prevent relapse. Evidence from many randomized controlled trials indicates that continuing antidepressant medications after recovery can reduce the odds of relapse by 70% (41% relapse on placebo vs. 18% on antidepressants). The preventive effect probably lasts for at least the first 36 months of use.
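A brief arithmetic sketch (illustrative only) shows how the quoted percentages relate; the roughly 70% figure corresponds to a reduction in the odds of relapse, not in absolute or relative risk:

```python
# Hedged arithmetic check of the relapse figures quoted above
# (41% relapse on placebo vs. 18% on continued antidepressants).
placebo, drug = 0.41, 0.18

absolute_reduction = placebo - drug                    # 0.23
relative_risk_reduction = 1 - drug / placebo           # ~0.56

odds_placebo = placebo / (1 - placebo)
odds_drug = drug / (1 - drug)
odds_reduction = 1 - odds_drug / odds_placebo          # ~0.68

print(f"absolute risk reduction: {absolute_reduction:.0%}")
print(f"relative risk reduction: {relative_risk_reduction:.0%}")
print(f"reduction in odds:       {odds_reduction:.0%}")  # close to the quoted 70%
```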
Major depressive episodes often resolve over time, whether or not they are treated. Outpatients on a waiting list show a 10–15% reduction in symptoms within a few months, with approximately 20% no longer meeting the full criteria for a depressive disorder. The median duration of an episode has been estimated to be 23 weeks, with the highest rate of recovery in the first three months. According to a 2013 review, 23% of untreated adults with mild to moderate depression will remit within 3 months, 32% within 6 months and 53% within 12 months.
Ability to work
Depression may affect people's ability to work. The combination of usual clinical care and support with return to work (like working fewer hours or changing tasks) probably reduces sick leave by 15%, and leads to fewer depressive symptoms and improved work capacity, reducing sick leave by an average of 25 days per year. Helping depressed people return to work without a connection to clinical care has not been shown to have an effect on sick leave days. Additional psychological interventions (such as online cognitive behavioral therapy) lead to fewer sick days compared to standard management only. Streamlining care or adding specific providers for depression care may help to reduce sick leave.
Life expectancy and the risk of suicide
Depressed individuals have a shorter life expectancy than those without depression, in part because people who are depressed are at risk of dying of suicide. About 50% of people who die of suicide have a mood disorder such as major depression, and the risk is especially high if a person has a marked sense of hopelessness or has both depression and borderline personality disorder. About 2–8% of adults with major depression die by suicide. In the US, the lifetime risk of suicide associated with a diagnosis of major depression is estimated at 7% for men and 1% for women, even though suicide attempts are more frequent in women.
Depressed people also have a higher rate of dying from other causes. There is a 1.5- to 2-fold increased risk of cardiovascular disease, independent of other known risk factors, a risk that is itself linked directly or indirectly to risk factors such as smoking and obesity. People with major depression are less likely to follow medical recommendations for treating and preventing cardiovascular disorders, further increasing their risk of medical complications. Cardiologists may not recognize underlying depression that complicates a cardiovascular problem under their care.
Epidemiology
Major depressive disorder affected approximately 163 million people in 2017 (2% of the global population). The percentage of people who are affected at some point in their life varies from 7% in Japan to 21% in France. In most countries the number of people who have depression during their lives falls within an 8–18% range. Lifetime rates are higher in the developed world (15%) compared to the developing world (11%).
In the United States, 8.4% of adults (21 million individuals) have at least one episode within a year-long period; the probability of having a major depressive episode is higher for females than males (10.5% vs. 6.2%), and highest for those aged 18 to 25 (17%). 15% of adolescents aged 12 to 17 in America (3.7 million teenagers) are also affected by depression. Among individuals reporting two or more races, the US prevalence is highest. Out of all the people suffering from MDD, only about 35% seek help from a professional for their disorder.
Major depression is about twice as common in women as in men, although it is unclear why this is so, and whether unmeasured factors contribute to the difference. The relative increase in occurrence is related to pubertal development rather than chronological age, reaches adult ratios between the ages of 15 and 18, and appears associated with psychosocial more than hormonal factors. In 2019, major depressive disorder was identified (using either the DSM-IV-TR or ICD-10) in the Global Burden of Disease Study as the fifth most common cause of years lived with disability and the 18th most common for disability-adjusted life years.
People are most likely to develop their first depressive episode between the ages of 30 and 40, and there is a second, smaller peak of incidence between ages 50 and 60. The risk of major depression is increased with neurological conditions such as stroke, Parkinson's disease, or multiple sclerosis, and during the first year after childbirth (postpartum depression). It is also more common after cardiovascular illnesses, and is more strongly associated with a poor cardiac outcome than with a good one. Depressive disorders are more common in urban populations than in rural ones, and the prevalence is increased in groups with poorer socioeconomic circumstances, e.g., homelessness. Depression is common among those over 65 years of age and increases in frequency beyond this age. The risk of depression increases in relation to the frailty of the individual. Depression is one of the most important factors that negatively affect quality of life in adults, including the elderly. Both symptoms and treatment among the elderly differ from those of the rest of the population.
Major depression was the leading cause of disease burden in North America and other high-income countries, and the fourth-leading cause worldwide, as of 2006. According to the WHO, by 2030 it is predicted to be the second-leading cause of disease burden worldwide, after HIV. Delay or failure in seeking treatment after relapse and the failure of health professionals to provide treatment are two barriers to reducing disability.
Comorbidity
Major depression frequently co-occurs with other psychiatric problems. The 1990–92 National Comorbidity Survey (US) reported that half of those with major depression also have lifetime anxiety and its associated disorders, such as generalized anxiety disorder. Anxiety symptoms can have a major impact on the course of a depressive illness, with delayed recovery, increased risk of relapse, greater disability, and increased suicidal behavior. Depressed people have increased rates of alcohol and substance use, particularly dependence, and depression is frequently comorbid with alcohol use disorder and personality disorders. Around a third of individuals diagnosed with attention deficit hyperactivity disorder (ADHD) develop comorbid depression, which complicates the diagnosis and treatment of both conditions. Post-traumatic stress disorder and depression also often co-occur. Depression can be exacerbated during particular months (usually winter) in those with seasonal affective disorder. While overuse of digital media has been associated with depressive symptoms, using digital media may also improve mood in some situations.
Depression and pain often co-occur. One or more pain symptoms are present in 65% of people who have depression, and anywhere from 5 to 85% of people who are experiencing pain will also have depression, depending on the setting—a lower prevalence in general practice, and higher in specialty clinics. Depression is often underrecognized, and therefore undertreated, in patients presenting with pain. Depression often coexists with physical disorders common among the elderly, such as stroke, other cardiovascular diseases, Parkinson's disease, and chronic obstructive pulmonary disease.
History
The Ancient Greek physician Hippocrates described a syndrome of melancholia (Greek: μελαγχολία) as a distinct disease with particular mental and physical symptoms; he characterized all "fears and despondencies, if they last a long time" as being symptomatic of the ailment. It was a similar but far broader concept than today's depression; prominence was given to a clustering of the symptoms of sadness, dejection, and despondency, and often fear, anger, delusions, and obsessions were included.
The term depression itself was derived from the Latin verb deprimere, meaning "to press down". From the 14th century, "to depress" meant to subjugate or to bring down in spirits. It was used in 1665 in English author Richard Baker's Chronicle to refer to someone having "a great depression of spirit", and by English author Samuel Johnson in a similar sense in 1753. The term also came into use in physiology and economics. An early usage referring to a psychiatric symptom was by French psychiatrist Louis Delasiauve in 1856, and by the 1860s it was appearing in medical dictionaries to refer to a physiological and metaphorical lowering of emotional function. Since Aristotle, melancholia had been associated with men of learning and intellectual brilliance, a hazard of contemplation and creativity. By the 19th century, however, this association had largely shifted, and melancholia became more commonly linked with women.
Although melancholia remained the dominant diagnostic term, depression gained increasing currency in medical treatises and had become a synonym by the end of the century; German psychiatrist Emil Kraepelin may have been the first to use it as the overarching term, referring to different kinds of melancholia as depressive states. Freud likened the state of melancholia to mourning in his 1917 paper Mourning and Melancholia. He theorized that objective loss, such as the loss of a valued relationship through death or a romantic break-up, results in subjective loss as well; the depressed individual has identified with the object of affection through an unconscious, narcissistic process called the libidinal cathexis of the ego. Such loss results in severe melancholic symptoms more profound than mourning; not only is the outside world viewed negatively, but the ego itself is compromised. The person's decline in self-perception is revealed in beliefs of their own blame, inferiority, and unworthiness. Freud also emphasized early life experiences as a predisposing factor. Adolf Meyer put forward a mixed social and biological framework emphasizing reactions in the context of an individual's life, and argued that the term depression should be used instead of melancholia. The first version of the DSM (DSM-I, 1952) contained depressive reaction and the DSM-II (1968) depressive neurosis, defined as an excessive reaction to internal conflict or an identifiable event, and also included a depressive type of manic-depressive psychosis within Major affective disorders.
The term unipolar (along with the related term bipolar) was coined by the neurologist and psychiatrist Karl Kleist, and subsequently used by his disciples Edda Neele and Karl Leonhard.
The term Major depressive disorder was introduced by a group of US clinicians in the mid-1970s as part of proposals for diagnostic criteria based on patterns of symptoms (called the "Research Diagnostic Criteria", building on earlier Feighner Criteria), and was incorporated into the DSM-III in 1980. The American Psychiatric Association added "major depressive disorder" to the Diagnostic and Statistical Manual of Mental Disorders (DSM-III), as a split of the previous depressive neurosis in the DSM-II, which also encompassed the conditions now known as dysthymia and adjustment disorder with depressed mood. To maintain consistency the ICD-10 used the same criteria, with only minor alterations, but using the DSM diagnostic threshold to mark a mild depressive episode, adding higher threshold categories for moderate and severe episodes. The ancient idea of melancholia still survives in the notion of a melancholic subtype.
The new definitions of depression were widely accepted, albeit with some conflicting findings and views. There have been some continued empirically based arguments for a return to the diagnosis of melancholia. There has been some criticism of the expansion of coverage of the diagnosis, related to the development and promotion of antidepressants and the biological model since the late 1950s.
Society and culture
Terminology
The term "depression" is used in a number of different ways. It is often used to mean this syndrome but may refer to other mood disorders or simply to a low mood. People's conceptualizations of depression vary widely, both within and among cultures. "Because of the lack of scientific certainty," one commentator has observed, "the debate over depression turns on questions of language. What we call it—'disease,' 'disorder,' 'state of mind'—affects how we view, diagnose, and treat it." There are cultural differences in the extent to which serious depression is considered an illness requiring personal professional treatment, or an indicator of something else, such as the need to address social or moral problems, the result of biological imbalances, or a reflection of individual differences in the understanding of distress that may reinforce feelings of powerlessness, and emotional struggle.
Cultural dimension
Cultural differences contribute to different prevalence of symptoms. "Do the Chinese somatize depression? A cross-cultural study" by Parker et al. discusses the cultural differences in prevalent symptoms of depression between individualistic and collectivistic cultures. The authors report that individuals with depression in collectivistic cultures tend to present more somatic symptoms and fewer affective symptoms compared to those in individualistic cultures. They suggest this difference arises because individualistic cultures warrant, or validate, the open expression of emotion, whereas collectivistic cultures treat such expression as a taboo that threatens social cooperation, which they regard as one of their most significant values.
Stigma
Historical figures were often reluctant to discuss or seek treatment for depression due to social stigma about the condition, or due to ignorance of diagnosis or treatments. Nevertheless, analysis or interpretation of letters, journals, artwork, writings, or statements of family and friends of some historical personalities has led to the presumption that they may have had some form of depression. People who may have had depression include English author Mary Shelley, American-British writer Henry James, and American president Abraham Lincoln. Some well-known contemporary people with possible depression include Canadian songwriter Leonard Cohen and American playwright and novelist Tennessee Williams. Some pioneering psychologists, such as Americans William James and John B. Watson, dealt with their own depression.
There has been a continuing discussion of whether neurological disorders and mood disorders may be linked to creativity, a discussion that goes back to Aristotelian times. British literature gives many examples of reflections on depression. English philosopher John Stuart Mill experienced a several-months-long period of what he called "a dull state of nerves", when one is "unsusceptible to enjoyment or pleasurable excitement; one of those moods when what is pleasure at other times, becomes insipid or indifferent". He quoted English poet Samuel Taylor Coleridge's "Dejection" as a perfect description of his case: "A grief without a pang, void, dark and drear, / A drowsy, stifled, unimpassioned grief, / Which finds no natural outlet or relief / In word, or sigh, or tear." English writer Samuel Johnson used the term "the black dog" in the 1780s to describe his own depression, and it was subsequently popularized by British Prime Minister Sir Winston Churchill, who also had the disorder. Johann Wolfgang von Goethe in his Faust, Part One, published in 1808, has Mephistopheles assume the form of a black dog, specifically a poodle.
Social stigma of major depression is widespread, and contact with mental health services reduces this only slightly. Public opinions on treatment differ markedly from those of health professionals; alternative treatments are held to be more helpful than pharmacological ones, which are viewed poorly. In the UK, the Royal College of Psychiatrists and the Royal College of General Practitioners conducted a joint five-year Defeat Depression campaign to educate and reduce stigma from 1992 to 1996; a MORI study conducted afterwards showed a small positive change in public attitudes to depression and treatment.
While serving his first term as Prime Minister of Norway, Kjell Magne Bondevik attracted international attention in August 1998 when he announced that he was suffering from a depressive episode, becoming the highest-ranking world leader to admit to suffering from a mental illness while in office. Upon this revelation, Anne Enger became acting Prime Minister for three weeks, from 30 August to 23 September, while he recovered. Bondevik then returned to office. He received thousands of supportive letters, and said that the experience had been positive overall, both for himself and because it made mental illness more publicly acceptable.
| Biology and health sciences | Mental disorder | null |
8398 | https://en.wikipedia.org/wiki/Dimension | Dimension | In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum number of coordinates needed to specify any point within it. Thus, a line has a dimension of one (1D) because only one coordinate is needed to specify a point on it – for example, the point at 5 on a number line. A surface, such as the boundary of a cylinder or sphere, has a dimension of two (2D) because two coordinates are needed to specify a point on it – for example, both a latitude and longitude are required to locate a point on the surface of a sphere. A two-dimensional Euclidean space is the plane. The inside of a cube, a cylinder, or a sphere is three-dimensional (3D) because three coordinates are needed to locate a point within these spaces.
In classical mechanics, space and time are different categories and refer to absolute space and time. That conception of the world is a four-dimensional space but not the one that was found necessary to describe electromagnetism. The four dimensions (4D) of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. 10 dimensions are used to describe superstring theory (6D hyperspace + 4D), 11 dimensions can describe supergravity and M-theory (7D hyperspace + 4D), and the state-space of quantum mechanics is an infinite-dimensional function space.
The concept of dimension is not restricted to physical objects. High-dimensional spaces frequently occur in mathematics and the sciences. They may be Euclidean spaces or more general parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian mechanics; these are abstract spaces, independent of the physical space.
In mathematics
In mathematics, the dimension of an object is, roughly speaking, the number of degrees of freedom of a point that moves on this object. In other words, the dimension is the number of independent parameters or coordinates that are needed for defining the position of a point that is constrained to be on the object. For example, the dimension of a point is zero; the dimension of a line is one, as a point can move on a line in only one direction (or its opposite); the dimension of a plane is two, etc.
The dimension is an intrinsic property of an object, in the sense that it is independent of the dimension of the space in which the object is or can be embedded. For example, a curve, such as a circle, is of dimension one, because the position of a point on a curve is determined by its signed distance along the curve to a fixed point on the curve. This is independent from the fact that a curve cannot be embedded in a Euclidean space of dimension lower than two, unless it is a line. Similarly, a surface is of dimension two, even if embedded in three-dimensional space.
The dimension of Euclidean $n$-space $E^n$ is $n$. When trying to generalize to other types of spaces, one is faced with the question "what makes $E^n$ $n$-dimensional?" One answer is that to cover a fixed ball in $E^n$ by small balls of radius $\varepsilon$, one needs on the order of $\varepsilon^{-n}$ such small balls. This observation leads to the definition of the Minkowski dimension and its more sophisticated variant, the Hausdorff dimension, but there are also other answers to that question. For example, the boundary of a ball in $E^n$ looks locally like $E^{n-1}$, and this leads to the notion of the inductive dimension. While these notions agree on $E^n$, they turn out to be different when one looks at more general spaces.
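To make the covering argument concrete, the Minkowski (box-counting) dimension can be written as a limit; this is the standard formulation, stated here as a sketch:

```latex
\dim_{\mathrm{box}}(S) \;=\; \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)}
```

where $N(\varepsilon)$ is the minimal number of balls of radius $\varepsilon$ needed to cover $S$; for a fixed ball in $E^n$, $N(\varepsilon) \sim \varepsilon^{-n}$ recovers dimension $n$.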
A tesseract is an example of a four-dimensional object. Whereas outside mathematics the use of the term "dimension" is as in: "A tesseract has four dimensions", mathematicians usually express this as: "The tesseract has dimension 4", or: "The dimension of the tesseract is 4" or: 4D.
Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional geometry only began in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli and Bernhard Riemann. Riemann's 1854 Habilitationsschrift, Schläfli's 1852 Theorie der vielfachen Kontinuität, and Hamilton's discovery of the quaternions and John T. Graves' discovery of the octonions in 1843 marked the beginning of higher-dimensional geometry.
The rest of this section examines some of the more important mathematical definitions of dimension.
Vector spaces
The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the Hamel dimension or algebraic dimension to distinguish it from other notions of dimension.
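As an illustration (not from the original article), the dimension of the span of a finite set of vectors can be computed as a matrix rank; the library call below is standard NumPy:

```python
import numpy as np

# Three vectors in R^3; the third is the sum of the first two,
# so they span a 2-dimensional subspace.
vectors = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
])

# The dimension of the span equals the rank of the matrix
# whose rows are the vectors.
dim = np.linalg.matrix_rank(vectors)
print(dim)  # 2
```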
For the non-free case, this generalizes to the notion of the length of a module.
Manifolds
The uniquely defined dimension of every connected topological manifold can be calculated. A connected topological manifold is locally homeomorphic to Euclidean $n$-space, and the number $n$ is the manifold's dimension.
For connected differentiable manifolds, the dimension is also the dimension of the tangent vector space at any point.
In geometric topology, the theory of manifolds is characterized by the way dimensions 1 and 2 are relatively elementary, the high-dimensional cases $n > 4$ are simplified by having extra space in which to "work", and the cases $n = 3$ and $n = 4$ are in some senses the most difficult. This state of affairs was highly marked in the various cases of the Poincaré conjecture, in which four different proof methods are applied.
Complex dimension
The dimension of a manifold depends on the base field with respect to which Euclidean space is defined. While analysis usually assumes a manifold to be over the real numbers, it is sometimes useful in the study of complex manifolds and algebraic varieties to work over the complex numbers instead. A complex number (x + iy) has a real part x and an imaginary part y, in which x and y are both real numbers; hence, the complex dimension is half the real dimension.
Conversely, in algebraically unconstrained contexts, a single complex coordinate system may be applied to an object having two real dimensions. For example, an ordinary two-dimensional spherical surface, when given a complex metric, becomes a Riemann sphere of one complex dimension.
Varieties
The dimension of an algebraic variety may be defined in various equivalent ways. The most intuitive way is probably the dimension of the tangent space at any regular point of an algebraic variety. Another intuitive way is to define the dimension as the number of hyperplanes that are needed in order to have an intersection with the variety that is reduced to a finite number of points (dimension zero). This definition is based on the fact that the intersection of a variety with a hyperplane reduces the dimension by one unless the hyperplane contains the variety.
An algebraic set being a finite union of algebraic varieties, its dimension is the maximum of the dimensions of its components. It is equal to the maximal length of the chains of sub-varieties of the given algebraic set (the length of such a chain is the number of strict inclusions "$\subsetneq$").
Each variety can be considered as an algebraic stack, and its dimension as variety agrees with its dimension as stack. There are however many stacks which do not correspond to varieties, and some of these have negative dimension. Specifically, if V is a variety of dimension m and G is an algebraic group of dimension n acting on V, then the quotient stack [V/G] has dimension m − n.
Krull dimension
The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length $n$ being a sequence $P_0 \subsetneq P_1 \subsetneq \cdots \subsetneq P_n$ of prime ideals related by strict inclusion. It is strongly related to the dimension of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of the polynomials on the variety.
For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0.
Topological spaces
For any normal topological space $X$, the Lebesgue covering dimension of $X$ is defined to be the smallest integer $n$ for which the following holds: any open cover has an open refinement (a second open cover in which each element is a subset of an element in the first cover) such that no point is included in more than $n + 1$ elements. In this case $\dim X = n$. For a manifold, this coincides with the dimension mentioned above. If no such integer exists, then the dimension of $X$ is said to be infinite, and one writes $\dim X = \infty$. Moreover, $X$ has dimension $-1$, i.e., $\dim X = -1$, if and only if $X$ is empty. This definition of covering dimension can be extended from the class of normal spaces to all Tychonoff spaces merely by replacing the term "open" in the definition by the term "functionally open".
An inductive dimension may be defined inductively as follows. Consider a discrete set of points (such as a finite collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a new direction, one obtains a 2-dimensional object. In general, one obtains an $(n + 1)$-dimensional object by dragging an $n$-dimensional object in a new direction. The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive dimension, and is based on the analogy that, in the case of metric spaces, $(n + 1)$-dimensional balls have $n$-dimensional boundaries, permitting an inductive definition based on the dimension of the boundaries of open sets. Moreover, the boundary of a discrete set of points is the empty set, and therefore the empty set can be taken to have dimension $-1$.
Similarly, for the class of CW complexes, the dimension of an object is the largest $n$ for which the $n$-skeleton is nontrivial. Intuitively, this can be described as follows: if the original space can be continuously deformed into a collection of higher-dimensional triangles joined at their faces with a complicated surface, then the dimension of the object is the dimension of those triangles.
Hausdorff dimension
The Hausdorff dimension is useful for studying structurally complicated sets, especially fractals. The Hausdorff dimension is defined for all metric spaces and, unlike the dimensions considered above, can also have non-integer real values. The box dimension or Minkowski dimension is a variant of the same idea. In general, there exist more definitions of fractal dimensions that work for highly irregular sets and attain non-integer positive real values.
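A minimal numerical sketch (not from the original article) of the box-counting variant: estimate the dimension of a point set by counting occupied grid boxes at several scales and fitting the slope of log N(ε) against log(1/ε). All names here are illustrative.

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate the box-counting (Minkowski) dimension of a set of
    d-dimensional points by counting occupied grid cells at each scale."""
    counts = []
    for eps in epsilons:
        # Assign each point to the grid cell of side length eps it falls in,
        # then count the distinct occupied cells.
        cells = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(cells))
    # The slope of log N(eps) vs. log(1/eps) approximates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(epsilons)), np.log(counts), 1)
    return slope

# Usage: points sampled densely from the unit square should give a value near 2.
rng = np.random.default_rng(0)
square = rng.random((100_000, 2))
print(box_counting_dimension(square, [0.1, 0.05, 0.02, 0.01]))
```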
Hilbert spaces
Every Hilbert space admits an orthonormal basis, and any two such bases for a particular space have the same cardinality. This cardinality is called the dimension of the Hilbert space. This dimension is finite if and only if the space's Hamel dimension is finite, and in this case the two dimensions coincide.
In physics
Spatial dimensions
Classical physics theories describe three physical dimensions: from a particular point in space, the basic directions in which we can move are up/down, left/right, and forward/backward. Movement in any other direction can be expressed in terms of just these three. Moving down is the same as moving up a negative distance. Moving diagonally upward and forward is just as the name of the direction implies; i.e., moving in a linear combination of up and forward. In its simplest form: a line describes one dimension, a plane describes two dimensions, and a cube describes three dimensions. (See Space and Cartesian coordinate system.)
Time
A temporal dimension, or time dimension, is a dimension of time. Time is often referred to as the "fourth dimension" for this reason, but that is not to imply that it is a spatial dimension. A temporal dimension is one way to measure physical change. It is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move freely in time but subjectively move in one direction.
The equations used in physics to model reality do not treat time in the same way that humans commonly perceive it. The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing in the direction of increasing entropy).
The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (extended to general relativity), which treats perceived space and time as components of a four-dimensional manifold known as spacetime, and in the special, flat case as Minkowski space. Time is different from the spatial dimensions in that it operates in all of them, including theoretical ones such as a fourth spatial dimension. Time is not, however, present at a single point of absolute infinite singularity, defined as a geometric point, since an infinitely small point can have no change and therefore no time. Just as an object moves through positions in space, it also moves through positions in time. In this sense the force moving any object to change is time.
Additional dimensions
In physics, three dimensions of space and one of time is the accepted norm. However, there are theories that attempt to unify the four fundamental forces by introducing extra dimensions/hyperspace. Most notably, superstring theory requires 10 spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory which subsumes five previously distinct superstring theories. Supergravity theory also promotes 11D spacetime = 7D hyperspace + 4 common dimensions. To date, no direct experimental or observational evidence is available to support the existence of these extra dimensions. If hyperspace exists, it must be hidden from us by some physical mechanism. One well-studied possibility is that the extra dimensions may be "curled up" at such tiny scales as to be effectively invisible to current experiments.
In 1921, Kaluza–Klein theory presented 5D including an extra dimension of space. At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances. In particular when the geometry of the extra dimensions is trivial, it reproduces electromagnetism. However, at sufficiently high energies or short distances, this setup still suffers from the same pathologies that famously obstruct direct attempts to describe quantum gravity. Therefore, these models still require a UV completion, of the kind that string theory is intended to provide. In particular, superstring theory requires six compact dimensions (6D hyperspace) forming a Calabi–Yau manifold. Thus Kaluza–Klein theory may be considered either as an incomplete description on its own, or as a subset of string theory model building.
In addition to small and curled up extra dimensions, there may be extra dimensions that instead are not apparent because the matter associated with our visible universe is localized on a subspace. Thus, the extra dimensions need not be small and compact but may be large extra dimensions. D-branes are dynamical extended objects of various dimensionalities predicted by string theory that could play this role. They have the property that open string excitations, which are associated with gauge interactions, are confined to the brane by their endpoints, whereas the closed strings that mediate the gravitational interaction are free to propagate into the whole spacetime, or "the bulk". This could be related to why gravity is exponentially weaker than the other forces, as it effectively dilutes itself as it propagates into a higher-dimensional volume.
Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology attempts to explain why there are three dimensions of space using topological and thermodynamic considerations. According to this idea, three spatial dimensions grow large because three is the largest number of spatial dimensions in which strings can generically intersect. If initially there are many windings of strings around compact dimensions, space could only expand to macroscopic sizes once these windings are eliminated, which requires oppositely wound strings to find each other and annihilate. But strings can only find each other to annihilate at a meaningful rate in three dimensions, so it follows that only three dimensions of space are allowed to grow large given this kind of initial configuration.
Extra dimensions are said to be universal if all fields are equally free to propagate within them.
In computer graphics and spatial data
Several types of digital systems are based on the storage, analysis, and visualization of geometric shapes, including illustration software, computer-aided design, and geographic information systems. Different vector systems use a wide variety of data structures to represent shapes, but almost all are fundamentally based on a set of geometric primitives corresponding to the spatial dimensions (a minimal code sketch of these primitives follows the list):
Point (0-dimensional), a single coordinate in a Cartesian coordinate system.
Line or Polyline (1-dimensional) usually represented as an ordered list of points sampled from a continuous line, whereupon the software is expected to interpolate the intervening shape of the line as straight- or curved-line segments.
Polygon (2-dimensional) usually represented as a line that closes at its endpoints, representing the boundary of a two-dimensional region. The software is expected to use this boundary to partition 2-dimensional space into an interior and exterior.
Surface (3-dimensional) represented using a variety of strategies, such as a polyhedron consisting of connected polygon faces. The software is expected to use this surface to partition 3-dimensional space into an interior and exterior.
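As an illustration only (the names and structure here are assumptions, not taken from any particular GIS library), the 2D primitives above map naturally onto simple types:

```python
from dataclasses import dataclass
from typing import List, Tuple

Coordinate = Tuple[float, float]  # a position in 2D Cartesian space

@dataclass
class Point:
    """0-dimensional primitive: a single coordinate."""
    position: Coordinate

@dataclass
class Polyline:
    """1-dimensional primitive: an ordered list of sampled points;
    the software interpolates the shape of the line between them."""
    vertices: List[Coordinate]

@dataclass
class Polygon:
    """2-dimensional primitive: a boundary line that closes at its
    endpoints, partitioning the plane into interior and exterior."""
    boundary: List[Coordinate]  # first and last vertex coincide

# Usage: a city modeled as a point, a road as a polyline.
city = Point((51.5, -0.12))
road = Polyline([(0.0, 0.0), (1.0, 0.5), (2.0, 0.4)])
```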
Frequently in these systems, especially GIS and cartography, a representation of a real-world phenomenon may have a different (usually lower) dimension than the phenomenon being represented. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional volume of material) may be represented as a line. This dimensional generalization correlates with tendencies in spatial cognition. For example, asking the distance between two cities presumes a conceptual model of the cities as points, while giving directions involving travel "up", "down", or "along" a road implies a one-dimensional conceptual model. This is frequently done for purposes of data efficiency, visual simplicity, or cognitive efficiency, and is acceptable if the distinction between the representation and the represented is understood, but it can cause confusion if information users assume that the digital shape is a perfect representation of reality (i.e., believing that roads really are lines).
More dimensions
List of topics by dimension
| Mathematics | Geometry | null |
8407 | https://en.wikipedia.org/wiki/Dodecahedron | Dodecahedron | In geometry, a dodecahedron (; ) or duodecahedron is any polyhedron with twelve flat faces. The most familiar dodecahedron is the regular dodecahedron with regular pentagons as faces, which is a Platonic solid. There are also three regular star dodecahedra, which are constructed as stellations of the convex form. All of these have icosahedral symmetry, order 120.
Some dodecahedra have the same combinatorial structure as the regular dodecahedron (in terms of the graph formed by its vertices and edges), but their pentagonal faces are not regular:
The pyritohedron, a common crystal form in pyrite, has pyritohedral symmetry, while the tetartoid has tetrahedral symmetry.
The rhombic dodecahedron can be seen as a limiting case of the pyritohedron, and it has octahedral symmetry. The elongated dodecahedron and trapezo-rhombic dodecahedron variations, along with the rhombic dodecahedra, are space-filling. There are numerous other dodecahedra.
While the regular dodecahedron shares many features with other Platonic solids, one unique property of it is that one can start at a corner of the surface and draw an infinite number of straight lines across the figure that return to the original point without crossing over any other corner.
Regular dodecahedron
The convex regular dodecahedron is one of the five regular Platonic solids and can be represented by its Schläfli symbol {5, 3}.
The dual polyhedron is the regular icosahedron {3, 5}, having five equilateral triangles around each vertex.
The convex regular dodecahedron also has three stellations, all of which are regular star dodecahedra. They form three of the four Kepler–Poinsot polyhedra. They are the small stellated dodecahedron {5/2, 5}, the great dodecahedron {5, 5/2}, and the great stellated dodecahedron {5/2, 3}. The small stellated dodecahedron and great dodecahedron are dual to each other; the great stellated dodecahedron is dual to the great icosahedron {3, 5/2}. All of these regular star dodecahedra have regular pentagonal or pentagrammic faces. The convex regular dodecahedron and great stellated dodecahedron are different realisations of the same abstract regular polyhedron; the small stellated dodecahedron and great dodecahedron are different realisations of another abstract regular polyhedron.
Other pentagonal dodecahedra
In crystallography, two important dodecahedra can occur as crystal forms in some symmetry classes of the cubic crystal system that are topologically equivalent to the regular dodecahedron but less symmetrical: the pyritohedron with pyritohedral symmetry, and the tetartoid with tetrahedral symmetry:
Pyritohedron
A pyritohedron is a dodecahedron with pyritohedral (Th) symmetry. Like the regular dodecahedron, it has twelve identical pentagonal faces, with three meeting in each of the 20 vertices (see figure). However, the pentagons are not constrained to be regular, and the underlying atomic arrangement has no true fivefold symmetry axis. Its 30 edges are divided into two sets – containing 24 and 6 edges of the same length. The only axes of rotational symmetry are three mutually perpendicular twofold axes and four threefold axes.
Although regular dodecahedra do not exist in crystals, the pyritohedron form occurs in the crystals of the mineral pyrite, and it may have been an inspiration for the discovery of the regular Platonic solid form. The true regular dodecahedron can occur as a shape for quasicrystals (such as holmium–magnesium–zinc quasicrystal) with icosahedral symmetry, which includes true fivefold rotation axes.
Crystal pyrite
The name crystal pyrite comes from one of the two common crystal habits shown by pyrite (the other one being the cube). In pyritohedral pyrite, the faces have a Miller index of (210), which means that the dihedral angle is 2·arctan(2) ≈ 126.87° and each pentagonal face has one angle of approximately 121.6° in between two angles of approximately 106.6° and opposite two angles of approximately 102.6°. The following formulas show the measurements for the face of a perfect crystal (which is rarely found in nature).
Cartesian coordinates
The eight vertices of a cube have the coordinates (±1, ±1, ±1).
The coordinates of the 12 additional vertices are
(0, ±(1 + h), ±(1 − h²)),
(±(1 + h), ±(1 − h²), 0) and
(±(1 − h²), 0, ±(1 + h)).
h is the height of the wedge-shaped "roof" above the faces of that cube with edge length 2.
An important case is h = 1/2 (a quarter of the cube edge length) for perfect natural pyrite (also the pyritohedron in the Weaire–Phelan structure).
Another one is h = 1/φ = 0.618... for the regular dodecahedron. See the section Geometric freedom for other cases.
Two pyritohedra with swapped nonzero coordinates are in dual positions to each other like the dodecahedra in the compound of two dodecahedra.
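A minimal sketch (assuming the coordinate description above; function names are illustrative) that generates the 20 pyritohedron vertices for a given height parameter h:

```python
from itertools import product

def pyritohedron_vertices(h):
    """Return the 20 vertices of a pyritohedron built on a cube of
    edge length 2, with roof height parameter h (0 < h < 1)."""
    # Eight cube vertices (±1, ±1, ±1).
    verts = [tuple(s) for s in product((-1.0, 1.0), repeat=3)]
    a, b = 1.0 + h, 1.0 - h * h
    # Twelve roof vertices: cyclic permutations of (0, ±(1+h), ±(1−h²)).
    for sa, sb in product((-1.0, 1.0), repeat=2):
        verts.append((0.0, sa * a, sb * b))
        verts.append((sa * a, sb * b, 0.0))
        verts.append((sb * b, 0.0, sa * a))
    return verts

# h = 1/2 gives the perfect natural pyrite form; h = 0.618... the regular dodecahedron.
print(len(pyritohedron_vertices(0.5)))  # 20
```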
Geometric freedom
The pyritohedron has a geometric degree of freedom with limiting cases of a cubic convex hull at one limit of collinear edges, and a rhombic dodecahedron as the other limit as 6 edges are degenerated to length zero. The regular dodecahedron represents a special intermediate case where all edges and angles are equal.
It is possible to go past these limiting cases, creating concave or nonconvex pyritohedra. The endo-dodecahedron is concave and equilateral; it can tessellate space with the convex regular dodecahedron. Continuing from there in that direction, we pass through a degenerate case where twelve vertices coincide in the centre, and on to the regular great stellated dodecahedron where all edges and angles are equal again, and the faces have been distorted into regular pentagrams. On the other side, past the rhombic dodecahedron, we get a nonconvex equilateral dodecahedron with fish-shaped self-intersecting equilateral pentagonal faces.
Tetartoid
A tetartoid (also tetragonal pentagonal dodecahedron, pentagon-tritetrahedron, and tetrahedric pentagon dodecahedron) is a dodecahedron with chiral tetrahedral symmetry (T). Like the regular dodecahedron, it has twelve identical pentagonal faces, with three meeting in each of the 20 vertices. However, the pentagons are not regular and the figure has no fivefold symmetry axes.
Although regular dodecahedra do not exist in crystals, the tetartoid form does. The name tetartoid comes from the Greek root for one-fourth because it has one fourth of full octahedral symmetry, and half of pyritohedral symmetry. The mineral cobaltite can have this symmetry form.
Abstractions sharing the solid's topology and symmetry can be created from the cube and the tetrahedron. In the cube each face is bisected by a slanted edge. In the tetrahedron each edge is trisected, and each of the new vertices connected to a face center. (In Conway polyhedron notation this is a gyro tetrahedron.)
Cartesian coordinates
The following points are vertices of a tetartoid pentagon under tetrahedral symmetry:
(a, b, c); (−a, −b, c); (−n/d1, −n/d1, n/d1); (−c, −a, b); (−n/d2, n/d2, n/d2),
under the following conditions:
0 ≤ a ≤ b ≤ c,
n = a²c − bc²,
d1 = a² − ab + b² + ac − 2bc,
d2 = a² + ab + b² − ac − 2bc,
nd1d2 ≠ 0.
Geometric freedom
The regular dodecahedron is a tetartoid with more than the required symmetry. The triakis tetrahedron is a degenerate case with 12 zero-length edges.
Dual of triangular gyrobianticupola
A lower symmetry form of the regular dodecahedron can be constructed as the dual of a polyhedron built from two triangular anticupolae connected base-to-base, called a triangular gyrobianticupola. It has D3d symmetry, order 12. It has 2 sets of 3 identical pentagons on the top and bottom, connected by 6 pentagons around the sides which alternate upwards and downwards. This form has a hexagonal cross-section, and identical copies can be connected as a partial hexagonal honeycomb, but all vertices will not match.
Rhombic dodecahedron
The rhombic dodecahedron is a zonohedron with twelve rhombic faces and octahedral symmetry. It is dual to the quasiregular cuboctahedron (an Archimedean solid) and occurs in nature as a crystal form. The rhombic dodecahedron packs together to fill space.
The rhombic dodecahedron can be seen as a degenerate pyritohedron where the 6 special edges have been reduced to zero length, reducing the pentagons into rhombic faces.
The rhombic dodecahedron has several stellations, the first of which is also a parallelohedral spacefiller.
Another important rhombic dodecahedron, the Bilinski dodecahedron, has twelve faces congruent to those of the rhombic triacontahedron, i.e. the diagonals are in the ratio of the golden ratio. It is also a zonohedron and was described by Bilinski in 1960. This figure is another spacefiller, and can also occur in non-periodic spacefillings along with the rhombic triacontahedron, the rhombic icosahedron and rhombic hexahedra.
Other dodecahedra
There are 6,384,634 topologically distinct convex dodecahedra, excluding mirror images—the number of vertices ranges from 8 to 20. (Two polyhedra are "topologically distinct" if they have intrinsically different arrangements of faces and vertices, such that it is impossible to distort one into the other simply by changing the lengths of edges or the angles between edges or faces.)
Topologically distinct dodecahedra (excluding pentagonal and rhombic forms)
Uniform polyhedra:
Decagonal prism – 10 squares, 2 decagons, D10h symmetry, order 40.
Pentagonal antiprism – 10 equilateral triangles, 2 pentagons, D5d symmetry, order 20
Johnson solids (regular faced):
Pentagonal cupola – 5 triangles, 5 squares, 1 pentagon, 1 decagon, C5v symmetry, order 10
Snub disphenoid – 12 triangles, D2d, order 8
Elongated square dipyramid – 8 triangles and 4 squares, D4h symmetry, order 16
Metabidiminished icosahedron – 10 triangles and 2 pentagons, C2v symmetry, order 4
Congruent irregular faced: (face-transitive)
Hexagonal bipyramid – 12 isosceles triangles, dual of hexagonal prism, D6h symmetry, order 24
Hexagonal trapezohedron – 12 kites, dual of hexagonal antiprism, D6d symmetry, order 24
Triakis tetrahedron – 12 isosceles triangles, dual of truncated tetrahedron, Td symmetry, order 24
Other less regular faced:
Hendecagonal pyramid – 11 isosceles triangles and 1 regular hendecagon, C11v, order 11
Trapezo-rhombic dodecahedron – 6 rhombi, 6 trapezoids – dual of triangular orthobicupola, D3h symmetry, order 12
Rhombo-hexagonal dodecahedron or elongated dodecahedron – 8 rhombi and 4 equilateral hexagons, D4h symmetry, order 16
Truncated pentagonal trapezohedron, D5d, order 20, topologically equivalent to regular dodecahedron
Practical usage
Armand Spitz used a dodecahedron as the "globe" equivalent for his Digital Dome planetarium projector, based upon a suggestion from Albert Einstein.
Regular dodecahedrons are sometimes used as dice, when they are known as d12s, especially in games such as Dungeons and Dragons.
| Mathematics | Three-dimensional space | null |
8410 | https://en.wikipedia.org/wiki/Decibel | Decibel | The decibel (symbol: dB) is a relative unit of measurement equal to one tenth of a bel (B). It expresses the ratio of two values of a power or root-power quantity on a logarithmic scale. Two signals whose levels differ by one decibel have a power ratio of $10^{1/10}$ (approximately 1.26) or root-power ratio of $10^{1/20}$ (approximately 1.12).
The unit fundamentally expresses a relative change but may also be used to express an absolute value as the ratio of a value to a fixed reference value; when used in this way, the unit symbol is often suffixed with letter codes that indicate the reference value. For example, for the reference value of 1 volt, a common suffix is "V" (e.g., "20 dBV").
Two principal types of scaling of the decibel are in common use. When expressing a power ratio, it is defined as ten times the logarithm with base 10. That is, a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing root-power quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level. The decibel scales differ by a factor of two, so that the related power and root-power levels change by the same value in linear systems, where power is proportional to the square of amplitude.
The definition of the decibel originated in the measurement of transmission loss and power in telephony of the early 20th century in the Bell System in the United States. The bel was named in honor of Alexander Graham Bell, but the bel is seldom used. Instead, the decibel is used for a wide variety of measurements in science and engineering, most prominently for sound power in acoustics, in electronics and control theory. In electronics, the gains of amplifiers, attenuation of signals, and signal-to-noise ratios are often expressed in decibels.
History
The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. Until the mid-1920s, the unit for loss was miles of standard cable (MSC). 1 MSC corresponded to the loss of power over one mile (approximately 1.6 km) of standard telephone cable at a frequency of 5000 radians per second (795.8 Hz), and matched closely the smallest attenuation detectable to a listener. A standard telephone cable was "a cable having uniformly distributed resistance of 88 ohms per loop-mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile" (approximately corresponding to 19 gauge wire).
In 1924, Bell Telephone Laboratories received a favorable response to a new unit definition among members of the International Advisory Committee on Long Distance Telephony in Europe and replaced the MSC with the Transmission Unit (TU). 1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power.
The definition was conveniently chosen such that 1 TU approximated 1 MSC; specifically, 1 MSC was 1.056 TU. In 1928, the Bell system renamed the TU into the decibel, being one tenth of a newly defined unit for the base-10 logarithm of the power ratio. It was named the bel, in honor of the telecommunications pioneer Alexander Graham Bell.
The bel is seldom used, as the decibel was the proposed working unit.
The naming and early definition of the decibel are described in the NBS Standards Yearbook of 1931.
In 1954, J. W. Horton argued that the use of the decibel as a unit for quantities other than transmission loss led to confusion, and suggested the name logit for "standard magnitudes which combine by multiplication", to contrast with the name unit for "standard magnitudes which combine by addition".
In April 2003, the International Committee for Weights and Measures (CIPM) considered a recommendation for the inclusion of the decibel in the International System of Units (SI), but decided against the proposal. However, the decibel is recognized by other international bodies such as the International Electrotechnical Commission (IEC) and International Organization for Standardization (ISO). The IEC permits the use of the decibel with root-power quantities as well as power and this recommendation is followed by many national standards bodies, such as NIST, which justifies the use of the decibel for voltage ratios. In spite of their widespread use, suffixes (such as in dBA or dBV) are not recognized by the IEC or ISO.
Definition
The IEC Standard 60027-3:2002 defines the following quantities. The decibel (dB) is one-tenth of a bel: $1\,\text{dB} = 0.1\,\text{B}$. The bel (B) is $\tfrac{1}{2}\ln(10)$ nepers: $1\,\text{B} = \tfrac{1}{2}\ln(10)\,\text{Np}$. The neper is the change in the level of a root-power quantity when the root-power quantity changes by a factor of $e$, that is $1\,\text{Np} = \ln(e) = 1$, thereby relating all of the units as nondimensional natural logarithms of root-power-quantity ratios: $1\,\text{dB} = 0.05\ln(10)\,\text{Np} \approx 0.1151\,\text{Np}$. Finally, the level of a quantity is the logarithm of the ratio of the value of that quantity to a reference value of the same kind of quantity.
Therefore, the bel represents the logarithm of a ratio between two power quantities of 10:1, or the logarithm of a ratio between two root-power quantities of $\sqrt{10}$:1.
Two signals whose levels differ by one decibel have a power ratio of $10^{1/10}$, which is approximately 1.259, and an amplitude (root-power quantity) ratio of $10^{1/20}$ (approximately 1.122).
The bel is rarely used either without a prefix or with SI unit prefixes other than deci; it is customary, for example, to use hundredths of a decibel rather than millibels. Thus, five one-thousandths of a bel would normally be written 0.05 dB, and not 5 mB.
The method of expressing a ratio as a level in decibels depends on whether the measured property is a power quantity or a root-power quantity; see Power, root-power, and field quantities for details.
Power quantities
When referring to measurements of power quantities, a ratio can be expressed as a level in decibels by evaluating ten times the base-10 logarithm of the ratio of the measured quantity to a reference value. Thus, the ratio of P (measured power) to P0 (reference power) is represented by LP, that ratio expressed in decibels, which is calculated using the formula: $L_P = 10 \log_{10}\!\left(\frac{P}{P_0}\right)\,\text{dB}.$
The base-10 logarithm of the ratio of the two power quantities is the number of bels. The number of decibels is ten times the number of bels (equivalently, a decibel is one-tenth of a bel). P and P0 must measure the same type of quantity, and have the same units, before calculating the ratio. If $P = P_0$ in the above equation, then LP = 0. If P is greater than P0 then LP is positive; if P is less than P0 then LP is negative.
Rearranging the above equation gives the following formula for P in terms of P0 and LP: $P = P_0 \cdot 10^{L_P/10}.$
Root-power (field) quantities
When referring to measurements of root-power quantities, it is usual to consider the ratio of the squares of F (measured) and F0 (reference). This is because the definitions were originally formulated to give the same value for relative ratios for both power and root-power quantities. Thus, the following definition is used: $L_F = 10 \log_{10}\!\left(\frac{F^2}{F_0^2}\right)\,\text{dB} = 20 \log_{10}\!\left(\frac{F}{F_0}\right)\,\text{dB}.$
The formula may be rearranged to give $F = F_0 \cdot 10^{L_F/20}.$
Similarly, in electrical circuits, dissipated power is typically proportional to the square of voltage or current when the impedance is constant. Taking voltage as an example, this leads to the equation for power gain level LG: $L_G = 20 \log_{10}\!\left(\frac{V_\text{out}}{V_\text{in}}\right)\,\text{dB},$
where Vout is the root-mean-square (rms) output voltage, Vin is the rms input voltage. A similar formula holds for current.
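A minimal sketch (function names are illustrative, not from any standard library) of the two conversions, showing the factor-of-two relationship between power and root-power levels:

```python
import math

def power_level_db(p, p_ref):
    """Level in dB of power quantity p relative to p_ref (same units)."""
    return 10.0 * math.log10(p / p_ref)

def root_power_level_db(f, f_ref):
    """Level in dB of a root-power quantity (e.g. rms voltage) f
    relative to f_ref (same units)."""
    return 20.0 * math.log10(f / f_ref)

# In a linear system where power is proportional to amplitude squared,
# the two levels agree: a 10x voltage ratio is a 100x power ratio,
# and both correspond to 20 dB.
print(power_level_db(100.0, 1.0))      # 20.0
print(root_power_level_db(10.0, 1.0))  # 20.0
```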
The term root-power quantity was introduced by ISO Standard 80000-1:2009 as a substitute for field quantity. The term field quantity is deprecated by that standard, and root-power is used throughout this article.
Relationship between power and root-power levels
Although power and root-power quantities are different quantities, their respective levels are historically measured in the same units, typically decibels. A factor of 2 is introduced to make changes in the respective levels match under restricted conditions, such as when the medium is linear and the same waveform is under consideration with changes in amplitude, or the medium impedance is linear and independent of both frequency and time. This relies on the relationship
$\left(\frac{P(t)}{P_0}\right) = \left(\frac{F(t)}{F_0}\right)^2$
holding. In a nonlinear system, this relationship does not hold by the definition of linearity. However, even in a linear system in which the power quantity is the product of two linearly related quantities (e.g. voltage and current), if the impedance is frequency- or time-dependent, this relationship does not hold in general, for example if the energy spectrum of the waveform changes.
For differences in level, the required relationship is relaxed from that above to one of proportionality (i.e., the reference quantities $P_0$ and $F_0$ need not be related), or equivalently,
$\frac{P_2}{P_1} = \left(\frac{F_2}{F_1}\right)^2$
must hold to allow the power level difference to be equal to the root-power level difference from power $P_1$ and $F_1$ to $P_2$ and $F_2$. An example might be an amplifier with unity voltage gain independent of load and frequency driving a load with a frequency-dependent impedance: the relative voltage gain of the amplifier is always 0 dB, but the power gain depends on the changing spectral composition of the waveform being amplified. Frequency-dependent impedances may be analyzed by considering the quantities power spectral density and the associated root-power quantities via the Fourier transform, which allows elimination of the frequency dependence in the analysis by analyzing the system at each frequency independently.
Conversions
Since logarithm differences measured in these units often represent power ratios and root-power ratios, values for both are shown below. The bel is traditionally used as a unit of logarithmic power ratio, while the neper is used for logarithmic root-power (amplitude) ratio.
Examples
The unit dBW is often used to denote a ratio for which the reference is 1 W, and similarly dBm for a 1 mW reference point.
Calculating the ratio in decibels of 1 kW (one kilowatt, or $10^3$ watts) to 1 W yields: $L_P = 10 \log_{10}\!\left(\frac{1000\,\text{W}}{1\,\text{W}}\right)\,\text{dB} = 30\,\text{dB}.$
The ratio in decibels of $\sqrt{1000}\,\text{V} \approx 31.62\,\text{V}$ to $1\,\text{V}$ is: $L_F = 20 \log_{10}\!\left(\frac{31.62\,\text{V}}{1\,\text{V}}\right)\,\text{dB} = 30\,\text{dB}.$
This illustrates the consequence from the definitions above that $L_G$ has the same value, 30 dB, regardless of whether it is obtained from powers or from amplitudes, provided that in the specific system being considered power ratios are equal to amplitude ratios squared.
The level in decibels of a power $P$ relative to 1 mW (one milliwatt) is obtained with the formula: $L_P = 10 \log_{10}\!\left(\frac{P}{1\,\text{mW}}\right)\,\text{dB}.$
The power ratio corresponding to a change in level of $\Delta L_P$ (in dB) is given by: $\frac{P}{P_0} = 10^{\Delta L_P/10}.$
A change in power ratio by a factor of 10 corresponds to a change in level of 10 dB. A change in power ratio by a factor of 2 (or 1/2) is approximately a change of 3 dB. More precisely, the change is ±3.0103 dB, but this is almost universally rounded to 3 dB in technical writing. This implies an increase in voltage by a factor of $\sqrt{2} \approx 1.414$. Likewise, a doubling or halving of the voltage, corresponding to a quadrupling or quartering of the power, is commonly described as 6 dB rather than ±6.0206 dB.
Should it be necessary to make the distinction, the number of decibels is written with additional significant figures. 3.000 dB corresponds to a power ratio of $10^{3/10}$, or about 1.9953, roughly 0.24% different from exactly 2, and a voltage ratio of about 1.4125, roughly 0.12% different from exactly $\sqrt{2}$. Similarly, an increase of 6.000 dB corresponds to a power ratio of $10^{6/10} \approx 3.9811$, about 0.5% different from 4.
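These roundings are easy to verify numerically; a quick sketch:

```python
import math

# Exact power and voltage ratios for "3 dB" and "6 dB".
print(10 ** (3 / 10))   # 1.9952... (~0.24% below 2)
print(10 ** (3 / 20))   # 1.4125... (~0.12% below sqrt(2))
print(10 ** (6 / 10))   # 3.9810... (~0.5% below 4)

# Exact level changes for a factor of 2 in power and in voltage.
print(10 * math.log10(2))  # 3.0102...
print(20 * math.log10(2))  # 6.0205...
```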
Properties
The decibel is useful for representing large ratios and for simplifying representation of multiplicative effects, such as attenuation from multiple sources along a signal chain. Its application in systems with additive effects is less intuitive, such as in the combined sound pressure level of two machines operating together. Care is also necessary with decibels directly in fractions and with the units of multiplicative operations.
Reporting large ratios
The logarithmic scale nature of the decibel means that a very large range of ratios can be represented by a convenient number, in a manner similar to scientific notation. This allows one to clearly visualize huge changes of some quantity. See Bode plot and Semi-log plot. For example, 120 dB SPL may be clearer than "a trillion times more intense than the threshold of hearing".
Representation of multiplication operations
Level values in decibels can be added instead of multiplying the underlying power values, which means that the overall gain of a multi-component system, such as a series of amplifier stages, can be calculated by summing the gains in decibels of the individual components, rather than multiplying the amplification factors; that is, $\log(A \times B \times C) = \log(A) + \log(B) + \log(C)$. Practically, this means that, armed only with the knowledge that 1 dB is a power gain of approximately 26%, 3 dB is approximately a 2× power gain, and 10 dB is a 10× power gain, it is possible to determine the power ratio of a system from the gain in dB with only simple addition and multiplication. For example:
A system consists of 3 amplifiers in series, with gains (ratio of power out to in) of 10 dB, 8 dB, and 7 dB respectively, for a total gain of 25 dB. Broken into combinations of 10, 3, and 1 dB, this is: 25 dB = 10 dB + 10 dB + 3 dB + 1 dB + 1 dB. With an input of 1 watt, the output is approximately 1 W × 10 × 10 × 2 × 1.26 × 1.26 ≈ 317.5 W. Calculated precisely, the output is 1 W × $10^{25/10}$ ≈ 316.2 W. The approximate value has an error of only +0.4% with respect to the actual value, which is negligible given the precision of the values supplied and the accuracy of most measurement instrumentation.
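A short sketch of the same computation (variable names are illustrative):

```python
# Gains of the three amplifier stages, in dB.
stage_gains_db = [10.0, 8.0, 7.0]

# Decibels add along the chain ...
total_db = sum(stage_gains_db)        # 25.0 dB

# ... which corresponds to multiplying the linear power ratios.
linear_gain = 10 ** (total_db / 10)   # ~316.2
print(1.0 * linear_gain)              # output for a 1 W input, in watts
```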
However, according to its critics, the decibel creates confusion, obscures reasoning, is more related to the era of slide rules than to modern digital processing, and is cumbersome and difficult to interpret.
Quantities in decibels are not necessarily additive, thus being "of unacceptable form for use in dimensional analysis".
Thus, units require special care in decibel operations. Take, for example, carrier-to-noise-density ratio \(C/N_0\) (in hertz), involving carrier power \(C\) (in watts) and noise power spectral density \(N_0\) (in W/Hz). Expressed in decibels, this ratio would be a subtraction: \((C/N_0)_{\text{dB}} = C_{\text{dB}} - N_{0\,\text{dB}}\). However, the linear-scale units still simplify in the implied fraction, so that the results would be expressed in dB-Hz.
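The following sketch illustrates how the units track through such decibel arithmetic; the specific power values are assumed for the example, not taken from the text:

```python
# Assumed example values, chosen only for illustration:
c_dbw = -120.0           # carrier power C, in dBW
n0_dbw_per_hz = -204.0   # noise power spectral density N0, in dBW/Hz

# (C/N0) in decibels is a subtraction; the linear units W / (W/Hz)
# simplify to Hz, so the result carries the unit dB-Hz.
cn0_db_hz = c_dbw - n0_dbw_per_hz
print(cn0_db_hz)         # 84.0 dB-Hz
```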
Representation of addition operations
According to Mitschke, "The advantage of using a logarithmic measure is that in a transmission chain, there are many elements concatenated, and each has its own gain or attenuation. To obtain the total, addition of decibel values is much more convenient than multiplication of the individual factors." However, for the same reason that humans excel at additive operation over multiplication, decibels are awkward in inherently additive operations:

- "if two machines each individually produce a sound pressure level of, say, 90 dB at a certain point, then when both are operating together we should expect the combined sound pressure level to increase to 93 dB, but certainly not to 180 dB!"
- "suppose that the noise from a machine is measured (including the contribution of background noise) and found to be 87 dBA but when the machine is switched off the background noise alone is measured as 83 dBA. [...] the machine noise [level (alone)] may be obtained by 'subtracting' the 83 dBA background noise from the combined level of 87 dBA; i.e., 84.8 dBA."
- "in order to find a representative value of the sound level in a room a number of measurements are taken at different positions within the room, and an average value is calculated. [...] Compare the logarithmic and arithmetic averages of [...] 70 dB and 90 dB: logarithmic average = 87 dB; arithmetic average = 80 dB."
Addition on a logarithmic scale is called logarithmic addition, and can be defined by taking exponentials to convert to a linear scale, adding there, and then taking logarithms to return. For example, where operations on decibels are logarithmic addition/subtraction and logarithmic multiplication/division, while operations on the linear scale are the usual operations:

\(87\,\text{dBA} \ominus 83\,\text{dBA} = 10 \log_{10}\!\left(10^{8.7} - 10^{8.3}\right)\,\text{dBA} \approx 84.8\,\text{dBA}\)
The logarithmic mean of two levels is obtained from the logarithmic sum by subtracting \(10 \log_{10} 2\), since logarithmic division is linear subtraction.
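The logarithmic addition, subtraction, and mean described above can be sketched in a few lines of Python; the helper names are illustrative only:

```python
import math

def db_sum(*levels_db):
    """Logarithmic addition: convert to linear power, add, convert back."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

def db_subtract(total_db, part_db):
    """Logarithmic subtraction, e.g. removing a background-noise contribution."""
    return 10 * math.log10(10 ** (total_db / 10) - 10 ** (part_db / 10))

def db_mean(*levels_db):
    """Logarithmic mean: the logarithmic sum less 10*log10(n)."""
    return db_sum(*levels_db) - 10 * math.log10(len(levels_db))

print(db_sum(90, 90))       # ~93.0 dB: two 90 dB machines operating together
print(db_subtract(87, 83))  # ~84.8 dBA: machine noise without the background
print(db_mean(70, 90))      # ~87.0 dB: logarithmic average of the two readings
```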
Fractions
Attenuation constants, in topics such as optical fiber communication and radio propagation path loss, are often expressed as a fraction or ratio to distance of transmission. In this case, dB/m represents decibel per meter, dB/mi represents decibel per mile, for example. These quantities are to be manipulated obeying the rules of dimensional analysis, e.g., a 100-meter run with a 3.5 dB/km fiber yields a loss of 0.35 dB = 3.5 dB/km × 0.1 km.
Uses
Perception
Human perception of the intensity of sound and light more nearly approximates the logarithm of intensity than a linear relationship (see Weber–Fechner law), making the dB scale a useful measure.
Acoustics
The decibel is commonly used in acoustics as a unit of sound power level or sound pressure level. The reference pressure for sound in air is set at the typical threshold of perception of an average human, and there are common comparisons used to illustrate different levels of sound pressure. As sound pressure is a root-power quantity, the appropriate version of the unit definition is used:

\(L_p = 20 \log_{10}\!\left(\frac{p_{\text{rms}}}{p_{\text{ref}}}\right)\,\text{dB}\)

where \(p_{\text{rms}}\) is the root mean square of the measured sound pressure and \(p_{\text{ref}}\) is the standard reference sound pressure of 20 micropascals in air or 1 micropascal in water.
Use of the decibel in underwater acoustics leads to confusion, in part because of this difference in reference value.
Sound intensity is proportional to the square of sound pressure. Therefore, the sound intensity level can also be defined as:

\(L_p = 10 \log_{10}\!\left(\frac{p_{\text{rms}}^2}{p_{\text{ref}}^2}\right)\,\text{dB} = 20 \log_{10}\!\left(\frac{p_{\text{rms}}}{p_{\text{ref}}}\right)\,\text{dB}\)
The human ear has a large dynamic range in sound reception. The ratio of the sound intensity that causes permanent damage during short exposure to that of the quietest sound that the ear can hear is equal to or greater than 1 trillion (\(10^{12}\)). Such large measurement ranges are conveniently expressed in logarithmic scale: the base-10 logarithm of \(10^{12}\) is 12, which is expressed as a sound intensity level of 120 dB re 1 pW/m². The reference values of I and p in air have been chosen such that this corresponds approximately to a sound pressure level of 120 dB re 20 μPa.
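A small sketch of the sound-pressure-level formula, using the 20 μPa air reference given above (the function name is illustrative):

```python
import math

P_REF_AIR = 20e-6  # reference sound pressure in air: 20 μPa

def sound_pressure_level(p_rms, p_ref=P_REF_AIR):
    """SPL in dB; pressure is a root-power quantity, hence the factor of 20."""
    return 20 * math.log10(p_rms / p_ref)

print(sound_pressure_level(1.0))    # ~94.0 dB SPL for an RMS pressure of 1 Pa
print(sound_pressure_level(20e-6))  # 0.0 dB SPL at the threshold of hearing
```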
Since the human ear is not equally sensitive to all sound frequencies, the acoustic power spectrum is modified by frequency weighting (A-weighting being the most common standard) to get the weighted acoustic power before converting to a sound level or noise level in decibels.
Telephony
The decibel is used in telephony and audio. Similarly to the use in acoustics, a frequency weighted power is often used. For audio noise measurements in electrical circuits, the weightings are called psophometric weightings.
Electronics
In electronics, the decibel is often used to express power or amplitude ratios (as for gains) in preference to arithmetic ratios or percentages. One advantage is that the total decibel gain of a series of components (such as amplifiers and attenuators) can be calculated simply by summing the decibel gains of the individual components. Similarly, in telecommunications, decibels denote signal gain or loss from a transmitter to a receiver through some medium (free space, waveguide, coaxial cable, fiber optics, etc.) using a link budget.
The decibel unit can also be combined with a reference level, often indicated via a suffix, to create an absolute unit of electric power. For example, it can be combined with "m" for "milliwatt" to produce the "dBm". A power level of 0 dBm corresponds to one milliwatt, and 1 dBm is one decibel greater (about 1.259 mW).
In professional audio specifications, a popular unit is the dBu. This is relative to the root mean square voltage which delivers 1 mW (0 dBm) into a 600-ohm resistor, or \(\sqrt{0.6}\,\text{V} \approx 0.775\,\text{V}_{\text{RMS}}\). When used in a 600-ohm circuit (historically, the standard reference impedance in telephone circuits), dBu and dBm are identical.
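The dBm and dBu references described here can be illustrated with a short sketch; the helper functions are hypothetical, not part of any audio library:

```python
import math

def dbm_to_watts(level_dbm):
    """Absolute power from dBm (reference 1 mW)."""
    return 1e-3 * 10 ** (level_dbm / 10)

def dbu_to_volts(level_dbu):
    """RMS voltage from dBu; the reference is the voltage that puts 1 mW into 600 Ω."""
    v_ref = math.sqrt(600 * 1e-3)  # ≈ 0.7746 V
    return v_ref * 10 ** (level_dbu / 20)

print(dbm_to_watts(0))   # 0.001 W: 0 dBm is one milliwatt
print(dbm_to_watts(1))   # ~0.00126 W: one decibel greater
print(dbu_to_volts(0))   # ~0.7746 V
print(dbu_to_volts(4))   # ~1.228 V: the professional +4 dBu level
```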
Optics
In an optical link, if a known amount of optical power, in dBm (referenced to 1 mW), is launched into a fiber, and the losses, in dB (decibels), of each component (e.g., connectors, splices, and lengths of fiber) are known, the overall link loss may be quickly calculated by addition and subtraction of decibel quantities.
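A toy link-budget calculation along these lines might look as follows; the launch power, connector, and splice losses are assumed values for illustration, while the fiber attenuation reuses the 3.5 dB/km figure from the Fractions section above:

```python
# All values in decibel units, so the budget is pure addition/subtraction.
launch_power_dbm = 0.0          # assumed launch power: 0 dBm = 1 mW
fiber_loss_db = 3.5 * 0.1       # 100 m of 3.5 dB/km fiber: 0.35 dB
connector_loss_db = 2 * 0.5     # two connectors at an assumed 0.5 dB each
splice_loss_db = 0.1            # one splice at an assumed 0.1 dB

received_dbm = (launch_power_dbm - fiber_loss_db
                - connector_loss_db - splice_loss_db)
print(received_dbm)                      # -1.45 dBm
print(1e-3 * 10 ** (received_dbm / 10))  # ~0.000716 W, i.e. ~0.72 mW received
```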
In spectrometry and optics, the blocking unit used to measure optical density is equivalent to −1 B.
Video and digital imaging
In connection with video and digital image sensors, decibels generally represent ratios of video voltages or digitized light intensities, using 20 log of the ratio, even when the represented intensity (optical power) is directly proportional to the voltage generated by the sensor, not to its square, as in a CCD imager where response voltage is linear in intensity.
Thus, a camera signal-to-noise ratio or dynamic range quoted as 40 dB represents a ratio of 100:1 between optical signal intensity and optical-equivalent dark-noise intensity, not a 10,000:1 intensity (power) ratio as 40 dB might suggest.
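To make the 20 log convention concrete, the sketch below converts a camera specification of 40 dB both ways; the function names are illustrative:

```python
import math

def camera_db(intensity_ratio):
    """Video convention: 20 log of the ratio, since sensor voltage is linear in intensity."""
    return 20 * math.log10(intensity_ratio)

def intensity_ratio_from_camera_db(level_db):
    """Invert the 20 log convention to recover the intensity ratio."""
    return 10 ** (level_db / 20)

print(intensity_ratio_from_camera_db(40))  # 100.0: a 40 dB camera spec is a 100:1 ratio
print(10 ** (40 / 10))                     # 10000.0: what 40 dB would mean as a power ratio
```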
Sometimes the 20 log ratio definition is applied to electron counts or photon counts directly, which are proportional to sensor signal amplitude without the need to consider whether the voltage response to intensity is linear.
However, as mentioned above, the 10 log intensity convention prevails more generally in physical optics, including fiber optics, so the terminology can become murky between the conventions of digital photographic technology and physics. Most commonly, quantities called "dynamic range" or "signal-to-noise" (of the camera) would be specified in 20 log dB, but in related contexts (e.g. attenuation, gain, intensifier SNR, or rejection ratio) the term should be interpreted cautiously, as confusion of the two units can result in very large misunderstandings of the value.
Photographers typically use an alternative base-2 log unit, the stop, to describe light intensity ratios or dynamic range.
Suffixes and reference values
Suffixes are commonly attached to the basic dB unit in order to indicate the reference value by which the ratio is calculated. For example, dBm indicates power measurement relative to 1 milliwatt.
In cases where the unit value of the reference is stated, the decibel value is known as "absolute". If the unit value of the reference is not explicitly stated, as in the dB gain of an amplifier, then the decibel value is considered relative.
This form of attaching suffixes to dB is widespread in practice, albeit being against the rules promulgated by standards bodies (ISO and IEC), given the "unacceptability of attaching information to units" and the "unacceptability of mixing information with units". The IEC 60027-3 standard recommends the following format: \(L_x\) (re \(x_{\text{ref}}\)) or as \(L_{x/x_{\text{ref}}}\), where \(x\) is the quantity symbol and \(x_{\text{ref}}\) is the value of the reference quantity, e.g., \(L_E\) (re 1 μV/m) = 20 dB or \(L_{E/(1\,\mu\text{V/m})}\) = 20 dB for the electric field strength \(E\) relative to 1 μV/m reference value.
If the measurement result 20 dB is presented separately, it can be specified using the information in parentheses, which is then part of the surrounding text and not a part of the unit: 20 dB (re: 1 μV/m) or 20 dB (1 μV/m).
Outside of documents adhering to SI units, the practice is very common as illustrated by the following examples. There is no general rule, with various discipline-specific practices. Sometimes the suffix is a unit symbol ("W","K","m"), sometimes it is a transliteration of a unit symbol ("uV" instead of μV for microvolt), sometimes it is an acronym for the unit's name ("sm" for square meter, "m" for milliwatt), other times it is a mnemonic for the type of quantity being calculated ("i" for antenna gain with respect to an isotropic antenna, "λ" for anything normalized by the EM wavelength), or otherwise a general attribute or identifier about the nature of the quantity ("A" for A-weighted sound pressure level). The suffix is often connected with a hyphen, as in "dBHz", or with a space, as in "dB HL", or enclosed in parentheses, as in "dB(HL)", or with no intervening character, as in "dBm" (which is non-compliant with international standards).
List of suffixes
Voltage
Since the decibel is defined with respect to power, not amplitude, conversions of voltage ratios to decibels must square the amplitude, or use the factor of 20 instead of 10, as discussed above.
dBV dB(V_RMS) – voltage relative to 1 volt, regardless of impedance. This is used to measure microphone sensitivity, and also to specify the consumer line-level of −10 dBV, in order to reduce manufacturing costs relative to equipment using a +4 dBu line-level signal.
dBu or dBv RMS voltage relative to \(\sqrt{0.6}\,\text{V} \approx 0.7746\,\text{V}\) (i.e. the voltage that would dissipate 1 mW into a 600 Ω load). An RMS voltage of 1 V therefore corresponds to 2.218 dBu. Originally dBv, it was changed to dBu to avoid confusion with dBV. The v comes from volt, while u comes from the volume unit displayed on a VU meter. dBu can be used as a measure of voltage, regardless of impedance, but is derived from a 600 Ω load dissipating 0 dBm (1 mW). The reference voltage comes from the computation \(V = \sqrt{R \cdot P}\), where \(R\) is the resistance and \(P\) is the power.
In professional audio, equipment may be calibrated to indicate a "0" on the VU meters some finite time after a signal has been applied at an amplitude of +4 dBu. Consumer equipment typically uses a lower "nominal" signal level of −10 dBV. Therefore, many devices offer dual voltage operation (with different gain or "trim" settings) for interoperability reasons. A switch or adjustment that covers at least the range between +4 dBu and −10 dBV is common in professional equipment.
dBm0s Defined by Recommendation ITU-R V.574.
dBmV dB(mV_RMS) – root mean square voltage relative to 1 millivolt across 75 Ω. Widely used in cable television networks, where the nominal strength of a single TV signal at the receiver terminals is about 0 dBmV. Cable TV uses 75 Ω coaxial cable, so 0 dBmV corresponds to −78.75 dBW (−48.75 dBm) or approximately 13 nW.
dBμV or dBuV dB(μV_RMS) – voltage relative to 1 microvolt. Widely used in television and aerial amplifier specifications. 60 dBμV = 0 dBmV.
Acoustics
Probably the most common usage of "decibels" in reference to sound level is dB SPL, sound pressure level referenced to the nominal threshold of human hearing: the measures of pressure (a root-power quantity) use the factor of 20, and the measures of power (e.g. dB SIL and dB SWL) use the factor of 10.
dB SPL dB (sound pressure level) – for sound in air and other gases, relative to 20 micropascals (μPa), or 2 × 10⁻⁵ Pa, approximately the quietest sound a human can hear. For sound in water and other liquids, a reference pressure of 1 μPa is used. An RMS sound pressure of one pascal corresponds to a level of 94 dB SPL.
dB SIL dB sound intensity level – relative to 10⁻¹² W/m², which is roughly the threshold of human hearing in air.
dB SWL dB sound power level – relative to 10⁻¹² W.
dBA, dBB, and dBC These symbols are often used to denote the use of different weighting filters, used to approximate the human ear's response to sound, although the measurement is still in dB (SPL). These measurements usually refer to noise and its effects on humans and other animals, and they are widely used in industry while discussing noise control issues, regulations and environmental standards. Other variations that may be seen are dBA or dB(A). According to standards from the International Electrotechnical Commission (IEC 61672-2013) and the American National Standards Institute, ANSI S1.4, the preferred usage is to write \(L_A\) = x dB. Nevertheless, the units dBA and dB(A) are still commonly used as a shorthand for A-weighted measurements. Compare dBc, used in telecommunications.
dB HL dB hearing level is used in audiograms as a measure of hearing loss. The reference level varies with frequency according to a minimum audibility curve as defined in ANSI and other standards, such that the resulting audiogram shows deviation from what is regarded as 'normal' hearing.
dBQ sometimes used to denote weighted noise level, commonly using the ITU-R 468 noise weighting.
dBpp relative to the peak-to-peak sound pressure.
dBG G-weighted spectrum.
Audio electronics
| Physical sciences | Other | null |
8420 | https://en.wikipedia.org/wiki/Dodo | Dodo | The dodo (Raphus cucullatus) is an extinct flightless bird that was endemic to the island of Mauritius, which is east of Madagascar in the Indian Ocean. The dodo's closest relative was the also-extinct and flightless Rodrigues solitaire. The two formed the subtribe Raphina, a clade of extinct flightless birds that were a part of the family which includes pigeons and doves. The closest living relative of the dodo is the Nicobar pigeon. A white dodo was once thought to have existed on the nearby island of Réunion, but it is now believed that this assumption was merely confusion based on the also-extinct Réunion ibis and paintings of white dodos.
Subfossil remains show the dodo measured about 1 metre (3 ft 3 in) in height and may have weighed 10.6–17.5 kg (23–39 lb) in the wild. The dodo's appearance in life is evidenced only by drawings, paintings, and written accounts from the 17th century. Since these portraits vary considerably, and since only some of the illustrations are known to have been drawn from live specimens, its exact appearance in life remains unresolved, and little is known about its behaviour. It has been depicted with brownish-grey plumage, yellow feet, a tuft of tail feathers, a grey, naked head, and a black, yellow, and green beak. It used gizzard stones to help digest its food, which is thought to have included fruits, and its main habitat is believed to have been the woods in the drier coastal areas of Mauritius. One account states its clutch consisted of a single egg. It is presumed that the dodo became flightless because of the ready availability of abundant food sources and a relative absence of predators on Mauritius. Though the dodo has historically been portrayed as being fat and clumsy, it is now thought to have been well-adapted for its ecosystem.
The first recorded mention of the dodo was by Dutch sailors in 1598. In the following years, the bird was hunted by sailors and invasive species, while its habitat was being destroyed. The last widely accepted sighting of a dodo was in 1662. Its extinction was not immediately noticed, and some considered the bird to be a myth. In the 19th century, research was conducted on a small quantity of remains of four specimens that had been brought to Europe in the early 17th century. Among these is a dried head, the only soft tissue of the dodo that remains today. Since then, a large amount of subfossil material has been collected on Mauritius, mostly from the Mare aux Songes swamp. The extinction of the dodo less than a century after its discovery called attention to the previously unrecognised problem of human involvement in the disappearance of entire species. The dodo achieved widespread recognition from its role in the story of Alice's Adventures in Wonderland, and it has since become a fixture in popular culture, often as a symbol of extinction and obsolescence.
Taxonomy
The dodo was variously declared a small ostrich, a rail, an albatross, or a vulture, by early scientists. In 1842, Danish zoologist Johannes Theodor Reinhardt proposed that dodos were ground pigeons, based on studies of a dodo skull he had discovered in the collection of the Natural History Museum of Denmark. This view was met with ridicule, but was later supported by English naturalists Hugh Edwin Strickland and Alexander Gordon Melville in their 1848 monograph The Dodo and Its Kindred, which attempted to separate myth from reality. After dissecting the preserved head and foot of the specimen at the Oxford University Museum and comparing it with the few remains then available of the extinct Rodrigues solitaire (Pezophaps solitaria), they concluded that the two were closely related. Strickland stated that although not identical, these birds shared many distinguishing features of the leg bones, otherwise known only in pigeons.
Strickland and Melville established that the dodo was anatomically similar to pigeons in many features. They pointed to the very short keratinous portion of the beak, with its long, slender, naked basal part. Other pigeons also have bare skin around their eyes, almost reaching their beak, as in dodos. The forehead was high in relation to the beak, and the nostril was located low on the middle of the beak and surrounded by skin, a combination of features shared only with pigeons. The legs of the dodo were generally more similar to those of terrestrial pigeons than of other birds, both in their scales and in their skeletal features. Depictions of the large crop hinted at a relationship with pigeons, in which this feature is more developed than in other birds. Pigeons generally have very small clutches, and the dodo is said to have laid a single egg. Like pigeons, the dodo lacked the vomer and septum of the nostrils, and it shared details in the mandible, the zygomatic bone, the palate, and the hallux. The dodo differed from other pigeons mainly in the small size of the wings and the large size of the beak in proportion to the rest of the cranium.
Throughout the 19th century, several species were classified as congeneric with the dodo, including the Rodrigues solitaire and the Réunion solitaire, as Didus solitarius and Raphus solitarius, respectively (Didus and Raphus being names for the dodo genus used by different authors of the time). An atypical 17th-century description of a dodo and bones found on Rodrigues, now known to have belonged to the Rodrigues solitaire, led Abraham Dee Bartlett to name a new species, Didus nazarenus, in 1852. Based on solitaire remains, it is now a synonym of that species. Crude drawings of the red rail of Mauritius were also misinterpreted as dodo species: Didus broeckii and Didus herberti.
For many years the dodo and the Rodrigues solitaire were placed in a family of their own, the Raphidae (formerly Dididae), because their exact relationships with other pigeons were unresolved. Each was also placed in its own monotypic family (Raphidae and Pezophapidae, respectively), as it was thought that they had evolved their similarities independently. Osteological and DNA analysis has since led to the dissolution of the family Raphidae, and the dodo and solitaire are now placed in the columbid subfamily Raphinae and tribe Raphini, along with their closest relatives. In 2024, the new subtribe Raphina was created to include only the dodo and the solitaire.
Evolution
In 2002, American geneticist Beth Shapiro and colleagues analysed the DNA of the dodo for the first time. Comparison of mitochondrial cytochrome b and 12S rRNA sequences isolated from a tarsal of the Oxford specimen and a femur of a Rodrigues solitaire confirmed their close relationship and their placement within the Columbidae. The genetic evidence was interpreted as showing the Southeast Asian Nicobar pigeon (Caloenas nicobarica) to be their closest living relative, followed by the crowned pigeons (Goura) of New Guinea, and the superficially dodo-like tooth-billed pigeon (Didunculus strigirostris) from Samoa (its scientific name refers to its dodo-like beak). This clade consists of generally ground-dwelling island endemic pigeons. The following cladogram shows the dodo's closest relationships within the Columbidae, based on Shapiro and colleagues, 2002:
A similar cladogram was published in 2007, inverting the placement of Goura and Didunculus and including the pheasant pigeon (Otidiphaps nobilis) and the thick-billed ground pigeon (Trugon terrestris) at the base of the clade. The DNA used in these studies was obtained from the Oxford specimen, and since this material is degraded, and no usable DNA has been extracted from subfossil remains, these findings still need to be independently verified. Based on behavioural and morphological evidence, Jolyon C. Parish proposed that the dodo and Rodrigues solitaire should be placed in the subfamily Gourinae along with the Goura pigeons and others, in agreement with the genetic evidence. In 2014, DNA of the only known specimen of the recently extinct spotted green pigeon (Caloenas maculata) was analysed, and it was found to be a close relative of the Nicobar pigeon, and thus also the dodo and Rodrigues solitaire.
The 2002 study indicated that the ancestors of the dodo and the solitaire diverged around the Paleogene-Neogene boundary, about 23.03 million years ago. The Mascarene Islands (Mauritius, Réunion, and Rodrigues), are of volcanic origin and are less than 10 million years old. Therefore, the ancestors of both birds probably remained capable of flight for a considerable time after the separation of their lineage. The Nicobar and spotted green pigeon were placed at the base of a lineage leading to the Raphinae, which indicates the flightless raphines had ancestors that were able to fly, were semi-terrestrial, and inhabited islands. This in turn supports the hypothesis that the ancestors of those birds reached the Mascarene islands by island hopping from South Asia. The lack of mammalian herbivores competing for resources on these islands allowed the solitaire and the dodo to attain very large sizes and flightlessness. Despite its divergent skull morphology and adaptations for larger size, many features of its skeleton remained similar to those of smaller, flying pigeons. Another large, flightless pigeon, the Viti Levu giant pigeon (Natunaornis gigoura), was described in 2001 from subfossil material from Fiji. It was only slightly smaller than the dodo and the solitaire, and it too is thought to have been related to the crowned pigeons.
Etymology
One of the original names for the dodo was the Dutch "Walghvoghel", first used in the journal of Dutch Vice Admiral Wybrand van Warwijck, who visited Mauritius during the Second Dutch Expedition to Indonesia in 1598. Walghe means "tasteless", "insipid", or "sickly", and voghel means "bird". The name was translated by Jakob Friedlib into German as Walchstök or Walchvögel. The original Dutch report titled Waarachtige Beschryving was lost, but the English translation survived:
Another account from that voyage, perhaps the first to mention the dodo, states that the Portuguese referred to them as penguins. The meaning may not have been derived from penguin (the Portuguese referred to those birds as "fotilicaios" at the time), but from pinion, a reference to the small wings. The crew of the Dutch ship Gelderland referred to the bird as "Dronte" (meaning "swollen") in 1602, a name that is still used in some languages. This crew also called them "griff-eendt" and "kermisgans", in reference to fowl fattened for the Kermesse festival in Amsterdam, which was held the day after they anchored on Mauritius.
The etymology of the word dodo is unclear. Some ascribe it to the Dutch word dodoor for "sluggard", but it is more probably related to Dodaars, which means either "fat-arse" or "knot-arse", referring to the knot of feathers on the hind end. The first record of the word Dodaars is in Captain Willem Van West-Zanen's journal in 1602. The English writer Sir Thomas Herbert was the first to use the word dodo in print in his 1634 travelogue claiming it was referred to as such by the Portuguese, who had visited Mauritius in 1507. Another Englishman, Emmanuel Altham, had used the word in a 1628 letter in which he also claimed its origin was Portuguese. The name "dodar" was introduced into English at the same time as dodo, but was only used until the 18th century. As far as is known, the Portuguese never mentioned the bird. Nevertheless, some sources still state that the word dodo derives from the Portuguese word doudo (currently doido), meaning "fool" or "crazy". It has also been suggested that dodo was an onomatopoeic approximation of the bird's call, a two-note pigeon-like sound resembling "doo-doo".
The Latin name cucullatus ("hooded") was first used by Juan Eusebio Nieremberg in 1635 as Cygnus cucullatus, in reference to Carolus Clusius's 1605 depiction of a dodo. In his 18th-century classic work Systema Naturae, Carl Linnaeus used cucullatus as the specific name, but combined it with the genus name Struthio (ostrich). Mathurin Jacques Brisson coined the genus name Raphus (referring to the bustards) in 1760, resulting in the current name Raphus cucullatus. In 1766, Linnaeus coined the new binomial Didus ineptus (meaning "inept dodo"). This has become a synonym of the earlier name because of nomenclatural priority.
Description
As no complete dodo specimens exist, its external appearance, such as plumage and colouration, is hard to determine. Illustrations and written accounts of encounters with the dodo between its discovery and its extinction (1598–1662) are the primary evidence for its external appearance. According to most representations, the dodo had greyish or brownish plumage, with lighter primary feathers and a tuft of curly light feathers high on its rear end. The head was grey and naked, the beak green, black and yellow, and the legs were stout and yellowish, with black claws. A study of the few remaining feathers on the Oxford specimen head showed that they were pennaceous rather than plumaceous (downy) and most similar to those of other pigeons.
Subfossil remains and remnants of the birds that were brought to Europe in the 17th century show that dodos were very large birds, measuring about 1 metre (3 ft 3 in) in height. The bird was sexually dimorphic; males were larger and had proportionally longer beaks. Weight estimates have varied from study to study. In 1993, Bradley C. Livezey proposed that males would have weighed 21 kg (46 lb) and females 17 kg (37 lb). Also in 1993, Andrew C. Kitchener attributed a high contemporary weight estimate and the roundness of dodos depicted in Europe to these birds having been overfed in captivity; weights in the wild were estimated to have been in the range of 10.6–17.5 kg (23–39 lb), and fattened birds could have weighed 21.7–27.8 kg (48–61 lb). A 2011 estimate by Angst and colleagues gave an average weight as low as 10.2 kg (22 lb). This has also been questioned, and there is still controversy over weight estimates. A 2016 study estimated the weight at 10.6–14.3 kg (23–32 lb), based on CT scans of composite skeletons. It has also been suggested that the weight depended on the season, and that individuals were fat during cool seasons, but less so during hot.
Skeleton
The skull of the dodo differed much from those of other pigeons, especially in being more robust, the bill having a hooked tip, and in having a short cranium compared to the jaws. The upper bill was nearly twice as long as the cranium, which was short compared to those of its closest pigeon relatives. The openings of the bony nostrils were elongated along the length of the beak, and they contained no bony septum. The cranium (excluding the beak) was wider than it was long, and the frontal bone formed a dome-shape, with the highest point above the hind part of the eye sockets. The skull sloped downwards at the back. The eye sockets occupied much of the hind part of the skull. The sclerotic rings inside the eye were formed by eleven ossicles (small bones), similar to the amount in other pigeons. The mandible was slightly curved, and each half had a single fenestra (opening), as in other pigeons.
The dodo had about nineteen presynsacral vertebrae (those of the neck and thorax, including three fused into a notarium), sixteen synsacral vertebrae (those of the lumbar region and sacrum), six free tail (caudal) vertebrae, and a pygostyle. The neck had well-developed areas for muscle and ligament attachment, probably to support the heavy skull and beak. On each side, it had six ribs, four of which articulated with the sternum through sternal ribs. The sternum was large, but small in relation to the body compared to those of much smaller pigeons that are able to fly. The sternum was highly pneumatic, broad, and relatively thick in cross-section. The bones of the pectoral girdle, shoulder blades, and wing bones were reduced in size compared to those of flighted pigeons, and were more gracile compared to those of the Rodrigues solitaire, but none of the individual skeletal components had disappeared. The carpometacarpus of the dodo was more robust than that of the solitaire, however. The pelvis was wider than that of the solitaire and other relatives, yet was comparable to the proportions in some smaller, flighted pigeons. Most of the leg bones were more robust than those of extant pigeons and the solitaire, but the length proportions were little different.
Many of the skeletal features that distinguish the dodo and the Rodrigues solitaire, its closest relative, from other pigeons have been attributed to their flightlessness. The pelvic elements were thicker than those of flighted pigeons to support the higher weight, and the pectoral region and the small wings were paedomorphic, meaning that they were underdeveloped and retained juvenile features. The skull, trunk and pelvic limbs were peramorphic, meaning that they changed considerably with age. The dodo shared several other traits with the Rodrigues solitaire, such as features of the skull, pelvis, and sternum, as well as their large size. It differed in other aspects, such as being more robust and shorter than the solitaire, having a larger skull and beak, a rounded skull roof, and smaller orbits. The dodo's neck and legs were proportionally shorter, and it did not possess an equivalent to the knob present on the solitaire's wrists.
Contemporary descriptions
Most contemporary descriptions of the dodo are found in ship's logs and journals of the Dutch East India Company vessels that docked in Mauritius when the Dutch Empire ruled the island. These records were used as guides for future voyages. Few contemporary accounts are reliable, as many seem to be based on earlier accounts, and none were written by scientists. One of the earliest accounts, from van Warwijck's 1598 journal, describes the bird as follows:
One of the most detailed descriptions is by Herbert in A Relation of Some Yeares Travaille into Afrique and the Greater Asia from 1634:
Contemporary depictions
The travel journal of the Dutch ship Gelderland (1601–1603), rediscovered in the 1860s, contains the only known sketches of living or recently killed specimens drawn on Mauritius. They have been attributed to the professional artist Joris Joostensz Laerle, who also drew other now-extinct Mauritian birds, and to a second, less refined artist. Apart from these sketches, it is unknown how many of the twenty or so 17th-century illustrations of the dodos were drawn from life or from stuffed specimens, which affects their reliability. Since dodos are otherwise only known from limited physical remains and descriptions, contemporary artworks are important to reconstruct their appearance in life. While there has been an effort since the mid-19th century to list all historical illustrations of dodos, previously unknown depictions continue to be discovered occasionally.
The traditional image of the dodo is of a very fat and clumsy bird, but this view may be exaggerated. The general opinion of scientists today is that many old European depictions were based on overfed captive birds or crudely stuffed specimens. It has also been suggested that the images might show dodos with puffed feathers, as part of display behaviour. The Dutch painter Roelant Savery was the most prolific and influential illustrator of the dodo, having made at least twelve depictions, often showing it in the lower corners. A famous painting of his from 1626, now called Edwards's Dodo as it was once owned by the ornithologist George Edwards, has since become the standard image of a dodo. It is housed in the Natural History Museum, London. The image shows a particularly fat bird and is the source for many other dodo illustrations.
An Indian Mughal painting rediscovered in the Hermitage Museum, St. Petersburg, in 1955 shows a dodo along with native Indian birds. It depicts a slimmer, brownish bird, and its discoverer Aleksander Iwanow and British palaeontologist Julian Hume regarded it as one of the most accurate depictions of the living dodo; the surrounding birds are clearly identifiable and depicted with appropriate colouring. It is believed to be from the 17th century and has been attributed to the Mughal painter Ustad Mansur. The bird depicted probably lived in the menagerie of the Mughal Emperor Jahangir, located in Surat, where the English traveller Peter Mundy also claimed to have seen two dodos sometime between 1628 and 1633. In 2014, another Indian illustration of a dodo was reported, but it was found to be derivative of an 1836 German illustration.
All post-1638 depictions appear to be based on earlier images, around the time reports mentioning dodos became rarer. Differences in the depictions led ornithologists such as Anthonie Cornelis Oudemans and Masauji Hachisuka to speculate about sexual dimorphism, ontogenic traits, seasonal variation, and even the existence of different species, but these theories are not accepted today. Because details such as markings of the beak, the form of the tail feathers, and colouration vary from account to account, it is impossible to determine the exact morphology of these features, whether they signal age or sex, or if they even reflect reality. Hume argued that the nostrils of the living dodo would have been slits, as seen in the Gelderland, Cornelis Saftleven, Savery's Crocker Art Gallery, and Mansur images. According to this claim, the gaping nostrils often seen in paintings indicate that taxidermy specimens were used as models. Most depictions show that the wings were held in an extended position, unlike flighted pigeons, but similar to ratites such as the ostrich and kiwi.
Behaviour and ecology
Little is known of the behaviour of the dodo, as most contemporary descriptions are very brief. Based on weight estimates, it has been suggested the male could reach the age of 21, and the female 17. Studies of the cantilever strength of its leg bones indicate that it could run quite fast. The legs were robust and strong to support the bulk of the bird, and also made it agile and manoeuvrable in the dense, pre-human landscape. Though the wings were small, well-developed muscle scars on the bones show that they were not completely vestigial, and may have been used for display behaviour and balance; extant pigeons also use their wings for such purposes. Unlike the Rodrigues solitaire, there is no evidence that the dodo used its wings in intraspecific combat. Though some dodo bones have been found with healed fractures, it had weak pectoral muscles and more reduced wings in comparison. The dodo may instead have used its large, hooked beak in territorial disputes. Since Mauritius receives more rainfall and has less seasonal variation than Rodrigues, which would have affected the availability of resources on the island, the dodo would have less reason to evolve aggressive territorial behaviour. The Rodrigues solitaire was therefore probably the more aggressive of the two. In 2016, the first 3D endocast was made from the brain of the dodo; the brain-to-body-size ratio was similar to that of modern pigeons, indicating that dodos were probably equal in intelligence.
The preferred habitat of the dodo is unknown, but old descriptions suggest that it inhabited the woods on the drier coastal areas of south and west Mauritius. This view is supported by the fact that the Mare aux Songes swamp, where most dodo remains have been excavated, is close to the sea in south-eastern Mauritius. Such a limited distribution across the island could well have contributed to its extinction. A 1601 map from the Gelderland journal shows a small island off the coast of Mauritius where dodos were caught. Julian Hume has suggested this island was l'île aux Bénitiers in Tamarin Bay, on the west coast of Mauritius. Subfossil bones have also been found inside caves in highland areas, indicating that it once occurred on mountains. Work at the Mare aux Songes swamp has shown that its habitat was dominated by tambalacoque and Pandanus trees and endemic palms. The near-coastal placement and wetness of the Mare aux Songes led to a high diversity of plant species, whereas the surrounding areas were drier.
Many endemic species of Mauritius became extinct after the arrival of humans, so the ecosystem of the island is badly damaged and hard to reconstruct. Before humans arrived, Mauritius was entirely covered in forests, but very little remains of them today, because of deforestation. The surviving endemic fauna is still seriously threatened. The dodo lived alongside other recently extinct Mauritian birds such as the flightless red rail, the broad-billed parrot, the Mascarene grey parakeet, the Mauritius blue pigeon, the Mauritius scops owl, the Mascarene coot, the Mauritian shelduck, the Mauritian duck, and the Mauritius night heron. Extinct Mauritian reptiles include the saddle-backed Mauritius giant tortoise, the domed Mauritius giant tortoise, the Mauritian giant skink, and the Round Island burrowing boa. The small Mauritian flying fox and the snail Tropidophora carinata lived on Mauritius and Réunion, but vanished from both islands. Some plants, such as Casearia tinifolia and the palm orchid, have also become extinct.
Diet and feeding
A 1631 Dutch letter (long thought lost, but rediscovered in 2017) is the only account of the dodo's diet, and also mentions that it used its beak for defence. The document uses word-play to refer to the animals described, with dodos presumably being an allegory for wealthy mayors:
In addition to fallen fruits, the dodo probably subsisted on nuts, seeds, bulbs, and roots. It has also been suggested that the dodo might have eaten crabs and shellfish, like their relatives the crowned pigeons. Its feeding habits must have been versatile, since captive specimens were probably given a wide range of food on the long sea journeys. Oudemans suggested that as Mauritius has marked dry and wet seasons, the dodo probably fattened itself on ripe fruits at the end of the wet season to survive the dry season, when food was scarce; contemporary reports describe the bird's "greedy" appetite. The Mauritian ornithologist France Staub suggested in 1996 that they mainly fed on palm fruits, and he attempted to correlate the fat-cycle of the dodo with the fruiting regime of the palms.
Skeletal elements of the upper jaw appear to have been rhynchokinetic (movable in relation to each other), which must have affected its feeding behaviour. In extant birds, such as frugivorous (fruit-eating) pigeons, kinetic premaxillae help with consuming large food items. The beak also appears to have been able to withstand high force loads, which indicates a diet of hard food. Examination of the brain endocast found that though the brain was similar to that of other pigeons in most respects, the dodo had a comparatively large olfactory bulb. This gave the dodo a good sense of smell, which may have aided in locating fruit and small prey.
Several contemporary sources state that the dodo used gastroliths (gizzard stones) to aid digestion. The English writer Sir Hamon L'Estrange witnessed a live bird in London and described it as follows:
It is not known how the young were fed, but related pigeons provide crop milk. Contemporary depictions show a large crop, which was probably used to add space for food storage and to produce crop milk. It has been suggested that the maximum size attained by the dodo and the solitaire was limited by the amount of crop milk they could produce for their young during early growth.
In 1973, the tambalacoque, also known as the dodo tree, was thought to be dying out on Mauritius, to which it is endemic. There were supposedly only 13 specimens left, all estimated to be about 300 years old. Stanley Temple hypothesised that it depended on the dodo for its propagation, and that its seeds would germinate only after passing through the bird's digestive tract. He claimed that the tambalacoque was now nearly coextinct because of the disappearance of the dodo. Temple overlooked reports from the 1940s that found that tambalacoque seeds germinated, albeit very rarely, without being abraded during digestion. Others have contested his hypothesis and suggested that the decline of the tree was exaggerated or seeds were also distributed by other extinct animals such as Cylindraspis tortoises, fruit bats, or the broad-billed parrot. According to Wendy Strahm and Anthony Cheke, two experts in the ecology of the Mascarene Islands, the tree, while rare, has germinated since the demise of the dodo and numbers several hundred, not 13 as claimed by Temple, hence discrediting Temple's view that the tree's survival depended solely on the dodo.
The Brazilian ornithologist Carlos Yamashita suggested in 1997 that the broad-billed parrot may have depended on dodos and Cylindraspis tortoises to eat palm fruits and excrete their seeds, which became food for the parrots. Anodorhynchus macaws depended on now-extinct South American megafauna in the same way, but now rely on domesticated cattle for this service.
Reproduction and development
As it was flightless and terrestrial and there were no mammalian predators or other kinds of natural enemy on Mauritius, the dodo probably nested on the ground. The account by François Cauche from 1651 is the only description of the egg and the call:
Cauche's account is problematic, since it also mentions that the bird he was describing had three toes and no tongue, unlike dodos. This led some to believe that Cauche was describing a new species of dodo ("Didus nazarenus"). The description was most probably mingled with that of a cassowary, and Cauche's writings have other inconsistencies. A mention of a "young ostrich" taken on board a ship in 1617 is the only other reference to a possible juvenile dodo. An egg claimed to be that of a dodo is stored in the East London Museum in South Africa. It was donated by the South African museum official Marjorie Courtenay-Latimer, whose great aunt had received it from a captain who claimed to have found it in a swamp on Mauritius. In 2010, the curator of the museum proposed using genetic studies to determine its authenticity. It may instead be an aberrant ostrich egg.
Because of the possible single-egg clutch and the bird's large size, it has been proposed that the dodo was K-selected, meaning that it produced few altricial offspring, which required parental care until they matured. Some evidence, including the large size and the fact that tropical and frugivorous birds have slower growth rates, indicates that the bird may have had a protracted development period. The fact that no juvenile dodos have been found in the Mare aux Songes swamp may indicate that they produced few offspring, that they matured rapidly, that the breeding grounds were far away from the swamp, or that the risk of miring was seasonal.
A 2017 study examined the histology of thin-sectioned dodo bones, modern Mauritian birds, local ecology, and contemporary accounts, to recover information about the life history of the dodo. The study suggested that dodos bred around August, after having potentially fattened themselves, corresponding with the fat and thin cycles of many vertebrates of Mauritius. The chicks grew rapidly, reaching robust, almost adult, sizes, and sexual maturity before Austral summer or the cyclone season. Adult dodos which had just bred moulted after Austral summer, around March. The feathers of the wings and tail were replaced first, and the moulting would have completed at the end of July, in time for the next breeding season. Different stages of moulting may also account for inconsistencies in contemporary descriptions of dodo plumage.
Relationship with humans
Mauritius had previously been visited by Arab vessels in the Middle Ages and Portuguese ships between 1507 and 1513, but was settled by neither. No records of dodos by these visitors are known, although the Portuguese name for Mauritius, "Cerne (swan) Island", may have been a reference to dodos. The Dutch Empire acquired Mauritius in 1598, renaming it after Maurice of Nassau, and it was used for the provisioning of trade vessels of the Dutch East India Company henceforward. The earliest known accounts of the dodo were provided by Dutch travellers during the Second Dutch Expedition to Indonesia, led by admiral Jacob van Neck in 1598. They appear in reports published in 1601, which also contain the first published illustration of the bird. Since the first sailors to visit Mauritius had been at sea for a long time, their interest in these large birds was mainly culinary. The 1602 journal by Willem Van West-Zanen of the ship Bruin-Vis mentions that 24–25 dodos were hunted for food, which were so large that two could scarcely be consumed at mealtime, their remains being preserved by salting. An illustration made for the 1648 published version of this journal, showing the killing of dodos, a dugong, and possibly Mascarene grey parakeets, was captioned with a Dutch poem, here in Hugh Strickland's 1848 translation:
Some early travellers found dodo meat unsavoury, and preferred to eat parrots and pigeons; others described it as tough, but good. Some hunted dodos only for their gizzards, as this was considered the most delicious part of the bird. Dodos were easy to catch, but hunters had to be careful not to be bitten by their powerful beaks.
The appearance of the dodo and the red rail led Peter Mundy to speculate, 230 years before Charles Darwin's theory of evolution:
Dodos transported abroad
The dodo was found interesting enough that living specimens were sent to Europe and the East. The number of transported dodos that reached their destinations alive is uncertain, and it is unknown how they relate to contemporary depictions and the few non-fossil remains in European museums. Based on a combination of contemporary accounts, paintings, and specimens, Julian Hume has inferred that at least eleven transported dodos reached their destinations alive.
Hamon L'Estrange's description of a dodo that he saw in London in 1638 is the only account that specifically mentions a live specimen in Europe. In 1626 Adriaen van de Venne drew a dodo that he claimed to have seen in Amsterdam, but he did not mention if it was alive, and his depiction is reminiscent of Savery's Edwards's Dodo. Two live specimens were seen by Peter Mundy in Surat, India, between 1628 and 1634, one of which may have been the individual painted by Mansur around 1625. In 1628, Emmanuel Altham visited Mauritius and sent a letter to his brother in England:
Whether the dodo survived the journey is unknown, and the letter was destroyed by fire in the 19th century.
The earliest known picture of a dodo specimen in Europe is from a collection of paintings depicting animals in the royal menagerie of Emperor Rudolph II in Prague. This collection includes paintings of other Mauritian animals as well, including a red rail. The dodo, which may be a juvenile, seems to have been dried or embalmed, and had probably lived in the emperor's zoo for a while together with the other animals. That whole stuffed dodos were present in Europe indicates they had been brought alive and died there; it is unlikely that taxidermists were on board the visiting ships, and spirits were not yet used to preserve biological specimens. Most tropical specimens were preserved as dried heads and feet.
One dodo was reportedly sent as far as Nagasaki, Japan, in 1647, but it was long unknown whether it arrived. Contemporary documents first published in 2014 proved the story, and showed that it had arrived alive. It was meant as a gift, and, despite its rarity, was considered of equal value to a white deer and a bezoar stone. It is the last recorded live dodo in captivity.
Extinction
Like many animals that evolved in isolation from significant predators, the dodo was entirely fearless of humans. This fearlessness and its inability to fly made the dodo easy prey, but predation by humans was not the main cause of extinction, contrary to popular belief. Although some scattered reports describe mass killings of dodos for ships' provisions, archaeological investigations have found scant evidence of human predation. Bones of at least two dodos were found in caves at Baie du Cap that sheltered fugitive slaves and convicts in the 17th century, which would not have been easily accessible to dodos because of the high, broken terrain. The human population on Mauritius (an area of 1,860 km² (720 sq mi)) never exceeded 50 people in the 17th century, but they introduced other animals, including dogs, pigs, cats, rats, and crab-eating macaques, which plundered dodo nests and competed for the limited food resources. At the same time, humans destroyed the forest habitat of the dodos. The impact of the introduced animals on the dodo population, especially the pigs and macaques, is today considered more severe than that of hunting. Rats were perhaps not much of a threat to the nests, since dodos would have been used to dealing with local land crabs.
It has been suggested that the dodo may already have been rare or localised before the arrival of humans on Mauritius, since it would have been unlikely to become extinct so rapidly if it had occupied all the remote areas of the island. A 2005 expedition found subfossil remains of dodos and other animals killed by a flash flood. Such mass mortalities would have further jeopardised a species already in danger of becoming extinct. Yet the fact that the dodo survived hundreds of years of volcanic activity and climatic changes shows the bird was resilient within its ecosystem.
Some controversy surrounds the date of its extinction. The last widely accepted record of a dodo sighting is the 1662 report by shipwrecked mariner Volkert Evertsz of the Dutch ship Arnhem, who described birds caught on a small islet off Mauritius, now suggested to be Amber Island:
The dodos on this islet may not necessarily have been the last members of the species. The last claimed sighting of a dodo was reported in the hunting records of Isaac Johannes Lamotius in 1688. A 2003 statistical analysis of these records by the biologists David L. Roberts and Andrew R. Solow gave a new estimated extinction date of 1693, with a 95% confidence interval of 1688–1715. These authors also pointed out that because the last sighting before 1662 was in 1638, the dodo was probably already quite rare by the 1660s, and thus a disputed report from 1674 by an escaped slave could not be dismissed out of hand.
The British ornithologist Alfred Newton suggested in 1868 that the name of the dodo was transferred to the red rail after the former had gone extinct. Cheke also pointed out that some descriptions after 1662 use the names "Dodo" and "Dodaers" when referring to the red rail, indicating that they had been transferred to it. He therefore pointed to the 1662 description as the last credible observation. A 1668 account by English traveller John Marshall, who used the names "Dodo" and "Red Hen" interchangeably for the red rail, mentioned that the meat was "hard", which echoes the description of the meat in the 1681 account. Even the 1662 account has been questioned by the writer Errol Fuller, as the reaction to distress cries matches what was described for the red rail. Until this explanation was proposed, a description of "dodos" from 1681 was thought to be the last account, and that date still has proponents.
Cheke stated in 2014 that then recently accessible Dutch manuscripts indicate that no dodos were seen by settlers in 1664–1674. In 2020, Cheke and the British researcher Jolyon C. Parish suggested that all mentions of dodos after the mid-17th century instead referred to red rails, and that the dodo had disappeared due to predation by feral pigs during a hiatus in settlement of Mauritius (1658–1664). The dodo's extinction therefore was not realised at the time, since new settlers had not seen real dodos, but as they expected to see flightless birds, they referred to the red rail by that name instead. Since red rails probably had larger clutches than dodos and their eggs could be incubated faster, and their nests were perhaps concealed, they probably bred more efficiently, and were less vulnerable to pigs.
It is unlikely the issue will ever be resolved, unless late reports mentioning the name alongside a physical description are rediscovered. The IUCN Red List accepts Cheke's rationale for choosing the 1662 date, taking all subsequent reports to refer to red rails. In any case, the dodo was probably extinct by 1700, about a century after its discovery in 1598. The Dutch left Mauritius in 1710, but by then the dodo and most of the large terrestrial vertebrates there had become extinct.
Even though the rareness of the dodo was reported already in the 17th century, its extinction was not recognised until the 19th century. This was partly because, for religious reasons, extinction was not believed possible until later proved so by Georges Cuvier, and partly because many scientists doubted that the dodo had ever existed. It seemed altogether too strange a creature, and many believed it a myth. The bird was first used as an example of human-induced extinction in Penny Magazine in 1833, and has since been referred to as an "icon" of extinction.
Physical remains
17th-century specimens
The only extant remains of dodo specimens taken to Europe in the 17th century are a dried head and foot in the Oxford University Museum of Natural History, a foot once housed in the British Museum but now lost, a skull in the University of Copenhagen Zoological Museum, and an upper jaw in the National Museum, Prague. The last two were rediscovered and identified as dodo remains in the mid-19th century. Several stuffed dodos were also mentioned in old museum inventories, but none are known to have survived. Apart from these remains, a dried foot which belonged to Dutch professor Pieter Pauw was mentioned by Carolus Clusius in 1605. Its provenance is unknown, and it is now lost, but it may have been collected during the Van Neck voyage. Purported stuffed dodos seen in museums around the world today have in fact been made from feathers of other birds; many by British taxidermist Rowland Ward's company.
The only known soft tissue remains, the Oxford head (specimen OUM 11605) and foot, belonged to the last known stuffed dodo, which was first mentioned as part of the Tradescant collection in 1656 and was moved to the Ashmolean Museum in 1659. It has been suggested that this might be the remains of the bird that Hamon L'Estrange saw in London, the bird sent by Emanuel Altham, or a donation by Thomas Herbert. Since the remains do not show signs of having been mounted, the specimen might instead have been preserved as a study skin. In 2018, it was reported that scans of the Oxford dodo's head showed that its skin and bone contained lead shot, which was used to hunt birds in the 17th century. This indicates that the Oxford dodo was shot either before being transported to Britain, or some time after arriving. The circumstances of its killing are unknown, and the pellets are to be examined to identify where the lead was mined from.
Many sources state that the Ashmolean Museum burned the stuffed dodo around 1755 because of severe decay, saving only the head and leg. Statute 8 of the museum states "That as any particular grows old and perishing the keeper may remove it into one of the closets or other repository; and some other to be substituted." The deliberate destruction of the specimen is now believed to be a myth; it was removed from exhibition to preserve what remained of it. This remaining soft tissue has since degraded further; the head was dissected by Strickland and Melville, separating the skin from the skull in two halves. The foot is in a skeletal state, with only scraps of skin and tendons. Very few feathers remain on the head. It is probably a female, as the foot is 11% smaller and more gracile than the London foot, yet appears to be fully grown. The specimen was exhibited at the Oxford museum from at least the 1860s until 1998, after which it was mainly kept in storage to prevent damage. Casts of the head can today be found in many museums worldwide.
The dried London foot, first mentioned in 1665, and transferred to the British Museum in the 18th century, was displayed next to Savery's Edwards's Dodo painting until the 1840s, and it too was dissected by Strickland and Melville. It was not posed in a standing posture, which suggests that it was severed from a fresh specimen, not a mounted one. By 1896 it was mentioned as being without its integuments, and only the bones are believed to remain today, though its present whereabouts are unknown.
The Copenhagen skull (specimen ZMUC 90-806) is known to have been part of the collection of Bernardus Paludanus in Enkhuizen until 1651, when it was moved to the museum in Gottorf Castle, Schleswig. After the castle was occupied by Danish forces in 1702, the museum collection was assimilated into the Royal Danish collection. The skull was rediscovered by J. T. Reinhardt in 1840. Based on its history, it may be the oldest known surviving remains of a dodo brought to Europe in the 17th century. It is shorter than the Oxford skull, and may have belonged to a female. It was mummified, but the skin has perished.
The front part of a skull (specimen NMP P6V-004389) in the National Museum of Prague was found in 1850 among the remains of the Böhmisches Museum. Other elements supposedly belonging to this specimen have been listed in the literature, but it appears only the partial skull was ever present (a partial right limb in the museum appears to be from a Rodrigues solitaire). It may be what remains of one of the stuffed dodos known to have been at the menagerie of Emperor Rudolph II, possibly the specimen painted by Hoefnagel or Savery there.
Subfossil specimens
Until 1860, the only known dodo remains were the four incomplete 17th-century specimens. Philip Burnard Ayres found the first subfossil bones in 1860, which were sent to Richard Owen at the British Museum, who did not publish the findings. In 1863, Owen requested the Mauritian Bishop Vincent Ryan to spread word that he should be informed if any dodo bones were found. In 1865, George Clark, the government schoolmaster at Mahébourg, finally found an abundance of subfossil dodo bones in the swamp of Mare aux Songes in southern Mauritius, after a 30-year search inspired by Strickland and Melville's monograph. In 1866, Clark explained his procedure to The Ibis, an ornithology journal: he had sent his coolies to wade through the centre of the swamp, feeling for bones with their feet. At first they found few bones, until they cut away herbage that covered the deepest part of the swamp, where they found many fossils. Harry Pasley Higginson, a railway engineer from Yorkshire, reported discovering the Mare aux Songes bones at the same time as Clark, and there is some dispute over who found them first. Higginson sent boxes of these bones to Liverpool, Leeds and York museums. The swamp yielded the remains of over 300 dodos, but very few skull and wing bones, possibly because the upper bodies were washed away or scavenged while the lower body was trapped. The situation is similar to many finds of moa remains in New Zealand marshes. Most dodo remains from the Mare aux Songes have a medium to dark brown colouration.
Clark's reports about the finds rekindled interest in the bird. Sir Richard Owen and Alfred Newton both wanted to be first to describe the post-cranial anatomy of the dodo, and Owen bought a shipment of dodo bones originally meant for Newton, which led to rivalry between the two. Owen described the bones in Memoir on the Dodo in October 1866, but erroneously based his reconstruction on the Edwards's Dodo painting by Savery, making it too squat and obese. In 1869 he received more bones and corrected its stance, making it more upright. Newton moved his focus to the Réunion solitaire instead. The remaining bones not sold to Owen or Newton were auctioned off or donated to museums. In 1889, Théodor Sauzier was commissioned to explore the "historical souvenirs" of Mauritius and find more dodo remains in the Mare aux Songes. He was successful, and also found remains of other extinct species.
In 2005, after a hundred years of neglect, a part of the Mare aux Songes swamp was excavated by an international team of researchers (International Dodo Research Project). To prevent malaria, the British had covered the swamp with hard core during their rule over Mauritius, which had to be removed. Many remains were found, including bones of at least 17 dodos in various stages of maturity (though no juveniles), and several bones obviously from the skeleton of one individual bird, which have been preserved in their natural position. These findings were made public in December 2005 in the Naturalis museum in Leiden. 63% of the fossils found in the swamp belonged to turtles of the extinct genus Cylindraspis, and 7.1% belonged to dodos; these remains had been deposited over the course of several centuries, around 4,000 years ago. Subsequent excavations suggested that dodos and other animals became mired in the Mare aux Songes while trying to reach water during a long period of severe drought about 4,200 years ago. Furthermore, cyanobacteria thrived in the conditions created by the excrement of the animals gathered around the swamp, where they died of intoxication, dehydration, trampling, and miring. Though many small skeletal elements were found during the recent excavations of the swamp, few were found during the 19th century, probably owing to the employment of less refined methods when collecting.
Louis Étienne Thirioux, an amateur naturalist at Port Louis, also found many dodo remains around 1900 from several locations. They included the first articulated specimen, which is the first subfossil dodo skeleton found outside the Mare aux Songes, and the only remains of a juvenile specimen, a now lost tarsometatarsus. The former specimen was found in 1904 in a cave near Le Pouce mountain, and is the only known complete skeleton of an individual dodo. Thirioux donated the specimen to the Museum Desjardins (now Natural History Museum at Mauritius Institute). Thirioux's heirs sold a second mounted composite skeleton (composed of at least two skeletons, with a mainly reconstructed skull) to the Durban Museum of Natural Science in South Africa in 1918. Together, these two skeletons represent the most completely known dodo remains, including bone elements previously unrecorded (such as knee-caps and wing bones). Though some contemporary writers noted the importance of Thirioux's specimens, they were not scientifically studied, and were largely forgotten until 2011, when sought out by a group of researchers. The mounted skeletons were laser scanned, from which 3-D models were reconstructed, which became the basis of a 2016 monograph about the osteology of the dodo. In 2006, explorers discovered a complete skeleton of a dodo in a lava cave in Mauritius. This was only the second associated skeleton of an individual specimen ever found, and the only one in recent times.
Worldwide, 26 museums have significant holdings of dodo material, almost all found in the Mare aux Songes. The Natural History Museum, American Museum of Natural History, Cambridge University Museum of Zoology, the Senckenberg Museum, and others have almost complete skeletons, assembled from the dissociated subfossil remains of several individuals. In 2011, a wooden box containing dodo bones from the Edwardian era was rediscovered at the Grant Museum at University College London during preparations for a move. They had been stored with crocodile bones until then.
White dodo
The supposed "white dodo" (or "solitaire") of Réunion is now considered an erroneous conjecture based on contemporary reports of the Réunion ibis and 17th-century paintings of white, dodo-like birds by Pieter Withoos and Pieter Holsteyn that surfaced in the 19th century. The confusion began when Willem Ysbrandtszoon Bontekoe, who visited Réunion around 1619, mentioned fat, flightless birds that he referred to as "Dod-eersen" in his journal, though without mentioning their colouration. When the journal was published in 1646, it was accompanied by an engraving of a dodo from Savery's "Crocker Art Gallery sketch". A white, stocky, and flightless bird was first mentioned as part of the Réunion fauna by Chief Officer J. Tatton in 1625. Sporadic mentions were subsequently made by Sieur Dubois and other contemporary writers.
Baron Edmond de Sélys Longchamps coined the name Raphus solitarius for these birds in 1848, as he believed the accounts referred to a species of dodo. When 17th-century paintings of white dodos were discovered by 19th-century naturalists, it was assumed they depicted these birds. Oudemans suggested that the discrepancy between the paintings and the old descriptions was that the paintings showed females, and that the species was therefore sexually dimorphic. Some authors also believed the birds described were of a species similar to the Rodrigues solitaire, as it was referred to by the same name, or even that there were white species of both dodo and solitaire on the island.
The Pieter Withoos painting, which was discovered first, appears to be based on an earlier painting by Pieter Holsteyn, three versions of which are known to have existed. According to Hume, Cheke, and Valledor de Lozoya, it appears that all depictions of white dodos were based on Roelant Savery's painting Landscape with Orpheus and the animals, or on copies of it. The painting has generally been dated to 1611, though a post-1614, or even post-1626, date has also been proposed. The painting shows a whitish specimen and was apparently based on a stuffed specimen then in Prague; a walghvogel described as having a "dirty off-white colouring" was mentioned in an inventory of specimens in the Prague collection of the Holy Roman Emperor Rudolf II, to whom Savery was contracted at the time (1607–1611). Savery's several later images all show greyish birds, possibly because he had by then seen another specimen. Cheke and Hume believe the painted specimen was white, owing to albinism. Valledor de Lozoya has instead suggested that the light plumage was a juvenile trait, a result of bleaching of old taxidermy specimens, or simply artistic license.
In 1987, scientists described fossils of a recently extinct species of ibis from Réunion with a relatively short beak, Borbonibis latipes, before a connection to the solitaire reports had been made. Cheke suggested to one of the authors, François Moutou, that the fossils may have been of the Réunion solitaire, and this suggestion was published in 1995. The ibis was reassigned to the genus Threskiornis, now combined with the specific epithet solitarius from the binomial Raphus solitarius. Birds of this genus are also white and black with slender beaks, fitting the old descriptions of the Réunion solitaire. No fossil remains of dodo-like birds have ever been found on the island.
Cultural significance
The dodo's significance as one of the best-known extinct animals and its singular appearance led to its use in literature and popular culture as a symbol of an outdated concept or object, as in the expression "dead as a dodo," which has come to mean unquestionably dead or obsolete. Similarly, the phrase "to go the way of the dodo" means to become extinct or obsolete, to fall out of common usage or practice, or to become a thing of the past. "Dodo" is also a slang term for a stupid, dull-witted person, as it was said to be stupid and easily caught.
The dodo appears frequently in works of popular fiction, and even before its extinction, it was featured in European literature, as a symbol for exotic lands, and of gluttony, due to its apparent fatness. In 1865, the same year that George Clark started to publish reports about excavated dodo fossils, the newly vindicated bird was featured as a character in Lewis Carroll's Alice's Adventures in Wonderland. It is thought that he included the dodo because he identified with it and had adopted the name as a nickname for himself because of his stammer, which made him accidentally introduce himself as "Do-do-dodgson", his legal surname. Carroll and the girl who served as inspiration for Alice, Alice Liddell, had enjoyed visiting the Oxford museum to see the dodo remains there. The book's popularity made the dodo a well-known icon of extinction. Popular depictions of the dodo often became more exaggerated and cartoonish following its Alice in Wonderland fame, which was in line with the inaccurate belief that it was clumsy, tragic, and destined for extinction.
The dodo is used as a mascot for many kinds of products, especially in Mauritius. It appears as a supporter on the coat of arms of Mauritius, on Mauritius coins, is used as a watermark on all Mauritian rupee banknotes, and features as the background of the Mauritian immigration form. A smiling dodo is the symbol of the Brasseries de Bourbon, a popular brewer on Réunion, whose emblem displays the white species once thought to have lived there.
The dodo is used to promote the protection of endangered species by environmental organisations, such as the Durrell Wildlife Conservation Trust and the Durrell Wildlife Park. The Center for Biological Diversity gives an annual 'Rubber Dodo Award', to "those who have done the most to destroy wild places, species and biological diversity". In 2011, the nephiline spider Nephilengys dodo, which inhabits the same woods as the dodo once did, was named after the bird to raise awareness of the urgent need for protection of the Mauritius biota. Two species of ant from Mauritius have been named after the dodo: Pseudolasius dodo in 1946 and Pheidole dodo in 2013. A species of isopod from a coral reef off Réunion was named Hansenium dodo in 1991.
The name dodo has been used by scientists naming genetic elements, honouring the dodo's flightless nature. A fruitfly gene within a region of a chromosome required for flying ability was named "dodo". In addition, a defective transposable element family from Phytophthora infestans was named DodoPi as it contained mutations that eliminated the element's ability to jump to new locations in a chromosome.
In 2009, a previously unpublished 17th-century Dutch illustration of a dodo went on sale at Christie's and was expected to sell for £6,000. It is unknown whether the illustration was based on a specimen or on a previous image, and the artist is unidentified. It sold for £44,450. Parish suggested it depicts a stuffed specimen, as the legs look dried.
The poet Hilaire Belloc included a poem about the dodo in his Bad Child's Book of Beasts from 1896.
Density

Density (volumetric mass density or specific mass) is a substance's mass per unit of volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume:

ρ = m / V,

where ρ is the density, m is the mass, and V is the volume. In some cases (for instance, in the United States oil and gas industry), density is loosely defined as weight per unit volume, although this is scientifically inaccurate; that quantity is more specifically called specific weight.
For a pure substance the density has the same numerical value as its mass concentration.
Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium is the densest known element at standard conditions for temperature and pressure.
To simplify comparisons of density across different systems of units, it is sometimes replaced by the dimensionless quantity "relative density" or "specific gravity", i.e. the ratio of the density of the material to that of a standard material, usually water. Thus a relative density of less than one means that the substance floats in water.
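A minimal Python sketch of this comparison (the function name and reference value are illustrative, not from any standard library):

```python
def relative_density(rho_substance, rho_reference=1000.0):
    """Return the specific gravity of a substance.

    Densities are in kg/m^3; the default reference is water near
    4 degrees C (about 1000 kg/m^3).
    """
    return rho_substance / rho_reference

# Ice (~917 kg/m^3) floats; iron (~7874 kg/m^3) sinks.
for name, rho in [("ice", 917.0), ("iron", 7874.0)]:
    sg = relative_density(rho)
    print(f"{name}: specific gravity {sg:.3f} -> {'floats' if sg < 1 else 'sinks'}")
```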
The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance (with a few exceptions) decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid, which causes it to rise relative to denser unheated material.
The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density; rather it increases its mass.
Other conceptually comparable quantities or ratios include specific density, relative density (specific gravity), and specific weight.
History
Density, floating, and sinking
The understanding that different materials have different densities, and that there is a relationship between density, floating, and sinking, must date to prehistoric times. Much later it was put in writing; Aristotle, for example, wrote on the subject.
Volume vs. density; volume of an irregular shape
In a well-known but probably apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy. Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed from the rise of the water upon entering that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!" (). As a result, the term eureka entered common parlance and is used today to indicate a moment of enlightenment.
The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place. Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time.
Nevertheless, in 1586, Galileo Galilei, in one of his first experiments, made a possible reconstruction of how the experiment could have been performed with ancient Greek resources.
Units
From the defining equation for density (ρ = m/V), mass density has any unit that is mass divided by volume. As there are many units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are probably the most commonly used units for density. One g/cm3 is equal to 1000 kg/m3. One cubic centimetre (abbreviation cc) is equal to one millilitre. In industry, other larger or smaller units of mass and/or volume are often more practical, and US customary units may be used. See below for a list of some of the most common units of density; a short conversion sketch follows the unit lists.
The litre and tonne are not part of the SI, but are acceptable for use with it, leading to the following units:
kilogram per litre (kg/L)
gram per millilitre (g/mL)
tonne per cubic metre (t/m3)
Densities using the following metric units all have exactly the same numerical value, one thousandth of the value in kg/m3. Liquid water has a density of about 1 kg/dm3, making any of these SI units numerically convenient to use as most solids and liquids have densities between 0.1 and 20 kg/dm3.
kilogram per cubic decimetre (kg/dm3)
gram per cubic centimetre (g/cm3)
1 g/cm3 = 1000 kg/m3
megagram (metric ton) per cubic metre (Mg/m3)
In US customary units density can be stated in:
Avoirdupois ounce per cubic inch (1 g/cm3 ≈ 0.578036672 oz/cu in)
Avoirdupois ounce per fluid ounce (1 g/cm3 ≈ 1.04317556 oz/US fl oz = 1.04317556 lb/US fl pint)
Avoirdupois pound per cubic inch (1 g/cm3 ≈ 0.036127292 lb/cu in)
pound per cubic foot (1 g/cm3 ≈ 62.427961 lb/cu ft)
pound per cubic yard (1 g/cm3 ≈ 1685.5549 lb/cu yd)
pound per US liquid gallon (1 g/cm3 ≈ 8.34540445 lb/US gal)
pound per US bushel (1 g/cm3 ≈ 77.6888513 lb/bu)
slug per cubic foot
Imperial units differing from the above (as the Imperial gallon and bushel differ from the US units) in practice are rarely used, though found in older documents. The Imperial gallon was based on the concept that an Imperial fluid ounce of water would have a mass of one Avoirdupois ounce, and indeed 1 g/cm3 ≈ 1.00224129 ounces per Imperial fluid ounce = 10.0224129 pounds per Imperial gallon. The density of precious metals could conceivably be based on Troy ounces and pounds, a possible cause of confusion.
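As a cross-check on the factors quoted above, here is the conversion sketch promised earlier: a minimal Python snippet that derives a few of them from the exact unit definitions (1 lb = 453.59237 g, 1 in = 2.54 cm). It is an illustration, not a units library:

```python
# Density unit conversions built from exact unit definitions.
G_PER_LB = 453.59237        # grams per avoirdupois pound (exact)
CM3_PER_CU_IN = 2.54 ** 3   # cubic centimetres per cubic inch (exact)
CM3_PER_CU_FT = (12 * 2.54) ** 3

def g_per_cm3_to(rho_g_cm3):
    """Convert a density in g/cm^3 to a few other common units."""
    return {
        "kg/m3": rho_g_cm3 * 1000.0,
        "lb/cu in": rho_g_cm3 * CM3_PER_CU_IN / G_PER_LB,
        "lb/cu ft": rho_g_cm3 * CM3_PER_CU_FT / G_PER_LB,
    }

print(g_per_cm3_to(1.0))
# {'kg/m3': 1000.0, 'lb/cu in': 0.036127..., 'lb/cu ft': 62.42796...},
# matching the factors listed above.
```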
Knowing the volume of the unit cell of a crystalline material and its formula weight (in daltons), the density can be calculated. One dalton per cubic ångström is equal to a density of 1.660 539 066 60 g/cm3.
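As an illustration, a minimal Python sketch of this calculation for rock salt, using textbook values (cubic cell with edge about 5.64 Å, Z = 4 formula units, formula weight about 58.44 Da); the specific numbers are mine, chosen for the example, not taken from the text above:

```python
DA_PER_A3_TO_G_PER_CM3 = 1.66053906660  # 1 dalton per cubic angstrom in g/cm^3

def crystal_density(z, formula_weight_da, cell_volume_a3):
    """Density in g/cm^3 from unit-cell contents.

    z: formula units per unit cell
    formula_weight_da: formula weight in daltons
    cell_volume_a3: unit-cell volume in cubic angstroms
    """
    return z * formula_weight_da / cell_volume_a3 * DA_PER_A3_TO_G_PER_CM3

# NaCl: cubic cell with a = 5.64 angstrom, Z = 4, M = 58.44 Da
print(crystal_density(4, 58.44, 5.64 ** 3))  # ~2.16 g/cm^3, close to the measured 2.17
```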
Measurement
A number of techniques as well as standards exist for the measurement of density of materials. Such techniques include the use of a hydrometer (a buoyancy method for liquids), hydrostatic balance (a buoyancy method for liquids and solids), immersed body method (a buoyancy method for liquids), pycnometer (liquids and solids), air comparison pycnometer (solids), oscillating densitometer (liquids), as well as pour and tap (solids). However, each individual method or technique measures different types of density (e.g. bulk density, skeletal density, etc.), and therefore it is necessary to have an understanding of the type of density being measured as well as the type of material in question.
Homogeneous materials
The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance; the volume may be measured directly (from the geometry of the object) or by the displacement of a fluid. To determine the density of a liquid or a gas, a hydrometer (for liquids), a dasymeter (for gases), or a Coriolis flow meter may be used. Similarly, hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object.
Heterogeneous materials
If the body is not homogeneous, then its density varies between different regions of the object. In that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume the density of an inhomogeneous object at a point becomes ρ(r) = dm/dV, where dV is an elementary volume at position r. The mass of the body then can be expressed as

m = ∫ ρ(r) dV,

where the integration is over the volume of the body.
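A numerical sketch of this integral in Python, for a hypothetical spherically symmetric body whose density falls off linearly from the centre (the profile and numbers are invented for illustration):

```python
import math

def mass_of_radial_body(density_at_r, radius, steps=100000):
    """Integrate m = integral of rho(r) dV for a spherically symmetric
    body, using dV = 4*pi*r^2 dr and a simple midpoint rule."""
    dr = radius / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr
        total += density_at_r(r) * 4.0 * math.pi * r * r * dr
    return total

# Hypothetical profile: rho(r) = 5000 * (1 - r/R) kg/m^3 for a 1 m sphere.
R = 1.0
m = mass_of_radial_body(lambda r: 5000.0 * (1.0 - r / R), R)
print(m, math.pi * 5000.0 * R**3 / 3.0)  # numeric result vs. exact pi*rho_c*R^3/3
```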
Non-compact materials
In practice, bulk materials such as sugar, sand, or snow contain voids. Many materials exist in nature as flakes, pellets, or granules.
Voids are regions which contain something other than the considered material. Commonly the void is air, but it could also be vacuum, liquid, solid, or a different gas or gaseous mixture.
The bulk volume of a material (inclusive of the void space fraction) is often obtained by a simple measurement (e.g. with a calibrated measuring cup) or geometrically from known dimensions.
Mass divided by bulk volume determines bulk density. This is not the same thing as the material volumetric mass density.
To determine the material volumetric mass density, one must first discount the volume of the void fraction. Sometimes this can be determined by geometrical reasoning. For the close-packing of equal spheres the non-void fraction can be at most about 74%. It can also be determined empirically. Some bulk materials, however, such as sand, have a variable void fraction which depends on how the material is agitated or poured. It might be loose or compact, with more or less air space depending on handling.
In practice, the void fraction is not necessarily air, or even gaseous. In the case of sand, it could be water, which can be advantageous for measurement as the void fraction for sand saturated in water—once any air bubbles are thoroughly driven out—is potentially more consistent than dry sand measured with an air void.
In the case of non-compact materials, one must also take care in determining the mass of the material sample. If the material is under pressure (commonly ambient air pressure at the earth's surface) the determination of mass from a measured sample weight might need to account for buoyancy effects due to the density of the void constituent, depending on how the measurement was conducted. In the case of dry sand, sand is so much denser than air that the buoyancy effect is commonly neglected (less than one part in one thousand).
Mass change upon displacing one void material with another while maintaining constant volume can be used to estimate the void fraction, if the difference in density of the two voids materials is reliably known.
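A minimal Python sketch of this estimate, assuming air in the voids of a sand sample is displaced by water at constant bulk volume (all sample numbers are invented for illustration):

```python
RHO_WATER = 998.0  # kg/m^3 near room temperature
RHO_AIR = 1.2      # kg/m^3; often negligible next to water

def void_fraction(mass_dry, mass_saturated, bulk_volume):
    """Estimate the void fraction from the mass gained when water
    displaces air in the voids at constant bulk volume.

    Masses in kg, bulk volume in m^3.
    """
    void_volume = (mass_saturated - mass_dry) / (RHO_WATER - RHO_AIR)
    return void_volume / bulk_volume

# Invented example: 1 L of dry sand weighs 1.50 kg; saturated, 1.90 kg.
phi = void_fraction(1.50, 1.90, 1.0e-3)
print(f"void fraction ~ {phi:.2f}")  # ~0.40, plausible for loosely poured sand
```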
Changes of density
In general, density can be changed by changing either the pressure or the temperature. Increasing the pressure always increases the density of a material. Increasing the temperature generally decreases the density, but there are notable exceptions to this generalization. For example, the density of water increases between its melting point at 0 °C and 4 °C; similar behavior is observed in silicon at low temperatures.
The effect of pressure and temperature on the densities of liquids and solids is small. The compressibility for a typical liquid or solid is 10−6 bar−1 (1 bar = 0.1 MPa) and a typical thermal expansivity is 10−5 K−1. This roughly translates into needing around ten thousand times atmospheric pressure to reduce the volume of a substance by one percent. (Although the pressures needed may be around a thousand times smaller for sandy soil and some clays.) A one percent expansion of volume typically requires a temperature increase on the order of thousands of degrees Celsius.
In contrast, the density of gases is strongly affected by pressure. The density of an ideal gas is

ρ = MP / (RT),

where M is the molar mass, P is the pressure, R is the universal gas constant, and T is the absolute temperature. This means that the density of an ideal gas can be doubled by doubling the pressure, or by halving the absolute temperature.
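A small Python sketch of the ideal-gas relation (the molar mass of dry air is taken as the standard value of about 0.029 kg/mol):

```python
R = 8.31446  # J/(mol*K), universal gas constant

def ideal_gas_density(molar_mass, pressure, temperature):
    """rho = M*P/(R*T); molar mass in kg/mol, pressure in Pa, T in K."""
    return molar_mass * pressure / (R * temperature)

# Dry air (M ~ 0.028965 kg/mol) at 101325 Pa and 20 degrees C:
print(ideal_gas_density(0.028965, 101325.0, 293.15))  # ~1.20 kg/m^3

# Halving the absolute temperature doubles the density:
print(ideal_gas_density(0.028965, 101325.0, 293.15 / 2))  # ~2.41 kg/m^3
```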
In the case of volumic thermal expansion at constant pressure and small intervals of temperature, the temperature dependence of density is

ρ(T) = ρ(T0) / (1 + α·(T − T0)),

where ρ(T0) is the density at a reference temperature T0, and α is the thermal expansion coefficient of the material at temperatures close to T0.
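A minimal Python sketch of this temperature correction, using textbook values for aluminium as an example:

```python
def density_at_temperature(rho_ref, alpha, t_ref, t):
    """rho(T) = rho(T0) / (1 + alpha*(T - T0)) for small temperature
    intervals at constant pressure; alpha is the volumetric thermal
    expansion coefficient in 1/K."""
    return rho_ref / (1.0 + alpha * (t - t_ref))

# Aluminium: rho ~ 2700 kg/m^3 at 20 C, volumetric alpha ~ 69e-6 /K.
print(density_at_temperature(2700.0, 69e-6, 20.0, 120.0))  # ~2681 kg/m^3
```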
Density of solutions
The density of a solution is the sum of mass (massic) concentrations of the components of that solution.
Mass (massic) concentration of each given component, ρi, in a solution sums to the density of the solution:

ρ = Σi ρi.

Expressed as a function of the densities of pure components of the mixture and their volume participation, it allows the determination of excess molar volumes:

ρ = Σi ρi (Vi/V),

provided that there is no interaction between the components. Knowing the relation between excess volumes and activity coefficients of the components, one can determine the activity coefficients:

ViE = RT (∂ ln γi/∂P),

where ViE is the excess partial molar volume of component i and γi is its activity coefficient.
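As a worked example of the additivity formula (ignoring excess volume, i.e. assuming no interaction between the components), here is a minimal Python sketch; the component densities are textbook values and the composition is invented:

```python
def ideal_mixture_density(components):
    """Density of a mixture from pure-component densities and masses,
    assuming volumes are additive (no excess volume).

    components: list of (mass_kg, pure_density_kg_m3) pairs.
    """
    total_mass = sum(m for m, _ in components)
    total_volume = sum(m / rho for m, rho in components)
    return total_mass / total_volume

# 40 g of ethanol (789 kg/m^3) mixed with 60 g of water (998 kg/m^3):
print(ideal_mixture_density([(0.040, 789.0), (0.060, 998.0)]))
# ~902 kg/m^3; the real mixture is slightly denser, because
# ethanol-water mixing has a negative excess volume.
```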
List of densities
Various materials
Others
Water
Air
Molar volumes of liquid and solid phase of elements
Developmental biology

Developmental biology is the study of the process by which animals and plants grow and develop. Developmental biology also encompasses the biology of regeneration, asexual reproduction, metamorphosis, and the growth and differentiation of stem cells in the adult organism.
Perspectives
The main processes involved in the embryonic development of animals are: tissue patterning (via regional specification and patterned cell differentiation); tissue growth; and tissue morphogenesis.
Regional specification refers to the processes that create the spatial patterns in a ball or sheet of initially similar cells. This generally involves the action of cytoplasmic determinants, located within parts of the fertilized egg, and of inductive signals emitted from signaling centers in the embryo. The early stages of regional specification do not generate functional differentiated cells, but cell populations committed to developing to a specific region or part of the organism. These are defined by the expression of specific combinations of transcription factors.
Cell differentiation relates specifically to the formation of functional cell types such as nerve, muscle, secretory epithelia, etc. Differentiated cells contain large amounts of specific proteins associated with cell function.
Morphogenesis relates to the formation of a three-dimensional shape. It mainly involves the orchestrated movements of cell sheets and of individual cells. Morphogenesis is important for creating the three germ layers of the early embryo (ectoderm, mesoderm, and endoderm) and for building up complex structures during organ development.
Tissue growth involves both an overall increase in tissue size, and also the differential growth of parts (allometry) which contributes to morphogenesis. Growth mostly occurs through cell proliferation but also through changes in cell size or the deposition of extracellular materials.
The development of plants involves similar processes to that of animals. However, plant cells are mostly immotile so morphogenesis is achieved by differential growth, without cell movements. Also, the inductive signals and the genes involved are different from those that control animal development.
Generative biology
Generative biology is the generative science that explores the dynamics guiding the development and evolution of a biological morphological form.
Developmental processes
Cell differentiation
Cell differentiation is the process whereby different functional cell types arise in development. For example, neurons, muscle fibers and hepatocytes (liver cells) are well known types of differentiated cells. Differentiated cells usually produce large amounts of a few proteins that are required for their specific function and this gives them the characteristic appearance that enables them to be recognized under the light microscope. The genes encoding these proteins are highly active. Typically their chromatin structure is very open, allowing access for the transcription enzymes, and specific transcription factors bind to regulatory sequences in the DNA in order to activate gene expression. For example, NeuroD is a key transcription factor for neuronal differentiation, myogenin for muscle differentiation, and HNF4 for hepatocyte differentiation.
Cell differentiation is usually the final stage of development, preceded by several states of commitment which are not visibly differentiated. A single tissue, formed from a single type of progenitor cell or stem cell, often consists of several differentiated cell types. Control of their formation involves a process of lateral inhibition, based on the properties of the Notch signaling pathway. For example, in the neural plate of the embryo this system operates to generate a population of neuronal precursor cells in which NeuroD is highly expressed.
Regeneration
Regeneration indicates the ability to regrow a missing part. This is very prevalent amongst plants, which show continuous growth, and also among colonial animals such as hydroids and ascidians. But most interest from developmental biologists has been shown in the regeneration of parts in free-living animals. In particular, four models have been the subject of much investigation. Two of these have the ability to regenerate whole bodies: Hydra, which can regenerate any part of the polyp from a small fragment, and planarian worms, which can usually regenerate both heads and tails. Both of these examples have continuous cell turnover fed by stem cells and, in planaria at least, some of the stem cells have been shown to be pluripotent. The other two models show only distal regeneration of appendages. These are the insect appendages, usually the legs of hemimetabolous insects such as the cricket, and the limbs of urodele amphibians. Considerable information is now available about amphibian limb regeneration and it is known that each cell type regenerates itself, except for connective tissues where there is considerable interconversion between cartilage, dermis and tendons. In terms of the pattern of structures, this is controlled by a re-activation of signals active in the embryo.
There is still debate about the old question of whether regeneration is a "pristine" or an "adaptive" property. If the former is the case, with improved knowledge, we might expect to be able to improve regenerative ability in humans. If the latter, then each instance of regeneration is presumed to have arisen by natural selection in circumstances particular to the species, so no general rules would be expected.
Embryonic development of animals
The sperm and egg fuse in the process of fertilization to form a fertilized egg, or zygote. This undergoes a period of divisions to form a ball or sheet of similar cells called a blastula or blastoderm. These cell divisions are usually rapid with no growth so the daughter cells are half the size of the mother cell and the whole embryo stays about the same size. They are called cleavage divisions.
Mouse epiblast primordial germ cells (see Figure: "The initial stages of human embryogenesis") undergo extensive epigenetic reprogramming. This process involves genome-wide DNA demethylation, chromatin reorganization and epigenetic imprint erasure leading to totipotency. DNA demethylation is carried out by a process that utilizes the DNA base excision repair pathway.
Morphogenetic movements convert the cell mass into a three layered structure consisting of multicellular sheets called ectoderm, mesoderm and endoderm. These sheets are known as germ layers. This is the process of gastrulation. During cleavage and gastrulation the first regional specification events occur. In addition to the formation of the three germ layers themselves, these often generate extraembryonic structures, such as the mammalian placenta, needed for support and nutrition of the embryo, and also establish differences of commitment along the anteroposterior axis (head, trunk and tail).
Regional specification is initiated by the presence of cytoplasmic determinants in one part of the zygote. The cells that contain the determinant become a signaling center and emit an inducing factor. Because the inducing factor is produced in one place, diffuses away, and decays, it forms a concentration gradient, high near the source cells and low further away. The remaining cells of the embryo, which do not contain the determinant, are competent to respond to different concentrations by upregulating specific developmental control genes. This results in a series of zones becoming set up, arranged at progressively greater distance from the signaling center. In each zone a different combination of developmental control genes is upregulated. These genes encode transcription factors which upregulate new combinations of gene activity in each region. Among other functions, these transcription factors control expression of genes conferring specific adhesive and motility properties on the cells in which they are active. Because of these different morphogenetic properties, the cells of each germ layer move to form sheets such that the ectoderm ends up on the outside, mesoderm in the middle, and endoderm on the inside.
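The gradient logic of this paragraph is often formalized as the "French flag" model: a morphogen produced at a source, diffusing away and decaying, settles into an exponential concentration profile C(x) = C0·exp(−x/λ) with decay length λ = √(D/k), and cells adopt different fates by comparing the local concentration with thresholds. A minimal Python sketch, with all parameter values invented for illustration:

```python
import math

def morphogen_concentration(x, c0=1.0, diffusion=1.0, decay=0.04):
    """Steady-state concentration of a morphogen produced at x = 0,
    diffusing with coefficient D and decaying at rate k:
    C(x) = C0 * exp(-x / lambda), with lambda = sqrt(D / k)."""
    decay_length = math.sqrt(diffusion / decay)
    return c0 * math.exp(-x / decay_length)

def cell_fate(c, thresholds=(0.5, 0.2)):
    """Assign one of three zones by comparing the local concentration
    with two invented thresholds (the 'French flag')."""
    high, low = thresholds
    if c >= high:
        return "zone A (near source)"
    if c >= low:
        return "zone B"
    return "zone C (far from source)"

for x in range(0, 21, 4):  # positions in arbitrary units
    c = morphogen_concentration(float(x))
    print(x, round(c, 3), cell_fate(c))
```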
Morphogenetic movements not only change the shape and structure of the embryo, but by bringing cell sheets into new spatial relationships they also make possible new phases of signaling and response between them. In addition, the first morphogenetic movements of embryogenesis, such as gastrulation, epiboly and twisting, directly activate pathways involved in endomesoderm specification through mechanotransduction processes. This property has been suggested to be evolutionarily inherited from endomesoderm specification as mechanically stimulated by marine environmental hydrodynamic flow in the first animal organisms (the first metazoans). Twisting along the body axis by a left-handed chirality is found in all chordates (including vertebrates) and is addressed by the axial twist theory.
Growth in embryos is mostly autonomous. For each territory of cells the growth rate is controlled by the combination of genes that are active. Free-living embryos do not grow in mass as they have no external food supply. But embryos fed by a placenta or extraembryonic yolk supply can grow very fast, and changes to relative growth rate between parts in these organisms help to produce the final overall anatomy.
The whole process needs to be coordinated in time and how this is controlled is not understood. There may be a master clock able to communicate with all parts of the embryo that controls the course of events, or timing may depend simply on local causal sequences of events.
Metamorphosis
Developmental processes are very evident during the process of metamorphosis. This occurs in various types of animal. Well-known examples are seen in frogs, which usually hatch as a tadpole and metamorphoses to an adult frog, and certain insects which hatch as a larva and then become remodeled to the adult form during a pupal stage.
All the developmental processes listed above occur during metamorphosis. Examples that have been especially well studied include tail loss and other changes in the tadpole of the frog Xenopus, and the biology of the imaginal discs, which generate the adult body parts of the fly Drosophila melanogaster.
Plant development
Plant development is the process by which structures originate and mature as a plant grows. It is studied in plant anatomy and plant physiology as well as plant morphology.
Plants constantly produce new tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature.
The properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts."
Growth
A vascular plant begins from a single celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organize so that one end becomes the first root, while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin its life.
Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. Branching occurs when small clumps of cells left behind by the meristem, and which have not yet undergone cellular differentiation to form a specialized tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in widening of a root or shoot from divisions of cells in a cambium.
In addition to growth by cell division, a plant may grow through cell elongation. This occurs when individual cells or groups of cells grow longer. Not all plant cells will grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem will bend to the side of the slower growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light (phototropism), gravity (gravitropism), water (hydrotropism), and physical contact (thigmotropism).
Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin.
Morphological variation
Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation. Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility.
Evolution of plant morphology
Transcription factors and transcriptional regulatory networks play key roles in plant morphogenesis and their evolution. During plant landing, many novel transcription factor families emerged and are preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to more complex morphogenesis of land plants.
Most land plants share a common ancestor: multicellular algae. An example of the evolution of plant morphology is seen in charophytes. Studies have shown that charophytes have traits that are homologous to land plants. There are two main theories of the evolution of plant morphology: the homologous theory and the antithetic theory. The commonly accepted theory for the evolution of plant morphology is the antithetic theory. The antithetic theory states that the multiple mitotic divisions that take place before meiosis cause the development of the sporophyte. Then the sporophyte will develop as an independent organism.
Developmental model organisms
Much of developmental biology research in recent decades has focused on the use of a small number of model organisms. It has turned out that there is much conservation of developmental mechanisms across the animal kingdom. In early development different vertebrate species all use essentially the same inductive signals and the same genes encoding regional identity. Even invertebrates use a similar repertoire of signals and genes although the body parts formed are significantly different. Model organisms each have some particular experimental advantages which have enabled them to become popular among researchers. In one sense they are "models" for the whole animal kingdom, and in another sense they are "models" for human development, which is difficult to study directly for both ethical and practical reasons. Model organisms have been most useful for elucidating the broad nature of developmental mechanisms. The more detail is sought, the more they differ from each other and from humans.
Plants
Thale cress (Arabidopsis thaliana)
Vertebrates
Frog: Xenopus (X. laevis and X. tropicalis). Good embryo supply. Especially suitable for microsurgery.
Zebrafish: Danio rerio. Good embryo supply. Well developed genetics.
Chicken: Gallus gallus. Early stages similar to mammal, but microsurgery easier. Low cost.
Mouse: Mus musculus. A mammal with well developed genetics.
Invertebrates
Fruit fly: Drosophila melanogaster. Good embryo supply. Well developed genetics.
Nematode: Caenorhabditis elegans. Good embryo supply. Well developed genetics. Low cost.
Unicellular
Algae: Chlamydomonas
Yeast: Saccharomyces
Others
Also popular for some purposes have been sea urchins and ascidians. For studies of regeneration urodele amphibians such as the axolotl Ambystoma mexicanum are used, and also planarian worms such as Schmidtea mediterranea. Organoids have also been demonstrated as an efficient model for development. Plant development has focused on the thale cress Arabidopsis thaliana as a model organism.
Dubnium

Dubnium is a synthetic chemical element; it has symbol Db and atomic number 105. It is highly radioactive: the most stable known isotope, dubnium-268, has a half-life of about 16 hours. This greatly limits extended research on the element.
Dubnium does not occur naturally on Earth and is produced artificially. The Soviet Joint Institute for Nuclear Research (JINR) claimed the first discovery of the element in 1968, followed by the American Lawrence Berkeley Laboratory in 1970. Both teams proposed their names for the new element and used them without formal approval. The long-standing dispute was resolved in 1993 by an official investigation of the discovery claims by the Transfermium Working Group, formed by the International Union of Pure and Applied Chemistry and the International Union of Pure and Applied Physics, resulting in credit for the discovery being officially shared between both teams. The element was formally named dubnium in 1997 after the town of Dubna, the site of the JINR.
Theoretical research establishes dubnium as a member of group 5 in the 6d series of transition metals, placing it under vanadium, niobium, and tantalum. Dubnium should share most properties, such as its valence electron configuration and having a dominant +5 oxidation state, with the other group 5 elements, with a few anomalies due to relativistic effects. A limited investigation of dubnium chemistry has confirmed this.
Discovery
Background
Uranium, element 92, is the heaviest element to occur in significant quantities in nature; heavier elements can only be practically produced by synthesis. The first synthesis of a new element—neptunium, element 93—was achieved in 1940 by a team of researchers in the United States. In the following years, American scientists synthesized the elements up to mendelevium, element 101, which was synthesized in 1955. From element 102, the priority of discoveries was contested between American and Soviet physicists. Their rivalry resulted in a race for new elements and credit for their discoveries, later named the Transfermium Wars.
Reports
The first report of the discovery of element 105 came from the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Soviet Union, in April 1968. The scientists bombarded 243Am with a beam of 22Ne ions, and reported 9.4 MeV (with a half-life of 0.1–3 seconds) and 9.7 MeV (t1/2 > 0.05 s) alpha activities followed by alpha activities similar to those of either 256103 or 257103. Based on prior theoretical predictions, the two activity lines were assigned to 261105 and 260105, respectively.
243Am + 22Ne → 265−x105 + xn (x = 4, 5)
After observing the alpha decays of element 105, the researchers aimed to observe spontaneous fission (SF) of the element and study the resulting fission fragments. They published a paper in February 1970, reporting multiple examples of two such activities, with half-lives of 14 ms and approximately 2.2 s. They assigned the former activity to 242mfAm and ascribed the latter activity to an isotope of element 105. They suggested that it was unlikely that this activity could come from a transfer reaction instead of element 105, because the yield ratio for this reaction was significantly lower than that of the 242mfAm-producing transfer reaction, in accordance with theoretical predictions. To establish that this activity was not from a (22Ne,xn) reaction, the researchers bombarded a 243Am target with 18O ions; reactions producing 256103 and 257103 showed very little SF activity (matching the established data), and the reaction producing heavier 258103 and 259103 produced no SF activity at all, in line with theoretical data. The researchers concluded that the activities observed came from SF of element 105.
In April 1970, a team at Lawrence Berkeley Laboratory (LBL), in Berkeley, California, United States, claimed to have synthesized element 105 by bombarding californium-249 with nitrogen-15 ions, with an alpha activity of 9.1 MeV. To ensure this activity was not from a different reaction, the team attempted other reactions: bombarding 249Cf with 14N, Pb with 15N, and Hg with 15N. They stated no such activity was found in those reactions. The characteristics of the daughter nuclei matched those of 256103, implying that the parent nuclei were of 260105.
249Cf + 15N → 260105 + 4n
These results did not confirm the JINR findings regarding the 9.4 MeV or 9.7 MeV alpha decay of 260105, leaving only 261105 as a possibly produced isotope.
JINR then attempted another experiment to create element 105, published in a report in May 1970. They claimed that they had synthesized more nuclei of element 105 and that the experiment confirmed their previous work. According to the paper, the isotope produced by JINR was probably 261105, or possibly 260105. This report included an initial chemical examination: the thermal gradient version of the gas-chromatography method was applied to demonstrate that the chloride of what had formed from the SF activity nearly matched that of niobium pentachloride, rather than hafnium tetrachloride. The team identified a 2.2-second SF activity in a volatile chloride portraying eka-tantalum properties, and inferred that the source of the SF activity must have been element 105.
In June 1970, JINR made improvements on their first experiment, using a purer target and reducing the intensity of transfer reactions by installing a collimator before the catcher. This time, they were able to find 9.1 MeV alpha activities with daughter isotopes identifiable as either 256103 or 257103, implying that the original isotope was either 260105 or 261105.
Naming controversy
JINR did not propose a name after their first report claiming synthesis of element 105, which would have been the usual practice. This led LBL to believe that JINR did not have enough experimental data to back their claim. After collecting more data, JINR proposed the name bohrium (Bo) in honor of the Danish nuclear physicist Niels Bohr, a founder of the theories of atomic structure and quantum theory; they soon changed their proposal to nielsbohrium (Ns) to avoid confusion with boron. Another proposed name was dubnium. When LBL first announced their synthesis of element 105, they proposed that the new element be named hahnium (Ha) after the German chemist Otto Hahn, the "father of nuclear chemistry", thus creating an element naming controversy.
In the early 1970s, both teams reported synthesis of the next element, element 106, but did not suggest names. JINR suggested establishing an international committee to clarify the discovery criteria. This proposal was accepted in 1974 and a neutral joint group formed. Neither team showed interest in resolving the conflict through a third party, so the leading scientists of LBL—Albert Ghiorso and Glenn Seaborg—traveled to Dubna in 1975 and met with the leading scientists of JINR—Georgy Flerov, Yuri Oganessian, and others—to try to resolve the conflict internally and render the neutral joint group unnecessary; after two hours of discussions, this failed. The joint neutral group never assembled to assess the claims, and the conflict remained unresolved. In 1979, IUPAC suggested systematic element names to be used as placeholders until permanent names were established; under it, element 105 would be unnilpentium, from the Latin roots un- and nil- and the Greek root pent- (meaning "one", "zero", and "five", respectively, the digits of the atomic number). Both teams ignored it as they did not wish to weaken their outstanding claims.
In 1981, the Gesellschaft für Schwerionenforschung (GSI; Society for Heavy Ion Research) in Darmstadt, Hesse, West Germany, claimed synthesis of element 107; their report came out five years after the first report from JINR but with greater precision, making a more solid claim on discovery. GSI acknowledged JINR's efforts by suggesting the name nielsbohrium for the new element. JINR did not suggest a new name for element 105, stating it was more important to determine its discoverers first.
In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed a Transfermium Working Group (TWG) to assess discoveries and establish final names for the controversial elements. The party held meetings with delegates from the three competing institutes; in 1990, they established criteria on recognition of an element, and in 1991, they finished the work on assessing discoveries and disbanded. These results were published in 1993. According to the report, the first definitely successful experiment was the April 1970 LBL experiment, closely followed by the June 1970 JINR experiment, so credit for the discovery of the element should be shared between the two teams.
LBL said that the input from JINR was overrated in the review. They claimed JINR was only able to unambiguously demonstrate the synthesis of element 105 a year after they did. JINR and GSI endorsed the report.
In 1994, IUPAC published a recommendation on naming the disputed elements. For element 105, they proposed joliotium (Jl) after the French physicist Frédéric Joliot-Curie, a contributor to the development of nuclear physics and chemistry; this name was originally proposed by the Soviet team for element 102, which by then had long been called nobelium. This recommendation was criticized by the American scientists for several reasons. Firstly, their suggestions were scrambled: the names rutherfordium and hahnium, originally suggested by Berkeley for elements 104 and 105, were respectively reassigned to elements 106 and 108. Secondly, elements 104 and 105 were given names favored by JINR, despite earlier recognition of LBL as an equal co-discoverer for both of them. Thirdly and most importantly, IUPAC rejected the name seaborgium for element 106, having just approved a rule that an element could not be named after a living person, even though the 1993 report had given the LBL team the sole credit for its discovery.
In 1995, IUPAC abandoned the controversial rule and established a committee of national representatives aimed at finding a compromise. They suggested seaborgium for element 106 in exchange for the removal of all the other American proposals, except for the established name lawrencium for element 103. The equally entrenched name nobelium for element 102 was replaced by flerovium after Georgy Flerov, following the recognition by the 1993 report that that element had been first synthesized in Dubna. This was rejected by American scientists and the decision was retracted. The name flerovium was later used for element 114.
In 1996, IUPAC held another meeting, reconsidered all names in hand, and accepted another set of recommendations; it was approved and published in 1997. Element 105 was named dubnium (Db), after Dubna in Russia, the location of the JINR; the American suggestions were used for elements 102, 103, 104, and 106. The name dubnium had been used for element 104 in the previous IUPAC recommendation. The American scientists "reluctantly" approved this decision. IUPAC pointed out that the Berkeley laboratory had already been recognized several times, in the naming of berkelium, californium, and americium, and that the acceptance of the names rutherfordium and seaborgium for elements 104 and 106 should be offset by recognizing JINR's contributions to the discovery of elements 104, 105, and 106.
Even after 1997, LBL still sometimes used the name hahnium for element 105 in their own material, doing so as recently as 2014. However, the problem was resolved in the literature as Jens Volker Kratz, editor of Radiochimica Acta, refused to accept papers not using the 1997 IUPAC nomenclature.
Isotopes
Dubnium, having an atomic number of 105, is a superheavy element; like all elements with such high atomic numbers, it is very unstable. The longest-lasting known isotope of dubnium, 268Db, has a half-life of around a day. No stable isotopes have been seen, and a 2012 calculation by JINR suggested that the half-lives of all dubnium isotopes would not significantly exceed a day. Dubnium can only be obtained by artificial production.
The short half-life of dubnium limits experimentation. This is exacerbated by the fact that the most stable isotopes are the hardest to synthesize. Elements with a lower atomic number have stable isotopes with a lower neutron–proton ratio than those with higher atomic number, meaning that the target and beam nuclei that could be employed to create the superheavy element have fewer neutrons than needed to form these most stable isotopes. (Different techniques based on rapid neutron capture and transfer reactions are being considered as of the 2010s, but those based on the collision of a large and small nucleus still dominate research in the area.)
Only a few atoms of 268Db can be produced in each experiment, and thus the measured lifetimes vary significantly during the process. As of 2022, following additional experiments performed at the JINR's Superheavy Element Factory (which started operations in 2019), the half-life of 268Db is measured to be about a day. The second most stable isotope, 270Db, has been produced in even smaller quantities: three atoms in total, with lifetimes of 33.4 h, 1.3 h, and 1.6 h. These two are the heaviest isotopes of dubnium to date, and both were produced as a result of decay of the heavier nuclei 288Mc and 294Ts rather than directly, because the experiments that yielded them were originally designed in Dubna for 48Ca beams. For its mass, 48Ca has by far the greatest neutron excess of all practically stable nuclei, in both absolute and relative terms, which correspondingly helps synthesize superheavy nuclei with more neutrons; this gain is offset by the decreased likelihood of fusion for high atomic numbers.
Predicted properties
According to the periodic law, dubnium should belong to group 5, with vanadium, niobium, and tantalum. Several studies have investigated the properties of element 105 and found that they generally agree with the predictions of the periodic law. Significant deviations may nevertheless occur, due to relativistic effects, which dramatically change physical properties on both atomic and macroscopic scales. These properties have remained challenging to measure for several reasons: the difficulties of producing superheavy atoms, the low rates of production, which allow only microscopic scales, the requirement of a radiochemistry laboratory to test the atoms, the short half-lives of those atoms, and the presence of many unwanted activities apart from those of the synthesis of superheavy atoms. So far, studies have only been performed on single atoms.
Atomic and physical
A direct relativistic effect is that as the atomic numbers of elements increase, the innermost electrons begin to revolve faster around the nucleus as a result of an increase of electromagnetic attraction between an electron and a nucleus. Similar effects have been found for the outermost s orbitals (and p1/2 ones, though in dubnium they are not occupied): for example, the 7s orbital contracts by 25% in size and is stabilized by 2.6 eV.
A more indirect effect is that the contracted s and p1/2 orbitals shield the charge of the nucleus more effectively, leaving less for the outer d and f electrons, which therefore move in larger orbitals. Dubnium is greatly affected by this: unlike the previous group 5 members, its 7s electrons are slightly more difficult to extract than its 6d electrons.
Another effect is the spin–orbit interaction, particularly spin–orbit splitting, which splits the 6d subshell—the azimuthal quantum number ℓ of a d shell is 2—into two subshells, with four of the ten orbitals having their total angular momentum quantum number j lowered to 3/2 and six raised to 5/2. All ten energy levels are raised; four of them are lower than the other six. (The three 6d electrons normally occupy the lowest energy levels, 6d3/2.)
A singly ionized atom of dubnium (Db+) should lose a 6d electron compared to a neutral atom; the doubly (Db2+) or triply (Db3+) ionized atoms of dubnium should eliminate 7s electrons, unlike its lighter homologs. Despite the changes, dubnium is still expected to have five valence electrons. As the 6d orbitals of dubnium are more destabilized than the 5d ones of tantalum, and Db3+ is expected to have two 6d, rather than 7s, electrons remaining, the resulting +3 oxidation state is expected to be unstable and even rarer than that of tantalum. The ionization potential of dubnium in its maximum +5 oxidation state should be slightly lower than that of tantalum and the ionic radius of dubnium should increase compared to tantalum; this has a significant effect on dubnium's chemistry.
Atoms of dubnium in the solid state should arrange themselves in a body-centered cubic configuration, like the previous group 5 elements. The predicted density of dubnium is 21.6 g/cm3.
Chemical
Computational chemistry is simplest in gas-phase chemistry, in which interactions between molecules may be ignored as negligible. Multiple authors have researched dubnium pentachloride; calculations show it to be consistent with the periodic law by exhibiting the properties of a compound of a group 5 element. For example, the molecular orbital levels indicate that dubnium uses three 6d electron levels as expected. Compared to its tantalum analog, dubnium pentachloride is expected to show increased covalent character: a decrease in the effective charge on an atom and an increase in the overlap population (between orbitals of dubnium and chlorine).
Calculations of solution chemistry indicate that the maximum oxidation state of dubnium, +5, will be more stable than those of niobium and tantalum, and the +3 and +4 states will be less stable. The tendency towards hydrolysis of cations with the highest oxidation state should continue to decrease within group 5 but is still expected to be quite rapid. Complexation of dubnium is expected to follow group 5 trends in its richness. Calculations for hydroxo-chlorido complexes have shown a reversal in the trends of complex formation and extraction of group 5 elements, with dubnium being more prone to complex formation and extraction than tantalum.
Experimental chemistry
Experimental results of the chemistry of dubnium date back to 1974 and 1976. JINR researchers used a thermochromatographic system and concluded that the volatility of dubnium bromide was less than that of niobium bromide and about the same as that of hafnium bromide. It is not certain that the detected fission products confirmed that the parent was indeed element 105. These results may imply that dubnium behaves more like hafnium than niobium.
The next studies on the chemistry of dubnium were conducted in 1988, in Berkeley. They examined whether the most stable oxidation state of dubnium in aqueous solution was +5. Dubnium was fumed twice and washed with concentrated nitric acid; sorption of dubnium on glass cover slips was then compared with that of the group 5 elements niobium and tantalum and the group 4 elements zirconium and hafnium produced under similar conditions. The group 5 elements are known to sorb on glass surfaces; the group 4 elements do not. Dubnium was confirmed as a group 5 member. Surprisingly, the behavior on extraction from mixed nitric and hydrofluoric acid solution into methyl isobutyl ketone differed between dubnium, tantalum, and niobium. Dubnium did not extract and its behavior resembled niobium more closely than tantalum, indicating that complexing behavior could not be predicted purely from simple extrapolations of trends within a group in the periodic table.
This prompted further exploration of the chemical behavior of complexes of dubnium. Various labs jointly conducted thousands of repetitive chromatographic experiments between 1988 and 1993. All group 5 elements and protactinium were extracted from concentrated hydrochloric acid; after mixing with lower concentrations of hydrogen chloride, small amounts of hydrogen fluoride were added to start selective re-extraction. Dubnium showed behavior different from that of tantalum but similar to that of niobium and its pseudohomolog protactinium at concentrations of hydrogen chloride below 12 moles per liter. This similarity to the two elements suggested that the formed complex was an anionic oxohalide or hydroxide-halide complex. After extraction experiments of dubnium from hydrogen bromide into diisobutyl carbinol (2,6-dimethylheptan-4-ol), a specific extractant for protactinium, with subsequent elutions with the hydrogen chloride/hydrogen fluoride mix as well as hydrogen chloride, dubnium was found to be less prone to extraction than either protactinium or niobium. This was explained as an increasing tendency to form non-extractable complexes of multiple negative charges. Further experiments in 1992 confirmed the stability of the +5 state: Db(V) was shown to be extractable from cation-exchange columns with α-hydroxyisobutyrate, like the group 5 elements and protactinium; Db(III) and Db(IV) were not. In 1998 and 1999, new predictions suggested that dubnium would extract nearly as well as niobium and better than tantalum from halide solutions, which was later confirmed.
The first isothermal gas chromatography experiments were performed in 1992 with 262Db (half-life 35 seconds). The volatilities for niobium and tantalum were similar within error limits, but dubnium appeared to be significantly less volatile. It was postulated that traces of oxygen in the system might have led to formation of a dubnium oxybromide, which was predicted to be less volatile than the pentabromide. Later experiments in 1996 showed that group 5 chlorides were more volatile than the corresponding bromides, with the exception of tantalum, presumably due to formation of a tantalum oxychloride. Later volatility studies of chlorides of dubnium and niobium as a function of controlled partial pressures of oxygen showed that formation of oxychlorides and general volatility are dependent on concentrations of oxygen. The oxychlorides were shown to be less volatile than the chlorides.
In 2004–05, researchers from Dubna and Livermore identified a new dubnium isotope, 268Db, as a fivefold alpha decay product of the newly created element 115. This new isotope proved to be long-lived enough to allow further chemical experimentation, with a half-life of over a day. In the 2004 experiment, a thin layer containing dubnium was removed from the surface of the target and dissolved in aqua regia with tracers and a lanthanum carrier, from which various +3, +4, and +5 species were precipitated on adding ammonium hydroxide. The precipitate was washed and dissolved in hydrochloric acid, where it converted to nitrate form and was then dried on a film and counted. The precipitate mostly contained a +5 species, which was immediately assigned to dubnium, but it also contained a +4 species; based on that result, the team decided that additional chemical separation was needed. In 2005, the experiment was repeated, with the final product being hydroxide rather than nitrate precipitate, which was processed further in both Livermore (based on reverse phase chromatography) and Dubna (based on anion exchange chromatography). The +5 species was effectively isolated; dubnium appeared three times in tantalum-only fractions and never in niobium-only fractions. It was noted that these experiments were insufficient to draw conclusions about the general chemical profile of dubnium.
In 2009, at the JAEA tandem accelerator in Japan, dubnium was processed in nitric and hydrofluoric acid solution, at concentrations where niobium forms an anionic oxofluoride complex and tantalum a fluoride complex. Dubnium's behavior was close to that of niobium but not tantalum; it was thus deduced that dubnium formed the niobium-like oxofluoride complex. From the available information, it was concluded that dubnium often behaved like niobium, sometimes like protactinium, but rarely like tantalum.
In 2021, the volatile heavy group 5 oxychlorides MOCl3 (M = Nb, Ta, Db) were experimentally studied at the JAEA tandem accelerator. The trend in volatilities was found to be NbOCl3 > TaOCl3 ≥ DbOCl3, so that dubnium behaves in line with periodic trends.
| Physical sciences | Group 5 | Chemistry |
8464 | https://en.wikipedia.org/wiki/Disaccharide | Disaccharide | A disaccharide (also called a double sugar or biose) is the sugar formed when two monosaccharides are joined by glycosidic linkage. Like monosaccharides, disaccharides are simple sugars soluble in water. Three common examples are sucrose, lactose, and maltose.
Disaccharides are one of the four chemical groupings of carbohydrates (monosaccharides, disaccharides, oligosaccharides, and polysaccharides). The most common types of disaccharides—sucrose, lactose, and maltose—have 12 carbon atoms, with the general formula C12H22O11. The differences in these disaccharides are due to atomic arrangements within the molecule.
The joining of monosaccharides into a double sugar happens by a condensation reaction, which involves the elimination of a water molecule from the functional groups only. Breaking apart a double sugar into its two monosaccharides is accomplished by hydrolysis with the help of a type of enzyme called a disaccharidase. As building the larger sugar ejects a water molecule, breaking it down consumes a water molecule. These reactions are vital in metabolism. Each disaccharide is broken down with the help of a corresponding disaccharidase (sucrase, lactase, and maltase).
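As a rough illustration of this bookkeeping (a hypothetical sketch, not from any biochemistry reference; the element counts are ordinary molecular formulas), the condensation arithmetic can be checked programmatically:

```python
from collections import Counter

# Molecular formulas as element -> atom-count maps.
GLUCOSE = Counter({"C": 6, "H": 12, "O": 6})   # C6H12O6 (any hexose works here)
WATER = Counter({"H": 2, "O": 1})              # H2O

def condense(mono1, mono2):
    """Join two monosaccharides, eliminating one water molecule."""
    total = mono1 + mono2
    total.subtract(WATER)
    return +total  # drop any zero counts

print(condense(GLUCOSE, GLUCOSE))
# Counter({'H': 22, 'C': 12, 'O': 11}) -- i.e., C12H22O11
```

Running the hydrolysis direction simply adds the water back, recovering two C6H12O6 units.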
Classification
There are two functionally different classes of disaccharides:
Reducing disaccharides, in which one monosaccharide, the reducing sugar of the pair, still has a free hemiacetal unit that can perform as a reducing aldehyde group; lactose, maltose and cellobiose are examples of reducing disaccharides, each with one hemiacetal unit, the other occupied by the glycosidic bond, which prevents it from acting as a reducing agent. They can easily be detected by the Woehlk test or Fearon's test on methylamine.
Non-reducing disaccharides, in which the component monosaccharides bond through an acetal linkage between their anomeric centers. This results in neither monosaccharide being left with a hemiacetal unit that is free to act as a reducing agent. Sucrose and trehalose are examples of non-reducing disaccharides because their glycosidic bond is between their respective hemiacetal carbon atoms. The reduced chemical reactivity of the non-reducing sugars, in comparison to reducing sugars, may be an advantage where stability in storage is important.
Formation
The formation of a disaccharide molecule from two monosaccharide molecules proceeds by displacing a hydroxy group from one molecule and a hydrogen nucleus (a proton) from the other, so that the new vacant bonds on the monosaccharides join the two monomers together. Because of the removal of the water molecule from the product, the term of convenience for such a process is "dehydration reaction" (also "condensation reaction" or "dehydration synthesis"). For example, milk sugar (lactose) is a disaccharide made by condensation of one molecule of each of the monosaccharides glucose and galactose, whereas the disaccharide sucrose in sugar cane and sugar beet, is a condensation product of glucose and fructose. Maltose, another common disaccharide, is condensed from two glucose molecules.
The dehydration reaction that bonds monosaccharides into disaccharides (and also bonds monosaccharides into more complex polysaccharides) forms what are called glycosidic bonds.
Properties
The glycosidic bond can be formed between any hydroxy group on the component monosaccharide. So, even if both component sugars are the same (e.g., glucose), different bond combinations (regiochemistry) and stereochemistry (alpha- or beta-) result in disaccharides that are diastereoisomers with different chemical and physical properties. Depending on the monosaccharide constituents, disaccharides are sometimes crystalline, sometimes water-soluble, and sometimes sweet-tasting and sticky-feeling. Disaccharides can serve as functional groups by forming glycosidic bonds with other organic compounds, forming glycosides.
Assimilation
Digestion of disaccharides involves breakdown into monosaccharides.
Common disaccharides
{| class="wikitable"
|-
! Disaccharide
! Unit 1
! Unit 2
! Bond
|-
| Sucrose (table sugar, cane sugar, beet sugar, or saccharose)
| Glucose || Fructose || α(1→2)β
|-
| Lactose (milk sugar)
| Galactose || Glucose || β(1→4)
|-
| Maltose (malt sugar)
| Glucose || Glucose || α(1→4)
|-
| Trehalose
| Glucose || Glucose || α(1→1)α
|-
| Cellobiose
| Glucose || Glucose || β(1→4)
|-
| Chitobiose
| Glucosamine || Glucosamine || β(1→4)
|}
Maltose, cellobiose, and chitobiose are hydrolysis products of the polysaccharides starch, cellulose, and chitin, respectively.
Less common disaccharides include:
{| class="wikitable"
|-
! Disaccharide
! Units
! Bond
|-
| Kojibiose || Two glucoses || α(1→2)
|-
| Nigerose || Two glucoses || α(1→3)
|-
| Isomaltose || Two glucoses || α(1→6)
|-
| β,β-Trehalose || Two glucoses || β(1→1)β
|-
| α,β-Trehalose || Two glucoses || α(1→1)β
|-
| Sophorose || Two glucoses || β(1→2)
|-
| Laminaribiose || Two glucoses || β(1→3)
|-
| Gentiobiose || Two glucoses || β(1→6)
|-
| Trehalulose
| One glucose and one fructose
| α(1→1)
|-
| Turanose || One glucose and one fructose || α(1→3)
|-
| Maltulose || One glucose and one fructose || α(1→4)
|-
| Leucrose || One glucose and one fructose || α(1→5)
|-
| Isomaltulose || One glucose and one fructose || α(1→6)
|-
| Gentiobiulose || One glucose and one fructose || β(1→6)
|-
| Mannobiose || Two mannoses || Either α(1→2), α(1→3), α(1→4), or α(1→6)
|-
| Melibiose || One galactose and one glucose || α(1→6)
|-
| Allolactose || One galactose and one glucose || β(1→6)
|-
| Melibiulose || One galactose and one fructose || α(1→6)
|-
| Lactulose || One galactose and one fructose || β(1→4)
|-
| Rutinose || One rhamnose and one glucose || α(1→6)
|-
| Rutinulose || One rhamnose and one fructose || β(1→6)
|-
| Xylobiose || Two xylopyranoses || β(1→4)
|}
| Biology and health sciences | Carbohydrates | Biology |
8466 | https://en.wikipedia.org/wiki/Dorado | Dorado | Dorado (, ) is a constellation in the Southern Sky. It was named in the late 16th century and is now one of the 88 modern constellations. Its name refers to the mahi-mahi (Coryphaena hippurus), which is known as dorado ("golden") in Spanish, although it has also been depicted as a swordfish. Dorado contains most of the Large Magellanic Cloud, the remainder being in the constellation Mensa. The South Ecliptic pole also lies within this constellation.
Even though the name Dorado is not Latin but Spanish, astronomers give it the Latin genitive form Doradus when naming its stars; it is treated (like the adjacent asterism Argo Navis) as a feminine proper name of Greek origin ending in -ō (like Io or Callisto or Argo), which have a genitive ending -ūs.
History
Dorado was one of twelve constellations named by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It appeared:
On a celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with Jodocus Hondius.
In its first depiction in a celestial atlas, Johann Bayer's Uranometria of 1603.
In Johannes Kepler's edition of Tycho Brahe's star list in the Rudolphine Tables of 1627; this was the first time that it was given the alternative name Xiphias, the swordfish. The name Dorado ultimately became dominant and was adopted by the IAU.
Dorado represents a dolphinfish; it has also been called the goldfish because Dorado are gold-colored.
Features
Stars
Alpha Doradus is a blue-white star of magnitude 3.3, 176 light-years from Earth. It is the brightest star in Dorado. Beta Doradus is a notably bright Cepheid variable star. It is a yellow-tinged supergiant star that has a minimum magnitude of 4.1 and a maximum magnitude of 3.5. One thousand and forty light-years from Earth, Beta Doradus has a period of 9 days and 20 hours.
R Doradus is one of the many variable stars in Dorado. S Dor, a magnitude 9.721 hypergiant in the Large Magellanic Cloud, is the prototype of S Doradus variable stars. The variable star R Doradus, at magnitude 5.73, has the largest-known apparent size of any star other than the Sun. Gamma Doradus is the prototype of the Gamma Doradus variable stars.
Supernova 1987A was the closest supernova to occur since the invention of the telescope. SNR 0509-67.5 is the remnant of an unusually energetic Type Ia supernova from about 400 years ago.
HE 0437-5439 is a hypervelocity star escaping from the Milky Way/Magellanic Cloud system.
Dorado is also the location of the South Ecliptic pole, which lies near the fish's head. The pole was called "Polus Doradinalis" by Philipp von Zesen, also known as Caesius.
In early 2020, the exoplanet TOI-700 d was discovered orbiting the star TOI-700 in Dorado. This is the first potentially Earth-like exoplanet to be discovered by the Transiting Exoplanet Survey Satellite.
Deep-sky objects
Because Dorado contains part of the Large Magellanic Cloud, it is rich in deep sky objects. The Large Magellanic Cloud, 25,000 light-years in diameter, is a satellite galaxy of the Milky Way Galaxy, located at a distance of 179,000 light-years. It has been deformed by its gravitational interactions with the larger Milky Way. In 1987, it became host to SN 1987A, the first supernova of 1987 and the closest since 1604. The galaxy contains over 10 billion stars. All coordinates given are for Epoch J2000.0.
N 180B is an emission nebula located in the Large Magellanic Cloud.
NGC 1566 (RA 04h 20m 00s Dec -56° 56.3′) is a face-on spiral galaxy. It gives its name to the NGC 1566 Group of galaxies.
NGC 1755 (RA 04h 55m 13s Dec -68° 12.2′) is a globular cluster.
NGC 1763 (RA 04h 56m 49s Dec -68° 24.5′) is a bright nebula associated with three type B stars.
NGC 1761 (RA 04h 56m 37s Dec -66° 28.4') is an open cluster.
NGC 1820 (RA 05h 04m 02s Dec -67° 15.9′) is an open cluster.
NGC 1850 (RA 05h 08m 44s Dec -68° 45.7′) is a globular cluster.
NGC 1854 (RA 05h 09m 19s Dec -68° 50.8′) is a globular cluster.
NGC 1869 (RA 05h 13m 56s Dec -67° 22.8′) is an open cluster.
NGC 1901 (RA 05h 18m 15s Dec -68° 26.2′) is an open cluster.
NGC 1910 (RA 05h 18m 43s Dec -69° 13.9′) is an open cluster.
NGC 1936 (RA 05h 22m 14s Dec -67° 58.7′) is a bright nebula and is one of four NGC objects in close proximity, the others being NGC 1929, NGC 1934 and NGC 1935.
NGC 1978 (RA 05h 28m 36s Dec -66° 14.0′) is an open cluster.
NGC 2002 (RA 05h 30m 17s Dec -66° 53.1′) is an open cluster.
NGC 2014 (RA 05h 44m 12.7s Dec −67° 42′ 57″) is a red emission nebula.
NGC 2020 (RA 05h 44m 12.7s Dec −67° 42′ 57″) is an HII region surrounding a Wolf–Rayet star.
NGC 2027 (RA 05h 35m 00s Dec -66° 55.0′) is an open cluster.
NGC 2032 (RA 05h 35m 21s Dec -67° 34.1′; also known as "Seagull Nebula") is a nebula complex that contains four NGC designations: NGC 2029, NGC 2032, NGC 2035 and NGC 2040.
NGC 2074 (RA 05h 39m 03.0s Dec −69° 29′ 54″) is an emission nebula.
NGC 2078 (RA 05h 39m 54s Dec −69° 44′ 54″) is an emission nebula.
NGC 2080, also called the "Ghost Head Nebula", is an emission nebula that is 50 light-years wide in the Large Magellanic Cloud. It is named for the two distinct white patches that it possesses, which are regions of recent star formation. The western portion is colored green from doubly ionized oxygen, the southern portion is red from hydrogen alpha emissions, and the center region is colored yellow from both oxygen and hydrogen emissions. The western white patch, A1, has one massive, recently formed star inside. The eastern patch, A2, has several stars hidden in its dust.
Tarantula Nebula is in the Large Magellanic Cloud, named for its spiderlike shape. It is also designated 30 Doradus, as it is visible to the naked eye as a slightly out-of-focus star. Larger than any nebula in the Milky Way at 1,000 light-years in diameter, it is also brighter, because it is illuminated by the open star cluster NGC 2070, which has at its center the star cluster R136. The illuminating stars are supergiants.
NGC 2164 (RA 05h 58m 53s Dec -68° 30.9′) is a globular cluster.
N44 is a superbubble in the Large Magellanic Cloud that is 1,000 light-years wide. Its overall structure is shaped by the 40 hot stars towards its center. Within the superbubble of N44 is a smaller bubble catalogued as N44F. It is approximately 35 light-years in diameter and is shaped by an exceptionally hot star at its center, which has a stellar wind speed of 7 million kilometers per hour. N44F also features dust columns with probable star formation hidden inside.
Equivalents
In Chinese astronomy, the stars of Dorado are in two of Xu Guangqi's Southern Asterisms (近南極星區, Jìnnánjíxīngōu): the White Patches Attached (夾白, Jiābái) and the Goldfish (金魚, Jīnyú).
Namesakes
Dorado (SS-248) and Dorado (SS-526), two United States Navy submarines, were named after the same sea creature as the constellation.
Gallery
| Physical sciences | Other | Astronomy |
8468 | https://en.wikipedia.org/wiki/Determinant | Determinant | In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix is commonly denoted , , or . Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism.
The determinant is completely determined by the two following properties: the determinant of a product of matrices is the product of their determinants, and the determinant of a triangular matrix is the product of its diagonal entries.
The determinant of a 2 × 2 matrix is
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc,$$
and the determinant of a 3 × 3 matrix is
$$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.$$
The determinant of an n × n matrix can be defined in several equivalent ways, the most common being the Leibniz formula, which expresses the determinant as a sum of n! (the factorial of n) signed products of matrix entries. It can be computed by the Laplace expansion, which expresses the determinant as a linear combination of determinants of submatrices, or with Gaussian elimination, which allows computing a row echelon form with the same determinant, equal to the product of the diagonal entries of the row echelon form.
Determinants can also be defined by some of their properties. Namely, the determinant is the unique function defined on the n × n matrices that has the four following properties:
The determinant of the identity matrix is 1.
The exchange of two rows multiplies the determinant by −1.
Multiplying a row by a number multiplies the determinant by this number.
Adding a multiple of one row to another row does not change the determinant.
The above properties relating to rows (properties 2–4) may be replaced by the corresponding statements with respect to columns.
The determinant is invariant under matrix similarity. This implies that, given a linear endomorphism of a finite-dimensional vector space, the determinant of the matrix that represents it on a basis does not depend on the chosen basis. This allows defining the determinant of a linear endomorphism, which does not depend on the choice of a coordinate system.
Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a square matrix, whose roots are the eigenvalues. In geometry, the signed -dimensional volume of a -dimensional parallelepiped is expressed by a determinant, and the determinant of a linear endomorphism determines how the orientation and the -dimensional volume are transformed under the endomorphism. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals.
Two by two matrices
The determinant of a 2 × 2 matrix is denoted either by "det" or by vertical bars around the matrix, and is defined as
$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.$$
For example,
$$\det\begin{pmatrix} 3 & 7 \\ 1 & -4 \end{pmatrix} = 3 \cdot (-4) - 7 \cdot 1 = -19.$$
First properties
The determinant has several key properties that can be proved by direct evaluation of the definition for 2 × 2 matrices, and that continue to hold for determinants of larger matrices. They are as follows: first, the determinant of the identity matrix $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is 1.
Second, the determinant is zero if two rows are the same:
$$\begin{vmatrix} a & b \\ a & b \end{vmatrix} = ab - ba = 0.$$
This holds similarly if the two columns are the same. Moreover, the determinant is additive in each row:
$$\begin{vmatrix} a + a' & b + b' \\ c & d \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a' & b' \\ c & d \end{vmatrix}.$$
Finally, if any column is multiplied by some number r (i.e., all entries in that column are multiplied by that number), the determinant is also multiplied by that number:
$$\begin{vmatrix} r \cdot a & b \\ r \cdot c & d \end{vmatrix} = r \cdot \begin{vmatrix} a & b \\ c & d \end{vmatrix}.$$
Geometric meaning
If the matrix entries are real numbers, the matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ can be used to represent two linear maps: one that maps the standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d), as shown in the accompanying diagram.
The absolute value of ad − bc is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by A. (The parallelogram formed by the columns of A is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.)
The absolute value of the determinant together with the sign becomes the signed area of the parallelogram. The signed area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix).
To show that ad − bc is the signed area, one may consider a matrix containing two vectors $u = (a, b)$ and $v = (c, d)$ representing the parallelogram's sides. The signed area can be expressed as $|u|\,|v|\,\sin\theta$ for the angle θ between the vectors, which is simply base times height, the length of one vector times the perpendicular component of the other. Due to the sine this already is the signed area, yet it may be expressed more conveniently using the cosine of the complementary angle to a perpendicular vector, e.g. $u^\perp = (-b, a)$, so that $|u^\perp|\,|v|\,\cos\theta'$ becomes the signed area in question, which can be determined by the pattern of the scalar product to be equal to ad − bc according to the following equations:
$$\text{Signed area} = |u|\,|v|\,\sin\theta = |u^\perp|\,|v|\,\cos\theta' = \begin{pmatrix} -b \\ a \end{pmatrix} \cdot \begin{pmatrix} c \\ d \end{pmatrix} = ad - bc.$$
Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by A. When the determinant is equal to one, the linear mapping defined by the matrix is equi-areal and orientation-preserving.
The object known as the bivector is related to these ideas. In 2D, it can be interpreted as an oriented plane segment formed by imagining two vectors each with origin (0, 0), and coordinates (a, b) and (c, d). The bivector magnitude (denoted by $(a, b) \wedge (c, d)$) is the signed area, which is also the determinant ad − bc.
If an n × n real matrix A is written in terms of its column vectors $A = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix}$, then
$$A \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x_1 a_1 + \cdots + x_n a_n.$$
This means that A maps the unit n-cube to the n-dimensional parallelotope defined by the vectors $a_1, a_2, \ldots, a_n$, the region
$$P = \{ c_1 a_1 + \cdots + c_n a_n \mid 0 \leq c_i \leq 1 \text{ for all } i \}.$$
The determinant gives the signed n-dimensional volume of this parallelotope, and hence describes more generally the n-dimensional volume scaling factor of the linear transformation produced by A. (The sign shows whether the transformation preserves or reverses orientation.) In particular, if the determinant is zero, then this parallelotope has volume zero and is not fully n-dimensional, which indicates that the dimension of the image of A is less than n. This means that A produces a linear transformation which is neither onto nor one-to-one, and so is not invertible.
Definition
Let A be a square matrix with n rows and n columns, so that it can be written as
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}.$$
The entries $a_{11}$, $a_{12}$, etc. are, for many purposes, real or complex numbers. As discussed below, the determinant is also defined for matrices whose entries are in a commutative ring.
The determinant of A is denoted by det(A), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:
$$\begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}.$$
There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns: the determinant can be defined via the Leibniz formula, an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question.
Leibniz formula
3 × 3 matrices
The Leibniz formula for the determinant of a 3 × 3 matrix is the following:
$$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.$$
In this expression, each term has one factor from each row, all in different columns, arranged in increasing row order. For example, bdi has b from the first row second column, d from the second row first column, and i from the third row third column. The signs are determined by how many transpositions of factors are necessary to arrange the factors in increasing order of their columns (given that the terms are arranged left-to-right in increasing row order): positive for an even number of transpositions and negative for an odd number. For the example of bdi, the single transposition of bd to db gives dbi, whose three factors are from the first, second and third columns respectively; this is an odd number of transpositions, so the term appears with negative sign.
The rule of Sarrus is a mnemonic for the expanded form of this determinant: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration. This scheme for calculating the determinant of a matrix does not carry over into higher dimensions.
n × n matrices
Generalizing the above to higher dimensions, the determinant of an n × n matrix is an expression involving permutations and their signatures. A permutation of the set $\{1, 2, \ldots, n\}$ is a bijective function $\sigma$ from this set to itself, with values $\sigma(1), \sigma(2), \ldots, \sigma(n)$ exhausting the entire set. The set of all such permutations, called the symmetric group, is commonly denoted $S_n$. The signature $\operatorname{sgn}(\sigma)$ of a permutation $\sigma$ is +1 if the permutation can be obtained with an even number of transpositions (exchanges of two entries); otherwise, it is −1.
Given a matrix
$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix},$$
the Leibniz formula for its determinant is, using sigma notation for the sum,
$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)}.$$
Using pi notation for the product, this can be shortened into
$$\det(A) = \sum_{\sigma \in S_n} \left( \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)} \right).$$
The Levi-Civita symbol $\varepsilon_{i_1 \cdots i_n}$ is defined on the n-tuples of integers in $\{1, \ldots, n\}$ as 0 if two of the integers are equal, and otherwise as the signature of the permutation defined by the n-tuple of integers. With the Levi-Civita symbol, the Leibniz formula becomes
$$\det(A) = \sum_{i_1, i_2, \ldots, i_n} \varepsilon_{i_1 \cdots i_n}\, a_{1 i_1} a_{2 i_2} \cdots a_{n i_n},$$
where the sum is taken over all n-tuples of integers in $\{1, \ldots, n\}$.
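As an illustrative sketch (not part of the standard treatment; the function names are arbitrary), the Leibniz formula can be transcribed directly into code, computing the signature of each permutation by counting inversions:

```python
from itertools import permutations

def sign(perm):
    """Signature of a permutation: +1 for an even, -1 for an odd inversion count."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """Determinant via the Leibniz formula: a sum over all n! permutations."""
    n = len(a)
    total = 0
    for sigma in permutations(range(n)):
        term = sign(sigma)
        for i in range(n):
            term *= a[i][sigma[i]]  # one factor from each row, column sigma(i)
        total += term
    return total

print(det_leibniz([[3, 7], [1, -4]]))                 # -19
print(det_leibniz([[2, 0, 1], [1, 3, 2], [1, 1, 4]])) # 18
```

Its n! terms make this practical only for very small matrices; the elimination-based computation shown further below runs in O(n³) time.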
Properties
Characterization of the determinant
The determinant can be characterized by the following three key properties. To state these, it is convenient to regard an n × n matrix A as being composed of its n columns, so denoted as
$$A = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix},$$
where the column vector $a_i$ (for each i) is composed of the entries of the matrix in the i-th column.
$\det(I) = 1$, where $I$ is an identity matrix.
The determinant is multilinear: if the j-th column of a matrix A is written as a linear combination $a_j = r \cdot v + w$ of two column vectors v and w and a number r, then the determinant of A is expressible as a similar linear combination:
$$\det(A) = r \cdot \det(a_1, \ldots, v, \ldots, a_n) + \det(a_1, \ldots, w, \ldots, a_n).$$
The determinant is alternating: whenever two columns of a matrix are identical, its determinant is 0:
$$\det(a_1, \ldots, v, \ldots, v, \ldots, a_n) = 0.$$
If the determinant is defined using the Leibniz formula as above, these three properties can be proved by direct inspection of that formula. Some authors also approach the determinant directly using these three properties: it can be shown that there is exactly one function that assigns to any n × n matrix A a number that satisfies these three properties. This also shows that this more abstract approach to the determinant yields the same definition as the one using the Leibniz formula.
To see this it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (by the alternating property) or else ±1 (by the identity property together with column exchanges), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear.
Immediate consequences
These rules have several further consequences:
The determinant is a homogeneous function, i.e., $\det(cA) = c^n \det(A)$ (for an n × n matrix A).
Interchanging any pair of columns of a matrix multiplies its determinant by −1. This follows from the determinant being multilinear and alternating (properties 2 and 3 above):
$$\det(a_1, \ldots, a_j, \ldots, a_i, \ldots, a_n) = -\det(a_1, \ldots, a_i, \ldots, a_j, \ldots, a_n).$$
This formula can be applied iteratively when several columns are swapped. More generally, any permutation of the columns multiplies the determinant by the sign of the permutation.
If some column can be expressed as a linear combination of the other columns (i.e. the columns of the matrix form a linearly dependent set), the determinant is 0. As a special case, this includes: if some column is such that all its entries are zero, then the determinant of that matrix is 0.
Adding a scalar multiple of one column to another column does not change the value of the determinant. This is a consequence of multilinearity and of the determinant being alternating: by multilinearity the determinant changes by a multiple of the determinant of a matrix with two equal columns, and that determinant is 0, since the determinant is alternating.
If A is a triangular matrix, i.e. $a_{ij} = 0$ whenever $i > j$ or, alternatively, whenever $i < j$, then its determinant equals the product of the diagonal entries:
$$\det(A) = a_{11} a_{22} \cdots a_{nn} = \prod_{i=1}^{n} a_{ii}.$$
Indeed, such a matrix can be reduced, by appropriately adding multiples of the columns with fewer nonzero entries to those with more entries, to a diagonal matrix (without changing the determinant). For such a matrix, using the linearity in each column reduces to the identity matrix, in which case the stated formula holds by the very first characterizing property of determinants. Alternatively, this formula can also be deduced from the Leibniz formula, since the only permutation $\sigma$ which gives a non-zero contribution is the identity permutation.
Example
These characterizing properties and their consequences listed above are both theoretically significant, but can also be used to compute determinants for concrete matrices. In fact, Gaussian elimination can be applied to bring any matrix into upper triangular form, and the steps in this algorithm affect the determinant in a controlled way. The following concrete example illustrates the computation of the determinant of the matrix
$$A = \begin{pmatrix} 0 & 3 & 1 \\ 1 & 1 & 2 \\ 3 & 2 & 4 \end{pmatrix}$$
using that method. Exchanging the first two rows multiplies the determinant by −1; subtracting 3 times the new first row from the third row, and then adding one third of the second row to the third row, leaves the determinant unchanged and produces an upper triangular matrix:
$$\det(A) = -\det\begin{pmatrix} 1 & 1 & 2 \\ 0 & 3 & 1 \\ 3 & 2 & 4 \end{pmatrix} = -\det\begin{pmatrix} 1 & 1 & 2 \\ 0 & 3 & 1 \\ 0 & -1 & -2 \end{pmatrix} = -\det\begin{pmatrix} 1 & 1 & 2 \\ 0 & 3 & 1 \\ 0 & 0 & -5/3 \end{pmatrix}.$$
Combining these equalities gives
$$\det(A) = -\left(1 \cdot 3 \cdot \left(-\tfrac{5}{3}\right)\right) = 5.$$
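The same procedure is straightforward to mechanize. The following sketch (illustrative only; exact rational arithmetic via Python's fractions module avoids rounding error) reduces the matrix to upper triangular form while tracking row swaps:

```python
from fractions import Fraction

def det_gauss(rows):
    """Determinant via Gaussian elimination in O(n^3) arithmetic operations."""
    a = [[Fraction(x) for x in row] for row in rows]
    n, sign = len(a), 1
    for col in range(n):
        # Find a row with a nonzero pivot in this column.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)  # a zero column: the determinant vanishes
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign  # each row exchange flips the sign
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]  # leaves the determinant unchanged
    result = Fraction(sign)
    for i in range(n):
        result *= a[i][i]  # product of the diagonal of the triangular form
    return result

print(det_gauss([[0, 3, 1], [1, 1, 2], [3, 2, 4]]))  # 5, matching the example
```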
Transpose
The determinant of the transpose of A equals the determinant of A:
$$\det(A^\mathrm{T}) = \det(A).$$
This can be proven by inspecting the Leibniz formula. This implies that in all the properties mentioned above, the word "column" can be replaced by "row" throughout. For example, viewing an n × n matrix as being composed of n rows, the determinant is an n-linear function.
Multiplicativity and matrix groups
The determinant is a multiplicative map, i.e., for square matrices A and B of equal size, the determinant of a matrix product equals the product of their determinants:
$$\det(AB) = \det(A) \det(B).$$
This key fact can be proven by observing that, for a fixed matrix , both sides of the equation are alternating and multilinear as a function depending on the columns of . Moreover, they both take the value when is the identity matrix. The above-mentioned unique characterization of alternating multilinear maps therefore shows this claim.
A matrix with entries in a field is invertible precisely if its determinant is nonzero. This follows from the multiplicativity of the determinant and the formula for the inverse involving the adjugate matrix mentioned below. In this event, the determinant of the inverse matrix is given by
$$\det(A^{-1}) = \frac{1}{\det(A)} = \det(A)^{-1}.$$
In particular, products and inverses of matrices with non-zero determinant (respectively, determinant one) still have this property. Thus, the set of such matrices (of fixed size n over a field K) forms a group known as the general linear group $\operatorname{GL}_n(K)$ (respectively, a subgroup called the special linear group $\operatorname{SL}_n(K)$). More generally, the word "special" indicates the subgroup of another matrix group of matrices of determinant one. Examples include the special orthogonal group (which if n is 2 or 3 consists of all rotation matrices), and the special unitary group.
Because the determinant respects multiplication and inverses, it is in fact a group homomorphism from $\operatorname{GL}_n(K)$ into the multiplicative group $K^\times$ of nonzero elements of K. This homomorphism is surjective and its kernel is $\operatorname{SL}_n(K)$ (the matrices with determinant one). Hence, by the first isomorphism theorem, this shows that $\operatorname{SL}_n(K)$ is a normal subgroup of $\operatorname{GL}_n(K)$, and that the quotient group $\operatorname{GL}_n(K) / \operatorname{SL}_n(K)$ is isomorphic to $K^\times$.
The Cauchy–Binet formula is a generalization of that product formula for rectangular matrices. This formula can also be recast as a multiplicative formula for compound matrices whose entries are the determinants of all quadratic submatrices of a given matrix.
Laplace expansion
Laplace expansion expresses the determinant of a matrix A recursively in terms of determinants of smaller matrices, known as its minors. The minor $M_{i,j}$ is defined to be the determinant of the $(n-1) \times (n-1)$ matrix that results from A by removing the i-th row and the j-th column. The expression $(-1)^{i+j} M_{i,j}$ is known as a cofactor. For every i, one has the equality
$$\det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} M_{i,j},$$
which is called the Laplace expansion along the i-th row. For example, the Laplace expansion along the first row (i = 1) gives the following formula:
$$\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a \begin{vmatrix} e & f \\ h & i \end{vmatrix} - b \begin{vmatrix} d & f \\ g & i \end{vmatrix} + c \begin{vmatrix} d & e \\ g & h \end{vmatrix}.$$
Unwinding the determinants of these 2 × 2 matrices gives back the Leibniz formula mentioned above. Similarly, the Laplace expansion along the j-th column is the equality
$$\det(A) = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} M_{i,j}.$$
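A direct recursive transcription of the expansion along the first row (an illustrative sketch; the helper names are arbitrary) makes the structure explicit:

```python
def det_laplace(a):
    """Determinant by Laplace expansion along the first row (O(n!) time)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # Minor M_{1,j}: delete the first row and the j-th column.
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_laplace(minor)
    return total

print(det_laplace([[0, 3, 1], [1, 1, 2], [3, 2, 4]]))  # 5
```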
Laplace expansion can be used iteratively for computing determinants, but this approach is inefficient for large matrices. However, it is useful for computing the determinants of highly symmetric matrices such as the Vandermonde matrix
$$\begin{vmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \end{vmatrix} = \prod_{1 \leq i < j \leq n} (x_j - x_i).$$
The n-term Laplace expansion along a row or column can be generalized to write an n × n determinant as a sum of $\binom{n}{k}$ terms, each the product of the determinant of a k × k submatrix and the determinant of the complementary (n − k) × (n − k) submatrix.
Adjugate matrix
The adjugate matrix $\operatorname{adj}(A)$ is the transpose of the matrix of the cofactors, that is,
$$\operatorname{adj}(A)_{ij} = (-1)^{i+j} M_{j,i}.$$
For every matrix, one has
$$A \operatorname{adj}(A) = \operatorname{adj}(A)\, A = \det(A)\, I.$$
Thus the adjugate matrix can be used for expressing the inverse of a nonsingular matrix:
$$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A).$$
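For instance (a routine worked example, not specific to this article), for a 2 × 2 matrix this reads:
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad \operatorname{adj}(A) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \qquad A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} \quad (ad - bc \neq 0).$$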
Block matrices
The formula for the determinant of a 2 × 2 matrix above continues to hold, under appropriate further assumptions, for a block matrix, i.e., a matrix composed of four submatrices A, B, C, D of dimension m × m, m × n, n × m, and n × n, respectively. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is
$$\det\begin{pmatrix} A & 0 \\ C & D \end{pmatrix} = \det(A) \det(D) = \det\begin{pmatrix} A & B \\ 0 & D \end{pmatrix}.$$
If A is invertible, then it follows with results from the section on multiplicativity that
$$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A) \det\!\left(D - C A^{-1} B\right),$$
which simplifies to $\det(A)\,(D - C A^{-1} B)$ when D is a 1 × 1 matrix.
A similar result holds when D is invertible, namely
$$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(D) \det\!\left(A - B D^{-1} C\right).$$
Both results can be combined to derive Sylvester's determinant theorem, which is also stated below.
If the blocks are square matrices of the same size, further formulas hold. For example, if C and D commute (i.e., CD = DC), then
$$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(AD - BC).$$
This formula has been generalized to matrices composed of more than 2 × 2 blocks, again under appropriate commutativity conditions among the individual blocks.
For $A = D$ and $B = C$, the following formula holds (even if A and B do not commute):
$$\det\begin{pmatrix} A & B \\ B & A \end{pmatrix} = \det(A - B) \det(A + B).$$
Sylvester's determinant theorem
Sylvester's determinant theorem states that for A, an m × n matrix, and B, an n × m matrix (so that A and B have dimensions allowing them to be multiplied in either order forming a square matrix):
$$\det(I_m + AB) = \det(I_n + BA),$$
where Im and In are the m × m and n × n identity matrices, respectively.
From this general result several consequences follow; for example, for a column vector c and row vector r, each with m components, it allows quick calculation of the determinant of a matrix that differs from the identity matrix by a rank-1 matrix: $\det(I_m + cr) = 1 + rc$.
A generalization is the matrix determinant lemma, $\det(Z + AWB) = \det(Z) \det(W) \det\!\left(W^{-1} + B Z^{-1} A\right)$, where Z is an invertible m × m matrix and W is an invertible n × n matrix.
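A quick numerical spot-check of the theorem (an illustrative sketch using NumPy; the sizes and the random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

lhs = np.linalg.det(np.eye(m) + A @ B)  # det(I_m + AB)
rhs = np.linalg.det(np.eye(n) + B @ A)  # det(I_n + BA)
print(np.isclose(lhs, rhs))  # True
```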
Sum
The determinant of the sum $A + B$ of two square matrices of the same size is not in general expressible in terms of the determinants of A and of B.
However, for positive semidefinite matrices A, B, and C of equal size,
$$\det(A + B + C) + \det(C) \geq \det(A + C) + \det(B + C),$$
with the corollary
$$\det(A + B) \geq \det(A) + \det(B).$$
Brunn–Minkowski theorem implies that the n-th root of the determinant is a concave function, when restricted to Hermitian positive-definite matrices. Therefore, if A and B are Hermitian positive-definite matrices, one has
$$\sqrt[n]{\det(A + B)} \geq \sqrt[n]{\det(A)} + \sqrt[n]{\det(B)},$$
since the n-th root of the determinant is a homogeneous function.
Sum identity for 2×2 matrices
For the special case of 2 × 2 matrices with complex entries, the determinant of the sum can be written in terms of determinants and traces in the following identity:
$$\det(A + B) = \det(A) + \det(B) + \operatorname{tr}(A)\operatorname{tr}(B) - \operatorname{tr}(AB).$$
Properties of the determinant in relation to other notions
Eigenvalues and characteristic polynomial
The determinant is closely related to two other central concepts in linear algebra, the eigenvalues and the characteristic polynomial of a matrix. Let A be an n × n matrix with complex entries. Then, by the Fundamental Theorem of Algebra, A must have exactly n eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. (Here it is understood that an eigenvalue with algebraic multiplicity μ occurs μ times in this list.) Then, it turns out the determinant of A is equal to the product of these eigenvalues,
$$\det(A) = \lambda_1 \lambda_2 \cdots \lambda_n.$$
The product of all non-zero eigenvalues is referred to as pseudo-determinant.
From this, one immediately sees that the determinant of a matrix A is zero if and only if 0 is an eigenvalue of A. In other words, A is invertible if and only if 0 is not an eigenvalue of A.
The characteristic polynomial is defined as
$$\chi_A(t) = \det(tI - A).$$
Here, t is the indeterminate of the polynomial and I is the identity matrix of the same size as A. By means of this polynomial, determinants can be used to find the eigenvalues of the matrix A: they are precisely the roots of this polynomial, i.e., those complex numbers λ such that
$$\chi_A(\lambda) = 0.$$
A Hermitian matrix is positive definite if all its eigenvalues are positive. Sylvester's criterion asserts that this is equivalent to the determinants of the submatrices
$$A_k = \begin{pmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & \ddots & \vdots \\ a_{k1} & \cdots & a_{kk} \end{pmatrix}$$
being positive, for all k between 1 and n.
Trace
The trace tr(A) is by definition the sum of the diagonal entries of A and also equals the sum of the eigenvalues. Thus, for complex matrices A,
$$\det(\exp(A)) = \exp(\operatorname{tr}(A))$$
or, for real matrices A,
$$\operatorname{tr}(A) = \log(\det(\exp(A))).$$
Here exp(A) denotes the matrix exponential of A, because every eigenvalue λ of A corresponds to the eigenvalue exp(λ) of exp(A). In particular, given any logarithm of A, that is, any matrix L satisfying
$$\exp(L) = A,$$
the determinant of A is given by
$$\det(A) = \exp(\operatorname{tr}(L)).$$
For example, for n = 2, n = 3, and n = 4, respectively,
$$\det(A) = \tfrac{1}{2}\left((\operatorname{tr}(A))^2 - \operatorname{tr}(A^2)\right),$$
$$\det(A) = \tfrac{1}{6}\left((\operatorname{tr}(A))^3 - 3\operatorname{tr}(A)\operatorname{tr}(A^2) + 2\operatorname{tr}(A^3)\right),$$
$$\det(A) = \tfrac{1}{24}\left((\operatorname{tr}(A))^4 - 6(\operatorname{tr}(A))^2\operatorname{tr}(A^2) + 3(\operatorname{tr}(A^2))^2 + 8\operatorname{tr}(A)\operatorname{tr}(A^3) - 6\operatorname{tr}(A^4)\right),$$
cf. Cayley–Hamilton theorem. Such expressions are deducible from combinatorial arguments, Newton's identities, or the Faddeev–LeVerrier algorithm. That is, for generic n, $\det(A) = (-1)^n c_0$, the signed constant term of the characteristic polynomial, determined recursively from
$$c_n = 1; \qquad c_{n-m} = -\frac{1}{m} \sum_{k=1}^{m} c_{n-m+k} \operatorname{tr}\!\left(A^k\right) \qquad (1 \leq m \leq n).$$
In the general case, this may also be obtained from
$$\det(A) = \sum_{k_1, k_2, \ldots, k_n \geq 0} \prod_{l=1}^{n} \frac{(-1)^{k_l + 1}}{l^{k_l}\, k_l!} \operatorname{tr}\!\left(A^l\right)^{k_l},$$
where the sum is taken over the set of all integers $k_l \geq 0$ satisfying the equation
$$\sum_{l=1}^{n} l\, k_l = n.$$
The formula can be expressed in terms of the complete exponential Bell polynomial of n arguments $s_l = -(l-1)!\operatorname{tr}(A^l)$ as
$$\det(A) = \frac{(-1)^n}{n!} B_n(s_1, s_2, \ldots, s_n).$$
This formula can also be used to find the determinant of a matrix $A^{I}_{J}$ with multidimensional indices $I = (i_1, i_2, \ldots, i_r)$ and $J = (j_1, j_2, \ldots, j_r)$. The product and trace of such matrices are defined in a natural way as
$$(AB)^{I}_{J} = \sum_{K} A^{I}_{K} B^{K}_{J}, \qquad \operatorname{tr}(A) = \sum_{I} A^{I}_{I}.$$
An important arbitrary-dimension identity can be obtained from the Mercator series expansion of the logarithm when the expansion converges. If every eigenvalue of A is less than 1 in absolute value,
$$\det(I + A) = \sum_{k=0}^{\infty} \frac{1}{k!} \left( -\sum_{j=1}^{\infty} \frac{(-1)^j}{j} \operatorname{tr}\!\left(A^j\right) \right)^{\!k},$$
where I is the identity matrix. More generally, if
$$\sum_{k=0}^{\infty} \frac{1}{k!} \left( -\sum_{j=1}^{\infty} \frac{(-1)^j s^j}{j} \operatorname{tr}\!\left(A^j\right) \right)^{\!k}$$
is expanded as a formal power series in s, then all coefficients of $s^m$ for m > n are zero and the remaining polynomial is $\det(I + sA)$.
Upper and lower bounds
For a positive definite matrix X, the trace operator gives the following tight lower and upper bounds on the log determinant
$$\operatorname{tr}\!\left(I - X^{-1}\right) \leq \log \det(X) \leq \operatorname{tr}(X - I),$$
with equality if and only if X = I. This relationship can be derived via the formula for the Kullback-Leibler divergence between two multivariate normal distributions.
Also, for a positive definite n × n matrix A,
$$\frac{n}{\operatorname{tr}\!\left(A^{-1}\right)} \leq \det(A)^{1/n} \leq \frac{\operatorname{tr}(A)}{n} \leq \sqrt{\frac{\operatorname{tr}\!\left(A^2\right)}{n}}.$$
These inequalities can be proved by expressing the traces and the determinant in terms of the eigenvalues. As such, they represent the well-known fact that the harmonic mean is less than the geometric mean, which is less than the arithmetic mean, which is, in turn, less than the root mean square.
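These inequalities are easy to test numerically (an illustrative sketch; the symmetric positive-definite test matrix is generated ad hoc):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # symmetric positive definite by construction

harmonic = n / np.trace(np.linalg.inv(A))       # harmonic mean of eigenvalues
geometric = np.linalg.det(A) ** (1 / n)         # geometric mean, det(A)^(1/n)
arithmetic = np.trace(A) / n                    # arithmetic mean
rms = np.sqrt(np.trace(A @ A) / n)              # root mean square
print(harmonic <= geometric <= arithmetic <= rms)  # True
```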
Derivative
The Leibniz formula shows that the determinant of real (or analogously for complex) square matrices is a polynomial function from $\mathbf{R}^{n \times n}$ to $\mathbf{R}$. In particular, it is everywhere differentiable. Its derivative can be expressed using Jacobi's formula:
$$\frac{d \det(A)}{d\alpha} = \operatorname{tr}\!\left( \operatorname{adj}(A)\, \frac{dA}{d\alpha} \right),$$
where $\operatorname{adj}(A)$ denotes the adjugate of A. In particular, if A is invertible, we have
$$\frac{d \det(A)}{d\alpha} = \det(A)\, \operatorname{tr}\!\left( A^{-1} \frac{dA}{d\alpha} \right).$$
Expressed in terms of the entries of A, these are
$$\frac{\partial \det(A)}{\partial a_{ij}} = \operatorname{adj}(A)_{ji} = \det(A)\, \left(A^{-1}\right)_{ji}.$$
Yet another equivalent formulation is
$$\det(A + \epsilon X) - \det(A) = \operatorname{tr}(\operatorname{adj}(A)\, X)\, \epsilon + O(\epsilon^2),$$
using big O notation. The special case where $A = I$, the identity matrix, yields
$$\det(I + \epsilon X) = 1 + \operatorname{tr}(X)\, \epsilon + O(\epsilon^2).$$
This identity is used in describing Lie algebras associated to certain matrix Lie groups. For example, the special linear group is defined by the equation $\det(A) = 1$. The above formula shows that its Lie algebra is the special linear Lie algebra $\mathfrak{sl}_n$ consisting of those matrices having trace zero.
Writing a 3 × 3 matrix as $A = \begin{pmatrix} a & b & c \end{pmatrix}$, where a, b, c are column vectors of length 3, then the gradient over one of the three vectors may be written as the cross product of the other two:
$$\nabla_a \det(A) = b \times c, \qquad \nabla_b \det(A) = c \times a, \qquad \nabla_c \det(A) = a \times b.$$
History
Historically, determinants were used long before matrices: A determinant was originally defined as a property of a system of linear equations.
The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero).
In this sense, determinants were first used in the Chinese mathematics textbook The Nine Chapters on the Mathematical Art (九章算術), compiled by Chinese scholars around the 3rd century BCE. In Europe, solutions of linear systems of two equations were expressed by Cardano in 1545 by a determinant-like entity.
Determinants proper originated separately from the work of Seki Takakazu in 1683 in Japan and parallelly of Leibniz in 1693. Cramer (1750) stated, without proof, Cramer's rule. Both Cramer and also Bézout (1779) were led to determinants by the question of plane curves passing through a given set of points.
Vandermonde (1771) first recognized determinants as independent functions. Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order and applied it to questions of elimination theory; he proved many special cases of general identities.
Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word "determinant" (Laplace had used "resultant"), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.
The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy–Binet formula.) In this he used the word "determinant" in its present sense, summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's. With him begins the theory in its generality.
Jacobi (1841) used the functional determinant which Sylvester later called the Jacobian. In his memoirs in Crelle's Journal for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work. Cayley (1841) introduced the modern notation for the determinant using vertical bars.
The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.
Applications
Cramer's rule
Determinants can be used to describe the solutions of a linear system of equations, written in matrix form as $Ax = b$. This equation has a unique solution x if and only if $\det(A)$ is nonzero. In this case, the solution is given by Cramer's rule:
$$x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, 2, 3, \ldots, n,$$
where $A_i$ is the matrix formed by replacing the i-th column of A by the column vector b. This follows immediately by column expansion of the determinant, i.e.
$$\det(A_i) = \det(a_1, \ldots, b, \ldots, a_n) = \sum_{j=1}^{n} x_j \det(a_1, \ldots, a_{i-1}, a_j, a_{i+1}, \ldots, a_n) = x_i \det(A),$$
where the vectors $a_j$ are the columns of A. The rule is also implied by the identity
$$A \operatorname{adj}(A) = \operatorname{adj}(A)\, A = \det(A)\, I_n.$$
Cramer's rule can be implemented in $O(n^3)$ time, which is comparable to more common methods of solving systems of linear equations, such as LU, QR, or singular value decomposition.
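A direct transcription of the rule (an illustrative sketch using NumPy for the determinants; in practice, factorization-based solvers are preferred numerically):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule; requires det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("matrix is singular")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b  # replace the i-th column of A by b
        x[i] = np.linalg.det(Ai) / d
    return x

print(cramer_solve([[2, 1], [5, 3]], [4, 11]))  # [1. 2.]
```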
Linear independence
Determinants can be used to characterize linearly dependent vectors: $\det(A)$ is zero if and only if the column vectors (or, equivalently, the row vectors) of the matrix A are linearly dependent. For example, given two linearly independent vectors $v_1, v_2 \in \mathbf{R}^3$, a third vector $v_3$ lies in the plane spanned by the former two vectors exactly if the determinant of the 3 × 3 matrix consisting of the three vectors is zero. The same idea is also used in the theory of differential equations: given n functions $f_1(x), \ldots, f_n(x)$ (supposed to be n − 1 times differentiable), the Wronskian is defined to be
$$W(f_1, \ldots, f_n)(x) = \begin{vmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{vmatrix}.$$
It is non-zero (for some x) in a specified interval if and only if the given functions and all their derivatives up to order n − 1 are linearly independent. If it can be shown that the Wronskian is zero everywhere on an interval then, in the case of analytic functions, this implies the given functions are linearly dependent. See the Wronskian and linear independence. Another such use of the determinant is the resultant, which gives a criterion when two polynomials have a common root.
Orientation of a basis
The determinant can be thought of as assigning a number to every sequence of n vectors in Rn, by using the square matrix whose columns are the given vectors. The determinant will be nonzero if and only if the sequence of vectors is a basis for Rn. In that case, the sign of the determinant determines whether the orientation of the basis is consistent with or opposite to the orientation of the standard basis. In the case of an orthogonal basis, the magnitude of the determinant is equal to the product of the lengths of the basis vectors. For instance, an orthogonal matrix with entries in Rn represents an orthonormal basis in Euclidean space, and hence has determinant of ±1 (since all the vectors have length 1). The determinant is +1 if and only if the basis has the same orientation. It is −1 if and only if the basis has the opposite orientation.
More generally, if the determinant of A is positive, A represents an orientation-preserving linear transformation (if A is an orthogonal 2 × 2 or 3 × 3 matrix, this is a rotation), while if it is negative, A switches the orientation of the basis.
Volume and Jacobian determinant
As pointed out above, the absolute value of the determinant of a matrix formed by n real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if $f : \mathbf{R}^n \to \mathbf{R}^n$ is the linear map given by multiplication with a matrix A, and $S \subset \mathbf{R}^n$ is any measurable subset, then the volume of f(S) is given by $|\det(A)|$ times the volume of S. More generally, if the linear map $f : \mathbf{R}^n \to \mathbf{R}^m$ is represented by the m × n matrix A, then the n-dimensional volume of f(S) is given by:
$$\operatorname{volume}(f(S)) = \sqrt{\det\!\left(A^\mathrm{T} A\right)} \cdot \operatorname{volume}(S).$$
By calculating the volume of the tetrahedron bounded by four points, determinants can be used to identify skew lines. The volume of any tetrahedron, given its vertices a, b, c, and d, is
$$\tfrac{1}{6} \cdot |\det(a - b,\, b - c,\, c - d)|,$$
or any other combination of pairs of vertices that form a spanning tree over the vertices.
For a general differentiable function, much of the above carries over by considering the Jacobian matrix of f. For
$$f : \mathbf{R}^n \to \mathbf{R}^n,$$
the Jacobian matrix is the n × n matrix whose entries are given by the partial derivatives
$$D(f) = \left( \frac{\partial f_i}{\partial x_j} \right)_{1 \leq i, j \leq n}.$$
Its determinant, the Jacobian determinant, appears in the higher-dimensional version of integration by substitution: for suitable functions f and an open subset U of Rn (the domain of f), the integral over f(U) of some other function $\phi$ is given by
$$\int_{f(U)} \phi(\mathbf{v})\, d\mathbf{v} = \int_{U} \phi(f(\mathbf{u}))\, \left|\det(D f)(\mathbf{u})\right|\, d\mathbf{u}.$$
The Jacobian also occurs in the inverse function theorem.
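As a numerical sanity check of the substitution formula (a minimal sketch; the polar-coordinate map and grid resolution are illustrative choices), the Jacobian determinant of $f(r, \theta) = (r\cos\theta,\, r\sin\theta)$ is $r$, and integrating it over the parameter rectangle recovers the area of the unit disk:

```python
import numpy as np

# The polar map f(r, t) = (r cos t, r sin t) has Jacobian determinant r.
# Midpoint Riemann sum of |det Df| over the rectangle [0,1] x [0, 2*pi]:
n = 1000
dr, dt = 1.0 / n, 2.0 * np.pi / n
r_mid = (np.arange(n) + 0.5) * dr                  # midpoints in r
t_mid = (np.arange(n) + 0.5) * dt                  # midpoints in theta
R = r_mid[:, None] * np.ones_like(t_mid)[None, :]  # |det Df| on the grid
area = R.sum() * dr * dt
print(area, np.pi)                                 # ~3.14159 vs 3.14159...
```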
When applied to cartography, the determinant can be used to measure the rate of expansion of a map near the poles.
Abstract algebraic aspects
Determinant of an endomorphism
The above identities concerning the determinant of products and inverses of matrices imply that similar matrices have the same determinant: two matrices $A$ and $B$ are similar if there exists an invertible matrix $X$ such that $A = X^{-1} B X$. Indeed, repeatedly applying the above identities yields
$$\det(A) = \det(X)^{-1} \det(B) \det(X) = \det(B).$$
The determinant is therefore also called a similarity invariant. The determinant of a linear transformation
$$T : V \to V$$
for some finite-dimensional vector space $V$ is defined to be the determinant of the matrix describing it, with respect to an arbitrary choice of basis in $V$. By the similarity invariance, this determinant is independent of the choice of the basis for $V$ and therefore only depends on the endomorphism $T$.
Square matrices over commutative rings
The above definition of the determinant using the Leibniz rule holds more generally when the entries of the matrix are elements of a commutative ring $R$, such as the integers $\mathbf{Z}$, as opposed to the field of real or complex numbers. Moreover, the characterization of the determinant as the unique alternating multilinear map that satisfies $\det(I) = 1$ still holds, as do all the properties that result from that characterization.
A matrix $A$ is invertible (in the sense that there is an inverse matrix whose entries are in $R$) if and only if its determinant is an invertible element in $R$. For $R = \mathbf{Z}$, this means that the determinant is +1 or −1. Such a matrix is called unimodular.
The determinant being multiplicative, it defines a group homomorphism
$$\det : \operatorname{GL}_n(R) \to R^\times$$
between the general linear group (the group of invertible $n \times n$ matrices with entries in $R$) and the multiplicative group of units in $R$.
Given a ring homomorphism $f : R \to S$, there is an induced map $\operatorname{GL}_n(R) \to \operatorname{GL}_n(S)$ given by replacing all entries in $R$ by their images under $f$. The determinant respects these maps, i.e., the identity
$$f(\det((a_{i,j}))) = \det((f(a_{i,j})))$$
holds. In other words, applying $f$ and then taking the determinant gives the same result as taking the determinant and then applying $f$.
For example, the determinant of the complex conjugate of a complex matrix (which is also the determinant of its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction modulo $m$ of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo $m$ (the latter determinant being computed using modular arithmetic). In the language of category theory, the determinant is a natural transformation between the two functors $\operatorname{GL}_n$ and $(-)^\times$. Adding yet another layer of abstraction, this is captured by saying that the determinant is a morphism of algebraic groups, from the general linear group to the multiplicative group,
$$\det : \operatorname{GL}_n \to \mathbb{G}_m.$$
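The integer-matrix case is easy to check numerically; a minimal SymPy sketch (the matrix and modulus are arbitrary example values):

```python
from sympy import Matrix

A = Matrix([[3, 1, 4], [1, 5, 9], [2, 6, 5]])   # arbitrary integer matrix
m = 7                                           # arbitrary modulus

print(A.det() % m)                              # determinant first, then reduce mod m
print(A.applyfunc(lambda x: x % m).det() % m)   # reduce entries first, then determinant
# Both print the same residue: the determinant commutes with reduction mod m.
```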
Exterior algebra
The determinant of a linear transformation $T : V \to V$ of an $n$-dimensional vector space $V$ or, more generally, a free module of (finite) rank $n$ over a commutative ring $R$ can be formulated in a coordinate-free manner by considering the $n$-th exterior power $\Lambda^n V$ of $V$. The map $T$ induces a linear map
$$\Lambda^n T : \Lambda^n V \to \Lambda^n V, \qquad v_1 \wedge v_2 \wedge \cdots \wedge v_n \mapsto T(v_1) \wedge T(v_2) \wedge \cdots \wedge T(v_n).$$
As $\Lambda^n V$ is one-dimensional, the map $\Lambda^n T$ is given by multiplying with some scalar, i.e., an element in $R$. Some authors use this fact to define the determinant to be the element in $R$ satisfying the following identity (for all $v_i \in V$):
$$(\Lambda^n T)(v_1 \wedge \cdots \wedge v_n) = \det(T) \cdot v_1 \wedge \cdots \wedge v_n.$$
This definition agrees with the more concrete coordinate-dependent definition, which can be shown using the uniqueness of a multilinear alternating form on $n$-tuples of vectors in $R^n$.
For this reason, the highest non-zero exterior power $\Lambda^n V$ (as opposed to the determinant associated to an endomorphism) is sometimes also called the determinant of $V$, and similarly for more involved objects such as vector bundles or chain complexes of vector spaces. Minors of a matrix can also be cast in this setting, by considering lower alternating forms $\Lambda^k V$ with $k < n$.
Generalizations and related notions
Determinants as treated above admit several variants: the permanent of a matrix is defined as the determinant, except that the factors $\operatorname{sgn}(\sigma)$ occurring in Leibniz's rule are omitted. The immanant generalizes both by introducing a character of the symmetric group $S_n$ in Leibniz's rule.
Determinants for finite-dimensional algebras
For any associative algebra $A$ that is finite-dimensional as a vector space over a field $F$, there is a determinant map
$$\det : A \to F.$$
This definition proceeds by establishing the characteristic polynomial independently of the determinant, and defining the determinant as the lowest order term of this polynomial. This general definition recovers the determinant for the matrix algebra $A = \operatorname{Mat}_{n \times n}(F)$, but also includes several further cases: the determinant of a quaternion,
$$\det(a + ib + jc + kd) = a^2 + b^2 + c^2 + d^2,$$
the norm of a field extension, the Pfaffian of a skew-symmetric matrix, and the reduced norm of a central simple algebra all arise as special cases of this construction.
Infinite matrices
For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly. For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional situations, which however only work for particular kinds of operators.
The Fredholm determinant defines the determinant for operators known as trace class operators by an appropriate generalization of the formula
$$\det(I + A) = \exp\left(\operatorname{tr}\left(\log(I + A)\right)\right).$$
Another infinite-dimensional notion of determinant is the functional determinant.
Operators in von Neumann algebras
For operators in a finite factor, one may define a positive real-valued determinant called the Fuglede–Kadison determinant using the canonical trace. In fact, corresponding to every tracial state on a von Neumann algebra there is a notion of Fuglede–Kadison determinant.
Related notions for non-commutative rings
For matrices over non-commutative rings, multilinearity and the alternating property are incompatible for $n \ge 2$, so there is no good definition of the determinant in this setting.
For square matrices with entries in a non-commutative ring, there are various difficulties in defining determinants analogously to that for commutative rings. A meaning can be given to the Leibniz formula provided that the order for the product is specified, and similarly for other definitions of the determinant, but non-commutativity then leads to the loss of many fundamental properties of the determinant, such as the multiplicative property or that the determinant is unchanged under transposition of the matrix. Over non-commutative rings, there is no reasonable notion of a multilinear form (existence of a nonzero bilinear form with a regular element of $R$ as value on some pair of arguments implies that $R$ is commutative). Nevertheless, various notions of non-commutative determinant have been formulated that preserve some of the properties of determinants, notably quasideterminants and the Dieudonné determinant. For some classes of matrices with non-commutative elements, one can define the determinant and prove linear algebra theorems that are very similar to their commutative analogs. Examples include the q-determinant on quantum groups, the Capelli determinant on Capelli matrices, and the Berezinian on supermatrices (i.e., matrices whose entries are elements of $\mathbb{Z}_2$-graded rings). Manin matrices form the class closest to matrices with commutative elements.
Calculation
Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in numerical linear algebra, where for applications such as checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques. Computational geometry, however, does frequently use calculations related to determinants.
While the determinant can be computed directly using the Leibniz rule, this approach is extremely inefficient for large matrices, since that formula requires calculating $n!$ ($n$ factorial) products for an $n \times n$ matrix. Thus, the number of required operations grows very quickly: it is of order $n!$. The Laplace expansion is similarly inefficient. Therefore, more involved techniques have been developed for calculating determinants.
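For reference, a direct (deliberately naive) Python transcription of the Leibniz formula, practical only for very small matrices:

```python
from itertools import permutations
from math import prod

def det_leibniz(A):
    """Determinant via the Leibniz formula: sums n! signed products."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        # Sign of the permutation p from its inversion count.
        inv = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        total += (-1) ** inv * prod(A[i][p[i]] for i in range(n))
    return total

print(det_leibniz([[2, 1], [1, 3]]))   # 5
```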
Gaussian elimination
Gaussian elimination consists of left multiplying a matrix by elementary matrices to obtain a matrix in row echelon form. One can restrict the computation to elementary matrices of determinant $1$; in this case, the determinant of the resulting row echelon form equals the determinant of the initial matrix. As a row echelon form is a triangular matrix, its determinant is the product of the entries of its diagonal.
So, the determinant can be computed almost for free from the result of a Gaussian elimination.
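A minimal pure-Python sketch of this procedure (partial pivoting is added for numerical stability; row swaps flip the sign, while adding multiples of rows leaves the determinant unchanged):

```python
def det_gauss(A):
    """Determinant by Gaussian elimination, O(n^3)."""
    A = [row[:] for row in A]               # work on a copy
    n, sign = len(A), 1
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))   # pivot row
        if A[p][k] == 0:
            return 0.0                      # singular matrix
        if p != k:
            A[k], A[p] = A[p], A[k]         # row swap: determinant changes sign
            sign = -sign
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]      # row addition: determinant unchanged
    det = float(sign)
    for k in range(n):
        det *= A[k][k]                      # product of the diagonal entries
    return det

print(det_gauss([[2.0, 1.0], [1.0, 3.0]]))  # 5.0
```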
Decomposition methods
Some methods compute $\det(A)$ by writing the matrix as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the LU decomposition, the QR decomposition or the Cholesky decomposition (for positive definite matrices). These methods are of order $O(n^3)$, which is a significant improvement over $O(n!)$.
For example, LU decomposition expresses $A$ as a product
$$A = PLU$$
of a permutation matrix $P$ (which has exactly a single $1$ in each column, and otherwise zeros), a lower triangular matrix $L$ and an upper triangular matrix $U$.
The determinants of the two triangular matrices $L$ and $U$ can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of $P$ is just the sign $\varepsilon$ of the corresponding permutation (which is $+1$ for an even number of permutations and is $-1$ for an odd number of permutations). Once such an LU decomposition is known for $A$, its determinant is readily computed as
$$\det(A) = \varepsilon \det(L) \cdot \det(U).$$
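In practice this is a few lines with SciPy; a minimal sketch (the matrix is an arbitrary example value):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0], [1.0, 3.0]])
P, L, U = lu(A)                           # factorization A = P @ L @ U

det_P = np.linalg.det(P)                  # +1 or -1: the permutation's sign
det_A = det_P * np.prod(np.diag(L)) * np.prod(np.diag(U))
print(det_A, np.linalg.det(A))            # both ~5.0
```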
Further methods
The order $n^3$ reached by decomposition methods has been improved by different methods. If two matrices of order $n$ can be multiplied in time $M(n)$, where $M(n) \ge n^a$ for some $a > 2$, then there is an algorithm computing the determinant in time $O(M(n))$. This means, for example, that an $O(n^{2.376})$ algorithm for computing the determinant exists based on the Coppersmith–Winograd algorithm. This exponent has been further lowered, as of 2016, to 2.373.
In addition to the complexity of the algorithm, further criteria can be used to compare algorithms.
Especially for applications concerning matrices over rings, algorithms that compute the determinant without any divisions exist. (By contrast, Gauss elimination requires divisions.) One such algorithm, having complexity $O(n^4)$, is based on the following idea: one replaces permutations (as in the Leibniz rule) by so-called closed ordered walks, in which several items can be repeated. The resulting sum has more terms than in the Leibniz rule, but in the process several of these products can be reused, making it more efficient than naively computing with the Leibniz rule. Algorithms can also be assessed according to their bit complexity, i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation. For example, the Gaussian elimination (or LU decomposition) method is of order $O(n^3)$, but the bit length of intermediate values can become exponentially long. By comparison, the Bareiss algorithm, an exact-division method (so it does use division, but only in cases where these divisions can be performed without remainder), is of the same order, but the bit complexity is roughly the bit size of the original entries in the matrix times $n$.
If the determinant of $A$ and the inverse of $A$ have already been computed, the matrix determinant lemma allows rapid calculation of the determinant of $A + uv^\mathsf{T}$, where $u$ and $v$ are column vectors.
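The lemma states $\det(A + uv^\mathsf{T}) = (1 + v^\mathsf{T} A^{-1} u) \det(A)$; a quick NumPy check (all values are arbitrary examples):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])    # arbitrary invertible matrix
u = np.array([[1.0], [0.5]])              # arbitrary column vectors
v = np.array([[2.0], [1.0]])

det_A = np.linalg.det(A)
A_inv = np.linalg.inv(A)

lemma = (1.0 + (v.T @ A_inv @ u).item()) * det_A   # matrix determinant lemma
direct = np.linalg.det(A + u @ v.T)                # recompute from scratch
print(lemma, direct)                               # equal up to rounding
```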
Charles Dodgson (i.e. Lewis Carroll of Alice's Adventures in Wonderland fame) invented a method for computing determinants called Dodgson condensation. Unfortunately this interesting method does not always work in its original form.
| Mathematics | Algebra | null |
8471 | https://en.wikipedia.org/wiki/Delphinus | Delphinus | Delphinus is a small constellation in the Northern Celestial Hemisphere, close to the celestial equator. Its name is the Latin version for the Greek word for dolphin (). It is one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations recognized by the International Astronomical Union. It is one of the smaller constellations, ranked 69th in size. Delphinus' five brightest stars form a distinctive asterism symbolizing a dolphin with four stars representing the body and one the tail. It is bordered (clockwise from north) by Vulpecula, Sagitta, Aquila, Aquarius, Equuleus and Pegasus.
Delphinus is a faint constellation with only two stars brighter than an apparent magnitude of 4, Beta Delphini (Rotanev) at magnitude 3.6 and Alpha Delphini (Sualocin) at magnitude 3.8.
Mythology
Delphinus is associated with two stories from Greek mythology.
According to the first myth, the Greek god Poseidon wanted to marry Amphitrite, a beautiful nereid. However, wanting to protect her virginity, she fled to the Atlas mountains. Her suitor then sent out several searchers, among them a certain Delphinus. Delphinus accidentally stumbled upon her and was able to persuade Amphitrite to accept Poseidon's wooing. Out of gratitude, the god placed the image of a dolphin among the stars.
The second story tells of the Greek poet Arion of Lesbos (7th century BC), who was saved by a dolphin. He was a court musician at the palace of Periander, ruler of Corinth. Arion had amassed a fortune during his travels to Sicily and Italy. On his way home from Tarentum his wealth caused the crew of his ship to conspire against him. Threatened with death, Arion asked to be granted a last wish which the crew granted: he wanted to sing a dirge. This he did, and while doing so, flung himself into the sea. There, he was rescued by a dolphin which had been charmed by Arion's music. The dolphin carried Arion to the coast of Greece and left.
In non-Western astronomy
In Chinese astronomy, the stars of Delphinus are located within the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ).
In Polynesia, two cultures recognized Delphinus as a constellation. In Pukapuka, it was called Te Toloa and in the Tuamotus, it was called Te Uru-o-tiki.
In Hindu astrology, Delphinus corresponds to the nakshatra, or lunar mansion, of Dhanishta.
Characteristics
Delphinus is bordered by Vulpecula to the north, Sagitta to the northwest, Aquila to the west and southwest, Aquarius to the southeast, Equuleus to the east and Pegasus to the east. Covering 188.5 square degrees, corresponding to 0.457% of the sky, it ranks 69th of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the IAU in 1922, is "Del". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 14 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . The whole constellation is visible to observers north of latitude 69°S.
Features
Stars
Delphinus has two stars above fourth (apparent) magnitude; its brightest star is of magnitude 3.6. The main asterism in Delphinus is Job's Coffin, nearly a 45°-apex lozenge or diamond of the four brightest stars: Alpha, Beta, Gamma, and Delta Delphini. Delphinus is in a rich Milky Way star field. Alpha and Beta Delphini have 19th century names Sualocin and Rotanev, read backwards: Nicolaus Venator, the Latinized name of a Palermo Observatory director, Niccolò Cacciatore (d. 1841).
Alpha Delphini is a blue-white hued main sequence star of magnitude 3.8, 241 light-years from Earth. It is a spectroscopic binary. It is officially named Sualocin. The star has an absolute magnitude of -0.4.
Beta Delphini is officially called Rotanev. It was found to be a binary star in 1873. The gap between its close binary stars is visible from large amateur telescopes. To the unaided eye, it appears to be a white star of magnitude 3.6. It has a period of 27 years and is 97 light-years from Earth.
Gamma Delphini is a celebrated binary star among amateur astronomers. The primary is orange-gold of magnitude 4.3; the secondary is a light yellow star of magnitude 5.1. The pair form a true binary with an estimated orbital period of over 3,000 years. 125 light-years away, the two components are visible in a small amateur telescope. The secondary, also described as green, is 10 arcseconds from the primary. Struve 2725, called the "Ghost Double", is a pair that appears similar but dimmer. Its components of magnitudes 7.6 and 8.4 are separated by 6 arcseconds and are 15 arcminutes from Gamma Delphini itself. An unconfirmed exoplanet with a minimum mass of 0.7 Jupiter masses may orbit one of the stars.
Delta Delphini is an A-type star of magnitude 4.43. It is a spectroscopic binary, and both stars are Delta Scuti variables.
Epsilon Delphini, Deneb Dulfim (lit. "tail [of the] Dolphin"), or Aldulfin, is a star of stellar class B6 III. Its magnitude is variable at around 4.03.
Zeta Delphini, an A3Va main-sequence star of magnitude 4.6, was in 2014 discovered to have a brown dwarf orbiting around it. Zeta Delphini B has a mass of 50±15 .
Rho Aquilae, at magnitude 4.94, is about 150 light-years away. Due to its proper motion, it has been within the constellation's boundaries since 1992. It is an A-type main sequence star with a lower metallicity than the Sun.
HR Delphini was a nova that brightened to magnitude 3.5 in December 1967. It took an unusually long time for the nova to reach peak brightness, which indicates that it barely satisfied the conditions for a thermonuclear runaway. Another nova, V339 Delphini, was detected in 2013; it peaked at magnitude 4.3 and was the first nova observed to produce lithium.
Musica, also known by its Flamsteed designation 18 Delphini, is one of the five stars with known planets located in Delphinus. It has a spectral type of G6 III. Arion, the planet, is a very dense and massive planet with a mass at least 10.3 times greater than Jupiter. Arion was part of the first NameExoWorlds contest where the public got the opportunity to suggest names for exoplanets and their host stars.
Exoplanets
In 2024 the planet TOI-6883 b was discovered in the constellation Delphinus. It has a 16.249 day orbital period around its host star, a radius 1.08 times Jupiter's, and a mass 4.34 times Jupiter's. It was discovered from a single transit in TESS data and it was confirmed by a network of citizen scientists.
Also in 2024, the planet TOI-6883 c was discovered in the constellation Delphinus. It has an orbital period of 7.8458 days, a radius of 0.7 times Jupiter's, and a third of Jupiter's mass. The Neptune-sized planet was discovered from an anomaly in the data retrieved for TOI-6883.
Deep-sky objects
Its rich Milky Way star field means it contains many modest deep-sky objects. NGC 6891 is a planetary nebula of magnitude 10.5; another is NGC 6905, the Blue Flash Nebula, which shows broad emission lines. The central star in NGC 6905 has a spectral type of WO2, meaning it is rich in oxygen.
NGC 6934 is a globular cluster of magnitude 9.75. It is about 52,000 light-years away from the Solar System. It is in the Shapley-Sawyer Concentration Class VIII and is thought to share a common origin with another globular cluster in Boötes. It has an intermediate metallicity for a globular cluster, but as of 2018 it has been poorly studied. At a distance of about 137,000 light-years, the globular cluster NGC 7006 is at the outer reaches of the galaxy. It is also fairly dim at magnitude 11.5 and is in Class I.
| Physical sciences | Other | Astronomy |
8483 | https://en.wikipedia.org/wiki/Diesel%20cycle | Diesel cycle | The Diesel cycle is a combustion process of a reciprocating internal combustion engine. In it, fuel is ignited by heat generated during the compression of air in the combustion chamber, into which fuel is then injected. This is in contrast to igniting the fuel-air mixture with a spark plug as in the Otto cycle (four-stroke/petrol) engine. Diesel engines are used in aircraft, automobiles, power generation, diesel–electric locomotives, and both surface ships and submarines.
The Diesel cycle is assumed to have constant pressure during the initial part of the combustion phase (from state 2 to state 3 in the diagram, below). This is an idealized mathematical model: real physical diesels do have an increase in pressure during this period, but it is less pronounced than in the Otto cycle. In contrast, the idealized Otto cycle of a gasoline engine approximates a constant volume process during that phase.
Idealized Diesel cycle
The image shows a p–V diagram for the ideal Diesel cycle, where $p$ is pressure and $V$ the volume, or $v$ the specific volume if the process is placed on a unit mass basis. The idealized Diesel cycle assumes an ideal gas and ignores combustion chemistry, exhaust- and recharge procedures and simply follows four distinct processes:
1→2 : isentropic compression of the fluid (blue)
2→3 : constant pressure heating (red)
3→4 : isentropic expansion (yellow)
4→1 : constant volume cooling (green)
The Diesel engine is a heat engine: it converts heat into work. During the bottom isentropic process (blue), energy is transferred into the system in the form of work $W_{in}$, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant pressure (red, isobaric) process, energy enters the system as heat $Q_{in}$. During the top isentropic process (yellow), energy is transferred out of the system in the form of work $W_{out}$, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant volume (green, isochoric) process, some of the energy flows out of the system as heat $Q_{out}$ through the right depressurizing process. The work that leaves the system is equal to the work that enters the system plus the difference between the heat added to the system and the heat that leaves the system; in other words, the net gain of work is equal to the difference between the heat added to the system and the heat that leaves the system.
Work in ($W_{in}$) is done by the piston compressing the air (system)
Heat in ($Q_{in}$) is done by the combustion of the fuel
Work out ($W_{out}$) is done by the working fluid expanding and pushing a piston (this produces usable work)
Heat out ($Q_{out}$) is done by venting the air
Net work produced = $Q_{in}$ − $Q_{out}$
The net work produced is also represented by the area enclosed by the cycle on the p–V diagram. The net work is produced per cycle and is also called the useful work, as it can be turned to other useful types of energy and propel a vehicle (kinetic energy) or produce electrical energy. The summation of many such cycles per unit of time is called the developed power. The $W_{out}$ is also called the gross work, some of which is used in the next cycle of the engine to compress the next charge of air.
Maximum thermal efficiency
The maximum thermal efficiency of a Diesel cycle is dependent on the compression ratio and the cut-off ratio. It has the following formula under cold air standard analysis:
$$\eta_{th} = 1 - \frac{1}{r^{\gamma - 1}} \left( \frac{\alpha^\gamma - 1}{\gamma (\alpha - 1)} \right)$$
where
$\eta_{th}$ is thermal efficiency
$\alpha$ is the cut-off ratio $V_3 / V_2$ (ratio between the end and start volume for the combustion phase)
$r$ is the compression ratio $V_1 / V_2$
$\gamma$ is the ratio of specific heats ($C_p / C_v$)
The cut-off ratio can be expressed in terms of temperature as shown below:
$$\alpha = \frac{V_3}{V_2} = \frac{T_3}{T_2}$$
$T_3$ can be approximated to the flame temperature of the fuel used. The flame temperature can be approximated to the adiabatic flame temperature of the fuel with the corresponding air-to-fuel ratio and compression pressure $p_3$.
$T_1$ can be approximated to the inlet air temperature.
This formula only gives the ideal thermal efficiency. The actual thermal efficiency will be significantly lower due to heat and friction losses. The formula is more complex than the Otto cycle (petrol/gasoline engine) relation, which has the following formula:
$$\eta_{otto,th} = 1 - \frac{1}{r^{\gamma - 1}}$$
The additional complexity for the Diesel formula comes around since the heat addition is at constant pressure and the heat rejection is at constant volume. The Otto cycle by comparison has both the heat addition and rejection at constant volume.
Comparing efficiency to Otto cycle
Comparing the two formulae it can be seen that for a given compression ratio ($r$), the ideal Otto cycle will be more efficient. However, a real diesel engine will be more efficient overall since it will have the ability to operate at higher compression ratios. If a petrol engine were to have the same compression ratio, then knocking (self-ignition) would occur and this would severely reduce the efficiency, whereas in a diesel engine, the self ignition is the desired behavior. Additionally, both of these cycles are only idealizations, and the actual behavior does not divide as clearly or sharply. Furthermore, the ideal Otto cycle formula stated above does not include throttling losses, which do not apply to diesel engines.
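A minimal Python sketch of the two relations makes the comparison concrete (γ = 1.4 for cold air; the compression and cut-off ratios are arbitrary illustrative values):

```python
def diesel_eta(r, alpha, gamma=1.4):
    """Ideal Diesel-cycle thermal efficiency (cold air standard)."""
    return 1.0 - (1.0 / r ** (gamma - 1.0)) * ((alpha ** gamma - 1.0)
                                               / (gamma * (alpha - 1.0)))

def otto_eta(r, gamma=1.4):
    """Ideal Otto-cycle thermal efficiency."""
    return 1.0 - 1.0 / r ** (gamma - 1.0)

# At the same compression ratio the ideal Otto cycle wins...
print(otto_eta(10.0), diesel_eta(10.0, alpha=2.0))   # ~0.602 vs ~0.534
# ...but a diesel can run at far higher compression ratios.
print(diesel_eta(20.0, alpha=2.0))                   # ~0.647
```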
Applications
Diesel engines
Diesel engines have the lowest specific fuel consumption of any large internal combustion engine employing a single cycle, 0.26 lb/hp·h (0.16 kg/kWh) for very large marine engines (combined cycle power plants are more efficient, but employ two engines rather than one). Two-stroke diesels with high pressure forced induction, particularly turbocharging, make up a large percentage of the very largest diesel engines.
In North America, diesel engines are primarily used in large trucks, where the low-stress, high-efficiency cycle leads to much longer engine life and lower operational costs. These advantages also make the diesel engine ideal for use in the heavy-haul railroad and earthmoving environments.
Other internal combustion engines without spark plugs
Many model airplanes use very simple "glow" and "diesel" engines. Glow engines use glow plugs. "Diesel" model airplane engines have variable compression ratios. Both types depend on special fuels.
Some 19th-century or earlier experimental engines used external flames, exposed by valves, for ignition, but this becomes less attractive with increasing compression. (It was the research of Nicolas Léonard Sadi Carnot that established the thermodynamic value of compression.) A historical implication of this is that the diesel engine could have been invented without the aid of electricity.
See the development of the hot-bulb engine and indirect injection for historical significance.
| Physical sciences | Thermodynamics | Physics |
8492 | https://en.wikipedia.org/wiki/Discrete%20mathematics | Discrete mathematics | Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics".
The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deals with finite sets, particularly those areas relevant to business.
Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in "discrete" steps and store data in "discrete" bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems.
Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well.
In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. Some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is sometimes seen as a preparatory course, like precalculus in this respect.
The Fulkerson Prize is awarded for outstanding papers in discrete mathematics.
Topics
Theoretical computer science
Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms and data structures. Computability studies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations. Automata theory and formal language theory are closely related to computability. Petri nets and process algebras are used to model computer systems, and methods from discrete mathematics are used in analyzing VLSI electronic circuits. Computational geometry applies algorithms to geometrical problems and representations of geometrical objects, while computer image analysis applies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics.
Information theory
Information theory involves the quantification of information. Closely related is coding theory which is used to design efficient and reliable data transmission and storage methods. Information theory also includes continuous topics such as: analog signals, analog coding, analog encryption.
Logic
Logic is the study of the principles of valid reasoning and inference, as well as of consistency, soundness, and completeness. For example, in most systems of logic (but not in intuitionistic logic) Peirce's law (((P→Q)→P)→P) is a theorem. For classical logic, it can be easily verified with a truth table. The study of mathematical proof is particularly important in logic, and has applications to automated theorem proving and formal verification of software.
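As an illustration of such a truth-table check, a minimal Python sketch enumerating all assignments of P and Q:

```python
from itertools import product

def implies(a, b):
    """Material implication a -> b."""
    return (not a) or b

# Peirce's law (((P -> Q) -> P) -> P) holds for every truth assignment.
print(all(implies(implies(implies(P, Q), P), P)
          for P, Q in product([False, True], repeat=2)))   # True
```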
Logical formulas are discrete structures, as are proofs, which form finite trees or, more generally, directed acyclic graph structures (with each inference step combining one or more premise branches to give a single conclusion). The truth values of logical formulas usually form a finite set, generally restricted to two values: true and false, but logic can also be continuous-valued, e.g., fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied, e.g. infinitary logic.
Set theory
Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas.
In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics.
Combinatorics
Combinatorics studies the ways in which discrete structures can be combined or arranged.
Enumerative combinatorics concentrates on counting the number of certain combinatorial objects - e.g. the twelvefold way provides a unified framework for counting permutations, combinations and partitions.
Analytic combinatorics concerns the enumeration (i.e., determining the number) of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
Topological combinatorics concerns the use of techniques from topology and algebraic topology/combinatorial topology in combinatorics.
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties.
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, partition theory is now considered a part of combinatorics or an independent field.
Order theory is the study of partially ordered sets, both finite and infinite.
Graph theory
Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory and topological graph theory has close links to topology. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics.
Number theory
Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, diophantine approximation, p-adic analysis and function fields.
Algebraic structures
Algebraic structures occur as both discrete examples and continuous examples. Discrete algebras include: Boolean algebra used in logic gates and programming; relational algebra used in databases; discrete and finite versions of groups, rings and fields are important in algebraic coding theory; discrete semigroups and monoids appear in the theory of formal languages.
Discrete analogues of continuous mathematics
There are many concepts and theories in continuous mathematics which have discrete versions, such as discrete calculus, discrete Fourier transforms, discrete geometry, discrete logarithms, discrete differential geometry, discrete exterior calculus, discrete Morse theory, discrete optimization, discrete probability theory, discrete probability distribution, difference equations, discrete dynamical systems, and discrete vector measures.
Calculus of finite differences, discrete analysis, and discrete calculus
In discrete calculus and the calculus of finite differences, a function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right. Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals. As well as discrete metric spaces, there are more general discrete topological spaces, finite metric spaces, finite topological spaces.
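For instance, Euler's method turns the differential equation $y' = y$ into the difference equation $y_{n+1} = y_n + h \cdot y_n$; a minimal Python sketch (the step size and horizon are illustrative choices):

```python
import math

h, steps = 0.001, 1000     # approximate y' = y on [0, 1] with step h
y = 1.0                    # initial condition y(0) = 1
for _ in range(steps):
    y = y + h * y          # the recurrence replacing differentiation
print(y, math.e)           # ~2.7169 vs 2.71828...
```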
The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical systems.
Discrete geometry
Discrete geometry and combinatorial geometry are about combinatorial properties of discrete collections of geometrical objects. A long-standing topic in discrete geometry is tiling of the plane.
In algebraic geometry, the concept of a curve can be extended to discrete geometries by taking the spectra of polynomial rings over finite fields to be models of the affine spaces over that field, and letting subvarieties or spectra of other rings provide the curves that lie in that space. Although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. For example, every point of the form $V(x - c) \subset \operatorname{Spec} K[x]$ for $K$ a field can be studied either as $\operatorname{Spec} K[x]/(x - c)$, a point, or as $\operatorname{Spec} K[x]_{(x - c)}$, the spectrum of the local ring at $(x - c)$, a point together with a neighborhood around it. Algebraic varieties also have a well-defined notion of tangent space called the Zariski tangent space, making many features of calculus applicable even in finite settings.
Discrete modelling
In applied mathematics, discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, discrete formulae are fit to data. A common method in this form of modelling is to use recurrence relation. Discretization concerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations. Numerical analysis provides an important example.
Challenges
The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance).
In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. In 1970, Yuri Matiyasevich proved that this could not be done.
The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. The telecommunications industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need.
Computational geometry has been an important part of the computer graphics incorporated into modern video games and computer-aided design tools.
Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life.
Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems.
| Mathematics | Discrete mathematics | null |
8494 | https://en.wikipedia.org/wiki/DDT | DDT | Dichlorodiphenyltrichloroethane, commonly known as DDT, is a colorless, tasteless, and almost odorless crystalline chemical compound, an organochloride. Originally developed as an insecticide, it became infamous for its environmental impacts. DDT was first synthesized in 1874 by the Austrian chemist Othmar Zeidler. DDT's insecticidal action was discovered by the Swiss chemist Paul Hermann Müller in 1939. DDT was used in the second half of World War II to limit the spread of the insect-borne diseases malaria and typhus among civilians and troops. Müller was awarded the Nobel Prize in Physiology or Medicine in 1948 "for his discovery of the high efficiency of DDT as a contact poison against several arthropods". The WHO's anti-malaria campaign of the 1950s and 1960s relied heavily on DDT and the results were promising, though there was a resurgence in developing countries afterwards.
By October 1945, DDT was available for public sale in the United States. Although it was promoted by government and industry for use as an agricultural and household pesticide, there were also concerns about its use from the beginning. Opposition to DDT was focused by the 1962 publication of Rachel Carson's book Silent Spring. It talked about environmental impacts that correlated with the widespread use of DDT in agriculture in the United States, and it questioned the logic of broadcasting potentially dangerous chemicals into the environment with little prior investigation of their environmental and health effects. The book cited claims that DDT and other pesticides caused cancer and that their agricultural use was a threat to wildlife, particularly birds. Although Carson never directly called for an outright ban on the use of DDT, its publication was a seminal event for the environmental movement and resulted in a large public outcry that eventually led, in 1972, to a ban on DDT's agricultural use in the United States. Along with the passage of the Endangered Species Act, the United States ban on DDT is a major factor in the comeback of the bald eagle (the national bird of the United States) and the peregrine falcon from near-extinction in the contiguous United States.
The evolution of DDT resistance and the harm both to humans and the environment led many governments to curtail DDT use. A worldwide ban on agricultural use was formalized under the Stockholm Convention on Persistent Organic Pollutants, which has been in effect since 2004. Recognizing that total elimination in many malaria-prone countries is currently unfeasible in the absence of affordable/effective alternatives for disease control, the convention exempts public health use within World Health Organization (WHO) guidelines from the ban.
DDT still has limited use in disease vector control because of its effectiveness in killing mosquitos and thus reducing malarial infections, but that use is controversial due to environmental and health concerns. DDT is one of many tools to fight malaria, which remains the primary public health challenge in many countries. WHO guidelines require that absence of DDT resistance must be confirmed before using it. Resistance is largely due to agricultural use, in much greater quantities than required for disease prevention.
Properties and chemistry
DDT is similar in structure to the insecticide methoxychlor and the acaricide dicofol. It is highly hydrophobic and nearly insoluble in water but has good solubility in most organic solvents, fats and oils. DDT does not occur naturally and is synthesised by consecutive Friedel–Crafts reactions between chloral () and two equivalents of chlorobenzene (), in the presence of an acidic catalyst. DDT has been marketed under trade names including Anofex, Cezarex, Chlorophenothane, Dicophane, Dinocide, Gesarol, Guesapon, Guesarol, Gyron, Ixodex, Neocid, Neocidol and Zerdane; INN is clofenotane.
Isomers and related compounds
Commercial DDT is a mixture of several closely related compounds. Due to the nature of the chemical reaction used to synthesize DDT, several combinations of ortho and para arene substitution patterns are formed. The major component (77%) is the desired p,p isomer. The o,p isomeric impurity is also present in significant amounts (15%). Dichlorodiphenyldichloroethylene (DDE) and dichlorodiphenyldichloroethane (DDD) make up the balance of impurities in commercial samples. DDE and DDD are also the major metabolites and environmental breakdown products. DDT, DDE and DDD are sometimes referred to collectively as DDX.
Production and use
DDT has been formulated in multiple forms, including solutions in xylene or petroleum distillates, emulsifiable concentrates, water-wettable powders, granules, aerosols, smoke candles and charges for vaporizers and lotions.
From 1950 to 1980, DDT was extensively used in agriculture (more than 40,000 tonnes each year worldwide), and it has been estimated that a total of 1.8 million tonnes have been produced globally since the 1940s. In the United States, it was manufactured by some 15 companies, including Monsanto, Ciba, Montrose Chemical Company, Pennwalt, and Velsicol Chemical Corporation. Production peaked in 1963 at 82,000 tonnes per year. More than 600,000 tonnes (1.35 billion pounds) were applied in the US before the 1972 ban. Usage peaked in 1959 at about 36,000 tonnes.
China ceased production in 2007, leaving India the only country still manufacturing DDT; it is the largest consumer. In 2009, 3,314 tonnes were produced for malaria control and visceral leishmaniasis. In recent years, in addition to India, just seven other countries, all in Africa, are still using DDT.
Mechanism of insecticide action
In insects, DDT opens voltage-sensitive sodium ion channels in neurons, causing them to fire spontaneously, which leads to spasms and eventual death. Insects with certain mutations in their sodium channel gene are resistant to DDT and similar insecticides. DDT resistance is also conferred by up-regulation of genes expressing cytochrome P450 in some insect species, as greater quantities of some enzymes of this group accelerate the toxin's metabolism into inactive metabolites. Genomic studies in the model genetic organism Drosophila melanogaster revealed that high level DDT resistance is polygenic, involving multiple resistance mechanisms. Even in the absence of genetic adaptation, behavioral avoidance provides insects with some protection against DDT (Roberts and Andre 1994). The M918T mutation produces dramatic knockdown resistance (kdr) to pyrethroids, but Usherwood et al. 2005 find it is entirely ineffective against DDT; Scott 2019 believes this result, obtained in Drosophila oocytes, holds for oocytes in general.
History
DDT was first synthesized in 1874 by Othmar Zeidler under the supervision of Adolf von Baeyer. It was further described in 1929 in a dissertation by W. Bausch and in two subsequent publications in 1930. The insecticide properties of "multiple chlorinated aliphatic or fat-aromatic alcohols with at least one trichloromethane group" were described in a patent in 1934 by Wolfgang von Leuthold. DDT's insecticidal properties were not, however, discovered until 1939 by the Swiss scientist Paul Hermann Müller, who was awarded the 1948 Nobel Prize in Physiology and Medicine for his efforts.
Use in the 1940s and 1950s
DDT is the best-known of several chlorine-containing pesticides used in the 1940s and 1950s. During this time, the use of DDT was driven by protecting American soldiers from diseases in tropical areas. Both British and American scientists hoped to use it to control spread of malaria, typhus, dysentery, and typhoid fever among overseas soldiers, especially considering that the pyrethrum was harder to access since it came mainly from Japan. Due to the potency of DDT, it was not long before America's War Production Board placed it on military supply lists in 1942 and 1943 and encouraged its production for overseas use. Enthusiasm regarding DDT became obvious through the American government's advertising campaigns of posters depicting Americans fighting the Axis powers and insects and through media publications celebrating its military uses. In the South Pacific, it was sprayed aerially for malaria and dengue fever control with spectacular effects. While DDT's chemical and insecticidal properties were important factors in these victories, advances in application equipment coupled with competent organization and sufficient manpower were also crucial to the success of these programs.
In 1945, DDT was made available to farmers as an agricultural insecticide and played a role in the elimination of malaria in Europe and North America. Despite emerging concerns in the scientific community and a lack of research, the FDA considered it safe at up to 7 parts per million in food. There was a large economic incentive to push DDT into the market and sell it to farmers, governments, and individuals to control diseases and increase food production.
DDT was also a way for American influence to reach abroad through DDT-spraying campaigns. In the 1944 issue of Life magazine there was a feature regarding the Italian program showing pictures of American public health officials in uniforms spraying DDT on Italian families.
In 1955, the World Health Organization commenced a program to eradicate malaria in countries with low to moderate transmission rates worldwide, relying largely on DDT for mosquito control and rapid diagnosis and treatment to reduce transmission. The program eliminated the disease in "North America, Europe, the former Soviet Union", and in "Taiwan, much of the Caribbean, the Balkans, parts of northern Africa, the northern region of Australia, and a large swath of the South Pacific" and dramatically reduced mortality in Sri Lanka and India.
However, failure to sustain the program, increasing mosquito tolerance to DDT, and increasing parasite tolerance led to a resurgence. In many areas early successes were partially or completely reversed, and in some cases rates of transmission increased. The program succeeded in eliminating malaria only in areas with "high socio-economic status, well-organized healthcare systems, and relatively less intensive or seasonal malaria transmission".
DDT was less effective in tropical regions due to the continuous life cycle of mosquitoes and poor infrastructure. It was applied in sub-Saharan Africa by various colonial states, but the 'global' WHO eradication program did not include the region. Mortality rates in that area never declined to the same dramatic extent, and now constitute the bulk of malarial deaths worldwide, especially following the disease's resurgence as a result of resistance to drug treatments and the spread of the deadly malarial variant caused by Plasmodium falciparum. Eradication was abandoned in 1969 and attention instead focused on controlling and treating the disease. Spraying programs (especially using DDT) were curtailed due to concerns over safety and environmental effects, as well as problems in administrative, managerial and financial implementation. Efforts shifted from spraying to the use of bednets impregnated with insecticides and other interventions.
United States ban
By October 1945, DDT was available for public sale in the United States, used both as an agricultural pesticide and as a household insecticide. Although its use was promoted by government and the agricultural industry, US scientists such as FDA pharmacologist Herbert O. Calvery expressed concern over possible hazards associated with DDT as early as 1944. In 1947, Bradbury Robinson, a physician and nutritionist practicing in St. Louis, Michigan, warned of the dangers of using the pesticide DDT in agriculture. DDT had been researched and manufactured in St. Louis by the Michigan Chemical Corporation, later purchased by Velsicol Chemical Corporation, and had become an important part of the local economy. Citing research performed by Michigan State University in 1946, Robinson, a past president of the local Conservation Club, opined that:
As its production and use increased, public response was mixed. At the same time that DDT was hailed as part of the "world of tomorrow", concerns were expressed about its potential to kill harmless and beneficial insects (particularly pollinators), birds, fish, and eventually humans. The issue of toxicity was complicated, partly because DDT's effects varied from species to species, and partly because consecutive exposures could accumulate, causing damage comparable to large doses. A number of states attempted to regulate DDT. In the 1950s the federal government began tightening regulations governing its use. These events received little attention. Women like Dorothy Colson and Mamie Ella Plyler of Claxton, Georgia, gathered evidence about DDT's effects and wrote to the Georgia Department of Public Health, the National Health Council in New York City, and other organizations.
In 1957 The New York Times reported an unsuccessful struggle to restrict DDT use in Nassau County, New York, and the issue came to the attention of the popular naturalist-author Rachel Carson when a friend, Olga Huckins, wrote to her including an article she had written in the Boston Globe about the devastation of her local bird population after DDT spraying. William Shawn, editor of The New Yorker, urged her to write a piece on the subject, which developed into her 1962 book Silent Spring. The book argued that pesticides, including DDT, were poisoning both wildlife and the environment and were endangering human health. Silent Spring was a best seller, and public reaction to it launched the modern environmental movement in the United States. The year after it appeared, President John F. Kennedy ordered his Science Advisory Committee to investigate Carson's claims. The committee's report "add[ed] up to a fairly thorough-going vindication of Rachel Carson's Silent Spring thesis", in the words of the journal Science, and recommended a phaseout of "persistent toxic pesticides". In 1965, the U.S. military removed DDT from the military supply system due in part to the development of resistance by body lice to DDT; it was replaced by lindane.
In the mid-1960s, DDT became a prime target of the burgeoning environmental movement, as concern about DDT and its effects began to rise in local communities. In 1966, a fish kill in Suffolk County, NY, was linked to a 5,000-gallon DDT dump by the county's mosquito commission, leading a group of scientists and lawyers to file a lawsuit to stop the county's further use of DDT. A year later, the group, led by Victor Yannacone and Charles Wurster, founded the Environmental Defense Fund (EDF), along with scientists Art Cooley and Dennis Puleston, and brought a string of lawsuits against DDT and other persistent pesticides in Michigan and Wisconsin.
Around the same time, evidence was mounting further about DDT causing catastrophic declines in wildlife reproduction, especially in birds of prey like peregrine falcons, bald eagles, ospreys, and brown pelicans, whose eggshells became so thin that they often cracked before hatching. Toxicologists like David Peakall were measuring DDE levels in the eggs of peregrine falcons and California condors and finding that increased levels corresponded with thinner shells. Compounding the effect was DDT's persistence in the environment: being nearly insoluble in water, it accumulated in animal fat and disrupted hormone metabolism across a wide range of species.
In response to an EDF suit, the U.S. Court of Appeals for the District of Columbia Circuit in 1971 ordered the EPA to begin the de-registration procedure for DDT. After an initial six-month review process, William Ruckelshaus, the Agency's first Administrator, rejected an immediate suspension of DDT's registration, citing studies from the EPA's internal staff stating that DDT was not an imminent danger. However, these findings were criticized, as they were performed mostly by economic entomologists inherited from the United States Department of Agriculture, who, many environmentalists felt, were biased towards agribusiness and understated concerns about human health and wildlife. The decision thus created controversy.
The EPA held seven months of hearings in 1971–1972, with scientists giving evidence for and against DDT. In the summer of 1972, Ruckelshaus announced the cancellation of most uses of DDT, exempting public health uses under some conditions. Again, this caused controversy. Immediately after the announcement, both the EDF and the DDT manufacturers filed suit against the EPA. Many in the agricultural community were concerned that food production would be severely impacted, while proponents of pesticides warned of increased outbreaks of insect-borne diseases and questioned the validity of inferring human cancer risk from animal studies that used high pesticide doses. Industry sought to overturn the ban, while the EDF wanted a comprehensive ban. The cases were consolidated, and in 1973 the United States Court of Appeals for the District of Columbia Circuit ruled that the EPA had acted properly in banning DDT. During the late 1970s, the EPA also began banning organochlorines, pesticides that were chemically similar to DDT. These included aldrin, dieldrin, chlordane, heptachlor, toxaphene, and mirex.
Some uses of DDT continued under the public health exemption. For example, in June 1979, the California Department of Health Services was permitted to use DDT to suppress flea vectors of bubonic plague. DDT continued to be produced in the United States for foreign markets until 1985, when over 300 tons were exported.
International usage restrictions
In the 1970s and 1980s, agricultural use was banned in most developed countries, beginning with Hungary in 1968, although in practice it continued to be used there through at least 1970. This was followed by Norway and Sweden in 1970 and West Germany and the United States in 1972; the United Kingdom did not follow until 1984.
In contrast to West Germany, the German Democratic Republic used DDT until 1988. Of particular relevance were large-scale applications in forestry in 1982–1984, aimed at combating bark beetles and pine moths. As a consequence, DDT concentrations in eastern German forest soils are still significantly higher than in soils of the former western German states.
By 1991, total bans, including for disease control, were in place in at least 26 countries; for example, Cuba in 1970, the US in the 1980s, Singapore in 1984, Chile in 1985, and the Republic of Korea in 1986.
The Stockholm Convention on Persistent Organic Pollutants, which took effect in 2004, put a global ban on several persistent organic pollutants, and restricted DDT use to vector control. The convention was ratified by more than 170 countries. Recognizing that total elimination in many malaria-prone countries is currently unfeasible in the absence of affordable/effective alternatives, the convention exempts public health use within World Health Organization (WHO) guidelines from the ban. Resolution 60.18 of the World Health Assembly commits WHO to the Stockholm Convention's aim of reducing and ultimately eliminating DDT. Malaria Foundation International states, "The outcome of the treaty is arguably better than the status quo going into the negotiations. For the first time, there is now an insecticide which is restricted to vector control only, meaning that the selection of resistant mosquitoes will be slower than before."
Despite the worldwide ban, agricultural use continued in India, North Korea, and possibly elsewhere. As of 2013, an estimated 3,000 to 4,000 tons of DDT were produced for disease vector control, including 2,786 tons in India. DDT is applied to the inside walls of homes to kill or repel mosquitoes. This intervention, called indoor residual spraying (IRS), greatly reduces environmental damage. It also reduces the incidence of DDT resistance. For comparison, treating a small area of cotton during a typical U.S. growing season requires the same amount of chemical as treating roughly 1,700 homes.
Environmental impact
DDT is a persistent organic pollutant that is readily adsorbed to soils and sediments, which can act both as sinks and as long-term sources of exposure affecting organisms. Depending on environmental conditions, its soil half-life can range from 22 days to 30 years. Routes of loss and degradation include runoff, volatilization, photolysis and aerobic and anaerobic biodegradation. Due to its hydrophobic properties, in aquatic ecosystems DDT and its metabolites are absorbed by aquatic organisms and adsorbed on suspended particles, leaving little DDT dissolved in the water (however, its half-life in aquatic environments is listed by the National Pesticide Information Center as 150 years). Its breakdown products and metabolites, DDE and DDD, are also persistent and have similar chemical and physical properties. DDT and its breakdown products are transported from warmer areas to the Arctic by the phenomenon of global distillation, where they then accumulate in the region's food web.
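To make the persistence figures above concrete, the standard first-order half-life relation can be applied; this is a generic worked example, not a value from any particular study:

```latex
% Fraction of DDT remaining after time t, given a half-life t_{1/2}:
\[
  \frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}}
\]
% Taking the 30-year upper soil half-life cited above, after 60 years
% (1/2)^{60/30} = 1/4 of the original deposit, about 25%, would remain.
```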
Medical researchers in 1974 found a measurable and significant difference in the presence of DDT in human milk between mothers who lived in New Brunswick and mothers who lived in Nova Scotia, "possibly because of the wider use of insecticide sprays in the past".
Because of its lipophilic properties, DDT can bioaccumulate, especially in predatory birds. DDT is toxic to a wide range of living organisms, including marine animals such as crayfish, daphnids, sea shrimp and many species of fish. DDT, DDE and DDD magnify through the food chain, with apex predators such as raptor birds concentrating more chemicals than other animals in the same environment. They are stored mainly in body fat. DDT and DDE are resistant to metabolism; in humans, their half-lives are 6 and up to 10 years, respectively. In the United States, these chemicals were detected in almost all human blood samples tested by the Centers for Disease Control in 2005, though their levels have sharply declined since most uses were banned. Estimated dietary intake has declined, although FDA food tests commonly detect it.
Despite being banned for many years, in 2018 research showed that DDT residues are still present in European soils and Spanish rivers.
Eggshell thinning
The chemical and its breakdown products DDE and DDD caused eggshell thinning and population declines in multiple North American and European bird of prey species. Both laboratory experiments and field studies confirmed this effect. The effect was first conclusively proven at Bellow Island in Lake Michigan during University of Michigan-funded studies on American herring gulls in the mid-1960s. DDE-related eggshell thinning is considered a major reason for the decline of the bald eagle, brown pelican, peregrine falcon and osprey. However, birds vary in their sensitivity to these chemicals, with birds of prey, waterfowl and song birds being more susceptible than chickens and related species. Even in 2010, California condors that feed on sea lions at Big Sur, which in turn feed in the Palos Verdes Shelf area of the Montrose Chemical Superfund site, exhibited continued thin-shell problems, though DDT's role in the decline of the California condor is disputed.
The biological thinning mechanism is not entirely understood, but DDE appears to be more potent than DDT, and strong evidence indicates that p,p'-DDE inhibits calcium ATPase in the membrane of the shell gland and reduces the transport of calcium carbonate from blood into the eggshell gland. This results in a dose-dependent thickness reduction. Other evidence indicates that o,p'-DDT disrupts female reproductive tract development, later impairing eggshell quality. Multiple mechanisms may be at work, or different mechanisms may operate in different species.
Human health
DDT is an endocrine disruptor. It is considered likely to be a human carcinogen, although the majority of studies suggest it is not directly genotoxic. DDE acts as a weak androgen receptor antagonist, but not as an estrogen. p,p'-DDT, DDT's main component, has little or no androgenic or estrogenic activity. The minor component o,p'-DDT has weak estrogenic activity.
Acute toxicity
DDT is classified as "moderately toxic" by the U.S. National Toxicology Program (NTP) and "moderately hazardous" by WHO, based on the rat oral LD50 of 113 mg/kg. Indirect exposure is considered relatively non-toxic for humans.
Chronic toxicity
Because DDT tends to build up in lipid-rich areas of the body, chronic exposure can affect reproductive capability and the embryo or fetus.
A review article in The Lancet states: "research has shown that exposure to DDT at amounts that would be needed in malaria control might cause preterm birth and early weaning ... toxicological evidence shows endocrine-disrupting properties; human data also indicate possible disruption in semen quality, menstruation, gestational length, and duration of lactation".
Other studies document decreases in semen quality among men with high exposures (generally from indoor residual spraying).
Studies are inconsistent on whether high blood DDT or DDE levels increase time to pregnancy. In one study, daughters of mothers with high DDE blood serum levels had up to a 32% increase in the probability of conceiving, while increased DDT levels were associated with a 16% decrease.
Indirect exposure of mothers, through contact with workers who handle DDT directly, is associated with an increase in spontaneous abortions.
Other studies found that DDT or DDE interfere with proper thyroid function in pregnancy and childhood.
Mothers with high levels of DDT circulating in their blood during pregnancy were found to be more likely to give birth to children who would go on to develop autism.
Carcinogenicity
In 2015, the International Agency for Research on Cancer classified DDT as Group 2A, "probably carcinogenic to humans". Previous assessments by the U.S. National Toxicology Program had classified it as "reasonably anticipated to be a carcinogen", and the EPA had classified DDT, DDE and DDD as class B2 "probable" carcinogens; these evaluations were based mainly on animal studies.
A 2005 Lancet review stated that occupational DDT exposure was associated with increased pancreatic cancer risk in two case-control studies, but another study showed no DDE dose-effect association. Results regarding a possible association with liver cancer and biliary tract cancer are conflicting: workers who did not have direct occupational DDT contact showed increased risk, and white men had an increased risk, but not white women or black men. Results about an association with multiple myeloma, prostate and testicular cancer, endometrial cancer and colorectal cancer have been inconclusive or generally do not support an association. A 2017 review of liver cancer studies concluded that "organochlorine pesticides, including DDT, may increase hepatocellular carcinoma risk".
A 2009 review, whose co-authors included persons engaged in DDT-related litigation, reached broadly similar conclusions, with an equivocal association with testicular cancer. Case–control studies did not support an association with leukemia or lymphoma.
Breast cancer
The question of whether DDT or DDE are risk factors for breast cancer has not been conclusively answered. Several meta-analyses of observational studies have concluded that there is no overall relationship between DDT exposure and breast cancer risk. The United States Institute of Medicine reviewed data on the association of breast cancer with DDT exposure in 2012 and concluded that a causative relationship could neither be proven nor disproven.
A 2007 case-control study using archived blood samples found that breast cancer risk was increased 5-fold among women who were born prior to 1931 and who had high serum DDT levels in 1963. Reasoning that DDT use became widespread in 1945 and peaked around 1950, the authors concluded that the ages of 14–20 were a critical period in which DDT exposure leads to increased risk. This study, which suggests a connection between DDT exposure and breast cancer that would not be picked up by most studies, has received variable commentary in third-party reviews. One review suggested that "previous studies that measured exposure in older women may have missed the critical period". The National Toxicology Program notes that while the majority of studies have not found a relationship between DDT exposure and breast cancer, positive associations have been seen in a "few studies among women with higher levels of exposure and among certain subgroups of women".
A 2015 case control study identified a link (odds ratio 3.4) between in-utero exposure (as estimated from archived maternal blood samples) and breast cancer diagnosis in daughters. The findings "support classification of DDT as an endocrine disruptor, a predictor of breast cancer, and a marker of high risk".
Malaria control
Malaria remains the primary public health challenge in many countries. In 2015, there were 214 million cases of malaria worldwide resulting in an estimated 438,000 deaths, 90% of which occurred in Africa. DDT is one of many tools to fight the disease. Its use in this context has been called everything from a "miracle weapon [that is] like Kryptonite to the mosquitoes", to "toxic colonialism".
Before DDT, eliminating mosquito breeding grounds by drainage or poisoning with Paris green or pyrethrum was sometimes successful. In parts of the world with rising living standards, the elimination of malaria was often a collateral benefit of the introduction of window screens and improved sanitation. A variety of usually simultaneous interventions represents best practice. These include antimalarial drugs to prevent or treat infection; improvements in public health infrastructure to diagnose, sequester and treat infected individuals; bednets and other methods intended to keep mosquitoes from biting humans; and vector control strategies such as larviciding with insecticides, ecological controls such as draining mosquito breeding grounds or introducing fish to eat larvae, and indoor residual spraying (IRS) with insecticides, possibly including DDT. IRS involves the treatment of interior walls and ceilings with insecticides. It is particularly effective against mosquitoes, since many species rest on an indoor wall before or after feeding. DDT is one of 12 WHO-approved IRS insecticides.
The WHO's anti-malaria campaign of the 1950s and 1960s relied heavily on DDT and the results were promising, though temporary in developing countries. Experts tie malarial resurgence to multiple factors, including poor leadership, management and funding of malaria control programs; poverty; civil unrest; and increased irrigation. The evolution of resistance to first-generation drugs (e.g. chloroquine) and to insecticides exacerbated the situation. Resistance was largely fueled by unrestricted agricultural use. Resistance and the harm both to humans and the environment led many governments to curtail DDT use in vector control and agriculture. In 2006 WHO reversed a longstanding policy against DDT by recommending that it be used as an indoor pesticide in regions where malaria is a major problem.
Once the mainstay of anti-malaria campaigns, DDT was used for indoor residual spraying in only five countries as of 2019.
Initial effectiveness
When it was introduced in World War II, DDT was effective in reducing malaria morbidity and mortality. WHO's anti-malaria campaign, which consisted mostly of spraying DDT and rapid treatment and diagnosis to break the transmission cycle, was initially successful as well. For example, in Sri Lanka, the program reduced cases from about one million per year before spraying to just 18 in 1963 and 29 in 1964. Thereafter the program was halted to save money and malaria rebounded to 600,000 cases in 1968 and the first quarter of 1969. The country resumed DDT vector control but the mosquitoes had evolved resistance in the interim, presumably because of continued agricultural use. The program switched to malathion, but despite initial successes, malaria continued its resurgence into the 1980s.
DDT remains on WHO's list of insecticides recommended for IRS. After the appointment of Arata Kochi as head of its anti-malaria division, WHO's policy shifted from recommending IRS only in areas of seasonal or episodic transmission of malaria, to advocating it in areas of continuous, intense transmission. WHO reaffirmed its commitment to phasing out DDT, aiming "to achieve a 30% cut in the application of DDT world-wide by 2014 and its total phase-out by the early 2020s if not sooner" while simultaneously combating malaria. WHO plans to implement alternatives to DDT to achieve this goal.
South Africa continues to use DDT under WHO guidelines. In 1996, the country switched to alternative insecticides and malaria incidence increased dramatically. Returning to DDT and introducing new drugs brought malaria back under control. Malaria cases increased in South America after countries on that continent stopped using DDT. Research data showed a strong negative relationship between DDT residual house sprayings and malaria. In research covering 1993 to 1995, Ecuador increased its use of DDT and achieved a 61% reduction in malaria rates, while each of the other countries that gradually decreased its DDT use experienced large increases.
Mosquito resistance
In some areas, resistance reduced DDT's effectiveness. WHO guidelines require that the absence of resistance be confirmed before the chemical is used. Resistance is largely due to agricultural use, which involves much greater quantities than disease prevention requires.
Resistance was noted early in spray campaigns. Paul Russell, former head of the Allied Anti-Malaria campaign, observed in 1956 that "resistance has appeared after six or seven years". Resistance has been detected in Sri Lanka, Pakistan, Turkey and Central America, and in those areas DDT has largely been replaced by organophosphate or carbamate insecticides, e.g. malathion or bendiocarb.
In many parts of India, DDT is ineffective. Agricultural uses were banned in 1989 and its anti-malarial use has been declining. Urban use ended. One study concluded that "DDT is still a viable insecticide in indoor residual spraying owing to its effectivity in well supervised spray operation and high excito-repellency factor."
Studies of malaria-vector mosquitoes in KwaZulu-Natal Province, South Africa found susceptibility to 4% DDT (WHO's susceptibility standard) in 63% of the samples, compared to the average of 87% in the same species caught in the open. The authors concluded that "Finding DDT resistance in the vector An. arabiensis, close to the area where we previously reported pyrethroid-resistance in the vector An. funestus Giles, indicates an urgent need to develop a strategy of insecticide resistance management for the malaria control programmes of southern Africa."
DDT can still be effective against resistant mosquitoes and the avoidance of DDT-sprayed walls by mosquitoes is an additional benefit of the chemical. For example, a 2007 study reported that resistant mosquitoes avoided treated huts. The researchers argued that DDT was the best pesticide for use in IRS (even though it did not afford the most protection from mosquitoes out of the three test chemicals) because the other pesticides worked primarily by killing or irritating mosquitoes – encouraging the development of resistance. Others argue that the avoidance behavior slows eradication. Unlike other insecticides such as pyrethroids, DDT requires long exposure to accumulate a lethal dose; however its irritant property shortens contact periods. "For these reasons, when comparisons have been made, better malaria control has generally been achieved with pyrethroids than with DDT." In India outdoor sleeping and night duties are common, implying that "the excito-repellent effect of DDT, often reported useful in other countries, actually promotes outdoor transmission".
Residents' concerns
IRS is effective if at least 80% of homes and barns in a residential area are sprayed. Lower coverage rates can jeopardize program effectiveness. Many residents resist DDT spraying, objecting to the lingering smell, stains on walls, and the potential exacerbation of problems with other insect pests. Pyrethroid insecticides (e.g. deltamethrin and lambda-cyhalothrin) can overcome some of these issues, increasing participation.
Human exposure
A 1994 study found that South Africans living in sprayed homes have DDT levels several orders of magnitude greater than those of other people. Breast milk from South African mothers contains high levels of DDT and DDE. It is unclear to what extent these levels arise from home spraying versus food residues. Evidence indicates that these levels are associated with infant neurological abnormalities.
Most studies of DDT's human health effects have been conducted in developed countries where DDT is not used and exposure is relatively low.
Illegal diversion to agriculture is also a concern as it is difficult to prevent and its subsequent use on crops is uncontrolled. For example, DDT use is widespread in Indian agriculture, particularly mango production and is reportedly used by librarians to protect books. Other examples include Ethiopia, where DDT intended for malaria control is reportedly used in coffee production, and Ghana where it is used for fishing. The residues in crops at levels unacceptable for export have been an important factor in bans in several tropical countries. Adding to this problem is a lack of skilled personnel and management.
Criticism of restrictions on DDT use
Restrictions on DDT usage have been criticized by some organizations opposed to the environmental movement, including Roger Bate of the pro-DDT advocacy group Africa Fighting Malaria and the libertarian think tank Competitive Enterprise Institute; these sources oppose restrictions on DDT and attribute large numbers of deaths to such restrictions, sometimes in the millions. These arguments were rejected as "outrageous" by former WHO scientist Socrates Litsios. May Berenbaum, University of Illinois entomologist, says, "to blame environmentalists who oppose DDT for more deaths than Hitler is worse than irresponsible". More recently, Michael Palmer, a professor of chemistry at the University of Waterloo, has pointed out that DDT is still used to prevent malaria, that its declining use is primarily due to increases in manufacturing costs, and that in Africa, efforts to control malaria have been regional or local, not comprehensive.
Criticisms of a DDT "ban" often specifically reference the 1972 United States ban (with the erroneous implication that this constituted a worldwide ban and prohibited use of DDT in vector control). Reference is often made to Silent Spring, even though Carson never pushed for a DDT ban. John Quiggin and Tim Lambert wrote, "the most striking feature of the claim against Carson is the ease with which it can be refuted".
Investigative journalist Adam Sarvana and others characterize these notions as "myths" promoted principally by Roger Bate of the pro-DDT advocacy group Africa Fighting Malaria (AFM).
Alternatives
Insecticides
Organophosphate and carbamate insecticides, e.g. malathion and bendiocarb, respectively, are more expensive than DDT per kilogram and are applied at roughly the same dosage. Pyrethroids such as deltamethrin are also more expensive than DDT, but are applied more sparingly (0.02–0.3 g/m2 vs 1–2 g/m2), so the net cost per house per treatment is about the same. DDT has one of the longest residual efficacy periods of any IRS insecticide, lasting 6 to 12 months. Pyrethroids will remain active for only 4 to 6 months, and organophosphates and carbamates remain active for 2 to 6 months. In many malaria-endemic countries, malaria transmission occurs year-round, meaning that the high expense of conducting a spray campaign (including hiring spray operators, procuring insecticides, and conducting pre-spray outreach campaigns to encourage people to be home and to accept the intervention) will need to occur multiple times per year for these shorter-lasting insecticides.
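As a back-of-the-envelope illustration of how dosage, price per kilogram, and residual duration combine into an annual cost per house, the following sketch uses the dosage and residual figures above but assumes hypothetical prices and an assumed sprayable area per house:

```python
# Rough annualized IRS cost comparison per house. Dosages and residual
# durations follow the figures in the text; the prices per kilogram and
# the sprayable area per house are assumed placeholders, not real quotes.

SPRAY_AREA_M2 = 250  # assumed interior wall/ceiling area of one house

insecticides = {
    # name: (dose in g/m2, assumed price in US$/kg, residual months)
    "DDT":          (1.5,     5.0, 9),
    "deltamethrin": (0.025, 300.0, 5),
    "malathion":    (1.5,     8.0, 4),
}

for name, (dose, price, residual) in insecticides.items():
    kg_per_house = dose * SPRAY_AREA_M2 / 1000.0
    cost_per_round = kg_per_house * price
    rounds_per_year = 12 / residual      # year-round transmission assumed
    print(f"{name}: ${cost_per_round:.2f}/round, "
          f"${cost_per_round * rounds_per_year:.2f}/house/year")
```

Under these assumptions the per-round costs come out roughly comparable, while the shorter residual periods of the alternatives multiply the annual cost, which is the trade-off described above.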
In 2019, the related compound difluorodiphenyltrichloroethane (DFDT) was described as a potentially more effective and therefore potentially safer alternative to DDT.
Non-chemical vector control
Before DDT, malaria was successfully eliminated or curtailed in several tropical areas by removing or poisoning mosquito breeding grounds and larva habitats, for example by eliminating standing water. These methods have seen little application in Africa for more than half a century. According to CDC, such methods are not practical in Africa because "Anopheles gambiae, one of the primary vectors of malaria in Africa, breeds in numerous small pools of water that form due to rainfall ... It is difficult, if not impossible, to predict when and where the breeding sites will form, and to find and treat them before the adults emerge."
The relative effectiveness of IRS versus other malaria control techniques (e.g. bednets or prompt access to anti-malarial drugs) varies and is dependent on local conditions.
A WHO study released in January 2008 found that mass distribution of insecticide-treated mosquito nets and artemisinin–based drugs cut malaria deaths in half in malaria-burdened Rwanda and Ethiopia. IRS with DDT did not play an important role in mortality reduction in these countries.
Vietnam has enjoyed declining malaria cases and a 97% mortality reduction after switching in 1991 from a poorly funded DDT-based campaign to a program based on prompt treatment, bednets and pyrethroid group insecticides.
In Mexico, effective and affordable chemical and non-chemical strategies were so successful that the Mexican DDT manufacturing plant ceased production due to lack of demand.
A review of fourteen studies in sub-Saharan Africa, covering insecticide-treated nets, residual spraying, chemoprophylaxis for children, chemoprophylaxis or intermittent treatment for pregnant women, a hypothetical vaccine and changing front-line drug treatment, found that decision-making was limited by the lack of information on the costs and effects of many interventions, the small number of cost-effectiveness analyses, the lack of evidence on the costs and effects of packages of measures, and the problems of generalizing or comparing studies that relate to specific settings and use different methodologies and outcome measures. The two cost-effectiveness estimates of DDT residual spraying that were examined did not provide an accurate estimate of the cost-effectiveness of DDT spraying, and the resulting estimates may not be good predictors of cost-effectiveness in current programs.
However, a study in Thailand found the cost per malaria case prevented of DDT spraying (US$1.87) to be 21% greater than the cost per case prevented of lambda-cyhalothrin–treated nets (US$1.54), casting some doubt on the assumption that DDT was the most cost-effective measure. The director of Mexico's malaria control program found similar results, declaring that it was 25% cheaper for Mexico to spray a house with synthetic pyrethroids than with DDT. However, another study in South Africa found generally lower costs for DDT spraying than for impregnated nets.
A more comprehensive approach to measuring the cost-effectiveness or efficacy of malaria control would measure not only the cost in dollars and the number of people saved, but also the ecological damage and negative human health impacts. One preliminary study found that the detriment to human health likely approaches or exceeds the beneficial reductions in malaria cases, except perhaps in epidemics. It is similar to the earlier study regarding estimated theoretical infant mortality caused by DDT, and is subject to the criticism also mentioned earlier.
A study in the Solomon Islands found that "although impregnated bed nets cannot entirely replace DDT spraying without substantial increase in incidence, their use permits reduced DDT spraying".
A comparison of four successful programs against malaria in Brazil, India, Eritrea and Vietnam does not endorse any single strategy but instead states, "Common success factors included conducive country conditions, a targeted technical approach using a package of effective tools, data-driven decision-making, active leadership at all levels of government, involvement of communities, decentralized implementation and control of finances, skilled technical and managerial capacity at national and sub-national levels, hands-on technical and programmatic support from partner agencies, and sufficient and flexible financing."
DDT-resistant mosquitoes may be susceptible to pyrethroids in some countries. However, pyrethroid resistance in Anopheles mosquitoes is on the rise, with resistant mosquitoes found in multiple countries.
| Technology | Pest and disease control | null |
8501 | https://en.wikipedia.org/wiki/Distributed%20computing | Distributed computing | Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers.
The components of a distributed system communicate and coordinate their actions by passing messages to one another in order to achieve a common goal. Three significant challenges of distributed systems are: maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail. Examples of distributed systems vary from SOA-based systems to microservices to massively multiplayer online games to peer-to-peer applications. Distributed systems cost significantly more than monolithic architectures, primarily due to increased needs for additional hardware, servers, gateways, firewalls, new subnets, proxies, and so on. Also, distributed systems are prone to the fallacies of distributed computing. On the other hand, a well-designed distributed system is more scalable, more durable, more changeable and more fine-tuned than a monolithic application deployed on a single machine. According to Marc Brooker: "a system is scalable in the range where marginal cost of additional workload is nearly constant." Serverless technologies fit this definition, but the total cost of ownership, not just the infrastructure cost, must be considered.
A computer program that runs within a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues.
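As an illustrative sketch of the message-passing mechanism, not tied to any particular connector, the following Python fragment exchanges messages between two processes through queues; a real distributed program would replace the in-process transport with HTTP, an RPC connector, or a message queue service:

```python
# Minimal queue-based message passing between two processes.
from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue) -> None:
    while True:
        msg = inbox.get()                # block until a message arrives
        if msg == "stop":
            break
        outbox.put(f"processed:{msg}")   # reply by sending a message back

if __name__ == "__main__":
    requests, replies = Queue(), Queue()
    p = Process(target=worker, args=(requests, replies))
    p.start()
    requests.put("task-1")
    print(replies.get())                 # -> processed:task-1
    requests.put("stop")
    p.join()
```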
Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing.
Introduction
The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing.
While there is no single definition of a distributed system, the following defining properties are commonly used:
There are several autonomous computational entities (computers or nodes), each of which has its own local memory.
The entities communicate with each other by message passing.
A distributed system may have a common goal, such as solving a large computational problem; the user then perceives the collection of autonomous processors as a unit. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.
Other typical properties of distributed systems include the following:
The system has to tolerate failures in individual computers.
The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.
Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.
Patterns
Here are common architectural patterns used for distributed computing:
Saga interaction pattern
Microservices
Event driven architecture
Events vs. Messages
In distributed systems, events represent a fact or state change (e.g., OrderPlaced) and are typically broadcast asynchronously to multiple consumers, promoting loose coupling and scalability. While events generally don’t expect an immediate response, acknowledgment mechanisms are often implemented at the infrastructure level (e.g., Kafka commit offsets, SNS delivery statuses) rather than being an inherent part of the event pattern itself.
In contrast, messages serve a broader role, encompassing commands (e.g., ProcessPayment), events (e.g., PaymentProcessed), and documents (e.g., DataPayload). Both events and messages can support various delivery guarantees, including at-least-once, at-most-once, and exactly-once, depending on the technology stack and implementation. However, exactly-once delivery is often achieved through idempotency mechanisms rather than true, infrastructure-level exactly-once semantics.
Delivery patterns for both events and messages include publish/subscribe (one-to-many) and point-to-point (one-to-one). While request/reply is technically possible, it is more commonly associated with messaging patterns rather than pure event-driven systems. Events excel at state propagation and decoupled notifications, while messages are better suited for command execution, workflow orchestration, and explicit coordination.
Modern architectures commonly combine both approaches, leveraging events for distributed state change notifications and messages for targeted command execution and structured workflows based on specific timing, ordering, and delivery requirements.
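A toy sketch of this distinction, using an in-memory dictionary as a hypothetical stand-in for a real broker and borrowing the example names above:

```python
# Events fan out to every subscriber; command messages go to one handler.
from collections import defaultdict, deque

subscribers = defaultdict(list)   # topic -> callbacks (publish/subscribe)
command_queue = deque()           # point-to-point queue (one consumer)

def publish_event(topic, payload):
    for callback in subscribers[topic]:   # every subscriber sees the event
        callback(payload)

def send_command(command):
    command_queue.append(command)         # exactly one worker will pop it

subscribers["OrderPlaced"].append(lambda e: print("billing saw", e))
subscribers["OrderPlaced"].append(lambda e: print("shipping saw", e))

publish_event("OrderPlaced", {"order_id": 42})
send_command({"type": "ProcessPayment", "order_id": 42})
print("worker handles", command_queue.popleft())
```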
Parallel and distributed computing
Distributed systems are groups of networked computers which share a common goal for their work.
The terms "concurrent computing", "parallel computing", and "distributed computing" have much overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:
In parallel computing, all processors may have access to a shared memory to exchange information between processors.
In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.
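A minimal sketch of the contrast, with threads standing in for the shared-memory case and processes for the message-passing case:

```python
# Threads within one process can read the same memory directly;
# separate processes have private memory and must exchange messages.
import threading
from multiprocessing import Pipe, Process

shared = {"value": 0}        # shared memory, visible to every thread

def thread_writer():
    shared["value"] = 42     # other threads observe this write directly

def process_writer(conn):
    conn.send(42)            # separate address space: must send a message

if __name__ == "__main__":
    t = threading.Thread(target=thread_writer)
    t.start(); t.join()
    print("read via shared memory:", shared["value"])

    parent, child = Pipe()
    p = Process(target=process_writer, args=(child,))
    p.start()
    print("received via message:", parent.recv())
    p.join()
```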
Schematically, the difference can be pictured as follows: (a) a typical distributed system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link; (b) viewed in more detail, each computer in the distributed system has its own local memory, and information can be exchanged only by passing messages from one node to another using the available communication links; (c) in a parallel system, each processor has direct access to a shared memory.
The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems (see below for more detailed discussion). Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.
History
The use of concurrent processes which communicate through message-passing has its roots in operating system architectures studied in the 1960s. The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s.
ARPANET, one of the predecessors of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET, and it is probably the earliest example of a large-scale distributed application. In addition to ARPANET (and its successor, the global Internet), other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems.
The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its counterpart International Symposium on Distributed Computing (DISC) was first held in Ottawa in 1985 as the International Workshop on Distributed Algorithms on Graphs.
Architectures
Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.
Whether these CPUs share resources or not determines a first distinction between three types of architecture:
Shared memory
Shared disk
Shared nothing.
Distributed programming typically falls into one of several basic architectures (client–server, three-tier, n-tier, or peer-to-peer) and into the broad categories of loose coupling and tight coupling.
Client–server: architectures where smart clients contact the server for data then format and display it to the users. Input at the client is committed back to the server when it represents a permanent change.
Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier.
n-tier: architectures that refer typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.
Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers. Examples of this architecture include BitTorrent and the bitcoin network.
Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a main/sub relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication: processes coordinate through shared state that they read from and write to a common database rather than through messages. This allows distributed computing functions to operate both within and beyond the parameters of a networked database.
Applications
Reasons for using distributed systems and distributed computing may include:
The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location.
There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example:
It can allow for much larger storage and memory, faster compute, and higher bandwidth than a single machine.
It can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system.
It may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer.
Examples
Examples of distributed systems and applications of distributed computing include the following:
telecommunications networks:
telephone networks and cellular networks,
computer networks such as the Internet,
wireless sensor networks,
routing algorithms;
network applications:
World Wide Web and peer-to-peer networks,
massively multiplayer online games and virtual reality communities,
distributed databases and distributed database management systems,
network file systems,
distributed cache such as burst buffers,
distributed information processing systems such as banking systems and airline reservation systems;
real-time process control:
aircraft control systems,
industrial control systems;
parallel computation:
scientific computing, including cluster computing, grid computing, cloud computing, and various volunteer computing projects,
distributed rendering in computer graphics.
Reactive distributed systems
According to the Reactive Manifesto, reactive distributed systems are responsive, resilient, elastic and message-driven, and are consequently more flexible, loosely coupled and scalable. Systems can be made reactive by applying the Reactive Principles, a set of principles and patterns intended to make cloud-native as well as edge-native applications more reactive.
Theoretical foundations
Models
Many tasks that we would like to automate by using a computer are of question–answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions.
Theoretical computer science seeks to understand which computational problems can be solved by using a computer (computability theory) and how efficiently (computational complexity theory). Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random-access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm.
The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer?
The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer.
Three viewpoints are commonly used:
Parallel algorithms in shared-memory model
All processors have access to a shared memory. The algorithm designer chooses the program executed by each processor.
One commonly used theoretical model is the parallel random-access machine (PRAM). However, the classical PRAM model assumes synchronous access to the shared memory.
Shared-memory programs can be extended to distributed systems if the underlying operating system encapsulates the communication between nodes and virtually unifies the memory across all individual systems.
A model that is closer to the behavior of real-world multiprocessor machines and takes into account the use of machine instructions, such as Compare-and-swap (CAS), is that of asynchronous shared memory. There is a wide body of work on this model, a summary of which can be found in the literature.
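Python has no hardware compare-and-swap instruction, so the sketch below simulates one with a lock; the point of the example is the retry loop that asynchronous shared-memory algorithms are built on:

```python
# The asynchronous shared-memory model builds algorithms from primitives
# such as compare-and-swap (CAS). Python has no hardware CAS, so a lock
# simulates the atomicity here; the retry loop is the idiom of interest.
import threading

class SimulatedCASCell:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()   # stands in for hardware atomicity

    def compare_and_swap(self, expected, new):
        """Atomically set the value to `new` iff it equals `expected`."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def load(self):
        with self._lock:
            return self._value

def increment(cell):
    while True:                  # classic lock-free retry loop
        old = cell.load()
        if cell.compare_and_swap(old, old + 1):
            return               # a thread that loses the race retries

cell = SimulatedCASCell()
threads = [threading.Thread(target=increment, args=(cell,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cell.load())               # -> 8
```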
Parallel algorithms in message-passing model
The algorithm designer chooses the structure of the network, as well as the program executed by each computer.
Models such as Boolean circuits and sorting networks are used. A Boolean circuit can be seen as a computer network: each gate is a computer that runs an extremely simple computer program. Similarly, a sorting network can be seen as a computer network: each comparator is a computer.
Distributed algorithms in message-passing model
The algorithm designer only chooses the computer program. All computers run the same program. The system must work correctly regardless of the structure of the network.
A commonly used model is a graph with one finite-state machine per node.
In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example.
An example
Consider the computational problem of finding a coloring of a given graph G. Different fields might take the following approaches:
Centralized algorithms
The graph G is encoded as a string, and the string is given as input to a computer. The computer program finds a coloring of the graph, encodes the coloring as a string, and outputs the result.
Parallel algorithms
Again, the graph G is encoded as a string. However, multiple computers can access the same string in parallel. Each computer might focus on one part of the graph and produce a coloring for that part.
The main focus is on high-performance computation that exploits the processing power of multiple computers in parallel.
Distributed algorithms
The graph G is the structure of the computer network. There is one computer for each node of G and one communication link for each edge of G. Initially, each computer only knows about its immediate neighbors in the graph G; the computers must exchange messages with each other to discover more about the structure of G. Each computer must produce its own color as output.
The main focus is on coordinating the operation of an arbitrary distributed system.
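The distributed viewpoint can be simulated round by round. The following sketch is a simple greedy scheme for illustration, not a published algorithm: in each synchronous round, every uncolored node whose ID is a local maximum among its uncolored neighbors picks the smallest color not used by an already-colored neighbor.

```python
# Toy synchronous simulation of distributed graph coloring: each node
# knows only its own ID and its neighbors, and decides its color locally.
graph = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}   # example instance
color = {}                                             # node -> chosen color

while len(color) < len(graph):    # one loop iteration = one message round
    decisions = {}
    for node, neighbours in graph.items():
        if node in color:
            continue
        if all(node > n for n in neighbours if n not in color):
            taken = {color[n] for n in neighbours if n in color}
            decisions[node] = min(c for c in range(len(graph)) if c not in taken)
    color.update(decisions)       # simultaneous deciders are non-adjacent

print(color)                      # each node "outputs" its own color
```

Because two adjacent uncolored nodes cannot both be local maxima, nodes deciding in the same round never conflict, and the globally maximal uncolored node always makes progress, so the loop terminates.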
While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is much interaction between the two fields. For example, the Cole–Vishkin algorithm for graph coloring was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm.
Moreover, a parallel algorithm can be implemented either in a parallel system (using shared memory) or in a distributed system (using message passing). The traditional boundary between parallel and distributed algorithms (choose a suitable network vs. run in any given network) does not lie in the same place as the boundary between parallel and distributed systems (shared memory vs. message passing).
Complexity measures
In parallel algorithms, yet another resource in addition to time and space is the number of computers. Indeed, often there is a trade-off between the running time and the number of computers: the problem can be solved faster if there are more computers running in parallel (see speedup). If a decision problem can be solved in polylogarithmic time by using a polynomial number of processors, then the problem is said to be in the class NC. The class NC can be defined equally well by using the PRAM formalism or Boolean circuits—PRAM machines can simulate Boolean circuits efficiently and vice versa.
In the analysis of distributed algorithms, more attention is usually paid to communication operations than to computational steps. Perhaps the simplest model of distributed computing is a synchronous system where all nodes operate in a lockstep fashion. This model is commonly known as the LOCAL model. During each communication round, all nodes in parallel (1) receive the latest messages from their neighbors, (2) perform arbitrary local computation, and (3) send new messages to their neighbors. In such systems, a central complexity measure is the number of synchronous communication rounds required to complete the task.
This complexity measure is closely related to the diameter of the network. Let D be the diameter of the network. On the one hand, any computable problem can be solved trivially in a synchronous distributed system in approximately 2D communication rounds: simply gather all information in one location (D rounds), solve the problem, and inform each node about the solution (D rounds).
On the other hand, if the running time of the algorithm is much smaller than D communication rounds, then the nodes in the network must produce their output without having the possibility to obtain information about distant parts of the network. In other words, the nodes must make globally consistent decisions based on information that is available in their local D-neighbourhood. Many distributed algorithms are known with the running time much smaller than D rounds, and understanding which problems can be solved by such algorithms is one of the central research questions of the field. Typically an algorithm which solves a problem in polylogarithmic time in the network size is considered efficient in this model.
Another commonly used measure is the total number of bits transmitted in the network (cf. communication complexity). The features of this concept are typically captured with the CONGEST(B) model, which is similarly defined as the LOCAL model, but where single messages can only contain B bits.
Other problems
Traditional computational problems take the perspective that the user asks a question, a computer (or a distributed system) processes the question, then produces an answer and stops. However, there are also problems where the system is required not to stop, including the dining philosophers problem and other similar mutual exclusion problems. In these problems, the distributed system is supposed to continuously coordinate the use of shared resources so that no conflicts or deadlocks occur.
There are also fundamental challenges that are unique to distributed computing, for example those related to fault-tolerance. Examples of related problems include consensus problems, Byzantine fault tolerance, and self-stabilisation.
Much research is also focused on understanding the asynchronous nature of distributed systems:
Synchronizers can be used to run synchronous algorithms in asynchronous systems.
Logical clocks provide a causal happened-before ordering of events (a minimal sketch follows this list).
Clock synchronization algorithms provide globally consistent physical time stamps.
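As an illustration of logical clocks, here is a minimal Lamport clock; real implementations typically also attach a process ID to break ties, which this sketch omits:

```python
# Minimal Lamport logical clock: a counter that ticks on local events and
# advances past any timestamp received in a message, so that timestamps
# respect the happened-before ordering.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        """A local event happened."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp to attach to an outgoing message."""
        return self.tick()

    def receive(self, message_time):
        """Merge the sender's timestamp on message arrival."""
        self.time = max(self.time, message_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
a.tick()                 # a's clock: 1
t = a.send()             # a's clock: 2; message carries timestamp 2
print(b.receive(t))      # b jumps to max(0, 2) + 1 = 3
```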
In distributed systems, latency is often better measured at a high percentile, such as the 99th, because the median and the average can be misleading.
Election
Coordinator election (or leader election) is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are either unaware which node will serve as the "coordinator" (or leader) of the task, or unable to communicate with the current coordinator. After a coordinator election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task coordinator.
The network nodes communicate among themselves in order to decide which of them will get into the "coordinator" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the coordinator.
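A simplified simulation of this identity-comparison idea on a unidirectional ring; this maximum-propagation sketch illustrates the principle and is not a message-optimal election algorithm:

```python
# Election by identity comparison on a unidirectional ring: each node
# forwards the largest ID it has seen; after enough rounds every node
# agrees that the node with the highest identity is the coordinator.
node_ids = [7, 3, 12, 9, 5]          # unique, comparable identities

def ring_election(ids):
    n = len(ids)
    candidate = list(ids)            # value currently held at each node
    for _ in range(n):               # n rounds let the maximum circulate
        candidate = [max(ids[i], candidate[(i - 1) % n]) for i in range(n)]
    return candidate                 # every node now knows the coordinator

print(ring_election(node_ids))       # -> [12, 12, 12, 12, 12]
```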
The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost.
Coordinator election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing.
Many other algorithms were suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the coordinator election algorithm was suggested by Korach, Kutten, and Moran.
In order to perform coordination, distributed systems employ the concept of coordinators. The coordinator election problem is to choose a process from among a group of processes on different processors in a distributed system to act as the central coordinator. Several central coordinator election algorithms exist.
Properties of distributed systems
So far the focus has been on designing a distributed system that solves a given problem. A complementary research problem is studying the properties of a given distributed system.
The halting problem is an analogous example from the field of centralised computation: we are given a computer program and the task is to decide whether it halts or runs forever. The halting problem is undecidable in the general case, and naturally understanding the behaviour of a computer network is at least as hard as understanding the behaviour of one computer.
However, there are many interesting special cases that are decidable. In particular, it is possible to reason about the behaviour of a network of finite-state machines. One example is telling whether a given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a deadlock. This problem is PSPACE-complete, i.e., it is decidable, but not likely that there is an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large networks.
| Technology | Computer architecture concepts | null |
8506 | https://en.wikipedia.org/wiki/DirectX | DirectX | Microsoft DirectX is a collection of application programming interfaces (APIs) for handling tasks related to multimedia, especially game programming and video, on Microsoft platforms. Originally, the names of these APIs all began with "Direct", such as Direct3D, DirectDraw, DirectMusic, DirectPlay, DirectSound, and so forth. The name DirectX was coined as a shorthand term for all of these APIs (the X standing in for the particular API names) and soon became the name of the collection. When Microsoft later set out to develop a gaming console, the X was used as the basis of the name Xbox to indicate that the console was based on DirectX technology. The X initial has been carried forward in the naming of APIs designed for the Xbox such as XInput and the Cross-platform Audio Creation Tool (XACT), while the DirectX pattern has been continued for Windows APIs such as Direct2D and DirectWrite.
Direct3D (the 3D graphics API within DirectX) is widely used in the development of video games for Microsoft Windows and the Xbox line of consoles. Direct3D is also used by other software applications for visualization and graphics tasks such as CAD/CAM engineering. As Direct3D is the most widely publicized component of DirectX, it is common to see the names "DirectX" and "Direct3D" used interchangeably.
The DirectX software development kit (SDK) consists of runtime libraries in redistributable binary form, along with accompanying documentation and headers for use in coding. Originally, the runtimes were only installed by games or explicitly by the user. Windows 95 did not launch with DirectX, but DirectX was included with Windows 95 OEM Service Release 2. Windows 98 and Windows NT 4.0 both shipped with DirectX, as has every version of Windows released since. The SDK is available as a free download. While the runtimes are proprietary, closed-source software, source code is provided for most of the SDK samples. Starting with the release of Windows 8 Developer Preview, DirectX SDK has been integrated into Windows SDK.
Development history
In late 1994, Microsoft was ready to release Windows 95, its next operating system. An important factor in its value to consumers was the programs that would be able to run on it. Microsoft employee Alex St. John had been in discussions with various game developers, asking how likely they would be to bring their MS-DOS games to Windows 95, and found the responses mostly negative, since programmers had found that the Windows environment did not provide the necessary features which were available under MS-DOS using BIOS routines or direct hardware access. There were also strong fears about compatibility; a notable case was Disney's Animated Storybook: The Lion King, which was based on the WinG programming interface. Because of numerous incompatible graphics drivers from new Compaq computers, which had not been tested with the WinG interface that came bundled with the game, it crashed so frequently on many desktop systems that parents flooded Disney's call-in help lines.
St. John recognized that this resistance to game development under Windows would be a limitation, and recruited two additional engineers, Craig Eisler and Eric Engstrom, to develop a better solution to get more programmers to develop games for Windows. The project was codenamed the Manhattan Project, after the World War II project of the same name, and the idea was to displace the Japanese-developed video game consoles with personal computers running Microsoft's operating system. It initially used the radiation symbol as its logo, but Microsoft asked the team to change it. Management did not approve of the project, as they were already writing off Windows as a gaming platform, but the three remained committed to its development. Their rebellious nature led Brad Silverberg, the senior vice president of Microsoft's office products, to name the trio the "Beastie Boys".
Most of the work by the three was done around other assigned projects, starting near the end of 1994. Within four months and with input from several hardware manufacturers, the team had developed the first set of application programming interfaces (APIs), which they presented at the 1995 Game Developers Conference. The SDK included libraries implementing DirectDraw for bit-mapped graphics, DirectSound for audio, and DirectPlay for communication between players over a network. Furthermore, an extended joystick API already present in Windows 95 was documented for the first time as DirectInput, while a description of how to implement the immediate start of the installation procedure of a software title after inserting its CD-ROM, a feature called AutoPlay, was also part of the SDK. The "Direct" part of the library was so named because these routines bypassed existing core Windows 95 routines and accessed the computer hardware only via a hardware abstraction layer (HAL). Though the team had named it the "Game SDK" (software development kit), the name "DirectX" came from a journalist who had mocked the naming scheme of the various libraries. The team opted to keep that naming scheme and call the project DirectX.
The first version of DirectX was released in September 1995 as the Windows Game SDK. Its DirectDraw component was the Win32 replacement for the DCI and WinG APIs for Windows 3.1. DirectX allowed all versions of Microsoft Windows, starting with Windows 95, to incorporate high-performance multimedia. Eisler wrote about the frenzy to build DirectX 1 through 5 in his blog.
To get more developers on board DirectX, Microsoft approached id Software's John Carmack and offered to port Doom and Doom 2 from MS-DOS to DirectX, free of charge, with id retaining all publishing rights to the game. Carmack agreed, and Microsoft's Gabe Newell led the porting project. The first game was released as Doom 95 in August 1996, the first published DirectX game. Microsoft promoted the game heavily with Bill Gates appearing in ads for the title.
DirectX 2.0 became a built-in component of Windows with the releases of Windows 95 OSR2 and Windows NT 4.0 in mid-1996. Since Windows 95 itself was still new and few games had been released for it, Microsoft engaged in heavy promotion of DirectX to developers, who were generally distrustful of Microsoft's ability to build a gaming platform in Windows. Alex St. John, the evangelist for DirectX, staged an elaborate event at the 1996 Computer Game Developers Conference, which game developer Jay Barnson described as having a Roman theme, including real lions, togas, and something resembling an indoor carnival. It was at this event that Microsoft first introduced Direct3D and demonstrated multiplayer MechWarrior 2 being played over the Internet.
The DirectX team faced the challenging task of testing each DirectX release against an array of computer hardware and software. A variety of different graphics cards, audio cards, motherboards, CPUs, input devices, games, and other multimedia applications were tested with each beta and final release. The DirectX team also built and distributed tests that allowed the hardware industry to confirm that new hardware designs and driver releases would be compatible with DirectX.
Prior to DirectX, Microsoft had added OpenGL to its Windows NT platform. OpenGL had been designed by Silicon Graphics, Inc. as a cross-platform, window-system-independent software interface to graphics hardware, to bring 3D graphics programming into the mainstream of application programming. It could also be used for 2D graphics and imaging, and it was controlled by the OpenGL Architecture Review Board (ARB), which included Microsoft. Direct3D was intended to be a Microsoft-controlled alternative to OpenGL, focused initially on game use. As 3D gaming grew, game developers discovered that OpenGL could be used effectively for game development. At that point a "battle" began between supporters of the cross-platform OpenGL and the Windows-only Direct3D. Incidentally, OpenGL was supported at Microsoft by the DirectX team. If a developer chose to use the OpenGL 3D graphics API in computer games, the other APIs of DirectX besides Direct3D were often combined with OpenGL, because OpenGL does not include all of DirectX's functionality (such as sound or joystick support).
In a console-specific version, DirectX was used as a basis for Microsoft's Xbox, Xbox 360 and Xbox One console API. The API was developed jointly between Microsoft and Nvidia, which developed the custom graphics hardware used by the original Xbox. The Xbox API was similar to DirectX version 8.1, but is non-updateable like other console technologies. The Xbox was code named DirectXbox, but this was shortened to Xbox for its commercial name.
In 2002, Microsoft released DirectX 9 with support for the use of much longer shader programs than before with pixel and vertex shader version 2.0. Microsoft has continued to update the DirectX suite since then, introducing Shader Model 3.0 in DirectX 9.0c, released in August 2004.
As of April 2005, DirectShow was removed from DirectX and moved to the Microsoft Platform SDK instead.
DirectX has been confirmed to be present in Microsoft's Windows Phone 8.
Real-time raytracing was announced as DXR in 2018. Support for compiling HLSL to SPIR-V was also added in the DirectX Shader Compiler the same year.
Components
DirectX is composed of multiple APIs:
Direct3D (D3D): Real-time 3D rendering API
DXGI: Enumerates adapters and monitors and manages swap chains for Direct3D 10 and later.
Direct2D: 2D graphics API
DirectWrite: Text rendering API
DirectCompute: API for general-purpose computing on graphics processing units
DirectX Diagnostics (DxDiag): A tool for diagnosing and generating reports on components related to DirectX, such as audio, video, and input drivers
XACT3: High-level audio API
XAudio2: Low-level audio API
DirectX Raytracing (DXR): Real-time raytracing API
DirectStorage: GPU-oriented file I/O API
DirectML: GPU-accelerated machine learning and artificial intelligence API
DirectSR: GPU-accelerated resolution upscaling API
Media Foundation
DirectX Video Acceleration for accelerated video playback
Microsoft has deprecated the following components:
DirectX Media: Consists of:
DirectAnimation for 2D/3D web animation, DirectShow for multimedia playback and streaming media
DirectX Media Objects: Support for streaming objects such as encoders, decoders, and effects (Deprecated in favor of Media Foundation Transforms; MFTs)
DirectX Transform for web interactivity, and Direct3D Retained Mode for higher level 3D graphics
DirectX plugins for audio signal processing
DirectDraw: 2D graphics API (Deprecated in favor of Direct2D)
DirectInput: Input API for interfacing with keyboards, mice, joysticks, and game controllers (Deprecated after version 8 in favor of XInput for Xbox 360 controllers or standard WM_INPUT window message processing for keyboard and mouse input)
DirectPlay: Network API for communication over a local-area or wide-area network (Deprecated after version 8 in favor of Games for Windows Live and Xbox Live)
DirectSound: Audio API (Deprecated since DirectX 8 in favor of XAudio2 and XACT3)
DirectSound3D (DS3D): 3D sounds API (Deprecated since DirectX 8 in favor of XAudio2 and XACT3)
DirectMusic: Components for playing soundtracks authored in DirectMusic Producer (Deprecated since DirectX 8 in favor of XAudio2 and XACT3)
DirectX functionality is provided in the form of COM-style objects and interfaces. Additionally, while not DirectX components themselves, managed objects have been built on top of some parts of DirectX, such as Managed Direct3D and the XNA graphics library on top of Direct3D 9.
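As a hedged illustration of that COM style, the sketch below creates a DXGI factory and enumerates the adapters it exposes, releasing each interface by hand. It is a minimal example rather than SDK sample code, with error handling reduced to bare checks (link against dxgi.lib).

    // Minimal sketch of DirectX's COM-style object model: create a DXGI
    // factory, walk the adapters it enumerates, and Release() every interface.
    #include <dxgi.h>
    #include <cwchar>

    int main() {
        IDXGIFactory* factory = nullptr;
        if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
            return 1;

        IDXGIAdapter* adapter = nullptr;
        for (UINT i = 0;
             factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
            DXGI_ADAPTER_DESC desc;
            adapter->GetDesc(&desc);
            wprintf(L"adapter %u: %ls\n", i, desc.Description);
            adapter->Release();              // manual COM reference counting
        }
        factory->Release();
        return 0;
    }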
Microsoft distributes a debugging tool for DirectX called "PIX".
Versions
DirectX 9
Introduced by Microsoft in 2002, DirectX 9 was a significant release in the DirectX family. It brought many important features and enhancements to the graphics capabilities of Windows. At the time of its release, it supported Windows 98, Windows Me, Windows 2000, and Windows XP. As of August 2024 it remains supported by all subsequent versions of Windows for backward compatibility.
One of the key features introduced in DirectX 9 was Shader Model 2.0, which included Pixel Shader 2.0 and Vertex Shader 2.0. These allowed for more complex and realistic graphics rendering. It also brought much needed performance improvements through better hardware acceleration capabilities, and better utilization of GPU resources. It also introduced HLSL, which provided a more accessible way for developers to produce shaders.
DirectX 9.0c was an update to the original DirectX 9 release, and it has been revised repeatedly over the years, affecting its compatibility with older operating systems. As of January 2007, Windows 2000 and Windows XP became the minimum required operating systems, meaning support was officially dropped for Windows 98 and Windows Me. As of August 2024, DirectX 9.0c is still regularly updated.
Windows XP SP2 and newer include DirectX 9.0c, but may require a newer DirectX runtime redistributable installation for DirectX 9.0c applications compiled with the February 2005 DirectX 9.0 SDK or newer.
DirectX 9 had a significant impact on game development. Many games from the mid-2000s to early 2010s were developed using DirectX 9 and it became a standard target for developers. Even today, some games still use DirectX 9 as an option for older or less powerful hardware.
DirectX 10
A major update to DirectX API, DirectX 10 ships with and is only available with Windows Vista (launched in late 2006) and later. Previous versions of Windows such as Windows XP are not able to run DirectX 10-exclusive applications. Rather, programs that are run on a Windows XP system with DirectX 10 hardware simply resort to the DirectX 9.0c code path, the latest available for Windows XP computers.
Changes for DirectX 10 were extensive. Many former parts of DirectX API were deprecated in the latest DirectX SDK and are preserved for compatibility only: DirectInput was deprecated in favor of XInput, DirectSound was deprecated in favor of the Cross-platform Audio Creation Tool system (XACT) and additionally lost support for hardware accelerated audio, since the Vista audio stack renders sound in software on the CPU. The DirectPlay DPLAY.DLL was also removed and was replaced with dplayx.dll; games that rely on this DLL must duplicate it and rename it to dplay.dll.
In order to achieve backwards compatibility, DirectX in Windows Vista contains several versions of Direct3D:
Direct3D 9: emulates Direct3D 9 behavior as it was on Windows XP. Details and advantages of Vista's Windows Display Driver Model are hidden from the application if WDDM drivers are installed. This is the only API available if there are only XP graphic drivers (XDDM) installed, after an upgrade to Vista for example.
Direct3D 9Ex (known internally during Windows Vista development as 9.0L or 9.L): allows full access to the new capabilities of WDDM (if WDDM drivers are installed) while maintaining compatibility for existing Direct3D applications. The Windows Aero user interface relies on D3D 9Ex.
Direct3D 10: Designed around the new driver model in Windows Vista and featuring a number of improvements to rendering capabilities and flexibility, including Shader Model 4.
Direct3D 10.1 is an incremental update of Direct3D 10.0 which shipped with, and required, Windows Vista Service Pack 1, released in February 2008. This release mainly sets a few more image quality standards for graphics vendors, while giving developers more control over image quality. It also adds support for cube map arrays, separate blend modes per MRT, coverage mask export from a pixel shader, and the ability to run the pixel shader per sample, provides access to multi-sampled depth buffers, and requires that the video card support Shader Model 4.1 or higher and 32-bit floating-point operations. Direct3D 10.1 still fully supports Direct3D 10 hardware, but in order to utilize all of the new features, updated hardware is required.
DirectX 11
Microsoft unveiled DirectX 11 at the Gamefest 08 event in Seattle. It launched for Windows Vista via the Platform Update on October 27, 2009, a week after the initial release of Windows 7, which shipped with Direct3D 11 as a base standard.
Major scheduled features included GPGPU support (DirectCompute) and Direct3D 11 with tessellation support and improved multi-threading support, to assist video game developers in developing games that better utilize multi-core processors. Parts of the new API, such as multi-threaded resource handling, can be supported on Direct3D 9/10/10.1-class hardware. Hardware tessellation and Shader Model 5.0 require Direct3D 11 supporting hardware. Microsoft has since released the Direct3D 11 Technical Preview. Direct3D 11 is a strict superset of Direct3D 10.1: all hardware and API features of version 10.1 are retained, and new features are added only when necessary for exposing new functionality. This helps to keep backwards compatibility with previous versions of DirectX.
Four updates for DirectX 11 were released:
DirectX 11.1 is included in Windows 8. It supports WDDM 1.2 for increased performance, features improved integration of Direct2D (now at version 1.1), Direct3D, and DirectCompute, and includes DirectXMath, XAudio2, and XInput libraries from the XNA framework. It also features stereoscopic 3D support for gaming and video. DirectX 11.1 was also partially backported to Windows 7, via the Windows 7 platform update.
DirectX 11.2 is included in Windows 8.1 (including the RT version) and Windows Server 2012 R2. It added some new features to Direct2D like geometry realizations. It also added swap chain composition, which allows some elements of the scene to be rendered at lower resolutions and then composited via hardware overlay with other parts rendered at higher resolution.
DirectX 11.X is a superset of DirectX 11.2 running on the Xbox One. It actually includes some features, such as draw bundles, that were later announced as part of DirectX 12.
DirectX 11.3 was announced along with DirectX 12 at GDC and released in 2015. It is meant to complement DirectX 12 as a higher-level alternative. It is included with Windows 10.
DirectX 12
DirectX 12 was announced by Microsoft at GDC on March 20, 2014, and was officially launched alongside Windows 10 on July 29, 2015.
The primary feature highlight for the new release of DirectX was the introduction of advanced low-level programming APIs for Direct3D 12 which can reduce driver overhead. Developers are now able to implement their own command lists and buffers to the GPU, allowing for more efficient resource utilization through parallel computation. Lead developer Max McMullen stated that the main goal of Direct3D 12 is to achieve "console-level efficiency on phone, tablet and PC". The release of Direct3D 12 comes alongside other initiatives for low-overhead graphics APIs including AMD's Mantle for AMD graphics cards, Apple's Metal for iOS and macOS and Khronos Group's cross-platform Vulkan.
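A minimal sketch of that explicit model is below: the application itself creates the command queue, allocator, and command list, then submits work, rather than the driver managing submission behind the scenes. This is illustrative boilerplate only, with no error handling and nothing actually rendered (link against d3d12.lib).

    // Direct3D 12's explicit command submission model in miniature: the
    // application owns the queue, allocator, and command list it records into.
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        D3D12_COMMAND_QUEUE_DESC queueDesc = {};
        queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

        ComPtr<ID3D12CommandAllocator> allocator;
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocator));
        ComPtr<ID3D12GraphicsCommandList> list;
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocator.Get(), nullptr, IID_PPV_ARGS(&list));

        // GPU commands would be recorded here, in parallel with other lists.
        list->Close();
        ID3D12CommandList* lists[] = { list.Get() };
        queue->ExecuteCommandLists(1, lists);   // explicit submission by the app
        return 0;
    }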
Multiadapter support features in DirectX 12, allowing developers to utilize multiple GPUs on a system simultaneously; multi-GPU support was previously dependent on vendor implementations such as AMD CrossFireX or NVIDIA SLI.
Implicit multiadapter support works in a similar manner to previous versions of DirectX, where frames are rendered alternately across linked GPUs of similar compute power.
Explicit multiadapter provides two distinct API patterns to developers. Linked GPUs allow DirectX to view graphics cards in SLI or CrossFireX as a single GPU and use the combined resources, whereas unlinked GPUs allow GPUs from different vendors to be utilized by DirectX, such as supplementing the dedicated GPU with the integrated GPU on the CPU, or combining AMD and NVIDIA cards. However, elaborate mixed multi-GPU setups require significantly more attentive developer support.
DirectX 12 is supported on all Fermi and later Nvidia GPUs, on AMD's GCN-based chips and on Intel's Haswell and later processors' graphics units.
At SIGGRAPH 2014, Intel released a demo showing a computer generated asteroid field, in which DirectX 12 was claimed to be 50–70% more efficient than DirectX 11 in rendering speed and CPU power consumption.
Ashes of the Singularity was the first publicly available game to utilize DirectX 12. Testing by Ars Technica in August 2015 revealed slight performance regressions in DirectX 12 over DirectX 11 mode for the Nvidia GeForce GTX 980 Ti, whereas the AMD Radeon R9 290X achieved consistent performance improvements of up to 70% under DirectX 12, and in some scenarios the AMD card outperformed the more powerful Nvidia card under DirectX 12. The performance discrepancies may be due to poor Nvidia driver optimizations for DirectX 12, or even hardware limitations of the card, which was optimized for DirectX 11 serial execution; however, the exact cause remains unclear.
The performance improvements of DirectX 12 on the Xbox are not as substantial as on the PC.
In March 2018, DirectX Raytracing (DXR) was announced, capable of real-time ray-tracing on supported hardware, and the DXR API was added in the Windows 10 October 2018 update.
In 2019 Microsoft announced the arrival of DirectX 12 to Windows 7 but only as a plug-in for certain game titles.
DirectX 12 Ultimate
Microsoft revealed DirectX 12 Ultimate in March 2020. DirectX 12 Ultimate unifies the API into a common library for both Windows 10 computers and the Xbox Series X and other ninth-generation Xbox consoles. New features in Ultimate include DirectX Raytracing 1.1; Variable Rate Shading, which gives programmers control over the level of detail of shading depending on design choices; Mesh Shaders; and Sampler Feedback.
Version history
The version numbers as reported by Microsoft's DxDiag tool (version 4.09.0000.0900 and higher) use the x.xx.xxxx.xxxx format. However, the DirectX and Windows XP MSDN page claims that the registry has always used the x.xx.xx.xxxx format. Therefore, where a version is listed as '4.09.00.0904', Microsoft's DxDiag tool may report it as '4.09.0000.0904'.
Compatibility
Various releases of Windows have included and supported various versions of DirectX, allowing newer versions of the operating system to continue running applications designed for earlier versions of DirectX until those versions can be gradually phased out in favor of newer APIs, drivers, and hardware.
APIs such as Direct3D and DirectSound need to interact with hardware, and they do this through a device driver. Hardware manufacturers have to write these drivers for a particular DirectX version's device driver interface (DDI), and test each individual piece of hardware to make it DirectX compatible. Some hardware devices have only DirectX compatible drivers (in other words, one must install DirectX in order to use that hardware). Early versions of DirectX included an up-to-date library of all of the DirectX compatible drivers currently available. This practice was stopped, however, in favor of the web-based Windows Update driver-update system, which allowed users to download only the drivers relevant to their hardware, rather than the entire library.
Prior to DirectX 10, DirectX runtime was designed to be backward compatible with older drivers, meaning that newer versions of the APIs were designed to interoperate with older drivers written against a previous version's DDI. The application programmer had to query the available hardware capabilities using a complex system of "cap bits" each tied to a particular hardware feature. Direct3D 7 and earlier would work on any version of the DDI, Direct3D 8 requires a minimum DDI level of 6 and Direct3D 9 requires a minimum DDI level of 7.
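The cap-bits pattern looked roughly like the sketch below: the application created the Direct3D 9 object, fetched the driver's capability structure, and tested individual bits before relying on a feature. A hedged minimal example, not SDK sample code (link against d3d9.lib):

    // Pre-DirectX 10 capability discovery: query D3DCAPS9 and test "cap bits".
    #include <d3d9.h>
    #include <cstdio>

    int main() {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return 1;

        D3DCAPS9 caps;
        d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

        // Each capability is a packed version number or a bit tested by hand.
        if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
            std::puts("pixel shader 2.0 or better available");
        if (caps.TextureCaps & D3DPTEXTURECAPS_POW2)
            std::puts("driver restricts textures to power-of-two sizes");

        d3d->Release();
        return 0;
    }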
However, the Direct3D 10 runtime in Windows Vista cannot run on older hardware drivers due to the significantly updated DDI, which requires a unified feature set and abandons the use of "cap bits".
Direct3D 10.1 introduces "feature levels" 10_0 and 10_1, which allow use of only the hardware features defined in the specified version of Direct3D API. Direct3D 11 adds level 11_0 and "10 Level 9" - a subset of the Direct3D 10 API designed to run on Direct3D 9 hardware, which has three feature levels (9_1, 9_2 and 9_3) grouped by common capabilities of "low", "med" and "high-end" video cards; the runtime directly uses Direct3D 9 DDI provided in all WDDM drivers. Feature level 11_1 has been introduced with Direct3D 11.1.
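In code, feature levels replace per-bit querying with a single negotiation: the application passes the list of levels it can handle and receives the highest one the hardware supports. A minimal sketch, illustrative only (link against d3d11.lib):

    // Direct3D 11 feature-level negotiation: request a range of levels and
    // let the runtime grant the highest one the installed hardware supports.
    #include <d3d11.h>
    #include <cstdio>

    int main() {
        const D3D_FEATURE_LEVEL wanted[] = {
            D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1,
            D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3,
            D3D_FEATURE_LEVEL_9_1,
        };
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* context = nullptr;
        D3D_FEATURE_LEVEL granted;
        const HRESULT hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            wanted, sizeof(wanted) / sizeof(wanted[0]), D3D11_SDK_VERSION,
            &device, &granted, &context);
        if (SUCCEEDED(hr)) {
            std::printf("granted feature level: 0x%04x\n", granted);
            context->Release();
            device->Release();
        }
        return 0;
    }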
.NET Framework
In 2002, Microsoft released a version of DirectX compatible with the Microsoft .NET Framework, allowing programmers to take advantage of DirectX functionality from within .NET applications using compatible languages such as Managed C++ or C#. This API was known as "Managed DirectX" (or MDX for short), and was claimed to operate at 98% of the performance of the underlying native DirectX APIs. In December 2005, February 2006, April 2006, and August 2006, Microsoft released successive updates to this library, culminating in a beta version called Managed DirectX 2.0. While Managed DirectX 2.0 consolidated functionality that had previously been scattered over multiple assemblies into a single assembly, thus simplifying dependencies on it for software developers, development on this version has subsequently been discontinued, and it is no longer supported. The Managed DirectX 2.0 library expired on October 5, 2006.
At GDC 2006, Microsoft presented the XNA Framework, a new managed version of DirectX (similar but not identical to Managed DirectX) intended to assist development of games by making it easier to integrate DirectX, HLSL and other tools in one package. It also supports the execution of managed code on the Xbox 360. The XNA Game Studio Express RTM was made available on December 11, 2006, as a free download for Windows XP. Unlike the DirectX runtime, Managed DirectX, the XNA Framework, and the Xbox 360 APIs (XInput, XACT, etc.) have not shipped as part of Windows. Developers are expected to redistribute the runtime components along with their games or applications.
No Microsoft product including the latest XNA releases provides DirectX 10 support for the .NET Framework.
The other approach for DirectX in managed languages is to use third-party libraries like:
SlimDX, an open source library for DirectX programming on the .NET Framework
SharpDX, which is an open source project delivering the full DirectX API for .NET on all Windows platforms, allowing the development of high performance game, 2D and 3D graphics rendering as well as real-time sound applications
DirectShow.NET for the DirectShow subset
Windows API Code Pack for .NET Framework, an open source library from Microsoft.
Alternatives
There are alternatives to the DirectX family of APIs, with OpenGL, its successor Vulkan, Metal and Mantle having the most features comparable to Direct3D. Examples of other APIs include SDL, Allegro, OpenMAX, OpenML, OpenAL, OpenCL, FMOD, SFML etc. Many of these libraries are cross-platform or have open codebases. There are also alternative implementations that aim to provide the same API, such as the one in Wine. Furthermore, the developers of ReactOS are trying to reimplement DirectX under the name "ReactX".
| Technology | Software development: General | null |
8519 | https://en.wikipedia.org/wiki/Data%20structure | Data structure | In computer science, a data structure is a data organization and storage format that is usually chosen for efficient access to data. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data, i.e., it is an algebraic structure about data.
Usage
Data structures serve as the basis for abstract data types (ADT). The ADT defines the logical form of the data type. The data structure implements the physical form of the data type.
Different types of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, relational databases commonly use B-tree indexes for data retrieval, while compiler implementations usually use hash tables to look up identifiers.
Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Data structures can be used to organize the storage and retrieval of information stored in both main memory and secondary memory.
Implementation
Data structures can be implemented using a variety of programming languages and techniques, but they all share the common goal of efficiently organizing and storing data. Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by a pointer—a bit string, representing a memory address, that can be itself stored in memory and manipulated by the program. Thus, the array and record data structures are based on computing the addresses of data items with arithmetic operations, while the linked data structures are based on storing addresses of data items within the structure itself. This approach to data structuring has profound implications for the efficiency and scalability of algorithms. For instance, the contiguous memory allocation in arrays facilitates rapid access and modification operations, leading to optimized performance in sequential data processing scenarios.
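The contrast can be made concrete with a few lines of C++ (an illustrative sketch): an array element's address is computed arithmetically from the base pointer, while a linked structure stores the address of the next item inside the data itself.

    // Two addressing strategies: arithmetic on a base pointer (array) versus
    // addresses stored inside the nodes themselves (linked structure).
    #include <cstdio>

    struct Node {
        int value;
        Node* next;   // the following item's address, kept in the data itself
    };

    int main() {
        int arr[4] = {10, 20, 30, 40};
        // arr[2] is literally *(arr + 2): base address plus 2 * sizeof(int).
        std::printf("arr[2] = %d at %p\n", arr[2], (void*)(arr + 2));

        Node c{30, nullptr}, b{20, &c}, a{10, &b};
        for (Node* p = &a; p != nullptr; p = p->next)  // follow stored addresses
            std::printf("node %d at %p\n", p->value, (void*)p);
        return 0;
    }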
The implementation of a data structure usually requires writing a set of procedures that create and manipulate instances of that structure. The efficiency of a data structure cannot be analyzed separately from those operations. This observation motivates the theoretical concept of an abstract data type, a data structure that is defined indirectly by the operations that may be performed on it, and the mathematical properties of those operations (including their space and time cost).
Examples
There are numerous types of data structures, generally built upon simpler primitive data types. Well known examples are:
An array is a number of elements in a specific order, typically all of the same type (depending on the language, individual elements may either all be forced to be the same type, or may be of almost any type). Elements are accessed using an integer index to specify which element is required. Typical implementations allocate contiguous memory words for the elements of arrays (but this is not always a necessity). Arrays may be fixed-length or resizable.
A linked list (also just called list) is a linear collection of data elements of any type, called nodes, where each node has itself a value, and points to the next node in the linked list. The principal advantage of a linked list over an array is that values can always be efficiently inserted and removed without relocating the rest of the list. Certain other operations, such as random access to a certain element, are however slower on lists than on arrays.
A record (also called tuple or struct) is an aggregate data structure. A record is a value that contains other values, typically in fixed number and sequence and typically indexed by names. The elements of records are usually called fields or members. In the context of object-oriented programming, records are known as plain old data structures to distinguish them from objects.
Hash tables, also known as hash maps, are data structures that provide fast retrieval of values based on keys. They use a hashing function to map keys to indexes in an array, allowing for constant-time access in the average case. Hash tables are commonly used in dictionaries, caches, and database indexing. However, hash collisions can occur, which can impact their performance. Techniques like chaining and open addressing are employed to handle collisions.
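A minimal chained hash table along those lines might look like the following sketch; the bucket count and names are arbitrary choices, not a canonical implementation.

    // Chained hash table: a hash function maps each key to a bucket index,
    // and colliding keys are kept on a linked chain within that bucket.
    #include <iostream>
    #include <list>
    #include <string>
    #include <utility>
    #include <vector>

    class ChainedMap {
        std::vector<std::list<std::pair<std::string, int>>> buckets;
        std::size_t index(const std::string& key) const {
            return std::hash<std::string>{}(key) % buckets.size();
        }
    public:
        explicit ChainedMap(std::size_t n = 17) : buckets(n) {}
        void put(const std::string& key, int value) {
            auto& chain = buckets[index(key)];
            for (auto& kv : chain)
                if (kv.first == key) { kv.second = value; return; }
            chain.push_back({key, value});    // collision: append to the chain
        }
        const int* get(const std::string& key) const {
            for (const auto& kv : buckets[index(key)])
                if (kv.first == key) return &kv.second;
            return nullptr;                   // average-case O(1) lookup
        }
    };

    int main() {
        ChainedMap m;
        m.put("alpha", 1);
        m.put("beta", 2);
        if (const int* v = m.get("beta")) std::cout << "beta -> " << *v << '\n';
    }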
Graphs are collections of nodes connected by edges, representing relationships between entities. Graphs can be used to model social networks, computer networks, and transportation networks, among other things. They consist of vertices (nodes) and edges (connections between nodes). Graphs can be directed or undirected, and they can have cycles or be acyclic. Graph traversal algorithms include breadth-first search and depth-first search.
Stacks and queues are abstract data types that can be implemented using arrays or linked lists. A stack has two primary operations: push (adds an element to the top of the stack) and pop (removes the topmost element from the stack), that follow the Last In, First Out (LIFO) principle. Queues have two main operations: enqueue (adds an element to the rear of the queue) and dequeue (removes an element from the front of the queue) that follow the First In, First Out (FIFO) principle.
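The two disciplines are easy to see side by side; in the sketch below the stack is backed by a dynamic array and the queue by a double-ended container, two of several reasonable backing choices.

    // LIFO stack versus FIFO queue, using standard C++ containers as backing.
    #include <deque>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> stack;                        // array-backed stack
        stack.push_back(1);                            // push
        stack.push_back(2);                            // push
        std::cout << "pop: " << stack.back() << '\n';  // 2: last in, first out
        stack.pop_back();

        std::deque<int> queue;                         // O(1) at both ends
        queue.push_back(1);                            // enqueue at the rear
        queue.push_back(2);
        std::cout << "dequeue: " << queue.front() << '\n';  // 1: first in, first out
        queue.pop_front();
        return 0;
    }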
Trees represent a hierarchical organization of elements. A tree consists of nodes connected by edges, with one node being the root and all other nodes forming subtrees. Trees are widely used in various algorithms and data storage scenarios. Binary trees (particularly heaps), AVL trees, and B-trees are some popular types of trees. They enable efficient and optimal searching, sorting, and hierarchical representation of data.
A trie, or prefix tree, is a special type of tree used to efficiently retrieve strings. In a trie, each node represents a character of a string, and the edges between nodes represent the characters that connect them. This structure is especially useful for tasks like autocomplete, spell-checking, and creating dictionaries. Tries allow for quick searches and operations based on string prefixes.
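A minimal trie over lowercase ASCII words, with the prefix query that autocomplete-style features rely on, might be sketched as follows (structure and names are illustrative):

    // Trie: each node has one child slot per letter; a stored word is the
    // path spelled by its characters, ending at a node marked terminal.
    #include <array>
    #include <iostream>
    #include <memory>
    #include <string>

    struct TrieNode {
        std::array<std::unique_ptr<TrieNode>, 26> child;  // 'a'..'z'
        bool terminal = false;
    };

    class Trie {
        TrieNode root;
    public:
        void insert(const std::string& word) {            // lowercase only
            TrieNode* node = &root;
            for (char c : word) {
                auto& slot = node->child[c - 'a'];
                if (!slot) slot = std::make_unique<TrieNode>();
                node = slot.get();
            }
            node->terminal = true;
        }
        bool hasPrefix(const std::string& prefix) const {
            const TrieNode* node = &root;
            for (char c : prefix) {
                node = node->child[c - 'a'].get();
                if (!node) return false;
            }
            return true;        // some stored word starts with this prefix
        }
    };

    int main() {
        Trie t;
        t.insert("spell");
        t.insert("spelling");
        std::cout << std::boolalpha << t.hasPrefix("spel") << '\n';  // true
    }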
Language support
Most assembly languages and some low-level languages, such as BCPL (Basic Combined Programming Language), lack built-in support for data structures. On the other hand, many high-level programming languages and some higher-level assembly languages, such as MASM, have special syntax or other built-in support for certain data structures, such as records and arrays. For example, the C (a direct descendant of BCPL) and Pascal languages support structs and records, respectively, in addition to vectors (one-dimensional arrays) and multi-dimensional arrays.
Most programming languages feature some sort of library mechanism that allows data structure implementations to be reused by different programs. Modern languages usually come with standard libraries that implement the most common data structures. Examples are the C++ Standard Template Library, the Java Collections Framework, and the Microsoft .NET Framework.
Modern languages also generally support modular programming, the separation between the interface of a library module and its implementation. Some provide opaque data types that allow clients to hide implementation details. Object-oriented programming languages, such as C++, Java, and Smalltalk, typically use classes for this purpose.
Many known data structures have concurrent versions which allow multiple computing threads to access a single concrete instance of a data structure simultaneously.
| Mathematics | Discrete mathematics | null |
8524 | https://en.wikipedia.org/wiki/Deuterium | Deuterium | Deuterium (hydrogen-2, symbol ²H or D, also known as heavy hydrogen) is one of two stable isotopes of hydrogen; the other is protium, or hydrogen-1 (¹H). The deuterium nucleus (deuteron) contains one proton and one neutron, whereas the far more common ¹H has no neutrons. Deuterium has a natural abundance in Earth's oceans of about one atom of deuterium in every 6,420 atoms of hydrogen. Thus, deuterium accounts for about 0.0156% by number (0.0312% by mass) of all hydrogen in the ocean, mainly as HDO (semiheavy water) and only rarely as D₂O (deuterium oxide, also known as heavy water). The abundance of deuterium changes slightly from one kind of natural water to another (see Vienna Standard Mean Ocean Water).
The name deuterium comes from Greek deuteros, meaning "second". American chemist Harold Urey discovered deuterium in 1931. Urey and others produced samples of heavy water in which the H had been highly concentrated. The discovery of deuterium won Urey a Nobel Prize in 1934.
Deuterium is destroyed in the interiors of stars faster than it is produced. Other natural processes are thought to produce only an insignificant amount of deuterium. Nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, as the basic or primordial ratio of ²H to ¹H (≈26 atoms of deuterium per million hydrogen atoms) has its origin from that time. This is the ratio found in the gas giant planets, such as Jupiter. The analysis of deuterium–protium (D/H) ratios in comets found results very similar to the mean ratio in Earth's oceans (156 atoms of deuterium per million hydrogen atoms). This reinforces theories that much of Earth's ocean water is of cometary origin. The D/H ratio of comet 67P/Churyumov–Gerasimenko, as measured by the Rosetta space probe, is about three times that of Earth water. This figure is the highest yet measured in a comet. D/H ratios thus continue to be an active topic of research in both astronomy and climatology.
Differences from common hydrogen (protium)
Chemical symbol
Deuterium is often represented by the chemical symbol D. Since it is an isotope of hydrogen with mass number 2, it is also represented by ²H. IUPAC allows both D and ²H, though ²H is preferred. A distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. Also, its large mass difference with protium (¹H) confers non-negligible chemical differences with protium compounds. Deuterium has a mass of 2.014102 Da, about twice the mean hydrogen atomic weight of 1.008 Da, and about twice protium's mass of 1.007825 Da. The isotope weight ratios within other elements are largely insignificant in this regard.
Spectroscopy
In quantum mechanics, the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For a hydrogen atom, the role of reduced mass is most simply seen in the Bohr model of the atom, where the reduced mass appears in a simple calculation of the Rydberg constant and Rydberg equation, but the reduced mass also appears in the Schrödinger equation, and the Dirac equation for calculating atomic energy levels.
The reduced mass of the system in these equations is close to the mass of a single electron, but differs from it by a small amount about equal to the ratio of the mass of the electron to that of the nucleus. For ¹H, this amount is about 1837/1836, or 1.000545, and for ²H it is even smaller: 3671/3670, or 1.0002725. The energies of electronic spectral lines for ²H and ¹H therefore differ by the ratio of these two numbers, which is 1.000272. The wavelengths of all deuterium spectroscopic lines are shorter than the corresponding lines of light hydrogen, by 0.0272%. In astronomical observation, this corresponds to a blue Doppler shift of 0.0272% of the speed of light, or 81.6 km/s.
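The stated shift can be checked directly from these reduced masses; a short worked calculation, using only the approximate mass ratios quoted above:

$$\mu = \frac{m_e M}{m_e + M}, \qquad \frac{\lambda_{\mathrm{D}}}{\lambda_{\mathrm{H}}} = \frac{\mu_{\mathrm{H}}}{\mu_{\mathrm{D}}} = \frac{1 + m_e/M_{\mathrm{D}}}{1 + m_e/M_{\mathrm{H}}} \approx \frac{1.0002725}{1.000545} \approx 0.999728$$

so every deuterium line falls at about 99.9728% of the corresponding protium wavelength, i.e. the 0.0272% shortening stated above.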
The differences are much more pronounced in vibrational spectroscopy such as infrared spectroscopy and Raman spectroscopy, and in rotational spectra such as microwave spectroscopy, because the reduced mass of the deuterium is markedly higher than that of protium. In nuclear magnetic resonance spectroscopy, deuterium has a very different NMR frequency (e.g. 61 MHz when protium is at 400 MHz) and is much less sensitive. Deuterated solvents are usually used in protium NMR to prevent the solvent from overlapping with the signal, though deuterium NMR in its own right is also possible.
Big Bang nucleosynthesis
Deuterium is thought to have played an important role in setting the number and ratios of the elements that were formed in the Big Bang. Combining thermodynamics and the changes brought about by cosmic expansion, one can calculate the fraction of protons and neutrons based on the temperature at the point that the universe cooled enough to allow formation of nuclei. This calculation indicates seven protons for every neutron at the beginning of nucleogenesis, a ratio that would remain stable even after nucleogenesis was over. This fraction was in favor of protons initially, primarily because the lower mass of the proton favored their production. As the Universe expanded, it cooled. Free neutrons and protons are less stable than helium nuclei, and the protons and neutrons had a strong energetic reason to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium.
Through much of the few minutes after the Big Bang during which nucleosynthesis could have occurred, the temperature was high enough that the mean energy per particle was greater than the binding energy of weakly bound deuterium; therefore, any deuterium that was formed was immediately destroyed. This situation is known as the deuterium bottleneck. The bottleneck delayed formation of any helium-4 until the Universe became cool enough to form deuterium (at about a temperature equivalent to 100 keV). At this point, there was a sudden burst of element formation (first deuterium, which immediately fused into helium). However, very soon thereafter, at twenty minutes after the Big Bang, the Universe became too cool for any further nuclear fusion or nucleosynthesis. At this point, the elemental abundances were nearly fixed, with the only change as some of the radioactive products of Big Bang nucleosynthesis (such as tritium) decay. The deuterium bottleneck in the formation of helium, together with the lack of stable ways for helium to combine with hydrogen or with itself (no stable nucleus has a mass number of 5 or 8) meant that an insignificant amount of carbon, or any elements heavier than carbon, formed in the Big Bang. These elements thus required formation in stars. At the same time, the failure of much nucleogenesis during the Big Bang ensured that there would be plenty of hydrogen in the later universe available to form long-lived stars, such as the Sun.
Abundance
Deuterium occurs in trace amounts naturally as deuterium gas (D₂ or ²H₂), but most deuterium atoms in the Universe are bonded with ¹H to form a gas called hydrogen deuteride (HD or ¹H²H). Similarly, natural water contains deuterated molecules, almost all as semiheavy water HDO with only one deuterium atom.
The existence of deuterium on Earth, elsewhere in the Solar System (as confirmed by planetary probes), and in the spectra of stars, is also an important datum in cosmology. Gamma radiation from ordinary nuclear fusion dissociates deuterium into protons and neutrons, and there is no known natural process other than Big Bang nucleosynthesis that might have produced deuterium at anything close to its observed natural abundance. Deuterium is produced by the rare cluster decay, and by occasional absorption of naturally occurring neutrons by light hydrogen, but these are trivial sources. There is thought to be little deuterium in the interior of the Sun and other stars, as at these temperatures the nuclear fusion reactions that consume deuterium happen much faster than the proton–proton reaction that creates deuterium. However, deuterium persists in the outer solar atmosphere at roughly the same concentration as in Jupiter, and this has probably been unchanged since the origin of the Solar System. The natural abundance of deuterium seems to be a very similar fraction of hydrogen, wherever hydrogen is found, unless there are obvious processes at work that concentrate it.
The existence of deuterium at a low but constant primordial fraction in all hydrogen is another one of the arguments in favor of the Big Bang over the Steady State theory of the Universe. The observed ratios of hydrogen to helium to deuterium in the universe are difficult to explain except with a Big Bang model. It is estimated that the abundances of deuterium have not evolved significantly since their production about 13.8 billion years ago. Measurements of Milky Way galactic deuterium from ultraviolet spectral analysis show a ratio of as much as 23 atoms of deuterium per million hydrogen atoms in undisturbed gas clouds, which is only 15% below the WMAP estimated primordial ratio of about 27 atoms per million from the Big Bang. This has been interpreted to mean that less deuterium has been destroyed in star formation in the Milky Way galaxy than expected, or perhaps deuterium has been replenished by a large in-fall of primordial hydrogen from outside the galaxy. In space a few hundred light years from the Sun, deuterium abundance is only 15 atoms per million, but this value is presumably influenced by differential adsorption of deuterium onto carbon dust grains in interstellar space.
The abundance of deuterium in Jupiter's atmosphere has been directly measured by the Galileo space probe as 26 atoms per million hydrogen atoms. ISO-SWS observations find 22 atoms per million hydrogen atoms in Jupiter, and this abundance is thought to be close to the primordial Solar System ratio. This is about 17% of the terrestrial ratio of 156 deuterium atoms per million hydrogen atoms.
Comets such as Comet Hale-Bopp and Halley's Comet have been measured to contain more deuterium (about 200 atoms per million hydrogens), ratios which are enriched with respect to the presumed protosolar nebula ratio, probably due to heating, and which are similar to the ratios found in Earth seawater. The recent measurement of deuterium amounts of 161 atoms per million hydrogen in Comet 103P/Hartley (a former Kuiper belt object), a ratio almost exactly that in Earth's oceans (155.76 ± 0.1, but in fact ranging from 153 to 156 ppm), emphasizes the theory that Earth's surface water may be largely from comets. Most recently, the D/H ratio of 67P/Churyumov–Gerasimenko as measured by Rosetta is about three times that of Earth water. This has caused renewed interest in suggestions that Earth's water may be partly of asteroidal origin.
Deuterium has also been observed to be concentrated over the mean solar abundance in other terrestrial planets, in particular Mars and Venus.
Production
Deuterium is produced for industrial, scientific and military purposes, by starting with ordinary water—a small fraction of which is naturally occurring heavy water—and then separating out the heavy water by the Girdler sulfide process, distillation, or other methods.
In theory, deuterium for heavy water could be created in a nuclear reactor, but separation from ordinary water is the cheapest bulk production process.
The world's leading supplier of deuterium was Atomic Energy of Canada Limited until 1997, when the last heavy water plant was shut down. Canada uses heavy water as a neutron moderator for the operation of the CANDU reactor design.
Another major producer of heavy water is India. All but one of India's atomic energy plants are pressurized heavy water plants, which use natural (i.e., not enriched) uranium. India has eight heavy water plants, of which seven are in operation. Six plants, of which five are in operation, are based on D–H exchange in ammonia gas. The other two plants extract deuterium from natural water in a process that uses hydrogen sulfide gas at high pressure.
While India is self-sufficient in heavy water for its own use, India also exports reactor-grade heavy water.
Properties
Data for molecular deuterium
Formula: D₂ or ²H₂
Density: 0.180 kg/m³ at STP (0 °C, 101325 Pa).
Atomic weight: 2.0141017926 Da.
Mean abundance in ocean water (from VSMOW) 155.76 ± 0.1 atoms of deuterium per million atoms of all isotopes of hydrogen (about 1 atom of deuterium in 6,420); that is, about 0.015% of all atoms of hydrogen (any isotope)
Data at about 18 K for D₂ (triple point):
Density:
Liquid: 162.4 kg/m³
Gas: 0.452 kg/m³
Liquid D₂O: 1105.2 kg/m³ at STP
Viscosity: 12.6 μPa·s at 300 K (gas phase)
Specific heat capacity at constant pressure cₚ:
Solid: 2950 J/(kg·K)
Gas: 5200 J/(kg·K)
Physical properties
Compared to hydrogen in its natural composition on Earth, pure deuterium (D₂) has a higher melting point (18.72 K vs. 13.99 K), a higher boiling point (23.64 K vs. 20.27 K), a higher critical temperature (38.3 K vs. 32.94 K) and a higher critical pressure (1.6496 MPa vs. 1.2858 MPa).
The physical properties of deuterium compounds can exhibit significant kinetic isotope effects and other physical and chemical property differences from the protium analogs. D₂O, for example, is more viscous than normal H₂O. There are differences in bond energy and length for compounds of heavy hydrogen isotopes compared to protium, which are larger than the isotopic differences in any other element. Bonds involving deuterium and tritium are somewhat stronger than the corresponding bonds in protium, and these differences are enough to cause significant changes in biological reactions. Pharmaceutical firms are interested in the fact that deuterium is harder to remove from carbon than protium.
Deuterium can replace protium in water molecules to form heavy water (D₂O), which is about 10.6% denser than normal water (so that ice made from it sinks in normal water). Heavy water is slightly toxic in eukaryotic animals, with 25% substitution of the body water causing cell division problems and sterility, and 50% substitution causing death by cytotoxic syndrome (bone marrow failure and gastrointestinal lining failure). Prokaryotic organisms, however, can survive and grow in pure heavy water, though they develop slowly. Despite this toxicity, consumption of heavy water under normal circumstances does not pose a health threat to humans. It is estimated that a person might drink several litres of heavy water without serious consequences. Small doses of heavy water (a few grams in humans, containing an amount of deuterium comparable to that normally present in the body) are routinely used as harmless metabolic tracers in humans and animals.
Quantum properties
The deuteron has spin +1 ("triplet state") and is thus a boson. The NMR frequency of deuterium is significantly different from normal hydrogen. Infrared spectroscopy also easily differentiates many deuterated compounds, due to the large difference in IR absorption frequency seen in the vibration of a chemical bond containing deuterium, versus light hydrogen. The two stable isotopes of hydrogen can also be distinguished by using mass spectrometry.
The triplet deuteron is barely bound, with a binding energy of about 2.22 MeV, and none of the higher energy states are bound. The singlet deuteron is a virtual state, with a negative binding energy of about 60 keV. There is no such stable particle, but this virtual particle transiently exists during neutron–proton inelastic scattering, accounting for the unusually large neutron scattering cross-section of the proton.
Nuclear properties (deuteron)
Deuteron mass and radius
The deuterium nucleus is called a deuteron. It has a mass of 2.013553 Da (just over 1.875 GeV in energy units).
The charge radius of the deuteron is approximately 2.14 fm, as determined from electronic deuterium.
Like the proton radius, measurements using muonic deuterium produce a smaller result of approximately 2.13 fm.
Spin and energy
Deuterium is one of only five stable nuclides with an odd number of protons and an odd number of neutrons (²H, ⁶Li, ¹⁰B, ¹⁴N, ¹⁸⁰ᵐTa; the long-lived radionuclides ⁴⁰K, ⁵⁰V, ¹³⁸La and ¹⁷⁶Lu also occur naturally). Most odd–odd nuclei are unstable to beta decay, because the decay products are even–even, and thus more strongly bound, due to nuclear pairing effects. Deuterium, however, benefits from having its proton and neutron coupled to a spin-1 state, which gives a stronger nuclear attraction; the corresponding spin-1 state does not exist in the two-neutron or two-proton system, due to the Pauli exclusion principle, which would require one or the other identical particle with the same spin to have some other different quantum number, such as orbital angular momentum. But orbital angular momentum of either particle gives a lower binding energy for the system, mainly due to increasing distance of the particles in the steep gradient of the nuclear force. In both cases, this causes the diproton and dineutron to be unstable.
The proton and neutron in deuterium can be dissociated through neutral current interactions with neutrinos. The cross section for this interaction is comparatively large, and deuterium was successfully used as a neutrino target in the Sudbury Neutrino Observatory experiment.
Diatomic deuterium (D₂) has ortho and para nuclear spin isomers like diatomic hydrogen, but with differences in the number and population of spin states and rotational levels, which occur because the deuteron is a boson with nuclear spin equal to one.
Isospin singlet state of the deuteron
Due to the similarity in mass and nuclear properties between the proton and neutron, they are sometimes considered as two symmetric types of the same object, a nucleon. While only the proton has electric charge, this is often negligible due to the weakness of the electromagnetic interaction relative to the strong nuclear interaction. The symmetry relating the proton and neutron is known as isospin and denoted I (or sometimes T).
Isospin is an SU(2) symmetry, like ordinary spin, so is completely analogous to it. The proton and neutron, each of which have isospin-1/2, form an isospin doublet (analogous to a spin doublet), with a "down" state (↓) being a neutron and an "up" state (↑) being a proton. A pair of nucleons can either be in an antisymmetric state of isospin called singlet, or in a symmetric state called triplet. In terms of the "down" state and "up" state, the singlet is
$$|0,0\rangle = \tfrac{1}{\sqrt{2}}\left(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\right).$$
This is a nucleus with one proton and one neutron, i.e. a deuterium nucleus. The triplet is
$$|1,1\rangle = |{\uparrow\uparrow}\rangle, \qquad |1,0\rangle = \tfrac{1}{\sqrt{2}}\left(|{\uparrow\downarrow}\rangle + |{\downarrow\uparrow}\rangle\right), \qquad |1,-1\rangle = |{\downarrow\downarrow}\rangle,$$
and thus consists of three types of nuclei, which are supposed to be symmetric: a deuterium nucleus (actually a highly excited state of it), a nucleus with two protons, and a nucleus with two neutrons. These states are not stable.
Approximated wavefunction of the deuteron
The deuteron wavefunction must be antisymmetric if the isospin representation is used (since a proton and a neutron are not identical particles, the wavefunction need not be antisymmetric in general). Apart from their isospin, the two nucleons also have spin and spatial distributions of their wavefunction. The latter is symmetric if the deuteron is symmetric under parity (i.e. has an "even" or "positive" parity), and antisymmetric if the deuteron is antisymmetric under parity (i.e. has an "odd" or "negative" parity). The parity is fully determined by the total orbital angular momentum of the two nucleons: if it is even then the parity is even (positive), and if it is odd then the parity is odd (negative).
The deuteron, being an isospin singlet, is antisymmetric under nucleons exchange due to isospin, and therefore must be symmetric under the double exchange of their spin and location. Therefore, it can be in either of the following two different states:
Symmetric spin and symmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (+1) from spin exchange and (+1) from parity (location exchange), for a total of (−1) as needed for antisymmetry.
Antisymmetric spin and antisymmetric under parity. In this case, the exchange of the two nucleons will multiply the deuterium wavefunction by (−1) from isospin exchange, (−1) from spin exchange and (−1) from parity (location exchange), again for a total of (−1) as needed for antisymmetry.
In the first case the deuteron is a spin triplet, so that its total spin s is 1. It also has an even parity and therefore even orbital angular momentum l. The lower its orbital angular momentum, the lower its energy. Therefore, the lowest possible energy state has s = 1, l = 0.
In the second case the deuteron is a spin singlet, so that its total spin s is 0. It also has an odd parity and therefore odd orbital angular momentum l. Therefore, the lowest possible energy state has s = 0, l = 1.
Since s = 1 gives a stronger nuclear attraction, the deuterium ground state is in the s = 1, l = 0 state.
The same considerations lead to the possible states of an isospin triplet having s = 0, l even or s = 1, l odd. Thus, the state of lowest energy has s = 0, l = 0, higher than that of the isospin singlet.
The analysis just given is in fact only approximate, both because isospin is not an exact symmetry, and more importantly because the strong nuclear interaction between the two nucleons is related to angular momentum in spin–orbit interaction that mixes different s and l states. That is, s and l are not constant in time (they do not commute with the Hamiltonian), and over time a state such as s = 1, l = 0 may become a state of s = 1, l = 2. Parity is still constant in time, so these do not mix with odd l states (such as s = 0, l = 1). Therefore, the quantum state of the deuterium is a superposition (a linear combination) of the s = 1, l = 0 state and the s = 1, l = 2 state, even though the first component is much bigger. Since the total angular momentum j is also a good quantum number (it is a constant in time), both components must have the same j, and therefore j = 1. This is the total spin of the deuterium nucleus.
To summarize, the deuterium nucleus is antisymmetric in terms of isospin, and has spin 1 and even (+1) parity. The relative angular momentum of its nucleons l is not well defined, and the deuteron is a superposition of mostly l = 0 with some l = 2.
Magnetic and electric multipoles
In order to find theoretically the deuterium magnetic dipole moment μ, one uses the formula for a nuclear magnetic moment
$$\mu = \frac{1}{j+1}\left\langle (l,s)\,j, m_j{=}j \,\middle|\, \vec{\mu}\cdot\vec{\jmath} \,\middle|\, (l,s)\,j, m_j{=}j \right\rangle$$
with
$$\vec{\mu} = g^{(l)}\vec{l} + g^{(s)}\vec{s},$$
where g^(l) and g^(s) are g-factors of the nucleons.
Since the proton and neutron have different values for g^(l) and g^(s), one must separate their contributions. Each gets half of the deuterium orbital angular momentum and spin. One arrives at
$$\vec{\mu} = \tfrac{1}{2}\left(g_p^{(l)} + g_n^{(l)}\right)\vec{l} + \tfrac{1}{2}\left(g_p^{(s)} + g_n^{(s)}\right)\vec{s},$$
where subscripts p and n stand for the proton and neutron, and $g_n^{(l)} = 0$.
By using the same identities as here and the value $g_p^{(l)} = 1$, one gets the following result, in units of the nuclear magneton μN:
$$\mu = \frac{1}{4(j+1)}\left[\bigl(j(j+1) + l(l+1) - s(s+1)\bigr) + \left(g_p^{(s)} + g_n^{(s)}\right)\bigl(j(j+1) + s(s+1) - l(l+1)\bigr)\right]$$
For the s = 1, l = 0 state (j = 1), we obtain
$$\mu = \tfrac{1}{2}\left(g_p^{(s)} + g_n^{(s)}\right)\mu_N \approx 0.879\,\mu_N$$
For the s = 1, l = 2 state (j = 1), we obtain
$$\mu = \left[\tfrac{3}{4} - \tfrac{1}{4}\left(g_p^{(s)} + g_n^{(s)}\right)\right]\mu_N \approx 0.310\,\mu_N$$
The measured value of the deuterium magnetic dipole moment, is , which is 97.5% of the value obtained by simply adding moments of the proton and neutron. This suggests that the state of the deuterium is indeed to a good approximation , state, which occurs with both nucleons spinning in the same direction, but their magnetic moments subtracting because of the neutron's negative moment.
But the slightly lower experimental value, compared with the simple sum of the proton and (negative) neutron moments, shows that deuterium is actually a linear combination of mostly the s = 1, l = 0 state with a slight admixture of the s = 1, l = 2 state.
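The size of that admixture can be estimated directly from these numbers. The following is a minimal sketch in Python, assuming the simple two-component model above, in which the measured moment is the probability-weighted mixture of the two pure-state moments (cross terms vanish in this model) and the nucleon spin g-factors take their standard values:

    # Estimate the l = 2 (D-state) share of the deuteron wavefunction
    # from its measured magnetic moment, using the two-component model.
    g_p = 5.5857   # proton spin g-factor
    g_n = -3.8261  # neutron spin g-factor

    mu_s = 0.5 * (g_p + g_n)           # pure s = 1, l = 0 state: ~0.880
    mu_d = 0.75 - 0.25 * (g_p + g_n)   # pure s = 1, l = 2 state: ~0.310

    mu_measured = 0.857  # deuteron magnetic moment, in nuclear magnetons

    # With probability p of l = 0 and (1 - p) of l = 2, the moment mixes linearly:
    p = (mu_measured - mu_d) / (mu_s - mu_d)
    print(f"l = 0 probability ~ {p:.2f}, l = 2 admixture ~ {1 - p:.2f}")
    # -> roughly 96% l = 0 and 4% l = 2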
The electric dipole is zero as usual.
The measured electric quadrupole moment of the deuterium is 0.2859 e·fm². While the order of magnitude is reasonable, since the deuteron radius is of the order of 1 femtometer (see below) and its electric charge is e, the above model does not suffice for its computation. More specifically, the electric quadrupole does not get a contribution from the l = 0 state (which is the dominant one) and does get a contribution from a term mixing the l = 0 and the l = 2 states, because the electric quadrupole operator does not commute with angular momentum.
The latter contribution is dominant in the absence of a pure l = 2 contribution, but cannot be calculated without knowing the exact spatial form of the nucleons' wavefunction inside the deuterium.
Higher magnetic and electric multipole moments cannot be calculated by the above model, for similar reasons.
Applications
Nuclear reactors
Deuterium is used in heavy water moderated fission reactors, usually as liquid D₂O, to slow neutrons without the high neutron absorption of ordinary hydrogen. This is a common commercial use for larger amounts of deuterium.
In research reactors, liquid deuterium (D₂) is used in cold sources to moderate neutrons to very low energies and wavelengths appropriate for scattering experiments.
Experimentally, deuterium is the most common nuclide used in fusion reactor designs, especially in combination with tritium, because of the large reaction rate (or nuclear cross section) and high energy yield of the deuterium–tritium (DT) reaction. There is an even higher-yield D–³He fusion reaction, though the breakeven point of D–³He is higher than that of most other fusion reactions; together with the scarcity of ³He, this makes it implausible as a practical power source, at least until DT and deuterium–deuterium (DD) fusion have been performed on a commercial scale. Commercial nuclear fusion is not yet an accomplished technology.
NMR spectroscopy
Deuterium is most commonly used in hydrogen nuclear magnetic resonance spectroscopy (proton NMR) in the following way. NMR ordinarily requires compounds of interest to be analyzed as dissolved in solution. Because of deuterium's nuclear spin properties, which differ from those of the light hydrogen usually present in organic molecules, NMR spectra of hydrogen/protium are highly differentiable from those of deuterium, and in practice deuterium is not "seen" by an NMR instrument tuned for ¹H. Deuterated solvents (including heavy water, but also compounds like deuterated chloroform, CDCl₃) are therefore routinely used in NMR spectroscopy, in order to allow only the light-hydrogen spectra of the compound of interest to be measured, without solvent-signal interference.
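Why a ¹H-tuned instrument does not "see" deuterium can be made concrete with the two isotopes' gyromagnetic ratios. A small illustrative calculation (the γ/2π values below are the standard ones; the 9.4 T field is simply a common magnet size):

    # Larmor frequencies of 1H and 2H nuclei in the same magnetic field.
    gamma_1H = 42.577  # MHz per tesla
    gamma_2H = 6.536   # MHz per tesla

    B0 = 9.4  # tesla, the field of a typical "400 MHz" NMR magnet

    print(f"1H resonates at {gamma_1H * B0:.0f} MHz")  # ~400 MHz
    print(f"2H resonates at {gamma_2H * B0:.0f} MHz")  # ~61 MHz
    # A probe tuned near 400 MHz therefore records no deuterium signal.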
Nuclear magnetic resonance spectroscopy can also be used to obtain information about the deuteron's environment in isotopically labelled samples (deuterium NMR). For example, the configuration of hydrocarbon chains in lipid bilayers can be quantified using solid state deuterium NMR with deuterium-labelled lipid molecules.
Deuterium NMR spectra are especially informative in the solid state because of its relatively small quadrupole moment in comparison with those of bigger quadrupolar nuclei such as chlorine-35, for example.
Mass spectrometry
Deuterated (i.e. where all or some hydrogen atoms are replaced with deuterium) compounds are often used as internal standards in mass spectrometry. Like other isotopically labeled species, such standards improve accuracy, while often at a much lower cost than other isotopically labeled standards. Deuterated molecules are usually prepared via hydrogen isotope exchange reactions.
Tracing
In chemistry, biochemistry and environmental sciences, deuterium is used as a non-radioactive, stable isotopic tracer, for example, in the doubly labeled water test. In chemical reactions and metabolic pathways, deuterium behaves somewhat similarly to ordinary hydrogen (with a few chemical differences, as noted). It can be distinguished from normal hydrogen most easily by its mass, using mass spectrometry or infrared spectrometry. Deuterium can be detected by femtosecond infrared spectroscopy, since the mass difference drastically affects the frequency of molecular vibrations; ²H–carbon bond vibrations are found in spectral regions free of other signals.
Measurements of small variations in the natural abundances of deuterium, along with those of the stable heavy oxygen isotopes ¹⁷O and ¹⁸O, are of importance in hydrology, to trace the geographic origin of Earth's waters. The heavy isotopes of hydrogen and oxygen in rainwater (meteoric water) are enriched as a function of the environmental temperature of the region in which the precipitation falls (and thus enrichment is related to latitude). The relative enrichment of the heavy isotopes in rainwater (as referenced to mean ocean water), when plotted against temperature, falls predictably along a line called the global meteoric water line (GMWL). This plot allows samples of precipitation-originated water to be identified along with general information about the climate in which it originated. Evaporative and other processes in bodies of water, and also ground water processes, also differentially alter the ratios of heavy hydrogen and oxygen isotopes in fresh and salt waters, in characteristic and often regionally distinctive ways. The ratio of concentration of ²H to ¹H is usually indicated with a delta as δ²H, and the geographic patterns of these values are plotted in maps termed isoscapes. Stable isotopes are incorporated into plants and animals, and an analysis of the ratios in a migrant bird or insect can help suggest a rough guide to their origins.
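The δ²H notation is a simple ratio-of-ratios against the ocean-water standard. A minimal sketch, assuming the standard VSMOW ²H/¹H reference ratio and Craig's classic coefficients for the global meteoric water line:

    # delta-2H in per mil, relative to Vienna Standard Mean Ocean Water.
    R_VSMOW = 155.76e-6  # 2H/1H ratio of the ocean-water standard

    def delta_2H(r_sample):
        """Return delta-2H in per mil for a sample's 2H/1H ratio."""
        return (r_sample / R_VSMOW - 1.0) * 1000.0

    def gmwl_delta_2H(delta_18O):
        """Global meteoric water line (Craig, 1961)."""
        return 8.0 * delta_18O + 10.0

    print(delta_2H(140e-6))      # depleted continental rain: ~ -101 per mil
    print(gmwl_delta_2H(-14.0))  # GMWL prediction for delta-18O = -14: -102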
Contrast properties
Neutron scattering techniques particularly profit from availability of deuterated samples: the ¹H and ²H cross sections are very distinct and different in sign, which allows contrast variation in such experiments. Further, a nuisance problem of normal hydrogen is its large incoherent neutron cross section, which is nil for ²H. The substitution of deuterium for normal hydrogen thus reduces scattering noise.
Hydrogen is an important and major component in all materials of organic chemistry and life science, but it barely interacts with X-rays. Because hydrogen atoms (including deuterium) interact strongly with neutrons, neutron scattering techniques, together with a modern deuteration facility, fill a niche in many studies of macromolecules in biology and many other areas.
Nuclear weapons
See the discussion of thermonuclear weapons below. Most stars, including the Sun, generate energy over most of their lives by fusing hydrogen into heavier elements; yet such fusion of light hydrogen (protium) has never been successful in the conditions attainable on Earth. Thus, all artificial fusion, including the hydrogen fusion in hydrogen bombs, requires heavy hydrogen (deuterium, tritium, or both).
Drugs
A deuterated drug is a small molecule medicinal product in which one or more of the hydrogen atoms in the drug molecule have been replaced by deuterium. Because of the kinetic isotope effect, deuterium-containing drugs may have significantly lower rates of metabolism, and hence a longer half-life. In 2017, deutetrabenazine became the first deuterated drug to receive FDA approval.
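To illustrate why a lower metabolic rate lengthens half-life: for first-order elimination, t½ = ln 2 / k, so the half-life scales inversely with the rate constant. A sketch with purely hypothetical numbers (the rate constant and isotope effect below are illustrative, not data for any real drug):

    import math

    # Hypothetical first-order metabolic rate constants, per hour.
    k_H = 0.23          # ordinary C-H drug
    kie = 3.0           # assumed deuterium kinetic isotope effect
    k_D = k_H / kie     # deuterated analogue is metabolised more slowly

    def half_life(k):
        return math.log(2) / k

    print(f"C-H drug half-life: {half_life(k_H):.1f} h")   # ~3.0 h
    print(f"C-2H drug half-life: {half_life(k_D):.1f} h")  # ~9.0 h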
Reinforced essential nutrients
Deuterium can be used to reinforce specific oxidation-vulnerable C–H bonds within essential or conditionally essential nutrients, such as certain amino acids or polyunsaturated fatty acids (PUFA), making them more resistant to oxidative damage. Deuterated polyunsaturated fatty acids, such as linoleic acid, slow down the chain reaction of lipid peroxidation that damages living cells. The deuterated ethyl ester of linoleic acid (RT001), developed by Retrotope, is in a compassionate use trial in infantile neuroaxonal dystrophy and has successfully completed a Phase I/II trial in Friedreich's ataxia.
Thermostabilization
Live vaccines, such as the oral polio vaccine, can be stabilized by deuterium, either alone or in combination with other stabilizers such as MgCl₂.
Slowing circadian oscillations
Deuterium has been shown to lengthen the period of oscillation of the circadian clock when dosed in rats, hamsters, and Gonyaulax dinoflagellates. In rats, chronic intake of 25% D₂O disrupts circadian rhythm by lengthening the circadian period of suprachiasmatic nucleus-dependent rhythms in the brain's hypothalamus. Experiments in hamsters also support the theory that deuterium acts directly on the suprachiasmatic nucleus to lengthen the free-running circadian period.
History
Suspicion of lighter element isotopes
The existence of nonradioactive isotopes of lighter elements had been suspected in studies of neon as early as 1913, and proven by mass spectrometry of light elements in 1920. At that time the neutron had not yet been discovered, and the prevailing theory was that isotopes of an element differ by the existence of additional protons in the nucleus accompanied by an equal number of nuclear electrons. In this theory, the deuterium nucleus with mass two and charge one would contain two protons and one nuclear electron. However, it was expected that the element hydrogen, with a measured average atomic mass very close to 1 u (the known mass of the proton), always has a nucleus composed of a single proton (a known particle), and could not contain a second proton. Thus, hydrogen was thought to have no heavy isotopes.
Deuterium detected
It was first detected spectroscopically in late 1931 by Harold Urey, a chemist at Columbia University. Urey's collaborator, Ferdinand Brickwedde, distilled five liters of cryogenically produced liquid hydrogen down to 1 mL of liquid, using the low-temperature physics laboratory that had recently been established at the National Bureau of Standards (now National Institute of Standards and Technology) in Washington, DC. The technique had previously been used to isolate heavy isotopes of neon. The cryogenic boiloff technique concentrated the fraction of the mass-2 isotope of hydrogen to a degree that made its spectroscopic identification unambiguous.
Naming of the isotope and Nobel Prize
Urey created the names protium, deuterium, and tritium in an article published in 1934. The name is based in part on advice from Gilbert N. Lewis who had proposed the name "deutium". The name comes from Greek deuteros 'second', and the nucleus was to be called a "deuteron" or "deuton". Isotopes and new elements were traditionally given the name that their discoverer decided. Some British scientists, such as Ernest Rutherford, wanted to call the isotope "diplogen", from Greek diploos 'double', and the nucleus to be called "diplon".
The amount inferred for normal abundance of deuterium was so small (only about 1 atom in 6,400 hydrogen atoms in seawater [156 parts per million]) that it had not noticeably affected previous measurements of (average) hydrogen atomic mass. This explained why it had not been suspected before. Urey was able to concentrate water to show partial enrichment of deuterium. Lewis, Urey's graduate advisor at Berkeley, had prepared and characterized the first samples of pure heavy water in 1933. The discovery of deuterium, coming before the discovery of the neutron in 1932, was an experimental shock to theory; but when the neutron was reported, making deuterium's existence more explicable, Urey was awarded the Nobel Prize in Chemistry only three years after the isotope's isolation. Lewis was deeply disappointed by the Nobel Committee's decision in 1934, and several high-ranking administrators at Berkeley believed this disappointment played a central role in his suicide a decade later.
"Heavy water" experiments in World War II
Shortly before the war, Hans von Halban and Lew Kowarski moved their research on neutron moderation from France to Britain, smuggling the entire global supply of heavy water (which had been made in Norway) across in twenty-six steel drums.
During World War II, Nazi Germany was known to be conducting experiments using heavy water as moderator for a nuclear reactor design. Such experiments were a source of concern because they might allow the Germans to produce plutonium for an atomic bomb. Ultimately this led to the Allied operation called the "Norwegian heavy water sabotage", the purpose of which was to destroy the Vemork deuterium production/enrichment facility in Norway. At the time this was considered important to the potential progress of the war.
After World War II ended, the Allies discovered that Germany was not putting as much serious effort into the program as had been previously thought. The Germans had completed only a small, partly built experimental reactor (which had been hidden away) and had been unable to sustain a chain reaction. By the end of the war, the Germans did not even have a fifth of the amount of heavy water needed to run the reactor, partially due to the Norwegian heavy water sabotage operation. However, even if the Germans had succeeded in getting a reactor operational (as the U.S. did with Chicago Pile-1 in late 1942), they would still have been at least several years away from the development of an atomic bomb. The engineering process, even with maximal effort and funding, required about two and a half years (from first critical reactor to bomb) in both the U.S. and U.S.S.R., for example.
In thermonuclear weapons
The 62-ton Ivy Mike device built by the United States and exploded on 1 November 1952 was the first fully successful hydrogen bomb (thermonuclear bomb). In this context, it was the first bomb in which most of the energy released came from nuclear reaction stages that followed the primary nuclear fission stage of the atomic bomb. The Ivy Mike bomb was a factory-like building, rather than a deliverable weapon. At its center, a very large cylindrical insulated vacuum flask or cryostat held cryogenic liquid deuterium in a volume of about 1000 liters (160 kilograms in mass, if this volume had been completely filled). Then, a conventional atomic bomb (the "primary") at one end of the bomb was used to create the conditions of extreme temperature and pressure that were needed to set off the thermonuclear reaction.
Within a few years, so-called "dry" hydrogen bombs were developed that did not need cryogenic hydrogen. Released information suggests that all thermonuclear weapons built since then contain chemical compounds of deuterium and lithium in their secondary stages. The material that contains the deuterium is mostly lithium deuteride, with the lithium consisting of the isotope lithium-6. When the lithium-6 is bombarded with fast neutrons from the atomic bomb, tritium (hydrogen-3) is produced, and then the deuterium and the tritium quickly engage in thermonuclear fusion, releasing abundant energy, helium-4, and even more free neutrons. "Pure" fusion weapons such as the Tsar Bomba are believed to be obsolete. In most modern ("boosted") thermonuclear weapons, fusion directly provides only a small fraction of the total energy. Fission of a natural uranium-238 tamper by fast neutrons produced from D–T fusion accounts for a much larger (i.e. boosted) energy release than the fusion reaction itself.
Modern research
In August 2018, scientists announced the transformation of gaseous deuterium into a liquid metallic form. This may help researchers better understand gas giant planets, such as Jupiter, Saturn and some exoplanets, since such planets are thought to contain a lot of liquid metallic hydrogen, which may be responsible for their observed powerful magnetic fields.
Antideuterium
An antideuteron is the antimatter counterpart of the deuteron, consisting of an antiproton and an antineutron. The antideuteron was first produced in 1965 at the Proton Synchrotron at CERN and the Alternating Gradient Synchrotron at Brookhaven National Laboratory. A complete atom, with a positron orbiting the nucleus, would be called antideuterium, but antideuterium has not yet been created. The proposed symbol for antideuterium is D̄, that is, D with an overbar.
| Physical sciences | s-Block | Chemistry |
8540 | https://en.wikipedia.org/wiki/Diesel%20engine | Diesel engine | The diesel engine, named after the German engineer Rudolf Diesel, is an internal combustion engine in which ignition of diesel fuel is caused by the elevated temperature of the air in the cylinder due to mechanical compression; thus, the diesel engine is called a compression-ignition engine (CI engine). This contrasts with engines using spark plug-ignition of the air-fuel mixture, such as a petrol engine (gasoline engine) or a gas engine (using a gaseous fuel like natural gas or liquefied petroleum gas).
Introduction
Diesel engines work by compressing only air, or air combined with residual combustion gases from the exhaust (known as exhaust gas recirculation, "EGR"). Air is inducted into the chamber during the intake stroke, and compressed during the compression stroke. This increases air temperature inside the cylinder so that atomised diesel fuel injected into the combustion chamber ignites. The torque a diesel engine produces is controlled by manipulating the air-fuel ratio (λ); instead of throttling the intake air, the diesel engine relies on altering the amount of fuel that is injected, and thus the air-fuel ratio is usually high.
The diesel engine has the highest thermal efficiency (see engine efficiency) of any practical internal or external combustion engine due to its very high expansion ratio and inherent lean burn, which enables heat dissipation by excess air. A small efficiency loss is also avoided compared with non-direct-injection gasoline engines, as unburned fuel is not present during valve overlap, and therefore no fuel goes directly from the intake/injection to the exhaust. Low-speed diesel engines (as used in ships and other applications where overall engine weight is relatively unimportant) can reach effective efficiencies of up to 55%. The combined cycle gas turbine (Brayton and Rankine cycle) is a combustion engine that is more efficient than a diesel engine, but due to its mass and dimensions, is unsuitable for many vehicles, including watercraft and some aircraft. The world's largest diesel engines put in service are 14-cylinder, two-stroke marine diesel engines; they produce a peak power of almost 100 MW each.
Diesel engines may be designed with either two-stroke or four-stroke combustion cycles. They were originally used as a more efficient replacement for stationary steam engines. Since the 1910s, they have been used in submarines and ships. Use in locomotives, buses, trucks, heavy equipment, agricultural equipment and electricity generation plants followed later. In the 1930s, they slowly began to be used in some automobiles. Since the 1970s energy crisis, demand for higher fuel efficiency has resulted in most major automakers, at some point, offering diesel-powered models, even in very small cars. According to Konrad Reif (2012), diesel cars at the time accounted for half of newly registered cars in the EU on average. However, air pollution and overall emissions are more difficult to control in diesel engines compared to gasoline engines, so the use of diesel engines in the US is now largely relegated to larger on-road and off-road vehicles.
Though aviation has traditionally avoided using diesel engines, aircraft diesel engines have become increasingly available in the 21st century. Since the late 1990s, for various reasons—including the diesel's inherent advantages over gasoline engines, but also for recent issues peculiar to aviation—development and production of diesel engines for aircraft has surged, with over 5,000 such engines delivered worldwide between 2002 and 2018, particularly for light airplanes and unmanned aerial vehicles.
History
Diesel's idea
In 1878, Rudolf Diesel, who was a student at the "Polytechnikum" in Munich, attended the lectures of Carl von Linde. Linde explained that steam engines are capable of converting just 6–10% of the heat energy into work, but that the Carnot cycle allows conversion of much more of the heat energy into work by means of isothermal change in condition. According to Diesel, this ignited the idea of creating a highly efficient engine that could work on the Carnot cycle. Diesel was also introduced to a fire piston, a traditional fire starter using rapid adiabatic compression principles which Linde had acquired from Southeast Asia. After several years of working on his ideas, Diesel published them in 1893 in the essay Theory and Construction of a Rational Heat Motor.
Diesel was heavily criticised for his essay, but only a few found the mistake that he made; his rational heat motor was supposed to utilise a constant temperature cycle (with isothermal compression) that would require a much higher level of compression than that needed for compression ignition. Diesel's idea was to compress the air so tightly that the temperature of the air would exceed that of combustion. However, such an engine could never perform any usable work. In his 1892 US patent (granted in 1895) #542846, Diesel describes the compression required for his cycle:
pure atmospheric air is compressed, according to curve 1 2, to such a degree that, before ignition or combustion takes place, the highest pressure of the diagram and the highest temperature are obtained-that is to say, the temperature at which the subsequent combustion has to take place, not the burning or igniting point. To make this more clear, let it be assumed that the subsequent combustion shall take place at a temperature of 700°. Then in that case the initial pressure must be sixty-four atmospheres, or for 800° centigrade the pressure must be ninety atmospheres, and so on. Into the air thus compressed is then gradually introduced from the exterior finely divided fuel, which ignites on introduction, since the air is at a temperature far above the igniting-point of the fuel. The characteristic features of the cycle according to my present invention are therefore, increase of pressure and temperature up to the maximum, not by combustion, but prior to combustion by mechanical compression of air, and thereupon the subsequent performance of work without increase of pressure and temperature by gradual combustion during a prescribed part of the stroke determined by the cut-off.
By June 1893, Diesel had realised his original cycle would not work, and he adopted the constant pressure cycle. Diesel describes the cycle in his 1895 patent application. Notice that there is no longer a mention of compression temperatures exceeding the temperature of combustion. Now it is simply stated that the compression must be sufficient to trigger ignition.
1. In an internal-combustion engine, the combination of a cylinder and piston constructed and arranged to compress air to a degree producing a temperature above the igniting-point of the fuel, a supply for compressed air or gas; a fuel-supply; a distributing-valve for fuel, a passage from the air supply to the cylinder in communication with the fuel-distributing valve, an inlet to the cylinder in communication with the air-supply and with the fuel-valve, and a cut-off, substantially as described.
In 1892, Diesel received patents in Germany, Switzerland, the United Kingdom, and the United States for "Method of and Apparatus for Converting Heat into Work". In 1894 and 1895, he filed patents and addenda in various countries for his engine; the first patents were issued in Spain (No. 16,654), France (No. 243,531) and Belgium (No. 113,139) in December 1894, and in Germany (No. 86,633) in 1895 and the United States (No. 608,845) in 1898.
Diesel was attacked and criticised over several years. Critics claimed that Diesel never invented a new motor and that the invention of the diesel engine was a fraud. Otto Köhler and Emil Capitaine were two of the most prominent critics of Diesel's time. Köhler had published an essay in 1887, in which he describes an engine similar to the engine Diesel describes in his 1893 essay. Köhler figured that such an engine could not perform any work. Emil Capitaine had built a petroleum engine with glow-tube ignition in the early 1890s; he claimed against his own better judgement that his glow-tube ignition engine worked the same way Diesel's engine did. His claims were unfounded and he lost a patent lawsuit against Diesel. Other engines, such as the Akroyd engine and the Brayton engine, also use an operating cycle that is different from the diesel engine cycle. Friedrich Sass says that the diesel engine is Diesel's "very own work" and that any "Diesel myth" is "falsification of history".
The first diesel engine
Diesel sought out firms and factories that would build his engine. With the help of Moritz Schröter and Max Gutermuth, he succeeded in convincing both Krupp in Essen and the Maschinenfabrik Augsburg. Contracts were signed in April 1893, and in early summer 1893, Diesel's first prototype engine was built in Augsburg. On 10 August 1893, the first ignition took place; the fuel used was petrol. In winter 1893/1894, Diesel redesigned the existing engine, and by 18 January 1894, his mechanics had converted it into the second prototype. During January that year, an air-blast injection system was added to the engine's cylinder head and tested. Friedrich Sass argues that it can be presumed that Diesel copied the concept of air-blast injection from George B. Brayton, albeit that Diesel substantially improved the system. On 17 February 1894, the redesigned engine ran for 88 revolutions – one minute; with this news, Maschinenfabrik Augsburg's stock rose by 30%, indicative of the tremendous anticipated demand for a more efficient engine. On 26 June 1895, the engine achieved an effective efficiency of 16.6% and had a fuel consumption of 519 g·kW⁻¹·h⁻¹.
However, despite proving the concept, the engine caused problems, and Diesel could not achieve any substantial progress. Therefore, Krupp considered rescinding the contract they had made with Diesel. Diesel was forced to improve the design of his engine and rushed to construct a third prototype engine. Between 8 November and 20 December 1895, the second prototype had successfully covered over 111 hours on the test bench. In the January 1896 report, this was considered a success.
In February 1896, Diesel considered supercharging the third prototype. Imanuel Lauster, who was ordered to draw the third prototype "Motor 250/400", had finished the drawings by 30 April 1896. During the summer of that year the engine was built; it was completed on 6 October 1896. Tests were conducted until early 1897. First public tests began on 1 February 1897. Moritz Schröter's test on 17 February 1897 was the main test of Diesel's engine. The engine was rated 13.1 kW with a specific fuel consumption of 324 g·kW⁻¹·h⁻¹, resulting in an effective efficiency of 26.2%. By 1898, Diesel had become a millionaire.
Timeline
1890s
1893: Rudolf Diesel's essay titled Theory and Construction of a Rational Heat Motor appears.
1893: February 21, Diesel and the Maschinenfabrik Augsburg sign a contract that allows Diesel to build a prototype engine.
1893: February 23, Diesel obtains a patent (RP 67207) titled "Arbeitsverfahren und Ausführungsart für Verbrennungsmaschinen" (Working Methods and Techniques for Internal Combustion Engines).
1893: April 10, Diesel and Krupp sign a contract that allows Diesel to build a prototype engine.
1893: April 24, both Krupp and the Maschinenfabrik Augsburg decide to collaborate and build just a single prototype in Augsburg.
1893: July, the first prototype is completed.
1893: August 10, Diesel injects fuel (petrol) for the first time, resulting in combustion, destroying the indicator.
1893: November 30, Diesel applies for a patent (RP 82168) for a modified combustion process. He obtains it on 12 July 1895.
1894: January 18, after the first prototype was modified to become the second prototype, testing with the second prototype begins.
1894: February 17, The second prototype runs for the first time.
1895: March 30, Diesel applies for a patent (RP 86633) for a starting process with compressed air.
1895: June 26, the second prototype passes brake testing for the first time.
1895: Diesel applies for a second patent US Patent # 608845
1895: November 8 – December 20, a series of tests with the second prototype is conducted. In total, 111 operating hours are recorded.
1896: April 30, Imanuel Lauster completes the third and final prototype's drawings.
1896: October 6, the third and final prototype engine is completed.
1897: February 1, Diesel's prototype engine is running and finally ready for efficiency testing and production.
1897: October 9, Adolphus Busch licenses rights to the diesel engine for the US and Canada.
1897: 29 October, Rudolf Diesel obtains a patent (DRP 95680) on supercharging the diesel engine.
1898: February 1, the Diesel Motoren-Fabrik Actien-Gesellschaft is registered.
1898: March, the first commercial diesel engine, rated 2×30 PS (2×22 kW), is installed in the Kempten plant of the Vereinigte Zündholzfabriken A.G.
1898: September 17, the Allgemeine Gesellschaft für Dieselmotoren A.-G. is founded.
1899: The first two-stroke diesel engine, invented by Hugo Güldner, is built.
1900s
1901: Imanuel Lauster designs the first trunk piston diesel engine (DM 70).
1901: By 1901, MAN had produced 77 diesel engine cylinders for commercial use.
1903: The first two diesel-powered ships are launched, both for river and canal operations: the Vandal naphtha tanker and the Sarmat.
1904: The French launch the first diesel submarine, the Aigrette.
1905: January 14: Diesel applies for a patent on unit injection (L20510I/46a).
1905: The first diesel engine turbochargers and intercoolers are manufactured by Büchi.
1906: The Diesel Motoren-Fabrik Actien-Gesellschaft is dissolved.
1908: Diesel's patents expire.
1908: The first lorry (truck) with a diesel engine appears.
1909: March 14, Prosper L'Orange applies for a patent on precombustion chamber injection. He later builds the first diesel engine with this system.
1910s
1910: MAN starts making two-stroke diesel engines.
1910: November 26, James McKechnie applies for a patent on unit injection. Unlike Diesel, he successfully built working unit injectors.
1911: November 27, the Allgemeine Gesellschaft für Dieselmotoren A.-G. is dissolved.
1911: The Germania shipyard in Kiel builds diesel engines for German submarines. These engines are installed in 1914.
1912: MAN builds the first double-acting piston two-stroke diesel engine.
1912: The first locomotive with a diesel engine is used on the Swiss Winterthur–Romanshorn railway.
1912: MS Selandia is the first ocean-going ship with diesel engines.
1913: NELSECO diesels are installed on commercial ships and US Navy submarines.
1913: September 29, Rudolf Diesel dies mysteriously while crossing the English Channel on the SS Dresden.
1914: MAN builds two-stroke engines for Dutch submarines.
1919: Prosper L'Orange obtains a patent on a precombustion chamber insert incorporating a needle injection nozzle. First diesel engine from Cummins.
1920s
1923: At the Königsberg DLG exhibition, the first agricultural tractor with a diesel engine, the prototype Benz-Sendling S6, is presented.
1923: December 15, the first lorry with a direct-injected diesel engine is tested by MAN. The same year, Benz builds a lorry with a pre-combustion chamber injected diesel engine.
1923: The first two-stroke diesel engine with counterflow scavenging appears.
1924: Fairbanks-Morse introduces the two-stroke Y-VA (later renamed to Model 32).
1925: Sendling starts mass-producing a diesel-powered agricultural tractor.
1927: Bosch introduces the first inline injection pump for motor vehicle diesel engines.
1929: The first passenger car with a diesel engine appears. Its engine is an Otto engine modified to use the diesel principle and Bosch's injection pump. Several other diesel car prototypes follow.
1930s
1933: Junkers Motorenwerke in Germany start production of the most successful mass-produced aviation diesel engine of all time, the Jumo 205. By the outbreak of World War II, over 900 examples are produced. Its rated take-off power is 645 kW.
1933: General Motors uses its new roots-blown, unit-injected two-stroke Winton 201A diesel engine to power its automotive assembly exhibit at the Chicago World's Fair (A Century of Progress). The engine is offered in several versions.
1934: The Budd Company builds the first diesel–electric passenger train in the US, the Pioneer Zephyr 9900, using a Winton engine.
1935: The Citroën Rosalie is fitted with an early swirl chamber injected diesel engine for testing purposes. Daimler-Benz starts manufacturing the Mercedes-Benz OM 138, the first mass-produced diesel engine for passenger cars, and one of the few marketable passenger car diesel engines of its time.
1936: March 4, the airship LZ 129 Hindenburg, the biggest aircraft ever made, takes off for the first time. It is powered by four V16 Daimler-Benz LOF 6 diesel engines.
1936: Manufacture of the first mass-produced passenger car with a diesel engine (Mercedes-Benz 260 D) begins.
1937: Konstantin Fyodorovich Chelpan develops the V-2 diesel engine, later used in the Soviet T-34 tanks, widely regarded as the best tank chassis of World War II.
1938: General Motors forms the GM Diesel Division, later to become Detroit Diesel, and introduces the Series 71 inline high-speed medium-horsepower two-stroke engine, suitable for road vehicles and marine use.
1940s
1946: Clessie Cummins obtains a patent on a fuel feeding and injection apparatus for oil-burning engines that incorporates separate components for generating injection pressure and injection timing.
1946: Klöckner-Humboldt-Deutz (KHD) introduces an air-cooled mass-production diesel engine to the market.
1950s
1950s: KHD becomes the air-cooled diesel engine global market leader.
1951: J. Siegfried Meurer obtains a patent on the M-System, a design that incorporates a central sphere combustion chamber in the piston (DBP 865683).
1953: First mass-produced swirl chamber injected passenger car diesel engine (Borgward/Fiat).
1954: Daimler-Benz introduces the Mercedes-Benz OM 312 A, a 4.6 litre straight-6 series-production industrial diesel engine with a turbocharger. It proves to be unreliable.
1954: Volvo produces a small batch series of 200 units of a turbocharged version of the 9.6 litre TD 96 engine.
1955: Turbocharging for MAN two-stroke marine diesel engines becomes standard.
1959: The Peugeot 403 becomes the first mass-produced passenger sedan/saloon manufactured outside West Germany to be offered with a diesel engine option.
1960s
1964: Summer, Daimler-Benz switches from precombustion chamber injection to helix-controlled direct injection.
1962–65: A diesel compression braking system, eventually to be manufactured by the Jacobs Manufacturing Company and nicknamed the "Jake Brake", is invented and patented by Clessie Cummins.
1970s
1972: KHD introduces the AD-System, Allstoff-Direkteinspritzung, (anyfuel direct-injection), for its diesel engines. AD-diesels can operate on virtually any kind of liquid fuel, but they are fitted with an auxiliary spark plug that fires if the ignition quality of the fuel is too low.
1976: Development of the common rail injection begins at the ETH Zürich.
1976: The Volkswagen Golf becomes the first compact passenger sedan/saloon to be offered with a diesel engine option.
1978: Daimler-Benz produces the first passenger car diesel engine with a turbocharger (Mercedes-Benz OM617 engine).
1979: First prototype of a low-speed two-stroke crosshead engine with common rail injection.
1980s
1981/82: Uniflow scavenging for two-stroke marine diesel engines becomes standard.
1982: August, Toyota introduces a microprocessor-controlled engine control unit (ECU) for Diesel engines to the Japanese market.
1985: December, road testing of a common rail injection system for lorries using a modified 6VD 12,5/12 GRF-E engine in an IFA W50 takes place.
1987: Daimler-Benz introduces the electronically controlled injection pump for lorry diesel engines.
1988: The Fiat Croma becomes the first mass-produced passenger car in the world to have a direct injected diesel engine.
1989: The Audi 100 is the first passenger car in the world with a turbocharged, intercooled, direct-injected, and electronically controlled diesel engine. It has a BMEP of 1.35 MPa and a BSFC of 198 g/(kW·h).
1990s
1992: 1 July, the Euro 1 emission standard comes into effect.
1993: First passenger car diesel engine with four valves per cylinder, the Mercedes-Benz OM 604.
1994: Unit injector system by Bosch for lorry diesel engines.
1996: First diesel engine with direct injection and four valves per cylinder, used in the Opel Vectra.
1996: First radial piston distributor injection pump by Bosch.
1997: First mass-produced common rail diesel engine for a passenger car, the Fiat 1.9 JTD.
1998: BMW wins the 24 Hours Nürburgring race with a modified BMW E36. The car, called 320d, is powered by a 2-litre, straight-four diesel engine with direct injection and a helix-controlled distributor injection pump (Bosch VP 44). The fuel consumption is 23 L/100 km, only half the fuel consumption of a similar Otto-powered car.
1998: Volkswagen introduces the VW EA188 Pumpe-Düse engine (1.9 TDI), with Bosch-developed electronically controlled unit injectors.
1999: Daimler-Chrysler presents the first common rail three-cylinder diesel engine used in a passenger car (the Smart City Coupé).
2000s
2000: Peugeot introduces the diesel particulate filter for passenger cars.
2002: Piezoelectric injector technology by Siemens.
2003: Piezoelectric injector technology by Bosch, and Delphi.
2004: BMW introduces dual-stage turbocharging with the BMW M57 engine.
2006: The world's most powerful diesel engine, the Wärtsilä-Sulzer RTA96-C, is produced. It is rated 80,080 kW.
2006: The Audi R10 TDI, equipped with a 5.5-litre V12 TDI engine, wins the 2006 24 Hours of Le Mans.
2006: Daimler-Chrysler launches the first series-production passenger car engine with selective catalytic reduction exhaust gas treatment, the Mercedes-Benz OM 642. It fully complies with the Tier 2 Bin 8 emission standard.
2008: Volkswagen introduces the LNT catalyst for passenger car diesel engines with the VW 2.0 TDI engine.
2008: Volkswagen starts series production of the biggest passenger car diesel engine, the Audi 6-litre V12 TDI.
2008: Subaru introduces the first horizontally opposed diesel engine to be fitted to a passenger car. It is a 2-litre common rail engine, rated 110 kW.
2010s
2010: Mitsubishi develops and starts mass production of its 4N13 1.8 L DOHC I4, the world's first passenger car diesel engine with a variable valve timing system.
2012: BMW introduces dual-stage turbocharging with three turbochargers for the BMW N57 engine.
2015: Common rail systems operating at pressures of 2,500 bar are launched.
2015: In the Volkswagen emissions scandal, the US EPA issued a notice of violation of the Clean Air Act to Volkswagen Group after it was found that Volkswagen had intentionally programmed turbocharged direct injection (TDI) diesel engines to activate certain emissions controls only during laboratory emissions testing.
Operating principle
Overview
The characteristics of a diesel engine are
Use of compression ignition, instead of an ignition apparatus such as a spark plug.
Internal mixture formation. In diesel engines, the mixture of air and fuel is only formed inside the combustion chamber.
Quality torque control. The amount of torque a diesel engine produces is not controlled by throttling the intake air (unlike a traditional spark-ignition petrol engine, where the airflow is reduced in order to regulate the torque output); instead, the volume of air entering the engine is maximised at all times, and the torque output is regulated solely by controlling the amount of injected fuel.
High air-fuel ratio. Diesel engines run at global air-fuel ratios significantly leaner than the stoichiometric ratio.
Diffusion flame: At combustion, oxygen first has to diffuse into the flame, rather than having oxygen and fuel already mixed before combustion, which would result in a premixed flame.
Heterogeneous air-fuel mixture: In diesel engines, there is no even dispersion of fuel and air inside the cylinder. That is because the combustion process begins at the end of the injection phase, before a homogeneous mixture of air and fuel can be formed.
Preference for the fuel to have a high ignition performance (Cetane number), rather than a high knocking resistance (octane rating) that is preferred for petrol engines.
Thermodynamic cycle
The diesel internal combustion engine differs from the gasoline powered Otto cycle by using highly compressed hot air to ignite the fuel rather than using a spark plug (compression ignition rather than spark ignition).
In the diesel engine, only air is initially introduced into the combustion chamber. The air is then compressed with a compression ratio typically between 15:1 and 23:1. This high compression causes the temperature of the air to rise. At about the top of the compression stroke, fuel is injected directly into the compressed air in the combustion chamber. This may be into a (typically toroidal) void in the top of the piston or a pre-chamber depending upon the design of the engine. The fuel injector ensures that the fuel is broken down into small droplets, and that the fuel is distributed evenly. The heat of the compressed air vaporises fuel from the surface of the droplets. The vapour is then ignited by the heat from the compressed air in the combustion chamber, the droplets continue to vaporise from their surfaces and burn, getting smaller, until all the fuel in the droplets has been burnt. Combustion occurs at a substantially constant pressure during the initial part of the power stroke. The start of vaporisation causes a delay before ignition and the characteristic diesel knocking sound as the vapour reaches ignition temperature and causes an abrupt increase in pressure above the piston (not shown on the P-V indicator diagram). When combustion is complete the combustion gases expand as the piston descends further; the high pressure in the cylinder drives the piston downward, supplying power to the crankshaft.
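The temperature reached by compression alone can be estimated from the adiabatic relation T₂ = T₁ · r^(γ−1). A back-of-the-envelope sketch, assuming a typical compression ratio from the range above and an effective heat-capacity ratio for hot air:

    # Estimate cylinder air temperature after adiabatic compression.
    T1 = 293.0    # intake air temperature in kelvin (~20 degrees C)
    r = 18.0      # assumed compression ratio, within the 15:1 to 23:1 range
    gamma = 1.35  # effective heat-capacity ratio of air at these temperatures

    T2 = T1 * r ** (gamma - 1.0)
    print(f"After compression: {T2:.0f} K ({T2 - 273.15:.0f} degrees C)")
    # ~800 K (~530 C), well above diesel fuel's autoignition temperature (~210 C)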
As well as the high level of compression allowing combustion to take place without a separate ignition system, a high compression ratio greatly increases the engine's efficiency. Increasing the compression ratio in a spark-ignition engine where fuel and air are mixed before entry to the cylinder is limited by the need to prevent pre-ignition, which would cause engine damage. Since only air is compressed in a diesel engine, and fuel is not introduced into the cylinder until shortly before top dead centre (TDC), premature detonation is not a problem and compression ratios are much higher.
The pressure–volume diagram (pV) diagram is a simplified and idealised representation of the events involved in a diesel engine cycle, arranged to illustrate the similarity with a Carnot cycle. Starting at 1, the piston is at bottom dead centre and both valves are closed at the start of the compression stroke; the cylinder contains air at atmospheric pressure. Between 1 and 2 the air is compressed adiabatically – that is without heat transfer to or from the environment – by the rising piston. (This is only approximately true since there will be some heat exchange with the cylinder walls.) During this compression, the volume is reduced, the pressure and temperature both rise. At or slightly before 2 (TDC) fuel is injected and burns in the compressed hot air. Chemical energy is released and this constitutes an injection of thermal energy (heat) into the compressed gas. Combustion and heating occur between 2 and 3. In this interval the pressure remains constant since the piston descends, and the volume increases; the temperature rises as a consequence of the energy of combustion. At 3 fuel injection and combustion are complete, and the cylinder contains gas at a higher temperature than at 2. Between 3 and 4 this hot gas expands, again approximately adiabatically. Work is done on the system to which the engine is connected. During this expansion phase the volume of the gas rises, and its temperature and pressure both fall. At 4 the exhaust valve opens, and the pressure falls abruptly to atmospheric (approximately). This is unresisted expansion and no useful work is done by it. Ideally the adiabatic expansion should continue, extending the line 3–4 to the right until the pressure falls to that of the surrounding air, but the loss of efficiency caused by this unresisted expansion is justified by the practical difficulties involved in recovering it (the engine would have to be much larger). After the opening of the exhaust valve, the exhaust stroke follows, but this (and the following induction stroke) are not shown on the diagram. If shown, they would be represented by a low-pressure loop at the bottom of the diagram. At 1 it is assumed that the exhaust and induction strokes have been completed, and the cylinder is again filled with air. The piston-cylinder system absorbs energy between 1 and 2 – this is the work needed to compress the air in the cylinder, and is provided by mechanical kinetic energy stored in the flywheel of the engine. Work output is done by the piston-cylinder combination between 2 and 4. The difference between these two increments of work is the indicated work output per cycle, and is represented by the area enclosed by the pV loop. The adiabatic expansion is in a higher pressure range than that of the compression because the gas in the cylinder is hotter during expansion than during compression. It is for this reason that the loop has a finite area, and the net output of work during a cycle is positive.
Efficiency
The fuel efficiency of diesel engines is better than that of most other types of combustion engine, due to their high compression ratio, high air–fuel equivalence ratio (λ), and the lack of intake air restrictions (i.e. throttle valves). Theoretically, the highest possible efficiency for a diesel engine is 75%. However, in practice the efficiency is much lower, with efficiencies of up to 43% for passenger car engines, up to 45% for large truck and bus engines, and up to 55% for large two-stroke marine engines. The average efficiency over a motor vehicle driving cycle is lower than the diesel engine's peak efficiency (for example, a 37% average efficiency for an engine with a peak efficiency of 44%). That is because the fuel efficiency of a diesel engine drops at lower loads; however, it does not drop quite as fast as that of the Otto (spark ignition) engine.
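For comparison with these figures, the air-standard (ideal-gas) diesel cycle gives an upper bound that depends on the compression ratio r and on how long injection continues, expressed by the cutoff ratio α = V₃/V₂. A minimal sketch under ideal-cycle assumptions (the example values are illustrative only):

    # Ideal (air-standard) diesel cycle efficiency.
    def diesel_cycle_efficiency(r, alpha, gamma=1.4):
        """r: compression ratio, alpha: cutoff ratio V3/V2."""
        return 1.0 - (alpha ** gamma - 1.0) / (gamma * (alpha - 1.0) * r ** (gamma - 1.0))

    print(f"{diesel_cycle_efficiency(18.0, 2.0):.1%}")  # ~63%
    print(f"{diesel_cycle_efficiency(23.0, 1.5):.1%}")  # ~69%
    # Real engines fall well short of these ideal-cycle numbers.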
Emissions
Diesel engines are combustion engines and, therefore, emit combustion products in their exhaust gas. Due to incomplete combustion, diesel engine exhaust gases include carbon monoxide, hydrocarbons, particulate matter, and nitrogen oxides pollutants. About 90 per cent of the pollutants can be removed from the exhaust gas using exhaust gas treatment technology. Road vehicle diesel engines have no sulfur dioxide emissions, because motor vehicle diesel fuel has been sulfur-free since 2003. Helmut Tschöke argues that particulate matter emitted from motor vehicles has negative impacts on human health.
The particulate matter in diesel exhaust emissions is sometimes classified as a carcinogen or "probable carcinogen" and is known to increase the risk of heart and respiratory diseases.
Electrical system
In principle, a diesel engine does not require any sort of electrical system. However, most modern diesel engines are equipped with an electrical fuel pump, and an electronic engine control unit.
However, there is no high-voltage electrical ignition system present in a diesel engine. This eliminates a source of radio frequency emissions (which can interfere with navigation and communication equipment), which is why only diesel-powered vehicles are allowed in some parts of the American National Radio Quiet Zone.
Torque control
To control the torque output at any given time (i.e. when the driver of a car adjusts the accelerator pedal), a governor adjusts the amount of fuel injected into the engine. Mechanical governors have been used in the past; however, electronic governors are more common on modern engines. Mechanical governors are usually driven by the engine's accessory belt or a gear-drive system and use a combination of springs and weights to control fuel delivery relative to both load and speed. Electronically governed engines use an electronic control unit (ECU) or electronic control module (ECM) to control the fuel delivery. The ECM/ECU uses various sensors (such as the engine speed signal, intake manifold pressure and fuel temperature) to determine the amount of fuel injected into the engine.
Due to the amount of air being constant (for a given RPM) while the amount of fuel varies, very high ("lean") air-fuel ratios are used in situations where minimal torque output is required. This differs from a petrol engine, where a throttle is used to also reduce the amount of intake air as part of regulating the engine's torque output. Controlling the timing of the start of injection of fuel into the cylinder is similar to controlling the ignition timing in a petrol engine. It is therefore a key factor in controlling the power output, fuel consumption and exhaust emissions.
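As a rough illustration of the logic an electronic governor implements, the sketch below maps pedal position and engine state to an injected fuel quantity. Every name, limit, and constant here is hypothetical; production ECUs use calibrated multi-dimensional maps rather than formulas like these:

    # Hypothetical electronic-governor logic: injected fuel (mg per stroke)
    # from driver demand, with a smoke limit based on estimated air mass.
    def fuel_quantity_mg(pedal, rpm, manifold_pressure_kpa):
        if rpm > 4500:                        # overspeed cutoff
            return 0.0
        demand = pedal * 60.0                 # driver demand, 0..60 mg/stroke
        air_mg = manifold_pressure_kpa * 5.0  # crude air-mass-per-cycle estimate
        smoke_limit = air_mg / 18.0           # keep the mixture lean (~18:1 or more)
        return min(demand, smoke_limit)

    print(fuel_quantity_mg(pedal=0.8, rpm=2000, manifold_pressure_kpa=180))  # 48.0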
Classification
There are several different ways of categorising diesel engines, as outlined in the following sections.
RPM operating range
Günter Mau categorises diesel engines by their rotational speeds into three groups:
High-speed engines (> 1,000 rpm),
Medium-speed engines (300–1,000 rpm), and
Slow-speed engines (< 300 rpm).
High-speed diesel engines
High-speed engines are used to power trucks (lorries), buses, tractors, cars, yachts, compressors, pumps and small electrical generators. As of 2018, most high-speed engines have direct injection. Many modern engines, particularly in on-highway applications, have common rail direct injection. On bigger ships, high-speed diesel engines are often used for powering electric generators. The highest power output of high-speed diesel engines is approximately 5 MW.
Medium-speed diesel engines
Medium-speed engines are used in large electrical generators, railway diesel locomotives, ship propulsion and mechanical drive applications such as large compressors or pumps. Medium speed diesel engines operate on either diesel fuel or heavy fuel oil by direct injection in the same manner as low-speed engines. Usually, they are four-stroke engines with trunk pistons; a notable exception being the EMD 567, 645, and 710 engines, which are all two-stroke.
The power output of medium-speed diesel engines can be as high as 21,870 kW, with the effective efficiency being around 47–48% (1982). Most larger medium-speed engines are started with compressed air direct on pistons, using an air distributor, as opposed to a pneumatic starting motor acting on the flywheel, which tends to be used for smaller engines.
Medium-speed engines intended for marine applications are usually used to power (ro-ro) ferries, passenger ships or small freight ships. Using medium-speed engines reduces the cost of smaller ships and increases their transport capacity. In addition to that, a single ship can use two smaller engines instead of one big engine, which increases the ship's safety.
Low-speed diesel engines
Low-speed diesel engines are usually very large in size and mostly used to power ships. There are two different types of low-speed engines that are commonly used: Two-stroke engines with a crosshead, and four-stroke engines with a regular trunk-piston. Two-stroke engines have a limited rotational frequency and their charge exchange is more difficult, which means that they are usually bigger than four-stroke engines and used to directly power a ship's propeller.
Four-stroke engines on ships are usually used to power an electric generator. An electric motor powers the propeller. Both types are usually very undersquare, meaning the bore is smaller than the stroke. Low-speed diesel engines (as used in ships and other applications where overall engine weight is relatively unimportant) often have an effective efficiency of up to 55%. Like medium-speed engines, low-speed engines are started with compressed air, and they use heavy oil as their primary fuel.
Combustion cycle
Four-stroke engines use the combustion cycle described earlier. Most smaller diesels, for vehicular use, for instance, typically use the four-stroke cycle. This is due to several factors, such as the two-stroke design's narrow powerband which is not particularly suitable for automotive use and the necessity for complicated and expensive built-in lubrication systems and scavenging measures. The cost effectiveness (and proportion of added weight) of these technologies has less of an impact on larger, more expensive engines, while engines intended for shipping or stationary use can be run at a single speed for long periods.
Two-stroke engines use a combustion cycle which is completed in two strokes instead of four strokes. Filling the cylinder with air and compressing it takes place in one stroke, and the power and exhaust strokes are combined. The compression in a two-stroke diesel engine is similar to the compression that takes place in a four-stroke diesel engine: as the piston passes through bottom centre and starts upward, compression commences, culminating in fuel injection and ignition. Instead of a full set of valves, two-stroke diesel engines have simple intake ports, and exhaust ports (or exhaust valves). When the piston approaches bottom dead centre, both the intake and the exhaust ports are "open", which means that there is atmospheric pressure inside the cylinder. Therefore, some sort of pump is required to blow the air into the cylinder and the combustion gases into the exhaust. This process is called scavenging. The pressure required is approximately 10–30 kPa.
Due to the lack of discrete exhaust and intake strokes, all two-stroke diesel engines use a scavenge blower or some form of compressor to charge the cylinders with air and assist in scavenging. Roots-type superchargers were used for ship engines until the mid-1950s, however since 1955 they have been widely replaced by turbochargers. Usually, a two-stroke ship diesel engine has a single-stage turbocharger with a turbine that has an axial inflow and a radial outflow.
Scavenging in two-stroke engines
In general, there are three types of scavenging possible:
Uniflow scavenging
Crossflow scavenging
Reverse flow scavenging
Crossflow scavenging is incomplete and limits the stroke, yet some manufacturers used it. Reverse flow scavenging is a very simple way of scavenging, and it was popular amongst manufacturers until the early 1980s. Uniflow scavenging is more complicated to make but allows the highest fuel efficiency; since the early 1980s, manufacturers such as MAN and Sulzer have switched to this system. It is standard for modern marine two-stroke diesel engines.
Fuel used
So-called dual-fuel diesel engines or gas diesel engines burn two different types of fuel simultaneously, for instance, a gaseous fuel and diesel engine fuel. The diesel engine fuel auto-ignites due to compression ignition, and then ignites the gaseous fuel. Such engines do not require any type of spark ignition and operate similar to regular diesel engines.
Fuel injection
The fuel is injected at high pressure into either the combustion chamber, "swirl chamber" or "pre-chamber," unlike petrol engines where the fuel is often added in the inlet manifold or carburetor. Engines where the fuel is injected into the main combustion chamber are called direct injection (DI) engines, while those which use a swirl chamber or pre-chamber are called indirect injection (IDI) engines.
Direct injection
Most direct injection diesel engines have a combustion cup in the top of the piston where the fuel is sprayed. Many different methods of injection can be used. Usually, an engine with helix-controlled mechanical direct injection has either an inline or a distributor injection pump. For each engine cylinder, the corresponding plunger in the fuel pump measures out the correct amount of fuel and determines the timing of each injection. These engines use injectors that are very precise spring-loaded valves that open and close at a specific fuel pressure. Separate high-pressure fuel lines connect the fuel pump with each cylinder. Fuel volume for each single combustion is controlled by a slanted groove in the plunger which rotates only a few degrees releasing the pressure and is controlled by a mechanical governor, consisting of weights rotating at engine speed constrained by springs and a lever. The injectors are held open by the fuel pressure. On high-speed engines the plunger pumps are together in one unit. The length of fuel lines from the pump to each injector is normally the same for each cylinder in order to obtain the same pressure delay. Direct injected diesel engines usually use orifice-type fuel injectors.
Electronic control of the fuel injection transformed the direct injection engine by allowing much greater control over the combustion.
Common rail
Common rail (CR) direct injection systems do not have the fuel metering, pressure-raising and delivery functions in a single unit, as in the case of a Bosch distributor-type pump, for example. A high-pressure pump supplies the CR. The requirements of each cylinder injector are supplied from this common high-pressure reservoir of fuel. An Electronic Diesel Control (EDC) controls both rail pressure and injections depending on engine operating conditions. The injectors of older CR systems have solenoid-driven plungers for lifting the injection needle, whilst newer CR injectors use plungers driven by piezoelectric actuators that have less moving mass and therefore allow even more injections in a very short period of time. Early common rail systems were controlled by mechanical means.
The injection pressure of modern CR systems ranges from 140 MPa to 270 MPa.
Indirect injection
An indirect injection (IDI) diesel engine delivers fuel into a small chamber called a swirl chamber, precombustion chamber, pre-chamber or ante-chamber, which is connected to the cylinder by a narrow air passage. Generally, the goal of the pre-chamber is to create increased turbulence for better air–fuel mixing. This system also allows for a smoother, quieter running engine, and because fuel mixing is assisted by turbulence, injector pressures can be lower. The pre-chamber has the disadvantage of increasing heat loss to the engine's cooling system and restricting the combustion burn, which reduces efficiency by 5–10%. Most IDI systems use a single-orifice injector. IDI engines are also more difficult to start and usually require the use of glow plugs. They may be cheaper to build but generally require a higher compression ratio than their DI counterparts. IDI also makes it easier to produce smooth, quieter running engines with a simple mechanical injection system, since exact injection timing is not as critical. Most modern automotive engines are DI, which has the benefits of greater efficiency and easier starting; however, IDI engines can still be found in many ATV and small diesel applications. Indirect injected diesel engines use pintle-type fuel injectors.
Air-blast injection
Early diesel engines injected fuel with the assistance of compressed air, which atomised the fuel and forced it into the engine through a nozzle (a similar principle to an aerosol spray). The nozzle opening was closed by a pin valve actuated by the camshaft. Although the engine was also required to drive an air compressor used for air-blast injection, the efficiency was nonetheless better than that of other combustion engines of the time. However, the system was heavy and slow to react to changing torque demands, making it unsuitable for road vehicles.
Unit injectors
A unit injector system, also known as "Pumpe-Düse" (pump-nozzle in German), combines the injector and fuel pump into a single component, which is positioned above each cylinder. This eliminates the high-pressure fuel lines and achieves a more consistent injection. Under full load, the injection pressure can reach up to 220 MPa. Unit injectors are operated by a cam, and the quantity of fuel injected is controlled either mechanically (by a rack or lever) or electronically.
Due to increased performance requirements, unit injectors have been largely replaced by common rail injection systems.
Diesel engine particularities
Mass
The average diesel engine has a poorer power-to-mass ratio than an equivalent petrol engine. The lower engine speeds (RPM) of typical diesel engines result in a lower power output. Also, the mass of a diesel engine is typically higher, since the higher operating pressure inside the combustion chamber increases the internal forces, which requires stronger (and therefore heavier) parts to withstand these forces.
Noise ("diesel clatter")
The distinctive noise of a diesel engine, particularly at idling speeds, is sometimes called "diesel clatter". This noise is largely caused by the sudden ignition of the diesel fuel when injected into the combustion chamber, which causes a pressure wave that sounds like knocking.
Engine designers can reduce diesel clatter through: indirect injection; pilot or pre-injection; injection timing; injection rate; compression ratio; turbo boost; and exhaust gas recirculation (EGR). Common rail diesel injection systems permit multiple injection events as an aid to noise reduction. Through measures such as these, diesel clatter noise is greatly reduced in modern engines. Diesel fuels with a higher cetane rating ignite more readily and hence reduce diesel clatter.
Cold weather starting
In warmer climates, diesel engines do not require any starting aid (aside from the starter motor). However, many diesel engines include some form of preheating for the combustion chamber, to assist starting in cold conditions. Engines with a displacement of less than 1 litre per cylinder usually have glowplugs, whilst larger heavy-duty engines have flame-start systems. The minimum starting temperature that allows starting without pre-heating is 40 °C (104 °F) for precombustion chamber engines, 20 °C (68 °F) for swirl chamber engines, and 0 °C (32 °F) for direct injected engines.
In the past, a wider variety of cold-start methods were used. Some engines, such as Detroit Diesel engines, used a system to introduce small amounts of ether into the inlet manifold to start combustion. Instead of glowplugs, some diesel engines are equipped with starting aid systems that change valve timing. The simplest way this can be done is with a decompression lever. Activating the decompression lever holds the outlet valves slightly open, so that the engine has no compression and the crankshaft can be turned over with significantly less resistance. When the crankshaft reaches a higher speed, flipping the decompression lever back into its normal position abruptly re-activates the outlet valves, restoring compression − the flywheel's mass moment of inertia then starts the engine. Other diesel engines, such as the precombustion chamber engine XII Jv 170/240 made by Ganz & Co., have a valve timing changing system that is operated by adjusting the inlet valve camshaft, moving it into a slightly "late" position. This makes the inlet valves open with a delay, forcing the inlet air to heat up when entering the combustion chamber.
Supercharging & turbocharging
Forced induction, especially turbocharging, is commonly used on diesel engines because it greatly increases efficiency and torque output. Diesel engines are well suited to forced induction due to their operating principle, which is characterised by wide ignition limits and the absence of fuel during the compression stroke. Therefore, knocking, pre-ignition or detonation cannot occur, and a lean mixture caused by excess supercharging air inside the combustion chamber does not negatively affect combustion.
Major manufacturers
MTU
MAN
Wärtsilä
Rolls-Royce Power Systems
Siemens
Kolomna, KDZ, TMH, BMZ and UDMZ
General Electric (GE Transportation)
Volvo Penta
Sulzer
Doosan (Doosan Infracore, Doosan Marine)
YaMZ, VAZ, KMZ, RD Nevsky, STM, GAZ, VMZ
Mitsubishi, Mitsui, Mazda, IHI, Kawasaki, Honda, Suzuki, Subaru, Isuzu, Nissan and others
Caterpillar and Cummins
AO Zvezda and Zvezda Energetika
Bergen Engines, MaK, Deutz AG, MWM, BMW, VW, MAPNA, BHEL, DESA, Steyr Motors GmbH, Iran Khodro Diesel, Isotta Fraschini, EMD, Fairbanks Morse, Shanxi, Henan Diesel, SDM
Fuel and fluid characteristics
Diesel engines can combust a huge variety of fuels, including several fuel oils that have advantages over fuels such as petrol. These advantages include:
Low fuel costs, as fuel oils are relatively cheap
Good lubrication properties
High energy density
Low risk of catching fire, as they do not form a flammable vapour
Biodiesel is an easily synthesised, non-petroleum-based fuel (made through transesterification) which can run directly in many diesel engines, while gasoline engines either need adaptation to run synthetic fuels or else use them as an additive to gasoline (e.g., ethanol added to gasohol).
In diesel engines, a mechanical injector system atomizes the fuel directly into the combustion chamber (as opposed to a Venturi jet in a carburetor, or a fuel injector in a manifold injection system atomizing fuel into the intake manifold or intake runners as in a petrol engine). Because only air is inducted into the cylinder in a diesel engine, the compression ratio can be much higher as there is no risk of pre-ignition provided the injection process is accurately timed. This means that cylinder temperatures are much higher in a diesel engine than a petrol engine, allowing less volatile fuels to be used.
Therefore, diesel engines can operate on a huge variety of different fuels. In general, fuel for diesel engines should have a suitable viscosity, so that the injection pump can pump the fuel to the injection nozzles without damaging itself or corroding the fuel line. At injection, the fuel should form a good spray, and it should not have a coking effect upon the injection nozzles. To ensure proper engine starting and smooth operation, the fuel should ignite readily and hence not cause a long ignition delay (that is, the fuel should have a high cetane number). Diesel fuel should also have a high lower heating value.
Inline mechanical injector pumps generally tolerate poor-quality or bio-fuels better than distributor-type pumps. Also, indirect injection engines generally run more satisfactorily on fuels with a high ignition delay (for instance, petrol) than direct injection engines. This is partly because an indirect injection engine has a much greater 'swirl' effect, improving vaporisation and combustion of fuel, and because (in the case of vegetable oil-type fuels) lipid depositions can condense on the cylinder walls of a direct-injection engine if combustion temperatures are too low (such as starting the engine from cold). Direct-injected engines with an MAN centre sphere combustion chamber rely on fuel condensing on the combustion chamber walls. The fuel starts vaporising only after ignition sets in, and it burns relatively smoothly. Therefore, such engines also tolerate fuels with poor ignition delay characteristics, and, in general, they can operate on petrol rated 86 RON.
Fuel types
In his 1893 work Theory and Construction of a Rational Heat Motor, Rudolf Diesel considered using coal dust as fuel for the diesel engine. However, Diesel merely considered coal dust (as well as liquid fuels and gas); his actual engine was designed to operate on petroleum, which was soon replaced with regular petrol and kerosene for further testing purposes, as the petroleum proved to be too viscous. In addition to kerosene and petrol, Diesel's engine could also operate on ligroin.
Before diesel engine fuel was standardised, fuels such as petrol, kerosene, gas oil, vegetable oil and mineral oil, as well as mixtures of these fuels, were used. Typical fuels specifically intended for diesel engines were petroleum distillates and coal-tar distillates with the following specific lower heating values (a short conversion check follows the list):
Diesel oil: 10,200 kcal·kg⁻¹ (42.7 MJ·kg⁻¹) up to 10,250 kcal·kg⁻¹ (42.9 MJ·kg⁻¹)
Heating oil: 10,000 kcal·kg⁻¹ (41.8 MJ·kg⁻¹) up to 10,200 kcal·kg⁻¹ (42.7 MJ·kg⁻¹)
Coal-tar creosote: 9,150 kcal·kg⁻¹ (38.3 MJ·kg⁻¹) up to 9,250 kcal·kg⁻¹ (38.7 MJ·kg⁻¹)
Kerosene: up to 10,400 kcal·kg⁻¹ (43.5 MJ·kg⁻¹)
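These kcal-to-MJ figures follow from the standard conversion 1 kcal ≈ 4.1868 kJ. A minimal sketch to verify the arithmetic (the conversion constant is the only added assumption; the values come from the list above):

```python
# Convert specific lower heating values from kcal/kg to MJ/kg.
# Assumes the International Table calorie: 1 kcal = 4.1868 kJ.
KCAL_TO_MJ = 4.1868e-3

fuels = {
    "Diesel oil (upper bound)": 10250,
    "Heating oil (upper bound)": 10200,
    "Coal-tar creosote (upper bound)": 9250,
    "Kerosene": 10400,
}

for name, kcal_per_kg in fuels.items():
    print(f"{name}: {kcal_per_kg} kcal/kg = {kcal_per_kg * KCAL_TO_MJ:.1f} MJ/kg")
# e.g. 10,250 kcal/kg * 0.0041868 ≈ 42.9 MJ/kg, matching the figure above.
```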
The first diesel fuel standards were DIN 51601, VTL 9140-001, and NATO F 54, which appeared after World War II. The modern European EN 590 diesel fuel standard was established in May 1993; the modern version of the NATO F 54 standard is mostly identical with it. The DIN 51628 biodiesel standard was rendered obsolete by the 2009 version of EN 590; FAME biodiesel conforms to the EN 14214 standard. Watercraft diesel engines usually operate on diesel engine fuel that conforms to the ISO 8217 standard (Bunker C). Also, some diesel engines can operate on gases (such as LNG).
Modern diesel fuel properties
Gelling
DIN 51601 diesel fuel was prone to waxing or gelling in cold weather; both are terms for the solidification of diesel oil into a partially crystalline state. The crystals build up in the fuel system (especially in fuel filters), eventually starving the engine of fuel and causing it to stop running. Low-output electric heaters in fuel tanks and around fuel lines were used to solve this problem. Also, most engines have a spill return system, by which any excess fuel from the injector pump and injectors is returned to the fuel tank. Once the engine has warmed, returning warm fuel prevents waxing in the tank. Before direct injection engines became common, some manufacturers, such as BMW, recommended blending up to 30% petrol into the diesel fuel to prevent it from gelling when temperatures dropped below −15 °C.
Safety
Fuel flammability
Diesel fuel is less flammable than petrol because its flash point is at least 55 °C, leading to a lower risk of fire caused by fuel in a vehicle equipped with a diesel engine.
Diesel fuel can create an explosive air/vapour mix under the right conditions. However, compared with petrol, it is less prone due to its lower vapour pressure, which is an indication of evaporation rate. The Material Safety Data Sheet for ultra-low sulfur diesel fuel indicates a vapour explosion hazard for diesel fuel indoors, outdoors, or in sewers.
Cancer
Diesel exhaust has been classified as an IARC Group 1 carcinogen. It causes lung cancer and is associated with an increased risk for bladder cancer.
Engine runaway (uncontrollable overspeeding)
See diesel engine runaway.
Applications
The characteristics of diesel have different advantages for different applications.
Passenger cars
Diesel engines have long been popular in bigger cars, where the weight and cost penalties were less noticeable, and have been used in smaller cars such as superminis in Europe since the 1980s. Smooth operation as well as high low-end torque are deemed important for passenger cars and small commercial vehicles. The introduction of electronically controlled fuel injection significantly improved smooth torque generation, and starting in the early 1990s, car manufacturers began offering their high-end luxury vehicles with diesel engines. Passenger car diesel engines usually have between three and twelve cylinders, and a displacement ranging from 0.8 to 6.0 litres. Modern powerplants are usually turbocharged and have direct injection.
Diesel engines do not suffer from intake-air throttling, resulting in very low fuel consumption especially at low partial load (for instance: driving at city speeds). One fifth of all passenger cars worldwide have diesel engines, with many of them being in Europe, where approximately 47% of all passenger cars are diesel-powered. Daimler-Benz in conjunction with Robert Bosch GmbH produced diesel-powered passenger cars starting in 1936. The popularity of diesel-powered passenger cars in markets such as India, South Korea and Japan is increasing (as of 2018).
Commercial vehicles and lorries
In 1893, Rudolf Diesel suggested that the diesel engine could possibly power "wagons" (lorries). The first lorries with diesel engines were brought to market in 1924.
Modern diesel engines for lorries have to be both extremely reliable and very fuel efficient. Common-rail direct injection, turbocharging and four valves per cylinder are standard. Displacements range from 4.5 to 15.5 litres, with power-to-mass ratios of 2.5–3.5 kg·kW⁻¹ for heavy duty and 2.0–3.0 kg·kW⁻¹ for medium duty engines. V6 and V8 engines used to be common, due to the relatively low engine mass the V configuration provides, but the V configuration has more recently been abandoned in favour of straight engines. These engines are usually straight-6 for heavy and medium duties and straight-4 for medium duty. Their undersquare design causes lower overall piston speeds, which results in an increased lifespan. Compared with 1970s diesel engines, the expected lifespan of modern lorry diesel engines has more than doubled.
Railroad rolling stock
Diesel engines for locomotives are built for continuous operation between refuelings and may need to be designed to use poor quality fuel in some circumstances. Some locomotives use two-stroke diesel engines. Diesel engines have replaced steam engines on all non-electrified railroads in the world. The first diesel locomotives appeared in 1913, and diesel multiple units soon after. Nearly all modern diesel locomotives are more correctly known as diesel–electric locomotives because they use an electric transmission: the diesel engine drives an electric generator which powers electric traction motors. While electric locomotives have replaced the diesel locomotive for passenger services in many areas, diesel traction is widely used for cargo-hauling freight trains and on tracks where electrification is not economically viable.
In the 1940s, road vehicle diesel engines were considered powerful enough for DMUs, and regular truck powerplants were commonly used. These engines had to be flat enough to allow underfloor installation. Usually, the engine was mated with a pneumatically operated mechanical gearbox, due to the low size, mass, and production costs of this design. Some DMUs used hydraulic torque converters instead. Diesel–electric transmission was not suitable for such small engines. In the 1930s, the Deutsche Reichsbahn standardised its first DMU engine: a 12-cylinder boxer unit. Several German manufacturers produced engines according to this standard.
Watercraft
The requirements for marine diesel engines vary, depending on the application. For military use and medium-size boats, medium-speed four-stroke diesel engines are most suitable. These engines usually have up to 24 cylinders and come with power outputs in the single-digit megawatt range. Small boats may use lorry diesel engines. Large ships use extremely efficient, low-speed two-stroke diesel engines. They can reach efficiencies of up to 55%. Unlike most regular diesel engines, two-stroke watercraft engines use highly viscous fuel oil. Submarines are usually diesel–electric.
The first diesel engines for ships were made by A. B. Diesels Motorer Stockholm in 1903. These engines were three-cylinder units of 120 PS (88 kW) and four-cylinder units of 180 PS (132 kW), and were used for Russian ships. During World War I, submarine diesel engine development in particular advanced quickly. By the end of the war, double-acting two-stroke piston engines with outputs of up to 12,200 PS (9 MW) had been made for marine use.
Aviation
Early
Diesel engines had been used in aircraft before World War II, for instance, in the rigid airship LZ 129 Hindenburg, which was powered by four Daimler-Benz DB 602 diesel engines, or in several Junkers aircraft, which had Jumo 205 engines installed.
In 1929, in the United States, the Packard Motor Company developed America's first aircraft diesel engine, the Packard DR-980—an air-cooled, 9-cylinder radial engine. They installed it in various aircraft of the era—some of which were used in record-breaking distance or endurance flights, and in the first successful demonstration of ground-to-air radiophone communications (voice radio having been previously unintelligible in aircraft equipped with spark-ignition engines, due to electromagnetic interference). Additional advantages cited, at the time, included a lower risk of post-crash fire, and superior performance at high altitudes.
On March 6, 1930, the engine received an Approved Type Certificate—first ever for an aircraft diesel engine—from the U.S. Department of Commerce. However, noxious exhaust fumes, cold-start and vibration problems, engine structural failures, the death of its developer, and the industrial economic contraction of the Great Depression, combined to kill the program.
Modern
From then until the late 1970s, there were not many applications of the diesel engine in aircraft. In 1978, Piper Cherokee co-designer Karl H. Bergey argued that "the likelihood of a general aviation diesel in the near future is remote."
However, with the 1970s energy crisis and environmental movement, and resulting pressures for greater fuel economy, reduced carbon and lead in the atmosphere, and other issues, there was a resurgence of interest in diesel engines for aircraft. High-compression piston aircraft engines that run on aviation gasoline ("avgas") generally require the addition of toxic tetraethyl lead to avgas, to avoid engine pre-ignition and detonation; diesel engines do not require leaded fuel. Also, biodiesel can, theoretically, provide a net reduction in atmospheric carbon compared to avgas. For these reasons, the general aviation community has begun to fear the possible banning or discontinuance of leaded avgas.
Additionally, avgas is a specialty fuel in very low (and declining) demand compared to other fuels, and its makers are susceptible to costly aviation-crash lawsuits, reducing refiners' interest in producing it. Outside the United States, avgas has already become increasingly difficult to find at airports (and generally), unlike less-expensive, diesel-compatible fuels such as Jet-A and other jet fuels.
By the late 1990s / early 2000s, diesel engines were beginning to appear in light aircraft. Most notably, Frank Thielert and his engine enterprise began developing diesel engines to replace the gasoline piston engines in common light aircraft use. The first successful application of the Thielert engines to production aircraft was in the Diamond DA42 Twin Star light twin, which exhibited exceptional fuel efficiency surpassing anything in its class, and in its single-engined predecessor, the Diamond DA40 Diamond Star.
In subsequent years, several other companies have developed aircraft diesel engines, or have begun to—most notably Continental Aerospace Technologies which, by 2018, was reporting it had sold over 5,000 such engines worldwide.
The United States' Federal Aviation Administration has reported that "by 2007, various jet-fueled piston aircraft had logged well over 600,000 hours of service". In early 2019, AOPA reported that a diesel engine model for general aviation aircraft is "approaching the finish line." By late 2022, Continental was reporting that its "Jet-A" fueled engines had exceeded "2,000... in operation today," with over "9 million hours," and were being "specified by major OEMs" for Cessna, Piper, Diamond, Mooney, Tecnam, Glasair and Robin aircraft.
In recent years (as of 2016), diesel engines have also found use in unmanned aerial vehicles (UAVs), due to their reliability, durability, and low fuel consumption.
Non-road diesel engines
Non-road diesel engines are commonly used for construction equipment and agricultural machinery. Fuel efficiency, reliability and ease of maintenance are very important for such engines, whilst high power output and quiet operation are of secondary importance. Therefore, mechanically controlled fuel injection and air-cooling are still very common. The common power outputs of non-road diesel engines vary greatly, with the smallest units starting at 3 kW, and the most powerful engines being heavy duty lorry engines.
Stationary diesel engines
Stationary diesel engines are commonly used for electricity generation, but also for powering refrigerator compressors, or other types of compressors or pumps. Usually, these engines either run continuously with partial load, or intermittently with full load. Stationary diesel engines powering electric generators that put out an alternating current usually operate with alternating load, but fixed rotational frequency. This is due to the mains' fixed frequency of either 50 Hz (Europe) or 60 Hz (United States). The engine's crankshaft rotational frequency is chosen so that the mains' frequency is a multiple of it. For practical reasons, this results in crankshaft rotational frequencies of either 25 Hz (1500 rpm) or 30 Hz (1800 rpm).
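The frequency matching can be illustrated with a short sketch. The pole counts below are illustrative assumptions, and synchronous_speed_rpm is a hypothetical helper, not an established API:

```python
# Synchronous generator speed: n [rpm] = 120 * f / poles,
# i.e. crankshaft frequency = mains frequency / (number of pole pairs).
def synchronous_speed_rpm(mains_hz: float, poles: int) -> float:
    return 120.0 * mains_hz / poles

for mains_hz in (50.0, 60.0):
    for poles in (2, 4):
        n = synchronous_speed_rpm(mains_hz, poles)
        print(f"{mains_hz:.0f} Hz mains, {poles}-pole generator: {n:.0f} rpm")
# 50 Hz mains with a 4-pole generator gives 1500 rpm (25 Hz crankshaft);
# 60 Hz mains with a 4-pole generator gives 1800 rpm (30 Hz crankshaft).
```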
Low heat rejection engines
A special class of prototype internal combustion piston engines has been developed over several decades with the goal of improving efficiency by reducing heat loss. These engines are variously called adiabatic engines (because they better approximate adiabatic expansion), low heat rejection engines, or high temperature engines. They are generally piston engines with combustion chamber parts lined with ceramic thermal barrier coatings. Some make use of pistons and other parts made of titanium, which has a low thermal conductivity and density. Some designs are able to eliminate the use of a cooling system and its associated parasitic losses altogether. Developing lubricants able to withstand the higher temperatures involved has been a major barrier to commercialization.
Future developments
In mid-2010s literature, the main development goals for future diesel engines are described as improvements in exhaust emissions, reduction of fuel consumption, and increased lifespan (2014). It is said that the diesel engine, especially the diesel engine for commercial vehicles, will remain the most important vehicle powerplant until the mid-2030s. Editors assume that the complexity of the diesel engine will increase further (2014). Some editors expect a future convergence of the diesel and Otto engines' operating principles, due to Otto engine development steps made towards homogeneous charge compression ignition (2017).
| Technology | Basics_8 | null |
8578 | https://en.wikipedia.org/wiki/Dyne | Dyne | The dyne (symbol: dyn; ) is a derived unit of force specified in the centimetre–gram–second (CGS) system of units, a predecessor of the modern SI.
History
The name dyne was first proposed as a CGS unit of force in 1873 by a Committee of the British Association for the Advancement of Science.
Definition
The dyne is defined as "the force required to accelerate a mass of one gram at a rate of one centimetre per second squared". An equivalent definition of the dyne is "that force which, acting for one second, will produce a change of velocity of one centimetre per second in a mass of one gram".
One dyne is equal to 10 micronewtons, 10⁻⁵ N, or to 10 nsn (nanosthenes) in the old metre–tonne–second system of units.
1 dyn = 1 g⋅cm/s² = 10⁻⁵ kg⋅m/s² = 10⁻⁵ N
1 N = 1 kg⋅m/s² = 10⁵ g⋅cm/s² = 10⁵ dyn
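A minimal sketch of these unit relations, computed from the definition F = m·a (the function name is illustrative, not a standard library call):

```python
# Unit relations from the definitions above: 1 dyn = 1e-5 N.
DYN_PER_N = 1e5

def dynes_to_newtons(f_dyn: float) -> float:
    return f_dyn / DYN_PER_N

# F = m * a with m = 1 g and a = 1 cm/s^2, expressed in SI units:
force_n = 1e-3 * 1e-2                     # kg * m/s^2 = 1e-5 N
print(force_n == dynes_to_newtons(1.0))   # True: one dyne is 1e-5 N
```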
Use
The dyne per centimetre is a unit traditionally used to measure surface tension. For example, the surface tension of distilled water is 71.99 dyn/cm at 25 °C (77 °F). (In SI units this is 71.99 mN/m or 0.07199 N/m.)
| Physical sciences | Force | Basics and measurement |
8603 | https://en.wikipedia.org/wiki/Diffraction | Diffraction | Diffraction is the deviation of waves from straight-line propagation, without any change in their energy, caused by an obstacle or by passage through an aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. Diffraction is the same physical effect as interference, but interference is typically applied to the superposition of a few waves, whereas the term diffraction is used when many waves are superposed.
Italian scientist Francesco Maria Grimaldi coined the word diffraction and was the first to record accurate observations of the phenomenon in 1660.
In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength, as shown in the inserted image. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. If there are multiple closely spaced openings, a complex pattern of varying intensity can result.
These effects also occur when a light wave travels through a medium with a varying refractive index, or when a sound wave travels through a medium with varying acoustic impedance – all waves diffract, including gravitational waves, water waves, and other electromagnetic waves such as X-rays and radio waves. Furthermore, quantum mechanics also demonstrates that matter possesses wave-like properties and, therefore, undergoes diffraction (which is measurable at subatomic to molecular levels).
History
The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term diffraction, from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665. Isaac Newton studied these effects and attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating to be discovered. Thomas Young performed a celebrated experiment in 1803 demonstrating interference from two closely spaced slits. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves.
In 1818, supporters of the corpuscular theory of light proposed that the Paris Academy prize question address diffraction, expecting to see the wave theory defeated. However, Augustin-Jean Fresnel took the prize with his new theory of wave propagation, which combined the ideas of Christiaan Huygens with Young's interference concept. Siméon Denis Poisson challenged the Fresnel theory by showing that it predicted light at the centre of the shadow behind a circular obstruction; Dominique-François-Jean Arago proceeded to demonstrate experimentally that such light is visible, confirming Fresnel's diffraction model.
Mechanism
In classical physics diffraction arises because of how waves propagate; this is described by the Huygens–Fresnel principle and the principle of superposition of waves. The propagation of a wave can be visualized by considering every particle of the transmitted medium on a wavefront as a point source for a secondary spherical wave. The wave displacement at any subsequent point is the sum of these secondary waves. When waves are added together, their sum is determined by the relative phases as well as the amplitudes of the individual waves so that the summed amplitude of the waves can have any value between zero and the sum of the individual amplitudes. Hence, diffraction patterns usually have a series of maxima and minima.
In the modern quantum mechanical understanding of light propagation through a slit (or slits) every photon is described by its wavefunction that determines the probability distribution for the photon: the light and dark bands are the areas where the photons are more or less likely to be detected. The wavefunction is determined by the physical surroundings such as slit geometry, screen distance, and initial conditions when the photon is created. The wave nature of individual photons (as opposed to wave properties only arising from the interactions between multitudes of photons) was implied by a low-intensity double-slit experiment first performed by G. I. Taylor in 1909. The quantum approach has some striking similarities to the Huygens-Fresnel principle; based on that principle, as light travels through slits and boundaries, secondary point light sources are created near or along these obstacles, and the resulting diffraction pattern is going to be the intensity profile based on the collective interference of all these light sources that have different optical paths. In the quantum formalism, that is similar to considering the limited regions around the slits and boundaries from which photons are more likely to originate, and calculating the probability distribution (that is proportional to the resulting intensity of classical formalism).
There are various analytical models for photons which allow the diffracted field to be calculated, including the Kirchhoff diffraction equation (derived from the wave equation), the Fraunhofer diffraction approximation of the Kirchhoff equation (applicable to the far field), the Fresnel diffraction approximation (applicable to the near field) and the Feynman path integral formulation. Most configurations cannot be solved analytically, but can yield numerical solutions through finite element and boundary element methods. In many cases it is assumed that there is only one scattering event, which is called kinematical diffraction, with an Ewald's sphere construction used to represent that there is no change in energy during the diffraction process. For matter waves a similar but slightly different approach is used based upon a relativistically corrected form of the Schrödinger equation, as first detailed by Hans Bethe. The Fraunhofer and Fresnel limits exist for these as well, although they correspond more to approximations for the matter wave Green's function (propagator) for the Schrödinger equation. More common are full multiple scattering models, particularly in electron diffraction; in some cases similar dynamical diffraction models are also used for X-rays.
It is possible to obtain a qualitative understanding of many diffraction phenomena by considering how the relative phases of the individual secondary wave sources vary, and, in particular, the conditions in which the phase difference equals half a cycle in which case waves will cancel one another out.
The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem. For water waves, this is already the case; water waves propagate only on the surface of the water. For light, we can often neglect one direction if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes, we will have to take into account the full three-dimensional nature of the problem.
Examples
The effects of diffraction are often seen in everyday life. The most striking examples of diffraction are those that involve light; for example, the closely spaced tracks on a CD or DVD act as a diffraction grating to form the familiar rainbow pattern seen when looking at a disc.
This principle can be extended to engineer a grating with a structure such that it will produce any diffraction pattern desired; the hologram on a credit card is an example.
Diffraction in the atmosphere by small particles can cause a corona - a bright disc and rings around a bright light source like the sun or the moon. At the opposite point one may also observe glory - bright rings around the shadow of the observer. In contrast to the corona, glory requires the particles to be transparent spheres (like fog droplets), since the backscattering of the light that forms the glory involves refraction and internal reflection within the droplet.
A shadow of a solid object, using light from a compact source, shows small fringes near its edges.
Diffraction spikes are diffraction patterns caused by a non-circular aperture in a camera or by support struts in a telescope; in normal vision, diffraction through eyelashes may produce such spikes.
The speckle pattern which is observed when laser light falls on an optically rough surface is also a diffraction phenomenon. When deli meat appears to be iridescent, that is diffraction off the meat fibers. All these effects are a consequence of the fact that light propagates as a wave.
Diffraction can occur with any kind of wave. Ocean waves diffract around jetties and other obstacles. Sound waves can diffract around objects, which is why one can still hear someone calling even when hiding behind a tree.
Diffraction can also be a concern in some technical applications; it sets a fundamental limit to the resolution of a camera, telescope, or microscope.
Other examples of diffraction are considered below.
Single-slit diffraction
A long slit of infinitesimal width which is illuminated by light diffracts the light into a series of circular waves and the wavefront which emerges from the slit is a cylindrical wave of uniform intensity, in accordance with the Huygens–Fresnel principle.
An illuminated slit that is wider than a wavelength produces interference effects in the space downstream of the slit. Assuming that the slit behaves as though it has a large number of point sources spaced evenly across its width, interference effects can be calculated. The analysis of this system is simplified if we consider light of a single wavelength. If the incident light is coherent, these sources all have the same phase. Light incident at a given point in the space downstream of the slit is made up of contributions from each of these point sources, and if the relative phases of these contributions vary by $2\pi$ or more, we may expect to find minima and maxima in the diffracted light. Such phase differences are caused by differences in the path lengths over which contributing rays reach the point from the slit.
We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. The light from a source located at the top edge of the slit interferes destructively with a source located at the middle of the slit, when the path difference between them is equal to $\lambda/2$. Similarly, the source just below the top of the slit will interfere destructively with the source located just below the middle of the slit at the same angle. We can continue this reasoning along the entire height of the slit to conclude that the condition for destructive interference for the entire slit is the same as the condition for destructive interference between two narrow slits a distance apart that is half the width of the slit. The path difference is approximately $\frac{d\sin\theta}{2}$, so that the minimum intensity occurs at an angle $\theta_{\min}$ given by
$d\,\sin\theta_{\min} = \lambda,$
where $d$ is the width of the slit, $\theta_{\min}$ is the angle of incidence at which the minimum intensity occurs, and $\lambda$ is the wavelength of the light.
A similar argument can be used to show that if we imagine the slit to be divided into four, six, eight parts, etc., minima are obtained at angles $\theta_n$ given by
$d\,\sin\theta_{n} = n\lambda,$
where $n$ is an integer other than zero.
There is no such simple argument to enable us to find the maxima of the diffraction pattern. The intensity profile can be calculated using the Fraunhofer diffraction equation as
$I(\theta) = I_0\,\operatorname{sinc}^2\!\left(\frac{d\pi}{\lambda}\sin\theta\right),$
where $I(\theta)$ is the intensity at a given angle, $I_0$ is the intensity at the central maximum, which is also a normalization factor of the intensity profile that can be determined by an integration from $\theta = -\frac{\pi}{2}$ to $\theta = \frac{\pi}{2}$ and conservation of energy, and $\operatorname{sinc}(x) = \frac{\sin x}{x}$, which is the unnormalized sinc function.
This analysis applies only to the far field (Fraunhofer diffraction), that is, at a distance much larger than the width of the slit.
From the intensity profile above, if $d \ll \lambda$, the intensity will have little dependency on $\theta$, hence the wavefront emerging from the slit would resemble a cylindrical wave with azimuthal symmetry; if $d \gg \lambda$, only $\theta \approx 0$ would have appreciable intensity, hence the wavefront emerging from the slit would resemble that of geometrical optics.
When the incident angle $\theta_\text{i}$ of the light onto the slit is non-zero (which causes a change in the path length), the intensity profile in the Fraunhofer regime (i.e. far field) becomes
$I(\theta) = I_0\,\operatorname{sinc}^2\!\left[\frac{d\pi}{\lambda}(\sin\theta \pm \sin\theta_\text{i})\right].$
The choice of plus/minus sign depends on the definition of the incident angle $\theta_\text{i}$.
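A minimal numerical sketch of the Fraunhofer single-slit profile at normal incidence; the slit width and wavelength are illustrative assumptions. Note that NumPy's sinc is the normalized form sin(πx)/(πx), so passing d·sin(θ)/λ reproduces the unnormalized expression above:

```python
import numpy as np

# Fraunhofer single-slit intensity: I(theta) = I0 * sinc^2(d*sin(theta)/lambda);
# np.sinc(x) = sin(pi*x)/(pi*x) already supplies the factor of pi.
wavelength = 500e-9   # 500 nm light (assumed)
slit_width = 5e-6     # 5 micrometre slit (assumed)

theta = np.linspace(-0.3, 0.3, 7)    # angles in radians
intensity = np.sinc(slit_width * np.sin(theta) / wavelength) ** 2

for t, i in zip(theta, intensity):
    print(f"theta = {t:+.2f} rad  ->  I/I0 = {i:.4f}")
# Minima fall where d*sin(theta) = n*lambda, n = ±1, ±2, ...
```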
Diffraction grating
A diffraction grating is an optical component with a regular pattern. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles $\theta_m$ which are given by the grating equation
$d\left(\sin\theta_m \pm \sin\theta_i\right) = m\lambda,$
where $\theta_i$ is the angle at which the light is incident, $d$ is the separation of grating elements, and $m$ is an integer which can be positive or negative.
The light diffracted by a grating is found by summing the light diffracted from each of the elements, and is essentially a convolution of diffraction and interference patterns.
The figure shows the light diffracted by 2-element and 5-element gratings where the grating spacings are the same; it can be seen that the maxima are in the same position, but the detailed structures of the intensities are different.
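A short sketch of the grating equation at normal incidence; the 600 lines/mm grating and He–Ne laser wavelength are illustrative assumptions:

```python
import numpy as np

# Grating equation at normal incidence: d * sin(theta_m) = m * lambda.
wavelength = 632.8e-9    # He-Ne laser line (assumed)
d = 1e-3 / 600           # element spacing of a 600 lines/mm grating (assumed)

for m in range(0, 4):
    s = m * wavelength / d
    if s <= 1:           # orders with |sin(theta)| > 1 do not propagate
        print(f"order {m}: theta = {np.degrees(np.arcsin(s)):.2f} deg")
# Prints orders 0, 1 and 2; the third order is evanescent for these values.
```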
Circular aperture
The far-field diffraction of a plane wave incident on a circular aperture is often referred to as the Airy disk. The variation in intensity with angle is given by
$I(\theta) = I_0\left(\frac{2 J_1(ka\sin\theta)}{ka\sin\theta}\right)^2,$
where $a$ is the radius of the circular aperture, $k$ is equal to $2\pi/\lambda$, and $J_1$ is a Bessel function. The smaller the aperture, the larger the spot size at a given distance, and the greater the divergence of the diffracted beams.
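A minimal sketch evaluating the Airy pattern with SciPy's Bessel function j1; the aperture radius and wavelength are illustrative assumptions:

```python
import numpy as np
from scipy.special import j1

# Airy pattern: I(theta) = I0 * (2*J1(x)/x)^2 with x = k*a*sin(theta).
wavelength = 550e-9    # green light (assumed)
a = 0.5e-3             # aperture radius, 0.5 mm (assumed)
k = 2 * np.pi / wavelength

theta = np.linspace(1e-6, 3e-3, 5)   # start slightly off 0, where the limit is I0
x = k * a * np.sin(theta)
intensity = (2 * j1(x) / x) ** 2

for t, i in zip(theta, intensity):
    print(f"theta = {t:.2e} rad  ->  I/I0 = {i:.4f}")
# First dark ring where x ≈ 3.8317, i.e. sin(theta) ≈ 1.22 * lambda / (2a).
```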
General aperture
The wave that emerges from a point source has amplitude $\psi$ at location $\mathbf{r}$ that is given by the solution of the frequency-domain wave equation for a point source (the Helmholtz equation),
$\nabla^2\psi + k^2\psi = \delta(\mathbf{r}),$
where $\delta(\mathbf{r})$ is the 3-dimensional delta function. The delta function has only radial dependence, so the Laplace operator (a.k.a. scalar Laplacian) in the spherical coordinate system simplifies to
$\nabla^2\psi = \frac{1}{r}\frac{\partial^2}{\partial r^2}(r\psi).$
(See del in cylindrical and spherical coordinates.) By direct substitution, the solution to this equation can be readily shown to be the scalar Green's function, which in the spherical coordinate system (and using the physics time convention $e^{-i\omega t}$) is
$\psi(r) = \frac{e^{ikr}}{4\pi r}.$
This solution assumes that the delta function source is located at the origin. If the source is located at an arbitrary source point, denoted by the vector $\mathbf{r}'$, and the field point is located at the point $\mathbf{r}$, then we may represent the scalar Green's function (for arbitrary source location) as
$\psi(\mathbf{r}\mid\mathbf{r}') = \frac{e^{ik\lvert\mathbf{r}-\mathbf{r}'\rvert}}{4\pi\lvert\mathbf{r}-\mathbf{r}'\rvert}.$
Therefore, if an electric field $E_\mathrm{inc}(x,y)$ is incident on the aperture, the field produced by this aperture distribution is given by the surface integral
$\Psi(r) \propto \iint_\mathrm{aperture} E_\mathrm{inc}(x',y')\,\frac{e^{ik\lvert\mathbf{r}-\mathbf{r}'\rvert}}{4\pi\lvert\mathbf{r}-\mathbf{r}'\rvert}\,dx'\,dy',$
where the source point in the aperture is given by the vector
$\mathbf{r}' = x'\,\hat{\mathbf{x}} + y'\,\hat{\mathbf{y}}.$
In the far field, wherein the parallel rays approximation can be employed, the Green's function
$\psi(\mathbf{r}\mid\mathbf{r}') = \frac{e^{ik\lvert\mathbf{r}-\mathbf{r}'\rvert}}{4\pi\lvert\mathbf{r}-\mathbf{r}'\rvert}$
simplifies to
$\psi(\mathbf{r}\mid\mathbf{r}') = \frac{e^{ikr}}{4\pi r}\,e^{-ik(\hat{\mathbf{r}}\cdot\mathbf{r}')}.$
The expression for the far-zone (Fraunhofer region) field becomes
$\Psi(r) \propto \frac{e^{ikr}}{4\pi r}\iint_\mathrm{aperture} E_\mathrm{inc}(x',y')\,e^{-ik(\hat{\mathbf{r}}\cdot\mathbf{r}')}\,dx'\,dy'.$
Now, since
$\mathbf{r}' = x'\,\hat{\mathbf{x}} + y'\,\hat{\mathbf{y}}$
and
$\hat{\mathbf{r}} = \sin\theta\cos\phi\,\hat{\mathbf{x}} + \sin\theta\sin\phi\,\hat{\mathbf{y}} + \cos\theta\,\hat{\mathbf{z}},$
the expression for the Fraunhofer region field from a planar aperture now becomes
$\Psi(r) \propto \frac{e^{ikr}}{4\pi r}\iint_\mathrm{aperture} E_\mathrm{inc}(x',y')\,e^{-ik(x'\sin\theta\cos\phi + y'\sin\theta\sin\phi)}\,dx'\,dy'.$
Letting
$k_x = k\sin\theta\cos\phi$
and
$k_y = k\sin\theta\sin\phi,$
the Fraunhofer region field of the planar aperture assumes the form of a Fourier transform
$\Psi(r) \propto \frac{e^{ikr}}{4\pi r}\iint_\mathrm{aperture} E_\mathrm{inc}(x',y')\,e^{-i(k_x x' + k_y y')}\,dx'\,dy'.$
In the far-field / Fraunhofer region, this becomes the spatial Fourier transform of the aperture distribution. Huygens' principle when applied to an aperture simply says that the far-field diffraction pattern is the spatial Fourier transform of the aperture shape, and this is a direct by-product of using the parallel-rays approximation, which is identical to doing a plane wave decomposition of the aperture plane fields (see Fourier optics).
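This Fourier-transform relationship can be demonstrated numerically. A sketch assuming a uniformly illuminated square aperture on a discrete grid (the grid size and aperture width are arbitrary choices for the example):

```python
import numpy as np

# Far-field (Fraunhofer) pattern as the 2-D Fourier transform of the
# aperture distribution; a square aperture is assumed for illustration.
n, width_px = 512, 32
aperture = np.zeros((n, n))
c = n // 2
aperture[c - width_px // 2:c + width_px // 2,
         c - width_px // 2:c + width_px // 2] = 1.0   # uniform illumination

far_field = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()

# The central row shows the sinc^2 fringes of the slit-like cross-section.
print(intensity[c, c - 4:c + 5].round(4))
```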
Propagation of a laser beam
The way in which the beam profile of a laser beam changes as it propagates is determined by diffraction. When the entire emitted beam has a planar, spatially coherent wave front, it approximates a Gaussian beam profile and has the lowest divergence for a given diameter. The smaller the output beam, the quicker it diverges. It is possible to reduce the divergence of a laser beam by first expanding it with one convex lens, and then collimating it with a second convex lens whose focal point is coincident with that of the first lens. The resulting beam has a larger diameter, and hence a lower divergence. Divergence of a laser beam may be reduced below the diffraction-limited divergence of a Gaussian beam, or even reversed to convergence, if the refractive index of the propagation media increases with the light intensity. This may result in a self-focusing effect.
When the wave front of the emitted beam has perturbations, only the transverse coherence length (where the wave front perturbation is less than 1/4 of the wavelength) should be considered as a Gaussian beam diameter when determining the divergence of the laser beam. If the transverse coherence length in the vertical direction is higher than in horizontal, the laser beam divergence will be lower in the vertical direction than in the horizontal.
Diffraction-limited imaging
The ability of an imaging system to resolve detail is ultimately limited by diffraction. This is because a plane wave incident on a circular lens or mirror is diffracted as described above. The light is not focused to a point but forms an Airy disk having a central spot in the focal plane whose radius (as measured to the first null) is
$\Delta x = 1.22\,\lambda N,$
where $\lambda$ is the wavelength of the light and $N$ is the f-number (focal length $f$ divided by aperture diameter $D$) of the imaging optics; this is strictly accurate for $N \gg 1$ (paraxial case). In object space, the corresponding angular resolution is
$\theta \approx \sin\theta = 1.22\,\frac{\lambda}{D},$
where $D$ is the diameter of the entrance pupil of the imaging lens (e.g., of a telescope's main mirror).
where is the diameter of the entrance pupil of the imaging lens (e.g., of a telescope's main mirror).
Two point sources will each produce an Airy pattern – see the photo of a binary star. As the point sources move closer together, the patterns will start to overlap, and ultimately they will merge to form a single pattern, in which case the two point sources cannot be resolved in the image. The Rayleigh criterion specifies that two point sources are considered "resolved" if the separation of the two images is at least the radius of the Airy disk, i.e. if the first minimum of one coincides with the maximum of the other.
Thus, the larger the aperture of the lens compared to the wavelength, the finer the resolution of an imaging system. This is one reason astronomical telescopes require large objectives, and why microscope objectives require a large numerical aperture (large aperture diameter compared to working distance) in order to obtain the highest possible resolution.
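A short sketch of the Rayleigh criterion for a few illustrative aperture diameters (the diameters and wavelength are assumptions chosen only for the example):

```python
import numpy as np

# Rayleigh criterion: angular resolution ~ 1.22 * lambda / D.
wavelength = 550e-9    # visible light (assumed)
for d_meters in (0.1, 2.4, 10.0):   # small, Hubble-class, large (assumed)
    theta = 1.22 * wavelength / d_meters     # radians
    arcsec = np.degrees(theta) * 3600
    print(f"D = {d_meters:5.1f} m  ->  {arcsec:.3f} arcsec")
# Larger apertures resolve proportionally finer angular detail.
```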
Speckle patterns
The speckle pattern seen when using a laser pointer is another diffraction phenomenon. It is a result of the superposition of many waves with different phases, which are produced when a laser beam illuminates a rough surface. They add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly.
Babinet's principle
Babinet's principle is a useful theorem stating that the diffraction pattern from an opaque body is identical to that from a hole of the same size and shape, but with differing intensities. This means that the interference conditions of a single obstruction would be the same as that of a single slit.
"Knife edge"
The knife-edge effect or knife-edge diffraction is a truncation of a portion of the incident radiation that strikes a sharp well-defined obstacle, such as a mountain range or the wall of a building.
The knife-edge effect is explained by the Huygens–Fresnel principle, which states that a well-defined obstruction to an electromagnetic wave acts as a secondary source, and creates a new wavefront. This new wavefront propagates into the geometric shadow area of the obstacle.
Knife-edge diffraction is an outgrowth of the "half-plane problem", originally solved by Arnold Sommerfeld using a plane wave spectrum formulation. A generalization of the half-plane problem is the "wedge problem", solvable as a boundary value problem in cylindrical coordinates. The solution in cylindrical coordinates was then extended to the optical regime by Joseph B. Keller, who introduced the notion of diffraction coefficients through his geometrical theory of diffraction (GTD). In 1974, Prabhakar Pathak and Robert Kouyoumjian extended the (singular) Keller coefficients via the uniform theory of diffraction (UTD).
Patterns
Several qualitative observations can be made of diffraction in general:
The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: The smaller the diffracting object, the 'wider' the resulting diffraction pattern, and vice versa. (More precisely, this is true of the sines of the angles.)
The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object.
When the diffracting object has a periodic structure, for example in a diffraction grating, the features generally become sharper. The third figure, for example, shows a comparison of a double-slit pattern with a pattern formed by five slits, both sets of slits having the same spacing between the centre of one slit and the next.
Matter wave diffraction
According to quantum theory every particle exhibits wave properties and can therefore diffract. Diffraction of electrons and neutrons is one of the powerful arguments in favor of quantum mechanics. The wavelength associated with a non-relativistic particle is the de Broglie wavelength
$\lambda = \frac{h}{p},$
where $h$ is the Planck constant and $p$ is the momentum of the particle (mass × velocity for slow-moving particles). For example, a sodium atom traveling at about 300 m/s would have a de Broglie wavelength of about 50 picometres.
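A quick numerical check of the sodium-atom figure from the de Broglie relation (the atomic mass of roughly 23 u is the only added assumption):

```python
# De Broglie wavelength: lambda = h / (m * v), for the sodium-atom example.
h = 6.62607015e-34            # Planck constant, J*s
m_na = 23 * 1.66053907e-27    # sodium atom mass, ~23 atomic mass units, in kg
v = 300.0                     # m/s, as in the text

wavelength = h / (m_na * v)
print(f"{wavelength * 1e12:.0f} pm")   # ~58 pm, i.e. about 50 picometres
```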
Diffraction of matter waves has been observed for small particles, like electrons, neutrons, atoms, and even large molecules. The short wavelength of these matter waves makes them ideally suited to study the atomic structure of solids, molecules and proteins.
Bragg diffraction
Diffraction from a large three-dimensional periodic structure such as many thousands of atoms in a crystal is called Bragg diffraction.
It is similar to what occurs when waves are scattered from a diffraction grating. Bragg diffraction is a consequence of interference between waves reflecting from many different crystal planes.
The condition of constructive interference is given by Bragg's law:
$n\lambda = 2d\sin\theta,$
where $\lambda$ is the wavelength, $d$ is the distance between crystal planes, $\theta$ is the angle of the diffracted wave, and $n$ is an integer known as the order of the diffracted beam.
Bragg diffraction may be carried out using either electromagnetic radiation of very short wavelength like X-rays, or matter waves like neutrons (and electrons) whose wavelength is on the order of (or much smaller than) the atomic spacing. The pattern produced gives information on the separations $d$ of crystallographic planes, allowing one to deduce the crystal structure.
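A minimal sketch of Bragg's law; the Cu Kα X-ray wavelength and the plane spacing are illustrative assumptions, not values from the text:

```python
import numpy as np

# Bragg's law: n * lambda = 2 * d * sin(theta).
wavelength = 1.5406e-10    # Cu K-alpha X-rays (assumed)
d = 2.014e-10              # an illustrative crystal plane spacing (assumed)

for n in (1, 2):
    s = n * wavelength / (2 * d)
    if s <= 1:             # higher orders with sin(theta) > 1 do not occur
        print(f"n = {n}: theta = {np.degrees(np.arcsin(s)):.2f} deg")
```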
For completeness, Bragg diffraction is a limit for a large number of atoms with X-rays or neutrons, and is rarely valid for electron diffraction or with solid particles in the size range of less than 50 nanometers.
Coherence
The description of diffraction relies on the interference of waves emanating from the same source taking different paths to the same point on a screen. In this description, the difference in phase between waves that took different paths is only dependent on the effective path length. This does not take into account the fact that waves that arrive at the screen at the same time were emitted by the source at different times. The initial phase with which the source emits waves can change over time in an unpredictable way. This means that waves emitted by the source at times that are too far apart can no longer form a constant interference pattern since the relation between their phases is no longer time independent.
The length over which the phase in a beam of light is correlated is called the coherence length. In order for interference to occur, the path length difference must be smaller than the coherence length. This is sometimes referred to as spectral coherence, as it is related to the presence of different frequency components in the wave. In the case of light emitted by an atomic transition, the coherence length is related to the lifetime of the excited state from which the atom made its transition.
If waves are emitted from an extended source, this can lead to incoherence in the transverse direction. When looking at a cross section of a beam of light, the length over which the phase is correlated is called the transverse coherence length. In the case of Young's double-slit experiment, this would mean that if the transverse coherence length is smaller than the spacing between the two slits, the resulting pattern on a screen would look like two single-slit diffraction patterns.
In the case of particles like electrons, neutrons, and atoms, the coherence length is related to the spatial extent of the wave function that describes the particle.
Applications
Diffraction before destruction
A new way to image single biological particles has emerged since the 2010s, utilising the bright X-rays generated by X-ray free-electron lasers. These femtosecond-duration pulses may allow the imaging of single biological macromolecules: because the pulses are so short, radiation damage can be outrun, so that diffraction patterns of single biological macromolecules can be obtained.
| Physical sciences | Waves | null |
8612 | https://en.wikipedia.org/wiki/Declination | Declination | In astronomy, declination (abbreviated dec; symbol δ) is one of the two angles that locate a point on the celestial sphere in the equatorial coordinate system, the other being hour angle. The declination angle is measured north (positive) or south (negative) of the celestial equator, along the hour circle passing through the point in question.
The root of the word declination (Latin, declinatio) means "a bending away" or "a bending down". It comes from the same root as the words incline ("bend forward") and recline ("bend backward").
In some 18th and 19th century astronomical texts, declination is given as North Pole Distance (N.P.D.), which is equivalent to 90° − declination. For instance, an object marked as declination −5° would have an N.P.D. of 95°, and a declination of −90° (the south celestial pole) would have an N.P.D. of 180°.
Explanation
Declination in astronomy is comparable to geographic latitude, projected onto the celestial sphere, and right ascension is likewise comparable to longitude.
Points north of the celestial equator have positive declinations, while those south have negative declinations. Any units of angular measure can be used for declination, but it is customarily measured in the degrees (°), minutes (′), and seconds (″) of sexagesimal measure, with 90° equivalent to a quarter circle. Declinations with magnitudes greater than 90° do not occur, because the poles are the northernmost and southernmost points of the celestial sphere.
An object at the
celestial equator has a declination of 0°
north celestial pole has a declination of +90°
south celestial pole has a declination of −90°
The sign is customarily included whether positive or negative.
Effects of precession
The Earth's axis rotates slowly westward about the poles of the ecliptic, completing one circuit in about 26,000 years. This effect, known as precession, causes the coordinates of stationary celestial objects to change continuously, if rather slowly. Therefore, equatorial coordinates (including declination) are inherently relative to the year of their observation, and astronomers specify them with reference to a particular year, known as an epoch. Coordinates from different epochs must be mathematically rotated to match each other, or to match a standard epoch.
The currently used standard epoch is J2000.0, which is January 1, 2000 at 12:00 TT. The prefix "J" indicates that it is a Julian epoch. Prior to J2000.0, astronomers used the successive Besselian Epochs B1875.0, B1900.0, and B1950.0.
Stars
A star's direction remains nearly fixed due to its vast distance, but its right ascension and declination do change gradually due to precession of the equinoxes and proper motion, and cyclically due to annual parallax. The declinations of Solar System objects change very rapidly compared to those of stars, due to orbital motion and close proximity.
As seen from locations in the Earth's Northern Hemisphere, celestial objects with declinations greater than 90° − φ (where φ = observer's latitude) appear to circle daily around the celestial pole without dipping below the horizon, and are therefore called circumpolar stars. This similarly occurs in the Southern Hemisphere for objects with declinations less (i.e. more negative) than −90° − φ (where φ is always a negative number for southern latitudes). An extreme example is the pole star, which has a declination near to +90°, so it is circumpolar as seen from anywhere in the Northern Hemisphere except very close to the equator.
Circumpolar stars never dip below the horizon. Conversely, there are other stars that never rise above the horizon, as seen from any given point on the Earth's surface (except extremely close to the equator; upon flat terrain, the distance has to be within approximately 2 km, although this varies based upon the observer's altitude and surrounding terrain). Generally, if a star whose declination is δ is circumpolar for some observer (where δ is either positive or negative), then a star whose declination is −δ never rises above the horizon, as seen by the same observer. (This neglects the effect of atmospheric refraction.) Likewise, if a star is circumpolar for an observer at latitude φ, then it never rises above the horizon as seen by an observer at latitude −φ.
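The circumpolarity condition above can be expressed as a small sketch; is_circumpolar is a hypothetical helper, and the example declinations and latitudes are approximate:

```python
def is_circumpolar(declination_deg: float, latitude_deg: float) -> bool:
    """Circumpolar test from the text: in the Northern Hemisphere a star is
    circumpolar when its declination exceeds 90 - latitude (degrees); in the
    Southern Hemisphere, when it is more negative than -90 - latitude."""
    if latitude_deg >= 0:
        return declination_deg > 90.0 - latitude_deg
    return declination_deg < -90.0 - latitude_deg

# Polaris (dec ~ +89.3 deg) seen from London (lat ~ +51.5 deg):
print(is_circumpolar(89.3, 51.5))   # True
# The same star seen from the equator (lat ~ 0 deg):
print(is_circumpolar(89.3, 0.0))    # False: 89.3 < 90
```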
Neglecting atmospheric refraction, for an observer at latitude φ, declination is always 0° at the east and west points of the horizon. At the north point, it is 90° − |φ|, and at the south point, −90° + |φ|. From the poles, declination is uniform around the entire horizon, approximately 0°.
Non-circumpolar stars are visible only during certain days or seasons of the year.
Sun
The Sun's declination varies with the seasons. As seen from arctic or antarctic latitudes, the Sun is circumpolar near the local summer solstice, leading to the phenomenon of it being above the horizon at midnight, which is called midnight sun. Likewise, near the local winter solstice, the Sun remains below the horizon all day, which is called polar night.
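A common back-of-the-envelope model (an illustration, not from the source; accurate only to a degree or two) treats the Sun's declination as a cosine of the day of year, swinging between roughly ±23.44°:

    import math

    def solar_declination_deg(day_of_year):
        """Crude seasonal approximation:
        dec ~ -23.44 * cos(360/365 * (N + 10)) degrees."""
        return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

    print(solar_declination_deg(172))   # ~ +23.4 near the June solstice
    print(solar_declination_deg(355))   # ~ -23.4 near the December solstice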
Relation to latitude
When an object is directly overhead, its declination is almost always within 0.01° of the observer's latitude; it would be exactly equal except for two complications.
The first complication applies to all celestial objects: the object's declination equals the observer's astronomical latitude, but the term "latitude" ordinarily means geodetic latitude, which is the latitude on maps and GPS devices. In the continental United States and surrounding area, the difference (the vertical deflection) is typically a few arcseconds (1 arcsecond = 1/3600 of a degree) but can be as great as 41 arcseconds.
The second complication is that, assuming no deflection of the vertical, "overhead" means perpendicular to the ellipsoid at the observer's location, but the perpendicular line does not pass through the center of the Earth; almanacs provide declinations measured at the center of the Earth. (An ellipsoid is a mathematically manageable approximation to sea level.)
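This second complication can be quantified: on an ellipsoid, the geocentric latitude ψ relates to the geodetic latitude φ by tan ψ = (1 − f)² tan φ, where f is the flattening. A hedged Python sketch, assuming the WGS84 value of f:

    import math

    F_WGS84 = 1 / 298.257223563   # WGS84 flattening (assumed reference ellipsoid)

    def geocentric_latitude_deg(geodetic_lat_deg, f=F_WGS84):
        """Geocentric latitude from geodetic latitude via tan(psi) = (1-f)^2 tan(phi)."""
        phi = math.radians(geodetic_lat_deg)
        return math.degrees(math.atan((1.0 - f) ** 2 * math.tan(phi)))

    # The difference peaks near 45 degrees latitude, at roughly 0.19 degrees (~11.5 arcminutes):
    print(45.0 - geocentric_latitude_deg(45.0))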
| Physical sciences | Celestial sphere: General | Astronomy |
8625 | https://en.wikipedia.org/wiki/Differential%20geometry | Differential geometry | Differential geometry is a mathematical discipline that studies the geometry of smooth shapes and smooth spaces, otherwise known as smooth manifolds. It uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra. The field has its origins in the study of spherical geometry as far back as antiquity. It also relates to astronomy, the geodesy of the Earth, and later the study of hyperbolic geometry by Lobachevsky. The simplest examples of smooth spaces are the plane and space curves and surfaces in the three-dimensional Euclidean space, and the study of these shapes formed the basis for development of modern differential geometry during the 18th and 19th centuries.
Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. A geometric structure is one which defines some notion of size, distance, shape, volume, or other rigidifying structure. For example, in Riemannian geometry distances and angles are specified, in symplectic geometry volumes may be computed, in conformal geometry only angles are specified, and in gauge theory certain fields are given over the space. Differential geometry is closely related to, and is sometimes taken to include, differential topology, which concerns itself with properties of differentiable manifolds that do not rely on any additional geometric structure (see that article for more discussion on the distinction between the two subjects). Differential geometry is also related to the geometric aspects of the theory of differential equations, otherwise known as geometric analysis.
Differential geometry finds applications throughout mathematics and the natural sciences. Most prominently the language of differential geometry was used by Albert Einstein in his theory of general relativity, and subsequently by physicists in the development of quantum field theory and the standard model of particle physics. Outside of physics, differential geometry finds applications in chemistry, economics, engineering, control theory, computer graphics and computer vision, and recently in machine learning.
History and development
The history and development of differential geometry as a subject begins at least as far back as classical antiquity. It is intimately linked to the development of geometry more generally, of the notion of space and shape, and of topology, especially the study of manifolds. In this section we focus primarily on the history of the application of infinitesimal methods to geometry, and later to the ideas of tangent spaces, and eventually the development of the modern formalism of the subject in terms of tensors and tensor fields.
Classical antiquity until the Renaissance (300 BC–1600 AD)
The study of differential geometry, or at least the study of the geometry of smooth shapes, can be traced back at least to classical antiquity. In particular, much was known about the geometry of the Earth, a spherical geometry, in the time of the ancient Greek mathematicians. Famously, Eratosthenes calculated the circumference of the Earth around 200 BC, and around 150 AD Ptolemy in his Geography introduced the stereographic projection for the purposes of mapping the shape of the Earth. Implicitly throughout this time principles that form the foundation of differential geometry and calculus were used in geodesy, although in a much simplified form. Namely, as far back as Euclid's Elements it was understood that a straight line could be defined by its property of providing the shortest distance between two points, and applying this same principle to the surface of the Earth leads to the conclusion that great circles, which are only locally similar to straight lines in a flat plane, provide the shortest path between two points on the Earth's surface. Indeed, the measurements of distance along such geodesic paths by Eratosthenes and others can be considered a rudimentary measure of arclength of curves, a concept which did not see a rigorous definition in terms of calculus until the 1600s.
Around this time there were only minimal overt applications of the theory of infinitesimals to the study of geometry, a precursor to the modern calculus-based study of the subject. In Euclid's Elements the notion of tangency of a line to a circle is discussed, and Archimedes applied the method of exhaustion to compute the areas of smooth shapes such as the circle, and the volumes of smooth three-dimensional solids such as the sphere, cones, and cylinders.
There was little development in the theory of differential geometry between antiquity and the beginning of the Renaissance. Before the development of calculus by Newton and Leibniz, the most significant development in the understanding of differential geometry came from Gerardus Mercator's development of the Mercator projection as a way of mapping the Earth. Mercator had an understanding of the advantages and pitfalls of his map design, and in particular was aware of the conformal nature of his projection, as well as the difference between praga, the lines of shortest distance on the Earth, and the directio, the straight line paths on his map. Mercator noted that the praga were oblique curvatur in this projection. This fact reflects the lack of a metric-preserving map of the Earth's surface onto a flat plane, a consequence of the later Theorema Egregium of Gauss.
After calculus (1600–1800)
The first systematic or rigorous treatment of geometry using the theory of infinitesimals and notions from calculus began around the 1600s when calculus was first developed by Gottfried Leibniz and Isaac Newton. At this time, the recent work of René Descartes introducing analytic coordinates to geometry allowed geometric shapes of increasing complexity to be described rigorously. In particular around this time Pierre de Fermat, Newton, and Leibniz began the study of plane curves and the investigation of concepts such as points of inflection and circles of osculation, which aid in the measurement of curvature. Indeed, already in his first paper on the foundations of calculus, Leibniz notes that the infinitesimal condition d²y = 0 indicates the existence of an inflection point. Shortly after this time the Bernoulli brothers, Jacob and Johann, made important early contributions to the use of infinitesimals to study geometry. In lectures by Johann Bernoulli at the time, later collated by L'Hopital into the first textbook on differential calculus, the tangents to plane curves of various types are computed using the condition dy = 0, and similarly points of inflection are calculated. At this same time the orthogonality between the osculating circles of a plane curve and the tangent directions is realised, and the first analytical formula for the radius of an osculating circle, essentially the first analytical formula for the notion of curvature, is written down.
In the wake of the development of analytic geometry and plane curves, Alexis Clairaut began the study of space curves at just the age of 16. In his book Clairaut introduced the notion of tangent and subtangent directions to space curves in relation to the directions which lie along a surface on which the space curve lies. Thus Clairaut demonstrated an implicit understanding of the tangent space of a surface and studied this idea using calculus for the first time. Importantly Clairaut introduced the terminology of curvature and double curvature, essentially the notion of principal curvatures later studied by Gauss and others.
Around this same time, Leonhard Euler, originally a student of Johann Bernoulli, provided many significant contributions not just to the development of geometry, but to mathematics more broadly. In regards to differential geometry, Euler studied the notion of a geodesic on a surface, deriving the first analytical geodesic equation, and later introduced the first set of intrinsic coordinate systems on a surface, beginning the theory of intrinsic geometry upon which modern geometric ideas are based. Around this time Euler's study of mechanics in the Mechanica led to the realization that a mass traveling along a surface not under the effect of any force would traverse a geodesic path, an early precursor to the important foundational ideas of Einstein's general relativity, and also to the Euler–Lagrange equations and the first theory of the calculus of variations, which in modern differential geometry underpins many techniques in symplectic geometry and geometric analysis. This theory was used by Lagrange, a co-developer of the calculus of variations, to derive the first differential equation describing a minimal surface in terms of the Euler–Lagrange equation. In 1760 Euler proved a theorem expressing the curvature of a space curve on a surface in terms of the principal curvatures, known as Euler's theorem.
Later in the 1700s, the new French school led by Gaspard Monge began to make contributions to differential geometry. Monge made important contributions to the theory of plane curves, surfaces, and studied surfaces of revolution and envelopes of plane curves and space curves. Several students of Monge made contributions to this same theory, and for example Charles Dupin provided a new interpretation of Euler's theorem in terms of the principal curvatures, which is the modern form of the equation.
Intrinsic geometry and non-Euclidean geometry (1800–1900)
The field of differential geometry became an area of study considered in its own right, distinct from the broader idea of analytic geometry, in the 1800s, primarily through the foundational work of Carl Friedrich Gauss and Bernhard Riemann, and also in the important contributions of Nikolai Lobachevsky on hyperbolic geometry and non-Euclidean geometry and throughout the same period the development of projective geometry.
Dubbed the single most important work in the history of differential geometry, in 1827 Gauss produced the Disquisitiones generales circa superficies curvas detailing the general theory of curved surfaces. In this work and his subsequent papers and unpublished notes on the theory of surfaces, Gauss has been dubbed the inventor of non-Euclidean geometry and the inventor of intrinsic differential geometry. In his fundamental paper Gauss introduced the Gauss map, Gaussian curvature, first and second fundamental forms, proved the Theorema Egregium showing the intrinsic nature of the Gaussian curvature, and studied geodesics, computing the area of a geodesic triangle in various non-Euclidean geometries on surfaces.
At this time Gauss was already of the opinion that the standard paradigm of Euclidean geometry should be discarded, and was in possession of private manuscripts on non-Euclidean geometry which informed his study of geodesic triangles. Around this same time János Bolyai and Lobachevsky independently discovered hyperbolic geometry and thus demonstrated the existence of consistent geometries outside Euclid's paradigm. Concrete models of hyperbolic geometry were produced by Eugenio Beltrami later in the 1860s, and Felix Klein coined the term non-Euclidean geometry in 1871, and through the Erlangen program put Euclidean and non-Euclidean geometries on the same footing. Implicitly, the spherical geometry of the Earth that had been studied since antiquity was a non-Euclidean geometry, an elliptic geometry.
The development of intrinsic differential geometry in the language of Gauss was spurred on by his student, Bernhard Riemann in his Habilitationsschrift, On the hypotheses which lie at the foundation of geometry. In this work Riemann introduced the notion of a Riemannian metric and the Riemannian curvature tensor for the first time, and began the systematic study of differential geometry in higher dimensions. This intrinsic point of view in terms of the Riemannian metric, denoted by ds² by Riemann, was the development of an idea of Gauss's about the linear element of a surface. At this time Riemann began to introduce the systematic use of linear algebra and multilinear algebra into the subject, making great use of the theory of quadratic forms in his investigation of metrics and curvature. At this time Riemann did not yet develop the modern notion of a manifold, as even the notion of a topological space had not been encountered, but he did propose that it might be possible to investigate or measure the properties of the metric of spacetime through the analysis of masses within spacetime, linking with the earlier observation of Euler that masses under the effect of no forces would travel along geodesics on surfaces, and predicting Einstein's fundamental observation of the equivalence principle a full 60 years before it appeared in the scientific literature.
In the wake of Riemann's new description, the focus of techniques used to study differential geometry shifted from the ad hoc and extrinsic methods of the study of curves and surfaces to a more systematic approach in terms of tensor calculus and Klein's Erlangen program, and progress increased in the field. The notion of groups of transformations was developed by Sophus Lie and Jean Gaston Darboux, leading to important results in the theory of Lie groups and symplectic geometry. The notion of differential calculus on curved spaces was studied by Elwin Christoffel, who introduced the Christoffel symbols which describe the covariant derivative in 1868, and by others including Eugenio Beltrami who studied many analytic questions on manifolds. In 1899 Luigi Bianchi produced his Lectures on differential geometry which studied differential geometry from Riemann's perspective, and a year later Tullio Levi-Civita and Gregorio Ricci-Curbastro produced their textbook systematically developing the theory of absolute differential calculus and tensor calculus. It was in this language that differential geometry was used by Einstein in the development of general relativity and pseudo-Riemannian geometry.
Modern differential geometry (1900–2000)
The subject of modern differential geometry emerged from the early 1900s in response to the foundational contributions of many mathematicians, including importantly the work of Henri Poincaré on the foundations of topology. At the start of the 1900s there was a major movement within mathematics to formalise the foundational aspects of the subject to avoid crises of rigour and accuracy, known as Hilbert's program. As part of this broader movement, the notion of a topological space was distilled by Felix Hausdorff in 1914, and by 1942 there were many different notions of manifold of a combinatorial and differential-geometric nature.
Interest in the subject was also focused by the emergence of Einstein's theory of general relativity and the importance of the Einstein Field equations. Einstein's theory popularised the tensor calculus of Ricci and Levi-Civita and introduced the notation g for a Riemannian metric, and Γ for the Christoffel symbols, both coming from G in Gravitation. Élie Cartan helped reformulate the foundations of the differential geometry of smooth manifolds in terms of exterior calculus and the theory of moving frames, leading in the world of physics to Einstein–Cartan theory.
Following this early development, many mathematicians contributed to the development of the modern theory, including Jean-Louis Koszul who introduced connections on vector bundles, Shiing-Shen Chern who introduced characteristic classes to the subject and began the study of complex manifolds, Sir William Vallance Douglas Hodge and Georges de Rham who expanded understanding of differential forms, Charles Ehresmann who introduced the theory of fibre bundles and Ehresmann connections, and others. Of particular importance was Hermann Weyl who made important contributions to the foundations of general relativity, introduced the Weyl tensor providing insight into conformal geometry, and first defined the notion of a gauge leading to the development of gauge theory in physics and mathematics.
In the middle and late 20th century differential geometry as a subject expanded in scope and developed links to other areas of mathematics and physics. The development of gauge theory and Yang–Mills theory in physics brought bundles and connections into focus, leading to developments in gauge theory. Many analytical results were investigated including the proof of the Atiyah–Singer index theorem. The development of complex geometry was spurred on by parallel results in algebraic geometry, and results in the geometry and global analysis of complex manifolds were proven by Shing-Tung Yau and others. In the latter half of the 20th century new analytic techniques were developed in regards to curvature flows such as the Ricci flow, which culminated in Grigori Perelman's proof of the Poincaré conjecture. During this same period primarily due to the influence of Michael Atiyah, new links between theoretical physics and differential geometry were formed. Techniques from the study of the Yang–Mills equations and gauge theory were used by mathematicians to develop new invariants of smooth manifolds. Physicists such as Edward Witten, the only physicist to be awarded a Fields medal, made new impacts in mathematics by using topological quantum field theory and string theory to make predictions and provide frameworks for new rigorous mathematics, which has resulted for example in the conjectural mirror symmetry and the Seiberg–Witten invariants.
Branches
Riemannian geometry
Riemannian geometry studies Riemannian manifolds, smooth manifolds with a Riemannian metric. This is a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Riemannian geometry generalizes Euclidean geometry to spaces that are not necessarily flat, though they still resemble Euclidean space at each point infinitesimally, i.e. in the first order of approximation. Various concepts based on length, such as the arc length of curves, area of plane regions, and volume of solids all possess natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended to the notion of a covariant derivative of a tensor. Many concepts of analysis and differential equations have been generalized to the setting of Riemannian manifolds.
A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined locally, i.e. for small neighborhoods of points. Any two regular curves are locally isometric. However, the Theorema Egregium of Carl Friedrich Gauss showed that for surfaces, the existence of a local isometry imposes that the Gaussian curvatures at the corresponding points must be the same. In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant. These are the closest analogues to the "ordinary" plane and space considered in Euclidean and non-Euclidean geometry.
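As a concrete illustration of arc length under a Riemannian metric (not from the source), the sketch below numerically integrates ds = √(θ′² + sin²θ · φ′²) dt for a curve on the unit sphere with the round metric:

    import math

    def sphere_arc_length(theta, phi, n=10000, t0=0.0, t1=1.0):
        """Arc length of the curve (theta(t), phi(t)) on the unit sphere,
        by midpoint quadrature with finite-difference derivatives."""
        total, dt, h = 0.0, (t1 - t0) / n, 1e-6
        for i in range(n):
            t = t0 + (i + 0.5) * dt
            dth = (theta(t + h) - theta(t - h)) / (2 * h)
            dph = (phi(t + h) - phi(t - h)) / (2 * h)
            total += math.sqrt(dth ** 2 + math.sin(theta(t)) ** 2 * dph ** 2) * dt
        return total

    # A quarter of the equator has length pi/2 ~ 1.5708:
    print(sphere_arc_length(lambda t: math.pi / 2, lambda t: t * math.pi / 2))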
Pseudo-Riemannian geometry
Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite.
A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity.
Finsler geometry
Finsler geometry has Finsler manifolds as the main object of study. This is a differential manifold with a Finsler metric, that is, a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that:
F(x, my) = m F(x, y) for all (x, y) in TM and all m ≥ 0,
F is infinitely differentiable in TM ∖ {0},
The vertical Hessian of F² is positive definite.
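For example (an illustration, not from the source), every Riemannian metric g induces a Finsler structure by taking the norm of tangent vectors, and the three conditions can be checked directly:

F(x, y) = \sqrt{g_x(y, y)}, \qquad F(x, my) = \sqrt{m^2 g_x(y, y)} = m\,F(x, y) \quad (m \ge 0),

F is smooth away from y = 0, and since F²(x, y) = g_{ij}(x) y^i y^j, the vertical Hessian is 2 g_{ij}(x), which is positive definite.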
Symplectic geometry
Symplectic geometry is the study of symplectic manifolds. An almost symplectic manifold is a differentiable manifold equipped with a smoothly varying non-degenerate skew-symmetric bilinear form on each tangent space, i.e., a nondegenerate 2-form ω, called the symplectic form. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed: dω = 0.
A diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces, so symplectic manifolds necessarily have even dimension. In dimension 2, a symplectic manifold is just a surface endowed with an area form and a symplectomorphism is an area-preserving diffeomorphism. The phase space of a mechanical system is a symplectic manifold, and symplectic manifolds made an implicit appearance already in the work of Joseph Louis Lagrange on analytical mechanics and later in Carl Gustav Jacobi's and William Rowan Hamilton's formulations of classical mechanics.
By contrast with Riemannian geometry, where the curvature provides a local invariant of Riemannian manifolds, Darboux's theorem states that all symplectic manifolds are locally isomorphic. The only invariants of a symplectic manifold are global in nature and topological aspects play a prominent role in symplectic geometry. The first result in symplectic topology is probably the Poincaré–Birkhoff theorem, conjectured by Henri Poincaré and then proved by G.D. Birkhoff in 1912. It claims that if an area preserving map of an annulus twists each boundary component in opposite directions, then the map has at least two fixed points.
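To make the area-preservation statement concrete, the sketch below (illustrative, not from the source) checks numerically that the time-t flow of a harmonic oscillator, which rotates the (q, p) phase plane, has Jacobian determinant 1 and is therefore a symplectomorphism in dimension 2:

    import math

    def flow(q, p, t):
        """Time-t flow of H = (p^2 + q^2)/2: a rotation of the phase plane."""
        return (q * math.cos(t) + p * math.sin(t),
                -q * math.sin(t) + p * math.cos(t))

    def jacobian_det(f, q, p, t, h=1e-6):
        """Numerical Jacobian determinant of the map (q, p) -> f(q, p, t)."""
        dq_q = (f(q + h, p, t)[0] - f(q - h, p, t)[0]) / (2 * h)
        dq_p = (f(q, p + h, t)[0] - f(q, p - h, t)[0]) / (2 * h)
        dp_q = (f(q + h, p, t)[1] - f(q - h, p, t)[1]) / (2 * h)
        dp_p = (f(q, p + h, t)[1] - f(q, p - h, t)[1]) / (2 * h)
        return dq_q * dp_p - dq_p * dp_q

    print(jacobian_det(flow, 0.3, -1.2, 0.7))   # ~ 1.0: area is preserved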
Contact geometry
Contact geometry deals with certain manifolds of odd dimension. It is close to symplectic geometry and like the latter, it originated in questions of classical mechanics. A contact structure on a (2n + 1)-dimensional manifold M is given by a smooth hyperplane field H in the tangent bundle that is as far as possible from being associated with the level sets of a differentiable function on M (the technical term is "completely nonintegrable tangent hyperplane distribution"). Near each point p, a hyperplane distribution is determined by a nowhere vanishing 1-form α, which is unique up to multiplication by a nowhere vanishing function:

H = ker α = {X ∈ TM : α(X) = 0}.

A local 1-form on M is a contact form if the restriction of its exterior derivative to H is a non-degenerate two-form and thus induces a symplectic structure on Hp at each point. If the distribution H can be defined by a global one-form α, then this form is contact if and only if the top-dimensional form

α ∧ (dα)ⁿ

is a volume form on M, i.e. does not vanish anywhere. A contact analogue of the Darboux theorem holds: all contact structures on an odd-dimensional manifold are locally isomorphic and can be brought to a certain local normal form by a suitable choice of the coordinate system.
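A standard worked example (an illustration, not from the source) on R³, where n = 1: the 1-form

\alpha = dz - y\,dx, \qquad d\alpha = dx \wedge dy, \qquad \alpha \wedge d\alpha = dz \wedge dx \wedge dy,

so α ∧ dα vanishes nowhere and is a volume form; hence α is a contact form.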
Complex and Kähler geometry
Complex differential geometry is the study of complex manifolds.
An almost complex manifold is a real manifold M, endowed with a tensor of type (1, 1), i.e. a vector bundle endomorphism (called an almost complex structure)

J : TM → TM, such that J² = −1.

It follows from this definition that an almost complex manifold is even-dimensional.
An almost complex manifold is called complex if N_J = 0, where N_J is a tensor of type (2, 1) related to J, called the Nijenhuis tensor (or sometimes the torsion).
An almost complex manifold is complex if and only if it admits a holomorphic coordinate atlas.
An almost Hermitian structure is given by an almost complex structure J, along with a Riemannian metric g, satisfying the compatibility condition

g(JX, JY) = g(X, Y).

An almost Hermitian structure defines naturally a differential two-form

ω(X, Y) := g(JX, Y).

The following two conditions are equivalent:

dω = 0 and N_J = 0
∇J = 0

where ∇ is the Levi-Civita connection of g. In this case, (J, g) is called a Kähler structure, and a Kähler manifold is a manifold endowed with a Kähler structure. In particular, a Kähler manifold is both a complex and a symplectic manifold. A large class of Kähler manifolds (the class of Hodge manifolds) is given by all the smooth complex projective varieties.
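A minimal numerical illustration (not from the source) on R² ≅ C: multiplication by i gives an almost complex structure J with J² = −1, and together with the Euclidean metric it produces the standard area form as the associated two-form:

    import numpy as np

    # Standard almost complex structure on R^2: J(x, y) = (-y, x), i.e. multiplication by i.
    J = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    print(np.allclose(J @ J, -np.eye(2)))   # True: J^2 = -identity

    # With the Euclidean metric g, omega(X, Y) = g(JX, Y) has matrix J^T g:
    g = np.eye(2)
    omega = J.T @ g
    print(omega)   # [[0, 1], [-1, 0]], the matrix of the area form dx ∧ dy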
CR geometry
CR geometry is the study of the intrinsic geometry of boundaries of domains in complex manifolds.
Conformal geometry
Conformal geometry is the study of the set of angle-preserving (conformal) transformations on a space.
Differential topology
Differential topology is the study of global geometric invariants without a metric or symplectic form.
Differential topology starts from natural operations such as the Lie derivative of natural vector bundles and the de Rham differential of forms. Besides Lie algebroids, Courant algebroids also play an increasingly important role.
Lie groups
A Lie group is a group in the category of smooth manifolds. Besides its algebraic properties, it also enjoys differential-geometric properties. The most obvious construction is that of a Lie algebra, which is the tangent space at the unit element endowed with the Lie bracket between left-invariant vector fields. Besides the structure theory there is also the wide field of representation theory.
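As a small illustration (not from the source) of the Lie algebra construction, the tangent space at the identity of the rotation group SO(3) consists of skew-symmetric matrices, with the bracket realised by the matrix commutator:

    import numpy as np

    def bracket(A, B):
        """Lie bracket in a matrix Lie algebra: the commutator [A, B] = AB - BA."""
        return A @ B - B @ A

    # A basis of so(3), the skew-symmetric 3x3 matrices:
    Lx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
    Ly = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
    Lz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

    print(np.allclose(bracket(Lx, Ly), Lz))   # True: [Lx, Ly] = Lz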
Geometric analysis
Geometric analysis is a mathematical discipline where tools from differential equations, especially elliptic partial differential equations, are used to establish new results in differential geometry and differential topology.
Gauge theory
Gauge theory is the study of connections on vector bundles and principal bundles, and arises out of problems in mathematical physics and physical gauge theories which underpin the standard model of particle physics. Gauge theory is concerned with the study of differential equations for connections on bundles, and the resulting geometric moduli spaces of solutions to these equations as well as the invariants that may be derived from them. These equations often arise as the Euler–Lagrange equations describing the equations of motion of certain physical systems in quantum field theory, and so their study is of considerable interest in physics.
Bundles and connections
The apparatus of vector bundles, principal bundles, and connections on bundles plays an extraordinarily important role in modern differential geometry. A smooth manifold always carries a natural vector bundle, the tangent bundle. Loosely speaking, this structure by itself is sufficient only for developing analysis on the manifold, while doing geometry requires, in addition, some way to relate the tangent spaces at different points, i.e. a notion of parallel transport. An important example is provided by affine connections. For a surface in R3, tangent planes at different points can be identified using a natural path-wise parallelism induced by the ambient Euclidean space, which has a well-known standard definition of metric and parallelism. In Riemannian geometry, the Levi-Civita connection serves a similar purpose. More generally, differential geometers consider spaces with a vector bundle and an arbitrary affine connection which is not defined in terms of a metric. In physics, the manifold may be spacetime and the bundles and connections are related to various physical fields.
Intrinsic versus extrinsic
From the beginning and through the middle of the 19th century, differential geometry was studied from the extrinsic point of view: curves and surfaces were considered as lying in a Euclidean space of higher dimension (for example a surface in an ambient space of three dimensions). The simplest results are those in the differential geometry of curves and differential geometry of surfaces. Starting with the work of Riemann, the intrinsic point of view was developed, in which one cannot speak of moving "outside" the geometric object because it is considered to be given in a free-standing way. The fundamental result here is Gauss's theorema egregium, to the effect that Gaussian curvature is an intrinsic invariant.
The intrinsic point of view is more flexible. For example, it is useful in relativity where space-time cannot naturally be taken as extrinsic. However, there is a price to pay in technical complexity: the intrinsic definitions of curvature and connections become much less visually intuitive.
These two points of view can be reconciled, i.e. the extrinsic geometry can be considered as a structure additional to the intrinsic one. (See the Nash embedding theorem.) In the formalism of geometric calculus both extrinsic and intrinsic geometry of a manifold can be characterized by a single bivector-valued one-form called the shape operator.
Applications
Below are some examples of how differential geometry is applied to other fields of science and mathematics.
In physics, differential geometry has many applications, including:
Differential geometry is the language in which Albert Einstein's general theory of relativity is expressed. According to the theory, the universe is a smooth manifold equipped with a pseudo-Riemannian metric, which describes the curvature of spacetime. Understanding this curvature is essential for the positioning of satellites into orbit around the Earth. Differential geometry is also indispensable in the study of gravitational lensing and black holes.
Differential forms are used in the study of electromagnetism.
Differential geometry has applications to both Lagrangian mechanics and Hamiltonian mechanics. Symplectic manifolds in particular can be used to study Hamiltonian systems.
Riemannian geometry and contact geometry have been used to construct the formalism of geometrothermodynamics which has found applications in classical equilibrium thermodynamics.
In chemistry and biophysics when modelling cell membrane structure under varying pressure.
In economics, differential geometry has applications to the field of econometrics.
Geometric modeling (including computer graphics) and computer-aided geometric design draw on ideas from differential geometry.
In engineering, differential geometry can be applied to solve problems in digital signal processing.
In control theory, differential geometry can be used to analyze nonlinear controllers, particularly geometric control.
In probability, statistics, and information theory, one can interpret various structures as Riemannian manifolds, which yields the field of information geometry, particularly via the Fisher information metric.
In structural geology, differential geometry is used to analyze and describe geologic structures.
In computer vision, differential geometry is used to analyze shapes.
In image processing, differential geometry is used to process and analyse data on non-flat surfaces.
Grigori Perelman's proof of the Poincaré conjecture using the techniques of Ricci flows demonstrated the power of the differential-geometric approach to questions in topology and it highlighted the important role played by its analytic methods.
In wireless communications, Grassmannian manifolds are used for beamforming techniques in multiple antenna systems.
In geodesy, for calculating distances and angles on the mean sea level surface of the Earth, modelled by an ellipsoid of revolution.
| Mathematics | Geometry | null |
8626 | https://en.wikipedia.org/wiki/Dhole | Dhole | The dhole ( ; Cuon alpinus) is a canid native to South, East and Southeast Asia. It is anatomically distinguished from members of the genus Canis in several aspects: its skull is convex rather than concave in profile, it lacks a third lower molar and the upper molars possess only a single cusp as opposed to between two and four. During the Pleistocene, the dhole ranged throughout Asia, with its range also extending into Europe (with a single putative, controversial record also reported from North America) but became restricted to its historical range 12,000–18,000 years ago. It is now extinct in Central Asia, parts of Southeast Asia, and possibly the Korean peninsula and Russia.
Genetic evidence indicates that the dhole was the result of reticulate evolution, emerging from the hybridization between a species closely related to genus Canis and from a lineage closely related to the African wild dog (Lycaon pictus).
The dhole is a highly social animal, living in large clans without rigid dominance hierarchies and containing multiple breeding females. Such clans usually consist of about 12 individuals, but groups of over 40 are known. It is a diurnal pack hunter which preferentially targets large and medium-sized ungulates. In tropical forests, the dhole competes with the tiger (Panthera tigris) and the leopard (Panthera pardus), targeting somewhat different prey species, but still with substantial dietary overlap.
It is listed as Endangered on the IUCN Red List, as populations are decreasing and estimated to comprise fewer than 2,500 mature individuals. Factors contributing to this decline include habitat loss, loss of prey, competition with other species, persecution due to livestock predation, and disease transfer from domestic dogs.
Etymology and naming
The etymology of "dhole" is unclear. The possible earliest written use of the word in English occurred in 1808 by soldier Thomas Williamson, who encountered the animal in Ramghur district, India. He stated that dhole was a common local name for the species. In 1827, Charles Hamilton Smith claimed that it was derived from a language spoken in 'various parts of the East'.
Two years later, Smith connected this word with a Turkish word meaning 'mad, crazy', and erroneously compared it with several Germanic words that are in fact from the Proto-Germanic *dwalaz 'foolish, stupid'. Richard Lydekker wrote nearly 80 years later that the word was not used by the natives living within the species' range. The Merriam-Webster Dictionary theorises that it may have come from the Kannada word tōḷa 'wolf'.
Other English names for the species include Asian wild dog, Asiatic wild dog, Indian wild dog, whistling dog, red dog, and red wolf.
Taxonomy and evolution
Canis alpinus was the binomial name proposed by Peter Simon Pallas in 1811, who described its range as encompassing the upper levels of Udskoi Ostrog in Amurland, towards the eastern side and in the region of the upper Lena River, around the Yenisei River and occasionally crossing into China. This northern Russian range reported by Pallas during the 18th and 19th centuries is "considerably north" of where this species occurs today.
Canis primaevus was a name proposed by Brian Houghton Hodgson in 1833 who thought that the dhole was a primitive Canis form and the progenitor of the domestic dog. Hodgson later took note of the dhole's physical distinctiveness from the genus Canis and proposed the genus Cuon.
The first study on the origins of the species was conducted by paleontologist Erich Thenius, who concluded in 1955 that the dhole was a post-Pleistocene descendant of a golden jackal-like ancestor. The paleontologist Björn Kurtén wrote in his 1968 book Pleistocene Mammals of Europe that the primitive dhole Canis majori (Del Campana, 1913), the remains of which have been found in Villafranchian-era Valdarno, Italy, and in China, was almost indistinguishable from the genus Canis. In comparison, the modern species has greatly reduced molars and the cusps have developed into sharply trenchant points. During the Early Middle Pleistocene there arose both Canis majori stehlini, which was the size of a large wolf, and the early dhole Canis alpinus (Pallas, 1811), which first appeared at Hundsheim and Mosbach in Germany. In the Late Pleistocene era the European dhole (C. a. europaeus) was modern-looking and the transformation of the lower molar into a single cusped, slicing tooth had been completed; however, its size was comparable with that of a wolf. This subspecies became extinct in Europe at the end of the late Würm period, but the species as a whole still inhabits a large area of Asia. The European dhole may have survived up until the early Holocene in the Iberian Peninsula, and what is believed to be dhole remains have been found at Riparo Fredian in northern Italy dated 10,800 years old.
The vast Pleistocene range of this species also included numerous islands in Asia that this species no longer inhabits, such as Sri Lanka, Borneo and possibly Palawan in the Philippines. Middle Pleistocene dhole fossils have also been found in the Matsukae Cave in northern Kyushu Island in western Japan and in the Lower Kuzuu fauna in Tochigi Prefecture in Honshu Island, east Japan. Dhole fossils from the Late Pleistocene dated to about 10,700 years before present are known from the Luobi (Luobi-Dong) Cave in Hainan Island in south China, where they no longer exist. Additionally, canid fossils possibly belonging to the dhole have been excavated from Dajia River in Taichung County, Taiwan.
A single record of the dhole is known from North America. This consists of a jaw fragment and teeth of Late Pleistocene age found in San Josecito Cave in northeast Mexico, dating to around 27,000–11,000 years ago. Other researchers have either considered this record as "insufficient" or suggested that further corroboration is required for the definitive taxonomic attribution of these specimens.
Dholes are also known from the Middle and Late Pleistocene fossil record of Europe. In 2021, analyses of the mitochondrial genomes extracted from the fossil remains of two extinct European dhole specimens from the Jáchymka cave, Czech Republic, dated 35,000–45,000 years old, indicated that these were genetically basal to modern dholes and possessed much greater genetic diversity.
The dhole's distinctive morphology has been a source of much confusion in determining the species' systematic position among the Canidae. George Simpson placed the dhole in the subfamily Simocyoninae alongside the African wild dog and the bush dog, on account of all three species' similar dentition. Subsequent authors, including Juliet Clutton-Brock, noted greater morphological similarities to canids of the genera Canis, Dusicyon and Alopex than to either Speothos or Lycaon, with any resemblance to the latter two being due to convergent evolution.
Some authors consider the extinct Canis subgenus Xenocyon as ancestral to both the genus Lycaon and the genus Cuon. Subsequent studies on the canid genome revealed that the dhole and African wild dog are closely related to members of the genus Canis. This closeness to Canis may have been confirmed in a menagerie in Madras, where according to zoologist Reginald Innes Pocock there is a record of a dhole that interbred with a golden jackal. DNA sequencing of the Sardinian dhole (Cynotherium sardous), an extinct small canid species formerly native to the island of Sardinia in the Mediterranean, and which has often been suggested to have descended from Xenocyon, has found that it is most closely related to the living dhole among canines.
Admixture with the African wild dog
In 2018, whole genome sequencing was used to compare all members (apart from the black-backed and side-striped jackals) of the genus Canis, along with the dhole and the African wild dog (Lycaon pictus). There was strong evidence of ancient genetic admixture between the dhole and the African wild dog. Today, their ranges are remote from each other; however, during the Pleistocene era the dhole could be found as far west as Europe. The study proposes that the dhole's distribution may have once included the Middle East, from where it may have admixed with the African wild dog in North Africa. However, there is no evidence of the dhole having existed in the Middle East or North Africa, though the Lycaon was present in Europe during the Early Pleistocene, with its last record in the region dating to 830,000 years ago. Genetic evidence from the Sardinian dhole suggests that both Sardinian and modern dholes (which are estimated to have split from each other around 900,000 years ago) share ancestry from the Lycaon lineage, but this ancestry is significantly higher in modern dholes than in the Sardinian dhole.
Subspecies
Historically, up to ten subspecies of dholes have been recognised; seven are currently recognised.
However, studies on the dhole's mtDNA and microsatellite genotype showed no clear subspecific distinctions. Nevertheless, two major phylogeographic groupings were discovered in dholes of the Asian mainland, which likely diverged during a glaciation event. One population extends from South, Central and North India (south of the Ganges) into Myanmar, and the other extends from India north of the Ganges into northeastern India, Myanmar, Thailand and the Malaysian Peninsula. The origin of dholes in Sumatra and Java remains unclear, as they show greater relatedness to dholes in India, Myanmar and China than to those in nearby Malaysia. However, the Canid Specialist Group of the International Union for Conservation of Nature (IUCN) states that further research is needed because all of the samples were from the southern part of this species' range and the Tien Shan subspecies has distinct morphology.
In the absence of further data, the researchers involved in the study speculated that Javan and Sumatran dholes could have been introduced to the islands by humans. Fossils of dhole from the early Middle Pleistocene have been found in Java.
Characteristics
The general tone of the dhole's fur is reddish, with the brightest hues occurring in winter. In the winter coat, the back is clothed in a saturated rusty-red to reddish colour with brownish highlights along the top of the head, neck and shoulders. The throat, chest, flanks, and belly and the upper parts of the limbs are less brightly coloured, and are more yellowish in tone. The lower parts of the limbs are whitish, with dark brownish bands on the anterior sides of the forelimbs. The muzzle and forehead are greyish-reddish. The tail is very luxuriant and fluffy, and is mainly of a reddish-ocherous colour, with a dark brown tip. The summer coat is shorter, coarser and darker. The dorsal and lateral guard hairs in adults measure in length. Dholes in the Moscow Zoo moult once a year from March to May. A melanistic individual was recorded in the northern Coimbatore Forest Division in Tamil Nadu.
The dhole has a wide and massive skull with a well-developed sagittal crest, and its masseter muscles are highly developed compared to other canid species, giving the face an almost hyena-like appearance. The rostrum is shorter than that of domestic dogs and most other canids. It has six rather than seven lower molars.
The upper molars are weak, being one third to one half the size of those of wolves and have only one cusp as opposed to between two and four, as is usual in canids, an adaptation thought to improve shearing ability and thus speed of prey consumption. This may allow dholes to compete more successfully with kleptoparasites. In terms of size, dholes average about in length (excluding a long tail), and stand around at the shoulders. Adult females can weigh , while the slightly larger male may weigh . The mean weight of adults from three small samples was .
In appearance, the dhole has been variously described as combining the physical characteristics of the gray wolf and the red fox, and as being "cat-like" on account of its long backbone and slender limbs.
Distribution and habitat
Historically, the dhole lived in Singapore and throughout Central Asia including Afghanistan, Kyrgyzstan, Kazakhstan, Mongolia, Tajikistan and Uzbekistan, though it is now considered regionally extinct in all of these countries. Historical records in South Korea from the Veritable Records of the Joseon Dynasty also indicate that the dhole once inhabited Yangju in Gyeonggi Province, but it is now also extinct in South Korea, with the last known capture reports in 1909 and 1921 from Yeoncheon in Gyeonggi Province. The current presence of dholes in North Korea and Pakistan is considered uncertain. The dhole also once inhabited the alpine steppes extending from Kashmir into the Ladakh area, though it disappeared from 60% of its historic range in India during the past century. In India, Myanmar, Indochina, Indonesia and China, it prefers forested areas in alpine zones and is occasionally sighted in plains regions.
In the Bek-Tosot Conservancy of southern Kyrgyzstan, the presence of dholes was considered likely based on genetic samples collected in 2019. This was the first record of dholes in the country in almost three decades.
The dhole might still be present in the Tunkinsky National Park in extreme southern Siberia near Lake Baikal. It possibly still lives in the Primorsky Krai province in far eastern Russia, where it was considered a rare and endangered species in 2004, with unconfirmed reports in the Pikthsa-Tigrovy Dom protected forest area; no sightings have been reported in other areas since the late 1970s.
No other recent reports of dholes in Russia have been confirmed, so the IUCN considers them extinct there. However, the dhole might be present in the eastern Sayan Mountains and in the Transbaikal region; it has been sighted in Tofalaria in the Irkutsk Oblast, the Republic of Buryatia and Zabaykalsky Krai.
One pack was sighted in the Qilian Mountains in 2006.
In 2011 to 2013, local government officials and herders reported the presence of several dhole packs at elevations of near Taxkorgan Nature Reserve in the Xinjiang Autonomous Region. Several packs and a female adult with pups were also recorded by camera traps at elevations of around in Yanchiwan National Nature Reserve in the northern Gansu Province in 2013–2014.
Dholes have been also reported in the Altyn-Tagh Mountains.
In China's Yunnan Province, dholes were recorded in Baima Xueshan Nature Reserve in 2010–2011. Dhole samples were obtained in Jiangxi Province in 2013.
Confirmed records by camera-trapping since 2008 have occurred in southern and western Gansu province, southern Shaanxi province, southern Qinghai province, southern and western Yunnan province, western Sichuan province, the southern Xinjiang Autonomous Region and in the Southeastern Tibet Autonomous Region. There are also historical records of dhole dating to 1521–1935 in Hainan Island, but the species is no longer present and is estimated to have become extinct around 1942.
The dhole occurs in most of India south of the Ganges, particularly in the Central Indian Highlands and the Western and Eastern Ghats. It is also present in Arunachal Pradesh, Assam, Meghalaya and West Bengal and in the Indo-Gangetic Plain's Terai region. Dhole populations in the Himalayas and northwest India are fragmented.
In 2011, dhole packs were recorded by camera traps in the Chitwan National Park. Its presence was confirmed in the Kanchenjunga Conservation Area in 2011 by camera traps. In February 2020, dholes were sighted in the Vansda National Park, with camera traps confirming the presence of two individuals in May of the same year. This was the first confirmed sighting of dholes in Gujarat since 1970.
In Bhutan, the dhole is present in Jigme Dorji National Park.
In Bangladesh, it inhabits forest reserves in the Sylhet area, as well as the Chittagong Hill Tracts in the southeast. Camera-trap photos taken in the Chittagong Hill Tracts in 2016 showed the continued presence of the dhole. These regions probably do not harbour a viable population, as mostly small groups or solitary individuals were sighted.
In Myanmar, the dhole is present in several protected areas. In 2015, dholes and tigers were recorded by camera-traps for the first time in the hill forests of Karen State.
Its range is highly fragmented in the Malaysian Peninsula, Sumatra, Java, Vietnam and Thailand, with the Vietnamese population considered to be possibly extinct. In 2014, camera trap videos in the montane tropical forests of the Kerinci Seblat National Park in Sumatra revealed its continued presence. A camera trapping survey in the Khao Ang Rue Nai Wildlife Sanctuary in Thailand from January 2008 to February 2010 documented one healthy dhole pack. In northern Laos, dholes were studied in the Nam Et-Phou Louey National Protected Area, where camera-trap surveys from 2012 to 2017 continued to record them.
In Vietnam, dholes were sighted only in Pu Mat National Park in 1999, in Yok Don National Park in 2003 and 2004; and in Ninh Thuan Province in 2014.
A disjunct dhole population was reported in the area of Trabzon and Rize in northeastern Turkey near the border with Georgia in the 1990s. This report was not considered to be reliable. A single individual was claimed to have been shot in 2013 in the nearby Kabardino-Balkaria Republic of Russia in the central Caucasus; its remains were analysed in May 2015 by a biologist from the Kabardino-Balkarian State University, who concluded that the skull was indeed that of a dhole. In August 2015, researchers from the National Museum of Natural History and the Karadeniz Technical University started an expedition to track and document a possible Turkish population of dholes. In October 2015, they concluded that two skins of alleged dholes in Turkey probably belonged to dogs, pending DNA analysis of samples from the skins, and, having analysed photos of the skull of the alleged dhole in the Kabardino-Balkaria Republic of Russia, they concluded it was a grey wolf.
Ecology and behaviour
Dholes produce whistles resembling the calls of red foxes, sometimes rendered as coo-coo. How this sound is produced is unknown, though it is thought to help in coordinating the pack when travelling through thick brush. When attacking prey, they emit screaming KaKaKaKAA sounds. Other sounds include whines (food soliciting), growls (warning), screams, chatterings (both of which are alarm calls) and yapping cries. In contrast to wolves, dholes do not howl or bark.
Dholes have a complex body language. Friendly or submissive greetings are accompanied by horizontal lip retraction and the lowering of the tail, as well as licking. Playful dholes open their mouths with their lips retracted and their tails held in a vertical position whilst assuming a play bow. Aggressive or threatening dholes pucker their lips forward in a snarl and raise the hairs on their backs, as well as keep their tails horizontal or vertical. When afraid, they pull their lips back horizontally with their tails tucked and their ears flat against the skull.
Social and territorial behaviour
Dholes are more social than gray wolves, and have less of a dominance hierarchy, as seasonal scarcity of food is not a serious concern for them. In this manner, they closely resemble African wild dogs in social structure. They live in clans rather than packs, as the latter term refers to a group of animals that always hunt together. In contrast, dhole clans frequently break into small packs of three to five animals, particularly during the spring season, as this is the optimal number for catching fawns. Dominant dholes are hard to identify, as they do not engage in dominance displays as wolves do, though other clan members will show submissive behaviour toward them. Intragroup fighting is rarely observed.
Dholes are far less territorial than wolves, with pups from one clan often joining another without trouble once they mature sexually. Clans typically number 5 to 12 individuals in India, though clans of 40 have been reported. In Thailand, clans rarely exceed three individuals. Unlike other canids, there is no evidence of dholes using urine to mark their territories or travel routes. When urinating, dholes, especially males, may raise one hind leg or both, resulting in a handstand. Handstand urination is also seen in bush dogs (Speothos venaticus) and domestic dogs. They may defecate in conspicuous places, though a territorial function is unlikely, as faeces are mostly deposited within the clan's territory rather than the periphery. Faeces are often deposited in what appear to be communal latrines. They do not scrape the earth with their feet, as other canids do, to mark their territories.
Denning
Four kinds of den have been described: simple earth dens with one entrance (usually remodeled striped hyena or porcupine dens); complex cavernous earth dens with more than one entrance; simple cavernous dens excavated under or between rocks; and complex cavernous dens with several other dens in the vicinity, some of which are interconnected. Dens are typically located under dense scrub or on the banks of dry rivers or creeks. The entrance to a dhole den can be almost vertical, with a sharp turn three to four feet down. The tunnel opens into an antechamber, from which extends more than one passage. Some dens may have up to six entrances leading up to of interconnecting tunnels. These "cities" may be developed over many generations of dholes, and are shared by the clan females when raising young together. Like African wild dogs and dingoes, dholes will avoid killing prey close to their dens.
Reproduction and development
In India, the mating season occurs between mid-October and January, while captive dholes in the Moscow Zoo breed mostly in February. Unlike wolf packs, dhole clans may contain more than one breeding female. More than one female dhole may den and rear their litters together in the same den. During mating, the female assumes a crouched, cat-like position. There is no copulatory tie characteristic of other canids when the male dismounts. Instead, the pair lie on their sides facing each other in a semicircular formation. The gestation period lasts 60–63 days, with litter sizes averaging four to six pups. Their growth rate is much faster than that of wolves, being similar in rate to that of coyotes.
The hormone metabolites of five males and three females kept in Thai zoos were studied. The breeding males showed an increased level of testosterone from October to January. The oestrogen level of captive females increased for about two weeks in January, followed by an increase of progesterone. They displayed sexual behaviours during the oestrogen peak of the females.
Pups are suckled at least 58 days. During this time, the pack feeds the mother at the den site. Dholes do not use rendezvous sites to meet their pups as wolves do, though one or more adults will stay with the pups at the den while the rest of the pack hunts. Once weaning begins, the adults of the clan will regurgitate food for the pups until they are old enough to join in hunting. They remain at the den site for 70–80 days. By the age of six months, pups accompany the adults on hunts and will assist in killing large prey such as sambar by the age of eight months. Maximum longevity in captivity is 15–16 years.
Hunting behaviour
Before embarking on a hunt, clans go through elaborate prehunt social rituals involving nuzzling, body rubbing and mounting. Dholes are primarily diurnal hunters, hunting in the early hours of the morning. They rarely hunt at night, except on moonlit nights, indicating they greatly rely on sight when hunting. They can chase their prey for many hours. During a pursuit, one or more dholes takes over chasing the prey, while the rest of the pack keeps up at a steadier pace behind, taking over once the other group tires. Most chases are short, lasting only . When chasing fleet-footed prey, they run at a pace of . Dholes frequently drive their prey into water bodies, where the targeted animal's movements are hindered.
Once large prey is caught, one dhole grabs the prey's nose, while the rest of the pack pulls the animal down by the flanks and hindquarters. They do not use a killing bite to the throat, though they occasionally blind their prey by attacking the eyes. Serows are among the only ungulate species capable of effectively defending themselves against dhole attacks, due to their thick, protective coats and short, sharp horns capable of easily impaling dholes. Dholes tear open their prey's flanks and disembowel it, eating the heart, liver, lungs and some sections of the intestines; the stomach and rumen are usually left untouched. Small prey is usually killed within two minutes, while large stags may take 15 minutes to die. Once prey is secured, dholes tear off pieces of the carcass and eat in seclusion. They give the pups access to a kill and are generally tolerant of scavengers at their kills. Both mother and young are provided with regurgitated food by other pack members.
Feeding ecology
Prey animals in India include chital, sambar deer, muntjac, mouse deer, barasingha, wild boar, gaur, water buffaloes, banteng, cattle, nilgai, goats, Indian hares, Himalayan field rats and langurs. There is one record of a pack bringing down an Indian elephant calf in Assam despite the mother's desperate defense, which inflicted heavy losses on the pack. In Kashmir, dholes prey on markhor; in Myanmar, on thamin; in Sumatra and the Malay Peninsula, on Malayan tapir and Sumatran serow; and in Java, on Javan rusa. In the Tian Shan and Tarbagatai Mountains, dholes prey on Siberian ibexes, arkhar, roe deer, Caspian red deer and wild boar. In the Altai and Sayan Mountains, they prey on musk deer and reindeer. In eastern Siberia, they prey on roe deer, Manchurian wapiti, wild pig, musk deer and reindeer, while in Primorye they feed on sika deer and goral. In Mongolia, they prey on argali and rarely Siberian ibex.
Like African wild dogs, but unlike wolves, dholes are not known to actively hunt people. They are known to eat insects and lizards. Dholes eat fruit and vegetable matter more readily than other canids. In captivity, they eat various kinds of grasses, herbs and leaves, seemingly for pleasure rather than just when ill. In summertime in the Tian Shan Mountains, dholes eat large quantities of mountain rhubarb. Although opportunistic, dholes have a seeming aversion to hunting cattle and their calves. Livestock predation by dholes has been a problem in Bhutan since the late 1990s, as domestic animals are often left outside to graze in the forest, sometimes for weeks at a time. Livestock stall-fed at night and grazed near homes are never attacked. Oxen are killed more often than cows, probably because they are given less protection.
Enemies and competitors
In some areas, dholes are sympatric with tigers and leopards. Competition between these species is mostly avoided through differences in prey selection, although there is still substantial dietary overlap. Dholes and leopards typically target medium-sized animals, while tigers select heavier prey. Other characteristics of the prey, such as sex, arboreality and aggressiveness, may also play a role in prey selection. For example, dholes preferentially select male chital, whereas leopards kill both sexes more evenly (and tigers prefer larger prey altogether); dholes and tigers kill langurs rarely compared to leopards, owing to the leopards' greater arboreality, while leopards kill wild boar infrequently because this relatively light predator cannot tackle aggressive prey of comparable weight.
Tigers are dangerous opponents for dholes, as they have sufficient strength to kill a dhole with a single paw strike. Dhole packs are smaller in areas with higher tiger densities, because tigers both kill dholes directly and steal their kills. This kleptoparasitism leads dholes to prefer hunting smaller animals, since more of a small carcass can be eaten before a tiger arrives to steal it. Direct predation can lead to lower reproductive and recruitment rates, lower hunting success rates and less food for the pups when a helper is killed, and potentially pack destabilization if one member of the breeding pair is killed.
Dhole packs may steal leopard kills, while leopards may kill dholes if they encounter them singly or in pairs. There are numerous records of leopards being treed by dholes. Dholes were once thought to be a major factor in reducing Asiatic cheetah populations, though this is doubtful, as cheetahs live in open areas as opposed to forested areas favoured by dholes. Since leopards are smaller than tigers and are more likely to hunt dholes, dhole packs tend to react more aggressively toward them than they do towards tigers.
Dhole packs occasionally attack Asiatic black bears, snow leopards and sloth bears. When attacking bears, dholes will attempt to prevent them from seeking refuge in caves and lacerate their hindquarters.
Although dholes are usually antagonistic toward wolves, the two species may hunt and feed alongside one another.
The dhole is also sympatric with the Indian wolf (Canis lupus pallipes) in parts of its range. There is at least one record of a lone wolf associating with a pair of dholes in Debrigarh Wildlife Sanctuary, and two observations in Satpura Tiger Reserve. They infrequently associate in mixed groups with golden jackals. Domestic dogs may kill dholes, though they will feed alongside them on occasion.
Diseases and parasites
Dholes are vulnerable to a number of different diseases, particularly in areas where they are sympatric with other canid species. Infectious pathogens such as Toxocara canis are present in their faeces. They may suffer from rabies, canine distemper, mange, trypanosomiasis, canine parvovirus and endoparasites such as cestodes and roundworms.
Threats
The dhole is estimated to have lost about 60% of its historical range in India to habitat loss. The fragmentation and isolation of dhole populations has resulted in inbreeding and the Allee effect, which threaten the species' long-term viability.
Some ethnic groups, like the Kuruba and Mon Khmer-speaking tribes, appropriate dhole kills, and some Indian villagers welcome the dhole for this reason. Dholes were persecuted throughout India for bounties until they were given protection by the Wildlife Protection Act of 1972. Methods used for dhole hunting included poisoning, snaring, shooting and clubbing at den sites. Native Indian people killed dholes primarily to protect livestock, while British sport hunters during the British Raj did so under the conviction that dholes were responsible for drops in game populations. Persecution of dholes still occurs with varying degrees of intensity according to the region. Bounties paid for dholes used to be 25 rupees, though this was reduced to 20 in 1926 after presented dhole carcasses became too numerous for the established reward to be maintained. The Indochinese dhole population suffers heavily from nonselective hunting techniques such as snaring.
The fur trade does not pose a significant threat to the dhole. The people of India do not eat dhole flesh, and their fur is not considered overly valuable. Due to their rarity, dholes were never harvested for their skins in large numbers in the Soviet Union, and their pelts were sometimes accepted as dog or wolf pelts (labeled as "half wolf" in the latter case). The winter fur was prized by the Chinese, who bought dhole pelts in Ussuriysk during the late 1860s for a few silver rubles. By the early 20th century, dhole pelts reached eight rubles in Manchuria. In Semirechye, fur coats made from dhole skin were considered the warmest, but were very costly.
Conservation
In India, the dhole is protected under Schedule 2 of the Wildlife Protection Act, 1972. The creation of reserves under Project Tiger provided some protection for dhole populations sympatric with tigers. In 2014, the Indian government sanctioned its first dhole conservation breeding centre at the Indira Gandhi Zoological Park (IGZP) in Visakhapatnam. The dhole has been protected in Russia since 1974, though it is vulnerable to poison left out for wolves. In China, the animal is listed as a category II protected species under the Chinese wildlife protection act of 1988. In Cambodia, the dhole is protected from all hunting, while conservation laws in Vietnam limit extraction and utilisation.
In 2016, the Korean company Sooam Biotech was reported to be attempting to clone the dhole using dogs as surrogate mothers to help conserve the species.
In culture and literature
Three dhole-like animals are featured on the coping stone of the Bharhut stupa dating from 100 BC. They are shown waiting by a tree, with a woman or spirit trapped up it, a scene reminiscent of dholes treeing tigers. The animal's fearsome reputation in India is reflected by the number of pejorative names it possesses in Hindi, which variously translate as "red devil", "devil dog", "jungle devil", or "hound of Kali".
Leopold von Schrenck had trouble obtaining dhole specimens during his exploration of Amurland, as the local Gilyaks greatly feared the species. This fear and superstition was not, however, shared by neighbouring Tungusic peoples. It was speculated that this differing attitude towards the dhole was due to the Tungusic peoples' more nomadic, hunter-gatherer lifestyle.
Dholes appear in Rudyard Kipling's Red Dog, where they are portrayed as aggressive and bloodthirsty animals which descend from the Deccan Plateau into the Seeonee Hills inhabited by Mowgli and his adopted wolf pack to cause carnage among the jungle's denizens. They are described as living in packs numbering hundreds of individuals, and that even Shere Khan and Hathi make way for them when they descend into the jungle. The dholes are despised by the wolves because of their destructiveness, their habit of not living in dens and the hair between their toes. With Mowgli and Kaa's help, the Seeonee wolf pack manages to wipe out the dholes by leading them through bee hives and torrential waters before finishing off the rest in battle.
Japanese author Uchida Roan wrote A Dog's Tale in 1901 as a nationalistic critique of the declining popularity of indigenous dog breeds, which he asserted were descended from the dhole.
A fictional version of the dhole, imbued with supernatural abilities, appears in a sixth-season episode of TV series The X-Files, titled "Alpha".
In China, the dhole was widely known throughout history and mythology. One notable legendary creature is the Yazi, believed to be part-dhole, part-dragon. In modern times, however, the Chinese word for dhole is often confused with 'jackal' or 'wolf', resulting in frequent mistranslations of dholes as jackals or wolves.
Dholes also appear as enemies in the video game Far Cry 4, alongside other predators such as the Bengal tiger, honey badger, snow leopard, clouded leopard, Tibetan wolf and Asian black bear. They can be found hunting the player and other NPCs across the map, but are easily killed, being one of the weakest enemies in the game. They once again appear in the video game Far Cry Primal, where they play similar roles as their counterparts in the previous game, but can now also be tamed and used in combat by Takkar, the main protagonist of the game.
Tameability
Brian Houghton Hodgson kept captured dholes in captivity and found that, with the exception of one animal, they remained shy and vicious even after 10 months. Adult dholes are nearly impossible to tame, though pups are docile and can even be allowed to play with domestic dog pups until they reach early adulthood. A dhole may have been presented as tribute to the Akkadian king Ibbi-Sin, referred to in the inscription as the "red dog of Meluhha" (the Indus Valley Civilisation of Pakistan), suggesting the dhole once had a greater range.
| Biology and health sciences | Canines | Animals |
8637 | https://en.wikipedia.org/wiki/Door | Door | A door is a hinged or otherwise movable barrier that allows ingress (entry) into and egress (exit) from an enclosure. The created opening in the wall is a doorway or portal. A door's essential and primary purpose is to provide security by controlling access to the doorway (portal). Conventionally, it is a panel that fits into the doorway of a building, room, or vehicle. Doors are generally made of a material suited to the door's task. They are commonly attached by hinges, but can move by other means, such as slides or counterbalancing.
The door may be able to move in various ways (at angles away from the doorway/portal, by sliding on a plane parallel to the frame, by folding in angles on a parallel plane, or by spinning along an axis at the center of the frame) to allow or prevent ingress or egress. In most cases, a door's interior matches its exterior side. But in other cases (e.g., a vehicle door) the two sides are radically different.
Many doors incorporate locking mechanisms to ensure that only some people can open them (such as with a key). Doors may have devices such as knockers or doorbells by which people outside announce their presence. Apart from providing access into and out of a space, doors may have the secondary functions of ensuring privacy by preventing unwanted attention from outsiders, of separating areas with different functions, of allowing light to pass into and out of a space, of controlling ventilation or air drafts so that interiors may be more effectively heated or cooled, of dampening noise, and of blocking the spread of fire.
Doors can have aesthetic, symbolic, or ritualistic purposes. Receiving the key to a door can signify a change in status from outsider to insider. Doors and doorways frequently appear in literature and the arts with metaphorical or allegorical import as a portent of change.
History
The earliest recorded doors appear in the paintings of Egyptian tombs, which show them as single or double doors, each of a single piece of wood. People may have believed these were doors to the afterlife, and some include designs of the afterlife. In Egypt, where the climate is intensely dry, doors did not need framing against warping, but in other countries framed doors were required. According to Vitruvius (iv. 6), this was done with stiles and rails (see: Frame and panel), the enclosed panels being filled with tympana set in grooves in the stiles and rails. The stiles were the vertical boards, one of which, tenoned or hinged, is known as the hanging stile, the other as the middle or meeting stile. The horizontal cross pieces are the top rail, bottom rail, and middle or intermediate rails.
The most ancient doors were made of timber, such as the olive-wood doors described in the Biblical account of King Solomon's temple (I Kings vi. 31–35), which were carved and overlaid with gold. The doors that Homer mentions appear to have been cased in silver or brass. Besides olive wood, elm, cedar, oak and cypress were used. Two doors over 5,000 years old have been found by archaeologists near Zürich, Switzerland.
Ancient doors were hung by pintles at the top and bottom of the hanging stile, which worked in sockets in the lintel and sill, the latter in some hard stone such as basalt or granite. Those Hilprecht found at Nippur, dating from 2000 BC, were in dolerite. The tenons of the gates at Balawat were sheathed with bronze (now in the British Museum). These doors or gates were hung in two leaves and encased with bronze bands or strips covered with repoussé decoration of figures. The hanging stiles were considerably thicker than the door leaves themselves, and sheathings of various sizes in bronze show that this was a universal method adopted to protect the wood pivots. In the Hauran in Syria, where timber is scarce, the doors were made of stone; one, now in the British Museum, has a band on the meeting stile showing that it was one of the leaves of a double door. At Kuffeir near Bostra in Syria, Burckhardt found stone doors serving as the entrance doors of the town. In Etruria many stone doors are referred to by Dennis.
Ancient Greek and Roman doors were either single doors, double doors, triple doors, sliding doors or folding doors, in the last case the leaves were hinged and folded back. In the tomb of Theron at Agrigentum there is a single four-panel door carved in stone. In the Blundell collection is a bas-relief of a temple with double doors, each leaf with five panels. Among existing examples, the bronze doors in the church of SS. Cosmas and Damiano, in Rome, are important examples of Roman metal work of the best period; they are in two leaves, each with two panels, and are framed in bronze. Those of the Pantheon are similar in design, with narrow horizontal panels in addition, at the top, bottom and middle. Two other bronze doors of the Roman period are in the Lateran Basilica.
The Greek scholar Heron of Alexandria created the earliest known automatic door in the first century AD during the era of Roman Egypt. The first foot-sensor-activated automatic door was made in China during the reign of Emperor Yang of Sui (r. 604–618), who had one installed for his royal library. Gates powered by water featured in illustrations of the automatons of the Arab inventor Al-Jazari.
Copper and its alloys were integral in medieval architecture. The doors of the church of the Nativity at Bethlehem (6th century) are covered with plates of bronze, cut out in patterns. Those of Hagia Sophia at Constantinople, of the eighth and ninth century, are wrought in bronze, and the west doors of the cathedral of Aix-la-Chapelle (9th century), of similar manufacture, were probably brought from Constantinople, as also some of those in St. Mark's, Venice. The bronze doors on the Aachen Cathedral in Germany date back to about 800 AD. Bronze baptistery doors at the Cathedral of Florence were completed in 1423 by Ghiberti. (For more information, see: Copper in architecture).
Of the 11th and 12th centuries there are numerous examples of bronze doors, the earliest being one at Hildesheim, Germany (1015). The Hildesheim design affected the concept of the Gniezno door in Poland. Of others in South Italy and Sicily, the following are the finest: in Sant'Andrea, Amalfi (1060); Salerno (1099); Canosa di Puglia (1111); Troia, two doors (1119 and 1124); Ravello (1179), by Barisano of Trani, who also made doors for Trani cathedral; and in Monreale and Pisa cathedrals, by Bonano of Pisa. In all these cases the hanging stile had pivots at the top and bottom. The exact period when builders moved from the pivot to the hinge is unknown, but the change apparently brought about another method of strengthening and decorating doors: wrought-iron bands of various designs. As a rule, three bands with ornamental work constitute the hinges, with rings outside the hanging stiles that fit on vertical tenons set into the masonry or wooden frame. There is an early example of the 12th century in Lincoln. In France, the metalwork of the doors of Notre Dame at Paris is a beautiful example, but many others exist throughout France and England.
In Italy, celebrated doors include those of the Battistero di San Giovanni (Florence), which are all in bronze, including the door frames. The modeling of the figures, birds and foliage of the south doorway, by Andrea Pisano (1330), and of the east doorway by Ghiberti (1425–1452), are of great beauty. In the north door (1402–1424), Ghiberti adopted the same scheme of design for the paneling and figure subjects as Andrea Pisano, but in the east door the rectangular panels are all filled with bas-reliefs that illustrate Scripture subjects and innumerable figures. These may be the "Gates of Paradise" of which Michelangelo speaks.
Doors of the mosques in Cairo were of two kinds: those externally cased with sheets of bronze or iron, cut in decorative patterns, and incised or inlaid, with bosses in relief; and those of wood-framed with interlaced square and diamond designs. The latter design is Coptic in origin. The doors of the palace at Palermo, which were made by Saracenic workmen for the Normans, are fine examples in good preservation. A somewhat similar decorative class of door is found in Verona, where the edges of the stiles and rails are beveled and notched.
In the Renaissance period, Italian doors are quite simple, their architects trusting more to the doorways for effect; but in France and Germany the contrary is the case, the doors being elaborately carved, especially in the Louis XIV and Louis XV periods, and sometimes with architectural features such as columns and entablatures with pediment and niches, the doorway being in plain masonry. While in Italy the tendency was to give scale by increasing the number of panels, in France the contrary seems to have been the rule; and one of the great doors at Fontainebleau, which is in two leaves, is entirely carried out as if consisting of one great panel only.
The earliest Renaissance doors in France are those of the cathedral of St. Sauveur at Aix (1503). In the lower panels there are figures in Gothic niches, and in the upper panels a double range of niches with smaller figures under canopies, all carved in cedar. The south door of Beauvais Cathedral is in some respects the finest in France; the upper panels are carved in high relief with figure subjects and canopies over them. The doors of the church at Gisors (1575) are carved with figures in niches subdivided by classic pilasters superimposed. In St. Maclou at Rouen are three magnificently carved doors; those by Jean Goujon have figures in niches on each side, and others in a group of great beauty in the center. The other doors, probably about forty to fifty years later, are enriched with bas-reliefs, landscapes, figures and elaborate interlaced borders.
NASA's Vehicle Assembly Building at the Kennedy Space Center contains the four largest doors in the world. The building was originally constructed for the assembly of the Apollo missions' Saturn vehicles and was then used to support Space Shuttle operations.
The oldest door in England can be found in Westminster Abbey and dates from 1050. In England in the 17th century the door panels were raised with bolection or projecting moldings, sometimes richly carved, around them; in the 18th century the moldings worked on the stiles and rails were carved with the egg-and-dart ornament.
Design and styles
There are many kinds of doors, with different purposes:
The most common type is the single-leaf door, which consists of a single rigid panel that fills the doorway. There are many variations on this basic design, such as the double-leaf door or double door and French windows, which have two adjacent independent panels hinged on each side of the doorway.
A half door or Dutch door or stable door is divided in half horizontally. Traditionally the top half opens so a worker can feed a horse or other animal while the bottom half remains closed to keep the animal inside. This style of door has been adapted for homes.
Saloon doors are a pair of lightweight swing doors often found in public bars, and especially associated with the American west. Saloon doors, also known as cafe doors, often use bidirectional hinges that close the door regardless of which direction it opens by incorporating springs. Saloon doors that only extend from knee-level to chest-level are known as batwing doors.
A blind door, Gibb door, or jib door has no visible trim or operable components. It blends with the adjacent wall in all finishes, to appear as part of the wall—a disguised door.
A French door consists of a frame around one or more transparent or translucent panels (called lights or lites) that may be installed singly, in matching pairs, or even in series. A matching pair of these doors is called a French window, as it resembles a door-height casement window. When a pair of French doors is used as a French window, the application does not generally include a central mullion (as do some casement window pairs), thus allowing a wider unobstructed opening. The frame typically requires a weather strip at floor level and where the doors meet to prevent water ingress. An espagnolette bolt may let the head and foot of each door be secured in one movement. The slender window joinery maximizes light into the room and minimizes the visual impact of the doorway joinery when considered externally. The doors of a French window often open outward onto a balconet, balcony, porch, or terrace and they may provide an entrance to a garden.
A louvered door has fixed or movable wooden fins (often called slats or louvers) which permit open ventilation while preserving privacy and preventing the passage of light to the interior. Being relatively weak structures, they are most commonly used for wardrobes and drying rooms, where security is of less importance than good ventilation, although a very similar structure is commonly used to form window shutters. Double louvred doors were introduced at Seagate, built in Florida in 1929 by Gwendolyn and Powel Crosley; they provided the desired circulation of air with an added degree of privacy, since it is impossible to see through the fins in any direction.
A composite door is a single-leaf door that can be solid or glazed, and is usually filled with high-density foam. In the United Kingdom, composite doors are commonly certified to BS PAS 23/24 and compliant with Secured by Design, an official UK police initiative.
A steel security door is one which is made from strong steel, often for use on vaults and safe rooms to withstand attack. These may also be fitted with wooden outer panels to resemble standard internal and external doors.
A flush door is a completely smooth door, having plywood or MDF fixed over a light timber frame, the hollow parts of which are often filled with a cardboard core material. Skins can also be made out of hardboards, the first of which was invented by William H Mason in 1924. Called Masonite, its construction involved pressing and steaming wood chips into boards. Flush doors are most commonly employed in the interior of a dwelling, although slightly more substantial versions are occasionally used as exterior doors, especially within hotels and other buildings containing many independent dwellings.
A moulded door has the same structure as that of a flush door. The only difference is that the surface material is a moulded skin made of MDF. Skins can also be made out of hardboards.
A ledge and brace door, often called a board and batten door, is made from multiple vertical boards fixed together by two or more horizontal timbers called ledges (or battens), and is sometimes kept square by additional diagonal timbers called braces.
A wicket door is a pedestrian door built into a much larger door allowing access without requiring the opening of the larger door. Examples might be found on the ceremonial door of a cathedral or in a large vehicle door in a garage or hangar.
A bifold door is a unit that has several sections, folding in pairs. Wood is the most common material, and doors may also be metal or glass. Bifolds are most commonly made for closets, but may also be used as units between rooms or as wide openings to the outside. The panels fold up against one another and are pushed together when opened; the main door panel (often known as the traffic door) is accompanied by a stack of panels that fold neatly against one another when opened fully, rather like room dividers.
A sliding glass door, sometimes called an Arcadia door or patio door, is a door made of glass that slides open and sometimes has a screen (a removable metal mesh that covers the door).
Australian doors are a pair of plywood swinging doors often found in Australian public houses. These doors are generally red or brown in color and bear a resemblance to the more formal doors found in other British Colonies' public houses.
A false door is a wall decoration with the appearance of a door. In ancient Egyptian architecture, this was a common element in a tomb, the false door representing a gate to the afterlife. They can also be found in the funerary architecture of the desert tribes (e.g., Libyan Ghirza).
Types
Hinged
Most doors are hinged along one side to allow the door to pivot away from the doorway in one direction, but not the other. The axis of rotation is usually vertical. In some cases, such as hinged garage doors, the axis may be horizontal, above the door opening.
Doors can be hinged so that the axis of rotation is not in the plane of the door, to reduce the space required on the side to which the door opens. This requires a mechanism that places the axis of rotation on the side opposite the one toward which the door opens. This is sometimes the case in trains or airplanes, such as for the door to the toilet, which opens inward.
A swing door has special single-action hinges that allow it to open either outward or inward, and is usually sprung to keep it closed.
French doors are derived from the French design called the casement door. A French door has lites (glass panes) where all or some of the panels of a casement door would be, and traditionally has a moulded panel at the bottom. It is called a French window when used in a pair as double-leaved doors with large glass panels in each door leaf, and in which the doors may swing out (typically) as well as in.
A double-acting door, patented in 1880 by the Dutch-American engineer Lorenz Bommer, swings both ways. They are often used in areas where many people are likely to pass through, such as restaurant kitchens.
A Dutch door or stable door consists of two halves. The top half operates independently from the bottom half. A variant exists in which opening the top part separately is possible, but because the lower part has a lip on the inside, closing the top part, while leaving the lower part open, is not.
A garden door resembles a French window (with lites), but is more secure because only one door is operable. The hinge of the operating door is next to the adjacent fixed door and the latch is located at the wall opening jamb rather than between the two doors or with the use of an espagnolette bolt.
A Lev door or convection door is an internal floor-to-ceiling (full height) door, consisting of a standard door leaf and an upper leaf in place of the usual header wall. The leaves may or may not be separated by a transom. The doors enable effective convection of warm air.
Sliding
It is often useful to have doors which slide along tracks, often for space or aesthetic considerations.
A bypass door is a door unit that has two or more sections. The doors can slide in either direction along one axis on parallel overhead tracks, sliding past each other. They are most commonly used in closets to provide access to one side of the closet at a time. Doors in a bypass unit overlap slightly when viewed from the front so they do not have a visible gap when closed.
Doors which slide inside a wall cavity are called pocket doors. This type of door is used in tight spaces where privacy is also required. The door slab is mounted to rollers and a track at the top of the door and slides inside the wall.
Sliding glass doors are common in many houses, particularly as an entrance to the backyard. Such doors are also popular for use for the entrances to commercial structures, although they are not counted as fire exit doors. The door that moves is called the "active leaf", while the door that remains fixed is called the "inactive leaf".
Rotating
A revolving door has several wings or leaves, generally four, radiating from a central shaft, forming compartments that rotate about a vertical axis. A revolving door allows people to pass in both directions without colliding, and forms an airlock maintaining a seal between inside and out.
A pivot door, instead of hinges, is supported on a bearing some distance away from the edge, so that there is more or less of a gap on the pivot side as well as the opening side. In some cases the pivot is central, creating two equal openings.
High-speed
A high-speed door is a fast-operating door, in some cases with opening speeds of up to 4 m/s, mainly used in the industrial sector where the speed of a door affects production logistics, temperature and pressure control. High-speed cleanroom doors, usually consisting of a transparent material on a stainless steel frame, are used in pharmaceutical industries to allow passage between work areas while admitting minimal contaminants. These doors have a smooth surface structure and no protruding edges, allowing minimal particle retention and easy cleaning.
High-speed doors are made to handle a high number of openings, generally more than 200,000 a year. They must be built with heavy-duty parts and counterbalance systems for speed enhancement and emergency opening function. The door curtain was originally made of PVC, but was later also developed in aluminium and acrylic glass sections. High-speed refrigeration and cold-room doors with excellent insulation values have also been introduced for green and energy-saving requirements.
In North America, the Door and Access Systems Manufacturing Association (DASMA) defines high-performance doors as non-residential powered doors characterized by rolling, folding, sliding or swinging action, that are either high-cycle (minimum 100 cycles/day) or high-speed (minimum 20 inches (508 mm)/second), and two out of three of the following: made-to-order for exact size and custom features, able to withstand equipment impact (break-away if accidentally hit by vehicle), or able to sustain heavy use with minimal maintenance.
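The "two out of three" structure of this definition can be encoded as a simple predicate. The sketch below is illustrative only; the function and parameter names are ours, not DASMA terminology:

```python
def is_high_performance(cycles_per_day: int,
                        speed_in_per_s: float,
                        made_to_order: bool,
                        impact_resistant: bool,
                        heavy_use_low_maintenance: bool) -> bool:
    """True if a powered, non-residential door meets the definition above:
    high-cycle (>= 100 cycles/day) or high-speed (>= 20 in/s),
    plus at least two of the three custom/durability traits."""
    cycle_or_speed = cycles_per_day >= 100 or speed_in_per_s >= 20
    traits = sum([made_to_order, impact_resistant, heavy_use_low_maintenance])
    return cycle_or_speed and traits >= 2

# Example: a 4 m/s (~157 in/s) industrial door, made to order and impact-rated
print(is_high_performance(80, 157.5, True, True, False))  # True
```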
Automatic
Automatically opening doors are powered open and closed either by electricity, spring, or both. There are several methods by which an automatically opening door is activated:
A sensor detects traffic is approaching. Sensors for automatic doors are generally:
A pressure sensor – e.g., a floor mat which reacts to the pressure of someone standing on it.
An infrared curtain or beam which shines invisible light onto sensors; if someone or something blocks the beam the door is triggered open.
A motion sensor which uses low-power microwave radar for the same effect.
A remote sensor (e.g. based on infrared or radio waves) can be triggered by a portable remote control, or is installed inside a vehicle. These are popular for garage doors.
A switch is operated manually, perhaps after security checks. This can be a push button switch or a swipe card.
The act of pushing or pulling the door triggers the open and close cycle. These are also known as power-assisted doors.
In addition to activation sensors, automatically opening doors are generally fitted with safety sensors. These are usually an infrared curtain or beam, but can be a pressure mat fitted on the swing side of the door. The safety sensor prevents the door from colliding with an object by stopping or slowing its motion. A mechanism in modern automatic doors ensures that the door can open in a power failure.
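The interplay between activation and safety sensors can be sketched as a simplified control loop. The code below is a toy illustration under our own assumptions; real door controllers are certified embedded systems with many more states and fail-safes:

```python
import time

def door_controller(activation_triggered, safety_blocked, hold_open_s=3.0):
    """One open/close cycle: open on activation, and close only once the
    safety sensor has reported a clear swing path for the hold-open time."""
    if not activation_triggered():   # e.g. motion sensor, floor mat, switch
        return
    print("opening")                 # power the operator to open the door
    deadline = time.monotonic() + hold_open_s
    while time.monotonic() < deadline:
        if safety_blocked():         # object in the doorway: restart the timer
            deadline = time.monotonic() + hold_open_s
        time.sleep(0.05)
    print("closing")                 # spring or motor returns the door

# Toy usage: activation fires once, safety beam stays clear, door closes.
door_controller(lambda: True, lambda: False, hold_open_s=0.2)
```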
Other
Up-and-over or overhead doors are often used in garages. Instead of hinges, it has a mechanism, often counterbalanced or sprung, so it can lift and rest horizontally above the opening. A roller shutter or sectional overhead door is one variant of this type.
A tambour door or roller door is an up-and-over door made of narrow horizontal slats that rolls up and down by sliding along vertical tracks; it is typically found in entertainment centres and cabinets.
Rebated doors, a term chiefly used in Britain, are double doors with a lip or overlap (i.e. a rabbet) on the vertical edge(s) where they meet. Fire-rating can be achieved with an applied edge-guard or astragal molding on the meeting stile, in accordance with American fire door requirements.
Applications
Architectural doors have numerous general and specialized uses. Doors are generally used to separate interior spaces (closets, rooms, etc.) for convenience, privacy, safety, and security reasons. Doors are also used to secure passages into a building from the exterior, for reasons of climate control and safety.
Doors also are applied in more specialized cases:
A blast-proof door is constructed to allow access to a structure as well as to provide protection from the force of explosions.
A garden door is any door that opens to a backyard or garden. This term is often used specifically for French windows, double French doors (with lites instead of panels), in place of a sliding glass door. The term also may refer to what is known as patio doors.
A jib door is a concealed door, whose surface reflects the moldings and finishes of the wall. These were used in historic English houses, mainly as servants' doors.
A pet door (also known as a cat flap or dog door) is an opening in a door to allow pets to enter and exit without the main door's being opened. It may be simply covered by a rubber flap, or it may be an actual door hinged on the top that the pet can push through. Pet doors may be mounted in a sliding glass door as a new (permanent or temporary) panel. Pet doors may be unidirectional, only allowing pets to exit. Additionally, pet doors may be electronic, only allowing animals with a special electronic tag to enter.
A trapdoor is a door that is oriented horizontally in a ceiling or floor, often accessed via a ladder.
A water door or water entrance, such as those used in Venice, Italy, is a door leading from a building built on the water, such as a canal, to the water itself where, for example, one may enter or exit a private boat or water taxi.
Construction and components
Paneling
Panel doors, also called stile and rail doors, are built with frame and panel construction. EN 12519 describes the terms officially used in European member states. The main parts are listed below:
Stiles – Vertical boards that run the full height of a door and compose its right and left edges. The hinges are mounted to the fixed side (known as the "hanging stile"), and the handle, lock, bolt or latch are mounted on the swinging side (known as the "latch stile").
Rails – Horizontal boards at the top, bottom, and optionally in the middle of a door that join the two stiles and split the door into two or more rows of panels. The "top rail" and "bottom rail" are named for their positions. The bottom rail is also known as "kick rail". A middle rail at the height of the bolt is known as the "lock rail", other middle rails are commonly known as "cross rails".
Mullions – Smaller optional vertical boards that run between two rails, and split the door into two or more columns of panels, the term is used sometimes for verticals in doors, but more often (UK and Australia) it refers to verticals in windows.
Muntin – Optional vertical members that divide the door into smaller panels.
Panels – Large, wide boards used to fill the space between the stiles, rails, and mullions. The panels typically fit into grooves in the other pieces and help to keep the door rigid. Panels may be flat or raised, and may be glued in or left as floating panels.
Light – a piece of glass used in place of a panel, essentially giving the door a window.
Board battening
Also known as ledged and braced, board and batten doors are an older design consisting primarily of vertical slats:
Planks – Boards wider than 9" that extend the full height of the door, and are placed side by side filling the door's width.
Ledges and braces – Ledges extend horizontally across the door, and the boards are affixed to them; the ledges hold the planks together. When set diagonally, such timbers are called braces, and they prevent the door from skewing. On some doors, especially antique ones, the ledges are replaced with iron bars that are often built into the hinges as extensions of the door-side plates.
Ledging and bracing
As board and batten doors.
Impact resistance
Impact-resistant doors have rounded stile edges to dissipate energy and minimize edge chipping, scratching and denting. The formed edges are often made of an engineered material. Impact-resistant doors excel in high traffic areas such as hospitals, schools, hotels and coastal areas.
Frame and fill
This type consists of a solid timber frame, filled on one face with tongue-and-groove boards. It is quite often used externally, with the boards on the weather face.
Flushing
Flushing of a door means the door is flush with the face of the wall on either side.
Moulding
Stiles and rails – As above, but usually smaller. They form the outside edges of the door.
Core material: Material within the door used simply to fill space, provide rigidity and reduce drumminess.
Hollow-core – Often consists of a lattice or honeycomb made of corrugated cardboard, extruded polystyrene foam, or thin wooden slats. Can also be built with staggered wooden blocks. Hollow-core molded doors are commonly used as interior doors.
Lock block – A solid block of wood mounted within a hollow-core flush door near the bolt to provide a solid and stable location for mounting the door's hardware.
Stave-core – Consists of wooden slats stacked upon one another in a manner similar to a board & batten door (though the slats are usually thinner) or the wooden-block hollow-core (except that the space is entirely filled).
Solid-core – Can consist of low-density particle board or foam used to completely fill the space within the door. Solid-core flush doors (especially foam-core ones) are commonly used as exterior doors because they provide more insulation and strength.
Skin – The front and back faces of the door are covered with HDF/MDF skins.
Swing direction
Generally, door swings, or handing, are determined while standing on the outside or less secure side of the door while facing the door (i.e., standing on the side requiring a key to open, going from outside to inside, or from public to private).
It is important to get the hand and swing correct on exterior doors, as the transom is usually sloped and sealed to resist water entry and properly drain. In some custom millwork (or with some master carpenters), the manufacturer or installer bevels the leading edge (the first edge to meet the jamb as the door closes) so that the door fits tight without binding. Specifying an incorrect hand or swing can make the door bind, not close properly, or leak. Fixing this error is expensive or time-consuming. In North America, many doors now come with factory-installed hinges, pre-hung on the jamb and sills.
While facing the door from the outside or less secure side, if the hinge is on the right side of the door, the door is "right handed"; or if the hinge is on the left, it is "left handed". If the door swings toward you, it is "reverse swing"; or if the door swings away from you, it is "normal swing". (A short illustrative sketch of this rule follows the regional conventions below.)
In other words:
In the United States:
Left hand hinge (LHH): Standing outside (or on the less secure side, or on the public side of the door), the hinges are on the left and the door opens in (away from you).
Right hand hinge (RHH): Standing outside (or on the less secure side), the hinges are on the right and the door opens in (away from you).
Left hand reverse (LHR): Standing outside the house (or on the less secure side), the hinges are on the left, knob on right, on opening the door it swings toward you (i.e. the door swings open toward the outside, or "outswing")
Right hand reverse (RHR): Standing outside the house (i.e. on the less secure side), the hinges are on the right, knob on left, opening the door by pulling the door toward you (i.e. open swings to the outside, or "outswing")
In Europe:
One of the oldest DIN standards applies: DIN 107 "Building construction; identification of right and left side" (first 1922–05, current 1974–04) defines that doors are categorized from the side where the door hinges can be seen. If the hinges are on the left, it is a DIN Left door (DIN Links, DIN gauche); if the hinges are on the right, it is a DIN Right door (DIN Rechts, DIN droite). The DIN Right and DIN Left markings are also used to categorize matching installation material such as mortise locks (referenced in DIN 107). The European Standard DIN EN 12519 "Windows and pedestrian doors. Terminology" includes these definitions of orientation.
In Australia:
The "refrigerator rule" applies, and a refrigerator door is not opened from the inside. If the hinges are on the right then it is a right hand (or right hung) door. (Australian Standards for Installation of Timber Doorsets, AS 1909–1984 pg 6.)
In public buildings, exterior doors open to the outside to comply with applicable fire codes. In a fire, a door that opens inward could cause a crush of people who cannot open it.
Main materials
New exterior doors are largely defined by the type of materials they are made from: wood, steel, fiberglass, UPVC/vinyl, aluminum, composite, glass (patio doors), etc.
Wooden doors – including solid wood doors – are a top choice for many homeowners, largely because of the aesthetic qualities of wood. Many wood doors are custom-made, but they have several downsides: their price, their maintenance requirements (regular painting and staining) and their limited insulating value (R-5 to R-6, not including the effects of the glass elements of the doors). Wood doors often have an overhang requirement to maintain a warranty. An overhang is a roof, porch area or awning that helps to protect the door and its finish from UV rays.
Steel doors are another major type of residential front door; most of them come with a polyurethane or other type of foam insulation core – a critical factor in a building's overall comfort and efficiency. Steel doors typically come with the frame and lock system included by default, which makes them cost-efficient compared to wooden doors.
Most modern exterior walls provide thermal insulation and energy efficiency, which can be indicated by the Energy Star label or the passive house standards. Premium composite (including steel doors with a thick core of polyurethane or other foam), fiberglass and vinyl doors benefit from the materials they are made from, from a thermal perspective.
Insulation and weatherstripping
There are very few door models with an R-value close to 10 (the R-value measures how well a barrier resists the conductive flow of heat). This is far less than the R-40 walls or the R-50 ceilings of super-insulated buildings – passive solar and zero-energy buildings. Typical doors are not thick enough to provide very high levels of energy efficiency.
Many doors may have good R-values at their center, but their overall energy efficiency is reduced because of the presence of glass and reinforcing elements, or because of poor weatherstripping and the way the door is manufactured.
Door weatherstripping is particularly important for energy efficiency. German-made passive house doors use multiple weatherstrips, including magnetic strips, to meet higher standards. These weatherstrips reduce energy losses due to air leakage.
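As a rough worked illustration of these figures, steady-state conductive heat loss through a barrier of area A and thermal resistance R under a temperature difference ΔT is Q = A·ΔT/R (R here in ft²·°F·h/BTU). The door area and temperature difference below are assumed values chosen for illustration, not figures from this text:

```latex
Q = \frac{A\,\Delta T}{R},
\qquad
Q_{\text{R-5 door}} = \frac{20 \times 40}{5} = 160\ \mathrm{BTU/h},
\qquad
Q_{\text{R-40 wall}} = \frac{20 \times 40}{40} = 20\ \mathrm{BTU/h}.
```

Under these assumed conditions, the door area loses heat about eight times faster than an equal area of super-insulated wall, which is why door R-values and weatherstripping matter even though doors are a small fraction of the building envelope.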
Dimensions
United States
Standard door sizes in the US run in 2" increments, with customary heights and widths standardized for residential passage (room-to-room) doors.
US residential (exterior) doors follow a standard size, and interior doors for wheelchair access must meet a minimum width. Residential interior doors, as well as the doors of many small stores, offices, and other light commercial buildings, are often somewhat smaller than the doors of larger commercial buildings, public buildings, and grand homes. Older buildings often have smaller doors.
Thickness: Most pre-fabricated doors are 1 3/8" thick (for interior doors) or 1 3/4" (exterior).
Closets: small spaces such as closets, dressing rooms, half-baths, storage rooms, cellars, etc. often are accessed through doors smaller than passage doors in one or both dimensions but similar in design.
Garages: Garage doors are generally 84" (7 feet; 2134 mm) or 96" (8 feet; 2438 mm) wide for a single-car opening. Two-car garage doors (sometimes called double car doors) are a single door 192" (16 feet; 4877 mm) wide. Because of their size and weight, these doors are usually sectional; that is, they are split into four or five horizontal sections so that they can be raised more easily and do not require much additional space above the door when opening and closing. Single-piece double garage doors are common in some older homes.
Europe
Standard DIN doors are defined in DIN 18101 (published 1955–07, 1985–01, 2014–08). Door sizes are also given in the construction standard for wooden door panels (DIN 68706–1). The DIN commission created the harmonized European standard DIN EN 14351-1 for exterior doors and DIN EN 14351-2 for interior doors (published 2006–07, 2010–08), which define requirements for the CE marking and provide standard sizes by examples in the appendix.
The DIN 18101 standard has a normative size (Nennmaß) slightly larger than the panel size (Türblatt), as the standard derives the panel sizes from the normative size, which differ between single and double doors and between molded and unmolded doors. DIN 18101/1985 defines interior single molded doors to have a common panel height of 1985 mm (normative height 2010 mm) at panel widths of 610 mm, 735 mm, 860 mm, 985 mm and 1110 mm, plus a larger door panel size of 1110 mm × 2110 mm. The newer DIN 18101/2014 drops the definition of just five standard door sizes in favor of a basic raster running in 125 mm increments, in which height and width are independent. Panel width may be in the range 485 mm to 1360 mm, and height in the range 1610 mm to 2735 mm.
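Assuming the raster is a plain arithmetic progression as described, the admissible panel sizes can be enumerated in a few lines. This sketch is for illustration only, not an official sizing tool:

```python
def din_raster(start_mm: int, stop_mm: int, step_mm: int = 125):
    """All sizes from start to stop inclusive, in 125 mm increments."""
    return list(range(start_mm, stop_mm + 1, step_mm))

widths = din_raster(485, 1360)    # 485, 610, 735, ..., 1360 mm
heights = din_raster(1610, 2735)  # 1610, 1735, ..., 2735 mm
print(widths)                     # note 610-1110 mm match the 1985 widths above
print(len(widths) * len(heights), "width x height combinations")
```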
Doorways
When framed in wood for snug fitting of a door, the doorway consists of two vertical jambs on either side, a lintel or head jamb at the top, and perhaps a threshold at the bottom. When a door has more than one movable section, one of the sections may be called a leaf. See door furniture for a discussion of attachments to doors such as door handles, doorknobs, and door knockers.
Lintel – A horizontal beam above a door that supports the wall above it. (Also known as a header)
Jambs or legs – The vertical posts that form the sides of a door frame, where the hinges are mounted, and with which the bolt interacts.
Door casing, door frame, or chambranle – formed by the lintel and the two jambs.
Sill (for exterior doors) – A horizontal sill plate below the door that supports the door frame, similar to a window sill but for a door.
Threshold (for exterior doors) – A horizontal plate below the door that bridges the crack between the interior floor and the sill.
Doorstop – a thin slat built inside the frame to prevent a door from swinging through when closed, an act which might break the hinges.
Architrave – The decorative molding that outlines a door frame, called an Archivolt if the door is arched. Sometimes called brickmold in North America.
Doormat (also called door mat) – a mat placed typically in front of or behind a door of a home. This practice originated so that mud and dirt would be less prevalent on floors inside a building.
Related hardware
Door furniture or hardware refers to any of the items that are attached to a door or a drawer to enhance its functionality or appearance. This includes items such as hinges, handles, door stops, etc.
Safety
Door safety relates to prevention of door-related accidents. Such accidents take place in various forms, and in a number of locations; ranging from car doors to garage doors. Accidents vary in severity and frequency. According to the National Safety Council in the United States, around 300,000 door-related injuries occur every year.
The types of accidents vary from relatively minor cases where doors cause damage to other objects, such as walls, to serious cases resulting in human injury, particularly to fingers, hands, and feet. A closing door can exert up to 40 tons per square inch of pressure between the hinges. Because of the number of accidents taking place, there has been a surge in the number of lawsuits. Thus organisations may be at risk when car doors or doors within buildings are unprotected.
Opening direction
Whenever a door is opened outward, there is a risk that it could strike another person. In many cases this can be avoided by architectural design which favors doors which open inward to rooms (from the perspective of a common area such as a corridor, the door opens outward). In cases where this is infeasible, it may be possible to avoid an accident by placing vision panels in the door.
Inward-hinged doors can also escalate an accident by preventing people from escaping the building: people inside the building may press against the doors, and thus prevent the doors from opening. Related accidents include:
Grue Church fire: Grue, Norway in 1822
Victoria Hall Disaster: Sunderland, UK in 1883
Glen Cinema disaster: Paisley, UK in 1929
Cocoanut Grove fire: Boston, USA in 1942
Today, the exterior doors of most large (especially public) buildings open outward, while interior doors such as doors to individual rooms, offices, suites, etc. open inward, as do many exterior doors of houses, particularly in North America.
Stops
Doorstops are simple devices that prevent a door from contacting and possibly damaging another object (typically a wall). They may either absorb the force of a moving door, or hold the door against unintended motion.
Guards
Door guards (hinge guards, anti-finger trapping devices, or finger guards) help prevent finger trapping accidents, as doors pose a risk to children, especially when closing. Door guards protect fingers in door hinges by covering the hinge-side gap of an open door, typically with a piece of rubber or plastic that wraps from the door frame to the door. Other door safety products eject the fingers from the push side of the door as it closes.
There are various levels of door protection. Anti-finger trapping devices in front may leave the rear hinge pin side of doors unprotected. Full door protection uses front and rear anti-finger trapping devices and ensures the hinge side of a door is fully isolated. A risk assessment of the door determines the appropriate level of protection.
There is also handle-side door protection, which prevents the door from slamming shut on the frame, which can cause injury to fingers/hands.
Glass
Glass doors pose the risk of unintentional collision if a person is unaware there is a door, or thinks it is open when it is not. This risk is greater with sliding glass doors because they often have large single panes that are hard to see. Stickers or other types of warnings on the glass surface make it more visible and help prevent injury. In the UK, Regulation 14 of the Workplace (Health, Safety and Welfare) Regulations 1992 requires that builders mark windows and glass doors to make them conspicuous. Australian Standards AS1288 and AS2208 require that glass doors be made of laminated, tempered, or toughened glass.
Fire
Buildings often have special purpose doors that automatically close to prevent the spread of fire and smoke. Fire doors that are improperly installed or tampered with can increase risk during a fire. Sometimes, door closer mechanisms ensure fire doors remain closed.
An additional fire risk is that doors may prevent access to emergency services personnel coming to fight the fire and rescue occupants, etc. Fire fighters must use door breaching techniques in these situations to gain access.
Doors in public buildings often have panic bars, which open the door in response to anyone pressing against the bar from the inside in the event of a fire or other emergency.
Automobiles
Vehicle doors present an increased risk of trapping hands or fingers due to the proximity of occupants.
Bicyclists cycling on public roads risk dooring: collision with an abruptly opened vehicle door. Because cyclists often ride near parked cars alongside the road, they are particularly vulnerable.
Aircraft
In aircraft, doors in a pressurized cabin or cargo hold could pose risk if they open during flight, causing decompression. Air may rush out of the fuselage with sufficient velocity to eject unsecured occupants, cargo, and other items, and drastic pressure differences between compartments may cause aircraft floors or other interior partitions to fail. These concerns are typically mitigated with plug doors, which open inward. They are secured into their door frames by the difference in air pressure. Most cabin doors and emergency exits are of this type, but cargo doors typically open outward to maximise interior space.
A number of aircraft accidents have involved outward-opening door failures, including:
American Airlines Flight 96 (1972) (design flaw)
Turkish Airlines Flight 981 (1974) (design flaw)
1975 Tân Sơn Nhứt C-5 accident (poor maintenance)
United Airlines Flight 811 (1989) (design flaw)
| Technology | Architectural elements | null |
8643 | https://en.wikipedia.org/wiki/Molecular%20diffusion | Molecular diffusion | Molecular diffusion, often simply called diffusion, is the thermal motion of all (liquid or gas) particles at temperatures above absolute zero. The rate of this movement is a function of temperature, viscosity of the fluid and the size (mass) of the particles. Diffusion explains the net flux of molecules from a region of higher concentration to one of lower concentration. Once the concentrations are equal the molecules continue to move, but since there is no concentration gradient the process of molecular diffusion has ceased and is instead governed by the process of self-diffusion, originating from the random motion of the molecules. The result of diffusion is a gradual mixing of material such that the distribution of molecules is uniform. Since the molecules are still in motion, but an equilibrium has been established, the result of molecular diffusion is called a "dynamic equilibrium". In a phase with uniform temperature, absent external net forces acting on the particles, the diffusion process will eventually result in complete mixing.
Consider two systems, S1 and S2, at the same temperature and capable of exchanging particles. If there is a difference in chemical potential, for example μ1 > μ2 (where μ is the chemical potential), a particle flow will occur from S1 to S2, because systems evolve toward lower free energy and maximum entropy.
Molecular diffusion is typically described mathematically using Fick's laws of diffusion.
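For reference, Fick's laws in one dimension can be written in standard notation as

J = -D\,\frac{\partial C}{\partial x},
\qquad
\frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2},

where J is the diffusive flux, C the concentration, t time, x position, and D the diffusion coefficient; the second law (the diffusion equation) follows from the first combined with conservation of mass.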
Applications
Diffusion is of fundamental importance in many disciplines of physics, chemistry, and biology. Some example applications of diffusion:
Sintering to produce solid materials (powder metallurgy, production of ceramics)
Chemical reactor design
Catalyst design in chemical industry
Case hardening of steel by diffusion of carbon or nitrogen into its surface to modify its properties
Doping during production of semiconductors.
Significance
Diffusion is one of the transport phenomena. Among mass transport mechanisms, molecular diffusion is known as one of the slower ones.
Biology
In cell biology, diffusion is a main form of transport for necessary materials such as amino acids within cells. Diffusion of solvents, such as water, through a semipermeable membrane is classified as osmosis.
Metabolism and respiration rely in part upon diffusion in addition to bulk or active processes. For example, in the alveoli of mammalian lungs, due to differences in partial pressures across the alveolar-capillary membrane, oxygen diffuses into the blood and carbon dioxide diffuses out. Lungs contain a large surface area to facilitate this gas exchange process.
Tracer, self- and chemical diffusion
Fundamentally, two types of diffusion are distinguished:
Tracer diffusion and self-diffusion, which is a spontaneous mixing of molecules taking place in the absence of a concentration (or chemical potential) gradient. This type of diffusion can be followed using isotopic tracers, hence the name. Tracer diffusion is usually assumed to be identical to self-diffusion (assuming no significant isotopic effect), and it can take place under equilibrium. An excellent method for the measurement of self-diffusion coefficients is pulsed field gradient (PFG) NMR, where no isotopic tracers are needed. In a so-called NMR spin echo experiment this technique uses the nuclear spin precession phase, allowing one to distinguish chemically and physically completely identical species, e.g. water molecules within liquid water. The self-diffusion coefficient of water has been determined experimentally with high accuracy and thus often serves as a reference value for measurements on other liquids. The self-diffusion coefficient of neat water is 2.299·10⁻⁹ m²·s⁻¹ at 25 °C and 1.261·10⁻⁹ m²·s⁻¹ at 4 °C. A short simulation sketch of self-diffusion appears below.
Chemical diffusion occurs in the presence of a concentration (or chemical potential) gradient and results in net transport of mass. This is the process described by the diffusion equation. Chemical diffusion is always a non-equilibrium process: it increases the system entropy and brings the system closer to equilibrium.
The diffusion coefficients for these two types of diffusion are generally different because the diffusion coefficient for chemical diffusion is binary and it includes the effects due to the correlation of the movement of the different diffusing species.
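As a minimal illustrative sketch of self-diffusion (all parameter values are arbitrary assumptions chosen for the example), the diffusion coefficient of a one-dimensional random walker can be estimated from the mean squared displacement, MSD = 2Dt:

import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 10_000, 1_000
dt = 1e-3        # time step (arbitrary units, assumed for illustration)
step_std = 1e-2  # standard deviation of each displacement step (assumed)

# A 1-D random walk per particle: cumulative sum of independent Gaussian steps.
steps = rng.normal(0.0, step_std, size=(n_particles, n_steps))
x = np.cumsum(steps, axis=1)

# In one dimension, MSD(t) = 2 D t, so D = MSD / (2 t) at the final time.
t_final = n_steps * dt
msd = np.mean(x[:, -1] ** 2)
D_est = msd / (2.0 * t_final)
print(f"estimated D = {D_est:.4e}, expected {step_std**2 / (2 * dt):.4e}")

The estimate reproduces the input value to within statistical noise; this mirrors, in spirit, how PFG NMR infers self-diffusion coefficients from molecular displacement statistics.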
Non-equilibrium system
Because chemical diffusion is a net transport process, the system in which it takes place is not an equilibrium system (i.e. it is not yet at rest). Many results in classical thermodynamics are not easily applied to non-equilibrium systems. However, there sometimes occur so-called quasi-steady states, where the diffusion process does not change in time and classical results may apply locally. As the name suggests, this is not a true equilibrium, since the system is still evolving.
Non-equilibrium fluid systems can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale.
Chemical diffusion increases the entropy of a system, i.e. diffusion is a spontaneous and irreversible process. Particles can spread out by diffusion, but will not spontaneously re-order themselves (absent changes to the system, assuming no creation of new chemical bonds, and absent external forces acting on the particle).
Concentration dependent "collective" diffusion
Collective diffusion is the diffusion of a large number of particles, most often within a solvent.
In contrast to Brownian motion, which is the diffusion of a single particle, interactions between particles may have to be considered, unless the particles form an ideal mix with their solvent (ideal mix conditions correspond to the case where the interactions between the solvent and particles are identical to the interactions between particles and to the interactions between solvent molecules; in this case, the particles do not interact when inside the solvent).
In the case of an ideal mix, the particle diffusion equation holds true and the diffusion coefficient D, the speed of diffusion in the particle diffusion equation, is independent of particle concentration. In other cases, interactions between particles within the solvent account for the following effects:
the diffusion coefficient D in the particle diffusion equation becomes dependent on concentration. For an attractive interaction between particles, the diffusion coefficient tends to decrease as concentration increases; for a repulsive interaction, it tends to increase as concentration increases.
In the case of an attractive interaction between particles, particles exhibit a tendency to coalesce and form clusters if their concentration lies above a certain threshold. This is equivalent to a precipitation reaction (if the diffusing particles are molecules in solution, it is precipitation).
Molecular diffusion of gases
Transport of material in a stagnant fluid or across the streamlines of a fluid in laminar flow occurs by molecular diffusion. Two adjacent compartments separated by a partition, containing pure gases A or B, may be envisaged. Random movement of all molecules occurs, so that after a period molecules are found remote from their original positions. If the partition is removed, some molecules of A move towards the region occupied by B, their number depending on the number of molecules in the region considered. Concurrently, molecules of B diffuse toward regions formerly occupied by pure A.
Finally, complete mixing occurs. Before this point in time, a gradual variation in the concentration of A occurs along an axis, designated x, which joins the original compartments. This variation is expressed mathematically as −dC_A/dx, where C_A is the concentration of A. The negative sign arises because the concentration of A decreases as the distance x increases. Similarly, the variation in the concentration of gas B is −dC_B/dx. The rate of diffusion of A, N_A, depends on the concentration gradient and on the average velocity with which the molecules of A move in the x direction. This relationship is expressed by Fick's law
N_A = −D_AB (dC_A/dx)   (only applicable for no bulk motion)
where D_AB is the diffusivity of A through B, proportional to the average molecular velocity and therefore dependent on the temperature and pressure of the gases. The rate of diffusion N_A is usually expressed as the number of moles diffusing across unit area in unit time. As with the basic equation of heat transfer, this indicates that the rate of transfer is directly proportional to the driving force, which is the concentration gradient.
This basic equation applies to a number of situations. Restricting discussion exclusively to steady-state conditions, in which neither dC_A/dx nor dC_B/dx changes with time, equimolecular counterdiffusion is considered first.
Equimolecular counterdiffusion
If no bulk flow occurs in an element of length dx, the rates of diffusion of two ideal gases (of similar molar volume) A and B must be equal and opposite, that is N_A = −N_B.

The partial pressure of A changes by dP_A over the distance dx, and similarly the partial pressure of B changes by dP_B. As there is no difference in total pressure across the element (no bulk flow), dP_A/dx = −dP_B/dx.

For an ideal gas the partial pressure is related to the molar concentration by the relation P_A V = n_A R T, where n_A is the number of moles of gas A in a volume V. As the molar concentration C_A is equal to n_A/V, it follows that C_A = P_A/(RT).

Consequently, for gas A, N_A = −(D_AB/RT)·(dP_A/dx), where D_AB is the diffusivity of A in B. Similarly, N_B = −(D_BA/RT)·(dP_B/dx) = (D_BA/RT)·(dP_A/dx).

Considering that dP_A/dx = −dP_B/dx, it follows that D_AB = D_BA = D. If the partial pressure of A at x_1 is P_A1 and at x_2 is P_A2, integration of the above equation gives N_A = (D/RT)·(P_A1 − P_A2)/(x_2 − x_1).
A similar equation may be derived for the counterdiffusion of gas B.
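A minimal numerical sketch of the integrated equimolecular counterdiffusion result above (the parameter values are illustrative assumptions, not measured data):

# N_A = D * (P_A1 - P_A2) / (R * T * (x2 - x1))
R = 8.314                    # J/(mol*K), gas constant
T = 298.15                   # K (assumed)
D = 2.0e-5                   # m^2/s, illustrative gas-pair diffusivity (assumed)
P_A1, P_A2 = 2.0e4, 1.0e4    # Pa, partial pressures of A at x1 and x2 (assumed)
x1, x2 = 0.0, 0.01           # m (assumed)

N_A = D * (P_A1 - P_A2) / (R * T * (x2 - x1))
print(f"N_A = {N_A:.3e} mol/(m^2*s)")  # positive: A diffuses from x1 toward x2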
| Physical sciences | Statistical mechanics | Physics |
8651 | https://en.wikipedia.org/wiki/Dark%20matter | Dark matter | In astronomy, dark matter is an invisible and hypothetical form of matter that does not interact with light or other electromagnetic radiation. Dark matter is implied by gravitational effects which cannot be explained by general relativity unless more matter is present than can be observed. Such effects occur in the context of formation and evolution of galaxies, gravitational lensing, the observable universe's current structure, mass position in galactic collisions, the motion of galaxies within galaxy clusters, and cosmic microwave background anisotropies.
In the standard Lambda-CDM model of cosmology, the mass–energy content of the universe is 5% ordinary matter, 26.8% dark matter, and 68.2% a form of energy known as dark energy. Thus, dark matter constitutes 85% of the total mass, while dark energy and dark matter constitute 95% of the total mass–energy content.
Dark matter is not known to interact with ordinary baryonic matter and radiation except through gravity, making it difficult to detect in the laboratory. The most prevalent explanation is that dark matter is some as-yet-undiscovered subatomic particle, such as either weakly interacting massive particles (WIMPs) or axions. The other main possibility is that dark matter is composed of primordial black holes.
Dark matter is classified as "cold", "warm", or "hot" according to velocity (more precisely, its free streaming length). Recent models have favored a cold dark matter scenario, in which structures emerge by the gradual accumulation of particles.
Although the astrophysics community generally accepts the existence of dark matter, a minority of astrophysicists, intrigued by specific observations that are not well explained by ordinary dark matter, argue for various modifications of the standard laws of general relativity. These include modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity. So far none of the proposed modified gravity theories can describe every piece of observational evidence at the same time, suggesting that even if gravity has to be modified, some form of dark matter will still be required.
History
Early history
The hypothesis of dark matter has an elaborate history.
Wm. Thomson, Lord Kelvin, discussed the potential number of stars around the Sun in the appendices of a book based on a series of lectures given in 1884 in Baltimore. He inferred their density using the observed velocity dispersion of the stars near the Sun, assuming that the Sun was 20–100 million years old. He posed the question of what would happen if there were a thousand million stars within 1 kiloparsec of the Sun (at which distance their parallax would be 1 milli-arcsecond). Kelvin concluded
Many of our supposed thousand million stars – perhaps a great majority of them – may be dark bodies.
In 1906, Poincaré used the French term "matière obscure" ("dark matter") in discussing Kelvin's work. He found that the amount of dark matter would need to be less than that of visible matter (incorrectly, as it turns out).
The second to suggest the existence of dark matter using stellar velocities was Dutch astronomer Jacobus Kapteyn in 1922.
A publication from 1930 by Swedish astronomer Knut Lundmark points to him being the first to realise that the universe must contain much more mass than can be observed. Dutch radio astronomy pioneer Jan Oort also hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the galactic neighborhood and found the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be incorrect.
In 1933, Swiss astrophysicist Fritz Zwicky studied galaxy clusters while working at Caltech and made a similar inference. Zwicky applied the virial theorem to the Coma Cluster and obtained evidence of unseen mass he called dunkle Materie ("dark matter"). Zwicky estimated its mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies. He estimated the cluster had about 400 times more mass than was visually observable. The gravity effect of the visible galaxies was far too small for such fast orbits, so mass must be hidden from view. Based on these conclusions, Zwicky inferred some unseen matter provided the mass and associated gravitational attraction to hold the cluster together. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant; the same calculation today shows a smaller fraction, using greater values for luminous mass. Nonetheless, Zwicky did correctly conclude from his calculation that most of the gravitational matter present was dark. However, unlike modern theories, Zwicky considered "dark matter" to be non-luminous ordinary matter.
Further indications of mass-to-light ratio anomalies came from measurements of galaxy rotation curves. In 1939, H.W. Babcock reported the rotation curve for the Andromeda nebula (now called the Andromeda Galaxy), which suggested the mass-to-luminosity ratio increases radially. He attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral, rather than to unseen matter. Following Babcock's 1939 report of unexpectedly rapid rotation in the outskirts of the Andromeda Galaxy and a mass-to-light ratio of 50, Oort in 1940 discovered and wrote about the large non-visible halo of NGC 3115.
1970s
The hypothesis of dark matter largely took root in the 1970s. Several different observations were synthesized to argue that galaxies should be surrounded by halos of unseen matter. In two papers that appeared in 1974, this conclusion was drawn in tandem by independent groups: in Princeton, New Jersey, by Jeremiah Ostriker, Jim Peebles, and Amos Yahil, and in Tartu, Estonia, by Jaan Einasto, Enn Saar, and Ants Kaasik.
One of the observations that served as evidence for the existence of galactic halos of dark matter was the shape of galaxy rotation curves. These observations were done in optical and radio astronomy. In optical astronomy, Vera Rubin and Kent Ford worked with a new spectrograph to measure the velocity curve of edge-on spiral galaxies with greater accuracy.
At the same time, radio astronomers were making use of new radio telescopes to map the 21 cm line of atomic hydrogen in nearby galaxies. The radial distribution of interstellar atomic hydrogen (H I) often extends to much greater galactic distances than can be observed as collective starlight, expanding the sampled distances for rotation curves, and thus of the total mass distribution, to a new dynamical regime. Early mapping of Andromeda with the telescope at Green Bank and the dish at Jodrell Bank already showed the H I rotation curve did not trace the decline expected from Keplerian orbits.
As more sensitive receivers became available, Roberts & Whitehurst (1975) were able to trace the rotational velocity of Andromeda to 30 kpc, far beyond the optical measurements. Illustrating the advantage of tracing the gas disk at large radii, that paper's Figure 16 combines the optical data (the cluster of points at radii of less than 15 kpc with a single point further out) with the H I data between 20 and 30 kpc, exhibiting the flatness of the outer galaxy rotation curve; the solid curve peaking at the center is the optical surface density, while the other curve shows the cumulative mass, still rising linearly at the outermost measurement. In parallel, the use of interferometric arrays for extragalactic H I spectroscopy was being developed. Rogstad & Shostak (1972) published H I rotation curves of five spirals mapped with the Owens Valley interferometer; the rotation curves of all five were very flat, suggesting very large values of mass-to-light ratio in the outer parts of their extended H I disks. In 1978, Albert Bosma showed further evidence of flat rotation curves using data from the Westerbork Synthesis Radio Telescope.
By the late 1970s the existence of dark matter halos around galaxies was widely recognized as real, and became a major unsolved problem in astronomy.
1980–1990s
A stream of observations in the 1980s and 1990s supported the presence of dark matter, including a notable investigation of 967 spirals. The evidence for dark matter also included gravitational lensing of background objects by galaxy clusters, the temperature distribution of hot gas in galaxies and clusters, and the pattern of anisotropies in the cosmic microwave background.
According to the current consensus among cosmologists, dark matter is composed primarily of some type of not-yet-characterized subatomic particle.
The search for this particle, by a variety of means, is one of the major efforts in particle physics.
Technical definition
In standard cosmological calculations, "matter" means any constituent of the universe whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a⁻³. This is in contrast to "radiation", which scales as the inverse fourth power of the scale factor (ρ ∝ a⁻⁴), and a cosmological constant, which does not change with respect to a (ρ ∝ a⁰). The different scaling factors for matter and radiation are a consequence of radiation redshift. For example, after doubling the diameter of the observable Universe via cosmic expansion, the scale factor a has doubled. The energy of the cosmic microwave background radiation has been halved (because the wavelength of each photon has doubled); the energy of ultra-relativistic particles, such as early-era standard-model neutrinos, is similarly halved. The cosmological constant, as an intrinsic property of space, has a constant energy density regardless of the volume under consideration.
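A trivial sketch of these scalings, with densities normalized to 1 at a = 1 (for illustration only):

# Energy density vs. scale factor a for the three components discussed above.
def rho_matter(a):    return a ** -3  # matter dilutes with volume
def rho_radiation(a): return a ** -4  # extra factor of a from redshift
def rho_lambda(a):    return 1.0      # cosmological constant: independent of a

for a in (1.0, 2.0):
    print(a, rho_matter(a), rho_radiation(a), rho_lambda(a))
# Doubling a leaves 1/8 the matter density and 1/16 the radiation density.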
In principle, "dark matter" means all components of the universe which are not visible but still obey ρ ∝ a⁻³. In practice, the term "dark matter" is often used to mean only the non-baryonic component of dark matter, i.e., excluding "missing baryons". Context will usually indicate which meaning is intended.
Observational evidence
Galaxy rotation curves
The arms of spiral galaxies rotate around their galactic center. The luminous mass density of a spiral galaxy decreases as one goes from the center to the outskirts. If luminous mass were all the matter, then the galaxy could be modelled as a point mass in the centre with test masses orbiting around it, similar to the Solar System. From Kepler's third law, the rotation velocities would then be expected to decrease with distance from the center, as in the Solar System. This is not observed. Instead, the galaxy rotation curve remains flat or even increases as distance from the center increases.
If Kepler's laws are correct, then the obvious way to resolve this discrepancy is to conclude the mass distribution in spiral galaxies is not similar to that of the Solar System. In particular, there may be a lot of non-luminous matter (dark matter) in the outskirts of the galaxy.
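The contrast can be sketched numerically: a point mass (all luminous matter at the centre) gives a Keplerian curve falling as r^(-1/2), while an enclosed mass growing linearly with radius, as for a simple isothermal halo, gives a flat curve (all values below are illustrative assumptions):

import numpy as np

G = 6.674e-11                   # m^3 kg^-1 s^-2
M_central = 1.0e41              # kg, illustrative central "luminous" mass (assumed)
r = np.linspace(5e19, 1e21, 5)  # m, sample galactocentric radii (assumed)

v_kepler = np.sqrt(G * M_central / r)  # falls as r^(-1/2)
M_halo = M_central * (r / r[0])        # M(r) proportional to r: isothermal halo
v_halo = np.sqrt(G * M_halo / r)       # constant: a flat rotation curve

print(np.round(v_kepler / 1e3, 1))  # km/s, declining with radius
print(np.round(v_halo / 1e3, 1))    # km/s, flat with radius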
Velocity dispersions
Stars in bound systems must obey the virial theorem. The theorem, together with the measured velocity distribution, can be used to measure the mass distribution in a bound system, such as elliptical galaxies or globular clusters. With some exceptions, velocity dispersion estimates of elliptical galaxies do not match the predicted velocity dispersion from the observed mass distribution, even assuming complicated distributions of stellar orbits.
As with galaxy rotation curves, the obvious way to resolve the discrepancy is to postulate the existence of non-luminous matter.
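As a rough illustration of the virial estimate involved, the mass scales as M ~ sigma^2 R / G up to a geometry-dependent factor of order unity (the numbers below are illustrative assumptions for an elliptical galaxy):

G = 6.674e-11  # m^3 kg^-1 s^-2
sigma = 2.0e5  # m/s, ~200 km/s velocity dispersion (assumed)
R = 3.086e20   # m, ~10 kpc radius (assumed)

# Virial-theorem estimate; the prefactor (here 5) depends on the assumed
# geometry and orbit distribution.
M = 5 * sigma**2 * R / G
print(f"M ~ {M:.2e} kg ~ {M / 1.989e30:.2e} solar masses")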
Galaxy clusters
Galaxy clusters are particularly important for dark matter studies since their masses can be estimated in three independent ways:
From the scatter in radial velocities of the galaxies within clusters
From X-rays emitted by hot gas in the clusters. From the X-ray energy spectrum and flux, the gas temperature and density can be estimated, hence giving the pressure; assuming pressure and gravity balance determines the cluster's mass profile.
Gravitational lensing (usually of more distant galaxies) can measure cluster masses without relying on observations of dynamics (e.g., velocity).
Generally, these three methods are in reasonable agreement that dark matter outweighs visible matter by approximately 5 to 1.
Gravitational lensing
One of the consequences of general relativity is the gravitational lens. Gravitational lensing occurs when massive objects between a source of light and the observer act as a lens to bend light from this source. Lensing does not depend on the properties of the mass; it only requires there to be a mass. The more massive an object, the more lensing is observed. An example is a cluster of galaxies lying between a more distant source such as a quasar and an observer. In this case, the galaxy cluster will lens the quasar.
Strong lensing is the observed distortion of background galaxies into arcs when their light passes through such a gravitational lens. It has been observed around many distant clusters including Abell 1689. By measuring the distortion geometry, the mass of the intervening cluster can be obtained. In the weak regime, lensing does not distort background galaxies into arcs, causing minute distortions instead. By examining the apparent shear deformation of the adjacent background galaxies, the mean distribution of dark matter can be characterized. The measured mass-to-light ratios correspond to dark matter densities predicted by other large-scale structure measurements.
Cosmic microwave background
Although both dark matter and ordinary matter are matter, they do not behave in the same way. In particular, in the early universe, ordinary matter was ionized and interacted strongly with radiation via Thomson scattering. Dark matter does not interact directly with radiation, but it does affect the cosmic microwave background (CMB) by its gravitational potential (mainly on large scales) and by its effects on the density and velocity of ordinary matter. Ordinary and dark matter perturbations, therefore, evolve differently with time and leave different imprints on the CMB.
The CMB is very close to a perfect blackbody but contains very small temperature anisotropies of a few parts in 100,000. A sky map of anisotropies can be decomposed into an angular power spectrum, which is observed to contain a series of acoustic peaks at near-equal spacing but different heights. The locations of these peaks depend on cosmological parameters. Matching theory to data, therefore, constrains cosmological parameters.
The CMB anisotropy was first discovered by COBE in 1992, though its resolution was too coarse to detect the acoustic peaks.
After the discovery of the first acoustic peak by the balloon-borne BOOMERanG experiment in 2000, the power spectrum was precisely observed by WMAP in 2003–2012, and even more precisely by the Planck spacecraft in 2013–2015. The results support the Lambda-CDM model.
The observed CMB angular power spectrum provides powerful evidence in support of dark matter, as its precise structure is well fitted by the Lambda-CDM model, but difficult to reproduce with any competing model such as modified Newtonian dynamics (MOND).
Structure formation
Structure formation refers to the period after the Big Bang when density perturbations collapsed to form stars, galaxies, and clusters. Prior to structure formation, the Friedmann solutions to general relativity describe a homogeneous universe. Later, small anisotropies gradually grew and condensed the homogeneous universe into stars, galaxies and larger structures. Ordinary matter is affected by radiation, which is the dominant element of the universe at very early times. As a result, its density perturbations are washed out and unable to condense into structure. If there were only ordinary matter in the universe, there would not have been enough time for density perturbations to grow into the galaxies and clusters currently seen.
Dark matter provides a solution to this problem because it is unaffected by radiation. Therefore, its density perturbations can grow first. The resulting gravitational potential acts as an attractive potential well for ordinary matter collapsing later, speeding up the structure formation process.
Bullet Cluster
The Bullet Cluster is the result of a recent collision of two galaxy clusters. It is of particular note because the location of the center of mass as measured by gravitational lensing is different from the location of the center of mass of visible matter. This is difficult for modified gravity theories, which generally predict lensing around visible matter, to explain. Standard dark matter theory however has no issue: the hot, visible gas in each cluster would be cooled and slowed down by electromagnetic interactions, while dark matter (which does not interact electromagnetically) would not. This leads to the dark matter separating from the visible gas, producing the separate lensing peak as observed.
Type Ia supernova distance measurements
Type Ia supernovae can be used as standard candles to measure extragalactic distances, which can in turn be used to measure how fast the universe has expanded in the past. The data indicate the universe is expanding at an accelerating rate, the cause of which is usually ascribed to dark energy. Since observations indicate the universe is almost flat, the total energy density of everything in the universe should sum to 1 (Ω_tot ≈ 1). The measured dark energy density is Ω_Λ; the observed ordinary (baryonic) matter energy density is Ω_b, and the energy density of radiation is negligible. This leaves a missing component, Ω_dm ≈ 1 − Ω_Λ − Ω_b, which nonetheless behaves like matter (see the technical definition section above): dark matter.
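A trivial bookkeeping sketch of this argument, using illustrative round values for the density parameters (assumed for the example, not quoted from this article):

# Flatness: Omega_total ~ 1. Whatever is not dark energy, baryons, or
# radiation must be the matter-like missing component: dark matter.
Omega_total = 1.0
Omega_lambda = 0.69    # dark-energy density parameter (illustrative)
Omega_baryon = 0.048   # baryonic matter density parameter (illustrative)
Omega_radiation = 0.0  # negligible today

Omega_dm = Omega_total - Omega_lambda - Omega_baryon - Omega_radiation
print(f"Omega_dm ~ {Omega_dm:.3f}")  # ~0.26 with these inputs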
Sky surveys and baryon acoustic oscillations
Baryon acoustic oscillations (BAO) are fluctuations in the density of the visible baryonic matter (normal matter) of the universe on large scales. These are predicted to arise in the Lambda-CDM model due to acoustic oscillations in the photon–baryon fluid of the early universe and can be observed in the cosmic microwave background angular power spectrum. BAOs set up a preferred length scale for baryons. As the dark matter and baryons clumped together after recombination, the effect is much weaker in the galaxy distribution in the nearby universe, but is detectable as a subtle (≈1 percent) preference for pairs of galaxies to be separated by 147 Mpc, compared to those separated by 130–160 Mpc. This feature was predicted theoretically in the 1990s and then discovered in 2005, in two large galaxy redshift surveys, the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey. Combining the CMB observations with BAO measurements from galaxy redshift surveys provides a precise estimate of the Hubble constant and the average matter density in the Universe. The results support the Lambda-CDM model.
Redshift-space distortions
Large galaxy redshift surveys may be used to make a three-dimensional map of the galaxy distribution. These maps are slightly distorted because distances are estimated from observed redshifts; the redshift contains a contribution from the galaxy's so-called peculiar velocity in addition to the dominant Hubble expansion term. On average, superclusters are expanding more slowly than the cosmic mean due to their gravity, while voids are expanding faster than average. In a redshift map, galaxies in front of a supercluster have excess radial velocities towards it and have redshifts slightly higher than their distance would imply, while galaxies behind the supercluster have redshifts slightly low for their distance. This effect causes superclusters to appear squashed in the radial direction, and likewise voids are stretched. Their angular positions are unaffected. This effect is not detectable for any one structure since the true shape is not known, but can be measured by averaging over many structures. It was predicted quantitatively by Nick Kaiser in 1987, and first decisively measured in 2001 by the 2dF Galaxy Redshift Survey. Results are in agreement with the Lambda-CDM model.
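In linear theory this effect is commonly summarized by the Kaiser formula, quoted here from standard cosmology references rather than derived in this article:

P_s(k, \mu) = \left(1 + \beta \mu^2\right)^2 P_r(k),
\qquad
\beta \equiv f/b,

where P_s and P_r are the redshift-space and real-space power spectra, \mu is the cosine of the angle between the wavevector and the line of sight, f is the linear growth rate, and b is the galaxy bias.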
Lyman-alpha forest
In astronomical spectroscopy, the Lyman-alpha forest is the sum of the absorption lines arising from the Lyman-alpha transition of neutral hydrogen in the spectra of distant galaxies and quasars. Lyman-alpha forest observations can also constrain cosmological models. These constraints agree with those obtained from WMAP data.
Theoretical classifications
Dark matter can be divided into cold, warm, and hot categories. These categories refer to velocity rather than an actual temperature, and indicate how far corresponding objects moved due to random motions in the early universe, before they slowed due to cosmic expansion. This distance is called the free streaming length (FSL). The categories of dark matter are set with respect to the size of a protogalaxy (an object that later evolves into a dwarf galaxy): dark matter particles are classified as cold, warm, or hot if their FSL is much smaller (cold), similar to (warm), or much larger (hot) than a protogalaxy. Mixtures of the above are also possible: a theory of mixed dark matter was popular in the mid-1990s, but was rejected following the discovery of dark energy.
The significance of the free streaming length is that the universe began with some primordial density fluctuations from the Big Bang (in turn arising from quantum fluctuations at the microscale). Particles from overdense regions will naturally spread to underdense regions, but because the universe is expanding quickly, there is a time limit for them to do so. Faster particles (hot dark matter) can beat the time limit while slower particles cannot. The particles travel a free streaming length's worth of distance within the time limit; therefore this length sets a minimum scale for later structure formation. Because galaxy-size density fluctuations get washed out by free-streaming, hot dark matter implies the first objects that can form are huge supercluster-size pancakes, which then fragment into galaxies, while the reverse is true for cold dark matter.
Deep-field observations show that galaxies formed first, followed by clusters and superclusters as galaxies clump together, and therefore that most dark matter is cold. This is also the reason why neutrinos, which move at nearly the speed of light and therefore would fall under hot dark matter, cannot make up the bulk of dark matter.
Composition
The identity of dark matter is unknown, but there are many hypotheses about what dark matter could consist of.
Baryonic matter
Dark matter can refer to any substance which interacts predominantly via gravity with visible matter (e.g., stars and planets). Hence in principle it need not be composed of a new type of fundamental particle but could, at least in part, be made up of standard baryonic matter, such as protons or neutrons. Most of the ordinary matter familiar to astronomers, including planets, brown dwarfs, red dwarfs, visible stars, white dwarfs, neutron stars, and black holes, fall into this category. A black hole would ingest both baryonic and non-baryonic matter that comes close enough to its event horizon; afterwards, the distinction between the two is lost.
These massive objects that are hard to detect are collectively known as MACHOs. Some scientists initially hoped that baryonic MACHOs could account for and explain all the dark matter.
However, multiple lines of evidence suggest the majority of dark matter is not baryonic:
Sufficient diffuse, baryonic gas or dust would be visible when backlit by stars.
The theory of Big Bang nucleosynthesis predicts the observed abundance of the chemical elements. If there are more baryons, then there should also be more helium, lithium and heavier elements synthesized during the Big Bang. Agreement with observed abundances requires that baryonic matter makes up between 4–5% of the universe's critical density. In contrast, large-scale structure and other observations indicate that the total matter density is about 30% of the critical density.
Astronomical searches for gravitational microlensing in the Milky Way found at most only a small fraction of the dark matter may be in dark, compact, conventional objects (MACHOs, etc.); the excluded range of object masses is from half the Earth's mass up to 30 solar masses, which covers nearly all the plausible candidates.
Detailed analysis of the small irregularities (anisotropies) in the cosmic microwave background by WMAP and Planck indicate that around five-sixths of the total matter is in a form that only interacts significantly with ordinary matter or photons through gravitational effects.
Non-baryonic matter
If baryonic matter cannot make up most of dark matter, then dark matter must be non-baryonic. There are two main candidates for non-baryonic dark matter: new hypothetical particles and primordial black holes.
Unlike baryonic matter, nonbaryonic particles do not contribute to the formation of the elements in the early universe (Big Bang nucleosynthesis), and so their presence is felt only via gravitational effects (such as weak lensing). In addition, some dark matter candidates can interact with themselves (self-interacting dark matter) or with ordinary particles (e.g. WIMPs, Weakly Interacting Massive Particles), possibly resulting in observable by-products such as gamma rays and neutrinos (indirect detection). Candidates abound, each with their own strengths and weaknesses.
Undiscovered massive particles
There exists no formal definition of a Weakly Interacting Massive Particle, but broadly, it is an elementary particle which interacts via gravity and any other force (or forces) which is as weak as or weaker than the weak nuclear force, but also non-vanishing in strength. Many WIMP candidates are expected to have been produced thermally in the early Universe, similarly to the particles of the Standard Model according to Big Bang cosmology, and usually will constitute cold dark matter. Obtaining the correct abundance of dark matter today via thermal production requires a self-annihilation cross section of ⟨σv⟩ ≈ 3×10⁻²⁶ cm³·s⁻¹, which is roughly what is expected for a new particle in the 100 GeV mass range that interacts via the electroweak force.
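This thermal-production requirement can be stated with the standard textbook relic-abundance estimate (an order-of-magnitude relation quoted as background, not derived in this article):

\Omega_\chi h^2 \approx \frac{3 \times 10^{-27}\ \mathrm{cm^3\,s^{-1}}}{\langle \sigma v \rangle},

so that ⟨σv⟩ ≈ 3×10⁻²⁶ cm³·s⁻¹ yields Ω_χ h² ≈ 0.1, close to the observed dark matter abundance.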
Because supersymmetric extensions of the Standard Model of particle physics readily predict a new particle with these properties, this apparent coincidence is known as the "WIMP miracle", and a stable supersymmetric partner has long been a prime explanation for dark matter. Experimental efforts to detect WIMPs include the search for products of WIMP annihilation, including gamma rays, neutrinos and cosmic rays in nearby galaxies and galaxy clusters; direct detection experiments designed to measure the collision of WIMPs with nuclei in the laboratory, as well as attempts to directly produce WIMPs in colliders, such as the Large Hadron Collider at CERN.
In the early 2010s, results from direct-detection experiments along with the lack of evidence for supersymmetry at the Large Hadron Collider (LHC) experiment have cast doubt on the simplest WIMP hypothesis.
Undiscovered ultralight particles
Axions are hypothetical elementary particles originally theorized in 1978 independently by Frank Wilczek and Steven Weinberg as the Goldstone boson of Peccei–Quinn theory, which had been proposed in 1977 to solve the strong CP problem in quantum chromodynamics (QCD). QCD effects produce an effective periodic potential in which the axion field moves. Expanding the potential about one of its minima, one finds that the product of the axion mass with the axion decay constant is determined by the topological susceptibility of the QCD vacuum. An axion with mass much less than 60 keV is long-lived and weakly interacting: A perfect dark matter candidate.
The oscillations of the axion field about the minimum of the effective potential, the so-called misalignment mechanism, generate a cosmological population of cold axions with an abundance depending on the mass of the axion. With a mass above 5 μeV/c² (about 10⁻¹¹ times the electron mass) axions could account for dark matter, and thus be both a dark-matter candidate and a solution to the strong CP problem. If inflation occurs at a low scale and lasts sufficiently long, the axion mass can be as low as 1 peV/c².
Because axions have extremely low mass, their de Broglie wavelength is very large, in turn meaning that quantum effects could help resolve the small-scale problems of the Lambda-CDM model. A single ultralight axion with a decay constant at the grand unified theory scale provides the correct relic density without fine-tuning.
Axions as a dark matter candidate have gained in popularity in recent years because of the non-detection of WIMPs.
Primordial black holes
Primordial black holes are hypothetical black holes that formed soon after the Big Bang. In the inflationary era and early radiation-dominated universe, extremely dense pockets of subatomic matter may have been tightly packed to the point of gravitational collapse, creating primordial black holes without the supernova compression typically needed to make black holes today. Because the creation of primordial black holes would pre-date the first stars, they are not limited to the narrow mass range of stellar black holes and also not classified as baryonic dark matter.
The idea that black holes could form in the early universe was first suggested by Yakov Zeldovich and Igor Dmitriyevich Novikov in 1967, and independently by Stephen Hawking in 1971. It quickly became clear that such black holes might account for at least part of dark matter. Primordial black holes as a dark matter candidate has the major advantage that it is based on a well-understood theory (General Relativity) and objects (black holes) that are already known to exist. However, producing primordial black holes requires exotic cosmic inflation or physics beyond the standard model of particle physics, and might also require fine-tuning. Primordial black holes can also span nearly the entire possible mass range, from atom-sized to supermassive.
The idea that primordial black holes make up dark matter gained prominence in 2015 following results of gravitational wave measurements which detected the merger of intermediate-mass black holes. Black holes with about 30 solar masses are not predicted to form by either stellar collapse (typically less than 15 solar masses) or by the merger of black holes in galactic centers (millions or billions of solar masses), which suggests that the detected black holes might be primordial. A later survey of about a thousand supernovae detected no gravitational lensing events, when about eight would be expected if intermediate-mass primordial black holes above a certain mass range accounted for over 60% of dark matter. However, that study assumed that all black holes have the same or similar mass to the LIGO/Virgo mass range, which might not be the case (as suggested by subsequent James Webb Space Telescope observations).
The possibility that atom-sized primordial black holes account for a significant fraction of dark matter was ruled out by measurements of positron and electron fluxes outside the Sun's heliosphere by the Voyager 1 spacecraft. Tiny black holes are theorized to emit Hawking radiation. However, the detected fluxes were too low and did not have the expected energy spectrum, suggesting that tiny primordial black holes are not widespread enough to account for dark matter. Nonetheless, research and theories proposing that dense dark matter objects account for dark matter continued as of 2018, including approaches to dark matter cooling, and the question remains unsettled. In 2019, the lack of microlensing effects in observations of Andromeda suggested that tiny black holes do not exist.
Nonetheless, there still exists a largely unconstrained mass range smaller than that which can be limited by optical microlensing observations, where primordial black holes may account for all dark matter.
Dark matter aggregation and dense dark matter objects
If dark matter is composed of weakly interacting particles, then an obvious question is whether it can form objects equivalent to planets, stars, or black holes. Historically, the answer has been it cannot, because of two factors:
It lacks an efficient means to lose energy
Ordinary matter forms dense objects because it has numerous ways to lose energy. Losing energy would be essential for object formation, because a particle that gains energy during compaction or falling "inward" under gravity, and cannot lose it any other way, will heat up and increase velocity and momentum. Dark matter appears to lack a means to lose energy, simply because it is not capable of interacting strongly in other ways except through gravity. The virial theorem suggests that such a particle would not stay bound to the gradually forming object – as the object began to form and compact, the dark matter particles within it would speed up and tend to escape.
It lacks a diversity of interactions needed to form structures
Ordinary matter interacts in many different ways, which allows the matter to form more complex structures. For example, stars form through gravity, but the particles within them interact and can emit energy in the form of neutrinos and electromagnetic radiation through fusion when they become energetic enough. Protons and neutrons can bind via the strong interaction and then form atoms with electrons largely through electromagnetic interaction. There is no evidence that dark matter is capable of such a wide variety of interactions, since it seems to only interact through gravity (and possibly through some means no stronger than the weak interaction, although until dark matter is better understood, this is only speculation).
Detection of dark matter particles
If dark matter is made up of subatomic particles, then millions, possibly billions, of such particles must pass through every square centimeter of the Earth each second. Many experiments aim to test this hypothesis. Although WIMPs have been the main search candidates, axions have drawn renewed attention, with the Axion Dark Matter Experiment (ADMX) searching for axions and many more experiments planned for the future. Another candidate is heavy hidden sector particles which only interact with ordinary matter via gravity.
These experiments can be divided into two classes: direct detection experiments, which search for the scattering of dark matter particles off atomic nuclei within a detector; and indirect detection, which look for the products of dark matter particle annihilations or decays.
Direct detection
Direct detection experiments aim to observe low-energy recoils (typically a few keVs) of nuclei induced by interactions with particles of dark matter, which (in theory) are passing through the Earth. After such a recoil, the nucleus will emit energy in the form of scintillation light or phonons as they pass through sensitive detection apparatus. To do so effectively, it is crucial to maintain an extremely low background, which is the reason why such experiments typically operate deep underground, where interference from cosmic rays is minimized. Examples of underground laboratories with direct detection experiments include the Stawell mine, the Soudan mine, the SNOLAB underground laboratory at Sudbury, the Gran Sasso National Laboratory, the Canfranc Underground Laboratory, the Boulby Underground Laboratory, the Deep Underground Science and Engineering Laboratory and the China Jinping Underground Laboratory.
These experiments mostly use either cryogenic or noble liquid detector technologies. Cryogenic detectors, operating at temperatures below 100 mK, detect the heat produced when a particle hits an atom in a crystal absorber such as germanium. Noble liquid detectors detect scintillation produced by a particle collision in liquid xenon or argon. Cryogenic detector experiments include such projects as CDMS, CRESST, EDELWEISS, and EURECA, while noble liquid experiments include LZ, XENON, DEAP, ArDM, WARP, DarkSide, PandaX, and LUX, the Large Underground Xenon experiment. Both of these techniques focus strongly on their ability to distinguish background particles (which predominantly scatter off electrons) from dark matter particles (which scatter off nuclei). Other experiments include SIMPLE and PICASSO, which use alternative methods in their attempts to detect dark matter.
Currently there has been no well-established claim of dark matter detection from a direct detection experiment, leading instead to strong upper limits on the mass and interaction cross section with nucleons of such dark matter particles. The DAMA/NaI and more recent DAMA/LIBRA experimental collaborations have detected an annual modulation in the rate of events in their detectors, which they claim is due to dark matter. This results from the expectation that as the Earth orbits the Sun, the velocity of the detector relative to the dark matter halo will vary by a small amount. This claim is so far unconfirmed and in contradiction with negative results from other experiments such as LUX, SuperCDMS and XENON100.
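A minimal sketch of why an annual modulation is expected (the speeds are approximate and the fixed projection factor is a simplifying assumption; a real analysis uses the full halo velocity distribution):

import numpy as np

v_sun = 230.0   # km/s, Sun's speed through the galactic halo (approximate)
v_earth = 29.8  # km/s, Earth's orbital speed (approximate)
tilt = np.cos(np.radians(60))  # projection of Earth's orbit onto the Sun's motion

t = np.linspace(0.0, 1.0, 5)  # time in years
v_detector = v_sun + v_earth * tilt * np.cos(2 * np.pi * t)
print(np.round(v_detector, 1))  # a few-percent seasonal swing in detector speed

The expected event rate tracks the detector's speed through the halo, producing a small yearly oscillation of the kind DAMA reports.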
A special case of direct detection experiments covers those with directional sensitivity. This is a search strategy based on the motion of the Solar System around the Galactic Center. A low-pressure time projection chamber makes it possible to access information on recoiling tracks and constrain WIMP-nucleus kinematics. WIMPs coming from the direction in which the Sun travels (approximately towards Cygnus) may then be separated from background, which should be isotropic. Directional dark matter experiments include DMTPC, DRIFT, Newage and MIMAC.
Indirect detection
Indirect detection experiments search for the products of the self-annihilation or decay of dark matter particles in outer space. For example, in regions of high dark matter density (e.g., the centre of the Milky Way) two dark matter particles could annihilate to produce gamma rays or Standard Model particle–antiparticle pairs. Alternatively, if a dark matter particle is unstable, it could decay into Standard Model (or other) particles. These processes could be detected indirectly through an excess of gamma rays, antiprotons or positrons emanating from high density regions in the Milky Way and other galaxies. A major difficulty inherent in such searches is that various astrophysical sources can mimic the signal expected from dark matter, and so multiple signals are likely required for a conclusive discovery.
A few of the dark matter particles passing through the Sun or Earth may scatter off atoms and lose energy. Thus dark matter may accumulate at the center of these bodies, increasing the chance of collision/annihilation. This could produce a distinctive signal in the form of high-energy neutrinos. Such a signal would be strong indirect proof of WIMP dark matter. High-energy neutrino telescopes such as AMANDA, IceCube and ANTARES are searching for this signal. The detection by LIGO in September 2015 of gravitational waves opens the possibility of observing dark matter in a new way, particularly if it is in the form of primordial black holes.
Many experimental searches have been undertaken to look for such emission from dark matter annihilation or decay, examples of which follow.
The Energetic Gamma Ray Experiment Telescope observed more gamma rays in 2008 than expected from the Milky Way, but scientists concluded this was most likely due to incorrect estimation of the telescope's sensitivity.
The Fermi Gamma-ray Space Telescope is searching for similar gamma rays. In 2009, an as yet unexplained surplus of gamma rays from the Milky Way's galactic center was found in Fermi data. This Galactic Center GeV excess might be due to dark matter annihilation or to a population of pulsars. In April 2012, an analysis of previously available data from Fermi's Large Area Telescope instrument produced statistical evidence of a 130 GeV signal in the gamma radiation coming from the center of the Milky Way. WIMP annihilation was seen as the most probable explanation.
At higher energies, ground-based gamma-ray telescopes have set limits on the annihilation of dark matter in dwarf spheroidal galaxies and in clusters of galaxies.
The PAMELA experiment (launched in 2006) detected excess positrons. They could be from dark matter annihilation or from pulsars. No excess antiprotons were observed.
In 2013, results from the Alpha Magnetic Spectrometer on the International Space Station indicated excess high-energy cosmic rays which could be due to dark matter annihilation.
Collider searches for dark matter
An alternative approach to the detection of dark matter particles in nature is to produce them in a laboratory. Experiments with the Large Hadron Collider (LHC) may be able to detect dark matter particles produced in collisions of the LHC proton beams. Because a dark matter particle should have negligible interactions with normal visible matter, it may be detected indirectly as (large amounts of) missing energy and momentum that escape the detectors, provided other (non-negligible) collision products are detected. Constraints on dark matter also exist from the LEP experiment using a similar principle, but probing the interaction of dark matter particles with electrons rather than quarks. Any discovery from collider searches must be corroborated by discoveries in the indirect or direct detection sectors to prove that the particle discovered is, in fact, dark matter.
Alternative hypotheses
Because dark matter has not yet been identified, many other hypotheses have emerged aiming to explain the same observational phenomena without introducing a new unknown type of matter. The theory underpinning most observational evidence for dark matter, general relativity, is well-tested on Solar System scales, but its validity on galactic or cosmological scales has not been well proven. A suitable modification to general relativity can in principle conceivably eliminate the need for dark matter. The best-known theories of this class are MOND and its relativistic generalization tensor–vector–scalar gravity (TeVeS), f(R) gravity, negative mass, dark fluid, and entropic gravity. Alternative theories abound.
A problem with alternative hypotheses is that observational evidence for dark matter comes from so many independent approaches (see the "observational evidence" section above). Explaining any individual observation is possible but explaining all of them in the absence of dark matter is very difficult. Nonetheless, there have been some scattered successes for alternative hypotheses, such as a 2016 test of gravitational lensing in entropic gravity and a 2020 measurement of a unique MOND effect.
The prevailing opinion among most astrophysicists is that while modifications to general relativity can conceivably explain part of the observational evidence, there is probably enough data to conclude there must be some form of dark matter present in the universe.
In popular culture
Dark matter regularly appears as a topic in hybrid periodicals that cover both factual scientific topics and science fiction, and dark matter itself has been referred to as "the stuff of science fiction".
Mention of dark matter is made in works of fiction. In such cases, it is usually attributed extraordinary physical or magical properties, thus becoming inconsistent with the hypothesized properties of dark matter in physics and cosmology. For example:
Dark matter serves as a plot device in the 1995 X-Files episode "Soft Light".
A dark-matter-inspired substance known as "Dust" features prominently in Philip Pullman's His Dark Materials trilogy.
Beings made of dark matter are antagonists in Stephen Baxter's Xeelee Sequence.
More broadly, the phrase "dark matter" is used metaphorically in fiction to evoke the unseen or invisible.
| Physical sciences | Physical cosmology | null |
8667 | https://en.wikipedia.org/wiki/Double-slit%20experiment | Double-slit experiment | In modern physics, the double-slit experiment demonstrates that light and matter can exhibit behavior of both classical particles and classical waves. This type of experiment was first performed by Thomas Young in 1801, as a demonstration of the wave behavior of visible light. In 1927, Davisson and Germer and, independently, George Paget Thomson and his research student Alexander Reid demonstrated that electrons show the same behavior, which was later extended to atoms and molecules. Thomas Young's experiment with light was part of classical physics long before the development of quantum mechanics and the concept of wave–particle duality. He believed it demonstrated that the Christiaan Huygens' wave theory of light was correct, and his experiment is sometimes referred to as Young's experiment or Young's slits.
The experiment belongs to a general class of "double path" experiments, in which a wave is split into two separate waves (the wave is typically made of many photons and better referred to as a wave front, not to be confused with the wave properties of the individual photon) that later combine into a single wave. Changes in the path-lengths of both waves result in a phase shift, creating an interference pattern. Another version is the Mach–Zehnder interferometer, which splits the beam with a beam splitter.
In the basic version of this experiment, a coherent light source, such as a laser beam, illuminates a plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen – a result that would not be expected if light consisted of classical particles. However, the light is always found to be absorbed at the screen at discrete points, as individual particles (not waves); the interference pattern appears via the varying density of these particle hits on the screen. Furthermore, versions of the experiment that include detectors at the slits find that each detected photon passes through one slit (as would a classical particle), and not through both slits (as would a wave). However, such experiments demonstrate that particles do not form the interference pattern if one detects which slit they pass through. These results demonstrate the principle of wave–particle duality.
Other atomic-scale entities, such as electrons, are found to exhibit the same behavior when fired towards a double slit. Additionally, the detection of individual discrete impacts is observed to be inherently probabilistic, which is inexplicable using classical mechanics.
The experiment can be done with entities much larger than electrons and photons, although it becomes more difficult as size increases. The largest entities for which the double-slit experiment has been performed were molecules that each comprised 2000 atoms (whose total mass was 25,000 atomic mass units).
The double-slit experiment (and its variations) has become a classic for its clarity in expressing the central puzzles of quantum mechanics. Richard Feynman called it "a phenomenon which is impossible […] to explain in any classical way, and which has in it the heart of quantum mechanics. In reality, it contains the only mystery [of quantum mechanics]."
Overview
If light consisted strictly of ordinary or classical particles, and these particles were fired in a straight line through a slit and allowed to strike a screen on the other side, we would expect to see a pattern corresponding to the size and shape of the slit. However, when this "single-slit experiment" is actually performed, the pattern on the screen is a diffraction pattern in which the light is spread out. The smaller the slit, the greater the angle of spread. The top portion of the image shows the central portion of the pattern formed when a red laser illuminates a slit and, if one looks carefully, two faint side bands. More bands can be seen with a more highly refined apparatus. Diffraction explains the pattern as being the result of the interference of light waves from the slit.
If one illuminates two parallel slits, the light from the two slits again interferes. Here the interference is a more pronounced pattern with a series of alternating light and dark bands. The width of the bands is a property of the frequency of the illuminating light. (See the bottom photograph to the right.)
When Thomas Young (1773–1829) first demonstrated this phenomenon, it indicated that light consists of waves, as the distribution of brightness can be explained by the alternately additive and subtractive interference of wavefronts. Young's experiment, performed in the early 1800s, played a crucial role in the understanding of the wave theory of light, vanquishing the corpuscular theory of light proposed by Isaac Newton, which had been the accepted model of light propagation in the 17th and 18th centuries.
However, the later discovery of the photoelectric effect demonstrated that under different circumstances, light can behave as if it is composed of discrete particles. These seemingly contradictory discoveries made it necessary to go beyond classical physics and take into account the quantum nature of light.
Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment. He also proposed (as a thought experiment) that if detectors were placed before each slit, the interference pattern would disappear.
The Englert–Greenberger duality relation provides a detailed treatment of the mathematics of double-slit interference in the context of quantum mechanics.
A low-intensity double-slit experiment was first performed by G. I. Taylor in 1909, by reducing the level of incident light until photon emission/absorption events were mostly non-overlapping.
A slit interference experiment was not performed with anything other than light until 1961, when Claus Jönsson of the University of Tübingen performed it with coherent electron beams and multiple slits. In 1974, the Italian physicists Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi performed a related experiment using single electrons from a coherent source and a biprism beam splitter, showing the statistical nature of the buildup of the interference pattern, as predicted by quantum theory. In 2002, the single-electron version of the experiment was voted "the most beautiful experiment" by readers of Physics World. Since that time a number of related experiments have been published, with a little controversy.
In 2012, Stefano Frabboni and co-workers sent single electrons onto nanofabricated slits (about 100 nm wide) and, by detecting the transmitted electrons with a single-electron detector, could show the build-up of a double-slit interference pattern. Many related experiments involving coherent interference have been performed; they are the basis of modern electron diffraction, microscopy and high-resolution imaging.
In 2018, single particle interference was demonstrated for antimatter in the Positron Laboratory (L-NESS, Politecnico di Milano) of Rafael Ferragut in Como (Italy), by a group led by Marco Giammarchi.
Variations of the experiment
Interference from individual particles
An important version of this experiment involves single particle detection. Illuminating the double-slit with a low intensity results in single particles being detected as white dots on the screen. Remarkably, however, an interference pattern emerges when these particles are allowed to build up one by one (see the image below).
This demonstrates the wave–particle duality, which states that all matter exhibits both wave and particle properties: the particle is measured as a single pulse at a single position, while the modulus squared of the wave describes the probability of detecting the particle at a specific place on the screen, giving a statistical interference pattern. This phenomenon has been shown to occur with photons, electrons, atoms, and even some molecules: with buckminsterfullerene (C60) in 2001, with two molecules of 430 atoms each in 2011, and with molecules of up to 2000 atoms in 2019.
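The statistical character of this buildup can be illustrated with a minimal Python sketch (an illustration only, not from the source; the wavelength, slit separation, and screen distance are assumed values): each detection is an independent random draw from the fringe-shaped probability density, and the fringes emerge only in aggregate.

import numpy as np
rng = np.random.default_rng(seed=0)
wavelength = 633e-9           # assumed red laser, metres
d = 0.5e-3                    # assumed slit separation, metres
z = 1.0                       # assumed slit-to-screen distance, metres
x = np.linspace(-5e-3, 5e-3, 2001)   # candidate screen positions, metres
theta = x / z                        # small-angle approximation
prob = np.cos(np.pi * d * theta / wavelength) ** 2   # two-slit fringe density
prob /= prob.sum()                   # normalise into a discrete distribution
for n in (10, 100, 10_000):
    hits = rng.choice(x, size=n, p=prob)   # n single-particle detections
    counts, _ = np.histogram(hits, bins=50)
    print(n, "particles, fringe contrast:",
          round(float(counts.max() - counts.min()) / counts.max(), 2))

With ten detections the hits look essentially random; by ten thousand, the histogram traces out the cos² fringes.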
In addition to interference patterns built up from single particles, up to 4 entangled photons can also show interference patterns.
Mach–Zehnder interferometer
The Mach–Zehnder interferometer can be seen as a simplified version of the double-slit experiment. Instead of propagating through free space after the two slits, and hitting any position in an extended screen, in the interferometer the photons can only propagate via two paths, and hit two discrete photodetectors. This makes it possible to describe it via simple linear algebra in dimension 2, rather than differential equations.
A photon emitted by the laser hits the first beam splitter and is then in a superposition between the two possible paths. In the second beam splitter these paths interfere, causing the photon to hit the photodetector on the right with probability one, and the photodetector on the bottom with probability zero. Blocking one of the paths, or equivalently detecting the presence of a photon on a path eliminates interference between the paths: both photodetectors will be hit with probability 1/2. This indicates that after the first beam splitter the photon does not take one path or another, but rather exists in a quantum superposition of the two paths.
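Because the description reduces to linear algebra in dimension 2, a few lines of numpy make both cases explicit. This is a sketch under one common convention (a symmetric 50/50 beam splitter with an i phase on reflection); which detector fires with certainty depends on that convention, not on the physics being illustrated.

import numpy as np
# Path basis: index 0 = "upper" arm, index 1 = "lower" arm.
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beam splitter
psi_in = np.array([1, 0])                       # photon enters on path 0
# Undisturbed interferometer: two beam splitters in sequence.
psi_out = B @ (B @ psi_in)
print(np.abs(psi_out) ** 2)    # [0. 1.]: one detector fires with certainty
# Which-path measurement after the first splitter: the superposition
# collapses onto a single arm, and the second splitter no longer interferes.
p_paths = np.abs(B @ psi_in) ** 2               # [0.5, 0.5]
p_out = sum(p * np.abs(B @ arm) ** 2 for p, arm in zip(p_paths, np.eye(2)))
print(p_out)                   # [0.5 0.5]: both detectors equally likely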
"Which-way" experiments and the principle of complementarity
A well-known thought experiment predicts that if particle detectors are positioned at the slits, showing through which slit a photon goes, the interference pattern will disappear. This which-way experiment illustrates the complementarity principle that photons can behave as either particles or waves, but cannot be observed as both at the same time.
Despite the importance of this thought experiment in the history of quantum mechanics (for example, see the discussion on Einstein's version of this experiment), technically feasible realizations of this experiment were not proposed until the 1970s. (Naive implementations of the textbook thought experiment are not possible because photons cannot be detected without absorbing the photon.) Currently, multiple experiments have been performed illustrating various aspects of complementarity.
An experiment performed in 1987 produced results that demonstrated that partial information could be obtained regarding which path a particle had taken without destroying the interference altogether. This "wave-particle trade-off" takes the form of an inequality relating the visibility of the interference pattern and the distinguishability of the which-way paths.
Delayed choice and quantum eraser variations
Wheeler's delayed-choice experiments demonstrate that extracting "which path" information after a particle passes through the slits can seem to retroactively alter its previous behavior at the slits.
Quantum eraser experiments demonstrate that wave behavior can be restored by erasing or otherwise making permanently unavailable the "which path" information.
A simple do-it-at-home illustration of the quantum eraser phenomenon was given in an article in Scientific American. If one sets polarizers before each slit with their axes orthogonal to each other, the interference pattern will be eliminated. The polarizers can be considered as introducing which-path information to each beam. Introducing a third polarizer in front of the detector with an axis of 45° relative to the other polarizers "erases" this information, allowing the interference pattern to reappear. This can also be accounted for by considering the light to be a classical wave, and also when using circular polarizers and single photons. Implementations of the polarizers using entangled photon pairs have no classical explanation.
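The classical-wave account of the polarizer version can be checked with a short Jones-calculus sketch (a hedged illustration, not the article's apparatus): orthogonal polarizations act as which-path labels and eliminate the interference cross term, and a 45° polarizer restores it by projecting both fields onto a common axis.

import numpy as np
H = np.array([1, 0], dtype=complex)   # field from slit 1, horizontal polarizer
V = np.array([0, 1], dtype=complex)   # field from slit 2, vertical polarizer
def fringe_visibility(e1, e2):
    # Magnitude of the interference cross term, normalised by total intensity.
    cross = np.vdot(e1, e2)
    total = np.vdot(e1, e1).real + np.vdot(e2, e2).real
    return 2 * abs(cross) / total
P45 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # 45° linear polarizer
print(fringe_visibility(H, V))              # 0.0 -> which-path labels, no fringes
print(fringe_visibility(P45 @ H, P45 @ V))  # 1.0 -> labels erased, fringes return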
Weak measurement
In a highly publicized experiment in 2012, researchers claimed to have identified the path each particle had taken without any adverse effects at all on the interference pattern generated by the particles. In order to do this, they used a setup such that particles coming to the screen were not from a point-like source, but from a source with two intensity maxima. However, commentators such as Svensson have pointed out that there is in fact no conflict between the weak measurements performed in this variant of the double-slit experiment and the Heisenberg uncertainty principle. Weak measurement followed by post-selection did not allow simultaneous position and momentum measurements for each individual particle, but rather allowed measurement of the average trajectory of the particles that arrived at different positions. In other words, the experimenters were creating a statistical map of the full trajectory landscape.
Other variations
In 1967, Pfleegor and Mandel demonstrated two-source interference using two separate lasers as light sources.
It was shown experimentally in 1972 that in a double-slit system where only one slit was open at any time, interference was nonetheless observed provided the path difference was such that the detected photon could have come from either slit. The experimental conditions were such that the photon density in the system was much less than 1.
In 1991, Carnal and Mlynek performed the classic Young's double slit experiment with metastable helium atoms passing through micrometer-scale slits in gold foil.
In 1999, a quantum interference experiment (using a diffraction grating, rather than two slits) was successfully performed with buckyball molecules (each of which comprises 60 carbon atoms). A buckyball is large enough (diameter about 0.7 nm, nearly half a million times larger than a proton) to be seen in an electron microscope.
In 2002, an electron field emission source was used to demonstrate the double-slit experiment. In this experiment, a coherent electron wave was emitted from two closely located emission sites on the needle apex, which acted as double slits, splitting the wave into two coherent electron waves in a vacuum. The interference pattern between the two electron waves could then be observed. In 2017, researchers performed the double-slit experiment using light-induced field electron emitters. With this technique, emission sites can be optically selected on a scale of ten nanometers. By selectively deactivating (closing) one of the two emissions (slits), researchers were able to show that the interference pattern disappeared.
In 2005, E. R. Eliel presented an experimental and theoretical study of the optical transmission of a thin metal screen perforated by two subwavelength slits, separated by many optical wavelengths. The total intensity of the far-field double-slit pattern is shown to be reduced or enhanced as a function of the wavelength of the incident light beam.
In 2012, researchers at the University of Nebraska–Lincoln performed the double-slit experiment with electrons as described by Richard Feynman, using new instruments that allowed control of the transmission of the two slits and the monitoring of single-electron detection events. Electrons were fired by an electron gun and passed through one or two slits, each 62 nm wide × 4 μm tall.
In 2013, a quantum interference experiment (using diffraction gratings, rather than two slits) was successfully performed with molecules that each comprised 810 atoms (whose total mass was over 10,000 atomic mass units). The record was raised to 2000 atoms (25,000 amu) in 2019.
Hydrodynamic pilot wave analogs
Hydrodynamic analogs have been developed that can recreate various aspects of quantum mechanical systems, including single-particle interference through a double-slit. A silicone oil droplet, bouncing along the surface of a liquid, self-propels via resonant interactions with its own wave field. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet's interaction with its own ripples, which form what is known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles – including behaviors customarily taken as evidence that elementary particles are spread through space like waves, without any specific location, until they are measured.
Behaviors mimicked via this hydrodynamic pilot-wave system include quantum single particle diffraction, tunneling, quantized orbits, orbital level splitting, spin, and multimodal statistics. It is also possible to infer uncertainty relations and exclusion principles. Videos are available illustrating various features of this system. (See the External links.)
However, more complicated systems that involve two or more particles in superposition are not amenable to such a simple, classically intuitive explanation. Accordingly, no hydrodynamic analog of entanglement has been developed. Nevertheless, optical analogs are possible.
Double-slit experiment on time
In 2023, an experiment was reported that recreated an interference pattern in time. A pump laser pulse was shone at a screen coated in indium tin oxide (ITO), altering the properties of the electrons within the material via the Kerr effect and switching it from transparent to reflective for around 200 femtoseconds. A subsequent probe laser beam hitting the ITO screen then saw this temporary change in optical properties as a slit in time, and two such changes as a double slit, with a phase difference that adds up destructively or constructively on each frequency component, resulting in an interference pattern. Similar results have been obtained classically with water waves.
Classical wave-optics formulation
Much of the behaviour of light can be modelled using classical wave theory. The Huygens–Fresnel principle is one such model; it states that each point on a wavefront generates a secondary wavelet, and that the disturbance at any subsequent point can be found by summing the contributions of the individual wavelets at that point. This summation needs to take into account the phase as well as the amplitude of the individual wavelets. Only the intensity of a light field can be measured—this is proportional to the square of the amplitude.
In the double-slit experiment, the two slits are illuminated by the quasi-monochromatic light of a single laser. If the width of the slits is small enough (much less than the wavelength of the laser light), the slits diffract the light into cylindrical waves. These two cylindrical wavefronts are superimposed, and the amplitude, and therefore the intensity, at any point in the combined wavefronts depends on both the magnitude and the phase of the two wavefronts. The difference in phase between the two waves is determined by the difference in the distance travelled by the two waves.
If the viewing distance is large compared with the separation of the slits (the far field), the phase difference can be found using the geometry shown in the figure below right. The path difference between two waves travelling at an angle θ is given by:

d sin θ ≈ θ d

where d is the distance between the two slits. When the two waves are in phase, i.e. the path difference is equal to an integral number of wavelengths, the summed amplitude, and therefore the summed intensity, is maximal; when they are in anti-phase, i.e. the path difference is equal to half a wavelength, one and a half wavelengths, etc., the two waves cancel and the summed intensity is zero. This effect is known as interference. The interference fringe maxima occur at angles

θ_n = nλ/d, for n = 0, 1, 2, …

where λ is the wavelength of the light. The angular spacing of the fringes, θ_f, is given by

θ_f = λ/d

The spacing of the fringes at a distance z from the slits is given by

w = z θ_f = zλ/d
For example, if two slits are separated by 0.5 mm (d), and are illuminated with a 0.6 μm wavelength laser (λ), then at a distance of 1 m (z), the spacing of the fringes will be 1.2 mm.
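A two-line Python check of this arithmetic (using the same numbers as the worked example, no new data):

d, wavelength, z = 0.5e-3, 0.6e-6, 1.0   # metres
print(z * wavelength / d)                # 0.0012 m, i.e. 1.2 mm fringe spacing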
If the width b of the slits is appreciable compared to the wavelength, the Fraunhofer diffraction equation is needed to determine the intensity of the diffracted light as follows:

I(θ) ∝ cos²(πd sin θ / λ) sinc²(πb sin θ / λ)

where the sinc function is defined as sinc(x) = sin(x)/x for x ≠ 0, and sinc(0) = 1.

This is illustrated in the figure above, where the first pattern is the diffraction pattern of a single slit, given by the sinc² function in this equation, and the second figure shows the combined intensity of the light diffracted from the two slits, where the cos² function represents the fine structure, and the coarser structure represents diffraction by the individual slits as described by the sinc² function.
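The two-slit Fraunhofer intensity above is easy to evaluate numerically; in this sketch the slit width b is an assumed illustration value. Note that numpy's sinc is the normalised sin(πx)/(πx), so its argument is divided by π relative to the definition in the text.

import numpy as np
wavelength = 0.6e-6   # metres
d = 0.5e-3            # slit separation, metres
b = 0.1e-3            # assumed slit width, metres
theta = np.linspace(-5e-3, 5e-3, 7)   # a few viewing angles, radians
envelope = np.sinc(b * np.sin(theta) / wavelength) ** 2        # single-slit sinc² term
fringes = np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2  # two-slit cos² term
print(np.round(fringes * envelope, 3))   # coarse envelope modulating fine fringes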
Similar calculations for the near field can be made by applying the Fresnel diffraction equation, which implies that as the plane of observation gets closer to the plane in which the slits are located, the diffraction patterns associated with each slit decrease in size, so that the area in which interference occurs is reduced, and may vanish altogether when there is no overlap in the two diffracted patterns.
Path-integral formulation
The double-slit experiment can illustrate the path integral formulation of quantum mechanics provided by Feynman. The path integral formulation replaces the classical notion of a single, unique trajectory for a system, with a sum over all possible trajectories. The trajectories are added together by using functional integration.
Each path is considered equally likely, and thus contributes the same amount. However, the phase of this contribution at any given point along the path is determined by the action S along the path:

A_path(x, y, z, t) = e^(iS(x, y, z, t)/ħ)

All these contributions are then added together, and the magnitude of the final result is squared, to get the probability distribution for the position of a particle:

p(x, y, z, t) ∝ |∫ over all paths of e^(iS(x, y, z, t)/ħ)|²

As is always the case when calculating probability, the results must then be normalized by imposing:

∫∫∫ over all space of p(x, y, z, t) dV = 1
The probability distribution of the outcome is the normalized square of the norm of the superposition, over all paths from the point of origin to the final point, of waves propagating proportionally to the action along each path. The differences in the cumulative action along the different paths (and thus the relative phases of the contributions) produces the interference pattern observed by the double-slit experiment. Feynman stressed that his formulation is merely a mathematical description, not an attempt to describe a real process that we can measure.
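In the crudest two-path approximation of the double slit, the sum over paths reduces to adding one complex phase per slit and squaring the magnitude. The Python sketch below assumes a photon-like phase of 2π per wavelength of geometric path length; it illustrates the rule rather than evaluating Feynman's full functional integral.

import numpy as np
wavelength = 0.6e-6
k = 2 * np.pi / wavelength        # phase accumulated per metre of path
d, z = 0.5e-3, 1.0                # slit separation and screen distance, metres
x = np.linspace(-3e-3, 3e-3, 7)   # screen positions
slits = np.array([-d / 2, d / 2])
# One straight path through each slit; each contributes exp(i * k * length).
lengths = np.sqrt(z**2 + (x[:, None] - slits[None, :]) ** 2)
amplitude = np.exp(1j * k * lengths).sum(axis=1)   # sum over the two paths
probability = np.abs(amplitude) ** 2               # squared magnitude
print(np.round(probability / probability.max(), 3))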
Interpretations of the experiment
Like the Schrödinger's cat thought experiment, the double-slit experiment is often used to highlight the differences and similarities between the various interpretations of quantum mechanics.
Standard quantum physics
The standard interpretation of the double slit experiment is that the pattern is a wave phenomenon, representing interference between two probability amplitudes, one for each slit. Low intensity experiments demonstrate that the pattern is filled in one particle detection at a time. Any change to the apparatus designed to detect a particle at a particular slit alters the probability amplitudes and the interference disappears. This interpretation is independent of any conscious observer.
Complementarity
Niels Bohr interpreted quantum experiments like the double-slit experiment using the concept of complementarity. In Bohr's view quantum systems are not classical, but measurements can only give classical results. Certain pairs of classical properties will never be observed in a quantum system simultaneously: the interference pattern of waves in the double slit experiment will disappear if particles are detected at the slits. Modern quantitative versions of the concept allow for a continuous tradeoff between the visibility of the interference fringes and the probability of particle detection at a slit.
Copenhagen interpretation
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, stemming from the work of Niels Bohr, Werner Heisenberg, Max Born, and others. The term "Copenhagen interpretation" was apparently coined by Heisenberg during the 1950s to refer to ideas developed in the 1925–1927 period, glossing over his disagreements with Bohr. Consequently, there is no definitive historical statement of what the interpretation entails. Features common across versions of the Copenhagen interpretation include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and some form of complementarity principle. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object, except according to the results of its measurement. In the Copenhagen interpretation, complementarity means a particular experiment can demonstrate particle behavior (passing through a definite slit) or wave behavior (interference), but not both at the same time. In a Copenhagen-type view, the question of which slit a particle travels through has no meaning when there is no detector.
Relational interpretation
According to the relational interpretation of quantum mechanics, first proposed by Carlo Rovelli, observations such as those in the double-slit experiment result specifically from the interaction between the observer (measuring device) and the object being observed (physically interacted with), not any absolute property possessed by the object. In the case of an electron, if it is initially "observed" at a particular slit, then the observer–particle (photon–electron) interaction includes information about the electron's position. This partially constrains the particle's eventual location at the screen. If it is "observed" (measured with a photon) not at a particular slit but rather at the screen, then there is no "which path" information as part of the interaction, so the electron's "observed" position on the screen is determined strictly by its probability function. This makes the resulting pattern on the screen the same as if each individual electron had passed through both slits.
Many-worlds interpretation
As with Copenhagen, there are multiple variants of the many-worlds interpretation. The unifying theme is that physical reality is identified with a wavefunction, and this wavefunction always evolves unitarily, i.e., following the Schrödinger equation with no collapses. Consequently, there are many parallel universes, which only interact with each other through interference. David Deutsch argues that the way to understand the double-slit experiment is that in each universe the particle travels through a specific slit, but its motion is affected by interference with particles in other universes, and this interference creates the observable fringes. David Wallace, another advocate of the many-worlds interpretation, writes that in the familiar setup of the double-slit experiment the two paths are not sufficiently separated for a description in terms of parallel universes to make sense.
De Broglie–Bohm theory
An alternative to the standard understanding of quantum mechanics, the De Broglie–Bohm theory states that particles also have precise locations at all times, and that their velocities are defined by the wave-function. So while a single particle will travel through one particular slit in the double-slit experiment, the so-called "pilot wave" that influences it will travel through both. The two slit de Broglie-Bohm trajectories were first calculated by Chris Dewdney while working with Chris Philippidis and Basil Hiley at Birkbeck College (London). The de Broglie-Bohm theory produces the same statistical results as standard quantum mechanics, but dispenses with many of its conceptual difficulties by adding complexity through an ad hoc quantum potential to guide the particles.
While the model is in many ways similar to the Schrödinger equation, it is known to fail for relativistic cases and does not account for features such as particle creation or annihilation in quantum field theory. Many authors, such as Nobel laureates Werner Heisenberg, Sir Anthony James Leggett and Sir Roger Penrose, have criticized it for not adding anything new.
More complex variants of this type of approach have appeared, for instance the three wave hypothesis of Ryszard Horodecki as well as other complicated combinations of de Broglie and Compton waves. To date there is no evidence that these are useful.
| Physical sciences | Quantum mechanics | Physics |
8688 | https://en.wikipedia.org/wiki/Deccan%20Traps | Deccan Traps | The Deccan Traps are a large igneous province of west-central India (17–24°N, 73–74°E). They are one of the largest volcanic features on Earth, taking the form of a large shield volcano. They consist of many layers of solidified flood basalt that together are more than about thick, cover an area of about , and have a volume of about . Originally, the Deccan Traps may have covered about , with a correspondingly larger original volume. This volume overlies the Archean age Indian Shield, which is likely the lithology the province passed through during eruption. The province is commonly divided into four subprovinces: the main Deccan, the Malwa Plateau, the Mandla Lobe, and the Saurashtran Plateau.
The eruptions occurred over a period of 600,000–800,000 years, between around 66.3 and 65.6 million years ago, spanning the Cretaceous–Paleogene boundary. While some authors have suggested that the eruptions were the primary cause of the Cretaceous–Paleogene mass extinction event, which dates to around 66.05 million years ago, this has been strongly disputed, with many authors suggesting that the Chicxulub impact was the primary cause of the extinction. While some scholars suggest that the eruptions may have been a contributing factor in the extinctions, others suggest that the role of the Deccan Traps in the extinction was negligible, or even that the eruptions partially negated the effects of the impact.
The Deccan Traps are thought to have been produced in major part by the still active Réunion hotspot, responsible for the creation of the modern Mascarene Islands in the Indian Ocean.
Etymology
The term trap has been used in geology since 1785–1795 for such rock formations. It is derived from the Swedish word for stairs (trappa) and refers to the step-like hills forming the landscape of the region. The name Deccan has Sanskrit origins, meaning "southern".
History
The Deccan Traps began forming 66.25 million years ago, at the end of the Cretaceous period, although it is possible that some of the oldest material may underlie younger material. The bulk of the volcanic eruption occurred at the Western Ghats between 66 and 65 million years ago when lava began to extrude in fissure eruptions. Determining the exact age for Deccan rock is difficult due to a number of limitations, one being that the transition between eruption events may have lasted only a few thousand years and the resolution of dating methods is not sufficient to pinpoint these events. In this way, determining the rate of magma emplacement is also difficult to constrain. This series of eruptions may have lasted for less than 30,000 years.
The original area covered by the lava flows is estimated to have been as large as , approximately half the size of modern India. The Deccan Traps region was reduced to its current size by erosion and plate tectonics; the present area of directly observable lava flows is around .
The Deccan Traps are segmented into three stratigraphic units: the Upper, Middle, and Lower traps. While it was previously interpreted that these groups represented their own key points in the sequence of events in Deccan extrusion, it is now more widely accepted that these horizons relate more closely to paleotopography and distance from the eruption site.
Effect on mass extinctions and climate
The release of volcanic gases, particularly sulfur dioxide, during the formation of the traps may have contributed to climate change. An average drop in temperature of about was recorded during this period.
Because of its magnitude, some scientists (notably Gerta Keller) have speculated that the gases released during the formation of the Deccan Traps played a major role in the Cretaceous–Paleogene (K–Pg) extinction event (also known as the Cretaceous–Tertiary or K–T extinction). It has been theorized that sudden cooling due to sulfurous volcanic gases released by the formation of the traps and toxic gas emissions may have contributed significantly to the K–Pg mass extinction. However, the current consensus among the scientific community is that the extinction was primarily triggered by the Chicxulub impact event in North America, which would have produced a sunlight-blocking dust cloud that killed much of the plant life and reduced global temperature (this cooling is called an impact winter).
A 2014 study suggested the extinction may have been caused by both the volcanism and the impact event. This was followed by a similar study in 2015, both of which consider the hypothesis that the impact exacerbated or induced the Deccan volcanism, since the events occurred approximately at antipodes. A 2020 study questioned the idea that the Deccan Traps were a contributory factor at all, suggesting that the Deccan Traps eruptions may have even partially negated the climatic change induced by the impact.
A major criticism of the Deccan Traps as the primary cause of the extinctions is that the extinction event appears to be globally geologically instantaneous and simultaneous in both marine and terrestrial environments, as would be expected from an impact cause, rather than staggered as would be expected from an LIP cause.
A more recent discovery appears to demonstrate the scope of the destruction from the impact alone, however. In a March 2019 article in the Proceedings of the National Academy of Sciences, an international team of twelve scientists revealed the contents of the Tanis fossil site discovered near Bowman, North Dakota, that appeared to show a devastating mass destruction of an ancient lake and its inhabitants at the time of the Chicxulub impact. In the paper, the group reports that the geology of the site is strewn with fossilized trees and remains of fish and other animals. The lead researcher, Robert A. DePalma of the University of Kansas, was quoted in the New York Times as stating that "You would be blind to miss the carcasses sticking out... It is impossible to miss when you see the outcrop". Evidence correlating this find to the Chicxulub impact included tektites bearing "the unique chemical signature of other tektites associated with the Chicxulub event" found in the gills of fish fossils and embedded in amber, an iridium-rich top layer that is considered another signature of the event, and an atypical lack of evidence for scavenging, perhaps suggesting that there were few survivors. The exact mechanism of the site's destruction has been debated as either an impact-caused tsunami or lake and river seiche activity triggered by post-impact earthquakes, though there has yet been no firm conclusion upon which researchers have settled.
A 2024 study of glycerol dialkyl glycerol tetraether levels in fossilized peat found that the Deccan Traps caused long-term warming of around 3 °C over the course of the final 100,000 years of the Maastrichtian, as well as a temperature drop of about 5 °C that lasted less than 10,000 years and occurred around 30,000 years prior to the K–Pg boundary (coinciding with the peak of the Poladpur eruptive phase); by the time of the K–Pg boundary, global temperatures had returned to previous levels. This suggests that the Deccan Traps were not the primary cause of extinction.
Petrology
Within the Deccan Traps, at least 95% of the lavas are tholeiitic basalts. Major mineral constituents are olivine, pyroxenes, and plagioclase, as well as certain Fe-Ti-rich oxides. These magmas are <7% MgO. However, many of these minerals are observed as highly altered forms. Other rock types present include alkali basalt, nephelinite, lamprophyre, and carbonatite.
Mantle xenoliths have been described from Kachchh (northwestern India) and elsewhere in the western Deccan and contain spinel lherzolite and pyroxenite constituents.
While the Deccan traps have been categorized in many different ways including the three different stratigraphic groups, geochemically the province can be split into as many as eleven different formations. Many of the petrologic differences in these units are a product of varying degrees of crustal contamination.
Fossils
The Deccan Traps are famous for the beds of fossils that have been found between layers of lava. Particularly well-known species include the frog Oxyglossus pusillus (Owen) of the Eocene of India and the toothed frog Indobatrachus, an early lineage of modern frogs, which is now placed in the Australian family Myobatrachidae. The Infratrappean Beds (Lameta Formation) and Intertrappean Beds also contain fossil freshwater molluscs.
Theories of formation
It is postulated that the Deccan Traps eruption was associated with a deep mantle plume. High 3He/4He ratios of the main pulse of the eruption are often seen in magmas with mantle plume origin. The area of long-term eruption (the hotspot), known as the Réunion hotspot, is suspected of both causing the Deccan Traps eruption and opening the rift that separated the Mascarene Plateau from India. Regional crustal thinning supports the theory of this rifting event and likely encouraged the rise of the plume in this area. Seafloor spreading at the boundary between the Indian and African Plates subsequently pushed India north over the plume, which now lies under Réunion island in the Indian Ocean, southwest of India. The mantle plume model has, however, been challenged.
Data continues to emerge that supports the plume model. The motion of the Indian tectonic plate and the eruptive history of the Deccan traps show strong correlations. Based on data from marine magnetic profiles, a pulse of unusually rapid plate motion began at the same time as the first pulse of Deccan flood basalts, which is dated at 67 million years ago. The spreading rate rapidly increased and reached a maximum at the same time as the peak basaltic eruptions. The spreading rate then dropped off, with the decrease occurring around 63 million years ago, by which time the main phase of Deccan volcanism ended. This correlation is seen as driven by plume dynamics.
The motions of the Indian and African plates have also been shown to be coupled, the common element being the position of these plates relative to the location of the Réunion plume head. The onset of accelerated motion of India coincides with a large slowing of the rate of counterclockwise rotation of Africa. The close correlations between the plate motions suggest that they were both driven by the force of the Réunion plume.
When comparing the Na8, Fe8, and Si8 contents of the Deccan to other major igneous provinces, the Deccan appears to have undergone the greatest degree of melting suggesting a deep plume origin. Olivine appears to have fractionated at near-Moho depths with additional fractionation of gabbro ~6 km below the surface. Features such as widespread faulting, frequent diking events, high heat flux, and positive gravity anomalies suggest that the extrusive phase of the Deccan Traps is associated with the existence of a triple junction which may have existed during the Late Cretaceous, having been caused by a deep mantle plume. Not all of these diking events are attributed to large-scale contributions to the overall flow volume. It can be difficult, however, to locate the largest dikes as they are often located towards the west coast and are therefore believed to currently reside under water.
Suggested link to impact events
Chicxulub crater
Although the Deccan Traps began erupting well before the impact, in a 2015 study it was proposed based on argon–argon dating that the impact may have caused an increase in permeability that allowed magma to reach the surface and produced the most voluminous flows, accounting for around 70% of the volume. The combination of the asteroid impact and the resulting increase in eruptive volume may have been responsible for the mass extinctions that occurred at the time that separates the Cretaceous and Paleogene periods, known as the K–Pg boundary. However this proposal has been questioned by other authors, who describe the suggestion as being "convenient interpretations based on superficial and cursory observations."
Shiva crater
A geological structure that exists in the sea floor off the west coast of India has been suggested as a possible impact crater, in this context called the Shiva crater. It was also dated approximately 66 million years ago, potentially matching the Deccan traps. The researchers claiming that this feature is an impact crater suggest that the impact may have been the triggering event for the Deccan Traps as well as contributing to the acceleration of the Indian plate in the early Paleogene. However, the current consensus in the Earth science community is that this feature is unlikely to be an actual impact crater.
| Physical sciences | Geologic features | Earth science |
8697 | https://en.wikipedia.org/wiki/DNA%20ligase | DNA ligase | DNA ligase is a type of enzyme that facilitates the joining of DNA strands together by catalyzing the formation of a phosphodiester bond. It plays a role in repairing single-strand breaks in duplex DNA in living organisms, but some forms (such as DNA ligase IV) may specifically repair double-strand breaks (i.e. a break in both complementary strands of DNA). Single-strand breaks are repaired by DNA ligase using the complementary strand of the double helix as a template, with DNA ligase creating the final phosphodiester bond to fully repair the DNA.
DNA ligase is used in both DNA repair and DNA replication (see Mammalian ligases). In addition, DNA ligase has extensive use in molecular biology laboratories for recombinant DNA experiments (see Research applications). Purified DNA ligase is used in gene cloning to join DNA molecules together to form recombinant DNA.
Enzymatic mechanism
The mechanism of DNA ligase is to form covalent phosphodiester bonds between the 3' hydroxyl end of one nucleotide (the "acceptor") and the 5' phosphate end of another (the "donor"). Two ATP molecules are consumed for each phosphodiester bond formed. AMP is required for the ligase reaction, which proceeds in four steps:
Reorganization of the activity site, such as at nicks in DNA segments or Okazaki fragments;
Adenylylation (addition of AMP) of a lysine residue in the active center of the enzyme, pyrophosphate is released;
Transfer of the AMP to the 5' phosphate of the so-called donor, formation of a pyrophosphate bond;
Formation of a phosphodiester bond between the 5' phosphate of the donor and the 3' hydroxyl of the acceptor.
Ligase will also work with blunt ends, although higher enzyme concentrations and different reaction conditions are required.
Types
E. coli
The E. coli DNA ligase is encoded by the lig gene. DNA ligase in E. coli, as well as most prokaryotes, uses energy gained by cleaving nicotinamide adenine dinucleotide (NAD) to create the phosphodiester bond. It does not ligate blunt-ended DNA except under conditions of molecular crowding with polyethylene glycol, and cannot join RNA to DNA efficiently.
The activity of E. coli DNA ligase can be enhanced by DNA polymerase I at the right concentrations. Enhancement only works when the concentration of DNA polymerase I is much lower than that of the DNA fragments to be ligated; at higher Pol I concentrations, the polymerase has an adverse effect on E. coli DNA ligase.
T4
T4 DNA ligase comes from bacteriophage T4 (a bacteriophage that infects Escherichia coli bacteria) and is the ligase most commonly used in laboratory research. It can ligate either cohesive or blunt ends of DNA, oligonucleotides, as well as RNA and RNA-DNA hybrids, but not single-stranded nucleic acids. It can also ligate blunt-ended DNA with much greater efficiency than E. coli DNA ligase. Unlike E. coli DNA ligase, T4 DNA ligase cannot utilize NAD and it has an absolute requirement for ATP as a cofactor. Some engineering has been done to improve the in vitro activity of T4 DNA ligase; one successful approach, for example, tested T4 DNA ligase fused to several alternative DNA binding proteins and found that the constructs with either p50 or NF-kB as fusion partners were over 160% more active in blunt-end ligations for cloning purposes than wild-type T4 DNA ligase. A typical reaction for inserting a fragment into a plasmid vector would use about 0.01 (sticky ends) to 1 (blunt ends) units of ligase. The optimal incubation temperature for T4 DNA ligase is 16 °C.
Bacteriophage T4 ligase mutants have increased sensitivity to both UV irradiation and the alkylating agent methyl methanesulfonate indicating that DNA ligase is employed in the repair of the DNA damages caused by these agents.
Mammalian
In mammals, there are four specific types of ligase.
DNA ligase 1: ligates the nascent DNA of the lagging strand after the Ribonuclease H has removed the RNA primer from the Okazaki fragments.
DNA ligase 3: complexes with DNA repair protein XRCC1 to aid in sealing DNA during nucleotide excision repair and the joining of recombinant fragments. Of all known mammalian DNA ligases, only ligase 3 has been found to be present in mitochondria.
DNA ligase 4: complexes with XRCC4. It catalyzes the final step in the non-homologous end joining DNA double-strand break repair pathway. It is also required for V(D)J recombination, the process that generates diversity in immunoglobulin and T-cell receptor loci during immune system development.
DNA ligase 2: a purification artifact resulting from proteolytic degradation of DNA ligase 3. It was initially recognized as a distinct DNA ligase, which is the reason for the unusual nomenclature of DNA ligases.
DNA ligase from eukaryotes and some microbes uses adenosine triphosphate (ATP) rather than NAD.
Thermostable
Derived from a thermophilic bacterium, Ampligase DNA Ligase is stable and active at much higher temperatures than conventional DNA ligases. Its half-life is 48 hours at 65 °C and greater than 1 hour at 95 °C. It has been shown to be active for at least 500 thermal cycles (94 °C/80 °C) or 16 hours of cycling. This exceptional thermostability permits extremely high hybridization stringency and ligation specificity.
Measurement of activity
There are at least three different units used to measure the activity of DNA ligase:
Weiss unit - the amount of ligase that catalyzes the exchange of 1 nmole of 32P from inorganic pyrophosphate to ATP in 20 minutes at 37°C. This is the one most commonly used.
Modrich-Lehman unit - this is rarely used, and one unit is defined as the amount of enzyme required to convert 100 nmoles of d(A-T)n to an exonuclease-III resistant form in 30 minutes under standard conditions.
Many commercial suppliers of ligases use an arbitrary unit based on the ability of ligase to ligate cohesive ends. These units are often more subjective than quantitative and lack precision.
Research applications
DNA ligases have become indispensable tools in modern molecular biology research for generating recombinant DNA sequences. For example, DNA ligases are used with restriction enzymes to insert DNA fragments, often genes, into plasmids.
Controlling the temperature is a vital aspect of performing efficient recombination experiments involving the ligation of cohesive-ended fragments. Most experiments use T4 DNA ligase (isolated from bacteriophage T4), which is most active at 37 °C. However, for optimal ligation efficiency with cohesive-ended fragments ("sticky ends"), the enzyme's optimal temperature must be balanced against the melting temperature Tm of the sticky ends being ligated: above the Tm, the homologous pairing of the sticky ends is not stable, because the high temperature disrupts hydrogen bonding. A ligation reaction is most efficient when the sticky ends are already stably annealed; disruption of the annealing ends therefore results in low ligation efficiency. The shorter the overhang, the lower the Tm.
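The overhang/Tm trade-off can be made concrete with a rough Python sketch using the Wallace rule (Tm ≈ 2 °C per A/T base plus 4 °C per G/C base), a crude estimate that applies only to short oligonucleotides; the overhang sequences below are hypothetical examples, and real protocols rely on empirical buffer- and length-dependent Tm values.

def wallace_tm(seq: str) -> int:
    # Crude melting-temperature estimate in degrees C for a short sequence.
    seq = seq.upper()
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))
for overhang in ("AATT", "GATC", "GGCC"):   # example 4-nt sticky ends
    print(overhang, wallace_tm(overhang), "C")
# Even the GC-rich 4-base overhang melts far below 37 C, which is why
# sticky-end ligations are often run cooler than the enzyme's optimum.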
Since blunt-ended DNA fragments have no cohesive ends to anneal, the melting temperature is not a factor to consider within the normal temperature range of the ligation reaction. The limiting factor in blunt end ligation is not the activity of the ligase but rather the number of alignments between DNA fragment ends that occur. The most efficient ligation temperature for blunt-ended DNA would therefore be the temperature at which the greatest number of alignments can occur. The majority of blunt-ended ligations are carried out at 14-25 °C overnight. The absence of stably annealed ends also means that the ligation efficiency is lowered, requiring a higher ligase concentration to be used.
A novel use of DNA ligase can be seen in the field of nanochemistry, specifically in DNA origami. DNA-based self-assembly principles have proven useful for organizing nanoscale objects, such as biomolecules, nanomachines, and nanoelectronic and photonic components. Assembly of such nanostructures requires the creation of an intricate mesh of DNA molecules. Although DNA self-assembly is possible without any outside help using different substrates, such as the provision of a cationic aluminium-foil surface, DNA ligase can provide the enzymatic assistance that is required to make DNA lattice structures from DNA overhangs.
History
The first DNA ligase was purified and characterized in 1967 by the Gellert, Lehman, Richardson, and Hurwitz laboratories. It was first purified and characterized by Weiss and Richardson using a six-step chromatographic-fractionation process beginning with elimination of cell debris and addition of streptomycin, followed by several Diethylaminoethyl (DEAE)-cellulose column washes and a final phosphocellulose fractionation. The final extract contained 10% of the activity initially recorded in the E. coli media; along the process it was discovered that ATP and Mg++ were necessary to optimize the reaction. The common commercially available DNA ligases were originally discovered in bacteriophage T4, E. coli and other bacteria.
Disorders
Genetic deficiencies in human DNA ligases have been associated with clinical syndromes marked by immunodeficiency, radiation sensitivity, and developmental abnormalities. LIG4 syndrome (Ligase IV syndrome) is a rare disease associated with mutations in DNA ligase 4 that interfere with dsDNA break-repair mechanisms. Ligase IV syndrome causes immunodeficiency in individuals and is commonly associated with microcephaly and marrow hypoplasia. A list of prevalent diseases caused by a lack of, or malfunction in, DNA ligase is as follows.
Xeroderma pigmentosum
Xeroderma pigmentosum, which is commonly known as XP, is an inherited condition characterized by an extreme sensitivity to ultraviolet (UV) rays from sunlight. This condition mostly affects the eyes and areas of skin exposed to the sun. Some affected individuals also have problems involving the nervous system.
Ataxia-telangiectasia
Mutations in the ATM gene cause ataxia–telangiectasia. The ATM gene provides instructions for making a protein that helps control cell division and is involved in DNA repair. This protein plays an important role in the normal development and activity of several body systems, including the nervous system and immune system. The ATM protein assists cells in recognizing damaged or broken DNA strands and coordinates DNA repair by activating enzymes that fix the broken strands. Efficient repair of damaged DNA strands helps maintain the stability of the cell's genetic information. Affected children typically develop difficulty walking, problems with balance and hand coordination, involuntary jerking movements (chorea), muscle twitches (myoclonus), and disturbances in nerve function (neuropathy). The movement problems typically cause people to require wheelchair assistance by adolescence. People with this disorder also have slurred speech and trouble moving their eyes to look side-to-side (oculomotor apraxia).
Fanconi Anemia
Fanconi anemia (FA) is a rare, inherited blood disorder that leads to bone marrow failure. FA prevents bone marrow from making enough new blood cells for the body to work normally. FA also can cause the bone marrow to make many faulty blood cells. This can lead to serious health problems, such as leukemia.
Bloom syndrome
Bloom syndrome results in skin that is sensitive to sun exposure, and usually the development of a butterfly-shaped patch of reddened skin across the nose and cheeks. A skin rash can also appear on other areas that are typically exposed to the sun, such as the back of the hands and the forearms. Small clusters of enlarged blood vessels (telangiectases) often appear in the rash; telangiectases can also occur in the eyes. Other skin features include patches of skin that are lighter or darker than the surrounding areas (hypopigmentation or hyperpigmentation respectively). These patches appear on areas of the skin that are not exposed to the sun, and their development is not related to the rashes.
As a drug target
In recent studies, human DNA ligase I was used in Computer-aided drug design to identify DNA ligase inhibitors as possible therapeutic agents to treat cancer. Since excessive cell growth is a hallmark of cancer development, targeted chemotherapy that disrupts the functioning of DNA ligase can impede adjuvant cancer forms. Furthermore, it has been shown that DNA ligases can be broadly divided into two categories, namely, ATP- and NAD+-dependent. Previous research has shown that although NAD+-dependent DNA ligases have been discovered in sporadic cellular or viral niches outside the bacterial domain of life, there is no instance in which a NAD+-dependent ligase is present in a eukaryotic organism. The presence solely in non-eukaryotic organisms, unique substrate specificity, and distinctive domain structure of NAD+ dependent compared with ATP-dependent human DNA ligases together make NAD+-dependent ligases ideal targets for the development of new antibacterial drugs.
| Biology and health sciences | Molecular biology | Biology |
8717 | https://en.wikipedia.org/wiki/Diprotodon | Diprotodon | Diprotodon (Ancient Greek: "two protruding front teeth") is an extinct genus of marsupial from the Pleistocene of Australia containing one species, D. optatum. The earliest finds date to 1.77 million to 780,000 years ago but most specimens are dated to after 110,000 years ago. Its remains were first unearthed in 1830 in Wellington Caves, New South Wales, and contemporaneous paleontologists guessed they belonged to rhinos, elephants, hippos or dugongs. Diprotodon was formally described by English naturalist Richard Owen in 1838, and was the first named Australian fossil mammal, and led Owen to become the foremost authority of his time on other marsupials and Australian megafauna, which were enigmatic to European science.
Diprotodon is the largest-known marsupial to have ever lived; it greatly exceeds the size of its closest living relatives wombats and koalas. It is a member of the extinct family Diprotodontidae, which includes other large quadrupedal herbivores. It grew to at the shoulders, over from head to tail, and likely weighed several tonnes, possibly as much as . Females were much smaller than males. Diprotodon supported itself on elephant-like legs to travel long distances, and inhabited most of Australia. The digits were weak; most of the weight was probably borne on the wrists and ankles. The hindpaws angled inward at 130°. Its jaws may have produced a strong bite force of at the long and ever-growing incisor teeth, and over at the last molar. Such powerful jaws would have allowed it to eat vegetation in bulk, crunching and grinding plant materials such as twigs, buds and leaves of woody plants with its bilophodont teeth.
It is the only marsupial and metatherian that is known to have made seasonal migrations. Large herds, usually of females, seem to have marched through a wide range of habitats to find food and water, walking at around . Diprotodon may have formed polygynous societies, possibly using its powerful incisors to fight for mates or fend off predators, such as the largest-known marsupial carnivore Thylacoleo carnifex. Being a marsupial, the mother may have raised her joey in a pouch on her belly, probably one that faced backwards, as in wombats.
Diprotodon went extinct about 40,000 years ago as part of the Late Pleistocene megafauna extinctions, along with every other Australian animal over ; the extinction was possibly caused by extreme drought conditions and predation pressure from the first Aboriginal Australians, who likely co-existed with Diprotodon and other megafauna in Australia for several thousand years prior to its extinction. There is little direct evidence of interactions between Aboriginal Australians and Diprotodon—or most other Australian megafauna. Diprotodon has been conjectured by some authors to have been the origin of some aboriginal mythological figures—most notably the bunyip—and aboriginal rock artworks, but these ideas are unconfirmable.
Research history
In 1830, farmer George Ranken found a diverse fossil assemblage while exploring Wellington Caves, New South Wales, Australia. This was the first major site of extinct Australian megafauna. Remains of Diprotodon were excavated when Ranken later returned as part of a formal expedition that was headed by explorer Major Thomas Mitchell.
At the time these massive fossils were discovered, it was generally thought they were remains of rhinos, elephants, hippos, or dugongs. The fossils were not formally described until Mitchell took them in 1837 to his former colleague English naturalist Richard Owen while in England publishing his journal. In 1838, while studying a piece of a right mandible with an incisor, Owen compared the tooth to those of wombats and hippos; he wrote to Mitchell designating it as a new genus Diprotodon. Mitchell published the correspondence in his journal. Owen formally described Diprotodon in Volume 2 without mentioning a species; in Volume 1, however, he listed the name Diprotodon optatum, making that the type species. Diprotodon means "two protruding front teeth" in Ancient Greek and optatum is Latin for "desire" or "wish". It was the first-ever Australian fossil mammal to be described. In 1844, Owen replaced the name D. optatum with "D. australis". Owen only once used the name optatum and the acceptance of its apparent replacement "australis" has historically varied widely but optatum is now standard.
In 1843, Mitchell was sent more Diprotodon fossils from the recently settled Darling Downs and relayed them to Owen. With these, Owen surmised that Diprotodon was an elephant related to or synonymous with Mastodon or Deinotherium, pointing to the incisors which he interpreted as tusks, the flattening (anteroposterior compression) of the femur similar to the condition in elephants and rhinos, and the raised ridges of the molar characteristic of elephant teeth. Later that year, he formally synonymised Diprotodon with Deinotherium as Dinotherium Australe, which he recanted in 1844 after German naturalist Ludwig Leichhardt pointed out that the incisors clearly belong to a marsupial. Owen still classified the molars from Wellington as Mastodon australis and continued to describe Diprotodon as likely elephantine. In 1847, a nearly complete skull and skeleton were recovered from the Darling Downs, the latter confirming this elephantine characterisation. The massive skeleton attracted a large audience while on public display in Sydney. Leichhardt believed the animal was aquatic, and in 1844 he said it might still be alive in an undiscovered tropical area nearer the interior. But, as the European land exploration of Australia progressed, he became certain it was extinct. Owen later became the foremost authority on Australian palaeontology of his time, mostly working with marsupials.
Huge assemblages of mostly-complete Diprotodon fossils have been unearthed in dry lakes and riverbeds; the largest assemblage came from Lake Callabonna, South Australia. Fossils were first noticed here by an aboriginal stockman working on a sheep property to the east. The owners, the Ragless brothers, notified the South Australian Museum, which hired Australian geologist Henry Hurst, who reported an enormous wealth of fossil material and was paid £250 in 1893 to excavate the site. Hurst found up to 360 Diprotodon individuals over a few acres; excavation was restarted in the 1970s and more were uncovered. American palaeontologist Richard H. Tedford said multiple herds of these animals had at different times become stuck in mud while crossing bodies of water when water levels were low during dry seasons.
In addition to D. optatum, several other species were erected in the 19th century, often from single specimens, on the basis of subtle anatomical variations. Among the variations was size difference: adult Diprotodon specimens have two distinct size ranges. In their 1975 review of Australian fossil mammals, Australian palaeontologists J. A. Mahoney and William David Lindsay Ride did not ascribe this to sexual dimorphism because males and females of modern wombat and koala species—its closest living relatives—are skeletally indistinguishable, so they assumed the same would have been true for extinct relatives, including Diprotodon.
These other species are:
D. annextans was erected in 1861 by Irish palaeontologist Frederick McCoy based on some teeth and a partial mandible found near Colac, Victoria; the name may be a typo of annectens, which means linking or joining, because he characterised the species as combining traits from Diprotodon and Nototherium;
D. minor was erected in 1862 by Thomas Huxley based on a partial palate; in 1991, Australian palaeontologist Peter Murray suggested classifying large specimens as D. optatum and smaller ones as "D. minor";
D. longiceps was erected in 1865 by McCoy as a replacement for "D. annextans";
D. bennettii was erected in 1873 by German naturalist Gerard Krefft based on a nearly complete mandible collected by naturalists George Bennet and Georgina King near Gowrie, New South Wales; and
D. loderi was erected in 1873 by Krefft based on a partial palate collected by Andrew Loder near Murrurundi, New South Wales.
In 2008, Australian palaeontologist Gilbert Price opted to recognise only one species D. optatum based most-notably on a lack of dental differences among these supposed species, and said it was likely Diprotodon was indeed sexually dimorphic, with the male probably being the larger form.
Classification
Phylogeny
Diprotodon is a marsupial in the order Diprotodontia, suborder Vombatiformes (wombats and koalas), and infraorder Vombatomorphia (wombats and allies). It is unclear how different groups of vombatiformes are related to each other because the most-completely known members—living or extinct—are exceptionally derived (highly specialised forms that are quite different from their last common ancestor).
In 1872, American mammalogist Theodore Gill erected the superfamily Diprotodontoidea and family Diprotodontidae to house Diprotodon. New species were later added to both groups; by the 1960s, the first diprotodontoids dating to before the Pliocene were discovered, better clarifying their relationship to each other. Because of this, in 1967, American palaeontologist Ruben A. Stirton subdivided Diprotodontoidea into one family, Diprotodontidae, with four subfamilies: Diprotodontinae (containing Diprotodon among others), Nototheriinae, Zygomaturinae, and Palorchestinae. In 1977, Australian palaeontologist Michael Archer synonymised Nototheriinae with Diprotodontinae and in 1978, Archer and Australian palaeontologist Alan Bartholomai elevated Palorchestinae to family level as Palorchestidae, leaving Diprotodontoidea with families Diprotodontidae and Palorchestidae; and Diprotodontidae with subfamilies Diprotodontinae and Zygomaturinae.
Below is the Diprotodontoidea family tree according to Australian palaeontologists Karen H. Black and Brian Mackness, 1999 (top), and Vombatiformes family tree according to Beck et al. 2020 (bottom):
Evolution
Diprotodontidae is the most diverse family in Vombatomorphia; it was better adapted to the spreading dry, open landscapes over the last tens of millions of years than other groups in the infraorder, living or extinct. Diprotodon has been found in every Australian state, making it the most-widespread Australian megafauna in the fossil record. The oldest vombatomorph (and vombatiform) is Mukupirna, which was identified in 2020 from Oligocene deposits of the South Australian Namba Formation dating to 26–25 million years ago. The group probably evolved much earlier; Mukupirna was already differentiated as a closer relative to wombats than other vombatiformes, and attained a massive size of roughly , whereas the last common ancestor of vombatiformes was probably a small creature.
Diprotodontines and zygomaturines were both apparently quite diverse over the Late Oligocene to Early Miocene, roughly 23 million years ago, though the familial and subfamilial classifications of diprotodontoids from this period are debated. Compared to zygomaturines, diprotodontines were rare during the Miocene, the only identified genus being Pyramios. By the Late Miocene, diprotodontians became the commonest marsupial order in fossil sites, a dominance that endures to the present day; at this point, the most-prolific diprotodontians were diprotodontids and kangaroos. Diprotodontidae also began a gigantism trend, along with several other marsupials, probably in response to the lower-quality plant foods available in a drying climate, requiring them to consume much more. Gigantism appears to have evolved independently six times among the vombatiform lineages. Diprotodontine diversity returned in the Pliocene; Diprotodontidae reached peak diversity with seven genera, coinciding with the spread of open forests. In 1977, Archer said Diprotodon directly evolved from the smaller Euryzygoma, which has been discovered in Pliocene deposits of eastern Australia predating 2.5 million years ago.
In general, there is poor resolution on the ages of Australian fossil sites. While the geochronology of Diprotodon is one of the best for Australian megafauna, it is still incomplete and the majority of remains are undated. Price and Australian palaeontologist Katarzyna Piper reported the earliest, indirectly dated Diprotodon fossils from the Nelson Bay Formation at Nelson Bay, New South Wales, which dates to 1.77 million to 780,000 years ago during the Early Pleistocene. These remains are 8–17% smaller than those of Late Pleistocene Diprotodon but are otherwise indistinguishable. The oldest directly dated Diprotodon fossils come from the Boney Bite site at Floraville, Queensland; they were deposited approximately 340,000 years ago during the Middle Pleistocene based on U-series dating and luminescence dating of quartz and orthoclase. Floraville is the only-identified Middle Pleistocene site in tropical northern Australia. Beyond these, almost all dated Diprotodon material comes from Marine Isotope Stage 5 (MIS5) or younger—after 110,000 years ago during the Late Pleistocene.
Description
Skull
Diprotodon has a long, narrow skull. Like other marsupials, the top of the skull of Diprotodon is flat or depressed over the small braincase and the sinuses of the frontal bone. Like many other giant vombatiformes, the frontal sinuses are extensive; in a specimen from Bacchus Marsh, they take up —roughly 25% of skull volume—whereas the brain occupies —only 4% of the skull volume. Marsupials tend to have smaller brain-to-body mass ratios than placental mammals, becoming more disparate the bigger the animal, which could be a response to a need to conserve energy because the brain is a calorically expensive organ, or is proportional to the maternal metabolic rate, which is much less in marsupials due to the shorter gestation period. The expanded sinuses increase the surface area available for the temporalis muscle to attach, which is important for biting and chewing, to compensate for a deflated braincase as a result of a proportionally smaller brain. They may also have helped dissipate stresses produced by biting more efficiently across the skull.
The occipital bone, the back of the skull, slopes forward at 45 degrees unlike most modern marsupials, where it is vertical. The base of the occipital is significantly thickened. The occipital condyles, a pair of bones that connect the skull with the vertebral column, are semi-circular and the bottom half is narrower than the top. The inner border, which forms the foramen magnum where the spinal cord feeds through, is thin and well-defined. The top margin of the foramen magnum is somewhat flattened rather than arched. The foramen expands backwards towards the inlet, especially vertically, and is more-reminiscent of a short neural canal—the tube running through a vertebral centrum where the spinal cord passes through—than a foramen magnum.
A sagittal crest extends across the midline of the skull from the supraoccipital—the top of the occipital bone—to the region between the eyes on the top of the head. The orbit (eye socket) is small and vertically oval-shaped. The nasal bones slightly curve upwards until near their endpoint, where they begin to curve down, giving the bones a somewhat S-shaped profile. Like many marsupials, most of the nasal septum is made of bone rather than cartilage. The nose would have been quite mobile. The height of the skull from the peak of the occipital bone to the end of the nasals is strikingly almost uniform; the end of the nasals is the tallest point. The zygomatic arch (cheek bone) is strong and deep as in kangaroos but unlike those of koalas and wombats, and extends all the way from the supraoccipital.
Jaws
As in kangaroos and wombats, there is a gap between the jointing of the palate (roof of the mouth) and the maxilla (upper jaw) behind the last molar, which is filled by the medial pterygoid plate. This would have been the insertion for the medial pterygoid muscle that was involved in closing the jaw. Like many grazers, the masseter muscle, which is also responsible for closing the jaw, seems to have been the dominant jaw muscle. A probably large temporalis muscle compared with the lateral pterygoid muscle may indicate that, unlike wombats, Diprotodon had a limited range of side-to-side jaw motion, making it better at crushing than at grinding food. The insertion of the masseter is placed forwards, in front of the orbits, which could have allowed better control over the incisors. The chewing strategy of Diprotodon appears to align more with kangaroos than wombats: a powerful vertical crunch was followed by a transverse grinding motion.
As in other marsupials, the ramus of the mandible, the portion that goes up to connect with the skull, angles inward. The condyloid process, which connects the jaw to the skull, is similar to that of a koala. The ramus is straight and extends almost vertically, thickening as it approaches the body of the mandible where the teeth are. The depth of the body of the mandible increases from the last molar to the first. The strong mandibular symphysis, which fuses the two halves of the mandible, begins at the front-most end of the third molar; this would prevent either half of the mandible from moving independently of the other, unlike in kangaroos which use this ability to better control their incisors.
Teeth
The dental formula of Diprotodon is 3.0.1.4 / 1.0.1.4: each half of the upper jaw has three incisors and each half of the lower jaw one; each half of both jaws has one premolar and four molars but no canines. A long diastema (gap) separates the incisors from the molars.
The incisors are scalpriform (chisel-like). Like those of wombats and rodents, the first incisors in both jaws continuously grew throughout the animal's life but the other two upper incisors did not. This combination is not seen in any living marsupial. The cross-section of the upper incisors is circular. In one old male specimen, the first upper incisor measures of which is within the tooth socket; the second is and is in the socket; and the exposed part of the third is . The first incisor is convex and curves outwards but the other two are concave. The lower incisor has a faint upward curve but is otherwise straight and has an oval cross-section. In the same old male specimen, the lower incisor measures , of which is inside the socket.
The premolars and molars are bilophodont, each having two distinct lophs (ridges). The premolar is triangular and about half the size of the molars. As in kangaroos, the necks of the lophs are coated in cementum. Unlike in kangaroos, there is no connecting ridge between the lophs. The peaks of these lophs have a thick enamel coating that thins towards the base; this could wear away with use and expose the dentine layer, and beneath that osteodentine. Like the first premolar of other marsupials, the first molar of Diprotodon and wombats is the only tooth that is replaced. D. optatum premolars were highly morphologically variable even within the same individual.
Vertebrae
Diprotodon had seven cervical (neck) vertebrae. The atlas, the first cervical (C1), has a pair of deep cavities for insertion of the occipital condyles. The diapophyses of the atlas, upward-angled projections on either side of the vertebra, are relatively short and thick, and resemble those of wombats and koalas. The articular surface of the axis (C2), the part that articulates with another vertebra, is slightly concave on the front side and flat on the back side. As in kangaroos, the axis has a low subtriangular hypophysis projecting vertically from the underside of the vertebra and a proportionally long odontoid—a projection from the axis which fits into the atlas—but the neural spine, which projects vertically from the topside of the vertebra, is placed more forwards. The remaining cervicals lack a hypophysis. As in kangaroos, C3 and C4 have a shorter and more-compressed neural spine, which is supported by a low ridge along its midline in the front and the back. The neural spine of C5 is narrower but thicker, and is supported by stronger-but-shorter ridges. C7 has a forked shape on top of the neural spine.
Diprotodon probably had 13 dorsal vertebrae and 14 pairs of closely spaced ribs. Like many other mammals, the dorsals initially decrease in breadth and then expand before connecting to the lumbar vertebrae. Unusually, the front dorsals match the short proportions of the cervicals, and the articular surface is flat. At the beginning of the series, the neural spine is broad and angled forward, and is also supported by a low ridge along its midline in the front and the back. In later examples, the neural spine is angled backwards and bifurcates (splits into two). Among mammals, bifurcation of the neural spine is only seen in elephants and humans, and only in a few of the cervicals and not in the dorsals. Compared to those of wombats and kangaroos, the neural arch is proportionally taller. As in elephants, the epiphysial plates (growth plates) and the neural arch, to which the neural spine is attached, are anchylosed—very rigid in regard to the vertebral centrum—which served to support the animal's immense weight.
Like most marsupials, Diprotodon likely had six lumbar vertebrae. They retain a proportionally tall neural arch but not the diapophyses, though L1 can retain a small protuberance on one side where a diapophysis would be in a dorsal vertebra; this has been documented in kangaroos and other mammals. The length of each vertebra increases along the series so the lumbar series may have bent downward.
Like other marsupials, Diprotodon had two sacral vertebrae. The base of the neural spines of these two were ossified (fused) together.
Limbs
Girdles
The general proportions of the scapula (shoulder blade) align more closely with more-basal vertebrates such as monotremes, birds, reptiles, and fish rather than marsupials and placental mammals. It is triangular and proportionally narrow but unlike most mammals with a triangular scapula, the arm attaches to the top of the scapula and the subspinous fossa (the fossa, a depression below the spine of the scapula) becomes bigger towards the arm joint rather than decreasing. The glenoid cavity where the arm connects is oval-shaped as in most mammals.
Unlike other marsupials, the ilia, the large wings of the pelvis, are lamelliform (short and broad, with a flat surface instead of an iliac fossa). Lamelliform ilia have only been recorded in elephants, sloths, and apes, though these groups all have a much-longer sacral vertebra series whereas marsupials are restricted to two sacral vertebrae. The ilia provided strong muscle attachments that were probably oriented and used much the same as those in an elephant. The sacroiliac joint where the pelvis connects to the spine is at 35 degrees in reference to the long axis of the ilium. The ischia, which form part of the hip socket, are thick and rounded tailwards but taper and diverge towards the socket, unlike those in kangaroos, where the ischia proceed almost parallel to each other. They were not connected to the vertebra. The hip socket itself is well-rounded and almost hemispherical.
Long bones
Unlike those of most marsupials, the humerus of Diprotodon is almost straight rather than S-shaped, and the trochlea of the humerus at the elbow joint is not perforated. The ridges for muscle attachments are poorly developed, which seems to have been compensated for by the powerful forearms. Similarly, the condyles where the radius and ulna (the forearm bones) connect maintain their rounded shape and are quite-similarly sized, and unusually reminiscent of the condyles between the femur and the tibia and fibula in the leg of a kangaroo.
Like elephants, the femur of Diprotodon is straight and compressed anteroposteriorly (from headside to tailside). The walls of the femur are prodigiously thickened, strongly constricting the medullary cavity where the bone marrow is located. The proximal end (part closest to the hip joint) is notably long, broad, and deep. The femoral head projects up far from the greater trochanter. As in kangaroos, the greater trochanter is split into two lobes. The femoral neck is roughly the same diameter as the femoral head. Also as in kangaroos, the condyle for the fibula is excavated out but the condyle for the tibia is well-rounded and hemispherical. Like those of many other marsupials, the tibia is twisted and the tibial malleolus (on the ankle) is reduced.
Paws
Diprotodon has five digits on either paw. Like other plantigrade walkers, where the paws were flat on the ground, the wrist and ankle would have been largely rigid and inflexible. The digits are proportionally weak so the paws probably had a lot of padding. Similarly, the digits do not seem to have been much engaged in weight bearing.
The forepaw was strong and the shape of the wrist bones is quite similar to those of kangaroos. Like other vombatiformes, the metacarpals, which connect the fingers to the wrist, are broadly similar to those of kangaroos and allies. The enlarged pisiform bone takes up half the jointing surface of the ulna. The fifth digit on the forepaw is the largest.
The digits of the hindpaws turn inwards from the ankle at 130 degrees. The second and third metatarsals (the metatarsals connect the toes to the ankle) are significantly reduced, which may mean these digits were syndactylous (fused) like those of all modern diprotodontians. The first, fourth, and fifth digits are enlarged. The toes are each about the same length, except the fifth which is much stouter.
Size
Diprotodon is the largest-known marsupial to ever have lived. In life, adult Diprotodon could have reached at the shoulders and from head to tail. Accounting for cartilaginous intervertebral discs, Diprotodon may have been 20% longer than reconstructed skeletons, exceeding .
As researchers were formulating predictive body-mass equations for fossil species, efforts were largely constrained to eutherian mammals rather than marsupials. The first person to attempt to estimate the living weight of Diprotodon was Peter Murray in his 1991 review of the megafauna of Pleistocene Australia; Murray made an estimate of using cranial and dental measurements, which he said was probably not a very precise figure. This made Diprotodon the largest herbivore in Australia. In 2001, Canadian biologist Gary Burness and colleagues did a linear regression between the largest herbivores and carnivores—living or extinct—from every continent (for Australia: Diprotodon, Varanus priscus, and Thylacoleo carnifex) against the landmass area of their continent, and another regression between the daily food intake of living creatures against the landmass of their continents. They calculated that the food requirement of Diprotodon was 50–60% smaller than expected for Australia's landmass, which they believed was a result of a generally lower metabolism in marsupials compared to placentals—up to 20% lower—and sparser nutritious vegetation than on other continents. The maximum-attainable body size in Australia is thus capped much lower than on other continents.
In 2003, Australian palaeontologist Stephen Wroe and colleagues took a more-sophisticated approach to body mass than Murray's estimate. They made a regression between the minimum circumference of the femora and humeri of 18 quadrupedal marsupials and 32 placentals against body mass, and then inputted 17 Diprotodon long bones into their predictive model. The results ranged from , for a mean of , though Wroe said reconstructing the weight of extinct creatures that far outweighed living counterparts is problematic. For comparison, an American bison they used in their study weighed and a hippo weighed .
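The kind of allometric regression Wroe and colleagues used can be illustrated with a minimal sketch. The calibration numbers and the femur circumference below are hypothetical placeholders, not values from their paper; the point is only the log-log power-law fit:

    import numpy as np

    # Hypothetical calibration set (NOT Wroe et al.'s measurements):
    # minimum long-bone circumference (mm) versus body mass (kg) for
    # living quadrupeds. A power law mass = a * circumference^b becomes
    # a straight line in log space.
    circumference_mm = np.array([60.0, 90.0, 140.0, 210.0, 320.0, 480.0])
    body_mass_kg = np.array([25.0, 80.0, 300.0, 900.0, 2800.0, 7500.0])

    # Fit log10(mass) = b * log10(circumference) + log10(a).
    b, log_a = np.polyfit(np.log10(circumference_mm), np.log10(body_mass_kg), 1)

    def predict_mass(circ_mm: float) -> float:
        """Predict body mass (kg) from minimum bone circumference (mm)."""
        return 10.0 ** (b * np.log10(circ_mm) + log_a)

    # Applying the fitted model to a hypothetical Diprotodon femur:
    print(f"predicted mass: {predict_mass(400.0):.0f} kg")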
Paleobiology
Diet
Like modern megaherbivores, most evidently the African elephant, Pleistocene Australian megafauna likely had a profound effect on the vegetation, limiting the spread of forest cover and woody plants. Carbon isotope analysis suggests Diprotodon fed on a broad range of foods and, like kangaroos, was consuming both C3—well-watered trees, shrubs, and grasses—and C4 plants—arid grasses, a finding replicated by calcium isotope analysis showing Diprotodon to have been a mixed feeder. Carbon isotope analyses on Diprotodon excavated from the Cuddie Springs site in units SU6 (possibly 45,000 years old) and SU9 (350,000 to 570,000 years old) indicate Diprotodon adopted a somewhat-more-varied seasonal diet as Australia's climate dried but any change was subtle. In contrast, contemporary kangaroos and wombats underwent major dietary shifts or specialisations towards, respectively, C3 and C4 plants. The fossilised, incompletely digested gut contents of one 53,000-year-old individual from Lake Callabonna show its last meal consisted of young leaves, stalks, and twigs.
The molars of Diprotodon are a simple bilophodont shape. Kangaroos use their bilophodont teeth to grind tender, low-fibre plants as a browser as well as grass as a grazer. Kangaroos that predominantly graze have specialised molars to resist the abrasiveness of grass but such adaptations are not exhibited in Diprotodon, which may have had a mixed diet similar to that of a browsing wallaby. It may also have chewed like wallabies, beginning with a vertical crunch before grinding transversely, as opposed to wombats, which only grind transversely. Similarly to many large ungulates (hoofed mammals), the jaws of Diprotodon were better suited for crushing rather than grinding, which would have permitted it to process vegetation in bulk.
In 2016, Australian biologists Alana Sharpe and Thomas Rich estimated the maximum-possible bite force of Diprotodon using finite element analysis. They calculated bite forces at the incisors and across the molar series. For reference, the American alligator can produce forces upwards of . Though these are likely overestimates, the jaws of Diprotodon were exceptionally strong, which would have allowed it to consume a broad range of plants, including tough, fibrous grasses.
Migration and sociality
In 2017, by measuring the strontium isotope ratio (87Sr/86Sr) at various points along the Diprotodon incisor QMF3452 from the Darling Downs, and matching those ratios to the ratios of sites across that region, Price and colleagues determined Diprotodon made seasonal migrations, probably in search of food or watering holes. This individual appears to have been following the Condamine River and, while apparently keeping to the Darling Downs during the three years this tooth had been growing, it would have been annually making a northwest-southeast round trip. This trek parallels the mammalian mass migrations of modern-day East Africa.
Diprotodon is the only identified metatherian that seasonally migrated between two places. A few modern marsupials, such as the red kangaroo, have been documented making migrations when necessary but it is not a seasonal occurrence. Because Diprotodon was capable of seasonal migration, it is likely other Pleistocene Australian megafauna also migrated seasonally.
Diprotodon apparently moved in large herds. Possible fossilised herds, which are most-commonly unearthed in south-eastern Australia, seem to be mostly or entirely female, and sometimes travelled with juveniles. Such sexual segregation is normally seen in polygynous species; it is a common social organisation among modern megaherbivores involving an entirely female herd save for their young and the dominant male, with which the herd exclusively breeds. Similarly, the skull is adapted to handling much-higher stresses than those resulting from biting alone, so Diprotodon may have subjected its teeth or jaws to more-strenuous activities than chewing, such as fighting other Diprotodon for mates or fending off predators, using the incisors. Like modern red and grey kangaroos, which also sexually segregate, bachelor herds of Diprotodon seem to have been less tolerant to drought conditions than female herds due to their larger size and nutritional requirements.
Gait
The locomotion of an extinct animal can be inferred using fossil trackways, which seldom preserve in Australia over the Cenozoic. Only the trackways of humans, kangaroos, vombatids, Diprotodon, and the diprotodontid Euowenia have been identified. Diprotodon trackways have been found at Lake Callabonna and the Victorian Volcanic Plain grasslands. The diprotodontid manus (forepaw) print is semi-circular and the pes (hindpaw) is reniform (kidney-shaped). Owing to proportionally small digits, most of the weight was borne on the carpus and tarsus—the bones connecting to respectively the wrist and the ankle. Diprotodontines seem to have had a much-more-erect gait, an adaptation to long-distance travel that is similar to that of elephants, rather than the more-sprawling posture of wombats and zygomaturines, though there are no fossil trackways of the latter to verify their reconstructed standing posture.
At Lake Callabonna, the single Diprotodon responsible for the impressions had an average stride length of , trackway width of , and track dimensions in length x width. The gleno-acetabular length—the distance between the shoulders and pelvis—could have been about ; assuming a hip height of , the maker of these tracks was probably moving at around .
The single Diprotodon responsible for the impressions at the volcanic plain had an average stride length of , trackway width of , and pes length of . The gleno-acetabular length may have been about and assuming a hip height of , the maker of the tracks was probably moving at around . Its posture was much-more-sprawled than the example from Callabonna, aligning more with what might be expected of Zygomaturus. The animal may have been a female carrying a large joey in her pouch, the added weight on the stomach altering the gait. The first trackway continues for in a south-easterly direction towards a palaeo-lake. The animal seems to have hesitated while stepping down from the first sand bar on its path with the right pes making three overlapping prints here while shuffling around. The trackway vanishes for a stretch and reappears while the animal seemingly is stepping on wet sediment. Another diprotodontid trackway appears away, moving southerly, which may have been left by the same individual.
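The stride and hip-height figures are not preserved in this text, but trackway speed estimates of this kind are conventionally obtained with Alexander's (1976) empirical formula; whether the cited studies used exactly this relation is an assumption, and the inputs below are placeholders:

    import math

    def alexander_speed(stride_m: float, hip_height_m: float, g: float = 9.81) -> float:
        """Estimate speed (m/s) from a fossil trackway using Alexander's
        (1976) empirical relation: v = 0.25 * g**0.5 * stride**1.67 * h**-1.17."""
        return 0.25 * math.sqrt(g) * stride_m ** 1.67 * hip_height_m ** -1.17

    # Placeholder values: a 1.5 m average stride and a 1.0 m hip height.
    print(f"{alexander_speed(1.5, 1.0):.2f} m/s")  # ~1.5 m/s, a slow walk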
Life history
The marsupial metabolic rate is about 30% lower than that of placentals due to a lower body temperature of . Marsupials give birth at an earlier point in foetal development, relying on lactation to facilitate most of the joey's development; because pregnancy is much-more-energetically expensive, investing in lactation rather than longer gestation can be advantageous in a highly seasonal and unpredictable climate to minimise maternal nutritional requirements. Consequently, marsupials cannot support as large a litter size or as short a generation time.
Based on the relationship between female body size and life history in kangaroos, a Diprotodon female would have gestated for six-to-eight weeks, and given birth to a single joey. Given its massive size, Diprotodon may not have sat down to give birth as do smaller marsupials, possibly standing instead. Like koalas and wombats, the pouch may have faced backwards so the joey could crawl down across its mother's abdomen to enter and attach itself to a teat until it could see—perhaps 260 days—and thermoregulate. It would have permanently left the pouch after 860 days and suckled until reaching after four or five years.
In large kangaroos, females usually reach sexual maturity and enter oestrus soon after weaning, and males need double the time to reach sexual maturity. A similar pattern could have been exhibited in Diprotodon. Assuming a lifespan of up to 50 years, a female Diprotodon could have given birth eight times.
Palaeoecology
Diprotodon was present across the entire Australian continent by the Late Pleistocene, especially following MIS5 approximately 110,000 years ago. The onset of the Quaternary glaciation, with the continuous advance and retreat of glaciers at the poles, created extreme climatic variability elsewhere. In Australia, the warmer, wetter interglacial periods favoured forests and woodlands; colder, drier glacial periods were more conducive to grasslands and deserts. The continent progressively became drier as the Asian monsoons became less influential over Australia: the vast interior had become arid and sandy by 500,000 years ago; the mega-lakes that were once prominent, especially during interglacials in north-western Australia, dried up; and the rainforests of eastern Australia gradually retreated. Aridification hastened over the last 100,000 years, especially after 60,000 years ago with surging El Niño–Southern Oscillations.
The continent-wide distribution of Diprotodon indicates herds trekked across almost any habitat, much like modern African elephants south of the Sahara. Diprotodon was a member of a diverse assemblage of megafauna that were endemic to Pleistocene Australia; these also included the thylacine, modern kangaroos, sthenurines (giant short-faced kangaroos), a diversity of modern and giant koala and wombat species, the tapir-like Palorchestes, the giant turtle Meiolania, and the giant bird Genyornis. Diprotodon coexisted with the diprotodontid Zygomaturus trilobus, which appears to have remained in the forests, whereas Diprotodon foraged the expanding grasslands and woodlands. Other contemporaneous diprotodontids (Hulitherium, Z. nimborensia, and Maokopia) were insular forms that were restricted to the forests of New Guinea.
Predation
Due to its massive size, Diprotodon would have been a tough adversary for native carnivores. It contended with the largest-known marsupial predator Thylacoleo carnifex; while Diprotodon remains that were gnawed or bitten by T. carnifex have been identified, it is unclear if the marsupial predator was powerful enough to kill an animal surpassing . The modern jaguar, at half the size of T. carnifex, can kill a bull so it is possible T. carnifex could have killed small Diprotodon. Similar to recent kangaroos with thylacines or quolls, juvenile Diprotodon may have been at high risk of predation by T. carnifex; it and fossils of juvenile Diprotodon have been recovered from the same caves.
The largest predators of Australia were reptiles, most notably the saltwater crocodile, the now-extinct crocodiles Paludirex and Quinkana, and the giant lizard megalania (Varanus priscus). At in length, megalania was the largest carnivore of Pleistocene Australia.
Extinction
As part of the Quaternary extinction event, Diprotodon and every other Australian land animal heavier than became extinct. The timing and the exact cause are unclear because there is poor resolution on the ages of Australian fossil sites. Since their discovery, the extinction of the Australian megafauna has usually been blamed on the changing climate or overhunting by the first Aboriginal Australians. In 2001, Australian palaeontologist Richard Roberts and colleagues dated 28 major fossil sites across the continent, and were able to provide a precise date for megafaunal extinction. They found most disappear from the fossil record by 80,000 years ago, but Diprotodon; the giant wombat Phascolonus; Thylacoleo; and the short-faced kangaroos Procoptodon, Protemnodon, and Simosthenurus were identified at Ned's Gully, Queensland, and Kudjal Yolgah Cave, Western Australia, which they dated to respectively 47,000 and 46,000 years ago. Thus, all of the Australian Pleistocene megafauna died out probably between about 50,000 and 41,000 years ago. There also seems to have been a diverse assemblage of megafauna just before their extinction, and all populations across at least western and eastern Australia died out at about the same time. As of 2021, there is still no solid evidence of megafauna surviving past approximately 40,000 years ago; their latest occurrence, including Diprotodon, is recorded at South Walker Creek mine in the north-east at about 40,100 ± 1,700 years ago.
At the time Roberts et al. published their paper, the earliest evidence of human activity in Australia was 56±4 thousand years old, which is close to their calculated date for the megafauna extinction; they hypothesised human hunting had eradicated the last megafauna within about 10,000 years of coexistence. Human hunting had earlier been blamed for the extinction of North American and New Zealand megafauna. Human activity was then generally regarded as the main driver of Australian megafaunal extinction, especially because the megafauna had survived multiple extreme drought periods during glacial periods. At the time, there did not seem to be any evidence of unusually extreme climate during this period. Due to the slowness of marsupial reproduction, even limited megafaunal hunting may have severely weakened the population.
In 2005, American geologist Gifford Miller noticed fire abruptly becomes more common about 45,000 years ago; he ascribed this increase to aboriginal fire-stick farmers, who would have regularly started controlled burns to clear highly productive forests and grasslands. Miller said this radically altered the vegetational landscape and promulgated the expanse of the modern-day fire-resilient scrub at the expense of the megafauna. Subsequent studies had difficulty firmly linking controlled burns with major ecological collapse. The frequency of fire could have also increased as a consequence of megafaunal extinction because total plant consumption rapidly fell, leading to faster fuel buildup.
In 2017, the human-occupied Madjedbebe rock shelter on the northern Australian coast was dated to about 65,000 years ago, which if correct would mean humans and megafauna had coexisted for over 20,000 years. Other authors have considered this dating questionable. In the 2010s, several ecological studies were published in support of major drought conditions coinciding with the final megafaunal extinctions. Their demise may have been the result of a combination of climatic change, human hunting, and human-driven landscape changes.
Cultural significance
Fossil evidence
Despite the role the first Aboriginal Australians are speculated to have had in the extinction of Diprotodon and other mammalian megafauna in Australia, there is little evidence humans used them at all in the 20,000 years of coexistence. No fossils of mammalian megafauna suggestive of human butchery or cooking have been found.
In 1984, Gail Paton discovered an upper-right Diprotodon incisor (2I) bearing 28 visible cut marks in Spring Creek, south-western Victoria. Ron Vanderwald and Richard Fullager studied the incisor, which had been split in half longitudinally, seemingly while the bone was still fresh, but was glued together before they could inspect it. Each piece measures in length. The marks are aligned in a straight line, and measure in length, in width, and in depth. They determined the marks were inconsistent with bite marks from scavenging Thylacoleo or mice, and concluded they were incised by humans with flint as a counting system or a random doodle. This specimen became one of the most-cited pieces of evidence that humans and megafauna directly interacted until a 2020 re-analysis by Australian palaeoanthropologist Michelle Langley identified the engraver as most likely a tiger quoll.
In 2016, Australian archaeologist Giles Hamm and colleagues unearthed a partial right radius belonging to a young Diprotodon in the Warratyi rock shelter. Because it lacks carnivore damage and the rock shelter is up a sheer face Diprotodon is unlikely to have climbed, they said humans were responsible for taking the Diprotodon to the site.
Mythology
When the first massive fossils in Australia were dug up, it was not clear what animals they might have represented because there were no serious scientists on the continent. Local residents guessed some may have been the remains of rhinos or elephants. European settlers, the most-vocal of whom was Reverend John Dunmore Lang, forwarded these fossils as evidence of the Genesis flood narrative. Aboriginal Australians also attempted to fit the finds into their own religious ideas, quickly associating Diprotodon with the bunyip, a large, carnivorous, lake monster. Many ethnologists and palaeontologists of the time believed the bunyip to be a tribal memory of the lumbering giant creature that probably frequented marshlands, though at the time it was uncertain whether Diprotodon and other megafauna were still extant because the Australian continent had not yet been fully explored by Europeans. Scientific investigation into the bunyip was stigmatised after a purported bunyip skull was sensationalised in 1846, and was put on display at the Australian Museum. The following year, however, Owen recognised it as the skull of a foal, and was surprised the burgeoning Australian scientific community could have erred so egregiously.
In 1892, Canadian geologist Henry Yorke Lyell Brown reported Aboriginal Australians identified Diprotodon fossils from Lake Eyre as those of the Rainbow Serpent, which he thought was a giant, bottom-dwelling fish. This notion became somewhat popularised after English geologist John Walter Gregory, who believed the god was a horned, scaly creature, conjectured it was a chimaera of Diprotodon—which he believed had a horn—and a crocodile. Later workers continued to report some link between the Rainbow Serpent and either Diprotodon or crocodiles.
These kinds of suppositions are not testable and require stories to survive in oral tradition for tens of thousands of years. If Pleistocene megafauna are the basis of some aboriginal mythology, it is unclear if the stories were based on the creatures when they were alive or their fossils being discovered long after their extinction.
Rock art representations
Aboriginal Australians decorated caves with paintings and drawings of several creatures but the identities of the subjects are often unclear. In 1907, Australian anthropologist Herbert Basedow found footprint petroglyphs in Yunta Springs and Wilkindinna, South Australia, which he believed were those of Diprotodon. In 1988, Australian historian Percy Trezise presented what he thought was a Quinkan depiction of Diprotodon to the First Congress of the Australian Rock Art Research Association. Both of these claims have their faults because the depictions bear several features that are inconsistent with what is known about Diprotodon. Unlike the more-naturalistic artwork of Early European modern humans, which are more easily identifiable as various animals, aboriginal artwork is much more stylistic and is mostly uninterpretable by an outsider. The subjects of aboriginal paintings can be mythological beings from the Dreaming rather than a corporeal subject.
| Biology and health sciences | Diprotodontia | Animals |
8724 | https://en.wikipedia.org/wiki/Doppler%20effect | Doppler effect | The Doppler effect (also Doppler shift) is the change in the frequency of a wave in relation to an observer who is moving relative to the source of the wave. The Doppler effect is named after the physicist Christian Doppler, who described the phenomenon in 1842. A common example of Doppler shift is the change of pitch heard when a vehicle sounding a horn approaches and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession.
When the source of the sound wave is moving towards the observer, each successive cycle of the wave is emitted from a position closer to the observer than the previous cycle. Hence, from the observer's perspective, the time between cycles is reduced, meaning the frequency is increased. Conversely, if the source of the sound wave is moving away from the observer, each cycle of the wave is emitted from a position farther from the observer than the previous cycle, so the arrival time between successive cycles is increased, thus reducing the frequency.
For waves that propagate in a medium, such as sound waves, the velocity of the observer and of the source are relative to the medium in which the waves are transmitted. The total Doppler effect in such cases may therefore result from motion of the source, motion of the observer, motion of the medium, or any combination thereof. For waves propagating in vacuum, as is possible for electromagnetic waves or gravitational waves, only the difference in velocity between the observer and the source needs to be considered.
History
Doppler first proposed this effect in 1842 in his treatise "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels" (On the coloured light of the binary stars and some other stars of the heavens). The hypothesis was tested for sound waves by Buys Ballot in 1845. He confirmed that the sound's pitch was higher than the emitted frequency when the sound source approached him, and lower than the emitted frequency when the sound source receded from him. Hippolyte Fizeau discovered independently the same phenomenon on electromagnetic waves in 1848 (in France, the effect is sometimes called "effet Doppler-Fizeau" but that name was not adopted by the rest of the world as Fizeau's discovery was six years after Doppler's proposal). In Britain, John Scott Russell made an experimental study of the Doppler effect (1848).
General
In classical physics, where the speeds of the source and the receiver relative to the medium are lower than the speed of waves in the medium, the relationship between observed frequency f and emitted frequency f_0 is given by:

f = \frac{c \pm v_r}{c \mp v_s} \, f_0

where
c is the propagation speed of waves in the medium;
v_r is the speed of the receiver relative to the medium, added to c if the receiver is moving towards the source, subtracted if the receiver is moving away from the source;
v_s is the speed of the source relative to the medium, subtracted from c if the source is moving towards the receiver, added if the source is moving away from the receiver.
Note this relationship predicts that the frequency will decrease if either source or receiver is moving away from the other.
Equivalently, under the assumption that the source is either directly approaching or receding from the observer:

\frac{f}{v_{wr}} = \frac{f_0}{v_{ws}} = \frac{1}{\lambda}

where
v_{wr} is the wave's speed relative to the receiver;
v_{ws} is the wave's speed relative to the source;
\lambda is the wavelength.
If the source approaches the observer at an angle (but still with a constant speed), the observed frequency that is first heard is higher than the object's emitted frequency. Thereafter, there is a monotonic decrease in the observed frequency as it gets closer to the observer, through equality when it is coming from a direction perpendicular to the relative motion (and was emitted at the point of closest approach; but when the wave is received, the source and observer will no longer be at their closest), and a continued monotonic decrease as it recedes from the observer. When the observer is very close to the path of the object, the transition from high to low frequency is very abrupt. When the observer is far from the path of the object, the transition from high to low frequency is gradual.
If the speeds v_r and v_s are small compared to the speed of the wave, the relationship between observed frequency f and emitted frequency f_0 is approximately

f = \left(1 + \frac{\Delta v}{c}\right) f_0

where
\Delta v is the opposite of the relative speed of the receiver with respect to the source: it is positive when the source and the receiver are moving towards each other.
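To make the relations above concrete, here is a short sketch comparing the exact classical formula with the low-speed approximation; the helper function and the numbers are illustrative, not from the source, and the sign conventions follow the definitions given for v_r and v_s:

    def observed_frequency(f0: float, c: float, v_r: float, v_s: float) -> float:
        """Classical Doppler shift: f = f0 * (c + v_r) / (c - v_s).
        v_r > 0 when the receiver moves towards the source;
        v_s > 0 when the source moves towards the receiver."""
        return f0 * (c + v_r) / (c - v_s)

    f0, c = 440.0, 343.0                            # 440 Hz source; sound in air
    exact = observed_frequency(f0, c, 0.0, 30.0)    # source approaching at 30 m/s
    approx = f0 * (1.0 + 30.0 / c)                  # low-speed approximation
    print(round(exact, 1), round(approx, 1))        # ~482.2 Hz vs ~478.5 Hz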
Consequences
Assuming a stationary observer and a wave source moving towards the observer at (or exceeding) the speed of the wave, the Doppler equation predicts an infinite (or negative) frequency from the observer's perspective. Thus, the Doppler equation is inapplicable for such cases. If the wave is a sound wave and the sound source is moving faster than the speed of sound, the resulting shock wave creates a sonic boom.
Lord Rayleigh predicted the following effect in his classic book on sound: if the observer were moving from the (stationary) source at twice the speed of sound, a musical piece previously emitted by that source would be heard in correct tempo and pitch, but as if played backwards.
Applications
Sirens
A siren on a passing emergency vehicle will start out higher than its stationary pitch, slide down as it passes, and continue lower than its stationary pitch as it recedes from the observer. Astronomer John Dobson explained the effect thus:
In other words, if the siren approached the observer directly, the pitch would remain constant, at a higher than stationary pitch, until the vehicle hit him, and then immediately jump to a new lower pitch. Because the vehicle passes by the observer, the radial speed does not remain constant, but instead varies as a function of the angle between his line of sight and the siren's velocity:

v_{\text{radial}} = v_s \cos\theta

where \theta is the angle between the object's forward velocity and the line of sight from the object to the observer.
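A short sketch (with illustrative numbers) of how the perceived pitch slides as the angle changes, feeding the radial speed above into the classical Doppler formula:

    import math

    def siren_pitch(f0: float, v_s: float, angle_deg: float, c: float = 343.0) -> float:
        """Perceived frequency for a stationary observer: only the radial
        component v_s * cos(theta) of the siren's velocity shifts the pitch."""
        v_radial = v_s * math.cos(math.radians(angle_deg))
        return f0 * c / (c - v_radial)

    # A 700 Hz siren at 25 m/s: the pitch slides down as the vehicle passes.
    for angle in (0, 45, 90, 135, 180):
        print(angle, round(siren_pitch(700.0, 25.0, angle), 1))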
Astronomy
The Doppler effect for electromagnetic waves such as light is of widespread use in astronomy to measure the speed at which stars and galaxies are approaching or receding from us, resulting in so called blueshift or redshift, respectively. This may be used to detect if an apparently single star is, in reality, a close binary, to measure the rotational speed of stars and galaxies, or to detect exoplanets. This effect typically happens on a very small scale; there would not be a noticeable difference in visible light to the unaided eye.
The use of the Doppler effect in astronomy depends on knowledge of precise frequencies of discrete lines in the spectra of stars.
Among the nearby stars, the largest radial velocities with respect to the Sun are +308 km/s (BD-15°4041, also known as LHS 52, 81.7 light-years away) and −260 km/s (Woolley 9722, also known as Wolf 1106 and LHS 64, 78.2 light-years away). Positive radial speed means the star is receding from the Sun, negative that it is approaching.
The relationship between the expansion of the universe and the Doppler effect is not a simple matter of the source moving away from the observer. In cosmology, the redshift of expansion is considered separate from redshifts due to gravity or Doppler motion.
Distant galaxies also exhibit peculiar motion distinct from their cosmological recession speeds. If redshifts are used to determine distances in accordance with Hubble's law, then these peculiar motions give rise to redshift-space distortions.
Radar
The Doppler effect is used in some types of radar, to measure the velocity of detected objects. A radar beam is fired at a moving target – e.g. a motor car, as police use radar to detect speeding motorists – as it approaches or recedes from the radar source. Each successive radar wave has to travel farther to reach the car, before being reflected and re-detected near the source. As each wave has to move farther, the gap between each wave increases, increasing the wavelength. In some situations, the radar beam is fired at the moving car as it approaches, in which case each successive wave travels a lesser distance, decreasing the wavelength. In either situation, calculations from the Doppler effect accurately determine the car's speed. Moreover, the proximity fuze, developed during World War II, relies upon Doppler radar to detonate explosives at the correct time, height, distance, etc.
Because the Doppler shift affects the wave incident upon the target as well as the wave reflected back to the radar, the change in frequency observed by a radar due to a target moving at relative speed \Delta v is twice that from the same target emitting a wave:

\Delta f = \frac{2\,\Delta v}{c} f_0
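For illustration, the two-way relation gives the shift a traffic radar would observe; the carrier frequency and target speed below are illustrative values:

    def radar_shift(f_carrier: float, v_target: float, c: float = 3.0e8) -> float:
        """Two-way radar Doppler shift: the wave is shifted once on the way
        to the target and again on reflection, so delta_f = 2 * v * f / c.
        v_target > 0 for an approaching target."""
        return 2.0 * v_target * f_carrier / c

    # A car approaching a 24 GHz radar at 30 m/s (about 108 km/h):
    print(f"{radar_shift(24e9, 30.0):.0f} Hz")  # ~4800 Hz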
Medical
An echocardiogram can, within certain limits, produce an accurate assessment of the direction of blood flow and the velocity of blood and cardiac tissue at any arbitrary point using the Doppler effect. One of the limitations is that the ultrasound beam should be as parallel to the blood flow as possible. Velocity measurements allow assessment of cardiac valve areas and function, abnormal communications between the left and right side of the heart, leaking of blood through the valves (valvular regurgitation), and calculation of the cardiac output. Contrast-enhanced ultrasound using gas-filled microbubble contrast media can be used to improve velocity or other flow-related medical measurements.
Although "Doppler" has become synonymous with "velocity measurement" in medical imaging, in many cases it is not the frequency shift (Doppler shift) of the received signal that is measured, but the phase shift (when the received signal arrives).
Velocity measurements of blood flow are also used in other fields of medical ultrasonography, such as obstetric ultrasonography and neurology. Velocity measurement of blood flow in arteries and veins based on the Doppler effect is an effective tool for the diagnosis of vascular problems such as stenosis.
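The velocity recovery these instruments perform follows the standard Doppler ultrasound relation \Delta f = 2 f_0 v \cos\theta / c, with c of roughly 1540 m/s in soft tissue; this relation is the conventional one rather than one stated in this text, and the numbers below are illustrative. The \cos\theta term is why the beam should be kept as parallel to the flow as possible:

    import math

    def blood_velocity(delta_f: float, f0: float, angle_deg: float,
                       c: float = 1540.0) -> float:
        """Invert the standard Doppler ultrasound relation
        delta_f = 2 * f0 * v * cos(theta) / c for the flow velocity v (m/s).
        c is the speed of sound in soft tissue; theta is the beam-to-flow angle."""
        return delta_f * c / (2.0 * f0 * math.cos(math.radians(angle_deg)))

    # A 1.3 kHz shift on a 5 MHz probe insonating the vessel at 30 degrees:
    print(f"{blood_velocity(1300.0, 5e6, 30.0):.3f} m/s")  # ~0.231 m/s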
Flow measurement
Instruments such as the laser Doppler velocimeter (LDV), and acoustic Doppler velocimeter (ADV) have been developed to measure velocities in a fluid flow. The LDV emits a light beam and the ADV emits an ultrasonic acoustic burst, and measure the Doppler shift in wavelengths of reflections from particles moving with the flow. The actual flow is computed as a function of the water velocity and phase. This technique allows non-intrusive flow measurements, at high precision and high frequency.
Velocity profile measurement
Developed originally for velocity measurements in medical applications (blood flow), Ultrasonic Doppler Velocimetry (UDV) can measure complete velocity profiles in real time in almost any liquid containing particles in suspension, such as dust, gas bubbles, or emulsions. Flows can be pulsating, oscillating, laminar or turbulent, and stationary or transient. This technique is fully non-invasive.
Satellites
Satellite navigation
The Doppler shift can be exploited for satellite navigation such as in Transit and DORIS.
Satellite communication
The Doppler shift also needs to be compensated for in satellite communication.
Fast-moving satellites can have a Doppler shift of dozens of kilohertz relative to a ground station. The speed, and thus the magnitude of the Doppler effect, changes due to the Earth's curvature. Dynamic Doppler compensation, where the frequency of a signal is changed progressively during transmission, is used so the satellite receives a constant-frequency signal. After realizing that the Doppler shift had not been considered before the launch of the Huygens probe of the 2005 Cassini–Huygens mission, the probe trajectory was altered to approach Titan in such a way that its transmissions traveled perpendicular to its direction of motion relative to Cassini, greatly reducing the Doppler shift.
Doppler shift of the direct path can be estimated by the following formula:

f_{\text{D,dir}} = \frac{v_{\text{mob}}}{\lambda_c} \cos\theta \cos\phi

where v_{\text{mob}} is the speed of the mobile station, \lambda_c is the wavelength of the carrier, \theta is the elevation angle of the satellite and \phi is the driving direction with respect to the satellite.
The additional Doppler shift due to the satellite moving can be described as:

f_{\text{D,sat}} = \frac{v_{\text{rel,sat}}}{\lambda_c}

where v_{\text{rel,sat}} is the relative speed of the satellite.
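A short sketch of both estimates with illustrative numbers: the ~0.15 m wavelength corresponds to an assumed 2 GHz carrier, and 7 km/s is a typical low-Earth-orbit relative speed, consistent with shifts of dozens of kilohertz:

    import math

    C = 299_792_458.0  # speed of light (m/s)

    def direct_path_shift(v_mob: float, wavelength: float,
                          elev_deg: float, heading_deg: float) -> float:
        """f_D = (v_mob / lambda) * cos(elevation) * cos(heading)."""
        return (v_mob / wavelength) * math.cos(math.radians(elev_deg)) \
            * math.cos(math.radians(heading_deg))

    def satellite_shift(v_rel_sat: float, wavelength: float) -> float:
        """Additional shift from the satellite's own motion: f_D = v_rel / lambda."""
        return v_rel_sat / wavelength

    wavelength = C / 2e9  # assumed 2 GHz carrier -> ~0.15 m
    print(f"{satellite_shift(7000.0, wavelength) / 1e3:.0f} kHz")   # ~47 kHz
    print(f"{direct_path_shift(30.0, wavelength, 40.0, 10.0):.0f} Hz")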
Audio
The Leslie speaker, most commonly associated with and predominantly used with the famous Hammond organ, takes advantage of the Doppler effect by using an electric motor to rotate an acoustic horn around a loudspeaker, sending its sound in a circle. At the listener's ear, this results in rapidly fluctuating frequencies of a keyboard note.
Vibration measurement
A laser Doppler vibrometer (LDV) is a non-contact instrument for measuring vibration. The laser beam from the LDV is directed at the surface of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface.
Robotics
Dynamic real-time path planning in robotics, to aid the movement of robots in sophisticated environments with moving obstacles, often makes use of the Doppler effect. Such applications are especially common in competitive robotics, where the environment is constantly changing, such as robosoccer.
Inverse Doppler effect
Since 1968 scientists such as Victor Veselago have speculated about the possibility of an inverse Doppler effect. The size of the Doppler shift depends on the refractive index of the medium a wave is traveling through. Some materials are capable of negative refraction, which should lead to a Doppler shift that works in a direction opposite that of a conventional Doppler shift. The first experiment that detected this effect was conducted by Nigel Seddon and Trevor Bearpark in Bristol, United Kingdom in 2003. Later, the inverse Doppler effect was observed in some inhomogeneous materials, and predicted inside a Vavilov–Cherenkov cone.
| Physical sciences | Waves | null |
8733 | https://en.wikipedia.org/wiki/Digital%20video | Digital video | Digital video is an electronic representation of moving visual images (video) in the form of encoded digital data. This is in contrast to analog video, which represents moving visual images in the form of analog signals. Digital video comprises a series of digital images displayed in rapid succession, usually at 24, 25, 30, or 60 frames per second. Digital video has many advantages such as easy copying, multicasting, sharing and storage.
Digital video was first introduced commercially in 1986 with the Sony D1 format, which recorded an uncompressed standard-definition component video signal in digital form. In addition to uncompressed formats, popular compressed digital video formats today include MPEG-2, H.264 and AV1. Modern interconnect standards used for playback of digital video include HDMI, DisplayPort, Digital Visual Interface (DVI) and serial digital interface (SDI).
Digital video can be copied and reproduced with no degradation in quality. In contrast, when analog sources are copied, they experience generation loss. Digital video can be stored on digital media such as Blu-ray Disc, on computer data storage, or streamed over the Internet to end users who watch content on a personal computer or mobile device screen or a digital smart TV. Today, digital video content such as TV shows and movies also includes a digital audio soundtrack.
History
Cameras
The basis for digital video cameras is metal–oxide–semiconductor (MOS) image sensors. The first practical semiconductor image sensor was the charge-coupled device (CCD), invented in 1969 by Willard S. Boyle and George E. Smith, who shared a Nobel Prize in Physics for this work. Following the commercialization of CCD sensors during the late 1970s to early 1980s, the entertainment industry slowly began transitioning to digital imaging and digital video from analog video over the next two decades. The CCD was followed by the CMOS active-pixel sensor (CMOS sensor), developed in the 1990s.
Major films shot on digital video overtook those shot on film in 2013, and since 2016 over 90% of major films have been shot on digital video, with more recent estimates putting the share at 92%; only 24 major films released in 2018 were shot on 35mm. Today, cameras from companies like Sony, Panasonic, JVC and Canon offer a variety of choices for shooting high-definition video. At the high end of the market, there has been an emergence of cameras aimed specifically at the digital cinema market. These cameras from Sony, Vision Research, Arri, Blackmagic Design, Panavision, Grass Valley and Red offer resolution and dynamic range that exceed those of traditional video cameras, which are designed for the limited needs of broadcast television.
Coding
In the 1970s, pulse-code modulation (PCM) gave rise to digital video coding, which demanded high bit rates of 45–140 Mbit/s for standard-definition (SD) content. By the 1980s, the discrete cosine transform (DCT) had become the standard for digital video compression.
The first digital video coding standard was H.120, created by the CCITT (International Telegraph and Telephone Consultative Committee, now ITU-T) in 1984. H.120 was not practical due to weak performance, being based on differential pulse-code modulation (DPCM), a compression algorithm that was inefficient for video coding. During the late 1980s, a number of companies began experimenting with DCT, a much more efficient form of compression for video coding. The CCITT received 14 proposals for DCT-based video compression formats, in contrast to a single proposal based on vector quantization (VQ) compression. The H.261 standard was developed based on DCT compression, becoming the first practical video coding standard. Since H.261, DCT compression has been adopted by all the major video coding standards that followed.
MPEG-1, developed by the Moving Picture Experts Group (MPEG), followed in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which became the standard video format for DVD and SD digital television. It was followed by MPEG-4 in 1999, and then in 2003 by H.264/MPEG-4 AVC, which has become the most widely used video coding standard.
The current-generation video coding format is HEVC (H.265), introduced in 2013. While AVC uses the integer DCT with 4×4 and 8×8 block sizes, HEVC uses integer DCT and DST transforms with block sizes between 4×4 and 32×32. HEVC is heavily patented, with the majority of patents belonging to Samsung Electronics, GE, NTT and JVC Kenwood. It is currently being challenged by the royalty-free AV1 format. AVC remains by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers.
Production
From the late 1970s to the early 1980s, video production equipment that was digital in its internal workings was introduced. This included time base correctors (TBC) and digital video effects (DVE) units. They operated by taking a standard analog composite video input and digitizing it internally. This made it easier to either correct or enhance the video signal, as in the case of a TBC, or to manipulate and add effects to the video, in the case of a DVE unit. The digitized and processed video information was then converted back to standard analog video for output.
Also during the 1970s, manufacturers of professional video broadcast equipment, such as Bosch (through their Fernseh division) and Ampex, developed prototype digital videotape recorders (VTR) in their research and development labs. Bosch's machine used a modified 1-inch type B videotape transport and recorded an early form of CCIR 601 digital video. Ampex's prototype digital video recorder used a modified 2-inch quadruplex videotape VTR (an Ampex AVR-3) fitted with custom digital video electronics and a special octaplex 8-head headwheel (regular analog 2" quad machines only used 4 heads). Like standard 2" quad, the Ampex prototype digital machine, nicknamed Annie by its developers, still recorded the audio in analog as linear tracks on the tape. None of these prototype machines was ever marketed commercially.
Digital video was first introduced commercially in 1986 with the Sony D1 format, which recorded an uncompressed standard definition component video signal in digital form. Component video connections required 3 cables, but most television facilities were wired for composite NTSC or PAL video using one cable. Due to this incompatibility, as well as the cost of the recorder, D1 was used primarily by large television networks and other component-video capable video studios.
In 1988, Sony and Ampex co-developed and released the D2 digital videocassette format, which, much like D1, recorded video digitally without compression in ITU-601 format. Unlike D1, however, D2 encoded the video in composite form to the NTSC standard, thereby requiring only single-cable composite video connections to and from a D2 VCR, which made it a good fit for the majority of television facilities at the time. D2 was a successful format in the television broadcast industry throughout the late '80s and the '90s. D2 was also widely used in that era as the master tape format for mastering laserdiscs.
D1 and D2 would eventually be replaced by cheaper systems using video compression, most notably Sony's Digital Betacam, which were introduced into network television studios. Other examples of digital video formats utilizing compression were Ampex's DCT (the first to employ such compression when introduced in 1992), the industry-standard DV and MiniDV and its professional variations, Sony's DVCAM and Panasonic's DVCPRO, and Betacam SX, a lower-cost variant of Digital Betacam using MPEG-2 compression.
One of the first digital video products to run on personal computers was PACo: The PICS Animation Compiler from The Company of Science & Art in Providence, RI. It was developed starting in 1990 and first shipped in May 1991. PACo could stream unlimited-length video with synchronized sound from a single file (with the .CAV file extension) on CD-ROM. Creation required a Mac, and playback was possible on Macs, PCs, and Sun SPARCstations.
QuickTime, Apple Computer's multimedia framework, was released in June 1991. Audio Video Interleave from Microsoft followed in 1992. Initial consumer-level content creation tools were crude, requiring an analog video source to be digitized to a computer-readable format. While low-quality at first, consumer digital video increased rapidly in quality, first with the introduction of playback standards such as MPEG-1 and MPEG-2 (adopted for use in television transmission and DVD media), and the introduction of the DV tape format allowing recordings in the format to be transferred directly to digital video files using a FireWire port on an editing computer. This simplified the process, allowing non-linear editing systems (NLE) to be deployed cheaply and widely on desktop computers with no external playback or recording equipment needed.
The widespread adoption of digital video and accompanying compression formats has reduced the bandwidth needed for a high-definition video signal (with HDV and AVCHD, as well as several professional formats such as XDCAM, all using less bandwidth than a standard definition analog signal). These savings have increased the number of channels available on cable television and direct broadcast satellite systems, created opportunities for spectrum reallocation of terrestrial television broadcast frequencies, and made tapeless camcorders based on flash memory possible, among other innovations and efficiencies.
Culture
Culturally, digital video has allowed video and film to become widely available and popular, beneficial to entertainment, education, and research. Digital video is increasingly common in schools, with students and teachers taking an interest in learning how to use it in relevant ways. Digital video also has healthcare applications, allowing doctors to track infant heart rates and oxygen levels.
In addition, the switch from analog to digital video impacted media in various ways, such as in how businesses use cameras for surveillance. Closed circuit television (CCTV) switched to using digital video recorders (DVR), presenting the issue of how to store recordings for evidence collection. Today, digital video is able to be compressed in order to save storage space.
Digital television
Digital television (DTV) is the production and transmission of digital video from networks to consumers. This technique uses digital encoding in place of the analog signals used by earlier television broadcasting. Compared to analog methods, DTV is faster and provides more capabilities and options for data to be transmitted and shared.
Digital television's roots are tied to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital TV became a real possibility, as it was previously infeasible due to the impractically high bandwidth requirements of uncompressed video: around 200 Mbit/s for a standard-definition television (SDTV) signal, and over 1 Gbit/s for high-definition television (HDTV).
Overview
Digital video comprises a series of digital images displayed in rapid succession. In the context of video, these images are called frames. The rate at which frames are displayed is known as the frame rate and is measured in frames per second. Every frame is a digital image and so comprises a formation of pixels. The color of a pixel is represented by a fixed number of bits; for example, 8 bits capture 256 levels per channel, and 10 bits capture 1,024 levels per channel. The more bits, the more subtle variations of color can be reproduced. This is called the color depth, or bit depth, of the video.
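As a small illustrative sketch (not from the source), the relationship between bit depth and levels per channel is simply a power of two:

```python
def levels_per_channel(bit_depth):
    """Number of distinct intensity levels a channel can encode;
    each additional bit doubles the count."""
    return 2 ** bit_depth

for bits in (8, 10, 12):
    print(f"{bits}-bit: {levels_per_channel(bits):,} levels per channel")
# 8-bit: 256, 10-bit: 1,024, 12-bit: 4,096
```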
Interlacing
In interlaced video each frame is composed of two halves of an image. The first half contains only the odd-numbered lines of a full frame, and the second half contains only the even-numbered lines. These halves are referred to individually as fields, and two consecutive fields compose a full frame. If an interlaced video has a frame rate of 30 frames per second, the field rate is 60 fields per second; although both describe interlaced video, frames per second and fields per second are distinct quantities.
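A minimal sketch of reassembling a frame from its two fields (the simple "weave" method); the function name and toy data are invented for illustration, and lines are assumed to be numbered from 1 as in the text above:

```python
def weave_fields(odd_field, even_field):
    """Interleave two half-height fields back into one full frame.
    odd_field holds the odd-numbered lines (1, 3, ...), even_field
    the even-numbered lines (2, 4, ...); rows are lists of pixels."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

# Two 2-line fields of a toy 4-line frame:
full = weave_fields([[1, 1], [3, 3]], [[2, 2], [4, 4]])
print(full)  # [[1, 1], [2, 2], [3, 3], [4, 4]]
```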
Bit rate and BPP
By definition, bit rate is a measurement of the rate of information content of a digital video stream. In the case of uncompressed video, bit rate corresponds directly to the quality of the video, because bit rate is proportional to every property that affects video quality. Bit rate is an important property when transmitting video, because the transmission link must be capable of supporting that bit rate. Bit rate is also important when dealing with the storage of video, because video size is proportional to the bit rate and the duration. Video compression is used to greatly reduce the bit rate while having little effect on quality.
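For uncompressed video the bit rate follows directly from the frame dimensions, color depth and frame rate; a hypothetical worked example (the parameter values are chosen for illustration, not taken from the source):

```python
def uncompressed_bitrate(width, height, bits_per_pixel, fps):
    """Raw bit rate of uncompressed video, in bits per second:
    bits per frame multiplied by frames per second."""
    return width * height * bits_per_pixel * fps

# 1920x1080, 24-bit color, 30 frames per second:
bps = uncompressed_bitrate(1920, 1080, 24, 30)
print(f"{bps / 1e6:.0f} Mbit/s")  # ~1493 Mbit/s
```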
Bits per pixel (BPP) is a measure of the efficiency of compression. A true-color video with no compression at all may have a BPP of 24 bits/pixel. Chroma subsampling can reduce the BPP to 16 or 12 bits/pixel. Applying JPEG compression on every frame can reduce the BPP to 8 or even 1 bit/pixel. Applying video compression algorithms like MPEG-1, MPEG-2 or MPEG-4 allows for fractional BPP values.
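Conversely, the average BPP of a compressed stream can be recovered from its bit rate; a short sketch under the same illustrative assumptions as the example above:

```python
def average_bpp(bit_rate_bps, width, height, fps):
    """Average bits per pixel: bits delivered per second divided
    by pixels displayed per second."""
    return bit_rate_bps / (width * height * fps)

# An 8 Mbit/s 1080p30 stream:
print(f"{average_bpp(8e6, 1920, 1080, 30):.3f} bits/pixel")  # ~0.129
```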
Constant bit rate versus variable bit rate
BPP represents the average bits per pixel. There are compression algorithms that keep the BPP almost constant throughout the entire duration of the video, in which case the output has a constant bitrate (CBR). CBR video is suitable for real-time, non-buffered, fixed-bandwidth video streaming (e.g. in videoconferencing). Since not all frames can be compressed to the same degree, because quality is more severely impacted in scenes of high complexity, some algorithms continuously adjust the BPP, keeping it high while compressing complex scenes and low for less demanding scenes. This provides the best quality at the smallest average bit rate (and the smallest file size, accordingly), and it produces a variable bitrate (VBR) because it tracks the variations of the BPP.
Technical overview
Standard film stocks typically record at 24 frames per second. For video, there are two frame rate standards: NTSC, at 30/1.001 (about 29.97) frames per second (about 59.94 fields per second), and PAL, 25 frames per second (50 fields per second). Digital video cameras come in two different image capture formats: interlaced and progressive scan. Interlaced cameras record the image in alternating sets of lines: the odd-numbered lines are scanned, and then the even-numbered lines are scanned, then the odd-numbered lines are scanned again, and so on.
One set of odd or even lines is referred to as a field, and a consecutive pairing of two fields of opposite parity is called a frame. Progressive scan cameras record all lines in each frame as a single unit. Thus, interlaced video captures the scene motion twice as often as progressive video does for the same frame rate. Progressive scan generally produces a slightly sharper image; however, motion may not appear as smooth as in interlaced video.
Digital video can be copied with no generation loss, the degradation in quality that affects analog systems with each copy. However, a change in parameters like frame size, or a change of the digital format, can decrease the quality of the video due to image scaling and transcoding losses. Digital video can be manipulated and edited on non-linear editing systems.
Digital video has a significantly lower cost than 35 mm film. In comparison to the high cost of film stock, the digital media used for digital video recording, such as flash memory or hard disk drives, are very inexpensive. Digital video also allows footage to be viewed on location without the expensive and time-consuming chemical processing required by film. Network transfer of digital video makes physical deliveries of tapes and film reels unnecessary.
Digital television (including higher quality HDTV) was introduced in most developed countries in the early 2000s. Today, digital video is used in modern mobile phones and video conferencing systems. Digital video is used for Internet distribution of media, including streaming video and peer-to-peer movie distribution.
Many types of video compression exist for serving digital video over the internet and on optical disks. The file sizes of digital video used for professional editing are generally not practical for these purposes, and the video requires further compression with codecs to be used for recreational purposes.
The highest image resolution demonstrated for digital video generation is 132.7 megapixels (15360 × 8640 pixels). The highest speed is attained in industrial and scientific high-speed cameras that are capable of filming 1024 × 1024 video at up to 1 million frames per second for brief periods of recording.
Technical properties
Live digital video consumes bandwidth, and recorded digital video consumes data storage. The amount of bandwidth or storage required is determined by the frame size, color depth and frame rate. Each pixel consumes a number of bits determined by the color depth. The data required to represent one frame is determined by multiplying this per-pixel bit count by the number of pixels in the image. The bandwidth is determined by multiplying the storage requirement for a frame by the frame rate. The overall storage requirement for a program can then be determined by multiplying bandwidth by the duration of the program.
These calculations are accurate for uncompressed video, but due to the relatively high bit rate of uncompressed video, video compression is extensively used. In the case of compressed video, each frame requires only a small percentage of the original bits. This reduces the data or bandwidth consumption by a factor of 5 to 12 when using lossless compression; more commonly, lossy compression is used, reducing data consumption by factors of 20 to 200. Note that not every frame is compressed by the same percentage; instead, consider the average compression factor for all the frames taken together.
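Putting the two paragraphs above together, a hypothetical sketch of the storage calculation (the parameter values and the 100× lossy factor are illustrative assumptions, not figures from the source):

```python
def storage_gigabytes(width, height, bpp, fps, seconds, compression=1.0):
    """Approximate storage for a clip: bandwidth (bits/s) times
    duration, reduced by an average compression factor, in GB."""
    bits = width * height * bpp * fps * seconds / compression
    return bits / 8 / 1e9

hour = 3600
print(f"uncompressed: {storage_gigabytes(1920, 1080, 24, 30, hour):.0f} GB")       # ~672 GB
print(f"lossy, 100x:  {storage_gigabytes(1920, 1080, 24, 30, hour, 100):.1f} GB")  # ~6.7 GB
```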
Interfaces and cables
Purpose-built digital video interfaces
Digital component video
Digital Visual Interface (DVI)
DisplayPort
HDBaseT
High-Definition Multimedia Interface (HDMI)
Unified Display Interface
General-purpose interfaces used to carry digital video
FireWire (IEEE 1394)
Universal Serial Bus (USB)
The following interface has been designed for carrying MPEG-Transport compressed video:
DVB-ASI
Compressed video is also carried using UDP-IP over Ethernet. Two approaches exist for this (a minimal packing sketch follows the list):
Using RTP as a wrapper for video packets as with SMPTE 2022
1–7 MPEG Transport Packets are placed directly in the UDP packet
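The second approach can be sketched as follows; this is a hypothetical illustration assuming the standard 188-byte MPEG transport stream packet and a typical 1500-byte Ethernet MTU (the address, port, and dummy payload are invented for the example):

```python
import socket

TS_PACKET = 188        # an MPEG transport stream packet is 188 bytes
TS_PER_DATAGRAM = 7    # 7 * 188 = 1316 bytes, within a 1500-byte MTU

def send_ts_over_udp(ts_packets, addr, sock):
    """Place up to seven raw TS packets directly in each UDP
    datagram (no RTP header), per the second approach above."""
    for i in range(0, len(ts_packets), TS_PER_DATAGRAM):
        sock.sendto(b"".join(ts_packets[i:i + TS_PER_DATAGRAM]), addr)

# Usage sketch: 21 dummy packets go out as three 1316-byte datagrams.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dummy = [b"\x47" + bytes(TS_PACKET - 1)] * 21  # 0x47 is the TS sync byte
send_ts_over_udp(dummy, ("127.0.0.1", 1234), sock)
```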
Other methods of carrying video over IP
Network Device Interface
SMPTE 2110
Storage formats
Encoding
CCIR 601 used for broadcast stations
VC-2 also known as Dirac Pro
MPEG-4 good for online distribution of large videos and video recorded to flash memory
MPEG-2 used for DVDs, Super-VCDs, and many broadcast television formats
MPEG-1 used for video CDs
H.261
H.263
H.264 also known as MPEG-4 Part 10, or as AVC, used for Blu-ray Discs and some broadcast television formats
H.265 also known as MPEG-H Part 2, or as HEVC
MOV used for QuickTime framework
Theora used for video on Wikipedia
Tapes
Betacam SX, MPEG IMX, Digital Betacam, or DigiBeta — professional video formats by Sony, based on original Betamax technology
D-VHS — MPEG-2 format data recorded on a tape similar to S-VHS
D1, D2, D3, D5, D7, D9 (also known as Digital-S) — various SMPTE professional digital video standards
Digital8 — DV-format data recorded on Hi8-compatible cassettes; largely a consumer format
DV, MiniDV — used in most consumer digital videocassette camcorders; designed for high quality and easy editing; can also record high-definition data (HDV) in MPEG-2 format
DVCAM, DVCPRO — used in professional broadcast operations; similar to DV but generally considered more robust; though DV-compatible, these formats have better audio handling.
DVCPRO50 and DVCPRO HD support higher bandwidths as compared to Panasonic's DVCPRO.
HDCAM was introduced by Sony as a high-definition alternative to DigiBeta.
MicroMV — MPEG-2-format data recorded on a very small, matchbook-sized cassette; obsolete
ProHD — name used by JVC for its MPEG-2-based professional camcorders
Discs
Blu-ray Disc
DVD
VCD
| Technology | Media and communication: Basics | null |
8779 | https://en.wikipedia.org/wiki/Destroyer | Destroyer | In naval terminology, a destroyer is a fast, maneuverable, long-endurance warship intended to escort
larger vessels in a fleet, convoy, or carrier battle group and defend them against a wide range of general threats. They were originally conceived in 1885 by Fernando Villaamil for the Spanish Navy as a defense against torpedo boats, and by the time of the Russo-Japanese War in 1904, these "torpedo boat destroyers" (TBDs) were "large, swift, and powerfully armed torpedo boats designed to destroy other torpedo boats". Although the term "destroyer" had been used interchangeably with "TBD" and "torpedo boat destroyer" by navies since 1892, the term "torpedo boat destroyer" had been generally shortened to simply "destroyer" by nearly all navies by the First World War.
Before World War II, destroyers were light vessels with little endurance for unattended ocean operations; typically, a number of destroyers and a single destroyer tender operated together. After the war, destroyers grew in size: American destroyers of the World War II era displaced around 2,200 tons, while the largest modern US destroyers displace up to 9,600 tons, a difference of nearly 340%. Moreover, the advent of guided missiles allowed destroyers to take on the surface-combatant roles previously filled by battleships and cruisers. This resulted in larger and more powerful guided missile destroyers more capable of independent operation.
At the start of the 21st century, destroyers are the global standard for surface-combatant ships, with only two nations (the United States and Russia) officially operating the heavier cruisers, and no battleships or true battlecruisers remaining. Modern guided-missile destroyers are equivalent in tonnage but vastly superior in firepower to cruisers of the World War II era, and are capable of carrying nuclear-tipped cruise missiles. With a displacement of 9,200 tons and an armament of more than 90 missiles, guided-missile destroyers such as the Arleigh Burke class are actually larger and more heavily armed than most previous ships classified as guided-missile cruisers. The Chinese Type 055 destroyer has been described as a cruiser in some US Navy reports due to its size and armament.
Many NATO navies, such as the French, Spanish, Dutch, Danish, and German, use the term "frigate" for their destroyers, which leads to some confusion.
Origins
The emergence and development of the destroyer was related to the invention of the self-propelled torpedo in the 1860s. A navy now had the potential to destroy a superior enemy battle fleet using steam launches to fire torpedoes. Cheap, fast boats armed with torpedoes called torpedo boats were built and became a threat to large capital ships near enemy coasts. The first seagoing vessel designed to launch the self-propelled Whitehead torpedo was a 33-ton vessel of 1876. She was armed with two drop collars to launch these weapons; these were replaced in 1879 by a single torpedo tube in the bow. By the 1880s, the type had evolved into small ships of 50–100 tons, fast enough to evade enemy picket boats.
At first, the threat of a torpedo-boat attack to a battle fleet was considered to exist only when at anchor, but as faster and longer-range torpedo boats and torpedoes were developed, the threat extended to cruising at sea. In response to this new threat, more heavily gunned picket boats called "catchers" were built, which were used to escort the battle fleet at sea. They needed significant seaworthiness and endurance to operate with the battle fleet, and as they inherently became larger, they became officially designated "torpedo-boat destroyers", and by the First World War were largely known as "destroyers" in English. The antitorpedo boat origin of this type of ship is retained in its name in other languages, including French, Italian, Portuguese, Czech, Greek, Dutch and, up until the Second World War, Polish (where the term is now obsolete).
Once destroyers became more than just catchers guarding an anchorage, they were recognized to be also ideal to take over the offensive role of torpedo boats themselves, so they were also fitted with torpedo tubes in addition to their antitorpedo-boat guns. At that time, and even into World War I, the only function of destroyers was to protect their own battle fleet from enemy torpedo attacks and to make such attacks on the battleships of the enemy. The task of escorting merchant convoys was still in the future.
Early designs
An important development came with the construction of HMS Swift in 1884, later redesignated TB 81. This was a large (137 ton) torpedo boat with four 47 mm quick-firing guns and three torpedo tubes. While still not fast enough to engage enemy torpedo boats reliably, the ship at least had the armament to deal with them.
Another forerunner of the torpedo-boat destroyer (TBD) was the Japanese torpedo boat Kotaka (Falcon), built in 1885. Designed to Japanese specifications and ordered from the Yarrow shipyard on the Isle of Dogs, London, in 1885, she was transported in parts to Japan, where she was assembled and launched in 1887. The vessel was armed with four 1-pounder (37 mm) quick-firing guns and six torpedo tubes and, at 203 tons, was the largest torpedo boat built to date. In her trials in 1889, Kotaka demonstrated that she could exceed the role of coastal defense, and was capable of accompanying larger warships on the high seas. The Yarrow shipyards, builder of the parts for Kotaka, "considered Japan to have effectively invented the destroyer".
The German aviso launched in 1886 was designed as a "Torpedojäger" (torpedo hunter), intended to screen the fleet against attacks by torpedo boats. The ship was significantly larger than the torpedo boats of the period, and carried an armament of guns and Hotchkiss revolver cannon.
Torpedo gunboat
The first vessel designed for the explicit purpose of hunting and destroying torpedo boats was the torpedo gunboat. Essentially very small cruisers, torpedo gunboats were equipped with torpedo tubes and an adequate gun armament, intended for hunting down smaller enemy boats. By the end of the 1890s, torpedo gunboats were made obsolete by their more successful contemporaries, the TBDs, which were much faster.
The first example of this was Rattlesnake, designed by Nathaniel Barnaby in 1885, and commissioned in response to the Russian war scare. The gunboat was armed with torpedoes and designed for hunting and destroying smaller torpedo boats. She displaced 550 tons and, though built of steel, was unarmoured with the exception of a thin protective deck. She was armed with a single 4-inch/25-pounder breech-loading gun, six 3-pounder QF guns and four torpedo tubes, arranged with two fixed tubes at the bow and a set of torpedo-dropping carriages on either side. Four torpedo reloads were carried.
A number of torpedo gunboat classes followed, including the Grasshopper class, all built for the Royal Navy during the 1880s and the 1890s. In the 1880s, the Chilean Navy ordered the construction of two torpedo gunboats from the British shipyard Laird Brothers, which specialized in the construction of this type of vessel. Notably, one of these Almirante Lynch-class torpedo gunboats managed to sink an ironclad with self-propelled torpedoes in the Battle of Caldera Bay in 1891, thus surpassing its main function of hunting torpedo boats.
Fernando Villaamil, second officer of the Ministry of the Navy of Spain, designed his own torpedo gunboat to combat the threat from the torpedo boat. He asked several British shipyards to submit proposals capable of fulfilling these specifications. In 1885, the Spanish Navy chose the design submitted by the shipyard of James and George Thomson of Clydebank. Destructor ("destroyer" in Spanish) was laid down at the end of the year, launched in 1886, and commissioned in 1887. Some authors consider her the first destroyer ever built.
She displaced 348 tons and was the first warship equipped with twin triple-expansion engines, which made her one of the fastest ships in the world in 1888. She was armed with one Spanish-designed Hontoria breech-loading gun, four 6-pounder Nordenfelt guns, two 3-pounder Hotchkiss cannons and two Schwartzkopff torpedo tubes. The ship carried three torpedoes per tube and a crew of 60.
In terms of her gunnery, speed, and dimensions, her specialised design for chasing torpedo boats, and her high-seas capabilities, Destructor was an important precursor to the TBD.
Development of modern destroyers
The first classes of ships to bear the formal designation TBD were two two-ship classes built for the Royal Navy.
Early torpedo gunboat designs lacked the range and speed to keep up with the fleet they were supposed to protect. In 1892, the Third Sea Lord, Rear Admiral John "Jacky" Fisher, ordered the development of a new type of ship equipped with the then-novel water-tube boilers and quick-firing small-calibre guns. Six ships to the specifications circulated by the Admiralty were ordered initially, comprising three different designs, each produced by a different shipbuilder: two from John I. Thornycroft & Company, two from Yarrows, and two from Laird, Son & Company.
These ships all featured a turtleback (i.e. rounded) forecastle that was characteristic of early British TBDs. The two Thornycroft boats displaced 260 tons (287.8 tons full load) and were 185 feet in length. They were armed with one 12-pounder gun and three 6-pounder guns, with one fixed 18-in torpedo tube in the bow plus two more torpedo tubes on a revolving mount abaft the two funnels. Later, the bow torpedo tube was removed and two more 6-pounder guns added instead. They produced 4,200 hp from a pair of Thornycroft water-tube boilers, giving them a top speed of 27 knots, with the range and speed to travel effectively with a battle fleet. In common with subsequent early Thornycroft boats, they had sloping sterns and double rudders.
The French navy, an extensive user of torpedo boats, built its first TBD in 1899, with the torpilleur d'escadre. The United States commissioned its first TBD, Destroyer No. 1, in 1902, and by 1906, 16 destroyers were in service with the US Navy.
Subsequent improvements
Torpedo boat destroyer designs continued to evolve around the turn of the 20th century in several key ways. The first was the introduction of the steam turbine. The spectacular unauthorized demonstration of a turbine-powered vessel, significantly of torpedo-boat size, at the 1897 Spithead Navy Review prompted the Royal Navy to order a prototype turbine-powered destroyer in 1899. This was the first turbine warship of any kind, and achieved a remarkable speed on sea trials. By 1910, the turbine had been widely adopted by all navies for their faster ships.
The second development was the replacement of the torpedo-boat-style turtleback foredeck by a raised forecastle for a new class built in 1903, which provided better sea-keeping and more space below deck.
The first warship to use fuel oil propulsion alone was a Royal Navy TBD, after experiments in 1904, although the obsolescence of coal as a fuel in British warships was delayed by oil's availability. Other navies also adopted oil, for instance the USN with a class of 1909.
In spite of all this variety, destroyers adopted a largely similar pattern. The hull was long and narrow, with a relatively shallow draft. The bow was either raised in a forecastle or covered under a turtleback; underneath this were the crew spaces, extending part of the way along the hull. Aft of the crew spaces was as much engine space as the technology of the time would allow: several boilers and engines or turbines. Above deck, one or more quick-firing guns were mounted in the bows, in front of the bridge; several more were mounted amidships and astern. Two tube mountings (later on, multiple mountings) were generally found amidships.
Between 1892 and 1914, destroyers became markedly larger: the Royal Navy's first TBDs displaced 275 tons, while by the First World War destroyers displacing 1,000 tons were not unusual. Construction remained focused on putting the biggest possible engines into a small hull, though, resulting in somewhat flimsy construction. Often, hulls were built of thin high-tensile steel.
By 1910, the steam-driven displacement (that is, not hydroplaning) torpedo boat had become redundant as a separate type. Germany, nevertheless, continued to build such boats until the end of World War I, although these were effectively small coastal destroyers. In fact, Germany never distinguished between the two types, giving them pennant numbers in the same series and never giving names to destroyers. Ultimately, the term "torpedo boat" came to be attached to a quite different vessel – the very fast-hydroplaning, motor-driven motor torpedo boat.
Early use and World War I
Navies originally built TBDs to protect against torpedo boats, but admirals soon appreciated the flexibility of the fast, multipurpose vessels that resulted. Vice-Admiral Sir Baldwin Walker laid down destroyer duties for the Royal Navy:
Screening the advance of a fleet when hostile torpedo craft are about
Searching a hostile coast along which a fleet might pass
Watching an enemy's port for the purpose of harassing his torpedo craft and preventing their return
Attacking an enemy fleet
Early destroyers were extremely cramped places to live, being "without a doubt magnificent fighting vessels... but unable to stand bad weather". During the Russo-Japanese War in 1904, the commander of the Imperial Japanese Navy TBD Akatsuki described "being in command of a destroyer for a long period, especially in wartime... is not very good for the health". Stating that he had originally been strong and healthy, he continued, "life on a destroyer in winter, with bad food, no comforts, would sap the powers of the strongest men in the long run. A destroyer is always more uncomfortable than the others, and rain, snow, and sea-water combine to make them damp; in fact, in bad weather, there is not a dry spot where one can rest for a moment."
The Japanese destroyer-commander finished with, "Yesterday, I looked at myself in a mirror for a long time; I was disagreeably surprised to see my face thin, full of wrinkles, and as old as though I were 50. My clothes (uniform) cover nothing but a skeleton, and my bones are full of rheumatism."
In 1898, the US Navy officially classified an all-steel vessel displacing 165 tons as a torpedo boat, but her commander, LT. John C. Fremont, described her as "...a compact mass of machinery not meant to keep the sea nor to live in... as five-sevenths of the ship are taken up by machinery and fuel, whilst the remaining two-sevenths, fore and aft, are the crew's quarters; officers forward and the men placed aft. And even in those spaces are placed anchor engines, steering engines, steam pipes, etc. rendering them unbearably hot in tropical regions."
Early combat
The TBD's first major use in combat came during the Japanese surprise attack on the Russian fleet anchored in Port Arthur at the opening of the Russo-Japanese War on 8 February 1904.
Three destroyer divisions attacked the Russian fleet in port, firing a total of 18 torpedoes, yet thanks to the proper deployment of torpedo nets only two Russian battleships and a protected cruiser were seriously damaged. Tsesarevich, the Russian flagship, had her nets deployed, with at least four enemy torpedoes "hung up" in them, and other warships were similarly saved from further damage by their nets.
While capital-ship engagements were scarce in World War I, destroyer units engaged almost continually in raiding and patrol actions. The first shot of the war at sea was fired on 5 August 1914 by a destroyer of the 3rd Destroyer Flotilla, in an engagement with a German auxiliary minelayer.
Destroyers were involved in the skirmishes that prompted the Battle of Heligoland Bight, and filled a range of roles in the Battle of Gallipoli, acting as troop transports and as fire-support vessels, as well as their fleet-screening role. Over 80 British destroyers and 60 German torpedo boats took part in the Battle of Jutland, which involved pitched small-boat actions between the main fleets, and several foolhardy attacks by unsupported destroyers on capital ships. Jutland also concluded with a messy night action between the German High Seas Fleet and part of the British destroyer screen.
The threat evolved by World War I with the development of the submarine, or U-boat. The submarine had the potential to hide from gunfire and close underwater to fire torpedoes. Early-war destroyers had the speed and armament to intercept submarines before they submerged, either by gunfire or by ramming. Destroyers also had a shallow enough draft that they were difficult to hit with torpedoes.
The desire to attack submarines under water led to rapid destroyer evolution during the war. Destroyers were quickly equipped with strengthened bows for ramming, and depth charges and hydrophones for identifying submarine targets. The first submarine casualty credited to a destroyer was the German U-19, rammed on 29 October 1914. While U-19 was only damaged, a destroyer succeeded in ramming and sinking a U-boat the next month. The first depth-charge sinking came on 4 December 1916, when a German submarine was sunk by HMS Llewellyn.
The submarine threat meant that many destroyers spent their time on antisubmarine patrol. Once Germany adopted unrestricted submarine warfare in January 1917, destroyers were called on to escort merchant convoys. US Navy destroyers were among the first American units to be dispatched upon the American entry to the war, and a squadron of Japanese destroyers even joined Allied patrols in the Mediterranean. Patrol duty was far from safe; of the 67 British destroyers lost in the war, collisions accounted for 18, while 12 were wrecked.
At the end of the war, the state-of-the-art was represented by the British W class.
1918–1945
The trend during World War I had been towards larger destroyers with heavier armaments. A number of opportunities to fire at capital ships had been missed during the war, because destroyers had expended all their torpedoes in an initial salvo. The British V and W classes of the late war had sought to address this by mounting six torpedo tubes in two triple mounts, instead of the four or two on earlier models. The V and W classes set the standard of destroyer building well into the 1920s.
Two Romanian destroyers, though, had the greatest firepower of all destroyers in the world throughout the first half of the 1920s. This was largely because, between their commissioning in 1920 and 1926, they retained the armament that they had while serving in the Italian Navy as scout cruisers (esploratori). When initially ordered by Romania in 1913, the Romanian specifications envisioned three 120 mm guns, a caliber which would eventually be adopted as the standard for future Italian destroyers. Armed with three 152 mm and four 76 mm guns after being completed as scout cruisers, the two warships were officially re-rated as destroyers by the Romanian Navy, and they remained the destroyers with the greatest firepower in the world throughout much of the interwar period. As of 1939, when the Second World War started, their artillery, although changed, was still close to cruiser standards, amounting to nine heavy naval guns (five of 120 mm and four of 76 mm). In addition, they retained their two twin 457 mm torpedo tubes and two machine guns, plus the capacity to carry up to 50 mines.
The next major innovation came with the Japanese "special type" destroyers, designed in 1923 and delivered in 1928. The design was initially noted for its powerful armament of six 5-inch (127 mm) guns and three triple torpedo mounts. The second batch of the class gave the guns high-angle turrets for antiaircraft warfare, together with the oxygen-fueled Type 93 "Long Lance" torpedo. A later class of 1931 further improved the torpedo armament by storing its reload torpedoes close at hand in the superstructure, allowing reloading within 15 minutes.
Most other nations replied with similar larger ships. The US adopted twin 5-inch (127 mm) guns, and two subsequent classes, the latter of 1934, increased the number of torpedo tubes to 12 and 16, respectively.
In the Mediterranean, the Italian Navy's building of very fast light cruisers prompted the French to produce exceptional destroyer designs. The French had long been keen on large destroyers, with a class of 1922 displacing over 2,000 tons and carrying 130 mm guns; a further three similar classes were produced around 1930. A class of 1935 carried five guns and nine torpedo tubes and could achieve speeds that remain the record for a steamship and for any destroyer. The Italians' own destroyers were almost as swift; most Italian designs of the 1930s carried torpedoes and either four or six 120 mm guns.
Germany started to build destroyers again during the 1930s as part of Hitler's rearmament program. The Germans were also fond of large destroyers, but while the initial Type 1934 displaced over 3,000 tons, their armament was equal to smaller vessels. This changed from the Type 1936 onwards, which mounted heavy guns. German destroyers also used innovative high-pressure steam machinery; while this should have helped their efficiency, it more often resulted in mechanical problems.
Once German and Japanese rearmament became clear, the British and American navies consciously focused on building destroyers that were smaller, but more numerous, than those used by other nations. The British built a series of destroyer classes of about 1,400 tons standard displacement, with four guns and eight torpedo tubes; an American class of 1938 was similar in size, but carried five guns and ten torpedo tubes. Realizing the need for heavier gun armament, the British built a class of 1936 (sometimes called Afridi after one of two lead ships) displacing 1,850 tons and armed with eight guns in four twin turrets and four torpedo tubes. These were followed by the J-class and L-class destroyers, with six guns in twin turrets and eight torpedo tubes.
Antisubmarine sensors included sonar (or ASDIC), although training in their use was indifferent. Antisubmarine weapons changed little, and ahead-throwing weapons, a need recognized in World War I, had made no progress.
Later combat
During the 1920s and 1930s, destroyers were often deployed to areas of diplomatic tension or humanitarian disaster. British and American destroyers were common on the Chinese coast and rivers, even supplying landing parties to protect colonial interests. By World War II, the threat had evolved once again. Submarines were more effective, and aircraft had become important weapons of naval warfare; once again the early-war fleet destroyers were ill-equipped for combating these new targets. They were fitted with new light antiaircraft guns, radar, and forward-launched ASW weapons, in addition to their existing dual-purpose guns, depth charges, and torpedoes. Increasing size allowed improved internal arrangement of propulsion machinery with compartmentation, so ships were less likely to be sunk by a single hit. In most cases torpedo and/or dual-purpose gun armament was reduced to accommodate new anti-air and anti-submarine weapons. By this time the destroyers had become large, multi-purpose vessels, expensive targets in their own right. As a result, casualties on destroyers were among the highest. In the US Navy, particularly in World War II, destroyers became known as tin cans due to their light armor compared to battleships and cruisers.
The need for large numbers of antisubmarine ships led to the introduction of smaller and cheaper specialized antisubmarine warships, called corvettes and frigates by the Royal Navy and destroyer escorts by the USN. A similar programme was belatedly started by the Japanese. These ships had the size and displacement of the original TBDs from which the contemporary destroyer had evolved.
Post-World War II
Some conventional destroyers completed in the late 1940s and 1950s were built on wartime experience. These vessels were significantly larger than wartime ships and had fully automatic main guns, unit machinery, radar, sonar, and antisubmarine weapons such as the Squid mortar. Examples include contemporary British, US, and Soviet classes.
Some World War II–vintage ships were modernized for antisubmarine warfare and to extend their service lives, avoiding the need to build expensive brand-new ships. Examples include the US FRAM I programme and the British Type 15 frigates converted from fleet destroyers.
The advent of surface-to-air missiles and surface-to-surface missiles such as the Exocet in the early 1960s changed naval warfare. Guided missile destroyers (DDG in the US Navy) were developed to carry these weapons and protect the fleet from air, submarine, and surface threats; examples were built by the Soviet, British, and US navies.
Twenty-first-century destroyers tend to display features such as large, slab sides without complicated corners and crevices to keep the radar cross-section small, vertical launch systems to carry a large number of missiles at high readiness to fire, and helicopter flight decks and hangars.
Operators
operates three s.
China operates the Type 055 destroyer, two Luyang I-class destroyers, six Luyang II-class destroyers, 24 Type 052D destroyers and two Luzhou-class destroyers. China also operates two Type 052 destroyers, one Type 051B destroyer and four destroyers of older models. The Type 055 is considered to be a cruiser by NATO and the U.S. Department of Defense for its tonnage and capability.
Taiwan operates four destroyers purchased from the United States.
France operates two destroyers and eight FREMM Multipurpose frigates of the Aquitaine-class variant. The French Navy does not use the term "destroyer" but rather "first-rate frigate" for these ship types, but they are marked with the NATO "D" hull code, which places them in the destroyer type, as opposed to "F" for frigate.
Greece retains HS Velos, a destroyer that remains ceremonially in commission due to her historical significance.
operates four s, three s, three , and three destroyers.
Italy operates two Durand de la Penne-class destroyers and two Orizzonte-class destroyers.
Japan operates destroyers of numerous classes, along with four helicopter destroyers that are internationally regarded as helicopter carriers.
operates three , six and three destroyers.
retains a destroyer which remains ceremonially in commission due to her historical significance.
The Russian Navy operates two destroyers of one class and eight of another.
operates a single destroyer purchased from the United States for training use.
The United Kingdom operates six Type 45 or Daring-class destroyers.
The United States operates 73 active guided missile destroyers (DDGs) of a planned class of 92, and also has two active destroyers of a planned class of three.
Ships classed as destroyers equivalent to frigates
operates three destroyers whose hulls are based on the MEKO 360H2 frigate design.
Iran operates five ships that it classifies as destroyers, but which are internationally regarded as light frigates.
Ships of note classed as frigates
Egypt operates the ENS Tahya Misr, one of the Aquitaine-class variants of the FREMM Multipurpose frigates purchased from France, which is classified as a destroyer by France.
Germany operates three ships of one class and four of another. These ships are officially classified as frigates by Germany, but regarded as destroyers internationally due to their size and capability.
Morocco operates the Mohammed VI, one of the Aquitaine-class variants of the FREMM Multipurpose frigates purchased from France, which is classified as a destroyer by France.
The Netherlands operates four ships officially classified as frigates, but regarded internationally as destroyers due to their size and capability.
operates four ships that are subclasses of Spain's Alvaro de Bazan class; although classified as frigates, they are regarded as destroyers due to their size and armament.
operates a ship that was classified as a destroyer from 1990 to 2001, when she was reclassified as a frigate. No official reason was given, and there was no change in armament or capability, so she effectively remains of the destroyer type.
Spain operates five ships officially classified as frigates, but due to their size and capabilities they are regarded internationally as destroyers. They also served as the basis for Australia's Hobart-class destroyers.
Former operators
lost its entire navy upon the Empire's collapse following World War I.
lost its entire navy upon its conquest by the Bolsheviks in 1921.
sold its two and s to Peru in 1933, to prevent their capture by the Soviet Union.
transferred its only back to Japan in 1942.
decommissioned its only in 1963.
decommissioned its last in 1965.
decommissioned its last in 1967.
decommissioned its last Z-class destroyer in 1972.
decommissioned its H-class destroyer in 1972.
transferred its remaining to The Philippines in 1975 following the Fall of Saigon.
decommissioned its last W-class destroyer in 1976.
decommissioned its only destroyer, in 1980.
decommissioned both its and four s in 1982 following defense reviews.
decommissioned both its s and its lone in 1986.
decommissioned its last in 1991.
lone was destroyed by a fire in 1992.
decommissioned its lone in 1994.
decommissioned its lone in 1997.
decommissioned its last in 2000.
decommissioned its lone in 2003.
decommissioned its last in 2004.
decommissioned its last s in 2005.
decommissioned its last in 2006.
decommissioned its last in 2007.
decommissioned its last Garcia-class destroyer escort in 2008.
decommissioned its last in 2011.
decommissioned its last in 2015.
decommissioned its last in 2017.
decommissioned its last in 2018.
decommissioned its last in 2023.
Future development
plans to build 7,000-ton destroyers after the delivery of the new frigates, and TKMS presented to the Navy its most modern 7,200-ton MEKO A-400 air defense destroyer, an updated version of the German F-125-class frigates. The similarities between the projects and the high rate of commonality between requirements were also crucial for the consortium's victory.
Canada is building up to 15 destroyers based on the Royal Navy's Type 26 frigate. They will be more powerful than the Type 26, being fitted with the Aegis Combat System and long-range surface-to-air and surface-to-surface missiles.
China is adding six more Type 052D destroyers and sixteen more Type 055 destroyers to its navy.
France is building five new Amiral Ronarc'h-class destroyers (classed as "first rank frigates" in the French Navy).
Germany is building six multi-mission F126 frigates, which will have destroyer size and corresponding capabilities (length: 163 m, displacement: 10,550 tons).
Greece has ordered three Frégates de défense et d'intervention (with an option on a fourth) from France.
India is building four destroyers, of which three have been commissioned. The nation has also begun development of its Next Generation Destroyer (NGD), also referred to as the Project 18-class destroyers.
is currently building 1-2 s.
Italy is currently developing its new DDX project to replace its Durand de la Penne-class destroyers.
Is developing plans for its DDR Destroyer Revolution Project.
South Korea has begun development of its KDX-IIA destroyers, a subclass of its existing destroyers; the first unit was expected to enter service in 2019. Additional destroyers are also being built.
has begun development of its . Design work was ongoing as of 2020.
Turkey is currently developing its TF2000-class destroyer as the largest part of the MILGEM project. A total of seven ships will be constructed and will specialise in anti-air warfare.
The United Kingdom is in the early stages of developing a Type 83 destroyer design after the unveiling of these plans in the 2021 defence white paper. The class is projected to replace the current Type 45 destroyer fleet beginning in the latter 2030s.
The United States currently has 19 additional Arleigh Burke-class destroyers planned or under construction. The new ships will be the upgraded "Flight III" version. The United States has also started development of its DDG(X) next-generation destroyer project. Construction of the first ship is expected to start in 2028.
Destroyers in preservation
Many historic destroyers are preserved as museum ships:
in Buffalo, New York, USA
in Baton Rouge, Louisiana, USA
in Boston, Massachusetts, USA
in Charleston, South Carolina, USA
in Bay City, Michigan, USA
in Bremerton, Washington, USA
in Albany, New York, USA
in Galveston, Texas, USA
in Chatham, Kent, UK
HMCS Haida in Hamilton, Ontario, Canada
in Sydney, New South Wales, Australia
FS Maillé-Brézé in Nantes, Pays de la Loire, France
FGS Mölders in Wilhelmshaven, Lower Saxony, Germany
ORP Błyskawica in Gdynia, Pomeranian Voivodeship, Poland
HSwMS Småland in Gothenburg, Västergötland, Sweden
HS Velos in Thessaloniki, Central Macedonia, Greece
TCG Gayret in Izmit, Kocaeli Province, Turkey
RFS Bespokoyny in Kronshtadt, Saint Petersburg, Russia
RFS Smetlivy in Sevastopol, Crimea, Russia
ROKS Jeong Ju in Dangjin, South Chungcheong Province, South Korea
ROCS Te Yang in Tainan City, Tainan County, Taiwan
CNS Anshan in Qingdao, Shandong, China
CNS Changchun in Rushan, Shandong, China
CNS Taiyuan in Zhongshan, Dalian, China
CNS Chongqing in Tianjin, China
CNS Dalian in Liugong Island, Shandong, China
CNS Jinan in Qingdao, Shandong, China
CNS Nanchang in Nanchang, Jiangxi, China
CNS Nanjing in Shipu, Xiangshan County, Zhejiang, China
CNS Nanning in Fangchenggang, Guangxi, China
CNS Xi'an in Wuhan, Hubei, China
CNS Xining in Taizhou, Jiangsu, China
CNS Yinchuan in Yinchuan, Ningxia, China
CNS Zhuhai in Chongqing, China
BNS Comandante Bauru in Rio de Janeiro, Brazil
ARA Santísima Trinidad being restored at Port Belgrano Naval Base, Argentina
| Technology | Naval warfare | null |
11907381 | https://en.wikipedia.org/wiki/Indonesian%20coelacanth | Indonesian coelacanth | The Indonesian coelacanth (Latimeria menadoensis, Indonesian: raja laut), also called Sulawesi coelacanth, is one of two living species of coelacanth, identifiable by its brown color. Latimeria menadoensis is a lobe-finned fish belonging to the class Actinistia and order Coelacanthiformes, classified under the family Latimeriidae and genus Latimeria. As a deep-sea predator, this species plays a crucial role in maintaining the balance of marine ecosystems.
It is listed as vulnerable by the IUCN, and it was quickly given protected status under Indonesian National Law Number 7/1999 after its discovery. The other species of coelacanth, the West Indian Ocean coelacanth, is listed as critically endangered. Separate populations of the Indonesian coelacanth are found in the waters of north Sulawesi as well as Papua and West Papua. This species offers insights into the early existence of fish and the first tetrapods.
Discovery
On September 18, 1997, Arnaz and Mark Erdmann, traveling in Indonesia on their honeymoon, saw a strange fish in a market at Manado Tua, on the island of Sulawesi. Mark Erdmann thought it was a gombessa (Comoro coelacanth), although it was brown, not blue. Erdmann took only a few photographs of the fish before it was sold. After confirming that the discovery was unique, Erdmann returned to Sulawesi in November 1997, interviewing fishermen to look for further examples. On July 30, 1998, a fisherman, Om Lameh Sonatham, caught a second Indonesian specimen, 1.2 m in length and weighing 29 kg, and handed the fish to Erdmann. The fish was barely alive, but it lived for six hours, allowing Erdmann to photographically document its coloration, fin movements and general behavior. The specimen was preserved and donated to the Bogor Zoological Museum, part of the Indonesian Institute of Sciences. Erdmann's discovery was announced in Nature in September 1998.
The fish collected by Erdmann was described in a 1999 issue of Comptes Rendus de l'Académie des sciences Paris by Pouyaud et al. It was given the scientific name Latimeria menadoensis (after Manado, where the specimen was found). The description and naming were published without the involvement or knowledge of Erdmann, who had been independently conducting research on the specimen at the time. In response to Erdmann's complaints, Pouyaud and two other scientists asserted in a submission to Nature that they had been aware of the new species since 1995, predating the 1997 discovery. However, the supplied photographic evidence of the purported earlier specimen, supposedly collected off southwest Java, was recognised as a crude forgery by the editorial team, and the claim was never published.
Its geographic distribution is known to be largely confined to Indonesia, and most recorded sightings of the species are reported off North Sulawesi, in areas including Talise Island, Gangga Island and Manado Bay. The species has also been found off the southern coast of Biak Island in northern New Guinea. With the discovery of Latimeria menadoensis in Indonesian waters, researchers have suggested that this area could be the point of origin of all coelacanths.
Description
Superficially, the Indonesian coelacanth, known locally as raja laut ("king of the sea"), appears to be the same as those found in the Comoros, except that the background coloration of the skin is brownish-gray rather than bluish. It has the same white mottling pattern as the West Indian Ocean coelacanth, but with flecks over the dorsal surface of its body and fins that appear golden due to the reflection of light. It may grow to 1.4 meters long and about 31–51 kg. The two species share the distinguishing features of their genus, especially the limb-like pectoral and pelvic fins and an oil-filled notochord in place of a vertebral column. The coelacanth also has unique hollow fin rays, which add to its image as a living fossil, as well as a considerable adipose-filled swim bladder for the control of buoyancy and an intracranial articulation that facilitates jaw movement during feeding. The head is large relative to the body, accounting for 27–29% of the total body length. Compared to its African counterpart, Latimeria menadoensis has longer pectoral fins and shorter pelvic fins, with fewer second dorsal and pelvic fin rays.
Its dentition comprises small, sharply curved premaxillary teeth used for slicing food, together with large, fang-like teeth on the palatopterygoids. This arrangement is thought to make it easier to capture and restrain prey in the extreme conditions of the deep sea.
Adult coelacanths, although less frequently sighted or captured than in the past, are still reasonably well known, whereas juvenile specimens are rarely reported or sampled, leaving their development and sexual maturity poorly understood. The few known juveniles show some differences from adults, such as slender bodies, small eyes, large fin bases, and long fins. White coloration on the dorso-posterior portion of the first dorsal fin and at the terminal margins of the caudal-fin lobes is conspicuous in juvenile fish.
The species has lobed fins that move in a distinct pattern, differing from most fish species. This swimming style is thought to represent an evolutionary link between aquatic and terrestrial locomotion. The species also demonstrates extreme stenothermy, maintaining a very slow metabolic rate in its deep-sea environment. This adaptation conserves energy in a habitat where food may be scarce, and temperatures are low.
DNA analysis has shown that the specimen obtained by Erdmann differed genetically from the Comorian population. In 2005, a molecular study estimated the divergence time between the Indonesian and Comorian coelacanth species to be 30–40 mya. The two species show a 4.28% overall difference in their nucleotides.
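The 4.28% figure is, in essence, a pairwise nucleotide difference (p-distance) over aligned sequences. The following is a minimal illustrative sketch of how such a figure is computed, not the original authors' actual pipeline; the two short sequences are made-up placeholders, not real coelacanth data.

```python
# Minimal sketch: pairwise nucleotide difference (p-distance) between two
# aligned DNA sequences. Placeholder sequences, not real coelacanth data.

def p_distance(seq_a: str, seq_b: str) -> float:
    """Return the fraction of aligned sites at which the sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    differences = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return differences / len(seq_a)

comorian = "ATGGCCTAACTTGGAGTCCA"    # hypothetical aligned fragment
indonesian = "ATGACCTAGCTTGGAGTACA"  # hypothetical aligned fragment

print(f"nucleotide difference: {p_distance(comorian, indonesian):.2%}")  # 15.00%
```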
An analysis of a specimen recovered from Waigeo, West Papua in eastern Indonesia indicates that there may be another lineage of the Indonesian coelacanth, and the two lineages may have diverged 13 million years ago. Whether this new lineage represents a subspecies or a new species has yet to be determined.
Reproduction and mating
The Indonesian coelacanth is thought to be ovoviviparous, but no females of this species containing eggs or embryos have yet been taken. In ovoviviparity, the female carries fertilized eggs inside her body and gives birth to live young. Coelacanths take years to reach maturity, and their gestation period lasts about three years – the longest of any vertebrate. The juveniles may emerge at around 30 cm in length. This reproductive strategy produces few offspring, usually between five and twenty-five embryos in each reproductive cycle. Mating is believed to be monogamous, with a single male attending the area to mate with a female. The population of Indonesian coelacanths is believed to be quite small: to date, fewer than ten individuals have been officially documented. Such a low number of observations is indicative of the rarity of the species and of the challenges involved in studying it.
Age and growth
Coelacanths are characterized by slow growth and long lives. They are estimated to have a life expectancy of up to 100 years, with claims of an average age of over 60 years. Systematic data on growth and aging in L. menadoensis are scarce, as specimens are few and the species is a deep-sea animal. The structures used to estimate age and growth in coelacanths are not explicitly known, but in many fish species, otoliths (ear stones) and fin rays are commonly used for this purpose.
Behavior
Little is known about the behavior or lifestyle of L. menadoensis; however, some observations have been made. The species is known to be nocturnal, carrying out most of its foraging at night. Juvenile coelacanths have been found in similar geographical locations to adults, such as long and narrow overhangs. This implies that the species probably breeds within a limited portion of its geographic range.
Habitat
Geological obstacles and ocean currents most likely shape the distribution of the Indonesian coelacanth, which is restricted to particular areas of the country (in contrast, L. chalumnae is found throughout the western Indian Ocean). Living in the lightless depths of the ocean, the Indonesian coelacanth has certain visual adaptations: its eyes are most sensitive to blue light at 480 nm, a typical wavelength in its dark, confined habitat. This adaptation, together with the expanded rostral organ, which detects electrical signals, enables the coelacanth to navigate and feed in the dark. Researchers speculate that this evolutionary process in deep-sea ecosystems may have begun about 200 million years ago. Teams of researchers using submersibles have recorded live sightings of the fish in the waters of Manado Tua and the Talise islands off north Sulawesi, as well as in the waters of Biak in Papua. These areas share a steep, rocky topography full of caves, which form the habitat of the fish. These coelacanths live in deep waters of around 150 metres or more, at temperatures between 14 and 18 degrees Celsius.
The species is associated with deep waters, occurring from 100 to 700 meters below the surface. Even within this depth range in Indonesia, however, its habitat selection is biased toward certain geographic territories. It is most often associated with steep slopes on the ocean bottom and with underwater grottos and cliffs. Water temperatures in these habitats are usually in the range of 12.4–20.5 °C. Latimeria menadoensis nonetheless appears more inclined to inhabit a stochastic (variable) environment than its African counterpart, Latimeria chalumnae.
Diet
They are slow-moving, passive fish that feed mainly on benthopelagic organisms (such as flounder, rays, and halibut) and epibenthopelagic organisms (such as phytoplankton, zooplankton, jellyfish, crabs, sea turtles, and sea birds). Their food preference is mainly deep-sea fish and cephalopods, making them carnivorous. They also engage in ambush predation. While hunting, coelacanths have been observed to drift in a head-down "headstand" posture, which is thought to be energy efficient. The conservation of energy is essential given their relatively slow metabolic rate, a consequence of living in a deep-sea habitat.
Conservation status
Latimeria menadoensis is classified as a vulnerable species due to its limited distribution, small population size, and the threat of bycatch in deep-sea fishing. Since the discovery of the species, fewer than ten specimens have been recorded in Indonesia, which points to a low population density. Its habitat specialization and small known geographic range moreover make L. menadoensis vulnerable to local extinction threats. The fish is legally protected through Minister of Forestry Regulation No. 7/1999. However, it continues to be caught by local fishermen; on November 5, 2014, a fisherman found a specimen in his net, the seventh coelacanth found in Indonesian waters since 1998. Additionally, in 2000 the species was listed in CITES Appendix I, meaning that it cannot be traded internationally. These measures reflect the scientific and cultural importance of the coelacanth. The species has assumed considerable social significance in Indonesia, where it is a point of local and national pride and an emblematic species for marine conservation in Southeast Asia. DNA studies and habitat conservation research are ongoing, and marine protected areas have been established.
The species faces various threats, particularly deep-water fishing, increased siltation of its habitat, and pollution. Increased human activity along the coastal shelf may lead to greater sedimentation and a decrease in the quantity and quality of the complex deep-sea structures the coelacanths depend on. Although climate change and shifts in ocean temperature are evident, there is still not enough information about how they affect the species' ecosystem. The greatest danger may be capture as bycatch in deep-set gillnets intended for sharks. In response, measures were implemented in Bunaken National Park, including regulations prohibiting the use of such nets, after which no more coelacanths were caught in the area. No fishery targets the coelacanth, and even though the species is inedible, there have been instances of foreign buyers attempting to induce fishers to catch it, most likely for a museum or for display in an aquarium tank.
| Biology and health sciences | Fishes: General | Animals |
9446968 | https://en.wikipedia.org/wiki/Volatile%20%28astrogeology%29 | Volatile (astrogeology) | Volatiles are the group of chemical elements and chemical compounds that can be readily vaporized. In contrast with volatiles, elements and compounds that are not readily vaporized are known as refractory substances.
On planet Earth, the term 'volatiles' often refers to the volatile components of magma. In astrogeology volatiles are investigated in the crust or atmosphere of a planet or moon. Volatiles include nitrogen, carbon dioxide, ammonia, hydrogen, methane, sulfur dioxide, water and others.
Planetary science
Planetary scientists often classify volatiles with exceptionally low melting points, such as hydrogen and helium, as gases, whereas volatiles with melting points above about 100 K (–173 °C, –280 °F) are referred to as ices. The terms "gas" and "ice" in this context can apply to compounds that may be solids, liquids or gases. Thus, Jupiter and Saturn are gas giants, and Uranus and Neptune are ice giants, even though the vast majority of the "gas" and "ice" in their interiors is a hot, highly dense fluid that gets denser toward the center of the planet and, in the case of Neptune, may reach temperatures of 5,100 °C. Inside Jupiter's orbit, cometary activity is driven by the sublimation of water ice. Supervolatiles such as CO and CO2 have generated cometary activity at much greater heliocentric distances.
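As a rough illustration of this naming convention (not a formal definition), the rule can be expressed as a simple threshold test. The melting points below are approximate literature values; note that the 100 K cut-off is only indicative, since methane sits just below it yet is conventionally grouped with the ices, and helium solidifies only under pressure.

```python
# Illustrative sketch of the rough gas/ice naming convention described above.
# Melting points in kelvin are approximate; the 100 K threshold is indicative.

MELTING_POINTS_K = {
    "hydrogen": 14.0,
    "helium": 1.0,            # solidifies only under pressure; nominal value
    "nitrogen": 63.2,
    "methane": 90.7,          # conventionally an "ice" despite the low value
    "ammonia": 195.4,
    "carbon dioxide": 216.6,  # triple point; sublimes at ambient pressure
    "water": 273.2,
}

def classify(volatile: str, threshold_k: float = 100.0) -> str:
    """Label a volatile 'gas' or 'ice' by the rough melting-point threshold."""
    return "gas" if MELTING_POINTS_K[volatile] < threshold_k else "ice"

for name in MELTING_POINTS_K:
    print(f"{name}: {classify(name)}")
```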
Igneous petrology
In igneous petrology the term refers more specifically to the volatile components of magma (mostly water vapor and carbon dioxide) that affect the appearance and explosivity of volcanoes. Volatiles in high-viscosity magma, generally felsic with a higher silica (SiO2) content, tend to produce explosive eruptions. Volatiles in low-viscosity magma, generally mafic with a lower silica content, tend to vent effusively and can give rise to lava fountains.
Volatiles in magma
Some volcanic eruptions are explosive because of mixing between water and magma reaching the surface, which releases energy suddenly. In other cases, the eruption is driven by volatiles dissolved in the magma itself. As magma approaches the surface, pressure decreases and the volatiles come out of solution, creating bubbles that circulate in the liquid. The bubbles connect together, forming a network; this promotes fragmentation of the magma into small droplets, spray, or clots carried in the gas.
Generally, 95-99% of magma is liquid rock. However, the small percentage of gas present represents a very large volume when it expands on reaching atmospheric pressure. Gas is thus important in a volcano system because it generates explosive eruptions. Magma in the mantle and lower crust has a high volatile content. Water and carbon dioxide are not the only volatiles that volcanoes release; other volatiles include hydrogen sulfide and sulfur dioxide. Sulfur dioxide is common in basaltic and rhyolite rocks. Volcanoes also release a large amount of hydrogen chloride and hydrogen fluoride as volatiles.
Solubility of volatiles
Three main factors affect the dispersion of volatiles in magma: confining pressure, magma composition, and magma temperature. Pressure and composition are the most important parameters. To understand how magma behaves as it rises to the surface, the role of solubility within the magma must be known. Empirical laws have been deduced for different magma-volatile combinations. For instance, for water in basaltic magma the equation is n = 0.1078·√P, where n is the amount of dissolved gas as a weight percentage (wt%) and P is the pressure in megapascals (MPa) acting on the magma. The coefficient changes with composition: for water in rhyolite, n = 0.4111·√P, while carbon dioxide follows the linear law n = 0.0023·P. These simple equations hold only if a single volatile is present in the magma; in reality a magma often contains multiple volatiles, which interact chemically in complex ways.
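A minimal numerical sketch of these single-volatile laws, assuming the square-root pressure dependence for water and the linear dependence for carbon dioxide as given above (the coefficients are those quoted in the text, and each applies to one volatile at a time):

```python
# Single-volatile solubility sketch: n (wt%) as a function of pressure P (MPa),
# using the empirical coefficients quoted in the text.

import math

def water_in_basalt(p_mpa: float) -> float:
    return 0.1078 * math.sqrt(p_mpa)

def water_in_rhyolite(p_mpa: float) -> float:
    return 0.4111 * math.sqrt(p_mpa)

def co2_in_magma(p_mpa: float) -> float:
    return 0.0023 * p_mpa

for p in (50, 100, 200):
    print(f"P = {p:3d} MPa: "
          f"H2O in basalt {water_in_basalt(p):.2f} wt%, "
          f"H2O in rhyolite {water_in_rhyolite(p):.2f} wt%, "
          f"CO2 {co2_in_magma(p):.2f} wt%")
```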
Simplifying, the solubility of water in rhyolite and basalt is a function of pressure and depth below the surface, in the absence of other volatiles. Both basalt and rhyolite lose water with decreasing pressure as the magma rises toward the surface, and the solubility of water is higher in rhyolite than in basaltic magma. Knowledge of the solubility allows determination of the maximum amount of water that can be dissolved at a given pressure. If the magma contains less water than this maximum, it is undersaturated. Usually there is insufficient water and carbon dioxide in the deep crust and mantle, so magma is often undersaturated under these conditions. Magma becomes saturated when it holds the maximum amount of water that can be dissolved in it; if it continues to rise while carrying more water than can remain dissolved, it becomes supersaturated, and the excess water exsolves as bubbles or water vapor. This happens because pressure decreases as the magma ascends, and the system must continually rebalance between the falling solubility and the falling pressure. Carbon dioxide is considerably less soluble in magma than water and tends to exsolve at greater depth; in this context, water and carbon dioxide can be treated as independent. What affects the behavior of the magmatic system is the depth at which carbon dioxide and water are released. The low solubility of carbon dioxide means that it starts to form bubbles before the magma reaches the magma chamber, by which point the magma is already supersaturated in carbon dioxide. The magma, enriched in carbon dioxide bubbles, rises to the roof of the chamber, and the carbon dioxide tends to leak through cracks into the overlying caldera. During an eruption, the magma therefore loses proportionally more carbon dioxide, which was already supersaturated in the chamber, than water; overall, however, water is the main volatile released during an eruption.
Nucleation of bubbles
Bubble nucleation happens when a volatile becomes saturated. The bubbles are composed of molecules that aggregate spontaneously, a process called homogeneous nucleation. Surface tension acts on the bubbles, shrinking the surface and forcing the molecules back into the liquid; nucleation is easier where surfaces are irregular, since volatile molecules there can ease the effect of surface tension. Nucleation can therefore occur on solid crystals stored in the magma chamber, which are ideal potential nucleation sites for bubbles (heterogeneous nucleation). If there is no such nucleation, bubble formation may begin very late and the magma becomes significantly supersaturated. The balance between supersaturation pressure and bubble radius is expressed by the equation ∆P = 2σ/r, where ∆P is the supersaturation pressure (on the order of 100 MPa) and σ is the surface tension. If nucleation starts late, when the magma is highly supersaturated, the distance between bubbles becomes smaller. Essentially, if the magma rises rapidly to the surface, the system is further out of equilibrium and more supersaturated. As the magma rises, there is competition between adding new molecules to existing bubbles and creating new ones; the distance between molecules characterizes how efficiently the volatiles aggregate to new or existing sites. Crystals inside the magma can thus determine how bubbles grow and nucleate.
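A worked example of the balance ∆P = 2σ/r, rearranged for the critical bubble radius. The supersaturation pressure is the 100 MPa figure from the text; the surface tension is an assumed, typical order of magnitude for silicate melts, not a measured value.

```python
# Critical bubble radius from the nucleation balance: ΔP = 2σ/r  =>  r = 2σ/ΔP.

SIGMA = 0.1        # N/m, assumed typical surface tension of a silicate melt
DELTA_P = 100e6    # Pa, supersaturation pressure (100 MPa, from the text)

r_critical = 2 * SIGMA / DELTA_P
print(f"critical bubble radius ≈ {r_critical:.1e} m "
      f"({r_critical * 1e9:.0f} nm)")
# ≈ 2.0e-09 m, i.e. about 2 nm: bubbles smaller than this are collapsed by
# surface tension, while larger ones can keep growing.
```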
| Physical sciences | Volcanology | Earth science |
51852 | https://en.wikipedia.org/wiki/Sidewalk | Sidewalk | A sidewalk (North American English) or pavement (British English) is a path along the side of a road. Usually constructed of concrete, pavers, brick, stone, or asphalt, it is designed for pedestrians. A sidewalk is normally higher than the roadway, and separated from it by a curb. There may also be a planted strip between the sidewalk and the roadway and between the roadway and the adjacent land.
Terminology
The term "sidewalk" is preferred in most of the United States & Canada. The term "pavement" is more common in the United Kingdom and other members of the Commonwealth of Nations, as well as parts of the Mid-Atlantic United States such as Philadelphia and parts of New Jersey. Many Commonwealth countries use the term "footpath". The professional, civil engineering and legal term for this in the USA and Canada is "sidewalk" while in the United Kingdom it is "pavement".
In the United States, the term sidewalk is used for the pedestrian path beside a road. "Shared use paths" or "multi-use paths" are available for use by both pedestrians and bicyclists. "Walkway" is a more comprehensive term that includes stairs, ramps, passageways, and related structures that facilitate the use of a path as well as the sidewalk. In the UK, the term "footpath" is mostly used for paths that do not abut a roadway. The term "shared-use path" is used where cyclists are also able to use the same section of path as pedestrians.
History
Sidewalks have existed for at least 4,000 years. The Greek city of Corinth had sidewalks by the 4th century BC, and the Romans built sidewalks, which they called sēmitae.
However, by the Middle Ages, narrow roads had reverted to being used simultaneously by pedestrians and wagons without any formal separation between the two. Early attempts at ensuring the adequate maintenance of foot-ways or sidewalks were often made, as in Colchester's Improvement Act 1623 (21 Jas. 1. c. 34), but they were generally not very effective.
Following the Great Fire of London in 1666, attempts were slowly made to bring some order to the sprawling city. In 1671, "Certain Orders, Rules and Directions Touching the Paving and Cleansing The Streets, Lanes and Common Passages within the City of London" were formulated, calling for all streets to be adequately paved for pedestrians with cobblestones. Purbeck stone was widely used as a durable paving material. Bollards were also installed to protect pedestrians from the traffic in the middle of the road.
The British House of Commons passed a series of Paving Acts from the 18th century. The 1766 Paving & Lighting Act authorized the City of London Corporation to establish foot-ways throughout all the streets of London, to pave them with Purbeck stone (the thoroughfare in the middle was generally cobblestone) and to raise them above street level with kerbs forming the separation. The corporation was also made responsible for the regular upkeep of the roads, including their cleaning and repair, for which it charged a tax from 1766. Another turning point was the construction of Paris's Pont Neuf (1578–1606), the first Parisian bridge built without houses on it. Its wide, raised sidewalks separated pedestrians from road traffic, and its generous width and elegant, durable design made it immediately popular for promenading at the beginning of the century in which Paris took the form it is renowned for to this day. It was also a cultural phenomenon, because all classes mixed on the new walkways. By the 19th century, large and spacious sidewalks were routinely constructed in European capitals and were associated with urban sophistication.
Benefits
Transportation
Sidewalks played an important role in transportation, as they provided a path for people to walk along without stepping on horse manure. They aided road safety by minimizing interaction between pedestrians, horses, carriages, and later automobiles. Sidewalks are normally in pairs, one on each side of the road, with the center section of the road for motorized vehicles. Crosswalks provide pedestrians a space to cross between the two sides of the street at predictable locations.
On rural roads, sidewalks may not be present as the amount of traffic (pedestrian or motorized) may not be enough to justify separating the two. In suburban and urban areas, sidewalks are more common. In town and city centers (known as downtown in the USA) the amount of pedestrian traffic can exceed motorized traffic, and in this case the sidewalks can occupy more than half of the width of the road, or the whole road can be pedestrianized.
Environment
Sidewalks may have a small effect on reducing vehicle miles traveled and carbon dioxide emissions. A study of sidewalk and transit investments in Seattle neighborhoods found vehicle travel reductions of 6 to 8% and CO2 emission reductions of 1.3 to 2.2%.
Road traffic safety
Research commissioned for the Florida Department of Transportation, published in 2005, found that, in Florida, the Crash Reduction Factor (used to estimate the expected reduction of crashes during a given period) resulting from the installation of sidewalks averaged 74%.
Research at the University of North Carolina for the U.S. Department of Transportation found that the presence or absence of a sidewalk and the speed limit are significant factors in the likelihood of a vehicle/pedestrian crash. Sidewalk presence had a risk ratio of 0.118, which means that the likelihood of a crash on a road with a paved sidewalk was 88.2 percent lower than on one without a sidewalk. The authors wrote that "this should not be interpreted to mean that installing sidewalks would necessarily reduce the likelihood of pedestrian/motor vehicle crashes by 88.2 percent in all situations. However, the presence of a sidewalk clearly has a strong beneficial effect of reducing the risk of a 'walking along roadway' pedestrian/motor vehicle crash." The study does not count crashes that happen when walking across a roadway. The speed limit risk ratio was 1.116 per mi/h, which means that a 16.1-km/h (10-mi/h) increase in the limit yields a risk factor of (1.116)^10, or about 3.
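A small sketch verifying the arithmetic of the two risk ratios quoted above:

```python
# Check of the risk-ratio arithmetic in the study summary above.

sidewalk_risk_ratio = 0.118
print(f"crash reduction with sidewalk: {1 - sidewalk_risk_ratio:.1%}")  # 88.2%

speed_risk_ratio = 1.116           # per 1 mi/h increase in the speed limit
print(f"risk factor for +10 mi/h: {speed_risk_ratio ** 10:.2f}")        # ~3.00
```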
The presence or absence of sidewalks was one of three factors that were found to encourage drivers to choose lower, safer speeds.
On the other hand, the implementation of schemes that involve the removal of sidewalks, such as shared space schemes, is reported to deliver a dramatic drop in crashes and congestion too, which indicates that a number of other factors, such as the local speed environment, also play an important role in whether sidewalks are necessarily the best local solution for pedestrian safety.
In cold weather, black ice is a common problem with unsalted sidewalks. The ice forms a thin transparent surface film which is almost impossible to see, and so results in many slips by pedestrians.
Riding bicycles on sidewalks is discouraged, since some research shows it to be more dangerous than riding in the street. Some jurisdictions prohibit sidewalk riding except for children. In addition to the risk of cyclist/pedestrian collisions, cyclists face increased risks from collisions with motor vehicles at street crossings and driveways. Riding in the direction opposite to traffic in the adjacent lane is especially risky.
Health
Since residents of neighborhoods with sidewalks are more likely to walk, they tend to have lower rates of cardiovascular disease, obesity, and other health issues related to sedentary lifestyles. Also, children who walk to school have been shown to have better concentration.
Social uses
Some sidewalks may be used as social spaces, with sidewalk cafés, markets, or busking musicians, as well as for parking a variety of vehicles including cars, motorbikes and bicycles. The term "sidewalk surfing" was often used in the early 1960s to describe skateboarding.
Construction
Contemporary sidewalks are most often made of concrete in North America, while tarmac, asphalt, brick, stone, slab and (increasingly) rubber are more common in Europe. Different materials are more or less environmentally friendly: pumice-based trass, for example, when used as an extender, is less energy-intensive than Portland cement concrete or petroleum-based materials such as asphalt or tar-penetration macadam. Multi-use paths alongside roads are sometimes made of materials that are softer than concrete, such as asphalt.
Some sidewalks are built as meandering sidewalks: wavy paths that veer back and forth along the side of the road, no matter how straight the street. These sidewalks are common in North America and are used to break up the monotonous alignments of city blocks.
Wood
In the 19th century and early 20th century, sidewalks of wood were common in some North American locations. They may still be found at historic beach locations and in conservation areas to protect the land beneath and around, called boardwalks.
Brick
Brick sidewalks are found in some urban areas, usually for aesthetic purposes. Brick sidewalks are generally consolidated with brick hammers, rollers, and sometimes motorized vibrators.
Stone
Stone slabs called flagstones or flags are sometimes used where an attractive appearance is required, as in historic town centers.
For example, in Melbourne, Australia, bluestone has been used to pave the sidewalks of the CBD since the Gold rush in the 1850s because it proved to be stronger, more plentiful and easier to work than most other available materials.
Stone and concrete pavers
Pre-cast concrete pavers are used for sidewalks, often colored or textured to resemble stone. Sometimes cobblestones are used, though they are generally considered too uneven for comfortable walking.
Concrete
In the United States and Canada, the most common type of sidewalk consists of a poured concrete "ribbon", examples of which from as early as the 1860s can be found in good repair in San Francisco, and stamped with the name of the contractor and date of installation. When Portland cement was first imported to the United States in the 1880s, its principal use was in the construction of sidewalks.
Today, most sidewalk ribbons are constructed with cross-lying strain-relief grooves placed or sawn at regular intervals. This partitioning, an improvement over the continuous slab ribbon, was patented in 1924 by Arthur Wesley Hall and William Alexander McVay, who wished to minimize damage to the concrete from the effects of tectonic and temperature fluctuations, both of which can crack longer segments. The technique is not perfect, as freeze-thaw cycles (in cold-winter regions) and tree root growth can eventually cause damage that requires repair.
In highly variable climates which undergo multiple freeze-thaw cycles, concrete blocks will be formed with separations, called expansion joints, to allow for thermal expansion without breakage. The use of expansion joints in sidewalks may not be necessary, as the concrete will shrink while setting.
Tarmac and asphalt
In the United Kingdom, Australia and France suburban sidewalks are most commonly constructed of tarmac. In urban or inner-city areas sidewalks are most commonly constructed of slabs, stone, or brick depending upon the surrounding street architecture and furniture.
| Technology | Road infrastructure | null |
51892 | https://en.wikipedia.org/wiki/Textile | Textile | Textile is an umbrella term that includes various fiber-based materials, including fibers, yarns, filaments, threads, different fabric types, etc. At first, the word "textiles" only referred to woven fabrics. However, weaving is not the only manufacturing method, and many other methods were later developed to form textile structures based on their intended use. Knitting and non-woven are other popular types of fabric manufacturing. In the contemporary world, textiles satisfy the material needs for versatile applications, from simple daily clothing to bulletproof jackets, spacesuits, and doctor's gowns.
Textiles are divided into two groups: consumer textiles for domestic purposes and technical textiles. In consumer textiles, aesthetics and comfort are the most important factors, while in technical textiles, functional properties are the priority. The durability of textiles is an important property, with common cotton or blend garments (such as t-shirts) able to last twenty years or more with regular use and care.
Geotextiles, industrial textiles, medical textiles, and many other areas are examples of technical textiles, whereas clothing and furnishings are examples of consumer textiles. Each component of a textile product, including fiber, yarn, fabric, processing, and finishing, affects the final product. Components may vary among various textile products as they are selected based on their fitness for purpose.
Fiber is the smallest fabric component; fibers are typically spun into yarn, and yarns are used to manufacture fabrics. Fiber has a hair-like appearance and a higher length-to-width ratio. The sources of fibers may be natural, synthetic, or both. The techniques of felting and bonding directly transform fibers into fabric. In other cases, yarns are manipulated with different fabric manufacturing systems to produce various fabric constructions. The fibers are twisted or laid out to make a long, continuous strand of yarn. Yarns are then used to make different kinds of fabric by weaving, knitting, crocheting, knotting, tatting, or braiding. After manufacturing, textile materials are processed and finished to add value, such as aesthetics, physical characteristics, and increased usefulness. The manufacturing of textiles is the oldest industrial art. Dyeing, printing, and embroidery are all different decorative arts applied to textile materials.
Etymology
Textile
The word 'textile' comes from the Latin adjective textilis, meaning 'woven', which itself stems from textus, the past participle of the verb texere, 'to weave'. Originally applied to woven fabrics, the term "textiles" is now used to encompass a diverse range of materials, including fibers, yarns, and fabrics, as well as other related items.
Fabric
A "fabric" is defined as any thin, flexible material made from yarn, directly from fibers, polymeric film, foam, or any combination of these techniques. Fabric has a broader application than cloth. Fabric is synonymous with cloth, material, goods, or piece goods. The word 'fabric' also derives from Latin, with roots in the Proto-Indo-European language. Stemming most recently from the Middle French , or "building," and earlier from the Latin ('workshop; an art, trade; a skillful production, structure, fabric'), the noun stems from the Latin " artisan who works in hard materials', which itself is derived from the Proto-Indo-European dhabh-, meaning 'to fit together'.
Cloth
Cloth is a flexible substance typically created through the processes of weaving, felting, or knitting using natural or synthetic materials. The word 'cloth' derives from the Old English clāþ, meaning 'a cloth, woven or felted material to wrap around one's body', from the Proto-Germanic *klaithaz, similar to the Old Frisian klath, the Middle Dutch cleet, the Middle High German kleit and the German Kleid, all meaning 'garment'.
Although cloth is a type of fabric, not all fabrics can be classified as cloth due to differences in their manufacturing processes, physical properties, and intended uses. Materials that are woven, knitted, tufted, or knotted from yarns are referred to as cloth, while wallpaper, plastic upholstery products, carpets, and nonwoven materials are examples of fabrics.
History
Textiles themselves are too fragile to survive across millennia; the tools used for spinning and weaving make up most of the prehistoric evidence for textile work. The earliest tool for spinning was the spindle, to which a whorl was eventually added. The weight of the whorl improved the thickness and twist of the spun thread. Later, the spinning wheel was invented. Historians are unsure where; some say China, others India.
The precursors of today's textiles include leaves, barks, fur pelts, and felted cloths.
The Banton Burial Cloth, the oldest existing example of warp ikat in Southeast Asia, is displayed at the National Museum of the Philippines. The cloth was most likely made by the native Asian people of northwest Romblon.
The first clothes, worn at least 70,000 years ago and perhaps much earlier, were probably made of animal skins and helped protect early humans from the elements. At some point, people learned to weave plant fibers into textiles.
The discovery of dyed flax fibers in a cave in the Republic of Georgia dated to 34,000 BCE suggests that textile-like materials were made as early as the Paleolithic era.
The speed and scale of textile production have been altered almost beyond recognition by industrialization and the introduction of modern manufacturing techniques.
Textile industry
The textile industry grew out of art and craft and was kept going by guilds. In the 18th and 19th centuries, during the Industrial Revolution, it became increasingly mechanized. In 1765, when a machine for spinning wool or cotton called the spinning jenny was invented in the United Kingdom, textile production became the first economic activity to be industrialised. In the 20th century, science and technology were driving forces.
The textile industry exhibits inherent dynamism, influenced by a multitude of transformative changes and innovations within the domain. Textile operations can experience ramifications arising from shifts in international trade policies, evolving fashion trends, evolving customer preferences, variations in production costs and methodologies, adherence to safety and environmental regulations, as well as advancements in research and development.
The textile and garment industries exert a significant impact on the economic systems of numerous countries engaged in textile production.
Naming
Most textiles were called by their base fibre generic names, their place of origin, or were put into groups based loosely on manufacturing techniques, characteristics, and designs. Nylon, olefin, and acrylic are generic names for some of the more commonly used synthetic fibres.
Related terms
The related words "fabric", "cloth", and "material" are often used in textile assembly trades (such as tailoring and dressmaking) as synonyms for textile. However, there are subtle differences in these terms in specialized usage. "Material" is an extremely broad term, basically meaning anything made of matter, and requires context to be useful. A textile is any material made of interlacing fibers, including carpeting and geotextiles, which may not necessarily be used in the production of further goods, such as clothing and upholstery. A fabric is a material made through weaving, knitting, spreading, felting, stitching, crocheting, or bonding that may be used in the production of further products, such as clothing and upholstery, thus requiring a further step of production. Cloth may also be used synonymously with fabric, but often refers specifically to a piece of fabric that has been processed or cut.
Greige goods: Textiles that are raw and unfinished are referred to as greige goods. After manufacturing, the materials are processed and finished.
Piece goods: Piece goods were textile materials sold in cut pieces as specified by the buyer. Piece goods were either cut from a fabric roll or made to a specific length, also known as yard goods.
Types
Textiles are various materials made from fibers and yarns. The term "textile" was originally only used to refer to woven fabrics, but today it covers a broad range of subjects. Textiles are classified at various levels, such as according to fiber origin (natural or synthetic), structure (woven, knitted, nonwoven), finish, etc. However, there are primarily two types of textiles:
Consumer textiles
Textiles have an assortment of uses, the most common of which are for clothing and for containers such as bags and baskets. In the household, textiles are used in carpeting, upholstered furnishings, window shades, towels, coverings for tables, beds, and other flat surfaces, and in art. Textiles are used in many traditional hand crafts such as sewing, quilting, and embroidery.
Technical textiles
Textiles produced for industrial purposes, and designed and chosen for technical characteristics beyond their appearance, are commonly referred to as technical textiles. Technical textiles include textile structures for automotive applications, medical textiles (such as implants), geotextile (reinforcement of embankments), agrotextiles (textiles for crop protection), protective clothing (such as clothing resistant to heat and radiation for fire fighter clothing, against molten metals for welders, stab protection, and bullet proof vests).
In the workplace, textiles can be used in industrial and scientific processes such as filtering. Miscellaneous uses include flags, backpacks, tents, nets, cleaning rags, transportation devices such as balloons, kites, sails, and parachutes; textiles are also used to provide strengthening in composite materials such as fibreglass and industrial geotextiles.
Due to the often highly technical and legal requirements of these products, these textiles are typically tested to ensure they meet stringent performance requirements. Other forms of technical textiles may be produced to experiment with their scientific qualities and to explore the possible benefits they may have in the future. Threads coated with zinc oxide nanowires, when woven into fabric, have been shown capable of "self-powering nanosystems", using vibrations created by everyday actions like wind or body movements to generate energy.
Significance
Textiles are all around us: they are a basic need alongside food and shelter, present everywhere in our lives from bath towels to space suits. Textiles comfort and protect humans and extend their lives, meeting our clothing needs by keeping us warm in the winter and cool in the summer. There are several specialized applications for textiles, such as medical textiles, intelligent textiles, and automotive textiles, all of which contribute to human well-being.
Serviceability in textiles
The term "serviceability" refers to a textile product's ability to meet the needs of consumers. The emphasis is on knowing the target market and matching the needs of the target market to the product's serviceability. Serviceability or performance in textiles is the ability of textile materials to withstand various conditions, environments, and hazards. Aesthetics, durability, comfort and safety, appearance retention, care, environmental impact, and cost are the serviceability concepts employed in structuring the material.
Components
Fibers, yarns, fabric construction, finishes and design are components of a textile product. The selection of specific components varies with the intended use, therefore the fibers, yarns, and fabric manufacturing systems are selected with consideration of the required performance.
Use and applications
Other uses
Textiles, textile production, and clothing were necessities of life in prehistory, intertwined with the social, economic, and religious systems. Other than clothing, textile crafts produced utilitarian, symbolic, and opulent items. Archaeological artifacts from the Stone Age and the Iron Age in Central Europe are used to examine prehistoric clothing and its role in forming individual and group identities.
Source of knowledge
Artifacts unearthed in various archaeological excavations inform us about the remains of past human life and activities. Dyed flax fibers discovered in the Republic of Georgia indicate that textile-like materials were developed during the Paleolithic period. Radiocarbon dating places the microscopic fibers at 36,000 years ago, when modern humans migrated from Africa.
Several textile remnants, such as the Inca Empire's textile arts remnants, which embody the Incas' aesthetics and social ideals, serve as a means for disseminating information about numerous civilizations, customs, and cultures.
There are textile museums that display history related to many aspects of textiles. A textile museum raises public awareness and appreciation of the artistic merits and cultural significance of the world's textiles on a local, national, and international scale. The George Washington University Museum and Textile Museum in Washington, D.C., was established in 1925.
Narrative art
The Bayeux Tapestry is a rare example of secular Romanesque art. The art work depicts the Norman Conquest of England in 1066.
Decorative art
Textiles are also used for decorative art. The appliqué work of Pipili, a decorative art of Odisha, a state in eastern India, is used for umbrellas, wall hangings, lamp shades, and bags. To make a range of decorative products, colored cloth in the shapes of animals, birds, and flowers is sewn onto a base cloth.
Architextiles
Architextiles, a combination of the words architecture and textile, are textile-based assemblages. Awnings are a basic type of architectural textile. Mughal Shahi Lal Dera Tent, which was a movable palace, is an example of the architextiles of the Mughal period.
Currency
Textiles had been used as currency as well. In Africa, textiles were used as currency in addition to being used for clothing, headwear, swaddling, tents, sails, bags, sacks, carpets, rugs, curtains, etc. Along the east–west axis in sub-Saharan Africa, cloth strip, which was typically produced in the savannah, was used as a form of currency.
Votive offering
Textiles were among the objects offered to the gods as votive offerings in ancient Greece for religious purposes.
Fiber
The smallest component of a fabric is fiber; fibers are typically spun into yarn, and yarns are used to make fabrics. Fibers are very thin and hair-like structures. The sources of fibers may be natural, synthetic, or both.
Global consumption
Global fiber production per person has increased from 8.4 kilograms in 1975 to 14.3 kilograms in 2021. Global fiber output roughly doubled from 58 million tons in 2000 to 113 million tons in 2021, after a modest drop in 2020 due to the COVID-19 pandemic, and is anticipated to reach 149 million tons by 2030.
The demand for synthetic fibers is increasing rapidly for numerous reasons, including their low price, the demand-supply imbalance of cotton, and their versatility in design and application. Synthetic fibers account for 70% of global fiber use, mainly polyester. By 2030, the synthetic fiber market is projected to reach 98.21 billion US dollars, growing at an anticipated 5.1% per year from 2022 to 2030.
Fiber sources
Natural fibers are obtained from plants, animals and minerals. Since prehistoric times, textiles have been made from natural fibers. Natural fibers are further categorized as cellulosic, protein, and mineral.
Synthetic or manmade fibers are manufactured with chemical synthesis.
Semi-synthetic: A subset of synthetic or manmade fibers is semi-synthetic fiber. Rayon is classified as a semi-synthetic fiber, as it is made from natural polymers.
Monomers are the building blocks of polymers. Polymers in fibers are of two types: addition or condensation. Natural fibers, such as cotton and wool, are condensation polymers, whereas synthetic fibers can be of either type. For example, acrylic and olefin fibers are addition polymers, while nylon and polyester are condensation polymers.
Types
Fiber properties
Fiber properties influence textile characteristics such as aesthetics, durability, comfort, and cost.
Fineness is one of the important characteristics of fibers: they have a high length-to-width ratio, with the length at least about 100 times the diameter. Fibers need to be strong, cohesive, and flexible. The usefulness of a fiber is characterized by parameters such as strength, flexibility, length-to-diameter ratio, and spinnability. Natural fibers are relatively short (staple) in length, whereas synthetic fibers are produced in longer lengths called filaments. Silk is the only natural fiber that is a filament. The classification of fibers is based on their origin, derivation, and generic type.
Certain properties of synthetic fibers, such as their diameter, cross section, and color, can be altered during production.
Cotton: Cotton has a long history of use in clothing due to its favorable properties. The fiber is soft, moisture-absorbent, breathable, and renowned for its durability.
Blends (blended textiles)
A blend is a fabric or yarn produced from a combination of two or more types of fiber or yarn to obtain desired traits. Blending is possible at various stages of textile manufacturing, and the final composition determines the properties of the resultant product. Natural and synthetic fibers are blended to overcome the disadvantages of a single fiber and to achieve better performance characteristics and aesthetic effects, such as devoré, heather effects, cross-dyeing, and stripe patterns. Clothing woven from a blend of cotton and polyester can be more durable and easier to maintain than material woven solely from cotton. Besides combining functional properties, blending makes products more economical.
"Union" or "union fabric" is the 19th-century term for blended fabric, though it is no longer in use. "Mixture" or "mixed cloth" is another term for blended cloths in which different types of yarn are used on the warp and weft sides.
Blended textiles are not new.
Mashru, a 16th-century fabric composed of silk and cotton, is one of the earliest forms of "mixed cloth".
Siamoise was a 17th-century cotton and linen material.
Composition
Fiber composition, the proportions of the different fibers in a blend, is an important criterion for analyzing the behavior and properties of the merchandise, such as its functional aspects, and for its commercial classification.
The most common blend is cotton and polyester. The standard blended fabric is 65% polyester and 35% cotton; it is called a reverse blend if the proportion of cotton predominates. The percentages of the fibers vary with price and required properties.
Blending adds value to the textiles; it helps in reducing the cost (artificial fibers are less expensive than natural fibers) and adding advantage in properties of the final product. For instance, a small amount of spandex adds stretch to the fabrics. Wool can add warmth.
Uses of different fibers
Natural fibers
Plant
Grass, rush, hemp, and sisal are all used in making rope. In the first two, the entire plant is used for this purpose, while in the last two, only fibers from the plant are used. Coir (coconut fiber) is used in making twine, and also in floormats, doormats, brushes, mattresses, floor tiles, and sacking.
Straw and bamboo are both used to make hats. Straw, a dried form of grass, is also used for stuffing, as is kapok.
Fibers from pulpwood trees, cotton, rice, hemp, and nettle are used in making paper.
Cotton, flax, jute, hemp, modal, banana, bamboo, lotus, eucalyptus, mulberry, and sugarcane are all used in clothing. Piña (pineapple fiber) and ramie are also fibers used in clothing, generally with a blend of other fibers such as cotton. Nettles have also been used to make a fiber and fabric very similar to hemp or flax. The use of milkweed stalk fiber has also been reported, but it tends to be somewhat weaker than other fibers like hemp or flax.
The inner bark of the lacebark tree is a fine netting that has been used to make clothing and accessories as well as utilitarian articles such as rope.
Acetate is used to increase the shininess of certain fabrics such as silks, velvets, and taffetas.
Seaweed is used in the production of textiles: a water-soluble fiber known as alginate is produced and is used as a holding fiber; when the cloth is finished, the alginate is dissolved, leaving an open area.
Rayon is a manufactured fiber derived from plant pulp. Different types of rayon can imitate feel and texture of silk, cotton, wool, or linen.
Fibers from the stalks of plants, such as hemp, flax, and nettles, are also known as 'bast' fibers. Hemp fiber is a yellowish-brown fiber made from the hemp plant. It is coarse, harsh, strong, and lightweight, and is used primarily to make twine, rope, and cordage.
Animal
Animal textiles are commonly made from hair, fur, skin, or silk (in the case of silkworms).
Wool refers to the hair of the domestic sheep or goat, which is distinguished from other types of animal hair in that the individual strands are coated with scales and tightly crimped, and the wool as a whole is coated with a wax mixture known as lanolin (sometimes called wool grease), which is waterproof and dirtproof. The lanolin and other contaminants are removed from the raw wool before further processing. Woolen refers to a yarn produced from carded, non-parallel fibre, while worsted refers to a finer yarn spun from longer fibers which have been combed to be parallel.
Other animal textiles which are made from hair or fur are alpaca wool, vicuña wool, llama wool, chiengora, shatoosh, yak fiber and camel hair, generally used in the production of coats, jackets, ponchos, blankets, and other warm coverings.
Cashmere, the hair of the cashmere goat, and mohair, the hair of the angora goat, are types of wool known for their softness. Pashmina is a very fine type of cashmere wool, used in the production of sweaters and scarves.
Angora refers to the long, thick, soft hair of the angora rabbit. Qiviut is the fine inner wool of the muskox.
Silk is an animal textile made from the fibres of the cocoon of the Chinese silkworm, which is spun into a smooth fabric prized for its softness. There are two main types of silk: 'mulberry silk' produced by Bombyx mori, and 'wild silk' such as Tussah silk. The first type is produced by silkworm larvae cultivated in habitats with fresh mulberry leaves for consumption, while Tussah silk is produced by silkworms feeding purely on oak leaves. Around four-fifths of the world's silk production consists of cultivated silk. Silk products include pillow covers, dresses, tops, skirts, bed sheets, and curtains.
Microbes
Bacterial cellulose can be made from industrial organic and agricultural waste, and used as material for textiles and clothing.
Mineral
Asbestos and basalt fibre are used for vinyl tiles, sheeting and adhesives, "transite" panels and siding, acoustical ceilings, stage curtains, and fire blankets.
Glass fibre is used in the production of ironing board and mattress covers, ropes and cables, reinforcement fibre for composite materials, insect netting, flame-retardant and protective fabric, soundproof, fireproof, and insulating fibres. Glass fibres are woven and coated with Teflon to produce beta cloth, a virtually fireproof fabric which replaced nylon in the outer layer of United States space suits since 1968.
Metal fibre, metal foil, and metal wire have a variety of uses, including the production of cloth-of-gold and jewellery. Hardware cloth (US term only) is a coarse woven mesh of steel wire, used in construction. It is much like standard window screening, but heavier and with a more open weave.
Minerals and natural and synthetic fabrics may be combined, as in emery cloth, a layer of emery abrasive glued to a cloth backing. Also, "sand cloth" is a US term for fine wire mesh with abrasive glued to it, employed like emery cloth or coarse sandpaper.
Synthetic
In the 20th century, natural fibres were supplemented by artificial fibres made from petroleum. Textiles are made in various strengths and degrees of durability, from the finest microfibre made of strands thinner than one denier to the sturdiest canvas.
Synthetic textiles are used primarily in the production of clothing, as well as the manufacture of geotextiles. Synthetic fibers are those that are constructed by humans through chemical synthesis.
Polyester fibre is used in all types of clothing, either alone or blended with fibres such as cotton.
Aramid fibre (e.g. Twaron) is used for flame-retardant clothing, cut-protection, and armour.
Acrylic is a fibre used to imitate wools, including cashmere, and is often used in replacement of them.
Nylon is a fibre used to imitate silk; it is used in the production of pantyhose. Thicker nylon fibres are used in rope and outdoor clothing.
Spandex (trade name Lycra) is a polyurethane product that can be made tight-fitting without impeding movement. It is used to make activewear, bras, and swimsuits.
Olefin fibre is a fibre used in activewear, linings, and warm clothing. Olefins are hydrophobic, allowing them to dry quickly. A sintered felt of olefin fibres is sold under the trade name Tyvek.
Ingeo is a polylactide fibre blended with other fibres such as cotton and used in clothing. It is more hydrophilic than most other synthetics, allowing it to wick away perspiration.
Lurex is a metallic fibre used in clothing embellishment.
Milk proteins have also been used to create synthetic fabric. Milk or casein fibre cloth was developed during World War I in Germany, and further developed in Italy and America during the 1930s. Milk fibre fabric is not very durable and wrinkles easily, but has a pH similar to human skin and possesses anti-bacterial properties. It is marketed as a biodegradable, renewable synthetic fibre.
Carbon fibre is mostly used in composite materials, together with resin, such as carbon fibre reinforced plastic. The fibres are made from polymer fibres through carbonization.
Production methods
Textile manufacturing has progressed from prehistoric crafts to a fully automated industry. Over the years, there have been continuous improvements in fabric structure and design.
Important parameters in fabric selection:
The primary consideration in fabric selection is the end use. The fabric needs vary greatly depending on the application. Similar types of fabric may not be suitable for all applications.
Fabric weight is an important criterion when producing different fabrics: a carpet may require a fabric of around 1300 GSM, while a robe may be made with 160 GSM, so fabrics for clothes and carpets have distinctly different weights (a worked example follows this list).
Stretchable fabrics have greater movability and are thus more comfortable than fabrics with no stretch or less stretch.
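A hypothetical worked example of the fabric-weight comparison above: GSM (grams per square metre) multiplied by area gives the weight of a piece of fabric. The piece dimensions here are invented for illustration.

```python
# Fabric weight from GSM: weight (g) = GSM (g/m^2) x area (m^2).
# The 1.5 m x 2.0 m piece is a made-up example size.

def piece_weight_g(gsm: float, width_m: float, length_m: float) -> float:
    return gsm * width_m * length_m

WIDTH_M, LENGTH_M = 1.5, 2.0  # hypothetical piece dimensions

print(f"robe fabric    (160 GSM): {piece_weight_g(160, WIDTH_M, LENGTH_M):.0f} g")   # 480 g
print(f"carpet fabric (1300 GSM): {piece_weight_g(1300, WIDTH_M, LENGTH_M):.0f} g")  # 3900 g
```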
Textile exports
According to the UN Commodity Trade Statistics Database, the global textiles and apparel export market reached $772 billion in 2013.
Changing dynamics of the market
China is the largest exporter of textile goods. Most of China's exports consist of apparel, apparel accessories, textile yarns, and textile products. The competitive advantages of China are its low prices and abundant labor, lowered commercial obstacles, and a ready supply of raw materials. China, along with the United States and India, is a major producer of cotton.
China's apparel market share has declined in recent years for various reasons, including a shift toward high-end, sophisticated products. Additionally, Chinese investors have taken stakes in garment production in Myanmar, Vietnam, and Cambodia. In 2016, China's market share was 36.7%, or $161 billion, a decline of 8% year-over-year; in other words, China lost $14 billion in garment work orders to other countries in a single year. In 2016, Bangladesh's apparel market share was valued at $28 billion, an increase of 7.69 percent from the previous year.
In 2016 the leading exporters of apparel were China ($161 billion), Bangladesh ($28 billion), Vietnam ($25 billion), India ($18 billion), Hong Kong ($16 billion), Turkey ($15 billion), and Indonesia ($7 billion).
Garment exports from Bangladesh reached a record high in the 2021–2022 fiscal year: China ($220.302 billion), Bangladesh ($38.70 billion), Pakistan ($19.33 billion), India ($8.127 billion).
Finishing
The fabric, when it leaves a loom or knitting machine, is not readily usable. It may be rough, uneven, or have flaws like skewing. Hence, it is necessary to finish the fabric. Finishing techniques enhance the value of the treated fabrics. After manufacturing, textiles undergo a range of finishing procedures, including bleaching, dyeing, printing, as well as mechanical and chemical finishing.
Coloration
Textiles are often dyed, with fabrics available in almost every colour. The dyeing process often requires several dozen gallons of water for each pound of clothing. Coloured designs in textiles can be created by weaving together fibres of different colours (tartan or Uzbek Ikat), adding coloured stitches to finished fabric (embroidery), creating patterns by resist dyeing methods, tying off areas of cloth and dyeing the rest (tie-dyeing), drawing wax designs on cloth and dyeing in between them (batik), or using various printing processes on finished fabric. Woodblock printing, still used in India and elsewhere today, is the oldest of these, dating back to at least 220 CE in China. Textiles are also sometimes bleached, making the textile pale or white.
Color matching
In textiles, color matching extends beyond selecting the appropriate dyestuffs or pigments and combining them in precise proportions to achieve the desired end-product color: meeting criteria for fastness, cost, and quality is also essential. This process plays a critical role in translating a designer's concept into an actual product.
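One widely used mathematical basis for recipe prediction in color matching is the single-constant Kubelka–Munk model, in which the absorption-to-scattering ratio K/S of a dyeing is roughly linear in the dye concentrations at each wavelength. The sketch below illustrates only that assumption; all numeric values (substrate reflectance, unit K/S values, concentrations) are hypothetical, the function names are illustrative, and real matching systems add constraints for fastness, cost, and metamerism.

```python
# Toy single-constant Kubelka-Munk colour matching at one wavelength.

def k_over_s(reflectance: float) -> float:
    """Kubelka-Munk function: K/S = (1 - R)^2 / (2R), for reflectance 0 < R <= 1."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def mixture_k_over_s(substrate_ks: float, unit_ks: list[float], conc: list[float]) -> float:
    """K/S of a dyeing: substrate term plus concentration-weighted dye terms."""
    return substrate_ks + sum(k * c for k, c in zip(unit_ks, conc))

# Hypothetical inputs: undyed cloth reflecting 80%, two dyes with unit K/S
# values of 5.0 and 2.0, applied at 1.5% and 0.5% concentration.
predicted = mixture_k_over_s(k_over_s(0.80), [5.0, 2.0], [0.015, 0.005])
print(round(predicted, 4))  # predicted K/S of the dyed fabric: 0.11
```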
Finishes
Textile finishing is the process of converting loomstate or raw goods into a useful product, which can be done mechanically or chemically. Finishing is a broad term that refers to a variety of physical and chemical techniques and treatments that complete one stage of textile production while preparing for the next. Textile finishing can include improving surface feel, aesthetic enhancement, and the addition of advanced chemical finishes. A finish is any process that transforms unfinished goods into finished goods, including mechanical finishing and chemical applications that alter the composition of the treated textiles (fiber, yarn, or fabric).
Since the 1990s, with advances in technologies such as permanent press process, finishing agents have been used to strengthen fabrics and make them wrinkle free. More recently, nanomaterials research has led to additional advancements, with companies such as Nano-Tex and NanoHorizons developing permanent treatments based on metallic nanoparticles for making textiles more resistant to things such as water, stains, wrinkles, and pathogens such as bacteria and fungi.
Textiles receive a range of treatments before they reach the end-user. From formaldehyde finishes (to improve crease-resistance) to biocidic finishes and from flame retardants to dyeing of many types of fabric, the possibilities are almost endless. However, many of these finishes may also have detrimental effects on the end user. A number of disperse, acid and reactive dyes, for example, have been shown to be allergenic to sensitive individuals. Further to this, specific dyes within this group have also been shown to induce purpuric contact dermatitis.
, meaning "iron yarn" in English, is a light-reflecting, strong material invented in Germany in the 19th century. It is made by soaking cotton threads in a starch and paraffin wax solution. The threads are then stretched and polished by steel rollers and brushes. The result of the process is a lustrous, tear-resistant yarn which is extremely hardwearing.
Environmental and health impacts
The fashion industry is often described as the world's second-biggest polluter, after the oil industry, and has several harmful impacts on the environment; as the industry grows, the effect on the environment worsens. Textile manufacturing is one of the oldest and most technologically complicated industries. This industry's fundamental strength stems from its solid manufacturing base covering a diverse range of fibres and yarns, from natural fibres such as jute, silk, wool, and cotton to synthetic or manufactured fibres such as polyester, viscose, nylon, and acrylic.
Textile mills and their wastewater have grown in proportion to the increase in demand for textile products, generating a severe pollution concern around the world. Numerous textile industry chemicals pose environmental and health risks. Among the compounds in textile effluent, dyes are considered significant contaminants. Water pollution generated by the discharge of untreated wastewater and the use of toxic chemicals, particularly during processing, account for the majority of the global environmental concerns linked with the textile industry.
Environmental impacts
Clothing meets a fundamental human need. Increased population and living standards have increased the need for clothing, raising the demand for textile manufacturing; wet processing in particular consumes large volumes of water. Conventional machinery and treatment procedures use enormous quantities of water, especially for natural fibres, which can require up to 150 kg of water per kg of material.
The textile sector is accountable for a substantial number of environmental impacts. However, the discharge of untreated effluents into water bodies is responsible for the majority of environmental harm produced by the textile sector.
The textile sector is believed to use 79 trillion litres of water per year and to discharge around 20% of all industrial effluent into the environment. Most dyes used in textile coloration are reportedly aromatic and heterocyclic compounds bearing color-displaying and polar groups; their complex, stable structures make printing and dyeing wastewater difficult to degrade.
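The scale these figures imply can be checked with back-of-the-envelope arithmetic. In the sketch below, the 150-litres-per-kilogram upper bound (taking 1 kg of water as 1 litre) and the 79-trillion-litre sector total come from the text, while the annual production tonnage is a hypothetical input.

```python
# Rough water-footprint arithmetic for textile wet processing.
WATER_L_PER_KG = 150.0   # upper bound from the text (1 kg of water ~ 1 litre)
SECTOR_WATER_L = 79e12   # ~79 trillion litres per year, as reported

fabric_kg = 10_000 * 1000                 # a hypothetical 10,000-tonne annual output
water_l = fabric_kg * WATER_L_PER_KG
print(water_l / 1e9, "billion litres")    # 1.5 billion litres of process water
print(100 * water_l / SECTOR_WATER_L, "% of the sector's reported use")  # ~0.0019%
```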
In addition, textiles constitute a significant percentage of landfill waste. In 2023, North Carolina State University researchers used enzymes to separate cotton from polyester in an early step towards reducing textile waste, allowing each material to be recycled.
Health impacts
Many kinds of respiratory diseases, skin problems, and allergies may be caused by dyes and pigments discharged into the water.
Although formaldehyde levels in clothing are unlikely to be high enough to cause an allergic reaction, the presence of such a chemical means that quality control and testing are of utmost importance. Flame retardants (mainly in brominated form) are also of concern for the environment and for their potential toxicity.
Chemicals use, advantage and health impacts
Certain chemical finishes pose potential hazards to health and the environment. Perfluorinated acids, for example, are considered hazardous to human health by the US Environmental Protection Agency.
Testing
Testing for these additives is possible at a number of commercial laboratories. It is also possible to have textiles tested according to the Oeko-Tex certification standard, which sets limit values for certain chemicals in textile products.
Laws and regulations
Different countries have laws and regulations to protect consumers' interests. In the United States, the Textile Fiber Products Identification Act protects producer and consumer interests by imposing labelling (required content disclosure) and advertising requirements on textile products. It applies to all textile fiber products except wool, which is governed by the Wool Products Labeling Act. The law prohibits misinformation about fiber content, misbranding, and unfair advertising practices, and requires businesses to operate in a particular manner.
Testing of textiles
Testing occurs at various stages of the textile manufacturing process, from raw material to finished product. The purpose of testing is to evaluate regulatory compliance and the product's quality and performance, as well as to measure its specifications. Textile testing encompasses a wide range of methodologies, procedures, equipment, and sophisticated laboratories. Local governments and authorized organizations such as ASTM International, the International Organization for Standardization, and the American Association of Textile Chemists and Colorists establish standards for the testing of textiles.
Some examples of tests at different stages:
For fiber: Fiber identification is a necessary test for determining fiber content and classifying products. Labelling items with their fiber content percentage is a regulatory requirement. Fibers are distinguished from one another using microscopy, solubility, and burn tests. Other fiber-related tests include fiber length, diameter, and Micronaire value.
For yarn: Yarn count (e.g., tex or denier), strength, evenness (see the conversion sketch after this list).
For fabric: Dimensional stability, color fastness, thread count, GSM (grams per square metre), pilling, flammability.
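The yarn-count systems mentioned above are related by fixed conversion factors: tex is grams per 1,000 m of yarn, denier is grams per 9,000 m, and the indirect English cotton count (Ne) is conventionally 590.5 divided by tex, so finer yarns get higher Ne numbers. A minimal sketch of these standard conversions (function names are illustrative):

```python
# Standard yarn-count conversions between tex, denier, and English cotton count.

def tex_to_denier(tex: float) -> float:
    return tex * 9.0          # denier weighs 9x the length that tex does

def denier_to_tex(den: float) -> float:
    return den / 9.0

def tex_to_ne(tex: float) -> float:
    return 590.5 / tex        # indirect count: inversely proportional to tex

print(tex_to_denier(20.0))        # a 20 tex yarn is 180 denier
print(round(tex_to_ne(20.0), 1))  # and about Ne 29.5
```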
Picture gallery
| Technology | Materials | null |
51894 | https://en.wikipedia.org/wiki/Putrescine | Putrescine | Putrescine is an organic compound with the formula (CH2)4(NH2)2. It is a colorless solid that melts near room temperature. It is classified as a diamine. Together with cadaverine, it is largely responsible for the foul odor of putrefying flesh, but also contributes to other unpleasant odors.
Production
Putrescine is produced on an industrial scale by the hydrogenation of succinonitrile.
Biotechnological production of putrescine from a renewable feedstock has been investigated. A metabolically engineered strain of Escherichia coli that produces putrescine at high concentrations in glucose mineral salts medium has been described.
Biochemistry
Spermidine synthase uses putrescine and S-adenosylmethioninamine (decarboxylated S-adenosyl methionine) to produce spermidine. Spermidine in turn is combined with another S-adenosylmethioninamine and gets converted to spermine.
Putrescine is synthesized in small quantities by healthy living cells by the action of ornithine decarboxylase.
Putrescine is synthesized biologically via two different pathways, both starting from arginine.
In one pathway, arginine is converted into agmatine, in a reaction catalyzed by the enzyme arginine decarboxylase (ADC). Agmatine is then transformed into N-carbamoylputrescine by agmatine iminohydrolase (AIH). Finally, N-carbamoylputrescine is hydrolyzed to give putrescine.
In the second pathway, arginine is converted into ornithine and then ornithine is converted into putrescine by ornithine decarboxylase (ODC).
Putrescine can act as a minor biological precursor of γ-aminobutyric acid (GABA) in the brain and elsewhere, via metabolic intermediates including N-acetylputrescine, γ-aminobutyraldehyde (GABAL), N-acetyl-γ-aminobutyraldehyde (N-acetyl-GABAL), and N-acetyl-γ-aminobutyric acid (N-acetyl-GABA), in biotransformations mediated by diamine oxidase (DAO), monoamine oxidase B (MAO-B), aminobutyraldehyde dehydrogenase (ABALDH), and other enzymes. In 2021, it was discovered that MAO-B does not mediate dopamine catabolism in the rodent striatum but instead participates in striatal GABA synthesis, and that the synthesized GABA in turn inhibits dopaminergic neurons in this brain area. MAO-B, via the putrescine pathway, has been found to importantly mediate GABA synthesis in astrocytes in various brain areas, including the hippocampus, cerebellum, striatum, cerebral cortex, and substantia nigra pars compacta (SNpc).
Occurrence
Putrescine is found in all organisms. It is widely found in plant tissues, often being the most common polyamine present within the organism. Its role in development is well documented, but recent studies suggest that putrescine also plays a role in plant stress responses, to both biotic and abiotic stressors. The absence of putrescine in plants is associated with increased parasite and fungal populations.
Putrescine serves important roles in a multitude of ways: as a cation substitute, as an osmolyte, and in transport processes. It also serves as an important regulator of a variety of surface proteins, both on the cell surface and on organelles such as the mitochondria and chloroplasts. Increases in mitochondrial and chloroplastic putrescine have been associated with increased ATP production in mitochondria and ATP synthesis by chloroplasts, but putrescine has also been shown to function as a developmental inhibitor in some plants, which can be seen as dwarfism and late flowering in Arabidopsis plants.
Putrescine production in plants can also be promoted by fungi in the soil. Piriformospora indica (P. indica) is one such fungus, found to promote putrescine production in Arabidopsis and common garden tomato plants. In a 2022 study it was shown that the presence of this fungus had a promotional effect on the growth of the root structure of plants. After gas chromatography testing, putrescine was found in higher amounts in these root structures.
Plants inoculated with P. indica presented elevated levels of arginine decarboxylase, which is used in the process of making putrescine in plant cells. One of the downstream effects of putrescine in root cells is the production of auxin. The same study found that putrescine added as a fertilizer produced the same results as inoculation with the fungus, an effect also shown in Arabidopsis and barley. The evolutionary foundations of this connection are still unclear.
Putrescine is a component of bad breath and bacterial vaginosis. It is also found in semen and some microalgae, together with spermine and spermidine.
Uses
Putrescine reacts with adipic acid to yield the polyamide nylon 46, which is marketed by Envalior (formerly DSM) under the trade name Stanyl.
Application of putrescine, along with other polyamines, can be used to extend the shelf life of fruits by delaying the ripening process. Pre-harvest application of putrescine has been shown to increase plant resistance to high temperatures and drought. Both of these effects seem to result from lowered ethylene production following exogenous putrescine exposure.
Due to its role in putrefaction, putrescine has also been proposed as a biochemical marker for determining how long a corpse has been decomposing.
Putrescine together with chitosan has been successfully used in postharvest physiology as a natural fruit coating. Fruits treated with putrescine and chitosan had higher antioxidant capacity and enzyme activities than untreated fruits. Coated fresh strawberries show a lower decay percentage, higher tissue firmness, and higher total soluble solids content. Nanoparticles of putrescine with chitosan are effective in preserving nutritional quality and prolonging the post-harvest life of strawberries during storage for up to 12 days.
History
Putrescine and cadaverine were first described in 1885 by the Berlin physician Ludwig Brieger (1849–1919).
Toxicity
In rats, putrescine has a low acute oral toxicity of 2000 mg/kg body weight, with a no-observed-adverse-effect level (NOAEL) of 2000 ppm in the diet (180 mg/kg body weight per day).
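The two NOAEL figures are linked by the standard diet-to-dose conversion used in animal toxicology. In the sketch below, the feed-intake factor of 90 g of feed per kg body weight per day (9% of body weight, plausible for growing rats) is an assumption back-calculated from the quoted numbers, not a value from the source.

```python
# Converting a dietary concentration (ppm in feed) to a dose (mg/kg bw/day).
# 1 ppm in feed = 1 mg of substance per kg of feed.

def dietary_ppm_to_dose(ppm: float, feed_g_per_kg_bw_day: float) -> float:
    """Dose in mg per kg body weight per day."""
    return ppm * feed_g_per_kg_bw_day / 1000.0

print(dietary_ppm_to_dose(2000, 90.0))  # 180.0 mg/kg bw/day, matching the text
```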
| Physical sciences | Amides and amines | Chemistry |
51896 | https://en.wikipedia.org/wiki/Cadaverine | Cadaverine | Cadaverine is an organic compound with the formula (CH2)5(NH2)2. Classified as a diamine, it is a colorless liquid with an unpleasant odor. It is present in small quantities in living organisms but is often associated with the putrefaction of animal tissue. Together with putrescine, it is largely responsible for the foul odor of putrefying flesh, but also contributes to other unpleasant odors.
Production
Cadaverine is produced by decarboxylation of lysine. It can be synthesized by many methods including the hydrogenation of glutaronitrile and the reactions of 1,5-dichloropentane.
History
Putrescine and cadaverine were first described in 1885 by the Berlin physician Ludwig Brieger (1849–1919). It was named from the English adjective cadaverous.
Receptors
In zebrafish, the trace amine-associated receptor 13c (or TAAR13c) has been identified as a high-affinity receptor for cadaverine. In humans, molecular modelling and docking experiments have shown that cadaverine fits into the binding pocket of the human TAAR6 and TAAR8.
Clinical significance
Seminal plasma contains cadaverine as one of its basic amines. Elevated levels of cadaverine have been found in the urine of some patients with defects in lysine metabolism. The odor commonly associated with bacterial vaginosis has been linked to cadaverine and putrescine.
Derivatives
Pentolinium and pentamethonium.
Toxicity
The acute oral toxicity of cadaverine is 2,000 mg/kg body weight; its no-observed-adverse-effect level is 2,000 ppm (180 mg/kg body weight per day).
| Physical sciences | Amides and amines | Chemistry |
51916 | https://en.wikipedia.org/wiki/Transcontinental%20railroad | Transcontinental railroad | A transcontinental railroad or transcontinental railway is contiguous railroad trackage that crosses a continental land mass and has terminals at different oceans or continental borders. Such networks may be via the tracks of a single railroad, or via several railroads owned or controlled by multiple railway companies along a continuous route. Although Europe is crisscrossed by railways, the railroads within Europe are usually not considered transcontinental, with the possible exception of the historic Orient Express.
Transcontinental railroads helped open up interior regions of continents not previously colonized to exploration and settlement that would not otherwise have been feasible. In many cases, they also formed the backbones of cross-country passenger and freight transportation networks. Many of them continue to have an important role in freight transportation, and some such as the Trans-Siberian Railway even have passenger trains going from one end to the other.
Africa
East-west
There are several ways to cross Africa transcontinentally via connecting east–west railways. One is the Benguela railway, completed in 1929. It starts in Lobito, Angola, and connects through Katanga to the Zambian railway system. From Zambia several ports on the Indian Ocean are accessible: Dar es Salaam in Tanzania through the TAZARA and, through Zimbabwe, Beira and Maputo in Mozambique. The Angolan Civil War left the Benguela line largely inoperative, but efforts are being made to restore it. Another west–east corridor leads from the Atlantic harbours in Namibia, either Walvis Bay or Lüderitz, to the South African rail system, which in turn links to ports on the Indian Ocean (e.g., Durban and Maputo).
A 1,015 km gap in the east–west line between Kinshasa and Ilebo, currently bridged by riverboats, could be closed with a new railway.
There are two proposals for a line from the Red Sea to the Gulf of Guinea, including TransAfricaRail.
In 2010, a proposal sought to link Dakar to Port Sudan. Thirteen countries would be on the main route; another six would be served by branches.
North-south
A north-south transcontinental railway was proposed by Cecil Rhodes, who termed it the Cape-Cairo railway. This system would have provided a direct route from the northernmost British possession in Africa, Egypt, to the southernmost one, the Cape Colony. The project was never completed. During its development, a rival French colonial project for a line from Algiers or Dakar to Abidjan was abandoned after the Fashoda incident. This line would have had four gauge islands in three gauges.
As of 2006, an extension of Namibian Railways was being built, with a possible connection to Angolan Railways.
Libya has proposed a Trans-Saharan Railway, possibly connecting to Nigeria, which would link with the proposed AfricaRail network.
African Union of Railways
The African Union of Railways has plans to connect the various railways of Africa including the Dakar-Port Sudan Railway.
Australia
East-west
Australia's east–west transcontinental rail corridor, consisting of lines built to three different track gauges, was completed in 1917, when the Trans-Australian Railway was opened between Port Augusta, South Australia and Kalgoorlie, Western Australia. This line, built by the federal government as a federation commitment, filled the last gap in the lines between the mainland state capitals of Brisbane, Sydney, Melbourne, Adelaide and Perth. Passengers and freight alike suffered from time-consuming breaks of gauge: a Perth–Brisbane journey at that time involved two standard gauge 1435 mm (4 ft 8½ in) lines, a broad gauge 1600 mm (5 ft 3 in) line, and three lines of 1067 mm (3 ft 6 in) gauge.
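The metric and imperial gauge figures quoted in this section can be cross-checked using the exact definition of 25.4 mm per inch; a small sketch:

```python
# Converting the track gauges mentioned above from millimetres to feet and inches.
MM_PER_INCH = 25.4  # exact by definition

def mm_to_ft_in(mm: float) -> tuple[int, float]:
    inches = mm / MM_PER_INCH
    return int(inches // 12), round(inches % 12, 2)

for gauge_mm in (1435, 1600, 1067):
    print(gauge_mm, "mm ->", mm_to_ft_in(gauge_mm))
# 1435 mm -> (4, 8.5)   standard gauge, 4 ft 8 1/2 in
# 1600 mm -> (5, 2.99)  broad gauge, nominally 5 ft 3 in (exactly 1600.2 mm)
# 1067 mm -> (3, 6.01)  narrow gauge, nominally 3 ft 6 in (exactly 1066.8 mm)
```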
In the 1940s and 1960s, steps were taken to progressively reduce the huge inefficiencies caused by the numerous historically imposed breaks of gauge by linking the mainland capital cities with lines all of standard gauge.
In 1970, the route across the continent was completed to standard gauge and a new, all-through passenger train, the Indian Pacific, was inaugurated.
An east–west transcontinental line across northern Australia from the Pilbara to the east coast – more than 1000 km (600 mi) north of the Sydney-Perth rail corridor – was proposed in 2006 by Project Iron Boomerang to connect iron ore mining in the Pilbara and coal mining in the Bowen Basin in Queensland, with steel manufacturing plants at both ends.
North–south
Australia's north–south transcontinental rail corridor was built in stages during the 20th century, leaving a gap to be finished after the Tarcoola to Alice Springs section was completed in 1980. That final section, from Alice Springs to Darwin, was opened in 2004. The corridor now runs the full distance from Adelaide to Darwin. Its completion ended 126 years of freight and passengers alike having to be transferred between trains on tracks of different gauges: the corridor is now entirely 1435 mm (4 ft 8½ in) standard gauge. The corridor is an important route for freight. An upmarket experiential tourism passenger train, The Ghan, operated by Journey Beyond, makes the journey once a week in each direction from Adelaide to Darwin, and the company's east–west Indian Pacific runs on the southernmost section of the corridor before heading west to Perth. There is no intermediate passenger traffic on the line.
In 2018, the Australian Rail Track Corporation started building a standard gauge fast-freight railway from Melbourne to Brisbane, known as the Inland Railway. At last report, completion was anticipated in 2027.
Eurasia
Europe
The first transcontinental railroad in Europe, connecting the North Sea or the English Channel with the Mediterranean Sea, was a series of lines that included the Paris–Marseille railway, in service from 1856. Multiple railways north of Paris were in operation at that time, such as the Paris–Lille railway and the Paris–Le Havre railway.
The second connection between the seas of Northern Europe and the Mediterranean Sea was a series of lines finalized in 1857 with the Austrian Southern Railway, Vienna–Trieste. Before that, there were already railroad connections from Hamburg via Berlin and Wrocław to Vienna (including the Berlin–Hamburg Railway, the Berlin–Wrocław railway, the Upper Silesian Railway and the Emperor Ferdinand Northern Railway). The Baltic Sea was also connected through the Lübeck–Lüneburg railway.
Trans-Eurasia
The Trans-Siberian Railway, completed in 1905, was the first network of railways connecting Europe and Asia. It connects Western Russia to the Russian Far East and is the longest railway line in the world. The railway starts from Russia's capital Moscow, the largest city in Europe, and ends at Vladivostok, on the coast of the Pacific Ocean. Expansion of the railway system continues, with connecting rails going into Asia, namely Mongolia, China and North Korea. There are also plans to connect Tokyo, the capital of Japan, to the railway.
A second rail line connects Istanbul in Turkey with China via Iran, Turkmenistan, Uzbekistan and Kazakhstan. This route imposes a break of gauge at the Iranian border with Turkmenistan and at the Chinese border. En route there is a train ferry across Lake Van in eastern Turkey. The European and Asian parts of Istanbul have been linked since 2019 by the Marmaray undersea tunnel; before that, the connection was made by train ferry. There is no through service of passenger trains on the entire line. A uniform-gauge connection was proposed in 2006, commencing with new construction in Kazakhstan. A decision to make the internal railways of Afghanistan standard gauge potentially opens up a new standard gauge route to China, since China abuts that country.
The Trans-Asian Railway is a project to link Singapore to Istanbul; it is to a large degree complete, with the missing pieces primarily in Myanmar. The project also has linking corridors to China, the Central Asian states, and Russia. This transcontinental line unfortunately uses a number of different track gauges, though this problem may be lessened by the use of variable gauge axle systems such as the SUW 2000.
The TransKazakhstan Trunk Railways project by Kazakhstan Temir Zholy will connect China and Europe with a standard gauge line. Construction was set to start in 2006. Initially the line will go to western Kazakhstan, then south through Turkmenistan to Iran, and on to Turkey and Europe. A shorter, yet-to-be-constructed link from Kazakhstan through Russia and either Belarus or Ukraine is also under consideration.
The Baghdad Railway connects Istanbul with Baghdad and finally Basra, a sea port on the Persian Gulf. When its construction started in the 1880s, it was considered a transcontinental railroad.
North America
United States
A transcontinental railroad in the United States is any continuous rail line connecting a location on the U.S. Pacific coast with one or more of the railroads of the nation's eastern trunk line rail systems operating between the Missouri or Mississippi Rivers and the U.S. Atlantic coast. The first concrete plan for a transcontinental railroad in the United States was presented to Congress by Asa Whitney in 1845.
A series of transcontinental railroads built over the last third of the 19th century created a nationwide transportation network that united the country by rail. The first of these, the "Pacific Railroad", was built by the Central Pacific Railroad and Union Pacific Railroad, as well as the Western Pacific Railroad (1862-1870), to link the San Francisco Bay at Alameda, California, with the nation's existing eastern railroad network at Omaha, Nebraska/Council Bluffs, Iowa — thereby creating the world's second transcontinental railroad when it was completed from Omaha to Alameda on September 6, 1869. (The first transcontinental railroad was the Panama Railroad of 1855.) Its construction was made possible by the US government under Pacific Railroad Acts of 1862, 1864, and 1867. Its original course was very close to current Interstate 80.
Transcontinental railroad
The United States' first transcontinental railroad was built between 1863 and 1869, connecting the existing eastern U.S. rail network at Council Bluffs, Iowa, with the Pacific coast at the Oakland Long Wharf on San Francisco Bay. Its construction was considered to be one of the greatest American technological feats of the 19th century. Known as the "Pacific Railroad" when it opened, it served as a vital link for trade, commerce, and travel and opened up vast regions of the North American heartland for settlement. Much of the original route, especially on the Sierra grade west of Reno, Nevada, is currently used by Amtrak's California Zephyr, although many parts have been rerouted.
The resulting coast-to-coast railroad connection revolutionized the settlement and economy of the American West. It brought the western states and territories into alignment with the northern Union states and made transporting passengers and goods coast-to-coast considerably quicker, safer and less expensive. It replaced most of the far slower and more hazardous stagecoach lines and wagon trains. The number of emigrants taking the Oregon and California Trails declined dramatically. The sale of the railroad land grant lands and the transport provided for timber and crops led to the rapid settling of the "Great American Desert".
The Union Pacific recruited laborers from Army veterans and Irish immigrants, while most of the engineers were ex-Army men who had learned their trade keeping the trains running during the American Civil War.
The Central Pacific Railroad faced a labor shortage in the more sparsely settled West. It recruited Cantonese laborers in China, who built the line over and through the Sierra Nevada mountains and then across Nevada to their meeting in northern Utah. Chinese workers made up ninety percent of the workforce on the line. The Chinese Labor Strike of 1867 was peaceful, with no violence, organized across the entire Sierra Nevada route, and was carried out according to a peaceful Confucian model of protest. The strike began with the Summer Solstice in June, 1867 and lasted for eight days.
Land Grants
The Transcontinental Railroad required land and a complex federal policy for purchasing, granting, and conveying it.
Some of these land-related acts included:
One motive for the Gadsden Purchase of land from Mexico in 1853 was to obtain suitable terrain for a southern transcontinental railroad, as the southern portion of the Mexican Cession was too mountainous. The Southern Pacific Railroad was completed in 1881.
The Pacific Railroad Act of 1862 (based on an earlier bill in 1856) authorized land grants for new lines that would "aid in the construction of a railroad and telegraph line from the Missouri river to the Pacific ocean".
The rails of the "first transcontinental railroad" were joined on May 10, 1869, with the ceremonial driving of the "Last Spike" at Promontory Summit, Utah, after track was laid over a gap between Sacramento and Omaha, Nebraska/Council Bluffs, Iowa in six years by the Union Pacific Railroad and Central Pacific Railroad. Although through train service between Omaha and Sacramento was in operation as of that date, the road was not completed to the Pacific Ocean until September 6, 1869, when the first through train reached San Francisco Bay at Alameda Terminal, and on November 8, 1869, when it reached the terminus at Oakland Long Wharf. Later, November 6, 1869, was deemed to be the official completion date of the Pacific Railroad. (A physical connection between Omaha, Nebraska, and the statutory Eastern terminus of the Pacific road at Council Bluffs, Iowa, located immediately across the Missouri River was also not finally established until the opening of UPRR railroad bridge across the river on March 25, 1873, prior to which transfers were made by ferry operated by the Council Bluffs & Nebraska Ferry Company.)
The first permanent, continuous line of railroad track from coast to coast was completed 15 months later, on August 15, 1870, by the Kansas Pacific Railroad near its crossing of Comanche Creek at Strasburg, Colorado. This route connected to the eastern rail network via the Hannibal Bridge across the Missouri River at Kansas City, completed June 30, 1869, passed through Denver, Colorado, and ran north to the Union Pacific Railroad at Cheyenne, Wyoming, making it theoretically possible for the first time to board a train at Jersey City, New Jersey, travel entirely by rail, and step down at the Alameda Wharf on San Francisco Bay in Oakland. This distinction lasted until March 25, 1873, when the Union Pacific constructed the Missouri River Bridge in Omaha.
Subsequent transcontinental routes
Almost 12 years after Promontory Summit, the Southern Pacific Railroad (SP) constructed the second transcontinental railroad, building eastwards through the Gadsden Purchase, which had been acquired from Mexico in 1854 largely with the intention of providing a route for a railroad connecting California with the Southern states. This line was completed with milestones and ceremonies in 1881 and 1883:
March 8, 1881: the SP met the Rio Grande, Mexico and Pacific Railroad (a subsidiary of the Atchison, Topeka and Santa Fe Railway) with a "silver spike" ceremony at Deming, New Mexico, connecting Atchison, Kansas, to Los Angeles.
December 15, 1881: the SP met the Texas and Pacific Railway (T&P) at Sierra Blanca, Texas, connecting eastern Texas to Los Angeles.
January 12, 1883: the SP completed its own southern section, meeting its subsidiary Galveston, Harrisburg and San Antonio Railway at the Pecos River in Texas, and linking New Orleans to Los Angeles.
In Colorado, the 3-foot gauge Denver & Rio Grande (D&RG) extended its route from Denver via Pueblo across the Rocky Mountains to Grand Junction in 1882. In central Utah, the D&RG acquired a number of independent narrow gauge companies, which were incorporated into the first (1881-1889) Denver and Rio Grande Western Railway (D&RGW). Tracks were extended north through Salt Lake City, while simultaneously building south and eastward toward Grand Junction. The D&RG and the D&RGW were linked on March 30, 1883, the extension to Ogden (where it met the Central Pacific) was completed on May 14, 1883, and through traffic between Denver and Ogden began a few days later. The break of gauge made direct interchange of rolling stock with standard gauge railroads at both ends of this bridge line impossible for several years. The D&RG in 1887 began rebuilding its mainline in standard gauge, including a new route and tunnel at Tennessee Pass. The first D&RGW was reincorporated as the Rio Grande Western (RGW) in June 1889 and immediately began the conversion of track gauge. Standard gauge operations linking Ogden and Denver were completed on November 15, 1890.
The Atlantic and Pacific Railroad completed its route connecting the AT&SF at Albuquerque, New Mexico, via Flagstaff, Arizona, to the Southern Pacific at Needles, California, on August 9, 1883. The SP line into Barstow was leased by the A&P in 1884 (and purchased in 1911); this gave the AT&SF (the A&P's parent company) a direct route into Southern California. This route now forms the western portion of BNSF's Southern Transcon.
The Northern Pacific Railway (NP) completed the fifth independent transcontinental railroad on August 22, 1883, linking Chicago with Seattle. The Completion Ceremony was held on September 8, 1883, with former U.S. President Ulysses S. Grant contributing to driving the Final Spike.
The California Southern Railroad (chartered January 10, 1882) was completed from National City on San Diego Bay via Temecula Cañon to Colton and San Bernardino in September 1883, and extended through the Cajon Pass to Barstow, a junction with the Atlantic and Pacific Railroad, in November 1885. In September 1885, the Southern Pacific's line from Colton to Los Angeles had been leased by the California Central with equal rights and privileges, thus allowing the Santa Fe's transcontinental route to be completed by the connection with the California Southern and the A&PRR. The SP grade was used until the completion in June 1887 of the California Central's own line between San Bernardino and Los Angeles, which incorporated part of the old Los Angeles and San Gabriel Valley Railroad, acquired by purchase. In August 1888, the California Central completed its Coast Division south from Los Angeles to a junction with the California Southern Railroad near Oceanside, and these two divisions comprised the main line of the California Central, forming, in connection with the California Southern, a direct line between Southern California and the East by way of the Atlantic and Pacific and the Atchison, Topeka, and Santa Fe railroads.
The Great Northern Railway was built, without federal aid, by James J. Hill in 1893; it stretched from St. Paul to Seattle.
The Chicago, Rock Island & Pacific reached Santa Rosa, New Mexico, from the east in late 1901, shortly before the El Paso & Northeastern arrived from the southwest. The two were connected on February 1, 1902, thus forming an additional link between the Midwest and southern California. Through passenger service was provided by the Golden State Limited (Chicago—Kansas City—Tucumcari—El Paso—Los Angeles) jointly operated by the Rock Island and the Southern Pacific (EP&NE's successor) from 1902 to 1968.
The San Pedro, Los Angeles & Salt Lake Railroad completed its line connecting Los Angeles through Las Vegas to Salt Lake City on May 1, 1905. Through passenger service from Chicago to Los Angeles was provided by Union Pacific's Los Angeles Limited from 1905 to 1954, and the City of Los Angeles from 1936 to 1971.
The Western Pacific Railway (WP), financed by the Denver & Rio Grande on behalf of the Gould System, completed its new line (the Feather River Route) from Oakland to Ogden in 1909, in direct competition with the Southern Pacific's existing route. Through passenger service (Oakland-Salt Lake City-Denver-Chicago) was provided by the Exposition Flyer 1939 to 1949 and its successor, the California Zephyr 1949 to 1970, both jointly operated by the WP, the D&RGW and the Chicago, Burlington & Quincy.
In 1909, the Chicago, Milwaukee & St. Paul (or Milwaukee Road) completed a privately built Pacific extension to Seattle. On completion, the line was renamed the Chicago, Milwaukee, St. Paul and Pacific. Although the Pacific Extension was privately funded, predecessor roads did benefit from the federal land grant act, so it cannot be said to have been built without federal aid.
John D. Spreckels completed his privately funded San Diego and Arizona Railway in 1919, thereby creating a direct link (via connection with the Southern Pacific lines) between San Diego, California, and the Eastern United States. The railroad stretched from San Diego to Calexico, California, with part of the route running south of the border in Mexico.
In 1993, Amtrak's Sunset Limited daily railroad train was extended eastward to Miami, Florida, later rerouted to Orlando, making it the first regularly scheduled transcontinental passenger train route in the United States to be operated by a single company. Hurricane Katrina cut this rail route in Louisiana in 2005. The train now runs from Los Angeles to New Orleans.
For a time in 1997 and 1998, Amtrak effectively operated the Washington-Chicago Capitol Limited and Chicago-Los Angeles Southwest Chief as a single train.
The Gould System
George J. Gould attempted to assemble a truly transcontinental system in the 1900s. The line from San Francisco, California, to Toledo, Ohio, was completed in 1909, consisting of the Western Pacific Railway, Denver and Rio Grande Railroad, Missouri Pacific Railroad, and Wabash Railroad. Beyond Toledo, the planned route would have used the Wheeling and Lake Erie Railroad (1900), Wabash Pittsburgh Terminal Railway, Little Kanawha Railroad, West Virginia Central and Pittsburgh Railway, Western Maryland Railroad, and Philadelphia and Western Railway, but the Panic of 1907 strangled the plans before the Little Kanawha section in West Virginia could be finished. The Alphabet Route was completed in 1931, providing the portion of this line east of the Mississippi River. With the merging of the railroads, only the Union Pacific Railroad and the BNSF Railway remain to carry the entire route.
Canada
The completion of Canada's first transcontinental railway with the driving of the Last Spike at Craigellachie, British Columbia, on November 7, 1885, was an important milestone in Canadian history. Between 1881 and 1885, the Canadian Pacific Railway (CPR) completed a line that spanned from the port of Montreal to the Pacific coast, fulfilling a condition of British Columbia's 1871 entry into the Canadian Confederation. The City of Vancouver, incorporated in 1886, was designated the western terminus of the line. The CPR became the first transcontinental railway company in North America in 1889 after its International Railway of Maine opened, connecting CPR to the Atlantic coast.
The construction of a transcontinental railway strengthened the connection of British Columbia and the North-West Territories to the country they had recently joined, and acted as a bulwark against potential incursions by the United States.
Subsequently, two other transcontinental lines were built in Canada: the Canadian Northern Railway (CNoR) opened another line to the Pacific in 1915, and the combined Grand Trunk Pacific Railway (GTPR)/National Transcontinental Railway (NTR) system opened in 1917 following the completion of the Quebec Bridge, although its line to the Pacific had opened in 1914. The CNoR, GTPR, and NTR were nationalized to form the Canadian National Railway, which is now Canada's largest transcontinental railway, with lines running all the way from the Pacific coast to the Atlantic coast.
South and Central America
There is activity to revive the connection between Valparaíso and Santiago in Chile and Mendoza, Argentina, through the Transandino project. Mendoza has an active connection to Buenos Aires. The old Transandino opened in 1910; it ceased passenger service in 1978 and freight four years later. Technically a complete transcontinental link exists from Arica, Chile, via La Paz, Bolivia, to Buenos Aires, but this trans-Andean crossing is for freight only.
On December 6, 2017, the Brazilian President Michel Temer and his Bolivian counterpart Evo Morales signed an agreement for an Atlantic–Pacific railway, planned to be 3,750 km in length, with construction to start in 2019 and finish in 2024. Two possible routes are under discussion: both have their Atlantic terminus at Santos, Brazil, but the Pacific terminus would be at either Ilo or Matarani in Peru.
Another longer Transcontinental freight-only railroad linking Lima, Peru, to Rio de Janeiro, Brazil is under development.
Panama
The first railroad to directly connect two oceans (although not by crossing a broad "continental" land mass) was the Panama Canal Railway. Opened in 1855, this line was instead designated an "inter-oceanic" railroad, crossing the country at its narrowest point, the Isthmus of Panama, when that area was still part of Colombia. (Panama split off from Colombia in 1903 and became the independent Republic of Panama.) By spanning the isthmus, the line became the first railroad to completely cross any part of the Americas and physically connect ports on the Atlantic and Pacific Oceans. Given the tropical rain forest environment, the terrain, and diseases such as malaria and cholera, its completion was a considerable engineering challenge. Construction took five years after ground was first broken for the line in May 1850, cost eight million dollars, and required more than seven thousand workers drawn from "every quarter of the globe."
This railway was built to provide a shorter and more secure path between the United States' East and West Coasts. This need was mainly triggered by the California Gold Rush. Over the years the railway played a key role in the construction and the subsequent operation of the Panama Canal, due to its proximity to the canal. Currently, the railway operates under the private administration of the Panama Canal Railroad Company, and its upgraded capacity complements the cargo traffic through the Panama Canal.
Guatemala
A second Central American inter-oceanic railroad began operation in 1908 as a connection between Puerto San José and Puerto Barrios in Guatemala, but ceased passenger service to Puerto San José in 1989.
Costa Rica
A third Central American inter-oceanic railroad began operation in 1910 as a connection between Puntarenas and Limón. As of 2019, it sees no passenger service.
| Technology | Ground transportation networks | null |
51944 | https://en.wikipedia.org/wiki/Dichroism | Dichroism | In optics, a dichroic material is either one which causes visible light to be split up into distinct beams of different wavelengths (colours) (not to be confused with dispersion), or one in which light rays having different polarizations are absorbed by different amounts.
In beam splitters
The original meaning of dichroic, from the Greek dikhroos, two-coloured, refers to any optical device which can split a beam of light into two beams with differing wavelengths. Such devices include mirrors and filters, usually treated with optical coatings, which are designed to reflect light over a certain range of wavelengths and transmit light which is outside that range. An example is the dichroic prism, used in some camcorders, which uses several coatings to split light into red, green and blue components for recording on separate CCD arrays; however, it is now more common to have a Bayer filter filtering individual pixels on a single CCD array. This kind of dichroic device does not usually depend on the polarization of the light. The term dichromatic is also used in this sense.
With polarized light
The second meaning of dichroic refers to the property of a material, in which light in different polarization states traveling through it experiences a different absorption coefficient; this is also known as diattenuation. When the polarization states in question are right and left-handed circular polarization, it is then known as circular dichroism (CD). Most materials exhibiting CD are chiral, although non-chiral materials showing CD have been recently observed. Since the left- and right-handed circular polarizations represent two spin angular momentum (SAM) states, in this case for a photon, this dichroism can also be thought of as spin angular momentum dichroism and could be modelled using quantum mechanics.
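A minimal numerical sketch of diattenuation, assuming simple Beer–Lambert attenuation with a different absorption coefficient for each of two orthogonal polarization states; the coefficient and thickness values are hypothetical.

```python
import math

def transmittance(alpha_per_mm: float, thickness_mm: float) -> float:
    """Beer-Lambert intensity transmittance T = exp(-alpha * d)."""
    return math.exp(-alpha_per_mm * thickness_mm)

d = 2.0                          # sample thickness in mm
t_low  = transmittance(0.10, d)  # weakly absorbed polarization state
t_high = transmittance(2.00, d)  # strongly absorbed polarization state

# Diattenuation: 0 for a non-dichroic sample, approaching 1 for an ideal polarizer.
diattenuation = (t_low - t_high) / (t_low + t_high)
print(round(t_low, 3), round(t_high, 3), round(diattenuation, 3))  # 0.819 0.018 0.956
```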
In some crystals, such as tourmaline, the strength of the dichroic effect varies strongly with the wavelength of the light, making them appear to have different colours when viewed with light having differing polarizations. This is more generally referred to as pleochroism, and the technique can be used in mineralogy to identify minerals. In some materials, such as herapathite (iodoquinine sulfate) or Polaroid sheets, the effect is not strongly dependent on wavelength.
In liquid crystals
Dichroism, in the second meaning above, occurs in liquid crystals due to the optical anisotropy of the molecular structure, the presence of impurities, or the presence of dichroic dyes. The latter is also called a guest–host effect.
| Physical sciences | Optics | Physics |
51970 | https://en.wikipedia.org/wiki/Weaving | Weaving | Weaving is a method of textile production in which two distinct sets of yarns or threads are interlaced at right angles to form a fabric or cloth. Other methods are knitting, crocheting, felting, and braiding or plaiting. The longitudinal threads are called the warp and the lateral threads are the weft, woof, or filling. The method in which these threads are interwoven affects the characteristics of the cloth.
Cloth is usually woven on a loom, a device that holds the warp threads in place while filling threads are woven through them. A fabric band that meets this definition of cloth (warp threads with a weft thread winding between) can also be made using other methods, including tablet weaving, back strap loom, or other techniques that can be done without looms.
The way the warp and filling threads interlace with each other is called the weave. The majority of woven products are created with one of three basic weaves: plain weave, satin weave, or twill weave. Woven cloth can be plain or classic (in one colour or a simple pattern), or can be woven in decorative or artistic design.
Process and terminology
In general, weaving involves using a loom to interlace two sets of threads at right angles to each other: the warp, which runs longitudinally, and the weft (older woof) that crosses it. (Weft is an Old English word meaning "that which is woven"; compare leave and left.) One warp thread is called an end and one weft thread is called a pick. The warp threads are held taut and parallel to each other, typically in a loom. There are many types of looms.
Weaving can be summarized as a repetition of these three actions, also called the primary motions of the loom.
Shedding: where the warp threads (ends) are separated by raising or lowering heald frames (heddles) to form a clear space, referred to as the shed where the pick can pass
Picking: where the weft or pick is propelled across the loom by hand, an air-jet, a rapier or a shuttle
Beating-up or battening: where the weft is pushed up against the fell of the cloth by the reed
The warp is divided into two overlapping groups, or lines (most often adjacent threads belonging to the opposite group) that run in two planes, one above another, so the shuttle can be passed between them in a straight motion. Then, the upper group is lowered by the loom mechanism, and the lower group is raised (shedding), allowing the shuttle to pass in the opposite direction, also in a straight motion. Repeating these actions forms a fabric mesh but without beating-up, the final distance between the adjacent wefts would be irregular and far too large.
The secondary motions of the loom are the:
Let off motion: where the warp is let off the warp beam at a regulated speed to make the filling even and of the required design
Take up motion: takes up the woven fabric in a regulated manner so that the density of filling is maintained
The tertiary motions of the loom are the stop motions: to stop the loom in the event of a thread break. The two main stop motions are the
Warp stop motion
Weft stop motion
The principal parts of a loom are the frame, the warp-beam or weavers beam, the cloth-roll (apron bar), the heddles, and their mounting, the reed. The warp-beam is a wooden or metal cylinder on the back of the loom on which the warp is delivered. The threads of the warp extend in parallel order from the warp-beam to the front of the loom where they are attached to the cloth-roll. Each thread or group of threads of the warp passes through an opening (eye) in a heddle. The warp threads are separated by the heddles into two or more groups, each controlled and automatically drawn up and down by the motion of the heddles. In the case of small patterns the movement of the heddles is controlled by "cams" which move up the heddles by means of a frame called a harness; in larger patterns the heddles are controlled by a dobby mechanism, where the healds are raised according to pegs inserted into a revolving drum. Where a complex design is required, the healds are raised by harness cords attached to a Jacquard machine. Every time the harness (the heddles) moves up or down, an opening (shed) is made between the threads of warp, through which the pick is inserted. Traditionally the weft thread is inserted by a shuttle.
On a conventional loom, continuous weft thread is carried on a pirn, in a shuttle that passes through the shed. A handloom weaver could propel the shuttle by throwing it from side to side with the aid of a picking stick. The "picking" on a power loom is done by rapidly hitting the shuttle from each side using an overpick or underpick mechanism controlled by cams 80–250 times a minute. When a pirn is depleted, it is ejected from the shuttle and replaced with the next pirn held in a battery attached to the loom. Multiple shuttle boxes allow more than one shuttle to be used. Each can carry a different colour which allows banding across the loom.
Rapier-type weaving machines do not have shuttles; they propel cut lengths of weft by means of small grippers or rapiers that pick up the filling thread and carry it halfway across the loom, where another rapier picks it up and pulls it the rest of the way. Some carry the filling yarns across the loom at rates in excess of 2,000 metres per minute. Manufacturers such as Picanol have reduced the mechanical adjustments to a minimum and control all the functions through a computer with a graphical user interface. Other types use compressed air to insert the pick. They are all fast, versatile and quiet.
The warp is sized in a starch mixture for smoother running. The loom is warped (loomed or dressed) by passing the sized warp threads through two or more heddles attached to harnesses. A power weaver's loom is warped by separate workers. Most looms used for industrial purposes have a machine that ties new warp threads to the waste of previously used warp threads while still on the loom; an operator then rolls the old and new threads back on the warp beam. The harnesses are controlled by cams, dobbies or a Jacquard head.
The raising and lowering sequence of warp threads in various sequences gives rise to many possible weave structures:
Plain weave: plain and hopsacks, poplin, taffeta, poult-de-soie, pibiones and grosgrain
Twill weave: these are described by the weft float followed by the warp float, arranged to give a diagonal pattern; examples are 2/1 twill, 3/3 twill, or 1/2 twill. These are softer fabrics than plain weaves (see the interlacing sketch after this list).
Satin weave: satins and sateens
Complex computer-generated interlacings, such as Jacquard fabric
Pile fabrics: fabrics with a surface of cut threads (a pile), such as velvets and velveteens
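The weave structures listed above can be pictured as interlacement grids, with each cell recording whether a thread passes over or under at that crossing. The sketch below generates such grids from the over/under float counts used in twill notation, with plain weave as the 1/1 case; the grid sizes and function names are arbitrary illustrations.

```python
# Interlacement grid on point paper: 'X' where a thread passes over at that
# crossing, '.' where it passes under.

def twill(over: int, under: int, picks: int, ends: int) -> list[str]:
    """Generate a twill from its float notation (e.g. 2/1: over two, under one),
    shifting the pattern by one end per pick to produce the diagonal."""
    repeat = over + under
    return [
        "".join("X" if (j - i) % repeat < over else "." for j in range(ends))
        for i in range(picks)
    ]

def plain(picks: int, ends: int) -> list[str]:
    """Plain weave is the 1/1 case: each thread simply alternates over and under."""
    return twill(1, 1, picks, ends)

print("\n".join(plain(4, 8)))        # checkerboard interlacing
print()
print("\n".join(twill(2, 1, 6, 9)))  # 2/1 twill showing its diagonal
```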
Selvage refers to the fabric's edge, which may be marked with the manufacturer's detail. It is a narrow edge of a woven fabric parallel to its length.
Thrums are the remainder yarns used for tying on the loom: the portion of the warp that is not weavable. They are also called loom waste.
Both warp and weft can be visible in the final product. By spacing the warp more closely, it can completely cover the weft that binds it, giving a warp faced textile such as rep weave. Conversely, if the warp is spread out, the weft can slide down and completely cover the warp, giving a weft faced textile, such as a tapestry or a Kilim rug. There are a variety of loom styles for hand weaving and tapestry.
Archaeology
There are some indications that weaving was already known in the Palaeolithic era, as early as 27,000 years ago. An indistinct textile impression has been found at the Dolní Věstonice site. Based on these finds, the weavers of the Upper Palaeolithic manufactured a variety of cordage types, produced plaited basketry, and made sophisticated twined and plain woven cloth. The artifacts include imprints in clay and burned remnants of cloth.
The oldest known textiles found in the Americas are remnants of six finely woven textiles and cordage found in Guitarrero Cave, Peru. The weavings, made from plant fibres, are dated between 10,100 and 9080 BCE.
In 2013 a piece of cloth woven from hemp was found in burial F. 7121 at the Çatalhöyük site, suggested to be from around 7000 BCE. Further finds come from the Neolithic civilisation preserved in the pile dwellings in Switzerland.
Another extant fragment from the Neolithic was found in Fayum, at a site dated to about 5000 BCE. This fragment is woven at about 12 threads by 9 threads per centimetre in a plain weave. Flax was the predominant fibre in Egypt at this time (3600 BCE) and had continued popularity in the Nile Valley, though wool became the primary fibre used in other cultures around 2000 BCE.
The oldest-known weavings in North America come from the Windover Archaeological Site in Florida. Dating from 4900 to 6500 BCE and made from plant fibres, the Windover hunter-gatherers produced "finely crafted" twined and plain weave textiles. Eighty-seven pieces of fabric were found associated with 37 burials. Researchers have identified seven different weaves in the fabric. One kind of fabric had 26 strands per inch (10 strands per centimetre). There were also weaves using two-strand and three-strand wefts. A round bag made from twine was found, as well as matting. The yarn was probably made from palm leaves. Cabbage palm, saw palmetto and scrub palmetto are all common in the area, and would have been so 8,000 years ago.
Evidence of weaving as a commercial household industry in the historical region of Macedonia has been found at the Olynthus site. When the city was destroyed by Philip II in 348 BCE, artifacts were preserved in the houses. Loomweights were found in many houses, enough to produce cloth to meet the needs of the household, but some of the houses contained more loomweights, enough for commercial production, and one of the houses was adjacent to the agora and contained three shops where many coins were found. It is probable that such homes were engaged in commercial textile manufacture.
History
Weaving was known in all the great civilisations, but no clear line of causality has been established. Early looms required two people to create the shed and one person to pass through the filling. Early looms wove a fixed length of cloth, but later ones allowed warp to be wound out as the fell progressed. Weaving became simpler when the warp was sized.
Africa
Around the 4th century CE, the cultivation of cotton and the knowledge of its spinning and weaving in Meroë reached a high level. The export of textiles was one of the main sources of wealth for Kush. The Aksumite King Ezana boasted in his inscription that he destroyed large cotton plantations in Meroë during his conquest of the region.
Latin America
The Indigenous peoples of the Americas wove textiles of cotton throughout tropical and subtropical America and, in the South American Andes, of wool from camelids, primarily domesticated llamas and alpacas. Cotton and the camelids were both domesticated by about 4,000 BCE. American weavers are "credited with independently inventing nearly every non-mechanized technique known today."
In the Inca Empire of the Andes, both men and women produced textiles. Women mostly did their weaving using backstrap looms to make small pieces of cloth, and vertical frame and single-heddle looms for larger pieces. Men used upright looms. The Inca elite valued cumbi, a fine tapestry-woven textile produced on upright looms, and often offered it as a gift of reciprocity to lords (other elites) in the Empire. In regions under direct control of the Inca, special artisans produced cumbi for the elite. Women who created cumbi in these regions were called acllas or mamaconas, and men were called cumbicamayos. Andean textile weavings were of practical, symbolic, religious, and ceremonial importance and were used as currency, tribute, and a determinant of social class and rank. Sixteenth-century Spanish colonists were impressed by both the quality and quantity of textiles produced by the Inca Empire. Some of the techniques and designs are still in use in the 21st century.
Whereas European cloth-making generally created ornamentation through "suprastructural" means—by adding embroidery, ribbons, brocade, dyeing, and other elements onto the finished woven textile—pre-Columbian Andean weavers created elaborate cloth by focusing on "structural" designs involving manipulation of the warp and weft of the fabric itself. Andeans used "tapestry techniques; double-, triple- and quadruple-cloth techniques; gauze weaves; warp-patterned weaves; discontinuous warp or scaffold weaves; and plain weaves" among many other techniques, in addition to the suprastructural techniques listed above.
East Asia
The weaving of silk from silkworm cocoons has been known in China since about 3500 BCE. Silk that was intricately woven and dyed, showing a well developed craft, has been found in a Chinese tomb dating back to 2700 BCE.
Silk weaving in China was an intricate, labour-intensive process in which men and women, usually from the same family, had their own roles. The actual work of weaving was done by both sexes. Women were often weavers because it was a way they could contribute to the household income while staying at home; they usually wove simpler designs within the household, while men were in charge of weaving the more intricate and complex pieces. The process of sericulture and weaving emphasized the idea that men and women should work together rather than women being subordinate to men. Weaving became an integral part of Chinese women's social identity, and several rituals and myths were associated with the promotion of silk weaving, especially as a symbol of female power. Weaving contributed to the balance between men's and women's economic contributions and had many economic benefits.
There were many paths into the occupation of weaver. Women usually married into the occupation, belonged to a family of weavers, and/or lived in a location whose climate suited the process of silk weaving. Weavers usually belonged to the peasant class. Silk weaving became a specialized job requiring specific technology and equipment that was carried out domestically within households. Although most silk weaving was done within the confines of the home and family, some specialized workshops also hired skilled silk weavers. These workshops took care of the weaving process, although the raising of the silkworms and the reeling of the silk remained work for peasant families. The silk woven in workshops rather than homes was of higher quality, since the workshops could afford to hire the best weavers. These weavers were usually men who operated more complicated looms, such as the wooden draw-loom. This created a competitive market of silk weavers.
The quality and ease of the weaving process depended on the silk produced by the silkworms. The easiest silk to work with came from breeds of silkworms that spun their cocoons so that they could be unwound in one long strand. Reeling, the unwinding of the silkworm cocoons, began by placing the cocoons in boiling water in order to loosen the silk filaments and kill the silkworm pupae. Workers would then find the ends of the strands of silk by sticking a hand into the boiling water; usually this task was done by girls aged eight to twelve, while the more complex jobs were given to older women. From the unwound cocoons they would then create a silk thread, which could vary in thickness and strength.
After the reeling of the silk, the silk would be dyed before the weaving process began. There were many different looms and tools for weaving. For high quality and intricate designs, a wooden draw-loom or pattern loom was used. This loom would require two or three weavers and was usually operated by men. There were also other smaller looms, such as the waist loom, that could be operated by a single woman and were usually used domestically.
Sericulture and silk weaving spread to Korea by 200 BCE, to Khotan by 50 CE, and to Japan by about 300 CE.
The pit-treadle loom may have originated in India, though most authorities place the invention in China. Pedals were added to operate heddles. By the Middle Ages such devices also appeared in Persia, Sudan, Egypt and possibly the Arabian Peninsula, where "the operator sat with his feet in a pit below a fairly low-slung loom". By 700 CE, horizontal looms and vertical looms could be found in many parts of Asia, Africa and Europe. In Africa, the rich dressed in cotton while the poorer wore wool.
By the 12th century the treadle loom had come to Europe, either from Byzantium or from Moorish Spain, where the mechanism was raised higher above the ground on a more substantial frame.
Southeast Asia
In the Philippines, numerous pre-colonial weaving traditions exist among different ethnic groups. They used various plant fibres, mainly abacá or banana, but also tree cotton, buri and other palms, various grasses, and barkcloth. The oldest evidence of weaving traditions are Neolithic stone tools used for preparing barkcloth found in archeological sites in Sagung Cave of southern Palawan and Arku Cave of Peñablanca, Cagayan. The latter has been dated to around 1255–605 BCE.
Other countries in Southeast Asia have their own extensive weaving traditions. Weaving was introduced to the region from China at the same time as rice agriculture; because the two spread together, weaving is more common in rice-farming communities than in communities that rely on hunting, gathering, and animal husbandry.
Each country has its own distinctive weaving traditions or has absorbed those of its neighbours. The most common material used for weaving is cotton, interwoven with threads of other materials. Brunei is famous for its Jong Sarat, a cloth usually used in traditional weddings, in which silver and gold threads are interwoven, usually with cotton. Similarly, Indonesia has the Songket, also used in traditional weddings, which likewise employs gold- and silver-wrapped thread to create elaborate designs. Cambodia, on the other hand, has Ikat, in which bundles of thread are tied with fibre and dyed to create patterns during weaving. Weavers in Myanmar, Thailand, and Vietnam combine silk and other fibres with cotton, while in Laos natural materials such as roots, tree bark, leaves, flowers, and seeds are used, though for dyeing cloth that has already been woven. These countries have many more weaving traditions; the techniques above are simply the best known.
To create threads of cotton for weaving, spindle whorls were commonly used in Southeast Asia. Made from clay, stone or wood, they vary in shape and size. Spindle whorls are thought to have emerged in Southeast Asia along with the expansion of rice agriculture from the Yangtze region of China, and their increasing appearance in certain regions may also signal the growth of cotton thread and textile production there. Because of their low cost and small, portable size, they were favoured among rural weaving communities across Southeast Asia.
Woven textiles in Southeast Asia are mostly made on looms. The foot-brace loom is the earliest loom introduced to Southeast Asia from China, first appearing in Vietnam, though it was used only in certain areas of Vietnam, Laos, Indonesia, and Cambodia. Another loom widely used across the region is the ground-level body-tension loom, also known as the belt loom: part of it is attached to a belt-like strap around the weaver's waist, which holds and controls the tension of the warp threads. It is usually operated at ground level, the weaver controlling the threads by leaning backwards and forwards. The body-tension loom was developed from the foot-brace loom to accommodate the weaving of larger and wider cloths.
Medieval Europe
The predominant fibre in Europe during the medieval period was wool, followed by linen and nettlecloth for the lower classes. Cotton was introduced to Sicily and Spain in the 9th century. When Sicily was captured by the Normans, they took the technology to Northern Italy and then the rest of Europe. Silk fabric production was reintroduced towards the end of this period and the more sophisticated silk weaving techniques were applied to the other staples.
The weaver worked at home and marketed his cloth at fairs. Warp-weighted looms were commonplace in Europe before the introduction of horizontal looms in the 10th and 11th centuries. Weaving became an urban craft and to regulate their trade, craftsmen applied to establish a guild. These initially were merchant guilds, but developed into separate trade guilds for each skill. The cloth merchant who was a member of a city's weavers guild was allowed to sell cloth; he acted as a middleman between the tradesmen weavers and the purchaser. The trade guilds controlled quality and the training needed before an artisan could call himself a weaver.
By the 13th century, an organisational change took place, and a system of putting out was introduced. The cloth merchant purchased the wool and provided it to the weaver, who sold his produce back to the merchant. The merchant controlled the rates of pay and economically dominated the cloth industry. The merchants' prosperity is reflected in the wool towns of eastern England; Norwich, Bury St Edmunds and Lavenham being good examples. Wool was a political issue. The supply of thread has always limited the output of a weaver. About that time, the spindle method of spinning was replaced by the great wheel and soon after the treadle-driven spinning wheel. The loom remained the same but with the increased volume of thread it could be operated continuously.
The 14th century saw considerable flux in population. The 13th century had been a period of relative peace, and Europe became overpopulated. Poor weather led to a series of poor harvests and starvation, and there was great loss of life in the Hundred Years' War. Then in 1346 Europe was struck by the Black Death, and the population was reduced by up to a half. Arable farming was labour-intensive, and sufficient workers could no longer be found; land prices dropped, and land was sold and put to sheep pasture. Traders from Florence and Bruges bought the wool, then sheep-owning landlords started to weave wool outside the jurisdiction of the city and trade guilds. The weavers started by working in their own homes, then production was moved into purpose-built buildings where the working hours and the amount of work were regulated. The putting-out system had been replaced by a factory system.
The migration of the Huguenot Weavers, Calvinists fleeing from religious persecution in mainland Europe, to Britain around the time of 1685 challenged the English weavers of cotton, woollen and worsted cloth, who subsequently learned the Huguenots' superior techniques.
Colonial United States
Colonial America relied heavily on Great Britain for manufactured goods of all kinds. British policy was to encourage the production of raw materials in colonies and discourage manufacturing. The Wool Act 1699 restricted the export of colonial wool.
As a result, many people wove cloth from locally produced fibres. The colonists also used wool, cotton and flax (linen) for weaving, though hemp could be made into serviceable canvas and heavy cloth. They could get one cotton crop each year; until the invention of the cotton gin it was a labour-intensive process to separate the seeds from the fibres. Functional tape, bands, straps, and fringe were woven on box and paddle looms.
A plain weave was preferred as the added skill and time required to make more complex weaves kept them from common use. Sometimes designs were woven into the fabric but most were added after weaving using wood block prints or embroidery.
Industrial Revolution
Before the Industrial Revolution, weaving was a manual craft and wool was the principal staple. In the great wool districts a form of factory system had been introduced but in the uplands weavers worked from home on a putting-out system. The wooden looms of that time might be broad or narrow; broad looms were those too wide for the weaver to pass the shuttle through the shed, so that the weaver needed an expensive assistant (often an apprentice). This ceased to be necessary after John Kay invented the flying shuttle in 1733. The shuttle and the picking stick sped up the process of weaving. There was thus a shortage of thread or a surplus of weaving capacity. The opening of the Bridgewater Canal in June 1761 allowed cotton to be brought into Manchester, an area rich in fast flowing streams that could be used to power machinery. Spinning was the first to be mechanised (spinning jenny, spinning mule), and this led to limitless thread for the weaver.
Edmund Cartwright first proposed building a weaving machine that would function similarly to recently developed cotton-spinning mills in 1784, drawing scorn from critics who said the weaving process was too nuanced to automate. He built a factory at Doncaster and obtained a series of patents between 1785 and 1792. In 1788, his brother Major John Cartwright built Revolution Mill at Retford (named for the centenary of the Glorious Revolution). In 1791, he licensed his loom to the Grimshaw brothers of Manchester, but their Knott Mill burnt down the following year (possibly a case of arson). Edmund Cartwright was granted a reward of £10,000 by Parliament for his efforts in 1809. However, success in power-weaving also required improvements by others, including H. Horrocks of Stockport. Only during the two decades after about 1805 did power-weaving take hold; at that time there were 250,000 hand weavers in the UK. Textile manufacture was one of the leading sectors in the British Industrial Revolution, but weaving was a comparatively late sector to be mechanised. The loom became semi-automatic in 1842 with Kenworthy and Bullough's Lancashire Loom. The various innovations took weaving from a home-based artisan activity (labour-intensive and man-powered) to a steam-driven factory process. A large metal manufacturing industry grew to produce the looms, with firms such as Howard & Bullough of Accrington, Tweedales and Smalley, and Platt Brothers. Most power weaving took place in weaving sheds, in small towns circling Greater Manchester away from the cotton-spinning area. The earlier combination mills, where spinning and weaving took place in adjacent buildings, became rarer. Wool and worsted weaving took place in West Yorkshire, particularly Bradford, where there were large factories such as Lister's or Drummond's in which all the processes took place. Both men and women with weaving skills emigrated, and took the knowledge to their new homes in New England, to places like Pawtucket and Lowell.
Woven 'grey cloth' was then sent to the finishers where it was bleached, dyed and printed. Natural dyes were originally used, with synthetic dyes coming in the second half of the 19th century. A demand for new dyes followed the discovery of mauveine in 1856, and its popularity in fashion. Researchers continued to explore the chemical potential of coal tar waste from the growing number of gas works in Britain and Europe, creating an entirely new sector in the chemical industry.
The invention in France of the Jacquard loom, patented in 1804, enabled complicated patterned cloths to be woven, by using punched cards to determine which threads of coloured yarn should appear on the upper side of the cloth. The jacquard allowed individual control of each warp thread, row by row without repeating, so very complex patterns were suddenly feasible. Samples exist showing calligraphy, and woven copies of engravings. Jacquards could be attached to handlooms or powerlooms.
A distinction can be made between the role, lifestyle and status of a handloom weaver and those of the power loom weaver and craft weaver. The perceived threat of the power loom led to disquiet and industrial unrest; well-known protest movements such as the Luddites and the Chartists had handloom weavers amongst their leaders. In the early 19th century power weaving became viable. Richard Guest in 1823 made a comparison of the productivity of power and handloom weavers:
A very good Hand Weaver, a man twenty-five or thirty years of age, will weave two pieces of nine-eighths shirting per week, each twenty-four yards long, and containing one hundred and five shoots of weft in an inch, the reed of the cloth being a forty-four, Bolton count, and the warp and weft forty hanks to the pound, A Steam Loom Weaver, fifteen years of age, will in the same time weave seven similar pieces.
He then speculates about the wider economics of using power loom weavers:
...it may very safely be said, that the work is done in a Steam Factory containing two hundred Looms, would, if done by hand Weavers, find employment and support for a population of more than two thousand persons.

With the Industrial Revolution came a growth in opportunity for women to work within textile factories. However, their work was perceived to have a lower social and economic value than that done by their male counterparts.
Modern day
In the 1920s the weaving workshop of the Bauhaus design school in Germany aimed to raise weaving, previously seen as a craft, to a fine art, and also to investigate the industrial requirements of modern weaving and fabrics. Under the direction of Gunta Stölzl, the workshop experimented with unorthodox materials, including cellophane, fibreglass, and metal. From expressionist tapestries to the development of soundproofing and light-reflective fabric, the workshop's innovative approach instigated a modernist theory of weaving. Former Bauhaus student and teacher Anni Albers published the seminal 20th-century text On Weaving in 1965. Other notables from the Bauhaus weaving workshop include Otti Berger, Margaretha Reichardt, and Benita Koch-Otte.
In the Bauhaus, the weaving workshop was considered "the women's department", and many women were forced to join it against their wish to study another art form. Some weavers, like Helene Nonné-Schmidt, believed that women were suited to weaving because they could only produce work in two dimensions; she thought women lacked the spatial imagination and genius that men had to work in other media.
Hand weaving of Persian carpets and kilims has been an important element of the tribal crafts of many of the subregions of modern day Iran. Examples of carpet types are the Lavar Kerman carpet from Kerman and the Seraband rug from Arāk.
In Southeast Asia, some communities are working to revive weaving traditions as a way to address poverty, improve living conditions, support local communities, and to promote environmental sustainability. Several initiatives have been established to support this effort, such as the Maybank Women Eco-Weavers program by the Maybank Foundation which currently operates in Laos, Cambodia, Malaysia and Indonesia. This program helps create opportunities for women weavers throughout the Southeast Asia region to improve their livelihoods and to give them financial independence.
Additionally, similar programs exist in Taiwan and in the Philippines. In Taiwan, the Lihang Studio and S’uraw Education were founded by Yuma Taru to revive Atayal weaving culture and to promote indigenous education in weaving and dyeing. In the Philippines, the Kyyangan Weavers Association was established in Ifugao Province to conserve and promote Ifugao weaving culture and other traditional practices. The association also collaborates with academic institutions, government agencies and non-government organizations on research and product development in order to offer economic opportunities to communities.
Types
Hand loom weavers
Handloom weaving was done by both sexes, but men outnumbered women, partly because of the strength needed to batten. They worked from home, sometimes in a well-lit attic room. The women of the house would spin the thread they needed and attend to finishing. Later, women took to weaving as well; they obtained their thread from the spinning mill and worked as outworkers on piecework contracts. Over time, competition from the power looms drove down the piece rate, and handloom weavers sank into increasing poverty.
Power loom weavers
Power loom workers were usually girls and young women. They had the security of fixed hours and, except in times of hardship such as the cotton famine, regular income. They were paid a wage and a piecework bonus. Even when working in a combined mill, weavers stuck together and enjoyed a tight-knit community. Each woman usually minded four looms and kept them oiled and clean. They were assisted by 'little tenters', children on a fixed wage who ran errands and did small tasks, learning the job of the weaver by watching. Often the tenters were half-timers, carrying a green card which teacher and overlookers would sign to say they had turned up at the mill in the morning and at the school in the afternoon. At fourteen or so they came full-time into the mill and started by sharing looms with an experienced worker, where it was important to learn quickly as both would be on piecework. Serious problems with the loom were left to the tackler to sort out; he would inevitably be a man, as were usually the overlookers. The mill had its health and safety issues: there was a reason why the women tied their hair back with scarves. Inhaling cotton dust caused lung problems, and the noise caused total hearing loss; weavers would mee-maw, as normal conversation was impossible. Weavers used to 'kiss the shuttle', that is, suck thread through the eye of the shuttle. This left a foul taste in the mouth due to the oil, which was also carcinogenic.
Craft weavers
Arts and Crafts was an international design philosophy that originated in England and flourished between 1860 and 1910 (especially the second half of that period), continuing its influence until the 1930s. Instigated by the artist and writer William Morris (1834–1896) during the 1860s and inspired by the writings of John Ruskin (1819–1900), it had its earliest and most complete development in the British Isles but spread to Europe and North America. It was largely a reaction against mechanisation; the philosophy advocated traditional craftsmanship using simple forms and often medieval, romantic or folk styles of decoration. Handweaving was highly regarded and taken up as a decorative art.
Indigenous cultures
Native Americans
Textile weaving, using cotton dyed with pigments, was a dominant craft among pre-colonization tribes of the American southwest, including various Pueblo peoples, the Zuni, and the Ute tribes. The first Spaniards to visit the region wrote about seeing Navajo blankets. With the introduction of Navajo-Churro sheep, the resulting woolen products have become very well known. By the 18th century the Navajo had begun to import yarn in their favorite color, Bayeta red. Using an upright loom, the Navajo wove blankets worn as garments and then, after the 1880s, rugs for trade. Navajo traded for commercial wool, such as Germantown, imported from Pennsylvania. Under the influence of European-American settlers at trading posts, Navajos created new and distinct styles, including "Two Gray Hills" (predominantly black and white, with traditional patterns); "Teec Nos Pos" (colorful, with very extensive patterns); "Ganado" (founded by Don Lorenzo Hubbell), with red-dominated patterns in black and white; "Crystal" (founded by J. B. Moore), with "Oriental" and Persian styles (almost always with natural dyes); "Wide Ruins" and "Chinlee", with banded geometric patterns; "Klagetoh", with diamond-type patterns; and "Red Mesa", with bold diamond patterns. Many of these patterns exhibit a fourfold symmetry, which is thought to embody traditional ideas about harmony, or hózhó.
Amazon cultures
Among the indigenous people of the Amazon basin, densely woven palm-bast mosquito netting, or tents, were utilized by the Panoans, Tupinambá, Western Tucano, Yameo, Záparoans, and perhaps by the indigenous peoples of the central Huallaga River basin (Steward 1963:520). Aguaje palm-bast (Mauritia flexuosa, Mauritia minor, or swamp palm) and the frond spears of the Chambira palm (Astrocaryum chambira, A. munbaca, A. tucuma, also known as Cumare or Tucum) have been used for centuries by the Urarina of the Peruvian Amazon to make cordage, net-bags and hammocks, and to weave fabric. Among the Urarina, the production of woven palm-fiber goods is imbued with varying degrees of an aesthetic attitude, which draws its authentication from referencing the Urarina's primordial past. Urarina mythology attests to the centrality of weaving and its role in engendering Urarina society. The post-diluvial creation myth accords women's weaving knowledge a pivotal role in Urarina social reproduction.
Even though palm-fiber cloth is regularly removed from circulation through mortuary rites, Urarina palm-fiber wealth is neither completely inalienable, nor fungible since it is a fundamental medium for the expression of labor and exchange. The circulation of palm-fiber wealth stabilizes a host of social relationships, ranging from marriage and fictive kinship (compadrazco, spiritual compeership) to perpetuating relationships with the deceased.
Computer science
The Nvidia Parallel Thread Execution ISA derives some terminology (specifically the term Warp to refer to a group of concurrent processing threads) from historical weaving traditions.
Gender politics
Women's work
Weaving is a practice that is typically considered to be "women's work", whether as part of women's employment, cultural practices, or leisure. The categorization of weaving as women's work has bled into many fields, from art history and anthropology to sociology and even psychology. While claiming that women had not contributed much to civilization's history, Sigmund Freud wrote that "one technique which they may have invented [is] that of plaiting and weaving."
Women's work is often not recorded as a central activity in building Western history and culture. Yet some anthropologists argue that textile production facilitated societal establishment and growth, and that women were therefore integral to perpetuating communities. To record the stories, beliefs, and symbols important to their culture, women engaged in weaving, embroidering, and other fiber practices. These practices have existed for centuries, documented through art history, myth, and oral history, and are still practiced today.
Reception in the mainstream art world
Weaving is often classified as "craft", alongside other art forms like ceramics, embroidery, and basket weaving. Historically, there was a hierarchy between artists who were considered "craftspeople" and artists who worked in the traditional mediums of painting and sculpture. The traditional artists sought to keep artisans in the minority, so there was little reception for arts that were considered craft.
In 1939, art critic Clement Greenberg wrote "Avant-Garde and Kitsch" where he presented his ideas about "high" and "low" art. His definition of "low" art was likely informed by years of theory against decoration and ornamentation, which was correlated with femininity in the early 1900s by critics like Adolf Loos and Karl Scheffler. Although Greenberg never explicitly says the word "craft", many scholars postulate that this is one of the origins of Western opposition to weaving, and more largely art that is considered craft.
Only recently has the art world begun to recognize weaving as an art form and to exhibit woven articles as art objects. Exhibitions of large scope have been organized to affirm the importance of textiles in the art historical canon, such as the Museum of Contemporary Art, Los Angeles' With Pleasure: Pattern and Decoration in American Art 1972–1985. Women weavers, like Anni Albers, Lenore Tawney, Magdalena Abakanowicz, Olga de Amaral, and Sheila Hicks, are now the subject of exhibitions and major retrospectives across the world.
Yarn

Yarn is a long continuous length of interlocked fibres, used in sewing, crocheting, knitting, weaving, embroidery, ropemaking, and the production of textiles. Thread is a type of yarn intended for sewing by hand or machine. Modern manufactured sewing threads may be finished with wax or other lubricants to withstand the stresses involved in sewing. Embroidery threads are yarns specifically designed for needlework. Yarn can be made of a number of natural or synthetic materials, and comes in a variety of colors and thicknesses (referred to as "weights"). Although yarn may be dyed different colours, most yarns are solid coloured with a uniform hue.
Etymology
The word "yarn" comes from Middle English, from the Old English , akin to Old High German , "yarn", Dutch , Ancient Greek , "string", and Sanskrit , "band".
History
The human production of yarn is known to have existed since the Stone Age and earlier prehistory, with ancient fiber materials developing from animal hides, to reeds, to early fabrics. Cotton, wool, and silk were the first materials for yarn, and textile trade contributed immensely to the ancient global economy. In 2011, the Bangladesh University of Textiles established a specialized Department of Yarn Engineering, focusing on the advanced techniques of transforming textile fibers into yarn.
Materials
Yarn can be made from a number of natural or synthetic fibers, or a blend of natural and synthetic fibers.
Natural fibers
Cotton
The most common plant fiber is cotton, which is typically spun into fine yarn for mechanical weaving or knitting into cloth.
Silk
Silk is a natural protein fiber, some forms of which can be woven into textiles. The protein fiber of silk is composed mainly of fibroin and is produced by the larvae of the moth Bombyx mori. Silk production is thought to have begun in China and silk thread and cloth manufacture was well-established by the Shang dynasty (1600–1050 BCE).
Linen
Linen is another natural fiber with a long history of use for yarn and textiles. Linen fibers are derived from the flax plant.
Other plant fibers
Other plant fibers which can be spun include bamboo, hemp, maize, nettle, and soy fiber.
Animal fibers
The most common spun animal fiber is wool harvested from sheep. As long fibers make better yarn, sheep have been bred over time to produce longer fibers. This increases the need for shearing to prevent pests and overheating.
Other animal fibers used include alpaca, angora, mohair, llama, cashmere, and silk. More rarely, yarn may be spun from camel, yak, possum, musk ox, vicuña, cat, dog, wolf, rabbit, bison, or chinchilla hair, as well as turkey or ostrich feathers.
Synthetic fibers
Some examples of synthetic fibers that are used as yarn are nylon, acrylic fiber, rayon, and polyester. Synthetic fibers are generally extruded in continuous strands of gel-state materials. These strands are drawn (stretched), annealed (hardened), and cured to obtain properties desirable for later processing.
Synthetic fibers come in three basic forms: staple, tow, and filament. Staple is cut fibers, generally sold in lengths up to 120 mm. Tow is a continuous "rope" of fibers consisting of many filaments loosely joined side-to-side. Filament is a continuous strand consisting of anything from one filament to many. Synthetic fiber is most often measured in a weight per linear measurement basis, along with cut length. Denier and Dtex are the most common weight to length measures. Cut-length only applies to staple fiber.
Filament extrusion is sometimes referred to as "spinning," but most people equate spinning with spun yarn production.
Yarn from recycled materials
T-shirt yarn is a recycled yarn made from the same fabric as is used in T-shirts and other clothes. It is often made from the remainder fabric of clothing manufacture, and therefore is considered a recycled and eco-friendly product. It can also be made at home out of used clothing. The resulting yarn can be used in knitted or crocheted items.
Comparison of material properties
In general, natural fibers tend to require more careful handling than synthetics because they can shrink, felt, stain, shed, fade, stretch, wrinkle, or be eaten by moths more readily, unless special treatments such as mercerization or super washing are performed to strengthen, fix color, or otherwise enhance the fiber's own properties.
Some types of protein yarns (i.e., hair, silk, feathers) may feel irritating to some people, causing contact dermatitis, hives, or wheezing. These reactions are likely a sensitivity to thicker and coarser fiber diameters or fiber ends. In fact, contrary to popular belief, wool allergies are practically unknown. According to a review of the evidence on wool as an allergen published in Acta Dermato-Venereologica, contemporary superfine or ultrafine Merino wools, with their reduced fibre diameters, do not provoke itch, are well tolerated, and in fact benefit eczema management. Further studies suggest that known allergens applied during textile processing are minimally present in wool garments today, given current industry practices, and are unlikely to lead to allergic reactions.
When natural hair-type fibers are burned, they tend to singe and have a smell of burnt hair; this is because many, like human hair, are protein-derived. Cotton and viscose (rayon) yarns burn as a wick. Synthetic yarns generally tend to melt, though some synthetics are inherently flame-retardant. Noting how an unidentified fiber strand burns and smells can assist in determining if it is natural or synthetic, and what the fiber content is.
Both synthetic and natural yarns can pill. Pilling is a function of fiber content, spinning method, twist, contiguous staple length, and fabric construction. Single-ply yarns and yarns made from fibers like merino wool are known to pill more: in the former, the single ply is not twisted tightly enough to securely retain all the fibers under abrasion, while merino wool's short staple length allows the ends of the fibers to pop out of the twist more easily.
Yarns combining synthetic and natural fibers inherit the properties of each parent, according to the proportional composition. Synthetics are added to lower cost, increase durability, add unusual color or visual effects, provide machine washability and stain resistance, reduce heat retention, or lighten garment weight.
Structure
Spun yarn
Spun yarn is made by twisting staple fibres together to make a cohesive thread, or "single". Twisting fibres into yarn in the process called spinning can be dated back to the Upper Paleolithic, and yarn spinning was one of the first processes to be industrialized. Spun yarns are produced by placing a series of individual fibres or filaments together to form a continuous assembly of overlapping fibres, usually bound together by twist. Spun yarns may contain a single type of fibre, or be a blend of various types. Combining synthetic fibres (which can have high strength, lustre, and fire retardant qualities) with natural fibres (which have good water absorbency and skin comforting qualities) is very common. The most widely used blends are cotton-polyester and wool-acrylic fibre blends. Blends of different natural fibres are common too, especially with more expensive fibres such as alpaca, angora and cashmere.
Yarn is selected for different textiles based on the characteristics of the yarn fibres, such as warmth (wool), light weight (cotton or rayon), durability (nylon is added to sock yarn, for example), or softness (cashmere, alpaca).
Yarn is composed of twisted strands of fiber, which are known as plies when grouped together. These strands of yarn are twisted together (plied) in the opposite direction to make a thicker yarn. Depending on the direction of this final twist, the yarn will have either s-twist (the threads appear to go "up" to the left) or z-twist (to the right). For a single ply yarn, the direction of the final twist is the same as its original twist. The twist direction of yarn can affect the final properties of the fabric, and combined use of the two twist directions can nullify skewing in knitted fabric.
The mechanical integrity of yarn is derived from frictional contacts between its composing fibers. The science behind this was first studied by Galileo.
Carded and combed yarn
Combed yarns are produced by adding a further step to yarn spinning, namely combing, which aligns the fibres and removes the short fibres carried over from the previous step of carding. Combed yarn results in superior-quality fabrics. Compared to carded yarns, it is slightly more expensive, because combing is a long, time-consuming process. Combing separates short fibres from elongated fibres, which makes the yarn softer and smoother.
Hosiery yarn
Hosiery yarns are used in the manufacture of knitted fabrics. Because knitted materials are more delicate than woven ones, hosiery yarns are made 'softer', with fewer twists per inch than their woven counterparts. Hosiery yarn comes from a separate spinning process (melt spinning) and is used with circular knitting machines to form fabric.
Open-end yarn
Open-end yarn is produced by open-end spinning, without a spindle; the method differs from ring spinning in that there is no roving-frame stage. Sliver from the card goes straight into the rotor and is spun directly into yarn. Open-end yarn can be produced from short fibers, but it is limited to coarser counts than ring-spun yarn.
Novelty yarn
Novelty yarns or complex yarns are yarns with special (fancy) effects introduced during spinning or plying. One example is slub yarns, yarn with thick or thin sections alternating regularly or irregularly. In a similar manner, creating deliberate unevenness, additions or injections of neps or metallic or synthetic fibers (along with natural fibers) in spinning creates novelty yarns.
Filament yarn
Filament yarn consists of filament fibres (very long continuous fibres) either twisted together or only grouped together. Thicker monofilaments are typically used for industrial purposes rather than fabric production or decoration. Silk is a natural filament, and synthetic filament yarns are used to produce silk-like effects.
Texturized yarn
Texturized yarns are made by a process of air-texturizing filament yarns (sometimes referred to as taslanizing), which combines multiple filament yarns into a yarn with some of the characteristics of spun yarns. They are synthetic continuous filaments that are modified to impart special texture and appearance. Texturizing was originally applied to synthetic fibers to reduce their transparency and slipperiness and to increase their warmth and absorbency. Texturized yarns have been used to manufacture a variety of textile products: knitted underwear and outerwear, shape-retaining knitted suits, and overcoats, as well as artificial fur, carpets, and blankets.
Colour
Yarn may be used undyed, or may be coloured with natural or artificial dyes. Most yarns have a single uniform hue, but there is also a wide selection of variegated yarns:
Heathered or tweed: yarn with flecks of different coloured fibre
Ombré: variegated yarn with light and dark shades of a single hue
Multicoloured: variegated yarn with two or more distinct hues (a "parrot colourway" might have green, yellow and red)
Self-striping: yarn dyed with lengths of colour that will automatically create stripes in a knitted or crocheted object
Marled: yarn made from strands of different-coloured yarn twisted together, sometimes in closely related hues
Each of these different colours and styles are achieved through a process called yarn dyeing. There are many different methods of yarn dyeing: package dyeing, skein dyeing, space dyeing, warp beam dyeing, and more.
Package Dyeing: The most commonly used method. Yarn that has already been spun and wound in bulk is lowered into a chamber filled with dye; when the yarn has finished absorbing the dye, it is removed from the cylindrical chamber to dry.
Skein Dyeing: Yarn laid loosely in skeins or hanks is hung over a bar and submerged in a dyebath.
Space Dyeing: This method is used to achieve a multi-coloured effect by dipping different sections of yarn into different colours. After a section is dipped into a colour, a chemical called a mordant is used to fix that colour on the yarn so that the next colour will not bleed into the prior one.
Warp Beam Dyeing: A larger-scale version of package dyeing, used only in the manufacture of woven fabrics.
Weight
Yarn quantities for handcrafts are usually measured and sold by weight in ounces (oz) or grams (g). Common sizes include 25g, 50g, and 100g skeins. Some companies also primarily measure in ounces with common sizes being three-ounce, four-ounce, six-ounce, and eight-ounce skeins. Textile measurements are taken at a standard temperature and humidity because variations in heat and humidity can cause fibers to absorb different amounts of moisture from the air, thus increasing the measured weight of the yarn without adding any fiber material. The actual length of the yarn contained in a ball or skein can vary due to the inherent heaviness of the fibre and the thickness of the strand; for instance, a 50 g skein of lace weight mohair may contain several hundred metres, while a 50 g skein of bulky wool may contain only 60 metres.
Craft yarn comes in several thicknesses or weights. This is not to be confused with the measurement and weight listed above. The Craft Yarn Council of America promotes a standardized industry system for measuring yarn weight, where weights are numbered from 0 (finest) to 7 (thickest). Each weight can be described by a number and name: Size 0 yarn is called Lace, size 1 is Super Fine, size 2 is Fine, size 3 is Light, size 4 is Medium, size 5 is Bulky, size 6 is Super Bulky, and size 7 is Jumbo.
Each weight also has several commonly used but unregulated terms associated with it. However, this naming convention is more descriptive than precise; fibre artists disagree about where on the continuum each lies, and the precise relationships between the sizes. These terms include: fingering, sport, double-knit (or DK), worsted, aran (or heavy worsted), bulky, super-bulky, and roving.
Another measurement of yarn weight, often used by weavers, is wraps per inch (WPI). The yarn is wrapped snugly around a ruler and the number of wraps that fit in an inch are counted.
Labels on yarn for handicrafts often include information on gauge, which can also help determine yarn weight. Gauge, known in the UK as tension, is a measurement of how many stitches and rows are produced per inch or per cm on a specified size of knitting needle or crochet hook. The proposed standardization uses a four-by-four inch/ten-by-ten cm knitted stockinette or single crocheted square, with the resultant number of stitches across and rows high made by the suggested tools on the label to determine the gauge.
In Europe, textile engineers often use the unit tex, which is the weight in grams of a kilometre of yarn, or decitex, which is a finer measurement corresponding to the weight in grams of 10 km of yarn. Many other units have been used over time by different industries.
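To make the relationships between these units concrete, here is a minimal sketch in Python (the helper names are invented for illustration, not taken from any textile library; denier is the related traditional unit mentioned earlier):

```python
# Linear-density conversions for yarn. Tex is the mass in grams of 1 km
# of yarn; decitex (dtex) is the mass in grams of 10 km; denier is the
# mass in grams of 9 km.

def tex(grams: float, metres: float) -> float:
    """Tex of a yarn sample weighing `grams` over `metres` of length."""
    return grams * 1000.0 / metres

def dtex(tex_value: float) -> float:
    return tex_value * 10.0   # 1 tex = 10 dtex

def denier(tex_value: float) -> float:
    return tex_value * 9.0    # 1 tex = 9 denier

# Example: a 50 g skein containing 425 m of yarn
t = tex(50.0, 425.0)
print(f"{t:.1f} tex = {dtex(t):.0f} dtex = {denier(t):.0f} denier")
```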
Yarn skeins
There are many different ways in which yarn is wound, including hanks, skeins, donut balls, cakes, and cones.
Hank
A hank of yarn is a looped bundle of yarn, similar to how wire is typically sold. The yarn is usually tied in two places directly opposite each other to keep the loops together and to keep them from tangling. Hanks are a preferred method of fastening yarn for many yarn sellers and yarn-dyers because they display the qualities of the fiber more fully. A hank is often wound using a swift, a standing contraption that holds the hank without obstruction and spins on a central axis to facilitate winding into a ball. There are two subtypes of hanks: twisted and folded. A twisted hank has been twisted into a rope braid; a folded hank has been folded in half and wrapped in a label for retail purposes.
Skein
Skeins are one of the most common forms of yarn ball. Although a skein is technically yarn wound into an oblong shape, the word "skein" is used generically to describe any ball of yarn. Many large-scale yarn retailers, like Lion Brand, and parent companies, like Yarnspirations, sell their yarn in skeins. Unlike other forms, a skein gives access to both ends of the yarn; the end in the middle of the skein is called a center pull. One major complaint about center-pull bullet skeins is that the inside yarn end is not easily found and is often pulled out of the skein in a jumble of tangled yarn called "yarn barf". There are two types of skeins: a pull skein, which is more rectangular in shape, and a bullet skein, which is rounder.
Microscopic aspect of selected yarns
Images taken with a digital USB microscope show how yarn looks in different kinds of cloth when magnified.
Proper motion

Proper motion is the astrometric measure of the observed changes in the apparent places of stars or other celestial objects in the sky, as seen from the center of mass of the Solar System, compared to the abstract background of the more distant stars.
The components for proper motion in the equatorial coordinate system (of a given epoch, often J2000.0) are given in the direction of right ascension (μα) and of declination (μδ). Their combined value is computed as the total proper motion (μ). It has dimensions of angle per time, typically arcseconds per year or milliarcseconds per year.
Knowledge of the proper motion, distance, and radial velocity allows calculations of an object's motion in the Solar System's frame of reference and in the galactic frame of reference – that is, motion with respect to the Sun and, by coordinate transformation, with respect to the Milky Way.
Introduction
Over the course of centuries, stars appear to maintain nearly fixed positions with respect to each other, so that they form the same constellations over historical time. As examples, both Ursa Major in the northern sky and Crux in the southern sky, look nearly the same now as they did hundreds of years ago. However, precise long-term observations show that such constellations change shape, albeit very slowly, and that each star has an independent motion.
This motion is caused by the movement of the stars relative to the Sun and Solar System. The Sun travels in a nearly circular orbit (the solar circle) about the center of the galaxy at a speed of about 220 km/s at a radius of about 8 kpc from Sagittarius A*, which can be taken as the rate of rotation of the Milky Way itself at this radius.
Any proper motion is a two-dimensional vector (it excludes the component along the line of sight) and is defined by two quantities: its position angle and its magnitude. The first is the direction of the proper motion on the celestial sphere (with 0 degrees meaning the motion is north, 90 degrees meaning it is east – left on most sky maps and space telescope images – and so on), and the second is its magnitude, typically expressed in arcseconds per year (symbols: arcsec/yr, as/yr, ″/yr, ″ yr−1) or milliarcseconds per year (symbols: mas/yr, mas yr−1).
Proper motion may alternatively be defined by the angular changes per year in the star's right ascension (μα) and declination (μδ) with respect to a constant epoch.
The components of proper motion by convention are arrived at as follows. Suppose an object moves from coordinates (α1, δ1) to coordinates (α2, δ2) in a time Δt. The proper motions are given by:

$\mu_\alpha = \dfrac{\alpha_2 - \alpha_1}{\Delta t}, \qquad \mu_\delta = \dfrac{\delta_2 - \delta_1}{\Delta t}.$
The magnitude of the proper motion μ is given by the Pythagorean theorem:

$\mu^2 = \mu_\delta^2 + \mu_\alpha^2 \cos^2\delta ,$

technically abbreviated:

$\mu^2 = \mu_\delta^2 + \mu_{\alpha*}^2 , \qquad \text{where } \mu_{\alpha*} = \mu_\alpha \cos\delta ,$
where δ is the declination. The factor $\cos^2\delta$ accounts for the widening of the lines (hours) of right ascension away from the poles, $\cos\delta$ being zero for a hypothetical object fixed at a celestial pole. Thus, the coefficient corrects for the fact that a given change in right ascension (angular change in α) corresponds to an ever smaller angle on the sky the closer an object lies to one of the celestial poles. The change μα, which must be multiplied by cosδ to become a component of the proper motion, is sometimes called the "proper motion in right ascension", and μδ the "proper motion in declination".
If the proper motion in right ascension has been converted by cosδ, the result is designated μα*. For example, the proper motion results in right ascension in the Hipparcos Catalogue (HIP) have already been converted. Hence, the individual proper motions in right ascension and declination are made equivalent for straightforward calculations of various other stellar motions.
The position angle θ is related to these components by:

$\mu \sin\theta = \mu_\alpha \cos\delta = \mu_{\alpha*} , \qquad \mu \cos\theta = \mu_\delta .$
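As a minimal sketch of these relations in Python (an illustration, not any library's API; it assumes μα has not yet been multiplied by cos δ — catalogues such as Hipparcos publish the converted value μα* directly):

```python
import math

def total_proper_motion(mu_alpha, mu_delta, delta_deg):
    """Total proper motion (same unit as the inputs) and position angle.

    mu_alpha  -- proper motion in right ascension, not yet multiplied
                 by cos(declination)
    mu_delta  -- proper motion in declination
    delta_deg -- declination in degrees
    """
    mu_alpha_star = mu_alpha * math.cos(math.radians(delta_deg))
    mu = math.hypot(mu_alpha_star, mu_delta)  # mu^2 = mu_delta^2 + mu_alpha*^2
    # Position angle, measured from north (mu_delta) through east (mu_alpha*):
    theta = math.degrees(math.atan2(mu_alpha_star, mu_delta)) % 360.0
    return mu, theta
```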
Motions in equatorial coordinates can be converted to motions in galactic coordinates.
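In practice such conversions are rarely done by hand. For instance, the Astropy library can transform a position with proper-motion components between frames (a sketch assuming a recent Astropy version; the numbers roughly approximate Barnard's Star and are illustrative only):

```python
from astropy import units as u
from astropy.coordinates import SkyCoord

# Equatorial (ICRS) position and proper motion
c = SkyCoord(ra=269.45 * u.deg, dec=4.69 * u.deg,
             pm_ra_cosdec=-802.8 * u.mas / u.yr,
             pm_dec=10362.5 * u.mas / u.yr, frame="icrs")

g = c.galactic   # the same motion expressed in galactic coordinates
print(g.pm_l_cosb, g.pm_b)
```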
Examples
For most stars seen in the sky, the observed proper motions are small and unremarkable. Such stars are often either faint or significantly distant, have changes of below 0.01″ per year, and do not appear to move appreciably over many millennia. A few do have significant motions and are usually called high-proper-motion stars. Motions can be in seemingly random directions. Two or more stars, double stars or open star clusters, which are moving in similar directions exhibit so-called shared or common proper motion (cpm), suggesting they may be gravitationally attached or share similar motion in space.
Barnard's Star has the largest proper motion of all stars, moving at 10.3″ yr−1. Large proper motion usually strongly indicates an object is close to the Sun. This is so for Barnard's Star, about 6 light-years away. After the Sun and the Alpha Centauri system, it is the nearest known star. Being a red dwarf with an apparent magnitude of 9.54, it is too faint to see without a telescope or powerful binoculars. Of the stars visible to the naked eye (conservatively limiting unaided visual magnitude to 6.0), 61 Cygni A (magnitude V=5.20) has the highest proper motion at 5.281″ yr−1, discounting Groombridge 1830 (magnitude V=6.42), proper motion: 7.058″ yr−1.
A proper motion of 1 arcsec per year at a distance of 1 light-year corresponds to a relative transverse speed of 1.45 km/s. Barnard's Star's transverse speed is 90 km/s and its radial velocity is 111 km/s; since these two components are perpendicular, they give a true or "space" motion of 142 km/s. True or absolute motion is more difficult to measure than proper motion, because the true transverse velocity involves the product of the proper motion and the distance. As this formula shows, true velocity measurements depend on distance measurements, which are difficult in general.
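The arithmetic above can be checked directly (a minimal sketch; the 1.45 km/s conversion factor is the one quoted in this article, and the function name is invented for illustration):

```python
import math

def space_velocity(mu_arcsec_yr, distance_ly, radial_km_s):
    # Transverse velocity scales with both proper motion and distance:
    # 1 arcsec/yr at 1 light-year corresponds to about 1.45 km/s.
    transverse = 1.45 * mu_arcsec_yr * distance_ly
    # Transverse and radial components are perpendicular.
    return math.hypot(transverse, radial_km_s)

# Barnard's Star: ~10.3 arcsec/yr at ~6 ly, radial velocity ~111 km/s
print(space_velocity(10.3, 6.0, 111.0))  # ~142.7 km/s, matching the quoted value
```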
In 1992 Rho Aquilae became the first star to have its Bayer designation invalidated by moving to a neighbouring constellation – it is now in Delphinus.
Usefulness in astronomy
Stars with large proper motions tend to be nearby; most stars are far enough away that their proper motions are very small, on the order of a few thousandths of an arcsecond per year. It is possible to construct nearly complete samples of high-proper-motion stars by comparing photographic sky-survey images taken many years apart. The Palomar Sky Survey is one source of such images. In the past, searches for high-proper-motion objects were undertaken using blink comparators to examine the images by eye; more modern techniques such as image differencing can scan digitized images, or use comparisons to star catalogs obtained by satellites. As the selection biases of these surveys are well understood and quantifiable, studies have used them to confirm new nearby stars and to infer the approximate numbers of stars that remain undetected, regardless of brightness. Studies of this kind show that most of the nearest stars are intrinsically faint and angularly small, such as red dwarfs.
Measurement of the proper motions of a large sample of stars in a distant stellar system, like a globular cluster, can be used to compute the cluster's total mass via the Leonard-Merritt mass estimator. Coupled with measurements of the stars' radial velocities, proper motions can be used to compute the distance to the cluster.
Stellar proper motions have been used to infer the presence of a supermassive black hole at the center of the Milky Way. This black hole, now confirmed to exist, is called Sgr A* and has a mass of about 4.3 million solar masses.
Proper motions of the galaxies in the Local Group are discussed in detail in Röser. In 2005, the first measurement was made of the proper motion of the Triangulum Galaxy M33, the third-largest and only ordinary spiral galaxy in the Local Group, located 0.860 ± 0.028 Mpc from the Milky Way. The motion of the Andromeda Galaxy was measured in 2012, and an Andromeda–Milky Way collision is predicted in about 4.5 billion years. Proper motion of the NGC 4258 (M106) galaxy in the M106 group of galaxies was used in 1999 to find an accurate distance to this object. Measurements were made of the radial motion of objects in that galaxy moving directly toward and away from Earth, and assuming this same motion to apply to objects with only a proper motion, the observed proper motion predicts a distance to the galaxy of about 7.2 Mpc.
History
Proper motion was suspected by early astronomers (according to Macrobius, c. AD 400) but a proof was not provided until 1718 by Edmund Halley, who noticed that Sirius, Arcturus and Aldebaran were over half a degree away from the positions charted by the ancient Greek astronomer Hipparchus roughly 1850 years earlier.
The sense of "proper" used here is the somewhat dated English one (still current as a postpositive, as in "the city proper") meaning "belonging to" or "own". "Improper motion" would refer to perceived motion that has nothing to do with an object's inherent course, such as that due to Earth's axial precession and the minor deviations, or nutations, well within the 26,000-year cycle.
Stars with high proper motion
Mathematical optimization

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics.
Optimization problems
Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:
An optimization problem with discrete variables is known as a discrete optimization, in which an object such as an integer, permutation or graph must be found from a countable set.
A problem with continuous variables is known as a continuous optimization, in which optimal arguments from a continuous set must be found. They can include constrained problems and multimodal problems.
An optimization problem can be represented in the following way:
Given: a function $f \colon A \to \mathbb{R}$ from some set $A$ to the real numbers
Sought: an element $x_0 \in A$ such that $f(x_0) \le f(x)$ for all $x \in A$ ("minimization") or such that $f(x_0) \ge f(x)$ for all $x \in A$ ("maximization").
Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework.
Since the identity max_{x ∈ A} f(x) = −min_{x ∈ A} (−f(x)) is valid, it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too.
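As a minimal illustration of this duality (a toy example assumed for illustration, not from the article), the following Python sketch maximizes a function by minimizing its negation over a finite candidate set:

def maximize(f, candidates):
    # max over x of f(x) equals minus the min over x of -f(x)
    return -min(-f(x) for x in candidates)

# The maximum of -(x - 2)^2 over the integers -5..5 is 0, attained at x = 2.
print(maximize(lambda x: -(x - 2) ** 2, range(-5, 6)))  # prints 0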
Problems formulated using this technique in the fields of physics may refer to the technique as energy minimization, speaking of the value of the function as representing the energy of the system being modeled. In machine learning, it is necessary to continually evaluate the quality of a data model by using a cost function, where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error.
Typically, A is some subset of the Euclidean space ℝⁿ, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions.
The function f is variously called an objective function, criterion function, loss function, cost function (minimization), utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes) the objective function is called an optimal solution.
In mathematics, conventional optimization problems are usually stated in terms of minimization.
A local minimum x* is defined as an element for which there exists some δ > 0 such that
f(x*) ≤ f(x) for all x ∈ A where ‖x − x*‖ ≤ δ;
that is to say, on some region around x* all of the function values are greater than or equal to the value at that element.
Local maxima are defined similarly.
While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element.
Generally, unless the objective function is convex in a minimization problem, there may be several local minima.
In a convex problem, if there is a local minimum that is interior (not on the edge of the set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum not all of which need be global minima.
A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem.
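To make the distinction concrete, here is a minimal SciPy-based sketch (a toy objective chosen purely for illustration, not from the article): a local solver started in the shallow basin stops at a local minimum, and a simple multistart loop keeps the best of several runs:

from scipy.optimize import minimize

# Nonconvex toy objective: a shallow local minimum near x = 1
# and a deeper, global minimum near x = -1.
f = lambda x: (x[0] ** 2 - 1) ** 2 + 0.3 * x[0]

# A local solver may stop in either basin depending on the start;
# multistart keeps the best of several runs as a common workaround.
results = [minimize(f, [x0]) for x0 in (-2.0, 0.0, 2.0)]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)  # near x = -1, the deeper minimum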
Notation
Optimization problems are often expressed with special notation. Here are some examples:
Minimum and maximum value of a function
Consider the following notation:
min_{x ∈ ℝ} (x² + 1)
This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers ℝ. The minimum value in this case is 1, occurring at x = 0.
Similarly, the notation
max_{x ∈ ℝ} 2x
asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined".
Optimal input arguments
Consider the following notation:
argmin_{x ∈ (−∞, −1]} (x² + 1),
or equivalently
argmin (x² + 1), subject to x ∈ (−∞, −1].
This represents the value (or values) of the argument x in the interval (−∞, −1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case, the answer is x = −1, since x = 0 is infeasible, that is, it does not belong to the feasible set.
Similarly,
argmax_{x ∈ [−5, 5], y ∈ ℝ} x·cos(y),
or equivalently
argmax x·cos(y), subject to x ∈ [−5, 5], y ∈ ℝ,
represents the (x, y) pair (or pairs) that maximizes (or maximize) the value of the objective function x·cos(y), with the added constraint that x lie in the interval [−5, 5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form (5, 2kπ) and (−5, (2k + 1)π), where k ranges over all integers.
The operators arg min and arg max are sometimes also written as argmin and argmax, and stand for argument of the minimum and argument of the maximum.
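A minimal numerical sketch of the first example, assuming SciPy is available (the finite lower bound −10 stands in for −∞, since the bounded method requires finite bounds):

from scipy.optimize import minimize_scalar

# Minimize x**2 + 1 subject to x <= -1, i.e. over the interval (-inf, -1].
res = minimize_scalar(lambda x: x ** 2 + 1, bounds=(-10, -1), method="bounded")
print(res.x, res.fun)  # approximately x = -1 with objective value 2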
History
Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum.
The term "linear programming" for certain optimization cases was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. (Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and also John von Neumann and other researchers worked on the theoretical aspects of linear programming (like the theory of duality) around the same time.
Other notable researchers in mathematical optimization include the following:
Richard Bellman
Dimitri Bertsekas
Michel Bierlaire
Stephen P. Boyd
Roger Fletcher
Martin Grötschel
Ronald A. Howard
Fritz John
Narendra Karmarkar
William Karush
Leonid Khachiyan
Bernard Koopman
Harold Kuhn
László Lovász
David Luenberger
Arkadi Nemirovski
Yurii Nesterov
Lev Pontryagin
R. Tyrrell Rockafellar
Naum Z. Shor
Albert Tucker
Major subfields
Convex programming studies the case when the objective function is convex (minimization) or concave (maximization) and the constraint set is convex. This can be viewed as a particular case of nonlinear programming or as generalization of linear or convex quadratic programming.
Linear programming (LP), a type of convex programming, studies the case in which the objective function f is linear and the constraints are specified using only linear equalities and inequalities. Such a constraint set is called a polyhedron or a polytope if it is bounded.
Second-order cone programming (SOCP) is a convex program, and includes certain types of quadratic programs.
Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming.
Conic programming is a general form of convex programming. LP, SOCP and SDP can all be viewed as conic programs with the appropriate type of cone.
Geometric programming is a technique whereby objective and inequality constraints expressed as posynomials and equality constraints as monomials can be transformed into a convex program.
Integer programming studies linear programs in which some or all variables are constrained to take on integer values. This is not convex, and in general much more difficult than regular linear programming.
Quadratic programming allows the objective function to have quadratic terms, while the feasible set must be specified with linear equalities and inequalities. For specific forms of the quadratic term, this is a type of convex programming.
Fractional programming studies optimization of ratios of two nonlinear functions. The special class of concave fractional programs can be transformed to a convex optimization problem.
Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts. This may or may not be a convex program. In general, whether the program is convex affects the difficulty of solving it.
Stochastic programming studies the case in which some of the constraints or parameters depend on random variables.
Robust optimization is, like stochastic programming, an attempt to capture uncertainty in the data underlying the optimization problem. Robust optimization aims to find solutions that are valid under all possible realizations of the uncertainties defined by an uncertainty set.
Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.
Stochastic optimization is used with random (noisy) function measurements or random inputs in the search process.
Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
Heuristics and metaheuristics make few or no assumptions about the problem being optimized. Heuristics usually do not guarantee that an optimal solution will be found; on the other hand, they are used to find approximate solutions for many complicated optimization problems.
Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).
Constraint programming is a programming paradigm wherein relations between variables are stated in the form of constraints.
Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of particular use in scheduling.
Space mapping is a concept for modeling and optimization of an engineering system to high-fidelity (fine) model accuracy exploiting a suitable physically meaningful coarse or surrogate model.
In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):
Calculus of variations is concerned with finding the best way to achieve some goal, such as finding a surface whose boundary is a specific curve, but with the least possible area.
Optimal control theory is a generalization of the calculus of variations which introduces control policies.
Dynamic programming is an approach for solving optimization problems, including stochastic ones with randomness and unknown model parameters. It studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation (a minimal sketch follows this list).
Mathematical programming with equilibrium constraints is where the constraints include variational inequalities or complementarities.
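The following minimal Python sketch (hypothetical graph data, assumed for illustration) shows the deterministic form of the Bellman recursion from the dynamic-programming entry above: the value of a state is the best immediate cost plus the value of the successor state.

from functools import lru_cache

edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)], "D": []}

@lru_cache(maxsize=None)
def value(state):
    if not edges[state]:            # terminal state has value 0
        return 0
    # Bellman equation: V(s) = min over actions of [cost + V(s')]
    return min(cost + value(nxt) for nxt, cost in edges[state])

print(value("A"))  # 4, via A -> B -> C -> D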
Multi-objective optimization
Adding more than one objective to an optimization problem adds complexity. For example, to optimize a structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created plotting weight against stiffness of the best designs is known as the Pareto frontier.
A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: If it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal.
The choice among "Pareto optimal" solutions to determine the "favorite solution" is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker.
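A minimal dominance-filter sketch in Python (hypothetical (weight, compliance) pairs, with lower being better in both objectives, assumed purely for illustration) shows how the Pareto-optimal designs are extracted:

designs = [(2.0, 9.0), (3.0, 5.0), (4.0, 4.0), (5.0, 4.5), (6.0, 2.0)]

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [d for d in designs if not any(dominates(other, d) for other in designs)]
print(pareto)  # (5.0, 4.5) is dominated by (4.0, 4.0) and is filtered out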
Multi-objective optimization problems have been generalized further into vector optimization problems where the (partial) ordering is no longer given by the Pareto ordering.
Multi-modal or global optimization
Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer.
Because of their iterative approach, classical optimization techniques do not perform satisfactorily when used to obtain multiple solutions, since it is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm.
Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization and simulated annealing.
Classification of critical points and extrema
Feasibility problem
The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as the special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal.
Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack is null or negative.
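A minimal SciPy sketch of the slack-variable device (with a toy constraint assumed for illustration): to find x with g(x) ≤ 0, minimize a slack s subject to g(x) ≤ s; any start with large enough s is feasible, and a solution with s ≤ 0 certifies feasibility of x for the original constraint.

from scipy.optimize import minimize

g = lambda x: (x[0] - 3) ** 2 - 1            # original constraint: g(x) <= 0
objective = lambda z: z[1]                   # z = (x, s); minimize the slack s
cons = [{"type": "ineq", "fun": lambda z: z[1] - g([z[0]])}]  # g(x) <= s

res = minimize(objective, x0=[0.0, 10.0], constraints=cons)
print(res.x)  # x near 3, with slack s near -1 (feasible for the original problem)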
Existence
The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum values. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum.
Necessary conditions for optimality
One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where the first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called a 'first-order condition' or a set of first-order conditions.
Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'.
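As a worked toy example of the Lagrange multiplier method (not from the article), the following SymPy sketch minimizes x² + y² subject to x + y = 1 by solving the first-order conditions of the Lagrangian:

import sympy as sp

x, y, lam = sp.symbols("x y lam")
# Lagrangian L = f - lam * (constraint); stationarity gives the conditions.
L = x ** 2 + y ** 2 - lam * (x + y - 1)
solution = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam])
print(solution)  # {x: 1/2, y: 1/2, lam: 1}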
Sufficient conditions for optimality
While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is a minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints called the bordered Hessian in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If a candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality.
Sensitivity and continuity of optima
The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics.
The maximum theorem of Claude Berge (1963) describes the continuity of an optimal solution as a function of underlying parameters.
Calculus of optimization
For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, which arise in the loss-function minimization of neural networks. Momentum-based estimation (such as positive-negative momentum) can help the iterates escape local minima and converge toward the global minimum of the objective function.
Further, critical points can be classified using the definiteness of the Hessian matrix: If the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is a local maximum; finally, if indefinite, then the point is some kind of saddle point.
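A minimal SymPy sketch of this classification (a toy function assumed for illustration): the Hessian of f(x, y) = x² − y² at the critical point at the origin has eigenvalues of mixed sign, so the point is a saddle.

import sympy as sp

x, y = sp.symbols("x y")
f = x ** 2 - y ** 2
H = sp.hessian(f, (x, y))        # constant Hessian [[2, 0], [0, -2]]
print(H.eigenvals())             # {2: 1, -2: 1}: mixed signs, hence a saddle point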
Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems.
When the objective function is a convex function, then any local minimum will also be a global minimum. There exist efficient numerical techniques for minimizing convex functions, such as interior-point methods.
Global convergence
More generally, if the objective function is not a quadratic function, then many optimization methods use other techniques to ensure that some subsequence of iterations converges to an optimal solution. The first and still popular method for ensuring convergence relies on line searches, which optimize a function along one dimension. A second and increasingly popular method for ensuring convergence uses trust regions. Both line searches and trust regions are used in modern methods of non-differentiable optimization. Usually, a global optimizer is much slower than advanced local optimizers (such as BFGS), so often an efficient global optimizer can be constructed by starting the local optimizer from different starting points.
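A minimal sketch of one such device, gradient descent with a backtracking (Armijo) line search, on an assumed toy quadratic:

import numpy as np

def grad_descent(f, grad, x, steps=100, beta=0.5, c=1e-4):
    for _ in range(steps):
        g = grad(x)
        t = 1.0
        # shrink the step until the Armijo sufficient-decrease test holds
        while f(x - t * g) > f(x) - c * t * np.dot(g, g):
            t *= beta
        x = x - t * g
    return x

f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
print(grad_descent(f, grad, np.array([0.0, 0.0])))  # approaches (1, -2)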
Computational optimization techniques
To solve problems, researchers may use algorithms that terminate in a finite number of steps, or iterative methods that converge to a solution (on some specified class of problems), or heuristics that may provide approximate solutions to some problems (although their iterates need not converge).
Optimization algorithms
Simplex algorithm of George Dantzig, designed for linear programming
Extensions of the simplex algorithm, designed for quadratic programming and for linear-fractional programming
Variants of the simplex algorithm that are especially suited for network optimization
Combinatorial algorithms
Quantum optimization algorithms
Iterative methods
The iterative methods used to solve problems of nonlinear programming differ according to whether they evaluate Hessians, gradients, or only function values. While evaluating Hessians (H) and gradients (G) improves the rate of convergence, for functions for which these quantities exist and vary sufficiently smoothly, such evaluations increase the computational complexity (or computational cost) of each iteration. In some cases, the computational complexity may be excessively high.
One major criterion for optimizers is the number of required function evaluations, as this is often already a large computational effort, usually much more effort than within the optimizer itself, which mainly has to operate over the N variables. The derivatives provide detailed information for such optimizers, but are even harder to calculate; e.g., approximating the gradient takes at least N+1 function evaluations. For approximations of the 2nd derivatives (collected in the Hessian matrix), the number of function evaluations is on the order of N². Newton's method requires the 2nd-order derivatives, so for each iteration the number of function calls is on the order of N², but for a simpler pure gradient optimizer it is only N. However, gradient optimizers usually need more iterations than Newton's algorithm. Which one is best with respect to the number of function calls depends on the problem itself.
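A minimal sketch of the evaluation-count claim above (with a toy function assumed for illustration): a forward-difference gradient of an N-variable function costs exactly N+1 function evaluations.

import numpy as np

def fd_gradient(f, x, h=1e-6):
    fx = f(x)                      # 1 evaluation
    g = np.zeros_like(x)
    for i in range(len(x)):        # N more evaluations, one per coordinate
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - fx) / h
    return g

f = lambda x: x[0] ** 2 + 3 * x[1]
print(fd_gradient(f, np.array([2.0, 0.0])))  # approximately [4, 3]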
Methods that evaluate Hessians (or approximate Hessians, using finite differences):
Newton's method
Sequential quadratic programming: A Newton-based method for small-medium scale constrained problems. Some versions can handle large-dimensional problems.
Interior point methods: This is a large class of methods for constrained optimization, some of which use only (sub)gradient information and others of which require the evaluation of Hessians.
Methods that evaluate gradients, or approximate gradients in some way (or even subgradients):
Coordinate descent methods: Algorithms which update a single coordinate in each iteration
Conjugate gradient methods: Iterative methods for large problems. (In theory, these methods terminate in a finite number of steps with quadratic objective functions, but this finite termination is not observed in practice on finite–precision computers.)
Gradient descent (alternatively, "steepest descent" or "steepest ascent"): A (slow) method of historical and theoretical interest, which has had renewed interest for finding approximate solutions of enormous problems.
Subgradient methods: An iterative method for large locally Lipschitz functions using generalized gradients. Following Boris T. Polyak, subgradient–projection methods are similar to conjugate–gradient methods.
Bundle method of descent: An iterative method for small–medium-sized problems with locally Lipschitz functions, particularly for convex minimization problems (similar to conjugate gradient methods).
Ellipsoid method: An iterative method for small problems with quasiconvex objective functions and of great theoretical interest, particularly in establishing the polynomial time complexity of some combinatorial optimization problems. It has similarities with Quasi-Newton methods.
Conditional gradient method (Frank–Wolfe) for approximate minimization of specially structured problems with linear constraints, especially with traffic networks. For general unconstrained problems, this method reduces to the gradient method, which is regarded as obsolete (for almost all problems).
Quasi-Newton methods: Iterative methods for medium-large problems (e.g. N<1000).
Simultaneous perturbation stochastic approximation (SPSA) method for stochastic optimization; uses random (efficient) gradient approximation.
Methods that evaluate only function values: If a problem is continuously differentiable, then gradients can be approximated using finite differences, in which case a gradient-based method can be used.
Interpolation methods
Pattern search methods, which have better convergence properties than the Nelder–Mead heuristic (with simplices), which is listed below.
Mirror descent
Heuristics
Besides (finitely terminating) algorithms and (convergent) iterative methods, there are heuristics. A heuristic is any algorithm which is not guaranteed (mathematically) to find the solution, but which is nevertheless useful in certain practical situations. Some well-known heuristics are listed below (a minimal sketch of one follows the list):
Differential evolution
Dynamic relaxation
Evolutionary algorithms
Genetic algorithms
Hill climbing with random restart
Memetic algorithm
Nelder–Mead simplicial heuristic: A popular heuristic for approximate minimization (without calling gradients)
Particle swarm optimization
Simulated annealing
Stochastic tunneling
Tabu search
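As a minimal sketch of one entry above, hill climbing with random restart on an assumed toy objective: greedy local moves from random starting points, keeping the best value seen, with no optimality guarantee.

import random

def hill_climb(f, lo, hi, restarts=20, steps=200, step=0.1):
    best_x, best_val = None, float("inf")
    for _ in range(restarts):
        x = random.uniform(lo, hi)          # random restart
        for _ in range(steps):
            cand = x + random.uniform(-step, step)
            if lo <= cand <= hi and f(cand) < f(x):
                x = cand                    # greedy local move
        if f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val

print(hill_climb(lambda x: (x ** 2 - 1) ** 2 + 0.3 * x, -3, 3))  # near x = -1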
Applications
Mechanics
Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of computing contact forces can be addressed by solving a linear complementarity problem, which can also be viewed as a QP (quadratic programming) problem.
Many design problems can also be expressed as optimization programs. This application is called design optimization. One subset is the engineering optimization, and another recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems.
This approach may be applied in cosmology and astrophysics.
Economics and finance
Economics is closely enough linked to the optimization of agents that an influential definition relatedly describes economics qua science as the "study of human behavior as a relationship between ends and scarce means" that have alternative uses. Modern optimization theory includes traditional optimization theory but also overlaps with game theory and the study of economic equilibria. The Journal of Economic Literature codes classify mathematical programming, optimization techniques, and related topics under JEL:C61-C63.
In microeconomics, the utility maximization problem and its dual problem, the expenditure minimization problem, are economic optimization problems. Insofar as they behave consistently, consumers are assumed to maximize their utility, while firms are usually assumed to maximize their profit. Also, agents are often modeled as being risk-averse, thereby preferring to avoid risk. Asset prices are also modeled using optimization theory, though the underlying mathematics relies on optimizing stochastic processes rather than on static optimization. International trade theory also uses optimization to explain trade patterns between nations. The optimization of portfolios is an example of multi-objective optimization in economics.
Since the 1970s, economists have modeled dynamic decisions over time using control theory. For example, dynamic search models are used to study labor-market behavior. A crucial distinction is between deterministic and stochastic models. Macroeconomists build dynamic stochastic general equilibrium (DSGE) models that describe the dynamics of the whole economy as the result of the interdependent optimizing decisions of workers, consumers, investors, and governments.
Electrical engineering
Some common applications of optimization techniques in electrical engineering include active filter design, stray field reduction in superconducting magnetic energy storage systems, space mapping design of microwave structures, handset antennas, electromagnetics-based design. Electromagnetically validated design optimization of microwave components and antennas has made extensive use of an appropriate physics-based or empirical surrogate model and space mapping methodologies since the discovery of space mapping in 1993. Optimization techniques are also used in power-flow analysis.
Civil engineering
Optimization has been widely used in civil engineering. Construction management and transportation engineering are among the main branches of civil engineering that heavily rely on optimization. The most common civil engineering problems that are solved by optimization are cut and fill of roads, life-cycle analysis of structures and infrastructures, resource leveling, water resource allocation, traffic management and schedule optimization.
Operations research
Another field that uses optimization techniques extensively is operations research. Operations research also uses stochastic modeling and simulation to support improved decision-making. Increasingly, operations research uses stochastic programming to model dynamic decisions that adapt to events; such problems can be solved with large-scale optimization and stochastic optimization methods.
Control engineering
Mathematical optimization is used in much modern controller design. High-level controllers such as model predictive control (MPC) or real-time optimization (RTO) employ mathematical optimization. These algorithms run online and repeatedly determine values for decision variables, such as choke openings in a process plant, by iteratively solving a mathematical optimization problem including constraints and a model of the system to be controlled.
Geophysics
Optimization techniques are regularly used in geophysical parameter estimation problems. Given a set of geophysical measurements, e.g. seismic recordings, it is common to solve for the physical properties and geometrical shapes of the underlying rocks and fluids. The majority of problems in geophysics are nonlinear with both deterministic and stochastic methods being widely used.
Molecular modeling
Nonlinear optimization methods are widely used in conformational analysis.
Computational systems biology
Optimization techniques are used in many facets of computational systems biology such as model building, optimal experimental design, metabolic engineering, and synthetic biology. Linear programming has been applied to calculate the maximal possible yields of fermentation products, and to infer gene regulatory networks from multiple microarray datasets as well as transcriptional regulatory networks from high-throughput data. Nonlinear programming has been used to analyze energy metabolism and has been applied to metabolic engineering and parameter estimation in biochemical pathways.
Machine learning
Solvers
| Mathematics | Other | null |
52070 | https://en.wikipedia.org/wiki/Arch | Arch | An arch is a curved vertical structure spanning an open space underneath it. Arches may support the load above them, or they may perform a purely decorative role. As a decorative element, the arch dates back to the 4th millennium BC, but structural load-bearing arches became popular only after their adoption by the Ancient Romans in the 4th century BC.
Arch-like structures can be horizontal, like an arch dam that withstands the horizontal hydrostatic pressure load. Arches are normally used as supports for many types of vaults, with the barrel vault in particular being a continuous arch. Extensive use of arches and vaults characterizes an arcuated construction, as opposed to the trabeated system, where, like in the architectures of ancient Greece, China, and Japan (as well as the modern steel-framed technique), posts and beams dominate.
The arch had several advantages over the lintel, especially in masonry construction: with the same amount of material it can have a larger span, carry more weight, and be made from smaller and thus more manageable pieces. Its role in construction was diminished in the middle of the 19th century with the introduction of wrought iron (and later steel): the high tensile strength of these new materials made long lintels possible.
Basic concepts
Terminology
A true arch is a load-bearing arc with elements held together by compression. In much of the world, the introduction of the true arch was a result of European influence. The term false arch has a few meanings. It is usually used to designate an arch that has no structural purpose, like a proscenium arch in theaters used to frame the performance for the spectators, but is also applied to corbelled and triangular arches that are not based on compression.
A typical true masonry arch consists of the following elements:
Keystone, the top block in an arch. The portion of the arch around the keystone (including the keystone itself), with no precisely defined boundary, is called the crown
Voussoir (a wedge-like construction block). A rowlock arch is formed by multiple concentric layers of voussoirs.
Extrados (an external surface of the arch)
Impost, the block at the base of the arch (the voussoir immediately above the impost is a springer). The tops of imposts define the springing level. A portion of the arch between the springing level and the crown (centered around the 45° angle) is called a haunch. If the arch resides on top of a column, the impost is formed by an abacus or its thicker version, the dosseret.
Intrados (an underside of the arch, also known as a soffit)
Rise (height of the arc, distance from the springing level to the crown)
Clear span
Abutment. The roughly triangular portion of the wall between the extrados and the horizontal division above is called the spandrel.
A (left or right) half-segment of an arch is called an arc, the overall line of an arch is arcature (this term is also used for an arcade). Archivolt is the exposed (front-facing) part of the arch, sometimes decorated (occasionally also used to designate the intrados). If the sides of voussoir blocks are not straight, but include angles and curves for interlocking, the arch is called "joggled".
Arch action
A true arch, due to its rise, resolves the vertical loads into horizontal and vertical reactions at the ends, a so called arch action. The vertical load produces a positive bending moment in the arch, while the inward-directed horizontal reaction from the spandrel/abutment provides a counterbalancing negative moment. As a result, the bending moment in any segment of the arch is much smaller than in a beam with the equivalent load and span. The diagram on the right shows the difference between a loaded arch and a beam. Elements of the arch are mostly subject to compression (A), while in the beam a bending moment is present, with compression at the top and tension at the bottom (B).
In the past, when arches were made of masonry pieces, the horizontal forces at the ends of an arch (the so-called thrust) required heavy abutments (cf. the Roman triumphal arch). The other way to counteract the forces, and thus allow thinner supports, was to use counter-arches, as in an arcade arrangement, where the horizontal thrust of each arch is counterbalanced by its neighbors, and only the end arches need to be buttressed. With new construction materials (steel, concrete, engineered wood), not only did the arches themselves become lighter, but the horizontal thrust can be further relieved by a tie connecting the ends of an arch.
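As a hedged worked example from standard statics (not a formula given in this article): for a parabolic arch of span L and rise f carrying a uniform load w per unit span, the horizontal thrust at each springing is H = wL²/(8f). The thrust grows as the rise shrinks, which is why shallow arches demand heavier abutments, or a tie joining the two ends, than tall arches of the same span.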
Funicular shapes
When evaluated from the perspective of the amount of material required to support a given load, the best solid structures are compression-only; with flexible materials, the same is true for tension-only designs. There is a fundamental symmetry in nature between solid compression-only and flexible tension-only arrangements, noticed by Robert Hooke in 1676: "As hangs the flexible line, so but inverted will stand the rigid arch". The study (and terminology) of arch shapes is thus inextricably linked to the study of hanging chains; the corresponding curves or polygons are called funicular. Just like the shape of a hanging chain will vary depending on the weights attached to it, the shape of an ideal (compression-only) arch will depend on the distribution of the load.
While building masonry arches in the not very tall buildings of the past, a practical assumption was that the stones can withstand a virtually unlimited amount of pressure (up to 100 N per mm²), while the tensile strength, even with mortar added between the stones, was very low and can be effectively assumed to be zero. Under these assumptions the calculations for the arch design are greatly simplified: either a reduced-scale model can be built and tested, or a funicular curve (pressure polygon) can be calculated or modeled, and as long as this curve stays within the confines of the voussoirs, the construction will be stable (the so-called "safe theorem").
Classifications
There are multiple ways to classify an arch:
by the geometrical shape of its intrados (for example, semicircular, triangular, etc.);
for the arches with rounded intrados, by the number of circle segments forming the arch (for example, round arch is single-centred, pointed arch is two-centred);
by the material used (stone, brick, concrete, steel) and construction approach. For example, the wedge-shaped voussoirs of a brick arch can be made by cutting the regular bricks ("axed brick" arch) or manufactured in the wedge shape ("gauged brick" arch);
structurally, by the number of hinges (movable joints) between solid components. For example, voussoirs in a stone arch should not move, so these arches usually have no hinges (are "fixed"). Permitting some movement in a large structure alleviates stresses (caused, for example, by thermal expansion), so since the mid-19th century many bridge spans have been built with three hinges (one at each support and one at the crown).
Arrangements
A sequence of arches can be grouped together forming an arcade. Romans perfected this form, as shown, for example, by arched structures of Pont du Gard. In the interior of hall churches, arcades of separating arches were used to separate the nave of a church from the side aisle, or two adjacent side aisles.
Two-tiered arches, with two arches superimposed, were sometimes used in Islamic architecture, mostly for decorative purposes.
An opening of the arch can be filled, creating a blind arch. Blind arches are frequently decorative, and were extensively used in Early Christian, Romanesque, and Islamic architecture. Alternatively, the opening can be filled with smaller arches, producing a containing arch, common in Gothic and Romanesque architecture. Multiple arches can be superimposed with an offset, creating an interlaced series of usually (with some exceptions) blind and decorative arches. Most likely of Islamic origin, the interlaced arcades were popular in Romanesque and Gothic architecture. Rear-arch (also rere-arch) is the one that frames the internal side of an opening in the external wall.
Structural
Structurally, relieving arches (often blind or containing) can be used to take off load from some portions of the building (for example, to allow use of thinner exterior walls with larger window openings, or, as in the Roman Pantheon, to redirect the weight of the upper structures to particular strong points). Transverse arches, introduced in Carolingian architecture, are placed across the nave to compartmentalize (together with longitudinal separating arches) the internal space into bays and support vaults. A diaphragm arch similarly goes in the transverse direction, but carries a section of wall on top. It is used to support or divide sections of the high roof. Strainer arches were built as an afterthought to prevent two adjacent supports from imploding due to miscalculation. Frequently they were made very decorative, with one of the best examples provided by the Wells Cathedral. Strainer arches can be "inverted" (upside-down) while remaining structural. When used across railway cuttings to prevent collapse of the walls, strainer arches may be referred to as flying arches. A counter-arch is built adjacent to another arch to oppose its horizontal action or help to stabilize it, for example, when constructing a flying buttress.
Shapes
The large variety of arch shapes (left) can mostly be classified into three broad categories: rounded, pointed, and parabolic.
Rounded
"Round" semicircular arches were commonly used for ancient arches that were constructed of heavy masonry, and were relied heavily on by the Roman builders since the 4th century BC. It is considered to be the most common arch form, characteristic for Roman, Romanesque, and Renaissance architecture.
A segmental arch, with a rounded shape that is less than a semicircle, is very old (versions were cut into the rock at Beni Hasan in Ancient Egypt, 2100 BC). Since then it has occasionally been used in Greek temples, Roman residential construction, and Islamic architecture, and became popular for window pediments during the Renaissance.
A basket-handle arch (also known as depressed arch, three-centred arch, basket arch) consists of segments of three circles with origins at three different centers (sometimes five or seven segments are used, so it can also be five-centred, etc.). It was used in late Gothic and Baroque architecture.
A horseshoe arch (also known as keyhole arch) has a rounded shape that includes more than a semicircle. It is associated with Islamic architecture and was known in areas of Europe with Islamic influence (Spain, Southern France, Italy). Occasionally used in Gothic architecture, it briefly enjoyed popularity as an entrance-door treatment in interwar England.
Pointed
A pointed arch consists of two ("two-centred arch") or more circle segments culminating in a point at the top. It originated in Islamic architecture, arrived in Europe in the second half of the 11th century (Cluny Abbey) and later became prominent in Gothic architecture. The advantages of a pointed arch over a semicircular one are a flexible ratio of span to rise and a lower horizontal reaction at the base. This innovation allowed for taller and more closely spaced openings, which are typical of Gothic architecture. The equilateral arch is the most common form of the pointed arch, with the centers of the two circles forming the intrados coinciding with the springing points of the opposite segments. Together with the apex point, they form an equilateral triangle, thus the name. If the centers of the circles are farther apart, the arch becomes a narrower and sharper lancet arch that appeared in France in Early Gothic architecture (Saint-Denis Abbey) and became prominent in England in the late 12th and early 13th centuries (Salisbury Cathedral). If the centers are closer to one another, the result is a wider blunt arch.
The intrados of the cusped arch (also known as multifoil arch, polyfoil arch, polylobed arch, and scalloped arch) includes several independent circle segments in a scalloped arrangement. These primarily decorative arches are common in Islamic architecture and the Northern European Late Gothic, and can also be found in Romanesque architecture. A similar trefoil arch includes only three segments and sometimes has a rounded, not pointed, top. Common in Islamic architecture and Romanesque buildings influenced by it, it later became popular in the decorative motifs of the Late Gothic designs of Northern Europe.
Each arc of an ogee arch consists of at least two circle segments (for a total of at least four), with the center of an upper circle being outside the extrados. After its European appearance in the 13th century on the facade of St Mark's Basilica, the arch became a fixture of the English Decorated style, the French Flamboyant, Venetian, and other Late Gothic styles. The ogee arch is also known as a reversed curve arch, occasionally also called an inverted arch. The top of an ogee arch sometimes projects beyond the wall, forming the so-called nodding ogee popular in 14th-century England (the pulpitum in Southwell Minster).
Each arc of a four-centred arch is made of two circle segments with distinct centers; usually the radius used closer to the springing point is smaller, with a more pronounced curvature. It is common in Islamic architecture (the Persian arch) and, with the upper portion flattened almost to straight lines (the Tudor arch), in the English Perpendicular Gothic. A keel arch is a variant of the four-centred arch with almost straight haunches, resembling a section view of a capsized ship. Popular in Islamic architecture, it can also be found in Europe, occasionally with a small ogee element at the top, so it is sometimes considered to be a variation of the ogee arch.
The curtain arch (also known as inflexed arch, and, like the keel arch, usually decorative) uses two (or more) drooping curves that join at the apex. It was utilized as a dressing for windows and doors primarily in Saxony in Late Gothic and early Renaissance buildings (late 15th to early 16th century). When the intrados has multiple concave segments, the arch is also called a draped arch or tented arch. A similar arch that uses a mixture of curved and straight segments or exhibits sharp turns between segments is a mixed-line arch (or mixtilinear arch). In Moorish architecture the mixed-line arch evolved into an ornate lambrequin arch, also known as a muqarnas arch.
Parabolic
The popularity of arches using segments of a circle is due to the simplicity of their layout and construction, not their structural properties. Consequently, architects have historically also used a variety of other curves in their designs: elliptical curves, hyperbolic cosine curves (including the catenary), and parabolic curves. There are two reasons behind the selection of these curves:
they are still relatively easy to trace with common tools prior to construction;
depending on a situation, they can have superior structural properties and/or appearance.
The hyperbolic curve is not easy to trace, but there are known cases of its use. The non-circular curves look similar and nearly coincide at shallow profiles, so a catenary is often misclassified as a parabola (per Galileo, "the [hanging] chain fits its parabola almost perfectly"). González et al. provide the example of Palau Güell, where researchers do not agree on the classification of the arches or claim the prominence of parabolic arches, while measurements show that just two of the 23 arches designed by Gaudi are actually parabolic.
Three parabolic-looking curves in particular are of significance to the arch design: parabola itself, catenary, and weighted catenary. The arches naturally use the inverted (upside-down) versions of these curves.
A parabola represents an ideal (all-compression) shape when the load is equally distributed along the span and the weight of the arch itself is negligible. A catenary is the best solution when an arch of uniform thickness carries just its own weight with no external load. Practical designs for bridges sit somewhere in between, and thus use curves that represent a compromise between the catenary and the funicular curve for the particular non-uniform distribution of load. Practical free-standing arches are stronger, and thus heavier, at the bottom, so a weighted catenary curve is used for them. The same curve also fits well an application where a bridge consists of an arch with a roadway of packed dirt above it, as the dead load increases with distance from the center.
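For reference, a hedged note on the curves themselves (standard equations, not this article's own derivation): an inverted parabola of rise h and half-span s can be written y = h(1 − x²/s²), while a catenary is y = a·cosh(x/a) for a parameter a fixed by the span and rise. Near the apex, cosh(x/a) ≈ 1 + x²/(2a²), so a shallow catenary is nearly parabolic, which is why the two curves are so often confused in practice.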
Other
Unlike regular arches, the flat arch (also known as jack arch, lintel arch, straight arch, or plate-bande) is not curved. Instead, the arch is flat in profile and can be used under the same circumstances as a lintel. However, lintels are subject to bending stress, while the flat arch is a true arch, composed of irregular voussoir shapes (the keystone is the only one of a symmetric wedge shape), which efficiently uses the compressive strength of the masonry in the same manner as a curved arch and thus requires a mass of masonry on both sides to absorb the considerable lateral thrust. It was used in Roman architecture to imitate Greek lintels, and in Islamic architecture and European medieval and Renaissance architecture. The flat arch is still used as a decorative pattern, primarily at the top of window openings.
False arches
The corbel (also corbelled) arch, made of two corbels meeting in the middle of the span, is a true arch in the sense of being able to carry a load, but it is false in a structural sense, as its components are subject to bending stress. The typical profile is not curved but triangular. Invented prior to the semicircular arch, the corbel arch was already used in Egyptian and Mycenaean architecture in the 3rd and 2nd millennia BC.
Like a corbel arch, the triangular arch is not a true arch in a structural sense. Its intrados is formed by two slabs leaning against each other. Brick builders would call triangular any arch with straight inclined sides. The design was common in Anglo-Saxon England until the late 11th century (St Mary Goslany). Mayan corbel arches are sometimes called triangular due to their shape.
Variations
A few transformations can be applied to arch shapes.
If one impost is much higher than the other, the arch (frequently pointed) is known as a ramping arch, raking arch, or rampant arch. Originally used to support inclined structures, such as stairs, in the 13th–14th centuries they appeared as parts of the flying buttresses used to counteract the thrust of Gothic ribbed vaults.
A central part of an arch can be raised on short vertical supports, creating a trefoil-like shouldered arch. The raised central part can vary all the way from a flat arch to ogee. The shouldered arches were used to decorate openings in Europe from medieval times to Late Gothic architecture, became common in Iranian architecture from the 14th century, and were later adopted in the Ottoman Turkey.
In a stilted arch (also surmounted), the springing line is located above the imposts (on "stilts"). Known to Islamic architects by the 8th century, the technique was utilized to vertically align the apexes of arches of different dimensions in Romanesque and Gothic architecture. Stilting was especially useful for semicircular arches, where the ratio of rise to span is otherwise fixed at one half, but it was applied to pointed arches, too.
The skew arch (also known as an oblique arch) is used when the arch needs to form an oblique angle in the horizontal plane with respect to the (parallel) springings, for example, when a bridge crosses the river at an angle different than 90°. A splayed arch is used for the case of unequal spans on the sides of the arch (when, for example, an interior opening in the wall is larger than the exterior one), the intrados of a round splayed arch is not cylindrical, but has a conical shape.
A wide arch with a rise of less than half of the span (and thus the geometric center of at least one segment below the springing line) is called a surbased arch (sometimes also a depressed arch). A drop arch is either a basket-handle arch or a blunt arch.
Hinged arches
The practical arch bridges are built either as a fixed arch, a two-hinged arch, or a three-hinged arch.
The fixed arch is most often used in reinforced concrete bridges and tunnels, which have short spans. Because it is subject to additional internal stress from thermal expansion and contraction, this kind of arch is statically indeterminate (the internal state is impossible to determine based on the external forces alone).
The two-hinged arch is most often used to bridge long spans. This kind of arch has pinned connections at its base. Unlike that of the fixed arch, the pinned base can rotate, thus allowing the structure to move freely and compensate for the thermal expansion and contraction that changes in outdoor temperature cause. However, this can result in additional stresses, and therefore the two-hinged arch is also statically indeterminate, although not as much as the fixed arch.
The three-hinged arch is hinged not only at its base, like the two-hinged arch, but also at its apex. The additional apical connection allows the three-hinged arch to move in two opposite directions and compensate for any expansion and contraction. This kind of arch is thus not subject to additional stress from thermal change. Unlike the other two kinds of arch, the three-hinged arch is therefore statically determinate. It is most often used for spans of medial length, such as those of roofs of large buildings. Another advantage of the three-hinged arch is that the reaction of the pinned bases is more predictable than that of the fixed arch, allowing shallow, bearing-type foundations in spans of medial length. In the three-hinged arch "thermal expansion and contraction of the arch will cause vertical movements at the peak pin joint but will have no appreciable effect on the bases," which further simplifies foundation design.
History
The arch became popular in Roman times and mostly spread alongside European influence, although it was known and occasionally used much earlier. Many ancient architectures avoided the use of arches, including the Viking and Hindu ones.
Bronze Age: ancient Near East
True arches, as opposed to corbel arches, were known by a number of civilizations in the ancient Near East including the Levant, but their use was infrequent and mostly confined to underground structures, such as drains where the problem of lateral thrust is greatly diminished.
An example of the latter would be the Nippur arch, built before 3800 BC, and dated by H. V. Hilprecht (1859–1925) to even before 4000 BC. Rare exceptions are an arched mudbrick home doorway dated to from Tell Taya in Iraq and two Bronze Age arched Canaanite city gates, one at Ashkelon (dated to ), and one at Tel Dan (dated to ), both in modern-day Israel. An Elamite tomb dated 1500 BC from Haft Teppe contains a parabolic vault which is considered one of the earliest pieces of evidence of arches in Iran.
The use of true arches in Egypt also originated in the 4th millennium BC (underground barrel vaults at the Dendera cemetery). Standing arches were known since at least the Third Dynasty, but very few examples survive, since the arches were mostly used in non-durable secular buildings and made of mud-brick voussoirs that were not wedge-shaped but simply held in place by mortar, and thus were susceptible to collapse (the oldest arch still standing is at the Ramesseum). Sacred buildings exhibited either lintel design or corbelled arches. Arches were mostly missing in Egyptian temples even after the Roman conquest, even though Egyptians thought of the arch as a spiritual shape and used it in rock-cut tombs and portable shrines. Auguste Mariette suggested that this choice was based on the relative fragility of a vault: "what would remain of the tombs and temples of Egyptians today, if they had preferred the vault?"
Mycenaean architecture utilized only the corbel arches in their beehive tombs with triangular openings. Mycenaeans had also built probably the oldest still standing stone-arch bridge in the world, Arkadiko Bridge, in Greece.
As evidenced by their imitations of the parabolic arches, the Hittites were most likely exposed to the Egyptian designs, but used the corbelled technique to build them.
Classical Persia and Greece
The Assyrians, also apparently under Egyptian influence, adopted the true arch (with a slightly pointed profile) early in the 8th century BC. In ancient Persia, the Achaemenid Empire (550 BC–330 BC) built small barrel vaults (essentially a series of arches built together to form a hall) known as iwan, which became massive, monumental structures during the later Parthian Empire (247 BC–AD 224). This architectural tradition was continued by the Sasanian Empire (224–651), which built the Taq Kasra at Ctesiphon in the 6th century AD, the largest free-standing vault until modern times.
An early European example of a voussoir arch appears in the 4th century BC Greek Rhodes Footbridge. Proto-true arches can also be found under the stairs of the temple of Apollo at Didyma and the stadium at Olympia.
Ancient Rome
The ancient Romans learned the semicircular arch from the Etruscans (both cultures apparently adopted the design in the 4th century BC), refined it and were the first builders in Europe to tap its full potential for above ground buildings:
The Romans were the first builders in Europe, perhaps the first in the world, to fully appreciate the advantages of the arch, the vault and the dome.
Throughout the Roman Empire, from Syria to Scotland, engineers erected arch structures. The first use of arches was for civic structures, like drains and city gates. Later arches were utilized for major civic buildings, bridges, and aqueducts, with outstanding 1st-century AD examples provided by the Colosseum, Pont du Gard, and the aqueduct of Segovia. The introduction of the ceremonial triumphal arch dates back to the Roman Republic, although the best examples are from imperial times (Arch of Augustus at Susa, Arch of Titus).
Romans initially avoided using the arch in religious buildings and, in Rome, arched temples were quite rare until the recognition of Christianity in 313 AD (with the exceptions provided by the Pantheon and the "temple of Minerva Medica"). Away from the capital, arched temples were more common (temple of Jupiter at Sbeitla, Severan temple at Djemila). The arrival of Christianity prompted the creation of a new type of temple, the Christian basilica, which made a thorough break with the pagan tradition, with arches as one of the main elements of the design, along with exposed brick walls (Santa Sabina in Rome, Sant'Apollinare in Classe). For a long period, from the late 5th century to the 20th century, arcades were a standard staple of Western Christian architecture.
Vaults began to be used for roofing large interior spaces such as halls and temples, a function that was also assumed by domed structures from the 1st century BC onwards.
The segmental arch was first built by the Romans who realized that an arch in a bridge did not have to be a semicircle, such as in Alconétar Bridge or Ponte San Lorenzo. The utilitarian and mass residential (insulae) buildings, as found in Ostia Antica and Pompeii, mostly used low segmental arches made of bricks and architraves made of wood, while the concrete lintel arches can be found in villas and palaces.
Ancient China
Ancient architecture of China (and Japan) used mostly timber-framed construction and the trabeated system. Arches were little used, although a few arch bridges are known from literature and one artistic depiction in a stone-carved relief. Since the only surviving artefacts of architecture from the Han dynasty (202 BC – 220 AD) are rammed-earth defensive walls and towers, ceramic roof tiles from no-longer-extant wooden buildings, stone gate towers, and underground brick tombs, the known vaults, domes, and archways were built with the support of the earth and were not free-standing.
China's oldest surviving stone arch bridge is the Anji Bridge. Still in use, it was built between 595 CE and 605 CE during the Sui dynasty.
Islamic
Islamic architects adopted the Roman arch but quickly showed their resourcefulness: by the 8th century the simple semicircular arch was almost entirely replaced with fancier shapes, a few fine examples of the former in Umayyad architecture notwithstanding (cf. the Great Mosque of Damascus, 706–715 CE). The first pointed arches appear as early as the end of the 7th century AD (Al-Aqsa Mosque, Palace of Ukhaidhir, cisterns at the White Mosque of Ramle). Their variations spread fast and wide: the Mosque of Ibn Tulun in Cairo (876–879 AD), the Nizamiyya Madrasa at Khar Gerd (now Iran, 11th century), and the Kongo Mosque in Diani Beach (Kenya, 16th century).
Islamic architecture brought to life a large number of arch forms: the round horseshoe arch that became a characteristic trait of Islamic buildings, the keel arch, the cusped arch, and the mixed-line arch (where the curved "ogee swell" is interspersed with abrupt bends). The Great Mosque of Cordoba, which can be considered a catalogue of Islamic arches, also contains arches with almost straight sides, as well as trefoil, interlaced, and joggled arches. The Mosque of Ibn Tulun adds four-centred and stilted versions of the pointed arch.
It is quite likely that the appearance of the pointed arch, an essential element of the Gothic style, in Europe (Monte Cassino, 1066–1071 AD, and the Cluny Abbey five years later) and of the ogee arch in Venice (c. 1250) is a result of Islamic influence, possibly through Sicily. Saoud also credits Islamic architects with the spread of the transverse arch. The mixed-line arch became popular in the Mudéjar style and subsequently spread around the Spanish-speaking world.
Western Europe
The collapse of the Western Roman Empire left the church as the only patron of major construction, and all pre-Romanesque architectural styles borrowed from Roman construction with its semicircular arch. Due to the decline in construction quality, the walls were thicker, and the arches thus heavier, than their Roman prototypes. Eventually, architects started to use the depth of the arches for decoration, turning the deep opening into an arch order (or rebated arch, a sequence of progressively smaller concentric arches, each inset with a rebate).
The Romanesque style started experimenting with the pointed arch late in the 11th century (Cluny Abbey). Within a few decades, the practice spread (Durham Cathedral, Basilica of Saint-Denis). Early Gothic exploited the flexibility of the pointed arch by grouping together arches of different spans but of the same height.
While the arches used in medieval Europe were borrowed from Roman and Islamic architecture, the use of the pointed arch to form the rib vault was novel and became the defining characteristic of Gothic construction. Around 1400 AD, the city-states of Italy, where the pointed arch had never gained much traction, initiated the revival of the Roman style with its round arches: the Renaissance. By the 16th century the new style had spread across Europe and, through the influence of empires, to the rest of the world. The arch remained a dominant architectural form until the introduction of new construction materials, like steel and concrete.
India
The history of the arch in India is very long: some arches were apparently found in excavations at Kosambi (2nd millennium BC), but the continuous history begins with rock-cut arches in the Lomas Rishi cave (3rd century BC). A vaulted roof of an early Harappan burial chamber has been noted at Rakhigarhi, and S. R. Rao reports a vaulted roof of a small chamber in a house from Lothal. Barrel vaults were also used in the Late Harappan Cemetery H culture (1900–1300 BC), where, as discovered by Vats in 1940 during excavations at Harappa, they formed the roof of a metalworking furnace.
The use of arches before the Islamic conquest of India in the 12th century AD was sporadic, with ogee arches and barrel vaults in rock-cut temples (Karla Caves, from the 1st century BC) and decorative pointed gavaksha arches. By the 5th century AD, voussoir vaults were used structurally in brick construction. Surviving examples include the temple at Bhitargaon (5th century AD) and the Mahabodhi temple (7th century AD); the latter has both pointed and semicircular arches. This Gupta-era arched vault system was later used extensively in Burmese Buddhist temples at Pyu and Bagan in the 11th and 12th centuries.
With the arrival of Islamic and other Western Asian influences, arches became prominent in Indian architecture, although post-and-lintel construction was still preferred. A variety of pointed and lobed arches was characteristic of Indo-Islamic architecture, with the monumental example of the Buland Darwaza, which has a pointed arch decorated with small cusped arches.
Pre-Columbian America
Mayan architecture utilized corbel arches. The other Mesoamerican cultures used only flat roofs with no arches whatsoever, although some researchers have suggested that both Maya and Aztec architects understood the concept of a true arch.
Revival of the trabeated system
The 19th-century introduction of wrought iron (and later steel) into construction changed the role of the arch. Due to the high tensile strength of the new materials, relatively long lintels became possible, as was demonstrated by the tubular Britannia Bridge (Robert Stephenson, 1846–1850). A fervent proponent of the trabeated system, Alexander "Greek" Thomson, whose preference for lintels was originally based on aesthetic criteria, observed that the spans of this bridge are longer than those of any arch ever built, thus "the simple, unsophisticated stone lintel contains in its structure all the scientific appliances [...] used in the great tubular bridge. [...] Stonehenge is more scientifically constructed than York Minster." The use of arches in bridge construction continued (the Britannia Bridge was rebuilt in 1972 as a truss arch bridge), yet steel frames and reinforced concrete frames mostly replaced arches as the load-bearing elements in buildings.
Construction
As a pure compression form, the arch owes its utility to the fact that many building materials, including stone and unreinforced concrete, are strong under compression but brittle under tensile stress.
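To make this concrete, a standard result from arch statics (offered here as an illustration, not a formula from the source): a parabolic arch of span L and rise f carrying a uniform load w per unit length works in pure compression and pushes outward on its abutments with a horizontal thrust

```latex
H = \frac{w L^{2}}{8 f}
% Illustrative numbers (assumed, not from the source):
% L = 20 m, f = 4 m, w = 10 kN/m  =>  H = 10 \times 20^{2} / (8 \times 4) = 125 kN
```

The inverse dependence on the rise f shows why flatter arches push harder against their abutments; this is the same steady thrust behind the saying "the arch never sleeps" discussed below.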
Masonry
The voussoirs can be wedge-shaped or have the form of a rectangular cuboid; in the latter case, the wedge-like shape of each joint is provided by the mortar.
An arch is held in place by the weight of all of its members, making construction problematic. One answer is to build a frame (historically, of wood) which exactly follows the form of the underside of the arch. This is known as a centre or centring. Voussoirs are laid on it until the arch is complete and self-supporting. For an arch higher than head height, scaffolding would be required, so it could be combined with the arch support. Arches may fall when the frame is removed if design or construction has been faulty.
Old arches sometimes need reinforcement due to decay of the keystones, forming what is known as a bald arch.
Reinforced concrete
In reinforced concrete construction, the principle of the arch is used so as to benefit from the concrete's strength in resisting compressive stress. Where any other form of stress arises, such as tensile or torsional stress, it has to be resisted by carefully placed reinforcement rods or fibres.
Architectural styles
The type of arches used (or their absence) is one of the most prominent characteristics of an architectural style. For example, when Heinrich Hübsch, in the 19th century, tried to classify architectural styles, his "primary elements" were the roof and the supports, with two top-level basic types: trabeated (no arches) and arcuated (arch-based). His next division of the arcuated styles was based on the use of round versus pointed arch shapes.
Cultural references
The steady horizontal push of an arch against its abutments gave rise to the saying "the arch never sleeps", attributed to many sources, from Hindus to Arabs. This adage stresses that the arch carries "a seed of death" for itself and the structure containing it, a statement borne out by observation of Roman ruins. The plot of The Nebuly Coat by J. Meade Falkner, inspired by the collapse of a tower at Chichester Cathedral, plays with this idea while dealing with the slow disintegration of a church building. Saoud explains the proverb by the chain-like self-balancing of the horizontal and vertical forces in the arch and its "universal adaptability".
| Technology | Architectural elements | null |
52080 | https://en.wikipedia.org/wiki/Vibrio%20cholerae | Vibrio cholerae | Vibrio cholerae is a species of Gram-negative, facultatively anaerobic, comma-shaped bacteria. The bacteria naturally live in brackish or salt water, where they attach themselves easily to the chitin-containing shells of crabs, shrimp, and other shellfish. Some strains of V. cholerae are pathogenic to humans and cause a deadly disease called cholera, which can be contracted through the consumption of undercooked or raw seafood or the drinking of contaminated water.
V. cholerae was first described by Félix-Archimède Pouchet in 1849 as some kind of protozoan. Filippo Pacini correctly identified it as a bacterium, and it is from his work that the scientific name is adopted. The bacterium's role as the cause of cholera was discovered by Robert Koch in 1884. Sambhu Nath De isolated the cholera toxin and demonstrated it as the cause of cholera in 1959.
The bacterium has a flagellum (a tail-like structure) at one pole and several pili throughout its cell surface. It undergoes respiratory and fermentative metabolism. Two serogroups, called O1 and O139, are responsible for cholera outbreaks. Infection is mainly acquired by drinking contaminated water or ingesting food contaminated with faecal matter from an infected person, and is therefore linked to sanitation and hygiene. When ingested, the bacteria colonize the intestinal mucosa, which can cause diarrhea and vomiting in a host within several hours to 2–3 days of ingestion. Ringer's lactate and oral rehydration solution, combined with antibiotics such as fluoroquinolones and tetracyclines, are the common treatments in severe cases.
V. cholerae has two circular chromosomes. One chromosome carries the genes for cholera toxin (CT), a protein that causes profuse, watery diarrhea (known as "rice-water stool"). The bacterial DNA does not directly code for the toxin, however: the genes for cholera toxin are carried by CTXphi (CTXφ), a temperate bacteriophage (virus), and the toxin is produced only when the virus is inserted into the bacterial DNA. Quorum sensing in V. cholerae is well studied; it activates host immune signaling and prolongs host survival by limiting the bacterial intake of nutrients, such as tryptophan, which is further converted to serotonin. As such, quorum sensing allows a commensal interaction between host and pathogenic bacteria.
Discovery
Initial observations
During the third global cholera pandemic (1846–1860), there was extensive scientific research to understand the etiology of the disease. The miasma theory, which posited that infections spread through contaminated air, was no longer a satisfactory explanation. The English physician John Snow was the first to give convincing evidence, in London in 1854, that cholera was spread by drinking water – a contagion, not miasma. Yet he could not identify the pathogen, so most people continued to believe in the miasma origin.
V. cholerae was first observed and recognized under the microscope by the French zoologist Félix-Archimède Pouchet. In 1849, Pouchet examined the stool samples of four people with cholera. His presentation before the French Academy of Sciences on 23 April was recorded as: "[Pouchet] could verify that there existed in these [cholera patients] dejecta an immense quantity of microscopic infusoria." As summarised in the Gazette medicale de Paris (1849, p. 327), in a letter read at the 23 April 1849 meeting of the Paris Academy of Sciences, Pouchet announced that the organisms were infusoria, a name then used for microscopic protists, naming them the 'Vibrio rugula of Mueller and Shrank', a species of protozoa described by the Danish naturalist Otto Friedrich Müller in 1786.
Identification of the bacterium
An Italian physician, Filippo Pacini, while investigating a cholera outbreak in Florence in late 1854, identified the causative pathogen as a new type of bacterium. He performed autopsies of corpses and made meticulous microscopic examinations of the tissues and body fluids. From feces and intestinal mucosa, he identified many comma-shaped bacilli. He reported his discovery before the Società Medico-Fisica Fiorentina (Medico-Physician Society of Florence) on 10 December, and published it in the 12 December issue of the Gazzetta Medica Italiana (Medical Gazette of Italy). Pacini introduced the name vibrioni (Latin vībro means "to move rapidly to and fro, to shake, to agitate"). The Catalan physician Joaquim Balcells i Pascual also reported such a bacterium around the same time. The discovery of the new bacterium was not regarded as medically important, as the bacterium was not directly attributed to cholera. Pacini himself stated that there was no reason to say that the bacterium caused the disease, since he had failed to create a pure culture and perform experiments, which was necessary 'to attribute the quality of contagion to cholera'. The miasma theory was still not ruled out.
Rediscovery
The medical importance and relationship between the bacterium and the cholera disease was discovered by German physician Robert Koch. In August 1883, Koch, with a team of German physicians, went to Alexandria, Egypt, to investigate the cholera epidemic there. Koch found that the intestinal mucosa of people who died of cholera always had the bacterium, yet he could not confirm if it was the causative agent. He moved to Calcutta (now Kolkata), India, where the epidemic was more severe. It was from here that he isolated the bacterium in a pure culture on 7 January 1884. He subsequently confirmed that the bacterium was a new species, and described it as "a little bent, like a comma." He reported his discovery to the German Secretary of State for the Interior on 2 February, and it was published in the Deutsche Medizinische Wochenschrift (German Medical Weekly).
Although Koch was convinced that the bacterium was the cholera pathogen, he could not procure the critical evidence that the bacterium produced the symptoms in healthy subjects (an important element in what was later known as Koch's postulates). His experiments on animals using his pure bacterial culture did not lead to the appearance of the disease in any of the subjects, and he correctly deduced that animals are immune to the human pathogen. The bacterium was by then known as "the comma bacillus." It was only in 1959, in Calcutta, that the Indian physician Sambhu Nath De isolated the cholera toxin and showed that it caused cholera in healthy subjects, thereby fully proving the bacterium–cholera relationship.
Taxonomy
Pacini had used the name "vibrio cholera", without proper binomial rendering, for the bacterium. Following Koch's description, the scientific name Bacillus comma was popularised. But an Italian bacteriologist, Vittore Trevisan, published in 1884 that Koch's bacterium was the same as Pacini's and introduced the name Bacillus cholerae. The German physician Richard Friedrich Johannes Pfeiffer renamed it Vibrio cholerae in 1896. The name was adopted by the Committee of the Society of American Bacteriologists on Characterization and Classification of Bacterial Types in 1920. In 1964, Rudolph Hugh of the George Washington University School of Medicine proposed using the genus Vibrio with the type species V. cholerae (Pacini 1854) as the permanent name of the bacterium, regardless of the earlier use of the same name for a protozoan. This was accepted by the Judicial Commission of the International Committee on Bacteriological Nomenclature in 1965, and by the International Association of Microbiological Societies in 1966.
Characteristics
V. cholerae is a highly motile, comma-shaped, gram-negative rod. The active movement of V. cholerae inspired the genus name, because vibrio in Latin means "to quiver". Except for V. cholerae and V. mimicus, all other Vibrio species are halophilic. Initial isolates are slightly curved, whereas they can appear as straight rods upon laboratory culturing. The bacterium has a flagellum at one cell pole as well as pili. It tolerates alkaline media that kill most intestinal commensals, but is sensitive to acid. Like other vibrios, it is a facultative anaerobe and can undergo respiratory and fermentative metabolism. It measures 0.3 μm in diameter and 1.3 μm in length, with an average swimming velocity of around 75.4 μm/s.
Pathogenicity
V. cholerae pathogenicity genes code for proteins directly or indirectly involved in the virulence of the bacteria. To adapt to the host intestinal environment and to avoid attack by bile acids and antimicrobial peptides, V. cholerae uses its outer membrane vesicles (OMVs). Upon entry, the bacterium sheds its OMVs, which carry all the membrane modifications that would make it vulnerable to host attack.
During infection, V. cholerae secretes cholera toxin (CT), a protein that causes profuse, watery diarrhea (known as "rice-water stool"). The cholera toxin contains five B subunits, which attach to the intestinal epithelial cells, and one A subunit, which carries the toxin activity. Colonization of the small intestine also requires the toxin coregulated pilus (TCP), a thin, flexible, filamentous appendage on the surface of bacterial cells. Expression of both CT and TCP is mediated by two-component systems (TCS), which typically consist of a membrane-bound histidine kinase and an intracellular response regulator. TCS enable bacteria to respond to changing environments. In V. cholerae, several TCS have been identified as important in colonization, biofilm production, and virulence. Quorum regulatory small RNAs (Qrr RNA) have been identified as targets of V. cholerae TCS. Here, the small RNA (sRNA) molecules bind to mRNA to block translation or induce degradation of inhibitors of expression of virulence or colonization genes. In V. cholerae, the TCS EnvZ/OmpR alters gene expression via the sRNA coaR in response to changes in osmolarity and pH. An important target of coaR is tcpI, which negatively regulates expression of the gene encoding the major subunit of the TCP (tcpA). When tcpI is bound by coaR, it is no longer able to repress expression of tcpA, leading to an increased colonization ability. Expression of coaR is upregulated by EnvZ/OmpR at a pH of 6.5, the normal pH of the intestinal lumen, but is low at higher pH values. V. cholerae in the intestinal lumen uses the TCP to attach to the intestinal mucosa, without invading the mucosa. It then secretes cholera toxin, causing the symptoms of the disease. The cholera toxin increases cyclic AMP (cAMP) by activating adenylyl cyclase through the Gs pathway, which leads to an efflux of water and sodium into the intestinal lumen, producing the watery "rice-water" stools.
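The regulatory chain just described (EnvZ/OmpR upregulates coaR near intestinal pH; coaR sequesters tcpI; tcpI otherwise represses tcpA) can be sketched as a toy boolean model. This is purely illustrative: the pH window and the on/off simplification are assumptions, not measured parameters.

```python
# Toy boolean sketch of the EnvZ/OmpR -> coaR -| tcpI -| tcpA cascade.
# Gene names follow the text above; the pH window is an invented assumption.

def tcpA_expressed(ph: float) -> bool:
    """Return True if the TCP major-subunit gene tcpA is de-repressed."""
    # EnvZ/OmpR upregulates the sRNA coaR near the intestinal pH of ~6.5,
    # and coaR is low at higher pH values.
    coaR_high = abs(ph - 6.5) < 0.5   # assumed window, for illustration only
    # coaR binds tcpI; bound tcpI can no longer repress tcpA.
    tcpI_represses = not coaR_high
    return not tcpI_represses

for ph in (5.5, 6.5, 7.5, 8.5):
    print(f"pH {ph}: tcpA expressed = {tcpA_expressed(ph)}")
```

A double-negative cascade like this (an activator of a repressor's repressor) behaves as a net activator, which is why lumenal pH ends up switching colonization on.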
V. cholerae can cause syndromes ranging from asymptomatic infection to cholera gravis. In endemic areas, 75% of cases are asymptomatic, 20% are mild to moderate, and 2–5% are severe forms such as cholera gravis. Symptoms include abrupt onset of watery diarrhea (a grey and cloudy liquid), occasional vomiting, and abdominal cramps. Dehydration ensues, with symptoms and signs such as thirst, dry mucous membranes, decreased skin turgor, sunken eyes, hypotension, weak or absent radial pulse, tachycardia, tachypnea, hoarse voice, oliguria, cramps, kidney failure, seizures, somnolence, coma, and death. Death due to dehydration can occur within a few hours to days in untreated children. The disease is also particularly dangerous for pregnant women and their fetuses during late pregnancy, as it may cause premature labor and fetal death. A study by the Centers for Disease Control (CDC) in Haiti found that among 900 pregnant women who contracted the disease, 16% suffered fetal death. Risk factors for these deaths include third-trimester pregnancy, younger maternal age, severe dehydration, and vomiting. Dehydration poses the biggest health risk to pregnant women in countries with high rates of cholera. In cases of cholera gravis involving severe dehydration, up to 60% of patients can die; however, less than 1% of cases treated with rehydration therapy are fatal. The disease typically lasts 4–6 days. Worldwide, diarrhoeal disease, caused by cholera and many other pathogens, is the second-leading cause of death for children under the age of 5, and at least 120,000 deaths per year are estimated to be caused by cholera. In 2002, the WHO estimated the case fatality ratio for cholera at about 3.95%.
Cholera illness and symptoms
V. cholerae infects the intestine and causes diarrhea, the hallmark symptom of cholera. Infection can be spread by eating contaminated food or drinking contaminated water. It can also spread through skin contact with contaminated human feces. Not all infections produce symptoms; only about 1 in 10 infected people develop diarrhea. The major symptoms include watery diarrhea, vomiting, rapid heart rate, loss of skin elasticity, low blood pressure, thirst, and muscle cramps. The illness can become serious, progressing to kidney failure and possibly coma. If diagnosed, it can be treated with medications.
Disease occurrence
V. cholerae occurrence can be endemic or epidemic. In countries where the disease has been present for the past three years and the confirmed cases are local (transmitted within the confines of the country), transmission is considered "endemic." Alternatively, an outbreak is declared when the occurrence of disease exceeds the normal occurrence for a given time or location. Epidemics can last several days or span years, and countries experiencing an epidemic can also be endemic. The longest-standing V. cholerae epidemic was recorded in Yemen. Yemen had two outbreaks: the first occurred between September 2016 and April 2017, and the second began later in April 2017 and was considered resolved in 2019. The epidemic in Yemen took over 2,500 lives and impacted over 1 million people. Further outbreaks have occurred in Africa, the Americas, and Haiti.
Preventive measures
When visiting areas with epidemic cholera, the following precautions should be observed: drink and use bottled water; frequently wash hands with soap and safe water; use chemical toilets or bury feces if no restroom is available; do not defecate in any body of water; and cook food thoroughly. Supplying safe water and proper sanitation is important. Hand hygiene is essential where soap and water are not available; when no means of hand washing exists, scrub hands with ash or sand and rinse with clean water. A single-dose vaccine is available for those traveling to an area where cholera is common.
A V. cholerae vaccine is available to prevent the spread of disease: the oral cholera vaccine (OCV). Three types of OCV are available for prevention: Dukoral®, Shanchol™, and Euvichol-Plus®. All three require two doses to be fully effective. Countries with endemic or epidemic cholera are eligible to receive the vaccine based on several criteria: risk of cholera, severity of cholera, WASH conditions and capacity to improve them, healthcare conditions and capacity to improve them, capacity to implement OCV campaigns, capacity to conduct M&E activities, and commitment at national and local levels. From the start of the OCV program to May 2018, over 25 million vaccine doses were distributed to countries meeting these criteria.
Treatment
The basic, overall treatment for cholera is rehydration, to replace the fluids that have been lost. Those with mild dehydration can be treated orally with an oral rehydration solution (ORS). When patients are severely dehydrated and unable to take in the proper amount of ORS, IV fluid treatment is generally pursued. Antibiotics are used in some cases, typically fluoroquinolones and tetracyclines.
Genome
V. cholerae (and Vibrionaceae in general) has two circular chromosomes, together totalling 4 million base pairs of DNA sequence and 3,885 predicted genes. The genes for cholera toxin are carried by CTXphi (CTXφ), a temperate bacteriophage inserted into the V. cholerae genome. CTXφ can transmit cholera toxin genes from one V. cholerae strain to another, one form of horizontal gene transfer. The genes for toxin coregulated pilus are coded by the Vibrio pathogenicity island (VPI), which is separate from the prophage.
The larger first chromosome is 3 Mbp long with 2,770 open reading frames (ORFs). It contains the crucial genes for toxicity, regulation of toxicity, and important cellular functions, such as transcription and translation.
The second chromosome is 1 Mbp long with 1,115 ORFs. It is determined to be different from a plasmid or megaplasmid due to the inclusion of housekeeping and other essential genes in the genome, including essential genes for metabolism, heat-shock proteins, and 16S rRNA genes, which are ribosomal subunit genes used to track evolutionary relationships between bacteria. Also relevant in determining whether the replicon is a chromosome is whether it represents a significant percentage of the genome; chromosome 2 makes up roughly a quarter of the genome by size (1 of the 4 Mbp). And, unlike plasmids, chromosomes are not self-transmissible. However, the second chromosome may once have been a megaplasmid, because it contains some genes usually found on plasmids, including a P1 plasmid-like origin of replication.
Bacteriophage CTXφ
CTXφ (also called CTXphi) is a filamentous phage that contains the genes for cholera toxin. Infectious CTXφ particles are produced when V. cholerae infects humans. Phage particles are secreted from bacterial cells without lysis. When CTXφ infects V. cholerae cells, it integrates into specific sites on either chromosome. These sites often contain tandem arrays of integrated CTXφ prophage. In addition to the ctxA and ctxB genes encoding cholera toxin, CTXφ contains eight genes involved in phage reproduction, packaging, secretion, integration, and regulation. The CTXφ genome is 6.9 kb long.
Ecology and epidemiology
The main reservoirs of V. cholerae are aquatic sources such as rivers, brackish waters, and estuaries, often in association with copepods or other zooplankton, shellfish, and aquatic plants.
Cholera infections are most commonly acquired from drinking water in which V. cholerae is found naturally or into which it has been introduced from the feces of an infected person. Cholera is most likely to be found and spread in places with inadequate water treatment, poor sanitation, and inadequate hygiene. Other common vehicles include raw or undercooked fish and shellfish. Transmission from person to person is very unlikely, and casual contact with an infected person is not a risk for becoming ill. V. cholerae thrives in an aquatic environment, particularly in surface water. The primary connection between humans and pathogenic strains is through water, particularly in economically disadvantaged areas that lack good water purification systems.
Nonpathogenic strains are also present in aquatic ecologies. The wide variety of pathogenic and nonpathogenic strains that co-exist in aquatic environments is thought to account for the many genetic varieties. Gene transfer is fairly common amongst bacteria, and recombination of different V. cholerae genes can lead to new virulent strains.
A symbiotic relationship between V. cholerae and Ruminococcus obeum has been identified. The R. obeum autoinducer represses the expression of several V. cholerae virulence factors. This inhibitory mechanism is likely to be present in other gut microbiota species, which opens the way to mining the gut microbiota of specific communities for species that may use autoinducers or other mechanisms to restrict colonization by V. cholerae or other enteropathogens. Through autoinducers, V. cholerae itself can develop biofilms and control virulence in response to extracellular quorum-sensing molecules.
Outbreaks of cholera cause an estimated 120,000 deaths annually worldwide. There have been roughly seven pandemics since 1817, when the first one began; these pandemics first arose in the Indian subcontinent and spread from there.
Diversity and evolution
Two serogroups of V. cholerae, O1 and O139, cause outbreaks of cholera. O1 causes the majority of outbreaks, while O139 – first identified in Bangladesh in 1992 – is confined to Southeast Asia. Many other serogroups of V. cholerae, with or without the cholera toxin gene (including the nontoxigenic strains of the O1 and O139 serogroups), can cause a cholera-like illness. Only toxigenic strains of serogroups O1 and O139 have caused widespread epidemics.
V. cholerae O1 has two biotypes, classical and El Tor, and each biotype has two distinct serotypes, Inaba and Ogawa. The symptoms of infection are indistinguishable, although more people infected with the El Tor biotype remain asymptomatic or have only a mild illness. In recent years, infections with the classical biotype of V. cholerae O1 have become rare and are limited to parts of Bangladesh and India. Recently, new variant strains have been detected in several parts of Asia and Africa. Observations suggest these strains cause more severe cholera with higher case fatality rates.
Natural genetic transformation
V. cholerae can be induced to become competent for natural genetic transformation when grown on chitin, a biopolymer that is abundant in aquatic habitats (e.g. from crustacean exoskeletons). Natural genetic transformation is a sexual process involving DNA transfer from one bacterial cell to another through the intervening medium, and the integration of the donor sequence into the recipient genome by homologous recombination. Transformation competence in V. cholerae is stimulated by increasing cell density accompanied by nutrient limitation, a decline in growth rate, or stress. The V. cholerae uptake machinery involves a competence-induced pilus and a conserved DNA-binding protein that acts as a ratchet to reel DNA into the cytoplasm. Two models have been proposed for genetic transformation: the sex hypothesis and the competent-bacteria model.
| Biology and health sciences | Gram-negative bacteria | Plants |
52082 | https://en.wikipedia.org/wiki/Antihydrogen | Antihydrogen | Antihydrogen is the antimatter counterpart of hydrogen. Whereas the common hydrogen atom is composed of an electron and proton, the antihydrogen atom is made up of a positron and antiproton. Scientists hope that studying antihydrogen may shed light on the question of why there is more matter than antimatter in the observable universe, known as the baryon asymmetry problem. Antihydrogen is produced artificially in particle accelerators.
Experimental history
Accelerators first detected hot antihydrogen in the 1990s. ATHENA studied cold antihydrogen in 2002. Antihydrogen was first trapped by the Antihydrogen Laser Physics Apparatus (ALPHA) team at CERN in 2010, who then measured its structure and other important properties. ALPHA, AEgIS, and GBAR plan to further cool and study antihydrogen atoms.
1s–2s transition measurement
In 2016, the ALPHA experiment measured the atomic electron transition between the two lowest energy levels of antihydrogen, 1s–2s. The results, which are identical to that of hydrogen within the experimental resolution, support the idea of matter–antimatter symmetry and CPT symmetry.
In the presence of a magnetic field the 1s–2s transition splits into two hyperfine transitions with slightly different frequencies. The team calculated the transition frequencies for normal hydrogen under the magnetic field in the confinement volume as:
fdd =
fcc =
A single-photon transition between s states is prohibited by quantum selection rules, so to elevate ground-state positrons to the 2s level, the confinement space was illuminated by a laser tuned to half the calculated transition frequencies, stimulating the allowed two-photon absorption.
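For a sense of scale, here is a small calculation using the standard literature value for the ordinary-hydrogen 1s–2s frequency (about 2.466×10^15 Hz; this number is an outside assumption, since the article's own measured values are elided above):

```python
# Wavelength of a laser tuned to half the hydrogen 1s-2s transition frequency.
# F_1S2S is the approximate literature value for ordinary hydrogen, quoted
# here as an assumption rather than a value from this article.

C = 2.998e8          # speed of light, m/s
F_1S2S = 2.466e15    # hydrogen 1s-2s transition frequency, Hz (approximate)

single_photon_wavelength = C / F_1S2S            # ~122 nm (vacuum ultraviolet)
two_photon_laser_wavelength = C / (F_1S2S / 2)   # ~243 nm

print(f"1s-2s photon:    {single_photon_wavelength * 1e9:.1f} nm")
print(f"two-photon laser: {two_photon_laser_wavelength * 1e9:.1f} nm")
```

Halving the frequency doubles the wavelength, which is why the two-photon scheme can use ultraviolet light near 243 nm rather than the vacuum-ultraviolet light a single-photon transition would require.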
Antihydrogen atoms excited to the 2s state can then evolve in one of several ways:
They can emit two photons and return directly to the ground state, as they were before excitation
They can absorb another photon, which ionizes the atom
They can emit a single photon and return to the ground state via the 2p state—in this case the positron spin can flip or remain the same.
Both the ionization and spin-flip outcomes cause the atom to escape confinement. The team calculated that, assuming antihydrogen behaves like normal hydrogen, roughly half the antihydrogen atoms would be lost during the resonant frequency exposure, as compared to the no-laser case. With the laser source tuned 200 kHz below half the transition frequencies, the calculated loss was essentially the same as for the no-laser case.
The ALPHA team made batches of antihydrogen, held them for 600 seconds and then tapered down the confinement field over 1.5 seconds while counting how many antihydrogen atoms were annihilated. They did this under three different experimental conditions:
Resonance: exposing the confined antihydrogen atoms to a laser source tuned to exactly half the transition frequency for 300 seconds for each of the two transitions,
Off-resonance: exposing the confined antihydrogen atoms to a laser source tuned 200 kilohertz below the two resonance frequencies for 300 seconds each,
No-laser: confining the antihydrogen atoms without any laser illumination.
The two controls, off-resonance and no-laser, were needed to ensure that the laser illumination itself was not causing annihilations, perhaps by liberating normal atoms from the confinement vessel surface that could then combine with the antihydrogen.
The team conducted 11 runs of the three cases and found no significant difference between the off-resonance and no-laser runs, but a 58% drop in the number of events detected after the resonance runs. They were also able to count annihilation events during the runs and found a higher level during the resonance runs, again with no significant difference between the off-resonance and no-laser runs. The results were in good agreement with predictions based on normal hydrogen and can be "interpreted as a test of CPT symmetry at a precision of 200 ppt."
Characteristics
The CPT theorem of particle physics predicts that antihydrogen atoms have many of the characteristics of regular hydrogen; i.e. the same mass, magnetic moment, and atomic state transition frequencies (see atomic spectroscopy). For example, excited antihydrogen atoms are expected to glow the same color as regular hydrogen. Antihydrogen atoms should be attracted to other matter or antimatter gravitationally with a force of the same magnitude that ordinary hydrogen atoms experience. This would not be true if antimatter has negative gravitational mass, which is considered highly unlikely, though not yet empirically disproven (see gravitational interaction of antimatter). A theoretical framework for negative mass and repulsive gravity (antigravity) between matter and antimatter has recently been developed, and this theory is compatible with the CPT theorem.
When antihydrogen comes into contact with ordinary matter, its constituents quickly annihilate. The positron annihilates with an electron to produce gamma rays. The antiproton, meanwhile, is made up of antiquarks that combine with the quarks in neutrons or protons, resulting in high-energy pions that quickly decay into muons, neutrinos, positrons, and electrons. If antihydrogen atoms were suspended in a perfect vacuum, they should survive indefinitely.
As an anti-element, antihydrogen is expected to have exactly the same properties as hydrogen. For example, antihydrogen would be a gas under standard conditions and would combine with antioxygen to form antiwater.
Production
The first antihydrogen was produced in 1995 by a team led by Walter Oelert at CERN using a method first proposed by Charles Munger Jr, Stanley Brodsky and Ivan Schmidt Andrade.
At the Low Energy Antiproton Ring (LEAR), antiprotons from an accelerator were shot at xenon clusters, producing electron–positron pairs. Antiprotons capture positrons with only a tiny probability, so, as calculations had predicted, this method is not suited to substantial production. Fermilab measured a somewhat different cross section, in agreement with predictions of quantum electrodynamics. Both experiments resulted in highly energetic, or hot, anti-atoms, unsuitable for detailed study.
Subsequently, CERN built the Antiproton Decelerator (AD) to support efforts towards low-energy antihydrogen, for tests of fundamental symmetries. The AD supplies several CERN groups. CERN expects their facilities will be capable of producing 10 million antiprotons per minute.
Low-energy antihydrogen
Experiments by the ATRAP and ATHENA collaborations at CERN brought together positrons and antiprotons in Penning traps, resulting in synthesis at a typical rate of 100 antihydrogen atoms per second. Antihydrogen was first produced by ATHENA in 2002, and then by ATRAP; by 2004, millions of antihydrogen atoms had been made. The atoms synthesized had a relatively high temperature (a few thousand kelvins), and would consequently hit the walls of the experimental apparatus and annihilate. Most precision tests, by contrast, require long observation times.
ALPHA, a successor of the ATHENA collaboration, was formed to stably trap antihydrogen. While the atom is electrically neutral, its spin magnetic moment interacts with an inhomogeneous magnetic field; some atoms will be attracted to a magnetic minimum, created by a combination of mirror and multipole fields.
In November 2010, the ALPHA collaboration announced that they had trapped 38 antihydrogen atoms for a sixth of a second, the first confinement of neutral antimatter. In June 2011, they trapped 309 antihydrogen atoms, up to 3 simultaneously, for up to 1,000 seconds. They then studied its hyperfine structure, gravity effects, and charge. ALPHA will continue measurements along with experiments ATRAP, AEgIS and GBAR.
In 2018, AEgIS produced a novel pulsed source of antihydrogen atoms with a production time spread of merely 250 nanoseconds.
The pulsed source is generated by the charge exchange reaction between Rydberg positronium atoms (produced by injecting a pulsed positron beam into a nanochanneled Si target and exciting the atoms with laser pulses) and antiprotons, trapped, cooled, and manipulated in electromagnetic traps. The pulsed production enables control of the antihydrogen temperature, the formation of an antihydrogen beam, and, in the next phase, a precision measurement of gravitational behaviour using an atomic interferometer, the so-called moiré deflectometer.
Larger antimatter atoms
Larger antimatter atoms such as antideuterium (), antitritium (), and antihelium () are much more difficult to produce. Antideuterium, antihelium-3 () and antihelium-4 () nuclei have been produced with such high velocities that synthesis of their corresponding atoms poses several technical hurdles.
| Physical sciences | Antimatter | Physics |
52082 | https://en.wikipedia.org/wiki/Cable%20transport | Cable transport | Cable transport is a broad class of transport modes that have cables. They transport passengers and goods, often in vehicles called cable cars. The cable may be driven or passive, and items may be moved by pulling, sliding, sailing, or by drives within the object being moved on cableways. The use of pulleys and balancing of loads moving up and down are common elements of cable transport. They are often used in mountainous areas where cable haulage can overcome large differences in elevation.
Common modes of cable transport
Aerial transport
Forms of cable transport in which one or more cables are strung between supports of various forms and cars are suspended from these cables.
Aerial tramway
Chairlift
Funitel
Gondola lift
Ski lift
Zip line
Cable railways
Forms of cable transport where cars on rails are hauled by cables. The rails are usually steeply inclined and usually at ground level.
Cable car
Funicular
Other
Other forms of cable-hauled transport.
Cable ferry
Surface lift
Elevator
History
Rope-drawn transport dates back to 250 BC as evidenced by illustrations of aerial ropeway transportation systems in South China.
Early aerial tramways
The first recorded mechanical ropeway was designed by the Venetian Fausto Veranzio, who drew up a bi-cable passenger ropeway in 1616. The industry generally considers the Dutchman Adam Wybe to have built the first operational system, in 1644. The technology, further developed by the people of the Alpine regions of Europe, progressed and expanded with the advent of wire rope and electric drive.
The first use of wire rope for aerial tramways is disputed. The American inventor Peter Cooper is one early claimant, having constructed an aerial tramway using wire rope in Baltimore in 1832 to move landfill materials. Though there is only partial evidence for the claimed 1832 tramway, Cooper was involved in many such tramways built in the 1850s, and in 1853 he built a two-mile-long tramway to transport iron ore to his blast furnaces at Ringwood, New Jersey.
World War I motivated extensive use of military tramways for warfare between Italy and Austria.
During the industrial revolution, new forms of cable-hauled transportation systems were created including the use of steel cable to allow for greater load support and larger systems. Aerial tramways were first used for commercial passenger haulage in the 1900s.
The first cable railways
The earliest form of cable railway was the gravity incline, which in its simplest form consists of two parallel tracks laid on a steep gradient, with a single rope wound around a winding drum and connecting the trains of wagons on the tracks. Loaded wagons at the top of the incline are lowered down, their weight hauling empty wagons up from the bottom, as sketched below. The winding drum has a brake to control the rate of travel of the wagons. The first use of a gravity incline is not recorded, but the Llandegai Tramway at Bangor in North Wales, opened in 1798, is one of the earliest examples using iron rails.
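A minimal force-balance sketch of why such an incline is self-acting (the masses, grade, and friction coefficient below are invented for illustration, not historical data):

```python
import math

# Gravity incline: the descending loaded train hauls the empty train up.
# Every number here is an illustrative assumption.

g = 9.81                      # gravitational acceleration, m/s^2
theta = math.radians(10.0)    # incline angle (assumed)
mu = 0.01                     # rolling-resistance coefficient (assumed)
m_loaded = 8000.0             # kg, loaded wagons going down (assumed)
m_empty = 3000.0              # kg, empty wagons going up (assumed)

driving = m_loaded * g * math.sin(theta)                         # downhill pull
resisting = (m_empty * g * math.sin(theta)                       # uphill weight
             + mu * (m_loaded + m_empty) * g * math.cos(theta))  # friction

surplus = driving - resisting   # absorbed by the winding-drum brake
print(f"surplus force: {surplus:.0f} N "
      f"({'self-acting' if surplus > 0 else 'needs external power'})")
```

As long as the loaded side outweighs the empty side by enough to overcome friction, no external power is needed; the drum brake dissipates the surplus to keep the speed in check.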
The first cable-hauled street railway was the London and Blackwall Railway, built in 1840, which used fibre to grip the haulage rope. This caused a series of technical and safety issues, which led to the adoption of steam locomotives by 1848.
The first funicular railway was opened in Lyon in 1862.
The Westside and Yonkers Patent Railway Company developed a cable-hauled elevated railway. This 3½ mile long line was proposed in 1866 and opened in 1868. It operated as a cable railway until 1871 when it was converted to use steam locomotives.
The next development of the cable car came in California. Andrew Hallidie, a Scottish emigre and manufacturer of steel cables, gave San Francisco the first effective and commercially successful route, using steel cables, opening the Clay Street Hill Railroad on August 2, 1873. The system featured a human-operated grip, which was able to start and stop the car safely. The rope that was used allowed multiple, independent cars to run on one line, and soon Hallidie's concept was extended to multiple lines in San Francisco.
The first cable railway outside the United Kingdom and the United States was the Roslyn Tramway, which opened in 1881 in Dunedin, New Zealand. America remained the country that made the greatest use of cable railways; by 1890, more than 500 miles of cable-hauled track had been laid, carrying over 1,000,000 passengers per year. In 1890, however, electric tramways exceeded the cable-hauled tramways in mileage, efficiency, and speed.
Early ski lifts
The first surface lift was built in 1908 by German Robert Winterhalder in Schollach/Eisenbach, Hochschwarzwald and started operations February 14, 1908.
A steam-powered toboggan tow was built in Truckee, California, in 1910. The first skier-specific tow in North America was apparently installed in 1933 by Alec Foster at Shawbridge in the Laurentians outside Montreal, Quebec.
The modern J-bar and T-bar mechanism was invented in 1934 by the Swiss engineer Ernst Constam, with the first lift installed in Davos, Switzerland.
The first chairlift was developed by James Curran in 1936. The co-owner of the Union Pacific Railroad, William Averell Harriman owned America's first ski resort, Sun Valley, Idaho. He asked his design office to tackle the problem of lifting skiers to the top of the resort. Curran, a Union Pacific bridge designer, adapted a cable hoist he had designed for loading bananas in Honduras to create the first ski lift.
More recent developments
More recent developments are classified by the type of track their design is based upon. Several projects have since been initiated in New Zealand and Chicago. The social climate around pollution is encouraging a shift from cars back to cable transport, thanks to its advantages; for many years, however, cable systems were a niche form of transportation used primarily in conditions difficult for cars (such as on ski slopes, as lifts). Now that cable transport projects are on the increase, their social effects are becoming more significant.
In 2018, the highest 3S cable car was inaugurated in Zermatt, Switzerland, after more than two years of construction. Also called the "Matterhorn Glacier Ride", it allows passengers to reach the top of the Klein Matterhorn (3,883 m).
Social effects
Comparison with other transport types
When compared to trains and cars, the volume of people to be transported over time and the start-up cost of the project must be considered. In areas with extensive road networks, personal vehicles offer greater flexibility and range. Remote places like mountainous regions and ski slopes may be difficult to link with roads, making cable transport a much easier approach. A cable transport system may also need fewer invasive changes to the local environment.
The use of cable transport is not limited to rural locations such as ski resorts; it can also be used in urban areas, where its forms include funicular railways, gondola lifts, and aerial tramways.
Safety
According to a study by the technical inspection association TÜV SÜD, for every 100 million hours of travel, there are on average 25 deaths due to car accidents, 16 due to plane accidents and only two due to cable car accidents, most of which are due to passenger behaviour.
Accidents
A cable car accident in Cavalese, Italy, on 9 March 1976 is considered the worst aerial lift accident in history. The cabin fell 200 meters down a mountainside, crashing through a grassy meadow before coming to a halt. The tragedy caused the death of 43 people, and four lift officials were jailed on charges relating to the accident.
On April 15, 1978, a cable car at Squaw Valley Ski Resort in California came off one of its cables, dropping 75 feet (23 m) and bouncing violently upward. It collided with a cable, which sheared through the car. Four people were killed and 31 injured.
The Singapore cable car crash of 29 January 1983 occurred when a drilling rig passed beneath the cable car system linking the Singapore mainland with Sentosa island. The derrick of the drilling rig aboard the ship MV Eniwetok struck the cables, causing two of the gondolas to fall into the sea below. There were 7 fatalities.
On February 3, 1998, twenty people died in Cavalese, Italy, when a United States Marine Corps EA-6B Prowler aircraft, flying too low against regulations, cut a cable supporting a gondola of an aerial tramway. Those killed, 19 passengers and one operator, were eight Germans, five Belgians, three Italians, two Poles, one Austrian, and one Dutch national. The United States refused to have the four Marines tried under Italian law and later court-martialed two of them, on minimal charges, in the United States.
The Kaprun disaster was a fire that occurred in an ascending train in the tunnel of the Gletscherbahn Kaprun 2 funicular in Kaprun, Austria, on 11 November 2000. The disaster claimed the lives of 155 people, leaving 12 survivors (10 Germans and two Austrians) from the burning train. It is one of the worst cable car accidents in history.
A cable car derailed and crashed to the ground in the Nevis Range, near Fort William, Scotland, on 13 July 2006, seriously injuring all five passengers. Another car on the same rail also slid back down the rails when the crash happened. Following the incident, 50 people were left stranded at the station while staff and rescuers helped the passengers of the crashed car.
On Wednesday 25 July 2012, passengers on the London cable car were stuck 90 meters in the air when a power failure caused the gondolas to stop over the River Thames. The fault happened at 11:45 am and lasted about 30 minutes. No passengers were injured, but this was the first problem to hit London's new cable car link.
| Technology | Other | null |
52085 | https://en.wikipedia.org/wiki/Protein%20folding | Protein folding | Protein folding is the physical process by which a protein, after synthesis by a ribosome as a linear chain of amino acids, changes from an unstable random coil into a more ordered three-dimensional structure. This structure permits the protein to become biologically functional.
The folding of many proteins begins even during the translation of the polypeptide chain. The amino acids interact with each other to produce a well-defined three-dimensional structure, known as the protein's native state. This structure is determined by the amino-acid sequence or primary structure.
The correct three-dimensional structure is essential to function, although some parts of functional proteins may remain unfolded, indicating that protein dynamics are important. Failure to fold into a native structure generally produces inactive proteins, but in some instances, misfolded proteins have modified or toxic functionality. Several neurodegenerative and other diseases are believed to result from the accumulation of amyloid fibrils formed by misfolded proteins, the infectious varieties of which are known as prions. Many allergies are caused by the incorrect folding of some proteins because the immune system does not produce the antibodies for certain protein structures.
Denaturation of proteins is a process of transition from a folded to an unfolded state. It happens in cooking, burns, proteinopathies, and other contexts. Residual structure present, if any, in the supposedly unfolded state may form a folding initiation site and guide the subsequent folding reactions.
The duration of the folding process varies dramatically depending on the protein of interest. When studied outside the cell, the slowest folding proteins require many minutes or hours to fold, primarily due to proline isomerization, and must pass through a number of intermediate states, like checkpoints, before the process is complete. On the other hand, very small single-domain proteins with lengths of up to a hundred amino acids typically fold in a single step. Time scales of milliseconds are the norm, and the fastest known protein folding reactions are complete within a few microseconds. The folding time scale of a protein depends on its size, contact order, and circuit topology.
Understanding and simulating the protein folding process has been an important challenge for computational biology since the late 1960s.
Process of protein folding
Primary structure
The primary structure of a protein, its linear amino-acid sequence, determines its native conformation. The specific amino acid residues and their position in the polypeptide chain are the determining factors for which portions of the protein fold closely together and form its three-dimensional conformation. The amino acid composition is not as important as the sequence. The essential fact of folding, however, remains that the amino acid sequence of each protein contains the information that specifies both the native structure and the pathway to attain that state. This is not to say that nearly identical amino acid sequences always fold similarly. Conformations differ based on environmental factors as well; similar proteins fold differently based on where they are found.
Secondary structure
Formation of a secondary structure is the first step in the folding process that a protein takes to assume its native structure. Characteristic of secondary structure are the structures known as alpha helices and beta sheets, which fold rapidly because they are stabilized by intramolecular hydrogen bonds, as was first characterized by Linus Pauling. Formation of intramolecular hydrogen bonds provides another important contribution to protein stability. α-Helices are formed by hydrogen bonding of the backbone to form a spiral shape. The β pleated sheet is a structure that forms with the backbone bending over itself to form the hydrogen bonds. The hydrogen bonds are between the amide hydrogen and the carbonyl oxygen of the peptide bond. There exist both antiparallel and parallel β pleated sheets; the hydrogen bonds are more stable in the antiparallel β sheet, as they form at the ideal 180-degree angle, compared with the slanted hydrogen bonds formed by parallel sheets.
Tertiary structure
The α-helices and β-sheets are commonly amphipathic, meaning they have a hydrophilic and a hydrophobic portion. This property helps in forming the tertiary structure of a protein, in which folding occurs so that the hydrophilic sides face the aqueous environment surrounding the protein and the hydrophobic sides face the hydrophobic core of the protein. Secondary structure hierarchically gives way to tertiary structure formation. Once the protein's tertiary structure is formed and stabilized by the hydrophobic interactions, there may also be covalent bonding in the form of disulfide bridges formed between two cysteine residues. These non-covalent and covalent contacts take a specific topological arrangement in the native structure of a protein. The tertiary structure of a protein involves a single polypeptide chain; additional interactions of folded polypeptide chains give rise to quaternary structure formation.
Quaternary structure
Tertiary structure may give way to the formation of quaternary structure in some proteins, which usually involves the "assembly" or "coassembly" of subunits that have already folded; in other words, multiple polypeptide chains could interact to form a fully functional quaternary protein.
Driving forces of protein folding
Folding is a spontaneous process that is mainly guided by hydrophobic interactions, formation of intramolecular hydrogen bonds, van der Waals forces, and it is opposed by conformational entropy. The folding time scale of an isolated protein depends on its size, contact order, and circuit topology. Inside cells, the process of folding often begins co-translationally, so that the N-terminus of the protein begins to fold while the C-terminal portion of the protein is still being synthesized by the ribosome; however, a protein molecule may fold spontaneously during or after biosynthesis. While these macromolecules may be regarded as "folding themselves", the process also depends on the solvent (water or lipid bilayer), the concentration of salts, the pH, the temperature, the possible presence of cofactors and of molecular chaperones.
Proteins are limited in their folding by the restricted bending angles or conformations that are possible. These allowable angles of protein folding are described with a two-dimensional plot known as the Ramachandran plot, depicting the psi and phi angles of allowable rotation.
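A Ramachandran plot is built by computing, for each residue, the backbone dihedral angles phi (C(i−1)–N(i)–Cα(i)–C(i)) and psi (N(i)–Cα(i)–C(i)–N(i+1)). Below is a minimal sketch of the underlying dihedral calculation, taking four atom positions as NumPy arrays; the coordinates in the example are purely hypothetical.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle in degrees defined by four atom positions."""
    b1 = p1 - p0
    b2 = p2 - p1
    b3 = p3 - p2
    n1 = np.cross(b1, b2)                      # normal to the first plane
    n2 = np.cross(b2, b3)                      # normal to the second plane
    m1 = np.cross(n1, b2 / np.linalg.norm(b2)) # frame vector for the sign
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

# Hypothetical coordinates for C(i-1), N(i), CA(i), C(i):
atoms = [np.array(p, dtype=float) for p in
         [(1.0, 0.0, 0.0), (0.0, 0.0, 0.0),
          (0.0, 1.5, 0.0), (0.5, 2.0, 1.0)]]
phi = dihedral(*atoms)
print(f"phi = {phi:.1f} degrees")
```

Plotting phi against psi for every residue of a folded protein yields the familiar clustered regions corresponding to α-helices and β-sheets.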
Hydrophobic effect
Protein folding must be thermodynamically favorable within a cell in order for it to be a spontaneous reaction. Since protein folding is known to be a spontaneous reaction, it must assume a negative Gibbs free energy value. Gibbs free energy in protein folding is directly related to enthalpy and entropy. For a negative ΔG to arise, and for protein folding to be thermodynamically favorable, either the enthalpy term, the entropy term, or both must be favorable.
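In symbols, this is the standard thermodynamic relation (stated here for concreteness, not quoted from the source):

```latex
\Delta G = \Delta H - T\,\Delta S
% Folding is spontaneous when \Delta G < 0: a sufficiently negative
% \Delta H (favorable bonding), a sufficiently positive \Delta S
% (favorable entropy), or both, must dominate at temperature T.
```

The hydrophobic effect described next is largely an entropic contribution: releasing ordered water molecules from around hydrophobic side chains makes the entropy change of the system more positive.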
Minimizing the number of hydrophobic side-chains exposed to water is an important driving force behind the folding process. The hydrophobic effect is the phenomenon in which the hydrophobic chains of a protein collapse into the core of the protein (away from the hydrophilic environment). In an aqueous environment, the water molecules tend to aggregate around the hydrophobic regions or side chains of the protein, creating water shells of ordered water molecules. An ordering of water molecules around a hydrophobic region increases order in the system and therefore contributes a negative change in entropy (less entropy in the system). The water molecules are fixed in these water cages, which drives the hydrophobic collapse, or the inward folding of the hydrophobic groups. The hydrophobic collapse introduces entropy back into the system via the breaking of the water cages, which frees the ordered water molecules. The multitude of hydrophobic groups interacting within the core of the globular folded protein contributes a significant amount to protein stability after folding, because of the vastly accumulated van der Waals forces (specifically London dispersion forces). The hydrophobic effect exists as a driving force in thermodynamics only in the presence of an aqueous medium with an amphiphilic molecule containing a large hydrophobic region. The strength of hydrogen bonds depends on their environment; thus, H-bonds enveloped in a hydrophobic core contribute more to the stability of the native state than H-bonds exposed to the aqueous environment.
In proteins with globular folds, hydrophobic amino acids tend to be interspersed along the primary sequence, rather than randomly distributed or clustered together. However, proteins that have recently been born de novo, which tend to be intrinsically disordered, show the opposite pattern of hydrophobic amino acid clustering along the primary sequence.
Chaperones
Molecular chaperones are a class of proteins that aid in the correct folding of other proteins in vivo. Chaperones exist in all cellular compartments and interact with the polypeptide chain in order to allow the native three-dimensional conformation of the protein to form; however, chaperones themselves are not included in the final structure of the protein they are assisting. Chaperones may assist in folding even while the nascent polypeptide is being synthesized by the ribosome. Molecular chaperones operate by binding to stabilize an otherwise unstable structure of a protein in its folding pathway, but chaperones do not contain the information necessary to specify the correct native structure of the protein they are aiding; rather, chaperones work by preventing incorrect folding conformations. In this way, chaperones do not actually increase the rate of individual steps involved in the folding pathway toward the native structure; instead, they work by reducing possible unwanted aggregations of the polypeptide chain that might otherwise slow down the search for the proper intermediate, and they provide a more efficient pathway for the polypeptide chain to assume the correct conformations. Chaperones are not to be confused with folding catalysts, which catalyze chemical reactions responsible for slow steps in folding pathways. Examples of folding catalysts are protein disulfide isomerases and peptidyl-prolyl isomerases, which may be involved in the formation of disulfide bonds and the interconversion between cis and trans stereoisomers of peptide groups, respectively. Chaperones have been shown to be critical in the process of protein folding in vivo because they provide the protein with the aid needed to assume its proper alignments and conformations efficiently enough to become "biologically relevant". This means that the polypeptide chain could theoretically fold into its native structure without the aid of chaperones, as demonstrated by protein folding experiments conducted in vitro; however, this process proves to be too inefficient or too slow to exist in biological systems; therefore, chaperones are necessary for protein folding in vivo. Along with their role in aiding native-structure formation, chaperones have been shown to be involved in various roles such as protein transport and degradation, and they even allow denatured proteins exposed to certain external denaturant factors an opportunity to refold into their correct native structures.
A fully denatured protein lacks both tertiary and secondary structure, and exists as a so-called random coil. Under certain conditions some proteins can refold; however, in many cases, denaturation is irreversible. Cells sometimes protect their proteins against the denaturing influence of heat with enzymes known as heat shock proteins (a type of chaperone), which assist other proteins both in folding and in remaining folded. Heat shock proteins have been found in all species examined, from bacteria to humans, suggesting that they evolved very early and have an important function. Some proteins never fold in cells at all except with the assistance of chaperones which either isolate individual proteins so that their folding is not interrupted by interactions with other proteins or help to unfold misfolded proteins, allowing them to refold into the correct native structure. This function is crucial to prevent the risk of precipitation into insoluble amorphous aggregates. The external factors involved in protein denaturation or disruption of the native state include temperature, external fields (electric, magnetic), molecular crowding, and even the limitation of space (i.e. confinement), which can have a big influence on the folding of proteins. High concentrations of solutes, extremes of pH, mechanical forces, and the presence of chemical denaturants can contribute to protein denaturation, as well. These individual factors are categorized together as stresses. Chaperones are shown to exist in increasing concentrations during times of cellular stress and help the proper folding of emerging proteins as well as denatured or misfolded ones.
Under some conditions proteins will not fold into their biochemically functional forms. Temperatures above or below the range that cells tend to live in will cause thermally unstable proteins to unfold or denature (this is why boiling makes an egg white turn opaque). Protein thermal stability is far from constant, however; for example, hyperthermophilic bacteria have been found that grow at temperatures as high as 122 °C, which of course requires that their full complement of vital proteins and protein assemblies be stable at that temperature or above.
The bacterium E. coli is the host for bacteriophage T4, and the phage-encoded gp31 protein appears to be structurally and functionally homologous to the E. coli chaperone protein GroES and able to substitute for it in the assembly of bacteriophage T4 virus particles during infection. Like GroES, gp31 forms a stable complex with the GroEL chaperonin that is absolutely necessary for the folding and assembly in vivo of the bacteriophage T4 major capsid protein gp23.
Fold switching
Some proteins have multiple native structures, and change their fold based on some external factors. For example, the KaiB protein switches fold throughout the day, acting as a clock for cyanobacteria. It has been estimated that around 0.5–4% of PDB (Protein Data Bank) proteins switch folds.
Protein misfolding and neurodegenerative disease
A protein is considered to be misfolded if it cannot achieve its normal native state. This can be due to mutations in the amino acid sequence or a disruption of the normal folding process by external factors. The misfolded protein typically contains β-sheets that are organized in a supramolecular arrangement known as a cross-β structure. These β-sheet-rich assemblies are very stable, very insoluble, and generally resistant to proteolysis. The structural stability of these fibrillar assemblies is caused by extensive interactions between the protein monomers, formed by backbone hydrogen bonds between their β-strands. The misfolding of proteins can trigger the further misfolding and accumulation of other proteins into aggregates or oligomers. The increased levels of aggregated proteins in the cell lead to the formation of amyloid-like structures, which can cause degenerative disorders and cell death. The amyloids are fibrillar structures that contain intermolecular hydrogen bonds, are highly insoluble, and are made from converted protein aggregates. The proteasome pathway may not be efficient enough to degrade the misfolded proteins prior to aggregation. Misfolded proteins can interact with one another, form structured aggregates, and gain toxicity through intermolecular interactions.
Aggregated proteins are associated with prion-related illnesses such as Creutzfeldt–Jakob disease and bovine spongiform encephalopathy (mad cow disease), amyloid-related illnesses such as Alzheimer's disease and familial amyloid cardiomyopathy or polyneuropathy, as well as intracellular aggregation diseases such as Huntington's and Parkinson's disease. These age-onset degenerative diseases are associated with the aggregation of misfolded proteins into insoluble, extracellular aggregates and/or intracellular inclusions, including cross-β amyloid fibrils. It is not completely clear whether the aggregates are the cause or merely a reflection of the loss of protein homeostasis, the balance between synthesis, folding, aggregation and protein turnover. Recently, the European Medicines Agency approved the use of tafamidis (Vyndaqel), a kinetic stabilizer of tetrameric transthyretin, for the treatment of transthyretin amyloid diseases. This suggests that the process of amyloid fibril formation (and not the fibrils themselves) causes the degeneration of post-mitotic tissue in human amyloid diseases. Misfolding and excessive degradation instead of folding and function lead to a number of proteopathy diseases such as antitrypsin-associated emphysema, cystic fibrosis and the lysosomal storage diseases, where loss of function is the origin of the disorder. While protein replacement therapy has historically been used to correct the latter disorders, an emerging approach is to use pharmaceutical chaperones to fold mutated proteins to render them functional.
Experimental techniques for studying protein folding
While inferences about protein folding can be made through mutation studies, typically, experimental techniques for studying protein folding rely on the gradual unfolding or folding of proteins and observing conformational changes using standard non-crystallographic techniques.
X-ray crystallography
X-ray crystallography is one of the more efficient and important methods for attempting to decipher the three-dimensional configuration of a folded protein. To be able to conduct X-ray crystallography, the protein under investigation must be located inside a crystal lattice. To place a protein inside a crystal lattice, one must have a suitable solvent for crystallization, obtain a pure protein at supersaturated levels in solution, and precipitate the crystals in solution. Once a protein is crystallized, X-ray beams can be concentrated through the crystal lattice, which diffracts the beams outwards in various directions. These exiting beams are correlated to the specific three-dimensional configuration of the protein enclosed within. The X-rays specifically interact with the electron clouds surrounding the individual atoms within the protein crystal lattice and produce a discernible diffraction pattern. The pattern can be interpreted only by relating the measured amplitudes of the diffracted beams to an electron density map, and this step is complicated by the fact that the phases, or phase angles, of the diffracted beams are not measured directly. Without the relation established through the mathematical basis of the Fourier transform, this "phase problem" would render predicting the electron density from the diffraction pattern very difficult. Methods such as multiple isomorphous replacement use the presence of a heavy metal ion to diffract the X-rays in a more predictable manner, reducing the number of variables involved and helping resolve the phase problem.
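The role of the phases can be made explicit: the electron density is a Fourier synthesis of the structure factors,

\[
\rho(x,y,z) = \frac{1}{V} \sum_{hkl} \left| F_{hkl} \right| e^{i\varphi_{hkl}} \, e^{-2\pi i (hx + ky + lz)},
\]

where the experiment measures only the amplitudes \(|F_{hkl}|\) (through the diffraction spot intensities) while the phases \(\varphi_{hkl}\) are not recorded; recovering them is the phase problem that methods such as multiple isomorphous replacement address.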
Fluorescence spectroscopy
Fluorescence spectroscopy is a highly sensitive method for studying the folding state of proteins. Three amino acids, phenylalanine (Phe), tyrosine (Tyr) and tryptophan (Trp), have intrinsic fluorescence properties, but only Tyr and Trp are used experimentally because their quantum yields are high enough to give good fluorescence signals. Both Trp and Tyr are excited by a wavelength of 280 nm, whereas only Trp is excited by a wavelength of 295 nm. Because of their aromatic character, Trp and Tyr residues are often found fully or partially buried in the hydrophobic core of proteins, at the interface between two protein domains, or at the interface between subunits of oligomeric proteins. In this apolar environment, they have high quantum yields and therefore high fluorescence intensities. Upon disruption of the protein's tertiary or quaternary structure, these side chains become more exposed to the hydrophilic environment of the solvent, and their quantum yields decrease, leading to low fluorescence intensities. For Trp residues, the wavelength of their maximal fluorescence emission also depends on their environment.
Fluorescence spectroscopy can be used to characterize the equilibrium unfolding of proteins by measuring the variation in the intensity of fluorescence emission or in the wavelength of maximal emission as functions of a denaturant value. The denaturant can be a chemical molecule (urea, guanidinium hydrochloride), temperature, pH, pressure, etc. The equilibrium between the different but discrete protein states, i.e. native state, intermediate states, unfolded state, depends on the denaturant value; therefore, the global fluorescence signal of their equilibrium mixture also depends on this value. One thus obtains a profile relating the global protein signal to the denaturant value. The profile of equilibrium unfolding may enable one to detect and identify intermediates of unfolding. General equations have been developed by Hugues Bedouelle to obtain the thermodynamic parameters that characterize the unfolding equilibria for homomeric or heteromeric proteins, up to trimers and potentially tetramers, from such profiles. Fluorescence spectroscopy can be combined with fast-mixing devices such as stopped flow to measure protein folding kinetics, generate a chevron plot, and perform a phi-value analysis.
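As a sketch of how such a profile is commonly analyzed, the following assumes the simplest two-state (native to unfolded) model with the linear extrapolation relation ΔG([D]) = ΔG(H2O) − m·[D]; the data, baselines, and parameter values are placeholders, and real analyses (including the multi-state equations mentioned above) are more involved:

```python
import numpy as np
from scipy.optimize import curve_fit

R, T = 8.314e-3, 298.0  # gas constant (kJ/mol/K) and temperature (K)

def two_state(d, dG_h2o, m, y_n, y_u):
    """Observed signal vs. denaturant for a two-state unfolding transition."""
    dG = dG_h2o - m * d                # unfolding free energy at denaturant d (kJ/mol)
    K = np.exp(-dG / (R * T))          # equilibrium constant for unfolding
    f_u = K / (1.0 + K)                # fraction of unfolded protein
    return y_n + (y_u - y_n) * f_u     # signal interpolates between the two baselines

# Placeholder "measurements": denaturant (M) and fluorescence signal (arb. units)
d_obs = np.linspace(0.0, 8.0, 17)
y_obs = two_state(d_obs, 20.0, 5.0, 1.0, 0.2) + np.random.normal(0.0, 0.01, d_obs.size)

popt, _ = curve_fit(two_state, d_obs, y_obs, p0=[15.0, 4.0, 1.0, 0.2])
print(f"dG(H2O) = {popt[0]:.1f} kJ/mol, m = {popt[1]:.1f} kJ/mol/M")
```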
Circular dichroism
Circular dichroism is one of the most general and basic tools to study protein folding. Circular dichroism spectroscopy measures the absorption of circularly polarized light. In proteins, structures such as alpha helices and beta sheets are chiral, and thus absorb such light. The absorption of this light acts as a marker of the degree of foldedness of the protein ensemble. This technique has been used to measure equilibrium unfolding of the protein by measuring the change in this absorption as a function of denaturant concentration or temperature. A denaturant melt measures the free energy of unfolding as well as the protein's m value, or denaturant dependence. A temperature melt measures the denaturation temperature (Tm) of the protein. As for fluorescence spectroscopy, circular-dichroism spectroscopy can be combined with fast-mixing devices such as stopped flow to measure protein folding kinetics and to generate chevron plots.
Vibrational circular dichroism of proteins
The more recent developments of vibrational circular dichroism (VCD) techniques for proteins, currently involving Fourier transform (FT) instruments, provide powerful means for determining protein conformations in solution even for very large protein molecules. Such VCD studies of proteins can be combined with X-ray diffraction data for protein crystals, FT-IR data for protein solutions in heavy water (D2O), or quantum computations.
Protein nuclear magnetic resonance spectroscopy
Protein nuclear magnetic resonance (NMR) spectroscopy collects protein structural data by applying a magnetic field to samples of concentrated protein. In NMR, depending on the chemical environment, certain nuclei absorb specific radio frequencies. Because protein structural changes operate on time scales from nanoseconds to milliseconds, NMR is especially well equipped to study intermediate structures on timescales of picoseconds to seconds. Some of the main techniques for studying protein structure and non-folding structural changes include COSY, TOCSY, HSQC, time relaxation (T1 and T2), and NOE. NOE is especially useful because magnetization transfer can be observed between spatially proximal hydrogens. Different NMR experiments have varying degrees of timescale sensitivity that are appropriate for different protein structural changes. NOE can pick up bond vibrations or side-chain rotations; however, NOE cannot capture protein folding itself, which occurs on longer timescales.
Because protein folding takes place at rates of about 50 to 3,000 s⁻¹, CPMG relaxation dispersion and chemical exchange saturation transfer have become some of the primary techniques for NMR analysis of folding. In addition, both techniques are used to uncover excited intermediate states in the protein folding landscape. To do this, CPMG relaxation dispersion takes advantage of the spin echo phenomenon. This technique exposes the target nuclei to a 90° pulse followed by one or more 180° pulses. As the nuclei refocus, a broad distribution indicates that the target nuclei are involved in an intermediate excited state. By looking at relaxation dispersion plots, one can extract information on the thermodynamics and kinetics of exchange between the excited and ground states. Saturation transfer measures changes in signal from the ground state as excited states become perturbed. It uses weak radio-frequency irradiation to saturate the excited state of a particular nucleus, which transfers its saturation to the ground state. This signal is amplified by decreasing the magnetization (and the signal) of the ground state.
The main limitation of NMR is that its resolution decreases for proteins larger than 25 kDa, and it is not as detailed as X-ray crystallography. Additionally, protein NMR analysis is quite difficult and can propose multiple solutions from the same NMR spectrum.
In a study focused on the folding of SOD1, a protein implicated in amyotrophic lateral sclerosis, excited intermediates were studied with relaxation dispersion and saturation transfer. SOD1 had previously been tied to many disease-causing mutants which were assumed to be involved in protein aggregation, but the mechanism was still unknown. Relaxation dispersion and saturation transfer experiments uncovered many excited intermediate states that misfold in the SOD1 mutants.
Dual-polarization interferometry
Dual-polarization interferometry is a surface-based technique for measuring the optical properties of molecular layers. When used to characterize protein folding, it measures the conformation by determining the overall size of a monolayer of the protein and its density in real time at sub-angstrom resolution, although real-time measurement of the kinetics of protein folding is limited to processes that occur slower than ~10 Hz. Similar to circular dichroism, the stimulus for folding can be a denaturant or temperature.
Studies of folding with high time resolution
The study of protein folding has been greatly advanced in recent years by the development of fast, time-resolved techniques. Experimenters rapidly trigger the folding of a sample of unfolded protein and observe the resulting dynamics. Fast techniques in use include neutron scattering, ultrafast mixing of solutions, photochemical methods, and laser temperature jump spectroscopy. Among the many scientists who have contributed to the development of these techniques are Jeremy Cook, Heinrich Roder, Terry Oas, Harry Gray, Martin Gruebele, Brian Dyer, William Eaton, Sheena Radford, Chris Dobson, Alan Fersht, Bengt Nölting and Lars Konermann.
Proteolysis
Proteolysis is routinely used to probe the fraction unfolded under a wide range of solution conditions, e.g. by fast parallel proteolysis (FASTpp).
Single-molecule force spectroscopy
Single-molecule techniques such as optical tweezers and AFM have been used to understand protein folding mechanisms of isolated proteins as well as proteins with chaperones. Optical tweezers have been used to stretch single protein molecules from their C- and N-termini and unfold them to allow study of the subsequent refolding. The technique allows one to measure folding rates at the single-molecule level; for example, optical tweezers have recently been applied to study the folding and unfolding of proteins involved in blood coagulation. von Willebrand factor (vWF) is a protein with an essential role in the blood clot formation process. It was discovered, using single-molecule optical tweezers measurements, that calcium-bound vWF acts as a shear-force sensor in the blood. Shear force leads to unfolding of the A2 domain of vWF, whose refolding rate is dramatically enhanced in the presence of calcium. Recently, it was also shown that the simple src SH3 domain accesses multiple unfolding pathways under force.
Biotin painting
Biotin painting enables condition-specific cellular snapshots of (un)folded proteins. Biotin "painting" shows a bias towards predicted intrinsically disordered proteins.
Computational studies of protein folding
Computational studies of protein folding include three main aspects related to the prediction of protein stability, kinetics, and structure. A 2013 review summarizes the available computational methods for protein folding.
Levinthal's paradox
In 1969, Cyrus Levinthal noted that, because of the very large number of degrees of freedom in an unfolded polypeptide chain, the molecule has an astronomical number of possible conformations. An estimate of 3^300 (about 10^143) was made in one of his papers. Levinthal's paradox is a thought experiment based on the observation that if a protein were folded by sequential sampling of all possible conformations, it would take an astronomical amount of time to do so, even if the conformations were sampled at a rapid rate (on the nanosecond or picosecond scale). Based upon the observation that proteins fold much faster than this, Levinthal then proposed that a random conformational search does not occur, and the protein must, therefore, fold through a series of meta-stable intermediate states.
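The arithmetic behind the paradox is easy to reproduce with the figures quoted above:

```python
import math

conformations = 3 ** 300   # Levinthal's estimate of possible conformations (~10^143)
rate = 1e12                # one conformation sampled per picosecond
seconds = conformations / rate

print(f"log10(conformations) = {math.log10(conformations):.0f}")       # ~143
print(f"exhaustive search would take ~1e{math.log10(seconds):.0f} s")  # vs ~4e17 s age of the universe
```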
Energy landscape of protein folding
The configuration space of a protein during folding can be visualized as an energy landscape. According to Joseph Bryngelson and Peter Wolynes, proteins follow the principle of minimal frustration, meaning that naturally evolved proteins have optimized their folding energy landscapes, and that nature has chosen amino acid sequences so that the folded state of the protein is sufficiently stable. In addition, the acquisition of the folded state had to become a sufficiently fast process. Even though nature has reduced the level of frustration in proteins, some degree of frustration remains, as can be observed in the presence of local minima in the energy landscape of proteins.
A consequence of these evolutionarily selected sequences is that proteins are generally thought to have globally "funneled energy landscapes" (a term coined by José Onuchic) that are largely directed toward the native state. This "folding funnel" landscape allows the protein to fold to the native state through any of a large number of pathways and intermediates, rather than being restricted to a single mechanism. The theory is supported by both computational simulations of model proteins and experimental studies, and it has been used to improve methods for protein structure prediction and design. The description of protein folding by the leveling free-energy landscape is also consistent with the 2nd law of thermodynamics. Physically, thinking of landscapes in terms of visualizable potential or total energy surfaces simply with maxima, saddle points, minima, and funnels, rather like geographic landscapes, is perhaps a little misleading. The relevant description is really a high-dimensional phase space in which manifolds might take a variety of more complicated topological forms.
The unfolded polypeptide chain begins at the top of the funnel where it may assume the largest number of unfolded variations and is in its highest energy state. Energy landscapes such as these indicate that there are a large number of initial possibilities, but only a single native state is possible; however, it does not reveal the numerous folding pathways that are possible. A different molecule of the same exact protein may be able to follow marginally different folding pathways, seeking different lower energy intermediates, as long as the same native structure is reached. Different pathways may have different frequencies of utilization depending on the thermodynamic favorability of each pathway. This means that if one pathway is found to be more thermodynamically favorable than another, it is likely to be used more frequently in the pursuit of the native structure. As the protein begins to fold and assume its various conformations, it always seeks a more thermodynamically favorable structure than before and thus continues through the energy funnel. Formation of secondary structures is a strong indication of increased stability within the protein, and only one combination of secondary structures assumed by the polypeptide backbone will have the lowest energy and therefore be present in the native state of the protein. Among the first structures to form once the polypeptide begins to fold are alpha helices and beta turns, where alpha helices can form in as little as 100 nanoseconds and beta turns in 1 microsecond.
There exists a saddle point in the energy funnel landscape where the transition state for a particular protein is found. The transition state in the energy funnel diagram is the conformation that must be assumed by every molecule of that protein for it to finally reach the native structure. No protein may assume the native structure without first passing through the transition state. The transition state can be referred to as a variant or premature form of the native state rather than just another intermediary step. The folding of the transition state is shown to be rate-determining, and even though it exists in a higher energy state than the native fold, it greatly resembles the native structure. Within the transition state, there exists a nucleus around which the protein is able to fold, formed by a process referred to as "nucleation condensation", in which the structure begins to collapse onto the nucleus.
Modeling of protein folding
De novo or ab initio techniques for computational protein structure prediction can be used for simulating various aspects of protein folding. Molecular dynamics (MD) has been used in simulations of protein folding and dynamics in silico. The first equilibrium folding simulations were done using an implicit solvent model and umbrella sampling. Because of computational cost, ab initio MD folding simulations with explicit water are limited to peptides and small proteins. MD simulations of larger proteins remain restricted to dynamics of the experimental structure or its high-temperature unfolding. Long-time folding processes (beyond about 1 millisecond), like the folding of larger proteins (>150 residues), can be accessed using coarse-grained models.
Several large-scale computational projects, such as Rosetta@home, Folding@home and Foldit, target protein folding.
Long continuous-trajectory simulations have been performed on Anton, a massively parallel supercomputer designed and built around custom ASICs and interconnects by D. E. Shaw Research. The longest published result of a simulation performed using Anton as of 2011 was a 2.936-millisecond simulation of NTL9 at 355 K. Such simulations are currently able to unfold and refold small proteins (<150 amino acid residues) in equilibrium and predict how mutations affect folding kinetics and stability.
In 2020, a team of researchers using AlphaFold, an artificial intelligence (AI) protein structure prediction program developed by DeepMind, placed first in CASP, a long-standing structure prediction contest. The team achieved a level of accuracy much higher than any other group. It scored above 90% for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures the degree of similarity between the structure predicted by a computational program and the empirical structure determined experimentally in a lab. A score of 100 is considered a complete match, within the distance cutoff used for calculating GDT.
AlphaFold's protein structure prediction results at CASP were described as "transformational" and "astounding". Some researchers noted that the accuracy was not high enough for a third of its predictions, and that the method does not reveal the physical mechanism of folding, so the protein folding problem cannot be considered solved. Nevertheless, it is considered a significant achievement in computational biology and great progress towards a decades-old grand challenge of biology, predicting the structure of proteins.
| Biology and health sciences | Molecular biology | Biology |
52124 | https://en.wikipedia.org/wiki/VHS | VHS | VHS (Video Home System) is a standard for consumer-level analog video recording on tape cassettes, introduced in 1976 by the Victor Company of Japan (JVC). It was the dominant home video format throughout the tape media period in the 1980s and 1990s.
Magnetic tape video recording was adopted by the television industry in the 1950s in the form of the first commercialized video tape recorders (VTRs), but the devices were expensive and used only in professional environments. In the 1970s, videotape technology became affordable for home use, and widespread adoption of videocassette recorders (VCRs) began; VHS became the most popular media format for VCRs after winning the "format war" against Betamax (backed by Sony) and a number of other competing tape standards.
The cassettes themselves use a 0.5-inch magnetic tape between two spools and typically offer a capacity of at least two hours. The popularity of VHS was intertwined with the rise of the video rental market, when films were released on pre-recorded videotapes for home viewing. Newer improved tape formats such as S-VHS were later developed, as well as the earliest optical disc format, LaserDisc; the lack of global adoption of these formats increased VHS's lifetime, which eventually peaked and started to decline in the late 1990s after the introduction of DVD, a digital optical disc format. VHS rentals were surpassed by DVD in the United States in 2003, which eventually became the preferred low-end method of movie distribution. For home recording purposes, VHS and VCRs were surpassed by (typically hard disk–based) digital video recorders (DVR) in the 2000s.
History
Before VHS
In 1956, after several attempts by other companies, the first commercially successful VTR, the Ampex VRX-1000, was introduced by Ampex Corporation. At a price of US$50,000 in 1956 and US$300 for a 90-minute reel of tape, it was intended only for the professional market.
Kenjiro Takayanagi, a television broadcasting pioneer then working for JVC as its vice president, saw the need for his company to produce VTRs for the Japanese market at a more affordable price. In 1959, JVC developed a two-head video tape recorder and, by 1960, a color version for professional broadcasting. In 1964, JVC released the DV220, which would be the company's standard VTR until the mid-1970s.
In 1969, JVC collaborated with Sony Corporation and Matsushita Electric (Matsushita was the majority stockholder of JVC until 2011) to build a video recording standard for the Japanese consumer. The effort produced the U-matic format in 1971, which was the first cassette format to become a unified standard for different companies. It was preceded by the half-inch reel-to-reel EIAJ format.
The U-matic format was successful in businesses and some broadcast television applications, such as electronic news-gathering, and was produced by all three companies until the late 1980s, but because of cost and limited recording time, very few of the machines were sold for home use. Therefore, soon after the U-matic release, all three companies started working on new consumer-grade video recording formats of their own. Sony started working on Betamax, Matsushita started working on VX, and JVC released the CR-6060 in 1975, based on the U-matic format.
VHS development
In 1971, JVC engineers Yuma Shiraishi and Shizuo Takano put together a team to develop a VTR for consumers.
By the end of 1971, they created an internal diagram, "VHS Development Matrix", which established twelve objectives for JVC's new VTR:
The system must be compatible with any ordinary television set.
Picture quality must be similar to a normal air broadcast.
The tape must have at least a two-hour recording capacity.
Tapes must be interchangeable between machines.
The overall system should be versatile, meaning it can be scaled and expanded, such as connecting a video camera, or dubbing between two recorders.
Recorders should be affordable, easy to operate, and have low maintenance costs.
Recorders must be capable of being produced in high volume, their parts must be interchangeable, and they must be easy to service.
In early 1972, the commercial video recording industry in Japan took a financial hit. JVC cut its budgets and restructured its video division, shelving the VHS project. However, despite the lack of funding, Takano and Shiraishi continued to work on the project in secret. By 1973, the two engineers had produced a functional prototype.
Competition with Betamax
In 1974, the Japanese Ministry of International Trade and Industry (MITI), desiring to avoid consumer confusion, attempted to force the Japanese video industry to standardize on just one home video recording format. Later, Sony had a functional prototype of the Betamax format, and was very close to releasing a finished product. With this prototype, Sony persuaded the MITI to adopt Betamax as the standard, and allow it to license the technology to other companies.
JVC believed that an open standard, with the format shared among competitors without licensing the technology, was better for the consumer. To prevent the MITI from adopting Betamax, JVC worked to convince other companies, in particular Matsushita (Japan's largest electronics manufacturer at the time, marketing its products under the National brand in most territories and the Panasonic brand in North America, and JVC's majority stockholder), to accept VHS, and thereby work against Sony and the MITI. Matsushita agreed, primarily out of concern that Sony might become the leader in the field if its proprietary Betamax format was the only one allowed to be manufactured. Matsushita also regarded Betamax's one-hour recording time limit as a disadvantage.
Matsushita's backing of JVC persuaded Hitachi, Mitsubishi, and Sharp to back the VHS standard as well. Sony's release of its Betamax unit to the Japanese market in 1975 placed further pressure on the MITI to side with the company. However, the collaboration of JVC and its partners was much stronger, which eventually led the MITI to drop its push for an industry standard. JVC released the first VHS machines in Japan in late 1976, and in the United States in mid-1977.
Sony's Betamax competed with VHS throughout the late 1970s and into the 1980s (see Videotape format war). Betamax's major advantages were its smaller cassette size, theoretical higher video quality, and earlier availability, but its shorter recording time proved to be a major shortcoming.
Originally, Beta I machines using the NTSC television standard were able to record one hour of programming at their standard tape speed of 1.5 inches per second (ips). The first VHS machines could record for two hours, due to both a slightly slower tape speed (1.31 ips) and significantly longer tape. Betamax's smaller cassette limited the size of the reel of tape, and could not compete with VHS's two-hour capability by extending the tape length. Instead, Sony had to slow the tape down to 0.787 ips (Beta II) in order to achieve two hours of recording in the same cassette size. Sony eventually created a Beta III speed of 0.524 ips, which allowed NTSC Betamax to break the two-hour limit, but by then VHS had already won the format battle.
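Since recording time on a fixed length of tape scales inversely with linear tape speed, the Beta II and Beta III figures follow directly from the Beta I baseline; a quick check of the numbers above:

```python
# Recording time scales inversely with linear tape speed for a fixed tape length.
beta_i_speed, beta_i_minutes = 1.5, 60.0
for name, speed in (("Beta I", 1.5), ("Beta II", 0.787), ("Beta III", 0.524)):
    minutes = beta_i_minutes * beta_i_speed / speed
    print(f"{name}: {minutes:5.1f} min at {speed} ips")
# Beta I: 60.0 min; Beta II: ~114.4 min (~2 h); Beta III: ~171.8 min (~3 h)
```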
Additionally, VHS had a "far less complex tape transport mechanism" than Betamax, and VHS machines were faster at rewinding and fast-forwarding than their Sony counterparts.
VHS eventually won the war, gaining 60% of the North American market by 1980.
Initial releases of VHS-based devices
The first VCR to use VHS was the Victor HR-3300, and was introduced by the president of JVC in Japan on September 9, 1976. JVC started selling the HR-3300 in Akihabara, Tokyo, Japan, on October 31, 1976. Region-specific versions of the JVC HR-3300 were also distributed later on, such as the HR-3300U in the United States, and the HR-3300EK in the United Kingdom. The United States received its first VHS-based VCR, the RCA VBT200, on August 23, 1977. The RCA unit was designed by Matsushita and was the first VHS-based VCR manufactured by a company other than JVC. It was also capable of recording four hours in LP (long play) mode. The UK received its first VHS-based VCR, the Victor HR-3300EK, in 1978.
Quasar and General Electric followed up with VHS-based VCRs – all designed by Matsushita. By 1999, Matsushita alone produced just over half of all Japanese VCRs. TV/VCR combos, combining a TV set with a VHS mechanism, were also once available for purchase. Combo units containing both a VHS mechanism and a DVD player were introduced in the late 1990s, and at least one combo unit, the Panasonic DMP-BD70V, included a Blu-ray player.
Technical details
VHS has been standardized in IEC 60774–1.
Cassette and tape design
The VHS cassette is a 187 mm wide, 103 mm deep, and 25 mm thick (approximately 7+3/8 × 4+1/16 × 1 inch) plastic shell held together with five Phillips-head screws. The flip-up cover, which allows players and recorders to access the tape, has a latch on the right side, with a push-in toggle to release it. The cassette has an anti-despooling mechanism, consisting of several plastic parts between the spools, near the front of the cassette. The spool latches are released by a push-in lever within a 6.35 mm (1/4 inch) hole at the bottom of the cassette, 19 mm (3/4 inch) in from the edge label. The tapes are made, pre-recorded, and inserted into the cassettes in cleanrooms, to ensure quality and to keep dust from getting embedded in the tape and interfering with recording (both of which could cause signal dropouts).
There is a clear tape leader at both ends of the tape to provide an optical auto-stop for the VCR transport mechanism. In the VCR, a light source is inserted into the cassette through the circular hole in the center of the underside, and two photodiodes are on the left and right sides of where the tape exits the cassette. When the clear tape reaches one of these, enough light will pass through the tape to the photodiode to trigger the stop function; some VCRs automatically rewind the tape when the trailing end is detected. Early VCRs used an incandescent bulb as the light source: when the bulb failed, the VCR would act as if a tape were present when the machine was empty, or would detect the blown bulb and completely stop functioning. Later designs use an infrared LED, which has a much longer life.
The recording medium is a Mylar magnetic tape, 12.7 mm (1/2 inch) wide, coated with metal oxide, and wound on two spools.
The tape speed for "Standard Play" mode (see below) is 3.335 cm/s (1.313 ips) for NTSC, 2.339 cm/s (0.921 ips) for PAL—or just over 2.0 and 1.4 metres (6 ft 6.7 in and 4 ft 7.2 in) per minute respectively. The tape length for a T-120 VHS cassette is 247.5 metres (812 ft).
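These figures can be cross-checked, since playing time is simply tape length divided by linear tape speed:

```python
tape_m  = 247.5     # T-120 tape length in meters (from above)
ntsc_sp = 0.03335   # NTSC SP linear tape speed (m/s)
pal_sp  = 0.02339   # PAL SP linear tape speed (m/s)
print(f"NTSC SP: {tape_m / ntsc_sp / 60:.1f} min")  # ~123.7 min (nominal 120)
print(f"PAL SP:  {tape_m / pal_sp / 60:.1f} min")   # ~176.4 min
```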
Tape loading technique
As with almost all cassette-based videotape systems, VHS machines pull the tape out of the cassette shell and wrap it around the inclined head drum, which rotates at 1,800 rpm in NTSC machines and at 1,500 rpm for PAL, one complete rotation of the head corresponding to one video frame. VHS uses an "M-loading" system, also known as M-lacing, where the tape is drawn out by two threading posts and wrapped around more than 180 degrees of the head drum (and also other tape transport components) in a shape roughly approximating the letter M. The heads in the rotating drum get their signal wirelessly using a rotary transformer.
Recording capacity
A VHS cassette holds a maximum of about 430 m (1,410 ft) of tape at the lowest acceptable tape thickness, giving a maximum playing time of about four hours in a T-240/DF480 for NTSC and five hours in an E-300 for PAL at "standard play" (SP) quality. More frequently, however, VHS tapes are thicker than the required minimum to avoid complications such as jams or tears in the tape. Other speeds include "long play" (LP), "extended play" (EP) or "super long play" (SLP) (standard on NTSC; rarely found on PAL machines). For NTSC, LP and EP/SLP double and triple the recording time accordingly, but these speed reductions cause a reduction in horizontal resolution – from the normal equivalent of 250 vertical lines in SP, to the equivalent of 230 in LP and even less in EP/SLP.
Due to the nature of recording diagonally from a spinning drum, the actual write speed of the video heads does not get slower when the tape speed is reduced. Instead, the video tracks become narrower and are packed closer together. This results in noisier playback that can be more difficult to track correctly: the effect of subtle misalignment is magnified by the narrower tracks. The heads for linear audio are not on the spinning drum, so for them, the tape speed from one reel to the other is the same as the speed of the heads across the tape. This speed is quite slow: for SP it is about two-thirds that of an audio cassette, and for EP it is slower than the slowest microcassette speed. This is widely considered inadequate for anything but basic voice playback, and was a major liability for VHS-C camcorders that encouraged the use of the EP speed. Color depth deteriorates significantly at lower speeds in PAL: often, a color image on a PAL tape recorded at low speed is displayed only in monochrome, or with intermittent color, when playback is paused.
Tape lengths
VHS cassettes for NTSC and PAL/SECAM systems are physically identical, although the signals recorded on the tape are incompatible. The tape speeds are different too, so the playing time for any given cassette will vary between the systems. To avoid confusion, manufacturers indicate the playing time in minutes that can be expected for the market the tape is sold in: E-XXX indicates playing time in minutes for PAL or SECAM. T-XXX indicates playing time in minutes for NTSC or PAL-M.
To calculate the playing time for a T-XXX tape in a PAL machine, this formula is used:
PAL/SECAM recording time = T-XXX in minutes × 1.426
To calculate the playing time for an E-XXX tape in an NTSC machine, this formula is used:
NTSC recording time = E-XXX in minutes × 0.701
Since the recording/playback time for PAL/SECAM is roughly 1/3 longer than the recording/playback time for NTSC, some tape manufacturers label their cassettes with both T-XXX and E-XXX marks, like T60/E90, T90/E120 and T120/E180.
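Expressed as code, the two conversions (and the reason for dual labels such as T120/E180) look like this:

```python
def pal_minutes(t_rating: float) -> float:
    """Approximate PAL/SECAM playing time of an NTSC-market T-XXX tape."""
    return t_rating * 1.426

def ntsc_minutes(e_rating: float) -> float:
    """Approximate NTSC playing time of a PAL-market E-XXX tape."""
    return e_rating * 0.701

print(f"T-120 in PAL:  {pal_minutes(120):.0f} min")   # ~171 min, close to an E-180's three hours
print(f"E-180 in NTSC: {ntsc_minutes(180):.0f} min")  # ~126 min, close to a T-120's two hours
```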
SP is standard play; LP is long play (1/2 speed, equal to the recording time of the DVHS "HS" mode); EP/SLP is extended/super long play (1/3 speed), which was primarily released into the NTSC market.
Copy protection
As VHS was designed to facilitate recording from various sources, including television broadcasts or other VCR units, content producers quickly found that home users were able to use the devices to copy videos from one tape to another. Despite generation loss in quality when a tape was copied, this practice was regarded as a widespread problem, which members of the Motion Picture Association of America (MPAA) claimed caused them great financial losses. In response, several companies developed technologies to protect copyrighted VHS tapes from casual duplication by home users. The most popular method was the Analog Protection System, better known simply as Macrovision, produced by a company of the same name. According to Macrovision: "The technology is applied to over 550 million videocassettes annually and is used by every MPAA movie studio on some or all of their videocassette releases. Over 220 commercial duplication facilities around the world are equipped to supply Macrovision videocassette copy protection to rights owners... The study found that over 30% of VCR households admit to having unauthorized copies, and that the total annual revenue loss due to copying is estimated at $370,000,000 annually."
The system was first used in copyrighted movies beginning with the 1984 film The Cotton Club.
Macrovision copy protection saw refinement throughout its years, but has always worked by essentially introducing deliberate errors into a protected VHS tape's output video stream. These errors in the output video stream are ignored by most televisions, but will interfere with re-recording of programming by a second VCR. The first version of Macrovision introduces high signal levels during the vertical blanking interval, which occurs between the video fields. These high levels confuse the automatic gain control circuit in most VHS VCRs, leading to varying brightness levels in an output video, but are ignored by the TV as they are out of the frame-display period. "Level II" Macrovision uses a process called "colorstriping", which inverts the analog signal's colorburst period and causes off-color bands to appear in the picture. Level III protection added additional colorstriping techniques to further degrade the image.
These protection methods worked well to defeat analog-to-analog copying by VCRs of the time. Consumer products capable of digital video recording are mandated by law to include features which detect Macrovision encoding of input analog streams and disrupt copying of the video. Both intentional and false-positive detection of Macrovision protection have frustrated archivists who wish to copy now-fragile VHS tapes to a digital format for preservation. As of the 2020s, modern software decoding ignores Macrovision, as software is not limited to the fixed standards that Macrovision was intended to disrupt in hardware-based systems.
Recording process
The recording process in VHS consists of the following steps, in this order:
The tape is pulled from the supply reel by a capstan and pinch roller, similar to those used in audio tape recorders.
The tape passes across the erase head, which wipes any existing recording from the tape.
The tape is wrapped around the head drum, using a little more than 180 degrees of the drum.
One of the heads on the spinning drum records one field of video onto the tape, in one diagonally oriented track.
The tape passes across the audio and control head, which records the control track and the linear audio tracks.
The tape is wound onto the take-up reel due to torque applied to the reel by the machine.
Erase head
The erase head is fed by a high-level, high-frequency AC signal that overwrites any previous recording on the tape. Without this step, the new recording cannot be guaranteed to completely replace any old recording that might have been on the tape.
Video recording
The tape path then carries the tape around the spinning video-head drum, wrapping it around a little more than 180 degrees (called the omega transport system) in a helical fashion, assisted by the slanted tape guides. The head rotates constantly at 1,798.2 rpm in NTSC machines and at exactly 1,500 rpm in PAL machines, each complete rotation corresponding to one frame of video.
Two tape heads are mounted on the cylindrical surface of the drum, 180 degrees apart from each other, so that the two heads "take turns" in recording. The rotation of the inclined head drum, combined with the relatively slow movement of the tape, results in each head recording a track oriented at a diagonal with respect to the length of the tape, with the heads moving across the tape at speeds higher than what would otherwise be possible. This is referred to as helical scan recording. The slow linear tape speed (1.31 ips for NTSC at SP, 0.92 ips for PAL) combines with the drum rotation so that the heads move across the tape at a writing speed of 4.86 or 6.096 meters per second.
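The quoted writing speeds can be sanity-checked from the drum geometry. The sketch below assumes a nominal 62 mm head-drum diameter, a typical full-size VHS figure that is not stated in this article; the rotation-only estimate lands close to the quoted PAL figure, while the exact values also depend on the precise drum geometry and the tape's own motion:

```python
import math

drum_diameter_m = 0.062   # assumed nominal full-size VHS head-drum diameter
for system, rev_per_s in (("PAL", 25.0), ("NTSC", 29.97)):
    v = math.pi * drum_diameter_m * rev_per_s   # head speed from rotation alone
    print(f"{system}: ~{v:.2f} m/s")
# PAL: ~4.87 m/s (close to the 4.86 m/s quoted above); NTSC: ~5.84 m/s
```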
To maximize the use of the tape, the video tracks are recorded very close together. To reduce crosstalk between adjacent tracks on playback, an azimuth recording method is used: The gaps of the two heads are not aligned exactly with the track path. Instead, one head is angled at plus six degrees from the track, and the other at minus six degrees. This results, during playback, in destructive interference of the signal from the tracks on either side of the one being played.
Each of the diagonally angled tracks is a complete TV picture field, lasting 1/60 of a second (1/50 on PAL) on the display. One tape head records an entire picture field. The adjacent track, recorded by the second tape head, is the next 1/60 (or 1/50) of a second TV picture field, and so on. Thus one complete head rotation records an entire NTSC or PAL frame of two fields.
The original VHS specification had only two video heads. When the EP recording speed was introduced, the thickness of these heads was reduced to accommodate the narrower tracks. However, this subtly reduced the quality of the SP speed, and dramatically lowered the quality of freeze frame and high-speed search. Later models implemented both wide and narrow heads, and could use all four during pause and shuttle modes to further improve quality, although later machines combined both pairs into one. In machines supporting VHS HiFi (described later), yet another pair of heads was added to handle the VHS HiFi signal. Camcorders using the miniaturized drum required twice as many heads to complete any given task. This almost always meant four heads on the miniaturized drum, with performance similar to a two-head VCR with a full-sized drum. No attempt was made to record Hi-Fi audio with such devices, as this would require an additional four heads. W-VHS decks could have up to 12 heads in the head drum, of which 11 were active, including a flying erase head for erasing individual video fields, and one was a dummy used for balancing the head drum.
The high tape-to-head speed created by the rotating head results in a far higher bandwidth than could be practically achieved with a stationary head.
VHS machines record up to 3 MHz of baseband video bandwidth and 300 kHz of baseband chroma bandwidth. The luminance (black and white) portion of the video is frequency modulated and combined with a down-converted "color under" chroma (color) signal that is encoded using quadrature amplitude modulation. Including side bands, the signal on a VHS tape can use up to 10 MHz of RF bandwidth.
VHS horizontal resolution is 240 TVL, or about 320 lines across a scan line. The vertical resolution (number of scan lines) is the same as the respective analog TV standard (625 for PAL or 525 for NTSC; somewhat fewer scan lines are actually visible due to overscan and the VBI). In modern-day digital terminology, NTSC VHS resolution is roughly equivalent to 333×480 pixels for luma and 40×480 pixels for chroma; 333×480 = 159,840 pixels, or 0.16 MP (1/6 of a megapixel). PAL VHS resolution is roughly 333×576 pixels for luma and 40×576 pixels for chroma (although PAL and SECAM halve the vertical color resolution when decoded).
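The roughly 320-sample luma figure is consistent with the 3 MHz luminance bandwidth given in the previous paragraph, assuming an active line time of about 53 µs (an assumed value, not stated in this article):

```python
bandwidth_hz  = 3.0e6   # luminance bandwidth recorded by VHS (from the text)
active_line_s = 53e-6   # assumed active (visible) portion of one scan line
samples = 2 * bandwidth_hz * active_line_s   # Nyquist: two samples per cycle
print(f"~{samples:.0f} resolvable luma samples per line")   # ~318
```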
JVC countered 1985's SuperBeta with VHS HQ, or High Quality. The frequency modulation of the VHS luminance signal is limited to 3 megahertz, which makes higher resolutions technically impossible even with the highest-quality recording heads and tape materials, but an HQ branded deck includes luminance noise reduction, chroma noise reduction, white clip extension, and improved sharpness circuitry. The effect was to increase the apparent horizontal resolution of a VHS recording from 240 to 250 analog (equivalent to 333 pixels from left-to-right, in digital terminology). The major VHS OEMs resisted HQ due to cost concerns, eventually resulting in JVC reducing the requirements for the HQ brand to white clip extension plus one other improvement.
In 1987, JVC introduced a new format called Super VHS (often known as S-VHS) which extended the bandwidth to over 5 megahertz, yielding 420 analog horizontal (560 pixels left-to-right). Most Super VHS recorders can play back standard VHS tapes, but not vice versa. S-VHS was designed for higher resolution, but failed to gain popularity outside Japan because of the high costs of the machines and tapes. Because of the limited user base, Super VHS was never picked up to any significant degree by manufacturers of pre-recorded tapes, although it was used extensively in the low-end professional market for filming and editing.
Audio recording
After leaving the head drum, the tape passes over the stationary audio and control head. This records a control track at the bottom edge of the tape, and one or two linear audio tracks along the top edge.
Original linear audio system
In the original VHS specification, audio was recorded as baseband in a single linear track, at the upper edge of the tape, similar to how an audio compact cassette operates. The recorded frequency range was dependent on the linear tape speed. For the VHS SP mode, which already uses a lower tape speed than the compact cassette, this resulted in a mediocre frequency response of roughly 100 Hz to 10 kHz for NTSC; the frequency response for PAL VHS, with its lower standard tape speed, was somewhat worse, at about 80 Hz to 8 kHz. The signal-to-noise ratio (SNR) was an acceptable 42 dB for NTSC and 41 dB for PAL. Both parameters degraded significantly with VHS's longer play modes, with EP/NTSC frequency response peaking at 4 kHz. S-VHS tapes can give better audio (and video) quality, because the tapes are designed to have almost twice the bandwidth of VHS at the same speed.
Sound cannot be recorded on a VHS tape without recording a video signal because the video signal is used to generate the control track pulses which effectively regulate the tape speed on playback. Even in the audio dubbing mode, a valid video recording (control track signal) must be present on the tape for audio to be correctly recorded. If there is no video signal to the VCR input during recording, most later VCRs will record black video and generate a control track while the sound is being recorded. Some early VCRs record audio without a control track signal; this is of little use, because the absence of a signal from the control track means that the linear tape speed is irregular during playback.
More sophisticated VCRs offer stereo audio recording and playback. Linear stereo fits two independent channels in the same space as the original mono audio track. While this approach preserves acceptable backward compatibility with monaural audio heads, the splitting of the audio track degrades the audio's signal-to-noise ratio, causing objectionable tape hiss at normal listening volume. To counteract the hiss, linear stereo VHS VCRs use Dolby B noise reduction for recording and playback. This dynamically boosts the high frequencies of the audio program on the recorded medium, improving its signal strength relative to the tape's background noise floor, then attenuates the high frequencies during playback. Dolby-encoded program material exhibits a high-frequency emphasis when played on non-Hi-Fi VCRs that are not equipped with the matching Dolby noise reduction decoder, although this may actually improve the sound quality of non-Hi-Fi VCRs, especially at the slower recording speeds.
High-end consumer recorders take advantage of the linear nature of the audio track, as the audio track can be erased and recorded without disturbing the video portion of the recorded signal. Hence "audio dubbing" and "video dubbing", where either the audio or the video is re-recorded on tape (without disturbing the other), were supported features on prosumer linear video-editing decks. Without dubbing capability, an audio or video edit could not be done in place on the master cassette, and required the edited output to be captured to another tape, incurring generational loss.
Studio film releases began to emerge with linear stereo audiotracks in 1982. From that point, nearly every home video release by Hollywood featured a Dolby-encoded linear stereo audiotrack. However, linear stereo was never popular with equipment makers or consumers.
Tracking adjustment and index marking
Another linear control track at the tape's lower edge holds pulses that mark the beginning of every frame of video; these are used to fine-tune the tape speed during playback, so that the high-speed rotating heads remain exactly on their helical tracks rather than somewhere between two adjacent tracks (known as "tracking"). Since good tracking depends on precise distances between the rotating drum and the fixed control/audio head reading the linear tracks, which usually vary by a couple of micrometers between machines due to manufacturing tolerances, most VCRs offer tracking adjustment, either manual or automatic, to correct such mismatches.
The control track is also used to hold index marks, which were normally written at the beginning of each recording session, and can be found using the VCR's index search function: this will fast-wind forward or backward to the nth specified index mark, and resume playback from there. At times, higher-end VCRs provided functions for the user to manually add and remove these marks.
By the late 1990s, some high-end VCRs offered more sophisticated indexing. For example, Panasonic's Tape Library system assigned an ID number to each cassette, and logged recording information (channel, date, time and optional program title entered by the user) both on the cassette and in the VCR's memory for up to 900 recordings (600 with titles).
Hi-Fi audio system
Around 1984, JVC added Hi-Fi audio to VHS (model HR-D725U), in response to Betamax's introduction of Beta Hi-Fi. Both VHS Hi-Fi and Betamax Hi-Fi delivered flat full-range frequency response (20 Hz to 20 kHz), an excellent 70 dB signal-to-noise ratio (in the consumer space, second only to the compact disc), 90 dB of dynamic range, and professional audio-grade channel separation (more than 70 dB). VHS Hi-Fi audio is achieved using audio frequency modulation (AFM): the two stereo channels (L, R) are modulated onto two different frequency-modulated carriers, and the combined modulated audio signal pair is embedded into the video signal. To avoid crosstalk and interference from the primary video carrier, VHS's implementation of AFM relies on a form of magnetic recording called depth multiplexing. The modulated audio carrier pair is placed in the hitherto-unused frequency range between the luminance and the color carrier (below 1.6 MHz), and recorded first. Subsequently, the video head erases and re-records the video signal (combined luminance and color signal) over the same tape surface, but the video signal's higher center frequency results in a shallower magnetization of the tape, allowing both the video and the residual AFM audio signal to coexist on tape. (PAL versions of Beta Hi-Fi use the same technique.)

During playback, VHS Hi-Fi recovers the depth-recorded AFM signal by subtracting the video head's signal (which contains only the video signal) from the audio head's signal (which contains the AFM signal contaminated by a weak image of the video signal), then demodulates the left and right audio channels from their respective frequency carriers. The result of this complex process was audio of high fidelity that was uniformly solid across all tape speeds (EP, LP or SP). Since JVC had gone to the trouble of ensuring Hi-Fi's backward compatibility with non-Hi-Fi VCRs, virtually all studio home video releases produced after this time contained Hi-Fi audio tracks in addition to the linear audio track. Under normal circumstances, all Hi-Fi VHS VCRs record Hi-Fi and linear audio simultaneously to ensure compatibility with VCRs without Hi-Fi playback, though only early high-end Hi-Fi machines provided linear stereo compatibility.
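A minimal numerical sketch of the AFM idea follows: each stereo channel frequency-modulates its own carrier, and the two modulated carriers are summed before being written to tape. The carrier frequencies, peak deviation, and simulation sample rate are illustrative assumptions, not the exact VHS specification.

```python
# Minimal sketch of audio frequency modulation (AFM) for VHS Hi-Fi: the L
# and R channels each FM-modulate a separate carrier. Carrier frequencies,
# deviation, and sample rate are illustrative assumptions only.
import numpy as np

FS = 20_000_000                       # simulation sample rate, 20 MHz (assumed)
F_L, F_R = 1.3e6, 1.7e6               # AFM carrier frequencies (assumed values)
DEVIATION = 150e3                     # peak frequency deviation, Hz (assumed)

def afm_modulate(audio, f_carrier):
    """FM-modulate one audio channel (-1..1) onto its carrier."""
    inst_freq = f_carrier + DEVIATION * audio        # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(inst_freq) / FS    # integrate to get phase
    return np.cos(phase)

t = np.arange(FS // 1000) / FS        # 1 ms of signal
left = np.sin(2 * np.pi * 440 * t)    # 440 Hz test tone
right = np.sin(2 * np.pi * 880 * t)   # 880 Hz test tone
afm_pair = afm_modulate(left, F_L) + afm_modulate(right, F_R)
# afm_pair is what gets depth-recorded beneath the video signal.
```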
The sound quality of Hi-Fi VHS stereo is comparable to some extent to CD audio, particularly when recordings were made on high-end or professional VHS machines with a manual audio recording level control. This high quality, compared with other consumer audio recording formats such as the compact cassette, attracted the attention of amateur and hobbyist recording artists. Home recording enthusiasts occasionally recorded high-quality stereo mixdowns and master recordings from multitrack audio tape onto consumer-level Hi-Fi VCRs. However, because the VHS Hi-Fi recording process is intertwined with the VCR's video-recording function, advanced editing functions such as audio-only or video-only dubbing are impossible. A short-lived alternative to the Hi-Fi feature for recording mixdowns of hobbyist audio-only projects was the PCM adaptor, which encoded digital audio as a grid of black-and-white dots in an analog video signal that the VCR could record; the arrival of DAT machines made this approach obsolete.
Some VHS decks also had a "simulcast" switch, allowing users to record an external audio input along with off-air pictures. Some televised concerts offered a stereo simulcast soundtrack on FM radio, so events like Live Aid were recorded by thousands of people with a full stereo soundtrack even though stereo TV broadcasts were still some years off (especially in regions that later adopted NICAM). Other examples included network television shows such as Friday Night Videos and MTV in its first few years of existence. Likewise, some countries, most notably South Africa, provided alternate-language audio tracks for TV programming through an FM radio simulcast.
The considerable complexity and additional hardware limited VHS Hi-Fi to high-end decks for many years. While linear stereo all but disappeared from home VHS decks, it was not until the 1990s that Hi-Fi became a more common feature on VHS decks. Even then, most customers were unaware of its significance and merely enjoyed the better audio performance of the newer decks. VHS Hi-Fi audio has been standardized in IEC 60774-2.
Issues with Hi-Fi audio
Unlike the linear audio track, the path followed by the video and Hi-Fi audio heads is striped and discontinuous, so head switching is required to provide a continuous audio signal. While the video signal can easily hide the head-switching point in the invisible vertical retrace section of the signal, so that the exact switching point is not very important, the same is not possible with a continuous audio signal, which has no inaudible sections. Hi-Fi audio thus depends on a much more exact alignment of the head-switching point than non-Hi-Fi VHS machines require. Misalignment may lead to imperfect joining of the signal, resulting in low-pitched buzzing. The problem is known as "head chatter", and tends to increase as the audio heads wear down.
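The effect of a misaligned switching point can be illustrated numerically. The sketch below rebuilds a tone from alternating "head passes" and shows how a timing error introduces a discontinuity that repeats at the head-switching rate; all values are illustrative assumptions.

```python
# Sketch of why Hi-Fi audio needs precise head switching: the rotating
# heads supply alternating segments of audio, and a timing error leaves a
# gap at every switch point, repeating at the drum rate (heard as a buzz).
# Segment length and misalignment are illustrative assumptions.
import numpy as np

FS = 48_000
SEG = FS // 60                        # one head pass per field (NTSC, ~60 Hz)
t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 440 * t)    # one second of a clean 440 Hz tone

def stitch(signal, seg_len, misalign=0):
    """Rebuild audio from alternating head passes; 'misalign' samples are
    lost at every switch point when the alignment is off."""
    out = []
    for start in range(0, len(signal), seg_len):
        out.append(signal[start + misalign : start + seg_len])
    return np.concatenate(out)

clean = stitch(tone, SEG)                 # perfect alignment: continuous tone
buzzy = stitch(tone, SEG, misalign=20)    # phase jumps 60 times per second
```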
Another issue that made VHS Hi-Fi imperfect for music is inaccurate level reproduction: softer and louder passages are not re-created exactly as in the original source.
Variations
Super-VHS / ADAT / SVHS-ET
Several improved versions of VHS exist, most notably Super-VHS (S-VHS), an analog video standard with improved video bandwidth. S-VHS improved the horizontal luminance resolution to 400 lines (versus 250 for VHS/Beta and 500 for DVD). The audio system (both linear and AFM) is the same. S-VHS made little impact on the home market, but gained dominance in the camcorder market due to its superior picture quality.
The ADAT format provides the ability to record multitrack digital audio using S-VHS media. JVC also developed SVHS-ET technology for its Super-VHS camcorders and VCRs, which simply allows them to record Super VHS signals onto lower-priced VHS tapes, albeit with a slight blurring of the image. Nearly all later JVC Super-VHS camcorders and VCRs have SVHS-ET ability.
VHS-C / Super VHS-C
Another variant is VHS-Compact (VHS-C), originally developed for portable VCRs in 1982, but ultimately finding success in palm-sized camcorders. The longest tape available for NTSC holds 60 minutes in SP mode and 180 minutes in EP mode. Since VHS-C tapes are based on the same magnetic tape as full-size tapes, they can be played back in standard VHS players using a mechanical adapter, without the need of any kind of signal conversion. The magnetic tape on VHS-C cassettes is wound on one main spool and uses a gear wheel to advance the tape.
The adapter is largely mechanical, although early examples were motorized and battery-powered. It has an internal hub to engage the VCR mechanism in the location of a normal full-size tape hub, driving the gearing on the VHS-C cassette. When a VHS-C cassette is inserted into the adapter, a small swing-arm also pulls the tape out of the miniature cassette to span the standard tape-path distance between the guide rollers of a full-size tape. This allows the tape from the miniature cassette to use the same loading mechanism as that from the standard cassette.
Super VHS-C, or S-VHS Compact, was developed by JVC in 1987. It provided improved luminance and chrominance quality, while S-VHS recorders remained compatible with VHS tapes.
Sony was unable to shrink its Betamax form any further, so instead developed Video8/Hi8 which was in direct competition with the VHS-C/S-VHS-C format throughout the 1980s, 1990s, and 2000s. Ultimately neither format "won" and both have been superseded by digital high definition equipment.
W-VHS / Digital-VHS (high-definition)
Wide-VHS (W-VHS) allowed recording of MUSE Hi-Vision analog high definition television, which was broadcast in Japan from 1989 until 2007. The other improved standard, called Digital-VHS (D-VHS), records digital high definition video onto a VHS form factor tape. D-VHS can record up to 4 hours of ATSC digital television in 720p or 1080i formats using the fastest record mode (equivalent to VHS-SP), and up to 49 hours of lower-definition video at slower speeds.
D9
There is also a JVC-designed component digital professional production format known as Digital-S, officially designated D9, that uses a VHS form factor tape and essentially the same mechanical tape handling techniques as an S-VHS recorder. It is the least expensive format to support Sel-Sync pre-read for video editing. The format competed with Sony's Digital Betacam in the professional and broadcast market, although there Sony's Betacam family ruled supreme, in contrast to the outcome of the VHS/Betamax domestic format war. It has since been superseded by high-definition formats.
V-Lite
In the late 1990s, there was a disposable promotional variation of the VHS format called V-Lite. It was a cassette constructed largely of polystyrene; only the rotating components, such as the tape reels, were of hard plastic. The casing was glued together and lacked standard features such as a protective cover for the exposed tape. Its purpose was to be as lightweight as possible to minimize mass-delivery costs for a media company's promotional campaign, and tapes were intended for only a few viewings, with a typical runtime of 2 to 3 minutes. One such production was the A&E Network's 2000 adaptation of The Great Gatsby. The format arose concurrently with, and was then rendered obsolete by, the rise of the DVD video format, which was lighter and still less expensive to mass-distribute and eventually supplanted VHS; video streaming would later supplant physical media for video promotion altogether.
Accessories
Shortly after the introduction of the VHS format, VHS tape rewinders were developed. These devices served the sole purpose of rewinding VHS tapes. Proponents argued that using the rewind function on a standard VHS player would wear the transport mechanism, whereas a dedicated rewinder would rewind tapes smoothly, and normally faster than the player's rewind function. However, some rewinder brands were prone to frequent abrupt stops, which occasionally led to tape damage.
Some devices were marketed which allowed a personal computer to use a VHS recorder as a data backup device. The most notable of these was ArVid, widely used in Russia and CIS states. Similar systems were manufactured in the United States by Corvus and Alpha Microsystems, and in the UK by Backer from Danmere Ltd. The Backer system could store up to 4 GB of data with a transfer rate of 9 MB per minute.
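As a quick sanity check on the quoted Backer figures, the back-of-the-envelope calculation below shows that filling a 4 GB tape at 9 MB per minute takes roughly seven and a half hours (binary megabytes are assumed; decimal units give a similar answer).

```python
# Back-of-the-envelope check on the Backer capacity/rate figures above.
capacity_mb = 4 * 1024          # 4 GB, assuming binary gigabytes
rate_mb_per_min = 9             # quoted transfer rate, MB per minute
minutes = capacity_mb / rate_mb_per_min
print(f"{minutes:.0f} min = {minutes / 60:.1f} h")   # ~455 min = ~7.6 h
```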
Signal standards
VHS can record and play back all varieties of analog television signal in existence at the time VHS was devised. However, a machine must be designed to record a given standard, and a VHS machine can typically only handle signals using the same standard as the country it was sold in. Because some parameters of analog broadcast TV do not apply to VHS recordings, the number of VHS tape recording format variations is smaller than the number of broadcast TV signal variations; for example, analog TVs and VHS machines (except multistandard devices) are not interchangeable between the UK and Germany, but VHS tapes are. The following tape recording formats exist in conventional VHS (listed in the form of standard/lines/frames):
SECAM/625/25 (SECAM, French variety)
MESECAM/625/25 (most other SECAM countries, notably the former Soviet Union and Middle East)
NTSC/525/30 (Most parts of Americas, Japan, South Korea)
PAL/525/30 (i.e., PAL-M, Brazil)
PAL/625/25 (most of Western Europe, Australia, New Zealand, many parts of Asia such as China and India, some parts of South America such as Argentina, Uruguay and the Falklands, and Africa)
PAL/625/25 VCRs allow playback of SECAM (and MESECAM) tapes with a monochrome picture, and vice versa, as the line standard is the same.
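This compatibility rule can be summed up in a short table. The sketch below encodes the formats listed above and generalizes the monochrome rule from the PAL/SECAM case to any pair sharing the same line/frame standard, which is an assumption for the other pairings.

```python
# Sketch of the playback-compatibility rule described above: a matching
# line/frame standard allows at least monochrome playback; a mismatched one
# does not. Generalizing the monochrome rule beyond PAL/SECAM is assumed.
FORMATS = {                       # name -> (color system, lines, frames/s)
    "SECAM":   ("SECAM",   625, 25),
    "MESECAM": ("MESECAM", 625, 25),
    "NTSC":    ("NTSC",    525, 30),
    "PAL-M":   ("PAL",     525, 30),
    "PAL":     ("PAL",     625, 25),
}

def playback(tape, deck):
    t_color, t_lines, t_fps = FORMATS[tape]
    d_color, d_lines, d_fps = FORMATS[deck]
    if (t_lines, t_fps) != (d_lines, d_fps):
        return "incompatible"
    return "full colour" if t_color == d_color else "monochrome only"

print(playback("SECAM", "PAL"))    # -> monochrome only
print(playback("NTSC", "PAL"))     # -> incompatible
```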
Since the 1990s, dual and multi-standard VHS machines, able to handle a variety of VHS-supported video standards, became more common. For example, VHS machines sold in Australia and Europe could typically handle PAL, MESECAM for record and playback, and NTSC for playback only on suitable TVs. Dedicated multi-standard machines can usually handle all standards listed, and some high-end models could convert the content of a tape from one standard to another on the fly during playback by using a built-in standards converter.
S-VHS is only implemented as such in PAL/625/25 and NTSC/525/30; S-VHS machines sold in SECAM markets record internally in PAL, and convert between PAL and SECAM during recording and playback. S-VHS machines for the Brazilian market record in NTSC and convert between it and PAL-M.
A small number of VHS decks are able to decode closed captions on video cassettes before sending the full signal, captions included, to the set. A smaller number still can additionally record subtitles transmitted with world-standard teletext signals (on pre-digital services) simultaneously with the associated program. S-VHS has sufficient resolution to record teletext signals with relatively few errors, although for some years now it has been possible to recover teletext pages and even complete "page carousels" from regular VHS recordings using non-real-time computer processing.
Uses in marketing
VHS was popular for long-form content, such as feature films or documentaries, as well as short-play content, such as music videos, in-store videos, teaching videos, distribution of lectures and talks, and demonstrations. VHS instruction tapes were sometimes included with various products and services, including exercise equipment, kitchen appliances, and computer software.
Comparison to Betamax
VHS was the winner of a protracted and somewhat bitter format war during the late 1970s and early 1980s against Sony's Betamax format as well as other formats of the time.
Betamax was widely perceived at the time as the better format, as the cassette was smaller in size, and Betamax offered slightly better video quality than VHS – it had lower video noise, less luma-chroma crosstalk, and was marketed as providing pictures superior to those of VHS. However, the sticking point for both consumers and potential licensing partners of Betamax was the total recording time. To overcome the recording limitation, Beta II speed (two-hour mode, NTSC regions only) was released in order to compete with VHS's two-hour SP mode, thereby reducing Betamax's horizontal resolution to 240 lines (vs 250 lines). In turn, the extension of VHS to VHS HQ produced 250 lines (vs 240 lines), so that overall a typical Betamax/VHS user could expect virtually identical resolution. (Very high-end Betamax machines still supported recording in the Beta I mode and some in an even higher resolution Beta Is (Beta I Super HiBand) mode, but at a maximum single-cassette run time of 1:40 [with an L-830 cassette].)
Because Betamax was released more than a year before VHS, it held an early lead in the format war. By 1981, however, Betamax's share of United States sales had dipped to only 25 percent. Experts debated the cause of Betamax's loss. Some, including Sony's founder Akio Morita, attributed it to Sony's licensing strategy with other manufacturers, which consistently kept the overall cost of a unit higher than that of a VHS unit, while JVC allowed other manufacturers to produce VHS units license-free, keeping costs lower. Others say that VHS had better marketing, since the much larger electronics companies of the time (Matsushita, for example) supported VHS. Sony made its first VHS players/recorders in 1988, although it continued to produce Betamax machines concurrently until 2002.
Decline
VHS was widely used in television-equipped American and European living rooms for more than twenty years from its introduction in the late 1970s. The home television recording market, also known as the VHS market, as well as the camcorder market, has since transitioned to digital recording on solid-state memory cards. The introduction of the DVD format to American consumers in March 1997 triggered the market share decline of VHS.
DVD rentals surpassed those on the VHS format in the United States for the first time in June 2003. The Hill said that David Cronenberg's movie A History of Violence, sold on VHS in 2006, was "widely believed to be the last instance of a major motion picture to be released in that format". By December 2008, the Los Angeles Times reported on "the final truckload of VHS tapes" being shipped from a warehouse in Palm Harbor, Florida, citing Ryan J. Kugler's Distribution Video Audio Inc. as "the last major supplier".
Though 94.5 million Americans still owned VHS format VCRs in 2005, market share continued to drop. In the mid-2000s, several retail chains in the United States and Europe announced they would stop selling VHS equipment. In the U.S., no major brick-and-mortar retailers stock VHS home-video releases, focusing only on DVD and Blu-ray media. Sony Pictures Home Entertainment along with other companies ceased production of VHS in late 2010 in South Korea.
The last known company in the world to manufacture VHS equipment was Funai of Japan, who produced video cassette recorders under the Sanyo brand in China and North America. Funai ceased production of VHS equipment (VCR/DVD combos) in July 2016, citing falling sales and a shortage of components.
Modern use
Despite the decline in both VHS players and programming on VHS machines, they are still owned in some households worldwide. Those who still use or hold on to VHS do so for a number of reasons, including nostalgic value, ease of use in recording, keeping personal videos or home movies, watching content currently exclusive to VHS, and collecting. Some expatriate communities in the United States also obtain video content from their native countries in VHS format.
Although VHS has been discontinued in the United States, VHS recorders and blank tapes were still sold at stores in other developed countries prior to digital television transitions. As an acknowledgement of the continued use of VHS, Panasonic announced the world's first dual deck VHS-Blu-ray player in 2009. The last standalone JVC VHS-only unit was produced October 28, 2008. JVC, and other manufacturers, continued to make combination DVD+VHS units even after the decline of VHS. Countries like South Korea released films on VHS until December 2010, with Inception being the last Hollywood film to be released on VHS in the country.
A market for pre-recorded VHS tapes has continued, and some online retailers such as Amazon still sell new and used pre-recorded VHS cassettes of movies and television programs. The major Hollywood studios, however, no longer generally issue releases on VHS. The last major studio film to be released in the format in the United States and Canada, other than as part of special marketing promotions, was A History of Violence in 2006. In October 2008, Distribution Video Audio Inc., the last major American supplier of pre-recorded VHS tapes, shipped its final truckload of tapes to stores in America.
However, there have been a few exceptions. For example, The House of the Devil was released on VHS in 2010 as an Amazon-exclusive deal, in keeping with the film's intent to mimic 1980s horror films. The first Paranormal Activity film, produced in 2007, had a VHS release in the Netherlands in 2010. The horror film V/H/S/2 was released as a combo in North America that included a VHS tape in addition to a Blu-ray and a DVD copy on September 24, 2013. In 2019, Paramount Pictures produced limited quantities of the 2018 film Bumblebee to give away as promotional contest prizes. In 2021, professional wrestling promotion Impact Wrestling released a limited run of VHS tapes containing that year's Slammiversary, which quickly sold out. The company later announced future VHS runs of pay-per-view events.
The VHS medium has a cult following, and collecting made a comeback in the 2020s. In February 2021, it was reported that VHS was once again doing well as an underground market, and in January 2023 it was reported that VHS tapes were once again becoming valuable collectors' items. The 2024 horror film Alien: Romulus will have a limited release on VHS, marking the first major Hollywood film to receive an official VHS release since 2006.
Successors
VCD
The Video CD (VCD) was created in 1993 as an alternative medium for video on a CD-sized disc. Though VCDs occasionally show compression artifacts and color banding, discrepancies common in digital media, the durability and longevity of a VCD depend on the production quality of the disc and its handling. The data stored digitally on a VCD theoretically does not degrade in the analog sense, as tape does: in the disc player, no physical contact is made with either the data or the label side. When handled properly, a VCD will last a long time.
Since a VCD can hold only 74 minutes of video, a movie exceeding that mark has to be divided into two or more discs.
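The resulting disc count is simple arithmetic, sketched below using the 74-minute figure given above.

```python
# Number of VCDs needed for a film at 74 minutes per disc (figure as above).
import math

def discs_needed(runtime_min, per_disc=74):
    return math.ceil(runtime_min / per_disc)

print(discs_needed(120))   # a two-hour feature -> 2 discs
print(discs_needed(74))    # exactly one disc
```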
DVD
The DVD-Video format was introduced first on November 1, 1996, in Japan; to the United States on March 26, 1997 (test marketed); and mid-to-late 1998 in Europe and Australia.
While the DVD was highly successful in the pre-recorded retail market, it failed to displace VHS for in home recording of video content (e.g. broadcast or cable television). A number of factors hindered the commercial success of the DVD in this regard, including:
A reputation for being temperamental and unreliable, as well as the risk of scratches and hairline cracks.
Incompatibilities in playing discs recorded on a different manufacturer's machines to that of the original recording machine.
Compression artifacts: MPEG-2 video compression can produce visible artifacts such as macroblocking, mosquito noise and ringing, which become accentuated in extended recording modes (more than three hours on a DVD-5 disc). Standard VHS does not suffer from any of these problems, all of which are characteristic of certain digital video compression systems (see Discrete cosine transform), but VHS instead has reduced luminance and chroma resolution, which makes the picture look horizontally blurred (resolution decreases further in LP and EP recording modes). VHS also adds considerable noise to both the luminance and chroma channels.
High-capacity digital recording technologies
High-capacity digital recording systems are also gaining in popularity with home users. These types of systems come in several form factors:
Hard disk–based set-top boxes
Hard disk/optical disc combination set-top boxes
Personal computer–based media center
Portable media players with TV-out capability
Hard disk-based systems include TiVo as well as other digital video recorder (DVR) offerings. These systems provide users with a no-maintenance solution for capturing video content. Customers of subscriber-based TV generally receive electronic program guides, enabling one-touch setup of a recording schedule, and hard disk-based systems allow for many hours of recording without user intervention. For example, a 120 GB system recording at an extended recording rate (XP) of 10 Mbit/s MPEG-2 can record over 25 hours of video content.
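That figure checks out arithmetically, as the short calculation below shows (decimal units are assumed, and audio and container overhead are ignored).

```python
# Checking the recording-time figure quoted above: 120 GB at a constant
# 10 Mbit/s MPEG-2 video rate (decimal units; audio/overhead ignored).
capacity_bits = 120e9 * 8          # 120 GB expressed in bits
rate_bits_per_s = 10e6             # 10 Mbit/s
hours = capacity_bits / rate_bits_per_s / 3600
print(f"{hours:.1f} h")            # -> ~26.7 h, i.e. "over 25 hours"
```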
Legacy
VHS is often considered an important medium in film history, and its influence on art and cinema was highlighted in a retrospective staged at the Museum of Arts and Design in 2013. In 2015, the Yale University Library collected nearly 3,000 horror and exploitation movies on VHS tapes, distributed from 1978 to 1985, calling them "the cultural id of an era."
The documentary film Rewind This! (2013), directed by Josh Johnson, tracks the impact of VHS on the film industry through various filmmakers and collectors.
The last Blockbuster franchise is still renting out VHS tapes, and is based in Bend, Oregon, a town home to under 100,000 people as of 2020.
The VHS aesthetic is also a central component of the analog horror genre, which is largely known for imitating recordings of late 20th century TV broadcasts.