Washington University's North Campus and West Campus principally house administrative functions that are not student focused. North Campus lies in St. Louis City near the Delmar Loop. In 2004 the University acquired the building, formerly home to the Angelica Uniform Factory, along with adjacent property. Several University administrative departments are located at North Campus, including offices for Quadrangle Housing, Accounting and Treasury Services, Parking and Transportation Services, Army ROTC, and Network Technology Services. North Campus also provides off-site storage space for the Performing Arts Department. Renovations are ongoing; recent additions include a small eatery operated by Bon Appétit Management Company, the University's on-campus food provider, completed during the spring 2007 semester, as well as the Family Learning Center, operated by Bright Horizons and opened in September 2010.
The West Campus is located about one mile (1.6 km) west of the Danforth Campus in Clayton, Missouri, and primarily consists of a four-story former department store building housing mostly administrative space. The West Campus building was home to the Clayton branch of the Famous-Barr department store until 1990, when the University acquired the property and adjacent parking and began a series of renovations. Today, the basement level houses the West Campus Library, the University Archives, the Modern Graphic History Library, and conference space. The ground level remains retail space. The upper floors house consolidated capital gifts, portions of alumni and development, and information systems offices from across the Danforth and Medical School campuses. There is also a music rehearsal room on the second floor. The West Campus is also home to the Center for the Application of Information Technologies (CAIT), which provides IT training services.
Tyson Research Center is a 2,000-acre (809 ha) field station located west of St. Louis on the Meramec River. Washington University obtained Tyson as surplus property from the federal government in 1963, and uses it as a biological field station and research/education center. In 2010 the Living Learning Center, which serves as a biological research station and classroom for summer students, became one of the first two buildings nationwide to be accredited as a "living building" under the Living Building Challenge.
Arts & Sciences at Washington University comprises three divisions: the College of Arts & Sciences, the Graduate School of Arts & Sciences, and University College in Arts & Sciences. Barbara Schaal is Dean of the Faculty of Arts & Sciences. James E. McLeod was the Vice Chancellor for Students and Dean of the College of Arts & Sciences; according to a University news release he died at the University's Barnes-Jewish Hospital on Tuesday, September 6, 2011 of renal failure as a result of a two-year-long struggle with cancer. Richard J. Smith is Dean of the Graduate School of Arts & Sciences.
Founded as the School of Commerce and Finance in 1917, the Olin Business School was named after entrepreneur John M. Olin in 1988. The school's academic programs include BSBA, MBA, Professional MBA (PMBA), Executive MBA (EMBA), MS in Finance, MS in Supply Chain Management, MS in Customer Analytics, Master of Accounting, Global Master of Finance Dual Degree program, and Doctorate programs, as well as non-degree executive education. In 2002, an Executive MBA program was established in Shanghai, in cooperation with Fudan University.
Olin has a network of more than 16,000 alumni worldwide. Over the last several years, the school’s endowment has increased to $213 million (2004) and annual gifts average $12 million per year.[citation needed] Simon Hall was opened in 1986 after a donation from John E. Simon. On May 2, 2014, the $90 million conjoined Knight and Bauer Halls were dedicated, following a $15 million gift from Charles F. Knight and Joanne Knight and a $10 million gift from George and Carol Bauer through the Bauer Foundation.
Undergraduate BSBA students take 40–60% of their courses within the business school and are able to formally declare majors in nine areas: accounting, entrepreneurship, finance, healthcare management, marketing, managerial economics and strategy, organization and human resources, international business, and operations and supply chain management. Graduate students are able to pursue an MBA either full-time or part-time. Students may also take elective courses from other disciplines at Washington University, including law and many other fields. Mahendra R. Gupta is the Dean of the Olin Business School.
Washington University School of Law offers joint-degree programs with the Olin Business School, the Graduate School of Arts and Sciences, the School of Medicine, and the School of Social Work. It also offers an LLM in Intellectual Property and Technology Law, an LLM in Taxation, an LLM in US Law for Foreign Lawyers, a Master of Juridical Studies (MJS), and a Juris Scientiae Doctoris (JSD). The law school offers courses in three semesters (spring, summer, and fall) and requires at least 85 hours of coursework for the JD.
In the 2015 US News & World Report America's Best Graduate Schools, the law school is ranked 18th nationally, out of over 180 law schools. In particular, its Clinical Education Program is currently ranked 4th in the nation. For that year's entering class, the median score placed students in the 96th percentile of test takers. The law school offers a full-time day program, beginning in August, for the J.D. degree. The law school is located in a state-of-the-art building, Anheuser-Busch Hall (opened in 1997). The building combines traditional architecture, a five-story open-stacks library, an integration of indoor and outdoor spaces, and the latest wireless and other technologies. National Jurist ranked Washington University 4th among the "25 Most Wired Law Schools."
The Washington University School of Medicine, founded in 1891, is highly regarded as one of the world's leading centers for medical research and training. The School ranks first in the nation in student selectivity. Among its many recent initiatives, The Genome Center at Washington University (directed by Richard K. Wilson) played a leading role in the Human Genome Project, having contributed 25% of the finished sequence. The School pioneered bedside teaching and led in the transformation of empirical knowledge into scientific medicine. The medical school partners with St. Louis Children's Hospital and Barnes-Jewish Hospital (part of BJC HealthCare), where all physicians are members of the school's faculty.
With roots dating back to 1909 in the university's School of Social Economy, the George Warren Brown School of Social Work (commonly called the Brown School or Brown) was founded in 1925. Brown's academic degree offerings include a Master of Social Work (MSW), a Master of Public Health (MPH), a PhD in Social Work, and a PhD in Public Health Sciences. It is currently ranked first among Master of Social Work programs in the United States. The school was endowed by Bettie Bofinger Brown and named for her husband, George Warren Brown, a St. Louis philanthropist and co-founder of the Brown Shoe Company. The school was the first in the country to have a building for the purpose of social work education, and it is also a founding member of the Association of Schools and Programs of Public Health. The school is housed within Brown and Goldfarb Halls, but a third building expansion is currently in progress and slated to be completed in summer 2015. The new building, adjacent to Brown and Goldfarb Halls, targets LEED Gold certification and will add approximately 105,000 square feet, more than doubling the school's teaching, research, and program space.
The school has many nationally and internationally acclaimed scholars in social security, health care, health disparities, communication, social and health policy, and individual and family development. Many of the faculty have training in both social work and public health. The school's current dean is Edward F. Lawlor. In addition to affiliation with the university-wide Institute of Public Health, Brown houses 12 research centers. The Brown School Library collects materials on many topics, with specific emphasis on: children, youth, and families; gerontology; health; mental health; social and economic development; family therapy; and management. The library maintains subscriptions to over 450 academic journals.
The Mildred Lane Kemper Art Museum, established in 1881, is one of the oldest teaching museums in the country. The collection includes works from 19th, 20th, and 21st century American and European artists, including George Caleb Bingham, Thomas Cole, Pablo Picasso, Max Ernst, Alexander Calder, Jackson Pollock, Rembrandt, Robert Rauschenberg, Barbara Kruger, and Christian Boltanski. Also in the complex is the 3,000 sq ft (300 m2) Newman Money Museum. In October 2006, the Kemper Art Museum moved from its previous location, Steinberg Hall, into a new facility designed by former faculty member Fumihiko Maki. The new Kemper Art Museum is located directly across from Steinberg Hall, which was Maki's first commission, in 1959.
Virtually all faculty members at Washington University engage in academic research,[citation needed] offering opportunities for both undergraduate and graduate students across the university's seven schools. Washington University is known for interdisciplinarity and departmental collaboration, and many of its research centers and institutes are collaborative efforts among multiple areas on campus.[citation needed] More than 60% of undergraduates are involved in faculty research across all areas; enabling undergraduates to participate in advanced research is an institutional priority. According to the Center for Measuring University Performance, it is considered to be one of the top 10 private research universities in the nation. A dedicated Office of Undergraduate Research on the Danforth Campus serves as a resource to post research opportunities, advise students in finding positions matching their interests, publish undergraduate research journals, and award research grants that make it financially possible to perform research.
During fiscal year 2007, $537.5 million was received in total research support, including $444 million in federal obligations. The University has over 150 National Institutes of Health funded inventions, many of them licensed to private companies. Governmental agencies and non-profit foundations such as the NIH, United States Department of Defense, National Science Foundation, and NASA provide the majority of research grant funding, with Washington University being one of the top recipients of NIH grants from year to year. Nearly 80% of NIH grants to institutions in the state of Missouri went to Washington University alone in 2007. Washington University and its Medical School played a large part in the Human Genome Project, contributing approximately 25% of the finished sequence. The Genome Sequencing Center has decoded the genomes of many animals, plants, and cellular organisms, including the platypus, chimpanzee, cat, and corn.
Washington University has over 300 undergraduate student organizations on campus. Most are funded by the Washington University Student Union, which has an annual budget of more than $2 million that is completely student-controlled and is one of the largest student government budgets in the country. Known as SU for short, the Student Union sponsors large-scale campus programs including WILD (a semesterly concert in the quad) and free copies of the New York Times, USA Today, and the St. Louis Post-Dispatch through The Collegiate Readership Program; it also contributes to the Assembly Series, a weekly lecture series produced by the University, and funds the campus television station, WUTV, and the radio station, KWUR. KWUR was named best radio station in St. Louis of 2003 by the Riverfront Times despite the fact that its signal reaches only a few blocks beyond the boundaries of the campus. There are 11 fraternities and 9 sororities, and approximately 35% of the student body is involved in Greek life. The Congress of the South 40 (CS40) is a Residential Life and Events Programming Board that operates outside of the SU sphere. CS40's funding comes from the Housing Activities Fee paid by each student living on the South 40.
Washington University has a large number of student-run musical groups on campus, including 12 official a cappella groups. The Pikers, an all-male group, is the oldest such group on campus. The Greenleafs, an all-female group, is the oldest (and only) all-female group on campus. The Mosaic Whispers, founded in 1991, is the oldest co-ed group on campus. They have produced 9 albums and have appeared on a number of compilation albums, including Ben Folds' Ben Folds Presents: University A Cappella! The Amateurs, who also appeared on this album, is another co-ed a cappella group on campus, founded in 1991. They have recorded seven albums and toured extensively. After Dark is a co-ed a cappella group founded in 2001. It has released three albums and has won several Contemporary A Cappella Recording (CARA) awards. In 2008 the group performed on MSNBC during coverage of the vice presidential debate with specially written songs about Joe Biden and Sarah Palin. The Ghost Lights, founded in 2010, is the campus's newest group and its only group dedicated to Broadway, movie, and television soundtracks. They have performed multiple philanthropic concerts in the greater St. Louis area and were honored in November 2010 with the opportunity to perform for Nobel Laureate Douglass North at his birthday celebration.
Over 50% of undergraduate students live on campus. Most of the residence halls on campus are located on the South 40, named for its location on the south side of the Danforth Campus and its size of 40 acres (160,000 m2). It is the location of all the freshman buildings as well as several upperclassman buildings, which are set up in the traditional residential college system. All of the residence halls are co-ed. The South 40 is organized as a pedestrian-friendly environment wherein residences surround a central recreational lawn known as the Swamp. Bear's Den (the largest dining hall on campus), the Habif Health and Wellness Center (Student Health Services), the Residential Life Office, University Police Headquarters, various student-owned businesses (e.g. the laundry service, Wash U Wash), and the baseball, softball, and intramural fields are also located on the South 40.
Another group of residences, known as the North Side, is located in the northwest corner of the Danforth Campus. Open only to upperclassmen and January Scholars, the North Side consists of Millbrook Apartments, the Village, Village East on-campus apartments, and all fraternity houses except the Zeta Beta Tau house, which is off campus, just northwest of the South 40. Sororities at Washington University have chosen not to have houses. The Village is a group of residences where students who have similar interests or academic goals apply in small groups of 4 to 24, known as BLOCs, to live together in clustered suites along with non-BLOC residents. Like the South 40, the residences of the Village surround a recreational lawn.
Washington University supports four major student-run media outlets. The university's student newspaper, Student Life, is available for students. KWUR (90.3 FM) serves as the students' official radio station; the station also attracts an audience in the immediately surrounding community due to its eclectic and free-form musical programming. WUTV is the university's closed-circuit television channel. The university's main student-run political publication is the Washington University Political Review (nicknamed "WUPR"), a self-described "multipartisan" monthly magazine. Washington University undergraduates publish two literary and art journals, The Eliot Review and Spires Intercollegiate Arts and Literary Magazine. A variety of other publications also serve the university community, ranging from in-house academic journals to glossy alumni magazines to WUnderground, campus' student-run satirical newspaper.
Washington University's sports teams are called the Bears. They are members of the National Collegiate Athletic Association and participate in the University Athletic Association at the Division III level. The Bears have won 19 NCAA Division III championships: one in women's cross country (2011), one in men's tennis (2008), two in men's basketball (2008, 2009), five in women's basketball (1998–2001, 2010), and ten in women's volleyball (1989, 1991–1996, 2003, 2007, 2009), as well as 144 UAA titles in 15 different sports. The Athletic Department is headed by John Schael, who has served as director of athletics since 1978. The 2000 Division III Central Region winner of the National Association of Collegiate Directors of Athletics/Continental Airlines Athletics Director of the Year award, Schael has helped orchestrate the Bears athletics program's transformation into one of the top departments in Division III.
Unlike the Federal Bureau of Investigation (FBI), which is a domestic security service, CIA has no law enforcement function and is mainly focused on overseas intelligence gathering, with only limited domestic collection. Though it is not the only U.S. government agency specializing in HUMINT, CIA serves as the national manager for coordination and deconfliction of HUMINT activities across the entire intelligence community. Moreover, CIA is the only agency authorized by law to carry out and oversee covert action on behalf of the President, unless the President determines that another agency is better suited for carrying out such action. It can, for example, exert foreign political influence through its tactical divisions, such as the Special Activities Division.
The Executive Office also supports the U.S. military by providing it with information it gathers, receiving information from military intelligence organizations, and cooperating on field activities. The Executive Director is in charge of the day-to-day operation of the CIA, and each branch of the service has its own Director. The Associate Director of Military Affairs, a senior military officer, manages the relationship between the CIA and the Unified Combatant Commands, which produce regional/operational intelligence and consume national intelligence.
The Directorate of Analysis produces all-source intelligence analysis on key foreign and transnational issues. It has four regional analytic groups, six groups for transnational issues, and three that focus on policy, collection, and staff support. There is an office dedicated to Iraq, along with regional analytical offices: the Office of Near Eastern and South Asian Analysis, the Office of Russian and European Analysis, and the Office of Asian Pacific, Latin American, and African Analysis.
The Directorate of Operations is responsible for collecting foreign intelligence, mainly from clandestine HUMINT sources, and covert action. The name reflects its role as the coordinator of human intelligence activities among other elements of the wider U.S. intelligence community with their own HUMINT operations. This Directorate was created in an attempt to end years of rivalry over influence, philosophy and budget between the United States Department of Defense (DOD) and the CIA. In spite of this, the Department of Defense recently organized its own global clandestine intelligence service, the Defense Clandestine Service (DCS), under the Defense Intelligence Agency (DIA).
The CIA established its first training facility, the Office of Training and Education, in 1950. Following the end of the Cold War, the CIA's training budget was slashed, which had a negative effect on employee retention. In response, Director of Central Intelligence George Tenet established CIA University in 2002. CIA University holds between 200 and 300 courses each year, training both new hires and experienced intelligence officers, as well as CIA support staff. The facility works in partnership with the National Intelligence University, and includes the Sherman Kent School for Intelligence Analysis, the Directorate of Analysis' component of the university.
Details of the overall United States intelligence budget are classified. Under the Central Intelligence Agency Act of 1949, the Director of Central Intelligence is the only federal government employee who can spend "un-vouchered" government money. The government has disclosed a total figure for all non-military intelligence spending since 2007; the fiscal 2013 figure is $52.6 billion. According to the 2013 mass surveillance disclosures, the CIA's fiscal 2013 budget is $14.7 billion, 28% of the total and almost 50% more than the budget of the National Security Agency. CIA's HUMINT budget is $2.3 billion, the SIGINT budget is $1.7 billion, and spending for security and logistics of CIA missions is $2.5 billion. "Covert action programs", including a variety of activities such as the CIA's drone fleet and anti-Iranian nuclear program activities, accounts for $2.6 billion.
There have been numerous previous attempts to obtain general information about the budget. These revealed that the CIA's annual budget in fiscal year 1963 was US$550 million (inflation-adjusted US$4.3 billion in 2016), and that the overall intelligence budget in FY 1997 was US$26.6 billion (inflation-adjusted US$39.2 billion in 2016). There have also been accidental disclosures; for instance, Mary Margaret Graham, a former CIA official and deputy director of national intelligence for collection, said in 2005 that the annual intelligence budget was $44 billion, and in 1994 Congress accidentally published a budget of $43.4 billion (in 2012 dollars) for the non-military National Intelligence Program, including $4.8 billion for the CIA. After the Marshall Plan was approved, appropriating $13.7 billion over five years, 5% of those funds, or $685 million, was made available to the CIA.
The role and functions of the CIA are roughly equivalent to those of the United Kingdom's Secret Intelligence Service (the SIS or MI6), the Australian Secret Intelligence Service (ASIS), the Egyptian General Intelligence Service, the Russian Foreign Intelligence Service (Sluzhba Vneshney Razvedki) (SVR), the Indian Research and Analysis Wing (RAW), the Pakistani Inter-Services Intelligence (ISI), the French foreign intelligence service Direction Générale de la Sécurité Extérieure (DGSE) and Israel's Mossad. While the preceding agencies both collect and analyze information, some like the U.S. State Department's Bureau of Intelligence and Research are purely analytical agencies.[citation needed]
The closest links of the U.S. IC to other foreign intelligence agencies are to Anglophone countries: Australia, Canada, New Zealand, and the United Kingdom. There is a special communications marking that signals that intelligence-related messages can be shared with these four countries. An indication of the United States' close operational cooperation is the creation of a new message distribution label within the main U.S. military communications network. Previously, the marking of NOFORN (i.e., No Foreign Nationals) required the originator to specify which, if any, non-U.S. countries could receive the information. A new handling caveat, USA/AUS/CAN/GBR/NZL Five Eyes, used primarily on intelligence messages, gives an easier way to indicate that the material can be shared with Australia, Canada, United Kingdom, and New Zealand.
The success of the British Commandos during World War II prompted U.S. President Franklin D. Roosevelt to authorize the creation of an intelligence service modeled after the British Secret Intelligence Service (MI6) and Special Operations Executive. This led to the creation of the Office of Strategic Services (OSS). On September 20, 1945, shortly after the end of World War II, Harry S. Truman signed an executive order dissolving the OSS, and by October 1945 its functions had been divided between the Departments of State and War. The division lasted only a few months. The first public mention of the "Central Intelligence Agency" appeared in a command-restructuring proposal presented by James Forrestal and Arthur Radford to the U.S. Senate Military Affairs Committee at the end of 1945. Despite opposition from the military establishment, the United States Department of State, and the Federal Bureau of Investigation (FBI), Truman established the National Intelligence Authority in January 1946, which was the direct predecessor of the CIA. Its operational extension was known as the Central Intelligence Group (CIG).
Lawrence Houston, head counsel of the SSU, CIG, and, later, CIA, was a principal draftsman of the National Security Act of 1947, which dissolved the NIA and the CIG and established both the National Security Council and the Central Intelligence Agency. In 1949, Houston helped draft the Central Intelligence Agency Act (Public Law 81-110), which authorized the agency to use confidential fiscal and administrative procedures and exempted it from most limitations on the use of federal funds. It also exempted the CIA from having to disclose its "organization, functions, officials, titles, salaries, or numbers of personnel employed." It created the program "PL-110" to handle defectors and other "essential aliens" who fell outside normal immigration procedures.
At the outset of the Korean War the CIA still had only a few thousand employees, a thousand of whom worked in analysis. Intelligence primarily came from the Office of Reports and Estimates, which drew its reports from a daily take of State Department telegrams, military dispatches, and other public documents. The CIA still lacked its own intelligence-gathering abilities. On 21 August 1950, shortly after the invasion of South Korea, Truman announced Walter Bedell Smith as the new Director of the CIA to correct what was seen as a grave failure of intelligence.
The CIA had different demands placed on it by the different bodies overseeing it. Truman wanted a centralized group to organize the information that reached him, the Department of Defense wanted military intelligence and covert action, and the State Department wanted to create global political change favorable to the US. Thus the two areas of responsibility for the CIA were covert action and covert intelligence. One of the main targets for intelligence gathering was the Soviet Union, which had also been a priority of the CIA's predecessors.
US Army general Hoyt Vandenberg, the CIG's second director, created the Office of Special Operations (OSO), as well as the Office of Reports and Estimates (ORE). Initially the OSO was tasked with spying and subversion overseas, with a budget of $15 million secured through the largesse of a small number of patrons in Congress. Vandenberg's goals were much like those set out by his predecessor: finding out "everything about the Soviet forces in Eastern and Central Europe: their movements, their capabilities, and their intentions." This task fell to the 228 overseas personnel covering Germany, Austria, Switzerland, Poland, Czechoslovakia, and Hungary.
On 18 June 1948, the National Security Council issued Directive 10/2 calling for covert action against the USSR, and granting the authority to carry out covert operations against "hostile foreign states or groups" that could, if needed, be denied by the U.S. government. To this end, the Office of Policy Coordination was created inside the new CIA. The OPC was unique: Frank Wisner, the head of the OPC, answered not to the CIA Director but to the secretaries of defense and state and the NSC, and the OPC's actions were a secret even from the head of the CIA. Most CIA stations had two station chiefs, one working for the OSO and one working for the OPC.
The early track record of the CIA was poor, with the agency unable to provide sufficient intelligence about the Soviet takeovers of Romania and Czechoslovakia, the Soviet blockade of Berlin, and the Soviet atomic bomb project. In particular, the agency failed to predict the Chinese entry into the Korean War with 300,000 troops. The famous double agent Kim Philby was the British liaison to American Central Intelligence. Through him the CIA coordinated hundreds of airdrops inside the Iron Curtain, all compromised by Philby. Arlington Hall, the nerve center of CIA cryptanalysis, was compromised by Bill Weisband, a Russian translator and Soviet spy. The CIA would reuse the tactic of parachuting plant agents behind enemy lines in China and North Korea. This too would prove fruitless.
Pain is a distressing feeling often caused by intense or damaging stimuli, such as stubbing a toe, burning a finger, putting alcohol on a cut, and bumping the "funny bone". Because it is a complex, subjective phenomenon, defining pain has been a challenge. The International Association for the Study of Pain's widely used definition states: "Pain is an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage." In medical diagnosis, pain is a symptom.
Pain is the most common reason for physician consultation in most developed countries. It is a major symptom in many medical conditions, and can interfere with a person's quality of life and general functioning. Psychological factors such as social support, hypnotic suggestion, excitement, or distraction can significantly affect pain's intensity or unpleasantness. In debates over physician-assisted suicide and euthanasia, pain has been used as an argument to permit terminally ill patients to end their lives.
In 1994, responding to the need for a more useful system for describing chronic pain, the International Association for the Study of Pain (IASP) classified pain according to specific characteristics: (1) region of the body involved (e.g. abdomen, lower limbs), (2) system whose dysfunction may be causing the pain (e.g., nervous, gastrointestinal), (3) duration and pattern of occurrence, (4) intensity and time since onset, and (5) etiology. However, this system has been criticized by Clifford J. Woolf and others as inadequate for guiding research and treatment. Woolf suggests three classes of pain: (1) nociceptive pain, (2) inflammatory pain, which is associated with tissue damage and the infiltration of immune cells, and (3) pathological pain, which is a disease state caused by damage to the nervous system or by its abnormal function (e.g. fibromyalgia, irritable bowel syndrome, tension-type headache, etc.).
Pain is usually transitory, lasting only until the noxious stimulus is removed or the underlying damage or pathology has healed, but some painful conditions, such as rheumatoid arthritis, peripheral neuropathy, cancer and idiopathic pain, may persist for years. Pain that lasts a long time is called chronic or persistent, and pain that resolves quickly is called acute. Traditionally, the distinction between acute and chronic pain has relied upon an arbitrary interval of time from onset, the two most commonly used markers being 3 months and 6 months since the onset of pain, though some theorists and researchers have placed the transition from acute to chronic pain at 12 months. Others apply acute to pain that lasts less than 30 days, chronic to pain of more than six months' duration, and subacute to pain that lasts from one to six months. A popular alternative definition of chronic pain, involving no arbitrarily fixed duration, is "pain that extends beyond the expected period of healing". Chronic pain may be classified as cancer pain or else as benign.
Nociceptive pain is caused by stimulation of peripheral nerve fibers that respond to stimuli approaching or exceeding harmful intensity (nociceptors), and may be classified according to the mode of noxious stimulation. The most common categories are "thermal" (e.g. heat or cold), "mechanical" (e.g. crushing, tearing, shearing, etc.) and "chemical" (e.g. iodine in a cut or chemicals released during inflammation). Some nociceptors respond to more than one of these modalities and are consequently designated polymodal.
Nociceptive pain may also be divided into "visceral", "deep somatic" and "superficial somatic" pain. Visceral structures are highly sensitive to stretch, ischemia and inflammation, but relatively insensitive to other stimuli that normally evoke pain in other structures, such as burning and cutting. Visceral pain is diffuse, difficult to locate and often referred to a distant, usually superficial, structure. It may be accompanied by nausea and vomiting and may be described as sickening, deep, squeezing, and dull. Deep somatic pain is initiated by stimulation of nociceptors in ligaments, tendons, bones, blood vessels, fasciae and muscles, and is dull, aching, poorly localized pain. Examples include sprains and broken bones. Superficial pain is initiated by activation of nociceptors in the skin or other superficial tissue, and is sharp, well-defined and clearly located. Examples of injuries that produce superficial somatic pain include minor wounds and minor (first degree) burns.
The prevalence of phantom pain in upper limb amputees is nearly 82%, and in lower limb amputees is 54%. One study found that eight days after amputation, 72% of patients had phantom limb pain, and six months later, 65% reported it. Some amputees experience continuous pain that varies in intensity or quality; others experience several bouts a day, or it may occur only once every week or two. It is often described as shooting, crushing, burning or cramping. If the pain is continuous for a long period, parts of the intact body may become sensitized, so that touching them evokes pain in the phantom limb, or phantom limb pain may accompany urination or defecation.
Local anesthetic injections into the nerves or sensitive areas of the stump may relieve pain for days, weeks, or sometimes permanently, despite the drug wearing off in a matter of hours; and small injections of hypertonic saline into the soft tissue between vertebrae produce local pain that radiates into the phantom limb for ten minutes or so and may be followed by hours, weeks or even longer of partial or total relief from phantom pain. Vigorous vibration or electrical stimulation of the stump, or current from electrodes surgically implanted onto the spinal cord, all produce relief in some patients.
Paraplegia, the loss of sensation and voluntary motor control after serious spinal cord damage, may be accompanied by girdle pain at the level of the spinal cord damage, visceral pain evoked by a filling bladder or bowel, or, in five to ten per cent of paraplegics, phantom body pain in areas of complete sensory loss. This phantom body pain is initially described as burning or tingling but may evolve into severe crushing or pinching pain, or the sensation of fire running down the legs or of a knife twisting in the flesh. Onset may be immediate or may not occur until years after the disabling injury. Surgical treatment rarely provides lasting relief.
People with long-term pain frequently display psychological disturbance, with elevated scores on the Minnesota Multiphasic Personality Inventory scales of hysteria, depression and hypochondriasis (the "neurotic triad"). Some investigators have argued that it is this neuroticism that causes acute pain to turn chronic, but clinical evidence points the other way, to chronic pain causing neuroticism. When long-term pain is relieved by therapeutic intervention, scores on the neurotic triad and anxiety fall, often to normal levels. Self-esteem, often low in chronic pain patients, also shows improvement once pain has resolved.
Breakthrough pain is transitory acute pain that comes on suddenly and is not alleviated by the patient's normal pain management. It is common in cancer patients who often have background pain that is generally well-controlled by medications, but who also sometimes experience bouts of severe pain that from time to time "breaks through" the medication. The characteristics of breakthrough cancer pain vary from person to person and according to the cause. Management of breakthrough pain can entail intensive use of opioids, including fentanyl.
Although unpleasantness is an essential part of the IASP definition of pain, it is possible to induce a state described as intense pain devoid of unpleasantness in some patients, with morphine injection or psychosurgery. Such patients report that they have pain but are not bothered by it; they recognize the sensation of pain but suffer little, or not at all. Indifference to pain can also rarely be present from birth; these people have normal nerves on medical investigations, and find pain unpleasant, but do not avoid repetition of the pain stimulus.
A much smaller number of people are insensitive to pain due to an inborn abnormality of the nervous system, known as "congenital insensitivity to pain". Children with this condition incur carelessly repeated damage to their tongues, eyes, joints, skin, and muscles. Some die before adulthood, and others have a reduced life expectancy. Most people with congenital insensitivity to pain have one of five hereditary sensory and autonomic neuropathies (which include familial dysautonomia and congenital insensitivity to pain with anhidrosis). These conditions feature decreased sensitivity to pain together with other neurological abnormalities, particularly of the autonomic nervous system. A very rare syndrome with isolated congenital insensitivity to pain has been linked with mutations in the SCN9A gene, which codes for a sodium channel (Nav1.7) necessary in conducting pain nerve stimuli.
In 1644, René Descartes theorized that pain was a disturbance that passed down along nerve fibers until the disturbance reached the brain, a development that transformed the perception of pain from a spiritual, mystical experience to a physical, mechanical sensation. Descartes's work, along with Avicenna's, prefigured the 19th-century development of specificity theory. Specificity theory saw pain as "a specific sensation, with its own sensory apparatus independent of touch and other senses". Another theory that came to prominence in the 18th and 19th centuries was intensive theory, which conceived of pain not as a unique sensory modality, but an emotional state produced by stronger than normal stimuli such as intense light, pressure or temperature. By the mid-1890s, specificity was backed mostly by physiologists and physicians, and the intensive theory was mostly backed by psychologists. However, after a series of clinical observations by Henry Head and experiments by Max von Frey, the psychologists migrated to specificity almost en masse, and by century's end, most textbooks on physiology and psychology were presenting pain specificity as fact.
In 1955, DC Sinclair and G Weddell developed peripheral pattern theory, based on a 1934 suggestion by John Paul Nafe. They proposed that all skin fiber endings (with the exception of those innervating hair cells) are identical, and that pain is produced by intense stimulation of these fibers. Another 20th-century theory was gate control theory, introduced by Ronald Melzack and Patrick Wall in the 1965 Science article "Pain Mechanisms: A New Theory". The authors proposed that both thin (pain) and large diameter (touch, pressure, vibration) nerve fibers carry information from the site of injury to two destinations in the dorsal horn of the spinal cord, and that the more large fiber activity relative to thin fiber activity at the inhibitory cell, the less pain is felt. Both peripheral pattern theory and gate control theory have been superseded by more modern theories of pain.
In 1968 Ronald Melzack and Kenneth Casey described pain in terms of its three dimensions: "sensory-discriminative" (sense of the intensity, location, quality and duration of the pain), "affective-motivational" (unpleasantness and urge to escape the unpleasantness), and "cognitive-evaluative" (cognitions such as appraisal, cultural values, distraction and hypnotic suggestion). They theorized that pain intensity (the sensory discriminative dimension) and unpleasantness (the affective-motivational dimension) are not simply determined by the magnitude of the painful stimulus, but "higher" cognitive activities can influence perceived intensity and unpleasantness. Cognitive activities "may affect both sensory and affective experience or they may modify primarily the affective-motivational dimension. Thus, excitement in games or war appears to block both dimensions of pain, while suggestion and placebos may modulate the affective-motivational dimension and leave the sensory-discriminative dimension relatively undisturbed." (p. 432) The paper ends with a call to action: "Pain can be treated not only by trying to cut down the sensory input by anesthetic block, surgical intervention and the like, but also by influencing the motivational-affective and cognitive factors as well." (p. 435)
Wilhelm Erb's (1874) "intensive" theory, that a pain signal can be generated by intense enough stimulation of any sensory receptor, has been soundly disproved. Some sensory fibers do not differentiate between noxious and non-noxious stimuli, while others, nociceptors, respond only to noxious, high intensity stimuli. At the peripheral end of the nociceptor, noxious stimuli generate currents that, above a given threshold, begin to send signals along the nerve fiber to the spinal cord. The "specificity" (whether it responds to thermal, chemical or mechanical features of its environment) of a nociceptor is determined by which ion channels it expresses at its peripheral end. Dozens of different types of nociceptor ion channels have so far been identified, and their exact functions are still being determined.
The pain signal travels from the periphery to the spinal cord along an A-delta or C fiber. Because the A-delta fiber is thicker than the C fiber, and is thinly sheathed in an electrically insulating material (myelin), it carries its signal faster (5–30 m/s) than the unmyelinated C fiber (0.5–2 m/s). Pain evoked by the (faster) A-delta fibers is described as sharp and is felt first. This is followed by a duller pain, often described as burning, carried by the C fibers. These first order neurons enter the spinal cord via Lissauer's tract.
Spinal cord fibers dedicated to carrying A-delta fiber pain signals, and others that carry both A-delta and C fiber pain signals up the spinal cord to the thalamus in the brain have been identified. Other spinal cord fibers, known as wide dynamic range neurons, respond to A-delta and C fibers, but also to the large A-beta fibers that carry touch, pressure and vibration signals. Pain-related activity in the thalamus spreads to the insular cortex (thought to embody, among other things, the feeling that distinguishes pain from other homeostatic emotions such as itch and nausea) and anterior cingulate cortex (thought to embody, among other things, the motivational element of pain); and pain that is distinctly located also activates the primary and secondary somatosensory cortices. Melzack and Casey's 1968 picture of the dimensions of pain is as influential today as ever, firmly framing theory and guiding research in the functional neuroanatomy and psychology of pain.
In his book, The Greatest Show on Earth: The Evidence for Evolution, biologist Richard Dawkins grapples with the question of why pain has to be so very painful. He describes the alternative as a simple, mental raising of a "red flag". To argue why that red flag might be insufficient, Dawkins explains that drives must compete with each other within living beings. The most fit creature would be the one whose pains are well balanced. Those pains which mean certain death when ignored will become the most powerfully felt. The relative intensities of pain, then, may resemble the relative importance of that risk to our ancestors (lack of food, too much cold, or serious injuries are felt as agony, whereas minor damage is felt as mere discomfort). This resemblance will not be perfect, however, because natural selection can be a poor designer. The result is often glitches in animals, including supernormal stimuli. Such glitches help explain pains which are not, or at least no longer directly adaptive (e.g. perhaps some forms of toothache, or injury to fingernails).
Differences in pain perception and tolerance thresholds are associated with, among other factors, ethnicity, genetics, and sex. People of Mediterranean origin report as painful some radiant heat intensities that northern Europeans describe as nonpainful, and Italian women tolerate less intense electric shock than Jewish or Native American women. Some individuals in all cultures have significantly higher than normal pain perception and tolerance thresholds. For instance, patients who experience painless heart attacks have higher pain thresholds for electric shock, muscle cramp and heat.
A person's self-report is the most reliable measure of pain, with health care professionals tending to underestimate severity. A definition of pain widely employed in nursing, emphasizing its subjective nature and the importance of believing patient reports, was introduced by Margo McCaffery in 1968: "Pain is whatever the experiencing person says it is, existing whenever he says it does". To assess intensity, the patient may be asked to locate their pain on a scale of 0 to 10, with 0 being no pain at all, and 10 the worst pain they have ever felt. Quality can be established by having the patient complete the McGill Pain Questionnaire indicating which words best describe their pain.
The Multidimensional Pain Inventory (MPI) is a questionnaire designed to assess the psychosocial state of a person with chronic pain. Analysis of MPI results by Turk and Rudy (1988) found three classes of chronic pain patient: "(a) dysfunctional, people who perceived the severity of their pain to be high, reported that pain interfered with much of their lives, reported a higher degree of psychological distress caused by pain, and reported low levels of activity; (b) interpersonally distressed, people with a common perception that significant others were not very supportive of their pain problems; and (c) adaptive copers, patients who reported high levels of social support, relatively low levels of pain and perceived interference, and relatively high levels of activity." Combining the MPI characterization of the person with their IASP five-category pain profile is recommended for deriving the most useful case description.
When a person is non-verbal and cannot self-report pain, observation becomes critical, and specific behaviors can be monitored as pain indicators. Behaviors such as facial grimacing and guarding can indicate pain, as can an increase or decrease in vocalizations, changes in routine behavior patterns, and mental status changes. Patients experiencing pain may exhibit withdrawn social behavior and possibly experience a decreased appetite and decreased nutritional intake. A change in condition that deviates from baseline, such as moaning with movement or when manipulating a body part, and limited range of motion are also potential pain indicators. In patients who possess language but are incapable of expressing themselves effectively, such as those with dementia, an increase in confusion or display of aggressive behaviors or agitation may signal that discomfort exists, and further assessment is necessary.
The experience of pain has many cultural dimensions. For instance, the way in which one experiences and responds to pain is related to sociocultural characteristics, such as gender, ethnicity, and age. An aging adult may not respond to pain in the way that a younger person would. Their ability to recognize pain may be blunted by illness or the use of multiple prescription drugs. Depression may also keep the older adult from reporting they are in pain. The older adult may also quit doing activities they love because it hurts too much. Decline in self-care activities (dressing, grooming, walking, etc.) may also be indicators that the older adult is experiencing pain. The older adult may refrain from reporting pain because they are afraid they will have to have surgery or will be put on a drug they might become addicted to. They may not want others to see them as weak, or may feel there is something impolite or shameful in complaining about pain, or they may feel the pain is deserved punishment for past transgressions.
Cultural barriers can also keep a person from telling someone they are in pain. Religious beliefs may prevent the individual from seeking help. They may feel certain pain treatment is against their religion. They may not report pain because they feel it is a sign that death is near. Many people fear the stigma of addiction and avoid pain treatment so as not to be prescribed potentially addicting drugs. Many Asians do not want to lose respect in society by admitting they are in pain and need help, believing the pain should be borne in silence, while other cultures feel they should report pain right away and get immediate relief. Gender can also be a factor in reporting pain. Sexual differences can be the result of social and cultural expectations, with women expected to be emotional and show pain and men stoic, keeping pain to themselves.
The International Association for the Study of Pain advocates that the relief of pain should be recognized as a human right, that chronic pain should be considered a disease in its own right, and that pain medicine should have the full status of a specialty. It is a specialty only in China and Australia at this time. Elsewhere, pain medicine is a subspecialty under disciplines such as anesthesiology, physiatry, neurology, palliative medicine and psychiatry. In 2011, Human Rights Watch alerted that tens of millions of people worldwide are still denied access to inexpensive medications for severe pain.
Sugar taken orally reduces the total crying time but not the duration of the first cry in newborns undergoing a painful procedure (a single lancing of the heel). It does not moderate the effect of pain on heart rate and a recent single study found that sugar did not significantly affect pain-related electrical activity in the brains of newborns one second after the heel lance procedure. Sweet oral liquid moderately reduces the incidence and duration of crying caused by immunization injection in children between one and twelve months of age.
A number of meta-analyses have found clinical hypnosis to be effective in controlling pain associated with diagnostic and surgical procedures in both adults and children, as well as pain associated with cancer and childbirth. A 2007 review of 13 studies found evidence for the efficacy of hypnosis in the reduction of chronic pain in some conditions, though the number of patients enrolled in the studies was low, bringing up issues of power to detect group differences, and most lacked credible controls for placebo and/or expectation. The authors concluded that "although the findings provide support for the general applicability of hypnosis in the treatment of chronic pain, considerably more research will be needed to fully determine the effects of hypnosis for different chronic-pain conditions."
Pain is the most common reason for people to use complementary and alternative medicine. An analysis of the 13 highest-quality studies of pain treatment with acupuncture, published in January 2009, concluded there is little difference in the effect of real, sham and no acupuncture. However, other reviews have found benefit. Additionally, there is tentative evidence for a few herbal medicines. There is interest in the relationship between vitamin D and pain, but the evidence so far from controlled trials for such a relationship, other than in osteomalacia, is unconvincing.
Physical pain is an important political topic in relation to various issues, including pain management policy, drug control, animal rights or animal welfare, torture, and pain compliance. In various contexts, the deliberate infliction of pain in the form of corporal punishment is used as retribution for an offence, or for the purpose of disciplining or reforming a wrongdoer, or to deter attitudes or behaviour deemed unacceptable. In some cultures, extreme practices such as mortification of the flesh or painful rites of passage are highly regarded.
The most reliable method for assessing pain in most humans is by asking a question: a person may report pain that cannot be detected by any known physiological measure. However, like infants (Latin infans meaning "unable to speak"), animals cannot answer questions about whether they feel pain; thus the defining criterion for pain in humans cannot be applied to them. Philosophers and scientists have responded to this difficulty in a variety of ways. René Descartes for example argued that animals lack consciousness and therefore do not experience pain and suffering in the way that humans do. Bernard Rollin of Colorado State University, the principal author of two U.S. federal laws regulating pain relief for animals, writes that researchers remained unsure into the 1980s as to whether animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal pain. In his interactions with scientists and other veterinarians, he was regularly asked to "prove" that animals are conscious, and to provide "scientifically acceptable" grounds for claiming that they feel pain. Carbone writes that the view that animals feel pain differently is now a minority view. Academic reviews of the topic are more equivocal, noting that although the argument that animals have at least simple conscious thoughts and feelings has strong support, some critics continue to question how reliably animal mental states can be determined. The ability of invertebrate species of animals, such as insects, to feel pain and suffering is also unclear.
The presence of pain in an animal cannot be known for certain, but it can be inferred through physical and behavioral reactions. Specialists currently believe that all vertebrates can feel pain, and that certain invertebrates, like the octopus, might too. As for other animals, plants, or other entities, their ability to feel physical pain is at present a question beyond scientific reach, since no mechanism is known by which they could have such a feeling. In particular, there are no known nociceptors in groups such as plants, fungi, and most insects, although they have been identified in some insects such as fruit flies.
A database management system (DBMS) is a computer software application that interacts with the user, other applications, and the database itself to capture and analyze data. A general-purpose DBMS is designed to allow the definition, creation, querying, update, and administration of databases. Well-known DBMSs include MySQL, PostgreSQL, Microsoft SQL Server, Oracle, Sybase, SAP HANA, and IBM DB2. A database is not generally portable across different DBMSs, but different DBMSs can interoperate by using standards such as SQL and ODBC or JDBC to allow a single application to work with more than one DBMS. Database management systems are often classified according to the database model that they support; the most popular database systems since the 1980s have all supported the relational model as represented by the SQL language. Sometimes a DBMS is loosely referred to as a 'database'.
Formally, a "database" refers to a set of related data and the way it is organized. Access to these data is usually provided by a "database management system" (DBMS) consisting of an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
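The entry, storage and retrieval functions described above can be sketched with Python's standard-library `sqlite3` module (SQLite is a small embedded DBMS; the table and column names here are purely illustrative):

```python
import sqlite3

# An in-memory database; a real deployment would use a file or a server DBMS.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Definition: describe how the data is organized (the schema).
cur.execute("CREATE TABLE books (title TEXT, year INTEGER)")

# Entry: store some rows, using parameter substitution.
cur.executemany("INSERT INTO books VALUES (?, ?)",
                [("Dune", 1965), ("Neuromancer", 1984)])
conn.commit()

# Retrieval: the DBMS finds matching data; the application never touches files.
cur.execute("SELECT title FROM books WHERE year > 1970")
print(cur.fetchall())  # [('Neuromancer',)]
```

The same SQL statements would work largely unchanged against a server DBMS such as PostgreSQL or MySQL through their own Python drivers, which is the interoperability that standards like SQL provide.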
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. RAID is used for recovery of data if any of the disks fail. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions. Before the inception of Structured Query Language (SQL), data recovered from databases was disparate, redundant and disorderly, since there was no proper method to fetch it and arrange it in a concrete structure.
A DBMS has evolved into a complex software system, and its development typically requires thousands of person-years of effort. Some general-purpose DBMSs such as Adabas, Oracle and DB2 have been undergoing upgrades since the 1970s. General-purpose DBMSs aim to meet the needs of as many applications as possible, which adds to the complexity. However, because their development cost can be spread over a large number of users, they are often the most cost-effective approach. Still, a general-purpose DBMS is not always the optimal solution: in some cases it may introduce unnecessary overhead, so there are many examples of systems that use special-purpose databases. A common example is an email system that performs many of the functions of a general-purpose DBMS, such as the insertion and deletion of messages composed of various items of data or associating messages with a particular email address; but these functions are limited to what is required to handle email and don't provide the user with all of the functionality that would be available using a general-purpose DBMS.
Many other databases have application software that accesses the database on behalf of end-users, without exposing the DBMS interface directly. Application programmers may use a wire protocol directly, or more likely through an application programming interface. Database designers and database administrators interact with the DBMS through dedicated interfaces to build and maintain the applications' databases, and thus need some more knowledge and understanding about how DBMSs operate and the DBMSs' external interfaces and tuning parameters.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from the navigational tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and as of 2015 they remain dominant: IBM DB2, Oracle, MySQL and Microsoft SQL Server are the top DBMSs. The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the "Database Task Group" within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971 the Database Task Group delivered their standard, which generally became known as the "CODASYL approach", and soon a number of commercial products based on this approach entered the market.
IBM also had their own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed, and Bachman's 1973 Turing Award presentation was The Programmer as Navigator. IMS is classified as a hierarchical database. IDMS and Cincom Systems' TOTAL database are classified as network databases. IMS remains in use as of 2014.
In his 1970 paper, Codd described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to use a "table" of fixed-length records, with each table used for a different type of entity. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables (or relations), with optional elements being moved out of the main table to where they would take up room only if needed. Data may be freely inserted, deleted and edited in these tables, with the DBMS doing whatever maintenance is needed to present a table view to the application/user.
The relational model also allowed the content of the database to evolve without constant rewriting of links and pointers. The relational part comes from entities referencing other entities in what is known as one-to-many relationship, like a traditional hierarchical model, and many-to-many relationship, like a navigational (network) model. Thus, a relational model can express both hierarchical and navigational models, as well as its native tabular model, allowing for pure or combined modeling in terms of these three models, as the application requires.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach all of this data would be placed in a single record, and unused items would simply not be placed in the database. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
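That normalization can be sketched in SQL via Python's `sqlite3` module (the schema and the sample logins are illustrative, not from any particular system): the optional address and phone tables receive a row only when the user actually supplied that data, so absent items occupy no space.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table per type of entity, instead of one large record per user.
cur.execute("CREATE TABLE users     (login TEXT PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE addresses (login TEXT, address TEXT)")
cur.execute("CREATE TABLE phones    (login TEXT, phone TEXT)")

cur.execute("INSERT INTO users VALUES ('ada', 'Ada Lovelace')")
cur.execute("INSERT INTO users VALUES ('cbb', 'Charles Babbage')")

# Only 'ada' supplied an address; no placeholder row is stored for 'cbb'.
cur.execute("INSERT INTO addresses VALUES ('ada', '12 St James Square')")

cur.execute("SELECT COUNT(*) FROM addresses")
print(cur.fetchone()[0])  # 1
```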
Linking the information back together is the key to this system. In the relational model, some bit of information was used as a "key", uniquely defining a particular record. When information was being collected about a user, information stored in the optional tables would be found by searching for this key. For instance, if the login name of a user is unique, addresses and phone numbers for that user would be recorded with the login name as its key. This simple "re-linking" of related data back into a single collection is something that traditional computer languages are not designed for.
Just as the navigational approach would require programs to loop in order to collect records, the relational approach would require loops to collect information about any one record. Codd's solution to the necessary looping was a set-oriented language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting, updating etc.) as well as providing a simple system for finding and returning sets of data in a single operation.
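The set-oriented style can be seen in a single SQL join, which re-links the normalized tables by their shared key and returns the whole matching set in one declarative statement, with no explicit loop in the application (a minimal sketch with `sqlite3`; the schema and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users  (login TEXT PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE phones (login TEXT, phone TEXT)")
cur.executemany("INSERT INTO users VALUES (?, ?)",
                [("ada", "Ada Lovelace"), ("cbb", "Charles Babbage")])
cur.executemany("INSERT INTO phones VALUES (?, ?)",
                [("ada", "555-0100"), ("ada", "555-0101")])

# One set-oriented statement: the DBMS does the looping internally,
# matching rows on the shared key (login).
cur.execute("""SELECT u.name, p.phone
               FROM users u JOIN phones p ON u.login = p.login
               ORDER BY p.phone""")
print(cur.fetchall())
# [('Ada Lovelace', '555-0100'), ('Ada Lovelace', '555-0101')]
```

A navigational system would instead require the application to fetch the user record, then follow pointers record by record to collect each phone number.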
Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).
IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/75, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language, SQL, had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).
The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top-selling software titles in the 1980s and early 1990s.
The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be relations to objects and their attributes and not to individual fields. The term "object-relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object-relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as alternative to purely relational SQL. On the programming side, libraries known as object-relational mappings (ORMs) attempt to solve the same problem.
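What an object-relational mapping does can be sketched in miniature. Real ORMs (e.g. SQLAlchemy or Hibernate) automate this translation; the `Person` class and the `save`/`load` helpers below are hypothetical and hand-written purely to show the object-to-row round trip:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    phone: str
    age: int

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, phone TEXT, age INTEGER)")

def save(p: Person) -> None:
    # Object -> row: each attribute becomes a column value.
    conn.execute("INSERT INTO person VALUES (?, ?, ?)", (p.name, p.phone, p.age))

def load(name: str) -> Person:
    # Row -> object: columns become attributes again.
    row = conn.execute("SELECT name, phone, age FROM person WHERE name = ?",
                       (name,)).fetchone()
    return Person(*row)

save(Person("Ada", "555-0199", 36))
ada = load("Ada")
print(ada.age)  # 36
```

The boilerplate in `save` and `load` is precisely the translation cost that the "impedance mismatch" refers to, and that ORMs exist to hide.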
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in enterprise database management, where XML is being used as the machine-to-machine data interoperability standard. XML database management systems include the commercial software MarkLogic and Oracle Berkeley DB XML, and the free-to-use Clusterpoint Distributed XML/JSON Database. All are enterprise software database platforms that support industry-standard ACID-compliant transaction processing with strong database consistency characteristics and a high level of database security.
In recent years, there has been strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem, it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases use what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
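Eventual consistency can be illustrated with a toy simulation. This sketch models no particular NoSQL product; it uses a simple last-writer-wins rule (timestamps are assumed to be comparable across replicas) to show replicas that temporarily disagree and then converge once they exchange state:

```python
class Replica:
    def __init__(self):
        self.store = {}  # key -> (timestamp, value)

    def write(self, key, value, ts):
        # Last-writer-wins: keep the write with the newest timestamp.
        cur = self.store.get(key)
        if cur is None or ts > cur[0]:
            self.store[key] = (ts, value)

    def merge(self, other):
        # Anti-entropy exchange: adopt the newer write for every key.
        for key, (ts, value) in other.store.items():
            self.write(key, value, ts)

a, b = Replica(), Replica()
a.write("x", "red", ts=1)   # a client writes to replica a
b.write("x", "blue", ts=2)  # a later write lands on replica b

# Before synchronization the replicas disagree (reduced consistency)...
assert a.store["x"][1] != b.store["x"][1]

# ...but after exchanging state they converge on the newest value.
a.merge(b)
b.merge(a)
print(a.store["x"][1], b.store["x"][1])  # blue blue
```

Both replicas stayed available for writes during the disagreement; consistency was only restored eventually, which is exactly the trade-off the CAP theorem describes.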
The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity-relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organisation, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modelling notation used to express that design.)
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like. This is often called physical database design. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
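Data independence at the physical level can be shown with a small sketch (the orders table is hypothetical): adding an index is a physical design decision that changes the access path, but the application's query and its results are untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "acme"), (2, "globex"), (3, "acme")])

query = "SELECT id FROM orders WHERE customer = 'acme' ORDER BY id"
before = cur.execute(query).fetchall()

# Physical design decision: index the column used for lookups.
cur.execute("CREATE INDEX idx_orders_customer ON orders(customer)")

after = cur.execute(query).fetchall()
print(before == after)  # True: same results, only the access path changed
```

The query text never mentions the index; the DBMS decides whether to use it, which is what makes the optimization invisible to applications.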
While there is typically only one conceptual (or logical) and physical (or internal) view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are the interest of the human resources department. Thus different departments need different views of the company's database.
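An external view like the financial department's can be expressed directly as an SQL view. In this sketch the employee table and its columns are hypothetical; the view exposes only the payroll-related subset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, "
            "salary REAL, medical_notes TEXT)")
cur.execute("INSERT INTO employee VALUES (1, 'Ada', 90000.0, 'confidential')")

# The external view presents a business-oriented subset of the data.
cur.execute("CREATE VIEW payroll AS SELECT id, name, salary FROM employee")

cols = [d[0] for d in cur.execute("SELECT * FROM payroll").description]
print(cols)  # ['id', 'name', 'salary']: medical data is not visible
```

Other departments would get their own views over the same underlying table, each seeing only what its work requires.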
The conceptual view provides a level of indirection between internal and external. On one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structures.
Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Putting data into permanent storage is generally the responsibility of the database engine, a.k.a. the "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often utilizing the operating system's file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the way the data look at the conceptual and external levels, but in ways that attempt to optimize the reconstruction of these levels when needed by users and programs, as well as the computation of additional types of needed information from the data (e.g., when querying the database).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or utilizing specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by specially authorized personnel (authorized by the database owner) who use dedicated protected security DBMS interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in databases and other systems. Each transaction has well-defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).
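Transaction boundaries and crash recovery can be sketched with `sqlite3`, whose connection object acts as a context manager that commits on success and rolls back on an exception. The two-account transfer below is a hypothetical example: either both updates apply, or neither does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO account VALUES (1, 100.0), (2, 50.0)")
conn.commit()

try:
    with conn:  # the with-block delimits the transaction's boundaries
        conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
        # Simulated crash between the debit and the matching credit:
        raise RuntimeError("crash")
        conn.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")
except RuntimeError:
    pass  # the with-block rolled the partial debit back

balances = conn.execute("SELECT balance FROM account ORDER BY id").fetchall()
print(balances)  # [(100.0,), (50.0,)]: the lone debit did not survive
```

Because the failure occurred inside the transaction's boundaries, the database returned to its state before the unit of work began, which is the integrity guarantee the text describes.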
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership, or TCO), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should keep (if possible) the database-related applications (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may also be desirable to preserve some aspects of the internal architecture level. A complex or large database migration may be a complicated and costly (one-time) project in itself, which should be factored into the decision to migrate, despite the fact that tools may exist to help migration between specific DBMSs. Typically a DBMS vendor provides tools to help import databases from other popular DBMSs.